Methods and applications of distributed and decentralized Model Predictive Control


Politecnico di Milano
Dipartimento di Elettronica, Informazione e Bioingegneria
Doctoral Programme in Information Technology

Methods and applications of distributed and decentralized Model Predictive Control

Doctoral Dissertation of: Giulio Betti

Supervisor: Prof. Riccardo Scattolini
Co-supervisor: Dr. Marcello Farina
Tutor: Prof. Maria Prandini
The Chair of the Doctoral Program: Prof. Carlo Fiorini

2013, XXVI Cycle


Giulio Betti
Politecnico di Milano
Dipartimento di Elettronica, Informazione e Bioingegneria
giulio.betti.job@gmail.com


"I'm back."
Michael Jeffrey Jordan


Abstract

This thesis deals with the theoretical development of distributed and decentralized control algorithms based on Model Predictive Control (MPC) for linear systems subject to constraints on inputs and states. In all the presented techniques, the basic idea consists in considering the coupling terms among the subsystems as disturbances to be rejected. Part of this disturbance is assumed to be known over the whole prediction horizon, while the remaining part is considered unknown but bounded. To reject this second term, a robust approach based on polytopic invariant sets is implemented. A regulation problem for distributed control is initially described, together with some practical solutions needed to deal with implementation issues. A continuous-time version of the proposed approach is also provided. Secondly, two different distributed solutions to the tracking problem are given. The first one, developed for tracking piecewise constant set-points, uses a fictitious reference signal to guarantee feasibility. The second one can instead be used for tracking constant set-points and relies on the description of the dynamic system in the so-called velocity-form, which allows one to insert an integral action in the closed loop. The properties of systems described in velocity-form are then investigated for the centralized case. Finally, the results derived for centralized systems in velocity-form are used to develop a completely decentralized approach with integral action for tracking piecewise constant references. Several simulation examples are reported to show the performance of the proposed algorithms.


Summary and publications

In recent years, process plants, electrical, communication and traffic networks, and manufacturing systems have been characterized by increasing complexity. Usually, they turn out to be composed of a large number of relatively small or medium-scale subsystems interacting via inputs, states or outputs. The design of a centralized controller for the whole large-scale system is often a difficult challenge due to possible limitations, for instance, in computing capabilities or communication bandwidth. Reliability and robustness of the overall control system can represent additional reasons for avoiding centralized controllers. Motivated by these considerations, remarkable efforts have been devoted to the research field of hierarchical, decentralized and distributed control. Among all the possible solutions, those based on Model Predictive Control (MPC) appear particularly interesting. Several reasons support this claim: first of all, MPC is a multivariable optimal control technique and, among the advanced control methods, it is probably the only one really adopted by the industrial world and able to cope with operational constraints on inputs, states and outputs. Secondly, in recent years fundamental properties such as stability of the closed-loop system and robustness with respect to several classes of external disturbances have been proved for many different MPC formulations. Lastly, when controlling large-scale systems in a distributed or decentralized framework, the values of inputs, states and outputs are explicitly computed over the whole prediction horizon at each sampling time by a local controller designed with MPC, and they can be used as

information to be transmitted to other local controllers to coordinate their actions. This data transmission can greatly simplify the design of a distributed control system and can allow one to obtain performance close to that of a centralized controller.

In this thesis, we present a number of distributed control algorithms based on the Distributed Predictive Control (DPC) approach. They are designed for linear systems subject to constraints on inputs and states, described in state-space form and decomposable into several non-overlapping subsystems. The main idea of DPC is that, at every sampling time, each subsystem transmits to its neighbors the reference trajectories of its inputs and states over the whole prediction horizon. Moreover, by adding proper additional constraints to the MPC formulation, all the local controllers are able to guarantee that the real values of their inputs and states lie in a specified invariant neighborhood of the corresponding reference trajectories. In this way, each subsystem has to solve an MPC problem where the reference trajectories received from the other controllers represent a disturbance known over the whole prediction horizon, while the differences between the reference and the real values of its neighbors' inputs and states can be seen as an unknown bounded disturbance to be rejected. To this end, a robust tube-based MPC formulation is adopted and implemented using the theory of polytopic invariant sets. This thesis also presents a completely decentralized version of DPC, named DePC (Decentralized Predictive Control), for strongly decoupled systems. In this case, no transmission of information is required, as all the interactions via inputs and states are seen as unknown disturbances to be rejected.

The thesis is structured as follows. Chapter 1 provides an overview of the most common centralized MPC algorithms for regulation, tracking and disturbance rejection, as well as a short description of the main solutions for distributed and decentralized control, with particular attention to those based on MPC. In Chapter 2, the basic formulation of DPC for regulation of discrete-time systems is presented [BC1]. Efficient techniques for the computation of polytopic robust invariant sets, for the initialization of reference trajectories and for the online handling of possible large and unexpected disturbances are also shown [J4], in order to provide both the theoretical results and some practical hints useful for real industrial applications. Since the discrete-time domain does not allow one to consider the process inter-sampling behavior in the MPC optimization problem,

in Chapter 3 a continuous-time version of DPC is illustrated [J3]. Chapters 4 and 5 present two extensions of DPC for the solution of the tracking problem: the first one, see [C3, J3], can be used to track piecewise constant targets. The main improvement with respect to the standard DPC formulation consists in including, among the optimization variables of the MPC problem, the value of the reference point that each subsystem really tracks at each sampling time. An additional term in the cost function penalizes the distance of this extra optimization variable from the desired external set-point. The second solution [C1] is based on inserting an integral action in the closed loop by rewriting the initial system in the so-called velocity-form. This formulation guarantees rejection of constant disturbances and can be used to track constant targets: in fact, recursive feasibility cannot be proved if the value of the reference signal is changed. In Chapter 6, to overcome the limitation of being able to manage only constant references for systems in velocity-form, a thorough study of their properties for centralized control is described for both nominal [C2] and disturbed [J1] systems. The obtained results allow one to use the velocity-form also in the presence of varying set-points. Once again, time-varying external references are handled by adding as an optimization variable the set-point really tracked at each time instant. The cost function term weighting its distance from the external one guarantees asymptotic convergence of the former to the latter. The outcomes of the previous developments for centralized control of systems in velocity-form have been exploited to develop a decentralized control method for tracking, called DePC [C4] and presented in Chapter 7. It is able to manage piecewise constant set-points and to reject external disturbances, asymptotically canceling their effects when they are constant. Finally, some conclusions are drawn in Chapter 8.

List of publications

International journals

J1. G. Betti, M. Farina, and R. Scattolini. A robust MPC algorithm for offset-free tracking of constant reference signals. IEEE Transactions on Automatic Control, 58(9).

J2. M. Farina, G. Betti, L. Giulioni, and R. Scattolini. An approach to distributed predictive control for tracking: theory and applications. IEEE Transactions on Control Systems Technology, in press.

J3. M. Farina, G. Betti, and R. Scattolini. Distributed predictive control of continuous-time systems. Systems and Control Letters (submitted).

J4. G. Betti, M. Farina, and R. Scattolini. Implementation issues and applications of a distributed predictive control algorithm. Journal of Process Control (submitted).

International conference proceedings

C1. G. Betti, M. Farina, and R. Scattolini. Distributed predictive control for tracking constant references. In American Control Conference (ACC), 2012.

C2. G. Betti, M. Farina, and R. Scattolini. An MPC algorithm for offset-free tracking of constant reference signals. In 51st IEEE Conference on Decision and Control (CDC), 2012.

C3. M. Farina, G. Betti, and R. Scattolini. A solution to the tracking problem using distributed predictive control. In European Control Conference (ECC), 2013.

C4. G. Betti, M. Farina, and R. Scattolini. Decentralized predictive control for tracking constant references. In 52nd IEEE Conference on Decision and Control (CDC), 2013.

Book chapters

BC1. G. Betti, M. Farina, and R. Scattolini. Distributed MPC: a noncooperative approach based on robustness concepts. In J.M. Maestre and R.R. Negenborn, editors, Distributed Model Predictive Control Made Easy, volume 69 of Intelligent Systems, Control and Automation: Science and Engineering. Springer Netherlands.

Contents

Part I: Introduction

1 Introduction
   Notation
   Model Predictive Control
      Standard MPC formulation
      Robust tube-based MPC
      MPC for tracking piecewise constant references
   Invariant sets
      Polytopic approximation of the minimal robust invariant set
      Maximal output admissible set
   Distributed and decentralized MPC
      Decentralized control based on MPC
      Distributed control based on MPC

Part II: Solutions to the regulation problem

2 Distributed Predictive Control
   Statement of the problem and main assumption
   Description of the approach
      The online phase: the i-DPC optimization problems
      Convergence results and properties of DPC
      Properties of DPC
   Implementation issues
      The discretization method
      Computation of the state-feedback gain and of the weighting matrices
      Computation of the RPI sets and of the terminal sets
      Generation of the reference trajectories (for systems without coupling constraints)
   Simulation examples
      Temperature control
      Four-tanks system
      Cascade coupled flotation tanks
      Reactor-separator process
   Conclusions
   Appendix
      Proof of Theorem
      Proof of Algorithm

3 Continuous-time DPC
   Partitioned continuous-time systems
   DPC for continuous-time systems
      Models: perturbed, nominal and auxiliary
      Statement of the i-DPC problems
      Properties of DPC
   Tuning of the design parameters
      Choice of the control gains K^c_i, K_i
      Choice of Q_i, R_i, P_i, X^F_i
      Choice of Q̂_i, R̂_i, P̂_i, λ
   Simulation example
   Conclusions
   Appendix
      Recursive feasibility
      The collective problem
      Proof of convergence

Part III: Solutions to the tracking problem

4 DPC for tracking
   Interacting subsystems
   Control system architecture
      The reference output trajectory management layer
      The reference state and input trajectory layer
      The robust MPC layer
   The distributed predictive control algorithm
      Off-line design
      On-line design
   Simulation examples
   Conclusions
   Appendix
      Proof of recursive feasibility and convergence of the reference management problem
      Proof of recursive feasibility of the i-DPC problem
      Proof of convergence for the robust MPC layer

5 DPC for systems in velocity-form
   The system
      System under control
      Partitioned system
      System with integrators
   The DPC algorithm for tracking
      Nominal models and control law
      Input and state constraints
      i-DPC problems
   Convergence results
   Implementation issues
   Simulation example
   Conclusions
   Appendix
      Proof of Proposition

6 Centralized MPC with integral action
   MPC for offset-free tracking: nominal systems
      Statement of the problem
      The velocity form
      The maximal output admissible set
      The MPC problem
   MPC for offset-free tracking: disturbed systems
      Statement of the problem and transformation in velocity-form
      The maximal output admissible set computed using tightened constraints
      The MPC problem
   Simulation examples
   Conclusions
   Appendix
      Proof of Proposition 6.2 and of Proposition
      Proof of Theorem 6.1 and of Theorem
      Proof of Corollary
      Proof of Corollary
      Proof of Corollary
      Proof of Proposition

7 DePC for tracking
   The dynamical system
      System under control
      Partitioned system
      Subsystems with integrators
   The DePC algorithm for tracking
      Nominal models and control law
      Constraints for inputs and states
      Computation of the terminal set
      i-DePC problems
   Convergence results
   Example
   Conclusions

Part IV: Conclusions

8 Conclusions

Bibliography

Part I

Introduction


1 Introduction

This Chapter introduces the basic ideas underlying robust MPC based on polytopic invariant sets, together with a short review of distributed and decentralized predictive control. The goal is to provide in a simple and compact form the main techniques used in the following chapters, as well as to clearly describe the scientific context of this thesis. To improve the readability of the thesis, all the information about the adopted notation is listed at the beginning of this Chapter.

1.1 Notation

The symbols and recurring terms used throughout the thesis are listed below.

The discrete-time index is denoted by k, and the dependence on time of the variables of discrete-time systems is denoted by a subscript, e.g., x_k, u_{k+1}. The continuous-time index is denoted by t, and the dependence on time of the variables of continuous-time systems is represented in brackets, e.g., x(t), u(t).

For a discrete-time signal s and a, b ∈ ℕ, a ≤ b, we denote (s_a; s_{a+1}; ...; s_b) by s_{[a:b]}. For a continuous-time variable s(t) and a given time interval T ⊆ ℝ⁺ (T can be open or closed), the trajectory s(t) with t ∈ T is denoted by s(T).

The short-hand v = (v_1, ..., v_s) denotes a column vector with s (not necessarily scalar) components v_1, ..., v_s.

Only when distributed or decentralized techniques are discussed, the (centralized) large-scale system matrices and vectors are written in bold. In all other cases, standard fonts are used also for possibly non-scalar elements.

The expression ‖x‖²_Q stands for xᵀQx, where x is a column vector and xᵀ is the transpose of x.

I_α stands for an identity matrix of order α. Where clear from the context, 0 is used to represent a matrix of proper dimensions with all its elements equal to zero.

All the eigenvalues of a Schur matrix have absolute value strictly less than 1. A matrix is said to be Hurwitz if all its eigenvalues have negative real part. λ_M(·) and λ_m(·) are the maximum and the minimum eigenvalues of a matrix, respectively.

A polyhedron is a subset of ℝⁿ defined by the intersection of a finite number of closed half-spaces, i.e., a polyhedron X is a set defined by a finite number of inequalities: X ≜ {x ∈ ℝⁿ : f_iᵀx ≤ g_i, i ∈ I}, where f_i ∈ ℝⁿ, g_i ∈ ℝ and I is a finite index set. A polytope is a bounded polyhedron.

Given two sets A and B, their Minkowski sum (set addition) is A ⊕ B ≜ {a + b : a ∈ A, b ∈ B}. Their Minkowski, or Pontryagin, difference (set subtraction) is A ⊖ B ≜ {a : {a} ⊕ B ⊆ A}. We denote by ⊕_{i=1}^{M} P_i the Minkowski sum of the sets {P_1, ..., P_M}.

A generic p-norm ball centered at the origin in ℝ^dim is defined as B^(dim)_{p,ε}(0) := {x ∈ ℝ^dim : ‖x‖_p ≤ ε}.

Given a generic compact set L, H = box(L) is the smallest hyperrectangle containing L with faces perpendicular to the Cartesian axes.

int(X) denotes the interior of the set X.

A continuous function α : ℝ⁺ → ℝ⁺ is a K∞ function if and only if i) α(0) = 0, ii) it is strictly increasing, and iii) α(s) → +∞ as s → +∞.
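Since the Minkowski sum and the Pontryagin difference recur throughout the thesis, a small sketch may help fix ideas. For axis-aligned boxes (a special case of polytopes; the helper functions below are illustrative, not part of the thesis) both operations reduce to interval arithmetic per coordinate:

```python
# Minkowski sum and Pontryagin difference for axis-aligned boxes,
# each box given as a list of per-coordinate (lo, hi) intervals.

def box_sum(a, b):
    """Minkowski sum A (+) B: intervals add endpoint-wise."""
    return [(al + bl, ah + bh) for (al, ah), (bl, bh) in zip(a, b)]

def box_diff(a, b):
    """Pontryagin difference A (-) B: the points that remain in A
    after adding any element of B."""
    return [(al - bl, ah - bh) for (al, ah), (bl, bh) in zip(a, b)]

# X = [-5,5]^2, W = [-1,1]^2: the tightened set X (-) W is [-4,4]^2
X = [(-5.0, 5.0), (-5.0, 5.0)]
W = [(-1.0, 1.0), (-1.0, 1.0)]
print(box_diff(X, W))              # [(-4.0, 4.0), (-4.0, 4.0)]
print(box_sum(box_diff(X, W), W))  # recovers [(-5.0, 5.0), (-5.0, 5.0)]
```

For boxes the difference undoes the sum exactly; for general polytopes A ⊖ B ⊕ B ⊆ A holds only as an inclusion.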

1.2 Model Predictive Control

In this Section the fundamentals of Model Predictive Control are illustrated. MPC is probably the most widely used advanced technique for the control of industrial plants (see, e.g., [127, 128]). Its main features, which make it particularly suited for several applications, are the following: the control problem is reformulated as an optimization problem, in which it is possible to include different and possibly conflicting goals; in the control problem formulation it is possible to explicitly consider constraints on inputs, outputs and states, which is achieved by predicting the evolution of the system as a function of an admissible sequence of future control inputs; finally, it is possible to design the controller also using empirical process models obtained, for instance, through step or impulse responses of the system. An in-depth presentation is beyond the scope of this thesis: for detailed discussions, the reader is referred to the textbooks [20, 95, 136] and to the survey papers [56, 105].

1.2.1 Standard MPC formulation

Consider a time-invariant, linear, discrete-time system described by

x_{k+1} = A x_k + B u_k    (1.1)

where x_k ∈ ℝⁿ is the state vector, u_k ∈ ℝᵐ is the input vector and the pair (A, B) is assumed to be reachable. The states and the inputs have to fulfill the constraints x_k ∈ X ⊆ ℝⁿ and u_k ∈ U ⊆ ℝᵐ. Given a prediction horizon of N ∈ ℕ time steps, the goal at time step k is to compute the sequence of N control variables u_{[k:k+N-1]} = (u_k, u_{k+1}, ..., u_{k+N-1}) that minimizes the finite-horizon cost function

V(x_k, u_{[k:k+N-1]}) = Σ_{ν=0}^{N-1} (‖x_{k+ν}‖²_Q + ‖u_{k+ν}‖²_R) + ‖x_{k+N}‖²_P    (1.2)

where Q ∈ ℝ^{n×n}, R ∈ ℝ^{m×m} and P ∈ ℝ^{n×n} are positive definite matrices. Note that the positive definiteness of matrix Q is not strictly

required [136], and is assumed here only to simplify the following proofs. The term ‖x_{k+ν}‖²_Q + ‖u_{k+ν}‖²_R represents the stage cost, where the matrices Q and R are design parameters, while the matrix P appearing in the terminal cost ‖x_{k+N}‖²_P has to be carefully selected in order to guarantee the convergence of the control algorithm. For the same reason, an additional terminal constraint x_{k+N} ∈ X_f is required as well, where the properties of the set X_f ⊆ X will be specified later on. If the sets X, U and X_f are polytopes, it is easy to verify that, given the current value of the state x_k, the optimization problem

min_{u_{[k:k+N-1]}}  V(x_k, u_{[k:k+N-1]})
s.t.  x_{k+ν} ∈ X,  ν = 0, ..., N-1
      u_{k+ν} ∈ U,  ν = 0, ..., N-1
      x_{k+N} ∈ X_f
      x_{k+ν+1} = A x_{k+ν} + B u_{k+ν}    (1.3)

can be written as a quadratic programming problem, i.e., as an optimization problem of the form

min_z  ½ zᵀΥz + ϖᵀz   s.t.  Λz ≤ σ    (1.4)

whose solution can be computed with well-known and computationally efficient algorithms [19]. Let us denote by u*_{[k:k+N-1]} = (u*_k, u*_{k+1}, ..., u*_{k+N-1}) the optimal control sequence computed at time k. According to the receding-horizon (or moving-horizon) principle, the rationale underlying MPC is to use only the first element of the optimal sequence, i.e., u*_k, and to solve the optimization problem (1.3) again at the next time step, referred to the horizon [k+1, ..., k+N+1]. Since the control input is the outcome of an optimization procedure solved at each time step, the control law is an implicit state-feedback law. In order to prove that: i) the optimization problem remains feasible at time step k+1; ii) the convergence of the closed-loop system is guaranteed;

an auxiliary state-feedback control law K_a x, not used to compute the control action but necessary for the mathematical proofs and for the design of the controller, must be selected together with a proper pair of terminal weighting matrix P and terminal set X_f. The most common choices are the following ones:

1. Zero Terminal Constraint (ZTC): P = 0, K_a = 0 and X_f = {0}.

2. Quasi-Infinite LQ (QILQ): P is the unique positive definite solution of the algebraic (stationary) Riccati equation P = AᵀPA + Q − AᵀPB(R + BᵀPB)⁻¹BᵀPA; K_a is the infinite-horizon LQR (linear quadratic regulator) gain K_a = −(R + BᵀPB)⁻¹BᵀPA; X_f is chosen such that X_f ⊆ X, K_a X_f ⊆ U and (A + BK_a)X_f ⊆ X_f.

3. Arbitrary Stabilizing Gain (ASG): K_a is selected as an arbitrary stabilizing gain (using, e.g., eigenvalue assignment techniques); P is the solution of the Lyapunov equation (A + BK_a)ᵀP(A + BK_a) − P = −(Q + K_aᵀRK_a); X_f is chosen such that X_f ⊆ X, K_a X_f ⊆ U and (A + BK_a)X_f ⊆ X_f.

Note that a possible way to select X_f in QILQ and ASG is to define it as the ellipsoidal invariant set X_f = {x : xᵀPx ≤ o}, with the positive scalar o small enough to guarantee X_f ⊆ X and K_a X_f ⊆ U. It can then be transformed into a polytopic positively invariant set as explained in [5].

A sketch of the recursive feasibility and convergence arguments is now provided for all three proposed choices. Assume to be at a generic time instant k, with optimal control sequence u*_{[k:k+N-1]} = (u*_k, u*_{k+1}, ..., u*_{k+N-1}) associated with the optimal value V°(x_k) of the objective function. Applying the control input u*_k, the system reaches the state x_{k+1} = A x_k + B u*_k. If at time k+1 the tail u*_{[k+1:k+N-1]} of the control sequence computed at time k is applied to the system, the state at time k+N is x_{k+N} ∈ X_f and a feasible trajectory is followed.
Therefore, thanks to the properties of X_f, the control sequence ũ_{[k+1:k+N]} = (u*_{k+1}, u*_{k+2}, ..., K_a x_{k+N}) is an admissible solution to the optimization problem at time k+1. The corresponding cost function value Ṽ(x_{k+1}, ũ_{[k+1:k+N]}) is suboptimal, i.e., by definition it holds that V°(x_{k+1}) ≤ Ṽ(x_{k+1}, ũ_{[k+1:k+N]}).
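The receding-horizon scheme described above can be sketched numerically. The following is a minimal, self-contained example with illustrative double-integrator data and input bounds only (the state and terminal set constraints of (1.3) are omitted for brevity, so this is not a full implementation of the formulation above): P and K_a are computed as in the QILQ choice, and the box-constrained QP is solved in least-squares form with scipy's lsq_linear.

```python
import numpy as np
from scipy.linalg import solve_discrete_are, sqrtm
from scipy.optimize import lsq_linear

# illustrative double-integrator example (values not from the thesis)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.array([[0.1]])
N, umax, n, m = 10, 1.0, 2, 1

# QILQ ingredients: P from the stationary Riccati equation and the
# LQR gain K_a, so that A + B K_a is Schur
P = solve_discrete_are(A, B, Q, R)
Ka = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# prediction matrices: stacked states (x_{k+1},...,x_{k+N}) = Sx x_k + Su U
Sx = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
Su = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        Su[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B

# square-root weights: Q on x_{k+1..k+N-1}, terminal weight P on x_{k+N}
Wsq = np.real(sqrtm(np.kron(np.eye(N), Q)))
Wsq[-n:, -n:] = np.real(sqrtm(P))
Rsq = np.real(sqrtm(np.kron(np.eye(N), R)))

def mpc_step(x):
    """One receding-horizon step: solve the QP (in least-squares form,
    with box bounds on the inputs) and return the first optimal input."""
    Amat = np.vstack([Wsq @ Su, Rsq])
    bvec = np.concatenate([-Wsq @ Sx @ x, np.zeros(N * m)])
    return lsq_linear(Amat, bvec, bounds=(-umax, umax)).x[:m]

# closed loop: only the first input of each optimal sequence is applied
x = np.array([3.0, 0.0])
for _ in range(30):
    x = A @ x + B @ mpc_step(x)
print(np.linalg.norm(x))  # state driven close to the origin
```

In this example the input bound is active only during the first steps; afterwards the closed loop behaves like the unconstrained LQ regulator associated with P.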

Considering all the terms of the cost function, it is possible to see that

Ṽ(x_{k+1}, ũ_{[k+1:k+N]}) − V°(x_k) = −‖x_k‖²_Q − ‖u*_k‖²_R + ‖x_{k+N}‖²_Q + ‖K_a x_{k+N}‖²_R − ‖x_{k+N}‖²_P + ‖(A + BK_a) x_{k+N}‖²_P
= −‖x_k‖²_Q − ‖u*_k‖²_R + ‖x_{k+N}‖²_{Q + K_aᵀRK_a − P + (A+BK_a)ᵀP(A+BK_a)}    (1.5)

Since ZTC guarantees x_{k+N} = 0, while QILQ and ASG, recalling that the stationary Riccati equation can also be written as P = Q + K_aᵀRK_a + (A + BK_a)ᵀP(A + BK_a), lead to Q + K_aᵀRK_a − P + (A + BK_a)ᵀP(A + BK_a) = 0, in all cases we have

Ṽ(x_{k+1}, ũ_{[k+1:k+N]}) − V°(x_k) = −‖x_k‖²_Q − ‖u*_k‖²_R    (1.6)

The suboptimality of Ṽ(x_{k+1}, ũ_{[k+1:k+N]}) allows one to write

V°(x_{k+1}) − V°(x_k) ≤ −(‖x_k‖²_Q + ‖u*_k‖²_R)    (1.7)

When the state is different from zero, the optimal cost is therefore a positive, monotonically strictly decreasing function along the closed-loop trajectories. Hence V°(x_k) − V°(x_{k+1}) → 0 and, since V°(x_k) − V°(x_{k+1}) ≥ ‖x_k‖²_Q + ‖u*_k‖²_R, it follows that ‖x_k‖²_Q + ‖u*_k‖²_R → 0 as well. The positive definiteness of the matrices Q and R then implies that inputs and states converge to the origin.

1.2.2 Robust tube-based MPC

If the system to be controlled is affected by an external unknown (but bounded) disturbance, using the standard technique on the nominal system [85, 148], i.e., the system obtained from the real one by neglecting the disturbance, could easily lead to the loss of the stability properties and/or to constraint violations [125]. As clarified in [63], nominal MPC can be non-robust even with respect to arbitrarily small disturbances. For this reason, attention has recently been focused on the development of MPC algorithms ensuring desired robustness properties, see, e.g., [11, 79, 80, 103, 129]. This activity has led to the development of two broad classes of algorithms: one is based on a min-max formulation of the optimization problem that defines MPC, see [86, 100, 129, 147].
The other one relies on the a priori evaluation of the disturbance effect over the prediction horizon and on the enforcement of progressively tighter constraints on the predicted state trajectories, see [30, 62, 82, 84]. Among the latter, the so-called tube-based method discussed in [81, 107] has received much attention for its simplicity and in view of the fact that it requires an on-line computational

load comparable to that of nominal MPC. In the remainder of this Section, the approach described in [107] is presented in more detail, since it plays a fundamental role in the DPC control scheme.

Consider a linear, discrete-time system under control described by

x_{k+1} = A x_k + B u_k + w_k    (1.8)

where x_k ∈ X ⊆ ℝⁿ, u_k ∈ U ⊆ ℝᵐ and w_k ∈ W ⊆ ℝⁿ is an unknown but bounded disturbance. U, X and W are polytopes containing the origin in their interior. The nominal system corresponding to (1.8) is

x̂_{k+1} = A x̂_k + B û_k    (1.9)

Consider a control gain K selected in such a way that F = A + BK is Schur, and the robust positive invariant (RPI) set Z verifying F Z ⊕ W ⊆ Z (details on how to compute Z as a polytopic set will be given in Section 1.3). It can easily be proved that, if x_k − x̂_k ∈ Z and the real system is controlled with

u_k = û_k + K(x_k − x̂_k)    (1.10)

then x_{k+1} − x̂_{k+1} ∈ Z for all w_k ∈ W. Therefore, the input u_k to system (1.8) is computed as the sum of two terms: the nominal input û_k, obtained as the solution of a standard MPC optimization problem for the nominal model (1.9), and the corrective term K(x_k − x̂_k), which has the role of keeping the real system state as close as possible to that of the nominal system. The containment x_{k+1} − x̂_{k+1} ∈ Z, however, is guaranteed only if x_k − x̂_k ∈ Z: x_k is the measured current state of the real system and therefore cannot be instantaneously changed. On the contrary, x̂_k is the current state of a fictitious, non-existing system and can therefore be initialized arbitrarily. Tube-based MPC considers the nominal system (1.9) with tightened constraints, and its main innovation consists in adding the nominal current state to the set of optimization variables. The cost function to be minimized is

V(x̂_k, û_{[k:k+N-1]}) = Σ_{ν=0}^{N-1} (‖x̂_{k+ν}‖²_Q + ‖û_{k+ν}‖²_R) + ‖x̂_{k+N}‖²_P    (1.11)

where Q ∈ ℝ^{n×n}, R ∈ ℝ^{m×m} and P ∈ ℝ^{n×n} are positive definite matrices.
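The containment property enforced by the control law (1.10) can be illustrated numerically: with F = A + BK Schur, the error e_k = x_k − x̂_k evolves as e_{k+1} = F e_k + w_k and remains bounded for any bounded disturbance. A minimal sketch, with an illustrative system and gain (not a design from the thesis) and a conservative norm bound in place of the polytopic RPI set Z:

```python
import numpy as np

# error dynamics of the tube: e_{k+1} = F e_k + w_k, F Schur
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-0.4, -1.2]])   # illustrative gain making F Schur
F = A + B @ K

wmax = 0.1                     # W = {w : |w_i| <= wmax}
# a valid bound on ||e_k||: sum_i ||F^i||_2 * max ||w||_2
bound, Fi = 0.0, np.eye(2)
for _ in range(200):
    bound += np.linalg.norm(Fi, 2) * wmax * np.sqrt(2)
    Fi = Fi @ F

# Monte-Carlo check: the error never leaves the bound
rng = np.random.default_rng(0)
e, max_err = np.zeros(2), 0.0
for _ in range(500):
    w = rng.uniform(-wmax, wmax, size=2)
    e = F @ e + w
    max_err = max(max_err, np.linalg.norm(e))
print(max_err, "<=", bound)
```

The polytopic RPI set Z used in the thesis plays the same role as this norm bound, but is much less conservative; its computation is the subject of Section 1.3.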
As in the standard MPC case, the simplest choice for parameters K and P consists in synthesizing K using the LQR criterion and computing P as the corresponding solution of the stationary Lyapunov

equation. The quadratic programming problem to be solved at each time step is

min_{x̂_k, û_{[k:k+N-1]}}  V(x̂_k, û_{[k:k+N-1]})
s.t.  x_k − x̂_k ∈ Z
      x̂_{k+ν} ∈ X̂,  ν = 0, ..., N-1
      û_{k+ν} ∈ Û,  ν = 0, ..., N-1
      x̂_{k+N} ∈ X̂_f    (1.12)
      x̂_{k+ν+1} = A x̂_{k+ν} + B û_{k+ν}

where X̂ = X ⊖ Z, Û = U ⊖ KZ and X̂_f is such that X̂_f ⊆ X̂, K X̂_f ⊆ Û and F X̂_f ⊆ X̂_f. It is assumed that W is small enough to guarantee that Z ⊂ int(X) and KZ ⊂ int(U). Once the optimal pair (x̂*_k, û*_{[k:k+N-1]}) is computed, the control action for the real system is given by Equation (1.10), i.e., u_k = û*_k + K(x_k − x̂*_k).

With the design parameters set as described, the proof of recursive feasibility turns out to be very similar to that of standard MPC. Specifically, at time k+1 it holds that x_{k+1} − x̂*_{k+1} ∈ Z. Denoting ũ_{[k+1:k+N]} = (û*_{[k+1:k+N-1]}, K x̂*_{k+N}), the properties of X̂_f guarantee that the pair (x̂*_{k+1}, ũ_{[k+1:k+N]}) is a (suboptimal) feasible solution to the optimization problem (1.12). As for the convergence proof, the corresponding (suboptimal) cost Ṽ(x̂*_{k+1}, ũ_{[k+1:k+N]}) satisfies V°(x_{k+1}) ≤ Ṽ(x̂*_{k+1}, ũ_{[k+1:k+N]}), where V°(x_{k+1}) is the optimal value of the cost function at time k+1, obtained with the optimal pair computed at k+1. As in the standard case,

Ṽ(x̂*_{k+1}, ũ_{[k+1:k+N]}) − V°(x_k) = −(‖x̂*_k‖²_Q + ‖û*_k‖²_R)

and thus

V°(x_{k+1}) − V°(x_k) ≤ −(‖x̂*_k‖²_Q + ‖û*_k‖²_R)

The optimal cost turns out to be a positive, monotonically strictly decreasing function, therefore V°(x_k) − V°(x_{k+1}) → 0. Since V°(x_k) − V°(x_{k+1}) ≥ ‖x̂*_k‖²_Q + ‖û*_k‖²_R,

we have that ‖x̂*_k‖²_Q + ‖û*_k‖²_R → 0 as well. Since Q and R are positive definite, the nominal inputs and states converge to zero. Since the real states are always confined to an invariant neighborhood of the nominal ones defined by Z, they are steered to the neighborhood Z of the origin. Note that, for the same reason, the tightened constraints X̂ and Û ensure the fulfillment of the real constraints x_k ∈ X and u_k ∈ U at all time instants.

1.2.3 MPC for tracking piecewise constant references

For practical application purposes, model predictive controllers must be able not only to regulate the system state to zero, but also to handle non-zero target steady states, which can be provided by a steady-state target optimizer [112]. The standard solution to this problem consists in changing the system state coordinates, i.e., shifting the system state to the desired steady state [114]. Unfortunately, the new target steady state could be unreachable and, moreover, after such a shift, feasibility may not be guaranteed. The latter problem can be solved by re-calculating the control horizon and the terminal set for the new target steady state, but the complexity of this procedure does not allow one to perform the recalculation on-line. This problem has motivated several solutions proposed in the literature [10, 16, 31, 58, 121, 141]. Recently, an effective MPC algorithm for tracking was proposed in [6, 52, 88, 89]. Its main ingredients are: i) the online computation of the actual target to be tracked at each time instant; ii) the penalization, at the optimization problem level, of the deviation between the artificial steady state and the desired one. The controller steers the system to any admissible target steady state while satisfying the system constraints; if the desired target is not admissible, the system is steered to the closest admissible steady state.
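The target computation in ingredient i) amounts, for a square linear system, to solving a set of linear equations for the steady-state pair (x̄, ū) associated with a set-point ȳ; this map is formalized in (1.14) below. A minimal sketch with illustrative matrices (not from the thesis):

```python
import numpy as np

# steady-state pair (xbar, ubar) for a set-point ybar:
# solve [[A - I, B], [C, D]] [xbar; ubar] = [0; ybar]
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

n, m = A.shape[0], B.shape[1]
S = np.block([[A - np.eye(n), B], [C, D]])   # assumed full rank
ybar = np.array([2.0])
sol = np.linalg.solve(S, np.concatenate([np.zeros(n), ybar]))
xbar, ubar = sol[:n], sol[n:]

# the pair is an equilibrium producing the requested output
assert np.allclose(A @ xbar + B @ ubar, xbar)
assert np.allclose(C @ xbar + D @ ubar, ybar)
print(xbar, ubar)
```

Whether this pair also satisfies the input and state constraints is exactly the admissibility question addressed next.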
The system to be controlled is

x_{k+1} = A x_k + B u_k
y_k = C x_k + D u_k    (1.13)

where x_k ∈ X ⊆ ℝⁿ is the state, u_k ∈ U ⊆ ℝᵐ is the input and y_k ∈ ℝᵐ is the output. The number of outputs is assumed to be equal to the number of inputs only for the sake of simplicity: extensions to non-square systems are shown in the provided references. Inputs and states are subject to constraints defined by the compact, convex polyhedra X and

U containing the origin in their interior and defined by

X = {x ∈ ℝⁿ : A_x x ≤ b_x}  and  U = {u ∈ ℝᵐ : A_u u ≤ b_u}

Let ȳ be a generic set-point with (x̄, ū) the related steady-state pair, i.e.,

[x̄; ū] = [[A − I_n, B]; [C, D]]⁻¹ [0; I_m] ȳ = S⁻¹ [0; I_m] ȳ = M ȳ = [M_x; M_u] ȳ    (1.14)

where it is assumed that the system matrix S has full rank and can therefore be inverted. The set of all the admissible set-points is denoted by

Y = {ȳ = C x̄ + D ū : x̄ ∈ X, ū ∈ U, (A − I_n) x̄ + B ū = 0} ⊆ ℝᵐ

As in the regulation case, an auxiliary state-feedback control law and a proper terminal invariant set are required to ensure recursive feasibility. For a generic target ȳ, the auxiliary control law is given by

u = K(x − x̄) + ū = K x + (M_u − K M_x) ȳ = K x + L ȳ    (1.15)

which guarantees that the system is steered to the steady state (x̄, ū), provided that A + BK is Schur. Since the system is subject to constraints, the main goal is to find a set such that the constraints are fulfilled when the auxiliary control law is used. From now on, we define the set of all the λ-admissible set-points ȳ_λ as

Y_λ = {ȳ_λ = C x̄_λ + D ū_λ : x̄_λ ∈ λX, ū_λ ∈ λU, (A − I_n) x̄_λ + B ū_λ = 0}

with λ ∈ (0, 1]. To find this invariant set, consider the extended state q = (x, ȳ_λ) and its closed-loop dynamics when the auxiliary control law (1.15) is used to reach x̄_λ:

q_{k+1} = [x_{k+1}; ȳ_λ] = [[A + BK, BL]; [0, I_m]] [x_k; ȳ_λ] = A_q q_k    (1.16)

Now select a positive scalar λ ≤ 1 and define the set Q_λ of all the values of q such that: i) the steady state (x̄_λ, ū_λ) = M ȳ_λ lies inside the

set $\lambda X \times \lambda U \subseteq X \times U$; ii) the state $x$ lies inside $X$ and the auxiliary control law (1.15) corresponding to $x$ belongs to $U$. The set
\[ Q_\lambda = \{q = (x, \bar y_\lambda) : u = Kx + L\bar y_\lambda \in U,\ x \in X,\ \bar x_\lambda = M_x \bar y_\lambda \in \lambda X,\ \bar u_\lambda = M_u \bar y_\lambda \in \lambda U\} \tag{1.17} \]
is a polyhedron defined by the inequalities
\[ \begin{bmatrix} A_x & 0 \\ A_u K & A_u L \\ 0 & A_x M_x \\ 0 & A_u M_u \end{bmatrix} \begin{bmatrix} x \\ \bar y_\lambda \end{bmatrix} \le \begin{bmatrix} b_x \\ b_u \\ \lambda b_x \\ \lambda b_u \end{bmatrix} \tag{1.18} \]
The actual constraints for the system correspond to $Q_1$, i.e. to $Q_\lambda$ with $\lambda = 1$. The need of tightening the constraints for the steady state comes from the method used for computing the invariant set, as will be explained later. An invariant set $\Omega_q$ for the extended state $q$, i.e. such that $A_q \Omega_q \subseteq \Omega_q$, is said to be an admissible invariant set for tracking if the extended state fulfills all the constraints, which means $\Omega_q \subseteq Q_1$. The biggest among all possible admissible invariant sets for tracking is said to be the maximal invariant set for tracking (MIST), and it can be proved to be defined by [59]
\[ O_q^\infty = \{q : A_q^i q \in Q_1,\ i \ge 0\} \tag{1.19} \]
Unfortunately, in general it is not possible to compute $O_q^\infty$, because it might not be finitely determined by a finite set of constraints. On the contrary, choosing $\lambda \in (0, 1)$, it is possible to state that the MIST with respect to $Q_\lambda$
\[ O_{q,\lambda}^\infty = \{q : A_q^i q \in Q_\lambda,\ i \ge 0\} \tag{1.20} \]
is a finitely determined polytope, and therefore computable [59]. Note that, since $\lambda$ can be chosen arbitrarily close to 1, the obtained invariant set is arbitrarily close to the real maximal invariant set $O_q^\infty$. After having arbitrarily chosen $\lambda \in (0, 1)$, let us assume that we want to track a desired $\lambda$-admissible target $\bar y_{\lambda,d}$ corresponding to the couple $(\bar x_{\lambda,d}, \bar u_{\lambda,d}) = M\bar y_{\lambda,d}$. The MPC for tracking considers as decision variable an artificial $\lambda$-admissible reference $\bar y_{\lambda,a}$, with $(\bar x_{\lambda,a}, \bar u_{\lambda,a}) = M\bar y_{\lambda,a}$, and the deviation between the artificial steady state $\bar x_{\lambda,a}$ and the desired steady state $\bar x_{\lambda,d}$ is penalized. This penalization guarantees that $\bar x_{\lambda,a}$ asymptotically reaches $\bar x_{\lambda,d}$.
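As an aside, the steady-state map (1.14) is straightforward to evaluate numerically. The following sketch computes $(\bar x, \bar u) = M\bar y$ for an illustrative two-state system (all matrices below are assumptions for the example, not taken from the thesis) and verifies the defining equations.

```python
import numpy as np

# Hedged sketch of (1.14): steady-state pair (x_bar, u_bar) for a set-point
# y_bar. The 2x2 system below is illustrative only.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
n, m = A.shape[0], B.shape[1]

# S = [[A - I, B], [C, D]]; assumed invertible (full rank), as in the text.
S = np.block([[A - np.eye(n), B],
              [C, D]])
M = np.linalg.inv(S) @ np.vstack([np.zeros((n, m)), np.eye(m)])
Mx, Mu = M[:n, :], M[n:, :]

y_bar = np.array([[1.0]])
x_bar, u_bar = Mx @ y_bar, Mu @ y_bar

# Verify: (A - I) x_bar + B u_bar = 0 and C x_bar + D u_bar = y_bar
assert np.allclose((A - np.eye(n)) @ x_bar + B @ u_bar, 0)
assert np.allclose(C @ x_bar + D @ u_bar, y_bar)
```

In practice one would also check the admissibility conditions $\bar x \in \lambda X$, $\bar u \in \lambda U$ before accepting the target.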
In order to present the control technique in the clearest possible way, we will consider only the case of

$\lambda$-admissible targets. Anyway, it is easy to show that, if we have to track a generic non-$\lambda$-admissible target $\bar y_d$ corresponding to $(\bar x_d, \bar u_d) = M\bar y_d$, possibly non-admissible for system (1.13) because outside the set $Y$, the penalization of the distance between $\bar x_{\lambda,a}$ and $\bar x_d$ makes the system evolve to a $\lambda$-admissible steady state such that its deviation from the desired (although possibly unreachable) steady state $\bar x_d$ is minimized. The cost function over the prediction horizon $N$ is
\[ V(x_k, \bar y_{\lambda,d}, u_{k:k+N-1}, \bar y_{\lambda,a}) = \sum_{\nu=0}^{N-1} \left( \|x_{k+\nu} - \bar x_{\lambda,a}\|^2_Q + \|u_{k+\nu} - \bar u_{\lambda,a}\|^2_R \right) + \|x_{k+N} - \bar x_{\lambda,a}\|^2_P + \|\bar x_{\lambda,a} - \bar x_{\lambda,d}\|^2_T \tag{1.21} \]
where the matrices $Q \in \mathbb{R}^{n\times n}$, $R \in \mathbb{R}^{m\times m}$, $P \in \mathbb{R}^{n\times n}$ and $T \in \mathbb{R}^{n\times n}$ are all assumed to be positive definite. The input sequence over the whole prediction horizon $u_{[k:k+N-1]}$ and $\bar y_{\lambda,a}$, i.e., the target really tracked at time $k$, are the decision variables, while the current state $x_k$ and the desired target $\bar y_{\lambda,d}$ are parameters of the cost function. The MPC problem to be solved is
\[ \min_{\bar y_{\lambda,a},\, u_{[k:k+N-1]}} V(x_k, \bar y_{\lambda,d}, u_{k:k+N-1}, \bar y_{\lambda,a}) \]
\[ \text{s.t.} \quad x_{k+\nu} \in X, \quad u_{k+\nu} \in U, \quad \nu = 0, \dots, N-1, \qquad (x_{k+N}, \bar y_{\lambda,a}) \in O_{q,\lambda}^\infty, \qquad x_{k+1} = Ax_k + Bu_k \tag{1.22} \]
The proofs of recursive feasibility of the MPC problem, of asymptotic stability of the controller and of convergence of the closed-loop system to $\bar y_{\lambda,d}$ are based on three main steps.
1. Prove that, if at the initial time instant feasibility holds for a certain value of $\bar y_{\lambda,a}$, with standard arguments it is possible to demonstrate that keeping $\bar y_{\lambda,a}$ constant at the next time steps guarantees recursive feasibility while the system is steered towards $\bar y_{\lambda,a}$.
2. Prove that, if the system reaches an equilibrium point $\bar y_{\lambda,a} \ne \bar y_{\lambda,d}$, there exists a suboptimal input sequence that decreases the value of the cost function and that steers the system from $\bar y_{\lambda,a}$ to $\bar y_{\lambda,d}$ while fulfilling all the constraints. In other

words, the only closed-loop stable equilibrium point compatible with the proposed minimization problem is the one corresponding to $\bar y_{\lambda,a} = \bar y_{\lambda,d}$.
3. From the previous steps, infer that the system asymptotically converges to the unique equilibrium point of the closed-loop system, where $\bar y_{\lambda,a} = \bar y_{\lambda,d}$.
The mathematical details, here skipped to avoid repetitions, can be found in Chapter 6 for systems enlarged with integrators.

1.3 Invariant sets

Given an autonomous system of the form
\[ x_{k+1} = F x_k \tag{1.23} \]
where $x_k \in \mathbb{R}^n$ and $F$ is Schur, a set $P$ is defined to be positively invariant if $F x \in P$ for all $x \in P$, i.e., if $F P \subseteq P$. Assume that the system is affected by a disturbance and is described by
\[ x_{k+1} = F x_k + w_k \tag{1.24} \]
where $w_k \in W \subset \mathbb{R}^n$ is an unknown bounded external disturbance. A set $P$ is defined to be robustly positively invariant (RPI) if $F x + w \in P$ for all $x \in P$ and $w \in W$, i.e., if $F P \oplus W \subseteq P$. Invariant sets play an important role in the control of constrained systems [15]. In fact, if the constraints are violated at any time instant, serious consequences may arise: for example, physical components may be damaged or saturation may cause a loss of closed-loop stability [8,65,157]. Considering in particular the case of MPC, as shown in the previous Sections, invariant sets can be required to prove recursive feasibility of the control algorithm or to design robust controllers.

1.3.1 Polytopic approximation of the minimal robust invariant set

For a system with dynamic equation (1.24), it is often necessary to compute the minimal RPI (mRPI) set $P^\infty$, which is the RPI set in $\mathbb{R}^n$ that is contained in every closed RPI set of (1.24). For instance, this need arises when a target set in robust time-optimal control has to be computed [106], when the properties of the maximal RPI have to

be studied [78] or when robust predictive controllers have to be designed [81,107] (as explained in Section 1.2.2). Consider the discrete-time, linear, time-invariant system (1.24), where it is assumed that the matrix $F \in \mathbb{R}^{n\times n}$ is Schur and the set $W$ is a polytope containing the origin in its interior. It is possible to demonstrate that the mRPI set $P^\infty$ exists, is compact, contains the origin in its interior and is given by [78]
\[ P^\infty = \bigoplus_{i=0}^{\infty} F^i W \tag{1.25} \]
Since $P^\infty$ is a Minkowski sum of infinitely many terms, it is generally impossible to obtain an explicit characterization of it. However, if there exist an integer $\phi \ge 1$ and a scalar $\alpha \in [0,1)$ such that $F^\phi = \alpha I_n$, then $P^\infty = (1-\alpha)^{-1} \bigoplus_{i=0}^{\phi-1} F^i W$ [78]. Therefore, if $F$ is nilpotent with index $\phi$, i.e., if $F^\phi = 0$, it is possible to compute
\[ P^\infty = \bigoplus_{i=0}^{\phi-1} F^i W \tag{1.26} \]
In [131,132] the assumption that there exist an integer $\phi \ge 1$ and a scalar $\alpha \in [0,1)$ such that $F^\phi = \alpha I_n$ is relaxed, and replaced by the assumption that there exist an integer $\phi \ge 1$ and a scalar $\alpha \in [0,1)$ that satisfy $F^\phi W \subseteq \alpha W$. In this case we can no longer compute $P^\infty$ in a finite number of steps, but we can solve the problem of computing an RPI set $P(\alpha, \phi)$ that contains the mRPI set and that is as close to it as desired. In fact, it can be proved that
\[ P(\alpha, \phi) = (1-\alpha)^{-1} \bigoplus_{i=0}^{\phi-1} F^i W \tag{1.27} \]
is a convex, compact, polytopic RPI set of (1.24) containing $P^\infty$. The approximation error decreases as $\alpha$ decreases (at the price of a larger $\phi$). Summarizing, the computation of the polytopic RPI outer approximation to the mRPI can be done as described in the next algorithm.
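The inclusion test $F^\phi W \subseteq \alpha W$ required in the algorithm admits a particularly simple form when $W$ is a box: it reduces to row-wise comparisons of support functions. A hedged Python sketch under this box assumption (the system matrix, bounds and $\alpha$ are illustrative assumptions, not values from the thesis):

```python
import numpy as np

# Hedged sketch: finding phi with F^phi W ⊆ alpha W for a box disturbance
# set W = {w : |w_j| <= wbar_j}, then forming an interval outer bound of
# P(alpha, phi) in (1.27). All numbers are illustrative.
F = np.array([[0.5, 0.2],
              [0.0, 0.4]])
wbar = np.array([1.0, 1.0])

def box_inclusion(F_pow, wbar, alpha):
    # F^phi W ⊆ alpha W  iff  sum_j |(F^phi)_{ij}| wbar_j <= alpha * wbar_i
    return np.all(np.abs(F_pow) @ wbar <= alpha * wbar)

alpha = 0.1
phi, F_pow = 1, F.copy()
while not box_inclusion(F_pow, wbar, alpha):
    phi += 1
    F_pow = F_pow @ F

# For boxes, |F^i| @ wbar is the interval hull of F^i W, so summing these
# half-widths gives a box OUTER bound of the Minkowski sum in (1.27).
bound = np.zeros_like(wbar)
F_i = np.eye(2)
for _ in range(phi):
    bound += np.abs(F_i) @ wbar
    F_i = F_i @ F
bound /= (1.0 - alpha)
print(phi, bound)
```

Note that the printed box is only an outer interval bound of $P(\alpha,\phi)$; the exact polytope would require explicit Minkowski sums, which is precisely the computationally demanding step mentioned below the algorithm.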

Algorithm 1.1 Computation of the RPI outer approximation of the mRPI
1. Choose arbitrarily $\alpha \in [0, 1)$.
2. Starting from $\phi = 1$, increase the integer $\phi$ until a value $\phi \ge 1$ is found such that $F^\phi W \subseteq \alpha W$.
3. Evaluate $P(\alpha, \phi)$ using (1.27).
Note that, due to the need of resorting to Minkowski sums, finding the approximation to the mRPI set can result in an extremely computationally demanding procedure. Finally, it is worth recalling that the same authors extended the approach also to continuous-time systems [133].

1.3.2 Maximal output admissible set

The standard MPC algorithms for regulation and tracking presented above usually require the computation of invariant terminal sets related to the auxiliary state-feedback control law and guaranteeing that constraints on states, inputs and outputs are fulfilled during the evolution of the system. An important contribution to the computation of such invariant sets is presented in [59], where the notion of maximal output admissible set (MOAS) is introduced. Consider an autonomous, discrete-time, time-invariant linear system
\[ x_{k+1} = F x_k, \qquad y_k = Cx_k \tag{1.28} \]
where $x_k \in \mathbb{R}^n$ is the state and the output $y_k \in \mathbb{R}^p$ has to satisfy an output constraint $y_k \in Y$ for all $k \ge 0$. For such a system, a set $O \subseteq \mathbb{R}^n$ is said to be output admissible if $x_0 \in O$ implies that $y_k \in Y$ for all $k > 0$, i.e. if $CF^k x_0 \in Y$ for all $k > 0$. The maximal output admissible set $O^\infty$ is the biggest one, namely the set of all initial conditions $x_0$ for which $CF^k x_0 \in Y$ holds for all $k > 0$. We underline that output admissible sets are not necessarily positively invariant, but it is easy to show that the maximal output admissible set is always positively invariant [95]. In addition, we stress the fact that computing the MOAS for system (1.28) can solve several different problems. For

instance, if a terminal set for standard MPC with constraints $x_k \in X$ and $u_k \in U$ is required, one can apply the results of [59] to the system
\[ x_{k+1} = (A + BK)x_k, \qquad y_k = \begin{bmatrix} I_n \\ K \end{bmatrix} x_k \tag{1.29} \]
imposing $y_k = (x_k, u_k) \in Y = X \times U$. If $Y = \{y \in \mathbb{R}^m : f_i' y \le g_i,\ i \in I\}$ is a convex polytope containing the origin and, referring to system (1.28), the pair $(F, C)$ is observable and the matrix $F$ is strictly or simply stable (eigenvalues with absolute value equal to one are admitted), then $O^\infty$ results to be convex, bounded and a neighborhood of the origin. Moreover, if $F$ is asymptotically stable, then $O^\infty$ is described by a finite set of constraints and can be computed as follows.
Algorithm 1.2 Computation of the MOAS
1. Initialize $t = 0$.
2. For each $i \in I$, maximize $J_i(x) = f_i' C F^{t+1} x$ with respect to $x$, subject to $f_j' C F^h x \le g_j$ for all $j \in I$ and for all $h = 0, 1, \dots, t$. Denote by $J_i^*$ the maximum values of $J_i(x)$.
3. If $J_i^* \le g_i$ for all $i \in I$, then stop and let $t^* = t$. Otherwise, set $t = t + 1$ and go to step 2.
4. Define the maximal output admissible set as
\[ O^\infty = \{x \in \mathbb{R}^n : f_i' C F^t x \le g_i,\ i \in I,\ 0 \le t \le t^*\} \]
If the pair $(F, C)$ is non-observable, it is possible to show that $O^\infty(F, C, Y) = O^\infty(F_1, C_1, Y) \times \mathbb{R}^{n-q}$, where the observable pair $(F_1, C_1)$ can be obtained by changing the state coordinates so that $(F, C)$ takes the form
\[ C = \begin{bmatrix} C_1 & 0 \end{bmatrix}, \qquad F = \begin{bmatrix} F_1 & 0 \\ F_3 & F_2 \end{bmatrix} \]
with $C_1 \in \mathbb{R}^{p\times q}$, $F_1 \in \mathbb{R}^{q\times q}$, $F_2 \in \mathbb{R}^{(n-q)\times(n-q)}$ and $F_3 \in \mathbb{R}^{(n-q)\times q}$. $O^\infty$ turns out to be a set with infinite extent in those directions belonging to the unobservable subspace.
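Algorithm 1.2 can be prototyped with a handful of linear programs, one per constraint row and iteration. The sketch below is a hedged implementation for an illustrative two-state system with box output constraints (the matrices, the constraint set and the availability of SciPy's `linprog` are assumptions); it returns the determination index $t^*$.

```python
import numpy as np
from scipy.optimize import linprog

# Hedged sketch of Algorithm 1.2: determination index t* of the MOAS for
# x+ = F x, y = C x, with Y = {y : G y <= g}. Example data is illustrative.
F = np.array([[0.8, 0.5],
              [0.0, 0.7]])
C = np.eye(2)
G = np.vstack([np.eye(2), -np.eye(2)])   # |y_i| <= 1 componentwise
g = np.ones(4)

def moas_index(F, C, G, g, t_max=50):
    t = 0
    while t < t_max:
        # Constraints accumulated so far: G C F^h x <= g, h = 0..t
        A_ub = np.vstack([G @ C @ np.linalg.matrix_power(F, h)
                          for h in range(t + 1)])
        b_ub = np.concatenate([g] * (t + 1))
        ok = True
        for i in range(G.shape[0]):
            # Step 2: maximize f_i' C F^(t+1) x  (linprog minimizes, so negate)
            c = -(G[i] @ C @ np.linalg.matrix_power(F, t + 1))
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(None, None)] * F.shape[0])
            if -res.fun > g[i] + 1e-9:   # Step 3: some J_i* exceeds g_i
                ok = False
                break
        if ok:
            return t
        t += 1
    raise RuntimeError("MOAS not determined within t_max iterations")

t_star = moas_index(F, C, G, g)
print(t_star)
```

The resulting MOAS is then the polyhedron $\{x : G C F^t x \le g,\ 0 \le t \le t^*\}$, exactly as in step 4 of the algorithm.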

A second important case is the one where the matrix $F$ is not asymptotically stable but has $d$ simple eigenvalues in 1. In this case, one can always find a coordinate change which transforms $(F, C)$ into the form
\[ C = \begin{bmatrix} C_S & C_L \end{bmatrix}, \qquad F = \begin{bmatrix} F_S & 0 \\ 0 & I_d \end{bmatrix} \]
where $F_S \in \mathbb{R}^{(n-d)\times(n-d)}$ is asymptotically stable and the partitioning of $C$ and $F$ is dimensionally consistent. Then define
\[ \hat C = \begin{bmatrix} C_S & C_L \\ 0 & C_L \end{bmatrix} \]
and $Y(\epsilon) = \{y \in \mathbb{R}^p : f_i' y \le g_i - \epsilon,\ i \in I\}$, where $\epsilon$ can be arbitrarily chosen such that $0 < \epsilon \le \epsilon_0$, being $\epsilon_0 = \max\{g_i,\ i \in I\}$. It can be demonstrated that $O^\infty(F, \hat C, Y \times Y(\epsilon))$ is finitely determined and contained in $O^\infty(F, C, Y)$. Thus, $O^\infty(F, \hat C, Y \times Y(\epsilon))$ can be used to approximate $O^\infty(F, C, Y)$, and the precision of the approximation increases as $\epsilon \to 0$. It is easy to see that using the set (1.18) with $\lambda < 1$ for computing the MIST, as explained in Section 1.2.3, is equivalent to adopting the set $Y \times Y(\epsilon)$ with $\epsilon > 0$.

1.4 Distributed and decentralized MPC

In recent years, the size of the systems to be controlled has been continuously increasing, mainly due to the advances in technology and telecommunications. Common examples are smart electrical grids, networks of sensors and actuators, big chemical plants and water or traffic management systems [22,25,110,115,116,161,163,164]. Since the sixties, researchers have been developing decentralized and distributed methods with guaranteed closed-loop stability [37,38,94,151,152,154] to overcome the difficulties that arise when centralized techniques are applied to large-scale systems. Specifically, non-trivial problems can be caused, for instance, by limitations in computing capabilities or communication bandwidth. Reliability and robustness of the overall control system can represent additional reasons for avoiding centralized solutions and for adopting distributed or decentralized ones.
In the latter cases, in fact, the original system is controlled by several local agents, each one taking into account only a small portion of the whole system.

In this thesis, we assume that the decomposition of the initial large-scale system into several small-scale interacting subsystems is given. In-depth analyses of how to decompose the overall input and output vectors into input-output pairs or groups while minimizing the mutual influences among subsystems can be found in [26,61,66-68,75,117,136,142,156,171]. Note that sometimes, in order to obtain a proper decomposition, it can be useful to apply state transformations or permutations to the initial system [94,152,154]. In the framework of distributed and decentralized control, increasing attention has been devoted to algorithms based on optimization and receding horizon control. When using these techniques, in fact, the values over the whole prediction horizon of the inputs, states and outputs intrinsically computed at each sampling time by a local controller designed with MPC can be used as data to be broadcast to the neighbors. This information can simplify the design of a distributed control system and can allow one to obtain performances close to those of a centralized controller. In the following Sections, some control architectures proposed for decentralized and distributed MPC will be listed. We refer to [21,136,144] for additional details. Techniques for the distributed solution of centralized MPC [46,55,93,170] will not be covered: in these cases, the optimization problem is decomposed into smaller subproblems and, at each sampling time, the local controllers solve the centralized optimization problem using iterative decomposition algorithms (e.g., dual decomposition, price coordination). Optimality and feasibility are generally guaranteed only when the convergence of the decomposition algorithm (within each sampling time) is achieved.

1.4.1 Decentralized control based on MPC

Often large-scale systems are controlled by resorting to decentralized techniques.
In these cases, a non-overlapping decomposition of the inputs $u$ and the controlled outputs $y$ is needed, in order to group such variables into disjoint sets. Then, the obtained sets are coupled to produce non-overlapping pairs (or groups, if multivariable local control systems are used). Eventually, for each input-output pair (or group), a local controller is designed with the aim of regulating its own subsystem in a completely independent fashion. In Fig. 1.1, an example of decentralized control for a system constituted by two subsystems is given. It can be seen that the dynamics of each subsystem is influenced by the inputs

Figure 1.1: Decentralized control scheme of a system constituted by two subsystems.

and the states of the other one. In designing decentralized controllers, the interactions due to coupling inputs and/or states are neglected, so that the synthesis of the control system becomes trivial. On the other hand, asymptotic stability of the overall large-scale system can be attained only when the interactions are weak. Strong (neglected) coupling terms, in fact, can lead to poor performances and instability of the whole system, as explained, e.g., in [41,169], where the effects of fixed modes are studied. Conditions to obtain stability using decentralized controllers can be found in [9,40,70-72,94,143,145,152,153,155]. As for decentralized predictive controllers, only few contributions can be found in the literature. This is probably due to the fact that extending to the decentralized case the standard stability analysis of centralized MPC, based on the idea of using the optimal cost as a Lyapunov function, is not an easy task. Examples of decentralized MPC where the local control laws have been computed by neglecting the coupling terms among subsystems are reported in [1,45]. In [102], a decentralized MPC method for nonlinear systems is presented. The dynamics is assumed to be affected by an external decaying disturbance, and stability is guaranteed, despite the effects of the exogenous disturbance and of the interactions among subsystems, by including a contraction constraint in the optimization problem. Coupling terms among subsystems are seen as disturbances to be rejected via a robust approach in [130], where Input-to-State Stability (ISS) [77] is used

to prove stability. Also in [138,139] interactions among subsystems are considered as disturbances to be rejected using the robust tube-based MPC algorithm. Linear discrete-time systems in a plug-and-play framework are considered: specifically, the problem of plugging subsystems into (or unplugging subsystems from) an existing plant without spoiling overall stability and constraint satisfaction is addressed.

1.4.2 Distributed control based on MPC

In Fig. 1.2 an example of a distributed controller is depicted. The difference with respect to the decentralized case is that some information is transmitted among the local regulators. In this way, each controller has some information about the behaviour of the neighboring systems. As already stated, when MPC is used to design the local controllers, the information exchange usually consists of the predicted control or state variables over the prediction horizon. A first classification of distributed MPC algorithms can be made on the basis of the communication network topology: fully connected algorithms require information transmission from any local regulator to all the others; partially connected algorithms, on the contrary, are based on information transmission from any local regulator to a certain subset of the neighboring ones. In [137] a discussion of the positive and negative sides of partially connected algorithms can be found: generally, it is possible to say that they are suited for weakly coupled subsystems, because in this case a reduction in the information exchange does not greatly affect the performance of the system. A second classification can be made based on the communication protocol: noniterative algorithms are characterized by a single transmission of information within each sampling time, while iterative algorithms allow local controllers to transmit and receive information many times within the sampling time.
Obviously, local control systems working with an iterative transmission protocol can take decisions using a higher amount of information: often, iterations are even used to allow a global consensus on the actions to be taken within the sampling interval [166,167]. On this point, in fact, we can distinguish between independent, or non-cooperating, algorithms, where each local regulator aims to minimize a local performance index, and cooperating algorithms, where each local controller minimizes a global cost function. Several distributed solutions based on MPC have been proposed in the literature.

Figure 1.2: Distributed control scheme of a system constituted by two subsystems.

State-feedback non-cooperating, noniterative algorithms for discrete-time linear systems are presented in [21], while in [43,83] independent, iterative, fully connected methods are described for discrete-time unconstrained linear systems represented by input-output models. An iterative, cooperating method for discrete-time linear systems is shown in [166]: interestingly, a global optimum is reached when the iterative procedure converges, but recursive feasibility and closed-loop stability are anyway guaranteed also if the information exchange is stopped at any intermediate iteration. In [2-4] a partially connected, noniterative, non-cooperating algorithm for discrete-time linear systems is described. Communication failures are taken into consideration, and conditions for a-posteriori stability analysis are given also in case of malfunctions. The control technique proposed in [73] can be classified as partially connected, noniterative and non-cooperating as well. It is suited for nonlinear, discrete-time systems and is based on the idea of considering the effects of the interconnections among the subsystems as disturbances. Each local controller transmits the values of the predicted state trajectories so that the others can predict the disturbances they have to reject. The control inputs are computed using a min-max approach where local cost functions in a worst-case scenario are minimized. Convergence to a set is proved. Another non-cooperating, noniterative, partially connected algorithm, this time for nonlinear continuous-time systems, is presented in [44]. Proofs of feasibility and convergence are based on the theory explained in [109], and

rely on the assumptions that the interactions among subsystems are weak and that the actual input and state trajectories do not differ too much from the predicted values. The latter requirement is imposed with a proper consistency constraint added to the optimization problem. In [50,140] two different non-cooperating, noniterative, partially connected algorithms for linear discrete-time systems are proposed. In both cases, interactions among subsystems are considered as disturbances to be rejected, and to this end the robust MPC based on tubes is exploited. Command governor strategies for distributed supervision and control of large-scale systems are proposed in [23,24,162], where dynamically coupled interconnected linear systems possibly affected by bounded disturbances and subject to local and global constraints are considered. Finally, an experimental implementation of distributed MPC is described in [108].

Part II

Solutions to the regulation problem


2 Distributed Predictive Control

In this chapter the state-feedback Distributed Predictive Control (DPC) algorithm originally proposed in [50] is sketched and extended. The overall system to be controlled is assumed to be composed of a number of interacting subsystems $S_i$ with non-overlapping states, linear dynamics and possible state and control constraints. The dynamics of each subsystem can depend on the state and input variables of the other subsystems, and joint state and control constraints can be considered. A subsystem $S_i$ is said to be a neighbor of subsystem $S_j$ if the state and/or control variables of $S_i$ influence the dynamics of $S_j$, or if a joint constraint on the states and/or on the inputs of $S_i$ and $S_j$ must be fulfilled. DPC has been developed with the following rationale: at each sampling time, the subsystem $S_i$ sends to its neighbors information about its future state $\tilde x^{[i]}$ and input $\tilde u^{[i]}$ reference trajectories, and guarantees that its actual trajectories $x^{[i]}$ and $u^{[i]}$ lie within certain bounds in the neighborhood of the reference ones. Therefore, these reference trajectories are known exogenous variables for the neighboring subsystems, to be suitably compensated, while the differences $x^{[i]} - \tilde x^{[i]}$, $u^{[i]} - \tilde u^{[i]}$ are regarded as unknown bounded disturbances to be rejected. In this way, the control problem is set in the framework of robust MPC, and the tube-based approach inspired by [107] is used to formally state and solve a robust MPC problem for each subsystem. The highlights of DPC are the following. It is not necessary for each subsystem to know the dynamical models governing the other subsystems (not even the ones of its

neighbors), leading to a non-cooperative approach. The transmission of information is limited (DPC is non-iterative and requires a neighbor-to-neighbor, i.e., partially connected, communication network), in that each subsystem needs only to know the reference trajectories of its neighbors. Its rationale is similar to the MPC algorithms often employed in industry: reference trajectories, tailored to the dynamics of the system under control, are used. Convergence and stability properties are guaranteed under mild assumptions. The method can be extended to cope with the output feedback case.

2.1 Statement of the problem and main assumption

Consider a linear, discrete-time system described by the following state-space model:
\[ x_{k+1} = Ax_k + Bu_k \tag{2.1} \]
where $x_k \in X \subseteq \mathbb{R}^n$ is the state and $u_k \in U \subseteq \mathbb{R}^m$ is the input, both subject to constraints. Letting $x_k = (x_k^{[1]}, \dots, x_k^{[M]})$, $u_k = (u_k^{[1]}, \dots, u_k^{[M]})$, system (2.1) can be decomposed into a set of $M$ dynamically coupled non-overlapping subsystems, each one described by the following state-space model:
\[ x_{k+1}^{[i]} = A_{ii}x_k^{[i]} + B_{ii}u_k^{[i]} + \sum_{j=1, j\ne i}^{M} \left( A_{ij}x_k^{[j]} + B_{ij}u_k^{[j]} \right) \tag{2.2} \]
where $x_k^{[i]} \in X_i \subseteq \mathbb{R}^{n_i}$, $n = \sum_{i=1}^M n_i$, and $u_k^{[i]} \in U_i \subseteq \mathbb{R}^{m_i}$, $m = \sum_{i=1}^M m_i$, are the state and input vectors of the $i$-th subsystem $S_i$ ($i = 1, \dots, M$), and the sets $X_i$ and $U_i$ are convex neighborhoods of the origin. The subsystem $S_j$ is said to be a dynamic neighbor of the subsystem $S_i$ if and only if the state or the input of $S_j$ affects the dynamics of $S_i$, i.e., iff $A_{ij} \ne 0$ or $B_{ij} \ne 0$. The symbol $N_i$ denotes the set of dynamic neighbors of $S_i$ (which excludes $i$). Note that the matrices $A$ and $B$ of system (2.1) have block entries $A_{ij}$ and $B_{ij}$ respectively, and that $X = \prod_{i=1}^M X_i$ and $U = \prod_{i=1}^M U_i$ are convex by convexity of $X_i$ and $U_i$, respectively.
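The dynamic-neighbor sets $N_i$ follow directly from the block sparsity of $A$ and $B$, and the decentralized stabilizability conditions of Assumption 2.1 below can be checked numerically for a candidate block-diagonal gain. A minimal sketch (the two scalar subsystems and the gain are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

# Hedged sketch: dynamic neighbors N_i from block sparsity of (A, B), plus
# the Schur checks of Assumption 2.1 for a block-diagonal K. Data illustrative.
A = np.array([[0.5, 0.2],
              [0.0, 0.6]])        # A_12 != 0: S_2 influences S_1
B = np.eye(2)
blocks = [[0], [1]]               # state/input indices of S_1 and S_2

def dynamic_neighbors(A, B, blocks):
    M = len(blocks)
    N = []
    for i in range(M):
        Ni = {j for j in range(M) if j != i
              and (np.any(A[np.ix_(blocks[i], blocks[j])])
                   or np.any(B[np.ix_(blocks[i], blocks[j])]))}
        N.append(Ni)
    return N

def is_schur(M_):
    # All eigenvalues strictly inside the unit circle
    return np.max(np.abs(np.linalg.eigvals(M_))) < 1.0

K = np.diag([-0.2, -0.3])         # block-diagonal candidate gain
N = dynamic_neighbors(A, B, blocks)
print(N)                          # [{1}, set()]: only S_1 has a dynamic neighbor
assert is_schur(A + B @ K)        # condition i): A + BK Schur
assert all(is_schur(A[np.ix_(b, b)] + B[np.ix_(b, b)] @ K[np.ix_(b, b)])
           for b in blocks)       # condition ii): each F_ii Schur
```

Both conditions hold for this example, so the gain would qualify as an auxiliary decentralized stabilizing law.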

The states and inputs of the subsystems can be subject to coupling static constraints, described in collective form by $H_s(x_k, u_k) \le 0$, where $s = 1, \dots, n_c$. $H_s$ is a constraint on $S_i$ if $x^{[i]}$ and/or $u^{[i]}$ are arguments of $H_s$, while $C_i = \{s \in \{1, \dots, n_c\} : H_s \text{ is a constraint on } S_i\}$ denotes the set of constraints on $S_i$. Subsystem $S_j$ is a constraint neighbor of subsystem $S_i$ if there exists $s \in C_i$ such that $x^{[j]}$ and/or $u^{[j]}$ are arguments of $H_s$, while $H_i$ is the set of the constraint neighbors of $S_i$. Finally, for all $s \in C_i$, a function $h_{s,i}$ is defined such that $h_{s,i}(x^{[i]}, u^{[i]}, x, u) = H_s(x, u)$, where $x^{[i]}$ and $u^{[i]}$ are not arguments of the last two entries of $h_{s,i}(\cdot, \cdot, \cdot, \cdot)$. When $X = \mathbb{R}^n$, $U = \mathbb{R}^m$ and $n_c = 0$ the system is unconstrained. In general, $S_j$ is called a neighbor of $S_i$ if $j \in N_i \cup H_i$. In line with these definitions, the communication topology which will be assumed from now on is a neighbor-to-neighbor one. Indeed, we require that information is transmitted from subsystem $S_j$ to subsystem $S_i$ if $S_j$ is a neighbor of $S_i$. The algorithm proposed in this Chapter is based on MPC concepts and aims to solve, in a distributed fashion, the regulation problem for the described network of subsystems, while guaranteeing constraint satisfaction. Towards this aim, the following main assumption on decentralized stabilizability is introduced.

Assumption 2.1 There exists a block-diagonal matrix $K = \mathrm{diag}(K_1, \dots, K_M)$, with $K_i \in \mathbb{R}^{m_i \times n_i}$, $i = 1, \dots, M$, such that:
i) $A + BK$ is Schur;
ii) $F_{ii} = (A_{ii} + B_{ii}K_i)$ is Schur, $i = 1, \dots, M$.

2.2 Description of the approach

In DPC, at any time instant $k$, each subsystem $S_i$ transmits to the subsystems having $S_i$ as neighbor its future state and input reference trajectories (to be defined later) $\tilde x^{[i]}_{k+\nu}$ and $\tilde u^{[i]}_{k+\nu}$, $\nu = 0, \dots, N-1$, respectively, referred to the whole prediction horizon $N$.
Moreover, by adding suitable constraints to its MPC formulation, $S_i$ is able to guarantee that, for all $k \ge 0$, its real trajectories lie in specified time-invariant neighborhoods of the reference trajectories, i.e., $x^{[i]}_k - \tilde x^{[i]}_k \in E_i$

and $u^{[i]}_k - \tilde u^{[i]}_k \in E^u_i$, where $0 \in E_i$ and $0 \in E^u_i$. In this way, the dynamics (2.2) of $S_i$ can be reformulated as
\[ x^{[i]}_{k+1} = A_{ii}x^{[i]}_k + B_{ii}u^{[i]}_k + \sum_{j \in N_i} \left( A_{ij}\tilde x^{[j]}_k + B_{ij}\tilde u^{[j]}_k \right) + w^{[i]}_k \tag{2.3} \]
where
\[ w^{[i]}_k = \sum_{j \in N_i} \left\{ A_{ij}(x^{[j]}_k - \tilde x^{[j]}_k) + B_{ij}(u^{[j]}_k - \tilde u^{[j]}_k) \right\} \in W_i \tag{2.4} \]
and
\[ W_i = \bigoplus_{j \in N_i} \left\{ A_{ij}E_j \oplus B_{ij}E^u_j \right\} \tag{2.5} \]
As already discussed, the main idea behind DPC is that each subsystem solves a robust MPC optimization problem considering that its dynamics is given by (2.3), where the term $\sum_{j \in N_i} (A_{ij}\tilde x^{[j]}_{k+\nu} + B_{ij}\tilde u^{[j]}_{k+\nu})$ can be interpreted as an input known in advance over the prediction horizon $\nu = 0, \dots, N-1$, to be suitably compensated, and $w^{[i]}_k$ is a bounded disturbance to be rejected. By definition, $w^{[i]}_k$ represents the uncertainty about the future actions that will be carried out by the dynamic neighbors of subsystem $S_i$. Therefore the local MPC optimization problem to be solved at each time instant by the controller embedded in subsystem $S_i$ must minimize the cost associated to $S_i$ for any possible uncertainty value, i.e., without having to make any assumption on the strategies adopted by the other subsystems, provided that their future trajectories lie in the specified neighborhoods of the reference ones. Such conservative but robust local strategies adopted by each subsystem can be interpreted, from a dynamic non-cooperative game-theoretic perspective, as max-min strategies, i.e., the strategies that maximize the worst-case utility of $S_i$ (for more details see, e.g., [150]). To solve the local robust MPC problems (denoted i-DPC problems), the algorithm proposed in [107] has been selected, in view of the facts that no burdensome min-max optimization problem is required to be solved on-line, and that it naturally provides the future reference trajectories $\tilde x^{[i]}$ and $\tilde u^{[i]}$, as will be clarified later in this chapter.
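When the sets $E_j$ and $E^u_j$ are boxes, an outer box bound on the disturbance set $W_i$ in (2.5) can be computed by interval arithmetic, since Minkowski sums of boxes add half-widths. A hedged sketch (the coupling matrices and half-widths are illustrative assumptions):

```python
import numpy as np

# Hedged sketch of (2.5): box outer bound on W_i = ⊕_{j∈N_i}(A_ij E_j ⊕ B_ij E_j^u)
# when each E_j, E_j^u is a box {v : |v_l| <= bound_l}. Data illustrative.
A_ij = {2: np.array([[0.1, 0.0],
                     [0.0, 0.2]])}          # S_2 is the only dynamic neighbor
B_ij = {2: np.array([[0.05, 0.0],
                     [0.0,  0.05]])}
e_bounds  = {2: np.array([0.5, 0.5])}       # half-widths of E_2
eu_bounds = {2: np.array([1.0, 1.0])}       # half-widths of E_2^u

def w_box(A_ij, B_ij, e_bounds, eu_bounds):
    # |M| @ halfwidth is the interval hull of M times a box, and half-widths
    # of a Minkowski sum of boxes simply add.
    terms = [np.abs(A_ij[j]) @ e_bounds[j] + np.abs(B_ij[j]) @ eu_bounds[j]
             for j in A_ij]
    return sum(terms)

w_i = w_box(A_ij, B_ij, e_bounds, eu_bounds)
print(w_i)   # componentwise bound on |w_k^[i]|
```

The resulting box can then feed, for instance, the RPI computation of Section 1.3.1 for the local error dynamics.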
According to [107], a nominal model of subsystem $S_i$ associated to equation (2.3) must be defined to compute predictions
\[ \hat x^{[i]}_{k+1} = A_{ii}\hat x^{[i]}_k + B_{ii}\hat u^{[i]}_k + \sum_{j \in N_i} \left( A_{ij}\tilde x^{[j]}_k + B_{ij}\tilde u^{[j]}_k \right) \tag{2.6} \]

while the control law to be used for $S_i$ is
\[ u^{[i]}_k = \hat u^{[i]}_k + K_i(x^{[i]}_k - \hat x^{[i]}_k) \tag{2.7} \]
where $K_i$, $i = 1, \dots, M$, must be chosen to satisfy Assumption 2.1. Letting $z^{[i]}_k = x^{[i]}_k - \hat x^{[i]}_k$, in view of (2.3), (2.6) and (2.7) one has
\[ z^{[i]}_{k+1} = F_{ii}z^{[i]}_k + w^{[i]}_k \tag{2.8} \]
where $w^{[i]}_k \in W_i$. Since $W_i$ is bounded and $F_{ii}$ is Schur, there exists a robust positively invariant (RPI) set $Z_i$ for (2.8) such that, for all $z^{[i]}_k \in Z_i$, then $z^{[i]}_{k+1} \in Z_i$. Given $Z_i$, define two sets, neighborhoods of the origin, $\Delta E_i$ and $\Delta E^u_i$, $i = 1, \dots, M$, such that $\Delta E_i \oplus Z_i \subseteq E_i$ and $\Delta E^u_i \oplus K_iZ_i \subseteq E^u_i$, respectively. Finally, define the function $\hat h_{s,i}$ such that the constraint $\hat h_{s,i}(\hat x^{[i]}, \hat u^{[i]}, \tilde x, \tilde u) \le 0$ guarantees that $h_{s,i}(x^{[i]}, u^{[i]}, x, u) \le 0$ for all $x^{[i]} - \hat x^{[i]} \in Z_i$, $u^{[i]} - \hat u^{[i]} \in K_iZ_i$, $x - \tilde x \in \prod_{i=1}^M E_i$, and $u - \tilde u \in \prod_{i=1}^M E^u_i$.

2.2.1 The online phase: the i-DPC optimization problems

At any time instant $k$, we assume that each subsystem $S_i$ knows the future reference trajectories of its neighbors $\tilde x^{[j]}_{k+\nu}$, $\tilde u^{[j]}_{k+\nu}$, $\nu = 0, \dots, N-1$, $j \in N_i \cup H_i \cup \{i\}$, and, with reference to its nominal system (2.6) only, solves the following i-DPC problem:
\[ \min_{\hat x^{[i]}_k,\, \hat u^{[i]}_{[k:k+N-1]}} V_i^N(\hat x^{[i]}_k, \hat u^{[i]}_{[k:k+N-1]}) = \sum_{\nu=0}^{N-1} \left( \|\hat x^{[i]}_{k+\nu}\|^2_{Q_i^o} + \|\hat u^{[i]}_{k+\nu}\|^2_{R_i^o} \right) + \|\hat x^{[i]}_{k+N}\|^2_{P_i^o} \tag{2.9} \]
subject to (2.6), to
\[ x^{[i]}_k - \hat x^{[i]}_k \in Z_i \tag{2.10} \]
\[ \hat x^{[i]}_{k+\nu} - \tilde x^{[i]}_{k+\nu} \in \Delta E_i \tag{2.11} \]
\[ \hat u^{[i]}_{k+\nu} - \tilde u^{[i]}_{k+\nu} \in \Delta E^u_i \tag{2.12} \]
\[ \hat x^{[i]}_{k+\nu} \in \hat X_i \subseteq X_i \ominus Z_i \tag{2.13} \]
\[ \hat u^{[i]}_{k+\nu} \in \hat U_i \subseteq U_i \ominus K_iZ_i \tag{2.14} \]

for $\nu = 0, \dots, N-1$, to the coupling state constraints
\[ \hat h_{s,i}(\hat x^{[i]}_k, \hat u^{[i]}_k, \tilde x_k, \tilde u_k) \le 0 \tag{2.15} \]
for all $s \in C_i$, and to the terminal constraint
\[ \hat x^{[i]}_{k+N} \in \hat X_i^F \tag{2.16} \]
Note that constraints (2.10), (2.11) and (2.12) are used to guarantee the boundedness of the equivalent disturbance $w^{[i]}_k$. In fact, in the i-DPC problem, for $\nu = 0$, constraints (2.10), (2.11) and (2.12) imply that $x^{[i]}_k - \tilde x^{[i]}_k \in \Delta E_i \oplus Z_i \subseteq E_i$ and $u^{[i]}_k - \tilde u^{[i]}_k \in \Delta E^u_i \oplus K_iZ_i \subseteq E^u_i$, which in turn guarantees that $w^{[i]}_k \in W_i$. This, in view of the invariance property of (2.8), implies that $x^{[i]}_{k+1} - \hat x^{[i]}_{k+1} \in Z_i$ and, since (2.11) and (2.12) are imposed over the whole prediction horizon, it follows by induction that $w^{[i]}_{k+\nu} \in W_i$ for all $\nu = 0, \dots, N-1$ and $x^{[i]}_{k+\nu} - \hat x^{[i]}_{k+\nu} \in Z_i$ for all $\nu = 1, \dots, N$. In (2.9), $Q_i^o$, $R_i^o$ and $P_i^o$ are positive definite matrices and represent design parameters, whose choice is discussed in Section 2.4 to guarantee stability and convergence properties, while $\hat X_i^F$ in (2.16) is a nominal terminal set whose properties will be discussed in the next Section. At time $k$, let the pair $(\hat x^{[i]}_k, \hat u^{[i]}_{[k:k+N-1]})$ be the solution to the i-DPC problem, and define by $\hat u^{[i]}_k$ the input to the nominal system (2.6). Then, according to (2.7), the input to the system (2.2) is
\[ u^{[i]}_k = \hat u^{[i]}_k + K_i(x^{[i]}_k - \hat x^{[i]}_k) \tag{2.17} \]
Denoting by $\hat x^{[i]}_{k+\nu}$ the state trajectory of system (2.6) stemming from $\hat x^{[i]}_k$ and $\hat u^{[i]}_{k:k+N-1}$, at time $k$ it is also possible to compute $\hat x^{[i]}_{k+N}$ and $K_i\hat x^{[i]}_{k+N}$. In DPC, these values incrementally define the trajectories of the reference state and input variables to be used at the next time instant $k+1$, that is
\[ \tilde x^{[i]}_{k+N} = \hat x^{[i]}_{k+N}, \qquad \tilde u^{[i]}_{k+N} = K_i\hat x^{[i]}_{k+N} \tag{2.18} \]
Note that the only information to be transmitted consists in the reference trajectory update (2.18).
More specifically, at time step k, subsystem S_i computes x̃^{[i]}_{k+N} and ũ^{[i]}_{k+N} according to (2.18) and transmits their values to all the subsystems having S_i as neighbor, before proceeding to the next time step.
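To make the role of the RPI set in (2.8) concrete, the following sketch simulates the error dynamics z_{k+1} = F_{ii} z_k + w_k for a single scalar subsystem and checks that the error never leaves Z_i. All the numbers here (F_{ii} = 0.5, W_i = [−1, 1], hence Z_i = [−2, 2]) are illustrative assumptions, not values from the thesis:

```python
import random

F_ii = 0.5                           # Schur: |F_ii| < 1 (assumed scalar example)
w_bound = 1.0                        # W_i = [-w_bound, w_bound]
z_bound = w_bound / (1.0 - F_ii)     # Z_i = [-2, 2] is RPI: F_ii*Z_i + W_i = Z_i

random.seed(0)
z = 1.5                              # any initial error inside Z_i
for k in range(1000):
    w = random.uniform(-w_bound, w_bound)   # unknown but bounded disturbance
    z = F_ii * z + w                         # error dynamics (2.8)
    assert abs(z) <= z_bound                 # invariance: z never leaves Z_i
```

Because Z_i is robustly invariant, the control law (2.7) keeps the real state inside a tube of fixed cross-section Z_i around the nominal trajectory, whatever the bounded disturbance does.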

2.3 Convergence results and properties of DPC

In order to state the main theoretical contribution of the paper, define the set of admissible initial conditions x_0 = (x^{[1]}_0, ..., x^{[M]}_0) and initial reference trajectories x̃^{[j]}_{[0:N−1]}, ũ^{[j]}_{[0:N−1]}, for all j = 1, ..., M, as follows.

Definition 2.1 Letting x = (x^{[1]}, ..., x^{[M]}), denote by

X^N := {x : if x^{[i]}_0 = x^{[i]} for all i = 1, ..., M, then there exist (x̃^{[1]}_{[0:N−1]}, ..., x̃^{[M]}_{[0:N−1]}), (ũ^{[1]}_{[0:N−1]}, ..., ũ^{[M]}_{[0:N−1]}), (x̂^{[1]}_{0/0}, ..., x̂^{[M]}_{0/0}), (û^{[1]}_{[0:N−1]}, ..., û^{[M]}_{[0:N−1]}) such that (2.2) and (2.10)–(2.16) are satisfied for all i = 1, ..., M}

the feasibility region for all the i-DPC problems. Moreover, for each x ∈ X^N, let

X̃_x := {(x̃^{[1]}_{[0:N−1]}, ..., x̃^{[M]}_{[0:N−1]}), (ũ^{[1]}_{[0:N−1]}, ..., ũ^{[M]}_{[0:N−1]}) : if x^{[i]}_0 = x^{[i]} for all i = 1, ..., M, then there exist (x̂^{[1]}_{0/0}, ..., x̂^{[M]}_{0/0}), (û^{[1]}_{[0:N−1]}, ..., û^{[M]}_{[0:N−1]}) such that (2.2) and (2.10)–(2.16) are satisfied for all i = 1, ..., M}

be the region of feasible initial reference trajectories.

Assumption 2.2 Letting X̂ = ∏_{i=1}^M X̂_i, Û = ∏_{i=1}^M Û_i and X̂_F = ∏_{i=1}^M X̂^F_i, it holds that:

i) Ĥ^{[i]}_s(x̂) ≤ 0 for all x̂ ∈ X̂_F, for all s ∈ C_i, for all i = 1, ..., M, where Ĥ^{[i]}_s is such that Ĥ^{[i]}_s(x̂) = ĥ_{s,i}(x̂^{[i]}, K^{aux}_i x̂^{[i]}, x̂, K^{aux} x̂) for all s ∈ C_i, for all i = 1, ..., M;

ii) X̂_F ⊆ X̂ is an invariant set for x̂⁺ = (A + B K^{aux}) x̂;

iii) û = K^{aux} x̂ ∈ Û for any x̂ ∈ X̂_F;

iv) for all x̂ ∈ X̂_F and for a given constant κ > 0,

V_F(x̂⁺) − V_F(x̂) ≤ −(1 + κ) l(x̂, û)   (2.19)

where V_F(x̂) = Σ_{i=1}^M ‖x̂^{[i]}‖²_{P^o_i} and l(x̂, û) = Σ_{i=1}^M ( ‖x̂^{[i]}‖²_{Q^o_i} + ‖û^{[i]}‖²_{R^o_i} ).
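The decrease condition (2.19) can be verified numerically: if F = A + B K^{aux} is Schur and P solves the Lyapunov equation F^T P F − P = −(1 + κ)(Q + K^T R K), then V_F(x̂) = x̂^T P x̂ satisfies (2.19) with equality along x̂⁺ = F x̂. A minimal sketch with hypothetical matrices (not the thesis's data), solving the Lyapunov equation by summing its convergent series:

```python
import numpy as np

# Hypothetical data: a Schur closed loop F = A + B K and weights Q, R
A = np.array([[1.0, 0.2], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-0.8, -1.2]])
F = A + B @ K
assert max(abs(np.linalg.eigvals(F))) < 1.0     # F is Schur

Q = np.eye(2)
R = np.eye(1)
kappa = 0.1
Qtot = (1 + kappa) * (Q + K.T @ R @ K)

# Solve F' P F - P = -Qtot via the convergent series P = sum_k F'^k Qtot F^k
P = np.zeros((2, 2))
T = Qtot.copy()
for _ in range(500):
    P += T
    T = F.T @ T @ F

# Check (2.19): V_F(F x) - V_F(x) <= -(1+kappa) l(x, Kx) on random states
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(2)
    u = K @ x
    lhs = (F @ x) @ P @ (F @ x) - x @ P @ x
    rhs = -(1 + kappa) * (x @ Q @ x + u @ R @ u)
    assert lhs <= rhs + 1e-6
```

This is the mechanism Section 2.4 exploits: P^o is fixed first, and Q^o, R^o are then shrunk until the matrix inequality underlying (2.19) holds.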

In Section 2.4 we will show how to properly select the matrices Q^o_i, R^o_i, and P^o_i to guarantee stability and convergence properties. It will also be explained how to choose the terminal sets X̂^F_i in order to satisfy Assumption 2.2.

Assumption 2.3 Given the sets E_i, E^u_i, and the RPI sets Z_i for equation (2.8), there exists a real positive constant ρ_E > 0 such that Z_i ⊕ B^{(n_i)}_{ρ_E}(0) ⊆ E_i and K_i Z_i ⊕ B^{(m_i)}_{ρ_E}(0) ⊆ E^u_i for all i = 1, ..., M, where B^{(dim)}_{ρ_E}(0) is a ball of radius ρ_E > 0 centered at the origin in the R^{dim} space.

Proper ways to select the design parameters satisfying Assumption 2.3 are presented in the following section.

Theorem 2.1 Let Assumptions 2.1–2.3 be satisfied and let Ē_i and Ē^u_i be neighborhoods of the origin satisfying E_i ⊕ Z_i ⊆ Ē_i and E^u_i ⊕ K_i Z_i ⊆ Ē^u_i. Then, for any initial reference trajectories in X̃_{x_0}, the trajectory x_k, starting from any initial condition x_0 ∈ X^N, asymptotically converges to the origin while fulfilling all the constraints.

Proofs can be found in the Appendix.

Properties of DPC

Some further properties of DPC are in order.

Optimality issues. Global optimality of the interconnected closed-loop system cannot be guaranteed using DPC. This, on the one hand, is due to the inherent conservativeness of robust algorithms and, on the other hand, is due to the game-theoretic characterization of DPC. Namely, the provided solution to the control problem can be cast as a max-min solution of a dynamic non-cooperative game (see, e.g., [150]) where all the involved subsystems aim to optimize local cost functions which are different from each other: therefore, different and possibly conflicting goals inevitably imply suboptimality.
Differently from the suboptimal distributed MPC algorithms discussed in [136], whose solutions can be regarded as Nash solutions of non-cooperative games and which possibly lead to instability of the closed-loop system, the convergence of the DPC algorithm can be guaranteed, see Theorem 2.1.

Robustness. As already discussed, the algorithm presented in this chapter basically relies on the tube-based robust MPC algorithm proposed in [107]. Namely, robustness is here used to cope with uncertainties on the input and state trajectories of the neighboring subsystems.

More specifically, the difference between the reference trajectories of the neighboring subsystems and the real ones is regarded as a disturbance, which is known to be bounded in view of suitable constraints imposed in the optimization problem. Anyway, the described approach can be naturally extended to cope also with standard additive disturbances in the interconnected models (2.2), i.e., in case the interconnected perturbed systems are described by the dynamic equations

x^{[i]}_{k+1} = A_{ii} x^{[i]}_k + B_{ii} u^{[i]}_k + Σ_{j∈N_i} {A_{ij} x^{[j]}_k + B_{ij} u^{[j]}_k} + d^{[i]}_k   (2.20)

where d^{[i]}_k ∈ D_i ⊂ R^{n_i} is an unknown bounded disturbance and the set D_i is a convex neighborhood of the origin. In this case, we have again

x^{[i]}_{k+1} = A_{ii} x^{[i]}_k + B_{ii} u^{[i]}_k + Σ_{j∈N_i} (A_{ij} x̃^{[j]}_k + B_{ij} ũ^{[j]}_k) + w^{[i]}_k   (2.21)

where this time the disturbance acting on subsystem i is

w^{[i]}_k = Σ_{j∈N_i} {A_{ij} (x^{[j]}_k − x̃^{[j]}_k) + B_{ij} (u^{[j]}_k − ũ^{[j]}_k)} + d^{[i]}_k ∈ W_i   (2.22)

and

W_i = ⊕_{j∈N_i} {A_{ij} Ē_j ⊕ B_{ij} Ē^u_j} ⊕ D_i   (2.23)

The overall disturbed collective system, letting d_k = (d^{[1]}_k, ..., d^{[M]}_k) with d_k ∈ D = ∏_{i=1}^M D_i ⊂ R^n, can be written as

x_{k+1} = A x_k + B u_k + d_k   (2.24)

and can be steered to a neighborhood of the origin.

Output feedback. The approach that has been previously described for coping with unknown exogenous additive disturbances has been employed in [49] for designing a DPC algorithm for output feedback control. Specifically, assume that the input and output equations of the system are the following

x^{o[i]}_{k+1} = A_{ii} x^{o[i]}_k + B_{ii} u^{[i]}_k + Σ_{j∈N_i} {A_{ij} x^{o[j]}_k + B_{ij} u^{[j]}_k}
y^{[i]}_k = C_i x^{o[i]}_k   (2.25)

where the state, which is not directly available, is here denoted as x^{o[i]} for reasons that will become clearer later on. Denote with x^{[i]}_k the

estimate of x^{o[i]}_k, for all i = 1, ..., M. To estimate the state of (2.25) we employ a decentralized Luenberger-like observer of the type

x^{[i]}_{k+1} = A_{ii} x^{[i]}_k + B_{ii} u^{[i]}_k + Σ_{j∈N_i} {A_{ij} x^{[j]}_k + B_{ij} u^{[j]}_k} − L_i (y^{[i]}_k − C_i x^{[i]}_k)   (2.26)

Assume that the decentralized observer is convergent, i.e., that A + LC is Schur, where C = diag(C_1, ..., C_M) and L = diag(L_1, ..., L_M). Under this assumption it is possible to guarantee that the estimation error for each subsystem is bounded, i.e., x^{o[i]}_k − x^{[i]}_k ∈ Σ_i for all i = 1, ..., M. In this way (2.26) exactly corresponds to the perturbed system (2.21), where d^{[i]}_k = L_i C_i (x^{o[i]}_k − x^{[i]}_k) is regarded as a bounded disturbance, i.e., d^{[i]}_k ∈ D_i = L_i C_i Σ_i. From this point on, the output feedback control problem is solved as a robust state feedback problem applied to the system (2.21). Details on this approach can be found in [49], where a condition and a constructive method are derived to compute the sets Σ_i in such a way that Σ = ∏_{i=1}^M Σ_i is an invariant set for the interconnected observer error.

Tracking. As it will be shown in the next chapters, the DPC method can be extended also to the problem of tracking desired output signals.

2.4 Implementation issues

The discretization method

In most control applications, the model of the plant is developed in continuous time starting from physical laws. In this framework, the sparse structure of the model clearly represents physical connections (such as mass or energy flows) among the subsystems, each one described by the linear (or linearized) model

ẋ^{[i]}(t) = A^c_{ii} x^{[i]}(t) + B^c_{ii} u^{[i]}(t) + Σ_{j∈N_i} {A^c_{ij} x^{[j]}(t) + B^c_{ij} u^{[j]}(t)}   (2.27)

where the notation is coherent (mutatis mutandis) with the one adopted in (2.2).
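A minimal sketch of the error dynamics behind the decentralized observer (2.26), for two coupled scalar subsystems with C_i = 1; all numbers are hypothetical. With block-diagonal gains L_i chosen so that A + LC is Schur, the collective estimation error e_k = x^o_k − x_k obeys e_{k+1} = (A + LC) e_k and converges:

```python
import numpy as np

# Hypothetical interconnected system: two coupled scalar subsystems, C_i = 1
A = np.array([[0.5, 0.1],
              [0.1, 0.5]])
C = np.eye(2)                       # C = diag(C_1, C_2)
L = np.diag([-0.3, -0.3])           # block-diagonal (decentralized) observer gain
ALC = A + L @ C
assert max(abs(np.linalg.eigvals(ALC))) < 1.0   # observer convergence condition

# Error dynamics e+ = (A + LC) e: simulate and check convergence
e = np.array([5.0, -3.0])
for k in range(60):
    e = ALC @ e
assert np.linalg.norm(e) < 1e-8
```

With measurement noise or couplings treated as disturbances, the same recursion keeps e_k in a bounded set Σ, which is exactly what yields the bounded term d^{[i]}_k = L_i C_i e^{[i]}_k above.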
Unfortunately, the sparse zero-nonzero pattern of the system (zero blocks in the matrices A^c_{ij}, B^c_{ij}) is lost when the exact ZOH (Zero-Order Hold), Backward Euler, or bilinear transformations are used to discretize the system, while it is preserved only by the Forward Euler (FE) transformation. However, it is well known that with FE some

important properties of the underlying continuous-time system can be lost; for example, stability is maintained only for very small sampling times, which can be inadvisable in many digital control applications. In distributed and decentralized control techniques based on MPC, where discrete-time models are mainly utilized, the loss of sparsity can easily result in an increase of the controller complexity. For these reasons, in order to improve the performance of FE and to maintain sparsity, a new discretization method called Mixed Euler-ZOH (ME-ZOH) has been proposed in [33, 34], and its properties have been studied in [48]. In synthesis, ME-ZOH allows one to compute the matrices of (2.2) starting from the continuous-time model (2.27) as follows

A_{ii}(h) = e^{A^c_{ii} h}   (2.28)
A_{ij}(h) = ∫_0^h e^{A^c_{ii} t} dt · A^c_{ij},  j ≠ i   (2.29)
B_{ij}(h) = ∫_0^h e^{A^c_{ii} t} dt · B^c_{ij},  ∀ i, j   (2.30)

where h is the adopted sampling time. It is apparent that the zero-nonzero structure of the matrices A^c_{ij} and B^c_{ij} is maintained, together with some important properties, see [48].

Computation of the state-feedback gain and of the weighting matrices

The design of the block-diagonal matrix K satisfying Assumption 2.1 and the computation of the positive-definite block-diagonal matrix P^o = diag(P^o_1, ..., P^o_M), P^o_i ∈ R^{n_i×n_i}, can be done resorting to Linear Matrix Inequalities (LMIs) [18], see the Appendix and [12] for additional details.

Algorithm 2.1 Computation of the state-feedback gain and of the weighting matrices - Method 1

1. Define K = Y S^{−1} and P^o = S^{−1}, and solve for Y and S the following LMI

[ S            S A^T + Y^T B^T ]
[ A S + B Y    S               ] ≻ 0   (2.31)

with the additional constraints

S_{ij} = 0,  ∀ i, j = 1, ..., M (i ≠ j)   (2.32)
Y_{ij} = 0,  ∀ i, j = 1, ..., M (i ≠ j)   (2.33)

where S_{ij} ∈ R^{n_i×n_j} and Y_{ij} ∈ R^{m_i×n_j} are the blocks outside the diagonal of S and Y, respectively. Finally, denoting by S_{ii} and Y_{ii} the diagonal blocks of S and Y, respectively, the requirement that each block K_i must be stabilizing for the i-th subsystem (recall again Assumption 2.1) translates into the following set of conditions

[ S_{ii}                           S_{ii} A_{ii}^T + Y_{ii}^T B_{ii}^T ]
[ A_{ii} S_{ii} + B_{ii} Y_{ii}    S_{ii}                              ] ≻ 0   (2.34)

In conclusion, the computation of K and P^o calls for the solution of the set of LMIs (2.77), (2.78), (2.79) and (2.80), which can be easily solved with suitable available software (e.g., YALMIP [92]).

2. Once K and P^o are available, the parameters Q^o_i and R^o_i must be chosen to satisfy (2.19). To this end, define Q* = P^o − (A + BK)^T P^o (A + BK), choose an arbitrarily small positive constant κ and two block-diagonal matrices Q̄ = diag(Q̄_1, ..., Q̄_M) and R̄ = diag(R̄_1, ..., R̄_M), with Q̄_i ≻ 0, Q̄_i ∈ R^{n_i×n_i}, and R̄_i ≻ 0, R̄_i ∈ R^{m_i×m_i}. Then proceed as follows: if

Q* − (Q̄ + K^T R̄ K)(1 + κ) ⪰ 0   (2.35)

set Q^o = Q̄ and R^o = R̄; otherwise set Q̄ = η Q̄ and R̄ = η R̄, with 0 < η < 1, and repeat the procedure until (2.35) is fulfilled. Once Q̄ and R̄ satisfying (2.35) have been found, set Q^o = Q̄ and R^o = R̄. Finally, extract from Q^o and R^o the submatrices Q^o_i and R^o_i of appropriate dimensions.

A second possibility is discussed in [50], and requires that a set of control gains K_i, i = 1, ..., M, verifying Assumption 2.1 is given.
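Before moving on, note that the ME-ZOH map (2.28)-(2.30) only requires, per subsystem, the exponential e^{A^c_{ii} h} and the integral ∫_0^h e^{A^c_{ii} t} dt. A minimal numpy sketch (function names are illustrative; both quantities are computed by a truncated Taylor series, adequate for the small ‖A^c_{ii} h‖ typical of short sampling times):

```python
import numpy as np

def _phi(A, h, terms=40):
    """Return (e^{A h}, int_0^h e^{A t} dt) via truncated Taylor series."""
    n = A.shape[0]
    E = np.eye(n)        # accumulates e^{Ah} = sum_k (Ah)^k / k!
    S = h * np.eye(n)    # accumulates int = sum_k A^k h^{k+1} / (k+1)!
    T = np.eye(n)        # current term A^k h^k / k!
    for k in range(1, terms + 1):
        T = T @ A * (h / k)
        E = E + T
        S = S + T * (h / (k + 1))
    return E, S

def me_zoh(Ac, Bc, h, dims):
    """ME-ZOH discretization (2.28)-(2.30): diagonal state blocks are
    discretized exactly, off-diagonal state blocks and all input blocks
    are mapped through int_0^h e^{A_ii t} dt, so the zero-nonzero
    sparsity of (Ac, Bc) is preserved. `dims` lists each n_i."""
    idx = np.cumsum([0] + list(dims))
    A = np.zeros_like(Ac)
    B = np.zeros_like(Bc)
    for i in range(len(dims)):
        r = slice(idx[i], idx[i + 1])
        E, S = _phi(Ac[r, r], h)
        A[r, :] = S @ Ac[r, :]   # (2.29): off-diagonal blocks
        A[r, r] = E              # (2.28): exact diagonal block
        B[r, :] = S @ Bc[r, :]   # (2.30): all input blocks
    return A, B
```

For a single scalar subsystem with A^c = −a this reproduces the familiar pair (e^{−a h}, (1 − e^{−a h})/a), and zero off-diagonal blocks of A^c stay exactly zero after discretization.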

Algorithm 2.2 Computation of the weighting matrices - Method 2

1. Define F_{ij} = A_{ij} + B_{ij} K_j, i, j = 1, ..., M, and let ν_i denote the number of dynamic neighbors of subsystem i plus 1. It is well known that, if √ν_i F_{ii} is Schur, then for any Q̄_i = Q̄_i^T ≻ 0 there exists a matrix P_i = P_i^T ≻ 0 satisfying

ν_i F_{ii}^T P_i F_{ii} − P_i = −Q̄_i   (2.36)

Define the matrix M_Q ∈ R^{M×M} with entries μ^Q_{ij}

μ^Q_{ii} = −λ_m(Q̄_i),  i = 1, ..., M   (2.37a)
μ^Q_{ij} = ν_j ‖F_{ji}^T P_j F_{ji}‖_2,  i, j = 1, ..., M with i ≠ j   (2.37b)

2. If M_Q is Hurwitz, define the values of p_i, i = 1, ..., M, as the entries of the strictly positive vector p satisfying M_Q p < 0.

3. Set P^o_i = p_i P_i.

4. Q^o_i ≻ 0, R^o_i ≻ 0, and κ_i are chosen in such a way that

rank([B_{ii}^T  R_i^{oT}]^T) = m_i   (2.38)
(1 + κ_i)(Q^o_i + K_i^T R^o_i K_i) ⪯ Q̃_i   (2.39)

where

Q̃_i = p_i Q̄_i − Σ_{j=1}^M p_j ν_j F_{ji}^T P_j F_{ji}   (2.40)

5. Set κ = min(κ_1, ..., κ_M).

Computation of the RPI sets and of the terminal sets

Two of the main issues in DPC are to verify, for all i = 1, ..., M, that i) E_i ⊕ Z_i ⊆ Ē_i, E^u_i ⊕ K_i Z_i ⊆ Ē^u_i, Z_i ⊆ X_i and K_i Z_i ⊆ U_i, and ii) X̂^F_i ⊆ X̂_i and K_i X̂^F_i ⊆ Û_i. Concerning i), remember that Z_i is the RPI set for equation (2.8), where the disturbance term w_i lies in the set W_i, which depends on the sets Z_j, j ∈ N_i. For this reason, the problem cannot be tackled by considering each subsystem separately. In this section we propose some alternative solutions to i).

Furthermore, to verify ii) we simply set X̂^F_i = α Z_i for all i = 1, ..., M, for an arbitrary and sufficiently small α ∈ (0, 1). Finally, remark that an algorithm for obtaining a polytopic invariant outer approximation of the minimal RPI set has been presented in Chapter 1 [131, 132].

The first technique to compute the RPI sets Z_i is based on an empirical, simplified, distributed reachability-analysis procedure, which has obtained remarkable results in several applications. We will use rectangular sets (i.e., through the box operation) to greatly simplify the set-theoretic computations (e.g., the Minkowski sums), at the price of slightly more conservative results. Anyway, the same procedure could be performed without resorting to the box operation, provided that the number of states of the submodels is small enough.

Algorithm 2.3 Computation of the RPI sets - Method 1

1. For all i = 1, ..., M, arbitrarily choose hyperrectangles E_i and E^u_i.

2. Initialize Z_i = ⊕_{j∈N_i} {box(A_{ij} E_j) ⊕ box(B_{ij} E^u_j)} for all i = 1, ..., M.

3. For all i = 1, ..., M, compute Z^+_i = box(F_{ii} Z_i) ⊕ {⊕_{j∈N_i} {box(A_{ij} Z_j) ⊕ box(B_{ij} K_j Z_j)}} ⊕ {⊕_{j∈N_i} {box(A_{ij} E_j) ⊕ box(B_{ij} E^u_j)}}.

4. If Z^+_i ⊆ Z_i for all i = 1, ..., M, then go to step 5: by definition, the hyperrectangles Z_i actually correspond to the required RPI sets. Otherwise set Z_i = Z^+_i and repeat step 3.

5. If Z_i ⊆ X_i and K_i Z_i ⊆ U_i then stop. Otherwise set E_i = γ E_i, E^u_i = γ E^u_i, with γ ∈ (0, 1), and go to step 2.

A second possibility for computing the RPI sets Z_i consists in solving a linear programming (LP) problem. For all i = 1, ..., M, we define the sets E_i, E^u_i, Ē_i and Ē^u_i as hypercubes, centered at the origin, with faces perpendicular to the Cartesian axes, and the scalars e_i = ‖E_i‖_∞, e^u_i = ‖E^u_i‖_∞, ē_i = ‖Ē_i‖_∞ and ē^u_i = ‖Ē^u_i‖_∞ corresponding to a half of the edge of each hypercube.
Define also x̄_i and ū_i as the infinity norms (i.e., a half of the edges) of the biggest hypercubes, centered at the origin, inscribed inside X_i and U_i, respectively. If we define w̄_i = Σ_{j∈N_i} { ‖A_{ij}‖_∞ ē_j + ‖B_{ij}‖_∞ ē^u_j }, using the properties of the norm operators it is possible to state that w̄_i ≥ ‖W_i‖_∞,

where W_i is the set containing the real disturbance affecting subsystem i and ‖W_i‖_∞ corresponds to a half of the edge of the smallest hypercube, centered at the origin and with faces perpendicular to the Cartesian axes, containing W_i. To compute the RPI set Z_i for (2.8) (see [131, 132]) we use the hypercube W̄_i having infinity norm w̄_i, i.e., Z_i = (1 − α_i)^{−1} ⊕_{l=0}^{s_i−1} F_{ii}^l W̄_i, where s_i and α_i ∈ [0, 1) must fulfill F_{ii}^{s_i} W̄_i ⊆ α_i W̄_i. The latter is verified if ‖F_{ii}‖_∞^{s_i} ≤ α_i, in view of the properties of the norm operator and of the hypercubes. In addition, remark that Z_i is contained inside the hypercube having infinity norm γ_i w̄_i, where γ_i = (1 − α_i)^{−1} Σ_{l=0}^{s_i−1} ‖F_{ii}‖_∞^l. These considerations suggest the following procedure for computing Z_i.

Algorithm 2.4 Computation of the RPI sets - Method 2

1. For all i = 1, ..., M, arbitrarily choose the parameters α_i.

2. For all i = 1, ..., M, compute s_i such that ‖F_{ii}‖_∞^{s_i} ≤ α_i and then evaluate γ_i = (1 − α_i)^{−1} Σ_{l=0}^{s_i−1} ‖F_{ii}‖_∞^l.

3. Solve the following linear programming problem

min_{o_v} ρ   (2.41)

subject to

ρ ≥ γ_i w̄_i,  i = 1, ..., M   (2.42)
γ_i w̄_i + e_i ≤ ē_i,  i = 1, ..., M   (2.43)
‖K_i‖_∞ γ_i w̄_i + e^u_i ≤ ē^u_i,  i = 1, ..., M   (2.44)
γ_i w̄_i ≤ x̄_i,  i = 1, ..., M   (2.45)
‖K_i‖_∞ γ_i w̄_i ≤ ū_i,  i = 1, ..., M   (2.46)
e_i ≥ ε_i,  i = 1, ..., M   (2.47)
e^u_i ≥ ε^u_i,  i = 1, ..., M   (2.48)

where o_v = (e_1, ē_1, e^u_1, ē^u_1, ..., e_M, ē_M, e^u_M, ē^u_M) ∈ R^{4M} contains only strictly positive elements, and ε_i and ε^u_i are arbitrary positive parameters to be used in order to have sets E_i and E^u_i bigger than a prescribed size.

4. Compute Z_i = (1 − α_i)^{−1} ⊕_{l=0}^{s_i−1} F_{ii}^l W̄_i.

In the proposed optimization problem, the objective function, combined with constraints (2.42), aims at minimizing the largest RPI set.
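For axis-aligned boxes centered at the origin, the set operations used in Algorithms 2.3 and 2.4 reduce to arithmetic on half-width vectors: box(A E) has half-widths |A| e, and a Minkowski sum of such boxes adds half-widths. A toy sketch of the fixed-point iteration of Algorithm 2.3 for two hypothetical scalar subsystems (F_11 = F_22 = 0.3, couplings A_12 = A_21 = 0.1, B block-diagonal, so the B off-diagonal terms vanish; all values are assumptions for illustration):

```python
import numpy as np

def box_image(A, e):
    """Half-widths of box(A * E) for a box E with half-width vector e."""
    return np.abs(A) @ e

# Hypothetical 2-subsystem example (all blocks scalar)
A12 = A21 = np.array([[0.1]])            # coupling blocks
e = np.array([1.0, 1.0])                 # half-widths of E_1, E_2

# Initialization (step 2): Z_i = box(A_ij E_j)
z = np.array([box_image(A12, e[1:])[0], box_image(A21, e[:1])[0]])

# Iteration (steps 3-4): Z_i+ = box(F_ii Z_i) + box(A_ij Z_j) + box(A_ij E_j)
for _ in range(200):
    z_new = np.array([
        0.3 * z[0] + 0.1 * z[1] + 0.1 * e[1],
        0.3 * z[1] + 0.1 * z[0] + 0.1 * e[0],
    ])
    if np.all(z_new <= z + 1e-12):       # Z_i+ contained in Z_i: RPI found
        break
    z = z_new
```

The iterates increase monotonically toward the fixed point z = 0.3 z + 0.1 z + 0.1, i.e., z = 1/6, which gives the half-widths of the rectangular RPI sets Z_1 = Z_2 = [−1/6, 1/6].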

Constraints (2.45) and (2.46) guarantee the existence of the sets X̂_i and Û_i for all the subsystems. Lastly, constraints (2.43) and (2.44), if the LP problem turns out to be feasible, allow one to find the hypercubes E_i, Ē_i, E^u_i and Ē^u_i such that E_i ⊕ Z_i ⊆ Ē_i and E^u_i ⊕ K_i Z_i ⊆ Ē^u_i.

Note that these first two methods, with trivial modifications, can also be applied to systems where the subsystems are affected by an exogenous disturbance d^{[i]}_k ∈ D_i ⊂ R^{n_i} and have a dynamics described by equation (2.20). Specifically, Algorithm 2.3 has to be modified by initializing Z_i = ⊕_{j∈N_i} {box(A_{ij} E_j) ⊕ box(B_{ij} E^u_j)} ⊕ box(D_i) and computing Z^+_i = box(F_{ii} Z_i) ⊕ {⊕_{j∈N_i} {box(A_{ij} Z_j) ⊕ box(B_{ij} K_j Z_j)}} ⊕ {⊕_{j∈N_i} {box(A_{ij} E_j) ⊕ box(B_{ij} E^u_j)}} ⊕ box(D_i) for all i = 1, ..., M. As for Algorithm 2.4, it is sufficient to define w̄_i = Σ_{j∈N_i} { ‖A_{ij}‖_∞ ē_j + ‖B_{ij}‖_∞ ē^u_j } + d̄_i, where d̄_i = ‖D_i‖_∞ is equal to a half of the edge of the smallest hypercube, with faces perpendicular to the Cartesian axes, containing the set D_i.

A last option is the following [50].

Algorithm 2.5 Computation of the RPI sets - Method 3

1. Assume that E_i can be equivalently represented in one of the following two ways:

E_i = {ε_i ∈ R^{n_i} : ε_i = Ξ_i δ_i, where ‖δ_i‖_∞ ≤ l_i} = {ε_i ∈ R^{n_i} : f_{i,r}^T ε_i ≤ l_i for all r}   (2.49)

where δ_i ∈ R^{n_{δ_i}}, Ξ_i ∈ R^{n_i×n_{δ_i}}, f_{i,r} ∈ R^{n_i}, and r = 1, ..., r̄_i, for all i = 1, ..., M. The constants l_i ∈ R_+, appearing in both equivalent definitions, can be regarded as scaling factors. Define the shape of the polyhedra with a proper setting of the matrices Ξ_i and vectors f_{i,r}, i = 1, ..., M.

2. Assuming that F_{ii} is diagonalizable for all i = 1, ..., M (which is always possible since the K_i's are design parameters), define N_i, i = 1, ..., M, such that F_{ii} = N_i^{−1} Λ_i N_i, where Λ_i = diag(λ_{i,1}, ..., λ_{i,n_i}) and λ_{i,j} is the j-th eigenvalue of F_{ii}.

Define also

f_i = [f_{i,1}^T ; ... ; f_{i,r̄_i}^T],  i = 1, ..., M.

Then, compute the matrix M_P ∈ R^{M×M} whose entries μ^P_{ij} are

μ^P_{ii} = −1,  i = 1, ..., M   (2.50a)
μ^P_{ij} = ‖f_i‖ ‖N_i^{−1}‖_1 ( ‖N_i A_{ij} Ξ_j‖_1 + ‖N_i B_{ij} K_j Ξ_j‖_1 ) / (1 − max_{j=1,...,n_i} |λ_{i,j}|),  i, j = 1, ..., M with i ≠ j   (2.50b)

3. If M_P is Hurwitz, define the values of l_i, i = 1, ..., M, as the entries of the strictly positive vector l satisfying M_P l < 0. For its computation use the following procedure:

If the system is irreducible [47], l is the Frobenius eigenvector of the matrix M_P.

If the system is reducible:

a) Since the system is reducible, there exists a permutation matrix H (where H^T = H^{−1}) such that M̄_P = H M_P H^T is lower block triangular, with block elements M̄_{ij}, whose diagonal blocks M̄_{ii} are irreducible. Let ω_i (strictly positive element-wise) be the Frobenius eigenvector of M̄_{ii}, associated to the eigenvalue λ_i < 0.

b) Set α_1 = 1 and define recursively α_i, for i > 1, such that α_i |λ_i| ω_i > Σ_{j=1}^{i−1} α_j M̄_{ij} ω_j element-wise.

c) Define ω = [α_1 ω_1^T, ..., α_M ω_M^T]^T and l = H^T ω. From the definition of the α_i's, it follows that M̄_P ω < 0, and so M_P l = H^T H M_P H^T H l = H^T M̄_P ω < 0.

4. Compute E^u_i = K_i E_i, for all i = 1, ..., M.

5. Compute W_i = ⊕_{j∈N_i} {A_{ij} E_j ⊕ B_{ij} E^u_j}, for all i = 1, ..., M.

6. For all i = 1, ..., M, compute Z_i as the polytopic RPI outer δ-approximation of the minimal RPI (mRPI) set for (2.8), as shown in [132].

7. For all i = 1, ..., M, the sets Ē_i can be taken as any polytope satisfying E_i ⊕ Z_i ⊆ Ē_i, and finally Ē^u_i = K_i Ē_i.

If M_P defined in (2.50) is not Hurwitz, the previous algorithm cannot be applied. We remark that the algorithms presented for computing the sets only provide sufficient conditions, meaning that other criteria and algorithms can be devised and adopted, which can be even more efficient, especially when applied to specific case studies. In addition, the effectiveness of the proposed methods strongly depends on some arbitrary initial choices, i.e., in Algorithm 2.5 the matrices Ξ_i (or equivalently the vectors f_i), i = 1, ..., M, defining the shape of the sets E_i, and in Algorithm 2.3 the shapes and dimensions of the sets E_i and E^u_i, i = 1, ..., M. If the selected method turns out to be inapplicable for a given choice, a trial-and-error procedure is suggested, in order to find a suitable initial choice (which is nevertheless not guaranteed to exist) guaranteeing the applicability of the selected method.

Generation of the reference trajectories (for systems without coupling constraints)

The DPC algorithm assumes that an initial feasible solution or, more specifically, an initial reference trajectory exists. This problem can be cast as a purely offline design problem. On the other hand, disturbances of unexpected entity could occur during ordinary system operation, altering the system's condition (i.e., by producing constraint violations, e.g., x^{[i]}_{k+1} − x̂^{[i]}_{k+1} ∉ Z_i) with possible serious consequences on the future solution (e.g., concerning feasibility) of the control problems. Once this condition is detected by a given subsystem S_i, it must be broadcast to all other subsystems through an event-based emergency iterative transmission, and an extraordinary reset operation requires the recalculation of new suitable state and output reference trajectories for all subsystems.
The simplest solution consists (according to the approach suggested in [44]) in generating these trajectories using a centralized controller. This has the drawback that a centralized controller must be designed together with the distributed ones, and that it must be kept active while the system is running in order to recover the proper functioning of the process if unpredicted external disturbances affect the plant. Obviously, the need for a centralized hidden supervisor

greatly reduces the advantages of utilizing a distributed control scheme. In this section, we present two different methods for the generation of the trajectories, useful both for offline reference trajectory generation (i.e., performed at time k = 0) and for extraordinary reset operations, requiring a number of iterative information exchanges among neighbors. The first method (i.e., Algorithm 2.7) is applicable when x_k ∈ X̂_F, while the second one (i.e., Algorithm 2.8) can be used when x_k ∉ X̂_F. Therefore, at each time step, we first need a procedure to check whether x_k ∈ X̂_F, i.e., that x^{[i]}_k ∈ X̂^F_i for all i = 1, ..., M. To this purpose we define some useful notation: denote with G = (V, A) the connected, undirected communication graph supporting the distributed control architecture for system (2.1). V is the set of M nodes, each corresponding to a subsystem, while A is the set of undirected arcs connecting the nodes (given two nodes i, j ∈ V, there exists an undirected arc - of unitary length - (i, j) ∈ A if and only if j ∈ N_i or i ∈ N_j). We denote by P^s_max the longest among all the shortest paths linking all the possible pairs of nodes in V. P^s_max can be computed, for instance, using the Floyd-Warshall algorithm [17, 54, 124], and represents the maximum number of hops required for sending information from a node to all the other vertices. The following procedure has to be executed.

Algorithm 2.6 Algorithm for evaluating whether x_k ∈ X̂_F

1. For all i = 1, ..., M, initialize μ_i = 1 if x^{[i]}_k ∈ X̂^F_i or μ_i = 0 if x^{[i]}_k ∉ X̂^F_i. Set ν = 0.

2. Receive μ_j from all j ∈ N_i and from all j : i ∈ N_j. Set ν = ν + 1.

3. For all i = 1, ..., M, set μ_i = min_{j : (i,j)∈A ∪ {i}} (μ_j). If ν < P^s_max go to step 2. If ν = P^s_max go to step 4.

4. For all i = 1, ..., M, if μ_i = 1, then controller i can conclude that x_k ∈ X̂_F. Otherwise, it holds that x_k ∉ X̂_F.

Note that, after P^s_max iterations, it holds that μ_i = μ_j for all i, j = 1, ..., M.
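The steps of Algorithm 2.6 can be sketched on a hypothetical 4-node line graph: P^s_max is the graph diameter (computed here with Floyd-Warshall), and after P^s_max rounds of taking the minimum over neighbors, every node holds min_i μ_i:

```python
# Hypothetical 4-node line graph: 0 - 1 - 2 - 3
M = 4
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

# Floyd-Warshall all-pairs shortest paths; P_max^s is the graph diameter
INF = float("inf")
dist = [[0 if i == j else (1 if j in neighbors[i] else INF) for j in range(M)]
        for i in range(M)]
for k in range(M):
    for i in range(M):
        for j in range(M):
            dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
P_max = max(max(row) for row in dist)

# Min-consensus (Algorithm 2.6): node 3 reports "not in its terminal set"
mu = [1, 1, 1, 0]
for _ in range(P_max):
    mu = [min([mu[i]] + [mu[j] for j in neighbors[i]]) for i in range(M)]
# After P_max rounds every node agrees that x is not in X_F
```

A single zero anywhere in the network reaches every node in at most P^s_max hops, which is why the algorithm terminates after exactly P^s_max information exchanges.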
We now present the two distributed techniques for generating the trajectories x̃^{[i]}_{[k:k+N−1]} and ũ^{[i]}_{[k:k+N−1]} that each subsystem has to transmit to its neighbors. The first one, to be used when the whole state x_k is inside X̂_F, is based on the auxiliary control law and is guaranteed to find a solution. It requires N transmissions of information

from each subsystem to its neighbors. The second one, instead, is an optimization-based procedure which has been proved to be very effective when x_k ∉ X̂_F. If a solution is found, the latter also provides the minimum prediction horizon length N such that a reference trajectory exists for all subsystems.

Algorithm 2.7 Computation of the reference trajectories - Method 1 (x_k ∈ X̂_F)

1. For all i = 1, ..., M, initialize x̃^{[i]}_k = x^{[i]}_k and ũ^{[i]}_k = K_i x̃^{[i]}_k.

2. Receive x̃^{[j]}_k and ũ^{[j]}_k from the neighbors (j ∈ N_i). If N = 1 stop. If N ≥ 2, set ν = 0 and then go to step 3.

3. For all i = 1, ..., M, update the state reference trajectory as x̃^{[i]}_{k+ν+1} = A_{ii} x̃^{[i]}_{k+ν} + B_{ii} ũ^{[i]}_{k+ν} + Σ_{j∈N_i} {A_{ij} x̃^{[j]}_{k+ν} + B_{ij} ũ^{[j]}_{k+ν}} and set ũ^{[i]}_{k+ν+1} = K_i x̃^{[i]}_{k+ν+1}.

4. Receive x̃^{[j]}_{k+ν+1} and ũ^{[j]}_{k+ν+1} from the neighbors (j ∈ N_i). If ν = N − 1 stop. Else, set ν = ν + 1 and go to step 3.

This first algorithm is very intuitive, because it exploits the properties of the invariant terminal set X̂_F. The N transmissions of information allow a distributed state evolution equal to the one that would result from applying the centralized auxiliary state-feedback law to the entire system.

Algorithm 2.8 Computation of the reference trajectories - Method 2 (x_k ∉ X̂_F)

1. For all i = 1, ..., M, set ν = 0, arbitrarily define sets B_i containing the origin (their role will be specified later), initialize x̃^{[i]}_k = x^{[i]}_k and receive x̃^{[j]}_k for all j ∈ N_i and for all j such that B_{ji} ≠ 0.

2. For all i = 1, ..., M, initialize ũ^{[i]}_{k−1} by solving the following quadratic programming (QP) problem

min_{ũ^{[i]}_{k−1}} ‖x̂̃^{0[ii]}_{k+1}‖²_2 + Σ_{j : B_{ji} ≠ 0} (‖B_{ji}‖²_2 / ‖B_{ii}‖²_2) ‖x̂̃^{0[ji]}_{k+1}‖²_2   (2.51)

subject to

x̂̃^{0[ii]}_{k+1} = A_{ii} x̃^{[i]}_k + B_{ii} ũ^{[i]}_{k−1} + Σ_{j∈N_i} A_{ij} x̃^{[j]}_k   (2.52)
x̂̃^{0[ji]}_{k+1} = A_{jj} x̃^{[j]}_k + B_{ji} ũ^{[i]}_{k−1} + A_{ji} x̃^{[i]}_k   (2.53)
ũ^{[i]}_{k−1} ∈ Û_i   (2.54)
x̂̃^{0[ii]}_{k+1} ∈ X̂_i   (2.55)

3. For all i = 1, ..., M:
- if ν = 0, receive ũ^{[j]}_{k−1} for all j ∈ N_i;
- if ν ≥ 1, receive x̃^{[j]}_{k+ν} for all j ∈ N_i.

4. For all i = 1, ..., M, for all j : B_{ij} ≠ 0, compute λ^{[ij]}_{k+ν} = A_{ii} x̃^{[i]}_{k+ν} + B_{ii} ũ^{[i]}_{k+ν−1} + Σ_{z∈N_i \ {j}} {A_{iz} x̃^{[z]}_{k+ν} + B_{iz} ũ^{[z]}_{k+ν−1}}.

5. For all i = 1, ..., M, for all j : B_{ji} ≠ 0, receive λ^{[ji]}_{k+ν}.

6. For all i = 1, ..., M, compute ũ^{[i]}_{k+ν} by solving the following QP problem

min_{ũ^{[i]}_{k+ν}} ‖x̂̃^{[ii]}_{k+ν+1}‖²_2 + Σ_{j : B_{ji} ≠ 0} (‖B_{ji}‖²_2 / ‖B_{ii}‖²_2) ‖x̂̃^{[ji]}_{k+ν+1}‖²_2   (2.56)

subject to

x̂̃^{[ii]}_{k+ν+1} = A_{ii} x̃^{[i]}_{k+ν} + B_{ii} ũ^{[i]}_{k+ν} + Σ_{j∈N_i} {A_{ij} x̃^{[j]}_{k+ν} + B_{ij} ũ^{[j]}_{k+ν−1}}   (2.57)
x̂̃^{[ji]}_{k+ν+1} = A_{ji} x̃^{[i]}_{k+ν} + B_{ji} ũ^{[i]}_{k+ν} + λ^{[ji]}_{k+ν}   (2.58)
ũ^{[i]}_{k+ν} ∈ Û_i   (2.59)
ũ^{[i]}_{k+ν} − ũ^{[i]}_{k+ν−1} ∈ B_i   (2.60)
x̂̃^{[ii]}_{k+ν+1} ∈ X̂_i ⊖ ⊕_{j∈N_i} B_{ij} B_j   (2.61)

7. For all i = 1, ..., M, receive ũ^{[j]}_{k+ν} for all j ∈ N_i.

8. For all i = 1, ..., M, update the state reference trajectory as x̃^{[i]}_{k+ν+1} = A_{ii} x̃^{[i]}_{k+ν} + B_{ii} ũ^{[i]}_{k+ν} + Σ_{j∈N_i} {A_{ij} x̃^{[j]}_{k+ν} + B_{ij} ũ^{[j]}_{k+ν}}.

9. If x̃^{[i]}_{k+ν+1} ∈ X̂^F_i for all i = 1, ..., M, then N = ν + 1 and stop. Else, set ν = ν + 1 and go to step 3.

The second algorithm aims at iteratively finding feasible inputs using one-step predictions. Each controller minimizes a cost function including both the norm of the state variable of subsystem S_i and the terms (‖B_{ji}‖²_2 / ‖B_{ii}‖²_2) ‖x̂̃^{[ji]}_{k+ν+1}‖²_2, limiting the possible negative effect of the inputs of subsystem S_i on the state of subsystem S_j, for all j such that B_{ji} ≠ 0. The importance of this factor becomes greater as the coupling strength through the inputs increases. The one-step prediction equation (2.57) is affected only by the errors on the neighbors' current inputs but, at the same time, such error is bounded using constraint (2.60) (and the uncertainty drops as the size of the arbitrary set B_i decreases). The fulfillment of the constraints on future states is required of each subsystem only with respect to its own states, see constraint (2.61), which is coherent with the idea of having weak coupling terms among the subsystems, such that a robust approach for managing their interactions can be used. Finally, note that the check of the stopping criterion in step 9 requires Algorithm 2.6 to be applied; to reduce the required iterations, one can check x̃_{k+ν+1} ∈ X̂_F only after N iterations and, afterwards, only periodically.

2.5 Simulation examples

In this section, we present some simulation examples concerning widely-used and standard case studies in the context of distributed control.

Temperature control

We aim at regulating the temperatures T_A, T_B, T_C and T_D of the four rooms of the building represented in Figure 2.1. The first apartment is made up of rooms A and B, while the second one of rooms C and D. Each room is equipped with a radiator supplying the heat q_A, q_B, q_C and q_D.
The heat transfer coefficient between rooms A - C and B - D is k^t_1 = 1 W m⁻² K⁻¹, the one between rooms A - B and C - D is k^t_2 = 2.5 W m⁻² K⁻¹, and the one between each room and the external environment is k^t_e = 0.5 W m⁻² K⁻¹. The nominal external temperature is T̄_E = 0 °C and, for the sake of simplicity, solar radiation is not considered. The volume of each room is V = 48 m³, and the wall surfaces between the rooms are all equal to s_r = 12 m², while those

of the external walls are equal to s_e = 24 m². Air density and heat capacity are ρ = … g m⁻³ and c = 1005 J g⁻¹ K⁻¹, respectively.

Figure 2.1: Schematic representation of a building with two apartments.

Letting φ = ρcV, the dynamic model is the following:

φ dT_A/dt = s_r k^t_2 (T_B − T_A) + s_r k^t_1 (T_C − T_A) + s_e k^t_e (T_E − T_A) + q_A
φ dT_B/dt = s_r k^t_2 (T_A − T_B) + s_r k^t_1 (T_D − T_B) + s_e k^t_e (T_E − T_B) + q_B
φ dT_C/dt = s_r k^t_1 (T_A − T_C) + s_r k^t_2 (T_D − T_C) + s_e k^t_e (T_E − T_C) + q_C
φ dT_D/dt = s_r k^t_1 (T_B − T_D) + s_r k^t_2 (T_C − T_D) + s_e k^t_e (T_E − T_D) + q_D   (2.62)

Letting i ∈ {A, B, C, D}, the considered equilibrium point is q̄_i = q̄ = 20 s_e k^t_e W, with T̄_i = T̄ = 20 °C in correspondence of T̄_E = 0 °C. Let δT_i = T_i − T̄, δT_E = T_E − T̄_E, δq_i = (q_i − q̄)/φ. In this way, denoting σ_1 = s_r k^t_1/φ, σ_2 = s_r k^t_2/φ, σ_3 = s_e k^t_e/φ, σ = σ_1 + σ_2 + σ_3, x = (δT_A, δT_B, δT_C, δT_D), u = (δq_A, δq_B, δq_C, δq_D) and d = [σ_3 σ_3 σ_3 σ_3]^T δT_E, the previous model can be rewritten in the continuous-time state-space representation ẋ(t) = A^c x(t) + B^c u(t) + d(t), where

A^c = [ −σ    σ_2   σ_1   0
        σ_2  −σ     0     σ_1
        σ_1   0    −σ     σ_2
        0     σ_1   σ_2  −σ ],   B^c = I_4

The discrete-time system of the form (2.24) (with n = 4 and m = 4) is obtained by ME-ZOH discretization with sampling time h = 10 s. The partition of inputs and states is:

x^{[1]} = [δT_A  δT_B]^T,  u^{[1]} = [δq_A  δq_B]^T

x^{[2]} = [δT_C  δT_D]^T,  u^{[2]} = [δq_C  δq_D]^T

The constraints on the inputs and the states of the linearized system have been chosen as:

x^{[1]}_min = [−5  −5]^T,  x^{[1]}_max = [5  5]^T
x^{[2]}_min = [−5  −5]^T,  x^{[2]}_max = [5  5]^T
u^{[1]}_min = [ … ]^T,  u^{[1]}_max = [ … ]^T
u^{[2]}_min = [ … ]^T,  u^{[2]}_max = [ … ]^T

The real external temperature has been assumed to randomly vary between −10 °C and 10 °C. The matrices K_i and P_i representing a feasible solution to (2.77), (2.78), (2.79) and (2.80) are:

K_1 = K_2 = [ … ],  P_1 = P_2 = [ … ]

The weighting matrices used in the simulations are Q^o_1 = Q^o_2 = R^o_1 = R^o_2 = I_2. Algorithm 2.3 has been used for computing the RPI sets, while the initial reference trajectories have been generated using Algorithm 2.8.

Figure 2.2: Trajectories of the states x^{[1]} (left) and x^{[2]} (right) obtained with DPC (black lines) and with cMPC (gray lines) for the temperature control problem. Solid lines: δT_A and δT_C; dashed lines: δT_B and δT_D.

The results of the simulations, performed

using the continuous-time process model, are shown in Figure 2.2, while the values of the input variables are depicted in Figure 2.3.

Figure 2.3: Inputs u[1] (left) and u[2] (right) obtained with DPC (black lines) and with cMPC (gray lines) for the temperature control problem. Solid lines: δq_A and δq_C; dashed lines: δq_B and δq_D.

To show the capability of Algorithm 2.7 to recover the reference trajectories, a sudden decrease of temperature T_A is forced at t = 350 s (it could represent, for instance, the opening of a door). In both figures a comparison between DPC and centralized MPC (cMPC) is provided, showing only a small performance degradation.

Four-tank system

A benchmark case often used to assess the effectiveness of distributed control algorithms is the four-tank system schematically drawn in Figure 2.4, originally described in [74] and then utilized, for instance, in [7, 108]. The goal is to regulate the levels h₁, h₂, h₃ and h₄ of the four tanks. The manipulated inputs are the voltages of the two pumps, v₁ and v₂. We assumed that a bounded unknown disturbance w = (w₁, w₂) affects the applied voltages, i.e., that the real input to the plant is u = (v₁ + w₁, v₂ + w₂). The parameters γ₁, γ₂ ∈ (0, 1) represent the fraction of the water that flows inside the lower tanks, and are kept

fixed during the simulations.

Figure 2.4: Schematic representation of a four-tank system.

The dynamics of the system is given by

  dh₁/dt = −(a₁/A₁)√(2g h₁) + (a₄/A₁)√(2g h₄) + (γ₁ k₁/A₁) v₁
  dh₂/dt = −(a₂/A₂)√(2g h₂) + ((1 − γ₁) k₁/A₂) v₁
  dh₃/dt = −(a₃/A₃)√(2g h₃) + (a₂/A₃)√(2g h₂) + (γ₂ k₂/A₃) v₂
  dh₄/dt = −(a₄/A₄)√(2g h₄) + ((1 − γ₂) k₂/A₄) v₂        (2.63)

where A_i and a_i are the cross section of Tank i and the cross section of the outlet hole of Tank i, respectively. The coefficients k₁ and k₂ represent the conversion parameters from the voltage applied to the pump to the flux of water. The values of the parameters, taken from [74], are: A₁ = A₄ = 28 cm², A₂ = A₃ = 32 cm², a₁ = a₄ = … cm², a₂ = a₃ = … cm², k₁ = 3.35 cm³ V⁻¹ s⁻¹, k₂ = 3.33 cm³ V⁻¹ s⁻¹, γ₁ = 0.7, γ₂ = 0.6. The considered equilibrium point is v̄₁ = v̄₂ = 3 V, h̄₁ = … cm, h̄₂ = … cm, h̄₃ = … cm and h̄₄ = … cm. Letting δh_l = h_l − h̄_l, l = 1, 2, 3, 4, and δv_i = v_i − v̄_i, i = 1, 2, x = (δh₁, δh₂, δh₃, δh₄), u = (δv₁, δv₂), d = B(w₁, w₂), linearizing system (2.63) around the considered equilibrium point and discretizing it using ZOH with sampling time h = 1 s, we obtain a linear system

of the form (2.24). Inputs and states are partitioned as:

  x[1] = [δh₁ δh₂]ᵀ, u[1] = δv₁
  x[2] = [δh₃ δh₄]ᵀ, u[2] = δv₂

The constraints on the inputs and the states of the linearized system have been chosen as:

  x[1]_min = […]ᵀ, x[1]_max = […]ᵀ
  x[2]_min = […]ᵀ, x[2]_max = […]ᵀ
  u[1]_min = u[2]_min = −3, u[1]_max = u[2]_max = 3

The disturbances w₁, w₂ on the applied voltages are assumed to randomly vary between −0.01 V and 0.01 V. Matrices K_i and P_i satisfying the LMI conditions are K₁ = […], K₂ = […], P₁ = […], P₂ = […]. The weighting matrices are Q^o₁ = Q^o₂ = I₂ and R^o₁ = R^o₂ = 1. To compute the RPI sets, Algorithm 2.3 has been used, and the initial reference trajectories have been designed using Algorithm 2.7. The simulation results, obtained using the continuous-time nonlinear model, are reported in Figure 2.5, while in Figure 2.6 the applied real voltages are shown. In addition to the external disturbance (w₁, w₂), included in the robust controller design, at time t = 100 s an unpredicted impulse equal to 2 V has been applied to the first pump. Then, the reference trajectories have been re-generated online with Algorithm 2.7 to recover the nominal operating conditions. The performances are close to the ones obtained with centralized MPC.
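The linearize-then-discretize step described above can be sketched as follows. The outlet areas a_i below are plausible placeholder values (the printed numbers did not survive transcription), and `f`, `equilibrium` and the finite-difference Jacobians are illustrative helpers, not the thesis code.

```python
import numpy as np
from scipy.signal import cont2discrete

g = 981.0                                      # gravity [cm/s^2]
Acs = {1: 28.0, 2: 32.0, 3: 32.0, 4: 28.0}     # tank cross-sections [cm^2]
a = {1: 0.071, 2: 0.057, 3: 0.057, 4: 0.071}   # outlet areas [cm^2] -- ASSUMED
k1, k2, g1, g2 = 3.35, 3.33, 0.7, 0.6

def f(h, v):
    """Nonlinear four-tank dynamics (2.63): tank 4 feeds tank 1, tank 2 feeds tank 3."""
    h1, h2, h3, h4 = h
    v1, v2 = v
    s = lambda x: np.sqrt(2.0 * g * x)
    return np.array([
        (-a[1] * s(h1) + a[4] * s(h4) + g1 * k1 * v1) / Acs[1],
        (-a[2] * s(h2) + (1 - g1) * k1 * v1) / Acs[2],
        (-a[3] * s(h3) + a[2] * s(h2) + g2 * k2 * v2) / Acs[3],
        (-a[4] * s(h4) + (1 - g2) * k2 * v2) / Acs[4],
    ])

def equilibrium(v1, v2):
    """Steady-state levels obtained from the flow balances, tank by tank."""
    h2 = ((1 - g1) * k1 * v1 / a[2]) ** 2 / (2 * g)
    h4 = ((1 - g2) * k2 * v2 / a[4]) ** 2 / (2 * g)
    h1 = ((g1 * k1 * v1 + a[4] * np.sqrt(2 * g * h4)) / a[1]) ** 2 / (2 * g)
    h3 = ((g2 * k2 * v2 + a[2] * np.sqrt(2 * g * h2)) / a[3]) ** 2 / (2 * g)
    return np.array([h1, h2, h3, h4])

heq = equilibrium(3.0, 3.0)

# Jacobians by central differences, then exact ZOH discretization with h = 1 s
eps = 1e-6
Ac = np.column_stack([(f(heq + eps * e, [3, 3]) - f(heq - eps * e, [3, 3])) / (2 * eps)
                      for e in np.eye(4)])
Bc = np.column_stack([(f(heq, [3, 3] + eps * e) - f(heq, [3, 3] - eps * e)) / (2 * eps)
                      for e in np.eye(2)])
Ad, Bd, *_ = cont2discrete((Ac, Bc, np.eye(4), np.zeros((4, 2))), dt=1.0)
```

The resulting discrete-time pair (Ad, Bd) has the structure of (2.24); since the valve outflows strictly dissipate, Ad is Schur.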

Figure 2.5: Trajectories of the states x[1] (left) and x[2] (right) obtained with DPC (black lines) and with cMPC (gray lines) for controlling the four-tank system. Solid lines: x₁ and x₃; dashed lines: x₂ and x₄.

Figure 2.6: Inputs u[1] (left) and u[2] (right) obtained with DPC (black lines) and with cMPC (gray lines) for controlling the four-tank system.

Cascade-coupled flotation tanks

Consider the level control problem of flotation tanks discussed in [158]. The system consists of five tanks connected in cascade with control valves between the tanks (Figure 2.7). A flow of pulp q enters the first tank. The goal is to keep the levels y_i, i = 1, …, 5, stable in all the tanks. The manipulated inputs are the signals to the valves v_i, i = 1, …, 5.

Figure 2.7: Schematic representation of the flotation tanks.

The mathematical model describing the dynamics of the levels inside the five tanks is [158]:

  πr² dy₁/dt = q − k₁ v₁ √(y₁ − y₂ + h₁)
  πr² dy₂/dt = k₁ v₁ √(y₁ − y₂ + h₁) − k₂ v₂ √(y₂ − y₃ + h₂)
  πr² dy₃/dt = k₂ v₂ √(y₂ − y₃ + h₂) − k₃ v₃ √(y₃ − y₄ + h₃)
  πr² dy₄/dt = k₃ v₃ √(y₃ − y₄ + h₃) − k₄ v₄ √(y₄ − y₅ + h₄)
  πr² dy₅/dt = k₄ v₄ √(y₄ − y₅ + h₄) − k₅ v₅ √(y₅ + h₅)        (2.64)

where r is the radius of the tanks, k_i, i = 1, …, 5, are the valve coefficients and h_i, i = 1, …, 5, are the physical height differences between subsequent tanks. We set r = 1 m, k_i = 0.1 m^{2.5} V⁻¹ s⁻¹, i = 1, …, 5, and h_i = 0.5 m, i = 1, …, 5. The nominal value of the inlet flow is q̄ = 0.1 m³ s⁻¹ and we assume that it is affected by an uncertainty w = ±0.5% randomly varying with time. We consider the equilibrium point where ȳ_i = 2 m, i = 1, …, 5, and, correspondingly, v̄_i = … V, i = 1, …, 4, and v̄₅ = … V. Let δy_i = y_i − ȳ_i and δv_i = v_i − v̄_i, i = 1, …, 5, x = (δy₁, δy₂, δy₃, δy₄, δy₅), u = (δv₁, δv₂, δv₃, δv₄, δv₅) and d = B_d w. The discrete-time linearized

system corresponding to system (2.64), obtained with ZOH discretization and a sampling time h = 5 s, has the form (2.24), where B_d = […]ᵀ and A, B are the corresponding discrete-time matrices. The partition of inputs and states, for i = 1, …, 5, is:

  x[i] = δy_i, u[i] = δv_i

The constraints on the inputs and the states of the linearized system, for i = 1, …, 5, have been set as:

  x[i]_min = −1, x[i]_max = 1, u[i]_min = −v̄_i, u[i]_max = 3 − v̄_i

The terms K_i and P_i solving the LMI conditions are:

  K₁ = 0.287, K₂ = K₃ = K₄ = 0.143, K₅ = …
  P₁ = 1.18, P₂ = 1.07, P₃ = 1.05, P₄ = 1, P₅ = 1

The weighting matrices, for i = 1, …, 5, are Q^o_i = R^o_i = 1. To compute the RPI sets Algorithm 2.4 has been used, while the initial reference trajectories have been designed using Algorithm 2.8. Figure 2.8 shows the simulation results, obtained using the continuous-time nonlinear model, for the state and input of the first tank, which is directly affected by the external flow q. Figure 2.9 and Figure 2.10 report, respectively, the states and the inputs of the remaining four tanks. Also in this case, only minor differences arise between the centralized and the distributed solutions. At time t = 300 s, a disturbance of magnitude w = 0.1 m³ s⁻¹ is applied to the plant, and the distributed control system reacts by re-generating the reference trajectories from scratch (i.e., with Algorithm 2.7).
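The equilibrium valve signals follow directly from the steady-state balance of (2.64): at steady state every valve passes the whole inlet flow q̄, so k v̄_i √(head_i) = q̄, where the head over the first four valves is h_i = 0.5 m (equal levels) and the head over the last valve is ȳ₅ + h₅. A minimal sketch, with `level_dynamics` and `v_eq` as illustrative names:

```python
import numpy as np

r, k, h_step, qbar, ybar = 1.0, 0.1, 0.5, 0.1, 2.0   # parameters from the text

def level_dynamics(y, v, q):
    """Nonlinear level model (2.64): pi r^2 dy_i/dt = inflow_i - outflow_i."""
    area = np.pi * r ** 2
    heads = [y[i] - y[i + 1] + h_step for i in range(4)] + [y[4] + h_step]
    flows = [k * v[i] * np.sqrt(heads[i]) for i in range(5)]
    inflow = [q] + flows[:4]                 # tank 1 receives q, the rest cascade
    return np.array([(inflow[i] - flows[i]) / area for i in range(5)])

# Equilibrium valve openings: v_i = qbar / (k * sqrt(head_i))
v_eq = [qbar / (k * np.sqrt(h_step))] * 4 + [qbar / (k * np.sqrt(ybar + h_step))]
```

Plugging `v_eq` back into `level_dynamics` at ȳ_i = 2 m returns zero derivatives, which is exactly the equilibrium condition used in the text.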

Figure 2.8: Trajectories of the state x[1] (left) and of the input u[1] (right) obtained with DPC (black lines) and with cMPC (gray lines) for the control of the flotation tanks.

Figure 2.9: Trajectories of the states x[2] (solid lines), x[3] (dashed lines), x[4] (dash-dot lines) and x[5] (dotted lines) obtained with DPC (black lines) and with cMPC (gray lines) for the control of the flotation tanks.

Figure 2.10: Inputs u[2] (solid lines), u[3] (dashed lines), u[4] (dash-dot lines) and u[5] (dotted lines) obtained with DPC (black lines) and with cMPC (gray lines) for the control of the flotation tanks.

Reactor-separator process

The DPC algorithm has been used for control of the reactor-separator process already considered in [91, 159] and shown in Figure 2.11.

Figure 2.11: Schematic representation of the reactor-separator process.

The plant consists of three subsystems, i.e., two reactors and a separator. The reactant A is fed to the two reactors, where it is converted into the product B, with a side product C; a significant recirculation from the separator to the first reactor makes the system heavily coupled.

The model, whose equations can be found in [91], is derived under the assumption of hydraulic equilibrium, so that each subsystem has three states, x[i] = (x_{Ai}, x_{Bi}, T_i), where x_{Ai} and x_{Bi} are the mass fractions of A and B in vessel i, while T_i is its temperature; each subsystem has the heat input Q_i as control variable. Referring again to the model reported in [91], the plant parameters used in the experiments reported below are summarized in Table 1.

Table 1: Parameters used in the reactor-separator process.

  Parameter   Description                            Nominal value
  F₁₀         effluent flow rate, vessel 1           … kg s⁻¹
  F₀₂         effluent flow rate, vessel 2           … kg s⁻¹
  F_R         recycle flow rate                      40 kg s⁻¹
  V₁          volume, vessel 1                       … m³
  V₂          volume, vessel 2                       90 m³
  V₃          volume, vessel 3                       … m³
  k₁          pre-exp. value for reaction 1          … s⁻¹
  k₂          pre-exp. value for reaction 2          … s⁻¹
  E₁/R        norm. act. energy for reaction 1       … K
  E₂/R        norm. act. energy for reaction 2       … K
  x_{A10}     mass fract. of A in ext. streams       1
  x_{B10}     mass fract. of B in ext. streams       0
  T₁₀, T₂₀    feed stream temperatures               313 K
  ΔH₁         heat of reaction 1                     −40 kJ kg⁻¹
  ΔH₂         heat of reaction 2                     −50 kJ kg⁻¹
  C_p         heat capacity                          2.5 kJ kg⁻¹ K⁻¹
  ρ           solution density                       0.15 kg m⁻³
  α_A         relative volatility of A               3.5
  α_B         relative volatility of B               1.1
  α_C         relative volatility of C               0.5

Correspondingly, with Q̄₁ = Q̄₂ = Q̄₃ = 10 kJ s⁻¹, the equilibrium x̄_{A1} = 0.6, x̄_{B1} = 0.352, T̄₁ = 327 K, x̄_{A2} = 0.536, x̄_{B2} = 0.4, T̄₂ = … K, x̄_{A3} = 0.285, x̄_{B3} = 0.565, T̄₃ = … K has been computed, and the linearized model around this steady state has been derived and discretized with sampling time 0.1 s (see Figure 2.12). For the design of DPC, first the gains K_i, i = 1, 2, 3, have been designed for the pairs (A_ii, B_ii) according to the LQ criterion with Q_i = I₃ and R_i = 10⁻⁶ for all i, which allows one to verify Assumption 2.1. The weighting matrices and the sets have been chosen in order to satisfy Assumptions 2.2 and 2.3. We set N = 10.
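The per-subsystem LQ design mentioned above can be reproduced with a discrete-time Riccati equation. The matrices below are hypothetical stand-ins for one pair (A_ii, B_ii), since the thesis' linearized reactor matrices are not reproduced in this section:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lq_gain(A, B, Q, R):
    """Discrete-time LQ gain K (with u = K x) from the associated Riccati equation."""
    P = solve_discrete_are(A, B, Q, R)
    return -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Hypothetical 3-state subsystem standing in for (A_ii, B_ii)
Aii = np.array([[0.90, 0.20, 0.00],
                [0.00, 1.05, 0.10],
                [0.10, 0.00, 0.80]])
Bii = np.array([[0.0], [1.0], [0.5]])

# Q_i = I_3 and a small input weight, as in the design described above
Ki = lq_gain(Aii, Bii, np.eye(3), np.array([[1e-6]]))
Fii = Aii + Bii @ Ki
```

The resulting closed-loop matrix Fii = A_ii + B_ii K_i is Schur, which is what Assumption 2.1 requires of each local gain.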
The capability of regulating the state trajectories of the linearized system to the origin has been tested in simulation, in the face of a perturbation of magnitude x̃[i] = (x̃_{Ai}, x̃_{Bi}, T̃_i), where x̃_{Ai} = 0.05, x̃_{Bi} = 0.05 and T̃_i = 5 for all i, with respect to the equilibrium condition at time t = 0 s. Note that the constraints on the absolute input variables, Q_i ∈ [0, 50] for all i (see [159]), result in the following constraints on the deviation of Q_i with respect to Q̄_i: ΔQ_i ∈ [−10, 40].

Figure 2.12: State and input variables with DPC (black lines) and with cMPC (gray lines) for all subsystems (i = 1: solid lines; i = 2: dotted lines; i = 3: dashed lines).

The dynamics of x_{Ai}, x_{Bi}, T_i and the inputs Q_i with respect to the equilibrium conditions are shown in Figure 2.12, and compared with the ones obtained applying a centralized MPC controller (cMPC). Note that the Q_i saturate at values which are not on the boundary of the feasibility set, since the robustness arguments used to define DPC make the constraints more conservative than the ones used in cMPC. This slightly degrades the performance of DPC with respect to cMPC in the regulation of T_i.

2.6 Conclusions

In this chapter, starting from the theoretical results presented in [50], the DPC algorithm has been presented in a shape suitable for practical implementation. The whole offline design phase has been carefully studied, in order to propose algorithms as simple as possible. Some techniques

for handling faults due to unpredicted external disturbances have been described as well. Several simulation results based on processes taken from the literature have been presented, showing the effectiveness of the approach, which leads to performances close to the ones obtained using a centralized solution. In the next chapters, we will extend the proposed approach to continuous-time systems and to the tracking problem.

2.7 Appendix

Proof of Theorem 2.1

For the sake of completeness, we report the proof of Theorem 2.1 [50].

The collective problem

Define the collective vectors x̂_k = (x̂[1]_k, …, x̂[M]_k), x̃_k = (x̃[1]_k, …, x̃[M]_k), ũ_k = (ũ[1]_k, …, ũ[M]_k), û_k = (û[1]_k, …, û[M]_k), w_k = (w[1]_k, …, w[M]_k), z_k = (z[1]_k, …, z[M]_k), and the matrices Ā = diag(A₁₁, …, A_MM), B̄ = diag(B₁₁, …, B_MM), Ã = A − Ā, B̃ = B − B̄. Collectively, we write equations (2.3) and (2.6) as

  x_{k+1} = Ā x_k + B̄ u_k + Ã x̃_k + B̃ ũ_k + w_k        (2.65)
  x̂_{k+1} = Ā x̂_k + B̄ û_k + Ã x̃_k + B̃ ũ_k        (2.66)

In view of (2.7), u_k = û_k + K(x_k − x̂_k) and, from (2.8),

  z_{k+1} = (Ā + B̄K) z_k + w_k        (2.67)

Minimizing (2.9) for all i = 1, …, M is equivalent to minimizing

  V_N(x_k) = min_{x̂_k, û_{[k:k+N−1]}} V_N(x̂_k, û_{[k:k+N−1]})        (2.68)

subject to the dynamic constraints (2.66) and

  x_k − x̂_k ∈ Z = ∏_{i=1}^M Z_i        (2.69a)
  x̂_{k+ν} − x̃_{k+ν} ∈ E = ∏_{i=1}^M E_i        (2.69b)
  û_{k+ν} − ũ_{k+ν} ∈ Ũ = ∏_{i=1}^M U_i        (2.69c)
  x̂_{k+ν} ∈ X̂        (2.69d)
  û_{k+ν} ∈ Û        (2.69e)
  H(x̂_{k+ν}, û_{k+ν}, x̃_{k+ν}, ũ_{k+ν}) ≤ 0        (2.69f)

for ν = 0, …, N − 1, and the terminal constraint

  x̂_{k+N} ∈ X̂_F        (2.70)

In (2.69), H collects all the constraints (2.15) and, by i) in Assumption 2.2, H(x̂, Kx̂, x̂, Kx̂) ≤ 0 for all x̂ ∈ X̂_F. The collective cost function V_N is defined as

  V_N(x̂_k, û_{[k:k+N−1]}) = Σ_{ν=0}^{N−1} l(x̂_{k+ν}, û_{k+ν}) + V_F(x̂_{k+N})

We also define

  V_{N,0}(x̂_k) = min_{û_{[k:k+N−1]}} V_N(x̂_k, û_{[k:k+N−1]})        (2.71)

subject to (2.66), (2.69b)-(2.70).

Feasibility

From Definition 2.1, it collectively holds that

  X_N = {x : if x₀ = x then ∃ x̃_{[0:N−1]}, ũ_{[0:N−1]}, x̂_{0/0}, û_{[0:N−1]} such that (2.66), (2.69) and (2.70) are satisfied}

and that, for each point of the feasibility set x ∈ X_N,

  X_x := {(x̃_{[0:N−1]}, ũ_{[0:N−1]}) : if x₀ = x then ∃ x̂_{0/0}, û_{[0:N−1]} such that (2.66), (2.69) and (2.70) are satisfied}

At time k, x_k ∈ X_N and (x̃_{[k:k+N−1]}, ũ_{[k:k+N−1]}) ∈ X_{x_k}. The optimal nominal input and state sequences obtained by minimizing the collective MPC problem are û_{[k:k+N−1]/k} = {û_{k/k}, …, û_{k+N−1/k}} and x̂_{[k:k+N]/k} = {x̂_{k/k}, …, x̂_{k+N/k}}, respectively. Finally, recall that x̃_{k+N} = x̂_{k+N/k} and ũ_{k+N} = K x̂_{k+N/k}. Define û_{k+N/k} = K x̂_{k+N/k} and compute x̂_{k+N+1/k} according to (2.66) from x̂_{k+N/k}, where û_{k+N} = û_{k+N/k}. We obtain

  x̂_{k+N+1/k} = Ā x̂_{k+N/k} + B̄ û_{k+N/k} + Ã x̃_{k+N} + B̃ ũ_{k+N} = (A + BK) x̂_{k+N/k}

since x̃_{k+N} = x̂_{k+N/k} and ũ_{k+N} = û_{k+N/k}. In view of constraint (2.70) and Assumption 2.2, û_{k+N/k} ∈ Û and x̂_{k+N+1/k} ∈ X̂_F; therefore they satisfy (2.69d), (2.69e) and (2.70). Also, according to Assumption 2.2, (2.19) holds. We also define the input sequence u_{[k+1:k+N]/k} and the state sequence x_{[k+1:k+N+1]/k} stemming from the initial condition x̂_{k+1/k} and the input sequence u_{[k+1:k+N]/k}. In view of the feasibility of the i-DPC problems at time k, we have that x_{k+1} − x̂_{k+1/k} ∈ Z, x̂_{k+ν/k} − x̃_{k+ν} ∈ E, and û_{k+ν/k} − ũ_{k+ν} ∈ Ũ for all ν = 1, …, N − 1. Note also that x̂_{k+N/k} − x̃_{k+N} = 0 ∈ E and û_{k+N/k} − ũ_{k+N} = 0 ∈ Ũ by (2.18). Furthermore, since x̃_{k+N} = x̂_{k+N/k} ∈ X̂_F, from (2.70) and i) of Assumption 2.2 it holds that H(x̂_{k+N/k}, û_{k+N/k}, x̃_{k+N}, ũ_{k+N}) ≤ 0. Therefore x_{[k+1:k+N+1]/k} and u_{[k+1:k+N]/k} are feasible at k + 1, since (2.69) and (2.70) are satisfied. This proves that x_k ∈ X_N and (x̃_{[k:k+N−1]}, ũ_{[k:k+N−1]}) ∈ X_{x_k} imply that x_{k+1} ∈ X_N and (x̃_{[k+1:k+N]}, ũ_{[k+1:k+N]}) ∈ X_{x_{k+1}}.

Convergence of the optimal cost function

By optimality, V_{N,0}(x̂_{k+1/k}) ≤ V_N(x̂_{k+1/k}, u_{[k+1:k+N]/k}), where

  V_N(x̂_{k+1/k}, u_{[k+1:k+N]/k}) = Σ_{ν=1}^{N} l(x̂_{k+ν/k}, û_{k+ν/k}) + V_F(x̂_{k+N+1/k})

Therefore we compute that

  V_{N,0}(x̂_{k+1/k}) − V_{N,0}(x̂_{k/k}) ≤ −l(x̂_{k/k}, û_{k/k}) + l(x̂_{k+N/k}, û_{k+N/k}) + V_F(x̂_{k+N+1/k}) − V_F(x̂_{k+N/k})        (2.72)

and, in view of (2.19), it follows that

  V_{N,0}(x̂_{k+1/k}) − V_{N,0}(x̂_{k/k}) ≤ −(‖x̂_{k/k}‖²_{Q^o} + ‖û_{k/k}‖²_{R^o})        (2.73)

Since Q^o and R^o are positive definite matrices, x̂_{k/k} → 0 and û_{k/k} → 0 as k → ∞. Finally, recall that the state x_k evolves according to

  x_{k+1} = (A + BK) x_k + B̄ (û_{k/k} − K x̂_{k/k})

By the asymptotic convergence to zero of the nominal state and input signals x̂_{k/k} and û_{k/k}, the term B̄ (û_{k/k} − K x̂_{k/k}) asymptotically vanishes. Since (A + BK) is Schur by Assumption 2.1, we obtain that x_k → 0 as k → ∞.

Proof of Algorithm 2.1

In this section, the approach based on Linear Matrix Inequalities (LMIs) [18] for computing the state-feedback gain and the weighting matrices is described in detail. First recall that the closed-loop system composed of (2.1) and the state-feedback control law u = Kx is stable if and only if there exists a matrix P^o ∈ R^{n×n} such that

  P^o ≻ 0,   (A + BK)ᵀ P^o (A + BK) − P^o ≺ 0        (2.74)

Now define the positive definite matrix S such that P^o = S⁻¹ and rewrite conditions (2.74) as

  S ≻ 0,   (Aᵀ + KᵀBᵀ) S⁻¹ (A + BK) − S⁻¹ ≺ 0        (2.75)

Pre- and post-multiplying the second inequality in (2.75) by S, and defining the matrix Y ∈ R^{m×n} such that K = YS⁻¹, (2.75) can be written as

  S ≻ 0,   (SAᵀ + YᵀBᵀ) S⁻¹ (AS + BY) − S ≺ 0        (2.76)

By using the Schur complement [173], this expression can be transformed into the following LMI:

  [ S            SAᵀ + YᵀBᵀ
    AS + BY      S          ] ≻ 0        (2.77)

whose solutions S and Y allow one to compute K = YS⁻¹ and P^o = S⁻¹. Moreover, since P^o is required to have a block-diagonal structure, the following additional constraints have to be included in the LMI problem:

  S_ij = 0,   i, j = 1, …, M (i ≠ j)        (2.78)

where S_ij ∈ R^{n_i×n_j} are the blocks of S outside the diagonal, and S_ii ∈ R^{n_i×n_i} are the diagonal blocks. Analogously, in order to have K block-diagonal as well, Y must be block-diagonal:

  Y_ij = 0,   i, j = 1, …, M (i ≠ j)        (2.79)

where Y_ij ∈ R^{m_i×n_j} are the blocks of Y outside the diagonal. Finally, each block K_i must be stabilizing for the i-th subsystem (recall again Assumption 2.2), which translates into the following condition for each subsystem:

  [ S_ii                        S_ii A_iiᵀ + Y_iiᵀ B_iiᵀ
    A_ii S_ii + B_ii Y_ii       S_ii                      ] ≻ 0        (2.80)
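A candidate pair (S, Y) can be checked against the LMI (2.77) numerically by testing positive definiteness of its block matrix. The scalar system below is a hypothetical example: a feasible point is built from a stabilizing gain K and the associated closed-loop Lyapunov solution, whereas Y = 0 (i.e., K = 0, which leaves A unstable) is correctly rejected.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def lmi_2_77_holds(A, B, S, Y, tol=1e-9):
    """Check the LMI (2.77) by testing positive definiteness of its block matrix."""
    M = np.block([[S, S @ A.T + Y.T @ B.T],
                  [A @ S + B @ Y, S]])
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) > tol))

# Hypothetical scalar system and a stabilizing gain
A = np.array([[1.2]])
B = np.array([[1.0]])
K = np.array([[-0.7]])                        # A + BK = 0.5, Schur
F = A + B @ K
P = solve_discrete_lyapunov(F.T, np.eye(1))   # F' P F - P + I = 0
S = np.linalg.inv(P)                          # feasible point: S = P^-1, Y = K S
Y = K @ S
```

This is only a feasibility check of a given point; solving (2.77)-(2.80) for S and Y in general requires an SDP solver.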


3 Continuous-time DPC

With some notable exceptions, see e.g. [32, 44, 90, 146], the majority of the distributed control algorithms proposed so far have been developed for discrete-time systems, possibly obtained from an underlying continuous-time model; see also the Distributed Predictive Control (DPC) technique presented in Chapter 2. The discrete-time framework is particularly suitable for the design of distributed MPC, since it also allows one to easily develop methods based on distributed optimization approaches, see e.g. [7, 27, 28, 42, 76, 134], or on agent negotiation, see e.g. [97, 98]. On the other hand, it does not allow one to consider the process inter-sampling behavior in the optimization problem underlying any MPC algorithm; for this reason, the development of new and effective MPC methods for continuous-time systems is of interest.

In this chapter, the DPC algorithm is formulated in a continuous-time framework. The continuous-time DPC is also based on a non-iterative scheme where the future state and control reference trajectories are transmitted among neighboring systems, i.e., systems with direct couplings through their state or control variables, and the differences between these trajectories and the true ones are interpreted as disturbances to be rejected by a proper robust control method. The continuous-time approach is characterized by a higher complexity, but it has the advantage of accounting for the system behavior at all time instants in the optimization problem.

3.1 Partitioned continuous-time systems

Consider a process made of M interacting subsystems described by the continuous-time linear model

  ẋ(t) = Ax(t) + Bu(t)        (3.1)

where x(t) ∈ X ⊆ Rⁿ and u(t) ∈ U ⊆ Rᵐ are the state and input vectors, respectively, both subject to constraints. Letting x(t) = (x[1](t), …, x[M](t)) and u(t) = (u[1](t), …, u[M](t)), the dynamics of each subsystem is given by

  ẋ[i](t) = A_ii x[i](t) + B_ii u[i](t) + Σ_{j≠i, j≤M} {A_ij x[j](t) + B_ij u[j](t)}        (3.2)

where x[i](t) ∈ X_i ⊆ R^{n_i} and u[i](t) ∈ U_i ⊆ R^{m_i} are the state and input vectors, respectively, of the i-th subsystem (i = 1, …, M), and it holds that n = Σ_{i=1}^M n_i and m = Σ_{i=1}^M m_i. The sets X_i and U_i defining the constraints are convex neighborhoods of the origin, and we have X = ∏_{i=1}^M X_i and U = ∏_{i=1}^M U_i, which are convex due to the convexity of the X_i and U_i. The subsystem matrices A_ij and B_ij, i, j = 1, …, M, are the block entries of the matrices A and B describing the dynamics of the large-scale system (3.1). In the following, subsystem j is defined as a neighbor of subsystem i if and only if A_ij ≠ 0 and/or B_ij ≠ 0, and N_i denotes the set of neighbors of subsystem i (which excludes i itself). Concerning systems (3.2), the following stabilizability assumption is introduced.

Assumption 3.1 There exist matrices K̄_i ∈ R^{m_i×n_i}, i = 1, …, M, such that F̄_ii = A_ii + B_ii K̄_i are Hurwitz. We define K̄ = diag(K̄₁, …, K̄_M).

As for the collective system (3.1), the following assumption on decentralized stabilizability is made.

Assumption 3.2 There exists a block-diagonal matrix K^c, defined as K^c = diag(K^c₁, …, K^c_M) with K^c_i ∈ R^{m_i×n_i}, i = 1, …, M, such that:
i) A + BK^c is Hurwitz;
ii) F_ii = A_ii + B_ii K^c_i is Hurwitz, i = 1, …, M.

Note that Assumption 3.2 implies Assumption 3.1, which can be trivially satisfied by setting K̄_i = K^c_i. However, since K̄_i and K^c_i play different roles in the design algorithm to be presented, it can be useful to allow them to be different for performance enhancement.

3.2 DPC for continuous-time systems

The continuous-time distributed control law described in the following is composed of two terms: the first one is a standard state feedback, while the second one is computed by an MPC-based distributed control algorithm running with sampling period T at the sampling times t_k = kT, k ∈ N. For simplicity of notation, given the sampling instant t_k, the time instant t_k + hT is denoted by t_{k+h}.

Models: perturbed, nominal and auxiliary

In order to set up the proposed distributed control method, it is assumed that at any time instant t_k each subsystem i transmits to its neighbors its continuous-time future state and input reference trajectories x̃[i](t) and ũ[i](t), t ∈ [t_k, t_{k+N−1}). Moreover, by adding suitable constraints to the MPC formulation, each subsystem is able to guarantee that its state and control trajectories lie in specified time-invariant neighborhoods of the reference trajectories, i.e., for all t ∈ [t_k, t_{k+N−1}), x[i](t) − x̃[i](t) ∈ E_i and u[i](t) − ũ[i](t) ∈ U_i, where 0 ∈ E_i and 0 ∈ U_i. It is possible to rewrite (3.2) as the perturbed model

  ẋ[i](t) = A_ii x[i](t) + B_ii u[i](t) + Σ_{j∈N_i} (A_ij x̃[j](t) + B_ij ũ[j](t)) + w[i](t)        (3.3)

where the term Σ_{j∈N_i} (A_ij x̃[j](t) + B_ij ũ[j](t)) can be interpreted as a disturbance, known in advance over the future prediction horizon of length (N − 1)T (i.e., for all t ∈ [t_k, t_{k+N−1})), to be suitably compensated. On the other hand,

  w[i](t) = Σ_{j∈N_i} ( A_ij (x[j](t) − x̃[j](t)) + B_ij (u[j](t) − ũ[j](t)) ) ∈ W_i        (3.4)

is a bounded unknown disturbance (with W_i = ⊕_{j∈N_i} {A_ij E_j ⊕ B_ij U_j}) to be rejected.
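The neighbor sets N_i used in (3.3)-(3.4) can be read directly off the block structure of A and B. A small sketch (the function name and the example matrices are illustrative):

```python
import numpy as np

def neighbor_sets(A, B, dims_x, dims_u):
    """N_i = {j != i : A_ij != 0 or B_ij != 0}, from the block partition of A and B."""
    M = len(dims_x)
    xs = np.cumsum([0] + dims_x)   # state block boundaries
    us = np.cumsum([0] + dims_u)   # input block boundaries
    N = []
    for i in range(M):
        Ni = set()
        for j in range(M):
            if j == i:
                continue
            Aij = A[xs[i]:xs[i + 1], xs[j]:xs[j + 1]]
            Bij = B[xs[i]:xs[i + 1], us[j]:us[j + 1]]
            if np.any(Aij) or np.any(Bij):
                Ni.add(j)
        N.append(Ni)
    return N

# Example: 3 scalar subsystems; subsystem 1 affects 0 through A,
# subsystem 0 affects 2 through B
A = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.2, 0.0, 1.0]])
N = neighbor_sets(A, B, [1, 1, 1], [1, 1, 1])
```

For this example N = [{1}, set(), {0}]: couplings through either A or B both create neighbors, exactly as in the definition above.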
For the statement of the individual MPC sub-problems, hereafter denoted i-DPC problems, we rely on a continuous-time version, see [51],

of the robust MPC algorithm presented in [107] for constrained discrete-time linear systems with bounded disturbances. As a preliminary step, define the i-th subsystem nominal model, obtained from equation (3.3) by neglecting the disturbance w[i](t):

  x̂̇[i](t) = A_ii x̂[i](t) + B_ii û[i](t) + Σ_{j∈N_i} (A_ij x̃[j](t) + B_ij ũ[j](t))        (3.5)

The control law for the i-th perturbed subsystem (3.3) is given by

  u[i](t) = û[i](t) + K^c_i (x[i](t) − x̂[i](t))        (3.6)

where K^c_i is the feedback gain satisfying Assumption 3.2. Letting z[i](t) = x[i](t) − x̂[i](t), from equations (3.3) and (3.6) one obtains

  ż[i](t) = F_ii z[i](t) + w[i](t)        (3.7)

where w[i](t) ∈ W_i. Since W_i is bounded and F_ii is Hurwitz, it is possible to define a robust positively invariant (RPI) set Z_i for (3.7) (see, for example, [160] and [133]) such that, if z[i](t̄) ∈ Z_i, then z[i](t) ∈ Z_i for all t ≥ t̄. Here, we assume that the sets Z_i are such that there exist non-empty sets X̂_i ⊆ X_i ⊖ Z_i and Û_i ⊆ U_i ⊖ K^c_i Z_i. Given Z_i, define the neighborhoods of the origin E_i and U_i, i = 1, …, M, such that Z_i ⊆ E_i and K^c_i Z_i ⊆ U_i, respectively. Assume also that it is possible to guarantee that, for suitably defined sets Ē_i and Ū_i and for t ∈ [t_{k+N−1}, t_{k+N}), x̄[i](t) ∈ Ē_i and ū[i](t) ∈ Ū_i. Then, with reference to the time interval t ∈ [t_{k+N−1}, t_{k+N}], it is worth defining an auxiliary "decentralized" model, obtained from equation (3.5) by neglecting the known disturbance term:

  x̄̇[i](t) = A_ii x̄[i](t) + B_ii ū[i](t)        (3.8)

Similarly to (3.6), the term û[i](t) is set as

  û[i](t) = ū[i](t) + K̄_i (x̂[i](t) − x̄[i](t))        (3.9)

where K̄_i is the feedback gain satisfying Assumption 3.1.
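For a concrete feel of how an RPI set such as Z_i can be obtained for dynamics like (3.7), a standard Lyapunov-based outer construction is sketched below; the matrix F and the disturbance bound are hypothetical, and the resulting ellipsoidal set is generally more conservative than the polytopic constructions of [160] and [133]. With FᵀP + PF = −Q, the derivative of V = zᵀPz is negative outside a ball of radius r, so a sublevel set of V containing that ball is robustly invariant.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical Hurwitz local closed-loop matrix F_ii and disturbance bound
F = np.array([[-1.0, 0.5],
              [ 0.0, -2.0]])
wbar = 0.1                                  # assumed bound on ||w[i](t)||

# Lyapunov equation F' P + P F = -Q
Q = np.eye(2)
P = solve_continuous_lyapunov(F.T, -Q)

# Vdot <= -lmin(Q)||z||^2 + 2 ||P|| wbar ||z|| < 0 for ||z|| > r, hence
# {z : z' P z <= c} with c = lmax(P) r^2 is robustly positively invariant
r = 2.0 * np.linalg.norm(P, 2) * wbar / np.linalg.eigvalsh(Q).min()
c = np.linalg.eigvalsh(P).max() * r ** 2
```

The set {z : zᵀPz ≤ c} then plays the role of Z_i in the constructions above (under the stated assumptions on F and wbar).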
Letting s[i](t) = x̂[i](t) − x̄[i](t), from (3.5), (3.8) and (3.9) one has

  ṡ[i](t) = F̄_ii s[i](t) + w̄[i](t)        (3.10)

where w̄[i](t) = Σ_{j∈N_i} (A_ij x̃[j](t) + B_ij ũ[j](t)) ∈ W̄_i and W̄_i = ⊕_{j∈N_i} {A_ij Ē_j ⊕ B_ij Ū_j}. Since W̄_i is bounded and F̄_ii is Hurwitz, it is possible to define a further robust positively invariant (RPI) set S_i

for (3.10). The sets Ē_i and Ū_i must satisfy Ē_i ⊕ S_i ⊆ X̂_i, Ū_i ⊕ K̄_i S_i ⊆ Û_i, S_i ⊆ E_i and K̄_i S_i ⊆ U_i. Although not strictly necessary for the derivation of the properties of the proposed DPC method, for simplicity in the following it is assumed that the feed-forward term ū[i](t) is a piecewise constant signal, i.e., ū[i](t) = ū[i](t_{k+N−1}) for all t ∈ [t_{k+N−1}, t_{k+N}).

Statement of the i-DPC problems

At any time instant t_k, given the future reference trajectories x̃[j](t), ũ[j](t), t ∈ [t_k, t_{k+N−1}), j ∈ N_i ∪ {i}, for system i = 1, …, M we define the following i-DPC problem:

  min_{x̂[i](t_k), û[i]([t_k, t_{k+N−1})), x̄[i](t_{k+N−1}), ū[i](t_{k+N−1})} V_i^N        (3.11)

subject to (3.5), (3.8), to

  x[i](t_k) − x̂[i](t_k) ∈ Z_i        (3.12)
  x̂[i](t) − x̃[i](t) ∈ E_i        (3.13)
  û[i](t) − ũ[i](t) ∈ U_i        (3.14)
  x̂[i](t) ∈ X̂_i        (3.15)
  û[i](t) ∈ Û_i        (3.16)

for all t ∈ [t_k, t_{k+N−1}), to

  x̂[i](t_{k+N−1}) − x̄[i](t_{k+N−1}) ∈ S_i        (3.17)
  x̄[i](t) ∈ Ē_i        (3.18)
  ū[i](t) ∈ Ū_i        (3.19)

for all t ∈ [t_{k+N−1}, t_{k+N}), and to the terminal constraint

  x̄[i](t_{k+N}) ∈ X^F_i        (3.20)

where X^F_i is a terminal set related to the i-th nominal subsystem (3.8), specified in the following section. The cost function V_i^N is

  V_i^N = ∫_{t_k}^{t_{k+N−1}} ½ (‖x̂[i](t)‖²_{Q̂_i} + ‖û[i](t)‖²_{R̂_i}) dt + ½ ‖x̂[i](t_{k+N−1})‖²_{P̂_i}
          + λ ( ∫_{t_{k+N−1}}^{t_{k+N}} ½ (‖x̄[i](t)‖²_{Q̄_i} + ‖ū[i](t)‖²_{R̄_i}) dt + ½ ‖x̄[i](t_{k+N})‖²_{P̄_i} )        (3.21)

where λ is a positive constant and the symmetric, positive definite matrices Q̂_i, Q̄_i, R̂_i, R̄_i, P̂_i, and P̄_i are design parameters, chosen as specified later. Denoting by

  X_i(t_k) = (x̂[i](t_k), û[i]([t_k, t_{k+N−1})), x̄[i](t_{k+N−1}), ū[i](t_{k+N−1}))

the arguments of the cost function V_i^N, the optimal solution to the i-DPC problem at time t_k is the 4-tuple

  X_i(t_k|t_k) = (x̂[i](t_k|t_k), û[i]([t_k, t_{k+N−1})|t_k), x̄[i](t_{k+N−1}|t_k), ū[i](t_{k+N−1}|t_k))

The signal x̂[i](t|t_k), t ∈ [t_k, t_{k+N−1}] (respectively x̄[i](t|t_k), t ∈ [t_{k+N−1}, t_{k+N}]) is the solution to (3.5) (respectively to (3.8)) obtained with x̂[i](t_k|t_k) as initial condition and û[i]([t_k, t_{k+N−1})|t_k) as input (respectively with x̄[i](t_{k+N−1}|t_k) as initial condition and ū[i](t_{k+N−1}|t_k) as constant input). According to (3.6), the control law for the system (3.2), for t ∈ [t_k, t_{k+1}), is given by

  u[i](t) = û[i](t|t_k) + K^c_i (x[i](t) − x̂[i](t|t_k))        (3.22)

Finally, the reference trajectories x̃[i](t) and ũ[i](t) are incrementally defined as follows. Specifically, for t ∈ [t_{k+N−1}, t_{k+N}), we set

  x̃[i](t) = x̄[i](t|t_k)        (3.23a)
  ũ[i](t) = ū[i](t|t_k)        (3.23b)

Then, these pieces of trajectories are transmitted to the subsystems j such that i ∈ N_j, i.e., those which need their knowledge to compute the future predictions x̂[j](t). Remark that, by solely transmitting x̄[i](t_{k+N−1}|t_k) and ū[i](t_{k+N−1}|t_k), the whole trajectory x̄[i](t|t_k), t ∈ [t_{k+N−1}, t_{k+N}), can be exactly reconstructed by subsystem j, provided that j knows the dynamical model governing subsystem i.

Properties of DPC

In order to establish the main stability and convergence properties of the proposed distributed control law, the following definition and assumption must be introduced.
The sets of admissible initial conditions x(t₀) = (x[1](t₀), …, x[M](t₀)) and initial reference trajectories x̃[j](t), ũ[j](t), for all j = 1, …, M and t ∈ [t₀, t_{N−1}), are defined as follows.

Definition 3.1 Letting x = (x[1], …, x[M]), denote by

  X_N := {x : if x[i](t₀) = x[i] for all i = 1, …, M, then ∃ (x̃[1](t), …, x̃[M](t)), (ũ[1](t), …, ũ[M](t)) for all t ∈ [t₀, t_{N−1}), and ∃ X_i(t₀|t₀) such that (3.5), (3.8), and (3.12)-(3.20) are satisfied for all i = 1, …, M}

the feasibility region for all the i-DPC problems. Moreover, for each x ∈ X_N, let

  X_x := {(x̃[1](t), …, x̃[M](t)), (ũ[1](t), …, ũ[M](t)), for all t ∈ [t₀, t_{N−1}) : if x[i](t₀) = x[i] for all i = 1, …, M, then ∃ X_i(t₀|t₀) such that (3.5), (3.8), (3.12)-(3.20) are satisfied for all i = 1, …, M}

be the region of feasible initial reference trajectories.

Assumption 3.3 Given the sets E_i, U_i, and the RPI sets Z_i for equation (3.7), there exists a real positive constant ρ_E > 0 such that Z_i ⊕ B^{(n_i)}_{ρ_E}(0) ⊆ E_i and K^c_i Z_i ⊕ B^{(m_i)}_{ρ_E}(0) ⊆ U_i for all i = 1, …, M.

Then, it is possible to state the following result (see the Appendix for the proof).

Theorem 3.1 Let Assumptions 3.1, 3.2, and 3.3 be satisfied. Then there exist computable design parameters λ, Q̂_i, Q̄_i, R̂_i, R̄_i, P̂_i, P̄_i such that, for any initial reference trajectories in X_{x(t₀)}, the trajectory x(t), starting from any initial condition x(t₀) ∈ X_N, asymptotically converges to the origin.

A detailed discussion on how to select the design parameters and the sets of interest is reported in the following section.

Remark 3.1 In the optimization problem (3.11) it has been assumed that û[i]([t_k, t_{k+N−1})) is a generic function of time. However, for computational reasons, it is usually more convenient to resort to parameterized functions and to optimize with respect to the corresponding parameters.

3.3 Tuning of the design parameters

In this section we show how to compute design parameters which guarantee that Theorem 3.1 holds.

Choice of the control gains K^c_i, K̄_i

The control laws (3.6), (3.9) require the knowledge of the gains K^c_i and K̄_i satisfying Assumptions 3.1 and 3.2. While the terms K̄_i can be computed with any standard synthesis method provided that the pair (A_ii, B_ii) is stabilizable, the computation of K^c = diag(K^c₁, …, K^c_M) is more difficult, since both a collective and a number of local stability conditions must be fulfilled. For instance, this problem can be easily tackled in a centralized fashion by defining two block-diagonal matrices S = diag(S₁, …, S_M), S_i ∈ R^{n_i×n_i}, and Y = diag(Y₁, …, Y_M), Y_i ∈ R^{m_i×n_i}, and by solving the following set of LMIs, see [172]:

  S ≻ 0
  S_i ≻ 0, i = 1, …, M
  SAᵀ + AS + YᵀBᵀ + BY ≺ 0
  S_i A_iiᵀ + A_ii S_i + Y_iᵀ B_iiᵀ + B_ii Y_i ≺ 0, i = 1, …, M        (3.24)

Then, K^c = YS⁻¹ is the required stabilizing block-diagonal matrix.

Choice of Q̄_i, R̄_i, P̄_i, X^F_i

In order to define the matrices Q̄_i, R̄_i, P̄_i and the invariant set X^F_i, we must preliminarily define the auxiliary control law for system (3.8), which must be consistent with the simplifying assumption that ū[i](t) is piecewise constant. Assuming that the terminal constraint x̄[i](t_{k+N}) ∈ X^F_i is verified, we define the auxiliary control law, to be applied to system (3.8) for all t ∈ [t_{k+N}, t_{k+N+1}), as

  ū[i](t) = ū[i](t_{k+N}) = K^d_i x̄[i](t_{k+N})        (3.25)

where the gain K^d_i must stabilize the continuous-time system (3.8). Denoting, for all η ∈ [0, T], A^zoh_ii(η) = e^{A_ii η} and B^zoh_ii(η) = ∫₀^η e^{A_ii (η−ν)} B_ii dν, and given x̄[i](t_{k+N}), for all t ∈ [t_{k+N}, t_{k+N+1}] one has

  x̄[i](t) = F^zoh_ii(t − t_{k+N}) x̄[i](t_{k+N})
  x̄[i](t_{k+N+1}) = F^d_ii x̄[i](t_{k+N})        (3.26)

where F^zoh_ii(η) = A^zoh_ii(η) + B^zoh_ii(η) K^d_i and F^d_ii = F^zoh_ii(T). Therefore, the gains K^d_i can be computed with any standard stabilization method guaranteeing that F^d_ii is Schur.
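The matrices A^zoh_ii(η), B^zoh_ii(η) and F^d_ii of (3.26) can be computed with the standard augmented-matrix exponential trick. The subsystem and the gain K^d below are hypothetical choices used only to illustrate the step:

```python
import numpy as np
from scipy.linalg import expm

def zoh_pair(A, B, eta):
    """A_zoh(eta) = e^{A eta} and B_zoh(eta) = int_0^eta e^{A s} B ds,
    obtained from one exponential of the augmented matrix [[A, B], [0, 0]]."""
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    E = expm(M * eta)
    return E[:n, :n], E[:n, n:]

# Hypothetical subsystem and continuous-time stabilizing gain K_d
A = np.array([[0.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
Kd = np.array([[-1.0, -1.0]])    # A + B Kd has both eigenvalues at -1
T = 0.5

Azoh, Bzoh = zoh_pair(A, B, T)
Fd = Azoh + Bzoh @ Kd            # F^d_ii in (3.26)
```

For this example Fd is Schur, so the chosen K^d is admissible; for larger T a continuous-time stabilizing gain need not keep the sampled closed loop stable, so the Schur test should always be repeated for the actual T.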
This procedure also allows one to resort to the results reported in [101, Lemma 1], which turn out to be useful in the following developments. Specifically, given the symmetric

weighting matrices Q̄_i > 0 and R̄_i > 0 appearing in (3.21), which can be chosen as free design parameters, define two constants γ_i1 > 0, γ_i2 > 0 in such a way that

γ_i1 > λ_M(Q̄_i)    (3.27a)
γ_i2 > T ||K^d_i||^2 λ_M(R̄_i)    (3.27b)

Furthermore, define a matrix Q_i in such a way that λ_m(Q_i) > γ_i1. Let the symmetric matrix P̄_i be the unique positive definite solution of the following Lyapunov equation

(F^d_ii)^T P̄_i F^d_ii − P̄_i + Q_i = 0    (3.28)

where Q_i = ∫_0^T (F^zoh_ii(η))^T Q̄_i F^zoh_ii(η) dη + γ_i2 I. Then, for each pair of sets Ē_i, Ū_i, it is proven in [101] that there exist a sampling period T ∈ (0, +∞) and a constant c_i > 0 such that the set

X^F_i(K^d_i, T) = {x̄^[i] : ||x̄^[i]||^2_{P̄_i} ≤ c_i}    (3.29)

satisfies, for all x̄^[i](t_{k+N}) ∈ X^F_i and for all t ∈ [t_{k+N}, t_{k+N+1}), the conditions

x̄^[i](t) ∈ Ē_i,  K^d_i x̄^[i](t_{k+N}) ∈ Ū_i    (3.30a)
||x̄^[i](t_{k+N+1})||^2_{P̄_i} − ||x̄^[i](t_{k+N})||^2_{P̄_i} ≤ −γ_i1 ∫_{t_{k+N}}^{t_{k+N+1}} ||x̄^[i](η)||^2 dη − γ_i2 ||x̄^[i](t_{k+N})||^2    (3.30b)

Letting

l̄_i(x̄^[i](t), ū^[i](t)) = (λ/2) ∫_t^{t+T} (||x̄^[i](η)||^2_{Q̄_i} + ||ū^[i](η)||^2_{R̄_i}) dη    (3.31a)
V^F_i(x̄^[i](t)) = (λ/2) ||x̄^[i](t)||^2_{P̄_i}    (3.31b)

from the definition of γ_i1 > 0, γ_i2 > 0 and V^F_i, and recalling (3.25), (3.30b) implies that x̄^[i](t_{k+N+1}) ∈ X^F_i and

V^F_i(x̄^[i](t_{k+N+1})) − V^F_i(x̄^[i](t_{k+N})) ≤ −(λ/2) ∫_{t_{k+N}}^{t_{k+N+1}} (||x̄^[i](η)||^2_{Q̄_i} + ||ū^[i](η)||^2_{R̄_i}) dη ≤ −l̄_i(x̄^[i](t_{k+N}), ū^[i](t_{k+N}))    (3.32)
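The two numerical ingredients of this tuning step, the zero-order-hold matrices A^zoh_ii(T), B^zoh_ii(T) and the solution P̄_i of a discrete Lyapunov equation of the form (3.28), are both standard computations. A minimal sketch (the system, gain, and weight below are illustrative placeholders, and Q stands in for the full weight Q_i of (3.28)):

```python
import numpy as np
from scipy.linalg import expm, solve_discrete_lyapunov

# Illustrative single subsystem (all numbers are placeholders).
A_ii = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B_ii = np.array([[0.0], [1.0]])
K_d  = np.array([[-1.0, -1.5]])             # a gain making F_ii^d Schur
T    = 0.5                                   # sampling period

# Zero-order-hold discretization via the augmented-matrix exponential:
# expm([[A, B], [0, 0]] * T) contains A_zoh = e^{A T} in its upper-left
# block and B_zoh = ∫_0^T e^{A ν} B dν in its upper-right block.
n, m = A_ii.shape[0], B_ii.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n], M[:n, n:] = A_ii, B_ii
Md = expm(M * T)
A_zoh, B_zoh = Md[:n, :n], Md[:n, n:]

F_d = A_zoh + B_zoh @ K_d                    # discrete closed-loop matrix
assert np.max(np.abs(np.linalg.eigvals(F_d))) < 1.0  # F_ii^d must be Schur

# Solve the discrete Lyapunov equation (F_d)^T P F_d - P + Q = 0;
# scipy's convention solves A X A^T - X + Q = 0, hence the transpose.
Q = np.eye(n)                                # placeholder for Q_i in (3.28)
P = solve_discrete_lyapunov(F_d.T, Q)
```

The same augmented-exponential trick also evaluates the integral defining Q_i, since ∫_0^T F^zoh(η)^T Q̄ F^zoh(η) dη can be approximated on a fine grid of η values.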

Therefore, since properties (3.30a)-(3.32) are required to establish the main properties of the method (see the proof of Theorem 3.1), for any pair Q̄_i, R̄_i it is required to choose the weights P̄_i in (3.21) according to (3.28) and the terminal set X^F_i in (3.20) according to (3.29).

Choice of Q̂_i, R̂_i, P̂_i, λ

The symmetric, positive definite matrices Q̂_i, R̂_i can be freely chosen according to specific design criteria, while, in order to guarantee the stability properties of Theorem 3.1, given an arbitrary constant α > 1, the matrix P̂_i must be computed to satisfy the following Lyapunov equation:

Φ^[i]_x(T)^T P̂_i Φ^[i]_x(T) − P̂_i + Q^[i]_x + αI_{n_i} = 0    (3.33)

where

Q^[i]_x = ∫_0^T (Φ^[i]_x(η)^T Q̂_i Φ^[i]_x(η) + Φ^[i]_u(η)^T R̂_i Φ^[i]_u(η)) dη

and, for all η ∈ [0, T], Φ^[i]_x(η) = e^{F_ii η} and Φ^[i]_u(η) = K̄_i Φ^[i]_x(η). For the tuning of the scalar λ, there exists a positive number λ̄ > 0 such that, if λ ≥ λ̄, then the convergence of the scheme is guaranteed. For a numerical assessment of λ̄, see the procedure discussed in the sequel.

3.4 Simulation example

Consider the problem of regulating the levels y_i, i = 1, ..., 5, of the five flotation tanks system discussed in [158] (see also Chapter 2), where a flow of pulp q enters into the first one. The tanks are connected in cascade with control valves between subsequent reservoirs (Figure 3.1), and the manipulated inputs are the signals to the valves v_i, i = 1, ..., 5. We refer the reader to Chapter 2 for details about the dynamic model of the system, its parameters, the considered equilibrium point and the constraints on inputs and states. For the sake of simplicity, in this case external disturbances are not considered. Let δy_i = y_i − ȳ_i, i = 1, ..., 5, δv_i = v_i − v̄_i, i = 1, ..., 5, x = (δy_1, δy_2, δy_3, δy_4, δy_5) and u = (δv_1, δv_2, δv_3, δv_4, δv_5). The partition of inputs and states, for i = 1, ..., 5, is x^[i] = δy_i, u^[i] = δv_i.
The weighting matrices, for i = 1, ..., 5, are Q̄_i = R̄_i = 1 and Q̂_i = R̂_i = 1.

Figure 3.1: Schematic representation of the cascade-coupled flotation tanks.

Note that, since the subsystems all have one state, the RPI sets can be computed easily. In fact, for a scalar system having dynamics ẋ(t) = −λx(t) + w(t), λ > 0, with w ∈ W = {w ∈ R : −b̄ ≤ w ≤ b̄}, the RPI set Z is given by Z = {x ∈ R : −b̄/λ ≤ x ≤ b̄/λ} [133]. In this way, the computation of the RPI sets Z_i reduces to a linear programming problem. The terms of the cost function, moreover, can be computed easily by solving standard integrals. For subsystems with more than one state, symbolic calculus tools should be used. In Figure 3.2 the state trajectories, obtained in simulation using the continuous-time nonlinear model, are depicted. Figure 3.3 shows the applied inputs. In both cases, a comparison with a standard centralized discrete-time MPC algorithm is provided.

3.5 Conclusions

In this Chapter we have presented a non-cooperative distributed predictive control algorithm for continuous-time systems based on robust MPC, whose convergence properties have been proved. A realistic case study has been used for testing the performance of the algorithm. After the in-depth study of the regulation problem presented in the first two chapters, in the next ones we will present some solutions to the tracking problem.

Figure 3.2: Trajectories of the states x^[1] (solid lines) and x^[2] (dashed lines), on the left, and x^[3] (solid lines), x^[4] (dashed lines), x^[5] (dash-dot lines), on the right, obtained with DPC (black lines) and with cMPC (gray lines) for the control of the flotation tanks.

Figure 3.3: Inputs u^[1] (solid lines) and u^[2] (dashed lines), on the left, and u^[3] (solid lines), u^[4] (dashed lines), u^[5] (dash-dot lines), on the right, used with DPC (black lines) and with cMPC (gray lines) for the control of the flotation tanks.

Appendix

Recursive feasibility

First we prove that, given, for all i = 1, ..., M, an optimal feasible solution X_i(t_k|t_k) to (3.11) at time t_k, the 4-tuple

X_i(t_{k+1}|t_k) = (x̂^[i](t_{k+1}|t_k), (û^[i]([t_{k+1}, t_{k+N-1})|t_k), ū^[i](t|t_k) + K̄_i(x̂^[i](t|t_k) − x̄^[i](t|t_k)), t ∈ [t_{k+N-1}, t_{k+N})), x̄^[i](t_{k+N}|t_k), K^d_i x̄^[i](t_{k+N}|t_k))    (3.34)

is a feasible solution to (3.11) at time t_{k+1}. Recall that, according to (3.23), after the solution to (3.11) is computed at time t_k, each subsystem j transmits x̃^[j]([t_{k+N-1}, t_{k+N})) = x̄^[j]([t_{k+N-1}, t_{k+N})|t_k) and ũ^[j]([t_{k+N-1}, t_{k+N})) = ū^[j]([t_{k+N-1}, t_{k+N})|t_k) to all the subsystems i satisfying j ∈ N_i. Importantly, in (3.34), for t ∈ [t_{k+N-1}, t_{k+N}), the trajectory x̂^[i](t|t_k) is computed, by subsystem i, according to system (3.5), with û^[i](t) = ū^[i](t|t_k) + K̄_i(x̂^[i](t|t_k) − x̄^[i](t|t_k)). Therefore it results that, for all t ∈ [t_{k+N-1}, t_{k+N}),

dx̂^[i](t|t_k)/dt = A_ii x̂^[i](t|t_k) + B_ii(ū^[i](t|t_k) + K̄_i(x̂^[i](t|t_k) − x̄^[i](t|t_k))) + Σ_{j∈N_i}(A_ij x̄^[j](t|t_k) + B_ij ū^[j](t|t_k))
= (A_ii + B_ii K̄_i) x̂^[i](t|t_k) − B_ii K̄_i x̄^[i](t|t_k) + Σ_{j∈N_i} A_ij x̄^[j](t|t_k) + Σ_{j=1}^{M} B_ij ū^[j](t|t_k)    (3.35)

On the other hand, the trajectory x̄^[i](t|t_k), for all t ∈ [t_{k+N}, t_{k+N+1}], is computed according to (3.8) with ū^[i](t|t_k) = K^d_i x̄^[i](t_{k+N}|t_k), and therefore x̄^[i](t|t_k) = F^zoh_ii(t − t_{k+N}) x̄^[i](t_{k+N}|t_k). From (3.12), x^[i](t_k) − x̂^[i](t_k) ∈ Z_i and, from (3.13)-(3.14), for t ∈ [t_k, t_{k+1}), it is guaranteed that x̂^[j](t) − x̃^[j](t) ∈ E_j and û^[j](t) − ũ^[j](t) ∈ U_j for all j ∈ N_i, and that w^[i](t) ∈ W_i. Therefore, in view of the invariance of Z_i with respect to (3.7), it holds that x^[i](t_{k+1}) − x̂^[i](t_{k+1}|t_k) ∈ Z_i. For t ∈ [t_{k+1}, t_{k+N-1}), constraints (3.13), (3.14), (3.15) and (3.16) are verified in view of the feasibility of (3.11) at time t_k.
For t ∈ [t_{k+N-1}, t_{k+N}), recalling (3.35), we have that

d(x̂^[i](t|t_k) − x̄^[i](t|t_k))/dt = (A_ii + B_ii K̄_i)(x̂^[i](t|t_k) − x̄^[i](t|t_k)) + Σ_{j∈N_i}(A_ij x̄^[j](t|t_k) + B_ij ū^[j](t|t_k))    (3.36)

and recall also that x̃^[i](t) = x̄^[i](t|t_k) and ũ^[i](t) = ū^[i](t|t_k) for all i = 1, ..., M. In view of (3.17), x̂^[i](t_{k+N-1}|t_k) − x̄^[i](t_{k+N-1}|t_k) ∈ S_i and, from (3.18)-(3.19), it is guaranteed that Σ_{j∈N_i}(A_ij x̄^[j](t|t_k) + B_ij ū^[j](t|t_k)) ∈ W̄_i. In view of the invariance of S_i with respect to (3.10), it holds that x̂^[i](t|t_k) − x̄^[i](t|t_k) = x̂^[i](t|t_k) − x̃^[i](t) ∈ S_i. Furthermore, since û^[i](t|t_k) − ū^[i](t|t_k) = û^[i](t|t_k) − ũ^[i](t) ∈ K̄_i S_i, and since S_i ⊆ E_i and K̄_i S_i ⊆ U_i, then (3.13) and (3.14) are also verified for t ∈ [t_{k+N-1}, t_{k+N}). This also proves that x̂^[i](t_{k+N}|t_k) − x̄^[i](t_{k+N}|t_k) ∈ S_i and that (3.17) is satisfied. Moreover, being Ē_i ⊕ S_i ⊆ X̂_i and Ū_i ⊕ K̄_i S_i ⊆ Û_i, constraints (3.15) and (3.16) are verified for t ∈ [t_{k+N-1}, t_{k+N}). Finally, since x̄^[i](t_{k+N}) ∈ X^F_i in view of (3.20), and in view of the definition (3.29) of X^F_i and of (3.30b), the constraints (3.18), (3.19), (3.20) are also verified at time t_{k+1}. In view of this, the 4-tuple X_i(t_{k+1}|t_k) is a feasible solution to (3.11) at time t_{k+1}. This implies that, given the optimal solution X_i(t_{k+1}) to the problem (3.11) at time t_{k+1} (which is proved to exist, provided that (3.11) is feasible at time t_k), for all i = 1, ..., M it holds, by optimality, that

V^N_i(x(t_{k+1})) = V^N_i(X_i(t_{k+1})) ≤ V^N_i(X_i(t_{k+1}|t_k))    (3.37)

The collective problem

To prove the convergence to zero of the solution, we now define the collective problem, equivalent to the one considered in the previous sections.
Define the vectors x̂(t) = (x̂^[1](t), ..., x̂^[M](t)), x̄(t) = (x̄^[1](t), ..., x̄^[M](t)), ū(t) = (ū^[1](t), ..., ū^[M](t)), w(t) = (w^[1](t), ..., w^[M](t)), z(t) = (z^[1](t), ..., z^[M](t)), x̃(t) = (x̃^[1](t), ..., x̃^[M](t)), û(t) = (û^[1](t), ..., û^[M](t)), ũ(t) = (ũ^[1](t), ..., ũ^[M](t)), w̄(t) = (w̄^[1](t), ..., w̄^[M](t)), s(t) = (s^[1](t), ..., s^[M](t)). Then define the matrices Ā = diag(A_11, ..., A_MM), B̄ = diag(B_11, ..., B_MM), Ã = A − Ā, B̃ = B − B̄. Collectively, we write equations (3.3), (3.5), and (3.8) as

ẋ(t) = Ā x(t) + B̄ u(t) + Ã x̃(t) + B̃ ũ(t) + w(t)    (3.38)
dx̂(t)/dt = Ā x̂(t) + B̄ û(t) + Ã x̃(t) + B̃ ũ(t)    (3.39)
dx̄(t)/dt = Ā x̄(t) + B̄ ū(t)    (3.40)
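The splitting A = Ā + Ã used in (3.38)-(3.40), with Ā carrying the diagonal blocks A_ii and Ã collecting only the coupling terms, is a purely structural operation. A minimal sketch with illustrative numbers:

```python
import numpy as np
from scipy.linalg import block_diag

# Illustrative 2-subsystem collective matrix with n_1 = n_2 = 2.
A = np.arange(16, dtype=float).reshape(4, 4)
sizes = [2, 2]                     # state dimensions n_i of the subsystems

# Extract the diagonal blocks A_ii and rebuild Ā = diag(A_11, ..., A_MM).
blocks, row = [], 0
for n_i in sizes:
    blocks.append(A[row:row + n_i, row:row + n_i])
    row += n_i
A_bar = block_diag(*blocks)        # Ā: local dynamics only
A_tilde = A - A_bar                # Ã: off-diagonal coupling blocks only
```

The same pattern applied to B gives B̄ and B̃; the diagonal blocks of Ã and B̃ are zero by construction, i.e. each subsystem sees no self-coupling through them.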

In view of (3.6) and (3.9), u(t) = û(t) + K^c(x(t) − x̂(t)) and û(t) = ū(t) + K̄(x̂(t) − x̄(t)). From this, and in view of (3.7) and (3.10),

ż(t) = (Ā + B̄K^c) z(t) + w(t)    (3.41)
ṡ(t) = (Ā + B̄K̄) s(t) + w̄(t)    (3.42)

Minimizing (3.11) at time t_k for all i = 1, ..., M is equivalent to solving the following collective minimization problem

V^N(x(t_k)) = min_{X(t_k)} V^N(X(t_k))    (3.43)

where X(t_k) = (X_1(t_k), ..., X_M(t_k)), subject to the dynamic constraints (3.39), (3.40), to

x(t_k) − x̂(t_k) ∈ Z = Π_{i=1}^M Z_i    (3.44a)

for all t ∈ [t_k, t_{k+N-1}), to

x̂(t) − x̃(t) ∈ E = Π_{i=1}^M E_i    (3.44b)
û(t) − ũ(t) ∈ U = Π_{i=1}^M U_i    (3.44c)
x̂(t) ∈ X̂ = Π_{i=1}^M X̂_i    (3.44d)
û(t) ∈ Û = Π_{i=1}^M Û_i    (3.44e)

to

x̂(t_{k+N-1}) − x̄(t_{k+N-1}) ∈ S = Π_{i=1}^M S_i    (3.45)
x̄(t) ∈ Ē = Π_{i=1}^M Ē_i    (3.46)
ū(t) ∈ Ū = Π_{i=1}^M Ū_i    (3.47)

and to the terminal constraint

x̄(t_{k+N}) ∈ X^F = Π_{i=1}^M X^F_i    (3.48)

The collective cost function V^N is

V^N = Σ_{h=0}^{N-2} l̂(x̂(t_{k+h}), û(t_{k+h})) + V̂^F(x̂(t_{k+N-1})) + l̄(x̄(t_{k+N-1}), ū(t_{k+N-1})) + V̄^F(x̄(t_{k+N}))    (3.49)

where, from (3.31):

l̂(x̂(t), û(t)) = (1/2) ∫_t^{t+T} (||x̂(η)||^2_{Q̂} + ||û(η)||^2_{R̂}) dη    (3.50a)
l̄(x̄(t), ū(t)) = (λ/2) ∫_t^{t+T} (||x̄(η)||^2_{Q̄} + ||ū(η)||^2_{R̄}) dη    (3.50b)
V̂^F(x̂(t)) = (1/2) ||x̂(t)||^2_{P̂}    (3.50c)
V̄^F(x̄(t)) = (λ/2) ||x̄(t)||^2_{P̄}    (3.50d)

and Q̂ = diag(Q̂_1, ..., Q̂_M), R̂ = diag(R̂_1, ..., R̂_M), P̂ = diag(P̂_1, ..., P̂_M), Q̄ = diag(Q̄_1, ..., Q̄_M), R̄ = diag(R̄_1, ..., R̄_M), and P̄ = diag(P̄_1, ..., P̄_M).

Proof of convergence

Denote with X(t_k|t_k) = (X_1(t_k|t_k), ..., X_M(t_k|t_k)) the optimal solution to (3.43) at time t_k, and with X(t_{k+1}|t_k) = (X_1(t_{k+1}|t_k), ..., X_M(t_{k+1}|t_k)) the feasible (non-optimal) solution to (3.43) at time t_{k+1}, where X_i(t_{k+1}|t_k) is defined in (3.34), for all i = 1, ..., M. From (3.37) we have that

V^N(x(t_{k+1})) − V^N(x(t_k)) ≤ V^N(X(t_{k+1}|t_k)) − V^N(X(t_k|t_k)) ≤ −l̂(x̂(t_k|t_k), û(t_k|t_k)) + (a) + (b)    (3.51)

where

(a) = l̂(x̂(t_{k+N-1}|t_k), û(t_{k+N-1}|t_k)) − l̄(x̄(t_{k+N-1}|t_k), ū(t_{k+N-1}|t_k)) + V̂^F(x̂(t_{k+N}|t_k)) − V̂^F(x̂(t_{k+N-1}|t_k))
(b) = l̄(x̄(t_{k+N}|t_k), ū(t_{k+N}|t_k)) + V̄^F(x̄(t_{k+N+1}|t_k)) − V̄^F(x̄(t_{k+N}|t_k))

Consider first term (b). If the matrices P̄_i, i = 1, ..., M, are chosen as the solutions to the Lyapunov equations (3.28) then, from (3.32), for all i = 1, ..., M,

V^F_i(x̄^[i](t_{k+N+1}|t_k)) − V^F_i(x̄^[i](t_{k+N}|t_k)) ≤ −l̄_i(x̄^[i](t_{k+N}|t_k), ū^[i](t_{k+N}|t_k))    (3.52)

which, collectively, implies that (b) ≤ 0. Considering now term (a), define the following collective quantities: F̄ = diag(F̄_11, ..., F̄_MM), A^zoh(η) = diag(A^zoh_11(η), ..., A^zoh_MM(η)), B^zoh(η) = diag(B^zoh_11(η), ..., B^zoh_MM(η)). Since ū(t|t_k) is constant for all t ∈ [t_{k+N-1}, t_{k+N}), and recalling (3.35), for t ∈ [t_{k+N-1}, t_{k+N}] it results that

x̄(t|t_k) = A^zoh(t − t_{k+N-1}) x̄(t_{k+N-1}|t_k) + B^zoh(t − t_{k+N-1}) ū(t_{k+N-1}|t_k)    (3.53a)

and, from (3.35),

dx̂(t|t_k)/dt = F̄ x̂(t|t_k) + (Ã − B̄K̄) x̄(t|t_k) + B ū(t|t_k)
= F̄ x̂(t|t_k) + (Ã − B̄K̄) A^zoh(t − t_{k+N-1}) x̄(t_{k+N-1}|t_k) + ((Ã − B̄K̄) B^zoh(t − t_{k+N-1}) + B) ū(t|t_k)    (3.53b)

Therefore, solving (3.53b), we obtain

x̂(t|t_k) = Φ̄_x(t − t_{k+N-1}) x̂(t_{k+N-1}|t_k) + Γ̄_1x(t − t_{k+N-1}) x̄(t_{k+N-1}|t_k) + Γ̄_2x(t − t_{k+N-1}) ū(t_{k+N-1}|t_k)    (3.54)
û(t|t_k) = Φ̄_u(t − t_{k+N-1}) x̂(t_{k+N-1}|t_k) + Γ̄_1u(t − t_{k+N-1}) x̄(t_{k+N-1}|t_k) + Γ̄_2u(t − t_{k+N-1}) ū(t_{k+N-1}|t_k)    (3.55)

where Φ̄_x(η) = e^{F̄η}, Γ̄_1x(η) = ∫_0^η e^{F̄(η−ν)} (Ã − B̄K̄) A^zoh(ν) dν, Γ̄_2x(η) = ∫_0^η e^{F̄(η−ν)} ((Ã − B̄K̄) B^zoh(ν) + B) dν, Φ̄_u(η) = K̄ Φ̄_x(η), Γ̄_1u(η) = K̄(Γ̄_1x(η) − A^zoh(η)), Γ̄_2u(η) = I + K̄(Γ̄_2x(η) − B^zoh(η)). Denote, for brevity, x̂_{k+N-1} = x̂(t_{k+N-1}|t_k), v_{k+N-1} = (x̄(t_{k+N-1}|t_k), ū(t_{k+N-1}|t_k)), and

Γ̄_x(η) = [Γ̄_1x(η)  Γ̄_2x(η)]
Γ̄_u(η) = [Γ̄_1u(η)  Γ̄_2u(η)]
Ā_x(η) = [A^zoh(η)  B^zoh(η)]
Ā_u(η) = [0  I]

Then, in view of (3.54)-(3.55), we compute the elements of term (a) as

l̂(x̂(t_{k+N-1}|t_k), û(t_{k+N-1}|t_k)) = (1/2) ||x̂_{k+N-1}||^2_{∫_0^T (Φ̄_x(η)^T Q̂ Φ̄_x(η) + Φ̄_u(η)^T R̂ Φ̄_u(η)) dη} + (1/2) ||v_{k+N-1}||^2_{∫_0^T (Γ̄_x(η)^T Q̂ Γ̄_x(η) + Γ̄_u(η)^T R̂ Γ̄_u(η)) dη} + x̂^T_{k+N-1} (∫_0^T (Φ̄_x(η)^T Q̂ Γ̄_x(η) + Φ̄_u(η)^T R̂ Γ̄_u(η)) dη) v_{k+N-1}    (3.56a)

V̂^F(x̂(t_{k+N}|t_k)) = (1/2) ||x̂_{k+N-1}||^2_{Φ̄_x(T)^T P̂ Φ̄_x(T)} + (1/2) ||v_{k+N-1}||^2_{Γ̄_x(T)^T P̂ Γ̄_x(T)} + x̂^T_{k+N-1} Φ̄_x(T)^T P̂ Γ̄_x(T) v_{k+N-1}    (3.56b)

l̄(x̄(t_{k+N-1}|t_k), ū(t_{k+N-1}|t_k)) = (λ/2) ||v_{k+N-1}||^2_{∫_0^T (Ā_x(η)^T Q̄ Ā_x(η) + Ā_u(η)^T R̄ Ā_u(η)) dη}    (3.56c)

Define, for simplicity:

Q_v = ∫_0^T (Γ̄_x(η)^T Q̂ Γ̄_x(η) + Γ̄_u(η)^T R̂ Γ̄_u(η)) dη    (3.57a)
S_xv = ∫_0^T (Φ̄_x(η)^T Q̂ Γ̄_x(η) + Φ̄_u(η)^T R̂ Γ̄_u(η)) dη    (3.57b)
R_v = ∫_0^T (Ā_x(η)^T Q̄ Ā_x(η) + Ā_u(η)^T R̄ Ā_u(η)) dη    (3.57c)

Therefore

(a) = (1/2) ||x̂_{k+N-1}||^2_{Φ̄_x(T)^T P̂ Φ̄_x(T) − P̂ + Q_x} + (1/2) ||v_{k+N-1}||^2_{Γ̄_x(T)^T P̂ Γ̄_x(T) + Q_v − λR_v} + x̂^T_{k+N-1} (Φ̄_x(T)^T P̂ Γ̄_x(T) + S_xv) v_{k+N-1}    (3.58)

Recall that P̂ is the block-diagonal matrix whose blocks P̂_i satisfy (3.33) for all i = 1, ..., M, i.e., such that P̂ satisfies

Φ̄_x(T)^T P̂ Φ̄_x(T) − P̂ + Q_x + αI = 0    (3.59)

where α > 1 is an arbitrary scalar. The following procedure is proposed for defining a suitable scalar λ̄ and matrix P̂.

1. Define S^P_xv = Φ̄_x(T)^T P̂ Γ̄_x(T) + S_xv and an arbitrary scalar β > 0 such that

βI ≥ (S^P_xv)^T S^P_xv    (3.60)

or, equivalently,

β ≥ ||S^P_xv||^2_2    (3.61)

2. Define λ̄ as the smallest value of λ > 0 satisfying

λR_v ≥ Γ̄_x(T)^T P̂ Γ̄_x(T) + Q_v + βI    (3.62)

Note that, given P̂ and β > 0, since R_v > 0, it is always possible to define λ̄ > 0. Finally, set λ > λ̄.

According to the sketched procedure, and in view of (3.59) and (3.62), from (3.58) we can write

(a) ≤ −(α/2) ||x̂_{k+N-1}||^2 − (β/2) ||v_{k+N-1}||^2 + x̂^T_{k+N-1} S^P_xv v_{k+N-1}    (3.63)

Since

(1/2) ||x̂_{k+N-1} − S^P_xv v_{k+N-1}||^2 = (1/2) ||x̂_{k+N-1}||^2 + (1/2) ||v_{k+N-1}||^2_{(S^P_xv)^T S^P_xv} − x̂^T_{k+N-1} S^P_xv v_{k+N-1} ≥ 0

it follows that

x̂^T_{k+N-1} S^P_xv v_{k+N-1} ≤ (1/2) ||x̂_{k+N-1}||^2 + (1/2) ||v_{k+N-1}||^2_{(S^P_xv)^T S^P_xv}

and, from (3.63),

(a) ≤ −((α − 1)/2) ||x̂_{k+N-1}||^2 + (1/2) ||v_{k+N-1}||^2_{(S^P_xv)^T S^P_xv − βI}    (3.64)

Therefore, since β satisfies (3.60) and α > 1, then (a) ≤ 0. From (3.51), and having proved that both (a) ≤ 0 and (b) ≤ 0, we obtain that

V^N(x(t_{k+1})) − V^N(x(t_k)) ≤ −l̂(x̂(t_k|t_k), û(t_k|t_k))    (3.65)

Therefore l̂(x̂(t_k|t_k), û(t_k|t_k)) → 0 as k → ∞. Under suitable smoothness assumptions on û(t|t_k) and x̂(t|t_k), and since Q̂ > 0 and R̂ > 0, it follows that x̂([t_k, t_{k+1})|t_k) → 0 and û([t_k, t_{k+1})|t_k) → 0 as k → ∞. Recall now system (3.1) where, for all k ∈ N and t ∈ [t_k, t_{k+1}), u(t) = û(t|t_k) + K^c(x(t) − x̂(t|t_k)). We can write

ẋ(t) = (A + BK^c) x(t) + B(û(t|t_k) − K^c x̂(t|t_k))

Since B(û(t|t_k) − K^c x̂(t|t_k)) is an asymptotically vanishing term, and since A + BK^c is Hurwitz in view of Assumption 3.2, we obtain that x(t) → 0 as t → ∞.
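Step 2 of the procedure is a generalized eigenvalue computation: since R_v > 0, the smallest λ satisfying (3.62) is the largest eigenvalue of R_v^{-1/2} M R_v^{-1/2}, where M stands for the right-hand side Γ̄_x(T)^T P̂ Γ̄_x(T) + Q_v + βI. A numerical sketch with placeholder symmetric positive definite matrices (not computed from a real plant):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive definite matrix (placeholder data)."""
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

n = 4
R_v = random_spd(n)
M = random_spd(n)        # stands for Γ̄_x(T)^T P̂ Γ̄_x(T) + Q_v + βI

# λ̄ = λ_max(R_v^{-1/2} M R_v^{-1/2}); any λ > λ̄ then satisfies (3.62).
w, V = np.linalg.eigh(R_v)
R_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
lam_bar = np.max(np.linalg.eigvalsh(R_inv_sqrt @ M @ R_inv_sqrt))

lam = 1.01 * lam_bar
# Verify: lam * R_v - M must be positive semidefinite.
residual_eigs = np.linalg.eigvalsh(lam * R_v - M)
```

At λ = λ̄ the matrix λR_v − M is singular but positive semidefinite, and for any smaller λ it has a negative eigenvalue, which is why the procedure prescribes λ strictly larger than λ̄.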


Part III

Solutions to the tracking problem


Introduction to the tracking problem

Most of the contributions in the field of model predictive control refer to the so-called regulation problem, i.e., asymptotically steering the state of the system to zero [105, 135, 136]. On the contrary, minor attention (see, e.g., [53, 88]) has been paid to the design of control schemes for the asymptotic tracking of constant reference output signals, which represents an important issue from the industrial point of view. In the development of efficient industrial MPC algorithms, two main issues must be considered: i) the offset-free problem and ii) the unfeasible reference problem. The offset-free problem requires developing methods which can guarantee asymptotic zero-error regulation for piecewise constant and feasible reference signals, while the unfeasible reference problem aims at finding suitable solutions when the nominal constant reference signal cannot be reached due to the presence of state and/or control constraints.

Regarding the offset-free problem, many solutions have been proposed so far. The most popular one consists of augmenting the model of the plant under control with an artificial disturbance, which must be estimated together with the system state. This disturbance can account for possible model mismatch or for the presence of real unknown exogenous signals. Depending on its assumed dynamics, many algorithms have been developed, such as those described in [96, 111, 113, 118, 120, 123]. Another approach directly relies on the Internal Model Principle [39], where an internal model of the reference, i.e., an integrator, is directly included in the control scheme and fed by the output error. Then, the MPC algorithm is designed to stabilize the ensemble of the plant and the integrator. This strategy has been followed in [99, 104], where also more general exogenous signals and nonlinear, non-square systems have been considered. A third solution to the offset-free problem consists of describing the system in the so-called velocity-form, see [122, 168], where the enlarged state is composed of the state increments and the output error, while the manipulated variable is the control increment.

The unfeasible reference problem has been discussed in, e.g., [29, 119, 135]. More recently, a bright solution has been proposed in [6, 87-89], see Chapter 1. In these papers, the MPC cost function is complemented by a term explicitly penalizing the distance between the required (possibly unfeasible) reference signal and an artificial, but feasible, reference, which turns out to be one of the optimization variables. Stability and convergence results are proven both in nominal conditions and for perturbed systems.

As for the contributions in the field of distributed control, only a few papers on distributed MPC for tracking have been published (see, e.g., [53], where a cooperative distributed MPC scheme is proposed). Indeed, in a distributed setting, the standard approach, based on the reformulation of the tracking problem as a regulation one by computing, at any set-point change of the output, the corresponding state and control target values, cannot be followed due to the decentralization constraint. Often, in industrial applications, hierarchical structures are used [149], e.g., including i) a Real Time Optimization (RTO) layer for computing the optimal operating conditions and ii) a centralized regulator based on Model Predictive Control (MPC) for tracking purposes.
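To make the disturbance-augmentation idea mentioned above concrete, the sketch below closes the loop on a scalar plant with an integrating input disturbance, a hand-tuned Luenberger observer for the augmented state, and steady-state target recalculation from the disturbance estimate. All numbers and gains are illustrative placeholders; this is the generic textbook construction, not the algorithm developed in this thesis.

```python
import numpy as np

# Augmented model: x+ = A x + B u + Bd d,  d+ = d,  y = C x + Cd d.
A, B, C = 0.8, 1.0, 1.0
Bd, Cd = 1.0, 0.0                # the disturbance enters the state equation

# Luenberger observer for the augmented state (x, d):
Aa = np.array([[A, Bd], [0.0, 1.0]])
Ba = np.array([[B], [0.0]])
Ca = np.array([[C, Cd]])
L = np.array([[0.6], [0.2]])     # observer gain (chosen by hand)
assert np.max(np.abs(np.linalg.eigvals(Aa - L @ Ca))) < 1.0  # estimator stable

# Control: u = K (x̂ - x_t) + u_t, with targets (x_t, u_t) recomputed
# from the current disturbance estimate so that y tracks y_ref.
K = -0.5
y_ref, d_true = 1.0, 0.3
xh = np.zeros((2, 1))            # augmented state estimate (x̂, d̂)
x = 0.0
for _ in range(300):
    d_hat = xh[1, 0]
    x_t = (y_ref - Cd * d_hat) / C                 # output target
    u_t = (x_t - A * x_t - Bd * d_hat) / B         # input target
    u = K * (xh[0, 0] - x_t) + u_t
    y = C * x                                      # measured output
    x = A * x + B * u + Bd * d_true                # true plant step
    xh = Aa @ xh + Ba * u + L * (y - Ca @ xh)      # observer step
```

Despite the mismatch between d_true and the model's initial estimate, the integrating disturbance state absorbs the bias and the output converges to y_ref without offset, which is the mechanism the cited algorithms formalize.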
Two-layer structures, although very efficient in many practical applications, pose great difficulties when a distributed control is used at the lower layer. In fact, the references computed at the higher layer can ignore the presence of dynamic constraints among the subsystems and, as such, can lead to infeasible local optimization problems. This prevents one from directly applying the many decentralized and distributed MPC algorithms recently developed for the regulation problem (see the examples reported in Chapter 1). In addition, as already remarked, in a distributed setting the decentralization constraints do not allow one to follow the standard approach of reformulating the tracking problem as a regulation one.

An alternative approach is described in [23, 162], where a distributed sequential reference-governor approach is proposed.


4 DPC for tracking

In this Chapter, a distributed MPC method based on Distributed Predictive Control (DPC) for the solution of the tracking problem is discussed. It is based on a hierarchical structure, depicted in Figure 4.1, that consists of three layers: I) a standard RTO optimization layer; II) an intermediate layer which transforms, for each local subsystem, the reference y^[i]_{set point} computed by the RTO into feasible trajectories x̃^[i], ũ^[i] and ỹ^[i] for the state, input and output variables, respectively; III) a lower layer where local MPC regulators C_i can communicate according to a prescribed information pattern and are designed with the state-feedback DPC algorithm originally developed for the solution of the regulation problem. Notably, in the intermediate layer the reference trajectories are computed according to the same information pattern adopted at the lower layer, and the overall scheme guarantees that state and control constraints are fulfilled. Moreover, the controlled outputs reach the prescribed reference values computed by the RTO whenever possible, or their nearest feasible values when feasibility problems arise due to the constraints.

4.1 Interacting subsystems

Consider the collective dynamical model

x_{k+1} = A x_k + B u_k    (4.1a)
y_k = C x_k    (4.1b)

Figure 4.1: Overall control architecture for distributed tracking.

in which x_k ∈ R^n is the collective state vector, u_k ∈ R^m is the collective input vector and y_k ∈ R^m is the collective output vector. The dynamic system (4.1) can be decomposed into a set of M dynamically interacting non-overlapping subsystems which, according to the notation used in [94], are described by

x^[i]_{k+1} = A_ii x^[i]_k + B_ii u^[i]_k + E_i s^[i]_k    (4.2a)
y^[i]_k = C_i x^[i]_k    (4.2b)
z^[i]_k = C_zi x^[i]_k + D_zi u^[i]_k    (4.2c)

where x^[i]_k ∈ R^{n_i} and u^[i]_k ∈ R^{m_i} are the state and input vectors, respectively, of the i-th subsystem, while y^[i]_k ∈ R^{m_i} is its output vector. According to the non-overlapping decomposition, it holds that x_k = (x^[1]_k, ..., x^[M]_k), n = Σ_{i=1}^M n_i, that u_k = (u^[1]_k, ..., u^[M]_k), m = Σ_{i=1}^M m_i, and that y_k = (y^[1]_k, ..., y^[M]_k). In line with the interaction-oriented models introduced in [94], the coupling input and output vectors s^[i]_k and z^[i]_k, respectively, are defined to characterize the interconnections among the subsystems; in collective form, they are defined as s_k = (s^[1]_k, ..., s^[M]_k), z_k = (z^[1]_k, ..., z^[M]_k), and the interconnections

among subsystems are described by means of the algebraic equation

s_k = L z_k    (4.3)

where L is called the interconnection matrix. More specifically, the coupling input s^[i]_k to subsystem i depends on the coupling output z^[j]_k of the j-th subsystem according to

s^[i]_k = Σ_{j=1}^M L_ij z^[j]_k    (4.4)

We say that subsystem j is a dynamic neighbor of subsystem i if and only if L_ij ≠ 0, and we denote by N_i the set of dynamic neighbors of subsystem i (which excludes i). The input and state variables are subject to the "local" constraints u^[i]_k ∈ U_i ⊆ R^{m_i} and x^[i]_k ∈ X_i ⊆ R^{n_i}, respectively, where the sets U_i and X_i are convex. The state transition matrices A_11 ∈ R^{n_1×n_1}, ..., A_MM ∈ R^{n_M×n_M} of the M subsystems are the diagonal blocks of A, whereas the dynamic coupling terms between subsystems correspond to the non-diagonal blocks of A, i.e., A_ij = E_i L_ij C_zj, with j ≠ i. Correspondingly, B_ii, i = 1, ..., M, are the diagonal blocks of B, whereas the influence of the input of a subsystem upon the state of different subsystems is represented by the off-diagonal terms of B, i.e., B_ij = E_i L_ij D_zj, with j ≠ i. The collective output matrix is defined as C = diag(C_11, ..., C_MM). Concerning system (4.1a) and its partition, the following main assumption on decentralized stabilizability is introduced:

Assumption 4.1 There exists a block-diagonal matrix K, i.e., K = diag(K_1, ..., K_M), with K_i ∈ R^{m_i×n_i}, i = 1, ..., M, such that:

i) F = A + BK is Schur.
ii) F_ii = A_ii + B_ii K_i is Schur, i = 1, ..., M.

We recall that the design of the stabilizing matrix K can be performed according to the procedure proposed in Chapter 1. Moreover, in order to solve the tracking problem for constant reference signals, the following standard assumption is made.
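The coupling structure (4.3)-(4.4) is easy to operate on in code: the dynamic-neighbor sets N_i are read directly off the nonzero blocks of L, and each off-diagonal state block follows from A_ij = E_i L_ij C_zj. A minimal sketch with scalar coupling channels and three subsystems (all numbers illustrative):

```python
import numpy as np

M = 3
L = np.array([[0.0, 1.0, 0.0],    # s[1] depends on z[2]
              [0.0, 0.0, 0.5],    # s[2] depends on z[3]
              [2.0, 0.0, 0.0]])   # s[3] depends on z[1]

# Dynamic neighbors: N_i = {j != i : L_ij != 0} (0-based indices here).
neighbors = {i: [j for j in range(M) if j != i and L[i, j] != 0.0]
             for i in range(M)}

# Off-diagonal state coupling A_ij = E_i L_ij C_zj (scalar channels here).
E = [1.0, 1.0, 1.0]
C_z = [1.0, 1.0, 1.0]
A_12 = E[0] * L[0, 1] * C_z[1]    # coupling of subsystem 2 into subsystem 1
```

With matrix-valued channels, the scalar products become E_i @ (L_ij * C_zj) block products, but the neighbor bookkeeping is identical.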

Assumption 4.2 Defining

S = [[I_n − A, −B], [C, 0]]

then rank(S) = n + m.

4.2 Control system architecture

We want to design a distributed state-feedback control law, based on MPC, for the tracking of a given constant set-point signal y^[i]_{set point} ∈ R^{m_i}, where y^[i]_{set point} can be obtained, e.g., by means of any RTO method, see again Figure 4.1. More specifically, our aim is to asymptotically steer the system output y^[i]_k to the constant desired value y^[i]_{set point} for all i = 1, ..., M. The main idea behind the proposed algorithm is to consider a three-layer control architecture, see Figure 4.2.

Figure 4.2: Control system architecture for distributed tracking.

1) The reference output trajectory management layer. For each subsystem i = 1, ..., M, a local reference trajectory management unit is required, which defines the reference trajectory ỹ^[i]_{k+ν} of the output y^[i]_{k+ν}. Although it would be natural to take ỹ^[i]_{k+ν} = y^[i]_{set point} for all ν ≥ 0, this choice could easily lead to infeasible standard MPC optimization problems, even in the centralized framework. Furthermore,
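Assumption 4.2 is the standard full-rank condition guaranteeing that, for any constant output reference, a unique steady-state state/input target pair exists, since it makes S [x; u] = [0; y_ref] uniquely solvable. A sketch for an illustrative 2-state, 1-input, 1-output system (numbers are placeholders):

```python
import numpy as np

A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n, m = 2, 1

# S = [[I - A, -B], [C, 0]]; Assumption 4.2 requires rank(S) = n + m.
S = np.block([[np.eye(n) - A, -B], [C, np.zeros((1, m))]])
assert np.linalg.matrix_rank(S) == n + m

# Steady-state target for a constant reference: solve S [x; u] = [0; y_ref].
y_ref = 2.0
sol = np.linalg.solve(S, np.array([0.0, 0.0, y_ref]))
x_ss, u_ss = sol[:n], sol[n:]
```

The two defining properties, x_ss = A x_ss + B u_ss and C x_ss = y_ref, can be checked by direct substitution.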

in the distributed context this point is particularly critical, since too rapid changes in the output reference trajectory for a given subsystem could greatly affect the performance and the behavior of the other subsystems. Therefore, ỹ^[i] will be regarded as an argument of an optimization problem rather than as a fixed parameter, and constraints limiting the time variation of the local reference signals will be defined and computed. This layer is completely decentralized, i.e., the transmission of information between local reference trajectory management units is not needed.

2) The reference state and input trajectory layer. For each subsystem i = 1, ..., M, assume that at any time instant k the future reference trajectories ỹ^[i]_{k+ν}, ν = 0, ..., N − 1, are available for all i = 1, ..., M. In order to define the reference trajectories x̃^[i], ũ^[i], and z̃^[i] of the corresponding state, input, and coupling output variables, we design a suitable algorithm, using the output reference information as data.

3) The robust MPC layer. For each subsystem i = 1, ..., M, a robust MPC unit is designed to drive the real state and input trajectories x^[i] and u^[i] as close as possible to the reference ones x̃^[i], ũ^[i], while respecting the constraints on the same variables. As in the case of the reference state and input trajectory layer, information is required to be transmitted between the reference trajectory management units of neighboring regulators, in a neighbor-to-neighbor fashion.

The reference output trajectory management layer

For each subsystem i = 1, ..., M, a local reference trajectory management unit defines, at any time k, the reference trajectory ỹ^[i]_{k+ν} of the output y^[i]_{k+ν}. Similarly to the approach taken in [88], the values ỹ^[i] will be regarded as an argument of an optimization problem, rather than as a fixed parameter.
In the distributed context, too rapid changes of the output reference trajectory of a given subsystem could greatly affect the performance and the behavior of the other subsystems. Therefore, the main requirement for guaranteeing good performance and constraint satisfaction of our control scheme is to limit the rate of variation in time of the output reference signals. Specifically, we will require that, for all i = 1, ..., M and for all k ≥ 0,

ỹ^[i]_{k+1} − ỹ^[i]_k ∈ B^(m_i)_{p,ε}(0)    (4.5)
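A simple way to satisfy the rate constraint (4.5) is to move each local reference toward its set-point by at most ε per step, clipping larger increments to the ball boundary. The sketch below uses the Euclidean ball (the p and ε of (4.5) are design data; the numbers here are illustrative):

```python
import numpy as np

def advance_reference(y_tilde, y_setpoint, eps):
    """One step of a rate-limited reference update: ||ỹ_{k+1} - ỹ_k|| <= eps."""
    step = np.asarray(y_setpoint, dtype=float) - y_tilde
    dist = np.linalg.norm(step)
    if dist <= eps:
        return np.asarray(y_setpoint, dtype=float)   # set-point is reachable
    return y_tilde + eps * step / dist               # clip increment to the ball

y = np.array([0.0])
target = np.array([1.0])
traj = [y]
for _ in range(8):
    y = advance_reference(y, target, eps=0.25)
    traj.append(y)
```

Starting from 0 with ε = 0.25, the reference ramps through 0.25, 0.5, 0.75 and settles at the set-point after four steps, with every increment inside the ball.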

This layer is completely distributed, i.e., the reference trajectories ỹ^[i] of any subsystem i are computed only on the basis of local information and of the information provided by the reference trajectory management units of its constraint neighbors.

The reference state and input trajectory layer

Two different methods can be used to obtain the signals (x̃^[i], ũ^[i]) based on the desired output trajectory ỹ^[i]. The first one relies on the use of an integrator to expand the system (4.2a), while the second one makes use of an observer. In this Section, we provide detailed descriptions of both. As they are two alternative options, we will use the same notation for the two cases in order to improve the readability of the remainder of the Chapter. It is important to recall, however, that, depending on the technique chosen for this layer, the same symbols can take different meanings.

Computing the reference state and input trajectories using an integrator

For each subsystem i = 1, ..., M, assume that at any time instant k the future reference trajectories ỹ^[i]_{k+ν}, ν = 0, ..., N − 1, are available. In order to define the reference trajectories (x̃^[i], ũ^[i]) based on the desired output trajectory ỹ^[i], we expand the system (4.2a), referred to the reference trajectories, with an integrator, i.e.,

x̃^[i]_{k+1} = A_ii x̃^[i]_k + B_ii ũ^[i]_k + E_i s̃^[i]_k    (4.6a)
ẽ^[i]_{k+1} = ẽ^[i]_k + ỹ^[i]_{k+1} − C_i x̃^[i]_k    (4.6b)
z̃^[i]_k = C_zi x̃^[i]_k + D_zi ũ^[i]_k    (4.6c)
s̃^[i]_k = Σ_{j∈N_i} L_ij z̃^[j]_k    (4.6d)

where, similarly to (4.2c) and (4.4), (4.6c) and (4.6d) define the reference coupling variables. Define χ^[i]_k = (x̃^[i]_k, ẽ^[i]_k) and

Ã_ii = [[A_ii, 0], [−C_i, I_{m_i}]],  Ã_ij = [[A_ij, 0], [0, 0]] for j ≠ i,  B̃_ij = [[B_ij], [0]],  G̃_i = [[0], [I_{m_i}]]    (4.6e)

and consider the control law

ũ^[i]_k = K̃_i χ^[i]_k    (4.6f)

where K̃_i = [K̃^x_i  K̃^e_i]. Letting F̃_ij = Ã_ij + B̃_ij K̃_j, the dynamics of the variable χ^[i]_k is therefore defined by the following dynamical system

χ^[i]_{k+1} = F̃_ii χ^[i]_k + Σ_{j∈N_i} F̃_ij χ^[j]_k + G̃_i ỹ^[i]_{k+1}    (4.7)

The gain matrices K̃_i are to be determined as follows: denoting by Ã and B̃ the matrices whose block elements are Ã_ij and B̃_ij, respectively, and K̃ = diag(K̃_1, ..., K̃_M), the following assumption must be fulfilled.

Assumption 4.3 The matrix F̃ = Ã + B̃K̃ is Schur.

Note that the synthesis of the K̃_i's can be performed according to the procedures proposed in Chapter 1. Define, for all i = 1, ..., M and for all k ≥ 0, χ^[i]ss_k = (x^[i]ss_k, e^[i]ss_k), i.e., the steady-state condition for (4.7) corresponding to the reference outputs ỹ^[i]_k assumed constant, i.e., ỹ^[i]_{k+1} = ỹ^[i]_k, and satisfying, for all i = 1, ..., M,

χ^[i]ss_k = F̃_ii χ^[i]ss_k + Σ_{j∈N_i} F̃_ij χ^[j]ss_k + G̃_i ỹ^[i]_k    (4.8)

In view of (4.6), C_i x^[i]ss_k = ỹ^[i]_k, and F̃ is Schur (see Assumption 4.3); then a solution to the system (4.8) exists and is unique. Collectively define χ^ss_k = (χ^[1]ss_k, ..., χ^[M]ss_k), χ_k = (χ^[1]_k, ..., χ^[M]_k), and ỹ_k = (ỹ^[1]_k, ..., ỹ^[M]_k). From (4.6e)-(4.8) we can collectively write

χ^ss_{k+1} − χ^ss_k = (I_{n+p} − F̃)^{-1} G̃ (ỹ_{k+1} − ỹ_k)    (4.9)

where G̃ = diag(G̃_1, ..., G̃_M). Therefore

χ_{k+1} − χ^ss_{k+1} = F̃(χ_k − χ^ss_k) + G̃(ỹ_{k+1} − ỹ_k) + χ^ss_k − χ^ss_{k+1}
= F̃(χ_k − χ^ss_k) + (I_{n+p} − (I_{n+p} − F̃)^{-1}) G̃(ỹ_{k+1} − ỹ_k)
= F̃(χ_k − χ^ss_k) − (I_{n+p} − F̃)^{-1} F̃ G̃(ỹ_{k+1} − ỹ_k)    (4.10)

We can rewrite (4.10) as

χ_{k+1} − χ^ss_{k+1} = F̃(χ_k − χ^ss_k) + w_k    (4.11)
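The steady-state relations (4.8)-(4.9) reduce, in collective form, to one linear solve: for a Schur matrix F̃ the fixed point of χ_{k+1} = F̃ χ_k + G̃ ỹ is χ^ss = (I − F̃)^{-1} G̃ ỹ, and a reference step Δỹ moves it by (I − F̃)^{-1} G̃ Δỹ. A sketch with illustrative numbers:

```python
import numpy as np

# Illustrative collective closed-loop data (placeholders, not from a plant).
F = np.array([[0.5, 0.1], [0.0, 0.6]])
G = np.array([[1.0], [0.5]])
assert np.max(np.abs(np.linalg.eigvals(F))) < 1.0   # F̃ must be Schur

def steady_state(y_tilde):
    """Fixed point of chi+ = F chi + G y for a constant reference y."""
    return np.linalg.solve(np.eye(2) - F, G.ravel() * y_tilde)

chi_ss = steady_state(1.0)
# Fixed-point check: chi_ss = F chi_ss + G y.
assert np.allclose(chi_ss, F @ chi_ss + G.ravel() * 1.0)

# Steady-state increment induced by a reference step of size 0.1, cf. (4.9):
delta = steady_state(1.1) - steady_state(1.0)
```

By linearity, delta equals 0.1 times (I − F̃)^{-1} G̃, which is exactly the gain appearing in the disturbance bound used right after (4.11).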

where w_k can be seen as a bounded disturbance. In fact, in view of (4.5),

ỹ_{k+1} − ỹ_k ∈ Π_{i=1}^{M} B^{(m_i)}_{p,ε}(0)                             (4.12)

and therefore w_k ∈ W̄ = −(I_{n+p} − F̄)^{-1} F̄ Ḡ Π_{i=1}^{M} B^{(m_i)}_{p,ε}(0). Under Assumption 4.3, for the system (4.11) there exists a possibly non-rectangular Robust Positive Invariant (RPI) set Δχ such that, if χ̃_k − χss_k ∈ Δχ, then it is guaranteed that χ̃_{k+ν} − χss_{k+ν} ∈ Δχ for all ν ≥ 0. This, in turn, implies that there exist sets Δχ_i, i = 1, ..., M, defined in such a way that Δχ ⊆ Π_{i=1}^{M} Δχ_i, such that, for any initial condition χ̃_0 − χss_0 ∈ Δχ, it is guaranteed that

χ̃[i]_k − χ[i]ss_k ∈ Δχ_i                                                   (4.13)

for all k ≥ 0.

Computing the reference state and input trajectories using an observer

Denote

Ā_ij = [A_ii B_ii; 0 I_mi] if j = i,  Ā_ij = [A_ij B_ij; 0 0] if j ≠ i,  C̄_i = [C_i 0]    (4.14)

Define, for all i = 1, ..., M and for all k ≥ 0, χ[i]ss_k = (x[i]ss_k, u[i]ss_k), where x[i]ss_k and u[i]ss_k are the steady-state state and input values corresponding to the reference outputs ỹ[i]_k, satisfying the following steady-state equations

χ[i]ss_k = Ā_ii χ[i]ss_k + Σ_{j ∈ N_i} Ā_ij χ[j]ss_k
ỹ[i]_k = C̄_i χ[i]ss_k                                                      (4.15)

It is easy to verify that Assumption 4.2 guarantees that a solution to the system (4.15) exists and is unique. In view of this, letting xss_k = (x[1]ss_k, ..., x[M]ss_k), uss_k = (u[1]ss_k, ..., u[M]ss_k), and ỹ_k = (ỹ[1]_k, ..., ỹ[M]_k), from (4.15) one has

(xss_k − xss_{k+1}, uss_k − uss_{k+1}) ∈ S^{-1} [0; I_m] Π_{i=1}^{M} B^{(m_i)}_{p,ε}(0)    (4.16)
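A full polytopic RPI computation is beyond the scope of a short example, but a crude norm-ball RPI set for an error recursion of the form (4.11) can be sketched as follows (values are illustrative; the idea is only that a contraction in some induced norm plus a disturbance bound yields an invariant ball):

```python
import numpy as np

# Sketch: a norm-ball robust positive invariant (RPI) set for
# e_{k+1} = F e_k + w_k with ||w_k||_inf <= w_bar (illustrative values)
F = np.array([[0.5, 0.1],
              [0.0, 0.6]])
w_bar = 0.2

c = np.linalg.norm(F, np.inf)   # induced infinity-norm of F; here c < 1
assert c < 1.0
r = w_bar / (1.0 - c)           # the ball ||e||_inf <= r is RPI

# Invariance check: for any ||e||_inf <= r and ||w||_inf <= w_bar,
# ||F e + w||_inf <= c*r + w_bar, and c*r + w_bar = r by construction
assert abs(c * r + w_bar - r) < 1e-12
```

When ‖F‖ < 1 fails in the chosen norm, one would instead use the eigenvalue-based constructions (e.g., iterating reachable sets) mentioned in the text; the ball above is only the simplest certificate.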

from which it follows that, for all i = 1, ..., M, there exists a set Δss_i such that, for all k ≥ 0,

χ[i]ss_k − χ[i]ss_{k+1} ∈ Δss_i                                            (4.17)

An observer is now designed to provide an estimate χ̃[i]_k = (x̃[i]_k, ũ[i]_k) of the variable χ[i]ss_k using the output reference information as data. We let

s̃[i]_k = Σ_{j ∈ N_i} L_ij z̃[j]_k                                           (4.18)

The dynamics of the variable χ̃[i]_k is defined by the following dynamical system

χ̃[i]_{k+1} = Ā_ii χ̃[i]_k + Σ_{j ∈ N_i} Ā_ij χ̃[j]_k + Ḡ_i (ỹ[i]_{k+1} − C̄_i χ̃[i]_k)    (4.19)

where Ḡ_i = [G^x_i; G^u_i] are gains to be determined as follows: denoting by Ā the matrix whose block elements are Ā_ij, C̄ = diag(C̄_1, ..., C̄_M), and Ḡ = diag(Ḡ_1, ..., Ḡ_M), the following assumption must be fulfilled.

Assumption 4.4 The matrix Ā − ḠC̄ is Schur.

Note that the synthesis of the gains Ḡ_i can be performed according to the procedures proposed in Chapter 1. From (4.14)-(4.15), it follows that

χ̃[i]_{k+1} − χ[i]ss_{k+1} = (Ā_ii − Ḡ_i C̄_i)(χ̃[i]_k − χ[i]ss_k) + Σ_{j ∈ N_i} Ā_ij (χ̃[j]_k − χ[j]ss_k)
                          + (Ā_ii − Ḡ_i C̄_i)(χ[i]ss_k − χ[i]ss_{k+1}) + Σ_{j ∈ N_i} Ā_ij (χ[j]ss_k − χ[j]ss_{k+1})    (4.20)

In view of (4.17) we can rewrite (4.20) as

χ̃[i]_{k+1} − χ[i]ss_{k+1} = (Ā_ii − Ḡ_i C̄_i)(χ̃[i]_k − χ[i]ss_k) + Σ_{j ∈ N_i} Ā_ij (χ̃[j]_k − χ[j]ss_k) + w[i]_k    (4.21)

where

w[i]_k = (Ā_ii − Ḡ_i C̄_i)(χ[i]ss_k − χ[i]ss_{k+1}) + Σ_{j ∈ N_i} Ā_ij (χ[j]ss_k − χ[j]ss_{k+1}) ∈ W̄_i

can be regarded as a bounded disturbance in view of (4.17), where

W̄_i = (Ā_ii − Ḡ_i C̄_i) Δss_i ⊕ (⊕_{j ∈ N_i} Ā_ij Δss_j)                    (4.22)

Under Assumption 4.4, for the system (4.21) there exists a possibly non-rectangular Robust Positive Invariant (RPI) set Δχ such that, if (x̃_k − xss_k, ũ_k − uss_k) ∈ Δχ, then it is guaranteed that (x̃_{k+1} − xss_{k+1}, ũ_{k+1} − uss_{k+1}) ∈ Δχ. This, in turn, implies that there exist sets Δχ_i, i = 1, ..., M, such that, for any initial condition (x̃_0 − xss_0, ũ_0 − uss_0) ∈ Δχ, it is possible to guarantee that

χ̃[i]_k − χ[i]ss_k ∈ Δχ_i                                                    (4.23)

for all k ≥ 0.

The robust MPC layer

For each subsystem i = 1, ..., M, a robust MPC unit is designed to drive the real state and input trajectories x[i]_k and u[i]_k as close as possible to the reference ones x̃[i]_k, ũ[i]_k, while respecting the constraints on the same variables. As in the case of the reference state and input trajectory layer, information is required to be transmitted from the reference trajectory management units of neighboring regulators, in a neighbor-to-neighbor fashion. Similarly to the regulation case, by adding suitable constraints to the MPC problem formulation, for each subsystem and for all k ≥ 0 we will be able to guarantee that the actual coupling output trajectories lie in specified time-invariant neighborhoods of the reference trajectories: if z[i]_k − z̃[i]_k ∈ ΔZ_i, where 0 ∈ ΔZ_i, in view of (4.3) and (4.6d) (or (4.18)) we guarantee that s[i]_k − s̃[i]_k ∈ ΔS_i, where ΔS_i = ⊕_{j ∈ N_i} L_ij ΔZ_j. In this way, (4.2a) can be written as

x[i]_{k+1} = A_ii x[i]_k + B_ii u[i]_k + E_i s̃[i]_k + E_i (s[i]_k − s̃[i]_k)    (4.24)

where E_i (s[i]_k − s̃[i]_k) is a bounded disturbance and the term E_i s̃[i]_{k+ν} can be interpreted as an input, known in advance over the prediction horizon ν = 0, ..., N−1. For the statement of the individual MPC sub-problems, henceforth called i-DPC problems, we define the i-th subsystem nominal model associated to equation (4.24)

x̂[i]_{k+1} = A_ii x̂[i]_k + B_ii û[i]_k + E_i s̃[i]_k                           (4.25)

or, if the observer is used,

x̂[i]_{k+1} = A_ii x̂[i]_k + B_ii û[i]_k + E_i s̃[i]_k + G^x_i (ỹ[i]_{k+1} − C_i x̃[i]_k)    (4.26)

Then we let

ẑ[i]_k = C_zi x̂[i]_k + D_zi û[i]_k                                           (4.27)

The control law for the i-th subsystem (4.24), for all k ≥ 0, is assumed to be given by

u[i]_k = û[i]_k + K_i (x[i]_k − x̂[i]_k)                                      (4.28)

where K_i satisfies Assumption 4.1. We also define ε[i]_k = x[i]_k − x̂[i]_k. If the integrator is used for the reference state and input trajectory layer, from equations (4.24), (4.25) and (4.28) we obtain

ε[i]_{k+1} = F_ii ε[i]_k + w[i]_k                                            (4.29)

where

w[i]_k = E_i (s[i]_k − s̃[i]_k)                                               (4.30)

is a bounded disturbance since s[i]_k − s̃[i]_k ∈ ΔS_i. It follows that

w[i]_k ∈ W_i = E_i ΔS_i                                                      (4.31)

On the other hand, if the observer is used for the reference state and input trajectory layer, from (4.24), (4.26) and (4.28) we obtain again the expression (4.29) where, in this case,

w[i]_k = E_i (s[i]_k − s̃[i]_k) − G^x_i (ỹ[i]_{k+1} − C_i x̃[i]_k)             (4.32)

is a bounded disturbance since s[i]_k − s̃[i]_k ∈ ΔS_i and, in view of (4.5) and (4.23),

ỹ[i]_{k+1} − C_i x̃[i]_k = ỹ[i]_{k+1} − ỹ[i]_k + ỹ[i]_k − C_i x̃[i]_k
                        = ỹ[i]_{k+1} − ỹ[i]_k + (−C̄_i)(χ̃[i]_k − χ[i]ss_k) ∈ B^{(m_i)}_{p,ε}(0) ⊕ (−C̄_i) Δχ_i    (4.33)

It follows that

w[i]_k ∈ W_i = E_i ΔS_i ⊕ (−G^x_i)(B^{(m_i)}_{p,ε}(0) ⊕ (−C̄_i) Δχ_i)         (4.34)

In both cases, since w[i]_k is bounded and F_ii is Schur, there exists an RPI set E_i for (4.29) such that, if ε[i]_k ∈ E_i, then ε[i]_{k+1} ∈ E_i. Therefore at time k+1, in view of (4.2c) and (4.27), it holds that z[i]_{k+1} − ẑ[i]_{k+1} = (C_zi + D_zi K_i) ε[i]_{k+1} ∈ (C_zi + D_zi K_i) E_i. In order to guarantee that, at time k+1, z[i]_{k+1} − z̃[i]_{k+1} ∈ ΔZ_i can still be verified by adding suitable constraints to the optimization problems, the following assumption must be fulfilled.

Assumption 4.5 For all i = 1, ..., M, there exists a positive scalar ρ_i such that

(C_zi + D_zi K_i) E_i ⊕ B_{p,ρ_i}(0) ⊆ ΔZ_i                                  (4.35)

If Assumption 4.5 is fulfilled, we define, for all i = 1, ..., M, the convex neighborhood of the origin Δẑ_i satisfying

Δẑ_i ⊕ (C_zi + D_zi K_i) E_i ⊆ ΔZ_i                                          (4.36)

and we consider the constraint ẑ[i]_{k+1} − z̃[i]_{k+1} ∈ Δẑ_i, in such a way that

z[i]_{k+1} − z̃[i]_{k+1} = z[i]_{k+1} − ẑ[i]_{k+1} + ẑ[i]_{k+1} − z̃[i]_{k+1} ∈ (C_zi + D_zi K_i) E_i ⊕ Δẑ_i ⊆ ΔZ_i    (4.37)

as required at all time steps.

The distributed predictive control algorithm

It is now possible to state the i-DPC problem, to be solved at any time instant k. The overall design problem is composed of a preliminary centralized off-line design and of the on-line solution of the M i-DPC problems. These two steps are detailed in the following.

Off-line design

The off-line design consists of the following procedure:

1. If the integrator is used for the reference state and input trajectory layer: compute the matrices K and K̄ satisfying Assumptions 4.1 and 4.3. If the observer is used for the reference state and input trajectory layer: compute the matrices K and Ḡ satisfying Assumptions 4.1 and 4.4.

2. If the integrator is used for the reference state and input trajectory layer: define B^{(m_i)}_{p,ε}(0), compute Δχ (an RPI set for (4.11)) and the sets Δχ_i. If the observer is used for the reference state and input trajectory layer: compute the sets Δss_i with (4.16), (4.17); compute W̄_i with (4.22) and W̄ = Π_{i=1}^{M} W̄_i; compute Δχ, which represents an RPI set for the collection of subsystems (4.21), and compute the sets Δχ_i.

3. Compute the RPI sets E_i for the subsystems (4.29) and the sets Δẑ_i satisfying (4.36) and (4.37).

4. Compute X̂_i ⊆ X_i ⊖ E_i and Û_i ⊆ U_i ⊖ K_i E_i, and the positively invariant set Σ_i for the equation

δx[i]_{k+1} = F_ii δx[i]_k                                                   (4.38)

such that

(C_zi + D_zi K_i) Σ_i ⊆ Δẑ_i                                                 (4.39)

5. If the integrator is used for the reference state and input trajectory layer: compute the convex sets Y_i such that

[I_ni 0; K^x_i K^e_i] ( Γ_i (I_{n+p} − F̄)^{-1} Ḡ Π_{j=1}^{M} Y_j ⊕ Δχ_i ) ⊕ [I_ni; K_i] Σ_i ⊆ X̂_i × Û_i    (4.40)

where Γ_i is the matrix, of suitable dimensions, that selects the subvector χ[i] out of χ. Specifically, Y_i is the set associated to ỹ[i] such that the corresponding steady-state state and input satisfy the state and control constraints defined by X̂_i and Û_i. If the observer is used for the reference state and input trajectory layer: compute the convex sets Y_i such that

H_i S^{-1} [0; I_m] Π_{j=1}^{M} Y_j ⊕ Δχ_i ⊕ [I_ni; K_i] Σ_i ⊆ X̂_i × Û_i     (4.41)

where H_i is the matrix, of suitable dimensions, that selects the vector (x[i], u[i]) out of (x, u), i = 1, ..., M.

On-line design

At any time instant k the on-line design is performed in two steps, both based on the solution of suitable distributed and independent optimization problems. First, the output reference trajectories ỹ[i] are recursively updated; then, the real control input u[i]_k is computed with MPC.

Computation of the reference outputs. The output reference value ỹ[i]_{k+N} is computed by the reference output trajectory management layer so as to minimize the distance from the set-point and, at the same time, to fulfill the constraints. The optimization problem to be solved at each time instant k is the following

min_{ȳ[i]_{k+N}} V^y_i (ȳ[i]_{k+N}, k)                                       (4.42)

subject to

ȳ[i]_{k+N} − ỹ[i]_{k+N−1} ∈ B^{(m_i)}_{p,ε}(0)                               (4.43)
ȳ[i]_{k+N} ∈ Y_i                                                             (4.44)

where

V^y_i (ȳ[i]_{k+N}, k) = γ ‖ȳ[i]_{k+N} − ỹ[i]_{k+N−1}‖² + ‖ȳ[i]_{k+N} − y[i]_{set-point}‖²_{T_i}

The weight T_i must verify the inequality

T_i > γ I_{m_i}                                                              (4.45)

while γ is an arbitrarily small positive constant. At time k, ȳ[i]°_{k+N} denotes the solution to the optimization problem (4.42).

Remark 4.1 In the implementation described in this Chapter, coupling constraints are not considered for simplicity. However, it is possible to include in the problem constraints involving the states of more than one subsystem. This, for instance, can be handled by imposing suitable constraints at the reference output trajectory management layer. This implies that the transmission of information must be scheduled between local output trajectory management units. This issue has been explored in detail in [60] and applied to the control of unicycle robots with collision avoidance capabilities.
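In the scalar case the problem (4.42)-(4.44) admits a closed-form solution: the unconstrained minimizer is a convex combination of the previous reference and the set-point, and clipping it onto the intersection of the two interval constraints is an exact projection. The sketch below (all numerical values are illustrative, and the helper function `reference_step` is hypothetical, not part of the thesis) shows one such update and how the reference crawls toward the set-point by at most ε per step:

```python
# One step of the reference-output update (4.42)-(4.44), scalar case:
# minimize gamma*(ybar - ytilde)^2 + T*(ybar - ysp)^2
# subject to |ybar - ytilde| <= eps and ybar in [ylo, yhi]

def reference_step(ytilde, ysp, gamma=1e-6, T=1.0, eps=0.05, ylo=-1.0, yhi=1.0):
    # Unconstrained minimizer: convex combination of ytilde and the set-point
    ybar = (gamma * ytilde + T * ysp) / (gamma + T)
    # In 1-D the feasible set is an interval, so clipping is an exact projection
    lo = max(ytilde - eps, ylo)   # constraint (4.43) with (4.44)
    hi = min(ytilde + eps, yhi)
    return min(max(ybar, lo), hi)

# The reference moves toward the set-point by at most eps per step
y = 0.0
for _ in range(10):
    y = reference_step(y, ysp=0.3)
```

This makes the slow-transient effect discussed in the simulation section concrete: with ε = 0.05 the reference needs several steps to cover the distance to the set-point.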

Computation of the control variables. The i-DPC problem solved by the i-th robust MPC layer unit is defined as follows:

min_{x̂[i]_k, û[i]_{[k:k+N−1]}} V^N_i (x̂[i]_k, û[i]_{[k:k+N−1]})              (4.46)

where

V^N_i (x̂[i]_k, û[i]_{[k:k+N−1]}) = Σ_{ν=0}^{N−1} ( ‖x̂[i]_{k+ν} − x̃[i]_{k+ν}‖²_{Q_i} + ‖û[i]_{k+ν} − ũ[i]_{k+ν}‖²_{R_i} ) + ‖x̂[i]_{k+N} − x̃[i]_{k+N}‖²_{P_i}    (4.47)

subject to (4.25) (or to (4.26)) and, for ν = 0, ..., N−1,

x[i]_k − x̂[i]_k ∈ E_i                                                        (4.48a)
ẑ[i]_{k+ν} − z̃[i]_{k+ν} ∈ Δẑ_i                                               (4.48b)
x̂[i]_{k+ν} ∈ X̂_i                                                             (4.48c)
û[i]_{k+ν} ∈ Û_i                                                             (4.48d)

and to the terminal constraint

x̂[i]_{k+N} − x̃[i]_{k+N} ∈ Σ_i                                                (4.49)

Note that, in case the observer is used for the reference state and input trajectory layer, the value x̃[i]_{k+N} has to be computed with

x̃[i]_{k+N}(ȳ[i]_{k+N}) = A_ii x̃[i]_{k+N−1} + B_ii ũ[i]_{k+N−1} + E_i s̃[i]_{k+N−1} + G^x_i (ȳ[i]_{k+N} − C_i x̃[i]_{k+N−1})    (4.50)

The weights Q_i and R_i in the performance index (4.47) must be taken as positive definite matrices of appropriate dimensions while, in order to prove the convergence properties of the proposed approach, it is advisable to select the matrices P_i as the solutions of the (fully independent) Lyapunov equations

F_ii^T P_i F_ii − P_i = −(Q_i + K_i^T R_i K_i)                               (4.51)

At time k, the tuple (x̂[i]°_k, û[i]°_{[k:k+N−1]}, ȳ[i]°_{k+N}) is the solution to the i-DPC problem and û[i]°_k is the input to the nominal system (4.25) (or to the nominal system (4.26)).
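The terminal weight (4.51) is a standard discrete Lyapunov equation and can be computed numerically in one line. The sketch below uses illustrative local matrices (a hand-picked stabilizing gain, not one synthesized as in Chapter 1) and verifies that the resulting P_i satisfies (4.51):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative local data: F_ii = A_ii + B_ii K_i must be Schur (Assumption 4.1)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
K = np.array([[-5.0, -6.0]])   # a stabilizing gain, chosen by hand for this example
F = A + B @ K
assert max(abs(np.linalg.eigvals(F))) < 1.0

Q = np.eye(2)
R = np.eye(1)
Qbar = Q + K.T @ R @ K

# (4.51): F^T P F - P = -(Q + K^T R K), i.e. P = F^T P F + Qbar,
# which solve_discrete_lyapunov handles with a = F^T and q = Qbar
P = solve_discrete_lyapunov(F.T, Qbar)
assert np.allclose(F.T @ P @ F - P, -Qbar)
```

With this choice of P_i the terminal cost decrease used in the convergence proof, (4.68b)+(4.68c) = 0, holds exactly.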

Remark 4.2 It is important to note that the problems (4.42) and (4.46) are independent of each other. In fact, on the one hand, it is easy to see that (4.42) does not depend on x̂[i]_k and û[i]_{[k:k+N−1]}; on the other hand, both the cost function V^N_i and the constraints (4.48) are independent of ȳ[i]_{k+N}.

Then, according to (4.28), the input to the system (4.2a) is

u[i]_k = û[i]°_k + K_i (x[i]_k − x̂[i]°_k)                                    (4.52)

Moreover, we set ỹ[i]_{k+N} = ȳ[i]°_{k+N}, and the references ẽ[i]_{k+N} and x̃[i]_{k+N+1} are computed from (4.6b) and (4.6a), respectively. Finally, ũ[i]_{k+N} = K^x_i x̃[i]_{k+N} + K^e_i ẽ[i]_{k+N} from (4.6f) (in case the observer is used, x̃[i]_{k+N} and ũ[i]_{k+N} are computed with (4.19) once ỹ[i]_{k+N} is given). Denoting by x̂[i]°_{k+ν} the state trajectory of system (4.25) (or of system (4.26)) stemming from x̂[i]°_k and û[i]°_{[k:k+N−1]}, at time k it is also possible to compute x̂[i]°_{k+N}.

The properties of the proposed distributed MPC algorithm for tracking can now be summarized in the following result (the proof is reported in the Appendix).

Theorem 4.1 Let Assumptions 4.1-4.5 be verified and the tuning parameters be selected as previously described. If at time k = 0 a feasible solution to (4.42), (4.46) exists, then, for all i = 1, ..., M:

I) Feasible solutions to (4.42), (4.46) exist for all k ≥ 0, i.e., the constraints (4.43), (4.44), (4.48) and (4.49), respectively, are verified. Furthermore, the constraints (x[i]_k, u[i]_k) ∈ X_i × U_i are fulfilled for all i = 1, ..., M and for all k ≥ 0.

II) The resulting MPC controller asymptotically steers the i-th system output to the admissible set-point y[i]_{feas.set-point}, where

y[i]_{feas.set-point} = argmin_{y[i] ∈ Y_i} ‖y[i] − y[i]_{set-point}‖²_{T_i}    (4.53)

Note that, when coupling static constraints (not considered in this Chapter) are present, the convergence to the nearest feasible solution to the prescribed set-point may be prevented for some initial conditions.
These situations are denoted deadlock solutions in [162].

Simulation examples

We consider the problem of controlling the temperature of the building depicted in Figure 4.3, constituted by two apartments, each with two rooms: rooms A and B belong to the first one, while rooms C and D belong to the second one. Each room is characterized by its own temperature (T_A, T_B, T_C and T_D) and is endowed with its own radiator (supplying heats q_A, q_B, q_C and q_D). For a detailed description of the model, of the parameters used and of the considered working point, we refer to Chapter 2.

Figure 4.3: Schematic representation of a building with two apartments.

The discrete-time system of the form (4.1), with n = 4, m = 4, is obtained by zero-order-hold discretization with sampling time T = 30 s, and is described by the corresponding matrices A, B and C.

The observer-based method has been used in the reference state and input trajectory layer. The matrices K_i and G_i fulfilling Assumption 4.1 and Assumption 4.4 have been computed as suggested in Chapter 2. The weights used in the simulation are Q_1 = Q_2 = I_2,
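Zero-order-hold discretization of the kind used to obtain (4.1) can be sketched as follows; the continuous-time matrices below are illustrative placeholders only, not the actual thermal model of Chapter 2:

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import cont2discrete

# Illustrative continuous-time model (NOT the thesis's thermal model)
Ac = np.array([[-0.02, 0.01],
               [0.01, -0.02]])
Bc = np.array([[0.05, 0.0],
               [0.0, 0.05]])
Cc = np.eye(2)
Dc = np.zeros((2, 2))

Ts = 30.0  # sampling time of 30 s, as in the example
Ad, Bd, Cd, Dd, _ = cont2discrete((Ac, Bc, Cc, Dc), Ts, method='zoh')

# Under zero-order hold, the discrete A matrix equals exp(Ac*Ts)
assert np.allclose(Ad, expm(Ac * Ts))
```

A stable continuous-time model (eigenvalues of Ac in the open left half-plane) yields a Schur discrete-time A, which is what the subsequent constraint-set computations rely on.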

R_1 = R_2 = I_2, T_1 = T_2 = I_2, γ = 10^{-6}, and a prediction horizon N = 3 was used.

Figure 4.4: Trajectories of the output variables y[1] (above) and y[2] (below) obtained with DPC (black solid lines) and with cMPC (dashed gray lines). Thin black lines: desired set-points y[1,2]_{set-point}; black dash-dot lines: reference trajectories ỹ[1,2].

Figure 4.5: Trajectories of the input variables u[1] (above) and u[2] (below) obtained with DPC (black solid lines) and with cMPC (dashed gray lines).

In the simulations, the reference trajectories for y[2]_{set-point} are both

always equal to zero, as well as the one related to T_B, while the first output of the first subsystem, T_A, should track a piece-wise constant reference trajectory, whose values are 2 and 1. The results achieved are depicted in Figure 4.4, while the trajectories of the input variables are shown in Figure 4.5. In both figures, a comparison between the results obtained with DPC and with centralized MPC is provided: it is possible to see that the transients obtained with DPC are slower, due to the limitation imposed on the set-point variations by constraint (4.43). Moreover, the distributed approach can lead to a reduction of the set of admissible set-points due to constraint (4.44), where the sets Y_i must satisfy (4.41).

Figure 4.6: Schematic representation of the four-tank system.

As a second example, we consider the four-tank system (see Figure 4.6) described in [74] and in Chapter 2, where we aim to control the water levels h_1 and h_3 of tanks 1 and 3 using the command voltages v_1 and v_2 of the two pumps. In this case, we assume there are no external disturbances affecting the system. The linearized system, discretized with sampling time T = 0.5 s and zero-order hold, has the form (4.1) with

n = 4, m = 2, and is described by the corresponding matrices A, B and C.

We first show the results obtained with a controller in which an observer has been used in the reference state and input trajectory layer. The matrices K_i and G_i fulfilling Assumption 4.1 and Assumption 4.4 have been computed by means of suitable LMIs (see Chapter 2).

Figure 4.7: Trajectories of the output variables y[1] (above) and y[2] (below) obtained with DPC (black solid lines) and with cMPC (dashed gray lines). Thin black lines: desired set-points y[1,2]_{set-point}; black dash-dot lines: reference trajectories ỹ[1,2].

The weights used in the simulation are Q_1 = Q_2 = I_2, R_1 = R_2 = 1, T_1 = T_2 = 1, γ = 10^{-6}, while the chosen prediction horizon is N = 3. In the simulations, the reference trajectory for y[2]_{set-point} is always equal to zero, while the output of the first subsystem should track a piece-wise constant reference trajectory, whose values are 0.25 and 0.5. The results achieved are depicted in Figure 4.7, while the trajectories of the input variables are shown in Figure 4.8.

Figure 4.8: Trajectories of the input variables u[1] (above) and u[2] (below) obtained with DPC (black solid lines) and with cMPC (dashed gray lines).

In both figures, a comparison between the results obtained with DPC and with centralized MPC is provided: it is possible to see that, also in this case, the limitation imposed on the set-point variation leads to slower transients.

The same system has been used for testing the integrator-based method for the reference state and input trajectory layer. The matrices K_i and K̄_i fulfilling Assumption 4.1 and Assumption 4.3 have been computed as described in Chapter 2. The weights and the prediction horizon are the same as in the previous case. In the simulations, the reference trajectories y[i]_{set-point}, i = 1, 2, are piece-wise constant (see Figure 4.9, grey dash-dotted lines). The results achieved are depicted in Figure 4.9, while the input trajectories are shown in Figure 4.10. Notably, the set-point y[1]_{set-point} = 2.5 turns out to be infeasible for our algorithm, and hence the system output y[1] converges to the nearest feasible value. The closed-loop system turns out to be faster than in the case where the observer was used, so significant performance improvements can be obtained by using the integrator for managing the reference state and input trajectory layer.

Figure 4.9: Trajectories of the output variables y[1] (above) and y[2] (below) (black solid lines) and reference outputs ỹ[1] (above) and ỹ[2] (below) (black dashed lines). Grey dash-dotted lines: desired set-points y[1,2]_{set-point}.

Figure 4.10: Trajectories of the input variables u[1] (above) and u[2] (below).

Conclusions

In this Chapter, a distributed MPC method for the solution of the tracking problem has been discussed, consisting of a hierarchical architecture based on Distributed Predictive Control. The overall scheme guarantees that state and control constraints are fulfilled and that the controlled outputs reach the prescribed reference values whenever possible, or their nearest feasible values when feasibility problems arise due to the constraints. The algorithm has been tested on two benchmark problems. In the next chapters, different solutions to the tracking problem including an integral action in the closed-loop will be presented.

4.6 Appendix

Proof of recursive feasibility and convergence of the reference management problem. Assume that, at step k, a solution ȳ[i]_{k+N} to (4.42) exists for all i = 1, ..., M. Then a solution to (4.42) exists at step k+1 for all i = 1, ..., M. In fact, taking ȳ[i]_{k+N+1} = ȳ[i]_{k+N} = ỹ[i]_{k+N}, one has ȳ[i]_{k+N+1} − ỹ[i]_{k+N} = 0 ∈ B^{(m_i)}_{p,ε}(0) and ȳ[i]_{k+N+1} ∈ Y_i, hence verifying (4.43) and (4.44), respectively.

To prove convergence of the reference management layer note that, since at time k+1 the value ȳ[i]_{k+N+1} = ỹ[i]_{k+N} is a feasible solution, in view of the optimality of the solution ȳ[i]°_{k+N+1} we have

V^y_i (ȳ[i]°_{k+N+1}, k+1) ≤ V^y_i (ȳ[i]_{k+N}, k+1) = ‖ȳ[i]_{k+N} − y[i]_{set-point}‖²_{T_i}    (4.54)

In view of the fact that ȳ[i]°_{k+N+1} = ỹ[i]_{k+N+1} for all k, we write

V^y_i (ȳ[i]°_{k+N+1}, k+1) = γ ‖ỹ[i]_{k+N+1} − ỹ[i]_{k+N}‖² + ‖ỹ[i]_{k+N+1} − y[i]_{set-point}‖²_{T_i}

and we rewrite (4.54) as

‖ỹ[i]_{k+N+1} − y[i]_{set-point}‖²_{T_i} ≤ ‖ỹ[i]_{k+N} − y[i]_{set-point}‖²_{T_i} − γ ‖ỹ[i]_{k+N+1} − ỹ[i]_{k+N}‖²

From this we infer that ỹ[i]_{k+N+1} − ỹ[i]_{k+N} → 0 as k → ∞, and that

‖ỹ[i]_{k+N} − y[i]_{set-point}‖²_{T_i} → const                               (4.55)

as k → +∞. Assume, by contradiction, that ‖ỹ[i]_{k+N} − y[i]_{set-point}‖²_{T_i} → c_i, with c_i > c°_i, where

c°_i = ‖y[i]_{feas.set-point} − y[i]_{set-point}‖²_{T_i}                      (4.56)

Note that this implies that ỹ[i]_{k+N} ↛ y[i]_{feas.set-point} for all i = 1, ..., M. Assume that, given k̄, for all k ≥ k̄ the optimal solution to (4.42) is ȳ[i]_{k+N} = ȳ[i], where ‖ȳ[i] − y[i]_{set-point}‖²_{T_i} = c_i. It results that

V^y_i (ȳ[i]_{k+N}, k) = c_i

On the other hand, an alternative solution is given by ȳ'[i]_{k+N} = λ_i ȳ[i] + (1 − λ_i) y[i]_{feas.set-point}, with λ_i ∈ [0, 1). This solution is feasible provided that:

I) ȳ'[i]_{k+N} − ȳ[i] ∈ B^{(m_i)}_{p,ε}(0), which can be verified if (1 − λ_i) is sufficiently small;

II) ȳ'[i]_{k+N} ∈ Y_i, which is also satisfied if (1 − λ_i) is sufficiently small (since Y_i is convex and ȳ[i] ≠ y[i]_{feas.set-point}).

According to this alternative solution,

V^y_i (ȳ'[i]_{k+N}, k) = γ ‖ȳ'[i]_{k+N} − ȳ[i]‖² + ‖ȳ'[i]_{k+N} − y[i]_{set-point}‖²_{T_i}

Now, if (4.45) is verified, then V^y_i (ȳ'[i]_{k+N}, k) < V^y_i (ȳ[i], k). This contradicts the assumption that ‖ỹ[i]_{k+N} − y[i]_{set-point}‖²_{T_i} → c_i with c_i > c°_i. Therefore, the only asymptotic solution compatible with (4.42) is the one corresponding to ỹ[i]_{k+N} → y[i]_{feas.set-point} for all i.

It is thus proved that ỹ[i]_k → y[i]_{feas.set-point} as k → ∞. In view of Assumption 4.2, this implies that C̄_i χ[i]ss_k = C_i x[i]ss_k → y[i]_{feas.set-point} for all i = 1, ..., M.

Proof of recursive feasibility of the i-DPC problem. Assume that, at step k, a solution to (4.46) exists for all i = 1, ..., M, i.e., (x̂[i]°_k, û[i]°_{[k:k+N−1]}). Next we prove that, at step k+1, a solution to (4.46) exists for all i = 1, ..., M. To do so, we prove that the tuple (x̂[i]°_{k+1}, û[i]_{[k+1:k+N]}) satisfies the constraints (4.25) (or (4.26))

and (4.48a)-(4.49), and is therefore a feasible (possibly suboptimal) solution to (4.46). Here û[i]_{[k+1:k+N]} is obtained by appending to û[i]°_{[k+1:k+N−1]} the term

û[i]_{k+N} = ũ[i]_{k+N} + K_i (x̂[i]°_{k+N} − x̃[i]_{k+N})                     (4.57)

First note that, in view of (4.6a), (4.25) and (4.57) (or, if the observer is used, in view of (4.26) and (4.57)),

x̂[i]_{k+N+1} − x̃[i]_{k+N+1} = F_ii (x̂[i]°_{k+N} − x̃[i]_{k+N})                (4.58)

and therefore x̂[i]_{k+N+1} − x̃[i]_{k+N+1} ∈ Σ_i in view of the definition of Σ_i as a positively invariant set for (4.38). Therefore the terminal constraint (4.49) is verified at step k+1. Moreover, in view of the robust positive invariance of the sets E_i with respect to equation (4.29), i = 1, ..., M, x[i]_{k+1} − x̂[i]°_{k+1} ∈ E_i, and therefore (4.48a) is verified. Furthermore, in view of the feasibility of (4.48b)-(4.48d) at step k, it follows that constraints (4.48b)-(4.48d) are satisfied at steps k+ν for ν = 1, ..., N−1 and, from (4.49) and (4.39),

ẑ[i]_{k+N} − z̃[i]_{k+N} = (C_zi + D_zi K_i)(x̂[i]°_{k+N} − x̃[i]_{k+N}) ∈ (C_zi + D_zi K_i) Σ_i ⊆ Δẑ_i    (4.59)

Hence constraint (4.48b) is verified at time k+N. Suppose now that the integrator is used for the reference state and input trajectory layer. In this case we have that

(x̂[i]°_{k+N} − x̃[i]_{k+N}, û[i]_{k+N} − ũ[i]_{k+N}) ∈ [I_ni; K_i] Σ_i         (4.60)

where, from (4.13),

(x̃[i]_{k+N}, ũ[i]_{k+N}) ∈ [I_ni 0; K^x_i K^e_i] (χ[i]ss_{k+N} ⊕ Δχ_i)        (4.61)

In turn, in view of (4.8) and similarly to (4.9),

χ[i]ss_{k+N} ∈ Γ_i (I_{n+p} − F̄)^{-1} Ḡ Π_{j=1}^{M} Y_j                       (4.62)

This eventually implies that, in view of (4.40),

(x̂[i]°_{k+N}, û[i]_{k+N}) ∈ [I_ni 0; K^x_i K^e_i] ( Γ_i (I_{n+p} − F̄)^{-1} Ḡ Π_{j=1}^{M} Y_j ⊕ Δχ_i ) ⊕ [I_ni; K_i] Σ_i ⊆ X̂_i × Û_i    (4.63)

which verifies constraints (4.48c) and (4.48d) at time k+N. Assume instead that the observer is used for the reference state and input trajectory layer. In this case we have that

(x̂[i]°_{k+N} − x̃[i]_{k+N}, û[i]_{k+N} − ũ[i]_{k+N}) ∈ [I_ni; K_i] Σ_i

where, from (4.23),

(x̃[i]_{k+N} − x[i]ss_{k+N}, ũ[i]_{k+N} − u[i]ss_{k+N}) ∈ Δχ_i                  (4.64)

In turn, in view of the definition of the sets Y_i,

(x[i]ss_{k+N}, u[i]ss_{k+N}) ∈ H_i S^{-1} [0; I_m] Π_{j=1}^{M} Y_j              (4.65)

This eventually implies that, in view of (4.41),

(x̂[i]°_{k+N}, û[i]_{k+N}) ∈ H_i S^{-1} [0; I_m] Π_{j=1}^{M} Y_j ⊕ Δχ_i ⊕ [I_ni; K_i] Σ_i ⊆ X̂_i × Û_i

which verifies constraints (4.48c) and (4.48d) at time k+N.

Proof of convergence for the robust MPC layer. At time k the pair (x̂[i]°_k, û[i]°_{[k:k+N−1]}) is a solution to (4.46), leading to the optimal cost

V^N,°_i(k) = Σ_{ν=0}^{N−1} ( ‖x̂[i]°_{k+ν} − x̃[i]_{k+ν}‖²_{Q_i} + ‖û[i]°_{k+ν} − ũ[i]_{k+ν}‖²_{R_i} ) + ‖x̂[i]°_{k+N} − x̃[i]_{k+N}‖²_{P_i}    (4.66)

Since (x̂[i]°_{k+1}, û[i]_{[k+1:k+N]}) is a feasible solution to (4.46) at time k+1, by optimality V^N,°_i(k+1) ≤ V^N_i(x̂[i]°_{k+1}, û[i]_{[k+1:k+N]}), which is equal to

V^N_i(x̂[i]°_{k+1}, û[i]_{[k+1:k+N]}) = Σ_{ν=1}^{N} ( ‖x̂[i]_{k+ν} − x̃[i]_{k+ν}‖²_{Q_i} + ‖û[i]_{k+ν} − ũ[i]_{k+ν}‖²_{R_i} ) + ‖x̂[i]_{k+N+1} − x̃[i]_{k+N+1}‖²_{P_i}    (4.67)

Adding and removing the terms ‖x̂[i]°_k − x̃[i]_k‖²_{Q_i} + ‖û[i]°_k − ũ[i]_k‖²_{R_i} and ‖x̂[i]°_{k+N} − x̃[i]_{k+N}‖²_{P_i} on the right-hand side of (4.67), we obtain

V^N,°_i(k+1) ≤ Σ_{ν=0}^{N−1} ( ‖x̂[i]_{k+ν} − x̃[i]_{k+ν}‖²_{Q_i} + ‖û[i]_{k+ν} − ũ[i]_{k+ν}‖²_{R_i} ) + ‖x̂[i]°_{k+N} − x̃[i]_{k+N}‖²_{P_i}    (4.68a)
             + ‖x̂[i]°_{k+N} − x̃[i]_{k+N}‖²_{Q_i} + ‖û[i]_{k+N} − ũ[i]_{k+N}‖²_{R_i}                                                      (4.68b)
             + ‖x̂[i]_{k+N+1} − x̃[i]_{k+N+1}‖²_{P_i} − ‖x̂[i]°_{k+N} − x̃[i]_{k+N}‖²_{P_i}                                                 (4.68c)
             − ( ‖x̂[i]°_k − x̃[i]_k‖²_{Q_i} + ‖û[i]°_k − ũ[i]_k‖²_{R_i} )                                                                 (4.68d)

Note that (4.68a) is equal to V^N,°_i(k) in (4.66). On the other hand, in view of (4.57) and (4.58), we can write (4.68b)+(4.68c) as

(4.68b)+(4.68c) = ‖x̂[i]°_{k+N} − x̃[i]_{k+N}‖²_{F_ii^T P_i F_ii − P_i + Q_i + K_i^T R_i K_i}

which, in view of (4.51), implies that

(4.68b)+(4.68c) = 0                                                           (4.69)

Therefore, for all i = 1, ..., M,

V^N,°_i(k+1) − V^N,°_i(k) ≤ −( ‖x̂[i]°_k − x̃[i]_k‖²_{Q_i} + ‖û[i]°_k − ũ[i]_k‖²_{R_i} )    (4.70)

and, according to standard arguments in MPC [105], we conclude that, for all i = 1, ..., M,

x̂[i]°_k − x̃[i]_k → 0                                                         (4.71a)
û[i]°_k − ũ[i]_k → 0                                                         (4.71b)

as k → ∞.

Suppose now that the integrator is used for the reference state and input trajectory layer. In this case, consider the model (4.1a) and, collectively, the model (4.6a) and equation (4.28). We have that

x_{k+1} = A x_k + B ( û°_k + K (x_k − x̂°_k) )
x̃_{k+1} = A x̃_k + B ũ_k                                                      (4.72)

for all k ≥ 0. Denote Δx_k = x_k − x̃_k, Δx̂_k = x̂°_k − x̃_k and Δû_k = û°_k − ũ_k. From (4.72),

Δx_{k+1} = F Δx_k + B ( Δû_k − K Δx̂_k )                                       (4.73)

Since, in view of (4.71), B(Δû_k − K Δx̂_k) → 0 as k → ∞, in view of Assumption 4.1 it holds that Δx_k → 0 as k → ∞, which implies that asymptotically C_i x[i]_k → C_i x̃[i]_k.

On the other hand, suppose that the observer is used for the reference state and input trajectory layer. Consider the model (4.1a) and, collectively, the model (4.19) and equation (4.28). We have that

x_{k+1} = A x_k + B ( û°_k + K (x_k − x̂°_k) )
x̃_{k+1} = A x̃_k + B ũ_k + G^x (ỹ_{k+1} − C x̃_k)                              (4.74)

for all k ≥ 0, where G^x = diag(G^x_1, ..., G^x_M). From (4.74),

Δx_{k+1} = F Δx_k + B ( Δû_k − K Δx̂_k ) − G^x (ỹ_{k+1} − C x̃_k)               (4.75)

Since, in view of (4.71) and of the convergence of the reference trajectory layer, B(Δû_k − K Δx̂_k) − G^x(ỹ_{k+1} − C x̃_k) → 0 as k → ∞, in view of Assumption 4.1 it holds that Δx_k → 0 as k → ∞, which implies that asymptotically C_i x[i]_k → C_i x̃[i]_k.

5 DPC for systems in velocity-form

In this Chapter, DPC is extended to include an integral action in the closed-loop for the tracking of constant reference signals. Specifically, according to the approach proposed in, e.g., [122, 168], integrators are inserted into the loop and the corresponding enlarged system is described in velocity-form. If the process is affected by constant disturbances, this solution allows one to guarantee offset-free steady-state regulation for constant reference signals without the need to compute the corresponding steady-state values of the state and control vectors. Under standard assumptions in MPC, the closed-loop system enjoys stability properties, in the sense that the subsystems' state trajectories starting from given sets in the state space converge to the required equilibrium.

5.1 The system

System under control

Consider a discrete-time linear system obeying the dynamics

x_{k+1} = A x_k + B u_k
y_k = C x_k                                                                   (5.1)

where x_k ∈ R^n is the state, u_k ∈ R^m is the input, and y_k ∈ R^m is the output. The constant output reference target value to be tracked is denoted by ȳ ∈ R^m. To guarantee the existence and the uniqueness of the steady-state pair (x̄, ū) ∈ R^{n+m} such that the system output corresponds to ȳ, the following standard assumption is made.

Assumption 5.1 The input-output system (5.1) has no invariant zeros in 1, i.e.,

rank [I_n − A  B; C  0] = n + m

The state and input vectors are constrained to lie in prescribed sets, i.e., x_k ∈ X, u_k ∈ U, where X and U are compact and convex sets. As will be more formally specified in Assumption 5.4, the constants x̄ and ū must lie in suitable subsets of X and U, respectively, which in turn implicitly defines a set of admissible constant output references Ȳ. The sets X and U are here defined as the Cartesian product of suitable subsets, according to the decomposition of the system into subsystems (see the next section).

Partitioned system

The system (5.1) is partitioned into M low-order, interconnected, non-overlapping subsystems, where a generic sub-model has x[i]_k ∈ R^{n_i} as state vector, i.e., x_k = (x[1]_k, ..., x[M]_k) and Σ_{i=1}^{M} n_i = n. Accordingly, the state transition matrices A_11 ∈ R^{n_1×n_1}, ..., A_MM ∈ R^{n_M×n_M} of the M subsystems are the diagonal blocks of A, whereas the non-diagonal blocks of A (i.e., A_ij, with i ≠ j) define the dynamic coupling terms between subsystems. Also the input vector u_k is assumed to be partitioned into M non-overlapping sub-vectors u[i]_k ∈ R^{m_i}, where u_k = (u[1]_k, ..., u[M]_k) and u[i]_k is denoted as the i-th subsystem input. Correspondingly, B_ij, i, j = 1, ..., M, are the blocks of B defining the direct influence of input u[j] upon the state x[i]. Finally, the output y_k is partitioned into M non-overlapping output vectors y[i]_k ∈ R^{m_i}, with i = 1, ..., M and Σ_{i=1}^{M} m_i = m, where y[i]_k is assumed to depend only on x[i]_k, for all i = 1, ..., M. This implies that C has a block-diagonal structure C = diag(C_1, ..., C_M), where C_i ∈ R^{m_i×n_i} for all i = 1, ..., M. In accordance with the output partition, the output reference target ȳ can be seen as decomposed into M local output reference targets ȳ[i], consistent with the definition of y[i]_k.
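The condition of Assumption 5.1 is a plain rank test and is easy to check numerically. The sketch below verifies it for a toy system (all matrices are illustrative; the block matrix below is the one appearing in the assumption, evaluated at z = 1):

```python
import numpy as np

# Assumption 5.1 for a toy system: no invariant zeros in 1, i.e.
# rank([[I - A, B], [C, 0]]) = n + m  (matrices are illustrative)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
n, m = 2, 1

rosenbrock = np.block([[np.eye(n) - A, B],
                       [C, np.zeros((m, m))]])
assert np.linalg.matrix_rank(rosenbrock) == n + m
```

If this rank drops, the steady-state pair (x̄, ū) for a generic ȳ fails to exist or is non-unique, which is exactly what the assumption rules out.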
From now on, subsystem $j$ is said to be a dynamic neighbor of subsystem $i$ if and only if the state and/or the input of $j$ affects the dynamics of subsystem $i$, i.e., if and only if $A_{ij} \neq 0$ and/or $B_{ij} \neq 0$. By $N_i$ we denote the set of dynamic neighbors of subsystem $i$ (which excludes $i$).
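The neighbor sets $N_i$ follow mechanically from the block structure of $A$ and $B$. A minimal sketch (the 3-subsystem coupling pattern below is an assumption for illustration only):

```python
import numpy as np

def dynamic_neighbors(A_blocks, B_blocks):
    """N_i = { j != i : A_ij != 0 or B_ij != 0 }, from the block partition."""
    M = len(A_blocks)
    return [
        {j for j in range(M)
         if j != i and (np.any(A_blocks[i][j]) or np.any(B_blocks[i][j]))}
        for i in range(M)
    ]

# Toy partition with scalar blocks: subsystem 0 is driven by the state of
# subsystem 2, and subsystem 2 by the state of subsystem 1.
Z = np.zeros((1, 1))
A_blocks = [[np.array([[0.7]]), Z, np.array([[0.1]])],
            [Z, np.array([[0.8]]), Z],
            [Z, np.array([[0.05]]), np.array([[0.9]])]]
B_blocks = [[np.array([[1.0]]), Z, Z],
            [Z, np.array([[1.0]]), Z],
            [Z, Z, np.array([[1.0]])]]

N = dynamic_neighbors(A_blocks, B_blocks)
print(N)  # [{2}, set(), {1}]
```

Note that the relation is not symmetric: here $2 \in N_0$ while $0 \notin N_2$, which is exactly why each i-DPC problem needs trajectories only from its own neighbors.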

According to these partitions, the $i$-th subprocess obeys the linear dynamics

$x_{k+1}^{[i]} = A_{ii} x_k^{[i]} + B_{ii} u_k^{[i]} + \sum_{j \in N_i} \{A_{ij} x_k^{[j]} + B_{ij} u_k^{[j]}\}$
$y_k^{[i]} = C_i x_k^{[i]}$    (5.2)

where $x_k^{[i]} \in X_i \subseteq \mathbb{R}^{n_i}$ and $u_k^{[i]} \in U_i \subseteq \mathbb{R}^{m_i}$, $X_i$ and $U_i$ being convex sets consistent with the adopted partition, i.e., $X = \prod_{i=1}^{M} X_i$, $U = \prod_{i=1}^{M} U_i$. We assume that each subsystem enjoys the following property.

Assumption 5.2 For each subsystem (5.2):
i) the pair $(A_{ii}, B_{ii})$ is reachable;
ii) $\operatorname{rank}\left(\begin{bmatrix} I_{n_i} - A_{ii} & B_{ii} \\ C_i & 0 \end{bmatrix}\right) = n_i + m_i$

5.1.3 System with integrators

In order to solve the tracking problem, the partitioned system is now enlarged with $m$ integrators and, according to [122,168], is described in velocity-form. Specifically, letting $\delta x_k^{[i]} = x_k^{[i]} - x_{k-1}^{[i]}$, $\varepsilon_k^{[i]} = y_k^{[i]} - \bar{y}^{[i]}$, and $\delta u_k^{[i]} = u_k^{[i]} - u_{k-1}^{[i]}$, system (5.2) is written as

$\delta x_{k+1}^{[i]} = A_{ii}\, \delta x_k^{[i]} + B_{ii}\, \delta u_k^{[i]} + \sum_{j \in N_i} \{A_{ij}\, \delta x_k^{[j]} + B_{ij}\, \delta u_k^{[j]}\}$
$\varepsilon_{k+1}^{[i]} = C_i A_{ii}\, \delta x_k^{[i]} + \varepsilon_k^{[i]} + C_i B_{ii}\, \delta u_k^{[i]} + C_i \sum_{j \in N_i} \{A_{ij}\, \delta x_k^{[j]} + B_{ij}\, \delta u_k^{[j]}\}$    (5.3)

Letting $\xi_k^{[i]} = (\delta x_k^{[i]}, \varepsilon_k^{[i]})$ and

$\mathbf{A}_{ii} = \begin{bmatrix} A_{ii} & 0 \\ C_i A_{ii} & I_{m_i} \end{bmatrix}$, $\mathbf{B}_{ii} = \begin{bmatrix} B_{ii} \\ C_i B_{ii} \end{bmatrix}$, $\mathbf{A}_{ij} = \begin{bmatrix} A_{ij} & 0 \\ C_i A_{ij} & 0 \end{bmatrix}$, $\mathbf{B}_{ij} = \begin{bmatrix} B_{ij} \\ C_i B_{ij} \end{bmatrix}$, $\mathbf{C}_i = \begin{bmatrix} 0 & I_{m_i} \end{bmatrix}$

system (5.3) can be written in compact form as

$\xi_{k+1}^{[i]} = \mathbf{A}_{ii} \xi_k^{[i]} + \mathbf{B}_{ii}\, \delta u_k^{[i]} + \sum_{j \in N_i} \{\mathbf{A}_{ij} \xi_k^{[j]} + \mathbf{B}_{ij}\, \delta u_k^{[j]}\}$
$\varepsilon_k^{[i]} = \mathbf{C}_i \xi_k^{[i]}$    (5.4)

For system (5.4) the following property holds (see the Appendix for the proof):

Proposition 5.1 Under Assumption 5.2, the pair $(\mathbf{A}_{ii}, \mathbf{B}_{ii})$ is reachable.

The set of $M$ models (5.4) can be written in the collective form

$\xi_{k+1} = \mathbf{A} \xi_k + \mathbf{B}\, \delta u_k$
$\varepsilon_k = \mathbf{C} \xi_k$    (5.5)

where $\xi_k = (\xi_k^{[1]}, \ldots, \xi_k^{[M]})$, $\varepsilon_k = (\varepsilon_k^{[1]}, \ldots, \varepsilon_k^{[M]})$, $\delta u_k = (\delta u_k^{[1]}, \ldots, \delta u_k^{[M]})$, while $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$ are the matrices whose block entries are $\mathbf{A}_{ij}$, $\mathbf{B}_{ij}$, and $\mathbf{C}_{ij}$, respectively. Concerning system (5.5) and its partition, the following main assumption on decentralized stabilizability is introduced.

Assumption 5.3 There exists a block-diagonal matrix $K$, i.e., $K = \operatorname{diag}(K_1, \ldots, K_M)$ with $K_i \in \mathbb{R}^{m_i \times (n_i + m_i)}$, $i = 1, \ldots, M$, such that:
i) the matrix $F = \mathbf{A} + \mathbf{B} K$ is Schur;
ii) the matrices $F_{ii} = \mathbf{A}_{ii} + \mathbf{B}_{ii} K_i$ are Schur.

5.2 The DPC algorithm for tracking

5.2.1 Nominal models and control law

DPC is developed under the assumption that, at any time instant $k$, each subsystem (5.4) transmits its state and input reference trajectories $\tilde{\xi}_{k+\nu}^{[i]}$ and $\delta\tilde{u}_{k+\nu}^{[i]}$, $\nu = 0, \ldots, N-1$, to its neighbors. Moreover, by adding suitable constraints to its formulation, each subsystem is able to guarantee that, for all $k \geq 0$, its true state $\xi_k^{[i]}$ and input $\delta u_k^{[i]}$ trajectories lie in specified time-invariant neighborhoods of $\tilde{\xi}_k^{[i]}$ and $\delta\tilde{u}_k^{[i]}$, respectively, i.e., $\xi_k^{[i]} - \tilde{\xi}_k^{[i]} \in E_i$ and $\delta u_k^{[i]} - \delta\tilde{u}_k^{[i]} \in E_i^u$, where $0 \in E_i$ and $0 \in E_i^u$. In this way (5.4) can be written as

$\xi_{k+1}^{[i]} = \mathbf{A}_{ii} \xi_k^{[i]} + \mathbf{B}_{ii}\, \delta u_k^{[i]} + \sum_{j \in N_i} \{\mathbf{A}_{ij} \tilde{\xi}_k^{[j]} + \mathbf{B}_{ij}\, \delta\tilde{u}_k^{[j]}\} + w_k^{[i]}$    (5.6)

where

$w_k^{[i]} = \sum_{j \in N_i} \{\mathbf{A}_{ij} (\xi_k^{[j]} - \tilde{\xi}_k^{[j]}) + \mathbf{B}_{ij} (\delta u_k^{[j]} - \delta\tilde{u}_k^{[j]})\} \in W_i$    (5.7)

is a bounded disturbance, specifically

$W_i = \bigoplus_{j \in N_i} \{\mathbf{A}_{ij} E_j \oplus \mathbf{B}_{ij} E_j^u\}$    (5.8)

and $\sum_{j \in N_i} \{\mathbf{A}_{ij} \tilde{\xi}_k^{[j]} + \mathbf{B}_{ij}\, \delta\tilde{u}_k^{[j]}\}$ can be seen as an input, known in advance over the prediction horizon, which can be treated in the MPC formulation as a known disturbance. With reference to the subsystems (5.6), it is possible to define the $i$-th subsystem nominal model as

$\hat{\xi}_{k+1}^{[i]} = \mathbf{A}_{ii} \hat{\xi}_k^{[i]} + \mathbf{B}_{ii}\, \delta\hat{u}_k^{[i]} + \sum_{j \in N_i} \{\mathbf{A}_{ij} \tilde{\xi}_k^{[j]} + \mathbf{B}_{ij}\, \delta\tilde{u}_k^{[j]}\}$    (5.9)

According to the robust MPC approach based on tubes [107], the control law for the true $i$-th subsystem (5.6) is defined, for all $k \geq 0$, as

$\delta u_k^{[i]} = \delta\hat{u}_k^{[i]} + K_i (\xi_k^{[i]} - \hat{\xi}_k^{[i]})$    (5.10)

Letting $z_k^{[i]} = \xi_k^{[i]} - \hat{\xi}_k^{[i]}$, from (5.6)-(5.10) we obtain

$z_{k+1}^{[i]} = (\mathbf{A}_{ii} + \mathbf{B}_{ii} K_i) z_k^{[i]} + w_k^{[i]}$    (5.11)

where $w_k^{[i]} \in W_i$. Since $W_i$ is bounded and $F_{ii} = \mathbf{A}_{ii} + \mathbf{B}_{ii} K_i$ is Schur, there exists a robust positively invariant (RPI) set $Z_i$ (which is assumed to be symmetric with respect to the origin) for (5.11) such that, if $z_k^{[i]} \in Z_i$, then $z_{k+1}^{[i]} \in Z_i$. Note that, correspondingly, $\delta u_k^{[i]} - \delta\hat{u}_k^{[i]} \in K_i Z_i$.

5.2.2 Input and state constraints

The reformulation (5.4) of (5.2) requires the original constraints on the state and control variables $x_k^{[i]}$ and $u_k^{[i]}$ to be transformed into constraints on $\delta x_k^{[i]}$ (i.e., on $\xi_k^{[i]}$) and $\delta u_k^{[i]}$. To this end, first define the sets $\bar{E}_i$ and $\bar{E}_i^u$, for all $i = 1, \ldots, M$, containing the origin and satisfying $\bar{E}_i \oplus Z_i \subseteq E_i$ and $\bar{E}_i^u \oplus K_i Z_i \subseteq E_i^u$, respectively. Then, note that,

for $\nu = 1, \ldots, N$, it holds that

$x_{k+\nu}^{[i]} = x_k^{[i]} + \sum_{r=1}^{\nu} \delta x_{k+r}^{[i]} = x_k^{[i]} + \sum_{r=1}^{\nu} \delta\hat{x}_{k+r}^{[i]} + \sum_{r=1}^{\nu} \left(\delta x_{k+r}^{[i]} - \delta\hat{x}_{k+r}^{[i]}\right) \in x_k^{[i]} + \sum_{r=1}^{\nu} \delta\hat{x}_{k+r}^{[i]} \oplus \bigoplus_{r=1}^{\nu} \begin{bmatrix} I_{n_i} & 0 \end{bmatrix} Z_i$    (5.12a)

$u_{k+\nu-1}^{[i]} = u_{k-1}^{[i]} + \sum_{r=1}^{\nu} \delta\hat{u}_{k+r-1}^{[i]} + \sum_{r=1}^{\nu} \left(\delta u_{k+r-1}^{[i]} - \delta\hat{u}_{k+r-1}^{[i]}\right) \in u_{k-1}^{[i]} + \sum_{r=1}^{\nu} \delta\hat{u}_{k+r-1}^{[i]} \oplus \bigoplus_{r=1}^{\nu} K_i Z_i$    (5.12b)

Therefore, the constraints $x_{k+\nu}^{[i]} \in X_i$ and $u_{k+\nu-1}^{[i]} \in U_i$ for $\nu = 1, \ldots, N$ translate into the following:

$x_k^{[i]} + \sum_{r=1}^{\nu} \delta\hat{x}_{k+r}^{[i]} \in \hat{X}_i(\nu) = X_i \ominus \bigoplus_{r=1}^{\nu} \begin{bmatrix} I_{n_i} & 0 \end{bmatrix} Z_i$    (5.13a)

$u_{k-1}^{[i]} + \sum_{r=1}^{\nu} \delta\hat{u}_{k+r-1}^{[i]} \in \hat{U}_i(\nu) = U_i \ominus \bigoplus_{r=1}^{\nu} K_i Z_i$    (5.13b)

In the following, these constraints on $\delta\hat{x}^{[i]}$ will be transformed into equivalent constraints on $\hat{\xi}^{[i]}$ (see (5.18) and (5.19) below). Note that, in view of definition (5.13), $\hat{U}_i(\nu+1) \subseteq \hat{U}_i(\nu)$ and $\hat{X}_i(\nu+1) \subseteq \hat{X}_i(\nu)$ for all $\nu$.

5.2.3 i-DPC problems

The minimization problem for subsystem $i$ (i-DPC) at instant $k$ is now stated, starting from the knowledge of the stabilizing gain matrix $K_i$; of the sets $E_i$, $E_i^u$, $Z_i$, $\bar{E}_i$, and $\bar{E}_i^u$; of the future state and input reference trajectories for $i$ and its neighbors, $\tilde{\xi}_{k+\nu}^{[j]}$ and $\delta\tilde{u}_{k+\nu}^{[j]}$, $\nu = 0, \ldots, N-1$, $j \in N_i$; and of the output reference target $\bar{y}^{[i]}$. With these ingredients, the i-DPC problem consists in the following:

$\min_{\hat{\xi}_k^{[i]},\, \delta\hat{u}_{[k:k+N-1]}^{[i]}} V_i^N(\hat{\xi}_k^{[i]}, \delta\hat{u}_{[k:k+N-1]}^{[i]}) = \sum_{\nu=0}^{N-1} \left( \|\hat{\xi}_{k+\nu}^{[i]}\|_{Q_i}^2 + \|\delta\hat{u}_{k+\nu}^{[i]}\|_{R_i}^2 \right) + \|\hat{\xi}_{k+N}^{[i]}\|_{P_i}^2$    (5.14)

subject to the dynamic constraints (5.9), to the constraints

$\xi_k^{[i]} - \hat{\xi}_k^{[i]} \in Z_i$    (5.15)
$\hat{\xi}_{k+\nu}^{[i]} - \tilde{\xi}_{k+\nu}^{[i]} \in \bar{E}_i$    (5.16)
$\delta\hat{u}_{k+\nu}^{[i]} - \delta\tilde{u}_{k+\nu}^{[i]} \in \bar{E}_i^u$    (5.17)
$x_k^{[i]} + \begin{bmatrix} I_{n_i} & 0 \end{bmatrix} \sum_{r=1}^{\nu} \hat{\xi}_{k+r}^{[i]} \in \hat{X}_i(\nu)$    (5.18)
$u_{k-1}^{[i]} + \sum_{r=1}^{\nu} \delta\hat{u}_{k+r-1}^{[i]} \in \hat{U}_i(\nu)$    (5.19)

where (5.16)-(5.17) are imposed for $\nu = 0, \ldots, N-1$ and (5.18)-(5.19) for $\nu = 1, \ldots, N$, and to the terminal constraint

$\hat{\xi}_{k+N}^{[i]} \in \Xi_i^F$    (5.20)

where the sets $\Xi_i^F$ (assumed to be symmetric with respect to the origin) are defined in the following.

Remark 5.1.
I) If (5.15) is satisfied and (5.16)-(5.17) are verified for $\nu = 0$, then $\xi_k^{[i]} - \tilde{\xi}_k^{[i]} \in \bar{E}_i \oplus Z_i \subseteq E_i$ and $\delta u_k^{[i]} - \delta\tilde{u}_k^{[i]} \in \bar{E}_i^u \oplus K_i Z_i \subseteq E_i^u$, which implies that $w_k^{[i]} \in W_i$. This, in view of the invariance property of (5.11) with respect to $Z_i$, implies that $\xi_{k+1}^{[i]} - \hat{\xi}_{k+1}^{[i]} \in Z_i$. Considering that constraints (5.16) and (5.17) are imposed over the whole prediction horizon, it follows by induction that $w_{k+\nu}^{[i]} \in W_i$ and $\xi_{k+\nu}^{[i]} - \hat{\xi}_{k+\nu}^{[i]} \in Z_i$ for all $\nu = 1, \ldots, N$. Therefore, the initially stated requirement on the boundedness of the disturbance $w_k^{[i]}$ is verified.
II) In the quadratic cost function (5.14), the positive definite matrices $Q_i$, $R_i$, and $P_i$ are tuning parameters to be properly selected by the designer.
III) The final cost $V_i^F = \|\hat{\xi}_{k+N}^{[i]}\|_{P_i}^2$ must be selected (see the following section) to guarantee suitable decreasing properties of the cost function.

The solution of the i-DPC problem (5.14) at time $k$ is the pair $(\hat{\xi}_{k/k}^{[i]}, \delta\hat{u}_{[k:k+N-1]/k}^{[i]})$; therefore, according to (5.10) and a receding-horizon implementation, the input to system (5.2) at instant $k$ is

$\delta u_k^{[i]} = \delta\hat{u}_{k/k}^{[i]} + K_i (\xi_k^{[i]} - \hat{\xi}_{k/k}^{[i]})$    (5.21)
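The structure of the cost (5.14) is easy to misread in its flattened form, so here is a minimal sketch of how it is assembled from the predicted trajectories; the weights and trajectory values are arbitrary illustrative numbers, not taken from the thesis:

```python
import numpy as np

def dpc_cost(xi_hat, du_hat, Q, R, P):
    """Cost (5.14): sum over nu = 0..N-1 of ||xi_hat||^2_Q + ||du_hat||^2_R,
       plus the terminal penalty ||xi_hat at stage N||^2_P.
       xi_hat has N+1 rows (stages 0..N), du_hat has N rows (stages 0..N-1)."""
    stage = sum(x @ Q @ x + u @ R @ u for x, u in zip(xi_hat[:-1], du_hat))
    terminal = xi_hat[-1] @ P @ xi_hat[-1]
    return stage + terminal

# Tiny scalar illustration with horizon N = 2.
Q = np.array([[2.0]])
R = np.array([[1.0]])
P = np.array([[4.0]])
xi_hat = np.array([[1.0], [0.5], [0.25]])   # predicted augmented states, stages k..k+N
du_hat = np.array([[-0.5], [-0.25]])        # predicted input increments
print(dpc_cost(xi_hat, du_hat, Q, R, P))    # 3.0625
```

In an actual i-DPC implementation this quadratic cost would be handed to a QP solver together with the linear dynamics (5.9) and the polytopic constraints (5.15)-(5.20); the sketch only shows how the stage and terminal terms combine.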

Finally, letting $\hat{\xi}_{k+\nu/k}^{[i]}$ be the trajectory stemming from $\hat{\xi}_{k/k}^{[i]}$, $\delta\hat{u}_{[k:k+N-1]/k}^{[i]}$, and (5.9), the reference trajectories to be used at the next time instant $k+1$ are incrementally updated by appending the values

$\tilde{\xi}_{k+N}^{[i]} = \hat{\xi}_{k+N/k}^{[i]}$    (5.22a)
$\delta\tilde{u}_{k+N}^{[i]} = K_i \hat{\xi}_{k+N/k}^{[i]}$    (5.22b)

to the reference trajectories previously defined for $k + \nu$, $\nu = 0, \ldots, N-1$.

5.3 Convergence results

The convergence properties of the proposed algorithm can be proved according to the results reported in Chapter 2 and in [50]. First define the set of admissible initial conditions $\xi_0 = (\xi_0^{[1]}, \ldots, \xi_0^{[M]})$ and initial reference trajectories $\tilde{\xi}_{[0:N-1]}^{[j]}$, $\delta\tilde{u}_{[0:N-1]}^{[j]}$, for all $j = 1, \ldots, M$, as follows.

Definition 5.1 Letting $\bar{\xi} = (\bar{\xi}^{[1]}, \ldots, \bar{\xi}^{[M]})$, denote the feasibility region for all the i-DPC problems by

$\Xi^N := \{\bar{\xi} :$ if $\xi_0^{[i]} = \bar{\xi}^{[i]}$ for all $i = 1, \ldots, M$, then there exist $(\tilde{\xi}_{[0:N-1]}^{[1]}, \ldots, \tilde{\xi}_{[0:N-1]}^{[M]})$, $(\delta\tilde{u}_{[0:N-1]}^{[1]}, \ldots, \delta\tilde{u}_{[0:N-1]}^{[M]})$, $(\hat{\xi}_{0/0}^{[1]}, \ldots, \hat{\xi}_{0/0}^{[M]})$, $(\delta\hat{u}_{[0:N-1]}^{[1]}, \ldots, \delta\hat{u}_{[0:N-1]}^{[M]})$ such that (5.2) and (5.15)-(5.20) are satisfied for all $i = 1, \ldots, M\}$

Moreover, for each $\bar{\xi} \in \Xi^N$, let

$\Xi_{\bar{\xi}}^N := \{(\tilde{\xi}_{[0:N-1]}^{[1]}, \ldots, \tilde{\xi}_{[0:N-1]}^{[M]}), (\delta\tilde{u}_{[0:N-1]}^{[1]}, \ldots, \delta\tilde{u}_{[0:N-1]}^{[M]}) :$ if $\xi_0^{[i]} = \bar{\xi}^{[i]}$ for all $i = 1, \ldots, M$, then there exist $(\hat{\xi}_{0/0}^{[1]}, \ldots, \hat{\xi}_{0/0}^{[M]})$, $(\delta\hat{u}_{[0:N-1]}^{[1]}, \ldots, \delta\hat{u}_{[0:N-1]}^{[M]})$ such that (5.2) and (5.15)-(5.20) are satisfied for all $i = 1, \ldots, M\}$

be the region of feasible initial reference trajectories with respect to a given initial condition.

Assumption 5.4 Letting $Z = \prod_{i=1}^{M} Z_i$, $\hat{U} = \prod_{i=1}^{M} \hat{U}_i$, and $\hat{\Xi}^F = \prod_{i=1}^{M} \hat{\Xi}_i^F$, it holds that:

i) $\hat{\Xi}^F$ is an invariant set for $\hat{\xi}^+ = F \hat{\xi}$;

ii) defining, for simplicity of notation, $\bar{\Xi} = \bigoplus_{s=1}^{\infty} F^s (\hat{\Xi}^F \oplus Z)$ and $H = \operatorname{diag}(\begin{bmatrix} I_{n_1} & 0 \end{bmatrix}, \ldots, \begin{bmatrix} I_{n_M} & 0 \end{bmatrix})$, the following properties must be satisfied:

$\bar{x} \in \hat{X}(N-1) \ominus \bigoplus_{r=1}^{N-1} H \bar{\Xi} \ominus H Z$    (5.23a)

$\bar{u} \in \hat{U}(N-1) \ominus \bigoplus_{r=1}^{N-1} K \bar{\Xi} \ominus K Z$    (5.23b)

iii) for all $\hat{\xi} \in \hat{\Xi}^F$,

$V^F(\hat{\xi}^+) - V^F(\hat{\xi}) \leq -l(\hat{\xi}, \delta\hat{u})$    (5.24)

where $V^F(\hat{\xi}) = \sum_{i=1}^{M} V_i^F(\hat{\xi}^{[i]})$ and $l(\hat{\xi}, \delta\hat{u}) = \sum_{i=1}^{M} l_i(\hat{\xi}^{[i]}, \delta\hat{u}^{[i]})$, being $l_i(\hat{\xi}^{[i]}, \delta\hat{u}^{[i]}) = \|\hat{\xi}^{[i]}\|_{Q_i}^2 + \|\delta\hat{u}^{[i]}\|_{R_i}^2$.

Assumptions 5.4-i) and 5.4-iii) are standard in stabilizing MPC algorithms, while Assumption 5.4-ii) corresponds to requiring that the state and control variables of the closed-loop system formed by (5.5) and the set of control laws (5.10) satisfy the constraints on $x$ and $u$.

Assumption 5.5 Given the sets $E_i$, $E_i^u$, and the RPI sets $Z_i$ for equation (5.11), there exists a real constant $\rho_E > 0$ such that $Z_i \oplus \mathcal{B}_{\rho_E}^{(n_i)}(0) \subseteq E_i$ and $K_i Z_i \oplus \mathcal{B}_{\rho_E}^{(m_i)}(0) \subseteq E_i^u$ for all $i = 1, \ldots, M$, where $\mathcal{B}_{\rho_E}^{(dim)}(0)$ is a ball of radius $\rho_E > 0$ centered at the origin in the $\mathbb{R}^{dim}$ space.

The main convergence result is now stated; since the velocity-form allows one to transform a tracking problem into a regulation one, it can be proved along the lines reported in Chapter 2 and in [50].

Theorem 5.1 Let Assumptions 5.1-5.5 be satisfied and let $E_i$ and $E_i^u$ be neighborhoods of the origin satisfying $\bar{E}_i \oplus Z_i \subseteq E_i$ and $\bar{E}_i^u \oplus K_i Z_i \subseteq E_i^u$. Then, for any initial reference trajectories in $\Xi_{\xi_0}^N$, the trajectory $\xi_k$, starting from any initial condition $\xi_0 \in \Xi^N$, asymptotically converges to the origin, so that the output $y$ of system (5.1) converges to the reference $\bar{y}$.
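The tube mechanism underlying these results can be illustrated numerically. For the error dynamics (5.11), a crude box outer bound of an RPI set is obtained whenever $F_{ii}$ is a contraction in the infinity norm; the matrices and disturbance bound below are assumptions for illustration only (actual RPI computations for DPC use polytopic set iterations, not this norm bound):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tube error dynamics z+ = F z + w, with w bounded componentwise (stand-in for W_i).
F = np.array([[0.5, 0.1],
              [0.0, 0.4]])
w_max = 0.1

# If ||F||_inf < 1, the box { z : ||z||_inf <= w_max / (1 - ||F||_inf) } is robust
# positively invariant: ||F z + w||_inf <= ||F||_inf * ||z||_inf + w_max keeps the
# state inside the same box. It is generally larger than the minimal RPI set.
gamma = np.linalg.norm(F, ord=np.inf)
assert gamma < 1.0
radius = w_max / (1.0 - gamma)

# Monte Carlo check: starting inside the box, the error never leaves it.
z = np.zeros(2)
max_seen = 0.0
for _ in range(500):
    w = rng.uniform(-w_max, w_max, size=2)
    z = F @ z + w
    max_seen = max(max_seen, np.linalg.norm(z, ord=np.inf))
print(max_seen <= radius)  # True
```

Here $\gamma = 0.6$ gives a box of radius $0.25$; the simulated error trajectory stays inside it, which is exactly the invariance property that keeps $\xi_k^{[i]} - \hat{\xi}_k^{[i]} \in Z_i$ in the convergence argument.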

5.4 Implementation issues

The DPC algorithm requires the computation of the block-diagonal matrix $K = \operatorname{diag}(K_1, \ldots, K_M)$ satisfying Assumption 5.3. Moreover, condition (5.24) must be fulfilled by a proper selection of a matrix $P = \operatorname{diag}(P_1, \ldots, P_M)$. Eventually, the sets $E_i$, $E_i^u$, $Z_i$, $\bar{E}_i$, $\bar{E}_i^u$, and $\Xi_i^F$ must be characterized. Since the tracking problem has been transformed into a regulation one, the algorithms presented in Chapter 2 can be used.

5.5 Simulation example

Consider the four-tank system of Figure 5.1, see [7,36,57,74,108,165], already introduced in Chapter 2. We recall that the goal is to control the levels $h_1$ and $h_3$ of Tanks 1 and 3. The manipulated inputs are the voltages $v_1$ and $v_2$ of the two pumps. The parameters $\gamma_1, \gamma_2 \in (0, 1)$ represent the fraction of water that flows into the lower tanks, and are kept fixed during the simulations.

Figure 5.1: Schematic representation of the four-tank system.

The dynamic model of the system and the values of the parameters have been described in Chapter 2. In this case, external disturbances are not considered. The linearization of the dynamic system and its zero-order-hold discretization with sampling time $T = 0.5$ s leads to the model (5.1)
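The zero-order-hold discretization step mentioned above can be sketched with the standard augmented-matrix-exponential construction. The four-tank matrices are given in Chapter 2 and are not reproduced here, so the sketch uses a scalar system as a sanity check:

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Matrix exponential via a truncated Taylor series (adequate for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def c2d_zoh(Ac, Bc, T):
    """Zero-order-hold discretization:
       exp([[Ac, Bc], [0, 0]] * T) = [[Ad, Bd], [0, I]]."""
    n, m = Ac.shape[0], Bc.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = Ac, Bc
    E = expm_taylor(M * T)
    return E[:n, :n], E[:n, n:]

# Sanity check on x' = -x + u with T = 0.5:
# the exact ZOH model is Ad = e^{-0.5}, Bd = 1 - e^{-0.5}.
Ad, Bd = c2d_zoh(np.array([[-1.0]]), np.array([[1.0]]), 0.5)
print(Ad, Bd)  # Ad ~ 0.6065, Bd ~ 0.3935
```

In practice a library routine (e.g. a control toolbox's continuous-to-discrete conversion) would replace the hand-rolled Taylor series; the augmented-matrix identity it implements is the same.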


More information

Topic # /31 Feedback Control Systems. Analysis of Nonlinear Systems Lyapunov Stability Analysis

Topic # /31 Feedback Control Systems. Analysis of Nonlinear Systems Lyapunov Stability Analysis Topic # 16.30/31 Feedback Control Systems Analysis of Nonlinear Systems Lyapunov Stability Analysis Fall 010 16.30/31 Lyapunov Stability Analysis Very general method to prove (or disprove) stability of

More information

Nonlinear Model Predictive Control Tools (NMPC Tools)

Nonlinear Model Predictive Control Tools (NMPC Tools) Nonlinear Model Predictive Control Tools (NMPC Tools) Rishi Amrit, James B. Rawlings April 5, 2008 1 Formulation We consider a control system composed of three parts([2]). Estimator Target calculator Regulator

More information

Riccati difference equations to non linear extended Kalman filter constraints

Riccati difference equations to non linear extended Kalman filter constraints International Journal of Scientific & Engineering Research Volume 3, Issue 12, December-2012 1 Riccati difference equations to non linear extended Kalman filter constraints Abstract Elizabeth.S 1 & Jothilakshmi.R

More information

Improved MPC Design based on Saturating Control Laws

Improved MPC Design based on Saturating Control Laws Improved MPC Design based on Saturating Control Laws D.Limon 1, J.M.Gomes da Silva Jr. 2, T.Alamo 1 and E.F.Camacho 1 1. Dpto. de Ingenieria de Sistemas y Automática. Universidad de Sevilla, Camino de

More information

MPC for tracking periodic reference signals

MPC for tracking periodic reference signals MPC for tracking periodic reference signals D. Limon T. Alamo D.Muñoz de la Peña M.N. Zeilinger C.N. Jones M. Pereira Departamento de Ingeniería de Sistemas y Automática, Escuela Superior de Ingenieros,

More information

State Regulator. Advanced Control. design of controllers using pole placement and LQ design rules

State Regulator. Advanced Control. design of controllers using pole placement and LQ design rules Advanced Control State Regulator Scope design of controllers using pole placement and LQ design rules Keywords pole placement, optimal control, LQ regulator, weighting matrixes Prerequisites Contact state

More information

Equivalence of dynamical systems by bisimulation

Equivalence of dynamical systems by bisimulation Equivalence of dynamical systems by bisimulation Arjan van der Schaft Department of Applied Mathematics, University of Twente P.O. Box 217, 75 AE Enschede, The Netherlands Phone +31-53-4893449, Fax +31-53-48938

More information

On the Inherent Robustness of Suboptimal Model Predictive Control

On the Inherent Robustness of Suboptimal Model Predictive Control On the Inherent Robustness of Suboptimal Model Predictive Control James B. Rawlings, Gabriele Pannocchia, Stephen J. Wright, and Cuyler N. Bates Department of Chemical and Biological Engineering and Computer

More information

Robotics. Control Theory. Marc Toussaint U Stuttgart

Robotics. Control Theory. Marc Toussaint U Stuttgart Robotics Control Theory Topics in control theory, optimal control, HJB equation, infinite horizon case, Linear-Quadratic optimal control, Riccati equations (differential, algebraic, discrete-time), controllability,

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo January 29, 2012 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Linear-Quadratic Optimal Control: Full-State Feedback

Linear-Quadratic Optimal Control: Full-State Feedback Chapter 4 Linear-Quadratic Optimal Control: Full-State Feedback 1 Linear quadratic optimization is a basic method for designing controllers for linear (and often nonlinear) dynamical systems and is actually

More information

Robust Stability. Robust stability against time-invariant and time-varying uncertainties. Parameter dependent Lyapunov functions

Robust Stability. Robust stability against time-invariant and time-varying uncertainties. Parameter dependent Lyapunov functions Robust Stability Robust stability against time-invariant and time-varying uncertainties Parameter dependent Lyapunov functions Semi-infinite LMI problems From nominal to robust performance 1/24 Time-Invariant

More information

A Novel Integral-Based Event Triggering Control for Linear Time-Invariant Systems

A Novel Integral-Based Event Triggering Control for Linear Time-Invariant Systems 53rd IEEE Conference on Decision and Control December 15-17, 2014. Los Angeles, California, USA A Novel Integral-Based Event Triggering Control for Linear Time-Invariant Systems Seyed Hossein Mousavi 1,

More information

Review of Controllability Results of Dynamical System

Review of Controllability Results of Dynamical System IOSR Journal of Mathematics (IOSR-JM) e-issn: 2278-5728, p-issn: 2319-765X. Volume 13, Issue 4 Ver. II (Jul. Aug. 2017), PP 01-05 www.iosrjournals.org Review of Controllability Results of Dynamical System

More information

Automatic Control Systems theory overview (discrete time systems)

Automatic Control Systems theory overview (discrete time systems) Automatic Control Systems theory overview (discrete time systems) Prof. Luca Bascetta (luca.bascetta@polimi.it) Politecnico di Milano Dipartimento di Elettronica, Informazione e Bioingegneria Motivations

More information

On the Stabilization of Neutrally Stable Linear Discrete Time Systems

On the Stabilization of Neutrally Stable Linear Discrete Time Systems TWCCC Texas Wisconsin California Control Consortium Technical report number 2017 01 On the Stabilization of Neutrally Stable Linear Discrete Time Systems Travis J. Arnold and James B. Rawlings Department

More information

NOWADAYS, many control applications have some control

NOWADAYS, many control applications have some control 1650 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 49, NO 10, OCTOBER 2004 Input Output Stability Properties of Networked Control Systems D Nešić, Senior Member, IEEE, A R Teel, Fellow, IEEE Abstract Results

More information

Global stabilization of feedforward systems with exponentially unstable Jacobian linearization

Global stabilization of feedforward systems with exponentially unstable Jacobian linearization Global stabilization of feedforward systems with exponentially unstable Jacobian linearization F Grognard, R Sepulchre, G Bastin Center for Systems Engineering and Applied Mechanics Université catholique

More information

Stable Hierarchical Model Predictive Control Using an Inner Loop Reference Model

Stable Hierarchical Model Predictive Control Using an Inner Loop Reference Model Stable Hierarchical Model Predictive Control Using an Inner Loop Reference Model Chris Vermillion Amor Menezes Ilya Kolmanovsky Altaeros Energies, Cambridge, MA 02140 (e-mail: chris.vermillion@altaerosenergies.com)

More information

ASIGNIFICANT research effort has been devoted to the. Optimal State Estimation for Stochastic Systems: An Information Theoretic Approach

ASIGNIFICANT research effort has been devoted to the. Optimal State Estimation for Stochastic Systems: An Information Theoretic Approach IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 42, NO 6, JUNE 1997 771 Optimal State Estimation for Stochastic Systems: An Information Theoretic Approach Xiangbo Feng, Kenneth A Loparo, Senior Member, IEEE,

More information

Graph and Controller Design for Disturbance Attenuation in Consensus Networks

Graph and Controller Design for Disturbance Attenuation in Consensus Networks 203 3th International Conference on Control, Automation and Systems (ICCAS 203) Oct. 20-23, 203 in Kimdaejung Convention Center, Gwangju, Korea Graph and Controller Design for Disturbance Attenuation in

More information

Common Knowledge and Sequential Team Problems

Common Knowledge and Sequential Team Problems Common Knowledge and Sequential Team Problems Authors: Ashutosh Nayyar and Demosthenis Teneketzis Computer Engineering Technical Report Number CENG-2018-02 Ming Hsieh Department of Electrical Engineering

More information

Necessary and Sufficient Conditions for Reachability on a Simplex

Necessary and Sufficient Conditions for Reachability on a Simplex Necessary and Sufficient Conditions for Reachability on a Simplex Bartek Roszak a, Mireille E. Broucke a a Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto,

More information

Lyapunov Stability Theory

Lyapunov Stability Theory Lyapunov Stability Theory Peter Al Hokayem and Eduardo Gallestey March 16, 2015 1 Introduction In this lecture we consider the stability of equilibrium points of autonomous nonlinear systems, both in continuous

More information

Hybrid Systems - Lecture n. 3 Lyapunov stability

Hybrid Systems - Lecture n. 3 Lyapunov stability OUTLINE Focus: stability of equilibrium point Hybrid Systems - Lecture n. 3 Lyapunov stability Maria Prandini DEI - Politecnico di Milano E-mail: prandini@elet.polimi.it continuous systems decribed by

More information

Event-triggered second-moment stabilization of linear systems under packet drops

Event-triggered second-moment stabilization of linear systems under packet drops Event-triggered second-moment stabilization of linear systems under pacet drops Pavanumar Tallapragada Massimo Franceschetti Jorge Cortés Abstract This paper deals with the stabilization of linear systems

More information

Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE

Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 55, NO. 9, SEPTEMBER 2010 1987 Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE Abstract

More information

Chapter 2 Optimal Control Problem

Chapter 2 Optimal Control Problem Chapter 2 Optimal Control Problem Optimal control of any process can be achieved either in open or closed loop. In the following two chapters we concentrate mainly on the first class. The first chapter

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

GLOBAL ANALYSIS OF PIECEWISE LINEAR SYSTEMS USING IMPACT MAPS AND QUADRATIC SURFACE LYAPUNOV FUNCTIONS

GLOBAL ANALYSIS OF PIECEWISE LINEAR SYSTEMS USING IMPACT MAPS AND QUADRATIC SURFACE LYAPUNOV FUNCTIONS GLOBAL ANALYSIS OF PIECEWISE LINEAR SYSTEMS USING IMPACT MAPS AND QUADRATIC SURFACE LYAPUNOV FUNCTIONS Jorge M. Gonçalves, Alexandre Megretski y, Munther A. Dahleh y California Institute of Technology

More information

Lebesgue Measure on R n

Lebesgue Measure on R n CHAPTER 2 Lebesgue Measure on R n Our goal is to construct a notion of the volume, or Lebesgue measure, of rather general subsets of R n that reduces to the usual volume of elementary geometrical sets

More information

On Linear Copositive Lyapunov Functions and the Stability of Switched Positive Linear Systems

On Linear Copositive Lyapunov Functions and the Stability of Switched Positive Linear Systems 1 On Linear Copositive Lyapunov Functions and the Stability of Switched Positive Linear Systems O. Mason and R. Shorten Abstract We consider the problem of common linear copositive function existence for

More information

A Control Methodology for Constrained Linear Systems Based on Positive Invariance of Polyhedra

A Control Methodology for Constrained Linear Systems Based on Positive Invariance of Polyhedra A Control Methodology for Constrained Linear Systems Based on Positive Invariance of Polyhedra Jean-Claude HENNET LAAS-CNRS Toulouse, France Co-workers: Marina VASSILAKI University of Patras, GREECE Jean-Paul

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Vector Spaces. Addition : R n R n R n Scalar multiplication : R R n R n.

Vector Spaces. Addition : R n R n R n Scalar multiplication : R R n R n. Vector Spaces Definition: The usual addition and scalar multiplication of n-tuples x = (x 1,..., x n ) R n (also called vectors) are the addition and scalar multiplication operations defined component-wise:

More information

Here represents the impulse (or delta) function. is an diagonal matrix of intensities, and is an diagonal matrix of intensities.

Here represents the impulse (or delta) function. is an diagonal matrix of intensities, and is an diagonal matrix of intensities. 19 KALMAN FILTER 19.1 Introduction In the previous section, we derived the linear quadratic regulator as an optimal solution for the fullstate feedback control problem. The inherent assumption was that

More information

Copyrighted Material. 1.1 Large-Scale Interconnected Dynamical Systems

Copyrighted Material. 1.1 Large-Scale Interconnected Dynamical Systems Chapter One Introduction 1.1 Large-Scale Interconnected Dynamical Systems Modern complex dynamical systems 1 are highly interconnected and mutually interdependent, both physically and through a multitude

More information

Global Analysis of Piecewise Linear Systems Using Impact Maps and Quadratic Surface Lyapunov Functions

Global Analysis of Piecewise Linear Systems Using Impact Maps and Quadratic Surface Lyapunov Functions Global Analysis of Piecewise Linear Systems Using Impact Maps and Quadratic Surface Lyapunov Functions Jorge M. Gonçalves, Alexandre Megretski, Munther A. Dahleh Department of EECS, Room 35-41 MIT, Cambridge,

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

Sensor-fault Tolerance using Robust MPC with Set-based State Estimation and Active Fault Isolation

Sensor-fault Tolerance using Robust MPC with Set-based State Estimation and Active Fault Isolation Sensor-ault Tolerance using Robust MPC with Set-based State Estimation and Active Fault Isolation Feng Xu, Sorin Olaru, Senior Member IEEE, Vicenç Puig, Carlos Ocampo-Martinez, Senior Member IEEE and Silviu-Iulian

More information