Nonlinear reference tracking: An economic model predictive control perspective


1 1 Nonlinear reference tracking: An economic model predictive control perspective Johannes Köhler, Matthias A. Müller, Frank Allgöwer Abstract In this paper, we study the system theoretic properties of a reference tracking Model Predictive Control (MPC) scheme for general reference trajectories and nonlinear discretetime systems subject to input and state constraints. Contrary to other existing theoretical results for reference tracking MPC, we do not modify the optimization problem with artificial references or terminal ingredients. Instead, we consider a simple, intuitive implementation and derive theoretical guarantees under a stabilizability condition on the system and suitable conditions on the reference trajectory. We provide sufficient conditions for exponential reference tracking for reachable reference trajectories. In addition, we study unreachable reference trajectories and derive sufficient conditions to ensure practical stability of the unknown optimal reachable reference trajectory. For both cases, we characterize the region of attraction. The results build on the notion of incremental stabilizability, strict dissipativity, and results in (Economic) MPC without terminal constraints. Index Terms Nonlinear Model Predictive Control, Constrained Control, Reference Tracking, Incremental Stability. I. INTRODUCTION Model Predictive Control (MPC) [1] is a well established control method that computes the control input by repeatedly solving an optimization problem online. The main advantages of MPC are the ability to cope with general nonlinear dynamics, hard state and input constraints, and general objective functions. Most of the existing theoretical results for MPC are available for the objective of setpoint stabilization and can be classified into schemes with terminal constraints [2], [3] and without terminal constraints [4], [5], [6]. Many applications require a more general tracking of dynamic reference trajectories, for example in the context of output regulation [7, Chapt. 8]. Correspondingly, there is a research effort to extend results for setpoint stabilization to reference tracking. Previous results in this direction are mostly focused on modifying the optimization problem with terminal constraints and/or artificial references in order to guarantee stability. In [8], [9] terminal sets and terminal costs are designed around a specific reference trajectory and asymptotic stability of the reference trajectory is guaranteed, which is an extension of [3] to the reference tracking case. In [10], [11] an artificial Johannes Köhler, Matthias A. Müller, and Frank Allgöwer are with the Institute for Systems Theory and Automatic Control, University of Stuttgart,70550 Stuttgart, Germany. ( {johannes.koehler, matthias.mueller, frank.allgower}@ist.uni-stuttgart.de) Johannes Köhler would like to thank the German Research Foundation (DFG) for financial support of the project within the International Research Training Group Soft Tissue Robotics (GRK 2198/1). Matthias A. Müller is indebted to the Baden-Württemberg Stiftung for the financial support of this research project by the Eliteprogramme for Postdocs. reference trajectory is optimized and stabilized with terminal constraints in order to deal with what we refer later to as unreachable reference trajectories. These schemes can ensure convergence to the optimal 1 reachable artificial reference trajectory. The limitation of these approaches is mainly due to the artificial reference trajectories. 
Namely, in order to guarantee convergence to the optimal reference, we have to assume that this reference is contained in the set of considered artificial reference trajectories (e.g. periodic with specified period), which can lead to very long artificial reference trajectories. Furthermore, computing a periodic reference trajectory for nonlinear systems and enforcing a terminal equality constraint on the system trajectory is computationally difficult. In [11] the more general case of regulating an output is considered. These results require the usage of artificial reference trajectories and terminal constraints to obtain a state reference trajectory and ensure recursive feasibility. In [12], [13] the issue of unpredictably changing reference trajectories is considered. The corresponding procedures either rely on robust control invariant sets or terminal constraints with artificial reference trajectories. Contrary to the above mentioned results, we consider a simple reference tracking formulation without terminal constraints and derive sufficient conditions including the system dynamics and the reference trajectory to guarantee desirable closedloop properties. This approach requires no complex design procedures or additional optimization variables. If it is possible to compute terminal ingredients for a given reference trajectory, the scheme presented in this paper is also able to ensure asymptotic tracking without complex design procedures. This result is an extension of setpoint stabilization results in [5], [14], [15]. If the reference trajectory cannot be reached, there exists a different trajectory that can be reached and that is as close as possible to the original reference trajectory. The proposed approach can ensure practical asymptotic stability of this unknown optimal reachable reference trajectory, without explicitly computing it. This result is based on techniques from Economic MPC without terminal constraints [16], [17]. We infer suitable conditions on the system dynamics, constraints, and reference trajectory, such that the desired closed-loop behavior can be guaranteed for general classes of reference trajectories. This implies that the proposed scheme is applicable also if the reference is not (completely) known a priori or changes online. Together with the fact that no complex design procedure is required, this makes the scheme 1 Optimal with respect to the original unreachable reference trajectory.

2 2 suitable for practical implementations. A preliminary version of the results in Section III can be found in the conference proceedings [18]. The paper is structured as follows: Section II presents the reference tracking MPC scheme and discusses the fundamental local incremental stabilizability property. Section III studies reachable reference trajectories and gives sufficient conditions for exponential stability of the tracking error. Section IV considers the case of unreachable reference trajectories and uses Economic MPC arguments to establish practical tracking of the unknown optimal reachable reference trajectory. Section V discusses the special case of linear systems. Section VI illustrates the results with numerical examples. Section VII concludes the paper. II. REFERENCE TRACKING MODEL PREDICTIVE CONTROL A. Notation The quadratic norm with respect to a positive definite matrix Q = Q is denoted by x 2 Q = x Qx. The minimal and maximal eigenvalue of a symmetric matrix Q = Q is denoted by λ min (Q) and λ max (Q), respectively. The positive real numbers are denoted by R >0 = {r R r > 0}, R 0 = R >0 {0}. By K we denote the class of functions α : R 0 R 0, which are continuous, strictly increasing and satisfy α(0) = 0. We denote the class of functions δ : R 0 R 0, which are continuous and decreasing with δ(k) = 0 by L. lim k B. Setup We consider the following nonlinear discrete-time system x(t + 1) = f(x(t), u(t)) (1) with the state x R n, control input u R m, and time step t N. We impose point-wise in time constraints on the state and input (x(t), u(t)) Z = X U, (2) for some compact set 2 Z. Given a reference trajectory (x ref, u ref ), we define the reference tracking stage cost l(x, u, t) = x x ref (t) 2 Q + u u ref (t) 2 R, (3) with a positive definite matrix Q = Q R n n and a positive semi-definite matrix R = R R m m. Remark 1. This formulation includes R = 0 as a possible weighting matrix. In this case the input reference u ref does not need to be known. Remark 2. In principle, it is possible to consider nonquadratic stage cost functions, which however makes the following derivations more involved and conservative. In case a system is not quadratically stabilizable [19], it might be possible to make a nonlinear state transformation resulting in a quadratically stabilizable system (see for example nonlinear systems in normal form [7]). 2 The following derivations can be extended to time-varying constraint sets Z(t) and time-varying dynamics f(x, u, t). The control goal is to achieve constraint satisfaction (x(t), u(t)) Z, t 0 and stabilize the reference trajectory for a (preferably large) set of initial conditions, called the region of attraction. We define the open-loop cost J N of an input sequence u( t) R m N at time step t as J N (x(t), t, u( t)) = x(0 t) = x(t), l(x(k t), u(k t), t + k), x(k + 1 t) = f(x(k t), u(k t)), with the current measured state x(t). The proposed reference tracking MPC scheme is characterized by the solution of the following optimization problem V N (x(t), t) := min u( t) J N(x(t), t, u( t)) (4) s.t. x(k t) X, u(k t) U. The solution to this optimization problem is the value function V N, and optimal state and input trajectories (x ( t), u ( t)). We define V N (x(t), t) =, in case there exists no feasible solution. We denote the resulting MPC feedback by u(t) = u (0 t). The resulting closed-loop system is given by x(t + 1) = f(x(t), u(t)) = x (1 t), t 0. (5) We define the tracking error e(t) := x(t) x ref (t). 
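To make the scheme concrete, the following is a minimal sketch of the optimization problem (4) and the receding-horizon implementation (5) in Python, using simple single shooting and a general-purpose solver. The dynamics f, the dimensions, the weights, and the reference below are hypothetical placeholders (not the examples of Section VI), and the state constraint set X is omitted for brevity; it could be handled, e.g., via penalties or a dedicated NLP solver.

```python
# Minimal single-shooting sketch of the tracking MPC problem (4); the dynamics
# f, the horizon N, the weights Q, R and the reference are placeholders, not
# taken from the paper's examples. State constraints X are omitted for brevity.
import numpy as np
from scipy.optimize import minimize

n, m, N = 2, 1, 15                      # state/input dimension, horizon
Q, R = np.eye(n), 0.1 * np.eye(m)       # tracking weights (Q > 0, R >= 0)
u_lb, u_ub = -1.0, 1.0                  # input constraint set U = [-1, 1]

def f(x, u):
    # placeholder nonlinear dynamics x(t+1) = f(x(t), u(t))
    return np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * (u[0] - np.sin(x[0]))])

def open_loop_cost(u_flat, x0, t, x_ref, u_ref):
    # J_N from (4): simulate the model and sum the stage costs l(x, u, t+k)
    u_seq = u_flat.reshape(N, m)
    x, cost = x0.copy(), 0.0
    for k in range(N):
        ex, eu = x - x_ref[t + k], u_seq[k] - u_ref[t + k]
        cost += ex @ Q @ ex + eu @ R @ eu
        x = f(x, u_seq[k])
    return cost

def mpc_step(x0, t, x_ref, u_ref):
    # solve (4) approximately and return the first input u*(0|t)
    res = minimize(open_loop_cost, np.zeros(N * m), args=(x0, t, x_ref, u_ref),
                   bounds=[(u_lb, u_ub)] * (N * m), method="L-BFGS-B")
    return res.x.reshape(N, m)[0]

# closed loop (5): apply the first input and re-solve at every time step
T_sim = 30
x_ref = np.zeros((T_sim + N, n)); u_ref = np.zeros((T_sim + N, m))
x = np.array([0.5, 0.0])
for t in range(T_sim):
    u = mpc_step(x, t, x_ref, u_ref)
    x = f(x, u)
```

Single shooting is used here only for compactness; a multiple-shooting or collocation transcription is usually preferable numerically.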
The closed-loop error dynamics can be expressed as a time-varying system $e(t+1) = f(e(t) + x_{\mathrm{ref}}(t), u(t)) - x_{\mathrm{ref}}(t+1) =: g(e(t), t)$, $t \geq 0$. (6) Achieving (uniform) asymptotic stability of the error dynamics (6) is equivalent to stabilizing the reference trajectory. C. Local incremental stabilizability In order to provide theoretical guarantees for tracking of generic reference trajectories, we assume that the system possesses a local stabilizability property. Since we are interested in the stability of a trajectory, we frame this assumption in the context of Lyapunov incremental stability [21], [22], [23] (a related system property is contractivity and correspondingly universal stabilizability, compare [20]). Assumption 1. There exist a control law $\kappa: X \times Z \to \mathbb{R}^m$, a $\delta$-Lyapunov function $V_\delta: X \times Z \to \mathbb{R}_{\geq 0}$ that is continuous in the first argument and satisfies $V_\delta(z, z, v) = 0$ for all $(z, v) \in Z$, and parameters $c_{\delta,l}, c_{\delta,u}, \delta_{\mathrm{loc}}, k_{\max} \in \mathbb{R}_{>0}$, $\rho \in (0, 1)$, such that the following properties hold for all $(x, z, v) \in X \times Z$, $(z^+, v^+) \in Z$ with $V_\delta(x, z, v) \leq \delta_{\mathrm{loc}}$: $c_{\delta,l} \|x - z\|^2 \leq V_\delta(x, z, v) \leq c_{\delta,u} \|x - z\|^2$, (7) $\|\kappa(x, z, v) - v\| \leq k_{\max} \|x - z\|$, (8) $V_\delta(x^+, z^+, v^+) \leq \rho V_\delta(x, z, v)$, (9) with $x^+ := f(x, \kappa(x, z, v))$, $z^+ := f(z, v)$. This assumption is comparable to local (exponential) stabilizability around arbitrary reference trajectories with a control

3 3 law κ. The parameter k max is a local Lipschitz bound on the nonlinear stabilizing control law κ. To ensure asymptotic tracking for a specific reference trajectory, it suffices if this assumption is satisfied locally around the reference trajectory. Assuming this property on the full constraint set Z enables us to provide guarantees for generic classes of reference trajectories. Note that the controller κ and the δ-lyapunov function V δ are not required for the implementation of the proposed MPC scheme, but are only used for the analysis. If this assumption is satisfied with κ(x, z, v) = v, the system is (locally) incrementally stable [21], [22], [23]. D. Stabilizable linearization implies local incremental stabilizability In the following, we provide sufficient conditions for Assumption 1 to be satisfied based on the linearization of the nonlinear system (1). Assuming that f is twice differentiable, the first order Taylor-approximation of f around any point r = (z, v) Z can be written as f(z + x, v + u) (10) [ ] [ ] f f =f(z, v) + x + u + φ r ( x, u), x (z,v) u (z,v) }{{}}{{} =:A r =:B r with the remainder term φ r ( x, u) T ( x 2 + u 2 ). (11) Assumption 2. For any point r = (z, v) Z, the pair (A r, B r ) is stabilizable, i.e. there exist a matrix K r R m n and positive definite matrices P r, Q r R n n continuous in r, such that P r (A r + B r K r ) P r (A r + B r K r ) = Q r. (12) Furthermore, there exists a constant ɛ R >0, such that for any point r + = (z +, v + ) Z with z + = f(z, v), the corresponding matrix P r + satisfies the inequality λ max (Pr 1 P r +)Q r (λ max (Pr 1 P r +) 1)P r + ɛi. (13) Inequality (13) bounds the amount by which the Lyapunov function can change in any time step. A special case of this assumption is P r = P, in which case (13) is satisfied with ɛ = min r Z λ min (Q r ). Similar conditions are used for the stabilization of linear parameter varying systems with gain scheduling [24]. Proposition 1. Let Assumption 2 hold. Assume that f is twice continuously differentiable on Z. Then Assumption 1 is satisfied with V δ (x, z, v) = x z 2 P r, κ(x, z, v) = v + K r (x z), with K r, P r based on the point r = (z, v). Proof. Given the assumed continuity in r and compactness 4 of Z, we can define c δ,u := max max(p r ), c δ,l := min min(p r ), r Z r Z (14) k max := max r. r Z (15) Abbreviate x = x z, u = K r x, and r + with the corresponding matrix P r + according to Assumption 2. The remainder term φ r satisfies φ r ( x, u) (11),(15) Given (10), we obtain x + = f(z + x, v + K r x) T (1 + kmax) 2 }{{} =:T max x 2. (16) =f(z, v) + (A r + B r K r ) x + φ r ( x, K r x). The incremental Lyapunov function satisfies V δ (x +, z +, v + ) = (A r + B r K r ) x + φ r ( x, K r x) 2 P r + (A r + B r K r ) x 2 P r + + φ r( x, K r x) 2 P r (A r + B r K r ) x Pr + φ r ( x, K x) Pr +. With Assumption 2 we have We set which implies (A r + B r K r ) x 2 P r + λ max (Pr 1 P r +) (A r + B r K r ) x 2 P r (12) = λ max (Pr 1 P r +)( x 2 P r x 2 Q r ) = x 2 P r + (λ max (Pr 1 P r +) 1) x 2 P r λ max (Pr 1 P r +) x 2 Q r (13) x 2 P r ɛ x 2. x 2 P r ɛ (14) x 2 ρ := 1 1 ɛ (0, 1), 2 c δ,u ( 1 ɛ 2 1 c δ,u ) x 2 P r = ρ x 2 P r. Thus the incremental Lyapunov function satisfies V δ (x +, z +, v + ) ρ x 2 P r ɛ 2 x 2 + φ r ( x, K r x) 2 P r (A r + B r K r ) x Pr + φ r ( x, K x) Pr + ρ x 2 P r ɛ 2 x 2 + c δ,u (2T max x 3 + T 2 max x 4 ). 
Setting $\delta_{\mathrm{loc}} := c_{\delta,l} \min\left\{ \left(\frac{\epsilon}{8 c_{\delta,u} T_{\max}}\right)^2, \frac{\epsilon}{4 c_{\delta,u} T_{\max}^2} \right\}$ (for noncompact constraint sets $Z$, this derivation requires a finite bound on the infimum and supremum of $P_r$ and the supremum of $K_r$),

4 4 V δ (x, z, v) δ loc implies x implies δloc c δ,l, which in turn ɛ x 2 4c δ,u T max x 3 + 2c δ,u T 2 max x 4. Thus we have shown, that V δ (x, z, v) δ loc implies contractiveness of V δ and thus satisfaction of Assumption 1. This result shows how Assumption 1 can be satisfied for general nonlinear systems. The resulting bound on δ loc tends to be conservative, similar to the size of the terminal region in [3]. Remark 3. Proposition 1 provides sufficient conditions for local incremental stabilizability based on the linearization of the system dynamics. This is contrary to the standard notion of incremental stability [21], [22], [23], which cannot be directly concluded based on properties of the linearization. Analogous to [22, Prop. 3.4] it is possible to enlarge the local region of attraction δ loc by tracking an additional trajectory, which is equivalent to using a dynamic feedback. III. REACHABLE REFERENCES AND EXPONENTIAL TRACKING This section considers reachable reference trajectories and establishes sufficient conditions for exponential reference tracking. In Proposition 2, a local bound on the value function V N is derived, which is used in Theorem 2 to establish closedloop stability. This result is an extension and variation of setpoint stabilization results in [14] and hence our results are also of relevance in this context. A. Reachable reference trajectories We first need an assumption to characterize the reachability of a given reference trajectory. Assumption 3. The reference trajectory (x ref, u ref ) is such that x ref (t + 1) = f(x ref (t), u ref (t)). Furthermore, V δ (x(t), x ref (t), u ref (t)) δ ref implies x(t) X, κ(x(t), x ref (t), u ref (t)) U, with V δ, κ from Assumption 1 and some δ ref R >0. If the reference trajectory can be tracked and lies strictly in the constraint set, this Assumption is satisfied. This is similar to using tighter constraints for the reference trajectory (see for example tube-based robust MPC [25]). The following proposition shows that for such references there exists a suitable local upper bound on the value function V N. Proposition 2. Suppose that Assumptions 1 and 3 are satisfied. There exist constants γ, c R >0, such that for all N 2 and all e(t) 2 Q c, we have V N (x(t), t) γ e(t) 2 Q. (17) Proof. The idea of this proposition is that the controller κ in Assumption 1 can exponentially stabilize the reference trajectory. Using the conditions on the reference trajectory in Assumption 3, this controller also satisfies the state and input constraints. Part I: Show that applying the local controller in Assumption 1 is a feasible input sequence: The candidate input sequence is given by u(k t) =κ(x(k t), x ref (t + k), u ref (t + k)), x(k + 1 t) =f(x(k t), u(k t)), e(k t) =x(k t) x ref (t + k). x(0 t) = x(t), Due to the assumed continuity of V δ and (7), e(t) 2 Q c, with a small enough c implies V δ (x(t), x ref (t), u ref (t)) c δ,u e(t) 2 c δ,u λ min (Q) e(t) 2 Q. Thus, setting δ := min{δ loc, δ ref }, c := λ min (Q)δ/c δ,u, (18) e(t) 2 Q c implies V δ(x(t), x ref (t), u ref (t)) δ δ loc. Using the contractivity (9) recursively, we get V δ (x(k t), x ref (t + k), u ref (t + k)) ρ k δ δ δ ref. This implies (x(k t), u(k t)) X U using Assumption 3. Part II: Show that this candidate controller ensures V N (x(t), t) γ e(t) 2 Q. Since u is a feasible solution to (4), we have V N (x(t), t) J N (x(t), t, u( t)). Given the contractivity (Ass. 
1) and the bounds on $V_\delta$, we have $c_{\delta,l} \|e(k|t)\|^2 \leq V_\delta(x(k|t), x_{\mathrm{ref}}(t+k), u_{\mathrm{ref}}(t+k)) \leq \rho^k c_{\delta,u} \|e(t)\|^2$, and hence $\|e(k|t)\|_Q^2 \leq \frac{\lambda_{\max}(Q) c_{\delta,u}}{\lambda_{\min}(Q) c_{\delta,l}} \rho^k \|e(t)\|_Q^2$. Similarly, we have $\|u(k|t) - u_{\mathrm{ref}}(t+k)\|_R^2 \leq \lambda_{\max}(R) k_{\max}^2 \|e(k|t)\|^2 \leq \underbrace{\lambda_{\max}(R) k_{\max}^2}_{=:R_{\max}} \frac{c_{\delta,u}}{\lambda_{\min}(Q) c_{\delta,l}} \rho^k \|e(t)\|_Q^2$. (19) Correspondingly, we get $J_N(x(t), t, u(\cdot|t)) = \sum_{k=0}^{N-1} l(x(k|t), u(k|t), t+k) \leq \gamma \|e(t)\|_Q^2$, with $\gamma := \frac{(\lambda_{\max}(Q) + R_{\max}) c_{\delta,u}}{\lambda_{\min}(Q) c_{\delta,l}} \frac{1}{1-\rho}$. (20) In conclusion, we have constructed a feasible input sequence $u(\cdot|t)$, which exponentially stabilizes the reference trajectory, resulting in the local upper bound on the value function (17). This proposition shows that we can locally bound $V_N$ based on assumptions on the reference trajectory and the system dynamics. Similar results for setpoint stabilization can be found in [5], [26, Sec. 6.2]. In [14] a property similar to (17) is assumed to study setpoint stabilization for MPC without terminal constraints.
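To illustrate how the constants of Proposition 2 are composed, a small helper that evaluates $c$ from (18) and $\gamma$ from (20) could look as follows; all numerical values in the example call are made up and merely indicate how the bound deteriorates as $\rho \to 1$ or as the ratio $c_{\delta,u}/c_{\delta,l}$ grows.

```python
# Helper illustrating the constants in Proposition 2: c from (18) and gamma
# from (20). All numerical inputs are hypothetical placeholders; in practice
# c_dl, c_du, rho, k_max come from Assumption 1 and delta_loc, delta_ref from
# Assumptions 1 and 3.
import numpy as np

def local_bound_constants(Q, R, c_dl, c_du, rho, k_max, delta_loc, delta_ref):
    lam_min_Q, lam_max_Q = np.linalg.eigvalsh(Q)[[0, -1]]
    lam_max_R = np.linalg.eigvalsh(R)[-1]
    delta = min(delta_loc, delta_ref)
    c = lam_min_Q * delta / c_du                                  # (18)
    R_max = lam_max_R * k_max**2                                  # cf. (19)
    gamma = (lam_max_Q + R_max) * c_du / (lam_min_Q * c_dl) / (1.0 - rho)  # (20)
    return c, gamma

# example call with made-up numbers
c, gamma = local_bound_constants(Q=np.eye(2), R=0.1 * np.eye(1), c_dl=1.0,
                                 c_du=5.0, rho=0.9, k_max=2.0,
                                 delta_loc=0.05, delta_ref=0.1)
```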

5 5 B. Local stability The following theorem establishes local stability of the reference trajectory under the closed-loop system. Theorem 1. Let Assumptions 1 and 3 hold. Assume that the initial state x(0) satisfies V N (x(0), 0) cγ, with c, γ according to Proposition 2. There exists an N 0, such that for all N > N 0, there exists a constant α N R >0, such that the closed-loop system (5) satisfies e(t) 2 Q V N (x(t), t) γ e(t) 2 Q, V N (x(t + 1), t + 1) V N (x(t), t) α N l(x(t), u(t), t), for all t 0. The origin e = 0 is uniformly exponentially stable under the closed-loop error dynamics (6). Proof. The proof is closely related to the stability proof of MPC without terminal constraints [5, Variant 2] and split into three parts. Part I and II show the desired properties at time t 0, assuming V N (x(t), t) γc. Part III establishes that V N (x(t), t) γc holds recursively. Abbreviate l(k t) := l(x (k t), u (k t), t + k) and e (k t) := x (k t) x ref (t + k). Part I: A lower bound on V N (x(t), t) is given by V N (x(t), t) l(x(t), u(t), t) e(t) 2 Q. Proposition 2 in combination with V N (x(t), t) γc implies the upper bound V N (x(t), t) γ e(t) 2 Q. To see this we make a case distinction, whether e(t) 2 Q c or not. If e(t) 2 Q c the assertion directly follows from Proposition 2. Else if e(t) 2 Q > c, we have V N(x(t), t) cγ γ e(t) 2 Q. Part II: Show that V N decreases for large enough N: For any k x {0,..., N 1} we have V N (x(t), t) = l(k t) = k x 1 l(k t) + V N kx (x (k x t)). This implies V N kx (x (k x t)) V N (x(t), t) cγ. Thus, we know V N kx (x (k x t)) γ e (k x t) 2 Q as in Part I. Analogous to [5, Variant 2], one can show: ( ) γ 1 l(n 1 t) V N (x(t), t) (21) γ ( ) γ 1 γ l(x(t), u(t), t). (22) γ ln γ For N ln γ ln(γ 1), V N (x(t), t) γc in combination with (21) implies l(n 1 t) c. Thus, Proposition 2 ensures V 2 (x (N 1 t), t + N 1) γl(n 1 t). This implies For V N (x(t + 1), t + 1) N 2 k=1 l(k t) + V 2 (x (N 1 t), t + N 1) l(k t) l(n 1 t) + γl(n 1 t) k=1 = V N (x(t), t) l(x(t), u(t), t) + (γ 1)l(N 1 t) ) (22) (γ 1)N V N (x(t), t) (1 γ }{{ N 2 l(x(t), u(t), t). (23) } :=α N ln γ N > N 0 := 2 ln γ ln(γ 1), (24) we have α N R >0. Part III: Recursive Feasibility and Exponential Stability: Based on (23) and positive definiteness of the stage cost l, the condition V N (x(t), t) cγ is recursively satisfied and the derivations in Part I and II hold for all t 0. Thus, for any prediction horizon N > N 0 and any initial condition V N (x(0), 0) cγ, we have uniform exponential stability of the error dynamics using standard Lyapunov arguments [27]. C. Region of attraction Theorem 1 ensures local exponential stability of the tracking error e. Depending on the reference trajectory (δ ref ) and the local stabilizability (δ loc ), this result only ensures a small region of attraction, since c according to (18) is small. In the following, we demonstrate that this result can be extended to enlarge the region of attraction by increasing the prediction horizon N. Theorem 2. Let Assumptions 1 and 3 hold. For any V max R >0, there exist constants γ Vmax, N 1, such that for all N > N 1, there exists a constant α M R >0, such that for all initial conditions x(0) with V N (x(0), 0) V max, the closedloop system (5) satisfies e(t) 2 Q V N (x(t), t) γ Vmax e(t) 2 Q, V N (x(t + 1), t + 1) V N (x(t), t) α M e(t) 2 Q, for all t 0. The origin e = 0 is uniformly exponentially stable under the closed-loop error dynamics (6). Proof. Part I and II show the desired properties at time t 0, assuming V N (x(t), t) V max. 
Part III establishes that $V_N(x(t), t) \leq V_{\max}$ holds recursively. Abbreviate again $l^*(k|t) := l(x^*(k|t), u^*(k|t), t+k)$. Part I: The lower bound on $V_N(x(t), t)$ is obvious. Denote $\gamma_{V_{\max}} := \max\{\gamma, V_{\max}/c\}$ with $c, \gamma$ according to (18), (20). Proposition 2 and $V_N(x(t), t) \leq V_{\max}$ imply $V_N(x(t), t) \leq \gamma_{V_{\max}} \|e(t)\|_Q^2 \leq \gamma_{V_{\max}} l(x(t), u(t), t)$. (25) This can be shown with a case distinction based on whether $\|e(t)\|_Q^2 \leq c$ or not, analogous to Theorem 1 (compare [14], [18]).

6 6 Part II: Show that V N decreases for large enough N: Assume that the prediction horizon satisfies N M + γ Vmax 1, M N. Given V N (x(t), t) V max, there exists a k x N M, such that l(k x t) V N (x(t), t) N M + 1 V N (x(t), t) V max c. (26) γ Vmax γ Vmax Note that (25), (26) imply l(k x t) l(x(t), u(t), t). Analogous to Theorem 1 we have V N k (x (k t), t + k) γl(k t) γc for any k {k x,..., N 1}. Using [5, Variant 2] as in Theorem 1, we get ( ) M 1 γ 1 l(n 1 t) γ l(k x t) γ ( ) M 1 γ 1 γ l(x(t), u(t), t). (27) γ This yields V N (x(t + 1), t + 1) l(k t) l(n 1 t) + V 2 (x (N 1 t), t + N 1) k=1 V N (x(t), t) l(x(t), u(t), t) + (γ 1)l(N 1 t) (27) (γ 1)M V N (x(t), t) l(x(t), u(t), t) + γ M 2 l(x(t), u(t), t) ) (γ 1)M V N (x(t), t) l(x(t), u(t), t) (1 γ }{{ M 2. (28) } :=α M For ln γ N > N 1 := 2 +γ Vmax 1. (29) ln γ ln(γ 1) }{{} =N 0 we have M N 0 and thus α M R >0. Part III: Recursive Feasibility and Exponential Stability: Based on the decrease condition in Part II, we have V N (x(t), t) V max recursively satisfied and the derivations in Part I and II hold for all t 0. Uniform exponential stability of the error dynamics e follows analogous to Theorem 1. To summarize, we have shown that for any prediction horizon N > N 1 and any initial condition x(0) with V N (x(0), 0) V max, the closed-loop system exponentially stabilizes the reference trajectory. Remark 4. Theorem 2 implies the following closed-loop performance bound: K 1 lim K t=0 l(x(t), u(t), t) γ V max α M e(0) 2 Q. (30) The basic idea of this proof can be summarized as follows. For a large enough prediction horizon N, there exists a point k x, with l(k x t) c. This allows us to establish the bound V N k (x (k t), t + k) γl(k t) for all k k x based on Proposition 2 with the controller κ. Thus, the results in Theorem 1 can be used for the remainder of the trajectory to bound the cost of the appended piece, by appending the controller κ at the end. The implicitly defined candidate solution for time t + 1 is sketched in Figure 1. Fig. 1. Illustration of exponential reference tracking: reference trajectory x ref, candidate solution x( t + 1), previous optimal solution x ( t), local region of attraction e(t) 2 Q c. Remark 5. The resulting theoretical guarantees are closely related to [14], where setpoint stabilization without terminal constraints is studied based on a local stabilizability assumption around the setpoint. The method presented in [14] reframes the problem with the regional upper bound γ Vmax, i.e. V N (x(t), t) γ Vmax e(t) 2 Q on the sublevel sets V N (x(t), t) V max (see Proof Part I). Then one can proceed analogously to Theorem 1, with γ replaced by γ Vmax, resulting ln γ Vmax ln γ Vmax ln(γ Vmax 1) in N 1 = 2. This procedure is simple but conservative and in general results in larger prediction horizons N. On the other hand, we exploit the fact that we can append the open-loop trajectory with the controller κ at those points k which are close enough to the reference trajectory. In particular, we use the fact that a part of the open-loop solution lies in the local set V N k (x (k t), t + k) γc and then use standard arguments [5, Variant 2] to bound the cost of the appended piece in the candidate solution. This proof highlights some intrinsic properties and difficulties in establishing regional results based on local properties. This insight is captured in the candidate solution in Figure 1. 
If we wish to increase the region of attraction V max, we can simply increase the prediction horizon N by Vmax c compared to the local bound M N 0. This leaves the constant α M untouched and the resulting performance bound (30) grows linearly with V max. Such simple linear connections are rather hard to conclude using the method in [14]. In the limit, for V max, the bound in [14, Rk. 5] approaches N 2 Vmax c log V max. Remark 6. The resulting bound on the region of attraction V max has a certain tightness property in the following sense. Consider a system which has a local stabilizability property in the sense of Proposition 2 and we wish to estimate the largest V max, resulting in closed-loop stability. We claim that every estimate, solely based on this information, has to satisfy V max Nc in order to construct a candidate solution with the local controller. In other words, one can construct examples

7 7 that are locally controllable with Nc < V max and an initial condition V N (x(0), 0) V max that cannot be stabilized. Thus, alternative estimates on the sufficient prediction horizon N 1, which are only based on a local stabilizability property, can never be smaller than Vmax c. To recapitulate the result of Theorem 2: Assume that the system is locally stabilizable (Ass. 1). Given any reference trajectory, that is reachable (Ass. 3), there exists a large enough prediction horizon N such that the closed-loop system stabilizes the reference trajectory for all initial conditions in the region of attraction. Similarly, if the system is locally stabilizable on some subset Z stab, the same guarantees hold if the reference trajectory also lies in the interior of this region. This allows us to give guarantees on a whole set of reference trajectories, without reference trajectory specific design procedures. Remark 7. If it is possible to compute terminal sets around the reference based on a linearization as in [9], a time-varying version of Assumption 1 is satisfied with κ(x, x ref (t), u ref (t), t) = u ref (t) + K(t)(x x ref (t)), V δ (x, x ref (t), u ref (t), t) = x x ref (t) 2 P (t). Thus, the proposed MPC scheme exponentially stabilizes all reference trajectories that can be stabilized with [9], provided a sufficiently large prediction horizon N is used. The approaches in [8], [9] consider the linearization along a specific reference trajectory, leading to a linear time-varying system. The introduced notion of local incremental stabilizability on the other hand is independent of the reference trajectory and can be verified based on stabilizability of the linearization on the constraint set (Prop. 1). It should be re-emphasized that the main contribution of this section is the fact that we can obtain sufficient conditions by using basic assumptions on the reference trajectory and the system to generate a local set around the reference with the desired contractiveness properties. This is the tool that allows us to reframe the reference tracking problem in a way similar to the setpoint stabilization problem and thus to use well-established methods. In addition, this result contains an alternative (improved) stability proof for setpoint stabilization using a MPC framework without terminal constraints, based on local stabilizability properties. IV. UNREACHABLE REFERENCE TRAJECTORIES In the previous section, we established exponential tracking of reachable reference trajectories, which allows us to give generic guarantees on a class of reference trajectories. However, more realistic and interesting questions arise if we consider unreachable reference trajectories. In particular, we study unreachable reference trajectories (similar to unreachable setpoints [28]) from an Economic MPC perspective [29], [30]. In Theorem 4, we establish practical asymptotic stability of the unknown optimal reachable reference trajectory x p based on a strict dissipativity assumption. In Lemma 4, we discuss when the strict dissipativity property is satisfied, based on the notion of uniform suboptimal operation as an extension of the results in [17]. A. Unreachable reference trajectories and strict dissipativity In the following, the reference trajectory can be seen as an arbitrary bounded time-varying signal, which is not necessarily related to the system dynamics. We limit ourselves to p periodic reference trajectories with some p N and (x ref (t + p), u ref (t + p)) = (x ref (t), u ref (t)). 
Given the periodic reference trajectory x ref, u ref, we can define the best reachable p periodic trajectory as follows. Definition 1. The optimal reachable p-periodic reference trajectory (x p, u p ) is given by the solution to the following optimization problem V p,min = min u( ),x( ) p 1 l(x(k), u(k), k) (31) st. x(k + 1) = f(x(k), u(k)), x(0) = f(x(p 1), u(p 1)), x(k) X, u(k) U. Contrary to other results for optimal periodic operation [10], [31], [32], [33], for implementation of the MPC scheme we do not presume to know the period length p or the optimal reachable reference x p. This is only used as an analysis tool. For the implementation of the MPC algorithm we only require the next N steps of the reference trajectory. Due to the compactness of Z and the assumed boundedness of (x ref, u ref ), we can define c x,max := max x p(k) x ref (k), (32) k {0,...,p 1} c u,max := max u p(k) u ref (k). (33) k {0,...,p 1} Clearly, if the goal is to stay close to the reference trajectory x ref, a desirable property of the MPC would be to track the unknown reference x p. To establish such a property, we require strict constraint satisfaction of x p, similar to Assumption 3. Assumption 4. The optimal reachable reference trajectory (x p, u p ) from Definition 1 is such that V δ (x(t), x p (t), u p (t)) δ p,ref implies x(t) X, κ(x(t), x p (t), u p (t)) U, with V δ, κ from Assumption 1 and some δ p,ref R >0. There exists a function α V K, such that V δ (x(t), x p (t), u p (t)) δ p,ref and V δ (y, x p (t), u p (t)) δ p,ref imply V N (x(t), t) V N (y, t) α V ( x(t) x p (t) + y x p (t) ). The second part of this assumption requires local continuity 5 of the value function V N around the optimal reachable reference trajectory x p. Denote the stage cost of the optimal 5 A local bound on V N can be derived based on Assumption 1 and the strict constraint satisfaction of x p, compare Proposition 3. Local continuity of V N is a slightly stronger property. This can, for example, be ensured if the system is locally controllable, compare [32, Thm. 16], [16, Thm. 6.4].

8 8 reachable reference trajectory by l p (k) = l(x p (k), u p (k), k). The stage cost difference l p (k) l(x, u, k), can be positive or negative, depending on (x, u, k). Moreover, even if the system is initialized at the optimal p periodic solution, i.e. x(0) = x p (0), the solution to (4) does in general not follow this optimal trajectory, due to the finite time horizon N. Thus, we consider the notion of practical asymptotic stability. Lemma 1. ( [30, Thm 2.4][26, Thm. 2.23]) Let V be a time-varying practical Lyapunov function, with α 1 ( e(t) ) V (e(t), t) α 2 ( e(t) ), V (e(t + 1), t + 1) V (e(t), t) α 3 ( e(t) ) + θ, for all V (e(t), t) V max, V max > η := α 2 (α3 1 (θ)) + θ and α 1, α 2, α 3 K, θ R >0. Then for all initial conditions V (e(0), 0) V max, the origin e = 0 is uniformly practically asymptotically stable. The system uniformly converges to the set {(e, t) V (e, t) η}. Denote the error with respect to the optimal reachable reference by e p (t) = x(t) x p (t). We study this reference tracking problem in the Economic MPC framework of strict dissipativity 6, similar to [31]. Assumption 5. There exists a series of storage functions λ k : X R, k = 0,..., p 1, λ k = λ k+p such that for all (x, u) Z the rotated stage cost L satisfies L(x, u, k) := l(x, u, k) l p (k) + λ k (x) λ k+1 (f(x, u)) α l ( e p (k) ), (34) with α l K. Furthermore, for all x(k) X the storage functions are bounded by λ k (x(k)) γ λ ( e p (k) ), γ λ K. The bound γ λ can be seen as a continuity assumption on the storage functions λ k. For general nonlinear dynamical systems with an arbitrary reference trajectory (x ref, u ref ), computing suitable storage functions λ k is a difficult task. For our purposes, it suffices to show the existence of such storage functions, which is discussed in Lemma 4. To study stability properties, we further define the rotated value function, which plays the role of a positive definite Lyapunov function with respect to the optimal reachable reference x p. 6 In [32], [33] the rotated stage cost is positive definite with respect to an orbit instead of a specific point on the orbit (phase). This kind of distinction is not needed for reference tracking, since the phase is specified by the reference trajectory in the time-varying stage cost. A stronger dissipativity property would be L(x, u, k) α l (max{ e p(k), u u p(k) }), which is however not necessary. Definition 2. The rotated open-loop cost J N (x(t), t, u( t)) of an input sequence u( t) is given by J N (x(t), t, u( t)) = L(x(k t), u(k t), t + k) =J N (x(t), t, u( t)) + λ t (x(t)) λ t+n (x(n t)) l p (t + k). } {{ } =:c N (t) The rotated value function ṼN (x(t), t) is defined as Ṽ N (x(t), t) := min J N (x(t), t, u( t)) (35) u( t) st. x(k t) X, u(k t) U, with the minimizer ( x ( t), ũ ( t)). Note that unlike in the case with terminal constraints [33], [34], this rotated cost does in general not have the same minimizer as the original cost (u ũ ), which makes the stability proof more involved, compare [16], [30], [32]. B. Stabilizability and turnpike property In the following, we derive a turnpike property with respect to the optimal reference x p, based on local stabilizability and strict dissipativity. The following proposition bounds the openloop cost, by combing the ideas of Proposition 2 with the strict dissipativity in Assumption 5. Proposition 3. Let Assumptions 1, 4, and 5 be satisfied. 
Then there exist constants c p, γ, c max R >0 and a function α u K, such that for all e p (t) c p, and all N 0, the following bounds hold V N (x(t), t) γ e p (t) 2 Q + c N (t) + c max e p (t), Ṽ N (x(t), t) α u ( e p (t) ). Proof. Part I: Show feasibility of the local controller κ: As a candidate input, we consider the local controller κ (Ass. 1) to stabilize the optimal reachable reference x p (Def. 1). The resulting state and input sequence x( t), ũ( t) is given by ũ(k t) = κ( x(k t), x p (t + k), u p (t + k)), x(k + 1 t) = f( x(k t), ũ(k t)), ẽ p (k t) = x(k t) x p (t + k). x(0 t) = x(t), Analogous to Proposition 2, setting δ p δ p := min{δ loc, δ p,ref }, c p :=, (36) c δ,u in combination with e p (t) c p implies V δ (x(t), x p (t), u p (t)) δ p. Due to the recursive contractivity of V δ and Assumption 4, this yields ( x(k t), ũ(k t)) X U. Part II: Show that the candidate solution ũ( t) implies the

9 9 desired bound: Due to feasibility of ũ and optimality of V N and ṼN, respectively, we have Ṽ N (x(t), t) J N (x(t), t, ũ( t)) L( x(k t), ũ(k t), t + k), = V N (x(t), t) J N (x(t), t, ũ( t)) l( x(k t), ũ(k t), t + k). = The stage cost of the candidate solution satisfies l( x(k t), ũ(k t), t + k) = x(k t) x ref (t + k) 2 Q + ũ(k t) u ref (t + k) 2 R = x p (t + k) x ref (t + k) 2 Q + ẽ p (k t) 2 Q + 2(x p (t + k) x ref (t + k)) Qẽ p (k t) + u p (t + k) u ref (t + k) 2 R + ũ(k t) u p (k t) 2 R + 2(u p (t + k) u ref (t + k)) R(ũ(k t) u p (k t)) (19),(32),(33) l p (t + k) + ẽ p (k t) 2 Q + R max ẽ p (k t) 2 + 2(λ max (Q)c x,max + λ max (R)c u,max k max ) ẽ p (k t). With this, we can bound the cost associated to ũ( t): J N (x(t), t, ũ( t)) = l( x(k t), ũ(k t), t + k) ẽ p (k t) 2 Q + R max ẽ p (k t) λ max (Q)c x,max + 2λ max (R)c u,max k max ) ẽ p (k t) ẽ p (k t). Using similar arguments to Proposition 2, we get with l p (t + k) } {{ } =c N (t) J N (x(t), t, ũ( t)) γ e p (t) 2 Q + c N (t) + c max e p (t), c max := 2 λ max(q)c x,max + λ max (R)c u,max k max 1 ρ and γ according to (20). For the rotated cost, we have J N (x(t), t, ũ( t)) cδ,u =J N (x(t), t, ũ( t)) c N (t) + λ t (x(t)) λ t+n ( x(n t)) c δ,l (37) γ e p (t) 2 Q + c max e p (t) + γ λ ( e p (t) ) + γ λ ( ẽ p (N t) ). Due to the contractivity and N 0 we have ρ N c δ,u cδ,u ẽ p (N t) e p (t) e p (t). c δ,l c δ,l Correspondingly we get the bound Ṽ N (x(t), t) γ e p (t) 2 Q + c max e p (t) ( ) cδ,u + γ λ ( e p (t) ) + γ λ e p (t) c δ,l :=α u ( e p (t) ), (38) with α u K. In conclusion, we have constructed a feasible input sequence ũ( t), that exponentially stabilizes the optimal reference x p, resulting in local upper bounds on the value function V N and the rotated value function ṼN. Lemma 2. Let Assumptions 1, 4, and 5 be satisfied. There exist functions σ, σ L, such that the following properties hold for all N, M N with M N and all e p (t) c p with c p according to (36). The optimal solution to (4) contains at least M points k {0,..., N 1}, that satisfy e p(k t) σ(n M + 1). (39) The optimal solutions to (4), (35) contain at least M points k x {0,..., N 1}, that simultaneously satisfy e p(k x t) σ(n M), ẽ p(k x t) σ(n M). (40) The corresponding open-loop costs satisfy J kx (x(t), t, u ( t)) J kx (x(t), t, ũ ( t)) (41) 2γ λ ( σ(n M)) + α V (2 σ(n M)). Proof. The proof is composed of four parts. In Part I, the rotated cost of the optimal solution to (4) is bounded. In Part II, the turnpike of the optimal solution (39) is established. In Part III, the combined turnpike property (40) is derived. In Part IV, the bound (41) is shown. Part I: We first bound the rotated cost of the optimal solution to (4) using Proposition 3 and Assumption 5: J N (x(t), t, u ( t)) =V N (x(t), t) c N (t) + λ t (x(t)) λ t+n (x (N t)) γ e p (t) 2 Q + c max e p (t) + γ λ ( e p (t) ) + γ λ ( e p(n t) ) α u ( e p (t) ) + C α u (c p ) + C, with Similarly, the rotated cost (35) satisfies C := max x X γ λ( x ). (42) Ṽ N (x(t), t) J N (x(t), t, u ( t)) α u (c p ) + C. Part II: Define σ(n M + 1) := α 1 l ( ) αu (c p ) + C, σ L. (43) N M + 1 Suppose there exist N M + 1 instances k {0,..., N 1} with e (k t) > σ(n M + 1). Using Assumption 5, the rotated cost satisfies J N (x(t), t, u ( t)) = L(x (k t), u (k t), t + k) > (N M + 1)α l (σ(n M + 1)) = α u (c p ) + C.

10 10 This is a contradiction to the derived bound for J N. Thus, at least M instances satisfy e p(k t) σ(n M + 1). Part III: Similar to Part II, given any N 0 N. There exist at most N 0 instances k that satisfy e (k t) > σ(n 0 + 1) or ẽ (k t) > σ(n 0 + 1) respectively. Using arguments similar to [16, Sec. 7, (21)], there exist at least M = N 2N 0 points k x, that simultaneously satisfy with e p(k x t) σ(n M), ẽ p(k x t) σ(n M), ( σ(n M) := α 1 l 2 α ) u(c p ) + C, σ L. (44) N M Part IV: Given the derivation in Part III and Prop. 3 we have J kx (x(t), t, u ( t)) =V N (x(t), t) + λ t (x(t)) λ t+kx (x (k x t)) c kx (t) V N kx (x (k x t), t + k x ) J kx (x(t), t, ũ ( t)) + λ t (x(t)) λ t+kx (x (k x t)) c kx (t) V N kx ( x (k x t), t + k x ) V N kx (x (k x t), t + k x ) = J kx (x(t), t, ũ ( t)) λ t+kx (x (k x t)) + λ t+kx ( x (k x t)) + V N kx ( x (k x t), t + k x ) V N kx (x (k x t), t + k x ) (40) J kx (x(t), t, ũ ( t)) + 2γ λ ( σ(n M)) + α V (2 σ(n M)). The following lemma shows that a similar bound holds on sublevel sets of ṼN. This is required in order to extend local results to sublevel sets, similar to Theorem 2. Lemma 3. Let Assumption 5 be satisfied. For any Ṽmax R >0, there exists a function σṽmax L, such that for any x(t) with ṼN(x(t), t) Ṽmax and any N, M N with M N, the optimal solutions to (4), (35) contain at least M points k x {0,..., N 1}, that simultaneously satisfy e p(k x t) σṽmax (N M), The corresponding open-loop costs satisfy ẽ p(k x t) σṽmax (N M). J kx (x(t), t, u ( t)) J kx (x(t), t, ũ ( t)) (45) 2γ λ ( σṽmax (N M)) + α V (2 σṽmax (N M)). Proof. Given Ṽ N (x(t), t) with the rotated minimizer ( x ( t), ũ ( t)), we have Ṽ N (x(t), t) =J N (x(t), t, ũ ( t)) c N (t) Correspondingly we have J N (x(t), t, u ( t)) + λ t (x(t)) λ t+n ( x (N t)). =V N (x(t), t) c N (t) + λ t (x(t)) λ t+n (x (N t)) J N (x(t), t, ũ ( t)) c N (t) + λ t (x(t)) λ t+n (x (N t)) =ṼN (x(t), t) + λ t+n ( x (N t)) λ t+n (x (N t)) Ṽmax + 2C, with C according to (42). The rest of the proof is analogous to Lemma 2 with ( ) σṽmax (N M) := α 1 + 2C l 2Ṽmax. (46) N M Thus, we have established turnpike properties of the optimal solutions x ( t), x ( t) on any sublevel sets of ṼN. C. Practical asymptotic tracking Given these preliminaries, we can study the closed-loop properties of the MPC scheme for unreachable reference trajectories. To study practical tracking, we introduce the set S cp := {(x, t) ṼN(x, t) α l (c p )} with c p according to (36) and α l according to Assumption 5. The following theorem establishes local practical tracking of the optimal reachable trajectory x p. Theorem 3. Let Assumptions 1, 4, and 5 be satisfied. There exists an Ñ1 and a function θ L, such that for all N > Ñ1 and all initial conditions satisfying (x(0), 0) S cp, the closedloop system (5) satisfies α l ( e p (t) ) ṼN(x(t), t) α u ( e p (t) ), Ṽ N (x(t + 1), t + 1) ṼN(x(t), t) α l ( e p (t) ) + θ(n 2), for t 0. The set S cp is positively invariant and the origin e p = 0 is uniformly practically asymptotically stable under the closed-loop dynamics (5). Proof. The proof is structured in three parts, similar to the proof of Theorem 1, with Part I-II assuming ṼN (x(t), t) α l (c p ). This is recursively established in Part III. Part I: Show boundedness of ṼN: The lower bound is based on the strict dissipativity property in Assumption 5, yielding Ṽ N (x(t), t) L(x(t), ũ (0 t), t) α l ( e p (t) ). This property in combination with ṼN (x(t), t) α l (c p ) implies e p (t) c p. The upper bound then directly follows from Proposition 3. 
Part II: Show decrease of ṼN: With Lemma 2, (40) and M = 2, there exists a k x {1,..., N 1}, such that e p(k x t) σ(n 2), ẽ p(k x t) σ(n 2). For N Ñ0 := 2 + σ 1 (c p ), we can apply the results of Proposition 3. Using the dynamic programming principle, we have Ṽ N (x(t + 1), t + 1) L(x (k t), u (k t), t + k) k x 1 k=1 + ṼN k x+1(x (k x t), t + k x ) (38),(41) J kx (x(t), t, ũ ( t)) L(x(t), u(t), t) + α u ( e p(k x t) ) + 2γ λ ( σ(n 2)) + α V (2 σ(n 2)) ṼN (x(t), t) α l ( e p (t) ) + θ(n 2), θ(n) := α u ( σ(n)) + 2γ λ ( σ(n))+α V (2 σ(n)). (47)

11 11 Part III: Practical Asymptotic Stability: In order to apply this argument recursively, we have to ensure that Ṽ N (x(t), t) α l (c p ) holds recursively. Using the results in Part I/II and Lemma 1, we can recursively establish the following bound for all t 0: for all Ṽ N (x(t), t) α u (α 1 l ( θ(n 2))) + θ(n 2) =:V (N 2) α l (c p ), N Ñ1 := 2 + V 1 (α l (c p )). (48) Thus, for all N Ñ1 Ñ0 the set S cp is positively invariant. Furthermore, the origin e p = 0 is uniformly practically asymptotically stable under the closed-loop system (5), with the practical Lyapunov function ṼN. The following theorem shows that we can increase the region of attraction similar to Theorem 2, by increasing the prediction horizon N. Theorem 4. Let Assumptions 1, 4, and 5 be satisfied. For any Ṽ max R >0, there exists an Ñ3 and a function α u, Ṽ max K, such that for all N > Ñ3 and all initial conditions x(0) with Ṽ N (x(0), 0) Ṽmax, the closed-loop system (5) satisfies α l ( e p (t) ) ṼN(x(t), t) α u, Ṽ max ( e p (t) ), Ṽ N (x(t + 1), t + 1) ṼN(x(t), t) α N, Ṽ max ( e p (t) ) + θ(n 2), for all t 0, with θ L according to Theorem 3 and α N, Ṽ max K. The closed-loop (5) converges to the set S cp in a finite number of steps and renders the origin e p = 0 uniformly practically asymptotically stable. Proof. Part I and II show the desired properties at time t 0, assuming Ṽ (x(t), t) Ṽmax. Part III establishes that Ṽ N (x(t), t) Ṽmax holds recursively. Part I: Show boundedness of ṼN : Using the bound Ṽmax and Proposition 3, the function { max{αu (r), r c α u, Ṽ max (r) := p Ṽ max } if r c p max{α u (c p ), Ṽmax} + ɛ(r c p ) else satisfies α u, Ṽ max K and ṼN(x(t), t) α u, Ṽ max ( e p (t) ) for arbitrary small ɛ R >0. The lower bound is trivial (Thm. 3). Part II: Show decrease of ṼN : The following derivation is split in two Parts (a), (b) depending on whether e p (t) c p := αu 1 (α l (c p )) c p or not, which are unified in (c). (a) Assume e p (t) c p, which implies (x(t), t) S cp by Proposition 3. Then we can use the derivations from Theorem 3 to conclude Ṽ N (x(t + 1), t + 1) ṼN(x(t), t) α l ( e p (t) ) + θ(n 2). For N Ñ1 we have positive invariance of S cp. (b) Assume e p (t) > c p. Lemma 3 with M = 1 ensures that there exists a k x {0,..., N 1}, such that e p(k x t) σṽmax (N 1), ẽ p(k x t) σṽmax (N 1). For N 1 + σṽmax ( c p ), we have e p(k x t) c p and thus k x 1. Using Proposition 3, we get This yields Ṽ N kx+1(x (k x t), t + k x ) α u ( e p(k x t) ). Ṽ N (x(t + 1), t + 1) L(x (k t), u (k t), t + k) k x 1 k=1 + ṼN k x+1(x (k x t), t + k x ) (45) J kx (x(t), t, ũ ( t)) L(x(t), u(t), t) + α u ( e p(k x t) ) with For + 2γ λ ( σṽmax (N 1)) + α V (2 σṽmax (N 1)) ṼN (x(t), t) α l ( e p (t) ) + θṽmax (N 1), θṽmax (N) := θ( σ 1 ( σ Vmax (N))). (49) N > Ñ2 := we have θṽmax (N 1) < α l ( c p ) and thus θ 1 Ṽ max (α l ( c p )) + 1, (50) Ṽ N (x(t + 1), t + 1) ṼN(x(t), t) α 2,N, with α 2,N R >0. (c) Given (a) and (b) we can unify the results as follows. Let N > Ñ3 := max{ñ1, Ñ2} with Ñ1, Ñ 2 according to (48), (50). The function r min{α l (r), α 3,N } c p if r c p α N, Ṽ max (r) := min{α l ( c p ), α 3,N } + r cp r max c p ɛ 2 else with r max = max x1,x 2 X x 1 x 2, ɛ 2 = 1 2 ( θ(n 2)+ α 2,N ) and α 3,N = α 2,N + θ(n 2) ɛ 2 > 0 satisfies α N, Ṽ max K, Ṽ N (x(t + 1), t + 1) ṼN(x(t), t) α N, Ṽ max ( e p (t) ) + θ(n 2). Part III: Recursive Feasibility and Practical Stability: Based on the decrease condition in Part II b) we have ṼN (x(t), t) Ṽ max recursively satisfied and the derivations in Part I and II hold for all t 0. 
Furthermore, $(x(t), t)$ converges to the set $S_{c_p}$ in at most $(\tilde V_{\max} - \alpha_l(c_p))/\alpha_{2,N}$ steps. From the derivations in Part II (a) and Theorem 3 we further know that this set is positively invariant for the closed-loop dynamics. Thus, the origin $e_p = 0$ is uniformly practically asymptotically stable. We have shown that, for any prediction horizon $N > \tilde N_3$ and any initial condition $x(0)$ with $\tilde V_N(x(0), 0) \leq \tilde V_{\max}$, the closed loop uniformly converges to the set $S_{c_p}$, which is centered around the optimal reference $x_p$. Corollary 1. Theorem 4 implies the following closed-loop asymptotic average performance bound: $\lim_{K \to \infty} \frac{1}{K} \sum_{t=0}^{K-1} l(x(t), u(t), t) \leq \frac{V_{p,\min}}{p} + \theta(N-2)$. (51)
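Although $(x_p, u_p)$ and $V_{p,\min}$ are used purely as analysis objects and are not needed by the controller, they can be computed for a given example by transcribing problem (31) directly, which also makes the bound (51) computable. The sketch below uses single shooting over one period with a generic NLP solver; the dynamics, period, weights, and reference are hypothetical placeholders, and the state constraint set X is again omitted for brevity.

```python
# Direct transcription of problem (31) (Definition 1): optimal reachable
# p-periodic trajectory for a given example. Single shooting over one period;
# dynamics f, period p, weights and reference are hypothetical placeholders,
# and state constraints X are omitted for brevity.
import numpy as np
from scipy.optimize import minimize

n, m, p = 2, 1, 10
Q, R = np.eye(n), 0.1 * np.eye(m)
u_lb, u_ub = -1.0, 1.0

def f(x, u):
    return np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * (u[0] - np.sin(x[0]))])

x_ref = np.ones((p, n))            # possibly unreachable periodic reference
u_ref = np.zeros((p, m))

def simulate(z):
    # decision variables: initial state x(0) and inputs u(0), ..., u(p-1)
    x0, u_seq = z[:n], z[n:].reshape(p, m)
    xs = [x0]
    for k in range(p):
        xs.append(f(xs[-1], u_seq[k]))
    return np.array(xs), u_seq     # xs[p] = f(x(p-1), u(p-1))

def periodic_cost(z):
    xs, u_seq = simulate(z)
    return sum((xs[k] - x_ref[k]) @ Q @ (xs[k] - x_ref[k])
               + (u_seq[k] - u_ref[k]) @ R @ (u_seq[k] - u_ref[k])
               for k in range(p))

def periodicity(z):
    xs, _ = simulate(z)
    return xs[p] - xs[0]           # enforce x(0) = f(x(p-1), u(p-1))

z0 = np.zeros(n + p * m)
bounds = [(None, None)] * n + [(u_lb, u_ub)] * (p * m)
res = minimize(periodic_cost, z0, bounds=bounds, method="SLSQP",
               constraints=[{"type": "eq", "fun": periodicity}])
V_p_min = res.fun                  # enters the average performance bound (51)
```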

12 12 The resulting closed-loop system practically tracks the unknown optimal reachable reference x p. Compared to the results in [32], the period length p is not needed for the online implementation. This is mainly due to the different setup and the corresponding difference in the dissipativity assumption (time-varying vs. orbit). Compared to the result in [16, Thm. 4.2], where Economic MPC in case of steady-state optimality is considered, this theorem relaxes the controllability assumption. A direct extension of these results to arbitrary reference trajectories would require a finite time controllability assumption on the full state space X (similar assumptions are used in [26, Sec. 8], [32]). Similar finite time controllability assumptions are used for sampleddata problems in [35]. In contrast, our result only requires local stabilizability and utilizes level sets similar to [14]. On level sets we automatically have a finite time controllability property with respect to the set S cp, as derived in Lemma 3 and exploited in Theorem 4. D. Discussion The following lemma shows how we can guarantee the strict dissipativity property in Assumption 5. Lemma 4. Assume that the system (1) is locally controllable around x p [27, Def ] and that the system is uniformly suboptimally operated off the trajectory x p according to Definition [17, Def. 12]. Then Assumption 5 is satisfied. Proof. The proof of this lemma is an extension of the results in [36], with the main difference in [36, Thm. 4] and [37, Thm. 4.12]. The local controllability assumption in combination with the definition of uniform suboptimal operation, enables us to construct a certain periodic trajectory with lower cost to prove dissipativity by contradiction. The rest of the proof does not change, compared to the steady-state case 7. With this result, we have a set of suitable assumptions that ensure practical tracking for general reference trajectories. Given an in general unreachable periodic reference trajectory (x ref, u ref ), there exists a trajectory (x p, u p ) (Def. 1), which is reachable and as close as possible to the given reference trajectory. Assume that the system is uniformly suboptimally operated off x p, which means that any trajectory with a close to optimal cost has a turnpike property with respect to x p. Such an assumption is not restrictive due to the positive definite cost Q, assuming the minimizer x p is unique. Then Lemma 4 ensures strict dissipativity (Ass. 5) if the system is locally controllable around x p. Finally Theorem 4 ensures practical tracking of x p, provided the prediction horizon N is chosen large enough. The main requirements are local stabilizability (Ass. 1) and that x p lies strictly in the constraint set (Ass. 4). To summarize, the closed-loop system does the right thing in that it finds the best trajectory x p, which is as close as possible to the unreachable reference, given that a large enough prediction horizon N is used. In contrast to most existing results for unreachable reference trajectories [10], [11], which use an artificial reference 7 The original proof [37, Thm. 4.12] only showed dissipativity on the control invariant set X 0, but can be extended to hold on X. trajectory, we are able to practically track the optimal reference without explicitly computing it. Remark 8. In case we have a non periodic reference, closedloop stability and performance of the MPC scheme over any finite interval T can still be ensured with the presented Theorems. 
In particular, consider a reference ( x ref, ũ ref ), with ( x ref (t), ũ ref (t)) = (x ref (t), u ref (t)), for all t {0,..., T + N} and the period length p T + N. Clearly, over the finite interval T the closed-loop system for both reference trajectories with the same initial condition cannot be distinguished. Thus the theorems regarding periodic reference trajectories can be used to describe the performance over any finite time interval. This is especially relevant in case the reference trajectory becomes reachable after some finite time interval, see numerical example Section VI. An extension of this analysis to general non-periodic reference trajecotries could be based on the concept of overtaking optimality [38]. Compared to reachable references, a drawback is that the bounds on the prediction horizon N depend on quantities that are defined by the optimal reachable trajectory (c p,c), for which hard bounds are difficult to obtain in practice (similar problems also appear in [16], where Economic MPC in case of steady-state optimality is considered). Hence, the bounds on the prediction horizon are often more of a conceptual nature. V. LINEAR SYSTEMS This section studies the special case of linear 8 systems x(t + 1) = Ax(t) + Bu(t) (52) and shows how the previous assumptions and results simplify. A. Stabilizability Lemma 5. Consider the system (52) with (A, B) stabilizable. Any stabilizing controller K satisfies Assumption 1 with κ(x, x ref, u ref ) = u ref + K(x x ref ), (53) A cl = A + BK, Q = Q + K RK. An incremental Lyapunov function is given by the Lyapunov equation P A clp A cl = Q, ρ = 1 λ min (Q P 1 ), V δ (x, x ref, u ref ) = x x ref 2 P. The bounds in Proposition 2 reduce to γ = λ max (P Q 1 ), c = δ ref λ min (QP 1 ). Assumption 3 reduces to the Pontryagin set difference 9 x ref (t) X := X X δref, u ref (t) U := U KX δref, 8 Similar results can be derived for linear time-varying (LTV) systems x(t + 1) = A(t)x(t) + B(t)u(t). This class of systems is especially relevant for tracking time-varying references, as nonlinear models are often linearized along the reference trajectory or identified based on experiments along reference trajectories. 9 The Pontryagin set difference is defined as P 1 P 2 = {x 1 P 1 x 1 + x 2 P 1, x 2 P 2 }. For polytopic constraints X, U the tightened constraints can be computed using the support function [39]. Alternatively, for a given reference trajectory, δ ref is the solution to a linear program (LP).

13 13 with the set X δref = {e e P e δ ref }. Proof. The proof of this lemma follows from plugging κ from (53) into (9) and evaluate the corresponding inequality using standard arguments (from linear systems theory). If the discrete-time linear quadratic regulator (DLQR) is considered, the bound γ becomes tight. B. Strong duality Lemma 6. Consider the system (52). Let Assumption 4 hold. Assume that Z is a convex set and R is positive definite. Then Assumption 5 is satisfied with linear storage functions λ k (x(k)) = λ k e p(k), γ λ (r) = r max k λ k. The rotated stage cost is given by L(x, u, k) = x x p (k) 2 Q + u u p (k) 2 R. (54) Proof. Assumption 4 ensures that the state and input constraints Z are not active in (31). The first part of this lemma is a standard result in convex optimization (strong duality) with the dual variables λ [31], [40]. This result requires a strictly convex cost (R, Q positive definite) and convex constraints (linear system, Z convex) with a slater condition. The special structure of the rotated cost is due to the inactive constraints, see also [41, Prop. 4.3]. For linear systems with a quadratic stage cost, we have linear storage functions and a quadratic rotated stage cost L. C. Unreachable reference trajectories Theorem 5. Consider the system (52) with (A, B) stabilizable. Let Assumption 4 hold. Assume that Z is a convex set and R is positive definite. For any Ṽmax R >0, there exists an Ñ3, such that for all N > Ñ3 and all initial conditions x(0) such that ṼN(x(0), 0) Ṽmax, the closed-loop system (5) satisfies e p (t) 2 Q ṼN(x(t), t) γṽmax e p (t) 2 Q Ṽ N (x(t + 1), t + 1) ṼN(x(t), t) e p (t) 2 Q + θṽmax (N 1), for all t 0 with θṽmax L, γṽmax R >0. Proof. The proof follows the lines of Theorem 4. The simplified assumptions are due to Lemma 5 and Lemma 6. Given the tighter characterization of the rotated cost in (54), the local bound in Proposition 3 can be replaced by ṼN (x(t), t) γ e p (t) 2 Q, for all e p(t) 2 Q c p,lin, with γ according to Lemma 5 and c p,lin = λ min (Q)δ p /c δ,u analogous to c in (18). This implies the upper bound { } Ṽ N (x(t), t) γṽmax e p (t) 2 Q, γṽmax = max γ, Ṽmax c p,lin. A lenghty calculation yields σ(n) θ(n) := γ σ(n) + 2 max λ k k λ min (Q) + α V γc p,lin + max k λ k (1 + ρ N γ) σ(n) := N + 2C σṽmax (N) := 2Ṽmax λ min (Q)N. The proof is analogous to Theorem 4. cp,lin ( ) 2 σ(n), λ min (Q) λ + C min(q), This theorem is an adaptation of Theorem 4 to linear system dynamics, which results in two simplifications. First, the assumptions are simpler and easier to verify. Second, the bounds on the necessary prediction Ñ3 are simpler. Conceptually similar results were obtained in [30, Thm 3.11] for linear quadratic economic MPC, assuming no constraints, i.e. Z = R n+m. VI. NUMERICAL EXAMPLES The following numerical examples illustrate the theoretical results in this paper. A. Nonlinear reference tracking The following example illustrates the theoretical results in Sections II-III for reachable references. We consider a discretetime version of a continuous stirred tank reactor (CSTR) ( ) x f(x, u) = 1 + h θ (1 x 1) hkx 1 e M x 2 x 2 + h θ (x f x 2 ) + hkx 1 e, M x 2 hαu(x 2 x c ) with the temperature x 1, the concentration x 2 and the coolant flow rate u, taken from 10 [25]. We consider the constraint set X = [0, 1] [0.5, 1], U = [0, 2] and the stage cost Q = I, R = The linearization of the system at each point r Z can be stabilized (Ass. 2) with ( ) P r =, K r = ( ). Remark 9. 
Remark 9. These matrices have been computed by solving multiple linear matrix inequalities (LMIs) based on a gridding of the constraint set $\mathcal{Z}$, similar to [42]. If we consider a larger constraint set $\mathcal{Z}$, Assumption 2 can only be satisfied with parameterized matrices $P_r$, $K_r$.

Based on Proposition 1, this property in combination with the continuity of the dynamics implies local incremental stabilizability (Ass. 1) with $\kappa(x, z, v) = v + K_r(x - z)$, $V_\delta(x, z, v) = \|x - z\|_{P_r}^2$, and corresponding parameters¹¹ $\delta_{\mathrm{loc}}$ and $\rho$. These values are quite conservative. By gridding the constraint set and considering all possible combinations $(x, z, v)$, it is possible to numerically verify a tighter bound on $\rho$ together with $\delta_{\mathrm{loc}} = 0.01$ for this controller and $\delta$-Lyapunov function.

¹⁰ The parameters are $\theta = 20$, $k = 300$, $M = 5$, $\alpha = 0.117$, $h = 0.1$, with $x_f$ and $x_c$ as in [25]. This corresponds to an Euler discretization with sampling time $h = 0.1$ s of the system in [25].

¹¹ Assumption 2 is satisfied with $\epsilon = 1$. The nonlinearity $f(x, u)$ satisfies the quadratic bounds (11), (16) with $T = 2.11$ and a corresponding $T_{\max}$. The Lipschitz bound is given by $k_{\max} = \|K_r\| = 261$ and the quadratic bounds on $V_\delta$ are given by the eigenvalues of $P_r$: $c_{\delta,l} = 71.35$ and $c_{\delta,u} = \lambda_{\max}(P_r)$.
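One simplified way to carry out this gridding-based verification is sketched below; the CSTR map $f$ is the one from the previous sketch, and $P_r$, $K_r$ as well as the candidate contraction rate are placeholders rather than the values computed for the paper.

    import itertools
    import numpy as np

    # Placeholder controller/Lyapunov matrices (the numerical P_r, K_r of the
    # paper are not reproduced here); f is the CSTR map sketched above.
    P_r = np.diag([100.0, 50.0])          # placeholder, NOT from the paper
    K_r = np.array([[0.0, -200.0]])       # placeholder, NOT from the paper
    rho_cand, delta_loc = 0.99, 0.01      # candidate contraction rate / region

    def V_delta(x, z):
        e = x - z
        return float(e @ P_r @ e)

    def kappa(x, z, v):
        return float(v) + (K_r @ (x - z)).item()

    def verify_contraction(f, grid_x1, grid_x2, grid_u):
        """Grid over (z, v) and nearby x; check the incremental decrease
        V_delta(f(x, kappa(x,z,v)), f(z,v)) <= rho_cand * V_delta(x, z)
        whenever V_delta(x, z) <= delta_loc (input constraints omitted)."""
        worst = 0.0
        for z1, z2, v in itertools.product(grid_x1, grid_x2, grid_u):
            z = np.array([z1, z2])
            for dx in itertools.product([-0.01, 0.0, 0.01], repeat=2):
                x = z + np.array(dx)
                if V_delta(x, z) > delta_loc:
                    continue
                gap = V_delta(f(x, kappa(x, z, v)), f(z, v)) - rho_cand * V_delta(x, z)
                worst = max(worst, gap)
        return worst <= 0.0  # True if the candidate (rho, delta_loc) is verified

    # Example usage (coarse grids; refine for a meaningful check):
    # grids = (np.linspace(0, 1, 11), np.linspace(0.5, 1, 11), np.linspace(0, 2, 11))
    # print(verify_contraction(f, *grids))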

We consider a reachable (Ass. 3) periodic reference trajectory that connects two equilibria, see Figure 2. According to Proposition 2, we have $V_N(x(t), t) \leq \gamma \|e(t)\|_Q^2$ for $\|e(t)\|_Q^2 \leq c$, with $\gamma = 166.6$. A numerical investigation shows that the controller $\kappa$ implies the tighter bound $\gamma = 22.1$ along the reference trajectory. According to Theorem 1, a prediction horizon of $N > 134$ ensures local exponential reference tracking. The reference trajectory and the local sets $\|e(t)\|_Q^2 \leq c$ are shown in Figure 2. The small local region of attraction is mainly due to the input constraint. If a prediction horizon of $N = 200$ is used, Theorem 2 provides a corresponding region of attraction $V_{\max}$. The region of attraction, with one exemplary open-loop solution, is plotted in Figure 3.

Remark 10. The proposed reference tracking MPC ensures stability with a prediction horizon of $N = 200$ ($T = 20$ s) and requires no complex design procedure. As discussed in Remark 5, derivations similar to [14] could also be used to ensure stability. In order to guarantee exponential stability for this value of $V_{\max}$, these derivations require a prediction horizon of $N > 563$. Thus, the proof technique presented in Theorem 2 is significantly less conservative compared to the method in [14]. In the closed-loop simulations, we find that the MPC with prediction horizon $N = 10$ is already able to exponentially stabilize the reference trajectory for all initial conditions considered in Theorem 2. Thus, the improved theoretical bounds on $N$ in Theorem 2 are still quite conservative.

Remark 11. In order to use the methods in [8], [9], one has to construct terminal sets and costs along the reference trajectory. Both these methods require offline computations for an explicit reference trajectory. Thus, online changes in the reference trajectory are hard to deal with. On the other hand, the proposed approach is independent of the reference trajectory as long as Assumption 3 is satisfied. To apply the method in [10], an artificial reference trajectory of equal length as the reference trajectory would have to be optimized, which seems unsuitable for real-time operation.

This example illustrates the applicability of the proposed method and highlights potential sources of conservatism.

Fig. 2. Concentration vs. temperature: periodic reference trajectory $x_{\mathrm{ref}}$ (solid) with local region of attraction $\|e(t)\|_Q^2 \leq c$ (ellipses).

Fig. 3. Concentration vs. temperature: periodic reference trajectory $x_{\mathrm{ref}}$ (solid), increased region of attraction $V_N(x(0)) \leq V_{\max}$ (dots), local region of attraction $\|e(t)\|_Q^2 \leq c$ (ellipses) and exemplary open-loop solution $x^*(\cdot|0)$ (dashed).

B. Linear systems and unreachable reference trajectories

Now we study a linear system with an unreachable reference trajectory to illustrate the theoretical results in Sections IV-V. Consider an asynchronous motor (ASM) with the parameters¹² from [43] and a sampling time of $T_s = 0.1$ ms. The model is a linear system subject to ellipsoidal constraints
$x^+ = Ax + Bu$, $x = (i_{s,\alpha}, i_{s,\beta}, \Psi_{r,\alpha}, \Psi_{r,\beta})$, $u = (u_{s,\alpha}, u_{s,\beta})$,
$\mathcal{X} = \{x \mid i_{s,\alpha}^2 + i_{s,\beta}^2 \leq i_{\max}^2,\ \Psi_{r,\alpha}^2 + \Psi_{r,\beta}^2 \leq \Psi_{r,\max}^2\}$,
$\mathcal{U} = \{u \mid u_{s,\alpha}^2 + u_{s,\beta}^2 \leq u_{\max}^2\}$,
with the stator current $i_{s,\alpha\beta}$, the rotor flux $\Psi_{r,\alpha\beta}$ and the input voltage $u$.
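As a rough illustration of how such a tracking MPC for the linear model with ellipsoidal constraints could be set up, the following sketch uses CVXPY; the matrices, horizon, and constraint levels are placeholders, not the identified ASM data from [43], and, in line with the scheme studied in this paper, no terminal cost or terminal constraint is added.

    import cvxpy as cp
    import numpy as np

    def tracking_mpc_step(A, B, x0, x_ref, u_ref, N, Q, R,
                          i_max, psi_max, u_max):
        """Solve one tracking MPC problem for x+ = Ax + Bu with ellipsoidal
        state/input constraints and quadratic tracking cost (no terminal
        ingredients).  x_ref, u_ref hold the reference over the horizon."""
        n, m = B.shape
        x = cp.Variable((n, N + 1))
        u = cp.Variable((m, N))
        cost = 0
        cons = [x[:, 0] == x0]
        for k in range(N):
            cost += cp.quad_form(x[:, k] - x_ref[:, k], Q)
            cost += cp.quad_form(u[:, k] - u_ref[:, k], R)
            cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                     cp.norm(x[:2, k], 2) <= i_max,      # stator current bound
                     cp.norm(x[2:, k], 2) <= psi_max,    # rotor flux bound
                     cp.norm(u[:, k], 2) <= u_max]       # input voltage bound
        cp.Problem(cp.Minimize(cost), cons).solve()
        return u[:, 0].value  # first input of the optimal open-loop sequence

In closed loop, the first input is applied, the reference window is shifted by one step, and the problem is re-solved at the next sampling instant.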
1) Stabilizability and reachable reference trajectory: Typically, induction machines are controlled in the rotated dq-frame, in which the system is described by nonlinear differential equations. In the αβ-frame, we have a linear system and stationary operation is described by a periodic trajectory. Thus, we can transform a nonlinear setpoint stabilization problem into a linear reference tracking problem. For the reference tracking MPC we use the stage cost $Q = I$, $R = I$, which implies $\gamma = 2.5$ with Lemma 5. Based on Theorem 1, the prediction horizon $N_0 = 4$ is sufficient for local stability of the periodic operation. An exemplary plot of two periodic trajectories with the local sets projected onto the stator current $i_{s,\alpha\beta}$ can be seen in Figure 4. We want to focus on the problem of transitioning from one periodic operation to another.

¹² To simplify the following derivations, the stator flux $\Psi_s$ is scaled by a factor of 1000 and the input voltage is scaled by a factor of 10. The angular velocity $\omega$ is constant. We consider the constraints $i_{\max} = 1$, $\Psi_{r,\max} = 284$ and $u_{\max} = 0.02$.
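To illustrate how stationary dq operation corresponds to a periodic αβ reference, the following sketch applies the inverse Park transform to a constant dq operating point; the operating-point values and the electrical frequency are illustrative placeholders, not the ASM data from [43].

    import numpy as np

    def dq_setpoint_to_alphabeta_reference(i_dq, psi_dq, omega, Ts, p):
        """Map a constant dq operating point to a p-step periodic alpha/beta
        reference x_ref(t) = (i_alpha, i_beta, psi_alpha, psi_beta)."""
        t = np.arange(p) * Ts
        phi = omega * t                      # electrical angle
        c, s = np.cos(phi), np.sin(phi)
        i_ab = np.stack([c * i_dq[0] - s * i_dq[1],
                         s * i_dq[0] + c * i_dq[1]])
        psi_ab = np.stack([c * psi_dq[0] - s * psi_dq[1],
                           s * psi_dq[0] + c * psi_dq[1]])
        return np.vstack([i_ab, psi_ab])     # shape (4, p)

    # Illustrative operating point (placeholders, not the data from [43]);
    # Ts = 0.1 ms and one 50 Hz electrical period of p = 200 steps.
    x_ref = dq_setpoint_to_alphabeta_reference(
        i_dq=(0.5, 0.2), psi_dq=(200.0, 0.0), omega=2 * np.pi * 50, Ts=1e-4, p=200)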

A naive method would be to directly change the reference trajectory at some point $t$, for which reliable theoretical guarantees are hard to obtain and for which practical simulations also lead to unsatisfactory results¹³.

2) Unreachable reference trajectory: To avoid this problem, one can generate a reference trajectory connecting the two periodic trajectories. In practice, computing a reachable reference trajectory can be a difficult task¹⁴. Thus, as a reference trajectory $x_{\mathrm{ref}}$ we linearly interpolate between the two trajectories over $p = 1800$ steps, creating an almost reachable reference trajectory. Correspondingly, there exists an optimal reachable¹⁵ reference trajectory $x_p$ (Def. 1). Theorem 5 can be used¹⁶ to ensure local practical stability of the optimal reachable reference trajectory. In Figure 5, we can see the (partially) unreachable reference trajectory $x_{\mathrm{ref}}$, the optimal reachable reference trajectory $x_p$ and the local region of attraction $\|e_p\|_Q^2 \leq c_{p,\mathrm{lin}}$. In addition, an exemplary closed-loop simulation with $N = 10$ can be seen, which practically tracks the optimal reference trajectory $x_p$.

We have demonstrated that the simple reference tracking MPC formulation ensures practical tracking of the unknown optimal reachable trajectory. Thus, we have provided a method to enable operation changes without complex design procedures. We note that most other reference tracking approaches [8], [9], [10] are unsuited to track such an unreachable, non-periodic reference trajectory.

Fig. 4. Phase plot of the stator current $i_{s,\alpha\beta}$: two periodic reference trajectories (solid) with (projected) local region of attraction $\|e(t)\|_Q^2 \leq c$ (ellipses). Each corresponds to a specific operating point in the dq-frame.

Fig. 5. Phase plot of the stator current $i_{s,\alpha\beta}$: unreachable reference trajectory $x_{\mathrm{ref}}$ connecting two periodic reference trajectories, optimal reachable reference trajectory $x_p$ with (projected) local region of attraction $\|e_p(t)\|_Q^2 \leq c_{p,\mathrm{lin}}$ (ellipses) and closed-loop MPC trajectory $x_{\mathrm{MPC}}$ (dashed).

¹³ Theorem 2 can be used to analyze the region of attraction. Due to the large distance between the trajectories, this would require a prediction horizon of the order of $10^8$, which is unrealistic. If we implement this method with a prediction horizon of $N = 10$, the MPC in general exhibits undesirable non-smooth operation, accompanied by numerical difficulties stemming from the active nonlinear state and input constraints, and repeated feasibility problems.

¹⁴ It is possible to compute a connecting reference trajectory in the dq-frame based on a flatness property of the ASM. This is based on the nonlinear system, which makes real-time optimization more difficult. In addition, constraint violations might occur.

¹⁵ Definition 1 is originally based on periodic reference trajectories. As discussed in Remark 8, non-periodic trajectories can be treated as periodic trajectories over any finite time span.

¹⁶ Theorem 5 requires a prediction horizon of $N \geq \tilde N_3$, which is quite conservative. This conservatism is due to the conservative bound $C$ in the turnpike property.

VII. CONCLUSION

We have analyzed the closed-loop properties of a simple reference tracking MPC formulation and established theoretical guarantees for classes of systems and reference trajectories. These results rely on the introduced notion of local incremental stabilizability, which bridges the gap from setpoint stabilization to dynamic reference tracking. The derived bounds are less conservative than existing approaches.
Furthermore, the theoretical results are applicable to general reachable and unreachable reference trajectories, which is not the case for most MPC schemes. The proposed reference tracking MPC scheme offers a simple approach that requires no complex offline computation for a specific reference trajectory. Thus, an operator can simply change the dynamic reference trajectory online to transition to another mode of operation, which makes the proposed scheme appealing for practical applications. The practicality of this approach has been illustrated with two numerical examples. Studying reference trajectories that become unreachable due to active constraints is part of future work.

REFERENCES

[1] J. B. Rawlings and D. Q. Mayne, Model predictive control: Theory and design. Nob Hill Pub.
[2] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. Scokaert, Constrained model predictive control: Stability and optimality, Automatica, vol. 36.
[3] H. Chen and F. Allgöwer, A quasi-infinite horizon nonlinear model predictive control scheme with guaranteed stability, Automatica, vol. 34.
[4] D. Limón, T. Alamo, F. Salas, and E. F. Camacho, On the stability of constrained MPC without terminal constraint, IEEE Transactions on Automatic Control, vol. 51.
[5] L. Grüne, NMPC without terminal constraints, in Proc. IFAC Conf. Nonlinear Model Predictive Control, 2012.
[6] M. Reble and F. Allgöwer, Unconstrained model predictive control and suboptimality estimates for nonlinear continuous-time systems, Automatica, vol. 48.
[7] A. Isidori, Nonlinear Control Systems. Springer.
[8] T. Faulwasser, Optimization-based solutions to constrained trajectory-tracking and path-following problems, Ph.D. dissertation, Otto-von-Guericke-Universität Magdeburg.
[9] E. Aydiner, M. A. Müller, and F. Allgöwer, Periodic reference tracking for nonlinear systems via model predictive control, in Proc. European Control Conf. (ECC), 2016.
[10] D. Limon, M. Pereira, D. M. de la Peña, T. Alamo, C. N. Jones, and M. N. Zeilinger, MPC for tracking periodic references, IEEE Transactions on Automatic Control, vol. 61, 2016.

[11] P. Falugi and D. Q. Mayne, Tracking a periodic reference using nonlinear model predictive control, in Proc. 52nd IEEE Conf. Decision and Control (CDC), 2013.
[12] S. Di Cairano and F. Borrelli, Reference tracking with guaranteed error bound for constrained linear systems, IEEE Transactions on Automatic Control, vol. 61.
[13] P. Falugi, Model predictive control for tracking randomly varying references, Int. J. Control, vol. 88.
[14] A. Boccia, L. Grüne, and K. Worthmann, Stability and feasibility of state constrained MPC without stabilizing terminal constraints, Systems & Control Letters, vol. 72.
[15] M. Schulze Darup and M. Cannon, A missing link between nonlinear MPC schemes with guaranteed stability, in Proc. 54th IEEE Conf. Decision and Control (CDC), 2015.
[16] L. Grüne, Economic receding horizon control without terminal constraints, Automatica, vol. 49.
[17] M. A. Müller, L. Grüne, and F. Allgöwer, On the role of dissipativity in economic model predictive control, in Proc. 5th IFAC Conf. Nonlinear Model Predictive Control, 2015.
[18] J. Köhler, M. A. Müller, and F. Allgöwer, Nonlinear reference tracking with model predictive control: An intuitive approach, 2018, submitted to European Control Conf. (ECC).
[19] M. A. Müller and K. Worthmann, Quadratic costs do not always work in MPC, Automatica, vol. 82.
[20] I. R. Manchester and J.-J. E. Slotine, Control contraction metrics: Convex and intrinsic criteria for nonlinear feedback design, IEEE Transactions on Automatic Control, vol. 62.
[21] D. N. Tran, B. S. Rüffer, and C. M. Kellett, Incremental stability properties for discrete-time systems, in Proc. 55th IEEE Conf. Decision and Control (CDC), 2016.
[22] D. Angeli, A Lyapunov approach to incremental stability properties, IEEE Transactions on Automatic Control, vol. 47.
[23] F. Bayer, M. Bürger, and F. Allgöwer, Discrete-time incremental ISS: A framework for robust NMPC, in Proc. European Control Conf. (ECC), 2013.
[24] W. J. Rugh and J. S. Shamma, Research on gain scheduling, Automatica, vol. 36.
[25] D. Q. Mayne, E. C. Kerrigan, E. Van Wyk, and P. Falugi, Tube-based robust nonlinear model predictive control, Int. J. Robust and Nonlinear Control, vol. 21.
[26] L. Grüne and J. Pannek, Nonlinear Model Predictive Control. Springer.
[27] E. D. Sontag, Mathematical Control Theory: Deterministic Finite Dimensional Systems. Springer.
[28] J. B. Rawlings, D. Bonne, J. B. Jorgensen, A. N. Venkat, and S. B. Jorgensen, Unreachable setpoints in model predictive control, IEEE Transactions on Automatic Control, vol. 53.
[29] M. A. Müller and F. Allgöwer, Economic and distributed model predictive control: Recent developments in optimization-based control, SICE J. of Control, Measurement, and System Integration, vol. 10.
[30] L. Grüne and M. Stieler, Asymptotic stability and transient optimality of economic MPC without terminal conditions, J. Proc. Contr., vol. 24.
[31] M. Zanon, S. Gros, and M. Diehl, A Lyapunov function for periodic economic optimizing model predictive control, in Proc. 52nd IEEE Conf. Decision and Control (CDC), 2013.
[32] M. A. Müller and L. Grüne, Economic model predictive control without terminal constraints for optimal periodic behavior, Automatica, vol. 70.
[33] M. Zanon, L. Grüne, and M. Diehl, Periodic optimal control, dissipativity and MPC, IEEE Transactions on Automatic Control, vol. 62.
[34] D. Angeli, R. Amrit, and J. B. Rawlings, On average performance and stability of economic model predictive control, IEEE Transactions on Automatic Control, vol. 57.
[35] T. Faulwasser and D. Bonvin, On the design of economic NMPC based on approximate turnpike properties, in Proc. 54th IEEE Conf. Decision and Control (CDC), 2015.
[36] M. A. Müller, D. Angeli, and F. Allgöwer, On necessity and robustness of dissipativity in economic model predictive control, IEEE Transactions on Automatic Control, vol. 60.
[37] M. A. Müller, Distributed and economic model predictive control: beyond setpoint stabilization, Ph.D. dissertation, Universität Stuttgart.
[38] L. Grüne and S. Pirkelmann, Closed-loop performance analysis for economic model predictive control of time-varying systems, in Proc. 56th IEEE Conf. Decision and Control (CDC), 2017.
[39] C. Conte, M. N. Zeilinger, M. Morari, and C. N. Jones, Robust distributed model predictive control of linear systems, in Proc. European Control Conf. (ECC), 2013.
[40] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press.
[41] T. Damm, L. Grüne, M. Stieler, and K. Worthmann, An exponential turnpike theorem for dissipative discrete time optimal control problems, SIAM J. on Control and Optimization, vol. 52.
[42] P. S. Cisneros and H. Werner, Parameter-dependent stability conditions for quasi-LPV model predictive control, in Proc. American Control Conf. (ACC), 2017.
[43] H. S. Che, E. Levi, M. Jones, W.-P. Hew, and N. A. Rahim, Current control methods for an asymmetrical six-phase induction motor drive, IEEE Transactions on Power Electronics, vol. 29.

Johannes Köhler received his Master degree in Engineering Cybernetics from the University of Stuttgart, Germany. During his studies, he spent 3 months at Harvard University in Na Li's research lab. He has since been a doctoral student at the Institute for Systems Theory and Automatic Control under the supervision of Prof. Frank Allgöwer and a member of the Graduate School Soft Tissue Robotics at the University of Stuttgart. His research interests are in the area of model predictive control.

Matthias A. Müller received a Diploma degree in Engineering Cybernetics from the University of Stuttgart, Germany, and an M.S. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign, US. In 2014, he obtained a Ph.D. in Mechanical Engineering, also from the University of Stuttgart, Germany, for which he received the 2015 European Ph.D. award on control for complex and heterogeneous systems. He is currently working as a senior lecturer (Akademischer Oberrat) at the Institute for Systems Theory and Automatic Control at the University of Stuttgart, Germany. His research interests include nonlinear control and estimation, model predictive control, distributed control and switched systems, with application in different fields including biomedical engineering.

Frank Allgöwer studied Engineering Cybernetics and Applied Mathematics in Stuttgart and at the University of California, Los Angeles (UCLA), respectively, and received his Ph.D. degree from the University of Stuttgart in Germany. Since 1999 he has been the Director of the Institute for Systems Theory and Automatic Control and a professor at the University of Stuttgart. His research interests include networked control, cooperative control, predictive control, and nonlinear control with application to a wide range of fields including systems biology.
Frank serves as President of the International Federation of Automatic Control (IFAC) and, since 2012, as Vice President of the German Research Foundation (DFG).
