Suboptimality of minmax MPC

Seungho Lee
In this paper, we consider the particular case of Model Predictive Control (MPC) in which the problem to be solved at each sample time has the form of a min-max problem. We derive an analytical expression for the suboptimality, that is, the degree of degradation of the min-max MPC solution with respect to the optimal control solution. We transform the min-max problem into the standard Bolza form within the framework of the Method of Outer Approximations (MOA), and then apply a Phase I-Phase II method to solve the resulting problems. Our method establishes a forward invariant set, and hence feasibility is guaranteed without any additional constraint or modification of the cost function. We show that as the horizon length is made sufficiently large, the MPC solution approaches the optimal control solution.

1 Introduction

The principle of optimality has been studied extensively since Bellman's pioneering work on determining the optimal sequence of decisions from a fixed state of the system [1]. The approach that realizes his idea is referred to as Dynamic Programming (DP) and is associated with discrete-time optimization problems. In continuous-time optimization problems, the analogous equation is a partial differential equation, usually referred to as the Hamilton-Jacobi-Bellman (HJB) equation. Suppose that we have a dynamical system modeled by a differential equation of the form

    ẋ(t) = f(x(t), u(t)),  x(0) = x_0,  t ≥ 0    (1)

where x(t) is the state of the system and u(t) is the control. Now suppose that one wants to optimize its behavior by solving an optimal control problem of the form

    min_{u(t)∈U, x(t)∈X} ∫_0^∞ f^0(x(t), u(t)) dt,    (2)

subject to the differential equation constraint (1). Let û(t) be the solution of this optimal control problem and x̂(t) = x(t, û(t)), t ≥ 0, the resulting trajectory.
Now suppose that the model (1) is not perfect, so that the actual trajectory resulting from the control û(t), x(t, û(t)), is quite different from x̂(t). To remedy this situation, it was proposed that at regular time intervals the actual state be measured and the problem (2) re-solved, to obtain a corrected control, effectively creating a feedback mechanism. Experiments have shown that this idea, called Model Predictive Control (MPC), is an excellent one. MPC is a form of feedback control in which the current control action is obtained by solving online, at each sample time, an open-loop optimal control problem over a fixed time window with the current system state as the initial condition. The solution of this optimization problem yields a finite sequence of control actions over the optimization window; only the first value is applied to the system, then the window is advanced one sample time and the optimization process is repeated [2].

We are particularly interested in min-max MPC, which refers to the case where the problem that needs to be solved at every sample time is a min-max problem. See [3] for a detailed discussion of DP in the min-max setting. This type of problem arises naturally in many engineering applications. Pursuit-evasion game applications are presented in [4] and [5]. Neural network applications are discussed in [6] and [7], and an image processing application in [8]. A biped robot application is given in [9]. However, since MPC is an approximation of optimal control, the quality of the solution obtained by solving the MPC problem is inherently degraded relative to the optimal control problem. Some research on MPC is concerned with finding an analytical expression for this degradation, called the suboptimality. Suboptimality in min-max MPC, however, has not been explored yet. In this work, we present an analytical expression for it. Perhaps the works most closely related to this research are [10], [11], and [12].
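The receding-horizon loop described above can be sketched as follows. Here `solve_finite_horizon` is a hypothetical stand-in for the open-loop optimal control solver (a scalar integrator steered toward the origin), not a method from the paper:

```python
def solve_finite_horizon(x0, H):
    """Hypothetical stand-in for the open-loop solver: for the scalar system
    x+ = x + u it predicts x_k = x0 * 0.5**k and returns the controls
    u_k = -x_k / 2 that realize that prediction."""
    return [-(x0 * 0.5 ** k) / 2 for k in range(H)]

def plant(x, u):
    # the true system; in this sketch it matches the model exactly
    return x + u

def mpc(x0, H, n_steps):
    """Receding-horizon loop: solve over a window with the current state as
    the initial condition, apply only the first control, advance the window,
    and repeat."""
    x, closed_loop = x0, [x0]
    for _ in range(n_steps):
        u_seq = solve_finite_horizon(x, H)  # open-loop problem over the window
        x = plant(x, u_seq[0])              # apply only the first control action
        closed_loop.append(x)
    return closed_loop
```

With this toy solver the closed-loop state is halved at every sample time, so the trajectory contracts toward the origin.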
They have in common that the MPC problem considered is in Bolza form and unconstrained. The fact that a constrained min-max problem is considered here differentiates this work from the aforementioned research. Another key feature of this work is recursive feasibility. Recursive feasibility is one of the well-known difficulties of MPC; see [13], [14] for the feasibility issue in MPC. The issue is normally resolved by introducing a terminal constraint or cost [14], [15]. In this work, we adopt the Phase I-Phase II method [16], which allows us to explicitly construct a forward invariant set, and hence recursive feasibility is guaranteed without any additional constraint or cost.

Our approach is to utilize numerical optimization methods. First, we use the Method of Outer Approximations (MOA) [16] to separate the continuous min-max problem into a sequence of finite min-max problems and
max problems that converges to the initial problem. Next, we apply the Phase I-Phase II method to the separated problems, so that the solutions of the separated problems guarantee the existence of a forward invariant set. The sequence of separated problems is also constructed so that the problem can be transformed into the discrete Bolza form. We then present properties of the values of the problem based on the principle of optimality. Finally, an analytical expression of the suboptimality, leveraging relaxed dynamic programming [10], [11], is presented.

2 Problem statement

At a given sample time t, consider two players, X and Y, whose nonlinear discrete-time dynamics are given as

    x_{t+k+1;t} = f_x(x_{t+k;t}, u_{t+k;t}),  y_{t+k+1;t} = f_y(y_{t+k;t}, v_{t+k;t})    (1)

where u_{t+k;t} ∈ R^{n_u}, v_{t+k;t} ∈ R^{n_v}, x_{t+k;t} ∈ R^{p_x}, y_{t+k;t} ∈ R^{p_y}, and x_{t+k;t} ∈ χ, with given x_{t;t} = x_0 and y_{t;t} = y_0. Let u_t = {u_{t+0;t}, u_{t+1;t}, …, u_{t+H−1;t}} be a control sequence of length H, and let x_t = {x_{t+1;t}, x_{t+2;t}, …, x_{t+H;t}} denote the prediction of the X player trajectory obtained by iterating the dynamics (1). The Y player trajectory is defined similarly. We are interested in solving the following problem in the frame of MPC, from the X player's point of view:

    min_{u_t∈U_t} max_{v_t∈Y(u_t)} Σ_{k∈H} c(x_{k+t;t}, y_{k+t;t})
    s.t. f^l(u_{k+t;t}, x_{k+t;t}) ≤ 0,  l ∈ {1, 2, …, q_x},  k ∈ H := {0, 1, …, H−1},
         x_{t+k+1;t} = f_x(x_{t+k;t}, u_{t+k;t}),  y_{t+k+1;t} = f_y(y_{t+k;t}, v_{t+k;t})    (2)

where U_t ⊂ R^{n_u H}, c : R^{p_x} × R^{p_y} → R, and

    Y(u_t) := {v_t ∈ R^{n_v H} | f^l(u_{k+t;t}, x_{k+t;t}, v_{k+t;t}, y_{k+t;t}) ≤ 0, v_{t+k;t} ∈ R^{n_v}, k ∈ H},  l ∈ {1, 2, …, q_y}.

Since we are interested in an arbitrary sample time t, for given x_0
and y_0, we rewrite (2) in a compact form by omitting t:

    min_{u∈U} max_{v∈Y(u)} Σ_{k∈H} c(x_k, y_k)
    s.t. f^l(u_k, x_k) ≤ 0,  l ∈ {1, 2, …, q_x},  k ∈ H,
         x_{k+1} = f_x(x_k, u_k),  y_{k+1} = f_y(y_k, v_k)    (3)

The problem is to find an analytical expression for the suboptimality of (3).

3 Method of Outer Approximations

Problem (3) is a generalized min-max problem, and it is solved by separating it into a finite min-max problem and a maximization problem.

Algorithm 1: Method of Outer Approximations.

Step 0: Set P_0 = {ŷ^0}, I_0 = {0}, i = 0.

Step 1: The X player solves the finite min-max problem:

    û^i = arg min_{u∈U} max_{j∈I_i} Σ_{k∈H} c^j(x_k)
    s.t. f^l(u_k, x_k) ≤ 0,  l ∈ {1, 2, …, q},
         x_{k+1} = f_x(x_k, u_k),  y_{k+1} = f_y(y_k, v_k),
    where c^j(x_k) := c(x_k, y^j_k)    (4)

The trajectory x̂^i that corresponds to û^i is obtained through (1).

Step 2: The X player assumes that the Y player solves the following problem:

    v̂^i = arg max_{v∈Y(û^i)} Σ_{k∈H} c(x̂^i_k, y_k)
    s.t. f^l(u_k, v_k, x_k, y_k) ≤ 0,  l ∈ {1, 2, …, q},
         x_{k+1} = f_x(x_k, u_k),  y_{k+1} = f_y(y_k, v_k).    (5)

The trajectory ŷ^{i+1} that corresponds to v̂^i is obtained through (1).

Step 3: Update P_{i+1} = P_i ∪ {ŷ^{i+1}}, I_{i+1} = I_i ∪ {i + 1}, set i = i + 1, and go to Step 1.
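The structure of Algorithm 1 can be illustrated on a toy scalar game. Everything below (the cost c(u, y) = (u − y)², the grids, the sets) is an invented example, not the paper's problem; it only shows the alternation of Steps 1-3:

```python
def moa(c, u_grid, y_grid, y0, iters=20):
    """Schematic MOA: alternate a finite min-max over accumulated Y scenarios
    (Step 1) with a worst-case Y response (Step 2), enlarging the scenario
    set P each round (Step 3)."""
    P = [y0]  # P_0 = {y^0}
    for _ in range(iters):
        # Step 1: finite min-max over the scenarios accumulated so far
        u_hat = min(u_grid, key=lambda u: max(c(u, y) for y in P))
        # Step 2: worst-case response of the Y player to u_hat
        y_next = max(y_grid, key=lambda y: c(u_hat, y))
        # Step 3: enlarge the scenario set and repeat
        P.append(y_next)
    return u_hat, P

c = lambda u, y: (u - y) ** 2        # X minimizes, Y maximizes
u_grid = [i / 100 for i in range(-100, 101)]
y_grid = [-1.0, 1.0]                 # for this c the worst case is a boundary point
u_star, P = moa(c, u_grid, y_grid, y0=0.7)
```

After a few rounds the scenario set contains both extreme responses and the X player settles on the minimax point u = 0.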
Figure 1: An example trajectory generated by MOA at sample time t, when the horizon length is H; x̂ is obtained as lim_{i→∞} x̂^i = x̂.

To analyze the convergence of Algorithm 1, let us translate the max operation as follows:

    max_{k∈H} c(x̂^i_k, y_k) = max_{µ∈Σ_H} Σ_{k∈H} µ_k c(x̂^i_k, y_k),    (6)

where Σ_q = {µ = {µ^1, µ^2, …, µ^q} | µ^i ≥ 0, Σ_{i=1}^q µ^i = 1} is the unit simplex. Let v′ := (v, µ) and c′(x̂^i_k, y_k) := µ_k c(x̂^i_k, y_k); then we can rewrite the max operation in a compact form:

    v̂^i = arg max_{v′∈Y(û^i)} Σ_{k∈H} c′(x̂^i_k, y_k)
    s.t. f^l(u_k, v_k, x_k, y_k) ≤ 0,  l ∈ {1, 2, …, q}    (7)

Let us define φ(x) and φ_{P_i}(x) as max functions over Y and over P_i ⊂ Y, respectively:

    φ(x) := max_{v∈Y(u)} c′(x, y)    (8)

and

    φ_{P_i}(x) := max_{v∈P_i} c′(x, y)    (9)

Lemma 1 through Theorem 6 below are from [16], Chapters 2 and 3. We summarize the results for the completeness of Algorithm 1.

Lemma 1. Suppose that Algorithm 1 has constructed an infinite sequence {x̂^i}_{i=1}^∞ in solving (3). If {x̂^i}_{i=1}^∞ → x̂, then φ_{P_i}(x̂^i) → φ(x̂).

Proof.

    φ(x̂^i) ≥ φ_{P_i}(x̂^i) ≥ c′(x̂^i, ŷ^{i−1})    (10)
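Identity (6) rests on the fact that a linear form attains its maximum over the unit simplex at a vertex, so the simplex reformulation reproduces the plain maximum. A small numerical sketch (the coefficient values are illustrative only):

```python
import random

def linear_over_simplex(c, mu):
    """Value of the linearized objective sum_k mu_k * c_k at a simplex point mu."""
    return sum(m * ck for m, ck in zip(mu, c))

c = [0.3, 2.0, -1.0, 1.5]

# At a vertex e_k of the simplex the linear form picks out c_k, so its
# maximum over the vertices is exactly max(c) ...
vertices = [[1.0 if j == k else 0.0 for j in range(len(c))] for k in range(len(c))]
vertex_max = max(linear_over_simplex(c, mu) for mu in vertices)

# ... and no other simplex point can exceed it, since the form is a convex
# combination of the c_k; spot-check with random simplex points.
random.seed(0)
samples = []
for _ in range(1000):
    w = [random.random() for _ in c]
    s = sum(w)
    samples.append(linear_over_simplex(c, [x / s for x in w]))
```

This is why the maximization over k in (6) can be traded for a maximization over the weights µ ∈ Σ_H.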
Note that ŷ^{i−1} is the maximizer at the (i−1)-th iteration of Algorithm 1. Since φ(·) is a continuous function,

    φ(x̂^i) → φ(x̂)    (11)

Since c′(·) is a continuous function,

    c′(x̂^i, ŷ^{i−1}) − c′(x̂^{i−1}, ŷ^{i−1}) → 0 as i → ∞.    (12)

Because c′(x̂^{i−1}, ŷ^{i−1}) = φ(x̂^{i−1}),

    c′(x̂^i, ŷ^{i−1}) → φ(x̂^{i−1}) → φ(x̂) as i → ∞.    (13)

Therefore,

    φ_{P_i}(x̂^i) → φ(x̂)    (14)

Theorem 2. Suppose Algorithm 1 constructed {x̂^i}_{i=1}^∞. If x̂ is an accumulation point, then x̂ is a minimizer of (3).

Proof. From Lemma 1,

    φ_{P_i}(x̂^i) → φ(x̂)    (15)

as {x̂^i}_{i=1}^∞ → x̂. Let v̂ = min_{u∈U} φ(x). Now, suppose x̂ is not a minimizer of φ(x). Then there exists i ∈ [1, ∞] such that φ_{P_i}(x) > φ(x) for some x. This contradicts φ_{P_i}(x) ≤ φ(x). Therefore, x̂ is a minimizer of φ(x).

To solve the finite min-max problem (4) and the max problem (5) in Algorithm 1, we use the Phase I-Phase II method. We summarize the results in Definition 3 through Theorem 6; interested readers can find a detailed discussion in [16], [17], and [18].

Definition 3. The following is called the optimality function:

    θ^i := min_{µ∈Σ_q} { µ^0 Σ_{k=1}^p ν^k [φ(x^i) − c^k(x^i) + γψ(x^i)_+] }    (16)

Lemma 4. The search direction

    h(x^i) = −(1/δ) ( µ^0_x Σ_{k=1}^p ν^k_x ∇c^k(x^i) + Σ_{j=1}^q µ^j_x ∇f^j(x^i) ),    (17)
where (µ_x, ν_x) is a solution of (16), is the direction that minimizes the cost of (3) while satisfying the constraints.

Lemma 5. The step size associated with the search direction (17) is

    λ^i = max{β^k | F(x^i, x^i + β^k h^i) ≤ αβ^k θ^i},    (18)

where F(z, x) := max{φ(x) − φ(z) − γψ(z)_+, ψ(x) − ψ(z)_+}, ψ(x) := max_l f^l(x), and ψ(x)_+ := max{0, ψ(x)}.

Theorem 6. The algorithm solves problem (3), and hence the forward invariant set χ is established.

4 Transformation

Remark 7. We have established χ by using the Phase I-Phase II method. This implies that lim_{i→∞} ŷ^i = ŷ and lim_{i→∞} x̂^i = x̂.

Lemma 8. Suppose that the sequence x is given, ŷ^i and x̂^i are the maximizer and minimizer, respectively, and c(x_k, y_k) ≥ 0. Then the following holds:

    ‖c(x, ŷ^i)‖_1 max_{k∈H} c(x̂^i_k, ŷ^i_k) = max_{v∈Y(u)} ‖c(x, y)‖_1 c(x̂^i_k, y_k)    (19)

Proof. First, suppose x̂ is a maximizing sequence of both f(x) and g(x), and the maximum of the sequence is obtained at the k-th element. Then x̂ is also a maximizing sequence of f(x)g(x), because it satisfies

    (∂f(x)/∂x) g(x) + f(x) (∂g(x)/∂x) = 0

when x = x̂. Now, by definition, ŷ^i := arg max_{v∈Y(û^i)} Σ_k c(x̂^i_k, y_k) through f_y. We see that ŷ^i is a sequence such that c(x̂^i_k, ŷ^i_k) is nondecreasing, because any decrease is not beneficial for the maximization. Hence, the maximum element is obtained at the boundary. This implies that every subinterval from ŷ_1 to ŷ_H is also maximized. In turn, for given x, this implies that ŷ^i maximizes Σ_{k=1}^H c(x_k, y_k) = ‖c(x, y)‖_1 as well. Therefore, (19) holds.

Now let us discuss the transformation from min-max to Bolza form. In Step 2 of the Method of Outer Approximations (MOA) at some iteration i, ŷ^i is obtained. Since x̂^i is obtained in Step 1, the following holds:

    max_{k∈H} c(x̂^i_k, ŷ^i_k) = c(x̂^i, ŷ^i)    (20)
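The step-size rule (18) is an Armijo-type backtracking test: the largest power β^k for which the sufficient-decrease function F is dominated by αβ^kθ. A minimal sketch on an unconstrained scalar problem; the test function and all constants here are illustrative, not the paper's:

```python
def armijo_step(F, theta, alpha=0.5, beta=0.6, k_max=50):
    """Largest beta**k (k = 0, 1, ...) with F(beta**k) <= alpha * beta**k * theta,
    mirroring the structure of (18); returns 0.0 if no k up to k_max works."""
    for k in range(k_max + 1):
        lam = beta ** k
        if F(lam) <= alpha * lam * theta:
            return lam
    return 0.0

# Minimize f(x) = x**2 from x = 2 along the descent direction h = -f'(2) = -4;
# theta plays the role of the (negative) optimality function value.
f = lambda x: x * x
x0, h = 2.0, -4.0
theta = -(h * h)                         # = -16.0
F = lambda lam: f(x0 + lam * h) - f(x0)  # decrease achieved by step lam
lam = armijo_step(F, theta)
```

The loop rejects λ = 1 and λ = β (not enough decrease relative to αλθ) and accepts λ = β², which indeed reduces f.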
Assumption 9. c(x̂^i, ŷ^i) > 0.

Therefore,

    max_{k∈H} c(x̂^i_k, ŷ^i_k) / c(x̂^i, ŷ^i) = 1    (21)

Also, for any x = {x_1, …, x_H} and for the maximizer ŷ^i obtained from Step 2 of MOA,

    ‖c(x, ŷ^i)‖_1 = c(x_1, ŷ^i_1) + c(x_2, ŷ^i_2) + … + c(x_H, ŷ^i_H) = Σ_{k=1}^H c(x_k, ŷ^i_k)    (22)

Let l^i(x_k) := c(x_k, ŷ^i_k); then (22) becomes

    ‖c(x, ŷ^i)‖_1 = Σ_{k=1}^H l^i(x_k)    (23)

Note that the superscript i in l^i(x_k) indicates that the function implicitly depends on ŷ^i. Multiplying (21) into the LHS of (23) yields

    ‖c(x, ŷ^i)‖_1 max_{k∈H} c(x̂^i_k, ŷ^i_k) / c(x̂^i, ŷ^i) = Σ_{k=1}^H l^i(x_k)    (24)

Since

    max_{k∈H} c(x̂^i_k, ŷ^i_k) = max_{v∈Y(u)} c(x̂^i_k, y_k),    (25)

(24) becomes

    (1 / c(x̂^i, ŷ^i)) max_{v∈Y(u)} ‖c(x, y)‖_1 c(x̂^i_k, y_k) = Σ_{k=1}^H l^i(x_k)    (26)

Let c′(x̂^i_k, x, y) := ‖c(x, y)‖_1 c(x̂^i_k, y_k); taking min_{u∈U},

    (1 / c(x̂^i, ŷ^i)) min_{u∈U} max_{v∈Y(u)} c′(x̂^i_k, x, y) = min_{u∈U} Σ_{k=1}^H l^i(x_k)    (27)
Finally, let l̄^i(x_k) := c(x̂^i, ŷ^i) l^i(x_k). Then from (27),

    min_{u∈U} max_{v∈Y(u)} c′(x̂_k, x, y) = min_{u∈U} Σ_{k=1}^H l̄^i(x_k)    (28)

Remark 10. The RHS form in (28) is not an implementable form, because it requires advance knowledge of x̂ and ŷ. However, it is a favorable form for checking the suboptimality, since the suboptimality is defined through the value of the optimization problem, which implies that the sequences x̂ and ŷ have been obtained. Note that this is the i-th iteration of the MOA; therefore, a sequence of suboptimality values is generated as i → ∞. The generated sequence of suboptimality values corresponds to the case when the objective function is c′(x̂_k, x, y). As i → ∞, the relation between c′(x̂_k, x, y) and the original objective function is given by Proposition 11.

Proposition 11. Suppose i → ∞, so that x̂^i → x̂ and ŷ^i → ŷ. Then c′(x̂_k, x, y) = ‖c(x̂, ŷ)‖_1 c(x̂_k, ŷ_k).

Proof. Because c′(·,·,·) is a continuous function, as x̂^i → x̂ and ŷ^i → ŷ, c′(x̂_k, x, y) → c′(x̂_k, x̂, ŷ). Therefore, c′(x̂_k, x, y) = ‖c(x̂, ŷ)‖_1 c(x̂_k, ŷ_k).

From now on we omit the superscript i, it being understood that we are at the i-th iteration of the outer approximations. First, let us define the value of the infinite horizon problem

    V_∞(x) := Σ_{n=0}^∞ l(x_n, µ_∞(x_n)),    (29)

where µ_∞(x_n) is the infinite horizon control solution. The running cost required in V_∞(x) is defined as

    l(x_k) := c(x̂, ŷ) c(x_k, ŷ^i_k)
            = { c(x̂_t, ŷ_t) c(x_{k+t;t}, ŷ^i_{k+t;t}),  if k ≤ H
              { c(x̂_t, ŷ_t) c(x_{k+t;t}, ŷ^i_{H+t;t}),  else    (30)

where ŷ_t := {ŷ_{t+1;t}, ŷ_{t+2;t}, …, ŷ_{t+H;t}, ŷ_{t+H;t}, …}, i.e., the H-th entry is infinitely repeated.

Remark 12. The infinite horizon problem (29) describes the situation in which the Y player uses a finite horizon control law but the X player utilizes an infinite horizon control law. Note that the Y player control sequence and trajectory are extended by duplicating the last element in order to well-define the cost function; clearly, this preserves the value of the maximization. The value of the problem (28) is V_H(x). The point
Figure 2: Illustrative example: different predicted trajectories (pursuer, obstacle, evader).

x in V_∞(x) is the starting point of the entire problem, and the point x in V_H(x) is the starting point of the current horizon. Fig. 2 illustrates the different predicted trajectories resulting from the infinite horizon and finite horizon problems, as appears in [19]. Consider the pursuit-evasion game between a single pursuer and an evader. The goal of the pursuer is to minimize the distance between the two players while avoiding the obstacle. The goal of the evader is to maximize the distance between them, while avoiding the obstacle as well. It is intuitively clear that the evader trajectory is in the opposite direction from the pursuer. Knowing this, the pursuer trajectory is toward the evader. Suppose there is an obstacle between the two players. If the pursuer solves a finite horizon problem that is not long enough to activate the constraint associated with the obstacle, the resulting trajectory is still a straight line toward the evader. However, if the pursuer solves the infinite horizon problem, it takes the presence of the obstacle into consideration, and hence generates a better trajectory.

Lemma 13. For two nonempty sets U and U′, let U = {u = {u_1, u_2, …, u_{|U|}}} and U′ = {u = {u_1, u_2, …, u_{|U′|}}} be such that U ⊂ U′ and hence |U| ≤ |U′|. Then V_{U′}(x) ≤ V_U(x), and hence V_∞(x) ≤ V_U(x).

Proof. Let ψ̄(u) := max_{v∈Y(u)} Σ_{k∈H} c(x_k, y_k). From U ⊂ U′, we immediately see that min_{u∈U′} ψ̄(u) ≤ min_{u∈U} ψ̄(u).

Remark 14. In the non-min-max problem, for two nonempty sets such that H ≤ M, V_M(x) ≥ V_H(x) holds, and hence V_∞(x) ≥ V_H(x). This is because the cost function is a sum of nonnegative running costs. Let l(x_n, µ_H(x)) be the running cost at a point x_n, with control µ_H(x).

Definition 15.

    V_{µ_H(x)}(x_n) := Σ_{k=n}^∞ l(x_k, µ_H(x_k)),    (31)
where the feedback law µ_H(x) is the minimizing sequence of the problem (28):

    µ_H(x) = arg min_{u∈U} Σ_{k=1}^H l̄^i(x_k)
           = arg min_{u∈U} max_{v∈Y(u)} c′(x̂_k, x, y)    (32)

From Bellman's principle, V_{H+1}(x_n) = min_u {V_H(x_{n+1}) + l(x_n, u)}. It is known that if u = µ_H(x),

    V_{H+1}(x_n) = V_H(x_{n+1}) + l(x_n, µ_H(x))    (33)

From the property of min-max MPC, V_{H+1}(x_n) ≤ V_H(x_n). Therefore,

    min_u {V_H(x_{n+1}) + l(x_n, u)} ≤ V_H(x_n)    (34)

and

    V_H(x_{n+1}) + l(x_n, µ_H(x)) ≤ V_H(x_n)    (35)

Definition 16. The suboptimality parameter of the min-max MPC problem is α ≥ 1 such that, for any given x and non-maximizer y,

    V_H(x_n) ≤ αV_∞(x_n) ≤ αV_{µ_H(x_n)}(x_n)    (36)

The parameter α quantifies the effectiveness of the finite horizon control law with respect to the case when it is applied to the infinite horizon problem. In both cases, the Y player uses a finite horizon control law.

Remark 17. Inequality (36) is trivially satisfied by a large α. Suppose α = 1; then, due to the property of the value of the min-max problem (Lemma 13), equality must hold:

    V_H(x_n) = V_∞(x_n), and V_H(x_n) = V_{µ_H(x)}(x_n)    (37)

This implies that V_∞(x_n) = V_{µ_H(x)}(x_n), and hence µ_H(x) is the optimal control.

Proposition 18. Suppose some α ≥ 1 satisfies

    αl(x_n, µ_H(x_n)) ≥ V_H(x_n) + V_H(x_{n+1})    (38)
Then it also satisfies V_H(x_n) ≤ αV_∞(x_n) ≤ αV_{µ_H(x_n)}(x_n).

Proof. From the hypothesis, αl(x_n, µ_H(x_n)) ≥ V_H(x_n) + V_H(x_{n+1}); summing both sides from n to infinity yields

    αV_{µ_H(x_n)}(x_n) ≥ Σ_{k=n}^∞ [V_H(x_k) + V_H(x_{k+1})] ≥ V_H(x_n).    (39)

Therefore, αV_{µ_H(x_n)}(x_n) ≥ V_H(x_n). Clearly αV_∞(x_n) ≤ αV_{µ_H(x_n)}(x_n), and for large enough α ≥ 1, αV_∞(x_n) ≥ V_H(x_n). Hence, the desired result is obtained.

Our definition of the suboptimality is similar to the one in [10], [11], and [12]. However, the fundamental property of the value of the problem from Lemma 13 differentiates the procedure.

5 Sequence of suboptimality

Since we are solving (2) in a recursive scheme (outer approximation), the parameter α associated with the suboptimality creates a sequence.

Step 1. Perform the i-th iteration of MOA.
Step 2. Check the suboptimality using (39).
Step 3. Set i = i + 1 and go to Step 1.
Step 4. As MOA terminates, compute the suboptimality of the original objective function.

6 Suboptimality

Lemma 19. Suppose

    V_{H−1}(x_{n+1}) + V_H(x_{n+1}) ≤ (α − 1)l(x_n, µ_H(x_n))    (40)

holds for some α ≥ 1. Then Proposition 18 also holds.

Proof. Adding l(x_n, µ_H(x_n)) to both sides of (40) yields

    V_{H−1}(x_{n+1}) + V_H(x_{n+1}) + l(x_n, µ_H(x_n)) ≤ αl(x_n, µ_H(x_n))    (41)

Since V_H(x_n) = V_{H−1}(x_{n+1}) + l(x_n, µ_H(x_n)),

    V_H(x_n) + V_H(x_{n+1}) ≤ αl(x_n, µ_H(x_n))    (42)
This coincides with (38).

Assumption 20. There exist η_k ≥ 1 and γ ≥ 1 such that

    V_{k−1}(x_n) ≤ η_k V_k(x_n),  V_k(x_n) ≤ γl(x_n) for k ≤ H,  V_{H+1}(x_n) ≤ l(x_n)    (43)

Theorem 21. The value of α that satisfies (40) is α = γ(γ+1)^{H+1} / ((γ+1)^{H+1} − γ^{H+1}).

Proof. Note that the assumption V_k(x_n) ≤ γl(x_n) implies V_{k−1}(x_{n+1}) ≤ γl(x_n), because V_k(x_n) ≥ V_{k−1}(x_{n+1}). Then, for any ε ≥ 0,

    V_k(x_n) = V_{k−1}(x_{n+1}) + l(x_n)
             ≤ V_{k−1}(x_{n+1}) + l(x_n) + ε(γl(x_n) − V_{k−1}(x_{n+1}))
             = (1 − ε)V_{k−1}(x_{n+1}) + (1 + εγ)l(x_n)
             ≤ η_k(1 − ε)V_k(x_{n+1}) + (1 + εγ)l(x_n)    (44)

Set ε = (η_k − 1)/(γ + η_k); then (44) becomes

    V_k(x_n) ≤ (η_k(γ + 1)/(γ + η_k)) (V_k(x_{n+1}) + l(x_n)) = (η_k(γ + 1)/(γ + η_k)) V_{k+1}(x_n)    (45)

Suppose

    η_k = (γ + 1)^k / ((γ + 1)^k − γ^k)    (46)

Then it satisfies

    η_k(γ + 1)/(γ + η_k) = η_{k+1}    (47)

From (45) and (47), V_H(x_n) ≤ η_{H+1}V_{H+1}(x_n); adding V_H(x_{n+1}) to both sides,

    V_H(x_n) + V_H(x_{n+1}) ≤ η_{H+1}V_{H+1}(x_n) + V_H(x_{n+1})    (48)

Since V_H(x_{n+1}) = V_{H+1}(x_n) − l(x_n),

    V_H(x_n) + V_H(x_{n+1}) ≤ η_{H+1}V_{H+1}(x_n) + V_{H+1}(x_n) − l(x_n).    (49)
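The bound of Theorem 21 is easy to check numerically. The sketch below evaluates α(γ, H) from the closed-form expression and illustrates that it decreases toward γ as the horizon grows (and toward 1 when γ = 1, as used in the Conclusion):

```python
def suboptimality_alpha(gamma, H):
    """alpha = gamma*(gamma+1)**(H+1) / ((gamma+1)**(H+1) - gamma**(H+1)),
    the expression of Theorem 21."""
    a = (gamma + 1.0) ** (H + 1)
    b = gamma ** (H + 1)
    return gamma * a / (a - b)
```

Since (γ/(γ+1))^{H+1} → 0, α(γ, H) → γ monotonically from above as H → ∞; for γ = 1 and H = 1 it evaluates to 4/3.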
Since V_{H+1}(x_n) − l(x_n) ≤ 0 by (43), (49) still holds without this term. Since V_{H+1}(x_n) ≤ V_H(x_n) and V_H(x_n) ≤ γl(x_n),

    V_H(x_n) + V_H(x_{n+1}) ≤ γη_{H+1} l(x_n)    (50)

Since V_H(x_n) = V_{H−1}(x_{n+1}) + l(x_n), this yields

    V_{H−1}(x_{n+1}) + V_H(x_{n+1}) ≤ [γη_{H+1} − 1]l(x_n)    (51)

From (40) and (51),

    (α − 1)l(x_n) = [γη_{H+1} − 1]l(x_n)    (52)

Therefore,

    α = γ (γ + 1)^{H+1} / ((γ + 1)^{H+1} − γ^{H+1})    (53)

Now we are interested in finding γ without evaluating V_{k−1}(x_{n+1}) ≤ γl(x_n). Consider the following inequality:

    min_{u∈U} ‖c(x, ŷ^{i−1})‖_1 ≤ V_{H−1}(x_{n+1}) ≤ γl(x_n).    (54)

Note that ŷ^{i−1} is the maximizer obtained from the previous iteration i − 1, and hence, in general, it is not a maximizer in the i-th iteration of MOA. Let x̂^i be the trajectory resulting from min_{u∈U} ‖c(x, ŷ^{i−1})‖_1. Then

    min_{u∈U} ‖c(x, ŷ^{i−1})‖_1 = min{c(x̂^i_1, ŷ^{i−1}_1), c(x̂^i_2, ŷ^{i−1}_2), …, c(x̂^i_H, ŷ^{i−1}_H)} = min{c^i_1, c^i_2, …, c^i_H},    (55)

where c^i_k := c(x̂^i_k, ŷ^{i−1}_k). Note that the following holds for any two scalars a and b:

    min{a, b} = (a + b)/2 − |a − b|/2.    (56)

Let us recursively define the minimums as follows:

    d_k = (d_{k−1} + c_{k+1})/2 − |d_{k−1} − c_{k+1}|/2    (57)
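The recursion built from identity (56) computes the minimum of the whole sequence using only arithmetic and absolute values. A small sketch (the cost values below are made up, standing in for the c_k of (55)):

```python
def pair_min(a, b):
    # identity (56): min{a, b} = (a + b)/2 - |a - b|/2
    return (a + b) / 2.0 - abs(a - b) / 2.0

def recursive_min(c):
    """d_1 = min{c_1, c_2}, then d_k = min{d_{k-1}, c_{k+1}} as in (57);
    the final d_{H-1} is the minimum of the whole sequence."""
    d = pair_min(c[0], c[1])
    for ck in c[2:]:
        d = pair_min(d, ck)
    return d

costs = [5.0, 2.0, 9.0, 4.0]  # illustrative stand-ins for c_1, ..., c_H
```

For these values the recursion returns 2.0, agreeing with the plain minimum; expressing the min this way keeps (55) in a smooth algebraic form apart from the absolute value.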
with

    d_1 = (c_1 + c_2)/2 − |c_1 − c_2|/2    (58)

Then

    min_{u∈U} ‖c(x, ŷ^{i−1})‖_1 = d_{H−1}    (59)

and hence, from (54), an approximation of γ that does not require the knowledge of x̂ and ŷ is found:

    γ = d_{H−1} / l(x_H).    (60)

Remark 22. The condition γ = 1 means V_k(x_n) ≤ l(x_n) for k ≤ H, and its approximation is d_{H−1} = l(x_H). The latter indicates that the minimum is achieved at the last entry of the sequence.

7 Conclusion

From (53), if γ = 1, then for every ɛ > 0 there exists H such that 1 + ɛ > α. This implies, by Remark 17, that optimality is recovered in the frame of MOA as H → ∞.

References

[1] R. Bellman, "Dynamic programming and Lagrange multipliers," Proceedings of the National Academy of Sciences of the United States of America, vol. 42, no. 10, p. 767.

[2] D. Q. Mayne, "Model predictive control: Recent developments and future promise," Automatica.

[3] B. O'Donoghue, Y. Wang, and S. Boyd, "Min-max approximate dynamic programming," in Computer-Aided Control System Design (CACSD), 2011 IEEE International Symposium on, IEEE.

[4] S. Lee, E. Polak, and J. Walrand, "A receding horizon control law for harbor defense," in Proc. 51st Annual Allerton Conference on Communication, Control, and Computing, October.
[5] S. Lee, E. Polak, and J. Walrand, "On the use of min-max algorithms in receding horizon control laws for harbor defense," Engineering Optimization 2014, p. 211.

[6] P. K. Simpson, "Fuzzy min-max neural networks. I. Classification," IEEE Transactions on Neural Networks, vol. 3, no. 5.

[7] P. K. Simpson, "Fuzzy min-max neural networks. Part 2: Clustering," IEEE Transactions on Fuzzy Systems, vol. 1, no. 1, p. 32.

[8] J.-H. Wang and L.-D. Lin, "Improved median filter using minmax algorithm for image processing," Electronics Letters, vol. 33, no. 16.

[9] J. Morimoto, G. Zeglin, and C. Atkeson, "Minimax differential dynamic programming: application to a biped walking robot," in Intelligent Robots and Systems (IROS 2003), Proceedings IEEE/RSJ International Conference on, vol. 2, Oct.

[10] L. Grüne and A. Rantzer, "On the infinite horizon performance of receding horizon controllers," IEEE Transactions on Automatic Control, vol. 53, no. 9.

[11] L. Grüne, "Analysis and design of unconstrained nonlinear MPC schemes for finite and infinite dimensional systems," SIAM Journal on Control and Optimization, vol. 48, no. 2.

[12] M. Reble and F. Allgöwer, "Unconstrained model predictive control and suboptimality estimates for nonlinear continuous-time systems," Automatica, vol. 48, no. 8.

[13] J. Löfberg, "Oops! I cannot do it again: Testing for recursive feasibility in MPC," Automatica, vol. 48, no. 3.

[14] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. Scokaert, "Constrained model predictive control: Stability and optimality," Automatica, vol. 36, no. 6.

[15] L. Grüne and J. Pannek, Nonlinear Model Predictive Control. Springer.

[16] E. Polak, Optimization: Algorithms and Consistent Approximations. Springer-Verlag New York, Inc.
[17] E. Polak and L. He, "Unified steerable phase I-phase II method of feasible directions for semi-infinite optimization," Journal of Optimization Theory and Applications, vol. 69, no. 1.

[18] E. Polak, R. Trahan, and D. Q. Mayne, "Combined phase I-phase II methods of feasible directions," Mathematical Programming, vol. 17, no. 1.

[19] R. Isaacs, Differential Games.
More informationValue Function and Optimal Trajectories for some State Constrained Control Problems
Value Function and Optimal Trajectories for some State Constrained Control Problems Hasnaa Zidani ENSTA ParisTech, Univ. of Paris-saclay "Control of State Constrained Dynamical Systems" Università di Padova,
More informationECE7850 Lecture 8. Nonlinear Model Predictive Control: Theoretical Aspects
ECE7850 Lecture 8 Nonlinear Model Predictive Control: Theoretical Aspects Model Predictive control (MPC) is a powerful control design method for constrained dynamical systems. The basic principles and
More informationLinear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard June 15, 2013
Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard June 15, 2013 Abstract As in optimal control theory, linear quadratic (LQ) differential games (DG) can be solved, even in high dimension,
More informationNumerical approximation for optimal control problems via MPC and HJB. Giulia Fabrini
Numerical approximation for optimal control problems via MPC and HJB Giulia Fabrini Konstanz Women In Mathematics 15 May, 2018 G. Fabrini (University of Konstanz) Numerical approximation for OCP 1 / 33
More informationNonlinear L 2 -gain analysis via a cascade
9th IEEE Conference on Decision and Control December -7, Hilton Atlanta Hotel, Atlanta, GA, USA Nonlinear L -gain analysis via a cascade Peter M Dower, Huan Zhang and Christopher M Kellett Abstract A nonlinear
More informationRobust Adaptive MPC for Systems with Exogeneous Disturbances
Robust Adaptive MPC for Systems with Exogeneous Disturbances V. Adetola M. Guay Department of Chemical Engineering, Queen s University, Kingston, Ontario, Canada (e-mail: martin.guay@chee.queensu.ca) Abstract:
More informationSampled-Data Model Predictive Control for Nonlinear Time-Varying Systems: Stability and Robustness
Sampled-Data Model Predictive Control for Nonlinear Time-Varying Systems: Stability and Robustness Fernando A. C. C. Fontes 1, Lalo Magni 2, and Éva Gyurkovics3 1 Officina Mathematica, Departamento de
More informationConstrained State Estimation Using the Unscented Kalman Filter
16th Mediterranean Conference on Control and Automation Congress Centre, Ajaccio, France June 25-27, 28 Constrained State Estimation Using the Unscented Kalman Filter Rambabu Kandepu, Lars Imsland and
More informationCourse on Model Predictive Control Part II Linear MPC design
Course on Model Predictive Control Part II Linear MPC design Gabriele Pannocchia Department of Chemical Engineering, University of Pisa, Italy Email: g.pannocchia@diccism.unipi.it Facoltà di Ingegneria,
More informationPredictive control of hybrid systems: Input-to-state stability results for sub-optimal solutions
Predictive control of hybrid systems: Input-to-state stability results for sub-optimal solutions M. Lazar, W.P.M.H. Heemels a a Eindhoven Univ. of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
More informationLINEAR-CONVEX CONTROL AND DUALITY
1 LINEAR-CONVEX CONTROL AND DUALITY R.T. Rockafellar Department of Mathematics, University of Washington Seattle, WA 98195-4350, USA Email: rtr@math.washington.edu R. Goebel 3518 NE 42 St., Seattle, WA
More informationarxiv: v2 [math.oc] 15 Jan 2014
Stability and Performance Guarantees for MPC Algorithms without Terminal Constraints 1 Jürgen Pannek 2 and Karl Worthmann University of the Federal Armed Forces, 85577 Munich, Germany, juergen.pannek@googlemail.com
More informationOptimal Control. McGill COMP 765 Oct 3 rd, 2017
Optimal Control McGill COMP 765 Oct 3 rd, 2017 Classical Control Quiz Question 1: Can a PID controller be used to balance an inverted pendulum: A) That starts upright? B) That must be swung-up (perhaps
More informationarxiv: v2 [math.oc] 29 Aug 2012
Ensuring Stability in Networked Systems with Nonlinear MPC for Continuous Time Systems Lars Grüne 1, Jürgen Pannek 2, and Karl Worthmann 1 arxiv:123.6785v2 [math.oc] 29 Aug 212 Abstract For networked systems,
More informationStability Analysis of Optimal Adaptive Control under Value Iteration using a Stabilizing Initial Policy
Stability Analysis of Optimal Adaptive Control under Value Iteration using a Stabilizing Initial Policy Ali Heydari, Member, IEEE Abstract Adaptive optimal control using value iteration initiated from
More informationApproximate dynamic programming for stochastic reachability
Approximate dynamic programming for stochastic reachability Nikolaos Kariotoglou, Sean Summers, Tyler Summers, Maryam Kamgarpour and John Lygeros Abstract In this work we illustrate how approximate dynamic
More informationMPC for tracking periodic reference signals
MPC for tracking periodic reference signals D. Limon T. Alamo D.Muñoz de la Peña M.N. Zeilinger C.N. Jones M. Pereira Departamento de Ingeniería de Sistemas y Automática, Escuela Superior de Ingenieros,
More informationOptimal Control of Nonlinear Systems: A Predictive Control Approach
Optimal Control of Nonlinear Systems: A Predictive Control Approach Wen-Hua Chen a Donald J Ballance b Peter J Gawthrop b a Department of Aeronautical and Automotive Engineering, Loughborough University,
More informationNonlinear Model Predictive Control for Periodic Systems using LMIs
Marcus Reble Christoph Böhm Fran Allgöwer Nonlinear Model Predictive Control for Periodic Systems using LMIs Stuttgart, June 29 Institute for Systems Theory and Automatic Control (IST), University of Stuttgart,
More informationMinimum-Phase Property of Nonlinear Systems in Terms of a Dissipation Inequality
Minimum-Phase Property of Nonlinear Systems in Terms of a Dissipation Inequality Christian Ebenbauer Institute for Systems Theory in Engineering, University of Stuttgart, 70550 Stuttgart, Germany ce@ist.uni-stuttgart.de
More informationA convergence result for an Outer Approximation Scheme
A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento
More informationLinear Programming Methods
Chapter 11 Linear Programming Methods 1 In this chapter we consider the linear programming approach to dynamic programming. First, Bellman s equation can be reformulated as a linear program whose solution
More informationMin-Max Output Integral Sliding Mode Control for Multiplant Linear Uncertain Systems
Proceedings of the 27 American Control Conference Marriott Marquis Hotel at Times Square New York City, USA, July -3, 27 FrC.4 Min-Max Output Integral Sliding Mode Control for Multiplant Linear Uncertain
More informationPassivity-based Stabilization of Non-Compact Sets
Passivity-based Stabilization of Non-Compact Sets Mohamed I. El-Hawwary and Manfredi Maggiore Abstract We investigate the stabilization of closed sets for passive nonlinear systems which are contained
More informationPontryagin s maximum principle
Pontryagin s maximum principle Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2012 Emo Todorov (UW) AMATH/CSE 579, Winter 2012 Lecture 5 1 / 9 Pontryagin
More informationEE291E/ME 290Q Lecture Notes 8. Optimal Control and Dynamic Games
EE291E/ME 290Q Lecture Notes 8. Optimal Control and Dynamic Games S. S. Sastry REVISED March 29th There exist two main approaches to optimal control and dynamic games: 1. via the Calculus of Variations
More informationDifferential Games II. Marc Quincampoix Université de Bretagne Occidentale ( Brest-France) SADCO, London, September 2011
Differential Games II Marc Quincampoix Université de Bretagne Occidentale ( Brest-France) SADCO, London, September 2011 Contents 1. I Introduction: A Pursuit Game and Isaacs Theory 2. II Strategies 3.
More informationA Receding Horizon Control Law for Harbor Defense
A Receding Horizon Control Law for Harbor Defense Seungho Lee 1, Elijah Polak 2, Jean Walrand 2 1 Department of Mechanical Science and Engineering University of Illinois, Urbana, Illinois - 61801 2 Department
More informationminimize x subject to (x 2)(x 4) u,
Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for
More informationPenalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques
More informationSecond Order Optimality Conditions for Constrained Nonlinear Programming
Second Order Optimality Conditions for Constrained Nonlinear Programming Lecture 10, Continuous Optimisation Oxford University Computing Laboratory, HT 2006 Notes by Dr Raphael Hauser (hauser@comlab.ox.ac.uk)
More informationPart 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL)
Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization Nick Gould (RAL) x IR n f(x) subject to c(x) = Part C course on continuoue optimization CONSTRAINED MINIMIZATION x
More informationA Model Predictive Control Framework for Hybrid Dynamical Systems
A Model Predictive Control Framework for Hybrid Dynamical Systems Berk Altın Pegah Ojaghi Ricardo G. Sanfelice Department of Computer Engineering, University of California, Santa Cruz, CA 9564, USA (e-mail:
More informationPostface to Model Predictive Control: Theory and Design
Postface to Model Predictive Control: Theory and Design J. B. Rawlings and D. Q. Mayne August 19, 2012 The goal of this postface is to point out and comment upon recent MPC papers and issues pertaining
More informationSufficient Conditions for the Existence of Resolution Complete Planning Algorithms
Sufficient Conditions for the Existence of Resolution Complete Planning Algorithms Dmitry Yershov and Steve LaValle Computer Science niversity of Illinois at rbana-champaign WAFR 2010 December 15, 2010
More informationNear-Potential Games: Geometry and Dynamics
Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo January 29, 2012 Abstract Potential games are a special class of games for which many adaptive user dynamics
More informationA Systematic Approach to Extremum Seeking Based on Parameter Estimation
49th IEEE Conference on Decision and Control December 15-17, 21 Hilton Atlanta Hotel, Atlanta, GA, USA A Systematic Approach to Extremum Seeking Based on Parameter Estimation Dragan Nešić, Alireza Mohammadi
More informationStability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games
Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,
More informationDecentralized and distributed control
Decentralized and distributed control Centralized control for constrained discrete-time systems M. Farina 1 G. Ferrari Trecate 2 1 Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB) Politecnico
More informationRecent Advances in State Constrained Optimal Control
Recent Advances in State Constrained Optimal Control Richard B. Vinter Imperial College London Control for Energy and Sustainability Seminar Cambridge University, 6th November 2009 Outline of the talk
More informationGLOBAL STABILIZATION OF THE INVERTED PENDULUM USING MODEL PREDICTIVE CONTROL. L. Magni, R. Scattolini Λ;1 K. J. Åström ΛΛ
Copyright 22 IFAC 15th Triennial World Congress, Barcelona, Spain GLOBAL STABILIZATION OF THE INVERTED PENDULUM USING MODEL PREDICTIVE CONTROL L. Magni, R. Scattolini Λ;1 K. J. Åström ΛΛ Λ Dipartimento
More informationTheory in Model Predictive Control :" Constraint Satisfaction and Stability!
Theory in Model Predictive Control :" Constraint Satisfaction and Stability Colin Jones, Melanie Zeilinger Automatic Control Laboratory, EPFL Example: Cessna Citation Aircraft Linearized continuous-time
More informationAn homotopy method for exact tracking of nonlinear nonminimum phase systems: the example of the spherical inverted pendulum
9 American Control Conference Hyatt Regency Riverfront, St. Louis, MO, USA June -, 9 FrA.5 An homotopy method for exact tracking of nonlinear nonminimum phase systems: the example of the spherical inverted
More informationOptimal Control Theory
Optimal Control Theory The theory Optimal control theory is a mature mathematical discipline which provides algorithms to solve various control problems The elaborate mathematical machinery behind optimal
More informationarxiv: v1 [cs.sy] 2 Oct 2018
Non-linear Model Predictive Control of Conically Shaped Liquid Storage Tanks arxiv:1810.01119v1 [cs.sy] 2 Oct 2018 5 10 Abstract Martin Klaučo, L uboš Čirka Slovak University of Technology in Bratislava,
More information(q 1)t. Control theory lends itself well such unification, as the structure and behavior of discrete control
My general research area is the study of differential and difference equations. Currently I am working in an emerging field in dynamical systems. I would describe my work as a cross between the theoretical
More informationAn asymptotic ratio characterization of input-to-state stability
1 An asymptotic ratio characterization of input-to-state stability Daniel Liberzon and Hyungbo Shim Abstract For continuous-time nonlinear systems with inputs, we introduce the notion of an asymptotic
More informationApproximation of Continuous-Time Infinite-Horizon Optimal Control Problems Arising in Model Predictive Control
26 IEEE 55th Conference on Decision and Control (CDC) ARIA Resort & Casino December 2-4, 26, Las Vegas, USA Approximation of Continuous-Time Infinite-Horizon Optimal Control Problems Arising in Model Predictive
More informationModel Predictive Control of Nonlinear Systems: Stability Region and Feasible Initial Control
International Journal of Automation and Computing 04(2), April 2007, 195-202 DOI: 10.1007/s11633-007-0195-0 Model Predictive Control of Nonlinear Systems: Stability Region and Feasible Initial Control
More informationLecture Note 7: Switching Stabilization via Control-Lyapunov Function
ECE7850: Hybrid Systems:Theory and Applications Lecture Note 7: Switching Stabilization via Control-Lyapunov Function Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio
More information4TE3/6TE3. Algorithms for. Continuous Optimization
4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca
More informationLecture 25: Subgradient Method and Bundle Methods April 24
IE 51: Convex Optimization Spring 017, UIUC Lecture 5: Subgradient Method and Bundle Methods April 4 Instructor: Niao He Scribe: Shuanglong Wang Courtesy warning: hese notes do not necessarily cover everything
More informationWritten Examination
Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes
More informationGiulio Betti, Marcello Farina and Riccardo Scattolini
1 Dipartimento di Elettronica e Informazione, Politecnico di Milano Rapporto Tecnico 2012.29 An MPC algorithm for offset-free tracking of constant reference signals Giulio Betti, Marcello Farina and Riccardo
More information1 The Observability Canonical Form
NONLINEAR OBSERVERS AND SEPARATION PRINCIPLE 1 The Observability Canonical Form In this Chapter we discuss the design of observers for nonlinear systems modelled by equations of the form ẋ = f(x, u) (1)
More informationEconomic model predictive control with self-tuning terminal weight
Economic model predictive control with self-tuning terminal weight Matthias A. Müller, David Angeli, and Frank Allgöwer Abstract In this paper, we propose an economic model predictive control (MPC) framework
More informationMath 273a: Optimization Subgradients of convex functions
Math 273a: Optimization Subgradients of convex functions Made by: Damek Davis Edited by Wotao Yin Department of Mathematics, UCLA Fall 2015 online discussions on piazza.com 1 / 42 Subgradients Assumptions
More informationSet Robust Control Invariance for Linear Discrete Time Systems
Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 2005 Seville, Spain, December 12-15, 2005 MoB09.5 Set Robust Control Invariance for Linear Discrete
More informationNONLINEAR RECEDING HORIZON CONTROL OF QUADRUPLE-TANK SYSTEM AND REAL-TIME IMPLEMENTATION. Received August 2011; revised December 2011
International Journal of Innovative Computing, Information and Control ICIC International c 2012 ISSN 1349-4198 Volume 8, Number 10(B), October 2012 pp. 7083 7093 NONLINEAR RECEDING HORIZON CONTROL OF
More informationAn Application of Pontryagin s Maximum Principle in a Linear Quadratic Differential Game
An Application of Pontryagin s Maximum Principle in a Linear Quadratic Differential Game Marzieh Khakestari (Corresponding author) Institute For Mathematical Research, Universiti Putra Malaysia, 43400
More informationOptimality Conditions for Nonsmooth Convex Optimization
Optimality Conditions for Nonsmooth Convex Optimization Sangkyun Lee Oct 22, 2014 Let us consider a convex function f : R n R, where R is the extended real field, R := R {, + }, which is proper (f never
More informationAnytime Planning for Decentralized Multi-Robot Active Information Gathering
Anytime Planning for Decentralized Multi-Robot Active Information Gathering Brent Schlotfeldt 1 Dinesh Thakur 1 Nikolay Atanasov 2 Vijay Kumar 1 George Pappas 1 1 GRASP Laboratory University of Pennsylvania
More informationProximal-like contraction methods for monotone variational inequalities in a unified framework
Proximal-like contraction methods for monotone variational inequalities in a unified framework Bingsheng He 1 Li-Zhi Liao 2 Xiang Wang Department of Mathematics, Nanjing University, Nanjing, 210093, China
More informationNumerical Optimization
Constrained Optimization - Algorithms Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Consider the problem: Barrier and Penalty Methods x X where X
More informationChapter 2 Event-Triggered Sampling
Chapter Event-Triggered Sampling In this chapter, some general ideas and basic results on event-triggered sampling are introduced. The process considered is described by a first-order stochastic differential
More information5. Solving the Bellman Equation
5. Solving the Bellman Equation In the next two lectures, we will look at several methods to solve Bellman s Equation (BE) for the stochastic shortest path problem: Value Iteration, Policy Iteration and
More informationReal Time Stochastic Control and Decision Making: From theory to algorithms and applications
Real Time Stochastic Control and Decision Making: From theory to algorithms and applications Evangelos A. Theodorou Autonomous Control and Decision Systems Lab Challenges in control Uncertainty Stochastic
More informationSmall Gain Theorems on Input-to-Output Stability
Small Gain Theorems on Input-to-Output Stability Zhong-Ping Jiang Yuan Wang. Dept. of Electrical & Computer Engineering Polytechnic University Brooklyn, NY 11201, U.S.A. zjiang@control.poly.edu Dept. of
More information