Linear Matrix Inequalities in Control


C. W. Scherer (a), S. Weiland (b): Linear Matrix Inequalities in Control.
(a) Mathematical Systems Theory, University of Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany, Carsten.scherer@mathematik.uni-stuttgart.de
(b) Department of Electrical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
Postprint Series Issue No., Stuttgart Research Centre for Simulation Technology (SRC SimTech), SimTech Cluster of Excellence, Pfaffenwaldring 7a, 70569 Stuttgart, publications@simtech.uni-stuttgart.de

Linear Matrix Inequalities in Control

Carsten Scherer (Mathematical Systems Theory, University of Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany) and Siep Weiland (Department of Electrical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands)

1 Introduction

Optimization problems and control problems are highly intertwined. If a control configuration has been decided upon, controller parameters or control input signals can be interpreted as decision variables of an optimization problem that reflects the desired specifications and constraints of the controlled system. In recent years, linear matrix inequalities (LMIs) have emerged as a powerful tool for approaching control problems that appear difficult, if not impossible, to solve in an analytic fashion. Although the history of linear matrix inequalities goes back to the forties, with a major emphasis on their role in control in the sixties through the work of Kalman, Yakubovich, Popov and Willems, only during the last decades have powerful numerical interior point techniques been developed to solve LMIs in a practical and efficient manner (Nesterov, Nemirovskii). Today, several commercial and non-commercial software packages are available that allow for simple coding of general LMI problems. For example, Yalmip [12] is a very flexible and non-commercial toolbox for defining and solving advanced optimization problems.

Boosted by the availability of fast LMI solvers, research in robust control theory has experienced a significant paradigm shift. Instead of arriving at an analytical solution of an optimal control problem and implementing such a solution in software so as to synthesize optimal controllers, today a substantial body of research is devoted to reformulating a control problem as the question of whether a specific linear matrix inequality is solvable or, alternatively, to optimizing functionals over linear matrix inequality constraints. It is the purpose of this chapter to highlight the main role and use of LMIs in solving a large variety of control and estimation problems.

Notation. R and C denote the fields of real and complex numbers. Sets of real and complex matrices of dimension m × n are denoted by R^{m×n} and C^{m×n}. A matrix A ∈ C^{m×n} is Hermitian if it is square (m = n) and if A = A*, where the star denotes the complex conjugate transpose. For real matrices this amounts to saying that A = A^T, in which case A is said to be symmetric. The vector spaces of all n × n Hermitian and symmetric matrices will be denoted by H^n and S^n, respectively, and we will omit the superscript n if the dimension is not relevant for the context. A Hermitian or symmetric matrix A is called negative definite, negative semi-definite, positive definite or positive semi-definite if x*Ax < 0, ≤ 0, > 0 or ≥ 0, respectively, for all non-zero complex vectors x ∈ C^n. We denote this by A ≺ 0, A ⪯ 0, A ≻ 0 and A ⪰ 0, and extend this notation to expressions A ≺ B, meaning that A − B ≺ 0, for any A, B ∈ H^n. A congruence transformation

of a square matrix M is a mapping M ↦ T*MT with some nonsingular T. The operator col(·) stacks its arguments in a column, as in col(a, b) = [a; b], where a, b are vectors or matrices with the same number of columns; throughout, [a; b] denotes blocks stacked in a column and [a, b] blocks placed next to each other in a row. L_2^n denotes the space of all signals x : [0, ∞) → R^n with finite energy ‖x‖_{L2} := (∫_0^∞ ‖x(t)‖² dt)^{1/2}, where ‖·‖ is the Euclidean vector norm. We refer to the appendix for a brief summary on notions of convex sets and convex functions.

2 Linear matrix inequalities and convexity

A Linear Matrix Inequality (LMI) is a constraint of the form

F(x) := F_0 + x_1 F_1 + · · · + x_n F_n ≺ 0   (1)

where F_0, F_1, ..., F_n are given real symmetric matrices and where x = col(x_1, ..., x_n) is a vector of unknown real scalar decision variables. The inequality F(x) ≺ 0 means that x should render the symmetric matrix F(x) negative definite, i.e., the maximum eigenvalue of F(x) should be negative. The LMI (1) gives rise to two types of questions:

(a) The LMI feasibility problem amounts to testing whether there exist real variables x_1, ..., x_n such that (1) holds.

(b) The LMI optimization problem amounts to minimizing a cost function c(x) = c_1 x_1 + · · · + c_n x_n over all x_1, ..., x_n that satisfy the constraint (1).

Classical linear programs easily fit into this formalism, and also quadratic programs and some instances of convex quadratically constrained quadratic programs can be reformulated in this setting. In fact, the LMI optimization problem is a natural generalization of a linear program in which inequalities are defined by the cone of positive definite matrices.

In most control applications, LMIs arise with matrix variables rather than vector variables. This means that we consider inequalities of the more general form F(X) ≺ 0, in which X is a matrix that belongs to an arbitrary finite dimensional vector space X of matrices and where F : X → S^m is an affine function. We recall that affine functions assume the form F(X) = F_0 + T(X), where F_0 is fixed and T is a linear map. Affine functions are, therefore, linear mappings plus some offset. For this reason, a linear matrix inequality would actually be better called an affine matrix inequality, but the world has decided to accept this erroneous nomenclature.

Example 1 A simple example of an LMI in the matrix valued unknown X = X^T is F(X) = A^T X + XA + Q ≺ 0, where A and Q = Q^T are given square real matrices. This is a special case of (1): expand X as X = Σ_{k=1}^n x_k X_k, where X_1, ..., X_n is a basis of the vector space X of symmetric matrices. Indeed, F is affine and F(X) = F(Σ_{k=1}^n x_k X_k) = Q + Σ_{k=1}^n x_k (A^T X_k + X_k A), which is of the form (1) with F_0 = Q and F_k = A^T X_k + X_k A.
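To make the feasibility question concrete, here is a minimal sketch of Example 1 coded as an LMI feasibility problem. It assumes the Python package cvxpy (the text itself only mentions Yalmip), and the matrices A and Q are invented data; strict inequalities are approximated by a small margin.

```python
import numpy as np
import cvxpy as cp

# Example 1 as an LMI feasibility problem: find X = X^T with
# A^T X + X A + Q negative definite.  A and Q are illustrative data only.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # a Hurwitz matrix
Q = np.eye(2)

X = cp.Variable((2, 2), symmetric=True)
eps = 1e-6   # margin used to emulate the strict inequality
constraints = [A.T @ X + X @ A + Q << -eps * np.eye(2)]

prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility test
prob.solve(solver=cp.SCS)
print(prob.status)
print(X.value)
```

An LMI optimization problem is obtained in the same way by replacing the constant objective with a linear cost in the decision variables.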

Of course, one is led to wonder what the practical interest in studying constraints of the special form (1) might be. There are a number of answers to this question. Firstly, (1) defines a convex constraint on the decision variable: the solution set S := {x | F(x) ≺ 0} of the LMI F(x) ≺ 0 is convex. We refer the reader to the appendix for a brief overview of notions and results on convex sets and convex functions. Although the convex constraint F(x) ≺ 0 on x may seem rather special, many convex sets can be represented in this way and, in fact, such sets have more attractive properties than general convex sets. Secondly, the solution set of every finite set of LMIs F_1(x) ≺ 0, ..., F_k(x) ≺ 0 can again be represented as one single LMI F(x) ≺ 0 by setting F(x) = diag(F_1(x), ..., F_k(x)). Hence, a system of multiple LMI constraints on a decision variable x is again an LMI constraint. Thirdly, and very importantly, the partitioned LMI

F(x) = [F_11(x), F_12(x); F_21(x), F_22(x)] ≺ 0

is equivalent to

F_11(x) ≺ 0,  S_22(x) := F_22(x) − F_21(x) F_11(x)^{-1} F_12(x) ≺ 0

and at the same time equivalent to

S_11(x) := F_11(x) − F_12(x) F_22(x)^{-1} F_21(x) ≺ 0,  F_22(x) ≺ 0.

Since F is affine, this equivalence means that also specific types of quadratic and rational inequalities can be reformulated as (1). The expressions S_11(x) and S_22(x) are called the Schur complements of F_22(x) and F_11(x) in F(x). The above equivalences follow from the fact that congruence transformations of symmetric matrices leave the number of positive and negative eigenvalues invariant. In particular, M ≺ 0 if and only if T^T M T ≺ 0 for any nonsingular matrix T. For example, the first equivalence follows by taking M = F(x) and T = [I, −F_11(x)^{-1} F_12(x); 0, I].

3 Numerical solutions of LMIs

Optimization problems over symmetric semi-definite matrix variables belong to the realm of semi-definite programming or semi-definite optimization. Although we mainly focus on control problems here, semi-definite programs occur in combinatorial optimization, polynomial optimization, topology optimization, etc. In the last few decades this research field has witnessed incredible breakthroughs in numerical tools, commercial and non-commercial software developments and fast solution algorithms. In particular, the introduction of powerful interior point methods allows us to decide effectively about the feasibility of semi-definite programs and to determine their solutions. This section aims to indicate briefly the main ideas behind interior point solvers of convex optimization programs.

Let the solution set S := {x ∈ R^n | F(x) ≺ 0} of the LMI (1) be the domain of a convex function f : S → R which we wish to minimize. That is, we consider the convex optimization problem

v_opt = inf_{x ∈ S} f(x).   (2)

The idea behind interior point methods is to solve this constrained optimization problem by a sequence of unconstrained optimization problems. For this purpose, a barrier function φ is introduced. This is a function φ : R^n → R which is required to

(a) be strictly convex on the interior of S, and

(b) approach +∞ along any sequence of points {x_n}_{n=1}^∞ in the interior of S that converges to a boundary point of S.

Given a barrier function φ, the constrained optimization problem is replaced by the unconstrained optimization problem to minimize the functional

f_t(x) := f(x) + t φ(x)   (3)

where t > 0 is a penalty parameter. The main idea is to determine a curve t ↦ x(t) that associates with any t > 0 a minimizer x(t) of f_t. The minimum of f_t is attained in the interior of S. Subsequently, the behavior of this mapping is considered as the penalty parameter t decreases to zero. In almost all interior point methods, the unconstrained optimization problem is solved with the classical Newton-Raphson iteration technique to approximately determine a local minimizer of f_t. Since f_t is strictly convex on R^n, every local minimizer of f_t is guaranteed to be the unique global minimizer. Under mild assumptions and for a suitably defined sequence of penalty parameters t_n with t_n → 0 as n → ∞, the sequence x(t_n) will converge to a point x_*. That is, the limit x_* := lim_{n→∞} x(t_n) exists and v_opt = f(x_*). If, in addition, x_* belongs to the interior of S then it is an optimal solution to (2); otherwise an almost optimal solution of (2) can be deduced from the sequence x(t_n).

Interior point methods can be applied to either of the two LMI problems defined in the previous section. If we consider the feasibility problem associated with the LMI F(x) ≺ 0, then (f does not play a role and) one candidate barrier function is the logarithmic function

φ(x) := log det(−F(x)^{-1}) if x ∈ S, and φ(x) := ∞ otherwise.

If S is bounded and non-empty, φ will be strictly convex. Convexity of φ implies the existence of a unique x_opt such that φ(x_opt) is the global minimum of φ. The point x_opt belongs to S and is called the analytic center of the feasibility set S.

The LMI optimization problem to minimize c(x) subject to the LMI F(x) ≺ 0 can be viewed as a feasibility problem for the LMI

G_t(x) := [c(x) − t, 0; 0, F(x)] ≺ 0

where t > t_0 := inf_{x ∈ S} c(x) is a penalty parameter. Using the same barrier function yields the unconstrained optimization problem to minimize

g_t(x) := log det(−G_t(x)^{-1}) = log(1/(t − c(x))) + log det(−F(x)^{-1})

for a sequence of decreasing positive values of t. Due to the strict convexity of g_t, the minimizer x(t) of g_t is unique for all t > t_0. Since closed form expressions for the gradient and Hessian of g_t can be obtained, a Newton iteration is an efficient numerical method to find minimizers of g_t. Currently much research is devoted to further exploiting the structure of LMIs in order to tailor dedicated solvers for specific semi-definite programs.

4 Stability characterizations with LMIs

Around 1890, Aleksandr Mikhailovich Lyapunov made a systematic study of the local expansion and contraction properties of motions of dynamical systems around an attractor. He worked

out the idea that an invariant set of a differential equation attracts all nearby solutions if one can find a function that is bounded from below and decreases along all solutions outside the invariant set. Such functions are called Lyapunov functions and they have been used to prove stability of equilibria of differential equations ever since.

Consider the differential equation

ẋ(t) = f(x(t), t)   (4)

with finite dimensional state space X = R^n and where f : X × R → X is assumed to be sufficiently smooth so as to guarantee existence and uniqueness of the solution that satisfies the initial condition x(t_0) = x_0 ∈ X. The differential equation (4) is time-invariant if solutions satisfy x_{t_0+τ, x_0}(t + τ) = x_{t_0, x_0}(t) for any τ ∈ R with t and t + τ in the interval of existence. A point x_* ∈ X is called an equilibrium or fixed point of the differential equation if x_{t_0, x_*}(t) = x_* satisfies (4) (which implies that f(x_*, t) = 0 for all t ≥ t_0). A wealth of stability concepts associated with fixed points of differential equations exists. Here, we focus on just one. The equilibrium point x_* of (4) is called exponentially stable if for all t_0 ∈ R there exist positive numbers α, β and δ (all possibly depending on t_0) such that

‖x_0 − x_*‖ ≤ δ  ⟹  ‖x_{t_0, x_0}(t) − x_*‖ ≤ β ‖x_0 − x_*‖ e^{−α(t − t_0)} for all t ≥ t_0.   (5)

If α, β and δ do not depend on t_0 then x_* is said to be uniformly exponentially stable. If δ is arbitrary, x_* is called globally exponentially stable. Hence, a fixed point is exponentially stable if all solutions of the differential equation that initiate nearby x_* converge to x_* with an exponential rate α > 0. The following result is standard and relates exponential stability of linear time-invariant differential equations to LMI feasibility.

Theorem 2 Let A ∈ R^{n×n}. The following statements are equivalent.

(a) The origin is an exponentially stable equilibrium point of ẋ = Ax.

(b) All eigenvalues λ(A) of A belong to C^− := {s ∈ C | Re(s) < 0} (that is, A is Hurwitz).

(c) The linear matrix inequalities A^T X + XA ≺ 0 and X ≻ 0 are feasible.

Any solution X of the LMIs in item (c) defines the quadratic function V(x) := x^T X x that serves as a Lyapunov function for the equilibrium point x_* = 0 of the differential equation ẋ = Ax. Indeed, V achieves its minimum at x = 0 and its derivative along trajectories,

d/dt ( x(t)^T X x(t) ) = ẋ(t)^T X x(t) + x(t)^T X ẋ(t) = x(t)^T [A^T X + XA] x(t),

is negative for nonzero x(t) by item (c), so V is strictly decreasing along nonzero solutions. Rather straightforward arguments then lead to (5), where δ is arbitrary, β = λ_max(X)/λ_min(X) and α > 0 is any number for which A^T X + XA + 2αX ⪯ 0.

For many applications in control and engineering one may be interested in characterizing eigenvalue locations of A in more general stability regions than C^−. For a real symmetric matrix P ∈ S^{2m}, the set of complex numbers

L_P := { s ∈ C | [I; sI]* P [I; sI] ≺ 0 }

is called an LMI region. Important stability regions such as the half-plane C_1^stab := {s | Re(s) < −α}, the circle C_2^stab := {s | |s| < r} or the conic sector C_3^stab := {s | Re(s) tan(θ) < −|Im(s)|} can be represented by LMI regions L_{P_1}, L_{P_2} and L_{P_3}, respectively, by taking

P_1 = [2α, 1; 1, 0],  P_2 = [−r², 0; 0, 1],  P_3 = [0, 0, sin θ, cos θ; 0, 0, −cos θ, sin θ; sin θ, −cos θ, 0, 0; cos θ, sin θ, 0, 0].

LMI regions include sets bounded by circles, ellipses, strips, parabolas and hyperbolas. Since any finite intersection of LMI regions is again an LMI region, one can virtually approximate any convex region in C as long as it is symmetric with respect to the real axis. To present the main result of this section, we recall the definition of the Kronecker product A ⊗ B of two matrices A ∈ C^{m×n}, B ∈ C^{k×l}, which is the mk × nl matrix

A ⊗ B = [A_11 B, ..., A_1n B; ... ; A_m1 B, ..., A_mn B].

The following result, originating from [4], is an interesting and elegant generalization of the stability characterization in Theorem 2.

Theorem 3 Let A ∈ R^{n×n}. The following statements are equivalent.

(a) All eigenvalues of A are contained in the LMI region

{ s ∈ C | [I; sI]* [Q, S; S^T, R] [I; sI] ≺ 0 }.

(b) There exists X = X^T with X ≻ 0 and

[I; A ⊗ I]^T [X ⊗ Q, X ⊗ S; X ⊗ S^T, X ⊗ R] [I; A ⊗ I] ≺ 0.

The condition in item (b) is an LMI in X. The result of Theorem 2 is recovered by taking Q = 0, S = 1 and R = 0. With Q = −1, S = 0 and R = 1 the LMI region corresponds to the open unit disc; hence A has all its eigenvalues within the open unit disc iff there exists X ≻ 0 such that A^T X A − X ≺ 0. In turn, this LMI test is equivalent to saying that the discrete time system x(k+1) = Ax(k) is exponentially stable.

Example 4 Consider the problem of finding a feedback law u = F x that simultaneously stabilizes the continuous-time systems ẋ = A_k x + B_k u, k = 1, ..., 4, for given matrices A_1, B_1, ..., A_4, B_4.

By Theorem 2, the equivalent problem is to find X_k ≻ 0 and F such that (A_k + B_k F)^T X_k + X_k (A_k + B_k F) ≺ 0 for k = 1, ..., 4. Since both X_k and F are unknown, this is not an LMI constraint. However, assuming X_1 = · · · = X_4 = X, a congruence transformation with the matrix Y := X^{-1} transforms the five matrix inequalities into

Y ≻ 0,  A_k Y + Y A_k^T + B_k M + M^T B_k^T ≺ 0,  k = 1, ..., 4,

where we set M := F Y. These are LMIs in Y and M. When implemented with the given matrices, this set of LMIs turns out to be feasible, and any solution defines a feedback gain F = M Y^{-1} that solves the stabilization problem; a specific gain F can then be computed numerically from any feasible pair (Y, M). An analogous synthesis strategy can be applied for the simultaneous pole placement problem, which amounts to finding F such that the eigenvalues of A_k + B_k F belong to an LMI region L_P for all k. Its solution is then an application of Theorem 3.

5 Performance characterizations with LMIs

In this section we consider a linear system

ẋ = Ax + Bd,  e = Cx + Dd,  x(0) = 0   (6)

in which d is viewed as an undesired external disturbance and e is an error output. Many control synthesis problems can be translated into a question of disturbance attenuation: the controller should reduce the effect of the disturbance d on the error e as much as possible. In this section we discuss techniques to quantify and analyze the effect of d on e for the system (6), whose transfer function matrix is given by T(s) = C(sI − A)^{-1}B + D.

5.1 H2-Performance

Let us assume for (6) that A is Hurwitz and D = 0. If the number of inputs is m then B = (b_1, ..., b_m) has m columns. If the system is excited with a unit impulse in the ν-th input, it responds with the output trajectory z_ν(t) = C e^{At} b_ν. The energy of this output trajectory equals

∫_0^∞ [C e^{At} b_ν]^T [C e^{At} b_ν] dt = b_ν^T ( ∫_0^∞ e^{A^T t} C^T C e^{At} dt ) b_ν = b_ν^T Y b_ν   (7)

with Y being the observability Gramian, the unique solution of the Lyapunov equation

A^T Y + Y A + C^T C = 0.   (8)

If we add the output energies for impulsive inputs in all input components we obtain

Σ_{ν=1}^m ∫_0^∞ z_ν(t)^T z_ν(t) dt = Σ_{ν=1}^m Trace(b_ν b_ν^T Y) = Trace(B B^T Y) = Trace(B^T Y B),

where Trace denotes the sum of the diagonal elements of a matrix. In view of the explicit formula for the observability Gramian as used in (7), combined with Parseval's theorem, we infer that Trace(B^T Y B) actually equals

‖T‖²_{H2} := (1/2π) ∫_{−∞}^{∞} Trace( [C(iωI − A)^{-1}B]* [C(iωI − A)^{-1}B] ) dω.   (9)
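As a numerical illustration of (7)-(9), the following sketch computes the observability Gramian with scipy and compares Trace(B^T Y B) with a crude numerical evaluation of the frequency-domain integral (9). The system data are invented for the example, and numpy/scipy are assumed to be available.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative data (A Hurwitz, D = 0); not taken from the text.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Observability Gramian Y from the Lyapunov equation (8): A^T Y + Y A + C^T C = 0
Y = solve_continuous_lyapunov(A.T, -C.T @ C)
h2_gramian = np.sqrt(np.trace(B.T @ Y @ B))

# Crude check of the frequency-domain formula (9) by numerical quadrature
omega = np.linspace(-200.0, 200.0, 20001)
integrand = []
for w in omega:
    T = C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B
    integrand.append(np.trace(T.conj().T @ T).real)
h2_freq = np.sqrt(np.trapz(integrand, omega) / (2.0 * np.pi))

print(h2_gramian, h2_freq)   # the two values agree up to truncation error
```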

Note that ‖T‖_{H2} is the so-called H2-norm of the transfer matrix T; the name is motivated by the theory of Hardy spaces in pure mathematics, but this relation is not relevant for our purposes. We have actually derived the impulse-response interpretation of the H2-norm. Moreover, this interpretation also provides a concise link to classical LQ theory, since the impulse response z_ν(·) is identical to the output response if the system's state is initialized as x(0) = b_ν.

Let us briefly touch upon the stochastic interpretation of the H2-norm. If the input d is white noise with unit covariance, the asymptotic variance of the output process of (6) satisfies

lim_{t→∞} E[e(t)^T e(t)] = Trace( lim_{t→∞} C E[x(t) x(t)^T] C^T ) = Trace(C X C^T),

where X is the system's controllability Gramian, the unique solution of

A X + X A^T + B B^T = 0.   (10)

Since T(s)^T = B^T(sI − A^T)^{-1} C^T, we can conclude that the asymptotic output variance is equal to ‖T^T‖²_{H2}. Due to (9), this is the same as ‖T‖²_{H2}, which leads to a dual version for computing ‖T‖_{H2}. Let us summarize our findings as follows: if D = 0 and X and Y are the system's controllability and observability Gramians satisfying (10) and (8) respectively, then

‖T‖²_{H2} = Trace(C X C^T) = Trace(B^T Y B)

is the sum of the energies of the outputs of (6) for impulsive inputs in each input channel, and it also equals the asymptotic output variance if the input is white noise with unit covariance.

Note that H2-norms can be easily determined by solving linear equations and computing traces. Since these explicit formulas are inadequate for applying the synthesis procedure developed in the next section, let us provide a genuine characterization of a bound on the H2-norm by LMIs. It is important to stress that this formulation actually combines an LMI characterization of system stability with a bound on system performance.

Theorem 5 A is Hurwitz and ‖T‖_{H2} < γ iff D = 0 and there exist X = X^T and W = W^T with

[A^T X + XA, XB; B^T X, −γI] ≺ 0,  [X, C^T; C, W] ≻ 0  and  Trace(W) < γ.   (11)

Sketch of proof of "if". As a general illustration of how to manipulate performance specifications, it is instructive to show that feasibility of the system of LMIs (11) implies ‖T‖_{H2} < γ. Indeed, by taking Schur complements we infer that (11) leads to

A^T X + XA + (1/γ) X B B^T X ≺ 0,  X ≻ 0,  C X^{-1} C^T ≺ W,  Trace(W) < γ.

The first inequality implies A^T X + XA ≺ 0. Together with X ≻ 0, this implies (Theorem 2) that A is Hurwitz. On the other hand, X̂ := γ X^{-1} satisfies A X̂ + X̂ A^T + B B^T ≺ 0 and Trace(C X̂ C^T) < γ Trace(W) < γ². Combining the latter inequality with (10), we get A(X̂ − X) + (X̂ − X)A^T ≺ 0, which implies, using the stability of A, that X̂ ≻ X. Consequently, also Trace(C X C^T) < γ², which yields ‖T‖_{H2} < γ.
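Theorem 5 also lends itself directly to computation: coding the LMIs (11) in a semi-definite programming tool and minimizing γ returns, up to solver tolerances, the H2-norm itself. The sketch below assumes cvxpy with invented plant data; it is one possible illustration of (11), not a prescribed implementation.

```python
import numpy as np
import cvxpy as cp

# Illustrative data (A Hurwitz, D = 0); not taken from the text.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n, m, p = 2, 1, 1

X = cp.Variable((n, n), symmetric=True)
W = cp.Variable((p, p), symmetric=True)
gamma = cp.Variable(nonneg=True)

# The two LMIs of (11)
lmi1 = cp.bmat([[A.T @ X + X @ A, X @ B],
                [B.T @ X,         -gamma * np.eye(m)]])
lmi2 = cp.bmat([[X, C.T],
                [C, W]])
constraints = [lmi1 << 0, lmi2 >> 0, cp.trace(W) <= gamma]

prob = cp.Problem(cp.Minimize(gamma), constraints)
prob.solve(solver=cp.SCS)
print("H2-norm bound gamma:", gamma.value)
```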

5.2 H∞-Performance

A different way of quantifying the effect of the disturbance d on the output e in (6) is in terms of the so-called energy gain

‖T‖_{H∞} := sup_{0 < ‖d‖_{L2} < ∞} ‖e‖_{L2} / ‖d‖_{L2}.

The norm reflects the worst amplification of disturbances on outputs when measuring the sizes of the input and output signals in terms of their L2-norm or energy. Dissipativity theory [19] provides a direct path towards an LMI characterization of an upper bound ‖T‖_{H∞} < γ.

Theorem 6 A is Hurwitz and ‖T‖_{H∞} < γ iff there exists X ≻ 0 with

[I, 0; A, B]^T [0, X; X, 0] [I, 0; A, B] + [0, I; C, D]^T [−γ²I, 0; 0, I] [0, I; C, D] ≺ 0.   (12)

Sketch of proof of "if". Let us assume that (12) holds. Then the left-upper block of (12) just reads A^T X + XA + C^T C ≺ 0, which implies A^T X + XA ≺ 0. Therefore X ≻ 0 guarantees that A is Hurwitz. Since the inequality (12) is strict, it continues to hold if we replace γ² by (γ − ε)² for some suitably small ε > 0. Let us choose any d(·) with 0 < ‖d‖_{L2} < ∞ and let e(·) be the output of (6). If we right-multiply the perturbed version of (12) with col(x(t), d(t)) and left-multiply with its transpose, and if we exploit the relations in (6), we obtain

[x(t); ẋ(t)]^T [0, X; X, 0] [x(t); ẋ(t)] + [d(t); e(t)]^T [−(γ−ε)²I, 0; 0, I] [d(t); e(t)] = d/dt ( x(t)^T X x(t) ) + e(t)^T e(t) − (γ−ε)² d(t)^T d(t) ≤ 0.

By integration on [0, T] and with x(0) = 0 we infer

x(T)^T X x(T) + ∫_0^T e(t)^T e(t) dt ≤ (γ−ε)² ∫_0^T d(t)^T d(t) dt for all T ≥ 0.

Since both x(·) and ẋ(·) are of finite energy, x(T) → 0 for T → ∞. After taking the limit T → ∞ we hence get ‖e‖²_{L2} ≤ (γ−ε)² ‖d‖²_{L2}, or ‖e‖_{L2}/‖d‖_{L2} ≤ γ − ε. Since this holds for all 0 < ‖d‖_{L2} < ∞, we finally arrive at ‖T‖_{H∞} < γ.

We started from the formulation (12) of the LMI, which gives the most insight for a dissipation-based proof of these results. In the literature, more common equivalent representations of the LMI (12) are

[A^T X + XA + C^T C, XB + C^T D; B^T X + D^T C, D^T D − γ²I] ≺ 0

or (Schur)

[A^T X + XA, XB, C^T; B^T X, −γ²I, D^T; C, D, −I] ≺ 0.   (13)

Equivalently, A^T X + XA + C^T C + (XB + C^T D)(γ²I − D^T D)^{-1}(B^T X + D^T C) ≺ 0 and D^T D ≺ γ²I, which touches upon the relation to Riccati inequalities and equations.
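Since γ² enters (13) affinely, the same route yields the H∞-norm numerically: minimize γ² subject to the bounded-real LMI. Again a hedged sketch in cvxpy with invented data.

```python
import numpy as np
import cvxpy as cp

# Illustrative data (A Hurwitz); not taken from the text.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
n, m, p = 2, 1, 1

X = cp.Variable((n, n), symmetric=True)
gsq = cp.Variable(nonneg=True)       # plays the role of gamma^2

# The bounded-real LMI in the Schur form (13)
lmi = cp.bmat([[A.T @ X + X @ A, X @ B,            C.T],
               [B.T @ X,         -gsq * np.eye(m), D.T],
               [C,               D,                -np.eye(p)]])

prob = cp.Problem(cp.Minimize(gsq), [X >> 0, lmi << 0])
prob.solve(solver=cp.SCS)
print("H-infinity norm (approximately):", np.sqrt(gsq.value))
```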

5.3 The Kalman-Yakubovich-Popov (KYP) Lemma

We have established the link of (12) to the time-domain energy gain by dissipation arguments. The relation of this LMI to the frequency domain is the subject of the celebrated KYP lemma. Algebraic arguments proceed as follows. If (12) holds with X = X^T, then A^T X + XA ≺ 0, which implies that A has no eigenvalues on the imaginary axis. For ω ∈ R let us observe that

[(iωI − A)^{-1}B; I]* [I, 0; A, B]^T [0, X; X, 0] [I, 0; A, B] [(iωI − A)^{-1}B; I]
= [(iωI − A)^{-1}B]* [I; iωI]* [0, X; X, 0] [I; iωI] (iωI − A)^{-1}B
= [(iωI − A)^{-1}B]* [−iωX + iωX] (iωI − A)^{-1}B = 0.

Therefore feasibility of (12) implies the validity of the frequency-domain inequality

[I; T(iω)]* [−γ²I, 0; 0, I] [I; T(iω)] ≺ 0 for all ω ∈ R ∪ {∞},

which follows for ω = ∞ from the right-lower block of (12). Note that this inequality translates into T(iω)* T(iω) ≺ γ²I, or σ_max(T(iω)) < γ, for all ω ∈ R ∪ {∞}. In turn, this reveals the relation of the L2-gain bound to the peak of the largest singular value of the system's frequency response, which is simply the classical L∞-norm for transfer matrices without poles on the imaginary axis, or the H∞-norm for transfer matrices whose poles are all contained in the open left-half plane.

The following result captures a generalization of the strict version of the KYP lemma for an arbitrary symmetric matrix P and without involving any sign-constraint on X.

Theorem 7 If A has no eigenvalues on the imaginary axis, then

[I; T(iω)]* P [I; T(iω)] ≺ 0 for all ω ∈ R ∪ {∞}   (14)

iff there exists some X = X^T with

[I, 0; A, B]^T [0, X; X, 0] [I, 0; A, B] + [0, I; C, D]^T P [0, I; C, D] ≺ 0.   (15)

The survey article [1] provides a nice historical account of the development around the KYP lemma with many references, also to the Russian literature, and discusses further versions of this result even without any a priori hypotheses. In particular, [2] is a rich source for the link of the KYP lemma to semi-definite programming duality and the related control theoretic interpretations.

5.4 Variants

The books [3, 7] contain a whole variety of concrete variations on the theme of formulating performance specifications with LMIs. Let us provide a sample.

Generalized H2-performance. If Trace(W) < γ is replaced by W ≺ γI in Theorem 5, the resulting LMIs characterize that the system gain from finite-energy input signals to the peak value of the output signal is bounded by γ:

sup_{0 < ‖d‖_{L2} < ∞} ‖e‖_{L∞} / ‖d‖_{L2} < γ  with  ‖e‖_{L∞} := sup_{t ≥ 0} ‖e(t)‖.

This criterion allows one to impose time-uniform bounds on the error variable under the assumption that the disturbance has bounded energy.

Quadratic performance. Our discussion of L2-gain performance opens the path towards the following generalization. With a symmetric weighting matrix P, quadratic performance with index P is achieved if an ε > 0 exists such that the following integral quadratic constraint on the performance channel d → e of the stable system (6) is satisfied:

∫_0^∞ [d(t); e(t)]^T P [d(t); e(t)] dt ≤ −ε ‖d‖²_{L2}  for all d ∈ L2.

This time-domain specification directly translates into the frequency-domain inequality (14) and, due to Theorem 7, into feasibility of the LMI (15). If the right-lower block of P is positive semi-definite, it is elementary to see that stability of A is guaranteed by imposing the additional positivity constraint X ≻ 0.

Discrete-time. All described results have counterparts for discrete-time systems

x(t+1) = Ax(t) + Bd(t),  e(t) = Cx(t) + Dd(t),  x(0) = 0,  t = 0, 1, 2, ....

If A has all its eigenvalues in the unit disc (Schur stability), then quadratic performance holds, by definition, if there exists some ε > 0 with

Σ_{t=0}^∞ [d(t); e(t)]^T P [d(t); e(t)] ≤ −ε Σ_{t=0}^∞ d(t)^T d(t).

This is equivalent to the frequency-domain inequality

[I; T(z)]* P [I; T(z)] ≺ 0 for all z ∈ C with |z| = 1

on the unit circle, which is, in turn, equivalent to the existence of X = X^T satisfying the LMI

[I, 0; A, B]^T [−X, 0; 0, X] [I, 0; A, B] + [0, I; C, D]^T P [0, I; C, D] ≺ 0.

If the right-lower block of P is positive semi-definite, Schur stability of A is once again guaranteed by X ≻ 0. We stress the pleasing parallel structure of the continuous-time and discrete-time performance formulations, which is even more striking for synthesis since the proposed general procedure applies to both domains without the need for any adaptation.

6 Optimal Performance Synthesis

Let us now consider a system

ẋ = Ax + B_0 d + Bu,  e = C_0 x + D_0 d + Eu,  y = Cx + F d   (16)

where, in addition to the disturbance d and the error e, the signal u is a control input and y is a measured output. In feedback synthesis, the goal is to determine a controller

ẋ_c = A_c x_c + B_c y,  u = C_c x_c + D_c y   (17)

which feeds the measurements y back to u such that the controlled system, the interconnection of (16) and (17), is internally stable and satisfies a desired performance property (Figure 1).

Figure 1: Generalized plant configuration (the system maps the disturbance d and the control input u to the error e and the measurement y; the controller maps y back to u).

Note that the controlled system with state ξ = col(x, x_c) is easily seen to be described by ξ̇ = 𝒜ξ + ℬd, e = 𝒞ξ + 𝒟d with

[𝒜, ℬ; 𝒞, 𝒟] := [A + B D_c C, B C_c, B_0 + B D_c F; B_c C, A_c, B_c F; C_0 + E D_c C, E C_c, D_0 + E D_c F].   (18)

In view of the generalized plant framework [2, 18] it is essential to understand that this innocent problem formulation comprises surprisingly many specific configurations as they are needed in one- or two-degrees-of-freedom controller synthesis for reference tracking and disturbance attenuation. In this section the emphasis is put on an H∞-norm bound as a measure of performance.

6.1 State-Feedback Synthesis

The simplest control law is the state feedback u = D_c x with a gain D_c, which leads to the controlled system

ẋ = (A + B D_c)x + B_0 d,  e = (C_0 + E D_c)x + D_0 d.   (19)

Using Theorem 6 with the LMI expressed as in (13), this controller stabilizes (19) and renders the H∞-norm of the transfer matrix d → e smaller than γ iff there exists some X with

X ≻ 0 and [(A + B D_c)^T X + X(A + B D_c), X B_0, (C_0 + E D_c)^T; B_0^T X, −γ²I, D_0^T; C_0 + E D_c, D_0, −I] ≺ 0.   (20)

Recall that the first inequality guarantees stability while the second captures performance. We observe that, in synthesis, we need to search for both D_c and X in order to satisfy (20). The performance inequality is, with some abuse of notation, a so-called bilinear matrix inequality, since the left-hand side is affine in X (for fixed D_c) and affine in D_c (for fixed X). Actually, a large variety of design problems in control can easily be seen to admit this structure. Unfortunately, however, bilinear matrix inequalities are as hard to handle as general nonlinear programs. Fortunately, for the problem at hand there is a surprisingly simple remedy.

There exists a celebrated procedure which actually turns the non-convex bilinear matrix inequality (20) into a convex problem of linear matrix inequalities. This is achieved by applying a nonlinear change of variables (D_c, X) → (M, Y) and a congruence transformation to (20). Indeed, if we transform the two inequalities in (20) by congruence with X^{-1} and diag(X^{-1}, I, I), respectively, and if we introduce the new variables

Y := X^{-1} and M := D_c X^{-1},   (21)

we arrive at

Y ≻ 0 and [(AY + BM) + (AY + BM)^T, B_0, (C_0 Y + EM)^T; B_0^T, −γ²I, D_0^T; C_0 Y + EM, D_0, −I] ≺ 0.   (22)

Obviously the constraints (22) constitute linear matrix inequalities in (M, Y) whose feasibility can be verified. If (M, Y) is a solution of (22), we can solve (21) for (D_c, X) as X = Y^{-1} and D_c = M Y^{-1}, and perform a congruence transformation of (22) with Y^{-1} and diag(Y^{-1}, I, I), which leads back to (20). This proves, with X being a certificate, that D_c is indeed stabilizing and achieves the desired performance specification. On the other hand, if the inequalities (22) are not feasible, it is assured that no state-feedback gain can exist for which both these properties are satisfied.

6.2 Output Feedback Synthesis

A general output feedback controller leads to the controlled system (18). It achieves stability of 𝒜 and ‖𝒞(sI − 𝒜)^{-1}ℬ + 𝒟‖_{H∞} < γ iff there exists some 𝒳 with

𝒳 ≻ 0 and [𝒜^T 𝒳 + 𝒳𝒜, 𝒳ℬ, 𝒞^T; ℬ^T 𝒳, −γ²I, 𝒟^T; 𝒞, 𝒟, −I] ≺ 0.   (23)

Due to the affine dependence of (𝒜, ℬ, 𝒞, 𝒟) on the controller matrices (A_c, B_c, C_c, D_c), this involves again a bilinear matrix inequality. It is pleasing that one can identify, again, a convexifying controller parameter transformation [13, 17]. Partition

𝒳 = [X, U; U^T, ⋆] and 𝒳^{-1} = [Y, V; V^T, ⋆]   (24)

according to 𝒜 (where X and Y share their dimension with A and the ⋆'s denote matrices that are irrelevant for our purposes) and define

[K, L; M, N] := [X A Y, 0; 0, 0] + [U, X B; 0, I] [A_c, B_c; C_c, D_c] [V^T, 0; C Y, I].   (25)

We view this as a transformation (𝒳, A_c, B_c, C_c, D_c) → (X, Y, K, L, M, N) =: v. This definition is motivated by the following easily verified relations: with

𝒴 := [Y, I; V^T, 0]

we have

𝒴^T 𝒳 𝒴 = [Y, I; I, X] =: X(v)   (26)

and

[𝒴^T (𝒳𝒜) 𝒴, 𝒴^T (𝒳ℬ); 𝒞𝒴, 𝒟] = [AY + BM, A + BNC, B_0 + BNF; K, XA + LC, X B_0 + LF; C_0 Y + EM, C_0 + ENC, D_0 + ENF] =: [A(v), B(v); C(v), D(v)].   (27)

Although looking intricate, the key is the affine dependence of the blocks X(v), A(v), B(v), C(v) and D(v) on the new variables v. If 𝒴 is non-singular, a congruence transformation of the two inequalities in (23) with 𝒴 and diag(𝒴, I, I) leads to

𝒴^T 𝒳 𝒴 ≻ 0 and [𝒴^T(𝒜^T𝒳)𝒴 + 𝒴^T(𝒳𝒜)𝒴, 𝒴^T(𝒳ℬ), (𝒞𝒴)^T; (ℬ^T𝒳)𝒴, −γ²I, 𝒟^T; 𝒞𝒴, 𝒟, −I] ≺ 0,   (28)

which is, in turn, nothing but the following LMI in v:

X(v) ≻ 0 and [A(v) + A(v)^T, B(v), C(v)^T; B(v)^T, −γ²I, D(v)^T; C(v), D(v), −I] ≺ 0.   (29)

This brings us to the following result, which is true without any hypothesis on 𝒴.

Theorem 8 There exists a controller (17) which stabilizes (18) and renders the H∞-norm of the transfer matrix d → e smaller than γ iff the LMIs (29) are feasible.

The actual design of a controller proceeds as follows. Find a solution of the LMIs (29); due to the first inequality in (29), the matrix I − XY is non-singular (Schur); therefore one can find square and non-singular matrices U and V with I − XY = UV^T; then we can solve (25) for (A_c, B_c, C_c, D_c). This controller does the job: we can define the non-singular matrix 𝒴 and solve (26) for 𝒳, which implies that (27) is valid; then (29) is nothing but (28); since 𝒴 is non-singular, this transforms into (23) by congruence. In summary, we have reduced the design problem of finding a stabilizing controller that establishes a bound on the H∞-norm of the closed-loop system to an equivalent problem that amounts to checking the feasibility of the LMIs (29). Since γ² enters these constraints in an affine fashion, one can minimize γ² subject to the feasibility of these LMIs in order to directly compute the optimal H∞-attenuation level that is achievable by stabilizing controllers.

6.3 General Synthesis Procedure

Although we discussed H∞-synthesis in quite some detail, we can extract the following generic procedure for moving from analysis to synthesis inequalities for a whole variety of other performance specifications that can be expressed by LMIs. Rewrite the analysis inequalities in terms of the blocks 𝒳, 𝒳𝒜, 𝒳ℬ, 𝒞 and 𝒟. Find a formal congruence transformation involving 𝒴 which leads to inequalities in terms of the blocks 𝒴^T𝒳𝒴, 𝒴^T(𝒳𝒜)𝒴, 𝒴^T(𝒳ℬ), 𝒞𝒴 and 𝒟. Then the synthesis inequalities are obtained by the substitution

𝒴^T𝒳𝒴 → X(v),  [𝒴^T(𝒳𝒜)𝒴, 𝒴^T(𝒳ℬ); 𝒞𝒴, 𝒟] → [A(v), B(v); C(v), D(v)]

with (27), for the variables v := (X, Y, K, L, M, N). For state-feedback synthesis one can apply the very same procedure with the formulas X(v) = Y, A(v) = AY + BM, B(v) = B_0, C(v) = C_0 Y + EM, D(v) = D_0 for v = (M, Y).

The controller construction is independent of the particular analysis inequalities and remains, both for state-feedback and output-feedback synthesis, unaltered. This procedure applies to H2-synthesis as well as to the variants of the LMI analysis specifications discussed in Section 5.4. As an illustration, the discrete-time system x(t+1) = Ax(t) + Bd(t), e(t) = Cx(t) + Dd(t) is Schur-stable and its l2-gain is bounded by γ iff there exists some X with

X ≻ 0,  [I, 0; A, B]^T [−X, 0; 0, X] [I, 0; A, B] + [0, I; C, D]^T [−γ²I, 0; 0, I] [0, I; C, D] ≺ 0.

Since the second inequality can also be expressed as

[A, B; C, D]^T [X, 0; 0, I] [A, B; C, D] − [X, 0; 0, γ²I] ≺ 0,

a Schur complement argument reveals that these analysis inequalities can be written as

X ≻ 0,  [−X, 0, A^T X, C^T; 0, −γ²I, B^T X, D^T; XA, XB, −X, 0; C, D, 0, −I] ≺ 0

and thus admit the precise format needed to apply the general synthesis procedure.

Based on the generic dualization and elimination results described in [16], it is often possible to eliminate matrix variables that appear in only one of these synthesis inequalities, with the benefit of reducing computational complexity. For example, one can eliminate all matrices K, L, M, N from the H∞-synthesis inequalities in order to arrive at the inequalities as proposed in [8, 11].

Finally, in many practical problems the disturbance and error signals are partitioned as d = col(d_1, ..., d_p) and e = col(e_1, ..., e_q) in order to impose multiple individual performance requirements on some of the channels d_ν → e_µ, possibly with different norms. The general controller synthesis procedure described in this section is applicable to this kind of multi-objective control problem, provided that one is willing to allow some level of conservatism by expressing the combined desired specifications with one and the same matrix 𝒳 for the closed-loop system. This variant of multi-objective control, which is addressed as a Lyapunov-shaping paradigm in more detail in [17], even allows the incorporation of pole-placement requirements on 𝒜 in terms of general LMI regions. We refer to [17] for an instructive controller design example. The introduction of slack variables offers a possibility to reduce conservatism, as shown for discrete-time systems in [5], while an LMI solution of the general multi-objective control problem based on the Youla-Kucera parameterization of all stabilizing controllers is discussed in [15] and its references.

6.4 Observer and Estimator Synthesis

Estimation problems are related to the configuration in Figure 2. Based on measurements y, the goal is to determine an estimator whose output approximates the system output z (which can be equal to the state or comprise components of d) as closely as possible, despite the corruption of the system by the disturbance d. If d is white noise with unit covariance, minimization of the asymptotic variance of e amounts to minimizing the H2-norm of d → e, which results in the classical Kalman filter. Alternatively, using the energy gain of d → e as a performance indicator amounts to considering the H∞-estimation problem. These and many other variants

of estimation problems can be rephrased as an output-feedback synthesis problem for a system as described by (16) with B = 0 and E = −I (which means that u does not excite the system dynamics). Let us stress that in this reformulation u admits the interpretation of the estimator output and e is the estimation error.

Figure 2: Estimator synthesis configuration (the estimator processes the measurement y of the system driven by d and produces an estimate u of the to-be-estimated output z; the estimation error is e).

If the estimator admits the structure of an observer with a to-be-designed gain D_c,

ẋ_c = A x_c + D_c (C x_c − y),  u = C_0 x_c,   (30)

then the dynamics of the state error ξ := x − x_c is described by

ξ̇ = (A + D_c C)ξ + (B_0 + D_c F)d,  e = C_0 ξ + D_0 d

and defines the transfer matrix d → e. This representation is dual (transposed) to what we considered for state-feedback synthesis, and only slight modifications are required in order to determine the LMIs for synthesizing observer gains D_c that stabilize the error dynamics and achieve e.g. an H∞- or H2-norm bound γ on d → e.

Let us now assume that A in (16) is stable. Instead of assuming a particular structure, we can then try to find a general estimator (17) with stable A_c such that a desired performance level for d → e is achieved. In view of (18) and the fact that B = 0, stability of 𝒜 now boils down to stability of A_c, and the very same output-feedback synthesis procedure can be followed in order to design optimal estimators by LMIs. However, it is interesting to observe that the structural property B = 0 allows a simplification of the convexifying controller parameter transformation, which will be essential for robust estimator synthesis [9]. Indeed, starting from (24) and with Z := Y^{-1} let us define

[K, L; M, N] := [U, 0; 0, I] [A_c, B_c; C_c, D_c] [V^T Z, 0; 0, I] and 𝒴 := [I, I; V^T Z, 0].   (31)

After a direct computation we get

𝒴^T 𝒳 𝒴 = [Z, Z; Z, X] =: X(v)   (32)

and

[𝒴^T (𝒳𝒜) 𝒴, 𝒴^T (𝒳ℬ); 𝒞𝒴, 𝒟] = [ZA, ZA, Z B_0; XA + LC + K, XA + LC, X B_0 + LF; C_0 + ENC + EM, C_0 + ENC, D_0 + ENF] =: [A(v), B(v); C(v), D(v)]

with an affine dependence on X, Z, K, L, M, N. Therefore the general synthesis procedure in Section 6.3 does apply, with the only modification being the use of these new substitution formulas and by recalling the relation Z = Y^{-1} for the actual design of the estimator.
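The text notes that only slight, dual modifications of the state-feedback procedure are needed to compute observer gains. As one concrete, hedged illustration (not spelled out in the text): substituting L := X D_c renders the H∞ analysis LMI (13), written for the error dynamics of the observer (30), affine in (X, L, γ²), and D_c is recovered as X^{-1}L. All plant data below are hypothetical, and cvxpy is assumed.

```python
import numpy as np
import cvxpy as cp

# Hypothetical plant data for the observer structure (30); illustrative only.
A  = np.array([[0.0, 1.0], [-2.0, -3.0]])
B0 = np.array([[0.5], [1.0]])        # disturbance input
C0 = np.array([[1.0, 0.0]])          # output to be estimated, z = C0 x
C  = np.array([[0.0, 1.0]])          # measurement, y = C x + F d
F  = np.array([[0.1]])
D0 = np.array([[0.0]])
n, md, pe, py = 2, 1, 1, 1

X   = cp.Variable((n, n), symmetric=True)
L   = cp.Variable((n, py))           # stands for X @ Dc
gsq = cp.Variable(nonneg=True)       # gamma^2

# H-infinity LMI (13) for the error dynamics, with X @ Dc replaced by L
XB = X @ B0 + L @ F
lmi = cp.bmat([[A.T @ X + X @ A + L @ C + C.T @ L.T, XB,                C0.T],
               [XB.T,                                -gsq * np.eye(md), D0.T],
               [C0,                                  D0,                -np.eye(pe)]])

prob = cp.Problem(cp.Minimize(gsq), [X >> 1e-6 * np.eye(n), lmi << 0])
prob.solve(solver=cp.SCS)
Dc = np.linalg.solve(X.value, L.value)     # observer gain Dc = X^{-1} L
print("gamma:", np.sqrt(gsq.value))
print("Dc:", Dc)
```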

7 Polytopic Uncertainties and Robustness Analysis

First principles models of physical systems are often represented by state-space descriptions in which the various components of the state represent well-defined physical quantities. Variations, perturbations or uncertainties in physical parameters lead to uncertainty in the model. Often, this uncertainty is reflected by variations in well distinguished parameters or coefficients of the model, while, in addition, the nature and/or range of the uncertain parameters may be known, or partially known. Since very small parameter variations may have a major impact on the dynamics of a system, it is of evident importance to analyse parametric uncertainties of dynamical systems.

Suppose that δ = (δ_1, ..., δ_p) is the vector which expresses the ensemble of all uncertain quantities in a given dynamical system. Then there are at least two distinct cases which are of independent interest:

(a) Time-invariant parametric uncertainties: the vector δ is a fixed but unknown element of an uncertainty set Δ ⊆ R^p.

(b) Time-varying parametric uncertainties: the vector δ is an unknown time-varying function δ : R → R^p whose values δ(t) belong to an uncertainty set Δ ⊆ R^p and possibly satisfy additional constraints.

7.1 Time-invariant parametric uncertainty

Consider the uncertain time-invariant system defined by

ẋ = A(δ)x   (33)

where A(·) is a continuous function of the real valued parameter vector δ = col(δ_1, ..., δ_p), which is only known to be contained in the uncertainty set Δ ⊆ R^p. The problem of robust stability amounts to characterizing whether the equilibrium point x = 0 of (33) is exponentially stable for all parameters δ ∈ Δ. With time-invariant uncertainties, (33) is robustly stable iff A(δ) is Hurwitz for all δ ∈ Δ. Since Δ generally consists of infinitely many points, the verification of this condition is rather troublesome from a computational point of view.

The uncertain system (33) is called quadratically stable if there exists X = X^T such that

X ≻ 0,  A(δ)^T X + XA(δ) ≺ 0 for all δ ∈ Δ.   (34)

The importance of this definition becomes apparent after observing that V(x) := x^T X x is a quadratic Lyapunov function for (33) which, by Theorem 2, implies that A(δ) is Hurwitz for all δ ∈ Δ. Hence, quadratic stability implies that the origin of (33) is robustly exponentially stable against time-invariant uncertainties δ ∈ Δ. Unless Δ has a finite number of points, (34) cannot be verified easily. Therefore, the following result is of considerable interest.

Theorem 9 If A(δ) is affine in δ and Δ = co{δ^1, ..., δ^N}, then (33) is quadratically stable if and only if there exists some X such that

X ≻ 0,  A(δ^k)^T X + XA(δ^k) ≺ 0,  k = 1, ..., N.

Hence, for polytopic uncertainty sets and affine parametric dependence in (33), quadratic stability can be verified numerically by a feasibility test for a finite number of LMIs only. The proof is an illustrative example of the use of convexity. It requires showing that F(δ) := A(δ)^T X + XA(δ) ≺ 0 for all δ ∈ Δ if F(δ^k) ≺ 0 for k = 1, ..., N. To see this, first observe that F(·) is a convex function on Δ whenever A(·) is affine. Second, any δ ∈ Δ can be written as a convex combination of the points δ^1, ..., δ^N, say δ = Σ_{k=1}^N α_k δ^k with nonnegative coefficients α_k that sum up to 1; then F(δ) = F(Σ_{k=1}^N α_k δ^k) ⪯ Σ_{k=1}^N α_k F(δ^k) ≺ 0.
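A minimal sketch of the vertex test of Theorem 9 in cvxpy follows; the two vertex matrices A(δ^k) are invented for the illustration.

```python
import numpy as np
import cvxpy as cp

# Vertex matrices A(delta^k) of an affine A(.) over a polytope; illustrative only.
A_vertices = [np.array([[0.0, 1.0], [-2.0, -1.0]]),
              np.array([[0.0, 1.0], [-3.0, -0.5]])]
n = 2

X = cp.Variable((n, n), symmetric=True)
eps = 1e-6   # margin emulating the strict inequalities
constraints = [X >> eps * np.eye(n)]
for Ak in A_vertices:
    constraints.append(Ak.T @ X + X @ Ak << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("quadratically stable" if prob.status == "optimal" else "vertex test failed")
```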

7.2 Time-dependent parametric uncertainty

Robust stability against time-varying uncertainties is generally a more demanding requirement than robust stability against time-invariant uncertainties. Consider the system

ẋ(t) = A(δ(t))x(t)   (35)

which is affected by an uncertain parameter curve δ : R → Δ. Unlike the case with time-invariant uncertainties, robust stability of the origin is now not implied by the condition that A(δ) is Hurwitz for all δ ∈ Δ. However, the uncertain system with time-varying parametric uncertainties is exponentially stable if there exists X such that (34) holds. Therefore, quadratic stability does, in fact, imply robust stability against arbitrarily fast time-varying parametric uncertainties. This is a nice but, in general, conservative test if additional a priori information on the uncertainty is available. For example, the parameter curves δ(·) are often known to be continuously differentiable and constrained in terms of their values and their rate of variation as

δ(t) ∈ Δ,  δ̇(t) ∈ Λ for all t ∈ R.   (36)

Less conservative robust stability tests can be inferred by postulating the existence of parameter dependent Lyapunov functions. A popular instance of such functions takes the form V(x, δ) := x^T X(δ) x and requires a search over matrix functions X(δ) = X(δ)^T with δ ∈ Δ. For notational convenience, let us introduce, for a continuously differentiable matrix function X(δ), the derivative

∂X(δ, ρ) := Σ_{k=1}^p ∂_k X(δ) ρ_k,  (δ, ρ) ∈ Δ × Λ   (37)

where ∂_k X(·) denotes the partial derivative of the function X(·) with respect to the k-th entry of δ and where ρ_k is the k-th component of the vector ρ. (We stress that ∂X(δ, ρ) is purely symbolic notation which is not to be confused with the partial derivative of X(·) itself.) The following result provides a sufficient condition for robust stability and actually covers many tests in the literature. The proof provides much insight into the understanding of stability arguments based on parameter dependent Lyapunov functions.

Theorem 10 Suppose that Δ and Λ are compact subsets of R^p and suppose that X(δ) = X(δ)^T is a continuously differentiable matrix function that satisfies

X(δ) ≻ 0,  ∂X(δ, ρ) + A(δ)^T X(δ) + X(δ)A(δ) ≺ 0   (38)

for all δ ∈ Δ and ρ ∈ Λ. Then the origin of (35) is exponentially stable for all time-varying parametric uncertainties that satisfy (36).

Proof. Suppose that X(δ) satisfies (38). Continuity of X(·) and compactness of Δ and Λ guarantee the existence of constants a, b, c > 0 such that, for all δ ∈ Δ and ρ ∈ Λ,

aI ⪯ X(δ) ⪯ bI,  ∂X(δ, ρ) + A(δ)^T X(δ) + X(δ)A(δ) ⪯ −cI.   (39)

Let δ(·) and x(·) be a parameter curve and state trajectory that satisfy (36) and (35), respectively. With ξ(t) := x(t)^T X(δ(t)) x(t) we clearly have

ξ̇(t) = x(t)^T [ Σ_{k=1}^p ∂_k X(δ(t)) δ̇_k(t) ] x(t) + x(t)^T [ A(δ(t))^T X(δ(t)) + X(δ(t)) A(δ(t)) ] x(t).

If we exploit (39) we get a‖x(t)‖² ≤ ξ(t) ≤ b‖x(t)‖² and ξ̇(t) ≤ −c‖x(t)‖². This implies that ξ̇(t) ≤ −(c/b) ξ(t) and hence ξ(t) ≤ ξ(t_0) exp(−(c/b)(t − t_0)) for all t ≥ t_0. In turn, this leads to

‖x(t)‖² ≤ (b/a) ‖x(t_0)‖² e^{−(c/b)(t − t_0)}, which is (5) for α = c/(2b) and β = b/a.

The constraint (38) on X(·) defines a purely algebraic test that does not involve the system or parameter trajectories. The test is not easy to apply directly because the matrix function X(·) needs to satisfy a partial differential LMI. By considering specific classes of matrix functions X(·), (38) can be converted and implemented with LMI solvers. One of these classes is the set of affine symmetric matrix functions X : Δ → S^n. A few special instances are worth mentioning. If the parameters are time-invariant we have Λ = {0} and (38) simplifies to the conditions X(δ) ≻ 0 and A(δ)^T X(δ) + X(δ)A(δ) ≺ 0. In that case, (38) is also necessary for robust stability. If parameters vary arbitrarily fast, Λ is unbounded and (38) is feasible only if the partial derivatives ∂_k X(δ) vanish identically. This means that X(δ) = X does not depend on δ and we recover the quadratic stability test.

7.3 Robust performance

In the previous subsections we have shown how tests for robust stability against parametric uncertainties can be inferred from characterizations of nominal stability. The same generalization applies in order to obtain conditions for verifying robust performance. Consider the uncertain parameter-dependent system

ẋ(t) = A(δ(t))x(t) + B(δ(t))d(t),  e(t) = C(δ(t))x(t) + D(δ(t))d(t)   (40)

where δ(·) is a continuously differentiable rate-bounded uncertainty that satisfies (36). In the case of H∞-performance, we quantified the effect of d on e in terms of the L2-gain of the system. However, the output e of (40) not only depends on the input d but also on the uncertainty δ(·). Hence we say that the robust L2-gain is smaller than γ if

– for d = 0 and for all parameter curves δ(·) with (36), x = 0 is an exponentially stable equilibrium of the system (40);

– for x(0) = 0 it holds that

sup_{δ(·) satisfies (36)}  sup_{0 < ‖d‖_{L2} < ∞}  ‖e‖_{L2} / ‖d‖_{L2} < γ.

With some abuse of terminology, robust L2-gain performance is often referred to as robust H∞-performance, but it is important to realize that frequency-domain characterizations do not make sense when considering time-dependent parametric uncertainties in (40). Only with time-invariant parameter uncertainties (Λ = {0}) does a robust L2-gain smaller than γ imply that ‖T_δ‖_{H∞} < γ for all δ ∈ Δ, where T_δ is the transfer function associated with (40) for time-invariant parameters δ(t) ≡ δ. The following result generalizes Theorem 6 to robust L2-gain performance.

Theorem 11 Suppose there exists a continuously differentiable matrix function X(δ) = X(δ)^T such that X(δ) ≻ 0 and

[∂X(δ, ρ) + A(δ)^T X(δ) + X(δ)A(δ), X(δ)B(δ); B(δ)^T X(δ), 0] + [0, I; C(δ), D(δ)]^T [−γ²I, 0; 0, I] [0, I; C(δ), D(δ)] ≺ 0   (41)

for all δ ∈ Δ and ρ ∈ Λ. Then the uncertain system (40) has a robust L2-gain smaller than γ.

The proof of this result is analogous to that of Theorem 6. The main merit of Theorem 11 is that it converts robust L2-gain performance into an algebraic property. As in Theorem 10, the condition (41) requires the numerical search for a matrix function X(·) that needs to satisfy a partial differential linear matrix inequality. Moreover, the discussion about the extreme cases of time-invariant or arbitrarily fast time-varying parameters in Section 7.2 remains valid for (41). Finally, let us remark that Theorem 11 extends, mutatis mutandis, to all other performance specifications that have been mentioned in Section 5.

8 Robust State-Feedback and Estimator Synthesis

Consider the system

ẋ(t) = A(δ(t))x(t) + B_0(δ(t))d(t) + B(δ(t))u(t),  e(t) = C_0(δ(t))x(t) + D_0(δ(t))d(t) + E(δ(t))u(t),  y(t) = C(δ(t))x(t) + F(δ(t))d(t)   (42)

whose describing matrices are affected by a time-dependent parametric uncertainty δ(t) ∈ R^p. Let us also assume that the dependence of the system matrices on δ is affine, and that δ(t) takes values that are, for t ≥ 0, confined to the polytope

Δ := co{δ^1, ..., δ^N} ⊂ R^p.   (43)

Robust controller synthesis deals with the problem of determining a feedback controller that processes measurements y into control inputs u so as to guarantee robust stability and a desired robust performance specification on the mapping from the disturbance d to the output e of the controlled system. The robust state-feedback synthesis problem is a special case in which the whole state is assumed to be measurable (y = x in (42)) and the controller is a static feedback law u = D_c x with some gain D_c. In that case, the resulting closed-loop system is described by

ẋ(t) = [A(δ(t)) + B(δ(t))D_c]x(t) + B_0(δ(t))d(t),  e(t) = [C_0(δ(t)) + E(δ(t))D_c]x(t) + D_0(δ(t))d(t).

If we apply Theorem 11 with a parameter-independent X(δ) = X and exploit affine parameter dependence as for Theorem 9, we infer that the robust L2-gain of the controlled system is smaller than γ if there exists X = X^T with X ≻ 0 and

[[A(δ^k) + B(δ^k)D_c]^T X + X[A(δ^k) + B(δ^k)D_c], X B_0(δ^k), [C_0(δ^k) + E(δ^k)D_c]^T; B_0(δ^k)^T X, −γ²I, D_0(δ^k)^T; C_0(δ^k) + E(δ^k)D_c, D_0(δ^k), −I] ≺ 0 for all k = 1, ..., N.   (44)

Literally following the nominal synthesis procedure of Section 6.1 with the convexifying transformation (21), we infer that there exist (D_c, X) satisfying (44) iff there exist (M, Y) satisfying Y ≻ 0 and

[[A(δ^k)Y + B(δ^k)M] + [A(δ^k)Y + B(δ^k)M]^T, B_0(δ^k), [C_0(δ^k)Y + E(δ^k)M]^T; B_0(δ^k)^T, −γ²I, D_0(δ^k)^T; C_0(δ^k)Y + E(δ^k)M, D_0(δ^k), −I] ≺ 0 for all k = 1, ..., N.
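These robust state-feedback synthesis inequalities can be coded directly: one LMI per vertex in the variables (Y, M, γ²), with the gain recovered as D_c = M Y^{-1}. The sketch below assumes cvxpy; all vertex data are illustrative, not taken from the text.

```python
import numpy as np
import cvxpy as cp

# Hypothetical vertex data of the polytopic system (42); illustrative only.
A_v  = [np.array([[0.0, 1.0], [-2.0, -1.0]]), np.array([[0.0, 1.0], [-3.0, -0.5]])]
B_v  = [np.array([[0.0], [1.0]]),             np.array([[0.0], [2.0]])]
B0_v = [np.array([[0.0], [1.0]])] * 2
C0_v = [np.array([[1.0, 0.0]])] * 2
D0_v = [np.array([[0.0]])] * 2
E_v  = [np.array([[0.1]])] * 2
n, nu, md, pe = 2, 1, 1, 1

Y   = cp.Variable((n, n), symmetric=True)
M   = cp.Variable((nu, n))
gsq = cp.Variable(nonneg=True)       # gamma^2

constraints = [Y >> 1e-6 * np.eye(n)]
for A, B, B0, C0, D0, E in zip(A_v, B_v, B0_v, C0_v, D0_v, E_v):
    AYBM = A @ Y + B @ M             # A(d^k) Y + B(d^k) M
    CYEM = C0 @ Y + E @ M            # C0(d^k) Y + E(d^k) M
    lmi = cp.bmat([[AYBM + AYBM.T, B0,                CYEM.T],
                   [B0.T,          -gsq * np.eye(md), D0.T],
                   [CYEM,          D0,                -np.eye(pe)]])
    constraints.append(lmi << 0)

prob = cp.Problem(cp.Minimize(gsq), constraints)
prob.solve(solver=cp.SCS)
Dc = M.value @ np.linalg.inv(Y.value)          # robust state-feedback gain
print("guaranteed robust L2-gain:", np.sqrt(gsq.value))
print("Dc:", Dc)
```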


More information

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr The discrete algebraic Riccati equation and linear matrix inequality nton. Stoorvogel y Department of Mathematics and Computing Science Eindhoven Univ. of Technology P.O. ox 53, 56 M Eindhoven The Netherlands

More information

Topic # /31 Feedback Control Systems. Analysis of Nonlinear Systems Lyapunov Stability Analysis

Topic # /31 Feedback Control Systems. Analysis of Nonlinear Systems Lyapunov Stability Analysis Topic # 16.30/31 Feedback Control Systems Analysis of Nonlinear Systems Lyapunov Stability Analysis Fall 010 16.30/31 Lyapunov Stability Analysis Very general method to prove (or disprove) stability of

More information

Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma

Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University 8 September 2003 European Union RTN Summer School on Multi-Agent

More information

LMI Methods in Optimal and Robust Control

LMI Methods in Optimal and Robust Control LMI Methods in Optimal and Robust Control Matthew M. Peet Arizona State University Lecture 15: Nonlinear Systems and Lyapunov Functions Overview Our next goal is to extend LMI s and optimization to nonlinear

More information

Chapter III. Stability of Linear Systems

Chapter III. Stability of Linear Systems 1 Chapter III Stability of Linear Systems 1. Stability and state transition matrix 2. Time-varying (non-autonomous) systems 3. Time-invariant systems 1 STABILITY AND STATE TRANSITION MATRIX 2 In this chapter,

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012

More information

Static Output Feedback Stabilisation with H Performance for a Class of Plants

Static Output Feedback Stabilisation with H Performance for a Class of Plants Static Output Feedback Stabilisation with H Performance for a Class of Plants E. Prempain and I. Postlethwaite Control and Instrumentation Research, Department of Engineering, University of Leicester,

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

ROBUST ANALYSIS WITH LINEAR MATRIX INEQUALITIES AND POLYNOMIAL MATRICES. Didier HENRION henrion

ROBUST ANALYSIS WITH LINEAR MATRIX INEQUALITIES AND POLYNOMIAL MATRICES. Didier HENRION  henrion GRADUATE COURSE ON POLYNOMIAL METHODS FOR ROBUST CONTROL PART IV.1 ROBUST ANALYSIS WITH LINEAR MATRIX INEQUALITIES AND POLYNOMIAL MATRICES Didier HENRION www.laas.fr/ henrion henrion@laas.fr Airbus assembly

More information

Nonlinear Systems Theory

Nonlinear Systems Theory Nonlinear Systems Theory Matthew M. Peet Arizona State University Lecture 2: Nonlinear Systems Theory Overview Our next goal is to extend LMI s and optimization to nonlinear systems analysis. Today we

More information

CDS Solutions to Final Exam

CDS Solutions to Final Exam CDS 22 - Solutions to Final Exam Instructor: Danielle C Tarraf Fall 27 Problem (a) We will compute the H 2 norm of G using state-space methods (see Section 26 in DFT) We begin by finding a minimal state-space

More information

In English, this means that if we travel on a straight line between any two points in C, then we never leave C.

In English, this means that if we travel on a straight line between any two points in C, then we never leave C. Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from

More information

ECE504: Lecture 8. D. Richard Brown III. Worcester Polytechnic Institute. 28-Oct-2008

ECE504: Lecture 8. D. Richard Brown III. Worcester Polytechnic Institute. 28-Oct-2008 ECE504: Lecture 8 D. Richard Brown III Worcester Polytechnic Institute 28-Oct-2008 Worcester Polytechnic Institute D. Richard Brown III 28-Oct-2008 1 / 30 Lecture 8 Major Topics ECE504: Lecture 8 We are

More information

The Liapunov Method for Determining Stability (DRAFT)

The Liapunov Method for Determining Stability (DRAFT) 44 The Liapunov Method for Determining Stability (DRAFT) 44.1 The Liapunov Method, Naively Developed In the last chapter, we discussed describing trajectories of a 2 2 autonomous system x = F(x) as level

More information

FINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES. Danlei Chu, Tongwen Chen, Horacio J. Marquez

FINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES. Danlei Chu, Tongwen Chen, Horacio J. Marquez FINITE HORIZON ROBUST MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES Danlei Chu Tongwen Chen Horacio J Marquez Department of Electrical and Computer Engineering University of Alberta Edmonton

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Contents. 0.1 Notation... 3

Contents. 0.1 Notation... 3 Contents 0.1 Notation........................................ 3 1 A Short Course on Frame Theory 4 1.1 Examples of Signal Expansions............................ 4 1.2 Signal Expansions in Finite-Dimensional

More information

FEL3210 Multivariable Feedback Control

FEL3210 Multivariable Feedback Control FEL3210 Multivariable Feedback Control Lecture 8: Youla parametrization, LMIs, Model Reduction and Summary [Ch. 11-12] Elling W. Jacobsen, Automatic Control Lab, KTH Lecture 8: Youla, LMIs, Model Reduction

More information

Lecture 5. Theorems of Alternatives and Self-Dual Embedding

Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c

More information

Convex Optimization. (EE227A: UC Berkeley) Lecture 6. Suvrit Sra. (Conic optimization) 07 Feb, 2013

Convex Optimization. (EE227A: UC Berkeley) Lecture 6. Suvrit Sra. (Conic optimization) 07 Feb, 2013 Convex Optimization (EE227A: UC Berkeley) Lecture 6 (Conic optimization) 07 Feb, 2013 Suvrit Sra Organizational Info Quiz coming up on 19th Feb. Project teams by 19th Feb Good if you can mix your research

More information

Stability theory is a fundamental topic in mathematics and engineering, that include every

Stability theory is a fundamental topic in mathematics and engineering, that include every Stability Theory Stability theory is a fundamental topic in mathematics and engineering, that include every branches of control theory. For a control system, the least requirement is that the system is

More information

Multiobjective Optimization Applied to Robust H 2 /H State-feedback Control Synthesis

Multiobjective Optimization Applied to Robust H 2 /H State-feedback Control Synthesis Multiobjective Optimization Applied to Robust H 2 /H State-feedback Control Synthesis Eduardo N. Gonçalves, Reinaldo M. Palhares, and Ricardo H. C. Takahashi Abstract This paper presents an algorithm for

More information

Lyapunov Stability Theory

Lyapunov Stability Theory Lyapunov Stability Theory Peter Al Hokayem and Eduardo Gallestey March 16, 2015 1 Introduction In this lecture we consider the stability of equilibrium points of autonomous nonlinear systems, both in continuous

More information

Rank-one LMIs and Lyapunov's Inequality. Gjerrit Meinsma 4. Abstract. We describe a new proof of the well-known Lyapunov's matrix inequality about

Rank-one LMIs and Lyapunov's Inequality. Gjerrit Meinsma 4. Abstract. We describe a new proof of the well-known Lyapunov's matrix inequality about Rank-one LMIs and Lyapunov's Inequality Didier Henrion 1;; Gjerrit Meinsma Abstract We describe a new proof of the well-known Lyapunov's matrix inequality about the location of the eigenvalues of a matrix

More information

EE363 homework 7 solutions

EE363 homework 7 solutions EE363 Prof. S. Boyd EE363 homework 7 solutions 1. Gain margin for a linear quadratic regulator. Let K be the optimal state feedback gain for the LQR problem with system ẋ = Ax + Bu, state cost matrix Q,

More information

Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications

Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications L. Faybusovich T. Mouktonglang Department of Mathematics, University of Notre Dame, Notre Dame, IN

More information

Modeling and Analysis of Dynamic Systems

Modeling and Analysis of Dynamic Systems Modeling and Analysis of Dynamic Systems Dr. Guillaume Ducard Fall 2017 Institute for Dynamic Systems and Control ETH Zurich, Switzerland G. Ducard c 1 / 57 Outline 1 Lecture 13: Linear System - Stability

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

w T 1 w T 2. w T n 0 if i j 1 if i = j

w T 1 w T 2. w T n 0 if i j 1 if i = j Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -

More information

Subject: Optimal Control Assignment-1 (Related to Lecture notes 1-10)

Subject: Optimal Control Assignment-1 (Related to Lecture notes 1-10) Subject: Optimal Control Assignment- (Related to Lecture notes -). Design a oil mug, shown in fig., to hold as much oil possible. The height and radius of the mug should not be more than 6cm. The mug must

More information

Introduction to Nonlinear Control Lecture # 4 Passivity

Introduction to Nonlinear Control Lecture # 4 Passivity p. 1/6 Introduction to Nonlinear Control Lecture # 4 Passivity È p. 2/6 Memoryless Functions ¹ y È Ý Ù È È È È u (b) µ power inflow = uy Resistor is passive if uy 0 p. 3/6 y y y u u u (a) (b) (c) Passive

More information

From Convex Optimization to Linear Matrix Inequalities

From Convex Optimization to Linear Matrix Inequalities Dep. of Information Engineering University of Pisa (Italy) From Convex Optimization to Linear Matrix Inequalities eng. Sergio Grammatico grammatico.sergio@gmail.com Class of Identification of Uncertain

More information

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,

More information

Output Input Stability and Minimum-Phase Nonlinear Systems

Output Input Stability and Minimum-Phase Nonlinear Systems 422 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 47, NO. 3, MARCH 2002 Output Input Stability and Minimum-Phase Nonlinear Systems Daniel Liberzon, Member, IEEE, A. Stephen Morse, Fellow, IEEE, and Eduardo

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

1030 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 56, NO. 5, MAY 2011

1030 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 56, NO. 5, MAY 2011 1030 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 56, NO 5, MAY 2011 L L 2 Low-Gain Feedback: Their Properties, Characterizations Applications in Constrained Control Bin Zhou, Member, IEEE, Zongli Lin,

More information

An Observation on the Positive Real Lemma

An Observation on the Positive Real Lemma Journal of Mathematical Analysis and Applications 255, 48 49 (21) doi:1.16/jmaa.2.7241, available online at http://www.idealibrary.com on An Observation on the Positive Real Lemma Luciano Pandolfi Dipartimento

More information

Here represents the impulse (or delta) function. is an diagonal matrix of intensities, and is an diagonal matrix of intensities.

Here represents the impulse (or delta) function. is an diagonal matrix of intensities, and is an diagonal matrix of intensities. 19 KALMAN FILTER 19.1 Introduction In the previous section, we derived the linear quadratic regulator as an optimal solution for the fullstate feedback control problem. The inherent assumption was that

More information

ME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ

ME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ ME 234, Lyapunov and Riccati Problems. This problem is to recall some facts and formulae you already know. (a) Let A and B be matrices of appropriate dimension. Show that (A, B) is controllable if and

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

Efficient robust optimization for robust control with constraints Paul Goulart, Eric Kerrigan and Danny Ralph

Efficient robust optimization for robust control with constraints Paul Goulart, Eric Kerrigan and Danny Ralph Efficient robust optimization for robust control with constraints p. 1 Efficient robust optimization for robust control with constraints Paul Goulart, Eric Kerrigan and Danny Ralph Efficient robust optimization

More information

EE363 homework 8 solutions

EE363 homework 8 solutions EE363 Prof. S. Boyd EE363 homework 8 solutions 1. Lyapunov condition for passivity. The system described by ẋ = f(x, u), y = g(x), x() =, with u(t), y(t) R m, is said to be passive if t u(τ) T y(τ) dτ

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications

Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications Yongge Tian CEMA, Central University of Finance and Economics, Beijing 100081,

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo January 29, 2012 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Global Analysis of Piecewise Linear Systems Using Impact Maps and Quadratic Surface Lyapunov Functions

Global Analysis of Piecewise Linear Systems Using Impact Maps and Quadratic Surface Lyapunov Functions Global Analysis of Piecewise Linear Systems Using Impact Maps and Quadratic Surface Lyapunov Functions Jorge M. Gonçalves, Alexandre Megretski, Munther A. Dahleh Department of EECS, Room 35-41 MIT, Cambridge,

More information

16.31 Fall 2005 Lecture Presentation Mon 31-Oct-05 ver 1.1

16.31 Fall 2005 Lecture Presentation Mon 31-Oct-05 ver 1.1 16.31 Fall 2005 Lecture Presentation Mon 31-Oct-05 ver 1.1 Charles P. Coleman October 31, 2005 1 / 40 : Controllability Tests Observability Tests LEARNING OUTCOMES: Perform controllability tests Perform

More information

THIS paper studies the input design problem in system identification.

THIS paper studies the input design problem in system identification. 1534 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 50, NO. 10, OCTOBER 2005 Input Design Via LMIs Admitting Frequency-Wise Model Specifications in Confidence Regions Henrik Jansson Håkan Hjalmarsson, Member,

More information

A Characterization of the Hurwitz Stability of Metzler Matrices

A Characterization of the Hurwitz Stability of Metzler Matrices 29 American Control Conference Hyatt Regency Riverfront, St Louis, MO, USA June -2, 29 WeC52 A Characterization of the Hurwitz Stability of Metzler Matrices Kumpati S Narendra and Robert Shorten 2 Abstract

More information

EN Nonlinear Control and Planning in Robotics Lecture 3: Stability February 4, 2015

EN Nonlinear Control and Planning in Robotics Lecture 3: Stability February 4, 2015 EN530.678 Nonlinear Control and Planning in Robotics Lecture 3: Stability February 4, 2015 Prof: Marin Kobilarov 0.1 Model prerequisites Consider ẋ = f(t, x). We will make the following basic assumptions

More information

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,

More information

Introduction to linear matrix inequalities Wojciech Paszke

Introduction to linear matrix inequalities Wojciech Paszke Introduction to linear matrix inequalities Wojciech Paszke Institute of Control and Computation Engineering, University of Zielona Góra, Poland e-mail: W.Paszke@issi.uz.zgora.pl Outline Introduction to

More information

10 Transfer Matrix Models

10 Transfer Matrix Models MIT EECS 6.241 (FALL 26) LECTURE NOTES BY A. MEGRETSKI 1 Transfer Matrix Models So far, transfer matrices were introduced for finite order state space LTI models, in which case they serve as an important

More information

An asymptotic ratio characterization of input-to-state stability

An asymptotic ratio characterization of input-to-state stability 1 An asymptotic ratio characterization of input-to-state stability Daniel Liberzon and Hyungbo Shim Abstract For continuous-time nonlinear systems with inputs, we introduce the notion of an asymptotic

More information

Nonlinear Observers. Jaime A. Moreno. Eléctrica y Computación Instituto de Ingeniería Universidad Nacional Autónoma de México

Nonlinear Observers. Jaime A. Moreno. Eléctrica y Computación Instituto de Ingeniería Universidad Nacional Autónoma de México Nonlinear Observers Jaime A. Moreno JMorenoP@ii.unam.mx Eléctrica y Computación Instituto de Ingeniería Universidad Nacional Autónoma de México XVI Congreso Latinoamericano de Control Automático October

More information

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44 Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)

More information

Stability and Robustness Analysis of Nonlinear Systems via Contraction Metrics and SOS Programming

Stability and Robustness Analysis of Nonlinear Systems via Contraction Metrics and SOS Programming arxiv:math/0603313v1 [math.oc 13 Mar 2006 Stability and Robustness Analysis of Nonlinear Systems via Contraction Metrics and SOS Programming Erin M. Aylward 1 Pablo A. Parrilo 1 Jean-Jacques E. Slotine

More information

Nonlinear Control. Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems

Nonlinear Control. Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems Time-varying Systems ẋ = f(t,x) f(t,x) is piecewise continuous in t and locally Lipschitz in x for all t 0 and all x D, (0 D). The origin

More information

Lecture 1. Stochastic Optimization: Introduction. January 8, 2018

Lecture 1. Stochastic Optimization: Introduction. January 8, 2018 Lecture 1 Stochastic Optimization: Introduction January 8, 2018 Optimization Concerned with mininmization/maximization of mathematical functions Often subject to constraints Euler (1707-1783): Nothing

More information

Convex Optimization 1

Convex Optimization 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.245: MULTIVARIABLE CONTROL SYSTEMS by A. Megretski Convex Optimization 1 Many optimization objectives generated

More information

Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard June 15, 2013

Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard June 15, 2013 Linear Quadratic Zero-Sum Two-Person Differential Games Pierre Bernhard June 15, 2013 Abstract As in optimal control theory, linear quadratic (LQ) differential games (DG) can be solved, even in high dimension,

More information

Model reduction for linear systems by balancing

Model reduction for linear systems by balancing Model reduction for linear systems by balancing Bart Besselink Jan C. Willems Center for Systems and Control Johann Bernoulli Institute for Mathematics and Computer Science University of Groningen, Groningen,

More information

EML5311 Lyapunov Stability & Robust Control Design

EML5311 Lyapunov Stability & Robust Control Design EML5311 Lyapunov Stability & Robust Control Design 1 Lyapunov Stability criterion In Robust control design of nonlinear uncertain systems, stability theory plays an important role in engineering systems.

More information

ẋ n = f n (x 1,...,x n,u 1,...,u m ) (5) y 1 = g 1 (x 1,...,x n,u 1,...,u m ) (6) y p = g p (x 1,...,x n,u 1,...,u m ) (7)

ẋ n = f n (x 1,...,x n,u 1,...,u m ) (5) y 1 = g 1 (x 1,...,x n,u 1,...,u m ) (6) y p = g p (x 1,...,x n,u 1,...,u m ) (7) EEE582 Topical Outline A.A. Rodriguez Fall 2007 GWC 352, 965-3712 The following represents a detailed topical outline of the course. It attempts to highlight most of the key concepts to be covered and

More information

Robust and Optimal Control, Spring 2015

Robust and Optimal Control, Spring 2015 Robust and Optimal Control, Spring 2015 Instructor: Prof. Masayuki Fujita (S5-303B) D. Linear Matrix Inequality D.1 Convex Optimization D.2 Linear Matrix Inequality(LMI) D.3 Control Design and LMI Formulation

More information

Semidefinite Programming Basics and Applications

Semidefinite Programming Basics and Applications Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent

More information

Linear Matrix Inequalities in Robust Control. Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University MTNS 2002

Linear Matrix Inequalities in Robust Control. Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University MTNS 2002 Linear Matrix Inequalities in Robust Control Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University MTNS 2002 Objective A brief introduction to LMI techniques for Robust Control Emphasis on

More information

Some of the different forms of a signal, obtained by transformations, are shown in the figure. jwt e z. jwt z e

Some of the different forms of a signal, obtained by transformations, are shown in the figure. jwt e z. jwt z e Transform methods Some of the different forms of a signal, obtained by transformations, are shown in the figure. X(s) X(t) L - L F - F jw s s jw X(jw) X*(t) F - F X*(jw) jwt e z jwt z e X(nT) Z - Z X(z)

More information

Control Systems. LMIs in. Guang-Ren Duan. Analysis, Design and Applications. Hai-Hua Yu. CRC Press. Taylor & Francis Croup

Control Systems. LMIs in. Guang-Ren Duan. Analysis, Design and Applications. Hai-Hua Yu. CRC Press. Taylor & Francis Croup LMIs in Control Systems Analysis, Design and Applications Guang-Ren Duan Hai-Hua Yu CRC Press Taylor & Francis Croup Boca Raton London New York CRC Press is an imprint of the Taylor & Francis Croup, an

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko

More information

An Introduction to Linear Matrix Inequalities. Raktim Bhattacharya Aerospace Engineering, Texas A&M University

An Introduction to Linear Matrix Inequalities. Raktim Bhattacharya Aerospace Engineering, Texas A&M University An Introduction to Linear Matrix Inequalities Raktim Bhattacharya Aerospace Engineering, Texas A&M University Linear Matrix Inequalities What are they? Inequalities involving matrix variables Matrix variables

More information

GLOBAL ANALYSIS OF PIECEWISE LINEAR SYSTEMS USING IMPACT MAPS AND QUADRATIC SURFACE LYAPUNOV FUNCTIONS

GLOBAL ANALYSIS OF PIECEWISE LINEAR SYSTEMS USING IMPACT MAPS AND QUADRATIC SURFACE LYAPUNOV FUNCTIONS GLOBAL ANALYSIS OF PIECEWISE LINEAR SYSTEMS USING IMPACT MAPS AND QUADRATIC SURFACE LYAPUNOV FUNCTIONS Jorge M. Gonçalves, Alexandre Megretski y, Munther A. Dahleh y California Institute of Technology

More information

Sum of Squares Relaxations for Polynomial Semi-definite Programming

Sum of Squares Relaxations for Polynomial Semi-definite Programming Sum of Squares Relaxations for Polynomial Semi-definite Programming C.W.J. Hol, C.W. Scherer Delft University of Technology, Delft Center of Systems and Control (DCSC) Mekelweg 2, 2628CD Delft, The Netherlands

More information

Robotics. Control Theory. Marc Toussaint U Stuttgart

Robotics. Control Theory. Marc Toussaint U Stuttgart Robotics Control Theory Topics in control theory, optimal control, HJB equation, infinite horizon case, Linear-Quadratic optimal control, Riccati equations (differential, algebraic, discrete-time), controllability,

More information