ECE 680 Fall 2017

Introduction to Linear Matrix Inequalities (LMIs)

by Stanislaw H. Żak and Guisheng Zhai

October 18, 2017

Henry Ford offered customers the Model T Ford in any color the customer wants, as long as it is black. Henry Ford's policy resulted from subordination to the assembly line constraint that could not keep up with market demand. This was a resource constraint. To maximize throughput it was important for Ford not to change colors during production. This policy assured maximum profit and value for Ford by subordinating the whole system (including customers) to production. Henry Ford's failure was that he stuck to his policy even when new competitors entered the market; then the resource constraint became a market constraint, resulting in a steep drop in sales and market share. The right step would have been to realize that there was now a market constraint and subordinate the system to the constraint. Competition and customers' needs should have driven Ford to produce a variety of cars: different styles, colors, or amenities.

Ronen and Pass [3, p. 84]

1 Motivation

A number of control design problems for linear and nonlinear systems can be formulated as convex optimization problems. To define a convex optimization problem, we need the definitions of a convex set and a convex function.

Definition 1 A set $\Omega \subseteq \mathbb{R}^n$ is convex if for any $x$ and $y$ in $\Omega$, the line segment between $x$ and $y$ lies in $\Omega$, that is, $\alpha x + (1-\alpha)y \in \Omega$ for any $\alpha \in (0,1)$.

Examples of convex sets include the empty set, $\mathbb{R}^n$, a subspace, a line, and a line segment.

Definition 2 A real-valued function $f:\Omega\to\mathbb{R}$ defined on a convex set $\Omega\subseteq\mathbb{R}^n$ is convex if for all $x,y\in\Omega$ and all $\alpha\in(0,1)$,
\[ f(\alpha x + (1-\alpha)y) \le \alpha f(x) + (1-\alpha)f(y). \]
Thus a real-valued function of $n$ variables is convex over a convex set if for all $x,y\in\Omega$ the points on the line segment connecting $(x,f(x))$ and $(y,f(y))$ lie on or above the graph of $f$. For more information on the subject of convex functions, we recommend [2, Section 22.2].

Definition 3 A convex optimization problem is one where the objective function to be minimized is convex and the constraint set, over which we minimize the objective function, is a convex set.

If $f$ is a convex function and $\Omega$ is a convex set, then, in general,
\[ \text{maximize } f(x) \quad\text{subject to } x\in\Omega \]
is NOT a convex optimization problem! However, if $f$ is concave, then the above problem amounts to minimizing the convex function $-f$ over $\Omega$ and hence is a convex optimization problem.

2 Linear Matrix Inequality and Its Properties

Consider $n+1$ real symmetric matrices $F_i = F_i^T \in \mathbb{R}^{m\times m}$, $i = 0,1,\ldots,n$, and a vector $x = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}^T \in \mathbb{R}^n$. Then,
\[ F(x) = F_0 + x_1F_1 + \cdots + x_nF_n = F_0 + \sum_{i=1}^n x_iF_i \]
is an affine function of $x$ rather than a linear function of $x$ because $F(x)$ is composed of a linear term, $\sum_{i=1}^n x_iF_i$, and a constant term, $F_0$.

Consider now an expression of the form
\[ F(x) = F_0 + x_1F_1 + \cdots + x_nF_n \succeq 0. \]
The above is to be interpreted as follows: find a set of vectors $x$ such that $z^T F(x) z \ge 0$ for all $z\in\mathbb{R}^m$, that is, $F(x)$ is positive semi-definite. Recall that the $F_i$'s are constant matrices, $x$ is unknown, and $F(x) = F(x)^T$ is an affine function of $x$. The expression $F(x) = F_0 + x_1F_1 + \cdots + x_nF_n \succeq 0$ is referred to in the literature as a linear matrix inequality (LMI), although the term affine matrix inequality would be the more accurate one. It is easy to verify that the set $\{x : F(x)\succeq 0\}$ is a convex set.
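Spelling out this verification (our one-line expansion, using only the affineness of $F$): for $\alpha\in(0,1)$,
\[ F(\alpha x + (1-\alpha)y) = F_0 + \sum_{i=1}^n\left(\alpha x_i + (1-\alpha)y_i\right)F_i = \alpha F(x) + (1-\alpha)F(y), \]
so for every $z\in\mathbb{R}^m$,
\[ z^TF(\alpha x + (1-\alpha)y)z = \alpha\, z^TF(x)z + (1-\alpha)\, z^TF(y)z \ge 0 \]
whenever $F(x)\succeq 0$ and $F(y)\succeq 0$.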

A system of LMIs, $F_1(x)\succeq 0$, $F_2(x)\succeq 0$, \ldots, $F_k(x)\succeq 0$, can be represented as one single LMI,
\[ F(x) = \begin{bmatrix} F_1(x) & & & \\ & F_2(x) & & \\ & & \ddots & \\ & & & F_k(x) \end{bmatrix} \succeq 0. \]
A system of linear inequalities involving an $m$-by-$n$ constant matrix $A$, of the form $Ax\le b$, can be represented as $m$ scalar LMIs,
\[ b_i - a_i^Tx \ge 0,\quad i = 1,2,\ldots,m, \]
where $a_i^T$ is the $i$-th row of the matrix $A$. We can view each scalar inequality as an LMI. We then represent the $m$ LMIs as one LMI,
\[ F(x) = \begin{bmatrix} b_1 - a_1^Tx & & & \\ & b_2 - a_2^Tx & & \\ & & \ddots & \\ & & & b_m - a_m^Tx \end{bmatrix} \succeq 0. \]
A semidefinite programming problem is a convex optimization problem of the form
\[ \text{minimize } c^Tx \quad\text{subject to } F(x)\succeq 0. \]
Note that the linear objective function $c^Tx$ is a convex function and the constraint set $\{x : F(x)\succeq 0\}$ is a convex set.

The matrix property that we discuss next is useful when converting LMIs into equivalent LMIs or converting some nonlinear matrix inequalities into linear matrix inequalities. We start with a simple observation.

Lemma 1 Let $P = P^T$ be a real $n$-by-$n$ matrix and let $x = Mz$, where $M\in\mathbb{R}^{n\times n}$ is nonsingular, that is, $\det M \ne 0$. Then, $x^TPx \ge 0$ if and only if $z^TM^TPMz \ge 0$, that is,
\[ P \succeq 0 \text{ if and only if } M^TPM \succeq 0. \]
Similarly,
\[ P \succ 0 \text{ if and only if } M^TPM \succ 0. \]
Suppose that we have a square block matrix
\[ \begin{bmatrix} A & B \\ B^T & D \end{bmatrix}, \]
where $A = A^T$ and $D = D^T$. Then, by Lemma 1,
\[ \begin{bmatrix} A & B \\ B^T & D \end{bmatrix} \succeq 0 \text{ if and only if } \begin{bmatrix} O & I \\ I & O \end{bmatrix}\begin{bmatrix} A & B \\ B^T & D \end{bmatrix}\begin{bmatrix} O & I \\ I & O \end{bmatrix} \succeq 0, \]

where $I$ is an identity matrix of appropriate dimension. In other words,
\[ \begin{bmatrix} A & B \\ B^T & D \end{bmatrix} \succeq 0 \text{ if and only if } \begin{bmatrix} D & B^T \\ B & A \end{bmatrix} \succeq 0. \]
Consider next a square block matrix of the form
\[ \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \]
where $A_{11}$ and $A_{22}$ are square and symmetric submatrices, and $A_{12} = A_{21}^T$. Suppose that the matrix $A_{11}$ is invertible. Then,
\[ \begin{bmatrix} I & O \\ -A_{21}A_{11}^{-1} & I \end{bmatrix}\begin{bmatrix} A_{11} & A_{21}^T \\ A_{21} & A_{22} \end{bmatrix}\begin{bmatrix} I & -A_{11}^{-1}A_{21}^T \\ O & I \end{bmatrix} = \begin{bmatrix} A_{11} & O \\ O & A_{22} - A_{21}A_{11}^{-1}A_{21}^T \end{bmatrix}. \]
Let
\[ \Delta_{11} = A_{22} - A_{21}A_{11}^{-1}A_{21}^T. \]
The matrix $\Delta_{11}$ is called the Schur complement of $A_{11}$. Hence, by Lemma 1,
\[ \begin{bmatrix} A_{11} & A_{21}^T \\ A_{21} & A_{22} \end{bmatrix} \succ 0 \text{ if and only if } \begin{bmatrix} A_{11} & O \\ O & \Delta_{11} \end{bmatrix} \succ 0, \]
that is,
\[ \begin{bmatrix} A_{11} & A_{21}^T \\ A_{21} & A_{22} \end{bmatrix} \succ 0 \text{ if and only if } A_{11}\succ 0 \text{ and } \Delta_{11}\succ 0. \]
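As a quick numerical sanity check of the Schur complement test, one can compare eigenvalues directly in MATLAB. The following is our sketch; the block data are arbitrary example matrices, not part of the notes:

% Schur complement check: [A11 A12; A12' A22] > 0  iff  A11 > 0 and
% A22 - A12'*inv(A11)*A12 > 0.  Example data chosen for illustration.
A11 = [2 1; 1 2];                       % symmetric positive definite block
A12 = [1 0; 0 1];
A22 = [3 0; 0 3];
Afull = [A11 A12; A12' A22];
Delta = A22 - A12'*(A11\A12);           % Schur complement of A11
disp(min(eig(Afull)) > 0)               % full matrix positive definite?
disp(min(eig(A11)) > 0 && min(eig(Delta)) > 0)   % the equivalent block test

Both displayed values agree (here both are true), as the Schur complement result predicts.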

Many problems of optimization, control design, and signal processing can be formulated in terms of LMIs. To test whether or not there exists a solution $x$ to $F(x)\succ 0$ is called a feasibility problem. We say that the LMI is infeasible if no solution exists.

Remark 1 Any feasible non-strict LMI can be reduced to an equivalent LMI that is strictly feasible, by eliminating implicit equality constraints and then reducing the resulting LMI by removing any constant nullspace; see [1, Section 2.5.1].

Example 1 We present an example illustrating the LMI feasibility problem. It is well known that a constant square matrix $A\in\mathbb{R}^{n\times n}$ has its eigenvalues in the open left half-complex plane if and only if, for any real symmetric positive definite $Q\in\mathbb{R}^{n\times n}$, the solution $P = P^T$ to the Lyapunov matrix equation
\[ A^TP + PA = -Q \]
is positive definite. That is, the real parts of the eigenvalues of $A$ are all negative if and only if there exists a real symmetric positive definite matrix $P$ such that $A^TP + PA \prec 0$, or equivalently, $-A^TP - PA \succ 0$. Thus, the location of all eigenvalues of $A$ in the open left half-complex plane is equivalent to feasibility of the following LMI,
\[ \begin{bmatrix} P & O \\ O & -A^TP - PA \end{bmatrix} \succ 0, \]
that is, the existence of a symmetric positive definite $P$ such that the above inequality holds.

Finding $P = P^T\succ 0$ such that $A^TP + PA \prec 0$ can be reduced to solving an LMI feasibility problem. Indeed, let
\[ P = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \\ x_2 & x_{n+1} & \cdots & x_{2n-1} \\ \vdots & \vdots & & \vdots \\ x_n & x_{2n-1} & \cdots & x_q \end{bmatrix}, \quad\text{where } q = \frac{n(n+1)}{2}. \]
We select the following basis matrices,
\[ P_1 = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix},\quad P_2 = \begin{bmatrix} 0 & 1 & \cdots & 0 \\ 1 & 0 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix},\quad \ldots,\quad P_q = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}. \]
Note that the $P_i$'s are symmetric and have non-zero elements only in the positions corresponding to $x_i$ in $P$. Any symmetric matrix $P$ can be expressed as a linear combination of the basis matrices. Let
\[ F_i = -A^TP_i - P_iA, \quad i = 1,2,\ldots,q. \]

We then write
\[ A^TP + PA = x_1\left(A^TP_1 + P_1A\right) + x_2\left(A^TP_2 + P_2A\right) + \cdots + x_q\left(A^TP_q + P_qA\right) = -x_1F_1 - x_2F_2 - \cdots - x_qF_q \prec 0. \]
Let
\[ F(x) = x_1F_1 + x_2F_2 + \cdots + x_qF_q; \]
then
\[ P = P^T \succ 0 \quad\text{and}\quad A^TP + PA \prec 0 \]
if and only if $F(x)\succ 0$.

More examples of linear matrix inequalities in systems and control can be found in the book by Boyd et al. [1].
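The construction in Example 1 is easy to carry out numerically. The following sketch (our illustration; the stable matrix $A$ and the trial vector $x$ are example data) builds the basis matrices $P_i$ and the corresponding $F_i$ for $n = 2$, assembles $F(x)$, and checks definiteness via eigenvalues:

% Canonical-form construction of F(x) for the Lyapunov LMI, n = 2, q = 3.
A = [0 1; -2 -3];                    % example stable matrix (chosen here)
P1 = [1 0; 0 0]; P2 = [0 1; 1 0]; P3 = [0 0; 0 1];  % basis of 2x2 symmetric matrices
Pbasis = {P1, P2, P3};
q = numel(Pbasis);
F = cell(q,1);
for i = 1:q
    F{i} = -A'*Pbasis{i} - Pbasis{i}*A;   % F_i = -(A'*P_i + P_i*A)
end
x = [1; 0.2; 0.5];                   % trial decision vector, P = [1 0.2; 0.2 0.5]
Fx = zeros(2);
for i = 1:q
    Fx = Fx + x(i)*F{i};             % F(x) = sum_i x_i F_i
end
P = x(1)*P1 + x(2)*P2 + x(3)*P3;
disp(eig(P))                         % all > 0: P is positive definite
disp(eig(Fx))                        % all > 0: A'*P + P*A is negative definite

For this particular $x$, both eigenvalue lists are positive, so this $P$ certifies the stability of the chosen $A$.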

3 LMI Solvers

The LMI
\[ F(x) = F_0 + x_1F_1 + \cdots + x_nF_n \succ 0 \]
is called the canonical representation of an LMI. LMIs in the canonical form are inefficient both from a storage view-point and from the view-point of the efficiency of the LMI solvers. Modern LMI solvers therefore use a structured representation of LMIs, which we discuss next. One can use MATLAB's LMI toolbox to solve LMIs efficiently. This toolbox has three types of LMI solvers.

3.1 The Feasibility Problem Solver

This solver computes a feasible solution, that is, it solves the feasibility problem defined by a given system of LMI constraints. Using this solver, we can solve any system of LMIs of the form
\[ N^T L(X_1,\ldots,X_k)N \prec M^T R(X_1,\ldots,X_k)M, \]
where $X_1,\ldots,X_k$ are matrix variables, $N$ is the left outer factor, $M$ is the right outer factor, $L(X_1,\ldots,X_k)$ is the left inner factor, and $R(X_1,\ldots,X_k)$ is the right inner factor. The matrices $L(\cdot)$ and $R(\cdot)$ are, in general, symmetric block matrices. We note that the term left-hand side refers to what is on the smaller side of the inequality $0 \prec X$. Thus in $X \succ 0$, the matrix $X$ is still on the right-hand side because it is on the larger side of the inequality.

We next provide a description of an approach that can be used to solve the given LMI system feasibility problem. To initialize the LMI system description, we type setlmis([]). Then we declare matrix variables using the command lmivar. The command lmiterm allows us to specify the LMIs that constitute the LMI system under consideration. Next, we need to obtain an internal representation using the command getlmis. We then compute a feasible solution to the given LMI system using the command feasp. After that we extract matrix variable values with the command dec2mat. In summary, a general structure of a MATLAB program for finding a feasible solution to a set of LMIs has the form

setlmis([])
lmivar
lmiterm
...
lmiterm
getlmis
feasp
dec2mat

We now analyze the above commands in some detail so that the reader can write MATLAB programs for solving LMIs after finishing this section. First, to create a new matrix-valued variable, say $X$, in the given LMI system, we use the command

X=lmivar(type,structure)

The input type specifies the structure of the variable $X$. There are three structures of matrix variables. When type=1, we have a symmetric block diagonal matrix variable. The input type=2 refers to a full rectangular matrix variable. Finally, type=3 refers to other cases.

The second input, structure, gives additional information on the structure of the matrix variable $X$. For example, the matrix variable $X$ could have the form
\[ X = \begin{bmatrix} D_1 & O & \cdots & O \\ O & D_2 & \cdots & O \\ \vdots & & \ddots & \vdots \\ O & O & \cdots & D_r \end{bmatrix}, \]
where each $D_i$ is a square symmetric matrix. For the above example we would use type=1. The above matrix variable has $r$ blocks. The input structure is then an $r\times 2$ matrix whose $i$-th row describes the $i$-th block: the first component of each row gives the corresponding block size, while the second element of each row specifies the block type. For example, X=lmivar(1,[3 1]) specifies a full symmetric $3\times 3$ matrix variable. On the other hand, X=lmivar(2,[2 3]) specifies a rectangular $2\times 3$ matrix variable. Finally, the matrix variable $S$ of the form
\[ S = \begin{bmatrix} s_1 & 0 & 0 & 0 \\ 0 & s_1 & 0 & 0 \\ 0 & 0 & s_2 & s_3 \\ 0 & 0 & s_3 & s_4 \end{bmatrix} \]
can be declared as follows:

S=lmivar(1,[2 0;2 1])

Note that in the above, the second component of the first row of the second input has the value zero, that is, structure(1,2)=0. This describes a scalar block of the form $D_1 = s_1I_2$. Note that the second block is a $2\times 2$ symmetric full block.
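For instance, the $r$-block variable $X$ displayed above would be declared, say with $r = 2$ full symmetric blocks of sizes 2 and 3 (sizes chosen here purely for illustration), as

X=lmivar(1,[2 1; 3 1])   % two full symmetric blocks: 2x2 and 3x3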

The purpose of the next command is to specify the terms of the LMI system of interest. This command has the form

lmiterm(termid,A,B,flag)

We briefly describe each of the four inputs of this command. The first input, termid, is a row vector with four elements that specifies a term of an LMI of the LMI system. We have termid(1)=n to specify the left-hand side of the n-th LMI. We use termid(1)=-n to specify the right-hand side of the n-th LMI. The middle two elements of the input termid specify the block location: termid(2:3)=[i j] refers to the term that belongs to the $(i,j)$ block of the LMI specified by the first component. Finally, termid(4)=0 for the constant term, termid(4)=X for a variable term of the form $AXB$, and termid(4)=-X for a variable term of the form $AX^TB$. The second and the third inputs of the command lmiterm give the values of the left and right outer factors, that is, A and B give the values of the constant outer factors in the variable terms $AXB$ or $AX^TB$. Finally, the fourth input to lmiterm serves as a compact way to specify the expression $AXB + (AXB)^T$: the value flag='s' denotes such a symmetrized expression.

We illustrate the above command on the following LMI,
\[ PA + A^TP \prec 0. \tag{1} \]
We have one LMI with two terms. We could use the following description of this single LMI,

lmiterm([1 1 1 P],1,A)
lmiterm([1 1 1 -P],A',1)

Because $PA + A^TP = PA + (PA)^T$, we can compactly describe (1) with the use of the flag as follows,

lmiterm([1 1 1 P],1,A,'s')

Now, to solve the feasibility problem, we type

[tmin,xfeas]=feasp(lmis)
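Putting these commands together, a complete feasibility program for (1) might look as follows. This is our sketch, not code from the notes: the stable matrix A is example data, and we add the normalization $I \prec P$ to rule out vanishingly small solutions of the homogeneous inequality (1).

% Find P = P' with I < P and P*A + A'*P < 0 for a sample stable A.
A = [0 1; -2 -3];                 % example matrix, chosen for illustration
setlmis([]);
P = lmivar(1,[2 1]);              % 2x2 symmetric matrix variable
lmiterm([1 1 1 P],1,A,'s');       % LMI #1, LHS: P*A + A'*P < 0
lmiterm([2 1 1 0],1);             % LMI #2, LHS: I
lmiterm([-2 1 1 P],1,1);          % LMI #2, RHS: P, so I < P
lmis = getlmis;
[tmin,xfeas] = feasp(lmis);       % tmin < 0 means strictly feasible
Pfeas = dec2mat(lmis,xfeas,P)     % extract the matrix value of P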

In general, for a given LMI feasibility problem of the form
\[ \text{find } x \text{ such that } L(x) \prec R(x), \]
the command feasp solves the auxiliary convex problem
\[ \text{minimize } t \quad\text{subject to}\quad L(x) \prec R(x) + tI. \]
The system of LMIs is feasible if the minimal $t$ is negative. We add that the current value of $t$ is displayed by feasp at each iteration. Finally, we convert the output of the LMI solver into matrix variables using the command

P=dec2mat(lmis,xfeas,P)

3.2 Minimizing Linear Objective Under LMI Constraints

This solver is invoked using the function mincx. It solves the convex problem
\[ \text{minimize } c^Tx \quad\text{subject to}\quad A(x) \prec B(x). \]
The notation $A(x)\prec B(x)$ is a shorthand notation for general structured LMI systems. Thus, to solve a mincx problem, in addition to specifying the LMI constraints as in the feasp problem, we also declare the linear objective function. Then we invoke the function mincx. We illustrate both the feasp and mincx solvers in the following example.

Example 2 We consider the optimization problem
\[ \text{minimize } c^Tx \quad\text{subject to}\quad Ax\le b, \]
where
\[ c = \begin{bmatrix} -4 \\ -5 \end{bmatrix},\qquad A = \begin{bmatrix} 1 & 1 \\ 1 & 3 \\ 2 & 1 \end{bmatrix},\qquad b = \begin{bmatrix} 8 \\ 18 \\ 14 \end{bmatrix}. \]
We first solve the feasibility problem, that is, we find an $x$ such that $Ax\le b$, using the feasp solver. After that we solve the above minimization problem using the mincx solver. A simple MATLAB code accomplishing the above tasks is shown below.

% Enter problem data
A = [1 1; 1 3; 2 1];
b = [8 18 14]';
c = -[4 5]';
setlmis([]);
X = lmivar(2,[2 1]);           % 2x1 full (vector) variable
lmiterm([1 1 1 X],A(1,:),1);   % (1,1) block: a_1'*x
lmiterm([1 1 1 0],-b(1));      % (1,1) block: -b_1, so a_1'*x - b_1 < 0
lmiterm([1 2 2 X],A(2,:),1);
lmiterm([1 2 2 0],-b(2));
lmiterm([1 3 3 X],A(3,:),1);
lmiterm([1 3 3 0],-b(3));
lmis = getlmis;
%
disp(' feasp result ')
[tmin,xfeas] = feasp(lmis);
x_feasp = dec2mat(lmis,xfeas,X)
disp(' mincx result ')
[objective,x_mincx] = mincx(lmis,c,[0.0001 1000 0 0 1])

The feasp solver produces
\[ x_{\text{feasp}} = \begin{bmatrix} -64.3996 \\ 25.1712 \end{bmatrix}. \]
The mincx solver produces
\[ x_{\text{mincx}} = \begin{bmatrix} 3.0000 \\ 5.0000 \end{bmatrix}. \]
In the next example we discuss the function defcx, which we can use to construct the vector c used by the LMI solver mincx.

Example 3 Suppose that we wish to solve the optimization problem
\[ \text{minimize } \operatorname{trace}(P) \quad\text{subject to}\quad A^TP + PA \prec 0, \]
where $P = P^T\succ 0$.

We can use the function mincx to solve the above problem. However, to use the function mincx, we need a vector c such that $c^Tx = \operatorname{trace}(P)$. After specifying the LMIs and obtaining their internal representation using, for example, the command lmisys=getlmis, we can obtain the desired c with the following MATLAB code,

q = decnbr(lmisys);             % number of decision variables
c = zeros(q,1);
for j = 1:q
    Pj = defcx(lmisys,j,P);     % value of P when x_j = 1 and x_i = 0 for i ~= j
    c(j) = trace(Pj);
end

Having obtained the vector c, we can use the function mincx to solve the optimization problem.
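For completeness, a full program for Example 3 might look as follows. This is our sketch: the stable matrix A is example data, and the extra bound $I \prec P$ is added here so that the infimum of the trace is attained (the constraints of Example 3 are otherwise homogeneous in $P$).

% minimize trace(P) subject to A'*P + P*A < 0 and I < P.
A = [0 1; -2 -3];                   % example stable matrix
setlmis([]);
P = lmivar(1,[2 1]);                % 2x2 symmetric variable
lmiterm([1 1 1 P],1,A,'s');         % LMI #1, LHS: P*A + A'*P < 0
lmiterm([2 1 1 0],1);               % LMI #2, LHS: I
lmiterm([-2 1 1 P],1,1);            % LMI #2, RHS: P, so I < P
lmisys = getlmis;
q = decnbr(lmisys);
c = zeros(q,1);
for j = 1:q
    Pj = defcx(lmisys,j,P);
    c(j) = trace(Pj);
end
[copt,xopt] = mincx(lmisys,c);
Popt = dec2mat(lmisys,xopt,P)       % optimal P; copt equals trace(Popt)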

3.3 Generalized Eigenvalue Minimization Problem

This problem can be stated as
\[ \begin{array}{ll} \text{minimize} & \lambda \\ \text{subject to} & C(x) \prec D(x), \\ & 0 \prec B(x), \\ & A(x) \prec \lambda B(x). \end{array} \]
In the above, we need to distinguish between the standard LMI constraints of the form $C(x)\prec D(x)$ and the linear-fractional LMIs of the form $A(x)\prec\lambda B(x)$ that are concerned with the generalized eigenvalue $\lambda$, that is, the LMIs involving $\lambda$. The number of linear-fractional constraints is specified with the input nflc. The generalized eigenvalue minimization problem under LMI constraints is solved by calling the solver gevp. The basic structure of the gevp solver has the form

[lopt,xopt]=gevp(lmisys,nflc)

It returns lopt, which is the global minimum of the generalized eigenvalue, and xopt, which is the optimal decision vector variable. The argument lmisys is the system of LMIs, $C(x)\prec D(x)$, $0\prec B(x)$, and $A(x)\prec\lambda B(x)$ for $\lambda = 1$. As in the previous solvers, the corresponding optimal values of the matrix variables are obtained using dec2mat. There are other inputs to gevp, but they are optional. For more information on this type of LMI solver, we refer to the LMI Lab in the MATLAB Robust Control Toolbox user's guide.

Example 4 Consider a system model of the form
\[ \dot{x} = Ax,\quad x(0) = x_0, \]
where the matrix $A\in\mathbb{R}^{n\times n}$ is asymptotically stable. We wish to estimate the decay rate of the system trajectory $x(t)$, that is, we wish to find constants $\eta > 0$ and $M(x_0) > 0$ such that
\[ \|x(t)\| \le e^{-\eta t}M(x_0). \tag{2} \]
Because $A$ is asymptotically stable, by Lyapunov's theorem, for any $Q = Q^T\succ 0$ there exists $P = P^T\succ 0$ such that $A^TP + PA = -Q$, that is,
\[ x^T\left(A^TP + PA\right)x = -x^TQx. \]
Let
\[ V = x^TPx. \]
Then, we have
\[ \dot V = x^T\left(A^TP + PA\right)x, \]
which is the Lyapunov derivative of $V$ evaluated on trajectories of $\dot x = Ax$. Let
\[ \alpha = \min\left(-\frac{\dot V}{V}\right). \tag{3} \]
Then, we have $-\alpha \ge \dot V/V$, and since $V = x^TPx > 0$ for $x\ne 0$, we obtain
\[ \dot V \le -\alpha V. \tag{4} \]

Therefore,
\[ V(t) \le e^{-\alpha t}V(0). \tag{5} \]
We refer to $\alpha$ as the decay rate of $V$. We represent (5) as
\[ x(t)^TPx(t) \le e^{-\alpha t}x_0^TPx_0. \tag{6} \]
We have
\[ \lambda_{\min}(P)\|x\|_2^2 \le x^TPx. \tag{7} \]
Because $P = P^T\succ 0$, we have that $\lambda_{\min}(P) > 0$. Combining (6) and (7), and dividing both sides by $\lambda_{\min}(P) > 0$, gives
\[ \|x(t)\|^2 \le e^{-\alpha t}\,\frac{x_0^TPx_0}{\lambda_{\min}(P)}. \]
Hence,
\[ \|x(t)\| \le e^{-\frac{\alpha}{2}t}\sqrt{\frac{x_0^TPx_0}{\lambda_{\min}(P)}}. \tag{8} \]
We represent $P = P^T$ as $P = P^{1/2}P^{1/2}$. Hence,
\[ x_0^TPx_0 = x_0^TP^{1/2}P^{1/2}x_0 = \left\|P^{1/2}x_0\right\|^2. \tag{9} \]
Taking (9) into account, we represent (8) as
\[ \|x(t)\| \le e^{-\frac{\alpha}{2}t}\,\frac{\left\|P^{1/2}x_0\right\|}{\sqrt{\lambda_{\min}(P)}}. \tag{10} \]
Comparing the above with (2) yields
\[ \eta = \alpha/2 \quad\text{and}\quad M(x_0) = \frac{\left\|P^{1/2}x_0\right\|}{\sqrt{\lambda_{\min}(P)}}. \]
In terms of LMIs, finding the largest $\alpha$ that satisfies (4) is equivalent to minimizing $-\alpha$ subject to
\[ P \succ 0,\qquad A^TP + PA \preceq -\alpha P, \]
which is a generalized eigenvalue minimization problem with $\lambda = -\alpha$.

For example, if
\[ A = \begin{bmatrix} -1.1853 & 0.9134 & 0.2785 \\ 0.9058 & -1.3676 & 0.5469 \\ 0.1270 & 0.0975 & -3.0000 \end{bmatrix}, \]
then finding $\alpha$ that satisfies (3) can be accomplished using the following LMIs:

A = [-1.1853 0.9134 0.2785; 0.9058 -1.3676 0.5469; 0.1270 0.0975 -3.0000];
setlmis();
P = lmivar(1,[3 1]);            % P
lmiterm([-1 1 1 P],1,1);        % P > 0.01*I, right-hand side
lmiterm([1 1 1 0],.01);         % P > 0.01*I, left-hand side
lmiterm([2 1 1 P],1,A,'s');     % linear fractional constraint, left-hand side
lmiterm([-2 1 1 P],1,1);        % linear fractional constraint, right-hand side
lmis = getlmis;
[gamma,P_opt] = gevp(lmis,1);
P = dec2mat(lmis,P_opt,P)
alpha = -gamma

The result is
\[ \alpha = 0.6561 \quad\text{and}\quad P = \begin{bmatrix} 0.6996 & 0.7466 & 0.0296 \\ 0.7466 & 0.8537 & 0.2488 \\ 0.0296 & 0.2488 & 3.2307 \end{bmatrix}. \]
Recall that $\eta = \alpha/2$.

4 Notes

A quick introduction to MATLAB's LMI toolbox is the tutorial that can be accessed by typing lmidem. It is very instructive, and we recommend trying it. We also recommend a tutorial on the mathematical theory and process control applications of linear matrix inequalities (LMIs) and bilinear matrix inequalities (BMIs) by VanAntwerp and Braatz [4].

We mention that in addition to MATLAB's LMI toolbox there is another toolbox for solving LMIs called LMITOOL, a built-in software package in the Scilab toolbox, developed at INRIA in France. Scilab offers free software for numerical optimization. There is a version of LMITOOL for MATLAB that can be downloaded from the website of the Scilab Consortium. Yet Another LMI Parser (YALMIP) for solving LMIs was developed in Switzerland in the Automatic Control Laboratory at ETH. YALMIP is an intuitive and flexible modelling language for solving optimization problems in MATLAB. YALMIP supports linear programming (LP), quadratic programming (QP), second-order cone programming (SOCP), semidefinite programming, determinant maximization, mixed-integer programming, posynomial geometric programming, semidefinite programs with bilinear matrix inequalities (BMIs), and multiparametric linear and quadratic programming. A very popular software package for solving LMIs, called CVX, was developed by the group of Stephen Boyd at Stanford. It runs in MATLAB. The CVX example library has hundreds of examples.
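As an illustration of the CVX modelling style, the Lyapunov feasibility problem of Example 1 could be posed roughly as follows. This is our sketch, assuming CVX is installed; the matrix A and the identity margins are choices made here for illustration:

% Lyapunov LMI in CVX: find P = P' > 0 with A'*P + P*A < 0.
A = [0 1; -2 -3];                  % example stable matrix
n = size(A,1);
cvx_begin sdp
    variable P(n,n) symmetric
    P >= eye(n);                   % P > 0, normalized as P >= I
    A'*P + P*A <= -eye(n);         % strict inequality enforced with a margin
cvx_end
P

Note how the LMIs are written directly as matrix inequalities, with no term-by-term bookkeeping as in lmiterm.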

5 Practice Problems

PP 1 Show that the set $\Omega = \left\{x : x^TPx \le 1\right\}$ is a convex set, where $P$ is a positive definite matrix.

PP 2
1. Show that the function $f_1(x) = x^TPx$ is a convex function, where $P$ is a positive definite matrix;
2. Show that the function $f_2(x) = x_1^2 + 2x_1x_2 + 3x_2^2 - 5x_1 + 6x_2 + 10$ is a convex function.

PP 3 Show that if $f(x)$ is a convex function, then $\Omega = \{x : f(x)\le c\}$ is a convex set for any real number $c$.

PP 4 Let $D$ and $E$ be given real matrices of appropriate dimensions and let $F$ satisfy $F^TF \preceq I$. Show that the following matrix inequality holds for any $\epsilon > 0$:
\[ DFE + E^TF^TD^T \preceq \epsilon DD^T + \frac{1}{\epsilon}E^TE. \tag{11} \]

PP 5 Given a positive scalar $\alpha$, derive an LMI condition under which the real parts of the eigenvalues of a real matrix $A\in\mathbb{R}^{n\times n}$ are smaller than $-\alpha$.

PP 6 Establish an LMI condition for asymptotic stability of the uncertain linear time-invariant system $\dot x = A(\lambda)x$, where
\[ A(\lambda) \in \left\{ \sum_{i=1}^p \lambda_iA_i : \lambda_i \ge 0,\ \sum_{i=1}^p\lambda_i = 1\right\}, \]
and each individual $A_i$ is asymptotically stable. Use a common Lyapunov function approach.

PP 7 Investigate asymptotic stability of the uncertain linear time-varying system
\[ \dot x = \left(A + DF(t)E\right)x \]
using a quadratic Lyapunov function candidate $V(x) = x^TPx$, where $A\in\mathbb{R}^{n\times n}$, $D\in\mathbb{R}^{n\times m}$, and $E\in\mathbb{R}^{r\times n}$ are constant matrices, while $F(t)\in\mathbb{R}^{m\times r}$ models uncertainties such that $F(t)^TF(t) \preceq \gamma^2I_r$, $\gamma > 0$.

PP 8 Given a real matrix $A\in\mathbb{R}^{n\times n}$, construct a set of LMIs that determine whether or not its eigenvalues satisfy $\alpha < \operatorname{Re}\lambda(A) < \beta$, where $\alpha < \beta$ are two real scalars. Organize your computations using the obtained LMIs in the form of a MATLAB program.

PP 9 The problem of designing a stabilizing state feedback $u[k] = -Kx[k]$ for a discrete-time LTI system $x[k+1] = Ax[k] + Bu[k]$ can be reduced to finding a matrix $P\succ 0$ such that
\[ (A - BK)^TP(A - BK) - P \prec 0, \tag{12} \]
which is a nonlinear matrix inequality (NMI) with respect to $P\succ 0$ and $K$. Formulate LMIs that are equivalent to the above conditions.

PP 10 Use the LMIs obtained in Practice Problem PP 9 to compute a stabilizing state feedback for the linear discrete-time system
\[ x[k+1] = \begin{bmatrix} 0 & 1 \\ 2 & 3 \end{bmatrix}x[k] + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u[k]. \]

PP 11 Consider the linear continuous-time system
\[ \begin{aligned} \dot x &= Ax + Bw,\quad x(0) = 0, \\ z &= Cx + Dw, \end{aligned} \tag{13} \]
where $x\in\mathbb{R}^n$ is the state, $w\in\mathbb{R}^q$ is the disturbance input, $z\in\mathbb{R}^p$ is the controlled output, and $A$, $B$, $C$, $D$ are constant matrices of appropriate dimensions. Show that if there exist $P = P^T\succ 0$ and $\gamma > 0$ such that
\[ \begin{bmatrix} A^TP + PA & PB & C^T \\ B^TP & -\gamma I & D^T \\ C & D & -\gamma I \end{bmatrix} \prec 0, \tag{14} \]
then the system is asymptotically stable and, furthermore,
\[ \int_0^t z(\tau)^Tz(\tau)\,d\tau \le \gamma^2\int_0^t w(\tau)^Tw(\tau)\,d\tau \quad\text{for all } t\ge 0. \]

PP 12 For the interconnected linear continuous-time system
\[ \dot x_1 = A_{11}x_1 + A_{12}x_2 + B_1u_1, \tag{15} \]
\[ \dot x_2 = A_{21}x_1 + A_{22}x_2 + B_2u_2, \tag{16} \]
design a stabilizing decentralized state feedback
\[ u_1 = -K_1x_1,\qquad u_2 = -K_2x_2, \]
using a combined Lyapunov function candidate $V(x) = x_1^TP_1x_1 + x_2^TP_2x_2$, where $P_1$ and $P_2$ are positive definite matrices to be determined. Formulate the design problem in terms of LMIs.

6 Solutions to Practice Problems

SPP 1 We will show that for any $x$ and $y$ in $\Omega$, that is, $x^TPx\le 1$ and $y^TPy\le 1$, we have $\alpha x + (1-\alpha)y\in\Omega$ for all $\alpha\in(0,1)$.

Indeed,
\[ \begin{aligned} (\alpha x + (1-\alpha)y)^TP(\alpha x + (1-\alpha)y) &= \alpha^2x^TPx + (1-\alpha)^2y^TPy + \alpha(1-\alpha)\left(x^TPy + y^TPx\right) \\ &\le \alpha^2 + (1-\alpha)^2 + \alpha(1-\alpha)\left(x^TPy + y^TPx\right). \end{aligned} \]
Taking into account that
\[ (x-y)^TP(x-y) \ge 0, \]
we obtain
\[ x^TPy + y^TPx \le x^TPx + y^TPy \le 2, \]
and thus
\[ (\alpha x + (1-\alpha)y)^TP(\alpha x + (1-\alpha)y) \le \alpha^2 + (1-\alpha)^2 + 2\alpha(1-\alpha) = \left(\alpha + (1-\alpha)\right)^2 = 1, \]
which implies $\alpha x + (1-\alpha)y\in\Omega$. The proof is complete.

SPP 2
1. First, we note that $f_1(x)$ is a real-valued function $f_1:\mathbb{R}^n\to\mathbb{R}$ defined on $\mathbb{R}^n$, a convex set. Then, for any $x,y\in\mathbb{R}^n$ and any $\alpha\in(0,1)$, we obtain
\[ f_1(\alpha x + (1-\alpha)y) = (\alpha x + (1-\alpha)y)^TP(\alpha x + (1-\alpha)y) = \alpha^2x^TPx + (1-\alpha)^2y^TPy + \alpha(1-\alpha)\left(x^TPy + y^TPx\right). \]
Taking into account that $(x-y)^TP(x-y)\ge 0$, we obtain
\[ x^TPy + y^TPx \le x^TPx + y^TPy = f_1(x) + f_1(y), \]
and hence
\[ f_1(\alpha x + (1-\alpha)y) \le \alpha^2f_1(x) + (1-\alpha)^2f_1(y) + \alpha(1-\alpha)\left(f_1(x) + f_1(y)\right) = \alpha f_1(x) + (1-\alpha)f_1(y). \]
Therefore, $f_1(x)$ is a convex function.

2. We rewrite $f_2(x)$ as
\[ f_2(x) = x^TPx + c^Tx + 10,\quad\text{where}\quad P = \begin{bmatrix} 1 & 1 \\ 1 & 3 \end{bmatrix},\quad c = \begin{bmatrix} -5 \\ 6 \end{bmatrix}. \]
Since $P$ is positive definite, the function $x^TPx$ in $f_2(x)$ is convex. The function $c^Tx + 10$ is affine and thus convex. The sum of two convex functions is also convex. Therefore, $f_2(x)$ is convex.

SPP 3 Given any $x,y\in\Omega$, that is, $f(x)\le c$ and $f(y)\le c$, we have
\[ f(\alpha x + (1-\alpha)y) \le \alpha f(x) + (1-\alpha)f(y) \le \alpha c + (1-\alpha)c = c \]
for any $\alpha\in(0,1)$. Thus, $\alpha x + (1-\alpha)y\in\Omega$, which implies that $\Omega$ is a convex set.

SPP 4 Using properties of symmetric nonnegative definite matrices, we obtain
\[ \left(\epsilon^{1/2}D^T - \epsilon^{-1/2}FE\right)^T\left(\epsilon^{1/2}D^T - \epsilon^{-1/2}FE\right) \succeq 0, \]
which is equivalent to
\[ DFE + E^TF^TD^T \preceq \epsilon DD^T + \frac{1}{\epsilon}E^TF^TFE. \]
Since $F^TF\preceq I$, we obtain that
\[ DFE + E^TF^TD^T \preceq \epsilon DD^T + \frac{1}{\epsilon}E^TE. \]

SPP 5 The eigenvalues of $A + \alpha I$, denoted $\lambda_i(A + \alpha I)$, are $\lambda_i(A) + \alpha$. Therefore, the statement that the real parts of the eigenvalues of a real matrix $A\in\mathbb{R}^{n\times n}$ are smaller than $-\alpha$ is equivalent to the matrix $A + \alpha I$ being a Hurwitz matrix. This condition can be expressed in terms of an LMI as the existence of a positive definite matrix $P$ such that
\[ (A + \alpha I)^TP + P(A + \alpha I) \prec 0, \]
or equivalently,
\[ \begin{bmatrix} P & O \\ O & -A^TP - PA - 2\alpha P \end{bmatrix} \succ 0. \]
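A small LMI Lab program for the condition in SPP 5 might look as follows. This is our sketch; the matrix A and the value of alpha are example data:

% Check Re(lambda(A)) < -alpha via (A+alpha*I)'*P + P*(A+alpha*I) < 0, P > 0.
A = [0 1; -2 -3];  alpha = 0.4;           % example data
n = size(A,1);
setlmis([]);
P = lmivar(1,[n 1]);
lmiterm([1 1 1 P],1,A+alpha*eye(n),'s');  % LHS: (A+alpha*I)'*P + P*(A+alpha*I) < 0
lmiterm([-2 1 1 P],1,1);                  % RHS of LMI #2: P > 0
lmis = getlmis;
[tmin,xfeas] = feasp(lmis);
if tmin < 0
    disp('Re lambda(A) < -alpha : YES')
else
    disp('Re lambda(A) < -alpha : NO')
end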

SPP 6 The system is asymptotically stable if we can find a symmetric $P\succ 0$ such that
\[ A(\lambda)^TP + PA(\lambda) \prec 0. \tag{17} \]
Note that if we can find a common $P\succ 0$ such that
\[ A_i^TP + PA_i \prec 0,\quad i = 1,2,\ldots,p, \tag{18} \]
then, multiplying each matrix inequality in (18) by the corresponding nonnegative scalar $\lambda_i$ and adding them up, we obtain the matrix inequality (17). Therefore, the LMIs (18), together with $P\succ 0$, constitute a sufficient condition for asymptotic stability of the uncertain system.

SPP 7 The uncertain linear time-varying system $\dot x = (A + DF(t)E)x$ is asymptotically stable if we can find a symmetric $P\succ 0$ such that
\[ (A + DF(t)E)^TP + P(A + DF(t)E) \prec 0. \tag{19} \]
Using
\[ 0 \preceq \left(PDF(t) - E^T\right)\left(PDF(t) - E^T\right)^T = PDF(t)F(t)^TD^TP + E^TE - E^TF(t)^TD^TP - PDF(t)E \preceq \gamma^2PDD^TP + E^TE - E^TF(t)^TD^TP - PDF(t)E, \]
we obtain that
\[ E^TF(t)^TD^TP + PDF(t)E \preceq \gamma^2PDD^TP + E^TE \tag{20} \]
holds for any $F(t)$ satisfying $F(t)^TF(t)\preceq\gamma^2I_r$. Combining (19) and (20), we obtain that the uncertain system is asymptotically stable if $P\succ 0$ and
\[ A^TP + PA + \gamma^2PDD^TP + E^TE \prec 0, \]
which is equivalent to the LMI condition
\[ \begin{bmatrix} A^TP + PA + E^TE & PD \\ D^TP & -\gamma^{-2}I_m \end{bmatrix} \prec 0. \]
The above equivalence can be established using the Schur complement.
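Explicitly, the Schur complement step reads as follows (our expansion of the last sentence):
\[ \begin{bmatrix} A^TP + PA + E^TE & PD \\ D^TP & -\gamma^{-2}I_m \end{bmatrix} \prec 0 \iff -\gamma^{-2}I_m \prec 0 \ \text{ and }\ A^TP + PA + E^TE - PD\left(-\gamma^{-2}I_m\right)^{-1}D^TP \prec 0, \]
and the last inequality is exactly $A^TP + PA + E^TE + \gamma^2PDD^TP \prec 0$.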

SPP 8 Recall that the symbol $\operatorname{Re}$ denotes the real part of a given complex number. The condition $\alpha < \operatorname{Re}\lambda(A)$ is equivalent to the matrix $\alpha I - A$ being Hurwitz, which is equivalent to
\[ P_1\succ 0,\qquad (\alpha I - A)^TP_1 + P_1(\alpha I - A) \prec 0. \]
Similarly, the condition $\operatorname{Re}\lambda(A) < \beta$ is equivalent to the matrix $A - \beta I$ being Hurwitz, which is equivalent to
\[ P_2\succ 0,\qquad (A - \beta I)^TP_2 + P_2(A - \beta I) \prec 0. \]
A simple MATLAB program for solving the above LMIs is shown below.

n=size(A,1);
setlmis([]);
P1=lmivar(1,[n 1]);
P2=lmivar(1,[n 1]);
lmiterm([1 1 1 P1],1,alpha*eye(n)-A,'s');
lmiterm([-2 1 1 P1],1,1);
lmiterm([3 1 1 P2],1,A-beta*eye(n),'s');
lmiterm([-4 1 1 P2],1,1);
lmis=getlmis;
[tmin,xfeas]=feasp(lmis);
if tmin<0
    disp(' alpha < Re lambda(A) < beta ? : YES ')
else
    disp(' alpha < Re lambda(A) < beta ? : NO ')
end

SPP 9 Since $P\succ 0$, the matrix inequality (12) is equivalent to
\[ \begin{bmatrix} P^{-1} & A - BK \\ (A - BK)^T & P \end{bmatrix} \succ 0, \]
which can be verified using the Schur complement. Let $W = P^{-1}$ and pre-multiply and post-multiply the above inequality by $\operatorname{diag}(I, W)$ to obtain
\[ \begin{bmatrix} W & AW - BKW \\ WA^T - WK^TB^T & W \end{bmatrix} \succ 0. \]

Let $M = KW$; then we obtain the LMI condition
\[ \begin{bmatrix} W & AW - BM \\ WA^T - M^TB^T & W \end{bmatrix} \succ 0. \tag{21} \]
When the above LMI is feasible, the controller gain is given by $K = MW^{-1}$.

SPP 10 The following is a simple MATLAB program solving the LMI (21).

A=[0 1; 2 3];
B=[0; 1];
n=size(A,1);
m=size(B,2);
setlmis([]);
W=lmivar(1,[n 1]);
M=lmivar(2,[m n]);
lmiterm([1 1 1 W],1,-1);      % (1,1): -W
lmiterm([1 1 2 W],A,1);       % (1,2): AW
lmiterm([1 1 2 M],B,-1);      % (1,2): -BM
lmiterm([1 2 2 W],1,-1);      % (2,2): -W
lmis=getlmis;
[tmin,xfeas]=feasp(lmis);
if tmin<0
    WW = dec2mat(lmis,xfeas,W);
    MM = dec2mat(lmis,xfeas,M);
    K = MM*inv(WW)
    eig(A-B*K)
end

SPP 11 We first observe from (14) that the (1,1) block of the LMI is negative definite, that is, $A^TP + PA \prec 0$, and thus the system is asymptotically stable.

Next, using the Schur complement, we can show that (14) is equivalent to
\[ \begin{bmatrix} A^TP + PA & PB \\ B^TP & -\gamma I \end{bmatrix} + \gamma^{-1}\begin{bmatrix} C^T \\ D^T \end{bmatrix}\begin{bmatrix} C & D \end{bmatrix} \prec 0. \]
We pre-multiply the above by $\begin{bmatrix} x^T & w^T \end{bmatrix}$ and post-multiply it by $\begin{bmatrix} x^T & w^T \end{bmatrix}^T$ to obtain
\[ x^T\left(A^TP + PA\right)x + x^TPBw + w^TB^TPx - \gamma w^Tw + \gamma^{-1}z^Tz \le 0. \tag{22} \]
We next evaluate the Lyapunov derivative of $V(x) = x^TPx$ along solutions of the system to obtain
\[ \dot V(x) = \dot x^TPx + x^TP\dot x = (Ax + Bw)^TPx + x^TP(Ax + Bw) = x^T\left(A^TP + PA\right)x + x^TPBw + w^TB^TPx \le \gamma w^Tw - \gamma^{-1}z^Tz. \]
Integrating both sides of the above inequality from $0$ to $t$ gives
\[ V(x(t)) - V(x(0)) \le \gamma\int_0^t w(\tau)^Tw(\tau)\,d\tau - \gamma^{-1}\int_0^t z(\tau)^Tz(\tau)\,d\tau. \]
Since $V(x(t))\ge 0$ and $V(x(0)) = 0$, we have
\[ \int_0^t z(\tau)^Tz(\tau)\,d\tau \le \gamma^2\int_0^t w(\tau)^Tw(\tau)\,d\tau. \]

SPP 12 Substituting the decentralized state feedback into the interconnected system, we obtain
\[ \frac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} A_{11} - B_1K_1 & A_{12} \\ A_{21} & A_{22} - B_2K_2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}. \tag{23} \]
We consider the following Lyapunov function candidate for the closed-loop system,
\[ V(x) = x_1^TP_1x_1 + x_2^TP_2x_2 = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}^T\begin{bmatrix} P_1 & O \\ O & P_2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}. \]
The closed-loop system (23) is asymptotically stable if
\[ \begin{bmatrix} A_{11} - B_1K_1 & A_{12} \\ A_{21} & A_{22} - B_2K_2 \end{bmatrix}^T\begin{bmatrix} P_1 & O \\ O & P_2 \end{bmatrix} + \begin{bmatrix} P_1 & O \\ O & P_2 \end{bmatrix}\begin{bmatrix} A_{11} - B_1K_1 & A_{12} \\ A_{21} & A_{22} - B_2K_2 \end{bmatrix} \prec 0. \]

Pre-multiplying and post-multiplying the above by
\[ \begin{bmatrix} Q_1 & O \\ O & Q_2 \end{bmatrix} = \begin{bmatrix} P_1 & O \\ O & P_2 \end{bmatrix}^{-1} \]
yields
\[ \begin{bmatrix} Q_1 & O \\ O & Q_2 \end{bmatrix}\begin{bmatrix} A_{11} - B_1K_1 & A_{12} \\ A_{21} & A_{22} - B_2K_2 \end{bmatrix}^T + \begin{bmatrix} A_{11} - B_1K_1 & A_{12} \\ A_{21} & A_{22} - B_2K_2 \end{bmatrix}\begin{bmatrix} Q_1 & O \\ O & Q_2 \end{bmatrix} \prec 0, \]
where $Q_1 = P_1^{-1}$ and $Q_2 = P_2^{-1}$. Let $M_1 = K_1Q_1$ and $M_2 = K_2Q_2$; then performing some matrix multiplications gives
\[ \begin{bmatrix} A_{11}Q_1 + Q_1A_{11}^T - B_1M_1 - M_1^TB_1^T & Q_1A_{21}^T + A_{12}Q_2 \\ A_{21}Q_1 + Q_2A_{12}^T & A_{22}Q_2 + Q_2A_{22}^T - B_2M_2 - M_2^TB_2^T \end{bmatrix} \prec 0, \]
which, together with $Q_1\succ 0$ and $Q_2\succ 0$, constitutes the LMIs for asymptotic stability of the closed-loop system. If the LMIs are feasible, the controller gains are given by $K_1 = M_1Q_1^{-1}$ and $K_2 = M_2Q_2^{-1}$. A simple MATLAB program for solving the above LMIs is given below.

n1=size(A11,1); m1=size(B1,2);
n2=size(A22,1); m2=size(B2,2);
setlmis([]);
Q1=lmivar(1,[n1 1]);
Q2=lmivar(1,[n2 1]);
M1=lmivar(2,[m1 n1]);
M2=lmivar(2,[m2 n2]);
lmiterm([1 1 1 Q1],A11,1,'s');   % (1,1): A11*Q1 + Q1*A11'
lmiterm([1 1 1 M1],B1,-1,'s');   % (1,1): -B1*M1 - M1'*B1'
lmiterm([1 1 2 Q1],1,A21');      % (1,2): Q1*A21'
lmiterm([1 1 2 Q2],A12,1);       % (1,2): A12*Q2
lmiterm([1 2 2 Q2],A22,1,'s');   % (2,2): A22*Q2 + Q2*A22'
lmiterm([1 2 2 M2],B2,-1,'s');   % (2,2): -B2*M2 - M2'*B2'
lmiterm([-2 1 1 Q1],1,1);        % Q1 > 0

lmiterm([-3 1 1 Q2],1,1);        % Q2 > 0
lmis=getlmis;
[tmin,xfeas]=feasp(lmis);
if tmin<0
    q1 = dec2mat(lmis,xfeas,Q1);
    q2 = dec2mat(lmis,xfeas,Q2);
    m1 = dec2mat(lmis,xfeas,M1);
    m2 = dec2mat(lmis,xfeas,M2);
    K1 = m1*inv(q1);
    K2 = m2*inv(q2);
    Acl = [A11-B1*K1 A12; A21 A22-B2*K2];
    eig(Acl)
end

References

[1] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. SIAM Studies in Applied Mathematics. Society for Industrial and Applied Mathematics, Philadelphia, 1994.

[2] E. K. P. Chong and S. H. Żak. An Introduction to Optimization. John Wiley & Sons, Inc., Hoboken, New Jersey, fourth edition, 2013.

[3] B. Ronen and S. Pass. Focused Operations Management: Achieving More with Existing Resources. John Wiley & Sons, Hoboken, New Jersey, 2008.

[4] J. G. VanAntwerp and R. D. Braatz. A tutorial on linear and bilinear matrix inequalities. Journal of Process Control, 10(4):363-385, 2000.