Linear Matrix Inequality (LMI)


A linear matrix inequality is an expression of the form

$$F(x) := F_0 + x_1 F_1 + \cdots + x_m F_m > 0 \qquad (1)$$

where $x = (x_1, \ldots, x_m) \in \mathbb{R}^m$, $F_0, \ldots, F_m$ are real symmetric matrices, and the inequality $> 0$ in (1) means positive definite, i.e., $u^T F(x) u > 0$ for all $u \in \mathbb{R}^n$, $u \neq 0$. Equivalently, the smallest eigenvalue of $F(x)$ is positive.
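For illustration only: the notes rely on the MATLAB LMI Control Toolbox, but an LMI of the form (1) can equally be posed as a semidefinite feasibility problem in the Python package cvxpy. The matrices F0, F1, F2 below are hypothetical data, and the strict inequality is approximated with a small margin eps, as is standard with numerical SDP solvers.

import cvxpy as cp
import numpy as np

F0 = np.array([[1.0, 0.0], [0.0, -1.0]])   # hypothetical symmetric matrices F0, F1, F2
F1 = np.array([[0.0, 1.0], [1.0, 0.0]])
F2 = np.array([[0.0, 0.0], [0.0, 2.0]])

x = cp.Variable(2)                         # decision vector x = (x1, x2)
F = F0 + x[0] * F1 + x[1] * F2             # affine matrix-valued function F(x)
eps = 1e-6                                 # strictness margin for the numerical solver
prob = cp.Problem(cp.Minimize(0), [F >> eps * np.eye(2)])
prob.solve()
print(prob.status, x.value)                # 'optimal' means the LMI F(x) > 0 is feasible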

Definition (Linear matrix inequality, LMI). A linear matrix inequality is

$$F(x) > 0 \qquad (2)$$

where $F$ is an affine function mapping a finite-dimensional vector space $V$ to the set $\mathbb{S}^n := \{M : M = M^T \in \mathbb{R}^{n \times n}\}$, $n > 0$, of real symmetric matrices.

Remark. Recall from this definition that an affine mapping $F : V \to \mathbb{S}^n$ necessarily takes the form $F(x) = F_0 + T(x)$, where $F_0 \in \mathbb{S}^n$ and $T : V \to \mathbb{S}^n$ is a linear transformation. Thus if $V$ has dimension $m$ and $\{e_1, \ldots, e_m\}$ constitutes a basis of $V$, then we can write $T(x) = \sum_{j=1}^m x_j F_j$, where the scalars $x_1, \ldots, x_m$ are such that $x = \sum_{j=1}^m x_j e_j$ and $F_j = T(e_j)$ for $j = 1, \ldots, m$. Hence (1) is obtained as a special case.

Remark. The same remark applies to mappings $F : \mathbb{R}^{m_1 \times m_2} \to \mathbb{S}^n$ with $m_1, m_2 \in \mathbb{Z}_+$. A simple example with $m_1 = m_2 = m$ is the Lyapunov inequality

$$F(X) = A^T X + X A + Q > 0.$$

Here $A, Q \in \mathbb{R}^{m \times m}$ are given, $Q$ is symmetric, and $X \in \mathbb{R}^{m \times m}$ is the unknown matrix. In this case the domain $V$ of $F$ in the definition equals $\mathbb{S}^m$. We can view this LMI as a special case of (1) by choosing a basis $E_1, \ldots, E_K$ of $\mathbb{S}^m$ and writing $X = \sum_{j=1}^{K} x_j E_j$:

$$F(X) = F\Big(\sum_{j=1}^{K} x_j E_j\Big) = F_0 + \sum_{j=1}^{K} x_j T(E_j) = F_0 + \sum_{j=1}^{K} x_j F_j,$$

which is of the form (1).

Remark. The LMI $F(x) = F_0 + x_1 F_1 + \cdots + x_m F_m > 0$ defines a convex constraint on $x = (x_1, \ldots, x_m)$, i.e., the set $\mathcal{F} := \{x : F(x) > 0\}$ is convex. Indeed, if $x_1, x_2 \in \mathcal{F}$ and $\alpha \in (0, 1)$, then, since $F$ is affine,

$$F(\alpha x_1 + (1 - \alpha) x_2) = \alpha F(x_1) + (1 - \alpha) F(x_2) > 0.$$

Convexity has an important consequence: even though the LMI has no analytical solution in general, it can be solved numerically with a guarantee of finding a solution whenever one exists. Although the LMI may seem special, it turns out that many convex sets can be represented in this way.

1. Note that a system of LMIs (i.e., a finite set of LMIs) can be written as a single LMI, since

$$F_1(x) < 0, \; \ldots, \; F_K(x) < 0 \qquad \text{is equivalent to} \qquad F(x) := \mathrm{diag}\big(F_1(x), \ldots, F_K(x)\big) < 0.$$

2. Combined constraints (in the unknown $x$) of the form

$$\{\, F(x) > 0, \;\; A x = b \,\} \qquad \text{or} \qquad \{\, F(x) > 0, \;\; x = A y + b \text{ for some } y \,\},$$

where the affine function $F : \mathbb{R}^m \to \mathbb{S}^n$ and the matrices $A \in \mathbb{R}^{n \times m}$ and $b \in \mathbb{R}^n$ are given, can be lumped into one LMI. More generally, the combined conditions

$$F(x) > 0, \qquad x \in \mathcal{M}, \qquad (3)$$

where $\mathcal{M}$ is an affine subset of $\mathbb{R}^n$, i.e.

$$\mathcal{M} = x_0 + \mathcal{M}_0 = \{x_0 + m : m \in \mathcal{M}_0\}$$

with $x_0 \in \mathbb{R}^n$ and $\mathcal{M}_0$ a linear subspace of $\mathbb{R}^n$, can be written in the form of one single LMI. To see this, let $e_1, \ldots, e_k \in \mathbb{R}^n$ be a basis of $\mathcal{M}_0$ and let $F(x) = F_0 + T(x)$ be decomposed as in the remark above. Then (3) can be rewritten as

$$0 < F(x) = F_0 + T\Big(x_0 + \sum_{j=1}^{k} \bar{x}_j e_j\Big) = \underbrace{F_0 + T(x_0)}_{\text{constant part}} + \underbrace{\sum_{j=1}^{k} \bar{x}_j T(e_j)}_{\text{linear part}} = \bar{F}_0 + \bar{x}_1 \bar{F}_1 + \cdots + \bar{x}_k \bar{F}_k =: \bar{F}(\bar{x}),$$

where $\bar{F}_0 = F_0 + T(x_0)$, $\bar{F}_j = T(e_j)$ and $\bar{x} = (\bar{x}_1, \ldots, \bar{x}_k)$. This implies that $x \in \mathbb{R}^n$ satisfies (3) if and only if $\bar{F}(\bar{x}) > 0$. Note that the dimension of $\bar{x}$ is smaller than the dimension of $x$.

3. (Schur complement) Let $F : V \to \mathbb{S}^n$ be an affine function partitioned as

$$F(x) = \begin{bmatrix} F_{11}(x) & F_{12}(x) \\ F_{21}(x) & F_{22}(x) \end{bmatrix},$$

where $F_{11}(x)$ is square. Then $F(x) > 0$ if and only if

$$F_{11}(x) > 0, \qquad F_{22}(x) - F_{21}(x) F_{11}(x)^{-1} F_{12}(x) > 0. \qquad (4)$$

Note that the second inequality in (4) is a nonlinear matrix inequality in $x$. It follows that nonlinear matrix inequalities of the form (4) can be converted to LMIs, and that the nonlinear inequalities (4) define a convex constraint on $x$.
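As a sketch of item 3 with hypothetical data (again using cvxpy rather than the MATLAB toolbox the notes refer to): the quadratic matrix inequality $A^T K + K A + K B B^T K < 0$, which reappears in the $H_2$ section below, is nonlinear in $K$, but by the Schur complement it can be solved as an LMI.

import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # hypothetical stable A
B = np.array([[0.0], [1.0]])
n = A.shape[0]

K = cp.Variable((n, n), symmetric=True)
eps = 1e-6
# Schur complement with respect to the -I block: this LMI holds iff A^T K + K A + K B B^T K < 0.
lmi = cp.bmat([[A.T @ K + K @ A, K @ B],
               [B.T @ K,         -np.eye(1)]])
prob = cp.Problem(cp.Minimize(0), [K >> eps * np.eye(n), lmi << -eps * np.eye(n + 1)])
prob.solve()
print(prob.status, K.value)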

Types of LMI problems

Suppose that $F, G : V \to \mathbb{S}^{n_1}$ and $H : V \to \mathbb{S}^{n_2}$ are affine functions.

Feasibility: The test whether or not there exist solutions $x$ of $F(x) > 0$ is called a feasibility problem. The LMI is called infeasible if no solution exists.

Optimization: Let $f : \mathcal{S} \to \mathbb{R}$ and suppose that $\mathcal{S} = \{x : F(x) > 0\}$. The problem of determining $V_{\mathrm{opt}} = \inf_{x \in \mathcal{S}} f(x)$ is called an optimization problem with an LMI constraint.

Generalized eigenvalue problem: Minimize a scalar $\lambda \in \mathbb{R}$ subject to

$$\lambda F(x) - G(x) > 0, \qquad F(x) > 0, \qquad H(x) > 0.$$

What are LMIs good for?

Many optimization problems in control design, identification, and signal processing can be formulated using LMIs.

Example (Lyapunov stability). Consider the LTI system

$$\dot{x} = A x, \qquad A \in \mathbb{R}^{n \times n}. \qquad (5)$$

By Lyapunov's theorem, (5) is asymptotically stable if and only if there exists $X \in \mathbb{S}^n$ such that

$$X > 0, \qquad A^T X + X A < 0,$$

i.e., if and only if the LMI

$$\begin{bmatrix} X & 0 \\ 0 & -A^T X - X A \end{bmatrix} > 0$$

is feasible.
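A minimal feasibility sketch of this stability test, with a hypothetical stable $A$ (cvxpy used for illustration only):

import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2, so (5) is asymptotically stable
n = A.shape[0]

X = cp.Variable((n, n), symmetric=True)
eps = 1e-6
cons = [X >> eps * np.eye(n), A.T @ X + X @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve()
print(prob.status)                          # 'optimal' certifies asymptotic stability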

Example. Determine a diagonal matrix $D$ such that $\|D M D^{-1}\| < 1$, where $M$ is a given matrix. Since

$$\|D M D^{-1}\| < 1 \;\Longleftrightarrow\; D^{-T} M^T D^T D M D^{-1} < I \;\Longleftrightarrow\; M^T D^T D M < D^T D \;\Longleftrightarrow\; X - M^T X M > 0, \quad \text{where } X := D^T D > 0,$$

we see that the existence of such a matrix $D$ is equivalent to the feasibility of an LMI in $X$.
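A sketch of this diagonal-scaling search with a hypothetical matrix M (since D is diagonal, X = D^T D is parameterized by its diagonal entries):

import cvxpy as cp
import numpy as np

M = np.array([[0.5, 2.0], [0.0, 0.4]])     # hypothetical matrix with spectral radius < 1
n = M.shape[0]

x = cp.Variable(n)                         # diagonal entries of X = D^T D
Xm = cp.diag(x)
eps = 1e-6
cons = [x >= eps, Xm - M.T @ Xm @ M >> eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve()
D = np.diag(np.sqrt(x.value))              # one valid choice of D with D^T D = X
print(D, np.linalg.norm(D @ M @ np.linalg.inv(D), 2))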

Example. Let $F$ be an affine function and consider the problem of minimizing $f(x) := \sigma_{\max}(F(x))$ over $x$. Since

$$\sigma_{\max}(F(x)) < \gamma \;\Longleftrightarrow\; \lambda_{\max}\big(F^T(x) F(x)\big) < \gamma^2 \;\Longleftrightarrow\; \gamma^2 I - F^T(x) F(x) > 0 \;\Longleftrightarrow\; \begin{bmatrix} \gamma I & F^T(x) \\ F(x) & \gamma I \end{bmatrix} > 0,$$

if we define

$$\bar{x} := (x, \gamma), \qquad \bar{F}(\bar{x}) := \begin{bmatrix} \gamma I & F^T(x) \\ F(x) & \gamma I \end{bmatrix}, \qquad \bar{f}(\bar{x}) := \gamma,$$

then $\bar{F}$ is an affine function of $\bar{x}$ and the problem of minimizing the largest singular value of $F(x)$ is equivalent to determining $\inf \bar{f}(\bar{x})$ subject to the LMI $\bar{F}(\bar{x}) > 0$. Hence this is an optimization problem with a linear objective function $\bar{f}$ and an LMI constraint.
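A sketch of this construction with hypothetical data F0, F1 (cvxpy also has a sigma_max atom, but the block LMI below mirrors the derivation in the text):

import cvxpy as cp
import numpy as np

F0 = np.array([[1.0, 0.5], [0.0, -1.0]])   # hypothetical, not necessarily symmetric, data
F1 = np.array([[0.0, 1.0], [1.0, 0.0]])
n = 2

x = cp.Variable(1)
gamma = cp.Variable(nonneg=True)
F = F0 + x[0] * F1                          # affine F(x)
lmi = cp.bmat([[gamma * np.eye(n), F.T],
               [F,                 gamma * np.eye(n)]])
prob = cp.Problem(cp.Minimize(gamma), [lmi >> 0])
prob.solve()
print(gamma.value, x.value)                 # gamma approximates the smallest achievable sigma_max(F(x))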

Example (Simultaneous stabilization). Consider $k$ LTI systems with $n$-dimensional state space and $m$-dimensional input space,

$$\dot{x} = A_i x + B_i u,$$

where $A_i \in \mathbb{R}^{n \times n}$ and $B_i \in \mathbb{R}^{n \times m}$, $i = 1, \ldots, k$. We would like to find a state feedback law $u = F x$, $F \in \mathbb{R}^{m \times n}$, such that the eigenvalues $\lambda(A_i + B_i F)$ lie in the open left half plane for $i = 1, \ldots, k$. By the Lyapunov example above, this is achieved when we find matrices $F$ and $X_i$, $i = 1, \ldots, k$, such that for $i = 1, \ldots, k$

$$X_i > 0, \qquad (A_i + B_i F)^T X_i + X_i (A_i + B_i F) < 0. \qquad (6)$$

Note that this is not a system of LMIs in the variables $X_i$ and $F$ jointly. The problem can be simplified by assuming the existence of a joint Lyapunov function, i.e. $X_1 = \cdots = X_k = X$. Introducing $Y = X^{-1}$ and $K = F Y$ and multiplying (6) from the left and right by $Y$, the conditions (6) become the system of LMIs

$$Y > 0, \qquad A_i Y + Y A_i^T + B_i K + K^T B_i^T < 0, \qquad i = 1, \ldots, k.$$

The joint stabilization problem has a solution, namely $F = K Y^{-1}$, if this system of LMIs is feasible.
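A sketch of this design for two hypothetical second-order systems sharing one feedback gain (cvxpy used for illustration only):

import cvxpy as cp
import numpy as np

A_list = [np.array([[0.0, 1.0], [-2.0, -3.0]]), np.array([[0.0, 1.0], [-1.0, -2.0]])]
B_list = [np.array([[0.0], [1.0]]), np.array([[0.0], [1.0]])]
n, m = 2, 1

Y = cp.Variable((n, n), symmetric=True)     # Y = X^{-1}, common Lyapunov matrix
K = cp.Variable((m, n))                     # K = F Y
eps = 1e-6
cons = [Y >> eps * np.eye(n)]
for Ai, Bi in zip(A_list, B_list):
    cons.append(Ai @ Y + Y @ Ai.T + Bi @ K + K.T @ Bi.T << -eps * np.eye(n))
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve()
if prob.status == cp.OPTIMAL:
    F = K.value @ np.linalg.inv(Y.value)    # recover the feedback gain F = K Y^{-1}
    print(F, [np.linalg.eigvals(Ai + Bi @ F) for Ai, Bi in zip(A_list, B_list)])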

$H_\infty$ nominal performance

Consider

$$\dot{x} = A x + B u \qquad (7)$$
$$y = C x + D u \qquad (8)$$

with state space $X = \mathbb{R}^n$, input space $U = \mathbb{R}^m$ and output space $Y = \mathbb{R}^p$.

Proposition. If the system (7)-(8) is asymptotically stable, then $\|G\|_\infty < \gamma$ whenever there exists a solution $K = K^T > 0$ to the LMI

$$\begin{bmatrix} A^T K + K A + C^T C & K B + C^T D \\ B^T K + D^T C & D^T D - \gamma^2 I \end{bmatrix} < 0. \qquad (9)$$

Hence the $H_\infty$ norm of the transfer function $G$ can be computed by minimizing $\gamma > 0$ over all variables $\gamma$ and $K > 0$ that satisfy the LMI (9).
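A sketch of this norm computation for a hypothetical stable system, minimizing $t = \gamma^2$ so that the problem stays linear in the decision variables (cvxpy used for illustration only):

import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

K = cp.Variable((2, 2), symmetric=True)
t = cp.Variable(nonneg=True)               # t stands for gamma^2
lmi = cp.bmat([[A.T @ K + K @ A + C.T @ C, K @ B + C.T @ D],
               [B.T @ K + D.T @ C,         D.T @ D - t * np.eye(1)]])
prob = cp.Problem(cp.Minimize(t), [K >> 0, lmi << 0])
prob.solve()
print("H-infinity norm estimate:", np.sqrt(t.value))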

$H_2$ nominal performance

We take impulsive inputs of the form $u(t) = \delta(t) e_i$, with $e_i$ the $i$-th basis vector in the standard basis of the input space $\mathbb{R}^m$ ($i = 1, \ldots, m$). With zero initial conditions, the corresponding output $y_i$ belongs to $L_2$ and is given by

$$y_i(t) = \begin{cases} C e^{A t} B e_i & \text{for } t > 0 \\ D e_i \, \delta(t) & \text{for } t = 0 \\ 0 & \text{for } t < 0. \end{cases}$$

Only if $D = 0$ is the sum of the squared norms of all such impulse responses, $\sum_{i=1}^m \|y_i\|_2^2$, well defined; it is then given by

$$\sum_{i=1}^m \|y_i\|_2^2 = \mathrm{trace} \int_0^\infty B^T e^{A^T t} C^T C e^{A t} \, dt \, B \Big|_{\text{(equivalently)}} = \mathrm{trace} \int_0^\infty C e^{A t} B B^T e^{A^T t} C^T \, dt = \frac{1}{2\pi} \, \mathrm{trace} \int_{-\infty}^{\infty} G(j\omega) G^*(j\omega) \, d\omega,$$

where $G$ is the transfer function of the system.

Proposition. Suppose that the system (7)-(8) is asymptotically stable and $D = 0$. Then the following statements are equivalent:

(a) $\|G\|_2 < \gamma$;

(b) there exist $K = K^T > 0$ and $Z$ such that

$$\begin{bmatrix} A^T K + K A & K B \\ B^T K & -I \end{bmatrix} < 0, \qquad \begin{bmatrix} K & C^T \\ C & Z \end{bmatrix} > 0, \qquad (10)$$
$$\mathrm{trace}(Z) < \gamma^2; \qquad (11)$$

(c) there exist $K = K^T > 0$ and $Z$ such that

$$\begin{bmatrix} A K + K A^T & K C^T \\ C K & -I \end{bmatrix} < 0, \qquad \begin{bmatrix} K & B \\ B^T & Z \end{bmatrix} > 0, \qquad (12)$$
$$\mathrm{trace}(Z) < \gamma^2. \qquad (13)$$

Proof. Note that $\|G\|_2 < \gamma$ is equivalent to requiring that the controllability gramian

$$W_c := \int_0^\infty e^{A t} B B^T e^{A^T t} \, dt$$

satisfies $\mathrm{trace}(C W_c C^T) < \gamma^2$. Since the controllability gramian is the unique positive definite solution of the Lyapunov equation

$$A W + W A^T + B B^T = 0,$$

this is equivalent to saying that there exists $X > 0$ such that

$$A X + X A^T + B B^T < 0, \qquad \mathrm{trace}(C X C^T) < \gamma^2.$$

With the change of variables $K := X^{-1}$, this is equivalent to the existence of $K > 0$ and $Z$ such that

$$A^T K + K A + K B B^T K < 0, \qquad C K^{-1} C^T < Z, \qquad \mathrm{trace}(Z) < \gamma^2.$$

Now, using Schur complements for the first two inequalities yields that $\|G\|_2 < \gamma$ is equivalent to the existence of $K > 0$ and $Z$ such that

$$\begin{bmatrix} A^T K + K A & K B \\ B^T K & -I \end{bmatrix} < 0, \qquad \begin{bmatrix} K & C^T \\ C & Z \end{bmatrix} > 0, \qquad \mathrm{trace}(Z) < \gamma^2.$$

The equivalence with (12)-(13) is obtained from the observation that $\|G\|_2 = \|G^T\|_2$. Therefore, the smallest possible upper bound on the $H_2$ norm of the transfer function can be calculated by minimizing the criterion $\mathrm{trace}(Z)$ over the variables $K > 0$ and $Z$ that satisfy the LMIs given by the first two inequalities in (10) or (12).
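A sketch of this $H_2$ computation via the LMIs in (10), for a hypothetical stable system with $D = 0$ (cvxpy used for illustration only):

import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

K = cp.Variable((2, 2), symmetric=True)
Z = cp.Variable((1, 1), symmetric=True)
lmi1 = cp.bmat([[A.T @ K + K @ A, K @ B],
                [B.T @ K,         -np.eye(1)]])
lmi2 = cp.bmat([[K, C.T],
                [C, Z]])
prob = cp.Problem(cp.Minimize(cp.trace(Z)),
                  [K >> 1e-8 * np.eye(2), lmi1 << 0, lmi2 >> 0])
prob.solve()
print("H2 norm estimate:", np.sqrt(prob.value))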

Controller Synthesis

Let

$$\dot{x} = A x + B_1 w + B_2 u$$
$$z_\infty = C_\infty x + D_{\infty 1} w + D_{\infty 2} u$$
$$z_2 = C_2 x + D_{21} w + D_{22} u$$
$$y = C_y x + D_{y1} w$$

and

$$\dot{x}_K = A_K x_K + B_K y$$
$$u = C_K x_K + D_K y$$

be state-space realizations of the plant $P(s)$ and the controller $K(s)$, respectively.

Denoting by $T_\infty(s)$ and $T_2(s)$ the closed-loop transfer functions from $w$ to $z_\infty$ and $z_2$, respectively, we consider the following multi-objective synthesis problem: design an output feedback controller $u = K(s) y$ such that

$H_\infty$ performance: the $H_\infty$ norm of $T_\infty$ is kept below $\gamma_0$.

$H_2$ performance: the $H_2$ norm of $T_2$ is kept below $\nu_0$.

Multi-objective $H_2/H_\infty$ controller design: the trade-off criterion $\alpha \|T_\infty\|_\infty^2 + \beta \|T_2\|_2^2$, with given $\alpha, \beta \geq 0$, is minimized.

Pole placement: the closed-loop poles are placed in some prescribed LMI region $\mathcal{D}$.

Let

$$\dot{x}_{cl} = A_{cl} x_{cl} + B_{cl} w$$
$$z_\infty = C_{cl1} x_{cl} + D_{cl1} w$$
$$z_2 = C_{cl2} x_{cl} + D_{cl2} w$$

denote the corresponding closed-loop state-space equations. Our design objectives can then be expressed as follows.

$H_\infty$ performance: the closed-loop RMS gain from $w$ to $z_\infty$ does not exceed $\gamma$ if and only if there exists a symmetric matrix $X_\infty$ such that

$$\begin{bmatrix} A_{cl} X_\infty + X_\infty A_{cl}^T & B_{cl} & X_\infty C_{cl1}^T \\ B_{cl}^T & -I & D_{cl1}^T \\ C_{cl1} X_\infty & D_{cl1} & -\gamma^2 I \end{bmatrix} < 0, \qquad X_\infty > 0.$$

$H_2$ performance: the LQG cost from $w$ to $z_2$ does not exceed $\nu$ if and only if $D_{cl2} = 0$ and there

exist symmetric matrices $X_2$ and $Q$ such that

$$\begin{bmatrix} A_{cl} X_2 + X_2 A_{cl}^T & B_{cl} \\ B_{cl}^T & -I \end{bmatrix} < 0, \qquad \begin{bmatrix} Q & C_{cl2} X_2 \\ X_2 C_{cl2}^T & X_2 \end{bmatrix} > 0, \qquad \mathrm{trace}(Q) < \nu^2.$$

Pole placement: the closed-loop poles lie in the LMI region

$$\mathcal{D} := \{ z \in \mathbb{C} : L + M z + M^T \bar{z} < 0 \}$$

with $L = L^T = [\lambda_{ij}]_{1 \le i,j \le m}$ and $M = [\mu_{ij}]_{1 \le i,j \le m}$, if and only if there exists a symmetric matrix $X_{pol}$ such that

$$\big[ \lambda_{ij} X_{pol} + \mu_{ij} A_{cl} X_{pol} + \mu_{ji} X_{pol} A_{cl}^T \big]_{1 \le i,j \le m} < 0, \qquad X_{pol} > 0.$$
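For example, the half-plane $\{z : \mathrm{Re}(z) < -\alpha\}$ is the LMI region obtained with the scalars $L = 2\alpha$ and $M = 1$ (so $m = 1$), and the pole-placement condition above then reduces to the familiar $\alpha$-stability LMI $A_{cl} X_{pol} + X_{pol} A_{cl}^T + 2\alpha X_{pol} < 0$, $X_{pol} > 0$. Likewise, the disk of radius $r$ centered at $-q$ corresponds to $L = \begin{bmatrix} -r & q \\ q & -r \end{bmatrix}$ and $M = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$.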

For tractability, we seek a single Lyapunov matrix $X := X_\infty = X_2 = X_{pol}$ that enforces all three sets of constraints. Factorizing $X$ as

$$X = \begin{bmatrix} R & I \\ M^T & 0 \end{bmatrix} \begin{bmatrix} I & S \\ 0 & N^T \end{bmatrix}^{-1}$$

and introducing the transformed controller variables

$$\hat{B}_K := N B_K + S B_2 D_K,$$
$$\hat{C}_K := C_K M^T + D_K C_y R,$$
$$\hat{A}_K := N A_K M^T + N B_K C_y R + S B_2 C_K M^T + S (A + B_2 D_K C_y) R,$$

the inequality constraints on $X$ are turned into LMI constraints in the variables $R$, $S$, $Q$, $\hat{A}_K$, $\hat{B}_K$, $\hat{C}_K$ and $D_K$. We thus obtain the following suboptimal LMI formulation of our multi-objective synthesis problem:

Minimize αγ 2 +βtrace(q) over R, S, Q, A K, B K, C K, D K and γ 2 satisfying: AR + RA T + B 2 C K + CK T BT 2 C R + D 2 C K [ [ R λ ij I A K + A + B 2 D K C y A T S + SA + B K C y + Cy T BT K C + D 2 D K C y I S ] Q [ AR + B2 C + µ K A + ij A K SA µ ji [ (AR + B2 C K (A + B 2 D K C 19

Given optimal solutions $\gamma^*$, $Q^*$ of this LMI problem, the closed-loop performances are bounded by

$$\|T_\infty\|_\infty \le \gamma^*, \qquad \|T_2\|_2 \le \sqrt{\mathrm{trace}(Q^*)}.$$

This procedure is implemented in the MATLAB LMI Control Toolbox command hinfmix.

References

S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. SIAM Studies in Applied Mathematics, vol. 15, SIAM, Philadelphia, 1994.

C. Scherer and S. Weiland. Linear Matrix Inequalities in Control. Lecture notes of the DISC course.

P. Gahinet, A. Nemirovski, A. J. Laub, and M. Chilali. LMI Control Toolbox. The MathWorks.

Affine combinations of linear systems

Often, uncertainty about specific parameters of a model is reflected as uncertainty in specific entries of the state-space matrices $A$, $B$, $C$, $D$. Let $p = (p_1, \ldots, p_n)$ denote the parameter vector which expresses the uncertain quantities in the system, and suppose that this parameter vector belongs to some subset $\mathcal{P} \subseteq \mathbb{R}^n$. Then the uncertain model can be thought of as being parameterized by $p \in \mathcal{P}$ through its state-space representation

$$\dot{x} = A(p) x + B(p) u \qquad (14)$$
$$y = C(p) x + D(p) u. \qquad (15)$$

One way to think of equations of this sort is to view them as a set of linear time-invariant systems parameterized by $p \in \mathcal{P}$. However, if $p$ varies with time, then (14) defines a linear time-varying dynamical system and it can therefore also be viewed as such. If components of $p$ are

time varying and coincide with state components, then (14) is better viewed as a nonlinear system. Of particular interest will be those systems in which the system matrices depend affinely on $p$. This means that

$$A(p) = A_0 + p_1 A_1 + \cdots + p_n A_n \qquad (16)$$
$$B(p) = B_0 + p_1 B_1 + \cdots + p_n B_n \qquad (17)$$
$$C(p) = C_0 + p_1 C_1 + \cdots + p_n C_n \qquad (18)$$
$$D(p) = D_0 + p_1 D_1 + \cdots + p_n D_n, \qquad (19)$$

or, written in a more compact form, $S(p) = S_0 + p_1 S_1 + \cdots + p_n S_n$, where

$$S(p) = \begin{bmatrix} A(p) & B(p) \\ C(p) & D(p) \end{bmatrix}$$

is the system matrix associated with (14)-(15). We call these models affine parameter-dependent

models. In MATLAB such a system is represented with the routines psys and pvec. For $n = 2$ and a parameter box

$$\mathcal{P} := \{ (p_1, p_2) : p_1 \in [p_1^{\min}, p_1^{\max}], \; p_2 \in [p_2^{\min}, p_2^{\max}] \}$$

the syntax is

affsys = psys( p, [s0, s1, s2] );
p = pvec( 'box', [p1min p1max ; p2min p2max] )

where p is the parameter vector whose $i$-th component ranges between pimin and pimax. Bounds on the rates of variation $\dot{p}_i(t)$ can be specified by adding a third argument rate when calling pvec.

See also the following routines: pdsimul for time simulations of affine parameter models, aff2pol to convert an affine model to an equivalent polytopic model, and pvinfo to inquire about the parameter vector.
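As a small illustration of the "set of LTI systems" viewpoint of (14), the sketch below builds a hypothetical affine model $A(p) = A_0 + p_1 A_1 + p_2 A_2$ (standing in for the data passed to psys) and inspects the frozen-parameter systems at the vertices of the parameter box. Note that, as discussed above, stability of every frozen-parameter system does not by itself settle the time-varying case.

import numpy as np
from itertools import product

A0 = np.array([[0.0, 1.0], [-2.0, -1.0]])   # hypothetical data
A1 = np.array([[0.0, 0.0], [-1.0, 0.0]])
A2 = np.array([[0.0, 0.0], [0.0, -0.5]])
box = [(-0.5, 0.5), (0.0, 1.0)]             # [p1min, p1max] and [p2min, p2max]

for p1, p2 in product(*box):
    Ap = A0 + p1 * A1 + p2 * A2             # frozen-parameter system matrix A(p)
    print((p1, p2), np.linalg.eigvals(Ap))  # eigenvalues of each vertex system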