D-Stability of Polynomial Matrices (1)

Didier Henrion, Olivier Bachelier, Michael Sebek

Abstract

Necessary and sufficient conditions are formulated for the zeros of an arbitrary polynomial matrix to belong to a given region D of the complex plane. The conditions stem from a general optimization methodology mixing LFRs, rank-one LMIs and the S-procedure. They are expressed as an LMI feasibility problem that can be tackled with widespread powerful interior-point methods. Most importantly, the D-stability conditions can be combined with other LMI conditions arising in robust stability analysis.

Keywords: Polynomial Matrix, D-Stability, LMI.

1 Introduction

Polynomial matrices play a central role in modern systems theory. Algebraic methods such as the polynomial approach [] or the behavioral approach [] heavily rely upon polynomial matrices. Unsurprisingly, fundamental system features are captured by properties of polynomial matrices. For example, the zeros of the denominator polynomial matrix in a matrix fraction description [] characterize system dynamics and performance. A satisfactory transient time response can be ensured as soon as the zeros are located in some specific region of the complex plane [1]. The present paper is precisely concerned with

(1) This work was supported by the Barrande Project No. 9/00-9/0, by the Grant Agency of the Czech Republic under contract No. 10/99/18 and by the Ministry of Education of the Czech Republic under contract No. VS9/0. Corresponding author. E-mail henrion@laas.fr. FAX + 1 9 9. LAAS-CNRS, Avenue du Colonel Roche, 1 0 Toulouse, Cedex, France. Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Pod vodarenskou vezi, 18 08 Prague 8, Czech Republic. Trnka Laboratory for Automatic Control, Faculty of Electrical Engineering, Czech University of Technology, Technicka, 1 Prague, Czech Republic.

checking whether the zeros of a given polynomial matrix belong to a given region D. Following a terminology introduced in [], this problem is henceforth referred to as the D-stability problem.

In the special case of polynomials, i.e. scalar polynomial matrices, assessing stability is a long-standing problem that has motivated a lot of work since the last century. In order to check D-stability in the left half-plane (for continuous-time systems), in the unit disk (for discrete-time systems) or in more general regions (for ensuring system performance), two approaches may be pursued. One can alternatively

- Compute the zeros of the polynomial, see [8] for a recent survey and [8] for promising new developments. A standard way to proceed consists in computing the eigenvalues of the companion matrix associated with the polynomial, using for example the numerically stable Schur decomposition [1];

- Use indirect methods such as the determinantal Routh-Hurwitz or Schur-Cohn criteria or the quadratic Hermite-Fujiwara criterion, see [1].

The case of polynomial matrices is far more involved. Considering square non-singular matrices only, we can basically distinguish between two approaches to D-stability. One can alternatively

- First, compute the determinant of the polynomial matrix [0], which may be numerically difficult although significant progress has been reported recently [0, 1]. Second, use one of the two methods described above on the resulting scalar polynomial;

- Compute the zeros of the polynomial matrix [0]. Here also, a standard technique is to compute the eigenvalues of the matrix pencil associated with the polynomial matrix [], using for example the numerically stable QZ decomposition [1]. As pointed out in [8], some numerical difficulties may be expected with high multiplicity zeros.

Considering rectangular polynomial matrices, the only method we are aware of was reported in [] and makes indirect use of state-space arguments for computing the zeros.
Note that, to the authors' knowledge, there is no satisfactory extension of the scalar indirect methods to the matrix case. Some attempts were reported to extend the Hermite-Fujiwara criterion [] or the Routh-Hurwitz criterion [9] to polynomial matrices, but no unified approach has appeared so far. Finally, let us point out that the problem of checking stability of polynomial matrices is difficult enough to motivate the development of potentially conservative stability conditions that do not require the computation of the determinant [, ]. Note also that these conditions are only valid for checking stability in the unit disk.

In view of this unsatisfactory state of the art, this paper is an attempt to overcome the lack of sufficiently general methods for studying stability of polynomial matrices without performing tedious determinant computations or numerically hazardous evaluations of the

zeros. The approach we pursue is based on an idea that originally appeared in []. The stability problem for polynomial matrices is first expressed as an optimization problem. Then, several techniques are used to come up with a standard formulation of the problem. The basic concepts we hinge upon are Linear Fractional Representations (LFRs, see [8]), rank-constrained Linear Matrix Inequalities (LMIs, see []), the S-procedure [] and semidefinite programming []. The strengths of our method are enumerated below:

- Rectangular polynomial matrices with real or complex coefficients can be handled;

- Covered stability regions include half-planes, disks, parabolas or their non-convex complements or possibly non-connected unions, not necessarily symmetric with respect to the real axis;

- The stability conditions are expressed as an LMI, a linear feasibility problem over the cone of positive definite matrices [] that can be solved through powerful interior-point methods [];

- Most importantly, the stability conditions can be combined with other conditions of similar type arising in robust stability analysis problems, when some elements in the polynomial matrix are uncertain. This extension will be covered elsewhere.

The outline of the paper is as follows. In Section 2, we state the D-stability problem to be solved. The stability regions covered by our approach are described in Section 3. A convex LMI formulation of the stability conditions is derived in Section 4. This is our main result. In Section 5, several examples are provided to illustrate the method. Finally, the paper winds up with some concluding remarks and directions for future research.

Notations. R and C are the sets of real and complex numbers, respectively. s* = x - jy is the complex conjugate of the complex number s = x + jy. A*(s) is the conjugate transpose of the complex polynomial matrix A(s) in the indeterminate s. I_n denotes the identity matrix of dimension n. Trace A is the trace of matrix A. Rank A is the rank of matrix A.
The matrix inequalities A > B and A >= B mean that the matrix A - B is positive definite and positive semidefinite, respectively. A (x) B stands for the Kronecker product of two matrices A and B. Finally, pi(A) and nu(A) denote the number of positive and negative eigenvalues of a Hermitian matrix A = A*, respectively.

2 Problem Statement

Suppose we are given an m x n complex polynomial matrix

  A(s) = A_0 + A_1 s + ... + A_d s^d

of degree d, together with a region D of the complex plane, D included in C.

A zero of A(s) is usually defined [] as a complex value z in C for which the rank of A(s) drops below its normal value, i.e.

  Rank A(z) < Rank A(s).

Without computing the determinant or the zeros of A(s), we aim at finding necessary and sufficient conditions for D-stability of A(s), i.e. for the zeros of A(s) to belong to D.

3 Stability Regions

First we describe the class of regions D we will consider throughout the paper. Define

  D^C = {s in C : s not in D}

as the complement of region D in C. In this paper, we restrict our attention to regions D whose complement reads

  D^C = {s in C : B_00 + B_10 s + B_01 s* + B_11 s s* >= 0}    (1)

where the matrices B_gh in C^{k x k} are such that the non-singular Hermitian matrix

  B = B* = [ B_00  B_01 ]
           [ B_10  B_11 ]    (2)

has at least one negative eigenvalue, i.e. nu(B) > 0. The motivation behind this particular choice will become apparent shortly. In Sections 3.1 and 3.2 we show that the class of regions defined above encompasses a wide variety of typical clustering regions. In Section 3.3 we show that we can describe at a given accuracy any region of the complex plane, even non-convex or non-connected ones. In Section 3.4, we enumerate simple transformations (scalings, translations, rotations and Moebius mappings) that can be applied to our regions. In Section 3.5 an important point is raised about open regions and zeros at infinity. Finally, in Section 3.6 we highlight some links between our representation and standard clustering regions encountered in the literature.

3.1 First Order Regions

Let k = 1. We can define three different regions in this case.

Half-plane: D = {x + jy in C : ax + by + c < 0} with a, b, c in R and

  B = [ 2c      a + jb ]
      [ a - jb  0      ]

Disk interior: D = {s in C : |s - s_0| < r} with s_0 in C, real r > 0 and

  B = [ -r^2 + s_0 s_0*   -s_0 ]
      [ -s_0*              1   ]

Disk exterior: D = {s in C : |s - s_0| > r} with s_0 in C, real r > 0 and

  B = [ r^2 - s_0 s_0*   s_0 ]
      [ s_0*             -1  ]

3.2 Second Order Regions

When k = 2, we can describe the following useful regions.

Ellipse: D = {x + jy in C : a^2 (x - x_0)^2 + b^2 (y - y_0)^2 < 1} with a, b, x_0, y_0 in R and, when a^2 < b^2,

  B = [ a^2 x_0^2 + b^2 y_0^2 - 1    0                -(a^2 x_0 + j b^2 y_0)   1/2 ]
      [ 0                            1/(b^2 - a^2)    1/2                      0   ]
      [ -(a^2 x_0 - j b^2 y_0)       1/2              b^2                      0   ]
      [ 1/2                          0                0                        0   ]

while, when a^2 > b^2,

  B = [ a^2 x_0^2 + b^2 y_0^2 - 1    0                -(a^2 x_0 + j b^2 y_0)   j/2 ]
      [ 0                            1/(a^2 - b^2)    j/2                      0   ]
      [ -(a^2 x_0 - j b^2 y_0)       -j/2             a^2                      0   ]
      [ -j/2                         0                0                        0   ]

Parabola: D = {x + jy in C : x + x_0 + a^2 y^2 < 0} with a, x_0 in R and

  B = [ x_0    0     1/2    a/2 ]
      [ 0      1     a/2    0   ]
      [ 1/2    a/2   a^2    0   ]
      [ a/2    0     0      0   ]

A wide variety of other regions can be described when k = 2, but they are generally irrelevant as far as zero clustering is concerned.

3.3 Union of Regions

The regions mentioned above are connected. Such regions cover many practical problems, but some specific investigations require more sophisticated regions such as unions of possibly disjoint subregions. This is particularly true when one wants to test the transient performance of a multi-time-scale system. It would be quite inappropriate to check stability in a convex region containing the whole of the dynamics. For instance, if a system has two separate dynamics, it is more judicious to choose D as a union of two disjoint subregions, one for the fast poles and the other one for the slow poles. As another

example, the damping factor induced by a couple of dominant complex eigenvalues may be handled with a couple of disjoint subregions (such as disks or ellipses) symmetric to each other with respect to the real axis.

Actually, our formulation can easily handle unions of regions. To see this, note that the intersection of the two complementary regions

  D_i^C = {s in C : B_00^i + B_10^i s + B_01^i s* + B_11^i s s* >= 0},  i = 1, 2

is the complementary region

  D^C = {s in C : [ B_00^1  0     ] + [ B_10^1  0     ] s + [ B_01^1  0     ] s* + [ B_11^1  0     ] s s* >= 0}.
                  [ 0      B_00^2 ]   [ 0      B_10^2 ]     [ 0      B_01^2 ]     [ 0      B_11^2 ]

Therefore, s in D_1 or s in D_2 if and only if s in D, where D is the union of D_1 and D_2. This property holds for any arbitrary number of regions. Consequently, we can describe any union of the regions introduced in Sections 3.1 and 3.2, such as half-planes or disks. Since any region of the complex plane can be represented by a union of half-planes and disks at any desired accuracy, we conclude that the regions described in this paper are dense in the set of regions of the complex plane. These regions may be open, non-convex or even non-connected.

Note however that our formulation cannot easily handle intersections of regions. This is believed to be the major hurdle of our approach. For example, we were not able to find a suitable expression describing a conic sector in the left half-plane, a typical clustering region.

3.4 Transformation of Regions

The basic geometric transformations that can be carried out to enlarge the class of regions covered by our formulation are enumerated next.

Scaling ks, where k in R: matrix B becomes

  [ B_00          k^{-1} B_01 ]
  [ k^{-1} B_10   k^{-2} B_11 ]

Translation s + t, where t in C: matrix B becomes

  [ B_00 - t B_10 - t* B_01 + t t* B_11   B_01 - t B_11 ]
  [ B_10 - t* B_11                        B_11          ]

Rotation e^{j theta} s, where 0 <= theta < 2 pi: matrix B becomes

  [ B_00                e^{j theta} B_01 ]
  [ e^{-j theta} B_10   B_11             ]

Moebius mapping (a + bs)/(c + ds), where a, b, c, d in R: matrix B becomes

  [ b^2 B_00 - ab(B_10 + B_01) + a^2 B_11     -bd B_00 + ad B_10 + bc B_01 - ac B_11 ]
  [ -bd B_00 + bc B_10 + ad B_01 - ac B_11    d^2 B_00 - cd(B_10 + B_01) + c^2 B_11  ]
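To make the region descriptions above concrete, here is a small sketch in Python (the paper's own experiments use Matlab tools; all names below are illustrative, not from the paper). For k = 1 the complementary form is a real scalar, and stacking two regions block-diagonally makes membership in the union a simple "at least one diagonal entry negative" test:

```python
# Scalar k = 1 membership test and union by block-diagonal stacking.
# A region is stored as (b00, b10, b11), with b01 = conj(b10) implied.

def comp_form(s, b00, b10, b11):
    # real value of B00 + B10 s + B01 s* + B11 s s*, with B01 = conj(B10)
    return (b00 + b10 * s + b10.conjugate() * s.conjugate()
            + b11 * s * s.conjugate()).real

def disk_interior(s0, r):
    # D = {s : |s - s0| < r}: B00 = |s0|^2 - r^2, B10 = -conj(s0), B11 = 1
    return (abs(s0)**2 - r**2, -s0.conjugate(), 1.0)

def in_region(s, reg):
    # s is in D exactly when the complementary form is negative
    return comp_form(s, *reg) < 0

def in_union(s, *regs):
    # the stacked diagonal form is PSD iff every entry is >= 0, so s lies in
    # the union D1 u D2 u ... exactly when at least one entry is negative
    return any(comp_form(s, *reg) < 0 for reg in regs)

# two disjoint disks, e.g. one for fast and one for slow poles
fast = disk_interior(-5 + 0j, 1.0)
slow = disk_interior(-0.5 + 0j, 0.3)

assert in_region(-4.5 + 0.2j, fast)
assert in_union(-4.5 + 0.2j, fast, slow)
assert in_union(-0.4 + 0j, fast, slow)
assert not in_union(-2.0 + 0j, fast, slow)   # between the two disks
```

Note that any positive scaling of B describes the same region, so only the signs of the complementary forms matter here.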

3.5 Zeros at Infinity

Special care must be taken when studying stability of polynomial matrices in open regions such as disk complements or half-planes. This is because polynomial matrices generally feature zeros at infinity, i.e. when s s* tends to infinity. The reader is referred to [] for an accurate definition and a comprehensive treatment of zeros at infinity.

As an example, just consider stability in the left half-plane. One traditionally says that a polynomial matrix is stable in the continuous-time sense if and only if its zeros are located in the left half-plane. However, this statement refers to finite zeros only. Indeed, a stable polynomial matrix may have zeros located in the right half-plane, provided these zeros are at infinity. When studying stability in open regions, it may thus be necessary to exclude infinite zeros of polynomial matrices. For instance, when studying stability in the left half-plane D = {s in C : s + s* < 0}, the complementary right half-plane D^C = {s in C : s + s* >= 0} may be replaced by the half-disk

  D_r^C = {s in C : s + s* >= 0, s s* <= r^2}

for a sufficiently large r in R. Infinite zeros may belong to the open regions D and D^C, but definitely do not belong to the bounded region D_r^C for finite r.

3.6 Relation to Standard Stability Regions

3.6.1 Gamma-Regions

Gamma-regions, proposed in [1], are described by the so-called polynomial formulation

  D = {s in C : sum over e, f of B_ef s^e (s*)^f < 0}

where e, f are integers such that e >= 0, f >= 0 and e + f <= d. The integer d is called the order of the region. In most practical problems we encounter first order regions (d = 1) such as half-planes, or second order regions (d = 2) such as disks, ellipses, hyperbolas and parabolas, as well as vertical and horizontal strips. It must be noticed that our complementary description (1) of stability regions can always be written as a finite set of Gamma-regions. It suffices to enumerate the set of determinantal minors of the matrix expression in (1) to see that non-negativity of each minor can be expressed as a Gamma-region.
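The half-disk truncation used above to rule out zeros at infinity is easy to check pointwise; a minimal Python sketch (helper name illustrative, not from the paper):

```python
# Membership in the truncated complement
#   D^C_r = { s : s + s* >= 0  and  s s* <= r^2 },
# which keeps every finite zero of interest for large enough r while
# excluding far-away ("infinite") points that the open half-plane contains.

def in_truncated_complement(s, r):
    return (s + s.conjugate()).real >= 0 and abs(s) <= r

assert in_truncated_complement(2 + 1j, 10.0)
assert not in_truncated_complement(1e9 + 0j, 10.0)   # far-away point excluded
assert not in_truncated_complement(-1 + 0j, 10.0)    # left half-plane excluded
```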
3.6.2 LMI Regions

Gamma-regions are not always convenient because of possible nonlinearities or even non-convexities. That is why a more recent formulation of regions has been introduced in

[9, 10] and widely used afterwards. The formulation of an LMI region is as follows:

  D = {s in C : B_00 + B_10 s + B_01 s* < 0}

where B_00 = (B_00)^T in R^{d x d} and B_10 = (B_01)^T in R^{d x d} are given matrices. The integer d is called the order of the LMI region. LMI regions are symmetric with respect to the real axis. The set of first order LMI regions corresponds to vertical half-planes, while second order LMI regions include disks, ellipses, classical and hyperbolic sectors, and vertical and horizontal strips. The set of LMI regions is different from the set of Gamma-regions since a Gamma-region is not necessarily convex. Actually, neither of these sets includes the other one, and both can be of great practical interest. The LMI formulation can sometimes be obtained from the polynomial formulation by applying the Schur complement. This can be seen as a way to reduce the degree of the scalar inequality by turning it into a matrix inequality. Note finally that many unions of possibly disjoint and non-symmetric convex LMI subregions can be formulated as generalized LMI regions, an extension of LMI regions [9, ].

Our complementary region D^C defined in equation (1) reduces to an LMI region, provided B_11 = 0. In this case, region D is a (possibly non-convex) complementary LMI region.

4 LMI Formulation of the Problem

Following these preliminaries, we can now derive an LMI formulation of a necessary and sufficient condition for D-stability of a polynomial matrix A(s). In the sequel, it is assumed that A(s) has full column rank. If the input matrix has full row rank, then we work on its transpose. The main steps of our approach can be sketched as follows. In Section 4.1, we show that checking stability amounts to solving an optimization problem. Using LFRs, we derive a rank-one LMI formulation of this problem in Section 4.2, and an equivalent quadratic programming formulation in Section 4.3. Then we use the S-procedure in Section 4.4
to show that the non-convex rank constraint can actually be dropped, leading to a tractable LMI formulation of the stability conditions.

4.1 Optimization Problem

If z in C is a zero of an m x n full column-rank polynomial matrix A(s), then there exists a non-zero vector x in C^n such that

  A(z) x = 0.

Consequently, all the zeros of A(s) belong to D if and only if the optimal value of the optimization problem

  mu = min  x* A*(s) A(s) x
       s.t. s in D^C    (3)

is such that mu > 0. It is assumed that the vector x is non-zero in problem (3).

4.2 Rank-one LMI Problem

Now we show that solving optimization problem (3) amounts to solving a rank-one LMI optimization problem. We shall use the following important result.

Lemma 1. The polynomial matrix A(s) admits the Linear Fractional Representation (LFR)

  A(s) = A_0 + L (I_{nd} - Delta D)^{-1} Delta R.

In the above equation, Delta = s I_{nd} and

  [ A_0  L ]   [ A_0  A_1  ...  A_{d-1}  A_d ]
  [ R    D ] = [ I_n  0    ...  0        0   ]
               [ 0    I_n  ...  0        0   ]
               [ ...       ...          ...  ]
               [ 0    0    ...  I_n      0   ]

Proof of Lemma 1: See [8].

In relation to the above LFR, we define the matrices

  [ A ]   [ A_0  L ]
  [ Q ] = [ R    D ]    (4)

In order to avoid any redundancy in the components of the vectors p and q defined below, we may require that the matrices Q and R have full rank. If this is not the case, we can perform a series of reductions as described in the Appendix. Performing this reduction is not necessary, but it usually significantly reduces problem dimensions. Let r denote the dimension of the diagonal matrix Delta. We can define vectors p, q in C^r such that

  A(s) x = A_0 x + L p,   q = R x + D p,   p = Delta q.    (5)

Lemma 2. Let B be a Hermitian matrix and D^C the corresponding region as defined in equation (1). Then, for vectors p, q in C^r, the matrix

  Q = [I_k (x) q  I_k (x) p] B [I_k (x) q  I_k (x) p]*
    = B_00 (x) qq* + B_10 (x) pq* + B_01 (x) qp* + B_11 (x) pp* >= 0    (6)

if and only if p = s q for some scalar s in D^C.

Proof of Lemma 2: The proof is adapted from Lemma C.1 in [11]. If r = 1, the vectors p and q are scalars and the result readily follows. Now suppose that r > 1. Since nu(B) > 0, for arbitrary vectors p and q it generically holds that nu(Q) > 0. If we enforce Q to be positive semidefinite, then the negative eigenvalues of Q vanish and the rank of Q drops below its normal value. Since B is non-singular, this rank deficiency comes necessarily from the vectors p, q. This may occur only if the vectors p, q become linearly dependent, i.e. if there exists a scalar s such that p = s q. In this case, the matrix

  Q = (B_00 + B_10 s + B_01 s* + B_11 s s*) (x) qq*

is positive semidefinite for any vector q. Therefore, the scalar s belongs to region D^C.

Define the rank-one positive semidefinite matrix

  X = z z*,   z = [ x ]
                  [ p ]

and the projection matrix P = [0  I_r]. By virtue of equations (4) and (5), it holds that

  A(s) x = A z,   q = Q z,   p = P z.    (7)

Inequality (6) is therefore an LMI in the matrix X. Let us denote it via the linear map

  F(X) = B_00 (x) (Q X Q*) + B_10 (x) (P X Q*) + B_01 (x) (Q X P*) + B_11 (x) (P X P*)    (8)

so that it reads F(X) >= 0. Using equations (3), (7) and (8), an alternative formulation of problem (3) can now be derived. It reads

  mu = min  Trace(A* A X)
       s.t. F(X) >= 0
            X >= 0
            Rank X = 1.    (9)

Problem (9) is an LMI optimization problem with a rank constraint. It must be pointed out that rank-constrained LMIs frequently arise in control problems [1, 18, 19] but also in mathematical programming and combinatorial optimization. In [], it is shown that all problems with polynomial objective and polynomial constraints (including quadratic programming, binary programming and integer programming, to mention just a few) may actually be formulated as rank-constrained LMI optimization problems.

4.3 Quadratic Programming

Rank-one LMI problem (9) can be transformed into a quadratic optimization problem with the help of the following result.

Lemma 3. S = S* >= 0 if and only if Trace(P S) >= 0 for every matrix P = P* >= 0.

Proof of Lemma 3: Just put the matrix S into Schur form, see Lemma C. in [11].

Let

  P = P* = [ P_11 ... P_1k ]
           [ ...       ... ]
           [ P_k1 ... P_kk ]

where P_gh in C^{r x r}. Similarly, let B_ef have scalar entries (B_ef)_gh in C, and define the linear map

  F_D(P) = sum_{g=1}^{k} sum_{h=1}^{k} [ (B_00)_hg Q* P_gh Q + (B_10)_hg P* P_gh Q
                                        + (B_01)_hg Q* P_gh P + (B_11)_hg P* P_gh P ]    (10)

dual to the map introduced in (8). Letting M_0 = Q and M_1 = P, the matrix inequality F(X) >= 0 holds if and only if the scalar

  Trace(P F(X)) = sum_{e=0}^{1} sum_{f=0}^{1} sum_{g=1}^{k} sum_{h=1}^{k} (B_ef)_hg Trace(P_gh M_e X M_f*) = Trace(F_D(P) X)

is greater than or equal to zero for every matrix P = P* >= 0.

Now define the quadratic forms

  f_0(z) = z* A* A z,   f_1(z) = z* F_D(P) z.

With these notations, rank-one LMI problem (9) can equivalently be written as the quadratic optimization problem

  mu = min f_0(z)  s.t.  f_1(z) >= 0.    (11)
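Before moving on, the rank-one mechanism of Lemma 2 can be made concrete numerically. For k = 1, r = 2 and the unit-disk complement D^C = {s : -1 + s s* >= 0} (B_00 = -1, B_10 = B_01 = 0, B_11 = 1), the matrix Q = -qq* + pp* is positive semidefinite exactly when p = sq with |s| >= 1. A minimal Python sketch (function names illustrative):

```python
# Numerical illustration of Lemma 2 for k = 1, r = 2 and the unit-disk
# complement: Q = -q q* + p p* is PSD exactly when p = s q with |s| >= 1.

def form_Q(q, p):
    # 2x2 Hermitian matrix -q q* + p p* (outer products)
    return [[-q[i] * q[j].conjugate() + p[i] * p[j].conjugate()
             for j in range(2)] for i in range(2)]

def is_psd(M, tol=1e-12):
    # 2x2 Hermitian PSD test: nonnegative diagonal and determinant
    det = (M[0][0] * M[1][1] - M[0][1] * M[1][0]).real
    return M[0][0].real >= -tol and M[1][1].real >= -tol and det >= -tol

q = [1 + 0j, 2 - 1j]
s_out = 2 + 1j          # |s| > 1: s lies in D^C
s_in = 0.3 + 0.2j       # |s| < 1: s lies in D

assert is_psd(form_Q(q, [s_out * qi for qi in q]))            # PSD
assert not is_psd(form_Q(q, [s_in * qi for qi in q]))         # not PSD
assert not is_psd(form_Q(q, [0j, 5 + 0j]))   # p not proportional to q
```

When p = sq, Q collapses to (|s|^2 - 1) qq*, a rank-one matrix whose sign is governed entirely by the scalar complementary form, exactly as in the proof of Lemma 2.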

4.4 LMI Formulation

Finally, we apply the S-procedure to quadratic problem (11) to show that the non-convex rank constraint in problem (9) can actually be dropped. Notice that mu > 0 in problem (9) if and only if f_0(z) > 0 for every non-zero vector z such that f_1(z) >= 0. Since P is positive definite, there always exists some non-zero vector z_0 such that f_1(z_0) > 0. Therefore, we can use the S-procedure [] to prove that mu > 0 in problem (9) if and only if

  A* A > F_D(P),   P = P* > 0.    (12)

Defining N as a basis for the right null-space of the matrix A, i.e. A N = 0, it follows from the Elimination Lemma [] that feasibility problem (12) can equivalently be written as

  N* F_D(P) N < 0,   P = P* > 0.    (13)

Using standard semidefinite programming duality arguments [], we show that feasibility problem (13) is dual to LMI optimization problem (9) without the rank constraint. To see this, build the Lagrangian

  L(P, X, Y) = -Trace((A* A - F_D(P)) X) - Trace(P Y) = -Trace(A* A X) + Trace((F(X) - Y) P)

where X = X* >= 0 and Y = Y* >= 0 are Lagrange multiplier matrices. The dual function associated with the Lagrangian reads

  g(X, Y) = min over P of L(P, X, Y) = -Trace(A* A X) if F(X) = Y >= 0, and -infinity otherwise.

The dual optimization problem, obtained by maximizing the dual function g(X, Y), is therefore

  mu = min  Trace(A* A X)
       s.t. F(X) >= 0
            X = X* >= 0.    (14)

Since the matrix P is positive definite, there is always a vector z such that f_1(z) > 0 in problem (11). Therefore Slater's constraint qualification holds and, in the absence of a duality gap, the optimal value of problem (14) is equal to the optimal value of problem (11). We have shown the main result of this paper.

Theorem 1. The zeros of the polynomial matrix A(s) belong to region D if and only if mu > 0 in LMI optimization problem (14).

Corollary 1. The zeros of the polynomial matrix A(s) belong to region D if and only if there is a matrix P solution to LMI feasibility problem (13).
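The LFR of Lemma 1, on which the whole construction above rests, is straightforward to validate numerically. A minimal Python sketch for n = 2, d = 2 with arbitrary coefficients (here D denotes the LFR shift block, not the region; since this D is nilpotent with D^2 = 0, (I - sD)^{-1} = I + sD):

```python
# Check of the LFR of Lemma 1 for a 2x2 polynomial matrix of degree d = 2:
#   A(s) = A0 + L (I - s D)^{-1} s R,  with  L = [A1 A2],  R = [I; 0],
#   D the block subdiagonal shift (nilpotent, D^2 = 0).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def madd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def scale(c, X):
    return [[c * X[i][j] for j in range(len(X[0]))] for i in range(len(X))]

# example coefficients (arbitrary, for illustration only)
A0 = [[0.5, 0.0], [0.0, 0.5]]
A1 = [[0.0, 2.0], [2.0, 0.0]]
A2 = [[1.0, 0.0], [0.0, 1.0]]

L = [[0.0, 2.0, 1.0, 0.0], [2.0, 0.0, 0.0, 1.0]]              # [A1 A2]
R = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]]          # [I; 0]
D = [[0.0] * 4, [0.0] * 4,
     [1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]              # shift block

def lfr_eval(s):
    # (I - s D)^{-1} = I + s D because D is nilpotent of index 2
    inv = madd([[float(i == j) for j in range(4)] for i in range(4)],
               scale(s, D))
    return madd(A0, matmul(L, matmul(inv, scale(s, R))))

def direct_eval(s):
    return madd(A0, madd(scale(s, A1), scale(s * s, A2)))

s = 0.7
assert all(abs(lfr_eval(s)[i][j] - direct_eval(s)[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

The block pattern of L, R and D is exactly the one displayed in Lemma 1; the identity holds for every s because L R = A1 and L D R = A2.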

Both problems (13) and (14) are linear problems over the cone of positive definite matrices. They can be solved very efficiently with the interior-point methods described in []. When an LMI features complex coefficients, it can be transformed into a real LMI using the standard manipulations described in [9].

Remark 1. In Section 4.1 we have assumed that the vector x is non-zero. Without loss of generality, we can thus enforce the normalization equality x* x = 1 in optimization problem (3). Equivalently, we can enforce the equality

  Trace( [ I_n  0 ] X ) = 1
         [ 0    0 ]

in LMI optimization problem (14).

5 Illustration

In the following examples, we define and solve the LMI problems with the user-friendly interface Lmitool for Matlab (1) [1]. We used a tolerance of 0.001 to enforce strict definiteness of the LMIs. The computations involving polynomial matrices are performed with the Polynomial Toolbox for Matlab [1].

5.1 Lyapunov Theorem as a Particular Case

As a first illustration, we show that the well-known Lyapunov Theorem can be derived as a particular case of Corollary 1, as pointed out in []. Consider the polynomial matrix

  A(s) = A_0 - s I_n.

Hurwitz stability of the constant matrix A_0 is equivalent to the polynomial matrix A(s) having its zeros in the open left half-plane

  D = {x + jy in C : x < 0}.

Associated with region D is the indefinite matrix

  B = [ 0  1 ]
      [ 1  0 ]

An obvious choice of a right null-space basis for the matrix A = [A_0  -I_n] is given by

  N = [ I_n ]
      [ A_0 ]

(1) Matlab is a trademark of The MathWorks, Inc.

Consequently, LMI problem (1) becomes

N* F(P) N = [I_n A_0*] ([0 I_n]* P [I_n 0] + [I_n 0]* P [0 I_n]) [I_n A_0*]* ≺ 0

for some positive definite Hermitian matrix P. This can be written more compactly as the standard LMI stemming from the Lyapunov Theorem:

A_0* P + P A_0 ≺ 0,  P = P* ≻ 0.

Stability Bounds

Consider as in [] the second-degree polynomial matrix

A(s) = [s + 1=, as; as, s + 1=]

arising from a closed-loop system with feedback gain a ∈ R. We study stability of A(s) with respect to the unit disk, i.e.

D = {s ∈ C : ss* < 1}.

Associated with region D is the indefinite matrix

B = [−1 0; 0 1].

The LFR matrices corresponding to A(s) are as follows:

[A_0 R; L D] =
[1= 0 0 a 1 0;
 0 1= a 0 0 1;
 1 0 0 0 0 0;
 0 1 0 0 0 0;
 0 0 1 0 0 0;
 0 0 0 1 0 0]

Then we solve the LMI feasibility problem of Corollary 1 for different values of a. Letting a = 19, we obtain the strictly feasible positive definite solution

P = [0 0 10 0; 81 8 0 0; 8 1 0 10; 0 0 1809 0].

When a 10, the solver detects infeasibility of the LMIs. Similarly, letting a = −19, we obtain the strictly feasible positive definite solution

P = [991 0 0 −1; 0 1 −90 0; 0 −90 10 0; −1 0 0 1].
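The unit-disk experiments above can be cross-validated by computing the zeros of A(s) directly through a block companion linearization, in the spirit of the determinant-based routines cited in the references. Since the exact coefficients of this example did not survive reproduction here, the values c and a below are hypothetical stand-ins.

```python
import numpy as np

def polymat_zeros(coeffs):
    """Finite zeros of det(A0 + A1 s + ... + Ad s^d) via a block
    companion matrix; assumes the leading coefficient Ad is invertible."""
    *lower, Ad = coeffs
    n = Ad.shape[0]
    d = len(lower)
    Adinv = np.linalg.inv(Ad)
    C = np.zeros((n * d, n * d))
    C[:n * (d - 1), n:] = np.eye(n * (d - 1))  # shift block rows
    for k, Ak in enumerate(lower):
        C[n * (d - 1):, n * k:n * (k + 1)] = -Adinv @ Ak
    return np.linalg.eigvals(C)

# Hypothetical stand-in coefficients: A(s) = [[s^2+c, a*s], [a*s, s^2+c]].
c, a = 0.5, 0.5
A0 = c * np.eye(2)
A1 = np.array([[0.0, a], [a, 0.0]])
A2 = np.eye(2)
```

For these values all four zeros have modulus about 0.71, i.e. this stand-in A(s) is stable with respect to the unit disk.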

When a −10, the solver detects infeasibility of the LMIs. After a tedious application of the Schur-Cohn criterion to the determinant of A(s), it can be shown that A(s) has all its zeros inside the unit circle if and only if |a| 19, which is consistent with our experiments.

Zeros at Infinity

Consider the polynomial matrix

A(s) = [a + s, as + s²; s, 1 + s²],

parametrized by a ∈ R. We study stability of A(s) with respect to the left half-plane, i.e. the complementary instability region is

D^C = {s ∈ C : s + s* ≥ 0}.

Associated with region D^C is the indefinite matrix

B = [0 1; 1 0].

We readily compute det A(s) = s + a, hence the polynomial matrix is stable if and only if a > 0. The LFR matrices associated with A(s) are given by

A_0 R L D = a 0 1 a 0 1 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0

Following the reduction procedure described in the Appendix, we successively remove a column in R and the corresponding row in Q, cancel a column in R with the matrix

V = [1 0 −1; 0 1 0; 0 0 1],

and remove another column in R and row in Q, eventually coming up with the reduced LFR

[A_0 R; L D] = [1 a 0 1; a 0 1 0; 1 0 0 1; 0 1 0 0].
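The presence of zeros at infinity can also be detected without any LMI machinery: a square polynomial matrix with nonvanishing determinant has zeros at infinity exactly when its leading coefficient matrix is singular, since the reversed matrix s^d A(1/s) then loses rank at s = 0. A sketch with coefficient matrices consistent with the example above, for a = 1 (the entries are reconstructed here, so treat them as an assumption):

```python
import numpy as np

def has_infinite_zeros(coeffs):
    """A square polynomial matrix A(s) = sum_k Ak s^k with nonzero
    determinant has zeros at infinity iff its leading coefficient
    matrix is singular (deg det < n*d)."""
    Ad = coeffs[-1]
    return np.linalg.matrix_rank(Ad) < Ad.shape[0]

# Reconstructed coefficients of A(s) = [[a+s, a*s+s^2], [s, 1+s^2]], a = 1.
a = 1.0
A0 = np.array([[a, 0.0], [0.0, 1.0]])
A1 = np.array([[1.0, a], [1.0, 0.0]])
A2 = np.array([[0.0, 1.0], [0.0, 1.0]])
```

For these matrices the leading coefficient [[0, 1], [0, 1]] is singular, which matches the zero at infinity found by the LMI; replacing it with the identity removes the infinite zeros.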

LMI optimization problem (1) reads

μ = min Trace [0 0 0 0 1 0 0 1 a 0 a a 0 1 0 a a a 0 a a 1 0 0 1 0 0 0 0] X
s.t. X = X* ⪰ 0.

Recalling Remark 1, we add the normalization constraint

Trace [I 0; 0 0] X = 1

to the above LMI problem. Let a = 1. The optimal solution of the above LMI problem is μ = 0, attained with

X = x_p x_p* ⪰ 0,  x_p = [1 0 0 −1]*.

One can check that vector q is equal to zero in equations (). Since q = p/s and vector p is non-zero, it follows that ss* tends to infinity. We can thus conclude that polynomial matrix A(s) features a zero at infinity. This zero belongs to the unstable region D^C, yet A(s) is a stable matrix since its finite zero z = −1 does not belong to D^C. This clearly illustrates the fact that we cannot conclude about stability of the finite zeros of a polynomial matrix featuring infinite zeros when working in open regions such as the left half-plane. As pointed out in Section ., this problem can be overcome by considering the half-disk

D^C_r = {s ∈ C : s + s* ≥ 0, ss* ≤ r}

as a bounded complementary stability region, for a sufficiently large r ∈ R. Associated with region D^C_r is the indefinite matrix

B = [0 0 1 0; 0 r 0 0; 1 0 0 0; 0 0 0 −1].

When a = 1, the optimal value of LMI problem (1) is μ > 0 both when r = 10 and when r = 100. This is consistent with the fact that A(s) has a finite stable zero at z = −1. When a = −1, the optimal value of LMI problem (1) is μ = 1·10⁻¹¹ ≈ 0 when r = 10 and μ = 8·10⁻¹¹ ≈ 0 when r = 100. This is consistent with the fact that A(s) has a finite unstable zero at z = 1.

Union of Regions

Consider the model of a VTOL helicopter proposed in []. A state feedback is designed as in [] to locate the closed-loop poles of the linearized system. The resulting system reads

ẋ = Ax + Bu,  y = Cx,

where

A = 80 190 1811 −199 −118 −1 −1 01 108 8019 −99 −189 00000 00000 10000 00000

for an uncertain parameter a ∈ R, and

B = 0 011 −9 −00 900 00000 00000 + a 1 0 0 0,  C = 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0

We investigate pole clustering of the above system in a non-connected region D which is the union of the ellipse

{x + jy ∈ C : (x + )²/9 + y² < 1}

and the half-plane

{x + jy ∈ C : x < − }.

As pointed out in Section ., the ellipse corresponds to slow dynamics and the half-plane corresponds to fast dynamics. The indefinite matrix corresponding to the union of the above regions is given by

B = 8/9 0 0 /9 1 0 0 9/ 0 1 0 0 0 0 10 0 0 1 /9 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0

For a given value of the parameter a, we build the right coprime matrix fraction description [, ] N(s)D⁻¹(s) = C(sI − A)⁻¹B of the transfer function from input u to output y. We enforce the denominator polynomial matrix D(s) to be column-reduced []. As a result, D(s) has no infinite zeros and it is not necessary to pursue the approach of Section .. Then we study stability of polynomial matrix D(s) by solving LMI optimization problem (1). By virtue of Remark 1, we add the constraint

Trace [I 0; 0 0] X = 1.
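Pole clustering in a union of regions can also be cross-checked by testing each closed-loop eigenvalue for membership directly. In the sketch below the ellipse centre, radii and half-plane bound are hypothetical placeholders, since the exact region parameters did not survive reproduction here.

```python
import numpy as np

# Hypothetical region parameters (cx, rx, ry, xmax are placeholders,
# not the values used in the paper).
def in_region(s, cx=-4.0, rx=3.0, ry=1.0, xmax=-6.0):
    """Membership in the union of the ellipse
    ((x-cx)/rx)^2 + (y/ry)^2 < 1 and the half-plane x < xmax."""
    x, y = s.real, s.imag
    return bool(((x - cx) / rx) ** 2 + (y / ry) ** 2 < 1.0 or x < xmax)

def poles_clustered(A, **region):
    """True when every eigenvalue of A lies in the region."""
    return all(in_region(s, **region) for s in np.linalg.eigvals(A))
```

This is a verification device only: unlike the LMI formulation, it requires computing the eigenvalues and says nothing about robustness with respect to the uncertain parameter a.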

When a = 0, we get

N(s) = −91 + 0099s 1 + 911s + 00s −0989 − s

and

D(s) = −008 − s + 01808s −8 + 1s + 98s − + −101s −1899 + 0s + 098s

LMI optimization problem (1) yields

X = [09 −00 −1 01 111 −0;
     −00 00 01 −019 −0 00;
     −1 01 111 −0 −1000 81;
     01 −019 −0 00 80 −18;
     111 −0 −1000 80 108 −01;
     −0 00 81 −18 −01 8]

and μ > 0. Hence all the zeros of polynomial matrix D(s) are located in region D. When a = −1, we get

N(s) = −0 + 00801s 101 + 0s 80 + 018s 00 − 0s

and

D(s) = −818 + 08s + 01811s −89 + 10s + 098s −180 − 0s −110 + 9s + 1181s

LMI optimization problem (1) yields

X = [001 −09 −0091 009 0008 −0009;
     −09 099 009 −0018 −0009 00018;
     −0091 009 0008 −0009 −0000 0000;
     009 −0018 −0009 00018 0000 −00001;
     0008 −0009 −0000 0000 0000 −00001;
     −0009 00018 0000 −00001 −00001 00000]

and μ = 91·10⁻⁸ ≈ 0. Therefore, some zeros of polynomial matrix D(s) are located outside of region D. We can verify these results by plotting the root locus of matrix A. As can be seen in Figure 1, all the eigenvalues belong to region D when a = 0, but not when a = −1.

Figure 1. Root locus of matrix A when a = 0 and a = −1.

Conclusion

We have proposed a general LMI methodology for determining whether the zeros of a given polynomial matrix belong to a given region of the complex plane. Contrary to most of the existing D-stability methods, no determinant or zero computation is required. As another advantage, we can easily handle rectangular matrices or matrices with complex coefficients. The class of regions covered by our approach is very large and most notably includes non-convex regions and non-connected unions thereof. Another strength of the method is its reliance upon semidefinite programming, for which powerful and efficient algorithms are widely available. Our work hints at several directions for future research. The most promising one is probably the extension of our approach to uncertain polynomial matrices and robust stability analysis. Our D-stability conditions can naturally be combined with other LMIs representing norm-bounded or interval uncertainties. However, the resulting optimization problem may lose its nice convexity properties, and heuristic or global optimization algorithms must then be devised. In the same vein, we can also handle 2-D or more general n-D polynomial matrices. A point that deserves further attention is the numerical performance of our method compared with standard routines based on determinant computation and scalar stability tests. It is believed that the performance will strongly hinge upon the LMI solver in use and that some built-in parameter tunings will be required. Another interesting extension would be to consider synthesis problems and Diophantine equations over polynomial matrices []. Here also, the resulting optimization problem

becomes a non-convex bilinear matrix inequality problem.

Acknowledgments

We are grateful to Laurent El Ghaoui, ENSTA, Paris, for providing a preprint of the paper []. The first author would like to thank Denis Arzelier and Dimitri Peaucelle, LAAS-CNRS, Toulouse, for fruitful discussions.

Appendix

Suppose we are given a polynomial matrix A(s) and its LFR as in Lemma 1. The following procedure reduces the LFR so that matrices Q and R have full rank.

Step 1. If Q is rank deficient, let U be the lower triangular matrix with ones along the diagonal such that the linearly dependent rows of Q are forced to zero in matrix UQ. Let J be the set of indices of these rows. Replace matrices L, R and D by LU⁻¹, UR and UDU⁻¹, respectively. Suppress rows J in L, D, Δ and suppress columns J in R, D, Δ. Update matrices R and Q accordingly.

Step 2. If R is rank deficient, let V = U⁻¹ be the upper triangular matrix with ones along the diagonal such that the linearly dependent columns of R are forced to zero in matrix RV. Let J be the set of indices of these columns. Replace matrices L, R and D by LV, V⁻¹R and V⁻¹DV, respectively. Suppress columns J in L, D, Δ and suppress rows J in R, D, Δ. Update matrices R and Q accordingly.

Step 3. Go to Step 1 until no further reduction can be achieved.

The above procedure results in a reduced LFR of A(s) with full-rank matrices Q and R. The proof follows by careful inspection. The key point is noting that matrix Δ commutes with matrix U = V⁻¹, i.e. ΔU = UΔ. Therefore, Δ is not affected by the successive similarity transformations. Redundant components of p and q corresponding to zero rows of Q or zero columns of R can therefore be removed without altering the LFR of A(s). Note that the reduction matrices U and V can be computed using numerically stable operations [14].

References

[1] J. Ackermann, "Robust Control Systems with Uncertain Parameters", Springer-Verlag, London, 199.

[2] S. M.
Ahn, "Stability of a Matrix Polynomial in Discrete Systems", IEEE Transactions on Automatic Control, Vol., No., pp. 11–11, 198.

[3] B. D. O. Anderson and R. R. Bitmead, "Stability of Matrix Polynomials", International Journal of Control, Vol., No., pp. –, 19.

[4] O. Bachelier, "Commande des Systèmes Linéaires Incertains: Placement de Pôles Robustes en D-Stabilité", Ph.D. Thesis, LAAS-CNRS, Toulouse, France, 1998.

[5] B. R. Barmish, "New Tools for Robustness of Linear Systems", Macmillan, New York, 1994.

[6] A. Ben-Tal, L. El Ghaoui and A. Nemirovski, "Robust Semidefinite Programming", Research Report, Optimization Laboratory, Faculty of Industrial Engineering and Management, The Israel Institute of Technology, Technion City, Haifa, Israel, 1998. To appear in R. Saigal, L. Vandenberghe and H. Wolkowicz (Editors), "Handbook of Semidefinite Programming", Kluwer Academic Publishers, Boston, Massachusetts, 1999.

[7] S. Boyd, L. El Ghaoui, E. Feron and V. Balakrishnan, "Linear Matrix Inequalities in System and Control Theory", SIAM Studies in Applied Mathematics, Philadelphia, Pennsylvania, 1994.

[8] L. Brugnano and D. Trigiante, "Polynomial Roots: the Ultimate Answer?", Linear Algebra and its Applications, Vol., pp. 0–19, 199.

[9] M. Chilali, "Méthodes LMI pour l'Analyse et la Synthèse Multi-Critères", Ph.D. Thesis, UFR Mathématiques de la Décision, Paris IX Dauphine, France, 199.

[10] M. Chilali and P. Gahinet, "H∞ Design with Pole Placement Constraints: an LMI Approach", IEEE Transactions on Automatic Control, Vol. 41, No., pp. 8–, 1996.

[11] L. El Ghaoui, "Robustness of Linear Systems to Parameter Variations", Ph.D. Thesis, Department of Aeronautics and Astronautics, Stanford University, California, 1990.

[12] L. El Ghaoui and P. Gahinet, "Rank Minimization under LMI Constraints: a Framework for Output Feedback Problems", Proceedings of the European Control Conference, Groningen, The Netherlands, 1993.

[13] L. El Ghaoui and J. L. Commeau, "Lmitool 2.0 Package: an Interface to Solve LMI Problems", E-Letters on Systems, Control and Signal Processing, Issue 1, 1999.

[14] G. H. Golub and C. F. Van Loan, "Matrix Computations.
Third Edition", The Johns Hopkins University Press, Baltimore, Maryland, 1996.

[15] S. Gutman, "Root Clustering of a Complex Matrix in an Algebraic Region", IEEE Transactions on Automatic Control, Vol., No., pp. –0, 199.

[16] S. Gutman and E. I. Jury, "A General Theory for Matrix Root Clustering in Subregions of the Complex Plane", IEEE Transactions on Automatic Control, Vol. 26, No., pp. 8–8, 1981.

[17] D. Henrion and M. Sebek, "Improved Polynomial Matrix Determinant Computation", Technical Report, LAAS-CNRS, Toulouse, France, 1999. Submitted for publication.

[18] D. Henrion, S. Tarbouriech and M. Sebek, "Algebraic Approach to Robust Controller Design: a Geometric Interpretation", Proceedings of the American Control Conference, AACC, pp. 0–0, Philadelphia, Pennsylvania, 1998.

[19] D. Henrion, S. Tarbouriech and M. Sebek, "Rank-one LMI Approach to Simultaneous Stabilization of Linear Systems", to appear in Proceedings of the European Control Conference, EUCA, Karlsruhe, Germany, 1999.

[20] M. Hromcik and M. Sebek, "New Algorithms for Polynomial Matrices Based on FFT", Technical Report, Institute of Information Theory and Automation, Czech Academy of Sciences, Prague, Czech Republic, 1998. Submitted for publication.

[21] E. I. Jury, "Inners and Stability of Dynamic Systems", Wiley, New York, 1974.

[22] T. Kailath, "Linear Systems", Prentice Hall, Englewood Cliffs, New Jersey, 1980.

[23] V. Kucera, "Discrete Linear Control: the Polynomial Approach", John Wiley and Sons, Chichester, England, 1979.

[24] H. Kwakernaak, "State-space Algorithms for Polynomial Matrix Computations", Memorandum No. 118, Department of Applied Mathematics, University of Twente, The Netherlands, 199.

[25] H. Kwakernaak and M. Sebek, "Polynomial J-Spectral Factorization", IEEE Transactions on Automatic Control, Vol. 39, No., pp. 1–8, 1994.

[26] Y. Nesterov and A. Nemirovski, "Interior-Point Polynomial Methods in Convex Programming", SIAM Studies in Applied Mathematics, Vol. 13, Philadelphia, Pennsylvania, 1994.

[27] K. T. Ngo and K. T. Erickson, "Stability of Discrete-Time Matrix Polynomials", IEEE Transactions on Automatic Control, Vol., No., pp. 8–, 199.

[28] V. Y. Pan, "Solving a Polynomial Equation: Some History and Recent Progress", SIAM Review, Vol. 39, No., pp. 18–0, 1997.

[29] F. Kraus, M. Mansour and M. Sebek, "Hurwitz Matrix for Polynomial Matrices", in R. Jeltsch and M. Mansour (Editors), "Stability Theory: Proceedings of the Conference Centennial Hurwitz on Stability Theory", International Series of Numerical Mathematics, Vol. 121, pp.
–, Birkhäuser Verlag, Basel, Switzerland, 1996.

[30] M. Sebek, S. Pejchova, D. Henrion and H. Kwakernaak, "Numerical Methods for Zeros and Determinant of Polynomial Matrix", Proceedings of the Mediterranean Symposium on New Directions in Control and Automation, IEEE, pp. 88–91, Chania, Crete, Greece, 199.

[31] M. Sebek, H. Kwakernaak, D. Henrion and S. Pejchova, "Recent Progress in Polynomial Methods and Polynomial Toolbox for Matlab Version 2.0", Proceedings of the Conference on Decision and Control, IEEE, pp. 1–8, Tampa, Florida, December 1998. See also the Polynomial Toolbox home page at www.polyx.cz.

[32] V. L. Syrmos and F. L. Lewis, "Output Feedback Eigenstructure Assignment Using Two Sylvester Equations", IEEE Transactions on Automatic Control, Vol. 38, No., pp. 9–99, 1993.

[33] L. Vandenberghe and S. Boyd, "Semidefinite Programming", SIAM Review, Vol. 38, pp. 49–95, 1996.

[34] A. I. G. Vardulakis, "Linear Multivariable Control: Algebraic Analysis and Synthesis Methods", Wiley, Chichester, 1991.

[35] J. C. Willems, "Paradigms and Puzzles in the Theory of Dynamical Systems", IEEE Transactions on Automatic Control, Vol. 36, pp. 9–9, 1991.

[36] V. A. Yakubovich, "The S-procedure in Nonlinear Control Theory", Vestnik Leningrad University: Mathematics, Vol. 4, pp. 73–93, 1977. In Russian, 1971.

[37] R. K. Yedavalli, "Robust Root Clustering for Linear Uncertain Systems Using Generalized Lyapunov Theory", Automatica, Vol. 29, No. 1, pp. –0, 1993.

[38] K. Zhou, J. C. Doyle and K. Glover, "Robust and Optimal Control", Prentice Hall, Upper Saddle River, New Jersey, 1996.