An Observation on the Positive Real Lemma


Journal of Mathematical Analysis and Applications 255, 480-490 (2001)
doi:10.1006/jmaa.2000.7241, available online at http://www.idealibrary.com

An Observation on the Positive Real Lemma

Luciano Pandolfi

Dipartimento di Matematica, Politecnico di Torino, Corso Duca degli Abruzzi, 24, 10129 Torino, Italy

Submitted by C. T. Leondes

Received April 3, 2000

We give a necessary and a sufficient condition that the transfer function of an exponentially stable linear finite-dimensional system be a positive real matrix. The condition does not assume controllability/observability properties. © 2001 Academic Press

1. INTRODUCTION AND REFERENCES

The positive real lemma is an important tool in systems and circuit theory; see, for example, [1]. The lemma can be described as follows. Let T(z) be a rational transfer function, which is the transfer function of the exponentially stable linear time-invariant system (A, B, C, D) with real matrices (' is used to denote transposition),

    x' = Ax + Bu,    y = C'x + Du.

We assume that x ∈ R^n and u, y ∈ R^m, so that the transfer function T(z) is a square m x m matrix,

    T(z) = D + C'(zI - A)^{-1}B.

The number z is complex; hence, for the computation of T(z), we complexify the linear spaces of dimensions n and m. We assumed that the system is exponentially stable, so that the eigenvalues of A belong to the half plane Re z < 0. For future reference, we collect the crucial assumptions:

Assumption (H). u ∈ R^m, x ∈ R^n, y ∈ R^m; the matrices A, B, C, D have consistent dimensions and are real; every eigenvalue of the matrix A has a negative real part.

0022-247X/01 $35.00 Copyright © 2001 by Academic Press. All rights of reproduction in any form reserved.
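The setup above is easy to experiment with numerically. The sketch below uses a made-up two-dimensional stable example (none of these matrices come from the paper): it forms T(z) = D + C'(zI - A)^{-1}B, with the output convention y = C'x + Du as above, and checks Assumption (H).

```python
import numpy as np

# Hypothetical data (my own toy example, not from the paper).
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0], [0.0]])   # output is y = C'x + Du, so C is n x m
D = np.array([[1.0]])

def T(z):
    """Transfer function T(z) = D + C'(zI - A)^{-1} B."""
    n = A.shape[0]
    return D + C.T @ np.linalg.solve(z * np.eye(n) - A, B)

# Assumption (H): every eigenvalue of A has a negative real part.
assert np.all(np.linalg.eigvals(A).real < 0)

# T(z) tends to D as |z| grows, as expected from the formula.
assert abs(T(1e9)[0, 0] - D[0, 0]) < 1e-6
```

For this choice, T(0) = D + C'(-A)^{-1}B evaluates to 2.5, which is easy to confirm by hand.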

By definition, the transfer function T(z) is positive real when

    y*[T(z) + T*(z)]y >= 0    for all y ∈ C^m and all z with Re z > 0

(* denotes transposition followed by conjugation). It is possible to prove that a positive real rational matrix does not have unstable poles; but it may have poles on the imaginary axis. In this article we assume that the matrix A is stable; hence the poles of T(z) have negative real parts. We put

    Φ(iω) = T(iω) + T*(iω) = C'(iωI - A)^{-1}B - B'(iωI + A')^{-1}C + D + D'

and we note that this matrix has the following rational extension to the complex plane:

    Φ(z) = C'(zI - A)^{-1}B - B'(zI + A')^{-1}C + D + D'.    (1)

We have:

Theorem 1. Let Assumption (H) hold. The matrix T(z) is positive real if and only if

    y*[T(iω) + T*(iω)]y = y*Φ(iω)y >= 0

for every ω and every y ∈ C^m.

See [1, p. 53] for the proof. The positive real property, under suitable controllability/observability assumptions, is equivalent to the solvability of a particular set of equations:

Theorem 2 (Positive Real Lemma). Let (A, B, C, D) be a minimal realization of T(z) which satisfies Assumption (H). Then the matrix T(z) is positive real if and only if there exist matrices Q, W and a symmetric positive definite matrix P which satisfy the following system:

    A'P + PA = -QQ',    PB = C - QW,    W'W = D + D'.    (2)

The problem described by Eq. (2) is important also in the stability analysis of systems affected by feedback nonlinearities, a problem known as the Lur'e problem. Equation (2) is a special instance of the linear operator inequality. Several proofs of this theorem, or of related results, have appeared in the literature. In particular, the sufficiency part is obtained by elementary algebraic manipulations, while the necessity is more involved. See [1] for several different proofs. In spite of the fact that results related to Theorem 2 are quite old (see [8, 6, 4]), new proofs still appear; see, for example, [7] for a nice proof based on convex analysis (in this context, convex analysis was used previously,

see [5]). However, to what extent the minimality assumption is really crucial in the previous theorem has not yet been completely clarified. For example, in [7] controllability of the pair (A, B) is assumed, but not observability. Even more, minimality is not used in [1] to prove that solvability of (2) implies Φ(iω) >= 0, and an examination of the derivation in [1] shows that Φ(iω) >= 0 even if (2) is solvable with P only positive semidefinite. Instead, we see that minimality is crucial if we want the matrix P to be positive definite (instead of semidefinite).

The scope of this article is to examine the solvability of Eq. (2) when controllability and observability properties are not assumed.

Now we introduce the results and the notations that we use in our key result, Theorem 4. We make use of the following factorization theorem: if Φ(z) is a rational matrix which is bounded on the imaginary axis and such that Φ(iω) >= 0, then it is possible to find a matrix N(z) which is bounded in Re z > 0 and such that

    Φ(iω) = N*(iω)N(iω).

See [9], where the existence of a spectral factorization is proved, i.e., a factorization such that N(iω) has a rational right inverse which is regular in Re z > 0, of course when Φ(iω) is not identically zero. The matrix N(z) of a spectral factorization (known as a spectral factor of Φ(z)) has as many rows as the normal rank of Φ(z).

We use the factorization of Φ(iω) as follows. We consider an eigenvalue z_0 of the matrix A and a Jordan chain of z_0, i.e., a finite sequence of vectors such that

    Av_0 = z_0 v_0,    Av_i = z_0 v_i + v_{i-1},    0 < i <= r - 1

(the number r is the length of the Jordan chain). The matrix e^{At} has the following property:

    e^{At}v_0 = e^{z_0 t}v_0,    e^{At}v_k = e^{z_0 t} Σ_{i=0}^{k} (t^i/i!) v_{k-i}.

In general, the eigenvalue z_0 has a finite number of Jordan chains. We enumerate them in any order and we denote by J(z_0, ν) the νth Jordan chain of z_0 in the chosen order. We use the factor N(z) and the Jordan chain J(z_0, ν) = (v_0, v_1, ..., v_{r-1}) of z_0 to construct the block lower triangular matrix

                 [ N_0                          ]
    N(z_0, ν) =  [ N_1      N_0                 ]    (3)
                 [ ...      ...      ...        ]
                 [ N_{r-1}  N_{r-2}  ...  N_0   ]

where r is the length of the Jordan chain that we are considering and

    N_s = [ (1/s!) (d^s/dz^s) N*(-z̄) ]_{z = z_0}.

Remark 3. Ambiguity with respect to the signs in front of the derivative forces us to repeat explicitly: s!N_s is the sth derivative of the function z -> N*(-z̄), computed at z = z_0.

With these notations we have:

Theorem 4. Let the matrices N(z_0, ν) be constructed from any spectral factor of Φ(z) and let Assumption (H) hold. If the transfer function T(z) is positive real, then there exist matrices Q, W and P = P' which solve system (2) if and only if the following condition holds for every Jordan chain J(z_0, ν) of the matrix A:

    col[ C'v_0  C'v_1  ...  C'v_{r-1} ] ∈ im N(z_0, ν).    (4)

Minimality is not assumed; but in this case in general P will not be positive definite: it will only be positive semidefinite; see Example 7.

It is clear that the previous theorem is quite implicit, and the necessity condition is the most interesting, since it gives a new property that the spectral factors of Φ(z) must enjoy when system (2) is solvable. But the sufficiency is also interesting, since we can derive an easier test for solvability from it; i.e., we can derive the Churilov condition (see [3]): Eq. (2) can be solved if there exists ω_0 such that det Φ(iω_0) ≠ 0. In fact, in this case det N(z) has at most isolated zeros with positive real parts, and the fact that N(z) has a regular right inverse shows that it has no zeros there. The matrix (3) then has surjective blocks on the diagonal: it is itself surjective, so that condition (4) holds.

A second important case is expressed by the following corollary:

Corollary 5. If every Jordan chain of the matrix A has length 1 and the transfer function T(z) is positive real, then system (2) is solvable if and only if

    C'v_k ∈ im N*(-z̄_k)

for every eigenvector v_k (whose eigenvalue is z_k) of the matrix A.

For most clarity, we prove the theorem in the next section. The idea of the proof is borrowed from the article [2]. We present now some preliminary observations.
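Before the preliminaries, Theorems 1 and 2 can be illustrated on a concrete scalar system. The example below is my own (A = -1, B = C = D = 1, so T(z) = 1 + 1/(z+1), a minimal stable realization); it exhibits Q, W, P solving system (2) and checks the frequency-domain condition of Theorem 1 on a grid.

```python
import numpy as np

# My own scalar example, not from the paper: T(z) = 1 + 1/(z + 1).
A, B, C, D = -1.0, 1.0, 1.0, 1.0

W = np.sqrt(2.0)            # W'W = D + D' = 2
Q = 2.0 - np.sqrt(2.0)      # positive root of Q^2 + 2*sqrt(2)*Q - 2 = 0
P = 1.0 - Q * W             # from PB = C - QW with B = 1

# The three equations of system (2):
assert abs(A * P + P * A + Q * Q) < 1e-12   # A'P + PA = -QQ'
assert abs(P * B - (C - Q * W)) < 1e-12     # PB  = C - QW
assert abs(W * W - (D + D)) < 1e-12         # W'W = D + D'
assert P > 0                                # P positive definite (scalar case)

# Theorem 1: Phi(i w) = T(i w) + T(i w)* >= 0 for all real w.
for w in np.linspace(0.0, 100.0, 1001):
    T = D + C / (1j * w - A)
    assert (T + T.conjugate()).real >= 0.0
```

Here P = 3 - 2*sqrt(2) > 0, consistent with minimality of the realization.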
First of all, we note the following formula, which will be repeatedly used: if L(t) and R(t) are integrable real matrix-valued functions which are supported

on [0, +∞), then the Fourier transformation of the function

    ∫_0^∞ L'(s) R(t+s) ds,   t > 0;        ∫_0^∞ L'(s-t) R(s) ds,   t < 0    (5)

is the function L̂*(iω)R̂(iω).

We note now that if T(z) is positive real then D + D' >= 0, so that there exist matrices W which solve the equation W'W = D + D'. The idea of the proof is as follows: we fix one of these matrices W and we try to solve the remaining equations, i.e.,

    A'P + PA = -QQ',    PB = C - QW.    (6)

The idea, from [2], is to consider the matrix Q as a parameter. When Q varies, we get a family of matrices P given by

    P = ∫_0^∞ e^{A's} QQ' e^{As} ds.    (7)

These matrices satisfy the first equation in (6). The problem is thus reduced to finding a parameter Q such that PB = C - QW, i.e., such that

    B'P = ∫_0^∞ {B'e^{A's}Q} Q'e^{As} ds = C' - W'Q'.    (8)

We put braces in formula (8) around a term that we are going to examine more closely now. We introduce

    M(t)' = B'e^{A't}Q  if t > 0,    M(t) = 0  if t < 0;

that is, M(t) = Q'e^{At}B for t > 0. With this notation, we write

    C' = B'P + W'Q' = ∫_0^∞ M'(s) Q'e^{As} ds + W'Q'

and we define the function

    C'e^{At}B  if t > 0;    B'e^{-A't}C  if t < 0.

Hence, we get the equalities

    C'e^{At}B = ∫_0^∞ M'(s) Q'e^{A(t+s)}B ds + W'Q'e^{At}B    if t > 0,
    B'e^{-A't}C = ∫_0^∞ B'e^{A'(s-t)}Q M(s) ds + B'e^{-A't}QW    if t < 0,

i.e.,

    ∫_0^∞ M'(s) M(t+s) ds + W'M(t),   t > 0;    ∫_0^∞ M'(s-t) M(s) ds + M'(-t)W,   t < 0.    (9)

The Fourier transformation of the left-hand side is

    C'(iωI - A)^{-1}B - B'(iωI + A')^{-1}C.

We compute the Fourier transformation of the right-hand side and we get the equality

    C'(iωI - A)^{-1}B - B'(iωI + A')^{-1}C = M̂*(iω)M̂(iω) + W'M̂(iω) + M̂*(iω)W,

so that, summing W'W to both sides, we obtain the factorization

    Φ(iω) = [M̂(iω) + W]*[M̂(iω) + W].    (10)

We put

    N(iω) = M̂(iω) + W.    (11)

Consequently, we have the following factorization result, which is in fact well known:

Theorem 6. If there exist matrices Q and P = P' which solve (6), then the function N(z) = M̂(z) + W is holomorphic and bounded in a half plane Re z > -σ, with σ > 0, and satisfies

    Φ(iω) = N*(iω)N(iω).

Of course,

    im Φ(iω) = im [M̂(iω) + W]* = im N*(iω).

2. THE PROOF OF THE MAIN THEOREM

Before we proceed to the proof, it is interesting to discuss the following example and to prove a lemma that it suggests.

Example 7. Let C = 0, B ≠ 0, D = 0. In this case Φ(iω) = 0 and problem (6) is solvable: a solution is Q = 0, P = 0. Instead, let B = 0 and C ≠ 0. Also in this case Φ(iω) is zero; but now Eq. (6) is not solvable: 0 = PB = C has no solution.

Remark 8. The case C = 0, B ≠ 0, and n >= 1 corresponds to a nonminimal system. Problem (6) is solvable, but P is not coercive.

Lemma 9. If the function Φ(iω) is zero, then system (6) is solvable if and only if C = 0.

Proof. If C = 0, then Q = 0, P = 0 solves the problem. Conversely, let Φ(iω) = 0. In this case we have also W = 0. Let P, Q solve problem (6). We have

    C' = ∫_0^∞ B'e^{A't}Q Q'e^{At} dt = (1/2π) ∫_{-∞}^{+∞} M̂*(iω) Q'(iωI - A)^{-1} dω    (12)

(the second equality from the Parseval identity). We noted that M̂(iω) = N(iω) (since W = 0) is a factor of Φ(iω): it is zero if Φ(iω) is zero, so that C = 0 too.

The previous lemma proves Theorem 4 in the case that Φ(iω) is zero. Hence, we can confine our analysis to the case of nonzero Φ(iω). We have the following proof of the only if part:

Proof of Theorem 4, Necessity. We saw that if Eqs. (6) are solvable then there exists Q such that

    C' = ∫_0^∞ B'e^{A't}Q Q'e^{At} dt + W'Q'.    (13)

Let us consider a Jordan chain of A. Let v_0, v_1, ..., v_{r-1} be its elements and let z_0 be the eigenvalue. We multiply both sides of (13) by v_0. We get

    C'v_0 = ∫_0^∞ B'e^{A't}Q Q'e^{z_0 t}v_0 dt + W'Q'v_0
          = ∫_0^∞ M'(t) e^{z_0 t} dt Q'v_0 + W'Q'v_0
          = M̂*(-z̄_0) Q'v_0 + W'Q'v_0 = N*(-z̄_0) Q'v_0.

Now we repeat the previous computation for v_1, v_2, .... We get the equalities

    C'v_1 = ∫_0^∞ M'(t) t e^{z_0 t} dt Q'v_0 + ∫_0^∞ M'(t) e^{z_0 t} dt Q'v_1 + W'Q'v_1
          = [d/dz M̂*(-z̄)]_{z=z_0} Q'v_0 + N*(-z̄_0) Q'v_1
          = [d/dz N*(-z̄)]_{z=z_0} Q'v_0 + N*(-z̄_0) Q'v_1.

The convention that we explicitly stated in Remark 3 comes from the previous and the next computation. We proceed as above until the end of the Jordan chain, and we get

    C'v_i = Σ_{s=0}^{i} [ (1/s!) (d^s/dz^s) N*(-z̄) ]_{z=z_0} Q'v_{i-s} = Σ_{s=0}^{i} N_s Q'v_{i-s}.
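As an aside, the first of these identities, C'v_0 = N*(-z̄_0)Q'v_0, can be checked numerically on a scalar toy example (my own numbers, not the paper's: A = -1 with eigenvector v_0 = 1 and eigenvalue z_0 = -1; B = C = 1; Q = 2 - sqrt(2) and W = sqrt(2) solve system (6) with W'W = 2):

```python
import numpy as np

# Scalar toy data of my own choosing; Q, W solve system (6) for this system.
A, B, C = -1.0, 1.0, 1.0
W = np.sqrt(2.0)
Q = 2.0 - np.sqrt(2.0)

z0, v0 = -1.0, 1.0                # eigenvalue and eigenvector of A
xi = -np.conj(z0)                 # xi = -conj(z0), so Re xi > 0
N_xi = Q * B / (xi - A) + W       # N(xi) = Mhat(xi) + W, with Mhat(z) = Q'(zI-A)^{-1}B

# First identity of the necessity proof: C'v0 = N*(-conj(z0)) Q'v0.
assert abs(C * v0 - np.conj(N_xi) * (Q * v0)) < 1e-12
```

Both sides evaluate to 1 for this data.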

We collect the previous equalities and we get condition (4) for the matrix N(z_0, ν).

We sum up: we proved that the condition of the theorem holds for the matrix N(z); but the proof is not yet finished, since we are not asserting that the factor N(z) that we introduced is a spectral factor, while the theorem states that condition (4) should hold for a spectral factor. We complete the proof by proving that if condition (4) holds for a certain factor, then it also holds for a spectral factor.

We rely on [9, Corollary 1]: if N(z) is any factor of Φ(z), then there exist a spectral factor Θ(z) and a rational matrix V(z) such that N(z) = V(z)Θ(z). The matrix V(z) has a certain structure that is not relevant to the following argument; but it has no poles in the right half plane, since Θ(z) is regular and of full rank in the right half plane and the matrix N(z) which we introduced above is regular in the right half plane.

It is easy to compute the blocks of the matrix N(z_0, ν): with the convention 1/s! = 0 when s < 0 and with D denoting the derivative with respect to z, the block N_s equals

    (1/s!) D^s [Θ*(-z̄)V*(-z̄)]_{z=z_0} = Σ_{i=0}^{s} [ (1/(s-i)!) D^{s-i} Θ*(-z̄) ] [ (1/i!) D^i V*(-z̄) ]_{z=z_0}.

Let Θ(z_0, ν) be the matrix in (3), constructed from the spectral factor Θ(z). We see from the previous computation that N(z_0, ν) = Θ(z_0, ν)K(z_0) for a certain matrix K(z_0) which depends on V(z). Hence, if condition (4) holds for N(z_0, ν), it holds for Θ(z_0, ν) too. This ends the proof of the only if part of Theorem 4.

We prove now the if part of Theorem 4.

Proof of Theorem 4, Sufficiency. The if part was proved already in the special case Φ(iω) = 0; see Example 7 and Lemma 9. Hence, we consider the case in which Φ(iω) is not identically zero. We noted already that the function Φ(iω) has a rational extension to the complex plane; see (1). Moreover, we assume positivity; i.e., we assume Φ(iω) >= 0 for each ω.
Hence, there exists a spectral factor of Φ(iω), which we call N(z): it is possible to represent

    Φ(iω) = N*(iω)N(iω),

where, we recall, N(z) is a matrix which is holomorphic in Re z >= 0 and which has a holomorphic right inverse J(z) in Re z > 0.
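This factorization can be made concrete on the scalar example used earlier (my own data, not the paper's: A = -1, B = C = D = 1, Q = 2 - sqrt(2), W = sqrt(2) solving system (2)); in the scalar case the factor N(iω) = Q'(iωI - A)^{-1}B + W built in Theorem 6 serves as the spectral factor, and Φ(iω) = |N(iω)|^2 can be checked on a frequency grid:

```python
import numpy as np

# Scalar data of my own choosing; Q, W solve system (2) for this system.
A, B, C, D = -1.0, 1.0, 1.0, 1.0
W = np.sqrt(2.0)
Q = 2.0 - np.sqrt(2.0)

for w in np.linspace(-50.0, 50.0, 1001):
    T = D + C / (1j * w - A)
    Phi = T + T.conjugate()            # Phi(i w) = T(i w) + T(i w)*
    N = Q * B / (1j * w - A) + W       # N(i w) = Mhat(i w) + W
    assert abs(Phi - abs(N) ** 2) < 1e-10
```

For this example Φ(iω) = 2 + 2/(1 + ω^2), which the factorization reproduces exactly.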

We observe that the rational function N(z) is bounded, so that N(∞) exists. We choose W = N(∞), so that W'W = D + D'. The function E(iω) = N(iω) - W tends to zero for ω -> ±∞; it is a rational function, so that it is square integrable on the imaginary axis. Moreover, it is holomorphic in Re z > 0, so that it has an inverse Fourier transform Ě(t) which is zero for negative times. The function Ě(t) is, for t > 0, a combination of polynomials and decaying exponential functions.

To prove the theorem, we construct a matrix Q such that

    C' - W'Q' = ∫_0^∞ Ě'(t) Q'e^{At} dt.    (14)

To construct the matrix Q, we distinguish two cases:

Case (a). The eigenvectors v_k of A are n in number, hence a complete system in C^n. In this case, we can compute

    [C' - W'Q']v_k = ∫_0^∞ Ě'(t) Q'v_k e^{z_k t} dt = Ê*(-z̄_k) Q'v_k.

Consequently, we are looking for a solution q_k of

    C'v_k = N*(ξ_k) q_k,    where ξ_k = -z̄_k, Re ξ_k > 0.    (15)

This equation can be solved for the vector q_k = Q'v_k thanks to condition (4). In fact, if the Jordan chain has length 1, then the corresponding matrix (3) has only one block. This completes the construction of the matrix Q in this case.

Case (b). The case that the matrix A admits Jordan chains of length larger than one. We fix our attention on a Jordan chain v_0, v_1, ..., v_{r-1} of A. Let z_0 be the eigenvalue. Multiplication of both sides of the required equality (14) by the vectors v_i gives the system of equations

    C'v_i = Σ_{s=0}^{i} N_s Q'v_{i-s};

see the corresponding computation in the proof of the only if part. The assumed condition (4) shows that these equations can be solved. We proceed analogously for every Jordan chain and we get a solution Q of (14). This ends the construction of the matrix Q which satisfies

    C' - W'Q' = ∫_0^∞ Ě'(t) Q'e^{At} dt.
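Case (b) rests on how e^{At} acts along a Jordan chain, namely the formula e^{At}v_k = e^{z_0 t} Σ_{i=0}^{k} (t^i/i!) v_{k-i} recalled in Section 1. A small numerical check of that formula, using a single 3 x 3 Jordan block of my own choosing (whose standard basis vectors form a Jordan chain):

```python
import numpy as np
from scipy.linalg import expm
from math import factorial, exp

z0, r = -1.0, 3
A = z0 * np.eye(r) + np.diag(np.ones(r - 1), 1)   # one Jordan block of length r
v = [np.eye(r)[:, i] for i in range(r)]           # v_i: the standard basis is a chain

# Chain property: A v_0 = z0 v_0, A v_i = z0 v_i + v_{i-1}.
assert np.allclose(A @ v[0], z0 * v[0])
assert np.allclose(A @ v[2], z0 * v[2] + v[1])

t = 0.7
for k in range(r):
    lhs = expm(A * t) @ v[k]
    rhs = exp(z0 * t) * sum(t**i / factorial(i) * v[k - i] for i in range(k + 1))
    assert np.allclose(lhs, rhs)
```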

We construct the matrix P as in (7) and, finally, we show that this matrix satisfies B'P = C' - W'Q'. It is sufficient for this to note the following lemma, which is known at least in the case W = 0. We present the proof for completeness.

Lemma 10. We have Ě(t)' = B'e^{A't}Q, i.e., Ě(t) = Q'e^{At}B.

Proof. We know that

    Φ(iω) = N*(iω)N(iω) = [E(iω) + W]*[E(iω) + W].

Hence

    C'(iωI - A)^{-1}B - B'(iωI + A')^{-1}C = E*(iω)E(iω) + E*(iω)W + W'E(iω).

It follows from (5) that, for t > 0,

    C'e^{At}B = ∫_0^∞ Ě'(s) Ě(t+s) ds + W'Ě(t)    (16)

and, from (14),

    C'e^{At}B = ∫_0^∞ Ě'(s) Q'e^{A(t+s)}B ds + W'Q'e^{At}B.    (17)

We put F(t) = Q'e^{At}B - Ě(t) for t >= 0, F(t) = 0 for t < 0. We subtract (16) from (17). We get

    W'F(t) = -∫_0^∞ Ě'(s) F(t+s) ds = -∫_t^{+∞} Ě'(r-t) F(r) dr = -∫_{-∞}^{+∞} Ě'(r-t) F(r) dr.

The last equality follows since we know that Ě(s) = 0 and F(s) = 0 if s < 0. The previous equality holds for t >= 0, so that the function

    U(t) = W'F(t) + ∫_{-∞}^{+∞} Ě'(r-t) F(r) dr

is square integrable on R and is zero for t >= 0. It follows that there exists an extension Û(z) of the Fourier transformation of U(t) which is holomorphic in Re z < 0. A simple computation shows that Û(z) = N*(-z̄)F̂(z), and we note that F̂(z), if not zero, must have its poles in Re z < 0. Hence, if F(t) were not zero, the function N*(-z̄) should cancel the poles of F̂(z) in Re z < 0. This is not possible, since the matrix N*(-z̄) has a holomorphic left inverse there. Consequently, F(t) = 0, as wanted.

This lemma completes the proof of Theorem 4.
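Throughout the proof, P is the matrix (7). Numerically, this P can be obtained either by quadrature of the integral or, equivalently, from the Lyapunov equation A'P + PA = -QQ'. A sketch comparing the two routes on random stable data of my own choosing (the quadrature is deliberately crude, hence the loose tolerance; SciPy's solver convention a x + x a' = q is assumed as documented):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) - 5.0 * np.eye(3)   # shifted so that A is Hurwitz
assert np.all(np.linalg.eigvals(A).real < 0)
Q = rng.standard_normal((3, 2))

# SciPy solves a x + x a' = q; transposing the first argument matches A'P + PA = -QQ'.
P = solve_continuous_lyapunov(A.T, -Q @ Q.T)
assert np.allclose(A.T @ P + P @ A, -Q @ Q.T)

# Compare with the integral (7), truncated and crudely discretized.
s = np.linspace(0.0, 10.0, 4001)
ds = s[1] - s[0]
P_int = sum(expm(A.T * t) @ Q @ Q.T @ expm(A * t) for t in s) * ds
assert np.allclose(P, P_int, atol=5e-2)
```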

REFERENCES

1. B. D. O. Anderson and S. Vongpanitlerd, "Network Analysis and Synthesis: A Modern Systems Theory Approach," Prentice Hall, Englewood Cliffs, NJ, 1973.
2. A. V. Balakrishnan, On a generalization of the Kalman-Yakubovich lemma, Appl. Math. Optim. 31 (1995), 177-187.
3. A. N. Churilov, On the solvability of matrix inequalities, Mat. Zametki 36 (1984), 725-732.
4. R. E. Kalman, Lyapunov functions for the problem of Lur'e in automatic control, Proc. Nat. Acad. Sci. U.S.A. 49 (1963), 201-205.
5. A. A. Nudel'man and N. A. Schwartzman, On the existence of the solutions to certain operatorial inequalities, Sibirsk. Mat. Zh. 16 (1975), 562-571 (in Russian).
6. V. M. Popov, Absolute stability of nonlinear systems of automatic control, Avtomat. i Telemekh. 22 (1961), 961-979 (in Russian).
7. A. Rantzer, On the Kalman-Yakubovich-Popov lemma, Systems Control Lett. 28 (1996), 7-10.
8. V. A. Yakubovich, The frequency theorem in control theory, Siberian J. Math. 14 (1973), 384-419.
9. D. C. Youla, On the factorization of rational matrices, IRE Trans. Inform. Theory IT-7 (1961), 172-189.