Solvability of Linear Matrix Equations in a Symmetric Matrix Variable

Maurício C. de Oliveira and J. William Helton

M. C. de Oliveira is with the Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA 92093-0411, USA, mauricio@ucsd.edu. J. W. Helton is with the Department of Mathematics, University of California San Diego, La Jolla, CA 92093-0112, USA, helton@math.ucsd.edu. Partly supported by NSF DMS 0757212, DMS 0700758 and the Ford Motor Company.

Abstract: We study the solvability of generalized linear matrix equations of the Lyapunov type in which the number of terms involving products of the problem data with the matrix variable can be arbitrary. We show that, contrary to what happens with standard Lyapunov equations, which have only two terms, these generalized matrix equations can have unique solutions even when the associated matrix representation in terms of Kronecker products is singular. We show how a simple modification to the equation can lead to a matrix representation that does not suffer from this deficiency.

I. INTRODUCTION

Consider linear equations of the form

    L(X) := Σ_{i=1}^{M} (A_i X B_i^T + B_i X A_i^T) = C = C^T,   (1)

with real matrix coefficients A_i, B_i and a square real matrix unknown X. These and similar equations in several unknowns often occur when one deals with problems having matrix unknowns, e.g. [1], [2]. Observe that, due to the symmetry of the right-hand side, solutions will always be symmetric, that is,

    X = X^T,   (2)

even if this constraint is not explicitly stated. Efficient numerical solution is little developed for M ≥ 2, and very highly developed when M = 1. There, a common tool (at least implicitly) is to convert this to a conventional linear system

    L vec(X) = vec(C),   (3)

where, using Kronecker products, the matrix L is

    L := Σ_{i=1}^{M} (A_i ⊗ B_i + B_i ⊗ A_i).   (4)

We call this the vec-representation of L. It translates the structure of (1) into a natural Kronecker product structure (4). A key property is: for M = 1, equation (1) has a solution and this solution is unique if and only if the mapping L in (1) and the matrix L in (4) are invertible. For M ≥ 2, invertibility of L and of L can fail even when (1) has a solution and this solution is unique. The catch here is that a solution to (1) may exist and be unique, but L may not be invertible without (2). A brute force fix to this problem is to produce linear equations

    L_sym x_sym = c_sym,   (5)

which do not have a nice (Kronecker product) structure, where x_sym is a vector of dimension

    k = m(m + 1)/2.   (6)

Note that for m > 1 we have k < m² and k ≈ m²/2 for large m. This is bad because sparsity patterns in the coefficient matrices are likely destroyed.

One contribution of the present paper is to show that if L is not invertible but a square equation of the form (1) has a unique solution, then the modified problem

    Σ_{i=1}^{M} (A_i X B_i^T + B_i X A_i^T) + α(X − X^T) = C = C^T   (7)

will have this same solution, independent of the parameter α, and its vec-representation will be invertible for all but finitely many α; see Theorem 3.3. Thus we have a highly structured set of well posed linear equations available for numerical techniques. A similar result, in terms of the rank of L, is produced when the coefficients A_i, B_i are not square; see Theorem 3.4 for the generalization of (7). Future work treats systems of equations in more variables and numerical algorithms for which our formulas are well suited. The interested reader is also referred to the work [3] for related results in the case of quadratic matrix equations and to the work [4] for a numerical method for solving coupled Sylvester equations.
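For readers who want to experiment, the vec-representation (3)-(4) is easy to form numerically. The sketch below is our own illustration, not code from the paper: the data is made up and the helper name build_vec_representation is ours. It assembles L for given coefficient lists and solves (3) when L happens to be invertible.

```python
import numpy as np

def build_vec_representation(A_list, B_list):
    """Assemble L = sum_i (B_i kron A_i + A_i kron B_i), a vec-representation of
    L(X) = sum_i (A_i X B_i^T + B_i X A_i^T) under the column-major (Fortran) vec."""
    m = A_list[0].shape[0]
    L = np.zeros((m * m, m * m))
    for A, B in zip(A_list, B_list):
        # vec(A X B^T) = (B kron A) vec(X) and vec(B X A^T) = (A kron B) vec(X)
        L += np.kron(B, A) + np.kron(A, B)
    return L

# hypothetical data with M = 2 terms and m = 3
rng = np.random.default_rng(0)
m, M = 3, 2
A_list = [rng.standard_normal((m, m)) for _ in range(M)]
B_list = [rng.standard_normal((m, m)) for _ in range(M)]
C = rng.standard_normal((m, m))
C = C + C.T                                    # symmetric right-hand side, as in (1)

L = build_vec_representation(A_list, B_list)
x = np.linalg.solve(L, C.flatten(order="F"))   # solve (3); generic random data makes L invertible
X = x.reshape((m, m), order="F")               # the solution, symmetric by the observation above
```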
II. SOME PERSPECTIVE

In the above equation the A's and B's (problem data) are real matrices of compatible dimensions and M ≥ 1 is an integer. We are especially interested in solving equations where M ≥ 2 is fixed and usually small, say M < 20, and the coefficient matrices come from systems problems. One concrete example is the set of equations arising from interior-point algorithms applied to convex matrix optimization problems such as the ones described in [2]. In these problems the A's and B's can be highly structured, e.g. sparse, low rank, etc., and we would like to preserve some of this structure when solving equation (1). Subsequent work will treat systems of such matrix equations in several unknowns.

A particularly well studied instance of equation (1) is the Lyapunov equation

    A X + X A^T = C = C^T,   (8)

in which M = 1. In this particular case, where all matrices are square of dimension m, the best known algorithms for computing the solution to the Lyapunov equation (8) will compute X in O(m³) operations (e.g. the Bartels-Stewart algorithm [5]). A closer look at these algorithms reveals that the matrices A and C are first transformed into a highly structured form, e.g. A is transformed into Ã in Schur form using orthogonal transformations, and a solution is computed after solving an equation of the form

    (I ⊗ Ã + Ã ⊗ I) vec(X̃) = vec(C̃).   (9)

Note that the above equation is never formed explicitly and its solution is computed on a column-by-column basis. It is also well known that a solution to equation (8) will exist and be unique if and only if a solution to (9) exists and is unique. We shall prove that this is the case for all equations of the form (1) with M = 1 in Lemma 4.6.

As we anticipated, the situation is more involved when M ≥ 2, where the matrix L in (4) may be rank-deficient even when the solution to equation (1) exists and is unique. The best illustration is with a simple example.

Example 1: Let L(X) = A^T X + X A + B^T X B with M = m = 2. We have

    A = [−1 0; 0 2],   B = [0 1; 1 0],   L = [−2 0 0 1; 0 1 1 0; 0 1 1 0; 1 0 0 4],

which is singular. Nevertheless, if equations (5) are formed taking into account the symmetry X = X^T in (2), one can verify that

    L_sym = [−2 0 1; 0 1 0; 1 0 4]

is non-singular, hence that the solution to any equation where L is as in Example 1 with a symmetric right-hand side C = C^T will exist and be unique.

III. CROSS-SYMMETRIC LINEAR MAPPINGS

All results in the present paper can be seen as consequences of the following property of equation (1), which we now formalize.

Definition 1 (Cross-Symmetric Linear Mapping): Let L : R^{m×m} → R^{r×r} be a linear mapping. We say that the mapping L is cross-symmetric if L(X)^T = L(X^T) for all X.

Note that the definition is not restricted to linear mappings of the form (1); it applies to any linear mapping. Indeed, the proofs in this section do not make use of structure in (1) other than its cross-symmetry. The main point is that cross-symmetric mappings are block-diagonalizable into symmetric and anti-symmetric components. The following is standard [6, p. 170].

Proposition 1: Let R^{m×m}, SR^{m×m}, and AR^{m×m} denote the spaces of real square, real symmetric, and real anti-symmetric matrices of dimension m. Then R^{m×m} = SR^{m×m} ⊕ AR^{m×m}.

The following lemma is equivalent to block-diagonalization for cross-symmetric mappings.

Lemma 3.1 (Block-diagonalization): Let L : R^{m×m} → R^{r×r} be a linear cross-symmetric mapping. Then

    L(R^{m×m}) = L_sym(SR^{m×m}) ⊕ L_anti(AR^{m×m}),

where

    L_sym(X) := (1/2)(L(X) + L(X)^T),   (10)
    L_anti(X) := (1/2)(L(X) − L(X)^T).   (11)

Proof: Let X = X_sym + X_anti, where X_sym ∈ SR^{m×m} and X_anti ∈ AR^{m×m}. Then Y = L(X) ∈ R^{r×r} is such that Y = Y_sym + Y_anti, where Y_sym := L_sym(X) = L((1/2)[X + X^T]) = L(X_sym) and Y_anti := L_anti(X) = L((1/2)[X − X^T]) = L(X_anti), hence Y_sym ∈ SR^{r×r} and Y_anti ∈ AR^{r×r}.

As a consequence of that, equations involving cross-symmetric mappings can be split in two, as in the next lemma.

Lemma 3.2: Let C ∈ R^{r×r}, let L : R^{m×m} → R^{r×r} be a linear cross-symmetric mapping, and consider the equation

    L(X) = C.   (12)

Equation (12) admits a solution X = X_sym + X_anti, where X_sym ∈ SR^{m×m} and X_anti ∈ AR^{m×m}, if and only if L_sym(X_sym) = (1/2)(C + C^T) and L_anti(X_anti) = (1/2)(C − C^T).

Proof: A proof of this lemma can be constructed by combining Proposition 1 with Lemma 3.1.

This means that one can characterize the existence of symmetric solutions by looking only at the symmetric component of the mapping.
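As a quick numerical check of Lemma 3.2 on the data of Example 1, the sketch below (ours) verifies that L is singular while a reduced symmetric system is not. The reduction used here is just one of the many possible ways of forming equations (5), so the 3 x 3 matrix it produces differs from the L_sym displayed above by a harmless change of coordinates; the rank conclusions are the same.

```python
import numpy as np

A = np.array([[-1.0, 0.0], [0.0, 2.0]])        # data of Example 1
B = np.array([[ 0.0, 1.0], [1.0, 0.0]])

def Lmap(X):
    return A.T @ X + X @ A + B.T @ X @ B

# vec-representation of L (column-major vec)
L = np.kron(np.eye(2), A.T) + np.kron(A.T, np.eye(2)) + np.kron(B.T, B.T)
print(np.linalg.matrix_rank(L))                # 3 < 4, so L is singular

# brute-force symmetric reduction: parametrize X by x_sym = (x11, x21, x22),
# i.e. by a basis of the k = m(m+1)/2 = 3 dimensional symmetric subspace
E = [np.array([[1.0, 0.0], [0.0, 0.0]]),
     np.array([[0.0, 1.0], [1.0, 0.0]]),
     np.array([[0.0, 0.0], [0.0, 1.0]])]
rows = np.tril_indices(2)                      # keep lower-triangular entries of the symmetric image
L_sym_bf = np.column_stack([Lmap(Ei)[rows] for Ei in E])
print(np.linalg.matrix_rank(L_sym_bf))         # 3, so the reduced symmetric system is nonsingular
```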
Indeed, it should be clear that if C = C^T then a symmetric solution to an equation L(X) = C = C^T exists and is unique if and only if L_sym is nonsingular. This explains why L can be singular while a symmetric solution may exist and be unique. In particular, because of Lemma 3.1, we should have rank(L) = rank(L_sym) + rank(L_anti) from block-diagonalization.

In the next paragraphs we will discuss how L can be modified so that it becomes nonsingular whenever L_sym is nonsingular. We will tackle the square case first.

Theorem 3.3: Let L : R^{m×m} → R^{m×m} be a cross-symmetric linear mapping. Define L_α : R^{m×m} → R^{m×m}, where

    L_α(X) := L(X) + (α/2)(X − X^T).   (13)

Then L_α(R^{m×m}) = L_α(SR^{m×m}) ⊕ L_α(AR^{m×m}) and rank(L_α) ≤ p + rank(L_sym), where p = m(m − 1)/2. Furthermore, rank(L_α) = p + rank(L_sym) for all but finitely many α ∈ R.

Proof: The proof that the image of L_α is the direct sum of symmetric and anti-symmetric components is similar to that of Lemma 3.1. With that in mind, all that we need to prove is that L_α restricted to AR^{m×m} is full rank for all but finitely many α, that is, L_α(X_anti) = 0 ⟹ X_anti = 0 for X_anti ∈ AR^{m×m} and almost all α ∈ R. But L_α(X_anti) = 0 with X_anti ≠ 0 if and only if L(X_anti) = −α X_anti, that is, if −α is an eigenvalue of the mapping L with an anti-symmetric eigenvector X_anti. Hence, if −α is not one of the finitely many eigenvalues of L, we must have X_anti = 0.

Note that L_α is not cross-symmetric, but it is still block-diagonalizable into symmetric and anti-symmetric blocks. From here it is not hard to be convinced that the vec-representation of L_α will be nonsingular whenever L_sym is nonsingular, for almost any α. In the next section we will come up with explicit formulas for such a representation, but at this point we can already revisit Example 1.

Example 2: Let L(X) = A^T X + X A + B^T X B with data as in Example 1. We already know that L is singular but L_sym is not. For α = 2 we have that L_α is not singular. Indeed, one can verify that its vec-representation is

    L_α = [−2 0 0 1; 0 2 0 0; 0 0 2 0; 1 0 0 4],

which is non-singular.

Theorem 3.3 can be generalized in many ways. One could replace the anti-symmetric term X − X^T by a symmetric one, X + X^T, and write comparable statements for L_anti. One can also propose a version that works with non-square mappings, as in the next theorem.

Theorem 3.4: Let L : R^{m×m} → R^{r×r} be a cross-symmetric linear mapping and let E, F ∈ R^{r×m} be full-rank matrices. Define L_α : R^{m×m} → R^{r×r}, where

    L_α(X) := L(X) + (α/4) F(X − X^T)E^T + (α/4) E(X − X^T)F^T.   (14)

Then L_α(R^{m×m}) = L_α(SR^{m×m}) ⊕ L_α(AR^{m×m}) and rank(L_α) ≤ p + rank(L_sym), where p = s(s − 1)/2 and s = min{m, r}. Furthermore, rank(L_α) = p + rank(L_sym) for all but finitely many α ∈ R.

Proof: A proof of the above theorem is omitted; it can be constructed along the lines of the proof of Theorem 3.3. Alternatively, one can rely on the concrete representations and the argument to be presented in Section IV-D.

We use these results in order to solve symmetric equations of the form (1).

Corollary 3.5: Let L : R^{m×m} → R^{r×r} be a cross-symmetric linear mapping. Define L_α : R^{m×m} → R^{r×r} as in (13) if m = r or as in (14) if m ≠ r. The equation L(X) = C = C^T admits a symmetric solution X̄ = X̄^T if and only if X̄ also solves L_α(X̄) = C = C^T. When m = r, if the symmetric solution X̄ is unique then L_α is nonsingular for all but finitely many α.

Proof: For any X̄ = X̄^T we have L(X̄) = L_α(X̄), so X̄ solves L(X̄) = C = C^T if and only if it solves L_α(X̄) = C = C^T. Uniqueness and non-singularity follow from Theorem 3.3.

IV. CONCRETE RESULTS FOR vec-REPRESENTATIONS

The results presented in the previous section hold for abstract cross-symmetric linear mappings. In this section we will give concrete representations, specifically vec-representations, for mappings of the particular form (1). In order to do that we will first revisit some results on Kronecker products. The goal is to provide formulas for evaluating L, L_sym, and L_anti in terms of Kronecker products. These formulas are useful for implementing numerical algorithms.

A. Kronecker Products and the vec Operator

We now review some basic properties of Kronecker products and the vec operator (see [7, Chapter 4] for much more).
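Since these formulas are intended for implementation, the two identities used most heavily below, vec(B X A^T) = (A ⊗ B) vec(X) and vec(X^T) = K_m vec(X), can be sanity-checked numerically in a few lines. This is our own sketch with arbitrary data; the commutation matrix is built directly from its definition.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r, s = 2, 3, 4, 5
X = rng.standard_normal((m, n))
A = rng.standard_normal((s, n))
B = rng.standard_normal((r, m))

vec = lambda M: M.flatten(order="F")          # column-major vec

# Lemma 4.1(i) below: vec(B X A^T) = (A kron B) vec(X)
assert np.allclose(vec(B @ X @ A.T), np.kron(A, B) @ vec(X))

def commutation(m):
    """Commutation matrix K_m: K_m vec(X) = vec(X^T) for square m x m X."""
    K = np.zeros((m * m, m * m))
    for a in range(m):
        for b in range(m):
            K[b * m + a, a * m + b] = 1.0     # column-major position of X[a, b] is b*m + a
    return K

Xs = rng.standard_normal((m, m))
assert np.allclose(commutation(m) @ vec(Xs), vec(Xs.T))

# Lemma 4.2(iii) below: A kron B = K_r (B kron A) K_m for A, B in R^{r x m}
A2, B2 = rng.standard_normal((r, m)), rng.standard_normal((r, m))
assert np.allclose(np.kron(A2, B2), commutation(r) @ np.kron(B2, A2) @ commutation(m))
```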
For a matrix X ∈ R^{m×n}, the operator vec : R^{m×n} → R^{mn} produces a vector vec(X) ∈ R^{mn} with the columns of X all stacked up in order. The Kronecker product of two matrices A ∈ R^{s×n} and B ∈ R^{r×m}, denoted A ⊗ B, is the block matrix C ∈ R^{rs×mn} whose (i, j) block entry is the matrix C_ij = A_ij B. The following lemma summarizes some useful properties of Kronecker products and the vec operator.

Lemma 4.1: Let X ∈ R^{m×n}, A ∈ R^{s×n}, B ∈ R^{r×m}, C ∈ R^{n×p}, and D ∈ R^{m×q} be given. Then (i) vec(B X A^T) = (A ⊗ B) vec(X); (ii) (A ⊗ B)^T = A^T ⊗ B^T; (iii) (A ⊗ B)(C ⊗ D) = (A C) ⊗ (B D).

Consider X ∈ R^{m×m} and define K_m to be the unique permutation matrix K_m ∈ R^{m²×m²} such that vec(X^T) = K_m vec(X). The following relations will be useful.

Lemma 4.2: Let X ∈ R^{m×m}, A ∈ R^{r×m}, B ∈ R^{r×m}, and the unique permutation matrix K_m such that vec(X^T) = K_m vec(X) be given. Then (i) K_m^T = K_m; (ii) K_m^T K_m = K_m K_m^T = I; (iii) A ⊗ B = K_r (B ⊗ A) K_m.

A proof of the above two lemmas can be found, for instance, in [7, Chapter 4]. The next lemma, on the properties of the matrices I + K_m and I − K_m, is easy to prove. See also [8].

Lemma 4.3: There exist full rank matrices P_m ∈ R^{k×m²}, where k = m(m + 1)/2, and Q_m ∈ R^{p×m²}, where p = m(m − 1)/2, satisfying (i) P_m vec X = P_m vec X^T; (ii) (1/2) P_m (I + K_m) = P_m K_m = P_m; (iii) P_m (I − K_m) = 0; (iv) Q_m vec X = −Q_m vec X^T; (v) (1/2) Q_m (I − K_m) = −Q_m K_m = Q_m; (vi) Q_m (I + K_m) = 0. Furthermore, Q_m P_m^T = 0.

Proof: By definition, P_m vec X = P_m vec X^T = P_m K_m vec X, which holds for any X, hence P_m K_m = P_m. Consequently, P_m = (1/2)(P_m + P_m K_m) = (1/2) P_m (I + K_m). This implies (ii). That such a P_m ∈ R^{k×m²} is full rank follows from the fact that P_m extracts the symmetric part of X, and the symmetric subspace has dimension k. Indeed, (i) and (ii) imply that P_m vec X = P_m vec[(1/2)(X + X^T)]. Note that (ii) also implies that P_m − P_m K_m = 0, hence (iii). A similar proof works for Q_m in order to prove (iv)-(vi). Finally, multiplying (v) on the right by P_m^T and using (iii) leads to Q_m P_m^T = 0.

We can use the matrices P_m and Q_m to split R^{m×m} into two complementary subspaces. Note, however, that the matrices P_m and Q_m are not unique even though K_m is unique! Indeed, in R^{2×2},

    vec [x1 x3; x2 x4] = (x1, x2, x3, x4)^T,   K_2 = [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1],

while both

    P_2^1 = [1 0 0 0; 0 1 1 0; 0 0 0 1]   and   P_2^2 = (1/2) [2 0 0 0; 0 1 1 0; 0 0 0 2]

satisfy all properties in Lemma 4.3. In many scenarios it is useful to settle on a concrete choice of P_m and Q_m. One can do this by defining symmetric and anti-symmetric versions of the vec operator: if X ∈ R^{m×m}, the symmetric vec operator svec : R^{m×m} → R^k, k = m(m + 1)/2, produces a vector svec(X) ∈ R^k with the columns of the lower triangular part of the symmetric matrix (1/2)(X + X^T) all stacked up in order. Likewise, the anti-symmetric vec operator kvec : R^{m×m} → R^p, p = m(m − 1)/2, produces a vector kvec(X) ∈ R^p with the columns of the strictly lower triangular part of the anti-symmetric matrix (1/2)(X − X^T) all stacked up in order. It is simple to verify that the matrices P_m and Q_m satisfying Lemma 4.3 and svec X = P_m vec X, kvec X = Q_m vec X are now uniquely defined. In particular, P_2 = P_2^2.

B. vec-representations of Cross-Symmetric Mappings

Let us begin by formalizing the notion of vec-representation, which applies not only to cross-symmetric linear mappings.

Definition 2 (vec-representation): Let L : R^{m×n} → R^{r×s} be a linear mapping. We say that the matrix L ∈ R^{rs×mn} is a vec-representation of the mapping L if L vec X = vec L(X) for all X.

The following lemma explores some properties of vec-representations of cross-symmetric linear mappings, including a formula for computing the block-diagonalization of the vec-representation.

Lemma 4.4: Let L : R^{m×m} → R^{r×r} be a linear cross-symmetric mapping and L ∈ R^{r²×m²} its vec-representation. Then

    K_r L = L K_m   (15)

and

    [P_r; Q_r] L [P_m^T  Q_m^T] = [L_sym, 0; 0, L_anti],   (16)

where

    L_sym = P_r L P_m^T,   L_anti = Q_r L Q_m^T,   (17)

and P_r, P_m, Q_r, Q_m are matrices satisfying Lemma 4.3.

Proof: Because L is cross-symmetric, L(X)^T = L(X^T). That is, K_r L vec(X) = L vec(X^T) = L K_m vec(X), which should hold for all X, hence (15). From Lemma 4.2, multiplication of (15) by K_r on the left implies that L = K_r L K_m, so L = (1/2)(L + K_r L K_m).
Then multiply out (16) and use the above to show that P_r L Q_m^T = (1/2) P_r (L + K_r L K_m) Q_m^T, and use Lemma 4.3 to expand P_r (L + K_r L K_m) Q_m^T = P_r L (I + K_m) Q_m^T = 0. A similar construction yields Q_r L P_m^T = 0.

For particular mappings of the form (1) we now derive formulas based on Kronecker products.

Lemma 4.5: Consider the particular cross-symmetric linear mapping L : R^{m×m} → R^{r×r} given in (1). Define

    A := Σ_{i=1}^{M} A_i ⊗ B_i.   (18)

A vec-representation of L is L = A + K_r A K_m, where the permutation matrices K_m and K_r are defined as in Lemma 4.2. Furthermore, L_sym = 2 P_r A P_m^T and L_anti = 2 Q_r A Q_m^T.

Proof: Simply use Lemma 4.2 to write B_i ⊗ A_i = K_r (A_i ⊗ B_i) K_m. Then compute (17):

    L_sym = P_r (A + K_r A K_m) P_m^T = P_r A (I + K_m) P_m^T = 2 P_r A P_m^T,

after using Lemma 4.3. A similar calculation provides a formula for L_anti.

C. The Case M = 1

Because of the block-diagonalization property of cross-symmetric linear mappings, it is possible to conclude that a square cross-symmetric linear mapping L will be non-singular if and only if both its symmetric component L_sym and its anti-symmetric component L_anti are non-singular. As anticipated in the introduction, in the case M ≥ 2 any combination of singular and non-singular L_sym and L_anti is possible, depending on the data. We illustrate such possibilities by first revisiting Example 1.

Example 3: Let L(X) = A^T X + X A + B^T X B with A and B as in Example 1. We already know that L_sym is non-singular but that L is singular. Using the formulas in the previous section we can compute L_anti = 0. In this example, L is singular because L_anti is singular.

On the other extreme, consider another simple example with the same mapping but different data.

Example 4: Let L(X) = A^T X + X A + B^T X B, with M = 2 and

    A = [1/2 1; 1 1/2],   B = [0 1; −1 0].

Here

    L = [1 1 1 1; 1 1 −1 1; 1 −1 1 1; 1 1 1 1],   L_sym = [1 1 1; 1 0 1; 1 1 1],   L_anti = [1],

where L and L_sym are singular while L_anti is non-singular.

In the case M = 1 the structure is much more rigid, a fact that is implicitly used to solve Lyapunov equations. The following lemma holds when L is square and M = 1.

Lemma 4.6: Let L : R^{m×m} → R^{m×m} be given by L(X) = A X B^T + B X A^T. Then L is non-singular if and only if L_sym is non-singular.

Proof: All we need to prove is that L singular implies L_sym is also singular. It is easy to see that when either A or B is singular then L will also be singular. In this case, say A is singular, let x ≠ 0 be such that A x = 0. Then X = x x^T is symmetric and such that L(X) = 0, which implies that L_sym must also be singular. The case when neither A nor B is singular is more complicated. If A and B are non-singular, then L is non-singular if and only if the mapping B^{-1} L(X) B^{-T} is non-singular. A vec-representation of this mapping is the matrix in (9) with Ã = B^{-1} A. It is then possible to use results on the eigenstructure of Kronecker sums (e.g. [7, Theorem 4.4.5]) to show that if (λ_i, x_i) is an eigenvalue-eigenvector pair of the real matrix Ã, then (µ_ij, z_ij), where µ_ij = λ_i + λ_j and z_ij = x_j ⊗ x_i, is an eigenvalue-eigenvector pair of the real matrix (I ⊗ Ã + Ã ⊗ I) for all i, j = 1, ..., m. This type of result is probably the one used in most proofs of solvability conditions for Sylvester and Lyapunov equations. Therefore, when A and B are non-singular, L will be singular if and only if µ_ij = λ_i + λ_j = 0 for some i and j. Note that, because Ã is real, the real matrix Z_ij defined such that vec(Z_ij) = z_ij + z̄_ij ≠ 0, where the overline denotes complex conjugation, needs to satisfy L(Z_ij) = 0. But because for the same i and j we should also have µ_ji = 0, there will be a Z_ji such that vec(Z_ji) = z_ji + z̄_ji ≠ 0 and L(Z_ji) = 0. Note that X_ij := Z_ij + Z_ji = x_i x_j^T + x_j x_i^T + x̄_i x̄_j^T + x̄_j x̄_i^T is a symmetric real matrix for which L(X_ij) = 0, hence L_sym must also be singular, which completes the proof.
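The projections of Lemma 4.3 and the block-diagonalization of Lemmas 4.4 and 4.5 are straightforward to realize in code. The sketch below (ours) builds one valid choice of P_m and Q_m, namely the svec/kvec versions discussed above, and checks the block structure (16) on the data of Example 4; it also reproduces rank(L) = rank(L_sym) + rank(L_anti) = 2 + 1 = 3.

```python
import numpy as np

def P_and_Q(m):
    """One choice of P_m (k x m^2) and Q_m (p x m^2) satisfying Lemma 4.3:
    rows pick the lower-triangular entries of (X + X^T)/2 and (X - X^T)/2."""
    P, Q = [], []
    for j in range(m):
        for i in range(j, m):                  # lower triangle, column by column
            prow = np.zeros(m * m)
            prow[j * m + i] += 0.5             # entry (i, j) of X in column-major vec
            prow[i * m + j] += 0.5             # entry (j, i) of X
            P.append(prow)
            if i > j:
                qrow = np.zeros(m * m)
                qrow[j * m + i] += 0.5
                qrow[i * m + j] -= 0.5
                Q.append(qrow)
    return np.array(P), np.array(Q)

# Example 4 data and the vec-representation of L(X) = A^T X + X A + B^T X B
A = np.array([[0.5, 1.0], [1.0, 0.5]])
B = np.array([[0.0, 1.0], [-1.0, 0.0]])
L = np.kron(np.eye(2), A.T) + np.kron(A.T, np.eye(2)) + np.kron(B.T, B.T)

P, Q = P_and_Q(2)
L_sym, L_anti = P @ L @ P.T, Q @ L @ Q.T
print(np.allclose(P @ L @ Q.T, 0), np.allclose(Q @ L @ P.T, 0))        # True True: off-diagonal blocks vanish
print(np.linalg.matrix_rank(L), np.linalg.matrix_rank(L_sym), L_anti)  # 3 2 [[1.]]
```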
D. vec-representation of the Mapping L_α

We now turn our attention to the modified mappings L_α introduced in Theorems 3.3 and 3.4. Let us first deal with the square case, in which L_α is defined as in (13). The following simple facts will be presented without proof. First note that for any linear mapping L and its associated vec-representation L we have that

    L_α = L + (α/2)(I − K_m),

where K_m is a permutation matrix as in Lemma 4.2. Note that I − K_m is a sparse matrix with 2m(m − 1) nonzero entries. Hence the above modification should add few nonzero entries to L_α as compared with L. In particular, it will often introduce fewer nonzero entries than the projection P_m L P_m^T. It is also not hard to show that, when L is cross-symmetric, a concrete block-diagonal representation for L_α is given by

    [P_m; Q_m] L_α [P_m^T  Q_m^T] = [L_sym, 0; 0, L_anti + α Q_m Q_m^T].

This makes clear how the extra term affects the rank of only the anti-symmetric part. Furthermore, since Q_m is a full-rank matrix, we will have that Q_m Q_m^T ≻ 0, i.e., it is positive definite.
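On the data of Example 1 the modified vec-representation is a one-line change, and the sketch below (ours; α = 2 is an arbitrary value avoiding the finitely many bad choices) reproduces the behavior described in Example 2: L is singular, L_α is not, and the symmetric solution of the original equation is recovered unchanged.

```python
import numpy as np

A = np.array([[-1.0, 0.0], [0.0, 2.0]])        # data of Example 1
B = np.array([[ 0.0, 1.0], [1.0, 0.0]])
L = np.kron(np.eye(2), A.T) + np.kron(A.T, np.eye(2)) + np.kron(B.T, B.T)

def commutation(m):
    K = np.zeros((m * m, m * m))
    for a in range(m):
        for b in range(m):
            K[b * m + a, a * m + b] = 1.0
    return K

alpha, m = 2.0, 2
L_alpha = L + (alpha / 2.0) * (np.eye(m * m) - commutation(m))   # vec-representation of (13)
print(np.linalg.matrix_rank(L), np.linalg.matrix_rank(L_alpha))  # 3 4

C = np.array([[1.0, 2.0], [2.0, 3.0]])         # an arbitrary symmetric right-hand side
x = np.linalg.solve(L_alpha, C.flatten(order="F"))
X = x.reshape((m, m), order="F")
print(np.allclose(X, X.T))                                        # True: the solution is symmetric
print(np.allclose(A.T @ X + X @ A + B.T @ X @ B, C))              # True: it solves the original equation
```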

When L is not square, L_α is defined as in (14) for some full-rank matrices E and F. Its vec-representation is similar,

    L_α = L + (α/4)(I − K_r) D (I − K_m),   with D = E ⊗ F,

and the associated block-diagonal form is

    [P_r; Q_r] L_α [P_m^T  Q_m^T] = [L_sym, 0; 0, L_anti + α Q_r D Q_m^T].

That the block L_anti + α Q_r D Q_m^T is full rank for almost all α then comes from the fact that Q_r D Q_m^T is full rank.

V. CONCLUSIONS, APPLICATIONS AND FUTURE WORK

In this paper we have studied the solvability of a single linear equation in a symmetric matrix variable, of the form

    Σ_{i=1}^{M} (A_i X B_i^T + B_i X A_i^T) = C = C^T.

The results clarify the relationship between the above equation and the non-minimal vec-representation

    (Σ_{i=1}^{M} B_i ⊗ A_i + A_i ⊗ B_i) vec X = vec C.

We have shown that for M ≥ 2 the vec-representation can be rank-deficient even when a solution to the original problem exists and is unique. We have shown that if this is the case, for square equations, one can work with the vec-representation of the modified problem

    Σ_{i=1}^{M} (A_i X B_i^T + B_i X A_i^T) + α(X − X^T) = C = C^T,

which is guaranteed to have a unique solution whenever the original problem does, for a properly chosen scalar α. A slightly more complicated version of the above argument holds for non-square equations as well and is given in Theorem 3.4.

The results are directly applicable to the design of algorithms for solving matrix equations in which the data is highly structured, e.g. sparse, and for which construction of the associated minimal representation can destroy the underlying problem structure. We shall be working in the future with concrete instances arising from interior-point algorithms applied to convex matrix optimization problems such as the ones described in [2]. In such problems the equations are square, the number of terms (M) is approximately 20, and the variable dimension (m) can range from 10 to 100 in most practical problems.

Another interesting application of the present result is in the study of general linear equations of the form

    Σ_{i=1}^{M} (L_i X R_i^T + F_i X^T G_i^T) = C,   (19)

in which the variable X is not symmetric. As with the problem discussed here, minimal representations of the above equation will often destroy the structure present in the problem data, this time because of the term in X^T. An idea we are currently investigating is how to apply the results of the present paper to the equation

    Σ_{i=1}^{M} [F_i  L_i] Z [R_i  G_i]^T = C,   (20)

where Z is a symmetric structured variable

    Z = [0, X^T; X, 0].   (21)

The current results are very promising and will be reported in a future paper.

REFERENCES

[1] M. Konstantinov, V. Mehrmann, and P. Petkov, "On properties of Sylvester and Lyapunov operators," Linear Algebra and its Applications, vol. 312, pp. 35-71, 2000.
[2] J. F. Camino, J. W. Helton, and R. E. Skelton, "Solving matrix inequalities whose unknowns are matrices," SIAM Journal on Optimization, vol. 17, no. 1, pp. 1-36, 2006.
[3] V. A. Tsachouridis and B. Kouvaritakis, "The homogeneous projective transformation of general quadratic matrix equations," IMA Journal of Mathematical Control and Information, vol. 22, pp. 517-540, 2005.
[4] F. Ding and T. W. Chen, "Iterative least-squares solutions of coupled Sylvester matrix equations," Systems & Control Letters, vol. 54, pp. 95-107, Feb. 2005.
[5] R. H. Bartels and G. W. Stewart, "Algorithm: solution of the matrix equation AX + XB = C," Communications of the ACM, vol. 15, no. 9, pp. 820-826, 1972.
[6] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge, UK: Cambridge University Press, 1985.
[7] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis. Cambridge, UK: Cambridge University Press, 1991.
[8] J. R. Magnus and H. Neudecker, "The commutation matrix: some properties and applications," Annals of Statistics, vol. 7, no. 2, pp. 381-394, 1979.