Rank and inertia optimizations of two Hermitian quadratic matrix functions subject to restrictions with applications

Yongge Tian, China Economics and Management Academy, Central University of Finance and Economics, Beijing, China


Rank and inertia optimizations of two Hermitian quadratic matrix functions subject to restrictions with applications

Yongge Tian (a), Ying Li (b,c)

(a) China Economics and Management Academy, Central University of Finance and Economics, Beijing 100081, China
(b) School of Management, University of Shanghai for Science and Technology, Shanghai 200093, China
(c) College of Mathematics Science, Liaocheng University, Liaocheng, Shandong 252059, China

Abstract. In this paper, we first give the maximal and minimal values of the ranks and inertias of the quadratic matrix functions q1(X) = Q1 − XP1X* and q2(X) = Q2 − X*P2X subject to a consistent matrix equation AX = B, where Q1, Q2, P1 and P2 are Hermitian matrices. As applications, we derive necessary and sufficient conditions for the solution of AX = B to satisfy the quadratic equalities XP1X* = Q1 and X*P2X = Q2, as well as the quadratic inequalities XP1X* ≥ Q1 (≤ Q1) and X*P2X ≥ Q2 (≤ Q2) in the Löwner partial ordering. In particular, we give the minimal matrices of q1(X) and q2(X) subject to AX = B in the Löwner partial ordering.

Mathematics Subject Classifications: 15A09; 15A24; 15A63; 15B10; 15B57; 65K10; 65K15

Key Words: linear matrix function; quadratic matrix function; rank; inertia; Löwner partial ordering; generalized inverse; matrix equations

1 Introduction

Suppose that

q1(X) = Q1 − XP1X*,  q2(X) = Q2 − X*P2X  (1.1)

are two Hermitian quadratic matrix functions, where P1 = P1* ∈ C^{m×m}, Q1 = Q1* ∈ C^{n×n}, P2 = P2* ∈ C^{n×n}, Q2 = Q2* ∈ C^{m×m} are given, and X ∈ C^{n×m} is a variable matrix assumed to satisfy the linear matrix equation

AX = B,  (1.2)

where A ∈ C^{p×n} and B ∈ C^{p×m} are given. Since the two matrix functions vary with the choice of the variable matrix X, that is, with the solution of (1.2), numerical characteristics of these matrix functions, such as their ranks, inertias, traces, norms and eigenvalues, vary with the choice of X as well.
Hence, many research problems on these two matrix functions and their properties can be proposed and studied. In fact, linear and quadratic matrix functions, together with their special cases, linear and quadratic matrix equations, are two classes of fundamental objects of study in matrix theory and its applications. The two quadratic matrix functions in (1.1) and their variations occur widely in matrix theory and applications. Some previous work on (1.1) subject to (1.2) in quadratic programming and control theory can be found, e.g., in [1, 6]. In a recent paper [33], Tian considered the rank and inertia of the quadratic matrix function X*AX, and gave closed-form formulas for the maximal and minimal ranks and inertias of this matrix function with respect to the variable matrix X. As a continuation, we consider in this paper the maximization and minimization problems on the ranks and inertias of (1.1) subject to (1.2). By using some known results on the rank/inertia optimization of linear matrix functions and generalized inverses of matrices, we shall first derive closed-form formulas for the extremal ranks/inertias of q1(X) and q2(X) subject to (1.2). Then we use them to derive necessary and sufficient conditions for the following quadratic equalities and inequalities in the Löwner partial ordering

XP1X* = Q1, X*P2X = Q2, (1.3)
XP1X* > Q1 (< Q1, ≥ Q1, ≤ Q1), X*P2X > Q2 (< Q2, ≥ Q2, ≤ Q2) (1.4)

to hold. In addition, we shall solve four optimization problems on q1(X) and q2(X) subject to (1.2) in the Löwner partial ordering, namely, to find X1, X2 ∈ C^{n×m} such that

AX1 = B and qi(X) ≤ qi(X1) s.t. AX = B, i = 1, 2, (1.5)
AX2 = B and qi(X) ≥ qi(X2) s.t. AX = B, i = 1, 2 (1.6)

E-mail addresses: yongge.tian@gmail.com; liyingliaoda@gmail.com

hold, respectively, where the four matrices qi(X1) and qi(X2), i = 1, 2, when they exist, are called the maximal and minimal matrices of q1(X) and q2(X) in (1.1) subject to (1.2), respectively. The rank and inertia of a Hermitian matrix are two basic concepts in matrix theory for describing the dimension of the row/column vector space and the sign distribution of the eigenvalues of the matrix. Both are well understood and easy to compute by the well-known elementary or congruence matrix operations, and they play an essential role in characterizing algebraic properties of Hermitian matrices. When considering max/min problems on the rank/inertia of a matrix function globally, we can separate them into the rank maximization problem (RMaxP), the rank minimization problem (RMinP), the inertia maximization problem (IMaxP), and the inertia minimization problem (IMinP), respectively. These max/min problems consist of determining the extremal ranks/inertias of the matrix function, and of finding the variable matrices at which the matrix function attains them. Just like the classic optimization problems on determinants, traces and norms of matrices, the problem of maximizing or minimizing the rank and inertia of a matrix can be regarded as a special topic in mathematical optimization theory, although it has not been classified clearly in the literature. Such optimization problems occur in regression analysis and control theory; see, e.g., [7, 8, 15, 23, 24, 40]. Because the rank and inertia of a matrix are finite nonnegative integers, the extremal ranks/inertias of a matrix function always exist, whatever the domains of the variable entries of the matrix function are.
The extremal ranks/inertias of a matrix function can be used to characterize some fundamental algebraic properties of the matrix function, for example, (I) the maximal/minimal dimensions of the row and column spaces of the matrix function; (II) the nonsingularity of the matrix function when it is square; (III) the solvability of the corresponding matrix equation; (IV) the rank, inertia and range invariance of the matrix function; (V) the definiteness of the matrix function when it is Hermitian; etc. Notice that these optimal properties of a matrix function can hardly be characterized by determinants, traces or norms of matrices. Hence, it is really necessary to pay attention to optimization problems on the ranks and inertias of matrices. Since the variable entries in a matrix function are often taken as continuous variables from some constraint sets, while the objective functions (the rank/inertia of the matrix function) take values only in a finite set of nonnegative integers, this kind of continuous-integer optimization problem cannot be solved by the usual optimization methods for continuous or discrete cases. In fact, there is no rigorous mathematical theory for solving a general rank/inertia optimization problem, except for some special cases that can be solved by purely algebraic methods. It has been realized that rank/inertia optimization and completion problems for a general matrix function have deep connections with computational complexity, and they are regarded as NP-hard; see, e.g., [5, 7, 8, 9, 11, 12, 13, 16, 22, 26, 28]. Fortunately, closed-form solutions of the rank/inertia optimization problems of q1(X) and q2(X) in (1.1), as well as of many others, can be derived algebraically. Throughout this paper, C^{m×n} and C^m_H stand for the sets of all m×n complex matrices and all m×m complex Hermitian matrices, respectively.
The symbols A*, r(A), R(A) and N(A) stand for the conjugate transpose, rank, range (column space) and null space of a matrix A ∈ C^{m×n}, respectively; I_m denotes the identity matrix of order m; [A, B] denotes a row block matrix consisting of A and B, and [A, B; C, D] denotes a block matrix whose block rows are [A, B] and [C, D]. We write A > 0 (A ≥ 0) if A is Hermitian positive definite (nonnegative definite). Two Hermitian matrices A and B of the same size are said to satisfy the inequality A > B (A ≥ B) in the Löwner partial ordering if A − B is positive definite (nonnegative definite). The Moore-Penrose inverse of A ∈ C^{m×n}, denoted by A†, is defined to be the unique solution X of the four matrix equations (i) AXA = A, (ii) XAX = X, (iii) (AX)* = AX, (iv) (XA)* = XA. The symbols E_A = I_m − AA† and F_A = I_n − A†A stand for the two orthogonal projectors onto the null spaces N(A*) and N(A), respectively. The ranks of E_A and F_A are given by r(E_A) = m − r(A) and r(F_A) = n − r(A). As is well known, the eigenvalues of a Hermitian matrix A ∈ C^m_H are all real, and the inertia of A is defined to be the triplet In(A) = { i+(A), i−(A), i0(A) },

where i+(A), i−(A) and i0(A) are the numbers of the positive, negative and zero eigenvalues of A counted with multiplicities, respectively. The two numbers i+(A) and i−(A), usually called the partial inertias of A, can easily be computed by elementary congruence matrix operations. For a matrix A ∈ C^m_H, we have r(A) = i+(A) + i−(A) and i0(A) = m − r(A). Hence, once i+(A) and i−(A) are both determined, r(A) and i0(A) are obtained as well. Note that the inertia of a Hermitian matrix divides the eigenvalues of the matrix into three sets on the real line. Hence the inertia of a Hermitian matrix can be used to characterize the definiteness of that matrix. The following results are obvious from the definitions of the rank/inertia of a matrix.

Lemma 1.1 Let A ∈ C^{m×m}, B ∈ C^{m×n}, and C ∈ C^m_H. Then,
(a) A is nonsingular if and only if r(A) = m.
(b) B = 0 if and only if r(B) = 0.
(c) C > 0 (C < 0) if and only if i+(C) = m (i−(C) = m).
(d) C ≥ 0 (C ≤ 0) if and only if i−(C) = 0 (i+(C) = 0).

Lemma 1.2 Let S be a set consisting of (square) matrices over C^{m×n}, and let H be a set consisting of Hermitian matrices over C^m_H. Then,
(a) S has a nonsingular matrix if and only if max_{X∈S} r(X) = m.
(b) All X ∈ S are nonsingular if and only if min_{X∈S} r(X) = m.
(c) 0 ∈ S if and only if min_{X∈S} r(X) = 0.
(d) S = {0} if and only if max_{X∈S} r(X) = 0.
(e) All X ∈ S have the same rank if and only if max_{X∈S} r(X) = min_{X∈S} r(X).
(f) H has a matrix X > 0 (X < 0) if and only if max_{X∈H} i+(X) = m (max_{X∈H} i−(X) = m).
(g) All X ∈ H satisfy X > 0 (X < 0) if and only if min_{X∈H} i+(X) = m (min_{X∈H} i−(X) = m).
(h) H has a matrix X ≥ 0 (X ≤ 0) if and only if min_{X∈H} i−(X) = 0 (min_{X∈H} i+(X) = 0).
(i) All X ∈ H satisfy X ≥ 0 (X ≤ 0) if and only if max_{X∈H} i−(X) = 0 (max_{X∈H} i+(X) = 0).
(j) All X ∈ H have the same positive index of inertia if and only if max_{X∈H} i+(X) = min_{X∈H} i+(X).
(k) All X ∈ H have the same negative index of inertia if and only if max_{X∈H} i−(X) = min_{X∈H} i−(X).
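The characterizations in Lemma 1.1 can be exercised directly with the same kind of numerical inertia computation; the sketch below (ours, with the assumed helper `inertia`) checks parts (c) and (d) on concrete Hermitian matrices.

```python
import numpy as np

def inertia(H, tol=1e-9):
    """(i+, i-, i0) of a Hermitian matrix, counted from its eigenvalues."""
    w = np.linalg.eigvalsh(H)
    return (int((w > tol).sum()), int((w < -tol).sum()), int((abs(w) <= tol).sum()))

C = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite (eigenvalues (7 ± sqrt(5))/2)
m = C.shape[0]
assert inertia(C)[0] == m                # Lemma 1.1(c): C > 0  <=>  i+(C) = m

D = np.diag([2.0, 0.0, 0.0])             # nonnegative definite but singular
assert inertia(D)[1] == 0                # Lemma 1.1(d): D >= 0  <=>  i-(D) = 0
assert inertia(-D)[0] == 0               # and -D <= 0           <=>  i+(-D) = 0
```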
These two lemmas show that once explicit formulas for the (extremal) rank and the positive and negative indices of inertia of a Hermitian matrix are derived, we can use them to characterize equalities and inequalities for the Hermitian matrix. This basic algebraic method, referred to as the matrix rank/inertia method, is available for studying various Hermitian matrix functions that involve generalized inverses of matrices and variable matrices. The following are some known formulas for the ranks/inertias of partitioned matrices and generalized inverses of matrices, which will be used in the latter part of this paper.

Lemma 1.3 ([25]) Let A ∈ C^{m×n}, B ∈ C^{m×k}, C ∈ C^{l×n}. Then,

r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A), (1.7)
r[A; C] = r(A) + r(C F_A) = r(C) + r(A F_C), (1.8)
r[A, B; C, 0] = r(B) + r(C) + r(E_B A F_C). (1.9)

Lemma 1.4 ([31]) Let A ∈ C^m_H, B ∈ C^{m×n}, D ∈ C^n_H, and denote

M1 = [A, B; B*, 0], M2 = [A, B; B*, D].

Then,

i±(M1) = r(B) + i±(E_B A E_B), (1.10)
r(M1) = 2r(B) + r(E_B A E_B), (1.11)
i±(M2) = i±(A) + i±[0, E_A B; B* E_A, D − B* A† B], (1.12)
r(M2) = r(A) + r[0, E_A B; B* E_A, D − B* A† B]. (1.13)

In particular,
(a) The partial inertias of M2 satisfy the inequalities
i±(M2) ≥ i±(A) + i±(D − B* A† B) ≥ i±(A). (1.14)
(b) If A ≥ 0, then
i+(M1) = r[A, B], i−(M1) = r(B), r(M1) = r[A, B] + r(B). (1.15)
(c) If A ≤ 0, then
i+(M1) = r(B), i−(M1) = r[A, B], r(M1) = r[A, B] + r(B). (1.16)
(d) If R(B) ⊆ R(A), then
i±(M2) = i±(A) + i±(D − B* A† B), r(M2) = r(A) + r(D − B* A† B). (1.17)
(e) If R(B) ∩ R(A) = {0} and R(B*) ∩ R(D) = {0}, then
i±(M2) = i±(A) + i±(D) + r(B), r(M2) = r(A) + 2r(B) + r(D). (1.18)

Some formulas derived from (1.10) and (1.11) are

i±(F_P A F_P) = i±[A, P*; P, 0] − r(P), (1.19)
r(F_P A F_P) = r[A, P*; P, 0] − 2r(P), (1.20)
i±[E_Q A E_Q, E_Q B; B* E_Q, D] = i±[A, B, Q; B*, D, 0; Q*, 0, 0] − r(Q), (1.21)
r[E_Q A E_Q, E_Q B; B* E_Q, D] = r[A, B, Q; B*, D, 0; Q*, 0, 0] − 2r(Q). (1.22)

We shall use them to simplify the inertias of block Hermitian matrices involving Moore-Penrose inverses of matrices.
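The rank and inertia formulas above are easy to confirm on random data; the following sketch (ours, not from the paper) tests (1.7) and (1.10) numerically, with `inertia` here returning only the pair (i+, i−).

```python
import numpy as np

rng = np.random.default_rng(1)

def inertia(H, tol=1e-9):
    w = np.linalg.eigvalsh(H)
    return (int((w > tol).sum()), int((w < -tol).sum()))

m, k = 5, 2
C = rng.standard_normal((m, 3))
A = C @ C.T                                   # singular Hermitian A with r(A) = 3
B = rng.standard_normal((m, k))

E_A = np.eye(m) - A @ np.linalg.pinv(A)
E_B = np.eye(m) - B @ np.linalg.pinv(B)

# (1.7): r[A, B] = r(A) + r(E_A B)
assert np.linalg.matrix_rank(np.hstack([A, B])) == \
       np.linalg.matrix_rank(A) + np.linalg.matrix_rank(E_A @ B)

# (1.10): i±([A, B; B*, 0]) = r(B) + i±(E_B A E_B)
M1 = np.block([[A, B], [B.T, np.zeros((k, k))]])
rB = np.linalg.matrix_rank(B)
assert inertia(M1) == tuple(rB + x for x in inertia(E_B @ A @ E_B))
```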

Lemma 1.5 ([2]) Let A ∈ C^m_H, B ∈ C^{m×n} and C ∈ C^{p×m} be given. Then,

max_{X∈C^{n×p}} r(A − BXC − (BXC)*) = min{ r[A, B, C*], r[A, B; B*, 0], r[A, C*; C, 0] }, (1.23)
min_{X∈C^{n×p}} r(A − BXC − (BXC)*) = 2r[A, B, C*] + max{ s+ + s−, t+ + t−, s+ + t−, s− + t+ }, (1.24)
max_{X∈C^{n×p}} i±(A − BXC − (BXC)*) = min{ i±[A, B; B*, 0], i±[A, C*; C, 0] }, (1.25)
min_{X∈C^{n×p}} i±(A − BXC − (BXC)*) = r[A, B, C*] + max{ s±, t± }, (1.26)

where

s± = i±[A, B; B*, 0] − r[A, B, C*; B*, 0, 0], t± = i±[A, C*; C, 0] − r[A, B, C*; C, 0, 0].

The right-hand sides of (1.23)–(1.26) contain only the ranks/inertias of block matrices formed from the given matrices in the linear matrix function. Hence, they are quite easy to simplify and apply in most situations. As fundamental formulas, (1.23)–(1.26) can widely be used for finding the extremal ranks/inertias of various matrix functions whose variable matrices occur in symmetric patterns.

Lemma 1.6 (a) ([27]) The matrix equation in (1.2) has a solution if and only if AA†B = B. In this case, the general solution to (1.2) can be written in the parametric form

X = A†B + F_A V, (1.27)

where V ∈ C^{n×m} is arbitrary. The solution to (1.2) is unique if and only if r(A) = n.
(b) ([3]) If (1.2) is consistent, then

max_{AX=B} r(X) = min{ m, n + r(B) − r(A) }, (1.28)
min_{AX=B} r(X) = r(B). (1.29)

In order to derive explicit formulas for the ranks of block matrices, we use the following three types of elementary block matrix operation (EMOs, for short):
(I) interchange two block rows (columns) of a block matrix;
(II) multiply a block row (column) of a block matrix by a nonsingular matrix from the left-hand (right-hand) side;
(III) add a block row (column) multiplied by a matrix from the left-hand (right-hand) side to another block row (column).
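Lemma 1.6(a) is the engine of the two following sections, so a quick numerical illustration may help; the sketch below (ours) builds a consistent equation AX = B with r(A) < n and confirms that every matrix of the form (1.27) solves it.

```python
import numpy as np

rng = np.random.default_rng(2)

p, n, m = 3, 5, 4
A = rng.standard_normal((p, n))      # r(A) = 3 < n, so the solution is not unique
B = A @ rng.standard_normal((n, m))  # forces consistency: AA†B = B

Ad = np.linalg.pinv(A)
F_A = np.eye(n) - Ad @ A
assert np.allclose(A @ Ad @ B, B)    # the solvability test of Lemma 1.6(a)

for _ in range(5):                   # X = A†B + F_A V solves AX = B for every V
    V = rng.standard_normal((n, m))
    X = Ad @ B + F_A @ V
    assert np.allclose(A @ X, B)
```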
In order to derive explicit formulas for the inertia of a block Hermitian matrix, we use the following three types of elementary block congruence matrix operation (ECMOs, for short) for a block Hermitian matrix with the same row and column partition:
(IV) interchange the ith and jth block rows, while interchanging the ith and jth block columns of the block Hermitian matrix;
(V) multiply the ith block row by a nonsingular matrix P from the left-hand side, while multiplying the ith block column by P* from the right-hand side;
(VI) add the ith block row multiplied by a matrix P from the left-hand side to the jth block row, while adding the ith block column multiplied by P* from the right-hand side to the jth block column.
These three types of operation are in fact equivalent to a congruence transformation A → PAP* of a Hermitian matrix A, where the nonsingular matrix P comes from the elementary block matrix operations applied to the block rows of A, and P* comes from the corresponding operations applied to the block columns of A. Some applications of the ECMOs in establishing formulas for the inertias of Hermitian matrices can be found, e.g., in [31, 32, 33].
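The fact that the ECMOs leave the inertia unchanged is Sylvester's law of inertia for the congruence A → PAP*; the short sketch below (ours, not from the paper) illustrates it on a fixed diagonal matrix and a random nonsingular P.

```python
import numpy as np

rng = np.random.default_rng(3)

def inertia(H, tol=1e-9):
    w = np.linalg.eigvalsh(H)
    return (int((w > tol).sum()), int((w < -tol).sum()), int((abs(w) <= tol).sum()))

A = np.diag([3.0, 1.0, 0.0, -2.0])            # inertia (2, 1, 1)
P = rng.standard_normal((4, 4))
while abs(np.linalg.det(P)) < 1e-8:           # insist on a nonsingular P
    P = rng.standard_normal((4, 4))

assert inertia(P @ A @ P.T) == inertia(A)     # congruence preserves In(A)
```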

2 Rank/inertia optimization of Q1 − XP1X* subject to AX = B

Note that

[I_n, −X; 0, I_m][Q1, XP1; P1X*, P1][I_n, −X; 0, I_m]* = [Q1 − XP1X*, 0; 0, P1].

It is easy to get from this the equality

i±[Q1, XP1; P1X*, P1] = i±(Q1 − XP1X*) + i±(P1), (2.1)

or equivalently,

i±(Q1 − XP1X*) = i±[Q1, XP1; P1X*, P1] − i±(P1). (2.2)

Note that

[Q1, XP1; P1X*, P1] = [Q1, 0; 0, P1] + [I_n; 0] X [0, P1] + ([I_n; 0] X [0, P1])*

is a linear matrix function of X. Hence, we are able to derive the extremal ranks/inertias of Q1 − XP1X* from Lemma 1.5. In fact, the rank/inertia of a nonlinear Hermitian matrix function of this kind can always be converted in this way to the rank/inertia of a certain linear Hermitian matrix function.

Theorem 2.1 Let q1(X) be as given in (1.1) and assume that (1.2) is consistent. Then,
(a) The maximal rank of q1(X) subject to (1.2) is

max_{AX=B} r(q1(X)) = min{ 2n + r(AQ1A* − BP1B*) − 2r(A), n + r[AQ1, BP1] − r(A), r(Q1) + r(P1) }. (2.3)

(b) The minimal rank of q1(X) subject to (1.2) is

min_{AX=B} r(q1(X)) = max{ s1, s2, s3, s4 }, (2.4)

where

s1 = r(AQ1A* − BP1B*) + 2r[AQ1, BP1] − 2r[AQ1A*, BP1],
s2 = 2r[AQ1, BP1] + r(Q1) − r(P1) − 2r(AQ1),
s3 = 2r[AQ1, BP1] + i+(AQ1A* − BP1B*) − r[AQ1A*, BP1] − i−(P1) + i−(Q1) − r(AQ1),
s4 = 2r[AQ1, BP1] + i−(AQ1A* − BP1B*) − r[AQ1A*, BP1] − i+(P1) + i+(Q1) − r(AQ1).

(c) The maximal partial inertias of q1(X) subject to (1.2) are

max_{AX=B} i±(q1(X)) = min{ n + i±(AQ1A* − BP1B*) − r(A), i±(Q1) + i∓(P1) }. (2.5)

(d) The minimal partial inertias of q1(X) subject to (1.2) are

min_{AX=B} i±(q1(X)) = max{ i±(AQ1A* − BP1B*) + r[AQ1, BP1] − r[AQ1A*, BP1], r[AQ1, BP1] + i±(Q1) − i±(P1) − r(AQ1) }. (2.6)

Proof Substituting (1.27) into Q1 − XP1X* yields

Q1 − XP1X* = Q1 − (A†B + F_A V)P1(A†B + F_A V)*. (2.7)

Applying (2.2) to (2.7) gives the following result

i±(Q1 − (A†B + F_A V)P1(A†B + F_A V)*)
= i±[Q1, (A†B + F_A V)P1; P1(A†B + F_A V)*, P1] − i±(P1)
= i±( [Q1, A†BP1; P1B*(A†)*, P1] + [F_A; 0] V [0, P1] + ([F_A; 0] V [0, P1])* ) − i±(P1). (2.8)

Denote q(v ) = Applying Lemma 1.4 to (2.9) yields where Q 1 A P 1 P 1 (A ) P 1 FA + V, P 1 + V F A,. (2.9) P1 max rq(v ) = min{ r(m), r(m 1 ), r(m 2 ) }, (2.1) V min rq(v ) = 2r(M) + max{ s + + s, t + + t, s + + t, s + t + }, (2.11) V max i ± q(v ) = min{ i ± (M 1 ), i ± (M 2 )}, (2.12) V min i ±q(v ) = r(m) + max{ s ±, t ± }, (2.13) V Q M = 1 A P 1 F A P 1 A, P 1 P 1 Q 1 A P 1 F A Q 1 A P 1 M 1 = P 1 A P 1, M 2 = P 1 A P 1 P 1, F A P 1 Q 1 A P 1 F A Q 1 A P 1 F A N 1 = P 1 (A ) P 1 P 1, N 2 = P 1 (A ) P 1 P 1 F A P 1 and s ± = i ± (M 1 ) r(n 1 ), t ± = i ± (M 2 ) r(n 2 ). Applying (1.19) (1.22), and simplifying by EMOs and ECMOs, we obtain r(m) = Q 1 A Q1 A P 1 F A + r(p1 ) = r P 1 I n + r(p A 1 ) r(a) I = r n + r(p AQ 1 P 1 1 ) r(a) = n + r(p 1 ) + r AQ 1, P 1 r(a), (2.14) Q1 A r(n 1 ) = r P 1 F A + r(p F A 1 ) = r 1 A P 1 I n I n A + r(p 1 ) 2r(A) A I n = r I n + r(p 1 ) 2r(A) P 1 AQ 1 A = 2n + r(p 1 ) 2r(A) + r P 1, AQ 1 A, (2.15) r(n 2 ) = n + 2r(P 1 ) + r(aq 1 ) r(a), (2.16) Q 1 A P 1 I n i ± (M 1 ) = i ± P 1 (A ) P 1 I n A r(a) A Q 1 A P 1 I n Q 1 A = i ± P 1 (A ) P 1 P 1 I n r(a) AQ 1 P 1 AQ 1 A I n = i ± P 1 I n r(a) AQ 1 A P 1 = n + i ± (P 1 ) + i ± ( AQ 1 A P 1 ) r(a), (2.17) Q 1 A P 1 i ± (M 2 ) = i ± P 1 (A ) P 1 P 1 = i ± (Q 1 ) + r(p 1 ), (2.18) P 1, 7

and

s± = i±(M1) − r(N1) = r(A) + i±(AQ1A* − BP1B*) − r[BP1, AQ1A*] − i∓(P1) − n, (2.19)
t± = i±(M2) − r(N2) = r(A) + i±(Q1) − r(AQ1) − r(P1) − n. (2.20)

Substituting (2.14)–(2.20) into (2.10)–(2.13), and then (2.10)–(2.13) into (2.8), we obtain (2.3)–(2.6).

Many consequences can be derived from Theorem 2.1.

Corollary 2.2 Let q1(X) be as given in (1.1) and assume that (1.2) is consistent. Then,
(a) AX = B has a solution such that Q1 − XP1X* is nonsingular if and only if

r(AQ1A* − BP1B*) ≥ 2r(A) − n, r[AQ1, BP1] ≥ r(A), r(Q1) + r(P1) ≥ n. (2.21)

(b) Q1 − XP1X* is nonsingular for all solutions of AX = B if and only if one of s_i = n, i = 1, …, 4, holds.
(c) AX = B and XP1X* = Q1 have a common solution if and only if

AQ1A* = BP1B*, R(AQ1) ⊆ R(BP1), i±(Q1) ≤ i±(P1). (2.22)

(d) XP1X* = Q1 holds for all solutions of AX = B if and only if r(A) = n and AQ1A* = BP1B*, or Q1 = 0 and P1 = 0.
(e) AX = B has a solution such that Q1 − XP1X* > 0 if and only if

i+(AQ1A* − BP1B*) ≥ r(A) and i+(Q1) + i−(P1) ≥ n. (2.23)

(f) AX = B has a solution such that Q1 − XP1X* < 0 if and only if

i−(AQ1A* − BP1B*) ≥ r(A) and i−(Q1) + i+(P1) ≥ n. (2.24)

(g) Q1 − XP1X* > 0 holds for all solutions of AX = B if and only if

i+(AQ1A* − BP1B*) + r[AQ1, BP1] = n + r[AQ1A*, BP1] (2.25)

or

r[AQ1, BP1] + i+(Q1) = n + i+(P1) + r(AQ1). (2.26)

(h) Q1 − XP1X* < 0 holds for all solutions of AX = B if and only if

i−(AQ1A* − BP1B*) + r[AQ1, BP1] = n + r[AQ1A*, BP1] (2.27)

or

r[AQ1, BP1] + i−(Q1) = n + i−(P1) + r(AQ1). (2.28)

(i) AX = B has a solution such that Q1 − XP1X* ≥ 0 if and only if

i−(AQ1A* − BP1B*) + r[AQ1, BP1] ≤ r[AQ1A*, BP1] and r[AQ1, BP1] + i−(Q1) ≤ i−(P1) + r(AQ1). (2.29)

(j) AX = B has a solution such that Q1 − XP1X* ≤ 0 if and only if

i+(AQ1A* − BP1B*) + r[AQ1, BP1] ≤ r[AQ1A*, BP1] and r[AQ1, BP1] + i+(Q1) ≤ i+(P1) + r(AQ1). (2.30)

(k) Q1 − XP1X* ≥ 0 holds for all solutions of AX = B if and only if r(A) = n and AQ1A* ≥ BP1B*, or Q1 ≥ 0 and P1 ≤ 0.

(l) Q1 − XP1X* ≤ 0 holds for all solutions of AX = B if and only if r(A) = n and AQ1A* ≤ BP1B*, or Q1 ≤ 0 and P1 ≥ 0.

Corollary 2.3 Let q1(X) be as given in (1.1) with P1 > 0 and Q1 > 0, and assume that (1.2) is consistent. Then,

max_{AX=B} r(Q1 − XP1X*) = min{ n, 2n + r(AQ1A* − BP1B*) − 2r(A) }, (2.31)
min_{AX=B} r(Q1 − XP1X*) = max{ r(AQ1A* − BP1B*), i−(AQ1A* − BP1B*) + n − m }, (2.32)
max_{AX=B} i±(Q1 − XP1X*) = min{ n + i±(AQ1A* − BP1B*) − r(A), i±(I_n) + i∓(I_m) }, (2.33)
min_{AX=B} i±(Q1 − XP1X*) = max{ i±(AQ1A* − BP1B*), i±(I_n) − i±(I_m) }. (2.34)

Hence,
(a) AX = B has a solution such that Q1 − XP1X* is nonsingular if and only if r(AQ1A* − BP1B*) ≥ 2r(A) − n.
(b) Q1 − XP1X* is nonsingular for all solutions of AX = B if and only if r(AQ1A* − BP1B*) = n or i−(AQ1A* − BP1B*) = m.
(c) AX = B and XP1X* = Q1 have a common solution if and only if AQ1A* = BP1B* and m ≥ n.
(d) XP1X* = Q1 holds for all solutions of AX = B if and only if AQ1A* = BP1B* and r(A) = n.
(e) AX = B has a solution such that Q1 > XP1X* if and only if i+(AQ1A* − BP1B*) = r(A); it has a solution such that Q1 < XP1X* if and only if i−(AQ1A* − BP1B*) = r(A) and m ≥ n.
(f) Q1 > XP1X* holds for all solutions of AX = B if and only if i+(AQ1A* − BP1B*) = n; Q1 < XP1X* holds for all solutions of AX = B if and only if i−(AQ1A* − BP1B*) = n.
(g) AX = B has a solution such that Q1 ≥ XP1X* if and only if AQ1A* ≥ BP1B*; it has a solution such that Q1 ≤ XP1X* if and only if AQ1A* ≤ BP1B* and n ≤ m.
(h) Q1 ≥ XP1X* holds for all solutions of AX = B if and only if AQ1A* ≥ BP1B* and r(A) = n; Q1 ≤ XP1X* holds for all solutions of AX = B if and only if AQ1A* ≤ BP1B* and r(A) = n.

In particular, we have the following results on the rank/inertia of I_n − XX* and the corresponding equality and inequalities for the solutions of the matrix equation in (1.2).

Corollary 2.4 Assume that (1.2) is consistent.
Then,

max_{AX=B} r(I_n − XX*) = min{ n, 2n + r(AA* − BB*) − 2r(A) }, (2.35)
min_{AX=B} r(I_n − XX*) = max{ r(AA* − BB*), i−(AA* − BB*) + n − m }, (2.36)
max_{AX=B} i±(I_n − XX*) = min{ n + i±(AA* − BB*) − r(A), i±(I_n) + i∓(I_m) }, (2.37)
min_{AX=B} i±(I_n − XX*) = max{ i±(AA* − BB*), i±(I_n) − i±(I_m) }. (2.38)

Hence,
(a) AX = B has a solution such that I_n − XX* is nonsingular if and only if r(AA* − BB*) ≥ 2r(A) − n.
(b) I_n − XX* is nonsingular for all solutions of AX = B if and only if r(AA* − BB*) = n or i−(AA* − BB*) = m.
(c) AX = B has a solution such that XX* = I_n if and only if AA* = BB* and m ≥ n.
(d) XX* = I_n holds for all solutions of AX = B if and only if AA* = BB* and r(A) = n.
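The necessity half of Corollary 2.4(c) is transparent: if AX = B and XX* = I_n, then BB* = AXX*A* = AA*. The sketch below (ours, not from the paper) builds such a pair with m ≥ n and checks the identity.

```python
import numpy as np

rng = np.random.default_rng(4)

n, m, p = 3, 5, 4                               # m >= n, as Corollary 2.4(c) requires
Q, _ = np.linalg.qr(rng.standard_normal((m, n)))
X = Q.T                                         # n x m with orthonormal rows: XX* = I_n
assert np.allclose(X @ X.T, np.eye(n))

A = rng.standard_normal((p, n))
B = A @ X                                       # AX = B is consistent by construction

assert np.allclose(A @ A.T, B @ B.T)            # AA* = BB* follows
```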

(e) AX = B has a solution such that XX* < I_n if and only if i+(AA* − BB*) = r(A); it has a solution such that XX* > I_n if and only if i−(AA* − BB*) = r(A) and m ≥ n.
(f) XX* < I_n holds for all solutions of AX = B if and only if i+(AA* − BB*) = n; XX* > I_n holds for all solutions of AX = B if and only if i−(AA* − BB*) = n.
(g) AX = B has a solution such that XX* ≤ I_n if and only if AA* ≥ BB*; it has a solution such that XX* ≥ I_n if and only if AA* ≤ BB* and m ≥ n.
(h) XX* ≤ I_n holds for all solutions of AX = B if and only if i−(AA* − BB*) = 0 and r(A) = n; XX* ≥ I_n holds for all solutions of AX = B if and only if i+(AA* − BB*) = 0 and r(A) = n.

In the remaining part of this section, we solve the optimization problems in (1.5) and (1.6), i.e., we find X1, X2 ∈ C^{n×m} such that

AX1 = B and q1(X) ≤ q1(X1) s.t. AX = B, (2.39)
AX2 = B and q1(X) ≥ q1(X2) s.t. AX = B (2.40)

hold, respectively.

Theorem 2.5 Assume that (1.2) is consistent and that its solution is not unique, namely, r(A) < n. Then,
(a) There exists an X1 ∈ C^{n×m} such that (2.39) holds if and only if

P1 ≥ 0 and BP1 = 0. (2.41)

In this case, the general matrix satisfying (2.39) is X1 = A†B + F_A U F_{P1}, where U ∈ C^{n×m} is arbitrary, and the maximal matrix of q1(X) in the Löwner partial ordering is q1(X1) = Q1.
(b) There exists an X2 ∈ C^{n×m} such that (2.40) holds if and only if

P1 ≤ 0 and BP1 = 0. (2.42)

In this case, the general matrix satisfying (2.40) is X2 = A†B + F_A U F_{P1}, where U ∈ C^{n×m} is arbitrary, and the minimal matrix of q1(X) in the Löwner partial ordering is q1(X2) = Q1.

Proof If r(A) = n, the solution to (1.2) is unique by Lemma 1.6(a), so that q1(X) subject to (1.2) is unique as well. Under r(A) < n, let

h_i(X) := q1(X) − q1(X_i) = X_i P1 X_i* − X P1 X*, i = 1, 2.

Then, (2.39) and (2.40) are equivalent to

h1(X) ≤ 0 for all X with AX = B, where AX1 = B, (2.43)
h2(X) ≥ 0 for all X with AX = B, where AX2 = B.
(2.44) Under r(a) < n, we see from Corollary 2.2(k) and (l) that (2.43) and (2.44) are equivalent to both of which are further equivalent to X 1 P 1 X 1, P 1, AX 1 =, (2.45) X 2 P 1 X 2, P 1, AX 2 =, (2.46) X 1 P 1 = 1, AX 1 =, P 1, (2.47) X 2 P 1 =, AX 2 =, P 1. (2.48) The two equations (2.47) have a common solution for X 1 if and only if P 1 =. In this case, the general solution to (2.47) is X 1 = A + F A UF P1. Substituting it into (1.1) gives q 1 (X 1 ) = Q 1, establishing (a). Result (b) can be shown similarly. 1

3 Rank/inertia optimization of Q2 − X*P2X subject to AX = B

Theorem 3.1 Let q2(X) be as given in (1.1), assume that (1.2) is consistent, and denote

T1 = [Q2, B*, 0; B, 0, A; 0, A*, −P2], T2 = [0, A; A*, P2]. (3.1)

Then,

max_{AX=B} r(q2(X)) = min{ m, r(T1) − 2r(A) }, (3.2)
min_{AX=B} r(q2(X)) = max{ 0, r(T1) − 2r(T2) + 2r(A), i+(T1) − r(T2) + r(A), i−(T1) − r(T2) + r(A) }, (3.3)
max_{AX=B} i±(q2(X)) = min{ m, i±(T1) − r(A) }, (3.4)
min_{AX=B} i±(q2(X)) = max{ 0, i±(T1) − r(T2) + r(A) }. (3.5)

Proof Substituting (1.27) into Q2 − X*P2X yields

Q2 − X*P2X = Q2 − (A†B + F_A V)* P2 (A†B + F_A V). (3.6)

Applying (2.2) to (3.6) gives the following result

i±(Q2 − (A†B + F_A V)* P2 (A†B + F_A V))
= i±[Q2, (A†B + F_A V)*P2; P2(A†B + F_A V), P2] − i±(P2)
= i±( [Q2, B*(A†)*P2; P2A†B, P2] + [0; P2F_A] V [I_m, 0] + ([0; P2F_A] V [I_m, 0])* ) − i±(P2). (3.7)

Denote

q(V) = [Q2, B*(A†)*P2; P2A†B, P2] + [0; P2F_A] V [I_m, 0] + ([0; P2F_A] V [I_m, 0])*. (3.8)

Applying Lemma 1.5 to (3.8) yields

max_V r(q(V)) = min{ r(M), r(M1), r(M2) }, (3.9)
min_V r(q(V)) = 2r(M) + max{ s+ + s−, t+ + t−, s+ + t−, s− + t+ }, (3.10)
max_V i±(q(V)) = min{ i±(M1), i±(M2) }, (3.11)
min_V i±(q(V)) = r(M) + max{ s±, t± }, (3.12)

where

M = [Q2, B*(A†)*P2, 0, I_m; P2A†B, P2, P2F_A, 0],
M1 = [Q2, B*(A†)*P2, 0; P2A†B, P2, P2F_A; 0, F_AP2, 0],
M2 = [Q2, B*(A†)*P2, I_m; P2A†B, P2, 0; I_m, 0, 0],
N1 = [Q2, B*(A†)*P2, 0, I_m; P2A†B, P2, P2F_A, 0; 0, F_AP2, 0, 0],
N2 = [Q2, B*(A†)*P2, 0, I_m; P2A†B, P2, P2F_A, 0; I_m, 0, 0, 0],

and s ± = i ± (M 1 ) r(n 1 ), t ± = i ± (M 2 ) r(n 2 ). Applying (1.19) (1.22), and simplifying by EMOs and ECMOs, we obtain and r(m) = m + r(p 2 ), (3.13) Q 2 (A ) P 2 i ± (M 1 ) = i ± P 2 A P 2 P 2 P 2 A r(a) A Q 2 (A ) P 2 A (A ) P 2 = i ± P 2 P 2 A P 2 A r(a) A Q 2 = i ± P 2 P 2 A + i ±(P 2 ) r(a) A = i ± (P 1 ) + i ± (T 1 ) r(a), (3.14) Q 2 (A ) P 2 I m i ± (M 2 ) = i ± P 2 A P 2 = m + i ± (P 2 ), (3.15) I m Q 2 (A ) P 2 I m r(n 1 ) = r P 2 A P 2 P 2 P 2 A 2r(A) A P 2 = r P 2 A P 2 A + m 2r(A) A P2 A = r + m + r(p A 2 ) 2r(A) = m + r(t 2 ) + r(p 2 ) 2r(A), (3.16) r(n 2 ) = 2m + r(p 2 ), (3.17) s ± =i ± (M 1 ) r(n 1 ) = i ± (T 1 ) r(t 2 ) + r(a) i (P 2 ) m, (3.18) t ± =i ± (M 2 ) r(n 2 ) = i (P 2 ) m. (3.19) Substituting (3.13) (3.19) into (3.9) (3.12), and then substituting (3.9) (3.12) into (3.7), we obtain (3.2) (3.5). Corollary 3.2 Let q 2 (X) be as given in (1.1), T 1 and T 2 be as given in (3.1), and assume that (1.2) be consistent. Then, (a) AX = has a solution such that Q 2 X P 2 X is nonsingular if and only if r( T 1 ) 2r(A) + m. (b) AX = has a solution such that Q 2 = X P 2 X if and only if i + (T 1 ) r(t 2 ) r(a) and i (T 1 ) r(t 2 ) r(a). (3.2) (c) AX = has a solution such that Q 2 > X P 2 X if and only if i + (T 1 ) m + r(a); has a solution such that Q 2 < X P 2 X if and only if i (T 1 ) m + r(a). (d) AX = has a solution such that Q 2 X P 2 X if and only if i (T 1 ) r(t 2 ) r(a); has a solution such that Q 2 X P 2 X if and only if i + (T 1 ) r(t 2 ) r(a). (e) Q 2 X P 2 X holds for all solutions of AX = if and only if i (T 1 ) = r(a). 12

(f) Q_2 ≤ X^*P_2X holds for all solutions of AX = B if and only if i_+(T_1) = r(A).

Corollary 3.3 Let q_2(X) be as given in (1.1) with P_2 > 0 and Q_2 > 0, and assume that (1.2) is consistent. Then,

max_{AX=B} r(Q_2 − X^*P_2X) = min{ m, m + n + r(AP_2^{-1}A^* − BQ_2^{-1}B^*) − 2r(A) }, (3.21)

min_{AX=B} r(Q_2 − X^*P_2X) = max{ r(AP_2^{-1}A^* − BQ_2^{-1}B^*) + m − n, i_-(AP_2^{-1}A^* − BQ_2^{-1}B^*) }, (3.22)

max_{AX=B} i_+(Q_2 − X^*P_2X) = min{ m, m + i_+(AP_2^{-1}A^* − BQ_2^{-1}B^*) − r(A) }, (3.23)

max_{AX=B} i_-(Q_2 − X^*P_2X) = min{ m, n + i_-(AP_2^{-1}A^* − BQ_2^{-1}B^*) − r(A) }, (3.24)

min_{AX=B} i_+(Q_2 − X^*P_2X) = max{ 0, i_+(AP_2^{-1}A^* − BQ_2^{-1}B^*) + m − n }, (3.25)

min_{AX=B} i_-(Q_2 − X^*P_2X) = i_-(AP_2^{-1}A^* − BQ_2^{-1}B^*). (3.26)

Hence,

(a) AX = B has a solution such that Q_2 − X^*P_2X is nonsingular if and only if r(AP_2^{-1}A^* − BQ_2^{-1}B^*) ≥ 2r(A) − n.

(b) AX = B has a solution such that Q_2 = X^*P_2X if and only if r(AP_2^{-1}A^* − BQ_2^{-1}B^*) ≤ n − m and AP_2^{-1}A^* − BQ_2^{-1}B^* ≥ 0.

(c) AX = B has a solution such that Q_2 > X^*P_2X if and only if i_+(AP_2^{-1}A^* − BQ_2^{-1}B^*) = r(A).

(d) AX = B has a solution such that Q_2 < X^*P_2X if and only if i_-(AP_2^{-1}A^* − BQ_2^{-1}B^*) ≥ r(A) + m − n.

(e) AX = B has a solution such that Q_2 ≥ X^*P_2X if and only if AP_2^{-1}A^* − BQ_2^{-1}B^* ≥ 0.

(f) AX = B has a solution such that Q_2 ≤ X^*P_2X if and only if i_+(AP_2^{-1}A^* − BQ_2^{-1}B^*) ≤ n − m.

Corollary 3.4 Assume that (1.2) is consistent. Then,

max_{AX=B} r(I_m − X^*X) = min{ m, m + n + r(AA^* − BB^*) − 2r(A) }, (3.27)

min_{AX=B} r(I_m − X^*X) = max{ r(AA^* − BB^*) + m − n, i_-(AA^* − BB^*) }, (3.28)

max_{AX=B} i_+(I_m − X^*X) = min{ m, m + i_+(AA^* − BB^*) − r(A) }, (3.29)

max_{AX=B} i_-(I_m − X^*X) = min{ m, n + i_-(AA^* − BB^*) − r(A) }, (3.30)

min_{AX=B} i_+(I_m − X^*X) = max{ 0, i_+(AA^* − BB^*) + m − n }, (3.31)

min_{AX=B} i_-(I_m − X^*X) = i_-(AA^* − BB^*). (3.32)

Hence,

(a) AX = B has a solution such that I_m − X^*X is nonsingular if and only if r(AA^* − BB^*) ≥ 2r(A) − n.

(b) AX = B has a solution such that X^*X = I_m if and only if r(AA^* − BB^*) ≤ n − m and AA^* − BB^* ≥ 0.

(c) AX = B has a solution such that X^*X < I_m if and only if i_+(AA^* − BB^*) = r(A).

(d) AX = B has a solution such that X^*X > I_m if and only if i_-(AA^* − BB^*) ≥ r(A) + m − n.

(e) AX = B has a solution such that X^*X ≤ I_m if and only if AA^* − BB^* ≥ 0.

(f) AX = B has a solution such that X^*X ≥ I_m if and only if i_+(AA^* − BB^*) ≤ n − m.

In the remaining part of this section, we solve the optimization problems in (1.5) and (1.6), that is, we find X_1, X_2 ∈ C^{n×m} such that

AX_1 = B and q_2(X) ≤ q_2(X_1) for all X satisfying AX = B, (3.33)

AX_2 = B and q_2(X) ≥ q_2(X_2) for all X satisfying AX = B (3.34)

hold, respectively.

Theorem 3.5 Assume that (1.2) is consistent and that its solution is not unique, namely, r(A) < n. Then,

(a) There exists an X_1 ∈ C^{n×m} such that (3.33) holds if and only if

F_AP_2F_A ≥ 0 and R([0; B]) ⊆ R([P_2, A^*; A, 0]). (3.35)

In this case, the matrix X_1 satisfying (3.33) is determined by

AX_1 = B and X_1^*P_2X_1 = −[0, B^*][P_2, A^*; A, 0]^+[0; B]. (3.36)

Correspondingly, the maximal matrix of q_2(X) in the Löwner partial ordering is

q_2(X_1) = Q_2 + [0, B^*][P_2, A^*; A, 0]^+[0; B]. (3.37)

(b) There exists an X_2 ∈ C^{n×m} such that (3.34) holds if and only if

F_AP_2F_A ≤ 0 and R([0; B]) ⊆ R([P_2, A^*; A, 0]). (3.38)

In this case, the matrix X_2 satisfying (3.34) is determined by

AX_2 = B and X_2^*P_2X_2 = −[0, B^*][P_2, A^*; A, 0]^+[0; B]. (3.39)

Correspondingly, the minimal matrix of q_2(X) in the Löwner partial ordering is

q_2(X_2) = Q_2 + [0, B^*][P_2, A^*; A, 0]^+[0; B]. (3.40)

Proof Under r(A) < n, let

h_i(X) := q_2(X) − q_2(X_i) = X_i^*P_2X_i − X^*P_2X, i = 1, 2. (3.41)

Then, (3.33) and (3.34) are equivalent to

h_1(X) ≤ 0 for all X with AX = B, AX_1 = B, (3.42)

h_2(X) ≥ 0 for all X with AX = B, AX_2 = B. (3.43)

From Corollary 3.2(e) and (f), (3.42) and (3.43) are equivalent to

i_+[ X_1^*P_2X_1, B^*, 0; B, 0, A; 0, A^*, −P_2 ] = r(A), AX_1 = B, (3.44)

i_-[ X_2^*P_2X_2, B^*, 0; B, 0, A; 0, A^*, −P_2 ] = r(A), AX_2 = B. (3.45)
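The extremal matrices in (3.36) and (3.37) can be obtained from the bordered (KKT-type) linear system [P_2, A^*; A, 0][X; Λ] = [0; B]. The following is a minimal numerical sketch, assuming NumPy; real matrices are used, the dimensions and seed are illustrative only, and P_2 > 0 is chosen so that the conditions in (3.35) hold automatically.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 4, 2, 2
A = rng.standard_normal((p, n))               # full row rank (a.s.), r(A) = p < n
R = rng.standard_normal((n, n))
P2 = R.T @ R + np.eye(n)                      # P2 > 0, hence F_A P2 F_A >= 0
B = rng.standard_normal((p, m))               # AX = B consistent (A has full row rank)
M = np.block([[P2, A.T], [A, np.zeros((p, p))]])
rhs = np.vstack([np.zeros((n, m)), B])
sol = np.linalg.solve(M, rhs)                 # KKT system [P2, A^*; A, 0][X; L] = [0; B]
Xstar = sol[:n]                               # the maximizer X_1 of (3.36)
val = -rhs.T @ np.linalg.pinv(M) @ rhs        # -[0, B^*][P2, A^*; A, 0]^+[0; B]
assert np.allclose(A @ Xstar, B)
assert np.allclose(Xstar.T @ P2 @ Xstar, val)           # (3.36) holds
Ap = np.linalg.pinv(A)
FA = np.eye(n) - Ap @ A
for _ in range(100):                          # X^*P2X >= val for every solution of AX = B
    X = Ap @ B + FA @ rng.standard_normal((n, m))
    assert np.linalg.eigvalsh(X.T @ P2 @ X - val).min() >= -1e-8
```

The loop illustrates why Xstar is Löwner-optimal: any other solution is Xstar + Z with AZ = 0, and since P_2Xstar ∈ R(A^*), the cross terms vanish, leaving X^*P_2X = Xstar^*P_2Xstar + Z^*P_2Z ≥ Xstar^*P_2Xstar, i.e., q_2(X) ≤ q_2(X_1).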

It is easily seen from (1.1) and (1.14) that

i_±[ X^*P_2X, B^*, 0; B, 0, A; 0, A^*, −P_2 ] ≥ r(A) + i_∓(F_AP_2F_A) + i_±( X^*P_2X + [0, B^*][P_2, A^*; A, 0]^+[0; B] ) ≥ r(A).

Hence, (3.44) and (3.45) are equivalent to

F_AP_2F_A ≥ 0, R([0; B]) ⊆ R([P_2, A^*; A, 0]), AX_1 = B, X_1^*P_2X_1 = −[0, B^*][P_2, A^*; A, 0]^+[0; B], (3.46)

F_AP_2F_A ≤ 0, R([0; B]) ⊆ R([P_2, A^*; A, 0]), AX_2 = B, X_2^*P_2X_2 = −[0, B^*][P_2, A^*; A, 0]^+[0; B], (3.47)

respectively. Under the first two conditions in (3.46) and (3.47), we can verify from Corollary 3.2(d) that the two pairs of matrix equations in (3.46) and (3.47) each have a common solution. Thus, we obtain the results in (a) and (b).

4 Concluding remarks

After centuries of development, it is not easy nowadays to discover batches of new and simple formulas for fundamental problems in classical areas of mathematics. Nevertheless, in the previous two sections we gave some simple closed-form formulas for the extremal ranks/inertias of the two matrix functions in (1.1) subject to (1.2), and used them to derive a variety of equalities and inequalities for the solutions of the matrix equation in (1.2). Because these formulas are represented through the ranks/inertias of the given matrices in (1.1) and (1.2), they are easy to calculate and simplify under various given conditions. The procedure for deriving these extremal ranks and inertias is irreplaceable, and the results obtained are unique from an algebraic point of view. Hence, the research on the extremal ranks/inertias of matrices and their applications, as shown in this paper as well as in [2, 3, 31], etc., can be classified as a fundamental and independent branch of mathematical optimization theory. Motivated by the results in the previous two sections, we are also able to solve rank/inertia optimization problems for some more general quadratic matrix functions, such as

q_1(X) = XP_1X^* + XQ_1 + Q_1^*X^* + R_1, q_2(X) = X^*P_2X + X^*Q_2 + Q_2^*X + R_2.
(4.1)

This type of quadratic matrix function occurs in some block elementary congruence transformations, for example,

[ I_m, 0; X, I_n ][ P_1, Q_1; Q_1^*, R_1 ][ I_m, X^*; 0, I_n ] = [ P_1, P_1X^* + Q_1; Q_1^* + XP_1, XP_1X^* + XQ_1 + Q_1^*X^* + R_1 ]. (4.2)

The lower-right block in (4.2) is the quadratic matrix function q_1(X) in (4.1). Some optimization problems related to (4.1) were considered in [2, 3, 6, 14]. Without much effort, we are also able to derive the extremal ranks and inertias of (4.1) subject to (1.2), and then use them to examine various algebraic properties of (4.1). The corresponding results will be presented in another paper.

Rank/inertia optimization problems for quadratic matrix functions occur widely in matrix theory and its applications. For instance, many matrix inverse completion problems can be converted to RMinPs of quadratic matrix functions. A simple example is

M(X) = [ A, B; B^*, X ], (4.3)

where A = A^* ∈ C^{m×m} and B ∈ C^{m×n} are given. In this case, find X = X^* ∈ C^{n×n} such that M(X) is nonsingular and its inverse has the form

M^{-1}(X) = [ Y, Z; Z^*, G ], (4.4)

where G = G^* ∈ C^{n×n} is given. Eqs. (4.3) and (4.4) are obviously equivalent to the following nonlinear rank minimization problem

min_{X,Y,Z} r( [ A, B; B^*, X ][ Y, Z; Z^*, G ] − I_{m+n} ) = 0,

or alternatively, the following linear rank minimization problem

min_{X,Y,Z} r[ A, B, I_m, 0; B^*, X, 0, I_n; I_m, 0, Y, Z; 0, I_n, Z^*, G ] = m + n.

A complete solution to this matrix inverse completion problem was given in [39]. It has been realized that matrix rank/inertia methods can serve as effective tools for dealing with matrices and their operations. In recent years, Tian and his coauthors have studied many problems on the extremal ranks and inertias of matrix functions and their applications; see [17, 19, 20, 21, 31, 34, 35, 36, 38]. This series of work has opened a new and fruitful research area in matrix theory and has attracted much attention. In fact, a large number of matrix rank formulas and their applications have been collected in some recent handbooks of matrix theory; see [4, 29]. The new techniques for solving matrix rank/inertia optimization problems enable us to develop new extensions of the classical theory of matrix equations and matrix inequalities, and allow us to analyze algebraic properties of a wide variety of Hermitian matrix functions that could not be handled before. We expect that more problems on maximizing/minimizing the ranks/inertias of matrix functions can be proposed and solved analytically, and that matrix rank/inertia methods will play ever more important roles in matrix theory and its applications.

References

[1] F.A. Badawia, On a quadratic matrix inequality and the corresponding algebraic Riccati equation, Internat. J. Contr. 36(1982), 313–322.

[2] A. Beck, Quadratic matrix programming, SIAM J. Optim. 17(2006), 1224–1238.

[3] A. Beck, Convexity properties associated with nonconvex quadratic matrix functions and applications to quadratic programming, J. Optim. Theory Appl. 142(2009), 1–29.

[4] D.S. Bernstein, Matrix Mathematics: Theory, Facts and Formulas, Second ed., Princeton University Press, Princeton, 2009.

[5] E. Candès and B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math. 9(2009), 717–772.

[6] Y. Chen, Nonnegative definite matrices and their applications to matrix quadratic programming problems, Linear and Multilinear Algebra 33(1993), 189–201.

[7] M. Fazel, H. Hindi and S. Boyd, A rank minimization heuristic with application to minimum order system approximation, In: Proceedings of the 2001 American Control Conference, pp. 4734–4739, 2001.

[8] M. Fazel, H. Hindi and S. Boyd, Rank minimization and applications in system theory, In: Proceedings of the 2004 American Control Conference, pp. 3273–3278, 2004.

[9] J.F. Geelen, Maximum rank matrix completion, Linear Algebra Appl. 288(1999), 211–217.

[10] J. Groß, A note on the general Hermitian solution to AXA^* = B, Bull. Malaysian Math. Soc. (2nd Ser.) 21(1998), 57–62.

[11] N.J.A. Harvey, D.R. Karger and S. Yekhanin, The complexity of matrix completion, In: Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, Association for Computing Machinery, New York, pp. 1103–1111, 2006.

[12] T.M. Hoang and T. Thierauf, The complexity of the inertia, Lecture Notes in Computer Science, Vol. 2556, Springer, pp. 206–217, 2002.

[13] T.M. Hoang and T. Thierauf, The complexity of the inertia and some closure properties of GapL, In: Proceedings of the Twentieth Annual IEEE Conference on Computational Complexity, pp. 28–37, 2005.

[14] V.A. Khatskevich, M.I. Ostrovskii and V.S. Shulman, Quadratic inequalities for Hilbert space operators, Integr. Equ. Oper. Theory 59(2007), 19–34.

[15] Y. Kim and M. Mesbahi, On the rank minimization problem, In: Proceedings of the 2004 American Control Conference, Boston, pp. 2015–2020, 2004.

[16] M. Laurent, Matrix completion problems, In: Encyclopedia of Optimization (C.A. Floudas and P.M. Pardalos, eds.), Vol. III, pp. 221–229, Kluwer, 2001.

[17] Y. Liu and Y. Tian, More on extremal ranks of the matrix expressions A − BX ± X^*B^* with statistical applications, Numer. Linear Algebra Appl. 15(2008), 307–325.

[18] Y. Liu and Y. Tian, Extremal ranks of submatrices in an Hermitian solution to the matrix equation AXA^* = B with applications, J. Appl. Math. Comput. 32(2010), 289–301.

[19] Y. Liu and Y. Tian, A simultaneous decomposition of a matrix triplet with applications, Numer. Linear Algebra Appl. (2010), DOI:10.1002/nla.701.

[20] Y. Liu and Y. Tian, Max-min problems on the ranks and inertias of the matrix expressions A − BXC ± (BXC)^* with applications, J. Optim. Theory Appl., accepted.

[21] Y. Liu, Y. Tian and Y. Takane, Ranks of Hermitian and skew-Hermitian solutions to the matrix equation AXA^* = B, Linear Algebra Appl. 431(2009), 2359–2372.

[22] M. Mahajan and J. Sarma, On the complexity of matrix rank and rigidity, Lecture Notes in Computer Science, Vol. 4649, Springer, pp. 269–280, 2007.

[23] M. Mesbahi, On the rank minimization problem and its control applications, Systems & Control Letters 33(1998), 31–36.

[24] M. Mesbahi and G.P. Papavassilopoulos, Solving a class of rank minimization problems via semi-definite programs, with applications to the fixed order output feedback synthesis, In: Proceedings of the American Control Conference, Albuquerque, New Mexico, pp. 77–80, 1997.

[25] G. Marsaglia and G.P.H. Styan, Equalities and inequalities for ranks of matrices, Linear and Multilinear Algebra 2(1974), 269–292.

[26] B.K. Natarajan, Sparse approximate solutions to linear systems, SIAM J. Comput. 24(1995), 227–234.

[27] R. Penrose, A generalized inverse for matrices, Proc. Cambridge Philos. Soc. 51(1955), 406–413.

[28] B. Recht, M. Fazel and P.A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Review 52(2010), 471–501.

[29] G.A.F. Seber, A Matrix Handbook for Statisticians, John Wiley & Sons, 2008.

[30] Y. Tian, Ranks of solutions of the matrix equation AXB = C, Linear and Multilinear Algebra 51(2003), 111–125.

[31] Y. Tian, Equalities and inequalities for inertias of Hermitian matrices with applications, Linear Algebra Appl. 433(2010), 263–296.

[32] Y. Tian, Rank and inertia of submatrices of the Moore–Penrose inverse of a Hermitian matrix, Electron. J. Linear Algebra 20(2010), 226–240.

[33] Y. Tian, Completing block Hermitian matrices with maximal and minimal ranks and inertias, Electron. J. Linear Algebra 21(2010), 141–158.

[34] Y. Tian, Optimization problems on the rank and inertia of the Hermitian matrix expression A − BX − (BX)^* with applications, submitted.

[35] Y. Tian, Optimization problems on the rank and inertia of the Hermitian Schur complement with applications, submitted.

[36] Y. Tian, Rank and inertia optimization of the Hermitian matrix expression A_1 − B_1XB_1^* subject to a pair of Hermitian matrix equations (B_2XB_2^*, B_3XB_3^*) = (A_2, A_3), submitted.

[37] Y. Tian, Optimization problems on the rank and inertia of a linear Hermitian matrix function subject to range, rank and definiteness restrictions, submitted.

[38] Y. Tian and Y. Liu, Extremal ranks of some symmetric matrix expressions with applications, SIAM J. Matrix Anal. Appl. 28(2006), 890–905.

[39] Y. Tian and Y. Takane, The inverse of any two-by-two nonsingular partitioned matrix and three matrix inverse completion problems, Comput. Math. Appl. 57(2009), 1294–1304.

[40] J. Wang, V. Sreeram and W. Liu, The parametrization of the pencil A + BKC with constant rank and its application, Internat. J. Inform. Sys. Sci. 4(2008), 488–499.