Rank and inertia optimizations of two Hermitian quadratic matrix functions subject to restrictions with applications


Yongge Tian^a, Ying Li^{b,c}

^a China Economics and Management Academy, Central University of Finance and Economics, Beijing 100081, China
^b School of Management, University of Shanghai for Science and Technology, Shanghai 200093, China
^c College of Mathematics Science, Liaocheng University, Liaocheng, Shandong 252059, China

Abstract. In this paper, we first give the maximal and minimal values of the ranks and inertias of the quadratic matrix functions q₁(X) = Q₁ − XP₁X* and q₂(X) = Q₂ − X*P₂X subject to a consistent matrix equation AX = B, where Q₁, Q₂, P₁ and P₂ are Hermitian matrices. As applications, we derive necessary and sufficient conditions for the solution of AX = B to satisfy the quadratic equalities XP₁X* = Q₁ and X*P₂X = Q₂, as well as the quadratic inequalities XP₁X* ≥ Q₁ and X*P₂X ≥ Q₂ in the Löwner partial ordering. In particular, we give the maximal and minimal matrices of q₁(X) and q₂(X) subject to AX = B in the Löwner partial ordering.

Mathematics Subject Classifications: 15A09; 15A24; 15A63; 15B57; 65K10; 65K15

Key Words: linear matrix function; quadratic matrix function; rank; inertia; Löwner partial ordering; generalized inverse; matrix equations

1 Introduction

Suppose that

q₁(X) = Q₁ − XP₁X*,  q₂(X) = Q₂ − X*P₂X  (1.1)

are two Hermitian quadratic matrix functions, where P₁ = P₁* ∈ C^{m×m}, Q₁ = Q₁* ∈ C^{n×n}, P₂ = P₂* ∈ C^{n×n}, Q₂ = Q₂* ∈ C^{m×m} are given, and X ∈ C^{n×m} is a variable matrix assumed to satisfy the linear matrix equation

AX = B,  (1.2)

where A ∈ C^{p×n} and B ∈ C^{p×m} are given. Since the two matrix functions vary with the choice of the variable matrix X, that is, with the choice of the solution to (1.2), the numerical characteristics of these matrix functions, such as their ranks, inertias, traces, norms and eigenvalues, may vary with the choice of X as well.
Hence, many research problems on these two matrix functions and their properties can be proposed and studied. In fact, linear and quadratic matrix functions, and their special cases, linear and quadratic matrix equations, are two classes of fundamental objects of study in matrix theory and its applications. The two quadratic matrix functions in (1.1) and their variations occur widely in matrix theory and applications. Some previous work on (1.1) subject to (1.2) in quadratic programming and control theory can be found, e.g., in [1, 6]. In a recent paper [33], Tian considered the rank and inertia of the quadratic matrix function X*AX, and gave closed-form formulas for the maximal and minimal ranks and inertias of this matrix function with respect to the variable matrix X. As a continuation, we consider in this paper the maximization and minimization problems on the ranks and inertias of (1.1) subject to (1.2). By using some known results on rank/inertia optimization of linear matrix functions and on generalized inverses of matrices, we shall first derive closed-form formulas for the extremal ranks/inertias of q₁(X) and q₂(X) subject to (1.2). Then we use them to derive necessary and sufficient conditions for the following quadratic equalities and inequalities in the Löwner partial ordering

XP₁X* = Q₁,  X*P₂X = Q₂,  (1.3)

XP₁X* > Q₁ (< Q₁, ≥ Q₁, ≤ Q₁),  X*P₂X > Q₂ (< Q₂, ≥ Q₂, ≤ Q₂)  (1.4)

to hold. In addition, we shall solve four optimization problems on q₁(X) and q₂(X) subject to (1.2) in the Löwner partial ordering, namely, to find X₁, X₂ ∈ C^{n×m} such that

AX₁ = B and q_i(X) ≤ q_i(X₁) s.t. AX = B, i = 1, 2,  (1.5)

AX₂ = B and q_i(X) ≥ q_i(X₂) s.t. AX = B, i = 1, 2  (1.6)

Addresses: yongge.tian@gmail.com; liyingliaoda@gmail.com
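The dependence of q₁(X) in (1.1) on the chosen solution of (1.2) can be illustrated numerically. The following Python/NumPy sketch (illustrative data only, not taken from the paper) parameterizes the solutions of AX = B as in (1.27) below and records the inertias attained by q₁(X):

```python
import numpy as np

rng = np.random.default_rng(0)

def inertia(H, tol=1e-8):
    """Return (i+, i-, i0) of a Hermitian matrix H from its eigenvalues."""
    w = np.linalg.eigvalsh((H + H.conj().T) / 2)
    return int((w > tol).sum()), int((w < -tol).sum()), int((abs(w) <= tol).sum())

n, m, p = 4, 3, 2
A = rng.standard_normal((p, n))
B = A @ rng.standard_normal((n, m))      # B chosen so that AX = B is consistent
P1 = np.diag([1.0, -1.0, 0.0])           # P1 = P1*
Q1 = np.diag([2.0, 1.0, -1.0, 0.0])      # Q1 = Q1*

Ad = np.linalg.pinv(A)                   # Moore-Penrose inverse of A
FA = np.eye(n) - Ad @ A                  # projector onto the null space of A

inertias = set()
for _ in range(25):
    X = Ad @ B + FA @ rng.standard_normal((n, m))   # a solution of AX = B
    assert np.allclose(A @ X, B)
    q1 = Q1 - X @ P1 @ X.conj().T                   # q1(X) = Q1 - X P1 X*
    inertias.add(inertia(q1))
print(inertias)   # in general the inertia of q1(X) depends on the chosen solution
```

The sampled inertias form only a subset of what is attainable; the closed-form extremal values are the subject of Sections 2 and 3.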

hold, respectively, where the four matrices q_i(X₁) and q_i(X₂), i = 1, 2, when they exist, are called the maximal and minimal matrices of q₁(X) and q₂(X) in (1.1) subject to (1.2), respectively.

The rank and inertia of a Hermitian matrix are two basic concepts in matrix theory for describing the dimension of the row/column space and the sign distribution of the eigenvalues of the matrix. They are well understood and easy to compute by the well-known elementary or congruence matrix operations, and they play an essential role in characterizing algebraic properties of Hermitian matrices. When considering max/min problems on the rank/inertia of a matrix function globally, we can separate them into the rank maximization problem (RMaxP), the rank minimization problem (RMinP), the inertia maximization problem (IMaxP), and the inertia minimization problem (IMinP), respectively. These max/min problems consist of determining the extremal ranks/inertias of the matrix function, and of finding the variable matrices at which the matrix function attains the extremal ranks/inertias. Just like the classic optimization problems on determinants, traces and norms of matrices, the problem of maximizing or minimizing the rank and inertia of a matrix can be regarded as a special topic in mathematical optimization theory, although it has not been classified clearly in the literature. Such optimization problems occur in regression analysis and control theory; see, e.g., [7, 8, 15, 23, 24, 40]. Because the rank/inertia of a matrix is a finite nonnegative integer, the extremal ranks/inertias of a matrix function always exist, no matter what domains of the variable entries in the matrix function are given.
The extremal ranks/inertias of a matrix function can be used to characterize some fundamental algebraic properties of the matrix function, for example,

(I) the maximal/minimal dimensions of the row and column spaces of the matrix function;
(II) nonsingularity of the matrix function when it is square;
(III) solvability of the corresponding matrix equation;
(IV) rank, inertia and range invariance of the matrix function;
(V) definiteness of the matrix function when it is Hermitian; etc.

Notice that these optimal properties of a matrix function can hardly be characterized by determinants, traces or norms of matrices. Hence, it is really necessary to pay attention to optimization problems on the ranks and inertias of matrices. Since the variable entries in a matrix function are often taken as continuous variables from some constraint sets, while the objective functions (the rank/inertia of the matrix function) take values only from a finite set of nonnegative integers, this kind of continuous-integer optimization problem cannot be solved by the various optimization methods for continuous or discrete cases. In fact, there is no rigorous mathematical theory for solving a general rank/inertia optimization problem, except for some special cases that can be solved by purely algebraic methods. It has been realized that rank/inertia optimization and completion problems for a general matrix function have deep connections with computational complexity, and were regarded as NP-hard; see, e.g., [5, 7, 8, 9, 11, 12, 13, 16, 22, 26, 28]. Fortunately, closed-form solutions of the rank/inertia optimization problems of q₁(X) and q₂(X) in (1.1), as well as many others, can be derived algebraically.

Throughout this paper, C^{m×n} and C^m_H stand for the sets of all m×n complex matrices and all m×m complex Hermitian matrices, respectively.
The symbols A*, r(A), R(A) and N(A) stand for the conjugate transpose, rank, range (column space) and null space of a matrix A ∈ C^{m×n}, respectively; I_m denotes the identity matrix of order m; [A, B] denotes a row block matrix consisting of A and B. We write A > 0 (A ≥ 0) if A is Hermitian positive definite (nonnegative definite). Two Hermitian matrices A and B of the same size are said to satisfy the inequality A > B (A ≥ B) in the Löwner partial ordering if A − B is positive definite (nonnegative definite). The Moore–Penrose inverse of A ∈ C^{m×n}, denoted by A†, is defined to be the unique solution X satisfying the four matrix equations

(i) AXA = A, (ii) XAX = X, (iii) (AX)* = AX, (iv) (XA)* = XA.

The symbols E_A = I_m − AA† and F_A = I_n − A†A stand for the two orthogonal projectors onto the null spaces N(A*) and N(A), respectively. The ranks of E_A and F_A are given by r(E_A) = m − r(A) and r(F_A) = n − r(A). As is well known, the eigenvalues of a Hermitian matrix A ∈ C^m_H are all real, and the inertia of A is defined to be the triplet

In(A) = { i₊(A), i₋(A), i₀(A) },
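The defining properties above are easy to check numerically. A short Python/NumPy sketch (illustrative, not part of the paper) verifies the four Penrose equations and the rank identities for E_A and F_A:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 3
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))   # a rank-2 matrix

Ad = np.linalg.pinv(A)            # Moore-Penrose inverse A^dagger

# the four Penrose equations (i)-(iv)
assert np.allclose(A @ Ad @ A, A)
assert np.allclose(Ad @ A @ Ad, Ad)
assert np.allclose((A @ Ad).conj().T, A @ Ad)
assert np.allclose((Ad @ A).conj().T, Ad @ A)

# orthogonal projectors E_A = I_m - A A^dagger and F_A = I_n - A^dagger A
EA = np.eye(m) - A @ Ad
FA = np.eye(n) - Ad @ A
r = np.linalg.matrix_rank(A)
assert np.allclose(EA @ A, 0) and np.allclose(A @ FA, 0)
assert np.linalg.matrix_rank(EA) == m - r    # r(E_A) = m - r(A)
assert np.linalg.matrix_rank(FA) == n - r    # r(F_A) = n - r(A)
```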

where i₊(A), i₋(A) and i₀(A) are the numbers of the positive, negative and zero eigenvalues of A counted with multiplicities, respectively. The two numbers i₊(A) and i₋(A), usually called the partial inertia of A, can easily be computed by elementary congruence matrix operations. For a matrix A ∈ C^m_H, we have r(A) = i₊(A) + i₋(A) and i₀(A) = m − r(A). Hence, once i₊(A) and i₋(A) are both determined, r(A) and i₀(A) are obtained as well. Note that the inertia of a Hermitian matrix divides the eigenvalues of the matrix into three sets on the real line. Hence the inertia of a Hermitian matrix can be used to characterize the definiteness of the matrix. The following results are obvious from the definitions of the rank/inertia of a matrix.

Lemma 1.1 Let A ∈ C^{m×m}, B ∈ C^{m×n}, and C ∈ C^m_H. Then,

(a) A is nonsingular if and only if r(A) = m.
(b) B = 0 if and only if r(B) = 0.
(c) C > 0 (C < 0) if and only if i₊(C) = m (i₋(C) = m).
(d) C ≥ 0 (C ≤ 0) if and only if i₋(C) = 0 (i₊(C) = 0).

Lemma 1.2 Let S be a set consisting of (square) matrices over C^{m×n}, and let H be a set consisting of Hermitian matrices over C^m_H. Then,

(a) S has a nonsingular matrix if and only if max_{X∈S} r(X) = m.
(b) All X ∈ S are nonsingular if and only if min_{X∈S} r(X) = m.
(c) 0 ∈ S if and only if min_{X∈S} r(X) = 0.
(d) S = {0} if and only if max_{X∈S} r(X) = 0.
(e) All X ∈ S have the same rank if and only if max_{X∈S} r(X) = min_{X∈S} r(X).
(f) H has a matrix X > 0 (X < 0) if and only if max_{X∈H} i₊(X) = m (max_{X∈H} i₋(X) = m).
(g) All X ∈ H satisfy X > 0 (X < 0) if and only if min_{X∈H} i₊(X) = m (min_{X∈H} i₋(X) = m).
(h) H has a matrix X ≥ 0 (X ≤ 0) if and only if min_{X∈H} i₋(X) = 0 (min_{X∈H} i₊(X) = 0).
(i) All X ∈ H satisfy X ≥ 0 (X ≤ 0) if and only if max_{X∈H} i₋(X) = 0 (max_{X∈H} i₊(X) = 0).
(j) All X ∈ H have the same positive index of inertia if and only if max_{X∈H} i₊(X) = min_{X∈H} i₊(X).
(k) All X ∈ H have the same negative index of inertia if and only if max_{X∈H} i₋(X) = min_{X∈H} i₋(X).
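The elementary facts in Lemma 1.1 can be checked directly from eigenvalues. The following Python/NumPy sketch (illustrative data) computes In(A) and verifies r(A) = i₊(A) + i₋(A) and the definiteness criteria of Lemma 1.1(c)–(d):

```python
import numpy as np

def inertia(H, tol=1e-8):
    """In(H) = (i+, i-, i0) for Hermitian H, via eigenvalues."""
    w = np.linalg.eigvalsh(H)
    return int((w > tol).sum()), int((w < -tol).sum()), int((abs(w) <= tol).sum())

C = np.diag([3.0, 1.0, 0.0, -2.0])
ip, im, i0 = inertia(C)
assert (ip, im, i0) == (2, 1, 1)
assert ip + im == np.linalg.matrix_rank(C)        # r(C) = i+(C) + i-(C)
assert i0 == C.shape[0] - np.linalg.matrix_rank(C)

# Lemma 1.1(c)/(d): definiteness is read off from the inertia
D = np.array([[2.0, 1.0], [1.0, 2.0]])            # eigenvalues 1 and 3
assert inertia(D)[0] == D.shape[0]                # i+(D) = m  <=>  D > 0
assert inertia(D)[1] == 0                         # i-(D) = 0  <=>  D >= 0
```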
These two lemmas show that once certain formulas for the (extremal) rank and the positive and negative indices of inertia of a Hermitian matrix are derived, we can use them to characterize equalities and inequalities for the Hermitian matrix. This basic algebraic method, referred to as the matrix rank/inertia method, is applicable to various Hermitian matrix functions that involve generalized inverses of matrices and variable matrices. The following are some known formulas for the ranks/inertias of partitioned matrices and of generalized inverses of matrices, which will be used in the latter part of this paper.

Lemma 1.3 ([25]) Let A ∈ C^{m×n}, B ∈ C^{m×k}, C ∈ C^{l×n}. Then,

r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A),  (1.7)

r[A; C] = r(A) + r(C F_A) = r(C) + r(A F_C),  (1.8)

r[A, B; C, 0] = r(B) + r(C) + r(E_B A F_C).  (1.9)

Lemma 1.4 ([31]) Let A ∈ C^m_H, B ∈ C^{m×n}, D ∈ C^n_H, and denote

M₁ = [A, B; B*, 0],  M₂ = [A, B; B*, D].

Then,

i±(M₁) = r(B) + i±(E_B A E_B),  (1.10)

r(M₁) = 2r(B) + r(E_B A E_B),  (1.11)

i±(M₂) = i±(A) + i±[0, E_A B; B*E_A, D − B*A†B],  (1.12)

r(M₂) = r(A) + r[0, E_A B; B*E_A, D − B*A†B].  (1.13)

In particular:

(a) The partial inertias of M₂ satisfy the inequalities

i±(M₂) ≥ i±(A) + i±(D − B*A†B) ≥ i±(A).  (1.14)

(b) If A ≥ 0, then

i₊(M₁) = r[A, B], i₋(M₁) = r(B), r(M₁) = r[A, B] + r(B).  (1.15)

(c) If A ≤ 0, then

i₊(M₁) = r(B), i₋(M₁) = r[A, B], r(M₁) = r[A, B] + r(B).  (1.16)

(d) If R(B) ⊆ R(A), then

i±(M₂) = i±(A) + i±(D − B*A†B), r(M₂) = r(A) + r(D − B*A†B).  (1.17)

(e) If R(B) ∩ R(A) = {0} and R(B*) ∩ R(D) = {0}, then

i±(M₂) = i±(A) + i±(D) + r(B), r(M₂) = r(A) + 2r(B) + r(D).  (1.18)

Some formulas derived from (1.10) and (1.11) are

i±(F_P A F_P) = i±[A, P*; P, 0] − r(P),  (1.19)

r(F_P A F_P) = r[A, P*; P, 0] − 2r(P),  (1.20)

i±[E_Q A E_Q, E_Q B; B*E_Q, D] = i±[A, B, Q; B*, D, 0; Q*, 0, 0] − r(Q),  (1.21)

r[E_Q A E_Q, E_Q B; B*E_Q, D] = r[A, B, Q; B*, D, 0; Q*, 0, 0] − 2r(Q).  (1.22)

We shall use them to simplify the inertias of block Hermitian matrices involving Moore–Penrose inverses of matrices.
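These block-matrix identities are exact, so they can be tested numerically on random data. A Python/NumPy sketch (illustrative; the test matrices are arbitrary) checks (1.7), (1.9) and (1.10):

```python
import numpy as np

rng = np.random.default_rng(2)
r_ = np.linalg.matrix_rank

def inertia(H, tol=1e-8):
    w = np.linalg.eigvalsh(H)
    return int((w > tol).sum()), int((w < -tol).sum())

m, n, k, l = 4, 3, 2, 2
A = rng.standard_normal((m, 1)) @ rng.standard_normal((1, n))  # rank-1, so the corrections are nontrivial
B = rng.standard_normal((m, k))
C = rng.standard_normal((l, n))
EA = np.eye(m) - A @ np.linalg.pinv(A)
EB = np.eye(m) - B @ np.linalg.pinv(B)
FC = np.eye(n) - np.linalg.pinv(C) @ C

# (1.7): r[A, B] = r(A) + r(E_A B)
assert r_(np.hstack([A, B])) == r_(A) + r_(EA @ B)

# (1.9): r[A, B; C, 0] = r(B) + r(C) + r(E_B A F_C)
M = np.block([[A, B], [C, np.zeros((l, k))]])
assert r_(M) == r_(B) + r_(C) + r_(EB @ A @ FC)

# (1.10): i_pm[H, B; B*, 0] = r(B) + i_pm(E_B H E_B) for Hermitian H
H = np.diag([1.0, -2.0, 0.0, 3.0])
M1 = np.block([[H, B], [B.T, np.zeros((k, k))]])
ip, im = inertia(M1)
jp, jm = inertia(EB @ H @ EB)
assert (ip, im) == (r_(B) + jp, r_(B) + jm)
```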

Lemma 1.5 ([20]) Let A ∈ C^m_H, B ∈ C^{m×n} and C ∈ C^{p×m} be given. Then,

max_{X∈C^{n×p}} r(A − BXC − (BXC)*) = min{ r[A, B, C*], r[A, B; C, 0], r[A, C*; B*, 0] },  (1.23)

min_{X∈C^{n×p}} r(A − BXC − (BXC)*) = 2r[A, B, C*] + max{ s₊ + s₋, t₊ + t₋, s₊ + t₋, s₋ + t₊ },  (1.24)

max_{X∈C^{n×p}} i±(A − BXC − (BXC)*) = min{ i±[A, B; B*, 0], i±[A, C*; C, 0] },  (1.25)

min_{X∈C^{n×p}} i±(A − BXC − (BXC)*) = r[A, B, C*] + max{ s±, t± },  (1.26)

where

s± = i±[A, B; B*, 0] − r[A, B, C*; B*, 0, 0],  t± = i±[A, C*; C, 0] − r[A, B, C*; C, 0, 0].

The right-hand sides of (1.23)–(1.26) contain only the ranks/inertias of certain block matrices formed from the given matrices in the linear matrix function. Hence, their simplification and application are quite easy in most situations. As fundamental formulas, (1.23)–(1.26) can be widely used for finding the extremal ranks/inertias of various matrix functions whose variable matrices occur in symmetric patterns.

Lemma 1.6 (a) ([27]) The matrix equation in (1.2) has a solution if and only if AA†B = B. In this case, the general solution to (1.2) can be written in the parametric form

X = A†B + F_A V,  (1.27)

where V ∈ C^{n×m} is arbitrary. The solution to (1.2) is unique if and only if r(A) = n.

(b) ([30]) In this case,

max_{AX=B} r(X) = min{ m, n + r(B) − r(A) },  (1.28)

min_{AX=B} r(X) = r(B).  (1.29)

In order to derive explicit formulas for the ranks of block matrices, we use the following three types of elementary block matrix operation (EBMOs, for short):

(I) interchange two block rows (columns) in a block matrix;
(II) multiply a block row (column) by a nonsingular matrix from the left-hand (right-hand) side in a block matrix;
(III) add a block row (column) multiplied by a matrix from the left-hand (right-hand) side to another block row (column).
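Lemma 1.6 is also easy to exercise numerically. The Python/NumPy sketch below (illustrative data) checks the solvability test AA†B = B, the parametric form (1.27), and the minimal-rank statement (1.29), which is attained at X = A†B since r(A†B) = r(B) whenever (1.2) is consistent:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, m = 2, 4, 3
A = rng.standard_normal((p, n))
B = A @ rng.standard_normal((n, m))     # consistent right-hand side

Ad = np.linalg.pinv(A)
assert np.allclose(A @ Ad @ B, B)       # solvability test of Lemma 1.6(a): A A^dagger B = B

FA = np.eye(n) - Ad @ A
for _ in range(10):
    V = rng.standard_normal((n, m))
    X = Ad @ B + FA @ V                 # the general solution (1.27)
    assert np.allclose(A @ X, B)

# (1.29): the minimal rank of a solution is r(B), attained at X = A^dagger B
assert np.linalg.matrix_rank(Ad @ B) == np.linalg.matrix_rank(B)
```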
In order to derive explicit formulas for the inertia of a block Hermitian matrix, we use the following three types of elementary block congruence matrix operation (EBCMOs, for short) for a block Hermitian matrix with the same row and column partition:

(IV) interchange the ith and jth block rows, while interchanging the ith and jth block columns in the block Hermitian matrix;
(V) multiply the ith block row by a nonsingular matrix P from the left-hand side, while multiplying the ith block column by P* from the right-hand side in the block Hermitian matrix;
(VI) add the ith block row multiplied by a matrix P from the left-hand side to the jth block row, while adding the ith block column multiplied by P* from the right-hand side to the jth block column in the block Hermitian matrix.

These three types of operation are in fact equivalent to a congruence transformation A → P*AP of a Hermitian matrix A, where the nonsingular matrix P collects the elementary block operations applied to the block columns of A, and P* collects those applied to the block rows of A. Some applications of the EBCMOs in establishing formulas for the inertias of Hermitian matrices can be found, e.g., in [31, 32].
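That a chain of EBCMOs preserves the inertia is exactly Sylvester's law of inertia: congruence A → P*AP with nonsingular P leaves In(A) unchanged. A small Python/NumPy sketch (illustrative data) confirms this:

```python
import numpy as np

def inertia(H, tol=1e-8):
    w = np.linalg.eigvalsh(H)
    return int((w > tol).sum()), int((w < -tol).sum())

rng = np.random.default_rng(4)
A = np.diag([2.0, -1.0, 0.0, 3.0])                 # inertia (2, 1, 1)

Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
P = Q @ np.diag([1.0, 2.0, 0.5, 3.0])              # a nonsingular, non-orthogonal P

# a chain of EBCMOs amounts to A -> P* A P, which leaves the inertia unchanged
assert inertia(P.conj().T @ A @ P) == inertia(A) == (2, 1)
```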

2 Rank/inertia optimization of Q₁ − XP₁X* subject to AX = B

Note that

[I_n, X; 0, I_m] [Q₁ − XP₁X*, 0; 0, P₁] [I_n, X; 0, I_m]* = [Q₁, XP₁; P₁X*, P₁].

It is easy to get from this congruence the following equalities

i±[Q₁, XP₁; P₁X*, P₁] = i±(Q₁ − XP₁X*) + i±(P₁),  (2.1)

or equivalently,

i±(Q₁ − XP₁X*) = i±[Q₁, XP₁; P₁X*, P₁] − i±(P₁).  (2.2)

Note that

[Q₁, XP₁; P₁X*, P₁] = [Q₁, 0; 0, P₁] + [X; 0][0, P₁] + ([X; 0][0, P₁])*

is a linear matrix function in X. Hence, we are able to derive the extremal ranks/inertias of Q₁ − XP₁X* from Lemma 1.5. In fact, the rank/inertia of any nonlinear Hermitian matrix function can in this way be converted to the rank/inertia of a certain linear Hermitian matrix function.

Theorem 2.1 Let q₁(X) be as given in (1.1) and assume that (1.2) is consistent. Then,

(a) The maximal rank of q₁(X) subject to (1.2) is

max_{AX=B} r(q₁(X)) = min{ 2n + r(AQ₁A* − BP₁B*) − 2r(A), n + r[AQ₁, BP₁] − r(A), r(Q₁) + r(P₁) }.  (2.3)

(b) The minimal rank of q₁(X) subject to (1.2) is

min_{AX=B} r(q₁(X)) = max{ s₁, s₂, s₃, s₄ },  (2.4)

where

s₁ = r(AQ₁A* − BP₁B*) + 2r[AQ₁, BP₁] − 2r[AQ₁A*, BP₁],
s₂ = 2r[AQ₁, BP₁] + r(Q₁) − r(P₁) − 2r(AQ₁),
s₃ = 2r[AQ₁, BP₁] + i₊(AQ₁A* − BP₁B*) − r[AQ₁A*, BP₁] − i₋(P₁) + i₋(Q₁) − r(AQ₁),
s₄ = 2r[AQ₁, BP₁] + i₋(AQ₁A* − BP₁B*) − r[AQ₁A*, BP₁] − i₊(P₁) + i₊(Q₁) − r(AQ₁).

(c) The maximal partial inertias of q₁(X) subject to (1.2) are

max_{AX=B} i±(q₁(X)) = min{ n + i±(AQ₁A* − BP₁B*) − r(A), i±(Q₁) + i∓(P₁) }.  (2.5)

(d) The minimal partial inertias of q₁(X) subject to (1.2) are

min_{AX=B} i±(q₁(X)) = max{ i±(AQ₁A* − BP₁B*) + r[AQ₁, BP₁] − r[AQ₁A*, BP₁], r[AQ₁, BP₁] + i±(Q₁) − i±(P₁) − r(AQ₁) }.  (2.6)

Proof. Substituting (1.27) into Q₁ − XP₁X* yields

Q₁ − XP₁X* = Q₁ − (A†B + F_A V)P₁(A†B + F_A V)*.  (2.7)

Applying (2.2) to (2.7) gives the following result

i±(Q₁ − (A†B + F_A V)P₁(A†B + F_A V)*)
= i±[Q₁, (A†B + F_A V)P₁; P₁(A†B + F_A V)*, P₁] − i±(P₁)
= i±( [Q₁, A†BP₁; P₁B*(A†)*, P₁] + [F_A; 0]V[0, P₁] + ([F_A; 0]V[0, P₁])* ) − i±(P₁).  (2.8)

7 Denote q(v ) = Applying Lemma 1.4 to (2.9) yields where Q 1 A P 1 P 1 (A ) P 1 FA + V, P 1 + V F A,. (2.9) P1 max rq(v ) = min{ r(m), r(m 1 ), r(m 2 ) }, (2.1) V min rq(v ) = 2r(M) + max{ s + + s, t + + t, s + + t, s + t + }, (2.11) V max i ± q(v ) = min{ i ± (M 1 ), i ± (M 2 )}, (2.12) V min i ±q(v ) = r(m) + max{ s ±, t ± }, (2.13) V Q M = 1 A P 1 F A P 1 A, P 1 P 1 Q 1 A P 1 F A Q 1 A P 1 M 1 = P 1 A P 1, M 2 = P 1 A P 1 P 1, F A P 1 Q 1 A P 1 F A Q 1 A P 1 F A N 1 = P 1 (A ) P 1 P 1, N 2 = P 1 (A ) P 1 P 1 F A P 1 and s ± = i ± (M 1 ) r(n 1 ), t ± = i ± (M 2 ) r(n 2 ). Applying (1.19) (1.22), and simplifying by EMOs and ECMOs, we obtain r(m) = Q 1 A Q1 A P 1 F A + r(p1 ) = r P 1 I n + r(p A 1 ) r(a) I = r n + r(p AQ 1 P 1 1 ) r(a) = n + r(p 1 ) + r AQ 1, P 1 r(a), (2.14) Q1 A r(n 1 ) = r P 1 F A + r(p F A 1 ) = r 1 A P 1 I n I n A + r(p 1 ) 2r(A) A I n = r I n + r(p 1 ) 2r(A) P 1 AQ 1 A = 2n + r(p 1 ) 2r(A) + r P 1, AQ 1 A, (2.15) r(n 2 ) = n + 2r(P 1 ) + r(aq 1 ) r(a), (2.16) Q 1 A P 1 I n i ± (M 1 ) = i ± P 1 (A ) P 1 I n A r(a) A Q 1 A P 1 I n Q 1 A = i ± P 1 (A ) P 1 P 1 I n r(a) AQ 1 P 1 AQ 1 A I n = i ± P 1 I n r(a) AQ 1 A P 1 = n + i ± (P 1 ) + i ± ( AQ 1 A P 1 ) r(a), (2.17) Q 1 A P 1 i ± (M 2 ) = i ± P 1 (A ) P 1 P 1 = i ± (Q 1 ) + r(p 1 ), (2.18) P 1, 7

and

s± = i±(M₁) − r(N₁) = r(A) + i±(AQ₁A* − BP₁B*) − r[AQ₁A*, BP₁] − i∓(P₁) − n,  (2.19)

t± = i±(M₂) − r(N₂) = r(A) + i±(Q₁) − r(AQ₁) − r(P₁) − n.  (2.20)

Substituting (2.14)–(2.20) into (2.10)–(2.13), and then (2.10)–(2.13) into (2.8), we obtain (2.3)–(2.6).

Many consequences can be derived from Theorem 2.1.

Corollary 2.2 Let q₁(X) be as given in (1.1) and assume that (1.2) is consistent. Then,

(a) AX = B has a solution such that Q₁ − XP₁X* is nonsingular if and only if

r(AQ₁A* − BP₁B*) ≥ 2r(A) − n, r[AQ₁, BP₁] ≥ r(A), r(Q₁) + r(P₁) ≥ n.  (2.21)

(b) Q₁ − XP₁X* is nonsingular for all solutions of AX = B if and only if one of s₁, s₂, s₃, s₄ in (2.4) equals n.

(c) AX = B and XP₁X* = Q₁ have a common solution if and only if

AQ₁A* = BP₁B*, R(AQ₁) ⊆ R(BP₁), i±(Q₁) ≤ i±(P₁).  (2.22)

(d) XP₁X* = Q₁ holds for all solutions of AX = B if and only if r(A) = n and AQ₁A* = BP₁B*, or Q₁ = 0 and P₁ = 0.

(e) AX = B has a solution such that Q₁ − XP₁X* > 0 if and only if

i₊(AQ₁A* − BP₁B*) ≥ r(A) and i₊(Q₁) + i₋(P₁) ≥ n.  (2.23)

(f) AX = B has a solution such that Q₁ − XP₁X* < 0 if and only if

i₋(AQ₁A* − BP₁B*) ≥ r(A) and i₋(Q₁) + i₊(P₁) ≥ n.  (2.24)

(g) Q₁ − XP₁X* > 0 holds for all solutions of AX = B if and only if

i₊(AQ₁A* − BP₁B*) + r[AQ₁, BP₁] = n + r[AQ₁A*, BP₁]  (2.25)

or

r[AQ₁, BP₁] + i₊(Q₁) = n + i₊(P₁) + r(AQ₁).  (2.26)

(h) Q₁ − XP₁X* < 0 holds for all solutions of AX = B if and only if

i₋(AQ₁A* − BP₁B*) + r[AQ₁, BP₁] = n + r[AQ₁A*, BP₁]  (2.27)

or

r[AQ₁, BP₁] + i₋(Q₁) = n + i₋(P₁) + r(AQ₁).  (2.28)

(i) AX = B has a solution such that Q₁ − XP₁X* ≥ 0 if and only if

AQ₁A* ≥ BP₁B*, r[AQ₁A*, BP₁] = r[AQ₁, BP₁], r[AQ₁, BP₁] + i₋(Q₁) ≤ i₋(P₁) + r(AQ₁).  (2.29)

(j) AX = B has a solution such that Q₁ − XP₁X* ≤ 0 if and only if

AQ₁A* ≤ BP₁B*, r[AQ₁A*, BP₁] = r[AQ₁, BP₁], r[AQ₁, BP₁] + i₊(Q₁) ≤ i₊(P₁) + r(AQ₁).  (2.30)

(k) Q₁ − XP₁X* ≥ 0 holds for all solutions of AX = B if and only if r(A) = n and AQ₁A* ≥ BP₁B*, or Q₁ ≥ 0 and P₁ ≤ 0;

(l) Q₁ − XP₁X* ≤ 0 holds for all solutions of AX = B if and only if r(A) = n and AQ₁A* ≤ BP₁B*, or Q₁ ≤ 0 and P₁ ≥ 0.

Corollary 2.3 Let q₁(X) be as given in (1.1) with P₁ > 0 and Q₁ > 0, and assume that (1.2) is consistent. Then,

max_{AX=B} r(Q₁ − XP₁X*) = min{ n, 2n + r(AQ₁A* − BP₁B*) − 2r(A) },  (2.31)

min_{AX=B} r(Q₁ − XP₁X*) = max{ r(AQ₁A* − BP₁B*), i₋(AQ₁A* − BP₁B*) + n − m },  (2.32)

max_{AX=B} i±(Q₁ − XP₁X*) = min{ n + i±(AQ₁A* − BP₁B*) − r(A), i±(I_n) + i∓(I_m) },  (2.33)

min_{AX=B} i±(Q₁ − XP₁X*) = max{ i±(AQ₁A* − BP₁B*), i±(I_n) − i±(I_m) }.  (2.34)

Hence,

(a) AX = B has a solution such that Q₁ − XP₁X* is nonsingular if and only if r(AQ₁A* − BP₁B*) ≥ 2r(A) − n.
(b) Q₁ − XP₁X* is nonsingular for all solutions of AX = B if and only if r(AQ₁A* − BP₁B*) = n or i₋(AQ₁A* − BP₁B*) = m.
(c) AX = B and XP₁X* = Q₁ have a common solution if and only if AQ₁A* = BP₁B* and m ≥ n.
(d) XP₁X* = Q₁ holds for all solutions of AX = B if and only if AQ₁A* = BP₁B* and r(A) = n.
(e) AX = B has a solution such that Q₁ > XP₁X* if and only if i₊(AQ₁A* − BP₁B*) = r(A); it has a solution such that Q₁ < XP₁X* if and only if i₋(AQ₁A* − BP₁B*) = r(A) and m ≥ n.
(f) Q₁ > XP₁X* holds for all solutions of AX = B if and only if i₊(AQ₁A* − BP₁B*) = n; Q₁ < XP₁X* holds for all solutions of AX = B if and only if i₋(AQ₁A* − BP₁B*) = n.
(g) AX = B has a solution such that Q₁ ≥ XP₁X* if and only if AQ₁A* ≥ BP₁B*; it has a solution such that Q₁ ≤ XP₁X* if and only if AQ₁A* ≤ BP₁B* and n ≤ m.
(i) Q₁ ≥ XP₁X* holds for all solutions of AX = B if and only if AQ₁A* ≥ BP₁B* and r(A) = n; Q₁ ≤ XP₁X* holds for all solutions of AX = B if and only if AQ₁A* ≤ BP₁B* and r(A) = n.

In particular, we have the following results on the rank/inertia of I_n − XX* and the corresponding equality and inequalities for the solutions of the matrix equation in (1.2).

Corollary 2.4 Assume that (1.2) is consistent.
Then,

max_{AX=B} r(I_n − XX*) = min{ n, 2n + r(AA* − BB*) − 2r(A) },  (2.35)

min_{AX=B} r(I_n − XX*) = max{ r(AA* − BB*), i₋(AA* − BB*) + n − m },  (2.36)

max_{AX=B} i±(I_n − XX*) = min{ n + i±(AA* − BB*) − r(A), i±(I_n) + i∓(I_m) },  (2.37)

min_{AX=B} i±(I_n − XX*) = max{ i±(AA* − BB*), i±(I_n) − i±(I_m) }.  (2.38)

Hence,

(a) AX = B has a solution such that I_n − XX* is nonsingular if and only if r(AA* − BB*) ≥ 2r(A) − n.
(b) I_n − XX* is nonsingular for all solutions of AX = B if and only if r(AA* − BB*) = n or i₋(AA* − BB*) = m.
(c) AX = B has a solution such that XX* = I_n if and only if AA* = BB* and m ≥ n.
(d) XX* = I_n holds for all solutions of AX = B if and only if AA* = BB* and r(A) = n.

(f) AX = B has a solution such that XX* < I_n if and only if i₊(AA* − BB*) = r(A); it has a solution such that XX* > I_n if and only if i₋(AA* − BB*) = r(A) and m ≥ n.
(g) XX* < I_n holds for all solutions of AX = B if and only if i₊(AA* − BB*) = n; XX* > I_n holds for all solutions of AX = B if and only if i₋(AA* − BB*) = n.
(h) AX = B has a solution such that XX* ≤ I_n if and only if AA* ≥ BB*; it has a solution such that XX* ≥ I_n if and only if AA* ≤ BB* and n ≤ m.
(i) XX* ≤ I_n holds for all solutions of AX = B if and only if AA* ≥ BB* and r(A) = n; XX* ≥ I_n holds for all solutions of AX = B if and only if AA* ≤ BB* and r(A) = n.

In the remaining part of this section, we solve the optimization problems in (1.5) and (1.6) for i = 1, i.e., to find X₁, X₂ ∈ C^{n×m} such that

AX₁ = B and q₁(X) ≤ q₁(X₁) s.t. AX = B,  (2.39)

AX₂ = B and q₁(X) ≥ q₁(X₂) s.t. AX = B  (2.40)

hold, respectively.

Theorem 2.5 Assume that (1.2) is consistent, and that its solution is not unique, namely, r(A) < n. Then,

(a) There exists an X₁ ∈ C^{n×m} such that (2.39) holds if and only if

P₁ ≥ 0 and BP₁ = 0.  (2.41)

In this case, the general matrix satisfying (2.39) is X₁ = A†B + F_A U F_{P₁}, where U ∈ C^{n×m} is arbitrary, and the maximal matrix of q₁(X) in the Löwner partial ordering is q₁(X₁) = Q₁.

(b) There exists an X₂ ∈ C^{n×m} such that (2.40) holds if and only if

P₁ ≤ 0 and BP₁ = 0.  (2.42)

In this case, the general matrix satisfying (2.40) is X₂ = A†B + F_A U F_{P₁}, where U ∈ C^{n×m} is arbitrary, and the minimal matrix of q₁(X) in the Löwner partial ordering is q₁(X₂) = Q₁.

Proof. If r(A) = n, the solution to (1.2) is unique by Lemma 1.6(a), so that q₁(X) subject to (1.2) is unique as well. Under r(A) < n, let

h_i(X) := q₁(X) − q₁(X_i) = X_iP₁X_i* − XP₁X*, i = 1, 2.

Then, (2.39) and (2.40) are equivalent to

h₁(X) ≤ 0 s.t. AX = B, and AX₁ = B,  (2.43)

h₂(X) ≥ 0 s.t. AX = B, and AX₂ = B.
(2.44)

Under r(A) < n, we see from Corollary 2.2(k) and (l) that (2.43) and (2.44) are equivalent to

X₁P₁X₁* ≤ 0, P₁ ≥ 0, AX₁ = B,  (2.45)

X₂P₁X₂* ≥ 0, P₁ ≤ 0, AX₂ = B,  (2.46)

both of which are further equivalent to

X₁P₁ = 0, AX₁ = B, P₁ ≥ 0,  (2.47)

X₂P₁ = 0, AX₂ = B, P₁ ≤ 0.  (2.48)

The two equations in (2.47) have a common solution X₁ if and only if BP₁ = 0. In this case, the general solution to (2.47) is X₁ = A†B + F_A U F_{P₁}. Substituting it into (1.1) gives q₁(X₁) = Q₁, establishing (a). Result (b) can be shown similarly.
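Theorem 2.5(a) can be exercised numerically. The Python/NumPy sketch below (illustrative data; the conditions are as reconstructed in (2.41)) builds P₁ ≥ 0 and a consistent B with BP₁ = 0, checks that X₁ = A†B + F_A U F_{P₁} solves (1.2) with q₁(X₁) = Q₁, and that q₁(X) ≤ Q₁ for sampled solutions:

```python
import numpy as np

rng = np.random.default_rng(5)
p, n, m = 2, 4, 3
A = rng.standard_normal((p, n))
Ad = np.linalg.pinv(A)
FA = np.eye(n) - Ad @ A                      # F_A; here r(A) = 2 < n = 4

P1 = np.diag([1.0, 1.0, 0.0])                # P1 >= 0
# build a consistent B with B P1 = 0: the first two columns of B must vanish
X0 = np.zeros((n, m))
X0[:, 0] = FA @ rng.standard_normal(n)       # these columns of A X0 are zero
X0[:, 1] = FA @ rng.standard_normal(n)
X0[:, 2] = rng.standard_normal(n)
B = A @ X0
assert np.allclose(B @ P1, 0)                # condition (2.41)

Q1 = np.diag([1.0, -2.0, 0.5, 0.0])
FP1 = np.eye(m) - np.linalg.pinv(P1) @ P1
X1 = Ad @ B + FA @ rng.standard_normal((n, m)) @ FP1   # X1 = A^dagger B + F_A U F_{P1}
assert np.allclose(A @ X1, B)
assert np.allclose(Q1 - X1 @ P1 @ X1.T, Q1)  # q1(X1) = Q1, the maximal matrix

for _ in range(20):                          # and q1(X) <= Q1 for every sampled solution
    X = Ad @ B + FA @ rng.standard_normal((n, m))
    gap = X @ P1 @ X.T                       # Q1 - q1(X) = X P1 X* >= 0
    assert np.linalg.eigvalsh(gap).min() > -1e-9
```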

3 Rank/inertia optimization of Q₂ − X*P₂X subject to AX = B

Theorem 3.1 Let q₂(X) be as given in (1.1) and assume that (1.2) is consistent. Also denote

T₁ = [Q₂, B*, 0; B, 0, A; 0, A*, −P₂],  T₂ = [0, A; A*, −P₂].  (3.1)

Then,

max_{AX=B} r(q₂(X)) = min{ m, r(T₁) − 2r(A) },  (3.2)

min_{AX=B} r(q₂(X)) = max{ 0, r(T₁) − 2r(T₂) + 2r(A), i₊(T₁) − r(T₂) + r(A), i₋(T₁) − r(T₂) + r(A) },  (3.3)

max_{AX=B} i±(q₂(X)) = min{ m, i±(T₁) − r(A) },  (3.4)

min_{AX=B} i±(q₂(X)) = max{ 0, i±(T₁) − r(T₂) + r(A) }.  (3.5)

Proof. Substituting (1.27) into Q₂ − X*P₂X yields

Q₂ − X*P₂X = Q₂ − (A†B + F_A V)* P₂ (A†B + F_A V).  (3.6)

Applying (2.2) to (3.6) gives the following result

i±(Q₂ − (A†B + F_A V)*P₂(A†B + F_A V)) = i±[Q₂, (A†B + F_A V)*P₂; P₂(A†B + F_A V), P₂] − i±(P₂).  (3.7)

Denote

q(V) = [Q₂, B*(A†)*P₂; P₂A†B, P₂] + [0; P₂F_A]V[I_m, 0] + ([0; P₂F_A]V[I_m, 0])*.  (3.8)

Applying Lemma 1.5 to (3.8) yields

max_V r(q(V)) = min{ r(M), r(M₁), r(M₂) },  (3.9)

min_V r(q(V)) = 2r(M) + max{ s₊ + s₋, t₊ + t₋, s₊ + t₋, s₋ + t₊ },  (3.10)

max_V i±(q(V)) = min{ i±(M₁), i±(M₂) },  (3.11)

min_V i±(q(V)) = r(M) + max{ s±, t± },  (3.12)

where

M = [Q₂, B*(A†)*P₂, 0, I_m; P₂A†B, P₂, P₂F_A, 0],
M₁ = [Q₂, B*(A†)*P₂, 0; P₂A†B, P₂, P₂F_A; 0, F_AP₂, 0],
M₂ = [Q₂, B*(A†)*P₂, I_m; P₂A†B, P₂, 0; I_m, 0, 0],
N₁ = [Q₂, B*(A†)*P₂, 0, I_m; P₂A†B, P₂, P₂F_A, 0; 0, F_AP₂, 0, 0],
N₂ = [Q₂, B*(A†)*P₂, 0, I_m; P₂A†B, P₂, P₂F_A, 0; I_m, 0, 0, 0],

and s± = i±(M₁) − r(N₁), t± = i±(M₂) − r(N₂). Applying (1.19)–(1.22), and simplifying by EBMOs and EBCMOs, we obtain

r(M) = m + r(P₂),  (3.13)

i±(M₁) = i±(P₂) + i±(T₁) − r(A),  (3.14)

i±(M₂) = m + i±(P₂),  (3.15)

r(N₁) = m + r(T₂) + r(P₂) − 2r(A),  (3.16)

r(N₂) = 2m + r(P₂),  (3.17)

and

s± = i±(M₁) − r(N₁) = i±(T₁) − r(T₂) + r(A) − i∓(P₂) − m,  (3.18)

t± = i±(M₂) − r(N₂) = −i∓(P₂) − m.  (3.19)

Substituting (3.13)–(3.19) into (3.9)–(3.12), and then substituting (3.9)–(3.12) into (3.7), we obtain (3.2)–(3.5).

Corollary 3.2 Let q₂(X) be as given in (1.1), let T₁ and T₂ be as given in (3.1), and assume that (1.2) is consistent. Then,

(a) AX = B has a solution such that Q₂ − X*P₂X is nonsingular if and only if r(T₁) ≥ 2r(A) + m.

(b) AX = B has a solution such that Q₂ = X*P₂X if and only if

i₊(T₁) ≤ r(T₂) − r(A) and i₋(T₁) ≤ r(T₂) − r(A).  (3.20)

(c) AX = B has a solution such that Q₂ > X*P₂X if and only if i₊(T₁) ≥ m + r(A); it has a solution such that Q₂ < X*P₂X if and only if i₋(T₁) ≥ m + r(A).

(d) AX = B has a solution such that Q₂ ≥ X*P₂X if and only if i₋(T₁) ≤ r(T₂) − r(A); it has a solution such that Q₂ ≤ X*P₂X if and only if i₊(T₁) ≤ r(T₂) − r(A).

(e) Q₂ ≥ X*P₂X holds for all solutions of AX = B if and only if i₋(T₁) = r(A).

(f) Q₂ ≤ X*P₂X holds for all solutions of AX = B if and only if i₊(T₁) = r(A).

Corollary 3.3 Let q₂(X) be as given in (1.1) with P₂ > 0 and Q₂ > 0, and assume that (1.2) is consistent. Then,

max_{AX=B} r(Q₂ − X*P₂X) = min{ m, m + n + r(AP₂⁻¹A* − BQ₂⁻¹B*) − 2r(A) },  (3.21)

min_{AX=B} r(Q₂ − X*P₂X) = max{ r(AP₂⁻¹A* − BQ₂⁻¹B*) + m − n, i₋(AP₂⁻¹A* − BQ₂⁻¹B*) },  (3.22)

max_{AX=B} i₊(Q₂ − X*P₂X) = min{ m, m + i₊(AP₂⁻¹A* − BQ₂⁻¹B*) − r(A) },  (3.23)

max_{AX=B} i₋(Q₂ − X*P₂X) = min{ m, n + i₋(AP₂⁻¹A* − BQ₂⁻¹B*) − r(A) },  (3.24)

min_{AX=B} i₊(Q₂ − X*P₂X) = max{ 0, i₊(AP₂⁻¹A* − BQ₂⁻¹B*) + m − n },  (3.25)

min_{AX=B} i₋(Q₂ − X*P₂X) = i₋(AP₂⁻¹A* − BQ₂⁻¹B*).  (3.26)

Hence,

(a) AX = B has a solution such that Q₂ − X*P₂X is nonsingular if and only if r(AP₂⁻¹A* − BQ₂⁻¹B*) ≥ 2r(A) − n.
(b) AX = B has a solution such that Q₂ = X*P₂X if and only if r(AP₂⁻¹A* − BQ₂⁻¹B*) ≤ n − m and AP₂⁻¹A* ≥ BQ₂⁻¹B*.
(c) AX = B has a solution such that Q₂ > X*P₂X if and only if i₊(AP₂⁻¹A* − BQ₂⁻¹B*) ≥ r(A).
(d) AX = B has a solution such that Q₂ < X*P₂X if and only if i₋(AP₂⁻¹A* − BQ₂⁻¹B*) ≥ r(A) + m − n.
(e) AX = B has a solution such that Q₂ ≥ X*P₂X if and only if AP₂⁻¹A* ≥ BQ₂⁻¹B*.
(f) AX = B has a solution such that Q₂ ≤ X*P₂X if and only if i₊(AP₂⁻¹A* − BQ₂⁻¹B*) ≤ n − m.

Corollary 3.4 Assume that (1.2) is consistent. Then,

max_{AX=B} r(I_m − X*X) = min{ m, m + n + r(AA* − BB*) − 2r(A) },  (3.27)

min_{AX=B} r(I_m − X*X) = max{ r(AA* − BB*) + m − n, i₋(AA* − BB*) },  (3.28)

max_{AX=B} i₊(I_m − X*X) = min{ m, m + i₊(AA* − BB*) − r(A) },  (3.29)

max_{AX=B} i₋(I_m − X*X) = min{ m, n + i₋(AA* − BB*) − r(A) },  (3.30)

min_{AX=B} i₊(I_m − X*X) = max{ 0, i₊(AA* − BB*) + m − n },  (3.31)

min_{AX=B} i₋(I_m − X*X) = i₋(AA* − BB*).  (3.32)

Hence,

(a) AX = B has a solution such that I_m − X*X is nonsingular if and only if r(AA* − BB*) ≥ 2r(A) − n.
(b) AX = B has a solution such that X*X = I_m if and only if r(AA* − BB*) ≤ n − m and AA* ≥ BB*.
(c) AX = B has a solution such that X*X < I_m if and only if i₊(AA* − BB*) ≥ r(A).
(d) AX = B has a solution such that X*X > I_m if and only if i₋(AA* − BB*) ≥ r(A) + m − n.
(e) AX = B has a solution such that X*X ≤ I_m if and only if AA* ≥ BB*.

(f) AX = B has a solution such that X*X ≥ I_m if and only if i₊(AA* − BB*) ≤ n − m.

In the remaining part of this section, we solve the optimization problems in (1.5) and (1.6) for i = 2, i.e., to find X₁, X₂ ∈ C^{n×m} such that

AX₁ = B and q₂(X) ≤ q₂(X₁) s.t. AX = B,  (3.33)

AX₂ = B and q₂(X) ≥ q₂(X₂) s.t. AX = B  (3.34)

hold, respectively.

Theorem 3.5 Assume that (1.2) is consistent, and that its solution is not unique, namely, r(A) < n. Then,

(a) There exists an X₁ ∈ C^{n×m} such that (3.33) holds if and only if

F_A P₂ F_A ≥ 0 and R([0; B]) ⊆ R([P₂, A*; A, 0]).  (3.35)

In this case, the matrix X₁ satisfying (3.33) is determined by

AX₁ = B and X₁*P₂X₁ = −[0, B*][P₂, A*; A, 0]†[0; B].  (3.36)

Correspondingly, the maximal matrix of q₂(X) in the Löwner partial ordering is

q₂(X₁) = Q₂ + [0, B*][P₂, A*; A, 0]†[0; B].  (3.37)

(b) There exists an X₂ ∈ C^{n×m} such that (3.34) holds if and only if

F_A P₂ F_A ≤ 0 and R([0; B]) ⊆ R([P₂, A*; A, 0]).  (3.38)

In this case, the matrix X₂ satisfying (3.34) is determined by

AX₂ = B and X₂*P₂X₂ = −[0, B*][P₂, A*; A, 0]†[0; B].  (3.39)

Correspondingly, the minimal matrix of q₂(X) in the Löwner partial ordering is

q₂(X₂) = Q₂ + [0, B*][P₂, A*; A, 0]†[0; B].  (3.40)

Proof. Under r(A) < n, let

h_i(X) := q₂(X) − q₂(X_i) = X_i*P₂X_i − X*P₂X, i = 1, 2.  (3.41)

Then, (3.33) and (3.34) are equivalent to

h₁(X) ≤ 0 s.t. AX = B, and AX₁ = B,  (3.42)

h₂(X) ≥ 0 s.t. AX = B, and AX₂ = B.  (3.43)

From Corollary 3.2(e) and (f), (3.42) and (3.43) are equivalent to

i₊[X₁*P₂X₁, B*, 0; B, 0, A; 0, A*, −P₂] = r(A), AX₁ = B,  (3.44)

i₋[X₂*P₂X₂, B*, 0; B, 0, A; 0, A*, −P₂] = r(A), AX₂ = B.  (3.45)

It is easily seen from (1.10) and (1.14) that

i±[X*P₂X, B*, 0; B, 0, A; 0, A*, −P₂] ≥ r(A) + i∓(F_A P₂ F_A) + i±( X*P₂X + [0, B*][P₂, A*; A, 0]†[0; B] ) ≥ r(A).

Hence, (3.44) and (3.45) are equivalent to

F_A P₂ F_A ≥ 0, R([0; B]) ⊆ R([P₂, A*; A, 0]), AX₁ = B, X₁*P₂X₁ = −[0, B*][P₂, A*; A, 0]†[0; B],  (3.46)

F_A P₂ F_A ≤ 0, R([0; B]) ⊆ R([P₂, A*; A, 0]), AX₂ = B, X₂*P₂X₂ = −[0, B*][P₂, A*; A, 0]†[0; B],  (3.47)

respectively. Under the first two conditions in (3.46) and (3.47), we can verify from Corollary 3.2(d) that the two pairs of matrix equations in (3.46) and (3.47) have a common solution, respectively. Thus we obtain the results in (a) and (b).

4 Concluding remarks

After centuries of development, it is not easy nowadays to discover batches of new and simple formulas for fundamental problems in classical areas of mathematics. Nevertheless, in the previous two sections we gave some simple closed-form formulas for the extremal ranks/inertias of the two simple matrix functions in (1.1) subject to (1.2), and used them to derive a variety of equalities and inequalities for the solutions of the matrix equation in (1.2). Because these formulas are represented through the ranks/inertias of the given matrices in (1.1) and (1.2), they are easy to calculate and simplify under various given conditions. The procedure for deriving these extremal ranks and inertias is irreplaceable, and the results obtained are unique from an algebraic point of view. Hence, the research on the extremal ranks/inertias of matrices and their applications, as shown in this paper as well as in [2, 3, 31], etc., can be classified as a fundamental and independent branch of mathematical optimization theory. Motivated by the results in the previous two sections, we are also able to solve rank/inertia optimization problems for some more general quadratic matrix functions, such as

q̂₁(X) = XP₁X* + XQ₁ + Q₁*X* + R₁,  q̂₂(X) = X*P₂X + X*Q₂ + Q₂*X + R₂.
(4.1)

This type of quadratic matrix function occurs in some elementary block congruence transformations, for example,

    [I_m, 0; X, I_n][P_1, Q_1; Q_1^*, R_1][I_m, X^*; 0, I_n] = [P_1, P_1X^* + Q_1; XP_1 + Q_1^*, XP_1X^* + XQ_1 + Q_1^*X^* + R_1].   (4.2)

The lower-right block in (4.2) is the quadratic matrix function q_1(X) in (4.1). Some optimization problems related to (4.1) were considered in [2, 3, 6, 14]. Without much effort, we are also able to derive the extremal ranks and inertias of (4.1) subject to (1.2), and then use them to examine various algebraic properties of (4.1). The corresponding results will be presented in another paper.

Rank/inertia optimization problems for quadratic matrix functions occur widely in matrix theory and its applications. For instance, many matrix inverse completion problems can be converted to rank minimization problems for quadratic matrix functions. A simple example is

    M(X) = [A, B; B^*, X],   (4.3)

where A = A^* ∈ C^{m×m} and B ∈ C^{m×n} are given. In this case, the task is to find an X = X^* ∈ C^{n×n} such that M(X) is nonsingular and its inverse has the form

    M^{−1}(X) = [Y, Z; Z^*, G],   (4.4)

where G = G^* ∈ C^{n×n} is given. Eqs. (4.3) and (4.4) are obviously equivalent to the following nonlinear rank minimization problem:

    min_{X,Y,Z} r( [A, B; B^*, X][Y, Z; Z^*, G] − I_{m+n} ) = 0.
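As a sanity check on the closed forms above, the extremal value in Theorem 3.5 and the congruence identity (4.2) can be verified numerically. The following is a minimal sketch, assuming NumPy, real matrices, P_2 positive definite (so E_A P_2 E_A ≥ 0 holds automatically), and A of full row rank so that plain inverses suffice; all dimensions and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, m = 3, 6, 4

# A of full row rank, P2 positive definite, and a consistent B = A X0.
A = rng.standard_normal((p, n))
G = rng.standard_normal((n, n))
P2 = G @ G.T + n * np.eye(n)
X0 = rng.standard_normal((n, m))
B = A @ X0

# Value of X^* P2 X at the extremal solution, via the bordered matrix
# [-P2, A^*; A, 0] as in Theorem 3.5 (real case, so ^* is transpose).
M = np.block([[-P2, A.T], [A, np.zeros((p, p))]])
v = np.vstack([np.zeros((n, m)), B])
val = v.T @ np.linalg.solve(M, v)          # [0, B^*] M^{-1} [0; B]

# It must agree with the Lagrange-multiplier value B^*(A P2^{-1} A^*)^{-1} B.
S = A @ np.linalg.solve(P2, A.T)
assert np.allclose(val, B.T @ np.linalg.solve(S, B))

# Minimality: for every solution X = A^+ B + E_A V of AX = B, the matrix
# X^* P2 X - val is positive semidefinite (E_A projects onto the null space of A).
Apinv = np.linalg.pinv(A)
E_A = np.eye(n) - Apinv @ A
for _ in range(5):
    X = Apinv @ B + E_A @ rng.standard_normal((n, m))
    assert np.allclose(A @ X, B)
    H = X.T @ P2 @ X - val
    assert np.linalg.eigvalsh((H + H.T) / 2).min() > -1e-8

# Congruence identity (4.2): the lower-right block of the triple product
# equals q1(X) = X P1 X^* + X Q1 + Q1^* X^* + R1.
P1 = rng.standard_normal((m, m)); P1 = P1 + P1.T
Q1 = rng.standard_normal((m, n))
R1 = rng.standard_normal((n, n)); R1 = R1 + R1.T
X = rng.standard_normal((n, m))
L = np.block([[np.eye(m), np.zeros((m, n))], [X, np.eye(n)]])
Mid = np.block([[P1, Q1], [Q1.T, R1]])
out = L @ Mid @ L.T
q1 = X @ P1 @ X.T + X @ Q1 + Q1.T @ X.T + R1
assert np.allclose(out[m:, m:], q1)
print("all checks passed")
```

Under these definiteness and rank assumptions every assertion reduces to an algebraic identity, so the checks pass up to rounding error; the general statements in the paper replace the inverses by Moore–Penrose inverses.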

Alternatively, they are equivalent to the following linear rank minimization problem:

    min_{X,Y,Z} r[ [Y, Z; Z^*, G], I_{m+n}; I_{m+n}, [A, B; B^*, X] ] = m + n.

A complete solution to this matrix inverse completion problem was given in [39].

It has been realized that matrix rank/inertia methods can serve as effective tools for dealing with matrices and their operations. In recent years, Tian and his coauthors have studied many problems on the extremal ranks and inertias of matrix functions and their applications; see [17, 19, 20, 21, 31, 34, 35, 36, 38]. This series of work opened a new and fruitful research area in matrix theory and has attracted much attention. In fact, many matrix rank formulas and their applications have been collected in some recent handbooks of matrix theory; see [4, 29]. The new techniques for solving matrix rank/inertia optimization problems enable us to develop new extensions of the classical theory of matrix equations and matrix inequalities, and allow us to analyze algebraic properties of a wide variety of Hermitian matrix functions that could not be handled before. We expect that more problems on maximizing/minimizing the ranks/inertias of matrix functions can be proposed and solved analytically, and that matrix rank/inertia methods will play ever more important roles in matrix theory and its applications.

References

[1] F.A. Badawia, On a quadratic matrix inequality and the corresponding algebraic Riccati equation, Internat. J. Contr. 6 (1982).
[2] A. Beck, Quadratic matrix programming, SIAM J. Optim. 17 (2006).
[3] A. Beck, Convexity properties associated with nonconvex quadratic matrix functions and applications to quadratic programming, J. Optim. Theory Appl. 142 (2009).
[4] D.S. Bernstein, Matrix Mathematics: Theory, Facts and Formulas, 2nd ed., Princeton University Press, Princeton, 2009.
[5] E. Candès and B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math. 9 (2009).
[6] Y.
Chen, Nonnegative definite matrices and their applications to matrix quadratic programming problems, Linear and Multilinear Algebra 33 (1993).
[7] M. Fazel, H. Hindi and S. Boyd, A rank minimization heuristic with application to minimum order system approximation, in: Proceedings of the 2001 American Control Conference, 2001.
[8] M. Fazel, H. Hindi and S. Boyd, Rank minimization and applications in system theory, in: Proceedings of the 2004 American Control Conference, 2004.
[9] J.F. Geelen, Maximum rank matrix completion, Linear Algebra Appl. 288 (1999).
[10] J. Groß, A note on the general Hermitian solution to AXA^* = B, Bull. Malaysian Math. Soc. (2nd Ser.) 21 (1998).
[11] N.J.A. Harvey, D.R. Karger and S. Yekhanin, The complexity of matrix completion, in: Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, Association for Computing Machinery, New York, 2006.
[12] T.M. Hoang and T. Thierauf, The complexity of the inertia, in: Lecture Notes in Computer Science, Vol. 2556, Springer, 2002.
[13] T.M. Hoang and T. Thierauf, The complexity of the inertia and some closure properties of GapL, in: Proceedings of the Twentieth Annual IEEE Conference on Computational Complexity, 2005.
[14] V.A. Khatskevich, M.I. Ostrovskii and V.S. Shulman, Quadratic inequalities for Hilbert space operators, Integr. Equ. Oper. Theory 59 (2007).
[15] Y. Kim and M. Mesbahi, On the rank minimization problem, in: Proceedings of the 2004 American Control Conference, Boston, 2004.
[16] M. Laurent, Matrix completion problems, in: Encyclopedia of Optimization (C.A. Floudas and P.M. Pardalos, eds.), Vol. III, Kluwer.
[17] Y. Liu and Y. Tian, More on extremal ranks of the matrix expressions A − BX ± X^*B^* with statistical applications, Numer. Linear Algebra Appl. 15 (2008).
[18] Y. Liu and Y. Tian, Extremal ranks of submatrices in an Hermitian solution to the matrix equation AXA^* = B with applications, J. Appl. Math. Comput. 32 (2010).

[19] Y. Liu and Y. Tian, A simultaneous decomposition of a matrix triplet with applications, Numer. Linear Algebra Appl. (2010), DOI: 10.1002/nla.
[20] Y. Liu and Y. Tian, Max-min problems on the ranks and inertias of the matrix expressions A − BXC ± (BXC)^* with applications, J. Optim. Theory Appl., accepted.
[21] Y. Liu, Y. Tian and Y. Takane, Ranks of Hermitian and skew-Hermitian solutions to the matrix equation AXA^* = B, Linear Algebra Appl. 431 (2009).
[22] M. Mahajan and J. Sarma, On the complexity of matrix rank and rigidity, in: Lecture Notes in Computer Science, Vol. 4649, Springer, 2007.
[23] M. Mesbahi, On the rank minimization problem and its control applications, Systems & Control Letters 33 (1998).
[24] M. Mesbahi and G.P. Papavassilopoulos, Solving a class of rank minimization problems via semi-definite programs, with applications to the fixed order output feedback synthesis, in: Proceedings of the American Control Conference, Albuquerque, New Mexico, pp. 77–80.
[25] G. Marsaglia and G.P.H. Styan, Equalities and inequalities for ranks of matrices, Linear and Multilinear Algebra 2 (1974).
[26] B.K. Natarajan, Sparse approximate solutions to linear systems, SIAM J. Comput. 24 (1995).
[27] R. Penrose, A generalized inverse for matrices, Proc. Cambridge Philos. Soc. 51 (1955).
[28] B. Recht, M. Fazel and P.A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Review 52 (2010).
[29] G.A. Seber, A Matrix Handbook for Statisticians, John Wiley & Sons, 2008.
[30] Y. Tian, Ranks of solutions of the matrix equation AXB = C, Linear and Multilinear Algebra 51 (2003).
[31] Y. Tian, Equalities and inequalities for inertias of Hermitian matrices with applications, Linear Algebra Appl. 433 (2010).
[32] Y. Tian, Rank and inertia of submatrices of the Moore–Penrose inverse of a Hermitian matrix, Electron. J. Linear Algebra 20 (2010).
[33] Y. Tian, Completing block Hermitian matrices with maximal and minimal ranks and inertias, Electron. J. Linear Algebra 21 (2010).
[34] Y.
Tian, Optimization problems on the rank and inertia of the Hermitian matrix expression A − BX − (BX)^* with applications, submitted.
[35] Y. Tian, Optimization problems on the rank and inertia of the Hermitian Schur complement with applications, submitted.
[36] Y. Tian, Rank and inertia optimization of the Hermitian matrix expression A_1 − B_1XB_1^* subject to a pair of Hermitian matrix equations (B_2XB_2^*, B_3XB_3^*) = (A_2, A_3), submitted.
[37] Y. Tian, Optimization problems on the rank and inertia of a linear Hermitian matrix function subject to range, rank and definiteness restrictions, submitted.
[38] Y. Tian and Y. Liu, Extremal ranks of some symmetric matrix expressions with applications, SIAM J. Matrix Anal. Appl. 28 (2006).
[39] Y. Tian and Y. Takane, The inverse of any two-by-two nonsingular partitioned matrix and three matrix inverse completion problems, Comput. Math. Appl. 57 (2009).
[40] J. Wang, V. Sreeram and W. Liu, The parametrization of the pencil A + BKC with constant rank and its application, Internat. J. Inform. Sys. Sci. 4 (2008).


More information

Product of Range Symmetric Block Matrices in Minkowski Space

Product of Range Symmetric Block Matrices in Minkowski Space BULLETIN of the Malaysian Mathematical Sciences Society http://math.usm.my/bulletin Bull. Malays. Math. Sci. Soc. (2) 29(1) (2006), 59 68 Product of Range Symmetric Block Matrices in Minkowski Space 1

More information

The Skew-Symmetric Ortho-Symmetric Solutions of the Matrix Equations A XA = D

The Skew-Symmetric Ortho-Symmetric Solutions of the Matrix Equations A XA = D International Journal of Algebra, Vol. 5, 2011, no. 30, 1489-1504 The Skew-Symmetric Ortho-Symmetric Solutions of the Matrix Equations A XA = D D. Krishnaswamy Department of Mathematics Annamalai University

More information

EXPLICIT SOLUTION TO MODULAR OPERATOR EQUATION T XS SX T = A

EXPLICIT SOLUTION TO MODULAR OPERATOR EQUATION T XS SX T = A Kragujevac Journal of Mathematics Volume 4(2) (216), Pages 28 289 EXPLICI SOLUION O MODULAR OPERAOR EQUAION XS SX A M MOHAMMADZADEH KARIZAKI 1, M HASSANI 2, AND S S DRAGOMIR 3 Abstract In this paper, by

More information

On some linear combinations of hypergeneralized projectors

On some linear combinations of hypergeneralized projectors Linear Algebra and its Applications 413 (2006) 264 273 www.elsevier.com/locate/laa On some linear combinations of hypergeneralized projectors Jerzy K. Baksalary a, Oskar Maria Baksalary b,, Jürgen Groß

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

arxiv: v1 [math.ra] 21 Jul 2013

arxiv: v1 [math.ra] 21 Jul 2013 Projections and Idempotents in -reducing Rings Involving the Moore-Penrose Inverse arxiv:1307.5528v1 [math.ra] 21 Jul 2013 Xiaoxiang Zhang, Shuangshuang Zhang, Jianlong Chen, Long Wang Department of Mathematics,

More information

Workshop on Generalized Inverse and its Applications. Invited Speakers Program Abstract

Workshop on Generalized Inverse and its Applications. Invited Speakers Program Abstract Workshop on Generalized Inverse and its Applications Invited Speakers Program Abstract Southeast University, Nanjing November 2-4, 2012 Invited Speakers: Changjiang Bu, College of Science, Harbin Engineering

More information

Positive definite preserving linear transformations on symmetric matrix spaces

Positive definite preserving linear transformations on symmetric matrix spaces Positive definite preserving linear transformations on symmetric matrix spaces arxiv:1008.1347v1 [math.ra] 7 Aug 2010 Huynh Dinh Tuan-Tran Thi Nha Trang-Doan The Hieu Hue Geometry Group College of Education,

More information

ELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n

ELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n Electronic Journal of Linear Algebra ISSN 08-380 Volume 22, pp. 52-538, May 20 THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE WEI-WEI XU, LI-XIA CAI, AND WEN LI Abstract. In this

More information

arxiv: v1 [math.na] 1 Sep 2018

arxiv: v1 [math.na] 1 Sep 2018 On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE

SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE BABHRU JOSHI AND M. SEETHARAMA GOWDA Abstract. We consider the semidefinite cone K n consisting of all n n real symmetric positive semidefinite matrices.

More information

~ g-inverses are indeed an integral part of linear algebra and should be treated as such even at an elementary level.

~ g-inverses are indeed an integral part of linear algebra and should be treated as such even at an elementary level. Existence of Generalized Inverse: Ten Proofs and Some Remarks R B Bapat Introduction The theory of g-inverses has seen a substantial growth over the past few decades. It is an area of great theoretical

More information

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES HANYU LI, HU YANG College of Mathematics and Physics Chongqing University Chongqing, 400030, P.R. China EMail: lihy.hy@gmail.com,

More information

Dragan S. Djordjević. 1. Introduction

Dragan S. Djordjević. 1. Introduction UNIFIED APPROACH TO THE REVERSE ORDER RULE FOR GENERALIZED INVERSES Dragan S Djordjević Abstract In this paper we consider the reverse order rule of the form (AB) (2) KL = B(2) TS A(2) MN for outer generalized

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 432 21 1691 172 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Drazin inverse of partitioned

More information

of a Two-Operator Product 1

of a Two-Operator Product 1 Applied Mathematical Sciences, Vol. 7, 2013, no. 130, 6465-6474 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2013.39501 Reverse Order Law for {1, 3}-Inverse of a Two-Operator Product 1 XUE

More information

Group inverse for the block matrix with two identical subblocks over skew fields

Group inverse for the block matrix with two identical subblocks over skew fields Electronic Journal of Linear Algebra Volume 21 Volume 21 2010 Article 7 2010 Group inverse for the block matrix with two identical subblocks over skew fields Jiemei Zhao Changjiang Bu Follow this and additional

More information

Jordan Canonical Form of A Partitioned Complex Matrix and Its Application to Real Quaternion Matrices

Jordan Canonical Form of A Partitioned Complex Matrix and Its Application to Real Quaternion Matrices COMMUNICATIONS IN ALGEBRA, 29(6, 2363-2375(200 Jordan Canonical Form of A Partitioned Complex Matrix and Its Application to Real Quaternion Matrices Fuzhen Zhang Department of Math Science and Technology

More information

Absolute value equations

Absolute value equations Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West

More information

ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION

ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION Annales Univ. Sci. Budapest., Sect. Comp. 33 (2010) 273-284 ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION L. László (Budapest, Hungary) Dedicated to Professor Ferenc Schipp on his 70th

More information

ELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES

ELA THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES Volume 22, pp. 480-489, May 20 THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES XUZHOU CHEN AND JUN JI Abstract. In this paper, we study the Moore-Penrose inverse

More information

Review of Linear Algebra

Review of Linear Algebra Review of Linear Algebra Definitions An m n (read "m by n") matrix, is a rectangular array of entries, where m is the number of rows and n the number of columns. 2 Definitions (Con t) A is square if m=

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

Positive Semidefiniteness and Positive Definiteness of a Linear Parametric Interval Matrix

Positive Semidefiniteness and Positive Definiteness of a Linear Parametric Interval Matrix Positive Semidefiniteness and Positive Definiteness of a Linear Parametric Interval Matrix Milan Hladík Charles University, Faculty of Mathematics and Physics, Department of Applied Mathematics, Malostranské

More information

On Monoids over which All Strongly Flat Right S-Acts Are Regular

On Monoids over which All Strongly Flat Right S-Acts Are Regular Æ26 Æ4 ² Vol.26, No.4 2006µ11Â JOURNAL OF MATHEMATICAL RESEARCH AND EXPOSITION Nov., 2006 Article ID: 1000-341X(2006)04-0720-05 Document code: A On Monoids over which All Strongly Flat Right S-Acts Are

More information

On EP elements, normal elements and partial isometries in rings with involution

On EP elements, normal elements and partial isometries in rings with involution Electronic Journal of Linear Algebra Volume 23 Volume 23 (2012 Article 39 2012 On EP elements, normal elements and partial isometries in rings with involution Weixing Chen wxchen5888@163.com Follow this

More information