Optimization problems on the rank and inertia of the Hermitian matrix expression A − BX − (BX)* with applications
Yongge Tian
China Economics and Management Academy, Central University of Finance and Economics, Beijing 100081, China

Abstract. We give in this paper some closed-form formulas for the maximal and minimal values of the rank and inertia of the Hermitian matrix expression A − BX ± (BX)* with respect to a variable matrix X. As applications, we derive the extremal values of the ranks/inertias of the matrices X and X ± X*, where X is a (Hermitian) solution to the matrix equation AXB = C, respectively, and give necessary and sufficient conditions for the matrix equation AXB = C to have Hermitian, definite and Re-definite solutions. In particular, we derive the extremal ranks/inertias of Hermitian solutions X of the matrix equation AXA* = C, as well as the extremal ranks/inertias of a Hermitian solution X of the pair of matrix equations A_1XA_1* = C_1 and A_2XA_2* = C_2.

AMS Classifications: 15A09; 15A24; 15B57

Keywords: Moore-Penrose inverse; matrix expression; matrix equation; inertia; rank; equality; inequality; Hermitian solution; definite solution; Re-definite solution; Hermitian perturbation

1 Introduction

In a recent paper [63], the present author studied upper and lower bounds of the rank/inertia of the following linear Hermitian matrix expression (matrix function)

A − BXB*, (1.1)

where A is a given m×m Hermitian matrix, B is a given m×n matrix, X is an n×n variable Hermitian matrix, and B* denotes the conjugate transpose of B, and obtained a group of closed-form formulas for the exact upper and lower bounds (maximal and minimal values) of the rank/inertia of (1.1) through pure algebraic operations on matrices and their generalized inverses. The closed-form formulas obtained enable us to derive some valuable consequences on the nonsingularity and definiteness of (1.1), as well as the existence of a Hermitian solution of the matrix equation BXB* = A.
As a continuation, we consider in this paper the optimization problems on the rank/inertia of the Hermitian matrix expression

p(X) = A − BX − (BX)*, (1.2)

where A = A* and B are given m×m and m×n matrices, respectively, and X is an n×m variable matrix. This expression is often encountered in solving some matrix equations with symmetric patterns and in the investigation of Hermitian parts of complex matrices.

The problem of maximizing or minimizing the rank or inertia of a matrix is a special topic in optimization theory. The maximal/minimal rank/inertia of a matrix expression can be used to characterize:

(I) the maximal/minimal dimensions of the row and column spaces of the matrix expression;
(II) nonsingularity of the matrix expression when it is square;
(III) solvability of the corresponding matrix equation;
(IV) rank/inertia invariance of the matrix expression;
(V) definiteness of the matrix expression when it is Hermitian; etc.

Notice that the domain of p(X) in (1.2) is the continuous set of all n×m matrices, while the objective functions, the rank and inertia of p(X), take values only in a finite set of nonnegative integers. Hence, this kind of continuous-discrete optimization problem cannot be solved by the various optimization methods for continuous or discrete cases. It has been realized that rank/inertia optimization and completion problems have deep connections with computational complexity and numerous important algorithmic

Address: yongge.tian@gmail.com
applications. Except for some special cases such as (1.1) and (1.2), solving rank optimization problems (globally) is very difficult. In fact, optimization problems and completion problems on the rank/inertia of a general matrix expression are regarded as NP-hard; see, e.g., [14, 15, 16, 21, 24, 25, 36, 42, 47]. Fortunately, closed-form solutions to the rank/inertia optimization problems of A − BXB* and A − BX − (BX)*, as shown in [37, 39, 63, 67] and Section 2 below, can be derived algebraically by using generalized inverses of matrices.

Throughout this paper, C^{m×n} and C_H^m stand for the sets of all m×n complex matrices and all m×m complex Hermitian matrices, respectively. The symbols A^T, A*, r(A), R(A) and N(A) stand for the transpose, conjugate transpose, rank, range (column space) and null space of a matrix A ∈ C^{m×n}, respectively; I_m denotes the identity matrix of order m; [A, B] denotes a row block matrix consisting of A and B. We write A > 0 (A ≥ 0) if A is Hermitian positive (nonnegative) definite. Two Hermitian matrices A and B of the same size are said to satisfy the inequality A > B (A ≥ B) in the Löwner partial ordering if A − B is positive (nonnegative) definite. The Moore-Penrose inverse of A ∈ C^{m×n}, denoted by A†, is defined to be the unique solution X satisfying the four matrix equations

(i) AXA = A, (ii) XAX = X, (iii) (AX)* = AX, (iv) (XA)* = XA.

If X satisfies (i), it is called a g-inverse of A and is denoted by A^−. A matrix X is called a Hermitian g-inverse of A ∈ C_H^m, denoted by A^∼, if it satisfies both AXA = A and X = X*. Further, the symbols E_A and F_A stand for the two orthogonal projectors E_A = I_m − AA† and F_A = I_n − A†A onto the null spaces N(A*) and N(A), respectively. The ranks of E_A and F_A are given by r(E_A) = m − r(A) and r(F_A) = n − r(A). A well-known property of the Moore-Penrose inverse is (A†)* = (A*)†. In addition, AA† = A†A if A = A*. We shall use these facts repeatedly in the latter part of this paper.
Results on the Moore-Penrose inverse can be found, e.g., in [4, 5, 28]. The Hermitian part of a square matrix A is defined to be H(A) = (A + A*)/2. A square matrix A is said to be Re-positive (Re-nonnegative) definite if H(A) > 0 (H(A) ≥ 0), and Re-negative (Re-nonpositive) definite if H(A) < 0 (H(A) ≤ 0).

As is well known, the eigenvalues of a Hermitian matrix A ∈ C_H^m are all real, and the inertia of A is defined to be the triplet In(A) = { i_+(A), i_−(A), i_0(A) }, where i_+(A), i_−(A) and i_0(A) are the numbers of positive, negative and zero eigenvalues of A counted with multiplicities, respectively. The two numbers i_+(A) and i_−(A) are called the positive and negative index of inertia, respectively, and both are usually called the partial inertia of A; see, e.g., [2]. For a matrix A ∈ C_H^m, we have

r(A) = i_+(A) + i_−(A), i_0(A) = m − r(A). (1.3)

Hence, once i_+(A) and i_−(A) are both determined, r(A) and i_0(A) are obtained as well. It is obvious that p(X) = 0 and p(X) > 0 (≥ 0, < 0, ≤ 0) in (1.2) correspond to the well-known matrix equation and inequalities of Lyapunov type

BX + (BX)* = A, BX + (BX)* < A (≤ A, > A, ≥ A).

Some previous work on these kinds of equation and inequality can be found, e.g., in [6, 26, 27, 35, 67, 72]. In addition, the Hermitian part of A + BX (see [31]), Re-definite solutions of the matrix equations AX = B and AXB = C (see, e.g., [11, 71, 73, 74, 75]), Hermitian solutions of the consistent matrix equation AXA* = B, as well as the Hermitian generalized inverse of a Hermitian matrix (see, e.g., [63]) can also be represented in the form of (1.2).

When X runs over C^{n×m}, the p(X) in (1.2) varies with respect to the choice of X. In such a case, it would be of interest to know how the rank, range, nullity and inertia of p(X) vary with respect to X. In two recent papers [37, 67], the p(X) in (1.2) was studied, and the maximal and minimal possible ranks of p(X) with respect to X ∈ C^{n×m} were derived through generalized inverses of matrices and partitioned matrices.
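The inertia triplet and identity (1.3) can be computed directly from the eigenvalues. A short sketch (not from the paper; the numerical tolerance `tol` is an implementation choice):

```python
import numpy as np

# Compute In(A) = (i_+(A), i_-(A), i_0(A)) for a Hermitian matrix A.
def inertia(A, tol=1e-10):
    w = np.linalg.eigvalsh(A)              # real eigenvalues of Hermitian A
    i_pos = int(np.sum(w > tol))
    i_neg = int(np.sum(w < -tol))
    i_zero = len(w) - i_pos - i_neg
    return i_pos, i_neg, i_zero

A = np.diag([3.0, 1.0, 0.0, -2.0])         # eigenvalues 3, 1, 0, -2
ip, im_, iz = inertia(A)
m = A.shape[0]
assert (ip, im_, iz) == (2, 1, 1)
assert np.linalg.matrix_rank(A) == ip + im_    # r(A) = i_+(A) + i_-(A), (1.3)
assert iz == m - np.linalg.matrix_rank(A)      # i_0(A) = m - r(A)
```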
This paper aims at deriving the maximal and minimal possible values of the inertia of p(X) with respect to X through the Moore-Penrose generalized inverses of matrices, and at giving closed-form expressions of the matrices X for which the extremal values are attained.

In optimization theory, as well as in system and control theory, minimizing/maximizing the rank of a partially specified matrix or matrix expression subject to its variable entries is referred to as a rank minimization/maximization problem, and such problems are denoted collectively by RMPs; see [3, 14, 15, 34, 44, 45]. Correspondingly, minimizing/maximizing the inertia of a partially specified Hermitian matrix or matrix
expression subject to its variable entries is referred to as an inertia minimization/maximization problem, and such problems are denoted collectively by IMPs. RMPs/IMPs are now known to be NP-hard in general, and a satisfactory characterization of the solution set of a general RMP/IMP is currently not available. For a large class of RMPs/IMPs associated with linear matrix equations and linear matrix expressions it is, however, possible to give closed-form solutions through some matrix tools, such as generalized SVDs and generalized inverses of matrices.

Note that the inertia of a Hermitian matrix divides the eigenvalues of the matrix into three sets on the real line. Hence, the inertia can be used to characterize definiteness of the Hermitian matrix. The following results are obvious from the definitions of the rank/inertia of a matrix.

Lemma 1.1 Let A ∈ C^{m×m}, B ∈ C^{m×n}, and C ∈ C_H^m. Then,
(a) A is nonsingular if and only if r(A) = m.
(b) B = 0 if and only if r(B) = 0.
(c) C > 0 (C < 0) if and only if i_+(C) = m (i_−(C) = m).
(d) C ≥ 0 (C ≤ 0) if and only if i_−(C) = 0 (i_+(C) = 0).

Because the rank and partial inertia of a (Hermitian) matrix are finite nonnegative integers, the maximal and minimal values of the rank and partial inertia of a (Hermitian) matrix expression with respect to its variable components must exist. Combining this fact with Lemma 1.1, we have the following assertions.

Lemma 1.2 Let S be a set consisting of (square) matrices in C^{m×n}, and let H be a set consisting of Hermitian matrices in C_H^m. Then,
(a) S has a nonsingular matrix if and only if max_{X∈S} r(X) = m.
(b) All X ∈ S are nonsingular if and only if min_{X∈S} r(X) = m.
(c) 0 ∈ S if and only if min_{X∈S} r(X) = 0.
(d) S = {0} if and only if max_{X∈S} r(X) = 0.
(e) All X ∈ S have the same rank if and only if max_{X∈S} r(X) = min_{X∈S} r(X).
(f) H has a matrix X > 0 (X < 0) if and only if max_{X∈H} i_+(X) = m (max_{X∈H} i_−(X) = m).
(g) All X ∈ H satisfy X > 0 (X < 0) if and only if min_{X∈H} i_+(X) = m (min_{X∈H} i_−(X) = m).
(h) H has a matrix X ≥ 0 (X ≤ 0) if and only if min_{X∈H} i_−(X) = 0 (min_{X∈H} i_+(X) = 0).
(i) All X ∈ H satisfy X ≥ 0 (X ≤ 0) if and only if max_{X∈H} i_−(X) = 0 (max_{X∈H} i_+(X) = 0).
(j) All X ∈ H have the same positive index of inertia if and only if max_{X∈H} i_+(X) = min_{X∈H} i_+(X).
(k) All X ∈ H have the same negative index of inertia if and only if max_{X∈H} i_−(X) = min_{X∈H} i_−(X).

Lemma 1.3 Let S_1 and S_2 be two sets consisting of (square) matrices in C^{m×n}, and let H_1 and H_2 be two sets consisting of Hermitian matrices in C_H^m. Then,
(a) There exist X_1 ∈ S_1 and X_2 ∈ S_2 such that X_1 − X_2 is nonsingular if and only if max_{X_1∈S_1, X_2∈S_2} r(X_1 − X_2) = m.
(b) X_1 − X_2 is nonsingular for all X_1 ∈ S_1 and X_2 ∈ S_2 if and only if min_{X_1∈S_1, X_2∈S_2} r(X_1 − X_2) = m.
(c) There exist X_1 ∈ S_1 and X_2 ∈ S_2 such that X_1 = X_2, i.e., S_1 ∩ S_2 ≠ ∅, if and only if min_{X_1∈S_1, X_2∈S_2} r(X_1 − X_2) = 0.
(d) S_1 ⊆ S_2 (S_2 ⊆ S_1) if and only if max_{X_1∈S_1} min_{X_2∈S_2} r(X_1 − X_2) = 0 (max_{X_2∈S_2} min_{X_1∈S_1} r(X_1 − X_2) = 0).
(e) There exist X_1 ∈ H_1 and X_2 ∈ H_2 such that X_1 > X_2 (X_1 < X_2) if and only if max_{X_1∈H_1, X_2∈H_2} i_+(X_1 − X_2) = m (max_{X_1∈H_1, X_2∈H_2} i_−(X_1 − X_2) = m).
(f) X_1 > X_2 (X_1 < X_2) for all X_1 ∈ H_1 and X_2 ∈ H_2 if and only if min_{X_1∈H_1, X_2∈H_2} i_+(X_1 − X_2) = m (min_{X_1∈H_1, X_2∈H_2} i_−(X_1 − X_2) = m).
(g) There exist X_1 ∈ H_1 and X_2 ∈ H_2 such that X_1 ≥ X_2 (X_1 ≤ X_2) if and only if min_{X_1∈H_1, X_2∈H_2} i_−(X_1 − X_2) = 0 (min_{X_1∈H_1, X_2∈H_2} i_+(X_1 − X_2) = 0).
(h) X_1 ≥ X_2 (X_1 ≤ X_2) for all X_1 ∈ H_1 and X_2 ∈ H_2 if and only if max_{X_1∈H_1, X_2∈H_2} i_−(X_1 − X_2) = 0 (max_{X_1∈H_1, X_2∈H_2} i_+(X_1 − X_2) = 0).

These three lemmas show that once some closed-form formulas for the (extremal) ranks/inertias of Hermitian matrices are derived, we can use them to characterize equalities and inequalities for Hermitian matrices. This basic algebraic method, referred to as the matrix rank/inertia method, is available for studying various matrix expressions that involve generalized inverses of matrices and arbitrary matrices. In the past two decades, the present author and his colleagues established many closed-form formulas for the (extremal) ranks/inertias of (Hermitian) matrices, and used them to derive numerous consequences and applications; see, e.g., [37, 38, 39, 41, 55, 56, 58, 59, 60, 61, 62, 63, 67, 68, 69].

The following are some known results on ranks/inertias of matrices, which are used later in this paper.

Lemma 1.4 ([43]) Let A ∈ C^{m×n}, B ∈ C^{m×k}, and C ∈ C^{l×n} be given.
Then,

r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A), (1.4)

r([A; C]) = r(A) + r(C F_A) = r(C) + r(A F_C), (1.5)

r([A, B; C, 0]) = r(B) + r(C) + r(E_B A F_C), (1.6)

r([±AA*, B; B*, 0]) = r[A, B] + r(B). (1.7)

We shall repeatedly use the following simple results on partial inertias of Hermitian matrices.
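The rank formulas (1.4) and (1.7) lend themselves to a quick numerical spot-check (a sketch, not from the paper), with E_A = I − AA† as defined in Section 1:

```python
import numpy as np

# Spot-check of r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A)  (1.4)
# and r([AA*, B; B*, 0]) = r[A, B] + r(B)                    (1.7).
rng = np.random.default_rng(1)
m, n, k = 6, 3, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, k))

rank = np.linalg.matrix_rank
E_A = np.eye(m) - A @ np.linalg.pinv(A)
E_B = np.eye(m) - B @ np.linalg.pinv(B)

AB = np.hstack([A, B])                      # the row block matrix [A, B]
assert rank(AB) == rank(A) + rank(E_A @ B)  # (1.4), first equality
assert rank(AB) == rank(B) + rank(E_B @ A)  # (1.4), second equality

# (1.7) with the plus sign
M = np.block([[A @ A.T, B], [B.T, np.zeros((k, k))]])
assert rank(M) == rank(AB) + rank(B)
```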
Lemma 1.5 Let A ∈ C_H^m, B ∈ C_H^n and P ∈ C^{m×n}. Then,

i_±(PAP*) ≤ i_±(A), (1.8)

i_±(PAP*) = i_±(A) if P is nonsingular, (1.9)

i_±(λA) = i_±(A) if λ > 0, and i_±(λA) = i_∓(A) if λ < 0, (1.10)

i_±([A, 0; 0, B]) = i_±(A) + i_±(B), (1.11)

i_±([0, P; P*, 0]) = r(P). (1.12)

The two inequalities in (1.8) were first given in [48]; see also [41, Lemma 2]. Eq. (1.9) is the well-known Sylvester law of inertia, which was first established in 1852 by Sylvester [54] (see, e.g., [22] and [37, p. 377]). Eq. (1.10) follows from the fact that the eigenvalues of λA are the eigenvalues of A multiplied by λ. Eq. (1.11) is obvious from the definition of inertia, and (1.12) is well known (see, e.g., [22, 23]).

Lemma 1.6 ([17, 49]) Let A, B ∈ C_H^m. The following statements are equivalent:
(a) R(A) ∩ R(B) = {0}.
(b) r(A + B) = r(A) + r(B).
(c) i_+(A + B) = i_+(A) + i_+(B) and i_−(A + B) = i_−(A) + i_−(B).

Lemma 1.7 Let A ∈ C_H^m and B ∈ C^{m×n}, and denote M = [A, B; B*, 0]. Then,

i_±(M) = r(B) + i_±(E_B A E_B). (1.13)

In particular,
(a) If A ≥ 0, then i_+(M) = r[A, B] and i_−(M) = r(B).
(b) If A ≤ 0, then i_+(M) = r(B) and i_−(M) = r[A, B].
(c) i_±(A) ≤ i_±(M) ≤ i_±(A) + r(B).

An alternative form of (1.13) was given in [25, Theorem 2.1], and a direct proof of (1.13) was given in [52, Theorem 2.3]. Results (a)-(c) follow from (1.8), (1.13) and Lemma 1.1. Some formulas derived from (1.13), in which the blocks are of appropriate sizes and C and D are Hermitian, are

i_±([A, BF_P; F_P B*, C]) = i_±([A, B, 0; B*, C, P*; 0, P, 0]) − r(P), (1.14)

r([A, BF_P; F_P B*, C]) = r([A, B, 0; B*, C, P*; 0, P, 0]) − 2r(P), (1.15)

i_±([E_Q A E_Q, E_Q B; B* E_Q, D]) = i_±([A, B, Q; B*, D, 0; Q*, 0, 0]) − r(Q), (1.16)

r([E_Q A E_Q, E_Q B; B* E_Q, D]) = r([A, B, Q; B*, D, 0; Q*, 0, 0]) − 2r(Q). (1.17)

We shall use them to simplify the inertias of block Hermitian matrices involving Moore-Penrose inverses of matrices.

Lemma 1.8 ([52]) Let A ∈ C^{m×n}, B ∈ C^{p×q} and C ∈ C^{m×q} be given. Then,
(a) The matrix equation

AX = C (1.18)

has a solution X ∈ C^{n×q} if and only if R(C) ⊆ R(A), or equivalently, AA†C = C. In this case, the general solution of (1.18) can be written in the parametric form

X = A†C + F_A V, (1.19)

where V ∈ C^{n×q} is arbitrary.

(b) The matrix equation

AXB = C (1.20)

has a solution X ∈ C^{n×p} if and only if R(C) ⊆ R(A) and R(C*) ⊆ R(B*), or equivalently, AA†CB†B = C. In this case, the general solution of (1.20) can be written in the parametric form

X = A†CB† + F_A V_1 + V_2 E_B, (1.21)

where V_1, V_2 ∈ C^{n×p} are arbitrary.

Lemma 1.9 Let A_j ∈ C^{m_j×n}, B_j ∈ C^{p×q_j} and C_j ∈ C^{m_j×q_j} be given, j = 1, 2. Then,

(a) [5] The pair of matrix equations

A_1XB_1 = C_1 and A_2XB_2 = C_2 (1.22)

have a common solution X ∈ C^{n×p} if and only if

R(C_j) ⊆ R(A_j) and R(C_j*) ⊆ R(B_j*) for j = 1, 2, and r([C_1, 0, A_1; 0, −C_2, A_2; B_1, B_2, 0]) = r([A_1; A_2]) + r[B_1, B_2]. (1.23)

(b) [57] Under (1.23), the general common solution of (1.22) can be written in the parametric form

X = X_0 + F_A V_1 + V_2 E_B + F_{A_1} V_3 E_{B_2} + F_{A_2} V_4 E_{B_1}, (1.24)

where X_0 is a particular common solution of (1.22), A = [A_1; A_2], B = [B_1, B_2], and the four matrices V_1, ..., V_4 ∈ C^{n×p} are arbitrary.

In order to derive explicit formulas for ranks of block matrices, we use the following three types of elementary block matrix operation (EBMO, for short):

(I) interchange two block rows (columns) in a block matrix;
(II) multiply a block row (column) by a nonsingular matrix from the left-hand (right-hand) side in a block matrix;
(III) add a block row (column) multiplied by a matrix from the left-hand (right-hand) side to another block row (column).
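The solution formula (1.21) in Lemma 1.8(b) can be spot-checked numerically. A sketch (not from the paper; real matrices, so * is the plain transpose):

```python
import numpy as np

# If AXB = C is solvable, then X = A†CB† + F_A V1 + V2 E_B solves it
# for arbitrary V1, V2 (Lemma 1.8(b)).
rng = np.random.default_rng(3)
m, n, p = 4, 3, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
X_true = rng.standard_normal((n, n))
C = A @ X_true @ B                        # guarantees solvability

A_pinv, B_pinv = np.linalg.pinv(A), np.linalg.pinv(B)
F_A = np.eye(n) - A_pinv @ A
E_B = np.eye(n) - B @ B_pinv
V1, V2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))

# solvability criterion A A† C B† B = C
assert np.allclose(A @ A_pinv @ C @ B_pinv @ B, C)
X = A_pinv @ C @ B_pinv + F_A @ V1 + V2 @ E_B
assert np.allclose(A @ X @ B, C)          # A F_A = 0 and E_B B = 0 kill V1, V2
```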
In order to derive explicit formulas for the inertia of a block Hermitian matrix, we use the following three types of elementary block congruence matrix operation (EBCMO, for short) for a block Hermitian matrix with the same row and column partition:

(IV) interchange the ith and jth block rows, while interchanging the ith and jth block columns in the block Hermitian matrix;
(V) multiply the ith block row by a nonsingular matrix P from the left-hand side, while multiplying the ith block column by P* from the right-hand side in the block Hermitian matrix;
(VI) add the ith block row multiplied by a matrix P from the left-hand side to the jth block row, while adding the ith block column multiplied by P* from the right-hand side to the jth block column in the block Hermitian matrix.
The three types of operation are in fact equivalent to a congruence transformation A ↦ PAP* of a Hermitian matrix A, where the nonsingular matrix P carries out the elementary block matrix operations on the block rows of A, and P* carries out the corresponding operations on the block columns of A. An example of such congruence operations, associated with (1.2), is given by

P [A, B, X*; B*, 0, I_n; X, I_n, 0] P* = [A − BX − (BX)*, 0, 0; 0, 0, I_n; 0, I_n, 0], where P = [I_m, −X*, −B; 0, I_n, 0; 0, 0, I_n].

Because P is nonsingular, it is a simple matter to establish, by using (1.9), (1.11) and (1.12), the equality

i_±([A, B, X*; B*, 0, I_n; X, I_n, 0]) = n + i_±(A − BX − (BX)*).

In fact, this kind of congruence operation for block Hermitian matrices has been widely used in the investigation of inertias of block Hermitian matrices; see, e.g., [7, 8, 9, 12, 13, 22, 23, 51, 63, 64, 65]. Because EBCMOs do not change the inertia of a Hermitian matrix, we shall repeatedly use them to simplify block Hermitian matrices and to establish equalities for their inertias in the following sections.

2 Extremal values of the rank/inertia of A − BX − (BX)*

The problem of maximizing/minimizing the ranks of the two matrix expressions A − BX ± (BX)* with respect to a variable matrix X was studied in [37, 67], in which the following results were given.

Lemma 2.1 Let A = ±A* ∈ C^{m×m} and B ∈ C^{m×n} be given. Then, the maximal and minimal ranks of A − BX ± (BX)* with respect to X ∈ C^{n×m} are given by

max_{X∈C^{n×m}} r[A − BX ± (BX)*] = min{ m, r([A, B; B*, 0]) }, (2.1)

min_{X∈C^{n×m}} r[A − BX ± (BX)*] = r([A, B; B*, 0]) − 2r(B). (2.2)

Hence,

(a) There exists an X ∈ C^{n×m} such that A − BX ± (BX)* is nonsingular if and only if r([A, B; B*, 0]) ≥ m.

(b) A − BX ± (BX)* is nonsingular for all X ∈ C^{n×m} if and only if r(A) = m and B = 0.

(c) There exists an X ∈ C^{n×m} such that BX ± (BX)* = A if and only if r([A, B; B*, 0]) = 2r(B), or equivalently, E_B A E_B = 0. In this case, the general solution of BX ± (BX)* = A can be written as

X = B†A − (1/2)B†ABB† + UB* + F_B V,

where U ∈ C^{n×n} with U = −U* for the plus sign (U = U* for the minus sign) and V ∈ C^{n×m} are arbitrary.
(d) A − BX ± (BX)* = 0 for all X ∈ C^{n×m} if and only if both A = 0 and B = 0.

(e) r[A − BX ± (BX)*] = r(A) for all X ∈ C^{n×m} if and only if B = 0.

The expressions of the matrices X satisfying (2.1) and (2.2) were also presented in [37, 67]. Lemma 2.1(c) was given in [26]; see also [53]. We next derive the extremal inertias of the Hermitian matrix expression A − BX − (BX)*, and give the corresponding matrices X such that the inertias of A − BX − (BX)* attain the extremal values.

Theorem 2.2 Let p(X) be as given in (1.2). Then,
(a) The maximal values of the partial inertias of p(X) are given by

max_{X∈C^{n×m}} i_±[p(X)] = i_±([A, B; B*, 0]) = r(B) + i_±(E_B A E_B). (2.3)

Two matrices attaining the respective bounds in (2.3) are given by

X = B†A − (1/2)B†ABB† + (U ∓ I_n)B* + F_B V, (2.4)

respectively, where U = −U* ∈ C^{n×n} and V ∈ C^{n×m} are arbitrary.

(b) The minimal values of the partial inertias of p(X) are given by

min_{X∈C^{n×m}} i_±[p(X)] = i_±([A, B; B*, 0]) − r(B) = i_±(E_B A E_B). (2.5)

A matrix X ∈ C^{n×m} attaining both bounds in (2.5) is given by

X = B†A − (1/2)B†ABB† + UB* + F_B V, (2.6)

where U = −U* ∈ C^{n×n} and V ∈ C^{n×m} are arbitrary.

Proof Note from Lemma 1.7(c) that

i_±[p(X)] ≤ i_±([p(X), B; B*, 0]) ≤ i_±[p(X)] + r(B). (2.7)

By (1.13),

i_±([p(X), B; B*, 0]) = r(B) + i_±(E_B p(X) E_B) = r(B) + i_±(E_B A E_B). (2.8)

Combining (2.7) and (2.8) leads to

i_±(E_B A E_B) ≤ i_±[p(X)] ≤ r(B) + i_±(E_B A E_B), (2.9)

that is, i_±(E_B A E_B) and r(B) + i_±(E_B A E_B) are lower and upper bounds of i_±[p(X)], respectively. Substituting (2.4) into p(X) gives

p(X) = A − BB†A − ABB† + BB†ABB† − BUB* − (BUB*)* ± 2BB* = E_B A E_B ± 2BB*.

Note that R(E_B A E_B) ∩ R(BB*) = {0}. Hence, it follows from Lemma 1.6 that

i_±(E_B A E_B ± 2BB*) = i_±(E_B A E_B) + i_±(±2BB*) = r(B) + i_±(E_B A E_B).

These two equalities imply that the upper bounds in (2.9) are attained, establishing (a). Substituting (2.6) into p(X) gives

p(X) = A − BB†A − ABB† + BB†ABB† − BUB* − (BUB*)* = E_B A E_B.

Hence, i_±[p(X)] = i_±(E_B A E_B), establishing (b).

Lemma 2.1 and Theorem 2.2 formulate explicitly the extremal values of the rank/inertia of the Hermitian matrix expression A − BX − (BX)* with respect to the variable matrix X. Hence, we can easily use these formulas and the corresponding matrices X, as demonstrated in Lemma 2.1(a)-(d), to study various optimization problems on ranks/inertias of Hermitian matrix expressions. As described in Lemma 2.1, one of the important applications of the extremal values of the partial inertias of A − BX − (BX)* is to characterize the four matrix inequalities BX + (BX)* > A (< A, ≥ A, ≤ A).
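The substitution step at the end of the proof of part (b) is easy to check numerically. A sketch (not from the paper), taking U = 0 and V = 0 in (2.6) and real matrices:

```python
import numpy as np

# Check that X0 = B†A - (1/2) B†ABB† turns p(X) = A - BX - (BX)* into
# E_B A E_B, whose partial inertias give the minima in (2.5).
rng = np.random.default_rng(4)
m, n = 5, 2
G = rng.standard_normal((m, m))
A = (G + G.T) / 2                          # a Hermitian (real symmetric) A
B = rng.standard_normal((m, n))

B_pinv = np.linalg.pinv(B)
E_B = np.eye(m) - B @ B_pinv
X0 = B_pinv @ A - 0.5 * B_pinv @ A @ B @ B_pinv   # (2.6) with U = 0, V = 0

P = A - B @ X0 - (B @ X0).T                # p(X0); real case, so * is T
assert np.allclose(P, E_B @ A @ E_B)
```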
In a recent paper [7], these inequalities were considered and the following results were obtained.
Corollary 2.3 Let A ∈ C_H^m and B ∈ C^{m×n} be given. Then,

(a) There exists an X ∈ C^{n×m} such that

BX + (BX)* ≥ A (2.10)

if and only if

E_B A E_B ≤ 0. (2.11)

In this case, the general solution of (2.10) can be written as

X = (1/2)B†[ A + (M + BU)(M + BU)* ](2I_m − BB†) + VB* + F_B W, (2.12)

where M = (−E_B A E_B)^{1/2}, and U, W ∈ C^{n×m} and V = −V* ∈ C^{n×n} are arbitrary.

(b) There exists an X ∈ C^{n×m} such that

BX + (BX)* > A (2.13)

if and only if

E_B A E_B ≤ 0 and r(E_B A E_B) = r(E_B). (2.14)

In this case, the general solution of (2.13) can be written as (2.12), in which U is any matrix such that r[(−E_B A E_B)^{1/2} + BU] = m, and W ∈ C^{n×m} and V = −V* ∈ C^{n×n} are arbitrary.

(c) There exists an X ∈ C^{n×m} such that

BX + (BX)* ≤ A (2.15)

if and only if

E_B A E_B ≥ 0. (2.16)

In this case, the general solution of (2.15) can be written in the parametric form

X = (1/2)B†[ A − (M + BU)(M + BU)* ](2I_m − BB†) + VB* + F_B W, (2.17)

where M = (E_B A E_B)^{1/2}, and U, W ∈ C^{n×m} and V = −V* ∈ C^{n×n} are arbitrary.

(d) There exists an X ∈ C^{n×m} such that

BX + (BX)* < A (2.18)

if and only if

E_B A E_B ≥ 0 and r(E_B A E_B) = r(E_B). (2.19)

In this case, the general solution of (2.18) can be written as (2.17), in which U is any matrix such that r[(E_B A E_B)^{1/2} + BU] = m, and W ∈ C^{n×m} and V = −V* ∈ C^{n×n} are arbitrary.

Assuming A ≥ 0 in Lemma 2.1 and Theorem 2.2, and applying (1.7) and Lemma 1.7(a), leads to the following result.

Corollary 2.4 Let p(X) be as given in (1.2), and assume A ≥ 0. Then,

max_{X∈C^{n×m}} r[p(X)] = min{ m, r[A, B] + r(B) }, (2.20)

min_{X∈C^{n×m}} r[p(X)] = r[A, B] − r(B), (2.21)

max_{X∈C^{n×m}} i_+[p(X)] = r[A, B], (2.22)

min_{X∈C^{n×m}} i_+[p(X)] = r[A, B] − r(B), (2.23)

max_{X∈C^{n×m}} i_−[p(X)] = r(B), (2.24)

min_{X∈C^{n×m}} i_−[p(X)] = 0. (2.25)
The expressions of the matrices X satisfying (2.20)-(2.25) can routinely be derived from the previous results, and are therefore omitted.

The results in the previous theorem and corollaries can be used to derive algebraic properties of various matrix expressions that can be written in the form of p(X) in (1.2). For instance, the Hermitian part of the linear matrix expression A + BX can be written as

(A + A*)/2 + [ BX + (BX)* ]/2;

the Hermitian part of the linear matrix expression A + BX + YC can be written as

(1/2)(A + A*) + (1/2)[B, C*][X; Y*] + (1/2)[X*, Y][B*; C].

Hence, some formulas for the extremal ranks and partial inertias of the Hermitian parts of A + BX and A + BX + YC can trivially be derived from Lemma 2.1 and Theorem 2.2. Some previous work on the inertia of the Hermitian part of A + BX was given in [31].

Furthermore, the results in Lemma 2.1 and Theorem 2.2 can be used to characterize relations between the following two matrix expressions:

p_1(X_1) = A_1 + B_1X_1 + (B_1X_1)*, p_2(X_2) = A_2 + B_2X_2 + (B_2X_2)*, (2.26)

where A_j ∈ C_H^m and B_j ∈ C^{m×n_j} are given, and X_j ∈ C^{n_j×m} is a variable matrix, j = 1, 2.

Theorem 2.5 Let p_1(X_1) and p_2(X_2) be as given in (2.26), and denote

B = [B_1, B_2], M = [A_1 − A_2, B; B*, 0].

Then,

max_{X_1∈C^{n_1×m}, X_2∈C^{n_2×m}} r[p_1(X_1) − p_2(X_2)] = min{ m, r(M) }, (2.27)

min_{X_1∈C^{n_1×m}, X_2∈C^{n_2×m}} r[p_1(X_1) − p_2(X_2)] = r(M) − 2r(B), (2.28)

max_{X_1∈C^{n_1×m}, X_2∈C^{n_2×m}} i_±[p_1(X_1) − p_2(X_2)] = i_±(M), (2.29)

min_{X_1∈C^{n_1×m}, X_2∈C^{n_2×m}} i_±[p_1(X_1) − p_2(X_2)] = i_±(M) − r(B). (2.30)

Hence,

(a) There exist X_1 ∈ C^{n_1×m} and X_2 ∈ C^{n_2×m} such that p_1(X_1) − p_2(X_2) is nonsingular if and only if r(M) ≥ m.

(b) p_1(X_1) − p_2(X_2) is nonsingular for all X_1 ∈ C^{n_1×m} and X_2 ∈ C^{n_2×m} if and only if r(A_1 − A_2) = m and B = 0.

(c) There exist X_1 ∈ C^{n_1×m} and X_2 ∈ C^{n_2×m} such that p_1(X_1) = p_2(X_2) if and only if r(M) = 2r(B).

(d) p_1(X_1) = p_2(X_2) for all X_1 ∈ C^{n_1×m} and X_2 ∈ C^{n_2×m} if and only if A_1 = A_2 and B = 0.
(e) There exist X_1 ∈ C^{n_1×m} and X_2 ∈ C^{n_2×m} such that p_1(X_1) > p_2(X_2) (p_1(X_1) < p_2(X_2)) if and only if i_+(M) = m (i_−(M) = m).

(f) p_1(X_1) > p_2(X_2) (p_1(X_1) < p_2(X_2)) for all X_1 ∈ C^{n_1×m} and X_2 ∈ C^{n_2×m} if and only if i_+(M) − r(B) = m (i_−(M) − r(B) = m).

(g) There exist X_1 ∈ C^{n_1×m} and X_2 ∈ C^{n_2×m} such that p_1(X_1) ≥ p_2(X_2) (p_1(X_1) ≤ p_2(X_2)) if and only if i_−(M) = r(B) (i_+(M) = r(B)).

(h) p_1(X_1) ≥ p_2(X_2) (p_1(X_1) ≤ p_2(X_2)) for all X_1 ∈ C^{n_1×m} and X_2 ∈ C^{n_2×m} if and only if A_1 ≥ A_2 and B = 0 (A_1 ≤ A_2 and B = 0).
Proof The difference of p_1(X_1) and p_2(X_2) in (2.26) can be written as

p_1(X_1) − p_2(X_2) = A_1 − A_2 + [B_1, B_2][X_1; −X_2] + [X_1*, −X_2*][B_1*; B_2*]. (2.31)

Applying Lemma 2.1 and Theorem 2.2 to this matrix expression leads to (2.27)-(2.30). Results (a)-(h) follow from (2.27)-(2.30) and Lemma 1.2.

The following result was recently shown in [63].

Lemma 2.6 Let A ∈ C_H^m, B ∈ C^{m×n} and C ∈ C^{m×p} be given, and denote N = [B, C]. Then,

max_{X∈C_H^n, Y∈C_H^p} i_±(A − BXB* − CYC*) = i_±([A, N; N*, 0]), (2.32)

min_{X∈C_H^n, Y∈C_H^p} i_±(A − BXB* − CYC*) = r[A, N] − i_∓([A, N; N*, 0]). (2.33)

Combining Theorem 2.2 with Lemma 2.6 leads to the following result.

Theorem 2.7 Let A ∈ C_H^m, B ∈ C^{m×n}, C ∈ C^{m×p} and D ∈ C^{m×q} be given, and denote

p(X, Y, Z) = A − BX − (BX)* − CYC* − DZD*, N = [B, C, D]. (2.34)

Then,

max_{X∈C^{n×m}, Y∈C_H^p, Z∈C_H^q} i_±[p(X, Y, Z)] = i_±([A, N; N*, 0]), (2.35)

min_{X∈C^{n×m}, Y∈C_H^p, Z∈C_H^q} i_±[p(X, Y, Z)] = r([A, N; B*, 0]) − r(B) − i_∓([A, N; N*, 0]). (2.36)

If A ≥ 0, then

max_{X∈C^{n×m}, Y∈C_H^p, Z∈C_H^q} i_+[p(X, Y, Z)] = r[A, B, C, D], (2.37)

max_{X∈C^{n×m}, Y∈C_H^p, Z∈C_H^q} i_−[p(X, Y, Z)] = r[B, C, D], (2.38)

min_{X∈C^{n×m}, Y∈C_H^p, Z∈C_H^q} i_+[p(X, Y, Z)] = r[A, B, C, D] − r[B, C, D], (2.39)

min_{X∈C^{n×m}, Y∈C_H^p, Z∈C_H^q} i_−[p(X, Y, Z)] = 0. (2.40)

Proof Applying (2.3) and (2.5) to (2.34) gives

max_{X∈C^{n×m}} i_±[p(X, Y, Z)] = i_±([A − CYC* − DZD*, B; B*, 0]), (2.41)

min_{X∈C^{n×m}} i_±[p(X, Y, Z)] = i_±([A − CYC* − DZD*, B; B*, 0]) − r(B). (2.42)

Note that

[A − CYC* − DZD*, B; B*, 0] = [A, B; B*, 0] − [C; 0] Y [C*, 0] − [D; 0] Z [D*, 0]. (2.43)

Applying (2.32) and (2.33) to (2.43) gives

max_{Y, Z} i_±([A, B; B*, 0] − [C; 0] Y [C*, 0] − [D; 0] Z [D*, 0]) = i_±([A, N; N*, 0]),

min_{Y, Z} i_±([A, B; B*, 0] − [C; 0] Y [C*, 0] − [D; 0] Z [D*, 0]) = r([A, N; B*, 0]) − i_∓([A, N; N*, 0]).

Substituting these into (2.41) and (2.42) produces (2.35) and (2.36), respectively.
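One direction of Lemma 2.6 is easy to test empirically: for any particular Hermitian X and Y, the partial inertias of A − BXB* − CYC* must lie between the bounds (2.33) and (2.32). A sketch (not from the paper; real matrices):

```python
import numpy as np

# Sample random Hermitian X, Y and check the inertia of A - BXB* - CYC*
# stays within the bounds of Lemma 2.6, with N = [B, C].
def inertia(A, tol=1e-9):
    w = np.linalg.eigvalsh(A)
    return int(np.sum(w > tol)), int(np.sum(w < -tol))

rng = np.random.default_rng(6)
m, n, p = 5, 2, 2
G = rng.standard_normal((m, m)); A = (G + G.T) / 2
B = rng.standard_normal((m, n)); C = rng.standard_normal((m, p))
N = np.hstack([B, C])

big = np.block([[A, N], [N.T, np.zeros((n + p, n + p))]])
ub = inertia(big)                                        # upper bounds (2.32)
rAN = np.linalg.matrix_rank(np.hstack([A, N]))
lb = (rAN - ub[1], rAN - ub[0])                          # lower bounds (2.33)

for _ in range(50):
    H1 = rng.standard_normal((n, n)); X = (H1 + H1.T) / 2
    H2 = rng.standard_normal((p, p)); Y = (H2 + H2.T) / 2
    ip, im_ = inertia(A - B @ X @ B.T - C @ Y @ C.T)
    assert lb[0] <= ip <= ub[0] and lb[1] <= im_ <= ub[1]
```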
Eqs. (2.35) and (2.36) simplify further if the given matrices satisfy some restrictions. For instance, if R(B) ⊆ R[C, D], then

max_{X∈C^{n×m}, Y∈C_H^p, Z∈C_H^q} i_±[p(X, Y, Z)] = i_±([A, N; N*, 0]), (2.44)

min_{X∈C^{n×m}, Y∈C_H^p, Z∈C_H^q} i_±[p(X, Y, Z)] = r([A, N; B*, 0]) − r(B) − i_∓([A, N; N*, 0]), (2.45)

where N = [C, D]. We shall use (2.44) and (2.45) in Section 4 to characterize the existence of a nonnegative definite solution of the matrix equation AXB = C.

In the remainder of this section, we give the extremal values of the rank and partial inertias of A − BX − (BX)* subject to a consistent matrix equation CX = D.

Theorem 2.8 Let p(X) be as given in (1.2), and assume the matrix equation CX = D is solvable for X ∈ C^{n×m}, where C ∈ C^{p×n} and D ∈ C^{p×m} are given. Also, denote

M = [A, B, D*; B*, 0, C*; D, C, 0], N = [B; C].

Then,

max_{CX=D} r[p(X)] = min{ m, r(M) − 2r(C) }, (2.46)

min_{CX=D} r[p(X)] = r(M) − 2r(N), (2.47)

max_{CX=D} i_±[p(X)] = i_±(M) − r(C), (2.48)

min_{CX=D} i_±[p(X)] = i_±(M) − r(N). (2.49)

Hence,

(a) CX = D has a solution X such that p(X) is nonsingular if and only if r(M) ≥ m + 2r(C).

(b) p(X) is nonsingular for all solutions of CX = D if and only if r(M) = 2r(N) + m.

(c) The two equations CX = D and BX + (BX)* = A have a common solution if and only if r(M) = 2r(N).

(d) Every solution of CX = D satisfies BX + (BX)* = A if and only if r(M) = 2r(C).

(e) The rank of p(X) is invariant subject to CX = D if and only if r(M) = 2r(N) + m or R(B*) ⊆ R(C*).

(f) CX = D has a solution X satisfying p(X) > 0 (p(X) < 0) if and only if i_+(M) = r(C) + m (i_−(M) = r(C) + m).

(g) p(X) > 0 (p(X) < 0) for all solutions of CX = D if and only if i_+(M) = r(N) + m (i_−(M) = r(N) + m).

(h) CX = D has a solution X satisfying p(X) ≥ 0 (p(X) ≤ 0) if and only if i_−(M) = r(N) (i_+(M) = r(N)).

(i) Every solution of CX = D satisfies p(X) ≥ 0 (p(X) ≤ 0) if and only if i_−(M) = r(C) (i_+(M) = r(C)).

(j) i_+[p(X)] is invariant subject to CX = D, if and only if i_−[p(X)] is invariant subject to CX = D, if and only if R(B*) ⊆ R(C*).

Proof Note from Lemma 1.8(a) that the general solution of CX = D can be written as X = C†D + F_C V, where V ∈ C^{n×m} is arbitrary.
Substituting it into p(X) gives rise to

p(X) = A − BC†D − (BC†D)* − BF_C V − V*(BF_C)*. (2.50)
Applying (2.1), (2.2), (2.3) and (2.5) to it gives

max_{CX=D} r[p(X)] = max_{V∈C^{n×m}} r[A − BC†D − (BC†D)* − BF_C V − V*(BF_C)*] = min{ m, r([A − BC†D − (BC†D)*, BF_C; F_C B*, 0]) }, (2.51)

min_{CX=D} r[p(X)] = min_{V∈C^{n×m}} r[A − BC†D − (BC†D)* − BF_C V − V*(BF_C)*] = r([A − BC†D − (BC†D)*, BF_C; F_C B*, 0]) − 2r(BF_C), (2.52)

max_{CX=D} i_±[p(X)] = max_{V∈C^{n×m}} i_±[A − BC†D − (BC†D)* − BF_C V − V*(BF_C)*] = i_±([A − BC†D − (BC†D)*, BF_C; F_C B*, 0]), (2.53)

min_{CX=D} i_±[p(X)] = min_{V∈C^{n×m}} i_±[A − BC†D − (BC†D)* − BF_C V − V*(BF_C)*] = i_±([A − BC†D − (BC†D)*, BF_C; F_C B*, 0]) − r(BF_C). (2.54)

Applying (1.14) and (1.15), and simplifying by CC†D = D and EBCMOs, we obtain

i_±([A − BC†D − (BC†D)*, BF_C; F_C B*, 0]) = i_±([A − BC†D − (BC†D)*, B, 0; B*, 0, C*; 0, C, 0]) − r(C) = i_±([A, B, D*; B*, 0, C*; D, C, 0]) − r(C) = i_±(M) − r(C), (2.55)

r([A − BC†D − (BC†D)*, BF_C; F_C B*, 0]) = r(M) − 2r(C), (2.56)

r(BF_C) = r([B; C]) − r(C) = r(N) − r(C). (2.57)

Substituting (2.55), (2.56) and (2.57) into (2.51)-(2.54) yields (2.46)-(2.49). Results (a)-(j) follow from (2.46)-(2.49) and Lemma 1.2.

3 Extremal values of ranks/inertias of Hermitian parts of solutions to some matrix equations

As applications of the results in Section 2, we derive in this section the extremal values of the ranks and partial inertias of the Hermitian parts of solutions of the two equations in (1.18) and (1.20), and give some direct consequences of these extremal values.

Theorem 3.1 Let A, B ∈ C^{m×n} be given, and assume the matrix equation AX = B is solvable for X ∈ C^{n×n}. Then,

(a) The maximal value of the rank of X + X* is

max_{AX=B} r(X + X*) = min{ n, 2n + r(AB* + BA*) − 2r(A) }. (3.1)

(b) The minimal value of the rank of X + X* is

min_{AX=B} r(X + X*) = r(AB* + BA*). (3.2)

A matrix X ∈ C^{n×n} satisfying (3.2) is given by

X = A†B − (A†B)* + A†AB*(A†)* + F_A U F_A, (3.3)

where U = −U* ∈ C^{n×n} is arbitrary.
(c) The maximal values of the partial inertias of X + X* are

max_{AX=B} i_±(X + X*) = n + i_±(AB* + BA*) − r(A). (3.4)

A matrix X ∈ C^{n×n} satisfying the two formulas in (3.4) is given by

X = A†B − (A†B)* + A†AB*(A†)* ± F_A + F_A U F_A, (3.5)

where U = −U* ∈ C^{n×n} is arbitrary.

(d) The minimal values of the partial inertias of X + X* are

min_{AX=B} i_±(X + X*) = i_±(AB* + BA*). (3.6)

A matrix X ∈ C^{n×n} satisfying the two formulas in (3.6) is given by (3.3).

In particular,

(e) AX = B has a solution such that X + X* is nonsingular if and only if r(AB* + BA*) ≥ 2r(A) − n.

(f) X + X* is nonsingular for all solutions of AX = B if and only if r(AB* + BA*) = n.

(g) AX = B has a solution satisfying X + X* = 0, i.e., AX = B has a skew-Hermitian solution, if and only if AB* + BA* = 0. Such a solution is given by

X = A†B − (A†B)* + A†AB*(A†)* + F_A U F_A, (3.7)

where U = −U* ∈ C^{n×n} is arbitrary.

(h) Every solution of AX = B satisfies X + X* = 0 if and only if r(AB* + BA*) = 2r(A) − 2n.

(i) The rank of X + X* is invariant subject to AX = B if and only if r(AB* + BA*) = n or r(A) = n.

(j) AX = B has a solution satisfying X + X* > 0, i.e., AX = B has a Re-positive definite solution, if and only if i_+(AB* + BA*) = r(A). Such a solution is given by

X = A†B − (A†B)* + A†AB*(A†)* + F_A + F_A U F_A, (3.8)

where U = −U* ∈ C^{n×n} is arbitrary.

(k) AX = B has a solution X ∈ C^{n×n} satisfying X + X* < 0, i.e., AX = B has a Re-negative definite solution, if and only if i_−(AB* + BA*) = r(A). Such a matrix is given by

X = A†B − (A†B)* + A†AB*(A†)* − F_A + F_A U F_A, (3.9)

where U = −U* ∈ C^{n×n} is arbitrary.

(l) Every solution of AX = B satisfies X + X* > 0 (< 0) if and only if i_+(AB* + BA*) = n (i_−(AB* + BA*) = n).

(m) AX = B has a solution satisfying X + X* ≥ 0, i.e., AX = B has a Re-nonnegative definite solution, if and only if AB* + BA* ≥ 0. Such a matrix is given by

X = A†B − (A†B)* + A†AB*(A†)* + F_A(U + WW*)F_A, (3.10)

where U = −U* ∈ C^{n×n} and W ∈ C^{n×n} are arbitrary.
(n) AX = B has a solution X satisfying X + X^* ≤ 0, i.e., AX = B has a Re-nonpositive definite solution, if and only if AB^* + BA^* ≤ 0. Such a matrix is given by

X = A^†B - (A^†B)^* + A^†AB^*(A^†)^* + F_A(U - WW^*)F_A,  (3.11)

where U = -U^* ∈ C^{n×n} and W ∈ C^{n×n} are arbitrary.

(o) All solutions of AX = B satisfy X + X^* ≥ 0 (≤ 0) if and only if AB^* + BA^* ≥ 0 and r(A) = n (AB^* + BA^* ≤ 0 and r(A) = n).
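The Re-positive definite construction in part (j) can be exercised numerically. The sketch below (ours, not from the paper; real matrices, ^* read as transpose) picks B so that the existence condition i_+(AB^T + BA^T) = r(A) holds by design, then checks that formula (3.8) with U = 0 delivers a solution with positive definite Hermitian part:

```python
import numpy as np

rng = np.random.default_rng(1)

# Real sketch of Theorem 3.1(j): take B = A X0 with X0 + X0^T = 2I > 0,
# which forces i_+(AB^T + BA^T) = r(A); then
#   X = A^+ B - (A^+ B)^T + A^+ A B^T (A^+)^T + F_A,   F_A = I - A^+ A,
# (formula (3.8) with U = 0) is a Re-positive definite solution of AX = B.
m, n, k = 3, 5, 2
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # rank-deficient A
K = rng.standard_normal((n, n)); K = K - K.T                   # skew matrix
X0 = np.eye(n) + K                                             # X0 + X0^T = 2I > 0
B = A @ X0

Ap = np.linalg.pinv(A)
FA = np.eye(n) - Ap @ A
X = Ap @ B - (Ap @ B).T + Ap @ A @ B.T @ Ap.T + FA

assert np.allclose(A @ X, B)                     # X solves AX = B
eigs = np.linalg.eigvalsh(X + X.T)
assert eigs.min() > 1e-8                         # X + X^T is positive definite
```

For this particular X0 one can even check by hand that X + X^T collapses to 2I, which makes the positive definiteness transparent.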
(p) i_+(X + X^*) is invariant subject to AX = B ⇔ i_-(X + X^*) is invariant subject to AX = B ⇔ r(A) = n.

Proof. Setting A = 0 and B = -I_n, and replacing C and D with A and B in (2.46)-(2.49), we obtain (3.1), (3.2), (3.4) and (3.6). It is easy to verify that (3.3) satisfies AX = B. Substituting (3.3) into X + X^* gives

X + X^* = A^†B - (A^†B)^* + (A^†B)^* - A^†B + A^†AB^*(A^†)^* + A^†BA^†A + F_A U F_A - F_A U F_A = A^†(AB^* + BA^*)(A^†)^*.  (3.12)

Also, note that

A(X + X^*)A^* = AA^†(AB^* + BA^*)(AA^†)^* = AB^* + BA^*.  (3.13)

Both (3.12) and (3.13) imply that r(X + X^*) = r(AB^* + BA^*), that is, (3.3) satisfies (3.2). It is easy to verify that (3.5) satisfies AX = B. Substituting (3.5) into X + X^* gives

X + X^* = A^†(AB^* + BA^*)(A^†)^* ± 2F_A.  (3.14)

Also, note that R(A^*) ∩ R(F_A) = {0}. Hence, (3.14) implies that

i_±(X + X^*) = i_±(AB^* + BA^*) + i_±(±2F_A) = n + i_±(AB^* + BA^*) - r(A),

that is, (3.5) satisfies (3.4). It is easy to verify that (3.3) satisfies AX = B and (3.6). Results (e)-(p) follow from (a)-(d) and Lemma 1.2. □

The Re-nonnegative definite solutions of the matrix equation AX = B were considered in [11, 19, 73, 74]. Theorem 3.1(m) was partially given in these papers. In addition to the Re-nonnegative definite solutions, we are also able to derive from (2.46)-(2.49) the solutions of AX = B that satisfy X + X^* > P (< P, ≥ P, ≤ P).

In what follows, we derive the extremal values of the ranks and partial inertias of the Hermitian parts of solutions of the matrix equation AXB = C.

Theorem 3.2 Let A ∈ C^{m×n}, B ∈ C^{n×p} and C ∈ C^{m×p} be given, and assume that the matrix equation AXB = C is solvable for X ∈ C^{n×n}. Also, denote

M = [ 0  C  A ; C^*  0  B^* ; A^*  B  0 ],  N = [ A^*, B ].

Then,

max_{AXB=C} r(X + X^*) = min{ n, 2n + r(M) - 2r(A) - 2r(B) },  (3.15)
min_{AXB=C} r(X + X^*) = r(M) - 2r(N),  (3.16)
max_{AXB=C} i_±(X + X^*) = n + i_∓(M) - r(A) - r(B),  (3.17)
min_{AXB=C} i_±(X + X^*) = i_∓(M) - r(N).  (3.18)

Hence,

(a) AXB = C has a solution such that X + X^* is nonsingular if and only if r(M) ≥ 2r(A) + 2r(B) - n.

(b) X + X^* is nonsingular for all solutions of AXB = C if and only if r(M) = 2r(N) + n.
(c) AXB = C has a solution X ∈ C^{n×n} satisfying X + X^* = 0, i.e., AXB = C has a skew-Hermitian solution, if and only if r(M) = 2r(N).

(d) All solutions of AXB = C are skew-Hermitian if and only if r(M) = 2r(A) + 2r(B) - 2n.

(e) The rank of X + X^* subject to AXB = C is invariant if and only if r(M) = 2r(N) + n or r(A) = r(B) = n.
(f) AXB = C has a solution satisfying X + X^* > 0 (X + X^* < 0), i.e., AXB = C has a Re-positive definite solution (a Re-negative definite solution), if and only if i_-(M) = r(A) + r(B) (i_+(M) = r(A) + r(B)).

(g) All solutions of AXB = C satisfy X + X^* > 0 (X + X^* < 0) if and only if i_-(M) = r(N) + n (i_+(M) = r(N) + n).

(h) AXB = C has a solution X ∈ C^{n×n} satisfying X + X^* ≥ 0 (X + X^* ≤ 0), i.e., AXB = C has a Re-nonnegative definite solution (a Re-nonpositive definite solution), if and only if i_+(M) = r(N) (i_-(M) = r(N)).

(i) All solutions of AXB = C satisfy X + X^* ≥ 0 (X + X^* ≤ 0) if and only if i_+(M) = r(A) + r(B) - n (i_-(M) = r(A) + r(B) - n).

(j) i_+(X + X^*) is invariant subject to AXB = C ⇔ i_-(X + X^*) is invariant subject to AXB = C ⇔ r(A) = r(B) = n.

Proof. Note from Lemma 1.8(b) that if AXB = C is consistent, then X + X^* for the general solution of AXB = C can be written as

X + X^* = A^†CB^† + (A^†CB^†)^* + [ F_A, E_B ]V + V^*[ F_A, E_B ]^*,  (3.19)

where V = [ V_1 ; V_2 ] ∈ C^{2n×n} is arbitrary. Applying (2.1), (2.2), (2.3) and (2.5) to (3.19) gives

max_{AXB=C} r(X + X^*) = min{ n, r(J) },  (3.20)
min_{AXB=C} r(X + X^*) = r(J) - 2r[ F_A, E_B ],  (3.21)
max_{AXB=C} i_±(X + X^*) = i_±(J),  (3.22)
min_{AXB=C} i_±(X + X^*) = i_±(J) - r[ F_A, E_B ],  (3.23)

where

J = [ A^†CB^† + (A^†CB^†)^*  F_A  E_B ; F_A  0  0 ; E_B  0  0 ].
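Before completing the proof, the rank criterion in part (c) is easy to probe numerically. The sketch below (ours; real matrices, with skew-Hermitian read as skew-symmetric and ^* as transpose) builds a right-hand side C = AKB from a skew K, so a skew solution exists by construction, and checks that r(M) = 2r(N) holds:

```python
import numpy as np

rng = np.random.default_rng(2)

# Real sketch of Theorem 3.2(c): AXB = C has a skew solution
# iff r(M) = 2 r(N), with M = [[0, C, A], [C^T, 0, B^T], [A^T, B, 0]]
# and N = [A^T, B].  Taking C = A K B with K skew guarantees existence.
m, n, p = 3, 5, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
K = rng.standard_normal((n, n)); K = K - K.T       # a skew solution
C = A @ K @ B

M = np.block([[np.zeros((m, m)), C, A],
              [C.T, np.zeros((p, p)), B.T],
              [A.T, B, np.zeros((n, n))]])
N = np.hstack([A.T, B])
assert np.linalg.matrix_rank(M) == 2 * np.linalg.matrix_rank(N)
```

Block row/column eliminations using C = AKB and K = −K^T reduce M to [0 0 A; 0 0 B^T; A^T B 0], which makes the identity r(M) = 2r(N) visible directly in this case.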
Applying (1.14) and simplifying by AA^†CB^†B = AA^†C = CB^†B = C and a sequence of elementary block congruence matrix operations, we obtain

i_±(J) = n + i_∓(M) - r(A) - r(B).  (3.24)

Applying (1.5) and simplifying by elementary block matrix operations, we obtain

r[ F_A, E_B ] = r[ I_n  I_n ; A  0 ; 0  B^* ] - r(A) - r(B) = n + r[ A^*, B ] - r(A) - r(B).  (3.25)

Adding the two equalities in (3.24) gives

r(J) = 2n + r(M) - 2r(A) - 2r(B).  (3.26)

Substituting (3.24), (3.25) and (3.26) into (3.20)-(3.23) yields (3.15)-(3.18). Results (a)-(j) follow from (3.15)-(3.18) and Lemma 1.2. □

The existence of Re-definite solutions of the matrix equation AXB = C was considered, e.g., in [11, 71, 73, 74, 75], and some identifying conditions were derived there through matrix decompositions and generalized inverses of matrices. In comparison, Theorem 3.2 shows that the existence of skew-Hermitian and Re-definite solutions of the matrix equation AXB = C can be characterized by explicit equalities for the rank and partial inertia of a Hermitian block matrix composed of the given matrices in the equation.

Theorem 3.3 Let A ∈ C^{m×n}, B ∈ C^{n×p}, C ∈ C^{m×p} and P ∈ C_H^n be given, and assume that the matrix equation AXB = C is solvable for X ∈ C^{n×n}. Also, denote

M = [ APA^*  C  A ; C^*  0  B^* ; A^*  B  0 ],  N = [ A^*, B ].
Then,

max_{AXB=C} r(X + X^* - P) = min{ n, 2n + r(M) - 2r(A) - 2r(B) },  (3.27)
min_{AXB=C} r(X + X^* - P) = r(M) - 2r(N),  (3.28)
max_{AXB=C} i_±(X + X^* - P) = n + i_∓(M) - r(A) - r(B),  (3.29)
min_{AXB=C} i_±(X + X^* - P) = i_∓(M) - r(N).  (3.30)

Hence,

(a) AXB = C has a solution such that X + X^* - P is nonsingular if and only if r(M) ≥ 2r(A) + 2r(B) - n.

(b) X + X^* - P is nonsingular for all solutions of AXB = C if and only if r(M) = 2r(N) + n.

(c) AXB = C has a solution X ∈ C^{n×n} satisfying X + X^* = P if and only if r(M) = 2r(N).

(d) All solutions of AXB = C satisfy X + X^* = P if and only if r(M) = 2r(A) + 2r(B) - 2n.

(e) The rank of X + X^* - P subject to AXB = C is invariant if and only if r(M) = 2r(N) + n or r(A) = r(B) = n.

(f) AXB = C has a solution satisfying X + X^* > P (X + X^* < P) if and only if i_-(M) = r(A) + r(B) (i_+(M) = r(A) + r(B)).

(g) All solutions of AXB = C satisfy X + X^* > P (X + X^* < P) if and only if i_-(M) = r(N) + n (i_+(M) = r(N) + n).

(h) AXB = C has a solution X ∈ C^{n×n} satisfying X + X^* ≥ P (X + X^* ≤ P) if and only if i_+(M) = r(N) (i_-(M) = r(N)).

(i) All solutions of AXB = C satisfy X + X^* ≥ P (X + X^* ≤ P) if and only if i_+(M) = r(A) + r(B) - n (i_-(M) = r(A) + r(B) - n).

(j) i_+(X + X^* - P) is invariant subject to AXB = C ⇔ i_-(X + X^* - P) is invariant subject to AXB = C ⇔ r(A) = r(B) = n.

Proof. Note from Lemma 1.8(b) that if AXB = C is consistent, then the general expression of X + X^* - P can be written as

X + X^* - P = A^†CB^† + (A^†CB^†)^* - P + [ F_A, E_B ]V + V^*[ F_A, E_B ]^*,  (3.31)

where V ∈ C^{2n×n} is arbitrary. Applying (2.1), (2.2), (2.3) and (2.5) to (3.31) gives

max_{AXB=C} r(X + X^* - P) = min{ n, r(J) },  (3.32)
min_{AXB=C} r(X + X^* - P) = r(J) - 2r[ F_A, E_B ],  (3.33)
max_{AXB=C} i_±(X + X^* - P) = i_±(J),  (3.34)
min_{AXB=C} i_±(X + X^* - P) = i_±(J) - r[ F_A, E_B ],  (3.35)

where

J = [ A^†CB^† + (A^†CB^†)^* - P  F_A  E_B ; F_A  0  0 ; E_B  0  0 ].
Applying (1.14) and simplifying by AA^†CB^†B = AA^†C = CB^†B = C and a sequence of elementary block congruence matrix operations, we obtain

i_±(J) = n + i_∓(M) - r(A) - r(B).  (3.36)

Adding the two equalities in (3.36) gives

r(J) = 2n + r(M) - 2r(A) - 2r(B).  (3.37)

Substituting (3.36), (3.37) and (3.25) into (3.32)-(3.35) yields (3.27)-(3.30). Results (a)-(j) follow from (3.27)-(3.30) and Lemma 1.2. □

Recalling that a generalized inverse A^- of a matrix A is a solution of the matrix equation AXA = A, we apply Theorem 3.2 to AXA = A to produce the following result.

Corollary 3.4 Let A ∈ C^{m×m}. Then,

min_{A^-} r[ A^- + (A^-)^* ] = r(A + A^*) + 2r(A) - 2r[ A, A^* ],  (3.38)
min_{A^-} i_±[ A^- + (A^-)^* ] = i_±(A + A^*) + r(A) - r[ A, A^* ].  (3.39)

Hence,

(a) There exists an A^- such that A^- + (A^-)^* = 0 if and only if r(A + A^*) + 2r(A) = 2r[ A, A^* ].

(b) There exists an A^- such that A^- + (A^-)^* ≤ 0 if and only if i_+(A + A^*) + r(A) = r[ A, A^* ].

(c) There exists an A^- such that A^- + (A^-)^* ≥ 0 if and only if i_-(A + A^*) + r(A) = r[ A, A^* ].
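The quantities in Corollary 3.4 can be sampled numerically. The following sketch (ours; real matrices, ^* as transpose) uses the standard parameterization A^- = A^† + F_A V + W E_A of the g-inverses of A, checks the g-inverse property, and verifies that the Hermitian part of every sampled g-inverse respects the lower bound implied by (3.38):

```python
import numpy as np

rng = np.random.default_rng(3)

# Real sketch around Corollary 3.4: every matrix of the form
#   G = A^+ + F_A V + W E_A,  F_A = I - A^+ A,  E_A = I - A A^+,
# satisfies AGA = A, and by (3.38) its Hermitian part obeys
#   r(G + G^T) >= r(A + A^T) + 2 r(A) - 2 r([A, A^T]).
m, k = 5, 3
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, m))  # singular square A
Ap = np.linalg.pinv(A)
FA = np.eye(m) - Ap @ A
EA = np.eye(m) - A @ Ap

bound = (np.linalg.matrix_rank(A + A.T) + 2 * np.linalg.matrix_rank(A)
         - 2 * np.linalg.matrix_rank(np.hstack([A, A.T])))

for _ in range(5):
    V = rng.standard_normal((m, m))
    W = rng.standard_normal((m, m))
    G = Ap + FA @ V + W @ EA
    assert np.allclose(A @ G @ A, A)                # G is a g-inverse of A
    assert np.linalg.matrix_rank(G + G.T) >= bound  # lower bound from (3.38)
```

Random sampling can only confirm the bound direction, not that the minimum in (3.38) is attained; attaining it requires the specific choices constructed in the proof.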
4 Extremal values of ranks/inertias of Hermitian solutions to some matrix equations

Hermitian solutions and definite solutions of the matrix equations AX = B and AXB = C were considered in the literature, and various results were derived; see, e.g., [3, 33]. In this section, we derive some new results on Hermitian and definite solutions of AXB = C through the matrix rank/inertia method.

Theorem 4.1 Let A ∈ C^{m×n}, B ∈ C^{n×p} and C ∈ C^{m×p} be given, and assume that the matrix equation AXB = C is solvable for X ∈ C^{n×n}. Then,

min_{AXB=C} r(X - X^*) = r[ 0  C  A ; -C^*  0  B^* ; A^*  B  0 ] - 2r[ A^*, B ].  (4.1)

Hence, the following statements are equivalent:

(a) The matrix equation AXB = C has a Hermitian solution for X.

(b) The pair of matrix equations

AYB = C and B^*YA^* = C^*  (4.2)

have a common solution for Y.

(c) R(C) ⊆ R(A), R(C^*) ⊆ R(B^*), and

r[ 0  C  A ; -C^*  0  B^* ; A^*  B  0 ] = 2r[ A^*, B ].  (4.3)

In this case, the general Hermitian solution to AXB = C can be written as

X = (1/2)(Y + Y^*),  (4.4)

where Y is the general common solution to (4.2), or equivalently,

X = (1/2)(Y_0 + Y_0^*) + E_G U_1 + (E_G U_1)^* + F_A U_2 F_A + E_B U_3 E_B,  (4.5)

where Y_0 is a special common solution to (4.2), G = [ A^*, B ], and the three matrices U_1 ∈ C^{n×n}, U_2, U_3 ∈ C_H^n are arbitrary.

Proof. Note from (1.21) that the difference X - X^* for the general solution of AXB = C can be written as

X - X^* = A^†CB^† - (A^†CB^†)^* + F_A V_1 + V_2 E_B - (F_A V_1)^* - (V_2 E_B)^* = A^†CB^† - (A^†CB^†)^* + [ F_A, E_B ]V - V^*[ F_A, E_B ]^*,

where V = [ V_1 ; V_2 ] is arbitrary. Applying (1.5) and simplifying by elementary block matrix operations, we obtain

r[ F_A, E_B ] = n + r[ A^*, B ] - r(A) - r(B).  (4.6)
Applying (2.2) to it and simplifying by (1.15), (4.6), AA^†C = CB^†B = C and elementary block matrix operations, we obtain

min_{AXB=C} r(X - X^*)
= r[ A^†CB^† - (A^†CB^†)^*  F_A  E_B ; F_A  0  0 ; E_B  0  0 ] - 2r[ F_A, E_B ]
= r[ 0  C  A ; -C^*  0  B^* ; A^*  B  0 ] + 2n - 2r(A) - 2r(B) - 2r[ F_A, E_B ]
= r[ 0  C  A ; -C^*  0  B^* ; A^*  B  0 ] - 2r[ A^*, B ],

establishing (4.1). Equating the right-hand side of (4.1) to zero leads to the equivalence of (a) and (c). Also, note that if AXB = C has a Hermitian solution X, then it satisfies B^*XA^* = C^*, that is to say, the pair of equations in (4.2) have a common solution X. Conversely, if the pair of equations in (4.2) have a common solution Y, then the matrix X in (4.4) is Hermitian and it also satisfies

AXB = (1/2)(AYB + AY^*B) = (1/2)(C + C) = C.

Thus (4.4) is a Hermitian solution to AXB = C. This fact shows that (a) and (b) are equivalent. Also, note that any Hermitian solution X to AXB = C is a common solution to (4.2), and can be written as X = (1/2)(X + X^*). Thus the general Hermitian solution of AXB = C can indeed be written as (4.4). The equivalence of (b) and (c) follows from Lemma 1.9(a). Solving (4.2) by Lemma 1.8(b) gives the following common general solution

Y = Y_0 + E_G V_1 + V_2 E_G + F_A V_3 F_A + E_B V_4 E_B,

where Y_0 is a special common solution of the pair, G = [ A^*, B ], and V_1, ..., V_4 ∈ C^{n×n} are arbitrary. Substituting it into (4.4) yields

X = (1/2)(Y + Y^*)
= (1/2)(Y_0 + Y_0^*) + (1/2)E_G(V_1 + V_2^*) + (1/2)(V_1^* + V_2)E_G + (1/2)F_A(V_3 + V_3^*)F_A + (1/2)E_B(V_4 + V_4^*)E_B
= (1/2)(Y_0 + Y_0^*) + E_G U_1 + (E_G U_1)^* + F_A U_2 F_A + E_B U_3 E_B,

where U_1 ∈ C^{n×n}, U_2, U_3 ∈ C_H^n are arbitrary, establishing (4.5). □

Theorem 4.2 Let A ∈ C^{m×n}, B ∈ C^{n×p}, C ∈ C^{m×p} and P ∈ C_H^n be given, and assume that the matrix equation AXB = C has a solution X ∈ C_H^n. Also, denote

M_1 = [ APA^*  C  A ; C^*  B^*PB  B^* ; A^*  B  0 ],  M_2 = [ 0  C  A ; C^*  0  B^* ; A^*  B  0 ],
N_1 = [ A  APA^*  C ; B^*  C^*  B^*PB ],  N_2 = [ A  0  C ; B^*  C^*  0 ].

Then,

max_{AXB=C, X ∈ C_H^n} i_±(X - P) = i_∓(M_1) + n - r(A) - r(B),  (4.7)
min_{AXB=C, X ∈ C_H^n} i_±(X - P) = r(N_1) - i_±(M_1),  (4.8)
max_{AXB=C, X ∈ C_H^n} i_±(X) = i_∓(M_2) + n - r(A) - r(B),  (4.9)
min_{AXB=C, X ∈ C_H^n} i_±(X) = r(N_2) - i_±(M_2).  (4.10)

Hence,

(a) AXB = C has a solution X > P (X < P) if and only if i_-(M_1) = r(A) + r(B) (i_+(M_1) = r(A) + r(B)).

(b) AXB = C has a solution X ≥ P (X ≤ P) if and only if i_-(M_1) = r(N_1) (i_+(M_1) = r(N_1)).

(c) AXB = C has a solution X > 0 (X < 0) if and only if i_-(M_2) = r(A) + r(B) (i_+(M_2) = r(A) + r(B)).

(d) All Hermitian solutions of AXB = C satisfy X > 0 (X < 0) if and only if i_+(M_2) = r(N_2) - n (i_-(M_2) = r(N_2) - n).

(e) AXB = C has a solution X ≥ 0 (X ≤ 0) if and only if i_-(M_2) = r(N_2) (i_+(M_2) = r(N_2)).

(f) All Hermitian solutions of AXB = C satisfy X ≥ 0 (X ≤ 0) if and only if i_+(M_2) = r(A) + r(B) - n (i_-(M_2) = r(A) + r(B) - n).

Proof. We first show the set inclusion R(E_G) ⊆ R[ F_A, E_B ]. Applying (1.5) to [ E_G, F_A, E_B ] and [ F_A, E_B ] and simplifying by elementary block matrix operations, we obtain

r[ E_G, F_A, E_B ] = n + r(G) - r(A) - r(B).
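Before continuing the proof, formula (4.9) admits a transparent numerical check in the nonsingular case. The sketch below (ours; real symmetric matrices, ^* as transpose) takes A and B square and invertible, so that AXB = C has the unique Hermitian solution X_0, and verifies the inertia relation that (4.9) then forces:

```python
import numpy as np

rng = np.random.default_rng(4)

def inertia(H, tol=1e-9):
    """(i_+, i_-) of a symmetric matrix, counted from its eigenvalues."""
    w = np.linalg.eigvalsh(H)
    return int((w > tol).sum()), int((w < -tol).sum())

# Real sketch of (4.9): with A, B nonsingular, the unique Hermitian
# solution X0 of AXB = C must satisfy
#   i_+(X0) = i_-(M2) - n,   i_-(X0) = i_+(M2) - n,
# for M2 = [[0, C, A], [C^T, 0, B^T], [A^T, B, 0]].
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)   # comfortably nonsingular
B = rng.standard_normal((n, n)) + n * np.eye(n)
X0 = rng.standard_normal((n, n)); X0 = X0 + X0.T  # symmetric, generically indefinite
C = A @ X0 @ B

M2 = np.block([[np.zeros((n, n)), C, A],
               [C.T, np.zeros((n, n)), B.T],
               [A.T, B, np.zeros((n, n))]])
ip, im = inertia(M2)
assert inertia(X0) == (im - n, ip - n)
```

A congruence argument explains the check: with u = A^T x_1 and w = x_3 + A^{-1}C x_2, the quadratic form of M2 splits into a hyperbolic pair of size n plus the form −2 x_2^T B^T X0 B x_2, whence i_±(M2) = n + i_∓(X0).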
Combining it with (4.6) leads to r[ E_G, F_A, E_B ] = r[ F_A, E_B ], i.e., R(E_G) ⊆ R[ F_A, E_B ]. In this case, applying (2.44) and (2.45) to (4.5) gives

max_{AXB=C, X ∈ C_H^n} i_±(X - P) = max_{U_1, U_2, U_3} i_±[ X_0 - P + E_G U_1 + (E_G U_1)^* + F_A U_2 F_A + E_B U_3 E_B ] = i_±[ X_0 - P  F_A  E_B ; F_A  0  0 ; E_B  0  0 ],  (4.11)

min_{AXB=C, X ∈ C_H^n} i_±(X - P) = min_{U_1, U_2, U_3} i_±[ X_0 - P + E_G U_1 + (E_G U_1)^* + F_A U_2 F_A + E_B U_3 E_B ] = r[ X_0 - P  F_A  E_B ; E_G  0  0 ] - i_∓[ X_0 - P  F_A  E_B ; F_A  0  0 ; E_B  0  0 ] - r(E_G),  (4.12)

where X_0 denotes the special Hermitian solution (1/2)(Y_0 + Y_0^*) in (4.5). Applying (1.14) and simplifying by AX_0B = C and a sequence of elementary block congruence matrix operations, we obtain

i_±[ X_0 - P  F_A  E_B ; F_A  0  0 ; E_B  0  0 ] = i_∓(M_1) + n - r(A) - r(B),
and applying (1.15) and simplifying by AX_0B = C and elementary block matrix operations, we obtain

r[ X_0 - P  F_A  E_B ; E_G  0  0 ] = r(N_1) + 2n - r(G) - r(A) - r(B).

Substituting them into (4.11) and (4.12) leads to (4.7) and (4.8), respectively. Setting P = 0 in (4.7) and (4.8) yields (4.9) and (4.10), respectively. Results (a)-(f) follow from (4.7)-(4.10) and Lemma 1.4. □

A special case of the matrix equation AXB = C is the matrix equation AXA^* = C, which was studied by many authors; see, e.g., [1, 18, 2, 33, 38, 41]. Applying Theorems 4.1 and 4.2 to AXA^* = C leads to the following result.

Corollary 4.3 Let A ∈ C^{m×n}, C ∈ C_H^m and P ∈ C_H^n be given. Then,

(a) The matrix equation

AXA^* = C  (4.13)

has a solution X ∈ C_H^n if and only if R(C) ⊆ R(A). In this case, the general Hermitian solution to (4.13) can be written as

X = A^†C(A^†)^* + F_A U + (F_A U)^*,  (4.14)

where U ∈ C^{n×n} is arbitrary.

(b) Under R(C) ⊆ R(A),

max_{AXA^*=C, X ∈ C_H^n} i_±(X - P) = n + i_±(C - APA^*) - r(A),  (4.15)
min_{AXA^*=C, X ∈ C_H^n} i_±(X - P) = i_±(C - APA^*),  (4.16)
max_{AXA^*=C, X ∈ C_H^n} i_±(X) = n + i_±(C) - r(A),  (4.17)
min_{AXA^*=C, X ∈ C_H^n} i_±(X) = i_±(C).  (4.18)

(c) Under R(C) ⊆ R(A), (4.13) has a solution X > P (X < P) if and only if i_+(C - APA^*) = r(A) (i_-(C - APA^*) = r(A)).

(d) Under R(C) ⊆ R(A), (4.13) has a solution X ≥ P (X ≤ P) if and only if C ≥ APA^* (C ≤ APA^*).

(e) Under R(C) ⊆ R(A), (4.13) has a solution X > 0 (X < 0) if and only if C ≥ 0 and r(A) = r(C) (C ≤ 0 and r(A) = r(C)).

(f) Under R(C) ⊆ R(A), (4.13) has a solution X ≥ 0 (X ≤ 0) if and only if C ≥ 0 (C ≤ 0).
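Corollary 4.3 is easy to exercise numerically. The sketch below (ours; real matrices, ^* as transpose) builds a C with R(C) ⊆ R(A), checks that (4.14) produces symmetric solutions of AXA^T = C, and verifies that the U = 0 solution attains the minimal partial inertias (4.18):

```python
import numpy as np

rng = np.random.default_rng(5)

def inertia(H, tol=1e-9):
    """(i_+, i_-) of a symmetric matrix, counted from its eigenvalues."""
    w = np.linalg.eigvalsh(H)
    return int((w > tol).sum()), int((w < -tol).sum())

# Real sketch of Corollary 4.3: when C = A S A^T (so R(C) ⊆ R(A)), every
#   X = A^+ C (A^+)^T + F_A U + (F_A U)^T,   F_A = I - A^+ A,
# is a symmetric solution of A X A^T = C, and U = 0 attains
# the minimal partial inertias (4.18): i_±(X) = i_±(C).
m, n, k = 3, 5, 2
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
S = rng.standard_normal((n, n)); S = S + S.T
C = A @ S @ A.T                                   # guarantees R(C) ⊆ R(A)

Ap = np.linalg.pinv(A)
FA = np.eye(n) - Ap @ A
U = rng.standard_normal((n, n))
X = Ap @ C @ Ap.T + FA @ U + (FA @ U).T

assert np.allclose(A @ X @ A.T, C)                # X solves A X A^T = C
assert np.allclose(X, X.T)                        # X is symmetric
X_min = Ap @ C @ Ap.T                             # the U = 0 solution
assert inertia(X_min) == inertia(C)               # attains min i_±(X) = i_±(C)
```

The last assertion reflects the congruence-type argument behind (4.18): C = A X_min A^T and X_min = A^† C (A^†)^T bound each other's partial inertias in both directions.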
Corollary 4.4 Let A ∈ C^{m×m} be given. Then,

(a) A has a Hermitian g-inverse if and only if

r(A - A^*) = 2r[ A, A^* ] - 2r(A).  (4.19)

In this case,

max_{A^- = (A^-)^*} r(A^-) = min{ m, r(A + A^*) + 2m - 2r(A) },  (4.20)
min_{A^- = (A^-)^*} r(A^-) = 2r(A) - r(A + A^*),  (4.21)
max_{A^- = (A^-)^*} i_±(A^-) = i_±(A + A^*) + m - r(A),  (4.22)
min_{A^- = (A^-)^*} i_±(A^-) = r(A) - i_∓(A + A^*).  (4.23)

(b) A has a nonsingular Hermitian g-inverse if and only if r(A + A^*) ≥ 2r(A) - m.

(c) The positive index of inertia of A^- is invariant ⇔ the negative index of inertia of A^- is invariant ⇔ r(A + A^*) = 2r(A) - m.

(d) There exists an A^- > 0 ⇔ there exists an A^- ≥ 0 ⇔ i_+(A + A^*) = r(A).

(e) There exists an A^- < 0 ⇔ there exists an A^- ≤ 0 ⇔ i_-(A + A^*) = r(A).

As is well known, one of the fundamental concepts in matrix theory is the partition of a matrix. Many algebraic properties of a matrix and its operations can be derived from the submatrices in its partitions and their operations. In order to reveal more properties of Hermitian solutions to (4.13), we partition the unknown Hermitian matrix X in (4.13) into the 2×2 block form

X = [ X_1  X_2 ; X_2^*  X_3 ].

Consequently, (4.13) can be rewritten as

[ A_1, A_2 ][ X_1  X_2 ; X_2^*  X_3 ][ A_1^* ; A_2^* ] = C,  (4.24)

where A_1 ∈ C^{m×n_1}, A_2 ∈ C^{m×n_2}, X_1 ∈ C_H^{n_1}, X_2 ∈ C^{n_1×n_2} and X_3 ∈ C_H^{n_2} with n_1 + n_2 = n. In what follows, we derive the extremal values of the ranks and partial inertias of the submatrices in a Hermitian solution to (4.24). Note that X_1, X_2, X_3 can be rewritten as

X_1 = P_1XP_1^*, X_2 = P_1XP_2^*, X_3 = P_2XP_2^*,  (4.25)

where P_1 = [ I_{n_1}, 0 ] and P_2 = [ 0, I_{n_2} ]. Substituting the general solution in (4.14) into (4.25) yields

X_1 = P_1X_0P_1^* + P_1F_AV_1 + V_1^*F_AP_1^*,  (4.26)
X_3 = P_2X_0P_2^* + P_2F_AV_2 + V_2^*F_AP_2^*,  (4.27)

where X_0 = A^†C(A^†)^* and U = [ V_1, V_2 ]. For convenience, we adopt the following notation for the collections of the submatrices X_1 and X_3 in (4.24):

S_1 = { X_1 ∈ C_H^{n_1} : [ A_1, A_2 ][ X_1  X_2 ; X_2^*  X_3 ][ A_1^* ; A_2^* ] = C },  (4.28)
S_3 = { X_3 ∈ C_H^{n_2} : [ A_1, A_2 ][ X_1  X_2 ; X_2^*  X_3 ][ A_1^* ; A_2^* ] = C }.  (4.29)

Theorem 4.5 Suppose that the matrix equation (4.24) is consistent. Then,
More informationReview of Linear Algebra
Review of Linear Algebra Definitions An m n (read "m by n") matrix, is a rectangular array of entries, where m is the number of rows and n the number of columns. 2 Definitions (Con t) A is square if m=
More informationOn the Moore-Penrose and the Drazin inverse of two projections on Hilbert space
On the Moore-Penrose and the Drazin inverse of two projections on Hilbert space Sonja Radosavljević and Dragan SDjordjević March 13, 2012 Abstract For two given orthogonal, generalized or hypergeneralized
More informationSystems of Linear Equations and Matrices
Chapter 1 Systems of Linear Equations and Matrices System of linear algebraic equations and their solution constitute one of the major topics studied in the course known as linear algebra. In the first
More information~ g-inverses are indeed an integral part of linear algebra and should be treated as such even at an elementary level.
Existence of Generalized Inverse: Ten Proofs and Some Remarks R B Bapat Introduction The theory of g-inverses has seen a substantial growth over the past few decades. It is an area of great theoretical
More informationMassachusetts Institute of Technology Department of Economics Statistics. Lecture Notes on Matrix Algebra
Massachusetts Institute of Technology Department of Economics 14.381 Statistics Guido Kuersteiner Lecture Notes on Matrix Algebra These lecture notes summarize some basic results on matrix algebra used
More informationj=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.
LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a
More informationChapter 2: Matrix Algebra
Chapter 2: Matrix Algebra (Last Updated: October 12, 2016) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). Write A = 1. Matrix operations [a 1 a n. Then entry
More informationA new algebraic analysis to linear mixed models
A new algebraic analysis to linear mixed models Yongge Tian China Economics and Management Academy, Central University of Finance and Economics, Beijing 100081, China Abstract. This article presents a
More informationMath 54 HW 4 solutions
Math 54 HW 4 solutions 2.2. Section 2.2 (a) False: Recall that performing a series of elementary row operations A is equivalent to multiplying A by a series of elementary matrices. Suppose that E,...,
More informationDigital Workbook for GRA 6035 Mathematics
Eivind Eriksen Digital Workbook for GRA 6035 Mathematics November 10, 2014 BI Norwegian Business School Contents Part I Lectures in GRA6035 Mathematics 1 Linear Systems and Gaussian Elimination........................
More informationDeterminantal divisor rank of an integral matrix
Determinantal divisor rank of an integral matrix R. B. Bapat Indian Statistical Institute New Delhi, 110016, India e-mail: rbb@isid.ac.in Abstract: We define the determinantal divisor rank of an integral
More informationJim Lambers MAT 610 Summer Session Lecture 1 Notes
Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra
More informationChapter 4. Matrices and Matrix Rings
Chapter 4 Matrices and Matrix Rings We first consider matrices in full generality, i.e., over an arbitrary ring R. However, after the first few pages, it will be assumed that R is commutative. The topics,
More informationEquality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.
Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read
More informationMatrix Algebra 2.1 MATRIX OPERATIONS Pearson Education, Inc.
2 Matrix Algebra 2.1 MATRIX OPERATIONS MATRIX OPERATIONS m n If A is an matrixthat is, a matrix with m rows and n columnsthen the scalar entry in the ith row and jth column of A is denoted by a ij and
More informationThe Drazin inverses of products and differences of orthogonal projections
J Math Anal Appl 335 7 64 71 wwwelseviercom/locate/jmaa The Drazin inverses of products and differences of orthogonal projections Chun Yuan Deng School of Mathematics Science, South China Normal University,
More informationChapter 3 Transformations
Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases
More informationMatrix Arithmetic. a 11 a. A + B = + a m1 a mn. + b. a 11 + b 11 a 1n + b 1n = a m1. b m1 b mn. and scalar multiplication for matrices via.
Matrix Arithmetic There is an arithmetic for matrices that can be viewed as extending the arithmetic we have developed for vectors to the more general setting of rectangular arrays: if A and B are m n
More information7. Symmetric Matrices and Quadratic Forms
Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value
More informationarxiv: v1 [math.ra] 16 Nov 2016
Vanishing Pseudo Schur Complements, Reverse Order Laws, Absorption Laws and Inheritance Properties Kavita Bisht arxiv:1611.05442v1 [math.ra] 16 Nov 2016 Department of Mathematics Indian Institute of Technology
More informationDefinition 2.3. We define addition and multiplication of matrices as follows.
14 Chapter 2 Matrices In this chapter, we review matrix algebra from Linear Algebra I, consider row and column operations on matrices, and define the rank of a matrix. Along the way prove that the row
More information8. Diagonalization.
8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard
More informationOn Sums of Conjugate Secondary Range k-hermitian Matrices
Thai Journal of Mathematics Volume 10 (2012) Number 1 : 195 202 www.math.science.cmu.ac.th/thaijournal Online ISSN 1686-0209 On Sums of Conjugate Secondary Range k-hermitian Matrices S. Krishnamoorthy,
More informationDiagonal and Monomial Solutions of the Matrix Equation AXB = C
Iranian Journal of Mathematical Sciences and Informatics Vol. 9, No. 1 (2014), pp 31-42 Diagonal and Monomial Solutions of the Matrix Equation AXB = C Massoud Aman Department of Mathematics, Faculty of
More informationLinear Algebra and Matrix Inversion
Jim Lambers MAT 46/56 Spring Semester 29- Lecture 2 Notes These notes correspond to Section 63 in the text Linear Algebra and Matrix Inversion Vector Spaces and Linear Transformations Matrices are much
More informationLinear Algebra Summary. Based on Linear Algebra and its applications by David C. Lay
Linear Algebra Summary Based on Linear Algebra and its applications by David C. Lay Preface The goal of this summary is to offer a complete overview of all theorems and definitions introduced in the chapters
More informationON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES
ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES HANYU LI, HU YANG College of Mathematics and Physics Chongqing University Chongqing, 400030, P.R. China EMail: lihy.hy@gmail.com,
More informationHASSE-MINKOWSKI THEOREM
HASSE-MINKOWSKI THEOREM KIM, SUNGJIN 1. Introduction In rough terms, a local-global principle is a statement that asserts that a certain property is true globally if and only if it is true everywhere locally.
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationGAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)
GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) D. ARAPURA Gaussian elimination is the go to method for all basic linear classes including this one. We go summarize the main ideas. 1.
More informationSolution to Homework 1
Solution to Homework Sec 2 (a) Yes It is condition (VS 3) (b) No If x, y are both zero vectors Then by condition (VS 3) x = x + y = y (c) No Let e be the zero vector We have e = 2e (d) No It will be false
More informationσ 11 σ 22 σ pp 0 with p = min(n, m) The σ ii s are the singular values. Notation change σ ii A 1 σ 2
HE SINGULAR VALUE DECOMPOSIION he SVD existence - properties. Pseudo-inverses and the SVD Use of SVD for least-squares problems Applications of the SVD he Singular Value Decomposition (SVD) heorem For
More informationNORMS ON SPACE OF MATRICES
NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system
More informationA = 3 B = A 1 1 matrix is the same as a number or scalar, 3 = [3].
Appendix : A Very Brief Linear ALgebra Review Introduction Linear Algebra, also known as matrix theory, is an important element of all branches of mathematics Very often in this course we study the shapes
More informationSteven J. Leon University of Massachusetts, Dartmouth
INSTRUCTOR S SOLUTIONS MANUAL LINEAR ALGEBRA WITH APPLICATIONS NINTH EDITION Steven J. Leon University of Massachusetts, Dartmouth Boston Columbus Indianapolis New York San Francisco Amsterdam Cape Town
More informationChap 3. Linear Algebra
Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions
More informationMatrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein
Matrix Mathematics Theory, Facts, and Formulas with Application to Linear Systems Theory Dennis S. Bernstein PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Contents Special Symbols xv Conventions, Notation,
More informationA VERY BRIEF LINEAR ALGEBRA REVIEW for MAP 5485 Introduction to Mathematical Biophysics Fall 2010
A VERY BRIEF LINEAR ALGEBRA REVIEW for MAP 5485 Introduction to Mathematical Biophysics Fall 00 Introduction Linear Algebra, also known as matrix theory, is an important element of all branches of mathematics
More informationTHE MOORE-PENROSE GENERALIZED INVERSE OF A MATRIX
THE MOORE-PENROSE GENERALIZED INVERSE OF A MATRIX A Dissertation Submitted For The Award of the Degree of Master of Philosophy In Mathematics Purva Rajwade School of Mathematics Devi Ahilya Vishwavidyalaya,
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5
STAT 39: MATHEMATICAL COMPUTATIONS I FALL 17 LECTURE 5 1 existence of svd Theorem 1 (Existence of SVD) Every matrix has a singular value decomposition (condensed version) Proof Let A C m n and for simplicity
More informationEp Matrices and Its Weighted Generalized Inverse
Vol.2, Issue.5, Sep-Oct. 2012 pp-3850-3856 ISSN: 2249-6645 ABSTRACT: If A is a con s S.Krishnamoorthy 1 and B.K.N.MuthugobaI 2 Research Scholar Ramanujan Research Centre, Department of Mathematics, Govt.
More informationRigid Geometric Transformations
Rigid Geometric Transformations Carlo Tomasi This note is a quick refresher of the geometry of rigid transformations in three-dimensional space, expressed in Cartesian coordinates. 1 Cartesian Coordinates
More information