The skew-symmetric orthogonal solutions of the matrix equation AX = B


Linear Algebra and its Applications 402 (2005)

The skew-symmetric orthogonal solutions of the matrix equation AX = B

Chunjun Meng, Xiyan Hu, Lei Zhang
College of Mathematics and Econometrics, Hunan University, Changsha, PR China

Received 15 January 2004; accepted 18 January 2005. Available online 16 March 2005. Submitted by R. Byers.

Abstract

An $n\times n$ real matrix $X$ is said to be a skew-symmetric orthogonal matrix if $X^T=-X$ and $X^TX=I$. Using the special form of the CS decomposition of an orthogonal matrix with a skew-symmetric $k\times k$ leading principal submatrix, this paper establishes necessary and sufficient conditions for the existence of skew-symmetric orthogonal solutions of the matrix equation $AX=B$, together with explicit expressions for these solutions. In addition, within the corresponding solution set of the equation, the explicit expression of the nearest matrix to a given matrix in the Frobenius norm is provided. Furthermore, the Procrustes problem of skew-symmetric orthogonal matrices is considered and formulas for its solutions are provided. Finally, an algorithm is proposed for solving the first and third problems. Numerical experiments show that it is feasible. © 2005 Elsevier Inc. All rights reserved.

AMS classification: 65F15; 65F20

Keywords: Skew-symmetric orthogonal matrix; Leading principal submatrix; CS decomposition; Matrix nearness problem; Least-squares solutions

The work was supported by the National Natural Science Foundation of China and by the Science Fund of Hunan University. Corresponding author's e-mail address: chunjunm@hnu.cn (C. Meng).

1. Introduction

We first introduce some notation. Let $R^{m\times n}$ denote the set of real $m\times n$ matrices, and let $OR^{n\times n}$, $SR^{n\times n}$ and $ASR^{n\times n}$ denote the sets of orthogonal, real symmetric and real skew-symmetric $n\times n$ matrices, respectively; $I_n$ is the identity matrix of order $n$, and $A^T$ denotes the transpose of the real matrix $A$. We define the inner product on $R^{m\times n}$ by
$$(A,B)=\operatorname{trace}(B^TA)=\sum_{i=1}^m\sum_{j=1}^n a_{ij}b_{ij},\qquad A,B\in R^{m\times n}.$$
Then $R^{m\times n}$ is a Hilbert inner product space. The norm generated by this inner product is the Frobenius norm, denoted by $\|\cdot\|$.

Skew-symmetric orthogonal matrices play an important role in numerical analysis and in the numerical solution of partial differential equations. For example, symplectic transformations preserve Hamiltonian structure, and a matrix $S$ is symplectic if and only if $SJS^T=J$, where
$$J=\begin{pmatrix}0&I_n\\-I_n&0\end{pmatrix}.$$
The matrix $J$ is a skew-symmetric orthogonal matrix. We point out that the order of a skew-symmetric orthogonal matrix is necessarily an even number, so we denote the set of skew-symmetric orthogonal matrices by $SSOR^{2m\times 2m}$. As another example of a skew-symmetric orthogonal matrix, if $P\in OR^{m\times m}$, then
$$\begin{pmatrix}0&P\\-P^T&0\end{pmatrix}\in SSOR^{2m\times 2m}.$$

In this paper we consider the skew-symmetric orthogonal solutions of the matrix equation
$$AX=B, \qquad (1)$$
where $A$ and $B$ are given matrices in $R^{n\times 2m}$. The problem has a geometric meaning: if $X\in SSOR^{2m\times 2m}$ satisfies Eq. (1), then there exists an orthogonal transformation with skew-symmetric structure that maps the row space of $A$ onto that of $B$. Orthogonal transformations are widely used in numerical analysis because of their numerical stability, so the problem discussed here is useful both theoretically and practically. We also consider the matrix nearness problem
$$\min_{X\in S_X}\|\tilde X-X\|, \qquad (2)$$
where $\tilde X$ is a given matrix in $R^{2m\times 2m}$ and $S_X$ is the solution set of Eq. (1).
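The two examples above are easy to verify numerically. A small numpy sketch (illustrative only, not part of the original paper); the assertions check exactly the defining identities $X^T=-X$ and $X^TX=I$, and the construction shows why the order is even:

```python
import numpy as np

m = 3
Z, Im = np.zeros((m, m)), np.eye(m)

# J = [[0, I_n], [-I_n, 0]] from the symplectic example
J = np.block([[Z, Im], [-Im, Z]])
assert np.allclose(J.T, -J)                 # skew-symmetric
assert np.allclose(J.T @ J, np.eye(2 * m))  # orthogonal

# For any orthogonal P, the block matrix [[0, P], [-P^T, 0]]
# lies in SSOR^{2m x 2m}
rng = np.random.default_rng(0)
P, _ = np.linalg.qr(rng.standard_normal((m, m)))
X = np.block([[Z, P], [-P.T, Z]])
assert np.allclose(X.T, -X)
assert np.allclose(X.T @ X, np.eye(2 * m))
```

A useful by-product of the two identities is $X^2=-I$, which any skew-symmetric orthogonal matrix satisfies.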
In Section 3 we will prove that problem (1) is solvable whenever the given matrices $A$ and $B$ satisfy two conditions. In practice, however, the data matrices $A$ and $B$ are often perturbed in various ways, by observation error, model error, rounding error and so on. In many cases the perturbed $A$ and $B$ no longer meet the solvability conditions, so that problem (1) has no solution. Hence we also consider the least-squares version of problem (1): find an $X\in SSOR^{2m\times 2m}$ minimizing the distance between $AX$ and $B$, i.e.
$$\min_{X\in SSOR^{2m\times 2m}}\|AX-B\|. \qquad (3)$$

Eq. (1) with the unknown matrix $X$ required to be symmetric, skew-symmetric, symmetric positive semidefinite, bisymmetric or orthogonal has been studied before (see, for instance, [1–8]). In those papers, necessary and sufficient conditions for the existence of solutions, and expressions for the solutions, were given by using the structural properties of matrices in the required subset and the singular value decomposition. The skew-symmetric orthogonal solutions of Eq. (1), however, have not been considered yet, which motivates us to study the problem. Another motivation is that the skew-symmetric orthogonal solutions of Eq. (1) may help in checking whether a matrix $A$ is a generalized Hamiltonian matrix: $A$ is called generalized Hamiltonian if and only if there exists a skew-symmetric orthogonal matrix $Q$ such that $AQ$ is symmetric. Thus, if a skew-symmetric orthogonal matrix $X$ solves Eq. (1) with $B$ symmetric, then $A$ is generalized Hamiltonian. For applications of generalized Hamiltonian matrices, we refer the reader to [9].

The matrix nearness problem (2), that is, finding the nearest matrix in the solution set of Eq. (1) to a given matrix in the Frobenius norm, originally arose in the testing or recovery of linear systems with incomplete data, or in revising given data. A preliminary estimate $\tilde X$ of the unknown matrix $X$ can be obtained from experimental observations and statistical information. A related form of the matrix nearness problem,
$$\min_A\|\tilde A-A\|\quad\text{subject to}\quad AX=B,$$
where $X$ and $B$ are given matrices, $\tilde A$ is a given matrix, and $A$ is required to be symmetric, bisymmetric or symmetric positive semidefinite, has been discussed in [10,11] and the references therein. Problem (3) is a special case of the Procrustes problem.
Schönemann studied the orthogonal Procrustes problem in [12] and obtained an explicit solution. The symmetric Procrustes problem was studied by Higham in [13], where the general solution of the problem and an algorithm to compute it are presented. Problem (3) may be called the skew-symmetric orthogonal Procrustes problem; it extends this line of research on Procrustes problems.

The paper is organized as follows. In Section 2, the special form of the CS decomposition of an orthogonal matrix with a skew-symmetric $k\times k$ leading principal submatrix is introduced. In Section 3, necessary and sufficient conditions for the existence of skew-symmetric orthogonal solutions of Eq. (1), and expressions for these solutions, are derived. Section 4 provides the expressions of the solutions of the matrix nearness problem (2). In Section 5, the formula for the solutions of problem (3)

is presented, and in Section 6 some examples are given to illustrate the main results obtained in the paper.

2. The CS decomposition of an orthogonal matrix with a skew-symmetric k × k leading principal submatrix

Let $X\in OR^{2m\times 2m}$ be partitioned as
$$X=\begin{pmatrix}X_{11}&X_{12}\\X_{21}&X_{22}\end{pmatrix}, \qquad (4)$$
where $X_{11}^T=-X_{11}$, i.e. $X_{11}\in ASR^{k\times k}$. Using the skew-symmetric structure of the $k\times k$ leading principal submatrix of the orthogonal matrix $X$, we prove that the CS decomposition of $X$ has the special form described in the following theorem.

Theorem 1. If $X\in OR^{2m\times 2m}$ is partitioned as (4) with $X_{11}\in ASR^{k\times k}$, then the CS decomposition of $X$ can be expressed as
$$\begin{pmatrix}D_1&0\\0&D_2\end{pmatrix}^T\begin{pmatrix}X_{11}&X_{12}\\X_{21}&X_{22}\end{pmatrix}\begin{pmatrix}D_1&0\\0&R_2\end{pmatrix}=\begin{pmatrix}\Sigma_{11}&\Sigma_{12}\\\Sigma_{21}&\Sigma_{22}\end{pmatrix}, \qquad (5)$$
where $D_1\in OR^{k\times k}$, $D_2,R_2\in OR^{(2m-k)\times(2m-k)}$,
$$\Sigma_{11}=\begin{pmatrix}\tilde I&0&0\\0&C&0\\0&0&0\end{pmatrix},\quad \Sigma_{12}=\begin{pmatrix}0&0&0\\0&S&0\\0&0&I\end{pmatrix},\quad \Sigma_{21}=\Sigma_{12}^T,\quad \Sigma_{22}=\begin{pmatrix}I&0&0\\0&-C^T&0\\0&0&0\end{pmatrix},$$
with $\tilde I=\operatorname{diag}(\tilde I_1,\dots,\tilde I_r)$, $\tilde I_1=\dots=\tilde I_r=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$, $C=\operatorname{diag}(C_1,\dots,C_l)$, $C_i=\begin{pmatrix}0&c_i\\-c_i&0\end{pmatrix}$, $S=\operatorname{diag}(S_1,\dots,S_l)$, $S_i=\begin{pmatrix}s_i&0\\0&s_i\end{pmatrix}$, $S_i^TS_i+C_i^TC_i=I_2$, $i=1,\dots,l$, $2r+2l=\operatorname{rank}(X_{11})$, and $k-2r=\operatorname{rank}(X_{21})$.

Proof. Since $X_{11}$ is skew-symmetric and is the $k\times k$ leading principal submatrix of the orthogonal matrix $X$, every eigenvalue $\lambda$ of $X_{11}$ is purely imaginary and satisfies $|\lambda|\le 1$. The real spectral decomposition of $X_{11}$ can therefore be written as
$$D_1^TX_{11}D_1=\Sigma_{11}=\begin{pmatrix}\tilde I&0&0\\0&C&0\\0&0&0\end{pmatrix}, \qquad (6)$$

where $D_1\in OR^{k\times k}$, $\tilde I$ and $C$ are as in Theorem 1, $0<c_i<1$ for $i=1,\dots,l$, and $2(r+l)=\operatorname{rank}(X_{11})$.

The matrix $\begin{pmatrix}D_1&0\\0&I\end{pmatrix}^T\begin{pmatrix}X_{11}\\X_{21}\end{pmatrix}D_1$ still has orthonormal columns. Let $X_{21}D_1=(W_1,W_2)$ with $W_1\in R^{(2m-k)\times 2r}$, $W_2\in R^{(2m-k)\times(k-2r)}$, and set $\hat C=\begin{pmatrix}C&0\\0&0\end{pmatrix}\in R^{(k-2r)\times(k-2r)}$. Then
$$\begin{pmatrix}D_1&0\\0&I\end{pmatrix}^T\begin{pmatrix}X_{11}\\X_{21}\end{pmatrix}D_1=\begin{pmatrix}D_1^TX_{11}D_1\\X_{21}D_1\end{pmatrix}=\begin{pmatrix}\tilde I&0\\0&\hat C\\W_1&W_2\end{pmatrix}. \qquad (7)$$
Therefore
$$W_1=0,\qquad W_2^TW_2=I_{k-2r}-\hat C^T\hat C. \qquad (8)$$
Write $s_i=\sqrt{1-c_i^2}$, $i=r+1,\dots,r+l$, and $s_{2r+2l+1}=\dots=s_k=1$, and let
$$Z_2=W_2\operatorname{diag}\Big(\frac1{s_{r+1}},\frac1{s_{r+1}},\dots,\frac1{s_{r+l}},\frac1{s_{r+l}},\dots,\frac1{s_k}\Big); \qquad (9)$$
then $Z_2^TZ_2=I$. Thus $Z_2\in R^{(2m-k)\times(k-2r)}$ has orthonormal columns, and it can be extended to an orthogonal matrix $D_2=(Z_1,Z_2)\in OR^{(2m-k)\times(2m-k)}$. From (8) and (9) we obtain
$$Z_1^TW_2=Z_1^TZ_2\operatorname{diag}(s_{r+1},\dots,s_k)=0 \qquad (10)$$
and
$$Z_2^TW_2=\operatorname{diag}\Big(\frac1{s_{r+1}},\dots,\frac1{s_k}\Big)W_2^TW_2=\begin{pmatrix}S&0\\0&I\end{pmatrix}, \qquad (11)$$
where $S=\operatorname{diag}(s_{r+1},s_{r+1},\dots,s_{r+l},s_{r+l})$, $s_i=\sqrt{1-c_i^2}$, $i=r+1,\dots,r+l$. Therefore
$$D_2^TX_{21}D_1=\begin{pmatrix}Z_1^T\\Z_2^T\end{pmatrix}\begin{pmatrix}0&W_2\end{pmatrix}=\begin{pmatrix}0&0&0\\0&S&0\\0&0&I\end{pmatrix}. \qquad (12)$$

Applying the same method, we can find an orthogonal matrix $R_2\in OR^{(2m-k)\times(2m-k)}$ such that
$$D_1^TX_{12}R_2=\begin{pmatrix}0&0&0\\0&S&0\\0&0&I\end{pmatrix}. \qquad (13)$$
Now consider the matrix
$$\begin{pmatrix}D_1&0\\0&D_2\end{pmatrix}^T\begin{pmatrix}X_{11}&X_{12}\\X_{21}&X_{22}\end{pmatrix}\begin{pmatrix}D_1&0\\0&R_2\end{pmatrix};$$
it is still an orthogonal matrix, and from (6), (12) and (13) it has the form
$$\begin{pmatrix}\tilde I&0&0&0&0&0\\0&C&0&0&S&0\\0&0&0&0&0&I\\0&0&0&X_{44}&X_{45}&X_{46}\\0&S&0&X_{54}&X_{55}&X_{56}\\0&0&I&X_{64}&X_{65}&X_{66}\end{pmatrix}. \qquad (14)$$
Orthogonality of the rows and columns forces $X_{64}=X_{65}=X_{66}=X_{56}=X_{46}=0$, and $CS^T+SX_{55}^T=0$ implies that
$$X_{55}=-(S^{-1}CS^T)^T=-C^T,\qquad X_{45}=X_{54}=0,\qquad X_{44}\ \text{orthogonal}. \qquad (15)$$
Replacing $D_2$ by $D_2\operatorname{diag}(X_{44},I)$ yields expression (5). Thus the proof is completed.

3. The solvability conditions for problem (1)

First assume that there exists a matrix $X\in SSOR^{2m\times 2m}$ such that Eq. (1) holds. Then
$$BB^T=AXX^TA^T=AA^T \qquad (16)$$
and
$$AB^T=AX^TA^T=-AXA^T=-BA^T. \qquad (17)$$
Therefore Eqs. (16) and (17) are necessary for the solvability of problem (1); we prove below that they are also sufficient.
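Conditions (16) and (17) are cheap to test in practice. A hedged numpy sketch (the function name is ours, not the paper's; note the minus sign in condition (b), which comes from $X^T=-X$):

```python
import numpy as np

def solvable(A, B, tol=1e-10):
    """Necessary (and, by Theorem 2, sufficient) conditions for AX = B
    to have a skew-symmetric orthogonal solution:
    (a) B B^T = A A^T   and   (b) A B^T = -B A^T."""
    return (np.allclose(B @ B.T, A @ A.T, atol=tol) and
            np.allclose(A @ B.T, -B @ A.T, atol=tol))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
X = np.array([[0.0, 1.0], [-1.0, 0.0]])   # a skew-symmetric orthogonal X
B_good = A @ X                            # consistent right-hand side
B_bad = np.eye(2)                         # a generic inconsistent one

assert solvable(A, B_good)
assert not solvable(A, B_bad)
```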

Theorem 2. Given $A,B\in R^{n\times 2m}$, there is an $X\in SSOR^{2m\times 2m}$ such that Eq. (1) holds if and only if
(a) $BB^T=AA^T$; (b) $AB^T=-BA^T$.
When these conditions are satisfied, the solutions of problem (1) can be expressed as
$$X=\tilde V\begin{pmatrix}I&0\\0&G\end{pmatrix}\tilde Q^T, \qquad (18)$$
where $\tilde V,\tilde Q\in OR^{2m\times 2m}$ are determined by the given matrices $A$ and $B$, and $G\in SSOR^{2r\times 2r}$ is arbitrary.

In order to prove the sufficiency and obtain the expression of the solutions, we introduce some lemmas.

Lemma 1. Given $A,B\in R^{n\times 2m}$ satisfying condition (a) of Theorem 2, the singular value decompositions of $A$ and $B$ can be written as
$$A=U\begin{pmatrix}\Sigma&0\\0&0\end{pmatrix}V^T,\qquad B=U\begin{pmatrix}\Sigma&0\\0&0\end{pmatrix}Q^T, \qquad (19)$$
where $U=(U_1\;U_2)\in OR^{n\times n}$, $V=(V_1\;V_2)$, $Q=(Q_1\;Q_2)\in OR^{2m\times 2m}$, $U_1\in R^{n\times k}$, $V_1,Q_1\in R^{2m\times k}$, $k=\operatorname{rank}(A)$, and $\Sigma\in R^{k\times k}$ is diagonal and positive definite; its diagonal elements are the nonzero singular values of $A$ (equivalently, of $B$).

Proof. Let the singular value decomposition of $A$ be as in the first part of (19). From condition (a),
$$BB^T=U\begin{pmatrix}\Sigma^2&0\\0&0\end{pmatrix}U^T.$$
It follows that
$$B^TU_2=0,\qquad \Sigma^{-1}U_1^TBB^TU_1\Sigma^{-1}=I_k. \qquad (20)$$
Hence we may find $Q_2$ such that $Q=(B^TU_1\Sigma^{-1}\;\;Q_2)\in OR^{2m\times 2m}$. Using (20), we obtain $U^TBQ=\begin{pmatrix}\Sigma&0\\0&0\end{pmatrix}$, which ends our proof.

Lemma 2. Given $A,B\in R^{n\times 2m}$, there exists an orthogonal matrix $X$ such that Eq. (1) holds if and only if (a) $BB^T=AA^T$.

Let the singular value decompositions of $A$ and $B$ be given by (19). Then the orthogonal solutions of Eq. (1) can be described as
$$X=V\begin{pmatrix}I_k&0\\0&P\end{pmatrix}Q^T, \qquad (21)$$
where $P\in OR^{(2m-k)\times(2m-k)}$ is arbitrary.

Proof. Substituting (19) into Eq. (1), we get
$$\begin{pmatrix}\Sigma&0\\0&0\end{pmatrix}V^TXQ=\begin{pmatrix}\Sigma&0\\0&0\end{pmatrix}. \qquad (22)$$
Write
$$V^TXQ=\bar X=\begin{pmatrix}\bar X_{11}&\bar X_{12}\\\bar X_{21}&\bar X_{22}\end{pmatrix},\qquad \bar X_{11}\in R^{k\times k}. \qquad (23)$$
From (22), (23) and the orthogonality of $\bar X$, it can be deduced that
$$\bar X=\begin{pmatrix}I_k&0\\0&P\end{pmatrix}.$$
This proves the lemma.

Proof of Theorem 2. Since the given matrices $A$ and $B$ satisfy condition (a), we may assume, by Lemma 1, that their singular value decompositions are given by (19). Moreover $A$ and $B$ satisfy condition (b), so Eq. (1) has orthogonal solutions of the form (21). Substituting (19) into condition (b), we get
$$V_1^TQ_1=-Q_1^TV_1. \qquad (24)$$
Note that $X^T=-X$ is equivalent to
$$Q\begin{pmatrix}I&0\\0&P^T\end{pmatrix}V^T=-V\begin{pmatrix}I&0\\0&P\end{pmatrix}Q^T.$$
Comparing the corresponding blocks, with $V=(V_1\;V_2)$, $Q=(Q_1\;Q_2)$, we have
$$Q_1^TV_1=-V_1^TQ_1, \qquad (25)$$
$$V_1^TQ_2P=-Q_1^TV_2, \qquad (26)$$
$$V_2^TQ_2P=-P^TQ_2^TV_2. \qquad (27)$$
Equation (25) already holds by (24). We therefore seek the common orthogonal solutions $P$ of (26) and (27). Consider (26) first. Since
$$(Q_1^TV_2)(Q_1^TV_2)^T=(V_1^TQ_2)(V_1^TQ_2)^T,$$
Lemma 2 shows that Eq. (26) has orthogonal solutions $P$.

Next consider the orthogonal matrix $V^TQ$, whose $k\times k$ leading principal submatrix is skew-symmetric by (24). Applying Theorem 1, we get
$$Q_1^TV_2=D_1\Sigma_{12}D_2^T,\qquad V_1^TQ_2=D_1\Sigma_{12}R_2^T. \qquad (28)$$
The orthogonal solutions $P$ of (26) can then be expressed as
$$P=R_2\begin{pmatrix}G&0&0\\0&-I&0\\0&0&-I\end{pmatrix}D_2^T, \qquad (29)$$
where $G\in OR^{2r\times 2r}$ is arbitrary. Since
$$V_2^TQ_2=D_2\Sigma_{22}R_2^T, \qquad (30)$$
using (27), (29) and (30) we obtain
$$G^T=-G, \qquad (31)$$
that is, $G\in SSOR^{2r\times 2r}$. Let $\hat V=V\operatorname{diag}(I,R_2)$ and $\hat Q=Q\operatorname{diag}(I,D_2)$; then the skew-symmetric orthogonal solutions of Eq. (1) can be described as
$$X=\hat V\begin{pmatrix}I&0&0\\0&G&0\\0&0&-I\end{pmatrix}\hat Q^T.$$
Permuting the last two diagonal blocks by a permutation matrix $E$, absorbing the fixed block $-I$ into $\hat Q$, and writing $\tilde V$, $\tilde Q$ for the resulting orthogonal factors, the solutions of problem (1) take the form (18). This finishes the proof.

4. The solutions of problem (2)

We first consider the following matrix nearness problem: given $C\in R^{2m\times 2m}$, seek a matrix $P\in SSOR^{2m\times 2m}$ such that
$$\|C-P\|=\min. \qquad (32)$$
Since the norm is a continuous function and the subset $SSOR^{2m\times 2m}$ is closed and bounded in $R^{2m\times 2m}$, problem (32) always has a solution. But the set $SSOR^{2m\times 2m}$ is not convex, so the solution of (32) need not be unique.

Next we derive the expression of the solutions of problem (32). From the properties of the Frobenius norm, one knows that
$$\|C-P\|^2=\|(C+C^T)/2\|^2+\|(C-C^T)/2-P\|^2.$$
Hence $\|C-P\|=\min$ is equivalent to $\|(C-C^T)/2-P\|^2=\min$. Let the spectral decomposition of the skew-symmetric matrix $\hat C=(C-C^T)/2$ be
$$\hat C=D\begin{pmatrix}\Lambda&0\\0&0\end{pmatrix}D^T, \qquad (33)$$
where $\Lambda=\operatorname{diag}(\Lambda_1,\dots,\Lambda_k)$, $\Lambda_i=\begin{pmatrix}0&\lambda_i\\-\lambda_i&0\end{pmatrix}$, $\lambda_i>0$, $i=1,\dots,k$, $2k=\operatorname{rank}(\hat C)$, and $D^TD=DD^T=I_{2m}$. With the help of (33) and the definition of the Frobenius norm, we have
$$\|(C-C^T)/2-P\|^2=2m+\|\hat C\|^2-2\operatorname{trace}\left(\begin{pmatrix}\Lambda&0\\0&0\end{pmatrix}^TD^TPD\right).$$
Since $D^TPD$ is still a skew-symmetric orthogonal matrix, the trace term is maximized block by block, and the solutions of problem (32) can be expressed as
$$E_0(\hat C)=\left\{D\begin{pmatrix}\tilde I&0\\0&H\end{pmatrix}D^T\right\}, \qquad (34)$$
where $\tilde I=\operatorname{diag}(\tilde I_1,\dots,\tilde I_k)$, $\tilde I_1=\dots=\tilde I_k=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$, and $H\in SSOR^{(2m-2k)\times(2m-2k)}$ is arbitrary.

Theorem 3. Let the spectral decomposition of the skew-symmetric matrix $\hat C=(C-C^T)/2$ be (33). Then the solution set of problem (32) is given by (34).

Because the solution set (18) of problem (1) is closed and bounded but not convex, the solution of (2) exists but need not be unique when the conditions of Theorem 2 are satisfied. Given $\tilde X\in R^{2m\times 2m}$, using (18) we get
$$\|\tilde X-X\|^2=\left\|\tilde V^T\tilde X\tilde Q-\begin{pmatrix}I&0\\0&G\end{pmatrix}\right\|^2.$$
Write $\tilde V=(\tilde V_1\;\tilde V_2)$, $\tilde Q=(\tilde Q_1\;\tilde Q_2)$, partitioned in accordance with $\begin{pmatrix}I&0\\0&G\end{pmatrix}$. Then $\|\tilde X-X\|=\min$ is equivalent to
$$\min_{G\in SSOR^{2r\times 2r}}\|G-\tilde V_2^T\tilde X\tilde Q_2\|^2.$$
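When $\hat C$ is nonsingular ($k=m$, so the free block $H$ disappears), $E_0(\hat C)$ contains a single matrix, and it coincides with the orthogonal polar factor of $\hat C$: since $\hat C$ commutes with $(\hat C^T\hat C)^{1/2}$, the polar factor $\hat C(\hat C^T\hat C)^{-1/2}$ is itself skew-symmetric orthogonal. A numpy sketch under that nonsingularity assumption (the polar route is our shorthand for (33)–(34), not the paper's wording):

```python
import numpy as np

def nearest_ssor(C, tol=1e-12):
    """Problem (32): nearest skew-symmetric orthogonal matrix to C,
    assuming C_hat = (C - C^T)/2 is nonsingular (unique solution)."""
    C_hat = (C - C.T) / 2
    U, s, Vt = np.linalg.svd(C_hat)
    if s[-1] < tol:
        raise ValueError("C_hat singular: free block H, solution not unique")
    return U @ Vt   # orthogonal polar factor of C_hat = E_0(C_hat)

rng = np.random.default_rng(1)
C = rng.standard_normal((4, 4))
P = nearest_ssor(C)
assert np.allclose(P.T, -P, atol=1e-6)
assert np.allclose(P.T @ P, np.eye(4), atol=1e-10)

# Optimality spot-check against a few other skew-symmetric orthogonal matrices
S0 = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))
for seed in range(3):
    Q, _ = np.linalg.qr(np.random.default_rng(seed).standard_normal((4, 4)))
    Y = Q @ S0 @ Q.T                     # some element of SSOR^{4x4}
    assert np.linalg.norm(C - P) <= np.linalg.norm(C - Y) + 1e-8
```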

Let $C=\tilde V_2^T\tilde X\tilde Q_2$ and $\hat C=(C-C^T)/2$. By Theorem 3, the solution set of this subproblem is given by (34).

Theorem 4. Given $\tilde X\in R^{2m\times 2m}$ and $A,B\in R^{n\times 2m}$ satisfying the conditions of Theorem 2, let the solution set of problem (1) be (18); write $\tilde V=(\tilde V_1\;\tilde V_2)$, $\tilde Q=(\tilde Q_1\;\tilde Q_2)$, partitioned in accordance with $\begin{pmatrix}I&0\\0&G\end{pmatrix}$, and let $C=\tilde V_2^T\tilde X\tilde Q_2$, $\hat C=(C-C^T)/2$. Then the minimum of (2) is attained at $X$ if and only if
$$X=\tilde V\begin{pmatrix}I&0\\0&G\end{pmatrix}\tilde Q^T,\qquad G\in E_0(\hat C).$$

5. The solutions of problem (3)

Consider (3). From the definition of the Frobenius norm and $X\in SSOR^{2m\times 2m}$, it is obvious that
$$\|AX-B\|^2=\|A\|^2+\|B\|^2-2\operatorname{trace}(B^TAX). \qquad (35)$$
Furthermore, since $X$ is skew-symmetric,
$$2\operatorname{trace}(B^TAX)=\operatorname{trace}((B^TA-A^TB)X)=\operatorname{trace}(\hat C^TX),$$
where $\hat C$ denotes the skew-symmetric matrix $A^TB-B^TA$. Assume the spectral decomposition of $\hat C$ has the form (33); then
$$2\operatorname{trace}(B^TAX)\le 2\sum_{i=1}^k\lambda_i,$$
and equality holds if and only if $X\in E_0(\hat C)$. Therefore we obtain the following theorem.

Theorem 5. Given $A,B\in R^{n\times 2m}$, denote $A^TB-B^TA$ by $\hat C$, and suppose the spectral decomposition of $\hat C$ is (33). Then the solution set of problem (3) is $E_0(\hat C)$, described by (34).

6. Numerical algorithm and several examples

The following algorithm can be used to solve problem (1) and problem (3).

Algorithm
Step 1. Input $A,B\in R^{n\times 2m}$; compute the matrix $\hat C=A^TB-B^TA$.
Step 2. Compute the spectral decomposition of $\hat C$ in the form (33).
Step 3. Compute the solutions of problem (1) or problem (3) according to (34).

Example 1. Suppose $A,B\in R^{6\times 6}$ (the entries of $A$ and $B$ are not reproduced here). One can check that the given $A$ and $B$ do not satisfy the solvability conditions of problem (1), so we seek a skew-symmetric orthogonal matrix minimizing $\|AX-B\|_F$.

Solution: applying the algorithm, we obtain a solution $X$, the attained value of $\min_{X\in SSOR^{6\times 6}}\|AX-B\|$, and the diagnostics $e_1=\|X^TX-I_6\|$ and $e_2=\|X^T+X\|$ (numerical values not reproduced here).

Remark 1.
(1) Problem (3) has a unique solution if and only if the matrix $\hat C=A^TB-B^TA$ is nonsingular. Example 2 is just such a case.
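A compact numpy realization of Steps 1–3 (a sketch; the function name is ours, and, as in Section 4, we realize the spectral step (33)–(34) through the orthogonal polar factor of $\hat C$, which is valid exactly when $\hat C$ is nonsingular, i.e. when the solution is unique):

```python
import numpy as np

def ssop(A, B, tol=1e-12):
    """Skew-symmetric orthogonal Procrustes solution of min ||A X - B||_F;
    also solves problem (1) when A, B satisfy the consistency conditions."""
    C_hat = A.T @ B - B.T @ A            # Step 1
    U, s, Vt = np.linalg.svd(C_hat)      # Step 2 (stands in for (33))
    if s[-1] < tol:
        raise ValueError("C_hat singular: solution not unique")
    return U @ Vt                        # Step 3: the element of E_0(C_hat)

# A 1x2 example: the unconstrained least-squares problem is rank-deficient,
# yet the constrained problem is perfectly well behaved
A = np.array([[1.0, 0.0]])
B = np.array([[0.0, 1.0]])
X = ssop(A, B)
assert np.allclose(X, [[0.0, 1.0], [-1.0, 0.0]])
assert np.allclose(A @ X, B)             # residual is zero here
```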

(2) When stable algorithms are used to carry out Steps 1–3, we find by computation that if the perturbations of $A$ and $B$ are small, the resulting perturbation of the solution $X$ generally remains small.

We can also apply the algorithm to solve problem (1); the following example is of this kind. We compare the solutions computed by our algorithm with those computed by the MATLAB command X = A\B.

Example 2. Let $U\in OR^{10\times 10}$ and $V\in OR^{6\times 6}$ be given orthogonal matrices (their entries are not reproduced here). Assume $S_1=\operatorname{diag}(2,1,e/1,e/2,e/3,e/4)$ and $S=\begin{pmatrix}S_1\\0\end{pmatrix}$, with $e=10,1,10^{-1},10^{-2},\dots,10^{-13}$, and set $A=USV^T$. Let $X\in SSOR^{6\times 6}$ be a given skew-symmetric orthogonal matrix (entries not reproduced here) and $B=AX$. Clearly, such $A$ and $B$ are consistent, with the skew-symmetric orthogonal solution $X$. For each such pair $A$ and $B$, we first seek the skew-symmetric orthogonal solution of $AX=B$ by our algorithm, and then compute the solution of $AX=B$ by the MATLAB command X = A\B. Every entry of the results table is a number whose mantissa has absolute value less than one; we report only the exponents.
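The structure of this experiment can be sketched at small scale (entirely our own construction, not the paper's data: a well-conditioned consistent pair $A$, $B=AX_0$, with the polar factor of $\hat C$ standing in for Steps 2–3; the computed diagnostics are of the same kind as the quantities $e_1$–$e_4$ reported for Example 2):

```python
import numpy as np

rng = np.random.default_rng(2)

# A consistent test problem: X0 skew-symmetric orthogonal, B = A X0
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
S0 = np.kron(np.eye(3), np.array([[0.0, 1.0], [-1.0, 0.0]]))
X0 = Q @ S0 @ Q.T                        # an element of SSOR^{6x6}
U6, _ = np.linalg.qr(rng.standard_normal((10, 10)))
V6, _ = np.linalg.qr(rng.standard_normal((6, 6)))
S = np.zeros((10, 6)); np.fill_diagonal(S, [2.0, 1.5, 1.0, 0.8, 0.5, 0.4])
A = U6 @ S @ V6.T                        # well-conditioned 10 x 6 matrix
B = A @ X0

# Steps 1-3 via the polar factor of C_hat (valid: C_hat nonsingular here)
C_hat = A.T @ B - B.T @ A
U, s, Vt = np.linalg.svd(C_hat)
X = U @ Vt

e1 = np.linalg.norm(A @ X - B)           # residual
e2 = np.linalg.norm(X - X0)              # distance to the true solution
e3 = np.linalg.norm(X + X.T)             # loss of skew-symmetry
e4 = np.linalg.norm(X.T @ X - np.eye(6)) # loss of orthogonality
assert max(e1, e2, e3, e4) < 1e-8
```

In this well-conditioned case all four diagnostics sit at the level of rounding error, consistent with Remark 2 below.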

The results of Example 2 (the table of exponents is not reproduced here). In the table, the $X_i$ denote the solutions computed by our algorithm and the $\bar X_i$ the solutions computed by the MATLAB command X = A\B, and
$$e_1=\|AX_i-B\|,\quad e_2=\|X_i-X\|,\quad e_3=\|X_i+X_i^T\|,\quad e_4=\|X_i^TX_i-I\|,$$
$$ee_1=\|A\bar X_i-B\|,\quad ee_2=\|\bar X_i-X\|,\quad ee_3=\|\bar X_i+\bar X_i^T\|,\quad ee_4=\|\bar X_i^T\bar X_i-I\|.$$

Remark 2.
(1) The results show that the matrices $X_i$ computed by our algorithm solve problem (1) to about 14 correct significant digits. At the same time, the MATLAB command X = A\B also solves the equation $AX=B$ well, with solutions $\bar X_i$.
(2) When $e=10,1,\dots,10^{-4}$, both $\hat C=A^TB-B^TA$ and $A$ have full rank, the solution of $AX=B$ is unique, and $X_i$, $\bar X_i$ and $X$ are all very close to one another. In this case we suggest simply using the MATLAB command X = A\B to solve problem (1), since it is simple. But as some of the singular values of $A$ approach zero, the solutions $\bar X_i$ gradually lose skew-symmetry and orthogonality, while the solutions $X_i$ computed by our algorithm still preserve both properties well. Therefore, when $A$

has small singular values near zero, the MATLAB command X = A\B can no longer settle problem (1), while our algorithm still handles it well.
(3) It is worth pointing out that $AX=B$ may be an ill-conditioned linear system or least-squares problem and yet a well-conditioned skew-symmetric-orthogonal-solution problem. An extreme example is $A=(1\;\;0)$, $B=(0\;\;1)$: the linear system or least-squares problem has an infinite condition number, but since
$$\hat C=A^TB-B^TA=\begin{pmatrix}0&1\\-1&0\end{pmatrix},$$
the skew-symmetric-orthogonal-solution problem is well conditioned.

7. Conclusion

In this paper, we have discussed the following three problems.

Problem 1. Given matrices $A,B\in R^{n\times 2m}$, find the skew-symmetric orthogonal solutions of the matrix equation $AX=B$.

Problem 2. Given $\tilde X\in R^{2m\times 2m}$, find a matrix $X\in S_X$, where $S_X$ denotes the solution set of Problem 1, such that $\|\tilde X-X\|=\min_{X\in S_X}\|\tilde X-X\|$.

Problem 3. Given matrices $A,B\in R^{n\times 2m}$, find a skew-symmetric orthogonal matrix $X\in SSOR^{2m\times 2m}$ minimizing $\|AX-B\|$.

By applying the special form of the CS decomposition of an orthogonal matrix with a skew-symmetric $k\times k$ leading principal submatrix, we have obtained necessary and sufficient conditions for the solvability of Problem 1 and the general forms of the solutions of all three problems. Furthermore, we have proposed an algorithm for solving Problems 1 and 3 that preserves both orthogonality and skew-symmetry in the presence of rounding error. Numerical examples show that it is feasible.

Acknowledgments

We would like to thank the referee for comments and suggestions that significantly improved the presentation of this work.

References

[1] W.J. Vetter, Vector structures and solutions of linear matrix equations, Linear Algebra Appl. 10 (1975).
[2] J.R. Magnus, H. Neudecker, The commutation matrix: some properties and applications, Ann. Statist. 7 (1979).
[3] J.R. Magnus, H. Neudecker, The elimination matrix: some lemmas and applications, SIAM J. Algebr. Discrete Methods 1 (1980).
[4] F.J.H. Don, On the symmetric solutions of a linear matrix equation, Linear Algebra Appl. 93 (1987) 1–7.
[5] H. Dai, On the symmetric solutions of linear matrix equations, Linear Algebra Appl. 131 (1990) 1–7.
[6] E.W. Cheney, Introduction to Approximation Theory, McGraw-Hill, New York.
[7] D. Xie, X. Hu, L. Zhang, The solvability conditions for inverse eigenvalue problem of anti-bisymmetric matrices, J. Comput. Math. (China) 20 (3) (2002).
[8] L. Zhang, D. Xie, The least-square solution of subspace revolution, J. Hunan Univ. 19 (1) (1992).
[9] Z. Zhang, X. Hu, L. Zhang, The solvability for the inverse eigenvalue problem of Hermitian generalized Hamiltonian matrices, Inverse Problems 18 (2002).
[10] D. Xie, X. Hu, L. Zhang, The inverse problem for bisymmetric matrices on a linear manifold, Math. Numer. Sinica 2 (2000).
[11] S.Q. Zhou, H. Dai, The Algebraic Inverse Eigenvalue Problem, Henan Science and Technology Press, Zhengzhou, China, 1991 (in Chinese).
[12] P.H. Schönemann, A generalized solution of the orthogonal Procrustes problem, Psychometrika 31 (1966).
[13] N.J. Higham, The symmetric Procrustes problem, BIT 28 (1988).
[14] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., The Johns Hopkins University Press, 1996.


More information

SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE

SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE BABHRU JOSHI AND M. SEETHARAMA GOWDA Abstract. We consider the semidefinite cone K n consisting of all n n real symmetric positive semidefinite matrices.

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 3: Positive-Definite Systems; Cholesky Factorization Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 11 Symmetric

More information

MATH36001 Generalized Inverses and the SVD 2015

MATH36001 Generalized Inverses and the SVD 2015 MATH36001 Generalized Inverses and the SVD 201 1 Generalized Inverses of Matrices A matrix has an inverse only if it is square and nonsingular. However there are theoretical and practical applications

More information

Lecture 1 Review: Linear models have the form (in matrix notation) Y = Xβ + ε,

Lecture 1 Review: Linear models have the form (in matrix notation) Y = Xβ + ε, 2. REVIEW OF LINEAR ALGEBRA 1 Lecture 1 Review: Linear models have the form (in matrix notation) Y = Xβ + ε, where Y n 1 response vector and X n p is the model matrix (or design matrix ) with one row for

More information

Inverse Eigenvalue Problems and Their Associated Approximation Problems for Matrices with J-(Skew) Centrosymmetry

Inverse Eigenvalue Problems and Their Associated Approximation Problems for Matrices with J-(Skew) Centrosymmetry Inverse Eigenvalue Problems and Their Associated Approximation Problems for Matrices with -(Skew) Centrosymmetry Zhong-Yun Liu 1 You-Cai Duan 1 Yun-Feng Lai 1 Yu-Lin Zhang 1 School of Math., Changsha University

More information

On the solvability of an equation involving the Smarandache function and Euler function

On the solvability of an equation involving the Smarandache function and Euler function Scientia Magna Vol. 008), No., 9-33 On the solvability of an equation involving the Smarandache function and Euler function Weiguo Duan and Yanrong Xue Department of Mathematics, Northwest University,

More information

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson

More information

Linear Algebra Lecture Notes-II

Linear Algebra Lecture Notes-II Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered

More information

Phys 201. Matrices and Determinants

Phys 201. Matrices and Determinants Phys 201 Matrices and Determinants 1 1.1 Matrices 1.2 Operations of matrices 1.3 Types of matrices 1.4 Properties of matrices 1.5 Determinants 1.6 Inverse of a 3 3 matrix 2 1.1 Matrices A 2 3 7 =! " 1

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Improved Newton s method with exact line searches to solve quadratic matrix equation

Improved Newton s method with exact line searches to solve quadratic matrix equation Journal of Computational and Applied Mathematics 222 (2008) 645 654 wwwelseviercom/locate/cam Improved Newton s method with exact line searches to solve quadratic matrix equation Jian-hui Long, Xi-yan

More information

Simultaneous Diagonalization of Positive Semi-definite Matrices

Simultaneous Diagonalization of Positive Semi-definite Matrices Simultaneous Diagonalization of Positive Semi-definite Matrices Jan de Leeuw Version 21, May 21, 2017 Abstract We give necessary and sufficient conditions for solvability of A j = XW j X, with the A j

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

Some inequalities for sum and product of positive semide nite matrices

Some inequalities for sum and product of positive semide nite matrices Linear Algebra and its Applications 293 (1999) 39±49 www.elsevier.com/locate/laa Some inequalities for sum and product of positive semide nite matrices Bo-Ying Wang a,1,2, Bo-Yan Xi a, Fuzhen Zhang b,

More information

Positive definite preserving linear transformations on symmetric matrix spaces

Positive definite preserving linear transformations on symmetric matrix spaces Positive definite preserving linear transformations on symmetric matrix spaces arxiv:1008.1347v1 [math.ra] 7 Aug 2010 Huynh Dinh Tuan-Tran Thi Nha Trang-Doan The Hieu Hue Geometry Group College of Education,

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES HANYU LI, HU YANG College of Mathematics and Physics Chongqing University Chongqing, 400030, P.R. China EMail: lihy.hy@gmail.com,

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

Interval solutions for interval algebraic equations

Interval solutions for interval algebraic equations Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 430 (2009) 532 543 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: wwwelseviercom/locate/laa Computing tight upper bounds

More information

POSITIVE MAP AS DIFFERENCE OF TWO COMPLETELY POSITIVE OR SUPER-POSITIVE MAPS

POSITIVE MAP AS DIFFERENCE OF TWO COMPLETELY POSITIVE OR SUPER-POSITIVE MAPS Adv. Oper. Theory 3 (2018), no. 1, 53 60 http://doi.org/10.22034/aot.1702-1129 ISSN: 2538-225X (electronic) http://aot-math.org POSITIVE MAP AS DIFFERENCE OF TWO COMPLETELY POSITIVE OR SUPER-POSITIVE MAPS

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

Miscellaneous Results, Solving Equations, and Generalized Inverses. opyright c 2012 Dan Nettleton (Iowa State University) Statistics / 51

Miscellaneous Results, Solving Equations, and Generalized Inverses. opyright c 2012 Dan Nettleton (Iowa State University) Statistics / 51 Miscellaneous Results, Solving Equations, and Generalized Inverses opyright c 2012 Dan Nettleton (Iowa State University) Statistics 611 1 / 51 Result A.7: Suppose S and T are vector spaces. If S T and

More information

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Linear Algebra 2 Spectral Notes

Linear Algebra 2 Spectral Notes Linear Algebra 2 Spectral Notes In what follows, V is an inner product vector space over F, where F = R or C. We will use results seen so far; in particular that every linear operator T L(V ) has a complex

More information

Is there a Small Skew Cayley Transform with Zero Diagonal?

Is there a Small Skew Cayley Transform with Zero Diagonal? Is there a Small Skew Cayley Transform with Zero Diagonal? Abstract The eigenvectors of an Hermitian matrix H are the columns of some complex unitary matrix Q. For any diagonal unitary matrix Ω the columns

More information

NORMS ON SPACE OF MATRICES

NORMS ON SPACE OF MATRICES NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system

More information

The minimum rank of matrices and the equivalence class graph

The minimum rank of matrices and the equivalence class graph Linear Algebra and its Applications 427 (2007) 161 170 wwwelseviercom/locate/laa The minimum rank of matrices and the equivalence class graph Rosário Fernandes, Cecília Perdigão Departamento de Matemática,

More information

The Singular Value Decomposition

The Singular Value Decomposition The Singular Value Decomposition Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) SVD Fall 2015 1 / 13 Review of Key Concepts We review some key definitions and results about matrices that will

More information

A property of orthogonal projectors

A property of orthogonal projectors Linear Algebra and its Applications 354 (2002) 35 39 www.elsevier.com/locate/laa A property of orthogonal projectors Jerzy K. Baksalary a,, Oskar Maria Baksalary b,tomaszszulc c a Department of Mathematics,

More information

A property concerning the Hadamard powers of inverse M-matrices

A property concerning the Hadamard powers of inverse M-matrices Linear Algebra and its Applications 381 (2004 53 60 www.elsevier.com/locate/laa A property concerning the Hadamard powers of inverse M-matrices Shencan Chen Department of Mathematics, Fuzhou University,

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 17 LECTURE 5 1 existence of svd Theorem 1 (Existence of SVD) Every matrix has a singular value decomposition (condensed version) Proof Let A C m n and for simplicity

More information

arxiv: v3 [math.ra] 22 Aug 2014

arxiv: v3 [math.ra] 22 Aug 2014 arxiv:1407.0331v3 [math.ra] 22 Aug 2014 Positivity of Partitioned Hermitian Matrices with Unitarily Invariant Norms Abstract Chi-Kwong Li a, Fuzhen Zhang b a Department of Mathematics, College of William

More information

Re-nnd solutions of the matrix equation AXB = C

Re-nnd solutions of the matrix equation AXB = C Re-nnd solutions of the matrix equation AXB = C Dragana S. Cvetković-Ilić Abstract In this article we consider Re-nnd solutions of the equation AXB = C with respect to X, where A, B, C are given matrices.

More information

This can be accomplished by left matrix multiplication as follows: I

This can be accomplished by left matrix multiplication as follows: I 1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Multiplicative Perturbation Bounds of the Group Inverse and Oblique Projection

Multiplicative Perturbation Bounds of the Group Inverse and Oblique Projection Filomat 30: 06, 37 375 DOI 0.98/FIL67M Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Multiplicative Perturbation Bounds of the Group

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

CLASSIFICATION OF COMPLETELY POSITIVE MAPS 1. INTRODUCTION

CLASSIFICATION OF COMPLETELY POSITIVE MAPS 1. INTRODUCTION CLASSIFICATION OF COMPLETELY POSITIVE MAPS STEPHAN HOYER ABSTRACT. We define a completely positive map and classify all completely positive linear maps. We further classify all such maps that are trace-preserving

More information

Scientific Computing: Dense Linear Systems

Scientific Computing: Dense Linear Systems Scientific Computing: Dense Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 February 9th, 2012 A. Donev (Courant Institute)

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

Moore Penrose inverses and commuting elements of C -algebras

Moore Penrose inverses and commuting elements of C -algebras Moore Penrose inverses and commuting elements of C -algebras Julio Benítez Abstract Let a be an element of a C -algebra A satisfying aa = a a, where a is the Moore Penrose inverse of a and let b A. We

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Equivalence constants for certain matrix norms II

Equivalence constants for certain matrix norms II Linear Algebra and its Applications 420 (2007) 388 399 www.elsevier.com/locate/laa Equivalence constants for certain matrix norms II Bao Qi Feng a,, Andrew Tonge b a Department of Mathematical Sciences,

More information

Generalized Principal Pivot Transform

Generalized Principal Pivot Transform Generalized Principal Pivot Transform M. Rajesh Kannan and R. B. Bapat Indian Statistical Institute New Delhi, 110016, India Abstract The generalized principal pivot transform is a generalization of the

More information

STAT200C: Review of Linear Algebra

STAT200C: Review of Linear Algebra Stat200C Instructor: Zhaoxia Yu STAT200C: Review of Linear Algebra 1 Review of Linear Algebra 1.1 Vector Spaces, Rank, Trace, and Linear Equations 1.1.1 Rank and Vector Spaces Definition A vector whose

More information

Math Matrix Algebra

Math Matrix Algebra Math 44 - Matrix Algebra Review notes - (Alberto Bressan, Spring 7) sec: Orthogonal diagonalization of symmetric matrices When we seek to diagonalize a general n n matrix A, two difficulties may arise:

More information

On Euclidean distance matrices

On Euclidean distance matrices On Euclidean distance matrices R. Balaji and R. B. Bapat Indian Statistical Institute, New Delhi, 110016 November 19, 2006 Abstract If A is a real symmetric matrix and P is an orthogonal projection onto

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity

More information

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the

More information

Modified Gauss Seidel type methods and Jacobi type methods for Z-matrices

Modified Gauss Seidel type methods and Jacobi type methods for Z-matrices Linear Algebra and its Applications 7 (2) 227 24 www.elsevier.com/locate/laa Modified Gauss Seidel type methods and Jacobi type methods for Z-matrices Wen Li a,, Weiwei Sun b a Department of Mathematics,

More information

CS 143 Linear Algebra Review

CS 143 Linear Algebra Review CS 143 Linear Algebra Review Stefan Roth September 29, 2003 Introductory Remarks This review does not aim at mathematical rigor very much, but instead at ease of understanding and conciseness. Please see

More information

ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION

ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION Harald K. Wimmer 1 The set of all negative-semidefinite solutions of the CARE A X + XA + XBB X C C = 0 is homeomorphic

More information

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES Volume 10 (2009), Issue 4, Article 91, 5 pp. ON THE HÖLDER CONTINUITY O MATRIX UNCTIONS OR NORMAL MATRICES THOMAS P. WIHLER MATHEMATICS INSTITUTE UNIVERSITY O BERN SIDLERSTRASSE 5, CH-3012 BERN SWITZERLAND.

More information

MAT 2037 LINEAR ALGEBRA I web:

MAT 2037 LINEAR ALGEBRA I web: MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear

More information

Sums of diagonalizable matrices

Sums of diagonalizable matrices Linear Algebra and its Applications 315 (2000) 1 23 www.elsevier.com/locate/laa Sums of diagonalizable matrices J.D. Botha Department of Mathematics University of South Africa P.O. Box 392 Pretoria 0003

More information

Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications

Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications Yongge Tian CEMA, Central University of Finance and Economics, Beijing 100081,

More information

Sub-Stiefel Procrustes problem. Krystyna Ziętak

Sub-Stiefel Procrustes problem. Krystyna Ziętak Sub-Stiefel Procrustes problem (Problem Procrustesa z macierzami sub-stiefel) Krystyna Ziętak Wrocław April 12, 2016 Outline 1 Joao Cardoso 2 Orthogonal Procrustes problem 3 Unbalanced Stiefel Procrustes

More information

Section 3.3. Matrix Rank and the Inverse of a Full Rank Matrix

Section 3.3. Matrix Rank and the Inverse of a Full Rank Matrix 3.3. Matrix Rank and the Inverse of a Full Rank Matrix 1 Section 3.3. Matrix Rank and the Inverse of a Full Rank Matrix Note. The lengthy section (21 pages in the text) gives a thorough study of the rank

More information

On the Eigenstructure of Hermitian Toeplitz Matrices with Prescribed Eigenpairs

On the Eigenstructure of Hermitian Toeplitz Matrices with Prescribed Eigenpairs The Eighth International Symposium on Operations Research and Its Applications (ISORA 09) Zhangjiajie, China, September 20 22, 2009 Copyright 2009 ORSC & APORC, pp 298 305 On the Eigenstructure of Hermitian

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

The Laplacian spectrum of a mixed graph

The Laplacian spectrum of a mixed graph Linear Algebra and its Applications 353 (2002) 11 20 www.elsevier.com/locate/laa The Laplacian spectrum of a mixed graph Xiao-Dong Zhang a,, Jiong-Sheng Li b a Department of Mathematics, Shanghai Jiao

More information