MINIMAL NORMAL AND COMMUTING COMPLETIONS
INTERNATIONAL JOURNAL OF INFORMATION AND SYSTEMS SCIENCES, Volume 4, Number 1, Pages 50–59, © 2008 Institute for Scientific Computing and Information

DAVID P. KIMSEY AND HUGO J. WOERDEMAN

Abstract. We study the minimal normal completion problem: given $A \in \mathbb{C}^{n\times n}$, how do we find an $(n+q)\times(n+q)$ normal matrix
$$A_{ext} := \begin{pmatrix} A & A_1 \\ A_2 & A_3 \end{pmatrix}$$
of smallest possible size? We will show that this smallest number $q$ of rows and columns we need to add, called the normal defect of $A$, satisfies
$$\mathrm{nd}(A) \ge \max\{i_-(AA^* - A^*A),\ i_+(AA^* - A^*A)\},$$
where $i_\pm(M)$ denote the numbers of positive and negative eigenvalues of the Hermitian matrix $M$, counting multiplicities. Subsequently, we will show that for some matrices a minimal normal completion can be chosen to be a multiple of a unitary, addressing a conjecture from [H. J. Woerdeman, Linear and Multilinear Algebra, 59–68]. In addition, we study the related question where $A \in \mathbb{C}^{n\times n}$ and $B \in \mathbb{C}^{n\times n}$ are given, and where we look for
$$A_{ext} := \begin{pmatrix} A & A_1 \\ A_2 & A_3 \end{pmatrix} \quad \text{and} \quad B_{ext} := \begin{pmatrix} B & B_1 \\ B_2 & B_3 \end{pmatrix}$$
such that they commute and are of smallest possible size.

Key Words: commuting completions, commuting defect, normal completions, normal defect, inertia, inverse defect, unitary defect.

1. Introduction

The minimal normal completion problem was introduced in [6] and concerns the following. Given $A \in \mathbb{C}^{n\times n}$, we wish to find a smallest possible normal matrix with $A$ as a principal submatrix. Recall that a matrix $A$ is normal if and only if the commutator of $A$ and its conjugate transpose $A^*$, denoted by $[A, A^*] := AA^* - A^*A$, equals $0$. In other words, we would like to find a normal completion
$$\begin{pmatrix} A & ? \\ ? & ? \end{pmatrix} : \mathbb{C}^n \oplus \mathbb{C}^q \to \mathbb{C}^n \oplus \mathbb{C}^q$$
of smallest possible size (thus smallest possible $q$). We shall call this smallest number $q$ the normal defect of $A$, and denote it by $\mathrm{nd}(A)$. Clearly, $\mathrm{nd}(A) = 0$ if and only if $A$ is normal. As observed in [2], the matrix $\begin{pmatrix} A & A^* \\ A^* & A \end{pmatrix}$ is normal, so it follows that for an $n\times n$ matrix $A$ we have $\mathrm{nd}(A) \le n$. As was observed in [6], and as we shall see further on, we have in fact that $\mathrm{nd}(A) \le n-1$. It is also not hard to come up with the lower bound $\mathrm{nd}(A) \ge \frac12 \operatorname{rank}(AA^* - A^*A)$. Indeed, if $\begin{pmatrix} A & A_1 \\ A_2 & A_3 \end{pmatrix}$ is of size $(n+q)\times(n+q)$ and normal, then $AA^* - A^*A = A_2^*A_2 - A_1A_1^*$, and thus
$$\operatorname{rank}(AA^* - A^*A) \le \operatorname{rank}(A_2^*A_2) + \operatorname{rank}(A_1A_1^*) \le q + q = 2q.$$

Received by the editors December 8, 2006 and, in revised form, May 18, 2007.
Mathematics Subject Classification: 15A4, 15A4, 15A57.
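The elementary bounds just derived are easy to confirm numerically. A minimal NumPy sketch (the random test matrix is our own illustration, not from the paper): it checks that the doubling construction from [2] is always normal, and evaluates the lower bound $\frac12\operatorname{rank}[A,A^*]$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# The commutator [A, A*] = AA* - A*A; A is normal iff it vanishes.
comm = A @ A.conj().T - A.conj().T @ A

# Doubling construction: the 2n x 2n matrix [[A, A*], [A*, A]] is
# always normal, which shows nd(A) <= n.
N = np.block([[A, A.conj().T], [A.conj().T, A]])
comm_N = N @ N.conj().T - N.conj().T @ N
print(np.allclose(comm_N, 0))  # True

# Lower bound: nd(A) >= rank([A, A*]) / 2; it never exceeds n,
# consistent with the upper bound above.
lower = np.linalg.matrix_rank(comm) / 2
print(lower <= n)  # True
```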
In order to obtain sharper bounds for $\mathrm{nd}(A)$, the so-called unitary defect was introduced in [6]. It corresponds to the smallest number of rows and columns we need to add to $A$ such that the completion is a multiple of a unitary matrix, and as it turns out we have that

(1) $\mathrm{ud}(A) := \operatorname{rank}(\|A\|^2 I_n - A^*A)$,

where $\|\cdot\|$ denotes the spectral norm. As multiples of unitaries are normal, we clearly have that $\mathrm{nd}(A) \le \mathrm{ud}(A)$. Formula (1) implies that $\mathrm{ud}(A) \le n-1$ (as $\|A\|^2$ is an eigenvalue of $A^*A$), which yields $\mathrm{nd}(A) \le n-1$. In order to state a conjecture from [6], let us recall that a matrix $A \in \mathbb{C}^{n\times n}$ is called unitarily reducible if
$$A = U^* \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix} U,$$
with $U$ unitary and $A_1$, $A_2$ of nontrivial size. Clearly, with $A$ as above we have that

(2) $\mathrm{nd}(A) \le \mathrm{nd}(A_1) + \mathrm{nd}(A_2) \le \mathrm{ud}(A_1) + \mathrm{ud}(A_2) \le \mathrm{ud}(A)$.

As soon as $\|A_1\| \ne \|A_2\|$ we have that the last inequality in (2) is strict, and thus $\mathrm{nd}(A) < \mathrm{ud}(A)$ in that case. So for a general statement for the case when $\mathrm{nd}(A) = \mathrm{ud}(A)$, it is natural to require that $A$ is unitarily irreducible, which by definition means that $A$ is not unitarily reducible. An open question from [6] is whether the following conjecture holds.

Conjecture 1.1. For a unitarily irreducible matrix $A$ we have that $\mathrm{nd}(A) = \mathrm{ud}(A)$.

In this paper we refine some of the estimates for $\mathrm{nd}(A)$ and as a result obtain more evidence for this conjecture. Let us mention that the separability problem that appears in quantum computation can be seen as a normal completion problem where additional constraints need to be met; see [7] for details. In that context, minimizing the size of the matrix corresponds to minimizing the number of states in the separable representation. We will end this paper by considering the problem of completing two matrices to make them commute.

The paper is organized as follows. In Section 2, we obtain an improved lower bound for $\mathrm{nd}(A)$ by showing that $\mathrm{nd}(A) \ge \max\{i_+([A,A^*]),\ i_-([A,A^*])\}$, where $i_+(M)$ (respectively $i_-(M)$) denotes the number of positive (negative) eigenvalues of the Hermitian matrix $M$. Using this improved lower bound we are able to show that for some weighted Jordan blocks we have that $\mathrm{nd}(A) = \mathrm{ud}(A)$,
providing new evidence for Conjecture 1.1. Next, in Section 3 we examine matrices for which $\mathrm{nd}(A) = 1$. Finally, in Section 4 we explore the commuting completion problem.

2. Main result and normal defect conjecture

In this section we shall prove our main result and provide evidence for Conjecture 1.1.

Theorem 2.1. Given $A \in \mathbb{C}^{n\times n}$. Then
$$\mathrm{nd}(A) \ge \max\{i_+([A,A^*]),\ i_-([A,A^*])\}.$$

Proof. Let $A_{ext} := \begin{pmatrix} A & A_1 \\ A_2 & A_3 \end{pmatrix}$ be a normal completion of size $(n+q)\times(n+q)$. From the normality of $A_{ext}$ we get

(3) $AA^* - A^*A = A_2^*A_2 - A_1A_1^*$.

Let us denote the eigenvalues of the Hermitian matrices $A_2^*A_2 - A_1A_1^*$ and $A_2^*A_2$ by $\lambda_1 \le \cdots \le \lambda_n$ and $\mu_1 \le \cdots \le \mu_n$, respectively. By the Courant–Fischer
theorem (see, e.g., Theorem 4.2.11 in [3]), we get for $1 \le j \le n$

(4) $\displaystyle \lambda_j = \min_{w_1,\dots,w_{n-j} \in \mathbb{C}^n}\ \max_{x \perp w_1,\dots,w_{n-j},\ \|x\|=1} x^*(A_2^*A_2 - A_1A_1^*)x$

(5) $\displaystyle \phantom{\lambda_j} \le \min_{w_1,\dots,w_{n-j} \in \mathbb{C}^n}\ \max_{x \perp w_1,\dots,w_{n-j},\ \|x\|=1} x^*A_2^*A_2\,x = \mu_j.$

Since $\mu_{n-q} = 0$, we get $\lambda_{n-q} \le 0$. Thus (3) gives $i_+([A,A^*]) \le q$. Notice that a similar argument can be carried out by looking at the eigenvalues of $A_1A_1^* - A_2^*A_2$ and $A_1A_1^*$, which will give $i_-([A,A^*]) \le q$. This proves the result. □

Using the well-known connection between normal matrices $N$ and pairs of commuting Hermitian matrices $(\operatorname{Re} N, \operatorname{Im} N)$, where $\operatorname{Re} N = \frac12(N + N^*)$ and $\operatorname{Im} N = \frac{1}{2i}(N - N^*)$, one can easily deduce the following corollary.

Corollary 2.2. Let Hermitian matrices $A, B \in \mathbb{C}^{n\times n}$ be given. If there exist Hermitian matrices $A_{ext} = \begin{pmatrix} A & * \\ * & * \end{pmatrix}$ and $B_{ext} = \begin{pmatrix} B & * \\ * & * \end{pmatrix}$ of size $(n+q)\times(n+q)$ that commute, then
$$q \ge \max\{i_+(i(BA - AB)),\ i_-(i(BA - AB))\}.$$

Proof. Let $N = A + iB$. Calculating $NN^*$ we get $NN^* = (A + iB)(A - iB) = A^2 - iAB + iBA + B^2$. Now if we calculate $N^*N$ we get $N^*N = A^2 + iAB - iBA + B^2$. Thus $NN^* - N^*N = 2i(BA - AB)$. It then follows from Theorem 2.1 that $q \ge \max\{i_+(i(BA - AB)),\ i_-(i(BA - AB))\}$. □

Next we explore a class of matrices for which the equality $\mathrm{nd}(A) = \mathrm{ud}(A)$ is true.

Proposition 2.3. Let $A \in \mathbb{C}^{n\times n}$ be of the form
$$A := \begin{pmatrix} 0 & a_1 & & \\ & \ddots & \ddots & \\ & & 0 & a_{n-1} \\ & & & 0 \end{pmatrix},$$
with either $a_1 = \cdots = a_l > a_{l+1} > \cdots > a_{n-1} > 0$ or $0 < a_1 < \cdots < a_{n-l} = \cdots = a_{n-1}$, where $1 \le l \le n-1$. Then $\mathrm{nd}(A) = \mathrm{ud}(A) = n - l$.

Proof. Let $\alpha := a_1 = \cdots = a_l = \|A\|$. By Proposition 5.4 in [6] we have that $\mathrm{nd}(A) \le \mathrm{ud}(A)$. Recall that $\mathrm{ud}(A) = \operatorname{rank}(\|A\|^2 I_n - A^*A)$. Thus
$$\mathrm{ud}(A) = \operatorname{rank} \operatorname{diag}\big(\alpha^2,\ \alpha^2 - a_1^2,\ \dots,\ \alpha^2 - a_{n-1}^2\big) = \operatorname{rank} \operatorname{diag}\big(\alpha^2,\ 0,\ \dots,\ 0,\ \alpha^2 - a_{l+1}^2,\ \dots,\ \alpha^2 - a_{n-1}^2\big) = n - l.$$
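As an aside, both the rank computation above and the inertia count used in the second half of this proof are easy to confirm numerically. A minimal NumPy sketch with illustrative weights of our own choosing ($n = 4$, $l = 2$):

```python
import numpy as np

# Illustrative weights (our own choice): a_1 = a_2 = 2 > a_3 = 1,
# so Proposition 2.3 predicts nd(A) = ud(A) = n - l = 2.
a = np.array([2.0, 2.0, 1.0])
n = a.size + 1
A = np.diag(a, k=1)  # weighted Jordan block

# ud(A) = rank(||A||^2 I_n - A*A), with ||.|| the spectral norm (formula (1)).
ud = np.linalg.matrix_rank(np.linalg.norm(A, 2) ** 2 * np.eye(n) - A.T @ A)

# Lower bound of Theorem 2.1: nd(A) >= max{i_+([A, A*]), i_-([A, A*])}.
eigs = np.linalg.eigvalsh(A @ A.T - A.T @ A)
i_plus = int(np.sum(eigs > 1e-9))
i_minus = int(np.sum(eigs < -1e-9))
print(ud, max(i_plus, i_minus))  # 2 2
```

Here the commutator is $\operatorname{diag}(4, 0, -3, -1)$, so $i_+ = 1$ and $i_- = n - l = 2$, matching the count in the proof.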
Thus we have $\mathrm{nd}(A) \le \mathrm{ud}(A) = n - l$. To achieve a lower bound for $\mathrm{nd}(A)$, we notice that Theorem 2.1 yields $\mathrm{nd}(A) \ge \max\{i_+([A,A^*]),\ i_-([A,A^*])\}$. Observe that
$$[A, A^*] = \operatorname{diag}\big(a_1^2,\ 0,\ \dots,\ 0,\ a_{l+1}^2 - a_l^2,\ \dots,\ a_{n-1}^2 - a_{n-2}^2,\ -a_{n-1}^2\big).$$
Therefore $i_+([A,A^*]) = 1$ and $i_-([A,A^*]) = n - l$. Clearly, $n - l \ge 1$ and thus $\mathrm{nd}(A) \ge n - l$. We conclude that $\mathrm{nd}(A) = \mathrm{ud}(A) = n - l$. The proof for the second part of the statement is similar. □

We observe that the matrices in Proposition 2.3 are unitarily irreducible, as the following lemma shows.

Lemma 2.4. Let $A \in \mathbb{C}^{n\times n}$ be strictly upper triangular with $a_{i,i+1} \ne 0$ for $1 \le i \le n-1$. Then $A$ is unitarily irreducible.

Proof. Let $A = U^* \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix} U$ with $U$ unitary, $A_1 \in \mathbb{C}^{p\times p}$ and $A_2 \in \mathbb{C}^{q\times q}$ with $p + q = n$. We need to show that $p = 0$ or $q = 0$. Notice that $n - 1 = \operatorname{rank} A = \operatorname{rank} A_1 + \operatorname{rank} A_2 \le p + q = n$. So there exist two possibilities: either $\operatorname{rank} A_1 = p$ and $\operatorname{rank} A_2 = q - 1$, or $\operatorname{rank} A_1 = p - 1$ and $\operatorname{rank} A_2 = q$. Without loss of generality we assume that $\operatorname{rank} A_1 = p$ and $\operatorname{rank} A_2 = q - 1$. Thus $A_1$ is invertible and $\operatorname{rank} A_1^k = p$ for all $k$. As $0 = \operatorname{rank} A^n \ge \operatorname{rank} A_1^n = p$, it follows that $p = 0$. □

3. When the normal defect equals one

Inspired by [4] we consider the case when the normal defect is equal to one. For the class of matrices considered before we have the following observation.

Proposition 3.1. Given a matrix $A$ of the form
$$A := \begin{pmatrix} 0 & a_1 & & \\ & \ddots & \ddots & \\ & & 0 & a_{n-1} \\ & & & 0 \end{pmatrix},$$
with $a_1, \dots, a_{n-1} \in \mathbb{C} \setminus \{0\}$. Then $\mathrm{nd}(A) = 1$ if and only if $|a_1| = \cdots = |a_{n-1}| =: \alpha$. Furthermore, when $n \ge 4$ all minimal normal completions have the form
$$A_{ext} := \begin{pmatrix} 0 & a_1 & & & \\ & \ddots & \ddots & & \\ & & 0 & a_{n-1} & \\ & & & 0 & a_n \\ a_{n+1} & & & & 0 \end{pmatrix}, \quad \text{with } \alpha = |a_n| = |a_{n+1}|.$$
Proof. Observe that
$$[A, A^*] = \operatorname{diag}\big(|a_1|^2,\ |a_2|^2 - |a_1|^2,\ \dots,\ |a_{n-1}|^2 - |a_{n-2}|^2,\ -|a_{n-1}|^2\big),$$
which yields that $\max\{i_+([A,A^*]),\ i_-([A,A^*])\} \ge 1$, and equality holds if and only if $|a_1| = \cdots = |a_{n-1}|$. Using Theorem 2.1 it follows that for $\mathrm{nd}(A) = 1$ it is necessary that $|a_1| = \cdots = |a_{n-1}|$. Let now $|a_1| = \cdots = |a_{n-1}|$, and put
$$A_{ext} := \begin{pmatrix} A & \beta \\ \gamma^* & \delta \end{pmatrix} = \begin{pmatrix} 0 & a_1 & & & \beta_1 \\ & \ddots & \ddots & & \vdots \\ & & 0 & a_{n-1} & \beta_{n-1} \\ & & & 0 & \beta_n \\ \bar\gamma_1 & \bar\gamma_2 & \cdots & \bar\gamma_n & \delta \end{pmatrix},$$
where $\beta := (\beta_1\ \cdots\ \beta_n)^T$ and $\gamma := (\gamma_1\ \cdots\ \gamma_n)^T$. Let us assume that $A_{ext}$ is normal. Then we see that $AA^* + \beta\beta^* = A^*A + \gamma\gamma^*$, or equivalently $[A, A^*] = \gamma\gamma^* - \beta\beta^*$. Recall that
$$[A, A^*] = \operatorname{diag}\big(|a_1|^2,\ 0,\ \dots,\ 0,\ -|a_{n-1}|^2\big),$$
while
$$\gamma\gamma^* - \beta\beta^* = \big(\gamma_i\bar\gamma_j - \beta_i\bar\beta_j\big)_{i,j=1}^n =: W = (w_{ij})_{i,j=1}^n.$$
As $|a_1|^2, |a_{n-1}|^2 > 0$ we get that $|\gamma_1|^2 = |a_1|^2 + |\beta_1|^2$ and $|\beta_n|^2 = |a_{n-1}|^2 + |\gamma_n|^2$. From $w_{ii} = 0$ for $2 \le i \le n-1$, we see that $|\gamma_i| = |\beta_i|$ for $i = 2, \dots, n-1$. From $w_{i1} = 0$ for $i = 2, \dots, n$, we see that $\gamma_i\bar\gamma_1 - \beta_i\bar\beta_1 = 0$. This implies that $|\gamma_i||\gamma_1| - |\beta_i||\beta_1| = 0$. Since $|\gamma_i| = |\beta_i|$ for $i = 2, \dots, n-1$, we get $|\gamma_i|\,(|\gamma_1| - |\beta_1|) = 0$. So either $\gamma_i = \beta_i = 0$ or $|\gamma_1| = |\beta_1|$. But $|\gamma_1| \ne |\beta_1|$, since $|\gamma_1|^2 = |a_1|^2 + |\beta_1|^2 > |\beta_1|^2$. Therefore $\gamma_i = \beta_i = 0$ for $i = 2, \dots, n-1$.
To find $\beta_1$, $\beta_n$, $\gamma_1$ and $\gamma_n$, we observe the following equation that results from $A_{ext}$ being normal:
$$A\gamma + \bar\delta\,\beta = A^*\beta + \delta\,\gamma.$$
Rewriting, and using that $\beta_i = \gamma_i = 0$ for $i = 2, \dots, n-1$, we see that
$$\begin{pmatrix} 0 \\ \vdots \\ 0 \\ a_{n-1}\gamma_n \\ 0 \end{pmatrix} + \bar\delta \begin{pmatrix} \beta_1 \\ 0 \\ \vdots \\ 0 \\ \beta_n \end{pmatrix} = \begin{pmatrix} 0 \\ \bar a_1 \beta_1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} + \delta \begin{pmatrix} \gamma_1 \\ 0 \\ \vdots \\ 0 \\ \gamma_n \end{pmatrix}.$$
As $n \ge 4$ we see that

(6) $\beta_1\bar\delta = \gamma_1\delta$ and $\beta_n\bar\delta = \gamma_n\delta$,
(7) $\bar a_1\beta_1 = 0$,
(8) $a_{n-1}\gamma_n = 0$.

From (7) we get $\beta_1 = 0$, since $a_1 \ne 0$. From (8) we get $\gamma_n = 0$. From (6) we get $\delta = 0$, since $\beta_1 = 0$ and $\gamma_1 \ne 0$. Finally, from the equations for $w_{11}$ and $w_{nn}$ we see that $|\gamma_1| = |\beta_n| = \alpha$. In conclusion, for $n \ge 4$ the normality of $A_{ext}$ implies that it is of the form

(9) $$A_{ext} := \begin{pmatrix} 0 & a_1 & & & \\ & \ddots & \ddots & & \\ & & 0 & a_{n-1} & \\ & & & 0 & a_n \\ a_{n+1} & & & & 0 \end{pmatrix}, \quad |a_1| = \cdots = |a_{n+1}|,$$

where $a_n := \beta_n$ and $a_{n+1} := \bar\gamma_1$. Clearly, for any $n$, if $A_{ext}$ is as in (9), then $A_{ext}$ is normal. This proves the proposition. □

It is worth mentioning that for $n = 2$ or $n = 3$ the above proposition does not describe the full situation: in these cases one can exhibit matrices $A$ and $\tilde A$ of the above form together with minimal normal completions $A_{ext}$ and $\tilde A_{ext}$ that are not of the form (9). Next we determine the eigenvalues of the completed normal matrix $A_{ext}$ from Proposition 3.1.
Lemma 3.2. For the normal matrix
$$A_{ext} := \begin{pmatrix} 0 & \alpha e^{i\theta_1} & & & \\ & 0 & \ddots & & \\ & & \ddots & \alpha e^{i\theta_{n-1}} & \\ & & & 0 & \alpha e^{i\theta_n} \\ \alpha e^{i\theta_{n+1}} & & & & 0 \end{pmatrix},$$
the eigenvalues of $A_{ext}$ are exactly $\lambda_k = \alpha e^{i(\psi + 2\pi k)/(n+1)}$, where $\psi = \theta_1 + \cdots + \theta_{n+1}$ and $k = 0, \dots, n$.

Proof. Computing the characteristic polynomial of $A_{ext}$ we see that it equals $\lambda^{n+1} - \alpha^{n+1}e^{i\psi}$, where $\psi = \theta_1 + \cdots + \theta_{n+1}$. Thus $\lambda_k = \alpha e^{i(\psi + 2\pi k)/(n+1)}$ with $k = 0, \dots, n$. □

In [4] the following problem was considered. Let $\lambda_0, \dots, \lambda_n$ and $\mu_1, \dots, \mu_n$ be two sequences of complex numbers. When can one find an $(n+1)\times(n+1)$ normal matrix with eigenvalues $\lambda_0, \dots, \lambda_n$ whose $n\times n$ principal submatrix has eigenvalues $\mu_1, \dots, \mu_n$? In [4] the author obtained the following result. Define
$$f(\lambda) := \frac{\prod_{j=1}^n (\lambda - \mu_j)}{\prod_{k=0}^n (\lambda - \lambda_k)}.$$
Then there exists a normal matrix $A$ with spectrum $\lambda_0, \dots, \lambda_n$ and with an $n\times n$ principal submatrix with spectrum $\mu_1, \dots, \mu_n$ if and only if the rational function $f$ has only simple poles and $\operatorname{Res}_{\lambda_k} f \ge 0$, $k = 0, \dots, n$. If we take $A_{ext}$ in Lemma 3.2 we observe that $A_{ext}$ has spectrum $\lambda_k = \alpha e^{i(\psi + 2\pi k)/(n+1)}$, where $\psi = \theta_1 + \cdots + \theta_{n+1}$ and $k = 0, \dots, n$. When we remove the last row and last column in $A_{ext}$, the remaining matrix has eigenvalues $\mu_i = 0$, $i = 1, \dots, n$. Thus by the result in [4] we must have that $\operatorname{Res}_{\lambda_k} f \ge 0$ is satisfied. This is indeed true, since
$$\operatorname{Res}_{\lambda_k} f = \lim_{\lambda \to \lambda_k} \frac{(\lambda - \lambda_k)\,\lambda^n}{\lambda^{n+1} - \alpha^{n+1}e^{i\psi}} = \lim_{\lambda \to \lambda_k} \frac{\lambda^n + n(\lambda - \lambda_k)\lambda^{n-1}}{(n+1)\lambda^n} = \frac{1}{n+1},$$
where we used L'Hôpital's rule in the second equality.

4. Commuting defect

In [1] the following minimal commuting completion problem was introduced. Given $A_1, \dots, A_d \in \mathbb{C}^{n\times n}$, how do we find $A_{1,ext}, \dots, A_{d,ext} \in \mathbb{C}^{(n+q)\times(n+q)}$ of smallest possible size, with $A_{i,ext} = \begin{pmatrix} A_i & * \\ * & * \end{pmatrix}$, such that $[A_{i,ext}, A_{j,ext}] = 0$ for $i \ne j$? We will restrict our investigation to completing only two matrices: given $A, B \in \mathbb{C}^{n\times n}$, how do we find $A_{ext}$, $B_{ext}$ of smallest possible size, with

(10) $A_{ext} = \begin{pmatrix} A & A_1 \\ A_2 & A_3 \end{pmatrix}$, $B_{ext} = \begin{pmatrix} B & B_1 \\ B_2 & B_3 \end{pmatrix}$,

such that $[A_{ext}, B_{ext}] = 0$? We shall call the smallest possible number $q$ the commuting defect, and denote it $\mathrm{cd}(A,B)$. Clearly, $\mathrm{cd}(A,B) = 0$ if and only if $[A,B] = 0$. As shown in [1], $\mathrm{cd}(A,B) \ge \frac12 \operatorname{rank}[A,B]$. Indeed, if $[A_{ext}, B_{ext}] = 0$ then $AB - BA = B_1A_2 - A_1B_2$,
and thus rank[a, B] rankb 1 A 1 + ranka 1 B 1 q + q = q The results in [1] also show that cda, B n One easily sees this by taking A B B A A et = and B B A et = A B One useful observation for this problem is that if two square matrices C and D satisfy CD = αi for some α, then automatically CD = DC With this in,
mind, we introduce the minimal inverse completion problem: given $A, B \in \mathbb{C}^{n\times n}$, how do we find $A_{ext}, B_{ext} \in \mathbb{C}^{(n+q)\times(n+q)}$ as in (10) of smallest possible size such that $A_{ext}B_{ext} = \alpha I_{n+q}$ for some $\alpha \ne 0$? We shall call this smallest number $q$ the inverse defect and denote it by $\mathrm{id}(A,B)$. The inverse defect of a pair of matrices is easily determined, as the following theorem shows.

Theorem 4.1. For $A, B \in \mathbb{C}^{n\times n}$, suppose $\alpha$ is the nonzero eigenvalue of $AB$ with the highest geometric multiplicity. Then $\mathrm{id}(A,B) = \operatorname{rank}(\alpha I_n - AB)$.

Proof. Let $A_{ext} := \begin{pmatrix} A & ? \\ ? & ? \end{pmatrix}$ and $B_{ext} := \begin{pmatrix} B & ? \\ ? & ? \end{pmatrix}$ exist such that $A_{ext}B_{ext} = \alpha I_{n+q}$, where $\alpha \ne 0$. We notice that $A_{ext}B_{ext} = \alpha I_{n+q}$ if and only if
$$\operatorname{rank} \begin{pmatrix} \alpha I_{n+q} & A_{ext} \\ B_{ext} & I_{n+q} \end{pmatrix} = n + q,$$
which is obviously bigger than or equal to $\operatorname{rank}\begin{pmatrix} \alpha I_n & A \\ B & I_n \end{pmatrix}$. As the latter equals $n + \operatorname{rank}(\alpha I_n - AB)$ by a Schur complement argument, we conclude that $n + q \ge n + \operatorname{rank}(\alpha I_n - AB)$, or $q \ge \operatorname{rank}(\alpha I_n - AB)$, as required.

Now let $\alpha$ be a nonzero eigenvalue of $AB$ with highest geometric multiplicity (or, equivalently, the $\alpha$ so that $\operatorname{rank}(\alpha I_n - AB)$ is minimal), and set $q = \operatorname{rank}(\alpha I_n - AB)$. We define the partial matrix
$$W := \begin{pmatrix} \alpha I_n & 0 & A & ? \\ 0 & \alpha I_q & ? & ? \\ B & ? & I_n & 0 \\ ? & ? & 0 & I_q \end{pmatrix}.$$
After a permutation similarity, by inspection we see
that $W$ is a partial banded matrix with a pattern $J$, say. Therefore we can apply Theorem 1.1 from [5]. We now have that $\min \operatorname{rank} W = \max_{T \subseteq J} \min \operatorname{rank} W_T$, where $W_T$ is the partial matrix obtained from $W$ by only keeping the known entries that lie in the triangular subpattern $T$. This gives us that
$$\min \operatorname{rank} W = \max\left\{ n + q,\ \operatorname{rank}\begin{pmatrix} \alpha I_n & A \\ B & I_n \end{pmatrix} \right\} = n + q,$$
by the choice of $q$. Thus $W$ has a completion of rank $n + q$, and consequently we can find $A_{ext}$ and $B_{ext}$ such that $A_{ext}B_{ext} = \alpha I_{n+q}$. Thus $\mathrm{id}(A,B) \le \operatorname{rank}(\alpha I_n - AB)$. This proves the theorem. □

Let us outline how to find a minimal inverse completion.

Algorithm 4.2. Let $A$, $B$ and $\alpha$ be as in Theorem 4.1, and put $q = \mathrm{id}(A,B)$ and $p = n - q$. First determine an invertible matrix $S$ so that
$$SABS^{-1} = \begin{pmatrix} \alpha I_p & P \\ 0 & Q \end{pmatrix}$$
for some $P$ and $Q$ of sizes $p\times q$ and $q\times q$, respectively. Write now
$$SA = \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix}, \qquad BS^{-1} = \begin{pmatrix} D_{11} & D_{12} \\ D_{21} & D_{22} \end{pmatrix},$$
with $C_{11}$, $D_{11}$ of size $p\times p$ and $C_{22}$, $D_{22}$ of size $q\times q$. Choose $X$ and $Y$ to be invertible matrices of size $q\times q$ and let
$$D_{ext} = \begin{pmatrix} D_{11} & D_{12} & -(D_{11}C_{12} + D_{12}C_{22})Y^{-1} \\ D_{21} & D_{22} & -(D_{21}C_{12} + D_{22}C_{22} - \alpha I_q)Y^{-1} \\ 0 & X & -XC_{22}Y^{-1} \end{pmatrix}$$
and put $B_{ext} = D_{ext}(S \oplus I_q)$, $A_{ext} = \alpha B_{ext}^{-1}$. Then one can check that $A_{ext}$ and $B_{ext}$ have the required form, and obviously $A_{ext}B_{ext} = \alpha I$. Let us try the algorithm out on an example.

Example 4.3. Start from $2\times 2$ matrices $A$ and $B$ for which $AB$ has two distinct nonzero eigenvalues, and choose $\alpha$ to be one of them; then $q = \operatorname{rank}(\alpha I_2 - AB) = 1$ and $p = 1$. Determining $S$ as in Algorithm 4.2 and carrying out the computation with scalar parameters $X = x$ and $Y = y$ produces a family of completions $A_{ext}, B_{ext} \in \mathbb{C}^{3\times 3}$, parametrized by $y \ne 0$, with $A_{ext}B_{ext} = \alpha I_3$.

As observed before, we have that $\mathrm{cd}(A,B) \le \mathrm{id}(A,B)$. In general there is no equality. For instance, when $A = 0$ and $B = 1$ (as $1\times 1$ matrices) we have $\mathrm{cd}(A,B) = 0$ and $\mathrm{id}(A,B) = 1$. That such a simple example exists seems to be due to the fact that we are excluding the possibility $\alpha = 0$ in the definition of $\mathrm{id}(A,B)$; we do this since $CD = 0$ does not imply $DC = 0$. But now one can ask what happens when $A$ and $B$ are nonsingular. Even in that case one can in general improve upon the estimate $\mathrm{cd}(A,B) \le \mathrm{id}(A,B)$ by doing the following. Suppose there is an invertible matrix $S$ so that

(11) $SAS^{-1} = \begin{pmatrix} A_1 & & \\ & \ddots & \\ & & A_d \end{pmatrix}$, $SBS^{-1} = \begin{pmatrix} B_1 & & \\ & \ddots & \\ & & B_d \end{pmatrix}$,

where $A_i$ and $B_i$, $i = 1, \dots, d$, are square matrices of the same nontrivial size. Then completing $A_i$ and $B_i$ to $A_{i,ext}$ and $B_{i,ext}$ that commute for all $i \in \{1, \dots, d\}$ yields completions $A_{ext}$ and $B_{ext}$ for $A$ and $B$, respectively, that commute. Thus
$$\mathrm{cd}(A,B) \le \sum_{i=1}^d \mathrm{cd}(A_i, B_i) \le \sum_{i=1}^d \mathrm{id}(A_i, B_i).$$
We are now led to the following question: let $A$ and $B$ be nonsingular matrices so that for no invertible $S$ we have that (11) holds with $d \ge 2$. Is it then true that $\mathrm{cd}(A,B) = \mathrm{id}(A,B)$?
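Two of the observations used in this section are easy to confirm numerically; a minimal NumPy sketch, with randomly generated matrices of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Doubling construction: A_ext = [[A, B], [B, A]] and B_ext = [[B, A], [A, B]]
# always commute (both products equal [[AB + BA, A^2 + B^2], [A^2 + B^2, BA + AB]]),
# which shows cd(A, B) <= n.
A_ext = np.block([[A, B], [B, A]])
B_ext = np.block([[B, A], [A, B]])
print(np.allclose(A_ext @ B_ext, B_ext @ A_ext))  # True

# The observation behind the inverse defect: if C D = alpha I with alpha != 0,
# then D = alpha C^{-1}, so C and D automatically commute.
C = rng.standard_normal((n, n)) + n * np.eye(n)  # shifted so it is invertible here
alpha = 2.0
D = alpha * np.linalg.inv(C)
print(np.allclose(C @ D, D @ C))  # True
```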
The questions in this section may also be pursued in the class of real symmetric matrices. In other words, let $A$ and $B$ be real symmetric, and look for $A_{ext}$ and $B_{ext}$ that are also real symmetric. As a complex symmetric matrix $N$ is normal if and only if the real symmetric matrices $A = \operatorname{Re} N$ and $B = \operatorname{Im} N$ commute, Corollary 2.2 applies. The real symmetric case is of interest in deriving multivariable quadrature formulas; see [1]. In their setting, $A$ and $B$ have a tridiagonal block form, and $A_{ext}$ and $B_{ext}$ are required to have this form as well. For this reason it may not be optimal to look for $A_{ext}$ and $B_{ext}$ with $A_{ext}B_{ext} = \alpha I_{n+q}$. We hope to further pursue the real symmetric case in a future publication.

Acknowledgments. The authors are very grateful for the helpful suggestions that were provided by the anonymous referee. In addition, we wish to thank Professor Ilya Spitkovsky for suggesting a more compact proof of Theorem 2.1. Both authors were partially supported by NSF grant DMS-5678. David P. Kimsey performed the research as part of an REU project.
References

[1] I. Degani, J. Schiff, and D. J. Tannor. Commuting extensions and cubature formulae. Numer. Math., 101(3):479–500, 2005.
[2] P. R. Halmos. Subnormal suboperators and the subdiscrete topology. In Anniversary volume on approximation theory and functional analysis (Oberwolfach, 1983), volume 65 of Internat. Schriftenreihe Numer. Math. Birkhäuser, Basel, 1984.
[3] R. A. Horn and C. R. Johnson. Matrix analysis. Cambridge University Press, Cambridge, 1990. Corrected reprint of the 1985 original.
[4] S. M. Malamud. Inverse spectral problem for normal matrices and the Gauss–Lucas theorem. Trans. Amer. Math. Soc., 357(10) (electronic), 2005.
[5] H. J. Woerdeman. Minimal rank completions of partial banded matrices. Linear and Multilinear Algebra, 36(1):59–68, 1993.
[6] H. J. Woerdeman. Hermitian and normal completions. Linear and Multilinear Algebra, 42(3):239–280, 1997.
[7] H. J. Woerdeman. The separability problem and normal completions. Linear Algebra Appl., 376:85–95, 2004.

Department of Mathematics, Drexel University, Philadelphia, PA 19104, USA
dpk7@drexel.edu and hugo@math.drexel.edu
More informationIr O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )
Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O
More informationDiagonalization by a unitary similarity transformation
Physics 116A Winter 2011 Diagonalization by a unitary similarity transformation In these notes, we will always assume that the vector space V is a complex n-dimensional space 1 Introduction A semi-simple
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko
More informationGeometric Mapping Properties of Semipositive Matrices
Geometric Mapping Properties of Semipositive Matrices M. J. Tsatsomeros Mathematics Department Washington State University Pullman, WA 99164 (tsat@wsu.edu) July 14, 2015 Abstract Semipositive matrices
More informationHigher rank numerical ranges of rectangular matrix polynomials
Journal of Linear and Topological Algebra Vol. 03, No. 03, 2014, 173-184 Higher rank numerical ranges of rectangular matrix polynomials Gh. Aghamollaei a, M. Zahraei b a Department of Mathematics, Shahid
More informationMath 108b: Notes on the Spectral Theorem
Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator
More informationSingular Value Inequalities for Real and Imaginary Parts of Matrices
Filomat 3:1 16, 63 69 DOI 1.98/FIL16163C Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Singular Value Inequalities for Real Imaginary
More informationLinGloss. A glossary of linear algebra
LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasi-triangular A matrix A is quasi-triangular iff it is a triangular matrix except its diagonal
More informationThe Singular Value Decomposition and Least Squares Problems
The Singular Value Decomposition and Least Squares Problems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 27, 2009 Applications of SVD solving
More informationarxiv: v1 [math.na] 1 Sep 2018
On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing
More informationAN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION
Unspecified Journal Volume 00, Number 0, Pages 000 000 S????-????(XX)0000-0 AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION HUAJUN HUANG AND TIN-YAU TAM Abstract. The m-th root of the diagonal of the upper
More informationMath 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.
Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,
More informationChapter 1. Matrix Algebra
ST4233, Linear Models, Semester 1 2008-2009 Chapter 1. Matrix Algebra 1 Matrix and vector notation Definition 1.1 A matrix is a rectangular or square array of numbers of variables. We use uppercase boldface
More information1 Linear Algebra Problems
Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and
More informationMultiplicative Perturbation Bounds of the Group Inverse and Oblique Projection
Filomat 30: 06, 37 375 DOI 0.98/FIL67M Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Multiplicative Perturbation Bounds of the Group
More informationLU Factorization. A m x n matrix A admits an LU factorization if it can be written in the form of A = LU
LU Factorization A m n matri A admits an LU factorization if it can be written in the form of Where, A = LU L : is a m m lower triangular matri with s on the diagonal. The matri L is invertible and is
More informationStrict diagonal dominance and a Geršgorin type theorem in Euclidean
Strict diagonal dominance and a Geršgorin type theorem in Euclidean Jordan algebras Melania Moldovan Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland
More informationSome inequalities for sum and product of positive semide nite matrices
Linear Algebra and its Applications 293 (1999) 39±49 www.elsevier.com/locate/laa Some inequalities for sum and product of positive semide nite matrices Bo-Ying Wang a,1,2, Bo-Yan Xi a, Fuzhen Zhang b,
More information0.1 Rational Canonical Forms
We have already seen that it is useful and simpler to study linear systems using matrices. But matrices are themselves cumbersome, as they are stuffed with many entries, and it turns out that it s best
More informationLinear Equations in Linear Algebra
1 Linear Equations in Linear Algebra 1.1 SYSTEMS OF LINEAR EQUATIONS LINEAR EQUATION,, 1 n A linear equation in the variables equation that can be written in the form a a a b 1 1 2 2 n n a a is an where
More informationSome Inequalities for Commutators of Bounded Linear Operators in Hilbert Spaces
Some Inequalities for Commutators of Bounded Linear Operators in Hilbert Spaces S.S. Dragomir Abstract. Some new inequalities for commutators that complement and in some instances improve recent results
More informationChapter Two Elements of Linear Algebra
Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to
More informationCounting Matrices Over a Finite Field With All Eigenvalues in the Field
Counting Matrices Over a Finite Field With All Eigenvalues in the Field Lisa Kaylor David Offner Department of Mathematics and Computer Science Westminster College, Pennsylvania, USA kaylorlm@wclive.westminster.edu
More informationTopic 1: Matrix diagonalization
Topic : Matrix diagonalization Review of Matrices and Determinants Definition A matrix is a rectangular array of real numbers a a a m a A = a a m a n a n a nm The matrix is said to be of order n m if it
More informationLecture notes on Quantum Computing. Chapter 1 Mathematical Background
Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For
More informationChapter 7. Linear Algebra: Matrices, Vectors,
Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.
More informationN-WEAKLY SUPERCYCLIC MATRICES
N-WEAKLY SUPERCYCLIC MATRICES NATHAN S. FELDMAN Abstract. We define an operator to n-weakly hypercyclic if it has an orbit that has a dense projection onto every n-dimensional subspace. Similarly, an operator
More informationOn the Schur Complement of Diagonally Dominant Matrices
On the Schur Complement of Diagonally Dominant Matrices T.-G. Lei, C.-W. Woo,J.-Z.Liu, and F. Zhang 1 Introduction In 1979, Carlson and Markham proved that the Schur complements of strictly diagonally
More informationWhat is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A =
STUDENT S COMPANIONS IN BASIC MATH: THE ELEVENTH Matrix Reloaded by Block Buster Presumably you know the first part of matrix story, including its basic operations (addition and multiplication) and row
More informationFinding eigenvalues for matrices acting on subspaces
Finding eigenvalues for matrices acting on subspaces Jakeniah Christiansen Department of Mathematics and Statistics Calvin College Grand Rapids, MI 49546 Faculty advisor: Prof Todd Kapitula Department
More informationMatrix inversion and linear equations
Learning objectives. Matri inversion and linear equations Know Cramer s rule Understand how linear equations can be represented in matri form Know how to solve linear equations using matrices and Cramer
More informationCLASSIFICATION OF TREES EACH OF WHOSE ASSOCIATED ACYCLIC MATRICES WITH DISTINCT DIAGONAL ENTRIES HAS DISTINCT EIGENVALUES
Bull Korean Math Soc 45 (2008), No 1, pp 95 99 CLASSIFICATION OF TREES EACH OF WHOSE ASSOCIATED ACYCLIC MATRICES WITH DISTINCT DIAGONAL ENTRIES HAS DISTINCT EIGENVALUES In-Jae Kim and Bryan L Shader Reprinted
More informationChapter 4 & 5: Vector Spaces & Linear Transformations
Chapter 4 & 5: Vector Spaces & Linear Transformations Philip Gressman University of Pennsylvania Philip Gressman Math 240 002 2014C: Chapters 4 & 5 1 / 40 Objective The purpose of Chapter 4 is to think
More informationELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n
Electronic Journal of Linear Algebra ISSN 08-380 Volume 22, pp. 52-538, May 20 THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE WEI-WEI XU, LI-XIA CAI, AND WEN LI Abstract. In this
More information3.3.1 Linear functions yet again and dot product In 2D, a homogenous linear scalar function takes the general form:
3.3 Gradient Vector and Jacobian Matri 3 3.3 Gradient Vector and Jacobian Matri Overview: Differentiable functions have a local linear approimation. Near a given point, local changes are determined by
More informationQUASI-DIAGONALIZABLE AND CONGRUENCE-NORMAL MATRICES. 1. Introduction. A matrix A C n n is normal if AA = A A. A is said to be conjugate-normal if
QUASI-DIAGONALIZABLE AND CONGRUENCE-NORMAL MATRICES H. FAßBENDER AND KH. D. IKRAMOV Abstract. A matrix A C n n is unitarily quasi-diagonalizable if A can be brought by a unitary similarity transformation
More informationGeneralized Principal Pivot Transform
Generalized Principal Pivot Transform M. Rajesh Kannan and R. B. Bapat Indian Statistical Institute New Delhi, 110016, India Abstract The generalized principal pivot transform is a generalization of the
More informationCS 246 Review of Linear Algebra 01/17/19
1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector
More informationOn Euclidean distance matrices
On Euclidean distance matrices R. Balaji and R. B. Bapat Indian Statistical Institute, New Delhi, 110016 November 19, 2006 Abstract If A is a real symmetric matrix and P is an orthogonal projection onto
More informationNotes on Linear Algebra and Matrix Analysis
Notes on Linear Algebra and Matrix Analysis Maxim Neumann May 2006, Version 0.1.1 1 Matrix Basics Literature to this topic: [1 4]. x y < y,x >: standard inner product. x x = 1 : x is normalized x y = 0
More information2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian
FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian
More informationLinear algebra. S. Richard
Linear algebra S. Richard Fall Semester 2014 and Spring Semester 2015 2 Contents Introduction 5 0.1 Motivation.................................. 5 1 Geometric setting 7 1.1 The Euclidean space R n..........................
More informationEquality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.
Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read
More information