University of Kentucky, Lexington
Department of Mathematics Technical Report

Pinchings and Norms of Scaled Triangular Matrices^1

Rajendra Bhatia^2    William Kahan^3    Ren-Cang Li^4

May 2000

ABSTRACT. Suppose $U$ is an upper-triangular matrix, and $D$ a nonsingular diagonal matrix whose diagonal entries appear in nondescending order of magnitude down the diagonal. It is proved that

    \|D^{-1} U D\| \ge \|U\|

for any matrix norm that is reduced by a pinching. In addition to known examples -- weakly unitarily invariant norms -- we show that any matrix norm defined by

    \|A\| \stackrel{\mathrm{def}}{=} \max_{x \ne 0,\, y \ne 0} \frac{\mathrm{Re}\,(x^* A y)}{\nu(x)\,\mu(y)},

where $\nu(\cdot)$ and $\mu(\cdot)$ are two absolute vector norms, has this property. This includes the $\ell_p$ operator norms as a special case.

^1 This report is available on the web at
^2 Indian Statistical Institute, New Delhi, India. rbh@isid.ernet.in
^3 Computer Science Division and Department of Mathematics, University of California at Berkeley, Berkeley, CA. wkahan@cs.berkeley.edu
^4 Department of Mathematics, University of Kentucky, Lexington, KY (rcli@ms.uky.edu). This work was supported in part by the National Science Foundation under Grant No. ACI and by the National Science Foundation CAREER award under Grant No. CCR.
1 Introduction

Suppose $U = (u_{ij})$ is an upper-triangular matrix, i.e., $u_{ij} = 0$ if $i > j$, and $D = \mathrm{diag}(\delta_1, \delta_2, \ldots, \delta_n)$ a nonsingular diagonal matrix whose diagonal entries appear in ascending order of magnitude down the diagonal, i.e., $0 < |\delta_i| \le |\delta_j|$ if $i < j$. The matrix $D^{-1} U D$ is still upper-triangular, and its nonzero entries satisfy

    |\delta_i^{-1} u_{ij} \delta_j| = (|\delta_j| / |\delta_i|)\, |u_{ij}| \ge |u_{ij}|    (1.1)

since $|\delta_i| \le |\delta_j|$. As a consequence, we would expect that for certain matrix norms $\|\cdot\|$,

    \|D^{-1} U D\| \ge \|U\|.    (1.2)

Inequality (1.2) is clearly equivalent to $\|U\| \ge \|D U D^{-1}\|$, and if the norm $\|\cdot\|$ is invariant under either transpose or conjugate transpose, then (1.2) is equivalent to $\|D L D^{-1}\| \ge \|L\|$ and to $\|L\| \ge \|D^{-1} L D\|$, which are also equivalent to each other, where $L$ is a lower-triangular matrix.

Inequality (1.2) may be used to bound the componentwise relative condition number $\| |U^{-1}|\,|U| \|$ [4, p. 37], [5] for solving a triangular linear system, where $|\cdot|$ of a matrix is understood entrywise. The inequality implies

    \| |U^{-1}|\,|U| \| = \min \| |D^{-1} U^{-1}|\,|U D| \|,

the minimum being taken over all $D = \mathrm{diag}(\delta_i)$ with $0 < |\delta_i| \le |\delta_j|$ for $i < j$; that is, the minimum over such column scalings $U \mapsto UD$ is attained at $D = I$.

Inequality (1.2) is obviously true for any absolute matrix norm, since absolute matrix norms are increasing functions of the absolute values of the matrix's entries [6, p. 285]. Frequently used absolute matrix norms are

    |A|_p = |(a_{ij})|_p \stackrel{\mathrm{def}}{=} \Big( \sum_{i,j} |a_{ij}|^p \Big)^{1/p} for p \ge 1,

including the Frobenius norm ($p = 2$) as a special one.

What about other matrix norms, especially the often-used $\ell_p$ operator norms and the unitarily invariant norms? These norms are not, in general, increasing functions of the absolute values of the matrix entries. However, we shall prove that Inequality (1.2) is valid for them. This is true because they all satisfy the pinching inequality (see Section 2 below). More generally, we shall consider matrix norms defined by

    \|A\| \stackrel{\mathrm{def}}{=} \max_{x \ne 0,\, y \ne 0} \frac{\mathrm{Re}\,(x^* A y)}{\nu(x)\,\mu(y)},    (1.3)

where $\nu(\cdot)$ and $\mu(\cdot)$ are two absolute vector norms. Here $\mathrm{Re}\,[\cdot]$ denotes the real part of a complex number.
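Both the entrywise growth (1.1) and the inequality (1.2) are easy to check numerically for the spectral norm. A minimal sketch, assuming NumPy; the matrices `U` and `D` below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Illustrative upper-triangular U and diagonal D with entries
# ascending in magnitude down the diagonal.
U = np.array([[2.0, -1.0, 3.0],
              [0.0,  1.0, 4.0],
              [0.0,  0.0, 5.0]])
D = np.diag([1.0, 2.0, 5.0])
Dinv = np.diag(1.0 / np.diag(D))

scaled = Dinv @ U @ D  # D^{-1} U D, still upper-triangular

# (1.1): each entry of D^{-1} U D dominates the matching entry of U in magnitude.
assert np.all(np.abs(scaled) >= np.abs(U) - 1e-12)

# (1.2) for the spectral norm, an l2 operator norm covered by the paper's result.
norm_scaled = np.linalg.norm(scaled, 2)
norm_U = np.linalg.norm(U, 2)
assert norm_scaled >= norm_U
```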
These include the $\ell_p$ operator norms defined by

    \|A\|_p \stackrel{\mathrm{def}}{=} \max_{y \ne 0} \frac{\|A y\|_p}{\|y\|_p} = \max_{x \ne 0,\, y \ne 0} \frac{\mathrm{Re}\,(x^* A y)}{\|x\|_q\, \|y\|_p} for p \ge 1,

where $q$ is $p$'s dual, defined by $1/p + 1/q = 1$, and

    \|x\|_p = \Big( \sum_j |\xi_j|^p \Big)^{1/p}
for any vector $x = (\xi_1, \xi_2, \ldots, \xi_n)$.

The assumption that both $\nu(\cdot)$ and $\mu(\cdot)$ are absolute is essential. Here is a counterexample: $n = 2$, $\nu(\cdot) = \|\cdot\|_2$, $\mu(\cdot) = \|B\,\cdot\|_2$, and

    U = \begin{pmatrix} -3 & -1 \\ 0 & 1 \end{pmatrix}, D = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, B = \begin{pmatrix} 8 & 5 \\ -2 & -1 \end{pmatrix}.

Then $\nu(\cdot)$ is absolute while $\mu(\cdot)$ is not. Calculations lead to

    \|U\| = \|U B^{-1}\|_2 = 5.42 > 4.16 = \|D^{-1} U D B^{-1}\|_2 = \|D^{-1} U D\|,

so (1.2) fails. With this example, counterexamples with neither $\nu(\cdot)$ nor $\mu(\cdot)$ absolute can be constructed: one way is to take $\mu(\cdot)$ as above and $\nu_\epsilon(\cdot) = \|I_\epsilon\,\cdot\|_2$ for

    I_\epsilon = \begin{pmatrix} 1 & \epsilon \\ 0 & 1 \end{pmatrix}

with small $|\epsilon|$. It is easy to verify that $\nu_\epsilon(\cdot)$ is not absolute for any $\epsilon \ne 0$, while $\nu_0(\cdot) = \|\cdot\|_2$, the same as the $\nu(\cdot)$ above. Since both $\|U\|$ and $\|D^{-1} U D\|$ are continuous at $\epsilon = 0$, for sufficiently small $\epsilon \ne 0$ we still have $\|U\| > \|D^{-1} U D\|$.

2 Pinchings and Norms

Let $A$ be an $n \times n$ matrix partitioned conformally into blocks as

    A = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1N} \\ A_{21} & A_{22} & \cdots & A_{2N} \\ \vdots & \vdots & & \vdots \\ A_{N1} & A_{N2} & \cdots & A_{NN} \end{pmatrix},    (2.1)

where the diagonal blocks are square. The block diagonal matrix

    \mathcal{C}(A) \stackrel{\mathrm{def}}{=} \begin{pmatrix} A_{11} & & \\ & \ddots & \\ & & A_{NN} \end{pmatrix}    (2.2)

is called a pinching of $A$. It is known that pinching reduces $A$'s weakly unitarily invariant norms^1 (and thus unitarily invariant norms), i.e.,

    The Pinching Inequality: \|\mathcal{C}(A)\| \le \|A\|    (2.3)

holds for every weakly unitarily invariant norm. The following lemma enlarges the domain of this inequality.

^1 A matrix norm $\|\cdot\|_{\mathrm{wu}}$ is weakly unitarily invariant if it satisfies, besides the usual properties of any norm, also $\|X A X^*\|_{\mathrm{wu}} = \|A\|_{\mathrm{wu}}$ for any unitary matrix $X$.
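The counterexample's numbers can be reproduced directly. A sketch, assuming NumPy; it uses the fact that with $\nu = \|\cdot\|_2$ and $\mu = \|B\,\cdot\|_2$, the norm (1.3) works out to $\|A\| = \|A B^{-1}\|_2$:

```python
import numpy as np

# Data of the counterexample from Section 1.
U = np.array([[-3.0, -1.0],
              [ 0.0,  1.0]])
D = np.diag([1.0, 2.0])
B = np.array([[ 8.0,  5.0],
              [-2.0, -1.0]])
Binv = np.linalg.inv(B)
Dinv = np.diag(1.0 / np.diag(D))

norm_U = np.linalg.norm(U @ Binv, 2)                # ||U||         ~ 5.42
norm_scaled = np.linalg.norm(Dinv @ U @ D @ Binv, 2)  # ||D^{-1}UD||  ~ 4.17 (quoted as 4.16)

assert abs(norm_U - 5.42) < 0.01
assert abs(norm_scaled - 4.17) < 0.01
assert norm_U > norm_scaled  # inequality (1.2) fails for this non-absolute mu
```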
Lemma 2.1. Every norm $\|\cdot\|$ defined as in (1.3) satisfies the pinching inequality.

Proof: We follow the idea used in [2]. Let $\omega$ be a primitive $N$th root of unity, and let $X$ be the diagonal matrix that, in a partitioning conformal with (2.1), has the form

    X = \mathrm{diag}(I, \omega I, \omega^2 I, \ldots, \omega^{N-1} I),

where the $I$'s are identity matrices of appropriate sizes. It can be seen that

    \mathcal{C}(A) = \frac{1}{N} \sum_{j=0}^{N-1} X^{-j} A X^{j}.    (2.4)

Let $x$, $y$ be vectors such that $\nu(x) = 1 = \mu(y)$ and $\|\mathcal{C}(A)\| = \mathrm{Re}\,[x^* \mathcal{C}(A) y]$. Then by (2.4) we have

    \|\mathcal{C}(A)\| = \frac{1}{N} \sum_{j=0}^{N-1} \mathrm{Re}\,[x^* X^{-j} A X^{j} y] = \frac{1}{N} \sum_{j=0}^{N-1} \mathrm{Re}\,[(X^{j} x)^* A (X^{j} y)].    (2.5)

Now note that each $X^{j}$ is a diagonal matrix with unimodular diagonal entries. Since $\nu$ and $\mu$ are absolute norms, $\nu(X^{j} x) = 1 = \mu(X^{j} y)$ for all $j$. Hence each of the summands on the right-hand side of (2.5) is bounded by $\|A\|$. Consequently $\|\mathcal{C}(A)\| \le \|A\|$.

In the next section, we shall need a special corollary of the pinching inequality:

Lemma 2.2. Let $A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$ be any block matrix in which the diagonal blocks are square, and let $0 \le \alpha \le 1$. Then

    \left\| \begin{pmatrix} A_{11} & \alpha A_{12} \\ \alpha A_{21} & A_{22} \end{pmatrix} \right\| \le \|A\|

for every norm that satisfies the pinching inequality (2.3).

Proof: Write

    \begin{pmatrix} A_{11} & \alpha A_{12} \\ \alpha A_{21} & A_{22} \end{pmatrix} = \alpha A + (1 - \alpha)\, \mathcal{C}(A),

and then use the triangle inequality and the pinching inequality. See [1, pp. 97-107] or [3].

A matrix norm $|||\cdot|||$ is unitarily invariant if it satisfies, besides the usual properties of any norm, also

1. $|||X A Y||| = |||A|||$ for any unitary matrices $X$ and $Y$;
2. $|||A||| = \|A\|_2$ for any $A$ having rank one.

See Bhatia [1, p. 91] or Stewart and Sun [8, pp. 74-88]. We denote by $|||\cdot|||$ a general unitarily invariant norm. The most frequently used unitarily invariant norms are the spectral norm $\|\cdot\|_2$ (also called the $\ell_2$ operator norm) and the Frobenius norm $\|\cdot\|_F$.
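The convex-combination identity behind Lemma 2.2 can be checked numerically. A sketch, assuming NumPy, with illustrative random blocks and the spectral norm standing in for a norm that satisfies the pinching inequality:

```python
import numpy as np

# Lemma 2.2: [[A11, a*A12], [a*A21, A22]] = a*A + (1-a)*C(A) for 0 <= a <= 1,
# so its norm is bounded by ||A|| via the triangle and pinching inequalities.
rng = np.random.default_rng(0)
A11, A12, A21, A22 = rng.standard_normal((4, 2, 2))
Z = np.zeros((2, 2))
A = np.block([[A11, A12], [A21, A22]])
CA = np.block([[A11, Z], [Z, A22]])  # the pinching C(A)

alpha = 0.4
squeezed = np.block([[A11, alpha * A12], [alpha * A21, A22]])

# The identity used in the proof:
assert np.allclose(squeezed, alpha * A + (1 - alpha) * CA)

# Pinching inequality (2.3) and Lemma 2.2 for the spectral norm:
assert np.linalg.norm(CA, 2) <= np.linalg.norm(A, 2) + 1e-12
assert np.linalg.norm(squeezed, 2) <= np.linalg.norm(A, 2) + 1e-12
```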
3 Norms of Scaled Triangular Matrices

Theorem 3.1. Suppose $U$ is an upper-triangular matrix, and $D$ a nonsingular diagonal matrix whose diagonal entries appear in ascending order of magnitude down the diagonal. Then the inequality

    \|D^{-1} U D\| \ge \|U\|    (1.2)

holds for any matrix norm that satisfies the pinching inequality (2.3).

Proof: Without loss of any generality, we may assume $D = \mathrm{diag}(\delta_1, \delta_2, \ldots, \delta_n)$ with

    1 = \delta_1 \le \delta_2 \le \cdots \le \delta_n.

Decompose $D = F_2 F_3 \cdots F_n$, where

    F_i = \begin{pmatrix} I_{i-1} & \\ & \gamma_i I_{n-i+1} \end{pmatrix}, \gamma_i = \delta_i / \delta_{i-1} \ge 1

for $i = 2, 3, \ldots, n$, and $I_k$ is the $k \times k$ identity matrix. Inequality (1.2) is proved if it is proved for every $D = F_i$. This is what we shall do. Let

    F = \begin{pmatrix} I & \\ & \gamma I \end{pmatrix},

where $\gamma \ge 1$ and the $I$'s generally have different dimensions. Conformally partition

    U = \begin{pmatrix} R & S \\ & T \end{pmatrix}.

To prove the inequality (1.2) for $D = F$, we have to show

    \left\| \begin{pmatrix} R & \gamma S \\ & T \end{pmatrix} \right\| \ge \left\| \begin{pmatrix} R & S \\ & T \end{pmatrix} \right\|

for every norm that satisfies the pinching inequality. This follows from Lemma 2.2 with $\alpha = 1/\gamma$.

4 Norms and Wiping Out of Entries

The idea that we have used in proving Lemma 2.1 depends on expressing $\mathcal{C}(A)$ as an average of special diagonal similarities of $A$. This can be done for some other parts of a matrix, too. Let $\mathcal{S}$ be an equivalence relation on $\{1, 2, \ldots, n\}$ with equivalence classes $I_1, I_2, \ldots, I_N$. Let $\mathcal{F}$ be the map on the space of $n \times n$ matrices that sets all those $(i,j)$ entries with $(i,j) \notin \mathcal{S}$ to zero and keeps the others unchanged. It is shown in [2] that such a map has a representation

    \mathcal{F}(A) = \frac{1}{N} \sum_{j=0}^{N-1} X^{-j} A X^{j},    (4.1)
where $X$ is a diagonal matrix whose $i$th diagonal entry is $\omega^{k-1}$ if $i \in I_k$, and $\omega$ is a primitive $N$th root of unity. Using the argument in the proof of Lemma 2.1, we see that

    \|\mathcal{F}(A)\| \le \|A\|    (4.2)

for every norm defined in (1.3).

A particularly interesting example of such a map $\mathcal{F}$ is the one that wipes out all entries of a matrix except those on the dexter and the sinister diagonals, i.e., $\mathcal{F}$ sets all of $A$'s entries to zero except those $(i,j)$ entries for which either $i = j$ or $i + j = n + 1$, which are left unchanged.

For $A$ and $\mathcal{C}(A)$ as in (2.1) and (2.2), let $\mathcal{O}(A) = A - \mathcal{C}(A)$ be the off-diagonal part of $A$. A straightforward application of the pinching inequality shows that $\|\mathcal{O}(A)\| \le 2\|A\|$. However, using the representation (2.4) we can show, as in [2], that

    \|\mathcal{O}(A)\| \le 2(1 - 1/N)\, \|A\|.    (4.3)

This inequality is sharp for the norm $\|\cdot\|_2$ [2]. The same idea can be used to estimate the norms $\|A - \mathcal{F}(A)\|$ using the representation (4.1).

Finally, we remark that for the $\ell_p$ operator norms ($1 \le p \le \infty$), we could obtain better bounds than (4.3) by an interpolation argument. To see this, we notice that the operator norms $\|A\|_1$ and $\|A\|_\infty$ are diminished if any entry of $A$ is wiped out. Hence we have

    \|\mathcal{O}(A)\|_p \le \|A\|_p for p = 1, \infty.    (4.4)

Since $A \mapsto \mathcal{O}(A)$ is a linear map on the space of matrices, we get, by interpolating between the values $p = 1, 2, \infty$, the inequality

    \|\mathcal{O}(A)\|_p \le (2 - 2/N)^{2/\max\{p,\,q\}}\, \|A\|_p,    (4.5)

where $q$ is $p$'s dual. For details of the interpolation method, the reader is referred to Reed and Simon [7, pp. 27-45]. (The reader should be aware that $\|A\|_p$ denotes the Schatten $p$-norm in [2], but here it denotes the $\ell_p$ operator norm.)
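The averaging representation (4.1), of which (2.4) is the special case where the classes are the diagonal blocks, together with the bounds (4.3) and (4.4), can be checked numerically. A sketch assuming NumPy; the 6x6 matrix and the (non-contiguous) equivalence classes are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Equivalence classes I_1, I_2, I_3 on {0,...,5}; they need not be contiguous.
classes = [np.array([0, 3]), np.array([1, 4]), np.array([2, 5])]
N = len(classes)
labels = np.empty(n, dtype=int)
for k, idx in enumerate(classes):
    labels[idx] = k

# F(A): keep entry (i, j) only when i and j lie in the same class.
keep = labels[:, None] == labels[None, :]
FA = np.where(keep, A, 0)

# Representation (4.1): F(A) = (1/N) * sum_j X^{-j} A X^{j},
# with X = diag(w^{label_i}) and w a primitive Nth root of unity.
w = np.exp(2j * np.pi / N)
d = w ** labels
avg = sum(np.diag(d ** (-j)) @ A @ np.diag(d ** j) for j in range(N)) / N
assert np.allclose(avg, FA)

# (4.3) for the off-diagonal part with the finest partition (N = n 1x1 blocks):
O = A - np.diag(np.diag(A))
assert np.linalg.norm(O, 2) <= 2 * (1 - 1 / n) * np.linalg.norm(A, 2) + 1e-12
# (4.4): wiping out entries never increases the l1 / l-infinity operator norms.
assert np.linalg.norm(O, 1) <= np.linalg.norm(A, 1)
assert np.linalg.norm(O, np.inf) <= np.linalg.norm(A, np.inf)
```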
References

[1] R. Bhatia. Matrix Analysis. Graduate Texts in Mathematics. Springer, New York.
[2] R. Bhatia, M.-D. Choi, and C. Davis. Comparing a matrix to its off-diagonal part. Operator Theory: Advances and Applications, 40:151-164.
[3] R. Bhatia and J. A. R. Holbrook. Unitary invariance and spectral variation. Linear Algebra and Its Applications, 95:43-68.
[4] J. Demmel. Applied Numerical Linear Algebra. SIAM, Philadelphia.
[5] N. J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM, Philadelphia.
[6] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge.
[7] M. Reed and B. Simon. Methods of Modern Mathematical Physics, II: Fourier Analysis, Self-Adjointness. Academic Press, New York.
[8] G. W. Stewart and J.-G. Sun. Matrix Perturbation Theory. Academic Press, Boston, 1990.
More informationRank, Trace, Determinant, Transpose an Inverse of a Matrix Let A be an n n square matrix: A = a11 a1 a1n a1 a an a n1 a n a nn nn where is the jth col
Review of Linear Algebra { E18 Hanout Vectors an Their Inner Proucts Let X an Y be two vectors: an Their inner prouct is ene as X =[x1; ;x n ] T Y =[y1; ;y n ] T (X; Y ) = X T Y = x k y k k=1 where T an
More informationThroughout these notes we assume V, W are finite dimensional inner product spaces over C.
Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal
More informationAlgorithms to solve block Toeplitz systems and. least-squares problems by transforming to Cauchy-like. matrices
Algorithms to solve block Toeplitz systems and least-squares problems by transforming to Cauchy-like matrices K. Gallivan S. Thirumalai P. Van Dooren 1 Introduction Fast algorithms to factor Toeplitz matrices
More information1 Matrices and Systems of Linear Equations
Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 207, v 260) Contents Matrices and Systems of Linear Equations Systems of Linear Equations Elimination, Matrix Formulation
More informationYimin Wei a,b,,1, Xiezhang Li c,2, Fanbin Bu d, Fuzhen Zhang e. Abstract
Linear Algebra and its Applications 49 (006) 765 77 wwwelseviercom/locate/laa Relative perturbation bounds for the eigenvalues of diagonalizable and singular matrices Application of perturbation theory
More informationLinear Operators Preserving the Numerical Range (Radius) on Triangular Matrices
Linear Operators Preserving the Numerical Range (Radius) on Triangular Matrices Chi-Kwong Li Department of Mathematics, College of William & Mary, P.O. Box 8795, Williamsburg, VA 23187-8795, USA. E-mail:
More informationHigher rank numerical ranges of rectangular matrix polynomials
Journal of Linear and Topological Algebra Vol. 03, No. 03, 2014, 173-184 Higher rank numerical ranges of rectangular matrix polynomials Gh. Aghamollaei a, M. Zahraei b a Department of Mathematics, Shahid
More informationMAT 2037 LINEAR ALGEBRA I web:
MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear
More informationMath 240 Calculus III
The Calculus III Summer 2015, Session II Wednesday, July 8, 2015 Agenda 1. of the determinant 2. determinants 3. of determinants What is the determinant? Yesterday: Ax = b has a unique solution when A
More informationMINIMAL NORMAL AND COMMUTING COMPLETIONS
INTERNATIONAL JOURNAL OF INFORMATION AND SYSTEMS SCIENCES Volume 4, Number 1, Pages 5 59 c 8 Institute for Scientific Computing and Information MINIMAL NORMAL AND COMMUTING COMPLETIONS DAVID P KIMSEY AND
More informationLecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.
MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.
More informationECON 186 Class Notes: Linear Algebra
ECON 86 Class Notes: Linear Algebra Jijian Fan Jijian Fan ECON 86 / 27 Singularity and Rank As discussed previously, squareness is a necessary condition for a matrix to be nonsingular (have an inverse).
More informationarxiv: v1 [math.na] 1 Sep 2018
On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing
More informationFoundations of Matrix Analysis
1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the
More informationMultiple eigenvalues
Multiple eigenvalues arxiv:0711.3948v1 [math.na] 6 Nov 007 Joseph B. Keller Departments of Mathematics and Mechanical Engineering Stanford University Stanford, CA 94305-15 June 4, 007 Abstract The dimensions
More informationOn the Schur Complement of Diagonally Dominant Matrices
On the Schur Complement of Diagonally Dominant Matrices T.-G. Lei, C.-W. Woo,J.-Z.Liu, and F. Zhang 1 Introduction In 1979, Carlson and Markham proved that the Schur complements of strictly diagonally
More informationElementary 2-Group Character Codes. Abstract. In this correspondence we describe a class of codes over GF (q),
Elementary 2-Group Character Codes Cunsheng Ding 1, David Kohel 2, and San Ling Abstract In this correspondence we describe a class of codes over GF (q), where q is a power of an odd prime. These codes
More informationELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n
Electronic Journal of Linear Algebra ISSN 08-380 Volume 22, pp. 52-538, May 20 THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE WEI-WEI XU, LI-XIA CAI, AND WEN LI Abstract. In this
More information1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )
Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical
More informationFrame Diagonalization of Matrices
Frame Diagonalization of Matrices Fumiko Futamura Mathematics and Computer Science Department Southwestern University 00 E University Ave Georgetown, Texas 78626 U.S.A. Phone: + (52) 863-98 Fax: + (52)
More informationDiagonalization by a unitary similarity transformation
Physics 116A Winter 2011 Diagonalization by a unitary similarity transformation In these notes, we will always assume that the vector space V is a complex n-dimensional space 1 Introduction A semi-simple
More informationMatrix functions that preserve the strong Perron- Frobenius property
Electronic Journal of Linear Algebra Volume 30 Volume 30 (2015) Article 18 2015 Matrix functions that preserve the strong Perron- Frobenius property Pietro Paparella University of Washington, pietrop@uw.edu
More informationarxiv: v3 [math.ra] 10 Jun 2016
To appear in Linear and Multilinear Algebra Vol. 00, No. 00, Month 0XX, 1 10 The critical exponent for generalized doubly nonnegative matrices arxiv:1407.7059v3 [math.ra] 10 Jun 016 Xuchen Han a, Charles
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationConjugate Gradient (CG) Method
Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous
More information