Relative Perturbation Bounds for the Unitary Polar Factor

Ren-Cang Li
Mathematical Science Section
Oak Ridge National Laboratory
P.O. Box 2008, Bldg 6012
Oak Ridge, TN

LAPACK Working Note 83 (first published July 1994, revised January 1996)

Abstract. Let $B$ be an $m\times n$ ($m\ge n$) complex matrix. It is known that there is a unique polar decomposition $B=QH$, where $Q^*Q=I$, the $n\times n$ identity matrix, and $H$ is positive definite, provided $B$ has full column rank. Existing perturbation bounds for complex matrices suggest that in the worst case the change in $Q$ is proportional to the reciprocal of the smallest singular value of $B$. However, there are situations where this unitary polar factor is much more accurately determined by the data than the existing perturbation bounds would indicate. In this paper the following question is addressed: how much may $Q$ change if $B$ is perturbed to $\widetilde B = D_1^* B D_2$? Here $D_1$ and $D_2$ are nonsingular and close to identity matrices of suitable dimensions. It will be proved that for such kinds of perturbations, the change in $Q$ is bounded only by the distances from $D_1$ and $D_2$ to identity matrices, and is thus independent of the singular values of $B$.

This material is based in part upon work supported, during January 1992–August 1995, by Argonne National Laboratory under grant No. and by the University of Tennessee through the Advanced Research Projects Agency under contract No. DAAL03-91-C-0047, by the National Science Foundation under grant No. ASC, and by the National Science Infrastructure grants No. CDA and CDA94056, and supported, since August 1995, by a Householder Fellowship in Scientific Computing at Oak Ridge National Laboratory, supported by the Applied Mathematical Sciences Research Program, Office of Energy Research, United States Department of Energy, under contract DE-AC05-84OR21400 with Lockheed Martin Energy Systems, Inc. Part of this work was done during the summer of 1994 while the author was at the Department of Mathematics, University of California at Berkeley.
Let $B$ be an $m\times n$ ($m\ge n$) complex matrix. It is known that there are $Q$ with orthonormal column vectors, i.e., $Q^*Q=I$, and a unique positive semidefinite $H$ such that

  B = QH.   (1)

Hereafter $I$ denotes an identity matrix whose dimensions are clear from the context or specified. The decomposition (1) is called the polar decomposition of $B$. If, in addition, $B$ has full column rank, then $Q$ is uniquely determined as well. In fact,

  H = (B^* B)^{1/2},  Q = B (B^* B)^{-1/2},   (2)

where the superscript $*$ denotes conjugate transpose. The decomposition (1) can also be computed from the singular value decomposition (SVD) $B = U \Sigma V^*$ by

  H = V \Sigma_1 V^*,  Q = U_1 V^*,   (3)

where $U = (U_1, U_2)$ and $V$ are unitary, $U_1$ is $m\times n$, $\Sigma = \begin{pmatrix} \Sigma_1 \\ 0 \end{pmatrix}$, and $\Sigma_1 = \mathrm{diag}(\sigma_1, \ldots, \sigma_n)$ is nonnegative.

There are several published bounds stating how much the two factor matrices $Q$ and $H$ may change if the entries of $B$ are perturbed in an arbitrary manner [2, 3, 6, 8, 9, 11, 12, 13, 14]. In these papers, no assumption was made on how $B$ was perturbed to $\widetilde B$, except possibly an assumption on the smallness of $\|\widetilde B - B\|$ for some matrix norm $\|\cdot\|$. Roughly speaking, the bounds in these published papers suggest that in the worst case the change in $Q$ is proportional to the reciprocal of the smallest singular value of $B$. In this paper, on the other hand, we study the change in $Q$, assuming $B$ is complex and perturbed to $\widetilde B = D_1^* B D_2$, where $D_1$ and $D_2$ are nonsingular and close to identity matrices of suitable dimensions. Such perturbations cover, for example, componentwise relative perturbations to the entries of symmetric tridiagonal matrices with zero diagonal [5, 7] and to the entries of bidiagonal and biacyclic matrices [1, 4, 5]. Our results indicate that under such perturbations, the change in $Q$ is independent of the smallest singular value of $B$.

Assume that $B$ has full column rank, and so does $\widetilde B = D_1^* B D_2$.
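The formulas (2) and (3) are easy to check numerically. The following sketch is my own illustration, not from the paper; it assumes NumPy and computes the polar factors of a full-column-rank complex matrix via the thin SVD, then verifies the defining properties:

```python
import numpy as np

def polar(B):
    """Polar decomposition B = Q H via the thin SVD B = U1 diag(s) Vh,
    using (3): Q = U1 Vh and H = Vh^* diag(s) Vh."""
    U1, s, Vh = np.linalg.svd(B, full_matrices=False)
    Q = U1 @ Vh
    H = Vh.conj().T @ (s[:, None] * Vh)   # s[:, None] * Vh == diag(s) @ Vh
    return Q, H

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
Q, H = polar(B)

assert np.allclose(Q.conj().T @ Q, np.eye(3))   # Q has orthonormal columns
assert np.allclose(Q @ H, B)                    # B = Q H
assert np.allclose(H, H.conj().T)               # H is Hermitian
assert np.all(np.linalg.eigvalsh(H) > 0)        # positive definite (full column rank B)
```

A generic Gaussian matrix has full column rank with probability one, so $H$ comes out positive definite and $Q$ is the unique unitary polar factor.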
Let

  B = QH,  \widetilde B = \widetilde Q \widetilde H   (4)

be the polar decompositions of $B$ and $\widetilde B$, respectively, and let

  B = U \Sigma V^*,  \widetilde B = \widetilde U \widetilde\Sigma \widetilde V^*   (5)

be the SVDs of $B$ and $\widetilde B$, respectively, where $\widetilde U = (\widetilde U_1, \widetilde U_2)$, $\widetilde U_1$ is $m\times n$, $\widetilde\Sigma = \begin{pmatrix} \widetilde\Sigma_1 \\ 0 \end{pmatrix}$, and $\widetilde\Sigma_1 = \mathrm{diag}(\widetilde\sigma_1, \ldots, \widetilde\sigma_n)$. Assume, as usual, that

  \sigma_1 \ge \cdots \ge \sigma_n > 0  and  \widetilde\sigma_1 \ge \cdots \ge \widetilde\sigma_n > 0.   (6)

It follows from (2) and (5) that $Q = U_1 V^*$ and $\widetilde Q = \widetilde U_1 \widetilde V^*$.
In what follows, $\|X\|_F$ denotes the Frobenius norm, which is the square root of the trace of $X^* X$. Notice that

  \widetilde U^*(\widetilde B - B)V = \widetilde\Sigma \widetilde V^* V - \widetilde U^* U \Sigma,

and on the other hand

  \widetilde U^*(\widetilde B - B)V = \widetilde U^*(D_1^* B D_2 - D_1^* B + D_1^* B - B)V
    = \widetilde U^* [\widetilde B (I - D_2^{-1}) + (D_1^* - I)B] V
    = \widetilde\Sigma \widetilde V^* (I - D_2^{-1}) V + \widetilde U^* (D_1^* - I) U \Sigma;

similarly,

  U^*(\widetilde B - B)\widetilde V = U^* \widetilde U \widetilde\Sigma - \Sigma V^* \widetilde V,
  U^*(\widetilde B - B)\widetilde V = U^*(D_1^* B D_2 - B D_2 + B D_2 - B)\widetilde V
    = U^* [(I - D_1^{-*}) \widetilde B + B (D_2 - I)] \widetilde V
    = U^* (I - D_1^{-*}) \widetilde U \widetilde\Sigma + \Sigma V^* (D_2 - I) \widetilde V.

This yields two perturbation equations:

  \widetilde\Sigma \widetilde V^* V - \widetilde U^* U \Sigma = \widetilde\Sigma \widetilde V^* (I - D_2^{-1}) V + \widetilde U^* (D_1^* - I) U \Sigma,   (7)
  U^* \widetilde U \widetilde\Sigma - \Sigma V^* \widetilde V = U^* (I - D_1^{-*}) \widetilde U \widetilde\Sigma + \Sigma V^* (D_2 - I) \widetilde V.   (8)

The first $n$ rows of equation (7) yield

  \widetilde\Sigma_1 \widetilde V^* V - \widetilde U_1^* U_1 \Sigma_1 = \widetilde\Sigma_1 \widetilde V^* (I - D_2^{-1}) V + \widetilde U_1^* (D_1^* - I) U_1 \Sigma_1.   (9)

The first $n$ rows of equation (8) yield

  U_1^* \widetilde U_1 \widetilde\Sigma_1 - \Sigma_1 V^* \widetilde V = U_1^* (I - D_1^{-*}) \widetilde U_1 \widetilde\Sigma_1 + \Sigma_1 V^* (D_2 - I) \widetilde V,

on taking the conjugate transpose of which, one has

  \widetilde\Sigma_1 \widetilde U_1^* U_1 - \widetilde V^* V \Sigma_1 = \widetilde\Sigma_1 \widetilde U_1^* (I - D_1^{-1}) U_1 + \widetilde V^* (D_2^* - I) V \Sigma_1.   (10)

Now subtracting (9) from (10) leads to

  \widetilde\Sigma_1 (\widetilde U_1^* U_1 - \widetilde V^* V) + (\widetilde U_1^* U_1 - \widetilde V^* V) \Sigma_1
    = \widetilde\Sigma_1 [\widetilde U_1^* (I - D_1^{-1}) U_1 - \widetilde V^* (I - D_2^{-1}) V]
      - [\widetilde U_1^* (D_1^* - I) U_1 - \widetilde V^* (D_2^* - I) V] \Sigma_1.

To continue, we need a lemma from Li [10].

Lemma 1. Let $\Omega \in \mathbb C^{s\times s}$ and $\Gamma \in \mathbb C^{t\times t}$ be two Hermitian matrices, and let $E, \widetilde E \in \mathbb C^{s\times t}$. If $\sigma(\Omega) \cap \sigma(\Gamma) = \emptyset$, where $\sigma(\cdot)$ is the spectrum of a matrix, then the matrix equation

  \Omega X - X \Gamma = \Omega E + \widetilde E \Gamma   (11)

has a unique solution, and moreover $\|X\|_F \le \sqrt{\|E\|_F^2 + \|\widetilde E\|_F^2}/\eta$, where

  \eta \stackrel{\rm def}{=} \min_{\omega \in \sigma(\Omega),\, \gamma \in \sigma(\Gamma)} \frac{|\omega - \gamma|}{\sqrt{|\omega|^2 + |\gamma|^2}}.
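Lemma 1 can be checked numerically. The sketch below is my own illustration (not part of the paper, assuming NumPy): it takes diagonal Hermitian $\Omega$ and $\Gamma$ with disjoint spectra, solves $\Omega X - X\Gamma = \Omega E + \widetilde E\Gamma$ by Kronecker-product vectorization, and confirms the Frobenius-norm bound:

```python
import numpy as np

rng = np.random.default_rng(1)
s, t = 4, 3
# Hermitian (here diagonal) Omega and Gamma with disjoint spectra
Omega = np.diag(rng.uniform(2.0, 3.0, s))
Gamma = np.diag(rng.uniform(0.1, 1.0, t))
E = rng.standard_normal((s, t))
Et = rng.standard_normal((s, t))   # plays the role of E-tilde

# Solve Omega X - X Gamma = Omega E + Et Gamma by column-major vectorization:
# (I_t kron Omega - Gamma^T kron I_s) vec(X) = vec(RHS)
RHS = Omega @ E + Et @ Gamma
K = np.kron(np.eye(t), Omega) - np.kron(Gamma.T, np.eye(s))
X = np.linalg.solve(K, RHS.flatten(order="F")).reshape((s, t), order="F")
assert np.allclose(Omega @ X - X @ Gamma, RHS)   # X solves the equation

eta = min(abs(w - g) / np.hypot(w, g)
          for w in np.diag(Omega) for g in np.diag(Gamma))
lhs = np.linalg.norm(X, "fro")
rhs = np.hypot(np.linalg.norm(E, "fro"), np.linalg.norm(Et, "fro")) / eta
assert lhs <= rhs   # the bound of Lemma 1
```

In the diagonal case the lemma reduces to an entrywise Cauchy-Schwarz estimate on $x_{ij} = (\omega_i e_{ij} + \gamma_j \widetilde e_{ij})/(\omega_i - \gamma_j)$, which is exactly what the assertions exercise.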
Apply this lemma to equation (11) with $\Omega = \widetilde\Sigma_1$, $\Gamma = -\Sigma_1$, and

  X = \widetilde U_1^* U_1 - \widetilde V^* V = \widetilde V^* (\widetilde V \widetilde U_1^* U_1 V^* - I) V = \widetilde V^* (\widetilde Q^* Q - I) V,
  E = \widetilde U_1^* (I - D_1^{-1}) U_1 - \widetilde V^* (I - D_2^{-1}) V,
  \widetilde E = \widetilde U_1^* (D_1^* - I) U_1 - \widetilde V^* (D_2^* - I) V,

to get

  \|X\|_F = \|\widetilde Q^* Q - I\|_F \le \sqrt{\|E\|_F^2 + \|\widetilde E\|_F^2}/\eta,  where  \eta = \min_{1\le i,j\le n} \frac{\widetilde\sigma_i + \sigma_j}{\sqrt{\widetilde\sigma_i^2 + \sigma_j^2}} \ge 1.

Theorem 1. Let $B$ and $\widetilde B = D_1^* B D_2$ be two $m\times n$ ($m\ge n$) complex matrices having full column rank and with polar decompositions (4). Then

  \|\widetilde Q - Q\|_F \le \sqrt{\big(\|I - D_1^{-1}\|_F + \|I - D_2^{-1}\|_F\big)^2 + \big(\|D_2 - I\|_F + \|D_1 - I\|_F\big)^2}   (12)
                         \le \sqrt{2}\, \sqrt{\|I - D_1^{-1}\|_F^2 + \|I - D_2^{-1}\|_F^2 + \|D_2 - I\|_F^2 + \|D_1 - I\|_F^2}.   (13)

Proof: Inequality (13) follows from (12) by the fact that $(\alpha + \beta)^2 \le 2(\alpha^2 + \beta^2)$ for $\alpha, \beta \ge 0$. Now we prove (12). Since $U_1$ and $\widetilde U_1$ have orthonormal columns and $V$ and $\widetilde V$ are unitary,

  \|E\|_F \le \|I - D_1^{-1}\|_F + \|I - D_2^{-1}\|_F  and  \|\widetilde E\|_F \le \|D_2 - I\|_F + \|D_1 - I\|_F.   (14)

When $m = n$, both $Q$ and $\widetilde Q$ are unitary. Thus $\|\widetilde Q^* Q - I\|_F = \|\widetilde Q^* (Q - \widetilde Q)\|_F = \|Q - \widetilde Q\|_F$, and inequality (12) is a consequence of (14) and $\eta \ge 1$.

For the case $m > n$, the last $m - n$ rows of equation (8) produce

  U_2^* \widetilde U_1 \widetilde\Sigma_1 = U_2^* (I - D_1^{-*}) \widetilde U_1 \widetilde\Sigma_1.

Since $\widetilde B$ has full column rank, $\widetilde\Sigma_1$ is nonsingular, and hence $U_2^* \widetilde U_1 = U_2^* (I - D_1^{-*}) \widetilde U_1$. Thus

  \|U_2^* \widetilde U_1\|_F \le \|U_2^* (I - D_1^{-*})\|_F = \|(I - D_1^{-1}) U_2\|_F.   (15)

Notice that $(U_1 V^*, U_2) = (Q, U_2)$ is unitary. Hence $U_2^* Q = 0$, and

  \|\widetilde Q - Q\|_F = \|(Q, U_2)^* (\widetilde Q - Q)\|_F
    = \sqrt{\|\widetilde Q^* Q - I\|_F^2 + \|U_2^* \widetilde U_1\|_F^2}
    \le \sqrt{\|E\|_F^2 + \|\widetilde E\|_F^2 + \|U_2^* \widetilde U_1\|_F^2},   (16)

using $\eta \ge 1$ and $\|U_2^* \widetilde Q\|_F = \|U_2^* \widetilde U_1 \widetilde V^*\|_F = \|U_2^* \widetilde U_1\|_F$. By (15) and $\|E\|_F \le \|(I - D_1^{-1}) U_1\|_F + \|I - D_2^{-1}\|_F$, we get

  \|E\|_F^2 + \|U_2^* \widetilde U_1\|_F^2
    \le \big(\|(I - D_1^{-1}) U_1\|_F + \|I - D_2^{-1}\|_F\big)^2 + \|(I - D_1^{-1}) U_2\|_F^2
    \le \|(I - D_1^{-1}) U_1\|_F^2 + \|(I - D_1^{-1}) U_2\|_F^2 + 2 \|I - D_1^{-1}\|_F \|I - D_2^{-1}\|_F + \|I - D_2^{-1}\|_F^2
    = \big(\|I - D_1^{-1}\|_F + \|I - D_2^{-1}\|_F\big)^2,

since $\|(I - D_1^{-1}) U_1\|_F^2 + \|(I - D_1^{-1}) U_2\|_F^2 = \|(I - D_1^{-1}) U\|_F^2 = \|I - D_1^{-1}\|_F^2$. Inequality (12) now follows from (16) and (14).
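Theorem 1 can be illustrated numerically. The sketch below is my own (assuming NumPy; matrices are real, so $D_1^* = D_1^T$): it builds a $B$ whose smallest singular value is $10^{-10}$, applies a multiplicative perturbation $\widetilde B = D_1^* B D_2$ with $D_1, D_2$ near identity, and checks that the change in the unitary polar factor respects the bound (12) and is of the order of the distances of $D_1, D_2$ from $I$, not of $1/\sigma_{\min}(B)$:

```python
import numpy as np

def unitary_polar_factor(B):
    # Q = U1 Vh from the thin SVD, as in (3)
    U1, _, Vh = np.linalg.svd(B, full_matrices=False)
    return U1 @ Vh

rng = np.random.default_rng(2)
m, n = 6, 4
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = U[:, :n] @ np.diag([1.0, 1e-4, 1e-7, 1e-10]) @ V.T   # sigma_min(B) = 1e-10

eps = 1e-6
D1 = np.eye(m) + eps * rng.standard_normal((m, m))   # close to I
D2 = np.eye(n) + eps * rng.standard_normal((n, n))   # close to I
Bt = D1.T @ B @ D2                                   # B-tilde = D1^* B D2 (real case)

fro = lambda M: np.linalg.norm(M, "fro")
diff = fro(unitary_polar_factor(Bt) - unitary_polar_factor(B))
bound = np.sqrt((fro(np.eye(m) - np.linalg.inv(D1)) + fro(np.eye(n) - np.linalg.inv(D2)))**2
                + (fro(D2 - np.eye(n)) + fro(D1 - np.eye(m)))**2)
assert diff <= bound   # the bound (12)
assert diff < 1e-4     # tiny, although 1/sigma_min(B) = 1e10
```

A norm-wise bound for general (additive) perturbations would instead be of order $\|\widetilde B - B\|/\sigma_{\min}(B)$, which is useless here; the multiplicative structure is what keeps the change in $Q$ small.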
We are now in a position to apply Theorem 1 to perturbations with one-sided scaling. Here we consider two $n\times n$ nonsingular matrices $B = SG$ and $\widetilde B = S\widetilde G$, where $S$ is a scaling matrix and usually diagonal (but this is not necessary for the theorem below). The elements of $S$ can vary wildly. $G$ is nonsingular and usually better conditioned than $B$ itself. Set $\Delta G \stackrel{\rm def}{=} \widetilde G - G$. $\widetilde G$ is guaranteed nonsingular if $\|\Delta G\|_2 \|G^{-1}\|_2 < 1$, which will be assumed henceforth. Notice that

  \widetilde B = S \widetilde G = S (G + \Delta G) = S G \big(I + G^{-1} (\Delta G)\big) = B \big(I + G^{-1} (\Delta G)\big).

So applying Theorem 1 with $D_1 = I$ and $D_2 = I + G^{-1} (\Delta G)$ leads to

Theorem 2. Let $B = SG$ and $\widetilde B = S\widetilde G$ be two $n\times n$ nonsingular matrices with the polar decompositions (4). If $\|\Delta G\|_2 \|G^{-1}\|_2 < 1$, then

  \|\widetilde Q - Q\|_F \le \sqrt{\|G^{-1} (\Delta G)\|_F^2 + \|I - (I + G^{-1} (\Delta G))^{-1}\|_F^2}
                         \le \sqrt{1 + \frac{1}{(1 - \|G^{-1}\|_2 \|\Delta G\|_2)^2}}\; \|G^{-1}\|_2 \|\Delta G\|_F.

One can deal with one-sided scaling from the right in the same way.

Acknowledgement: I thank Professor W. Kahan for his supervision and Professor J. Demmel for valuable discussions.
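The one-sided scaling situation of Theorem 2 can also be illustrated. In this sketch of mine (assuming NumPy), $B = SG$ is wildly graded through $S$ while $G$ stays well conditioned; a small change in $G$ moves the unitary polar factor by an amount governed by $G$ alone, regardless of the grading:

```python
import numpy as np

def unitary_polar_factor(B):
    U1, _, Vh = np.linalg.svd(B, full_matrices=False)
    return U1 @ Vh

rng = np.random.default_rng(3)
n = 4
S = np.diag([1e6, 1e3, 1.0, 1e-3])                  # wildly graded scaling
G = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well conditioned
B = S @ G

dG = 1e-6 * rng.standard_normal((n, n))             # small change in G
Bt = S @ (G + dG)                                   # B-tilde = S G-tilde

diff = np.linalg.norm(unitary_polar_factor(Bt) - unitary_polar_factor(B), "fro")
nrm2 = lambda M: np.linalg.norm(M, 2)
x = nrm2(np.linalg.inv(G)) * nrm2(dG)
assert x < 1   # hypothesis of Theorem 2
bound = np.sqrt(1 + 1 / (1 - x)**2) * nrm2(np.linalg.inv(G)) * np.linalg.norm(dG, "fro")
assert diff <= bound   # independent of the grading in S
```

The second inequality of Theorem 2 is what the code evaluates: $\|I - (I+X)^{-1}\|_F = \|(I+X)^{-1} X\|_F \le \|X\|_F / (1 - \|X\|_2)$ with $X = G^{-1}(\Delta G)$, so only $\|G^{-1}\|_2$ and $\|\Delta G\|$ enter the bound.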
References

[1] J. Barlow and J. Demmel. Computing accurate eigensystems of scaled diagonally dominant matrices. SIAM Journal on Numerical Analysis, 27:762–791, 1990.
[2] A. Barrlund. Perturbation bounds on the polar decomposition. BIT, 30:101–113, 1990.
[3] C.-H. Chen and J.-G. Sun. Perturbation bounds for the polar factors. J. Comp. Math., 7:397–401, 1989.
[4] J. Demmel and W. Gragg. On computing accurate singular values and eigenvalues of matrices with acyclic graphs. Linear Algebra and its Applications, 185:203–217, 1993.
[5] J. Demmel and W. Kahan. Accurate singular values of bidiagonal matrices. SIAM Journal on Scientific and Statistical Computing, 11:873–912, 1990.
[6] N. J. Higham. Computing the polar decomposition, with applications. SIAM Journal on Scientific and Statistical Computing, 7:1160–1174, 1986.
[7] W. Kahan. Accurate eigenvalues of a symmetric tridiagonal matrix. Technical Report CS41, Computer Science Department, Stanford University, Stanford, CA, 1966 (revised June 1968).
[8] C. Kenney and A. J. Laub. Polar decomposition and matrix sign function condition estimates. SIAM Journal on Scientific and Statistical Computing, 12:488–504, 1991.
[9] R.-C. Li. A perturbation bound for the generalized polar decomposition. BIT, 33:304–308, 1993.
[10] R.-C. Li. Relative perturbation theory: (II) eigenspace and singular subspace variations. Technical Report UCB//CSD, Computer Science Division, Department of EECS, University of California at Berkeley, 1994 (revised January 1996).
[11] R.-C. Li. New perturbation bounds for the unitary polar factor. SIAM J. Matrix Anal. Appl., 16:327–332, 1995.
[12] J.-Q. Mao. The perturbation analysis of the product of singular vector matrices UV^H. J. Comp. Math., 4:245–248, 1986.
[13] R. Mathias. Perturbation bounds for the polar decomposition. SIAM J. Matrix Anal. Appl., 14:588–597, 1993.
[14] J.-G. Sun and C.-H. Chen. Generalized polar decomposition. Math. Numer. Sinica, 11:262–273, 1989. In Chinese.
More informationLinear Algebra Final Exam Review
Linear Algebra Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.
More informationRank, Trace, Determinant, Transpose an Inverse of a Matrix Let A be an n n square matrix: A = a11 a1 a1n a1 a an a n1 a n a nn nn where is the jth col
Review of Linear Algebra { E18 Hanout Vectors an Their Inner Proucts Let X an Y be two vectors: an Their inner prouct is ene as X =[x1; ;x n ] T Y =[y1; ;y n ] T (X; Y ) = X T Y = x k y k k=1 where T an
More information18.06SC Final Exam Solutions
18.06SC Final Exam Solutions 1 (4+7=11 pts.) Suppose A is 3 by 4, and Ax = 0 has exactly 2 special solutions: 1 2 x 1 = 1 and x 2 = 1 1 0 0 1 (a) Remembering that A is 3 by 4, find its row reduced echelon
More information4.3  Linear Combinations and Independence of Vectors
 Linear Combinations and Independence of Vectors De nitions, Theorems, and Examples De nition 1 A vector v in a vector space V is called a linear combination of the vectors u 1, u,,u k in V if v can be
More informationThe Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment
he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State
More informationMatrices A brief introduction
Matrices A brief introduction Basilio Bona DAUIN Politecnico di Torino September 2013 Basilio Bona (DAUIN) Matrices September 2013 1 / 74 Definitions Definition A matrix is a set of N real or complex numbers
More informationLECTURE VI: SELFADJOINT AND UNITARY OPERATORS MAT FALL 2006 PRINCETON UNIVERSITY
LECTURE VI: SELFADJOINT AND UNITARY OPERATORS MAT 204  FALL 2006 PRINCETON UNIVERSITY ALFONSO SORRENTINO 1 Adjoint of a linear operator Note: In these notes, V will denote a ndimensional euclidean vector
More informationConvex Optimization of Graph Laplacian Eigenvalues
Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Abstract. We consider the problem of choosing the edge weights of an undirected graph so as to maximize or minimize some function of the
More informationGENERALIZED FINITE ALGORITHMS FOR CONSTRUCTING HERMITIAN MATRICES WITH PRESCRIBED DIAGONAL AND SPECTRUM
SIAM J. MATRIX ANAL. APPL. Vol. 27, No. 1, pp. 61 71 c 2005 Society for Industrial and Applied Mathematics GENERALIZED FINITE ALGORITHMS FOR CONSTRUCTING HERMITIAN MATRICES WITH PRESCRIBED DIAGONAL AND
More informationAbstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization relations.
HIROSHIMA S THEOREM AND MATRIX NORM INEQUALITIES MINGHUA LIN AND HENRY WOLKOWICZ Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization
More informationSVD and Image Compression
The SVD and Image Compression Lab Objective: The Singular Value Decomposition (SVD) is an incredibly useful matrix factorization that is widely used in both theoretical and applied mathematics. The SVD
More informationp q m p q p q m p q p Σ q 0 I p Σ 0 0, n 2p q
A NUMERICAL METHOD FOR COMPUTING AN SVDLIKE DECOMPOSITION HONGGUO XU Abstract We present a numerical method to compute the SVDlike decomposition B = QDS 1 where Q is orthogonal S is symplectic and D
More informationA Cholesky LR algorithm for the positive definite symmetric diagonalplussemiseparable eigenproblem
A Cholesky LR algorithm for the positive definite symmetric diagonalplussemiseparable eigenproblem Bor Plestenjak Department of Mathematics University of Ljubljana Slovenia Ellen Van Camp and Marc Van
More informationNumerical Methods I: Eigenvalues and eigenvectors
1/25 Numerical Methods I: Eigenvalues and eigenvectors Georg Stadler Courant Institute, NYU stadler@cims.nyu.edu November 2, 2017 Overview 2/25 Conditioning Eigenvalues and eigenvectors How hard are they
More informationRankone LMIs and Lyapunov's Inequality. Gjerrit Meinsma 4. Abstract. We describe a new proof of the wellknown Lyapunov's matrix inequality about
Rankone LMIs and Lyapunov's Inequality Didier Henrion 1;; Gjerrit Meinsma Abstract We describe a new proof of the wellknown Lyapunov's matrix inequality about the location of the eigenvalues of a matrix
More informationA graph theoretic approach to matrix inversion by partitioning
Numerische Mathematik 4, t 28 t 35 (1962) A graph theoretic approach to matrix inversion by partitioning By FRANK HARARY Abstract Let M be a square matrix whose entries are in some field. Our object
More informationThe geometric mean decomposition
Linear Algebra and its Applications 396 (25) 373 384 www.elsevier.com/locate/laa The geometric mean decomposition Yi Jiang a, William W. Hager b,, Jian Li c a Department of Electrical and Computer Engineering,
More information