UMIACS-TR, July 1991; CS-TR-2721, revised March 1993. Perturbation Theory for Rectangular Matrix Pencils. G. W. Stewart.


Abstract. The theory of eigenvalues and eigenvectors of rectangular matrix pencils is complicated by the fact that arbitrarily small perturbations of the pencil can cause them to disappear. However, there are applications in which the properties of the pencil ensure the existence of eigenvalues and eigenvectors. In this paper it is shown how to develop a perturbation theory for such pencils.

Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD. This work was supported in part by the Air Force Office of Scientific Research under Contract AFOSR. This report is available by anonymous ftp from thales.cs.umd.edu in the directory pub/reports.
In this note we will be concerned with the perturbation theory for the generalized eigenvalue problem

    β A x = α B x,   (1)

where A and B are m × n matrices with m > n and α and β are normalized so that

    |α|² + |β|² = 1.

Although rectangular matrix pencils have a long history and a well developed theory of canonical forms (e.g., see [1, 5, 6]), their eigenvalues and eigenvectors have been less well studied. There are two reasons. In the first place, eigenvalues and eigenvectors may fail to exist. For example, if

    A = ( 1 )   and   B = ( 0 ),
        ( 0 )             ( ε )

then (1) has no nontrivial solution unless ε = 0. In the second place, even if (1) has a solution, an arbitrarily small perturbation can make it go away, as the above example also shows.

It is not easy to construct a general perturbation theory for objects that may not exist and can vanish at the drop of a hat. Nonetheless, in some applications the nature of the matrices A and B insures the existence of eigenvectors, and the conditions that do so are stable under small perturbations (for an example from game theory, see [3]). In these circumstances

(The homogeneous form given here seems preferable to the more conventional form Ax = λBx. We call the pair (α, β) an eigenvalue of the problem.)
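For n = 1 the existence question is easy to test numerically: a nonzero scalar x satisfies (1) for some normalized (α, β) exactly when the columns A and B are linearly dependent, i.e., when rank([A B]) ≤ 1. The following NumPy sketch (the data and tolerance are illustrative, not from the report) shows the eigenvalue of the example above disappearing under an arbitrarily small perturbation:

```python
import numpy as np

def has_eigenvalue_n1(A, B):
    """For m x 1 matrices A and B, the pencil beta*A*x = alpha*B*x has a
    nontrivial solution iff A and B are linearly dependent, i.e.
    rank([A B]) <= 1."""
    return np.linalg.matrix_rank(np.column_stack([A, B])) <= 1

A = np.array([[1.0], [0.0]])
B0 = np.array([[0.0], [0.0]])      # eps = 0: the eigenvalue (alpha, beta) = (1, 0) exists
Beps = np.array([[0.0], [1e-8]])   # eps != 0: the eigenvalue disappears

print(has_eigenvalue_n1(A, B0))    # True
print(has_eigenvalue_n1(A, Beps))  # False
```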
it is reasonable to try to develop a perturbation theory. In this note we shall show how to do so by reducing the problem to a square one.

We begin by describing the space of solutions of the unperturbed problem. First note that equation (1) says that Ax and Bx lie in the same one-dimensional subspace. Generalizing this observation, we say that a subspace X of dimension k is an eigenspace of the pencil A − B if there is a subspace Y of dimension k such that

    AX + BX ⊂ Y.   (2)

Here the sums and products are the usual Minkowski operations; e.g., AX = { Ax : x ∈ X }. Since the sum of two eigenspaces is an eigenspace, there is a unique maximal eigenspace, which we shall call the eigenspace of the pencil.

In what follows X will be the eigenspace of the pencil A − B and Y will be the corresponding subspace in (2). We will assume that X has dimension k > 0.

The relation between eigenspaces and eigenvectors can be seen as follows. Let X = (X1 X2) and Y = (Y1 Y2) be unitary matrices such that R(X1) = X and R(Y1) = Y. Then Y^H A X and Y^H B X have the forms

    Y^H A X = ( A11 A12 )   and   Y^H B X = ( B11 B12 ).   (3)
              (  0  A22 )                   (  0  B22 )

It follows that if x1 is an eigenvector of the pencil A11 − B11, then X1 x1 is an eigenvector of (1). The converse is also true: all eigenvectors of (1) have the form X1 x1. This is a consequence of the fact that the pencil A22 − B22 does not have an eigenspace. For if it did, we could reduce A22 and B22 as above, so that the transformed pencil assumes the form

    ( A11 Â12 Â13 )     ( B11 B̂12 B̂13 )
    (  0  Â22 Â23 )  −  (  0  B̂22 B̂23 ).
    (  0   0  Â33 )     (  0   0  B̂33 )

Then the original pencil has an eigenspace of dimension greater than k corresponding to the pencil

    ( A11 Â12 )     ( B11 B̂12 )
    (  0  Â22 )  −  (  0  B̂22 ),
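The reduction (3) is easy to demonstrate numerically. The sketch below (illustrative data, not from the report) plants the eigenvector x = e1 in a random pencil, completes Y1 = span{Ax} = span{Bx} to a unitary Y by a QR factorization, and checks the zero blocks Y2^H A X1 and Y2^H B X1:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3

# Build a pencil A - B that has the eigenvector x = e1:
# force B x parallel to A x so that beta*A*x = alpha*B*x is solvable.
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))
B[:, 0] = 2.0 * A[:, 0]            # B x = 2 A x, so (alpha, beta) ~ (1, 2)

x = np.eye(n)[:, 0]                # eigenvector; X = I has it as first column
q1 = A @ x / np.linalg.norm(A @ x) # Y1 spans Y = span{A x} = span{B x}

Y, _ = np.linalg.qr(np.column_stack([q1, np.eye(m)]))
Y2 = Y[:, 1:]                      # orthonormal complement of Y1

# The zero blocks of (3): Y2^H A X1 = 0 and Y2^H B X1 = 0.
print(np.linalg.norm(Y2.T @ A @ x))  # ~0
print(np.linalg.norm(Y2.T @ B @ x))  # ~0
```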
which contradicts the maximality of the subspace.

In what follows, we will assume that the matrices of the pencil A − B are in the reduced form (3). Note that since the reduction is a unitary equivalence, any perturbation in the original pencil corresponds (in a unitarily invariant norm) to a perturbation of the same size in the reduced form.

Let us now consider the perturbed matrices

    Ã = ( A11 + E11  A12 + E12 )   and   B̃ = ( B11 + F11  B12 + F12 ),
        (    E21     A22 + E22 )            (    F21     B22 + F22 )

where

    ‖E_ij‖, ‖F_ij‖ ≤ ε,  i, j = 1, 2.   (4)

Here ‖·‖ denotes both the Euclidean vector norm and the subordinate spectral matrix norm. Let x = (x1^H x2^H)^H be a normalized eigenvector of the corresponding pencil:

    β̃ Ã x = α̃ B̃ x,  ‖x‖ = 1.

We are going to show that under a natural condition (namely, that δ defined below is not small) the second component x2 of x is small. First note that since the pencil A22 − B22 has no eigenvectors, there is a number δ > 0 such that

    ‖β A22 x2 − α B22 x2‖ ≥ δ ‖x2‖   (5)

for all x2 and all (α, β) with |α|² + |β|² = 1. Since |α̃|² + |β̃|² = 1, if

    δ − √2 ε > 0,   (6)

then

    ‖β̃ (A22 + E22) x2 − α̃ (B22 + F22) x2‖ ≥ (δ − √2 ε) ‖x2‖.

Since x is an eigenvector,

    0 = ‖β̃ (E21 x1 + (A22 + E22) x2) − α̃ (F21 x1 + (B22 + F22) x2)‖
      ≥ (δ − √2 ε) ‖x2‖ − ‖β̃ E21 x1 − α̃ F21 x1‖

(and the last term is at most √2 ε, since |α̃| + |β̃| ≤ √2 and ‖x1‖ ≤ 1). Hence

    ‖x2‖ ≤ √2 ε / (δ − √2 ε).
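The constant δ of (5) can be estimated numerically: it is the minimum over normalized pairs (α, β) of the smallest singular value of βA22 − αB22. The sampling sketch below is illustrative (it restricts to real pairs for simplicity, and the function name and grid size are ours, not the report's):

```python
import numpy as np

def delta_lower_bound(A22, B22, num=2000):
    """Estimate delta in (5): the minimum of sigma_min(beta*A22 - alpha*B22)
    over real pairs (alpha, beta) with alpha^2 + beta^2 = 1 (a sketch;
    the report allows complex pairs)."""
    thetas = np.linspace(0.0, np.pi, num)  # (alpha, beta) and -(alpha, beta) give the same norm
    vals = [np.linalg.svd(np.cos(t) * A22 - np.sin(t) * B22, compute_uv=False)[-1]
            for t in thetas]
    return min(vals)

rng = np.random.default_rng(1)
A22 = rng.standard_normal((3, 2))  # tall blocks: generically no eigenvectors,
B22 = rng.standard_normal((3, 2))  # so delta > 0
delta = delta_lower_bound(A22, B22)
print(delta > 0)  # True for generic data
```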
Now let us perform the reduction procedure described above on the perturbed pencil, but using only the one-dimensional space spanned by the eigenvector x. Specifically, let

    X = ( R  Q )
        ( P  S )

be a unitary matrix whose first column is x and for which

    ‖P‖, ‖Q‖ ≤ ‖x2‖ ≤ √2 ε / (δ − √2 ε).   (7)

(The existence of such a matrix is established in the appendix.) Let

    Y = ( R̂  P̂ )
        ( Q̂  Ŝ )

be a unitary matrix whose first column is proportional to Ãx (or B̃x if Ãx = 0). Then Y^H Ã X and Y^H B̃ X have the forms

    ( a  a^H )   and   ( b  b^H ).   (8)
    ( 0   Â  )         ( 0   B̂  )

Thus (α̃, β̃) is an eigenvalue of the pencil a − b. But by direct computation, this pencil is

    (R̂^H A11 R + R̂^H E11 R + R̂^H Ã12 P + Q̂^H E21 R + Q̂^H Ã22 P)
      − (R̂^H B11 R + R̂^H F11 R + R̂^H B̃12 P + Q̂^H F21 R + Q̂^H B̃22 P),

or

    (R̂^H A11 R + G) − (R̂^H B11 R + H),

where from (4) and (7)

    ‖G‖, ‖H‖ ≤ 2ε + (√2 ε / (δ − √2 ε)) max{‖Ã‖, ‖B̃‖}.
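The structure (8) can be checked on synthetic data with the same QR-based completion as before: if x is an eigenvector of the pencil Ã − B̃ and the first column of Y is proportional to Ãx, then the first columns of Y^H Ã X and Y^H B̃ X collapse to (a, 0, …, 0)^H and (b, 0, …, 0)^H with β̃ a = α̃ b. (The data below are illustrative; the eigenvector is planted by construction.)

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 3
mu = 0.7                                        # target eigenvalue: (alpha, beta) ~ (1, mu)

Atil = rng.standard_normal((m, n))
Btil = rng.standard_normal((m, n))
x = rng.standard_normal(n); x /= np.linalg.norm(x)
Btil += np.outer(mu * Atil @ x - Btil @ x, x)   # force Btil x = mu * Atil x

# Unitary X with first column x, and Y with first column ~ Atil x.
X, _ = np.linalg.qr(np.column_stack([x, np.eye(n)]))
X *= np.sign(X[:, 0] @ x)                       # fix the sign so that X[:, 0] = x
y1 = Atil @ x / np.linalg.norm(Atil @ x)
Y, _ = np.linalg.qr(np.column_stack([y1, np.eye(m)]))

a_col = Y.T @ Atil @ X[:, 0]
b_col = Y.T @ Btil @ X[:, 0]
print(np.linalg.norm(a_col[1:]), np.linalg.norm(b_col[1:]))  # both ~0: the zero blocks of (8)

alpha, beta = 1 / np.hypot(1, mu), mu / np.hypot(1, mu)
print(abs(beta * a_col[0] - alpha * b_col[0]))  # ~0: (alpha, beta) is an eigenvalue of a - b
```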
We have thus shown that the eigenvalue (α̃, β̃) is an eigenvalue of a perturbation of the pencil R̂^H A11 R − R̂^H B11 R, a pencil which corresponds to the eigenspace of the original problem. Since the pencil and its perturbation are square, we may use standard perturbation theory to bound the perturbation in the eigenvalue and its eigenvector (for a survey of the perturbation theory of the generalized eigenvalue problem, see [4, Ch. VI]).

The dependence of the bounds on δ defined by (5) is entirely natural. If δ is small, then (α̃, β̃) is almost an eigenvalue of the pencil A22 − B22 and could have come from the perturbed pencil Ã22 − B̃22. Such an eigenvalue need not be near an eigenvalue associated with the eigenspace of the original problem. Seen in this light, the condition (6) insures that the perturbed eigenvalue truly comes from the eigenspace of the original problem.

Finally, we note that the general technique we have outlined here is at least as important as the particular bounds. For example, it can be extended to obtain perturbation bounds on eigenspaces. Again, if we work with nonorthogonal diagonalizing transformations, so that A12 = B12 = 0 in (3) and a^H = b^H = 0 in (8), we obtain the result that

    α̃ ∝ α + y^H E x + O(max{‖E‖, ‖F‖}²)

and

    β̃ ∝ β + y^H F x + O(max{‖E‖, ‖F‖}²),

where y^H is the left eigenvector corresponding to (α, β). The details are left to the reader.

Appendix

The following extension theorem is more general than we need but may be of independent interest.

Let p ≥ q and let the n × q matrix

    X1 = ( X11 )
         ( X21 )

(with row partition (p, n − p)) have orthonormal columns. Then there is a unitary matrix

    X = ( X11  X12  X13 )
        ( X21  X22  X23 )

(with column partition (q, p − q, n − p))
such that

    ‖(X21 X22)‖ = ‖X21‖.

To establish the theorem, first assume that p + q ≤ n. By the CS decomposition (e.g., see [2, p. 77]) we may assume that

    X1 = ( C )
         ( 0 )
         ( S )
         ( 0 )

(with row partition (q, p − q, q, n − p − q)), where C and S are nonnegative diagonal matrices. Then the required extension is

    X = ( C  0  −S  0 )
        ( 0  I   0  0 )
        ( S  0   C  0 )
        ( 0  0   0  I ),

in which

    (X21 X22) = ( S  0 ).
                ( 0  0 )

On the other hand, if p + q > n, we take X1 in the form

    X1 = ( I  0 )
         ( 0  C )
         ( 0  0 )
         ( 0  S )

(with row partition (p + q − n, n − p, p − q, n − p)). In this case the required extension is

    X = ( I  0   0   0 )
        ( 0  C   0  −S )
        ( 0  0   I   0 )
        ( 0  S   0   C ),

in which

    (X21 X22) = ( 0  S  0 ).
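For the case p + q ≤ n, the construction can be checked directly in NumPy: with diagonal C = diag(cos θi) and S = diag(sin θi), the block matrix below is unitary, and the block (X21 X22) of its first p columns has spectral norm ‖S‖ = ‖X21‖. (The sizes and angles are illustrative.)

```python
import numpy as np

q, p, n = 2, 3, 6
theta = np.array([0.2, 1.1])
C, S = np.diag(np.cos(theta)), np.diag(np.sin(theta))
Z = np.zeros  # shorthand for zero blocks

# The extension for p + q <= n: a plane rotation [C -S; S C] embedded in
# an identity, with row/column partition (q, p - q, q, n - p - q).
X = np.block([
    [C,                 Z((q, p - q)),         -S,                Z((q, n - p - q))],
    [Z((p - q, q)),     np.eye(p - q),         Z((p - q, q)),     Z((p - q, n - p - q))],
    [S,                 Z((q, p - q)),         C,                 Z((q, n - p - q))],
    [Z((n - p - q, q)), Z((n - p - q, p - q)), Z((n - p - q, q)), np.eye(n - p - q)],
])

print(np.linalg.norm(X.T @ X - np.eye(n)))  # ~0: X is unitary
X21_X22 = X[p:, :p]                         # bottom block of the first p columns
print(np.isclose(np.linalg.norm(X21_X22, 2), np.sin(theta).max()))  # True: = ||S|| = ||X21||
```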
References

[1] F. R. Gantmacher (1959). The Theory of Matrices, Vols. I, II. Chelsea Publishing Company, New York.
[2] G. H. Golub and C. F. Van Loan (1989). Matrix Computations. Johns Hopkins University Press, Baltimore, Maryland, 2nd edition.
[3] E. Marchi, J. A. Oviedo, and J. E. Cohen (1991). "Perturbation Theory of a Nonlinear Game of von Neumann." SIAM Journal on Matrix Analysis and Applications, 12, 592–596.
[4] G. W. Stewart and J.-G. Sun (1990). Matrix Perturbation Theory. Academic Press, Boston.
[5] P. Van Dooren (1979). "The Computation of Kronecker's Canonical Form of a Singular Pencil." Linear Algebra and Its Applications, 27, 103–140.
[6] J. H. Wilkinson (1979). "Kronecker's Canonical Form and the QZ Algorithm." Linear Algebra and Its Applications, 28, 285–303.
PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the CauchyBinet Theorem on determinants and of the fact that the eigenvalues of a matrix
More informationMath 108b: Notes on the Spectral Theorem
Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator
More informationMATLAB TOOLS FOR SOLVING PERIODIC EIGENVALUE PROBLEMS 1
MATLAB TOOLS FOR SOLVING PERIODIC EIGENVALUE PROBLEMS 1 Robert Granat Bo Kågström Daniel Kressner,2 Department of Computing Science and HPC2N, Umeå University, SE90187 Umeå, Sweden. {granat,bokg,kressner}@cs.umu.se
More information5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers.
Linear Algebra  Test File  Spring Test # For problems  consider the following system of equations. x + y  z = x + y + 4z = x + y + 6z =.) Solve the system without using your calculator..) Find the
More informationMATLAB TOOLS FOR SOLVING PERIODIC EIGENVALUE PROBLEMS 1
MATLAB TOOLS FOR SOLVING PERIODIC EIGENVALUE PROBLEMS 1 Robert Granat Bo Kågström Daniel Kressner,2 Department of Computing Science and HPC2N, Umeå University, SE90187 Umeå, Sweden. {granat,bokg,kressner}@cs.umu.se
More informationMultiplicative Perturbation Analysis for QR Factorizations
Multiplicative Perturbation Analysis for QR Factorizations XiaoWen Chang RenCang Li Technical Report 01101 http://www.uta.edu/math/preprint/ Multiplicative Perturbation Analysis for QR Factorizations
More informationComputing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QRalgorithm
Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QRalgorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 19, 2010 Today
More information2. Review of Linear Algebra
2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear
More informationThe SVDFundamental Theorem of Linear Algebra
Nonlinear Analysis: Modelling and Control, 2006, Vol. 11, No. 2, 123 136 The SVDFundamental Theorem of Linear Algebra A. G. Akritas 1, G. I. Malaschonok 2, P. S. Vigklas 1 1 Department of Computer and
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)
AMS526: Numerical Analysis (Numerical Linear Algebra for Computational and Data Sciences) Lecture 14: Eigenvalue Problems; Eigenvalue Revealing Factorizations Xiangmin Jiao Stony Brook University Xiangmin
More informationor H = UU = nx i=1 i u i u i ; where H is a nonsingular Hermitian matrix of order n, = diag( i ) is a diagonal matrix whose diagonal elements are the
Relative Perturbation Bound for Invariant Subspaces of Graded Indenite Hermitian Matrices Ninoslav Truhar 1 University Josip Juraj Strossmayer, Faculty of Civil Engineering, Drinska 16 a, 31000 Osijek,
More informationError estimates for the ESPRIT algorithm
Error estimates for the ESPRIT algorithm Daniel Potts Manfred Tasche Let z j := e f j j = 1,..., M) with f j [ ϕ, 0] + i [ π, π) and small ϕ 0 be distinct nodes. With complex coefficients c j 0, we consider
More informationSolution of Linear Equations
Solution of Linear Equations (Com S 477/577 Notes) YanBin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass
More informationInvariant Subspace Perturbations or: How I Learned to Stop Worrying and Love Eigenvectors
Invariant Subspace Perturbations or: How I Learned to Stop Worrying and Love Eigenvectors Alexander Cloninger Norbert Wiener Center Department of Mathematics University of Maryland, College Park http://www.norbertwiener.umd.edu
More informationNOTES ON BILINEAR FORMS
NOTES ON BILINEAR FORMS PARAMESWARAN SANKARAN These notes are intended as a supplement to the talk given by the author at the IMSc Outreach Programme Enriching Collegiate Education2015. Symmetric bilinear
More information