STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5


Date: October 1, 2018, version 1. Comments, bug reports: lekheng@galton.uchicago.edu

1. existence of svd

Theorem 1 (Existence of SVD). Every matrix has a singular value decomposition (condensed version).

Proof. Let A ∈ C^{m×n} and for simplicity assume that all its nonzero singular values are distinct. We define the matrix

    W = [ 0    A ]  ∈ C^{(m+n)×(m+n)}.
        [ A*   0 ]

It is easy to verify that W* = W (after Wielandt, who was the first to consider this matrix), and by the spectral theorem for Hermitian matrices, W has an evd

    W = ZΛZ*,

where Z ∈ C^{(m+n)×(m+n)} is a unitary matrix and Λ ∈ R^{(m+n)×(m+n)} is a diagonal matrix with real diagonal elements. If z is an eigenvector of W, then we can write

    Wz = σz,    z = [ x ]
                    [ y ]

and therefore

    [ 0    A ] [ x ] = σ [ x ]
    [ A*   0 ] [ y ]     [ y ]

or, equivalently,

    Ay = σx,    A*x = σy.

Now, suppose that we apply W to the vector obtained from z by negating y. Then we have

    [ 0    A ] [  x ] = [ −Ay ] = [ −σx ] = −σ [  x ] .
    [ A*   0 ] [ −y ]   [ A*x ]   [  σy ]      [ −y ]

In other words, if σ is an eigenvalue, then −σ is also an eigenvalue. So we may assume without loss of generality that

    σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_r > 0 = ⋯ = 0,

where r = rank(A). So the diagonal matrix Λ of eigenvalues of W may be written as

    Λ = diag(σ_1, σ_2, …, σ_r, −σ_1, −σ_2, …, −σ_r, 0, …, 0) ∈ C^{(m+n)×(m+n)}.

Observe that there is a zero block of size (m+n−2r) × (m+n−2r) in the bottom right corner of Λ. We scale the eigenvector z of W so that z*z = 2. Since W is Hermitian, eigenvectors corresponding to the distinct eigenvalues σ and −σ are orthogonal, so it follows that

    [ x ]* [  x ] = 0.
    [ y ]  [ −y ]

These yield the system of equations

    x*x + y*y = 2,    x*x − y*y = 0,

which has the unique solution x*x = 1, y*y = 1. Now note that we can represent the matrix of normalized eigenvectors of W corresponding to nonzero eigenvalues (note that there are exactly 2r of these) as

    Z_1 = (1/√2) [ X    X ]  ∈ C^{(m+n)×2r}.
                 [ Y   −Y ]

Note that the factor 1/√2 appears because of the way we have chosen the norm of z. We also let

    Λ_1 = diag(σ_1, σ_2, …, σ_r, −σ_1, −σ_2, …, −σ_r) ∈ C^{2r×2r}.

It is easy to see that

    ZΛZ* = Z_1Λ_1Z_1*

just by multiplying out the zero block in Λ. So we have

    [ 0    A ] = W = ZΛZ* = Z_1Λ_1Z_1*
    [ A*   0 ]

    = (1/2) [ X    X ] [ Σ_r     0  ] [ X*    Y* ]
            [ Y   −Y ] [ 0    −Σ_r ] [ X*   −Y* ]

    = (1/2) [ XΣ_r   −XΣ_r ] [ X*    Y* ]
            [ YΣ_r    YΣ_r ] [ X*   −Y* ]

    = (1/2) [ 0         2XΣ_rY* ] = [ 0        XΣ_rY* ]
            [ 2YΣ_rX*   0       ]   [ YΣ_rX*  0       ]

and therefore

    A = XΣ_rY*,    A* = YΣ_rX*,

where X is an m×r matrix, Σ_r is r×r, Y is n×r, and r is the rank of A. We have obtained the condensed svd of A. The last missing bit is the orthonormality of the columns of X and Y. This follows from the fact that distinct columns of

    [ X    X ]
    [ Y   −Y ]

are mutually orthogonal since they correspond to distinct eigenvalues, and so if we pick

    [ x_i ],  [ x_j ],  [  x_j ]
    [ y_i ]   [ y_j ]   [ −y_j ]

for i ≠ j and take their inner products, we get

    x_i*x_j + y_i*y_j = 0,    x_i*x_j − y_i*y_j = 0.

Adding them gives x_i*x_j = 0 and subtracting them gives y_i*y_j = 0 for all i ≠ j, as required.

see Theorem 4.1 in Trefethen and Bau for an alternative non-constructive proof that does not require the use of the spectral theorem.
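aside (not part of the original notes): the construction in the proof is easy to check numerically. the following Matlab sketch, on an arbitrary random test matrix, verifies that the eigenvalues of the Wielandt matrix W are ±σ_i together with m − n zeros (assuming A has full column rank, which holds generically for a random matrix):

    % minimal numerical check of the Wielandt matrix construction (ad hoc sketch)
    m = 5; n = 3;
    A = randn(m, n) + 1i*randn(m, n);       % a generic complex m-by-n matrix
    W = [zeros(m, m), A; A', zeros(n, n)];  % A' is the conjugate transpose A*
    ew = sort(eig(W));                      % W is Hermitian, so eig returns real values
    sv = svd(A);                            % singular values, descending
    % eigenvalues of W should be the sigma_i, their negatives, and m-n zeros
    expected = sort([sv; -sv; zeros(m - n, 1)]);
    fprintf('max deviation: %g\n', max(abs(ew - expected)))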

2. other characterizations of svd

the proof of the above theorem gives us two more characterizations of singular values and singular vectors:

(i) in terms of eigenvalues and eigenvectors of an (m+n) × (m+n) Hermitian matrix:

    [ 0    A ] = (1/2) [ U    U ] [ Σ    0 ] [ U    U ]*
    [ A*   0 ]         [ V   −V ] [ 0   −Σ ] [ V   −V ]

(ii) in terms of a coupled system of equations:

    Av = σu,    A*u = σv.

the following is yet another way to characterize them, in terms of eigenvalues/eigenvectors of an m×m Hermitian matrix and an n×n Hermitian matrix.

Lemma 1. The squares of the singular values of a matrix A are eigenvalues of AA* and A*A. The left singular vectors of A are the eigenvectors of AA* and the right singular vectors of A are the eigenvectors of A*A.

Proof. From the relationships

    Ay = σx,    A*x = σy,

we obtain

    A*Ay = σ^2 y,    AA*x = σ^2 x.

Therefore, if ±σ are eigenvalues of W, then σ^2 is an eigenvalue of both AA* and A*A. Also

    AA* = (UΣV*)(VΣᵀU*) = UΣΣᵀU*,    A*A = (VΣᵀU*)(UΣV*) = VΣᵀΣV*.

Note that Σ* = Σᵀ since singular values are real. The matrices ΣᵀΣ and ΣΣᵀ are respectively n×n and m×m diagonal matrices with diagonal elements σ_i^2.

the svd is something like a swiss army knife of linear algebra, matrix theory, and numerical linear algebra; you can do a lot with it. over the next few sections we will see that the singular value decomposition is a singularly powerful tool: once we have it, we could solve just about any problem involving matrices:

- given A ∈ C^{m×n} and b ∈ im(A) ⊆ C^m, find all solutions of Ax = b
- given A ∈ GL(n), find A^{-1}
- given A ∈ C^{m×n}, find ‖A‖_2 and ‖A‖_F
- given A ∈ C^{m×n}, find ‖A‖_{σ,p,k} for p ∈ [1, ∞] and k ∈ N
- given A ∈ C^{n×n}, find |det(A)|
- given A ∈ C^{m×n}, find A†
- given A ∈ C^{m×n} and b ∈ C^m, find all solutions of min_{x∈C^n} ‖Ax − b‖_2

the good news is that unlike the Jordan canonical form, the svd is actually computable. there are two main methods to compute it, Golub–Reinsch and Golub–Kahan; we will look at these briefly later. right now all you need to know is that you can call Matlab to give you the svd, both the full and compact versions (see the sketch below).

in all of the following we shall assume that we have the full svd A = UΣV*; furthermore σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_r > 0 are the nonzero singular values of A and r = rank(A).
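since the notes point to Matlab for computing the svd, here is a short sketch (not from the notes; the test matrix is arbitrary) showing the full and compact versions, together with a numerical check of Lemma 1:

    % full and compact (economy) svd in Matlab, plus a check of Lemma 1
    A = randn(5, 3);
    [U, S, V] = svd(A);            % full svd: U is 5x5, S is 5x3, V is 3x3
    [U1, S1, V1] = svd(A, 'econ'); % compact svd: U1 is 5x3, S1 is 3x3
    sv = svd(A);                   % just the singular values, descending
    % Lemma 1: squares of singular values are eigenvalues of A'*A (and A*A')
    disp(sort(eig(A'*A), 'descend')')  % should match the line below
    disp((sv.^2)')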

3. solving linear systems

assume that we want to solve a linear system Ax = b that is known to be consistent, i.e., b ∈ im(A). first we form the matrix-vector product c = U*b and let y = V*x. then Ax = b becomes Σy = c. since Ax = b is consistent, so is

    diag(σ_1, …, σ_r, 0, …, 0) (y_1, …, y_r, y_{r+1}, …, y_n)ᵀ = (c_1, …, c_r, c_{r+1}, …, c_m)ᵀ,

which means that we must have c_{r+1} = ⋯ = c_m = 0, i.e.,

    c = (c_1, …, c_r, 0, …, 0)ᵀ.

now let

    y = (c_1/σ_1, …, c_r/σ_r, y_{r+1}, …, y_n)ᵀ,

where y_{r+1}, …, y_n are free parameters. all solutions of Ax = b are given by

    x = Vy

for some choices of y_{r+1}, …, y_n. to see this, observe that

    Ax = UΣV*Vy = UΣy = U diag(σ_1, …, σ_r, 0, …, 0) (c_1/σ_1, …, c_r/σ_r, y_{r+1}, …, y_n)ᵀ
       = U (c_1, …, c_r, 0, …, 0)ᵀ = Uc = UU*b = b.
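a minimal Matlab sketch of this recipe (not from the notes); the rank-deficient test matrix, right-hand side, and rank tolerance are ad hoc choices:

    % solve a consistent system Ax = b via the svd (sketch)
    A = [1 2 0; 2 4 0; 0 0 3];        % rank 2
    b = A * [1; 1; 1];                % b is in im(A) by construction
    [U, S, V] = svd(A);
    s = diag(S);
    r = sum(s > 1e-12 * s(1));        % numerical rank, with an ad hoc tolerance
    c = U' * b;                       % c_{r+1},...,c_m vanish up to roundoff
    y = zeros(size(A, 2), 1);
    y(1:r) = c(1:r) ./ s(1:r);        % free parameters y_{r+1},...,y_n set to 0
    x = V * y;                        % one particular solution
    disp(norm(A*x - b))               % should be tiny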

4. inverting nonsingular matrices

note that only square matrices can have inverses; when we say A is nonsingular or invertible, the fact that A is square is implied. by the way, the set of all nonsingular matrices in C^{n×n} forms a group under matrix multiplication; it is called the general linear group and denoted

    GL(n) := {A ∈ C^{n×n} : det(A) ≠ 0}.

if A ∈ C^{n×n} is nonsingular, then rank(A) = n and so σ_1, …, σ_n are all nonzero. so

    A^{-1} = (UΣV*)^{-1} = (V*)^{-1}Σ^{-1}U^{-1} = VΣ^{-1}U*,

where

    Σ^{-1} = diag(σ_1^{-1}, …, σ_n^{-1}).
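a quick Matlab check of this formula (again a sketch, not from the notes; the test matrix is an arbitrary well-conditioned example):

    % invert a nonsingular matrix via its svd and compare with inv (sketch)
    A = randn(4) + 4*eye(4);           % generically nonsingular and well conditioned
    [U, S, V] = svd(A);
    Ainv = V * diag(1 ./ diag(S)) * U';
    disp(norm(Ainv - inv(A)))          % should be tiny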

5. computing matrix 2-norm and F-norm

recall the definition of the matrix 2-norm,

    ‖A‖_2 = max_{x≠0} ‖Ax‖_2 / ‖x‖_2,

which is also called the spectral norm. we examine the expression

    ‖Ax‖_2^2 = x*A*Ax.

the matrix A*A is Hermitian and positive semidefinite, i.e., x*(A*A)x ≥ 0 for all nonzero x ∈ C^n. exercise: show that if a matrix M is Hermitian positive semidefinite, then its evd and svd coincide. as such, A*A has svd given by

    A*A = VΛV*,

where V is a unitary matrix whose columns are the eigenvectors of A*A, and Λ is a diagonal matrix of the form

    Λ = diag(σ_1^2, …, σ_n^2),

where each σ_i^2 is nonnegative and an eigenvalue of A*A. these eigenvalues can be ordered such that

    σ_1^2 ≥ σ_2^2 ≥ ⋯ ≥ σ_r^2 > 0,    σ_{r+1}^2 = ⋯ = σ_n^2 = 0,

where r = rank(A). let x ∈ C^n and let w = V*x; then we obtain

    ‖Ax‖_2^2 / ‖x‖_2^2 = x*A*Ax / x*x = x*VΛV*x / x*VV*x = w*Λw / w*w
                       = (Σ_{i=1}^n σ_i^2 |w_i|^2) / (Σ_{i=1}^n |w_i|^2) ≤ σ_1^2.

exercise: show that if a_1, …, a_n are nonnegative numbers, then

    max (a_1 x_1^2 + ⋯ + a_n x_n^2) / (x_1^2 + ⋯ + x_n^2) = max(a_1, …, a_n),

where the maximum is taken over x_1, …, x_n ∈ [0, ∞) not all 0. it follows that

    ‖Ax‖_2^2 / ‖x‖_2^2 ≤ σ_1^2

for all nonzero x. since V is a unitary matrix, it follows that there exists an x such that

    w = V*x = (1, 0, …, 0)ᵀ = e_1,

in which case x*A*Ax = e_1*Λe_1 = σ_1^2. in fact, this vector x is the eigenvector of A*A corresponding to the eigenvalue σ_1^2. we conclude that

    ‖A‖_2 = σ_1.

note that we have also shown that ‖A‖_2 = √ρ(A*A), since the eigenvalues of A*A are simply the squares of the singular values of A by Lemma 1. another way to arrive at this same conclusion is to use the fact in Homework 1 that the 2-norm of a vector is invariant under multiplication by a unitary matrix, i.e., if Q*Q = I, then ‖x‖_2 = ‖Qx‖_2, from which it follows that

    ‖A‖_2 = ‖UΣV*‖_2 = ‖Σ‖_2 = σ_1.

by the same problem in Homework 1, the Frobenius norm is also unitarily invariant. this yields an expression in terms of singular values:

    ‖A‖_F = ‖UΣV*‖_F = ‖Σ‖_F = √(σ_1^2 + ⋯ + σ_r^2),

where r = rank(A).
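a one-line Matlab confirmation of each identity (sketch, not from the notes; the test matrix is arbitrary):

    % spectral and Frobenius norms from singular values (sketch)
    A = randn(4, 3);
    s = svd(A);
    disp([norm(A, 2), s(1)])           % 2-norm equals sigma_1
    disp([norm(A, 'fro'), norm(s)])    % F-norm equals sqrt(sum of sigma_i^2)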

6. computing other matrix norms

the Schatten and Ky Fan norms can be expressed in terms of singular values. this is cheating a bit, because we will in fact define them in terms of singular values. for any p ∈ [1, ∞), the Schatten p-norm of A ∈ C^{m×n} is

    ‖A‖_{σ,p} := (Σ_{i=1}^{min(m,n)} σ_i(A)^p)^{1/p},

and for p = ∞, we have

    ‖A‖_{σ,∞} = max{σ_1(A), …, σ_{min(m,n)}(A)}.

of course, the sum will stop at r = rank(A) since σ_{r+1}(A) = ⋯ = σ_{min(m,n)}(A) = 0. as usual, we also have

    lim_{p→∞} ‖A‖_{σ,p} = ‖A‖_{σ,∞}    (6.1)

for all A ∈ C^{m×n}. for the special values of p = 1, 2, ∞, we have:

- p = 1: nuclear norm, ‖A‖_{σ,1} = σ_1(A) + ⋯ + σ_{min(m,n)}(A) = ‖A‖_*
- p = 2: Frobenius norm, ‖A‖_{σ,2} = (σ_1(A)^2 + ⋯ + σ_{min(m,n)}(A)^2)^{1/2} = ‖A‖_F
- p = ∞: spectral norm, ‖A‖_{σ,∞} = max{σ_1(A), …, σ_{min(m,n)}(A)} = σ_1(A) = ‖A‖_2

in fact one may also define the Schatten p-norm for values of p ∈ [0, 1) by dropping the root 1/p outside the sum:

    ‖A‖_{σ,p} := Σ_{i=1}^{min(m,n)} σ_i(A)^p.

these will not be norms in the usual sense of the word because they do not satisfy the triangle inequality, but they satisfy an analogue of (6.1):

    lim_{p→0^+} ‖A‖_{σ,p} = rank(A).

we may regard matrix rank as the Schatten 0-norm since

    ‖A‖_{σ,0} = σ_1(A)^0 + ⋯ + σ_{min(m,n)}(A)^0 = rank(A),

provided we define 0^0 := 0. Schatten p-norms for p < 1 are useful in the same way ℓ^p-norms are useful for p < 1, namely, as continuous surrogates of matrix rank (vector ℓ^p-norms for p < 1 are often used as continuous surrogates of the ℓ^0-norm, i.e., a count of the number of nonzero entries), which is a discontinuous function on C^{m×n}. the nuclear norm, i.e., the Schatten 1-norm, being convex in addition to continuous, is the most popular surrogate for matrix rank.

for any p ∈ [1, ∞) and k ∈ N, the Ky Fan (p, k)-norm of A ∈ C^{m×n} is

    ‖A‖_{σ,p,k} := (Σ_{i=1}^k σ_i(A)^p)^{1/p}.

clearly for any A ∈ C^{m×n},

    ‖A‖_{σ,p,min(m,n)} = ‖A‖_{σ,p},

and so Ky Fan norms generalize Schatten norms.
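once the singular values are in hand, these norms are one-liners; a Matlab sketch (not from the notes; the values of p and k are arbitrary examples):

    % Schatten and Ky Fan norms from singular values (sketch)
    A = randn(5, 4);
    s = svd(A);
    p = 3; k = 2;                        % example values
    schatten_p = sum(s.^p)^(1/p);        % ||A||_{sigma,p}
    nuclear    = sum(s);                 % p = 1, the nuclear norm
    kyfan_pk   = sum(s(1:k).^p)^(1/p);   % ||A||_{sigma,p,k}
    disp([schatten_p, nuclear, kyfan_pk])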

7. computing magnitude of determinant

note that determinants are only defined for square matrices A ∈ C^{n×n}. recall the formula

    det(A) = Π_{i=1}^n λ_i(A),    (7.1)

where λ_i(A) ∈ C is the ith eigenvalue of A; recall that every n×n matrix has exactly n eigenvalues counted with multiplicity. by convention we usually order eigenvalues in decreasing order of magnitude, i.e., |λ_1(A)| ≥ |λ_2(A)| ≥ ⋯ ≥ |λ_n(A)|. now if we have only the singular values of A, we cannot get det(A) like we did with its eigenvalues in (7.1); however, we can get the magnitude of the determinant:

    |det(A)| = |det(UΣV*)| = |det(U)| det(Σ) |det(V*)| = det(Σ) = det diag(σ_1, …, σ_n) = Π_{i=1}^n σ_i(A),

since |det(U)| = |det(V)| = 1. exercise: show that all eigenvalues of a unitary matrix U must have absolute value 1, and so |det(U)| = 1 by (7.1).

8. computing pseudoinverse

as we mentioned above, only square matrices A ∈ C^{n×n} may have an inverse, i.e., an X ∈ C^{n×n} such that AX = XA = I; if such an X exists, we denote it by A^{-1}. exercise: show that A^{-1}, if it exists, must be unique. can we define something that behaves like an inverse (we will say exactly what this means in the next lecture when we discuss the minimum length least squares problem) for square matrices that are not invertible in the traditional sense? more generally, can we define some kind of inverse for rectangular matrices? such considerations lead us to the notion of a pseudoinverse. the most famous one is the Moore–Penrose pseudoinverse.

Theorem 2 (Moore–Penrose). For any A ∈ C^{m×n}, there exists an X ∈ C^{n×m} satisfying
(i) (AX)* = AX,
(ii) (XA)* = XA,
(iii) XAX = X,
(iv) AXA = A.
Furthermore the X satisfying these four conditions must be unique and is denoted by X = A†.

other types of pseudoinverse, sometimes also called generalized inverses, may be defined by choosing a subset of these four properties. the Moore–Penrose theorem is actually true over any field, but for C (and also R) we can say a lot more; in fact we can compute A† explicitly using the svd.

easy fact: if A ∈ C^{n×n} is invertible, then

    A† = A^{-1}.    (8.1)

to show this, just see that all four properties in the theorem hold true if we plug in X = A^{-1}. another easy fact: if D = diag(d_1, …, d_{min(m,n)}) ∈ C^{m×n} is diagonal in the sense that d_ij = 0 for all i ≠ j, then

    D† = diag(δ_1, …, δ_{min(m,n)}) ∈ C^{n×m},    (8.2)

where

    δ_i = 1/d_i if d_i ≠ 0,    δ_i = 0 if d_i = 0.

to show this, just see that all four properties in the theorem hold true if we plug in X = diag(δ_1, …, δ_{min(m,n)}). in general,

    (AB)† ≠ B†A†,    AA† ≠ I,    A†A ≠ I.

we can get A† in terms of the svd of A ∈ C^{m×n}: if A = UΣV*, then

    A† = VΣ†U*,    (8.3)

where

    Σ† = [ diag(σ_1^{-1}, …, σ_r^{-1})   0 ]  ∈ C^{n×m}.
         [ 0                             0 ]

to see this, just verify that (8.3) satisfies the four properties in Theorem 2. we may also deduce from (8.3) that

    A† = (A*A)^{-1}A*

if A has full column rank (r = n), and that

    A† = A*(AA*)^{-1}

if A has full row rank (r = m).
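a Matlab sketch (not from the notes) that forms (8.3) explicitly and compares it against the built-in pinv; the test matrix and rank tolerance are ad hoc:

    % Moore-Penrose pseudoinverse via the svd, compared with pinv (sketch)
    A = randn(5, 3) * randn(3, 4);        % a 5x4 matrix of rank at most 3
    [U, S, V] = svd(A);
    s = diag(S);
    r = sum(s > 1e-12 * s(1));            % numerical rank, ad hoc tolerance
    Sdag = zeros(size(A, 2), size(A, 1)); % n-by-m
    Sdag(1:r, 1:r) = diag(1 ./ s(1:r));
    Adag = V * Sdag * U';
    disp(norm(Adag - pinv(A)))            % should be tiny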

9. solving least squares problems

given A ∈ C^{m×n} and b ∈ C^m, the least squares problem asks us to find x ∈ C^n so that ‖b − Ax‖_2 is minimized. note that x minimizes ‖b − Ax‖_2 iff it minimizes ‖b − Ax‖_2^2, so whether we write the square or not doesn't really matter. using the svd of A and the unitary invariance of the vector 2-norm in Homework 1, we can simplify this minimization problem as follows:

    ‖b − Ax‖_2^2 = ‖b − UΣV*x‖_2^2 = ‖U*b − ΣV*x‖_2^2 = ‖c − Σy‖_2^2
                 = |c_1 − σ_1 y_1|^2 + ⋯ + |c_r − σ_r y_r|^2 + |c_{r+1}|^2 + ⋯ + |c_m|^2,

where c = U*b and y = V*x. so we see that in order to minimize ‖Ax − b‖_2, we must set y_i = c_i/σ_i for i = 1, …, r. the unknowns y_i for i = r+1, …, n can have any value, since they do not affect the value of ‖c − Σy‖_2. so all the least squares solutions are of the form

    x = Vy = V (c_1/σ_1, …, c_r/σ_r, y_{r+1}, …, y_n)ᵀ,

where y_{r+1}, …, y_n ∈ C are arbitrary. our analysis above also shows that the minimum value of ‖b − Ax‖_2^2 is given by

    min_{x∈C^n} ‖b − Ax‖_2^2 = |c_{r+1}|^2 + ⋯ + |c_m|^2.
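a Matlab sketch of this recipe (not from the notes) on a rank-deficient example, with the free parameters y_{r+1}, …, y_n set to zero, which anticipates the next section:

    % least squares via the svd on a rank-deficient problem (sketch)
    A = [1 2; 2 4; 3 6];                 % rank 1
    b = [1; 0; 1];
    [U, S, V] = svd(A);
    s = diag(S);
    r = sum(s > 1e-12 * s(1));           % numerical rank, ad hoc tolerance
    c = U' * b;
    y = zeros(size(A, 2), 1);
    y(1:r) = c(1:r) ./ s(1:r);           % y_{r+1},...,y_n left at 0
    x = V * y;
    disp(norm(b - A*x))                  % minimal residual
    disp(norm(c(r+1:end)))               % same value, from the analysis above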

10. solving minimum length least squares problems

one of the best-known applications of the svd is that it can be used to obtain the solution to the problem

    ‖b − Ax‖_2 = min,    ‖x‖_2 = min.

alternatively, we can write

    min{‖x‖_2 : x ∈ argmin_{x∈C^n} ‖b − Ax‖_2}

or, using the normal equations,

    min{‖x‖_2 : A*Ax = A*b}.

notation: for f : X → R and S ⊆ X, write

    argmin_{x∈S} f(x)    or    argmin{f(x) : x ∈ S}

for the set of minimizers, i.e.,

    argmin_{x∈S} f(x) = {x_* ∈ S : f(x_*) = min_{x∈S} f(x)}.

we often write (sloppily) x_* = argmin_{x∈S} f(x) or argmin_{x∈S} f(x) = x_* to mean that x_* is a minimizer of f over S, even though the proper notation ought to have been x_* ∈ argmin_{x∈S} f(x).

note from the last part of the previous section that if A does not have full rank, then there are infinitely many solutions to the least squares problem, because the residual does not depend on y_{r+1}, …, y_n and these are thus free parameters that we may freely choose:

    x = Vy = V (c_1/σ_1, …, c_r/σ_r, y_{r+1}, …, y_n)ᵀ,    (10.1)

where c = U*b. we claim that the unique solution with minimum 2-norm is given by setting y_{r+1} = ⋯ = y_n = 0, i.e.,

    x^+ = V (c_1/σ_1, …, c_r/σ_r, 0, …, 0)ᵀ,    (10.2)

where c = U*b. this follows because

    ‖x^+‖_2^2 = |c_1/σ_1|^2 + ⋯ + |c_r/σ_r|^2
              ≤ |c_1/σ_1|^2 + ⋯ + |c_r/σ_r|^2 + |y_{r+1}|^2 + ⋯ + |y_n|^2 = ‖x‖_2^2,

whatever our choice of y_{r+1}, …, y_n in (10.1).

we may also write (10.2) in terms of the Moore–Penrose pseudoinverse since Σ† has precisely the right form:

    x^+ = VΣ†c = VΣ†U*b = A†b,

where

    Σ† = [ diag(σ_1^{-1}, …, σ_r^{-1})   0 ] .
         [ 0                             0 ]
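to close, a Matlab sketch (not from the notes) checking that the construction (10.2) agrees with A†b computed via the built-in pinv, on the same ad hoc example as before:

    % minimum-norm least squares solution via the svd (sketch)
    A = [1 2; 2 4; 3 6];                      % rank-deficient example from above
    b = [1; 0; 1];
    [U, S, V] = svd(A);
    s = diag(S);
    r = sum(s > 1e-12 * s(1));                % numerical rank, ad hoc tolerance
    c = U' * b;
    xplus = V(:, 1:r) * (c(1:r) ./ s(1:r));   % x^+ = V (c_1/s_1,...,c_r/s_r,0,...,0)'
    disp(norm(xplus - pinv(A)*b))             % agrees with A-dagger times b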
