Conditioning and ill-posedness of tensor approximations


1 Conditioning and ill-posedness of tensor approximations
and why engineers should care

Lek-Heng Lim

MSRI Summer Graduate Workshop
July 7-18, 2008

2 Slight change of plans

Week 1
  Mon: Overview: tensor approximations (LH)
  Wed: Conditioning and ill-posedness of tensor approximations, nonnegative and symmetric tensors, some applications (LH)
Week 2
  Tue: Computations: Gauss-Seidel method, semidefinite programming, Hilbert's 17th problem, optimization on Stiefel and Grassmann manifolds (LH)

3 Tensor rank is hard to compute

Eugene L. Lawler: "The Mystical Power of Twoness": 2-SAT is easy, 3-SAT is hard; 2-dimensional matching is easy, 3-dimensional matching is hard; order-2 tensor rank is easy, order-3 tensor rank is hard.

Theorem (Håstad). Computing rank_⊗(A) for A ∈ F^{l×m×n} is NP-hard for F = Q and NP-complete for F = F_q.

Open question: Is tensor rank NP-hard/NP-complete over F = R, C in the sense of BCSS?

L. Blum, F. Cucker, M. Shub, S. Smale, Complexity and real computation, Springer-Verlag, New York, NY, 1998.

4 Tensor rank depends on base field

For A ∈ R^{m×n} ⊆ C^{m×n}, rank_R(A) = rank_C(A). Not true for tensors.

Theorem (Bergman). For A ∈ R^{l×m×n} ⊆ C^{l×m×n}, rank_⊗(A) is base field dependent.

Let x, y ∈ R^n be linearly independent and let z = x + iy. Then

  A := x ⊗ x ⊗ x - x ⊗ y ⊗ y + y ⊗ x ⊗ y + y ⊗ y ⊗ x = (1/2)(z̄ ⊗ z ⊗ z + z ⊗ z̄ ⊗ z̄).

May show that rank_⊗,R(A) = 3 and rank_⊗,C(A) = 2.

R^{2×2×2} has 8 distinct orbits under GL_2(R) × GL_2(R) × GL_2(R).
C^{2×2×2} has 7 distinct orbits under GL_2(C) × GL_2(C) × GL_2(C).
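
A quick numerical check of this identity (a minimal numpy sketch; the helper name outer3 is ours, and x, y are taken to be the standard basis vectors of R^2): the real rank-3 expression and the complex rank-2 expression give the same 2 x 2 x 2 array.

    import numpy as np

    x = np.array([1.0, 0.0])
    y = np.array([0.0, 1.0])
    z = x + 1j * y

    def outer3(a, b, c):
        # a ⊗ b ⊗ c as a 3-way array
        return np.einsum('i,j,k->ijk', a, b, c)

    # Real expression: outer product rank 3 over R
    A_real = outer3(x, x, x) - outer3(x, y, y) + outer3(y, x, y) + outer3(y, y, x)

    # Complex expression: (1/2)(conj(z) ⊗ z ⊗ z + z ⊗ conj(z) ⊗ conj(z)), rank 2 over C
    A_cplx = 0.5 * (outer3(z.conj(), z, z) + outer3(z, z.conj(), z.conj()))

    print(np.allclose(A_real, A_cplx))   # True: same tensor, two ranks over two fields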

5 Recall: fundamental problem of multiway data analysis

A a hypermatrix, symmetric hypermatrix, or nonnegative hypermatrix. Want

  argmin_{rank(B) ≤ r} ‖A - B‖.

rank(B) may be outer product rank, multilinear rank, symmetric rank (for a symmetric hypermatrix), or nonnegative rank (for a nonnegative hypermatrix).

Example. Given A ∈ R^{d1×d2×d3}, find σ_i, u_i, v_i, w_i, i = 1, ..., r, that minimize

  ‖A - σ_1 u_1 ⊗ v_1 ⊗ w_1 - σ_2 u_2 ⊗ v_2 ⊗ w_2 - ... - σ_r u_r ⊗ v_r ⊗ w_r‖,

or C ∈ R^{r1×r2×r3} and U ∈ R^{d1×r1}, V ∈ R^{d2×r2}, W ∈ R^{d3×r3} that minimize

  ‖A - (U, V, W)·C‖.

May assume u_i, v_i, w_i unit vectors and U, V, W have orthonormal columns.
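
For the second (multilinear rank) formulation, one standard way to obtain U, V, W with orthonormal columns is the truncated higher-order SVD; it is quasi-optimal rather than optimal. A minimal numpy sketch, with the dimensions and target ranks purely illustrative:

    import numpy as np

    def hosvd_truncate(A, r1, r2, r3):
        """Truncated higher-order SVD: A ~ (U, V, W) · C with orthonormal U, V, W.
        Quasi-optimal; not the best multilinear rank-(r1, r2, r3) approximation."""
        d1, d2, d3 = A.shape
        # Leading left singular vectors of the three unfoldings
        U = np.linalg.svd(A.reshape(d1, -1), full_matrices=False)[0][:, :r1]
        V = np.linalg.svd(np.moveaxis(A, 1, 0).reshape(d2, -1), full_matrices=False)[0][:, :r2]
        W = np.linalg.svd(np.moveaxis(A, 2, 0).reshape(d3, -1), full_matrices=False)[0][:, :r3]
        # Core C = (U^T, V^T, W^T) · A, then reassemble (U, V, W) · C
        C = np.einsum('ia,jb,kc,ijk->abc', U, V, W, A)
        A_hat = np.einsum('ia,jb,kc,abc->ijk', U, V, W, C)
        return U, V, W, C, A_hat

    A = np.random.randn(10, 9, 8)
    U, V, W, C, A_hat = hosvd_truncate(A, 4, 4, 4)
    print(np.linalg.norm(A - A_hat))   # approximation error in the Frobenius norm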

6 Recall: fundamental problem of multiway data analysis

Example. Given A ∈ S^k(C^n), find λ_i, u_i, i = 1, ..., r, that minimize

  ‖A - λ_1 u_1^⊗k - λ_2 u_2^⊗k - ... - λ_r u_r^⊗k‖,

or C ∈ C^{r×...×r} and U ∈ C^{n×r} that minimize

  ‖A - (U, U, ..., U)·C‖.

May assume u_i unit vectors and U orthonormal columns.

Pierre's lectures in Week 2.

7 Best low rank approximation of a matrix

Given A ∈ R^{m×n}. Want argmin_{rank(B) ≤ r} ‖A - B‖. More precisely, find σ_i, u_i, v_i, i = 1, ..., r, that minimize

  ‖A - σ_1 u_1 ⊗ v_1 - σ_2 u_2 ⊗ v_2 - ... - σ_r u_r ⊗ v_r‖.

Theorem (Eckart-Young). Let A = UΣV^T = Σ_{i=1}^{rank(A)} σ_i u_i ⊗ v_i be the singular value decomposition. For r ≤ rank(A), let

  A_r := Σ_{i=1}^r σ_i u_i ⊗ v_i.

Then ‖A - A_r‖_F = min_{rank(B) ≤ r} ‖A - B‖_F.

No such thing for hypermatrices of order 3 or higher.
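
In code, the Eckart-Young approximation is just a truncated SVD; a short numpy sketch (sizes are illustrative):

    import numpy as np

    def best_rank_r(A, r):
        """Best rank-r approximation in the Frobenius (and spectral) norm,
        via the truncated SVD of the Eckart-Young theorem."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r, :]

    A = np.random.randn(8, 6)
    A2 = best_rank_r(A, 2)
    s = np.linalg.svd(A, compute_uv=False)
    # The optimal Frobenius error is sqrt(sigma_{r+1}^2 + ... + sigma_n^2)
    print(np.linalg.norm(A - A2), np.sqrt(np.sum(s[2:]**2)))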

8 Lemma

Let r ≥ 2 and k ≥ 3. Given the norm-topology on R^{d1×...×dk}, the following statements are equivalent:

1. The set S_r(d1, ..., dk) := {A | rank_⊗(A) ≤ r} is not closed.
2. There exists a sequence A_n, rank_⊗(A_n) ≤ r, n ∈ N, converging to B with rank_⊗(B) > r.
3. There exists B, rank_⊗(B) > r, that may be approximated arbitrarily closely by hypermatrices of strictly lower rank, i.e.
     inf{‖B - A‖ | rank_⊗(A) ≤ r} = 0.
4. There exists C, rank_⊗(C) > r, that does not have a best rank-r approximation, i.e.
     inf{‖C - A‖ | rank_⊗(A) ≤ r} is not attained (by any A with rank_⊗(A) ≤ r).

9 Non-existence of best low-rank approximation

For x_i, y_i ∈ R^{d_i}, i = 1, 2, 3, let

  A := x_1 ⊗ x_2 ⊗ y_3 + x_1 ⊗ y_2 ⊗ x_3 + y_1 ⊗ x_2 ⊗ x_3,

and for n ∈ N,

  A_n := n(x_1 + (1/n)y_1) ⊗ (x_2 + (1/n)y_2) ⊗ (x_3 + (1/n)y_3) - n x_1 ⊗ x_2 ⊗ x_3.

Lemma. rank_⊗(A) = 3 iff x_i, y_i are linearly independent, i = 1, 2, 3.

Furthermore, it is clear that rank_⊗(A_n) ≤ 2 and lim_{n→∞} A_n = A.

Original result, in a slightly different form, due to:
D. Bini, G. Lotti, F. Romani, "Approximate solutions for the bilinear form computational problem," SIAM J. Comput., 9 (1980), no. 4.
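
A numerical illustration of this sequence (a numpy sketch; the dimensions and random vectors are illustrative, and random vectors are linearly independent almost surely):

    import numpy as np

    rng = np.random.default_rng(0)
    x = [rng.standard_normal(d) for d in (2, 3, 4)]
    y = [rng.standard_normal(d) for d in (2, 3, 4)]

    def outer3(a, b, c):
        return np.einsum('i,j,k->ijk', a, b, c)

    # Rank-3 limit tensor
    A = outer3(x[0], x[1], y[2]) + outer3(x[0], y[1], x[2]) + outer3(y[0], x[1], x[2])

    # Rank-at-most-2 approximants converging to A
    for n in [1, 10, 100, 1000]:
        A_n = n * outer3(x[0] + y[0]/n, x[1] + y[1]/n, x[2] + y[2]/n) \
            - n * outer3(x[0], x[1], x[2])
        print(n, np.linalg.norm(A_n - A))   # error decays like 1/n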

10 Outer product approximations are ill-behaved

Such phenomena can and will happen for all orders > 2, all norms, and many ranks:

Theorem. Let k ≥ 3 and d1, ..., dk ≥ 2. For any s such that 2 ≤ s ≤ min{d1, ..., dk}, there exists A ∈ R^{d1×...×dk} with rank_⊗(A) = s such that A has no best rank-r approximation for some r < s. The result is independent of the choice of norms.

For matrices, the quantity min{d1, d2} is the maximal possible rank in R^{d1×d2}. In general, a hypermatrix in R^{d1×...×dk} can have rank exceeding min{d1, ..., dk}.

Vin's next three lectures.

11 Outer product approximations are ill-behaved

Tensor rank can jump over an arbitrarily large gap:

Theorem. Let k ≥ 3. Given any s ∈ N, there exists a sequence of order-k hypermatrices A_n such that rank_⊗(A_n) ≤ r and lim_{n→∞} A_n = A with rank_⊗(A) = r + s.

Hypermatrices that fail to have best low-rank approximations are not rare. May occur with non-zero probability; sometimes with certainty.

Theorem. Let µ be a measure that is positive or infinite on Euclidean open sets in R^{l×m×n}. There exists some r ∈ N such that

  µ({A | A does not have a best rank-r approximation}) > 0.

In R^{2×2×2}, all rank-3 hypermatrices fail to have a best rank-2 approximation.

12 Message

That the best rank-r approximation problem for hypermatrices has no solution poses serious difficulties. It is incorrect to think that if we just want an approximate solution, then this doesn't matter. If there is no solution in the first place, then what is it that we are trying to approximate? I.e., what is the approximate solution an approximation of?

13 Weak solutions

For a hypermatrix A that has no best rank-r approximation, we will call a C in the closure of {B | rank_⊗(B) ≤ r} attaining

  inf{‖A - B‖ | rank_⊗(B) ≤ r}

a weak solution. In particular, we must have rank_⊗(C) > r.

It is perhaps surprising that one may completely parameterize all limit points of order-3 rank-2 hypermatrices.

14 Weak solutions

Theorem. Let d1, d2, d3 ≥ 2. Let A_n ∈ R^{d1×d2×d3} be a sequence of hypermatrices with rank_⊗(A_n) ≤ 2 and lim_{n→∞} A_n = A, where the limit is taken in any norm topology. If the limiting hypermatrix A has rank higher than 2, then rank_⊗(A) must be exactly 3 and there exist pairs of linearly independent vectors x_1, y_1 ∈ R^{d1}, x_2, y_2 ∈ R^{d2}, x_3, y_3 ∈ R^{d3} such that

  A = x_1 ⊗ x_2 ⊗ y_3 + x_1 ⊗ y_2 ⊗ x_3 + y_1 ⊗ x_2 ⊗ x_3.

In particular, a sequence of order-3 rank-2 hypermatrices cannot jump rank by more than 1.

Details: Vin's lectures. Not possible in general: JM's lectures.

15 Conditioning of linear systems

Let A ∈ R^{n×n} and b ∈ R^n. Suppose we want to solve the system of linear equations Ax = b.

M = {A ∈ R^{n×n} | det(A) = 0} is the manifold of ill-posed problems. A ∈ M iff Ax = 0 has nontrivial solutions.

Note that det(A) is a poor measure of conditioning. Conditioning is the inverse distance to ill-posedness [Demmel; 1987] (also Dedieu, Shub, Smale); here the distance is

  dist_2(A, M) = 1/‖A^{-1}‖_2.

Normalizing by ‖A‖_2 yields the condition number; note that

  1/(‖A‖_2 ‖A^{-1}‖_2) = 1/κ_2(A).

Moreover,

  dist_2(A, M) = σ_n = min_{x_i, y_i} ‖A - x_1 ⊗ y_1 - ... - x_{n-1} ⊗ y_{n-1}‖_2.
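
A small numpy check of these identities on an illustrative random matrix:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 5))

    sigma = np.linalg.svd(A, compute_uv=False)
    # Distance to the singular matrices: sigma_n = 1/||A^{-1}||_2
    print(sigma[-1], 1.0 / np.linalg.norm(np.linalg.inv(A), 2))
    # Condition number: kappa_2(A) = ||A||_2 ||A^{-1}||_2 = sigma_1 / sigma_n
    print(np.linalg.cond(A, 2), sigma[0] / sigma[-1])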

16 Conditioning of linear systems

Important for error analysis [Wilkinson, 1961]. Let A = UΣV^T, let x be the exact solution of Ax = b, and for a candidate solution x' write y = V^T x, y' = V^T x'. Define

  S_forward(ε) = {x' ∈ R^n | ‖x' - x‖_2 ≤ ε} = {x' ∈ R^n | Σ_{i=1}^n |x'_i - x_i|^2 ≤ ε^2},
  S_backward(ε) = {x' ∈ R^n | Ax' = b', ‖b' - b‖_2 ≤ ε} = {x' ∈ R^n | x' - x = V(y' - y), Σ_{i=1}^n σ_i^2 |y'_i - y_i|^2 ≤ ε^2}.

Then

  S_backward(ε) ⊆ S_forward(σ_n^{-1} ε),   S_forward(ε) ⊆ S_backward(σ_1 ε).

Determined by σ_1 = ‖A‖_2 and σ_n^{-1} = ‖A^{-1}‖_2.

Rule of thumb: log_10 κ_2(A) ≈ loss in number of digits of precision.
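
A quick illustration of the rule of thumb on the notoriously ill-conditioned Hilbert matrices (a sketch; the sizes are illustrative, and double precision carries roughly 16 decimal digits to begin with):

    import numpy as np

    for n in [4, 8, 12]:
        A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
        x_true = np.ones(n)
        b = A @ x_true
        x_hat = np.linalg.solve(A, b)
        rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
        # digits lost ~ log10(kappa_2); digits retained ~ 16 - log10(kappa_2)
        print(n, np.log10(np.linalg.cond(A, 2)), -np.log10(rel_err))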

17 What about multilinear systems?

Look at the simplest case. Take A = [a_ijk] ∈ R^{2×2×2} and b_0, b_1, b_2 ∈ R^2.

  a000 x0 y0 + a010 x0 y1 + a100 x1 y0 + a110 x1 y1 = b00,
  a001 x0 y0 + a011 x0 y1 + a101 x1 y0 + a111 x1 y1 = b01,
  a000 x0 z0 + a001 x0 z1 + a100 x1 z0 + a101 x1 z1 = b10,
  a010 x0 z0 + a011 x0 z1 + a110 x1 z0 + a111 x1 z1 = b11,
  a000 y0 z0 + a001 y0 z1 + a010 y1 z0 + a011 y1 z1 = b20,
  a100 y0 z0 + a101 y0 z1 + a110 y1 z0 + a111 y1 z1 = b21.

When does this have a solution? What is the corresponding manifold of ill-posed problems? When does the homogeneous system, i.e. b_0 = b_1 = b_2 = 0, have a non-trivial solution, i.e. x ≠ 0, y ≠ 0, z ≠ 0?
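
In multilinear notation the six equations are the three contractions A(x, y, ·) = b_0, A(x, ·, z) = b_1, A(·, y, z) = b_2. A numpy sketch of the residual map (the function name residual is ours; data are illustrative):

    import numpy as np

    def residual(A, x, y, z, b0, b1, b2):
        """Residuals of the bilinear system on the slide:
        A(x, y, .) = b0,  A(x, ., z) = b1,  A(., y, z) = b2."""
        r0 = np.einsum('ijk,i,j->k', A, x, y) - b0
        r1 = np.einsum('ijk,i,k->j', A, x, z) - b1
        r2 = np.einsum('ijk,j,k->i', A, y, z) - b2
        return np.concatenate([r0, r1, r2])

    A = np.random.randn(2, 2, 2)
    x, y, z = np.random.randn(2), np.random.randn(2), np.random.randn(2)
    # A consistent right-hand side, so the residual is zero by construction.
    b0 = np.einsum('ijk,i,j->k', A, x, y)
    b1 = np.einsum('ijk,i,k->j', A, x, z)
    b2 = np.einsum('ijk,j,k->i', A, y, z)
    print(residual(A, x, y, z, b0, b1, b2))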

18 2 × 2 × 2 hyperdeterminant

Hyperdeterminant of A = [a_ijk] ∈ R^{2×2×2} [Cayley; 1845] is

  Det_{2,2,2}(A) = (1/4)[ det([a000 a010; a001 a011] + [a100 a110; a101 a111])
                        - det([a000 a010; a001 a011] - [a100 a110; a101 a111]) ]^2
                  - 4 det[a000 a010; a001 a011] det[a100 a110; a101 a111].

A result that parallels the matrix case is the following: the system of bilinear equations

  a000 x0 y0 + a010 x0 y1 + a100 x1 y0 + a110 x1 y1 = 0,
  a001 x0 y0 + a011 x0 y1 + a101 x1 y0 + a111 x1 y1 = 0,
  a000 x0 z0 + a001 x0 z1 + a100 x1 z0 + a101 x1 z1 = 0,
  a010 x0 z0 + a011 x0 z1 + a110 x1 z0 + a111 x1 z1 = 0,
  a000 y0 z0 + a001 y0 z1 + a010 y1 z0 + a011 y1 z1 = 0,
  a100 y0 z0 + a101 y0 z1 + a110 y1 z0 + a111 y1 z1 = 0,

has a non-trivial solution iff Det_{2,2,2}(A) = 0.

19 2 × 2 × 3 hyperdeterminant

Hyperdeterminant of A = [a_ijk] ∈ R^{2×2×3} is

  Det_{2,2,3}(A) = det[a000 a001 a002; a100 a101 a102; a010 a011 a012] · det[a100 a101 a102; a010 a011 a012; a110 a111 a112]
                 - det[a000 a001 a002; a100 a101 a102; a110 a111 a112] · det[a000 a001 a002; a010 a011 a012; a110 a111 a112].

Again, the following is true: the system

  a000 x0 y0 + a010 x0 y1 + a100 x1 y0 + a110 x1 y1 = 0,
  a001 x0 y0 + a011 x0 y1 + a101 x1 y0 + a111 x1 y1 = 0,
  a002 x0 y0 + a012 x0 y1 + a102 x1 y0 + a112 x1 y1 = 0,
  a000 x0 z0 + a001 x0 z1 + a002 x0 z2 + a100 x1 z0 + a101 x1 z1 + a102 x1 z2 = 0,
  a010 x0 z0 + a011 x0 z1 + a012 x0 z2 + a110 x1 z0 + a111 x1 z1 + a112 x1 z2 = 0,
  a000 y0 z0 + a001 y0 z1 + a002 y0 z2 + a010 y1 z0 + a011 y1 z1 + a012 y1 z2 = 0,
  a100 y0 z0 + a101 y0 z1 + a102 y0 z2 + a110 y1 z0 + a111 y1 z1 + a112 y1 z2 = 0,

has a non-trivial solution iff Det_{2,2,3}(A) = 0.

20 Cayley hyperdeterminant and tensor rank

The Cayley hyperdeterminant Det_{2,2,2} may be extended to any A ∈ R^{d1×d2×d3} with multilinear rank at most (2, 2, 2).

Theorem. Let d1, d2, d3 ≥ 2. A ∈ R^{d1×d2×d3} is a weak solution, i.e.

  A = x_1 ⊗ x_2 ⊗ y_3 + x_1 ⊗ y_2 ⊗ x_3 + y_1 ⊗ x_2 ⊗ x_3,

iff Det_{2,2,2}(A) = 0.

Theorem (Kruskal). Let A ∈ R^{2×2×2}. Then rank_⊗(A) = 2 if Det_{2,2,2}(A) > 0 and rank_⊗(A) = 3 if Det_{2,2,2}(A) < 0.

Vin's next three lectures.
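
A numerical check of the sign classification, using the slice form of Cayley's formula from the 2 × 2 × 2 hyperdeterminant slide (a sketch; det222 and the three example tensors are our constructions):

    import numpy as np

    def det222(A):
        """Cayley hyperdeterminant of a 2x2x2 array via
        (1/4)[det(M0+M1) - det(M0-M1)]^2 - 4 det(M0) det(M1),
        where (M_i)_{kj} = a_{ijk} as in the slide's slices."""
        M0 = np.array([[A[0, 0, 0], A[0, 1, 0]], [A[0, 0, 1], A[0, 1, 1]]])
        M1 = np.array([[A[1, 0, 0], A[1, 1, 0]], [A[1, 0, 1], A[1, 1, 1]]])
        d = np.linalg.det
        return 0.25 * (d(M0 + M1) - d(M0 - M1))**2 - 4 * d(M0) * d(M1)

    e1, e2 = np.eye(2)
    o = lambda a, b, c: np.einsum('i,j,k->ijk', a, b, c)

    A_rank2 = o(e1, e1, e1) + o(e2, e2, e2)                                  # rank 2
    A_rank3 = o(e1, e1, e1) - o(e1, e2, e2) + o(e2, e1, e2) + o(e2, e2, e1)  # real rank 3
    A_weak  = o(e1, e1, e2) + o(e1, e2, e1) + o(e2, e1, e1)                  # weak solution

    print(det222(A_rank2), det222(A_rank3), det222(A_weak))  # > 0, < 0, = 0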

21 Condition number of a multilinear system

Like the matrix determinant, the value of the hyperdeterminant is a poor measure of conditioning. Need to compute the distance to M.

Theorem. Let A ∈ R^{2×2×2}. Det_{2,2,2}(A) = 0 iff

  A = x_1 ⊗ x_2 ⊗ y_3 + x_1 ⊗ y_2 ⊗ x_3 + y_1 ⊗ x_2 ⊗ x_3

for some x_i, y_i ∈ R^2, i = 1, 2, 3.

Conditioning of the problem can be obtained from

  min_{x_i, y_i} ‖A - x_1 ⊗ x_2 ⊗ y_3 - x_1 ⊗ y_2 ⊗ x_3 - y_1 ⊗ x_2 ⊗ x_3‖.

x_1 ⊗ x_2 ⊗ y_3 + x_1 ⊗ y_2 ⊗ x_3 + y_1 ⊗ x_2 ⊗ x_3 has outer product rank 3 generically (in fact, iff x_i, y_i are linearly independent).

Surprising: the manifold of ill-posed problems has full rank almost everywhere!
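
One crude way to estimate this distance is multistart local optimization over the parameterization above; a scipy-based sketch (the function names are ours, and a local method only gives an upper bound on the true distance):

    import numpy as np
    from scipy.optimize import minimize

    def boundary_tensor(v):
        # v packs x1, x2, x3, y1, y2, y3, each in R^2
        x1, x2, x3, y1, y2, y3 = v.reshape(6, 2)
        o = lambda a, b, c: np.einsum('i,j,k->ijk', a, b, c)
        return o(x1, x2, y3) + o(x1, y2, x3) + o(y1, x2, x3)

    def dist_to_illposed(A, restarts=20, seed=0):
        """Estimated Frobenius distance from A to the set {Det_{2,2,2} = 0}."""
        rng = np.random.default_rng(seed)
        obj = lambda v: np.linalg.norm(A - boundary_tensor(v))**2
        best = min(minimize(obj, rng.standard_normal(12), method='BFGS').fun
                   for _ in range(restarts))
        return np.sqrt(best)

    A = np.random.randn(2, 2, 2)
    print(dist_to_illposed(A))   # estimated distance to the ill-posed manifold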

22 Nonnegative hypermatrices and nonnegative tensor rank

Let 0 ≤ A ∈ R^{l×m×n}. The nonnegative rank of A is

  rank_+(A) := min{ r | A = Σ_{p=1}^r x_p ⊗ y_p ⊗ z_p, x_p, y_p, z_p ≥ 0 }.

Clearly a nonnegative decomposition exists for any A ≥ 0. Arises in the Naïve Bayes model, Gaussian mixture models.

Theorem. Let A = [a_ijk] ∈ R^{l×m×n} be nonnegative. Then

  inf{ ‖A - Σ_{p=1}^r x_p ⊗ y_p ⊗ z_p‖ | x_p, y_p, z_p ≥ 0 }

is always attained.

23 Nonnegative matrix factorization

D.D. Lee and H.S. Seung, "Learning the parts of objects by nonnegative matrix factorization," Nature, 401 (1999).

Main idea behind NMF (everything else is fluff): the way dictionary functions combine to build target objects is an exclusively additive process and should not involve any cancellations between the dictionary functions.

NMF in a nutshell: given a nonnegative matrix A, decompose it into a sum of outer products of nonnegative vectors:

  A = XY = Σ_{i=1}^r x_i ⊗ y_i.

Noisy situation: approximate A by a sum of outer products of nonnegative vectors:

  min_{X ≥ 0, Y ≥ 0} ‖A - XY‖_F = min_{x_i ≥ 0, y_i ≥ 0} ‖A - Σ_{i=1}^r x_i ⊗ y_i‖_F.
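
A minimal sketch of the Lee-Seung multiplicative updates for this objective (one standard NMF algorithm among several; it only converges to a local optimum, and the sizes here are illustrative):

    import numpy as np

    def nmf(A, r, iters=500, eps=1e-9, seed=0):
        """Lee-Seung multiplicative updates for min ||A - XY||_F with X, Y >= 0."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        X = rng.random((m, r))
        Y = rng.random((r, n))
        for _ in range(iters):
            Y *= (X.T @ A) / (X.T @ X @ Y + eps)   # update Y, keeping nonnegativity
            X *= (A @ Y.T) / (X @ Y @ Y.T + eps)   # update X, keeping nonnegativity
        return X, Y

    A = np.random.rand(20, 15)
    X, Y = nmf(A, r=5)
    print(np.linalg.norm(A - X @ Y, 'fro'))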

24 Generalizing to hypermatrices

Nonnegative outer-product decomposition for a hypermatrix A ≥ 0 is

  A = Σ_{p=1}^r x_p ⊗ y_p ⊗ z_p,

where x_p ∈ R^l_+, y_p ∈ R^m_+, z_p ∈ R^n_+. Clear that such a decomposition exists for any A ≥ 0.

Nonnegative outer-product rank: minimal r for which such a decomposition is possible.

Best nonnegative outer-product rank-r approximation:

  argmin{ ‖A - Σ_{p=1}^r x_p ⊗ y_p ⊗ z_p‖_F | x_p, y_p, z_p ≥ 0 }.

25 Recap: outer product decomposition in spectroscopy

Application to fluorescence spectral analysis by [Bro; 1997].

Specimens with a number of pure substances in different concentrations:
a_ijk = fluorescence emission intensity of the ith specimen at wavelength λ_j^em, excited with light at wavelength λ_k^ex.

Get 3-way data A = [a_ijk] ∈ R^{l×m×n}. Get an outer product decomposition of A:

  A = x_1 ⊗ y_1 ⊗ z_1 + ... + x_r ⊗ y_r ⊗ z_r.

Get the true chemical factors responsible for the data:
  r: number of pure substances in the mixtures,
  x_p = (x_1p, ..., x_lp): relative concentrations of the pth substance in specimens 1, ..., l,
  y_p = (y_1p, ..., y_mp): excitation spectrum of the pth substance,
  z_p = (z_1p, ..., z_np): emission spectrum of the pth substance.

Noisy case: find best rank-r approximation (candecomp/parafac).

26 Proof

Naive choice of objective: g : (R^l × R^m × R^n)^r → R,

  g(x_1, y_1, z_1, ..., x_r, y_r, z_r) := ‖A - Σ_{p=1}^r x_p ⊗ y_p ⊗ z_p‖_F^2.

Need to show g attains its infimum on (R^l_+ × R^m_+ × R^n_+)^r.

Doesn't work directly because of an additional degree of freedom: x_i, y_i, z_i may be scaled by positive scalars whose product is 1,

  αx ⊗ βy ⊗ γz = x ⊗ y ⊗ z,   αβγ = 1,

e.g. (nx) ⊗ y ⊗ (z/n) can have a diverging loading vector even while the outer product remains fixed.

27 Picking the right objective function

Define f : R^r × (R^l × R^m × R^n)^r → R by

  f(X) := ‖A - Σ_{p=1}^r λ_p u_p ⊗ v_p ⊗ w_p‖_F^2

where X = (λ_1, ..., λ_r; u_1, v_1, w_1, ..., u_r, v_r, w_r).

Let S^{n-1}_+ := {x ∈ R^n_+ | ‖x‖_2 = 1} and

  P := R^r_+ × (S^{l-1}_+ × S^{m-1}_+ × S^{n-1}_+)^r.

A global minimizer of f on P, (λ_1, ..., λ_r; u_1, v_1, w_1, ..., u_r, v_r, w_r) ∈ P, gives the required global minimizer (non-unique).

28 Level sets are compact

Note P is closed but unbounded. Will show that the level set of f restricted to P,

  E_α := {X ∈ P | f(X) ≤ α},

is compact for all α.

E_α = P ∩ f^{-1}(-∞, α] is closed since f is continuous. Now to show E_α is bounded.

Suppose not: there is a sequence {X_n}_{n=1}^∞ ⊆ P with ‖X_n‖ → ∞ but f(X_n) ≤ α for all n. Clearly, ‖X_n‖ → ∞ implies λ_q^(n) → ∞ for at least one q ∈ {1, ..., r}.

Note f(X) ≥ (‖Σ_{p=1}^r λ_p u_p ⊗ v_p ⊗ w_p‖_F - ‖A‖_F)^2.

29 Using nonnegativity

Taking X ≥ 0 into account,

  ‖Σ_{p=1}^r λ_p u_p ⊗ v_p ⊗ w_p‖_F^2 = Σ_{i,j,k=1}^{l,m,n} (Σ_{p=1}^r λ_p u_pi v_pj w_pk)^2
                                      ≥ Σ_{i,j,k=1}^{l,m,n} (λ_q u_qi v_qj w_qk)^2
                                      = λ_q^2 Σ_{i,j,k=1}^{l,m,n} (u_qi v_qj w_qk)^2
                                      = λ_q^2 ‖u_q ⊗ v_q ⊗ w_q‖_F^2 = λ_q^2,

since ‖u_q‖ = ‖v_q‖ = ‖w_q‖ = 1.

Hence, as λ_q^(n) → ∞, f(X_n) → ∞. Contradicts f(X_n) ≤ α for all n.

30 Symmetric hypermatrices for blind source separation

Problem. Given y = Mx + n. Unknown: source vector x ∈ C^n, mixing matrix M ∈ C^{m×n}, noise n ∈ C^m. Known: observation vector y ∈ C^m. Goal: recover x from y.

Assumptions:
1. components of x statistically independent,
2. M full column-rank,
3. n Gaussian.

Method: use cumulants,

  κ_k(y) = (M, M, ..., M) · κ_k(x) + κ_k(n).

By the assumptions, κ_k(n) = 0 (for k ≥ 3) and κ_k(x) is diagonal. So need to diagonalize the symmetric hypermatrix κ_k(y).

Pierre's lectures in Week 2.
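
For instance, the order-4 cumulant hypermatrix of zero-mean data can be estimated directly from samples. A numpy sketch of the estimator only (not of the diagonalization step), with the channel count and sample size purely illustrative:

    import numpy as np

    def cumulant4(Y):
        """Sample estimate of the 4th-order cumulant hypermatrix of zero-mean data.
        Y has shape (m, T): m observed channels, T samples.
        kappa4[i,j,k,l] = E[yi yj yk yl] - E[yi yj]E[yk yl]
                          - E[yi yk]E[yj yl] - E[yi yl]E[yj yk]."""
        m, T = Y.shape
        C = (Y @ Y.T) / T                                    # covariance
        M4 = np.einsum('it,jt,kt,lt->ijkl', Y, Y, Y, Y) / T  # 4th-order moments
        return (M4 - np.einsum('ij,kl->ijkl', C, C)
                   - np.einsum('ik,jl->ijkl', C, C)
                   - np.einsum('il,jk->ijkl', C, C))

    # For Gaussian data the 4th-order cumulant should be approximately zero.
    Y = np.random.standard_normal((3, 200000))
    Y -= Y.mean(axis=1, keepdims=True)
    print(np.abs(cumulant4(Y)).max())   # small, shrinking as the sample grows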

31 Diagonalizing a symmetric hypermatrix

Example. A best symmetric low-rank approximation may not exist either. Let x, y ∈ R^n be linearly independent. Define, for n ∈ N,

  A_n := n(x + (1/n)y)^⊗k - n x^⊗k

and

  A := x ⊗ ... ⊗ x ⊗ y + x ⊗ ... ⊗ y ⊗ x + ... + y ⊗ x ⊗ ... ⊗ x.

Then rank_S(A_n) ≤ 2, rank_S(A) = k, and lim_{n→∞} A_n = A.
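
A numerical illustration for k = 3 (a numpy sketch; x, y are taken to be the standard basis vectors of R^2):

    import numpy as np

    x = np.array([1.0, 0.0])
    y = np.array([0.0, 1.0])

    def sym_cube(v):
        # v ⊗ v ⊗ v (the k = 3 case)
        return np.einsum('i,j,k->ijk', v, v, v)

    def outer3(a, b, c):
        return np.einsum('i,j,k->ijk', a, b, c)

    # Limit: one y in each of the three positions, x elsewhere; symmetric rank 3
    A = outer3(y, x, x) + outer3(x, y, x) + outer3(x, x, y)

    for n in [1, 10, 100, 1000]:
        A_n = n * sym_cube(x + y / n) - n * sym_cube(x)   # symmetric rank <= 2
        print(n, np.linalg.norm(A_n - A))                 # error decays like 1/n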
