Hyperdeterminants, secant varieties, and tensor approximations


1 Hyperdeterminants, secant varieties, and tensor approximations

Lek-Heng Lim, University of California, Berkeley
April 23, 2008
Joint work with Vin de Silva

2 Hypermatrices

Totally ordered finite sets: $[n] = \{1 < 2 < \cdots < n\}$, $n \in \mathbb{N}$.

Vector or $n$-tuple: $f \colon [n] \to \mathbb{R}$. If $f(i) = a_i$, then $f$ is represented by $\mathbf{a} = [a_1, \dots, a_n] \in \mathbb{R}^n$.

Matrix: $f \colon [m] \times [n] \to \mathbb{R}$. If $f(i,j) = a_{ij}$, then $f$ is represented by $A = [a_{ij}]_{i,j=1}^{m,n} \in \mathbb{R}^{m \times n}$.

Hypermatrix (order 3): $f \colon [l] \times [m] \times [n] \to \mathbb{R}$. If $f(i,j,k) = a_{ijk}$, then $f$ is represented by $\mathcal{A} = [a_{ijk}]_{i,j,k=1}^{l,m,n} \in \mathbb{R}^{l \times m \times n}$.

Normally $\mathbb{R}^X = \{f \colon X \to \mathbb{R}\}$, so these ought to be written $\mathbb{R}^{[n]}$, $\mathbb{R}^{[m] \times [n]}$, $\mathbb{R}^{[l] \times [m] \times [n]}$.

3 Hypermatrices and tensors

Up to a choice of bases:

$\mathbf{a} \in \mathbb{R}^n$ can represent a vector in $V$ (contravariant) or a linear functional in $V^*$ (covariant).

$A \in \mathbb{R}^{m \times n}$ can represent a bilinear form $V^* \times W^* \to \mathbb{R}$ (contravariant), a bilinear form $V \times W \to \mathbb{R}$ (covariant), or a linear operator $V \to W$ (mixed).

$\mathcal{A} \in \mathbb{R}^{l \times m \times n}$ can represent a trilinear form $U \times V \times W \to \mathbb{R}$ (covariant), a bilinear operator $V \times W \to U$ (mixed), etc.

A hypermatrix is the same as a tensor if
1 we give it coordinates (represent it with respect to some bases);
2 we ignore covariance and contravariance.

4 Basic operation on a hypermatrix

A matrix can be multiplied on the left and right: for $A \in \mathbb{R}^{m \times n}$, $X \in \mathbb{R}^{p \times m}$, $Y \in \mathbb{R}^{q \times n}$,
$$(X, Y) \cdot A = XAY^\top = [c_{\alpha\beta}] \in \mathbb{R}^{p \times q}, \qquad c_{\alpha\beta} = \sum_{i,j=1}^{m,n} x_{\alpha i} y_{\beta j} a_{ij}.$$

A hypermatrix can be multiplied on three sides: for $\mathcal{A} = [a_{ijk}] \in \mathbb{R}^{l \times m \times n}$, $X \in \mathbb{R}^{p \times l}$, $Y \in \mathbb{R}^{q \times m}$, $Z \in \mathbb{R}^{r \times n}$,
$$(X, Y, Z) \cdot \mathcal{A} = [c_{\alpha\beta\gamma}] \in \mathbb{R}^{p \times q \times r}, \qquad c_{\alpha\beta\gamma} = \sum_{i,j,k=1}^{l,m,n} x_{\alpha i} y_{\beta j} z_{\gamma k} a_{ijk}.$$
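As an illustrative aside (mine, not from the slides), the trilinear action $(X, Y, Z) \cdot \mathcal{A}$ is a single contraction, e.g. with NumPy's einsum; all variable names below are my own.

```python
import numpy as np

l, m, n = 3, 4, 5
p, q, r = 2, 3, 2
A = np.random.randn(l, m, n)                      # order-3 hypermatrix
X, Y, Z = np.random.randn(p, l), np.random.randn(q, m), np.random.randn(r, n)

# c_{abg} = sum_{i,j,k} x_{ai} y_{bj} z_{gk} a_{ijk}
C = np.einsum('ai,bj,gk,ijk->abg', X, Y, Z, A)
print(C.shape)                                    # (2, 3, 2)

# matrix analogue as a sanity check: (X, Y) . M = X M Y^T
M = np.random.randn(m, n)
X2, Y2 = np.random.randn(p, m), np.random.randn(q, n)
assert np.allclose(np.einsum('ai,bj,ij->ab', X2, Y2, M), X2 @ M @ Y2.T)
```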

5 Basic operation on a hypermatrix

Covariant version: $\mathcal{A} \cdot (X^\top, Y^\top, Z^\top) := (X, Y, Z) \cdot \mathcal{A}$. This gives convenient notation for multilinear functionals and multilinear operators. For $\mathbf{x} \in \mathbb{R}^l$, $\mathbf{y} \in \mathbb{R}^m$, $\mathbf{z} \in \mathbb{R}^n$,
$$\mathcal{A}(\mathbf{x}, \mathbf{y}, \mathbf{z}) := \mathcal{A} \cdot (\mathbf{x}, \mathbf{y}, \mathbf{z}) = \sum_{i,j,k=1}^{l,m,n} a_{ijk} x_i y_j z_k,$$
$$\mathcal{A}(I, \mathbf{y}, \mathbf{z}) := \mathcal{A} \cdot (I, \mathbf{y}, \mathbf{z}) = \Bigl[\sum_{j,k=1}^{m,n} a_{ijk} y_j z_k\Bigr]_{i=1}^{l} \in \mathbb{R}^l.$$

6 Symmetric hypermatrices

Example
A cubical hypermatrix $[a_{ijk}] \in \mathbb{R}^{n \times n \times n}$ is symmetric if
$$a_{ijk} = a_{ikj} = a_{jik} = a_{jki} = a_{kij} = a_{kji},$$
i.e. it is invariant under all permutations $\sigma \in \mathfrak{S}_k$ of its indices. $\mathsf{S}^k(\mathbb{R}^n)$ denotes the set of all order-$k$ symmetric hypermatrices. Example: higher-order derivatives of multivariate functions.

Example
Moments of a random vector $\mathbf{x} = (X_1, \dots, X_n)$:
$$m_k(\mathbf{x}) = \bigl[ E(x_{i_1} x_{i_2} \cdots x_{i_k}) \bigr]_{i_1,\dots,i_k=1}^{n} = \Bigl[ \int \cdots \int x_{i_1} x_{i_2} \cdots x_{i_k} \, d\mu(x_{i_1}) \cdots d\mu(x_{i_k}) \Bigr]_{i_1,\dots,i_k=1}^{n}.$$
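A quick sketch (my own, assuming i.i.d. samples) of the order-3 moment hypermatrix estimated from data; symmetry holds by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal((10_000, 3))   # 10,000 draws of x = (X1, X2, X3)

# empirical third moment: M3[i,j,k] ~ E(x_i x_j x_k), estimated by a sample mean
M3 = np.einsum('ti,tj,tk->ijk', samples, samples, samples) / samples.shape[0]

# symmetric: invariant under any permutation of the three indices
assert np.allclose(M3, M3.transpose(1, 0, 2))
assert np.allclose(M3, M3.transpose(2, 1, 0))
```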

7 Symmetric hypermatrices

Example
Cumulants of a random vector $\mathbf{x} = (X_1, \dots, X_n)$:
$$\kappa_k(\mathbf{x}) = \Bigl[ \sum_{A_1 \sqcup \cdots \sqcup A_p = \{i_1,\dots,i_k\}} (-1)^{p-1} (p-1)! \, E\Bigl(\prod_{i \in A_1} x_i\Bigr) \cdots E\Bigl(\prod_{i \in A_p} x_i\Bigr) \Bigr]_{i_1,\dots,i_k=1}^{n},$$
the sum running over all partitions of $\{i_1, \dots, i_k\}$.

For $n = 1$, $\kappa_k(\mathbf{x})$ for $k = 1, 2, 3, 4$ gives the expectation, variance, skewness, and kurtosis. Important in Independent Component Analysis (ICA).

8 Inner products and norms

$l^2([n])$: for $\mathbf{a}, \mathbf{b} \in \mathbb{R}^n$, $\langle \mathbf{a}, \mathbf{b} \rangle = \mathbf{a}^\top \mathbf{b} = \sum_{i=1}^{n} a_i b_i$.

$l^2([m] \times [n])$: for $A, B \in \mathbb{R}^{m \times n}$, $\langle A, B \rangle = \operatorname{tr}(A^\top B) = \sum_{i,j=1}^{m,n} a_{ij} b_{ij}$.

$l^2([l] \times [m] \times [n])$: for $\mathcal{A}, \mathcal{B} \in \mathbb{R}^{l \times m \times n}$, $\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i,j,k=1}^{l,m,n} a_{ijk} b_{ijk}$.

In general, $l^2([m] \times [n]) = l^2([m]) \otimes l^2([n])$ and $l^2([l] \times [m] \times [n]) = l^2([l]) \otimes l^2([m]) \otimes l^2([n])$.

Frobenius norm: $\|\mathcal{A}\|_F^2 = \sum_{i,j,k=1}^{l,m,n} a_{ijk}^2$.

9 DARPA mathematical challenge eight

One of the twenty-three mathematical challenges announced at DARPATech.

Problem
Beyond convex optimization: can linear algebra be replaced by algebraic geometry in a systematic way?

Algebraic geometry in a slogan: polynomials are to algebraic geometry what matrices are to linear algebra.

A polynomial $f \in \mathbb{R}[x_1, \dots, x_n]$ of degree $d$ can be expressed as
$$f(\mathbf{x}) = a_0 + \mathbf{a}_1^\top \mathbf{x} + \mathbf{x}^\top A_2 \mathbf{x} + \mathcal{A}_3(\mathbf{x}, \mathbf{x}, \mathbf{x}) + \cdots + \mathcal{A}_d(\mathbf{x}, \dots, \mathbf{x}),$$
with $a_0 \in \mathbb{R}$, $\mathbf{a}_1 \in \mathbb{R}^n$, $A_2 \in \mathbb{R}^{n \times n}$, $\mathcal{A}_3 \in \mathbb{R}^{n \times n \times n}$, ..., $\mathcal{A}_d \in \mathbb{R}^{n \times \cdots \times n}$.

Numerical linear algebra: $d = 2$. Numerical multilinear algebra: $d > 2$.

10 Multilinear spectral theory

Let $\mathcal{A} \in \mathbb{R}^{n \times n \times n}$ (easier if $\mathcal{A}$ is symmetric).
1 How should one define its eigenvalues and eigenvectors?
2 What is a decomposition that generalizes the eigenvalue decomposition of a matrix?

Let $\mathcal{A} \in \mathbb{R}^{l \times m \times n}$.
1 How should one define its singular values and singular vectors?
2 What is a decomposition that generalizes the singular value decomposition of a matrix?

Somewhat surprising: (1) and (2) have different answers.

11 Multilinear spectral theory

One may define eigenvalues/eigenvectors of $\mathcal{A} \in \mathsf{S}^k(\mathbb{R}^n)$ as critical values/points of the multilinear Rayleigh quotient $\mathcal{A}(\mathbf{x}, \dots, \mathbf{x}) / \|\mathbf{x}\|_k^k$. Lagrangian: $L(\mathbf{x}, \lambda) := \mathcal{A}(\mathbf{x}, \dots, \mathbf{x}) - \lambda(\|\mathbf{x}\|_k^k - 1)$. At a critical point,
$$\mathcal{A}(I_n, \mathbf{x}, \dots, \mathbf{x}) = \lambda \mathbf{x}^{k-1},$$
where $\mathbf{x}^{k-1} := [x_1^{k-1}, \dots, x_n^{k-1}]^\top$. Ditto for singular values/vectors of $\mathcal{A} \in \mathbb{R}^{d_1 \times \cdots \times d_k}$.

Applications: Perron-Frobenius theorem for irreducible non-negative hypermatrices, spectral hypergraph theory.

L.-H. Lim, Singular values and eigenvalues of tensors: a variational approach, Proc. IEEE Int. Workshop on Computational Advances in Multi-Sensor Adaptive Processing, 1 (2005).
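For the order-3 symmetric case there is a natural power-type iteration. The sketch below is mine, not from the slides: it uses the $\ell^2$-normalized variant (fixed points satisfy $\mathcal{A}(I, \mathbf{x}, \mathbf{x}) = \lambda \mathbf{x}$) rather than the $\ell^k$-normalized one above, and convergence is not guaranteed for every symmetric $\mathcal{A}$ (shifted variants address this).

```python
import numpy as np

def symmetric_power_iteration(A, iters=500, seed=0):
    """Power-type iteration x <- A(I, x, x) / ||A(I, x, x)|| for a symmetric
    order-3 tensor A; at a fixed point, A(I, x, x) = lambda * x with ||x|| = 1."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', A, x, x)     # A(I, x, x)
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', A, x, x, x)   # Rayleigh quotient A(x, x, x)
    return lam, x

# symmetrize a random cube so that the iteration makes sense
rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4, 4))
perms = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
A = sum(T.transpose(p) for p in perms) / 6

lam, x = symmetric_power_iteration(A)
print(lam, np.linalg.norm(np.einsum('ijk,j,k->i', A, x, x) - lam * x))
```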

12 Tensor ranks (Hitchcock, 1927)

Matrix rank. For $A \in \mathbb{R}^{m \times n}$,
$$\operatorname{rank}(A) = \dim(\operatorname{span}_{\mathbb{R}}\{A_{\bullet 1}, \dots, A_{\bullet n}\}) \ \text{(column rank)} = \dim(\operatorname{span}_{\mathbb{R}}\{A_{1 \bullet}, \dots, A_{m \bullet}\}) \ \text{(row rank)} = \min\Bigl\{r : A = \sum\nolimits_{i=1}^{r} \mathbf{u}_i \mathbf{v}_i^\top\Bigr\} \ \text{(outer product rank)}.$$

Multilinear rank. For $\mathcal{A} \in \mathbb{R}^{l \times m \times n}$, $\operatorname{rank}_{\boxplus}(\mathcal{A}) = (r_1(\mathcal{A}), r_2(\mathcal{A}), r_3(\mathcal{A}))$, where
$$r_1(\mathcal{A}) = \dim(\operatorname{span}_{\mathbb{R}}\{\mathcal{A}_{1 \bullet \bullet}, \dots, \mathcal{A}_{l \bullet \bullet}\}),\quad r_2(\mathcal{A}) = \dim(\operatorname{span}_{\mathbb{R}}\{\mathcal{A}_{\bullet 1 \bullet}, \dots, \mathcal{A}_{\bullet m \bullet}\}),\quad r_3(\mathcal{A}) = \dim(\operatorname{span}_{\mathbb{R}}\{\mathcal{A}_{\bullet \bullet 1}, \dots, \mathcal{A}_{\bullet \bullet n}\}).$$

Outer product rank. For $\mathcal{A} \in \mathbb{R}^{l \times m \times n}$,
$$\operatorname{rank}_{\otimes}(\mathcal{A}) = \min\Bigl\{r : \mathcal{A} = \sum\nolimits_{i=1}^{r} \mathbf{u}_i \otimes \mathbf{v}_i \otimes \mathbf{w}_i\Bigr\}, \quad \text{where } \mathbf{u} \otimes \mathbf{v} \otimes \mathbf{w} := [u_i v_j w_k]_{i,j,k=1}^{l,m,n}.$$
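Each $r_i(\mathcal{A})$ is the rank of an ordinary matrix, namely the $i$-th flattening of $\mathcal{A}$, so the multilinear rank is cheap to compute (unlike the outer product rank, which is NP-hard in general). A minimal NumPy sketch, names mine:

```python
import numpy as np

def multilinear_rank(A, tol=1e-10):
    """Multilinear rank (r1, r2, r3): ranks of the three flattenings of A."""
    l, m, n = A.shape
    r1 = np.linalg.matrix_rank(A.reshape(l, m * n), tol=tol)
    r2 = np.linalg.matrix_rank(A.transpose(1, 0, 2).reshape(m, l * n), tol=tol)
    r3 = np.linalg.matrix_rank(A.transpose(2, 0, 1).reshape(n, l * m), tol=tol)
    return r1, r2, r3

# a rank-1 hypermatrix u ⊗ v ⊗ w has multilinear rank (1, 1, 1)
u, v, w = np.random.randn(3), np.random.randn(4), np.random.randn(5)
A = np.einsum('i,j,k->ijk', u, v, w)
print(multilinear_rank(A))  # (1, 1, 1)
```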

13 Eigenvalue and singular value decompositions

Rank-revealing decompositions associated with the outer product rank.

Symmetric eigenvalue decomposition of $\mathcal{A} \in \mathsf{S}^3(\mathbb{R}^n)$:
$$\mathcal{A} = \sum\nolimits_{i=1}^{r} \lambda_i \mathbf{v}_i \otimes \mathbf{v}_i \otimes \mathbf{v}_i, \tag{1}$$
where $\operatorname{rank}_{\mathsf{S}}(\mathcal{A}) = \min\{r : \mathcal{A} = \sum_{i=1}^{r} \lambda_i \mathbf{v}_i \otimes \mathbf{v}_i \otimes \mathbf{v}_i\} = r$.

P. Comon, G. Golub, L.-H. Lim, B. Mourrain, Symmetric tensors and symmetric tensor rank, SIAM J. Matrix Anal. Appl.

Singular value decomposition of $\mathcal{A} \in \mathbb{R}^{l \times m \times n}$ with $\operatorname{rank}_{\otimes}(\mathcal{A}) = r$:
$$\mathcal{A} = \sum\nolimits_{i=1}^{r} \sigma_i \mathbf{u}_i \otimes \mathbf{v}_i \otimes \mathbf{w}_i. \tag{2}$$

V. de Silva, L.-H. Lim, Tensor rank and the ill-posedness of the best low-rank approximation problem, SIAM J. Matrix Anal. Appl.

(1) is used in applications of ICA to signal processing; (2) is used in applications of the PARAFAC model to analytical chemistry.

14 Eigenvalue and singular value decompositions

Rank-revealing decompositions associated with the multilinear rank.

Symmetric eigenvalue decomposition of $\mathcal{A} \in \mathsf{S}^3(\mathbb{R}^n)$:
$$\mathcal{A} = (U, U, U) \cdot \mathcal{C}, \tag{3}$$
where $\operatorname{rank}_{\boxplus}(\mathcal{A}) = (r, r, r)$, $U \in \mathbb{R}^{n \times r}$ has orthonormal columns, and $\mathcal{C} \in \mathsf{S}^3(\mathbb{R}^r)$.

Singular value decomposition of $\mathcal{A} \in \mathbb{R}^{l \times m \times n}$:
$$\mathcal{A} = (U, V, W) \cdot \mathcal{C}, \tag{4}$$
where $\operatorname{rank}_{\boxplus}(\mathcal{A}) = (r_1, r_2, r_3)$; $U \in \mathbb{R}^{l \times r_1}$, $V \in \mathbb{R}^{m \times r_2}$, $W \in \mathbb{R}^{n \times r_3}$ have orthonormal columns, and $\mathcal{C} \in \mathbb{R}^{r_1 \times r_2 \times r_3}$.

L. De Lathauwer, B. De Moor, J. Vandewalle, A multilinear singular value decomposition, SIAM J. Matrix Anal. Appl., 21 (2000), no. 4.
B. Savas, L.-H. Lim, Best multilinear rank approximation with quasi-Newton method on Grassmannians, preprint.
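As a hedged sketch (mine, not the authors' code), the multilinear SVD (4) can be computed from ordinary SVDs of the three flattenings; truncating the factor columns gives a good but generally non-optimal low multilinear rank approximation, which is why Grassmannian refinement methods such as Savas-Lim exist.

```python
import numpy as np

def hosvd(A):
    """Higher-order SVD (De Lathauwer-De Moor-Vandewalle), minimal sketch.

    Returns orthonormal factors U, V, W and core C with A = (U, V, W) . C.
    """
    l, m, n = A.shape
    U, _, _ = np.linalg.svd(A.reshape(l, m * n), full_matrices=False)
    V, _, _ = np.linalg.svd(A.transpose(1, 0, 2).reshape(m, l * n), full_matrices=False)
    W, _, _ = np.linalg.svd(A.transpose(2, 0, 1).reshape(n, l * m), full_matrices=False)
    # core: C = (U^T, V^T, W^T) . A
    C = np.einsum('ia,jb,kc,ijk->abc', U, V, W, A)
    return U, V, W, C

A = np.random.randn(3, 4, 5)
U, V, W, C = hosvd(A)
# reconstruct: A = (U, V, W) . C
assert np.allclose(np.einsum('ia,jb,kc,abc->ijk', U, V, W, C), A)
```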

15 Segre variety and its secant varieties

The set of all rank-1 hypermatrices is known as the Segre variety in algebraic geometry. It is a closed set (in both the Euclidean and Zariski senses), as it can be described algebraically:
$$\operatorname{Seg}(\mathbb{R}^l, \mathbb{R}^m, \mathbb{R}^n) = \{\mathcal{A} \in \mathbb{R}^{l \times m \times n} : \mathcal{A} = \mathbf{u} \otimes \mathbf{v} \otimes \mathbf{w}\} = \{\mathcal{A} \in \mathbb{R}^{l \times m \times n} : a_{i_1 i_2 i_3} a_{j_1 j_2 j_3} = a_{k_1 k_2 k_3} a_{l_1 l_2 l_3},\ \{i_\alpha, j_\alpha\} = \{k_\alpha, l_\alpha\}\}.$$

Hypermatrices of rank > 1 are elements of the higher secant varieties of $S = \operatorname{Seg}(\mathbb{R}^l, \mathbb{R}^m, \mathbb{R}^n)$. E.g. a hypermatrix has rank 2 if it sits on a secant line through two points of $S$ but not on $S$, rank 3 if it sits on a secant plane through three points of $S$ but not on any secant line, etc.

16 Decomposition approach to data analysis

More generally, $F = \mathbb{C}, \mathbb{R}, \mathbb{R}_+, \mathbb{R}_{\max}$ (max-plus algebra), $\mathbb{R}[x_1, \dots, x_n]$ (polynomial ring), etc. Let $D \subseteq F^N$ be a dictionary, not contained in any hyperplane. Let $D_2$ = union of bisecants to $D$, $D_3$ = union of trisecants to $D$, ..., $D_r$ = union of $r$-secants to $D$. Define the $D$-rank of $\mathcal{A} \in F^N$ to be $\min\{r : \mathcal{A} \in D_r\}$.

If $\varphi \colon F^N \times F^N \to \mathbb{R}$ is some measure of nearness between pairs of points (e.g. norms, Bregman divergences, etc.), we want to find a best low-rank approximation to $\mathcal{A}$:
$$\operatorname{argmin}\{\varphi(\mathcal{A}, \mathcal{B}) : D\text{-rank}(\mathcal{B}) \le r\}.$$

17 Decomposition approach to data analysis

In the presence of noise: approximation instead of decomposition,
$$\mathcal{A} \approx \alpha_1 \mathcal{B}_1 + \cdots + \alpha_r \mathcal{B}_r \in D_r.$$
The $\mathcal{B}_i \in D$ reveal features of the dataset $\mathcal{A}$. Note that another way to say "best low-rank" is "sparsest possible."

Examples
1 CANDECOMP/PARAFAC: $D = \{\mathcal{A} : \operatorname{rank}_{\otimes}(\mathcal{A}) \le 1\}$, $\varphi(\mathcal{A}, \mathcal{B}) = \|\mathcal{A} - \mathcal{B}\|_F$.
2 De Lathauwer model: $D = \{\mathcal{A} : \operatorname{rank}_{\boxplus}(\mathcal{A}) \le (r_1, r_2, r_3)\}$, $\varphi(\mathcal{A}, \mathcal{B}) = \|\mathcal{A} - \mathcal{B}\|_F$.

18 Fundamental problem of multiway data analysis

Given $\mathcal{A}$ a hypermatrix, symmetric hypermatrix, or nonnegative hypermatrix, solve
$$\operatorname{argmin}_{\operatorname{rank}(\mathcal{B}) \le r} \|\mathcal{A} - \mathcal{B}\|,$$
where rank may be the outer product rank, multilinear rank, symmetric rank (for a symmetric hypermatrix), or nonnegative rank (for a nonnegative hypermatrix).

Example
Given $\mathcal{A} \in \mathbb{R}^{d_1 \times d_2 \times d_3}$, find $\mathbf{u}_i, \mathbf{v}_i, \mathbf{w}_i$, $i = 1, \dots, r$, that minimize
$$\|\mathcal{A} - \mathbf{u}_1 \otimes \mathbf{v}_1 \otimes \mathbf{w}_1 - \mathbf{u}_2 \otimes \mathbf{v}_2 \otimes \mathbf{w}_2 - \cdots - \mathbf{u}_r \otimes \mathbf{v}_r \otimes \mathbf{w}_r\|,$$
or find $\mathcal{C} \in \mathbb{R}^{r_1 \times r_2 \times r_3}$ and $U \in \mathbb{R}^{d_1 \times r_1}$, $V \in \mathbb{R}^{d_2 \times r_2}$, $W \in \mathbb{R}^{d_3 \times r_3}$ that minimize
$$\|\mathcal{A} - (U, V, W) \cdot \mathcal{C}\|.$$

19 Fundamental problem of multiway data analysis

Example
Given $\mathcal{A} \in \mathsf{S}^k(\mathbb{C}^n)$, find $\mathbf{u}_i$, $i = 1, \dots, r$, that minimize
$$\|\mathcal{A} - \mathbf{u}_1^{\otimes k} - \mathbf{u}_2^{\otimes k} - \cdots - \mathbf{u}_r^{\otimes k}\|,$$
or find $\mathcal{C} \in \mathsf{S}^k(\mathbb{C}^r)$ and $U \in \mathbb{C}^{n \times r}$ that minimize
$$\|\mathcal{A} - (U, \dots, U) \cdot \mathcal{C}\|.$$

20 Separation of variables

Approximation by a sum or integral of separable functions.

Continuous:
$$f(x, y, z) = \int \theta(x, t)\, \varphi(y, t)\, \psi(z, t)\, dt.$$

Semi-discrete:
$$f(x, y, z) = \sum\nolimits_{p=1}^{r} \theta_p(x)\, \varphi_p(y)\, \psi_p(z),$$
where $\theta_p(x) = \theta(x, t_p)$, $\varphi_p(y) = \varphi(y, t_p)$, $\psi_p(z) = \psi(z, t_p)$; $r$ possibly $\infty$.

Discrete:
$$a_{ijk} = \sum\nolimits_{p=1}^{r} u_{ip} v_{jp} w_{kp},$$
where $a_{ijk} = f(x_i, y_j, z_k)$, $u_{ip} = \theta_p(x_i)$, $v_{jp} = \varphi_p(y_j)$, $w_{kp} = \psi_p(z_k)$.
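A small sketch of the discrete case (the separable functions are chosen arbitrarily by me): sampling a sum of $r$ separable terms on a grid produces a hypermatrix of outer product rank at most $r$.

```python
import numpy as np

r = 2
x, y, z = np.linspace(0, 1, 5), np.linspace(0, 1, 6), np.linspace(0, 1, 7)
U = np.stack([np.sin((p + 1) * x) for p in range(r)], axis=1)   # u_{ip} = theta_p(x_i)
V = np.stack([np.cos((p + 1) * y) for p in range(r)], axis=1)   # v_{jp} = phi_p(y_j)
W = np.stack([np.exp(-(p + 1) * z) for p in range(r)], axis=1)  # w_{kp} = psi_p(z_k)

# a_{ijk} = sum_p u_{ip} v_{jp} w_{kp}: outer product rank at most r
A = np.einsum('ip,jp,kp->ijk', U, V, W)
print(A.shape)  # (5, 6, 7)
```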

21 Separation of variables

Useful for data analysis, machine learning, pattern recognition.

Gaussians are separable: $\exp(x^2 + y^2 + z^2) = \exp(x^2)\exp(y^2)\exp(z^2)$. More generally, for symmetric positive-definite $A \in \mathbb{R}^{n \times n}$ with eigendecomposition $A = Q \Lambda Q^\top$ and $\mathbf{z} = Q^\top \mathbf{x}$,
$$\exp(\mathbf{x}^\top A \mathbf{x}) = \exp(\mathbf{z}^\top \Lambda \mathbf{z}) = \prod\nolimits_{i=1}^{n} \exp(\lambda_i z_i^2).$$

Gaussian mixture models: $f(\mathbf{x}) = \sum_{j=1}^{m} \alpha_j \exp[-(\mathbf{x} - \boldsymbol{\mu}_j)^\top A_j (\mathbf{x} - \boldsymbol{\mu}_j)]$, so $f$ is a sum of separable functions.

22 Integral kernels

Approximation by a sum or integral of kernels.

Continuous:
$$f(x, y, z) = \iiint K(x', y', z')\, \theta(x, x')\, \varphi(y, y')\, \psi(z, z')\, dx'\, dy'\, dz'.$$

Semi-discrete:
$$f(x, y, z) = \sum_{i'=1}^{p} \sum_{j'=1}^{q} \sum_{k'=1}^{r} c_{i'j'k'}\, \theta_{i'}(x)\, \varphi_{j'}(y)\, \psi_{k'}(z),$$
where $c_{i'j'k'} = K(x_{i'}, y_{j'}, z_{k'})$, $\theta_{i'}(x) = \theta(x, x_{i'})$, $\varphi_{j'}(y) = \varphi(y, y_{j'})$, $\psi_{k'}(z) = \psi(z, z_{k'})$; $p, q, r$ possibly $\infty$.

Discrete:
$$a_{ijk} = \sum_{i'=1}^{p} \sum_{j'=1}^{q} \sum_{k'=1}^{r} c_{i'j'k'}\, u_{ii'}\, v_{jj'}\, w_{kk'},$$
where $a_{ijk} = f(x_i, y_j, z_k)$, $u_{ii'} = \theta_{i'}(x_i)$, $v_{jj'} = \varphi_{j'}(y_j)$, $w_{kk'} = \psi_{k'}(z_k)$.

23 Lemma (de Silva, L)

Let $r \ge 2$ and $k \ge 3$. Given the norm topology on $\mathbb{R}^{d_1 \times \cdots \times d_k}$, the following statements are equivalent:
1 The set $S_r(d_1, \dots, d_k) := \{\mathcal{A} : \operatorname{rank}_{\otimes}(\mathcal{A}) \le r\}$ is not closed.
2 There exists a sequence $\mathcal{A}_n$, $\operatorname{rank}_{\otimes}(\mathcal{A}_n) \le r$, $n \in \mathbb{N}$, converging to $\mathcal{B}$ with $\operatorname{rank}_{\otimes}(\mathcal{B}) > r$.
3 There exists $\mathcal{B}$, $\operatorname{rank}_{\otimes}(\mathcal{B}) > r$, that may be approximated arbitrarily closely by hypermatrices of strictly lower rank, i.e. $\inf\{\|\mathcal{B} - \mathcal{A}\| : \operatorname{rank}_{\otimes}(\mathcal{A}) \le r\} = 0$.
4 There exists $\mathcal{C}$, $\operatorname{rank}_{\otimes}(\mathcal{C}) > r$, that does not have a best rank-$r$ approximation, i.e. $\inf\{\|\mathcal{C} - \mathcal{A}\| : \operatorname{rank}_{\otimes}(\mathcal{A}) \le r\}$ is not attained (by any $\mathcal{A}$ with $\operatorname{rank}_{\otimes}(\mathcal{A}) \le r$).

24 Non-existence of best low-rank approximation

For $\mathbf{x}_i, \mathbf{y}_i \in \mathbb{R}^{d_i}$, $i = 1, 2, 3$, let
$$\mathcal{A} := \mathbf{x}_1 \otimes \mathbf{x}_2 \otimes \mathbf{y}_3 + \mathbf{x}_1 \otimes \mathbf{y}_2 \otimes \mathbf{x}_3 + \mathbf{y}_1 \otimes \mathbf{x}_2 \otimes \mathbf{x}_3$$
and, for $n \in \mathbb{N}$,
$$\mathcal{A}_n := n\Bigl(\mathbf{x}_1 + \frac{1}{n}\mathbf{y}_1\Bigr) \otimes \Bigl(\mathbf{x}_2 + \frac{1}{n}\mathbf{y}_2\Bigr) \otimes \Bigl(\mathbf{x}_3 + \frac{1}{n}\mathbf{y}_3\Bigr) - n\,\mathbf{x}_1 \otimes \mathbf{x}_2 \otimes \mathbf{x}_3.$$

Lemma (de Silva, L)
$\operatorname{rank}_{\otimes}(\mathcal{A}) = 3$ iff $\mathbf{x}_i, \mathbf{y}_i$ are linearly independent, $i = 1, 2, 3$. Furthermore, it is clear that $\operatorname{rank}_{\otimes}(\mathcal{A}_n) \le 2$ and $\lim_{n \to \infty} \mathcal{A}_n = \mathcal{A}$.

The original result, in a different form, is due to:
D. Bini, G. Lotti, F. Romani, Approximate solutions for the bilinear form computational problem, SIAM J. Comput., 9 (1980), no. 4.
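A numeric sanity check (mine): each $\mathcal{A}_n$ is a difference of two rank-1 terms, hence has rank at most 2, yet $\|\mathcal{A}_n - \mathcal{A}\|_F \to 0$ while $\operatorname{rank}_{\otimes}(\mathcal{A}) = 3$ for generic $\mathbf{x}_i, \mathbf{y}_i$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = [rng.standard_normal(2) for _ in range(3)]
y = [rng.standard_normal(2) for _ in range(3)]

def outer(a, b, c):
    return np.einsum('i,j,k->ijk', a, b, c)

A = outer(x[0], x[1], y[2]) + outer(x[0], y[1], x[2]) + outer(y[0], x[1], x[2])

for n in [1, 10, 100, 1000]:
    An = n * outer(x[0] + y[0] / n, x[1] + y[1] / n, x[2] + y[2] / n) \
         - n * outer(x[0], x[1], x[2])
    print(n, np.linalg.norm(An - A))   # error shrinks like O(1/n)
```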

25 Outer product approximations are ill-behaved

Such phenomena can and will happen for all orders > 2, all norms, and many ranks:

Theorem (de Silva, L)
Let $k \ge 3$ and $d_1, \dots, d_k \ge 2$. For any $s$ such that $2 \le s \le \min\{d_1, \dots, d_k\}$, there exists $\mathcal{A} \in \mathbb{R}^{d_1 \times \cdots \times d_k}$ with $\operatorname{rank}_{\otimes}(\mathcal{A}) = s$ such that $\mathcal{A}$ has no best rank-$r$ approximation for some $r < s$. The result is independent of the choice of norm.

For matrices, the quantity $\min\{d_1, d_2\}$ is the maximal possible rank in $\mathbb{R}^{d_1 \times d_2}$. In general, a hypermatrix in $\mathbb{R}^{d_1 \times \cdots \times d_k}$ can have rank exceeding $\min\{d_1, \dots, d_k\}$.

26 Outer product approximations are ill-behaved

Tensor rank can jump over an arbitrarily large gap:

Theorem (de Silva, L)
Let $k \ge 3$. Given any $s \in \mathbb{N}$, there exists a sequence of order-$k$ hypermatrices $\mathcal{A}_n$ such that $\operatorname{rank}_{\otimes}(\mathcal{A}_n) \le r$ and $\lim_{n \to \infty} \mathcal{A}_n = \mathcal{A}$ with $\operatorname{rank}_{\otimes}(\mathcal{A}) = r + s$.

Hypermatrices that fail to have best low-rank approximations are not rare. This may occur with non-zero probability, sometimes with certainty:

Theorem (de Silva, L)
Let $\mu$ be a measure that is positive or infinite on Euclidean open sets in $\mathbb{R}^{l \times m \times n}$. There exists some $r \in \mathbb{N}$ such that
$$\mu(\{\mathcal{A} : \mathcal{A} \text{ does not have a best rank-}r\text{ approximation}\}) > 0.$$
In $\mathbb{R}^{2 \times 2 \times 2}$, all rank-3 hypermatrices fail to have a best rank-2 approximation.

27 Message

That the best rank-$r$ approximation problem for hypermatrices has no solution poses serious difficulties. It is incorrect to think that this does not matter if we just want an approximate solution: if there is no solution in the first place, then what is it that we are trying to approximate? In other words, what is the approximate solution an approximation of?

28 Weak solutions

For a hypermatrix $\mathcal{A}$ that has no best rank-$r$ approximation, we call a
$$\mathcal{C} \in \overline{\{\mathcal{B} : \operatorname{rank}_{\otimes}(\mathcal{B}) \le r\}}$$
attaining $\inf\{\|\mathcal{A} - \mathcal{B}\| : \operatorname{rank}_{\otimes}(\mathcal{B}) \le r\}$ a weak solution. In particular, we must have $\operatorname{rank}_{\otimes}(\mathcal{C}) > r$.

It is perhaps surprising that one may completely parameterize all limit points of order-3 rank-2 hypermatrices.

29 Weak solutions

Theorem (de Silva, L)
Let $d_1, d_2, d_3 \ge 2$. Let $\mathcal{A}_n \in \mathbb{R}^{d_1 \times d_2 \times d_3}$ be a sequence of hypermatrices with $\operatorname{rank}_{\otimes}(\mathcal{A}_n) \le 2$ and $\lim_{n \to \infty} \mathcal{A}_n = \mathcal{A}$, where the limit is taken in any norm topology. If the limiting hypermatrix $\mathcal{A}$ has rank higher than 2, then $\operatorname{rank}_{\otimes}(\mathcal{A})$ must be exactly 3, and there exist pairs of linearly independent vectors $\mathbf{x}_1, \mathbf{y}_1 \in \mathbb{R}^{d_1}$, $\mathbf{x}_2, \mathbf{y}_2 \in \mathbb{R}^{d_2}$, $\mathbf{x}_3, \mathbf{y}_3 \in \mathbb{R}^{d_3}$ such that
$$\mathcal{A} = \mathbf{x}_1 \otimes \mathbf{x}_2 \otimes \mathbf{y}_3 + \mathbf{x}_1 \otimes \mathbf{y}_2 \otimes \mathbf{x}_3 + \mathbf{y}_1 \otimes \mathbf{x}_2 \otimes \mathbf{x}_3.$$

In particular, a sequence of order-3 rank-2 hypermatrices cannot jump rank by more than 1.

30 Hyperdeterminant

Work in $\mathbb{C}^{(d_1+1) \times \cdots \times (d_k+1)}$ for the time being ($d_i \ge 1$). Consider
$$M := \{\mathcal{A} \in \mathbb{C}^{(d_1+1) \times \cdots \times (d_k+1)} : \nabla \mathcal{A}(\mathbf{x}_1, \dots, \mathbf{x}_k) = \mathbf{0} \text{ for some non-zero } \mathbf{x}_1, \dots, \mathbf{x}_k\}.$$

Theorem (Gelfand, Kapranov, Zelevinsky)
$M$ is a hypersurface iff for all $j = 1, \dots, k$,
$$d_j \le \sum\nolimits_{i \ne j} d_i.$$

The hyperdeterminant $\operatorname{Det}(\mathcal{A})$ is the equation of the hypersurface, i.e. a multivariate polynomial in the entries of $\mathcal{A}$ such that
$$M = \{\mathcal{A} \in \mathbb{C}^{(d_1+1) \times \cdots \times (d_k+1)} : \operatorname{Det}(\mathcal{A}) = 0\}.$$
$\operatorname{Det}(\mathcal{A})$ may be chosen to have integer coefficients.

For $\mathbb{C}^{m \times n}$, the condition becomes $m \le n$ and $n \le m$, i.e. square matrices.

31 $2 \times 2 \times 2$ hyperdeterminant

The hyperdeterminant of $\mathcal{A} = [a_{ijk}] \in \mathbb{R}^{2 \times 2 \times 2}$ [Cayley, 1845] is
$$\operatorname{Det}(\mathcal{A}) = \frac{1}{4}\left[ \det\left(\begin{bmatrix} a_{000} & a_{010} \\ a_{001} & a_{011} \end{bmatrix} + \begin{bmatrix} a_{100} & a_{110} \\ a_{101} & a_{111} \end{bmatrix}\right) - \det\left(\begin{bmatrix} a_{000} & a_{010} \\ a_{001} & a_{011} \end{bmatrix} - \begin{bmatrix} a_{100} & a_{110} \\ a_{101} & a_{111} \end{bmatrix}\right) \right]^2 - 4 \det\begin{bmatrix} a_{000} & a_{010} \\ a_{001} & a_{011} \end{bmatrix} \det\begin{bmatrix} a_{100} & a_{110} \\ a_{101} & a_{111} \end{bmatrix}.$$

A result that parallels the matrix case is the following: the system of bilinear equations
$$\begin{aligned}
a_{000} x_0 y_0 + a_{010} x_0 y_1 + a_{100} x_1 y_0 + a_{110} x_1 y_1 &= 0,\\
a_{001} x_0 y_0 + a_{011} x_0 y_1 + a_{101} x_1 y_0 + a_{111} x_1 y_1 &= 0,\\
a_{000} x_0 z_0 + a_{001} x_0 z_1 + a_{100} x_1 z_0 + a_{101} x_1 z_1 &= 0,\\
a_{010} x_0 z_0 + a_{011} x_0 z_1 + a_{110} x_1 z_0 + a_{111} x_1 z_1 &= 0,\\
a_{000} y_0 z_0 + a_{001} y_0 z_1 + a_{010} y_1 z_0 + a_{011} y_1 z_1 &= 0,\\
a_{100} y_0 z_0 + a_{101} y_0 z_1 + a_{110} y_1 z_0 + a_{111} y_1 z_1 &= 0,
\end{aligned}$$
has a non-trivial solution iff $\operatorname{Det}(\mathcal{A}) = 0$.

32 $2 \times 2 \times 3$ hyperdeterminant

The hyperdeterminant of $\mathcal{A} = [a_{ijk}] \in \mathbb{R}^{2 \times 2 \times 3}$ is
$$\operatorname{Det}(\mathcal{A}) = \det\begin{bmatrix} a_{000} & a_{001} & a_{002} \\ a_{100} & a_{101} & a_{102} \\ a_{010} & a_{011} & a_{012} \end{bmatrix} \det\begin{bmatrix} a_{100} & a_{101} & a_{102} \\ a_{010} & a_{011} & a_{012} \\ a_{110} & a_{111} & a_{112} \end{bmatrix} - \det\begin{bmatrix} a_{000} & a_{001} & a_{002} \\ a_{100} & a_{101} & a_{102} \\ a_{110} & a_{111} & a_{112} \end{bmatrix} \det\begin{bmatrix} a_{000} & a_{001} & a_{002} \\ a_{010} & a_{011} & a_{012} \\ a_{110} & a_{111} & a_{112} \end{bmatrix}.$$

Again, the following is true: the system
$$\begin{aligned}
a_{000} x_0 y_0 + a_{010} x_0 y_1 + a_{100} x_1 y_0 + a_{110} x_1 y_1 &= 0,\\
a_{001} x_0 y_0 + a_{011} x_0 y_1 + a_{101} x_1 y_0 + a_{111} x_1 y_1 &= 0,\\
a_{002} x_0 y_0 + a_{012} x_0 y_1 + a_{102} x_1 y_0 + a_{112} x_1 y_1 &= 0,\\
a_{000} x_0 z_0 + a_{001} x_0 z_1 + a_{002} x_0 z_2 + a_{100} x_1 z_0 + a_{101} x_1 z_1 + a_{102} x_1 z_2 &= 0,\\
a_{010} x_0 z_0 + a_{011} x_0 z_1 + a_{012} x_0 z_2 + a_{110} x_1 z_0 + a_{111} x_1 z_1 + a_{112} x_1 z_2 &= 0,\\
a_{000} y_0 z_0 + a_{001} y_0 z_1 + a_{002} y_0 z_2 + a_{010} y_1 z_0 + a_{011} y_1 z_1 + a_{012} y_1 z_2 &= 0,\\
a_{100} y_0 z_0 + a_{101} y_0 z_1 + a_{102} y_0 z_2 + a_{110} y_1 z_0 + a_{111} y_1 z_1 + a_{112} y_1 z_2 &= 0,
\end{aligned}$$
has a non-trivial solution iff $\operatorname{Det}(\mathcal{A}) = 0$.

33 Cayley hyperdeterminant and tensor rank

The Cayley hyperdeterminant $\operatorname{Det}_{2,2,2}$ may be extended to any $\mathcal{A} \in \mathbb{R}^{d_1 \times d_2 \times d_3}$ with $\operatorname{rank}_{\boxplus}(\mathcal{A}) \le (2, 2, 2)$.

Theorem (de Silva, L)
Let $d_1, d_2, d_3 \ge 2$. $\mathcal{A} \in \mathbb{R}^{d_1 \times d_2 \times d_3}$ is a weak solution, i.e.
$$\mathcal{A} = \mathbf{x}_1 \otimes \mathbf{x}_2 \otimes \mathbf{y}_3 + \mathbf{x}_1 \otimes \mathbf{y}_2 \otimes \mathbf{x}_3 + \mathbf{y}_1 \otimes \mathbf{x}_2 \otimes \mathbf{x}_3,$$
iff $\operatorname{Det}_{2,2,2}(\mathcal{A}) = 0$.

Theorem (Kruskal)
Let $\mathcal{A} \in \mathbb{R}^{2 \times 2 \times 2}$. Then $\operatorname{rank}_{\otimes}(\mathcal{A}) = 2$ if $\operatorname{Det}_{2,2,2}(\mathcal{A}) > 0$ and $\operatorname{rank}_{\otimes}(\mathcal{A}) = 3$ if $\operatorname{Det}_{2,2,2}(\mathcal{A}) < 0$.

See de Silva-L for a proof via the Cayley hyperdeterminant.
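A small numerical sketch (mine; the convention that the two $2 \times 2$ slices are taken along the first index is my assumption) implementing the slice formula of slide 31 and checking Kruskal's sign criterion:

```python
import numpy as np

def det222(A):
    """Cayley's 2x2x2 hyperdeterminant via the slice formula of slide 31."""
    A0, A1 = A[0], A[1]  # the two 2x2 slices [a_{0jk}] and [a_{1jk}]
    d = np.linalg.det
    return 0.25 * (d(A0 + A1) - d(A0 - A1))**2 - 4 * d(A0) * d(A1)

e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
outer = lambda a, b, c: np.einsum('i,j,k->ijk', a, b, c)

# rank 2: Det > 0
print(det222(outer(e0, e0, e0) + outer(e1, e1, e1)))             # 1.0
# weak solution (boundary of the rank <= 2 set): Det = 0
W = outer(e0, e0, e1) + outer(e0, e1, e0) + outer(e1, e0, e0)
print(det222(W))                                                  # 0.0
# Det < 0, hence rank 3 over R by Kruskal's theorem
C = np.array([[[1.0, 0.0], [0.0, 1.0]], [[0.0, -1.0], [1.0, 0.0]]])
print(det222(C))                                                  # -4.0
```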

34 Symmetric hypermatrices for blind source separation

Problem
Given $\mathbf{y} = M\mathbf{x} + \mathbf{n}$. Unknown: source vector $\mathbf{x} \in \mathbb{C}^n$, mixing matrix $M \in \mathbb{C}^{m \times n}$, noise $\mathbf{n} \in \mathbb{C}^m$. Known: observation vector $\mathbf{y} \in \mathbb{C}^m$. Goal: recover $\mathbf{x}$ from $\mathbf{y}$.

Assumptions:
1 the components of $\mathbf{x}$ are statistically independent,
2 $M$ has full column rank,
3 $\mathbf{n}$ is Gaussian.

Method: use cumulants,
$$\kappa_k(\mathbf{y}) = (M, M, \dots, M) \cdot \kappa_k(\mathbf{x}) + \kappa_k(\mathbf{n}).$$
By the assumptions, $\kappa_k(\mathbf{n}) = 0$ for $k \ge 3$ (Gaussian cumulants vanish beyond order 2) and $\kappa_k(\mathbf{x})$ is diagonal. So one needs to diagonalize the symmetric hypermatrix $\kappa_k(\mathbf{y})$.

35 Diagonalizing a symmetric hypermatrix

A best symmetric rank approximation may not exist either:

Example (Comon, Golub, L, Mourrain)
Let $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ be linearly independent. Define, for $n \in \mathbb{N}$,
$$\mathcal{A}_n := n\Bigl(\mathbf{x} + \frac{1}{n}\mathbf{y}\Bigr)^{\otimes k} - n\,\mathbf{x}^{\otimes k}$$
and
$$\mathcal{A} := \mathbf{y} \otimes \mathbf{x} \otimes \cdots \otimes \mathbf{x} + \mathbf{x} \otimes \mathbf{y} \otimes \mathbf{x} \otimes \cdots \otimes \mathbf{x} + \cdots + \mathbf{x} \otimes \cdots \otimes \mathbf{x} \otimes \mathbf{y},$$
i.e. $\mathbf{y}$ appears in exactly one slot of each term. Then $\operatorname{rank}_{\mathsf{S}}(\mathcal{A}_n) \le 2$, $\operatorname{rank}_{\mathsf{S}}(\mathcal{A}) = k$, and $\lim_{n \to \infty} \mathcal{A}_n = \mathcal{A}$.

36 Nonnegative hypermatrices and nonnegative tensor rank

Let $0 \le \mathcal{A} \in \mathbb{R}^{d_1 \times \cdots \times d_k}$. The nonnegative rank of $\mathcal{A}$ is
$$\operatorname{rank}_{+}(\mathcal{A}) := \min\Bigl\{r : \mathcal{A} = \sum\nolimits_{i=1}^{r} \mathbf{u}_i \otimes \mathbf{v}_i \otimes \cdots \otimes \mathbf{z}_i,\ \mathbf{u}_i, \dots, \mathbf{z}_i \ge 0\Bigr\}.$$
Clearly a nonnegative decomposition exists for any $\mathcal{A} \ge 0$. Arises in the naïve Bayes model.

Theorem (L)
Let $\mathcal{A} = [a_{j_1 \cdots j_k}] \in \mathbb{R}^{d_1 \times \cdots \times d_k}$ be nonnegative. Then
$$\inf\Bigl\{\Bigl\|\mathcal{A} - \sum\nolimits_{i=1}^{r} \mathbf{u}_i \otimes \mathbf{v}_i \otimes \cdots \otimes \mathbf{z}_i\Bigr\| : \mathbf{u}_i, \dots, \mathbf{z}_i \ge 0\Bigr\}$$
is always attained.

37 Advertisement

Geometry and representation theory of tensors for computer science, statistics, and other areas.

1 MSRI Summer Graduate Workshop, July 7 to July 18, 2008. Organized by J.M. Landsberg, L.-H. Lim, J. Morton. Mathematical Sciences Research Institute, Berkeley, CA.

2 AIM Workshop, July 21 to July 25, 2008. Organized by J.M. Landsberg, L.-H. Lim, J. Morton, J. Weyman. American Institute of Mathematics, Palo Alto, CA.
