Tensor Low-Rank Completion and Invariance of the Tucker Core


Slide 1: Tensor Low-Rank Completion and Invariance of the Tucker Core

Shuzhong Zhang, Department of Industrial & Systems Engineering, University of Minnesota.
Joint work with Bo Jiang, Shiqian Ma, and Fan Yang.
IMA Annual Program Year Workshop on Optimization and Parsimonious Modeling, January 28, 2016.

Slide 2: Part I. Matricization and Tensor Completion

Slide 5: Tensor CP-Rank

Consider a $d$-th order tensor $\mathcal{F} \in \mathbb{C}^{n_1 \times n_2 \times \cdots \times n_d}$. Its CP-rank, denoted $\mathrm{rank}_{CP}(\mathcal{F})$, is the smallest integer $r$ admitting a decomposition
$$\mathcal{F} = \sum_{i=1}^{r} a^{1,i} \otimes a^{2,i} \otimes \cdots \otimes a^{d,i},$$
where $a^{k,i} \in \mathbb{C}^{n_k}$ for $k = 1, \ldots, d$ and $i = 1, \ldots, r$.

If $\mathcal{F}$ is real, its real CP-rank $\mathrm{rank}_{CP}^{\mathbb{R}}(\mathcal{F})$ is the smallest $r$ in such a decomposition with all vectors real.

It is well known that $\mathrm{rank}_{CP}(\mathcal{F}) < \mathrm{rank}_{CP}^{\mathbb{R}}(\mathcal{F})$ can occur in general when $d \geq 3$.
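
Although computing the CP-rank is hard, assembling a tensor from a given CP decomposition is straightforward. A minimal numpy sketch (the helper name `cp_tensor` is ours, for illustration only):

```python
import numpy as np

def cp_tensor(factors):
    """Sum of rank-1 terms: factors[k] has shape (n_k, r); the i-th term is
    the outer product of the i-th columns of all factor matrices."""
    r = factors[0].shape[1]
    F = np.zeros(tuple(f.shape[0] for f in factors))
    for i in range(r):
        term = factors[0][:, i]
        for f in factors[1:]:
            term = np.multiply.outer(term, f[:, i])  # build the rank-1 term
        F = F + term
    return F

# a 3rd-order real tensor of CP-rank at most 2
rng = np.random.default_rng(0)
F = cp_tensor([rng.standard_normal((n, 2)) for n in (3, 4, 5)])
print(F.shape)  # (3, 4, 5)
```

Note this only constructs a tensor of CP-rank at most $r$; deciding the exact CP-rank is hard in general, which is the point of the next slides.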

Slide 7: Low-CP-Rank Tensor Recovery

Tensor completion:
$$\min \ \mathrm{rank}_{CP}(\mathcal{X}) \quad \text{s.t.} \quad \mathcal{X} \in \mathbb{C}^{n_1 \times \cdots \times n_d}, \ L(\mathcal{X}) = b.$$

Robust tensor recovery:
$$\min \ \mathrm{rank}_{CP}(\mathcal{Y}) + \lambda \|\mathcal{Z}\|_0 \quad \text{s.t.} \quad \mathcal{Y}, \mathcal{Z} \in \mathbb{C}^{n_1 \times \cdots \times n_d}, \ \mathcal{Y} + \mathcal{Z} = \mathcal{F}.$$

Slide 8: Hardness of the CP-Rank Computation

Consider the following $9 \times 9 \times 9$ tensor (Kruskal, 1989), whose nonzero components, listed as (index): value, are

(1,1,1): 1  (4,2,1): 1  (7,3,1): 1  (1,4,2): 1  (4,5,2): 1  (7,6,2): 1
(1,7,3): 1  (4,8,3): 1  (7,9,3): 1  (2,1,4): 1  (5,2,4): 1  (8,3,4): 1
(2,4,5): 1  (5,5,5): 1  (8,6,5): 1  (2,7,6): 1  (5,8,6): 1  (8,9,6): 1
(3,1,7): 1  (6,2,7): 1  (9,3,7): 1  (3,4,8): 1  (6,5,8): 1  (9,6,8): 1
(3,7,9): 1  (6,8,9): 1  (9,9,9): 1

and all other components are zero. The CP-rank of this tensor is only known to be between 19 and 23.
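
Since the slide lists every nonzero entry, the tensor itself is easy to write down; a short numpy sketch that materializes it:

```python
import numpy as np

# the nonzero entries from the slide, as 1-based (i, j, k) triples, all with value 1
entries = [(1,1,1), (4,2,1), (7,3,1), (1,4,2), (4,5,2), (7,6,2),
           (1,7,3), (4,8,3), (7,9,3), (2,1,4), (5,2,4), (8,3,4),
           (2,4,5), (5,5,5), (8,6,5), (2,7,6), (5,8,6), (8,9,6),
           (3,1,7), (6,2,7), (9,3,7), (3,4,8), (6,5,8), (9,6,8),
           (3,7,9), (6,8,9), (9,9,9)]

T = np.zeros((9, 9, 9))
for i, j, k in entries:
    T[i - 1, j - 1, k - 1] = 1.0   # shift to 0-based indexing
print(int(T.sum()))  # 27 nonzero entries
```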

Slide 9: Matricized Ranks

For $\mathcal{F} \in \mathbb{C}^{n_1 \times n_2 \times \cdots \times n_d}$, its mode-$j$ matricization $F_{(j)}$ is obtained by taking the $j$-th index of $\mathcal{F}$ as the column index of $F_{(j)}$ and merging the remaining indices into the row index.

The Tucker rank of $\mathcal{F}$ is the vector $(\mathrm{rank}(F_{(1)}), \ldots, \mathrm{rank}(F_{(d)}))$.

The averaged Tucker rank is
$$\mathrm{rank}_n(\mathcal{F}) := \frac{1}{d} \sum_{j=1}^{d} \mathrm{rank}(F_{(j)}).$$
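
A sketch of the mode-$j$ matricization under the slide's convention (mode $j$ as the column index), together with the resulting Tucker and averaged Tucker ranks; `mode_unfold`, `tucker_rank`, and `rank_n` are our illustrative names:

```python
import numpy as np

def mode_unfold(F, j):
    """Mode-j matricization: the j-th index becomes the column index,
    all remaining indices are merged into the row index."""
    return np.moveaxis(F, j, -1).reshape(-1, F.shape[j])

def tucker_rank(F, tol=1e-10):
    return tuple(np.linalg.matrix_rank(mode_unfold(F, j), tol=tol)
                 for j in range(F.ndim))

def rank_n(F):
    return sum(tucker_rank(F)) / F.ndim   # averaged Tucker rank
```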

Slide 13: The Low-n-Rank Recovery

Liu, Musialski, Wonka, Ye, 2009; Gandy, Recht, Yamada, 2011: replace
$$\min \ \mathrm{rank}_{CP}(\mathcal{X}) \quad \text{s.t.} \quad \mathcal{X} \in \mathbb{C}^{n_1 \times \cdots \times n_d}, \ L(\mathcal{X}) = b$$
by
$$\min \ \mathrm{rank}_n(\mathcal{X}) \quad \text{s.t.} \quad \mathcal{X} \in \mathbb{C}^{n_1 \times \cdots \times n_d}, \ L(\mathcal{X}) = b,$$
and solve its convexified version
$$\min \ \frac{1}{d} \sum_{j=1}^{d} \|X_{(j)}\|_* \quad \text{s.t.} \quad \mathcal{X} \in \mathbb{C}^{n_1 \times \cdots \times n_d}, \ L(\mathcal{X}) = b.$$

We have $\mathrm{rank}_n(\mathcal{X}) \leq \mathrm{rank}_{CP}(\mathcal{X})$, but no reasonable upper bound on the CP-rank in terms of $\mathrm{rank}_n$ is known. The method works well when $\mathrm{rank}_{CP}(\mathcal{X})$ is relatively small.
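
The convexified objective is directly computable; a sketch evaluating the averaged nuclear norm of the unfoldings (evaluation only, not the constrained solver):

```python
import numpy as np

def avg_nuclear_norm(X):
    """(1/d) * sum_j ||X_(j)||_*: the convex surrogate of the averaged Tucker rank."""
    d = X.ndim
    norms = []
    for j in range(d):
        Xj = np.moveaxis(X, j, -1).reshape(-1, X.shape[j])  # mode-j unfolding
        norms.append(np.linalg.norm(Xj, ord='nuc'))         # sum of singular values
    return sum(norms) / d
```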

Slide 16: Fair and Square

We may also split the indices into two equal groups and unfold the tensor into a matrix. Let us call this the square unfolding (Mu, Huang, Wright, Goldfarb, 2013).

Tomioka, Suzuki, Hayashi, 2011: $O(r\, n^{d-1})$ observations suffice for the low-n-rank model to complete the tensor.

Mu, Huang, Wright, Goldfarb, 2013: $O(r^{\lfloor d/2 \rfloor}\, n^{\lceil d/2 \rceil})$ observations suffice for the square unfolding model to complete the tensor.
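
A sketch of the square unfolding for an even-order tensor (for odd orders one would split the modes as evenly as possible; even order is assumed here):

```python
import numpy as np

def square_unfold(F):
    """Group the first half of the modes as rows and the second half as
    columns; for a 4th-order n x n x n x n tensor this gives an n^2 x n^2 matrix."""
    d = F.ndim // 2
    return F.reshape(int(np.prod(F.shape[:d])), -1)
```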

Slide 17: The M-Rank

Slide 19: The M-Rank

Definition. For an even-order tensor $\mathcal{F} \in \mathbb{C}^{n_1 \times n_2 \times \cdots \times n_{2d}}$, an M-decomposition finds tensors $\mathcal{A}_i \in \mathbb{C}^{n_1 \times \cdots \times n_d}$ and $\mathcal{B}_i \in \mathbb{C}^{n_{d+1} \times \cdots \times n_{2d}}$, $i = 1, \ldots, r$, such that
$$\mathcal{F} = \sum_{i=1}^{r} \mathcal{A}_i \otimes \mathcal{B}_i.$$

Definition. Given an even-order tensor $\mathcal{F} \in \mathbb{C}^{n_1 \times n_2 \times \cdots \times n_{2d}}$, its $M^-$-rank is the smallest rank over all square unfolding matrices, i.e.,
$$\mathrm{rank}_{M^-}(\mathcal{F}) = \min_{\pi \in \Pi(1, \ldots, 2d)} \mathrm{rank}(M(\mathcal{F}_\pi)),$$
and its $M^+$-rank is the largest rank over all square unfolding matrices:
$$\mathrm{rank}_{M^+}(\mathcal{F}) = \max_{\pi \in \Pi(1, \ldots, 2d)} \mathrm{rank}(M(\mathcal{F}_\pi)).$$
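
For small tensors both quantities can be computed by brute force, taking the matrix rank of the square unfolding under every permutation of the modes; a sketch (exponential in the order, so only for toy sizes):

```python
import numpy as np
from itertools import permutations

def m_ranks(F, tol=1e-10):
    """Return (rank_{M^-}, rank_{M^+}) of an even-order tensor by enumerating
    all mode permutations and ranking the resulting square unfoldings."""
    d = F.ndim // 2
    ranks = []
    for perm in permutations(range(F.ndim)):
        G = np.transpose(F, perm)
        M = G.reshape(int(np.prod(G.shape[:d])), -1)   # square unfolding
        ranks.append(np.linalg.matrix_rank(M, tol=tol))
    return min(ranks), max(ranks)
```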

Slide 20: The Symmetric CP-Rank

Slide 24: The Symmetric CP-Rank

Definition. Given a $d$-th order $n$-dimensional super-symmetric complex-valued tensor $\mathcal{F}$, its symmetric CP-rank, denoted $\mathrm{rank}_{SCP}(\mathcal{F})$, is the smallest integer $r$ such that
$$\mathcal{F} = \sum_{i=1}^{r} \underbrace{a_i \otimes a_i \otimes \cdots \otimes a_i}_{d}$$
with $a_i \in \mathbb{C}^n$, $i = 1, \ldots, r$.

Obviously, $\mathrm{rank}_{CP}(\mathcal{F}) \leq \mathrm{rank}_{SCP}(\mathcal{F})$.

Open problem: does $\mathrm{rank}_{CP}(\mathcal{F}) = \mathrm{rank}_{SCP}(\mathcal{F})$ always hold?

Zhang, Huang, Qi, 2014: $\mathrm{rank}_{CP}(\mathcal{F}) = \mathrm{rank}_{SCP}(\mathcal{F})$ whenever $\mathrm{rank}_{CP}(\mathcal{F}) \leq d$.

Slide 25: The Symmetric M-Rank

A motivating identity:
$$a \otimes b + b \otimes a = \frac{1}{2}(a+b) \otimes (a+b) - \frac{1}{2}(a-b) \otimes (a-b).$$

Definition. For an even-order super-symmetric tensor $\mathcal{F} \in \mathbf{S}^{n^{2d}}$, its symmetric M-rank $\mathrm{rank}_{SM}(\mathcal{F})$ is the smallest $r$ such that
$$\mathcal{F} = \sum_{i=1}^{r} \mathcal{B}_i \otimes \mathcal{B}_i, \quad \mathcal{B}_i \in \mathbb{C}^{n^d}, \ i = 1, \ldots, r.$$
The strongly symmetric M-rank $\mathrm{rank}_{SSM}(\mathcal{F})$ is the smallest integer $r$ such that
$$\mathcal{F} = \sum_{i=1}^{r} \mathcal{A}_i \otimes \mathcal{A}_i, \quad \mathcal{A}_i \in \mathbf{S}^{n^d}, \ i = 1, \ldots, r.$$
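
The identity behind these definitions is easy to check numerically; for vectors it reads $ab^\top + ba^\top = \frac{1}{2}(a+b)(a+b)^\top - \frac{1}{2}(a-b)(a-b)^\top$:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.standard_normal(4), rng.standard_normal(4)

lhs = np.outer(a, b) + np.outer(b, a)
rhs = 0.5 * np.outer(a + b, a + b) - 0.5 * np.outer(a - b, a - b)
assert np.allclose(lhs, rhs)  # a sum of non-symmetric rank-1 terms, symmetrized
```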

Slide 28

Theorem (Jiang, Ma, and Z., 2015). For an even-order super-symmetric tensor $\mathcal{F} \in \mathbf{S}^{n^{2d}}$, we have
$$\mathrm{rank}_M(\mathcal{F}) = \mathrm{rank}_{SM}(\mathcal{F}) = \mathrm{rank}_{SSM}(\mathcal{F}).$$

Theorem (Jiang, Ma, and Z., 2015). Suppose $\mathcal{F} \in \mathbf{S}^{n^{2d}}$. Then
$$\mathrm{rank}_M(\mathcal{F}) = 1 \iff \mathrm{rank}_{SCP}(\mathcal{F}) = 1.$$

Theorem (Jiang, Ma, and Z., 2015). For any given $\mathcal{F} \in \mathbf{S}^{n^4}$, it holds that
$$\mathrm{rank}_M(\mathcal{F}) \leq \mathrm{rank}_{SCP}(\mathcal{F}) \leq (n + 4n^2)\, \mathrm{rank}_M(\mathcal{F}).$$

Slide 30: Non-Symmetric Tensors

Theorem (Jiang, Ma, and Z., 2015). Suppose $\mathcal{F} \in \mathbb{C}^{n_1 \times n_2 \times n_3 \times n_4}$ with $n_1 \leq n_2 \leq n_3 \leq n_4$. Then for any permutation $\pi$ of $(1, 2, 3, 4)$ it holds that
$$\mathrm{rank}(M(\mathcal{F}_\pi)) \leq \mathrm{rank}_{CP}(\mathcal{F}_\pi) \leq n_1 n_3 \, \mathrm{rank}(M(\mathcal{F}_\pi)).$$
Moreover, the inequalities above can be sharpened to
$$\mathrm{rank}_{M^+}(\mathcal{F}) \leq \mathrm{rank}_{CP}(\mathcal{F}) \leq n_1 n_3 \, \mathrm{rank}_{M^-}(\mathcal{F}).$$

As a consequence,
$$\mathrm{rank}_{CP}(\mathcal{F}_\pi) \leq \mathrm{rank}_{CP}^{\mathbb{R}}(\mathcal{F}_\pi) \leq n_1 n_3 \, \mathrm{rank}^{\mathbb{R}}(M(\mathcal{F}_\pi)) = n_1 n_3 \, \mathrm{rank}(M(\mathcal{F}_\pi)) \leq n_1 n_3 \, \mathrm{rank}_{CP}(\mathcal{F}_\pi).$$

Slide 31: Tightness of the Bounds

Consider a fourth-order tensor $\mathcal{F} \in \mathbb{C}^{n_1 \times n_2 \times n_3 \times n_4}$ such that $\mathcal{F} = A \otimes B$ for some matrices $A \in \mathbb{C}^{n_1 \times n_2}$ and $B \in \mathbb{C}^{n_3 \times n_4}$. Denote $r_1 = \mathrm{rank}(A)$, $r_2 = \mathrm{rank}(B)$. Then:

(i) the Tucker rank of $\mathcal{F}$ is $(r_1, r_1, r_2, r_2)$;
(ii) $\mathrm{rank}_{M^+}(\mathcal{F}) = r_1 r_2$ and $\mathrm{rank}_{M^-}(\mathcal{F}) = 1$;
(iii) $\mathrm{rank}_{CP}(\mathcal{F}) = r_1 r_2$.
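
These claims are easy to confirm numerically for random factors; a sketch verifying (ii) and the value $r_1 r_2$ (generic random matrices attain the exact ranks with probability one):

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, n3, n4, r1, r2 = 4, 5, 4, 5, 2, 3
A = rng.standard_normal((n1, r1)) @ rng.standard_normal((r1, n2))  # rank r1
B = rng.standard_normal((n3, r2)) @ rng.standard_normal((r2, n4))  # rank r2
F = np.multiply.outer(A, B)   # F[i,j,k,l] = A[i,j] * B[k,l]

# grouping modes (1,2) vs (3,4) gives a rank-1 unfolding (the M^- side) ...
M_minus = F.reshape(n1 * n2, n3 * n4)
# ... while grouping (1,3) vs (2,4) gives rank r1*r2 (the M^+ side)
M_plus = np.transpose(F, (0, 2, 1, 3)).reshape(n1 * n3, n2 * n4)
print(np.linalg.matrix_rank(M_minus), np.linalg.matrix_rank(M_plus))  # 1, r1*r2
```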

Slide 32: Higher Dimensions

In general, for an even-order tensor $\mathcal{F} \in \mathbb{C}^{n_1 \times n_2 \times \cdots \times n_{2d}}$ with $\mathcal{F} = A_1 \otimes A_2 \otimes \cdots \otimes A_d$ for some matrices $A_i \in \mathbb{C}^{n_{2i-1} \times n_{2i}}$, denoting $r_i = \mathrm{rank}(A_i)$ for $i = 1, \ldots, d$, we have
$$\mathrm{rank}_{CP}(\mathcal{F}) = \mathrm{rank}_{M^+}(\mathcal{F}) = r_1 r_2 \cdots r_d.$$

Slide 33: Part II. Numerical Experiments

Slide 35: How does the M-Rank relate to the CP-Rank?

We generate random tensors
$$\mathcal{T} = \sum_{i=1}^{r} a_i \otimes b_i \otimes c_i \otimes d_i \in \mathbb{C}^{n_1 \times n_2 \times n_3 \times n_4}.$$
For each size, we generate 20 instances and compute the average ranks (the tensor dimensions and the $M^+$/$M^-$ columns did not survive the transcription):

r = 18: CP-rank 18, Tucker rank (15, 15, 15, 15)
r = 18: CP-rank 18, Tucker rank (15, 15, 18, 18)
r = 30: CP-rank 30, Tucker rank (20, 20, 20, 20)
r = 30: CP-rank 30, Tucker rank (20, 20, 25, 25)
r = 40: CP-rank 40, Tucker rank (25, 25, 30, 30)
r = 40: CP-rank 40, Tucker rank (30, 30, 30, 30)

Slide 37: Different Type of Random Tensors

Next, we randomly generate tensors of the form
$$\mathcal{T} = \sum_{i=1}^{r} \mathcal{A}_i \otimes \mathcal{B}_i, \quad \mathrm{rank}(\mathcal{A}_i) = \mathrm{rank}(\mathcal{B}_i) = k, \ i = 1, \ldots, r,$$
and we do the same tests (averages over 20 instances; the tensor dimensions of the three groups did not survive the transcription):

First group:
r = k = 2: CP-rank 8, Tucker rank (4, 4, 4, 4), $M^+$-rank 8, $M^-$-rank 2
r = k = 3: CP-rank 27, Tucker rank (9, 9, 9, 9), $M^+$-rank 27, $M^-$-rank 3
r = k = 4: CP-rank 64, Tucker rank (16, 16, 16, 16), $M^+$-rank 64, $M^-$-rank 4

Second group:
r = k = 3: CP-rank 27, Tucker rank (9, 9, 9, 9), $M^+$-rank 27, $M^-$-rank 3
r = k = 4: CP-rank 64, Tucker rank (15, 15, 16, 16), $M^+$-rank 64, $M^-$-rank 4
r = k = 5: CP-rank 125, Tucker rank (20, 20, 20, 20), M-ranks lost

Third group:
r = k = 3: CP-rank 27, Tucker rank (9, 9, 9, 9), $M^+$-rank 27, $M^-$-rank 3
r = k = 4: CP-rank 64, Tucker rank (16, 16, 16, 16), $M^+$-rank 64, $M^-$-rank 4
r = k = 5: CP-rank 125, Tucker rank (20, 20, 25, 25), M-ranks lost

In every recoverable case, the $M^+$-rank matches the CP-rank $rk^2$ while the $M^-$-rank equals $r$.

Slide 39: Low CP-Rank Recovery

Recall the low-CP-rank recovery problem
$$\min \ \mathrm{rank}_{CP}(\mathcal{X}) \quad \text{s.t.} \quad L(\mathcal{X}) = b.$$

The alternative convexifications:
$$\min \ \frac{1}{d} \sum_{j=1}^{d} \|X_{(j)}\|_* \quad \text{s.t.} \quad L(\mathcal{X}) = b \qquad \text{(low-n-rank completion)}$$
$$\min \ \|M(\mathcal{X})\|_* \quad \text{s.t.} \quad L(\mathcal{X}) = b \qquad \text{(low-M-rank completion)}$$
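
For the sampled-entries case, a minimal soft-impute-style sketch of the low-M-rank completion model: alternate singular value thresholding on the square unfolding with re-imposing the observed entries. This is an illustrative proximal iteration under our own parameter choices, not the solver used for the experiments in the talk:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def low_m_rank_complete(X_obs, mask, tau=1.0, iters=500):
    """X_obs: 4th-order tensor (values off the mask are ignored);
    mask: boolean array of the same shape marking observed entries."""
    n = X_obs.shape
    X = np.where(mask, X_obs, 0.0)
    for _ in range(iters):
        M = X.reshape(n[0] * n[1], n[2] * n[3])  # square unfolding
        X = svt(M, tau).reshape(n)               # shrink singular values
        X[mask] = X_obs[mask]                    # keep observed entries fixed
    return X
```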

Slide 40

Setup: $\mathcal{X}_0 = \sum_{i=1}^{r} a_i \otimes b_i \otimes c_i \otimes d_i \in \mathbb{C}^{n_1 \times n_2 \times n_3 \times n_4}$; $\mathrm{RelErr} := \|\mathcal{X} - \mathcal{X}_0\|_F / \|\mathcal{X}_0\|_F$. Results by sampling ratio (the tensor dimensions, the recovered M-ranks, the low-M-rank error exponents, and two of the three sampling ratios per group did not survive the transcription):

r = 4, CP-rank 4:
70%: low-n RelErr 7.26e-006, Tucker rank (4, 4, 4, 4); low-M RelErr 3.49e-…
…%: low-n RelErr 1.45e-005, Tucker rank (4, 4, 4, 4); low-M RelErr 6.76e-…
…%: low-n RelErr 1.96e-005, Tucker rank (4, 4, 4, 4); low-M RelErr 2.37e-…

r = 8, CP-rank 8:
70%: low-n RelErr 1.67e-005, Tucker rank (8, 8, 8, 8); low-M RelErr 6.60e-…
…%: low-n RelErr 1.34e-001, Tucker rank (20, 20, 20, 20); low-M RelErr 1.59e-…
…%: low-n RelErr 5.87e-001, Tucker rank (20, 20, 20, 20); low-M RelErr 4.24e-…

r = 12, CP-rank 12:
70%: low-n RelErr 1.45e-001, Tucker rank (20, 20, 20, 20); low-M RelErr 8.58e-…
…%: low-n RelErr 4.20e-001, Tucker rank (20, 20, 20, 20); low-M RelErr 3.46e-…
…%: low-n RelErr 6.99e-001, Tucker rank (20, 20, 20, 20); low-M RelErr 6.75e-…

Slide 41

Setup: $\mathcal{X}_0 = \sum_{i=1}^{r} \mathcal{A}_i \otimes \mathcal{B}_i$; $\mathrm{RelErr} := \|\mathcal{X} - \mathcal{X}_0\|_F / \|\mathcal{X}_0\|_F$; sampling ratio = 30%.

[Table: for r = 12, 15, 24, 28, 35, 45 across various dimensions, the slide reports $\mathrm{rank}_{CP}(\mathcal{X}_0)$, RelErr, CPU time, and $\mathrm{rank}_{M^+}(\mathcal{X}^*)$, $\mathrm{rank}_{M^-}(\mathcal{X}^*)$ of the recovered tensor; the numeric entries did not survive the transcription.]

Slide 43: Super-Symmetric Tensor Recovery

Entries removed = 60% (sampling ratio = 40%).

[Table: for r ∈ {8, 12}, {8, 20}, {15, 25}, {15, 30} across four dimension settings, the slide reports $\mathrm{rank}_{SCP}(\mathcal{X}_0)$, RelErr, CPU time, and $\mathrm{rank}_M(\mathcal{X}^*)$; the numeric entries did not survive the transcription.]

Slide 45: Robust Tensor Recovery

$$\min \ \mathrm{rank}_{CP}(\mathcal{Y}) + \lambda \|\mathcal{Z}\|_0 \quad \text{s.t.} \quad \mathcal{Y} + \mathcal{Z} = \mathcal{F}$$
is convexified as
$$\min \ \|M(\mathcal{Y})\|_* + \lambda \|\mathcal{Z}\|_1 \quad \text{s.t.} \quad \mathcal{Y} + \mathcal{Z} = \mathcal{F}.$$

Comparison of low-n-rank versus low-M-rank robust tensor recovery, over three dimension settings with five values of $\mathrm{rank}_{CP}(\mathcal{Y}_0)$ each, reporting RelErr (all entries), RelErr (low-rank part), and the recovered ranks. The surviving entries show that the low-n-rank model recovers fractional average Tucker ranks such as (3.85, 3.80, 3.90, 3.85) with relative errors on the order of $10^{-2}$ to $10^{-3}$, while the low-M-rank model recovers the exact pairs $(\mathrm{rank}_{M^+}(\mathcal{Y}_0), \mathrm{rank}_{M^-}(\mathcal{Y}_0))$, namely (2,2), (4,4), (6,6), (8,8), (12,12) in the first setting, (3,3), (6,6), (9,9), (12,12), (18,18) in the second, and (4,4), (8,8), (12,12), (16,16), (24,24) in the third, with relative errors on the order of $10^{-6}$ to $10^{-7}$. The remaining numeric entries did not survive the transcription.

Slide 46: Colored Video Completion (50 frames)

First row: frames of the original video; second row: the same frames with 80% of the entries missing; third row: the frames recovered using tensor completion.

Slide 47: Robust Video Recovery

The first row shows three frames of the original video sequence; the second row shows the recovered background; the third row shows the recovered foreground.

Slide 48: Part III. Invariance of the Tucker Core

Slide 49: The Tucker Core

Consider an $N$-way tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_N}$. The equation
$$\mathcal{X} = \mathcal{G} \times_1 A^{(1)} \times_2 \cdots \times_N A^{(N)} = [\![\mathcal{G};\, A^{(1)}, \ldots, A^{(N)}]\!],$$
where $\mathcal{G} \in \mathbb{C}^{J_1 \times J_2 \times \cdots \times J_N}$, $A^{(n)} \in \mathbb{C}^{I_n \times J_n}$ for $n = 1, \ldots, N$, and $\times_n$ denotes the mode-$n$ (matrix) product, is called a size-$(J_1, \ldots, J_N)$ Tucker decomposition of $\mathcal{X}$; $\mathcal{G}$ is called a core tensor associated with this decomposition, and $A^{(n)}$ is the $n$-th factor matrix.

A Tucker decomposition is said to be independent if every factor matrix has full column rank, and orthonormal if every factor matrix has orthonormal columns.
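
An orthonormal Tucker decomposition can be computed via the higher-order SVD (HOSVD); a numpy sketch (`hosvd` is our illustrative implementation, with mode-$n$ unfoldings taken with mode $n$ as rows):

```python
import numpy as np

def hosvd(X):
    """Orthonormal Tucker decomposition: A^(n) from the left singular vectors
    of the mode-n unfolding; core G = X x_1 A^(1)' x_2 ... x_N A^(N)'."""
    factors = []
    for n in range(X.ndim):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)  # mode-n unfolding
        U, s, _ = np.linalg.svd(Xn, full_matrices=False)
        r = int(np.sum(s > 1e-10 * s[0]))                  # numerical mode-n rank
        factors.append(U[:, :r])
    G = X
    for n, A in enumerate(factors):                        # mode-n product with A'
        G = np.moveaxis(np.tensordot(A.conj().T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, factors

# usage: a tensor whose mode-1 rank is at most 2 yields a compressed core
rng = np.random.default_rng(3)
X = np.tensordot(rng.standard_normal((5, 2)), rng.standard_normal((2, 6, 7)), axes=1)
G, factors = hosvd(X)
print(G.shape)  # first core dimension is 2
```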

Slide 50: The CP-Rank of the Tucker Core

Theorem (Jiang, Yang, Z., 2015). For any given tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_N}$ with independent Tucker decomposition $\mathcal{X} = [\![\mathcal{G};\, X^{(1)}, \ldots, X^{(N)}]\!]$, we have $\mathrm{rank}_{CP}(\mathcal{X}) = \mathrm{rank}_{CP}(\mathcal{G})$.

Theorem (Jiang, Yang, Z., 2015). For any symmetric tensor $\mathcal{X} \in \mathbb{C}^{I \times I \times \cdots \times I}$ and its independent symmetric Tucker decomposition $\mathcal{X} = [\![\mathcal{G}_s;\, X, X, \ldots, X]\!]$, the core tensor $\mathcal{G}_s \in \mathbb{C}^{J \times J \times \cdots \times J}$ is also symmetric, and $\mathrm{rank}_S(\mathcal{G}_s) = \mathrm{rank}_S(\mathcal{X})$.

Slide 51: The Frobenius Norm

Theorem (Jiang, Yang, Z., 2015). For any given tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_N}$ with independent Tucker decomposition $\mathcal{X} = [\![\mathcal{G};\, X^{(1)}, \ldots, X^{(N)}]\!]$, we have
$$\alpha \|\mathcal{G}\|_F \leq \|\mathcal{X}\|_F \leq \beta \|\mathcal{G}\|_F,$$
where $\beta = \|X^{(1)}\|_2 \|X^{(2)}\|_2 \cdots \|X^{(N)}\|_2$ and $\alpha = \beta / \left( \prod_{n=1}^{N} \kappa(X^{(n)}) \right)$, with $\kappa(X^{(n)})$ the condition number of the matrix $X^{(n)}$ for all $n$.
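
A quick numerical check of the bound, with the spectral norm and condition number read off the singular values of each factor (random factors have full column rank almost surely):

```python
import numpy as np

def spec_and_cond(A):
    s = np.linalg.svd(A, compute_uv=False)
    return s[0], s[0] / s[-1]   # spectral norm, condition number

rng = np.random.default_rng(4)
G = rng.standard_normal((2, 3, 2))
factors = [rng.standard_normal((n, r)) for n, r in ((5, 2), (6, 3), (4, 2))]

X = G
for n, A in enumerate(factors):   # apply the Tucker factors mode by mode
    X = np.moveaxis(np.tensordot(A, np.moveaxis(X, n, 0), axes=1), 0, n)

beta = np.prod([spec_and_cond(A)[0] for A in factors])
alpha = beta / np.prod([spec_and_cond(A)[1] for A in factors])
nG, nX = np.linalg.norm(G), np.linalg.norm(X)
assert alpha * nG <= nX + 1e-12 and nX <= beta * nG + 1e-12
```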

Slide 52: The Tensor Schatten Norm and p-Quasi-Norm

For any tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_N}$ and $0 < p \leq 1$, the tensor $p$-quasi-norm of $\mathcal{X}$ is defined as
$$\|\mathcal{X}\|_p := \min \left\{ \left( \sum_{s=1}^{r} |\lambda_s|^p \right)^{1/p} : \mathcal{X} = \sum_{s=1}^{r} \lambda_s\, x_s^{(1)} \otimes x_s^{(2)} \otimes \cdots \otimes x_s^{(N)},\ \|x_s^{(n)}\| = 1,\ s = 1, \ldots, r,\ n = 1, \ldots, N \right\}.$$

Theorem. For any given tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_N}$ with independent Tucker decomposition $\mathcal{X} = [\![\mathcal{G};\, X^{(1)}, \ldots, X^{(N)}]\!]$, we have
$$\alpha \|\mathcal{G}\|_p \leq \|\mathcal{X}\|_p \leq \beta \|\mathcal{G}\|_p,$$
where $\beta = \|X^{(1)}\|_2 \|X^{(2)}\|_2 \cdots \|X^{(N)}\|_2$ and $\alpha = \beta / \left( \prod_{n=1}^{N} \kappa(X^{(n)}) \right)$, with $\kappa(X^{(n)})$ the condition number of $X^{(n)}$ for all $n$.

Slide 53: The Border Rank

For an $N$-way tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_N}$, its border rank, denoted $\mathrm{rank}_B(\mathcal{X})$, is defined as
$$\mathrm{rank}_B(\mathcal{X}) := \min \left\{ r : \forall\, \epsilon > 0,\ \exists\, \mathcal{E} \in \mathbb{C}^{I_1 \times \cdots \times I_N} \text{ s.t. } \|\mathcal{E}\|_F \leq \epsilon,\ \mathrm{rank}_{CP}(\mathcal{X} + \mathcal{E}) \leq r \right\}.$$

Theorem. For any given tensor $\mathcal{X} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_N}$ with independent Tucker decomposition $\mathcal{X} = [\![\mathcal{G};\, X^{(1)}, \ldots, X^{(N)}]\!]$, we have $\mathrm{rank}_B(\mathcal{X}) = \mathrm{rank}_B(\mathcal{G})$.

Slide 54: The Z-Eigenvalue

For an $N$-way symmetric tensor $\mathcal{T} \in \mathbb{R}^{I \times \cdots \times I}$, if there exist a number $\lambda \in \mathbb{R}$ and a nonzero vector $x \in \mathbb{R}^I$ such that
$$\mathcal{T} x^{N-1} = \lambda x, \qquad x^\top x = 1,$$
then $\lambda$ is called a Z-eigenvalue of $\mathcal{T}$, and $x$ is called a corresponding Z-eigenvector.

Theorem. Let $\mathcal{T} \in \mathbb{R}^{I \times I \times \cdots \times I}$ be an $N$-way symmetric tensor with exact independent symmetric Tucker decomposition $\mathcal{T} = [\![\mathcal{G}_s;\, X, X, \ldots, X]\!]$. Construct
$$\hat{\mathcal{G}}_s = \mathcal{G}_s \times_1 (X^\top X)^{1/2} \times_2 \cdots \times_N (X^\top X)^{1/2}.$$
Then every Z-eigenvalue of $\hat{\mathcal{G}}_s$ is also a Z-eigenvalue of $\mathcal{T}$, while every nonzero Z-eigenvalue of $\mathcal{T}$ is also a Z-eigenvalue of $\hat{\mathcal{G}}_s$.
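
For small symmetric tensors, a Z-eigenpair can be found with a shifted symmetric higher-order power iteration in the style of SS-HOPM (Kolda and Mayo); this generic sketch is not the Cui-Dai-Nie method used in the experiments below:

```python
import numpy as np

def t_apply(T, x):
    """Contract a symmetric N-way tensor with x on all but one mode: T x^{N-1}."""
    y = T
    for _ in range(T.ndim - 1):
        y = np.tensordot(y, x, axes=1)   # contract the last mode with x
    return y

def ss_hopm(T, alpha=1.0, iters=2000, seed=0):
    """Shifted power iteration; for a large enough shift alpha the iterates
    converge to a Z-eigenvector, and lambda = x' (T x^{N-1}) with ||x|| = 1."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = t_apply(T, x) + alpha * x
        x = y / np.linalg.norm(y)
    return x @ t_apply(T, x), x
```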

Slide 55: Numerical Experiments with the Z-Eigenvalue Computation

Example 1 (Xie and Chang, 2014): the fourth-order tensor $\mathcal{X} \in \mathbb{R}^{5 \times 5 \times 5 \times 5}$ defined by the polynomial
$$\mathcal{X} x^4 = (x_1 + x_2 + x_3 + x_4)^4 + (x_2 + x_3 + x_4 + x_5)^4.$$

Examples 2 and 3 (Nie and Wang, 2014):
$$\mathcal{X}_{ijk} = \frac{(-1)^i}{i} + \frac{(-1)^j}{j} + \frac{(-1)^k}{k},$$
with $n = 5$ and $n = 8$, respectively.

Slide 56

Example 4 (Cui, Dai, and Nie, 2014):
$$\mathcal{X}_{i_1 i_2 i_3 i_4} = \tan(i_1) + \tan(i_2) + \tan(i_3) + \tan(i_4).$$

Example 5 (Cui, Dai, and Nie, 2014):
$$\mathcal{X}_{i_1 i_2 i_3 i_4 i_5} = \ln(i_1) + \ln(i_2) + \ln(i_3) + \ln(i_4) + \ln(i_5).$$
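
These "sum of univariate functions" tensors are exactly where working on the core pays off: every mode-$n$ unfolding is the $\tan$-vector times a row of ones plus ones times a constant vector, hence rank 2, so the Tucker core is tiny. A sketch building Example 4 for an assumed size $n = 5$ (the slide does not record the dimension):

```python
import numpy as np

n, N = 5, 4                       # n is an assumption; the slide omits the size
vals = np.tan(np.arange(1, n + 1))
grids = np.meshgrid(*([vals] * N), indexing='ij')
X4 = sum(grids)                   # X[i1,...,i4] = tan(i1) + ... + tan(i4)

# every mode unfolding has rank 2, so the core of Example 4 is only 2x2x2x2
print(np.linalg.matrix_rank(X4.reshape(n, -1)))  # 2
```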

Slide 59: The Savings

Reference: C. Cui, Y. Dai, and J. Nie, All real eigenvalues of symmetric tensors, SIAM J. Matrix Anal. Appl., 35(4), 2014.

Table: computational time comparison. (The order, tensor size, core size, and one of the two CPU columns did not survive the transcription; the surviving CPU column reads 3.29s, 2.55s, 2.66s, 4.25s, and 5.23s for Examples 1 through 5, respectively.)

Slide 60: Thank you!
