Robust Tensor Completion and its Applications. Michael K. Ng Department of Mathematics Hong Kong Baptist University


1 Robust Tensor Completion and its Applications Michael K. Ng Department of Mathematics Hong Kong Baptist University

2 Example

3 Low-dimensional Structure
Data in many real applications exhibit low-dimensional structures due to local regularities, global symmetries, repetitive patterns, redundant sampling, ... (low-dimensional structure ⇒ low-rank data matrices)

4 Example
Customer/Item   I   II  III  IV
A               5   1   ?    ?
B               ?   2   3    ?
C               ?   ?   4    2
D               1   ?   ?    ?
...
For example, the Netflix Challenge (2009) involved about 0.5 million users and about 18,000 movies.
Matrix Completion: min_X rank(X) subject to P_Ω(X) = P_Ω(M)

5 Example
Matrix RPCA: min_{X,E} rank(X) + λ‖E‖_0 subject to X + E = M

6 Example
Robust Matrix Completion: min_{X,E} rank(X) + λ‖E‖_0 subject to P_Ω(X + E) = P_Ω(M)

7 Low Rank Matrix Recovery
Matrix Completion: min_X rank(X) subject to P_Ω(X) = P_Ω(M)
Matrix RPCA: min_{X,E} rank(X) + λ‖E‖_0 subject to M = X + E
Robust Matrix Completion: min_{X,E} rank(X) + λ‖E‖_0 subject to P_Ω(M) = P_Ω(X + E)

8 Singular Value Decomposition
X = UΣV^T, where U and V are orthogonal matrices and Σ is a diagonal matrix containing the singular values of X. The number of nonzero singular values of X is the rank of X.

9 Low Rank Matrix Recovery
Matrix Completion: min_X ‖X‖_* subject to P_Ω(X) = P_Ω(M)
Matrix RPCA: min_{X,E} ‖X‖_* + λ‖E‖_1 subject to M = X + E
Robust Matrix Completion: min_{X,E} ‖X‖_* + λ‖E‖_1 subject to P_Ω(M) = P_Ω(X + E)
Nuclear norm ‖·‖_*: the sum of singular values (the convex envelope of the rank).
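The computational primitive behind these nuclear norm relaxations is the proximal operator of ‖·‖_*, singular value thresholding. A minimal numpy sketch (the helper name svt is illustrative, not from the talk):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau*||.||_* at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# shrinking the singular values below tau reduces the rank
M = np.random.randn(6, 4)
print(np.linalg.matrix_rank(svt(M, tau=1.0)))
```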

10 Low Rank Matrix Recovery Results
(RPCA) Candès, E. J., Li, X., Ma, Y., and Wright, J. Robust principal component analysis? Journal of the ACM, 58(3), Article 11, 2011.
(Matrix Completion) Recht, B. A simpler approach to matrix completion. Journal of Machine Learning Research, 12:3413-3430, 2011.
(Matrix Completion) Chen, Y. Incoherence-optimal matrix completion. IEEE Transactions on Information Theory, 61(5):2909-2923, 2015.
... and many more papers.

11 Application: Background and Foreground Separation in Video (spatial dimensions + time)

12 Application: Color Images Completion and Denoising (spatial dimensions + RGB)

13 Application: Hyperspectral Images Completion and Denoising (spatial dimensions + frequencies)

14 Low Rank Tensor Recovery
Data in many applications come as multi-dimensional arrays. Vectorization is likely to break the inherent structures and correlations in the original data.

15 Low Rank Tensor Recovery
Tensor Completion: min_X rank(X) subject to P_Ω(X) = P_Ω(M)
Tensor Robust PCA: min_{X,E} rank(X) + λ‖E‖_0 subject to M = X + E
Robust Tensor Completion: min_{X,E} rank(X) + λ‖E‖_0 subject to P_Ω(M) = P_Ω(X + E)

16 Tensor Decomposition
CANDECOMP/PARAFAC (CP) Decomposition:
X = Σ_{i=1}^{r} λ_i a_{i,1} ∘ a_{i,2} ∘ ⋯ ∘ a_{i,m}
The minimal such r is called the CP rank of X.

17 Tensor Decomposition
Tucker Decomposition:
X = Σ_{i_1=1}^{r_1} ⋯ Σ_{i_m=1}^{r_m} g_{i_1,i_2,...,i_m} a_{i_1,1} ∘ ⋯ ∘ a_{i_m,m}, i.e., X = G ×_1 A_1 ×_2 A_2 ⋯ ×_m A_m.
It can be obtained by applying the singular value decomposition to each mode-j unfolding matrix X_(j) of X. The Tucker rank is (rank(X_(1)), rank(X_(2)), ..., rank(X_(m))) = (r_1, r_2, ..., r_m).
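A small numpy sketch of how the Tucker rank can be read off in practice, via the ranks of the mode unfoldings (helper names are illustrative):

```python
import numpy as np

def unfold(X, mode):
    """Mode-j unfolding: move mode j to the front and flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def tucker_rank(X, tol=1e-10):
    """Tucker rank = tuple of ranks of the mode unfoldings."""
    return tuple(np.linalg.matrix_rank(unfold(X, j), tol=tol) for j in range(X.ndim))

# a tensor built from a small core has low Tucker rank
G = np.random.randn(2, 3, 2)
A = [np.random.randn(10, 2), np.random.randn(10, 3), np.random.randn(10, 2)]
X = np.einsum('abc,ia,jb,kc->ijk', G, *A)
print(tucker_rank(X))   # (2, 3, 2)
```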

18 Low Rank Tensor Recovery
Tensor Completion: min_X rank(X) subject to P_Ω(X) = P_Ω(M)
Tensor Robust PCA: min_{X,E} rank(X) + λ‖E‖_0 subject to M = X + E
Robust Tensor Completion: min_{X,E} rank(X) + λ‖E‖_0 subject to P_Ω(M) = P_Ω(X + E)

19 Low Rank Tensor Recovery
The CP decomposition/rank cannot be computed efficiently. The matrix rank can be replaced by the matrix nuclear norm (the sum of singular values), which is its convex envelope. Replacing the Tucker rank by the sum of nuclear norms of the unfolding matrices introduces an interdependent matrix trace norm. The use of the sum of nuclear norms of the unfolding matrices of a tensor may be challenged since it is suboptimal.¹ The tensor trace norm (the average of the trace norms of the unfolding matrices) is not a tight convex relaxation of the tensor rank (the average rank of the unfolding matrices).²
¹ C. Mu, B. Huang, J. Wright, and D. Goldfarb. Square deal: Lower bounds and improved relaxations for tensor recovery. In ICML, pages 73-81, 2014.
² B. Romera-Paredes and M. Pontil. A new convex relaxation for tensor completion. In Adv. Neural Inf. Process. Syst., 2013.

20 t-SVD Decomposition
A third-order tensor of size n_1 × n_2 × n_3 can be viewed as an n_1 × n_2 matrix of tubes which lie in the third dimension. [Kilmer, M. E. and Martin, C. D. Factorization strategies for third-order tensors. Linear Algebra and Its Applications, 435(3):641-658, 2011]

21 t-SVD Decomposition
Definition: The t-product A ∗ B of A ∈ R^{n_1×n_2×n_3} and B ∈ R^{n_2×n_4×n_3} is the tensor C ∈ R^{n_1×n_4×n_3} whose (i,j)-th tube is given by
C(i, j, :) = Σ_{k=1}^{n_2} A(i, k, :) ∗ B(k, j, :),
where ∗ denotes the circular convolution between two tubes of the same size. The tube at position (i,k) in A is convolved with the tube at position (k,j) in B; both tubes have length n_3. The sum of all these convolutions is placed at position (i,j) in C. The scalar multiplications of an ordinary matrix product are replaced by circular convolutions between tubes.
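A minimal numpy sketch of the t-product, once directly from this tube-convolution definition and once via slice-wise products in the Fourier domain; the two agree (function names are illustrative):

```python
import numpy as np

def tprod_conv(A, B):
    """t-product by definition: tubes combine via circular convolution."""
    n1, n2, n3 = A.shape
    n4 = B.shape[1]
    C = np.zeros((n1, n4, n3))
    for i in range(n1):
        for j in range(n4):
            for k in range(n2):
                # circular convolution of the two length-n3 tubes
                C[i, j, :] += np.array([sum(A[i, k, t] * B[k, j, (s - t) % n3]
                                            for t in range(n3)) for s in range(n3)])
    return C

def tprod_fft(A, B):
    """Equivalent: facewise matrix products in the Fourier domain."""
    Ah, Bh = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    return np.real(np.fft.ifft(np.einsum('ikt,kjt->ijt', Ah, Bh), axis=2))

A, B = np.random.randn(4, 3, 5), np.random.randn(3, 2, 5)
print(np.allclose(tprod_conv(A, B), tprod_fft(A, B)))   # True
```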

22 t-SVD Decomposition
Definition: The identity tensor I ∈ R^{n×n×n_3} is the tensor whose first frontal slice I^(1) is the n×n identity matrix and whose other frontal slices I^(i), i = 2, ..., n_3, are zero matrices.
Definition: The conjugate transpose of a tensor A ∈ R^{n_1×n_2×n_3} is the tensor A^H ∈ R^{n_2×n_1×n_3} obtained by conjugate transposing each frontal slice and then reversing the order of the transposed frontal slices 2 through n_3, i.e.,
(A^H)^(1) = (A^(1))^H,
(A^H)^(i) = (A^(n_3+2−i))^H, i = 2, ..., n_3.

23 t-SVD Decomposition
Definition: A tensor Q ∈ R^{n×n×n_3} is orthogonal if it satisfies Q^H ∗ Q = Q ∗ Q^H = I, where I is the identity tensor of size n×n×n_3.
Definition: A tensor A is called f-diagonal if each of its frontal slices A^(i) is a diagonal matrix.

24 t-SVD Decomposition
For A ∈ R^{n_1×n_2×n_3}, the t-SVD of A is given by A = U ∗ S ∗ V^H, where U ∈ R^{n_1×n_1×n_3} and V ∈ R^{n_2×n_2×n_3} are orthogonal tensors and S ∈ R^{n_1×n_2×n_3} is an f-diagonal tensor. The entries of S are called the singular tubes of A.

25 t-SVD Decomposition
The tensor tubal rank, denoted rank_t(A), is defined as the number of nonzero singular tubes of S, where S comes from the t-SVD of A, i.e.,
rank_t(A) = #{i : S(i, i, :) ≠ 0}.
It can be shown to equal max_i rank(Â^(i)), where Â^(i) is the i-th frontal slice of Â, and Â denotes the third-order tensor obtained by taking the Discrete Fourier Transform (DFT) of all the tubes along the third dimension of A.
Example of t-SVD Decomposition
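A numpy sketch of such a computation, assuming the standard construction (SVD of each frontal slice in the Fourier domain, mapped back by the inverse DFT); names are illustrative:

```python
import numpy as np

def tsvd(A):
    """t-SVD: SVD each frontal slice of the Fourier-domain tensor, map back."""
    n1, n2, n3 = A.shape
    m = min(n1, n2)
    Ah = np.fft.fft(A, axis=2)
    Uh = np.zeros((n1, n1, n3), dtype=complex)
    Sh = np.zeros((n1, n2, n3), dtype=complex)
    Vh = np.zeros((n2, n2, n3), dtype=complex)
    for t in range(n3):
        U, s, VH = np.linalg.svd(Ah[:, :, t])
        Uh[:, :, t], Vh[:, :, t] = U, VH.conj().T
        Sh[np.arange(m), np.arange(m), t] = s
    back = lambda T: np.real(np.fft.ifft(T, axis=2))
    return back(Uh), back(Sh), back(Vh)

def tubal_rank(A, tol=1e-10):
    """Number of nonzero singular tubes = max rank over Fourier-domain slices."""
    Ah = np.fft.fft(A, axis=2)
    return max(np.linalg.matrix_rank(Ah[:, :, t], tol=tol) for t in range(A.shape[2]))

# a t-product of thin factors has small tubal rank
facewise = lambda P, Q: np.real(np.fft.ifft(np.einsum('ikt,kjt->ijt',
    np.fft.fft(P, axis=2), np.fft.fft(Q, axis=2)), axis=2))
X = facewise(np.random.randn(10, 2, 6), np.random.randn(2, 8, 6))
print(tubal_rank(X))   # 2
U, S, V = tsvd(X)      # S carries the singular tubes on its diagonal
```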

26 Hyperspectral Images

27 Tubal Rank 1 Approximation

28 Tubal Rank 5 Approximation

29 Tubal Rank 10 Approximation

30 Tubal Rank 20 Approximation

31 Tubal Rank 40 Approximation

32 Low Tubal Rank Tensor Recovery
Tensor Completion: min_X rank(X) subject to P_Ω(X) = P_Ω(M)
Tensor Robust PCA: min_{X,E} rank(X) + λ‖E‖_0 subject to M = X + E
Robust Tensor Completion: min_{X,E} rank(X) + λ‖E‖_0 subject to P_Ω(M) = P_Ω(X + E)

33 TNN
Definition: The tubal nuclear norm of a tensor A ∈ R^{n_1×n_2×n_3}, denoted ‖A‖_TNN, is the sum of the nuclear norms of all the frontal slices of Â.
Theorem: For any tensor X ∈ C^{n_1×n_2×n_3}, ‖X‖_TNN is the convex envelope of the function Σ_{i=1}^{n_3} rank(X̂^(i)) on the set {X : ‖X‖ ≤ 1}.
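A direct numpy sketch of this definition, assuming the sum convention stated above (some papers scale the sum by 1/n_3):

```python
import numpy as np

def tnn(X):
    """Tubal nuclear norm: sum of nuclear norms of the Fourier-domain slices."""
    Xh = np.fft.fft(X, axis=2)
    return sum(np.linalg.norm(Xh[:, :, t], 'nuc') for t in range(X.shape[2]))

print(tnn(np.random.randn(5, 4, 3)))
```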

34 Low Tubal Rank Tensor Recovery (Relaxation)
Tensor Completion: min_X ‖X‖_TNN subject to P_Ω(X) = P_Ω(M)
Tensor Robust PCA: min_{X,E} ‖X‖_TNN + λ‖E‖_1 subject to M = X + E
Robust Tensor Completion: min_{X,E} ‖X‖_TNN + λ‖E‖_1 subject to P_Ω(M) = P_Ω(X + E)
Can we recover a low-tubal-rank tensor from partial and grossly corrupted observations exactly?

35 Tensor Incoherence Conditions
Assume that rank_t(L_0) = r and its t-SVD is L_0 = U ∗ S ∗ V^H. L_0 is said to satisfy the tensor incoherence conditions with parameter µ > 0 if
max_{i=1,...,n_1} ‖U^H ∗ e_i‖_F ≤ √(µr/n_1),
max_{j=1,...,n_2} ‖V^H ∗ e_j‖_F ≤ √(µr/n_2),
and (joint incoherence condition)
‖U ∗ V^H‖_∞ ≤ √(µr/(n_1 n_2 n_3)).

36 Tensor Incoherence Conditions
The column basis, denoted e_i, is a tensor of size n_1×1×n_3 whose (i,1,1)-th entry equals 1 and whose other entries equal 0. The tube basis, denoted ė_k, is a tensor of size 1×1×n_3 whose (1,1,k)-th entry equals 1 and whose other entries equal 0.

37 Low Rank Tensor Recovery
Theorem: Suppose L_0 ∈ R^{n_1×n_2×n_3} obeys the tensor incoherence conditions, and the observation set Ω is uniformly distributed among all sets of cardinality m = ρ n_1 n_2 n_3. Suppose also that each observed entry is independently corrupted with probability γ. Then there exist universal constants c_1, c_2 > 0 such that with probability at least 1 − c_1 (n_(1) n_3)^{−c_2}, the recovery of L_0 with λ = 1/√(ρ n_(1) n_3) is exact, provided that
r ≤ c_r n_(2) / (µ (log(n_(1) n_3))^2) and γ ≤ c_γ,
where c_r and c_γ are two positive constants, n_(1) = max{n_1, n_2} and n_(2) = min{n_1, n_2}.

38 Low Rank Tensor Recovery
Theorem (Tensor Completion): Suppose L_0 ∈ R^{n_1×n_2×n_3} obeys the tensor incoherence conditions, and m entries of L_0 are observed with locations sampled uniformly at random. Then there exist universal constants c_0, c_1, c_2 > 0 such that if
m ≥ c_0 µ r n_(1) n_3 (log(n_(1) n_3))^2,
L_0 is the unique minimizer of the convex optimization problem with probability at least 1 − c_1 (n_(1) n_3)^{−c_2}.

39 Low Rank Tensor Recovery
Theorem (Tensor Robust PCA): Suppose L_0 ∈ R^{n_1×n_2×n_3} obeys the tensor incoherence conditions and the joint incoherence condition, and E_0 has support uniformly distributed with probability γ. Then there exist universal constants c_1, c_2 > 0 such that with probability at least 1 − c_1 (n_(1) n_3)^{−c_2}, (L_0, E_0) is the unique minimizer of the convex optimization problem with λ = 1/√(n_(1) n_3), provided that
r ≤ c_r n_(2) / (µ (log(n_(1) n_3))^2) and γ ≤ c_γ,
where c_r and c_γ are two positive constants.

40 Convex Optimization Problem
Input: X, Ω and λ. Initialize: L^0 = E^0 = Y^0 = 0, ρ = 1.1, µ^0 = 1e-4, µ_max = 1e8.
WHILE not converged
1. Update L^{k+1} by
   min_L ‖L‖_TNN + (µ^k/2) ‖L + E^k − X + Y^k/µ^k‖_F^2;
2. Update P_Ω(E^{k+1}) by
   min_E λ ‖P_Ω(E)‖_1 + (µ^k/2) ‖P_Ω(E + L^{k+1} − X + Y^k/µ^k)‖_F^2;
3. Update P_{Ω^c}(E^{k+1}) by
   P_{Ω^c}(E^{k+1}) = P_{Ω^c}(X − L^{k+1} − Y^k/µ^k);
4. Update the multiplier Y^{k+1} by
   Y^{k+1} = Y^k + µ^k (L^{k+1} + E^{k+1} − X);
5. Update µ^{k+1} by µ^{k+1} = min(ρ µ^k, µ_max);
6. Check the convergence condition.
ENDWHILE
Output: L.
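The loop above translates almost line for line into numpy. A minimal sketch, assuming the standard TNN proximal step (singular value soft-thresholding of each Fourier slice) and a boolean mask for Ω; all names are illustrative:

```python
import numpy as np

def prox_tnn(Y, tau):
    """Prox of tau*||.||_TNN: soft-threshold singular values of each Fourier slice."""
    Yh = np.fft.fft(Y, axis=2)
    for t in range(Y.shape[2]):
        U, s, VH = np.linalg.svd(Yh[:, :, t], full_matrices=False)
        Yh[:, :, t] = (U * np.maximum(s - tau, 0.0)) @ VH
    return np.real(np.fft.ifft(Yh, axis=2))

def rtc_admm(X, omega, lam, iters=200, rho=1.1, mu=1e-4, mu_max=1e8):
    """Robust tensor completion: min ||L||_TNN + lam*||P_omega(E)||_1
       s.t. P_omega(L + E) = P_omega(X); omega is a boolean mask."""
    L, E, Y = (np.zeros_like(X) for _ in range(3))
    for _ in range(iters):
        L = prox_tnn(X - E - Y / mu, 1.0 / mu)                     # step 1
        G = X - L - Y / mu
        soft = np.sign(G) * np.maximum(np.abs(G) - lam / mu, 0.0)  # step 2 on omega
        E = np.where(omega, soft, G)                               # step 3 off omega
        Y = Y + mu * (L + E - X)                                   # step 4
        mu = min(rho * mu, mu_max)                                 # step 5
    return L, E
```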

41 Phase Transition ρ (data observation) and γ (data corruption)

42 Application: Completion and Denoising ρ = 70% (data observation) and γ = 30% (data corruption)

43 Application: Completion and Denoising ρ = 70% (data observation) and γ = 30% (data corruption)

44 Application: Completion and Denoising
For RPCA and RMC, we apply them on each channel with λ = 1/√n_1; for SNN, unfolding with the three parameters suggested in the literature; for TRPCA, λ = 1/√(n_1 n_3); BM3D is a standard denoising method using nonlocal information; BM3D+ is a two-step method combining BM3D with image completion by HaLRTC (tensor unfolding to matrices); BM3D++ is a two-step method combining BM3D with image completion by TNMM.

45 Video Background Modeling
Background: low-tubal-rank component; moving objects: sparse component.

46 Video Background Modeling

47 Traffic Data Estimation
Traffic flow data such as traffic volumes, occupancy rates, and flow speeds are usually contaminated by missing values and outliers due to hardware or software malfunctions. Performance Measurement System (PeMS): pems.dot.ca.gov. Third-order tensor (day) × (time) × (week) of traffic volume.

48 Traffic Data Estimation

49 Transform Orientation: Example
(figure: a third-order tensor with its 1st, 2nd, and 3rd directions labelled)

50 The Correction Model

51 The Corrected Model
Issue: The nuclear norm minimization of a matrix may be challenged under a general sampling distribution. Salakhutdinov and Srebro³ showed that when certain rows and/or columns are sampled with high probability, matrix nuclear norm minimization may fail in the sense that the number of observations required for recovery is much larger than in the standard matrix completion setting. Miao et al. proposed a rank-corrected model for low-rank matrix recovery with fixed basis coefficients.⁴
³ R. Salakhutdinov and N. Srebro. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. In Adv. Neural Inform. Process. Syst., 2010.
⁴ W. Miao, S. Pan, and D. Sun. A rank-corrected procedure for matrix completion with fixed basis coefficients. Math. Program., 159(1):289-338, 2016.

52 The Corrected Method
For any given index set Ω ⊆ {1,...,n_1} × {1,...,n_2} × {1,...,n_3}, we define the sampling operator D_Ω : R^{n_1×n_2×n_3} → R^{|Ω|} by
D_Ω(X) = (⟨E_ijk, X⟩)^T_{(i,j,k)∈Ω},
where |Ω| denotes the number of entries in Ω. Let X_0 ∈ R^{n_1×n_2×n_3} be the unknown true tensor. The observed model can be described as
y = D_Ω(X_0) + σ ε,
where y = (y_1,...,y_m)^T ∈ R^m and ε = (ε_1,...,ε_m)^T ∈ R^m are the observation vector and the noise vector, respectively, the ε_i are independent and identically distributed (i.i.d.) noises with E(ε_i) = 0 and E(ε_i^2) = 1, and σ > 0 controls the magnitude of the noise.
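A toy numpy sketch of this observation model; the tensor, the index draw, and the noise scale are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3, m, sigma = 10, 10, 5, 120, 0.1
X0 = rng.standard_normal((n1, n2, n3))      # unknown true tensor (toy stand-in)

# D_omega: read out the observed entries in a fixed order
idx = tuple(rng.integers(0, n, size=m) for n in (n1, n2, n3))
D = lambda X: X[idx]                         # D_omega(X) in R^m

eps = rng.standard_normal(m)                 # i.i.d. noise, E(eps)=0, E(eps^2)=1
y = D(X0) + sigma * eps                      # the observed model
```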

53 The Corrected Method
Assumption: Each entry is sampled with positive probability, i.e., there exists a positive constant κ_1 ≥ 1 such that
p_ijk ≥ 1/(κ_1 n_1 n_2 n_3).
It implies
E(⟨E, X⟩^2) = Σ_{i=1}^{n_1} Σ_{j=1}^{n_2} Σ_{k=1}^{n_3} p_ijk x_ijk^2 ≥ ‖X‖_F^2 / (κ_1 n_1 n_2 n_3).

54 The Corrected Method
In the matrix case, nuclear norm penalization may fail when some columns or rows are sampled with very high probability. For third-order tensors, we likewise need to rule out the case that some fiber is sampled with very high probability. Let
R_jk = Σ_{i=1}^{n_1} p_ijk, C_ik = Σ_{j=1}^{n_2} p_ijk, T_ij = Σ_{k=1}^{n_3} p_ijk.
Assumption: There exists a positive constant κ_2 ≥ 1 such that
max_{i,j,k} {R_jk, C_ik, T_ij} ≤ κ_2 / min{n_1, n_2, n_3}.

55 The Corrected Method
min_X (1/(2m)) ‖y − D_Ω(X)‖^2 + µ (‖X‖_TNN − ⟨F(X_m), X⟩),
s.t. ‖X‖ ≤ c,
where the spectral function F : R^{n_1×n_2×n_3} → R^{n_1×n_2×n_3} is given by
F(X_m) := U ∗ Σ ∗ V^H, with Σ = ifft(M̄, [], 3) and M̄^(i) = f(Ŝ^(i)) := Diag(f(diag(Ŝ^(i)))),
where U, S, V come from the t-SVD X_m = U ∗ S ∗ V^H of the current estimate X_m, f is defined by
f_i(x) := φ(x_i/‖x‖) if x ≠ 0, and 0 otherwise,
and the scalar function φ : R → R is defined by
φ(z) = ((1 + ε^τ) z^τ) / (z^τ + ε^τ).

56 The Corrected Method
The correction function F is used to obtain a lower-tubal-rank solution. The small singular values of the frontal slices in the Fourier domain are penalized more heavily in the correction procedure, so they are pushed toward zero in the next correction step. In this way, the model generates a lower-tubal-rank solution by the correction method.

57 The Corrected Method
Theorem: Suppose the two assumptions hold and let τ > 1 be given. Then, for m ≥ ñ n_3 log^3((n_1 + n_2) n_3)/κ_2, there exist constants C, C_1 > 0 such that
‖X_c − X_0‖_F^2 / (n_1 n_2 n_3) ≤ C κ_1^2 κ_2 (ñ n_3 log((n_1 + n_2) n_3) / m) (32 C_1^2 (√(2r) + α_m)^2 σ^2 + 4096 C c^2 (τ (√(2r) + α_m))^{2/(τ−1)})
with probability at least 1 − 2/(n_1 + n_2 + n_3), where α_m = ‖Ũ_1 ∗ Ṽ_1^T − F(X_m)‖_F. Here Ũ_1, Ṽ_1 are the associated orthogonal tensors in the t-SVD of X_0.

58 The Symmetric Gauss-Seidel Multi-Block ADMM
Let U := {X : ‖X‖ ≤ c}. By introducing z = y − D_Ω(X) and X = S, the model becomes
min (1/(2m)) ‖z‖^2 + µ (‖X‖_TNN − ⟨F(X_m), X⟩) + δ_U(S)
s.t. z = y − D_Ω(X), X = S.
Since the TNN is the dual norm of the tensor spectral norm, its Lagrangian dual is given by
max_{u,W} −(m/2) ‖u‖^2 + ⟨u, y⟩ − δ*_U(−W)
s.t. ‖µ F(X_m) + D*_Ω(u) + W‖ ≤ µ.

59 The Symmetric Gauss-Seidel Multi-Block ADMM
Let Z := µ F(X_m) + D*_Ω(u) + W and X̄ := {Z : ‖Z‖ ≤ µ}.
min_{u,W,Z} (m/2) ‖u‖^2 − ⟨u, y⟩ + δ*_U(−W) + δ_X̄(Z)
s.t. Z = µ F(X_m) + D*_Ω(u) + W.
The augmented Lagrangian function is defined by
L(u, W, Z, X) := (m/2) ‖u‖^2 − ⟨u, y⟩ + δ*_U(−W) + δ_X̄(Z) − ⟨X, Z − µ F(X_m) − D*_Ω(u) − W⟩ + (β/2) ‖Z − µ F(X_m) − D*_Ω(u) − W‖_F^2,
where β > 0 is the penalty parameter and X is the Lagrange multiplier.

60 The Symmetric Gauss-Seidel Multi-Block ADMM
The iteration scheme of sGS-ADMM is as follows:
u^{k+1/2} = argmin_u L(u, W^k, Z^k, X^k),
W^{k+1} = argmin_W L(u^{k+1/2}, W, Z^k, X^k),
u^{k+1} = argmin_u L(u, W^{k+1}, Z^k, X^k),
Z^{k+1} = argmin_Z L(u^{k+1}, W^{k+1}, Z, X^k),
X^{k+1} = X^k − γβ (Z^{k+1} − µ F(X_m) − D*_Ω(u^{k+1}) − W^{k+1}),
where γ ∈ (0, (1+√5)/2) is the step length.

61 The Symmetric Gauss-Seidel Multi-Block ADMM
The optimal solution with respect to u is given explicitly by
u = (1/(m+β)) (y − D_Ω(X + β(µ F(X_m) + W − Z))).
The optimal solution with respect to W is given explicitly by
W^{k+1} = Prox_{(1/β) δ*_U}( −((1/β) X^k + µ F(X_m) + D*_Ω(u^{k+1/2}) − Z^k) )
= −((1/β) X^k + µ F(X_m) + D*_Ω(u^{k+1/2}) − Z^k) + (1/β) Prox_{β δ_U}( β ((1/β) X^k + µ F(X_m) + D*_Ω(u^{k+1/2}) − Z^k) ).

62 The Symmetric Gauss-Seidel Multi-Block ADMM
For the subproblem with respect to Z, the update is a projection onto X̄, which has a closed-form solution.
Theorem: For any Y ∈ R^{n_1×n_2×n_3} and ρ > 0, let Y = U ∗ S ∗ V^H be its t-SVD. Then the optimal solution X* of
min_{X ∈ R^{n_1×n_2×n_3}} { ‖X − Y‖_F^2 : ‖X‖ ≤ ρ }
is given by X* = U ∗ S_ρ ∗ V^H, where S_ρ = ifft(min{Ŝ, ρ}, [], 3).
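A numpy sketch of this projection, following the theorem: clip the singular values of every frontal slice of Ŷ at ρ and invert the DFT (the function name is illustrative):

```python
import numpy as np

def project_spectral(Y, rho):
    """Project onto {X : ||X|| <= rho} by clipping Fourier-slice singular values."""
    Yh = np.fft.fft(Y, axis=2)
    for t in range(Y.shape[2]):
        U, s, VH = np.linalg.svd(Yh[:, :, t], full_matrices=False)
        Yh[:, :, t] = (U * np.minimum(s, rho)) @ VH
    return np.real(np.fft.ifft(Yh, axis=2))
```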

63 The Symmetric Gauss-Seidel Multi-Block ADMM
The optimal solution with respect to Z is given by
Z^{k+1} = Prox_{(1/β) δ_X̄}( µ F(X_m) + D*_Ω(u^{k+1}) + W^{k+1} + (1/β) X^k ) = U^{k+1} ∗ S^{k+1}_µ ∗ (V^{k+1})^H,
where S^{k+1}_µ = ifft(min{Ŝ^{k+1}, µ}, [], 3) and
µ F(X_m) + D*_Ω(u^{k+1}) + W^{k+1} + (1/β) X^k = U^{k+1} ∗ S^{k+1} ∗ (V^{k+1})^H.

64 The Symmetric Gauss-Seidel Multi-Block ADMM
Theorem: The optimal solution set is nonempty and compact.
Only the two blocks with respect to W and Z are nonsmooth; the other blocks are quadratic.
Theorem: Suppose that β > 0 and γ ∈ (0, (1+√5)/2). Let the sequence {(W^k, u^k, Z^k, X^k)} be generated by the algorithm. Then {(W^k, u^k, Z^k)} converges to an optimal solution, and {X^k} converges to an optimal solution of the dual problem.

65 Numerical Examples
Table: Relative errors of TNN and the corrected model (CTNN) with different tensors, tubal ranks r, noise levels σ, and sampling ratios SR for low-rank tensor recovery.

66 Color Image

67 Color Image

68 Other t-SVDs

69 Revisit t-SVD
We use X̂ ∈ C^{m_1×m_2×m_3} to denote the discrete Fourier transform of X ∈ C^{m_1×m_2×m_3} along each tube, i.e., X̂ = fft(X, [], 3). The block circulant matrix is defined as
bcirc(X) :=
[ X^(1)     X^(m_3)    ⋯  X^(2)
  X^(2)     X^(1)      ⋯  X^(3)
  ⋮                    ⋱  ⋮
  X^(m_3)   X^(m_3−1)  ⋯  X^(1) ].
The block diagonal matrix and the corresponding inverse operator are defined as
bdiag(X̂) := diag(X̂^(1), X̂^(2), ..., X̂^(m_3)).

70 Revisit t-SVD
Theorem: bdiag(X̂) = (F_{m_3} ⊗ I_{m_1}) bcirc(X) (F^H_{m_3} ⊗ I_{m_2}), where ⊗ denotes the Kronecker product, F_{m_3} is the m_3 × m_3 DFT matrix, and I_m is the m × m identity matrix.
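This identity is easy to check numerically; a small numpy sketch, assuming the unitary normalization of the DFT matrix (F = fft(I)/√m_3):

```python
import numpy as np

def bcirc(X):
    """Block circulant matrix whose (i, j) block is X^(((i - j) mod m3) + 1)."""
    m3 = X.shape[2]
    return np.block([[X[:, :, (i - j) % m3] for j in range(m3)] for i in range(m3)])

m1, m2, m3 = 3, 4, 5
X = np.random.randn(m1, m2, m3)
F = np.fft.fft(np.eye(m3)) / np.sqrt(m3)           # unitary DFT matrix
lhs = np.kron(F, np.eye(m1)) @ bcirc(X) @ np.kron(F.conj().T, np.eye(m2))
Xh = np.fft.fft(X, axis=2)
rhs = np.zeros((m1 * m3, m2 * m3), dtype=complex)  # bdiag(Xhat)
for t in range(m3):
    rhs[t*m1:(t+1)*m1, t*m2:(t+1)*m2] = Xh[:, :, t]
print(np.allclose(lhs, rhs))                       # True
```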

71 Revisit t-SVD
The unfold and fold operators in the t-SVD framework are defined as
unfold(X) := [X^(1); X^(2); ⋯; X^(m_3)], fold(unfold(X)) = X.
Given X ∈ C^{m_1×m_2×m_3} and Y ∈ C^{m_2×m_4×m_3}, the t-product X ∗ Y is a third-order tensor of size m_1×m_4×m_3:
Z = X ∗ Y := fold(bcirc(X) unfold(Y)).
Since the corresponding block circulant matrices can be diagonalized by the DFT, the DFT-based t-SVD can be implemented efficiently via the fast Fourier transform (fft).

72 Cosine-Transform based t-SVD
The first work is given by E. Kernfeld, M. Kilmer and S. Aeron, Tensor-tensor products with invertible linear transforms, Linear Algebra Appl., Vol. 485, pp. 545-570, 2015.
We define the shift of a tensor A = fold([A^(1); A^(2); ⋯; A^(m_3)]) as
σ(A) = fold([A^(2); A^(3); ⋯; A^(m_3); O]).
Any tensor X can be uniquely decomposed as X = A + σ(A).

73 Cosine-Transform based t-SVD
We use X̃ ∈ R^{m_1×m_2×m_3} to denote the DCT along each tube of X, i.e., X̃ = dct(X, [], 3) = dct(A + σ(A), [], 3). We define the block Toeplitz matrix of A as
bt(A) :=
[ A^(1)      A^(2)      ⋯  A^(m_3−1)  A^(m_3)
  A^(2)      A^(1)      ⋯  A^(m_3−2)  A^(m_3−1)
  ⋮                     ⋱             ⋮
  A^(m_3−1)  A^(m_3−2)  ⋯  A^(1)      A^(2)
  A^(m_3)    A^(m_3−1)  ⋯  A^(2)      A^(1) ].
The block Hankel matrix is defined as
bh(A) :=
[ A^(2)    A^(3)    ⋯  A^(m_3)  O
  A^(3)    A^(4)    ⋯  O        A^(m_3)
  ⋮                 ⋱           ⋮
  A^(m_3)  O        ⋯  A^(4)    A^(3)
  O        A^(m_3)  ⋯  A^(3)    A^(2) ].

74 Cosine-Transform based t-SVD
The block Toeplitz-plus-Hankel matrix of A is defined as btph(A) := bt(A) + bh(A). Block Toeplitz-plus-Hankel matrices of this form can be diagonalized by the DCT.
Theorem: bdiag(X̃) = (C_{m_3} ⊗ I_{m_1}) btph(A) (C^T_{m_3} ⊗ I_{m_2}), where ⊗ denotes the Kronecker product and C_{m_3} is the m_3 × m_3 DCT matrix.

75 Cosine-Transform based t-SVD
Definition: Given X ∈ C^{m_1×m_2×m_3} and Y ∈ C^{m_2×m_4×m_3}, the DCT-based t-product X ∗_dct Y is a third-order tensor of size m_1×m_4×m_3:
Z = X ∗_dct Y := fold(btph(A) unfold(Y)), where X = A + σ(A).

76 Cosine-Transform based t-SVD
Theorem: Given a tensor X ∈ R^{m_1×m_2×m_3}, the DCT-based t-SVD of X is given by X = U ∗_dct S ∗_dct V^H, where U ∈ R^{m_1×m_1×m_3} and V ∈ R^{m_2×m_2×m_3} are orthogonal tensors, S ∈ R^{m_1×m_2×m_3} is an f-diagonal tensor, and V^H is the tensor transpose of V.

77 Cosine-Transform based t-SVD
Table: The time cost of the original (DFT-based) t-SVD and the new DCT-based t-SVD on random tensors of different sizes; rows report the time for the transform (FFT or DCT), the time for the SVDs after the transform, and the total.

78 Video Examples

79 Table: PSNR, SSIM, and time of the two methods (TNN-F and TNN-C) in video completion on the akiyo, suzie, and salesman videos at several sampling ratios SR. In brackets are the time required for the transformation and the time required for performing the SVDs. The best results are highlighted in bold.

80 Video Examples

81 Transform-based t-SVD
Fourier-transform based t-product: Z = X ∗_fft Y = fold(bcirc(X) unfold(Y)). The DFT-based t-SVD can be implemented efficiently via the fast Fourier transform (fft).
Cosine-transform based t-product: Z = X ∗_dct Y = fold(btph(A) unfold(Y)). The DCT-based t-SVD can be implemented efficiently via the fast cosine transform (dct).

82 Transform-based t-SVD
Fourier-transform based t-product: Z = X ∗_fft Y is computed facewise in the transform domain, Ẑ^(i)_fft = X̂^(i)_fft Ŷ^(i)_fft, followed by the inverse transform ifft.
Cosine-transform based t-product: Z = X ∗_dct Y is computed facewise in the transform domain, Ẑ^(i)_dct = X̂^(i)_dct Ŷ^(i)_dct, followed by the inverse transform idct.
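A numpy/scipy sketch of both transform-domain products, assuming the orthonormal DCT (scipy's norm='ortho'); helper names are illustrative:

```python
import numpy as np
from scipy.fft import dct, idct

def facewise(Ah, Bh):
    """Slice-wise matrix products: Ch(:,:,t) = Ah(:,:,t) @ Bh(:,:,t)."""
    return np.einsum('ikt,kjt->ijt', Ah, Bh)

def tprod_fft(A, B):
    """DFT-based t-product, computed entirely in the transform domain."""
    Ch = facewise(np.fft.fft(A, axis=2), np.fft.fft(B, axis=2))
    return np.real(np.fft.ifft(Ch, axis=2))

def tprod_dct(A, B):
    """DCT-based t-product: the same facewise recipe with an orthogonal DCT."""
    Ah = dct(A, axis=2, norm='ortho')
    Bh = dct(B, axis=2, norm='ortho')
    return idct(facewise(Ah, Bh), axis=2, norm='ortho')

A, B = np.random.randn(4, 3, 6), np.random.randn(3, 5, 6)
Z1, Z2 = tprod_fft(A, B), tprod_dct(A, B)   # same shape, different products
```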

83 Transform-based t-SVD
The first work is given by E. Kernfeld, M. Kilmer and S. Aeron, Tensor-tensor products with invertible linear transforms, Linear Algebra Appl., Vol. 485, pp. 545-570, 2015. We generalize the tensor singular value decomposition by using other unitary transform matrices instead of the discrete Fourier/cosine transform matrix. The motivation is that a lower transformed tubal tensor rank may be obtained with other unitary transform matrices than with the discrete Fourier/cosine transform matrix, which would make robust tensor completion more effective.

84 Transform-based t-SVD
Let Φ be a unitary transform matrix with ΦΦ^H = Φ^HΦ = I. Â_Φ denotes the third-order tensor obtained by multiplying all tubes along the third dimension of A by Φ. The Φ-product of A ∈ C^{n_1×n_2×n_3} and B ∈ C^{n_2×n_4×n_3} is the tensor C = A ∗_Φ B ∈ C^{n_1×n_4×n_3} obtained by multiplying the corresponding frontal slices facewise, Ĉ^(i)_Φ = Â^(i)_Φ B̂^(i)_Φ, i = 1, ..., n_3, and then applying Φ^H to the tubes of the result.

85 Transform-based t-SVD
Theorem: Suppose that A ∈ C^{n_1×n_2×n_3}. Then A can be factorized as A = U ∗_Φ S ∗_Φ V^H, where U ∈ C^{n_1×n_1×n_3} and V ∈ C^{n_2×n_2×n_3} are unitary tensors with respect to the Φ-product, and S ∈ C^{n_1×n_2×n_3} is an f-diagonal tensor.

86 Transform-based t-SVD
Definition: The transformed tubal multi-rank of a tensor A ∈ C^{n_1×n_2×n_3} is the vector r ∈ R^{n_3} whose i-th entry is the rank of the i-th frontal slice of Â_Φ, i.e., r_i = rank(Â^(i)_Φ). The transformed tubal tensor rank, denoted rank_tt(A), is defined as the number of nonzero singular tubes of S, where S comes from the transformed t-SVD A = U ∗_Φ S ∗_Φ V^H.
Definition: The transformed tubal nuclear norm of a tensor A ∈ C^{n_1×n_2×n_3}, denoted ‖A‖_TTNN, is the sum of the nuclear norms of all the frontal slices of Â_Φ, i.e.,
‖A‖_TTNN = Σ_{i=1}^{n_3} ‖Â^(i)_Φ‖_*.
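A numpy sketch of the Φ-product, the transformed tubal multi-rank, and the TTNN for a generic unitary Φ; a random orthogonal matrix stands in for a Haar transform here, and all names are illustrative:

```python
import numpy as np

def phi_transform(A, Phi):
    """Apply the unitary matrix Phi to every tube along the third dimension."""
    return np.einsum('st,ijt->ijs', Phi, A)

def tprod_phi(A, B, Phi):
    """Phi-product: facewise products in the transform domain, inverted by Phi^H."""
    Ch = np.einsum('ikt,kjt->ijt', phi_transform(A, Phi), phi_transform(B, Phi))
    return phi_transform(Ch, Phi.conj().T)

def transformed_multi_rank(A, Phi, tol=1e-10):
    Ah = phi_transform(A, Phi)
    return [np.linalg.matrix_rank(Ah[:, :, t], tol=tol) for t in range(A.shape[2])]

def ttnn(A, Phi):
    """Transformed tubal nuclear norm: sum of slice nuclear norms under Phi."""
    Ah = phi_transform(A, Phi)
    return sum(np.linalg.norm(Ah[:, :, t], 'nuc') for t in range(A.shape[2]))

Phi, _ = np.linalg.qr(np.random.randn(5, 5))   # any unitary Phi works
X = tprod_phi(np.random.randn(8, 2, 5), np.random.randn(2, 6, 5), Phi)
print(transformed_multi_rank(X, Phi))          # each slice rank <= 2
print(ttnn(X, Phi))
```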

87 Transform-based t-SVD
Theorem: For any tensor X ∈ C^{n_1×n_2×n_3}, ‖X‖_TTNN is the convex envelope of the function Σ_{i=1}^{n_3} rank(X̂^(i)_Φ) on the set {X : ‖X‖ ≤ 1}.

88 Transform-based t-SVD
min_{L,E} ‖L‖_TTNN + λ ‖E‖_1, s.t. P_Ω(L + E) = P_Ω(X),
where λ is a penalty parameter and P_Ω is the linear projection such that the entries in the set Ω are given while the remaining entries are missing.

89 Transform-based t-SVD
Assume that rank_tt(L_0) = r and its skinny transformed t-SVD is L_0 = U ∗_Φ S ∗_Φ V^H. L_0 is said to satisfy the transformed tensor incoherence conditions with parameter µ > 0 if
max_{i=1,...,n_1} ‖U^H ∗_Φ e_i‖_F ≤ √(µr/n_1),
max_{j=1,...,n_2} ‖V^H ∗_Φ e_j‖_F ≤ √(µr/n_2),
and
‖U ∗_Φ V^H‖_∞ ≤ √(µr/(n_1 n_2 n_3)),
where e_i and e_j are the tensor bases with respect to Φ.

90 Transform-based t-SVD
Theorem: Suppose that L_0 ∈ C^{n_1×n_2×n_3} obeys the transformed tensor incoherence conditions, and the observation set Ω is uniformly distributed among all sets of cardinality m = ρ n_1 n_2 n_3. Suppose also that each observed entry is independently corrupted with probability γ. Then there exist universal constants c_1, c_2 > 0 such that with probability at least 1 − c_1 (n_(1) n_3)^{−c_2}, the recovery of L_0 with λ = 1/√(ρ n_(1) n_3) is exact, provided that
r ≤ c_r n_(2) / (µ (log(n_(1) n_3))^2) and γ ≤ c_γ,
where c_r and c_γ are two positive constants.

91 Numerical Illustration
Table: The transformed tubal ranks of ten randomly generated tensors (#1-#10) under the level-1 Haar, level-2 Haar, and Fourier transforms.

92 Numerical Illustration
Table: The relative errors of tensor completion for Tensors #1 and #2 under the level-1 Haar, level-2 Haar, and Fourier transforms at different sampling ratios ρ.

93 Numerical Illustration
Table: The relative errors of robust tensor completion for Tensors #1 and #2 under the level-1 Haar, level-2 Haar, and Fourier transforms at different sampling ratios ρ and corruption levels γ.

94 Hyperspectral Image

95 Hyperspectral Image

96 Hyperspectral Image

97 Data-Dependent Transform
Theorem: Let A be a given tensor of size n_1×n_2×n_3 and let A be its unfolding matrix along the third dimension. Suppose rank(A) = k. Then a global optimal solution of
min_{rank(B)=k, Φ} ‖ΦA − B‖_F^2 s.t. Φ^HΦ = ΦΦ^H = I
is given by Φ = U^H and B = ΣV^H, where A = UΣV^H is the singular value decomposition of A.
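A numpy sketch of this data-dependent choice: take Φ = U^H from the SVD of the mode-3 unfolding; the t-th transformed slice then carries energy σ_t, so the energy is sorted into the leading slices (names are illustrative):

```python
import numpy as np

def data_dependent_phi(X):
    """Phi = U^H, where A = U @ Sigma @ V^H is the SVD of the mode-3 unfolding."""
    n1, n2, n3 = X.shape
    A = np.moveaxis(X, 2, 0).reshape(n3, n1 * n2)   # unfolding along the third dimension
    U, s, VH = np.linalg.svd(A, full_matrices=True)
    return U.conj().T

# under this Phi, the slice energies equal the singular values, sorted decreasing
X = np.random.randn(8, 8, 6)
Phi = data_dependent_phi(X)
Xh = np.einsum('st,ijt->ijs', Phi, X)
print(np.round([np.linalg.norm(Xh[:, :, t]) for t in range(6)], 2))
```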

98 Hyperspectral Image

99 Hyperspectral Image

100 Hyperspectral Image

101 Concluding Remarks
Other tensor data sets; theory and algorithms remain to be studied.
