Efficient tensor completion: Low-rank tensor train

Ho N. Phien¹, Hoang D. Tuan¹, Johann A. Bengua¹ and Minh N. Do²

¹ Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia; ngocphien.ho@uts.edu.au, tuan.hoang@uts.edu.au, johann.a.bengua@student.uts.edu.au. ² Department of Electrical and Computer Engineering and the Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL, USA; minhdo@illinois.edu

arXiv v1 [cs.NA], 6 Jan 2016

Abstract—This paper proposes a novel formulation of the tensor completion problem to impute missing entries of data represented by tensors. The formulation is introduced in terms of the tensor train (TT) rank, which can effectively capture global information of tensors thanks to its construction by a well-balanced matricization scheme. Two algorithms are proposed to solve the corresponding tensor completion problem. The first one, called simple low-rank tensor completion via tensor train (SiLRTC-TT), is intimately related to minimizing the TT nuclear norm. The second one is based on a multilinear matrix factorization model that approximates the TT rank of the tensor and is called tensor completion by parallel matrix factorization via tensor train (TMac-TT). These algorithms are applied to complete both synthetic and real-world data tensors. Simulation results on synthetic data show that the proposed algorithms are efficient in estimating missing entries for tensors with either low Tucker rank or low TT rank, while Tucker-based algorithms are only comparable in the case of low Tucker rank tensors. When applied to recover color images represented by ninth-order tensors augmented from third-order ones, the proposed algorithms outperform the Tucker-based algorithms.

Index Terms—Tensor completion, tensor train decomposition, tensor train rank, tensor train nuclear norm, Tucker decomposition.

I. INTRODUCTION

Tensors are multi-dimensional arrays, known as higher-order generalizations of matrices and vectors [1]. Tensors provide a natural way to represent multi-dimensional data objects whose entries are indexed by several continuous or discrete variables. Employing tensors and their decompositions to process data objects has become increasingly popular [2]–[4]. For instance, a color image is a third-order tensor defined by two indices for spatial variables and one index for the color mode. A video comprised of color images is a fourth-order tensor with an additional index for the temporal variable. Although they reside in extremely high-dimensional data spaces, the tensors arising in practical applications are nevertheless often of low rank [1]. Consequently, they can be effectively projected onto much smaller subspaces underlying decompositions such as CANDECOMP/PARAFAC (CP) [5], [6], Tucker [7] and the tensor train (TT) [8], also known as the matrix product state (MPS) [9]–[11].

Motivated by the success of low-rank matrix completion (LRMC) [12]–[14], much recent effort has been made to extend its concept to low-rank tensor completion (LRTC). In fact, LRTC has become pivotal in computer vision and graphics, signal processing and machine learning [15]–[21]. The common target is to recover the missing entries of a tensor from its partially observed entries [22]–[24]. Despite its potential applications, LRTC remains a grand challenge because minimization of the tensor rank, defined as the CP rank [1], is an NP-hard problem. There has been some progress in formulating LRTC via the Tucker rank [15], [18].
However, a crucial drawback of the Tucker rank is that its components are ranks of matrices constructed by an unbalanced matricization scheme (one mode versus the rest). Therefore, the upper bound of each individual rank component is often small, and may not be suitable for describing the global information of the tensor, especially for strongly correlated tensors of high order. In addition, matrix rank minimization is only efficient when the matrix is more balanced. Since the rank of a matrix is at most min{m, n}, where m and n are the number of rows and columns of the matrix, respectively, a large ratio max{m, n}/min{m, n} effectively undermines matrix rank minimization.

In this paper we introduce a novel formulation of LRTC based on the TT rank [8], which is different from the Tucker rank. Specifically, the TT rank is constituted by ranks of matrices formed by a well-balanced matricization scheme, i.e. matricizing the tensor along one or a few modes. This gives a large advantage in representing the tensor rank, in the sense that its components can have large upper bounds, and consequently it provides a much better means to capture the global information in the tensor. We also analyze how the rank of a matrix is closely related to the concept of von Neumann entropy in quantum information theory [25]. It is then shown that the weakness of LRTC formulated by the Tucker rank can be mitigated by utilizing the TT rank instead.

Two algorithms are proposed to approximately solve the proposed LRTC formulation, namely SiLRTC-TT and TMac-TT. The former is based on the SiLRTC algorithm [15], which employs block coordinate descent (BCD) for optimization together with tools such as singular value thresholding from the matrix rank minimization problem [12], [13]. SiLRTC-TT mainly solves the TT nuclear norm minimization problem, which is a convex surrogate for the new LRTC formulation. Here, we define the TT nuclear norm of a tensor as a sum of weighted nuclear norms of matrices formed by matricizing the tensor along one or a few modes. The latter, adapted from its counterpart TMac [21], solves a weighted multilinear matrix factorization model. Although this model is non-convex, it can be solved effectively in the sense that no computationally expensive SVD is needed, in contrast to the former. The algorithms are applied to complete both synthetic and real-world data described by tensors which are assumed to have low Tucker rank or low TT rank.

Empirical results on synthetic data show that the proposed algorithms are much more efficient than the Tucker-based algorithms in the case of low TT rank, and are comparable in the case of low Tucker rank tensors. When studying real-world data such as color images, we introduce a tensor augmentation scheme called ket augmentation (KA) to represent a lower-order tensor by a higher-order one without changing its number of entries. The KA scheme, extended from the one introduced in [26], provides a convenient means to obtain higher-order tensor representations of visual data. We apply the proposed algorithms to complete a few color images represented by ninth-order tensors, and the results show that our algorithms outperform the Tucker-based ones. We also show that our proposed algorithms recover augmented tensors more efficiently than tensors without the augmentation scheme.

The rest of the paper is organized as follows. Section II provides notations and preliminaries of tensors. In Section III, we first review the conventional formulation of LRTC and then introduce our new formulation in terms of the TT rank, together with algorithms to solve it. We introduce the tensor augmentation scheme KA in Section IV. Section V presents simulation results. Finally, we conclude our work in Section VI.

II. NOTATIONS AND PRELIMINARIES OF TENSORS

We adopt the mathematical notations and preliminaries of tensors from [1]. A tensor is a multi-dimensional array and its order (also known as its way or mode) is the number of its dimensions. Scalars are zero-order tensors denoted by lowercase letters ($x, y, z, \ldots$). Vectors and matrices are first- and second-order tensors, denoted by boldface lowercase letters ($\mathbf{x}, \mathbf{y}, \mathbf{z}, \ldots$) and capital letters ($X, Y, Z, \ldots$), respectively. A higher-order tensor (a tensor of order three or above) is denoted by a calligraphic letter ($\mathcal{X}, \mathcal{Y}, \mathcal{Z}, \ldots$). An $N$th-order tensor is denoted as $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, where $I_k$, $k = 1, \ldots, N$, is the dimension of mode $k$. The elements of $\mathcal{X}$ are denoted $x_{i_1 i_2 \cdots i_N}$, where $1 \le i_k \le I_k$, $k = 1, \ldots, N$. A mode-$n$ fiber of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is a vector defined by fixing all indices but $i_n$, denoted by $\mathbf{x}_{i_1 \cdots i_{n-1} : i_{n+1} \cdots i_N}$.

Mode-$n$ matricization (also known as mode-$n$ unfolding or flattening) of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is the process of reshaping the tensor into a matrix $X_{(n)} \in \mathbb{R}^{I_n \times (I_1 \cdots I_{n-1} I_{n+1} \cdots I_N)}$ by rearranging the mode-$n$ fibers to be the columns of the resulting matrix. Tensor element $(i_1, \ldots, i_{n-1}, i_n, i_{n+1}, \ldots, i_N)$ maps to matrix element $(i_n, j)$ such that

$j = 1 + \sum_{k=1, k \neq n}^{N} (i_k - 1) J_k$ with $J_k = \prod_{m=1, m \neq n}^{k-1} I_m$. (1)

The mode-$n$ product of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ with a matrix $A \in \mathbb{R}^{J \times I_n}$ results in a new tensor of size $I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_N$, denoted $\mathcal{X} \times_n A$. Elementwise, it is described by

$(\mathcal{X} \times_n A)_{i_1 \cdots i_{n-1}\, j\, i_{n+1} \cdots i_N} = \sum_{i_n=1}^{I_n} x_{i_1 i_2 \cdots i_N}\, a_{j i_n}$. (2)

The Tucker decomposition (TD) is a form of higher-order principal component analysis [1], [7]. It decomposes a tensor into a core tensor multiplied by a matrix along each mode. For a given tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, its TD is written as

$\mathcal{X} = \mathcal{G} \times_1 A^{(1)} \times_2 A^{(2)} \cdots \times_N A^{(N)}$, (3)

where the core tensor $\mathcal{G} \in \mathbb{R}^{r_1 \times r_2 \times \cdots \times r_N}$ and the factor matrices $A^{(k)} \in \mathbb{R}^{I_k \times r_k}$, $k = 1, \ldots, N$. The vector $r = (r_1, r_2, \ldots, r_N)$, where $r_n = \operatorname{rank}(X_{(n)})$ is the rank of the corresponding unfolding matrix $X_{(n)}$, is called the Tucker rank of the tensor $\mathcal{X}$.
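For concreteness, the following minimal NumPy sketch illustrates the mode-$n$ matricization in (1) and the mode-$n$ product in (2). It is an illustration only, not code from the paper; the function names are ours, and the Fortran-order reshape is used so that the column ordering matches the index map in (1).

```python
import numpy as np

def unfold(X, n):
    """Mode-n matricization X_(n): mode n indexes the rows, the remaining
    modes are flattened into the columns with the ordering of Eq. (1)."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1, order="F")

def fold(M, n, shape):
    """Inverse of unfold: reshape the matrix back into a tensor of `shape`."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full, order="F"), 0, n)

def mode_n_product(X, A, n):
    """Mode-n product X x_n A for A of size J x I_n (Eq. (2))."""
    new_shape = X.shape[:n] + (A.shape[0],) + X.shape[n + 1:]
    return fold(A @ unfold(X, n), n, new_shape)
```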
The inner product of two tensors $\mathcal{X}, \mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is defined as

$\langle \mathcal{X}, \mathcal{Y} \rangle = \sum_{i_1} \sum_{i_2} \cdots \sum_{i_N} x_{i_1 i_2 \cdots i_N}\, y_{i_1 i_2 \cdots i_N}$. (4)

Accordingly, the Frobenius norm of $\mathcal{X}$ is $\|\mathcal{X}\|_F = \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle}$.

III. THE FORMULATION OF TENSOR COMPLETION AND ALGORITHMS

This section first revisits the conventional formulation of LRTC based on the Tucker rank, a generalization of LRMC. Then a new LRTC formulation in terms of the TT rank is introduced, together with algorithms to solve it.

A. Conventional formulation of tensor completion

We give an overview of matrix completion before introducing the formulation for tensor completion. The problem of how to recover the missing entries of a low-rank matrix $T \in \mathbb{R}^{m \times n}$ from its partially known entries, given by a subset $\Omega$, can be studied via the well-known optimization problem [27]:

$\min_X \operatorname{rank}(X)$ s.t. $X_\Omega = T_\Omega$. (5)

The missing entries of $X$ are estimated such that the rank of $X$ is as small as possible. Due to the combinatorial nature of the function $\operatorname{rank}(\cdot)$, problem (5) is NP-hard and one needs to look for surrogates. Minimization of the matrix nuclear norm has proved to be an excellent approximation of the matrix rank, which leads to the following convex optimization problem for matrix completion [12], [13], [28]:

$\min_X \|X\|_*$ s.t. $X_\Omega = T_\Omega$, (6)

where the nuclear norm $\|X\|_*$ is the sum of the singular values of $X$. Note that this matrix nuclear norm optimization is efficient only when $X$ is balanced, i.e. $m \approx n$. Alternatively, one can apply a low-rank matrix factorization model to solve the matrix completion problem [29]:

$\min_{U, V, X} \frac{1}{2} \|UV - X\|_F^2$ s.t. $X_\Omega = T_\Omega$, (7)

where $U \in \mathbb{R}^{m \times r}$, $V \in \mathbb{R}^{r \times n}$, $X \in \mathbb{R}^{m \times n}$ and the integer $r$ is the estimated rank of the matrix $T$.
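As an illustration of how model (7) can be attacked, the sketch below alternates least-squares updates of $U$ and $V$ and re-imposes the observed entries of $T$ at every pass. This is a plain ALS variant written by us for clarity, not the nonlinear successive over-relaxation scheme of [29]; the helper name, initialization and iteration count are our assumptions.

```python
import numpy as np

def matrix_completion_als(T, mask, r, n_iter=200):
    """ALS sketch for min_{U,V,X} 0.5*||UV - X||_F^2 with X_Omega = T_Omega (Eq. (7))."""
    m, n = T.shape
    rng = np.random.default_rng(0)
    X = np.where(mask, T, 0.0)          # zeros in the missing positions
    V = rng.standard_normal((r, n))
    for _ in range(n_iter):
        U = X @ np.linalg.pinv(V)       # update U with V fixed
        V = np.linalg.pinv(U) @ X       # update V with U fixed
        X = U @ V                       # current low-rank estimate
        X[mask] = T[mask]               # re-impose the known entries
    return X
```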

Matrix completion can be generalized to tensors via the concept of tensor rank, e.g. the CANDECOMP/PARAFAC rank (CP rank) or the Tucker rank [1], [7]. For instance, in terms of the Tucker rank, completing an $N$th-order tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ from its known entries, given by an index set $\Omega$, is related to solving the following optimization problem [15], [18], [20], [21]:

$\min_{\mathcal{X}} \sum_{k=1}^{N} \alpha_k \operatorname{rank}(X_{(k)})$ s.t. $\mathcal{X}_\Omega = \mathcal{T}_\Omega$, (8)

where the $\{\alpha_k\}_{k=1}^{N}$ are weights fulfilling the condition $\sum_{k=1}^{N} \alpha_k = 1$. Problem (8) is a weighted multilinear matrix completion problem which is still NP-hard. Therefore, one needs to switch to an alternative by generalizing the matrix case. For instance, (8) can be converted to the following optimization problem [15]:

$\min_{\mathcal{X}} \sum_{k=1}^{N} \alpha_k \|X_{(k)}\|_*$ s.t. $\mathcal{X}_\Omega = \mathcal{T}_\Omega$, (9)

where $\sum_{k=1}^{N} \alpha_k \|X_{(k)}\|_*$ can be defined as the Tucker nuclear norm of the tensor. This problem can then be solved by applying known methods such as block coordinate descent (BCD), which alternately optimizes one group of variables while the other groups remain fixed. Alternatively, one can generalize the low-rank matrix factorization model in (7) as a substitute for (8) and then apply the BCD method with the nonlinear Gauss-Seidel method to solve it [20], [21], [29].

Although the Tucker-based LRTC problem has become increasingly popular, it is only appropriate for tensors with either low Tucker rank or low order, and may be less efficient when applied to real-world data represented by tensors of order higher than three. This weakness comes from the fact that each matrix $X_{(k)}$ in (8) is obtained by matricizing the tensor along one single mode. As a consequence, it is unbalanced, and the corresponding rank is not large enough to capture the global correlation between the elements of the tensor. Even when all the modes have the same dimension ($I_1 = \cdots = I_N \equiv I$), these matrices are highly unbalanced. We can clarify this observation via the concept of von Neumann entropy [25] as follows. Represent $\mathcal{X}$ as a pure state in the space $\mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$,

$\mathcal{X} = \sum_{i_1, i_2, \ldots, i_N} x_{i_1 i_2 \cdots i_N}\, \mathbf{e}_{i_1} \otimes \mathbf{e}_{i_2} \otimes \cdots \otimes \mathbf{e}_{i_N}$, (10)

where $\otimes$ denotes the tensor product [1] and $\mathbf{e}_{i_k} \in \mathbb{R}^{I_k}$ forms an orthonormal basis of $\mathbb{R}^{I_k}$ for each $k = 1, \ldots, N$. Applying the mode-$k$ matricization of $\mathcal{X}$ results in $X_{(k)}$, which represents a pure state of a composite system $AB$ in the space $\mathcal{H}_{AB} \subseteq \mathbb{R}^{m \times n}$, a tensor product of two subspaces $\mathcal{H}_A \subseteq \mathbb{R}^{m}$ and $\mathcal{H}_B \subseteq \mathbb{R}^{n}$ of dimensions $m = I_k$ and $n = \prod_{l=1, l \neq k}^{N} I_l$, respectively. The subsystems $A$ and $B$ are seen as two contiguous partitions consisting of mode $k$ and all the other modes of the tensor, respectively. It follows from (10) that

$X_{(k)} = \sum_{i_k, j} x_{i_k j}\, \mathbf{e}_{i_k} \otimes \mathbf{e}_{j}$, (11)

where the new index $j$ is defined as in (1) and $\mathbf{e}_j = \otimes_{l=1, l \neq k}^{N} \mathbf{e}_{i_l} \in \mathbb{R}^{n}$. According to the Schmidt decomposition [25], there exist orthonormal bases $\{\mathbf{u}_l^A\}$ in $\mathcal{H}_A$ and $\{\mathbf{v}_l^B\}$ in $\mathcal{H}_B$ such that

$X_{(k)} = \sum_{l=1}^{r_k} \lambda_l\, \mathbf{u}_l^A \otimes \mathbf{v}_l^B$, (12)

where $r_k$ is the rank of $X_{(k)}$, the $\lambda_l$ are the nonvanishing singular values, and $\{\mathbf{u}_l^A\}$ and $\{\mathbf{v}_l^B\}$ are the columns of the orthonormal matrices $U$ and $V$ obtained from the SVD $X_{(k)} = U \lambda V^T$. The correlation between the two subsystems $A$ and $B$ can be studied via the von Neumann entropy, defined as [25]

$S_A = -\operatorname{Trace}\big(\rho_A \log_2 \rho_A\big)$, (13)

where $\rho_A$ is the reduced density matrix of the composite system, computed by taking the partial trace of the density matrix $\rho_{AB}$ with respect to $B$.
Specifically, we have

$\rho_{AB} = X_{(k)} \big(X_{(k)}\big)^T = \Big(\sum_{l=1}^{r_k} \lambda_l\, \mathbf{u}_l^A \otimes \mathbf{v}_l^B\Big)\Big(\sum_{j=1}^{r_k} \lambda_j\, \mathbf{u}_j^A \otimes \mathbf{v}_j^B\Big)^T$. (14)

Then $\rho_A$ is computed as

$\rho_A = \operatorname{Trace}_B(\rho_{AB}) = \sum_{l=1}^{r_k} \lambda_l^2\, \mathbf{u}_l^A \big(\mathbf{u}_l^A\big)^T$. (15)

Substituting (15) into (13) yields

$S_A = -\sum_{l=1}^{r_k} \lambda_l^2 \log_2 \lambda_l^2$. (16)

Similarly,

$S_B = -\operatorname{Trace}\big(\rho_B \log_2 \rho_B\big) = -\sum_{l=1}^{r_k} \lambda_l^2 \log_2 \lambda_l^2$, (17)

which is the same as $S_A$, so $S_A = S_B \equiv S$. This entropy reflects the correlation, or degree of entanglement, between subsystem $A$ and its complement $B$ [30]. Without loss of generality, the normalization condition $\sum_{l=1}^{r_k} \lambda_l^2 = 1$ can be imposed, so $0 \le S \le \log_2 r_k$. There is no correlation between subsystems $A$ and $B$ whenever $S = 0$ (where $\lambda_1 = 1$ and the other singular values are zero). There exists correlation between subsystems $A$ and $B$ whenever $S \neq 0$, with its maximum $S = \log_2 r_k$ (when $\lambda_1 = \cdots = \lambda_{r_k} = 1/\sqrt{r_k}$). If the singular values decay significantly, e.g. exponentially, we can also keep only the $r$ largest singular values ($r \ll m$) without considerably losing accuracy in quantifying the amount of correlation between the subsystems. Then $r$ is referred to as the approximate low rank of the matrix $X_{(k)}$, which means that the amount of correlation between the elements of the matrix is small.
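The following small NumPy illustration (ours, not from the paper) computes the entanglement entropy of (13)–(17) directly from the normalized singular values of a mode-$k$ matricization, which is all that (16) requires:

```python
import numpy as np

def entanglement_entropy(X, k):
    """Von Neumann entropy S of the mode-k matricization of tensor X,
    computed from its normalized singular values (Eqs. (13)-(17))."""
    Xk = np.moveaxis(X, k, 0).reshape(X.shape[k], -1)   # mode-k unfolding
    s = np.linalg.svd(Xk, compute_uv=False)
    p = (s / np.linalg.norm(s)) ** 2                    # lambda_l^2 with sum = 1
    p = p[p > 1e-15]                                    # drop vanishing values
    return -np.sum(p * np.log2(p))                      # 0 <= S <= log2(rank)
```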

On the contrary, if the two subsystems $A$ and $B$ are highly correlated, i.e. the singular values decay very slowly, then $r$ needs to be as large as possible to capture the correlation in the tensor. Therefore, the problem of matrix rank minimization is in fact intimately related to the problem of von Neumann entropy minimization.

From the above analysis, we see that the amount of correlation between the elements of the matrix $X_{(k)}$ depends on the rank $r_k$, which is bounded by $m = I_k$. Therefore, when the dimensions of the modes are slightly different or the same, that is $I_1 \approx I_2 \approx \cdots \approx I_N \equiv I$, the matrix $X_{(k)}$ is essentially unbalanced, since $m \ll n$ when either $I$ or $N$ is large. As a result, the upper bound on each $r_k$ is too small to describe the correlation of the tensor when $\mathcal{X}$ has high order ($N > 3$), which makes the Tucker-based LRTC no longer appropriate for a highly correlated tensor. In the next subsection we introduce a new LRTC problem formulated in terms of the TT rank, which is defined by more balanced matrices.

B. Tensor completion formulation in the concept of tensor train rank and algorithms

The tensor train (TT) decomposition decomposes a higher-order tensor into a sequence of connected lower-order tensors [8]. Using Vidal's decomposition [31], the TT decomposition of a tensor described by (10) can be written in the following form:

$\mathcal{X} = \sum_{i_1, \ldots, i_N} \Gamma^{[1]}_{i_1} \lambda^{[1]} \Gamma^{[2]}_{i_2} \lambda^{[2]} \cdots \lambda^{[N-1]} \Gamma^{[N]}_{i_N}\; \mathbf{e}_{i_1} \otimes \cdots \otimes \mathbf{e}_{i_N}$, (18)

where, for $k = 1, \ldots, N$, $\Gamma^{[k]}_{i_k}$ is an $r_{k-1} \times r_k$ matrix and $\lambda^{[k]}$ is the $r_k \times r_k$ diagonal singular matrix, with $r_0 = r_N = 1$. For every $k$, the following orthogonality conditions are fulfilled:

$\sum_{i_k=1}^{I_k} \Gamma^{[k]}_{i_k} \lambda^{[k]} \big(\Gamma^{[k]}_{i_k} \lambda^{[k]}\big)^T = I^{[k-1]}$, (19)

$\sum_{i_k=1}^{I_k} \big(\lambda^{[k-1]} \Gamma^{[k]}_{i_k}\big)^T \lambda^{[k-1]} \Gamma^{[k]}_{i_k} = I^{[k]}$, (20)

where $I^{[k-1]}$ and $I^{[k]}$ are identity matrices of sizes $r_{k-1} \times r_{k-1}$ and $r_k \times r_k$, respectively. Based on the form (18), each component $r_k$ of the so-called TT rank of the tensor, defined simply as $r = (r_1, r_2, \ldots, r_{N-1})$, can be determined directly via the singular matrices $\lambda^{[k]}$. Specifically, to determine $r_k$, rewrite (18) as

$\mathcal{X} = \sum_{i_1, i_2, \ldots, i_N} \mathbf{u}_{[1 \cdots k]\, i_1 \cdots i_k}\; \lambda^{[k]}\; \mathbf{v}_{[k+1 \cdots N]\, i_{k+1} \cdots i_N}$, (21)

where

$\mathbf{u}_{[1 \cdots k]\, i_1 \cdots i_k} = \Gamma^{[1]}_{i_1} \lambda^{[1]} \cdots \Gamma^{[k]}_{i_k} \bigotimes_{l=1}^{k} \mathbf{e}_{i_l}$, (22)

and

$\mathbf{v}_{[k+1 \cdots N]\, i_{k+1} \cdots i_N} = \Gamma^{[k+1]}_{i_{k+1}} \lambda^{[k+1]} \cdots \Gamma^{[N]}_{i_N} \bigotimes_{l=k+1}^{N} \mathbf{e}_{i_l}$. (23)

We can also rewrite (21) in the matrix form of an SVD as

$X_{[k]} = U \lambda^{[k]} V^T$, (24)

where $X_{[k]} \in \mathbb{R}^{m \times n}$ ($m = \prod_{l=1}^{k} I_l$, $n = \prod_{l=k+1}^{N} I_l$) is the mode-$(1, 2, \ldots, k)$ matricization of the tensor $\mathcal{X}$ [8], and $U \in \mathbb{R}^{m \times r_k}$ and $V \in \mathbb{R}^{n \times r_k}$ are orthogonal matrices. Obviously $r_k$, defined as the number of nonvanishing singular values of $\lambda^{[k]}$, is the rank of $X_{[k]}$. In practice, the mode-$(1, 2, \ldots, k)$ matricization $X_{[k]}$ of the tensor $\mathcal{X}$ can be obtained by reshaping $\mathcal{X}$ in such a way that the first $k$ indices enumerate the rows of $X_{[k]}$ and the last $(N-k)$ indices enumerate its columns [8], [32]. Specifically, the tensor element $(i_1, i_2, \ldots, i_N)$ of $\mathcal{X}$ maps to the element $(i, j)$ of $X_{[k]}$ with

$i = 1 + \sum_{m=1}^{k} \Big[ (i_m - 1) \prod_{l=1}^{m-1} I_l \Big]$, (25)

$j = 1 + \sum_{m=k+1}^{N} \Big[ (i_m - 1) \prod_{l=k+1}^{m-1} I_l \Big]$. (26)

Since the matrix $X_{[k]}$ is obtained by matricizing along a few modes rather than a single mode, its rank $r_k$ is bounded by $\min\big(\prod_{l=1}^{k} I_l, \prod_{l=k+1}^{N} I_l\big)$. Therefore the TT rank is in general more appropriate than the Tucker rank for quantifying the correlation of higher-order tensors. We now propose to formulate the LRTC problem in terms of the TT rank as

$\min_{\mathcal{X}} \sum_{k=1}^{N-1} \alpha_k \operatorname{rank}(X_{[k]})$ s.t. $\mathcal{X}_\Omega = \mathcal{T}_\Omega$, (27)

where $\alpha_k$ denotes the weight with which the rank of the matrix $X_{[k]}$ contributes to the TT rank, subject to the condition $\sum_{k=1}^{N-1} \alpha_k = 1$.
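A minimal NumPy sketch of the mode-$(1, \ldots, k)$ matricization and the TT rank read off from it is given below (ours, for illustration). The Fortran-order reshape reproduces the index maps (25)–(26), in which earlier indices vary fastest; the function names are assumptions.

```python
import numpy as np

def unfold_tt(X, k):
    """Mode-(1,...,k) matricization X_[k]: the first k modes index the rows,
    the remaining N-k modes index the columns (Eqs. (25)-(26))."""
    m = int(np.prod(X.shape[:k]))
    return X.reshape(m, -1, order="F")   # earlier indices vary fastest

def tt_rank(X, tol=1e-10):
    """TT rank (r_1, ..., r_{N-1}) from the singular values of each X_[k]."""
    return tuple(
        int(np.sum(np.linalg.svd(unfold_tt(X, k), compute_uv=False) > tol))
        for k in range(1, X.ndim)
    )
```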
The LRTC problem is now relaxed to a weighted multilinear matrix completion problem similar to (8). It is still difficult to tackle directly, since $\operatorname{rank}(\cdot)$ is presumably hard. Thus, we convert this problem into two separate problems. The first one is based on the so-called TT nuclear norm, defined as

$\|\mathcal{X}\|_* = \sum_{k=1}^{N-1} \alpha_k \|X_{[k]}\|_*$, (28)

and is given by

$\min_{\mathcal{X}} \sum_{k=1}^{N-1} \alpha_k \|X_{[k]}\|_*$ s.t. $\mathcal{X}_\Omega = \mathcal{T}_\Omega$. (29)

Problem (29) is defined similarly to (9), where the Tucker nuclear norm is used instead. Moreover, from (29) we can recover the square model of [32] by choosing the weights such that $\alpha_k = 1$ if $k = \operatorname{round}(N/2)$ and $\alpha_k = 0$ otherwise. Problem (29) can be further converted to the following problem:

$\min_{\mathcal{X}, M_k} \sum_{k=1}^{N-1} \Big( \alpha_k \|M_k\|_* + \frac{\beta_k}{2} \|X_{[k]} - M_k\|_F^2 \Big)$ s.t. $\mathcal{X}_\Omega = \mathcal{T}_\Omega$, (30)

where the $\beta_k$ are positive numbers; it can be solved by employing the BCD method, as discussed later in this section.

The second proposed problem is based on the multilinear matrix factorization model. More specifically, a matrix $X_{[k]} \in \mathbb{R}^{m_k \times n_k}$ of rank $r_k$ can be factorized as $X_{[k]} = U_k V_k$, where $U_k \in \mathbb{R}^{m_k \times r_k}$ and $V_k \in \mathbb{R}^{r_k \times n_k}$. Therefore, instead of optimizing the nuclear norm of the unfolding matrices $X_{[k]}$, the Frobenius norm is minimized as follows:

$\min_{U_k, V_k, \mathcal{X}} \sum_{k=1}^{N-1} \frac{\alpha_k}{2} \|U_k V_k - X_{[k]}\|_F^2$ s.t. $\mathcal{X}_\Omega = \mathcal{T}_\Omega$, (31)

where $U_k \in \mathbb{R}^{\prod_{j=1}^{k} I_j \times r_k}$ and $V_k \in \mathbb{R}^{r_k \times \prod_{j=k+1}^{N} I_j}$. This model is similar to the one proposed in [20], [21] (an extension of the matrix completion model [29]), where the Tucker rank is employed instead. To solve the convex but nondifferentiable optimization problem (30), one can adapt the TT nuclear norm to algorithms such as FaLRTC in [15]. To solve (31), we can apply the alternating least squares (ALS) technique to alternately optimize $U_k$, $V_k$ and $\mathcal{X}$ until convergence; specifically, one can modify algorithms such as TMac in [21] and TC-MLFM in [20] by incorporating the concept of TT rank into them. The essential advantage of this multilinear matrix factorization model over the model in (30) is that it avoids many SVDs and hence can substantially reduce the computational time.

Let us now propose the first algorithm to solve the optimization problem (30), which is deeply rooted in the SiLRTC algorithm [15]. We call our algorithm SiLRTC-TT, which stands for simple low-rank tensor completion via tensor train. The central idea of this algorithm is the BCD method, which alternately optimizes one group of variables while the other groups remain fixed. More specifically, the variables are divided into two main groups. The first one contains the unfolding matrices $M_1, M_2, \ldots, M_{N-1}$, and the other is the tensor $\mathcal{X}$. Computing each matrix $M_k$ amounts to solving the following optimization problem with $X_{[k]}$ fixed:

$\min_{M_k} \; \alpha_k \|M_k\|_* + \frac{\beta_k}{2} \|X_{[k]} - M_k\|_F^2$. (32)

The optimal solution of this problem has the closed form [13]

$M_k = D_{\gamma_k}\big(X_{[k]}\big)$, (33)

where $\gamma_k = \alpha_k/\beta_k$ and $D_{\gamma_k}(X_{[k]})$ denotes the singular value thresholding of $X_{[k]}$ [12]. Specifically, if the SVD of $X_{[k]}$ is $U \lambda V^T$, its thresholded SVD is defined as

$D_{\gamma_k}\big(X_{[k]}\big) = U \lambda_{\gamma_k} V^T$, (34)

where $\lambda_{\gamma_k} = \operatorname{diag}\big(\max(\lambda_l - \gamma_k, 0)\big)$. After updating all the matrices $M_k$, we turn to the other block and compute the tensor $\mathcal{X}$, whose elements are given by

$x_{i_1 \cdots i_N} = \begin{cases} \Big( \frac{\sum_{k=1}^{N-1} \beta_k\, \operatorname{fold}(M_k)}{\sum_{k=1}^{N-1} \beta_k} \Big)_{i_1 \cdots i_N}, & (i_1 \cdots i_N) \notin \Omega \\ t_{i_1 \cdots i_N}, & (i_1 \cdots i_N) \in \Omega \end{cases}$ (35)

The pseudo-code of this algorithm is given in Algorithm I. The convergence condition is reached when the relative error between two successive tensors $\mathcal{X}$ is smaller than a threshold. The algorithm is guaranteed to converge to a global solution since the objective in (30) is convex and its nonsmooth term is separable. We can also apply this algorithm to the square model [32] by simply choosing the weights such that $\alpha_k = 1$ if $k = \operatorname{round}(N/2)$ and $\alpha_k = 0$ otherwise; for this particular case we call the algorithm SiLRTC-Square.

Algorithm I: SiLRTC-TT
Input: The observed data $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, index set $\Omega$.
Parameters: $\alpha_k$, $\beta_k$, $k = 1, \ldots, N-1$.
1: Initialization: $\mathcal{X}^0$ with $\mathcal{X}^0_\Omega = \mathcal{T}_\Omega$, $l = 0$.
2: While not converged do:
3: for $k = 1$ to $N-1$ do
4: Unfold the tensor $\mathcal{X}^l$ to get $X^l_{[k]}$
5: $M^{l+1}_k = D_{\alpha_k/\beta_k}\big(X^l_{[k]}\big)$
6: end for
7: Update $\mathcal{X}^{l+1}$ from the $M^{l+1}_k$ by (35)
8: End while
Output: The recovered tensor $\mathcal{X}$ as an approximation of $\mathcal{T}$
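A compact NumPy sketch of Algorithm I is given below for illustration; it is not the paper's reference implementation. The singular value thresholding helper implements (33)–(34), the Fortran-order reshapes reproduce the mode-$(1,\ldots,k)$ unfolding, and the fixed iteration count in place of the convergence test (49) is our simplification.

```python
import numpy as np

def svt(M, gamma):
    """Singular value thresholding D_gamma(M) of Eqs. (33)-(34)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - gamma, 0.0)) @ Vt

def silrtc_tt(T, mask, alpha, beta, n_iter=100):
    """SiLRTC-TT sketch (Algorithm I): BCD on problem (30)."""
    X = np.where(mask, T, 0.0)
    shape = T.shape
    for _ in range(n_iter):
        # block 1: thresholded TT unfoldings M_k (Eq. (33))
        M = [svt(X.reshape(int(np.prod(shape[:k + 1])), -1, order="F"),
                 alpha[k] / beta[k]) for k in range(len(shape) - 1)]
        # block 2: weighted average of the refolded M_k (Eq. (35))
        X = sum(b * Mk.reshape(shape, order="F") for b, Mk in zip(beta, M)) / sum(beta)
        X[mask] = T[mask]                 # keep the observed entries
    return X
```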
To solve the problem given by (31), we again apply the BCD method to alternately optimize the different groups of variables. Specifically, we first solve the following problem for $k = 1, 2, \ldots, N-1$:

$\min_{U_k, V_k, X_{[k]}} \|U_k V_k - X_{[k]}\|_F^2$. (36)

As the problem is convex with respect to each block of variables $U_k$, $V_k$ and $X_{[k]}$ while the other two are fixed, we have the following updates:

$U_k^{l+1} = X^l_{[k]} \big(V_k^l\big)^T \big(V_k^l (V_k^l)^T\big)^{\dagger}$, (37)

$V_k^{l+1} = \big((U_k^{l+1})^T U_k^{l+1}\big)^{\dagger} \big(U_k^{l+1}\big)^T X^l_{[k]}$, (38)

$X^{l+1}_{[k]} = U_k^{l+1} V_k^{l+1}$, (39)

where $\dagger$ denotes the Moore-Penrose pseudoinverse. It was shown in [21] that we can replace (37) by

$U_k^{l+1} = X^l_{[k]} \big(V_k^l\big)^T$, (40)

to avoid computing the Moore-Penrose pseudoinverse $\big(V_k^l (V_k^l)^T\big)^{\dagger}$. The rationale is that we only need the product $U_k^{l+1} V_k^{l+1}$ to compute $X^{l+1}_{[k]}$ in (39), which is the same whether (37) or (40) is used. After updating $U_k^{l+1}$, $V_k^{l+1}$ and $X^{l+1}_{[k]}$ for all $k = 1, 2, \ldots, N-1$, we compute the elements of the tensor $\mathcal{X}^{l+1}$ as

$x^{l+1}_{i_1 \cdots i_N} = \begin{cases} \Big( \sum_{k=1}^{N-1} \alpha_k\, \operatorname{fold}\big(X^{l+1}_{[k]}\big) \Big)_{i_1 \cdots i_N}, & (i_1 \cdots i_N) \notin \Omega \\ t_{i_1 \cdots i_N}, & (i_1 \cdots i_N) \in \Omega \end{cases}$ (41)

We name this algorithm TMac-TT, which stands for tensor completion by parallel matrix factorization in the concept of tensor train; its pseudo-code is summarized in Algorithm II. Again, Algorithm II can be applied to the square model [32] by choosing the weights such that $\alpha_k = 1$ if $k = \operatorname{round}(N/2)$ and $\alpha_k = 0$ otherwise; we call this variant TMac-Square.
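For illustration, a minimal NumPy sketch of the TMac-TT updates (38)–(41) is shown below, using the simplified update (40); the random initialization of the $V_k$, the fixed iteration count and the function name are our assumptions rather than the paper's reference code.

```python
import numpy as np

def tmac_tt(T, mask, alpha, ranks, n_iter=200):
    """TMac-TT sketch: parallel matrix factorization over the TT unfoldings X_[k]."""
    shape, N = T.shape, T.ndim
    rng = np.random.default_rng(0)
    X = np.where(mask, T, 0.0)
    V = [rng.standard_normal((r, int(np.prod(shape[k + 1:]))))
         for k, r in enumerate(ranks)]
    for _ in range(n_iter):
        X_new = np.zeros(shape)
        for k in range(N - 1):
            Xk = X.reshape(int(np.prod(shape[:k + 1])), -1, order="F")
            U = Xk @ V[k].T                                   # Eq. (40)
            V[k] = np.linalg.pinv(U) @ Xk                     # least-squares V, cf. Eq. (38)
            X_new += alpha[k] * (U @ V[k]).reshape(shape, order="F")  # Eqs. (39), (41)
        X = X_new
        X[mask] = T[mask]                                     # keep the observed entries
    return X
```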

6 6 if = round(n/2) otherwise α = 0, and we call it as - Square. Algorithm II: -TT Input: The observed data T R I 1 I 2 I N, index set Ω. Parameters: α i, r i, i = 1,..., N 1. 1: Initialization: U 0, V 0, X 0, with XΩ 0 = T Ω, l = 0. While not converged do: 2: for = 1 to N 1 do 3: Unfold the tensor X l to get X[] l 4: U l+1 i = X[] l (V l)t 5: V l+1 = ((U l+1 ) T U l+1 ) (U l+1 ) T X[] l 6: X l+1 = U l+1 [] V l+1 7: end 8: Update the tensor X l+1 using (41) End while Output: The recovered tensor X as an approximation of T the color is indexed by j where j = 1, 2, 3 corresponding to red, green and blue colors, respectively). This bloc can be represented as T [ j] = 4 c i1je i1, (42) i 1=1 where c i1j is the pixel value corresponding to color j and e i1 is the orthonormal base which is defined as e 1 = (1, 0, 0, 0), e 2 = (0, 1, 0, 0), e 3 = (0, 0, 1, 0) and e 4 = (0, 0, 0, 1). The value i 1 = 1, 2, 3 and 4 can be considered as labeling the upleft, up-right, down-left and down-right pixels, respectively. For all three colors, we have three blocs which are presented by T [ ] = 4 i 1=1 j=1 3 c i1je i1 u j, (43) where u j is also an orthonormal base which is defined as u 1 = (1, 0, 0), u 2 = (0, 1, 0), u 3 = (0, 0, 1). We now consider C. Computational complexity of algorithms We analyze the computational complexity of algorithms applied to complete a tensor X R I1 I2 I N in the Table I where we assume that I 1 = I 2 = = I N = I, the Tucer ran and TT ran are the same r 1 = r 2 = = r N = r. Table I: Computational complexity of algorithms for one iteration. Algorithm Computational complexity O(NI N+1 ) -TT O(I 3N/2 + I 3N/2 1 ) O(3NI N r) -TT O(3(N 1)I N r) IV. TENSOR AUGMENTATION In this section, we introduce the et augmentation (KA) to represent a lower-order tensor by a higher-order one, i.e. to cast an Nth-order tensor T R I1 I2 I N into a Kthorder tensor T R J 1 J 2 J K, where K N and N I l = K J l. Higher-order representation of the tensor offers some important advantages. For instance, TT decomposition is more efficient for the augmented tensor because the local structure of the data can be exploited effectively in terms of computational resources. Actually, if the tensor is slightly correlated, its augmented tensor can be represented by a lowran TT [8], [26]. The KA was originally introduced in [26] for casting a grayscale image into real et state of a Hilbert space, which is simply a higher-order tensor, using an appropriate bloc structured addressing. Here we generalize the KA scheme for third-order tensors T R I1 I2 I3 that represent color images, where I 1 I 2 = 2 n 2 n (n 1 Z) is the number of pixels in the image and I 3 = 3 is the number of colors (red, green and blue). Let us start with an initial bloc, labeled as i 1, of 2 2 pixels corresponding to a single color j (assume that Figure 1: A structured bloc addressing procedure to cast an image into a higher-order tensor. (a) Example for an image of size represented by (43). (b) Illustration for an image of size represented by (44). a larger bloc labeled as i 2 mae up of four inner sub-blocs for each color j as shown in Fig. 1. In total, the new bloc is represented by T [ ] = 4 4 i 2=1 i 1=1 j=1 3 c i2i 1je i2 e i1 u j. (44) Generally, this bloc structure can be extended to a size of 2 n 2 n 3 after several steps until it can present all the values of pixels in the image. 
Finally, the image can be cast into an (n + 1)th-order tensor C R containing all the pixel values as follows, T [2n 2 n 3] = 4 i n,...,i 1=1 j=1 3 c in i 1je in e i1 u j. (45) When the image is represented by a real et state, its entanglement entropy can reflect the correlation between individual pixels as due to their relative positions in the image. Besides, this presentation is suitable for the image processing as it not only preserves the pixels values of the image but also rearrange them in a higher-order tensor such that the richness of textures

V. SIMULATIONS

We apply the proposed algorithms to complete both synthetic data and color images. Simulation results are compared with commonly used algorithms, i.e. SiLRTC [15], SiLRTC-Square (the square model of [32]), TMac [21] and TMac-Square. To measure the performance of an LRTC algorithm we compute the relative squared error (RSE) between the approximately recovered tensor $\mathcal{X}$ and the original one $\mathcal{T}$, defined as

$\text{RSE} = \|\mathcal{X} - \mathcal{T}\|_F / \|\mathcal{T}\|_F$. (46)

It is hard to choose the parameters in the models so that optimal solutions are achieved. In the experiments, we simply choose the weights $\alpha_k$ for our proposed algorithms as

$\alpha_k = \frac{\delta_k}{\sum_{k=1}^{N-1} \delta_k}$ with $\delta_k = \min\Big(\prod_{l=1}^{k} I_l, \prod_{l=k+1}^{N} I_l\Big)$, (47)

where $k = 1, \ldots, N-1$. The positive parameters are chosen as $\beta_k = f \alpha_k$, where $f$ is empirically chosen from the values [0.01, 0.05, 0.1, 0.5, 1] such that the algorithm performs best. For the algorithms used for comparison, i.e. SiLRTC and TMac, the weights are chosen as

$\alpha_k = \frac{I_k}{\sum_{k=1}^{N} I_k}$, (48)

where $k = 1, \ldots, N$; the positive parameters are again chosen as $\beta_k = f \alpha_k$, with $f$ empirically chosen from [0.01, 0.05, 0.1, 0.5, 1] to give the best performance. The convergence criterion of our proposed algorithms is defined by computing the relative error of the tensor $\mathcal{X}$ between two successive iterations:

$\epsilon = \frac{\|\mathcal{X}^{l+1} - \mathcal{X}^{l}\|_F}{\|\mathcal{T}\|_F} \le \text{tol}$, (49)

where we set tol = $10^{-4}$ and also impose a maximum number of iterations, maxiter. In what follows, we perform simulations for the algorithms with respect to different missing ratios (mr), defined as

$\text{mr} = \frac{p}{\prod_{k=1}^{N} I_k}$, (50)

where $p$ is the number of missing entries, which are chosen randomly from the tensor $\mathcal{T}$ according to a uniform distribution. These simulations are implemented in a Matlab environment using the FEIT cluster at the University of Technology Sydney.

A. Synthetic data completion

We first perform the simulation on two different types of low-rank tensors, generated synthetically in such a way that the Tucker rank and the TT rank are known in advance.

1) Completion of low TT rank tensors: The $N$th-order tensors $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ of TT rank $(r_1, r_2, \ldots, r_{N-1})$ are generated such that their elements are represented in TT format [8]. Specifically, each element is $t_{i_1 i_2 \ldots i_N} = A^{[1]}_{i_1} A^{[2]}_{i_2} \cdots A^{[N]}_{i_N}$, where $A^{[1]} \in \mathbb{R}^{I_1 \times r_1}$, $A^{[N]} \in \mathbb{R}^{r_{N-1} \times I_N}$ and $A^{[k]} \in \mathbb{R}^{r_{k-1} \times I_k \times r_k}$ for $k = 2, \ldots, N-1$ are generated randomly according to the standard Gaussian distribution $\mathcal{N}(0, 1)$. For simplicity, in this paper we set all components of the TT rank to be the same, and likewise the dimension of each mode, i.e. $r_1 = r_2 = \cdots = r_{N-1} = r$ and $I_1 = I_2 = \cdots = I_N = I$.

The plots of RSE with respect to mr are shown in Fig. 2 for tensors of different orders, 4D, 5D, 6D and 7D, with corresponding TT rank tuples (10, 10, 10) (4D), (5, 5, 5, 5) (5D), (4, 4, 4, 4, 4) (6D) and (4, 4, 4, 4, 4, 4) (7D). From the plots we can see that TMac-TT shows the best performance in most cases. In particular, TMac-TT can recover the tensor successfully despite high missing ratios: even for mr = 0.9, it recovers the tensor with a small RSE in most cases. More importantly, the proposed algorithms SiLRTC-TT and TMac-TT often perform better than their corresponding counterparts, SiLRTC and TMac.

Figure 2: The RSE comparison when applying different LRTC algorithms to synthetic random tensors of low TT rank. Results are shown for different tensor orders: (a) 4D, (b) 5D, (c) 6D and (d) 7D.
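A short NumPy sketch of how such a low-TT-rank test tensor can be generated from random Gaussian cores, as described above, is given below; the contraction order and function name are our choices.

```python
import numpy as np

def random_tt_tensor(dims, ranks, seed=0):
    """Random tensor with TT rank `ranks`, built from Gaussian TT cores:
    t_{i1...iN} = A[1]_{i1} A[2]_{i2} ... A[N]_{iN}."""
    rng = np.random.default_rng(seed)
    r = [1] + list(ranks) + [1]
    cores = [rng.standard_normal((r[k], dims[k], r[k + 1])) for k in range(len(dims))]
    out = cores[0][0]                          # shape (I1, r1)
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))   # contract the shared bond
    return out[..., 0]                         # drop the trailing bond of size 1

# example: a 4D tensor with all TT rank components equal to 10
# T = random_tt_tensor((20, 20, 20, 20), (10, 10, 10))
```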
For a better comparison of the different LRTC algorithms, we present phase diagrams that use a grayscale to indicate how successfully a tensor can be recovered over a range of TT ranks and missing ratios. If RSE $\le \epsilon$, where $\epsilon$ is a small threshold, we say that the tensor is recovered successfully, represented by a white block in the phase diagram. Otherwise, if RSE $> \epsilon$, the tensor is only partially recovered, with a relative error, and the block is

gray. The recovery fails completely if RSE = 1. Concretely, we show in Fig. 3 the phase diagrams for the different algorithms applied to complete a 5D tensor, where the TT rank $r$ varies from 2 to 16 and $\epsilon$ is a small threshold. We can see that our LRTC algorithms outperform the others. In particular, TMac-TT successfully recovers the tensor for essentially any TT rank and missing ratio.

Figure 3: Phase diagrams for low TT rank tensor completion when applying the different algorithms to a 5D tensor.

2) Completion of low Tucker rank tensors: Let us now apply our proposed algorithms to synthetic random tensors of low Tucker rank. The $N$th-order tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ of Tucker rank $(r_1, r_2, \ldots, r_N)$ is constructed as $\mathcal{T} = \mathcal{G} \times_1 A^{(1)} \times_2 A^{(2)} \cdots \times_N A^{(N)}$, where the core tensor $\mathcal{G} \in \mathbb{R}^{r_1 \times r_2 \times \cdots \times r_N}$ and the factor matrices $A^{(k)} \in \mathbb{R}^{I_k \times r_k}$, $k = 1, \ldots, N$, are generated randomly using the standard Gaussian distribution $\mathcal{N}(0, 1)$. Here we choose $r_1 = r_2 = \cdots = r_N = r$ and $I_1 = I_2 = \cdots = I_N = I$ for simplicity. To compare the performance of the algorithms, we show in Fig. 4 the phase diagrams for the different algorithms applied to complete a 5D tensor, where the Tucker rank $r$ varies from 2 to 16 and $\epsilon$ is a small threshold. We can see that TMac and TMac-TT perform much better than the others. Besides, SiLRTC-TT shows better performance when compared to SiLRTC and SiLRTC-Square. Similarly, TMac-TT is better than its particular case TMac-Square.

Figure 4: Phase diagrams for low Tucker rank tensor completion when applying the different algorithms to a 5D tensor.

In summary, although the tensors are generated synthetically to have low Tucker rank, the proposed algorithms are still capable of producing results as good as those obtained by the Tucker-based algorithms. In order to have a better comparison between the algorithms, we next show results of applying them to real-world data such as color images, where the ranks of the tensors are not known in advance.

B. Image completion

A set of color images, namely Peppers, Lenna and House, is employed to test the algorithms with different missing ratios. All the images are initially represented by third-order tensors of the same size. Note that when completing the third-order tensors, we do not expect our proposed methods to prevail against the conventional ones, because the TT rank of a third-order tensor is a special case of its Tucker rank. Thus, the performance of the algorithms

should be mutually comparable.

Figure 5: Performance comparison of the different tensor completion algorithms based on RSE versus the missing ratio when applied to the Peppers image. (a) Original tensor (no order augmentation). (b) Augmented tensor using the KA scheme.

However, for the purpose of comparing the performance of the different algorithms on real data (images) represented by higher-order tensors, we apply the tensor augmentation scheme KA described above to reshape the third-order tensors into higher-order ones without changing the number of entries. Specifically, we start our simulation by casting a third-order tensor $\mathcal{T} \in \mathbb{R}^{256 \times 256 \times 3}$ into a ninth-order tensor $\tilde{\mathcal{T}} \in \mathbb{R}^{4 \times 4 \times \cdots \times 4 \times 3}$ and then apply the tensor completion algorithms to impute its missing entries.

We perform the simulation for the Peppers and Lenna images, where the missing entries of each image are chosen randomly according to a uniform distribution and the missing ratio varies from 0.1 to 0.9. In Fig. 5, we compare the performance of the algorithms on completing the Peppers image. We can see that, when the image is represented by a third-order tensor, the performance of the algorithms is comparable (TMac-TT is actually slightly better than the others for most missing ratios). However, in the case of the ninth-order tensors, the performance of the algorithms is sharply distinguished. Specifically, our proposed algorithms (especially TMac-TT) prevail against the others. We also illustrate the recovered images for mr = 0.7 in Fig. 6. This shows that our proposed algorithms give very good results in the case of augmented tensors, whereas the compared algorithms are inefficient. Furthermore, by using the KA scheme to increase the tensor order, SiLRTC-TT and TMac-TT significantly improve the accuracy when compared to the cases without augmentation. More precisely, TMac-TT gives the best RSE when the KA scheme is used. The same experiment is performed on the Lenna image, and the recovery results are shown in Fig. 7 and Fig. 8. The results again show that TMac-TT gives the best results for the augmented tensor using the KA scheme.

Figure 6: Recovery of the Peppers image with 70% of missing entries using the different algorithms. Top row, from left to right: the original image and its copy with 70% of missing entries. The second and third rows show the recovery results for the third-order (no order augmentation) and ninth-order (KA augmentation) tensors, using SiLRTC, SiLRTC-Square, SiLRTC-TT, TMac, TMac-Square and TMac-TT, from left to right.

Figure 7: Performance comparison of the different tensor completion algorithms based on RSE versus the missing ratio when applied to the Lenna image. (a) Original tensor (no order augmentation). (b) Augmented tensor using the KA scheme.

Figure 8: Recovery of the Lenna image with 90% of missing entries using the different algorithms. Top row, from left to right: the original image and its copy with 90% of missing entries. The second and third rows show the recovery results for the third-order (no order augmentation) and ninth-order (KA augmentation) tensors, using SiLRTC, SiLRTC-Square, SiLRTC-TT, TMac, TMac-Square and TMac-TT, from left to right.

We perform the same experiment on the House image; however, the missing entries are now chosen as the white text overlaid on the image, and hence the missing ratio is fixed. The result is shown in Fig. 9.
In the case of tensor augmentation, the conventional algorithms SiLRTC and TMac do not perform well, whereas our proposed algorithms do. Using the KA scheme, better results are achieved by our algorithms than in the case without the augmentation scheme.

To sum up, the TT-based algorithms outperform the Tucker-based ones when applied to images represented by ninth-order tensors. This is because the components of the TT rank can vary over a broad range of values (the maximum value is 256), which allows them to capture the global information of the images. On the contrary, the components of the Tucker rank can only reach values up to 4, due to the single-mode matricization of the augmented tensor. Consequently, the Tucker-based algorithms are not reliable because of the naturally small Tucker rank.

Figure 9: Recovery of the House image with missing entries described by the white letters, using the different algorithms. Top row, from left to right: the original image and its copy with the white letters. The second and third rows show the recovery results for the third-order (no order augmentation) and ninth-order (KA augmentation) tensors, using SiLRTC, SiLRTC-Square, SiLRTC-TT, TMac, TMac-Square and TMac-TT, from left to right.

VI. CONCLUSION

We have proposed efficient LRTC algorithms based on the concept of the TT decomposition. The SiLRTC-TT algorithm minimizes the TT rank of the tensor by solving a TT nuclear norm optimization problem. Meanwhile, TMac-TT is based on the multilinear matrix factorization model to approximately minimize the TT rank. The latter is more computationally efficient because, unlike the former, it does not need the SVD. The proposed algorithms were applied to both synthetic and real-world data represented by higher-order tensors, and their performance was compared with that of their counterparts formulated in terms of the Tucker rank. For synthetic data, on the one hand our algorithms prevail over the others when the tensors have low TT rank; on the other hand, their performance is comparable in the case of low Tucker rank tensors. Therefore, the TT-based algorithms are quite promising and reliable when applied to real-world data. To validate this, we applied the algorithms to the image completion problem. Benchmark results show that, when applied to the original tensors without order augmentation, all algorithms are comparable to each other. However, in the case of augmented tensors, our proposed algorithms not only outperform the others but also provide better recovery results than the case without tensor order augmentation.

Although the proposed algorithms can potentially be applied to complete tensors with a wide range of low tensor ranks, i.e. Tucker rank or TT rank, the optimal parameters such as the weights and the TT rank cannot yet be chosen automatically; they are set empirically. We plan to further improve the algorithms by developing a scheme to adaptively choose these parameters. Their applications to data compression, text mining, image classification and video indexing are also of interest.

REFERENCES

[1] T. G. Kolda and B. W. Bader, Tensor decompositions and applications, SIAM Review, vol. 51, no. 3.
[2] M. Vasilescu and D. Terzopoulos, Multilinear subspace analysis of image ensembles, in 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Proceedings. IEEE.
[3] J.-T. Sun, H.-J. Zeng, H. Liu, Y. Lu, and Z. Chen, CubeSVD: A novel approach to personalized web search, in Proceedings of the 14th International Conference on World Wide Web, ser. WWW '05. New York, NY, USA: ACM, 2005.
[4] T. Franz, A. Schultz, S. Sizov, and S. Staab, TripleRank: Ranking semantic web data by tensor decomposition, in The Semantic Web - ISWC 2009, ser. Lecture Notes in Computer Science, A. Bernstein, D. Karger, T. Heath, L. Feigenbaum, D. Maynard, E. Motta, and K. Thirunarayan, Eds. Springer Berlin Heidelberg, 2009, vol. 5823.
[5] J. Carroll and J.-J. Chang, Analysis of individual differences in multidimensional scaling via an n-way generalization of Eckart-Young decomposition, Psychometrika, vol. 35, no. 3.
[6] R. A. Harshman, Foundations of the PARAFAC procedure: Models and conditions for an explanatory multi-modal factor analysis, UCLA Working Papers in Phonetics, vol. 16, no. 1, p. 84.
[7] L. R. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika, vol. 31, no. 3.
[8] I. V. Oseledets, Tensor-train decomposition, SIAM J. Sci. Comput., vol. 33, no. 5.
[9] M. Fannes, B. Nachtergaele, and R. Werner, Finitely correlated states on quantum spin chains, Communications in Mathematical Physics, vol. 144, no. 3.
[10] A. Klümper, A. Schadschneider, and J. Zittartz, Matrix product ground states for one-dimensional spin-1 quantum antiferromagnets, EPL (Europhysics Letters), vol. 24, no. 4, p. 293.
[11] D. Perez-Garcia, F. Verstraete, M. M. Wolf, and J. I. Cirac, Matrix product state representations, Quantum Info. Comput., vol. 7, no. 5.
[12] J.-F. Cai, E. J. Candès, and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. Optim., vol. 20, no. 4.
[13] S. Ma, D. Goldfarb, and L. Chen, Fixed point and Bregman iterative methods for matrix rank minimization, Mathematical Programming, vol. 128, no. 1-2.
[14] B. Recht, M. Fazel, and P. A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev., vol. 52, no. 3.
[15] J. Liu, P. Musialski, P. Wonka, and J. Ye, Tensor completion for estimating missing values in visual data, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1.
[16] M. Signoretto, L. De Lathauwer, and J. A. K. Suykens, Nuclear norms for tensors and their use for convex multilinear estimation, ESAT-SISTA, K.U. Leuven (Leuven, Belgium), Tech. Rep.
[17] M. Signoretto, R. Van de Plas, B. De Moor, and J. Suykens, Tensor versus matrix completion: A comparison with application to spectral data, IEEE Signal Processing Letters, vol. 18, no. 7.
[18] S. Gandy, B. Recht, and I. Yamada, Tensor completion and low-n-rank tensor recovery via convex optimization, Inverse Problems, vol. 27, no. 2.
[19] R. Tomioka, T. Suzuki, K. Hayashi, and H. Kashima, Statistical performance of convex tensor decomposition, in Proceedings of the 25th Annual Conference on Neural Information Processing Systems (NIPS).
[20] H. Tan, B. Cheng, W. Wang, Y.-J. Zhang, and B. Ran, Tensor completion via a multi-linear low-n-rank factorization model, Neurocomputing, vol. 133.
[21] Y. Xu, R. Hao, W. Yin, and Z. Su, Parallel matrix factorization for low-rank tensor completion, IPI, vol. 9, no. 2.
[22] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, Image inpainting, in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH '00. New York, NY, USA: ACM Press/Addison-Wesley Publishing Co., 2000.

[23] N. Komodakis, Image completion using global optimization, in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 1 (CVPR). IEEE.
[24] T. Korah and C. Rasmussen, Spatiotemporal inpainting for recovering texture maps of occluded building facades, IEEE Transactions on Image Processing, vol. 16, no. 9.
[25] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information. Cambridge University Press (CUP).
[26] J. I. Latorre, Image compression and entanglement, arXiv preprint, abs/quant-ph.
[27] M. Kurucz, A. A. Benczur, and K. Csalogany, Methods for large scale SVD with missing values, KDD Cup and Workshop at the 13th ACM SIGKDD Conference.
[28] F. R. Bach, Consistency of trace norm minimization, J. Mach. Learn. Res., vol. 9.
[29] Z. Wen, W. Yin, and Y. Zhang, Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm, Mathematical Programming Computation, vol. 4, no. 4.
[30] C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, Mixed-state entanglement and quantum error correction, Phys. Rev. A, vol. 54, no. 5.
[31] G. Vidal, Efficient simulation of one-dimensional quantum many-body systems, Phys. Rev. Lett., vol. 93, no. 4.
[32] C. Mu, B. Huang, J. Wright, and D. Goldfarb, Square deal: Lower bounds and improved relaxations for tensor recovery, in Proceedings of the 31st International Conference on Machine Learning (ICML 2014), Beijing, China, June 2014, ser. JMLR Proceedings, vol. 32. JMLR.org, 2014.


More information

Uncorrelated Multilinear Principal Component Analysis through Successive Variance Maximization

Uncorrelated Multilinear Principal Component Analysis through Successive Variance Maximization Uncorrelated Multilinear Principal Component Analysis through Successive Variance Maximization Haiping Lu 1 K. N. Plataniotis 1 A. N. Venetsanopoulos 1,2 1 Department of Electrical & Computer Engineering,

More information

Sparseness Constraints on Nonnegative Tensor Decomposition

Sparseness Constraints on Nonnegative Tensor Decomposition Sparseness Constraints on Nonnegative Tensor Decomposition Na Li nali@clarksonedu Carmeliza Navasca cnavasca@clarksonedu Department of Mathematics Clarkson University Potsdam, New York 3699, USA Department

More information

ARestricted Boltzmann machine (RBM) [1] is a probabilistic

ARestricted Boltzmann machine (RBM) [1] is a probabilistic 1 Matrix Product Operator Restricted Boltzmann Machines Cong Chen, Kim Batselier, Ching-Yun Ko, and Ngai Wong chencong@eee.hku.hk, k.batselier@tudelft.nl, cyko@eee.hku.hk, nwong@eee.hku.hk arxiv:1811.04608v1

More information

Efficient Low Rank Tensor Ring Completion

Efficient Low Rank Tensor Ring Completion 1 Efficient Low Rank Tensor Ring Completion Wenqi Wang, Vaneet Aggarwal, and Shuchin Aeron arxiv:1707.08184v1 [cs.lg] 23 Jul 2017 Abstract Using the matrix product state (MPS) representation of the recently

More information

Window-based Tensor Analysis on High-dimensional and Multi-aspect Streams

Window-based Tensor Analysis on High-dimensional and Multi-aspect Streams Window-based Tensor Analysis on High-dimensional and Multi-aspect Streams Jimeng Sun Spiros Papadimitriou Philip S. Yu Carnegie Mellon University Pittsburgh, PA, USA IBM T.J. Watson Research Center Hawthorne,

More information

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional

More information

A Randomized Approach for Crowdsourcing in the Presence of Multiple Views

A Randomized Approach for Crowdsourcing in the Presence of Multiple Views A Randomized Approach for Crowdsourcing in the Presence of Multiple Views Presenter: Yao Zhou joint work with: Jingrui He - 1 - Roadmap Motivation Proposed framework: M2VW Experimental results Conclusion

More information

Statistical Performance of Convex Tensor Decomposition

Statistical Performance of Convex Tensor Decomposition Slides available: h-p://www.ibis.t.u tokyo.ac.jp/ryotat/tensor12kyoto.pdf Statistical Performance of Convex Tensor Decomposition Ryota Tomioka 2012/01/26 @ Kyoto University Perspectives in Informatics

More information

Max-Planck-Institut für Mathematik in den Naturwissenschaften Leipzig

Max-Planck-Institut für Mathematik in den Naturwissenschaften Leipzig Max-Planck-Institut für Mathematik in den Naturwissenschaften Leipzig Coherence of Assistance and Regularized Coherence of Assistance by Ming-Jing Zhao, Teng Ma, and Shao-Ming Fei Preprint no.: 14 2018

More information

SPARSE signal representations have gained popularity in recent

SPARSE signal representations have gained popularity in recent 6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying

More information

Robust Low-Rank Modelling on Matrices and Tensors

Robust Low-Rank Modelling on Matrices and Tensors Imperial College London Department of Computing MSc in Advanced Computing Robust Low-Ran Modelling on Matrices and Tensors by Georgios Papamaarios Submitted in partial fulfilment of the requirements for

More information

An Optimization-based Approach to Decentralized Assignability

An Optimization-based Approach to Decentralized Assignability 2016 American Control Conference (ACC) Boston Marriott Copley Place July 6-8, 2016 Boston, MA, USA An Optimization-based Approach to Decentralized Assignability Alborz Alavian and Michael Rotkowitz Abstract

More information

Multiple Similarities Based Kernel Subspace Learning for Image Classification

Multiple Similarities Based Kernel Subspace Learning for Image Classification Multiple Similarities Based Kernel Subspace Learning for Image Classification Wang Yan, Qingshan Liu, Hanqing Lu, and Songde Ma National Laboratory of Pattern Recognition, Institute of Automation, Chinese

More information

A Local Non-Negative Pursuit Method for Intrinsic Manifold Structure Preservation

A Local Non-Negative Pursuit Method for Intrinsic Manifold Structure Preservation Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence A Local Non-Negative Pursuit Method for Intrinsic Manifold Structure Preservation Dongdong Chen and Jian Cheng Lv and Zhang Yi

More information

Non-Negative Tensor Factorisation for Sound Source Separation

Non-Negative Tensor Factorisation for Sound Source Separation ISSC 2005, Dublin, Sept. -2 Non-Negative Tensor Factorisation for Sound Source Separation Derry FitzGerald, Matt Cranitch φ and Eugene Coyle* φ Dept. of Electronic Engineering, Cor Institute of Technology

More information

From Matrix to Tensor. Charles F. Van Loan

From Matrix to Tensor. Charles F. Van Loan From Matrix to Tensor Charles F. Van Loan Department of Computer Science January 28, 2016 From Matrix to Tensor From Tensor To Matrix 1 / 68 What is a Tensor? Instead of just A(i, j) it s A(i, j, k) or

More information

Iterative Laplacian Score for Feature Selection

Iterative Laplacian Score for Feature Selection Iterative Laplacian Score for Feature Selection Linling Zhu, Linsong Miao, and Daoqiang Zhang College of Computer Science and echnology, Nanjing University of Aeronautics and Astronautics, Nanjing 2006,

More information

Large Scale Data Analysis Using Deep Learning

Large Scale Data Analysis Using Deep Learning Large Scale Data Analysis Using Deep Learning Linear Algebra U Kang Seoul National University U Kang 1 In This Lecture Overview of linear algebra (but, not a comprehensive survey) Focused on the subset

More information

The multiple-vector tensor-vector product

The multiple-vector tensor-vector product I TD MTVP C KU Leuven August 29, 2013 In collaboration with: N Vanbaelen, K Meerbergen, and R Vandebril Overview I TD MTVP C 1 Introduction Inspiring example Notation 2 Tensor decompositions The CP decomposition

More information

Machine Learning with Tensor Networks

Machine Learning with Tensor Networks Machine Learning with Tensor Networks E.M. Stoudenmire and David J. Schwab Advances in Neural Information Processing 29 arxiv:1605.05775 Beijing Jun 2017 Machine learning has physics in its DNA # " # #

More information

Structured matrix factorizations. Example: Eigenfaces

Structured matrix factorizations. Example: Eigenfaces Structured matrix factorizations Example: Eigenfaces An extremely large variety of interesting and important problems in machine learning can be formulated as: Given a matrix, find a matrix and a matrix

More information

Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices

Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices Vahid Dehdari and Clayton V. Deutsch Geostatistical modeling involves many variables and many locations.

More information

Supplemental Figures: Results for Various Color-image Completion

Supplemental Figures: Results for Various Color-image Completion ANONYMOUS AUTHORS: SUPPLEMENTAL MATERIAL (NOVEMBER 7, 2017) 1 Supplemental Figures: Results for Various Color-image Completion Anonymous authors COMPARISON WITH VARIOUS METHODS IN COLOR-IMAGE COMPLETION

More information

Truncation Strategy of Tensor Compressive Sensing for Noisy Video Sequences

Truncation Strategy of Tensor Compressive Sensing for Noisy Video Sequences Journal of Information Hiding and Multimedia Signal Processing c 2016 ISSN 207-4212 Ubiquitous International Volume 7, Number 5, September 2016 Truncation Strategy of Tensor Compressive Sensing for Noisy

More information

An iterative hard thresholding estimator for low rank matrix recovery

An iterative hard thresholding estimator for low rank matrix recovery An iterative hard thresholding estimator for low rank matrix recovery Alexandra Carpentier - based on a joint work with Arlene K.Y. Kim Statistical Laboratory, Department of Pure Mathematics and Mathematical

More information

Robust PCA. CS5240 Theoretical Foundations in Multimedia. Leow Wee Kheng

Robust PCA. CS5240 Theoretical Foundations in Multimedia. Leow Wee Kheng Robust PCA CS5240 Theoretical Foundations in Multimedia Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore Leow Wee Kheng (NUS) Robust PCA 1 / 52 Previously...

More information

Donald Goldfarb IEOR Department Columbia University UCLA Mathematics Department Distinguished Lecture Series May 17 19, 2016

Donald Goldfarb IEOR Department Columbia University UCLA Mathematics Department Distinguished Lecture Series May 17 19, 2016 Optimization for Tensor Models Donald Goldfarb IEOR Department Columbia University UCLA Mathematics Department Distinguished Lecture Series May 17 19, 2016 1 Tensors Matrix Tensor: higher-order matrix

More information

Application of Tensor and Matrix Completion on Environmental Sensing Data

Application of Tensor and Matrix Completion on Environmental Sensing Data Application of Tensor and Matrix Completion on Environmental Sensing Data Michalis Giannopoulos 1,, Sofia Savvaki 1,, Grigorios Tsagkatakis 1, and Panagiotis Tsakalides 1, 1- Institute of Computer Science

More information

SYMMETRIC MATRIX PERTURBATION FOR DIFFERENTIALLY-PRIVATE PRINCIPAL COMPONENT ANALYSIS. Hafiz Imtiaz and Anand D. Sarwate

SYMMETRIC MATRIX PERTURBATION FOR DIFFERENTIALLY-PRIVATE PRINCIPAL COMPONENT ANALYSIS. Hafiz Imtiaz and Anand D. Sarwate SYMMETRIC MATRIX PERTURBATION FOR DIFFERENTIALLY-PRIVATE PRINCIPAL COMPONENT ANALYSIS Hafiz Imtiaz and Anand D. Sarwate Rutgers, The State University of New Jersey ABSTRACT Differential privacy is a strong,

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 6: Some Other Stuff PD Dr.

More information

TENSOR COMPLETION VIA ADAPTIVE SAMPLING OF TENSOR FIBERS: APPLICATION TO EFFICIENT INDOOR RF FINGERPRINTING

TENSOR COMPLETION VIA ADAPTIVE SAMPLING OF TENSOR FIBERS: APPLICATION TO EFFICIENT INDOOR RF FINGERPRINTING TENSOR COMPLETION VIA ADAPTIVE SAMPLING OF TENSOR FIBERS: APPLICATION TO EFFICIENT INDOOR RF FINGERPRINTING Xiao-Yang Liu 1,4, Shuchin Aeron 2, Vaneet Aggarwal 3, Xiaodong Wang 4 and, Min-You Wu 1 1 Shanghai-Jiatong

More information

On the convergence of higher-order orthogonality iteration and its extension

On the convergence of higher-order orthogonality iteration and its extension On the convergence of higher-order orthogonality iteration and its extension Yangyang Xu IMA, University of Minnesota SIAM Conference LA15, Atlanta October 27, 2015 Best low-multilinear-rank approximation

More information

Note on Algorithm Differences Between Nonnegative Matrix Factorization And Probabilistic Latent Semantic Indexing

Note on Algorithm Differences Between Nonnegative Matrix Factorization And Probabilistic Latent Semantic Indexing Note on Algorithm Differences Between Nonnegative Matrix Factorization And Probabilistic Latent Semantic Indexing 1 Zhong-Yuan Zhang, 2 Chris Ding, 3 Jie Tang *1, Corresponding Author School of Statistics,

More information

Analysis of Robust PCA via Local Incoherence

Analysis of Robust PCA via Local Incoherence Analysis of Robust PCA via Local Incoherence Huishuai Zhang Department of EECS Syracuse University Syracuse, NY 3244 hzhan23@syr.edu Yi Zhou Department of EECS Syracuse University Syracuse, NY 3244 yzhou35@syr.edu

More information

Tensor networks, TT (Matrix Product States) and Hierarchical Tucker decomposition

Tensor networks, TT (Matrix Product States) and Hierarchical Tucker decomposition Tensor networks, TT (Matrix Product States) and Hierarchical Tucker decomposition R. Schneider (TUB Matheon) John von Neumann Lecture TU Munich, 2012 Setting - Tensors V ν := R n, H d = H := d ν=1 V ν

More information

arxiv: v2 [cs.na] 24 Mar 2015

arxiv: v2 [cs.na] 24 Mar 2015 Volume X, No. 0X, 200X, X XX doi:1934/xx.xx.xx.xx PARALLEL MATRIX FACTORIZATION FOR LOW-RANK TENSOR COMPLETION arxiv:1312.1254v2 [cs.na] 24 Mar 2015 Yangyang Xu Department of Computational and Applied

More information

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference ECE 18-898G: Special Topics in Signal Processing: Sparsity, Structure, and Inference Low-rank matrix recovery via convex relaxations Yuejie Chi Department of Electrical and Computer Engineering Spring

More information

Deep Learning Book Notes Chapter 2: Linear Algebra

Deep Learning Book Notes Chapter 2: Linear Algebra Deep Learning Book Notes Chapter 2: Linear Algebra Compiled By: Abhinaba Bala, Dakshit Agrawal, Mohit Jain Section 2.1: Scalars, Vectors, Matrices and Tensors Scalar Single Number Lowercase names in italic

More information

Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms

Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms François Caron Department of Statistics, Oxford STATLEARN 2014, Paris April 7, 2014 Joint work with Adrien Todeschini,

More information

Tensor Decomposition with Smoothness (ICML2017)

Tensor Decomposition with Smoothness (ICML2017) Tensor Decomposition with Smoothness (ICML2017) Masaaki Imaizumi 1 Kohei Hayashi 2,3 1 Institute of Statistical Mathematics 2 National Institute of Advanced Industrial Science and Technology 3 RIKEN Center

More information

Tensor-Tensor Product Toolbox

Tensor-Tensor Product Toolbox Tensor-Tensor Product Toolbox 1 version 10 Canyi Lu canyilu@gmailcom Carnegie Mellon University https://githubcom/canyilu/tproduct June, 018 1 INTRODUCTION Tensors are higher-order extensions of matrices

More information

A Customized ADMM for Rank-Constrained Optimization Problems with Approximate Formulations

A Customized ADMM for Rank-Constrained Optimization Problems with Approximate Formulations A Customized ADMM for Rank-Constrained Optimization Problems with Approximate Formulations Chuangchuang Sun and Ran Dai Abstract This paper proposes a customized Alternating Direction Method of Multipliers

More information

A Tensor Approximation Approach to Dimensionality Reduction

A Tensor Approximation Approach to Dimensionality Reduction Int J Comput Vis (2008) 76: 217 229 DOI 10.1007/s11263-007-0053-0 A Tensor Approximation Approach to Dimensionality Reduction Hongcheng Wang Narendra Ahua Received: 6 October 2005 / Accepted: 9 March 2007

More information

Matrix Completion for Structured Observations

Matrix Completion for Structured Observations Matrix Completion for Structured Observations Denali Molitor Department of Mathematics University of California, Los ngeles Los ngeles, C 90095, US Email: dmolitor@math.ucla.edu Deanna Needell Department

More information

Robust Principal Component Analysis Based on Low-Rank and Block-Sparse Matrix Decomposition

Robust Principal Component Analysis Based on Low-Rank and Block-Sparse Matrix Decomposition Robust Principal Component Analysis Based on Low-Rank and Block-Sparse Matrix Decomposition Gongguo Tang and Arye Nehorai Department of Electrical and Systems Engineering Washington University in St Louis

More information

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER 2015 1239 Preconditioning for Underdetermined Linear Systems with Sparse Solutions Evaggelia Tsiligianni, StudentMember,IEEE, Lisimachos P. Kondi,

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

Linear dimensionality reduction for data analysis

Linear dimensionality reduction for data analysis Linear dimensionality reduction for data analysis Nicolas Gillis Joint work with Robert Luce, François Glineur, Stephen Vavasis, Robert Plemmons, Gabriella Casalino The setup Dimensionality reduction for

More information

New Coherence and RIP Analysis for Weak. Orthogonal Matching Pursuit

New Coherence and RIP Analysis for Weak. Orthogonal Matching Pursuit New Coherence and RIP Analysis for Wea 1 Orthogonal Matching Pursuit Mingrui Yang, Member, IEEE, and Fran de Hoog arxiv:1405.3354v1 [cs.it] 14 May 2014 Abstract In this paper we define a new coherence

More information

A new truncation strategy for the higher-order singular value decomposition

A new truncation strategy for the higher-order singular value decomposition A new truncation strategy for the higher-order singular value decomposition Nick Vannieuwenhoven K.U.Leuven, Belgium Workshop on Matrix Equations and Tensor Techniques RWTH Aachen, Germany November 21,

More information

New Ranks for Even-Order Tensors and Their Applications in Low-Rank Tensor Optimization

New Ranks for Even-Order Tensors and Their Applications in Low-Rank Tensor Optimization New Ranks for Even-Order Tensors and Their Applications in Low-Rank Tensor Optimization Bo JIANG Shiqian MA Shuzhong ZHANG January 12, 2015 Abstract In this paper, we propose three new tensor decompositions

More information

A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing

A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 5, SEPTEMBER 2001 1215 A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing Da-Zheng Feng, Zheng Bao, Xian-Da Zhang

More information

sparse and low-rank tensor recovery Cubic-Sketching

sparse and low-rank tensor recovery Cubic-Sketching Sparse and Low-Ran Tensor Recovery via Cubic-Setching Guang Cheng Department of Statistics Purdue University www.science.purdue.edu/bigdata CCAM@Purdue Math Oct. 27, 2017 Joint wor with Botao Hao and Anru

More information

Math 671: Tensor Train decomposition methods

Math 671: Tensor Train decomposition methods Math 671: Eduardo Corona 1 1 University of Michigan at Ann Arbor December 8, 2016 Table of Contents 1 Preliminaries and goal 2 Unfolding matrices for tensorized arrays The Tensor Train decomposition 3

More information

arxiv: v1 [stat.ml] 1 Mar 2015

arxiv: v1 [stat.ml] 1 Mar 2015 Matrix Completion with Noisy Entries and Outliers Raymond K. W. Wong 1 and Thomas C. M. Lee 2 arxiv:1503.00214v1 [stat.ml] 1 Mar 2015 1 Department of Statistics, Iowa State University 2 Department of Statistics,

More information

Non-negative matrix factorization with fixed row and column sums

Non-negative matrix factorization with fixed row and column sums Available online at www.sciencedirect.com Linear Algebra and its Applications 9 (8) 5 www.elsevier.com/locate/laa Non-negative matrix factorization with fixed row and column sums Ngoc-Diep Ho, Paul Van

More information

Sparse Solutions of an Undetermined Linear System

Sparse Solutions of an Undetermined Linear System 1 Sparse Solutions of an Undetermined Linear System Maddullah Almerdasy New York University Tandon School of Engineering arxiv:1702.07096v1 [math.oc] 23 Feb 2017 Abstract This work proposes a research

More information

ENGG5781 Matrix Analysis and Computations Lecture 10: Non-Negative Matrix Factorization and Tensor Decomposition

ENGG5781 Matrix Analysis and Computations Lecture 10: Non-Negative Matrix Factorization and Tensor Decomposition ENGG5781 Matrix Analysis and Computations Lecture 10: Non-Negative Matrix Factorization and Tensor Decomposition Wing-Kin (Ken) Ma 2017 2018 Term 2 Department of Electronic Engineering The Chinese University

More information

Statistical Machine Learning for Structured and High Dimensional Data

Statistical Machine Learning for Structured and High Dimensional Data Statistical Machine Learning for Structured and High Dimensional Data (FA9550-09- 1-0373) PI: Larry Wasserman (CMU) Co- PI: John Lafferty (UChicago and CMU) AFOSR Program Review (Jan 28-31, 2013, Washington,

More information

Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization

Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization Canyi Lu 1, Jiashi Feng 1, Yudong Chen, Wei Liu 3, Zhouchen Lin 4,5,, Shuicheng Yan 6,1

More information

Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms

Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms Adrien Todeschini Inria Bordeaux JdS 2014, Rennes Aug. 2014 Joint work with François Caron (Univ. Oxford), Marie

More information

N-mode Analysis (Tensor Framework) Behrouz Saghafi

N-mode Analysis (Tensor Framework) Behrouz Saghafi N-mode Analysis (Tensor Framework) Behrouz Saghafi N-mode Analysis (Tensor Framework) Drawback of 1-mode analysis (e.g. PCA): Captures the variance among just a single factor Our training set contains

More information

Tensor Completion by Alternating Minimization under the Tensor Train (TT) Model

Tensor Completion by Alternating Minimization under the Tensor Train (TT) Model Tensor Completion by Alternating Minimization under the Tensor Train (TT) Model Wenqi Wang, Vaneet Aggarwal, and Shuchin Aeron arxiv:1609.05587v1 [cs.na] 19 Sep 2016 Abstract Using the matrix product state

More information