ROBUST LOW-RANK TENSOR MODELLING USING TUCKER AND CP DECOMPOSITION

Niannan Xue, George Papamakarios, Mehdi Bahri, Yannis Panagakis, Stefanos Zafeiriou
Imperial College London, UK; University of Edinburgh, UK; Middlesex University, UK

Corresponding author: Stefanos Zafeiriou (s.zafeiriou@imperial.ac.uk). The work of Stefanos Zafeiriou was partially funded by EPSRC project EP/N007743/ (FACER2VM).

ABSTRACT

A framework for reliable separation of a low-rank subspace from grossly corrupted multi-dimensional signals is pivotal in modern signal processing applications. Current methods fall short of this separation either due to the radical simplification or the drastic transformation of the data. This has motivated us to propose two new robust low-rank tensor models: Tensor Orthonormal Robust PCA (TORPCA) and Tensor Robust CP Decomposition (TRCPD). They seek the Tucker and CP decomposition of a tensor, respectively, with $\ell_p$ norm regularisation. We compare our methods with state-of-the-art low-rank models on both synthetic and real-world data. Experimental results indicate that the proposed methods are faster and more accurate than the methods they are compared with.

Index Terms— Tensor decomposition, robust principal component analysis, Tucker, CANDECOMP/PARAFAC

1. INTRODUCTION

In many real-world applications, input data are naturally represented by tensors (i.e., multi-dimensional arrays). Traditionally, such data would require vectorising before processing, which destroys the inherent higher-order interactions. As a result, novel models must be developed to preserve the multilinear structure when extracting the hidden and evolving trends in such data. Typical tensor data are video clips, colour images, multi-channel EEG recordings, etc. In practice, the important information usually lies in a (multilinear) low-dimensional space whose dimensionality is much lower than that of the observations. This is the essence of low-rank modelling.

In this paper, we focus on the problem of recovering a low-dimensional multilinear structure from tensor data corrupted by gross corruptions. Given a tensor $\mathcal{L} \in \mathbb{R}^{d_1 \times \cdots \times d_N}$, its tensor rank [1] is defined as the smallest $r$ such that $\mathcal{L} = \sum_{i=1}^{r} \mathbf{a}^{(1)}_i \circ \cdots \circ \mathbf{a}^{(N)}_i$, where $\circ$ denotes the outer product among the $Nr$ vectors $\mathbf{a}^{(1)}_i, \ldots, \mathbf{a}^{(N)}_i$, $1 \le i \le r$. As such, robust low-rank tensor modelling seeks a decomposition $\mathcal{X} = \mathcal{L} + \mathcal{S}$ for an $N$th-order tensor $\mathcal{X} \in \mathbb{R}^{d_1 \times \cdots \times d_N}$, where $\mathcal{L}$ has a low tensor rank and $\mathcal{S}$ is sparse. However, the tensor rank is usually intractable [2]. A common adjustment [3-5] is to use a convex combination of the n-ranks of $\mathcal{L}$, that is, $\gamma = \sum_{i=1}^{N} \alpha_i \, \mathrm{rank}_i(\mathcal{L})$, where $\alpha_i \ge 0$, $\sum_{i=1}^{N} \alpha_i = 1$, and $\mathrm{rank}_i(\mathcal{L})$ is the column rank of the mode-$i$ matricisation [6] of $\mathcal{L}$. It is, therefore, natural to obtain the decomposition by solving the optimisation problem

$$\min_{\mathcal{L},\mathcal{S}} \; \gamma + \lambda \|\mathcal{S}\|_0 \quad \text{s.t.} \quad \gamma = \sum_{i=1}^{N} \alpha_i \, \mathrm{rank}_i(\mathcal{L}), \;\; \mathcal{X} = \mathcal{L} + \mathcal{S}, \qquad (1)$$

where $\|\mathcal{S}\|_0$ is the $\ell_0$ norm of the vectorisation of $\mathcal{S}$ and $\lambda$ is a weighting parameter.

Here we present two novel robust tensor methods based on the Tucker and CP decompositions that recover the latent low-rank component from noisy observations by relaxing (1), which is NP-hard. In Section 2, we review relevant literature on matrix and tensor algorithms. In Section 3, we explain our proposed tensor methods in detail. In Section 4, we demonstrate the advantages of our models on both synthetic data and a real-world dataset. Finally, in Section 5, we summarise our new models and point out possible future improvements.
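To make the combination $\gamma$ in (1) concrete, here is a minimal NumPy sketch (ours, not from the paper) that computes the n-ranks via mode-$i$ matricisation and their convex combination, assuming the equal weights $\alpha_i = 1/N$ used later in the experiments; the function names are illustrative.

```python
import numpy as np

def mode_rank(L, i):
    """rank_i(L): column rank of the mode-i matricisation of tensor L."""
    L_i = np.moveaxis(L, i, 0).reshape(L.shape[i], -1)
    return np.linalg.matrix_rank(L_i)

def n_rank_combination(L, alphas=None):
    """gamma = sum_i alpha_i * rank_i(L), defaulting to alpha_i = 1/N."""
    N = L.ndim
    alphas = alphas if alphas is not None else [1.0 / N] * N
    return sum(a * mode_rank(L, i) for i, a in enumerate(alphas))
```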
2. RELATED WORK

RSTD [7] is a direct multi-linear extension of matrix principal component pursuit (PCP) [8]. It approximates (1) by replacing $\mathrm{rank}_i(\mathcal{L})$ and $\|\mathcal{S}\|_0$ with the convex surrogates $\|\mathbf{L}_{(i)}\|_*$ and $\|\mathcal{S}\|_1$ respectively, where $\|\mathbf{L}_{(i)}\|_*$ is the nuclear norm of the mode-$i$ matricisation of $\mathcal{L}$ and $\|\mathcal{S}\|_1$ is the $\ell_1$ norm of the vectorisation of $\mathcal{S}$. As a result, it solves the following alternative objective:

$$\min_{\mathcal{L},\mathcal{S}} \; \sum_{i=1}^{N} \alpha_i \|\mathbf{L}_{(i)}\|_* + \lambda \|\mathcal{S}\|_1 \quad \text{s.t.} \quad \mathcal{X} = \mathcal{L} + \mathcal{S}. \qquad (2)$$

An ALM solver can be found in [9]. It is also worth noting that, under certain conditions, RSTD is guaranteed to exactly recover the low-rank component [10].

Much recent research on subspace analysis for the matrix case has direct applicability to tensor data. The costly singular value decomposition step in classical PCP prohibits large-scale analysis. A general approach to mitigate this issue is to look for a factorisation of the low-rank component $\mathbf{A}$. ORPCA [11] uses a linear combination of the active subspace, $\mathbf{A} = \mathbf{U}\mathbf{V}$, $\mathbf{U}^T\mathbf{U} = \mathbf{I}$, where the bilinear factors $\mathbf{U} \in \mathbb{R}^{m \times k}$ and $\mathbf{V} \in \mathbb{R}^{k \times n}$ are the principal components and the combination coefficients respectively, and $k$ is an upper bound of $\mathrm{rank}(\mathbf{A})$.

3. ROBUST LOW-RANK TENSOR MODELLING

3.1. Notation

We first introduce some notation used throughout the paper. Lowercase Latin and Greek letters denote scalars, e.g. $r, \gamma$. Bold lowercase Latin letters denote vectors, e.g. $\mathbf{a}$. Bold uppercase Latin letters denote matrices, e.g. $\mathbf{A}$. Bold uppercase calligraphic Latin letters denote tensors, e.g. $\mathcal{L}$. Bold uppercase Greek letters denote operators on tensors and matrices, e.g. $\Theta(\mathcal{S})$, $\Phi(\mathbf{U})$. $\langle \mathbf{A}, \mathbf{B} \rangle$ represents $\mathrm{tr}(\mathbf{A}^T\mathbf{B})$. $\|\mathbf{A}\|_F$ is the Frobenius norm. $\mathbf{A} \odot \mathbf{B}$ denotes the Khatri-Rao product between matrices $\mathbf{A}$ and $\mathbf{B}$, and $\mathcal{X} \times_i \mathbf{U}$ is the $i$-mode product [6].

3.2. Soft and hard thresholding operators

For fixed $\mathcal{X} \in \mathbb{R}^{d_1 \times \cdots \times d_N}$, the optimal analytical solution of $\min_{\mathcal{Y}} \kappa \|\mathcal{Y}\|_1 + \frac{1}{2}\|\mathcal{X} - \mathcal{Y}\|_F^2$ is given by the soft thresholding operation $\Theta_\kappa(\mathcal{X})$, where

$$\Theta_\kappa(\mathcal{X})_{\iota_1 \cdots \iota_N} = (\mathcal{X}_{\iota_1 \cdots \iota_N} - \kappa)_+ - (-\mathcal{X}_{\iota_1 \cdots \iota_N} - \kappa)_+. \qquad (3)$$

And for fixed $\mathbf{X} \in \mathbb{R}^{m \times n}$, the optimal analytical solution of $\min_{\mathbf{Y}} \kappa \|\mathbf{Y}\|_* + \frac{1}{2}\|\mathbf{X} - \mathbf{Y}\|_F^2$ is given by the hard thresholding operation $\Phi_\kappa(\mathbf{X})$, where

$$\Phi_\kappa(\mathbf{X}) = \mathbf{U}\,\Theta_\kappa(\mathbf{S})\,\mathbf{V}^T, \qquad (4)$$

for the singular value decomposition $\mathbf{X} = \mathbf{U}\mathbf{S}\mathbf{V}^T$.

3.3. Tensor Orthonormal Robust PCA

Generalisation of ORPCA to tensors corresponds to the following factorisation of the low-rank component $\mathcal{L}$:

$$\mathcal{L} = \mathcal{V} \times_1 \mathbf{U}_1 \times_2 \cdots \times_N \mathbf{U}_N, \quad \mathbf{U}_i^T \mathbf{U}_i = \mathbf{I}, \qquad (5)$$

which is exactly the HOSVD [12] of $\mathcal{L}$, and the following relationship holds:

$$\|\mathbf{L}_{(i)}\|_* = \|\mathbf{V}_{(i)}\|_*. \qquad (6)$$

Based on the above, (2) can be re-written as

$$\min_{\mathcal{V},\mathcal{S}} \; \sum_{i=1}^{N} \alpha_i \|\mathbf{V}_{(i)}\|_* + \lambda \|\mathcal{S}\|_1 \quad \text{s.t.} \quad \mathcal{X} = \mathcal{V} \times_1 \mathbf{U}_1 \cdots \times_N \mathbf{U}_N + \mathcal{S}, \;\; \mathbf{U}_i^T\mathbf{U}_i = \mathbf{I}, \; 1 \le i \le N. \qquad (7)$$

To separate variables, we make the substitution $\mathbf{V}_{(i)} = \mathbf{J}_i$ to arrive at an equivalent problem:

$$\min_{\mathbf{J}_i,\mathcal{S}} \; \sum_{i=1}^{N} \alpha_i \|\mathbf{J}_i\|_* + \lambda \|\mathcal{S}\|_1 \quad \text{s.t.} \quad \mathcal{X} = \mathcal{V} \times_1 \mathbf{U}_1 \cdots \times_N \mathbf{U}_N + \mathcal{S}, \;\; \mathbf{U}_i^T\mathbf{U}_i = \mathbf{I}, \;\; \mathbf{V}_{(i)} = \mathbf{J}_i, \; 1 \le i \le N. \qquad (8)$$

To apply ADMM, the augmented Lagrangian of (8) is constructed first:

$$l(\mathbf{J}_i, \mathcal{V}, \mathcal{S}, \mathbf{U}_i, \mathcal{Y}, \mathbf{Z}_i) = \sum_{i=1}^{N} \alpha_i \|\mathbf{J}_i\|_* + \lambda \|\mathcal{S}\|_1 + \langle \mathcal{X} - \mathcal{V} \times_1 \mathbf{U}_1 \cdots \times_N \mathbf{U}_N - \mathcal{S}, \mathcal{Y} \rangle + \frac{\mu}{2} \|\mathcal{X} - \mathcal{V} \times_1 \mathbf{U}_1 \cdots \times_N \mathbf{U}_N - \mathcal{S}\|_F^2 + \sum_{i=1}^{N} \left( \langle \mathbf{V}_{(i)} - \mathbf{J}_i, \mathbf{Z}_i \rangle + \frac{\mu}{2} \|\mathbf{V}_{(i)} - \mathbf{J}_i\|_F^2 \right), \qquad (9)$$

where $\mathbf{U}_i^T\mathbf{U}_i = \mathbf{I}$ has not been incorporated. $\mathbf{J}_i$ is updated as the minimiser of $l$ with respect to $\mathbf{J}_i$:

$$\mathbf{J}_i = \arg\min_{\mathbf{J}_i} \; \frac{\alpha_i}{\mu} \|\mathbf{J}_i\|_* + \frac{1}{2} \|\mathbf{J}_i - (\mathbf{V}_{(i)} + \mu^{-1}\mathbf{Z}_i)\|_F^2 = \Phi_{\alpha_i/\mu}(\mathbf{V}_{(i)} + \mu^{-1}\mathbf{Z}_i). \qquad (10)$$

$\mathcal{V}$ is updated as the minimiser of $l$ with respect to $\mathcal{V}$:

$$\mathcal{V} = \arg\min_{\mathcal{V}} \; -\langle (\mu(\mathcal{X} - \mathcal{S}) + \mathcal{Y}) \times_1 \mathbf{U}_1^T \cdots \times_N \mathbf{U}_N^T, \mathcal{V} \rangle + \sum_{i=1}^{N} \langle \mathcal{V} - \mathcal{J}_i, \mathcal{Z}_i \rangle + \frac{\mu}{2} \|\mathcal{V}\|_F^2 + \frac{\mu}{2} \sum_{i=1}^{N} \|\mathcal{V} - \mathcal{J}_i\|_F^2, \qquad (11)$$

where we have used the fact that $\mathbf{U}_i^T\mathbf{U}_i = \mathbf{I}$ and that the Frobenius norm is invariant under rotations, and $\mathcal{J}_i, \mathcal{Z}_i$ are the inverse mode-$i$ matricisations of $\mathbf{J}_i, \mathbf{Z}_i$ respectively. Setting the gradient of (11) to zero gives:

$$\mathcal{V} = \frac{1}{N+1} \left( (\mathcal{X} - \mathcal{S} + \mu^{-1}\mathcal{Y}) \times_1 \mathbf{U}_1^T \cdots \times_N \mathbf{U}_N^T + \sum_{i=1}^{N} (\mathcal{J}_i - \mu^{-1}\mathcal{Z}_i) \right). \qquad (12)$$

$\mathcal{S}$ is updated as the minimiser of $l$ with respect to $\mathcal{S}$:

$$\mathcal{S} = \arg\min_{\mathcal{S}} \; \frac{\lambda}{\mu} \|\mathcal{S}\|_1 + \frac{1}{2} \|\mathcal{S} - (\mathcal{X} - \mathcal{V} \times_1 \mathbf{U}_1 \cdots \times_N \mathbf{U}_N + \mu^{-1}\mathcal{Y})\|_F^2 = \Theta_{\lambda/\mu}(\mathcal{X} - \mathcal{V} \times_1 \mathbf{U}_1 \cdots \times_N \mathbf{U}_N + \mu^{-1}\mathcal{Y}). \qquad (13)$$

$\mathbf{U}_i$ is updated as the minimiser of $l$ with respect to $\mathbf{U}_i$, subject to $\mathbf{U}_i^T\mathbf{U}_i = \mathbf{I}$:

$$\mathbf{U}_i = \arg\min_{\mathbf{U}_i} \; \frac{1}{2} \|\mathbf{X}_{(i)} - \mathbf{S}_{(i)} + \mu^{-1}\mathbf{Y}_{(i)} - \mathbf{U}_i \mathbf{B}_i\|_F^2, \qquad (14)$$

where $\mathbf{B}_i = (\mathcal{V} \times_1 \mathbf{U}_1 \cdots \times_{i-1} \mathbf{U}_{i-1} \times_{i+1} \mathbf{U}_{i+1} \cdots \times_N \mathbf{U}_N)_{(i)}$. If we have the singular value decomposition

$$(\mathbf{X}_{(i)} - \mathbf{S}_{(i)} + \mu^{-1}\mathbf{Y}_{(i)}) \mathbf{B}_i^T = \mathbf{C}_i \mathbf{D}_i \mathbf{V}_i^T, \qquad (15)$$

then, according to the Reduced Rank Procrustes Theorem [13], the solution is given by

$$\mathbf{U}_i = \mathbf{C}_i \mathbf{V}_i^T. \qquad (16)$$

The complete algorithm is presented in Algorithm 1.

Algorithm 1 — ADMM solver for TORPCA
Input: observation $\mathcal{X}$, parameter $\lambda > 0$, scaling $\kappa > 1$, weights $\alpha_i$, ranks $k_i$
1: Initialise: $\mathbf{J}_i = \mathbf{Z}_i = 0$, $\mathcal{S} = \mathcal{Y} = 0$, $\mathcal{V} = 0$, $\mathbf{U}_i$ = first $k_i$ left singular vectors of $\mathbf{X}_{(i)}$, $\mu > 0$
2: while not converged do
3:   for $i \in \{1, 2, \ldots, N\}$ do
4:     $\mathbf{J}_i = \Phi_{\alpha_i/\mu}(\mathbf{V}_{(i)} + \mu^{-1}\mathbf{Z}_i)$
5:   end for
6:   $\mathcal{V} = \frac{1}{N+1}\left((\mathcal{X} - \mathcal{S} + \mu^{-1}\mathcal{Y}) \times_1 \mathbf{U}_1^T \cdots \times_N \mathbf{U}_N^T + \sum_{i=1}^{N} (\mathcal{J}_i - \mu^{-1}\mathcal{Z}_i)\right)$
7:   $\mathcal{S} = \Theta_{\lambda/\mu}(\mathcal{X} - \mathcal{V} \times_1 \mathbf{U}_1 \cdots \times_N \mathbf{U}_N + \mu^{-1}\mathcal{Y})$
8:   for $i \in \{1, 2, \ldots, N\}$ do
9:     $\mathbf{B}_i = (\mathcal{V} \times_1 \mathbf{U}_1 \cdots \times_{i-1} \mathbf{U}_{i-1} \times_{i+1} \mathbf{U}_{i+1} \cdots \times_N \mathbf{U}_N)_{(i)}$
10:    $\mathbf{C}_i \mathbf{D}_i \mathbf{V}_i^T = (\mathbf{X}_{(i)} - \mathbf{S}_{(i)} + \mu^{-1}\mathbf{Y}_{(i)})\mathbf{B}_i^T$
11:    $\mathbf{U}_i = \mathbf{C}_i \mathbf{V}_i^T$
12:  end for
13:  $\mathcal{Y} = \mathcal{Y} + \mu(\mathcal{X} - \mathcal{V} \times_1 \mathbf{U}_1 \cdots \times_N \mathbf{U}_N - \mathcal{S})$
14:  for $i \in \{1, 2, \ldots, N\}$ do
15:    $\mathbf{Z}_i = \mathbf{Z}_i + \mu(\mathbf{V}_{(i)} - \mathbf{J}_i)$
16:  end for
17:  $\mu = \mu\kappa$
18: end while
Return: $\mathcal{V}$, $\mathcal{S}$, $\mathbf{U}_i$
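Lines 4 and 7 of Algorithm 1 apply the two operators of Section 3.2. The following is a minimal NumPy sketch of both (our illustration, not the authors' code); `soft_threshold` and `hard_threshold` are names we introduce.

```python
import numpy as np

def soft_threshold(X, kappa):
    """Theta_kappa, eq. (3): elementwise soft thresholding, the proximal
    operator of kappa * l1 norm; works for arrays of any order."""
    return np.sign(X) * np.maximum(np.abs(X) - kappa, 0.0)

def hard_threshold(X, kappa):
    """Phi_kappa, eq. (4): soft-threshold the singular values of a matrix,
    i.e. the proximal operator of kappa * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - kappa, 0.0)) @ Vt  # U diag(s_thr) V^T
```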

3.4. Tensor robust CP decomposition

Let $\mathbf{U}_i = [\mathbf{a}^{(i)}_1, \mathbf{a}^{(i)}_2, \ldots, \mathbf{a}^{(i)}_r]$; then we can express $\mathcal{L}$ compactly as $\mathcal{L} = [\![\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_N]\!] = \sum_{j=1}^{r} \mathbf{a}^{(1)}_j \circ \cdots \circ \mathbf{a}^{(N)}_j$. In particular, it can be shown that $\mathrm{rank}_i(\mathcal{L}) \le \mathrm{rank}(\mathbf{U}_i)$ for $1 \le i \le N$. So, it is beneficial to solve the following objective:

$$\min_{\mathbf{U}_i,\mathcal{S}} \; \sum_{i=1}^{N} \alpha_i \|\mathbf{U}_i\|_* + \lambda \|\mathcal{S}\|_1 \quad \text{s.t.} \quad \mathcal{X} = [\![\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_N]\!] + \mathcal{S}. \qquad (17)$$

Again, we make the substitution $\mathbf{U}_i = \mathbf{J}_i$ before performing ADMM, which leads to the following problem:

$$\min_{\mathbf{J}_i,\mathcal{S}} \; \sum_{i=1}^{N} \alpha_i \|\mathbf{J}_i\|_* + \lambda \|\mathcal{S}\|_1 \quad \text{s.t.} \quad \mathcal{X} = [\![\mathbf{U}_1, \ldots, \mathbf{U}_N]\!] + \mathcal{S}, \;\; \mathbf{U}_i = \mathbf{J}_i, \; 1 \le i \le N. \qquad (18)$$

The corresponding augmented Lagrangian is

$$l(\mathbf{J}_i, \mathbf{U}_i, \mathcal{S}, \mathcal{Y}, \mathbf{Z}_i) = \sum_{i=1}^{N} \alpha_i \|\mathbf{J}_i\|_* + \lambda \|\mathcal{S}\|_1 + \langle \mathcal{X} - [\![\mathbf{U}_1, \ldots, \mathbf{U}_N]\!] - \mathcal{S}, \mathcal{Y} \rangle + \frac{\mu}{2} \|\mathcal{X} - [\![\mathbf{U}_1, \ldots, \mathbf{U}_N]\!] - \mathcal{S}\|_F^2 + \sum_{i=1}^{N} \left( \langle \mathbf{U}_i - \mathbf{J}_i, \mathbf{Z}_i \rangle + \frac{\mu}{2} \|\mathbf{U}_i - \mathbf{J}_i\|_F^2 \right). \qquad (19)$$

$\mathbf{J}_i$ is updated as the minimiser of $l$ with respect to $\mathbf{J}_i$:

$$\mathbf{J}_i = \arg\min_{\mathbf{J}_i} \; \frac{\alpha_i}{\mu} \|\mathbf{J}_i\|_* + \frac{1}{2} \|\mathbf{J}_i - (\mathbf{U}_i + \mu^{-1}\mathbf{Z}_i)\|_F^2 = \Phi_{\alpha_i/\mu}(\mathbf{U}_i + \mu^{-1}\mathbf{Z}_i). \qquad (20)$$

$\mathbf{U}_i$ is updated as the minimiser of $l$ with respect to $\mathbf{U}_i$:

$$\mathbf{U}_i = \arg\min_{\mathbf{U}_i} \; \langle \mathbf{X}_{(i)} - \mathbf{U}_i \tilde{\mathbf{U}}_i - \mathbf{S}_{(i)}, \mathbf{Y}_{(i)} \rangle + \langle \mathbf{U}_i - \mathbf{J}_i, \mathbf{Z}_i \rangle + \frac{\mu}{2} \|\mathbf{X}_{(i)} - \mathbf{U}_i \tilde{\mathbf{U}}_i - \mathbf{S}_{(i)}\|_F^2 + \frac{\mu}{2} \|\mathbf{U}_i - \mathbf{J}_i\|_F^2, \qquad (21)$$

where $\tilde{\mathbf{U}}_i = (\mathbf{U}_N \odot \cdots \odot \mathbf{U}_{i+1} \odot \mathbf{U}_{i-1} \odot \cdots \odot \mathbf{U}_1)^T$. Setting the derivative of (21) to zero gives:

$$\mathbf{U}_i = \left((\mathbf{X}_{(i)} - \mathbf{S}_{(i)} + \mu^{-1}\mathbf{Y}_{(i)})\tilde{\mathbf{U}}_i^T + \mathbf{J}_i - \mu^{-1}\mathbf{Z}_i\right)(\tilde{\mathbf{U}}_i \tilde{\mathbf{U}}_i^T + \mathbf{I})^{-1}. \qquad (22)$$

$\mathcal{S}$ is updated as the minimiser of $l$ with respect to $\mathcal{S}$:

$$\mathcal{S} = \arg\min_{\mathcal{S}} \; \frac{\lambda}{\mu} \|\mathcal{S}\|_1 + \frac{1}{2} \|\mathcal{S} - (\mathcal{X} - [\![\mathbf{U}_1, \ldots, \mathbf{U}_N]\!] + \mu^{-1}\mathcal{Y})\|_F^2 = \Theta_{\lambda/\mu}(\mathcal{X} - [\![\mathbf{U}_1, \ldots, \mathbf{U}_N]\!] + \mu^{-1}\mathcal{Y}). \qquad (23)$$

The complete algorithm is described in Algorithm 2.

Algorithm 2 — ADMM solver for TRCPD
Input: observation $\mathcal{X}$, parameter $\lambda > 0$, scaling $\kappa > 1$, weights $\alpha_i$, rank $k$
1: Initialise: $\mathbf{J}_i = \mathbf{U}_i = \mathrm{rand}$, $\mathcal{S} = \mathcal{Y} = 0$, $\mathbf{Z}_i = 0$, $\mu > 0$
2: while not converged do
3:   for $i \in \{1, 2, \ldots, N\}$ do
4:     $\mathbf{J}_i = \Phi_{\alpha_i/\mu}(\mathbf{U}_i + \mu^{-1}\mathbf{Z}_i)$
5:     $\tilde{\mathbf{U}}_i = (\mathbf{U}_N \odot \cdots \odot \mathbf{U}_{i+1} \odot \mathbf{U}_{i-1} \odot \cdots \odot \mathbf{U}_1)^T$
6:     $\mathbf{U}_i = ((\mathbf{X}_{(i)} - \mathbf{S}_{(i)} + \mu^{-1}\mathbf{Y}_{(i)})\tilde{\mathbf{U}}_i^T + \mathbf{J}_i - \mu^{-1}\mathbf{Z}_i)(\tilde{\mathbf{U}}_i \tilde{\mathbf{U}}_i^T + \mathbf{I})^{-1}$
7:   end for
8:   $\mathcal{S} = \Theta_{\lambda/\mu}(\mathcal{X} - [\![\mathbf{U}_1, \ldots, \mathbf{U}_N]\!] + \mu^{-1}\mathcal{Y})$
9:   $\mathcal{Y} = \mathcal{Y} + \mu(\mathcal{X} - [\![\mathbf{U}_1, \ldots, \mathbf{U}_N]\!] - \mathcal{S})$
10:  for $i \in \{1, 2, \ldots, N\}$ do
11:    $\mathbf{Z}_i = \mathbf{Z}_i + \mu(\mathbf{U}_i - \mathbf{J}_i)$
12:  end for
13:  $\mu = \mu\kappa$
14: end while
Return: $\mathbf{U}_i$, $\mathcal{S}$

3.5. Complexity and convergence

For ease of exposition, we assume that $d_1 = \cdots = d_N = \zeta$. For TORPCA, the most expensive calculation in each iteration is the $i$-mode product, which has a time complexity of $O(Nr\zeta^N)$. For TRCPD, the dominant term is the chain of matrix outer products, which costs $O(Nr\zeta^N)$. Note that both methods have lower complexity than RSTD, whose complexity is $O(N\zeta^{N+1})$ due to SVD, provided $r < \zeta$.

Although both of our proposed tensor methods are nonconvex, we have empirically found that the warm initialisation of $\mathbf{U}_i$ using the first $k_i$ left singular vectors of $\mathbf{X}_{(i)}$ works well for TORPCA, and that uniform initialisation of $\mathbf{U}_i$ from $[0, 1]$ suffices for TRCPD (see Section 4).
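To make the dominant TRCPD step just discussed concrete, here is a NumPy sketch of the Khatri-Rao chain and the closed-form factor update (22) (Algorithm 2, lines 5-6). This is our illustration, not the paper's implementation: it uses C-order unfoldings, so the Khatri-Rao chain runs in increasing mode order rather than the paper's $\mathbf{U}_N \odot \cdots \odot \mathbf{U}_1$ convention; the two are equivalent as long as unfolding and chain ordering match.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product: (m x r), (n x r) -> (mn x r)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def unfold(T, i):
    """Mode-i matricisation with C-order flattening of the other modes."""
    return np.moveaxis(T, i, 0).reshape(T.shape[i], -1)

def update_factor(X, S, Y, U, J, Z, i, mu):
    """Eq. (22): closed-form update of U[i] given all other factors."""
    others = [U[j] for j in range(len(U)) if j != i]
    KR = others[0]
    for F in others[1:]:                 # chain of Khatri-Rao products
        KR = khatri_rao(KR, F)
    U_tilde = KR.T                       # r x prod_{j != i} d_j
    G = (unfold(X, i) - unfold(S, i) + unfold(Y, i) / mu) @ U_tilde.T
    r = U[i].shape[1]
    return (G + J[i] - Z[i] / mu) @ np.linalg.inv(U_tilde @ U_tilde.T + np.eye(r))
```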

4. EXPERIMENTAL RESULTS

4.1. Implementation details

For the stopping criterion, we use one of the KKT optimality conditions, $\|\mathcal{X} - \mathcal{L} - \mathcal{S}\|_F / \|\mathcal{X}\|_F < \delta$, and we have set $\delta = 10^{-7}$. The initial value of $\mu$ is set to $10^{-3}$, which is geometrically increased by a factor of $\kappa = 1.2$ up to $10^{9}$. The weights $\alpha_i$ are assumed equal.

4.2. Simulation

We first evaluate the performance of all algorithms on synthetic data. A low-rank tensor $\mathcal{L} \in \mathbb{R}^{100 \times 100 \times 100}$ is generated via $\mathcal{L} = [\![\mathbf{U}_1, \mathbf{U}_2, \mathbf{U}_3]\!]$, where the elements of $\mathbf{U}_1, \mathbf{U}_2, \mathbf{U}_3 \in \mathbb{R}^{100 \times 8}$ are independently sampled from the standard Gaussian distribution. The variance of $\mathcal{L}$ is normalised to 1 afterwards. A sparse tensor $\mathcal{S} \in \mathbb{R}^{100 \times 100 \times 100}$ is constructed by uniform sampling from $[-10, 10]$; then only 20% of the elements are kept, with the others set to zero.

Fig. 1. Relative error from all algorithms for a range of λ.

Each tensor algorithm takes $\mathcal{X} = \mathcal{L} + \mathcal{S}$ as input, whereas the matrix algorithms take the mode-1 matricisation of $\mathcal{X}$ as input. Since $\mathrm{rank}(\mathcal{L}) \le 8$, the rank $k$ in TRCPD is set to 8, and the ranks $k_i$ in TORPCA are all set to 8 because $\mathrm{rank}_i(\mathcal{L}) \le 8$. The relative error $\|\hat{\mathcal{L}} - \mathcal{L}\|_F / \|\mathcal{L}\|_F$, averaged over 5 trials, is plotted against $\lambda$ for the optimal $\hat{\mathcal{L}}$ in each algorithm in Fig. 1. The total execution time of each algorithm versus $\lambda$ is shown in Fig. 2.

It is clear that the tensor methods are superior to the matrix-based methods. In particular, TRCPD performs best, and TORPCA is also better than RSTD. Both TRCPD and TORPCA are stable in terms of $\lambda$, whereas RSTD depends heavily on tuning. The execution time confirms our complexity analysis: both TORPCA and TRCPD are significantly faster than RSTD.

Fig. 2. Running time of all algorithms as λ varies.

Table 1. Optimal parameter choices for all algorithms (RSTD, TORPCA, TRCPD, ORPCA) in the salt & pepper and occlusion experiments. The ranges explored were $\lambda \in [10^{-4}, 10^{-1}]$, $k \in \{10, 20, \ldots, 200\}$ and $\alpha \in \{0.1, 0.2, \ldots, 0.9\}$.

4.3. Facial image denoising

It is well understood that a convex Lambertian surface, viz. a face, under distant and isotropic lighting has a low-rank underlying model. In light of this, we consider images of a fixed pose under different illuminations from the extended Yale B database for benchmarking. All 64 images for one person were studied. For the matrix-based methods, observation matrices were formed by vectorising each image. All images were also re-scaled such that every pixel lies in $[0, 1]$.

Salt & pepper noise. Salt & pepper noise is observed in real images, commonly caused by data transmission errors. To apply salt & pepper noise, we randomly set pixels to black (0) or white (1) with equal probability. This is close to the Laplacian noise hypothesis, where the noise is heavy, non-Gaussian and potentially wide-ranging. We test an extreme case, where 60% of all the pixels are affected.

Partial occlusion. Partial occlusion is ubiquitous in visual information, and can usually be completed during human visual perception [14]. For the partial occlusion noise, we generate randomly sized patches at random locations. The maximum dimension is 60 pixels and the occlusion is filled with salt & pepper noise.
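A minimal NumPy sketch of the two corruption protocols just described (our illustration, not the paper's code): the image array, seed and patch placement are stand-ins, and both corruptions are shown on one image for brevity, whereas the paper evaluates them in separate scenarios.

```python
import numpy as np

rng = np.random.default_rng(0)        # illustrative seed
img = rng.random((192, 168))          # stand-in for one Yale B image, pixels in [0, 1]

# Salt & pepper: set 60% of the pixels to black (0) or white (1), equal probability
mask = rng.random(img.shape) < 0.6
img[mask] = rng.integers(0, 2, size=mask.sum()).astype(img.dtype)

# Partial occlusion: a randomly sized patch (max side 60 px) at a random
# location, itself filled with salt & pepper noise
h, w = rng.integers(1, 61, size=2)
top, left = rng.integers(0, img.shape[0] - h), rng.integers(0, img.shape[1] - w)
img[top:top + h, left:left + w] = rng.integers(0, 2, size=(h, w)).astype(img.dtype)
```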
The successful application of the various algorithms requires careful tuning of the algorithmic parameters. These include the penalty parameter $\lambda$, an estimate of $k = \mathrm{rank}(\mathcal{L})$, and $k_i = \mathrm{rank}_i(\mathcal{L}) = d_i \alpha$. The ranges of interest and the optimal choices are summarised in Table 1.

Reconstruction from salt & pepper noise is illustrated in the first row of Fig. 3, where the first image in the sequence is shown.

Fig. 3. Image denoising experiments: (a) & (h) are original images from the sequence. Salt & pepper noise is introduced in (b), and occlusion is demonstrated in (i). (c) & (j) present recovery results for RSTD; (d) & (k) for TORPCA; (e) & (l) for TRCPD; (f) & (m) for PCP; and (g) & (n) for ORPCA.

RSTD and the matrix-based methods fail to remove the introduced noise, whereas TORPCA and TRCPD are extremely promising, leaving no visible trace of noise. Recovery from partial occlusion is displayed in the second row of Fig. 3. ORPCA has little effect. The region where noise was introduced is severely distorted in the image recovered by RSTD. Both TORPCA and TRCPD managed to denoise the occlusion, though they have an additional smoothing effect. PCP achieves the highest quality of recovery, but there is still unremoved noise left in the image. This may be attributed to the fact that the occlusion is inherently of a matrix nature.

5. CONCLUSIONS

In this paper, two novel methods, namely TORPCA and TRCPD, have been proposed to address the problem of robust low-rank tensor recovery. The proposed methods surpass existing tensor- and matrix-based methods in our experiments. We anticipate our work to lay foundations for more general signal processing problems. Future research could extend our methods to hierarchical tensor decompositions.

REFERENCES

[1] J.B. Kruskal, "Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics," Linear Algebra and its Applications, vol. 18, no. 2, pp. 95-138, 1977.
[2] J. Håstad, "Tensor rank is NP-complete," Journal of Algorithms, vol. 11, no. 4, pp. 644-654, 1990.
[3] S. Gandy, B. Recht, and I. Yamada, "Tensor completion and low-n-rank tensor recovery via convex optimization," Inverse Problems, vol. 27, no. 2, 2011.
[4] J. Liu, P. Musialski, P. Wonka, and J. Ye, "Tensor completion for estimating missing values in visual data," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 208-220, 2013.
[5] L. Yang, Z. Huang, and X. Shi, "A fixed point iterative method for low n-rank tensor pursuit," IEEE Transactions on Signal Processing, vol. 61, no. 11, 2013.
[6] T.G. Kolda and B.W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51, no. 3, pp. 455-500, 2009.
[7] Y. Li, J. Yan, Y. Zhou, and J. Yang, "Optimum subspace learning and error correction for tensors," in European Conference on Computer Vision, 2010.
[8] E.J. Candès, X. Li, Y. Ma, and J. Wright, "Robust principal component analysis?," Journal of the ACM, vol. 58, no. 3, 2011.
[9] D. Goldfarb and Z. Qin, "Robust low-rank tensor recovery: Models and algorithms," SIAM Journal on Matrix Analysis and Applications, vol. 35, no. 1, 2014.
[10] B. Huang, C. Mu, J. Wright, and D. Goldfarb, "Provable models for robust low-rank tensor completion," Pacific Journal of Optimization, vol. 11, no. 2, 2015.
[11] G. Liu and S. Yan, "Active subspace: Toward scalable low-rank learning," Neural Computation, vol. 24, no. 12, 2012.
[12] L. De Lathauwer, B. De Moor, and J. Vandewalle, "A multilinear singular value decomposition," SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1253-1278, 2000.
[13] H. Zou, T. Hastie, and R. Tibshirani, "Sparse principal component analysis," Journal of Computational and Graphical Statistics, vol. 15, no. 2, pp. 265-286, 2006.
[14] A.B. Sekuler and S.E. Palmer, "Perception of partly occluded objects: A microgenetic analysis," Journal of Experimental Psychology: General, vol. 121, no. 1, 1992.