
1 Closed-Form Solutions in Low-Rank Subspace Recovery Models and Their Implications. Zhouchen Lin (林宙辰), Peking University (北京大学). Nov. 7, 2015

2 Outline: Sparsity vs. Low-rankness; Closed-Form Solutions of Low-rank Models; Applications of Closed-Form Solutions; Conclusions.

3 Challenges of High-dim Data: images > 1M dimensions; videos > 1B dimensions; web data > 10B+ dimensions. Courtesy of Y. Ma.

4 Sparsity vs. Low-rankness

5 Sparse Models

Sparse Representation:
$\min \|z\|_0, \quad \mathrm{s.t.}\ x = Az.$ (1)

Sparse Subspace Clustering:
$\min \|z_i\|_0, \quad \mathrm{s.t.}\ x_i = \hat{X}_i z_i, \ \forall i,$ (2)
where $\hat{X}_i = [x_1, \cdots, x_{i-1}, x_{i+1}, \cdots, x_n]$. In matrix form:
$\min \|Z\|_0, \quad \mathrm{s.t.}\ X = XZ, \ \mathrm{diag}(Z) = 0,$ (3)
and its convex relaxation
$\min \|Z\|_1, \quad \mathrm{s.t.}\ X = XZ, \ \mathrm{diag}(Z) = 0.$ (4)

Elhamifar and Vidal. Sparse Subspace Clustering. CVPR 2009.
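
The $\ell_0$ problems above are NP-hard; in practice one solves $\ell_1$ relaxations such as (4). As a concrete illustration (our own sketch, not from the slides), the vector problem $\min \|z\|_1$ s.t. $x = Az$ can be solved as a linear program via the standard split $z = u - v$ with $u, v \ge 0$; the helper name `solve_l1` is ours:

```python
import numpy as np
from scipy.optimize import linprog

def solve_l1(A, x):
    """min ||z||_1  s.t.  A z = x,  as an LP over (u, v) with z = u - v."""
    m, n = A.shape
    c = np.ones(2 * n)                 # sum(u) + sum(v) = ||z||_1
    A_eq = np.hstack([A, -A])          # A u - A v = x
    res = linprog(c, A_eq=A_eq, b_eq=x,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

# Usage: recover a 2-sparse z from 8 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 10))
z_true = np.zeros(10); z_true[[2, 7]] = [1.5, -2.0]
print(np.round(solve_l1(A, A @ z_true), 3))   # should be close to z_true
```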

6 Low-rank Models

Matrix Completion (MC), for filling in missing entries:
$\min \mathrm{rank}(A), \quad \mathrm{s.t.}\ D = \pi_\Omega(A).$

Robust PCA, for denoising:
$\min \mathrm{rank}(A) + \lambda \|E\|_{\ell_0}, \quad \mathrm{s.t.}\ D = A + E,$ where $\|E\|_{\ell_0} = \#\{E_{ij} \mid E_{ij} \neq 0\}$.

Low-rank Representation (LRR), for clustering:
$\min \mathrm{rank}(Z) + \lambda \|E\|_{2,0}, \quad \mathrm{s.t.}\ X = XZ + E,$ where $\|E\|_{2,0} = \#\{i \mid \|E_{:,i}\|_2 \neq 0\}$.

All NP-hard!

7 Convex Program Formulation

Matrix Completion (MC):
$\min \|A\|_*, \quad \mathrm{s.t.}\ D = \pi_\Omega(A).$

Robust PCA:
$\min \|A\|_* + \lambda \|E\|_{\ell_1}, \quad \mathrm{s.t.}\ D = A + E.$

Low-rank Representation (LRR):
$\min \|Z\|_* + \lambda \|E\|_{2,1}, \quad \mathrm{s.t.}\ X = XZ + E,$

where $\|A\|_* = \sum_i \sigma_i(A)$ is the nuclear norm, $\|E\|_{\ell_1} = \sum_{i,j} |E_{ij}|$, and $\|E\|_{2,1} = \sum_i \|E_{:,i}\|_2$.
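
For concreteness, every norm used in these models is a one-liner in numpy (a reference sketch of ours, not slide content):

```python
import numpy as np

def nuclear_norm(A):   # ||A||_*   : sum of singular values
    return np.linalg.svd(A, compute_uv=False).sum()

def l0_norm(E):        # ||E||_{l0} : number of nonzero entries
    return np.count_nonzero(E)

def l1_norm(E):        # ||E||_{l1} : sum of absolute entries
    return np.abs(E).sum()

def l20_norm(E):       # ||E||_{2,0}: number of nonzero columns
    return np.count_nonzero(np.linalg.norm(E, axis=0))

def l21_norm(E):       # ||E||_{2,1}: sum of column-wise l2 norms
    return np.linalg.norm(E, axis=0).sum()
```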

8 Applications of Low-rank Models: background modeling, robust alignment, image rectification, motion segmentation, image segmentation, saliency detection, image tag refinement, partial-duplicate image search. Zhouchen Lin and Yi Ma (林宙辰, 马毅), Low-Rank Models in Signal and Data Processing (信号与数据处理中的低秩模型), Communications of the CCF, No. 4, 2015.

9 Closed-form Solution of LRR

Closed-form solution in the noiseless case:
$\min_Z \|Z\|_*, \quad \mathrm{s.t.}\ X = XZ$
has the unique closed-form optimal solution $Z^* = V_r V_r^T$, where $U_r \Sigma_r V_r^T$ is the skinny SVD of $X$. This is the Shape Interaction Matrix. When $X$ is sampled from independent subspaces, $Z^*$ is block diagonal, each block corresponding to one subspace. Moreover, $\min_{X = XZ} \|Z\|_* = \mathrm{rank}(X)$.

Wei and Lin. Analysis and Improvement of Low Rank Representation for Subspace Segmentation, arXiv preprint. Liu et al. Robust Recovery of Subspace Structures by Low-Rank Representation, TPAMI 2013.
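
A quick numerical check of this closed form (our own sketch): build X from two independent subspaces, form Z* = V_r V_r^T from the skinny SVD, and verify feasibility and the block-diagonal structure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two independent 2-dim subspaces in R^20, 5 samples each.
X = np.hstack([rng.standard_normal((20, 2)) @ rng.standard_normal((2, 5))
               for _ in range(2)])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int((s > 1e-10 * s[0]).sum())       # numerical rank
Vr = Vt[:r].T                           # right factor of the skinny SVD
Z = Vr @ Vr.T                           # Shape Interaction Matrix

print(np.allclose(X, X @ Z))            # feasible: X = XZ
print(np.abs(Z[:5, 5:]).max() < 1e-10)  # off-diagonal block vanishes
```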

10 Closed-form Solution of LRR

Closed-form solution in the general case:
$\min_Z \|Z\|_*, \quad \mathrm{s.t.}\ X = AZ$
has the unique closed-form optimal solution $Z^* = A^\dagger X$. Valid for any unitarily invariant norm!

Liu et al., Robust Recovery of Subspace Structures by Low-Rank Representation, TPAMI 2013. Yao-Liang Yu and Dale Schuurmans, Rank/Norm Regularization with Closed-Form Solutions: Application to Subspace Clustering, UAI 2011.
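
This closed form is a two-liner to check in numpy (our sketch; it assumes feasibility, i.e. the columns of X lie in the range of A):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))
X = A @ rng.standard_normal((8, 12))   # feasible by construction
Z = np.linalg.pinv(A) @ X              # Z* = A^+ X
print(np.allclose(X, A @ Z))           # the constraint X = AZ holds
```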

11 Closed-form Solution of LRR

Closed-form solution of the original (rank-minimization) LRR:
$\min_Z \mathrm{rank}(Z), \quad \mathrm{s.t.}\ A = XZ.$ (1)

Theorem: Suppose $U_X \Sigma_X V_X^T$ and $U_A \Sigma_A V_A^T$ are the skinny SVDs of $X$ and $A$, respectively. The complete solutions to the feasible generalized LRR problem (1) are given by
$Z^* = X^\dagger A + S V_A^T,$ (2)
where $S$ is any matrix such that $V_X^T S = 0$.

Hongyang Zhang, Zhouchen Lin, and Chao Zhang, A Counterexample for the Validity of Using Nuclear Norm as a Convex Surrogate of Rank, ECML/PKDD 2013.
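
The whole solution family is easy to generate numerically (our sketch): project an arbitrary matrix onto the orthogonal complement of the column space of V_X to get a valid S, then check that feasibility and the minimal rank are unaffected.

```python
import numpy as np

def skinny(M, tol=1e-10):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = int((s > tol * s[0]).sum())
    return U[:, :r], s[:r], Vt[:r].T

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 6)) @ rng.standard_normal((6, 15))  # rank 6
A = X @ rng.standard_normal((15, 12))       # feasible: range(A) in range(X)

_, _, Vx = skinny(X)
_, _, Va = skinny(A)
M = rng.standard_normal((15, Va.shape[1]))
S = (np.eye(15) - Vx @ Vx.T) @ M            # any S with Vx^T S = 0, so X S = 0
Z = np.linalg.pinv(X) @ A + S @ Va.T        # one member of the family (2)

print(np.allclose(A, X @ Z))                                  # still feasible
print(np.linalg.matrix_rank(Z) == np.linalg.matrix_rank(A))   # rank stays minimal
```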

12 Latent LRR

Small sample issue:
$\min \|Z\|_*, \quad \mathrm{s.t.}\ X = XZ.$

Consider unobserved samples:
$\min \|Z\|_*, \quad \mathrm{s.t.}\ X_O = [X_O, X_H] Z.$

Liu and Yan. Latent Low-Rank Representation for Subspace Segmentation and Feature Extraction, ICCV 2011.

13 Latent LRR

$\min \|Z\|_*, \quad \mathrm{s.t.}\ X_O = [X_O, X_H] Z.$

Theorem: Write $Z^*_{O,H} = [Z^*_{O|H}; Z^*_{H|O}]$. Then
$Z^*_{O|H} = V_O V_O^T, \quad Z^*_{H|O} = V_H V_O^T,$
where $V_O$ and $V_H$ are obtained as follows: compute the skinny SVD $[X_O, X_H] = U \Sigma V^T$ and partition $V$ as $V = [V_O; V_H]$.

Proof: $[X_O, X_H] = U \Sigma [V_O; V_H]^T$ implies $X_O = U \Sigma V_O^T$ and $X_H = U \Sigma V_H^T$. So $X_O = [X_O, X_H] Z$ reduces to $V_O^T = V^T Z$, and hence $Z^*_{O,H} = V V_O^T = [V_O V_O^T; V_H V_O^T]$.
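
A numerical check of this partition (our sketch; V_O is the block of rows of V corresponding to the observed samples):

```python
import numpy as np

rng = np.random.default_rng(3)
n_o, n_h = 8, 4
X = rng.standard_normal((20, 5)) @ rng.standard_normal((5, n_o + n_h))
X_O = X[:, :n_o]                            # observed samples

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int((s > 1e-10 * s[0]).sum())
V = Vt[:r].T
V_O, V_H = V[:n_o], V[n_o:]                 # row partition of V

Z = np.vstack([V_O @ V_O.T, V_H @ V_O.T])   # Z* = [V_O V_O^T; V_H V_O^T]
print(np.allclose(X_O, X @ Z))              # X_O = [X_O, X_H] Z
```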

14 Latent LRR

With $Z^*_{O|H} = V_O V_O^T$ and $Z^*_{H|O} = V_H V_O^T$ as above,
$X_O = [X_O, X_H] Z^*_{O,H} = X_O Z^*_{O|H} + X_H Z^*_{H|O} = X_O Z^*_{O|H} + X_H V_H V_O^T = X_O Z^*_{O|H} + U \Sigma V_H^T V_H V_O^T = X_O Z^*_{O|H} + U \Sigma V_H^T V_H \Sigma^{-1} U^T X_O \triangleq X_O Z^*_{O|H} + L^*_{H|O} X_O,$
where $L^*_{H|O}$ is low rank! This motivates
$\min \mathrm{rank}(Z) + \mathrm{rank}(L), \quad \mathrm{s.t.}\ X = XZ + LX,$
and its convex relaxation
$\min \|Z\|_* + \|L\|_*, \quad \mathrm{s.t.}\ X = XZ + LX.$

15 Latent LRR

With noise:
$\min \|Z\|_* + \|L\|_* + \lambda \|E\|_1, \quad \mathrm{s.t.}\ X = XZ + LX + E.$

16 Analysis on LatLRR

Noiseless LatLRR has non-unique closed-form solutions! (E.g., solving $\min_Z \mathrm{rank}(Z)$ s.t. $X = XZ$ with $L = 0$, solving $\min_L \mathrm{rank}(L)$ s.t. $X = LX$ with $Z = 0$, or splitting the constraint into $\frac{1}{2}X = XZ$ and $\frac{1}{2}X = LX$ all give feasible decompositions.)

Theorem: The complete solutions to
$\min_{Z,L} \mathrm{rank}(Z) + \mathrm{rank}(L), \quad \mathrm{s.t.}\ X = XZ + LX$
are
$Z^* = V_X \tilde{W} V_X^T + S_1 \tilde{W} V_X^T \quad \text{and} \quad L^* = U_X \Sigma_X (I - \tilde{W}) \Sigma_X^{-1} U_X^T + U_X \Sigma_X (I - \tilde{W}) S_2,$
where $\tilde{W}$ is any idempotent matrix ($\tilde{W}^2 = \tilde{W}$) and $S_1$, $S_2$ are any matrices satisfying:
1. $V_X^T S_1 = 0$ and $S_2 U_X = 0$; and
2. $\mathrm{rank}(S_1) \le \mathrm{rank}(\tilde{W})$ and $\mathrm{rank}(S_2) \le \mathrm{rank}(I - \tilde{W})$.

Zhang et al., A Counterexample for the Validity of Using Nuclear Norm as a Convex Surrogate of Rank, ECML/PKDD 2013.

17 Analysis on LatLRR

Theorem: The complete solutions to
$\min_{Z,L} \|Z\|_* + \|L\|_*, \quad \mathrm{s.t.}\ X = XZ + LX$
are
$Z^* = V_X \hat{W} V_X^T \quad \text{and} \quad L^* = U_X (I - \hat{W}) U_X^T,$
where $\hat{W}$ is any block diagonal matrix satisfying:
1. its blocks are compatible with $\Sigma_X$, i.e., $[\hat{W}]_{ij} = 0$ whenever $[\Sigma_X]_{ii} \neq [\Sigma_X]_{jj}$; and
2. both $\hat{W}$ and $I - \hat{W}$ are positive semi-definite.
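
Any valid $\hat{W}$ generates a solution. For instance, $\hat{W} = \frac{1}{2}I$ is compatible with every $\Sigma_X$, and both $\hat{W}$ and $I - \hat{W}$ are PSD; a quick feasibility check (our sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((20, 4)) @ rng.standard_normal((4, 12))  # rank 4

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int((s > 1e-10 * s[0]).sum())
Ux, Vx = U[:, :r], Vt[:r].T

W = 0.5 * np.eye(r)                     # a valid W-hat
Z = Vx @ W @ Vx.T                       # Z* = V_X W V_X^T
L = Ux @ (np.eye(r) - W) @ Ux.T         # L* = U_X (I - W) U_X^T
print(np.allclose(X, X @ Z + L @ X))    # X = XZ + LX
```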

18 Robust LatLRR

Comparison on synthetic data as the percentage of corruptions increases. [figure]

Hongyang Zhang, Zhouchen Lin, Chao Zhang, and Junbin Gao, Robust Latent Low Rank Representation for Subspace Clustering, Neurocomputing, Vol. 145, December 2014.

19 Robust LatLRR

Table 1: Segmentation errors (%) on the Hopkins 155 data set, reporting MAX, MEAN, and STD for SSC, LRR, RSI, LRSC, LatLRR, and Robust LatLRR. For Robust LatLRR, the parameter $\lambda$ is set as $0.806/\sqrt{n}$; the parameters of the other methods are also tuned to be the best. [table]

20 Relationship Between LR Models [diagram]

Hongyang Zhang, Zhouchen Lin, Chao Zhang, and Junbin Gao, Relations Among Some Low-Rank Subspace Recovery Models, Neural Computation, Vol. 27, No. 9, 2015.

21 Relationship Between LR Models [diagram]

22 Relationship Between LR Models [diagram]

23 Implications

We can obtain globally optimal solutions to other low-rank models, and we can have much faster algorithms for them.

24 Implications: Comparison of Optimality

Comparison of accuracies of solutions to relaxed R-LRR computed by REDU-EXPR and by partial ADM, where the parameter $\lambda$ is set as $1/\sqrt{\log n}$ and $n$ is the input size. The program is run 10 times and the average accuracies are reported. [figure]

25 Implications: Comparison of Speed

Table 1: Unsupervised face image clustering results on the Extended YaleB database. REDU-EXPR means first reducing to RPCA and then expressing the solution via that of RPCA.

Model   Method        Accuracy   CPU Time (h)
LRR     ADM           -          >10
R-LRR   ADM           -          did not converge
R-LRR   partial ADM   -          >10
R-LRR   REDU-EXPR     %

26 Implications: Comparison of Optimality and Speed [figure]

27 O(n) RPCA by ℓ1-Filtering

Target problem:
$\min \|A\|_* + \lambda \|E\|_{\ell_1}, \quad \mathrm{s.t.}\ D = A + E,$
for an $n \times n$ matrix $D$. Assumption: $\mathrm{rank}(A) = r = o(n)$.

Risheng Liu, Zhouchen Lin, Zhixun Su, and Junbin Gao, Linear-time Principal Component Pursuit and its extensions using ℓ1 Filtering, Neurocomputing, Vol. 142, 2014.

28 O(n) RPCA by ℓ1-Filtering

First, randomly sample a $K \times K$ submatrix $D_{sub}$ of $D$, where $K = cr$.

29 O(n) RPCA by ℓ1-Filtering

Second, solve RPCA for $D_{sub}$, i.e. $D_{sub} = A_{sub} + E_{sub}$:
$\min \|A_{sub}\|_* + \lambda_1 \|E_{sub}\|_1, \quad \mathrm{s.t.}\ D_{sub} = A_{sub} + E_{sub}.$
Less than O(n) complexity!
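
The seed problem is an ordinary (small) RPCA, solvable e.g. by the inexact augmented Lagrange multiplier method, alternating singular value thresholding with entrywise soft thresholding. A minimal sketch of ours (not the authors' code):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt

def soft(M, tau):
    """Entrywise soft thresholding: prox of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def rpca_ialm(D, lam=None, rho=1.5, tol=1e-7, max_iter=500):
    """Inexact ALM for  min ||A||_* + lam ||E||_1  s.t.  D = A + E."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(D, 2)               # largest singular value
    Y = D / max(norm_two, np.abs(D).max() / lam)  # dual variable init
    mu = 1.25 / norm_two
    A, E = np.zeros_like(D), np.zeros_like(D)
    for _ in range(max_iter):
        A = svt(D - E + Y / mu, 1.0 / mu)         # low-rank update
        E = soft(D - A + Y / mu, lam / mu)        # sparse update
        R = D - A - E
        Y += mu * R                               # dual ascent
        mu *= rho
        if np.linalg.norm(R) / np.linalg.norm(D) < tol:
            break
    return A, E
```

Usage for this step would be `A_sub, E_sub = rpca_ialm(D_sub)`.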

30 O(n) RPCA by ℓ1-Filtering

Third, find an $r \times r$ full-rank submatrix $B$ of $A_{sub}$, where $r = \mathrm{rank}(A_{sub}) = o(n)$.

31 O(n) RPCA by ℓ1-Filtering

Fourth, correct the $K \times (n-K)$ submatrix of $D$ using $C_{sub}$: for each of its columns $d_c$, solve
$\min_{p_c} \|d_c - C_{sub}\, p_c\|_1.$
O(n) complexity, and it can be done in parallel!
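
Each such correction is a small ℓ1 regression. A simple stand-in solver (our sketch, using iteratively reweighted least squares instead of an exact ℓ1 method; `C_sub` and `d_c` are as in the slide):

```python
import numpy as np

def l1_regress(X, y, iters=100, eps=1e-8):
    """Approximately solve  min_p ||y - X p||_1  by IRLS."""
    p = np.linalg.lstsq(X, y, rcond=None)[0]          # least-squares init
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ p), eps)  # reweight residuals
        Xw = X * w[:, None]                           # row-weighted design
        p = np.linalg.solve(X.T @ Xw + 1e-12 * np.eye(X.shape[1]), Xw.T @ y)
    return p

# Column-correction step:  p_c = l1_regress(C_sub, d_c)
```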

32 O(n) RPCA by ℓ1-Filtering

Fifth, correct the $(n-K) \times K$ submatrix of $D$ using $R_{sub}$: for each of its rows $d_r$, solve
$\min_{q_r} \|d_r - q_r R_{sub}\|_1.$
O(n) complexity, and it can be done in parallel! Rows and columns can also be processed in parallel!

33 O(n) RPCA by ℓ1-Filtering

Finally, the rest of $A$ is $A_c = Q_r B P_c$, where
$P_c = (p_1, p_2, \cdots, p_{n-K})$ and $Q_r = (q_1; q_2; \cdots; q_{n-K})$
collect the column and row coefficients. A compact representation of $A$!

34 O(n) RPCA by ℓ1-Filtering

What if the rank is unknown?
1. If $\mathrm{rank}(A_{sub}) > K/c$, then increase $K$ to $c \cdot \mathrm{rank}(A_{sub})$.
2. Otherwise, resample another $D_{sub}$ for cross validation.

35 Experiments

Synthetic data: comparison of S-ADM, L-ADM, and ℓ1-filtering, reporting $\|L_0 - L^*\|_F / \|L_0\|_F$, $\mathrm{rank}(L^*)$, $\|L^*\|_*$, $\|S^*\|_{\ell_0}$, $\|S^*\|_{\ell_1}$, and time (s), for ground truths with $\mathrm{rank}(L_0) = 20$ ($\|L_0\|_* = 39546$, $\|S_0\|_{\ell_0} = 40000$), $\mathrm{rank}(L_0) = 50$, and $\mathrm{rank}(L_0) = 100$. Structure-from-motion data: the same comparison with $\mathrm{rank}(L_0) = 4$, $\|L_0\|_* = 31160$. [table]

36 Experiments [figure]

37 Conclusions Low-rank models have much richer mathematical properties than sparse models. Closed-form solutions to low-rank models are useful in both theory and applications.

38 Thanks!
