Robust PCA. CS5240 Theoretical Foundations in Multimedia. Leow Wee Kheng


Slide 1: Robust PCA

CS5240 Theoretical Foundations in Multimedia
Leow Wee Kheng
Department of Computer Science, School of Computing
National University of Singapore

Slide 2: Previously...

Not robust against outliers: linear least squares, PCA.
Robust against outliers: trimmed least squares, trimmed PCA.

Various ways to robustify PCA:
- trimming: remove outliers
- covariance matrix with 0-1 weights [Xu95]: similar to trimming
- weighted SVD [Gabriel79]: weighting
- robust error function [Torre2001]: winsorizing

Strength: simple concepts. Weakness: no guarantee of optimality.

Slide 3: Robust PCA

PCA can be formulated as follows: Given a data matrix D, recover a low-rank matrix A from D such that the error E = D − A is minimized:

    $\min_{A,E} \|E\|_F$, subject to $\mathrm{rank}(A) \le r$, $D = A + E$.    (1)

$r \le \min(m, n)$ is the target rank of A. $\|\cdot\|_F$ is the Frobenius norm.

Notes:
- This definition of PCA includes dimensionality reduction.
- PCA is severely affected by large-amplitude noise; not robust.
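For reference, Problem 1 is solved exactly by truncating the SVD of D (the Eckart–Young theorem). A minimal NumPy sketch, with the function name my own:

```python
import numpy as np

def pca_low_rank(D, r):
    """Best rank-r approximation of D in Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    A = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]  # low-rank component
    E = D - A                                  # minimized error
    return A, E
```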

Slide 4: Robust PCA

[Wright2009] formulated the robust PCA problem as follows: Given a data matrix D = A + E, where A and E are unknown but A is low-rank and E is sparse, recover A.

An obvious way to state the robust PCA problem in math is: Given a data matrix D, find A and E that solve the problem

    $\min_{A,E} \mathrm{rank}(A) + \lambda \|E\|_0$, subject to $A + E = D$.    (2)

$\lambda$ is a Lagrange multiplier. $\|E\|_0$: $\ell_0$-norm, the number of non-zero elements in E. E is sparse if $\|E\|_0$ is small.

Slide 5: Robust PCA

    $\min_{A,E} \mathrm{rank}(A) + \lambda \|E\|_0$, subject to $A + E = D$.    (2)

[Figure: decomposition of the data matrix D into a low-rank part A plus a sparse part E]

Problem 2 is a matrix recovery problem. $\mathrm{rank}(A)$ and $\|E\|_0$ are not continuous, not convex; very hard to solve; no efficient algorithm.

Slide 6: Robust PCA

[Candès2011, Wright2009] reformulated Problem 2 as follows: Given an m × n data matrix D, find A and E that solve

    $\min_{A,E} \|A\|_* + \lambda \|E\|_1$, subject to $A + E = D$.    (3)

$\|A\|_*$: nuclear norm, sum of singular values of A; surrogate for $\mathrm{rank}(A)$.
$\|E\|_1$: $\ell_1$-norm, sum of absolute values of the elements of E; surrogate for $\|E\|_0$.

The solution of Problem 3 recovers A and E exactly if A is sufficiently low-rank but not sparse, and E is sufficiently sparse but not low-rank, with optimal $\lambda = 1/\sqrt{\max(m,n)}$.

$\|A\|_*$ and $\|E\|_1$ are convex; can apply convex optimization.

Slide 7: Robust PCA — Introduction to Convex Optimization

For a differentiable function f(x), its minimizer $\tilde{x}$ is given by

    $\dfrac{df(\tilde{x})}{dx} = \left[ \dfrac{\partial f(\tilde{x})}{\partial x_1} \ \cdots \ \dfrac{\partial f(\tilde{x})}{\partial x_m} \right]^\top = 0$.    (4)

[Figure: graph of f with its minimum at the point where df/dx = 0]

Slide 8: Robust PCA — Introduction to Convex Optimization

$\|E\|_1$ is not differentiable when any of its elements is zero!

[Figure: a function with a non-differentiable kink]

Cannot write the following because they don't exist:

    $\dfrac{d\|E\|_1}{dE}$, $\dfrac{\partial \|E\|_1}{\partial e_{ij}}$, $\dfrac{d|e_{ij}|}{de_{ij}}$.    WRONG!    (5)

Fortunately, $\|E\|_1$ is convex.

Slide 9: Robust PCA — Introduction to Convex Optimization

A function f(x) is convex if and only if, for all $x_1, x_2$ and all $\alpha \in [0,1]$,

    $f(\alpha x_1 + (1-\alpha) x_2) \le \alpha f(x_1) + (1-\alpha) f(x_2)$.    (6)

[Figure: the chord between $(x_1, f(x_1))$ and $(x_2, f(x_2))$ lies above the graph of f]

Slide 10: Robust PCA — Introduction to Convex Optimization

A vector g(x) is a subgradient of a convex function f at x if

    $f(y) - f(x) \ge g(x)^\top (y - x)$, for all y.    (7)

[Figure: supporting lines $l_1$, $l_2$, $l_3$ under the graph of f at points $x_1$ and $x_2$]

At a differentiable point $x_1$: one unique subgradient = the gradient.
At a non-differentiable point $x_2$: multiple subgradients.

Slide 11: Robust PCA — Introduction to Convex Optimization

The subdifferential $\partial f(x)$ is the set of all subgradients of f at x:

    $\partial f(x) = \{\, g(x) \mid g(x)^\top (y - x) \le f(y) - f(x), \ \forall y \,\}$.    (8)

Caution: The subdifferential $\partial f(x)$ is not the partial derivative $\partial f(x)/\partial x$. Don't mix them up.
- Partial differentiation is defined for differentiable functions. It is a single vector.
- The subdifferential is defined for convex functions, which may be non-differentiable. It is a set of vectors.

Slide 12: Robust PCA — Introduction to Convex Optimization

Example: Absolute value. $|x|$ is not differentiable at x = 0 but is subdifferentiable at x = 0:

    $|x| = \begin{cases} x & \text{if } x > 0, \\ 0 & \text{if } x = 0, \\ -x & \text{if } x < 0, \end{cases} \qquad \partial |x| = \begin{cases} \{+1\} & \text{if } x > 0, \\ [-1, +1] & \text{if } x = 0, \\ \{-1\} & \text{if } x < 0. \end{cases}$    (9)

[Figure: the absolute value and its subdifferential; slope g = −1 for x < 0, g ∈ [−1, 1] at x = 0, g = 1 for x > 0]

Slide 13: Robust PCA — Introduction to Convex Optimization

Example: $\ell_1$-norm of an m-dimensional vector x.

    $\|x\|_1 = \sum_{i=1}^m |x_i|, \qquad \partial \|x\|_1 = \sum_{i=1}^m \partial |x_i|$.    (10)

Caution: The right-hand side of Eq. 10 is set addition. This gives a product of m sets, one for each element $x_i$:

    $\partial \|x\|_1 = J_1 \times \cdots \times J_m, \qquad J_i = \begin{cases} \{+1\} & \text{if } x_i > 0, \\ [-1, +1] & \text{if } x_i = 0, \\ \{-1\} & \text{if } x_i < 0. \end{cases}$    (11)

Alternatively, we can write $\partial \|x\|_1$ as

    $\partial \|x\|_1 = \{ g \}$ such that $g_i \begin{cases} = +1 & \text{if } x_i > 0, \\ \in [-1, +1] & \text{if } x_i = 0, \\ = -1 & \text{if } x_i < 0. \end{cases}$    (12)

Slide 14: Robust PCA — Introduction to Convex Optimization

Another alternative:

    $\partial \|x\|_1 = \{ g \}$ such that $\begin{cases} g_i = \mathrm{sgn}(x_i) & \text{if } x_i \ne 0, \\ |g_i| \le 1 & \text{if } x_i = 0. \end{cases}$    (13)

Here are some examples of the subdifferentials of the 2D $\ell_1$-norm:

    $\partial f(0,0) = [-1,1] \times [-1,1]$
    $\partial f(1,0) = \{1\} \times [-1,1]$
    $\partial f(1,1) = \{(1,1)\}$

Slide 15: Robust PCA — Introduction to Convex Optimization

Optimality condition: $\tilde{x}$ is a minimum of f if $0 \in \partial f(\tilde{x})$, i.e., 0 is a subgradient of f at $\tilde{x}$.

[Figure: minimum of a non-differentiable convex function, where the zero subgradient is attained]

For more details on convex functions and convex optimization, refer to [Bertsekas2003, Boyd2004, Rockafellar70, Shor85].

Slide 16: Robust PCA — Back to Robust PCA

Shrinkage or soft-threshold operator:

    $T_\varepsilon(x) = \begin{cases} x - \varepsilon & \text{if } x > \varepsilon, \\ x + \varepsilon & \text{if } x < -\varepsilon, \\ 0 & \text{otherwise.} \end{cases}$    (14)

[Figure: graph of $T_\varepsilon(x)$, zero on $[-\varepsilon, \varepsilon]$ and linear with slope 1 outside]
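A one-line NumPy version of Eq. 14 (the function name is mine), applied elementwise so it works on matrices as well as scalars:

```python
import numpy as np

def soft_threshold(X, eps):
    """Shrinkage operator T_eps of Eq. 14, applied elementwise."""
    return np.sign(X) * np.maximum(np.abs(X) - eps, 0.0)
```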

Slide 17: Robust PCA — Back to Robust PCA

Using convex optimization, the following minimizers are shown in [Cai2010, Hale2007]: For an m × n matrix M with SVD $USV^\top$,

    $U T_\varepsilon(S) V^\top = \arg\min_X \ \varepsilon \|X\|_* + \frac{1}{2} \|X - M\|_F^2$,    (15)

    $T_\varepsilon(M) = \arg\min_X \ \varepsilon \|X\|_1 + \frac{1}{2} \|X - M\|_F^2$.    (16)
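Eq. 15 says the nuclear-norm proximal step is soft-thresholding of the singular values. A sketch (function name mine; since singular values are nonnegative, $T_\varepsilon$ reduces to max(s − ε, 0)):

```python
import numpy as np

def svd_threshold(M, eps):
    """Minimizer of Eq. 15: U T_eps(S) V^T for the SVD M = U S V^T."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - eps, 0.0)) @ Vt
```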

Slide 18: Robust PCA — Solving Robust PCA

There are several ways to solve robust PCA (Problem 3) [Lin2009, Wright2009]:
- principal component pursuit
- iterative thresholding
- accelerated proximal gradient
- augmented Lagrange multipliers

Slide 19: Augmented Lagrange Multipliers

Consider the problem

    $\min_x f(x)$ subject to $c_j(x) = 0, \ j = 1, \ldots, m$.    (17)

This is a constrained optimization problem. The Lagrange multipliers method reformulates Problem 17 as

    $\min_x \ f(x) + \sum_{j=1}^m \lambda_j c_j(x)$    (18)

with some constants $\lambda_j$.

Slide 20: Augmented Lagrange Multipliers

The penalty method reformulates Problem 17 as

    $\min_x \ f(x) + \mu \sum_{j=1}^m c_j^2(x)$.    (19)

Parameter $\mu$ increases over the iterations, e.g., by a factor of 10. Need $\mu \to \infty$ to get a good solution.

The augmented Lagrange multipliers method combines Lagrange multipliers and penalty:

    $\min_x \ f(x) + \sum_{j=1}^m \lambda_j c_j(x) + \frac{\mu}{2} \sum_{j=1}^m c_j^2(x)$.    (20)

Denote $\lambda = [\lambda_1 \cdots \lambda_m]^\top$ and $c(x) = [c_1(x) \cdots c_m(x)]^\top$. Then Problem 20 becomes

    $\min_x \ f(x) + \lambda^\top c(x) + \frac{\mu}{2} \|c(x)\|^2$.    (21)

Slide 21: Augmented Lagrange Multipliers

If the constraints form a matrix $C = [c_{jk}]$, then define $\Lambda = [\lambda_{jk}]$. Then Problem 20 becomes

    $\min_x \ f(x) + \langle \Lambda, C(x) \rangle + \frac{\mu}{2} \|C(x)\|_F^2$.    (22)

$\langle \Lambda, C \rangle$ is the sum of products of corresponding elements: $\langle \Lambda, C \rangle = \sum_j \sum_k \lambda_{jk} c_{jk}$.

$\|C\|_F$ is the Frobenius norm: $\|C\|_F^2 = \sum_j \sum_k c_{jk}^2$.

Slide 22: Augmented Lagrange Multipliers

Augmented Lagrange Multipliers (ALM) Method
1. Initialize $\Lambda$, $\mu > 0$, $\rho \ge 1$.
2. Repeat until convergence:
   2.1 Compute $x = \arg\min_x L(x)$ where $L(x) = f(x) + \langle \Lambda, C(x) \rangle + \frac{\mu}{2} \|C(x)\|_F^2$.
   2.2 Update $\Lambda = \Lambda + \mu C(x)$.
   2.3 Update $\mu = \rho \mu$.

What kind of optimization algorithm is this?

ALM does not need $\mu \to \infty$ to get a good solution. That means it can converge faster.
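As a toy illustration of the loop above (my own example, not from the slides), minimize f(x) = x² subject to the single constraint c(x) = x − 1 = 0. Here Step 2.1 has the closed form x = (µ − λ)/(2 + µ), obtained by setting dL/dx = 2x + λ + µ(x − 1) = 0:

```python
def alm_toy(iters=50):
    """ALM on min x^2 subject to x - 1 = 0; solution x = 1, multiplier -2."""
    lam, mu, rho = 0.0, 1.0, 1.5
    x = 0.0
    for _ in range(iters):
        x = (mu - lam) / (2.0 + mu)  # Step 2.1: argmin of L(x), closed form
        lam += mu * (x - 1.0)        # Step 2.2: multiplier update
        mu *= rho                    # Step 2.3: penalty update
    return x

print(alm_toy())  # approaches 1.0
```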

Slide 23: Augmented Lagrange Multipliers

With ALM, robust PCA (Problem 3) is reformulated as

    $\min_{A,E} \ \|A\|_* + \lambda \|E\|_1 + \langle Y, D - A - E \rangle + \frac{\mu}{2} \|D - A - E\|_F^2$.    (23)

Y holds the Lagrange multipliers. The constraints are $C = D - A - E$.

To implement Step 2.1 of ALM, we need to find the minima over A and E. Adopt alternating optimization.

Slide 24: Augmented Lagrange Multipliers — Trace of Matrix

The trace of a matrix A is the sum of its diagonal elements $a_{ii}$:

    $\mathrm{tr}(A) = \sum_{i=1}^n a_{ii}$.    (24)

Strictly speaking, a scalar c and a 1 × 1 matrix [c] are not the same thing. Nevertheless, since tr([c]) = c, we often write, for simplicity, tr(c) = c.

Properties:
- $\mathrm{tr}(A) = \sum_{i=1}^n \lambda_i$, where $\lambda_i$ are the eigenvalues of A.
- $\mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B)$
- $\mathrm{tr}(cA) = c\,\mathrm{tr}(A)$
- $\mathrm{tr}(A) = \mathrm{tr}(A^\top)$

Slide 25: Augmented Lagrange Multipliers — Trace of Matrix

- $\mathrm{tr}(ABCD) = \mathrm{tr}(BCDA) = \mathrm{tr}(CDAB) = \mathrm{tr}(DABC)$
- $\mathrm{tr}(X^\top Y) = \mathrm{tr}(XY^\top) = \mathrm{tr}(Y^\top X) = \mathrm{tr}(YX^\top) = \sum_i \sum_j x_{ij} y_{ij}$
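A quick numerical check of the last identity (illustrative only):

```python
import numpy as np

X, Y = np.random.randn(3, 4), np.random.randn(3, 4)
lhs = np.trace(X.T @ Y)  # tr(X^T Y)
rhs = np.sum(X * Y)      # sum_ij x_ij y_ij
assert np.isclose(lhs, rhs)
```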

Slide 26: Augmented Lagrange Multipliers

From Problem 23, the minimal E with the other variables fixed is given by

    $\min_E \ \lambda \|E\|_1 + \langle Y, D - A - E \rangle + \frac{\mu}{2} \|D - A - E\|_F^2$
    $\Leftrightarrow \min_E \ \lambda \|E\|_1 + \mathrm{tr}(Y^\top (D - A - E)) + \frac{\mu}{2} \|D - A - E\|_F^2$
    $\Leftrightarrow \min_E \ \lambda \|E\|_1 + \mathrm{tr}(-Y^\top E) + \frac{\mu}{2} \mathrm{tr}((D - A - E)^\top (D - A - E))$ (dropping terms independent of E)
    $\Leftrightarrow$ (Homework) $\min_E \ \lambda \|E\|_1 + \frac{\mu}{2} \mathrm{tr}((E - (D - A + Y/\mu))^\top (E - (D - A + Y/\mu)))$
    $\Leftrightarrow \min_E \ \lambda \|E\|_1 + \frac{\mu}{2} \|E - (D - A + Y/\mu)\|_F^2$
    $\Leftrightarrow \min_E \ \frac{\lambda}{\mu} \|E\|_1 + \frac{1}{2} \|E - (D - A + Y/\mu)\|_F^2$.    (25)

Slide 27: Augmented Lagrange Multipliers

Compare Eq. 16 and Problem 25:

    $T_\varepsilon(M) = \arg\min_X \ \varepsilon \|X\|_1 + \frac{1}{2} \|X - M\|_F^2$,
    $\min_E \ \frac{\lambda}{\mu} \|E\|_1 + \frac{1}{2} \|E - (D - A + Y/\mu)\|_F^2$.

Set $X = E$, $M = D - A + Y/\mu$. Then

    $E = T_{\lambda/\mu}(D - A + Y/\mu)$.    (26)

Slide 28: Augmented Lagrange Multipliers

From Problem 23, the minimal A with the other variables fixed is given by

    $\min_A \ \|A\|_* + \langle Y, D - A - E \rangle + \frac{\mu}{2} \|D - A - E\|_F^2$
    $\Leftrightarrow \min_A \ \|A\|_* + \mathrm{tr}(Y^\top (D - A - E)) + \frac{\mu}{2} \|D - A - E\|_F^2$
    $\Leftrightarrow$ (Homework) $\min_A \ \frac{1}{\mu} \|A\|_* + \frac{1}{2} \|A - (D - E + Y/\mu)\|_F^2$.    (27)

Compare Problem 27 and Eq. 15:

    $U T_\varepsilon(S) V^\top = \arg\min_X \ \varepsilon \|X\|_* + \frac{1}{2} \|X - M\|_F^2$.

Set $X = A$, $M = D - E + Y/\mu$. Then

    $A = U T_{1/\mu}(S) V^\top$.    (28)

Slide 29: Augmented Lagrange Multipliers

Robust PCA for Matrix Recovery via ALM
Inputs: D.
1. Initialize A = 0, E = 0.
2. Initialize Y, µ > 0, ρ > 1.
3. Repeat until convergence:
4.   Repeat until convergence:
5.     U, S, V = SVD(D − E + Y/µ).
6.     A = U T_{1/µ}(S) Vᵀ.
7.     E = T_{λ/µ}(D − A + Y/µ).
8.   Update Y = Y + µ(D − A − E).
9.   Update µ = ρµ.
Outputs: A, E.

Slide 30: Augmented Lagrange Multipliers

Typical initialization [Lin2009]:
- Y = sgn(D)/J(sgn(D)). sgn(D) gives the sign of each matrix element of D. J(·) gives scaling factors: $J(X) = \max(\|X\|_2, \lambda^{-1} \|X\|_\infty)$. $\|X\|_2$ is the spectral norm, the largest singular value of X. $\|X\|_\infty$ is the largest absolute value of the elements of X.
- µ = 1.25/‖D‖₂.
- ρ = 1.5.
- λ = 1/√max(m, n) for an m × n matrix D.
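Putting slides 29 and 30 together, a minimal NumPy sketch; the fixed number of inner sweeps and the stopping test are my choices, not specified by the slides:

```python
import numpy as np

def soft_threshold(X, eps):
    return np.sign(X) * np.maximum(np.abs(X) - eps, 0.0)

def rpca_alm(D, tol=1e-7, max_iter=500):
    """Robust PCA for matrix recovery via ALM with [Lin2009] initialization."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    S0 = np.sign(D)
    Y = S0 / max(np.linalg.norm(S0, 2), np.abs(S0).max() / lam)  # Y = sgn(D)/J(sgn(D))
    mu, rho = 1.25 / np.linalg.norm(D, 2), 1.5
    A, E = np.zeros_like(D), np.zeros_like(D)
    for _ in range(max_iter):
        for _ in range(5):  # inner loop: alternate the A and E updates
            U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
            A = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt  # U T_{1/mu}(S) V^T
            E = soft_threshold(D - A + Y / mu, lam / mu)         # T_{lam/mu}(...)
        Y = Y + mu * (D - A - E)
        mu = rho * mu
        if np.linalg.norm(D - A - E, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return A, E
```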

Slide 31: Augmented Lagrange Multipliers

Example: Recovery of video background.

[Figure: video frames stacked as columns of the data matrix D]

Slide 32: Augmented Lagrange Multipliers

Sample results:

[Figure: data matrix with recovered low-rank and error components, shown for two examples]

Slide 33: Augmented Lagrange Multipliers

Example: Removal of specular reflection and shadow.

[Figure: data matrix D, low-rank component A, error component E]

Slide 34: Fixed-Rank Robust PCA

In reflection removal, the reflection may be global.

[Figure: ground truth and input with a global reflection]

Then E is not sparse: this violates the RPCA condition! But the rank of A = 1. Fix the rank of A to deal with non-sparse E [Leow2013].

Slide 35: Fixed-Rank Robust PCA

Fixed-Rank Robust PCA via ALM
Inputs: D.
1. Initialize A = 0, E = 0.
2. Initialize Y, µ > 0, ρ > 1.
3. Repeat until convergence:
4.   Repeat until convergence:
5.     U, S, V = SVD(D − E + Y/µ).
6.     If rank(T_{1/µ}(S)) < r, A = U T_{1/µ}(S) Vᵀ; else A = U S_r Vᵀ.
7.     E = T_{λ/µ}(D − A + Y/µ).
8.   Update Y = Y + µ(D − A − E).
9.   Update µ = ρµ.
Outputs: A, E.

S_r is S with the last m − r singular values set to 0.
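Only step 6 differs from the matrix-recovery algorithm. A sketch of that step (function name mine), given the SVD factors, µ, and the target rank r:

```python
import numpy as np

def fixed_rank_step(U, s, Vt, mu, r):
    """Step 6 of FRPCA: soft-threshold if that already gives rank < r,
    otherwise hard-truncate to the first r singular values (S_r)."""
    s_shrunk = np.maximum(s - 1.0 / mu, 0.0)  # T_{1/mu}(S)
    if np.count_nonzero(s_shrunk) < r:
        return U @ np.diag(s_shrunk) @ Vt     # A = U T_{1/mu}(S) V^T
    s_r = s.copy()
    s_r[r:] = 0.0                             # set last m - r singular values to 0
    return U @ np.diag(s_r) @ Vt              # A = U S_r V^T
```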

Slide 36: Fixed-Rank Robust PCA

Example: Removal of local reflections.

[Figure: ground truth, input, FRPCA result, RPCA result]

Slide 37: Fixed-Rank Robust PCA

Example: Removal of global reflections.

[Figure: ground truth, input, FRPCA result, RPCA result]

Slide 38: Fixed-Rank Robust PCA

Example: Removal of global reflections.

[Figure: ground truth, input, FRPCA result, RPCA result]

Slide 39: Fixed-Rank Robust PCA

Example: Background recovery for traffic video: fast-moving vehicles.

[Figure: input, FRPCA result, RPCA result]

Slide 40: Fixed-Rank Robust PCA

Example: Background recovery for traffic video: slow-moving vehicles.

[Figure: input, FRPCA result, RPCA result]

Slide 41: Fixed-Rank Robust PCA

Example: Background recovery for traffic video: temporary stop.

[Figure: input, FRPCA result, RPCA result]

Slide 42: Matrix Completion

Customers are asked to rate movies from 1 (poor) to 5 (excellent).

[Table: movie ratings by customers A, B, C, D, with some entries missing]

Customers rate only some movies, so some data are missing. How to estimate the missing data? Matrix completion.

Slide 43: Matrix Completion

Let D denote the data matrix with missing elements set to 0, and let $M = \{(i,j)\}$ denote the indices of the missing elements in D. Then the matrix completion problem can be formulated as: Given D and M, find the matrix A that solves the problem

    $\min_A \ \|A\|_*$ subject to $A + E = D$, $E_{ij} = 0 \ \forall (i,j) \notin M$.    (29)

For $(i,j) \notin M$, constrain $E_{ij} = 0$ so that $A_{ij} = D_{ij}$; no change.
For $(i,j) \in M$, $D_{ij} = 0$, i.e., $A_{ij} = -E_{ij}$; recovered value.

Reformulating Problem 29 using augmented Lagrange multipliers gives

    $\min_A \ \|A\|_* + \langle Y, D - A - E \rangle + \frac{\mu}{2} \|D - A - E\|_F^2$    (30)

such that $E_{ij} = 0 \ \forall (i,j) \notin M$.

Slide 44: Matrix Completion

Robust PCA for Matrix Completion
Inputs: D.
1. Initialize A = 0, E = 0.
2. Initialize Y, µ > 0, ρ > 1.
3. Repeat until convergence:
4.   Repeat until convergence:
5.     U, S, V = SVD(D − E + Y/µ).
6.     A = U T_{1/µ}(S) Vᵀ.
7.     E = Γ_M(D − A + Y/µ), where Γ_M(X)_ij = X_ij for (i,j) ∈ M, and 0 for (i,j) ∉ M.
8.   Update Y = Y + µ(D − A − E).
9.   Update µ = ρµ.
Outputs: A, E.
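A sketch that swaps the shrinkage of step 7 for the projection Γ_M; `missing` is a boolean mask marking the set M (True where D's entry is missing), and the initialization of Y, the inner sweep count, and the stopping test are my choices:

```python
import numpy as np

def matrix_completion_alm(D, missing, tol=1e-7, max_iter=500):
    """Robust PCA for matrix completion (slide 44)."""
    Y = np.zeros_like(D)
    mu, rho = 1.25 / np.linalg.norm(D, 2), 1.5
    A, E = np.zeros_like(D), np.zeros_like(D)
    for _ in range(max_iter):
        for _ in range(5):  # inner loop: alternate the A and E updates
            U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
            A = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
            E = np.where(missing, D - A + Y / mu, 0.0)  # Gamma_M projection
        Y = Y + mu * (D - A - E)
        mu = rho * mu
        if np.linalg.norm(D - A - E, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return A, E
```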

Slide 45: Matrix Completion

Example: Recovery of occluded parts in face images.

[Figure: input, ground truth, matrix completion result, matrix recovery result]

Slide 46: Summary

Robustification of PCA:
- PCA + trimming → trimmed PCA
- PCA + 0-1 covariance → trimmed PCA
- PCA + weighted SVD → weighted PCA
- PCA + robust error function → winsorized PCA

Slide 47: Summary

Low-rank + sparse matrix decomposition:
- PCA → robust PCA (matrix recovery)
- robust PCA + fix rank of A → fixed-rank robust PCA
- robust PCA + fix elements of E → robust PCA (matrix completion)

Slide 48: Probing Questions

- If the data matrix of a problem is composed of a low-rank matrix, a sparse matrix, and something else, can you still use robust PCA methods? If yes, how? If no, why?
- In applying robust PCA to high-resolution colour image processing, the data matrix contains three times as many rows as the number of pixels in the images, which can lead to a very large data matrix that takes a long time to compute. Suggest a way to overcome this problem.
- In applying robust PCA to video processing, the data matrix contains as many columns as the number of video frames, which can lead to a very large data matrix that exceeds the available memory. Suggest a way to overcome this problem.

Slide 49: Homework

1. Show that $\mathrm{tr}(X^\top Y) = \sum_i \sum_j x_{ij} y_{ij}$.
2. Show that the following two optimization problems are equivalent:
   $\min_E \ \lambda \|E\|_1 + \mathrm{tr}(-Y^\top E) + \frac{\mu}{2} \mathrm{tr}((D - A - E)^\top (D - A - E))$
   $\min_E \ \lambda \|E\|_1 + \frac{\mu}{2} \mathrm{tr}((E - (D - A + Y/\mu))^\top (E - (D - A + Y/\mu)))$
3. Show that the minimal A of Problem 23 with the other variables fixed is given by
   $\min_A \ \frac{1}{\mu} \|A\|_* + \frac{1}{2} \|A - (D - E + Y/\mu)\|_F^2$.

Slide 50: References I

1. D. P. Bertsekas. Convex Analysis and Optimization. Athena Scientific, 2003.
2. S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
3. J. F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4), 2010.
4. E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM, 58(3), 2011.
5. F. De la Torre and M. J. Black. Robust principal component analysis for computer vision. In Proc. Int. Conf. on Computer Vision, 2001.
6. R. T. Rockafellar. Convex Analysis. Princeton University Press, 1970.

Slide 51: References II

7. K. Gabriel and S. Zamir. Lower rank approximation of matrices by least squares with any choice of weights. Technometrics, 21, 1979.
8. E. T. Hale, W. Yin, and Y. Zhang. A fixed-point continuation method for l1-regularized minimization with applications to compressed sensing. Tech. Report TR07-07, Dept. of Computational and Applied Mathematics, Rice University, 2007.
9. W. K. Leow, Y. Cheng, L. Zhang, T. Sim, and L. Foo. Background recovery by fixed-rank robust principal component analysis. In Proc. Int. Conf. on Computer Analysis of Images and Patterns, 2013.
10. Z. Lin, M. Chen, L. Wu, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Tech. Report UILU-ENG-09-2215, UIUC, 2009.
11. N. Z. Shor. Minimization Methods for Non-Differentiable Functions. Springer-Verlag, 1985.

Slide 52: References III

12. J. Wright, Y. Peng, Y. Ma, A. Ganesh, and S. Rao. Robust principal component analysis: Exact recovery of corrupted low-rank matrices by convex optimization. In Proc. Conf. on Neural Information Processing Systems, 2009.
13. L. Xu and A. Yuille. Robust principal component analysis by self-organizing rules based on statistical physics approach. IEEE Trans. Neural Networks, 6(1), 1995.
