Robust Asymmetric Nonnegative Matrix Factorization


1 Robust Asymmetric Nonnegative Matrix Factorization. Hyenkyun Woo, Korea Institute for Advanced Study. Feb. Joint work with Haesun Park, Georgia Institute of Technology. H. Woo (KIAS) RANMF / 54

2 Outline
1. Robust Principal Component Analysis
2. Asymmetric Nonnegative Matrix Factorization
   - Asymmetric Nonnegative Nuclear Norm
   - Denseness and Stability
   - Asymmetric Incoherence Condition
3. Asymmetric Soft Regularization
4. Numerical Results

3 Outline, Part 1: Robust Principal Component Analysis

4 Motivation: Robust Principal Component Analysis. Given data: a matrix A >= 0. Separation problem: A = L_0 (low rank) + O_0 (grouped outliers). Related works: Candes, Wright, Ma, Chandrasekaran, Parrilo, Tao, Recht, Osher, Yin, etc.

5 Goal: Asymmetric NMF for (low rank_+ + outliers). Background modeling by ANMF:
min_{L,O} { ||O||_{l1} + beta ||A - L - O||_F^2 : rank_+(L) <= r, 0 <= L <= R }
Figure: Proposed model (ANMF for low rank_+ + outliers). Choose a relatively large r > rank_+(L_0); then rank_+(L_0) ~ r_eff <= r is automatically recovered by soft regularization (algorithmic regularization).

6 Review: Robust Principal Component Analysis. Robust PCA:
min_L ||A - L||_{l1}  s.t.  rank(L) <= r,
where ||X||_{l1} = sum_{ij} |X_{ij}|. This is an NP-hard, highly nonconvex problem. How do we find the low-rank matrix L? We have relaxations: the nuclear norm, the gamma_2-norm, etc.

7 Robust PCA with the nuclear norm:
min_L  lambda ||A - L||_{l1} + ||L||_*
or Principal Component Pursuit (Candes et al. 2011):
(L^, O^) = argmin_{L,O}  lambda ||O||_{l1} + ||L||_*  s.t.  O = A - L,
where ||L||_* = sum_i sigma_i(L) (the sum of singular values of L). Note that e_i denotes a standard basis vector (0, ..., 0, 1, 0, ..., 0). What happens if L^ = e_i e_j^T and O^ = e_k e_l^T? Can we fix lambda for random sparse outliers O^?
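The two proximal operators behind PCP (elementwise shrinkage for the l1 term, singular value thresholding for the nuclear norm) can be sketched in a few lines of NumPy. This is an illustrative single alternating pass on toy data, not the augmented-Lagrangian solver of Candes et al.; the toy matrices and the one-step update are assumptions for demonstration only:

```python
import numpy as np

def shrink(X, tau):
    # elementwise soft-thresholding: the prox of tau * ||.||_l1
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # singular value thresholding: the prox of tau * ||.||_*
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# one naive alternating pass on A = L0 + O0 with lambda = 1/sqrt(n)
rng = np.random.default_rng(0)
n = 50
L0 = np.outer(rng.random(n), rng.random(n))            # rank-1 ground truth
O0 = np.zeros((n, n))
O0[rng.random((n, n)) < 0.05] = 5.0                    # sparse outliers
A = L0 + O0
lam = 1.0 / np.sqrt(n)
L = svt(A, 1.0)            # low-rank guess
O = shrink(A - L, lam)     # sparse guess
```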

8 Incoherence parameter mu(U). Definition: Let U be a subspace of R^n with dim(U) = r and let P_U be the orthogonal projection onto U. Then the coherence of U is defined as
mu(U) = (n/r) max_{1<=i<=n} ||P_U e_i||_2^2,
where e_i is a standard basis vector. Note: P_U = U(U^T U)^{-1} U^T (for a full-rank basis matrix U); ||P_U e_i||_2^2 = ||U^T e_i||_2^2 when U^T U = I; and 1 <= mu(U) <= n/r.
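The definition above is directly computable. A short NumPy check (names and the toy subspaces are illustrative assumptions): a random dense subspace has coherence near its lower bound 1, while a subspace spanned by standard basis vectors hits the upper bound n/r:

```python
import numpy as np

def coherence(U):
    # mu(U) = (n/r) * max_i ||P_U e_i||_2^2 for the column span of U
    n, r = U.shape
    Q, _ = np.linalg.qr(U)              # orthonormal basis: ||P_U e_i||^2 = ||Q^T e_i||^2
    return (n / r) * np.max(np.sum(Q ** 2, axis=1))

n, r = 100, 5
mu_dense = coherence(np.random.default_rng(1).standard_normal((n, r)))
mu_spiky = coherence(np.eye(n)[:, :r])   # spanned by e_1, ..., e_r
```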

9 Incoherence condition [1]. Let r = rank(L_0) and L_0 in R^{n x n}, L_0 = U Sigma V^T = sum_{i=1}^r sigma_i u_i v_i^T. The condition requires
max_i ||U^T e_i||_2^2 <= mu r / n  and  max_i ||V^T e_i||_2^2 <= mu r / n,
where e_i = (0, 0, ..., 1, ..., 0) is a standard coordinate basis vector. Note that 1 <= mu <= n/r, and mu should be sufficiently small.
[1] Candes and Recht 09, Candes and Tao 10

10 When A = L_0 + O_0. Theorem: Suppose rank(L_0) <= rho_r n / (mu (log n)^2) and O_0 has a random sparsity pattern of cardinality m_s <= rho_s n^2. Then with probability 1 - O(n^{-10}), RPCA with lambda = 1/sqrt(n) separates them exactly: L^ = L_0, O^ = O_0.
But we do not know the incoherence parameter mu (1 <= mu <= n/r)! How do we control mu to be small? In general, the SVD generates a very dense basis. And if O_0 contains grouped outliers, what then? We would need to tune the lambda parameter based on the size of the outliers [2].
[2] Ramirez and Sapiro 12

11 Goal: Column Outliers + Nonnegative Rank. Definition (Column Outliers): Let O in R^{m x n} be grouped outliers with limited row sparsity, i.e.,
max_{1<=i<=m} ||row_i(O)||_0 < zeta n;
then we call O column outliers. Here 0 < zeta < 1/2 decides the sparsity level of O in the row direction.
min_{L,O} { Phi(O) + (alpha/2) ||A - L - O||_F^2 : R(L) <= tau and 0 <= L <= B_L }   (1)
Phi is a sparsity-enforcing function (elementwise separable; between l_0 and l_{2,0}). How to proceed? Change the rank constraint R(L) to handle column outliers O.
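The row-sparsity condition in the definition is easy to verify numerically. The helper below is a hypothetical illustration (not from the slides) on an assumed toy matrix with two fully corrupted columns:

```python
import numpy as np

def is_column_outlier(O, zeta):
    # check the definition: max_i ||row_i(O)||_0 < zeta * n
    m, n = O.shape
    return bool(np.max(np.count_nonzero(O, axis=1)) < zeta * n)

O = np.zeros((6, 10))
O[:, [2, 7]] = 1.0                        # two corrupted columns: 2 nonzeros per row
ok = is_column_outlier(O, 0.3)            # 2 < 0.3 * 10 holds
too_strict = is_column_outlier(O, 0.2)    # 2 < 0.2 * 10 fails
```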

12 Outline, Part 2: Asymmetric Nonnegative Matrix Factorization (Asymmetric Nonnegative Nuclear Norm; Denseness and Stability; Asymmetric Incoherence Condition)

13 Nonnegative Rank. The nonnegative rank of L in R^{m x n}_+:
rank_+(L) = argmin_r { r : L = W H^T, W in R^{m x r}_+, H in R^{n x r}_+ }
0 <= rank(L) <= rank_+(L) <= min{m, n}. Computing it is an NP-hard problem [3]. We need a relaxation for rank_+(L). And how do we control the denseness of L so that it is separable?
[3] Vavasis 09

14 Review: Nuclear norm and gamma_2-norm for rank (Lee et al. 10). Nuclear norm:
||L||_* = min { sum_i lambda_i : L = sum_i lambda_i w_i h_i^T, w_i in B_2(m), h_i in B_2(n) },
where B_2(d) = { x in R^d : ||x||_2 = 1 }. gamma_2-norm:
||L||_{gamma_2} = min { sum_i lambda_i : L = sum_i lambda_i w_i h_i^T, w_i in B_inf(m), h_i in B_inf(n) },
where B_inf(d) = { x in R^d : ||x||_inf <= 1 }.

15 Review: Nuclear norm and gamma_2-norm for rank_+ [6]. Nonnegative nuclear norm:
||L||_+ = min { sum_i lambda_i : L = sum_i lambda_i w_i h_i^T, w_i in B_2^+(m), h_i in B_2^+(n) },
where B_2^+(d) = { x in R^d_+ : ||x||_2 <= 1 }. Nonnegative gamma_2-norm:
||L||_{gamma_2}^+ = min { sum_i lambda_i : L = sum_i lambda_i w_i h_i^T, w_i in B_inf^+(m), h_i in B_inf^+(n) },
where B_inf^+(d) = { x in R^d_+ : ||x||_inf <= 1 }.
[6] Fawzi & Parrilo 12

16 Possible candidate for rank_+ + column outliers:
min_{W>=0, H>=0, O}  ||O||_{l1} + (alpha/2) ||A - W H^T - O||_F^2 + gamma Psi(W H^T)
with nuclear norm Psi(W H^T) = ||W H^T||_+ or gamma_2-norm Psi(W H^T) = ||W H^T||_{gamma_2}^+. But we do not want to tune the gamma parameter! And are W and H sufficiently dense for separation of column outliers?

17 Relaxation: Asymmetric Nonnegative Nuclear Norm. Define
min { sum_i lambda_i : L = sum_i lambda_i w_i h_i^T, w_i in Z_{eta_{w_i}}(m), h_i in Z_{eta_{h_i}}(n) },
where Z_eta(d) = { v in R^d_+ : v in B_2^+(d) and v in eta B_inf^+(d) }, B_2^+(d) = { v in R^d_+ : ||v||_2 = 1 }, B_inf^+(d) = { v in R^d_+ : ||v||_inf <= 1 }, and lambda_i >= lambda_j if i < j.
Note that we call L = sum_i lambda_i w_i h_i^T an Asymmetric NMF. We have many parameters: eta_{w_i} and eta_{h_i}. Why: to control denseness for separation of column outliers.

18 Toy model for low rank_+ + column outliers. Matrix decompositions (low rank_+ + column outliers) of the proposed Asymmetric NMF model (top) and Robust PCA (bottom). Note that A in [0, 255]. Bottom: the basis of RPCA is orthogonal and dense. Top: the nonnegative basis of ANMF is linearly independent and sparse.


21 Why Z_eta(d): Denseness. As eta ranges over [1/sqrt(d), 1], the set Z_eta(d) interpolates between the densest and the sparsest unit vectors on B_2^+(d) = { x in R^d_+ : ||x||_2 = 1 }, with B_inf^+(d) = { x in R^d_+ : ||x||_inf <= 1 }:
At eta = 1/sqrt(d): v = (1/sqrt(d)) 1_d is the unique vector in B_2^+(d) with ||v||_inf <= 1/sqrt(d), i.e., the densest unit vector; such dense vectors become unstable.
At eta = 1: B_2^+(d) with ||v||_inf <= 1 includes the standard coordinate vectors, i.e., the sparsest unit vectors (||v||_0 = 1).
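A tiny membership check makes the two extremes concrete. The helper name and tolerances are illustrative assumptions; it just tests the defining conditions of Z_eta(d):

```python
import numpy as np

def in_Z_eta(v, eta, tol=1e-9):
    # Z_eta(d) = { v >= 0 : ||v||_2 = 1 and ||v||_inf <= eta }
    return bool(np.all(v >= -tol)
                and abs(np.linalg.norm(v) - 1.0) <= tol
                and np.max(v) <= eta + tol)

d = 4
dense = np.full(d, 1 / np.sqrt(d))    # the unique densest unit vector
sparse = np.eye(d)[0]                 # a standard basis vector (sparsest)
```

For eta = 1/sqrt(d) only the dense vector is admitted; any eta < 1 rules out the standard basis vectors.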

22 Why Z_eta(d): Lower bound on sparsity. Lemma: Let w in Z_eta(d) = { v in R^d_+ : v in B_2^+(d) and v in eta B_inf^+(d) }. Then
||w||_0 >= 1 / ||w||_inf^2 >= 1 / eta^2,
where ||w||_0 = #{ i : w_i != 0 }. Proof sketch: since ||w||_2 = 1,
1 = sum_i w_i^2 <= ||w||_inf^2 ||w||_0 <= eta^2 ||w||_0.
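The bound is easy to sanity-check numerically; the random test vectors below are an assumed illustration (any unit-norm nonnegative vector satisfies the lemma):

```python
import numpy as np

# numerical check of the lemma: for w in B_2^+(d), ||w||_0 >= 1/||w||_inf^2
rng = np.random.default_rng(2)
ok = True
for _ in range(100):
    w = rng.random(20)
    w /= np.linalg.norm(w)   # unit-norm nonnegative vector
    ok = ok and np.count_nonzero(w) >= 1.0 / np.max(w) ** 2 - 1e-12
```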

23 Singular values of Asymmetric NMF (L = sum_i lambda_i w_i h_i^T). Theorem: Let L = sum_i L_i with the asymmetric decomposition L_i = lambda_i w_i h_i^T, w_i in Z_{eta_{w_i}}(m), h_i in Z_{eta_{h_i}}(n), attaining min sum_i lambda_i. Then
E_0(L_i) sqrt(||L_i||_0) <= lambda_i <= R sqrt(||L_i||_0),
where E_0(X) = ||X||_1 / ||X||_0 and R = ||L||_{l_inf}. Approximately, we can say lambda_i = O(sqrt(||L_i||_0)).

24 Lower bound on rank_+(L). Theorem: Let L = sum_i L_i have an Asymmetric NMF (L_i = lambda_i w_i h_i^T) attaining min { sum_i lambda_i : w_i in Z_{eta_{w_i}}(m), h_i in Z_{eta_{h_i}}(n) }. Then
rank_+(L) >= (sum_i lambda_i) / max_i lambda_i  and  rank_+(L) >= ||L||_A / max_i ||L_i||_{l0},
where ||L||_A = sum_i ||L_i||_{l0}. These give a norm-based and a combinatorial lower bound for rank_+(L).

25 Stability of the basis matrix W. Define
Z_{eta_W}^{m x r} = { W = [w_1, ..., w_r] : w_i in Z_{eta_{w_i}}(m), det(W^T W) != 0 },
where eta_W = max_i eta_{w_i}. Is W stable? Two candidate measures:
S(W) = ||W^T W - I||_{l_inf} = max_{i != j} w_i^T w_j:  stable as S(W) -> 0, unstable as S(W) -> 1.
||W||^2 = rho_max(W^T W), with 1 <= ||W||^2 <= 1 + (r-1) S(W) < r:  stable as ||W||^2 -> 1, unstable toward the upper end.
Which one do you prefer as a measure of the stability of a matrix W?

26 Stability of the basis matrix: ||W||. The norm ||W||^2 is bounded below by min_i c_i and above by max_i c_i, up to correction terms depending on epsilon_W, where c_i is the sum of the i-th column elements of W^T W and epsilon_W = min_{i != j} w_i^T w_j > 0. When epsilon_W = S(W) (all off-diagonal inner products are equal), we get ||W||^2 = 1 + (r-1) S(W). ||W|| is the more robust measure of the stability of W, since it is related to all elements of W.

27 Stability of the basis matrix: delta-distinguishable. Recall
Z_{eta_W}^{m x r} = { W = [w_1, ..., w_r] : w_i in Z_{eta_{w_i}}(m), det(W^T W) != 0 }, eta_W = max_i eta_{w_i}.
Is W (or Z_{eta_W}(m)) delta-distinguishable? That is, for any w_i != w_j in W (or in Z_{eta_W}(m)), is ||w_i - w_j||_2 >= delta? Equivalently, S(W) = max_{i != j} w_i^T w_j <= 1 - delta^2 / 2.
Example: For all delta > 0, Z_{1/sqrt(m)}^{m x r} is not delta-distinguishable.

28 Stability vs. Denseness. Theorem: For 1 <= k <= m and 0 < delta <= sqrt(2/k), we get
N(delta/2, Z_{1/sqrt(k)}(m)) >= (m/k)^k,   (2)
where N(epsilon, Z_{eta_W}(m)) is the cardinality of a maximal set of non-intersecting balls of radius epsilon in Z_{eta_W}(m) (an epsilon-packing). If we relax denseness (k small), then we can obtain a more stable basis (delta big).

29 Example. Let W = [w_1, ..., w_r] in Z_{1/sqrt(k)}^{m x r} with w_i in V_k for all i = 1, ..., r, where
V_k = { x = (x_1, ..., x_m)^T >= 0 : ||x||_2 = 1, ||x||_0 = k, and x_i in {0, 1/sqrt(k)} }.
Let us assume that iota_{w_i}^T iota_{w_j} = k - 1 for all i != j, where iota_{w_i} is the indicator (support) vector of w_i. Then the condition number of W becomes
cond(W) = sqrt( rho_max(W^T W) / rho_min(W^T W) ) = sqrt(r k - r + 1) = O(sqrt(r k)).
That is, the stability of W depends on the rank parameter r and the sparsity k of each column vector w_i.
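This example can be reproduced exactly: build r columns in V_k that share k-1 support entries and each own one private entry, and check cond(W) = sqrt(rk - r + 1). The construction below (shared block plus one private row per column) is one assumed realization of the stated overlap pattern:

```python
import numpy as np

r, k = 4, 6
m = (k - 1) + r                        # enough rows for the overlapping supports
W = np.zeros((m, r))
for i in range(r):
    W[:k - 1, i] = 1 / np.sqrt(k)      # shared part of the support (k-1 entries)
    W[k - 1 + i, i] = 1 / np.sqrt(k)   # one private entry per column
cond = np.linalg.cond(W)               # should equal sqrt(r*k - r + 1)
```

Here W^T W has unit diagonal and off-diagonal entries (k-1)/k, whose extreme eigenvalues are (rk - r + 1)/k and 1/k, giving the stated condition number.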

30 Asymmetric NMF for low rank_+ + column outliers:
min_{L,O} { ||O||_{l1} + beta ||A - O - L||_F^2 : L = W S H^T, W in Z_1^{m x r}, H in Z_{eta_H}^{n x r} },
where S = diag(lambda_1, lambda_2, ..., lambda_r) with lambda_i >= lambda_j if i < j.
Assumption: column outliers are tall (they mainly stay in the column direction). W: stable (and sparse); H: dense (and unstable [7]). Good news: ||w_i h_i^T||_0 = O(lambda_i^2). How do we find a solution of the model? Soft Regularization.
[7] Studer et al. (2014), Democratic Representations (equal selection of basis elements in W)

31 Stability of ANMF depends on the stability of W (i.e., ||W|| or S(W)):
<w_i h_i^T, w_j h_j^T> = (w_i^T w_j)(h_i^T h_j), i != j.
Denseness of w_i h_i^T depends on the denseness of H (i.e., eta_H):
||w_i h_i^T||_{l0} = ||w_i||_0 ||h_i||_0 >= 1 / eta_H^2.
Since w_i in Z_1(m), a rank-one matrix w_i h_i^T can have a thin structure in the row direction (i.e., in the h_i direction). Column outliers O need to satisfy the following condition:
max_i ||row_i(O)||_0 <= zeta n < 1 / eta_H^2 <= min_i ||h_i||_0.

32 Asymmetric Nonnegative Matrix Factorization: Denseness and Stability. Definition: For H in Z_{eta_H}^{n x r}, let
Xi(H) = ||H||_{l_inf} / ||H||,   (3)
then Xi(H) in (1/sqrt(rn), 1]. It captures the stability and denseness of the matrix H: if Xi(H) is large (near 1), then H is stable but sparse; if Xi(H) is small (near 1/sqrt(rn)), then H is dense but unstable.

33 Asymmetric Incoherence Condition. Definition: Let L = sum_{i=1}^r lambda_i w_i h_i^T be an ANMF with basis matrix W = [w_1, ..., w_r] and coefficient matrix H = [h_1, ..., h_r]. We define the asymmetric incoherence criterion of L as
1/sqrt(rn) < ainc(L) = Xi(H) / Xi(W) < sqrt(rm).
It captures the stability and denseness of the matrix L = W S H^T: ainc(L) near sqrt(rm) means W dense and H stable; ainc(L) near 1/sqrt(rn) means W stable and H dense.
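The lower endpoint is attained by a maximally sparse, stable W paired with a maximally dense H. A small NumPy check (the specific W and H are assumed toy instances; Xi uses the spectral norm, matching definition (3) above):

```python
import numpy as np

def xi(H):
    # Xi(H) = ||H||_linf / ||H|| (max entry over spectral norm)
    return np.max(np.abs(H)) / np.linalg.norm(H, 2)

def ainc(W, H):
    # asymmetric incoherence criterion of L = W S H^T
    return xi(H) / xi(W)

m, n, r = 9, 16, 2
W = np.eye(m)[:, :r]                    # maximally sparse, stable basis: Xi(W) = 1
H = np.full((n, r), 1 / np.sqrt(n))     # maximally dense coefficients: Xi(H) = 1/sqrt(rn)
```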

34 Asymmetric Incoherence Condition: Example. Let L = W S H^T = sum_{i=1}^r lambda_i w_i h_i^T have ideal (1/sqrt(k))-dense vectors. That is, W in Z_{1/sqrt(k_W)}^{m x r}, H in Z_{1/sqrt(k_H)}^{n x r}, ||w_i||_0 = k_W, ||h_i||_0 = k_H. Let us assume that iota_{w_i}^T iota_{w_j} = k_W - 1 and iota_{h_i}^T iota_{h_j} = k_H - 1 for the worst case of separability. Then
Xi(W) = 1 / sqrt(r k_W - r + 1) = 1 / cond(W), and likewise Xi(H) = 1 / cond(H),
and
ainc(L) = cond(W) / cond(H) = sqrt(r k_W - r + 1) / sqrt(r k_H - r + 1) ~ sqrt(k_W / k_H) = eta_H / eta_W.

35 Outline, Part 3: Asymmetric Soft Regularization

36 ASR: Asymmetric Soft Regularization. Alternate between
(W~^{k+1/2}, H~^{k+1/2}) = argmin_{W~ >= 0, H~ >= 0} { ||A~ - sum_{i=1}^r w~_i h~_i^T||_F^2 : h~_i in c B_inf^+(n) }
and
(W^{k+1}, H^{k+1}) = BASIS(W~^{k+1/2}, H~^{k+1/2}),
where (W, HS) = BASIS(W~, H~) satisfies the following conditions, with an r x r diagonal matrix S:
W S H^T = W~ H~^T,
W = [w_1, w_2, ..., w_r], w_i in Z_1(m),
H = [h_1, h_2, ..., h_r], h_i in Z_{eta_{h_i}}(n),
diag_i(S) >= diag_j(S) if i < j.
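One plausible reading of the BASIS step, consistent with the conditions listed above, is column normalization followed by sorting the scales. This is a sketch under that assumption, not necessarily the authors' exact procedure:

```python
import numpy as np

def basis(Wt, Ht):
    # normalize columns (w_i unit norm, h_i unit norm), collect the scales
    # in a diagonal S, and sort so that diag(S) is nonincreasing
    sw = np.linalg.norm(Wt, axis=0)
    sh = np.linalg.norm(Ht, axis=0)
    W, H, s = Wt / sw, Ht / sh, sw * sh
    order = np.argsort(-s)
    return W[:, order], H[:, order], np.diag(s[order])

rng = np.random.default_rng(4)
Wt, Ht = rng.random((7, 3)), rng.random((5, 3))
W, H, S = basis(Wt, Ht)        # satisfies W S H^T = W~ H~^T
```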

37 Optimization framework for low rank_+ + outliers:
O^{k+1} = argmin_O  ||O||_1 + beta ||A - O - sum_{i=1}^r w_i^k (h_i^k)^T||_F^2
(W~^{k+1/2}, H~^{k+1/2}) = argmin_{W~ >= 0, H~ >= 0} { ||A - O^{k+1} - sum_{i=1}^r w~_i h~_i^T||_F^2 : h~_i in c B_inf^+(n) }
(W^{k+1}, H^{k+1}) = BASIS(W~^{k+1/2}, H~^{k+1/2})
where (W, HS) = BASIS(W~, H~), W in Z_1^{m x r}, and H in Z_{eta_H}^{n x r}. We plug an outlier detector into ASR (asymmetric soft regularization). Now we only need to solve an L2-NMF with a box constraint on H! Asymmetric NMF: L = W S H^T.
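The O-update above has a closed form: with L fixed, it is an elementwise soft-thresholding of the residual A - L at level 1/(2*beta). A minimal sketch (the function name is an assumption for illustration):

```python
import numpy as np

def outlier_step(A, L, beta):
    # O = argmin_O ||O||_1 + beta * ||A - O - L||_F^2
    # elementwise: soft-threshold the residual at 1/(2*beta)
    R = A - L
    return np.sign(R) * np.maximum(np.abs(R) - 1.0 / (2.0 * beta), 0.0)
```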

38 L2-NMF solver: Hierarchical ALS [8].
min_{W,H} { F(h_1, ..., h_r; w_1, ..., w_r) : w_i in R^m_+, h_i in c B_inf^+(n) },
F(h_1, ..., h_r; w_1, ..., w_r) = ||A~ - sum_{i=1}^r w_i h_i^T||_F^2.
BCD framework. Update H (i = 1, ..., r):
min_{h_i in c B_inf^+(n)} F(h_1^{k+1}, ..., h_{i-1}^{k+1}, h_i, ..., h_r^k; w_1^k, ..., w_r^k)
Update W (i = 1, ..., r):
min_{w_i in R^m_+} F(h_1^{k+1}, ..., h_r^{k+1}; w_1^{k+1}, ..., w_{i-1}^{k+1}, w_i, ..., w_r^k)
[8] Cichocki et al. 07
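Each block subproblem above is a separable quadratic, so its exact minimizer is a closed-form update clipped to the constraint set. A minimal HALS sweep, assuming the standard rank-one residual updates (the zero-denominator guards and toy data are added for safety and illustration):

```python
import numpy as np

def hals_pass(A, W, H, c):
    # one sweep of hierarchical ALS for min ||A - W H^T||_F^2
    # over W >= 0 and 0 <= H <= c (each h_i in the box c*B_inf^+)
    r = W.shape[1]
    for i in range(r):
        nw = W[:, i] @ W[:, i]
        if nw > 0:
            R = A - W @ H.T + np.outer(W[:, i], H[:, i])  # residual w/o term i
            H[:, i] = np.clip(R.T @ W[:, i] / nw, 0.0, c)
        nh = H[:, i] @ H[:, i]
        if nh > 0:
            R = A - W @ H.T + np.outer(W[:, i], H[:, i])
            W[:, i] = np.maximum(R @ H[:, i] / nh, 0.0)
    return W, H

rng = np.random.default_rng(3)
A = (rng.random((8, 2)) + 0.1) @ (rng.random((6, 2)) + 0.1).T  # nonneg rank-2
W, H = rng.random((8, 2)) + 0.1, rng.random((6, 2)) + 0.1
before = np.linalg.norm(A - W @ H.T)
for _ in range(50):
    W, H = hals_pass(A, W, H, c=10.0)
after = np.linalg.norm(A - W @ H.T)
```

Each column update is an exact block minimization, so the objective is monotonically nonincreasing across sweeps.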

39 Outline, Part 4: Numerical Results

40 Numerical Results: Synthetic image (rank_+(L) = 2). There is a strong correlation between r_eff and the box constraint c B_inf^+(d): for small c, r_eff approaches rank_+(L).

41 Numerical Results (r = 20): Yale B dataset. We have 64 different illumination directions; A is the resulting data matrix.

42 The 20 basis elements of ANMF (top) vs. RPCA (bottom)

43 The 20 basis elements of Lp-RANMF (top) vs. L1-NMF (bottom)

44 Asymmetric incoherence condition for the face images

45 RPCA (λ tuned for each image) vs. RANMF (r = 20)

46 Numerical Results : ainc(l)


56 Numerical Results. Thank you!!


Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization Canyi Lu 1, Jiashi Feng 1, Yudong Chen, Wei Liu 3, Zhouchen Lin 4,5,, Shuicheng Yan 6,1

More information

Sparse and Low-Rank Matrix Decompositions

Sparse and Low-Rank Matrix Decompositions Forty-Seventh Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 30 - October 2, 2009 Sparse and Low-Rank Matrix Decompositions Venkat Chandrasekaran, Sujay Sanghavi, Pablo A. Parrilo,

More information

An Introduction to Sparse Approximation

An Introduction to Sparse Approximation An Introduction to Sparse Approximation Anna C. Gilbert Department of Mathematics University of Michigan Basic image/signal/data compression: transform coding Approximate signals sparsely Compress images,

More information

Solving Corrupted Quadratic Equations, Provably

Solving Corrupted Quadratic Equations, Provably Solving Corrupted Quadratic Equations, Provably Yuejie Chi London Workshop on Sparse Signal Processing September 206 Acknowledgement Joint work with Yuanxin Li (OSU), Huishuai Zhuang (Syracuse) and Yingbin

More information

High Dimensional Covariance and Precision Matrix Estimation

High Dimensional Covariance and Precision Matrix Estimation High Dimensional Covariance and Precision Matrix Estimation Wei Wang Washington University in St. Louis Thursday 23 rd February, 2017 Wei Wang (Washington University in St. Louis) High Dimensional Covariance

More information

Efficient Computation of Robust Low-Rank Matrix Approximations in the Presence of Missing Data using the L 1 Norm

Efficient Computation of Robust Low-Rank Matrix Approximations in the Presence of Missing Data using the L 1 Norm Efficient Computation of Robust Low-Rank Matrix Approximations in the Presence of Missing Data using the L 1 Norm Anders Eriksson, Anton van den Hengel School of Computer Science University of Adelaide,

More information

Optimization for Compressed Sensing

Optimization for Compressed Sensing Optimization for Compressed Sensing Robert J. Vanderbei 2014 March 21 Dept. of Industrial & Systems Engineering University of Florida http://www.princeton.edu/ rvdb Lasso Regression The problem is to solve

More information

Sparsity Models. Tong Zhang. Rutgers University. T. Zhang (Rutgers) Sparsity Models 1 / 28

Sparsity Models. Tong Zhang. Rutgers University. T. Zhang (Rutgers) Sparsity Models 1 / 28 Sparsity Models Tong Zhang Rutgers University T. Zhang (Rutgers) Sparsity Models 1 / 28 Topics Standard sparse regression model algorithms: convex relaxation and greedy algorithm sparse recovery analysis:

More information

arxiv: v1 [math.st] 10 Sep 2015

arxiv: v1 [math.st] 10 Sep 2015 Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees Department of Statistics Yudong Chen Martin J. Wainwright, Department of Electrical Engineering and

More information

Space-time signal processing for distributed pattern detection in sensor networks

Space-time signal processing for distributed pattern detection in sensor networks JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 6, NO. 1, JANUARY 013 1 Space-time signal processing for distributed pattern detection in sensor networks *Randy Paffenroth, Philip du Toit, Ryan Nong,

More information

ENGG5781 Matrix Analysis and Computations Lecture 10: Non-Negative Matrix Factorization and Tensor Decomposition

ENGG5781 Matrix Analysis and Computations Lecture 10: Non-Negative Matrix Factorization and Tensor Decomposition ENGG5781 Matrix Analysis and Computations Lecture 10: Non-Negative Matrix Factorization and Tensor Decomposition Wing-Kin (Ken) Ma 2017 2018 Term 2 Department of Electronic Engineering The Chinese University

More information

Minimizing the Difference of L 1 and L 2 Norms with Applications

Minimizing the Difference of L 1 and L 2 Norms with Applications 1/36 Minimizing the Difference of L 1 and L 2 Norms with Department of Mathematical Sciences University of Texas Dallas May 31, 2017 Partially supported by NSF DMS 1522786 2/36 Outline 1 A nonconvex approach:

More information

Robust PCA. CS5240 Theoretical Foundations in Multimedia. Leow Wee Kheng

Robust PCA. CS5240 Theoretical Foundations in Multimedia. Leow Wee Kheng Robust PCA CS5240 Theoretical Foundations in Multimedia Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore Leow Wee Kheng (NUS) Robust PCA 1 / 52 Previously...

More information

Recovering any low-rank matrix, provably

Recovering any low-rank matrix, provably Recovering any low-rank matrix, provably Rachel Ward University of Texas at Austin October, 2014 Joint work with Yudong Chen (U.C. Berkeley), Srinadh Bhojanapalli and Sujay Sanghavi (U.T. Austin) Matrix

More information

Compressive Sensing with Random Matrices

Compressive Sensing with Random Matrices Compressive Sensing with Random Matrices Lucas Connell University of Georgia 9 November 017 Lucas Connell (University of Georgia) Compressive Sensing with Random Matrices 9 November 017 1 / 18 Overview

More information

STA141C: Big Data & High Performance Statistical Computing

STA141C: Big Data & High Performance Statistical Computing STA141C: Big Data & High Performance Statistical Computing Lecture 9: Dimension Reduction/Word2vec Cho-Jui Hsieh UC Davis May 15, 2018 Principal Component Analysis Principal Component Analysis (PCA) Data

More information

Introduction to Compressed Sensing

Introduction to Compressed Sensing Introduction to Compressed Sensing Alejandro Parada, Gonzalo Arce University of Delaware August 25, 2016 Motivation: Classical Sampling 1 Motivation: Classical Sampling Issues Some applications Radar Spectral

More information

Lecture 8: February 9

Lecture 8: February 9 0-725/36-725: Convex Optimiation Spring 205 Lecturer: Ryan Tibshirani Lecture 8: February 9 Scribes: Kartikeya Bhardwaj, Sangwon Hyun, Irina Caan 8 Proximal Gradient Descent In the previous lecture, we

More information

Space-time signal processing for distributed pattern detection in sensor networks

Space-time signal processing for distributed pattern detection in sensor networks JOURNAL O SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 6, NO. 1, JANUARY 13 1 Space-time signal processing for distributed pattern detection in sensor networks *Randy Paffenroth, Philip du Toit, Ryan Nong,

More information

arxiv: v1 [cs.it] 26 Oct 2018

arxiv: v1 [cs.it] 26 Oct 2018 Outlier Detection using Generative Models with Theoretical Performance Guarantees arxiv:1810.11335v1 [cs.it] 6 Oct 018 Jirong Yi Anh Duc Le Tianming Wang Xiaodong Wu Weiyu Xu October 9, 018 Abstract This

More information

On Optimal Frame Conditioners

On Optimal Frame Conditioners On Optimal Frame Conditioners Chae A. Clark Department of Mathematics University of Maryland, College Park Email: cclark18@math.umd.edu Kasso A. Okoudjou Department of Mathematics University of Maryland,

More information

Fast Algorithms for Structured Robust Principal Component Analysis

Fast Algorithms for Structured Robust Principal Component Analysis Fast Algorithms for Structured Robust Principal Component Analysis Mustafa Ayazoglu, Mario Sznaier and Octavia I. Camps Dept. of Electrical and Computer Engineering, Northeastern University, Boston, MA

More information

From Compressed Sensing to Matrix Completion and Beyond. Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison

From Compressed Sensing to Matrix Completion and Beyond. Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison From Compressed Sensing to Matrix Completion and Beyond Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison Netflix Prize One million big ones! Given 100 million ratings on a

More information

A tutorial on sparse modeling. Outline:

A tutorial on sparse modeling. Outline: A tutorial on sparse modeling. Outline: 1. Why? 2. What? 3. How. 4. no really, why? Sparse modeling is a component in many state of the art signal processing and machine learning tasks. image processing

More information

Automatic Subspace Learning via Principal Coefficients Embedding

Automatic Subspace Learning via Principal Coefficients Embedding IEEE TRANSACTIONS ON CYBERNETICS 1 Automatic Subspace Learning via Principal Coefficients Embedding Xi Peng, Jiwen Lu, Senior Member, IEEE, Zhang Yi, Fellow, IEEE and Rui Yan, Member, IEEE, arxiv:1411.4419v5

More information

1 Non-negative Matrix Factorization (NMF)

1 Non-negative Matrix Factorization (NMF) 2018-06-21 1 Non-negative Matrix Factorization NMF) In the last lecture, we considered low rank approximations to data matrices. We started with the optimal rank k approximation to A R m n via the SVD,

More information

Robust PCA via Outlier Pursuit

Robust PCA via Outlier Pursuit Robust PCA via Outlier Pursuit Huan Xu Electrical and Computer Engineering University of Texas at Austin huan.xu@mail.utexas.edu Constantine Caramanis Electrical and Computer Engineering University of

More information

Sparse Optimization Lecture: Basic Sparse Optimization Models

Sparse Optimization Lecture: Basic Sparse Optimization Models Sparse Optimization Lecture: Basic Sparse Optimization Models Instructor: Wotao Yin July 2013 online discussions on piazza.com Those who complete this lecture will know basic l 1, l 2,1, and nuclear-norm

More information

arxiv: v2 [cs.it] 19 Sep 2016

arxiv: v2 [cs.it] 19 Sep 2016 Fast Algorithms for Robust PCA via Gradient Descent Xinyang Yi Dohyung Park Yudong Chen Constantine Caramanis The University of Texas at Austin Cornell University {yixy,dhpark,constantine}@utexas.edu yudong.chen@cornell.edu

More information

EE 381V: Large Scale Optimization Fall Lecture 24 April 11

EE 381V: Large Scale Optimization Fall Lecture 24 April 11 EE 381V: Large Scale Optimization Fall 2012 Lecture 24 April 11 Lecturer: Caramanis & Sanghavi Scribe: Tao Huang 24.1 Review In past classes, we studied the problem of sparsity. Sparsity problem is that

More information

Learning representations

Learning representations Learning representations Optimization-Based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_spring16 Carlos Fernandez-Granda 4/11/2016 General problem For a dataset of n signals X := [ x 1 x

More information

Block Coordinate Descent for Regularized Multi-convex Optimization

Block Coordinate Descent for Regularized Multi-convex Optimization Block Coordinate Descent for Regularized Multi-convex Optimization Yangyang Xu and Wotao Yin CAAM Department, Rice University February 15, 2013 Multi-convex optimization Model definition Applications Outline

More information

PHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION. A Thesis MELTEM APAYDIN

PHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION. A Thesis MELTEM APAYDIN PHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION A Thesis by MELTEM APAYDIN Submitted to the Office of Graduate and Professional Studies of Texas A&M University in partial fulfillment of the

More information

Robust PCA by Manifold Optimization

Robust PCA by Manifold Optimization Journal of Machine Learning Research 19 (2018) 1-39 Submitted 8/17; Revised 10/18; Published 11/18 Robust PCA by Manifold Optimization Teng Zhang Department of Mathematics University of Central Florida

More information

Constructing Explicit RIP Matrices and the Square-Root Bottleneck

Constructing Explicit RIP Matrices and the Square-Root Bottleneck Constructing Explicit RIP Matrices and the Square-Root Bottleneck Ryan Cinoman July 18, 2018 Ryan Cinoman Constructing Explicit RIP Matrices July 18, 2018 1 / 36 Outline 1 Introduction 2 Restricted Isometry

More information

Automatic Rank Determination in Projective Nonnegative Matrix Factorization

Automatic Rank Determination in Projective Nonnegative Matrix Factorization Automatic Rank Determination in Projective Nonnegative Matrix Factorization Zhirong Yang, Zhanxing Zhu, and Erkki Oja Department of Information and Computer Science Aalto University School of Science and

More information

CS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5

CS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5 CS 229r: Algorithms for Big Data Fall 215 Prof. Jelani Nelson Lecture 19 Nov 5 Scribe: Abdul Wasay 1 Overview In the last lecture, we started discussing the problem of compressed sensing where we are given

More information

Compressive Sensing and Beyond

Compressive Sensing and Beyond Compressive Sensing and Beyond Sohail Bahmani Gerorgia Tech. Signal Processing Compressed Sensing Signal Models Classics: bandlimited The Sampling Theorem Any signal with bandwidth B can be recovered

More information

Lecture 8. Principal Component Analysis. Luigi Freda. ALCOR Lab DIAG University of Rome La Sapienza. December 13, 2016

Lecture 8. Principal Component Analysis. Luigi Freda. ALCOR Lab DIAG University of Rome La Sapienza. December 13, 2016 Lecture 8 Principal Component Analysis Luigi Freda ALCOR Lab DIAG University of Rome La Sapienza December 13, 2016 Luigi Freda ( La Sapienza University) Lecture 8 December 13, 2016 1 / 31 Outline 1 Eigen

More information

Matrix Completion for Structured Observations

Matrix Completion for Structured Observations Matrix Completion for Structured Observations Denali Molitor Department of Mathematics University of California, Los ngeles Los ngeles, C 90095, US Email: dmolitor@math.ucla.edu Deanna Needell Department

More information

Sparse & Redundant Signal Representation, and its Role in Image Processing

Sparse & Redundant Signal Representation, and its Role in Image Processing Sparse & Redundant Signal Representation, and its Role in Michael Elad The CS Department The Technion Israel Institute of technology Haifa 3000, Israel Wave 006 Wavelet and Applications Ecole Polytechnique

More information

Linear Algebra Methods for Data Mining

Linear Algebra Methods for Data Mining Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 PCA, NMF Linear Algebra Methods for Data Mining, Spring 2007, University of Helsinki Summary: PCA PCA is SVD

More information