Robust Component Analysis via HQ Minimization

1 Robust Component Analysis via HQ Minimization Ran He, Wei-Shi Zheng and Liang Wang

2 Outline
Overview
Half-quadratic minimization
Principal component analysis
Robust principal component analysis
Low-rank matrix recovery
Summary

3 Overview of HQ minimization
Data matrix $X \in R^{d \times n}$, whose columns are the samples (1st sample, ..., n-th sample) and whose rows are the features (1st feature, ..., d-th feature); coefficient vector $\beta \in R^n$; observation $y \in R^d$.

4 Overview of HQ minimization
$g_k$ indicates a graph over $\{x_k\}$:
$g_k^T x = x_k$, with $g_k = [0, \ldots, 0, 1, 0, \ldots, 0]^T$
$g_k^T x = x_k - x_{k+1}$, with $g_k = [0, \ldots, 0, 1, -1, \ldots, 0]^T$
$\min_x \|Ax - y\|^2 + \sum_k \phi(g_k^T x)$, where $\phi$ may be convex or non-convex.
The additive form: $\phi(x_k) = \min_{e_k} \{ (x_k - e_k)^2 + \varphi_A(e_k) \}$
The multiplicative form: $\phi(x_k) = \min_{p_k} \{ p_k x_k^2 + \varphi_M(p_k) \}$
When the auxiliary variables $e$ and $p$ are given, the original problem becomes a quadratic problem.
[Geman and Reynolds, TPAMI, 1992] [Geman and Yang, TIP, 1995]

5 Overview of HQ minimization
Half-quadratic minimization: $\min_x \|Ax - y\|^2 + \lambda \sum_i \phi(x_i)$
The multiplicative form: $\min_{x,p} \|Ax - y\|^2 + \lambda \sum_i ( p_i x_i^2 + \varphi_M(p_i) )$
Alternate minimization: $p_i^{t+1} = \delta_M(x_i^t)$, then $\min_x \|Ax - y\|^2 + Q_M(x, p)$
The additive form: $\min_{x,e} \|Ax - y\|^2 + \lambda \sum_i ( (x_i - e_i)^2 + \varphi_A(e_i) )$
Alternate minimization: $e_i^{t+1} = \delta_A(x_i^t)$, then $\min_x \|Ax - y\|^2 + Q_A(x, e)$
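
A minimal sketch (not from the slides) of the multiplicative-form alternation for $\min_x \|Ax - y\|^2 + \lambda \sum_i \phi(x_i)$, using the Welsch potential $\phi(t) = 1 - \exp(-t^2/\sigma^2)$; its minimizer function yields the weights $p_i = \exp(-x_i^2/\sigma^2)$ (up to a constant scale absorbed into $\lambda$), and $\sigma$, $\lambda$ are illustrative choices:

```python
# Multiplicative-form half-quadratic alternation for a robust regularized
# least squares problem; weights come from the Welsch minimizer function.
import numpy as np

def hq_multiplicative(A, y, lam=0.1, sigma=1.0, n_iter=50):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        p = np.exp(-x**2 / sigma**2)          # auxiliary variables (closed form)
        # With p fixed, the objective is quadratic in x.
        x = np.linalg.solve(A.T @ A + lam * np.diag(p), A.T @ y)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))
x_true = rng.standard_normal(10)
y = A @ x_true + 0.01 * rng.standard_normal(100)
print(np.round(hq_multiplicative(A, y) - x_true, 2))
```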

6 Overview of HQ minimization
Huber loss function:
$\phi_H(v) = v^2/2$ if $|v| \le \lambda$; $\lambda|v| - \lambda^2/2$ if $|v| > \lambda$
The multiplicative form: $\delta_M^H(v) = 1$ if $|v| \le \lambda$; $\lambda/|v|$ if $|v| > \lambda$
The additive form: $\delta_A^H(v) = 0$ if $|v| \le \lambda$; $v - \lambda\,\mathrm{sign}(v)$ if $|v| > \lambda$ (the soft-thresholding function)
$\phi_H(v) = \min_p \{ \tfrac{1}{2}(v - p)^2 + \lambda |p| \}$
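
A hedged sketch of the two Huber minimizer functions above: $\delta_M$ gives the multiplicative (weighting) update, $\delta_A$ the additive update, which is exactly soft-thresholding:

```python
# The two Huber minimizer functions from this slide, vectorized with NumPy.
import numpy as np

def delta_M_huber(v, lam):
    """Multiplicative form: weight 1 inside [-lam, lam], lam/|v| outside."""
    v = np.asarray(v, dtype=float)
    return np.where(np.abs(v) <= lam, 1.0, lam / np.abs(v))

def delta_A_huber(v, lam):
    """Additive form: soft-thresholding, 0 inside [-lam, lam]."""
    v = np.asarray(v, dtype=float)
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

print(delta_M_huber([-3.0, 0.5, 2.0], lam=1.0))  # [0.333 1.    0.5  ]
print(delta_A_huber([-3.0, 0.5, 2.0], lam=1.0))  # [-2.  0.  1.]
```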

7 Overview of HQ minimization
Data matrix $X \in R^{d \times n}$, orthonormal basis $U \in R^{d \times m}$, coefficient matrix $V \in R^{m \times n}$:
$\|X - UV\|_F^2 = \sum_{i=1}^n \|x_i - U v_i\|^2 = \sum_{i=1}^n \sum_{j=1}^d ( x_{ij} - \sum_{k=1}^m u_{jk} v_{ki} )^2$

8 Outline
Overview
Half-quadratic minimization
Principal component analysis
Robust principal component analysis
Low-rank matrix recovery
Summary

9 Overview of PCA
Principal component analysis: PCA seeks a principal subspace (denoted by the magenta line) such that the orthogonal projection of the data points (red dots) onto this subspace maximizes the variance of the projected points (green dots). An alternative definition of PCA is based on minimizing the sum-of-squares of the projection errors, indicated by the blue lines. [Bishop 2006]
[C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006]

10 Overview of PCA
Principal component analysis:
$X = [x_1, \ldots, x_n] \in R^{d \times n}$ is a data set of $n$ samples.
$U = [u_1, \ldots, u_m] \in R^{d \times m}$ is a projection matrix.
$V = [v_1, \ldots, v_n] \in R^{m \times n}$ gives the projection coordinates under the projection matrix $U$.
The mean (or center) of $X$: $\mu = \frac{1}{n} \sum_i x_i$
[C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006]

11 Overview of PCA
Maximum variance formulation of PCA. The variance of the projected data:
$\frac{1}{n} \sum_i ( u^T x_i - u^T \mu )^2 = u^T S u$
where $S$ is the data covariance matrix, $S = \frac{1}{n} \sum_i ( x_i - \mu )( x_i - \mu )^T$.
[C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006]

12 Overview of PCA
Maximum variance formulation of PCA. Introduce a Lagrange multiplier for the normalization condition $u^T u = 1$:
$u^T S u + \lambda ( 1 - u^T u )$
Setting the derivative with respect to $u$ equal to zero shows that a stationary point satisfies
$S u = \lambda u$, and hence $u^T S u = \lambda$.
[C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006]
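
As a quick numerical illustration of the stationarity condition $S u = \lambda u$: the principal directions are the leading eigenvectors of the covariance matrix, and the corresponding eigenvalues are the projected variances $u^T S u$. A small NumPy sketch:

```python
# Principal directions from the eigendecomposition of the covariance matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])  # n x d
mu = X.mean(axis=0)
S = (X - mu).T @ (X - mu) / X.shape[0]     # data covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)       # ascending eigenvalues
U = eigvecs[:, ::-1][:, :2]                # top-2 principal directions u
print(eigvals[::-1][:2])                   # u^T S u for each direction
```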

13 Overview of PCA
Minimum error formulation of PCA. Mean square error (MSE):
Matrix: $\min_{U,V} \|X - UV\|_F^2$
Vector: $\min_{U,V,\mu} \sum_i \| x_i - ( \mu + U v_i ) \|^2$
Element: $\min_{U,V,\mu} \sum_i \sum_j ( x_{ij} - ( \mu_j + \sum_{p=1}^m v_{ip} u_{pj} ) )^2$
[R. He et al. Neurocomputing, 2010]

14 Overview of PCA
Graph embedding formulation of PCA:
$\min_{U^T U = I} \mathrm{Tr}( U^T X ( I - W ) X^T U )$
where $I$ is the identity matrix and $W$ is an $n \times n$ matrix whose elements are all equal to $1/n$.
[S. Yan et al. IEEE TPAMI, 2007]

15 Overview of PCA
Two problems:
PCA is sensitive to outliers, although it is robust to small white noise.
Choosing the number of components.

16 Outline
Overview
Half-quadratic minimization
Principal component analysis
Robust principal component analysis
Low-rank matrix recovery
Summary

17 Overview of RPCA
Self-organizing rules: Gibbs distribution, energy function.
A heuristic approach:
$\min_A \|X - A\|$  →  $\min_A \sum_{i=1}^n \|X_i - A_i\|$  →  $\min_A \sum_{i=1}^n \sum_{j=1}^d |X_{ij} - A_{ij}|$
L. Xu and A. Yuille. Robust principal component analysis by self-organizing rules based on statistical physics approach. IEEE TNN, 1995.
A. Baccini et al. An L1-norm PCA and a heuristic approach. Ordinal and Symbolic Data Analysis, 1996.

18 Overview of RPCA
50% occlusion.
$\min_{U,V} \sum_{i=1}^n \sum_{j=1}^d \phi( x_{ij} - \sum_{k=1}^m u_{jk} v_{ki} )$
i: each sample; j: each dimension; k: each basis; $\phi$: robust estimator.
M. Black and A. Jepson. EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation. IJCV, 1998.

19 Overview of RPCA
$\min_\theta \sum_{i=1}^n \| x_i - \theta \|_p$: the $L_p$ M-estimate of location.
(Figure: PCA vs. RPCA.)
N. Locantore et al. Robust PCA for functional data. Test, 1999.

20 Overview of RPCA
(Figure: original image, random noise, 50% occlusion, and recovered images of two objects.)
Recognition under dense corruption:
A robust hypothesize-and-test paradigm using subsets of image points.
A selection procedure based on the Minimum Description Length principle.
A. Leonardis and H. Bischof. Robust recognition using eigenimages. Computer Vision and Image Understanding, 2000.

21 Overview of RPCA
$\min_{U,V,\mu} \sum_{i=1}^n \sum_{j=1}^d \phi( x_{ij} - \mu_j - \sum_{k=1}^m u_{jk} v_{ki} )$
i: each sample; j: each dimension; k: each basis; $\phi$: M-estimator.
(Figure: data, PCA, RPCA, outlier weights.)
F. De la Torre and M. Black. A framework for robust subspace learning. IJCV, 2003.
F. De la Torre and M. J. Black. Robust Parameterized Component Analysis: Theory and Applications to 2D Facial Appearance Models. CVIU, 2003.

22 Overview of RPCA
$\min_{U,V} \|X - UV\|_1$, and the weighted version $\min_{U,V} \| W \odot (X - UV) \|_1$; the Huber M-estimator is used to approximate the L1 norm.
(Figure: factorizing a synthetic 30x30 matrix. Left: missing data (o) and outliers (x) in the synthetic matrix; Middle: initial weights from least L1-norm minimization; Right: final weights after weighted L1-norm minimization.)
Q. Ke and T. Kanade. Robust L1 norm factorization in the presence of outliers and missing data by alternative convex programming. CVPR, 2005.

23 Overview of RPCA
$\min_{U} \sum_{i=1}^n \phi( \sqrt{ x_i^T x_i - x_i^T U U^T x_i } )$
Huber M-estimator; L1 M-estimator; rotationally invariant.
C. Ding et al. R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization. ICML, 2006.

24 Overview of RPCA
(Figure: occluded data, compatible points, selected pixels, reconstructed image.)
S. Fidler et al. Combining Reconstructive and Discriminative Subspace Methods for Robust Classification and Regression by Subsampling. IEEE TPAMI, 2006.

25 Overview of RPCA
(Figure: original data, subspace, outliers; PCA vs. three robust PCA variants.)
S. Danijel et al. Weighted and robust learning of subspace representations. Pattern Recognition, 2007.

26 Overview of RPCA
Segmentation with α-PCA:
$\min_U \sum_{i=1}^n \phi_\alpha( \| U U^T x_i - x_i \| )$
where $\phi_\alpha$ is a twice-differentiable function.
(Figure: left, the ground truth; right, standard PCA (white) and α-PCA (black).)
J. Iglesias et al. A family of PCA for dealing with outliers. MICCAI, 2007.

27 Overview of RPCA
$\max_U \sum_{i=1}^n \sum_{k=1}^m | \sum_{j=1}^d u_{jk} x_{ji} |$, with $\mu = 0$.
(Figure: outlier; PCA-L1 vs. PCA vs. R1-PCA.)
N. Kwak. Principal component analysis based on L1-norm maximization. IEEE TPAMI, 2008.

28 Overview of RPCA
$\min_A \mathrm{rank}(A)$ s.t. $A_{ij} = X_{ij}$  →  $\min_A \|A\|_*$ s.t. $A_{ij} = X_{ij}$
The functional $\|A\|_*$ is the nuclear norm of the matrix $A$, i.e., the sum of its singular values.
Singular value shrinkage operator: $\min_A \tfrac{1}{2}\| A - Y \|_F^2 + \lambda \|A\|_*$
J. Cai et al. A singular value thresholding algorithm for matrix completion. 2008.
A. Ganesh et al. Fast algorithms for recovering a corrupted low-rank matrix. CAMSAP, 2008.
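
The singular value shrinkage operator has a simple closed form: soft-threshold the singular values. A minimal sketch (illustrative, not the authors' code):

```python
# Closed-form solution of min_A 0.5*||A - Y||_F^2 + lam*||A||_*.
import numpy as np

def svt(Y, lam):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt  # shrink singular values

Y = np.outer([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0])    # rank-1 test matrix
print(np.linalg.svd(svt(Y, lam=1.0), compute_uv=False))
```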

29 Overview of RPCA
Different types of PCA outliers. Compute a robust center and covariance matrix in the k-dimensional subspace by applying the reweighted Minimum Covariance Determinant (MCD) estimator.
M. Hubert et al. Robust PCA for skewed data and its outlier map. Computational Statistics and Data Analysis, 2009.

30 Overview of RPCA
$\min_{A,E} \|A\|_* + \lambda \|E\|_0$ s.t. $A + E = X$  →  $\min_{A,E} \|A\|_* + \lambda \|E\|_1$ s.t. $A + E = X$
Relaxed version: $\min_{A,E} \nu \|A\|_* + \lambda\nu \|E\|_1 + \tfrac{1}{2}\| X - A - E \|_F^2$
(Figure: left, video data; middle, recovered background; right, detected sparse errors.)
J. Wright et al. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. 2009.
Z. Lin et al. Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. 2009.

31 Overview of RPCA
E. J. Candès et al. Robust principal component analysis? Journal of the ACM, 2011.
E. J. Candès and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 2010, 98(6).
A. Ganesh et al. Dense error correction for low-rank matrices via principal component pursuit. Computing Research Repository, 2010.
E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE TIT, 2010.

32 Overview of RPCA
Renyi quadratic entropy estimated by the Parzen window method.
(Figure: 4-D data structure.)
Ran He et al. Principal component analysis based on nonparametric maximum entropy. Neurocomputing, 2010.

33 Overview of RPCA
$\min_{A,E} \|A\|_* + \lambda \Phi(E)$ s.t. $A + E = X$  ⇔  $\min_A \|A\|_* + \lambda \Phi( X - A )$
$\Phi(E) = \sum_{ij} \phi( E_{ij} )$, where $\phi$ is a robust estimator.
R. He et al. Recovery of corrupted low rank matrices via half-quadratic based nonconvex minimization. IEEE CVPR, 2011.
Ran He et al. Recovery of Corrupted Low-rank Matrix by Implicit Regularizer. Submitted to IEEE TPAMI.

34 Overview of RPCA
$\min_A \|A\|_* + \alpha \Phi_H(E)$, where $E$ is the (linearly transformed) data-fitting residual and $\Phi_H$ is the Huber cost.
When using L1-error terms, these methods often oscillate around the true optimum and converge towards it only slowly, due to the non-differentiability of the L1 norm at 0. The Huber cost function is not only a more appropriate model for outliers and inliers contaminated by Gaussian noise, it also leads to fewer oscillations and hence faster convergence.
R. Angst et al. The Generalized Trace-Norm and its Application to Structure-from-Motion Problems. ICCV, 2011.

35 Overview of RPCA
$\|A\|_* = \mathrm{Tr}( (A A^T)^{-1/2} A A^T ) = \| W^{1/2} A \|_F^2$, with $W = (A A^T)^{-1/2}$
$A^{t+1} = \arg\min_A \| (W^t)^{1/2} A \|_F^2$, and $W^{t+1} = ( A^{t+1} (A^{t+1})^T )^{-1/2}$
The minimization can be reformulated as a weighted least squares problem with linear constraints; each of the updates for $A^{t+1}$ and $W^{t+1}$ can then be performed explicitly.
M. Fornasier et al. Low rank matrix recovery via iteratively reweighted least squares minimization. SIAM Journal on Optimization, 2011.

36 Overview of RPCA
Summary: Robust PCAs have been widely used in tracking, matrix completion, robust recognition, etc. since 1998. The L1 norm, Lp norms, and other robust M-estimators have been used to improve the robustness of PCA.

37 Outline
Overview
Robust principal component analysis
  PCA and outliers
  A general framework
  Robust PCA algorithms
Low-rank matrix recovery
Summary

38 PCA and outliers
Mean square error is sensitive to outliers. In robust statistics, outliers are those data points that differ significantly from the rest of the data. PCA minimizes the sum of squared errors, which makes it vulnerable to outliers, because large errors dominate the sum once squared.

39 PCA and outliers
(Figure: outliers shift the mean vector; outliers can also arise from multimodal distributions.)

40 Robust PCA
Outliers and the mean vector: robust estimation of the mean vector. [R. He et al. TIP, 2011]

41 PCA and outliers
Data matrix $X \in R^{d \times n}$, orthonormal basis $U \in R^{d \times m}$, coefficient matrix $V \in R^{m \times n}$:
$\|X - UV\|_F^2 = \sum_{i=1}^n \|x_i - U v_i\|^2 = \sum_{i=1}^n \sum_{j=1}^d ( x_{ij} - \sum_{k=1}^m u_{jk} v_{ki} )^2 = \sum_{j=1}^d \sum_{i=1}^n ( x_{ij} - \sum_{k=1}^m u_{jk} v_{ki} )^2$
matrix → vector → element

42 PCA and outliers
Data matrix $X \in R^{d \times n}$, $\|X - UV\|_F^2$:
Each entry: $\sum_{i=1}^n \sum_{j=1}^d \phi( x_{ij} - \sum_{k=1}^m u_{jk} v_{ki} )$
Each sample: $\sum_{i=1}^n \phi( \sqrt{ \sum_j ( x_{ij} - \sum_k u_{jk} v_{ki} )^2 } )$
Each feature: $\sum_{j=1}^d \phi( \sqrt{ \sum_i ( x_{ij} - \sum_k u_{jk} v_{ki} )^2 } )$
Structured sparsity.
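
The three residual structures above differ only in where the robust penalty $\phi$ is applied: to every entry, to column (sample) norms, or to row (feature) norms. A small sketch with a Welsch $\phi$ and an illustrative $\sigma$:

```python
# Entry-wise, sample-wise, and feature-wise robust penalties on a residual.
import numpy as np

def phi(t, sigma=1.0):
    return 1.0 - np.exp(-t**2 / sigma**2)                # Welsch penalty

R = np.random.default_rng(0).standard_normal((6, 4))     # residual X - UV, d x n
entry_penalty   = phi(R).sum()                           # each entry
sample_penalty  = phi(np.linalg.norm(R, axis=0)).sum()   # each sample (column)
feature_penalty = phi(np.linalg.norm(R, axis=1)).sum()   # each feature (row)
print(entry_penalty, sample_penalty, feature_penalty)
```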

43 Outline
Overview
Robust principal component analysis
  PCA and outliers
  A general framework
  Robust PCA algorithms
Low-rank matrix recovery
Summary

44 A general framework
Each data element: $\min \sum_{i=1}^n \sum_{j=1}^d \phi( x_{ij} - \sum_{k=1}^m u_{jk} v_{ki} )$
The additive form: $\min_{U,V,P} \sum_{i=1}^n \sum_{j=1}^d ( ( x_{ij} - \sum_{k=1}^m u_{jk} v_{ki} - p_{ij} )^2 + \varphi_A( p_{ij} ) )$
The multiplicative form (weighting): $\min_{U,V,P} \sum_{i=1}^n \sum_{j=1}^d ( p_{ij} ( x_{ij} - \sum_{k=1}^m u_{jk} v_{ki} )^2 + \varphi_M( p_{ij} ) )$

45 A general framework
Each sample, the multiplicative form (weighting):
$\min_{U,V,P} \sum_{i=1}^n p_i \sum_{j=1}^d ( x_{ij} - \sum_{k=1}^m u_{jk} v_{ki} )^2 + \sum_{i=1}^n \varphi( p_i )$
Each feature, the multiplicative form (weighting):
$\min_{U,V,P} \sum_{j=1}^d p_j \sum_{i=1}^n ( x_{ij} - \sum_{k=1}^m u_{jk} v_{ki} )^2 + \sum_{j=1}^d \varphi( p_j )$

46 A general framework
Each sample, the additive form:
$\min_{U,V,E} \sum_j \phi( \| e_j \| )$ s.t. $X = UV + E$, with $\phi(t) = \sqrt{ \varepsilon + t^2 }$
$\min_{U,V,E} \| X - UV - E \|_F^2 + \lambda \sum_j \phi( \| e_j \| )$

47 A general framework
Alternate minimization algorithm:
Step 1: Calculate the HQ auxiliary variable P or E according to the HQ minimizer functions.
Step 2: Calculate the corrected data X̃ from P or E.
Step 3: Calculate the mean vector μ and solve a standard PCA problem based on X̃ and μ.
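
A minimal sketch of this alternating loop, using the additive form with a soft-threshold (Huber) update for E; the subspace dimension m and the threshold lam are illustrative choices, and the PCA step is a plain truncated SVD:

```python
# Alternating HQ robust PCA: update E, correct the data, redo standard PCA.
import numpy as np

def hq_robust_pca(X, m=2, lam=3.0, n_iter=30):
    E = np.zeros_like(X)
    for _ in range(n_iter):
        X_tilde = X - E                                   # Step 2: corrected data
        mu = X_tilde.mean(axis=1, keepdims=True)          # Step 3: mean vector
        U, s, Vt = np.linalg.svd(X_tilde - mu, full_matrices=False)
        low_rank = U[:, :m] @ np.diag(s[:m]) @ Vt[:m, :]  # standard PCA (UV)
        R = X - mu - low_rank
        E = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0) # Step 1: update E
    return mu, low_rank, E

X = np.random.default_rng(1).standard_normal((20, 50))
X[0, :5] += 10.0                       # a few grossly corrupted entries
mu, L, E = hq_robust_pca(X)
print(np.count_nonzero(E))             # entries absorbed into E as outliers
```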

48 Outline
Overview
Robust principal component analysis
  PCA and outliers
  A general framework
  Robust PCA algorithms
Low-rank matrix recovery
Summary

49 Robust PCAs - data entry
Outliers in each element.
MSE: $\min_{U,V,\mu} \sum_i \sum_j ( x_{ij} - ( \mu_j + \sum_{p=1}^m v_{ip} u_{pj} ) )^2$
M-estimator: $\min_{\mu,U,V} \sum_i \sum_j \phi( x_{ij} - ( \mu_j + \sum_{p=1}^m v_{ip} u_{pj} ) )$
where $\phi(\cdot)$ is a robust estimator.

50 Robust PCAs - data entry
Outliers in each element: $\min_{\mu,U,V} \sum_i \sum_j \phi( x_{ij} - ( \mu_j + \sum_{p=1}^m v_{ip} u_{pj} ) )$
Examples:
Geman-McClure function [F. De la Torre and M. J. Black 2003]
Weighted PCA [S. Danijel et al. 2007]
L1 norm [N. Kwak 2008]
Welsch M-estimator [R. He et al. 2010]

51 Robust PCAs - data entry
Outliers in each entry: $\min_{\mu,U,V} \sum_i \sum_j \phi( x_{ij} - ( \mu_j + \sum_{p=1}^m v_{ip} u_{pj} ) )$
Welsch M-estimator ↔ Renyi quadratic entropy ↔ Parzen window estimation of the distribution: from the Gaussian distribution to any distribution.

52 Robust PCAs - data entry
Algorithm for $\min_{\mu,U,V} \sum_i \sum_j \phi( x_{ij} - ( \mu_j + \sum_{p=1}^m v_{ip} u_{pj} ) )$, the multiplicative form:
$\min_{P,\mu,U,V} \sum_{ij} \{ P_{ij} ( x_{ij} - ( \mu_j + \sum_{p=1}^m v_{ip} u_{pj} ) )^2 + \varphi( P_{ij} ) \}$
Alternate minimization:
$P_{ij}^{t+1} = \delta( x_{ij} - ( \mu_j^t + \sum_{p=1}^m v_{ip}^t u_{pj}^t ) )$
$\min_{\mu,U,V} \sum_{ij} P_{ij} ( x_{ij} - ( \mu_j + \sum_{p=1}^m v_{ip} u_{pj} ) )^2$

53 Robust PCAs - data entry
Algorithm for $\min_{\mu,U,V} \sum_i \sum_j \phi( x_{ij} - ( \mu_j + \sum_{p=1}^m v_{ip} u_{pj} ) )$, the additive form:
$\min_{E,\mu,U,V} \sum_{ij} \{ ( x_{ij} - ( \mu_j + \sum_{p=1}^m v_{ip} u_{pj} ) - E_{ij} )^2 + \varphi( E_{ij} ) \}$
With the Huber M-estimator:
$\min_{E,\mu,U,V} \sum_{ij} \{ ( x_{ij} - ( \mu_j + \sum_{p=1}^m v_{ip} u_{pj} ) - E_{ij} )^2 + \lambda | E_{ij} | \}$

54 Robust PCAs - data sample
Outliers in each sample.
MSE: $\min_{U,V,\mu} \sum_i \| x_i - ( \mu + U v_i ) \|^2$
M-estimator: $\min_{\mu,U,V} \sum_i \phi( \| x_i - ( \mu + U v_i ) \| )$, where $\phi(\cdot)$ is a robust estimator.
Examples:
Huber M-estimator [C. Ding et al. 2006]
Lp M-estimator [J. Iglesias et al. 2007]
Welsch M-estimator [R. He et al. 2011]

55 Robust PCAs - data sample
Outliers in each sample: $\min_{\mu,U,V} \sum_i \phi( \| x_i - ( \mu + U v_i ) \| )$
Welsch M-estimator ↔ maximum correntropy criterion.

56 Robust PCAs - data sample
Algorithm for $\min_{\mu,U,V} \sum_i \phi( \| x_i - ( \mu + U v_i ) \| )$, the multiplicative form:
$p_i^{t+1} = \delta( \| x_i - ( \mu^t + U^t v_i^t ) \| )$
$\min_{\mu,U,V} \sum_i p_i \| x_i - ( \mu + U v_i ) \|^2$
The weight vector $p$ gives outliers small values and so suppresses them during the iterations.
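
A hedged sketch of the sample-wise weight update with the Welsch minimizer function: samples with large reconstruction error receive weights near zero ($\sigma$ is an illustrative bandwidth):

```python
# Sample-wise Welsch weights for the multiplicative-form robust PCA update.
import numpy as np

def sample_weights(X, mu, U, V, sigma=1.0):
    """Welsch weights p_i = exp(-||x_i - (mu + U v_i)||^2 / sigma^2)."""
    residual_norms = np.linalg.norm(X - mu[:, None] - U @ V, axis=0)
    return np.exp(-residual_norms**2 / sigma**2)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8)) * 0.1
X[:, 0] += 5.0                        # one grossly corrupted sample
w = sample_weights(X, X.mean(axis=1), np.zeros((5, 1)), np.zeros((1, 8)))
print(np.round(w, 3))                 # the corrupted sample gets weight near 0
```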

57 Outline
Overview
Half-quadratic minimization
Robust principal component analysis
Low-rank matrix recovery
Summary

58 Low-rank matrix recovery
Nuclear norm minimization:
$\min_A \|A\|_*$ s.t. $A = X$  →  $\min_A \tfrac{1}{2}\| A - X \|_F^2 + \mu \|A\|_*$
where $\|\cdot\|_*$ denotes the nuclear norm of a matrix (i.e., the sum of its singular values) and $\mu$ is a constant.

59 Low-rank matrix recovery
Nuclear norm minimization: $\min_A \tfrac{1}{2}\| A - X \|_F^2 + \mu \|A\|_*$
The additive form, the singular value shrinkage operator:
$\delta_A^H(v) = 0$ if $|v| \le \lambda$; $v - \lambda\,\mathrm{sign}(v)$ if $|v| > \lambda$
The multiplicative form, iteratively reweighted least squares:
$\|A\|_* = \mathrm{Tr}( (A A^T)^{-1/2} A A^T ) = \| W^{1/2} A \|_F^2$, with $W = (A A^T)^{-1/2}$
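
The multiplicative identity can be checked numerically: for a full-rank A, $\mathrm{Tr}( (A A^T)^{-1/2} A A^T )$ equals the sum of the singular values of A. A short verification:

```python
# Verify ||A||_* = Tr((A A^T)^(-1/2) A A^T) for a full-rank A
# (A A^T is invertible here, so the inverse square root exists).
import numpy as np

A = np.random.default_rng(0).standard_normal((4, 6))
G = A @ A.T
w, Q = np.linalg.eigh(G)
W = Q @ np.diag(w**-0.5) @ Q.T                      # (A A^T)^(-1/2)
print(np.trace(W @ G))                              # equals the nuclear norm
print(np.linalg.svd(A, compute_uv=False).sum())
```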

60 Low-rank matrix recovery
Robust low rank matrix recovery:
$\min_{A,E} \|A\|_* + \lambda \|E\|_0$ s.t. $A + E = X$  →  $\min_{A,E} \|A\|_* + \lambda \|E\|_1$ s.t. $A + E = X$
Relaxed version: $\min_{A,E} \tfrac{1}{2}\| A + E - X \|_F^2 + \mu \|A\|_* + \lambda\mu \|E\|_1$
Nuclear norm: singular value shrinkage operator. L1 norm: soft shrinkage operator.
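
A minimal sketch of the relaxed objective solved by alternating its two shrinkage operators (SVT for the nuclear norm term, entry-wise soft shrinkage for the L1 term); $\mu$, $\lambda$ and the test data are illustrative:

```python
# Alternating shrinkage on the relaxed robust low-rank recovery objective.
import numpy as np

def soft(M, tau):
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_relaxed(X, mu=1.0, lam=0.1, n_iter=100):
    A, E = np.zeros_like(X), np.zeros_like(X)
    for _ in range(n_iter):
        A = svt(X - E, mu)           # nuclear norm: singular value shrinkage
        E = soft(X - A, lam * mu)    # L1 norm: soft shrinkage
    return A, E

rng = np.random.default_rng(0)
L = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 30))   # low rank
S = (rng.random((30, 30)) < 0.05) * 10.0                          # sparse errors
A, E = rpca_relaxed(L + S)
print(np.linalg.matrix_rank(A, tol=1e-6), np.count_nonzero(np.abs(E) > 1e-6))
```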

61 Low-rank matrix recovery
When matrix $A$ is fixed:
$\min_E \tfrac{1}{2}\| E - ( X - A ) \|_F^2 + \lambda\mu \| E \|_1$: the soft shrinkage operator.
Huber M-estimator: $\phi_H(v) = \min_p \{ \tfrac{1}{2}( v - p )^2 + \lambda |p| \}$, hence (with threshold $\lambda\mu$)
$\sum_{ij} \phi_H( ( X - A )_{ij} ) = \min_E \{ \tfrac{1}{2}\| E - ( X - A ) \|_F^2 + \lambda\mu \| E \|_1 \}$
[R. He et al. Submitted to IEEE TPAMI]
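
The Huber/soft-shrinkage identity can be verified numerically: the Moreau envelope $\min_p \tfrac{1}{2}(v - p)^2 + \lambda |p|$ equals the Huber function at $v$:

```python
# Brute-force check that the Moreau envelope of lam*|.| is the Huber function.
import numpy as np

def huber(v, lam):
    return np.where(np.abs(v) <= lam, 0.5 * v**2, lam * np.abs(v) - 0.5 * lam**2)

v, lam = 2.3, 1.0
p = np.linspace(-5, 5, 200001)                       # grid for the inner min
print(np.min(0.5 * (v - p)**2 + lam * np.abs(p)))    # ~1.8
print(huber(v, lam))                                 # 1.8
```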

62 Low-rank matrix recovery
Robust low rank matrix recovery: $\min_A \sum_{ij} \phi( A_{ij} - X_{ij} ) + \mu \|A\|_*$
Low rank matrix recovery: $\min_A \tfrac{1}{2}\| A - X \|_F^2 + \mu \|A\|_* = \min_A \sum_{ij} \tfrac{1}{2}( A_{ij} - X_{ij} )^2 + \mu \|A\|_*$
Equivalently: $\min_{A,E} \sum_{ij} \phi( E_{ij} ) + \mu \|A\|_*$ s.t. $A + E = X$
Mean square error is sensitive to outliers.

63 Low-rank matrix recovery
M-estimation: $\min_A \sum_{ij} \phi( A_{ij} - X_{ij} ) + \mu \|A\|_*$
The multiplicative form: $\min_{A,P} \sum_{ij} \{ P_{ij} ( A_{ij} - X_{ij} )^2 + \varphi( P_{ij} ) \} + \mu \|A\|_*$
The additive form: $\min_{A,E} \| X - A - E \|_F^2 + \mu \|A\|_* + \sum_{ij} \varphi( E_{ij} )$
[R. He et al. Submitted to IEEE TPAMI]

64 Low-rank matrix recovery
The additive form of M-estimation: $\min_{A,E} \| X - A - E \|_F^2 + \mu \|A\|_* + \sum_{ij} \varphi( E_{ij} )$
With the Huber M-estimator, $\min_A \sum_{ij} \phi( A_{ij} - X_{ij} ) + \mu \|A\|_*$ becomes
$\min_{A,E} \| X - A - E \|_F^2 + \mu \|A\|_* + \mu\lambda \| E \|_1$
[R. He et al. Submitted to IEEE TPAMI]

65 Low-rank matrix recovery
Data matrix $X \in R^{d \times n}$: $\tfrac{1}{2}\| X - A \|_F^2 + \mu \|A\|_*$
Each element: $\sum_{i=1}^n \sum_{j=1}^d \phi( X_{ij} - A_{ij} ) + \mu \|A\|_*$
Each sample: $\sum_{i=1}^n \phi( \sqrt{ \sum_j ( X_{ij} - A_{ij} )^2 } ) + \mu \|A\|_*$
Each feature: $\sum_{j=1}^d \phi( \sqrt{ \sum_i ( X_{ij} - A_{ij} )^2 } ) + \mu \|A\|_*$
Solved by an accelerated proximal gradient approach.

66 Summary
Robust PCA:
Mean square error is sensitive to outliers.
Outliers can be data elements, data samples, or features.
Updating the mean vector is important to RPCA.
The two half-quadratic forms of M-estimators.

67 Summary
Nuclear norm regularized M-estimation:
$\min_A \sum_{ij} \phi( D_{ij} - A_{ij} ) + \mu \|A\|_*$  ⇔  $\min_{A,E} \sum_{ij} \phi( E_{ij} ) + \mu \|A\|_*$ s.t. $A + E = D$
M-estimator:
The additive form: $\phi( D_{ij} - A_{ij} ) = \min_{p_{ij}} ( D_{ij} - A_{ij} - p_{ij} )^2 + \varphi_A( p_{ij} )$
The multiplicative form: $\phi( D_{ij} - A_{ij} ) = \min_{p_{ij}} p_{ij} ( D_{ij} - A_{ij} )^2 + \varphi_M( p_{ij} )$
Nuclear norm:
The additive form: $\min_A \tfrac{1}{2}\| D - A \|_F^2 + \lambda \|A\|_*$
The multiplicative form: $\|A\|_* = \| W^{1/2} A \|_F^2$, with $W = (A A^T)^{-1/2}$

68 Summary
Half-quadratic minimization for robust PCAs and low rank matrix recovery:
Converts complex problems to linear least squares problems.
Two forms of minimization methods:
Error correction (the additive form).
Error detection (the multiplicative form).

69 Summary
Huber and Welsch M-estimators:
Convex Huber M-estimator: it changes from an L2 metric to L1; its dual potential function is the absolute value function.
Nonconvex Welsch M-estimator: it changes from an L2 metric to L1 and finally to L0, depending on the distance between samples; correntropy induced metric.

70 Some source codes
The Open Pattern Recognition Project (OpenPR) is intended to be an open source platform for sharing algorithms for image processing, computer vision, natural language processing, pattern recognition, machine learning, and related fields. OpenPR is currently supported by the National Laboratory of Pattern Recognition, CASIA.

71 Thank You

72 Some references - robust PCA
R. A. Maronna. Robust M-estimators of multivariate location and scatter. Annals of Statistics, 1976, 4.
N. A. Campbell. Robust procedures in multivariate analysis I: Robust covariance estimation. Applied Statistics, 1980, 29.
G. Li and Z. Chen. Projection-pursuit approach to robust dispersion matrices and principal components. Journal of the American Statistical Association, 1985.
L. Xu and A. Yuille. Robust principal component analysis by self-organizing rules based on statistical physics approach. IEEE TNN, 1995, 6.
A. Baccini et al. An L1-norm PCA and a heuristic approach. Ordinal and Symbolic Data Analysis, 1996.
M. Black and A. Jepson. EigenTracking: Robust matching and tracking of articulated objects using a view-based representation. IJCV, 1998, 26(1).
N. Locantore et al. Robust PCA for functional data. Test, 1999, 8(1).
C. Croux and G. Haesbroeck. Principal components analysis based on robust estimators of the covariance or correlation matrix. Biometrika, 2000, 87.
A. Leonardis and H. Bischof. Robust recognition using eigenimages. Computer Vision and Image Understanding, 2000, 78(1).
G. Boente et al. Influence functions and outlier detection under the common principal components model: A robust approach. Biometrika, 2002, 89.

73 Some references - robust PCA
M. Hubert et al. A fast method for robust principal components with applications to chemometrics. Chemometrics and Intelligent Laboratory Systems, 2002, 60.
M. Hubert and K. V. Branden. Robust methods for partial least squares regression. Journal of Chemometrics, 2003.
M. Hubert and K. Van Driessen. Fast and robust discriminant analysis. Computational Statistics and Data Analysis, 2003.
M. Hubert and S. Verboven. A robust PCR method for high-dimensional regressors. Journal of Chemometrics, 2003, 17.
F. De la Torre and M. Black. A framework for robust subspace learning. IJCV, 2003, 54(1-3).
F. De la Torre and M. J. Black. Robust parameterized component analysis: Theory and applications to 2D facial appearance models. CVIU, 2003, 91.
X. Wang and X. Tang. A unified framework for subspace face recognition. IEEE TPAMI, 2004, 26(9).
R. A. Maronna. Principal components and orthogonal regression based on robust scales. Technometrics, 2005, 47.
Q. Ke and T. Kanade. Robust L1 norm factorization in the presence of outliers and missing data by alternative convex programming. CVPR, 2005.

74 Some references - robust PCA
M. Hubert et al. ROBPCA: A new approach to robust principal component analysis. Technometrics, 2005, 47.
S. Fidler et al. Combining reconstructive and discriminative subspace methods for robust classification and regression by subsampling. IEEE TPAMI, 2006, 28(3).
C. Ding et al. R1-PCA: Rotational invariant L1-norm principal component analysis for robust subspace factorization. ICML, 2006.
J. Iglesias et al. A family of PCA for dealing with outliers. MICCAI, 2007.
S. Danijel et al. Weighted and robust learning of subspace representations. Pattern Recognition, 2007, 40.
N. Kwak. Principal component analysis based on L1-norm maximization. IEEE TPAMI, 2008, 30(9).
M. Hubert et al. Robust PCA for skewed data and its outlier map. Computational Statistics and Data Analysis, 2009, 53.
Y. Pang et al. L1-norm based tensor analysis. IEEE TCSVT, 2010, 20.
Ran He et al. Principal component analysis based on nonparametric maximum entropy. Neurocomputing, 2010, 73.
R. He et al. Robust principal component analysis based on maximum correntropy criterion. IEEE TIP, 2011, 20(6).

75 Some references - low rank matrix
J. Cai et al. A singular value thresholding algorithm for matrix completion. Preprint, arXiv, 2008.
A. Ganesh et al. Fast algorithms for recovering a corrupted low-rank matrix. International Workshop on CAMSAP, 2008.
J. Wright et al. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. NIPS, 2009.
Z. Lin et al. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. UIUC Technical Report UILU-ENG-09-2215, 2009.
Z. Lin et al. Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. UIUC Technical Report UILU-ENG-09-2214, 2009.
E. J. Candès et al. Robust principal component analysis? Journal of the ACM, 2011.
E. J. Candès and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 2010, 98(6).
A. Ganesh et al. Dense error correction for low-rank matrices via principal component pursuit. Computing Research Repository, 2010.
G. Liu et al. Robust subspace segmentation by low-rank representation. ICML, 2010.

76 Some references - low rank matrix
E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 2010, 56(5).
Y. Mu et al. Accelerated low-rank visual recovery by random projection. IEEE CVPR, 2011.
R. He et al. Recovery of corrupted low rank matrices via half-quadratic based nonconvex minimization. IEEE CVPR, 2011.
Ran He et al. Recovery of Corrupted Low-rank Matrix by Implicit Regularizer. Submitted to IEEE TPAMI.
M. Fornasier et al. Low rank matrix recovery via iteratively reweighted least squares minimization. SIAM Journal on Optimization, 2011.
D. Hsu et al. Robust matrix decomposition with sparse corruptions. IEEE Transactions on Information Theory, 2011, 57.
D. Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 2011, 57(3).
