Structured sparsity-inducing norms through submodular functions

1 Structured sparsity-inducing norms through submodular functions Francis Bach INRIA - Ecole Normale Supérieure, Paris, France Joint work with R. Jenatton, J. Mairal, G. Obozinski IMA - September 2011

2 Outline Introduction: Sparse methods for machine learning Need for structured sparsity: Going beyond the l 1 -norm Classical approaches to structured sparsity Linear combinations of l q -norms Submodular functions Structured sparsity through submodular functions Relaxation of the penalization of supports Unified algorithms and analysis Extensions Shaping level sets through symmetric submodular functions l q -norm relaxation of combinatorial penalties

3 Sparsity in supervised machine learning Observed data $(x_i, y_i) \in \mathbb{R}^p \times \mathbb{R}$, $i = 1,\dots,n$ Response vector $y = (y_1,\dots,y_n)^\top \in \mathbb{R}^n$ Design matrix $X = (x_1,\dots,x_n)^\top \in \mathbb{R}^{n \times p}$ Regularized empirical risk minimization: $\min_{w \in \mathbb{R}^p} \frac{1}{n}\sum_{i=1}^n \ell(y_i, w^\top x_i) + \lambda\Omega(w) = \min_{w \in \mathbb{R}^p} L(y, Xw) + \lambda\Omega(w)$ Norm $\Omega$ to promote sparsity: square loss + $\ell_1$-norm = basis pursuit in signal processing (Chen et al., 2001), Lasso in statistics/machine learning (Tibshirani, 1996) Proxy for interpretability Allows high-dimensional inference: $\log p = O(n)$
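A minimal numerical sketch of the regularized empirical risk above, for the square loss and the $\ell_1$-norm (synthetic data and illustrative names only; the exact loss normalization is a conventional choice, not taken from the slide):

```python
import numpy as np

def l1_objective(w, X, y, lam):
    """Square-loss empirical risk plus an l1 penalty: (1/n)||y - Xw||^2 + lam * ||w||_1."""
    n = X.shape[0]
    return np.sum((y - X @ w) ** 2) / n + lam * np.sum(np.abs(w))

# Synthetic example: sparse ground truth, noisy observations.
rng = np.random.default_rng(0)
n, p = 50, 20
X = rng.standard_normal((n, p))
w_true = np.zeros(p); w_true[:3] = [1.0, -2.0, 0.5]
y = X @ w_true + 0.1 * rng.standard_normal(n)
print(l1_objective(w_true, X, y, lam=0.1))
```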

4 Sparsity in unsupervised machine learning Multiple responses/signals $y = (y^1,\dots,y^k) \in \mathbb{R}^{n \times k}$ $\min_{X=(x^1,\dots,x^p)}\ \min_{w^1,\dots,w^k \in \mathbb{R}^p}\ \sum_{j=1}^k \big\{ L(y^j, X w^j) + \lambda\Omega(w^j) \big\}$

5 Sparsity in unsupervised machine learning Multiple responses/signals $y = (y^1,\dots,y^k) \in \mathbb{R}^{n \times k}$ $\min_{X=(x^1,\dots,x^p)}\ \min_{w^1,\dots,w^k \in \mathbb{R}^p}\ \sum_{j=1}^k \big\{ L(y^j, X w^j) + \lambda\Omega(w^j) \big\}$ Only responses are observed: dictionary learning Learn $X = (x^1,\dots,x^p) \in \mathbb{R}^{n \times p}$ such that $\forall j,\ \|x^j\|_2 \leq 1$ $\min_{X=(x^1,\dots,x^p)}\ \min_{w^1,\dots,w^k \in \mathbb{R}^p}\ \sum_{j=1}^k \big\{ L(y^j, X w^j) + \lambda\Omega(w^j) \big\}$ Olshausen and Field (1997); Elad and Aharon (2006); Mairal et al. (2009a) Sparse PCA: replace $\|x^j\|_2 \leq 1$ by $\Theta(x^j) \leq 1$

6 Sparsity in signal processing Multiple responses/signals $x = (x^1,\dots,x^k) \in \mathbb{R}^{n \times k}$ $\min_{D=(d^1,\dots,d^p)}\ \min_{\alpha^1,\dots,\alpha^k \in \mathbb{R}^p}\ \sum_{j=1}^k \big\{ L(x^j, D\alpha^j) + \lambda\Omega(\alpha^j) \big\}$ Only responses are observed: dictionary learning Learn $D = (d^1,\dots,d^p) \in \mathbb{R}^{n \times p}$ such that $\forall j,\ \|d^j\|_2 \leq 1$ $\min_{D=(d^1,\dots,d^p)}\ \min_{\alpha^1,\dots,\alpha^k \in \mathbb{R}^p}\ \sum_{j=1}^k \big\{ L(x^j, D\alpha^j) + \lambda\Omega(\alpha^j) \big\}$ Olshausen and Field (1997); Elad and Aharon (2006); Mairal et al. (2009a) Sparse PCA: replace $\|d^j\|_2 \leq 1$ by $\Theta(d^j) \leq 1$

7 Interpretability Why structured sparsity? Structured dictionary elements (Jenatton et al., 2009b) Dictionary elements organized in a tree or a grid (Kavukcuoglu et al., 2009; Jenatton et al., 2010; Mairal et al., 2010)

8 Structured sparse PCA (Jenatton et al., 2009b) raw data sparse PCA Unstructured sparse PCA: many zeros do not lead to better interpretability

9 Structured sparse PCA (Jenatton et al., 2009b) raw data sparse PCA Unstructured sparse PCA: many zeros do not lead to better interpretability

10 Structured sparse PCA (Jenatton et al., 2009b) raw data Structured sparse PCA Enforce selection of convex nonzero patterns robustness to occlusion in face identification

11 Structured sparse PCA (Jenatton et al., 2009b) raw data Structured sparse PCA Enforce selection of convex nonzero patterns robustness to occlusion in face identification

12 Interpretability Why structured sparsity? Structured dictionary elements (Jenatton et al., 2009b) Dictionary elements organized in a tree or a grid (Kavukcuoglu et al., 2009; Jenatton et al., 2010; Mairal et al., 2010)

13 Modelling of text corpora (Jenatton et al., 2010)

14 Interpretability Why structured sparsity? Structured dictionary elements (Jenatton et al., 2009b) Dictionary elements organized in a tree or a grid (Kavukcuoglu et al., 2009; Jenatton et al., 2010; Mairal et al., 2010)

15 Interpretability Why structured sparsity? Structured dictionary elements (Jenatton et al., 2009b) Dictionary elements organized in a tree or a grid (Kavukcuoglu et al., 2009; Jenatton et al., 2010; Mairal et al., 2010) Stability and identifiability Optimization problem $\min_{w \in \mathbb{R}^p} L(y, Xw) + \lambda\|w\|_1$ is unstable Codes $w^j$ often used in later processing (Mairal et al., 2009c) Prediction or estimation performance When prior knowledge matches data (Haupt and Nowak, 2006; Baraniuk et al., 2008; Jenatton et al., 2009a; Huang et al., 2009) Numerical efficiency Non-linear variable selection with $2^p$ subsets (Bach, 2008)

16 Classical approaches to structured sparsity Many application domains Computer vision (Cevher et al., 2008; Mairal et al., 2009b) Neuro-imaging (Gramfort and Kowalski, 2009; Jenatton et al., 2011) Bio-informatics (Rapaport et al., 2008; Kim and Xing, 2010) Non-convex approaches Haupt and Nowak (2006); Baraniuk et al. (2008); Huang et al. (2009) Convex approaches Design of sparsity-inducing norms

17 Outline Introduction: Sparse methods for machine learning Need for structured sparsity: Going beyond the l 1 -norm Classical approaches to structured sparsity Linear combinations of l q -norms Submodular functions Structured sparsity through submodular functions Relaxation of the penalization of supports Unified algorithms and analysis Extensions Shaping level sets through symmetric submodular functions l q -norm relaxation of combinatorial penalties

18 Sparsity-inducing norms Popular choice for $\Omega$: the $\ell_1$-$\ell_2$ norm, $\sum_{G \in \mathcal{H}} \|w_G\|_2 = \sum_{G \in \mathcal{H}} \big(\sum_{j \in G} w_j^2\big)^{1/2}$, with $\mathcal{H}$ a partition of $\{1,\dots,p\}$ The $\ell_1$-$\ell_2$ norm sets to zero groups of non-overlapping variables (as opposed to single variables for the $\ell_1$-norm) For the square loss, group Lasso (Yuan and Lin, 2006) (figure: groups $G_1$, $G_2$, $G_3$)
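A small sketch of the $\ell_1$-$\ell_2$ norm for a partition of the variables (the partition below is arbitrary, chosen only for illustration):

```python
import numpy as np

def group_l1_l2(w, groups):
    """Sum over groups G of ||w_G||_2; with a partition this is the group-Lasso penalty."""
    return sum(np.linalg.norm(w[list(G)]) for G in groups)

w = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
partition = [[0, 1], [2, 3], [4]]          # H: a partition of {0,...,4}
print(group_l1_l2(w, partition))           # ||w_{0,1}||_2 + ||w_{2,3}||_2 + ||w_{4}||_2
```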

19 Unit norm balls Geometric interpretation (figure: unit balls of $\|w\|_2$, $\|w\|_1$, and $\sqrt{w_1^2 + w_2^2} + |w_3|$)

20 Sparsity-inducing norms Popular choice for $\Omega$: the $\ell_1$-$\ell_2$ norm, $\sum_{G \in \mathcal{H}} \|w_G\|_2 = \sum_{G \in \mathcal{H}} \big(\sum_{j \in G} w_j^2\big)^{1/2}$, with $\mathcal{H}$ a partition of $\{1,\dots,p\}$ The $\ell_1$-$\ell_2$ norm sets to zero groups of non-overlapping variables (as opposed to single variables for the $\ell_1$-norm) For the square loss, group Lasso (Yuan and Lin, 2006) (figure: groups $G_1$, $G_2$, $G_3$) However, the $\ell_1$-$\ell_2$ norm encodes fixed/static prior information and requires knowing in advance how to group the variables What happens if the set of groups $\mathcal{H}$ is not a partition anymore?

21 Structured sparsity with overlapping groups (Jenatton, Audibert, and Bach, 2009a) When penalizing by the $\ell_1$-$\ell_2$ norm, $\sum_{G \in \mathcal{H}} \|w_G\|_2 = \sum_{G \in \mathcal{H}} \big(\sum_{j \in G} w_j^2\big)^{1/2}$ The $\ell_1$ norm induces sparsity at the group level: some $w_G$'s are set to zero Inside the groups, the $\ell_2$ norm does not promote sparsity (figure: overlapping groups $G_1$, $G_2$, $G_3$)

22 Structured sparsity with overlapping groups (Jenatton, Audibert, and Bach, 2009a) When penalizing by the $\ell_1$-$\ell_2$ norm, $\sum_{G \in \mathcal{H}} \|w_G\|_2 = \sum_{G \in \mathcal{H}} \big(\sum_{j \in G} w_j^2\big)^{1/2}$ The $\ell_1$ norm induces sparsity at the group level: some $w_G$'s are set to zero Inside the groups, the $\ell_2$ norm does not promote sparsity (figure: overlapping groups $G_1$, $G_2$, $G_3$) The zero pattern of $w$ is given by $\{j,\ w_j = 0\} = \bigcup_{G \in \mathcal{H}'} G$ for some $\mathcal{H}' \subseteq \mathcal{H}$: zero patterns are unions of groups

23 Examples of set of groups H Selection of contiguous patterns on a sequence, p = 6 H is the set of blue groups Any union of blue groups set to zero leads to the selection of a contiguous pattern
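A sketch of one standard way to realize this on a sequence, assuming the "blue groups" are the left and right intervals (as in Jenatton et al., 2009a): take $\mathcal{H}$ to contain every prefix and every suffix of the index set; any union of such groups set to zero leaves a contiguous non-zero pattern.

```python
p = 6
# Hypothetical construction: prefixes and suffixes of {0,...,p-1}.
H = [set(range(k + 1)) for k in range(p - 1)] + [set(range(k, p)) for k in range(1, p)]

def support_after_zeroing(zeroed_groups, p):
    """Complement of a union of groups: the resulting non-zero pattern."""
    zeros = set().union(*zeroed_groups) if zeroed_groups else set()
    return sorted(set(range(p)) - zeros)

print(support_after_zeroing([H[1], H[-2]], p))  # e.g. [2, 3]: a contiguous pattern
```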

24 Examples of set of groups H Selection of rectangles on a 2-D grid, p = 25 H is the set of blue/green groups (with their complements, not displayed) Any union of blue/green groups set to zero leads to the selection of a rectangle

25 Examples of set of groups H Selection of diamond-shaped patterns on a 2-D grid, p = 25. It is possible to extend such settings to 3-D space, or more complex topologies

26 Relationship between H and Zero Patterns (Jenatton, Audibert, and Bach, 2009a) From $\mathcal{H}$ to zero patterns: by generating the union-closure of $\mathcal{H}$ From zero patterns to $\mathcal{H}$: design groups $\mathcal{H}$ from any union-closed set of zero patterns, or from any intersection-closed set of non-zero patterns

27 Unit norm balls Geometric interpretation (figure: unit balls of $\|w\|_1$, $\sqrt{w_1^2 + w_2^2} + |w_3|$, and $\|w\|_2 + |w_1| + |w_2|$)

28 Optimization for sparsity-inducing norms (see Bach, Jenatton, Mairal, and Obozinski, 2011) Gradient descent as a proximal method (differentiable functions) $w_{t+1} = \arg\min_{w \in \mathbb{R}^p} J(w_t) + (w - w_t)^\top \nabla J(w_t) + \frac{L}{2}\|w - w_t\|_2^2$ $\Rightarrow w_{t+1} = w_t - \frac{1}{L}\nabla J(w_t)$

29 Optimization for sparsity-inducing norms (see Bach, Jenatton, Mairal, and Obozinski, 2011) Gradient descent as a proximal method (differentiable functions) $w_{t+1} = \arg\min_{w \in \mathbb{R}^p} J(w_t) + (w - w_t)^\top \nabla J(w_t) + \frac{L}{2}\|w - w_t\|_2^2$ $\Rightarrow w_{t+1} = w_t - \frac{1}{L}\nabla J(w_t)$ Problems of the form: $\min_{w \in \mathbb{R}^p} L(w) + \lambda\Omega(w)$ $w_{t+1} = \arg\min_{w \in \mathbb{R}^p} L(w_t) + (w - w_t)^\top \nabla L(w_t) + \lambda\Omega(w) + \frac{L}{2}\|w - w_t\|_2^2$ $\Omega(w) = \|w\|_1$ $\Rightarrow$ thresholded gradient descent Similar convergence rates to smooth optimization Acceleration methods (Nesterov, 2007; Beck and Teboulle, 2009)
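A compact sketch of the thresholded (proximal) gradient iteration for the square loss with $\Omega = \ell_1$, i.e. ISTA; the step size $1/L$ with $L$ the largest eigenvalue of $X^\top X/n$ is a standard choice, not something specified on the slide:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (component-wise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    """Minimize (1/(2n)) ||y - Xw||^2 + lam * ||w||_1 by proximal gradient descent."""
    n, p = X.shape
    L = np.linalg.eigvalsh(X.T @ X / n).max()   # Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - grad / L, lam / L)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30))
w_true = np.zeros(30); w_true[:4] = [2, -1, 1.5, -2]
y = X @ w_true + 0.05 * rng.standard_normal(100)
print(np.nonzero(ista(X, y, lam=0.1))[0])       # returns a sparse estimate
```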

30 Sparse Structured PCA (Jenatton, Obozinski, and Bach, 2009b) Learning sparse and structured dictionary elements: $\min_{W \in \mathbb{R}^{k \times n},\, X \in \mathbb{R}^{p \times k}}\ \frac{1}{n}\sum_{i=1}^n \|y_i - X w_i\|_2^2 + \lambda\sum_{j=1}^k \Omega(x^j)$ s.t. $\forall i,\ \|w_i\|_2 \leq 1$

31 Application to face databases (1/3) raw data (unstructured) NMF NMF obtains partially local features

32 Application to face databases (2/3) (unstructured) sparse PCA Structured sparse PCA Enforce selection of convex nonzero patterns robustness to occlusion

33 Application to face databases (2/3) (unstructured) sparse PCA Structured sparse PCA Enforce selection of convex nonzero patterns robustness to occlusion

34 Application to face databases (3/3) Quantitative performance evaluation on classification task (figure: % correct classification vs. dictionary size for raw data, PCA, NMF, SPCA, shared SPCA, SSPCA, shared SSPCA)

35 Dictionary learning vs. sparse structured PCA Exchange roles of $X$ and $w$ Sparse structured PCA (structured dictionary elements): $\min_{W \in \mathbb{R}^{k \times n},\, X \in \mathbb{R}^{p \times k}}\ \frac{1}{n}\sum_{i=1}^n \|y_i - X w_i\|_2^2 + \lambda\sum_{j=1}^k \Omega(x^j)$ s.t. $\forall i,\ \|w_i\|_2 \leq 1$. Dictionary learning with structured sparsity for codes $w$: $\min_{W \in \mathbb{R}^{k \times n},\, X \in \mathbb{R}^{p \times k}}\ \frac{1}{n}\sum_{i=1}^n \big(\|y_i - X w_i\|_2^2 + \lambda\Omega(w_i)\big)$ s.t. $\forall j,\ \|x^j\|_2 \leq 1$. Optimization: proximal methods Requires solving many times $\min_{w \in \mathbb{R}^p} \frac{1}{2}\|y - w\|_2^2 + \lambda\Omega(w)$ Modularity of implementation if the proximal step is efficient (Jenatton et al., 2010; Mairal et al., 2010)

36 Hierarchical dictionary learning (Jenatton, Mairal, Obozinski, and Bach, 2010) Structure on codes $w$ (not on dictionary $X$) Hierarchical penalization: $\Omega(w) = \sum_{G \in \mathcal{H}} \|w_G\|_2$ where the groups $G$ in $\mathcal{H}$ are equal to the sets of descendants of some nodes in a tree A variable is selected after its ancestors (Zhao et al., 2009; Bach, 2008)
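A sketch of the hierarchical penalty: one group per node, containing the node and all its descendants. The toy tree below is an arbitrary illustration, not the structure used in the paper.

```python
import numpy as np

# Toy tree on variables 0..6: parent[j] is the parent of node j (root has parent -1).
parent = [-1, 0, 0, 1, 1, 2, 2]

def descendants(node, parent):
    """Set containing `node` and all of its descendants in the tree."""
    children = {j for j in range(len(parent)) if parent[j] == node}
    out = {node}
    for c in children:
        out |= descendants(c, parent)
    return out

H = [descendants(j, parent) for j in range(len(parent))]   # one group per node

def hierarchical_norm(w, H):
    """Omega(w) = sum over groups G of ||w_G||_2."""
    return sum(np.linalg.norm(w[sorted(G)]) for G in H)

w = np.array([1.0, 0.5, 0.0, 0.2, 0.0, 0.0, 0.0])   # support {0,1,3}: a rooted subtree
print(hierarchical_norm(w, H))
```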

37 Hierarchical dictionary learning Modelling of text corpora Each document is modelled through word counts Low-rank matrix factorization of word-document matrix Probabilistic topic models (Blei et al., 2003) Similar structures based on non parametric Bayesian methods (Blei et al., 2004) Can we achieve similar performance with simple matrix factorization formulation?

38 Modelling of text corpora - Dictionary tree

39 Application to background subtraction (Mairal, Jenatton, Obozinski, and Bach, 2010) Input l 1 -norm Structured norm

40 Application to background subtraction (Mairal, Jenatton, Obozinski, and Bach, 2010) Background l 1 -norm Structured norm

41 Topographic dictionaries (Mairal, Jenatton, Obozinski, and Bach, 2010)

42 Application to neuro-imaging Structured sparsity for fmri (Jenatton et al., 2011) Brain reading : prediction of (seen) object size Multi-scale activity levels through hierarchical penalization

43 Application to neuro-imaging Structured sparsity for fmri (Jenatton et al., 2011) Brain reading : prediction of (seen) object size Multi-scale activity levels through hierarchical penalization

44 Application to neuro-imaging Structured sparsity for fmri (Jenatton et al., 2011) Brain reading : prediction of (seen) object size Multi-scale activity levels through hierarchical penalization

45 Structured sparse PCA on resting state activity (Varoquaux, Jenatton, Gramfort, Obozinski, Thirion, and Bach, 2010)

46 An alternative approach: Latent group Lasso (Jacob, Obozinski, and Vert, 2009) $\Omega_{\text{latent}}(w) = \min_{(v^A \in \mathbb{R}^p)_{A \in \mathcal{G}}}\ \sum_{A \in \mathcal{G}} d_A \|v^A\|_q$ s.t. $v^A_i = 0$ for $i \notin A$, $w = \sum_{A \in \mathcal{G}} v^A$. Supports are typically unions of subcollections of the sets defining the norm See analysis by Rao et al. (2011) in the context of signal processing

47 Latent group Lasso (Jacob et al., 2009) Application to Bioinformatics Metastasis prediction from microarray data Biological pathways Dedicated sparsity-inducing norm for better interpretability and prediction

48 Outline Introduction: Sparse methods for machine learning Need for structured sparsity: Going beyond the l 1 -norm Classical approaches to structured sparsity Linear combinations of l q -norms Submodular functions Structured sparsity through submodular functions Relaxation of the penalization of supports Unified algorithms and analysis Extensions Shaping level sets through symmetric submodular functions l q -norm relaxation of combinatorial penalties

49 $\ell_1$-norm = convex envelope of cardinality of support Let $w \in \mathbb{R}^p$. Let $V = \{1,\dots,p\}$ and $\mathrm{Supp}(w) = \{j \in V,\ w_j \neq 0\}$ Cardinality of support: $\|w\|_0 = \mathrm{Card}(\mathrm{Supp}(w))$ Convex envelope = largest convex lower bound (see, e.g., Boyd and Vandenberghe, 2004) (figure: $\|w\|_0$ and $|w|$ in one dimension) $\ell_1$-norm = convex envelope of the $\ell_0$-quasi-norm on the $\ell_\infty$-ball $[-1,1]^p$
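A quick numeric illustration of the lower-bound part of this claim (the envelope property itself is not checked here): on the $\ell_\infty$-ball, $\|w\|_1$ never exceeds $\|w\|_0$, with equality when all non-zero entries have magnitude one.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 8
for _ in range(1000):
    w = rng.uniform(-1, 1, p) * (rng.random(p) < 0.5)   # random point of [-1,1]^p with zeros
    assert np.abs(w).sum() <= np.count_nonzero(w) + 1e-12

w = np.array([1.0, -1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0])
print(np.abs(w).sum(), np.count_nonzero(w))             # equal: 3.0 and 3
```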

50 Convex envelopes of general functions of the support (Bach, 2010) Let $F: 2^V \to \mathbb{R}$ be a set-function Assume $F$ is non-decreasing (i.e., $A \subseteq B \Rightarrow F(A) \leq F(B)$) Explicit prior knowledge on supports (Haupt and Nowak, 2006; Baraniuk et al., 2008; Huang et al., 2009) Define $\Theta(w) = F(\mathrm{Supp}(w))$: how to get its convex envelope? 1. Possible if $F$ is also submodular 2. Allows unified theory and algorithm 3. Provides new regularizers

51 Submodular functions (Fujishige, 2005; Bach, 2010b) $F: 2^V \to \mathbb{R}$ is submodular if and only if $\forall A, B \subseteq V,\ F(A) + F(B) \geq F(A \cup B) + F(A \cap B)$ $\Leftrightarrow$ $\forall k \in V,\ A \mapsto F(A \cup \{k\}) - F(A)$ is non-increasing

52 Submodular functions (Fujishige, 2005; Bach, 2010b) $F: 2^V \to \mathbb{R}$ is submodular if and only if $\forall A, B \subseteq V,\ F(A) + F(B) \geq F(A \cup B) + F(A \cap B)$ $\Leftrightarrow$ $\forall k \in V,\ A \mapsto F(A \cup \{k\}) - F(A)$ is non-increasing Intuition 1: defined like concave functions ("diminishing returns") Example: $F: A \mapsto g(\mathrm{Card}(A))$ is submodular if $g$ is concave

53 Submodular functions (Fujishige, 2005; Bach, 2010b) $F: 2^V \to \mathbb{R}$ is submodular if and only if $\forall A, B \subseteq V,\ F(A) + F(B) \geq F(A \cup B) + F(A \cap B)$ $\Leftrightarrow$ $\forall k \in V,\ A \mapsto F(A \cup \{k\}) - F(A)$ is non-increasing Intuition 1: defined like concave functions ("diminishing returns") Example: $F: A \mapsto g(\mathrm{Card}(A))$ is submodular if $g$ is concave Intuition 2: behave like convex functions Polynomial-time minimization, conjugacy theory

54 Submodular functions (Fujishige, 2005; Bach, 2010b) $F: 2^V \to \mathbb{R}$ is submodular if and only if $\forall A, B \subseteq V,\ F(A) + F(B) \geq F(A \cup B) + F(A \cap B)$ $\Leftrightarrow$ $\forall k \in V,\ A \mapsto F(A \cup \{k\}) - F(A)$ is non-increasing Intuition 1: defined like concave functions ("diminishing returns") Example: $F: A \mapsto g(\mathrm{Card}(A))$ is submodular if $g$ is concave Intuition 2: behave like convex functions Polynomial-time minimization, conjugacy theory Used in several areas of signal processing and machine learning Total variation/graph cuts (Chambolle, 2005; Boykov et al., 2001) Optimal design (Krause and Guestrin, 2005)
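A brute-force check of the submodular inequality for $F(A) = g(\mathrm{Card}(A))$ with a concave $g$, over all pairs of subsets of a small ground set (a sanity-check sketch, not an efficient test):

```python
import itertools
import numpy as np

V = range(5)
subsets = [frozenset(s) for r in range(6) for s in itertools.combinations(V, r)]

def F(A):
    """Concave function of the cardinality: g(|A|) = sqrt(|A|)."""
    return np.sqrt(len(A))

def is_submodular(F, subsets):
    return all(F(A) + F(B) >= F(A | B) + F(A & B) - 1e-12
               for A in subsets for B in subsets)

print(is_submodular(F, subsets))   # True
```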

55 Submodular functions - Examples Concave functions of the cardinality: $g(|A|)$ Cuts Entropies $H((X_k)_{k \in A})$ from $p$ random variables $X_1,\dots,X_p$ Gaussian variables: $H((X_k)_{k \in A}) = \frac{1}{2}\log\det\Sigma_{AA} + \frac{|A|}{2}\log(2\pi e)$ Functions of eigenvalues of sub-matrices Network flows Efficient representation for set covers Rank functions of matroids

56 Submodular functions - Lovász extension Subsets may be identified with elements of $\{0,1\}^p$ Given any set-function $F$ and $w$ such that $w_{j_1} \geq \dots \geq w_{j_p}$, define: $f(w) = \sum_{k=1}^p w_{j_k}\big[F(\{j_1,\dots,j_k\}) - F(\{j_1,\dots,j_{k-1}\})\big]$ If $w = 1_A$, $f(w) = F(A)$: extension from $\{0,1\}^p$ to $\mathbb{R}^p$ $f$ is piecewise affine and positively homogeneous $F$ is submodular if and only if $f$ is convex (Lovász, 1982) Minimizing $f(w)$ on $w \in [0,1]^p$ is equivalent to minimizing $F$ on $2^V$
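A direct implementation of this formula (a sketch; $F$ is any set-function given as a Python callable on frozensets), with the extension property $f(1_A) = F(A)$ checked on one set:

```python
import numpy as np

def lovasz_extension(F, w):
    """f(w) = sum_k w_{j_k} [F({j_1..j_k}) - F({j_1..j_{k-1}})] with w_{j_1} >= ... >= w_{j_p}."""
    order = np.argsort(-w)                       # j_1, ..., j_p
    value, prev, A = 0.0, F(frozenset()), set()
    for j in order:
        A.add(int(j))
        value += w[j] * (F(frozenset(A)) - prev)
        prev = F(frozenset(A))
    return value

F = lambda A: np.sqrt(len(A))                    # a submodular example
A = {0, 2}
w = np.zeros(4); w[list(A)] = 1.0
print(lovasz_extension(F, w), F(frozenset(A)))   # both equal sqrt(2)
```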

57 Submodular functions - Submodular polyhedron Submodular polyhedron: $P(F) = \{s \in \mathbb{R}^p,\ \forall A \subseteq V,\ s(A) \leq F(A)\}$ (figures: $P(F)$ and the base polytope $B(F)$ in two and three dimensions)

58 Submodular functions - Submodular polyhedron Submodular polyhedron: $P(F) = \{s \in \mathbb{R}^p,\ \forall A \subseteq V,\ s(A) \leq F(A)\}$ Link with Lovász extension (Lovász, 1982): if $w \in \mathbb{R}^p_+$, then $\max_{s \in P(F)} w^\top s = f(w)$ Maximizer obtained by a greedy algorithm: sort the components of $w$ as $w_{j_1} \geq \dots \geq w_{j_p}$, then set $s_{j_k} = F(\{j_1,\dots,j_k\}) - F(\{j_1,\dots,j_{k-1}\})$ Other operations on the submodular polyhedron (Bach, 2010b)
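A sketch of the greedy algorithm just described, with a brute-force check (tiny $p$) that the resulting $s$ lies in $P(F)$; the set-function and the vector $w$ below are arbitrary illustrations:

```python
import itertools
import numpy as np

def greedy_maximizer(F, w):
    """Sort w decreasingly and set s_{j_k} = F({j_1..j_k}) - F({j_1..j_{k-1}})."""
    order = np.argsort(-w)
    s, A, prev = np.zeros(len(w)), set(), F(frozenset())
    for j in order:
        A.add(int(j))
        s[j] = F(frozenset(A)) - prev
        prev = F(frozenset(A))
    return s

F = lambda A: min(len(A), 2)                      # nondecreasing submodular
w = np.array([0.5, 2.0, 0.0, 1.0])                # w in R^p_+
s = greedy_maximizer(F, w)

# Check feasibility s(A) <= F(A) for every subset A, and report the value w^T s = f(w).
V = range(len(w))
feasible = all(s[list(A)].sum() <= F(frozenset(A)) + 1e-12
               for r in range(len(w) + 1) for A in itertools.combinations(V, r))
print(feasible, w @ s)
```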

59 Submodular functions - Optimization Submodular function minimization in $O(p^6)$ Schrijver (2000); Iwata et al. (2001); Orlin (2009) Efficient active-set algorithm with no complexity bound Based on the efficient computability of the support function Fujishige and Isotani (2011); Wolfe (1976) Submodular function maximization is NP-hard 1/4-approximation with a random subset, 1/2 with local search (Feige et al., 2007) $(1 - 1/e)$-approximation for maximizing a monotonic submodular function with a cardinality constraint (Nemhauser et al., 1978) Special cases with faster algorithms: cuts, flows

60 Submodular functions and structured sparsity Let $F: 2^V \to \mathbb{R}$ be a non-decreasing submodular set-function Proposition: the convex envelope of $\Theta: w \mapsto F(\mathrm{Supp}(w))$ on the $\ell_\infty$-ball is $\Omega: w \mapsto f(|w|)$, where $f$ is the Lovász extension of $F$

61 Submodular functions and structured sparsity Let $F: 2^V \to \mathbb{R}$ be a non-decreasing submodular set-function Proposition: the convex envelope of $\Theta: w \mapsto F(\mathrm{Supp}(w))$ on the $\ell_\infty$-ball is $\Omega: w \mapsto f(|w|)$, where $f$ is the Lovász extension of $F$ Sparsity-inducing properties: $\Omega$ is a polyhedral norm (figure: extreme points $(0,1)/F(\{2\})$, $(1,1)/F(\{1,2\})$, $(1,0)/F(\{1\})$ of the unit ball) $A$ is stable if for all $B \supseteq A$, $B \neq A \Rightarrow F(B) > F(A)$ With probability one, stable sets are the only allowed active sets

62 Polyhedral unit balls (figures in $\mathbb{R}^3$) $F(A) = |A|$: $\Omega(w) = \|w\|_1$ $F(A) = \min\{|A|, 1\}$: $\Omega(w) = \|w\|_\infty$ $F(A) = |A|^{1/2}$: all possible extreme points $F(A) = 1_{\{A \cap \{1\} \neq \varnothing\}} + 1_{\{A \cap \{2,3\} \neq \varnothing\}}$: $\Omega(w) = |w_1| + \|w_{\{2,3\}}\|_\infty$ $F(A) = 1_{\{A \cap \{1,2,3\} \neq \varnothing\}} + 1_{\{A \cap \{2,3\} \neq \varnothing\}} + 1_{\{A \cap \{3\} \neq \varnothing\}}$: $\Omega(w) = \|w\|_\infty + \|w_{\{2,3\}}\|_\infty + |w_3|$
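A numeric sanity check of the first two cases, $\Omega(w) = f(|w|)$ with the Lovász-extension routine sketched earlier (repeated here so the block is self-contained):

```python
import numpy as np

def lovasz_extension(F, w):
    order = np.argsort(-w)
    value, prev, A = 0.0, F(frozenset()), set()
    for j in order:
        A.add(int(j))
        value += w[j] * (F(frozenset(A)) - prev)
        prev = F(frozenset(A))
    return value

def omega(F, w):
    """Norm associated with a nondecreasing submodular F: Omega(w) = f(|w|)."""
    return lovasz_extension(F, np.abs(w))

rng = np.random.default_rng(0)
w = rng.standard_normal(5)
print(np.isclose(omega(lambda A: len(A), w), np.abs(w).sum()))          # F(A)=|A|        -> l1
print(np.isclose(omega(lambda A: min(len(A), 1), w), np.abs(w).max()))  # F(A)=min(|A|,1) -> l_inf
```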

63 Submodular functions and structured sparsity Examples From $\Omega(w)$ to $F(A)$: provides new insights into existing norms Grouped norms with overlapping groups (Jenatton et al., 2009a) $\Omega(w) = \sum_{G \in \mathcal{H}} \|w_G\|_\infty$ $\ell_1$-$\ell_\infty$ norm $\Rightarrow$ sparsity at the group level Some $w_G$'s are set to zero $(\mathrm{Supp}(w))^c = \bigcup_{G \in \mathcal{H}'} G$ for some $\mathcal{H}' \subseteq \mathcal{H}$

64 Submodular functions and structured sparsity Examples From $\Omega(w)$ to $F(A)$: provides new insights into existing norms Grouped norms with overlapping groups (Jenatton et al., 2009a) $\Omega(w) = \sum_{G \in \mathcal{H}} \|w_G\|_\infty$ $\Leftrightarrow$ $F(A) = \mathrm{Card}\big(\{G \in \mathcal{H},\ G \cap A \neq \varnothing\}\big)$ $\ell_1$-$\ell_\infty$ norm $\Rightarrow$ sparsity at the group level Some $w_G$'s are set to zero $(\mathrm{Supp}(w))^c = \bigcup_{G \in \mathcal{H}'} G$ for some $\mathcal{H}' \subseteq \mathcal{H}$ Justification not only limited to allowed sparsity patterns
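A small check of this correspondence: with $F(A)$ counting the groups that intersect $A$, the Lovász extension at $|w|$ reproduces the $\ell_1$-$\ell_\infty$ norm $\sum_G \|w_G\|_\infty$. The overlapping groups below are an arbitrary toy choice; the Lovász routine is repeated for self-containment.

```python
import numpy as np

def lovasz_extension(F, w):
    order = np.argsort(-w)
    value, prev, A = 0.0, F(frozenset()), set()
    for j in order:
        A.add(int(j))
        value += w[j] * (F(frozenset(A)) - prev)
        prev = F(frozenset(A))
    return value

H = [{0, 1, 2}, {2, 3}, {3, 4}]                        # overlapping groups on {0,...,4}
F = lambda A: sum(1 for G in H if G & set(A))          # number of groups intersecting A

rng = np.random.default_rng(1)
w = rng.standard_normal(5)
group_norm = sum(np.abs(w[list(G)]).max() for G in H)  # sum of l_inf norms over groups
print(np.isclose(lovasz_extension(F, np.abs(w)), group_norm))   # True
```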

65 Selection of contiguous patterns in a sequence Selection of contiguous patterns in a sequence H is the set of blue groups: any union of blue groups set to zero leads to the selection of a contiguous pattern

66 Selection of contiguous patterns in a sequence H is the set of blue groups: any union of blue groups set to zero leads to the selection of a contiguous pattern $\sum_{G \in \mathcal{H}} \|w_G\|_\infty$ $\Leftrightarrow$ $F(A) = p - 2 + \mathrm{Range}(A)$ if $A \neq \varnothing$ Jump from 0 to $p - 1$: tends to include all variables simultaneously Add $\nu|A|$ to smooth the kink: all sparsity patterns are possible Contiguous patterns are favored (and not forced)

67 Extensions of norms with overlapping groups Selection of rectangles (at any position) in a 2-D grid Hierarchies

68 Submodular functions and structured sparsity Examples From $\Omega(w)$ to $F(A)$: provides new insights into existing norms Grouped norms with overlapping groups (Jenatton et al., 2009a) $\Omega(w) = \sum_{G \in \mathcal{H}} \|w_G\|_\infty$ $\Leftrightarrow$ $F(A) = \mathrm{Card}\big(\{G \in \mathcal{H},\ G \cap A \neq \varnothing\}\big)$ Justification not only limited to allowed sparsity patterns

69 Submodular functions and structured sparsity Examples From $\Omega(w)$ to $F(A)$: provides new insights into existing norms Grouped norms with overlapping groups (Jenatton et al., 2009a) $\Omega(w) = \sum_{G \in \mathcal{H}} \|w_G\|_\infty$ $\Leftrightarrow$ $F(A) = \mathrm{Card}\big(\{G \in \mathcal{H},\ G \cap A \neq \varnothing\}\big)$ Justification not only limited to allowed sparsity patterns From $F(A)$ to $\Omega(w)$: provides new sparsity-inducing norms $F(A) = g(\mathrm{Card}(A))$ $\Rightarrow$ $\Omega$ is a combination of order statistics Non-factorial priors for supervised learning: $\Omega$ depends on the eigenvalues of $X_A^\top X_A$ and not simply on the cardinality of $A$

70 Non-factorial priors for supervised learning Joint variable selection and regularization. Given support $A \subseteq V$, $\min_{w_A \in \mathbb{R}^{|A|}}\ \frac{1}{2n}\|y - X_A w_A\|_2^2 + \frac{\lambda}{2}\|w_A\|_2^2$ Minimizing with respect to $A$ will always lead to $A = V$ Information/model selection criterion $F(A)$: $\min_{A \subseteq V}\ \min_{w_A \in \mathbb{R}^{|A|}}\ \frac{1}{2n}\|y - X_A w_A\|_2^2 + \frac{\lambda}{2}\|w_A\|_2^2 + F(A)$ $\Leftrightarrow$ $\min_{w \in \mathbb{R}^p}\ \frac{1}{2n}\|y - Xw\|_2^2 + \frac{\lambda}{2}\|w\|_2^2 + F(\mathrm{Supp}(w))$

71 Non-factorial priors for supervised learning Selection of subset $A$ from design $X \in \mathbb{R}^{n \times p}$ with $\ell_2$-penalization Frequentist analysis (Mallows' $C_L$): $\mathrm{tr}\,X_A^\top X_A (X_A^\top X_A + \lambda I)^{-1}$ Not submodular Bayesian analysis (marginal likelihood): $\log\det(X_A^\top X_A + \lambda I)$ Submodular (also true for $\mathrm{tr}\,(X_A^\top X_A)^{1/2}$) (table: comparison of the submodular penalty with $\ell_2$, $\ell_1$, and greedy selection for various $(p, n, k)$, mean ± standard deviation)

72 Unified optimization algorithms Polyhedral norm with $O(3^p)$ faces and extreme points Not suitable to linear programming toolboxes Subgradient ($w \mapsto \Omega(w)$ non-differentiable) subgradient may be obtained in polynomial time $\Rightarrow$ too slow

73 Unified optimization algorithms Polyhedral norm with $O(3^p)$ faces and extreme points Not suitable to linear programming toolboxes Subgradient ($w \mapsto \Omega(w)$ non-differentiable) subgradient may be obtained in polynomial time $\Rightarrow$ too slow Proximal methods (e.g., Beck and Teboulle, 2009) $\min_{w \in \mathbb{R}^p} L(y, Xw) + \lambda\Omega(w)$: differentiable + non-differentiable Efficient when (P): $\min_{w \in \mathbb{R}^p} \frac{1}{2}\|w - v\|_2^2 + \lambda\Omega(w)$ is easy Proposition: (P) is equivalent to $\min_{A \subseteq V}\ \lambda F(A) - \sum_{j \in A} |v_j|$ with the minimum-norm-point algorithm Possible complexity bound $O(p^6)$, but empirically $O(p^2)$ (or more) Faster algorithm for special case (Mairal et al., 2010)

74 Proximal methods for Lovász extensions Proposition (Chambolle and Darbon, 2009): let $w^*$ be the solution of $\min_{w \in \mathbb{R}^p} \frac{1}{2}\|w - v\|_2^2 + \lambda f(w)$. Then the solutions of $\min_{A \subseteq V}\ \lambda F(A) + \sum_{j \in A}(\alpha - v_j)$ are the sets $A^\alpha$ such that $\{w^* > \alpha\} \subseteq A^\alpha \subseteq \{w^* \geq \alpha\}$ Parametric submodular function optimization General decomposition strategy for $f(|w|)$ and $f(w)$ (Groenevelt, 1991) Efficient only when submodular minimization is efficient Otherwise, the minimum-norm-point algorithm (a.k.a. Frank-Wolfe) is preferable

75 Comparison of optimization algorithms Synthetic example with $p = 1000$ and $F(A) = |A|^{1/2}$ ISTA: proximal method FISTA: accelerated variant (Beck and Teboulle, 2009) (figure: $f(w) - \min(f)$ vs. time in seconds for FISTA, ISTA, and the subgradient method)

76 Comparison of optimization algorithms (Mairal, Jenatton, Obozinski, and Bach, 2010) Small scale Specific norms which can be implemented through network flows (figure: log(primal-optimum) vs. log(seconds) for $n = 100$, $p = 1000$, one-dimensional DCT; methods: ProxFlow, SG, ADMM, Lin-ADMM, QP, CP)

77 Comparison of optimization algorithms (Mairal, Jenatton, Obozinski, and Bach, 2010) Large scale Specific norms which can be implemented through network flows (figures: log(primal-optimum) vs. log(seconds) for $n = 1024$, $p = 10000$ and $n = 1024$, $p = 100000$, one-dimensional DCT; methods: ProxFlow, SG, ADMM, Lin-ADMM, CP)

78 Unified theoretical analysis Decomposability Key to theoretical analysis (Negahban et al., 2009) Property: $\forall w \in \mathbb{R}^p$ and $J \subseteq V$, if $\min_{j \in J} |w_j| \geq \max_{j \in J^c} |w_j|$, then $\Omega(w) = \Omega_J(w_J) + \Omega^J(w_{J^c})$ Support recovery Extension of known sufficient condition (Zhao and Yu, 2006; Negahban and Wainwright, 2008) High-dimensional inference Extension of known sufficient condition (Bickel et al., 2009) Matches with analysis of Negahban et al. (2009) for common cases

79 Support recovery - $\min_{w \in \mathbb{R}^p} \frac{1}{2n}\|y - Xw\|_2^2 + \lambda\Omega(w)$ Notation: $\rho(J) = \min_{B \subseteq J^c} \frac{F(B \cup J) - F(J)}{F(B)} \in (0, 1]$ (for $J$ stable) $c(J) = \sup_{w \in \mathbb{R}^p} \Omega_J(w_J)/\|w_J\|_2 \leq |J|^{1/2}\max_{k \in V} F(\{k\})$ Proposition: Assume $y = Xw^* + \sigma\varepsilon$, with $\varepsilon \sim N(0, I)$ $J$ = smallest stable set containing the support of $w^*$ Assume $\nu = \min_{j,\, w^*_j \neq 0} |w^*_j| > 0$ Let $Q = \frac{1}{n}X^\top X \in \mathbb{R}^{p \times p}$. Assume $\kappa = \lambda_{\min}(Q_{JJ}) > 0$ Assume that for $\eta > 0$, $(\Omega^J)^*\big[\big(\Omega_J(Q_{JJ}^{-1} Q_{Jj})\big)_{j \in J^c}\big] \leq 1 - \eta$ If $\lambda \leq \frac{\kappa\nu}{2 c(J)}$, $\hat w$ has support equal to $J$, with probability larger than $1 - 3P\big(\Omega^*(z) > \frac{\lambda\eta\rho(J)\sqrt{n}}{2\sigma}\big)$ $z$ is a multivariate normal with covariance matrix $Q$

80 Consistency - $\min_{w \in \mathbb{R}^p} \frac{1}{2n}\|y - Xw\|_2^2 + \lambda\Omega(w)$ Proposition: Assume $y = Xw^* + \sigma\varepsilon$, with $\varepsilon \sim N(0, I)$ $J$ = smallest stable set containing the support of $w^*$ Let $Q = \frac{1}{n}X^\top X \in \mathbb{R}^{p \times p}$. Assume that $\forall\Delta$ s.t. $\Omega^J(\Delta_{J^c}) \leq 3\Omega_J(\Delta_J)$, $\Delta^\top Q\Delta \geq \kappa\|\Delta_J\|_2^2$ Then $\Omega(\hat w - w^*) \leq \frac{24\, c(J)^2 \lambda}{\kappa\rho(J)^2}$ and $\frac{1}{n}\|X\hat w - Xw^*\|_2^2 \leq \frac{c(J)^2 \lambda^2}{\kappa\rho(J)^2}$ with probability larger than $1 - P\big(\Omega^*(z) > \frac{\lambda\rho(J)\sqrt{n}}{2\sigma}\big)$ $z$ is a multivariate normal with covariance matrix $Q$ Concentration inequality ($z$ normal with covariance matrix $Q$): $\mathcal{T}$ = set of stable inseparable sets Then $P(\Omega^*(z) > t) \leq \sum_{A \in \mathcal{T}} 2^{|A|} \exp\big(-\,t^2 F(A)^2 / (2\, 1_A^\top Q_{AA} 1_A)\big)$

81 Outline Introduction: Sparse methods for machine learning Need for structured sparsity: Going beyond the l 1 -norm Classical approaches to structured sparsity Linear combinations of l q -norms Submodular functions Structured sparsity through submodular functions Relaxation of the penalization of supports Unified algorithms and analysis Extensions Shaping level sets through symmetric submodular functions l q -norm relaxation of combinatorial penalties

82 Symmetric submodular functions (Bach, 2010a) Let $F: 2^V \to \mathbb{R}$ be a symmetric submodular set-function Proposition: the Lovász extension $f(w)$ is the convex envelope of the function $w \mapsto \max_{\alpha \in \mathbb{R}} F(\{w \geq \alpha\})$ on the set $[0,1]^p + \mathbb{R}1_V = \{w \in \mathbb{R}^p,\ \max_{k \in V} w_k - \min_{k \in V} w_k \leq 1\}$.

83 Symmetric submodular functions (Bach, 2010a) Let $F: 2^V \to \mathbb{R}$ be a symmetric submodular set-function Proposition: the Lovász extension $f(w)$ is the convex envelope of the function $w \mapsto \max_{\alpha \in \mathbb{R}} F(\{w \geq \alpha\})$ on the set $[0,1]^p + \mathbb{R}1_V = \{w \in \mathbb{R}^p,\ \max_{k \in V} w_k - \min_{k \in V} w_k \leq 1\}$. (figures: partition of $\mathbb{R}^3$ by the orderings of $(w_1, w_2, w_3)$ and the extreme points $1_A/F(A)$, e.g. $(1,0,1)/F(\{1,3\})$, of the corresponding polyhedral unit ball)

84 Symmetric submodular functions - Examples From $\Omega(w)$ to $F(A)$: provides new insights into existing norms Cuts - total variation $F(A) = \sum_{k \in A,\, j \in V \setminus A} d(k, j)$ $\Rightarrow$ $f(w) = \sum_{k, j \in V} d(k, j)(w_k - w_j)_+$ NB: the graph may be directed
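A sketch for a chain graph: the cut function and its Lovász extension (the one-dimensional total variation), with the extension property $f(1_A) = F(A)$ checked on one set. Weights and graph are toy choices.

```python
import numpy as np

p = 5
d = np.zeros((p, p))
for k in range(p - 1):                      # undirected chain with unit weights both ways
    d[k, k + 1] = d[k + 1, k] = 1.0

def cut(A):
    """F(A) = sum of d(k, j) over k in A, j not in A."""
    A = set(A)
    return sum(d[k, j] for k in A for j in range(p) if j not in A)

def tv(w):
    """Lovász extension of the cut: f(w) = sum_{k,j} d(k, j) * max(w_k - w_j, 0)."""
    return sum(d[k, j] * max(w[k] - w[j], 0.0) for k in range(p) for j in range(p))

A = {1, 2}
print(cut(A), tv(np.isin(np.arange(p), list(A)).astype(float)))   # both equal 2.0
```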

85 Symmetric submodular functions - Examples From $F(A)$ to $\Omega(w)$: provides new sparsity-inducing norms $F(A) = g(\mathrm{Card}(A))$ $\Rightarrow$ priors on the size and number of clusters (figures: estimated weights for $\lambda = 0.03$, penalty $|A|(p - |A|)$; $\lambda = 3$, penalty $1_{|A| \in (0, p)}$; $\lambda = 0.4$, penalty $\max\{|A|, p - |A|\}$) Convex formulations for clustering (Hocking, Joulin, Bach, and Vert, 2011)

86 Symmetric submodular functions - Examples From $F(A)$ to $\Omega(w)$: provides new sparsity-inducing norms Regular functions (Boykov et al., 2001; Chambolle and Darbon, 2009) $F(A) = \min_{B \subseteq W}\ \sum_{k \in B,\, j \in W \setminus B} d(k, j) + \lambda|A \,\Delta\, B|$, with $V \subseteq W$ (figures: estimated weights)

87 $\ell_q$-relaxation of combinatorial penalties (Obozinski and Bach, 2011) Main result of Bach (2010): $f(|w|)$ is the convex envelope of $F(\mathrm{Supp}(w))$ on $[-1, 1]^p$ Problems: Limited to submodular functions Limited to $\ell_\infty$-relaxation: undesired artefacts (figure: extreme points $(0,1)/F(\{2\})$, $(1,1)/F(\{1,2\})$, $(1,0)/F(\{1\})$ of the unit ball)

88 From $\ell_\infty$ to $\ell_2$ Variational formulations for subquadratic norms (Bach et al., 2011) $\Omega(w) = \min_{\eta \in \mathbb{R}^p_+}\ \frac{1}{2}\sum_{j=1}^p \frac{w_j^2}{\eta_j} + \frac{1}{2}g(\eta) = \min_{\eta \in H}\ \Big(\sum_{j=1}^p \frac{w_j^2}{\eta_j}\Big)^{1/2}$ where $g$ is convex and homogeneous and $H = \{\eta,\ g(\eta) \leq 1\}$ Often used for computational reasons (Lasso, group Lasso) May also be used to define a norm (Micchelli et al., 2011)

89 From $\ell_\infty$ to $\ell_2$ Variational formulations for subquadratic norms (Bach et al., 2011) $\Omega(w) = \min_{\eta \in \mathbb{R}^p_+}\ \frac{1}{2}\sum_{j=1}^p \frac{w_j^2}{\eta_j} + \frac{1}{2}g(\eta) = \min_{\eta \in H}\ \Big(\sum_{j=1}^p \frac{w_j^2}{\eta_j}\Big)^{1/2}$ where $g$ is convex and homogeneous and $H = \{\eta,\ g(\eta) \leq 1\}$ Often used for computational reasons (Lasso, group Lasso) May also be used to define a norm (Micchelli et al., 2011) If $F$ is a nondecreasing submodular function with Lovász extension $f$, define $\Omega_2(w) = \min_{\eta \in \mathbb{R}^p_+}\ \frac{1}{2}\sum_{j=1}^p \frac{w_j^2}{\eta_j} + \frac{1}{2}f(\eta)$ Is it the convex relaxation of some natural function?

90 $\ell_q$-relaxation of submodular penalties (Obozinski and Bach, 2011) $F$ a nondecreasing submodular function with Lovász extension $f$ Define $\Omega_q(w) = \min_{\eta \in \mathbb{R}^p_+}\ \frac{1}{q}\sum_{i \in V} \frac{|w_i|^q}{\eta_i^{q-1}} + \frac{1}{r}f(\eta)$ with $\frac{1}{q} + \frac{1}{r} = 1$. Proposition 1: $\Omega_q$ is the convex envelope of $w \mapsto F(\mathrm{Supp}(w))^{1/r}\|w\|_q$ Proposition 2: $\Omega_q$ is the homogeneous convex envelope of $w \mapsto \frac{1}{r}F(\mathrm{Supp}(w)) + \frac{1}{q}\|w\|_q^q$ Jointly penalizing and regularizing Special cases $q = 1$, $q = 2$ and $q = \infty$

91 Some simple examples $F(A) = |A|$ $\Rightarrow$ $\Omega_q(w) = \|w\|_1$ $F(A) = 1_{\{A \neq \varnothing\}}$ $\Rightarrow$ $\Omega_q(w) = \|w\|_q$ If $\mathcal{H}$ is a partition of $V$: $F(A) = \sum_{B \in \mathcal{H}} 1_{\{A \cap B \neq \varnothing\}}$ $\Rightarrow$ $\Omega_q(w) = \sum_{B \in \mathcal{H}} \|w_B\|_q$ Recover results of Bach (2010) when $q = \infty$ and $F$ submodular However, when $\mathcal{H}$ is not a partition and $q < \infty$, $\Omega_q$ is not in general an $\ell_1/\ell_q$-norm! $F$ does not need to be submodular $\Rightarrow$ new norms

92 $\ell_q$-relaxation of combinatorial penalties (Obozinski and Bach, 2011) $F$ any strictly positive set-function (with potentially infinite values) Jointly penalizing and regularizing. Two formulations: homogeneous convex envelope of $w \mapsto F(\mathrm{Supp}(w)) + \|w\|_q^q$ convex envelope of $w \mapsto F(\mathrm{Supp}(w))^{1/r}\|w\|_q$ Proposition: these envelopes are equal to a constant times a norm $\Omega_q^F = \Omega_q$ defined through its dual norm; its dual norm is equal to $(\Omega_q)^*(s) = \max_{A \subseteq V} \frac{\|s_A\|_r}{F(A)^{1/r}}$, with $\frac{1}{q} + \frac{1}{r} = 1$ Three-line proof
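The dual norm above is easy to evaluate by brute force for small $p$, which gives a concrete handle on these relaxations (a sketch; the set-function below is an arbitrary positive example):

```python
import itertools
import numpy as np

def dual_norm(s, F, q=2.0):
    """(Omega_q)^*(s) = max over nonempty A of ||s_A||_r / F(A)^(1/r), with 1/q + 1/r = 1 (q > 1)."""
    r = q / (q - 1.0)
    p = len(s)
    best = 0.0
    for size in range(1, p + 1):
        for A in itertools.combinations(range(p), size):
            best = max(best, np.linalg.norm(s[list(A)], ord=r) / F(set(A)) ** (1.0 / r))
    return best

F = lambda A: np.sqrt(len(A))          # an example positive set-function (submodular here)
s = np.array([3.0, -1.0, 0.5, 2.0])
print(dual_norm(s, F, q=2.0))
```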

93 $\ell_q$-relaxation of combinatorial penalties Proof Denote $\Theta(w) = \|w\|_q F(\mathrm{Supp}(w))^{1/r}$, and compute its Fenchel conjugate: $\Theta^*(s) = \max_{w \in \mathbb{R}^p} w^\top s - \|w\|_q F(\mathrm{Supp}(w))^{1/r} = \max_{A \subseteq V}\ \max_{w_A \in (\mathbb{R}^*)^A} w_A^\top s_A - \|w_A\|_q F(A)^{1/r} = \max_{A \subseteq V}\ \iota_{\{\|s_A\|_r \leq F(A)^{1/r}\}} = \iota_{\{\Omega_q^*(s) \leq 1\}}$, where $\iota_{\{s \in S\}}$ is the indicator of the set $S$ Consequence: if $F$ is submodular and $q = +\infty$, $\Omega(w) = f(|w|)$

94 How tight is the relaxation? What information of $F$ is kept after the relaxation? When $F$ is submodular and $q = \infty$, the Lovász extension $f = \Omega_\infty$ is said to extend $F$ because $\Omega^F_\infty(1_A) = f(1_A) = F(A)$ In general we can still consider the function $G(A) = \Omega^F_\infty(1_A)$ Do we have $G(A) = F(A)$? How is $G$ related to $F$? What is the norm $\Omega^G$ which is associated with $G$?

95 Lower combinatorial envelope Given a function $F: 2^V \to \mathbb{R}$, define its lower combinatorial envelope as the function $G$ given by $G(A) = \max_{s \in P(F)} s(A)$ with $P(F) = \{s \in \mathbb{R}^p,\ \forall A \subseteq V,\ s(A) \leq F(A)\}$. Lemma 1 (Idempotence): $P(F) = P(G)$ $G$ is its own lower combinatorial envelope For all $q \geq 1$, $\Omega^F_q = \Omega^G_q$ Lemma 2 (Extension property): $\Omega^F_\infty(1_A) = \max_{(\Omega^F_\infty)^*(s) \leq 1} 1_A^\top s = \max_{s \in P(F)} 1_A^\top s = G(A)$

96 $\ell_q$-relaxation of combinatorial penalties A large latent group Lasso Goal: primal formulation for $(\Omega_q)^*(s) = \max_{A \subseteq V} \frac{\|s_A\|_r}{F(A)^{1/r}}$ Define $\mathcal{V} = \{v = (v^A)_{A \subseteq V} \in (\mathbb{R}^V)^{2^V}$ s.t. $\mathrm{Supp}(v^A) \subseteq A\}$ Proposition (Jacob et al., 2009; Obozinski and Bach, 2011): $\Omega_q(w) = \min_{v \in \mathcal{V}}\ \sum_{A \subseteq V} F(A)^{1/r}\|v^A\|_q$ s.t. $w = \sum_{A \subseteq V} v^A$

97 Block coding (Huang et al., 2009) $F: 2^V \to \mathbb{R}_+ \cup \{+\infty\}$ a positive set-function Define the weighted set cover problem $\hat G(A) = \min_{I}\ \sum_{i \in I} F(B_i)$ s.t. $A \subseteq \bigcup_{i \in I} B_i$ $= \min_{\delta \in \{0,1\}^{2^V}}\ \sum_{B \subseteq V} F(B)\delta_B$ s.t. $\sum_{B \subseteq V} \delta_B 1_B \geq 1_A$

98 A principled relaxation for block-coding Define the weighted set cover problem $\hat G(A) = \min_{\delta \in \{0,1\}^{2^V}}\ \sum_{B \subseteq V} F(B)\delta_B$ s.t. $\sum_{B \subseteq V} \delta_B 1_B \geq 1_A$ $G$ the lower combinatorial envelope of $F$ Proposition: $G$ is the value of the min fractional weighted set cover $G(A) = \min_{\delta \in [0,1]^{2^V}}\ \sum_{B \subseteq V} F(B)\delta_B$ s.t. $\sum_{B \subseteq V} \delta_B 1_B \geq 1_A$ and we have $G(A) \leq \hat G(A) \leq F(A)$

99 Examples Examples from submodular functions New norms for the overlapping group lasso (Obozinski and Bach, 2011) with fewer artefacts Some functions are not attainable Many set-functions lead to G(A) = A and the l 1 -norm Example: F(A) = A 1 A A Convexification is not always useful

100 Conclusion Structured sparsity for machine learning and statistics Many applications (image, audio, text, etc.) May be achieved through structured sparsity-inducing norms Link with submodular functions: unified analysis and algorithms

101 Conclusion Structured sparsity for machine learning and statistics Many applications (image, audio, text, etc.) May be achieved through structured sparsity-inducing norms Link with submodular functions: unified analysis and algorithms On-going work on structured sparsity Norm design beyond submodular functions Instance of general framework of Chandrasekaran et al. (2010) Links with greedy methods (Haupt and Nowak, 2006; Baraniuk et al., 2008; Huang et al., 2009) Links between norm Ω, support Supp(w), and design X (see, e.g., Grave, Obozinski, and Bach, 2011) Achieving log p = O(n) algorithmically (Bach, 2008)

102 References F. Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In Advances in Neural Information Processing Systems, F. Bach. Structured sparsity-inducing norms through submodular functions. In NIPS, F. Bach. Shaping level sets with submodular functions. Technical Report , HAL, 2010a. F. Bach. Convex analysis and optimization with submodular functions: a tutorial. Technical Report , HAL, 2010b. F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Technical Report , HAL, R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde. Model-based compressive sensing. Technical report, arxiv: , A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1): , P. Bickel, Y. Ritov, and A. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37(4): , D. Blei, A. Ng, and M. Jordan. Latent dirichlet allocation. The Journal of Machine Learning Research, 3: , January D. Blei, T.L. Griffiths, M.I. Jordan, and J.B. Tenenbaum. Hierarchical topic models and the nested Chinese restaurant process. Advances in neural information processing systems, 16:106, S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

103 Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Trans. PAMI, 23(11): , V. Cevher, M. F. Duarte, C. Hegde, and R. G. Baraniuk. Sparse signal recovery using markov random fields. In Advances in Neural Information Processing Systems, A. Chambolle. Total variation minimization and a class of binary MRF models. In Energy Minimization Methods in Computer Vision and Pattern Recognition, pages Springer, A. Chambolle and J. Darbon. On total variation minimization and surface evolution using parametric maximum flows. International Journal of Computer Vision, 84(3): , V. Chandrasekaran, B. Recht, P.A. Parrilo, and A.S. Willsky. The convex geometry of linear inverse problems. Arxiv preprint arxiv: , S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1): , M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 15(12): , U. Feige, V.S. Mirrokni, and J. Vondrak. Maximizing non-monotone submodular functions. In Proc. Symposium on Foundations of Computer Science, pages IEEE Computer Society, S. Fujishige. Submodular Functions and Optimization. Elsevier, S. Fujishige and S. Isotani. A submodular function minimization algorithm based on the minimum-norm base. Pacific Journal of Optimization, 7:3 17, A. Gramfort and M. Kowalski. Improving M/EEG source localization with an inter-condition sparse prior. In IEEE International Symposium on Biomedical Imaging, 2009.

104 E. Grave, G. Obozinski, and F. Bach. Trace lasso: a trace norm regularization for correlated designs. Arxiv preprint arxiv: , H. Groenevelt. Two algorithms for maximizing a separable concave function over a polymatroid feasible region. European Journal of Operational Research, 54(2): , J. Haupt and R. Nowak. Signal reconstruction from noisy random projections. IEEE Transactions on Information Theory, 52(9): , T. Hocking, A. Joulin, F. Bach, and J.-P. Vert. Clusterpath: an algorithm for clustering using convex fusion penalties. In Proc. ICML, J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. In Proceedings of the 26th International Conference on Machine Learning (ICML), S. Iwata, L. Fleischer, and S. Fujishige. A combinatorial strongly polynomial algorithm for minimizing submodular functions. Journal of the ACM, 48(4): , L. Jacob, G. Obozinski, and J.-P. Vert. Group Lasso with overlaps and graph Lasso. In Proceedings of the 26th International Conference on Machine Learning (ICML), R. Jenatton, J.Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. Technical report, arxiv: , 2009a. R. Jenatton, G. Obozinski, and F. Bach. Structured sparse principal component analysis. Technical report, arxiv: , 2009b. R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical dictionary learning. In Submitted to ICML, R. Jenatton, A. Gramfort, V. Michel, G. Obozinski, E. Eger, F. Bach, and B. Thirion. Multi-scale

105 mining of fmri data with hierarchical structured sparsity. Technical report, Preprint arxiv: , In submission to SIAM Journal on Imaging Sciences. K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun. Learning invariant features through topographic filter maps. In Proceedings of CVPR, S. Kim and E. P. Xing. Tree-guided group Lasso for multi-task regression with structured sparsity. In Proceedings of the International Conference on Machine Learning (ICML), A. Krause and C. Guestrin. Near-optimal nonmyopic value of information in graphical models. In Proc. UAI, L. Lovász. Submodular functions and convexity. Mathematical programming: the state of the art, Bonn, pages , J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. Technical report, arxiv: , 2009a. J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Non-local sparse models for image restoration. In Computer Vision, 2009 IEEE 12th International Conference on, pages IEEE, 2009b. J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Supervised dictionary learning. Advances in Neural Information Processing Systems (NIPS), 21, 2009c. J. Mairal, R. Jenatton, G. Obozinski, and F. Bach. Network flow algorithms for structured sparsity. In NIPS, C.A. Micchelli, J.M. Morales, and M. Pontil. Regularizers for structured sparsity. Arxiv preprint arxiv: , 2011.

106 S. Negahban and M. J. Wainwright. Joint support recovery under high-dimensional scaling: Benefits and perils of l 1 -l -regularization. In Adv. NIPS, S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers G.L. Nemhauser, L.A. Wolsey, and M.L. Fisher. An analysis of approximations for maximizing submodular set functions i. Mathematical Programming, 14(1): , Y. Nesterov. Gradient methods for minimizing composite objective function. Center for Operations Research and Econometrics (CORE), Catholic University of Louvain, Tech. Rep, 76, G. Obozinski and F. Bach. Convex relaxation of combinatorial penalties. Technical report, HAL, B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37: , J.B. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Mathematical Programming, 118(2): , N. Rao, B. Recht, and R. Nowak. Tight measurement bounds for exact recovery of structured sparse signals. Arxiv preprint arxiv: , F. Rapaport, E. Barillot, and J.-P. Vert. Classification of arraycgh data using fused SVM. Bioinformatics, 24(13):i375 i382, Jul A. Schrijver. A combinatorial algorithm minimizing submodular functions in strongly polynomial time. Journal of Combinatorial Theory, Series B, 80(2): , R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of The Royal Statistical Society Series B, 58(1): , 1996.

107 G. Varoquaux, R. Jenatton, A. Gramfort, G. Obozinski, B. Thirion, and F. Bach. Sparse structured dictionary learning for brain resting-state activity modeling. In NIPS Workshop on Practical Applications of Sparse Modeling: Open Issues and New Directions, P. Wolfe. Finding the nearest point in a polytope. Math. Progr., 11(1): , M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of The Royal Statistical Society Series B, 68(1):49 67, P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7: , P. Zhao, G. Rocha, and B. Yu. Grouped and hierarchical model selection through composite absolute penalties. Annals of Statistics, 37(6A): , 2009.


More information

Maximization of Submodular Set Functions

Maximization of Submodular Set Functions Northeastern University Department of Electrical and Computer Engineering Maximization of Submodular Set Functions Biomedical Signal Processing, Imaging, Reasoning, and Learning BSPIRAL) Group Author:

More information

25 : Graphical induced structured input/output models

25 : Graphical induced structured input/output models 10-708: Probabilistic Graphical Models 10-708, Spring 2016 25 : Graphical induced structured input/output models Lecturer: Eric P. Xing Scribes: Raied Aljadaany, Shi Zong, Chenchen Zhu Disclaimer: A large

More information

MIT 9.520/6.860, Fall 2018 Statistical Learning Theory and Applications. Class 08: Sparsity Based Regularization. Lorenzo Rosasco

MIT 9.520/6.860, Fall 2018 Statistical Learning Theory and Applications. Class 08: Sparsity Based Regularization. Lorenzo Rosasco MIT 9.520/6.860, Fall 2018 Statistical Learning Theory and Applications Class 08: Sparsity Based Regularization Lorenzo Rosasco Learning algorithms so far ERM + explicit l 2 penalty 1 min w R d n n l(y

More information

High-dimensional graphical model selection: Practical and information-theoretic limits

High-dimensional graphical model selection: Practical and information-theoretic limits 1 High-dimensional graphical model selection: Practical and information-theoretic limits Martin Wainwright Departments of Statistics, and EECS UC Berkeley, California, USA Based on joint work with: John

More information

A Spectral Regularization Framework for Multi-Task Structure Learning

A Spectral Regularization Framework for Multi-Task Structure Learning A Spectral Regularization Framework for Multi-Task Structure Learning Massimiliano Pontil Department of Computer Science University College London (Joint work with A. Argyriou, T. Evgeniou, C.A. Micchelli,

More information

Efficient Methods for Overlapping Group Lasso

Efficient Methods for Overlapping Group Lasso Efficient Methods for Overlapping Group Lasso Lei Yuan Arizona State University Tempe, AZ, 85287 Lei.Yuan@asu.edu Jun Liu Arizona State University Tempe, AZ, 85287 j.liu@asu.edu Jieping Ye Arizona State

More information

Structured Sparse Principal Component Analysis

Structured Sparse Principal Component Analysis Structured Sparse Principal Component Analysis Rodolphe Jenatton, Guillaume Obozinski, Francis Bach To cite this version: Rodolphe Jenatton, Guillaume Obozinski, Francis Bach. Structured Sparse Principal

More information

2.3. Clustering or vector quantization 57

2.3. Clustering or vector quantization 57 Multivariate Statistics non-negative matrix factorisation and sparse dictionary learning The PCA decomposition is by construction optimal solution to argmin A R n q,h R q p X AH 2 2 under constraint :

More information

Fast Hard Thresholding with Nesterov s Gradient Method

Fast Hard Thresholding with Nesterov s Gradient Method Fast Hard Thresholding with Nesterov s Gradient Method Volkan Cevher Idiap Research Institute Ecole Polytechnique Federale de ausanne volkan.cevher@epfl.ch Sina Jafarpour Department of Computer Science

More information

Supervised Feature Selection in Graphs with Path Coding Penalties and Network Flows

Supervised Feature Selection in Graphs with Path Coding Penalties and Network Flows Journal of Machine Learning Research 4 (03) 449-485 Submitted 4/; Revised 3/3; Published 8/3 Supervised Feature Selection in Graphs with Path Coding Penalties and Network Flows Julien Mairal Bin Yu Department

More information

Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning

Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning Francis Bach INRIA - Willow Project, École Normale Supérieure 45, rue d Ulm, 75230 Paris, France francis.bach@mines.org Abstract

More information

On Optimal Frame Conditioners

On Optimal Frame Conditioners On Optimal Frame Conditioners Chae A. Clark Department of Mathematics University of Maryland, College Park Email: cclark18@math.umd.edu Kasso A. Okoudjou Department of Mathematics University of Maryland,

More information

Group Lasso with Overlaps: the Latent Group Lasso approach

Group Lasso with Overlaps: the Latent Group Lasso approach Group Lasso with Overlaps: the Latent Group Lasso approach Guillaume Obozinski, Laurent Jacob, Jean-Philippe Vert To cite this version: Guillaume Obozinski, Laurent Jacob, Jean-Philippe Vert. Group Lasso

More information

Sparse and Robust Optimization and Applications

Sparse and Robust Optimization and Applications Sparse and and Statistical Learning Workshop Les Houches, 2013 Robust Laurent El Ghaoui with Mert Pilanci, Anh Pham EECS Dept., UC Berkeley January 7, 2013 1 / 36 Outline Sparse Sparse Sparse Probability

More information

Accelerated Dual Gradient-Based Methods for Total Variation Image Denoising/Deblurring Problems (and other Inverse Problems)

Accelerated Dual Gradient-Based Methods for Total Variation Image Denoising/Deblurring Problems (and other Inverse Problems) Accelerated Dual Gradient-Based Methods for Total Variation Image Denoising/Deblurring Problems (and other Inverse Problems) Donghwan Kim and Jeffrey A. Fessler EECS Department, University of Michigan

More information

Sparse PCA with applications in finance

Sparse PCA with applications in finance Sparse PCA with applications in finance A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley Available online at www.princeton.edu/~aspremon 1 Introduction

More information

Lecture Notes 10: Matrix Factorization

Lecture Notes 10: Matrix Factorization Optimization-based data analysis Fall 207 Lecture Notes 0: Matrix Factorization Low-rank models. Rank- model Consider the problem of modeling a quantity y[i, j] that depends on two indices i and j. To

More information

Spectral k-support Norm Regularization

Spectral k-support Norm Regularization Spectral k-support Norm Regularization Andrew McDonald Department of Computer Science, UCL (Joint work with Massimiliano Pontil and Dimitris Stamos) 25 March, 2015 1 / 19 Problem: Matrix Completion Goal:

More information

An Introduction to Submodular Functions and Optimization. Maurice Queyranne University of British Columbia, and IMA Visitor (Fall 2002)

An Introduction to Submodular Functions and Optimization. Maurice Queyranne University of British Columbia, and IMA Visitor (Fall 2002) An Introduction to Submodular Functions and Optimization Maurice Queyranne University of British Columbia, and IMA Visitor (Fall 2002) IMA, November 4, 2002 1. Submodular set functions Generalization to

More information

Incremental and Stochastic Majorization-Minimization Algorithms for Large-Scale Machine Learning

Incremental and Stochastic Majorization-Minimization Algorithms for Large-Scale Machine Learning Incremental and Stochastic Majorization-Minimization Algorithms for Large-Scale Machine Learning Julien Mairal Inria, LEAR Team, Grenoble Journées MAS, Toulouse Julien Mairal Incremental and Stochastic

More information

Stochastic gradient descent and robustness to ill-conditioning

Stochastic gradient descent and robustness to ill-conditioning Stochastic gradient descent and robustness to ill-conditioning Francis Bach INRIA - Ecole Normale Supérieure, Paris, France ÉCOLE NORMALE SUPÉRIEURE Joint work with Aymeric Dieuleveut, Nicolas Flammarion,

More information

Properties of optimizations used in penalized Gaussian likelihood inverse covariance matrix estimation

Properties of optimizations used in penalized Gaussian likelihood inverse covariance matrix estimation Properties of optimizations used in penalized Gaussian likelihood inverse covariance matrix estimation Adam J. Rothman School of Statistics University of Minnesota October 8, 2014, joint work with Liliana

More information

Discriminative Models

Discriminative Models No.5 Discriminative Models Hui Jiang Department of Electrical Engineering and Computer Science Lassonde School of Engineering York University, Toronto, Canada Outline Generative vs. Discriminative models

More information

Sparse Gaussian conditional random fields

Sparse Gaussian conditional random fields Sparse Gaussian conditional random fields Matt Wytock, J. ico Kolter School of Computer Science Carnegie Mellon University Pittsburgh, PA 53 {mwytock, zkolter}@cs.cmu.edu Abstract We propose sparse Gaussian

More information

A direct formulation for sparse PCA using semidefinite programming

A direct formulation for sparse PCA using semidefinite programming A direct formulation for sparse PCA using semidefinite programming A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley Available online at www.princeton.edu/~aspremon

More information

Group lasso for genomic data

Group lasso for genomic data Group lasso for genomic data Jean-Philippe Vert Mines ParisTech / Curie Institute / Inserm Machine learning: Theory and Computation workshop, IMA, Minneapolis, March 26-3, 22 J.P Vert (ParisTech) Group

More information

A direct formulation for sparse PCA using semidefinite programming

A direct formulation for sparse PCA using semidefinite programming A direct formulation for sparse PCA using semidefinite programming A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley A. d Aspremont, INFORMS, Denver,

More information

CSC 576: Variants of Sparse Learning

CSC 576: Variants of Sparse Learning CSC 576: Variants of Sparse Learning Ji Liu Department of Computer Science, University of Rochester October 27, 205 Introduction Our previous note basically suggests using l norm to enforce sparsity in

More information

High-dimensional covariance estimation based on Gaussian graphical models

High-dimensional covariance estimation based on Gaussian graphical models High-dimensional covariance estimation based on Gaussian graphical models Shuheng Zhou Department of Statistics, The University of Michigan, Ann Arbor IMA workshop on High Dimensional Phenomena Sept. 26,

More information

arxiv: v1 [cs.lg] 20 Feb 2014

arxiv: v1 [cs.lg] 20 Feb 2014 GROUP-SPARSE MATRI RECOVERY iangrong Zeng and Mário A. T. Figueiredo Instituto de Telecomunicações, Instituto Superior Técnico, Lisboa, Portugal ariv:1402.5077v1 [cs.lg] 20 Feb 2014 ABSTRACT We apply the

More information

An Efficient Proximal Gradient Method for General Structured Sparse Learning

An Efficient Proximal Gradient Method for General Structured Sparse Learning Journal of Machine Learning Research 11 (2010) Submitted 11/2010; Published An Efficient Proximal Gradient Method for General Structured Sparse Learning Xi Chen Qihang Lin Seyoung Kim Jaime G. Carbonell

More information

Convergence Rate of Expectation-Maximization

Convergence Rate of Expectation-Maximization Convergence Rate of Expectation-Maximiation Raunak Kumar University of British Columbia Mark Schmidt University of British Columbia Abstract raunakkumar17@outlookcom schmidtm@csubcca Expectation-maximiation

More information

Optimization methods

Optimization methods Optimization methods Optimization-Based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_spring16 Carlos Fernandez-Granda /8/016 Introduction Aim: Overview of optimization methods that Tend to

More information

(k, q)-trace norm for sparse matrix factorization

(k, q)-trace norm for sparse matrix factorization (k, q)-trace norm for sparse matrix factorization Emile Richard Department of Electrical Engineering Stanford University emileric@stanford.edu Guillaume Obozinski Imagine Ecole des Ponts ParisTech guillaume.obozinski@imagine.enpc.fr

More information

Proximal Methods for Hierarchical Sparse Coding

Proximal Methods for Hierarchical Sparse Coding Proximal Methods for Hierarchical Sparse Coding Rodolphe Jenatton, Julien Mairal, Guillaume Obozinski, Francis Bach To cite this version: Rodolphe Jenatton, Julien Mairal, Guillaume Obozinski, Francis

More information

Cheng Soon Ong & Christian Walder. Canberra February June 2018

Cheng Soon Ong & Christian Walder. Canberra February June 2018 Cheng Soon Ong & Christian Walder Research Group and College of Engineering and Computer Science Canberra February June 2018 Outlines Overview Introduction Linear Algebra Probability Linear Regression

More information

Nikhil Rao, Miroslav Dudík, Zaid Harchaoui

Nikhil Rao, Miroslav Dudík, Zaid Harchaoui THE GROUP k-support NORM FOR LEARNING WITH STRUCTURED SPARSITY Nikhil Rao, Miroslav Dudík, Zaid Harchaoui Technicolor R&I, Microsoft Research, University of Washington ABSTRACT Several high-dimensional

More information

Online Dictionary Learning with Group Structure Inducing Norms

Online Dictionary Learning with Group Structure Inducing Norms Online Dictionary Learning with Group Structure Inducing Norms Zoltán Szabó 1, Barnabás Póczos 2, András Lőrincz 1 1 Eötvös Loránd University, Budapest, Hungary 2 Carnegie Mellon University, Pittsburgh,

More information