Intrinsic Volumes of Convex Cones: Theory and Applications


Intrinsic Volumes of Convex Cones: Theory and Applications
Martin Lotz, School of Mathematics, The University of Manchester (Manchester Numerical Analysis)
with the collaboration of Dennis Amelunxen, Michael B. McCoy, and Joel A. Tropp
Chicago, July 11, 2014

Outline
- Problems involving cones
- Some conic integral geometry
- Concentration

Conic problems

Problem: find a structured solution x_0 of an m × d system (m < d) Ax = b by minimizing a convex regularizer:

    minimize f(x) subject to Ax = b.   (*)

Examples include:
- x_0 sparse: f(x) = ‖x‖_1;
- X_0 a low-rank matrix: f(X) the nuclear norm;
- models with simultaneous structures (talk by M. Fazel), atomic norms (talk by B. Recht).

Example: ℓ_1 minimization

Let x_0 be s-sparse and b = Ax_0 for random A ∈ R^{m×d} (s < m < d):

    minimize ‖x‖_1 subject to Ax = b.

[Plots: empirical probability of success against the number of equations m, with s = 50 fixed and m ranging over 25, 50, 75, ..., 200; the success probability jumps sharply from 0 to 1 as m grows.]
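For readers who want to reproduce this kind of experiment, here is a minimal Monte Carlo sketch (not from the talk; it assumes a Gaussian A and unit-magnitude nonzeros in x_0, and solves the ℓ_1 problem by the standard linear-programming split x = u − v with scipy):

```python
# Monte Carlo sketch of the l1 phase transition (illustrative parameters).
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, b):
    """Solve min ||x||_1 s.t. Ax = b via the LP split x = u - v, u, v >= 0."""
    m, d = A.shape
    c = np.ones(2 * d)                      # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])               # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * d))
    return res.x[:d] - res.x[d:]

rng = np.random.default_rng(0)
d, s, trials = 100, 10, 20
for m in range(20, 81, 10):
    successes = 0
    for _ in range(trials):
        x0 = np.zeros(d)
        x0[rng.choice(d, s, replace=False)] = rng.choice([-1.0, 1.0], s)
        A = rng.standard_normal((m, d))
        xhat = l1_recover(A, A @ x0)
        successes += np.allclose(xhat, x0, atol=1e-5)
    print(f"m = {m:3d}: empirical success probability {successes / trials:.2f}")
```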

Phase transitions

[Plots: empirical phase transitions for ℓ_1 and nuclear norm minimization.]

Conic problems

Problem: find a structured solution x_0 of an m × d system (m < d) Ax = b by minimizing a convex regularizer:

    minimize f(x) subject to Ax = b.   (*)

(*) has x_0 as its unique solution if and only if the optimality condition

    ker A ∩ D(f, x_0) = {0}

is satisfied, where D(f, x_0) is the convex descent cone of f at x_0:

    D(f, x_0) := ⋃_{τ>0} { y ∈ R^d : f(x_0 + τy) ≤ f(x_0) }.

Success of ℓ_1-minimization

    minimize ‖x‖_1 subject to Ax = b

[Figure: the affine space {Ax = b} through x_0 and the ℓ_1-ball { ‖x‖_1 ≤ ‖x_0‖_1 }; recovery succeeds when the affine space touches the ball only at x_0, and fails when it cuts through the descent cone.]

ℓ_1 minimization succeeds at finding a sparse vector x_0 if and only if the kernel of A misses the cone of descent directions of ‖·‖_1 at x_0.

Conic problems

Problem: reconstruct two signals x_0, y_0 from the observation z_0 = x_0 + Qy_0, where Q ∈ O(d), by solving

    minimize f(x) subject to g(y) ≤ g(y_0) and z_0 = x + Qy   (*)

for suitable convex functions f and g. Examples:
- both signals sparse;
- x_0 sparse (corruption), y_0 ∈ {±1}^d (message);
- x_0 a low-rank matrix, y_0 sparse (corruption).

(*) uniquely recovers x_0, y_0 if and only if

    D(f, x_0) ∩ QD(g, y_0) = {0}

(McCoy-Tropp (2012, 2013); Mike's upcoming talk).

Conic problems

Projections of polytopes: Let A: R^d → R^m, m < d, P ⊂ R^d a polytope, and F ⊂ P a face. Then AF is a face of AP if and only if

    ker A ∩ T_F(P) = {0},

where T_F(P) is the tangent cone to P at F (Donoho-Tanner (2006-)).

Compressive separation: Given disjoint convex sets S_1, S_2 ⊂ R^d, then AS_1 ∩ AS_2 = ∅ if and only if

    ker A ∩ cone(S_1 − S_2) = {0}

(Bandeira-Mixon-Recht (2014)).

The mathematical problem

The problems mentioned motivate the following question: given closed convex cones C, D ⊂ R^d and a random orthogonal transformation Q, what is the probability that they intersect nontrivially,

    P{ C ∩ QD ≠ {0} }?   (*)

Bounds on the intersection probability of a cone with a linear subspace follow from Gordon's escape through the mesh argument; exact formulas for the probability of intersection are based on the kinematic formula from (spherical) integral geometry, the topic of this talk.

Outline
- Problems involving cones
- Some conic integral geometry
- Concentration

The kinematic formula

The probability that a randomly rotated cone intersects another (not both linear subspaces) is given in terms of a discrete probability distribution, the spherical intrinsic volumes v_0(C), ..., v_d(C):

    P{ C ∩ QD ≠ {0} } = 2 Σ_{k odd} Σ_{i+j=d+k} v_i(C) v_j(D).

For the case where D = L is a linear subspace of codimension m, v_i(L) = 1 if i = d − m and v_i(L) = 0 otherwise, so

    P{ C ∩ QL ≠ {0} } = 2 Σ_{k odd} v_{m+k}(C).   (*)

(*) is essentially the tail of a discrete probability distribution.
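As a sanity check (not part of the slides), the subspace-crossing formula can be evaluated exactly for the non-negative orthant, whose intrinsic volumes v_k = (d choose k) 2^{−d} appear later in the talk; the probability transitions from near 1 to near 0 around codimension m = d/2:

```python
# Sketch: evaluate P{C ∩ QL ≠ {0}} = 2 * sum_{k odd} v_{m+k}(C) for the orthant.
from math import comb

def crossing_probability(d, m):
    v = [comb(d, k) / 2**d for k in range(d + 1)]   # intrinsic volumes of the orthant
    return 2 * sum(v[m + k] for k in range(1, d - m + 1, 2))

d = 64
for m in [8, 16, 24, 32, 40, 48, 56]:
    print(f"codim m = {m:2d}: P = {crossing_probability(d, m):.4f}")
# The transition happens near m = delta(C) = d/2.
```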

Spherical intrinsic volumes

[Figure: a polyhedral cone C in the plane, with regions labelled v_0(C), v_1(C), v_2(C).]

Let C ⊂ R^d be a polyhedral cone and F_k(C) the set of its k-dimensional faces. The k-th (spherical) intrinsic volume of C is defined as

    v_k(C) = Σ_{F ∈ F_k(C)} P{ Π_C(g) ∈ relint(F) },

where g is a standard Gaussian vector and Π_C is the Euclidean projection onto C. For non-polyhedral cones the definition extends by approximation or via the Steiner formula. Intrinsic volumes have a long history in geometry and have also appeared in statistics (as the weights of χ̄² distributions).
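A quick illustration of the definition (my addition, using the orthant example from the next slide): for C = R^d_{≥0} the projection of a Gaussian is max(g, 0), and the face containing the projection is spanned by the positive coordinates, so a histogram of face dimensions estimates the v_k:

```python
# Sketch: Monte Carlo the intrinsic volumes of the orthant C = R^d_{>=0}.
import numpy as np
from math import comb

rng = np.random.default_rng(1)
d, n = 8, 200_000
g = rng.standard_normal((n, d))
face_dims = (g > 0).sum(axis=1)              # dim of the face hit by the projection
for k in range(d + 1):
    est = np.mean(face_dims == k)
    print(f"v_{k}: Monte Carlo {est:.4f}, exact {comb(d, k) / 2**d:.4f}")
```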

Spherical intrinsic volumes: examples

- Linear subspace L: v_k(L) = 1 if dim L = k, v_k(L) = 0 otherwise.
- Orthant R^d_{≥0}: v_k(R^d_{≥0}) = (d choose k) 2^{−d}.
- Second-order cones:

      v_k(Circ(d, α)) = (1/2) ((d−2)/2 choose (k−1)/2) sin^{k−1}(α) cos^{d−k−1}(α).

- Asymptotics for tangent cones at faces of the simplex and the ℓ_1-ball (Vershik-Sporyshev (1992), Donoho (2006)).
- Integral representations for the semidefinite cone (Amelunxen-Bürgisser (2012)).
- Combinatorial expressions for regions of hyperplane arrangements (Klivans-Swartz (2011)).

Spherical intrinsic volumes: examples

[Plots of the intrinsic volume profiles k ↦ v_k(C) for: a subspace L with v_k(L) concentrated at k = dim L; the orthant R^d_{≥0}; the circular cone Circ(d, π/4); the cone {x : x_1 ≥ ... ≥ x_d}.]

Concentration

[Plot: the intrinsic volume profile of Circ(d, π/4).]

Associate to a cone C the discrete random variable X_C with P{X_C = k} = v_k(C), and define the statistical dimension as the average

    δ(C) = E[X_C] = Σ_{k=0}^d k v_k(C).

In the examples it appears that X_C concentrates around δ(C).

Concentration

Theorem [ALMT14]. Let C be a convex cone, and X_C a discrete random variable with distribution P{X_C = k} = v_k(C). Let δ(C) = E[X_C]. Then

    P{ |X_C − δ(C)| > λ } ≤ 4 exp( −(λ²/8) / (ω(C) + λ) )   for λ ≥ 0,

where ω(C) := min{δ(C), d − δ(C)}. Improved bounds by McCoy-Tropp (Discr. Comp. Geom. 2014).
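For a concrete check (my addition, not from the talk): for the orthant, X_C is Binomial(d, 1/2), δ(C) = ω(C) = d/2, and the exact tails can be compared against the bound:

```python
# Sketch: compare the [ALMT14] tail bound with the exact tails of X_C for the
# orthant, where X_C ~ Binomial(d, 1/2).
import numpy as np
from scipy.stats import binom

d = 100
delta = omega = d / 2
for lam in [5, 10, 20, 30]:
    # P{|X_C - delta| > lam} = P(X > delta + lam) + P(X < delta - lam)
    exact = binom.sf(delta + lam, d, 0.5) + binom.cdf(delta - lam - 1, d, 0.5)
    bound = 4 * np.exp(-(lam**2 / 8) / (omega + lam))
    print(f"lambda = {lam:2d}: exact = {exact:.2e}, bound = {bound:.2e}")
```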

Approximate kinematic formula

Applying the concentration result to the kinematic formula

    P{ C ∩ QD ≠ {0} } = 2 Σ_{k odd} Σ_{i+j=d+k} v_i(C) v_j(D)

gives rise to:

Theorem [ALMT14]. Fix a tolerance η ∈ (0, 1). Assume one of C, D is not a subspace. Then

    δ(C) + δ(D) ≤ d − a_η √d  ⟹  P{ C ∩ QD = {0} } ≥ 1 − η;
    δ(C) + δ(D) ≥ d + a_η √d  ⟹  P{ C ∩ QD = {0} } ≤ η,

where a_η := 4 √(log(4/η)) (for example, a_{0.01} < 10 and a_{0.001} < 12).

Interpretation: in high dimensions, convex cones behave like linear subspaces of dimension δ(C), δ(D).

Statistical dimension: basic properties

- Orthogonal invariance: δ(QC) = δ(C) for each Q ∈ O(d).
- Subspaces: for a subspace L ⊂ R^d, δ(L) = dim(L).
- Totality: δ(C) + δ(C°) = d, where C° is the polar cone. This generalises dim(L) + dim(L^⊥) = d for linear L. [Figure: a cone C and its polar C°.]
- Direct products: for closed convex cones C, K, δ(C × K) = δ(C) + δ(K). In particular, δ is invariant under embedding.
- Monotonicity: C ⊂ K implies δ(C) ≤ δ(K).

Statistical dimension: basic properties

Expected squared Gaussian projection:

    δ(C) = E[ ‖Π_C(g)‖² ],   g standard Gaussian;

this quantity previously appeared in various contexts, among others as a proxy for the Gaussian width. Spherical formulation:

    δ(C) = d · E[ ‖Π_C(θ)‖² ],   θ ~ Uniform(S^{d−1}).

Relation to the Gaussian width w(C) = E[ sup_{x ∈ C ∩ S^{d−1}} ⟨x, g⟩ ]:

    w(C)² ≤ δ(C) ≤ w(C)² + 1.

The Gaussian width has played a role in the analysis of recovery via Gordon's comparison inequality (Rudelson-Vershynin (2008), Stojnic (2009-), Oymak-Hassibi (2010-), Chandrasekaran et al. (2012)).

Examples

- Linear subspaces: δ(L) = dim L.
- Non-negative orthant: δ(R^d_{≥0}) = d/2.
- Self-dual cones: δ(C) + δ(C°) = d gives δ(C) = d/2 for any self-dual cone (for example, the positive semidefinite matrices).
- Second-order (ice cream) cones of angle α: Circ(d, α) := { x ∈ R^d : x_1 ≥ ‖x‖ cos(α) }. Then δ(Circ(d, α)) ≈ d sin²(α).
- The cone C_A = {x : x_1 ≥ ... ≥ x_d}: δ(C_A) = Σ_{k=1}^d 1/k ≈ log(d).
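These values are easy to confirm by Monte Carlo from δ(C) = E[‖Π_C(g)‖²] (a sketch of mine, not from the talk; the closed-form second-order-cone projection used below is the standard one):

```python
# Sketch: Monte Carlo delta(C) = E ||Pi_C(g)||^2 for two of the examples.
import numpy as np

def proj_orthant(g):
    return np.maximum(g, 0.0)

def proj_circ(g, alpha):
    """Project onto Circ(d, alpha) = {x : ||x_{2:}|| <= x_1 tan(alpha)}."""
    t, z = g[0], g[1:]
    gam = np.tan(alpha)
    nz = np.linalg.norm(z)
    if nz <= gam * t:                      # already in the cone
        return g
    if nz <= -t / gam:                     # in the polar cone: project to the apex
        return np.zeros_like(g)
    coef = (t + gam * nz) / (1 + gam**2)   # project onto the boundary ray
    return np.concatenate(([coef], coef * gam * z / nz))

rng = np.random.default_rng(2)
d, n, alpha = 50, 20_000, np.pi / 6
G = rng.standard_normal((n, d))
print("orthant:", np.mean([np.sum(proj_orthant(g)**2) for g in G]), "vs d/2 =", d / 2)
print("circular:", np.mean([np.sum(proj_circ(g, alpha)**2) for g in G]),
      "vs d sin^2(alpha) =", d * np.sin(alpha)**2)
```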

Computing the statistical dimension

In some cases the statistical dimension of a convex cone can be computed exactly from the intrinsic volumes:
- spherical cones;
- the descent cone of f = ℓ_∞;
- regions of hyperplane arrangements with high symmetry.

For descent cones of convex regularizers, asymptotic expressions follow from a blueprint developed by Stojnic (2008) and refined since:
- x_0 s-sparse, f = ‖·‖_1: an asymptotic formula for δ(D(‖·‖_1, x_0)) follows from Stojnic's work (see the sketch below);
- X_0 a rank-r matrix, f = ‖·‖_{S_1} (nuclear norm): an asymptotic formula based on the Marčenko-Pastur characterisation of the empirical eigenvalue distribution of Wishart matrices.
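For illustration (my sketch, following the Stojnic/ALMT blueprint rather than reproducing the talk's derivation): the standard upper bound δ(D(‖·‖_1, x_0)) ≤ inf_{τ≥0} E[dist²(g, τ ∂‖x_0‖_1)] reduces to a one-dimensional minimization, which is asymptotically tight:

```python
# Sketch: upper bound for the statistical dimension of the l1 descent cone at
# an s-sparse x0:
#   delta <= inf_{tau >= 0} [ s (1 + tau^2) + (d - s) * E(|g| - tau)_+^2 ],
# where E(|g| - tau)_+^2 = 2 [ (1 + tau^2) Phi(-tau) - tau phi(tau) ].
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def delta_l1(s, d):
    def objective(tau):
        tail = 2 * ((1 + tau**2) * norm.cdf(-tau) - tau * norm.pdf(tau))
        return s * (1 + tau**2) + (d - s) * tail
    res = minimize_scalar(objective, bounds=(0, 20), method="bounded")
    return res.fun

d = 1000
for s in [10, 50, 100, 200]:
    print(f"s = {s:3d}: delta(D(l1, x0)) ~ {delta_l1(s, d):.1f} of d = {d}")
```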

Computing the statistical dimension: an example

C_A = {x : x_1 ≥ x_2 ≥ ... ≥ x_d}: the normal cone at a vertex of the permutahedron, suggested as a convex regularizer for the vectors-from-lists problem (Chandrasekaran et al. (2012)). Using combinatorics (Klivans-Swartz (2011), Stanley), the generating polynomial of the intrinsic volumes is

    v(t) = Σ_{k=0}^d v_k(C_A) t^k = (1/d!) · t (t + 1) ··· (t + d − 1).

Statistical dimension:

    δ(C_A) = (d/dt) v(t) |_{t=1} = Σ_{k=1}^d 1/k ≈ log(d).
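This computation is easy to carry out numerically (a sketch of mine, not from the slides): expand the product to read off the intrinsic volumes and evaluate the derivative at t = 1, which should match the harmonic number H_d.

```python
# Sketch: intrinsic volumes of C_A = {x_1 >= ... >= x_d} from the generating
# polynomial v(t) = t (t+1) ... (t+d-1) / d!, and delta(C_A) = v'(1) = H_d.
from math import factorial
from numpy.polynomial import Polynomial

d = 10
p = Polynomial([0, 1])              # p(t) = t
for j in range(1, d):
    p = p * Polynomial([j, 1])      # multiply by (t + j)
p = p / factorial(d)
vk = p.coef                          # vk[k] = v_k(C_A)
print("sum of intrinsic volumes:", vk.sum())   # should be 1
print("delta(C_A) = v'(1) =", p.deriv()(1.0))
print("harmonic number H_d  =", sum(1 / k for k in range(1, d + 1)))
```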

Outline
- Problems involving cones
- Some conic integral geometry
- Concentration

The spherical Steiner formula

Recall the characterization δ(C) = E[ ‖Π_C(g)‖² ]. The measure of the set of points on the sphere within angle arccos(√ε) of the cone C is given by the

Spherical Steiner formula (Herglotz, Allendoerfer, Santaló):

    P{ ‖Π_C(θ)‖² ≥ ε } = Σ_{k=1}^d P{ ‖Π_{L_k}(θ)‖² ≥ ε } · v_k(C),

where L_k is any k-dimensional subspace and θ is uniform on S^{d−1}. This has been substantially generalized by M. McCoy (McCoy-Tropp (2014)).

The spherical Steiner formula: volume of neighbourhoods of subspheres

    P{ ‖Π_C(θ)‖² ≥ ε } = Σ_{k=1}^d P{ ‖Π_{L_k}(θ)‖² ≥ ε } · v_k(C),

where ‖Π_{L_k}(θ)‖² is Beta distributed. The volume of the arccos(√ε)-neighbourhood of a k-dimensional subsphere satisfies, in high dimensions,

    P{ ‖Π_{L_k}(θ)‖² ≥ ε } ≈ 0 if ε > k/d,   ≈ 1 if ε < k/d.
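Concretely (my addition): ‖Π_{L_k}(θ)‖² ~ Beta(k/2, (d−k)/2) for θ uniform on the sphere, so the neighbourhood volumes are Beta tails, and their sharp 0/1 flip around ε = k/d is visible directly:

```python
# Sketch: the subsphere-neighbourhood volumes as Beta tails.
from scipy.stats import beta

d, k = 500, 200
for eps in [0.30, 0.38, 0.40, 0.42, 0.50]:
    tail = beta.sf(eps, k / 2, (d - k) / 2)   # P{ ||Pi_{L_k}(theta)||^2 >= eps }
    print(f"eps = {eps:.2f} (k/d = {k/d:.2f}): {tail:.4f}")
```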

The spherical Steiner formula

Since the volume of the arccos(√ε)-neighbourhood of a k-dimensional subsphere is ≈ 0 for ε > k/d and ≈ 1 for ε < k/d, the Steiner formula gives

    P{ ‖Π_C(θ)‖² ≥ ε } ≈ Σ_{k ≥ εd} v_k(C).

The spherical Steiner formula: measure concentration

    P{ ‖Π_C(θ)‖² ≥ ε } ≈ Σ_{k ≥ εd} v_k(C) ≈ 0 if ε > δ(C)/d,   ≈ 1 if ε < δ(C)/d.

This follows from concentration of measure, since the squared projection is Lipschitz and concentrates near its expected value δ(C)/d.

The spherical Steiner formula

Let X_C be a random variable with distribution given by the spherical intrinsic volumes, P{X_C = k} = v_k(C). By the spherical Steiner formula we have

    P{ X_C ≥ εd } = Σ_{k ≥ εd} v_k(C) ≈ 0 if ε > δ(C)/d,   ≈ 1 if ε < δ(C)/d.

A rigorous implementation uses more advanced concentration-of-measure technology.

Some problems

- Spherical Hadwiger conjecture: each continuous, rotation-invariant valuation on closed convex cones is a linear combination of spherical intrinsic volumes.
- Are the spherical intrinsic volumes log-concave: v_k(C)² ≥ v_{k−1}(C) v_{k+1}(C)?
- Is the variance of X_C maximised by the Lorentz cone Circ(d, π/4)?
- Further develop the combinatorial approach to computing intrinsic volumes, with a view towards cones of interest in statistics (isotonic regression).

For more details:

D. Amelunxen, M. Lotz, M. B. McCoy, and J. A. Tropp. Living on the edge: phase transitions in convex programs with random data. Information and Inference, 2014.

M. B. McCoy and J. A. Tropp. From Steiner formulas for cones to concentration of intrinsic volumes. Discrete Comput. Geom., 2014.

D. Amelunxen and M. Lotz. Gordon's inequality and condition numbers in convex optimization. To appear on arXiv.

Thank You!
