Gaussian Phase Transitions and Conic Intrinsic Volumes: Steining the Steiner formula

1 Gaussian Phase Transitions and Conic Intrinsic Volumes: Steining the Steiner formula. Larry Goldstein, University of Southern California; Ivan Nourdin and Giovanni Peccati, Luxembourg University. University of British Columbia, Jan 18th, 2018. Ann. Appl. Prob., 2017.

2 Ingredients Convex Geometry Compressed sensing Gaussian inequalities

3 Ingredients Convex Geometry: Angular version of the classical Steiner formula for expansion of compact convex sets. Compressed sensing Gaussian inequalities

4 Ingredients Convex Geometry: Angular version of the classical Steiner formula for expansion of compact convex sets. Compressed sensing: Recovery of unknowns under structural assumptions such as sparsity for vectors, or a low rank condition for matrices. Gaussian inequalities

5 Ingredients Convex Geometry: The angular version of the classical Steiner formula for the expansion of compact convex sets reveals the intrinsic volume distributions associated with closed convex cones. Compressed sensing: Recovery of high dimensional unknowns under structural assumptions, such as sparsity for vectors or a low rank condition for matrices. Gaussian inequalities and Stein's Method: For providing finite sample bounds on Gaussian fluctuations.

6 Concentration Connection The subject of this work was motivated by the papers of Amelunxen, Lotz, McCoy and Tropp [ALMT13] and McCoy and Tropp [MT14], where the conic intrinsic volume distributions were studied using Poincaré and log Sobolev inequalities. In particular, it was shown that these distributions exhibit a concentration phenomenon. This raises the possibility that one might make use of other Gaussian inequalities that may be lurking in the background.

8 Concentration of Conic Intrinsic Volumes From: Living on the Edge [ALMT13], Figure 2.2: Concentration of conic intrinsic volumes. The plot displays the conic intrinsic volumes $v_k(C)$ of a circular cone $C \subset \mathbb{R}^{128}$ with angle $\pi/6$. The distribution concentrates sharply around the statistical dimension $\delta(C)$. See Section 3.4 of [ALMT13] for further discussion of this example.

9 Convex Geometry For any closed convex set $K$, let $d(x, K) = \inf_{y \in K} \|x - y\|$. The classical Steiner formula (1840) for the expansion of a compact convex set $K \subset \mathbb{R}^d$, with $B_j$ the unit ball in $\mathbb{R}^j$, reads
$$\mathrm{Vol}_d\{x : d(x, K) \le \lambda\} = \sum_{j=0}^{d} \lambda^{d-j}\, \mathrm{Vol}(B_{d-j})\, V_j.$$
The $V_j$ are the intrinsic volumes: $V_d$ is the volume, $2V_{d-1}$ is the surface area, ... and $V_0$ is the Euler characteristic.
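
As a quick sanity check of the Steiner formula (not part of the slides), the following sketch estimates $\mathrm{Vol}_2\{x : d(x, K) \le \lambda\}$ for the unit square $K = [0,1]^2$ by Monte Carlo and compares it with $V_2 + 2V_1 \lambda + V_0 \pi \lambda^2 = 1 + 4\lambda + \pi\lambda^2$ (area, perimeter, Euler characteristic).

import numpy as np

rng = np.random.default_rng(0)
lam = 0.3        # expansion radius lambda
n = 10**6

# Sample uniformly from a box that contains the lambda-expansion of K = [0,1]^2.
pts = rng.uniform(-lam, 1 + lam, size=(n, 2))
box_area = (1 + 2 * lam) ** 2

# Distance to the unit square: clipping to [0,1] coordinatewise is the metric
# projection onto the square, and the distance is the length of the gap.
proj = np.clip(pts, 0.0, 1.0)
dist = np.linalg.norm(pts - proj, axis=1)

mc_volume = box_area * np.mean(dist <= lam)
steiner = 1 + 4 * lam + np.pi * lam**2   # V_2 + 2 V_1 lambda + V_0 pi lambda^2
print(mc_volume, steiner)                # should agree up to Monte Carlo error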

10 Convex Geometry The set $C \subseteq \mathbb{R}^d$ is a (convex) cone if $\tau(x + y) \in C$ for all $\{x, y\} \subset C$ and $\tau > 0$. With $S^{j-1}$ the unit sphere in $\mathbb{R}^j$, one has the angular analog, Herglotz (1943),
$$\mathrm{Vol}_{d-1}\{x \in S^{d-1} : d^2(x, C) \le \lambda\} = \sum_{j=0}^{d} \beta_{j,d}(\lambda)\, v_j.$$
The numbers $v_0, \ldots, v_d$ are the conic intrinsic volumes; they are non negative and sum to 1. One can associate to the cone $C$ a distribution $\mathcal{L}(V)$ given by $P(V = j) = v_j$; we also write $V_C$.

11 Recovery of Structured Unknowns Observe a (small) number $m$ of random linear combinations of an unknown $x_0 \in \mathbb{R}^d$ (in high dimension), $z = Ax_0$ for $A \in \mathbb{R}^{m \times d}$ known, with i.i.d. $N(0, 1)$ entries. The unknown $x_0$ lies in the feasible region $F = \{x : Ax = z\} = \mathrm{Null}(A) + x_0$. With $m < d$ it is not possible to recover $x_0$ in general. Say $x_0$ is sparse, i.e., has some small number $s$ of non-zero entries, $\|x_0\|_0 = s$. Minimizing $\|x\|_0$ over $x \in F$ is computationally infeasible.

14 Convex Program Determine a convex function $f(x)$ that promotes the known structure of $x_0$, and minimize $f(x)$ over $x \in F$. Chandrasekaran, Recht, Parrilo, and Willsky (2012) show how to construct $f$ for some given structure. To promote sparsity let $f(x) = \|x\|_1$, the $L^1$ norm, i.e. $\sum_{i=1}^{d} |x_i|$. Candès, Romberg, Tao (2006) and Donoho (2006), Donoho and Tanner (2009). Why does it work?
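
To make the convex program concrete, here is a minimal sketch (not from the slides) that solves $\min \|x\|_1$ subject to $Ax = z$ by recasting it as a linear program with the standard split $x = u - v$, $u, v \ge 0$, using scipy.optimize.linprog.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
d, m, s = 100, 40, 5

# s-sparse ground truth and Gaussian measurements z = A x0.
x0 = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
x0[support] = rng.standard_normal(s)
A = rng.standard_normal((m, d))
z = A @ x0

# min ||x||_1 s.t. Ax = z, as an LP in (u, v) with x = u - v and u, v >= 0:
# minimize sum(u) + sum(v) subject to A u - A v = z.
c = np.ones(2 * d)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=z, bounds=[(0, None)] * (2 * d), method="highs")
x_hat = res.x[:d] - res.x[d:]
print("recovery error:", np.linalg.norm(x_hat - x0))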

15 Descent Cone For $f(x) = \|x\|_1$ and $x_0 \in \mathbb{R}^d$, let
$$D(f, x_0) = \{y \in \mathbb{R}^d : \exists\, \tau > 0,\ \|x_0 + \tau y\|_1 \le \|x_0\|_1\}.$$
This is the set of all directions in which, starting at $x_0$, a small step does not increase the $L^1$ norm.

16 Convex recovery of unknowns From: Living on the Edge [ALMT13], Figure 2.3: The optimality condition for a regularized inverse problem. The condition for the regularized linear inverse problem to succeed requires that the descent cone $D(f, x_0)$ and the null space $\mathrm{null}(A)$ do not share a ray. [left] The regularized linear inverse problem succeeds. [right] The regularized linear inverse problem fails.

17 Convex recovery of sparse unknowns So, translating by $x_0$, success occurs if and only if $\mathrm{Null}(A) \cap C = \{0\}$, where $C$ is the descent cone of the $L^1$ norm at $x_0$. Pause for an acknowledgment of priority.

19 Acknowledgment of priority: Descent cone of the $L^1$ norm at a sparse vector

20 Convex recovery of sparse unknowns Translating by $x_0$, success occurs if and only if $\mathrm{Null}(A) \cap C = \{0\}$, where $C$ is the descent cone of the $L^1$ norm at $x_0$. We know $\dim(\mathrm{Null}(A)) = d - m$. So if the "dimension" of $C$ were $\delta(C)$, we would want $d - m + \delta(C) \le d$, so $m \ge \delta(C)$. The Statistical Dimension of $C$ is $\delta(C) = E[V_C]$. The recovery success probability $p$ is sandwiched: $P(V \le m - 1) \le p \le P(V \le m)$. Concentration would imply a sharp transition, a threshold phenomenon.
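
The threshold $m \approx \delta(C)$ can be evaluated numerically. The sketch below uses the statistical-dimension recipe for the $L^1$ descent cone at an $s$-sparse vector given in [ALMT13]; the formula $\delta(C) \approx \inf_{\tau \ge 0} \{ s(1 + \tau^2) + (d - s)\, E[(|g| - \tau)_+^2] \}$, $g \sim N(0,1)$, is quoted from memory here and should be checked against that paper.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def l1_statistical_dimension(d, s):
    """Approximate statistical dimension of the descent cone of the L1 norm
    at an s-sparse point of R^d (recipe as recalled from [ALMT13])."""
    def objective(tau):
        # E[(|g| - tau)_+^2] = 2[(1 + tau^2) Phi(-tau) - tau phi(tau)]
        tail = 2 * ((1 + tau**2) * norm.cdf(-tau) - tau * norm.pdf(tau))
        return s * (1 + tau**2) + (d - s) * tail
    res = minimize_scalar(objective, bounds=(0.0, 20.0), method="bounded")
    return res.fun

d, s = 100, 5
print("approx. delta(C):", l1_statistical_dimension(d, s))
# The phase transition for exact recovery is predicted near m = delta(C).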

21 Metric Projection For $K$ any closed convex set, the infimum $d(x, K) = \inf_{y \in K} \|x - y\|$ is attained at a unique vector, called the metric projection of $x$ onto $K$, denoted by $\Pi_K(x)$.

22 Gaussian Connection The cone $C$ is polyhedral if there exist an integer $N$ and vectors $u_1, \ldots, u_N$ in $\mathbb{R}^d$ such that
$$C = \bigcap_{i=1}^{N} \{x \in \mathbb{R}^d : \langle u_i, x \rangle \le 0\}.$$
For $g \sim N(0, I_d)$, the conic intrinsic volumes satisfy
$$v_j = P\big(\Pi_C(g) \text{ lies in the relative interior of a } j\text{-dimensional face of } C\big).$$
Consider the orthant $\mathbb{R}^d_+$. The cone $C$ is like a subspace of random dimension $V$, hence with statistical dimension $\delta(C) = E[V_C]$.
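
This characterization gives a direct simulation recipe. A minimal sketch for the orthant $C = \mathbb{R}^d_+$, where the projection is coordinatewise clipping at zero and $V_C \sim \mathrm{Binomial}(d, 1/2)$:

import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2)
d, n = 8, 200_000

# v_j = P(Pi_C(g) lies in the relative interior of a j-dimensional face of C).
# For the orthant, Pi_C(g) = max(g, 0) and the face dimension is the number
# of strictly positive coordinates of g.
g = rng.standard_normal((n, d))
face_dim = (g > 0).sum(axis=1)
v_hat = np.bincount(face_dim, minlength=d + 1) / n

print("Monte Carlo v_j :", np.round(v_hat, 4))
print("Binomial(d,1/2) :", np.round(binom.pmf(np.arange(d + 1), d, 0.5), 4))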

23 Master Steiner Formula [MT14] Implies that, with $X_i$ independent $\chi^2_1$ random variables, independent of $V_C$,
$$\|\Pi_C(g)\|^2 \stackrel{d}{=} \sum_{i=1}^{V_C} X_i.$$
Letting $G_C = \|\Pi_C(g)\|^2$,
$$G_C \stackrel{d}{=} \sum_{i=1}^{V_C} X_i = \sum_{i=1}^{V_C} (X_i - 1) + V_C,$$
so recalling $\delta_C = E[V_C]$ and letting $\tau_C^2 = \mathrm{Var}(V_C)$, we see $E[G_C] = \delta_C$ and $\mathrm{Var}(G_C) = 2\delta_C + \tau_C^2$.

24 Master Steiner Formula [MT14] The relation $G_C \stackrel{d}{=} \sum_{i=1}^{V_C} X_i$ for $G_C = \|\Pi_C(g)\|^2$ yields the (strange) moment generating function identity
$$E e^{tV} = E e^{\xi_t G} \quad\text{with}\quad \xi_t = \tfrac{1}{2}\left(1 - e^{-2t}\right).$$
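
A quick numerical check of this identity (a sketch, again using the orthant, where $V \sim \mathrm{Binomial}(d, 1/2)$ so that $E e^{tV} = ((1 + e^t)/2)^d$ exactly):

import numpy as np

rng = np.random.default_rng(3)
d, n, t = 10, 500_000, 0.2
xi_t = 0.5 * (1 - np.exp(-2 * t))

# Left side: E exp(t V) for V ~ Binomial(d, 1/2), the orthant intrinsic volumes.
lhs = ((1 + np.exp(t)) / 2) ** d

# Right side: E exp(xi_t * G) with G = ||Pi_C(g)||^2, estimated by Monte Carlo.
g = rng.standard_normal((n, d))
G = (np.maximum(g, 0.0) ** 2).sum(axis=1)
rhs = np.exp(xi_t * G).mean()

print(lhs, rhs)   # should agree up to Monte Carlo error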

25 Concentration [MT14] From $E e^{tV} = E e^{\xi_t G}$ with $\xi_t = \frac{1}{2}(1 - e^{-2t})$, and the fact that $G = \|\Pi_C(g)\|^2$, use the log Sobolev inequality to obtain a bound on its moment generating function. Translate this into a bound on the moment generating function of $V_C$ to show concentration around $\delta_C$, implying a phase transition for exact recovery.

27 Normal Fluctuations Standardizing
$$G_C \stackrel{d}{=} \sum_{i=1}^{V_C} (X_i - 1) + V_C = W_C + V_C$$
yields
$$\frac{G_C - \delta_C}{\sigma_C} \stackrel{d}{=} \left(\frac{\sqrt{2\delta_C}}{\sigma_C}\right) \frac{W_C}{\sqrt{2\delta_C}} + \left(\frac{\tau_C}{\sigma_C}\right) \frac{V_C - \delta_C}{\tau_C}.$$
The standardized $G_C$ is asymptotically normal, and the expressions on the right hand side are asymptotically independent. Apply Cramér's theorem. One can derive lower bounds for $\tau_C$ in terms of $E\|\Pi_C(g)\|$, the statistical center.

29 Toward the 2nd order Poincaré Inequality, applied to $G_C$. 1st: Poincaré Inequality. If $W(g)$ is a smooth real valued function of $g \sim N(0, I_d)$, then $\mathrm{Var}(W(g)) \le E\|\nabla W(g)\|^2$. E.g.,
$$\sigma_C^2 = \mathrm{Var}\left(\|\Pi_C(g)\|^2\right) \le E\|2\Pi_C(g)\|^2 = 4\delta_C.$$
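
A small Monte Carlo illustration (a sketch, using the orthant $C = \mathbb{R}^d_+$, for which $\delta_C = d/2$ and $\tau_C^2 = d/4$), comparing $\mathrm{Var}(\|\Pi_C(g)\|^2) = 2\delta_C + \tau_C^2$ with the Poincaré bound $4\delta_C$:

import numpy as np

rng = np.random.default_rng(4)
d, n = 20, 1_000_000

g = rng.standard_normal((n, d))
G = (np.maximum(g, 0.0) ** 2).sum(axis=1)   # G_C = ||Pi_C(g)||^2 for the orthant

delta_C = d / 2    # E[V_C] for the orthant, since V_C ~ Binomial(d, 1/2)
tau2_C = d / 4     # Var(V_C)

print("Var(G_C), Monte Carlo   :", G.var())
print("2*delta_C + tau_C^2     :", 2 * delta_C + tau2_C)
print("Poincare bound 4*delta_C:", 4 * delta_C)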

30 Poincaré Inequality, proof sketch Take $E[W] = 0$, $\varphi$ a smooth function, $\hat g$ an independent copy of $g$, and consider
$$E[W \varphi(W)] = E[(W(g) - W(\hat g))\, \varphi(W(g))].$$

31 Poincaré Inequality, proof sketch Take $E[W] = 0$, $\varphi$ a smooth function, $\hat g$ an independent copy of $g$, and let $\hat g_t = e^{-t} g + \sqrt{1 - e^{-2t}}\, \hat g$. Then
$$E[W \varphi(W)] = E[(W(g) - W(\hat g))\, \varphi(W(g))].$$

32 Poincaré Inequality, proof sketch Take $E[W] = 0$, $\varphi$ a smooth function, $\hat g$ an independent copy of $g$, and let $\hat g_t = e^{-t} g + \sqrt{1 - e^{-2t}}\, \hat g$. Then
$$E[W \varphi(W)] = E[(W(\hat g_0) - W(\hat g_\infty))\, \varphi(W(\hat g_0))].$$

33 Poincaré Inequality, proof sketch Take $E[W] = 0$, $\varphi$ a smooth function, $\hat g$ an independent copy of $g$, and let $\hat g_t = e^{-t} g + \sqrt{1 - e^{-2t}}\, \hat g$. Then
$$E[W \varphi(W)] = E[(W(\hat g_0) - W(\hat g_\infty))\, \varphi(W(\hat g_0))] = -\int_0^\infty \frac{d}{dt}\, E[W(\hat g_t)\, \varphi(W(\hat g_0))]\, dt.$$

34 Poincaré Inequality, proof sketch Take $E[W] = 0$, $\varphi$ a smooth function, $\hat g$ an independent copy of $g$, and let $\hat g_t = e^{-t} g + \sqrt{1 - e^{-2t}}\, \hat g$. Then
$$E[W \varphi(W)] = E[(W(\hat g_0) - W(\hat g_\infty))\, \varphi(W(\hat g_0))] = -\int_0^\infty \frac{d}{dt}\, E[W(\hat g_t)\, \varphi(W(\hat g_0))]\, dt = \cdots = E[T\, \varphi'(W(g))],$$
where
$$T = \int_0^\infty e^{-t}\, \langle \nabla W(g), \hat E(\nabla W(\hat g_t)) \rangle\, dt.$$
Take $\varphi(x) = x$ to obtain $\mathrm{Var}(W) = E[T]$, then use Cauchy Schwarz.

35 Stein's Method Stein's method is based on an equation characterizing the target distribution. Stein's lemma: $Z \sim N(0, 1)$ if and only if $E[Z\varphi(Z)] = E[\varphi'(Z)]$ for all absolutely continuous functions for which these expectations exist. For a test function $h$, solve the Stein equation
$$\varphi'(w) - w\varphi(w) = h(w) - Eh(Z)$$
for $\varphi$, evaluate at $W$, the variable whose distribution is to be approximated, and bound the expectation
$$Eh(W) - Eh(Z) = E[\varphi'(W) - W\varphi(W)].$$
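
As a small illustration (a sketch, not from the slides), the Stein characterization can be probed by Monte Carlo: $E[\varphi'(W)] - E[W\varphi(W)]$ is near zero when $W$ is standard normal, and the same functional applied to a non-normal variable measures its departure from normality.

import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000

def stein_discrepancy(w, phi, dphi):
    # E[phi'(W)] - E[W phi(W)]; zero for all nice phi iff W ~ N(0, 1).
    return dphi(w).mean() - (w * phi(w)).mean()

phi = np.tanh                           # a bounded, smooth test function
dphi = lambda w: 1.0 / np.cosh(w) ** 2  # its derivative

z = rng.standard_normal(n)
x = rng.exponential(size=n) - 1.0       # mean 0, variance 1, but not normal

print("normal      :", stein_discrepancy(z, phi, dphi))  # close to 0
print("exponential :", stein_discrepancy(x, phi, dphi))  # visibly nonzero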

37 Stein's Method Stein's lemma: $Z \sim N(0, 1)$ if and only if $E[Z\varphi(Z)] = E[\varphi'(Z)]$. Suppose for a mean zero, variance one variable $W$ one can find $T$ such that $E[W\varphi(W)] = E[T\varphi'(W)]$, and note $E[T] = 1$. For a continuous test function $h : \mathbb{R} \to [0, 1]$ and $\varphi$ the solution of the Stein equation,
$$|Eh(W) - Eh(Z)| = |E[\varphi'(W) - W\varphi(W)]| = |E[\varphi'(W)(1 - T)]| \le \|\varphi'\|_\infty\, E|T - 1| \le 2\sqrt{\mathrm{Var}(T)}.$$
Taking the sup over all such $h$, obtain a bound in total variation.

39 2nd order Poincaré inequality Recall from the proof of the Poincaré inequality that, for $E[W] = 0$ and $W = W(g)$,
$$E[W\varphi(W)] = E[T\varphi'(W)] \quad\text{with}\quad T = \int_0^\infty e^{-t}\, \langle \nabla W(g), \hat E(\nabla W(\hat g_t)) \rangle\, dt.$$
Chatterjee (2009): for this $T$ and $\mathrm{Var}(W) = 1$ we have
$$d_{TV}(W, Z) \le 2\sqrt{\mathrm{Var}(T)}.$$
The variance might be difficult to evaluate directly, so use the Poincaré inequality.

41 Gaussian Projections on Convex Cones Want to obtain a limiting normal law, with bounds, for $G_C = \|\Pi_C(g)\|^2$, in order to infer the same for $V_C$. Since $\nabla \|\Pi_C(x)\|^2 = 2\Pi_C(x)$, we have
$$T = 4 \int_0^\infty e^{-t}\, \langle \Pi_C(g), \hat E\, \Pi_C(\hat g_t) \rangle\, dt.$$
Use the Poincaré inequality to bound $\mathrm{Var}(T)$, and the fact that $\sigma_C^2 = 2\delta_C + \tau_C^2$, to obtain
$$d_{TV}\left(\frac{G_C - \delta_C}{\sigma_C}, Z\right) \le \frac{16\sqrt{\delta(C)}}{\sigma_C^2} \le \frac{8}{\sqrt{\delta(C)}}.$$

42 Berry-Esseen bound for conic intrinsic volumes If $C$ is a closed convex cone with $\delta_C = E[V_C]$ and $\tau_C^2 = \mathrm{Var}(V_C) > 0$, then with $\alpha = \tau_C^2 / \delta_C$, for $\delta_C \ge 8$,
$$\sup_{u \in \mathbb{R}} \left| P\left[\frac{V_C - \delta_C}{\tau_C} \le u\right] - P[Z \le u] \right| \le h(\delta_C) + \frac{48}{\alpha} \log_+\!\left(\frac{\alpha}{\sqrt{2\delta_C}}\right), \quad\text{where}\quad h(\delta_C) = \frac{1}{72} \cdot \frac{(\log \delta_C)^{5/2}}{\delta_C^{3/16}}.$$
For $m$ about $\delta_C + t\tau_C$, obtain a bound on
$$\sup_{t \in \mathbb{R}} \left| P\{x_0 \text{ is recovered}\} - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-u^2/2}\, du \right|.$$
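
The practical upshot can be sketched numerically. Assuming the normal approximation for $V_C$ and the sandwich $P(V \le m - 1) \le p \le P(V \le m)$ from slide 20, the probability of exact recovery as a function of the number of measurements $m$ is approximately $\Phi((m - \delta_C)/\tau_C)$, a transition of width of order $\tau_C$ around $\delta_C$ (the values of $\delta_C$ and $\tau_C$ below are placeholders for illustration only).

import numpy as np
from scipy.stats import norm

def recovery_probability_curve(delta_C, tau_C, ms):
    """Normal approximation to P{x0 is recovered} as a function of m
    (a heuristic based on the sandwich bounds and the CLT for V_C)."""
    return norm.cdf((np.asarray(ms) - delta_C) / tau_C)

delta_C, tau_C = 33.0, 7.0           # placeholder values for illustration
ms = np.arange(10, 61, 5)
for m, p in zip(ms, recovery_probability_curve(delta_C, tau_C, ms)):
    print(f"m = {m:3d}   approx P(success) = {p:.3f}")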

44 Ingredients Convex Geometry. Compressed sensing. Gaussian inequalities and Stein's Method: the Poincaré inequality bounds a variance by an expectation; the log Sobolev inequality; the 2nd order Poincaré inequality bounds a total variation distance by a variance, which in turn is bounded by an expectation.
