Mathematical Foundation for Compressed Sensing


Jan-Olov Strömberg, Royal Institute of Technology, Stockholm, Sweden. Lecture 7, March 19, 2012.

An outline for today

Last time we stated: a RIP estimate for structured random matrices.

Today and the next 1-2 weeks: go through a proof of it.

Today: Go through some basic tools and useful lemmas for the proof. Reduce the proof to an estimate that remains to be proved.

Next week: Joel will present a proof, putting things together. About examination.

Let $A$ be a random $m \times N$ matrix. We define $\delta_s = \delta_s(A)$ as a random variable:

Definition:

$$\delta_s(A) = \sup_{x \in X(s) \cap B_2(0,1)} \bigl|\, \|Ax\|_2^2 - \|x\|_2^2 \,\bigr|.$$
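As an aside (not from the lecture), for small dimensions $\delta_s$ can be computed exactly by enumerating supports: on a fixed support $S$, the supremum over unit vectors equals the spectral norm of $A_S^T A_S - I$. A minimal Python sketch, with all names our own:

import itertools
import numpy as np

def delta_s(A, s):
    """Exact RIP constant of A for sparsity s, by enumerating all supports."""
    m, N = A.shape
    worst = 0.0
    for S in itertools.combinations(range(N), s):
        cols = A[:, list(S)]
        # sup over unit x supported on S of | ||Ax||_2^2 - ||x||_2^2 |
        worst = max(worst, np.linalg.norm(cols.T @ cols - np.eye(s), 2))
    return worst

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 80)) / np.sqrt(40)  # columns have unit norm in expectation
print(delta_s(A, s=2))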

Rauhut states a RIP result for structured $m \times N$ matrices:

Theorem 6.2. There are constants $C > 0$ and $D > 0$ such that for any $K \ge 1$, $\epsilon > 0$, $0 < \delta \le 1$, and $m$ satisfying

$$\frac{m}{\ln(10m)} > C\,\frac{K^2 s}{\delta^2}\,\ln^2(100 s)\,\ln(N), \qquad m > D\,\frac{K^2 s}{\delta^2}\,\ln(1/\epsilon),$$

we have: for any $m \times N$ structured random matrix $A$ with coherence bound $\sqrt{m}\,\max_{i,j}|A_{ij}| \le K$, the following holds:

$$P\{\delta_s > \delta\} \le \epsilon.$$

The constants are explicit; in particular $D < 456$.

In our project (Joel Anderson and S.) we will get:

Theorem 6.2*. There are constants $C > 0$ and $D > 0$ such that for any $\epsilon > 0$ and $0 < \delta \le 1$ the following holds: If

$$\frac{m}{\ln(2m)} > C\,\frac{K^2 s}{\delta^2}\,\ln^2\!\bigl(c s / \log(2m)\bigr)\,\ln(N), \qquad m > D\,\frac{K^2 s}{\delta^2}\,\ln(1/\epsilon),$$

then the $m \times N$ structured random matrix $A$ (as above) satisfies

$$P\{\delta_s > \delta\} \le \epsilon.$$

The constants are $C \approx 10^3$ and $D$ somewhat smaller ($10^2$?), and $c$ is rather small. The proof is much shorter than the Rauhut–Yanis proof.

We may also define the LEIP constant $\delta_t = \delta_t(A)$ as a stochastic variable depending on the random matrix $A$:

Definition:

$$\delta_t = \sup_{x \in B_1(0,t) \cap B_2(0,1)} \bigl|\, \|Ax\|_2^2 - \|x\|_2^2 \,\bigr|.$$

Remark: The proof of Theorem 6.2* gives almost the same estimate of $m$ when $\delta_s$ is replaced by $\delta_t$, $t = \sqrt{s}$.
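The choice $t = \sqrt{s}$ comes from Cauchy–Schwarz; a one-line check (our addition):

$$x \in X(s),\ \|x\|_2 \le 1 \implies \|x\|_1 = \sum_{j \in \operatorname{supp}(x)} |x_j| \le \sqrt{s}\,\|x\|_2 \le \sqrt{s},$$

so $X(s) \cap B_2(0,1) \subset B_1(0,\sqrt{s}) \cap B_2(0,1)$, and hence $\delta_s \le \delta_t$ for $t = \sqrt{s}$.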

Our proof is much inspired by the Rauhut–Yanis paper. Most ideas can be found there; some of the ideas are twisted or quite new. Our proof hopefully gives slightly better constants. But most of all, we think it is much shorter and does not require so many sophisticated arguments.

Basic tools

We list here the basics on which the proof and the preliminary lemmas are based. We mark with an (R) those that are also used in the Rauhut–Yanis paper:

(R) The triangle inequality.

(R) Jensen's inequality: $|EX|^p \le E|X|^p$ when $1 \le p$.

(R) If $X_j$, $1 \le j \le m$, are independent random variables which are symmetric ($X_j$ and $-X_j$ have the same distribution), and $J = (j_1, \dots, j_m)$ is a multi-index with each $j_k$ a non-negative integer and $X^J = X_1^{j_1} \cdots X_m^{j_m}$, then $E X^J = 0$ unless all the $j_k$ are even.

The norms $\|X\|_p = (E|X|^p)^{1/p}$ are increasing with $p$, i.e. $\|X\|_p \le \|X\|_q$ when $p \le q$.

(R) Markov's inequality: Let $X$ be a random variable; then for $\lambda > 0$,

$$P\{|X| > \lambda\} \le \frac{E|X|}{\lambda}.$$

(R) Stirling's formula: even if not quite trivial, we set it up here as a prerequisite:

$$n! = \sqrt{2\pi n}\,\Bigl(\frac{n}{e}\Bigr)^n e^{\lambda_n}, \quad \text{for some numbers } \frac{1}{13n} \le \lambda_n \le \frac{1}{12n}.$$

If $a_j \ge 0$ is a finite set of numbers, then (trivial but yet so powerful!):

$$\max_j a_j \le \sum_j a_j.$$

The last observation is not used (at least not so extensively) in the Rauhut–Yanis paper, and it is the key to simplifying the calculations in their proof. The following equivalent formulation might look less trivial:

$$\max_j a_j \le \Bigl(\sum_j a_j^p\Bigr)^{1/p}.$$
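A quick numeric illustration (ours, not from the slides) of how the $p$-norm bound tightens toward the max as $p$ grows:

import numpy as np

a = np.array([0.3, 1.7, 0.9, 1.2])
for p in [1, 2, 4, 8, 16, 64]:
    # (sum_j a_j^p)^(1/p) always dominates max_j a_j and converges to it
    print(p, (a**p).sum() ** (1 / p), a.max())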

Lemma 7.1: Let $X_j$, $1 \le j \le M$, be real-valued stochastic variables, uniformly bounded in $p$-norm; i.e., let $p \ge 1$ and assume that there is a constant $B$ such that $\|X_j\|_p \le B$ for all $j$. Then

$$\bigl\|\max_j |X_j|\bigr\|_p \le M^{1/p}\,B.$$

The proof: HOMEWORK?
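A sketch of one natural route (our reconstruction, using the max $\le$ sum observation above, not necessarily the intended solution):

$$E\max_j |X_j|^p \le E\sum_j |X_j|^p = \sum_j E|X_j|^p \le M\,B^p,$$

and taking $p$th roots gives $\|\max_j |X_j|\|_p \le M^{1/p} B$.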

Lemma 7.2: Assume that the increasing sequence $p_0 < p_1 < \dots < p_k < \dots < p_M$ satisfies $p_k/p_{k-1} \ge \gamma$ for some $\gamma > 1$, and assume that there are stochastic variables $X_k$, $1 \le k \le M$, and a constant $B$ such that $\|X_k\|_{p_k} \le B$ for $1 \le k \le M$. Then there is a number $A$ depending only on $\gamma$ such that

$$\bigl\|\max_{1 \le j \le M} X_j\bigr\|_{p_0} \le A\,B.$$

The proof: HOMEWORK 1

Let $\{\epsilon_j\}$ be the Rademacher set: independent random variables taking the values $\pm 1$ with equal probability. Then we have

Lemma 7.3 (Khintchine inequality): Let $a_j$, $1 \le j \le m$, be real numbers. Then for any positive integer $n$:

$$E\Bigl|\sum_j \epsilon_j a_j\Bigr|^{2n} \le \frac{(2n)!}{2^n\,n!}\Bigl(\sum_j a_j^2\Bigr)^n.$$

Remark: The Rademacher set may be replaced by a set of independent bounded symmetric real random variables $X_j$ with $\mathrm{Var}(X_j) \le 1$.
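A Monte Carlo sanity check of Lemma 7.3 (an illustration of ours; the sample values are arbitrary):

import math
import numpy as np

rng = np.random.default_rng(1)
a = np.array([0.5, -1.0, 2.0, 0.25])
n = 3
eps = rng.choice([-1.0, 1.0], size=(200_000, a.size))
lhs = np.mean((eps @ a) ** (2 * n))  # estimates E|sum_j eps_j a_j|^{2n}
rhs = math.factorial(2 * n) / (2**n * math.factorial(n)) * (a @ a) ** n
print(lhs, "<=", rhs)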

Proof: After taking the expectation value on the left-hand side, only terms with even powers remain. Thus the left-hand side will be

$$\sum_{|J|=n} \frac{(2n)!}{(2j_1)!\cdots(2j_m)!}\,(a^J)^2.$$

The $n$th power of the sum on the right-hand side will be

$$\sum_{|J|=n} \frac{n!}{j_1!\cdots j_m!}\,(a^J)^2.$$

We get

$$\max_{|J|=n} \frac{(2n)!\,/\,\bigl((2j_1)!\cdots(2j_m)!\bigr)}{n!\,/\,\bigl(j_1!\cdots j_m!\bigr)} = \frac{(2n)!}{(2!\cdots 2!)\,n!} = \frac{(2n)!}{2^n\,n!}.$$
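To see why the maximum is attained when every $j_k \in \{0,1\}$ (a step the slide leaves implicit; this is our addition), note the termwise bound $(2j)!/j! \ge 2^j\,j!$ for $j \ge 0$ (equivalently $\binom{2j}{j} \ge 2^j$, with equality iff $j \le 1$), so

$$\frac{(2n)!}{\prod_k (2j_k)!}\cdot\frac{\prod_k j_k!}{n!} = \frac{(2n)!}{n!}\prod_k \frac{j_k!}{(2j_k)!} \le \frac{(2n)!}{n!}\prod_k \frac{1}{2^{j_k}\,j_k!} \le \frac{(2n)!}{2^n\,n!},$$

since $\sum_k j_k = n$ and $\prod_k j_k! \ge 1$.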

Using Stirling's formula on Lemma 7.3 we get

Lemma 7.4 (Khintchine inequality): Let $a_j$, $1 \le j \le m$, be real numbers. Then for any positive integer $n$:

$$E\Bigl|\sum_{j=1}^m \epsilon_j a_j\Bigr|^{2n} \le \sqrt{2}\,\Bigl(\frac{2n}{e}\Bigr)^n\Bigl(\sum_j a_j^2\Bigr)^n.$$

By also using the trivial Cauchy inequality:

$$E\Bigl|\sum_{j=1}^m \epsilon_j a_j\Bigr|^{2n} \le \min\Bigl\{m^n,\ \sqrt{2}\,\Bigl(\frac{2n}{e}\Bigr)^n\Bigr\}\Bigl(\sum_j a_j^2\Bigr)^n.$$

Remark: By interpolation between the even powers $2n$ the result may be extended to any power $p \ge 2$, at the expense of replacing the factor $\sqrt{2}$ by a slightly larger constant.
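The Stirling step, written out (our addition): since $\lambda_{2n} \le \frac{1}{24n} < \frac{1}{13n} \le \lambda_n$,

$$\frac{(2n)!}{2^n\,n!} = \frac{\sqrt{4\pi n}\,(2n/e)^{2n}\,e^{\lambda_{2n}}}{2^n\,\sqrt{2\pi n}\,(n/e)^n\,e^{\lambda_n}} = \sqrt{2}\,\Bigl(\frac{2n}{e}\Bigr)^n e^{\lambda_{2n}-\lambda_n} \le \sqrt{2}\,\Bigl(\frac{2n}{e}\Bigr)^n.$$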

(R) Lemma 7.5, Hoeffding's inequality for Rademacher sums: Let $a_j$ and $\epsilon_j$ be as above; then for $u \ge \sqrt{2}$

$$P\Bigl\{\Bigl|\sum_j \epsilon_j a_j\Bigr| \ge u\Bigl(\sum_j a_j^2\Bigr)^{1/2}\Bigr\} \le e^{-u^2/2}.$$

We will not use Hoeffding's inequality! Rather, we go back to the proof of it each time.
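"Going back to the proof" amounts to combining Lemma 7.4 with Markov's inequality; a sketch of that route (our addition):

$$P\Bigl\{\Bigl|\sum_j \epsilon_j a_j\Bigr| \ge u\,\|a\|_2\Bigr\} \le \frac{E\bigl|\sum_j \epsilon_j a_j\bigr|^{2n}}{u^{2n}\,\|a\|_2^{2n}} \le \sqrt{2}\,\Bigl(\frac{2n}{e\,u^2}\Bigr)^n,$$

and choosing $2n$ close to $u^2$ makes the right-hand side about $\sqrt{2}\,e^{-u^2/2}$; this is exactly the optimization carried out in the proof of Covering Lemma 7.6 below.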

Lemma 7.5, Symmetrization: Let $X = \{X_j\}$ be a finite set of independent random variables with mean values $E X_j = \bar X_j$, and let $p \ge 1$. Then

$$E_X\Bigl\|\sum_j (X_j - \bar X_j)\Bigr\|^p \le 2^p\,E_X E_\epsilon\Bigl\|\sum_j \epsilon_j X_j\Bigr\|^p.$$

Proof: Let $X'_j$ be an independent copy of $X_j$. Use, in order, Jensen's inequality:

$$E_X\Bigl\|\sum_j (X_j - \bar X_j)\Bigr\|^p \le E_X E_{X'}\Bigl\|\sum_j (X_j - X'_j)\Bigr\|^p.$$

Since $X_j - X'_j$ is symmetric for each $j$, we may replace the last expression by

$$E_X E_{X'} E_\epsilon\Bigl\|\sum_j \epsilon_j (X_j - X'_j)\Bigr\|^p.$$

By the triangle inequality:

$$\Bigl\|\sum_j \epsilon_j (X_j - X'_j)\Bigr\|_p \le \Bigl\|\sum_j \epsilon_j X_j\Bigr\|_p + \Bigl\|\sum_j \epsilon_j X'_j\Bigr\|_p = 2\Bigl\|\sum_j \epsilon_j X_j\Bigr\|_p.$$

Let $X = \{X_i\}_{i=1}^m$ be a set of $m$ row vectors in $\mathbb{R}^N$, with $\|X_i\|_\infty \le K$ for some constant $K > 0$. We use these vectors to define a quasi-metric on $\mathbb{R}^N$:

Definitions: $d_X(x, y) = \max_i |X_i(x - y)|$. Define the quasi-balls $B_X(x, r) = \{y \in \mathbb{R}^N : d_X(y, x) \le r\}$.

Note that the unit $\ell_1$-ball is contained in $B_X(0, K)$. Note that $B_X(x, r)$ contains the $\ell_2$-ball $B_2(x, r_1)$ with $r_1 = r/(K\sqrt{s})$ (for $s$-sparse differences). Thus we inherit the covering estimate (Lemma 4.3b) from Euclidean balls.

Lemma 4.3: Let $0 < r < 1$; then

$$N_{X(s)}(r) \le \binom{N}{s}\Bigl(1 + \frac{2}{r}\Bigr)^s \le \Bigl(Ne\bigl(1 + \tfrac{2}{r}\bigr)/s\Bigr)^s \Big/ \sqrt{2\pi s}.$$
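The factor $K\sqrt{s}$ comes from Hölder plus Cauchy–Schwarz on $s$-sparse differences (our addition): for $v = x - y$ with $|\operatorname{supp}(v)| \le s$,

$$|X_i(v)| \le \|X_i\|_\infty\,\|v\|_1 \le K\sqrt{s}\,\|v\|_2,$$

so $\|v\|_2 \le r/(K\sqrt{s})$ implies $d_X(x, y) \le r$.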

We get Lemma 7.5: Let $0 < r \le K\sqrt{s}$. The set of $s$-sparse vectors of length less than one, $X(s)$, can be covered by $N_{X(s)}(r)$ balls $B_X(x_i, r)$, where

$$N_{X(s)}(r) \le \binom{N}{s}\Bigl(1 + \frac{2K\sqrt{s}}{r}\Bigr)^s \le \Bigl(Ne\bigl(1 + \tfrac{2K\sqrt{s}}{r}\bigr)/s\Bigr)^s \Big/ \sqrt{2\pi s}.$$

For large $r$ we can get a better covering estimate:

Covering Lemma 7.6: Let $0 < r < K$. Let $M \ge \frac{8K^2}{r^2}\log(2m)$, and let $\{x_i\}$ be the set of grid points in the $\ell_1$ unit cube with mesh size $\frac{1}{M}$, i.e. the set of points satisfying $\|x\|_1 \le 1$ and $Mx \in \mathbb{Z}^N$. Then $\{x : \|x\|_1 \le 1\}$ is contained in $\bigcup_i B_X(x_i, r)$. The number of grid points is less than

$$\binom{N + M}{M}\,2^M \le \Bigl(\frac{2(N+M)e}{M}\Bigr)^M.$$
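A tiny brute-force check of the grid-point count (our illustration; the parameters are arbitrary):

from itertools import product
from math import comb

N, M = 3, 2
# grid points x with ||x||_1 <= 1 and M*x integer, encoded as v = M*x
pts = [v for v in product(range(-M, M + 1), repeat=N) if sum(map(abs, v)) <= M]
print(len(pts), "<=", comb(N + M, M) * 2**M)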

Proof: The proof is probabilistic. Fix an arbitrary point $x = (x_i)$ with $\|x\|_1 \le 1$. We will construct a random vector $Z$ with $EZ = x$, by assigning to it the value $e_i\,\mathrm{sgn}(x_i)$ with probability $|x_i|$ whenever $x_i \ne 0$, and assigning the value $0$ with probability $1 - \|x\|_1$.

Let $Z_k$, $1 \le k \le M$, be independent copies of the random vector $Z$ and set

$$z = \frac{1}{M}\sum_{k=1}^M Z_k.$$

Then $z$ is a random vector with values in the grid set mentioned above and with $Ez = x$.
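A small sketch of this empirical construction in code (ours; the function name is arbitrary):

import numpy as np

def empirical_point(x, M, rng):
    """Sample z = (1/M) sum_k Z_k with E Z = x; z lands on the (1/M)-grid."""
    N = x.size
    # P(Z = sgn(x_i) e_i) = |x_i|,  P(Z = 0) = 1 - ||x||_1
    p = np.append(np.abs(x), 1.0 - np.abs(x).sum())
    idx = rng.choice(N + 1, size=M, p=p)
    Z = np.zeros((M, N))
    hit = idx < N  # draws that picked a coordinate rather than 0
    Z[np.nonzero(hit)[0], idx[hit]] = np.sign(x[idx[hit]])
    return Z.mean(axis=0)

rng = np.random.default_rng(2)
x = np.array([0.5, -0.3, 0.1])
print(empirical_point(x, M=100, rng=rng))  # close to x, exactly on the grid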

It is enough to show

$$E_Z\bigl(d_X(z, x)\bigr)^p \le r^p, \qquad (*)$$

for some $p > 1$. We have

$$E_Z\bigl(d_X(z, x)\bigr)^p = E_Z \max_i |X_i(z - x)|^p \le \sum_i E_Z |X_i(z - x)|^p.$$

Now fix $i$ and note that $E_Z X_i(z) = X_i(x)$.

The symmetrization lemma and the Khintchine inequality give

$$E_Z|X_i(z) - X_i(x)|^{2n} \le 2^{2n}\,E\Bigl|\frac{1}{M}\sum_{k=1}^M \epsilon_k X_i(Z_k)\Bigr|^{2n} \le \frac{2^{2n}\sqrt{2}\,(2n/e)^n}{M^{2n}}\,E\Bigl(\sum_{k=1}^M (X_i(Z_k))^2\Bigr)^n \le \frac{2^{2n}\sqrt{2}\,(2n/e)^n}{M^n}\,K^{2n}.$$

To get $\sum_i E_Z|X_i(z - x)|^{2n} \le r^{2n}$, we need

$$2^{2n}\sqrt{2}\,(2n/e)^n\,M^{-n}\,K^{2n} \le \frac{r^{2n}}{m}.$$

Thus we need

$$\sqrt{2}\,m\,\Bigl(\frac{p}{e}\Bigr)^{p/2}\Bigl(\frac{2K}{r\sqrt{M}}\Bigr)^p \le 1, \quad \text{for } p = 2n. \qquad (**)$$

Set $u = \frac{r\sqrt{M}}{2K}$, set $p = u^2$, and choose $2n$ to be the even integer nearest to $p$. The inequality $(**)$ holds if

$$e^{1/8}\,\sqrt{2}\,m\,e^{-u^2/2} \le 1.$$

Remark: The factor $e^{1/8}$ arises because the local minimum of the analytic expression is at $p = u^2$, which may not be an even integer.
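To check the claimed value at the minimum (our addition): with $u = r\sqrt{M}/(2K)$, at $p = u^2$,

$$\Bigl(\frac{p}{e}\Bigr)^{p/2}\Bigl(\frac{2K}{r\sqrt{M}}\Bigr)^p = \Bigl(\frac{u^2}{e}\Bigr)^{u^2/2} u^{-u^2} = e^{-u^2/2},$$

so the left-hand side of $(**)$ becomes $\sqrt{2}\,m\,e^{-u^2/2}$; the factor $e^{1/8}$ covers the rounding of $p$ to the nearest even integer.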

Differentiating the logarithm of the left-hand side of $(**)$ with respect to $p$, we get

$$\frac{1}{2}\log p - \log u,$$

and the second derivative is $\frac{1}{2p}$, which is no more than $\frac{1}{4}$ if $p \ge 2$. We conclude that at the even integer nearest to the local minimum point $p = u^2$, the value is within a factor $e^{b(u)}$ of its minimum value, where $b(u) \le \frac{1}{8}$ provided $u \ge \sqrt{2}$. Thus

$$\frac{r\sqrt{M}}{2K} = u \ge \sqrt{2\log\bigl(e^{1/8}\sqrt{2}\,m\bigr)}.$$

That is (since $e^{1/8}\sqrt{2} \le 2$), it suffices that

$$M \ge \frac{8K^2}{r^2}\,\log(2m).$$

Following the idea from Rauhut–Yanis we want to prove:

$$E\,\delta_s^{2n} \le C(K, N, m, s)\bigl(E\,\delta_s^{n} + 1\bigr),$$

where the constant $C(K, N, m, s)$ is small enough. This will give us an estimate of $E\,\delta_s^{2n}$, from which the probability $P(\delta_s > \delta)$ can be calculated.
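For orientation (our addition), the last step from moments to probability is just Markov's inequality from the tools list:

$$P\{\delta_s > \delta\} \le \frac{E\,\delta_s^{2n}}{\delta^{2n}},$$

so a good bound on $E\,\delta_s^{2n}$, with $n$ chosen suitably, yields $P\{\delta_s > \delta\} \le \epsilon$.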
