Compressed Sensing: Lecture I. Ronald DeVore
Motivation
- Compressed Sensing is a new paradigm for signal/image/function acquisition
- To understand the motivation for this new area we shall consider the classical approach to signal and image acquisition
- This will expose some deficiencies in the classical approach
- The goal of Compressed Sensing (CS) is to remove these deficiencies
- We shall see that the remedy is a merging of ideas from functional analysis, approximation, and probability
Traditional Signal Processing
- Model signals as bandlimited functions x(t): the support of x̂ is contained in [-Ωπ, Ωπ]
- Shannon-Nyquist says x can be exactly represented in terms of translates S(t - nh) of a sinc function S
- The coefficients in this representation are uniform time samples x(nh) with spacing h ≤ 1/Ω
- These samples allow for exact reconstruction (Nyquist rate)
- A/D converters quantize these samples
- Problem: if Ω is too large, we cannot build circuitry that samples faithfully at the desired rate
Traditional Image Compression
- Model images as bivariate functions F(x, y)
- Pixel values are averages of F over small squares
- Sensors generate an array of quantized pixel values, giving a piecewise constant approximation F̄ of F
- Compression: transform the pixel values into coefficients in some basis representation (Fourier, wavelets, curvelets, etc.)
- Compute all coefficients c_λ of F̄ in this basis
- Retain only quantized versions c̄_λ of the largest of these coefficients
- Problem: we take many samples and compute many coefficients, but retain only a few. Is it possible to take and store only a few samples from the very beginning?
Why there is hope
- The belief is that real-world signals/images are sparse or very compressible in a suitable basis
- Thus the information content of real-world signals is small
- CS models signals by means of this sparsity or compressibility; this is in contrast to the Shannon model of bandlimitedness
- CS will try to take advantage of this sparsity by designing new ways to sample signals, and therefore new sensors
- Real-world signals are analog, but we will develop the mathematics in the discrete setting; some massaging must be done to move from discrete to analog
Discrete Compressed Sensing
- x ∈ IR^N with N large
- We are able to ask n questions about x
- A question means an inner product v·x with v ∈ IR^N, called a sample
- What are the best questions to ask?
- Any such sampling is given by Φx, where Φ is an n × N matrix
- We are interested in the good / best matrices Φ
- Here good means the samples y = Φx contain enough information to approximate x well
- How can we make this problem precise?
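A minimal numerical sketch of this setup (the sizes N, n, k and the Gaussian Φ below are hypothetical choices made only for illustration; numpy assumed): a k-sparse x ∈ IR^N is observed only through the n samples y = Φx.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, k = 1000, 100, 10                 # hypothetical ambient dimension, sample budget, sparsity

# A k-sparse signal: k nonzero entries in random positions
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)

# The encoder: an n x N matrix Phi; each row is one "question" v, each sample is v.x
Phi = rng.standard_normal((n, N)) / np.sqrt(n)
y = Phi @ x                              # the n samples; all we are allowed to keep

print(y.shape)                           # (100,): far fewer numbers than the N entries of x
```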
Encoder/Decoder
- We view Φ as an encoder
- Since Φ : IR^N → IR^n, many x are encoded with the same y
- N := {η : Φη = 0} is the null space of Φ
- F(y) := {x : Φx = y} = x_0 + N for any x_0 ∈ F(y)
- The hyperplanes F(y), y ∈ IR^n, stratify IR^N
[Figure: the sets F(y_1), F(y_2), ..., F(y_k) stratifying IR^N]
- A decoder Δ is any (possibly nonlinear) mapping from IR^n to IR^N
- x̄ := Δ(Φ(x)) is our approximation to x from the information extracted
- Let A := A_{n,N} := {(Φ, Δ) : Φ is n × N, Δ a decoder}
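To make the decoder concept concrete, here is a sketch of one conceptually simple (and exponentially expensive) decoder: search for an element of F(y) with smallest support. The function name and the small sizes are illustrative only; numpy assumed.

```python
import numpy as np
from itertools import combinations

def sparsest_decoder(Phi, y, kmax):
    """A conceptual (and very expensive) decoder: return an element of
    F(y) = {x : Phi x = y} with smallest support, trying sizes 0, 1, ..., kmax."""
    n, N = Phi.shape
    if np.allclose(y, 0):
        return np.zeros(N)                       # the zero vector has empty support
    for k in range(1, kmax + 1):
        for T in combinations(range(N), k):
            T = list(T)
            coeff, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
            x = np.zeros(N)
            x[T] = coeff
            if np.allclose(Phi @ x, y):          # x lies in F(y) and is k-sparse
                return x
    return None                                  # nothing kmax-sparse is consistent with y

rng = np.random.default_rng(1)
n, N = 6, 12
Phi = rng.standard_normal((n, N))
x_true = np.zeros(N)
x_true[[3, 7]] = [1.0, -2.0]                     # a 2-sparse signal
print(sparsest_decoder(Phi, Phi @ x_true, kmax=3))   # recovers x_true here (n >= 2k)
```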
Measuring Sparsity
- We can say nothing about the performance of an encoder/decoder pair without some knowledge about x
- CS assumes the signal is either sparse or compressible
- Measuring sparsity: the support of x is supp(x) := {i : x_i ≠ 0}, and Σ_k := {x : #supp(x) ≤ k}
- Measuring compressibility: let ||·||_X be a sequence space norm; typical examples are the l_p norms ||x||_{l_p} := (Σ_{i=1}^N |x_i|^p)^{1/p}
- σ_k(x)_X := inf_{z ∈ Σ_k} ||x - z||_X measures the compressibility of x
- If x ∈ l_1 then σ_k(x)_{l_2} ≤ ||x||_{l_1} k^{-1/2}
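For the l_p norms, the infimum defining σ_k(x)_{l_p} is attained by keeping the k largest entries of x in absolute value and zeroing out the rest. A small sketch (numpy assumed; sigma_k is an illustrative name):

```python
import numpy as np

def sigma_k(x, k, p=2.0):
    """Best k-term approximation error of x in the l_p norm: the error made by
    keeping the k largest entries of x in absolute value and zeroing the rest."""
    order = np.argsort(np.abs(x))[::-1]          # indices sorted by decreasing magnitude
    tail = x[order[k:]]                          # entries a best k-term approximation discards
    return np.sum(np.abs(tail) ** p) ** (1.0 / p)

x = np.array([5.0, -0.1, 0.02, 3.0, -0.01, 0.2])
print(sigma_k(x, k=2))                           # small: x is very compressible
print(sigma_k(np.ones(6), k=2))                  # larger: a flat vector is not compressible
```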
I. First performance measure for Φ
- Given k, N we can ask for the smallest value of n for which there is an n × N matrix Φ such that each vector in Σ_k is captured exactly by the information y = Φx
- That is, there is a decoder Δ such that Δ(Φx) = x for all x ∈ Σ_k
- Key to understanding when this happens are the following matrices Φ_T
- Given any set T contained in {1,...,N}, the matrix Φ_T is the submatrix of Φ formed by the columns indexed by T
- Similarly, x_T is the restriction of x to T
Simple Solution to this problem
Theorem: If Φ is any n × N matrix and 2k ≤ n, then the following are equivalent:
(i) There is a decoder Δ such that Δ(Φ(x)) = x for all x ∈ Σ_k;
(ii) Σ_2k ∩ N(Φ) = {0};
(iii) For all sets T with #T = 2k, the matrix Φ_T has rank 2k;
(iv) For all sets T with #T = 2k, the 2k × 2k symmetric matrix Φ_T^t Φ_T is invertible, and therefore positive definite with all eigenvalues positive.
Simple Proof
The equivalence of (ii), (iii), (iv) is linear algebra.
(i) ⇒ (ii): Suppose (i) holds and x ∈ Σ_2k ∩ N. We can write x = x_0 - x_1 where both x_0, x_1 ∈ Σ_k. Since Φ(x_0) = Φ(x_1), we have, by (i), that x_0 = x_1 and hence x = x_0 - x_1 = 0.
(ii) ⇒ (i): Given any y ∈ IR^n, we define Δ(y) to be any element in F(y) with smallest support. This defines the decoder. We need to check that, for each x ∈ Σ_k, the vector y = Φ(x) is uniquely decoded as x. To see this, suppose x_1, x_2 ∈ Σ_k with y = Φ(x_1) = Φ(x_2); then x_1 - x_2 ∈ N ∩ Σ_2k. From (ii), this means that x_1 = x_2. Hence, if x ∈ Σ_k then Δ(Φ(x)) = x, as desired.
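For tiny sizes, condition (iii) of the theorem can be checked by brute force: every submatrix of 2k columns must have rank 2k. A sketch under these assumptions (numpy assumed; sizes illustrative):

```python
import numpy as np
from itertools import combinations

def captures_sigma_k(Phi, k):
    """Condition (iii) of the theorem: every submatrix of 2k columns has rank 2k."""
    n, N = Phi.shape
    if 2 * k > n:
        return False
    return all(np.linalg.matrix_rank(Phi[:, list(T)]) == 2 * k
               for T in combinations(range(N), 2 * k))

rng = np.random.default_rng(2)
n, N, k = 4, 8, 2
print(captures_sigma_k(rng.standard_normal((n, N)), k))   # True for a generic random draw
print(captures_sigma_k(np.ones((n, N)), k))               # False: a rank-1 matrix fails
```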
Optimal Matrices for I.
- n = 2k is the solution to our first problem
- Given k, can we construct matrices Φ of size 2k × N with the properties of the theorem?
- Vandermonde matrix: choose x_1 < x_2 < ... < x_N and set Φ := (x_j^{i-1}), i = 1, ..., n, j = 1, ..., N. Any 2k of its columns form an invertible Vandermonde matrix, so the conditions of the theorem hold.
- Problem with these matrices is that they are not stable in computation: Φ_T^t Φ_T has a large condition number
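The instability is easy to observe numerically: even for modest n, the Gram matrices Φ_T^t Φ_T of a Vandermonde Φ are very badly conditioned. A sketch (the nodes x_j equally spaced in (0, 1) are an arbitrary illustrative choice; numpy assumed):

```python
import numpy as np

n, N = 10, 50                                  # n = 2k rows, N columns
nodes = np.linspace(0.1, 0.9, N)               # x_1 < x_2 < ... < x_N
Phi = np.vander(nodes, n, increasing=True).T   # row i holds the powers x_j^(i-1)

T = list(range(0, N, N // n))                  # one choice of n well-separated columns
G = Phi[:, T].T @ Phi[:, T]
print(np.linalg.cond(G))                       # enormous condition number even at this size
```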
Second measure of optimality: optimality on classes
- Let ||·||_X be a sequence norm on IR^N which will be used to measure error
- Let K be a compact set in IR^N
- Best performance on the class K in the norm X:
  E_{n,N}(K)_X := inf_{(Φ,Δ) ∈ A_{n,N}} sup_{x ∈ K} ||x - Δ(Φ(x))||_X
- A typical example is X = l_p = l_p^N and K = U(l_q), the unit ball of l_q, with q < p
Gelfand widths
- E_{n,N}(K)_X is closely related to the Gelfand width
  d^n(K)_X := inf_{codim(Y)=n} sup_{x ∈ K ∩ Y} ||x||_X
[Figure: the set K and a subspace Y of codimension n (for us, Y = N)]
- Theorem: If K = -K and K + K ⊂ CK, then d^n(K)_X ≤ E_{n,N}(K)_X ≤ C d^n(K)_X
- Easy to prove: take Y = N, the null space of Φ
- The Gelfand widths of U(l_q) in l_p are all known
- Example: C_1 √(log(N/n)/n) ≤ d^n(U(l_1))_{l_2} ≤ C_2 √(log(N/n)/n)
- Kashin 1977 (improvement in the logarithm by Gluskin-Garnaev)
What are the optimal matrices for II?
- Kashin; Gluskin; Candès-Romberg-Tao
- Restricted Isometry Property (RIP) of order k: there exists 0 < δ = δ_k < 1 such that
  (1 - δ) ||x||_{l_2^N}^2 ≤ ||Φ(x)||_{l_2^n}^2 ≤ (1 + δ) ||x||_{l_2^N}^2 for all x ∈ Σ_k
- Equivalently, the eigenvalues of Φ_T^t Φ_T lie in [1 - δ, 1 + δ] whenever #(T) ≤ k
- Here the relevant matrix norm is the spectral norm
- The larger k, the better Φ will encode
- Recall our earlier condition for Problem I, which only required invertibility of each Φ_T^t Φ_T but no uniform bound on the norms of the inverses
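For very small N, the constant δ_k can be computed exactly from the eigenvalue characterization above by examining every support T of size k. A brute-force sketch (only feasible for tiny sizes; numpy assumed):

```python
import numpy as np
from itertools import combinations

def rip_constant(Phi, k):
    """Smallest delta such that all eigenvalues of Phi_T^t Phi_T lie in
    [1 - delta, 1 + delta] for every support T with #T = k."""
    n, N = Phi.shape
    delta = 0.0
    for T in combinations(range(N), k):
        eigs = np.linalg.eigvalsh(Phi[:, list(T)].T @ Phi[:, list(T)])
        delta = max(delta, float(np.max(np.abs(eigs - 1.0))))
    return delta

rng = np.random.default_rng(3)
n, N, k = 40, 50, 3
Phi = rng.standard_normal((n, N))
Phi /= np.linalg.norm(Phi, axis=0)             # columns on the unit sphere in IR^n
print(rip_constant(Phi, k))                    # the exact delta_3 for this particular draw
```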
RIP Implies Optimal Performance
- If Φ has RIP of order k, then for any q ≤ 1 there is a decoder Δ such that
  ||x - Δ(Φ(x))||_{l_2} ≤ C k^{-1/q+1/2} ||x||_{l_q}
- The constant C > 0 depends only on the δ in RIP
- This can be proved rather easily
- Given n, N, what is the largest value of k for which we can find Φ satisfying RIP?
- There is a constant c_0 such that whenever k ≤ c_0 n / log(N/n) there is a Φ satisfying RIP for this value of k
- For example, this gives a rather simple proof of the optimal upper bound for widths: d^n(U(l_1))_{l_2} ≤ C √(log(N/n)/n)
Building matrices
- How can we construct Φ with RIP for the above range of k?
- Choose N vectors at random from the unit sphere in IR^n and use these as the columns of Φ
- Or choose each entry of Φ independently at random from the Gaussian distribution with mean 0 and variance 1/n
- Or use a Bernoulli process and create a matrix with entries ±1/√n
- With high probability, any of the above constructions results in a matrix which satisfies RIP for the large range of k
- Problem: none of these is constructive. In this sense we are back to the 1970s and Kashin
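A sketch of the three random constructions just listed (numpy assumed; sizes illustrative). Each is normalized so that E||Φx||^2 = ||x||^2:

```python
import numpy as np

def gaussian_matrix(n, N, rng):
    """Entries i.i.d. Gaussian with mean 0 and variance 1/n."""
    return rng.standard_normal((n, N)) / np.sqrt(n)

def bernoulli_matrix(n, N, rng):
    """Entries i.i.d. +-1/sqrt(n) with equal probability."""
    return rng.choice([-1.0, 1.0], size=(n, N)) / np.sqrt(n)

def sphere_columns_matrix(n, N, rng):
    """Columns drawn at random from the unit sphere in IR^n."""
    cols = rng.standard_normal((n, N))
    return cols / np.linalg.norm(cols, axis=0)

rng = np.random.default_rng(4)
x = rng.standard_normal(1000)
for make in (gaussian_matrix, bernoulli_matrix, sphere_columns_matrix):
    Phi = make(200, 1000, rng)
    # ||Phi x||^2 / ||x||^2 should concentrate near 1
    print(make.__name__, np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)
```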
Verification of RIP
- Let Φ(ω) = (φ_{i,j}(ω)), ω ∈ Ω, be random matrices
- Each φ_{i,j} is an independent realization of some fixed random variable r with mean zero and variance 1/n
- Trivially we have E(||Φ(ω)x||_{l_2^n}^2) = ||x||_{l_2^N}^2
- For many processes we have the concentration inequality
  Prob( | ||Φ(ω)x||_{l_2^n}^2 - ||x||_{l_2^N}^2 | ≥ δ ||x||_{l_2^N}^2 ) ≤ C e^{-c(δ)n}
- This condition is well known and not difficult to check for Gaussian, Bernoulli, etc.
- Theorem (Baraniuk-Davenport-DeVore-Wakin): Given c_0, there is a c_1 > 0 such that, with probability ≥ 1 - e^{-c_1 n}, the matrix Φ(ω) satisfies RIP of order k with constant δ for all k ≤ c_0 n / log(N/n)
Sketch Proof of Theorem
- We find a net of points P which covers the unit sphere of Σ_k to accuracy δ/4:
  #(P) ≤ (N choose k) (12/δ)^k ≤ e^{c_2 n}
- Using the concentration inequality, we see that Φ = Φ(ω) satisfies
  (1 - δ/2) ||q||_{l_2^N} ≤ ||Φ(q)||_{l_2^n} ≤ (1 + δ/2) ||q||_{l_2^N}, q ∈ P,
  with probability ≥ 1 - C e^{(c_2 - c(δ/2))n}
- Extend this to all of Σ_k by a bootstrapping estimate: let M be the norm of Φ on Σ_k. Given x ∈ Σ_k with ||x||_{l_2^N} = 1, find q ∈ P such that ||x - q||_{l_2^N} ≤ δ/4. Then
  ||Φ(x)||_{l_2^n} ≤ ||Φ(x - q)||_{l_2^n} + ||Φ(q)||_{l_2^n} ≤ Mδ/4 + 1 + δ/2
- Hence M ≤ Mδ/4 + 1 + δ/2, and so M ≤ 1 + δ; the lower RIP bound follows similarly from ||Φ(x)||_{l_2^n} ≥ ||Φ(q)||_{l_2^n} - ||Φ(x - q)||_{l_2^n}
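The concentration inequality itself is easy to observe empirically: for a fixed unit vector x, the fraction of random Gaussian draws Φ(ω) with | ||Φ(ω)x||^2 - 1 | ≥ δ drops quickly as n grows. A sketch (numpy assumed; the values of N, δ and the trial count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
N, delta, trials = 400, 0.2, 1000
x = rng.standard_normal(N)
x /= np.linalg.norm(x)                          # a fixed unit vector

for n in (25, 50, 100, 200):
    bad = 0
    for _ in range(trials):
        Phi = rng.standard_normal((n, N)) / np.sqrt(n)
        if abs(np.linalg.norm(Phi @ x) ** 2 - 1.0) >= delta:
            bad += 1
    # empirical Prob( | ||Phi x||^2 - ||x||^2 | >= delta ||x||^2 ): decays rapidly in n
    print(n, bad / trials)
```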
Johnson-Lindenstrauss Lemma
- Matrices Φ satisfying RIP are closely related to the following Johnson-Lindenstrauss Lemma
- Given a set of points Q in IR^N, then for any n ≥ c ε^{-2} log(#(Q)) there is a linear mapping Φ from IR^N into IR^n such that for all x, y ∈ Q
  (1 - ε) dist(x, y) ≤ dist(Φ(x), Φ(y)) ≤ (1 + ε) dist(x, y)
- This lemma is easily proved from the Concentration of Measure Inequalities (CMI)
- A random draw of a matrix satisfying CMI will satisfy JL
- It is also easy to prove RIP from JL
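A small numerical illustration of the JL phenomenon (numpy assumed; the target dimension n below is simply chosen generously rather than computed from c ε^{-2} log #(Q)): pairwise distances between a modest point set in IR^N are nearly preserved by a random Gaussian Φ.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
N, n, m = 10000, 600, 40                        # ambient dimension, target dimension, #points
Q = rng.standard_normal((m, N))                 # the point set Q, one point per row
Phi = rng.standard_normal((n, N)) / np.sqrt(n)  # a random linear map IR^N -> IR^n
PQ = Q @ Phi.T                                  # the images Phi(x), x in Q

ratios = [np.linalg.norm(PQ[i] - PQ[j]) / np.linalg.norm(Q[i] - Q[j])
          for i, j in combinations(range(m), 2)]
print(min(ratios), max(ratios))                 # both close to 1: pairwise distances preserved
```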
Road Map to Kashin-Gluskin
- Step 1: Show that random families satisfying CMI satisfy RIP. We have shown an easy proof of this.
- Step 2: Show that RIP for Φ of order k implies ||x - Δ(Φ(x))||_{l_2} ≤ C ||x||_{l_1} k^{-1/2}. This is also easy and will be shown in the next lecture.
- Taking k = c n / log(N/n), these two results give
  d^n(U(l_1))_{l_2} ≤ sup_{||x||_{l_1} ≤ 1} ||x - Δ(Φ(x))||_{l_2} ≤ C k^{-1/2} ≤ C √(log(N/n)/n)
- This is the Kashin-Gluskin result in sharp form
Implementable Φ
- Say we want to build a sensor using Compressed Sensing ideas
- It is a very significant problem to understand what we can build while retaining the performance promised by the CS theory
- We cannot generate a truly random matrix
- We could try to replace random by pseudo-random
- We can also try to use parity check codes or other constructions of 0/1 matrices
- How close can we get to best performance with implementable algorithms?
An example
- There is an n × N matrix Φ with entries 0, 1 such that each row is a circulant shift of the previous row by N/n, and Φ is instance-optimal for k ≤ c_0 n / log(N/n)
- The construction uses polynomials over finite fields (DeVore)
- This is the best known range of k for non-probabilistic constructions
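The slide does not spell out the construction; the following is a hedged sketch of the finite-field polynomial construction from the cited DeVore paper (not necessarily the circulant variant mentioned above), with illustrative parameters. Columns are indexed by polynomials Q of degree at most r over Z_p, rows by pairs (a, b), and column Q has a 1 in row (a, b) exactly when Q(a) = b mod p; dividing by √p gives unit-norm columns with small coherence.

```python
import numpy as np
from itertools import product

def devore_matrix(p, r):
    """0/1 matrix with p^2 rows and p^(r+1) columns. Columns are indexed by
    polynomials Q of degree <= r over Z_p; row (a, b) of column Q equals 1
    exactly when Q(a) = b (mod p). Dividing by sqrt(p) gives unit-norm columns
    whose pairwise inner products are at most r/p, since two distinct
    polynomials of degree <= r agree at no more than r points."""
    row_index = {(a, b): i for i, (a, b) in enumerate(product(range(p), repeat=2))}
    columns = list(product(range(p), repeat=r + 1))      # coefficient vectors of Q
    Phi = np.zeros((p * p, len(columns)))
    for j, coeffs in enumerate(columns):
        for a in range(p):
            b = sum(c * pow(a, i, p) for i, c in enumerate(coeffs)) % p
            Phi[row_index[(a, b)], j] = 1.0
    return Phi

Phi = devore_matrix(p=5, r=2)           # a 25 x 125 matrix with exactly p = 5 ones per column
print(Phi.shape, Phi.sum(axis=0).min(), Phi.sum(axis=0).max())
```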