Interpolation via weighted $\ell_1$-minimization
Holger Rauhut, RWTH Aachen University, Lehrstuhl C für Mathematik (Analysis)

Mathematical Analysis and Applications — Workshop in honor of Rupert Lasser, Helmholtz Zentrum München, September 20, 2013

Joint work with Rachel Ward (University of Texas at Austin)
Function interpolation

Aim: Given a function $f : D \to \mathbb{C}$ on a domain $D$, reconstruct or interpolate $f$ from sample values $f(t_1), \dots, f(t_m)$.

Approaches:
- (Linear) polynomial interpolation: assumes (classical) smoothness in order to achieve error rates; works with special interpolation points (e.g. Chebyshev points).
- Compressive sensing reconstruction: nonlinear; assumes sparsity (or compressibility) of a series expansion in terms of a certain basis (e.g. trigonometric bases); uses fewer (random!) sampling points than degrees of freedom.

This talk: Combine sparsity and smoothness!
A classical interpolation result

$C^r([0,1]^d)$: $r$-times continuously differentiable periodic functions.

There exist sampling points $t_1, \dots, t_m$ and a linear reconstruction operator $R : \mathbb{C}^m \to C^r([0,1]^d)$ such that for every $f \in C^r([0,1]^d)$ the approximation $\tilde f = R(f(t_1), \dots, f(t_m))$ satisfies the optimal error bound

$\|f - \tilde f\|_\infty \leq C m^{-r/d} \|f\|_{C^r}.$

Curse of dimension: achieving error $\varepsilon < 1$ requires about $m \geq C_f \varepsilon^{-d/r}$ samples. Exponential scaling in $d$ cannot be avoided using only smoothness (DeVore, Howard, Micchelli 1989; Novak, Woźniakowski 2009).
Sparse representation of functions

$D$: domain endowed with a probability measure $\nu$; $\psi_j : D \to \mathbb{C}$, $j \in \Gamma$ ($\Gamma$ finite or infinite). $\{\psi_j\}_{j \in \Gamma}$ is an orthonormal system:

$\int_D \psi_j(t) \overline{\psi_k(t)}\, d\nu(t) = \delta_{j,k}, \quad j, k \in \Gamma.$

We consider functions of the form $f(t) = \sum_{j \in \Gamma} x_j \psi_j(t)$. Here $f$ is called $s$-sparse if $\|x\|_0 := \#\{l : x_l \neq 0\} \leq s$, and compressible if the best $s$-term approximation error

$\sigma_s(f)_q := \sigma_s(x)_q := \inf_{z : \|z\|_0 \leq s} \|x - z\|_q \quad (0 < q \leq \infty)$

is small.
Fourier algebra and compressibility

Motivating example: $D = [0,1]$, $\nu$ the Lebesgue measure, $\psi_j(t) = e^{2\pi i j t}$, $t \in [0,1]$, $j \in \mathbb{Z}$.

Fourier algebra: for $0 < p \leq 1$,

$A_p = \{f \in C[0,1] : \|f\|_p < \infty\}, \quad \|f\|_p := \|x\|_p = \Big(\sum_{j \in \mathbb{Z}} |x_j|^p\Big)^{1/p}, \quad f(t) = \sum_{j \in \mathbb{Z}} x_j \psi_j(t).$

Compressibility via the Stechkin estimate:

$\sigma_s(f)_q = \sigma_s(x)_q \leq s^{1/q - 1/p} \|x\|_p = s^{1/q - 1/p} \|f\|_p.$

Since $\|f\|_\infty := \sup_{t \in [0,1]} |f(t)| \leq \|f\|_1$, the best $s$-term approximation $f_0 = \sum_{j \in S} x_j \psi_j$, $\#S = s$, satisfies

$\|f - f_0\|_\infty \leq s^{1 - 1/p} \|f\|_p.$
Sampling

Task: Reconstruct a sparse or compressible $f(t) = \sum_{j \in \Gamma} x_j \psi_j(t)$ from samples $f(t_1), \dots, f(t_m)$ at given sampling points $t_1, \dots, t_m \in D$.

With $y_l = f(t_l)$, $l = 1, \dots, m$, and the sampling matrix $A$ with entries

$A_{l,j} = \psi_j(t_l), \quad l = 1, \dots, m;\ j \in \Gamma,$

we can write $y = Ax$.

Aim: Minimal number $m$ of required samples, $m \ll N = \#\Gamma$. This leads to an underdetermined linear system.
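For illustration, a minimal Python sketch (not part of the original slides; assumes numpy, and the helper name trig_sampling_matrix is mine) of the random sampling matrix for the trigonometric system on $[0,1]$:

```python
import numpy as np

def trig_sampling_matrix(m, J, rng=None):
    """Sampling matrix A[l, j] = psi_j(t_l) for psi_j(t) = exp(2*pi*i*j*t),
    with frequencies j = -J, ..., J (so N = 2J + 1) and sampling points t_l
    drawn i.i.d. uniformly from [0, 1], the orthogonalization measure."""
    rng = np.random.default_rng() if rng is None else rng
    t = rng.uniform(0.0, 1.0, size=m)
    freqs = np.arange(-J, J + 1)
    A = np.exp(2j * np.pi * np.outer(t, freqs))
    return A, t, freqs
```

With $m < 2J + 1$ rows, the system $y = Ax$ is underdetermined, which is precisely the regime studied below.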
Detour: compressive sensing

Reconstruction of $x \in \mathbb{C}^N$ from $y = Ax$ with $A \in \mathbb{C}^{m \times N}$, $m \ll N$. Ingredients:
- sparsity / compressibility,
- efficient reconstruction algorithms,
- randomness / random matrices.

Applications in signal and image processing: radar, magnetic resonance imaging, optics, statistics, astronomy, ...
Reconstruction via $\ell_1$-minimization

$\ell_0$-minimization,

$\min \|z\|_0 \quad \text{subject to } Az = y,$

is NP-hard. Convex relaxation: $\ell_1$-minimization

$\min \|z\|_1 \quad \text{subject to } Az = y.$

Version for noisy data:

$\min \|z\|_1 \quad \text{subject to } \|Az - y\|_2 \leq \eta.$

Alternatives: Orthogonal Matching Pursuit, Iterative Hard Thresholding, CoSaMP, Iteratively Reweighted Least Squares, ...
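Both programs are convex and can be handed to an off-the-shelf solver; a minimal sketch using the cvxpy modeling library (an assumed dependency, not mentioned in the talk):

```python
import cvxpy as cp

def basis_pursuit(A, y, eta=0.0):
    """l1-minimization: min ||z||_1 s.t. Az = y, or, for noisy data,
    s.t. ||Az - y||_2 <= eta. Returns the minimizer as a numpy array."""
    z = cp.Variable(A.shape[1], complex=True)
    constraints = [A @ z == y] if eta == 0.0 else [cp.norm(A @ z - y, 2) <= eta]
    cp.Problem(cp.Minimize(cp.norm1(z)), constraints).solve()
    return z.value
```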
Restricted isometry property (RIP)

Recovery guarantees, error estimates?

Definition. The restricted isometry constant $\delta_s$ of a matrix $A \in \mathbb{C}^{m \times N}$ is defined as the smallest $\delta_s$ such that

$(1 - \delta_s) \|x\|_2^2 \leq \|Ax\|_2^2 \leq (1 + \delta_s) \|x\|_2^2$

for all $s$-sparse $x \in \mathbb{C}^N$. This requires that all $s$-column submatrices of $A$ are well-conditioned.
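Equivalently, $\delta_s$ is the largest deviation from $1$ of an eigenvalue of a Gram matrix $A_S^* A_S$ over all supports $\#S \leq s$. For very small toy examples this can be computed by brute force; a sketch (exponential cost, purely illustrative):

```python
from itertools import combinations
import numpy as np

def rip_constant(A, s):
    """Brute-force delta_s: largest deviation of the eigenvalues of the
    Gram matrices A_S^* A_S from 1, over all column supports |S| = s.
    Exponential in N -- only usable for tiny toy instances."""
    delta = 0.0
    for S in combinations(range(A.shape[1]), s):
        cols = list(S)
        G = A[:, cols].conj().T @ A[:, cols]
        eigs = np.linalg.eigvalsh(G)      # real, since G is Hermitian
        delta = max(delta, abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0))
    return delta
```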
RIP implies recovery by $\ell_1$-minimization

Theorem (Candès, Romberg, Tao '04; Candès '08; Foucart, Lai '09; Foucart '09/'12; Li, Mo '11; Andersson, Strömberg '12). Assume that the restricted isometry constant of $A \in \mathbb{C}^{m \times N}$ satisfies $\delta_{2s} < 4/\sqrt{41} \approx 0.62$. Then $\ell_1$-minimization reconstructs every $s$-sparse vector $x \in \mathbb{C}^N$ from $y = Ax$.
Stability

Theorem (Candès, Romberg, Tao '04; Candès '08; Foucart, Lai '09; Foucart '09/'12; Li, Mo '11; Andersson, Strömberg '12). Let $A \in \mathbb{C}^{m \times N}$ with $\delta_{2s} < 4/\sqrt{41}$. Let $x \in \mathbb{C}^N$, and assume that noisy data are observed, $y = Ax + \eta$ with $\|\eta\|_2 \leq \sigma$. Let $x^\#$ be a solution of

$\min_z \|z\|_1 \quad \text{such that } \|Az - y\|_2 \leq \sigma.$

Then

$\|x - x^\#\|_2 \leq C \frac{\sigma_s(x)_1}{\sqrt{s}} + D\sigma \quad \text{and} \quad \|x - x^\#\|_1 \leq C \sigma_s(x)_1 + D\sqrt{s}\,\sigma$

for constants $C, D > 0$ that depend only on $\delta_{2s}$.
Matrices satisfying the RIP

Open problem: give explicit matrices $A \in \mathbb{C}^{m \times N}$ with small $\delta_{2s} \leq 0.62$ for large $s$. Goal: $\delta_s \leq \delta$ if

$m \geq C_\delta\, s \ln^\alpha(N)$

for constants $C_\delta$ and $\alpha$. Deterministic matrices are known for which $m \geq C_{\delta,k}\, s^2$ suffices, provided $N \leq m^k$.

Way out: consider random matrices.
RIP for Gaussian and Bernoulli matrices

Gaussian: entries of $A$ are independent $N(0,1)$ random variables. Bernoulli: entries of $A$ are independent Bernoulli $\pm 1$ distributed random variables.

Theorem. Let $A \in \mathbb{R}^{m \times N}$ be a Gaussian or Bernoulli random matrix and assume

$m \geq C \delta^{-2} \big(s \ln(eN/s) + \ln(2\varepsilon^{-1})\big)$

for a universal constant $C > 0$. Then with probability at least $1 - \varepsilon$ the restricted isometry constant of $\frac{1}{\sqrt{m}} A$ satisfies $\delta_s \leq \delta$.

Consequence: recovery via $\ell_1$-minimization with probability exceeding $1 - e^{-cm}$, provided $m \geq C s \ln(eN/s)$. The bound is optimal, as follows from lower bounds for Gelfand widths of $\ell_p$-balls, $0 < p \leq 1$ (Gluskin, Garnaev 1984; Foucart, Pajor, R., Ullrich 2010).
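A small illustrative experiment combining this with the basis_pursuit sketch above (assumptions: numpy; the parameter choices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, s = 80, 400, 8
A = rng.standard_normal((m, N)) / np.sqrt(m)   # RIP holds w.h.p. for m ~ s*ln(eN/s)
x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
x_hat = basis_pursuit(A, A @ x)                # sketch from above
print(np.linalg.norm(x - x_hat))               # ~ 0 up to solver tolerance
```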
Back to sampling

$(\psi_j)_{j \in \Gamma}$ a bounded orthonormal system: $\max_{j \in \Gamma} \|\psi_j\|_\infty \leq K$. Sampling points $t_1, \dots, t_m$ are chosen i.i.d. according to the orthogonalization measure $\nu$; the sampling matrix $A$ with entries $A_{l,j} = \psi_j(t_l)$ is then a random matrix.

Theorem (Candès, Tao '06; Rudelson, Vershynin '06; R. '08/'10). If

$m \geq C K^2 \delta^{-2} s \max\{\ln^3(s) \ln(N), \ln(\varepsilon^{-1})\},$

then the restricted isometry constant of $\frac{1}{\sqrt{m}} A$ satisfies $\delta_s \leq \delta$ with probability at least $1 - \varepsilon$.

This implies stable recovery of all $s$-sparse $f$ from $m \geq C_K s \ln^4(N)$ random samples via $\ell_1$-minimization with high probability.
Trigonometric polynomials

$D = [0,1]^d$, $\nu$: Lebesgue measure, $\psi_j(t) = e^{2\pi i j \cdot t}$, $j \in \mathbb{Z}^d$, $t \in [0,1]^d$; boundedness constant $K = 1$.

Exact recovery of $s$-sparse trigonometric polynomials from $m \geq C s \ln^3(s) \ln(N)$ i.i.d. samples uniformly distributed on $[0,1]^d$ via $\ell_1$-minimization.

Error estimates for general $f$ (Fourier coefficients supported on $\Gamma$):

$\|f - \tilde f\|_2 \leq C \sigma_s(f)_1 / \sqrt{s} \leq C s^{1/2 - 1/p} \|f\|_p,$
$\|f - \tilde f\|_\infty \leq C s^{1 - 1/p} \|f\|_p,$
$\|f - \tilde f\|_\infty \leq C \Big(\frac{m}{\ln^3(m) \ln(N)}\Big)^{1 - 1/p} \|f\|_p.$
Chebyshev polynomials

Chebyshev polynomials $C_j$, $j = 0, 1, 2, \dots$:

$\int_{-1}^{1} C_j(x) C_k(x) \frac{dx}{\pi \sqrt{1 - x^2}} = \delta_{j,k}, \quad j, k \in \mathbb{N}_0,$

with $C_0 = 1$ and $\|C_j\|_\infty = \sqrt{2}$ for $j \geq 1$.

Stable recovery of polynomials that are $s$-sparse in the Chebyshev system from $m \geq C s \ln^3(s) \ln(N)$ samples drawn i.i.d. from the Chebyshev measure $\frac{dx}{\pi\sqrt{1 - x^2}}$.
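Drawing from the Chebyshev measure is straightforward by the inverse transform $t = \cos(\pi u)$ with $u$ uniform on $[0,1]$; a minimal sketch of the resulting sampling matrix (my illustration, assuming numpy):

```python
import numpy as np

def chebyshev_sampling_matrix(m, N, rng=None):
    """A[l, j] = C_j(t_l), where C_0 = 1 and C_j = sqrt(2) * T_j for j >= 1,
    with t_l drawn i.i.d. from the Chebyshev measure dx / (pi*sqrt(1 - x^2))."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.cos(np.pi * rng.uniform(size=m))            # inverse-transform sampling
    A = np.cos(np.outer(np.arccos(t), np.arange(N)))   # T_j(t) = cos(j * arccos t)
    A[:, 1:] *= np.sqrt(2.0)
    return A, t
```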
Legendre polynomials

Legendre polynomials $L_j$, $j = 0, 1, 2, \dots$:

$\int_{-1}^{1} L_j(x) L_k(x)\, dx = \delta_{j,k}, \quad \|L_j\|_\infty \leq \sqrt{j + 1}, \quad j, k \in \mathbb{N}_0.$

$K = \max_{j=0,\dots,N-1} \|L_j\|_\infty \approx \sqrt{N}$. This leads to the bound

$m \geq C K^2 s \ln^3(s) \ln(N) = C N s \ln^3(s) \ln(N).$

Preconditioned system: $Q_j(x) = v(x) L_j(x)$ with $v(x) = (\pi/2)^{1/2} (1 - x^2)^{1/4}$ satisfies

$\int_{-1}^{1} Q_j(x) Q_k(x) \frac{dx}{\pi\sqrt{1 - x^2}} = \delta_{j,k}, \quad \|Q_j\|_\infty \leq \sqrt{3}, \quad j, k \in \mathbb{N}_0.$

Stable recovery of polynomials that are $s$-sparse in the Legendre system from $m \geq C s \ln^3(s) \ln(N)$ samples drawn i.i.d. from the Chebyshev measure $\frac{dx}{\pi\sqrt{1 - x^2}}$.
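The uniform bound $\|Q_j\|_\infty \leq \sqrt{3}$ is easy to probe numerically on a fine grid; a small sketch (my illustration, assuming numpy):

```python
import numpy as np
from numpy.polynomial import legendre

def preconditioned_legendre_sup(jmax, gridsize=200001):
    """Grid approximation of sup |Q_j| for Q_j = v * L_j, with the
    L2([-1,1], dx)-orthonormal Legendre polynomials L_j = sqrt(j + 1/2) P_j
    and the preconditioner v(x) = sqrt(pi/2) * (1 - x^2)^(1/4)."""
    x = np.linspace(-1.0, 1.0, gridsize)
    v = np.sqrt(np.pi / 2.0) * (1.0 - x * x) ** 0.25
    sups = []
    for j in range(jmax + 1):
        coeffs = np.zeros(j + 1)
        coeffs[j] = 1.0                      # select the j-th Legendre polynomial
        Lj = np.sqrt(j + 0.5) * legendre.legval(x, coeffs)
        sups.append(np.abs(v * Lj).max())
    return np.array(sups)                    # all entries stay below sqrt(3) ~ 1.732
```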
Spherical harmonics

$Y_k^l$, $-k \leq l \leq k$, $k \in \mathbb{N}_0$: orthonormal system in $L^2(S^2)$,

$\frac{1}{4\pi} \int_0^{2\pi}\!\!\int_0^{\pi} Y_k^l(\phi, \theta)\, \overline{Y_{k'}^{l'}(\phi, \theta)}\, \sin(\theta)\, d\theta\, d\phi = \delta_{k,k'} \delta_{l,l'}.$

$(\phi, \theta) \in [0, 2\pi) \times [0, \pi)$: spherical coordinates, $x = \cos(\phi)\sin(\theta)$, $y = \sin(\phi)\sin(\theta)$, $z = \cos(\theta)$.

With the ultraspherical polynomials $p_n^\alpha$,

$Y_k^l(\phi, \theta) = e^{i l \phi} (\sin\theta)^{|l|}\, p_{k - |l|}^{|l|}(\cos\theta), \quad (\phi, \theta) \in [0, 2\pi) \times [0, \pi).$
Restricted isometry property for spherical harmonics

$L^\infty$-bound: $\|Y_k^l\|_\infty \lesssim k^{1/2}$.

Preconditioning I (Krasikov '08): with $w(\theta, \phi) = |\sin(\theta)|^{1/2}$,

$\|w Y_k^l\|_\infty \leq C k^{1/4}.$

Preconditioning II (Burq, Dyatlov, Ward, Zworski '12): with $v(\theta, \phi) = |\sin^2(\theta) \cos(\theta)|^{1/6}$,

$\|v Y_k^l\|_\infty \leq C k^{1/6}.$

The RIP for the associated preconditioned random sampling matrix $\frac{1}{\sqrt{m}} A \in \mathbb{C}^{m \times N}$, with sampling points drawn according to

$\nu(d\theta, d\phi) = v^{-2}(\theta, \phi) \sin(\theta)\, d\theta\, d\phi = |\tan(\theta)|^{1/3}\, d\theta\, d\phi$

(up to normalization), holds with high probability if $m \geq C s N^{1/6} \log^4(N)$.
Questions

Can we take into account that $\|\psi_j\|_\infty$ grows with $j$, i.e., is not uniformly small? How can we combine sparsity with smoothness?

Use weights!
Trigonometric system: smoothness and weights

$\psi_j(t) = e^{2\pi i j t}$, $j \in \mathbb{Z}$, $t \in [0,1]$. The derivatives satisfy $\|\psi_j'\|_\infty = 2\pi |j|$, $j \in \mathbb{Z}$. For $f(t) = \sum_j x_j \psi_j(t)$ we have

$\|f\|_\infty + \|f'\|_\infty = \Big\|\sum_j x_j \psi_j\Big\|_\infty + \Big\|\sum_j x_j \psi_j'\Big\|_\infty \leq \sum_{j \in \mathbb{Z}} |x_j| \big(\|\psi_j\|_\infty + \|\psi_j'\|_\infty\big) = \sum_{j \in \mathbb{Z}} |x_j| (1 + 2\pi |j|) =: \|x\|_{\omega,1}.$

Weights model smoothness! Combining this with sparsity (compressibility) leads to weighted $\ell_p$-spaces with $0 < p \leq 1$.
Weighted norms and weighted sparsity

For a weight $\omega = (\omega_j)_{j \in \Gamma}$ with $\omega_j \geq 1$, introduce

$\|x\|_{\omega,p} := \Big(\sum_{j \in \Gamma} |x_j|^p \omega_j^{2-p}\Big)^{1/p}, \quad 0 < p \leq 2.$

Special cases: $\|x\|_{\omega,1} = \sum_{j \in \Gamma} |x_j| \omega_j$ and $\|x\|_{\omega,2} = \|x\|_2$.

Weighted sparsity: $\|x\|_{\omega,0} := \sum_{j : x_j \neq 0} \omega_j^2$; $x$ is called weighted $s$-sparse if $\|x\|_{\omega,0} \leq s$.
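Both quantities are one-liners in code; a small sketch (my illustration, assuming numpy):

```python
import numpy as np

def weighted_norm(x, w, p):
    """||x||_{w,p} = (sum_j |x_j|^p * w_j^(2 - p))^(1/p), for 0 < p <= 2."""
    return float((np.abs(x) ** p * w ** (2.0 - p)).sum() ** (1.0 / p))

def weighted_sparsity(x, w):
    """||x||_{w,0} = sum of w_j^2 over the support of x."""
    return float((w[x != 0] ** 2).sum())
```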
Weighted best $s$-term approximation error

Weighted best approximation:

$\sigma_s(x)_{\omega,p} := \inf_{z : \|z\|_{\omega,0} \leq s} \|x - z\|_{\omega,p}.$

Weighted quasi-best $s$-term approximation error of $x$: consider the nonincreasing rearrangement of $(|x_j| \omega_j^{-1})_{j \in \Gamma}$, i.e., a permutation $\pi$ such that

$|x_{\pi(1)}| \omega_{\pi(1)}^{-1} \geq |x_{\pi(2)}| \omega_{\pi(2)}^{-1} \geq \cdots$

Choose the largest $k$ such that $\sum_{l=1}^{k} \omega_{\pi(l)}^2 \leq s$, set $S = \{\pi(1), \dots, \pi(k)\}$ and

$\tilde\sigma_s(x)_{\omega,p} := \|x - x_S\|_{\omega,p},$

where $(x_S)_j = x_j$ for $j \in S$ and $(x_S)_j = 0$ for $j \notin S$. (Note that $\|x_S\|_{\omega,0} \leq s$ by construction.)
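The quasi-best approximation is a greedy, knapsack-style selection; a minimal sketch (my illustration, assuming numpy):

```python
import numpy as np

def quasi_best_sterm(x, w, s):
    """Weighted quasi-best s-term approximation: sort |x_j| / w_j in
    nonincreasing order and keep the longest prefix whose weighted budget
    sum of w_j^2 stays <= s. Returns x_S and the support S."""
    order = np.argsort(-np.abs(x) / w)
    budget, S = 0.0, []
    for j in order:
        if budget + w[j] ** 2 > s:
            break
        budget += w[j] ** 2
        S.append(int(j))
    x_S = np.zeros_like(x)
    x_S[S] = x[S]
    return x_S, S
```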
Weighted Stechkin estimate

Theorem. For a weight $\omega$, a vector $x$, $0 < p < q \leq 2$ and $s > \|\omega\|_\infty^2$,

$\sigma_s(x)_{\omega,q} \leq \tilde\sigma_s(x)_{\omega,q} \leq (s - \|\omega\|_\infty^2)^{1/q - 1/p} \|x\|_{\omega,p}.$

If $s \geq 2\|\omega\|_\infty^2$, say, then

$\sigma_s(x)_{\omega,q} \leq \tilde\sigma_s(x)_{\omega,q} \leq C_{p,q}\, s^{1/q - 1/p} \|x\|_{\omega,p}, \quad C_{p,q} = 2^{1/p - 1/q}.$

The lower bound on $s$ is natural: otherwise the single-element set $S = \{j\}$ with $\omega_j = \|\omega\|_\infty$ is not allowed as a support set.

Lemma. If $s \geq \|\omega\|_\infty^2$, then $\tilde\sigma_{3s}(x)_{\omega,p} \leq \sigma_s(x)_{\omega,p}$.
(Weighted) compressive sensing

Recover a weighted $s$-sparse (or weighted-compressible) vector $x$ from measurements $y = Ax$, where $A \in \mathbb{C}^{m \times N}$ with $m < N$.

Weighted $\ell_1$-minimization:

$\min_{z \in \mathbb{C}^N} \|z\|_{\omega,1} \quad \text{subject to } Az = y.$

Noisy version:

$\min_{z \in \mathbb{C}^N} \|z\|_{\omega,1} \quad \text{subject to } \|Az - y\|_2 \leq \eta.$
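Since $\|z\|_{\omega,1} = \sum_j \omega_j |z_j|$, this is an ordinary $\ell_1$ program with a rescaled objective; a minimal cvxpy sketch (assumed dependency, as before):

```python
import cvxpy as cp

def weighted_l1(A, y, w, eta=0.0):
    """Weighted l1-minimization: min sum_j w_j * |z_j| subject to Az = y,
    or to ||Az - y||_2 <= eta in the noisy case."""
    z = cp.Variable(A.shape[1], complex=True)
    objective = cp.Minimize(cp.norm1(cp.multiply(w, z)))
    constraints = [A @ z == y] if eta == 0.0 else [cp.norm(A @ z - y, 2) <= eta]
    cp.Problem(objective, constraints).solve()
    return z.value
```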
Weighted restricted isometry property (WRIP)

Definition. The weighted restricted isometry constant $\delta_{\omega,s}$ of a matrix $A \in \mathbb{C}^{m \times N}$ is defined as the smallest constant such that

$(1 - \delta_{\omega,s}) \|x\|_2^2 \leq \|Ax\|_2^2 \leq (1 + \delta_{\omega,s}) \|x\|_2^2$

for all $x \in \mathbb{C}^N$ with $\|x\|_{\omega,0} = \sum_{l : x_l \neq 0} \omega_l^2 \leq s$.

Since $\omega_j \geq 1$ by assumption, the classical RIP implies the WRIP: $\delta_{\omega,s} \leq \delta_{1,s} = \delta_s$. Alternative name: weighted uniform uncertainty principle (WUUP).
Recovery via weighted $\ell_1$-minimization

Theorem. Let $A \in \mathbb{C}^{m \times N}$ and $s \geq 2\|\omega\|_\infty^2$ be such that $\delta_{\omega,3s} < 1/3$. For $x \in \mathbb{C}^N$ and $y = Ax + e$ with $\|e\|_2 \leq \eta$, let $x^\#$ be a minimizer of

$\min \|z\|_{\omega,1} \quad \text{subject to } \|Az - y\|_2 \leq \eta.$

Then

$\|x - x^\#\|_{\omega,1} \leq C_1 \sigma_s(x)_{\omega,1} + D_1 \sqrt{s}\, \eta,$
$\|x - x^\#\|_2 \leq C_2 \frac{\sigma_s(x)_{\omega,1}}{\sqrt{s}} + D_2 \eta.$
Function interpolation

$\{\psi_j\}_{j \in \Gamma}$: finite ONS with respect to a probability measure $\nu$. Given samples $y_1 = f(t_1), \dots, y_m = f(t_m)$ of $f(t) = \sum_{j \in \Gamma} x_j \psi_j(t)$, reconstruction amounts to solving $y = Ax$ with the sampling matrix $A \in \mathbb{C}^{m \times N}$, $N = \#\Gamma$, given by $A_{l,k} = \psi_k(t_l)$.

Use weighted $\ell_1$-minimization to recover weighted-sparse or weighted-compressible $x$ when $m < \#\Gamma$. Choose $t_1, \dots, t_m$ i.i.d. at random according to $\nu$ in order to analyze the WRIP of the sampling matrix.
Weighted RIP of the random sampling matrix

$\psi_j : D \to \mathbb{C}$, $j \in \Gamma$, $N = \#\Gamma < \infty$, ONS w.r.t. a probability measure $\nu$. Weight $\omega$ with $\omega_j \geq \|\psi_j\|_\infty$. Sampling points $t_1, \dots, t_m$ taken i.i.d. at random according to $\nu$; random sampling matrix $A \in \mathbb{C}^{m \times N}$ with entries $A_{l,j} = \psi_j(t_l)$.

Theorem. If $m \geq C \delta^{-2} s \ln^3(s) \ln(N)$, then the weighted restricted isometry constant of $\frac{1}{\sqrt{m}} A$ satisfies $\delta_{\omega,s} \leq \delta$ with probability at least $1 - N^{-\ln^3 N}$.

This generalizes the previous results (Candès, Tao; Rudelson, Vershynin; Rauhut) for systems with $\|\psi_j\|_\infty \leq K$ for all $j \in \Gamma$, where the sufficient condition is $m \geq C \delta^{-2} K^2 s \log^4 N$.
Abstract weighted function spaces

$A_{\omega,p} = \Big\{f : f(t) = \sum_{j \in \Gamma} x_j \psi_j(t),\ \|f\|_{\omega,p} := \|x\|_{\omega,p} < \infty\Big\}.$

If $\omega_j \geq \|\psi_j\|_\infty$, then $\|f\|_\infty \leq \|f\|_{\omega,1}$. If $\omega_j \geq \|\psi_j\|_\infty + \|\psi_j'\|_\infty$ (when $D \subset \mathbb{R}$), then $\|f\|_\infty + \|f'\|_\infty \leq \|f\|_{\omega,1}$, and so on.
Interpolation via weighted $\ell_1$-minimization

Theorem. Assume $N = \#\Gamma < \infty$, $\omega_j \geq \|\psi_j\|_\infty$ and $0 < p < 1$. Choose $t_1, \dots, t_m$ i.i.d. at random according to $\nu$, where $m \geq C s \log^4(N)$ for $s \geq 2\|\omega\|_\infty^2$. Then with probability at least $1 - N^{-\ln^3 N}$ the following holds for each $f \in A_{\omega,p}$. Let $x^\#$ be the solution of

$\min_{z \in \mathbb{C}^N} \|z\|_{\omega,1} \quad \text{subject to } \sum_{j \in \Gamma} z_j \psi_j(t_l) = f(t_l),\ l = 1, \dots, m,$

and set $\tilde f(t) = \sum_{j \in \Gamma} x_j^\# \psi_j(t)$. Then

$\|f - \tilde f\|_\infty \leq \|f - \tilde f\|_{\omega,1} \leq C_1 s^{1 - 1/p} \|f\|_{\omega,p},$
$\|f - \tilde f\|_{L^2_\nu} \leq C_2 s^{1/2 - 1/p} \|f\|_{\omega,p}.$
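An end-to-end toy run of this scheme for the trigonometric system, stitching together the sketches above (illustrative assumptions: the helpers trig_sampling_matrix and weighted_l1 defined earlier, and the example target $f(t) = |t - 1/2|$):

```python
import numpy as np

J, m = 40, 30                                    # N = 2J + 1 = 81 > m
A, t, freqs = trig_sampling_matrix(m, J)
w = 1.0 + 2.0 * np.pi * np.abs(freqs)            # smoothness weights from above
f = lambda u: np.abs(u - 0.5)                    # example target function
x_hat = weighted_l1(A, f(t), w)
f_tilde = lambda u: (np.exp(2j * np.pi * np.outer(u, freqs)) @ x_hat).real

grid = np.linspace(0.0, 1.0, 1000)
print(np.max(np.abs(f(grid) - f_tilde(grid))))   # sup-norm interpolation error
```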
Quasi-interpolation in infinite-dimensional spaces

Now $\#\Gamma = \infty$, $\lim_{j} \omega_j = \infty$ and $\omega_j \geq \|\psi_j\|_\infty$.

Theorem. Let $f \in A_{\omega,p}$ for some $0 < p < 1$, and set $\Gamma_s = \{j \in \Gamma : \omega_j^2 \leq s/2\}$ for some $s$. Choose $t_1, \dots, t_m$ i.i.d. at random according to $\nu$, where $m \geq C s \log^4(\#\Gamma_s)$. With $\eta = \|f\|_{\omega,p} / \sqrt{s}$, let $x^\#$ be the solution to

$\min_{z \in \mathbb{C}^{\Gamma_s}} \|z\|_{\omega,1} \quad \text{subject to } \Big\|\Big(f(t_l) - \sum_{j \in \Gamma_s} z_j \psi_j(t_l)\Big)_{l=1}^{m}\Big\|_2 \leq \sqrt{m}\, \eta,$

and put $\tilde f(t) = \sum_{j \in \Gamma_s} x_j^\# \psi_j(t)$. Then, with probability at least $1 - N^{-\ln^3(N)}$ where $N = \#\Gamma_s$,

$\|f - \tilde f\|_\infty \leq \|f - \tilde f\|_{\omega,1} \leq C_1 s^{1 - 1/p} \|f\|_{\omega,p},$
$\|f - \tilde f\|_{L^2_\nu} \leq C_2 s^{1/2 - 1/p} \|f\|_{\omega,p}.$
Numerical example I for the trigonometric system

Original function: Runge's example $f(x) = \frac{1}{1 + 25x^2}$. Weights: $\omega_j = 1 + |j|$. 20 interpolation points chosen uniformly at random from $[-1, 1]$.

[Figure: least squares solution, unweighted $\ell_1$ minimizer, and weighted $\ell_1$ minimizer, each with its residual error.]
Numerical example II for the trigonometric system

Original function: $f(x) = |x|$. Weights: $\omega_j = 1 + |j|$. 20 interpolation points chosen uniformly at random from $[-1, 1]$.

[Figure: least squares solution, unweighted $\ell_1$ minimizer, and weighted $\ell_1$ minimizer, each with its residual error.]
Numerical example for Chebyshev polynomials

Original function: $f(x) = \frac{1}{1 + 25x^2}$. Weights: $\omega_j = 1 + j$. 20 interpolation points chosen i.i.d. at random according to the Chebyshev measure $d\nu(x) = \frac{dx}{\pi\sqrt{1 - x^2}}$.

[Figure: least squares solution, unweighted $\ell_1$ minimizer, and weighted $\ell_1$ minimizer, each with its residual error.]
Numerical example for Legendre polynomials

Original function: $f(x) = \frac{1}{1 + 25x^2}$. Weights: $\omega_j = 1 + j$. 20 interpolation points chosen i.i.d. at random according to the Chebyshev measure.

[Figure: least squares solution, unweighted $\ell_1$ minimizer, and weighted $\ell_1$ minimizer, each with its residual error.]
Back to spherical harmonics

$Y_k^l$, $-k \leq l \leq k$, $k \in \mathbb{N}_0$: spherical harmonics. Recall the $L^\infty$-bound $\|Y_k^l\|_\infty \lesssim k^{1/2}$ and the preconditioned $L^\infty$-bound for $v(\theta, \phi) = |\sin^2(\theta)\cos(\theta)|^{1/6}$:

$\|v Y_k^l\|_\infty \leq C k^{1/6}.$

Weighted RIP: with weights $\omega_{k,l} \asymp k^{1/6}$, the preconditioned random sampling matrix $\frac{1}{\sqrt{m}} A \in \mathbb{C}^{m \times N}$ satisfies $\delta_{\omega,s} \leq \delta$ with high probability if $m \geq C \delta^{-2} s \log^4(N)$.
Comparison of error bounds

Error bound for the reconstruction of $f \in A_{\omega,p}$ from $m \geq C s \ln^3(s) \ln(N)$ samples drawn i.i.d. at random from the measure $\nu(d\theta, d\phi) = |\tan(\theta)|^{1/3}\, d\theta\, d\phi$ via weighted $\ell_1$-minimization:

$\|f - \tilde f\|_\infty \leq \|f - \tilde f\|_{\omega,1} \leq C s^{1 - 1/p} \|f\|_{\omega,p}, \quad 0 < p < 1.$

Compare to the error estimate for unweighted $\ell_1$-minimization: if $m \geq C N^{1/6} s \ln^3(s) \ln(N)$, then

$\|f - \tilde f\|_\infty \leq C s^{1 - 1/p} \|f\|_p, \quad 0 < p < 1.$
Numerical experiments for sparse spherical harmonic recovery

Original function: $f(\theta, \phi) = \frac{1}{\theta^2 + 1/10}$.

[Figure: original function; recovery by unweighted $\ell_1$-minimization; recoveries by weighted $\ell_1$-minimization with weights $\omega_{k,l} = k^{1/6}$ and with a second weight choice.]
High-dimensional function interpolation

Tensorized Chebyshev polynomials on $D = [-1,1]^d$:

$C_k(t) = C_{k_1}(t_1) C_{k_2}(t_2) \cdots C_{k_d}(t_d), \quad k \in \mathbb{N}_0^d,$

with $C_{k_i}$ the $L^2$-normalized Chebyshev polynomials on $[-1,1]$. Then

$\int_{[-1,1]^d} C_k(t)\, C_j(t) \prod_{i=1}^{d} \frac{dt_i}{\pi\sqrt{1 - t_i^2}} = \delta_{k,j}, \quad j, k \in \mathbb{N}_0^d.$

Expansions $f(t) = \sum_{k \in \mathbb{N}_0^d} x_k C_k(t)$ with $\|x\|_p < \infty$ for $0 < p < 1$ and large $d$ (even $d = \infty$) appear in parametric PDEs (Cohen, DeVore, Schwab 2011, ...).
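A sketch of evaluating these tensorized basis functions and the weights $\omega_k = 2^{\|k\|_0/2}$ used below (my illustration, assuming numpy):

```python
import numpy as np

def tensor_chebyshev(k, t):
    """Evaluate C_k(t) = prod_i C_{k_i}(t_i) for a multi-index k (length d)
    at points t of shape (n, d), with C_0 = 1, C_j = sqrt(2)*cos(j*arccos(.))."""
    vals = np.ones(t.shape[0])
    for i, ki in enumerate(k):
        if ki > 0:
            vals *= np.sqrt(2.0) * np.cos(ki * np.arccos(t[:, i]))
    return vals

def tensor_weight(k):
    """omega_k = 2^(||k||_0 / 2): one factor sqrt(2) per active coordinate."""
    return 2.0 ** (np.count_nonzero(k) / 2.0)
```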
(Weighted) sparse recovery for tensorized Chebyshev polynomials

$L^\infty$-bound: $\|C_k\|_\infty = 2^{\|k\|_0 / 2}$.

Curse of dimension: the classical RIP bound requires $m \geq C 2^d s \ln^3(s) \ln(N)$.

Weights: $\omega_k = 2^{\|k\|_0 / 2}$. Weighted RIP bound: $m \geq C s \ln^3(s) \ln(N)$. Approximate recovery requires $x \in \ell_{\omega,p}$!
Comparison: classical interpolation vs. weighted $\ell_1$-minimization

Classical bound:

$\|f - \tilde f\|_\infty \leq C m^{-r/d} \|f\|_{C^r}.$

Interpolation via weighted $\ell_1$-minimization:

$\|f - \tilde f\|_\infty \leq C \Big(\frac{m}{\ln^4(m)}\Big)^{1 - 1/p} \|f\|_{\omega,p}, \quad 0 < p < 1.$

Better rate if $1/p - 1 > r/d$, i.e., $p < \frac{1}{r/d + 1}$. For instance, when $r = d$, then $p < 1/2$ is sufficient.
Advertisement

S. Foucart, H. Rauhut: A Mathematical Introduction to Compressive Sensing. Applied and Numerical Harmonic Analysis, Birkhäuser, 2013.
Rupert Lasser, all the best for your retirement!