On the coherence barrier and analogue problems in compressed sensing

1 On the coherence barrier and analogue problems in compressed sensing. Clarice Poon, University of Cambridge. June 1, 2017. Joint work with: Ben Adcock (Simon Fraser), Anders Hansen (Cambridge), Bogdan Roman (Cambridge).

2 Compressed sensing. Given an isometry U ∈ C^{N×N}, can we recover x ∈ C^N from P_Ω Ux, where P_Ω is the restriction of a vector to an index set Ω and |Ω| = m ≪ N? Theorem (Candès & Plan, 2011). Define the coherence of U to be µ(U) := max_{j,k=1,...,N} |u_{k,j}|². If x ∈ C^N is s-sparse, then by choosing Ω uniformly at random such that |Ω| = O( µ(U) · N · s · log(N) · log(ɛ⁻¹ + 1) ), x is recovered exactly from its samples P_Ω Ux with probability exceeding 1 − ɛ by solving min_{β∈C^N} ‖β‖₁ subject to P_Ω Uβ = P_Ω Ux. Notation: if y = P_Ω x, then y_j = x_j for all j ∈ Ω and y_j = 0 otherwise.

3 Compressed sensing. Given an isometry U ∈ C^{N×N}, can we recover x ∈ C^N from P_Ω Ux, where P_Ω is the restriction of a vector to an index set Ω and |Ω| = m ≪ N? Theorem (Candès & Plan, 2011). Let U be incoherent: µ(U) := max_{j,k=1,...,N} |u_{k,j}|² = O(N⁻¹). If x ∈ C^N is s-sparse, then by choosing Ω uniformly at random such that |Ω| = O( µ(U) · N · s · log(N) · log(ɛ⁻¹ + 1) ), x is recovered exactly from its samples P_Ω Ux with probability exceeding 1 − ɛ by solving min_{β∈C^N} ‖β‖₁ subject to P_Ω Uβ = P_Ω Ux.
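As a quick numerical illustration of these definitions (a minimal sketch, assuming only numpy): for the unitary DFT matrix every entry has modulus N^{−1/2}, so µ(U) attains the optimal value 1/N, and the measurement operator P_Ω U is just a random row selection.

```python
import numpy as np

def coherence(U):
    """mu(U) = max_{k,j} |u_{k,j}|^2 for an isometry U."""
    return np.max(np.abs(U)) ** 2

N = 64
U_dft = np.fft.fft(np.eye(N)) / np.sqrt(N)    # unitary DFT: an isometry
print(coherence(U_dft), 1 / N)                # both equal 1/N: maximally incoherent

# a uniformly random sampling set Omega with |Omega| = m
m = 16
Omega = np.random.choice(N, size=m, replace=False)
P_Omega_U = U_dft[Omega, :]                   # the subsampled measurement matrix
```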

4 Compressed sensing in action. Applications: Magnetic Resonance Imaging (MRI), X-ray Computed Tomography, Electron Microscopy, Seismology, Radio interferometry, ... Mathematically: recover f ∈ L²(R²) from ℱf(ω), ω ∈ Ω, where ℱf denotes the continuous Fourier transform of f and Ω is a finite set of frequencies. Standard approach: solve min_{z∈C^{N²}} ‖Uz‖₁ subject to ‖P_Ω U_df z − P_Ω f̂‖₂ ≤ δ, where U_df is the discrete Fourier transform and U is some sparsifying transform (e.g. wavelets).

5 Compressed sensing in action. If U is a discrete wavelet transform, then µ(U_df U⁻¹) = 1, so µ(U_df U⁻¹) · N · s · log(N) · log(ɛ⁻¹ + 1) > N! Also, uniform random sampling does not work. [Figure: original; Ω; wavelet reconstruction; TV reconstruction. Test phantom constructed by Guerquin-Kern, Lejeune, Pruessmann & Unser.]

6 Compressed sensing in action. If U is a discrete wavelet transform, then µ(U_df U⁻¹) = 1, so µ(U_df U⁻¹) · N · s · log(N) · log(ɛ⁻¹ + 1) > N! Also, uniform random sampling does not work. [Figure: original; Ω; wavelet reconstruction; TV reconstruction.] Lustig, Donoho & Pauly 07, Lustig et al. 08: sample more densely at low Fourier frequencies and less at higher Fourier frequencies. Why? Test phantom constructed by Guerquin-Kern, Lejeune, Pruessmann & Unser.

7 This talk. The assumptions of incoherence and sparsity are not appropriate; instead, we have asymptotic incoherence and asymptotic sparsity. CS is primarily finite dimensional; our theory will cover an infinite-dimensional setting. We present a theoretical result which shows how the presence of asymptotic incoherence and asymptotic sparsity allows for significant subsampling when solving min ‖z‖₁ subject to ‖P_Ω Uz − P_Ω Ux‖₂ ≤ δ.

8 Outline. Asymptotic incoherence. Sparsity structure. Compressed sensing in infinite dimensions. The recovery statement.

10 The coherence barrier. For essentially any wavelet basis {ϕ_j}_{j∈N}, if U_dw,N is its discrete wavelet transform and U_df,N is the discrete Fourier transform, then U_df,N U_dw,N⁻¹ → U in the weak operator topology as N → ∞, where {ψ_j}_{j∈N} = {e^{2πik·}}_{k∈Z} and U is the infinite matrix with entries ⟨ϕ₁,ψ₁⟩, ⟨ϕ₂,ψ₁⟩, ... in the first row, ⟨ϕ₁,ψ₂⟩, ⟨ϕ₂,ψ₂⟩, ... in the second row, and so on, with µ(U) ≥ c > 0. Any system arising from the discretization of a continuous problem will run into the coherence barrier.

11 Asymptotic incoherence. If U is the Fourier-wavelets matrix, then µ(P_N^⊥ U), µ(U P_N^⊥) = O(N⁻¹). [Figure: decay of the coherence, Fourier to DB4 and Fourier to Legendre polynomials.] Notation: P_N x = (x₁, ..., x_N, 0, 0, ...) and P_N^⊥ x = (0, ..., 0, x_{N+1}, x_{N+2}, ...).
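A sketch of this decay in the discrete setting, assuming numpy and using the Haar basis as a stand-in wavelet family (the discrete Fourier/Haar pair substitutes for the continuous problem): the coherence of the tail rows P_N^⊥ U falls off as N grows.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal discrete Haar matrix (rows are basis vectors), n a power of 2."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    scaling = np.kron(h, [1.0, 1.0])                # coarse/scaling part
    wavelet = np.kron(np.eye(n // 2), [1.0, -1.0])  # detail/wavelet part
    return np.vstack([scaling, wavelet]) / np.sqrt(2)

n = 512
H = haar_matrix(n)
F = np.fft.fft(np.eye(n)) / np.sqrt(n)              # unitary DFT
order = np.argsort(np.abs(np.fft.fftfreq(n, d=1.0 / n)), kind="stable")
U = (F @ H.T)[order]                                # rows ordered by |frequency|

# coherence of the tail P_N^perp U decays roughly like 1/N:
for N in [8, 32, 128]:
    print(N, np.max(np.abs(U[N:]) ** 2))
```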

12 Local coherence. Instead of coherence, divide U into rectangular blocks using N = (N_j)_{j=1}^r, M = (M_j)_{j=1}^r, and consider the local coherence µ_{N,M}(k, l) = µ( P_{N_{k-1}}^{N_k} U P_{M_{l-1}}^{M_l} ), where P_n^m α = (..., 0, α_{n+1}, α_{n+2}, ..., α_m, 0, ...). Implication of asymptotic incoherence: sample more at low Fourier frequencies, where the local coherence is high, and less at higher Fourier frequencies.

13 Local coherence. Instead of coherence, divide U into rectangular blocks using N = (N_j)_{j=1}^r, M = (M_j)_{j=1}^r, and consider the local coherence µ_{N,M}(k, l) = µ( P_{N_{k-1}}^{N_k} U P_{M_{l-1}}^{M_l} ), where P_n^m α = (..., 0, α_{n+1}, α_{n+2}, ..., α_m, 0, ...). Implication of asymptotic incoherence: sample more at low Fourier frequencies, where the local coherence is high, and less at higher Fourier frequencies. Krahmer & Ward (2014): O( s log³(s) log²(N) ) Fourier samples for Haar wavelet sparsity and O( s log⁵(s) log³(N) ) Fourier samples for TV sparsity.
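The local coherences are straightforward to tabulate numerically; a sketch, assuming numpy and reusing the Fourier-Haar matrix U from the earlier sketch (the dyadic level boundaries below are illustrative):

```python
import numpy as np

def local_coherence(U, N_levels, M_levels):
    """mu_{N,M}(k, l): coherence of the rectangular block
    P_{N_{k-1}}^{N_k} U P_{M_{l-1}}^{M_l} for every pair (k, l)."""
    Nb = [0] + list(N_levels)
    Mb = [0] + list(M_levels)
    mu = np.zeros((len(N_levels), len(M_levels)))
    for k in range(len(N_levels)):
        for l in range(len(M_levels)):
            block = U[Nb[k]:Nb[k + 1], Mb[l]:Mb[l + 1]]
            mu[k, l] = np.max(np.abs(block)) ** 2
    return mu

# e.g. dyadic levels: high coherence near the diagonal, decay away from it
print(local_coherence(U, [8, 32, 128, 512], [8, 32, 128, 512]).round(4))
```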

14 Outline. Asymptotic incoherence. Sparsity structure. Compressed sensing in infinite dimensions. The recovery statement.

15 Sparsity and the flip test. In standard CS, the only signal structure considered is sparsity, and RIP-based results consider the recovery of all s-sparse signals using one Ω. In contrast, the flip test will demonstrate that we must look beyond sparsity. Consider the reconstruction of x from P_Ω U_df x by solving min ‖z‖₁ subject to P_Ω U_df U_dw⁻¹ z = P_Ω U_df x. [Figure: Ω; x; reconstruction.]

16 The flip test. Let α be the wavelet coefficients of x. [Figure: the truncated coefficient sequences α and α_flip.] Let α_flip = (α_N, ..., α_1) and x_flip = U_dw⁻¹ α_flip. If it were enough to consider sparsity when choosing Ω, then the same Ω would yield α_f ∈ argmin_z ‖z‖₁ subject to P_Ω U_df U_dw⁻¹ z = P_Ω U_df x_flip with α_f ≈ α_flip, hence α_f^flip ≈ α, and so x̂ = U_dw⁻¹ α_f^flip would recover x = U_dw⁻¹ α.
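A sketch of the flip test as a procedure, assuming numpy, orthonormal transform matrices U_df and U_dw, and a hypothetical basis-pursuit routine solve_bp(A, b) for min ‖z‖₁ s.t. Az = b (any ℓ¹ solver will do):

```python
import numpy as np

def flip_test(x, U_df, U_dw, Omega, solve_bp):
    """Compare the standard reconstruction with the flipped one."""
    A = U_df[Omega, :] @ np.linalg.inv(U_dw)      # P_Omega U_df U_dw^{-1}

    # standard reconstruction of x
    alpha_std = solve_bp(A, U_df[Omega, :] @ x)

    # flip the wavelet coefficients, re-measure, solve, then flip back
    alpha = U_dw @ x
    x_flip = np.linalg.inv(U_dw) @ alpha[::-1]    # same sparsity, permuted support
    alpha_f = solve_bp(A, U_df[Omega, :] @ x_flip)

    rec_std = np.linalg.inv(U_dw) @ alpha_std
    rec_flip = np.linalg.inv(U_dw) @ alpha_f[::-1]
    # if sparsity alone mattered, rec_flip would match rec_std in quality
    return rec_std, rec_flip
```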

17 The flip test. [Figure: Ω; standard reconstruction; reconstruction from flipped coefficients.] We can repeat this test for different images, different sampling patterns, or even change the sparsifying transform to *-lets or total variation, min_{z∈C^{N²}} ‖z‖_TV subject to P_Ω U_df z = P_Ω U_df x ... and observe that the optimal choice of Ω cannot depend on sparsity alone.

19 The flip test. [Figure: Ω; image 1; image 2.] We can repeat this test for different images, different sampling patterns, or even different sparsifying transforms such as *-lets or total variation, min_{z∈C^{N²}} ‖z‖_TV subject to P_Ω U_df z = P_Ω U_df x ... and observe that the optimal choice of Ω cannot depend on sparsity alone.

20 The flip test. [Figure: Ω; reconstructions of image 1 and image 2.] We can repeat this test for different images, different sampling patterns, or even different sparsifying transforms such as *-lets or total variation, min_{z∈C^{N²}} ‖z‖_TV subject to P_Ω U_df z = P_Ω U_df x. The optimal choice of Ω cannot depend on sparsity alone.

21 Remark on the RIP. The flip test demonstrates that Ω cannot depend on sparsity alone and, in fact, that the RIP is absent. This is in contrast to random Gaussian measurements, which are insensitive to sparsity structure: [Figure: standard reconstruction; reconstruction from flipped coefficients.]

22 Asymptotic sparsity. Natural images are not just sparse, but asymptotically sparse. Given ɛ ∈ (0, 1) and f = Σ_{j∈N} α_j ϕ_j, we define the number of significant wavelet coefficients of f in the k-th scale as s_k(ɛ) = min{ n : ‖ Σ_{j=1}^{n} α_{π(j)} ϕ_{π(j)} ‖ ≥ ɛ ‖ Σ_{j=M_{k-1}+1}^{M_k} α_j ϕ_j ‖ }, where {M_{k-1}+1, ..., M_k} are the indices corresponding to the k-th scale and π is a permutation of the indices in {M_{k-1}+1, ..., M_k} such that |α_{π(1)}| ≥ |α_{π(2)}| ≥ |α_{π(3)}| ≥ ...
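Since the basis is orthonormal, s_k(ɛ) can be computed directly from the coefficient magnitudes; a sketch assuming numpy, with level_bounds = [M_0, M_1, ..., M_r] (zero-based):

```python
import numpy as np

def significant_per_level(alpha, level_bounds, eps):
    """s_k(eps): the smallest n such that the n largest coefficients of
    level k carry a fraction eps of that level's l2 norm."""
    s = []
    for k in range(len(level_bounds) - 1):
        a = np.sort(np.abs(alpha[level_bounds[k]:level_bounds[k + 1]]))[::-1]
        partial = np.sqrt(np.cumsum(a ** 2))   # norms of the best n-term sums
        s.append(int(np.searchsorted(partial, eps * partial[-1]) + 1))
    return s

# for wavelet coefficients of natural images, s_k(eps) grows much more
# slowly than the level sizes M_k - M_{k-1} at fine scales
```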

23 Asymptotic sparsity. [Figure: relative per-level sparsity s_k(ɛ)/(M_k − M_{k-1}) against the relative threshold ɛ for levels 1-8, with worst and best sparsity.] s_k(ɛ)/(M_k − M_{k-1}) → 0 as k → ∞. Variable density sampling patterns work because they exploit this additional structure.

24 Sparsity in levels. For M = (M_j)_{j=1}^r ∈ N^r, s = (s_j)_{j=1}^r ∈ N^r with 0 = M_0 < M_1 < ... < M_r, a vector α ∈ ℓ²(N) is (s, M)-sparse if |{j : α_j ≠ 0} ∩ {M_{k-1}+1, ..., M_k}| = s_k for each k = 1, ..., r.
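A small checker for this definition, assuming numpy (written with ≤ s_k so that vectors with fewer nonzeros per level also pass):

```python
import numpy as np

def is_sparse_in_levels(alpha, s, M):
    """True if alpha has at most s_k nonzeros in each level
    {M_{k-1}+1, ..., M_k}, with M_0 = 0 (zero-based slicing)."""
    bounds = [0] + list(M)
    return all(
        np.count_nonzero(alpha[bounds[k]:bounds[k + 1]]) <= s[k]
        for k in range(len(s))
    )
```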

25 Outline. Asymptotic incoherence. Sparsity structure. Compressed sensing in infinite dimensions. The recovery statement.

26 A continuous framework. Typical model for the sampling operator: U_df U⁻¹. But our samples are of the continuous Fourier transform, not of the discrete one. In general, modelling infinite-dimensional problems using finite-dimensional techniques will lead to a mismatch with the true samples.

27 A continuous framework. Typical model for the sampling operator: U_df U⁻¹. But our samples are of the continuous Fourier transform, not of the discrete one. In general, modelling infinite-dimensional problems using finite-dimensional techniques will lead to a mismatch with the true samples. Consider f̂ = { f̂(j) : |j| ≤ N }, U_df : C^{2N+1} → C^{2N+1}, and min ‖U_dw g‖₁ subject to P_Ω U_df g = f̂. If Ω = {1, ..., 2N+1}, then the solution is the truncated Fourier representation f(x) = Σ_{|j|≤N} f̂(j) e^{πijx/N}. But the wavelet representation of this f need not be sparse!

28 A continuous framework. Consider two orthonormal bases of a Hilbert space H, e.g. L²(0,1)^d: a sampling basis S = {ψ_j}_{j∈N}, e.g. the Fourier basis, and a sparsity basis R = {ϕ_j}_{j∈N}, e.g. a wavelet basis. Let f ∈ H be the object of interest. Write x_j = ⟨f, ϕ_j⟩ for the coefficients of f, i.e. f = Σ_{j∈N} x_j ϕ_j, and y_j = ⟨f, ψ_j⟩ for the measurements of f. Define the infinite matrix U = ( ⟨ϕ_j, ψ_i⟩ )_{i,j∈N} ∈ B(ℓ²(N)) and note that Ux = y; the observed measurements are P_Ω Ux = P_Ω y.

30 Infinite dimensional CS. Solve min_{z∈ℓ¹(N)} ‖z‖₁ subject to ‖P_Ω Uz − P_Ω f̂‖₂ ≤ δ. (P_δ(f̂)) Adcock & Hansen, Generalized sampling and infinite-dimensional compressed sensing (2011). Implementation: Guerquin-Kern, Haberlin, Pruessmann & Unser, A Fast Wavelet-Based Reconstruction Method for Magnetic Resonance Imaging (2011).

31 Infinite dimensional CS. Solve min_{z∈ℓ¹(N)} ‖z‖₁ subject to ‖P_Ω U P_K z − P_Ω f̂‖₂ ≤ δ. (P_δ(f̂)) Adcock & Hansen, Generalized sampling and infinite-dimensional compressed sensing (2011). Implementation: Guerquin-Kern, Haberlin, Pruessmann & Unser, A Fast Wavelet-Based Reconstruction Method for Magnetic Resonance Imaging (2011). In the Fourier sampling/wavelet reconstruction case, if Ω ⊂ {1, ..., N}, then choosing K = O(N) yields the same error bounds as the infinite-dimensional problem.

32 Infinite dimensional CS. Considerations: What is the bandwidth N that we should draw from, i.e. Ω ⊂ {1, ..., N}? Given a finite sampling bandwidth, we cannot expect to recover any s-sparse vector x over ℕ stably; we can only expect the recovery of sparse vectors such that Supp(x) ⊂ {1, ..., M}.

33 Infinite dimensional CS. Considerations: What is the bandwidth N that we should draw from, i.e. Ω ⊂ {1, ..., N}? Given a finite sampling bandwidth, we cannot expect to recover any s-sparse vector x over ℕ stably; we can only expect the recovery of sparse vectors such that Supp(x) ⊂ {1, ..., M}. Adcock & Hansen: N and K = N/|Ω| should satisfy the balancing property relative to M and s: ‖P_M U* P_N U P_M − P_M‖ ≤ ( 8 log^{1/2}(sKM) )⁻¹ and ‖P_M^⊥ U* P_N U P_M‖ ≤ 1/8. Typically, N > M.

34 Infinite dimensional CS. Considerations: What is the bandwidth N that we should draw from, i.e. Ω ⊂ {1, ..., N}? Given a finite sampling bandwidth, we cannot expect to recover any s-sparse vector x over ℕ stably; we can only expect the recovery of sparse vectors such that Supp(x) ⊂ {1, ..., M}. Adcock & Hansen: N and K = N/|Ω| should satisfy the balancing property relative to M and s: ‖P_M U* P_N U P_M − P_M‖ ≤ ( 8 log^{1/2}(sKM) )⁻¹ and ‖P_M^⊥ U* P_N U P_M‖ ≤ 1/8. Typically, N > M. Under the balancing property, one can guarantee the recovery of s-sparse vectors with support on {1, ..., M} from O( s · N · µ(U) · log(Ñ) ) samples drawn uniformly at random from {1, ..., N}.

35 Remarks for the Fourier-wavelets case. Let S be a Fourier basis for L²[0,1] and let R be any compactly supported orthonormal wavelet basis. min_{z∈ℓ¹(N)} ‖z‖₁ subject to ‖P_Ω Uz − P_Ω f̂‖₂ ≤ δ. Fast transforms (Gataric & P. 16): given x ∈ C^M and y ∈ C^N, P_N U P_M x and P_M U* P_N y can be computed in O(N log(M)) basic operations. Linear scaling (P. 14): suppose the scaling function Φ and wavelet Ψ satisfy, for some α ≥ 2, |Φ̂^{(k)}(ξ)| ≲ (1+|ξ|)^{−α} and |Ψ̂^{(k)}(ξ)| ≲ (1+|ξ|)^{−α} for k = 0, 1, 2. If x are the wavelet coefficients of f, then by choosing Ω = {1, ..., N} with N = O(M) and δ = 0, any minimizer x̂ satisfies ‖x − x̂‖_{ℓ¹} ≲ ‖P_M^⊥ x‖_{ℓ¹}.

36 Example from electron microscopy. f(x, y) = cos²(7πx/2) cos²(7πy/2) exp(−x−y). [Figure: original; infinite-dimensional CS, Err = 4.1%; finite-dimensional CS, Err = 16.0%.]

37 Outline. Asymptotic incoherence. Sparsity structure. Compressed sensing in infinite dimensions. The recovery statement.

38 Multi-level sampling scheme. Let r ∈ N, N = (N_k)_{k=1}^r ∈ N^r, m = (m_k)_{k=1}^r ∈ N^r be such that 0 = N_0 < N_1 < ... < N_r = N and m_k ≤ N_k − N_{k-1}. Then Ω = Ω_1 ∪ ... ∪ Ω_r is an (N, m)-sampling scheme if Ω_k consists of m_k indices drawn uniformly at random from {N_{k-1}+1, ..., N_k}.

39 Multi-level sampling scheme. Let r ∈ N, N = (N_k)_{k=1}^r ∈ N^r, m = (m_k)_{k=1}^r ∈ N^r be such that 0 = N_0 < N_1 < ... < N_r = N and m_k ≤ N_k − N_{k-1}. Then Ω = Ω_1 ∪ ... ∪ Ω_r is an (N, m)-sampling scheme if Ω_k consists of m_k indices drawn uniformly at random from {N_{k-1}+1, ..., N_k}. Q: Let U ∈ B(ℓ²(N)) be an isometry. If x is approximately (s, M)-sparse, i.e. σ_{s,M}(x) = inf{ ‖z − x‖₁ : z is (s, M)-sparse } ≪ 1, then how should N and m be chosen so as to guarantee robust and stable recovery of x by solving inf_{z∈ℓ¹(N)} ‖z‖₁ subject to ‖P_Ω Ux − P_Ω Uz‖₂ ≤ δ?
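A sketch of drawing an (N, m)-sampling scheme, assuming numpy; the level boundaries and per-level counts below are illustrative:

```python
import numpy as np

def multilevel_scheme(N_levels, m, rng=None):
    """Draw an (N, m)-sampling scheme: m_k indices chosen uniformly at
    random (without replacement) from each level {N_{k-1}+1, ..., N_k}."""
    rng = np.random.default_rng(rng)
    bounds = [0] + list(N_levels)
    Omega = []
    for k, m_k in enumerate(m):
        level = np.arange(bounds[k] + 1, bounds[k + 1] + 1)  # 1-based indices
        Omega.append(rng.choice(level, size=m_k, replace=False))
    return np.sort(np.concatenate(Omega))

# e.g. dense sampling at low levels, progressively sparser at high ones:
Omega = multilevel_scheme([32, 128, 512, 2048], m=[32, 64, 96, 128])
```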

40 Recovery result (Adcock, Hansen, P. & Roman). Let ɛ ∈ (0, e⁻¹] and let x be approximately (s, M)-sparse. Suppose that Ω = Ω_{N,m} satisfies the following. (i) N = N_r and K = max_{k=1,...,r} (N_k − N_{k-1})/m_k satisfy the balancing property: ‖P_{M_r} U* P_{N_r} U P_{M_r} − P_{M_r}‖ ≤ ( 8 log^{1/2}(sKM_r) )⁻¹ and ‖P_{M_r}^⊥ U* P_{N_r} U P_{M_r}‖ ≤ 1/8. (ii) m_k ≳ (N_k − N_{k-1}) · ( Σ_{l=1}^{r} µ_{N,M}(k, l) s_l ) · log(ɛ⁻¹) · log( K M √s ). (iii) m_k ≳ m̂_k · log(ɛ⁻¹) · log( K M √s ), where m̂_k satisfies 1 ≳ Σ_{k=1}^{r} ( (N_k − N_{k-1})/m̂_k − 1 ) µ_{N,M}(k, l) s_k for each l. Then, with probability exceeding 1 − sɛ, any minimizer ξ ∈ ℓ¹(N) satisfies ‖ξ − x‖₂ ≲ δ √K ( 1 + L √s ) + σ_{s,M}(x), where L = 1 + √(log₂(6ɛ⁻¹)) / log₂(4KM√s).

41 Recovery of wavelet coefficients from partial Fourier data. (N_k)_{k=1}^r and (M_k)_{k=1}^r correspond to wavelet scales. The mother wavelet Ψ has v vanishing moments, and there exist α ≥ 1, C > 0 such that |Ψ̂(ξ)| ≤ C (1+|ξ|)^{−α} for all ξ ∈ R. It suffices that (i) N_r ≳ M_r^{1+1/(2α−1)} (log₂(4 M_r K √s))^{1/(2α−1)} (or even N_r ≳ M_r (log₂(4 M_r K √s))^{1/(2α−1)} when Ψ is sufficiently smooth), and (ii) m_k ≳ L · (N_k − N_{k-1})/N_{k-1} · ( ŝ_k + Σ_{l=1}^{k-2} s_l 2^{−α(k−l)} + Σ_{l=k+2}^{r} s_l 2^{−v(l−k)} ), where ŝ_k = max{s_{k-1}, s_k, s_{k+1}} and L = log(ɛ⁻¹) · log( K N √s ). NB: m_1 + ... + m_r ≳ L · (s_1 + ... + s_r).
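To get a feel for condition (ii), here is a sketch that evaluates its right-hand side up to constants for given per-level sparsities and decay parameters (all inputs below are illustrative, not taken from the talk):

```python
def sample_estimates(N_levels, s, alpha, v, L=1.0):
    """Per-level sample counts suggested by (ii), up to constants:
    m_k ~ L * (N_k - N_{k-1}) / N_{k-1} * (s_hat_k
          + sum_{l <= k-2} s_l 2^{-alpha(k-l)} + sum_{l >= k+2} s_l 2^{-v(l-k)})."""
    r = len(s)
    N = [0] + list(N_levels)
    m = []
    for k in range(1, r + 1):
        s_hat = max(s[max(k - 2, 0):min(k + 1, r)])   # max{s_{k-1}, s_k, s_{k+1}}
        low = sum(s[l - 1] * 2.0 ** (-alpha * (k - l)) for l in range(1, k - 1))
        high = sum(s[l - 1] * 2.0 ** (-v * (l - k)) for l in range(k + 2, r + 1))
        ratio = (N[k] - N[k - 1]) / max(N[k - 1], 1)  # guard the first level
        m.append(L * ratio * (s_hat + low + high))
    return m

# asymptotically sparse levels: the interference terms decay geometrically
print(sample_estimates([16, 64, 256, 1024], s=[16, 30, 40, 50], alpha=2, v=2))
```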

42 Resolution Dependence (5% samples, varying resolution). Asymptotic sparsity and asymptotic incoherence are only witnessed when N is large. Thus, variable density sampling only reaps their benefits for large values of N, and the success of compressed sensing is resolution dependent. [Figure: 256×256, error 19.86%; 512×512, error 10.69%.]

43 Resolution Dependence (5% samples, varying resolution). [Figure: 1024×1024, error 7.35%; 2048×2048, error 4.87%; 4096×4096, error 3.06%.]

44 Recovering Fine Details. At finer wavelet scales, the presence of sparsity and incoherence with the Fourier samples allows us to subsample. Thus, compressed sensing allows one to enhance fine details without increasing the number of samples. In the next example, consider the reconstruction of a test phantom with details added at the finest wavelet scale.

45 Recovering Fine Details. Figure: linear reconstruction from the first Fourier samples (6.25%).

46 Recovering Fine Details. Figure: reconstruction from a multilevel scheme using Fourier samples (6.25%).

47 Conclusions. There are many real-world problems where there is no incoherence or RIP. We introduced a CS theory based on local coherence and local sparsities, and demonstrated that the successful recovery of wavelet coefficients from partial Fourier data can be explained by asymptotic incoherence and asymptotic sparsity. Two key consequences of our theory: (1) CS is resolution dependent. (2) Successful recovery is signal dependent; thus, an understanding of the structure imposed by the sparsifying transform can lead to optimal sampling patterns. On a practical note, one should see compressed sensing as a means of enhancing resolution.

49 References.
Breaking the coherence barrier: A new theory for compressed sensing. Adcock, Hansen, Poon & Roman. Forum of Mathematics, Sigma, Vol. 5, Cambridge University Press (2017).
Generalized sampling and infinite-dimensional compressed sensing. Adcock & Hansen. Foundations of Computational Mathematics 16.5 (2016).
A consistent and stable approach to generalized sampling. Poon. Journal of Fourier Analysis and Applications, 20(5) (2014).
A practical guide to the recovery of wavelet coefficients from Fourier measurements. Gataric & Poon. SIAM Journal on Scientific Computing, 38(2), A1075-A1099 (2016).
Thank you for your attention.
