The Pros and Cons of Compressive Sensing


The Pros and Cons of Compressive Sensing Mark A. Davenport Stanford University Department of Statistics

Compressive Sensing
Replace samples with general linear measurements: y = Φx, where y contains m measurements, x is the sampled signal, and x is k-sparse. What are the pros and cons of CS in practice?

Review: Sparsity
A signal is sparse when it has few nonzero entries relative to its dimension: an image with many pixels but only a few large wavelet coefficients, or a signal with many samples but only a few large Fourier coefficients.

Core Theoretical Challenges
How should we design the matrix Φ so that the number of measurements m is as small as possible? How can we recover x from the measurements y?

Answers
Choose Φ at random: fill out the entries of Φ with i.i.d. samples from a sub-Gaussian distribution, i.e., project onto a random subspace. For recovery: lots and lots of algorithms.
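A minimal sketch of this random design, assuming NumPy; the dimensions are illustrative, and Gaussian entries are used as one common sub-Gaussian choice:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 100, 3            # m measurements, ambient dimension n, sparsity k

# Sub-Gaussian random measurement matrix: i.i.d. Gaussian entries,
# scaled so that columns have unit norm in expectation.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# A k-sparse signal and its compressive measurements y = Phi @ x.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x                     # y has m entries, far fewer than n
```

Note that the same Φ works for every k-sparse x; the design is nonadaptive, which matters for the noise discussion later.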

Compressive Sensing: An Apology
Objection 1: CS is discrete, finite-dimensional
Objection 2: Impact of noise
Objection 3: Impact of quantization

Analog Sensing is Matrix Multiplication
If the analog signal is bandlimited, its compressive measurements can be written as y = Φx, where x is the vector of Nyquist-rate samples of the signal.

Compressive Sensing: An Apology
Objection 1: CS is discrete, finite-dimensional
Objection 2: Impact of noise
Objection 3: Impact of quantization

Recovery from Noisy Measurements
Given y = Φx + e (measurement noise) or y = Φ(x + n) (signal noise), find x.
Optimization-based methods: basis pursuit, basis pursuit de-noising, Dantzig selector.
Greedy/iterative algorithms: OMP, StOMP, ROMP, CoSaMP, Thresh, SP, IHT, ...
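As one concrete member of the greedy family, here is a bare-bones OMP sketch (not the talk's own implementation; dimensions and the test signal are illustrative):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y ~ Phi @ x."""
    n = Phi.shape[1]
    residual = y.astype(float).copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on all columns selected so far.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat = np.zeros(n)
    x_hat[support] = coeffs
    return x_hat

# Noiseless sanity check: recover a 3-sparse signal from 128 random measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((128, 256)) / np.sqrt(128)
x = np.zeros(256)
x[[5, 37, 80]] = [3.0, -3.0, 3.0]
x_hat = omp(Phi, Phi @ x, 3)
```

In the noiseless case, once the correct support is found the least-squares step recovers x exactly; the noisy guarantees are what the next slides quantify.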

Stable Signal Recovery
Suppose that we observe y = Φx + e and that Φ satisfies the RIP of order 2k. Typical (worst-case) guarantee for k-sparse x: the recovery error ‖x̂ − x‖₂ is at most a constant times ‖e‖₂. Even if the support of x is provided by an oracle, the error can still be as large as a constant times ‖e‖₂, so this bound is essentially tight.

Stable Signal Recovery: Part II
Suppose now that Φ has unit-norm rows and satisfies a correspondingly rescaled RIP. In this case our guarantee becomes ‖x̂ − x‖₂ ≤ C √(n/m) ‖e‖₂: the error bound grows as we subsample more aggressively.

Expected Performance
Worst-case bounds can be pessimistic; what about the average error? Assume e is white noise with variance σ². For the oracle-assisted estimator the expected squared error scales like kσ²; if Φ is Gaussian, then for ℓ₁-minimization it scales like k log(n/k) σ².

White Signal Noise
What if our signal itself is contaminated with noise? Suppose Φ has orthogonal rows, each with norm √(n/m). If the signal noise is white with variance σ², then the measurement noise is white with variance (n/m)σ²: a 3 dB loss per octave of subsampling.

Noise Folding [D, Laska, Treichler, and Baraniuk - 2011]
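The variance amplification can be checked numerically; a sketch with assumed dimensions, where rows are made exactly orthogonal via a QR factorization:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, sigma2 = 1024, 128, 1.0   # 8x subsampling = 3 octaves

# Matrix with orthogonal rows, each rescaled to norm sqrt(n/m).
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))   # Q: n x m, orthonormal columns
Phi = np.sqrt(n / m) * Q.T                         # m x n, rows of norm sqrt(n/m)

# Measure pure white signal noise many times and estimate the output variance.
trials = 2000
noise = np.sqrt(sigma2) * rng.standard_normal((n, trials))
measured_var = (Phi @ noise).var()
# measured_var is close to (n/m) * sigma2, i.e. an 8x (9 dB) amplification,
# matching 3 dB per octave over 3 octaves of subsampling.
```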

Can We Do Better?
A better choice of Φ? A better recovery algorithm? If we knew the support of x a priori, then we could achieve the oracle error. Is there any way to match this performance without knowing the support of x in advance?

No!
Theorem: no choice of Φ and no recovery algorithm can do substantially better in expectation than the ℓ₁-minimization bound above. See also: Raskutti, Wainwright, and Yu (2009); Ye and Zhang (2010). [Candès and D - 2011]

Proof Recipe
Ingredients [Makes servings]:
Lemma 1: Suppose X is a set of well-separated k-sparse points whose images under Φ are close together. Then any recovery algorithm must incur a large worst-case error on X.
Lemma 2: There exists such a set X of k-sparse points.
Instructions: Combine ingredients and add a dash of linear algebra.

Proof Outline

Recall: Lemma 2
Lemma 2: There exists a set of well-separated k-sparse points whose images under Φ are close together.
Strategy: Construct the set by sampling (with replacement), and repeat for sufficiently many iterations; with high probability, the required properties are satisfied.
Key: Matrix Bernstein Inequality [Ahlswede and Winter, 2002]

Compressive Sensing: An Apology
Objection 1: CS is discrete, finite-dimensional
Objection 2: Impact of signal noise
Objection 3: Impact of quantization

Signal Recovery with Quantization
Finite-range quantization leads to saturation, i.e., unbounded errors on the largest measurements.
Quantization noise changes as we change the sampling rate.

Saturation Strategies
Rejection: Ignore saturated measurements.
Consistency: Retain saturated measurements, but use them only as inequality constraints on the recovered signal.
If the rejection approach works, the consistency approach should automatically do better.
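The rejection strategy can be sketched as follows. To isolate the effect of rejection from the choice of recovery algorithm, this toy uses an oracle-assisted least-squares fit (the support is assumed known); the saturation threshold T and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k, T = 80, 256, 4, 0.5        # T = saturation threshold (assumed)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)
support = np.array([10, 60, 130, 200])
x = np.zeros(n)
x[support] = [3.0, -2.0, 4.0, -2.5]
y = Phi @ x

# Finite-range quantizer: values beyond [-T, T] saturate (clip).
y_sat = np.clip(y, -T, T)
keep = np.abs(y) < T                # rejection: keep only unsaturated rows

def oracle_ls(Phi_rows, y_rows):
    """Least-squares fit on the (oracle-known) support, zeros elsewhere."""
    x_hat = np.zeros(n)
    x_hat[support], *_ = np.linalg.lstsq(Phi_rows[:, support], y_rows, rcond=None)
    return x_hat

err_naive = np.linalg.norm(oracle_ls(Phi, y_sat) - x)              # clipped rows distort the fit
err_reject = np.linalg.norm(oracle_ls(Phi[keep], y_sat[keep]) - x) # retained rows are exact
```

Because the retained measurements are unquantized here, rejection recovers x essentially exactly, while using the clipped values naively does not; this only works if enough rows survive rejection, which is what the democracy discussion below addresses.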

Rejection and Democracy
The RIP alone is not sufficient for the rejection approach. Example: a matrix can be a perfect isometry and yet every measurement must be kept. We would like any submatrix of Φ with sufficiently many rows to still satisfy the RIP: a strong, adversarial form of democracy.

Sketch of Proof
Step 1: Concatenate the identity to Φ, forming [Φ I].
Theorem: If Φ is a sub-Gaussian matrix with sufficiently many rows, then [Φ I] satisfies the RIP of the required order with high probability. [D, Laska, Boufounos, and Baraniuk - 2009]

Sketch of Proof
Step 2: Combine with the interference cancellation lemma. The fact that [Φ I] satisfies the RIP implies that if we take extra measurements, then we can delete arbitrary rows of Φ and retain the RIP. This is a strong, adversarial notion of democracy. [D, Laska, Boufounos, and Baraniuk - 2009]

Rejection In Practice
[figure: measured signal against saturation thresholds T and -T]

Benefits of Saturation
[figure: reconstruction performance and dB gain vs. saturation rate]
[Laska, Boufounos, D, and Baraniuk - 2011]

Potential for SNR Improvement?
By sampling at a lower rate, we can quantize to a higher bit depth, allowing for potential gains. [Le et al. - 2005]
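The trade-off can be made concrete with back-of-the-envelope arithmetic. The 6.02B + 1.76 dB figure is the standard uniform-quantizer SNR model, and the one-extra-bit-per-octave exchange rate is an assumption for illustration, not a number from the talk:

```python
# A B-bit uniform quantizer gives roughly 6.02*B + 1.76 dB of SNR,
# while noise folding costs about 3 dB per octave of subsampling.
def quant_snr_db(bits):
    return 6.02 * bits + 1.76

octaves = 3                      # subsample by 2**3 = 8x
extra_bits = 3                   # assumption: lower rate buys ~1 bit per octave
gain_db = quant_snr_db(12 + extra_bits) - quant_snr_db(12)   # quantization gain
loss_db = 3.0 * octaves                                      # noise-folding loss
net_db = gain_db - loss_db       # positive => subsampling helps in this regime
```

Under these assumptions the quantization gain (about 6 dB per extra bit) outruns the 3 dB-per-octave folding loss, which is the regime where CS pays off.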

Empirical SNR Improvement [D, Laska, Treichler, and Baraniuk - 2011]

Conclusions
Cons: signal noise can potentially be a problem; nonadaptivity entails a tremendous SNR loss; if you have signal noise or can get benefits from averaging, taking fewer measurements might be a really bad idea!
Pros: if quantization noise dominates the error, CS can potentially lead to big improvements; novel strategies for handling saturation errors; low-bit CS might be useful even when the number of measurements is relatively large.