Part IV Compressed Sensing


Aisenstadt Chair Course, CRM, September 2009
Part IV: Compressed Sensing
Stéphane Mallat, Centre de Mathématiques Appliquées, École Polytechnique

Conclusion to Super-Resolution

Sparse super-resolution is sometimes possible, but don't be fooled: it requires a signal that is sufficiently sparse and a transformed dictionary that is sufficiently incoherent. It does not work as is in many inverse problems, such as image interpolation. More structured representations are needed, which incorporate more prior information on the signal.

Representation from Linear Sampling

Linear sampling: an analog signal is projected on a basis $\{\phi_n\}_{n<N}$ of an approximation space $V_N$, which specifies a linear approximation:
$$P_{V_N} f(x) = \sum_{n<N} \langle f, \phi_n \rangle \, \tilde\phi_n(x).$$
Uniform sampling: $\phi_n(x) = \phi(x - Tn)$.
Sparse representation of the discrete signal $f[n] = \langle f, \phi_n \rangle \in \mathbb{R}^N$ in a basis $\{g_p\}_{p \in \Gamma}$, with the $M$ largest coefficients $\{\langle f, g_p \rangle\}_{p \in \Lambda}$, where $|\Lambda| = M$.

Compressive Sensing

Sparse random analog measurements:
$$Y[q] = \bar U f[q] + W[q] = \langle f, \bar u_q \rangle + W[q]$$
where the $\bar u_q(x)$ are realizations of a random process.
Discretization: if $\bar u_q(x) \in V_N$ then
$$Y[q] = U f[q] + W[q] = \langle f, u_q \rangle + W[q]$$
where $f[n] = \langle f, \phi_n \rangle$ and $u_q[n] \in \mathbb{R}^N$ is a random vector.
If $f$ is sparse in $\{g_p\}_{p \in \Gamma}$, can we recover $f$ from $Y$?

Compressive Sensing Recovery

From measurements $Y[q] = U f[q] + W[q]$ with a random operator $U$, sparse super-resolution estimation:
$$\tilde F = \sum_{p \in \tilde\Lambda} \tilde a[p] \, g_p$$
where $\tilde a[p]$ has a support $\tilde\Lambda$, computed with a sparse decomposition of $Y$ in $D_U = \{U g_p\}_{p \in \Gamma}$, by minimizing
$$\frac{1}{2} \Big\| \sum_{p \in \Gamma} a[p] \, U g_p - Y \Big\|^2 + T \sum_{p \in \Gamma} |a[p]|$$
or with an orthogonal matching pursuit.
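
A minimal sketch of this $\ell^1$ minimization (not from the lecture), assuming a Gaussian random $U$ and the Dirac sparsity basis so that $D_U = U$, solved by plain iterative soft thresholding; the sizes, threshold $T$ and iteration count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q, M = 256, 80, 5                          # signal size, measurements, sparsity

# M-sparse coefficients a in the Dirac basis, so f = a
a = np.zeros(N)
a[rng.choice(N, M, replace=False)] = rng.standard_normal(M)

U = rng.standard_normal((Q, N)) / np.sqrt(Q)  # random Gaussian measurement matrix
Y = U @ a                                     # noiseless measurements (W = 0)

# Minimize (1/2)||U b - Y||^2 + T ||b||_1 by iterative soft thresholding
T = 1e-4
step = 1.0 / np.linalg.norm(U, 2) ** 2        # 1 / ||U||^2 ensures convergence
b = np.zeros(N)
for _ in range(5000):
    b -= step * (U.T @ (U @ b - Y))           # gradient step on the quadratic term
    b = np.sign(b) * np.maximum(np.abs(b) - step * T, 0.0)  # soft threshold

print("max recovery error:", np.max(np.abs(b - a)))
```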

Restricted Isometry and Incoherence

Riesz basis condition for recovery stability: for all $a$ supported in $\Lambda$,
$$(1 - \delta_\Lambda) \sum_{p \in \Lambda} |a[p]|^2 \le \Big\| \sum_{p \in \Lambda} a[p] \, U g_p \Big\|^2 \le (1 + \delta_\Lambda) \sum_{p \in \Lambda} |a[p]|^2 .$$
Restricted isometry condition:
$$(1 - \delta_M) \sum_{p \in \Lambda} |a[p]|^2 \le \Big\| \sum_{p \in \Lambda} a[p] \, U g_p \Big\|^2 \le (1 + \delta_M) \sum_{p \in \Lambda} |a[p]|^2$$
with $\delta_\Lambda \le \delta_M(D_U) < 1$ if $|\Lambda| \le M$: any such family $\{U g_p\}_{p \in \Lambda}$ is nearly orthogonal.
Relation to incoherence: $\delta_M(D_U) \le (M - 1) \, \mu(D_U)$ with $\mu(D_U) = \max_{p \ne q} |\langle U g_p, U g_q \rangle|$.
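
A small sketch (illustrative, not lecture material) that estimates $\mu(D_U)$ and an empirical lower bound on $\delta_M(D_U)$ for a Gaussian $U$ in the Dirac basis, to compare with the bound $(M-1)\,\mu(D_U)$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, Q, M = 128, 64, 4
U = rng.standard_normal((Q, N)) / np.sqrt(Q)   # columns are the U g_p (Dirac basis)

D = U / np.linalg.norm(U, axis=0)              # normalize the dictionary columns
G = D.T @ D                                    # Gram matrix of D_U
mu = np.max(np.abs(G - np.eye(N)))             # mu(D_U) = max_{p != q} |<U g_p, U g_q>|

# Empirical delta_M: on a support Lambda, the Riesz bounds are the extreme
# eigenvalues of the Gram submatrix, so delta is their deviation from 1.
delta = 0.0
for _ in range(200):
    Lam = rng.choice(N, M, replace=False)
    eigs = np.linalg.eigvalsh(G[np.ix_(Lam, Lam)])
    delta = max(delta, np.max(np.abs(eigs - 1.0)))

print(f"mu = {mu:.3f}, empirical delta_{M} >= {delta:.3f}, (M-1) mu = {(M - 1) * mu:.3f}")
```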

Exact Recovery

Theorem: If $f = \sum_{p \in \Lambda} a[p] \, g_p$ with $|\Lambda| = M$ and $\delta_{3M} < 1/3$, then for $W = 0$ the minimizer
$$\tilde a = \arg\min_b \frac{1}{2} \Big\| \sum_{p \in \Gamma} b[p] \, U g_p - Y \Big\|^2 + T \, \|b\|_1$$
satisfies $\tilde a = a$: sparse signals are exactly recovered.

Gaussian Random Matrices

We want $\{U g_p\}_{p \in \Gamma}$ to be nearly uniformly distributed over the unit sphere of $\mathbb{R}^Q$, so that $\{U g_p\}_{p \in \Lambda}$ is as orthogonal as possible even when $\Lambda$ is not small. The distribution of a Gaussian white noise of variance 1 is a uniform measure in the neighborhood of the unit sphere of $\mathbb{R}^Q$. If $\{g_p\}_{p \in \Gamma}$ is an orthonormal basis of $\mathbb{R}^N$ and if $U$ is a $Q \times N$ matrix of values taken by independent Gaussian random variables (white noise), then the $\{U g_p\}_{p \in \Gamma}$ are values taken by $Q$ independent Gaussian random variables.
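
A quick numerical check of this concentration (an illustrative sketch, not from the lecture): the columns of a $Q \times N$ Gaussian matrix scaled by $1/\sqrt{Q}$ have norms that concentrate near 1 as $Q$ grows, i.e. they lie near the unit sphere of $\mathbb{R}^Q$.

```python
import numpy as np

rng = np.random.default_rng(2)
for Q in (16, 64, 256, 1024):
    # Q x N Gaussian white noise scaled so that E[||column||^2] = 1
    U = rng.standard_normal((Q, 512)) / np.sqrt(Q)
    norms = np.linalg.norm(U, axis=0)
    print(f"Q = {Q:4d}: column norms in [{norms.min():.3f}, {norms.max():.3f}]")
```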

RIP Stability for Gaussian Matrices

Theorem: If $U$ is a Gaussian random matrix, then for any $\delta < 1$ there exists $\beta > 0$ such that $\delta_M(D_U) \le \delta$ if $M \le \beta \, Q / \log(N/Q)$. Also valid for random Bernoulli matrices (random $+1$ and $-1$).
We need $Q \ge C M \log(N/M)$ measurements to recover $M$ values and $M$ unknown indices among $N$. Coding them would require on the order of $M \log(N/M)$ bits.

Perfect Recovery Constants

Monte-Carlo experiments for recovering signals with $M$ non-zero coefficients out of $N$, from $Q$ random Gaussian measurements, with $Q \ge C M \log(N/M)$.
[Plot: ratio of recovered signals with $Q = 100$ as a function of $M$; recovery is perfect for $Q/M > 6.2$ (legend also lists $Q/M > 7.7$, $Q/M = 11$, $Q/M = 16$).]
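
A reduced-scale sketch of such a Monte-Carlo experiment (assumptions: orthogonal matching pursuit as the recovery algorithm and far fewer trials than a real experiment; the lecture's plot may use a different solver, with a similar qualitative $Q/M$ threshold):

```python
import numpy as np

rng = np.random.default_rng(3)

def omp(U, Y, M):
    """Recover an M-sparse vector from Y = U a by orthogonal matching pursuit."""
    residual, support = Y.copy(), []
    for _ in range(M):
        support.append(int(np.argmax(np.abs(U.T @ residual))))  # most correlated atom
        coef, *_ = np.linalg.lstsq(U[:, support], Y, rcond=None)
        residual = Y - U[:, support] @ coef                      # orthogonal projection
    a = np.zeros(U.shape[1])
    a[support] = coef
    return a

N, Q, trials = 1024, 100, 50
for M in (5, 10, 15, 20, 25):
    ok = 0
    for _ in range(trials):
        a = np.zeros(N)
        a[rng.choice(N, M, replace=False)] = rng.standard_normal(M)
        U = rng.standard_normal((Q, N)) / np.sqrt(Q)
        ok += np.linalg.norm(omp(U, U @ a, M) - a) < 1e-6
    print(f"M = {M:2d} (Q/M = {Q / M:4.1f}): recovered {ok}/{trials}")
```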

Other Random Operators

Storing a random Gaussian matrix $U$ and computing $Uh$ requires $O(N^2)$ memory and calculations: too much. The RIP theorem remains valid for Bernoulli matrices (random $+1$ and $-1$), but they still require too much memory and computation. A similar RIP theorem is valid for a random projector on an orthonormal basis $\{g'_m\}_m$ which is highly incoherent with the sparsity basis $\{g_p\}_p$. This may require only $O(N)$ memory and $O(N \log N)$ computations.
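
A sketch of such a structured operator (illustrative assumption: the incoherent basis is the Fourier basis, applied with an FFT), measuring $Q$ random Fourier coefficients in $O(N \log N)$ operations without ever storing a $Q \times N$ matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
N, Q = 4096, 300
freqs = rng.choice(N, Q, replace=False)     # random subset of the Fourier basis

def measure(f):
    # Random projector on Q Fourier vectors: O(N log N) time, O(N) memory
    return np.fft.fft(f, norm="ortho")[freqs]

f = np.zeros(N)
f[rng.choice(N, 10, replace=False)] = 1.0   # sparse in the Dirac basis
Y = measure(f)                              # Q measurements
print(Y.shape)                              # (300,)
```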

Random Sparse Spike Inversion

Measurements: $Y = u \star f + W$ with $f[n] = \sum_{p \in \Lambda} a[p] \, \delta[n - p]$. A random wavelet performs a random sampling of the Fourier coefficients of $f$: $|\hat u[k]|$ is the indicator of a random set of frequencies. The Fourier and Dirac bases have a low coherence.
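
A minimal sketch of this measurement model (illustrative; the frequency set and sizes are arbitrary): build $u$ with $\hat u[k]$ equal to the indicator of a random frequency set, so that the circular convolution $u \star f$ keeps $\hat f$ exactly on that random set and kills it elsewhere.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 1024
freqs = rng.choice(N, 64, replace=False)
u_hat = np.zeros(N)
u_hat[freqs] = 1.0                      # hat u[k]: indicator of a random frequency set
u = np.fft.ifft(u_hat)                  # the random wavelet u[n] (complex here; a
                                        # conjugate-symmetric set would make it real)

f = np.zeros(N)                         # sparse spike train
f[rng.choice(N, 8, replace=False)] = rng.standard_normal(8)

Y = np.fft.ifft(np.fft.fft(f) * u_hat)  # circular convolution Y = u * f

# hat Y equals hat f on the random frequency set and 0 elsewhere
assert np.allclose(np.fft.fft(Y), np.fft.fft(f) * u_hat)
```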

Random Sparse Spike Inversion

[Figure: deterministic versus random seismic wavelet $u[n]$; incoherence $u \star \tilde u[p]$; original signal and the recovery obtained with each wavelet.]

Non-Linear Approximation

Coefficients sorted with decreasing amplitude: $\{\langle f, g_{m_k} \rangle\}_k$ with $|\langle f, g_{m_{k+1}} \rangle| \le |\langle f, g_{m_k} \rangle|$.
Non-linear approximation in an orthonormal basis:
$$f_M = \sum_{k=1}^{M} \langle f, g_{m_k} \rangle \, g_{m_k} \quad\text{and}\quad \|f - f_M\|^2 = \sum_{k=M+1}^{N} |\langle f, g_{m_k} \rangle|^2 .$$
If $|\langle f, g_{m_k} \rangle| = O(k^{-\alpha})$ then $\|f - f_M\|^2 = O(M^{1-2\alpha})$.
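
A numerical illustration of this rate (a sketch with synthetic coefficients, not lecture material): with sorted coefficients $k^{-\alpha}$, the tail sum of squares behaves like $M^{1-2\alpha}$.

```python
import numpy as np

N, alpha = 100000, 1.0
k = np.arange(1, N + 1, dtype=float)
coeffs = k ** (-alpha)                       # sorted |<f, g_mk>| ~ k^(-alpha)
for M in (10, 100, 1000):
    err = np.sum(coeffs[M:] ** 2)            # ||f - f_M||^2 = tail sum of squares
    print(f"M = {M:4d}: error = {err:.3e}, M^(1 - 2 alpha) = {M ** (1 - 2 * alpha):.3e}")
```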

Stability and Recovery Error

Theorem: There exists $C$ such that if $\delta_{3M} < 1/3$ and
$$\tilde a = \arg\min_b \frac{1}{2} \Big\| Y - \sum_{p \in \Gamma} b[p] \, U g_p \Big\|^2 + T \, \|b\|_1$$
then
$$\Big\| f - \sum_{p \in \Gamma} \tilde a[p] \, g_p \Big\| \le C \, M^{-1/2} \sum_{k=M}^{N} |\langle f, g_{m_k} \rangle| + C \, \|W\| ,$$
and if $|\langle f, g_{m_k} \rangle| = O(k^{-s})$ with $s > 1$ and $W = 0$, then
$$\Big\| f - \sum_{p \in \Gamma} \tilde a[p] \, g_p \Big\|^2 = O(M^{-2s+1}).$$
This requires $Q \ge C' M \log(N/M)$ random measurements.
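
A sketch checking this behaviour on a compressible signal (illustrative assumptions: Dirac sparsity basis, the same soft-thresholding solver as in the earlier sketch, untuned $T$ and iteration counts):

```python
import numpy as np

rng = np.random.default_rng(6)
N, s = 512, 1.5
a = np.arange(1, N + 1, dtype=float) ** (-s)  # |coefficients| decaying as k^(-s)
rng.shuffle(a)                                # unknown ordering of the support

for Q in (64, 128, 256):
    U = rng.standard_normal((Q, N)) / np.sqrt(Q)
    Y = U @ a                                 # noiseless measurements (W = 0)
    T, step = 1e-5, 1.0 / np.linalg.norm(U, 2) ** 2
    b = np.zeros(N)
    for _ in range(3000):
        b -= step * (U.T @ (U @ b - Y))
        b = np.sign(b) * np.maximum(np.abs(b) - step * T, 0.0)
    print(f"Q = {Q:3d}: ||f - f~||^2 = {np.sum((b - a) ** 2):.3e}")
```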

Approximation Constants

For $Q$ random measurements, $M$ is the number of basis coefficients defining an approximation having the same error. The ratio $Q/M$ is computed with a Monte-Carlo experiment for different decay exponents $s$, for $N = 1024$.
[Plot: $Q/M$ (from 10 to 18) as a function of $Q$ (from 150 to 300).]

Recovery Efficiency

Compressive sensing efficiency in random Gaussian and Fourier dictionaries: comparison of Basis Pursuit, Matching Pursuit and Orthogonal Matching Pursuit.
[Plots: two panels, one per dictionary, with values from 5.5 to 7.5 as a function of $Q$ (150 to 400).]

Image Compressive Sampling

Wavelet coefficients of images often have the decay exponent $s = 1$ of bounded variation images. $Q$ measurements with a linear uniform sampling satisfy $Q/M < 5$, relative to a non-linear approximation with $M$ coefficients. Direct compressive sampling is worse, with typically $Q/M > 7$. Improvements come from using some prior information on the coefficients: grouping wavelet coefficients scale by scale improves image approximations.

Compressive Sensing Applications

Robustness: compressive sensing is a democratic acquisition process where all samples are equally important; a missing sample introduces an error that is diluted across the signal.
Analog-to-digital converters: for signals that have a sparse Fourier transform, with a random time sampling; for ultrawide-band signals whose Nyquist rate is too large.
Single-pixel camera: random Bernoulli ($+1$ or $-1$) measurements of images with a single pixel at a very high sampling rate.
Magnetic resonance imaging: randomize as much as possible the Fourier sampling of images obtained with MRI, and use their wavelet sparsity to improve their resolution.

Conclusion to Compressive Sensing

Sparse super-resolution becomes stable with randomized measurements, with large potential applications. The asymptotic performance is equivalent to a non-linear approximation with a known signal, but the devil is in the constants, compared with a linear uniform sampling. There remain technological difficulties for the signal recovery: large amounts of memory and computation are required.