Sparse solutions of underdetermined systems


I-Liang Chern. September 22, 2016.

Outline

- Sparsity and compressibility: concepts for measuring the sparsity and compressibility of data.
- Minimum number of measurements needed to recover sparse data: find a measurement matrix $A$ so that $Ax = y$ can be solved
  - for all sparse data;
  - for a specific sparse vector $x$.

These notes are based on S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Springer, 2013.

Sparsity

Notations:
- $[N] := \{1, \dots, N\}$; for $S \subseteq [N]$, $\bar S := [N] \setminus S$.
- For a vector $x \in \mathbb{C}^N$, its support is $\mathrm{supp}(x) := \{ j : x_j \neq 0 \}$.
- For $S \subseteq [N]$ and $x \in \mathbb{C}^N$, the restriction $x_S$ is defined by $x_{S,i} = x_i$ if $i \in S$ and $x_{S,i} = 0$ otherwise.
- The set of $s$-sparse vectors is $\Sigma_s := \{ x \in \mathbb{C}^N : x \text{ is } s\text{-sparse} \}$.

Definitions:
- $\|x\|_0 := \#\,\mathrm{supp}(x)$;
- $\|x\|_p := \big( \sum_{j=1}^N |x_j|^p \big)^{1/p}$ for $0 < p < \infty$;
- $\|x\|_p^p \to \|x\|_0$ as $p \to 0$.

Sparsity

The non-increasing rearrangement of $x$ is defined by $x_1^* \ge x_2^* \ge \cdots \ge x_N^* \ge 0$, where $x_j^* = |x_{\pi(j)}|$ and $\pi$ is a permutation of $[N]$.

Given $x$, its $s$-sparse projection $x_s$ keeps the $s$ components of $x$ with largest absolute value and sets the others to zero. Such an $x_s$ may not be unique. A vector $x$ is $s$-sparse if and only if $x = x_s$.

Best $s$-term approximation: any such $x_s$ of $x$.

Compressibility

Best $s$-term approximation error in the $\ell_p$ norm (for $p \ge 1$):
$$\sigma_s(x)_p := \inf\{ \|x - z\|_p : z \in \Sigma_s \}.$$

A vector is called compressible if $\sigma_s(x)_p$ decays fast in $s$ (e.g. $\sigma_s(x)_p = O(s^{-r})$ for some $p \ge 1$ and some $r > 1$). Many data are compressible. The following theorem says that $\sigma_s(x)_q$ can be controlled by $\|x\|_p$ with $0 < p < q$.
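In practice the best $s$-term approximation is obtained by hard thresholding: keep the $s$ entries of largest magnitude and zero the rest. A minimal numpy sketch (function names are my own, not from the notes):

```python
import numpy as np

def best_s_term(x, s):
    """x_s: keep the s largest-magnitude entries of x, zero out the rest."""
    x = np.asarray(x, dtype=complex)
    xs = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[::-1][:s]   # indices of the s largest |x_j|
    xs[idx] = x[idx]
    return xs

def sigma_s(x, s, p=1):
    """Best s-term approximation error sigma_s(x)_p = ||x - x_s||_p."""
    x = np.asarray(x, dtype=complex)
    return np.linalg.norm(x - best_s_term(x, s), ord=p)

# example: a vector with power-law decay, x_j = j^(-2)
x = 1.0 / np.arange(1, 101) ** 2
print(best_s_term(x, 3)[:5])   # only the 3 largest entries survive
print(sigma_s(x, 5, p=2))      # the tail carries little l2 energy
```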

Compressibility of a vector

Theorem. For $0 < p < q$, every $x \in \mathbb{C}^N$ satisfies
$$\sigma_s(x)_q \le \frac{c_{p,q}}{s^{1/p - 1/q}} \, \|x\|_p, \qquad \text{where } c_{p,q} := \left[ \left(\frac{p}{q}\right)^{p/q} \left(1 - \frac{p}{q}\right)^{1 - p/q} \right]^{1/p} \le 1.$$

Remarks

1. For $q = 2$ and $p \le 1$, we have
$$\sigma_s(x)_2 \le \frac{1}{s^{1/p - 1/2}} \, \|x\|_p.$$
This suggests that the unit balls of the $\ell_p$ quasi-norms with $p \le 1$ are good models for compressible vectors.

2. In particular, for $p = 1$,
$$\sigma_s(x)_2 \le \frac{1}{2\sqrt{s}} \, \|x\|_1.$$
Vectors with $\|x\|_1 \le 1$ can also serve as compressible vectors.
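A quick numerical sanity check of the theorem for $p = 1$, $q = 2$ (my own illustration with an arbitrary test vector, not part of the notes):

```python
import numpy as np

def sigma_s(x, s, p):
    """||x - x_s||_p, where x_s keeps the s largest-magnitude entries of x."""
    xs = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[::-1][:s]
    xs[idx] = x[idx]
    return np.linalg.norm(x - xs, ord=p)

p, q, s = 1.0, 2.0, 10
c_pq = ((p / q) ** (p / q) * (1 - p / q) ** (1 - p / q)) ** (1 / p)   # = 1/2 here

rng = np.random.default_rng(0)
x = rng.standard_normal(200) / np.arange(1, 201)   # a compressible-looking test vector

lhs = sigma_s(x, s, p=q)
rhs = c_pq / s ** (1 / p - 1 / q) * np.linalg.norm(x, ord=p)
print(lhs <= rhs)   # True: sigma_s(x)_2 <= ||x||_1 / (2*sqrt(s))
```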

Proof

1. Set $\alpha_j = (x_j^*)^p / \|x\|_p^p$, where $x^*$ is the non-increasing rearrangement of $x$. The theorem is then equivalent to the implication
$$\alpha_1 \ge \cdots \ge \alpha_N \ge 0, \quad \alpha_1 + \cdots + \alpha_N \le 1 \quad \Longrightarrow \quad \alpha_{s+1}^{q/p} + \cdots + \alpha_N^{q/p} \le \frac{c_{p,q}^q}{s^{q/p - 1}}.$$
This is a constrained optimization problem: with $r := q/p > 1$, we want to maximize
$$f(\alpha_1, \dots, \alpha_N) := \alpha_{s+1}^r + \cdots + \alpha_N^r$$
over the convex polytope
$$C := \{ (\alpha_1, \dots, \alpha_N) : \alpha_1 \ge \cdots \ge \alpha_N \ge 0, \ \alpha_1 + \cdots + \alpha_N \le 1 \}.$$

2. Since $f$ is convex, its maximum over $C$ occurs at a vertex (corner):
- If $\alpha_1 = \cdots = \alpha_N = 0$, then $f(\alpha) = 0$.
- If $\alpha_1 + \cdots + \alpha_N = 1$ and $\alpha_1 = \cdots = \alpha_k > \alpha_{k+1} = \cdots = \alpha_N = 0$ with $1 \le k \le s$, then $f(\alpha) = 0$.
- If $\alpha_1 + \cdots + \alpha_N = 1$ and $\alpha_1 = \cdots = \alpha_k > \alpha_{k+1} = \cdots = \alpha_N = 0$ with $s + 1 \le k \le N$, then $\alpha_i = 1/k$ for $1 \le i \le k$ and $f(\alpha) = (k - s)/k^r$.

It follows that
$$\max_{\alpha \in C} f(\alpha) = \max_{s+1 \le k \le N} \frac{k - s}{k^r}.$$

3. Treating $k$ as a continuous variable, the maximum of $(k - s)/k^r$ is attained at $k = \frac{r}{r-1} s$. Hence
$$\max_{\alpha \in C} f(\alpha) \le \frac{1}{r} \left( 1 - \frac{1}{r} \right)^{r-1} \frac{1}{s^{r-1}} = \frac{c_{p,q}^q}{s^{q/p - 1}}.$$
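A small numerical check of step 3 (my own illustration): for $r = 2$, $s = 10$ the discrete maximum of $(k - s)/k^r$ over integer $k$ is attained at $k = 20$ and matches the continuous bound $\frac{1}{r}(1 - \frac{1}{r})^{r-1} s^{1-r}$.

```python
import numpy as np

r, s, N = 2.0, 10, 1000
k = np.arange(s + 1, N + 1, dtype=float)
discrete_max = np.max((k - s) / k ** r)                       # max over vertices of C
continuous_bound = (1 / r) * (1 - 1 / r) ** (r - 1) * s ** (1 - r)
print(discrete_max, "<=", continuous_bound)                   # 0.025 <= 0.025
```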

Minimal Measurements for Reconstructing Sparse Vectors

Linear measurement: $y = Ax$, where $A$ is an $m \times N$ matrix.

Question: what is the minimal number of linear measurements needed to reconstruct $s$-sparse vectors from these measurements, regardless of the practicality of the reconstruction scheme?

Two meanings:
- reconstruction of all $s$-sparse vectors;
- given a specific $s$-sparse $x$, construct a measurement matrix $A$ that recovers this $x$ exactly.

Minimal measurements: for the first, $m = 2s$; for the second, $m = s + 1$.

Remarks

- The second statement holds in the measure-theoretic sense: the Lebesgue measure of the set of $(s+1) \times N$ matrices that cannot recover the given specific $s$-sparse vector $x$ is zero.
- For the recovery of all $s$-sparse vectors, however, the reconstruction may not be stable, in the sense that a small perturbation of $y$ can cause a large change in the reconstructed $x$. If we add a stability requirement, the minimal number of measurements becomes $m \ge C s \ln(N/s)$, where $C$ depends on the stability criterion.

Recover all s-sparse vectors

Theorem. Given $A \in \mathbb{C}^{m \times N}$, the following properties are equivalent:
(a) Every $s$-sparse vector $x \in \mathbb{C}^N$ is the unique $s$-sparse solution of $Az = Ax$; that is, if $Ax = Az$ and $x, z \in \Sigma_s$, then $x = z$.
(b) The null space $\ker A$ does not contain any $2s$-sparse vector other than the zero vector; that is, $\ker A \cap \Sigma_{2s} = \{0\}$.
(c) For every $S \subseteq [N]$ with $\mathrm{card}(S) \le 2s$, the submatrix $A_S$ is injective as a map from $\mathbb{C}^S$ to $\mathbb{C}^m$.
(d) Every set of $2s$ columns of $A$ is linearly independent.

Proof

1. (b) ⇒ (a): If $Az = Ax$, then $z - x \in \ker A$. Since both $z$ and $x$ are $s$-sparse, $z - x$ is $2s$-sparse. By (b), $z - x = 0$.

2. (a) ⇒ (b): Any $v \in \ker A \cap \Sigma_{2s}$ can be split as $v = z - x$ with $\mathrm{supp}(z) \cap \mathrm{supp}(x) = \emptyset$ and both $x$ and $z$ $s$-sparse. Then $Az = Ax$, so by (a), $z = x$. Thus $v = 0$.

3. (b) ⇔ (c) ⇔ (d): For any $S \subseteq [N]$ and any $v$ supported on $S$, note that $Av = A_S v_S$. By rank–nullity, $|S| = \dim \mathbb{C}^S = \dim \ker A_S + \dim \mathrm{Ran}\, A_S$, so $\ker A_S = \{0\}$ if and only if $\dim \mathrm{Ran}\, A_S = |S|$, i.e. the columns of $A_S$ are linearly independent.
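For small sizes, condition (d) can be verified by brute force, testing the rank of every set of $2s$ columns. A sketch (my own illustration; the cost is exponential in $N$):

```python
import numpy as np
from itertools import combinations

def recovers_all_s_sparse(A, s):
    """Check condition (d): every 2s columns of A are linearly independent."""
    m, N = A.shape
    k = min(2 * s, N)
    return all(np.linalg.matrix_rank(A[:, list(S)]) == k
               for S in combinations(range(N), k))

# example: a 4 x 8 Gaussian matrix recovers all 2-sparse vectors (almost surely)
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 8))
print(recovers_all_s_sparse(A, s=2))   # True with probability 1
```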

Recover all s-sparse vectors

We look for a $2s \times N$ measurement matrix which can recover all $s$-sparse vectors.

Answer 1: Given any $0 < t_1 < \cdots < t_N$, define
$$A = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ t_1 & t_2 & \cdots & t_N \\ t_1^2 & t_2^2 & \cdots & t_N^2 \\ \vdots & \vdots & & \vdots \\ t_1^{2s-1} & t_2^{2s-1} & \cdots & t_N^{2s-1} \end{pmatrix}.$$
For any $S \subseteq [N]$ with $\mathrm{card}(S) = 2s$, the column submatrix $A_S$ is a Vandermonde matrix, which is invertible.
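A concrete instance of Answer 1, with the (assumed) choice $t_j = j$; the sketch builds the $2s \times N$ matrix and checks condition (d) of the theorem by brute force:

```python
import numpy as np
from itertools import combinations

s, N = 2, 8
t = np.arange(1, N + 1, dtype=float)            # 0 < t_1 < ... < t_N, here t_j = j
A = np.vander(t, 2 * s, increasing=True).T      # rows are t^0, t^1, ..., t^(2s-1)

# every 2s columns form a Vandermonde matrix with distinct nodes, hence full rank
ok = all(np.linalg.matrix_rank(A[:, list(S)]) == 2 * s
         for S in combinations(range(N), 2 * s))
print(A.shape, ok)   # (4, 8) True
```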

Recover all s-sparse vectors

Answer 2: a Fourier submatrix. Choose $\omega_j = e^{2\pi i (j-1)/N}$ and
$$A = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \omega_1 & \omega_2 & \cdots & \omega_N \\ \omega_1^2 & \omega_2^2 & \cdots & \omega_N^2 \\ \vdots & \vdots & & \vdots \\ \omega_1^{2s-1} & \omega_2^{2s-1} & \cdots & \omega_N^{2s-1} \end{pmatrix}.$$
Every $2s$-column submatrix is again a Vandermonde matrix with distinct nodes $\omega_j$, hence invertible.
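A toy decoder for this $2s \times N$ Fourier matrix (my own sketch, not part of the notes): exhaustively search over candidate supports of size $s$ and solve a least-squares problem on each; since every $2s$ columns of $A$ are linearly independent, an exact fit identifies $x$ uniquely.

```python
import numpy as np
from itertools import combinations

s, N = 2, 12
omega = np.exp(2j * np.pi * np.arange(N) / N)      # omega_j = e^{2 pi i (j-1)/N}
A = np.vander(omega, 2 * s, increasing=True).T     # 2s x N Fourier-type matrix

# an s-sparse test vector and its 2s measurements
x = np.zeros(N, dtype=complex)
x[[3, 9]] = [1.0, -2.5]
y = A @ x

# brute-force l0 decoder: try every candidate support of size s
x_hat = None
for S in combinations(range(N), s):
    z, *_ = np.linalg.lstsq(A[:, list(S)], y, rcond=None)
    if np.linalg.norm(A[:, list(S)] @ z - y) < 1e-10:
        x_hat = np.zeros(N, dtype=complex)
        x_hat[list(S)] = z
        break

print(np.allclose(x_hat, x))   # True: exact recovery from 2s = 4 measurements
```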

Recover a specific s-sparse vector

Given an $s$-sparse vector $x$, find an $(s+1) \times N$ matrix $A$ so that from the measurement $y = Ax$ one can recover $x$ exactly via $\ell_0$-minimization (P0): minimize $\|z\|_0$ subject to $Az = y$.

Answer: the set of all $(s+1) \times N$ matrices which cannot do so has Lebesgue measure zero.