Compressed Sensing and Linear Codes over Real Numbers

Compressed Sensing and Linear Codes over Real Numbers
Henry D. Pfister (joint work with Fan Zhang)
Texas A&M University, College Station
Information Theory and Applications Workshop, UC San Diego, January 31st, 2008

Outline

1. Introduction
   - Compressed Sensing
   - Connections with Coding
2. Benefits of the Coding Perspective
   - Large Body of Previous Work
   - Low-Density Parity-Check Codes for Compressed Sensing
   - Confessions
3. Summary and Open Problems

What is Compressed Sensing? (1)

- Compressed sensing (CS) is a relatively new area of signal processing and statistics that focuses on signal reconstruction from a small number of linear (i.e., dot-product) measurements.
- CS originated with the observation that many systems:
  (1) sample a large amount of data,
  (2) perform a linear transform (e.g., DCT, wavelet, etc.),
  (3) throw away all the small coefficients.
- If the locations of the important transform coefficients were known in advance, they could be sampled directly with reduced complexity.

What is Compressed Sensing? (2)

- CS emerged when it was realized that random sampling could achieve the same result (with a small penalty) without prior knowledge of the locations of the important coefficients.
- CS has two stages: sampling and reconstruction.
- During sampling, the signal-of-interest (SOI) is sampled by computing its dot product with a set of sampling kernels.
- During reconstruction, the SOI is estimated from the samples.
- In many cases, the number of samples required for a good estimate is much smaller than for other methods (e.g., Nyquist sampling at twice the maximum frequency).

Basic Problem Statement

- Use m dot-product samples to reconstruct a signal.
- The signal vector is x ∈ R^n.
- The m × n measurement matrix is Φ ∈ R^{m×n}.
- The length-m sample vector is y = Φx.
- Given y, the valid signal set is V(y) = { x' ∈ R^n : Φx' = y }.
- If m < n, then a unique solution is not possible; with prior knowledge, we try to choose a good solution.
- If x is i.i.d. zero-mean Gaussian, then the ML solution is the minimum-norm solution
    x̂ = arg min_{x' ∈ V(y)} ‖x'‖_2 = Φ^T (ΦΦ^T)^{-1} y.
- If x is sparse (w.r.t. the Hamming weight ‖·‖_H), then
    x̂ = arg min_{x' ∈ V(y)} ‖x'‖_H.
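The minimum-norm solution above is just the pseudo-inverse applied to the samples; a minimal numpy sketch (illustrative, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 8
Phi = rng.standard_normal((m, n))   # random m x n measurement matrix
x = rng.standard_normal(n)          # signal vector (not sparse here)
y = Phi @ x                         # m dot-product samples

# Minimum l2-norm element of V(y) = {x' : Phi x' = y}:
x_hat = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)

assert np.allclose(Phi @ x_hat, y)  # x_hat is consistent with the samples
print(np.linalg.norm(x_hat) <= np.linalg.norm(x) + 1e-9)  # True: smallest-norm solution
```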

Reconstruction

- But constrained ‖·‖_H minimization is NP-hard.
- Instead, use linear programming (LP) to solve min_{x ∈ V(y)} ‖x‖_1.
  - Basis Pursuit, by Chen, Donoho, and Saunders (1998).
- The minimizing vector is the same (w.h.p.) for both problems!
  - if Φ is chosen randomly (e.g., such that Φ_ij ~ N(0,1)),
  - and m is large enough: m ≥ C ‖x‖_H log n.
  - See work by Donoho, Candès, Romberg, Tao, and others.
- The ℓ2-error is small even for approximately sparse signals.
  - Donoho showed this for m ≥ C_p ‖x‖_p log n and 0 < p < 1.
- Definition: ‖x‖_p ≜ (Σ_{i=1}^n |x_i|^p)^{1/p} for 0 < p < ∞.
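Basis pursuit is a standard LP; a minimal scipy sketch (my own illustration, using the usual split z = [x; t] with −t ≤ x ≤ t and objective Σ t_i):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 60, 25, 4
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # exactly k-sparse signal
Phi = rng.standard_normal((m, n))
y = Phi @ x

# min sum(t) s.t. Phi x = y, x - t <= 0, -x - t <= 0, over z = [x; t]
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([Phi, np.zeros((m, n))])
I = np.eye(n)
A_ub = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
              A_eq=A_eq, b_eq=y, bounds=(None, None))
x_hat = res.x[:n]
print(np.max(np.abs(x_hat - x)))  # near zero when l1 recovery succeeds
```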

Connections to Other Research Areas

- Classical sampling theory
  - How many samples for reconstruction without aliasing?
  - Based on the support set in the frequency domain.
  - Single interval: the Nyquist rate is twice the bandwidth.
- Approximation theory
  - The signal is a function drawn from a particular class of functions.
  - Best k-term approximation w.r.t. some overcomplete basis.
  - Linear approximation: Kolmogorov n-width of the function class.
- Information theory: lossless and lossy compression
  - The signal model becomes stochastic rather than deterministic.
  - Using this model, CS becomes syndrome source coding.

Syndrome Source Coding

- Compression via random linear projections dates to the 1970s.
  - Mostly for distributed source coding (e.g., Slepian-Wolf).
  - Early history of syndrome source coding by Ancheta [IT-1976].
- Basic idea:
  - Treat the random signal as an error pattern.
  - A parity-check matrix computes the syndrome of the error.
  - This syndrome is the compressed version of the signal.
  - A syndrome decoder is used to reconstruct the error.
- For exactly sparse signals, the problem is identical to error-correction coding (see the sketch below).
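To make the correspondence concrete, here is a toy brute-force syndrome decoder over the reals (my own illustration; exponential-time, for intuition only):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, m, k = 10, 5, 2
H = rng.standard_normal((m, n))     # parity-check = measurement matrix
e = np.zeros(n)
e[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # sparse "error"
s = H @ e                           # syndrome = compressed version of the signal

# Syndrome decoding: find the smallest support that explains s exactly
e_hat = None
for w in range(1, n + 1):
    for T in map(list, combinations(range(n), w)):
        z, *_ = np.linalg.lstsq(H[:, T], s, rcond=None)
        if np.allclose(H[:, T] @ z, s):
            e_hat = np.zeros(n)
            e_hat[T] = z
            break
    if e_hat is not None:
        break
print(np.allclose(e_hat, e))        # True: the minimum-weight pattern matches e
```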

Compressed Sensing and Coding

Compressed sensing:
- Signal: x ∈ R^n, sparse: ‖x‖_H ≤ δn.
- Measurement matrix: Φ ∈ R^{m×n}; blind to the nullspace of Φ.
- Sample vector: y = Φx.
- Decoder: x̂ = arg min_{x' : Φx' = y} ‖x'‖_H.

Coding:
- Error pattern: e ∈ F^n, sparse: Pr(e_i ≠ 0) = δ.
- Parity-check matrix: H ∈ F^{m×n}; the code is the nullspace of H.
- Syndrome: s = He.
- Decoder: ê = arg min_{e' : He' = s} ‖e'‖_H.

In coding, F is usually a finite field, not the real numbers (except RS/BCH codes over C in [Wolf, TCOM 1983]).

Minimum Distance Decoding

- For H ∈ R^{m×n}, we define the code C = { x ∈ R^n : Hx = 0 }.
- Let the minimum distance be d ≜ min_{x ∈ C\{0}} ‖x‖_H.
- Decode: x̂ = arg min_{x' : Φx' = y} ‖x'‖_H.
- Decoding can fail only if there exists u ∈ C\{0} such that ‖u + x‖_H ≤ ‖x‖_H.
- So failure is only possible if ‖x‖_H > (d − 1)/2. This is basic coding theory!
- It is easy to construct MDS codes where d = m + 1:
  - an RS code given by a Fourier measurement matrix;
  - random matrices give MDS codes with probability 1 when all entries are chosen i.i.d. from a continuous distribution.
- Very similar to coding over a large finite alphabet.
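Over the reals, "MDS with probability 1" means every set of m columns of a random Φ is linearly independent, so any nonzero vector in the nullspace has Hamming weight at least m + 1. A quick numerical check (illustrative sketch):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
m, n = 4, 8
Phi = rng.standard_normal((m, n))

# d = m + 1 iff every m columns are linearly independent ("full spark")
full_spark = all(np.linalg.matrix_rank(Phi[:, list(S)]) == m
                 for S in combinations(range(n), m))
print(full_spark)  # True with probability 1 for continuous i.i.d. entries
```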

Tools from Coding Theory

- The design and analysis of error-correcting codes has been a significant research topic for roughly 60 years:
  - long random codes used to achieve capacity;
  - individual codes analyzed in terms of their weight spectrum;
  - alternative error models (e.g., Lee-metric decoding);
  - capacity-achieving codes with low-complexity iterative decoding.
- Challenges in applying coding to CS:
  - the unrealistic error model implied by the Hamming distance;
  - coding uses stochastic (rather than deterministic) signal models.

Low-Density Parity-Check (LDPC) Codes

[Figure: Tanner graph with code bits on one side connected through an edge permutation to parity checks on the other.]

- Linear codes with a sparse parity-check matrix H ∈ F^{m×n}.
- The code is defined to be all x ∈ F^n such that Σ_j H_ij x_j = 0 for every i.
- Bipartite graph representation: an edge connects equation node i to symbol node j if H_ij ≠ 0.
- Iterative decoding passes messages along the edges of the graph.
- Graphs with irregular degree profiles can approach capacity.
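A small sketch of building a random (j, k)-regular measurement matrix via the usual socket/permutation model suggested by the figure (my own construction, ignoring rare multi-edges):

```python
import numpy as np

def regular_ldpc_matrix(n, j, k, rng):
    """(j,k)-regular H: j nonzeros per column, k per row, m = n*j/k rows."""
    assert (n * j) % k == 0, "need n*j divisible by k"
    m = n * j // k
    perm = rng.permutation(n * j)            # match variable sockets to check sockets
    H = np.zeros((m, n))
    for v in range(n):
        for s in range(j):
            c = perm[v * j + s] // k         # check node owning the matched socket
            H[c, v] = rng.standard_normal()  # nonzero entry from a continuous dist.
    return H

H = regular_ldpc_matrix(n=24, j=3, k=6, rng=np.random.default_rng(4))
print(H.shape, (H != 0).sum(axis=0).max())   # (12, 24); column weights <= 3
```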

Coding Over Large Finite Alphabets

- Verification-based iterative decoding of LDPC codes was introduced by Luby and Mitzenmacher (Allerton 2002).
- For the q-ary symmetric channel with large q (capacity ≈ 1 − δ):
    Pr(y|x) = 1 − δ if y = x, and δ/(q − 1) if y ≠ x,   for x, y ∈ GF(q).
- A message {Z}_S has a value Z ∈ GF(q) and a status S ∈ {U, V} (unverified or verified).
- A message is verified if it is correct with high probability:
  - verify a message if two independent observations match;
  - Pr(two independent observations give the same incorrect value) = δ²/(q − 1).
- False verification (FV): a message is verified but not correct.
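The false-verification probability is easy to check by simulation; a short sketch (my own, with the error value drawn uniformly over the q − 1 wrong symbols):

```python
import numpy as np

rng = np.random.default_rng(5)
q, delta, trials = 64, 0.3, 10**6

def qsc(x):
    """q-ary symmetric channel: keep x w.p. 1-delta, else a uniform wrong value."""
    y = x.copy()
    flip = rng.random(x.size) < delta
    y[flip] = (x[flip] + rng.integers(1, q, size=flip.sum())) % q
    return y

x = rng.integers(q, size=trials)
y1, y2 = qsc(x), qsc(x)                 # two independent observations
fv = np.mean((y1 == y2) & (y1 != x))    # matching but incorrect
print(fv, delta**2 / (q - 1))           # empirical rate vs. delta^2/(q-1)
```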

Previous Work on Verification-Based Decoding

- Algorithms by Luby and Mitzenmacher (Allerton 2002, IT 2005):
  - 1st algorithm (LM1): verify all symbols if the check sums to zero;
  - 2nd algorithm (LM2): LM1 + verify if two messages match at a symbol.
- Algorithms by Shokrollahi and Wang (ISIT 2004):
  - 1st algorithm (SW1): same as LM2;
  - 2nd algorithm (SW2): pass list messages with all probable values; capacity-achieving but extremely high complexity.
- Algorithm by Zhang and Pfister (Globecom 2007):
  - list-message passing (LMP) with verification;
  - density evolution (DE) and optimization for list size S;
  - DE gives the message-passing threshold δ* for (3,6) LDPC codes: δ*_LM1 = 0.169, δ*_LM2 = 0.21, and δ*_LMP = 0.23 for S = 32.

Sudocodes and BP Approaches

- Sudocodes, by Sarvotham, Baron, and Baraniuk (ISIT 2006):
  - the measurement matrix Φ is a sparse 0/1 matrix (L ones per row);
  - an iterative decoding method based on matching coefficients, equivalent to LM2 for syndrome source coding;
  - requires m = O(n log n) for coverage (e.g., balls and bins);
  - a second phase is used to handle uncovered elements.
- LDPC/belief propagation for CS, by Sarvotham et al. (2006):
  - the sparse measurement matrix has -1/0/+1 entries;
  - an approximately sparse signal is modeled by a mixture of two Gaussians;
  - belief-propagation decoding is used to find the important coefficients;
  - the algorithm is evaluated primarily by simulation.
- Expander LDPC codes for CS, by Xu and Hassibi (ITW 2007).

CS Reconstruction via LM1 (decoding walkthrough)

[Figure: a bipartite graph with eight signal nodes (circles) and six measurement nodes (squares); the exposed measurement values are 4, 3, 0, 4, 7, 0, and the recovered signal is 0, 0, 3, 0, 4, 0, 0, 0.]

1. Signal (circles) and measurement (squares) model.
2. With the measurements exposed.
3. The all-zero signal is assumed for decoding.
4. Assume all satisfied checks are correct.
5. Remove edges and fix values.
6. Use a degree-1 check to determine a variable.
7. This value can be removed from all equations.
8. Another variable is determined.
9. This value can be removed from all equations.
10. The final value is determined in two ways.
11. Reconstruction is successful.
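The walkthrough above is a peeling procedure. A minimal Python sketch of LM1 for real-valued CS (my own rendering of the two rules, not the authors' code): a check whose value is zero verifies all of its neighbors as zero (over the reals this holds with probability 1 for generic signals and matrices), and a degree-1 check determines its remaining neighbor.

```python
import numpy as np

def lm1_decode(H, y, max_iter=100):
    """LM1 peeling sketch: y = H x for an exactly sparse x and sparse real H."""
    m, n = H.shape
    x_hat = np.full(n, np.nan)                 # NaN marks unverified symbols
    H = H.astype(float).copy()
    y = y.astype(float).copy()
    for _ in range(max_iter):
        progress = False
        for i in range(m):
            nbrs = np.flatnonzero(H[i])
            if nbrs.size == 0:
                continue
            if np.isclose(y[i], 0.0):          # rule 1: zero check => neighbors are zero
                x_hat[nbrs] = 0.0
                H[:, nbrs] = 0.0
                progress = True
            elif nbrs.size == 1:               # rule 2: degree-1 check pins a symbol
                u = nbrs[0]
                v = y[i] / H[i, u]
                x_hat[u] = v
                y -= H[:, u] * v               # remove its contribution everywhere
                H[:, u] = 0.0
                progress = True
        if not progress:
            break
    return x_hat

rng = np.random.default_rng(6)
n, m, k = 30, 18, 3
H = (rng.random((m, n)) < 0.15) * rng.standard_normal((m, n))
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = lm1_decode(H, H @ x)
print(not np.isnan(x_hat).any() and np.allclose(x_hat, x))  # True when peeling succeeds
```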

New Result: Analysis of LM1 for CS

- Probabilistic signal model:
  - a length-n vector x with i.i.d. elements: Pr(x_i ≠ 0) = δ;
  - ‖x‖_H ≤ (1 + ε)δn with high probability as n → ∞.
- Sparse measurement matrix with m rows:
  - regular degree profile with j non-zero entries per column;
  - non-zero entries drawn from a continuous distribution.
- Theorem (LM1 recovery for CS): LM1 recovery for CS with (j,k)-regular LDPC codes reconstructs x (with high probability as n → ∞) if δ < (k − 1)^{−j/(j−1)}.
- Recovery time is linear in n with m = j δ^{1−1/j} n measurements.
- Choosing j = ln(1/δ) reduces this to m ≈ e δ ln(1/δ) n measurements.
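The last two bullets are a one-line simplification; a LaTeX sketch, assuming the m = j δ^{1−1/j} n scaling stated above:

```latex
\text{With } j = \ln\tfrac{1}{\delta}: \quad
\delta^{-1/j} = \exp\!\left(\tfrac{1}{j}\ln\tfrac{1}{\delta}\right) = e,
\quad\text{so}\quad
m = j\,\delta^{1-1/j}\, n
  = \ln\!\tfrac{1}{\delta}\cdot\delta\cdot\delta^{-1/j}\, n
  = e\,\delta\,\ln\!\tfrac{1}{\delta}\, n .
```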

New Result: Near-Optimal Recovery

- Probabilistic signal model:
  - a length-n vector x with i.i.d. elements: Pr(x_i ≠ 0) = δ;
  - ‖x‖_H ≤ (1 + ε)δn with high probability as n → ∞.
- Sparse measurement matrix with m rows:
  - irregular degree profile from capacity-achieving erasure codes;
  - non-zero entries drawn from a continuous distribution.
- Theorem (near-optimum recovery with LMP): list-message-passing decoding with verification reconstructs x (with high probability as n → ∞), for any ε > 0, if m ≥ (1 + ε)δn.
- Complexity is linear in n (but the constant grows as δ → 0 or ε → 0).

Conjecture: Uniform Recovery

- Reconstruct all x with some maximum ‖x‖_H:
  - assume a length-n vector x such that ‖x‖_H ≤ δn.
- Sparse measurement matrix with m rows:
  - a (j,k)-regular degree profile for the row and column weights;
  - non-zero entries drawn from a continuous distribution.
- Conjecture (uniform recovery with LM1): LM1 recovery with randomly chosen (j,k)-regular LDPC codes (w.h.p. as n → ∞) recovers all x if δ < ½ (k − 1)^{−2/j}.
- Based on a preliminary stopping-set analysis of LM1.

Exact Versus Approximate Sparsity

- The connection to coding theory is strongest for exact sparsity.
- The current analysis works only in that case (unless we cheat: use an ε-neighborhood for verification and assume no false verification).
- A DE analysis of the [SBB06] decoder is in progress; this should extend our results to approximate sparsity.

Transform Domain Issues

- Encoding and decoding interact with the domain of sparsity.
- The sparsifying transform can be built into the encoding in two ways:
  - transform, then encode;
  - add the transform to the decoding graph (think ISI channels).
- In contrast, basis pursuit allows sampling before the transform is known.

Summary and Open Problems

- Discussed connections between ECC and CS:
  - the connection is not new, but coding brings many new tools;
  - linear-time recovery algorithms for CS based on coding;
  - capacity-achieving codes can provide near-optimal recovery;
  - a uniform recovery conjecture based on stopping-set analysis.
- Open questions:
  - extension of the analysis to approximately sparse signals (must extend both the signal model and the DE analysis);
  - overcoming transform-domain issues with linear complexity.