Solving Underdetermined Linear Equations and Overdetermined Quadratic Equations (using Convex Programming)

Solving Underdetermined Linear Equations and Overdetermined Quadratic Equations (using Convex Programming). Justin Romberg, Georgia Tech, ECE. Caltech ROM-GR Workshop, June 7, 2013, Pasadena, California.

Linear systems in data acquisition

Linear systems of equations are ubiquitous. Model: y = A x, where y is the data coming off of the sensor, A is a mathematical (linear) model for the sensor, and x is the signal/image to reconstruct.

Classical: When can we stably invert a matrix? Suppose we have an M × N observation matrix A with M ≥ N (MORE observations than unknowns), through which we observe y = A x_0 + noise. The standard way to recover x_0 is the pseudo-inverse: solve min_x ||y − Ax||_2^2, whose solution is x̂ = (A^T A)^{-1} A^T y. Q: When is this recovery stable? That is, when is ||x̂ − x_0||_2^2 ≈ ||noise||_2^2? A: When the matrix A preserves distances: ||A(x_1 − x_2)||_2^2 ≈ ||x_1 − x_2||_2^2 for all x_1, x_2 ∈ R^N.
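As a quick numerical companion to the slide above, here is a minimal NumPy sketch of the overdetermined case; the dimensions, noise level, and the choice of a generic Gaussian A are illustrative assumptions, not from the talk. It solves the least-squares problem and compares the recovery error to the noise.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 200, 50                          # MORE observations than unknowns
A = rng.standard_normal((M, N))         # a generic, well-conditioned observation matrix
x0 = rng.standard_normal(N)
noise = 0.01 * rng.standard_normal(M)
y = A @ x0 + noise

# Pseudo-inverse / least-squares solution: xhat = (A^T A)^{-1} A^T y
xhat, *_ = np.linalg.lstsq(A, y, rcond=None)

# Stability: the recovery error is on the order of the noise when the
# singular values of A are bounded away from zero.
print("recovery error :", np.linalg.norm(xhat - x0))
print("noise norm     :", np.linalg.norm(noise))
print("smallest singular value of A:", np.linalg.svd(A, compute_uv=False)[-1])
```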

Sparsity. Decompose a signal/image x(t) in an orthobasis {ψ_i(t)}_i: x(t) = Σ_i α_i ψ_i(t). [Figure: image x_0, its wavelet transform, and a zoom on the coefficients {α_i}_i.]

Wavelet approximation. Take the largest 1% of coefficients and set the rest to zero (adaptive). [Images: original vs. approximation; relative error = 0.031.]
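The "keep the largest coefficients" idea is easy to try. The sketch below is a hedged stand-in that uses a DCT instead of the wavelet transform on the slide, with a synthetic signal, purely to illustrate nonlinear (adaptive) approximation in an orthobasis.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
N = 4096
x = np.cumsum(rng.standard_normal(N)) / np.sqrt(N)   # a smooth-ish synthetic signal

alpha = dct(x, norm="ortho")              # coefficients in an orthobasis (DCT here)
keep = int(0.01 * N)                      # keep the largest 1% of coefficients
idx = np.argsort(np.abs(alpha))[::-1][:keep]
alpha_S = np.zeros_like(alpha)
alpha_S[idx] = alpha[idx]                 # nonlinear (adaptive) approximation
x_S = idct(alpha_S, norm="ortho")

print("relative error:", np.linalg.norm(x - x_S) / np.linalg.norm(x))
```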

When can we stably recover an S-sparse vector? [Figure: y = Φ x_0.] Now we have an underdetermined M × N system Φ (FEWER measurements than unknowns), and observe y = Φ x_0 + noise.

Sampling a superposition of sinusoids. We take M samples of a superposition of S sinusoids. [Figure: time domain x_0(t), with the M measured samples marked (red circles); frequency domain x̂_0(ω), with S nonzero components.]

Sampling a superposition of sinusoids. Reconstruct by solving min_x ||x̂||_1 subject to x(t_m) = x_0(t_m), m = 1, ..., M. [Figure: original x̂_0 with S = 15; perfect recovery from 30 samples.]

Numerical recovery curves. Resolutions N = 256, 512, 1024 (black, blue, red); signal composed of S randomly selected sinusoids; sampled at M randomly selected locations. [Plot: % success versus S/M.] In practice, perfect recovery occurs when M ≳ 2S for N ≈ 1000.

A nonlinear sampling theorem. Exact Recovery Theorem (Candès, R, Tao, 2004): the unknown x̂_0 is supported on a set of size S. Select M sample locations {t_m} at random with M ≥ Const · S log N, take time-domain samples (measurements) y_m = x_0(t_m), and solve min_x ||x̂||_1 subject to x(t_m) = y_m, m = 1, ..., M. The solution is exactly x̂_0 with extremely high probability.
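Below is a small numerical illustration of this sampling theorem, assuming cvxpy is installed. It builds a signal that is S-sparse in a DCT (cosine) basis rather than the complex Fourier basis of the slide, takes M random time samples, and recovers the coefficients by l1 minimization; all sizes are illustrative.

```python
import numpy as np
import cvxpy as cp
from scipy.fft import idct

rng = np.random.default_rng(2)
N, S, M = 256, 8, 60

# S-sparse coefficient vector in a cosine (DCT) basis
alpha0 = np.zeros(N)
alpha0[rng.choice(N, S, replace=False)] = rng.standard_normal(S)

Psi = idct(np.eye(N), norm="ortho", axis=0)   # orthobasis: x = Psi @ alpha
x0 = Psi @ alpha0                             # time-domain signal

omega = rng.choice(N, M, replace=False)       # random sample locations t_m
y = x0[omega]                                 # time-domain samples

# min ||alpha||_1  subject to  (Psi alpha)(t_m) = y_m
alpha = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm1(alpha)), [Psi[omega, :] @ alpha == y]).solve()

print("coefficient recovery error:", np.linalg.norm(alpha.value - alpha0))
```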

When can we stably recover an S-sparse vector? Now we have an underdetermined M × N system Φ (FEWER measurements than unknowns), and observe y = Φ x_0 + noise. We can recover x_0 when Φ keeps sparse signals separated: ||Φ(x_1 − x_2)||_2^2 ≈ ||x_1 − x_2||_2^2 for all S-sparse x_1, x_2. To recover x_0, we would like to solve min_x ||x||_0 subject to Φx = y, where ||x||_0 = number of nonzero terms in x; but this program is computationally intractable. A relaxed (convex) program: min_x ||x||_1 subject to Φx = y, where ||x||_1 = Σ_k |x_k|. This program is very tractable (a linear program), it can recover nearly all identifiable sparse vectors, and it is robust.
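To make the "very tractable (linear program)" remark concrete, here is a hedged sketch of the standard LP reformulation (split x = u − v with u, v ≥ 0), solved with SciPy's generic LP solver; the Gaussian Φ and the problem sizes are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, M, S = 128, 48, 6
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x0 = np.zeros(N)
x0[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
y = Phi @ x0

# minimize 1^T u + 1^T v   subject to   Phi u - Phi v = y,   u, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
xhat = res.x[:N] - res.x[N:]

print("recovery error:", np.linalg.norm(xhat - x0))
```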

Intuition for l1. [Figure: compare min_x ||x||_2 s.t. Φx = y with min_x ||x||_1 s.t. Φx = y. The feasible set {x : Φx = y} is an affine subspace of dimension N − M with random orientation; the l1 ball is "pointy", so the l1 solution typically lands on a sparse point and coincides with the l0 solution.]

Sparse recovery algorithms. l1 can recover sparse vectors almost any time it is possible: perfect recovery with no noise; stable recovery in the presence of noise; robust recovery when the signal is not exactly sparse.

Sparse recovery algorithms. Other recovery techniques have similar theoretical properties (their practical effectiveness varies with the application): greedy algorithms, iterative thresholding, belief propagation, specialized decoding algorithms.

What kind of matrices keep sparse signals separated? [Figure: an M × N matrix Φ with random ±1 entries, taking compressive measurements of a signal with S active components out of a total resolution/bandwidth of N.] Random matrices are provably efficient: we can recover an S-sparse x from M ≳ S log(N/S) measurements.

Rice single-pixel camera. [Diagram of the Rice Single-Pixel CS Camera: the scene is focused onto a DMD array displaying a random pattern, reflected onto a single photon detector, and followed by image reconstruction or processing.] (Duarte, Davenport, Takhar, Laska, Sun, Kelly, Baraniuk '08)

Hyperspectral imaging 256 frequency bands, 10s of megapixels, 30 frames per second...

DARPA's Analog-to-Information program: a multichannel ADC/receiver for identifying radar pulses; covers 3 GHz with a 400 MHz sampling rate.

Compressive sensing with structured randomness. Subsampled rows of an incoherent orthogonal matrix can recover an S-sparse x_0 with M ≳ S log^a N measurements. (Candès, R, Tao, Rudelson, Vershynin, Tropp, ...)

Accelerated MRI. [Images: ARC and SPIR-iT reconstructions (Lustig et al. '08).]

Matrices for sparse recovery with structured randomness. Random convolution + subsampling is universal and can recover an S-sparse x_0 with M ≳ S log^a N measurements. Applications include radar imaging, sonar imaging, seismic exploration, channel estimation for communications, and super-resolved imaging. (R, Bajwa, Haupt, Tropp, Rauhut, ...)
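A minimal sketch of the random convolution + subsampling measurement model is given below, implemented with the FFT; the kernel, signal, and sizes are illustrative assumptions, and recovery (not shown) would proceed by l1 minimization as above.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 1024, 128

h = rng.standard_normal(N)                          # fixed random convolution kernel
omega = np.sort(rng.choice(N, M, replace=False))    # random subsampling locations

def measure(x):
    """y = subsample(h circularly convolved with x), computed via the FFT."""
    full = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))
    return full[omega]

x0 = np.zeros(N)
x0[rng.choice(N, 10, replace=False)] = 1.0          # a 10-sparse test signal
y = measure(x0)
print("signal length:", N, "-> number of measurements:", y.size)
```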

Integrating compression and sensing

Recovering a matrix from limited observations. Suppose we are interested in recovering the values of a 5 × 5 matrix X = [X_{i,j}]. We are given a series of different linear combinations of the entries, y = A(X).

Example: matrix completion. Suppose we do not see all the entries in a matrix, only a scattered subset of the X_{i,j} (e.g. X_{1,1}, X_{1,3}, X_{1,5}, X_{2,2}, X_{2,4}, ...). Can we fill in the blanks?

Applications of matrix completion: recommender systems (low rank of the data matrix), Euclidean embedding (low rank of the Gram matrix). (Slide courtesy of Benjamin Recht.)

Low-rank structure: a K × N matrix X of rank R factors as X = L R^T, with L of size K × R and R^T of size R × N.

When can we stably recover a rank-R matrix? We have an underdetermined linear operator A : R^{K×N} → R^L, with L ≪ KN, and observe y = A(X_0) + noise, where X_0 has rank R. We can recover X_0 when A keeps low-rank matrices separated: ||A(X_1 − X_2)||_2^2 ≈ ||X_1 − X_2||_F^2 for all rank-R X_1, X_2. To recover X_0, we would like to solve min_X rank(X) subject to A(X) = y, but this is intractable. A relaxed (convex) program: min_X ||X||_* subject to A(X) = y, where ||X||_* is the sum of the singular values of X (the nuclear norm).

Matrix recovery (Recht, Fazel, Parrilo, Candès, Plan, ...). Vectorize X, and stack up the vectorized A_m as the rows of a matrix A. Independent Gaussian entries in the A_m embed rank-R matrices when M ≳ R(K + N).

Example: matrix completion. Suppose we do not see all the entries in a matrix, only a scattered subset of the X_{i,j}. We can fill them in from M ≳ R(K + N) log^2(KN) randomly chosen samples if X is diffuse. (Recht, Candès, Tao, Montanari, Oh, ...)
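Here is a hedged sketch of matrix completion by nuclear norm minimization, assuming cvxpy is available; the matrix size, rank, and sampling density are illustrative, and a generic solver is used rather than anything from the talk.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
K, N, R = 20, 20, 2
X0 = rng.standard_normal((K, R)) @ rng.standard_normal((R, N))   # rank-R matrix

mask = (rng.random((K, N)) < 0.5).astype(float)   # observe roughly half of the entries
X = cp.Variable((K, N))
prob = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),
                  [cp.multiply(mask, X) == mask * X0])   # agree on observed entries
prob.solve()

print("relative completion error:",
      np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```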

Summary: random projections and structured recovery. The number of measurements (the dimension of the projection) needed for structured recovery depends on the geometrical complexity of the class. Three examples: S-sparse vectors of length N need about S log(N/S) measurements; rank-R matrices of size K × N need about R(K + N); a manifold in R^N of intrinsic dimension K needs about K (times a factor depending on its volume, curvature, etc.).

Systems of quadratic and bilinear equations

Quadratic equations. Quadratic equations contain unknown terms multiplied by one another, e.g. x_1 x_3 + x_2 + x_5^2 = 13 and 3 x_2 x_6 − 7 x_3 + 9 x_4^2 = 12. Their nonlinearity makes them trickier to solve, and the computational framework is nowhere near as strong as for linear equations.

Recasting quadratic equations. Form the rank-one matrix
vv^T = [ v_1^2, v_1 v_2, v_1 v_3, ..., v_1 v_N ;
         v_2 v_1, v_2^2, v_2 v_3, ..., v_2 v_N ;
         v_3 v_1, v_3 v_2, v_3^2, ..., v_3 v_N ;
         ... ;
         v_N v_1, v_N v_2, v_N v_3, ..., v_N^2 ].
Quadratic expressions in v, such as 2 v_1^2 + 5 v_3 v_1 + 7 v_2 v_3 and v_2 v_1 + 9 v_2^2 + 4 v_3 v_2, are linear in the entries of vv^T. A quadratic system of equations can be recast as a linear system of equations on a matrix that has rank 1.

Recasting quadratic equations. Compressive (low-rank) recovery: generic quadratic systems with cN equations and N unknowns can be solved by applying nuclear norm minimization to the lifted rank-one matrix vv^T.

Blind deconvolution (image deblurring, multipath in wireless communications). We observe y[n] = Σ_l w[l] x[n − l] and want to untangle w and x.

Recasting as linear equations. While each observation is a quadratic combination of the unknowns, y[l] = Σ_n w[n] x[l − n], it is linear in the outer product
w x^T = [ w[1]x[1], w[1]x[2], ..., w[1]x[L] ;
          w[2]x[1], w[2]x[2], ..., w[2]x[L] ;
          ... ;
          w[L]x[1], w[L]x[2], ..., w[L]x[L] ].
So y = A(X_0), where X_0 = w x^T has rank 1.
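A small numerical check of this lifting step (sizes are illustrative): circular convolution computed directly, versus the same samples read off the outer product w x^T along wrapped anti-diagonals.

```python
import numpy as np

rng = np.random.default_rng(6)
L = 8
w = rng.standard_normal(L)
x = rng.standard_normal(L)

# Circular convolution computed directly
y = np.array([sum(w[n] * x[(l - n) % L] for n in range(L)) for l in range(L)])

# The same samples read off the rank-one matrix X0 = w x^T along wrapped anti-diagonals
X0 = np.outer(w, x)
y_lifted = np.array([sum(X0[n, (l - n) % L] for n in range(L)) for l in range(L)])

print("max difference:", np.max(np.abs(y - y_lifted)))   # numerically zero
```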

Recasting as linear equations. When w = Bh and x = Cm, each observation is likewise linear in the outer product
(Bh)(Cm)^T = [ ⟨b_1, h⟩⟨m, c_1⟩, ⟨b_1, h⟩⟨m, c_2⟩, ..., ⟨b_1, h⟩⟨m, c_N⟩ ;
               ⟨b_2, h⟩⟨m, c_1⟩, ⟨b_2, h⟩⟨m, c_2⟩, ..., ⟨b_2, h⟩⟨m, c_N⟩ ;
               ... ;
               ⟨b_K, h⟩⟨m, c_1⟩, ⟨b_K, h⟩⟨m, c_2⟩, ..., ⟨b_K, h⟩⟨m, c_N⟩ ],
where b_k is the kth row of B and c_n is the nth row of C. So y is linear in h m^T.

Blind deconvolution: theoretical results. We observe y = w ∗ x with w = Bh, x = Cm, i.e. y = A(h m^*), where h ∈ R^K and m ∈ R^N, and then solve min_X ||X||_* subject to A(X) = y. Ahmed, Recht, R '12: if B is incoherent in the Fourier domain, and C is randomly chosen, then we will recover X_0 = h m^* exactly (with high probability) when max(K, N) ≲ L / log^3 L.

Numerical results. [Phase-transition plots, white = 100% success, black = 0% success: one panel for w sparse, x sparse and one for w sparse, x short.] We can take K + N ≈ L/3.

Numerical results: an unknown image with known support in the wavelet domain, and an unknown blurring kernel with known support in the spatial domain.

Phase retrieval (image courtesy of Manuel Guizar-Sicairos). Observe the magnitude of the Fourier transform, |x̂(ω)|^2; x̂(ω) is complex, and its phase carries important information.

Phase retrieval (image courtesy of Manuel Guizar-Sicairos). Recently, Candès, Strohmer, and Voroninski have looked at a stylized version of phase retrieval: observe y_l = |⟨a_l, x⟩|^2, l = 1, ..., L, and they have shown that x ∈ R^N can be recovered when L ≥ Const · N for random a_l.
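Below is a hedged sketch of the lifted convex program for this stylized problem (in the spirit of PhaseLift): y_l = |⟨a_l, x_0⟩|^2 is linear in X_0 = x_0 x_0^T, so we minimize the trace over PSD matrices that match the data. cvxpy is assumed, the problem is kept real-valued and small, and the measurement model is illustrative.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(7)
N, L = 10, 60
x0 = rng.standard_normal(N)
A = rng.standard_normal((L, N))          # random measurement vectors a_l (rows)
y = (A @ x0) ** 2                        # magnitude-squared measurements

# Lift: y_l = <a_l a_l^T, X> with X = x0 x0^T; minimize trace over PSD matrices
X = cp.Variable((N, N), symmetric=True)
constraints = [X >> 0] + [cp.sum(cp.multiply(np.outer(A[l], A[l]), X)) == y[l]
                          for l in range(L)]
cp.Problem(cp.Minimize(cp.trace(X)), constraints).solve()

# Recover x0 (up to a global sign) from the leading eigenvector of the solution
evals, evecs = np.linalg.eigh(X.value)
xhat = np.sqrt(max(evals[-1], 0)) * evecs[:, -1]
print("error up to sign:",
      min(np.linalg.norm(xhat - x0), np.linalg.norm(xhat + x0)))
```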

Random projections in fast forward modeling

Forward modeling/simulation. Given a candidate model of the earth, we want to estimate the channel between each source/receiver pair. [Diagram: sources p_1, ..., p_4 and channel responses h_{:,1}, ..., h_{:,4} through a candidate earth model, with the simulated output observed at the receiver set.]

Simultaneous activation. Run a single simulation with all of the sources activated simultaneously with random waveforms. The channel responses interfere with one another, but the randomness codes them in such a way that they can be separated later. [Diagram: random pulses p_1, ..., p_4 driving the channels h_{:,1}, ..., h_{:,4} through the candidate model, producing a single received trace y.]

Multiple-channel linear algebra. For each receiver k, the received trace stacks the convolutions of the random pulses with the channels: y_k = [G_1 G_2 ... G_c] [h_{1,k}; h_{2,k}; ...; h_{c,k}], where G_j is the convolution matrix built from pulse p_j. How long does each pulse need to be to recover all of the channels? (The system is m × nc, with m = pulse length, n = channel length, and c = number of channels.) Of course we can do it for m ≥ nc, but if the channels have a combined sparsity of S, then we can take m ≳ S + n. A toy numerical version of this system is sketched below.
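The sketch below is a hedged toy version of this multiple-channel system: a few sparse channels are driven by random pulses, the single received trace is formed from the block convolution system, and the channels are unscrambled by l1 minimization (cvxpy assumed; full rather than circular convolution, and all sizes, are my assumptions).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(8)
c, n, S, m = 4, 40, 10, 40          # channels, channel length, combined sparsity, pulse length

# Sparse channels h_1, ..., h_c stacked into one vector of length c*n
h0 = np.zeros(c * n)
h0[rng.choice(c * n, S, replace=False)] = rng.standard_normal(S)

# Block system y = [G_1 G_2 ... G_c] h, where G_j convolves with random pulse p_j
blocks = []
for j in range(c):
    p = rng.standard_normal(m)
    G = np.column_stack([np.convolve(p, e) for e in np.eye(n)])   # (m+n-1) x n
    blocks.append(G)
A = np.hstack(blocks)               # (m+n-1) x (c*n): far fewer rows than columns
y = A @ h0

h = cp.Variable(c * n)
cp.Problem(cp.Minimize(cp.norm1(h)), [A @ h == y]).solve()
print("channel recovery error:", np.linalg.norm(h.value - h0))
```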

Seismic imaging simulation. [Figures: (a) simple case, (b) more complex case; source and receiver geometry; desired band-limited Green's functions obtained by sequential-source modeling.] An array of 128 × 64 (8192) sources is activated simultaneously (with 1 receiver), and sparsity is enforced in the curvelet domain; this can compress the computations by roughly 20×.

Source localization. We observe a narrowband source emitting from an (unknown) location r_0: Y = α G(r_0) + noise, Y ∈ C^N. Goal: estimate r_0 using only implicit knowledge of the channel G.

Matched field processing. [Plot: |⟨Y, G(r)⟩|^2 over candidate locations r.] Given observations Y, estimate r_0 by matching against the field: r̂ = arg min_r min_{β ∈ C} ||Y − β G(r)||_2^2 = arg max_r |⟨Y, G(r)⟩|^2 / ||G(r)||_2^2. We do not have direct access to G, but we can calculate ⟨Y, G(r)⟩ for all r using time reversal.

Multiple realizations. We receive a series of measurements Y_1, ..., Y_K from the same environment (but possibly different source locations). Calculating G^H Y_k for each instance can be expensive (it requires a PDE solve). A naïve precomputing approach: set off a source s_i at each receiver location and time reverse G^H s_i to pick off a row of G. We can use ideas from compressive sensing to significantly reduce the amount of precomputation.

Coded simulations. Pre-compute the responses to a series of randomly and simultaneously activated sources along the receiver array: b_1 = G^H φ_1, b_2 = G^H φ_2, ..., b_M = G^H φ_M, where the φ_m are random vectors. Stack up the b_m^H to form the matrix Φ G. Given the observations Y, code them to form Φ Y, and solve r̂_cs = arg min_r min_{β ∈ C} ||Φ Y − β Φ G(r)||_2^2 = arg max_r |⟨Φ Y, Φ G(r)⟩|^2 / ||Φ G(r)||_2^2.
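Here is a hedged NumPy sketch of the coded matched-field statistic: random vectors stand in for the Green's functions G(r) (no propagation model is simulated), and the full and compressed ambiguity functions are compared; for modest M their peaks typically coincide.

```python
import numpy as np

rng = np.random.default_rng(9)
N, P, M = 128, 500, 20              # array size, candidate locations, random probes
G = rng.standard_normal((N, P))     # column r stands in for the field G(r) at the array
r0 = 123
Y = 3.0 * G[:, r0] + 0.1 * rng.standard_normal(N)

def ambiguity(Yv, Gm):
    """|<Y, G(r)>|^2 / ||G(r)||^2 for every candidate location r."""
    corr = Gm.T @ Yv
    return corr**2 / np.sum(Gm**2, axis=0)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random coding matrix
full = ambiguity(Y, G)
compressed = ambiguity(Phi @ Y, Phi @ G)

print("true location                         :", r0)
print("peak of full ambiguity function       :", np.argmax(full))
print("peak of compressed ambiguity function :", np.argmax(compressed))
```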

Compressive ambiguity functions. [Images: the ambiguity function (G^H Y)(r) and the compressed ambiguity function (G^H Φ^H Φ Y)(r), with M = 10 (compare to 37 receivers).] The compressed ambiguity function is a random process whose mean is the true ambiguity function. For very modest M, these two functions peak in the same place.

Analysis. Suppose we can approximate G^T G(r) with a 2D Gaussian with widths λ_1, λ_2, and set W = area / (λ_1 λ_2). [Image: ambiguity function over the search region.] Then we can reliably estimate r_0 when M ≳ log W, and we can withstand noise levels σ^2 ≲ A^2 M / (W log W), where A is the source amplitude.

Sampling ensembles of correlated signals

Sensor arrays

Neural probes. [Image: recording site.] Up to 100s of channels sampled at 100 kHz, i.e. 10s of millions of samples/second. Near future: 1 million channels, terabits per second.

Correlated signals. Nyquist acquisition of M signals: sampling rate ≈ (number of signals) × (bandwidth) = M · W.

Correlated signals. [Diagram: the M observed signals are expressed as linear mixtures of R underlying latent signals.] Can we exploit the latent correlation structure to reduce the sampling rate?

Coded multiplexing. [Block diagram: each input signal is passed through a modulator with its own code p_1, ..., p_M; the modulated signals are summed and fed to a single ADC running at rate ϕ.] An architecture that achieves sampling rate ≈ (number of independent signals) × (bandwidth) ≈ R W log^{3/2} W.
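A minimal sketch of this multiplexing architecture is below: M correlated channels (spanned by R latent waveforms) are each modulated by a random ±1 code, summed, and sampled as one ADC stream; all signal models and sizes are illustrative assumptions, and recovery is not implemented.

```python
import numpy as np

rng = np.random.default_rng(10)
M, R, T = 32, 3, 2048               # channels, latent rank, samples per channel

# Correlated ensemble: M channels spanned by R latent waveforms (rank-R row space)
latent = rng.standard_normal((R, T))
X = rng.standard_normal((M, R)) @ latent         # M x T matrix of channel samples

codes = rng.choice([-1.0, 1.0], size=(M, T))     # random +/-1 modulating codes
y = np.sum(codes * X, axis=0)                    # the single multiplexed ADC stream

print("ensemble shape:", X.shape, "-> one measured stream of length", y.size)
# Recovery (not shown) would exploit the rank-R correlation structure, e.g. via
# nuclear norm minimization, so the rate scales with R rather than M.
```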

References
E. Candès, J. Romberg, and T. Tao, "Stable Signal Recovery from Incomplete and Inaccurate Measurements," CPAM, 2006.
E. Candès and J. Romberg, "Sparsity and Incoherence in Compressive Sampling," Inverse Problems, 2007.
J. Romberg, "Compressive Sensing by Random Convolution," SIAM J. Imaging Science, 2009.
A. Ahmed, B. Recht, and J. Romberg, "Blind Deconvolution using Convex Programming," submitted to IEEE Transactions on Information Theory, November 2012.
A. Ahmed and J. Romberg, "Compressive Sampling of Correlated Signals," manuscript in preparation; preliminary version at the IEEE Asilomar Conference on Signals, Systems, and Computers, October 2011.
A. Ahmed and J. Romberg, "Compressive Multiplexers for Correlated Signals," manuscript in preparation; preliminary version at the IEEE Asilomar Conference on Signals, Systems, and Computers, November 2012.
http://users.ece.gatech.edu/~justin/publications.html