Hamming Compressed Sensing


Tianyi Zhou and Dacheng Tao, Member, IEEE

arXiv:1110.0073v2 [cs.IT] 10 Oct 2011

Abstract—Compressed sensing (CS) and 1-bit CS cannot directly recover quantized signals and require time-consuming recovery. In this paper, we introduce Hamming compressed sensing (HCS), which directly recovers a k-bit quantized signal of dimension n from its 1-bit measurements via n invocations of a Kullback-Leibler divergence based nearest neighbor search. Compared with CS and 1-bit CS, HCS allows the signal to be dense, takes considerably less (linear) recovery time and requires substantially fewer measurements (m = O(log n)). Moreover, HCS recovery can accelerate the subsequent 1-bit CS dequantizer. We study a quantized recovery error bound of HCS for general signals and the HCS+dequantizer recovery error bound for sparse signals. Extensive numerical simulations verify the appealing accuracy, robustness, efficiency and consistency of HCS.

Index Terms—Compressed sensing, 1-bit compressed sensing, quantizer, quantized recovery, nearest neighbor search, dequantizer.

I. INTRODUCTION

The digital revolution has triggered a rapid growth of novel signal acquisition techniques whose primary interests lie in reducing sampling costs and improving recovery efficiency. The theoretical promise of conventional sampling methods comes from the Shannon/Nyquist sampling theorem [1], which states that a signal can be fully recovered if it is sampled uniformly at a rate more than twice its bandwidth. Such uniform sampling is usually performed by analog-to-digital converters (ADCs). Unfortunately, for many real applications such as radar imaging and magnetic resonance imaging (MRI), the Nyquist rate is too high due to the expensive cost of analog-to-digital (AD) conversion, the maximum sampling-rate limits of the hardware, or the additional costly compression of the obtained samples.

(T. Zhou and D. Tao are with the Centre for Quantum Computation & Intelligent Systems, University of Technology, Sydney, NSW 2007, Australia.)

A. Compressed sensing

Recently, prosperous research in compressed sensing (CS) [2][3][4][5][6][7] has shown that an accurate recovery can be obtained by sampling signals at a rate proportional to their underlying information content rather than their bandwidth. The key improvement brought by CS is that the sampling rate can be significantly reduced by replacing uniform sampling with linear measurement, provided the signals are sparse or compressible on a certain dictionary. This improvement leverages the fact that many signals of interest occupy a quite large bandwidth but have a sparse spectrum, which makes uniform sampling redundant. In particular, CS is dedicated to reconstructing a signal x ∈ R^n from its linear measurements y = Φx = ΦΨα, where Φ ∈ R^{m×n} is the measurement or sensing matrix, allowing m ≪ n, in which case Φ is an underdetermined system. The signal x is K-sparse if it has fewer than K nonzero entries. Given a dictionary Ψ, x is K-compressible if α has fewer than K nonzero entries. If a sparse/compressible x consists of the Nyquist-rate samples of an analog signal x(t), CS replaces the ADC with a novel sampler Φ such that y = Φx = Φx(t).

A straightforward approach for recovering x from y is to minimize the number of nonzero entries in x, i.e., the ℓ0 norm of x. Specifically, it is not difficult to demonstrate that a K-sparse signal x can be accurately recovered from m = 2K measurements by solving

min_{x ∈ R^n} ‖x‖_0  s.t.  y = Φx   (CS-ℓ0)

with exhaustive search if Φ is a generic matrix. However, such an exhaustive search has intractable combinatorial complexity. So some CS methods adopt iterative and greedy algorithms to solve the ℓ0 minimization, such as orthogonal matching pursuit (OMP) [8], compressive sampling matching pursuit (CoSaMP) [9], approximate message passing [10], iterative splitting and thresholding (IST) [11] and iterated hard shrinkage (IHT) [12]. Since ℓ0 minimization is a non-convex problem, some other CS methods solve its convex relaxation, i.e., ℓ1 minimization and its variants:

min_{x ∈ R^n} ‖x‖_1  s.t.  y = Φx.   (CS-ℓ1)
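As an illustration (not part of the original text), the (CS-ℓ1) problem above can be recast as a linear program by splitting x = u − v with u, v ≥ 0; the following minimal sketch uses SciPy, with illustrative dimensions:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve min ||x||_1 s.t. Phi x = y as an LP over (u, v), with x = u - v."""
    m, n = Phi.shape
    c = np.ones(2 * n)                    # sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])         # Phi u - Phi v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
n, m, K = 64, 24, 3                       # illustrative sizes
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # sub-Gaussian sensing matrix
x_hat = basis_pursuit(Phi, Phi @ x)
print(np.linalg.norm(x - x_hat))          # near zero when m is large enough
```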

Various convex optimization approaches have been developed or introduced to solve the above problem and its variants. Representatives include basis pursuit [13], the Dantzig selector [14], NESTA [15], the interior point method [16], coordinate gradient descent [17], gradient projection [18] and the class of approaches based on the fixed point method, such as the Bregman iterative algorithm [19], fixed point continuation [20] and iteratively re-weighted least squares (IRLS) [21]. It is also worth noting that lasso [22] type algorithms [23][24] for model selection can be applied to the CS recovery problem as well. However, compared with the recovery schemes in conventional sampling theory, which reconstruct signals by invoking a nearly linear transformation, most of the aforementioned CS recovery algorithms require polynomial time, which is substantially more expensive than the conventional methods. This burden on recovery efficiency limits the application of CS to many real problems in which the dimension of the signals is extremely high.

Beyond recovery efficiency, another important issue in CS is the theoretical guarantee of precise recovery. Since most existing CS algorithms find the signal that agrees with the measurements y without directly minimizing the ℓ0 norm, the recovery success of existing CS methods relies on another theoretical requirement on Φ (or ΦΨ in the compressible case): two sufficiently close measurements y_1 = Φx_1 and y_2 = Φx_2 must indicate that the vectors x_1 and x_2 are sufficiently close to each other. This low-distortion property of the linear operator Φ is called the restricted isometry property (RIP) [25][26], which can also be interpreted as incoherence between the measurement matrix Φ and the signal dictionary Ψ (or the identity matrix I for sparse x), in order to restrict the concentration of a single α_i (or x_i in the sparse case) in the measurements. An intriguing property of CS is that some randomly generated matrices, such as Gaussian, Bernoulli and partial Fourier ensembles, fulfill the RIP with high probability. For example, a Φ whose entries are randomly drawn from a sub-Gaussian distribution satisfies the RIP with high probability if m = O(K log(n/K)). By using the concept of the RIP, the global solution of the ℓ1 minimization with such a Φ is guaranteed to be sufficiently close to the original sparse signal. Thus CS can successfully recover K-sparse signals of dimension n from m = O(K log(n/K)) measurements. However, given a deterministic Φ with m = O(K log(n/K)), it is generally regarded as NP-hard to test whether the RIP holds or not.

In practice, signals are often not exactly sparse, and the measurements cannot be precise due to hardware limits. Two questions arise in this case: is it possible to recover the K largest entries if x is nearly sparse? And is it possible to recover x from noisy measurements y? These questions lead to the problem of stable recovery [26][27] in CS. Fortunately, the RIP can naturally address this problem, because it ensures that small changes in the measurements induce small changes in the recoveries. In stable recovery, the constraint y = Φx in the original ℓ0 and ℓ1 minimization problems is replaced with ‖y − Φx‖_2 ≤ ε. Another variant in this case is minimizing ‖y − Φx‖_2 with a penalty or constraint on the ℓ0 or ℓ1 norm of x. Many existing CS algorithms, such as basis pursuit denoising (BPDN) [13], can also handle the stable recovery problem. Today's state-of-the-art research in CS focuses primarily on further reducing the number of measurements, improving the recovery efficiency and increasing the robustness of stable recovery.

Although CS [28] exhibits powerful potential in simplifying the sampling process and reducing the number of measurements, some issues remain unsettled when CS is applied to realistic digital systems, especially in hardware development. A crucial problem is how to deal with the quantization of the measurements.
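To make the stable-recovery variant above concrete, here is a minimal sketch (not from the paper) of iterative soft thresholding for the penalized problem min ½‖y − Φx‖₂² + λ‖x‖₁; the step size 1/L and the iteration count are illustrative assumptions:

```python
import numpy as np

def ista(Phi, y, lam, n_iter=500):
    """Iterative soft thresholding for min 0.5||y - Phi x||_2^2 + lam ||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = x + Phi.T @ (y - Phi @ x) / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```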

B. Quantized compressed sensing

In practical digital systems, quantization of the CS measurements y is a natural and inevitable process, in which each measurement is transformed from a real value into a finite number of bits that represent a finite interval containing the real value. In CS, quantization is an irreversible process that introduces error into the measurements if, in recovery, we treat the quantized measurement as an arbitrary real value within the corresponding interval. One commonly used trick for dealing with the quantization error is to treat it as bounded Gaussian noise in the measurements, so that the stable recovery methods in CS are guaranteed to obtain a robust recovery. However, this solution cannot produce acceptable recovery results unless the quantization error is sufficiently small and nearly Gaussian. Unfortunately, these two conditions are rarely fulfilled, because (1) a small quantization error requires a small interval width, which is the result of a high sampling rate and a high conversion accuracy of the ADC, and this conflicts with the spirit of CS; and (2) the quantization error is usually highly non-Gaussian.

Several recent works [29][30][31][32] address quantized compressed sensing (QCS) by implicitly or explicitly treating the quantization as a box constraint on the measurements y = Φx + ε:

min_{x ∈ R^n} ‖x‖_1  s.t.  u ≤ Φx + ε ≤ v,   (QCS)

where the two vectors u and v store the corresponding boundaries of the intervals in which the entries of y lie, and ε is the measurement noise. The box constraint is also called the quantization consistency (QC) constraint. By solving this problem, the quantization error is not wholly transformed into recovery error via the RIP. Thus it is possible to obtain an accurate recovery from very coarsely quantized measurements. A variant of BPDN called the basis pursuit dequantizer, proposed in [29], restricts ‖y − Φx‖_p (2 ≤ p ≤ ∞) rather than ‖y − Φx‖_2 in the ℓ1-norm minimization, and proves that the recovery error decreases by a factor of √(p + 1). In [32], adaptations of BPDN and subspace pursuit [33] integrate an explicit QC constraint. An ℓ1-regularized maximum likelihood estimation is developed in [30] to solve QCS with noise (i.e., ε ≠ 0).

As the extreme case of QCS, 1-bit CS [31][34] has been developed to reconstruct sparse signals from 1-bit measurements, which merely capture the signs of the linear measurements in CS. The 1-bit measurements enable simple and fast quantization. Thus they can significantly reduce the sampling costs and strengthen the robustness of the hardware implementation. One inevitable information loss in 1-bit measurements is the scale of the original signal, because a scaled signal has the same linear measurement signs as the original one. Theoretically, 1-bit CS ensures consistent reconstruction of signals on the unit ℓ2 sphere [35][36]. In 1-bit CS, one-sided ℓ2 [31] or ℓ1 [34] objectives are designed to guarantee the consistency of the 1-bit measurements by imposing a sign constraint or minimizing the sign violations in the optimization. Analogous to the RIP in CS, the binary ε-stable embedding (BεSE) [34] ensures low distortion between the original signals and their 1-bit measurements, and thus guarantees the accuracy and stability of the reconstruction. It is remarkable that m = O(K log n) measurements guarantee a BεSE and the subsequent successful recovery. Most 1-bit CS recovery algorithms, e.g., renormalized fixed point iteration [31], matched sign pursuit [35] and binary iterative hard thresholding (BIHT) [34], are extensions of CS recovery algorithms. It has been shown that BIHT, a variant of IHT [12], can produce precise and consistent recovery from 1-bit measurements. QCS and 1-bit CS not only consider the quantization of the measurements but also improve the recovery robustness to the nonlinear distortions introduced by the ADC, because the quantized measurements only preserve the intervals in which the real-valued measurements lie.
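For reference, a compact sketch in the spirit of BIHT [34] (this code is not from the paper; the step size and iteration count are assumptions):

```python
import numpy as np

def biht(y, Phi, K, tau=1.0, n_iter=100):
    """Binary iterative hard thresholding: recover a K-sparse, unit-norm x
    from 1-bit measurements y = sign(Phi x)."""
    m, n = Phi.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        g = x + (tau / m) * (Phi.T @ (y - np.sign(Phi @ x)))  # reduce sign violations
        idx = np.argsort(np.abs(g))[-K:]                      # keep the K largest entries
        x = np.zeros(n)
        x[idx] = g[idx]
    return x / np.linalg.norm(x)          # the scale is unrecoverable from signs
```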

However, QCS and 1-bit CS methods require polynomial-time recovery algorithms, and thus they are prohibitive for high-dimensional signals in practical applications. Moreover, another central problem is that both CS and QCS recover the original real-valued signal, while a quantization of the recovered signal is inevitable in digital systems.

C. Hamming compressed sensing

Digital systems prefer to use the quantized recovery of the original signal, which can be processed directly, but the recoveries of both CS and QCS are continuous and real-valued. In order to apply them to digital systems, a straightforward solution is to impose an additional quantization on the CS or QCS recoveries. However, this quantization requires additional time and expense on ADCs, which could be considerable if the sampling rate is required to be high. Moreover, the convex optimization or iterative greedy search based recovery in CS and QCS takes polynomial time. This is not acceptable for high-dimensional signals. In addition, the trade-off between recovery time and recovery resolution cannot be controlled in CS and QCS, although such control is preferred in practice. Finally, the success of CS and QCS is based upon the assumption that the signals are sparse. When the signal x is dense, the numbers of measurements required by CS and QCS are large, and the advantages of CS and QCS are accordingly lost.

In this paper, we directly recover the quantization of a general signal (not necessarily K-sparse) from the quantization of its linear measurements; we refer to this as quantized recovery (QR). In particular, for a signal x and its quantization q = Q(x) by a quantizer Q, we seek a recovery algorithm R that reconstructs q* = R(y) sufficiently close to q from the quantized measurements y = A(x), where the operator A is a composition of linear measurement and quantization. This problem has not been formally studied before, and it has the potential to mitigate the aforementioned limitations of CS and QCS. The main motivation behind QR is to sacrifice quantization error of the recovery in order to reduce the number of measurements. Thus the recovery time can be significantly reduced by decreasing the number of bits of the quantized recovery, and the number of measurements can be small even when the signal is dense. Compared with CS and QCS, QR considers the quantization error of the quantized recovery when determining the sampling rate and developing the reconstruction algorithm. The primary contribution of this paper is the development of Hamming compressed sensing (HCS) to achieve quantized recovery from a small number of quantized measurements, with extremely small time cost and without a signal sparsity constraint.

In compressive sampling, we adopt the 1-bit measurements [34] to guarantee consistency and a BεSE, but employ them in a different way. In particular, we introduce a bijection between each dimension of the signal and a Bernoulli distribution. The underlying idea of HCS is to estimate the Bernoulli distribution for each dimension from the 1-bit measurements, so that each dimension of the signal can be recovered from the corresponding Bernoulli distribution. In order to define the quantized recovery, we propose a k-bit HCS quantizer splitting the signal domain into k intervals, which are derived from the bijection as the mappings of k uniform linear quantization boundaries in the Bernoulli distribution domain. In recovery, HCS searches for the nearest neighbor of the estimated Bernoulli distribution among the k boundaries in the Bernoulli distribution domain, and recovers the quantization of the corresponding dimension as the quantizer interval associated with the nearest boundary. We theoretically study the quantized recovery error bound of HCS by investigating the precision of the estimation and its impact on the KL divergence based nearest neighbor search. The theoretical analysis provides strong support for the successful recovery of HCS. Compared with CS and QCS, HCS has the following significant and appealing merits:

1) HCS provides simple and low-cost sampling and recovery schemes for digital systems. The procedures are substantially simple: sampling and sensing are integrated into the 1-bit measurements, while recovery and quantization are integrated into the quantized recovery. Furthermore, neither the 1-bit measurements nor the quantized recovery requires an ADC with a high sampling rate. Note that HCS retains the recovery robustness of quantized measurements inherited from QCS.

2) The recovery in HCS only requires computing nk Kullback-Leibler (KL) divergences to obtain the k-bit recovery of an n-dimensional signal, and thus it is a non-iterative, linear algorithm involving only very simple computations. Therefore, HCS is considerably more efficient and easier to implement than CS and QCS.

3) According to the theoretical analysis of HCS, merely m = O(log n) 1-bit measurements are sufficient to produce a successful quantized recovery with high probability. Note that there is no sparsity assumption on the signal x. Therefore, HCS allows more economical compression than CS and QCS.

Another compelling advantage of HCS is that it can promote the recovery of the real-valued signal after quantized recovery. When a subsequent dequantization x* = D(q*) is required after quantized recovery, we can treat the quantized recovery as a box constraint that reduces the search space of the 1-bit CS dequantizer D, in order to accelerate its convergence. By invoking the HCS recovery bound, the consistency and the BεSE from 1-bit CS, we show an error bound for the HCS+dequantizer recovery of sparse signals.

The rest of this paper is outlined as follows. Section 2 introduces the 1-bit measurements in HCS, which lead to a bijection between each dimension of the signal and a Bernoulli distribution, and their consistency. Section 3 presents the k-bit reconstruction in HCS, including how to obtain the HCS quantizer, the KL-divergence nearest neighbor search based recovery, and theoretical evidence for successful recovery. Section 4 introduces the application of the HCS recovery results to dequantization; a theoretical analysis of the dequantization error is given there. Section 5 shows the power of HCS via three groups of experiments. Section 6 concludes.

II. 1-BIT MEASUREMENTS

HCS recovers the quantized signal directly from its quantized measurements, each of which is composed of a finite number of bits. We consider the extreme case of 1-bit measurements of a signal x ∈ R^n, which are given by

y = A(x) = sign(Φx),   (1)

where sign(·) is an element-wise sign operator and A maps x from R^n to the Boolean cube B^m := {−1, 1}^m. Since the scale of the signal is lost in the 1-bit measurements y (multiplying x by a positive scalar does not change the signs of the measurements), a consistent reconstruction can be obtained by enforcing the signal x ∈ Σ_K := {x ∈ S^{n−1} : ‖x‖_0 ≤ K}, where S^{n−1} := {x ∈ R^n : ‖x‖_2 = 1} is the n-dimensional unit hypersphere.

The 1-bit measurements y can also be viewed as a hash of the signal x. A similar hash based on random projection signs is developed in locality sensitive hashing (LSH) [37][38]. LSH performs approximate nearest neighbor (ANN) searches on the hashes of the signals, and it is proved that the results approach the exact NN searches on the original signals with high probability. This theoretical guarantee is based on a condition similar to the BεSE [34] in 1-bit CS. It is interesting to compare LSH with HCS, because LSH is an irreversible process aiming at ANN, while HCS can be viewed as a reversible LSH in this case.
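A minimal sketch (not from the paper) of the 1-bit measurement operator (1), with the rows of Φ normalized to the unit sphere as required by Theorem 1 below:

```python
import numpy as np

def one_bit_measure(x, m, rng=np.random.default_rng(1)):
    """1-bit measurements y = A(x) = sign(Phi x), with each row of Phi drawn
    uniformly from the unit sphere (normalization does not change the signs)."""
    Phi = rng.standard_normal((m, x.size))
    Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)   # row normalization
    return np.sign(Phi @ x), Phi
```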

A. Bijection

In contrast to CS and 1-bit CS, HCS does not recover the original signal; it reconstructs the quantized signal by recovering each dimension in isolation. In particular, based on Lemma 3.2 in [39], we show that there exists a bijection (cf. Theorem 1) between each dimension of the signal x and a Bernoulli distribution, which can be uniquely estimated from the 1-bit measurements. The underlying idea of HCS is to estimate the Bernoulli distribution for each dimension, and to recover the quantization of the corresponding dimension as the interval in which the mapping of the Bernoulli distribution lies.

Theorem 1 (Bijection): For a normalized signal x ∈ R^n with ‖x‖_2 = 1 and a normalized Gaussian random vector φ drawn uniformly from the unit ℓ2 sphere in R^n (i.e., each element of φ is first drawn i.i.d. from the standard Gaussian distribution N(0, 1), and then φ is normalized as φ/‖φ‖_2), given the i-th dimension x_i of the signal and the corresponding coordinate unit vector e_i = (0, ..., 0, 1, 0, ..., 0), where the 1 appears in the i-th dimension, there exists a bijection P : R → P from x_i to the Bernoulli distribution of the binary random variable s_i = sign⟨x, φ⟩ · sign⟨e_i, φ⟩:

Pr(s_i = 1) = P(x_i)^+ = 1 − (1/π) arccos(x_i),  Pr(s_i = −1) = P(x_i)^− = (1/π) arccos(x_i).   (2)

Since the mapping between x_i and P(x_i) is bijective, given P(x_i), the i-th dimension of x can be uniquely identified. According to the definition of s_i, P(x_i) can be estimated from instances of the random variable sign⟨x, φ⟩, which are exactly the 1-bit measurements y defined in (1). Therefore, the 1-bit measurements y contain sufficient information to reconstruct x_i from an estimate of P(x_i), and the recovery accuracy of x_i depends on the accuracy of the estimate of P(x_i).
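The bijection (2) can be checked empirically; in this sketch (not from the paper), the empirical frequency of s_i = −1 is compared with arccos(x_i)/π:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, i = 32, 20000, 0
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                     # Theorem 1 assumes ||x||_2 = 1

Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)

s = np.sign(Phi @ x) * np.sign(Phi[:, i])  # samples of s_i = sign<x,phi> sign<e_i,phi>
print(np.mean(s == -1), np.arccos(x[i]) / np.pi)   # the two values nearly agree
```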

B. Consistency

Given a signal x, its quantization q = Q(x) by a quantizer Q, the quantized recovery q* = R(y) obtained by a reconstruction R from the 1-bit measurements y = A(x), and its dequantization x* = D(q*) obtained by a dequantizer D, the HCS+dequantizer recovery x* is given by

x* = x + err_H + err_D,   (3)

where err_H is determined by the difference between q and q* caused by the HCS reconstruction, and err_D is the dequantization error from q* to x*. Upper bounds on err_H and err_D will be given in Sections 4 and 5, respectively. The following Lemma 1 shows the consistency pertaining to err_H and err_D.

Lemma 1 (Consistency): Let Φ be a standard Gaussian random matrix whose rows are the vectors φ defined in Theorem 1. The measurement operator A is defined in (1). Given a fixed γ > 0, for any signal x ∈ R^n and its HCS+dequantizer recovery x*, we have

E[D_H(A(x), A(x*))] ≤ g(σ, ‖x‖_2),   (4)

Pr( D_H(A(x), A(x*)) > g(σ, ‖x‖_2) + γ ) ≤ e^{−2mγ²},   (5)

where D_H(u, v) = (1/m) Σ_{i=1}^m 1[u_i ≠ v_i] (u, v ∈ {−1, 1}^m) is the normalized Hamming distance, g(σ, ‖x‖_2) = (√2 σ ‖x‖_2)/(2(‖x‖_2² + σ²)) + (√2 σ)/(2 ‖x‖_2), and σ = ‖err_H + err_D‖_2.

Proof: According to (1) and (3), we have

A(x*) = sign(Φx*) = sign(Φx + Φ(err_H + err_D)).   (6)

Here err = Φ(err_H + err_D) is a Gaussian random noise vector whose i-th element satisfies err_i ~ N(0, σ²). According to Lemma 5 in [34], we obtain Lemma 1. This completes the proof.

The consistent reconstruction in CS and 1-bit CS minimizes ‖A(x) − A(x*)‖ for a K-sparse signal x. The RIP and the BεSE bridge consistency and reconstruction accuracy in CS and 1-bit CS, respectively. Instead of minimizing ‖A(x) − A(x*)‖ to achieve recovery accuracy, HCS directly estimates the interval in which each dimension of the signal x lies from the estimated Bernoulli distribution defined in Theorem 1. Nevertheless, the consistency between A(x) and A(x*) remains important for HCS, because (1) it in part determines the amount of information preserved in the 1-bit measurements, and (2) it determines the error bound of the HCS+dequantizer recovery for sparse signals.
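The normalized Hamming distance of Lemma 1 and the consistency it describes can be illustrated as follows (a sketch, not from the paper; the perturbation level is an arbitrary choice):

```python
import numpy as np

def normalized_hamming(u, v):
    """D_H(u, v) = (1/m) * #{i : u_i != v_i} for sign vectors u, v."""
    return np.mean(u != v)

rng = np.random.default_rng(3)
n, m = 32, 5000
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
x_star = x + 0.05 * rng.standard_normal(n)   # x* = x + err with small sigma
Phi = rng.standard_normal((m, n))
print(normalized_hamming(np.sign(Phi @ x), np.sign(Phi @ x_star)))  # small, cf. (4)
```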

III. K-BIT RECONSTRUCTION

The primary contribution of this paper is the quantized recovery in HCS, which reconstructs the quantized signal from its 1-bit measurements. Figure 1(a) illustrates HCS quantized recovery. To define the HCS quantizer, we first find k boundaries P_j (j = 0, 1, ..., k−1) (8) in the Bernoulli distribution domain by imposing a uniform linear quantizer on the range of P_j. Given an arbitrary x_i, the nearest neighbor of P(x_i) among the k boundaries P_j (j = 0, 1, ..., k−1) indicates the interval q_i in which x_i lies in the signal domain. The k+1 boundaries S_j (j = 0, 1, ..., k) associated with the k intervals q_j (j = 1, ..., k) are calculated from the k boundaries P_j (j = 0, 1, ..., k−1) according to the bijection defined in Theorem 1. In HCS recovery, P(x_i) is estimated as P̂(x_i) from the 1-bit measurements y. Then the nearest neighbor of P̂(x_i) among the k boundaries P_j (j = 0, 1, ..., k−1) is determined by comparing the KL divergences between P̂(x_i) and P_j. The quantization of x_i defined by the HCS quantizer is recovered as the interval q*_i corresponding to the nearest neighbor.

In this section, we first introduce the HCS quantizer, which is a mapping of a uniform linear quantizer from the Bernoulli distribution domain to the signal domain. The quantized recovery procedure is composed of n KL-divergence based nearest neighbor searches. Thus it is a linear algorithm, much faster than the conventional reconstruction algorithms of CS and 1-bit CS, which require optimization with ℓ_p (0 ≤ p ≤ 2) constraints/penalties, or iterative thresholding/greedy search. We then study the upper bound of the quantized recovery error err_H.

A. HCS quantizer

Since HCS aims at recovering the quantization of the original signal, we first introduce the HCS quantizer, which defines the intervals and boundaries for quantization in the signal domain. These intervals and boundaries are uniquely derived from a predefined uniform linear quantizer in the Bernoulli distribution domain. Given a signal and the boundaries of the HCS quantizer, its k-bit quantization can be identified. We will show that the HCS quantizer performs closely to the uniform linear quantizer.

Note that in the quantized recovery of HCS, the reconstruction and the quantization are accomplished simultaneously. Thus the HCS quantizer does not play an explicit role in the recovery procedure. However, it is related to, and uniquely determined by, the quantization of the Bernoulli distribution domain, which plays an important role in the recovery and explains the reconstruction q*. Moreover, it will be applied in the error bound analyses for err_H and err_H + err_D.

We introduce the HCS quantizer Q by defining a bijective mapping from the boundaries in the Bernoulli distribution domain to the intervals of the signal domain according to Theorem 1. Assume the range of the signal x is given by

x_inf ≤ x_i ≤ x_sup,  ∀i ∈ {1, ..., n}.   (7)

2 KL divergence.5 S i.5 5 5 2 25 3 35 4 45 5 i Fig.. a Quantized recovery in. Bernoulli distribution P x i given in Theore has estiate ˆP x i 2 fro -bit easureents y = Ax. searches the nearest neighbor of ˆP x i aong the k boundaries P jj =,, k 8 in the Bernoulli distribution doain. The quantization of x i, i.e., q i is recovered as the interval between the two boundaries S i and S i corresponding to the nearest neighbor, wherein S i is a apping of P i and P i in signal doain. b quantizer. The boundaries S i in when k =, 3, 5,, 5 and x inf =, x sup =. By applying the unifor linear quantizer with the quantization interval to the Bernoulli distribution doain, we get the corresponding boundaries P i = Pr = P i = arccos x π inf i, P + i = Pr = Pr., i =,, k. 8 October, 2

The quantization interval is

Δ = (1/k) [ (1/π) arccos(x_inf) − (1/π) arccos(x_sup) ] = P_{j−1}^− − P_j^−.   (9)

We define the k-bit HCS quantizer in the signal domain by computing its k+1 boundaries as a mapping from the k boundaries P_i (i = 0, 1, ..., k−1) in the Bernoulli domain to R:

S_i = { x_inf,  i = 0;  cos(π f(P_i)),  i = 1, ..., k−1;  x_sup,  i = k, }   (10)

where f(P_i) is the unique value t ∈ (0, 1) at which D_KL(P_{i−1} ‖ t) = D_KL(P_i ‖ t), namely

f(P_i) = 1 / ( 1 + [ (P_i^−)^{P_i^−} (P_i^+)^{P_i^+} / ( (P_{i−1}^−)^{P_{i−1}^−} (P_{i−1}^+)^{P_{i−1}^+} ) ]^{1/Δ} ).

Although the mapping between the boundaries S_i of the HCS quantizer and the boundaries P_i of the quantizer in the Bernoulli distribution domain is bijective, this mapping cannot be inverted explicitly. So it is difficult to derive the corresponding quantizer in the Bernoulli distribution domain from a predefined HCS quantizer. Thus the HCS quantizer cannot be fixed as a uniform linear quantizer and has to be computed from a predefined quantizer in the Bernoulli distribution domain. Fortunately, the HCS quantizer performs very closely to the uniform linear quantizer, especially when x_i is not very close to −1 or 1. Figure 1(b) shows this fact. Given a signal x and the boundaries defined in (10), its k-bit quantization q is

Q(x) = q,  q_i = {j : S_{j−1} ≤ x_i ≤ S_j},  i = 1, ..., n.   (11)
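A sketch of the HCS quantizer construction (8)-(10) as reconstructed above (not the authors' code; the closed form of f used here is the equal-KL threshold between adjacent boundary distributions, and the default signal range is an illustrative assumption):

```python
import numpy as np

def hcs_quantizer(k, x_inf=-0.99, x_sup=0.99):
    """HCS quantizer: k uniform boundaries P_i in the Bernoulli domain (8),
    mapped to k+1 signal-domain boundaries S_i via f and the bijection (10)."""
    delta = (np.arccos(x_inf) - np.arccos(x_sup)) / (np.pi * k)   # interval (9)
    Pm = np.arccos(x_inf) / np.pi - delta * np.arange(k)          # P_i^-, i = 0..k-1
    Pp = 1.0 - Pm
    E = Pm * np.log(Pm) + Pp * np.log(Pp)               # negative Bernoulli entropies
    f = 1.0 / (1.0 + np.exp((E[1:] - E[:-1]) / delta))  # equal-KL thresholds f(P_i)
    return np.concatenate([[x_inf], np.cos(np.pi * f), [x_sup]])  # S_0..S_k

print(hcs_quantizer(8))    # the boundaries increase from x_inf to x_sup
```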

B. KL-divergence based nearest neighbor search

The k+1 boundaries of the k-bit HCS quantizer in (10) define k intervals in R. Quantized recovery in HCS reconstructs a quantized signal by estimating the interval in which each dimension of the signal x lies. The estimate is obtained by a nearest neighbor search in the Bernoulli distribution domain. To be specific, an estimate of the P(x_i) given in (2) can be derived from the 1-bit measurements y. For each P(x_i), we find its nearest neighbor among the k boundaries P_j (j = 0, 1, ..., k−1) (8) in the Bernoulli distribution domain. The interval in which x_i lies is then estimated as the interval of the HCS quantizer corresponding to the nearest neighbor. The KL divergence measures the distance between two Bernoulli distributions in the nearest neighbor search.

According to Theorem 1, the bijection from x_i to a particular Bernoulli distribution, i.e., the P(x_i) given in (2), has an unbiased estimate obtained from the 1-bit measurements y:

P̂(x_i)^− = P̂r(s_i = −1) = |{j : [y ⊙ sign(Φ_i)]_j = −1}| / m,  P̂(x_i)^+ = P̂r(s_i = 1) = 1 − P̂r(s_i = −1),   (12)

where Φ_i is the i-th column of the measurement matrix Φ and ⊙ denotes the element-wise product. The quantization of x_i can then be recovered by searching for the nearest neighbor of P̂(x_i) among the k boundary Bernoulli distributions P_j (j = 0, 1, ..., k−1) in (8). In this paper, the distance between P_j and P̂(x_i) is measured by the KL divergence:

D_KL(P_j ‖ P̂(x_i)) = P_j^− log( P_j^− / P̂(x_i)^− ) + P_j^+ log( P_j^+ / P̂(x_i)^+ ),  i = 1, ..., n,  j = 0, 1, ..., k−1.   (13)

The interval in which x_i lies, among the k intervals defined by the boundaries S_j (j = 0, 1, ..., k) in (10), is identified as the one whose corresponding boundary distribution P_j is the nearest neighbor of P̂(x_i). Therefore, the quantized recovery of x, i.e., q*, is given by

R(y) = q*,  q*_i = 1 + argmin_j D_KL(P_j ‖ P̂(x_i)),  i = 1, ..., n,  j = 0, 1, ..., k−1.   (14)

Thus the interval in which x_i lies can be recovered as

S_{q*_i − 1} ≤ x_i ≤ S_{q*_i}.   (15)

The HCS recovery algorithm is fully summarized by (14); it involves only simple computations without iteration and thus can easily be implemented in real systems. According to (14), the quantized recovery in HCS requires nk computations of the KL divergence between two Bernoulli distributions. This indicates the high efficiency of HCS (linear recovery time), and the trade-off between the resolution k and the time cost nk.
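Putting (12)-(14) together, a minimal sketch of the complete HCS recovery (not the authors' code; the clipping guards against zero probabilities, and the signal range is assumed known):

```python
import numpy as np

def hcs_recover(y, Phi, k, x_inf=-0.99, x_sup=0.99):
    """HCS quantized recovery: estimate a Bernoulli distribution per dimension
    from y = sign(Phi x) (12), then KL nearest-neighbor search among the k
    boundaries (13)-(14). Returns interval indices q* in 1..k."""
    m, n = Phi.shape
    delta = (np.arccos(x_inf) - np.arccos(x_sup)) / (np.pi * k)
    Pm = np.arccos(x_inf) / np.pi - delta * np.arange(k)      # boundaries (8)
    Pm = np.clip(Pm, 1e-12, 1 - 1e-12)
    s = y[:, None] * np.sign(Phi)                 # samples of s_i, all i at once
    p = np.clip(np.mean(s == -1, axis=0), 1e-12, 1 - 1e-12)   # estimates (12)
    kl = (Pm[:, None] * np.log(Pm[:, None] / p)               # KL divergences (13)
          + (1 - Pm[:, None]) * np.log((1 - Pm[:, None]) / (1 - p)))
    return 1 + np.argmin(kl, axis=0)              # recovery rule (14)
```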

C. Quantized recovery error bound

We investigate the error bound of the quantized recovery (14) by studying the difference between q_i and q*_i, which are the quantization of x_i and its quantized recovery by HCS, respectively. The difference between q and q* defines the error err_H in (3), which is the error caused by the HCS reconstruction (14):

err_H(i) = { S_{q_i} − S_{q*_i − 1} ≤ (q_i − q*_i + 1) Δ_max,  q_i > q*_i;  0,  q_i = q*_i;  S_{q*_i} − S_{q_i − 1} ≤ (q*_i − q_i + 1) Δ_max,  q_i < q*_i, }   (16)

where Δ_max denotes the largest interval between neighboring boundaries of the HCS quantizer, i.e., Δ_max = max_{j=1,...,k} (S_j − S_{j−1}).

In order to investigate the difference between q_i and q*_i, we study the upper bound for the probability of the event that the true quantization of x_i is q_i = 1 + α, while its recovery by HCS is q*_i = 1 + β (β ≠ α). According to the HCS quantizer (10) and the HCS reconstruction (14), this probability is

Pr( β = argmin_j D_KL(P_j ‖ P̂(x_i)) | S_α ≤ x_i ≤ S_{α+1} ).   (17)

In order to study the conditional probability in (17), we first consider an equivalent form of the event β = argmin_j D_KL(P_j ‖ P̂(x_i)), shown in the following Lemma 2.

Lemma 2 (Equivalence): The event that the nearest neighbor of P̂(x_i) among P_j (j = 0, 1, ..., k−1) is P_β equals the event that P̂(x_i) is closer to P_β than to both P_{β−1} and P_{β+1}, where the distance between P_j and P̂(x_i) is measured by the KL divergence (13), i.e.,

β = argmin_j D_KL(P_j ‖ P̂(x_i))  ⟺  { D_KL(P_{β−1} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0,  D_KL(P_{β+1} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0. }   (18)

Proof: The following equivalence is immediate:

β = argmin_j D_KL(P_j ‖ P̂(x_i))  ⟺  D_KL(P_{j : j ≠ β} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0.   (19)

Thus the "⟹" direction in (18) is true. In order to prove the "⟸" direction in (18), for arbitrary j ∈ {0, ..., k−1} and fixed x_i, we study the monotonicity of D_KL(P_j ‖ P̂(x_i)) as a function of P_j^−:

∂D_KL(P_j ‖ P̂(x_i)) / ∂P_j^− = log [ P_j^− (1 − P̂(x_i)^−) / ( P̂(x_i)^− (1 − P_j^−) ) ].   (20)

Therefore, it holds that

∂D_KL(P_j ‖ P̂(x_i)) / ∂P_j^−  { > 0,  P_j^− > P̂(x_i)^−;  < 0,  P_j^− < P̂(x_i)^−. }   (21)

According to the definition of P_j in (8) and the right hand side of (18), we have P_{j : j=0,...,β−1}^− > P̂(x_i)^−, which indicates

D_KL(P_{j : j=0,...,β−2} ‖ P̂(x_i)) > D_KL(P_{β−1} ‖ P̂(x_i)) > D_KL(P_β ‖ P̂(x_i)),   (22)

and P_{j : j=β+1,...,k−1}^− < P̂(x_i)^−, which indicates

D_KL(P_{j : j=β+2,...,k−1} ‖ P̂(x_i)) > D_KL(P_{β+1} ‖ P̂(x_i)) > D_KL(P_β ‖ P̂(x_i)).   (23)

Therefore, we can derive the left hand side of (18) from its right hand side. This completes the proof.

By using the equivalence in Lemma 2, the conditional probability given in (17) can be upper bounded by two other conditional probabilities, whose conditions are the two cases of the condition in (17).

Corollary 1 (Upper bounds in two cases): The conditional probability given in (17) can be upper bounded by

Pr( β = argmin_j D_KL(P_j ‖ P̂(x_i)) | S_α ≤ x_i ≤ S_{α+1} )
≤ { Pr( D_KL(P_{β−1} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0 | S_α ≤ x_i ≤ S_{α+1} ≤ S_{β−1} ),
    Pr( D_KL(P_{β+1} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0 | S_{β+1} ≤ S_α ≤ x_i ≤ S_{α+1} ). }   (24)

Proof: By using Lemma 2, we discuss the conditional probability in (17) in the two cases of the conditional event S_α ≤ x_i ≤ S_{α+1}.

Case 1: When S_{α+1} ≤ S_{β−1}, we have

Pr( β = argmin_j D_KL(P_j ‖ P̂(x_i)) | S_α ≤ x_i ≤ S_{α+1} )
= Pr( D_KL(P_{β−1} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0 | S_α ≤ x_i ≤ S_{α+1} ≤ S_{β−1} )
  · Pr( D_KL(P_{β+1} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0 | S_α ≤ x_i ≤ S_{α+1} ≤ S_{β−1} )
≤ Pr( D_KL(P_{β−1} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0 | S_α ≤ x_i ≤ S_{α+1} ≤ S_{β−1} ).   (25)

Case 2: When S_{β+1} ≤ S_α, we have

Pr( β = argmin_j D_KL(P_j ‖ P̂(x_i)) | S_α ≤ x_i ≤ S_{α+1} )
= Pr( D_KL(P_{β−1} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0 | S_{β+1} ≤ S_α ≤ x_i ≤ S_{α+1} )
  · Pr( D_KL(P_{β+1} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0 | S_{β+1} ≤ S_α ≤ x_i ≤ S_{α+1} )
≤ Pr( D_KL(P_{β+1} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0 | S_{β+1} ≤ S_α ≤ x_i ≤ S_{α+1} ).   (26)

This completes the proof.

Hence we can bound the conditional probability in (17) by exploring the upper bounds of the two conditional probabilities in Corollary 1.

Proposition 1 (Two probabilistic bounds): The two conditional probabilities in (24) are upper bounded by

Pr( D_KL(P_{β−1} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0 | S_α ≤ x_i ≤ S_{α+1} ≤ S_{β−1} ) ≤ 2 exp( −2m ( (1/π) arccos(x_i) − f(P_β) )² ),   (27)

Pr( D_KL(P_{β+1} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0 | S_{β+1} ≤ S_α ≤ x_i ≤ S_{α+1} ) ≤ 2 exp( −2m ( f(P_{β+1}) − (1/π) arccos(x_i) )² ),   (28)

where f is the threshold defined in (10), i.e.,

f(P_j) = 1 / ( 1 + [ (P_j^−)^{P_j^−} (P_j^+)^{P_j^+} / ( (P_{j−1}^−)^{P_{j−1}^−} (P_{j−1}^+)^{P_{j−1}^+} ) ]^{1/Δ} ).   (29)

Proof: To prove (27), according to (13) and the definition of P̂(x_i) in (12), we have

the following equivalences:

D_KL(P_{β−1} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0
⟺ P̂(x_i)^− < f(P_β)
⟺ |{j : [y ⊙ sign(Φ_i)]_j = −1}| ≤ ⌊m f(P_β)⌋,   (30)

where ⌊x⌋ denotes the largest integer smaller than x. Since |{j : [y ⊙ sign(Φ_i)]_j = −1}| counts, in a sequence of m independent Bernoulli trials defined by (2), the trials that return s_i = −1, we conclude that this count follows the binomial distribution

Pr( |{j : [y ⊙ sign(Φ_i)]_j = −1}| = j ) = C(m, j) ((1/π) arccos(x_i))^j (1 − (1/π) arccos(x_i))^{m−j}.   (31)

According to the equivalence shown in (30), the probability in (27) can then be computed as

Pr( j ≤ ⌊m f(P_β)⌋ ) = Σ_{j=0}^{⌊m f(P_β)⌋} C(m, j) ((1/π) arccos(x_i))^j (1 − (1/π) arccos(x_i))^{m−j}.   (32)

Since

f(P_{j+1}) − f(P_j) < 0,  i.e., f(P_j) decreases as the index j increases,   (33)

we have

S_{α+1} ≤ S_{β−1}  ⟹  f(P_β) ≤ f(P_{α+1}) = (1/π) arccos(S_{α+1}).   (34)

Hence the condition of Hoeffding's inequality for the probability (32) holds:

f(P_β) ≤ (1/π) arccos(S_{α+1}) ≤ (1/π) arccos(x_i).   (35)

By applying Hoeffding's inequality to the probability (32), we obtain

Pr( j ≤ ⌊m f(P_β)⌋ ) ≤ 2 exp( −2m ( (1/π) arccos(x_i) − f(P_β) )² ).   (36)

Due to the equivalence proved in (30), we obtain (27). This completes the proof of (27).

To prove (28), similarly, according to (13) and the definition of P̂(x_i) in (12), we have the following equivalences:

D_KL(P_{β+1} ‖ P̂(x_i)) − D_KL(P_β ‖ P̂(x_i)) > 0
⟺ P̂(x_i)^− > f(P_{β+1})
⟺ |{j : [y ⊙ sign(Φ_i)]_j = −1}| ≥ ⌈m f(P_{β+1})⌉,   (37)

where ⌈x⌉ denotes the smallest integer larger than x. According to the equivalence shown in (37) and the binomial distribution given in (31), the probability in (28) can be computed as

Pr( j ≥ ⌈m f(P_{β+1})⌉ ) = Σ_{j=⌈m f(P_{β+1})⌉}^{m} C(m, j) ((1/π) arccos(x_i))^j (1 − (1/π) arccos(x_i))^{m−j}.   (38)

The monotonicity of f(P_j) in (33) yields

S_{β+1} ≤ S_α  ⟹  f(P_{β+1}) ≥ f(P_α) = (1/π) arccos(S_α).   (39)

Hence the condition of Hoeffding's inequality for the probability (38) holds:

f(P_{β+1}) ≥ (1/π) arccos(S_α) ≥ (1/π) arccos(x_i).   (40)

By applying Hoeffding's inequality to the probability (38), we obtain

Pr( j ≥ ⌈m f(P_{β+1})⌉ ) ≤ 2 exp( −2m ( f(P_{β+1}) − (1/π) arccos(x_i) )² ).   (41)

Due to the equivalence proved in (37), we obtain (28). This completes the proof of (28).

By using Lemma 2, Corollary 1 and Proposition 1, we obtain the following Theorem 2 on the upper bound of the probability in (17).

Theorem 2 (Quantized recovery bound): Given the HCS quantizer Q in (10) and the HCS reconstruction R in (14), the probability of the event that the true quantization of x_i is q_i = 1 + α

while its recovery by HCS is q*_i = 1 + β (q_i ≠ q*_i), is upper bounded by

Pr( [Q(x)]_i = q_i ∧ [R(y)]_i = q*_i ) = Pr( β = argmin_j D_KL(P_j ‖ P̂(x_i)) | S_α ≤ x_i ≤ S_{α+1} )
≤ { 2 exp( −2m ( (1/π) arccos(x_i) − f(P_{q*_i − 1}) )² ),  q*_i > q_i;
    2 exp( −2m ( f(P_{q*_i}) − (1/π) arccos(x_i) )² ),  q*_i < q_i.   (42)

The minimum number of 1-bit measurements that ensures successful quantized recovery in HCS is then directly obtained from Theorem 2.

Corollary 2 (Number of measurements): HCS successfully reconstructs x_i with probability exceeding 1 − η (0 < η < 1) if the number of measurements satisfies

m ≥ (1/(2δ_i)) log(2/η),   (43)

where

δ_i = min{ ( f(P_{q_i − 1}) − (1/π) arccos(x_i) )²,  ( (1/π) arccos(x_i) − f(P_{q_i}) )² }.   (44)

Moreover, HCS successfully reconstructs the signal x with probability exceeding 1 − η if the number of measurements satisfies

m ≥ (1/(2 min_i δ_i)) log(2n/η).   (45)

Remark: Corollary 2 states that the quantization of an n-dimensional signal x on the unit sphere can be successfully recovered by HCS from m = O(log n) measurements with high probability. Compared with CS and QCS, the number of measurements required by HCS is substantially reduced and is independent of the sparsity of the signal. Thus HCS provides a simpler and more economical sampling scheme that does not rely on a sparse or compressible assumption on the signal.

A new issue in quantized recovery is the influence of the number of quantization bits k on the recovery accuracy. According to the definition of δ_i in (44), both the upper bound on the probability of reconstruction failure in (42) and the least number of measurements ensuring reconstruction success in (43) are reduced as |q_i − q*_i| increases. This indicates two facts: (1) the interval in which x_i lies is more easily recovered mistakenly as one of its nearest intervals than as a distant one; and (2) when we increase the number of bits k in the quantized recovery, x_i becomes closer to the boundaries S_{q_i} and S_{q_i − 1}, which decreases min_i δ_i in (45). In this case, the number of measurements has to be increased in order to ensure a successful recovery. In summary, recovering a finer quantization in HCS requires more measurements. In other words, HCS performs a trade-off between sampling rate and resolution.
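As a numeric illustration of Corollary 2 (the margin δ below is a hypothetical value, not from the paper):

```python
import numpy as np

# m >= log(2n/eta) / (2 * min_i delta_i), cf. (45); delta is a hypothetical
# worst-case squared margin between arccos(x_i)/pi and its nearest threshold f(P_j).
n, eta, delta = 1024, 0.05, 0.01
m = np.log(2 * n / eta) / (2 * delta)
print(int(np.ceil(m)))    # about 532 one-bit measurements suffice here
```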

IV. HCS DEQUANTIZER

If required, we can dequantize the quantized recovery of the signal by assigning to x*_i the midpoint of the interval in which x_i lies, i.e., x*_i = (S_{q*_i − 1} + S_{q*_i})/2. Although this dequantizer is simple and efficient, it is not accurate. Fortunately, existing 1-bit CS methods provide accurate tools for dequantization of the quantized recovery result compared with the midpoint reconstruction, although they introduce extra computational cost, trading efficiency for dequantization accuracy. That is because most 1-bit CS recovery algorithms invoke time-consuming optimization with ℓ_p (0 ≤ p ≤ 2) penalties or constraints. However, the quantized recovery of HCS provides a box constraint for the subsequent 1-bit CS optimization, and thus significantly reduces the time cost by shrinking the search space for x*. In particular, we obtain the following box constraint on the signal x from the quantized recovery q*:

Ω = { x : S_{q*_i − 1} ≤ x_i ≤ S_{q*_i} }.   (46)

Since Ω in (46) is convex, the projection onto it is direct:

P_Ω(x) = z,  z_i = median{ S_{q*_i − 1}, x_i, S_{q*_i} }.   (47)

The dequantization can then be obtained by adding the projection step (47) at the end of each iteration round of a 1-bit CS recovery algorithm. Note that the 1-bit CS algorithms with this modification have a substantially smaller search space, so they converge quickly. Note also that x* has to be normalized as x* := x*/‖x*‖_2 at the end of the dequantization, because x is assumed to be on the unit ℓ2 sphere.
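The projection (47) is an elementwise clipping; a one-line sketch (not from the paper), where S holds the k+1 quantizer boundaries and q_star the recovered interval indices:

```python
import numpy as np

def project_box(x, S, q_star):
    """P_Omega(x): z_i = median{S[q*_i - 1], x_i, S[q*_i]}, cf. (47)."""
    return np.clip(x, S[q_star - 1], S[q_star])
```

Appending this step to each iteration of a 1-bit CS solver such as BIHT keeps the iterates inside Ω, which is what shrinks the search space.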

A. HCS+dequantizer error bound

We analyze the error of the HCS+dequantizer recovery based on the fact that both the error err_H caused by the HCS reconstruction and the error err_D caused by a dequantizer can be upper bounded. In the worst case, the upper bound of the dequantization error err_D is

err_D(i) ≤ S_{q*_i} − S_{q*_i − 1},  i = 1, ..., n.   (48)

Based on the consistency in Lemma 1, the definition of the BεSE [34] and the lemma on the number of measurements in [34], we derive the upper bound of the HCS+dequantizer recovery error for K-sparse signals in the following Theorem 3.

Definition 1 (Binary ε-stable embedding): Let ε ∈ (0, 1). A mapping A : R^n → {−1, 1}^m is a binary ε-stable embedding (BεSE) of order K for sparse signals if

D_S(x, x*) − ε ≤ D_H(A(x), A(x*)) ≤ D_S(x, x*) + ε,  D_S(x, x*) = (1/π) arccos⟨x, x*⟩,   (49)

for all K-sparse signals x, x* on the unit ℓ2 sphere.

Lemma 3 (Number of measurements): Let Φ be the measurement matrix defined in Theorem 1, and let the 1-bit measurement operator A be defined as in (1). Fix 0 ≤ µ ≤ 1 and ε > 0. If the number of measurements is

m ≥ (4/ε²) ( K log n + 2K log(50/ε) + log(2/µ) ),   (50)

then with probability exceeding 1 − µ, the mapping defined by A is a BεSE of order K for sparse signals.

Theorem 3 (HCS+dequantizer error bound): If x* = D(q*) = D(R(y)) = D(R(A(x))) is the HCS+dequantizer recovery of a K-sparse signal x, where y consists of m measurements with m satisfying (50), then

D_S(x, x*) ≤ D_H(A(x), A(x*)) + ε ≤ g(σ, ‖x‖_2) + γ + ε,   (51)

where σ and γ are defined in Lemma 1.

V. EMPIRICAL STUDY

This section shows the power of HCS and compares it with BIHT [34] for 1-bit CS in three groups of numerical experiments. We use the average quantized recovery error Σ_{i=1}^n |q_i − q*_i| / (nk) to measure the err_H discussed in Section 3.3. In each trial, we draw a normalized Gaussian random matrix Φ ∈ R^{m×n} as given in Theorem 1, and a signal of length n and cardinality K whose K nonzero entries are drawn uniformly at random on the unit ℓ2 sphere.
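The error metric used below, written out (a trivial sketch, not from the paper):

```python
import numpy as np

def avg_quantized_error(q, q_star, k):
    """Average quantized recovery error sum_i |q_i - q*_i| / (n k)."""
    return np.sum(np.abs(q - q_star)) / (q.size * k)
```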

[Fig. 2. Phase plots of HCS and 1-bit CS + HCS quantizer in the noiseless case (n = 1024; k = 1024, 256, 128): quantized recovery error and quantized recovery time.]

A. Phase transition in the noiseless case

We first study the phase transition properties of HCS and 1-bit CS with respect to the quantized recovery error and the recovery time in the noiseless case. We conduct HCS and 1-bit CS + HCS quantizer over 10^5 trials. In particular, given fixed n and k, we uniformly choose 100 different values of K and 100 different values of m with m/n between 0 and 4. For each {K, m} pair, we conduct 10 trials, i.e., HCS recovery and 1-bit CS + HCS quantizer of n-dimensional signals with cardinality K from their 1-bit measurements.

The average quantized recovery errors and the average time costs of the two methods over all 10^4 {K, m} pairs for different n and k are shown in Figure 2 and Figure 3.

[Fig. 3. Phase plots of HCS and 1-bit CS + HCS quantizer in the noiseless case (n = 512; k = 512, 256, 128): quantized recovery error and quantized recovery time.]

In Figure 2 and Figure 3, the phase plots of the quantized recovery error show that the quantized recovery of HCS is accurate when the 1-bit measurements are sufficient.

Compared to the 1-bit CS + HCS quantizer, HCS needs slightly more measurements to reach the same recovery precision. That is because 1-bit CS recovers the exact signal, while HCS recovers its quantization. Another reason is that the HCS quantizer performs differently from the uniform linear quantizer when x_i approaches −1 or 1 for the normalized signal x, which corresponds exactly to the left margin area of the phase plot. However, the phase plots of the quantized recovery time show that HCS costs substantially less time than the 1-bit CS + HCS quantizer. Thus HCS can significantly improve the efficiency of practical digital systems and eliminate the hardware cost of the additional quantization.

B. Phase transition in the noisy case

We also consider the phase transition properties [40] of HCS and 1-bit CS with respect to the quantized recovery error and the recovery time in the noisy case. The experimental setup is the same as in the noiseless case, except that Gaussian random noise is imposed on the input signals. The results are shown in Figure 4 and Figure 5. Compared with the phase plots of the quantized recovery error in the noiseless case, HCS performs much more robustly than 1-bit CS. The time costs of HCS shown in Figure 4 and Figure 5 are still significantly less than those of 1-bit CS.

C. Quantized recovery error vs. number of measurements in the noisy case

We then show the trade-off between the quantized recovery error and the number of measurements over 250 trials for noisy signals with different n, K, k and signal-to-noise ratios (SNR). In particular, given fixed n, K, k and SNR, we uniformly choose 50 values of m between 1 and 6n. For each m value, we conduct 5 trials of HCS recovery and 1-bit CS + HCS quantizer, recovering the quantizations of 5 noisy signals from their 1-bit measurements. The quantized recovery error and the time cost of each trial for different n, K, k and SNR are shown in Figure 6 and Figure 7.

Figure 6 and Figure 7 show that the quantized recovery error of both HCS and 1-bit CS + HCS quantization drops drastically as the number of measurements increases. For dense signals with large noise, the two methods perform nearly the same in recovery accuracy. This phenomenon indicates that HCS works well on dense signals and is robust to noise, compared with CS and 1-bit CS. In addition, the time cost of HCS increases substantially more slowly than that of the 1-bit CS + HCS quantizer as the number of measurements increases.

[Fig. 4. Phase plots of HCS and 1-bit CS + HCS quantizer in the noisy case (n = 1024; k = 1024, 256, 128; SNR = 27.9588): quantized recovery error and quantized recovery time.]

D. Dequantization and consistency

We finally explore the performance of the HCS+dequantizer described in Section 4 and verify the consistency investigated in Lemma 1. In particular, we plot the normalized Hamming loss defined in Lemma 1 between A(x) and A(x*) vs. the angular error (49) between x and x* over 200 trials for different numbers of measurements in Figure 8.

[Fig. 5. Phase plots of HCS and 1-bit CS + HCS quantizer in the noisy case (n = 512; k = 512, 256, 128; SNR = 26.26, 23.98): quantized recovery error and quantized recovery time.]

Figure 8 shows the linear relationship between the Hamming error D_H(A(x), A(x*)) and the angular error D_S(x, x*) for HCS + 1-bit CS dequantizer, HCS + midpoint dequantizer and 1-bit CS, given sufficient measurements. This linear relationship verifies the HCS+dequantizer error bound in Theorem 3 and the consistency in Lemma 1 of this Hamming compressed sensing submission. The figure also shows that HCS + 1-bit CS dequantizer and 1-bit CS perform better than HCS + midpoint dequantizer. This verifies the effectiveness of the 1-bit CS dequantizer. In the experiments, the 1-bit CS dequantizer requires fewer than 10 iterations to reach the accuracy obtained by 1-bit CS with more than 50 iterations. Thus HCS can significantly reduce the computation of the subsequent dequantization.

VI. CONCLUSION

We have proposed a new signal acquisition technique, Hamming compressed sensing (HCS), to recover the k-bit quantization of a signal x from a small number of its 1-bit measurements. HCS recovery invokes n KL-divergence based nearest neighbor searches in a Bernoulli distribution domain and requires only nk computations of the KL divergence. The main significance of HCS is as follows: (1) it provides a direct recovery of the quantized signal from a few measurements for digital systems, which has not been thoroughly studied before but is essential in practice; (2) it has linear recovery time and thus is extremely fast compared with optimization based or iterative methods; (3) the sparsity assumption on the signal is not compulsory in HCS. Another compelling advantage of HCS is that its recovery can significantly accelerate the subsequent dequantization. The quantized recovery error bound of HCS for general signals and the HCS+dequantizer recovery error bound for sparse signals have been carefully studied.

REFERENCES

[1] C. E. Shannon, "Communication in the presence of noise," Proceedings of the Institute of Radio Engineers, vol. 37, no. 1, pp. 10-21, 1949.
[2] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
[3] E. J. Candès and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406-5425, 2006.
[4] E. J. Candès, M. Rudelson, T. Tao, and R. Vershynin, "Error correction via linear programming," Foundations of Computer Science, Annual IEEE Symposium on, pp. 295-308, 2005.
[5] R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, "Model-based compressive sensing," IEEE Transactions on Information Theory, vol. 56, pp. 1982-2001, 2010.
[6] S. Ji, Y. Xue, and L. Carin, "Bayesian compressive sensing," IEEE Transactions on Signal Processing, vol. 56, no. 6, pp. 2346-2356, 2008.
[7] A. Gilbert and P. Indyk, "Sparse recovery using sparse matrices," Proceedings of the IEEE, vol. 98, no. 6, pp. 937-947, 2010.
[8] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, pp. 4655-4666, 2007.
[9] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, pp. 301-321, 2008.
[10] D. L. Donoho, A. Maleki, and A. Montanari, "Message passing algorithms for compressed sensing," Proceedings of the National Academy of Sciences, 2009.