Hamming Compressed Sensing


Tianyi Zhou and Dacheng Tao, Member, IEEE

Abstract
Compressed sensing (CS) and 1-bit CS cannot directly recover quantized signals, and they require time-consuming recovery. In this paper, we introduce Hamming compressed sensing (HCS), which directly recovers a k-bit quantized signal of dimension n from its 1-bit measurements via invoking n times of Kullback-Leibler divergence based nearest neighbor search. Compared with CS and 1-bit CS, HCS allows the signal to be dense, takes considerably less (linear) recovery time, and requires substantially fewer measurements (m = O(log n)). Moreover, the HCS recovery can accelerate the subsequent 1-bit CS dequantizer. We study the quantized recovery error bound of HCS for general signals and the HCS+dequantizer recovery error bound for sparse signals. Extensive numerical simulations verify the appealing accuracy, robustness, efficiency and consistency of HCS.

Index Terms
Compressed sensing, 1-bit compressed sensing, quantizer, quantized recovery, nearest neighbor search, dequantizer.

I. INTRODUCTION

The digital revolution has triggered a rapid growth of novel signal acquisition techniques, with primary interests in reducing sampling costs and improving recovery efficiency. The theoretical promise of conventional sampling methods comes from the Shannon/Nyquist sampling theorem [1], which states that a signal can be fully recovered if it is sampled uniformly at a rate more than twice its bandwidth. Such uniform sampling is usually done by analog-to-digital converters (ADCs). Unfortunately, for many real applications such as radar imaging and magnetic resonance imaging (MRI), the Nyquist rate is too high due to the expensive cost of analog-to-digital (AD) conversion, the maximum sampling rate limits of the hardware, or the additional costly compression of the obtained samples.

(T. Zhou and D. Tao are with the Centre for Quantum Computation & Intelligent Systems, University of Technology, Sydney, NSW 2007, Australia.)

A. Compressed sensing

Recently, prosperous research in compressed sensing (CS) [2][3][4][5][6][7] shows that an accurate recovery can be obtained by sampling signals at a rate proportional to their underlying information content rather than their bandwidth. The key improvement brought by CS is that the sampling rate can be significantly reduced by replacing uniform sampling with linear measurement, if the signals are sparse or compressible on a certain dictionary. This improvement leverages the fact that many signals of interest occupy a quite large bandwidth but have a sparse spectrum, which leads to a redundancy in uniform sampling. In particular, CS is dedicated to reconstructing a signal x ∈ R^n from its linear measurements y = Φx = ΦΨα, where Φ ∈ R^{m×n} is the measurement (or sensing) matrix with m ≪ n, in which case Φ is an underdetermined system. The signal x is K-sparse if it has fewer than K nonzero entries. Given a dictionary Ψ, x is K-compressible if α has fewer than K nonzero entries. If a sparse/compressible x collects the Nyquist-rate samples of an analog signal x(t), CS replaces ADCs with a novel sampler Φ such that y = Φx = Φx(t). A straightforward approach for recovering x from y is to minimize the number of nonzero entries in x, i.e., the ℓ0 norm of x. Specifically, it is not difficult to demonstrate that a K-sparse signal x can be accurately recovered from m = 2K measurements by solving

  min_{x ∈ R^n} ‖x‖_0  s.t.  y = Φx   (CS-ℓ0)

with exhaustive search if Φ is a generic matrix. However, such exhaustive search has intractable combinatorial complexity, so some CS methods adopt iterative and greedy algorithms to solve the ℓ0 minimization, such as orthogonal matching pursuit (OMP) [8], compressive sampling matching pursuit (CoSaMP) [9], approximate message passing [10], iterative splitting and thresholding (IST) [11] and iterated hard shrinkage (IHT) [12]. Since ℓ0 minimization is a non-convex problem, some other CS methods solve its convex relaxation, i.e., ℓ1 minimization and its variants:

  min_{x ∈ R^n} ‖x‖_1  s.t.  y = Φx.   (CS-ℓ1)
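To make the ℓ1 program concrete, the following is a minimal sketch (not from the paper; all names and problem sizes are illustrative) that solves (CS-ℓ1) as a linear program via the standard split x = u - v with u, v ≥ 0, using SciPy's linprog:

```python
# Basis pursuit (CS-l1) as a linear program: min sum(u+v) s.t. Phi(u-v) = y,
# u, v >= 0, which is equivalent to min ||x||_1 s.t. y = Phi x.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, K = 64, 32, 4                               # dimension, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # sub-Gaussian sensing matrix
y = Phi @ x

c = np.ones(2 * n)                                # objective: sum of u and v
A_eq = np.hstack([Phi, -Phi])                     # Phi u - Phi v = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print("l1 recovery error:", np.linalg.norm(x_hat - x))
```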

Various convex optimization approaches have been developed or introduced to solve the above problem and its variants. Representatives include basis pursuit [13], the Dantzig selector [14], NESTA [15], the interior point method [16], coordinate gradient descent [17], gradient projection [18] and the class of approaches based on the fixed point method, such as the Bregman iterative algorithm [19], fixed point continuation [20] and iteratively re-weighted least squares (IRLS) [21]. It is also worth noting that lasso [22] type algorithms [23][24] for model selection can be applied to the CS recovery problem as well. However, compared with the recovery schemes in conventional sampling theory, which reconstruct signals by invoking a nearly linear transformation, most of the aforementioned CS recovery algorithms require polynomial time, which is substantially more expensive than the conventional methods. This burden on recovery efficiency limits the application of CS to many real problems, in which the dimension of the signals is extremely high.

Beyond recovery efficiency, another important issue in CS is the theoretical guarantee of precise recovery. Since most existing CS algorithms find the signal that agrees with the measurements y without directly minimizing the ℓ0 norm, the recovery success of existing CS methods relies on another theoretical requirement for Φ (or ΦΨ in the compressible case): two sufficiently close measurements y_1 = Φx_1 and y_2 = Φx_2 must indicate that the vectors x_1 and x_2 are sufficiently close to each other. This low-distortion property of the linear operator Φ is called the restricted isometry property (RIP) [25][26], which can also be interpreted as incoherence between the measurement matrix Φ and the signal dictionary Ψ (or the identity matrix I for sparse x), in order to restrict the concentration of a single α_i (or x_i in the sparse case) in the measurements. An intriguing property of CS is that some randomly generated matrices, such as Gaussian, Bernoulli and partial Fourier ensembles, fulfill the RIP with high probability. For example, a Φ whose entries are randomly drawn from a sub-Gaussian distribution satisfies the RIP with high probability if m = O(K log(n/K)). By using the concept of the RIP, the global solution of the ℓ1 minimization with such a Φ is guaranteed to be sufficiently close to the original sparse signal. Thus CS can successfully recover K-sparse signals of dimension n from m = O(K log(n/K)) measurements. However, given a deterministic Φ with m = O(K log(n/K)), it is generally regarded as NP-hard to test whether the RIP holds or not.

In practice, it is paramount that signals are not exactly sparse and that the measurements cannot be precise due to hardware limits. Two questions arise in this case: is it possible to recover the K largest entries if x is nearly sparse, and is it possible to recover x from noisy measurements y? These questions lead to the problem of stable recovery [26][27] in CS. Fortunately, the RIP can naturally address this problem, because it ensures that small changes in the measurements induce small changes in the recoveries. In stable recovery, the constraint y = Φx in the original ℓ0 and ℓ1 minimization problems is replaced with ‖y - Φx‖_2 ≤ ɛ. Another variant in this case is minimizing ‖y - Φx‖_2 with a penalty or constraint on the ℓ0 or ℓ1 norm of x. Many existing CS algorithms, such as basis pursuit denoising (BPDN) [13], can also handle the stable recovery problem. Today's state-of-the-art research in CS focuses primarily on further reducing the number of measurements, improving the recovery efficiency and increasing the robustness of stable recovery.

Although CS [28] exhibits powerful potential in simplifying the sampling process and reducing the number of measurements, there are some unsettled issues when CS is applied to realistic digital systems, especially in developing hardware. A crucial problem is how to deal with the quantization of the measurements.

B. Quantized compressed sensing

In practical digital systems, quantization of the CS measurements y is a natural and inevitable process, in which each measurement is transformed from a real value to a finite number of bits that represent a finite interval containing the real value. In CS, quantization is an irreversible process that introduces error into the measurements, if in recovery we treat the quantized measurement as any real value within the corresponding interval. One commonly used trick for dealing with quantization error is to treat it as bounded Gaussian noise in the measurements, so that stable recovery methods in CS are guaranteed to obtain a robust recovery. However, this solution cannot produce acceptable recovery results unless the quantization error is sufficiently small and nearly Gaussian. Unfortunately, these two conditions are often hardly fulfilled, because (1) a small quantization error requires a small interval width, which is the result of a high sampling rate and a high conversion accuracy of the ADC, and this conflicts with the spirit of CS; and (2) the quantization error is usually highly non-Gaussian.

Several recent works [29][30][31][32] address quantized compressed sensing (QCS) by implicitly or explicitly treating the quantization as a box constraint on the measurements y = Φx + ɛ:

  min_{x ∈ R^n} ‖x‖_1  s.t.  u ≤ Φx + ɛ ≤ v,   (QCS)

where the two vectors u and v store the corresponding boundaries of the intervals that the entries of y lie in, and ɛ is the measurement noise. The box constraint is also called the quantization consistency (QC) constraint. By solving this problem, the quantization error is not wholly transformed into recovery error via the RIP. Thus it is possible to obtain an accurate recovery from very coarsely quantized measurements. A variant of BPDN called the basis pursuit dequantizer, proposed in [29], restricts ‖y - Φx‖_p (2 ≤ p ≤ ∞) rather than ‖y - Φx‖_2 in the ℓ1 norm minimization, and proves that the recovery error decreases by a factor of √(p+1). In [32], an adaptation of BPDN and subspace pursuit [33] integrates an explicit QC constraint. An ℓ1-regularized maximum likelihood estimation is developed in [30] to solve QCS with noise (i.e., ɛ ≠ 0).

As the extreme case of QCS, 1-bit CS [31][34] has been developed to reconstruct sparse signals from 1-bit measurements, which merely capture the signs of the linear measurements in CS. The 1-bit measurements enable simple and fast quantization, and thus can significantly reduce the sampling costs and strengthen the robustness of the hardware implementation. One inevitable information loss in 1-bit measurements is the scale of the original signal, because a scaled signal has the same linear measurement signs as the original one. Theoretically, 1-bit CS ensures consistent reconstructions of signals on the unit ℓ2 sphere [35][36]. In 1-bit CS, one-sided ℓ2 [31] or ℓ1 [34] objectives are designed to guarantee the consistency of the 1-bit measurements by imposing a sign constraint or minimizing the sign violations in the optimization. Analogous to the RIP in CS, the binary ɛ-stable embedding (BɛSE) [34] ensures low distortion between the original signals and their 1-bit measurements, and thus guarantees the accuracy and stability of the reconstruction. It is remarkable that m = O(K log n) measurements guarantee a BɛSE and the subsequent successful recovery. Most 1-bit CS recovery algorithms, e.g., renormalized fixed point iteration [31], matched sign pursuit [35] and binary iterative hard thresholding (BIHT) [34], are extensions of CS recovery algorithms. It has been shown that BIHT, a variant of IHT [12], can produce precise and consistent recovery from 1-bit measurements.
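For illustration, the QCS program above can be prototyped with a generic convex solver. The following sketch uses cvxpy (our choice for illustration, not a tool referenced in the paper); here u and v are the per-measurement quantization bin edges:

```python
# QCS as a box-constrained l1 minimization: min ||z||_1 s.t. u <= Phi z <= v.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m, K, width = 64, 48, 4, 0.25          # width = quantization bin size
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x
u = np.floor(y / width) * width           # lower bin edges of the quantized y
v = u + width                             # upper bin edges

z = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(z)), [Phi @ z >= u, Phi @ z <= v])
prob.solve()
print("QCS recovery error:", np.linalg.norm(z.value - x))
```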

QCS and 1-bit CS not only consider the quantization of the measurements but also improve the recovery robustness against the nonlinear distortions brought by the ADC, because the quantized measurements only preserve the intervals in which the real-valued measurements lie. However, QCS and 1-bit CS methods require polynomial-time recovery algorithms, and thus they are prohibitive for high-dimensional signals in practical applications. Moreover, another central problem is that both CS and QCS recover the original real-valued signals, while quantization of the recovered signals is inevitable in digital systems.

C. Hamming compressed sensing

Digital systems prefer to use the quantized recovery of the original signal, which can be processed directly, but the recoveries of both CS and QCS are continuous and real-valued. In order to apply them to digital systems, a straightforward solution is to impose an additional quantization on the CS or QCS recoveries. However, this quantization requires additional time and expenses on ADCs, which could be costly if the sampling rate is required to be high. Moreover, the convex optimization or iterative greedy search based recovery in CS and QCS takes polynomial time, which is not acceptable for high-dimensional signals. In addition, the trade-off between recovery time and recovery resolution cannot be controlled in CS and QCS, although such control is preferred in practice. Finally, the success of CS and QCS is based upon the assumption that signals are sparse. When the signal x is dense, the numbers of measurements required by CS and QCS are large, and the advantages of CS and QCS are accordingly lost.

In this paper, we directly recover the quantization of a general signal (not necessarily K-sparse) from the quantization of its linear measurements; we call this quantized recovery (QR). In particular, for a signal x and its quantization q = Q(x) by a quantizer Q, we seek a recovery algorithm R that reconstructs q* = R(y) sufficiently close to q from the quantized measurements y = A(x), where the operator A is a composition of linear measurement and quantization. This problem has not been formally studied before, and it has the potential to mitigate the aforementioned limitations of CS and QCS. The main motivation behind QR is to sacrifice quantization error of the recovery in exchange for a reduced number of measurements. Thus the recovery time can be significantly reduced by decreasing the number of bits of the quantized recovery, and the number of measurements can be small even when the signal is dense. Compared with CS and QCS, QR considers the quantization error of the quantized recovery when determining the sampling rate and developing the reconstruction algorithm.

The primary contribution of this paper is the development of Hamming compressed sensing (HCS) to achieve quantized recovery from a small number of quantized measurements, with extremely small time cost and without a signal sparsity constraint. In compression (sampling), HCS adopts the 1-bit measurements [34] to guarantee consistency and the BɛSE, but employs them in a different way. In particular, we introduce a bijection between each dimension of the signal and a Bernoulli distribution. The underlying idea of HCS is to estimate the Bernoulli distribution for each dimension from the 1-bit measurements, so that each dimension of the signal can be recovered from the corresponding Bernoulli distribution. In order to define the quantized recovery, we propose a k-bit HCS quantizer splitting the signal domain into k intervals, which are derived from the bijection as the mappings of the k uniform linear quantization boundaries in the Bernoulli distribution domain. In recovery, HCS searches the nearest neighbor of the estimated Bernoulli distribution among the k boundaries in the Bernoulli distribution domain, and recovers the quantization of the corresponding dimension as the quantizer interval associated with the nearest boundary. We theoretically study the quantized recovery error bound of HCS by investigating the precision of the estimation and its impact on the KL-divergence based nearest neighbor search. The theoretical analysis provides strong support for the successful recovery of HCS.

Compared with CS and QCS, HCS has the following significant and appealing merits:

1) HCS provides simple and low-cost sampling and recovery schemes for digital systems. The procedures are substantially simple: sampling and sensing are integrated into the 1-bit measurements, while recovery and quantization are integrated into the quantized recovery. Furthermore, neither the 1-bit measurement nor the quantized recovery requires an ADC with a high sampling rate. Note that HCS retains the recovery robustness of quantized measurements inherited from QCS.

2) The recovery in HCS only requires computing nk Kullback-Leibler (KL) divergences to obtain the k-bit recovery of an n-dimensional signal, and thus is a non-iterative, linear algorithm involving only very simple computations. Therefore, HCS is considerably more efficient and easier to implement than CS and QCS.

3) According to the theoretical analysis of HCS, merely m = O(log n) 1-bit measurements are sufficient to produce a successful quantized recovery with high probability, with no sparsity assumption on the signal x. Therefore, HCS allows more economical compression than CS and QCS.

Another compelling advantage of HCS is that it can promote the recovery of the real-valued signal after quantized recovery. When the subsequent dequantization x* = D(q*) after quantized recovery is required, we can treat the quantized recovery as a box constraint that reduces the search space of the 1-bit CS dequantizer D, in order to accelerate its convergence. By invoking the HCS recovery bound, the consistency and the BɛSE from 1-bit CS, we show an error bound of the HCS+dequantizer recovery for sparse signals.

The rest of this paper is outlined as follows. Section II introduces the 1-bit measurements in HCS, which lead to a bijection between each dimension of the signal and a Bernoulli distribution, and their consistency. Section III presents the k-bit reconstruction in HCS, including how to obtain the HCS quantizer, the KL-divergence nearest neighbor search based recovery, and theoretical evidence for successful recovery. Section IV introduces the application of the HCS recovery results to dequantization; a theoretical analysis of the dequantization error is given there. Section V shows the power of HCS via three groups of experiments. Section VI concludes.

II. 1-BIT MEASUREMENTS

HCS recovers the quantized signal directly from its quantized measurements, each of which is composed of a finite number of bits. We consider the extreme case of 1-bit measurements of a signal x ∈ R^n, which are given by

  y = A(x) = sign(Φx),   (1)

where sign(·) is an element-wise sign operator and A maps x from R^n to the Boolean cube B^m := {-1, 1}^m. Since the scale of the signal is lost in the 1-bit measurements y (multiplying x by a positive scalar does not change the signs of the measurements), a consistent reconstruction can be obtained by enforcing the signal x ∈ Σ_K := {x ∈ S^{n-1} : ‖x‖_0 ≤ K}, where S^{n-1} := {x ∈ R^n : ‖x‖_2 = 1} is the n-dimensional unit hyper-sphere.

The 1-bit measurements y can also be viewed as a hash of the signal x. A similar hash based on random projection signs is developed in locality sensitive hashing (LSH) [37][38]. LSH performs approximate nearest neighbor (ANN) searches on the hashes of signals, and proves that the results approach the precise NN searches on the original signals with high probability. This theoretical guarantee is based on a condition similar to the BɛSE [34] in 1-bit CS. It is interesting to compare LSH with HCS, because LSH is an irreversible process aiming at ANN, while HCS can be viewed as a reversible LSH in this case.
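The measurement operator (1) is straightforward to simulate. Below is a minimal sketch (names are ours; the zero-measurement tie-break is our own convention, since an exactly zero measurement occurs with probability 0):

```python
# 1-bit measurements y = sign(Phi x): only signs are kept, so scale is lost;
# x is normalized to the unit l2 sphere accordingly.
import numpy as np

def one_bit_measure(Phi, x):
    """Map x to the Boolean cube {-1, +1}^m via y = sign(Phi x)."""
    y = np.sign(Phi @ x)
    y[y == 0] = 1                     # tie-break for exact zeros
    return y

rng = np.random.default_rng(2)
n, m = 128, 2000
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                # x on the unit sphere
Phi = rng.standard_normal((m, n))     # row normalization would not change signs
y = one_bit_measure(Phi, x)
print(y[:10])
```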

A. Bijection

In contrast to CS and 1-bit CS, HCS does not recover the original signal, but reconstructs the quantized signal by recovering each dimension in isolation. In particular, according to Lemma 3.2 in [39], we show that there exists a bijection (cf. Theorem 1) between each dimension of the signal x and a Bernoulli distribution, which can be uniquely estimated from the 1-bit measurements. The underlying idea of HCS is to estimate the Bernoulli distribution for each dimension, and to recover the quantization of the corresponding dimension as the interval where the Bernoulli distribution's mapping lies.

Theorem 1 (Bijection): For a normalized signal x ∈ R^n with ‖x‖_2 = 1 and a normalized Gaussian random vector φ drawn uniformly from the unit ℓ2 sphere in R^n (i.e., each element of φ is first drawn i.i.d. from the standard Gaussian distribution N(0, 1), and then φ is normalized as φ/‖φ‖_2), given the i-th dimension x_i of the signal and the corresponding coordinate unit vector e_i = (0, ..., 0, 1, 0, ..., 0), where the 1 appears in the i-th dimension, there exists a bijection P : R → P from x_i to the Bernoulli distribution of the binary random variable s_i = sign⟨x, φ⟩ · sign⟨e_i, φ⟩:

  Pr(s_i = 1) = P(x_i)^+ = 1 - (1/π) arccos x_i,  Pr(s_i = -1) = P(x_i)^- = (1/π) arccos x_i.   (2)

Since the mapping between x_i and P(x_i) is bijective, given P(x_i), the i-th dimension of x can be uniquely identified. According to the definition of s_i, P(x_i) can be estimated from instances of the random variable sign⟨x, φ⟩, which are exactly the 1-bit measurements y defined in (1). Therefore, the 1-bit measurements y include sufficient information to reconstruct x_i from an estimate of P(x_i), and the recovery accuracy of x_i depends on the accuracy of the estimation of P(x_i).
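The bijection of Theorem 1 can be checked numerically. In the sketch below (variable names are ours), the empirical frequency of s_i = -1 over m measurements approaches P(x_i)^- = arccos(x_i)/π, and x_i = cos(π P(x_i)^-) recovers the coordinate:

```python
# Empirical check of Theorem 1: estimate P(x_i)^- from 1-bit measurements and
# invert the bijection x_i = cos(pi * P(x_i)^-).
import numpy as np

rng = np.random.default_rng(3)
n, m, i = 64, 100000, 5
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
Phi = rng.standard_normal((m, n))
y = np.sign(Phi @ x)

s_i = y * np.sign(Phi[:, i])              # s_i = sign<x,phi> * sign<e_i,phi>
P_minus_hat = np.mean(s_i == -1)          # estimate of Pr(s_i = -1)
print("empirical  :", P_minus_hat)
print("theoretical:", np.arccos(x[i]) / np.pi)
print("recovered x_i:", np.cos(np.pi * P_minus_hat), " true x_i:", x[i])
```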

B. Consistency

Given a signal x, its quantization q = Q(x) by the quantizer Q, the quantized recovery q* = R(y) obtained by the reconstruction R from the 1-bit measurements y = A(x), and its dequantization x* = D(q*) obtained by a dequantizer D, the HCS+dequantizer recovery x* is given by

  x* = x + err_H + err_D,   (3)

where err_H is determined by the difference between q and q* caused by the HCS reconstruction, and err_D is the dequantization error from q* to x*. The upper bounds of err_H and err_D will be given in Sections III and IV, respectively. The following lemma shows the consistency pertaining to err_H and err_D.

Lemma 1 (Consistency): Let Φ be a standard Gaussian random matrix whose rows are composed of the φ_i defined in Theorem 1. The measurement operator A is defined in (1). Given a fixed γ > 0, for any signal x ∈ R^n and its HCS+dequantizer recovery x*, we have

  E[D_H(A(x), A(x*))] ≤ g(σ, ‖x‖_2),   (4)
  Pr( D_H(A(x), A(x*)) > g(σ, ‖x‖_2) + γ ) ≤ e^{-2γ²m},   (5)

where D_H(u, v) = (1/m) Σ_{i=1}^m 1[u_i ≠ v_i] (u, v ∈ {-1, 1}^m) is the normalized Hamming distance, g(σ, ‖x‖_2) = σ / (2√(‖x‖_2² + σ²)) ≤ σ / (2‖x‖_2), and σ = ‖err_H + err_D‖_2.

Proof: According to (1) and (3), we have

  A(x*) = sign(Φx*) = sign(Φx + Φ(err_H + err_D)).   (6)

Here err = Φ(err_H + err_D) is a Gaussian random noise vector whose i-th element satisfies err_i ~ N(0, σ²). According to Lemma 5 in [34], we obtain Lemma 1. This completes the proof.

The consistent reconstruction in CS and 1-bit CS minimizes ‖A(x) - A(x*)‖ for a K-sparse signal x. The RIP and the BɛSE bridge consistency and reconstruction accuracy in CS and 1-bit CS, respectively. Instead of minimizing ‖A(x) - A(x*)‖ to achieve recovery accuracy, HCS directly estimates the interval that each dimension of the signal x lies in from the estimated Bernoulli distribution defined in Theorem 1. In addition, the consistency between A(x) and A(x*) is important for HCS, because it in part determines (1) the amount of information preserved in the 1-bit measurements, and (2) the error bound of the HCS+dequantizer recovery for sparse signals.
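A small simulation of the consistency in Lemma 1 follows (a sketch under stated assumptions: we draw one fixed perturbation of ℓ2 norm σ rather than the i.i.d. Gaussian noise model used in the proof, so the bound should typically, though not deterministically, hold):

```python
# Compare the normalized Hamming distance between 1-bit measurements of x and
# of a perturbed x with the Lemma 1 bound g(sigma, ||x||_2).
import numpy as np

def d_hamming(u, v):
    """Normalized Hamming distance on {-1, +1}^m."""
    return np.mean(u != v)

rng = np.random.default_rng(4)
n, m, sigma = 64, 5000, 0.1
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
err = rng.standard_normal(n)
err *= sigma / np.linalg.norm(err)            # plays the role of err_H + err_D
Phi = rng.standard_normal((m, n))
y, y_star = np.sign(Phi @ x), np.sign(Phi @ (x + err))

g = sigma / (2 * np.sqrt(1 + sigma**2))       # g(sigma, ||x||_2), ||x||_2 = 1
print("D_H:", d_hamming(y, y_star), " bound g:", g)
```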

III. K-BIT RECONSTRUCTION

The primary contribution of this paper is the quantized recovery in HCS, which reconstructs the quantized signal from its 1-bit measurements. Figure 1(a) illustrates the HCS quantized recovery. To define the HCS quantizer, we first find k boundaries P_j (j = 0, 1, ..., k-1) (8) in the Bernoulli distribution domain by imposing a uniform linear quantizer on the range of P_j. Given an arbitrary x_i, the nearest neighbor of P(x_i) among the k boundaries P_j (j = 0, 1, ..., k-1) indicates the interval q_i that x_i lies in in the signal domain. The k+1 boundaries S_j (j = 0, 1, ..., k) associated with the k intervals are calculated from the k boundaries P_j (j = 0, 1, ..., k-1) according to the bijection defined in Theorem 1. In recovery, P(x_i) is estimated as P̂(x_i) from the 1-bit measurements y. Then the nearest neighbor of P̂(x_i) among the k boundaries P_j (j = 0, 1, ..., k-1) is determined by comparing the KL divergences between P̂(x_i) and P_j. The quantization of x_i defined by the HCS quantizer is recovered as the interval q*_i corresponding to the nearest neighbor.

In this section, we first introduce the HCS quantizer, which is a mapping resulting from a uniform linear quantizer of the Bernoulli distribution domain to the signal domain. The quantized recovery procedure is composed of n KL-divergence based nearest neighbor searches. Thus it is a linear algorithm, much faster than the conventional reconstruction algorithms of CS and 1-bit CS, which require optimization with an ℓ_p (0 ≤ p ≤ 2) constraint/penalty, or iterative thresholding/greedy search. We then study the upper bound of the quantized recovery error err_H.

A. HCS quantizer

Since HCS aims at recovering the quantization of the original signal, we first introduce the HCS quantizer, which defines the intervals and boundaries for quantization in the signal domain. These intervals and boundaries are uniquely derived from a predefined uniform linear quantizer in the Bernoulli distribution domain. Given a signal and the boundaries of the HCS quantizer, its k-bit quantization can be identified. We will show that the HCS quantizer performs closely to the uniform linear quantizer. Note that in the quantized recovery of HCS, reconstruction and quantization are simultaneously accomplished, so the HCS quantizer does not play an explicit role in the recovery procedure. However, it is related to and uniquely determined by the quantization of the Bernoulli distribution domain, which plays an important role in the recovery and explains the reconstruction q*. Moreover, it will be applied to the error bound analyses for err_H and err_H + err_D.

We introduce the HCS quantizer Q by defining a bijective mapping from the boundaries of the Bernoulli distribution domain to the intervals of the signal domain according to Theorem 1. Assume the range of a signal x is given by

  x_inf ≤ x_i ≤ x_sup,  ∀i = 1, ..., n.   (7)

Fig. 1. (a) Quantized recovery in HCS. The Bernoulli distribution P(x_i) given in Theorem 1 has an estimate P̂(x_i) (12) from the 1-bit measurements y = A(x). HCS searches the nearest neighbor of P̂(x_i) among the k boundaries P_j (j = 0, 1, ..., k-1) (8) in the Bernoulli distribution domain. The quantization of x_i, i.e., q*_i, is recovered as the interval between the two boundaries S_{i-1} and S_i corresponding to the nearest neighbor, wherein S_i is a mapping of P_{i-1} and P_i into the signal domain. (b) HCS quantizer: the boundaries S_i (10) for k = 1, 3, 5, ..., 15, with x_inf = -1 and x_sup = 1.

By applying the uniform linear quantizer with quantization interval Δ to the Bernoulli distribution domain, we get the corresponding boundaries

  P_i^- = Pr(s = -1) = (1/π) arccos(x_inf) - iΔ,  P_i^+ = Pr(s = 1) = 1 - Pr(s = -1),  i = 0, 1, ..., k-1.   (8)

The interval Δ is

  Δ = (1/k) ( (1/π) arccos x_inf - (1/π) arccos x_sup ).   (9)

13 3 The interval is = k π arccos x inf π arccos x sup = P k P k. 9 We define the k-bit quantizer in the signal doain by coputing its k + boundaries as a apping fro the k boundaries P i i =,, k to R in the Bernoulli doain: x inf, i = ; S i = cos, i =,, k ; where f P i = π +fp i x sup, i = k. P i P i P i P i P i + P i + P i P i Although the apping between the boundaries of quantizer S i to the boundaries of the quantizer in the Bernoulli distribution doain P i is bijective, such apping cannot be explicitly obtained. So it is difficult to derive the corresponding quantizer in the Bernoulli distribution doain fro a predefined quantizer. Thus quantizer cannot be fixed as a unifor linear quantizer and has to be coputed fro a predefined quantizer in the Bernoulli distribution doain. Fortunately, quantizer perfors very closely to the unifor linear quantizer, especially when x i is not very close to or. Figure b shows the fact. Given a signal x and the boundaries defined in, its k-bit quantization q is: /. Qx = q, q i = {j : S j x i S j }. B. KL-divergence based nearest neighbor search The k + boundaries of the k-bit quantizer in define k intervals in R. Quantized recovery in reconstructs a quantized signal by estiating which interval each diension of the signal x lies in. The estiation is obtained by a nearest neighbor search in the Bernoulli distribution doain. To be specific, an estiation of P x i given in 2 can be derived fro the -bit easureents y. For each P x i, we find its nearest neighbor aong the k boundaries P j j =,, k 8 in the Bernoulli distribution doain. The interval that x i lies in is then estiated as the interval of quantizer corresponding to the nearest neighbor. KL-divergence easures the distance between two Bernoulli distributions in the nearest neighbor search. October, 2

According to Theorem 1, the bijection from x_i to a particular Bernoulli distribution, i.e., P(x_i) given in (2), has an unbiased estimate from the 1-bit measurements y:

  P̂(x_i)^- = P̂r(s_i = -1) = |{ j : [y ⊙ sign(Φ_i)]_j = -1 }| / m,
  P̂(x_i)^+ = P̂r(s_i = 1) = 1 - P̂r(s_i = -1),   (12)

where Φ_i is the i-th column of the measurement matrix Φ and ⊙ denotes the element-wise product. The quantization of x_i can then be recovered by searching the nearest neighbor of P̂(x_i) among the k boundary Bernoulli distributions P_j (j = 0, 1, ..., k-1) in (8). In this paper, the distance between P_j and P̂(x_i) is measured by the KL divergence:

  D_KL(P_j ‖ P̂(x_i)) = P_j^- log( P_j^- / P̂(x_i)^- ) + P_j^+ log( P_j^+ / P̂(x_i)^+ ),  i = 1, ..., n, j = 0, 1, ..., k-1.   (13)

The interval that x_i lies in, among the k intervals defined by the boundaries S_j (j = 0, 1, ..., k) in (10), is identified as the one whose corresponding boundary distribution P_j is the nearest neighbor of P̂(x_i). Therefore, the quantized recovery of x, i.e., q*, is given by

  R(y) = q*,  q*_i = 1 + argmin_j D_KL(P_j ‖ P̂(x_i)),  i = 1, ..., n, j = 0, 1, ..., k-1.   (14)

Thus the interval that x_i lies in is recovered as

  S_{q*_i - 1} ≤ x_i ≤ S_{q*_i}.   (15)

The HCS recovery algorithm is fully summarized by (14), which includes only simple computations without iteration and thus can be easily implemented in real systems. According to (14), the quantized recovery in HCS requires nk computations of the KL divergence between two Bernoulli distributions. This indicates the high efficiency of HCS: linear recovery time, and a trade-off between the resolution k and the time cost nk.
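Putting (12)-(14) together, here is a sketch of the full HCS recovery, reusing hcs_quantizer from the previous sketch (clipping P̂ away from 0 is our own numerical safeguard, not part of the paper's algorithm):

```python
# HCS recovery: estimate the Bernoulli distribution per dimension (12), then
# take the KL-nearest boundary distribution (13)-(14).
import numpy as np
from scipy.special import xlogy

def hcs_recover(y, Phi, P_minus, P_plus, eps=1e-12):
    """k-bit HCS recovery: returns q* with entries in 1..k."""
    m, n = Phi.shape
    col_signs = np.sign(Phi)                      # sign<e_i, phi_j> for all i
    q_star = np.empty(n, dtype=int)
    for i in range(n):
        p_minus_hat = np.mean(y * col_signs[:, i] == -1)          # (12)
        p_hat = np.clip([p_minus_hat, 1.0 - p_minus_hat], eps, 1.0)
        kl = (xlogy(P_minus, P_minus) - P_minus * np.log(p_hat[0])    # (13)
              + xlogy(P_plus, P_plus) - P_plus * np.log(p_hat[1]))
        q_star[i] = 1 + int(np.argmin(kl))                            # (14)
    return q_star

rng = np.random.default_rng(5)
n, m, k = 32, 20000, 8
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
Phi = rng.standard_normal((m, n))
y = np.sign(Phi @ x)
P_minus, P_plus, S = hcs_quantizer(k)             # from the previous sketch
q_true = np.searchsorted(S, x)                    # true cells via (11)
q_star = hcs_recover(y, Phi, P_minus, P_plus)
print("fraction of correctly recovered cells:", np.mean(q_true == q_star))
```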

C. Quantized recovery error bound

We investigate the error bound of the quantized recovery (14) by studying the difference between q_i and q*_i, which are the quantization of x_i and its quantized recovery by HCS, respectively. The difference between q and q* defines the error err_H in (3), which is the error caused by the reconstruction (14):

  (err_H)_i ≤ S_{q_i} - S_{q*_i - 1} ≤ (q_i - q*_i + 1) Δ_max,  if q_i > q*_i;
  (err_H)_i = 0,  if q_i = q*_i;
  (err_H)_i ≤ S_{q*_i} - S_{q_i - 1} ≤ (q*_i - q_i + 1) Δ_max,  if q_i < q*_i.   (16)

Here Δ_max denotes the largest interval between neighboring boundaries of the HCS quantizer, i.e., Δ_max = max_{j=1,...,k} (S_j - S_{j-1}). In order to investigate the difference between q_i and q*_i, we study the upper bound for the probability of the event that the true quantization of x_i is q_i = 1 + α, while its recovery by HCS is q*_i = 1 + β (β ≠ α). According to the HCS quantizer (10) and the reconstruction (14), this probability is

  Pr( β = argmin_j D_KL(P_j ‖ P̂(x_i)) | S_α ≤ x_i ≤ S_{α+1} ).   (17)

In order to study the conditional probability in (17), we first consider an equivalent event of β = argmin_j D_KL(P_j ‖ P̂(x_i)), shown in the following Lemma 2.

Lemma 2 (Equivalence): The event that the nearest neighbor of P̂(x_i) among P_j (j = 0, 1, ..., k-1) is P_β equals the event that P̂(x_i) is closer to P_β than to both P_{β-1} and P_{β+1}, where the distance between P_j and P̂(x_i) is measured by the KL divergence (13), i.e.,

  β = argmin_j D_KL(P_j ‖ P̂(x_i))  ⟺  D_KL(P_{β-1} ‖ P̂(x_i)) - D_KL(P_β ‖ P̂(x_i)) > 0  and  D_KL(P_{β+1} ‖ P̂(x_i)) - D_KL(P_β ‖ P̂(x_i)) > 0.   (18)

Proof: It is direct to have the following equivalence:

  β = argmin_j D_KL(P_j ‖ P̂(x_i))  ⟺  D_KL(P_{j : j ≠ β} ‖ P̂(x_i)) - D_KL(P_β ‖ P̂(x_i)) > 0.   (19)

Thus the forward direction in (18) is true. In order to prove the reverse direction in (18), for arbitrary j ∈ {0, ..., k-1} and fixed x_i, we study the monotonicity of D_KL(P_j ‖ P̂(x_i)) as a function of P_j:

  ∂ D_KL(P_j ‖ P̂(x_i)) / ∂ P_j^- = log[ P_j^- (1 - P̂(x_i)^-) / ( P̂(x_i)^- (1 - P_j^-) ) ].   (20)

Therefore, it holds that

  ∂ D_KL(P_j ‖ P̂(x_i)) / ∂ P_j^-  > 0 if P_j^- > P̂(x_i)^-,  and  < 0 if P_j^- < P̂(x_i)^-.   (21)

According to the definition of P_j in (8) and the right hand side of the reverse direction in (18), we have P_{j : j = 0, ..., β-1}^- > P̂(x_i)^-, which indicates

  D_KL(P_{j : j = 0, ..., β-2} ‖ P̂(x_i)) > D_KL(P_{β-1} ‖ P̂(x_i)) > D_KL(P_β ‖ P̂(x_i)),   (22)

and P_{j : j = β+1, ..., k-1}^- < P̂(x_i)^-, which indicates

  D_KL(P_{j : j = β+2, ..., k-1} ‖ P̂(x_i)) > D_KL(P_{β+1} ‖ P̂(x_i)) > D_KL(P_β ‖ P̂(x_i)).   (23)

Therefore, we can derive the left hand side of the reverse direction in (18) from its right hand side. This completes the proof.

By using the equivalence in Lemma 2, the conditional probability given in (17) can be upper bounded by two other conditional probabilities, whose conditions are the two cases of the condition in (17).

Corollary 1 (Upper bounds in two cases): The conditional probability given in (17) can be upper bounded by

  Pr( β = argmin_j D_KL(P_j ‖ P̂(x_i)) | S_α ≤ x_i ≤ S_{α+1} )
  ≤ Pr( D_KL(P_{β-1} ‖ P̂(x_i)) - D_KL(P_β ‖ P̂(x_i)) > 0 | S_α ≤ x_i ≤ S_{α+1} ≤ S_β )  in Case 1,
  ≤ Pr( D_KL(P_{β+1} ‖ P̂(x_i)) - D_KL(P_β ‖ P̂(x_i)) > 0 | S_{β+1} ≤ S_α ≤ x_i ≤ S_{α+1} )  in Case 2.   (24)

Proof: By using Lemma 2, we discuss the conditional probability in (17) by considering the two cases of the conditional event S_α ≤ x_i ≤ S_{α+1}.

Case 1: When S_{α+1} ≤ S_β, we have

  Pr( β = argmin_j D_KL(P_j ‖ P̂(x_i)) | S_α ≤ x_i ≤ S_{α+1} )
  = Pr( both inequalities in (18) hold | S_α ≤ x_i ≤ S_{α+1} ≤ S_β )
  ≤ Pr( D_KL(P_{β-1} ‖ P̂(x_i)) - D_KL(P_β ‖ P̂(x_i)) > 0 | S_α ≤ x_i ≤ S_{α+1} ≤ S_β ).   (25)

Case 2: When S_{β+1} ≤ S_α, we have

  Pr( β = argmin_j D_KL(P_j ‖ P̂(x_i)) | S_α ≤ x_i ≤ S_{α+1} )
  = Pr( both inequalities in (18) hold | S_{β+1} ≤ S_α ≤ x_i ≤ S_{α+1} )
  ≤ Pr( D_KL(P_{β+1} ‖ P̂(x_i)) - D_KL(P_β ‖ P̂(x_i)) > 0 | S_{β+1} ≤ S_α ≤ x_i ≤ S_{α+1} ).   (26)

This completes the proof.

Hence we can bound the conditional probability in (17) by exploring the upper bounds of the two conditional probabilities in Corollary 1.

Proposition 1 (Two probabilistic bounds): The two conditional probabilities in (24) are upper bounded by

  Pr( D_KL(P_{β-1} ‖ P̂(x_i)) - D_KL(P_β ‖ P̂(x_i)) > 0 | S_α ≤ x_i ≤ S_{α+1} ≤ S_β ) ≤ 2 exp( -2m ( f(P_β) - 1 + (1/π) arccos x_i )² ),   (27)

  Pr( D_KL(P_{β+1} ‖ P̂(x_i)) - D_KL(P_β ‖ P̂(x_i)) > 0 | S_{β+1} ≤ S_α ≤ x_i ≤ S_{α+1} ) ≤ 2 exp( -2m ( f(P_{β+1}) - 1 + (1/π) arccos x_i )² ),   (28)

where f is defined as

  f(P_j) = [ 1 + ( (P_{j-1}^-)^{P_{j-1}^-} (P_{j-1}^+)^{P_{j-1}^+} / ( (P_j^-)^{P_j^-} (P_j^+)^{P_j^+} ) )^{1/Δ} ]^{-1}.   (29)

Proof: To prove (27), according to (13) and the definition of P̂(x_i) in (12), we have the following equivalences (by expanding both divergences and using P_{β-1}^- - P_β^- = Δ):

  D_KL(P_{β-1} ‖ P̂(x_i)) - D_KL(P_β ‖ P̂(x_i)) > 0
  ⟺ P̂(x_i)^+ > f(P_β)
  ⟺ |{ j : [y ⊙ sign(Φ_i)]_j = 1 }| ≥ ⌊ m f(P_β) ⌋ + 1,   (30)

where ⌊x⌋ denotes the largest integer smaller than x. Since |{ j : [y ⊙ sign(Φ_i)]_j = 1 }| counts, in a sequence of m independent Bernoulli trials of the variable s_i defined in (2), the trials returning s_i = 1, it follows the binomial distribution

  Pr( |{ j : [y ⊙ sign(Φ_i)]_j = 1 }| = j ) = C(m, j) ( 1 - (1/π) arccos x_i )^j ( (1/π) arccos x_i )^{m-j}.   (31)

According to the equivalence shown in (30), the probability in (27) can then be computed as

  Pr( |{ j : [y ⊙ sign(Φ_i)]_j = 1 }| ≥ ⌊ m f(P_β) ⌋ + 1 ) = Σ_{j = ⌊ m f(P_β) ⌋ + 1}^{m} C(m, j) ( 1 - (1/π) arccos x_i )^j ( (1/π) arccos x_i )^{m-j}.   (32)

Since f(P_j) is monotonically increasing in the boundary index j, i.e.,

  ∂ f(P_j) / ∂ P_j^- < 0,   (33)

the condition S_{α+1} ≤ S_β implies

  f(P_β) ≥ f(P_{α+1}) = 1 - (1/π) arccos S_{α+1}.   (34)

Hence the condition of Hoeffding's inequality for the probability (32) holds:

  f(P_β) ≥ 1 - (1/π) arccos S_{α+1} ≥ 1 - (1/π) arccos x_i.   (35)

By applying Hoeffding's inequality to the probability (32), we have

  Pr( |{ j : [y ⊙ sign(Φ_i)]_j = 1 }| ≥ ⌊ m f(P_β) ⌋ + 1 ) ≤ 2 exp( -2m ( f(P_β) - 1 + (1/π) arccos x_i )² ).   (36)

Due to the equivalence proved in (30), we obtain (27). This completes the proof of (27).

To prove (28), similarly, according to (13) and the definition of P̂(x_i) in (12), we have the following equivalences:

  D_KL(P_{β+1} ‖ P̂(x_i)) - D_KL(P_β ‖ P̂(x_i)) > 0
  ⟺ P̂(x_i)^+ < f(P_{β+1})
  ⟺ |{ j : [y ⊙ sign(Φ_i)]_j = 1 }| ≤ ⌈ m f(P_{β+1}) ⌉ - 1,   (37)

where ⌈x⌉ denotes the smallest integer larger than x. According to the equivalence shown in (37) and the binomial distribution given in (31), the probability in (28) can be computed as

  Pr( |{ j : [y ⊙ sign(Φ_i)]_j = 1 }| ≤ ⌈ m f(P_{β+1}) ⌉ - 1 ) = Σ_{j = 0}^{⌈ m f(P_{β+1}) ⌉ - 1} C(m, j) ( 1 - (1/π) arccos x_i )^j ( (1/π) arccos x_i )^{m-j}.   (38)

The monotonicity of f(P_j) in (33) yields, under the condition S_{β+1} ≤ S_α,

  f(P_{β+1}) ≤ f(P_α) = 1 - (1/π) arccos S_α.   (39)

Hence the condition of Hoeffding's inequality for the probability (38) holds:

  f(P_{β+1}) ≤ 1 - (1/π) arccos S_α ≤ 1 - (1/π) arccos x_i.   (40)

By applying Hoeffding's inequality to the probability (38), we have

  Pr( |{ j : [y ⊙ sign(Φ_i)]_j = 1 }| ≤ ⌈ m f(P_{β+1}) ⌉ - 1 ) ≤ 2 exp( -2m ( f(P_{β+1}) - 1 + (1/π) arccos x_i )² ).   (41)

Due to the equivalence proved in (37), we obtain (28). This completes the proof.
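Before moving on, a quick numerical sanity check of Lemma 2 and the threshold f used above (a standalone sketch with our own names; the boundaries are placed directly on the P^+ axis, corresponding to the case x ∈ [-1, 1]): the KL nearest neighbor switches from P_{j-1} to P_j exactly when the estimated P^+ crosses f(P_j):

```python
# Verify that the KL-nearest boundary flips at the threshold f(P_j).
import numpy as np
from scipy.special import xlogy

def kl_bernoulli(p, q):                       # D_KL((p, 1-p) || (q, 1-q))
    return (xlogy(p, p) - p * np.log(q)) + (xlogy(1 - p, 1 - p) - (1 - p) * np.log(1 - q))

def h(p):                                     # p log p + (1-p) log(1-p)
    return xlogy(p, p) + xlogy(1 - p, 1 - p)

k = 8
delta = 1.0 / k
P_plus = delta * np.arange(k)                 # boundaries on the P^+ axis

j = 3
f_j = 1.0 / (1.0 + np.exp((h(P_plus[j - 1]) - h(P_plus[j])) / delta))
for q in (f_j - 1e-3, f_j + 1e-3):            # just below / above the threshold
    kls = [kl_bernoulli(pp, q) for pp in P_plus]
    print(f"P_hat^+ = {q:.4f}  ->  nearest boundary index {int(np.argmin(kls))}")
```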

By using Lemma 2, Corollary 1 and Proposition 1, we have the following theorem about the upper bound of the probability in (17).

Theorem 2 (Quantized recovery bound): Given the HCS quantizer Q in (10) and the HCS reconstruction R in (14), the probability of the event that the true quantization of x_i is q_i = 1 + α while its recovery by HCS is q*_i = 1 + β (q_i ≠ q*_i) is upper bounded by

  Pr( [Q(x)]_i = q_i and [R(y)]_i = q*_i ) = Pr( β = argmin_j D_KL(P_j ‖ P̂(x_i)) | S_α ≤ x_i ≤ S_{α+1} )
  ≤ 2 exp( -2m ( f(P_{q*_i - 1}) - 1 + (1/π) arccos x_i )² ),  if q*_i > q_i;
  ≤ 2 exp( -2m ( f(P_{q*_i}) - 1 + (1/π) arccos x_i )² ),  if q*_i < q_i.   (42)

The minimum amount of 1-bit measurements that ensures successful quantized recovery in HCS is then directly obtained from Theorem 2.

Corollary 2 (Amount of measurements): HCS successfully reconstructs x_i with probability exceeding 1 - η (0 ≤ η ≤ 1) if the number of measurements satisfies

  m ≥ (1 / (2 δ_i²)) log(2/η),   (43)

where

  δ_i = min{ f(P_{q_i}) - 1 + (1/π) arccos x_i ,  1 - (1/π) arccos x_i - f(P_{q_i - 1}) }.   (44)

Moreover, HCS successfully reconstructs the whole signal x with probability exceeding 1 - η if the number of measurements satisfies

  m ≥ (1 / (2 min_i δ_i²)) log(2n/η).   (45)

Remark: Corollary 2 states that the quantization of an n-dimensional signal x on the unit sphere can be successfully recovered by HCS from m = O(log n) 1-bit measurements with high probability. Compared with CS and QCS, the amount of measurements required by HCS is substantially reduced and is irrelevant to the sparsity of the signal. Thus HCS provides a simpler and more economical sampling scheme that does not rely on a sparse or compressible assumption on the signal.

A new issue in quantized recovery is the influence of the number of quantization bits k on the recovery accuracy. According to the definition of δ_i in (44), both the upper bound for the probability of reconstruction failure in (42) and the least number of measurements ensuring reconstruction success in (43) are reduced if |q_i - q*_i| increases. This indicates two facts: (1) the interval that x_i lies in is more easily mistakenly recovered as one of its nearest intervals; and (2) when we increase the number of bits k in the quantized recovery, x_i becomes closer to the boundaries S_{q_i - 1} and S_{q_i}, which leads to a decrease of min_i δ_i in (45). In this case, the number of measurements has to be increased in order to ensure a successful recovery. In summary, recovering a finer quantization in HCS requires an augmented number of measurements. In other words, HCS performs a trade-off between sampling rate and resolution.

IV. DEQUANTIZER

If required, we can dequantize the quantized recovery of the signal by assigning to x*_i the midpoint of the interval that x_i lies in, i.e., x*_i = (S_{q*_i - 1} + S_{q*_i}) / 2. Although this dequantizer is simple and efficient, it is not accurate. Fortunately, existing 1-bit CS provides tools for dequantization that are accurate on the quantized recovery result compared with the midpoint reconstruction, though they introduce extra computational costs that trade efficiency against dequantization accuracy. That is because most 1-bit CS recovery algorithms invoke time-consuming optimization with an ℓ_p (0 ≤ p ≤ 2) penalty or constraint. However, the quantized recovery of HCS provides a box constraint for the subsequent 1-bit CS optimization, and thus significantly reduces the time cost by shrinking the search space for x*. In particular, we obtain the following box constraint on the signal x from the quantized recovery q*:

  Ω = { x : S_{q*_i - 1} ≤ x_i ≤ S_{q*_i} }.   (46)

Since Ω in (46) is convex, the projection onto it is direct:

  P_Ω(x) = z,  z_i = median{ S_{q*_i - 1}, x_i, S_{q*_i} }.   (47)

The dequantization can then be obtained by adding the projection step (47) at the end of each iteration round of a 1-bit CS recovery algorithm. Note that the 1-bit CS algorithms with this modification have a substantially smaller search space, so they converge quickly. Note also that x* has to be normalized as x* := x* / ‖x*‖_2 at the end of the dequantization, because x is assumed to be on the unit ℓ2 sphere.
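A sketch of evaluating the measurement bound of Corollary 2 for a given signal follows (helper names are ours; thresholds_plus are the inner KL decision thresholds on the P^+ axis, recoverable from the signal-domain boundaries S via f(P_j) = 1 - arccos(S_j)/π, and hcs_quantizer is reused from the earlier sketch):

```python
# Corollary 2: m >= log(2n/eta) / (2 min_i delta_i^2), with delta_i the
# distance from P(x_i)^+ to the nearer decision threshold of its cell (44).
import numpy as np

def measurements_needed(x, thresholds_plus, eta=0.05):
    p_plus = 1.0 - np.arccos(x) / np.pi                  # P(x_i)^+ for every i
    edges = np.concatenate(([0.0], thresholds_plus, [1.0]))
    cell = np.searchsorted(edges, p_plus) - 1            # index of x_i's cell
    delta = np.minimum(p_plus - edges[cell], edges[cell + 1] - p_plus)    # (44)
    return int(np.ceil(np.log(2 * x.size / eta) / (2 * delta.min() ** 2)))  # (45)

P_minus, P_plus, S = hcs_quantizer(k=8)                  # from the earlier sketch
thresholds = 1.0 - np.arccos(S[1:-1]) / np.pi            # f(P_j) = 1 - arccos(S_j)/pi
rng = np.random.default_rng(6)
x = rng.standard_normal(256)
x /= np.linalg.norm(x)
print("measurements required (bound):", measurements_needed(x, thresholds))
```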
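Both dequantizers of this section fit in a few lines. The sketch below (names ours) implements the midpoint rule and the projection (47); wiring the projection into a particular 1-bit CS solver such as BIHT is left as an assumption:

```python
# Midpoint dequantizer and the box projection P_Omega of (47).
import numpy as np

def midpoint_dequantize(q_star, S):
    """x*_i = (S_{q*_i - 1} + S_{q*_i}) / 2, then renormalize to the sphere."""
    x = 0.5 * (S[q_star - 1] + S[q_star])
    return x / np.linalg.norm(x)

def project_box(x, q_star, S):
    """P_Omega: clip each x_i into its recovered cell [S_{q*-1}, S_{q*}],
    i.e., the median of the three values in (47)."""
    return np.clip(x, S[q_star - 1], S[q_star])
```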

A. HCS+dequantizer error bound

We analyze the error of the HCS+dequantizer recovery based on the fact that both the error err_H caused by the HCS reconstruction and the error err_D caused by a dequantizer can be upper bounded. In the worst case, the upper bound of the dequantization error err_D is

  (err_D)_i ≤ S_{q*_i} - S_{q*_i - 1},  i = 1, ..., n.   (48)

Based on the consistency in Lemma 1, the definition of the BɛSE [34] and Lemma 3 on the amount of measurements [34], we derive the upper bound of the HCS+dequantizer recovery error for a K-sparse signal in the following Theorem 3.

Definition 1 (Binary ɛ-stable embedding): Let ɛ ∈ (0, 1). A mapping A : R^n → {-1, 1}^m is a binary ɛ-stable embedding (BɛSE) of order K for sparse signals if

  D_S(x, x') - ɛ ≤ D_H(A(x), A(x')) ≤ D_S(x, x') + ɛ,  where D_S(x, x') = (1/π) arccos⟨x, x'⟩,   (49)

for all K-sparse signals x, x' on the unit ℓ2 sphere.

Lemma 3 (Amount of measurements): Let Φ be the measurement matrix defined in Theorem 1, and let the 1-bit measurement operator A be defined as in (1). Fix 0 ≤ µ ≤ 1 and ɛ > 0. If the number of measurements is

  m ≥ (4/ɛ²) ( K log n + 2K log(50/ɛ) + log(2/µ) ),   (50)

then with probability exceeding 1 - µ, the mapping defined by A is a BɛSE of order K for sparse signals.

Theorem 3 (HCS+dequantizer error bound): If x* = D(q*) = D(R(y)) = D(R(A(x))) is the HCS+dequantizer recovery of a K-sparse signal x, where y includes m measurements whose amount satisfies (50), then

  D_S(x, x*) ≤ D_H(A(x), A(x*)) + ɛ ≤ σ / (2‖x‖_2) + γ + ɛ,   (51)

where σ and γ are defined in Lemma 1.

V. EMPIRICAL STUDY

This section evaluates HCS and compares it with BIHT [34] for 1-bit CS in three groups of numerical experiments. We use the average quantized recovery error Σ_{i=1}^n |q_i - q*_i| / (nk) to measure err_H, defined in Section III-C.
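The evaluation metric, as a one-line sketch:

```python
# Average quantized recovery error: sum_i |q_i - q*_i| / (n k).
import numpy as np

def avg_quantized_error(q_true, q_star, k):
    return np.abs(q_true - q_star).sum() / (q_true.size * k)
```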

In each trial, we draw a normalized Gaussian random matrix Φ ∈ R^{m×n} as given in Theorem 1 and a signal of length n and cardinality K, whose K nonzero entries are drawn uniformly at random on the unit ℓ2 sphere.

Fig. 2. Phase plots of HCS and 1-bit CS + HCS quantizer in the noiseless case: quantized recovery error and recovery time for n = 1024 and k ∈ {1024, 256, 128}.

A. Phase transition in the noiseless case

We first study the phase transition properties of HCS and 1-bit CS on the quantized recovery error and on the recovery time in the noiseless case. We conduct HCS and 1-bit CS + HCS quantizer over repeated trials. In particular, given fixed n and k, we uniformly choose a grid of values of the cardinality K and of the number of measurements m.
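A sketch of one trial's data generation as just described (sizes are illustrative):

```python
# One simulation trial: a K-sparse signal on the unit l2 sphere, a Gaussian
# measurement matrix as in Theorem 1 (row normalization omitted, since it
# does not change the measurement signs), and its 1-bit measurements.
import numpy as np

def make_trial(n, K, m, rng):
    x = np.zeros(n)
    support = rng.choice(n, size=K, replace=False)
    x[support] = rng.standard_normal(K)
    x /= np.linalg.norm(x)                    # unit l2 sphere
    Phi = rng.standard_normal((m, n))
    y = np.sign(Phi @ x)                      # 1-bit measurements
    return x, Phi, y

x, Phi, y = make_trial(n=1024, K=16, m=4096, rng=np.random.default_rng(7))
```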

For each {K, m} pair, we conduct a number of independent trials, i.e., HCS recovery and 1-bit CS + HCS quantizer recovery of n-dimensional signals with cardinality K from their m 1-bit measurements. The average quantized recovery errors and average time costs of the two methods over all {K, m} pairs for different n and k are shown in Figure 2 and Figure 3.

Fig. 3. Phase plots of HCS and 1-bit CS + HCS quantizer in the noiseless case: quantized recovery error and recovery time for n = 512 and k ∈ {512, 256, 128}.

In Figure 2 and Figure 3, the phase plots of quantized recovery error show that the quantized recovery of HCS is accurate if the 1-bit measurements are sufficient. Compared to 1-bit CS + HCS quantizer, HCS needs slightly more measurements to reach the same recovery precision. That is because 1-bit CS recovers the exact signal, while HCS recovers its quantization. Another reason is that the HCS quantizer performs differently from the uniform linear quantizer when x_i approaches -1 or 1 for the normalized signal x, which exactly corresponds to the left margin area of the phase plots. However, the phase plots of quantized recovery time show that HCS costs substantially less time than 1-bit CS + HCS quantizer. Thus HCS can significantly improve the efficiency of practical digital systems and eliminate the hardware cost of additional quantization.

25 25 quantizer, needs slightly ore easureents to reach the sae recovery precision. That is because -bit CS recovers the exact signal, while recovers its quantization. Another reason is quantizer perfors different fro unifor linear quantizer when x i approaching or for the noralized signal x, which exactly corresponds to the left argin area of the phase plot. However, the phase plots of quantized recovery tie shows that costs substantially less tie than -bit CS+ quantizer. Thus can significantly iprove the efficiency of practical digital systes and eliinate the hardware cost for additional quantization. B. Phase transition in the noisy case We also consider the phase transition properties [4] of and -bit CS on quantized recovery error and on recovery tie in the noisy case. The experients setting up is the sae as that in the noiseless case except Gaussian rando noises are iposed to the input signals. The results are shown in Figure 4 and Figure 5. Coparing to the phase plots of quantized recovery error in the noiseless case, perfors uch ore robust than -bit CS. The tie costs of shown in Figure 4 and Figure 5 still significantly less than that of -bit CS. C. Quantized recovery error vs. nuber of easureents in the noisy case We then show the trade-off between quantized recovery error and the aount of easureents on 25 trials for noisy signals of different n, K, k and signal-to-noise ratio SNR. In particular, given fixed n, K, k and SNR, we uniforly choose 5 values of between and 6n. For each value, we conduct 5 trials of recovery and -bit CS+ quantizer by recovering the quantizations of 5 noisy signals fro their -bit easureents. The quantized recovery error and tie cost of each trial for different n, K, k and SNR are shown in Figure 6 and Figure 7. Figure 6 and Figure 7 show the quantized recovery error of both and -bit CS+ quantization drops drastically with the increasing of the nuber of easureents. For dense signals with large noise, the two ethods perfor nearly the sae on the recovery accuracy. This phenoenon indicates that works well on dense signals and is robust to noise coparing to CS and -bit CS. In addition, the tie cost of increases substantially slower than that of -bit CS+ quantizer with the increasing of the nuber of easureents. October, 2

Fig. 4. Phase plots of HCS and 1-bit CS + HCS quantizer in the noisy case: quantized recovery error and recovery time for n = 1024, k ∈ {1024, 256, 128}, at various SNRs.

Fig. 5. Phase plots of HCS and 1-bit CS + HCS quantizer in the noisy case: quantized recovery error and recovery time for n = 512, k ∈ {512, 256, 128}, at various SNRs.

D. Dequantization and consistency

We finally explore the performance of the HCS+dequantizer stated in Section IV and verify the consistency investigated in Lemma 1. In particular, we plot the normalized Hamming loss (defined in Lemma 1) between A(x) and A(x*) versus the angular error (49) between x and x*, over repeated trials for different amounts of measurements, in Figure 8.

Figure 8 shows a linear relationship between the Hamming error D_H(A(x), A(x*)) and the angular error D_S(x, x*) for HCS + 1-bit CS dequantizer, HCS + midpoint dequantizer and 1-bit CS, given sufficient measurements. This linear relationship verifies the HCS+dequantizer error bound in Theorem 3 and the consistency in Lemma 1. The figure also shows that HCS + 1-bit CS dequantizer and 1-bit CS perform better than HCS + midpoint dequantizer, which verifies the effectiveness of the 1-bit CS dequantizer. In the experiments, the HCS + 1-bit CS dequantizer requires only a small fraction of the iterations needed by 1-bit CS alone to reach the same accuracy. Thus HCS can significantly save the computation of the subsequent dequantization.

VI. CONCLUSION

We have proposed a new signal acquisition technique, Hamming compressed sensing (HCS), to recover the k-bit quantization of a signal x from a small amount of its 1-bit measurements. The HCS recovery invokes n KL-divergence based nearest neighbor searches in a Bernoulli distribution domain and requires only nk computations of the KL divergence. The main significance of HCS is as follows: (1) it provides a direct recovery of the quantized signal from a few measurements for digital systems, which has not been thoroughly studied before but is essential in practice; (2) it has linear recovery time, and thus is extremely fast compared with optimization based or iterative methods; (3) the sparsity assumption on the signal is not compulsory in HCS. Another compelling advantage of HCS is that its recovery can significantly accelerate the subsequent dequantization. The quantized recovery error bound of HCS for general signals and the HCS+dequantizer recovery error bound for sparse signals have been carefully studied.

REFERENCES

[1] C. E. Shannon, "Communication in the presence of noise," Proceedings of the Institute of Radio Engineers, vol. 37, no. 1, pp. 10-21, 1949.
[2] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
[3] E. J. Candès and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406-5425, 2006.
[4] E. J. Candès, M. Rudelson, T. Tao, and R. Vershynin, "Error correction via linear programming," in Proceedings of the Annual IEEE Symposium on Foundations of Computer Science, 2005.
[5] R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, "Model-based compressive sensing," IEEE Transactions on Information Theory, vol. 56, pp. 1982-2001, 2010.
[6] S. Ji, Y. Xue, and L. Carin, "Bayesian compressive sensing," IEEE Transactions on Signal Processing, vol. 56, no. 6, pp. 2346-2356, 2008.
[7] A. Gilbert and P. Indyk, "Sparse recovery using sparse matrices," Proceedings of the IEEE, vol. 98, no. 6, pp. 937-947, 2010.
[8] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, pp. 4655-4666, 2007.
[9] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, pp. 301-321, 2008.
[10] D. L. Donoho, A. Maleki, and A. Montanari, "Message passing algorithms for compressed sensing," Proceedings of the National Academy of Sciences, 2009.


arxiv: v5 [cs.it] 16 Mar 2012 ONE-BIT COMPRESSED SENSING BY LINEAR PROGRAMMING YANIV PLAN AND ROMAN VERSHYNIN arxiv:09.499v5 [cs.it] 6 Mar 0 Abstract. We give the first coputationally tractable and alost optial solution to the proble

More information

COS 424: Interacting with Data. Written Exercises

COS 424: Interacting with Data. Written Exercises COS 424: Interacting with Data Hoework #4 Spring 2007 Regression Due: Wednesday, April 18 Written Exercises See the course website for iportant inforation about collaboration and late policies, as well

More information

e-companion ONLY AVAILABLE IN ELECTRONIC FORM

e-companion ONLY AVAILABLE IN ELECTRONIC FORM OPERATIONS RESEARCH doi 10.1287/opre.1070.0427ec pp. ec1 ec5 e-copanion ONLY AVAILABLE IN ELECTRONIC FORM infors 07 INFORMS Electronic Copanion A Learning Approach for Interactive Marketing to a Custoer

More information

PAC-Bayes Analysis Of Maximum Entropy Learning

PAC-Bayes Analysis Of Maximum Entropy Learning PAC-Bayes Analysis Of Maxiu Entropy Learning John Shawe-Taylor and David R. Hardoon Centre for Coputational Statistics and Machine Learning Departent of Coputer Science University College London, UK, WC1E

More information

arxiv: v1 [cs.ds] 17 Mar 2016

arxiv: v1 [cs.ds] 17 Mar 2016 Tight Bounds for Single-Pass Streaing Coplexity of the Set Cover Proble Sepehr Assadi Sanjeev Khanna Yang Li Abstract arxiv:1603.05715v1 [cs.ds] 17 Mar 2016 We resolve the space coplexity of single-pass

More information

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search Quantu algoriths (CO 781, Winter 2008) Prof Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search ow we begin to discuss applications of quantu walks to search algoriths

More information

Detection and Estimation Theory

Detection and Estimation Theory ESE 54 Detection and Estiation Theory Joseph A. O Sullivan Sauel C. Sachs Professor Electronic Systes and Signals Research Laboratory Electrical and Systes Engineering Washington University 11 Urbauer

More information

Hybrid System Identification: An SDP Approach

Hybrid System Identification: An SDP Approach 49th IEEE Conference on Decision and Control Deceber 15-17, 2010 Hilton Atlanta Hotel, Atlanta, GA, USA Hybrid Syste Identification: An SDP Approach C Feng, C M Lagoa, N Ozay and M Sznaier Abstract The

More information

arxiv: v1 [math.na] 10 Oct 2016

arxiv: v1 [math.na] 10 Oct 2016 GREEDY GAUSS-NEWTON ALGORITHM FOR FINDING SPARSE SOLUTIONS TO NONLINEAR UNDERDETERMINED SYSTEMS OF EQUATIONS MÅRTEN GULLIKSSON AND ANNA OLEYNIK arxiv:6.395v [ath.na] Oct 26 Abstract. We consider the proble

More information

SPECTRUM sensing is a core concept of cognitive radio

SPECTRUM sensing is a core concept of cognitive radio World Acadey of Science, Engineering and Technology International Journal of Electronics and Counication Engineering Vol:6, o:2, 202 Efficient Detection Using Sequential Probability Ratio Test in Mobile

More information

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t.

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t. CS 493: Algoriths for Massive Data Sets Feb 2, 2002 Local Models, Bloo Filter Scribe: Qin Lv Local Models In global odels, every inverted file entry is copressed with the sae odel. This work wells when

More information

On the Communication Complexity of Lipschitzian Optimization for the Coordinated Model of Computation

On the Communication Complexity of Lipschitzian Optimization for the Coordinated Model of Computation journal of coplexity 6, 459473 (2000) doi:0.006jco.2000.0544, available online at http:www.idealibrary.co on On the Counication Coplexity of Lipschitzian Optiization for the Coordinated Model of Coputation

More information

Multi-Scale/Multi-Resolution: Wavelet Transform

Multi-Scale/Multi-Resolution: Wavelet Transform Multi-Scale/Multi-Resolution: Wavelet Transfor Proble with Fourier Fourier analysis -- breaks down a signal into constituent sinusoids of different frequencies. A serious drawback in transforing to the

More information

Exact tensor completion with sum-of-squares

Exact tensor completion with sum-of-squares Proceedings of Machine Learning Research vol 65:1 54, 2017 30th Annual Conference on Learning Theory Exact tensor copletion with su-of-squares Aaron Potechin Institute for Advanced Study, Princeton David

More information

The Weierstrass Approximation Theorem

The Weierstrass Approximation Theorem 36 The Weierstrass Approxiation Theore Recall that the fundaental idea underlying the construction of the real nubers is approxiation by the sipler rational nubers. Firstly, nubers are often deterined

More information

A Note on Scheduling Tall/Small Multiprocessor Tasks with Unit Processing Time to Minimize Maximum Tardiness

A Note on Scheduling Tall/Small Multiprocessor Tasks with Unit Processing Time to Minimize Maximum Tardiness A Note on Scheduling Tall/Sall Multiprocessor Tasks with Unit Processing Tie to Miniize Maxiu Tardiness Philippe Baptiste and Baruch Schieber IBM T.J. Watson Research Center P.O. Box 218, Yorktown Heights,

More information

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

Department of Electronic and Optical Engineering, Ordnance Engineering College, Shijiazhuang, , China

Department of Electronic and Optical Engineering, Ordnance Engineering College, Shijiazhuang, , China 6th International Conference on Machinery, Materials, Environent, Biotechnology and Coputer (MMEBC 06) Solving Multi-Sensor Multi-Target Assignent Proble Based on Copositive Cobat Efficiency and QPSO Algorith

More information

Design of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding

Design of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding IEEE TRANSACTIONS ON INFORMATION THEORY (SUBMITTED PAPER) 1 Design of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding Lai Wei, Student Meber, IEEE, David G. M. Mitchell, Meber, IEEE, Thoas

More information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information Cite as: Straub D. (2014). Value of inforation analysis with structural reliability ethods. Structural Safety, 49: 75-86. Value of Inforation Analysis with Structural Reliability Methods Daniel Straub

More information

A Nonlinear Sparsity Promoting Formulation and Algorithm for Full Waveform Inversion

A Nonlinear Sparsity Promoting Formulation and Algorithm for Full Waveform Inversion A Nonlinear Sparsity Prooting Forulation and Algorith for Full Wavefor Inversion Aleksandr Aravkin, Tristan van Leeuwen, Jaes V. Burke 2 and Felix Herrann Dept. of Earth and Ocean sciences University of

More information

The Simplex Method is Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate

The Simplex Method is Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate The Siplex Method is Strongly Polynoial for the Markov Decision Proble with a Fixed Discount Rate Yinyu Ye April 20, 2010 Abstract In this note we prove that the classic siplex ethod with the ost-negativereduced-cost

More information

Ch 12: Variations on Backpropagation

Ch 12: Variations on Backpropagation Ch 2: Variations on Backpropagation The basic backpropagation algorith is too slow for ost practical applications. It ay take days or weeks of coputer tie. We deonstrate why the backpropagation algorith

More information

Using EM To Estimate A Probablity Density With A Mixture Of Gaussians

Using EM To Estimate A Probablity Density With A Mixture Of Gaussians Using EM To Estiate A Probablity Density With A Mixture Of Gaussians Aaron A. D Souza adsouza@usc.edu Introduction The proble we are trying to address in this note is siple. Given a set of data points

More information

Multi-Dimensional Hegselmann-Krause Dynamics

Multi-Dimensional Hegselmann-Krause Dynamics Multi-Diensional Hegselann-Krause Dynaics A. Nedić Industrial and Enterprise Systes Engineering Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu B. Touri Coordinated Science Laboratory

More information

Convex Programming for Scheduling Unrelated Parallel Machines

Convex Programming for Scheduling Unrelated Parallel Machines Convex Prograing for Scheduling Unrelated Parallel Machines Yossi Azar Air Epstein Abstract We consider the classical proble of scheduling parallel unrelated achines. Each job is to be processed by exactly

More information

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lesson 1 4 October 2017 Outline Learning and Evaluation for Pattern Recognition Notation...2 1. The Pattern Recognition

More information

Stochastic Subgradient Methods

Stochastic Subgradient Methods Stochastic Subgradient Methods Lingjie Weng Yutian Chen Bren School of Inforation and Coputer Science University of California, Irvine {wengl, yutianc}@ics.uci.edu Abstract Stochastic subgradient ethods

More information

Using a De-Convolution Window for Operating Modal Analysis

Using a De-Convolution Window for Operating Modal Analysis Using a De-Convolution Window for Operating Modal Analysis Brian Schwarz Vibrant Technology, Inc. Scotts Valley, CA Mark Richardson Vibrant Technology, Inc. Scotts Valley, CA Abstract Operating Modal Analysis

More information

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer

More information

arxiv: v1 [cs.ds] 3 Feb 2014

arxiv: v1 [cs.ds] 3 Feb 2014 arxiv:40.043v [cs.ds] 3 Feb 04 A Bound on the Expected Optiality of Rando Feasible Solutions to Cobinatorial Optiization Probles Evan A. Sultani The Johns Hopins University APL evan@sultani.co http://www.sultani.co/

More information

Computational and Statistical Learning Theory

Computational and Statistical Learning Theory Coputational and Statistical Learning Theory TTIC 31120 Prof. Nati Srebro Lecture 2: PAC Learning and VC Theory I Fro Adversarial Online to Statistical Three reasons to ove fro worst-case deterinistic

More information

On Conditions for Linearity of Optimal Estimation

On Conditions for Linearity of Optimal Estimation On Conditions for Linearity of Optial Estiation Erah Akyol, Kuar Viswanatha and Kenneth Rose {eakyol, kuar, rose}@ece.ucsb.edu Departent of Electrical and Coputer Engineering University of California at

More information

Lecture 20 November 7, 2013

Lecture 20 November 7, 2013 CS 229r: Algoriths for Big Data Fall 2013 Prof. Jelani Nelson Lecture 20 Noveber 7, 2013 Scribe: Yun Willia Yu 1 Introduction Today we re going to go through the analysis of atrix copletion. First though,

More information

Boosting with log-loss

Boosting with log-loss Boosting with log-loss Marco Cusuano-Towner Septeber 2, 202 The proble Suppose we have data exaples {x i, y i ) i =... } for a two-class proble with y i {, }. Let F x) be the predictor function with the

More information

RANDOM GRADIENT EXTRAPOLATION FOR DISTRIBUTED AND STOCHASTIC OPTIMIZATION

RANDOM GRADIENT EXTRAPOLATION FOR DISTRIBUTED AND STOCHASTIC OPTIMIZATION RANDOM GRADIENT EXTRAPOLATION FOR DISTRIBUTED AND STOCHASTIC OPTIMIZATION GUANGHUI LAN AND YI ZHOU Abstract. In this paper, we consider a class of finite-su convex optiization probles defined over a distributed

More information

Highly Robust Error Correction by Convex Programming

Highly Robust Error Correction by Convex Programming Highly Robust Error Correction by Convex Prograing Eanuel J. Candès and Paige A. Randall Applied and Coputational Matheatics, Caltech, Pasadena, CA 9115 Noveber 6; Revised Noveber 7 Abstract This paper

More information

OPTIMIZATION in multi-agent networks has attracted

OPTIMIZATION in multi-agent networks has attracted Distributed constrained optiization and consensus in uncertain networks via proxial iniization Kostas Margellos, Alessandro Falsone, Sione Garatti and Maria Prandini arxiv:603.039v3 [ath.oc] 3 May 07 Abstract

More information

Interactive Markov Models of Evolutionary Algorithms

Interactive Markov Models of Evolutionary Algorithms Cleveland State University EngagedScholarship@CSU Electrical Engineering & Coputer Science Faculty Publications Electrical Engineering & Coputer Science Departent 2015 Interactive Markov Models of Evolutionary

More information

Recovering Block-structured Activations Using Compressive Measurements

Recovering Block-structured Activations Using Compressive Measurements Recovering Block-structured Activations Using Copressive Measureents Sivaraan Balakrishnan, Mladen Kolar, Alessandro Rinaldo, and Aarti Singh Abstract We consider the probles of detection and support recovery

More information

Block designs and statistics

Block designs and statistics Bloc designs and statistics Notes for Math 447 May 3, 2011 The ain paraeters of a bloc design are nuber of varieties v, bloc size, nuber of blocs b. A design is built on a set of v eleents. Each eleent

More information

2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all

2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all Lecture 6 Introduction to kinetic theory of plasa waves Introduction to kinetic theory So far we have been odeling plasa dynaics using fluid equations. The assuption has been that the pressure can be either

More information

Birthday Paradox Calculations and Approximation

Birthday Paradox Calculations and Approximation Birthday Paradox Calculations and Approxiation Joshua E. Hill InfoGard Laboratories -March- v. Birthday Proble In the birthday proble, we have a group of n randoly selected people. If we assue that birthdays

More information

Efficient Filter Banks And Interpolators

Efficient Filter Banks And Interpolators Efficient Filter Banks And Interpolators A. G. DEMPSTER AND N. P. MURPHY Departent of Electronic Systes University of Westinster 115 New Cavendish St, London W1M 8JS United Kingdo Abstract: - Graphical

More information

A method to determine relative stroke detection efficiencies from multiplicity distributions

A method to determine relative stroke detection efficiencies from multiplicity distributions A ethod to deterine relative stroke detection eiciencies ro ultiplicity distributions Schulz W. and Cuins K. 2. Austrian Lightning Detection and Inoration Syste (ALDIS), Kahlenberger Str.2A, 90 Vienna,

More information

Lean Walsh Transform

Lean Walsh Transform Lean Walsh Transfor Edo Liberty 5th March 007 inforal intro We show an orthogonal atrix A of size d log 4 3 d (α = log 4 3) which is applicable in tie O(d). By applying a rando sign change atrix S to the

More information

Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm

Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm Acta Polytechnica Hungarica Vol., No., 04 Sybolic Analysis as Universal Tool for Deriving Properties of Non-linear Algoriths Case study of EM Algorith Vladiir Mladenović, Miroslav Lutovac, Dana Porrat

More information

The Transactional Nature of Quantum Information

The Transactional Nature of Quantum Information The Transactional Nature of Quantu Inforation Subhash Kak Departent of Coputer Science Oklahoa State University Stillwater, OK 7478 ABSTRACT Inforation, in its counications sense, is a transactional property.

More information

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval Unifor Approxiation and Bernstein Polynoials with Coefficients in the Unit Interval Weiang Qian and Marc D. Riedel Electrical and Coputer Engineering, University of Minnesota 200 Union St. S.E. Minneapolis,

More information

Statistical clustering and Mineral Spectral Unmixing in Aviris Hyperspectral Image of Cuprite, NV

Statistical clustering and Mineral Spectral Unmixing in Aviris Hyperspectral Image of Cuprite, NV CS229 REPORT, DECEMBER 05 1 Statistical clustering and Mineral Spectral Unixing in Aviris Hyperspectral Iage of Cuprite, NV Mario Parente, Argyris Zynis I. INTRODUCTION Hyperspectral Iaging is a technique

More information

arxiv: v3 [quant-ph] 18 Oct 2017

arxiv: v3 [quant-ph] 18 Oct 2017 Self-guaranteed easureent-based quantu coputation Masahito Hayashi 1,, and Michal Hajdušek, 1 Graduate School of Matheatics, Nagoya University, Furocho, Chikusa-ku, Nagoya 464-860, Japan Centre for Quantu

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2016/2017 Lessons 9 11 Jan 2017 Outline Artificial Neural networks Notation...2 Convolutional Neural Networks...3

More information

Robust Spectral Compressed Sensing via Structured Matrix Completion Yuxin Chen, Student Member, IEEE, and Yuejie Chi, Member, IEEE

Robust Spectral Compressed Sensing via Structured Matrix Completion Yuxin Chen, Student Member, IEEE, and Yuejie Chi, Member, IEEE 6576 IEEE TRANSACTIONS ON INORMATION THEORY, VOL 60, NO 0, OCTOBER 04 Robust Spectral Copressed Sensing via Structured Matrix Copletion Yuxin Chen, Student Meber, IEEE, and Yuejie Chi, Meber, IEEE Abstract

More information

Chapter 6 1-D Continuous Groups

Chapter 6 1-D Continuous Groups Chapter 6 1-D Continuous Groups Continuous groups consist of group eleents labelled by one or ore continuous variables, say a 1, a 2,, a r, where each variable has a well- defined range. This chapter explores:

More information

Sparse Signal Reconstruction via Iterative Support Detection

Sparse Signal Reconstruction via Iterative Support Detection Sparse Signal Reconstruction via Iterative Support Detection Yilun Wang and Wotao Yin Abstract. We present a novel sparse signal reconstruction ethod, iterative support detection (), aiing to achieve fast

More information

Fairness via priority scheduling

Fairness via priority scheduling Fairness via priority scheduling Veeraruna Kavitha, N Heachandra and Debayan Das IEOR, IIT Bobay, Mubai, 400076, India vavitha,nh,debayan}@iitbacin Abstract In the context of ulti-agent resource allocation

More information

Optimal Resource Allocation in Multicast Device-to-Device Communications Underlaying LTE Networks

Optimal Resource Allocation in Multicast Device-to-Device Communications Underlaying LTE Networks 1 Optial Resource Allocation in Multicast Device-to-Device Counications Underlaying LTE Networks Hadi Meshgi 1, Dongei Zhao 1 and Rong Zheng 2 1 Departent of Electrical and Coputer Engineering, McMaster

More information

Lecture 21. Interior Point Methods Setup and Algorithm

Lecture 21. Interior Point Methods Setup and Algorithm Lecture 21 Interior Point Methods In 1984, Kararkar introduced a new weakly polynoial tie algorith for solving LPs [Kar84a], [Kar84b]. His algorith was theoretically faster than the ellipsoid ethod and

More information

New Slack-Monotonic Schedulability Analysis of Real-Time Tasks on Multiprocessors

New Slack-Monotonic Schedulability Analysis of Real-Time Tasks on Multiprocessors New Slack-Monotonic Schedulability Analysis of Real-Tie Tasks on Multiprocessors Risat Mahud Pathan and Jan Jonsson Chalers University of Technology SE-41 96, Göteborg, Sweden {risat, janjo}@chalers.se

More information

Bipartite subgraphs and the smallest eigenvalue

Bipartite subgraphs and the smallest eigenvalue Bipartite subgraphs and the sallest eigenvalue Noga Alon Benny Sudaov Abstract Two results dealing with the relation between the sallest eigenvalue of a graph and its bipartite subgraphs are obtained.

More information

Complex Quadratic Optimization and Semidefinite Programming

Complex Quadratic Optimization and Semidefinite Programming Coplex Quadratic Optiization and Seidefinite Prograing Shuzhong Zhang Yongwei Huang August 4 Abstract In this paper we study the approxiation algoriths for a class of discrete quadratic optiization probles

More information

A remark on a success rate model for DPA and CPA

A remark on a success rate model for DPA and CPA A reark on a success rate odel for DPA and CPA A. Wieers, BSI Version 0.5 andreas.wieers@bsi.bund.de Septeber 5, 2018 Abstract The success rate is the ost coon evaluation etric for easuring the perforance

More information

Weighted Superimposed Codes and Constrained Integer Compressed Sensing

Weighted Superimposed Codes and Constrained Integer Compressed Sensing Weighted Superiposed Codes and Constrained Integer Copressed Sensing Wei Dai and Olgica Milenovic Dept. of Electrical and Coputer Engineering University of Illinois, Urbana-Chapaign Abstract We introduce

More information

A Simple Regression Problem

A Simple Regression Problem A Siple Regression Proble R. M. Castro March 23, 2 In this brief note a siple regression proble will be introduced, illustrating clearly the bias-variance tradeoff. Let Y i f(x i ) + W i, i,..., n, where

More information

A Generalized Permanent Estimator and its Application in Computing Multi- Homogeneous Bézout Number

A Generalized Permanent Estimator and its Application in Computing Multi- Homogeneous Bézout Number Research Journal of Applied Sciences, Engineering and Technology 4(23): 5206-52, 202 ISSN: 2040-7467 Maxwell Scientific Organization, 202 Subitted: April 25, 202 Accepted: May 3, 202 Published: Deceber

More information

Quantile Search: A Distance-Penalized Active Learning Algorithm for Spatial Sampling

Quantile Search: A Distance-Penalized Active Learning Algorithm for Spatial Sampling Quantile Search: A Distance-Penalized Active Learning Algorith for Spatial Sapling John Lipor 1, Laura Balzano 1, Branko Kerkez 2, and Don Scavia 3 1 Departent of Electrical and Coputer Engineering, 2

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words)

A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words) 1 A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine (1900 words) Contact: Jerry Farlow Dept of Matheatics Univeristy of Maine Orono, ME 04469 Tel (07) 866-3540 Eail: farlow@ath.uaine.edu

More information