Compressive Sensing Over Networks


Forty-Eighth Annual Allerton Conference
Allerton House, UIUC, Illinois, USA, September 29 - October 1, 2010

Compressive Sensing Over Networks

Soheil Feizi, MIT, Email: sfeizi@mit.edu
Muriel Médard, MIT, Email: medard@mit.edu
Michelle Effros, Caltech, Email: effros@caltech.edu

Abstract—In this paper, we demonstrate some applications of compressive sensing over networks. We make a connection between compressive sensing and traditional information theoretic techniques in source coding and channel coding. Our results provide an explicit trade-off between the rate and the decoding complexity. The key difference between compressive sensing and traditional information theoretic approaches lies at the decoding side. Although optimal decoders to recover the original signal compressed by source coding have high complexity, the compressive sensing decoder is a linear or convex optimization. First, we investigate applications of compressive sensing to distributed compression of correlated sources. Here, by using compressive sensing, we propose a compression scheme for a family of correlated sources with a modularized decoder, providing a trade-off between the compression rate and the decoding complexity. We call this scheme Sparse Distributed Compression. We use this compression scheme for a general multicast network with correlated sources. Here, we first decode some of the sources by a network decoding technique and then use a compressive sensing decoder to obtain the whole set of sources. Then, we investigate applications of compressive sensing to channel coding. We propose a coding scheme that combines compressive sensing and random channel coding for a high-SNR point-to-point Gaussian channel. We call this scheme Sparse Channel Coding. We propose a modularized decoder providing a trade-off between the capacity loss and the decoding complexity. At the receiver side, first we use a compressive sensing decoder on a noisy signal to obtain a noisy estimate of the original signal, and then we apply a traditional channel coding decoder to find the original signal.

I. INTRODUCTION

Data compression has been a research topic for many scholars over the past years. Recently, however, the field of compressive sensing, which originated in [1], [2] and [3], looked at the compression problem from another point of view. In this paper, we try to build a bridge between compressive sensing, which may be viewed as a signal processing technique, and both source coding and channel coding. By using this connection, we propose some non-trivial practical coding schemes that provide a trade-off between the rate and the decoding complexity. Compressive sensing has provided a low-complexity approximation to signal reconstruction. Information theory has been mostly concerned with the accuracy of signal reconstruction under rate constraints. In this paper, we seek to provide new connections which use compressive sensing for traditional information theory problems, such as Slepian-Wolf compression and channel coding, in order to provide mechanisms to explicitly trade off between the decoding complexity and the rate.

Some previous work investigated the connection between compressive sensing and information theory. For example, reference [4] studied the minimum number of noisy measurements required to recover a sparse signal by using Shannon information theory bounds.

This material is based upon work under an ITMANET project subcontract and an award supported by AFOSR.
Reference [5] investigated the information contained in noisy measurements by viewing the measurement system as an information theoretic channel and using the rate-distortion function. Also, reference [6] studied the trade-offs between the number of measurements, the signal sparsity level, and the measurement noise level for exact support recovery of sparse signals, by using an analogy between support recovery and communication over the Gaussian multiple access channel.

In this paper, we want to investigate applications of compressive sensing to source coding and channel coding. First, in Section III, we consider distributed compression of correlated sources. In this section, by using compressive sensing, we propose a compression scheme for a Slepian-Wolf problem with a family of correlated sources. The proposed decoder is a modularized decoder, providing a trade-off between the compression rate and the decoding complexity. We call this scheme Sparse Distributed Compression. Then, we use this compression scheme for a general multicast network with correlated sources. Here, we first decode some of the sources by a network decoding technique and then use the compressive sensing decoder to obtain the whole set of sources. Then, in Section IV, we investigate applications of compressive sensing to channel coding. We propose a coding scheme that combines compressive sensing and random channel coding for a high-SNR point-to-point Gaussian channel. We call this scheme Sparse Channel Coding. We propose a modularized decoder, providing a trade-off between the capacity loss and the decoding complexity (i.e., the higher the capacity loss, the lower the decoding complexity). The idea is to intentionally add some correlation to the transmitted signals in order to decrease the decoding complexity. At the receiver side, first we use a compressive sensing decoder on a noisy signal to obtain a noisy estimate of the original signal, and then we apply a traditional channel coding decoder to find the original signal.

II. TECHNICAL BACKGROUND AND PRIOR WORK

In this section, we review some prior results, in both compressive sensing and distributed compression, which will be used in the rest of the paper.

A. Compressive Sensing Background

Let X ∈ R^n be a signal vector. We say this signal is k-sparse if k of its coordinates are non-zero and the rest are zero. Define α = k/n as the sparsity ratio of the signal. Suppose Φ is an n × n measurement matrix (hence, Y = ΦX is the measurement vector). If Φ is non-singular, by having Y and Φ, one can recover X. However, [2] and [3] showed that one can recover the original sparse signal by having far fewer measurements than n. Suppose that, for a given n and k, m satisfies the following:

m ≥ ρ k log(n/k),   (1)

where ρ is a constant. Let S be the power set of {1, 2, ..., n}, and S_m represent a member of S with cardinality m (e.g., S_m = {1, 2, ..., m}). An m × n matrix Φ_{S_m} is formed of the m rows of Φ whose indices belong to S_m; Φ_{S_m} represents an incomplete measurement matrix. Reference [7] showed that, if m satisfies (1) and Φ_{S_m} satisfies the Restricted Isometry Property (explained below), having these m measurements is sufficient to recover the original sparse vector. In fact, it has been shown in [7] that X is the solution of the following linear program:

min ||X||_L1   (2)
subject to Y = Φ_{S_m} X,

where ||·||_L1 represents the L1 norm of a vector. Let us refer to the complexity of this optimization as CX_cs^noiseless(m, n).

We say Φ_{S_m} satisfies the restricted isometry property (RIP) iff there exists 0 < δ_k < 1 such that, for any k-sparse vector V ∈ R^n, we have

(1 − δ_k) ||V||²_L2 ≤ ||Φ_{S_m} V||²_L2 ≤ (1 + δ_k) ||V||²_L2.   (3)

Reference [7] showed that if

δ_k + δ_2k + δ_3k < 1,   (4)

then the optimization (2) can recover the original k-sparse signal. This condition leads to (1). Intuitively, if Φ_{S_m} satisfies the RIP condition, it preserves the Euclidean length of k-sparse signals. Hence, these sparse signals cannot be in the null space of Φ_{S_m}. Moreover, any subset of k columns of Φ_{S_m} is nearly orthogonal if Φ_{S_m} satisfies RIP. Most matrices known to satisfy RIP are random matrices. Reference [8] showed a connection between compressive sensing, n-widths, and the Johnson-Lindenstrauss lemma. Through this connection, [9] provided an elegant way to find δ_k by using a concentration theorem for random matrices and proposed some database-friendly (binary) matrices satisfying RIP.

Now, suppose our measurements are noisy (i.e., Y = Φ_{S_m} X + Z, where ||Z||_L2 ≤ ϵ). Then, reference [1] showed that if Φ_{S_m} satisfies RIP (3) and X is sufficiently sparse, the following optimization recovers the original signal up to the noise level:

min ||X||_L1   (5)
subject to ||Y − Φ_{S_m} X||_L2 ≤ ϵ.

Hence, if X* is a solution of (5), then ||X* − X||_L2 ≤ βϵ, where β is a constant. For this to hold, the original vector X should be sufficiently sparse. Specifically, [1] showed that if

δ_3k + 3 δ_4k < 2,   (6)

then the result holds. We refer to the complexity of this optimization as CX_cs^noisy(m, n).

Fig. 1. A depth-one multicast tree with n sources.

B. Distributed Compression Background

In this section, we briefly review some prior results in distributed compression. In Section III, we will use these results along with the compressive sensing results, making a bridge between these two subjects. Consider the network depicted in Figure 1. This is a depth-one tree network with n sources. Suppose each source i has a message, referred to by M_i, which it wishes to send to the receiver. To do this, each source i sends its message with a rate R_i (i.e., source i maps M_i to an index in {1, 2, ..., 2^{R_i}}). Let R = Σ_{i=1}^n R_i. According to Slepian-Wolf compression ([10]), one needs to have

R ≥ H(M_1, M_2, ..., M_n).   (7)
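As a brief aside, the gap that (7) captures can be made concrete with a minimal Python sketch (the crossover probability below is an illustrative value, not from the paper): for two binary sources related through a binary symmetric relation with crossover probability p, the joint entropy H(M_1, M_2) = 1 + H_b(p) bits can be far below the sum of individual entropies H(M_1) + H(M_2) = 2 bits.

```python
import numpy as np

def entropy(pmf):
    """Shannon entropy (bits) of a probability mass function."""
    pmf = np.asarray(pmf, dtype=float).ravel()
    pmf = pmf[pmf > 0]
    return float(-np.sum(pmf * np.log2(pmf)))

p = 0.1  # crossover probability between the two correlated binary sources (illustrative)
# Joint pmf of (M1, M2): M1 ~ Bernoulli(1/2), M2 equals M1 flipped with probability p
joint = np.array([[0.5 * (1 - p), 0.5 * p],
                  [0.5 * p, 0.5 * (1 - p)]])

H_joint = entropy(joint)                                          # = 1 + H_b(p)
H_sum = entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0))   # = 2 bits

print(f"Slepian-Wolf sum-rate bound (7): R >= H(M1,M2)    = {H_joint:.3f} bits")
print(f"Ignoring correlation:            R >= H(M1)+H(M2) = {H_sum:.3f} bits")
```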
By using linear network coding, reference [11] extended Slepian-Wolf compression to a general multicast problem. Note that the proposed decoders in [10] and [11] (a minimum entropy or a maximum probability decoder) have high complexity ([12]). For a depth-one tree network, references [13], [14], [15], [16], [17] and [18] provide some low-complexity decoding techniques for Slepian-Wolf compression. However, none of these methods can explicitly provide a trade-off between the rate and the decoding complexity. For a general multicast network, it has been shown in [19] that there is no separation between Slepian-Wolf compression and network coding. For this network, a low-complexity decoder for an optimal compression scheme has not been proposed yet.
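For concreteness, the noiseless recovery program (2) of Section II-A can be prototyped as a linear program. The sketch below is a minimal illustration, not the paper's implementation: it assumes NumPy and SciPy are available, and the dimensions, the constant ρ = 4, and the Bernoulli ±1 measurement matrix (in the spirit of [9]) are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k = 200, 5                                    # signal length and sparsity (illustrative)
m = int(np.ceil(4 * k * np.log(n / k)))          # m ~ rho * k * log(n/k), cf. (1), rho = 4 here

# k-sparse signal and an incomplete Bernoulli +/-1 measurement matrix (cf. [9])
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi_S = rng.choice([-1.0, 1.0], size=(m, n))
y = Phi_S @ x_true

# Basis pursuit (2): min ||x||_1 s.t. Phi_S x = y, written as an LP over z = [x, u]
c = np.concatenate([np.zeros(n), np.ones(n)])            # minimize sum of u_i
A_eq = np.hstack([Phi_S, np.zeros((m, n))])              # Phi_S x = y
A_ub = np.block([[np.eye(n), -np.eye(n)],                #  x - u <= 0
                 [-np.eye(n), -np.eye(n)]])              # -x - u <= 0
b_ub = np.zeros(2 * n)
bounds = [(None, None)] * n + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")

x_hat = res.x[:n]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With m on the order of k log(n/k) measurements, the LP typically returns the k-sparse vector exactly (up to solver tolerance), which is the behavior the RIP-based guarantees above describe.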

III. COMPRESSIVE SENSING AND DISTRIBUTED SOURCE CODING

In this part, we propose a distributed compression scheme for correlated sources providing a trade-off between the decoding complexity and the compression rate. References [10] and [11] propose optimal distributed compression schemes for correlated sources, for depth-one trees and general networks, respectively. However, their proposed compression schemes need to use a minimum entropy or a maximum probability decoder, which has high complexity ([12]). On the other hand, compressive sensing provides a low-complexity decoder, by using a convex optimization, to recover a sparse vector from incomplete and contaminated observations (e.g., [1], [2], [3] and [7]). Our aim is to use compressive sensing over networks to design compression schemes for correlated sources. Note that our proposed techniques can be applied to different network topologies, with different assumptions and constraints. Here, to illustrate the main ideas, we first consider a noiseless tree network with depth one and with correlated sources. Then, in Section III-B, we extend our techniques to a general multicast network.

A. A one-stage tree with n correlated sources

Consider the network shown in Figure 1, which is a one-stage noiseless multicast tree network with n sources. First, consider a case where sources are independent. Hence, the problem reduces to a classical source coding problem. Say each source is transmitting with a rate R_i, and denote R = Σ_{i=1}^n R_i. It is well known that the minimum required sum-rate for this network is

R ≥ Σ_{i=1}^n H(M_i),   (8)

where M_i is the message random variable of source i. Let us refer to the complexity of its decoder at the receiver as CX_indep(n), since sources are independent. Now, we consider a case where sources are correlated. We formulate the correlation of sources as follows:

Definition 1. Suppose µ_{i,t} represents a realization of the i-th source message random variable at time t (i.e., a realization of M_i at time t). Hence, the vector µ_t = {µ_{1,t}, ..., µ_{n,t}} is the sources' message vector at time t. We drop the subscript t when no confusion arises. Suppose sources are correlated so that there exists a transform matrix under which the sources' message vector is k-sparse (i.e., there exists an n × n transform matrix Φ and a k-sparse vector µ'_t such that µ_t = Φ µ'_t). Let S be the power set of {1, 2, ..., n}, and S_m denote members of S whose cardinality is m. Suppose Φ_{S_m}, defined as in Section II-A, satisfies RIP (3), where m satisfies (1) for the given n and k. We have µ_{t,S_m} = Φ_{S_m} µ'_t, where µ_{t,S_m} consists of the components of µ_t whose indices belong to S_m. We refer to these sources as k-sparsely correlated sources.

If we know the zero locations of µ'_t and if the transform matrix is non-singular, by having any subset containing k sources, the whole vectors µ_t and µ'_t can be computed. Hence, by the data processing inequality, we have

H(M_{i_1}, ..., M_{i_k}) + n H_b(α) = H(M_1, ..., M_n),   (9)

for 1 ≤ i_1 < i_2 < ... < i_k ≤ n, where M_{i_j} is the message of source i_j. Also, by [2], having m sources, where m satisfies (1) for the given k and n, allows us to recover the whole vector µ_t and therefore µ'_t. Hence,

H(M_{i_1}, ..., M_{i_m}) = H(M_1, ..., M_n).   (10)

Example 2. For an example of k-sparsely correlated sources, suppose each coordinate of µ'_t is zero with probability 1 − α; otherwise it is uniformly distributed over {1, 2, ..., 2^R}.
Suppose the entries of Φ are independent realizations of Bernoulli random variables:

Φ_{i,j} = +1 with prob. 1/2, −1 with prob. 1/2.   (11)

Then, as shown in [9], Φ_{S_m} satisfies RIP (3). Also, we have

H(M_1, ..., M_n) = n H_b(α) + k(R + 1) ≈ n H_b(α) + kR.   (12)

The entropy of each source random variable for large n can be computed approximately as follows:

H(M_i) ≈ R + (1/2) log(k).   (13)

Note that the individual entropies of k-sparsely correlated sources are roughly the same.

Now, we want to use compressive sensing to obtain a distributed source coding scheme providing a trade-off between the compression rate and the decoding complexity. We shall show that, if sources are k-sparsely correlated and we have m of them at the receiver at each time t, then, by using a linear program, all sources can be recovered at that time. Hence, instead of sending n correlated sources, one needs to transmit m of them, where m << n. We assume that the entropies of the sources are the same. Hence, each source transmits with probability γ = m/n. Therefore, by the law of large numbers, for large enough n, we have approximately m transmitting sources. For the case of non-equal source entropies, one can a priori choose the m sources with the lowest entropies as the transmitting sources.

In Table I, we compare four different schemes. The first scheme, which Theorem 3 is about, is called Sparse Distributed Compression with Independent Coding (SDCIC). The idea is to send m sources out of the n of them, assuming they are independent (i.e., their correlation is not used in coding). In this case, at the receiver, first we decode these transmitted messages and then we use a compressive sensing decoder to recover the whole set of sources.
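To make the rate bookkeeping concrete before Table I below, the following minimal sketch evaluates the approximations (12) and (13) as reconstructed above and compares the min-cut rates of the three extreme schemes; the parameter values, the natural logarithm in (1), and ρ = 2 are illustrative assumptions, not values from the paper.

```python
import numpy as np

def H_b(a):
    """Binary entropy function in bits."""
    return -a * np.log2(a) - (1 - a) * np.log2(1 - a)

n, k, R = 10000, 20, 8          # number of sources, sparsity, bits per non-zero coordinate (illustrative)
alpha = k / n                   # sparsity ratio
rho = 2                         # constant in (1), illustrative
m = int(np.ceil(rho * k * np.log(n / k)))    # number of active (transmitting) sources, cf. (1)

H_joint = n * H_b(alpha) + k * R             # joint entropy, approximation (12)
H_single = R + 0.5 * np.log2(k)              # per-source entropy, approximation (13)

print(f"m = {m} active sources out of n = {n}")
print(f"SDCIC min-cut rate         ~ m*H(M_i)       = {m * H_single:10.1f} bits")
print(f"Slepian-Wolf min-cut rate  ~ H(M_1,...,M_n) = {H_joint:10.1f} bits")
print(f"Correlation-ignorance rate ~ n*H(M_i)       = {n * H_single:10.1f} bits")
```

Consistent with Table I, the SDCIC rate sits between the Slepian-Wolf optimum and the correlation-ignorant sum, in exchange for a much simpler decoder.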

TABLE I
COMPARISON OF DISTRIBUTED COMPRESSION METHODS OF CORRELATED SOURCES

Compression Method            | Minimum Min-Cut Rate   | Decoding Complexity
SDCIC                         | Σ_{j=1}^m H(M_{i_j})   | CX_indep(m) + CX_cs^noiseless(m, n)
Slepian-Wolf (SW)             | H(M_1, ..., M_n)       | CX_sw(n)
Combination of SDC and SW     | H(M_1, ..., M_n)       | CX_sw(m) + CX_cs^noiseless(m, n)
Naive Correlation Ignorance   | Σ_{i=1}^n H(M_i)       | CX_indep(n)

Fig. 2. A general multicast network with n sources.

The decoding complexity is CX_indep(m) + CX_cs^noiseless(m, n). The required min-cut rate between each receiver and the sources is Σ_{j=1}^m H(M_{i_j}).

In the second scheme, referred to in Table I as Slepian-Wolf (SW) coding, one performs an optimal source coding on the correlated sources. The required min-cut rate between the sources and the receiver is H(M_1, ..., M_n), which is less than Σ_{j=1}^m H(M_{i_j}). However, at the receiver, we need to have a Slepian-Wolf decoder with complexity CX_sw(n).

One can have a combination of sparse distributed compression and Slepian-Wolf compression. To do this, instead of n sources, we transmit m of them by using a Slepian-Wolf coding. The required min-cut rate would be H(M_{i_1}, ..., M_{i_m}), which by (10) satisfies H(M_1, ..., M_n) = H(M_{i_1}, ..., M_{i_m}) ≤ Σ_{j=1}^m H(M_{i_j}). At the receiver, first we use a Slepian-Wolf decoder with complexity CX_sw(m) to decode the transmitted sources' information. Then, we use a compressive sensing decoder to recover the whole set of sources.

The fourth method is a naive way of transmitting n correlated sources so that we have an easy decoder at the receiver. We call this scheme a Naive Correlation Ignorance method. In this method, we simply ignore the correlation in the coding scheme. In this naive scheme, the required min-cut rate is Σ_{i=1}^n H(M_i), with a decoding complexity equal to CX_indep(n). Note that, in sparse distributed compression with independent coding, the required min-cut rate is much less than in this scheme (Σ_{j=1}^m H(M_{i_j}) << Σ_{i=1}^n H(M_i)).

The following theorem is about sparse distributed compression with independent coding:

Theorem 3. For a depth-one tree network with n k-sparsely correlated sources, the required min-cut rate between the receiver and the sources in sparse distributed compression with independent coding (SDCIC) is

R ≥ Σ_{j=1}^m H(M_{i_j}),   (14)

where m is the smallest number satisfying (1) for the given n and k, and i_j is the index of the j-th active source. Also, the decoding complexity of the SDCIC method is CX_indep(m) + CX_cs^noiseless(m, n).

Proof: For a given n (the number of sources) and k (the sparsity factor), we choose m to satisfy (1). Now, suppose each source is transmitting its block with probability γ = m/n. Hence, by the law of large numbers, the probability that we have approximately m transmitting source blocks goes to one for sufficiently large n. Without loss of generality, say sources 1, ..., m are transmitting (i.e., active sources) and the other sources are idle. If R_i ≥ H(M_i) for active sources and zero for idle sources, the active sources can transmit their messages to the receiver and the required min-cut rate between the receiver and the sources would be Σ_{i=1}^m H(M_i). Let S_m indicate the active sources' indices. First, at the receiver, we decode these active sources. The complexity of this decoding part is CX_indep(m) because we did not use their correlation in the coding scheme. Hence, we have µ_{t,S_m}, the active sources' messages at time t, at the receiver (in this case, µ_{t,S_m} = (µ_{1,t}, ..., µ_{m,t})). Then, by having µ_{t,S_m} and using the sparsity of µ'_t, the following optimization can recover the whole set of sources:

min ||µ'_t||_L1   (15)
subject to µ_{t,S_m} = Φ_{S_m} µ'_t.
The overall decoding complexity is CX_indep(m) + CX_cs^noiseless(m, n).

B. A General Multicast Network with Correlated Sources

In this section, we extend the results of Section III-A to a general noiseless multicast network. Note that, for a depth-one tree network, references [13], [14], [15], [16], [17] and [18] provide some low-complexity decoding techniques for Slepian-Wolf compression. However, none of these methods can explicitly provide a trade-off between the rate and the decoding complexity. For a general multicast network, it has been shown in [19] that there is no separation between Slepian-Wolf compression and network coding. For this network, a low-complexity decoder for an optimal compression scheme has not been proposed yet. There are few practical approaches, and they rely on small topologies [20]. In this section, by using compressive sensing, we propose a coding scheme which provides a trade-off between the compression rate and the decoding complexity.
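Before turning to the multicast case, the decoding chain of Theorem 3 can be prototyped end to end: generate k-sparsely correlated sources as in Definition 1, observe the m active sources, and recover all n of them via the program (15). The sketch below is a minimal illustration; the dimensions, the Bernoulli ±1 transform, and the use of SciPy's LP solver are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, b):
    """Solve min ||v||_1 s.t. A v = b via an LP over [v, u] (cf. (2)/(15))."""
    mm, nn = A.shape
    c = np.concatenate([np.zeros(nn), np.ones(nn)])
    A_eq = np.hstack([A, np.zeros((mm, nn))])
    A_ub = np.block([[np.eye(nn), -np.eye(nn)], [-np.eye(nn), -np.eye(nn)]])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * nn), A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * nn + [(0, None)] * nn, method="highs")
    return res.x[:nn]

rng = np.random.default_rng(1)
n, k = 120, 4                                   # number of sources and sparsity level (illustrative)
m = int(np.ceil(3 * k * np.log(n / k)))          # number of active sources, cf. (1)

Phi = rng.choice([-1.0, 1.0], size=(n, n))       # n x n correlation transform, Bernoulli +/-1 as in (11)
mu_sparse = np.zeros(n)                          # k-sparse vector mu'_t of Definition 1
mu_sparse[rng.choice(n, k, replace=False)] = rng.integers(1, 2 ** 8, size=k).astype(float)
mu = Phi @ mu_sparse                             # the n correlated source values mu_t

active = np.arange(m)                            # indices S_m of the active sources (first m, WLOG)
mu_hat_sparse = l1_recover(Phi[active, :], mu[active])   # optimization (15)
mu_hat = Phi @ mu_hat_sparse                     # reconstruct all n source values

print("max reconstruction error over the n sources:", np.max(np.abs(mu_hat - mu)))
```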

Consider the multicast network shown in Figure 2, which has n sources and T receivers. If sources are independent, reference [21] showed that the minimum required min-cut rate between each receiver and the sources is as in (8). This rate can be achieved by using linear network coding over the network. Reference [11] showed that random linear network coding can perform arbitrarily closely to this rate bound. Say CX_indep^nc(n) is the complexity of its decoder. Now, consider a case when sources are correlated. Reference [11] extended the Slepian-Wolf compression result ([10]) to a multicast network by using network coding. The proposed decoder is a minimum entropy or a maximum probability decoder whose complexity is high ([12]). We refer to its complexity by CX_sw^nc(n).

In this section, we consider sources to be k-sparsely correlated. We want to use compressive sensing along with network coding to propose a distributed compression scheme providing a trade-off between the compression rate and the decoding complexity. In the following, we compare four distributed compression methods for a general multicast network.

In the first scheme, we use sparse distributed compression with independent coding. With high probability, we have approximately m active sources. Then, we perform network coding on these m active sources over the network, assuming they are independent. At the receiver, first we decode these active sources and then, by a compressive sensing decoder, we recover the whole set of sources. The decoding complexity is CX_indep^nc(m) + CX_cs^noiseless(m, n). The required min-cut rate between each receiver and the sources is Σ_{j=1}^m H(M_{i_j}).

In the second scheme, we use an extended version of Slepian-Wolf compression for a multicast network ([22]). Reference [22] considers a vector linear network code that operates on blocks of bits. The required min-cut rate between the sources and the receiver is H(M_1, ..., M_n). At the receiver, one needs to have a minimum entropy or a maximum probability decoder with complexity CX_sw^nc(n).

A combination of sparse distributed compression and Slepian-Wolf compression can provide a trade-off between the compression rate and the decoding complexity. For example, instead of n sources, one can transmit m of them by using a Slepian-Wolf coding for multicast networks. The required min-cut rate would be H(M_{i_1}, ..., M_{i_m}). At the receiver, first we use a Slepian-Wolf decoder with complexity CX_sw^nc(m) to decode the active sources. Then, we use a compressive sensing decoder to recover the whole set of sources.

In a naive correlation ignorance method, we simply ignore the correlation in the coding scheme. In this naive scheme, the required min-cut rate is Σ_{i=1}^n H(M_i), with a decoding complexity equal to CX_indep^nc(n).

The following theorem is about sparse distributed compression with independent coding for a multicast network:

Fig. 3. A modularized decoder for (a) compressive sensing with source coding, (b) compressive sensing with channel coding.

Theorem 4. For a general multicast network with n k-sparsely correlated sources, the required min-cut rate between each receiver and the sources in sparse distributed compression with independent coding is

R ≥ Σ_{j=1}^m H(M_{i_j}),   (16)

where m is the smallest number satisfying (1) for the given n and k. Also, the decoding complexity is CX_indep^nc(m) + CX_cs^noiseless(m, n).

Proof: The proof is similar to that of Theorem 3. The only difference is that, here, one needs to perform linear network coding on the active sources over the network, without using the correlation among them.
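As an aside, the network-coding step just invoked (mixing the m active sources with random linear combinations, in the spirit of [11]) can be sketched over a small prime field; the field size, the topology-free mixing, and the solver below are illustrative assumptions rather than the paper's construction.

```python
import numpy as np

P = 257  # a small prime field GF(257); illustrative choice

def solve_mod_p(A, b, p=P):
    """Solve A x = b over GF(p) by Gaussian elimination (A assumed invertible mod p)."""
    A = A.astype(np.int64) % p
    b = b.astype(np.int64) % p
    n = len(b)
    M = np.concatenate([A, b.reshape(-1, 1)], axis=1)
    for col in range(n):
        piv = col + int(np.nonzero(M[col:, col])[0][0])     # find a non-zero pivot
        M[[col, piv]] = M[[piv, col]]
        inv = pow(int(M[col, col]), p - 2, p)                # modular inverse via Fermat's little theorem
        M[col] = (M[col] * inv) % p
        for r in range(n):
            if r != col and M[r, col]:
                M[r] = (M[r] - M[r, col] * M[col]) % p
    return M[:, -1]

rng = np.random.default_rng(2)
m = 5                                                # number of active sources (illustrative)
src = rng.integers(0, P, size=m)                     # one symbol per active source

# Intermediate nodes forward random linear combinations of the source symbols.
coeff = rng.integers(0, P, size=(m, m))              # coding coefficients seen by one receiver
recv = (coeff @ src) % P                             # the m coded symbols reaching the receiver

decoded = solve_mod_p(coeff, recv)                   # receiver inverts the (w.h.p. invertible) system
print("decoded == sources:", np.array_equal(decoded % P, src % P))
```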
At the receiver, first, these active source blocks are decoded (the decoding complexity of this part is CX_indep^nc(m)). Then, a compressive sensing decoder (15) is used to recover the whole set of sources.

Remark: In this section, we explained how compressive sensing can be used with distributed source coding. At the receiver side, first we decode the active sources by a network decoding technique, and then we use a compressive sensing decoder to recover the whole set of sources. This high-level modularized decoding scheme of sparse distributed compression is depicted in Figure 3-a. On the other hand, when we use compressive sensing for channel coding (which will be explained in detail in Section IV), we switch these two decoding modules (as shown in Figure 3-b). In other words, first we use a compressive sensing decoder on a noisy signal to obtain a noisy estimate of the original signal, and then we use a channel decoder to find the original message from this noisy estimate.

IV. COMPRESSIVE SENSING AND CHANNEL CODING

In this section, we make a bridge between compressive sensing, which is more of a signal processing technique, and channel coding. We consider a high-SNR Gaussian point-to-point channel depicted in Figure 4. We propose a coding scheme that combines compressive sensing and random channel coding with a modularized decoder to provide a trade-off between the capacity loss and the decoding complexity (i.e., the higher the capacity loss, the lower the decoding complexity).

Fig. 4. A point-to-point Gaussian channel: Y = W_i + Z.

We call this scheme Sparse Channel Coding. We intentionally add some correlation to the transmitted signals to decrease the decoding complexity. This is an example of how compressive sensing can be used with channel coding and could be extended to other types of channels.

Consider a point-to-point channel with additive Gaussian noise with noise power N and transmission power constraint P, shown in Figure 4. Suppose we are in a high-SNR regime (i.e., P/N is large). The capacity of this channel is

C ≈ (1/2) log(P/N).   (17)

An encoding and a decoding scheme for this channel can be found in [12]. As explained in [12], at the receiver, one needs to use an m-dimensional maximum likelihood decoder, where m, the code length, is arbitrarily large. We refer to the complexity of such a decoder as CX_ml(m). Our aim is to design a coding scheme that offers a trade-off between the rate and the decoding complexity. Suppose m is arbitrarily large, and choose n and k to satisfy (6).

Definition 5. A (2^{mR}, m) sparse channel code for a point-to-point Gaussian channel with power constraint P satisfies the following:

Encoding process: message i from the set {1, 2, ..., 2^{mR}} is assigned to an m-length vector W_i satisfying the power constraint; that is, for each codeword, we have

Σ_{j=1}^m W_{ij}² ≤ mP,   (18)

where W_{ij} is the j-th coordinate of codeword W_i.

Decoding process: the receiver receives Y, a noisy version of the transmitted codeword W_i (i.e., Y = W_i + Z, where Z is a Gaussian noise vector with i.i.d. coordinates with power N). A decoding function g maps Y to the set {1, 2, ..., 2^{mR}}. The decoding complexity is CX_ml(k) + CX_cs^noisy(m, n).

A rate R is achievable if P_e → 0 as m → ∞, where

P_e = 2^{−mR} Σ_{i=1}^{2^{mR}} Pr(g(Y) ≠ i | M = i),   (19)

and M represents the transmitted message.

Theorem 6. For the high-SNR Gaussian point-to-point channel described in Definition 5, the following rate is achievable, while the decoding complexity is CX_ml(k) + CX_cs^noisy(m, n):

R = (k/m) C + (n/m) H_b(α) + (k/(2m)) log(1/(β(1 + δ_k))),   (20)

where C is the channel capacity (17), α, β and δ_k are parameters defined in Section II-A, and H_b(·) is the binary entropy function.

Before presenting the proof, let us make some remarks. If we have (1) (i.e., m/n = ρ α log(1/α)), the achievable rate can be approximated as follows:

R ≈ (1/log(1/α)) C + (1/(α log(1/α))) H_b(α),   (21)

where α = k/n is the sparsity ratio. The first term of (21) shows that the capacity loss factor is a log function of the sparsity ratio. The term (1/(α log(1/α))) H_b(α) is the rate gain that we obtain by using compressive sensing in our scheme. Note that, since the SNR is sufficiently high, the overall rate remains below the capacity. The complexity term CX_ml(k) is an exponential function, while CX_cs^noisy(m, n) is polynomial.

Proof: We use random coding and compressive sensing arguments to propose a concatenated channel coding scheme satisfying (20). First, we explain the encoding part. Each message i is mapped randomly to a k-sparse vector X_i of length n so that its non-zero coordinates are i.i.d. drawn from N(0, mP/(k(1 + δ_k))). Here, n is chosen to satisfy (6) for the given k and m. To obtain W_i, we multiply X_i by an m × n matrix Φ_{S_m} satisfying the RIP condition (3) (i.e., W_i = Φ_{S_m} X_i). We send W_i through the channel. In fact, this is a concatenated channel coding scheme [23]. The outer layer of this code is the sparsity pattern of X_i. The inner layer of this code is based on random coding around each sparsity pattern.
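A minimal sketch of this encoding and of the channel step (message → random k-sparse support → i.i.d. Gaussian values → W_i = Φ_{S_m} X_i → Y = W_i + Z) is given below. The dimensions, the value of δ_k, and the Gaussian Φ_{S_m} standing in for a certified RIP matrix are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 64, 256, 8            # code length, sparse-vector length, sparsity (illustrative)
P, N = 100.0, 1.0               # power constraint and noise power (high SNR, illustrative)
delta_k = 0.2                   # assumed RIP constant of Phi_S (illustrative)

# Incomplete measurement matrix Phi_S: i.i.d. Gaussian entries scaled by 1/sqrt(m), so that it is
# approximately norm-preserving; this stands in for a matrix with a certified RIP constant.
Phi_S = rng.standard_normal((m, n)) / np.sqrt(m)

def encode(msg_rng):
    """Outer layer: a random k-sparse support. Inner layer: Gaussian values, cf. the proof of Theorem 6."""
    X = np.zeros(n)
    support = msg_rng.choice(n, k, replace=False)
    X[support] = msg_rng.normal(0.0, np.sqrt(m * P / (k * (1 + delta_k))), size=k)
    return X, support

X1, support1 = encode(rng)
W1 = Phi_S @ X1                                  # codeword W_i = Phi_S X_i
Z = rng.normal(0.0, np.sqrt(N), size=m)          # AWGN with per-coordinate power N
Y = W1 + Z                                       # received vector

print("empirical codeword power (1/m)*sum W^2 =", np.mean(W1 ** 2), " (constraint: <=", P, ")")
print("received SNR (dB) ~", 10 * np.log10(np.mean(W1 ** 2) / N))
```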
To satisfy the transmission power constraint P, we generate each non-zero coordinate of X_i from N(0, mP/(k(1 + δ_k))). Note that the encoding matrix Φ_{S_m} is known to both the encoder and the decoder. At the receiver, we have a noisy version of the transmitted signal. Suppose message i has been transmitted. Hence, Y = W_i + Z = Φ_{S_m} X_i + Z.

The decoding part is as follows. Since X_i is k-sparse, first we use a compressive sensing decoder to find X̂_i such that ||X̂_i − X_i||²_L2 ≤ βmN, where β is the parameter of (5):

min ||X||_L1   (22)
subject to ||Y − Φ_{S_m} X||²_L2 ≤ mN.

We assume that the complexity of this convex optimization is smaller than that of a maximum likelihood decoder. Since we are in a high-SNR regime, by having X̂_i, we can find the sparsity pattern of X_i. This gives us the outer layer code of message i. The higher the outer layer code rate, the higher the required SNR. We shall develop this argument in more detail later (26).
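Stepping back from the proof for a moment, the trade-off captured by the approximation (21) can be tabulated numerically, separating the capacity-loss factor 1/log(1/α) from the rate-gain term H_b(α)/(α log(1/α)). The capacity value, the base-2 logarithms, and the sparsity ratios below are illustrative assumptions.

```python
import numpy as np

def H_b(a):
    """Binary entropy in bits."""
    return -a * np.log2(a) - (1 - a) * np.log2(1 - a)

C = 6.0  # channel capacity in bits per channel use; a high-SNR example value (illustrative)

print(" alpha   capacity-loss term   rate-gain term   total R per (21)")
for alpha in [0.001, 0.005, 0.01, 0.05, 0.1]:
    loss = C / np.log2(1.0 / alpha)                      # (1/log(1/alpha)) * C
    gain = H_b(alpha) / (alpha * np.log2(1.0 / alpha))   # (1/(alpha*log(1/alpha))) * H_b(alpha)
    print(f"{alpha:6.3f}   {loss:18.3f}   {gain:14.3f}   {loss + gain:12.3f}")
```

As the remarks after Theorem 6 suggest, the rate-gain term stays close to one bit per channel use, while the capacity term is scaled down by the factor 1/log(1/α).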

Having the sparsity pattern of X̂_i, we use a maximum likelihood decoder in a k-dimensional space (i.e., over the non-zero coordinates of X_i) to find the non-zero coordinates of X_i (the inner layer code). The complexity of this decoder is denoted by CX_ml(k).

Before presenting the error probability analysis, let us discuss the different terms of (20). Since in our coding scheme, for each message i, we send a k-sparse signal by transmitting an m-length vector, we have the fraction k/m before the capacity term C. The capacity term C in (20) comes from the inner layer coding scheme. The term (n/m) H_b(α) comes from the outer layer coding scheme. Note that, since α is a small number, the ratio H_b(α)/(α log(1/α)) is close to one. Also, the ratio n/m depends on the outer layer code rate. Here, we assume the SNR is sufficiently high so that we can use all possible outer layer codes. The term (k/(2m)) log(1/(β(1 + δ_k))) is due to the power constraint and the RIP condition. Note that, if one performs a time-sharing based channel coding, a rate (k/m) C is achievable with a decoding complexity CX_ml(k). By using compressive sensing, we obtain additional rate terms because of the outer layer coding, with approximately the same complexity.

We proceed with the error probability analysis. Define Ξ_i as the sparsity pattern of X_i (i.e., Ξ_{ij} = 0 if X_{ij} = 0; otherwise, Ξ_{ij} = 1). Also, define Ξ̂ as the decoded sparsity pattern. Without loss of generality, assume that message 1 was transmitted. Thus, Y = Φ_{S_m} X_1 + Z. Define the following events:

E_0 = { Σ_{j=1}^m W_{1j}² > mP }
E_i = { (1/n) ||X̂ − X_i||²_L2 ≤ βmN/n }
E_p = { Ξ̂ ≠ Ξ_1 }.   (23)

Hence, an error occurs when E_0 occurs (i.e., the power constraint is violated), or E_p occurs (i.e., the outer layer code is wrongly decoded), or E_1^c occurs (i.e., the underlying sparse signal of the transmitted message X_1 and its decoded noisy version X̂ are at a distance greater than the noise level), or one of the E_i occurs for i ≠ 1. Say M is the transmitted message (in this case M = 1) and M̂ is the decoded message. Let E denote the event M ≠ M̂. Hence,

Pr(E | M = 1) = Pr(E) = Pr(E_0 ∪ E_p ∪ E_1^c ∪ (∪_{j=2}^{2^{mR}} E_j))   (24)
              ≤ Pr(E_0) + Pr(E_p) + Pr(E_1^c) + Σ_{j=2}^{2^{mR}} Pr(E_j),

by the union bound. We bound these probabilities term by term as follows.

By the law of large numbers, we have Pr(||X_1||²_L2 > mP/(1 + δ_k) + ϵ) → 0 as n → ∞. By using the RIP condition (3), with probability one we have

||Φ_{S_m} X_1||²_L2 ≤ (1 + δ_k) ||X_1||²_L2 ≤ mP.   (25)

Hence, Pr(E_0) → 0 as n → ∞.

After using the compressive sensing decoder mentioned in (22), we have X̂ such that (1/n) ||X̂ − X_1||²_L2 ≤ βmN/n. Suppose this error is uniformly distributed over the different coordinates. We use a threshold comparison to determine the sparsity pattern Ξ̂. The probability of error in determining Ξ̂ determines how high the SNR should be. Intuitively, if we use all possible outer layer codes (all possible sparsity patterns), we should not make any error in determining the sparsity pattern of each coordinate j (i.e., Ξ̂_j). Hence, we need a sufficiently high SNR for a given n. On the other hand, if we decrease the outer layer code rate, the required SNR would be lower than in this case. Here, to illustrate how the required SNR can be computed, we assume that we use all possible sparsity patterns. Say E_pj is the event of making an error in determining whether the j-th coordinate is zero or not. We say Ξ̂_j = 0 if |X̂_j| < τ; otherwise, Ξ̂_j = 1. Hence, for a given threshold τ, we have

Pr(E_pj | Ξ_{1j} = 0) = ϕ(−τ / √(βmN/n))
Pr(E_pj | Ξ_{1j} = 1) = 2 ϕ(−τ / √(mP/(k(1 + δ_k)) + βmN/n)),

where ϕ(·) is the cumulative distribution function of a normal random variable.
Hence, P r(e p ) = P r(ep c ) (26) = ( τ ) n( α) ϕ( ) β n N ( τ 2ϕ( ) ) nα P + β n N Note that, one can choose τ and P N large enough to have P r(e p ) arbitrarily sall. By [2], P r(e) c is zero. At the last step, we need to bound P r(e j ) for j. We have, P r(e j ) = P r(ξ j = Ξ )P r(e j Ξ j = Ξ ) + P r(ξ j Ξ )P r(e j Ξ j Ξ ) = α k ( α) n k P r(e j Ξ j = Ξ ).(27) Note that, P r(e j Ξ j Ξ ) goes to zero in a high SNR regie. By bounding the noise power of non-zero coordinates by βn (the whole noise power) and using rando coding arguent in a k diensional space, we 35

Pr(E_j | Ξ_j = Ξ_1) ≲ (1 + P/(β(1 + δ_k) N))^{−k/2},   (28)

where ≲ indicates that the left-hand side is less than or equal to the right-hand side in the exponential rate. Therefore, by (27) and (28), and after some manipulations, we have

Pr(E_j) ≲ 2^{−n(H_b(α) + (α/2) log(P/(β(1 + δ_k) N)))},   (29)

for j = 2, ..., 2^{mR}. Thus, in a sufficiently high SNR regime, by (24) and (29), we have

P_e = Pr(E) = Pr(E | M = 1)   (30)
    ≲ 2^{mR} 2^{−n(H_b(α) + (α/2) log(P/(β(1 + δ_k) N)))}
    = 2^{m(R − (n/m) H_b(α) − (k/m) C − (k/(2m)) log(1/(β(1 + δ_k))))}.

For any given ϵ > 0, we can choose n large enough so that R = (k/m) C + (n/m) H_b(α) + (k/(2m)) log(1/(β(1 + δ_k))) − ϵ is achievable.

V. CONCLUSIONS

In this paper, we demonstrated some applications of compressive sensing over networks. We made a connection between compressive sensing and traditional information theoretic techniques in source coding and channel coding in order to provide mechanisms to explicitly trade off between the decoding complexity and the rate. Although optimal decoders to recover the original signal compressed by source coding have high complexity, the compressive sensing decoder is a linear or convex optimization.

First, we investigated applications of compressive sensing to distributed compression of correlated sources. For a depth-one tree network, references [13]-[18] provide some low-complexity decoding techniques for Slepian-Wolf compression. However, none of these methods can explicitly provide a trade-off between the rate and the decoding complexity. For a general multicast network, reference [11] extended Slepian-Wolf compression to a multicast problem. It is shown in [19] that there is no separation between Slepian-Wolf compression and network coding. For this network, a low-complexity decoder for an optimal compression scheme has not been proposed yet. Here, by using compressive sensing, we proposed a compression scheme for a family of correlated sources with a modularized decoder, providing a trade-off between the compression rate and the decoding complexity. We called this scheme Sparse Distributed Compression. We used this compression scheme for a general multicast network with correlated sources. Here, we first decoded some of the sources by a network decoding technique and then used a compressive sensing decoder to obtain the whole set of sources.

Next, we investigated applications of compressive sensing to channel coding. We proposed a coding scheme that combines compressive sensing and random channel coding for a high-SNR point-to-point Gaussian channel. We called this scheme Sparse Channel Coding. Our coding scheme provides a modularized decoder offering a trade-off between the capacity loss and the decoding complexity. The idea is to intentionally add some correlation to the transmitted signals in order to decrease the decoding complexity. At the decoder side, first we used a compressive sensing decoder to get an estimate of the original sparse signal, and then we used a channel coding decoder in the subspace of non-zero coordinates.

REFERENCES

[1] E. Candès, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207-1223, 2006.
[2] E. Candès and J. Romberg, "Sparsity and incoherence in compressive sampling," Inverse Problems, vol. 23, p. 969, 2007.
[3] D. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
[4] M. Akçakaya and V. Tarokh, "Shannon theoretic limits on noisy compressive sampling," arXiv preprint, 2007.
[5] S. Sarvotham, D. Baron, and R. Baraniuk, "Measurements vs. bits: Compressed sensing meets information theory," in Proceedings of the 44th Allerton Conference on Communication, Control, and Computing, 2006.
[6] Y. Jin, Y.-H. Kim, and B. D. Rao, "Performance tradeoffs for exact support recovery of sparse signals," in 2010 IEEE International Symposium on Information Theory (ISIT 2010), Austin, TX, USA, 2010.
[7] E. Candès and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203-4215, 2005.
[8] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, vol. 28, no. 3, pp. 253-263, 2008.
[9] D. Achlioptas, "Database-friendly random projections: Johnson-Lindenstrauss with binary coins," Journal of Computer and System Sciences, vol. 66, no. 4, pp. 671-687, 2003.
[10] D. Slepian and J. K. Wolf, "Noiseless coding of correlated information sources," IEEE Trans. Inf. Theory, vol. 19, no. 4, pp. 471-480, Jul. 1973.
[11] T. Ho, M. Médard, R. Koetter, D. R. Karger, M. Effros, J. Shi, and B. Leong, "A random linear network coding approach to multicast," IEEE Trans. Inf. Theory, vol. 52, no. 10, pp. 4413-4430, Oct. 2006.
[12] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems. Academic Press, New York, 1981.
[13] S. Pradhan and K. Ramchandran, "Distributed source coding using syndromes (DISCUS): Design and construction," in Proceedings of the Data Compression Conference (DCC), IEEE Computer Society, 1999, p. 158.
[14] T. Coleman, A. Lee, M. Médard, and M. Effros, "Low-complexity approaches to Slepian-Wolf near-lossless distributed data compression," IEEE Transactions on Information Theory, vol. 52, no. 8, 2006.
[15] R. Gallager, "Low-density parity-check codes," IRE Transactions on Information Theory, vol. 8, no. 1, pp. 21-28, 1962.
[16] T. Tian, J. Garcia-Frias, and W. Zhong, "Compression of correlated sources using LDPC codes," in Data Compression Conference (DCC 2003), Proceedings, 2003.
[17] C. Berrou, A. Glavieux, P. Thitimajshima et al., "Near Shannon limit error-correcting coding and decoding," in Proceedings of ICC, vol. 93, 1993, pp. 1064-1070.
[18] J. Garcia-Frias, "Compression of correlated binary sources using turbo codes," IEEE Communications Letters, vol. 5, no. 10, pp. 417-419, 2001.
[19] A. Ramamoorthy, K. Jain, P. Chou, and M. Effros, "Separating distributed source coding from network coding," IEEE Transactions on Information Theory, vol. 52, no. 6, pp. 2785-2795, 2006.
[20] G. Maierbacher, J. Barros, and M. Médard, "Practical source-network decoding," in Wireless Communication Systems (ISWCS 2009), 6th International Symposium on, IEEE, 2009.
[21] R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, "Network information flow," IEEE Trans. Inf. Theory, vol. 46, pp. 1204-1216, 2000.
[22] R. Koetter and M. Médard, "An algebraic approach to network coding," IEEE/ACM Trans. on Networking, vol. 11, no. 5, pp. 782-795, Oct. 2003.
[23] G. Forney, Concatenated Codes. Citeseer, 1966.


Multi-Scale/Multi-Resolution: Wavelet Transform Multi-Scale/Multi-Resolution: Wavelet Transfor Proble with Fourier Fourier analysis -- breaks down a signal into constituent sinusoids of different frequencies. A serious drawback in transforing to the

More information

Topic 5a Introduction to Curve Fitting & Linear Regression

Topic 5a Introduction to Curve Fitting & Linear Regression /7/08 Course Instructor Dr. Rayond C. Rup Oice: A 337 Phone: (95) 747 6958 E ail: rcrup@utep.edu opic 5a Introduction to Curve Fitting & Linear Regression EE 4386/530 Coputational ethods in EE Outline

More information

arxiv: v5 [cs.it] 16 Mar 2012

arxiv: v5 [cs.it] 16 Mar 2012 ONE-BIT COMPRESSED SENSING BY LINEAR PROGRAMMING YANIV PLAN AND ROMAN VERSHYNIN arxiv:09.499v5 [cs.it] 6 Mar 0 Abstract. We give the first coputationally tractable and alost optial solution to the proble

More information

Impact of Imperfect Channel State Information on ARQ Schemes over Rayleigh Fading Channels

Impact of Imperfect Channel State Information on ARQ Schemes over Rayleigh Fading Channels This full text paper was peer reviewed at the direction of IEEE Counications Society subject atter experts for publication in the IEEE ICC 9 proceedings Ipact of Iperfect Channel State Inforation on ARQ

More information

Computational and Statistical Learning Theory

Computational and Statistical Learning Theory Coputational and Statistical Learning Theory Proble sets 5 and 6 Due: Noveber th Please send your solutions to learning-subissions@ttic.edu Notations/Definitions Recall the definition of saple based Radeacher

More information

Content-Type Coding. arxiv: v2 [cs.it] 3 Jun Linqi Song UCLA

Content-Type Coding. arxiv: v2 [cs.it] 3 Jun Linqi Song UCLA Content-Type Coding Linqi Song UCLA Eail: songlinqi@ucla.edu Christina Fragouli UCLA and EPFL Eail: christina.fragouli@ucla.edu arxiv:1505.03561v [cs.it] 3 Jun 015 Abstract This paper is otivated by the

More information

TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES

TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES S. E. Ahed, R. J. Tokins and A. I. Volodin Departent of Matheatics and Statistics University of Regina Regina,

More information

Error Exponents in Asynchronous Communication

Error Exponents in Asynchronous Communication IEEE International Syposiu on Inforation Theory Proceedings Error Exponents in Asynchronous Counication Da Wang EECS Dept., MIT Cabridge, MA, USA Eail: dawang@it.edu Venkat Chandar Lincoln Laboratory,

More information

Using EM To Estimate A Probablity Density With A Mixture Of Gaussians

Using EM To Estimate A Probablity Density With A Mixture Of Gaussians Using EM To Estiate A Probablity Density With A Mixture Of Gaussians Aaron A. D Souza adsouza@usc.edu Introduction The proble we are trying to address in this note is siple. Given a set of data points

More information

Recovering Block-structured Activations Using Compressive Measurements

Recovering Block-structured Activations Using Compressive Measurements Recovering Block-structured Activations Using Copressive Measureents Sivaraan Balakrishnan, Mladen Kolar, Alessandro Rinaldo, and Aarti Singh Abstract We consider the probles of detection and support recovery

More information

Lecture 9 November 23, 2015

Lecture 9 November 23, 2015 CSC244: Discrepancy Theory in Coputer Science Fall 25 Aleksandar Nikolov Lecture 9 Noveber 23, 25 Scribe: Nick Spooner Properties of γ 2 Recall that γ 2 (A) is defined for A R n as follows: γ 2 (A) = in{r(u)

More information

A remark on a success rate model for DPA and CPA

A remark on a success rate model for DPA and CPA A reark on a success rate odel for DPA and CPA A. Wieers, BSI Version 0.5 andreas.wieers@bsi.bund.de Septeber 5, 2018 Abstract The success rate is the ost coon evaluation etric for easuring the perforance

More information

Exact tensor completion with sum-of-squares

Exact tensor completion with sum-of-squares Proceedings of Machine Learning Research vol 65:1 54, 2017 30th Annual Conference on Learning Theory Exact tensor copletion with su-of-squares Aaron Potechin Institute for Advanced Study, Princeton David

More information

Optimal Resource Allocation in Multicast Device-to-Device Communications Underlaying LTE Networks

Optimal Resource Allocation in Multicast Device-to-Device Communications Underlaying LTE Networks 1 Optial Resource Allocation in Multicast Device-to-Device Counications Underlaying LTE Networks Hadi Meshgi 1, Dongei Zhao 1 and Rong Zheng 2 1 Departent of Electrical and Coputer Engineering, McMaster

More information

Stochastic Subgradient Methods

Stochastic Subgradient Methods Stochastic Subgradient Methods Lingjie Weng Yutian Chen Bren School of Inforation and Coputer Science University of California, Irvine {wengl, yutianc}@ics.uci.edu Abstract Stochastic subgradient ethods

More information

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer

More information

Reed-Muller Codes. m r inductive definition. Later, we shall explain how to construct Reed-Muller codes using the Kronecker product.

Reed-Muller Codes. m r inductive definition. Later, we shall explain how to construct Reed-Muller codes using the Kronecker product. Coding Theory Massoud Malek Reed-Muller Codes An iportant class of linear block codes rich in algebraic and geoetric structure is the class of Reed-Muller codes, which includes the Extended Haing code.

More information

arxiv: v1 [cs.ds] 29 Jan 2012

arxiv: v1 [cs.ds] 29 Jan 2012 A parallel approxiation algorith for ixed packing covering seidefinite progras arxiv:1201.6090v1 [cs.ds] 29 Jan 2012 Rahul Jain National U. Singapore January 28, 2012 Abstract Penghui Yao National U. Singapore

More information

3.3 Variational Characterization of Singular Values

3.3 Variational Characterization of Singular Values 3.3. Variational Characterization of Singular Values 61 3.3 Variational Characterization of Singular Values Since the singular values are square roots of the eigenvalues of the Heritian atrices A A and

More information

An Algorithm for Quantization of Discrete Probability Distributions

An Algorithm for Quantization of Discrete Probability Distributions An Algorith for Quantization of Discrete Probability Distributions Yuriy A. Reznik Qualco Inc., San Diego, CA Eail: yreznik@ieee.org Abstract We study the proble of quantization of discrete probability

More information

PULSE-TRAIN BASED TIME-DELAY ESTIMATION IMPROVES RESILIENCY TO NOISE

PULSE-TRAIN BASED TIME-DELAY ESTIMATION IMPROVES RESILIENCY TO NOISE PULSE-TRAIN BASED TIME-DELAY ESTIMATION IMPROVES RESILIENCY TO NOISE 1 Nicola Neretti, 1 Nathan Intrator and 1,2 Leon N Cooper 1 Institute for Brain and Neural Systes, Brown University, Providence RI 02912.

More information

Antenna Saturation Effects on MIMO Capacity

Antenna Saturation Effects on MIMO Capacity Antenna Saturation Effects on MIMO Capacity T S Pollock, T D Abhayapala, and R A Kennedy National ICT Australia Research School of Inforation Sciences and Engineering The Australian National University,

More information

Bayes Decision Rule and Naïve Bayes Classifier

Bayes Decision Rule and Naïve Bayes Classifier Bayes Decision Rule and Naïve Bayes Classifier Le Song Machine Learning I CSE 6740, Fall 2013 Gaussian Mixture odel A density odel p(x) ay be ulti-odal: odel it as a ixture of uni-odal distributions (e.g.

More information

Distributed Lossy Averaging

Distributed Lossy Averaging Distribute Lossy Averaging Han-I Su Departent of Electrical Engineering Stanfor University Stanfor, CA 94305, USA Eail: hanisu@stanforeu Abbas El Gaal Departent of Electrical Engineering Stanfor University

More information

On the Design of MIMO-NOMA Downlink and Uplink Transmission

On the Design of MIMO-NOMA Downlink and Uplink Transmission On the Design of MIMO-NOMA Downlink and Uplink Transission Zhiguo Ding, Meber, IEEE, Robert Schober, Fellow, IEEE, and H. Vincent Poor, Fellow, IEEE Abstract In this paper, a novel MIMO-NOMA fraework for

More information

Lean Walsh Transform

Lean Walsh Transform Lean Walsh Transfor Edo Liberty 5th March 007 inforal intro We show an orthogonal atrix A of size d log 4 3 d (α = log 4 3) which is applicable in tie O(d). By applying a rando sign change atrix S to the

More information

. The univariate situation. It is well-known for a long tie that denoinators of Pade approxiants can be considered as orthogonal polynoials with respe

. The univariate situation. It is well-known for a long tie that denoinators of Pade approxiants can be considered as orthogonal polynoials with respe PROPERTIES OF MULTIVARIATE HOMOGENEOUS ORTHOGONAL POLYNOMIALS Brahi Benouahane y Annie Cuyt? Keywords Abstract It is well-known that the denoinators of Pade approxiants can be considered as orthogonal

More information

Warning System of Dangerous Chemical Gas in Factory Based on Wireless Sensor Network

Warning System of Dangerous Chemical Gas in Factory Based on Wireless Sensor Network 565 A publication of CHEMICAL ENGINEERING TRANSACTIONS VOL. 59, 07 Guest Editors: Zhuo Yang, Junie Ba, Jing Pan Copyright 07, AIDIC Servizi S.r.l. ISBN 978-88-95608-49-5; ISSN 83-96 The Italian Association

More information

Fairness via priority scheduling

Fairness via priority scheduling Fairness via priority scheduling Veeraruna Kavitha, N Heachandra and Debayan Das IEOR, IIT Bobay, Mubai, 400076, India vavitha,nh,debayan}@iitbacin Abstract In the context of ulti-agent resource allocation

More information

Boosting with log-loss

Boosting with log-loss Boosting with log-loss Marco Cusuano-Towner Septeber 2, 202 The proble Suppose we have data exaples {x i, y i ) i =... } for a two-class proble with y i {, }. Let F x) be the predictor function with the

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE7C (Spring 018: Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee7c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee7c@berkeley.edu October 15,

More information

An l 1 Regularized Method for Numerical Differentiation Using Empirical Eigenfunctions

An l 1 Regularized Method for Numerical Differentiation Using Empirical Eigenfunctions Journal of Matheatical Research with Applications Jul., 207, Vol. 37, No. 4, pp. 496 504 DOI:0.3770/j.issn:2095-265.207.04.0 Http://jre.dlut.edu.cn An l Regularized Method for Nuerical Differentiation

More information

Genetic Quantum Algorithm and its Application to Combinatorial Optimization Problem

Genetic Quantum Algorithm and its Application to Combinatorial Optimization Problem Genetic Quantu Algorith and its Application to Cobinatorial Optiization Proble Kuk-Hyun Han Dept. of Electrical Engineering, KAIST, 373-, Kusong-dong Yusong-gu Taejon, 305-70, Republic of Korea khhan@vivaldi.kaist.ac.kr

More information

Low-complexity, Low-memory EMS algorithm for non-binary LDPC codes

Low-complexity, Low-memory EMS algorithm for non-binary LDPC codes Low-coplexity, Low-eory EMS algorith for non-binary LDPC codes Adrian Voicila,David Declercq, François Verdier ETIS ENSEA/CP/CNRS MR-85 954 Cergy-Pontoise, (France) Marc Fossorier Dept. Electrical Engineering

More information

Combining Classifiers

Combining Classifiers Cobining Classifiers Generic ethods of generating and cobining ultiple classifiers Bagging Boosting References: Duda, Hart & Stork, pg 475-480. Hastie, Tibsharini, Friedan, pg 246-256 and Chapter 10. http://www.boosting.org/

More information

A note on the realignment criterion

A note on the realignment criterion A note on the realignent criterion Chi-Kwong Li 1, Yiu-Tung Poon and Nung-Sing Sze 3 1 Departent of Matheatics, College of Willia & Mary, Williasburg, VA 3185, USA Departent of Matheatics, Iowa State University,

More information

Power-Controlled Feedback and Training for Two-way MIMO Channels

Power-Controlled Feedback and Training for Two-way MIMO Channels 1 Power-Controlled Feedback and Training for Two-way MIMO Channels Vaneet Aggarwal, and Ashutosh Sabharwal, Senior Meber, IEEE arxiv:0901188v3 [csit] 9 Jan 010 Abstract Most counication systes use soe

More information

A Quantum Observable for the Graph Isomorphism Problem

A Quantum Observable for the Graph Isomorphism Problem A Quantu Observable for the Graph Isoorphis Proble Mark Ettinger Los Alaos National Laboratory Peter Høyer BRICS Abstract Suppose we are given two graphs on n vertices. We define an observable in the Hilbert

More information