Interactive Decoding of a Broadcast Message
In Proc. Allerton Conf. Commun., Contr., Computing, (Illinois), Oct.

Stark C. Draper, Brendan J. Frey, Frank R. Kschischang
University of Toronto, Toronto, ON, M5S 3G4, Canada
{sdraper@comm.utoronto.ca, frey@psi.toronto.edu, frank@comm.utoronto.ca}

Abstract

We develop communication strategies for the rate-constrained interactive decoding of a message broadcast to a group of interested users. This situation differs from the relay channel in that all users are interested in the transmitted message, and from the broadcast channel because no user can decode on its own. We focus on two-user scenarios, and describe a baseline strategy that uses ideas of coding with decoder side information. One user acts initially as a relay for the other. That other user then decodes the message and sends back random parity bits, enabling the first user to decode. We show how to improve on this scheme's performance through a conversation consisting of multiple rounds of discussion. While there are now more messages, each message is shorter, lowering the overall rate of the conversation. Such multi-round conversations can be more efficient because earlier messages serve as side information known at both encoder and decoder. We illustrate these ideas for binary erasure channels. We show that multi-round conversations can decode using less overall rate than is possible with the single-round scheme.

1 Introduction

In this paper we consider the interactive decoding of a message received by a number of users. None of the users has a clear enough reception to decode the message on its own, and all users are interested in the message sent. Therefore, the users must have a conversation to determine the message. This scenario models a wide range of situations where requesting a retransmission or a clarifying message from the transmitter is not possible.
For instance, the transmitter may be a satellite or airplane that is within view of the users for only a short period of time. This may be the case, e.g., in a military application where the users are soldiers that for security reasons do not want to send a high-power request for retransmission back to the transmitter, but would rather cooperate to decode the message locally. Alternately, the users may constitute a sensor network, and the message a common source of randomness needed to set up network functionality, but power limitations constrain the sensors to local communication. Or, in a network context, the users may be terminals receiving multicast data. If the terminals have received different subsets of the packets sent, they can sort out ambiguity in the data stream by sharing already-received packets. In situations where the terminals are located far from the information source, this local sharing can cause much less network congestion than requesting clarifying packets from
the source. In digital fountain approaches, where re-requests are not needed, interactive decoding can still be useful since, through interaction, terminals may be able to sort out their ambiguity much more quickly than the time it would take for additional packets to arrive from the source. The central question we focus on is whether the conversation between users should consist of a single round of discussion (one message per user), or multiple rounds. We show that many-round conversations based on generalizations of the most efficient strategies known for the relay channel outperform single-round conversations based on these same strategies. Multiple rounds can help because already-received messages serve as side information known to both encoder and decoder. Generally, coding with side information is more efficient when the side information is known at both encoder and decoder, rather than at one or the other. In a multiple-round discussion, encoder and decoder can condition their encoding and decoding, respectively, on this shared information. The investigations of this paper are related to Orlitsky's and others' work on interactive communication, e.g., see [4, 5] and the references therein. The situation is most akin to [5], where Orlitsky and Roche design interactive communication strategies to reconstruct a memoryless function of two correlated observations at one of two users. The major difference in the problem setups is that in our case a detection problem underlies the space of observations about which discussion occurs. A second difference is that in our case both users are interested in decoding the transmitted message. The outline of the paper is as follows. In Section 2 we describe our system model and define a conversation formally. In Section 3 we describe a single-round approach using ideas of relay channel coding. In Section 4 we generalize this approach to many rounds of discussion.
And, in Section 5, we analyze the proposed strategies for binary erasure channels.

2 System Model

In this section we describe our communication system and define a conversation. We concentrate on the simplest scenario, consisting of a pair of users. In this case, the memoryless channel law is given by

$p_{y_a, y_b | x}(y_a, y_b \mid x) = \prod_{j=1}^{n} p(y_{a,j} \mid x_j)\, p(y_{b,j} \mid x_j),$

where $y_a$ and $y_b$ are the observations of the two users a and b, respectively, and where $x$ is the channel input.

Definition 1 A $(2^{nR}, n, k)$ code and k-round conversation for this channel consists of an encoding function $f : \{1, 2, \ldots, 2^{nR}\} \to \mathcal{X}^n$, a set of $2k$ inter-user messages, $m_{a,i} \in \{1, 2, \ldots, 2^{nR_{a,i}}\}$, $m_{b,i} \in \{1, 2, \ldots, 2^{nR_{b,i}}\}$, and a corresponding set of $2k$ inter-user message encoding functions $\{g_{a,i}, g_{b,i}\}_{i=1}^{k}$ such that

$m_{a,i} = g_{a,i}(y_a, m_{b,1}, m_{b,2}, \ldots, m_{b,i-1}), \qquad m_{b,i} = g_{b,i}(y_b, m_{a,1}, m_{a,2}, \ldots, m_{a,i}),$
where, without loss of generality, we assume that user a begins the conversation, and a pair of decoding functions

$h_a : \mathcal{Y}_a^n \times \{1, \ldots, 2^{nR_{b,1}}\} \times \cdots \times \{1, \ldots, 2^{nR_{b,k}}\} \to \{1, 2, \ldots, 2^{nR}\},$
$h_b : \mathcal{Y}_b^n \times \{1, \ldots, 2^{nR_{a,1}}\} \times \cdots \times \{1, \ldots, 2^{nR_{a,k}}\} \to \{1, 2, \ldots, 2^{nR}\}.$

In this definition we assume that users wait until they have each received their full n-length block of observations before beginning their conversation. The conversation consists of a sequence of finite-rate messages where $R_{a,i}$ denotes the rate of the ith message $m_{a,i}$ sent from user a to user b, and $R_{b,i}$ the rate of the ith message $m_{b,i}$ sent from user b to user a. Rate is normalized with respect to n, the length of the codeword $x(m)$ where $m \in \{1, 2, \ldots, 2^{nR}\}$ is the channel message. Our goal is to find the strategy that minimizes conversation complexity as measured by

$R_{\mathrm{sum}} = \sum_{i=1}^{k} (R_{a,i} + R_{b,i}).$

From cut-set arguments, the transmitted message m cannot be reliably decoded if its rate $R > I(x; y_a, y_b) \triangleq C_{ab}$. We term $C_{ab}$ the joint-decoding capacity, which can be achieved if the decoders are able to convene and jointly decode the message. This is an upper bound on the rate of reliable communication.

3 Decoding in a Single Round of Discussion

In this section we describe a decoding strategy where the conversation consists of a single round of discussion. This strategy serves as a baseline with which we will compare conversations consisting of many rounds of discussion.

Theorem 1 A rate-R message can be reliably decoded by users a and b in one round of discussion if

$R_{\mathrm{sum}} \ge R - I(x; y_a) + \min_{(x,u) \in \mathcal{P}} I(u; y_a \mid y_b), \qquad (1)$

where the set $\mathcal{P}$ consists of all input variables x and auxiliary random variables u such that (i) the Markov condition $u - y_a - (x, y_b)$ holds, and (ii) $I(x; y_b, u) \ge R$.

To achieve this sum-rate, user a first acts as a relay for user b; user b then decodes and sends back random parity bits. We first describe how user b can reliably decode if $R_{a,1} \ge \min_{(x,u) \in \mathcal{P}} I(u; y_a \mid y_b)$.
This result follows from standard coding with decoder side information arguments, and follows as a special case of Cover and El Gamal's Theorem 6 in [2] if the channel from the relay to the destination is a finite-rate noiseless link. Briefly, the transmitter uses a channel codebook C of rate $R < C_{ab}$. Codebook C is generated randomly in an independent identically distributed (i.i.d.) manner according to p(x). User a has a source codebook $C_a$ consisting of $2^{n\tilde{R}_a}$ length-n codewords $u(s)$, $s \in \{1, 2, \ldots, 2^{n\tilde{R}_a}\}$, generated i.i.d. according to p(u). User a randomly and uniformly partitions all codewords in $C_a$ into $2^{nR_{a,1}}$ subsets or bins. When user a observes $y_a$, it finds a $u(s) \in C_a$ jointly typical with $y_a$, and transmits to user b the index of the bin in which $u(s)$ lies. User b searches that bin for a $u(s)$ jointly typical with $y_b$. It then selects the $x(m)$ jointly typical with the pair $(u(s), y_b)$ as the transmitted codeword. Let us choose $\tilde{R}_a = I(y_a; u) + \epsilon$ and $R_{a,1} = I(u; y_a) - I(u; y_b) + 3\epsilon = I(u; y_a \mid y_b) + 3\epsilon$. Then, because of our choice of rates, the Markov Lemma [1], and because $R < C_{ab}$, the encoding into $u(s)$, the selection from the bin, and message decoding can all be done reliably. At this point user b has successfully determined m.
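As a toy illustration of the binning mechanism invoked here (our sketch, not the authors' construction), the following code bins a binary sequence by a fixed hash and lets a decoder recover it from side information. For simplicity the side information is an erased copy of the sequence itself, corresponding to the $u = y_a$ (i.e., $p_u = 0$) case treated later in Sections 5 and 6; all parameters (n, erasure rate, bin count) are illustrative.

```python
import random
from itertools import product

rng = random.Random(2)
n, p_e, bin_bits = 20, 0.3, 12   # 2**12 bins; illustrative choices

s = [rng.randint(0, 1) for _ in range(n)]             # user a's sequence
side = [v if rng.random() > p_e else 'e' for v in s]  # decoder side info

def bin_index(seq):
    # A fixed hash plays the role of the random, uniform bin assignment.
    return hash(tuple(seq)) % (1 << bin_bits)

sent = bin_index(s)   # user a transmits only this bin index

# Decoder: enumerate sequences consistent with the side information
# (free choice only at erased positions) and keep those in the sent bin.
erased = [j for j, v in enumerate(side) if v == 'e']
candidates = []
for fill in product((0, 1), repeat=len(erased)):
    trial = list(side)
    for j, bit in zip(erased, fill):
        trial[j] = bit
    if bin_index(trial) == sent:
        candidates.append(trial)

# The true sequence always survives; with enough bins it is, with high
# probability, the only survivor.
```

With a bin-index length of roughly $nH(y_a|y_b)$ bits the survivor list is a singleton with high probability, which is the Slepian-Wolf behavior derived in Section 5.1.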
Since user b knows m, it can use a more efficient strategy when replying to user a. In particular, user b also uses a binning strategy, but this time bins the messages (or codewords) directly, rather than the intermediate statistics given by the $u(s)$. User b bins the $2^{nR}$ codewords into $2^{nR_{b,1}} = 2^{n(R - I(x;y_a) + 2\epsilon)}$ bins and transmits to a the index of the bin containing the codeword $x(m)$. User a intersects the contents of this bin with the list $L(y_a)$ of codewords jointly typical with its observation $y_a$. Thus, if we use $A_\epsilon^{(n)}$ to denote the jointly typical set [3], $L(y_a) = \{x \in C : (x, y_a) \in A_\epsilon^{(n)}\}$. From standard typicality arguments, the expected size of the list is

$E[|L(y_a)|] = \sum_{x \in C} \Pr[(x, y_a) \in A_\epsilon^{(n)}] \le \sum_{x \in C} 2^{-n(I(x;y_a) - \epsilon)} = 2^{n(R - I(x;y_a) + \epsilon)}.$

Since the probability of any given codeword being in the indicated bin is $2^{-nR_{b,1}}$, the size of the intersection will be roughly $2^{n(R - I(x;y_a) + \epsilon)} \cdot 2^{-nR_{b,1}} = 2^{-n\epsilon}$. Therefore, with this choice of rates, we can find a codebook with a small maximum probability of error that satisfies (1).

Note that communication to user a is the most efficient possible. The information flow from the transmitter to user a is $I(x; y_a)$. The information flow from user b to user a is $R_{b,1} = R - I(x; y_a) + 2\epsilon$. Thus the total information flow across the cut to user a is $R + 2\epsilon$, which is within $2\epsilon$ of the rate of the channel code. This is the least possible information flow for which user a can still decode reliably. Zhang has shown [6] that such most-efficient communication strategies are possible only when the relay has perfectly decoded the message. This result implies that the information flow to the first user to decode (user b in this case) will always exceed $R - I(x; y_b)$, the difference between the message rate and the information flow from transmitter to b.

4 Decoding in Many Rounds

In this section we generalize the one-round approach to multiple rounds of discussion.
We show the following theorem.

Theorem 2 A rate-R message can be reliably decoded by users a and b through a k-round conversation if

$R_{\mathrm{sum}} \ge \min_{(x, u_{a,1}, \ldots, u_{a,k}, u_{b,1}, \ldots, u_{b,k-1}) \in \mathcal{P}} \Big[ \sum_{i=1}^{k} I(u_{a,i}; y_a \mid y_b, u_{a,1}, \ldots, u_{a,i-1}, u_{b,1}, \ldots, u_{b,i-1})$
$\quad + \sum_{i=1}^{k-1} I(u_{b,i}; y_b \mid y_a, u_{a,1}, \ldots, u_{a,i}, u_{b,1}, \ldots, u_{b,i-1}) + R - I(x; y_a, u_{b,1}, \ldots, u_{b,k-1}) \Big], \qquad (2)$

where the set $\mathcal{P}$ consists of all random variables $x, u_{a,1}, \ldots, u_{a,k}, u_{b,1}, \ldots, u_{b,k-1}$ that satisfy the Markov conditions (i) $u_{a,i} - (y_a, u_{a,1}, \ldots, u_{a,i-1}, u_{b,1}, \ldots, u_{b,i-1}) - (x, y_b)$ and (ii) $u_{b,i} - (y_b, u_{a,1}, \ldots, u_{a,i}, u_{b,1}, \ldots, u_{b,i-1}) - (x, y_a)$ for all i, and that satisfy (iii) $I(x; y_b, u_{a,1}, \ldots, u_{a,k}) > R$.

The coding construction that achieves this theorem is a direct generalization of Theorem 1. For message $m_{a,i}$ we use the posterior $p(u_{a,i} \mid y_a, u_{a,1}, \ldots, u_{a,i-1}, u_{b,1}, \ldots, u_{b,i-1})$, and for message $m_{b,i}$ we use the posterior $p(u_{b,i} \mid y_b, u_{a,1}, \ldots, u_{a,i}, u_{b,1}, \ldots, u_{b,i-1})$. Many-round decoding takes advantage of the fact that coding with side information techniques are generally more efficient when the side information is known at both encoder and decoder. In the conversation, already-transmitted messages serve as side information known at both encoder and decoder. We see in the next section that, for this reason, this strategy can decode the message at a strictly lower sum-rate $R_{\mathrm{sum}}$ than the baseline approach of Theorem 1.
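The bound in (2) is a sum of conditional mutual informations of discrete random variables. For small alphabets such terms can be evaluated directly from a joint pmf; the helper below is a generic sketch (the function name and the example pmf are ours, not the paper's), exercised on the broadcast erasure channel of Section 5.

```python
from math import log2
from collections import defaultdict

def cond_mutual_info(pmf, X, Y, Z=()):
    """I(X;Y|Z) in bits. pmf maps outcome tuples to probabilities;
    X, Y, Z are tuples of coordinate indices into each outcome."""
    def marginal(idx):
        m = defaultdict(float)
        for outcome, pr in pmf.items():
            m[tuple(outcome[i] for i in idx)] += pr
        return m
    pxyz, pxz, pyz, pz = (marginal(X + Y + Z), marginal(X + Z),
                          marginal(Y + Z), marginal(Z))
    nx, ny = len(X), len(Y)
    total = 0.0
    for key, pr in pxyz.items():
        if pr > 0:
            x, y, z = key[:nx], key[nx:nx + ny], key[nx + ny:]
            # I(X;Y|Z) = sum p(x,y,z) log [ p(x,y,z) p(z) / (p(x,z) p(y,z)) ]
            total += pr * log2(pr * pz[z] / (pxz[x + z] * pyz[y + z]))
    return total

# Joint pmf of (x, y_a, y_b) for the broadcast erasure channel of Section 5:
# x ~ Bernoulli(0.5), each output erased ('e') independently w.p. p.
p = 0.3
pmf = {}
for x in (0, 1):
    for ya, pa in ((x, 1 - p), ('e', p)):
        for yb, pb in ((x, 1 - p), ('e', p)):
            pmf[(x, ya, yb)] = pmf.get((x, ya, yb), 0.0) + 0.5 * pa * pb
```

For this example $I(x; y_a \mid y_b)$ evaluates to $p(1-p)$ and $I(x; y_a)$ to $1-p$, the quantities that appear throughout Section 5.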
5 Example: Binary Erasure Channels

In this section we illustrate single-round and many-round conversation strategies for a binary erasure broadcast channel. For simplicity we assume that the erasure channels to the users are both symmetric with the same erasure probability p. Furthermore, in order to approach within $\epsilon$ the joint-decoding capacity $C_{ab}$ we use a channel codebook generated according to a Bernoulli(0.5) distribution, consisting of $2^{n(1 - p^2 - \epsilon)} = 2^{n(C_{ab} - \epsilon)}$ codewords.

5.1 Decoding in One Round

We now show that to be able to decode in one round via the strategy of Theorem 1, user a must send enough information so that user b can fully determine user a's observation. In other words, user a should use a Slepian-Wolf code.

First observe that with the choice of p(x) as Bernoulli(0.5) we can rewrite the second condition of Theorem 1 as $\epsilon \ge C_{ab} - I(x; y_b, u) = I(x; y_a, y_b) - I(x; y_b, u)$. This puts extra conditions on the test channel $p(u \mid y_a)$ that we are able to pick. In particular, it implies that a certain extra Markov condition must hold as $\epsilon$ goes to zero:

$\epsilon \ge I(x; y_a, y_b) - I(x; u, y_b) = H(x \mid u, y_b) - H(x \mid y_a, y_b) \qquad (3)$
$\quad = p[H(x \mid u) - H(x \mid y_a)] = p[I(x; y_a) - I(x; u)], \qquad (4)$

where (4) follows because x is Bernoulli(0.5) and the channels are symmetric binary erasure channels with erasure probability p. At this point we can combine the mutual information terms in (4) into the single divergence term $p\,D(p(x \mid y_a, u) \,\|\, p(x \mid u))$, and use Pinsker's inequality [3] to show that the difference between $p(x \mid u, y_a)$ and $p(x \mid u)$ must be small. For brevity, herein we simply assume for the rest of the discussion that as $\epsilon \to 0$, the Markov chain $x - u - y_a$ holds exactly.

We next expand (3) in a second way to learn about the rate $R_{a,1}$:

$\epsilon \ge H(y_a \mid y_b) - H(y_a \mid x) + H(u \mid x) - H(u \mid y_b)$
$\quad = I(u; y_b) - I(y_a; y_b) + H(x \mid u) - H(x \mid y_a) \qquad (5)$
$\quad \ge I(u; y_b) - I(y_a; y_b), \qquad (6)$

where (6) follows from the fact that $H(x \mid u) - H(x \mid y_a) \ge 0$, a consequence of the Markov chain $x - y_a - u$ and the data processing inequality.
Substituting (6) into the expression for $R_{a,1}$ in Theorem 1, we get

$R_{a,1} = I(u; y_a \mid y_b) = I(u; y_a) - I(u; y_b) \ge I(u; y_a) - I(y_a; y_b) - \epsilon = H(y_a \mid y_b) - H(y_a \mid u) - \epsilon. \qquad (7)$

Finally, we now show that the Markov conditions derived above imply that $H(y_a \mid u)$ in (7) is zero. The two Markov chains $x - y_a - u$ and $x - u - y_a$ imply the following, for any $(y_a, u)$ pair such that $p(y_a, u) > 0$:

$p(x \mid y_a, u) = p(x \mid y_a) = p(x \mid u). \qquad (8)$

For example, say there is some value $u = j$ such that $p(y_a = 1, u = j) > 0$. Then, because the channel is a binary erasure channel,

$1 = p(x = 1 \mid y_a = 1, u = j) = p(x = 1 \mid y_a = 1) = p(x = 1 \mid u = j). \qquad (9)$
It is then easy to show that there must exist a set $S_1$ of values of u such that $p(y_a = 1 \mid u \in S_1) = 1$ and $p(u \in S_1) = p(y_a = 1)$. Thus $p(u \in S_1 \mid y_a = 1) = 1$. Similarly, one can define sets $S_e$ and $S_0$ such that $p(u \in S_e \mid y_a = e) = p(y_a = e \mid u \in S_e) = 1$ and $p(u \in S_0 \mid y_a = 0) = p(y_a = 0 \mid u \in S_0) = 1$. This implies that any test channel $p(u \mid y_a)$ that achieves $I(x; u, y_b) = I(x; y_a, y_b)$ maps $y_a$ into one of three disjoint sets $S_0$, $S_e$, and $S_1$, depending on the value of $y_a$. And, therefore, that $H(y_a \mid u) = 0$.

Substituting the result that $H(y_a \mid u) = 0$ into (7) shows that in order to decode in one round, user a at best must do Slepian-Wolf coding. As we show in the next section, this means that for a single-round discussion $R_{\mathrm{sum}} \ge 2p(1-p) + H_B(p)$.

6 Test Channel for One-Round Decoding

In this section we introduce a candidate test channel $p(u \mid y_a)$ that enables decoding at the minimal rate derived in the last section. It can be used to decode at an even lower sum-rate if multiple rounds of conversation are allowed. Let $\mathcal{U} = \{0, e, 1\}$ where e is the symbol for an erasure. Define $p(u \mid y_a)$ as follows: $p(u = 1 \mid y_a = 1) = 1 - p_u$, $p(u = e \mid y_a = 1) = p_u$, $p(u = e \mid y_a = e) = 1$, $p(u = 0 \mid y_a = 0) = 1 - p_u$, $p(u = e \mid y_a = 0) = p_u$. For this test channel it is straightforward to show (see Footnote 1) that

$R_{a,1} = I(u; y_a \mid y_b) = H_B(p + p_u(1-p)) + p(1-p)(1-p_u) - (1-p)H_B(p_u), \qquad (10)$

where $H_B(p_u)$ denotes the entropy of a Bernoulli($p_u$) random variable. For a given $p_u$ the maximum communication rate that user b can decode reliably is

$I(x; y_b, u) = 1 - p[p + (1-p)p_u]. \qquad (11)$

To get $I(x; y_b, u) = 1 - p^2 = C_{ab}$, we must set $p_u = 0$, yielding $R_{a,1} = p(1-p) + H_B(p) = H(y_a \mid y_b)$. After user b decodes, it can bin the channel messages per Theorem 1 and transmit the bin index at rate $p(1-p)$ to user a. This gives sum-rate

$R_{\mathrm{sum}} = R_{a,1} + R_{b,1} = 2p(1-p) + H_B(p). \qquad (12)$

We compare the sum-rate of many-round conversations to this baseline.
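The closed forms (10)-(12) are easy to check numerically. The sketch below (our code, with an illustrative p) evaluates them and confirms that $p_u = 0$ recovers the Slepian-Wolf sum-rate of (12).

```python
from math import log2

def h_b(q):
    """Binary entropy in bits."""
    return 0.0 if q in (0.0, 1.0) else -q * log2(q) - (1 - q) * log2(1 - q)

def single_round(p, p_u):
    # Eq. (10): rate of user a's first (and only) message.
    r_a1 = h_b(p + p_u * (1 - p)) + p * (1 - p) * (1 - p_u) - (1 - p) * h_b(p_u)
    # Eq. (11): maximum channel-code rate user b can then decode reliably.
    r_b_supportable = 1 - p * (p + (1 - p) * p_u)
    return r_a1, r_b_supportable

p = 0.1
r_a1, r_supp = single_round(p, p_u=0.0)
# With p_u = 0: r_a1 = H(y_a|y_b) = p(1-p) + H_B(p) and r_supp = 1 - p^2 = C_ab.
# Adding user b's bin-index reply of rate p(1-p) gives the sum-rate of eq. (12).
r_sum = r_a1 + p * (1 - p)
```

For p = 0.1 this gives the baseline against which the many-round schedules of Section 6.2 are compared.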
6.1 Useful Alternative Derivation of One-Round Strategy

We first give a useful alternative derivation of (10) and (11) in terms of the absolute fraction of symbols described by each message. Keeping track of absolute fractions, rather than $p_u$, which is a relative fraction of extra erasures added in by the test channel, makes it easier to consider the effect of multiple rounds.

The alternative derivation follows in three steps. First, say message $m_{a,1}$ informs b of the values of a fraction $\gamma_{a,1}$ of the n symbols that a observes. Furthermore, let each symbol described by $m_{a,1}$ be unerased on $y_a$. Thus, $0 \le \gamma_{a,1} \le (1-p)$. The total number of possible sequences of length $n\gamma_{a,1}$ is

$\binom{n}{n\gamma_{a,1}} \le 2^{nH_B(\gamma_{a,1})}. \qquad (13)$

Footnote 1: In deriving these relationships it is helpful to define extra random variables $e_a$, $e_b$, $e_u$, which denote whether $y_a$, $y_b$, or u are erasures, respectively. These random variables can be used in a manner analogous to the derivation of Fano's inequality in [3].
Second, we don't care which set of $n\gamma_{a,1}$ unerased symbols we describe, but all such sets must be subsets of the $n(1-p)$ symbols that are observed unerased by a. There are

$\binom{n(1-p)}{n\gamma_{a,1}} \ge \frac{2^{n(1-p)H_B(\gamma_{a,1}/(1-p))}}{n(1-p)+1} = 2^{n((1-p)H_B(\gamma_{a,1}/(1-p)) - \epsilon)} \qquad (14)$

such subsets. Therefore, for there to be on average at least one subset that matches $n\gamma_{a,1}$ of a's unerased symbols, we need to index at least $2^{n(H_B(\gamma_{a,1}) - (1-p)H_B(\gamma_{a,1}/(1-p)) + \epsilon)}$ subsets of size $n\gamma_{a,1}$. Again, since we don't care which such subset we describe, we can index fewer subsets than in (13), thereby saving rate. This is equivalent to the quantization effect of the test channel $p(u \mid y_a)$.

Finally, we need to account for the conditional entropy of the $n\gamma_{a,1}$ symbols given the decoder information. The conditional entropy of symbol $y_{a,i}$ given $y_{b,i}$ and the event that $y_{a,i}$ is not erased (since we only discuss unerased symbols) equals p. We describe $n\gamma_{a,1}$ symbols, giving a conditional entropy rate $\gamma_{a,1} p$. This is equivalent to the rate savings given by binning. All together this gives message rate

$R_{a,1} = H_B(\gamma_{a,1}) - (1-p)H_B(\gamma_{a,1}/(1-p)) + \gamma_{a,1} p + \epsilon. \qquad (15)$

If we set $\gamma_{a,1} = (1 - p_u)(1-p)$, and use the symmetry of $H_B(\cdot)$, then (15) equals (10). At the end of this communication step user b knows perfectly the following fraction of symbols (which we term the communication rate supportable by user b at step 1, since if the rate of the channel code C is below this, user b will be able to decode):

$R_{\mathrm{comm},b,1} = 1 - p + p\gamma_{a,1}. \qquad (16)$

By setting $\gamma_{a,1} = (1 - p_u)(1-p)$, (16) equals (11). Figure 1 gives a pictorial explanation of this derivation. The top and bottom bars represent the received vectors $y_a$ and $y_b$, respectively. The shaded areas represent the fractions of erased symbols.
Symbols have been ordered so that, starting from the left, the symbols received unerased by both are shown first, next the symbols unerased on $y_a$ but erased on $y_b$, next erased on $y_a$ but unerased on $y_b$, and finally erased on both. The dotted lines indicate the fraction of symbols transmitted in each message and whether they are erased or unerased in $y_a$, $y_b$. For example, on average user a's first message $m_{a,1}$ describes $np\gamma_{a,1}$ symbols to user b that are unerased on $y_a$ but erased on $y_b$.

Since when sending $m_{b,1}$ user b only needs to be concerned with sending information about the symbols not discussed in $m_{a,1}$, the message $m_{b,1}$ sent is conditioned on $m_{a,1}$. This is equivalent to conditioning the test channel for $u_{b,1}$ on the result of the first test channel, $u_{a,1}$, in Theorem 2. User b discusses $n\gamma_{b,1}$ symbols in message $m_{b,1}$. The first step in finding $R_{b,1}$ is to determine the number of subsets of size $n\gamma_{b,1}$ of the $n - n\gamma_{a,1} = n(1 - \gamma_{a,1})$ symbols not discussed in $m_{a,1}$, i.e.,

$\binom{n(1-\gamma_{a,1})}{n\gamma_{b,1}} \le 2^{n(1-\gamma_{a,1})H_B(\gamma_{b,1}/(1-\gamma_{a,1}))}. \qquad (17)$

Second, the expected number of possibly useful symbols (i.e., unerased in $y_b$ and undiscussed in earlier messages) is $n(1-p) - n\gamma_{a,1}(1-p) = n(1-\gamma_{a,1})(1-p)$. User b needs to describe only a subset of $n\gamma_{b,1}$ of these symbols, where $\gamma_{b,1} < (1-\gamma_{a,1})(1-p)$. The number of such subsets is

$\binom{n(1-p)(1-\gamma_{a,1})}{n\gamma_{b,1}} \ge 2^{n[(1-p)(1-\gamma_{a,1})H_B(\gamma_{b,1}/((1-p)(1-\gamma_{a,1}))) - \epsilon]}. \qquad (18)$
Figure 1: The fraction of unerased symbols is white, the fraction of erased symbols is shaded. The bars for $y_a$ and $y_b$ are partitioned into fractions $(1-p)^2$, $p(1-p)$, $p(1-p)$, and $p^2$, with dotted lines marking the fractions $\gamma_{a,1}$ and $\gamma_{b,1}$ discussed in each message.

Finally, the conditional entropy of a symbol equals the probability that a remaining possibly useful symbol on $y_b$ is erased on $y_a$. Figure 1 helps us see this probability is

$\frac{np(1-p)}{n(1-p) - n\gamma_{a,1}(1-p)} = \frac{p}{1 - \gamma_{a,1}}. \qquad (19)$

Putting together (17), (18), and (19) gives the rate of $m_{b,1}$,

$R_{b,1} = (1-\gamma_{a,1})H_B\!\left(\frac{\gamma_{b,1}}{1-\gamma_{a,1}}\right) - (1-p)(1-\gamma_{a,1})H_B\!\left(\frac{\gamma_{b,1}}{(1-p)(1-\gamma_{a,1})}\right) + \frac{\gamma_{b,1} p}{1-\gamma_{a,1}} + \epsilon. \qquad (20)$

After receiving $m_{b,1}$, the communication rate reliably decodable by user a is

$R_{\mathrm{comm},a,1} = 1 - p + p\gamma_{b,1}/(1-\gamma_{a,1}). \qquad (21)$

6.2 Decoding in Many Rounds

In this section we use the reinterpretation of (10) and (11) given in the last section to present results for decoding in many rounds of discussion. We begin by defining a number of useful quantities. First, we define $f_{a,k}$ and $f_{b,k}$ to be the fractions of symbols not already discussed when a and b formulate their kth messages, respectively. These fractions indicate the size of the set of symbols about which discussion can continue:

$f_{a,k} = 1 - \sum_{i=1}^{k-1} \gamma_{a,i} - \sum_{i=1}^{k-1} \gamma_{b,i}, \qquad f_{b,k} = 1 - \sum_{i=1}^{k} \gamma_{a,i} - \sum_{i=1}^{k-1} \gamma_{b,i}.$

The difference in the limits of summation occurs because a is assumed to transmit first. Next, define $P_{a,k}$ ($P_{b,k}$) to be the probabilities that a symbol discussed in $m_{a,k}$ ($m_{b,k}$) is
useful, i.e., is erased on $y_b$ ($y_a$). These probabilities are

$P_{a,k} = \frac{p(1-p) - \sum_{i=1}^{k-1} \gamma_{a,i} P_{a,i}}{1 - p - \sum_{i=1}^{k-1} \gamma_{a,i} - \sum_{i=1}^{k-1} \gamma_{b,i}(1 - P_{b,i})} \triangleq \frac{n_{a,k}}{d_{a,k}}, \qquad (22)$

$P_{b,k} = \frac{p(1-p) - \sum_{i=1}^{k-1} \gamma_{b,i} P_{b,i}}{1 - p - \sum_{i=1}^{k-1} \gamma_{b,i} - \sum_{i=1}^{k} \gamma_{a,i}(1 - P_{a,i})} \triangleq \frac{n_{b,k}}{d_{b,k}}. \qquad (23)$

The denominators $d_{a,k}$ and $d_{b,k}$ are particularly useful as they give the fraction of unerased and undiscussed, and therefore possibly useful, symbols remaining on $y_a$ and $y_b$, respectively. With these definitions, and for any choices of $\gamma_{a,i}$ ($0 \le \gamma_{a,i} \le d_{a,i}$) and $\gamma_{b,i}$ ($0 \le \gamma_{b,i} \le d_{b,i}$), the message rates at each step are

$R_{a,k} = f_{a,k} H_B(\gamma_{a,k}/f_{a,k}) - d_{a,k} H_B(\gamma_{a,k}/d_{a,k}) + \gamma_{a,k} P_{a,k}, \qquad (24)$
$R_{b,k} = f_{b,k} H_B(\gamma_{b,k}/f_{b,k}) - d_{b,k} H_B(\gamma_{b,k}/d_{b,k}) + \gamma_{b,k} P_{b,k}. \qquad (25)$

The communication rates supportable by each user after receiving the first k messages are

$R_{\mathrm{comm},a,k} = 1 - p + \sum_{i=1}^{k} P_{b,i} \gamma_{b,i}, \qquad (26)$
$R_{\mathrm{comm},b,k} = 1 - p + \sum_{i=1}^{k} P_{a,i} \gamma_{a,i}. \qquad (27)$

Discussion continues until, say, $R_{\mathrm{comm},b,k} > R$, the rate of the channel code. At this point user b decodes the message, bins the channel codebook into $2^{n(R - R_{\mathrm{comm},a,k})}$ bins, and sends the bin index in which the transmitted codeword lies. User a intersects the contents of this bin with its list of remaining codeword possibilities.

6.3 Comparison of Schemes

We now compare the many-round and single-round strategies. In Fig. 2 we plot the sum-rate of conversation $R_{\mathrm{sum}} = \sum_{i=1}^{k}(R_{a,i} + R_{b,i})$ for probability of erasure p = 0.05. The highest sum-rate is given by a single-round conversation consisting of two messages. As we allow the number of rounds of discussion till decoding to increase, the sum-rate declines. Assuming k rounds till decoding, in Figures 2 and 3, at step i each user described to the other a fraction $1/(k-i+1)$ of its remaining unerased and undiscussed bits. Equivalently, $\gamma_{a,i} = d_{a,i}/(k+1-i)$. A lower bound on the sum-rate is given by $2R - I(x; y_a) - I(x; y_b) = 2p(1-p)$. This is the additional information flow needed for the flow to each user to equal the code rate R.
As mentioned earlier, Zhang [6] showed this lower bound is unachievable for the situations we discuss. In Fig. 3 we plot the percentage decrease in sum-rate versus decoding in a single round, where k is the number of rounds to full decoding. We plot k = 1, . . . , for four probabilities of erasure, p = 0.05, 0.1, 0.2, 0.3. Note that there is more gain for systems with lower probability of erasure. This is because a lower probability of erasure means a higher percentage of symbols unerased on both observations, and thus more redundancy and greater savings from side information coding. Note that for p = 0.3 the rate savings decline for larger numbers of rounds of discussion. Clearly, this is a function of poor choices for the $\gamma_{a,i}$ and $\gamma_{b,i}$. Determining efficient optimizations for the $\gamma_{a,i}$ and $\gamma_{b,i}$ is part of on-going research.
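The recursions (22)-(27) can be implemented directly. The sketch below (our code) uses the schedule described above for the plots, $\gamma_{a,i} = d_{a,i}/(k+1-i)$ and likewise for user b, with user b sending k-1 quantized messages followed by a final bin index of rate $R - R_{\mathrm{comm},a}$; the choice p = 0.05 is the one plotted in Fig. 2.

```python
from math import log2

def h_b(q):
    return 0.0 if q <= 0.0 or q >= 1.0 else -q * log2(q) - (1 - q) * log2(1 - q)

def sum_rate(p, k):
    """Sum-rate of a k-round conversation on symmetric BECs (eqs. 22-27),
    with gamma_{a,i} = d_{a,i}/(k+1-i) and likewise for user b."""
    ga, gb, Pa, Pb = [], [], [], []   # per-round fractions, usefulness probs
    total = 0.0
    for i in range(1, k + 1):
        # User a's i-th message: eqs. (22) and (24).
        f_a = 1.0 - sum(ga) - sum(gb)
        n_a = p * (1 - p) - sum(g * P for g, P in zip(ga, Pa))
        d_a = (1 - p) - sum(ga) - sum(g * (1 - P) for g, P in zip(gb, Pb))
        g, P = d_a / (k + 1 - i), n_a / d_a
        total += f_a * h_b(g / f_a) - d_a * h_b(g / d_a) + g * P
        ga.append(g); Pa.append(P)
        if i == k:
            break
        # User b's i-th quantized message: eqs. (23) and (25).
        f_b = 1.0 - sum(ga) - sum(gb)
        n_b = p * (1 - p) - sum(g_ * P_ for g_, P_ in zip(gb, Pb))
        d_b = (1 - p) - sum(gb) - sum(g_ * (1 - P_) for g_, P_ in zip(ga, Pa))
        g, P = d_b / (k + 1 - i), n_b / d_b
        total += f_b * h_b(g / f_b) - d_b * h_b(g / d_b) + g * P
        gb.append(g); Pb.append(P)
    # After a's k-th message, eq. (27) gives R_comm,b = 1 - p^2 = C_ab, so b
    # decodes R = C_ab and replies with a bin index of rate R - R_comm,a.
    R = 1 - p * p
    r_comm_a = (1 - p) + sum(g * P for g, P in zip(gb, Pb))
    return total + (R - r_comm_a)
```

At k = 1 this reproduces the single-round baseline $2p(1-p) + H_B(p)$ of (12), and for k >= 2 it yields strictly smaller sum-rates while staying above the unachievable cut-set bound $2p(1-p)$, the qualitative behavior shown in Fig. 2.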
Figure 2: Sum-rate of conversation for p = 0.05 versus number of rounds of discussion. Curves shown: single-round decoding, many-round decoding, and the (unachievable) cut-set bound.

7 Discussion and Directions

In this paper we introduce the problem of interactive decoding of a broadcast message. We give an approach based on a generalization of the most efficient relaying techniques known. We show that multi-round conversations can give strictly better performance than single-round conversations, where performance is measured by the sum-rate of the conversation. In demonstrating this result we define a family of candidate test channels for the binary erasure version of the problem. In later rounds the test channel output is conditioned on the results of earlier rounds.

There are many interesting directions to pursue with regard to the basic interactive decoding problem posed herein. A central question is what should be discussed each round. The candidate test channel introduced gives one answer to this question but, except in the single-round situation with $p_u = 0$, this is not always the optimal answer. The interplay between the source and channel coding aspects of the problem means that finding the optimal input distribution and test channels, which determine what is discussed each round, is generally not a convex optimization. Further, what is discussed by a in the ith round (as specified by the test channel defining $u_{a,i}$) may be radically different from what a should discuss in the (i+1)th round. Even if we fix a family of test channels, as we did here, we still have parameters to optimize for each round. For the case presented herein, this corresponds to deciding how much to discuss each round. In the single-round situation we discuss everything in one round. Clearly, one can only gain by allowing more rounds of discussion.
But determining the optimal choice of how much to discuss each round is difficult and test-channel dependent. We are also working on applying these ideas to other types of channels, such as binary-symmetric or Gaussian. It is very important to note that while, e.g., in the Gaussian mean-squared-error case, having the decoder's side information at the encoder doesn't help in Wyner-Ziv coding, it helps hugely here. If the encoder (user a) knew both his observation and the decoder's (user b's), then the encoder could use both observations to
Figure 3: Sum-rate savings versus decoding in one round for probabilities of erasure p = 0.05, 0.1, 0.2, 0.3, and rounds of discussion till decoding.

jointly decode the message. Following that, he would use the perfectly efficient message-binning scheme we introduced to communicate to b. As shown by Zhang, however, this performance is unachievable. Therefore, even in the Gaussian case the underlying detection problem dramatically changes the quality of the solution.

Finally, we can think of casting interactive decoding as a type of network coding problem. Interaction is allowed among various network nodes to determine the information they want. This perspective may allow us to connect our work to the very active network coding community, and give insight into practical code designs for interactive decoding.

References

[1] T. Berger. Multiterminal source coding. In G. Longo, editor, The Information Theory Approach to Communications, chapter 4. Springer-Verlag.
[2] T. M. Cover and A. El Gamal. Capacity theorems for the relay channel. IEEE Trans. Inform. Theory, vol. 25, September.
[3] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley and Sons.
[4] A. Orlitsky. Worst-case interactive communication I: Two messages are almost optimal. IEEE Trans. Inform. Theory, vol. 36, September.
[5] A. Orlitsky and J. R. Roche. Coding for computing. IEEE Trans. Inform. Theory, vol. 47, March.
[6] Z. Zhang. Partial converse for a relay channel. IEEE Trans. Inform. Theory, vol. 34, p. 1106.
5-859: Information Theory and Applications in TCS CMU: Spring 23 Lecture 8: Channel and source-channel coding theorems; BEC & linear codes February 7, 23 Lecturer: Venkatesan Guruswami Scribe: Dan Stahlke
More informationDistributed Source Coding Using LDPC Codes
Distributed Source Coding Using LDPC Codes Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete May 29, 2010 Telecommunications Laboratory (TUC) Distributed Source Coding
More informationOn the Duality between Multiple-Access Codes and Computation Codes
On the Duality between Multiple-Access Codes and Computation Codes Jingge Zhu University of California, Berkeley jingge.zhu@berkeley.edu Sung Hoon Lim KIOST shlim@kiost.ac.kr Michael Gastpar EPFL michael.gastpar@epfl.ch
More informationNational University of Singapore Department of Electrical & Computer Engineering. Examination for
National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:
More informationOn Multiple User Channels with State Information at the Transmitters
On Multiple User Channels with State Information at the Transmitters Styrmir Sigurjónsson and Young-Han Kim* Information Systems Laboratory Stanford University Stanford, CA 94305, USA Email: {styrmir,yhk}@stanford.edu
More informationLecture 4 Noisy Channel Coding
Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem
More informationShannon s noisy-channel theorem
Shannon s noisy-channel theorem Information theory Amon Elders Korteweg de Vries Institute for Mathematics University of Amsterdam. Tuesday, 26th of Januari Amon Elders (Korteweg de Vries Institute for
More informationLossy Distributed Source Coding
Lossy Distributed Source Coding John MacLaren Walsh, Ph.D. Multiterminal Information Theory, Spring Quarter, 202 Lossy Distributed Source Coding Problem X X 2 S {,...,2 R } S 2 {,...,2 R2 } Ẑ Ẑ 2 E d(z,n,
More informationOn the Capacity of the Two-Hop Half-Duplex Relay Channel
On the Capacity of the Two-Hop Half-Duplex Relay Channel Nikola Zlatanov, Vahid Jamali, and Robert Schober University of British Columbia, Vancouver, Canada, and Friedrich-Alexander-University Erlangen-Nürnberg,
More informationNetwork coding for multicast relation to compression and generalization of Slepian-Wolf
Network coding for multicast relation to compression and generalization of Slepian-Wolf 1 Overview Review of Slepian-Wolf Distributed network compression Error exponents Source-channel separation issues
More informationSecret Key Agreement Using Conferencing in State- Dependent Multiple Access Channels with An Eavesdropper
Secret Key Agreement Using Conferencing in State- Dependent Multiple Access Channels with An Eavesdropper Mohsen Bahrami, Ali Bereyhi, Mahtab Mirmohseni and Mohammad Reza Aref Information Systems and Security
More informationLecture 22: Final Review
Lecture 22: Final Review Nuts and bolts Fundamental questions and limits Tools Practical algorithms Future topics Dr Yao Xie, ECE587, Information Theory, Duke University Basics Dr Yao Xie, ECE587, Information
More informationLecture 11: Polar codes construction
15-859: Information Theory and Applications in TCS CMU: Spring 2013 Lecturer: Venkatesan Guruswami Lecture 11: Polar codes construction February 26, 2013 Scribe: Dan Stahlke 1 Polar codes: recap of last
More informationSource and Channel Coding for Correlated Sources Over Multiuser Channels
Source and Channel Coding for Correlated Sources Over Multiuser Channels Deniz Gündüz, Elza Erkip, Andrea Goldsmith, H. Vincent Poor Abstract Source and channel coding over multiuser channels in which
More informationSOURCE CODING WITH SIDE INFORMATION AT THE DECODER (WYNER-ZIV CODING) FEB 13, 2003
SOURCE CODING WITH SIDE INFORMATION AT THE DECODER (WYNER-ZIV CODING) FEB 13, 2003 SLEPIAN-WOLF RESULT { X i} RATE R x ENCODER 1 DECODER X i V i {, } { V i} ENCODER 0 RATE R v Problem: Determine R, the
More informationOn Common Information and the Encoding of Sources that are Not Successively Refinable
On Common Information and the Encoding of Sources that are Not Successively Refinable Kumar Viswanatha, Emrah Akyol, Tejaswi Nanjundaswamy and Kenneth Rose ECE Department, University of California - Santa
More informationVariable-Rate Universal Slepian-Wolf Coding with Feedback
Variable-Rate Universal Slepian-Wolf Coding with Feedback Shriram Sarvotham, Dror Baron, and Richard G. Baraniuk Dept. of Electrical and Computer Engineering Rice University, Houston, TX 77005 Abstract
More informationCapacity of the Discrete Memoryless Energy Harvesting Channel with Side Information
204 IEEE International Symposium on Information Theory Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information Omur Ozel, Kaya Tutuncuoglu 2, Sennur Ulukus, and Aylin Yener
More information(Classical) Information Theory III: Noisy channel coding
(Classical) Information Theory III: Noisy channel coding Sibasish Ghosh The Institute of Mathematical Sciences CIT Campus, Taramani, Chennai 600 113, India. p. 1 Abstract What is the best possible way
More informationA Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying
A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying Ahmad Abu Al Haija, and Mai Vu, Department of Electrical and Computer Engineering McGill University Montreal, QC H3A A7 Emails: ahmadabualhaija@mailmcgillca,
More informationAn Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel
Forty-Ninth Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 8-3, An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel Invited Paper Bernd Bandemer and
More informationOn the Capacity of the Interference Channel with a Relay
On the Capacity of the Interference Channel with a Relay Ivana Marić, Ron Dabora and Andrea Goldsmith Stanford University, Stanford, CA {ivanam,ron,andrea}@wsl.stanford.edu Abstract Capacity gains due
More informationOn Source-Channel Communication in Networks
On Source-Channel Communication in Networks Michael Gastpar Department of EECS University of California, Berkeley gastpar@eecs.berkeley.edu DIMACS: March 17, 2003. Outline 1. Source-Channel Communication
More informationLecture 14 February 28
EE/Stats 376A: Information Theory Winter 07 Lecture 4 February 8 Lecturer: David Tse Scribe: Sagnik M, Vivek B 4 Outline Gaussian channel and capacity Information measures for continuous random variables
More informationThe Method of Types and Its Application to Information Hiding
The Method of Types and Its Application to Information Hiding Pierre Moulin University of Illinois at Urbana-Champaign www.ifp.uiuc.edu/ moulin/talks/eusipco05-slides.pdf EUSIPCO Antalya, September 7,
More informationThe Sensor Reachback Problem
Submitted to the IEEE Trans. on Information Theory, November 2003. 1 The Sensor Reachback Problem João Barros Sergio D. Servetto Abstract We consider the problem of reachback communication in sensor networks.
More informationEfficient Use of Joint Source-Destination Cooperation in the Gaussian Multiple Access Channel
Efficient Use of Joint Source-Destination Cooperation in the Gaussian Multiple Access Channel Ahmad Abu Al Haija ECE Department, McGill University, Montreal, QC, Canada Email: ahmad.abualhaija@mail.mcgill.ca
More informationELEC546 Review of Information Theory
ELEC546 Review of Information Theory Vincent Lau 1/1/004 1 Review of Information Theory Entropy: Measure of uncertainty of a random variable X. The entropy of X, H(X), is given by: If X is a discrete random
More informationLecture 10: Broadcast Channel and Superposition Coding
Lecture 10: Broadcast Channel and Superposition Coding Scribed by: Zhe Yao 1 Broadcast channel M 0M 1M P{y 1 y x} M M 01 1 M M 0 The capacity of the broadcast channel depends only on the marginal conditional
More informationX 1 : X Table 1: Y = X X 2
ECE 534: Elements of Information Theory, Fall 200 Homework 3 Solutions (ALL DUE to Kenneth S. Palacio Baus) December, 200. Problem 5.20. Multiple access (a) Find the capacity region for the multiple-access
More informationII. THE TWO-WAY TWO-RELAY CHANNEL
An Achievable Rate Region for the Two-Way Two-Relay Channel Jonathan Ponniah Liang-Liang Xie Department of Electrical Computer Engineering, University of Waterloo, Canada Abstract We propose an achievable
More informationDistributed Lossless Compression. Distributed lossless compression system
Lecture #3 Distributed Lossless Compression (Reading: NIT 10.1 10.5, 4.4) Distributed lossless source coding Lossless source coding via random binning Time Sharing Achievability proof of the Slepian Wolf
More informationNotes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel
Introduction to Coding Theory CMU: Spring 2010 Notes 3: Stochastic channels and noisy coding theorem bound January 2010 Lecturer: Venkatesan Guruswami Scribe: Venkatesan Guruswami We now turn to the basic
More informationOn Scalable Source Coding for Multiple Decoders with Side Information
On Scalable Source Coding for Multiple Decoders with Side Information Chao Tian School of Computer and Communication Sciences Laboratory for Information and Communication Systems (LICOS), EPFL, Lausanne,
More informationLECTURE 13. Last time: Lecture outline
LECTURE 13 Last time: Strong coding theorem Revisiting channel and codes Bound on probability of error Error exponent Lecture outline Fano s Lemma revisited Fano s inequality for codewords Converse to
More information1 Introduction to information theory
1 Introduction to information theory 1.1 Introduction In this chapter we present some of the basic concepts of information theory. The situations we have in mind involve the exchange of information through
More informationCoding Techniques for Primitive Relay Channels
Forty-Fifth Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 26-28, 2007 WeB1.2 Coding Techniques for Primitive Relay Channels Young-Han Kim Abstract We give a comprehensive discussion
More informationDistributed Lossy Interactive Function Computation
Distributed Lossy Interactive Function Computation Solmaz Torabi & John MacLaren Walsh Dept. of Electrical and Computer Engineering Drexel University Philadelphia, PA 19104 Email: solmaz.t@drexel.edu &
More informationOn the Capacity Region of the Gaussian Z-channel
On the Capacity Region of the Gaussian Z-channel Nan Liu Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 74 nkancy@eng.umd.edu ulukus@eng.umd.edu
More informationEE376A - Information Theory Final, Monday March 14th 2016 Solutions. Please start answering each question on a new page of the answer booklet.
EE376A - Information Theory Final, Monday March 14th 216 Solutions Instructions: You have three hours, 3.3PM - 6.3PM The exam has 4 questions, totaling 12 points. Please start answering each question on
More informationMulticoding Schemes for Interference Channels
Multicoding Schemes for Interference Channels 1 Ritesh Kolte, Ayfer Özgür, Haim Permuter Abstract arxiv:1502.04273v1 [cs.it] 15 Feb 2015 The best known inner bound for the 2-user discrete memoryless interference
More informationEnergy State Amplification in an Energy Harvesting Communication System
Energy State Amplification in an Energy Harvesting Communication System Omur Ozel Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland College Park, MD 20742 omur@umd.edu
More informationNoisy channel communication
Information Theory http://www.inf.ed.ac.uk/teaching/courses/it/ Week 6 Communication channels and Information Some notes on the noisy channel setup: Iain Murray, 2012 School of Informatics, University
More informationThe Capacity Region of the Gaussian Cognitive Radio Channels at High SNR
The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR 1 Stefano Rini, Daniela Tuninetti and Natasha Devroye srini2, danielat, devroye @ece.uic.edu University of Illinois at Chicago Abstract
More informationThe Capacity Region of the Cognitive Z-interference Channel with One Noiseless Component
1 The Capacity Region of the Cognitive Z-interference Channel with One Noiseless Component Nan Liu, Ivana Marić, Andrea J. Goldsmith, Shlomo Shamai (Shitz) arxiv:0812.0617v1 [cs.it] 2 Dec 2008 Dept. of
More informationA New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality
0 IEEE Information Theory Workshop A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality Kumar Viswanatha, Emrah Akyol and Kenneth Rose ECE Department, University of California
More informationInformation Theory for Wireless Communications. Lecture 10 Discrete Memoryless Multiple Access Channel (DM-MAC): The Converse Theorem
Information Theory for Wireless Communications. Lecture 0 Discrete Memoryless Multiple Access Channel (DM-MAC: The Converse Theorem Instructor: Dr. Saif Khan Mohammed Scribe: Antonios Pitarokoilis I. THE
More informationOn Function Computation with Privacy and Secrecy Constraints
1 On Function Computation with Privacy and Secrecy Constraints Wenwen Tu and Lifeng Lai Abstract In this paper, the problem of function computation with privacy and secrecy constraints is considered. The
More informationOn the Rate-Limited Gelfand-Pinsker Problem
On the Rate-Limited Gelfand-Pinsker Problem Ravi Tandon Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 74 ravit@umd.edu ulukus@umd.edu Abstract
More informationUpper Bounds on the Capacity of Binary Intermittent Communication
Upper Bounds on the Capacity of Binary Intermittent Communication Mostafa Khoshnevisan and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame, Indiana 46556 Email:{mhoshne,
More information18.2 Continuous Alphabet (discrete-time, memoryless) Channel
0-704: Information Processing and Learning Spring 0 Lecture 8: Gaussian channel, Parallel channels and Rate-distortion theory Lecturer: Aarti Singh Scribe: Danai Koutra Disclaimer: These notes have not
More informationOn the Duality of Gaussian Multiple-Access and Broadcast Channels
On the Duality of Gaussian ultiple-access and Broadcast Channels Xiaowei Jin I. INTODUCTION Although T. Cover has been pointed out in [] that one would have expected a duality between the broadcast channel(bc)
More informationCommon Information. Abbas El Gamal. Stanford University. Viterbi Lecture, USC, April 2014
Common Information Abbas El Gamal Stanford University Viterbi Lecture, USC, April 2014 Andrew Viterbi s Fabulous Formula, IEEE Spectrum, 2010 El Gamal (Stanford University) Disclaimer Viterbi Lecture 2
More informationCapacity Region of Reversely Degraded Gaussian MIMO Broadcast Channel
Capacity Region of Reversely Degraded Gaussian MIMO Broadcast Channel Jun Chen Dept. of Electrical and Computer Engr. McMaster University Hamilton, Ontario, Canada Chao Tian AT&T Labs-Research 80 Park
More informationCapacity of a channel Shannon s second theorem. Information Theory 1/33
Capacity of a channel Shannon s second theorem Information Theory 1/33 Outline 1. Memoryless channels, examples ; 2. Capacity ; 3. Symmetric channels ; 4. Channel Coding ; 5. Shannon s second theorem,
More informationEE5139R: Problem Set 7 Assigned: 30/09/15, Due: 07/10/15
EE5139R: Problem Set 7 Assigned: 30/09/15, Due: 07/10/15 1. Cascade of Binary Symmetric Channels The conditional probability distribution py x for each of the BSCs may be expressed by the transition probability
More informationCapacity of channel with energy harvesting transmitter
IET Communications Research Article Capacity of channel with energy harvesting transmitter ISSN 75-868 Received on nd May 04 Accepted on 7th October 04 doi: 0.049/iet-com.04.0445 www.ietdl.org Hamid Ghanizade
More informationInteractive Hypothesis Testing with Communication Constraints
Fiftieth Annual Allerton Conference Allerton House, UIUC, Illinois, USA October - 5, 22 Interactive Hypothesis Testing with Communication Constraints Yu Xiang and Young-Han Kim Department of Electrical
More informationSecret Key Agreement Using Asymmetry in Channel State Knowledge
Secret Key Agreement Using Asymmetry in Channel State Knowledge Ashish Khisti Deutsche Telekom Inc. R&D Lab USA Los Altos, CA, 94040 Email: ashish.khisti@telekom.com Suhas Diggavi LICOS, EFL Lausanne,
More informationAn Achievable Rate for the Multiple Level Relay Channel
An Achievable Rate for the Multiple Level Relay Channel Liang-Liang Xie and P. R. Kumar Department of Electrical and Computer Engineering, and Coordinated Science Laboratory University of Illinois, Urbana-Champaign
More informationarxiv: v1 [cs.it] 5 Feb 2016
An Achievable Rate-Distortion Region for Multiple Descriptions Source Coding Based on Coset Codes Farhad Shirani and S. Sandeep Pradhan Dept. of Electrical Engineering and Computer Science Univ. of Michigan,
More informationRepresentation of Correlated Sources into Graphs for Transmission over Broadcast Channels
Representation of Correlated s into Graphs for Transmission over Broadcast s Suhan Choi Department of Electrical Eng. and Computer Science University of Michigan, Ann Arbor, MI 80, USA Email: suhanc@eecs.umich.edu
More informationEECS 229A Spring 2007 * * (a) By stationarity and the chain rule for entropy, we have
EECS 229A Spring 2007 * * Solutions to Homework 3 1. Problem 4.11 on pg. 93 of the text. Stationary processes (a) By stationarity and the chain rule for entropy, we have H(X 0 ) + H(X n X 0 ) = H(X 0,
More informationCut-Set Bound and Dependence Balance Bound
Cut-Set Bound and Dependence Balance Bound Lei Xiao lxiao@nd.edu 1 Date: 4 October, 2006 Reading: Elements of information theory by Cover and Thomas [1, Section 14.10], and the paper by Hekstra and Willems
More informationReliable Computation over Multiple-Access Channels
Reliable Computation over Multiple-Access Channels Bobak Nazer and Michael Gastpar Dept. of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA, 94720-1770 {bobak,
More informationInterference Channel aided by an Infrastructure Relay
Interference Channel aided by an Infrastructure Relay Onur Sahin, Osvaldo Simeone, and Elza Erkip *Department of Electrical and Computer Engineering, Polytechnic Institute of New York University, Department
More informationSOURCE coding problems with side information at the decoder(s)
1458 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 59, NO. 3, MARCH 2013 Heegard Berger Cascade Source Coding Problems With Common Reconstruction Constraints Behzad Ahmadi, Student Member, IEEE, Ravi Ton,
More informationECE Information theory Final
ECE 776 - Information theory Final Q1 (1 point) We would like to compress a Gaussian source with zero mean and variance 1 We consider two strategies In the first, we quantize with a step size so that the
More informationDigital Communications III (ECE 154C) Introduction to Coding and Information Theory
Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Tara Javidi These lecture notes were originally developed by late Prof. J. K. Wolf. UC San Diego Spring 2014 1 / 8 I
More informationThe Capacity Region of a Class of Discrete Degraded Interference Channels
The Capacity Region of a Class of Discrete Degraded Interference Channels Nan Liu Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 74 nkancy@umd.edu
More informationLecture 5 Channel Coding over Continuous Channels
Lecture 5 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 14, 2014 1 / 34 I-Hsiang Wang NIT Lecture 5 From
More informationLayered Synthesis of Latent Gaussian Trees
Layered Synthesis of Latent Gaussian Trees Ali Moharrer, Shuangqing Wei, George T. Amariucai, and Jing Deng arxiv:1608.04484v2 [cs.it] 7 May 2017 Abstract A new synthesis scheme is proposed to generate
More informationAN INTRODUCTION TO SECRECY CAPACITY. 1. Overview
AN INTRODUCTION TO SECRECY CAPACITY BRIAN DUNN. Overview This paper introduces the reader to several information theoretic aspects of covert communications. In particular, it discusses fundamental limits
More informationProblemsWeCanSolveWithaHelper
ITW 2009, Volos, Greece, June 10-12, 2009 ProblemsWeCanSolveWitha Haim Permuter Ben-Gurion University of the Negev haimp@bgu.ac.il Yossef Steinberg Technion - IIT ysteinbe@ee.technion.ac.il Tsachy Weissman
More informationGraph Coloring and Conditional Graph Entropy
Graph Coloring and Conditional Graph Entropy Vishal Doshi, Devavrat Shah, Muriel Médard, Sidharth Jaggi Laboratory for Information and Decision Systems Massachusetts Institute of Technology Cambridge,
More informationLecture 15: Conditional and Joint Typicaility
EE376A Information Theory Lecture 1-02/26/2015 Lecture 15: Conditional and Joint Typicaility Lecturer: Kartik Venkat Scribe: Max Zimet, Brian Wai, Sepehr Nezami 1 Notation We always write a sequence of
More informationAppendix B Information theory from first principles
Appendix B Information theory from first principles This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes
More informationEECS 750. Hypothesis Testing with Communication Constraints
EECS 750 Hypothesis Testing with Communication Constraints Name: Dinesh Krithivasan Abstract In this report, we study a modification of the classical statistical problem of bivariate hypothesis testing.
More informationEE229B - Final Project. Capacity-Approaching Low-Density Parity-Check Codes
EE229B - Final Project Capacity-Approaching Low-Density Parity-Check Codes Pierre Garrigues EECS department, UC Berkeley garrigue@eecs.berkeley.edu May 13, 2005 Abstract The class of low-density parity-check
More informationKeyless authentication in the presence of a simultaneously transmitting adversary
Keyless authentication in the presence of a simultaneously transmitting adversary Eric Graves Army Research Lab Adelphi MD 20783 U.S.A. ericsgra@ufl.edu Paul Yu Army Research Lab Adelphi MD 20783 U.S.A.
More informationPerformance-based Security for Encoding of Information Signals. FA ( ) Paul Cuff (Princeton University)
Performance-based Security for Encoding of Information Signals FA9550-15-1-0180 (2015-2018) Paul Cuff (Princeton University) Contributors Two students finished PhD Tiance Wang (Goldman Sachs) Eva Song
More informationA digital interface for Gaussian relay and interference networks: Lifting codes from the discrete superposition model
A digital interface for Gaussian relay and interference networks: Lifting codes from the discrete superposition model M. Anand, Student Member, IEEE, and P. R. Kumar, Fellow, IEEE Abstract For every Gaussian
More informationHomework Set #2 Data Compression, Huffman code and AEP
Homework Set #2 Data Compression, Huffman code and AEP 1. Huffman coding. Consider the random variable ( x1 x X = 2 x 3 x 4 x 5 x 6 x 7 0.50 0.26 0.11 0.04 0.04 0.03 0.02 (a Find a binary Huffman code
More information