The Unbounded Benefit of Encoder Cooperation for the k-user MAC


Parham Noorzad, Student Member, IEEE, Michelle Effros, Fellow, IEEE, and Michael Langberg, Senior Member, IEEE

arXiv: v2 [cs.IT] 30 Sep 2016

Abstract—Cooperation strategies allow communication devices to work together to improve network capacity. Consider a network consisting of a $k$-user multiple access channel (MAC) and a node that is connected to all $k$ encoders via rate-limited bidirectional links, referred to as the cooperation facilitator (CF). Define the cooperation benefit as the sum-capacity gain resulting from the communication between the encoders and the CF, and the cooperation rate as the total rate the CF shares with the encoders. This work demonstrates the existence of a class of $k$-user MACs where the ratio of the cooperation benefit to the cooperation rate tends to infinity as the cooperation rate tends to zero. Examples of channels in this class include the binary erasure MAC for $k = 2$ and the $k$-user Gaussian MAC for any $k \ge 2$.

Index Terms—Conferencing encoders, cooperation facilitator, cost constraints, edge removal problem, multiple access channel, multivariate covering lemma, network information theory.

I. INTRODUCTION

In large networks, resources may not always be distributed evenly across the network. There may be times when parts of a network are underutilized while others are overconstrained, leading to suboptimal performance. In such situations, end users are not able to use their devices to their full capabilities. One approach to addressing this problem allows some nodes in the network to cooperate, that is, to work together, either directly or indirectly, to achieve common goals. The model we next introduce is based on this idea.

In the classical $k$-user multiple access channel (MAC) [3], there are $k$ encoders and a single decoder. Each encoder has a private message which it transmits over $n$ channel uses to the decoder. The decoder, once it receives $n$ output symbols, finds the messages of all $k$ encoders with small average probability of error. In this model, the encoders cannot cooperate, since each encoder only has access to its own message.

This paper was presented in part at the 2015 IEEE International Symposium on Information Theory (ISIT) in Hong Kong [1] and the 2016 IEEE ISIT in Barcelona, Spain [2]. This material is based upon work supported by the National Science Foundation under Grant Numbers , , and . P. Noorzad and M. Effros are with the California Institute of Technology, Pasadena, CA, USA (e-mails: parham@caltech.edu, effros@caltech.edu). M. Langberg is with the State University of New York at Buffalo, Buffalo, NY, USA (e-mail: mikel@buffalo.edu).

Figure 1. The network consisting of a $k$-user MAC and a CF. For $j \in [k]$, encoder $j$ has access to message $w_j \in [2^{nR_j}]$.

We now consider an alternative scenario where our $k$-user MAC is part of a larger network. In this network, there is a node that is connected to all $k$ encoders and acts as a cooperation facilitator (CF). Specifically, for every $j \in [k]$,¹ there is a link of capacity $C^{\mathrm{in}}_j \ge 0$ going from encoder $j$ to the CF and a link of capacity $C^{\mathrm{out}}_j \ge 0$ going back. The CF helps the encoders exchange information before they transmit their codewords over the MAC. Figure 1 depicts a network consisting of a $k$-user MAC and a $(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$-CF, where $\mathbf{C}_{\mathrm{in}} = (C^{\mathrm{in}}_j)_{j=1}^k$ and $\mathbf{C}_{\mathrm{out}} = (C^{\mathrm{out}}_j)_{j=1}^k$ denote the capacities of the CF input and output links. In this figure, $X^n_{[k]} = (X^n_1, \dots, X^n_k)$ is the vector of the channel inputs of the $k$ encoders, and $\hat{W}_{[k]} = (\hat{W}_1, \dots, \hat{W}_k)$ is the vector of message reproductions at the decoder.

The communication between the CF and the encoders occurs over a number of rounds. In the first round of cooperation, each encoder sends a rate-limited function of its message to the CF, and the CF sends a rate-limited function of what it receives back to each encoder. Communication between the encoders and the CF may continue for a finite number of rounds, with each node potentially using information received in prior rounds to determine its next transmission. Once the communication between the CF and the encoders is done, each encoder uses its message and what it has learned through the CF to choose a codeword, which it transmits across the channel.

Our main result (Theorem 3) determines a set of MACs where the benefit of encoder cooperation through a CF grows very quickly with $\mathbf{C}_{\mathrm{out}}$. Specifically, we find a class of MACs $\mathcal{C}^*$, where every MAC in $\mathcal{C}^*$ has the property that for any fixed $\mathbf{C}_{\mathrm{in}} \in \mathbb{R}^k_{>0}$, the sum-capacity of that MAC with a $(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$-CF has an infinite derivative in the direction of every $\mathbf{v} \in \mathbb{R}^k_{>0}$ at $\mathbf{C}_{\mathrm{out}} = \mathbf{0}$. In other words, as a function of $\mathbf{C}_{\mathrm{out}}$, the sum-capacity grows faster than any function with bounded derivative at $\mathbf{C}_{\mathrm{out}} = \mathbf{0}$. This means that for any MAC in $\mathcal{C}^*$, sharing a small number of bits with each encoder leads to a large gain in sum-capacity.

An important implication of this result is the existence of a memoryless network that does not satisfy the edge removal property [4], [5]. A network satisfies the edge removal property if removing an edge of capacity $\delta > 0$ changes the capacity region by at most $\delta$ in each dimension. Thus removing an edge of capacity $\delta$ from a network which has $k$ sources and satisfies the edge removal property decreases sum-capacity by at most $k\delta$, a linear function of $\delta$. Now consider a network consisting of a MAC in $\mathcal{C}^*$ and a $(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$-CF, where $\mathbf{C}_{\mathrm{in}} \in \mathbb{R}^k_{>0}$. Our main result (Theorem 3) implies that for small $\mathbf{C}_{\mathrm{out}}$, removing all the output edges reduces sum-capacity by an amount much larger than $k \sum_{j=1}^k C^{\mathrm{out}}_j$. Thus there exist memoryless networks that do not satisfy the edge removal property.

¹ The notation $[x]$ describes the set $\{1, \dots, \lceil x \rceil\}$ for any real number $x \ge 1$.
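To make the round structure concrete, the following toy simulation sketches the exchange described above. It is illustrative only: the class names, the 8-bit symbol alphabets, and the hash-based maps stand in for the abstract rate-limited functions $\varphi_{jl}$ and $\psi_{jl}$ defined in Section II; none of these names come from the paper.

```python
# A minimal sketch of the L-round encoder-CF exchange (all names hypothetical).

class Encoder:
    def __init__(self, j, message):
        self.j, self.message, self.received = j, message, []

    def round_output(self, l):
        # Stand-in for phi_{jl}: a function of (w_j, everything heard so far),
        # rate-limited here to an 8-bit symbol.
        return hash((self.j, self.message, tuple(self.received), l)) % 256

class Facilitator:
    def __init__(self, k):
        self.heard = [[] for _ in range(k)]

    def round_output(self, j, inputs, l):
        # Stand-in for psi_{jl}: a function of all symbols received so far.
        for i, u in enumerate(inputs):
            self.heard[i].append(u)
        return hash((j, tuple(map(tuple, self.heard)), l)) % 256

def cooperate(encoders, cf, L):
    for l in range(L):
        ups = [e.round_output(l) for e in encoders]       # encoders -> CF
        for e in encoders:
            e.received.append(cf.round_output(e.j, ups, l))  # CF -> encoders

encoders = [Encoder(j, msg) for j, msg in enumerate([3, 7, 1])]
cooperate(encoders, Facilitator(k=3), L=2)
```

After the loop, each encoder holds its message plus everything it learned through the CF, which is exactly the information available to the encoding function $f_j$ below.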

The first example of such a network appeared in [6].

We introduce the coding scheme that leads to Theorem 3 in Section IV. This scheme combines forwarding, coordination, and classical MAC coding. In forwarding, each encoder sends part of its message to all other encoders by passing that information through the CF.² When $k = 2$, forwarding is equivalent to a single round of conferencing as described in [8]. The coordination strategy is a modified version of Marton's coding scheme for the broadcast channel [9], [10]. To implement this strategy, the CF shares information with the encoders that enables them to transmit codewords that are jointly typical with respect to a dependent distribution; this is proven using a multivariate version of the covering lemma [11, p. 218]. The multivariate covering lemma is stated for strongly typical sets in [11]. In Appendix A, using the proof of the 2-user case from [11] and techniques from [12], we prove this lemma for weakly typical sets [13, p. 251]. Using weakly typical sets in our achievability proof allows our results to extend to continuous (e.g., Gaussian) channels without the need for quantization. Finally, the classical MAC strategy is Ulrey's [3] extension of Ahlswede's [14], [15] and Liao's [16] coding strategy to the $k$-user MAC.

Using techniques from Willems [8], we derive an outer bound (Proposition 5) for the capacity region of the MAC with a $(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$-CF. This outer bound does not capture the dependence of the capacity region on $\mathbf{C}_{\mathrm{out}}$ and is thus loose for some values of $\mathbf{C}_{\mathrm{out}}$. However, if the entries of $\mathbf{C}_{\mathrm{out}}$ are sufficiently larger than the entries of $\mathbf{C}_{\mathrm{in}}$, then our inner and outer bounds agree and we obtain the capacity region (Corollary 6).

In Section V, we apply our results to the 2-user Gaussian MAC with a CF that has access to the messages of both encoders and has links of output capacity $C_{\mathrm{out}}$. We show that for small $C_{\mathrm{out}}$, the achievable sum-rate gain approximately equals a constant times $\sqrt{C_{\mathrm{out}}}$. A similar approximation holds for a weighted version of the sum-rate as well, as we see in Proposition 7. This result implies that, at least for the 2-user Gaussian MAC, the benefit of cooperation is not limited to sum-capacity and applies to other capacity region metrics as well.

In Section VI, we consider the extension of Willems' conferencing model [8] from 2 to $k$ users. A special case of this model with $k = 3$ is studied in [17] for the Gaussian MAC. While the authors of [17] use two conferencing rounds in their achievability result, it is not clear from [17] whether there is a benefit in using two rounds instead of one, and if so, how large that benefit is. Here we explicitly show that a single conferencing round is not optimal for $k \ge 3$, even though it is known to be optimal when $k = 2$ [8]. Finally, we apply our outer bound for the $k$-user MAC with a CF to obtain an outer bound for the $k$-user MAC with conferencing. The resulting outer bound is tight when $k = 2$.

In the next section, we formally define the capacity region of the network consisting of a $k$-user MAC and a CF.

II. MODEL

Consider a network with $k$ encoders, a CF, a $k$-user MAC, and a decoder (Figure 1).
For each $j \in [k]$, encoder $j$ communicates with the CF using noiseless links of capacities $C^{\mathrm{in}}_j \ge 0$ and $C^{\mathrm{out}}_j \ge 0$ going to and from the CF, respectively.²

² While it is possible to consider encoders that send different parts of their messages to different encoders using Han's result for the MAC with correlated sources [7], we avoid these cases for simplicity.

The $k$ encoders communicate with the decoder through a MAC $(\mathcal{X}_{[k]}, p(y|x_{[k]}), \mathcal{Y})$, where $\mathcal{X}_{[k]} = \prod_{j=1}^k \mathcal{X}_j$, and an element of $\mathcal{X}_{[k]}$ is denoted by $x_{[k]}$. We say a MAC is discrete if $\mathcal{X}_{[k]}$ and $\mathcal{Y}$ are either finite or countably infinite, and $p(y|x_{[k]})$ is a probability mass function on $\mathcal{Y}$ for every $x_{[k]} \in \mathcal{X}_{[k]}$. We say a MAC is continuous if $\mathcal{X}_{[k]} = \mathbb{R}^k$, $\mathcal{Y} = \mathbb{R}$, and $p(y|x_{[k]})$ is a probability density function on $\mathcal{Y}$ for all $x_{[k]}$. In addition, we assume that our channel is memoryless and without feedback [13, p. 193], so that for every positive integer $n$, the $n$th extension channel of our MAC is given by

$$p(y^n|x^n_{[k]}) = \prod_{t=1}^n p(y_t|x_{[k]t}) \quad \text{for all } (x^n_{[k]}, y^n) \in \mathcal{X}^n_{[k]} \times \mathcal{Y}^n.$$

An example of a continuous MAC is the $k$-user Gaussian MAC with noise variance $N > 0$, where

$$p(y|x_{[k]}) = \frac{1}{\sqrt{2\pi N}} \exp\Big(-\frac{1}{2N}\Big(y - \sum_{j=1}^k x_j\Big)^2\Big). \quad (1)$$

Henceforth, all MACs are memoryless and without feedback, and either discrete or continuous.

We next describe a $(2^{nR_1}, \dots, 2^{nR_k}, n, L)$-code for the MAC $(\mathcal{X}_{[k]}, p(y|x_{[k]}), \mathcal{Y})$ with a $(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$-CF, with cost functions $(b_j)_{j=1}^k$ and cost constraint vector $\mathbf{B} = (B_j)_{j=1}^k \in \mathbb{R}^k_{\ge 0}$. For each $j \in [k]$, cost function $b_j$ is a fixed mapping from $\mathcal{X}_j$ to $\mathbb{R}_{\ge 0}$. Each encoder $j \in [k]$ wishes to transmit a message $w_j \in [2^{nR_j}]$ to the decoder. This is accomplished by first exchanging information with the CF and then transmitting across the MAC.

Communication with the CF occurs in $L$ rounds. For each $j \in [k]$ and $l \in [L]$, sets $\mathcal{U}_{jl}$ and $\mathcal{V}_{jl}$, respectively, describe the alphabets of symbols that encoder $j$ can send to and receive from the CF in round $l$. These alphabets satisfy the link capacity constraints

$$\sum_{l=1}^L \log|\mathcal{U}_{jl}| \le nC^{\mathrm{in}}_j, \qquad \sum_{l=1}^L \log|\mathcal{V}_{jl}| \le nC^{\mathrm{out}}_j. \quad (2)$$

The operations of encoder $j$ and the CF, respectively, in round $l$ are given by

$$\varphi_{jl} : [2^{nR_j}] \times \mathcal{V}^{l-1}_j \to \mathcal{U}_{jl}, \qquad \psi_{jl} : \prod_{i=1}^k \mathcal{U}^l_i \to \mathcal{V}_{jl},$$

where $\mathcal{U}^l_j = \prod_{l'=1}^l \mathcal{U}_{jl'}$ and $\mathcal{V}^l_j = \prod_{l'=1}^l \mathcal{V}_{jl'}$. After its exchange with the CF, encoder $j$ applies a function

$$f_j : [2^{nR_j}] \times \mathcal{V}^L_j \to \mathcal{X}^n_j$$

to choose a codeword, which it transmits across the channel. In addition, every $x^n_j$ in the range of $f_j$ satisfies

$$\sum_{t=1}^n b_j(x_{jt}) \le nB_j.$$
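As a quick numerical companion to these definitions, the sketch below evaluates the Gaussian MAC density (1) and checks the per-codeword cost constraint. The function names are ours, and the choice $b(x) = x^2$ with $B = P$ anticipates the power constraints of Section V.

```python
import math

def gaussian_mac_density(y, x, N):
    """Channel density p(y | x_[k]) of the k-user Gaussian MAC, Eq. (1)."""
    return math.exp(-(y - sum(x)) ** 2 / (2 * N)) / math.sqrt(2 * math.pi * N)

def satisfies_cost(xn, b, B):
    """Per-codeword cost constraint: sum_t b(x_t) <= n * B."""
    return sum(b(x) for x in xn) <= len(xn) * B

# Power constraint b(x) = x^2 with B = P = 1:
print(gaussian_mac_density(0.5, [0.3, -0.1], N=1.0))
print(satisfies_cost([0.9, -1.1, 0.2], b=lambda x: x * x, B=1.0))
```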

The decoder receives channel output $Y^n$ and applies a function

$$g : \mathcal{Y}^n \to \prod_{j=1}^k [2^{nR_j}]$$

to obtain an estimate $\hat{W}_{[k]}$ of the message vector $w_{[k]}$. The encoders, CF, and decoder together define a $(2^{nR_1}, \dots, 2^{nR_k}, n, L)$-code. The average error probability of the code is $P^{(n)}_e = \Pr\{g(Y^n) \ne W_{[k]}\}$, where $W_{[k]}$ is the transmitted message vector and is uniformly distributed on $\prod_{j=1}^k [2^{nR_j}]$. A rate vector $R_{[k]} = (R_1, \dots, R_k)$ is achievable if there exists a sequence of $(2^{nR_1}, \dots, 2^{nR_k}, n, L)$-codes with $P^{(n)}_e \to 0$ as $n \to \infty$. The capacity region $\mathcal{C}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$ is defined as the closure of the set of all achievable rate vectors.

III. RESULTS

In this section, we describe the key results. In Subsection III-A, we present our inner bound. In Subsection III-B, we state our main result, which proves the existence of a class of MACs with large cooperation gain. Finally, in Subsection III-C, we discuss our outer bound.

A. Inner Bound

Using the coding scheme we introduce in Section IV, we obtain an inner bound for the capacity region of the $k$-user MAC with a $(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$-CF. The following definitions are useful for describing that bound. Choose vectors $\mathbf{C}_0 = (C_{j0})_{j=1}^k$ and $\mathbf{C}_d = (C_{jd})_{j=1}^k$ in $\mathbb{R}^k_{\ge 0}$ such that for all $j \in [k]$,

$$C_{j0} \le C^{\mathrm{in}}_j \quad (3)$$

$$C_{jd} + \sum_{i \ne j} C_{i0} \le C^{\mathrm{out}}_j. \quad (4)$$

Here $C_{j0}$ is the number of bits per channel use encoder $j$ sends directly to the other encoders via the CF, and $C_{jd}$ is the number of bits per channel use the CF transmits to encoder $j$ to implement the coordination strategy. The subscript "d" in $C_{jd}$ alludes to the dependence created through coordination. Let

$$S_d = \{j \in [k] : C_{jd} \ne 0\}$$

be the set of encoders that participate in this dependence.

Fix alphabets $\mathcal{U}_0, \mathcal{U}_1, \dots, \mathcal{U}_k$. For every nonempty $S \subseteq [k]$, let $\mathcal{U}_S$ be the set of all $u_S = (u_j)_{j \in S}$ where $u_j \in \mathcal{U}_j$ for all $j \in S$. Define the set $\mathcal{X}_S$ similarly. Let $\mathcal{P}(\mathcal{U}_0, \mathcal{U}_{[k]}, \mathcal{X}_{[k]}, S_d)$ be the set of all distributions on $\mathcal{U}_0 \times \mathcal{U}_{[k]} \times \mathcal{X}_{[k]}$ that are of the form

$$p(u_0) \Big(\prod_{i \in S^c_d} p(u_i|u_0)\Big) p(u_{S_d}|u_0, u_{S^c_d}) \prod_{j=1}^k p(x_j|u_0, u_j), \quad (5)$$

satisfy the dependence constraints³

$$\zeta(S) := \sum_{j \in S} C_{jd} - \sum_{j \in S} H(U_j|U_0) + H(U_S|U_0, U_{S^c_d}) > 0 \quad \text{for all nonempty } S \subseteq S_d,$$

and satisfy the cost constraints

$$\mathbb{E}[b_j(X_j)] \le B_j \quad \text{for all } j \in [k]. \quad (6)$$

³ The constraint on $\zeta(S)$ is imposed by the multivariate covering lemma (Appendix A), which we use in the proof of our inner bound.
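The dependence constraint can be evaluated mechanically for small discrete examples. The following sketch computes $\zeta(S)$ from a joint pmf of $(U_0, U_1, \dots, U_k)$; the helper names and the dictionary representation of the pmf are our own choices, not notation from the paper.

```python
import math
from collections import defaultdict

def cond_entropy(p, left, right):
    """H(X_left | X_right) in bits, for a joint pmf p: tuple -> prob.
    left/right are lists of coordinate indices into the outcome tuples."""
    joint, marg = defaultdict(float), defaultdict(float)
    for x, q in p.items():
        joint[tuple(x[i] for i in left) + tuple(x[i] for i in right)] += q
        marg[tuple(x[i] for i in right)] += q
    return -sum(q * math.log2(q / marg[key[len(left):]])
                for key, q in joint.items() if q > 0)

def zeta(p, S, S_d, C_d):
    """zeta(S) = sum_{j in S} C_jd - sum_{j in S} H(U_j|U_0)
                 + H(U_S | U_0, U_{S_d^c}).
    Coordinate 0 of each outcome tuple is u_0; coordinate j is u_j."""
    Sdc = [j for j in range(1, len(next(iter(p)))) if j not in S_d]
    val = sum(C_d[j] for j in S)
    val -= sum(cond_entropy(p, [j], [0]) for j in S)
    val += cond_entropy(p, sorted(S), [0] + Sdc)
    return val

# Demo: U_1 = U_2 = U_0 uniform on {0,1}; S_d = {1,2}; C_1d = C_2d = 0.1.
p = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
print(zeta(p, S={1, 2}, S_d={1, 2}, C_d={1: 0.1, 2: 0.1}))  # 0.2 > 0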

Here $U_0$ encodes the common message, which, for every $j \in [k]$, contains $nC_{j0}$ bits from the message of encoder $j$ and is shared with all other encoders through the CF; each random variable $U_j$ captures the information encoder $j$ receives from the CF to create dependence with the codewords of other encoders. The random variable $X_j$ represents the symbol encoder $j$ transmits over the channel.

For any $\mathbf{C}_0, \mathbf{C}_d \in \mathbb{R}^k_{\ge 0}$ satisfying (3) and (4) and any $p \in \mathcal{P}(\mathcal{U}_0, \mathcal{U}_{[k]}, \mathcal{X}_{[k]}, S_d)$, let $\mathcal{R}(\mathbf{C}_0, \mathbf{C}_d, p)$ be the set of all $(R_1, \dots, R_k)$ for which

$$\sum_{j=1}^k R_j < I(X_{[k]}; Y) - \zeta(S_d), \quad (7)$$

and for every $S, T \subseteq [k]$,

$$\sum_{j \in A} (R_j - C_{j0})^+ + \sum_{j \in B \cap T} (R_j - C^{\mathrm{in}}_j)^+ < I(U_A, X_{A \cup (B \cap T)}; Y \mid U_0, U_B, X_{B \setminus T}) - \zeta((A \cup B) \cap S_d) \quad (8)$$

holds for some sets $A$ and $B$ for which $S \cap S^c_d \subseteq A \subseteq S$ and $S^c \cap S^c_d \subseteq B \subseteq S^c$.

We next state our inner bound for the $k$-user MAC with encoder cooperation via a CF. The coding strategy that achieves this inner bound uses only a single round of cooperation ($L = 1$). The proof is given in Subsection VII-A.

Theorem 1 (Inner Bound). For any MAC $(\mathcal{X}_{[k]}, p(y|x_{[k]}), \mathcal{Y})$ with a $(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$-CF,

$$\mathcal{C}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}}) \supseteq \overline{\bigcup \mathcal{R}(\mathbf{C}_0, \mathbf{C}_d, p)},$$

where $\bar{A}$ denotes the closure of set $A$ and the union is over all $\mathbf{C}_0$ and $\mathbf{C}_d$ satisfying (3) and (4) and all $p \in \mathcal{P}(\mathcal{U}_0, \mathcal{U}_{[k]}, \mathcal{X}_{[k]}, S_d)$.

The achievable region given in Theorem 1 is convex, and thus we do not require the convex hull operation. The proof is similar to [1], [18] and is omitted.

The next corollary treats the case where the CF transmits the bits it receives from each encoder to all other encoders without change. In this case, our coding strategy simply combines forwarding with classical MAC encoding. We obtain this result from Theorem 1 by setting $C_{jd} = 0$ and $|\mathcal{U}_j| = 1$ for all $j \in [k]$ and choosing $A = S$ and $B = S^c$ for every $S, T \subseteq [k]$. In Corollary 2, $\mathcal{P}_{\mathrm{ind}}(\mathcal{U}_0, \mathcal{X}_{[k]})$ is the set of all distributions $p(u_0) \prod_{j=1}^k p(x_j|u_0)$ that satisfy the cost constraints (6).

Corollary 2 (Forwarding Inner Bound). The capacity region of any MAC with a $(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$-CF contains the set of all rate vectors that, for some constants $(C_{j0})_{j=1}^k$ satisfying (3) and (4) with $C_{jd} = 0$ for all $j$ and some distribution $p \in \mathcal{P}_{\mathrm{ind}}(\mathcal{U}_0, \mathcal{X}_{[k]})$, satisfy

$$\sum_{j \in S} R_j < I(X_S; Y | U_0, X_{S^c}) + \sum_{j \in S} C_{j0} \quad \text{for all } S \subseteq [k],$$

$$\sum_{j=1}^k R_j < I(X_{[k]}; Y).$$
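For intuition about Corollary 2, the snippet below evaluates the forwarding bounds for the 2-user Gaussian MAC with independent full-power Gaussian inputs and a constant $U_0$, which is a particular (not optimized) choice of distribution; the function names are illustrative.

```python
import math

def forwarding_bounds_2user(g1, g2, C10, C20):
    """Corollary 2 bounds (bits/channel use) for the 2-user Gaussian MAC
    with independent Gaussian inputs at full power and constant U_0."""
    C = lambda s: 0.5 * math.log2(1 + s)  # Gaussian point-to-point capacity
    return {
        "R1 <": C(g1) + C10,                                  # S = {1}
        "R2 <": C(g2) + C20,                                  # S = {2}
        "R1+R2 <": min(C(g1 + g2) + C10 + C20, C(g1 + g2)),   # sum bounds
    }

print(forwarding_bounds_2user(g1=100.0, g2=100.0, C10=0.1, C20=0.1))
```

Note that the unconditioned sum-rate bound $I(X_{[k]}; Y)$ is always active here, which is why forwarding alone gains at most linearly in the forwarded rates.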

B. Sum-Capacity Gain

We wish to understand when cooperation leads to a benefit that exceeds the resources employed to enable it. Therefore, we compare the gain in sum-capacity obtained through cooperation to the number of bits shared with the encoders to enable that gain. For any $k$-user MAC with a $(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$-CF, define the sum-capacity as

$$C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}}) = \max_{(R_1, \dots, R_k) \in \mathcal{C}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})} \sum_{j=1}^k R_j.$$

For a fixed $\mathbf{C}_{\mathrm{in}} \in \mathbb{R}^k_{\ge 0}$, define the sum-capacity gain $G : \mathbb{R}^k_{\ge 0} \to \mathbb{R}_{\ge 0}$ as

$$G(\mathbf{C}_{\mathrm{out}}) = C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}}) - C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{0}),$$

where $\mathbf{C}_{\mathrm{out}} = (C^{\mathrm{out}}_j)_{j=1}^k$ and $\mathbf{0} = (0, \dots, 0)$. Note that regardless of $\mathbf{C}_{\mathrm{in}}$, it follows from (2) that no cooperation is possible when $\mathbf{C}_{\mathrm{out}} = \mathbf{0}$. Thus

$$C_{\mathrm{sum}}(\mathbf{C}_{\mathrm{in}}, \mathbf{0}) = C_{\mathrm{sum}}(\mathbf{0}, \mathbf{0}) = \max_{p \in \mathcal{P}_{\mathrm{ind}}(\mathcal{X}_{[k]})} I(X_{[k]}; Y),$$

where $\mathcal{P}_{\mathrm{ind}}(\mathcal{X}_{[k]})$ is the set of all independent distributions $p(x_{[k]}) = \prod_{j=1}^k p(x_j)$ on $\mathcal{X}_{[k]}$ that satisfy the cost constraints (6). Similarly, $\mathcal{P}(\mathcal{X}_{[k]})$ is the set of all distributions on $\mathcal{X}_{[k]}$ that satisfy (6).

For sets $\mathcal{X}_1, \dots, \mathcal{X}_k, \mathcal{Y}$, cost functions $(b_j)_j$, and cost constraints $(B_j)_j$, we next define a special class of MACs $\mathcal{C}^*(\mathcal{X}_{[k]}, \mathcal{Y})$. We say a MAC $(\mathcal{X}_{[k]}, p(y|x_{[k]}), \mathcal{Y})$ is in $\mathcal{C}^*(\mathcal{X}_{[k]}, \mathcal{Y})$ if there exists $p_{\mathrm{ind}} \in \mathcal{P}_{\mathrm{ind}}(\mathcal{X}_{[k]})$ that satisfies

$$I_{\mathrm{ind}}(X_{[k]}; Y) = \max_{p \in \mathcal{P}_{\mathrm{ind}}(\mathcal{X}_{[k]})} I(X_{[k]}; Y),$$

and $p_{\mathrm{dep}} \in \mathcal{P}(\mathcal{X}_{[k]})$ whose support is contained in the support of $p_{\mathrm{ind}}$ and which satisfies

$$I_{\mathrm{dep}}(X_{[k]}; Y) + D\big(p_{\mathrm{dep}}(y) \,\|\, p_{\mathrm{ind}}(y)\big) > I_{\mathrm{ind}}(X_{[k]}; Y). \quad (9)$$

In the above equation, $p_{\mathrm{dep}}(y)$ and $p_{\mathrm{ind}}(y)$ are the output distributions corresponding to the input distributions $p_{\mathrm{dep}}(x_{[k]})$ and $p_{\mathrm{ind}}(x_{[k]})$, respectively. We remark that (9) is equivalent to

$$\mathbb{E}_{\mathrm{dep}}\big[D\big(p(y|X_{[k]}) \,\|\, p_{\mathrm{ind}}(y)\big)\big] > \mathbb{E}_{\mathrm{ind}}\big[D\big(p(y|X_{[k]}) \,\|\, p_{\mathrm{ind}}(y)\big)\big],$$

where the expectations are with respect to $p_{\mathrm{dep}}(x_{[k]})$ and $p_{\mathrm{ind}}(x_{[k]})$, respectively.

Using these definitions, we state our main result, which captures a family of MACs for which the slope of the gain function is infinite in every direction at $\mathbf{C}_{\mathrm{out}} = \mathbf{0}$. In this statement, for any unit vector $\mathbf{v} \in \mathbb{R}^k_{\ge 0}$, $D_{\mathbf{v}} G$ is the directional derivative of $G$ in the direction of $\mathbf{v}$. The proof appears in Subsection VII-B.

Theorem 3 (Sum-Capacity). Let $(\mathcal{X}_{[k]}, p(y|x_{[k]}), \mathcal{Y})$ be a MAC in $\mathcal{C}^*(\mathcal{X}_{[k]}, \mathcal{Y})$ and $\mathbf{C}_{\mathrm{in}} \in \mathbb{R}^k_{>0}$. Then for any unit vector $\mathbf{v} \in \mathbb{R}^k_{>0}$,

$$D_{\mathbf{v}} G(\mathbf{0}) = \infty.$$
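Condition (9) can be checked directly for small discrete channels. The sketch below does so for the 2-user binary erasure MAC $Y = X_1 + X_2$ mentioned in the introduction, using uniform independent inputs for $p_{\mathrm{ind}}$ and one particular dependent $p_{\mathrm{dep}}$; the code and the specific choice of $p_{\mathrm{dep}}$ are our constructions, not taken from the paper.

```python
import math
from collections import defaultdict

def mi_and_kl(p_x, channel, q_y):
    """Return I(X;Y) under p_x and D(p(y) || q_y), in bits.
    channel[x] is a dict y -> p(y|x); q_y is a reference output pmf."""
    p_y = defaultdict(float)
    for x, px in p_x.items():
        for y, pyx in channel[x].items():
            p_y[y] += px * pyx
    I = sum(px * pyx * math.log2(pyx / p_y[y])
            for x, px in p_x.items()
            for y, pyx in channel[x].items() if pyx > 0)
    D = sum(py * math.log2(py / q_y[y]) for y, py in p_y.items() if py > 0)
    return I, D, dict(p_y)

# Binary erasure MAC: Y = X1 + X2 with inputs in {0, 1}.
ch = {(a, b): {a + b: 1.0} for a in (0, 1) for b in (0, 1)}

# Independent uniform inputs (sum-rate optimal for this channel):
p_ind = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
I_ind, _, py_ind = mi_and_kl(p_ind, ch, {0: 1, 1: 1, 2: 1})  # D discarded

# A dependent input distribution supported on supp(p_ind):
p_dep = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.3}
I_dep, D, _ = mi_and_kl(p_dep, ch, py_ind)

print(I_dep + D > I_ind)  # condition (9): True
```

For this choice, $I_{\mathrm{dep}}(X_{[k]}; Y) + D(p_{\mathrm{dep}}(y) \| p_{\mathrm{ind}}(y)) \approx 1.60 > 1.5 = I_{\mathrm{ind}}(X_{[k]}; Y)$, consistent with the claim that the binary erasure MAC lies in this class.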

Note that for continuous MACs, when for $j \in [k]$ and $x \in \mathbb{R}$, $b_j(x) = x^2$, the cost constraints are referred to as power constraints. In addition, for every $j \in [k]$, the variable $P_j$ is commonly used instead of $B_j$. Our next proposition provides necessary and sufficient conditions under which the $k$-user Gaussian MAC with power constraints is in $\mathcal{C}^*(\mathbb{R}^k, \mathbb{R})$. The proof is provided in Subsection VII-C.

Proposition 4. The $k$-user Gaussian MAC with power constraint vector $\mathbf{P} = (P_j)_{j=1}^k \in \mathbb{R}^k_{\ge 0}$ is in $\mathcal{C}^*(\mathbb{R}^k, \mathbb{R})$ if and only if at least two entries of $\mathbf{P}$ are positive.

C. Outer Bound

We next describe our outer bound. While we make use of only a single round of cooperation in our inner bound (Theorem 1), the outer bound applies to all coding schemes regardless of the number of rounds.

Proposition 5 (Outer Bound). For the MAC $(\mathcal{X}_{[k]}, p(y|x_{[k]}), \mathcal{Y})$, $\mathcal{C}(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$ is a subset of the set of all rate vectors that, for some distribution $p \in \mathcal{P}_{\mathrm{ind}}(\mathcal{U}_0, \mathcal{X}_{[k]})$, satisfy

$$\sum_{j \in S} R_j \le I(X_S; Y | U_0, X_{S^c}) + \sum_{j \in S} C^{\mathrm{in}}_j \quad \text{for all } S \subseteq [k], \quad (10)$$

$$\sum_{j=1}^k R_j \le I(X_{[k]}; Y). \quad (11)$$

The proof of this proposition is given in Subsection VII-D. Our proof uses ideas similar to the proof of the converse for the 2-user MAC with conferencing [8]. If the capacities of the CF output links are sufficiently large, our inner and outer bounds coincide and we obtain the capacity region. This follows by setting $C_{j0} = C^{\mathrm{in}}_j$ for all $j \in [k]$ in our forwarding inner bound (Corollary 2) and comparing it with the outer bound given in Proposition 5.

Corollary 6. For the MAC $(\mathcal{X}_{[k]}, p(y|x_{[k]}), \mathcal{Y})$ with a $(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$-CF, if

$$C^{\mathrm{out}}_j \ge \sum_{i : i \ne j} C^{\mathrm{in}}_i \quad \text{for all } j \in [k],$$

then our inner and outer bounds agree.

IV. THE CODING SCHEME

Choose nonnegative constants $(C_{j0})_{j=1}^k$ and $(C_{jd})_{j=1}^k$ such that (3) and (4) hold for all $j \in [k]$. Fix a distribution $p \in \mathcal{P}(\mathcal{U}_0, \mathcal{U}_{[k]}, \mathcal{X}_{[k]}, S_d)$ and constants $\epsilon, \delta > 0$. Let

$$R_{j0} = \min\{R_j, C_{j0}\}$$
$$R_{jd} = \min\{R_j, C^{\mathrm{in}}_j\} - R_{j0}$$
$$R_{jj} = R_j - R_{j0} - R_{jd} = (R_j - C^{\mathrm{in}}_j)^+,$$

where $x^+ = \max\{x, 0\}$ for any real number $x$. For every $j \in [k]$, split the message of encoder $j$ as $w_j = (w_{j0}, w_{jd}, w_{jj})$, where $w_{j0} \in [2^{nR_{j0}}]$, $w_{jd} \in [2^{nR_{jd}}]$, and $w_{jj} \in [2^{nR_{jj}}]$. For all $j \in [k]$, encoder $j$ sends $(w_{j0}, w_{jd})$ noiselessly to the CF. This is possible, since $R_{j0} + R_{jd}$ is less than or equal to $C^{\mathrm{in}}_j$.
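The rate split above is simple enough to state as code. A minimal sketch (names ours), valid when $C_{j0} \le C^{\mathrm{in}}_j$ as required by (3):

```python
def split_rate(R, C_in, C0):
    """Message split of Section IV: w_j = (w_j0, w_jd, w_jj) with
    R_j0 = min(R_j, C_j0), R_jd = min(R_j, C_j^in) - R_j0,
    R_jj = (R_j - C_j^in)^+.  Requires C0 <= C_in, per (3)."""
    R0 = min(R, C0)
    Rd = min(R, C_in) - R0
    Rjj = max(R - C_in, 0.0)
    assert abs(R0 + Rd + Rjj - R) < 1e-12  # the three parts exhaust R_j
    return R0, Rd, Rjj

print(split_rate(R=1.2, C_in=0.5, C0=0.2))  # (0.2, 0.3, 0.7)
```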

The CF sends $w_{j0}$ to all other encoders via its output links and uses $w_{jd}$ to implement the coordination strategy described below. Due to the CF rate constraints, encoder $j$ cannot share the remaining part of its message, $w_{jj}$, with the CF. Instead, it transmits $w_{jj}$ over the channel using the classical MAC strategy.

Let $\mathcal{W}_0 = \prod_{j=1}^k [2^{nR_{j0}}]$. For every $w_0 \in \mathcal{W}_0$, let $U^n_0(w_0)$ be drawn independently according to

$$\Pr\{U^n_0(w_0) = u^n_0\} = \prod_{t=1}^n p(u_{0t}).$$

Given $U^n_0(w_0) = u^n_0$, for every $j \in [k]$, $w_{jd} \in [2^{nR_{jd}}]$, and $z_j \in [2^{nC_{jd}}]$, let $U^n_j(w_{jd}, z_j | u^n_0)$ be drawn independently according to

$$\Pr\{U^n_j(w_{jd}, z_j | u^n_0) = u^n_j \mid U^n_0(w_0) = u^n_0\} = \prod_{t=1}^n p(u_{jt}|u_{0t}). \quad (12)$$

For every $(w_1, \dots, w_k)$, define $E(u^n_0, \mu_1, \dots, \mu_k)$ as the event where $U^n_0(w_0) = u^n_0$ and for every $j \in [k]$,

$$U^n_j(w_{jd}, \cdot \,| u^n_0) = \mu_j, \quad (13)$$

where $\mu_j$ is a mapping from $[2^{nC_{jd}}]$ to $\mathcal{U}^n_j$. Let $A(u^n_0, \mu_{[k]})$ be the set of all $z_{[k]} = (z_1, \dots, z_k)$ such that

$$(u^n_0, \mu_{[k]}(z_{[k]})) \in A^{(n)}_\delta(U_0, U_{[k]}), \quad (14)$$

where $\mu_{[k]}(z_{[k]}) = (\mu_1(z_1), \dots, \mu_k(z_k))$ and $A^{(n)}_\delta(U_0, U_{[k]})$ is the weakly typical set with respect to the distribution $p(u_0, u_{[k]})$. If $A(u^n_0, \mu_{[k]})$ is empty, set $Z_j = 1$ for all $j \in [k]$. Otherwise, let the $k$-tuple $Z_{[k]} = (Z_1, \dots, Z_k)$ be the smallest element of $A(u^n_0, \mu_{[k]})$ with respect to the lexicographical order. Finally, given $U^n_0(w_0) = u^n_0$ and $U^n_j(w_{jd}, Z_j | u^n_0) = u^n_j$, for each $w_{jj} \in [2^{nR_{jj}}]$, let $X^n_j(w_{jj} | u^n_0, u^n_j)$ be a random vector drawn independently according to

$$\Pr\{X^n_j(w_{jj} | u^n_0, u^n_j) = x^n_j \mid U^n_0(w_0) = u^n_0, U^n_j(w_{jd}, Z_j) = u^n_j\} = \prod_{t=1}^n p(x_{jt}|u_{0t}, u_{jt}).$$

We next describe the encoding and decoding processes.

Encoding. For every $j \in [k]$, encoder $j$ sends the pair $(w_{j0}, w_{jd})$ to the CF. The CF sends $((w_{i0})_{i \ne j}, Z_j)$ back to encoder $j$. Encoder $j$, having access to $w_0 = (w_{j0})_j$ and $Z_j$, transmits $X^n_j(w_{jj} | U^n_0(w_0), U^n_j(w_{jd}, Z_j))$ over the channel.

Decoding. The decoder, upon receiving $Y^n$, maps $Y^n$ to the unique $k$-tuple $\hat{W}_{[k]}$ such that

$$\Big(U^n_0(\hat{W}_0), \big(U^n_j(\hat{W}_{jd}, \hat{Z}_j | U^n_0)\big)_j, \big(X^n_j(\hat{W}_{jj} | U^n_0, U^n_j)\big)_j, Y^n\Big) \in A^{(n)}_\epsilon(U_0, U_{[k]}, X_{[k]}, Y). \quad (15)$$

If such a $k$-tuple does not exist, the decoder sets its output to the $k$-tuple $(1, 1, \dots, 1)$. The analysis of the expected error probability for the proposed random code appears in Subsection VII-A.
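The selection of $Z_{[k]}$ can be mimicked in miniature. The sketch below scans $z$-tuples in lexicographic order and returns the first whose image passes a typicality test; the scoring interface (`logp_joint`, `H_rate`) is our simplification, testing only the full joint sequence rather than the complete weak-typicality conditions of [13].

```python
import itertools

def pick_Z(u0, mus, logp_joint, H_rate, n, delta):
    """Sketch of the Z_[k] selection: return the lexicographically smallest
    z-tuple whose image (u0, mu_1(z_1), ..., mu_k(z_k)) looks typical;
    otherwise fall back to the first index (the role of the all-ones tuple).
    `logp_joint` scores a candidate tuple of sequences; `H_rate` is the
    joint entropy rate; both are assumed supplied by the caller."""
    for z in itertools.product(*(range(len(mu)) for mu in mus)):
        seq = (u0,) + tuple(mus[j][z[j]] for j in range(len(mus)))
        if abs(-logp_joint(seq) / n - H_rate) <= delta:
            return z
    return tuple(0 for _ in mus)
```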

V. CASE STUDY: 2-USER GAUSSIAN MAC

In this section, we study the network consisting of the 2-user Gaussian MAC with power constraints and a CF whose input link capacities are sufficiently large so that the CF has full access to the messages, and whose output link capacities both equal $C_{\mathrm{out}}$. We show that in this scenario, the benefit of cooperation extends beyond sum-capacity; that is, capacity metrics other than sum-capacity also exhibit an infinite slope at $C_{\mathrm{out}} = 0$. In addition, we show that the behavior of these metrics, including sum-capacity, is bounded from below by a constant multiple of $\sqrt{C_{\mathrm{out}}}$.

From Theorem 1, it follows that the capacity region of our network contains the set of all rate pairs $(R_1, R_2)$ that satisfy

$$R_1 \le \max\{I(X_1; Y | U_0) - C_{1d},\ I(X_1; Y | X_2, U_0) - \zeta\} + C_{10}$$
$$R_2 \le \max\{I(X_2; Y | U_0) - C_{2d},\ I(X_2; Y | X_1, U_0) - \zeta\} + C_{20}$$
$$R_1 + R_2 \le I(X_1, X_2; Y | U_0) - \zeta + C_{10} + C_{20}$$
$$R_1 + R_2 \le I(X_1, X_2; Y) - \zeta$$

for some nonnegative constants $C_{1d}, C_{2d} \le C_{\mathrm{out}}$, $C_{10} = C_{\mathrm{out}} - C_{2d}$, $C_{20} = C_{\mathrm{out}} - C_{1d}$, and some distribution $p(u_0) p(x_1, x_2 | u_0)$ that satisfies $\mathbb{E}[X_i^2] \le P_i$ for $i \in \{1, 2\}$ and

$$\zeta := C_{1d} + C_{2d} - I(X_1; X_2 | U_0) \ge 0.$$

By (1), the 2-user Gaussian MAC can be represented as $Y = X_1 + X_2 + Z$, where $Z$ is independent of $(X_1, X_2)$ and is distributed as $Z \sim \mathcal{N}(0, N)$ for some noise variance $N > 0$. Let $U_0 \sim \mathcal{N}(0, 1)$, and let $(\tilde{X}_1, \tilde{X}_2)$ be a pair of random variables independent of $U_0$ and jointly distributed as $\mathcal{N}(\mu, \Sigma)$, where

$$\mu = \mathbf{0}, \qquad \Sigma = \begin{pmatrix} 1 & \rho_0 \\ \rho_0 & 1 \end{pmatrix}$$

for some $\rho_0 \in [0, 1]$. Finally, for $i \in \{1, 2\}$, set

$$X_i = \sqrt{P_i}\Big(\rho_i \tilde{X}_i + \sqrt{1 - \rho_i^2}\, U_0\Big)$$

for some $\rho_i \in [0, 1]$. Calculating the region described above for the Gaussian MAC using the joint distribution of $(U_0, X_1, X_2)$, and setting $\gamma_i = P_i / N$ for $i \in \{1, 2\}$ and $\gamma = \gamma_1 \gamma_2$, gives the set of all rate pairs $(R_1, R_2)$ satisfying

$$R_1 \le \max\Big\{\frac{1}{2}\log\frac{1 + \rho_1^2\gamma_1 + \rho_2^2\gamma_2 + 2\rho_0\rho_1\rho_2\sqrt{\gamma}}{1 + (1 - \rho_0^2)\rho_2^2\gamma_2} - C_{1d},\ \frac{1}{2}\log\big(1 + (1 - \rho_0^2)\rho_1^2\gamma_1\big) - \zeta\Big\} + C_{10}$$

$$R_2 \le \max\Big\{\frac{1}{2}\log\frac{1 + \rho_1^2\gamma_1 + \rho_2^2\gamma_2 + 2\rho_0\rho_1\rho_2\sqrt{\gamma}}{1 + (1 - \rho_0^2)\rho_1^2\gamma_1} - C_{2d},\ \frac{1}{2}\log\big(1 + (1 - \rho_0^2)\rho_2^2\gamma_2\big) - \zeta\Big\} + C_{20}$$

$$R_1 + R_2 \le \frac{1}{2}\log\big(1 + \rho_1^2\gamma_1 + \rho_2^2\gamma_2 + 2\rho_0\rho_1\rho_2\sqrt{\gamma}\big) - \zeta + C_{10} + C_{20}$$

$$R_1 + R_2 \le \frac{1}{2}\log\Big(1 + \gamma_1 + \gamma_2 + 2\big(\rho_0\rho_1\rho_2 + \sqrt{(1 - \rho_1^2)(1 - \rho_2^2)}\big)\sqrt{\gamma}\Big) - \zeta$$

for some $\rho_1, \rho_2 \in [0, 1]$ and $0 \le \rho_0 \le \sqrt{1 - 2^{-2(C_{1d} + C_{2d})}}$ (the upper limit on $\rho_0$ is equivalent to $\zeta \ge 0$). Denote this region by $\mathcal{C}_{\mathrm{ach}}(C_{\mathrm{out}})$.
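The region above can be explored numerically. The following coarse grid search over $(\rho_0, \rho_1, \rho_2, C_{1d}, C_{2d})$ lower-bounds the sum-rate of $\mathcal{C}_{\mathrm{ach}}(C_{\mathrm{out}})$; it is a sketch of the computation behind Figure 2, not the code of [19], and uses $I(X_1; X_2 | U_0) = -\tfrac{1}{2}\log_2(1 - \rho_0^2)$ for the Gaussian choice above.

```python
import math, itertools

def ach_sum_rate(C_out, g1, g2, grid=21):
    """Coarse grid search of the sum-rate of C_ach(C_out), in bits."""
    pts = [i / (grid - 1) for i in range(grid)]
    g = math.sqrt(g1 * g2)
    best = 0.0
    for r0, r1, r2 in itertools.product(pts, repeat=3):
        for C1d, C2d in itertools.product([0.0, C_out / 2, C_out], repeat=2):
            # zeta = C_1d + C_2d - I(X1;X2|U0); skip infeasible rho_0.
            zeta = C1d + C2d + 0.5 * math.log2(max(1 - r0 * r0, 1e-15))
            if zeta < 0:
                continue
            C10, C20 = C_out - C2d, C_out - C1d
            s1 = 0.5 * math.log2(1 + r1*r1*g1 + r2*r2*g2
                                 + 2*r0*r1*r2*g) - zeta + C10 + C20
            s2 = 0.5 * math.log2(1 + g1 + g2 + 2*(r0*r1*r2
                     + math.sqrt((1 - r1*r1) * (1 - r2*r2))) * g) - zeta
            best = max(best, min(s1, s2))
    return best

print(ach_sum_rate(C_out=0.05, g1=100.0, g2=100.0))
```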

Figure 2. Plot of the achievable sum-rate gain given by Theorem 1 and Corollary 2 for Gaussian input distributions, and the $\sqrt{C_{\mathrm{out}}}$-term given in Proposition 7. Here $\gamma_1 = \gamma_2 = 100$.

We next introduce a lower bound for a weighted version of the sum-capacity. Denote the capacity region of this network by $\mathcal{C}(C_{\mathrm{out}})$. For every $\alpha \in [0, 1]$, define

$$C_\alpha(C_{\mathrm{out}}) = \max_{(R_1, R_2) \in \mathcal{C}(C_{\mathrm{out}})} \big(\alpha R_1 + (1 - \alpha) R_2\big).$$

Note that $C_\alpha(C_{\mathrm{out}})$ is a generalization of the notion of sum-capacity where a weighted sum of the encoders' rates is considered. The main result of this section demonstrates that $C_\alpha(C_{\mathrm{out}})$ is bounded from below by a constant times $\sqrt{C_{\mathrm{out}}}$ when $C_{\mathrm{out}}$ is small. The proof is given in Subsection VII-E.

Proposition 7. For the Gaussian MAC $Y = X_1 + X_2 + Z$ with $Z \sim \mathcal{N}(0, N)$ and input SNRs $\gamma_1, \gamma_2$, we have

$$C_\alpha(C_{\mathrm{out}}) \ge C_\alpha(0) + \sqrt{\frac{2\gamma_1\gamma_2 \log e}{1 + \gamma_1 + \gamma_2}}\, \min\{\alpha, 1 - \alpha\}\, \sqrt{C_{\mathrm{out}}} + o\big(\sqrt{C_{\mathrm{out}}}\big).$$

In particular, for every $\alpha \in (0, 1)$,

$$\frac{dC_\alpha}{dC_{\mathrm{out}}}\bigg|_{C_{\mathrm{out}} = 0^+} = \infty.$$

In Figure 2, using [19], we plot the sum-rate of the region $\mathcal{C}_{\mathrm{ach}}(C_{\mathrm{out}})$ and the forwarding inner bound (Corollary 2) for $\gamma_1 = \gamma_2 = 100$. We also plot the $\sqrt{C_{\mathrm{out}}}$-term in the lower bound given by Proposition 7. Notice that the forwarding inner bound provides a cooperation gain that is at most linear in $C_{\mathrm{out}}$.
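The $\sqrt{C_{\mathrm{out}}}$ term of Proposition 7 (as reconstructed above) is easy to tabulate; the snippet below is a sketch under that reading of the bound, with function names of our choosing.

```python
import math

def prop7_term(C_out, g1, g2, alpha=0.5):
    """The sqrt(C_out) term in the lower bound of Proposition 7, in bits."""
    coeff = math.sqrt(2 * g1 * g2 * math.log2(math.e) / (1 + g1 + g2))
    return coeff * min(alpha, 1 - alpha) * math.sqrt(C_out)

for c in (0.001, 0.01, 0.1):
    print(c, prop7_term(c, g1=100.0, g2=100.0))
```

The table makes the infinite slope visible: dividing the term by $C_{\mathrm{out}}$ gives a ratio that grows without bound as $C_{\mathrm{out}} \to 0$.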

Figure 3. In the $k$-user MAC with conferencing, for every $i, j \in [k]$, there are links of capacities $C_{ij}$ and $C_{ji}$ connecting encoders $i$ and $j$.

VI. THE k-USER MAC WITH CONFERENCING ENCODERS

In this section, we extend Willems' conferencing encoders model [8] from the 2-user MAC to the $k$-user MAC and provide an outer bound on the capacity region. Consider a $k$-user MAC where for every $i, j \in [k]$ (in this section, $i \ne j$ by assumption), there is a noiseless link of capacity $C_{ij} \ge 0$ going from encoder $i$ to encoder $j$ and a noiseless link of capacity $C_{ji} \ge 0$ going back (Figure 3). As in 2-user conferencing, the conference occurs over a finite number of rounds. In the first round, for every $i, j \in [k]$ with $C_{ij} > 0$, encoder $i$ transmits some information to encoder $j$ that is a function of its own message $w_i \in [2^{nR_i}]$. In each subsequent round, every encoder transmits information that is a function of its message and the information it receives before that round. Once the conference is over, each encoder transmits its codeword over the $k$-user MAC.

We next define a $(2^{nR_1}, \dots, 2^{nR_k}, n, L)$-code for the $k$-user MAC with an $L$-round $(C_{ij})_{i \ne j}$-conference. For every $i, j \in [k]$ and $l \in [L]$, fix a set $\mathcal{V}^l_{ij}$ so that for every $i, j \in [k]$,

$$\sum_{l=1}^L \log|\mathcal{V}^l_{ij}| \le nC_{ij}.$$

Here $\mathcal{V}^l_{ij}$ represents the alphabet of the symbol encoder $i$ sends to encoder $j$ in round $l$ of the conference. For every $l \in [L]$, define $\mathcal{V}^{(l)}_{ij} = \prod_{l'=1}^{l} \mathcal{V}^{l'}_{ij}$. For $j \in [k]$, encoder $j$ is represented by the collection of functions $(f_j, (h^l_{ji})_{i,l})$, where

$$f_j : [2^{nR_j}] \times \prod_{i : i \ne j} \mathcal{V}^{(L)}_{ij} \to \mathcal{X}^n_j,$$

$$h^l_{ji} : [2^{nR_j}] \times \prod_{i' : i' \ne j} \mathcal{V}^{(l-1)}_{i'j} \to \mathcal{V}^l_{ji} \quad (i : i \ne j).$$

The decoder is a mapping $g : \mathcal{Y}^n \to \prod_{j=1}^k [2^{nR_j}]$. The definitions of cost constraints, achievable rate vectors, and the capacity region are similar to those given in Section II.

The next result compares the capacity region of a MAC with cooperation under the conferencing and CF models. The proof is given in Subsection VII-F.

Proposition 8. The capacity region of a MAC with an $L$-round $(C_{ij})_{i \ne j}$-conference is a subset of the capacity region of the same MAC with $L$-round $(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$-CF cooperation if for all $j \in [k]$,

$$C^{\mathrm{in}}_j \ge \sum_{i : i \ne j} C_{ji} \quad \text{and} \quad C^{\mathrm{out}}_j \ge \sum_{i : i \ne j} C_{ij}.$$

Similarly, for every $L$, the capacity region of a MAC with $L$-round $(\mathbf{C}_{\mathrm{in}}, \mathbf{C}_{\mathrm{out}})$-CF cooperation is a subset of the capacity region of the same MAC with a single-round $(C_{ij})_{i \ne j}$-conference if for all $i, j \in [k]$,

$$C_{ij} \ge C^{\mathrm{in}}_i.$$

Figure 4. (a) The conferencing structure studied in [17]. (b) An example of a structure where allowing two conferencing rounds leads to a substantial gain over a single round.

Combining the first part of Proposition 8 with the outer bound from Proposition 5 results in the next corollary, which holds regardless of the number of conferencing rounds.

Corollary 9 (Conferencing Outer Bound). The capacity region of a MAC with a $(C_{ij})_{i \ne j}$-conference is a subset of the set of all rate vectors $(R_1, \dots, R_k)$ that, for some distribution $p \in \mathcal{P}_{\mathrm{ind}}(\mathcal{U}_0, \mathcal{X}_{[k]})$, satisfy

$$\sum_{j \in S} R_j \le I(X_S; Y | U_0, X_{S^c}) + \sum_{j \in S} \sum_{i \ne j} C_{ji} \quad \text{for all } S \subseteq [k],$$

$$\sum_{j=1}^k R_j \le I(X_{[k]}; Y).$$

While $k$-user conferencing is a direct extension of 2-user conferencing, there is nonetheless an important difference when $k \ge 3$. While a single conferencing round suffices to achieve the capacity region in the 2-user case [8], the same is not true when $k \ge 3$, as we next see. A special case of this model for the 3-user Gaussian MAC, depicted in Figure 4(a), is studied in [17]. While the achievability scheme in [17] uses two conferencing rounds, the magnitude of the gain resulting from using an additional conferencing round is not clear. Here, using the idea of a cooperation facilitator, we consider the alternative structure shown in Figure 4(b), for which we show the possibility of a large cooperation gain when conferencing occurs in two rounds rather than one.

Consider a 3-user MAC with conferencing. Fix positive constants $C^{\mathrm{in}}_1$ and $C^{\mathrm{in}}_2$. Let $C_{13} = C^{\mathrm{in}}_1$, $C_{23} = C^{\mathrm{in}}_2$, $C_{31} = C_{32} = C_{\mathrm{out}}$ for $C_{\mathrm{out}} \in \mathbb{R}_{\ge 0}$, and $C_{12} = C_{21} = 0$. Let $\mathcal{C}_1(C_{\mathrm{out}})$ and $\mathcal{C}_2(C_{\mathrm{out}})$ denote the capacity regions of this network with one and two rounds of conferencing, respectively. For each $L \in \{1, 2\}$, define the function $g_L(C_{\mathrm{out}})$ as

$$g_L(C_{\mathrm{out}}) = \max_{(R_1, R_2, 0) \in \mathcal{C}_L(C_{\mathrm{out}})} (R_1 + R_2).$$

Note that when $L = 1$, we have $g_1(C_{\mathrm{out}}) = g_1(0)$ for all $C_{\mathrm{out}} \ge 0$, since no cooperation is possible when encoder 3 is transmitting at rate zero. On the other hand, we next show that at least for some MACs, $g_2'(0^+) = \infty$; that is, $g_2$ has an infinite slope at $C_{\mathrm{out}} = 0$. Note that

$$g_2(0) = g_1(0) = \max_{p(x_1)p(x_2),\, x_3} I(X_1, X_2; Y | X_3 = x_3).$$

Suppose $x^*_3$ satisfies

$$\max_{p(x_1)p(x_2),\, x_3} I(X_1, X_2; Y | X_3 = x_3) = \max_{p(x_1)p(x_2)} I(X_1, X_2; Y | X_3 = x^*_3).$$

If the MAC $(\mathcal{X}_1 \times \mathcal{X}_2, p(y|x_1, x_2, x^*_3), \mathcal{Y})$ is in $\mathcal{C}^*(\mathcal{X}_1 \times \mathcal{X}_2, \mathcal{Y})$, then by Theorem 3, we have $g_2'(0^+) = \infty$. Since $g_1$ is constant in $C_{\mathrm{out}}$ while $g_2$ has an infinite slope at $C_{\mathrm{out}} = 0$, and $g_1(0) = g_2(0)$, the two-round conferencing region is strictly larger than the single-round conferencing region. Using the same technique, we can show a similar result for any $k \ge 3$; that is, there exist $k$-user MACs where the two-round conferencing region strictly contains the single-round region.

VII. PROOFS

A. Theorem 1 (Inner Bound)

Fix $\eta > 0$, and choose a distribution $p(u_0, u_{[k]}, x_{[k]})$ on $\mathcal{U}_0 \times \mathcal{U}_{[k]} \times \mathcal{X}_{[k]}$ of the form

$$p(u_0) \Big(\prod_{i \in S^c_d} p(u_i|u_0)\Big) p(u_{S_d}|u_0, u_{S^c_d}) \prod_{j=1}^k p(x_j|u_0, u_j)$$

that satisfies the dependence constraints

$$\zeta(S) := \sum_{j \in S} C_{jd} - \sum_{j \in S} H(U_j|U_0) + H(U_S|U_0, U_{S^c_d}) > 0 \quad \text{for all nonempty } S \subseteq S_d,$$

and the cost constraints

$$\mathbb{E}[b_j(X_j)] \le B_j - \eta \quad \text{for all } j \in [k]. \quad (16)$$

Let $(w_1, \dots, w_k)$ denote the transmitted $k$-tuple of messages and $(\hat{W}_1, \dots, \hat{W}_k)$ the output of the decoder. To simplify notation, denote $U^n_0(w_0)$, $U^n_j(w_{jd}, Z_j | U^n_0)$, and $X^n_j(w_{jj} | U^n_0, U^n_j)$ by $U^n_0$, $U^n_j$, and $X^n_j$, respectively. Similarly, define $\hat{U}^n_0$, $\hat{U}^n_j$, and $\hat{X}^n_j$ as $U^n_0(\hat{W}_0)$, $U^n_j(\hat{W}_{jd}, \hat{Z}_j | U^n_0)$, and $X^n_j(\hat{W}_{jj} | U^n_0, U^n_j)$. Here $\hat{W}_0$, $\hat{W}_{jd}$, and $\hat{W}_{jj}$ are defined in terms of $(\hat{W}_j)_j$ similarly to the definitions of $w_0$, $w_{jd}$, and $w_{jj}$ in Section IV. Let $Y^n$ denote the channel output when $X^n_{[k]}$ is transmitted. Then the joint distribution of $(U^n_0, U^n_{[k]}, X^n_{[k]}, Y^n)$ is given by

$$p_{\mathrm{code}}(u^n_0, u^n_{[k]}, x^n_{[k]}, y^n) = p(u^n_0)\, p_{\mathrm{code}}(u^n_{[k]}|u^n_0)\, p(x^n_{[k]}|u^n_0, u^n_{[k]})\, p(y^n|x^n_{[k]}),$$

where

$$p_{\mathrm{code}}(u^n_{[k]}|u^n_0) = \sum_{\mu_{[k]}} p(\mu_1|u^n_0) \cdots p(\mu_k|u^n_0)\, p(u^n_{[k]}|u^n_0, \mu_{[k]})$$

and $p(\mu_j|u^n_0)$ and $p(u^n_{[k]}|u^n_0, \mu_{[k]})$ are calculated according to

$$p(\mu_j|u^n_0) = \prod_{z_j \in [2^{nC_{jd}}]} p(\mu_j(z_j)|u^n_0)$$

and

$$p(u^n_{[k]}|u^n_0, \mu_{[k]}) = \sum_{z_{[k]}} p(z_{[k]}|u^n_0, \mu_{[k]}) \prod_{j=1}^k \mathbf{1}\{\mu_j(z_j) = u^n_j\}.$$

Define the distribution $p_{\mathrm{ind}}(u^n_0, u^n_{[k]}, x^n_{[k]}, y^n)$ as

$$p_{\mathrm{ind}}(u^n_0, u^n_{[k]}, x^n_{[k]}, y^n) = p(u^n_0) \Big(\prod_{j=1}^k p(u^n_j|u^n_0)\Big) p(x^n_{[k]}|u^n_0, u^n_{[k]})\, p(y^n|x^n_{[k]}),$$

which is the joint input-output distribution when independent codewords are transmitted.

We next mention some results regarding weakly typical sets that are required for our error analysis. For any $S \subseteq [k]$, let $A^{(n)}_\delta(U_0, U_S)$ denote the weakly typical set with respect to the distribution $p(u_0, u_S)$, a marginal of $p(u_0, u_{[k]})$. In addition, for every $(u^n_0, u^n_S) \in A^{(n)}_\delta(U_0, U_S)$, let $A^{(n)}_\delta(u^n_0, u^n_S)$ be the set of all $u^n_{S^c}$ such that $(u^n_0, u^n_{[k]}) \in A^{(n)}_\delta(U_0, U_{[k]})$. Similarly, let $A^{(n)}_\epsilon(U_0, U_{[k]}, X_{[k]}, Y)$ be the weakly typical set with respect to the distribution $p(u_0, u_{[k]}, x_{[k]})\, p(y|x_{[k]})$, where $p(y|x_{[k]})$ is given by the channel definition. For subsets $S, T \subseteq [k]$, define $A^{(n)}_\epsilon(U_0, U_S, X_T, Y)$ and $A^{(n)}_\epsilon(u^n_0, u^n_S, x^n_T, y^n)$ accordingly. If $(u^n_0, u^n_S, x^n_T, y^n) \in A^{(n)}_\epsilon(U_0, U_S, X_T, Y)$, we have [13, p. 523]

$$\log\big|A^{(n)}_\epsilon(u^n_0, u^n_S, x^n_T, y^n)\big| \le n\big(H(U_{S^c}, X_{T^c} | U_0, U_S, X_T, Y) + 2\epsilon\big). \quad (17)$$

Finally, under fairly general conditions described in Appendix B,⁴ there exists an increasing function $I : \mathbb{R}_{>0} \to \mathbb{R}_{>0}$ such that if $(U^n_0, U^n_{[k]}, X^n_{[k]}, Y^n)$ consists of $n$ i.i.d. copies of $(U_0, U_{[k]}, X_{[k]}, Y)$ distributed according to $p(u_0, u_{[k]}, x_{[k]}, y)$, then

$$\Pr\big\{(U^n_0, U^n_{[k]}, X^n_{[k]}, Y^n) \in A^{(n)}_\epsilon(U_0, U_{[k]}, X_{[k]}, Y)\big\} \ge 1 - 2^{-nI(\epsilon)}. \quad (18)$$

Fix any such function $I$.

⁴ Distributions that satisfy these conditions include any distribution with finite support and the Gaussian distribution.

We next study the relationship between $p_{\mathrm{code}}$ and $p_{\mathrm{ind}}$. Our first lemma provides an upper bound for $p_{\mathrm{code}}$ in terms of $p_{\mathrm{ind}}$.

Lemma 10. For every nonempty $S \subseteq [k]$ and all $(u^n_0, u^n_S)$,

$$\frac{1}{n}\log\frac{p_{\mathrm{code}}(u^n_S|u^n_0)}{p_{\mathrm{ind}}(u^n_S|u^n_0)} \le C_{Sd},$$

where $C_{Sd} = \sum_{j \in S} C_{jd}$.

Proof: Recall

$$p_{\mathrm{code}}(u^n_S|u^n_0) = \sum_{\mu_{[k]}} \Big(\prod_{j=1}^k p(\mu_j|u^n_0)\Big) p(u^n_S|u^n_0, \mu_{[k]}).$$

To bound $p_{\mathrm{code}}(u^n_S|u^n_0)$, note that

$$p(u^n_S|u^n_0, \mu_{[k]}) \le \prod_{j \in S} \mathbf{1}\big\{\mu^{-1}_j(u^n_j) \ne \emptyset\big\},$$

where

$$\mu^{-1}_j(u^n_j) = \big\{z_j \in [2^{nC_{jd}}] : \mu_j(z_j) = u^n_j\big\}.$$

Now for every $j \in S$,

$$\sum_{\mu_j} p(\mu_j|u^n_0)\, \mathbf{1}\big\{\mu^{-1}_j(u^n_j) \ne \emptyset\big\} = \Pr\big\{\exists\, z_j : U^n_j(z_j) = u^n_j \mid U^n_0 = u^n_0\big\} \le 2^{nC_{jd}}\, p(u^n_j|u^n_0).$$

Thus

$$p_{\mathrm{code}}(u^n_S|u^n_0) \le \sum_{\mu_S} \prod_{j \in S} p(\mu_j|u^n_0)\, \mathbf{1}\big\{\mu^{-1}_j(u^n_j) \ne \emptyset\big\} = \prod_{j \in S} \sum_{\mu_j} p(\mu_j|u^n_0)\, \mathbf{1}\big\{\mu^{-1}_j(u^n_j) \ne \emptyset\big\} \le 2^{n\sum_{j \in S} C_{jd}}\, p_{\mathrm{ind}}(u^n_S|u^n_0).$$

Our second lemma provides an upper bound for $p_{\mathrm{ind}}(u^n_S|u^n_0)$ when $(u^n_0, u^n_S)$ is typical.

Lemma 11. For all $S$ satisfying $S^c_d \subseteq S \subseteq [k]$ with $S$ nonempty, and all $(u^n_0, u^n_S) \in A^{(n)}_\delta(U_0, U_S)$,

$$\frac{1}{n}\log\frac{p_{\mathrm{ind}}(u^n_S|u^n_0)}{p(u^n_S|u^n_0)} \le -\sum_{j \in S \cap S_d} H(U_j|U_0) + H(U_{S \cap S_d}|U_0, U_{S^c_d}) + 2(|S \cap S_d| + 1)\delta.$$

Proof: Recall that

$$p(u^n_{[k]}|u^n_0) = p(u^n_{S_d}|u^n_0, u^n_{S^c_d}) \prod_{j \in S^c_d} p(u^n_j|u^n_0).$$

Thus for all $S \supseteq S^c_d$, we have

$$p(u^n_S|u^n_0) = p(u^n_{S \cap S_d}|u^n_0, u^n_{S^c_d}) \prod_{j \in S^c_d} p(u^n_j|u^n_0).$$

Therefore,

$$\frac{p_{\mathrm{ind}}(u^n_S|u^n_0)}{p(u^n_S|u^n_0)} = \frac{p_{\mathrm{ind}}(u^n_{S \cap S_d}|u^n_0)}{p(u^n_{S \cap S_d}|u^n_0, u^n_{S^c_d})} = \frac{\prod_{j \in S \cap S_d} p(u^n_j|u^n_0)}{p(u^n_{S \cap S_d}|u^n_0, u^n_{S^c_d})}.$$

The proof now follows from the definition of $A^{(n)}_\delta(U_0, U_S)$.

Combining the previous two lemmas results in the next corollary, which we use in our error analysis.

Corollary 12. For every nonempty $S$ satisfying $S^c_d \subseteq S \subseteq [k]$ and all $(u^n_0, u^n_S) \in A^{(n)}_\delta(U_0, U_S)$,

$$\frac{1}{n}\log\frac{p_{\mathrm{code}}(u^n_S|u^n_0)}{p(u^n_S|u^n_0)} \le \zeta(S \cap S_d) + 2(|S \cap S_d| + 1)\delta.$$
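For completeness, here is the one-line combination of Lemmas 10 and 11 that yields Corollary 12; it uses $C_{Sd} = \sum_{j \in S \cap S_d} C_{jd}$, which holds because $C_{jd} = 0$ for $j \notin S_d$:

$$\frac{1}{n}\log\frac{p_{\mathrm{code}}(u^n_S|u^n_0)}{p(u^n_S|u^n_0)} = \frac{1}{n}\log\frac{p_{\mathrm{code}}(u^n_S|u^n_0)}{p_{\mathrm{ind}}(u^n_S|u^n_0)} + \frac{1}{n}\log\frac{p_{\mathrm{ind}}(u^n_S|u^n_0)}{p(u^n_S|u^n_0)} \le C_{Sd} - \sum_{j \in S \cap S_d} H(U_j|U_0) + H(U_{S \cap S_d}|U_0, U_{S^c_d}) + 2(|S \cap S_d| + 1)\delta = \zeta(S \cap S_d) + 2(|S \cap S_d| + 1)\delta.$$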

Let $E$ denote the event where either the output of an encoder does not satisfy the corresponding cost constraint, or the output of the decoder differs from the transmitted $k$-tuple of messages, that is, $(\hat{W}_j)_j \ne (w_j)_j$. Denote the former event by $E_{\mathrm{cost}}$ and the latter by $E_{\mathrm{dec}}$. When $E_{\mathrm{dec}}$ occurs, it is either the case that $(w_j)_j$ does not satisfy (15) (denote this event by $E_{\mathrm{typ}}$), or there is another $k$-tuple $(\hat{W}_j)_j \ne (w_j)_j$ that also satisfies (15). If the latter event occurs, we either have $\hat{W}_0 \ne w_0$ (denote this event by $E_{*,*}$) or $\hat{W}_0 = w_0$. When $\hat{W}_0 = w_0$, define the subsets $S, T \subseteq [k]$ as

$$S = \{j : \hat{W}_{jd} \ne w_{jd}\}, \qquad T = \{j : \hat{W}_{jj} \ne w_{jj}\}.$$

Now for every pair of subsets $S, T \subseteq [k]$ such that $S \cup T \ne \emptyset$, define $E_{S,T}$ as the event where there exists a $(\hat{W}_j)_j$ that satisfies (15), $\hat{W}_0 = w_0$, $\hat{W}_{jd} \ne w_{jd}$ if and only if $j \in S$, and $\hat{W}_{jj} \ne w_{jj}$ if and only if $j \in T$. Thus we may write

$$E \subseteq E_{\mathrm{cost}} \cup E_{\mathrm{typ}} \cup E_{*,*} \cup \bigcup_{S,T \subseteq [k]} E_{S,T}.$$

By the union bound,

$$\Pr(E) \le \Pr(E_{\mathrm{cost}}) + \Pr(E_{\mathrm{typ}}) + \Pr(E_{*,*}) + \sum_{S,T \subseteq [k]} \Pr(E_{S,T}).$$

Thus to find a set of achievable rates for our random code design, it suffices to find conditions under which $\Pr(E_{\mathrm{cost}})$, $\Pr(E_{\mathrm{typ}})$, $\Pr(E_{*,*})$, and each $\Pr(E_{S,T})$ go to zero as $n \to \infty$.

We begin our analysis with the event $E_{\mathrm{cost}}$. For $j \in [k]$, let $E^j_{\mathrm{cost}}$ denote the event where the codeword $X^n_j(w_{jj} | U^n_0(w_0), U^n_j(w_{jd}, Z_j))$ does not satisfy the cost constraint of encoder $j$. We have

$$\Pr\big\{E^j_{\mathrm{cost}}\big\} = \Pr\Big\{\frac{1}{n}\sum_{t=1}^n b_j\big(X_{jt}(w_{jj} | U^n_0(w_0), U^n_j(w_{jd}, Z_j))\big) > B_j\Big\}$$

$$= \sum_{z_j} \Pr\{Z_j = z_j\}\, \Pr\Big\{\frac{1}{n}\sum_{t=1}^n b_j\big(X_{jt}(w_{jj} | U^n_0(w_0), U^n_j(w_{jd}, z_j))\big) > B_j\Big\}.$$

Since for all $z_j$, by the AEP and (16),

$$\Pr\Big\{\frac{1}{n}\sum_{t=1}^n b_j\big(X_{jt}(w_{jj} | U^n_0(w_0), U^n_j(w_{jd}, z_j))\big) > B_j\Big\} \to 0 \quad \text{as } n \to \infty,$$

it follows that $\Pr(E^j_{\mathrm{cost}}) \to 0$. Applying the union bound now implies

$$\Pr(E_{\mathrm{cost}}) \le \sum_{j=1}^k \Pr(E^j_{\mathrm{cost}}) \to 0.$$

We next consider the event $E_{\mathrm{typ}}$. Define $E_{\mathrm{enc}}$ as the event where

$$(U^n_0, U^n_{[k]}) \notin A^{(n)}_\delta(U_0, U_{[k]}),$$

and note that $E_{\mathrm{typ}}$ is the event where

$$(U^n_0, U^n_{[k]}, X^n_{[k]}, Y^n) \notin A^{(n)}_\epsilon(U_0, U_{[k]}, X_{[k]}, Y).$$

The event $E_{\mathrm{enc}}$ occurs if and only if $A(U^n_0, U^n_{[k]})$, defined in Section IV, is empty. Thus

$$\Pr(E_{\mathrm{enc}}) = \Pr\big\{A(U^n_0, U^n_{[k]}) = \emptyset\big\}.$$

If $S_d = \emptyset$, $\Pr(E_{\mathrm{enc}})$ goes to zero by the AEP, since in this case $p_{\mathrm{code}}(u^n_{[k]}|u^n_0) = p(u^n_{[k]}|u^n_0)$. Otherwise, recall that for every nonempty $S \subseteq S_d$, $\zeta(S)$ is defined as

$$\zeta(S) = \sum_{j \in S} C_{jd} - \sum_{j \in S} H(U_j|U_0) + H(U_S|U_0, U_{S^c_d}).$$

From the multivariate covering lemma (Appendix A), it follows that $\Pr(E_{\mathrm{enc}}) \to 0$ if for all nonempty $S \subseteq S_d$,

$$\zeta(S) > (8|S_d| - 2|S| + 10)\delta. \quad (19)$$

Next we find an upper bound for $\Pr(E_{\mathrm{typ}} \setminus E_{\mathrm{enc}})$. Let $B^{(n)}$ be the set of all $(u^n_0, u^n_{[k]}, x^n_{[k]}, y^n)$ such that $(u^n_0, u^n_{[k]}) \in A^{(n)}_\delta$ but $(u^n_0, u^n_{[k]}, x^n_{[k]}, y^n) \notin A^{(n)}_\epsilon$. Then

$$\Pr(E_{\mathrm{typ}} \setminus E_{\mathrm{enc}}) = \sum_{B^{(n)}} p(u^n_0)\, p_{\mathrm{code}}(u^n_{[k]}|u^n_0)\, p(x^n_{[k]}|u^n_0, u^n_{[k]})\, p(y^n|x^n_{[k]})$$

$$\overset{(a)}{\le} 2^{n(\zeta(S_d) + 2(|S_d| + 1)\delta)} \sum_{B^{(n)}} p(u^n_0, u^n_{[k]}, x^n_{[k]}, y^n) \overset{(b)}{\le} 2^{n(\zeta(S_d) + 2(|S_d| + 1)\delta)}\, \Pr\big\{(A^{(n)}_\epsilon)^c\big\} \overset{(c)}{\le} 2^{n(\zeta(S_d) + 2(|S_d| + 1)\delta)}\, 2^{-nI(\epsilon)},$$

where (a) follows from Corollary 12, (b) holds since $B^{(n)} \subseteq (A^{(n)}_\epsilon)^c$, and (c) follows from the definition of $I(\epsilon)$ given by (18). Thus $\Pr(E_{\mathrm{typ}} \setminus E_{\mathrm{enc}}) \to 0$ if

$$\zeta(S_d) < I(\epsilon) - 2(|S_d| + 1)\delta. \quad (20)$$

Therefore, if (19) and (20) both hold, then $\Pr(E_{\mathrm{typ}}) \to 0$, since

$$\Pr(E_{\mathrm{typ}}) \le \Pr(E_{\mathrm{enc}} \cup E_{\mathrm{typ}}) = \Pr(E_{\mathrm{enc}}) + \Pr(E_{\mathrm{typ}} \setminus E_{\mathrm{enc}}).$$

We next study $E_{*,*}$, which is the event where there exists a $k$-tuple $(\hat{W}_j)_j$ that satisfies (15) but $\hat{W}_0 \ne w_0$. If this event occurs, then $(\hat{U}^n_0, \hat{U}^n_{[k]}, \hat{X}^n_{[k]})$ and $Y^n$ are independent. By the union bound,

$$\Pr(E_{*,*}) \le 2^{n\sum_{j=1}^k R_j} \sum_{y^n \in A^{(n)}_\epsilon(Y)} p_{\mathrm{code}}(y^n) \sum_{(u^n_0, u^n_{[k]}, x^n_{[k]}) \in A^{(n)}_\epsilon(y^n)} p_{\mathrm{code}}(u^n_0, u^n_{[k]}, x^n_{[k]}).$$

Using Corollary 12, we upper bound the inner sum by

$$\sum_{A^{(n)}_\epsilon(y^n)} 2^{n(\zeta(S_d) + 2(|S_d| + 1)\delta)}\, p(u^n_0, u^n_{[k]}, x^n_{[k]}) \overset{(*)}{\le} 2^{n(H(U_0, U_{[k]}, X_{[k]}|Y) + 2\epsilon)}\, 2^{n(\zeta(S_d) + 2(|S_d| + 1)\delta)}\, 2^{-n(H(U_0, U_{[k]}, X_{[k]}) - \epsilon)},$$

where $(*)$ follows from (17). This implies $\Pr(E_{*,*}) \to 0$ if

$$\sum_{j=1}^k R_j < I(X_{[k]}; Y) - \zeta(S_d) - 2(|S_d| + 1)\delta - 3\epsilon.$$

Next, let $S, T \subseteq [k]$ be sets such that $S \cup T \ne \emptyset$, and consider the event $E_{S,T}$. Recall that this is the event where there exists a $k$-tuple $(\hat{W}_j)_j$ that satisfies (15) and $\hat{W}_0 = w_0$, $\hat{W}_{jd} \ne w_{jd}$ if and only if $j \in S$, and $\hat{W}_{jj} \ne w_{jj}$ if and only if $j \in T$. For every $A \subseteq S$ and $B \subseteq S^c$, let $E^{A,B}_{S,T} \supseteq E_{S,T}$ be the event where there exists a $k$-tuple $(\hat{W}_j)_j$ that satisfies

$$\Big(U^n_0(w_0), \big(U^n_j(\hat{W}_{jd}, \hat{Z}_j | U^n_0)\big)_{j \in A}, \big(U^n_j(w_{jd}, \hat{Z}_j | U^n_0)\big)_{j \in B}, \big(X^n_j(\hat{W}_{jj} | U^n_0, \hat{U}^n_j)\big)_{j \in A \cup (B \cap T)}, \big(X^n_j(w_{jj} | U^n_0, \hat{U}^n_j)\big)_{j \in B \setminus T}, Y^n\Big) \in A^{(n)}_\epsilon \quad (21)$$

and $\hat{W}_0 = w_0$, $\hat{W}_{jd} \ne w_{jd}$ if and only if $j \in S$, and $\hat{W}_{jj} \ne w_{jj}$ if and only if $j \in T$. If $E_{S,T}$ occurs, then so does $E^{A,B}_{S,T}$ for every $A \subseteq S$ and $B \subseteq S^c$. Thus

$$E_{S,T} \subseteq \bigcap_{A,B} E^{A,B}_{S,T}.$$

This implies

$$\Pr(E_{S,T}) \le \min_{A,B} \Pr\big(E^{A,B}_{S,T}\big). \quad (22)$$

Therefore, to bound $\Pr(E_{S,T})$, we find an upper bound on $\Pr(E^{A,B}_{S,T})$ for any $A \subseteq S$ and $B \subseteq S^c$ such that $A \cup (B \cap T) \ne \emptyset$. This is the key difference between our error analysis here and the error analysis for the 2-user MAC with transmitter cooperation presented in [1]. For independent distributions, using the constraint that subsets of typical codewords are also typical does not lead to a larger region; the same may not be true when dealing with dependent distributions. That being said, to include all independent random variables in our error analysis, instead of calculating the minimum in (22) over all $A \subseteq S$ and $B \subseteq S^c$, we limit ourselves to subsets $A$ and $B$ that satisfy

$$S \cap S^c_d \subseteq A \subseteq S, \qquad S^c \cap S^c_d \subseteq B \subseteq S^c,$$

since all the random vectors $(U^n_j)_{j \in S^c_d}$ are independent given $U^n_0$. Choose any such $A$ and $B$. Note that for every $j \in A \cup (B \cap T)$, either $\hat{W}_{jd} \ne w_{jd}$ or $\hat{W}_{jj} \ne w_{jj}$. In addition, in (21),

$$\Big(\big(U^n_j(\hat{W}_{jd}, \hat{Z}_j | U^n_0)\big)_{j \in A}, \big(U^n_j(w_{jd}, \hat{Z}_j | U^n_0)\big)_{j \in B}, \big(X^n_j(\hat{W}_{jj} | U^n_0, U^n_j)\big)_{j \in A \cup (B \cap T)}, \big(X^n_j(w_{jj} | U^n_0, U^n_j)\big)_{j \in B \setminus T}\Big)$$

is independent of $Y^n$ given

$$\Big(U^n_0(w_0), \big(U^n_j(w_{jd}, \cdot \,| U^n_0)\big)_{j \in S^c}, \big(X^n_j(w_{jj} | U^n_0, U^n_j)\big)_{j \in S^c \setminus T}\Big).$$

Therefore, by the union bound, $\Pr(E^{A,B}_{S,T})$ is bounded from above by

$$2^{n(\sum_{j \in A} R_{jd} + \sum_{j \in A \cup (B \cap T)} R_{jj})} \sum_{A^{(n)}_\epsilon} p(x^n_{A \cup (B \cap T)}|u^n_0, u^n_{A \cup (B \cap T)}) \sum_{\mu_{A \cup S^c}, \chi_{S^c \setminus T}} p(u^n_0, \mu_{S^c}, \chi_{S^c \setminus T}, y^n)\, p(\mu_A|u^n_0)\, p(u^n_{A \cup B}, x^n_{B \setminus T}|u^n_0, \mu_{A \cup S^c}, \chi_{S^c \setminus T}), \quad (23)$$

where the inner sum is over all mappings $\mu_j : [2^{nC_{jd}}] \to \mathcal{U}^n_j$ for $j \in A \cup S^c$ and $\chi_j : [2^{nC_{jd}}] \to \mathcal{X}^n_j$ for $j \in S^c \setminus T$. The distribution $p(u^n_0, \mu_{S^c}, \chi_{S^c \setminus T}, y^n)$ is a marginal of $p(u^n_0, \mu_{[k]}, \chi_{[k]}, y^n)$, which is defined as

$$p(u^n_0, \mu_{[k]}, \chi_{[k]}, y^n) = p(u^n_0, \mu_{[k]})\, p(\chi_{[k]}|u^n_0, \mu_{[k]})\, p(y^n|u^n_0, \mu_{[k]}, \chi_{[k]}),$$

where

$$p(\chi_{[k]}|u^n_0, \mu_{[k]}) = \prod_{j=1}^k p(\chi_j|u^n_0, \mu_j) = \prod_{j=1}^k \prod_{z_j \in [2^{nC_{jd}}]} p(\chi_j(z_j)|u^n_0, \mu_j(z_j))$$

and

$$p(y^n|u^n_0, \mu_{[k]}, \chi_{[k]}) = \sum_{z_{[k]}} p(z_{[k]}|u^n_0, \mu_{[k]})\, p(y^n|\chi_{[k]}(z_{[k]})).$$

We have

$$p(u^n_{A \cup B}, x^n_{B \setminus T}|u^n_0, \mu_{A \cup S^c}, \chi_{S^c \setminus T}) \le \mathbf{1}\Big\{\exists\, (z_j)_{j \in B} \in \prod_{j \in B}[2^{nC_{jd}}] : \forall\, j \in B : \mu_j(z_j) = u^n_j,\ \forall\, j \in B \setminus T : \chi_j(z_j) = x^n_j\Big\} \cdot \mathbf{1}\Big\{\exists\, (z_j)_{j \in A} \in \prod_{j \in A}[2^{nC_{jd}}] : \forall\, j \in A : \mu_j(z_j) = u^n_j\Big\}. \quad (24)$$

We can thus upper bound the inner sum in (23) as a product of the sums

$$\sum_{\mu_{S^c}, \chi_{S^c \setminus T}} p(u^n_0, \mu_{S^c}, \chi_{S^c \setminus T}, y^n)\, \mathbf{1}\Big\{\exists\, (z_j)_{j \in B} : \forall\, j \in B : \mu_j(z_j) = u^n_j,\ \forall\, j \in B \setminus T : \chi_j(z_j) = x^n_j\Big\}$$

and

$$\sum_{\mu_A} p(\mu_A|u^n_0)\, \mathbf{1}\Big\{\exists\, (z_j)_{j \in A} : \forall\, j \in A : \mu_j(z_j) = u^n_j\Big\}.$$

We first find an upper bound for the first sum. Define the distribution $\tilde{p}(u^n_0, u^n_{[k]}, x^n_{[k]}, y^n)$ as

$$\tilde{p}(u^n_0, u^n_{[k]}, x^n_{[k]}, y^n) = \sum_{\mu_{[k]}, \chi_{[k]}} p(u^n_0, \mu_{[k]}, \chi_{[k]}, y^n) \prod_{j=1}^k \mathbf{1}\big\{\mu_j(1) = u^n_j,\ \chi_j(1) = x^n_j\big\}.$$

The following argument demonstrates that $\tilde{p}(u^n_0, u^n_{[k]}, x^n_{[k]}) = p_{\mathrm{ind}}(u^n_0, u^n_{[k]}, x^n_{[k]})$:

$$\tilde{p}(u^n_0, u^n_{[k]}, x^n_{[k]}) = \sum_{y^n} \tilde{p}(u^n_0, u^n_{[k]}, x^n_{[k]}, y^n) = \sum_{\mu_{[k]}, \chi_{[k]}} p(u^n_0, \mu_{[k]}, \chi_{[k]}) \prod_{j=1}^k \mathbf{1}\big\{\mu_j(1) = u^n_j, \chi_j(1) = x^n_j\big\}$$

$$= p(u^n_0) \prod_{j=1}^k \sum_{\mu_j, \chi_j} p(\mu_j, \chi_j|u^n_0)\, \mathbf{1}\big\{\mu_j(1) = u^n_j, \chi_j(1) = x^n_j\big\} = p_{\mathrm{ind}}(u^n_0, u^n_{[k]}, x^n_{[k]}). \quad (25)$$

For every $z = (z_j)_{j \in B}$, where $z_j \in [2^{nC_{jd}}]$ for all $j \in B$, let $E_z$ denote the event where for all $j \in B$, $U^n_j(w_{jd}, z_j | U^n_0) = u^n_j$, and for all $j \in B \setminus T$, $X^n_j(w_{jj} | U^n_0, U^n_j) = x^n_j$. Then

$$\sum_{\mu_{S^c}, \chi_{S^c \setminus T}} p(u^n_0, \mu_{S^c}, \chi_{S^c \setminus T}, y^n)\, \mathbf{1}\Big\{\exists\, (z_j)_{j \in B} : \forall\, j \in B : \mu_j(z_j) = u^n_j,\ \forall\, j \in B \setminus T : \chi_j(z_j) = x^n_j\Big\}$$

$$= \Pr\Big\{U^n_0 = u^n_0,\ Y^n = y^n,\ \bigcup_z E_z\Big\} \overset{(a)}{\le} 2^{nC_{Bd}}\, \Pr\big\{U^n_0 = u^n_0,\ Y^n = y^n,\ E_{z = \mathbf{1}}\big\}$$

$$= 2^{nC_{Bd}}\, \tilde{p}(u^n_0, u^n_B, x^n_{B \setminus T}, y^n) \overset{(b)}{=} 2^{nC_{Bd}}\, p(u^n_0)\, p_{\mathrm{ind}}(u^n_B, x^n_{B \setminus T}|u^n_0)\, p(y^n|u^n_0, u^n_B, x^n_{B \setminus T}), \quad (26)$$

where (a) follows by the union bound and the symmetry of the codebook construction, and (b) follows from (25). Using a similar argument, we can show

$$\sum_{\mu_A} p(\mu_A|u^n_0)\, \mathbf{1}\Big\{\exists\, (z_j)_{j \in A} : \forall\, j \in A : \mu_j(z_j) = u^n_j\Big\} \le 2^{nC_{Ad}}\, p_{\mathrm{ind}}(u^n_A|u^n_0). \quad (27)$$

Thus by (24), (26), and (27), the expression

$$2^{n(\sum_{j \in A} R_{jd} + \sum_{j \in A \cup (B \cap T)} R_{jj} + C_{Ad} + C_{Bd})} \sum_{A^{(n)}_\epsilon} p(u^n_0)\, p_{\mathrm{ind}}(u^n_{A \cup B}|u^n_0)\, p(x^n_{A \cup (B \cap T)}|u^n_0, u^n_{A \cup (B \cap T)})\, p(y^n|u^n_0, u^n_B, x^n_{B \setminus T})$$

is an upper bound for (23). Applying Lemma 11 to $p_{\mathrm{ind}}(u^n_{A \cup B}|u^n_0)$ and dropping the epsilon term, this expression can be further bounded from above by

$$2^{n(\sum_{j \in A} R_{jd} + \sum_{j \in A \cup (B \cap T)} R_{jj} + \zeta((A \cup B) \cap S_d))} \sum_{A^{(n)}_\epsilon(U_0, U_B, X_{B \setminus T}, Y)} p(u^n_0, u^n_B, x^n_{B \setminus T})\, p(y^n|u^n_0, u^n_B, x^n_{B \setminus T}) \sum_{A^{(n)}_\epsilon(u^n_0, u^n_B, x^n_{B \setminus T}, y^n)} p(u^n_A|u^n_0, u^n_B)\, p(x^n_{A \cup (B \cap T)}|u^n_0, u^n_{A \cup (B \cap T)}).$$

Using (17), we can further upper bound the logarithm of this expression by

$$n\Big[\sum_{j \in A} R_{jd} + \sum_{j \in A \cup (B \cap T)} R_{jj} + \zeta((A \cup B) \cap S_d)\Big] - nH(U_A|U_0, U_B) - nH(X_{A \cup (B \cap T)}|U_0, U_{A \cup (B \cap T)}) + nH(U_A, X_{A \cup (B \cap T)}|U_0, U_B, X_{B \setminus T}, Y),$$

up to terms linear in $\epsilon$ and $\delta$.

Hence $\Pr(E^{A,B}_{S,T}) \to 0$ if

$$\sum_{j \in A} R_{jd} + \sum_{j \in A \cup (B \cap T)} R_{jj} < -\zeta((A \cup B) \cap S_d) + H(U_A|U_0, U_B) + H(X_{A \cup (B \cap T)}|U_0, U_{A \cup (B \cap T)}) - H(U_A, X_{A \cup (B \cap T)}|U_0, U_B, X_{B \setminus T}, Y)$$

$$= I(U_A, X_{A \cup (B \cap T)}; Y | U_0, U_B, X_{B \setminus T}) - \zeta((A \cup B) \cap S_d),$$

where the last equality follows from the facts that

$$H(U_A|U_0, U_B) = H(U_A|U_0, U_B, X_{B \setminus T}) + I(U_A; X_{B \setminus T}|U_0, U_B) = H(U_A|U_0, U_B, X_{B \setminus T})$$

and

$$H(X_{A \cup (B \cap T)}|U_0, U_{A \cup B}) = H(X_{A \cup (B \cap T)}|U_0, U_{A \cup B}, X_{B \setminus T}) + I(X_{A \cup (B \cap T)}; X_{B \setminus T}|U_0, U_{A \cup B}) = H(X_{A \cup (B \cap T)}|U_0, U_{A \cup B}, X_{B \setminus T}).$$

Thus $\Pr(E_{S,T}) \to 0$ if for some $S \cap S^c_d \subseteq A \subseteq S$ and $S^c \cap S^c_d \subseteq B \subseteq S^c$ such that $A \cup (B \cap T) \ne \emptyset$,

$$\sum_{j \in A} R_{jd} + \sum_{j \in A \cup (B \cap T)} R_{jj} < I(U_A, X_{A \cup (B \cap T)}; Y | U_0, U_B, X_{B \setminus T}) - \zeta((A \cup B) \cap S_d). \quad (28)$$

The bounds we obtain above are in terms of $(R_{jd})_{j=1}^k$ and $(R_{jj})_{j=1}^k$. To convert these to bounds in terms of $(R_j)_{j=1}^k$, recall that $R_{j0} = \min\{C_{j0}, R_j\}$, $R_{jj} = (R_j - C^{\mathrm{in}}_j)^+$, and

$$R_{jd} = R_j - R_{j0} - R_{jj} = R_j - \min\{C_{j0}, R_j\} - (R_j - C^{\mathrm{in}}_j)^+ = (R_j - C_{j0})^+ - (R_j - C^{\mathrm{in}}_j)^+.$$

Thus (28) can be written as

$$\sum_{j \in A} (R_j - C_{j0})^+ + \sum_{j \in B \cap T} (R_j - C^{\mathrm{in}}_j)^+ < I(U_A, X_{A \cup (B \cap T)}; Y | U_0, U_B, X_{B \setminus T}) - \zeta((A \cup B) \cap S_d),$$

which is exactly (8).

B. Theorem 3 (Sum-Capacity Gain)

Fix any unit vector $\mathbf{v} \in \mathbb{R}^k_{>0}$, rate vector $\mathbf{C}_{\mathrm{in}} \in \mathbb{R}^k_{>0}$, and $\mathbf{B} \in \mathbb{R}^k_{\ge 0}$. For every $h \ge 0$, define $\mathbf{C}_{\mathrm{out}}(h) = h\mathbf{v}$. In the achievable region of Theorem 1, let $\mathcal{U}_0 = \{0, 1\}$, and for every $j \in [k]$, let $\mathcal{U}_j = \mathcal{X}_j$. Set $C_{j0} = 0$ and $C_{jd} = C^j_{\mathrm{out}}(h)$ for every $j \in [k]$. For $h > 0$, let $\mathcal{P}(h)$ be the set of all distributions of the form

$$p(u_0, u_{[k]}) \prod_{j=1}^k p(x_j|u_0, u_j)$$

that satisfy the dependence constraints

$$\sum_{j \in S} C^j_{\mathrm{out}}(h) - \sum_{j \in S} H(U_j|U_0) + H(U_S|U_0) > 0 \quad \text{for all nonempty } S \subseteq [k],$$

and the cost constraints

$$\mathbb{E}[b_j(X_j)] \le B_j \quad \text{for all } j \in [k].$$

Using Lemma 13 (see the end of this section), we see that every rate vector $(R_j)_j$ that, for some distribution $p \in \mathcal{P}(h)$ and every pair of subsets $S, T \subseteq [k]$, satisfies

$$\sum_{j \in S \cup T} R_j < I(X_{S \cup T}; Y | U_0, U_{S^c}, X_{S^c \cap T^c}) + \sum_{j \in T \setminus S} C^{\mathrm{in}}_j - \zeta_{[k]} \quad (29)$$

and

$$\sum_{j=1}^k R_j < I(X_{[k]}; Y) - \zeta_{[k]}$$

is achievable. This follows from setting $A = S$ and $B = S^c$ for every $S, T \subseteq [k]$ in (8). To obtain a lower bound on sum-capacity, we evaluate this region for a specific distribution in $\mathcal{P}(h)$.

Since our MAC is in $\mathcal{C}^*(\mathcal{X}_{[k]}, \mathcal{Y})$, there exists a distribution $p_a \in \mathcal{P}_{\mathrm{ind}}(\mathcal{X}_{[k]})$ that satisfies

$$I_a(X_{[k]}; Y) = \max_{p \in \mathcal{P}_{\mathrm{ind}}(\mathcal{X}_{[k]})} I(X_{[k]}; Y),$$

and a distribution $p_b \in \mathcal{P}(\mathcal{X}_{[k]})$ that satisfies

$$\mathbb{E}_b\big[D\big(p(y|X_{[k]}) \,\|\, p_a(y)\big)\big] > \mathbb{E}_a\big[D\big(p(y|X_{[k]}) \,\|\, p_a(y)\big)\big]$$

and whose support is contained in the support of $p_a$. Here we also assume that for all $j \in [k]$, $I_a(X_j; Y | X_{[k] \setminus \{j\}}) > 0$. At the end of the proof, we show that in the case where this property does not hold, the same result follows by considering a MAC with a smaller number of users.

Choose $\mu \in (0, 1)$ such that for every nonempty $S \subseteq [k]$,

$$\mu I_a(X_S; Y | X_{S^c}) < \sum_{j \in S} C^{\mathrm{in}}_j. \quad (30)$$

For every $\lambda \in [0, 1]$, define the distribution $p_\lambda(u_0, u_{[k]}, x_{[k]})$ as

$$p_\lambda(u_0, u_{[k]}, x_{[k]}) = p_\lambda(u_0)\, p_\lambda(u_{[k]})\, p_\lambda(x_{[k]}|u_0, u_{[k]}),$$

where

$$p_\lambda(u_0) = \begin{cases} \mu & \text{if } u_0 = 1 \\ 1 - \mu & \text{if } u_0 = 0, \end{cases}$$

and for every $u_{[k]} \in \mathcal{U}_{[k]}$ (recall $\mathcal{U}_{[k]} = \mathcal{X}_{[k]}$),

$$p_\lambda(u_{[k]}) = (1 - \lambda) p_a(u_{[k]}) + \lambda p_b(u_{[k]}).$$

Finally, for every $(u_0, u_{[k]}, x_{[k]})$,

$$p_\lambda(x_{[k]}|u_0, u_{[k]}) = \prod_{j=1}^k p_\lambda(x_j|u_0, u_j),$$

where for all $j \in [k]$,

$$p_\lambda(x_j|u_0, u_j) = \begin{cases} \mathbf{1}\{x_j = u_j\} & \text{if } u_0 = 1 \\ p_a(x_j) & \text{if } u_0 = 0. \end{cases}$$

Note that $p_\lambda(u_0)$ and $p_\lambda(x_{[k]}|u_0, u_{[k]})$ do not depend on $\lambda$. In addition, since $p_a$ and $p_b$ satisfy the cost constraints and

$$p_\lambda(x_{[k]}) = (1 - \lambda) p_a(x_{[k]}) + \lambda p_b(x_{[k]}),$$

for all $\lambda \in [0, 1]$, $p_\lambda$ satisfies the cost constraints as well.

We next find a function $\lambda^*(h)$ so that $p_{\lambda^*(h)}(u_0, u_{[k]}, x_{[k]}) \in \mathcal{P}(h)$ for sufficiently small $h$. Fix $\epsilon > 0$, and consider the equation

$$h \sum_{j=1}^k v_j = \sum_{j=1}^k H_\lambda(U_j) - H_\lambda(U_{[k]}) + \epsilon \lambda \sum_{j=1}^k v_j. \quad (31)$$

By Lemma 14 (see the end of this section),

$$\frac{dh}{d\lambda}\bigg|_{\lambda = 0^+} = \epsilon > 0.$$

Thus the inverse function theorem implies that there exists a function $\lambda^* = \lambda^*(h)$, defined on $[0, h_0)$ for some $h_0 > 0$, that satisfies (31) and

$$\frac{d\lambda^*}{dh}\bigg|_{h = 0^+} = \frac{1}{\epsilon}. \quad (32)$$

For every nonempty $S \subseteq [k]$, define the function $\zeta_S : [0, h_0) \to \mathbb{R}$ as

$$\zeta_S(h) = \sum_{j \in S} C^j_{\mathrm{out}}(h) - \sum_{j \in S} H_{\lambda^*}(U_j) + H_{\lambda^*}(U_S). \quad (33)$$

If we calculate the derivative of $\zeta_S$ at $h = 0$, by Lemma 14 we get

$$\frac{d\zeta_S}{dh}\bigg|_{h = 0^+} = \sum_{j \in S} v_j > 0.$$

This implies that there exists $0 < h_1 \le h_0$ such that for every $0 < h < h_1$ and all nonempty $S \subseteq [k]$, $\zeta_S(h) > 0$. Therefore, for all sufficiently small $h$, $p_{\lambda^*(h)}(u_0, u_{[k]}, x_{[k]})$ is in $\mathcal{P}(h)$.

We next find a lower bound for the achievable sum-rate using the distribution $p_{\lambda^*}(u_0, u_{[k]}, x_{[k]})$ for small $h$.
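The existence argument for $\lambda^*(h)$ can be visualized numerically. The sketch below solves (31) by bisection for a 2-user toy pair $(p_a, p_b)$; the monotonicity in $\lambda$ that bisection assumes holds for this example but is our shortcut, not part of the proof, and all names are ours.

```python
import math
from collections import defaultdict

def H(p):  # entropy in bits of a dict pmf
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def marginal(p, j):
    m = defaultdict(float)
    for x, q in p.items():
        m[x[j]] += q
    return m

def multi_info(p_a, p_b, lam, k):
    """sum_j H(U_j) - H(U_[k]) under the mixture (1-lam)p_a + lam*p_b."""
    mix = {x: (1 - lam) * p_a.get(x, 0) + lam * p_b.get(x, 0)
           for x in set(p_a) | set(p_b)}
    return sum(H(marginal(mix, j)) for j in range(k)) - H(mix)

def lam_star(h, p_a, p_b, v, eps=0.1, iters=60):
    """Bisection for lambda*(h) solving (31):
    h*sum(v) = sum_j H(U_j) - H(U_[k]) + eps*lambda*sum(v)."""
    k, sv = len(v), sum(v)
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        val = multi_info(p_a, p_b, mid, k) + eps * mid * sv
        lo, hi = (mid, hi) if val < h * sv else (lo, mid)
    return (lo + hi) / 2

# Two users: p_a independent uniform, p_b fully dependent.
p_a = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
p_b = {(0, 0): 0.5, (1, 1): 0.5}
for h in (0.001, 0.01, 0.1):
    print(h, lam_star(h, p_a, p_b, v=(1.0, 1.0)))
```

The printed values grow roughly like $h/\epsilon$ for small $h$, consistent with (32).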

For every $S, T \subseteq [k]$, define the function $f_{S,T} : [0, h_1) \to \mathbb{R}$ as

$$f_{S,T}(h) = I_{\lambda^*}(X_{S \cup T}; Y | U_0, U_{S^c}, X_{S^c \cap T^c}) + \sum_{j \in T \setminus S} C^{\mathrm{in}}_j - \zeta_{[k]}(h).$$

In the above equation, expanding the mutual information term with respect to $U_0$ gives

$$I_{\lambda^*}(X_{S \cup T}; Y | U_0, U_{S^c}, X_{S^c \cap T^c}) = \mu I_{\lambda^*}(X_S; Y | X_{S^c}) + (1 - \mu) I_a(X_{S \cup T}; Y | X_{S^c \cap T^c}),$$

where the term $I_{\lambda^*}(X_S; Y | X_{S^c})$ is calculated with respect to the distribution $(1 - \lambda^*) p_a(x_{[k]}) + \lambda^* p_b(x_{[k]})$. Next, for every $S \subseteq [k]$, define the function $F_S : [0, h_1) \to \mathbb{R}$ as

$$F_S(h) = I_{\lambda^*}(X_S; Y | U_0, U_{S^c}, X_{S^c}) - \zeta_{[k]}(h).$$

The following argument shows that for sufficiently small $h$ and for all $S, T \subseteq [k]$, $f_{S,T}(h) \ge F_{S \cup T}(h)$. Consider some $S$ and $T$ for which $T \setminus S$ is not empty. Then

$$f_{S,T}(0) = \mu I_a(X_S; Y | X_{S^c}) + (1 - \mu) I_a(X_{S \cup T}; Y | X_{S^c \cap T^c}) + \sum_{j \in T \setminus S} C^{\mathrm{in}}_j$$

$$\overset{(*)}{>} \mu I_a(X_S; Y | X_{S^c}) + (1 - \mu) I_a(X_{S \cup T}; Y | X_{S^c \cap T^c}) + \mu I_a(X_{T \setminus S}; Y | X_{(T \setminus S)^c})$$

$$\ge \mu I_a(X_{S \cup T}; Y | X_{S^c \cap T^c}) + (1 - \mu) I_a(X_{S \cup T}; Y | X_{S^c \cap T^c}) = I_a(X_{S \cup T}; Y | X_{S^c \cap T^c}) = F_{S \cup T}(0),$$

where $(*)$ follows from (30) (and the last inequality uses the chain rule together with the independence of the inputs under $p_a$). Note that $f_{S,T}$ and $F_{S \cup T}$ are continuous functions of $h$ for all $S$ and $T$. Thus there exists $0 < h_2 \le h_1$ such that for every $h \in [0, h_2)$ and $S, T \subseteq [k]$ with $T \setminus S \ne \emptyset$, $f_{S,T}(h) \ge F_{S \cup T}(h)$.

Next consider $S$ and $T$ for which $T \setminus S$ is empty; that is, $T$ is a subset of $S$. In this case

$$f_{S,T}(h) = I_{\lambda^*}(X_{S \cup T}; Y | U_0, U_{S^c}, X_{S^c \cap T^c}) + \sum_{j \in T \setminus S} C^{\mathrm{in}}_j - \zeta_{[k]}(h) = I_{\lambda^*}(X_S; Y | U_0, U_{S^c}, X_{S^c}) - \zeta_{[k]}(h) = F_S(h) = F_{S \cup T}(h).$$

Thus $f_{S,T}(h) \ge F_{S \cup T}(h)$ for all such $S$ and $T$ as well.

Now fix $h \in [0, h_2)$. From the above argument, it follows that the set of all rate vectors that satisfy

$$0 \le \sum_{j \in S} R_j \le F_S(h) \quad \text{for all } S \subseteq [k]$$

is achievable. Denote this region by $\mathcal{C}_{\mathrm{ach}}(h)$. Now consider the set of all rate vectors that satisfy

$$0 \le \sum_{j \in S} R_j \le \Phi_S(h) \quad \text{for all } S \subseteq [k],$$

where $\Phi_S(h)$ is defined as

$$\Phi_S(h) = F_S(h) + \zeta_{S^c}(h) + \sum_{j \in S} C^j_{\mathrm{out}}(h).$$

Denote this set by $\mathcal{C}_{\mathrm{out}}(h)$. Note that $\mathcal{C}_{\mathrm{out}}(h)$ is an outer bound for $\mathcal{C}_{\mathrm{ach}}(h)$. We next show that there exists $0 < h_3 \le h_2$ such that for every $j \in [k]$ and all $0 < h < h_3$,

$$\Phi_{\{j\}}(h) > k \sum_{i=1}^k C^i_{\mathrm{out}}(h). \quad (34)$$

To see this, first note that the right-hand side of the above inequality equals zero at $h = 0$, while

$$\Phi_{\{j\}}(0) = I_a(X_j; Y | X_{[k] \setminus \{j\}}) > 0.$$

Inequality (34) now follows from the fact that both sides are continuous in $h$.

By Lemma 15, for a fixed $h$, the mapping $S \mapsto \Phi_S(h)$ is submodular and nondecreasing. Thus for every $j \in [k]$, there exists a rate vector $(R_i)_{i \in [k]}$ in $\mathcal{C}_{\mathrm{out}}(h)$ such that

$$R_j > k \sum_{i=1}^k C^i_{\mathrm{out}}(h) \quad \text{and} \quad \sum_{i=1}^k R_i = \Phi_{[k]}(h).$$

For example, for $j = 1$, consider the rate vector $(R_i)_{i \in [k]}$ where $R_1 = \Phi_{\{1\}}(h)$ and, for all $i > 1$, $R_i = \Phi_{[i]}(h) - \Phi_{[i-1]}(h)$. From Corollary 44.3a in [20, p. 772] it follows that the defined rate vector is in $\mathcal{C}_{\mathrm{out}}(h)$. Now since $\mathcal{C}_{\mathrm{out}}(h)$ is a convex region, it follows (by averaging the $k$ vectors above) that there exists a rate vector $(R^*_j(h))_j$ such that for all $j \in [k]$,

$$R^*_j(h) > \sum_{i=1}^k C^i_{\mathrm{out}}(h) \quad \text{and} \quad \sum_{j=1}^k R^*_j(h) = \Phi_{[k]}(h).$$

Thus the sum-rate

$$R_{\mathrm{sum}}(h) = \Phi_{[k]}(h) - k \sum_{j=1}^k C^j_{\mathrm{out}}(h)$$

is achievable. In addition, since $\Phi_S(h) \le F_S(h) + \sum_{j=1}^k C^j_{\mathrm{out}}(h)$ (which holds because $\zeta_{S^c}(h) \le \sum_{j \in S^c} C^j_{\mathrm{out}}(h)$), the rate vector $\big(R^*_j(h) - \sum_{i=1}^k C^i_{\mathrm{out}}(h)\big)_j$ lies in $\mathcal{C}_{\mathrm{ach}}(h)$. On the other hand, from the definition of $\zeta_S(h)$, given by (33), together with (31), it follows that

$$\Phi_{[k]}(h) - \sum_{j=1}^k C^j_{\mathrm{out}}(h) = \mu I_{\lambda^*}(X_{[k]}; Y) + (1 - \mu) I_a(X_{[k]}; Y) - \epsilon \lambda^*(h) \sum_{j=1}^k v_j.$$

Note that

$$R_{\mathrm{sum}}(0) = I_a(X_{[k]}; Y) = \max_{p \in \mathcal{P}_{\mathrm{ind}}(\mathcal{X}_{[k]})} I(X_{[k]}; Y).$$


More information

Lecture 22: Final Review

Lecture 22: Final Review Lecture 22: Final Review Nuts and bolts Fundamental questions and limits Tools Practical algorithms Future topics Dr Yao Xie, ECE587, Information Theory, Duke University Basics Dr Yao Xie, ECE587, Information

More information

arxiv: v2 [cs.it] 28 May 2017

arxiv: v2 [cs.it] 28 May 2017 Feedback and Partial Message Side-Information on the Semideterministic Broadcast Channel Annina Bracher and Michèle Wigger arxiv:1508.01880v2 [cs.it] 28 May 2017 May 30, 2017 Abstract The capacity of the

More information

arxiv: v1 [cs.it] 4 Jun 2018

arxiv: v1 [cs.it] 4 Jun 2018 State-Dependent Interference Channel with Correlated States 1 Yunhao Sun, 2 Ruchen Duan, 3 Yingbin Liang, 4 Shlomo Shamai (Shitz) 5 Abstract arxiv:180600937v1 [csit] 4 Jun 2018 This paper investigates

More information

5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010

5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010 5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010 Capacity Theorems for Discrete, Finite-State Broadcast Channels With Feedback and Unidirectional Receiver Cooperation Ron Dabora

More information

Lecture 1: The Multiple Access Channel. Copyright G. Caire 12

Lecture 1: The Multiple Access Channel. Copyright G. Caire 12 Lecture 1: The Multiple Access Channel Copyright G. Caire 12 Outline Two-user MAC. The Gaussian case. The K-user case. Polymatroid structure and resource allocation problems. Copyright G. Caire 13 Two-user

More information

The Sensor Reachback Problem

The Sensor Reachback Problem Submitted to the IEEE Trans. on Information Theory, November 2003. 1 The Sensor Reachback Problem João Barros Sergio D. Servetto Abstract We consider the problem of reachback communication in sensor networks.

More information

On the Duality between Multiple-Access Codes and Computation Codes

On the Duality between Multiple-Access Codes and Computation Codes On the Duality between Multiple-Access Codes and Computation Codes Jingge Zhu University of California, Berkeley jingge.zhu@berkeley.edu Sung Hoon Lim KIOST shlim@kiost.ac.kr Michael Gastpar EPFL michael.gastpar@epfl.ch

More information

Capacity Theorems for Distributed Index Coding

Capacity Theorems for Distributed Index Coding Capacity Theorems for Distributed Index Coding 1 Yucheng Liu, Parastoo Sadeghi, Fatemeh Arbabjolfaei, and Young-Han Kim Abstract arxiv:1801.09063v1 [cs.it] 27 Jan 2018 In index coding, a server broadcasts

More information

LECTURE 10. Last time: Lecture outline

LECTURE 10. Last time: Lecture outline LECTURE 10 Joint AEP Coding Theorem Last time: Error Exponents Lecture outline Strong Coding Theorem Reading: Gallager, Chapter 5. Review Joint AEP A ( ɛ n) (X) A ( ɛ n) (Y ) vs. A ( ɛ n) (X, Y ) 2 nh(x)

More information

Multiuser Successive Refinement and Multiple Description Coding

Multiuser Successive Refinement and Multiple Description Coding Multiuser Successive Refinement and Multiple Description Coding Chao Tian Laboratory for Information and Communication Systems (LICOS) School of Computer and Communication Sciences EPFL Lausanne Switzerland

More information

Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel

Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel Stefano Rini, Ernest Kurniawan and Andrea Goldsmith Technische Universität München, Munich, Germany, Stanford University,

More information

Random Access: An Information-Theoretic Perspective

Random Access: An Information-Theoretic Perspective Random Access: An Information-Theoretic Perspective Paolo Minero, Massimo Franceschetti, and David N. C. Tse Abstract This paper considers a random access system where each sender can be in two modes of

More information

Feedback Capacity of the Gaussian Interference Channel to Within Bits: the Symmetric Case

Feedback Capacity of the Gaussian Interference Channel to Within Bits: the Symmetric Case 1 arxiv:0901.3580v1 [cs.it] 23 Jan 2009 Feedback Capacity of the Gaussian Interference Channel to Within 1.7075 Bits: the Symmetric Case Changho Suh and David Tse Wireless Foundations in the Department

More information

National University of Singapore Department of Electrical & Computer Engineering. Examination for

National University of Singapore Department of Electrical & Computer Engineering. Examination for National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:

More information

Interference Channel aided by an Infrastructure Relay

Interference Channel aided by an Infrastructure Relay Interference Channel aided by an Infrastructure Relay Onur Sahin, Osvaldo Simeone, and Elza Erkip *Department of Electrical and Computer Engineering, Polytechnic Institute of New York University, Department

More information

Variable Length Codes for Degraded Broadcast Channels

Variable Length Codes for Degraded Broadcast Channels Variable Length Codes for Degraded Broadcast Channels Stéphane Musy School of Computer and Communication Sciences, EPFL CH-1015 Lausanne, Switzerland Email: stephane.musy@ep.ch Abstract This paper investigates

More information

A Comparison of Two Achievable Rate Regions for the Interference Channel

A Comparison of Two Achievable Rate Regions for the Interference Channel A Comparison of Two Achievable Rate Regions for the Interference Channel Hon-Fah Chong, Mehul Motani, and Hari Krishna Garg Electrical & Computer Engineering National University of Singapore Email: {g030596,motani,eleghk}@nus.edu.sg

More information

On the Capacity Region for Secure Index Coding

On the Capacity Region for Secure Index Coding On the Capacity Region for Secure Index Coding Yuxin Liu, Badri N. Vellambi, Young-Han Kim, and Parastoo Sadeghi Research School of Engineering, Australian National University, {yuxin.liu, parastoo.sadeghi}@anu.edu.au

More information

Secret Key Agreement Using Conferencing in State- Dependent Multiple Access Channels with An Eavesdropper

Secret Key Agreement Using Conferencing in State- Dependent Multiple Access Channels with An Eavesdropper Secret Key Agreement Using Conferencing in State- Dependent Multiple Access Channels with An Eavesdropper Mohsen Bahrami, Ali Bereyhi, Mahtab Mirmohseni and Mohammad Reza Aref Information Systems and Security

More information

A Comparison of Superposition Coding Schemes

A Comparison of Superposition Coding Schemes A Comparison of Superposition Coding Schemes Lele Wang, Eren Şaşoğlu, Bernd Bandemer, and Young-Han Kim Department of Electrical and Computer Engineering University of California, San Diego La Jolla, CA

More information

Achievable Rates and Outer Bound for the Half-Duplex MAC with Generalized Feedback

Achievable Rates and Outer Bound for the Half-Duplex MAC with Generalized Feedback 1 Achievable Rates and Outer Bound for the Half-Duplex MAC with Generalized Feedback arxiv:1108.004v1 [cs.it] 9 Jul 011 Ahmad Abu Al Haija and Mai Vu, Department of Electrical and Computer Engineering

More information

Linearly Representable Entropy Vectors and their Relation to Network Coding Solutions

Linearly Representable Entropy Vectors and their Relation to Network Coding Solutions 2009 IEEE Information Theory Workshop Linearly Representable Entropy Vectors and their Relation to Network Coding Solutions Asaf Cohen, Michelle Effros, Salman Avestimehr and Ralf Koetter Abstract In this

More information

LECTURE 13. Last time: Lecture outline

LECTURE 13. Last time: Lecture outline LECTURE 13 Last time: Strong coding theorem Revisiting channel and codes Bound on probability of error Error exponent Lecture outline Fano s Lemma revisited Fano s inequality for codewords Converse to

More information

An Achievable Error Exponent for the Mismatched Multiple-Access Channel

An Achievable Error Exponent for the Mismatched Multiple-Access Channel An Achievable Error Exponent for the Mismatched Multiple-Access Channel Jonathan Scarlett University of Cambridge jms265@camacuk Albert Guillén i Fàbregas ICREA & Universitat Pompeu Fabra University of

More information

X 1 : X Table 1: Y = X X 2

X 1 : X Table 1: Y = X X 2 ECE 534: Elements of Information Theory, Fall 200 Homework 3 Solutions (ALL DUE to Kenneth S. Palacio Baus) December, 200. Problem 5.20. Multiple access (a) Find the capacity region for the multiple-access

More information

ProblemsWeCanSolveWithaHelper

ProblemsWeCanSolveWithaHelper ITW 2009, Volos, Greece, June 10-12, 2009 ProblemsWeCanSolveWitha Haim Permuter Ben-Gurion University of the Negev haimp@bgu.ac.il Yossef Steinberg Technion - IIT ysteinbe@ee.technion.ac.il Tsachy Weissman

More information

Joint Write-Once-Memory and Error-Control Codes

Joint Write-Once-Memory and Error-Control Codes 1 Joint Write-Once-Memory and Error-Control Codes Xudong Ma Pattern Technology Lab LLC, U.S.A. Email: xma@ieee.org arxiv:1411.4617v1 [cs.it] 17 ov 2014 Abstract Write-Once-Memory (WOM) is a model for many

More information

An Outer Bound for the Gaussian. Interference channel with a relay.

An Outer Bound for the Gaussian. Interference channel with a relay. An Outer Bound for the Gaussian Interference Channel with a Relay Ivana Marić Stanford University Stanford, CA ivanam@wsl.stanford.edu Ron Dabora Ben-Gurion University Be er-sheva, Israel ron@ee.bgu.ac.il

More information

Appendix B Information theory from first principles

Appendix B Information theory from first principles Appendix B Information theory from first principles This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes

More information

ECE Information theory Final

ECE Information theory Final ECE 776 - Information theory Final Q1 (1 point) We would like to compress a Gaussian source with zero mean and variance 1 We consider two strategies In the first, we quantize with a step size so that the

More information

List of Figures. Acknowledgements. Abstract 1. 1 Introduction 2. 2 Preliminaries Superposition Coding Block Markov Encoding...

List of Figures. Acknowledgements. Abstract 1. 1 Introduction 2. 2 Preliminaries Superposition Coding Block Markov Encoding... Contents Contents i List of Figures iv Acknowledgements vi Abstract 1 1 Introduction 2 2 Preliminaries 7 2.1 Superposition Coding........................... 7 2.2 Block Markov Encoding.........................

More information

Network coding for multicast relation to compression and generalization of Slepian-Wolf

Network coding for multicast relation to compression and generalization of Slepian-Wolf Network coding for multicast relation to compression and generalization of Slepian-Wolf 1 Overview Review of Slepian-Wolf Distributed network compression Error exponents Source-channel separation issues

More information

Subset Typicality Lemmas and Improved Achievable Regions in Multiterminal Source Coding

Subset Typicality Lemmas and Improved Achievable Regions in Multiterminal Source Coding Subset Typicality Lemmas and Improved Achievable Regions in Multiterminal Source Coding Kumar Viswanatha, Emrah Akyol and Kenneth Rose ECE Department, University of California - Santa Barbara {kumar,eakyol,rose}@ece.ucsb.edu

More information

Approximate Capacity of Fast Fading Interference Channels with no CSIT

Approximate Capacity of Fast Fading Interference Channels with no CSIT Approximate Capacity of Fast Fading Interference Channels with no CSIT Joyson Sebastian, Can Karakus, Suhas Diggavi Abstract We develop a characterization of fading models, which assigns a number called

More information

Error Exponent Region for Gaussian Broadcast Channels

Error Exponent Region for Gaussian Broadcast Channels Error Exponent Region for Gaussian Broadcast Channels Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos Electrical Engineering and Computer Science Dept. University of Michigan, Ann Arbor, MI

More information

Optimal Natural Encoding Scheme for Discrete Multiplicative Degraded Broadcast Channels

Optimal Natural Encoding Scheme for Discrete Multiplicative Degraded Broadcast Channels Optimal Natural Encoding Scheme for Discrete Multiplicative Degraded Broadcast Channels Bike ie, Student Member, IEEE and Richard D. Wesel, Senior Member, IEEE Abstract Certain degraded broadcast channels

More information

A Graph-based Framework for Transmission of Correlated Sources over Multiple Access Channels

A Graph-based Framework for Transmission of Correlated Sources over Multiple Access Channels A Graph-based Framework for Transmission of Correlated Sources over Multiple Access Channels S. Sandeep Pradhan a, Suhan Choi a and Kannan Ramchandran b, a {pradhanv,suhanc}@eecs.umich.edu, EECS Dept.,

More information

Lecture 6 I. CHANNEL CODING. X n (m) P Y X

Lecture 6 I. CHANNEL CODING. X n (m) P Y X 6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder

More information

Joint Source-Channel Coding for the Multiple-Access Relay Channel

Joint Source-Channel Coding for the Multiple-Access Relay Channel Joint Source-Channel Coding for the Multiple-Access Relay Channel Yonathan Murin, Ron Dabora Department of Electrical and Computer Engineering Ben-Gurion University, Israel Email: moriny@bgu.ac.il, ron@ee.bgu.ac.il

More information

EE376A - Information Theory Final, Monday March 14th 2016 Solutions. Please start answering each question on a new page of the answer booklet.

EE376A - Information Theory Final, Monday March 14th 2016 Solutions. Please start answering each question on a new page of the answer booklet. EE376A - Information Theory Final, Monday March 14th 216 Solutions Instructions: You have three hours, 3.3PM - 6.3PM The exam has 4 questions, totaling 12 points. Please start answering each question on

More information

On Scalable Coding in the Presence of Decoder Side Information

On Scalable Coding in the Presence of Decoder Side Information On Scalable Coding in the Presence of Decoder Side Information Emrah Akyol, Urbashi Mitra Dep. of Electrical Eng. USC, CA, US Email: {eakyol, ubli}@usc.edu Ertem Tuncel Dep. of Electrical Eng. UC Riverside,

More information

Classical codes for quantum broadcast channels. arxiv: Ivan Savov and Mark M. Wilde School of Computer Science, McGill University

Classical codes for quantum broadcast channels. arxiv: Ivan Savov and Mark M. Wilde School of Computer Science, McGill University Classical codes for quantum broadcast channels arxiv:1111.3645 Ivan Savov and Mark M. Wilde School of Computer Science, McGill University International Symposium on Information Theory, Boston, USA July

More information

AN INTRODUCTION TO SECRECY CAPACITY. 1. Overview

AN INTRODUCTION TO SECRECY CAPACITY. 1. Overview AN INTRODUCTION TO SECRECY CAPACITY BRIAN DUNN. Overview This paper introduces the reader to several information theoretic aspects of covert communications. In particular, it discusses fundamental limits

More information

Equivalence for Networks with Adversarial State

Equivalence for Networks with Adversarial State Equivalence for Networks with Adversarial State Oliver Kosut Department of Electrical, Computer and Energy Engineering Arizona State University Tempe, AZ 85287 Email: okosut@asu.edu Jörg Kliewer Department

More information

Amobile satellite communication system, like Motorola s

Amobile satellite communication system, like Motorola s I TRANSACTIONS ON INFORMATION THORY, VOL. 45, NO. 4, MAY 1999 1111 Distributed Source Coding for Satellite Communications Raymond W. Yeung, Senior Member, I, Zhen Zhang, Senior Member, I Abstract Inspired

More information

Capacity Region of Reversely Degraded Gaussian MIMO Broadcast Channel

Capacity Region of Reversely Degraded Gaussian MIMO Broadcast Channel Capacity Region of Reversely Degraded Gaussian MIMO Broadcast Channel Jun Chen Dept. of Electrical and Computer Engr. McMaster University Hamilton, Ontario, Canada Chao Tian AT&T Labs-Research 80 Park

More information

IN this paper, we show that the scalar Gaussian multiple-access

IN this paper, we show that the scalar Gaussian multiple-access 768 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 5, MAY 2004 On the Duality of Gaussian Multiple-Access and Broadcast Channels Nihar Jindal, Student Member, IEEE, Sriram Vishwanath, and Andrea

More information

The Gallager Converse

The Gallager Converse The Gallager Converse Abbas El Gamal Director, Information Systems Laboratory Department of Electrical Engineering Stanford University Gallager s 75th Birthday 1 Information Theoretic Limits Establishing

More information

II. THE TWO-WAY TWO-RELAY CHANNEL

II. THE TWO-WAY TWO-RELAY CHANNEL An Achievable Rate Region for the Two-Way Two-Relay Channel Jonathan Ponniah Liang-Liang Xie Department of Electrical Computer Engineering, University of Waterloo, Canada Abstract We propose an achievable

More information

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel Introduction to Coding Theory CMU: Spring 2010 Notes 3: Stochastic channels and noisy coding theorem bound January 2010 Lecturer: Venkatesan Guruswami Scribe: Venkatesan Guruswami We now turn to the basic

More information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information 204 IEEE International Symposium on Information Theory Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information Omur Ozel, Kaya Tutuncuoglu 2, Sennur Ulukus, and Aylin Yener

More information

EE376A: Homeworks #4 Solutions Due on Thursday, February 22, 2018 Please submit on Gradescope. Start every question on a new page.

EE376A: Homeworks #4 Solutions Due on Thursday, February 22, 2018 Please submit on Gradescope. Start every question on a new page. EE376A: Homeworks #4 Solutions Due on Thursday, February 22, 28 Please submit on Gradescope. Start every question on a new page.. Maximum Differential Entropy (a) Show that among all distributions supported

More information

Relay Networks With Delays

Relay Networks With Delays Relay Networks With Delays Abbas El Gamal, Navid Hassanpour, and James Mammen Department of Electrical Engineering Stanford University, Stanford, CA 94305-9510 Email: {abbas, navid, jmammen}@stanford.edu

More information

Wideband Fading Channel Capacity with Training and Partial Feedback

Wideband Fading Channel Capacity with Training and Partial Feedback Wideband Fading Channel Capacity with Training and Partial Feedback Manish Agarwal, Michael L. Honig ECE Department, Northwestern University 145 Sheridan Road, Evanston, IL 6008 USA {m-agarwal,mh}@northwestern.edu

More information

18.2 Continuous Alphabet (discrete-time, memoryless) Channel

18.2 Continuous Alphabet (discrete-time, memoryless) Channel 0-704: Information Processing and Learning Spring 0 Lecture 8: Gaussian channel, Parallel channels and Rate-distortion theory Lecturer: Aarti Singh Scribe: Danai Koutra Disclaimer: These notes have not

More information

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122 Lecture 5: Channel Capacity Copyright G. Caire (Sample Lectures) 122 M Definitions and Problem Setup 2 X n Y n Encoder p(y x) Decoder ˆM Message Channel Estimate Definition 11. Discrete Memoryless Channel

More information

The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR

The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR 1 Stefano Rini, Daniela Tuninetti and Natasha Devroye srini2, danielat, devroye @ece.uic.edu University of Illinois at Chicago Abstract

More information

Lecture 10: Broadcast Channel and Superposition Coding

Lecture 10: Broadcast Channel and Superposition Coding Lecture 10: Broadcast Channel and Superposition Coding Scribed by: Zhe Yao 1 Broadcast channel M 0M 1M P{y 1 y x} M M 01 1 M M 0 The capacity of the broadcast channel depends only on the marginal conditional

More information

Degrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages

Degrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages Degrees of Freedom Region of the Gaussian MIMO Broadcast hannel with ommon and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and omputer Engineering University of Maryland, ollege

More information

On Network Interference Management

On Network Interference Management On Network Interference Management Aleksandar Jovičić, Hua Wang and Pramod Viswanath March 3, 2008 Abstract We study two building-block models of interference-limited wireless networks, motivated by the

More information

A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality

A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality 0 IEEE Information Theory Workshop A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality Kumar Viswanatha, Emrah Akyol and Kenneth Rose ECE Department, University of California

More information

On Capacity Under Received-Signal Constraints

On Capacity Under Received-Signal Constraints On Capacity Under Received-Signal Constraints Michael Gastpar Dept. of EECS, University of California, Berkeley, CA 9470-770 gastpar@berkeley.edu Abstract In a world where different systems have to share

More information

arxiv: v1 [cs.it] 5 Feb 2016

arxiv: v1 [cs.it] 5 Feb 2016 An Achievable Rate-Distortion Region for Multiple Descriptions Source Coding Based on Coset Codes Farhad Shirani and S. Sandeep Pradhan Dept. of Electrical Engineering and Computer Science Univ. of Michigan,

More information

On Achievability for Downlink Cloud Radio Access Networks with Base Station Cooperation

On Achievability for Downlink Cloud Radio Access Networks with Base Station Cooperation On Achievability for Downlink Cloud Radio Access Networks with Base Station Cooperation Chien-Yi Wang, Michèle Wigger, and Abdellatif Zaidi 1 Abstract arxiv:1610.09407v1 [cs.it] 8 Oct 016 This work investigates

More information

Polar codes for the m-user MAC and matroids

Polar codes for the m-user MAC and matroids Research Collection Conference Paper Polar codes for the m-user MAC and matroids Author(s): Abbe, Emmanuel; Telatar, Emre Publication Date: 2010 Permanent Link: https://doi.org/10.3929/ethz-a-005997169

More information

Lecture 11: Polar codes construction

Lecture 11: Polar codes construction 15-859: Information Theory and Applications in TCS CMU: Spring 2013 Lecturer: Venkatesan Guruswami Lecture 11: Polar codes construction February 26, 2013 Scribe: Dan Stahlke 1 Polar codes: recap of last

More information

Alphabet Size Reduction for Secure Network Coding: A Graph Theoretic Approach

Alphabet Size Reduction for Secure Network Coding: A Graph Theoretic Approach ALPHABET SIZE REDUCTION FOR SECURE NETWORK CODING: A GRAPH THEORETIC APPROACH 1 Alphabet Size Reduction for Secure Network Coding: A Graph Theoretic Approach Xuan Guang, Member, IEEE, and Raymond W. Yeung,

More information

Towards control over fading channels

Towards control over fading channels Towards control over fading channels Paolo Minero, Massimo Franceschetti Advanced Network Science University of California San Diego, CA, USA mail: {minero,massimo}@ucsd.edu Invited Paper) Subhrakanti

More information

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity 5-859: Information Theory and Applications in TCS CMU: Spring 23 Lecture 8: Channel and source-channel coding theorems; BEC & linear codes February 7, 23 Lecturer: Venkatesan Guruswami Scribe: Dan Stahlke

More information

On the Capacity of the Two-Hop Half-Duplex Relay Channel

On the Capacity of the Two-Hop Half-Duplex Relay Channel On the Capacity of the Two-Hop Half-Duplex Relay Channel Nikola Zlatanov, Vahid Jamali, and Robert Schober University of British Columbia, Vancouver, Canada, and Friedrich-Alexander-University Erlangen-Nürnberg,

More information

Linear Codes, Target Function Classes, and Network Computing Capacity

Linear Codes, Target Function Classes, and Network Computing Capacity Linear Codes, Target Function Classes, and Network Computing Capacity Rathinakumar Appuswamy, Massimo Franceschetti, Nikhil Karamchandani, and Kenneth Zeger IEEE Transactions on Information Theory Submitted:

More information

Upper Bounds on the Capacity of Binary Intermittent Communication

Upper Bounds on the Capacity of Binary Intermittent Communication Upper Bounds on the Capacity of Binary Intermittent Communication Mostafa Khoshnevisan and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame, Indiana 46556 Email:{mhoshne,

More information