On Achievability for Downlink Cloud Radio Access Networks with Base Station Cooperation


Chien-Yi Wang, Michèle Wigger, and Abdellatif Zaidi

arXiv v1 [cs.IT] 8 Oct 2016

Abstract

This work investigates the downlink of a cloud radio access network (C-RAN) in which a central processor communicates with two mobile users through two base stations (BSs). The BSs act as relay nodes and cooperate with each other through error-free rate-limited links. We develop and analyze two coding schemes for this scenario. The first coding scheme is based on the Liu-Kang scheme for C-RANs without BS cooperation and extends it to scenarios allowing conferencing between the BSs. Among other features, our new coding scheme enables arbitrary correlation among the auxiliary codewords that are recovered by the BSs. It also introduces common codewords to be described to both BSs. For the analysis of this coding scheme, we extend the multivariate covering lemma to non-Cartesian product sets, thereby correcting an erroneous application of this lemma in Liu and Kang's related work. We highlight key aspects of this scheme by studying three important instances of it. The second coding scheme extends the so-called compression scheme, originally developed for memoryless Gaussian C-RANs without BS cooperation, to general discrete memoryless C-RANs with BS cooperation. We show that this scheme subsumes the original compression scheme when applied to memoryless Gaussian C-RAN models. In the analysis of this scheme, we also highlight important connections with the so-called distributed decode-forward scheme, and refine the approximate capacity of a general N-BS L-user C-RAN model in the memoryless Gaussian case.

Index Terms: Broadcast relay networks, cloud radio access networks, compression, conferencing relays, data sharing, distributed decode-forward, Gaussian networks.

I.
INTRODUCTION

Cloud radio access networks (C-RANs) are promising candidates for fifth-generation (5G) wireless communication networks. In a C-RAN, the base stations (BSs) are connected to a central processor through digital fronthaul links. Comprehensive surveys on C-RANs can be found in [1], [2]. The 2-BS 2-user case is depicted in Figure 1. The two most important coding schemes for downlink C-RANs are:

The data-sharing scheme: The central processor splits each message into independent submessages and conveys these submessages to one or multiple BSs. The BSs map the received submessages into codewords and transmit these codewords over the interference network. The mobile users decode their intended message parts by treating interference as noise. If there are N BSs, in general there can be up to 2^N - 1 submessages, each of which is sent to a specific subset of BSs. Two special cases have been considered in the literature: Zakhour and Gesbert [3] studied the 2-BS 2-user case. Dai and Yu [4] focused on BS clustering for general C-RANs: the messages are sent as a whole to subsets of BSs and there is no message splitting.

The compression scheme: The central processor first precalculates idealized channel inputs and then sends lossy representations of these idealized inputs over the rate-limited fronthaul links to the BSs. The BSs reconstruct the compressed signals and transmit them over the interference network. The first hop, from the central processor to the BSs, is conceptually a lossy source coding problem. The goal is to make the compressed signals correlated in a way that is useful for the second hop, from the BSs to the mobile users. The compression scheme was first investigated by Park et al. [5] for the memoryless Gaussian case.

The work of C.-Y. Wang and M. Wigger has been supported by Huawei Technologies France SASU, under grant agreement YB . C.-Y. Wang and M.
Wigger are with the Communications and Electronics Department, Telecom ParisTech, Université Paris-Saclay, Paris, France. E-mails: {chien-yi.wang, michele.wigger}@telecom-paristech.fr. A. Zaidi is with the Mathematics and Algorithmic Sciences Lab, Huawei Technologies France, Boulogne-Billancourt, France. E-mail: abdellatif.zaidi@huawei.com

Fig. 1. Downlink C-RAN with two base stations and two mobile users.

A third scheme, reverse compute-forward, was proposed by Hong and Caire [6]; it uses nested lattice codes to perform precalculations in a finite field. The reverse compute-forward scheme can enhance the performance when the fronthaul links are weak, but it suffers from a non-integer penalty and is thus less competitive than the first two schemes when the fronthaul links are strong. Recently, some advanced coding schemes based on random coding have been developed for the downlink of C-RANs. Liu and Kang [7] generalized the data-sharing scheme to a new scheme, which we will refer to as the Liu-Kang scheme. In the Liu-Kang scheme, the central processor maps the message pair (M_1, M_2) into 2-dimensional Marton codewords: codewords U_1^n, U_2^n for message M_1 and V_1^n, V_2^n for message M_2. The central processor then describes codewords U_1^n, V_1^n to BS 1 and codewords U_2^n, V_2^n to BS 2, where the descriptions are obtained by enumerating all possible pairs of codewords (U_1^n, V_1^n) and (U_2^n, V_2^n). However, the performance analysis in [7] is flawed due to an erroneous application of the mutual covering lemma. This leads to a rate region that is not achievable using the described coding scheme, because some rate constraints are missing. On the other hand, it was observed in [8] that for the 2-BS 2-user case, distributed decode-forward (DDF) [9] subsumes the compression scheme. The DDF scheme precodes every codeword involved in the entire communication already at the source (the central processor, in our setup); the codewords carry the information of the messages in an implicit manner. In this paper, we study the downlink of a C-RAN with two BSs and two mobile users in which the BSs cooperate over error-free rate-limited links. We develop two coding schemes for this model.
The first coding scheme, termed generalized data-sharing (G-DS), is based on a variation of the Liu-Kang scheme [7], which was developed for C-RANs without BS cooperation. Our G-DS scheme accounts for the conferencing between the BSs by introducing common codewords (U_0^n, V_0^n) intended to be recovered by both of them. The analysis generalizes that of [7] and fixes an erroneous step in the achievability proof therein. To this end, in particular, we extend the multivariate covering lemma to non-Cartesian product sets. The second coding scheme, termed generalized compression (G-Compression), is based on the compression scheme developed by Park et al. [5] in the context of Gaussian C-RANs without BS cooperation. Our G-Compression scheme also accounts for conferencing between the BSs and applies to general discrete memoryless channels on the second hop. We analyze this scheme and show that its performance subsumes that of the DDF scheme when the latter is applied to the studied downlink C-RAN model. Furthermore, we characterize the capacity region of a general N-BS L-user C-RAN model under the memoryless Gaussian model to within a better (i.e., smaller) constant gap, independent of power. The main contributions of this work can be summarized as follows: 1) We modify the Liu-Kang scheme [7] and introduce common codewords into the new G-DS scheme. We use the cooperation links to exchange part of the common codewords and to redirect private codewords under asymmetric link or channel conditions. The new G-DS scheme subsumes the data-sharing scheme proposed in [3]. To highlight distinct features of the code components, we consider three representative simplifications. 2) We introduce a cloud center and incorporate BS cooperation into the compression scheme in [5], and derive the corresponding achievable rate region for general discrete memoryless channels on the second hop. The new G-Compression scheme subsumes the scheme proposed in [5] when adapted to the memoryless Gaussian case.
3) We simplify the achievable rate region of the DDF scheme for the downlink C-RAN with BS cooperation. Under the memoryless Gaussian model, we characterize the capacity region of a downlink N-BS L-user C-RAN with BS cooperation to within a gap of (L/2) min{N, L, log N} + 2 bits per dimension, which improves the previous result of (L + N)/2 bits. 4) We show that under the memoryless Gaussian model, the G-DS scheme outperforms the G-Compression scheme in the low-power regime and when the channel gain matrix is ill-conditioned. Furthermore, compared to the G-Compression scheme, the G-DS scheme benefits more from BS cooperation.

The paper is organized as follows. In Section II, we provide the problem formulation for the 2-BS 2-user case. Section III is devoted to the G-DS scheme, in which we describe the detailed coding scheme and consider three representative special cases and two examples with simpler network topologies. Section IV is devoted to the G-Compression scheme. In this section, we describe the G-Compression scheme with a cloud center and then conduct a performance analysis of the DDF scheme. Finally, in Section V we compare the G-DS scheme and the G-Compression scheme through examples and evaluation for the memoryless Gaussian model. The lengthy proofs are deferred to appendices.

A. Notations

Random variables and their realizations are represented by uppercase letters (e.g., X) and lowercase letters (e.g., x), respectively. Matrices are represented by uppercase letters in sans-serif font (e.g., M) and vectors are in boldface font (e.g., v). We use calligraphic symbols (e.g., X) and the Greek letter Ω to denote sets. The probability distribution of a random variable X is denoted by p_X. Denote by |·| the cardinality of a set and by 1{·} the indicator function of an event. We denote [a] := {1, 2, ..., ⌈a⌉} for all a ≥ 1, X^k := (X_1, X_2, ..., X_k), and X(Ω) := (X_i : i ∈ Ω). Throughout the paper, all logarithms are to the base two. The usual notation for entropy, H(X), and mutual information, I(X; Y), is used.
We follow the ε-δ notation in [10] and the robust typicality introduced in [11]: For X ~ p_X and ε ∈ (0, 1), the set of typical sequences of length k with respect to the probability distribution p_X and the parameter ε is denoted by T_ε^(k)(X), which is defined as

T_ε^(k)(X) := { x^k ∈ X^k : |#(a | x^k)/k - p_X(a)| ≤ ε p_X(a), for all a ∈ X },

where #(a | x^k) is the number of occurrences of a in x^k. Finally, the total correlation among the random variables X(Ω) is defined as Γ(X(Ω)) := Σ_{i∈Ω} H(X_i) - H(X(Ω)).

II. PROBLEM STATEMENT

Consider the downlink 2-BS 2-user C-RAN with BS cooperation depicted in Figure 2. The network consists of one central processor, two BSs, and two mobile users. The central processor communicates with the two BSs through individual noiseless bit pipes of finite capacities. Denote by C_k the capacity of the link from the central processor to BS k. In addition, the two BSs can also communicate with each other through individual noiseless bit pipes of finite capacities. Denote by C_kj the capacity of the link from BS j to BS k. The network from the BSs to the mobile users is modeled as a discrete memoryless interference channel (DM-IC) (X_1 × X_2, p_{Y_1,Y_2|X_1,X_2}, Y_1 × Y_2) that consists of four finite sets X_1, X_2, Y_1, Y_2 and a collection of conditional probability mass functions (pmfs) p_{Y_1,Y_2|X_1,X_2}. With the help of the two BSs, the central processor wants to communicate two messages M_1 and M_2 to users 1 and 2, respectively. Assume that M_1 and M_2 are independent and uniformly distributed over [2^{nR_1}] and [2^{nR_2}], respectively. In this paper, we restrict attention to information processing on a block-by-block basis. Each block consists of a sequence of n symbols. The entire communication is divided into three successive phases:

1) Central processor to BSs: The central processor conveys two indices (W_1, W_2) := f_0(M_1, M_2) to BS 1 and BS 2, respectively, where f_0 : [2^{nR_1}] × [2^{nR_2}] → [2^{nC_1}] × [2^{nC_2}] is the encoder of the central processor.
2) BS-to-BS conferencing communication: BS 1 conveys an index W_{21} := f_1(W_1) to BS 2, where f_1 : [2^{nC_1}] → [2^{nC_{21}}] is the conferencing encoder of BS 1. BS 2 conveys an index W_{12} := f_2(W_2) to BS 1, where f_2 : [2^{nC_2}] → [2^{nC_{12}}] is the conferencing encoder of BS 2.

Fig. 2. Downlink C-RAN with BS cooperation: two base stations and two mobile users.

3) BSs to mobile users: BS 1 transmits a sequence X_1^n := g_1(W_1, W_{12}) over the DM-IC, where g_1 : [2^{nC_1}] × [2^{nC_{12}}] → X_1^n is the channel encoder of BS 1. BS 2 transmits a sequence X_2^n := g_2(W_2, W_{21}) over the DM-IC, where g_2 : [2^{nC_2}] × [2^{nC_{21}}] → X_2^n is the channel encoder of BS 2.

Upon receiving the sequence Y_l^n ∈ Y_l^n, user l ∈ {1, 2} finds an estimate M̂_l := d_l(Y_l^n) of message M_l, where d_l : Y_l^n → [2^{nR_l}] is the decoder of user l. The collection of the encoders f_0, f_1, f_2, g_1, g_2 and the decoders d_1, d_2 is called a (2^{nR_1}, 2^{nR_2}, n) channel code for the downlink 2-BS 2-user C-RAN model with BS cooperation. The average error probability is defined as

P_e^(n) := P( ∪_{l=1}^{2} { M̂_l ≠ M_l } ).

We say that a rate pair (R_1, R_2) is achievable if there exists a sequence of (2^{nR_1}, 2^{nR_2}, n) codes such that lim_{n→∞} P_e^(n) = 0. The capacity region of the downlink C-RAN is the closure of the set of achievable rate pairs. Finally, we remark that using the discretization procedure [10, Section 3.4.1] and appropriately introducing input costs, our developed results for DM-ICs can be adapted to the Gaussian interference channel with constrained input power. The input-output relation of this channel is

[Y_1; Y_2] = [g_11 g_12; g_21 g_22] [X_1; X_2] + [Z_1; Z_2],   (1)

where X_k ∈ R is the channel input from BS k, Y_l is the channel output observed at user l, g_lk ∈ R is the channel gain from BS k to user l, (Z_1, Z_2) are i.i.d. N(0, 1), and each BS has to satisfy an average power constraint P, i.e., (1/n) Σ_{i=1}^{n} x_{ki}^2 ≤ P for all k ∈ {1, 2}.

III. GENERALIZED DATA-SHARING SCHEME

We now propose a new coding scheme, which we term the generalized data-sharing (G-DS) scheme and which generalizes the data-sharing scheme [3]. It is instructive to briefly review the encoding of the data-sharing scheme before presenting the details of our new G-DS scheme.

A.
Preliminary: Data-Sharing Scheme

The conventional data-sharing scheme follows a rate-splitting approach: Each message M_l is split into three independent submessages M_l0, M_l1, and M_l2, where l ∈ {1, 2}. The central processor sends the private messages (M_1k, M_2k) to BS k, where k ∈ {1, 2}, and the common messages (M_10, M_20) to both BSs. The BSs map the received submessages into codewords, i.e., m_1j → u_j^n and m_2j → v_j^n, for all j ∈ {0, 1, 2}, and each BS k ∈ {1, 2} applies a symbol-by-symbol mapping x_k(u_0, v_0, u_k, v_k) to map the codewords (U_0^n, V_0^n, U_k^n, V_k^n) into channel inputs X_k^n. In the conventional data-sharing scheme, the codewords are generated according to the distribution

p_{U_0,U_1,U_2,V_0,V_1,V_2} = Π_{j=0}^{2} p_{U_j} p_{V_j}.
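When the symbol-by-symbol mappings x_k are restricted to linear functions in the memoryless Gaussian model (1), this encoding reduces to linear beamforming. The following minimal simulation sketches that special case; the channel gains, the power split, and the Gaussian codeword symbols are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                    # block length
P = 10.0                      # per-BS average power constraint
G = np.array([[1.0, 0.5],     # g_lk: assumed gain from BS k to user l
              [0.4, 1.0]])

# Independent unit-power codeword symbols for the six streams.
u0 = rng.standard_normal(n)       # common stream carrying parts of message 1
v0 = rng.standard_normal(n)       # common stream carrying parts of message 2
u = rng.standard_normal((2, n))   # private streams u_1, u_2
v = rng.standard_normal((2, n))   # private streams v_1, v_2

# Linear symbol-by-symbol mapping at BS k:
#   x_k = w0*u0 + w1*v0 + w2*u_k + w3*v_k,
# with weights scaled so that E[x_k^2] = P (assumed power split).
w = np.sqrt(P * np.array([0.4, 0.3, 0.2, 0.1]))
X = np.stack([w[0]*u0 + w[1]*v0 + w[2]*u[k] + w[3]*v[k] for k in range(2)])

# Second hop, channel (1): Y = G X + Z with unit-variance noise.
Y = G @ X + rng.standard_normal((2, n))

print(np.mean(X**2, axis=1))  # empirical per-BS powers, close to [P, P]
```

In [3] the beamforming weights are optimized rather than fixed; here they are hard-coded only to show how the power constraint shapes the mapping.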

In [3], Zakhour and Gesbert specialized the data-sharing scheme to the memoryless Gaussian model and to linear mappings x_k, which is known as linear beamforming in the literature. Our aim is to develop a coding scheme that allows exploiting the full joint distribution p_{U_0,U_1,U_2,V_0,V_1,V_2}. However, to the best of our knowledge, the rate-splitting approach seems to admit at best the structure p_{U_0,V_0} Π_{j=1}^{2} p_{U_j,V_j|U_0,V_0}. The reason is that, since the private messages are independent of each other and of the common messages, the BSs cannot coordinate with each other to have (U_1, V_1) directly correlate with (U_2, V_2). In order to overcome this obstacle, we modify and extend the Liu-Kang scheme [7]: Each message, instead of being split into three independent parts, is now represented by a set of auxiliary index tuples. Each auxiliary index refers to a codeword of independently generated codebooks. Through a joint typicality test, we find auxiliary indices such that the set of corresponding codewords is coordinated.

B. Performance

First, let us give a high-level summary of the G-DS scheme. The encoding is based on multicoding. We fix a joint pmf p_{U_0,V_0,U_1,V_1,U_2,V_2} and independently generate six codebooks U_j, V_j, j ∈ {0, 1, 2}, from the marginals p_{U_j}, p_{V_j}, j ∈ {0, 1, 2}, respectively. For j ∈ {0, 1, 2}, the codebook U_j contains 2^{nR_{uj}} codewords and the codebook V_j contains 2^{nR_{vj}} codewords. Each message m_1 ∈ [2^{nR_1}] is associated with a unique bin B(m_1) of index tuples (k_0, k_1, k_2) ∈ [2^{nR_{u0}}] × [2^{nR_{u1}}] × [2^{nR_{u2}}], which are indices to the codebooks U_0, U_1, U_2, respectively. Similarly, each message m_2 ∈ [2^{nR_2}] is associated with a unique bin B(m_2) of index tuples (l_0, l_1, l_2) ∈ [2^{nR_{v0}}] × [2^{nR_{v1}}] × [2^{nR_{v2}}], which are indices to the independently generated codebooks V_0, V_1, V_2, respectively.
Then, given (m_1, m_2), we apply joint typicality encoding to find index tuples (k_0, k_1, k_2) ∈ B(m_1) and (l_0, l_1, l_2) ∈ B(m_2) such that (U_0^n(k_0), U_1^n(k_1), U_2^n(k_2), V_0^n(l_0), V_1^n(l_1), V_2^n(l_2)) are jointly typical.

Remark 1: In addition to including the common auxiliaries U_0 and V_0, the main difference of our proposed scheme from the Liu-Kang scheme is that we do not enumerate the jointly typical pairs (U_1^n(k_1), U_2^n(k_2)) and (V_1^n(l_1), V_2^n(l_2)); as already mentioned in [7], such enumeration renders the analysis of the success probability of finding jointly typical tuples (U_1^n(k_1), U_2^n(k_2), V_1^n(l_1), V_2^n(l_2)) difficult.

The next step is to convey (k_0, l_0, k_1, l_1) to BS 1 and (k_0, l_0, k_2, l_2) to BS 2. By taking advantage of the following facts, we can reduce the conventional sum rate R_{u0} + R_{v0} + R_{uj} + R_{vj}, j ∈ {1, 2}:

1) Correlated index tuples: The index tuple to be sent represents certain jointly typical codewords. As long as U_0, V_0, U_j, V_j are not mutually independent, some members of [2^{nR_{u0}}] × [2^{nR_{v0}}] × [2^{nR_{uj}}] × [2^{nR_{vj}}] will never be used. Thus, instead of sending (k_0, l_0, k_j, l_j) separately, we can enumerate all jointly typical codewords and simply convey an enumeration index.

2) Opportunity of exploiting the cooperation links: In the presence of cooperation links, the BSs do not need to learn all the information directly over the link from the central processor, but can learn part of it over the cooperation link.

Finally, user 1 applies joint typicality decoding to recover (k_0, k_1, k_2), from which the message m_1 can be uniquely identified. Similarly, user 2 applies joint typicality decoding to recover (l_0, l_1, l_2), from which the message m_2 can be uniquely identified. The achieved rate region of the G-DS scheme is presented in the following theorem.
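As a toy illustration of the joint typicality test at the heart of this encoding, the sketch below checks the joint type of a candidate codeword pair against an assumed pmf p_{U,V}; the pmf, blocklength, and ε are arbitrary choices made only for illustration:

```python
import numpy as np

eps = 0.2
# Assumed joint pmf p_{U,V} on {0,1} x {0,1}.
p_uv = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

def jointly_typical(u, v, eps=eps):
    """Robust typicality of the joint type of (u, v) w.r.t. p_{U,V}:
    every entry of the joint type must lie within a factor (1 +/- eps)
    of the corresponding pmf entry."""
    joint = np.zeros_like(p_uv)
    np.add.at(joint, (u, v), 1.0 / len(u))   # accumulate the joint type
    return bool(np.all(np.abs(joint - p_uv) <= eps * p_uv))

n = 100
# A coordinated pair whose joint type is exactly p_{U,V}
# (40, 10, 10, 40 occurrences of the four symbol pairs).
u_good = np.array([0]*50 + [1]*50)
v_good = np.array([0]*40 + [1]*10 + [0]*10 + [1]*40)
# An uncoordinated pair: v disagrees with u in every position.
v_bad = 1 - u_good

print(jointly_typical(u_good, v_good), jointly_typical(u_good, v_bad))
```

In the scheme itself, the central processor scans the bins B(m_1) and B(m_2) of independently generated codewords for such a coordinated tuple; the covering conditions of Theorem 1 guarantee that one exists with high probability.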
Theorem 1: A rate pair (R_1, R_2) is achievable for the downlink 2-BS 2-user C-RAN with BS cooperation if there exist some rates R_{uj}, R_{vj} ≥ 0, j ∈ {0, 1, 2}, some joint pmf p_{U_0,V_0,U_1,V_1,U_2,V_2}, and some functions x_k(u_0, v_0, u_k, v_k), k ∈ {1, 2}, such that for all Ω_u, Ω_v ⊆ {0, 1, 2} satisfying |Ω_u| + |Ω_v| ≥ 2,

1{|Ω_u| = 3} R_1 + 1{|Ω_v| = 3} R_2 < Σ_{i∈Ω_u} R_{ui} + Σ_{j∈Ω_v} R_{vj} - Γ(U(Ω_u), V(Ω_v));   (2)

for all non-empty Ω_u, Ω_v ⊆ {0, 1, 2},

Σ_{i∈Ω_u} R_{ui} < I(U(Ω_u); U(Ω_u^c), Y_1) + Γ(U(Ω_u)),   (3)
Σ_{j∈Ω_v} R_{vj} < I(V(Ω_v); V(Ω_v^c), Y_2) + Γ(V(Ω_v));   (4)

and

Σ_{i∈{0,1}} R_{ui} + Σ_{j∈{0,1}} R_{vj} < C_1 + C_{12} + Γ(U_0, V_0, U_1, V_1),   (5)
Σ_{i∈{0,2}} R_{ui} + Σ_{j∈{0,2}} R_{vj} < C_2 + C_{21} + Γ(U_0, V_0, U_2, V_2),   (6)
Σ_{i=0}^{2} R_{ui} + Σ_{j=0}^{2} R_{vj} < C_1 + C_2 + Γ(U_0, V_0, U_1, V_1) + Γ(U_0, V_0, U_2, V_2) - Γ(U_0, V_0).   (7)

Unfortunately, the rate region in Theorem 1 is hard to evaluate. Besides, we find it insightful to learn the effects of the different code components. Thus, we now present three corollaries to Theorem 1 in which we restrict the correlation structure: 1) Corollary 1: U_j = V_j = ∅ and R_{uj} = R_{vj} = 0, j ∈ {1, 2}; 2) Corollary 2: p_{U_0,V_0,U_1,V_1,U_2,V_2} = Π_{j=0}^{2} p_{U_j} p_{V_j}; 3) Corollary 3: U_0 = V_0 = ∅ and R_{u0} = R_{v0} = 0. In all the corollaries, the auxiliary rates (R_{uj}, R_{vj} : j ∈ {0, 1, 2}) are eliminated through Fourier-Motzkin elimination.^1 We remark that the first two correlation structures can also be realized through the rate-splitting approach mentioned in Section III-A.

Corollary 1 (Scheme I): A rate pair (R_1, R_2) is achievable for the downlink 2-BS 2-user C-RAN with BS cooperation if

R_1 < I(U_0; Y_1),
R_2 < I(V_0; Y_2),
R_1 + R_2 < I(U_0; Y_1) + I(V_0; Y_2) - I(U_0; V_0),
R_1 + R_2 < min{C_1 + C_{12}, C_2 + C_{21}, C_1 + C_2},

for some joint pmf p_{U_0,V_0} and some functions x_k(u_0, v_0), k ∈ {1, 2}.

Corollary 2 (Scheme II): A rate pair (R_1, R_2) is achievable for the downlink 2-BS 2-user C-RAN with BS cooperation if

R_1 < C_1 + C_{12} + I(U_2; Y_1 | U_0, U_1),
R_1 < C_2 + C_{21} + I(U_1; Y_1 | U_0, U_2),
R_1 < I(U_0, U_1, U_2; Y_1),
R_2 < C_1 + C_{12} + I(V_2; Y_2 | V_0, V_1),
R_2 < C_2 + C_{21} + I(V_1; Y_2 | V_0, V_2),
R_2 < I(V_0, V_1, V_2; Y_2),
R_1 + R_2 < C_1 + C_2,
R_1 + R_2 < C_1 + C_{12} + I(U_2; Y_1 | U_0, U_1) + I(V_2; Y_2 | V_0, V_1),
R_1 + R_2 < C_2 + C_{21} + I(U_1; Y_1 | U_0, U_2) + I(V_1; Y_2 | V_0, V_2),
R_1 + R_2 < C_1 + C_2 + C_{12} + C_{21} + I(V_1, V_2; Y_2 | V_0),
R_1 + R_2 < C_1 + C_2 + C_{12} + C_{21} + I(U_1, U_2; Y_1 | U_0),
R_1 + R_2 < C_1 + C_2 + C_{12} + C_{21} + I(U_1, U_2; Y_1 | U_0) + I(V_1, V_2; Y_2 | V_0),

for some joint pmf Π_{j=0}^{2} p_{U_j} p_{V_j} and some functions x_k(u_0, v_0, u_k, v_k), k ∈ {1, 2}.
When applied to the memoryless Gaussian model (1), Corollary 2 with C_{12} = C_{21} = 0 recovers the rate region of the scheme of Zakhour and Gesbert [3, Proposition 1].

^1 In this paper, all Fourier-Motzkin eliminations are performed using the software developed by Gattegno et al. [12].
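The corollaries above are obtained by eliminating the auxiliary rates via Fourier-Motzkin elimination (the paper relies on the tool of Gattegno et al. [12]). As an illustration of the elimination step itself, the following sketch removes one variable from a small system of inequalities of the form c · x ≤ b; the toy system is an assumption chosen for clarity:

```python
def fm_eliminate(ineqs, j):
    """One Fourier-Motzkin step: eliminate variable x_j from a system of
    inequalities, each encoded as (coeffs, bound) meaning coeffs . x <= bound."""
    pos = [(c, b) for c, b in ineqs if c[j] > 0]
    neg = [(c, b) for c, b in ineqs if c[j] < 0]
    out = [(c, b) for c, b in ineqs if c[j] == 0]   # x_j absent: keep as-is
    for cp, bp in pos:
        for cn, bn in neg:
            m_p, m_n = -cn[j], cp[j]  # positive multipliers cancelling x_j
            out.append((tuple(m_p*a + m_n*d for a, d in zip(cp, cn)),
                        m_p*bp + m_n*bn))
    return out

# Toy system in (R1, Ru):  R1 - Ru <= 2  and  Ru <= 3.
sys_ineqs = [((1, -1), 2),
             ((0,  1), 3)]
print(fm_eliminate(sys_ineqs, 1))  # -> [((1, 0), 5)], i.e. R1 <= 5
```

Real eliminations such as those behind Corollaries 1-3 additionally remove redundant inequalities after each step, which this sketch omits.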

Corollary 3 (Scheme III): A rate pair (R_1, R_2) is achievable for the downlink 2-BS 2-user C-RAN with BS cooperation if

R_1 < C_1 + C_{12} + I(U_2; U_1, Y_1) - I(U_2; U_1, V_1),
R_1 < C_2 + C_{21} + I(U_1; U_2, Y_1) - I(U_1; U_2, V_2),
R_1 < I(U_1, U_2; Y_1) + min{ 0, I(V_1; V_2, Y_2) - I(V_1; U_1, U_2), I(V_2; V_1, Y_2) - I(V_2; U_1, U_2) },
R_2 < C_1 + C_{12} + I(V_2; V_1, Y_2) - I(V_2; U_1, V_1),
R_2 < C_2 + C_{21} + I(V_1; V_2, Y_2) - I(V_1; U_2, V_2),
R_2 < I(V_1, V_2; Y_2) + min{ 0, I(U_2; U_1, Y_1) - I(U_2; V_1, V_2), I(U_1; U_2, Y_1) - I(U_1; V_1, V_2) },
R_1 + R_2 < I(U_1, U_2; Y_1) + I(V_1, V_2; Y_2) - I(U_1, U_2; V_1, V_2),
R_1 + R_2 < C_1 + C_2 - I(U_1, V_1; U_2, V_2),
R_1 + R_2 < C_1 + C_{12} - I(U_1, V_1; U_2, V_2) + min{ I(U_2; U_1, Y_1) + I(V_2; V_1, Y_2) - I(U_2; V_2), I(U_2; U_1, Y_1) + I(V_1, V_2; Y_2) - I(U_2; V_1) - I(U_2; V_2) + I(V_1; V_2), I(U_1, U_2; Y_1) + I(V_2; V_1, Y_2) - I(U_1; V_2) - I(U_2; V_2) + I(U_1; U_2) },
R_1 + R_2 < C_2 + C_{21} - I(U_1, V_1; U_2, V_2) + min{ I(U_1; U_2, Y_1) + I(V_1; V_2, Y_2) - I(U_1; V_1), I(U_1; U_2, Y_1) + I(V_1, V_2; Y_2) - I(U_1; V_1) - I(U_1; V_2) + I(V_1; V_2), I(U_1, U_2; Y_1) + I(V_1; V_2, Y_2) - I(U_1; V_1) - I(U_2; V_1) + I(U_1; U_2) },

for some joint pmf p_{U_1,V_1,U_2,V_2} and some functions x_k(u_k, v_k), k ∈ {1, 2}, such that

I(U_1; V_1) < I(U_1; U_2, Y_1) + I(V_1; V_2, Y_2),
I(U_2; V_2) < I(U_2; U_1, Y_1) + I(V_2; V_1, Y_2),
I(U_1; V_2) < I(U_1; U_2, Y_1) + I(V_2; V_1, Y_2),
I(U_2; V_1) < I(U_2; U_1, Y_1) + I(V_1; V_2, Y_2).

C. Examples

Now let us consider two special cases with simpler topologies.

Example 1 (1 BS and 2 users): The downlink 1-BS 2-user C-RAN can be considered as a special case of the downlink 2-BS 2-user C-RAN with C_2 = C_{12} = C_{21} = 0 and p_{Y_1,Y_2|X_1,X_2} = p_{Y_1,Y_2|X_1}. We fix a joint pmf p_{U,V} and substitute (U_1, V_1) = (U, V), U_j = V_j = ∅, and R_{uj} = R_{vj} = 0, j ∈ {0, 2}, in Theorem 1. Then, after removing R_{u1} and R_{v1} by Fourier-Motzkin elimination, we have the following corollary.
Corollary 4: A rate pair (R_1, R_2) is achievable for the downlink 1-BS 2-user C-RAN if there exist some pmf p_{U,V} and some function x_1(u, v) such that

R_1 < I(U; Y_1),
R_2 < I(V; Y_2),
R_1 + R_2 < I(U; Y_1) + I(V; Y_2) - I(U; V),
R_1 + R_2 < C_1.   (8)

Thus, the achieved rate region is essentially Marton's inner bound [13] with the additional constraint (8), which accounts for the finite capacity of the digital link.

Example 2 (2 BSs and 1 user): The downlink 2-BS 1-user C-RAN is a class of diamond networks [14], [15], which can be considered as a special case of the downlink 2-BS 2-user C-RAN by setting R_2 = 0. We fix a joint

pmf p_{U,X_1,X_2} and substitute (U_0, U_1, U_2) = (U, X_1, X_2), V_j = ∅, and R_{vj} = 0, j ∈ {0, 1, 2}, in Theorem 1. Then, after removing R_{u0}, R_{u1}, and R_{u2} by Fourier-Motzkin elimination, we have the following corollary.

Corollary 5: A rate R_1 is achievable for the downlink 2-BS 1-user C-RAN with BS cooperation if there exists some pmf p_{U,X_1,X_2} such that

R_1 < min{ C_1 + C_2 - I(X_1; X_2 | U),
           C_1 + C_{12} + I(X_2; Y_1 | U, X_1),
           C_2 + C_{21} + I(X_1; Y_1 | U, X_2),
           I(X_1, X_2; Y_1),
           (1/2)[C_1 + C_2 + C_{12} + C_{21} + I(X_1, X_2; Y_1 | U) - I(X_1; X_2 | U)] }.

Remark 2: For diamond networks with an orthogonal broadcast channel, the proposed G-DS scheme recovers the achievability results in [14, Theorem 2] and [15, Theorem 1]. It is shown in [15] that this achievability is optimal when the second hop is the binary-adder multiple-access channel, i.e., X_1 = X_2 = {0, 1}, Y_1 = {0, 1, 2}, and Y_1 = X_1 + X_2. Furthermore, the proposed G-DS scheme recovers the achievability result in [16, Theorem 2], in which cooperation between relays is also included in the network model.

D. Coding Scheme

Codebook generation: Fix a joint pmf p_{U_0,V_0,U_1,V_1,U_2,V_2} and functions x_j(u_0, v_0, u_j, v_j), j ∈ {1, 2}. Randomly and independently generate sequences

u_0^n(k_0), each according to Π_{i=1}^{n} p_{U_0}(u_{0i}), for k_0 ∈ [2^{nR_{u0}}];
u_1^n(k_1), each according to Π_{i=1}^{n} p_{U_1}(u_{1i}), for k_1 ∈ [2^{nR_{u1}}];
u_2^n(k_2), each according to Π_{i=1}^{n} p_{U_2}(u_{2i}), for k_2 ∈ [2^{nR_{u2}}];
v_0^n(l_0), each according to Π_{i=1}^{n} p_{V_0}(v_{0i}), for l_0 ∈ [2^{nR_{v0}}];
v_1^n(l_1), each according to Π_{i=1}^{n} p_{V_1}(v_{1i}), for l_1 ∈ [2^{nR_{v1}}];
v_2^n(l_2), each according to Π_{i=1}^{n} p_{V_2}(v_{2i}), for l_2 ∈ [2^{nR_{v2}}].

Next, we generate three dictionaries:

D_0 = {(k_0, l_0) ∈ [2^{nR_{u0}}] × [2^{nR_{v0}}] : (u_0^n(k_0), v_0^n(l_0)) ∈ T_ε^(n)},
D_1(k, l) = {(k_1, l_1) ∈ [2^{nR_{u1}}] × [2^{nR_{v1}}] : (u_1^n(k_1), v_1^n(l_1)) ∈ T_ε^(n)(U_1, V_1 | u_0^n(k), v_0^n(l))},
D_2(k, l) = {(k_2, l_2) ∈ [2^{nR_{u2}}] × [2^{nR_{v2}}] : (u_2^n(k_2), v_2^n(l_2)) ∈ T_ε^(n)(U_2, V_2 | u_0^n(k), v_0^n(l))},

for all (k, l) ∈ D_0.
Every index tuple in the dictionaries is assigned a unique reference label. For example, the first index tuple in D_1(k, l) is referred to as D_1(1 | k, l). Also, we denote by D_j^{-1} the inverse map of D_j, j ∈ {0, 1, 2}. Finally, we randomly and independently assign an index m_1(k_0, k_1, k_2) to each index tuple (k_0, k_1, k_2) ∈ [2^{nR_{u0}}] × [2^{nR_{u1}}] × [2^{nR_{u2}}] according to a uniform pmf over [2^{nR_1}]. Similarly, we randomly and independently assign an index m_2(l_0, l_1, l_2) to each index tuple (l_0, l_1, l_2) ∈ [2^{nR_{v0}}] × [2^{nR_{v1}}] × [2^{nR_{v2}}] according to a uniform pmf over [2^{nR_2}]. We refer to each subset of index tuples with the same index m_j as a bin B_j(m_j), j ∈ {1, 2}.

Central Processor: Upon seeing (m_1, m_2), the central processor finds (k_0, k_1, k_2) ∈ B_1(m_1) and (l_0, l_1, l_2) ∈ B_2(m_2) such that (u_0^n(k_0), u_1^n(k_1), u_2^n(k_2), v_0^n(l_0), v_1^n(l_1), v_2^n(l_2)) ∈ T_ε^(n). If there is more than one such tuple, choose an arbitrary one among them. If no such tuple exists, choose (k_0, k_1, k_2, l_0, l_1, l_2) = (1, 1, 1, 1, 1, 1). Then, the central processor splits D_0^{-1}(k_0, l_0) into three subindices w_0^(0), w_0^(1), and w_0^(2) of rates R_{00}, R_{01}, and R_{02}, respectively. Also, for j ∈ {1, 2}, the central processor splits the index D_j^{-1}(k_j, l_j | k_0, l_0) into two subindices w_j^(1) and w_j^(2) of rates R_{j1} and R_{j2}, respectively. Finally, the central processor sends the index tuple (w_0^(0), w_0^(1), w_1^(1), w_2^(1)) to BS 1 and (w_0^(0), w_0^(2), w_1^(2), w_2^(2)) to BS 2. The encoding operation at the central processor is illustrated in Figure 3.

BS: BS 1 forwards (w_0^(1), w_2^(1)) to BS 2 over the cooperation link. BS 2 forwards (w_0^(2), w_1^(2)) to BS 1 over the cooperation link. Given (w_0^(0), w_0^(1), w_0^(2)), both BSs can recover D_0^{-1}(k_0, l_0) and thus the common indices (k_0, l_0).

Fig. 3. Illustration of the encoding operation at the central processor in the G-DS scheme.

Then, BS j ∈ {1, 2} can recover D_j^{-1}(k_j, l_j | k_0, l_0) from (w_j^(1), w_j^(2)) and (k_0, l_0). Finally, BS j transmits the symbol x_{ji}(u_{0i}(k_0), v_{0i}(l_0), u_{ji}(k_j), v_{ji}(l_j)) at time i ∈ [n].

Decoding: Let ε' > ε. User 1 declares that m̂_1 is sent if it is the unique message such that for some (k_0, k_1, k_2) ∈ B_1(m̂_1) it holds that (u_0^n(k_0), u_1^n(k_1), u_2^n(k_2), y_1^n) ∈ T_{ε'}^(n); otherwise it declares an error. Decoder 2 declares that m̂_2 is sent if it is the unique message such that (v_0^n(l_0), v_1^n(l_1), v_2^n(l_2), y_2^n) ∈ T_{ε'}^(n) for some (l_0, l_1, l_2) ∈ B_2(m̂_2); otherwise it declares an error.

Analysis of Error Probability: Let (M_1, M_2) be the messages and let (K_0, K_1, K_2, L_0, L_1, L_2) be the indices chosen at the encoder. Lossless transmission over the digital links requires that

R_{00} + R_{01} + R_{11} + R_{21} ≤ C_1,
R_{00} + R_{02} + R_{12} + R_{22} ≤ C_2,
R_{01} + R_{21} ≤ C_{21},
R_{02} + R_{12} ≤ C_{12}.

Also, we note that

R_{00} + R_{01} + R_{02} = log|D_0|,
R_{11} + R_{12} = log|D_1(K_0, L_0)|,
R_{21} + R_{22} = log|D_2(K_0, L_0)|.

Thus, after applying Fourier-Motzkin elimination to remove R_{00} and (R_{j1}, R_{j2}), j ∈ {0, 1, 2}, we have

log|D_0| + log|D_1(K_0, L_0)| ≤ C_1 + C_{12},   (9)
log|D_0| + log|D_2(K_0, L_0)| ≤ C_2 + C_{21},   (10)
log|D_0| + log|D_1(K_0, L_0)| + log|D_2(K_0, L_0)| ≤ C_1 + C_2.   (11)

We denote by A the intersection of the random events (9), (10), and (11).
From Lemma 1, proved in Appendix A, the random event A happens with high probability as n → ∞ if

C_1 + C_{12} ≥ R_{u0} + R_{v0} - I(U_0; V_0) + R_{u1} + R_{v1} - I(U_1; V_1) - I(U_0, V_0; U_1, V_1),
C_2 + C_{21} ≥ R_{u0} + R_{v0} - I(U_0; V_0) + R_{u2} + R_{v2} - I(U_2; V_2) - I(U_0, V_0; U_2, V_2),
C_1 + C_2 ≥ R_{u0} + R_{v0} - I(U_0; V_0) + R_{u1} + R_{v1} - I(U_1; V_1) - I(U_0, V_0; U_1, V_1) + R_{u2} + R_{v2} - I(U_2; V_2) - I(U_0, V_0; U_2, V_2).

Besides the error event A^c, the decoding at user 1 fails only if one or more of the following events occur:

E_s = {(U_0^n(k_0), U_1^n(k_1), U_2^n(k_2), V_0^n(l_0), V_1^n(l_1), V_2^n(l_2)) ∉ T_ε^(n) for all (k_0, k_1, k_2) ∈ B_1(M_1), (l_0, l_1, l_2) ∈ B_2(M_2)},
E_d0 = {(U_0^n(K_0), U_1^n(K_1), U_2^n(K_2), Y_1^n) ∉ T_{ε'}^(n)},
E_d1 = {(U_0^n(K_0), U_1^n(k_1), U_2^n(K_2), Y_1^n) ∈ T_{ε'}^(n) for some k_1 ≠ K_1},
E_d2 = {(U_0^n(K_0), U_1^n(K_1), U_2^n(k_2), Y_1^n) ∈ T_{ε'}^(n) for some k_2 ≠ K_2},
E_d3 = {(U_0^n(K_0), U_1^n(k_1), U_2^n(k_2), Y_1^n) ∈ T_{ε'}^(n) for some k_1 ≠ K_1, k_2 ≠ K_2},
E_d4 = {(U_0^n(k_0), U_1^n(K_1), U_2^n(K_2), Y_1^n) ∈ T_{ε'}^(n) for some k_0 ≠ K_0},
E_d5 = {(U_0^n(k_0), U_1^n(k_1), U_2^n(K_2), Y_1^n) ∈ T_{ε'}^(n) for some k_0 ≠ K_0, k_1 ≠ K_1},
E_d6 = {(U_0^n(k_0), U_1^n(K_1), U_2^n(k_2), Y_1^n) ∈ T_{ε'}^(n) for some k_0 ≠ K_0, k_2 ≠ K_2},
E_d7 = {(U_0^n(k_0), U_1^n(k_1), U_2^n(k_2), Y_1^n) ∈ T_{ε'}^(n) for some k_0 ≠ K_0, k_1 ≠ K_1, k_2 ≠ K_2}.

Thus, the average error probability for M_1 is upper bounded as

P({M̂_1 ≠ M_1}) ≤ P(E_s) + P(A^c) + P(E_d0 ∩ E_s^c ∩ A) + Σ_{i=1}^{7} P(E_di).

From Lemma 2, proved in Appendix B, the term P(E_s) tends to zero as n → ∞ if

Σ_{i∈Ω_u} R_{ui} + Σ_{j∈Ω_v} R_{vj} > 1{Ω_u = {0, 1, 2}} R_1 + 1{Ω_v = {0, 1, 2}} R_2 + Γ(U(Ω_u), V(Ω_v)),

for all Ω_u, Ω_v ⊆ {0, 1, 2} such that |Ω_u| + |Ω_v| ≥ 2. Next, due to the codebook construction and the conditional typicality lemma [10, p. 27], P(E_d0 ∩ E_s^c ∩ A) tends to zero as n → ∞. Finally, using the joint typicality lemma [10, p. 29], Σ_{i=1}^{7} P(E_di) tends to zero as n → ∞ if

R_{u1} < I(U_1; U_0, U_2, Y_1) - δ(ε'),
R_{u2} < I(U_2; U_0, U_1, Y_1) - δ(ε'),
R_{u1} + R_{u2} < I(U_1, U_2; U_0, Y_1) + I(U_1; U_2) - δ(ε'),
R_{u0} < I(U_0; U_1, U_2, Y_1) - δ(ε'),
R_{u0} + R_{u1} < I(U_0, U_1; U_2, Y_1) + I(U_0; U_1) - δ(ε'),
R_{u0} + R_{u2} < I(U_0, U_2; U_1, Y_1) + I(U_0; U_2) - δ(ε'),
R_{u0} + R_{u1} + R_{u2} < I(U_0, U_1, U_2; Y_1) + I(U_0; U_1, U_2) + I(U_1; U_2) - δ(ε').
The average error probability for M_2 can be bounded in a similar manner, yielding the additional rate conditions

R_{v1} < I(V_1; V_0, V_2, Y_2) - δ(ε'),
R_{v2} < I(V_2; V_0, V_1, Y_2) - δ(ε'),
R_{v1} + R_{v2} < I(V_1, V_2; V_0, Y_2) + I(V_1; V_2) - δ(ε'),
R_{v0} < I(V_0; V_1, V_2, Y_2) - δ(ε'),
R_{v0} + R_{v1} < I(V_0, V_1; V_2, Y_2) + I(V_0; V_1) - δ(ε'),
R_{v0} + R_{v2} < I(V_0, V_2; V_1, Y_2) + I(V_0; V_2) - δ(ε'),
R_{v0} + R_{v1} + R_{v2} < I(V_0, V_1, V_2; Y_2) + I(V_0; V_1, V_2) + I(V_1; V_2) - δ(ε').

Finally, the theorem is established by letting ε' tend to zero.
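The total-correlation terms Γ(·) appearing throughout Theorem 1 and the analysis above are straightforward to evaluate numerically from a joint pmf. A minimal helper, shown as an illustrative sketch (the example pmf is an arbitrary assumption):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a pmf given as an array of any shape."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def total_correlation(p_joint):
    """Gamma(X(Omega)) = sum_i H(X_i) - H(X(Omega)), where each axis of
    the joint pmf array indexes one of the random variables."""
    p_joint = np.asarray(p_joint, dtype=float)
    axes = range(p_joint.ndim)
    marginals = [p_joint.sum(axis=tuple(j for j in axes if j != i))
                 for i in axes]
    return sum(entropy(m) for m in marginals) - entropy(p_joint)

# Two fair bits forced to be equal: Gamma = 1 + 1 - 1 = 1 bit.
p = np.array([[0.5, 0.0],
              [0.0, 0.5]])
print(total_correlation(p))  # -> 1.0
```

For independent variables the joint entropy equals the sum of marginal entropies, so Γ vanishes, which is consistent with the Γ terms in (5)-(7) rewarding correlation among the codewords.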

IV. GENERALIZED COMPRESSION SCHEME

This section is devoted to compression-based schemes, where our contributions are as follows:
• For the downlink 2-BS 2-user C-RAN, we generalize the original compression scheme in [5] in two directions: 1) we introduce a cloud center and incorporate BS cooperation; and 2) we derive a single-letter rate region for general discrete memoryless channels on the second hop.
• We simplify the achievable rate region of the DDF scheme for the general downlink N-BS L-user C-RAN with BS cooperation. We provide a performance analysis of the DDF scheme under the memoryless Gaussian model and show that the DDF scheme achieves within a constant gap (independent of the transmission power) from the capacity region.

A. Performance and Coding Scheme

We start with a high-level summary of the proposed G-Compression scheme. The encoding is based on superposition coding and multicoding. Each message m_j, j ∈ {1, 2}, is associated with a set of independently generated codewords U_j^n(m_j, l_j) of size 2^{nR̃_j}. Then, we generate three codebooks X_0, X_1, X_2 using superposition coding: the codebook X_0 contains the cloud centers X_0^n(k_0), and the codebooks X_1 and X_2 contain the satellite codewords X_1^n(k_1|k_0) and X_2^n(k_2|k_0), respectively. Given (m_1, m_2), we apply joint typicality encoding to find an index tuple (k_0, k_1, k_2, l_1, l_2) such that (U_1^n(m_1, l_1), U_2^n(m_2, l_2), X_0^n(k_0), X_1^n(k_1|k_0), X_2^n(k_2|k_0)) are jointly typical. In words, we first apply Marton coding on the messages m_1 and m_2. Then, the auxiliaries U_1^n(m_1, l_1) and U_2^n(m_2, l_2) are compressed into three descriptions X_0^n(k_0), X_1^n(k_1|k_0), and X_2^n(k_2|k_0). The next step is to convey (k_0, k_1) to BS 1 and (k_0, k_2) to BS 2, during which the cooperation links are used to reduce the workload of the digital links from the central processor to the BSs. Finally, each user j ∈ {1, 2} applies joint typicality decoding to recover the auxiliary U_j^n(m_j, l_j) and thus can recover the desired message m_j.
Theorem 2: A rate pair (R_1, R_2) is achievable for the downlink 2-BS 2-user C-RAN with BS cooperation if

R_1 < I(U_1; Y_1) + min{0, C_1 + C_21 − I(U_1; X_0, X_1), C_2 + C_12 − I(U_1; X_0, X_2)},
R_2 < I(U_2; Y_2) + min{0, C_1 + C_21 − I(U_2; X_0, X_1), C_2 + C_12 − I(U_2; X_0, X_2)},
R_1 + R_2 < I(U_1; Y_1) + I(U_2; Y_2) − I(U_1; U_2) + min{0, C_1 + C_21 − I(U_1, U_2; X_0, X_1), C_2 + C_12 − I(U_1, U_2; X_0, X_2), C_1 + C_2 − I(U_1, U_2; X_0, X_1, X_2) − I(X_1; X_2 | X_0)},
2R_1 + R_2 < I(U_1; Y_1) + I(U_2; Y_2) − I(U_1; U_2) + C_1 + C_2 + C_12 + C_21 − I(U_1, U_2; X_0, X_1, X_2) − I(X_1; X_2 | X_0) + I(U_1; Y_1) − I(U_1; X_0),
R_1 + 2R_2 < I(U_1; Y_1) + I(U_2; Y_2) − I(U_1; U_2) + C_1 + C_2 + C_12 + C_21 − I(U_1, U_2; X_0, X_1, X_2) − I(X_1; X_2 | X_0) + I(U_2; Y_2) − I(U_2; X_0),
2R_1 + 2R_2 < I(U_1; Y_1) + I(U_2; Y_2) − I(U_1; U_2) + C_1 + C_2 + C_12 + C_21 − I(U_1, U_2; X_0, X_1, X_2) − I(X_1; X_2 | X_0) + I(U_1; Y_1) + I(U_2; Y_2) − I(U_1; U_2) − I(U_1, U_2; X_0),

for some joint pmf p_{U_1, U_2, X_0, X_1, X_2}.

Proof: Codebook generation: Fix a joint pmf p_{U_1, U_2, X_0, X_1, X_2}. For j ∈ {1, 2}, randomly and independently generate sequences u_j^n(m_j, l_j), according to ∏_{i=1}^n p_{U_j}(u_{ji}), for (m_j, l_j) ∈ [2^{nR_j}] × [2^{nR̃_j}]. Randomly and independently

Fig. 4. Illustration of the encoding operation at the central processor in the G-Compression scheme.

generate sequences x_0^n(k_0), according to ∏_{i=1}^n p_{X_0}(x_{0i}), for k_0 ∈ [2^{nR̂_0}]. Finally, for j ∈ {1, 2}, randomly and independently generate sequences x_j^n(k_j|k_0), each according to ∏_{i=1}^n p_{X_j|X_0}(x_{ji}|x_{0i}(k_0)), for (k_0, k_j) ∈ [2^{nR̂_0}] × [2^{nR̂_j}].

Central Processor: Upon seeing (m_1, m_2), the central processor finds an index tuple (k_0, k_1, k_2, l_1, l_2) such that (u_1^n(m_1, l_1), u_2^n(m_2, l_2), x_0^n(k_0), x_1^n(k_1|k_0), x_2^n(k_2|k_0)) ∈ T_{ε'}^{(n)}. If there is more than one such tuple, choose an arbitrary one among them. If no such tuple exists, choose (k_0, k_1, k_2, l_1, l_2) = (1, 1, 1, 1, 1). Then, the central processor splits k_0 into three subindices k_0^{(0)}, k_0^{(1)}, and k_0^{(2)} of rates R̂_00, R̂_01, and R̂_02, respectively. Also, for j ∈ {1, 2}, the central processor splits k_j into two subindices k_j^{(1)} and k_j^{(2)} of rates R̂_j1 and R̂_j2, respectively. Finally, the central processor sends the index tuple (k_0^{(0)}, k_0^{(1)}, k_1^{(1)}, k_2^{(1)}) to BS 1 and (k_0^{(0)}, k_0^{(2)}, k_1^{(2)}, k_2^{(2)}) to BS 2. The encoding operation at the central processor is illustrated in Figure 4.

BS: BS 1 forwards (k_0^{(1)}, k_2^{(1)}) to BS 2 over the cooperation link. BS 2 forwards (k_0^{(2)}, k_1^{(2)}) to BS 1 over the cooperation link. Thus, BS j ∈ {1, 2} learns the value of (k_0, k_j) and transmits x_j^n(k_j|k_0).

Decoding: Let ε > ε'. For j ∈ {1, 2}, upon seeing y_j^n, user j finds the unique pair (m̂_j, l̂_j) such that (u_j^n(m̂_j, l̂_j), y_j^n) ∈ T_ε^{(n)} and declares that m̂_j is sent; otherwise it declares an error.

Analysis of Error Probability: Let (M_1, M_2) be the messages and let (K_0, K_1, K_2, L_1, L_2) be the indices chosen at the central processor. In order to have lossless transmission over the digital links, it is required that

R̂_00 + R̂_01 + R̂_11 + R̂_21 ≤ C_1,
R̂_00 + R̂_02 + R̂_12 + R̂_22 ≤ C_2,
R̂_01 + R̂_21 ≤ C_12,
R̂_02 + R̂_12 ≤ C_21.
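The index splitting performed by the central processor is plain bit slicing of an integer index; a small Python sketch with hypothetical bit budgets (standing in for nR̂_00, nR̂_01, nR̂_02):

```python
# Bit-slicing view of the index split: a B-bit index k_0 is cut into
# subindices of b0, b1, b2 bits (b0 + b1 + b2 = B), mirroring the split
# of k_0 into (k_0^(0), k_0^(1), k_0^(2)).
def split_index(k, bits):
    parts = []
    for b in bits:
        parts.append(k & ((1 << b) - 1))  # keep the lowest b bits
        k >>= b                           # drop them and continue
    return parts

def merge_index(parts, bits):
    k, shift = 0, 0
    for p, b in zip(parts, bits):
        k |= p << shift
        shift += b
    return k

bits = (3, 2, 2)              # hypothetical bit budgets for the three subindices
k0 = 90                       # a 7-bit index
parts = split_index(k0, bits)
assert merge_index(parts, bits) == k0   # the split is lossless
```

Lossless reconstruction of k_0 at each BS is exactly what the four link-rate conditions above guarantee.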
Note that R̂_00 + R̂_01 + R̂_02 = R̂_0 and R̂_j1 + R̂_j2 = R̂_j, j ∈ {1, 2}. Assuming the above conditions are satisfied, the decoding at user 1 fails if one or more of the following events occur:

E_0 = {(U_1^n(M_1, l_1), U_2^n(M_2, l_2), X_0^n(k_0), X_1^n(k_1|k_0), X_2^n(k_2|k_0)) ∉ T_{ε'}^{(n)} for all (k_0, k_1, k_2, l_1, l_2)},
E_1 = {(U_1^n(M_1, L_1), Y_1^n) ∉ T_ε^{(n)}},
E_2 = {(U_1^n(m_1, l_1), Y_1^n) ∈ T_ε^{(n)} for some (m_1, l_1) ≠ (M_1, L_1)}.

The average error probability for M_1 is upper bounded as

P({M̂_1 ≠ M_1}) ≤ P(E_0) + P(E_1 ∩ E_0^c) + P(E_2).

By extending [10, Lemma 14.1, p. 351], it can be shown that the term P(E_0) tends to zero as n → ∞ if

R̃_1 + R̃_2 > I(U_1; U_2) + δ(ε') and
R̂_0 + Σ_{k ∈ S} R̂_k + Σ_{j ∈ D} R̃_j > I(U(D); X_0, X(S)) + 1{S = {1, 2}} I(X_1; X_2 | X_0) + 1{D = {1, 2}} I(U_1; U_2) + δ(ε'),

for all D, S ⊆ {1, 2}. Next, due to the codebook construction and the conditional typicality lemma [10, p. 27], P(E_1 ∩ E_0^c) tends to zero as n → ∞. Finally, using the joint typicality lemma [10, p. 29], P(E_2) tends to zero as n → ∞ if

R_1 + R̃_1 < I(U_1; Y_1) − δ(ε).

The average error probability for M_2 can be bounded in a similar manner, yielding the additional rate condition

R_2 + R̃_2 < I(U_2; Y_2) − δ(ε).

Using Fourier-Motzkin elimination to project out the multicoding rates R̃_1, R̃_2 as well as the description and split rates R̂_0, R̂_1, R̂_2, R̂_00, R̂_01, R̂_02, R̂_11, R̂_12, R̂_21, R̂_22, we obtain the rate conditions in Theorem 2. Finally, the theorem is established by letting ε → 0.

B. Distributed Decode Forward for Broadcast

The DDF scheme for broadcast [9], which is developed for general memoryless broadcast relay networks, in particular applies to the downlink C-RAN with an arbitrary number N of BSs and L users. The following theorem states its performance in this setup. For convenience, we denote X̃ = (W_1, ..., W_N), X̂_j = (X_j, (W_kj : k ≠ j)), j ∈ [N], and Ỹ_k = (W_k, (W_kj : j ≠ k)), k ∈ [N].

Theorem 3 (Lim, Kim, Kim [9, Theorem 2]): A rate tuple (R_1, ..., R_L) is achievable for the downlink N-BS L-user C-RAN with BS cooperation if

Σ_{l ∈ D} R_l < I(X̃, X̂(S); Ũ(S^c), U(D) | X̂(S^c)) − Σ_{k ∈ S^c} [ I(Ũ_k; Ũ(S^c_k), X̃, X̂^N | X_k, Ỹ_k) + I(X̂_k; X̂(S^c_k)) ] − Σ_{l ∈ D} I(U_l; U(D_l), Ũ(S^c), X̃, X̂^N | Y_l),   (12)

for all S ⊆ [N], D ⊆ [L], for some pmf p_{Ũ^N, U^L, X̃, X̂^N}, where S^c_k = S^c ∩ [k−1] and D_l = D ∩ [l−1].
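Projections of this kind — eliminating auxiliary rates from a system of linear inequalities, as done via Fourier-Motzkin elimination in the proof of Theorem 2 above — can be mechanized. A minimal exact-arithmetic sketch (toy system for illustration; no redundancy removal):

```python
from fractions import Fraction

def eliminate(ineqs, j):
    """Fourier-Motzkin: eliminate variable j from inequalities (a, b),
    each meaning sum_i a[i]*x[i] <= b. Returns a system without x_j."""
    pos, neg, zero = [], [], []
    for a, b in ineqs:
        (pos if a[j] > 0 else neg if a[j] < 0 else zero).append((a, b))
    out = list(zero)
    for ap, bp in pos:
        for an, bn in neg:
            # Scale by 1/ap[j] and 1/an[j] (sign flip) so x_j cancels.
            a = [Fraction(x) / ap[j] - Fraction(y) / an[j]
                 for x, y in zip(ap, an)]
            b = Fraction(bp) / ap[j] - Fraction(bn) / an[j]
            out.append((a, b))
    return out

# Example: project {x + y <= 3, -y <= 0, x <= 5} onto x.
sys2d = [([1, 1], 3), ([0, -1], 0), ([1, 0], 5)]
proj = eliminate(sys2d, 1)   # constraints on x only
```

Repeating `eliminate` once per auxiliary rate yields the projected region; in practice one then discards redundant inequalities by hand or with an LP check.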
The following proposition shows that, in Theorem 3, it is without loss in optimality to restrict the distribution of (Ũ^N, U^L, X̃, X̂^N) in the following manner:
1) p_{Ũ^N, U^L, X̃, (X^N, {W_kj})} = p_{Ũ^N, X̃, {W_kj}} p_{U^L, X^N};
2) Ũ_k = (W_k, (W_kj : j ≠ k));
3) p_{W^N, {W_kj}} = ∏_{k=1}^N p_{W_k} ∏_{(j,k): j ≠ k} p_{W_kj};
4) W_k ~ Uniform([2^{nC_k}]); and
5) W_kj ~ Uniform([2^{nC_kj}]).

Proposition 1: A rate tuple (R_1, ..., R_L) lies in the achieved rate region (12) of the DDF scheme for the downlink N-BS L-user C-RAN with BS cooperation if and only if there exists some joint pmf p_{X^N, U^L} such that

Σ_{l ∈ D} R_l < Σ_{l ∈ D} I(U_l; Y_l) + Σ_{k ∈ S^c} C_k + Σ_{k ∈ S^c} Σ_{j ∈ S} C_kj − Γ(X(S^c), U(D)),   (13)

for all S ⊆ [N] and D ⊆ [L] such that |D| ≥ 1. (Here the problem statement in Section II has to be expanded to a general number of BSs and users and to allow symbol-wise operations.)

Fig. 5. The system considered in Example 3.

Remark 3: For the case in which N = 2 and L = 2, it can be shown easily that setting X_0 = ∅ in the rate region in Theorem 2 recovers (13). Thus, for the network model in Figure 2, our generalized compression scheme (G-Compression) outperforms the DDF scheme for broadcast [9, Theorem 2].

Since the downlink C-RAN is a special instance of memoryless broadcast relay networks, the DDF scheme achieves any point in the capacity region of an N-BS L-user C-RAN to within a gap of (1 + N + L)/2 bits per dimension under the memoryless Gaussian model [9, Corollary 8]. The following theorem tightens this gap for downlink C-RANs. The proof is deferred to Appendix D.

Theorem 4: Consider the downlink of any N-BS L-user C-RAN with BS cooperation. Under the memoryless Gaussian model, the DDF scheme for broadcast achieves within L min{N, L, log N}/2 + 2 bits per dimension from the capacity region.

V. COMPARISON AND NUMERICAL EVALUATIONS

In this section, we evaluate and compare our G-DS scheme and G-Compression scheme in two useful examples. In the first example, the G-DS scheme (as well as the data-sharing scheme) is optimal, whereas the G-Compression scheme is strictly suboptimal. In the second example the opposite is true. In the second part of this section, we provide numerical results for the memoryless Gaussian model.

A. Examples

Example 3 (One BS and One User): Consider the special case with only one BS and one user, as depicted in Figure 5. (Our model reduces to this scenario when the DM-IC is of the form p_{Y_1, Y_2 | X_1, X_2} = p_{Y_1, Y_2 | X_1} and when C_2 = R_2 = 0.) Decode-and-forward [17] is optimal in this special case, and rate R_1 is achievable whenever

R_1 < min{ C_1, max_{p_{X_1}} I(X_1; Y_1) }.

Furthermore, compress-and-forward [17] is also optimal since the first hop is noiseless.
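The term max_{p_{X_1}} I(X_1; Y_1) in the decode-and-forward rate above can be computed numerically with the Blahut-Arimoto algorithm. A minimal sketch; the binary symmetric second hop with crossover 0.1 and the fronthaul capacity C_1 = 0.4 are hypothetical values chosen for illustration, not taken from the paper:

```python
import math

def blahut_arimoto(W, iters=200):
    """Compute max_{p_X} I(X; Y) for a channel matrix W[x][y] = p(y|x)."""
    nx, ny = len(W), len(W[0])
    p = [1.0 / nx] * nx
    d = [0.0] * nx
    for _ in range(iters):
        # Output distribution induced by the current input distribution.
        q = [sum(p[x] * W[x][y] for x in range(nx)) for y in range(ny)]
        # d[x] = D( W(.|x) || q ), in bits.
        d = [sum(W[x][y] * math.log2(W[x][y] / q[y])
                 for y in range(ny) if W[x][y] > 0) for x in range(nx)]
        # Re-weight the input distribution toward high-divergence inputs.
        z = [p[x] * 2.0 ** d[x] for x in range(nx)]
        s = sum(z)
        p = [v / s for v in z]
    return sum(p[x] * d[x] for x in range(nx))

C_hop2 = blahut_arimoto([[0.9, 0.1], [0.1, 0.9]])   # BSC(0.1): 1 - h(0.1)
R1_df = min(0.4, C_hop2)                            # decode-and-forward rate
```

For the BSC(0.1) the algorithm converges to the closed form 1 − h(0.1) ≈ 0.531 bits, so the rate is limited here by the fronthaul link C_1.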
This performance is also recovered by the G-DS scheme; see Corollary 4 specialized to R_2 = 0 and the choice of auxiliaries V = ∅ and X_1 = U. The G-Compression scheme and the DDF scheme for broadcast achieve all rates R_1 that satisfy

R_1 < I(U_1; Y_1),
R_1 < C_1 + I(U_1; Y_1) − I(U_1; X_1) = C_1 − I(U_1; X_1 | Y_1),

for some pmf p_{U_1, X_1} such that U_1 − X_1 − Y_1 form a Markov chain. If the second hop is deterministic, i.e., Y_1 is a deterministic function of X_1, then the G-Compression scheme with U_1 = Y_1 achieves the capacity. However, if the second hop is not deterministic, then setting U_1 = Y_1 violates the Markov condition U_1 − X_1 − Y_1. In general, the G-Compression scheme is suboptimal. To see this, consider a DM-IC satisfying p_{Y_1|X_1}(y_1|x_1) < 1 for all (x_1, y_1) ∈ X_1 × Y_1, i.e., for all inputs x_1 ∈ X_1, the output Y_1 is not a deterministic function of x_1. Then, for every pmf p_{X_1}, the corresponding joint pmf p_{X_1, Y_1} is indecomposable.³ Next, let us assume that 0 < C_1 < max_{p_{X_1}} I(X_1; Y_1). Now we show by contradiction that the G-Compression scheme is not capacity achieving. If the G-Compression scheme were capacity achieving, then the capacity-achieving distribution p_{U_1, X_1} would satisfy I(U_1; X_1 | Y_1) = 0, i.e., U_1 − Y_1 − X_1 form a Markov chain. However, since U_1 − X_1 − Y_1

³ A joint pmf p_{X,Y} is said to be indecomposable [18, Problem 15.1, p. 345] if there are no functions f and g with respective domains X and Y so that 1) P(f(X) = g(Y)) = 1 and 2) f(X) takes at least two values with non-zero probability.

also form a Markov chain, the indecomposability of the joint pmf p_{X_1, Y_1} implies that the capacity-achieving distribution p_{U_1, X_1} satisfies that U_1 is independent of (X_1, Y_1) (see [18, Problem 16.5, p. 392]) and thus I(U_1; Y_1) = 0, which contradicts that the joint pmf p_{U_1, X_1} achieves the capacity C_1 > 0.

Fig. 6. The system considered in Example 4.

Example 4 (Z-Interference Channel): Consider the case where C_1 = C_2 = 1, C_12 = C_21 = 0, X_1 = X_2 = {0, 1}, Y_1 = X_1, and Y_2 = X_1 ⊕ X_2. The system is depicted in Figure 6. Now we show that the rate pair (R_1, R_2) = (1, 1), which is on the boundary of the capacity region, is achievable by the G-Compression scheme but not by the G-DS scheme.

The following scheme achieves the desired rate pair (R_1, R_2) = (1, 1). Fix a blocklength n and denote by B_l^n := (B_{l,1}, ..., B_{l,n}) the n-bit representation of M_l, l ∈ {1, 2}. The central processor sends all bits B_1^n to BS 1, and it sends the xor bits B^n := (B_{1,1} ⊕ B_{2,1}, ..., B_{1,n} ⊕ B_{2,n}) to BS 2. BS 1 sends inputs X_1^n = B_1^n over the DM-IC and BS 2 sends inputs X_2^n = B^n. The same performance is achieved by the G-Compression scheme when the auxiliaries (U_1, U_2) are chosen i.i.d. Bernoulli(1/2), and X_0 = ∅, X_1 = U_1, and X_2 = U_1 ⊕ U_2.

Now let us investigate the G-DS scheme. We consider the following relaxed conditions, where the inequalities do not need to be strict:

R_1 + R_2 ≤ I(U_0, U_1, U_2; Y_1) + I(V_0, V_1, V_2; Y_2) − I(U_0, U_1, U_2; V_0, V_1, V_2),   (a)
R_1 + R_2 ≤ C_1 + C_2 + C_12 + C_21 + I(U_1, U_2; Y_1 | U_0) − I(U_1, V_1; U_2, V_2 | U_0, V_0),   (b)

where (a) follows by combining (2), (3), and (4) with Ω_u = Ω_v = {0, 1, 2}, and (b) follows by combining two times (2) with (Ω_u, Ω_v) = ({0, 1, 2}, {0, 1, 2}) and (Ω_u, Ω_v) = ({0, 1, 2}, ∅), (3) with Ω_u = {1, 2}, (5), and (6).
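The bit-level scheme above achieving (1, 1) can be checked end-to-end with a short simulation (hypothetical blocklength):

```python
import random

n = 16                                            # hypothetical blocklength
B1 = [random.randint(0, 1) for _ in range(n)]     # bits of M_1
B2 = [random.randint(0, 1) for _ in range(n)]     # bits of M_2

# Central processor: B1 goes to BS 1, the xor bits go to BS 2
# (n bits each, so both fronthaul links operate at rate C_1 = C_2 = 1).
X1 = B1
X2 = [a ^ b for a, b in zip(B1, B2)]

# Noiseless Z-interference channel: Y1 = X1 and Y2 = X1 xor X2.
Y1 = X1
Y2 = [a ^ b for a, b in zip(X1, X2)]

# User 1 reads off M_1 directly; user 2 recovers M_2, since
# X1 xor X2 = B1 xor (B1 xor B2) = B2.
assert Y1 == B1 and Y2 == B2
```

Both users thus decode one bit per channel use, i.e., the rate pair (1, 1).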
If (R_1, R_2) = (1, 1) is achievable by the G-DS scheme, then there must exist a joint pmf p_{U_0, V_0, U_1, V_1, U_2, V_2} and functions x_k(u_0, v_0, u_k, v_k), k ∈ {1, 2}, such that
1) I(U_0, U_1, U_2; V_0, V_1, V_2) = 0;
2) I(U_1, V_1; U_2, V_2 | U_0, V_0) = 0;
3) I(U_0, U_1, U_2; Y_1) = 1;
4) I(V_0, V_1, V_2; Y_2) = 1;
5) I(U_1, U_2; Y_1 | U_0) = 1.
However, the above constraints cannot be satisfied simultaneously. To see this, let us assume that the first four conditions hold, which imply that 1) (U_0, U_1, U_2) is independent of (V_0, V_1, V_2); 2) the Markov chains U_1 − U_0 − U_2 and V_1 − V_0 − V_2 hold; and 3) H(X_1 | U_0, U_1, U_2) = H(X_1 ⊕ X_2 | V_0, V_1, V_2) = 0. Thus,

I(X_1; V_0, V_1 | U_0, U_1) = I(X_1, U_0, U_1; V_0, V_1)
≤ I(X_1, U_0, U_1, U_2; V_0, V_1)
(a)= I(U_0, U_1, U_2; V_0, V_1) = 0,

where (a) follows since H(X_1 | U_0, U_1, U_2) = 0. Since X_1 is a function of (U_0, V_0, U_1, V_1) by construction, we have H(X_1 | U_0, U_1) = H(X_1 | U_0, V_0, U_1, V_1) = 0, i.e., X_1 is a function of (U_0, U_1). Finally, it holds that

0 = H(X_1 ⊕ X_2 | V_0, V_1, V_2)
≥ H(X_1 ⊕ X_2 | U_0, U_2, V_0, V_1, V_2)
= H(X_1 | U_0, U_2, V_0, V_1, V_2)
(a)= H(X_1 | U_0),

where (a) follows since X_1 is a function of (U_0, U_1); (U_0, U_1, U_2) is independent of (V_0, V_1, V_2); and U_1 − U_0 − U_2 form a Markov chain. From all the above we obtain that constraint 5) cannot be satisfied since Y_1 is a function of U_0, which concludes that the G-DS scheme cannot achieve the rate pair (1, 1).

B. Numerical Evaluation for the Memoryless Gaussian Model

In this subsection, we compare the achieved sum rates of the various coding schemes under the memoryless Gaussian model. We are mainly interested in the scenarios where the G-DS scheme outperforms the G-Compression scheme and the reverse compute forward. Evaluating the G-DS scheme directly is challenging, so we evaluate special cases with restricted correlation structures and then apply time sharing to them. To summarize, we evaluate the following schemes: 1) the G-DS schemes I, II, and III (Corollaries 1, 2, and 3); 2) the G-Compression scheme (Theorem 2); and 3) reverse compute forward with power allocation [6]. For simplicity, we consider the symmetric case, i.e., C_1 = C_2 = C, C_12 = C_21 = T, g_11 = g_22 = 1, and |g_12| = |g_21|. Then, the achievable sum rate R_1 + R_2 can be upper bounded using the cut-set bound as

R_1 + R_2 < min{2C, R*_sum},   (14)

where R*_sum denotes the optimal sum rate assuming C = ∞, which can be computed by evaluating the corresponding Gaussian MIMO broadcast channel. We will use the cut-set bound (14) as a reference for comparison. Now let us specify our choice of auxiliary random variables for the various schemes. Except for the G-DS scheme II, all other schemes are evaluated based on dirty paper coding.
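The reference curve (14) can be evaluated numerically. A rough sketch, with the caveat that R*_sum is replaced here by the fully cooperative 2x2 MIMO rate with input covariance P·I, a simple heuristic stand-in for the exact Gaussian MIMO broadcast sum rate:

```python
import math

def cutset_proxy(C, P, g12, g21):
    """Proxy for min{2C, R_sum*}: R_sum* is approximated by the fully
    cooperative 2x2 MIMO rate 0.5*log2 det(I + P*G*G^T), i.e., per-BS
    power P and input covariance P*I (heuristic, not the exact BC value)."""
    # G = [[1, g12], [g21, 1]]; M = I + P * G G^T, det computed by hand.
    a = 1.0 + P * (1.0 + g12 * g12)
    d = 1.0 + P * (1.0 + g21 * g21)
    off = P * (g12 + g21)
    r_coop = 0.5 * math.log2(a * d - off * off)
    return min(2.0 * C, r_coop)

# Weak fronthaul: the 2C term dominates; strong fronthaul: the MIMO term does.
weak = cutset_proxy(C=0.1, P=100.0, g12=0.5, g21=-0.5)
strong = cutset_proxy(C=10.0, P=100.0, g12=0.5, g21=-0.5)
```

This reproduces the qualitative shape of the reference curve: linear in C until the channel-side term saturates it.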
Let S^{(k)} be a 2x1 jointly Gaussian random vector with zero-mean entries and covariance matrix K^{(k)}, for k ∈ {1, 2}. We assume that S^{(1)} and S^{(2)} are independent. For notational convenience, we denote g = [g_21  g_22].

1) Description I: U_0 = S^{(1)}, V_0 = S^{(2)} + A S^{(1)}, and [X_1  X_2]^T = S^{(1)} + S^{(2)}, where A = K^{(2)} g^T (1 + g K^{(2)} g^T)^{-1} g. Note that X_k is a function of (U_0, V_0), k ∈ {1, 2}. We optimize over the covariance matrices K^{(1)} and K^{(2)} that satisfy the average power constraints.

2) Description II: The random variables (U_0, V_0, U_1, V_1, U_2, V_2) are i.i.d. N(0, 1), and X_1 = a_1 U_0 + a_2 V_0 + a_3 U_1 + a_4 V_1 and X_2 = b_1 U_0 + b_2 V_0 + b_3 U_2 + b_4 V_2 for some a_j, b_j ∈ R, j ∈ {1, 2, 3, 4}. We optimize over the coefficients (a_j, b_j : j ∈ {1, 2, 3, 4}) that satisfy the average power constraints.

3) Description III:

[U_1  U_2]^T = S^{(1)},
[V_1  V_2]^T = S^{(2)} + A S^{(1)},
[X_1  X_2]^T = (I + A) S^{(1)} + S^{(2)},

where I is the identity matrix and A is a real-valued matrix. Note that X_k = U_k + V_k, k ∈ {1, 2}.⁴ We optimize over the covariance matrices K^{(1)}, K^{(2)} and the precoding matrix A that satisfy the average power constraints.

4) Compression: U_1 = S^{(1)}, U_2 = S^{(2)} + A S^{(1)}, and [X_1  X_2]^T = S^{(1)} + S^{(2)} + W, where A = K^{(2)} g^T (1 + g (K^{(2)} + K^{(w)}) g^T)^{-1} g, and W is a 2x1 jointly Gaussian random vector with zero-mean entries and covariance matrix K^{(w)}, independent of (S^{(1)}, S^{(2)}). Finally, we let X_0 be an N(0, 1) random variable such that X_0 and (S^{(1)}, S^{(2)}, W) are jointly Gaussian. We optimize over the covariance matrices K^{(1)}, K^{(2)}, and K^{(w)} that satisfy the average power constraints and over the covariances of X_0 with each of (S^{(1)}, S^{(2)}, W).

Fig. 7. Achieved sum rates R_1 + R_2 of the G-DS schemes I, II, and III under the symmetric memoryless Gaussian model, plotted against the link capacity C and compared with the cut-set bound. Here T = 0 and g_11 = g_22 = 1. (a) P = 1, (g_12, g_21) = (0.5, 0.5). (b) P = 1, (g_12, g_21) = (0.5, −0.5). (c) P = 100, (g_12, g_21) = (0.5, 0.5). (d) P = 100, (g_12, g_21) = (0.5, −0.5).

First, let us compare the G-DS schemes with different correlation structures. We assume that T = 0. In Figure 7, we fix g_12 = 0.5 and consider (P, g_21) ∈ {1, 100} × {0.5, −0.5}. From the evaluation results, we make the following observations and remarks for the considered setup:
• In general, the G-DS scheme I, using only common codewords, performs well in the strong-fronthaul regime, i.e., when C is large. By contrast, the G-DS scheme III, using only private codewords, performs well in the weak-fronthaul regime. Introducing correlation among codewords is useful.
• In fact, time sharing between the G-DS schemes I and III outperforms the G-DS scheme II for all values of the link capacity C.
• The G-DS scheme III is more beneficial in the low-power regime, i.e., when P is small.
• When the channel gain matrix G is well-conditioned, linear beamforming performs as well as dirty paper coding, which is the reason why the G-DS scheme II outperforms the G-DS scheme I in Figures 7(b) and 7(d).

⁴ We remark that since the BSs do not have full information about S^{(1)} and S^{(2)}, setting [X_1  X_2]^T = S^{(1)} + S^{(2)} is not allowed because the resulting X_k is not a function of (U_k, V_k), k ∈ {1, 2}. Also, due to this fact, the precoding matrix A = K^{(2)} g^T (1 + g K^{(2)} g^T)^{-1} g is in general suboptimal.
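The MMSE-like precoding matrix A = K^{(2)} g^T (1 + g K g^T)^{-1} g used in Descriptions I, III, and the compression description can be computed directly. A minimal plain-Python sketch; the covariance K^{(2)} and the channel row vector g below are hypothetical numbers, not values from the evaluation:

```python
def precoder(K2, g, Kw=((0.0, 0.0), (0.0, 0.0))):
    """A = K2 g^T (1 + g (K2 + Kw) g^T)^{-1} g for 2x2 covariances and a
    row vector g. Kw = 0 matches Descriptions I and III; a nonzero Kw
    matches the compression description."""
    Keff = [[K2[i][j] + Kw[i][j] for j in range(2)] for i in range(2)]
    u = [sum(K2[i][j] * g[j] for j in range(2)) for i in range(2)]       # K2 g^T
    v = [sum(Keff[i][j] * g[j] for j in range(2)) for i in range(2)]     # Keff g^T
    scalar = 1.0 + sum(g[i] * v[i] for i in range(2))                    # 1 + g Keff g^T
    # Outer product u g, scaled by the inverse of the 1x1 quantity above.
    return [[u[i] * g[j] / scalar for j in range(2)] for i in range(2)]

K2 = [[0.6, 0.0], [0.0, 0.4]]   # hypothetical covariance of S^(2)
g = [0.5, 1.0]                  # hypothetical channel row vector [g_21, g_22]
A = precoder(K2, g)
```

Since the inverted quantity is scalar, A is simply a rank-one matrix u g / (1 + g Keff g^T), which makes the per-candidate cost of the covariance search in the evaluation negligible.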


More information

On the Capacity of the Two-Hop Half-Duplex Relay Channel

On the Capacity of the Two-Hop Half-Duplex Relay Channel On the Capacity of the Two-Hop Half-Duplex Relay Channel Nikola Zlatanov, Vahid Jamali, and Robert Schober University of British Columbia, Vancouver, Canada, and Friedrich-Alexander-University Erlangen-Nürnberg,

More information

The Unbounded Benefit of Encoder Cooperation for the k-user MAC

The Unbounded Benefit of Encoder Cooperation for the k-user MAC The Unbounded Benefit of Encoder Cooperation for the k-user MAC Parham Noorzad, Student Member, IEEE, Michelle Effros, Fellow, IEEE, and Michael Langberg, Senior Member, IEEE arxiv:1601.06113v2 [cs.it]

More information

On Gaussian MIMO Broadcast Channels with Common and Private Messages

On Gaussian MIMO Broadcast Channels with Common and Private Messages On Gaussian MIMO Broadcast Channels with Common and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 20742 ersen@umd.edu

More information

Source-Channel Coding Theorems for the Multiple-Access Relay Channel

Source-Channel Coding Theorems for the Multiple-Access Relay Channel Source-Channel Coding Theorems for the Multiple-Access Relay Channel Yonathan Murin, Ron Dabora, and Deniz Gündüz Abstract We study reliable transmission of arbitrarily correlated sources over multiple-access

More information

Capacity Theorems for Distributed Index Coding

Capacity Theorems for Distributed Index Coding Capacity Theorems for Distributed Index Coding 1 Yucheng Liu, Parastoo Sadeghi, Fatemeh Arbabjolfaei, and Young-Han Kim Abstract arxiv:1801.09063v1 [cs.it] 27 Jan 2018 In index coding, a server broadcasts

More information

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122 Lecture 5: Channel Capacity Copyright G. Caire (Sample Lectures) 122 M Definitions and Problem Setup 2 X n Y n Encoder p(y x) Decoder ˆM Message Channel Estimate Definition 11. Discrete Memoryless Channel

More information

Degrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages

Degrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages Degrees of Freedom Region of the Gaussian MIMO Broadcast hannel with ommon and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and omputer Engineering University of Maryland, ollege

More information

Sum Capacity of General Deterministic Interference Channel with Channel Output Feedback

Sum Capacity of General Deterministic Interference Channel with Channel Output Feedback Sum Capacity of General Deterministic Interference Channel with Channel Output Feedback Achaleshwar Sahai Department of ECE, Rice University, Houston, TX 775. as27@rice.edu Vaneet Aggarwal Department of

More information

On Network Interference Management

On Network Interference Management On Network Interference Management Aleksandar Jovičić, Hua Wang and Pramod Viswanath March 3, 2008 Abstract We study two building-block models of interference-limited wireless networks, motivated by the

More information

EE/Stat 376B Handout #5 Network Information Theory October, 14, Homework Set #2 Solutions

EE/Stat 376B Handout #5 Network Information Theory October, 14, Homework Set #2 Solutions EE/Stat 376B Handout #5 Network Information Theory October, 14, 014 1. Problem.4 parts (b) and (c). Homework Set # Solutions (b) Consider h(x + Y ) h(x + Y Y ) = h(x Y ) = h(x). (c) Let ay = Y 1 + Y, where

More information

Feedback Capacity of the Gaussian Interference Channel to Within Bits: the Symmetric Case

Feedback Capacity of the Gaussian Interference Channel to Within Bits: the Symmetric Case 1 arxiv:0901.3580v1 [cs.it] 23 Jan 2009 Feedback Capacity of the Gaussian Interference Channel to Within 1.7075 Bits: the Symmetric Case Changho Suh and David Tse Wireless Foundations in the Department

More information

National University of Singapore Department of Electrical & Computer Engineering. Examination for

National University of Singapore Department of Electrical & Computer Engineering. Examination for National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:

More information

Source and Channel Coding for Correlated Sources Over Multiuser Channels

Source and Channel Coding for Correlated Sources Over Multiuser Channels Source and Channel Coding for Correlated Sources Over Multiuser Channels Deniz Gündüz, Elza Erkip, Andrea Goldsmith, H. Vincent Poor Abstract Source and channel coding over multiuser channels in which

More information

THE multiple-access relay channel (MARC) is a multiuser

THE multiple-access relay channel (MARC) is a multiuser IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 60, NO. 10, OCTOBER 2014 6231 On Joint Source-Channel Coding for Correlated Sources Over Multiple-Access Relay Channels Yonathan Murin, Ron Dabora, and Deniz

More information

An Uplink-Downlink Duality for Cloud Radio Access Network

An Uplink-Downlink Duality for Cloud Radio Access Network An Uplin-Downlin Duality for Cloud Radio Access Networ Liang Liu, Prati Patil, and Wei Yu Department of Electrical and Computer Engineering University of Toronto, Toronto, ON, 5S 3G4, Canada Emails: lianguotliu@utorontoca,

More information

Shannon meets Wiener II: On MMSE estimation in successive decoding schemes

Shannon meets Wiener II: On MMSE estimation in successive decoding schemes Shannon meets Wiener II: On MMSE estimation in successive decoding schemes G. David Forney, Jr. MIT Cambridge, MA 0239 USA forneyd@comcast.net Abstract We continue to discuss why MMSE estimation arises

More information

The Sensor Reachback Problem

The Sensor Reachback Problem Submitted to the IEEE Trans. on Information Theory, November 2003. 1 The Sensor Reachback Problem João Barros Sergio D. Servetto Abstract We consider the problem of reachback communication in sensor networks.

More information

IN this paper, we show that the scalar Gaussian multiple-access

IN this paper, we show that the scalar Gaussian multiple-access 768 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 5, MAY 2004 On the Duality of Gaussian Multiple-Access and Broadcast Channels Nihar Jindal, Student Member, IEEE, Sriram Vishwanath, and Andrea

More information

How to Compute Modulo Prime-Power Sums?

How to Compute Modulo Prime-Power Sums? How to Compute Modulo Prime-Power Sums? Mohsen Heidari, Farhad Shirani, and S. Sandeep Pradhan Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA.

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 62, NO. 5, MAY

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 62, NO. 5, MAY IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 6, NO. 5, MAY 06 85 Duality of a Source Coding Problem and the Semi-Deterministic Broadcast Channel With Rate-Limited Cooperation Ziv Goldfeld, Student Member,

More information

Reliable Computation over Multiple-Access Channels

Reliable Computation over Multiple-Access Channels Reliable Computation over Multiple-Access Channels Bobak Nazer and Michael Gastpar Dept. of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA, 94720-1770 {bobak,

More information

THE fundamental architecture of most of today s

THE fundamental architecture of most of today s IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 61, NO. 4, APRIL 2015 1509 A Unified Approach to Hybrid Coding Paolo Minero, Member, IEEE, Sung Hoon Lim, Member, IEEE, and Young-Han Kim, Fellow, IEEE Abstract

More information

The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR

The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR 1 Stefano Rini, Daniela Tuninetti and Natasha Devroye srini2, danielat, devroye @ece.uic.edu University of Illinois at Chicago Abstract

More information

LECTURE 15. Last time: Feedback channel: setting up the problem. Lecture outline. Joint source and channel coding theorem

LECTURE 15. Last time: Feedback channel: setting up the problem. Lecture outline. Joint source and channel coding theorem LECTURE 15 Last time: Feedback channel: setting up the problem Perfect feedback Feedback capacity Data compression Lecture outline Joint source and channel coding theorem Converse Robustness Brain teaser

More information

Layered Synthesis of Latent Gaussian Trees

Layered Synthesis of Latent Gaussian Trees Layered Synthesis of Latent Gaussian Trees Ali Moharrer, Shuangqing Wei, George T. Amariucai, and Jing Deng arxiv:1608.04484v2 [cs.it] 7 May 2017 Abstract A new synthesis scheme is proposed to generate

More information

Distributed Lossless Compression. Distributed lossless compression system

Distributed Lossless Compression. Distributed lossless compression system Lecture #3 Distributed Lossless Compression (Reading: NIT 10.1 10.5, 4.4) Distributed lossless source coding Lossless source coding via random binning Time Sharing Achievability proof of the Slepian Wolf

More information

Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel

Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel Stefano Rini, Ernest Kurniawan and Andrea Goldsmith Technische Universität München, Munich, Germany, Stanford University,

More information

II. THE TWO-WAY TWO-RELAY CHANNEL

II. THE TWO-WAY TWO-RELAY CHANNEL An Achievable Rate Region for the Two-Way Two-Relay Channel Jonathan Ponniah Liang-Liang Xie Department of Electrical Computer Engineering, University of Waterloo, Canada Abstract We propose an achievable

More information

A digital interface for Gaussian relay and interference networks: Lifting codes from the discrete superposition model

A digital interface for Gaussian relay and interference networks: Lifting codes from the discrete superposition model A digital interface for Gaussian relay and interference networks: Lifting codes from the discrete superposition model M. Anand, Student Member, IEEE, and P. R. Kumar, Fellow, IEEE Abstract For every Gaussian

More information

Generalized Writing on Dirty Paper

Generalized Writing on Dirty Paper Generalized Writing on Dirty Paper Aaron S. Cohen acohen@mit.edu MIT, 36-689 77 Massachusetts Ave. Cambridge, MA 02139-4307 Amos Lapidoth lapidoth@isi.ee.ethz.ch ETF E107 ETH-Zentrum CH-8092 Zürich, Switzerland

More information

Computing sum of sources over an arbitrary multiple access channel

Computing sum of sources over an arbitrary multiple access channel Computing sum of sources over an arbitrary multiple access channel Arun Padakandla University of Michigan Ann Arbor, MI 48109, USA Email: arunpr@umich.edu S. Sandeep Pradhan University of Michigan Ann

More information

Shannon s noisy-channel theorem

Shannon s noisy-channel theorem Shannon s noisy-channel theorem Information theory Amon Elders Korteweg de Vries Institute for Mathematics University of Amsterdam. Tuesday, 26th of Januari Amon Elders (Korteweg de Vries Institute for

More information

arxiv: v1 [cs.it] 4 Jun 2018

arxiv: v1 [cs.it] 4 Jun 2018 State-Dependent Interference Channel with Correlated States 1 Yunhao Sun, 2 Ruchen Duan, 3 Yingbin Liang, 4 Shlomo Shamai (Shitz) 5 Abstract arxiv:180600937v1 [csit] 4 Jun 2018 This paper investigates

More information

The Poisson Channel with Side Information

The Poisson Channel with Side Information The Poisson Channel with Side Information Shraga Bross School of Enginerring Bar-Ilan University, Israel brosss@macs.biu.ac.il Amos Lapidoth Ligong Wang Signal and Information Processing Laboratory ETH

More information

The Gallager Converse

The Gallager Converse The Gallager Converse Abbas El Gamal Director, Information Systems Laboratory Department of Electrical Engineering Stanford University Gallager s 75th Birthday 1 Information Theoretic Limits Establishing

More information

Capacity of a Two-way Function Multicast Channel

Capacity of a Two-way Function Multicast Channel Capacity of a Two-way Function Multicast Channel 1 Seiyun Shin, Student Member, IEEE and Changho Suh, Member, IEEE Abstract We explore the role of interaction for the problem of reliable computation over

More information

Keyless authentication in the presence of a simultaneously transmitting adversary

Keyless authentication in the presence of a simultaneously transmitting adversary Keyless authentication in the presence of a simultaneously transmitting adversary Eric Graves Army Research Lab Adelphi MD 20783 U.S.A. ericsgra@ufl.edu Paul Yu Army Research Lab Adelphi MD 20783 U.S.A.

More information

The Capacity Region for Multi-source Multi-sink Network Coding

The Capacity Region for Multi-source Multi-sink Network Coding The Capacity Region for Multi-source Multi-sink Network Coding Xijin Yan Dept. of Electrical Eng. - Systems University of Southern California Los Angeles, CA, U.S.A. xyan@usc.edu Raymond W. Yeung Dept.

More information

Remote Source Coding with Two-Sided Information

Remote Source Coding with Two-Sided Information Remote Source Coding with Two-Sided Information Basak Guler Ebrahim MolavianJazi Aylin Yener Wireless Communications and Networking Laboratory Department of Electrical Engineering The Pennsylvania State

More information

Lecture 6 I. CHANNEL CODING. X n (m) P Y X

Lecture 6 I. CHANNEL CODING. X n (m) P Y X 6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder

More information

L interférence dans les réseaux non filaires

L interférence dans les réseaux non filaires L interférence dans les réseaux non filaires Du contrôle de puissance au codage et alignement Jean-Claude Belfiore Télécom ParisTech 7 mars 2013 Séminaire Comelec Parts Part 1 Part 2 Part 3 Part 4 Part

More information

Information Theory for Wireless Communications. Lecture 10 Discrete Memoryless Multiple Access Channel (DM-MAC): The Converse Theorem

Information Theory for Wireless Communications. Lecture 10 Discrete Memoryless Multiple Access Channel (DM-MAC): The Converse Theorem Information Theory for Wireless Communications. Lecture 0 Discrete Memoryless Multiple Access Channel (DM-MAC: The Converse Theorem Instructor: Dr. Saif Khan Mohammed Scribe: Antonios Pitarokoilis I. THE

More information

Interactive Decoding of a Broadcast Message

Interactive Decoding of a Broadcast Message In Proc. Allerton Conf. Commun., Contr., Computing, (Illinois), Oct. 2003 Interactive Decoding of a Broadcast Message Stark C. Draper Brendan J. Frey Frank R. Kschischang University of Toronto Toronto,

More information

18.2 Continuous Alphabet (discrete-time, memoryless) Channel

18.2 Continuous Alphabet (discrete-time, memoryless) Channel 0-704: Information Processing and Learning Spring 0 Lecture 8: Gaussian channel, Parallel channels and Rate-distortion theory Lecturer: Aarti Singh Scribe: Danai Koutra Disclaimer: These notes have not

More information

Approximately achieving Gaussian relay. network capacity with lattice-based QMF codes

Approximately achieving Gaussian relay. network capacity with lattice-based QMF codes Approximately achieving Gaussian relay 1 network capacity with lattice-based QMF codes Ayfer Özgür and Suhas Diggavi Abstract In [1], a new relaying strategy, quantize-map-and-forward QMF scheme, has been

More information

Lecture 5 Channel Coding over Continuous Channels

Lecture 5 Channel Coding over Continuous Channels Lecture 5 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 14, 2014 1 / 34 I-Hsiang Wang NIT Lecture 5 From

More information

Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback

Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. XX, NO. Y, MONTH 204 Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback Ziad Ahmad, Student Member, IEEE,

More information

On Common Information and the Encoding of Sources that are Not Successively Refinable

On Common Information and the Encoding of Sources that are Not Successively Refinable On Common Information and the Encoding of Sources that are Not Successively Refinable Kumar Viswanatha, Emrah Akyol, Tejaswi Nanjundaswamy and Kenneth Rose ECE Department, University of California - Santa

More information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information 204 IEEE International Symposium on Information Theory Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information Omur Ozel, Kaya Tutuncuoglu 2, Sennur Ulukus, and Aylin Yener

More information

A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying

A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying Ahmad Abu Al Haija, and Mai Vu, Department of Electrical and Computer Engineering McGill University Montreal, QC H3A A7 Emails: ahmadabualhaija@mailmcgillca,

More information

Interference, Cooperation and Connectivity A Degrees of Freedom Perspective

Interference, Cooperation and Connectivity A Degrees of Freedom Perspective Interference, Cooperation and Connectivity A Degrees of Freedom Perspective Chenwei Wang, Syed A. Jafar, Shlomo Shamai (Shitz) and Michele Wigger EECS Dept., University of California Irvine, Irvine, CA,

More information

ELEC546 Review of Information Theory

ELEC546 Review of Information Theory ELEC546 Review of Information Theory Vincent Lau 1/1/004 1 Review of Information Theory Entropy: Measure of uncertainty of a random variable X. The entropy of X, H(X), is given by: If X is a discrete random

More information

On Scalable Source Coding for Multiple Decoders with Side Information

On Scalable Source Coding for Multiple Decoders with Side Information On Scalable Source Coding for Multiple Decoders with Side Information Chao Tian School of Computer and Communication Sciences Laboratory for Information and Communication Systems (LICOS), EPFL, Lausanne,

More information

THE practical bottleneck of today s communication networks

THE practical bottleneck of today s communication networks 1 Interference Channel with Generalized Feedback (a.k.a. with source cooperation: Part I: Achievable Region Shuang (Echo Yang and Daniela Tuninetti Abstract An Interference Channel with Generalized Feedback

More information

The Capacity Region of the Cognitive Z-interference Channel with One Noiseless Component

The Capacity Region of the Cognitive Z-interference Channel with One Noiseless Component 1 The Capacity Region of the Cognitive Z-interference Channel with One Noiseless Component Nan Liu, Ivana Marić, Andrea J. Goldsmith, Shlomo Shamai (Shitz) arxiv:0812.0617v1 [cs.it] 2 Dec 2008 Dept. of

More information

Equivalence for Networks with Adversarial State

Equivalence for Networks with Adversarial State Equivalence for Networks with Adversarial State Oliver Kosut Department of Electrical, Computer and Energy Engineering Arizona State University Tempe, AZ 85287 Email: okosut@asu.edu Jörg Kliewer Department

More information

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission

More information

Half-Duplex Gaussian Relay Networks with Interference Processing Relays

Half-Duplex Gaussian Relay Networks with Interference Processing Relays Half-Duplex Gaussian Relay Networks with Interference Processing Relays Bama Muthuramalingam Srikrishna Bhashyam Andrew Thangaraj Department of Electrical Engineering Indian Institute of Technology Madras

More information

A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality

A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality 0 IEEE Information Theory Workshop A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality Kumar Viswanatha, Emrah Akyol and Kenneth Rose ECE Department, University of California

More information