Error Exponent Regions for Gaussian Broadcast and Multiple Access Channels


Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos

Submitted: December, 5

Abstract — In modern communication systems, different users have different requirements for quality of service (QoS). In this work, QoS refers to the average codeword error probability experienced by the users in the network. Although several practical schemes (collectively referred to as unequal error protection schemes) have been studied in the literature and are implemented in existing systems, the corresponding performance limits have not been studied in an information-theoretic framework. In this paper, an information-theoretic framework is considered for the study of communication systems that provide heterogeneous reliabilities to the users. This is done by defining individual probabilities of error for the users in the network and obtaining the fundamental tradeoffs of the corresponding error exponents. In particular, we quantify the reliability tradeoff by introducing the notion of error exponent region (EER), which specifies the set of error exponent vectors that are simultaneously achievable by the users for a fixed vector of users' rates. We show the existence of a tradeoff among the users' error exponents by deriving inner and outer bounds for the EER. Using this framework, a system can be realized which provides a tradeoff of reliabilities among the users for a fixed vector of users' rates. This adds a completely new dimension to the performance tradeoff in such networks, which is unique to multi-terminal communication systems, and is beyond what is given by the conventional performance-versus-rate tradeoff in single-user systems. Although this is a very general concept and can be applied to any multi-terminal communication system, in this paper we consider Gaussian broadcast and multiple access channels.
Index Terms — Error exponent, error exponent region, maximum-likelihood decoding, Gaussian broadcast channel, Gaussian multiple access channel.

I. INTRODUCTION

In modern communication systems, different applications have different requirements for quality of service (QoS). For example, the third-generation (3G) wireless system is designed to provide various services such as real-time voice, video telephony, high-speed data transfer, full-motion video, high-quality audio, and so on [], []. In the 3G

This work was supported in part by the National Science Foundation under ITR Grant CCF. This paper was presented in part at the Conference on Information Sciences and Systems (CISS), Princeton, NJ, March 2004, and at the IEEE International Symposium on Information Theory (ISIT), Chicago, IL, June 27-July 2, 2004. The authors are with the Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, MI 48109 (e-mail: {lweng,pradhanv,anastas}@umich.edu).

system, the data transfer rates may vary from 3 kb/s in voice service to over 2 Mb/s in full-motion video, the delay requirements may vary from milliseconds in video telephony to a few seconds in web browsing, and the bit error rates may vary from as high as 10^-3 in voice service to as low as 10^-8 in video conferencing. The conventional Internet protocols are designed mainly for non-real-time data services, and are inherently suboptimal for networks running heterogeneous applications. One of the biggest challenges for modern communication system designers is to design a system which simultaneously supports several QoS requirements while still providing high-efficiency services. Although the complete design issues regarding QoS in communication networks are quite complex, in this work we focus on one key aspect of QoS - the bit error rate. In particular, we are interested in achieving different bit error rates for different users in a multi-user system. Some practical solutions to this problem have been proposed in the literature, collectively known as unequal error protection (UEP). These techniques, which provide UEP to different users, can be divided roughly into two categories - time-division coded modulation (TDCM) and superposition coded modulation (SCM) [3], [4]. TDCM is a form of resource sharing in which different users transmit on disjoint time intervals. In SCM, both users transmit on the same time intervals using a superposition of channel codes. For practical channel codes, there exist examples where TDCM, or a hybrid of TDCM and SCM, outperforms SCM [3], [4]. Although practical UEP schemes have been deployed in existing systems, there is currently no adequate theoretical framework which deals with different bit error rates in a multi-user channel.
A traditional approach concerning bit error rates in a point-to-point channel is the study of the reliability-rate tradeoff through the notion of the error exponent [5], [6], [7], [8], [9], [], [], which is also known as the reliability function of a channel. A straightforward extension of this concept can be realized in a multi-user setting by defining the probability of system error: a system is considered to be in error if at least one user's codeword is decoded erroneously. For the study of the capacity region of a multi-user channel, it is sufficient to show that this single performance measure, the probability of system error, approaches zero as the block length increases. This approach, however, does not solve the problem of assigning different error protections to different users in a multi-user channel, since only one error probability is considered, i.e., the probability of system error. Therefore, on the one hand, there are practical schemes that provide different error protections for different users, but on the other hand, the current information-theoretic analysis cannot cope with the issues of QoS in a multi-user system. Hence our goal is to provide an information-theoretic framework which can address these issues by giving concrete design methodologies for such systems. We ask the following question: in a two-user channel, is it possible to simultaneously provide an increased reliability for one user and a reduced reliability for the other user, while keeping their rates the same? More generally, for a fixed pair of data rates for the two users, is it possible to provide a set of choices of individual reliabilities for these two users?
The main contribution of this paper is to provide a positive answer to these questions by formalizing these ideas in the context of information theory, studying the fundamental limits of such tradeoffs of individual reliabilities among the users for a fixed vector of data rates, and developing efficient transmission strategies that approach these limits. This is done by defining individual error probabilities for each user and studying the tradeoff

of the corresponding error exponents. This tradeoff is quantified by introducing the concept of the error exponent region (EER) for a multi-user channel. Although the idea proposed in this paper is very general, we present it in the context of Gaussian broadcast and multiple access channels (MACs) [], [3], [4]. Some earlier works related to the results of this paper are those by Korner and Sgarro [5], which considers a broadcast channel with degraded message sets, and by Diggavi et al. [6], [7], which consider a single-user channel with two different messages, i.e., a high- and a low-reliability message.

The rest of the paper is structured as follows. In Section II, we provide an overview of the reliability-rate tradeoff for single-user and multi-user channels as studied in the literature. The notion of the error exponent region is introduced in Section III. We derive inner and outer bounds for these performance limits for Gaussian broadcast channels in Section IV and for Gaussian multiple access channels in Section V. In Section VI, we derive outer bounds for Gaussian broadcast and multiple access channels based on a geometric conjecture. We conclude our work in Section VII.

The following notation is used throughout this work. R and Z denote the sets of real numbers and integers, respectively. We use boldface letters to denote random variables (e.g., X) and lightface letters to denote their realizations (e.g., x). The calligraphic letters A, B, etc., denote general sets or probability events. The abbreviation i.i.d. stands for independent and identically distributed. The zero-mean, unit-variance (real) Gaussian distribution is denoted by N(0, 1). We do not distinguish between a scalar and a matrix in our notation. We write a ≜ b to mean that a is defined as b.

II.
BACKGROUND: ERROR EXPONENT

It is well known that the error exponent for a single-user channel provides the rate of exponential decay of the average probability of error as a function of the block length of the codebooks [5], [6], [7], [8], [9], [], []. The concept of the error exponent was extended to the MAC in [8], [9], [], [], [], [3], [4], where an upper bound on the probability of system error (i.e., the probability that any user is in error) was derived for random codes. In the following, we briefly review some basic results regarding error exponents for single-user channels and MACs.

Consider a discrete-time memoryless stationary single-user channel. Let P_e(N, R) denote the smallest average probability of block decoding error, i.e., codeword error, of any code of block length N and rate R for this channel. The error exponent at rate R is defined as

E(R) ≜ lim_{N→∞} −(1/N) log P_e(N, R),  (1)

where the limit in (1) (and throughout this paper) should be interpreted as lim sup or lim inf from the context whenever the limit does not exist. Define f(N) ≐ e^{−Nb} if

lim_{N→∞} −(1/N) log f(N) = b,  (2)

with the corresponding dotted inequalities ≤ and ≥ defined similarly. Thus, the probability of error P_e(N, R) can be written as P_e(N, R) ≐ e^{−N E(R)}.
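These single-user quantities can be evaluated numerically. As a hedged illustration (not the shell-constrained exponents used later in the paper), the sketch below computes Gallager's random-coding exponent for a real AWGN channel with an i.i.d. Gaussian input ensemble, E_r(R, SNR) = max_{0≤ρ≤1} [(ρ/2) ln(1 + SNR/(1+ρ)) − ρR] in nats; the SNR value is illustrative.

```python
import numpy as np

def E0(rho, snr):
    # Gallager E0 function for an i.i.d. Gaussian input ensemble on a real AWGN channel (nats)
    return 0.5 * rho * np.log(1.0 + snr / (1.0 + rho))

def Er(R, snr, grid=20001):
    # E_r(R, SNR) = max over 0 <= rho <= 1 of [E0(rho, snr) - rho * R]
    rho = np.linspace(0.0, 1.0, grid)
    return float(np.max(E0(rho, snr) - rho * R))

snr = 1.0
C = 0.5 * np.log(1.0 + snr)          # channel capacity in nats per channel use
rates = [0.05, 0.15, 0.25, C]
exponents = [Er(R, snr) for R in rates]
# the exponent is positive below capacity, decreases with R, and vanishes at R = C
```

As expected, the exponent decreases monotonically in the rate and hits zero at capacity, mirroring the qualitative behavior of E(R) described above.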

Error exponents have been studied in detail for discrete memoryless channels and additive white Gaussian noise (AWGN) channels. Lower and upper bounds are known for the error exponent E(R) of these channels. A lower bound, known as the random coding exponent E_r(R), was developed by Fano [8]. The random coding exponent was tightened at low rates by Gallager to yield the expurgated exponent E_ex(R) []. An upper bound, known as the sphere packing exponent E_sp(R), was developed by Shannon, Gallager, and Berlekamp [7], [].

Error exponents have also been studied for MACs. For a given MAC, let P_e,sys(N, R_1, R_2) denote the smallest average probability of block decoding system error of any code of block length N and rates R_1, R_2 for user 1 and user 2, respectively. The error exponent for a MAC is defined as

E_sys(R_1, R_2) ≜ lim_{N→∞} −(1/N) log P_e,sys(N, R_1, R_2).  (3)

In the following, we summarize the basic technique used by Gallager to provide an upper bound on the probability of system error in a MAC []. A variation of this method will be used later to provide similar upper bounds. Consider a codebook CB_1 = {C_1,1, C_1,2, ..., C_1,M_1} for user 1, where C_1,i is the i-th codeword of length N (1 ≤ i ≤ M_1) and M_1 is the number of codewords in the codebook CB_1. Similarly, CB_2 = {C_2,1, C_2,2, ..., C_2,M_2} is a codebook for user 2. Gallager [] derived the random coding exponent using joint maximum-likelihood (ML) decoding, i.e., decoding the users' messages based on the pair (i, j) maximizing P(Y^N | C_1,i, C_2,j), where Y^N is the received sequence of length N. Let (î, ĵ) denote the indexes of the decoded codewords for user 1 and user 2. The probability of system error can be written as

P_e,sys = P(î ≠ i or ĵ ≠ j)
        = P(î ≠ i and ĵ = j) + P(î = i and ĵ ≠ j) + P(î ≠ i and ĵ ≠ j)
        = P_e,t1 + P_e,t2 + P_e,t3,  (4)

where we define

P_e,t1 ≜ P(î ≠ i and ĵ = j)  (5a)
P_e,t2 ≜ P(î = i and ĵ ≠ j)  (5b)
P_e,t3 ≜ P(î ≠ i and ĵ ≠ j).  (5c)

Thus there are three types of error events.
A type 1 error occurs when user 1's codeword is decoded erroneously, but user 2's codeword is decoded correctly. A type 2 error occurs when user 2's codeword is decoded erroneously, but user 1's codeword is decoded correctly. A type 3 error occurs when both users' codewords are decoded as wrong codewords. Applying the random coding argument, it was shown in [] that there exist codebooks CB_1 and CB_2 such that P_e,ti can be upper bounded by

P_e,t1 ≤ e^{−N E_t1(R_1)}  (6a)
P_e,t2 ≤ e^{−N E_t2(R_2)}  (6b)
P_e,t3 ≤ e^{−N E_t3(R_1 + R_2)},  (6c)

where E_ti, 1 ≤ i ≤ 3, is an exponent which accounts for the type i error. The probability of system error can be upper bounded by

P_e,sys = P_e,t1 + P_e,t2 + P_e,t3
        ≤ e^{−N E_t1(R_1)} + e^{−N E_t2(R_2)} + e^{−N E_t3(R_1+R_2)}
        ≤ 3 e^{−N min{E_t1(R_1), E_t2(R_2), E_t3(R_1+R_2)}},  (7)

and the system error exponent can be lower bounded by

E_sys(R_1, R_2) ≥ min{E_t1(R_1), E_t2(R_2), E_t3(R_1 + R_2)}.  (8)

III. ERROR EXPONENT REGION

In this section, we introduce the notion of the error exponent region (EER) for a multi-user channel. We consider only two kinds of multi-user channels in this paper - two-user MACs and two-user broadcast channels - so the term multi-user channel refers to either one of them in this paper. Recall that for a multi-user channel, the probability of system error (or, equivalently, the corresponding system error exponent) is not sufficient to capture the different reliability requirements of the users. Our approach to addressing this issue hinges on the following two observations. First, one can define a separate probability of error for each user; therefore, there can be multiple error exponents, one for each user. Second, in contrast to a single-user channel, where the error exponent is fixed for a given rate, in a multi-user channel one can trade off the error exponents between different users even for a fixed vector of users' rates. To illustrate this point, consider the capacity region of a multi-user channel as shown in Fig. 1(b). As expected, the error exponents for the two users are functions of both the operating point A and the channel capacity region. However, unlike the case of a single-user channel, where the channel capacity boundary is a single point (Fig. 1(a)), in a multi-user channel we have multiple points on the capacity boundary (e.g., B, D in Fig. 1(b)). Thus one can expect to get different error exponents (and thus a tradeoff between them) depending on which target point on the capacity boundary is considered.
For instance, consider the operating point A (corresponding to a rate pair (R_1, R_2)), obtained by backing off from a target point B on the capacity boundary in Fig. 1(c). It is expected that the error exponent of the user who operates at a rate very close to the corresponding capacity (determined by B, see Fig. 1(c)) is smaller than that of the user who backs off significantly from the corresponding capacity (determined again by B, see Fig. 1(c)). On the other hand, if we consider point A as if it were obtained by backing off from a target point D on the capacity boundary in Fig. 1(b), we then expect the opposite ordering of the two users' error exponents (see Fig. 1(d)). Therefore, a tradeoff of error exponents between users might be possible by considering different points on the capacity boundary. The above observations lead us to the notion of the error exponent region (EER) for a multi-user channel. For a given operating point characterized by the rate pair (R_1, R_2), the EER consists of all achievable error exponent pairs for the two users. For example, the error exponent region for a channel operated at point A in Fig. 1 is a two-dimensional region which depends on the rates R_1 and R_2 (see Fig. 2). Note that the concepts of EER and channel

Fig. 1. Channel capacity region: (a) single-user channel, (b) multi-user channel, (c) users back off from point B to point A, (d) users back off from point D to point A.

capacity region (CCR) are fundamentally different. For a given channel, there is only one CCR. On the other hand, an EER depends on the channel operating point (R_1, R_2). Thus, for a given channel, there is one EER for every operating point inside the CCR.

IV. ERROR EXPONENT REGION FOR GAUSSIAN BROADCAST CHANNELS

A discrete memoryless stationary broadcast channel with two receivers is a tuple {X, Y_1, Y_2, P(Y_1, Y_2 | X)} of an input alphabet X, output alphabets Y_i for i = 1, 2, and a conditional probability distribution P(Y_1, Y_2 | X). We formally define the EER for a broadcast channel in the following.

Definition 1: An (N, M_1, M_2, P_e1, P_e2) code for a broadcast channel consists of an encoder

e : {1, 2, ..., M_1} × {1, 2, ..., M_2} → X^N,  (9)

a pair of decoders

d_i : Y_i^N → {1, 2, ..., M_i}  (10)

Fig. 2. Error exponent region for a rate pair (R_1, R_2).

for i = 1, 2, and a pair of error probabilities

P_e1 = (1/(M_1 M_2)) Σ_{k=1}^{M_1} Σ_{l=1}^{M_2} P[d_1(Y_1^N) ≠ k | X^N = e(k, l)]  (11a)
P_e2 = (1/(M_1 M_2)) Σ_{k=1}^{M_1} Σ_{l=1}^{M_2} P[d_2(Y_2^N) ≠ l | X^N = e(k, l)].  (11b)

Definition 2: Given a pair of transmission rates (R_1, R_2), a pair of error exponents (E_1, E_2) is said to be achievable for a broadcast channel if for all δ > 0 there exists a sequence of (N, M_1, M_2, P_e1, P_e2) codes such that

(1/N) log M_i > R_i − δ  (12a)
−(1/N) log P_ei > E_i − δ  (12b)

hold simultaneously for i = 1, 2, and for all sufficiently large N.

Definition 3: Given a pair of transmission rates (R_1, R_2), the error exponent region is the set of all achievable error exponent pairs.

Now, let us consider a scalar Gaussian broadcast channel [], [5]

Y_1 = X + Z_1  (13a)
Y_2 = X + Z_2,  (13b)

where X is the channel input with average power constraint P, and Y_1 and Y_2 are the channel outputs for user 1 and user 2, respectively. Assume that the noise power of Z_1 is σ_1² and that of Z_2 is σ_2², and that Z_1, Z_2 are independent. We derive inner and outer bounds for the EER in the following subsections.

A. Inner Bound for Error Exponent Region (Achievability)

Define the shelled Gaussian distribution N_sh(N, P) [6, Chap. 7] as follows.

Definition 4: The probability density function Q(X^N) of an N-dimensional shelled Gaussian random vector X^N = (X_1, ..., X_N) with variance (power) P is given by

Q(X^N) = μ φ(X^N) ∏_{k=1}^{N} (1/√(2πP)) e^{−X_k²/(2P)},  (14)

where

φ(X^N) = 1, if NP − δ < Σ_{k=1}^{N} X_k² ≤ NP, and 0 otherwise,  (15)

δ is an arbitrary positive number, and μ is a normalizing constant such that Q(X^N) integrates to 1. We write N_sh(N, P) as N_sh(P) when the dimension N is clear from the context.

We now derive an EER inner bound using two encoding strategies - single-code encoding and superposition encoding. In single-code encoding, we construct a random codebook CB = {C_i,j : 1 ≤ i ≤ M_1, 1 ≤ j ≤ M_2} of size M_1 M_2. Each random vector C_i,j is i.i.d. with distribution N_sh(N, P). At the receivers, user 1 decodes the message based on the pair (i, j) maximizing P(Y_1^N | C_i,j), and user 2 decodes the message based on the pair (i, j) maximizing P(Y_2^N | C_i,j).

In superposition encoding, we construct two independent random codebooks CB_1 and CB_2 of size M_1 and M_2, respectively (see Fig. 3). Let C_1,i and C_2,j denote the i-th and the j-th codewords in the codebooks CB_1 and CB_2, respectively. The channel input X^N is equal to C_1,i + C_2,j. Further, let C_1,i(k) and C_2,j(k) denote the k-th elements of the codewords C_1,i and C_2,j, respectively. The random vectors (C_1,i(1), ..., C_1,i(αN)) and (C_1,i(αN+1), ..., C_1,i(N)) are independent with distributions N_sh(αN, P_11) and N_sh((1−α)N, P_12), respectively, where α = a/N for some a ∈ {0, 1, ..., N}. Similarly, the random vectors (C_2,j(1), ..., C_2,j(αN)) and (C_2,j(αN+1), ..., C_2,j(N)) are independent with distributions N_sh(αN, P_21) and N_sh((1−α)N, P_22), respectively. Due to the power constraint P, we have the following equality:

α(P_11 + P_21) + (1 − α)(P_12 + P_22) = P.  (16)

Note that superposition subsumes two special and important encoding schemes, namely uniform superposition and on-off superposition. In uniform superposition, the parameter α in Fig. 3 is chosen to be zero or one, so the random codebooks CB_1 and CB_2 have uniform entries.
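The codebook construction above can be made concrete. The sketch below draws superposition codewords with the two-segment power profile of Fig. 3 and checks the total power constraint; the exact-power projection used here is only a simple surrogate for sampling from the shelled Gaussian of Definition 4, and all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def shell_gaussian(n, power, rng):
    # Draw i.i.d. N(0, power) samples and scale onto the shell sum x_k^2 = n * power;
    # a simple surrogate for the shelled Gaussian N_sh(n, power).
    x = rng.normal(0.0, np.sqrt(power), n)
    return x * np.sqrt(n * power / np.sum(x ** 2))

N, alpha, P = 1000, 0.4, 2.0
P11, P12, P21, P22 = 1.5, 0.5, 0.5, 1.5     # hypothetical power split
# total power constraint on the split
assert abs(alpha * (P11 + P21) + (1 - alpha) * (P12 + P22) - P) < 1e-12

n1 = int(alpha * N)
c1 = np.concatenate([shell_gaussian(n1, P11, rng), shell_gaussian(N - n1, P12, rng)])
c2 = np.concatenate([shell_gaussian(n1, P21, rng), shell_gaussian(N - n1, P22, rng)])
x = c1 + c2                                  # channel input X^N = C_1,i + C_2,j
avg_power = np.sum(x ** 2) / N               # close to P (cross terms average out)
```

Setting P_12 = P_21 = 0 in this sketch recovers on-off superposition (time-sharing), while α ∈ {0, 1} recovers uniform superposition.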
In on-off superposition, the parameters P_12 and P_21 in Fig. 3 are chosen to be zero, so the transmitter switches between user 1 and user 2 (on-off) during the transmission. On-off superposition is more commonly referred to as time-sharing in the literature.

At the receivers, the optimum decoding strategy is individual ML decoding, which minimizes the probabilities of error for user 1 and user 2. In particular, user 1's message is decoded based on the index i maximizing P(Y_1^N | C_1,i) = Σ_{j=1}^{M_2} P(Y_1^N | C_1,i + C_2,j) P(C_2,j), and user 2's message is decoded based on the index j maximizing P(Y_2^N | C_2,j) = Σ_{i=1}^{M_1} P(Y_2^N | C_1,i + C_2,j) P(C_1,i), where Y_1^N and Y_2^N are the received channel outputs (of length N) for user 1 and user 2, respectively. However, it turns out to be difficult to derive analytical, single-letter expressions for error exponents under individual ML decoding, so we use joint ML decoding to analyze

Fig. 3. Random codebooks for user 1 and user 2 using superposition encoding.

the performance instead. In joint ML decoding, user 1's message is decoded based on the pair (i, j) maximizing P(Y_1^N | C_1,i + C_2,j), and user 2's message is decoded based on the pair (i, j) maximizing P(Y_2^N | C_1,i + C_2,j). Note that we can substitute the optimal decoder with the joint ML decoder, or any other decoding scheme, and still obtain valid inner bounds for the EER. Furthermore, as will become evident in the subsequent analysis, the performance bounds based on joint ML decoding can be tightened by considering another decoding strategy, namely naive single-user decoding. In naive single-user decoding, user 1 simply regards the signal corresponding to user 2 as noise, and similarly, user 2 regards the signal corresponding to user 1 as noise.

Before summarizing the EER inner bound in the following theorem, we define a few error exponent functions. Let E_r(R, SNR) and E_ex(R, SNR) denote the random coding exponent and the expurgated exponent for a scalar Gaussian channel with rate R and signal-to-noise ratio SNR. Define the function E_r^np(R, SNR_1, SNR_2, α) as

E_r^np(R, SNR_1, SNR_2, α) ≜ max_{0 ≤ ρ ≤ 1; 0 < θ_1, θ_2 ≤ 1+ρ} {E_r,0^np(ρ, θ_1, θ_2, α) − ρR},  (17)

where E_r,0^np(ρ, θ_1, θ_2, α) is the sum of two Gallager-type exponent terms, the first weighted by α and depending on (ρ, θ_1, SNR_1) through logarithmic terms of the form ln(eθ_1/(1+ρ)) and ln(1 + SNR_1/θ_1), and the second weighted by (1 − α) and depending on (ρ, θ_2, SNR_2) in the same way. This function represents the random coding error exponent for a single-user scalar Gaussian channel with a power allocation resulting in signal-to-noise ratio SNR_1 for an α fraction of the time and SNR_2 for the remaining time.

The derivation of this expression is a straightforward extension of that of the standard random coding bound and is omitted here. A similar function E_ex^np(R, SNR_1, SNR_2, α) for the expurgated error exponent is defined as

E_ex^np(R, SNR_1, SNR_2, α) ≜ max_{ρ ≥ 1; 0 < θ_1, θ_2 ≤ ρ} {E_ex,0^np(ρ, θ_1, θ_2, α) − ρR},  (18)

where E_ex,0^np(ρ, θ_1, θ_2, α) is again an α-weighted sum of two expurgated-type exponent terms, involving ln(eθ_1/ρ) and ln(1 + SNR_1/θ_1) in the first term and ln(eθ_2/ρ) and ln(1 + SNR_2/θ_2) in the second, and let

E^np(R, SNR_1, SNR_2, α) ≜ max{E_r^np(R, SNR_1, SNR_2, α), E_ex^np(R, SNR_1, SNR_2, α)}.  (19)

Define E_t3^np(R, SNR_11, SNR_12, SNR_21, SNR_22, α) as

E_t3^np(R, SNR_11, SNR_12, SNR_21, SNR_22, α) ≜ max_{0 ≤ ρ ≤ 1; 0 < θ_11, θ_12, θ_21, θ_22 ≤ 1+ρ} {E_t3,0^np(ρ, θ_11, θ_12, θ_21, θ_22, α) − ρR},  (20)

where E_t3,0^np is an α-weighted sum of two terms, the first depending on (ρ, θ_11, θ_21, SNR_11, SNR_21) and the second on (ρ, θ_12, θ_22, SNR_12, SNR_22). This function accounts for the type 3 error in a scalar Gaussian MAC [] when the random codebooks for the two users are chosen as in Fig. 3. We now summarize the EER inner bound based on single-code and superposition encoding in the following theorem.

Theorem 1: For a Gaussian broadcast channel with power constraint P and noise powers σ_1² and σ_2² for user 1 and user 2, respectively, an inner bound for the EER is EER_sc(R_1, R_2) ∪ EER_sp(R_1, R_2), where EER_sc(R_1, R_2) and EER_sp(R_1, R_2) are given by

EER_sc(R_1, R_2) = {(E_1, E_2) :
E_1 ≤ max{E_r(R_1 + R_2, P/σ_1²), E_ex(R_1 + R_2, P/σ_1²)},
E_2 ≤ max{E_r(R_1 + R_2, P/σ_2²), E_ex(R_1 + R_2, P/σ_2²)}}  (21)

EER_sp(R_1, R_2) = ∪ {(E_1, E_2) :
E_1 ≤ max{min{E^np(R_1, P_11/σ_1², P_12/σ_1², α), E_t3^np(R_1 + R_2, P_11/σ_1², P_12/σ_1², P_21/σ_1², P_22/σ_1², α)}, E^np(R_1, P_11/(σ_1² + P_21), P_12/(σ_1² + P_22), α)},
E_2 ≤ max{min{E^np(R_2, P_21/σ_2², P_22/σ_2², α), E_t3^np(R_1 + R_2, P_21/σ_2², P_22/σ_2², P_11/σ_2², P_12/σ_2², α)}, E^np(R_2, P_21/(σ_2² + P_11), P_22/(σ_2² + P_12), α)}},  (22)

where the union in (22) is over all 0 ≤ α ≤ 1 and all nonnegative P_11, P_12, P_21, P_22 with α(P_11 + P_21) + (1 − α)(P_12 + P_22) = P,

where the subscripts sc and sp denote single-code and superposition, respectively.

Proof: The probabilities of error for user 1 and user 2 using single-code encoding can be upper bounded by

P_e1 = P(î_1 ≠ i) ≤ P(î_1 ≠ i or ĵ_1 ≠ j) ≤ e^{−N max{E_r(R_1+R_2, P/σ_1²), E_ex(R_1+R_2, P/σ_1²)}}  (23a)
P_e2 = P(ĵ_2 ≠ j) ≤ P(î_2 ≠ i or ĵ_2 ≠ j) ≤ e^{−N max{E_r(R_1+R_2, P/σ_2²), E_ex(R_1+R_2, P/σ_2²)}},  (23b)

where user 1 decodes (i, j) as (î_1, ĵ_1) and user 2 decodes (i, j) as (î_2, ĵ_2). The last inequalities in (23a) and (23b) are derived based on the achievable error exponents for Gaussian single-user channels. To prove the existence of a deterministic codebook CB satisfying both inequalities, we can apply the Markov inequality to get

P(P_e1 > β P̄_e1) ≤ 1/β and P(P_e2 > β P̄_e2) ≤ 1/β  (24)

for any β > 0, where P_e1 and P_e2 are the (random) probabilities of error for user 1 and user 2, respectively, based on the random codebook CB, and P̄_e1 and P̄_e2 are the ensemble averages of P_e1 and P_e2, respectively. Thus

P({P_e1 ≤ β P̄_e1} ∩ {P_e2 ≤ β P̄_e2}) = 1 − P({P_e1 > β P̄_e1} ∪ {P_e2 > β P̄_e2})
≥ 1 − P(P_e1 > β P̄_e1) − P(P_e2 > β P̄_e2) ≥ 1 − 2/β > 0  (25)

by choosing an appropriate β, namely any β > 2. This implies that there exists at least one deterministic codebook CB with

P_e1 ≤ β e^{−N E_1^sc}  (26a)
P_e2 ≤ β e^{−N E_2^sc},  (26b)

where the factor β has no effect on the error exponents and

E_1^sc = max{E_r(R_1 + R_2, P/σ_1²), E_ex(R_1 + R_2, P/σ_1²)}  (27a)
E_2^sc = max{E_r(R_1 + R_2, P/σ_2²), E_ex(R_1 + R_2, P/σ_2²)}.  (27b)

We next consider the superposition encoding shown in Fig. 3. The proof is given in three steps. The inner bound (22) is derived based on joint ML decoding and naive single-user decoding. The achievable error exponents based on joint ML decoding are derived in Steps 1 and 2, and the achievable error exponents based on naive single-user decoding are derived in Step 3.
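The Markov-inequality step in this argument is distribution-free and easy to check empirically. The sketch below uses an arbitrary surrogate distribution for the per-codebook error probabilities; whatever the distribution, the fraction of simulated "codebooks" that are simultaneously good for both users is at least 1 − 2/β.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 200_000                          # number of simulated random codebooks
# surrogate per-codebook error probabilities (any nonnegative law illustrates the point)
pe1 = rng.exponential(1e-3, M)
pe2 = rng.exponential(2e-3, M)

beta = 3.0
good = (pe1 <= beta * pe1.mean()) & (pe2 <= beta * pe2.mean())
frac_good = good.mean()              # guaranteed >= 1 - 2/beta = 1/3 by the union bound
```

Since the guaranteed fraction is strictly positive for any β > 2, at least one deterministic codebook must satisfy both scaled bounds simultaneously, which is exactly the existence argument used in the proof.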
In particular, in Step, we show that there exist a pair of random codebooks achieving the error exponents given in () (based on joint ML decoding), and in Step, we show that there exist a pair of deterministic codebooks achieving the error exponents given in () (based on joint ML decoding).

Step 1: Let P_e1,t1 denote the type 1 error probability, i.e., the probability that user 1 decodes (i, j) as (î, j), and let P_e1,t3 denote the type 3 error probability, i.e., the probability that user 1 decodes (i, j) as (î, ĵ), where î ≠ i and ĵ ≠ j. Similarly, let P_e2,t2 denote the type 2 error probability, the probability that user 2 decodes (i, j) as (i, ĵ), and let P_e2,t3 denote the type 3 error probability, the probability that user 2 decodes (i, j) as (î, ĵ). Applying the random coding argument used in [], it can be shown that the average error probabilities over the ensemble of random codebooks for user 1 and user 2 using joint ML decoding satisfy

P̄_e1,t1 ≤ e^{−N E^np(R_1, P_11/σ_1², P_12/σ_1², α)}  (28a)
P̄_e2,t2 ≤ e^{−N E^np(R_2, P_21/σ_2², P_22/σ_2², α)}  (28b)
P̄_e1,t3 ≤ e^{−N E_t3^np(R_1+R_2, P_11/σ_1², P_12/σ_1², P_21/σ_1², P_22/σ_1², α)}  (28c)
P̄_e2,t3 ≤ e^{−N E_t3^np(R_1+R_2, P_21/σ_2², P_22/σ_2², P_11/σ_2², P_12/σ_2², α)}.  (28d)

The probabilities of error for user 1 and user 2 using joint ML decoding can then be upper bounded by

P̄_e1 = P̄_e1,t1 + P̄_e1,t3 ≤ 2 e^{−N min{E^np(R_1, P_11/σ_1², P_12/σ_1², α), E_t3^np(R_1+R_2, P_11/σ_1², P_12/σ_1², P_21/σ_1², P_22/σ_1², α)}}  (29a)
P̄_e2 = P̄_e2,t2 + P̄_e2,t3 ≤ 2 e^{−N min{E^np(R_2, P_21/σ_2², P_22/σ_2², α), E_t3^np(R_1+R_2, P_21/σ_2², P_22/σ_2², P_11/σ_2², P_12/σ_2², α)}}.  (29b)

Thus the error exponents obtained using joint ML decoding are

E_1^sp,jm = min{E^np(R_1, P_11/σ_1², P_12/σ_1², α), E_t3^np(R_1+R_2, P_11/σ_1², P_12/σ_1², P_21/σ_1², P_22/σ_1², α)}  (30a)
E_2^sp,jm = min{E^np(R_2, P_21/σ_2², P_22/σ_2², α), E_t3^np(R_1+R_2, P_21/σ_2², P_22/σ_2², P_11/σ_2², P_12/σ_2², α)},  (30b)

where the superscript sp,jm denotes superposition encoding and joint ML decoding.

Step 2: In the previous discussion, we have shown that, averaged over the ensemble of random codebooks (CB_1, CB_2), the error probabilities satisfy P̄_e1 ≤ e^{−N E_1^sp,jm} and P̄_e2 ≤ e^{−N E_2^sp,jm}, where E_1^sp,jm and E_2^sp,jm are given in (30). This implies that there exists a pair of deterministic codebooks (CB_1, CB_2) whose user 1 error probability P_e1 satisfies P_e1 ≤ e^{−N E_1^sp,jm}, and there exists another pair of deterministic codebooks (CB_1', CB_2') whose user 2 error probability P_e2 satisfies P_e2 ≤ e^{−N E_2^sp,jm}.
However, this does not mean that there exists a single pair of deterministic codebooks (CB_1, CB_2) with a pair of error probabilities (P_e1, P_e2) satisfying P_e1 ≤ e^{−N E_1^sp,jm} and P_e2 ≤ e^{−N E_2^sp,jm} simultaneously. To prove the existence of such deterministic codebooks (CB_1, CB_2), we can apply the Markov inequality as was done for the case of single-code encoding. This difficulty does not arise in a multi-user channel when there is only one error probability criterion. For example, in the case of the system error probability for a MAC considered in [], the existence of a pair of random codebooks (CB_1, CB_2) satisfying P̄_e,sys ≤ e^{−NE} directly implies the existence of a pair of deterministic codebooks (CB_1, CB_2) satisfying P_e,sys ≤ e^{−NE}.
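For concreteness, the two decoding rules appearing in the proof can be contrasted on a miniature superposition code. The sketch below implements individual ML decoding for user 1 by marginalizing the joint likelihood over user 2's codewords (uniform prior); the codebook sizes, powers, and noise level are all hypothetical toy values.

```python
import numpy as np

rng = np.random.default_rng(2)
N, noise_var = 50, 0.25
cb1 = rng.normal(0.0, 1.0, (2, N))    # toy codebook for user 1 (2 codewords)
cb2 = rng.normal(0.0, 0.5, (2, N))    # toy codebook for user 2 (2 codewords)

i, j = 1, 0                           # transmitted message pair
y = cb1[i] + cb2[j] + rng.normal(0.0, np.sqrt(noise_var), N)

def loglik(y, x, var):
    # log P(y | x) for an AWGN channel, up to an additive constant
    return -0.5 * np.sum((y - x) ** 2) / var

# individual ML for user 1: maximize sum_j P(y | c1 + c2_j) P(c2_j)
scores = [np.logaddexp.reduce([loglik(y, c1 + c2, noise_var) for c2 in cb2])
          for c1 in cb1]
i_hat = int(np.argmax(scores))        # decoded index for user 1
```

With these (very favorable) toy parameters the marginalized decoder recovers user 1's message; the analytical difficulty discussed above lies in bounding the error probability of exactly this marginalized rule, which is why the proof falls back on joint ML decoding.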

Step 3: When naive single-user decoding is utilized, the side-interference X_2^N is shelled Gaussian distributed, so the noise X_2^N + Z_1^N seen by user 1 is not exactly Gaussian. Nevertheless, the shelled Gaussian density N_sh(N, P) given in Definition 4 can be upper bounded by

Q(X^N) = μ φ(X^N) ∏_{k=1}^{N} (1/√(2πP)) e^{−X_k²/(2P)} ≤ μ ∏_{k=1}^{N} (1/√(2πP)) e^{−X_k²/(2P)},  (31)

where the right-hand side of the last inequality is a Gaussian density except for the factor μ. We can use the upper bound in (31) to derive an upper bound for P̄_e1, and the final result is

P̄_e1 ≤ μ' e^{−N E^np(R_1, P_11/(σ_1² + P_21), P_12/(σ_1² + P_22), α)},  (32)

where the factor μ' = μ_1 μ_2 is due to using two shelled Gaussian distributions in each random codebook, and does not have an effect on the error exponent since it can be shown that μ' scales polynomially with N. Similarly, the probability of error for user 2 can be upper bounded by

P̄_e2 ≤ μ' e^{−N E^np(R_2, P_21/(σ_2² + P_11), P_22/(σ_2² + P_12), α)}.  (33)

Therefore, the achievable error exponents using naive single-user decoding are

E_1^sp,ns = E^np(R_1, P_11/(σ_1² + P_21), P_12/(σ_1² + P_22), α)  (34a)
E_2^sp,ns = E^np(R_2, P_21/(σ_2² + P_11), P_22/(σ_2² + P_12), α),  (34b)

where the superscript sp,ns denotes superposition encoding and naive single-user decoding. Since each user can choose either joint ML decoding or naive single-user decoding, the maximum of the corresponding error exponents is achievable. This completes the proof.

Several comments are in order at this point. It may seem surprising that we use two different probability distributions, N_sh(P_11) and N_sh(P_12), to construct the random codebook CB_1 (and similarly for CB_2). This requires some explanation. Consider the two special cases of superposition encoding - uniform superposition and on-off superposition. In Fig. 4(a), the achievable EERs obtained by these two special cases are illustrated.
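The effect of treating the other user's signal as noise can be illustrated with the simplified i.i.d.-Gaussian random-coding exponent used as a stand-in for the shell-based E^np: the interference power is simply added to the noise power in the effective SNR. All parameter values below are hypothetical.

```python
import numpy as np

def Er(R, snr, grid=10001):
    # simplified i.i.d.-Gaussian random-coding exponent in nats (not the shell-based E^np)
    rho = np.linspace(0.0, 1.0, grid)
    return float(np.max(0.5 * rho * np.log(1.0 + snr / (1.0 + rho)) - rho * R))

P1, P2, sigma1_sq, R1 = 1.0, 1.0, 0.5, 0.1
snr_naive = P1 / (sigma1_sq + P2)     # user 2's signal lumped into user 1's noise
snr_clean = P1 / sigma1_sq            # interference-free reference

E_naive = Er(R1, snr_naive)
E_clean = Er(R1, snr_clean)
# naive decoding still yields a positive exponent, though smaller than the clean one
```

Even in this crude surrogate, the naive-decoding exponent stays strictly positive as long as the rate is below the capacity of the interference-degraded channel, which is the qualitative behavior exploited in Step 3.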
The dashed curve is the boundary of the achievable EER using uniform superposition, and the dotted curve is the boundary of the achievable EER using on-off superposition (the dotted curve merging with the solid curve at (E_1, E_2) = (0.46, 0.8) and (E_1, E_2) = (0.8, 0.46)). In Fig. 4(b), the achievable EERs for the same Gaussian channel, but with unequal rates for user 1 and user 2, are illustrated. Based on these two encoding schemes, it is now clear that superposition encoding includes these two special cases (uniform and on-off) and also serves as a smooth transition between them. One may ask if it is possible to improve the EER by using three, four, or even more probability distributions to construct each random codebook. Our numerical results indicate that going beyond two distributions provides only marginal improvements. However, multiple distributions might be beneficial for a broadcast channel with more than two users.

Fig. 4. EER inner bound using on-off superposition (dotted), uniform superposition (dashed), superposition (solid) and single-code (dash-dotted) encoding for (a) R_1 = 0.5, R_2 = 0.5 and (b) unequal rates R_1 and R_2.

In Fig. 4(a), the maximum equal error exponent pair achieved by superposition encoding is (E_1, E_2) = (0.44, 0.44), which is slightly smaller than the error exponent pair (E_1, E_2) = (0.46, 0.46) achieved by single-code encoding. The broadcast channel in Fig. 4 is symmetric and, as will be demonstrated shortly, for this particular choice of transmission rates (illustrated in Fig. 4(a)), single-code encoding is optimum in the sense of maximizing the equal error exponents. It also happens that in Fig. 4 the use of (nonuniform) superposition does not enlarge the achievable EER beyond what is obtained by on-off superposition and single-code encoding. This is not true in general. We point out that the EER_sp(R_1, R_2) achieved by superposition in (22) is nonvanishing for any point (R_1, R_2) inside the capacity region, but the EER_sc(R_1, R_2) achieved by single-code encoding in (21) is empty when R_1 + R_2 > (1/2) log(1 + P/σ_1²) (assuming σ_1² > σ_2²). As illustrated in Fig. 5(a), the EER_sc(R_1, R_2) achieved by single-code encoding is empty, and (nonuniform) superposition indeed enlarges the region achieved by using only uniform and on-off superposition. In Fig. 5(b), it happens that the achievable EER using on-off superposition lies completely inside the achievable EER using uniform superposition (the dashed curve merging with the solid curve at E_1 = 0.38). Note that on-off superposition is not a capacity-achieving strategy, whereas uniform superposition is. Hence, it is possible that the achievable EER using on-off superposition is included in the achievable EER using uniform superposition for some operating points. In Fig.
4(a), the maximum achievable equal error exponent pair using uniform superposition is E_1 = E_2 = 0.39, which is smaller than the maximum achievable equal error exponent pair E_1 = E_2 = 0.44 using

Fig. 5. EER inner bound using on-off superposition (dotted), uniform superposition (dashed), superposition (solid), and single-code encoding (dash-dotted) for two rate pairs; in (b), R_2 = 0.65.

(nonuniform) superposition. Given that the broadcast channel is symmetric and is operated at equal rates R_1 = R_2, why does uniform superposition (P_{11} = P_{12} = P_1) not achieve the maximum equal error exponent pair? A partial answer can be obtained from the following observations. Recall that in joint ML decoding there are two types of error events for each user: type 1 and type 3 for user 1, and type 2 and type 3 for user 2. For the point (E_1, E_2) = (0.39, 0.39) in Fig. 4(a), the corresponding lower bound for the error exponent of the type-1 error of user 1 (denoted by E_{t1}) is larger than that for the type-3 error (denoted by E_{t3}), which equals 0.39; thus the type-3 error event is the dominant one. Now, for the (nonuniform) superposition case, if we plot E_{t1} and E_{t3} as functions of P_{11}, given by

E_{t1} = E_1^{np}(R_1, P_{11}/σ_1^2, P_{12}/σ_1^2, α)  (35a)
E_{t3} = E_{t3}^{np}(R_1 + R_2, P_{11}/σ_1^2, P_{21}/σ_1^2, P_{12}/σ_1^2, P_{22}/σ_1^2, α),  (35b)

while keeping R_1 = R_2 = 0.5, P, and α fixed, then E_{t1} decreases as P_{11} increases, but E_{t3} increases as P_{11} increases (see Fig. 6). Since E_1 = min{E_{t1}, E_{t3}}, the error exponent for user 1 (and, by symmetry, the error exponent for user 2) increases when we use superposition. Thus superposition (compared to uniform superposition) provides one more degree of freedom to trade off between the type-1 and type-3 errors, which increases the maximum achievable equal error exponent pair when the dominant error event is the type-3 error. The result that the performance bound based on joint ML decoding can be improved by naive single-user

Fig. 6. E_{t1} and E_{t3} plotted as functions of P_{11}/σ_1^2.

decoding might not have been anticipated. To illustrate this, let us consider a broadcast channel operated at (R_1, R_2) = (0.4, 1) with P = 5, σ_1^2 = 1, and σ_2^2 = 5. The sum rate R_1 + R_2 = 1.4 exceeds (1/2) log(1 + P/σ_1^2), so E^{sc} = 0 and E_1^{sp,jm} = 0, because it can be verified (numerically) that for any R, SNR_1, SNR_2, SNR_3, SNR_4, and α, we have

E_{t3}^{np}(R, SNR_1, SNR_2, SNR_3, SNR_4, α) ≤ E_r(R, α(SNR_1 + SNR_2) + (1 − α)(SNR_3 + SNR_4)),  (36)

so E_{t3}^{np} = 0 for user 1 in this case. On the other hand, if we use uniform superposition (α = 1) with P_1 = 1 and P_2 = 4, then, even if user 1 simply regards the side-interference X_2^N as noise, the achievable error exponent for user 1 is E_1^{sp,ns} = E_r(R_1, P_1/(P_2 + σ_1^2)) = 0.84 (and E_2^{sp,jm} = 0.38). This example illustrates that the achievable error exponents derived using joint ML decoding might be much worse than the actual performance of individual ML decoding. There are two possible explanations, though we cannot verify which one is the main reason: either joint ML decoding is significantly inferior to individual ML decoding, or the bound derived for joint ML decoding is loose. Nevertheless, naive single-user decoding serves as an assisting decoding scheme that partially closes the performance gap between the optimum individual ML decoding and the suboptimum joint ML decoding. In Fig. 4(a), there appear to be abrupt changes in the achievable EER using superposition encoding around (E_1, E_2) = (0.08, 0.46) and (E_1, E_2) = (0.46, 0.08). This is due to the switch between joint ML and naive single-user decoding at the receivers. To illustrate this point, we plot two curves in Fig. 7, where in the solid

curve, user 2 uses only joint ML decoding, and in the dashed curve, user 2 uses only naive single-user decoding (user 1 uses a mixture of joint ML and naive single-user decoding in both curves). Although E_2^{sp,jm} increases slowly as E_1 decreases from 0.44 to 0, E_2^{sp,ns} increases much more rapidly as E_1 decreases. E_2^{sp,ns} becomes equal to E_2^{sp,jm} around (E_1, E_2) = (0.08, 0.46), and this is why there is an abrupt change there. We believe that the abrupt change of the achievable EER using superposition in Fig. 4 is an artifact of the switch between joint ML and naive single-user decoding, and we anticipate that the actual achievable EER using the optimum individual ML decoding would be much smoother. However, we cannot verify this speculation.

Fig. 7. EER inner bound for R_1 = 0.5, R_2 = 0.5, with user 2 using joint ML decoding (solid) and naive single-user decoding (dashed).

B. Outer Bound for Error Exponent Region

We now derive an EER outer bound and summarize the result in the following theorem.

Theorem 2. For a Gaussian broadcast channel with power constraint P and noise powers σ_1^2 and σ_2^2 for user 1 and user 2, respectively, an outer bound for the EER is

E_1 ≤ E_su(R_1, P/σ_1^2)  (37a)
E_2 ≤ E_su(R_2, P/σ_2^2)  (37b)
min{E_1, E_2} ≤ max{ E_su(R_1 + R_2, P/σ_1^2), E_su(R_1 + R_2, P/σ_2^2) },  (37c)

where E_su(·) is any error exponent upper bound for a scalar Gaussian channel and the subscript su denotes single-user upper bound.
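To make the theorem concrete, the sketch below checks whether a candidate exponent pair satisfies conditions (37a)-(37c). Since the theorem allows any single-user upper bound E_su, the code uses a sphere-packing-shaped surrogate built from the Gaussian-input Gallager E_0 maximized over ρ ≥ 0 (truncated for numerical purposes); this is an illustrative stand-in, not the true Gaussian sphere-packing bound.

```python
import math

def E0(rho, snr):
    # Gallager-style E0 for a Gaussian input ensemble on an AWGN channel (nats).
    return 0.5 * rho * math.log(1.0 + snr / (1.0 + rho))

def Esu(R, snr, rho_max=20.0, grid=4001):
    # Surrogate single-user exponent upper bound: max over 0 <= rho <= rho_max
    # of E0(rho, snr) - rho * R (sphere-packing shape, truncated at rho_max).
    return max(E0(rho_max * k / (grid - 1), snr) - rho_max * k / (grid - 1) * R
               for k in range(grid))

def in_bc_outer_bound(E1, E2, R1, R2, P, s1, s2):
    # Conditions (37a)-(37c) of the broadcast-channel outer bound.
    return (E1 <= Esu(R1, P / s1)
            and E2 <= Esu(R2, P / s2)
            and min(E1, E2) <= max(Esu(R1 + R2, P / s1), Esu(R1 + R2, P / s2)))
```

The third condition is the one that rules out rectangular regions: both users cannot simultaneously exceed the best sum-rate exponent.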

Proof: For any broadcast channel, the probability of decoding error for user i can always be lower bounded by the probability of decoding error for user i operating over a point-to-point channel defined by the marginal distribution P(Y_i | X), where i = 1 or 2. This implies that

E_1 ≤ E_su(R_1, P/σ_1^2)  (38a)
E_2 ≤ E_su(R_2, P/σ_2^2).  (38b)

Further, given any encoding and decoding schemes, it is true that

P_{e,sys} ≤ P_{e1} + P_{e2} ≤ 2 max{P_{e1}, P_{e2}},  (39)

where the first inequality follows from the union bound. The broadcast channel considered in (3) is stochastically degraded [7, Chap. 4]. Since the performance of a broadcast channel depends only on the marginal distributions, we may further assume that the broadcast channel considered in (3) is physically degraded, i.e., P(Y_1, Y_2 | X) = P(Y_1 | X) P(Y_2 | Y_1) if σ_1^2 ≤ σ_2^2. If we now allow the two receivers to cooperate, we have a single-user channel whose probability of error, P_e, should be less than or equal to the probability of system error P_{e,sys} in the broadcast channel [8]. The probability of error P_e of the new single-user channel can be lower bounded by

P_e ≥ e^{−N max{E_su(R_1+R_2, P/σ_1^2), E_su(R_1+R_2, P/σ_2^2)}},  (40)

since the broadcast channel is physically degraded. Combining (39) and (40), we have

e^{−N max{E_su(R_1+R_2, P/σ_1^2), E_su(R_1+R_2, P/σ_2^2)}} ≤ P_e ≤ P_{e,sys} ≤ 2 max{P_{e1}, P_{e2}},  (41)

which implies that

min{E_1, E_2} ≤ max{E_su(R_1+R_2, P/σ_1^2), E_su(R_1+R_2, P/σ_2^2)}.  (42)

This completes the proof.

This outer bound is illustrated in Fig. 8(a), where the solid curve is the EER inner bound and the dash-dotted curve is the EER outer bound. One of the main goals of this work is to show that one can trade off the error exponents among the users even for a fixed vector of transmission rates in a multi-user channel. This is equivalent to saying that the EER is not a rectangle. A possible boundary of the EER (dotted curve) is shown in Fig. 8(b), which is a zoomed-in version of Fig. 8(a). It is clear from Fig.
8(b) that there is indeed a tradeoff between user 1's and user 2's error exponents, i.e., the EER is not a rectangle, when the channel is operated at (R_1, R_2) = (0.5, 0.5). Note that the EER inner and outer bounds are tight at the equal error exponent pair (E_1, E_2) = (0.46, 0.46). This follows from the fact that the broadcast channel in Fig. 8 is symmetric and is operated at high rates, so the random coding exponent and the sphere packing exponent are tight at the sum rate R_1 + R_2.

V. ERROR EXPONENT REGION FOR GAUSSIAN MULTIPLE ACCESS CHANNELS

Consider a discrete-time memoryless stationary scalar Gaussian MAC

Y = X_1 + X_2 + Z,  (43)

Fig. 8. EER inner bound (solid) and outer bound (dash-dotted) for R_1 = 0.5, R_2 = 0.5.

where X_1 and X_2 are the channel inputs for user 1 and user 2 with average power constraints P_1 and P_2, and Y is the channel output. Assume that the noise power of Z is σ^2. The EER for any MAC can be formally defined in a way similar to Definitions 1, 2, and 3. We now derive inner and outer bounds for the EER in the following subsections.

A. Inner Bound for Error Exponent Region (Achievability)

At the transmitters, we use superposition encoding and construct two independent random codebooks CB_1 and CB_2 of sizes M_1 and M_2, respectively. Let C_{1,i} and C_{2,j} denote the i-th and j-th codewords in the codebooks CB_1 and CB_2, respectively, and let C_{1,i}(k) and C_{2,j}(k) denote the k-th elements of the codewords C_{1,i} and C_{2,j}. The random vectors (C_{1,i}(1), ..., C_{1,i}(αN)) and (C_{1,i}(αN+1), ..., C_{1,i}(N)) are independent with distributions N_sh(αN, P_{11}) and N_sh((1−α)N, P_{12}), respectively, where α = a/N for some a ∈ {0, 1, ..., N}. Similarly, the random vectors (C_{2,j}(1), ..., C_{2,j}(αN)) and (C_{2,j}(αN+1), ..., C_{2,j}(N)) are independent with distributions N_sh(αN, P_{21}) and N_sh((1−α)N, P_{22}), respectively. Due to the power constraints P_1 and P_2, we have the following equalities:

αP_{11} + (1−α)P_{12} = P_1  (44a)
αP_{21} + (1−α)P_{22} = P_2.  (44b)

At the receiver, we use a mixture of joint ML decoding and naive single-user decoding. We summarize the result in the following theorem.
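The two-segment construction and the power-constraint bookkeeping in (44) can be sanity-checked numerically. The sketch below draws a small codebook with i.i.d. Gaussian segments (a simplifying stand-in for the shell distributions N_sh used in the proof; the segment power P_{11} and split α are arbitrary illustrative values) and verifies that the empirical average power matches P_1.

```python
import math, random

random.seed(0)
N, alpha = 1000, 0.4            # block length and time split (alpha*N an integer)
P1, P11 = 2.0, 3.0              # user 1: average power constraint, first-segment power
P12 = (P1 - alpha * P11) / (1 - alpha)   # second-segment power forced by (44a)

n1 = int(alpha * N)

def codeword():
    # Two-segment codeword: first alpha*N samples at power P11, rest at P12.
    return ([random.gauss(0.0, math.sqrt(P11)) for _ in range(n1)] +
            [random.gauss(0.0, math.sqrt(P12)) for _ in range(N - n1)])

codebook = [codeword() for _ in range(200)]
avg_power = sum(x * x for w in codebook for x in w) / (200 * N)
# avg_power is close to alpha*P11 + (1-alpha)*P12 = P1
```

Uniform superposition corresponds to P_{11} = P_{12} = P_1, and on-off superposition to putting all power in one segment; the constraint (44a) pins down the remaining segment power once the other is chosen.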

Theorem 3. For a Gaussian multiple access channel with power constraints P_1 and P_2 for user 1 and user 2 and noise power σ^2, an inner bound for the EER is

EER(R_1, R_2) = { (E_1, E_2) : ∃ 0 ≤ α ≤ 1, αP_{11} + (1−α)P_{12} = P_1, αP_{21} + (1−α)P_{22} = P_2,

E_1 ≤ max{ min{ E_1^{np}(R_1, P_{11}/σ^2, P_{12}/σ^2, α), E_{t3}^{np}(R_1+R_2, P_{11}/σ^2, P_{21}/σ^2, P_{12}/σ^2, P_{22}/σ^2, α) }, E_1^{np}(R_1, P_{11}/(σ^2 + P_{21}), P_{12}/(σ^2 + P_{22}), α) },

E_2 ≤ max{ min{ E_2^{np}(R_2, P_{21}/σ^2, P_{22}/σ^2, α), E_{t3}^{np}(R_1+R_2, P_{11}/σ^2, P_{21}/σ^2, P_{12}/σ^2, P_{22}/σ^2, α) }, E_2^{np}(R_2, P_{21}/(σ^2 + P_{11}), P_{22}/(σ^2 + P_{12}), α) } },  (45)

where E^{np}(·) and E_{t3}^{np}(·) are the exponents defined earlier.

Proof: Following [], we define three types of error events — type 1, type 2, and type 3 — under joint ML decoding. It can be shown using random coding arguments that there exist random codebooks for user 1 and user 2 such that, with joint ML decoding,

P_{et1} ≤ e^{−N E_1^{np}(R_1, P_{11}/σ^2, P_{12}/σ^2, α)}  (46a)
P_{et2} ≤ e^{−N E_2^{np}(R_2, P_{21}/σ^2, P_{22}/σ^2, α)}  (46b)
P_{et3} ≤ e^{−N E_{t3}^{np}(R_1+R_2, P_{11}/σ^2, P_{21}/σ^2, P_{12}/σ^2, P_{22}/σ^2, α)}.  (46c)

The probabilities of error for user 1 and user 2 using joint ML decoding can be upper bounded by

P_{e1} ≤ P_{et1} + P_{et3} ≤ e^{−N E_1^{np}(R_1, P_{11}/σ^2, P_{12}/σ^2, α)} + e^{−N E_{t3}^{np}(R_1+R_2, P_{11}/σ^2, P_{21}/σ^2, P_{12}/σ^2, P_{22}/σ^2, α)} ≤ 2 e^{−N min{E_1^{np}(R_1, P_{11}/σ^2, P_{12}/σ^2, α), E_{t3}^{np}(R_1+R_2, P_{11}/σ^2, P_{21}/σ^2, P_{12}/σ^2, P_{22}/σ^2, α)}}  (47a)

P_{e2} ≤ P_{et2} + P_{et3} ≤ e^{−N E_2^{np}(R_2, P_{21}/σ^2, P_{22}/σ^2, α)} + e^{−N E_{t3}^{np}(R_1+R_2, P_{11}/σ^2, P_{21}/σ^2, P_{12}/σ^2, P_{22}/σ^2, α)} ≤ 2 e^{−N min{E_2^{np}(R_2, P_{21}/σ^2, P_{22}/σ^2, α), E_{t3}^{np}(R_1+R_2, P_{11}/σ^2, P_{21}/σ^2, P_{12}/σ^2, P_{22}/σ^2, α)}}.  (47b)

Thus the achievable error exponents using joint ML decoding are

E_1^{sp,jm} = min{ E_1^{np}(R_1, P_{11}/σ^2, P_{12}/σ^2, α), E_{t3}^{np}(R_1+R_2, P_{11}/σ^2, P_{21}/σ^2, P_{12}/σ^2, P_{22}/σ^2, α) }  (48a)
E_2^{sp,jm} = min{ E_2^{np}(R_2, P_{21}/σ^2, P_{22}/σ^2, α), E_{t3}^{np}(R_1+R_2, P_{11}/σ^2, P_{21}/σ^2, P_{12}/σ^2, P_{22}/σ^2, α) }.  (48b)

So far we have shown that there exists a pair of random codebooks satisfying (48a) and (48b). The proof of the existence of a pair of deterministic codebooks satisfying (48a) and (48b) is the same as that given for the Gaussian broadcast channel and is omitted here.
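The max-min structure of these exponents — joint ML limited by the smaller of the own-type and type-3 exponents, with naive single-user decoding as an alternative — can be sketched numerically. The code below specializes to uniform superposition and substitutes the Gaussian-input random-coding exponent E_r for both E^np and E^np_t3 (the type-3 term is modeled as a sum-rate exponent at the combined SNR, in the spirit of inequality (36)); this is an illustrative simplification, not the paper's exact expressions.

```python
import math

def E0(rho, snr):
    # Gallager-style E0 for a Gaussian input ensemble on an AWGN channel (nats).
    return 0.5 * rho * math.log(1.0 + snr / (1.0 + rho))

def Er(R, snr, grid=2001):
    # Random-coding exponent: max over 0 <= rho <= 1 of E0 - rho*R.
    return max(E0(k / (grid - 1), snr) - (k / (grid - 1)) * R for k in range(grid))

def mac_inner_exponent(R1, R2, P1, P2, s2):
    # Max-min structure of the inner bound under uniform superposition, with
    # Er(.) standing in for E^np and for the type-3 exponent (sum rate at
    # combined SNR) -- an illustrative surrogate for the exact bound.
    def one_user(Ri, Pi, Pj):
        jm = min(Er(Ri, Pi / s2), Er(R1 + R2, (P1 + P2) / s2))  # joint ML
        ns = Er(Ri, Pi / (Pj + s2))                              # naive single-user
        return max(jm, ns)
    return one_user(R1, P1, P2), one_user(R2, P2, P1)

E1, E2 = mac_inner_exponent(0.1, 0.1, 1.0, 1.0, 1.0)
```

At low rates the joint-ML branch dominates; as the sum rate approaches the sum capacity the type-3 term collapses and the naive branch takes over, mirroring the switch discussed for the broadcast channel.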

When naive single-user decoding is utilized, the achievable error exponents are

E_1^{sp,ns} = E_1^{np}(R_1, P_{11}/(σ^2 + P_{21}), P_{12}/(σ^2 + P_{22}), α)  (49a)
E_2^{sp,ns} = E_2^{np}(R_2, P_{21}/(σ^2 + P_{11}), P_{22}/(σ^2 + P_{12}), α).  (49b)

Since both users can choose either joint ML or naive single-user decoding, the maximum of the corresponding error exponents is achievable. This completes the proof.

In Fig. 9(a), we illustrate this inner bound using an example. The dotted curve is the boundary of the achievable region obtained by on-off superposition, which merges with the solid curve (obtained by superposition) at (E_1, E_2) = (0.8, 0.4) and (E_1, E_2) = (0.4, 0.8). The maximum equal error exponent pair achieved by uniform superposition (dashed curve) is (E_1, E_2) = (0.3, 0.3), which is less than the maximum equal error exponent pair (E_1, E_2) = (0.8, 0.8) achieved by superposition (solid curve). In Fig. 9(b), we illustrate the inner bound using an example in which the power constraints for user 1 and user 2 are different.

Fig. 9. EER inner bound using on-off superposition (dotted), uniform superposition (dashed), and superposition (solid) for (a) R_1 = 0.5, R_2 = 0.5 with equal power constraints; (b) unequal power constraints with R_2 = 0.5 and P_1/σ^2 = 4.

Before we continue with the outer bound, we make a remark. Although we have shown that (nonuniform) superposition encoding provides an improvement over uniform superposition in terms of the error exponent region, it can also be shown that the former continues to perform better than the latter when the performance measure is the system error exponent. The system error exponent equals min{E_1, E_2}, which can be derived easily from the EER (the maximum value along the E_1 = E_2 line inside the EER). It is mentioned in [] that

"This means that we can define a region R_α of rate pairs as the convex hull of all pairs (R_1, R_2) for which E_r(R_1, R_2) ≥ α. ... First, the random coding ensemble itself could use different probability assignments Q_1, Q_2 on different letters of the block. ... No examples have been found where this approach enlarges the region R_α defined above." The example of Fig. 9(a) indeed shows that such an improvement is possible.

B. Outer Bound for Error Exponent Region

We now derive an EER outer bound and summarize the result in the following theorem.

Theorem 4. For a Gaussian multiple access channel with power constraints P_1 and P_2 for user 1 and user 2 and noise power σ^2, an outer bound for the EER is

E_1 ≤ E_su(R_1, P_1/σ^2)  (50a)
E_2 ≤ E_su(R_2, P_2/σ^2)  (50b)
min{E_1, E_2} ≤ E_su(R_1 + R_2, (P_1 + P_2)/σ^2),  (50c)

where E_su(·) is defined as in Theorem 2.

Proof: For a Gaussian multiple access channel with power constraints P_1 and P_2 for user 1 and user 2, respectively, the probability of decoding error for user i can always be lower bounded by the probability of decoding error for user i operating over the point-to-point channel Y = X_i + Z with power constraint P_i, for i = 1, 2. This implies that

E_1 ≤ E_su(R_1, P_1/σ^2)  (51a)
E_2 ≤ E_su(R_2, P_2/σ^2).  (51b)

Now, given any two codebooks CB_1 = {C_{1,1}, ..., C_{1,M_1}} and CB_2 = {C_{2,1}, ..., C_{2,M_2}} for the Gaussian multiple access channel satisfying the power constraints P_1 and P_2, we have

(1/(M_1 M_2)) Σ_{i=1}^{M_1} Σ_{j=1}^{M_2} ||C_{1,i} + C_{2,j}||^2 = (1/M_1) Σ_{i=1}^{M_1} ||C_{1,i}||^2 + (1/M_2) Σ_{j=1}^{M_2} ||C_{2,j}||^2 ≤ N(P_1 + P_2),  (52)

where we have assumed that the codebooks CB_1 and CB_2 are zero-mean, i.e.,

Σ_{i=1}^{M_1} C_{1,i} = Σ_{j=1}^{M_2} C_{2,j} = 0,  (53)

since any nonzero-mean codebook can be modified to a zero-mean codebook with the same performance and less power. Let D_{i,j} denote the decision region associated with the codewords C_{1,i} and C_{2,j}. Now construct a codebook CB = {C_1, ..., C_{M_3}} with codewords C_{(i−1)M_2+j} = C_{1,i} + C_{2,j} and decision regions D_{(i−1)M_2+j} = D_{i,j}, where

M_3 = M_1 M_2. Then the probability of system error for the MAC is lower bounded by

P_{e,sys} = (1/(M_1 M_2)) Σ_{i=1}^{M_1} Σ_{j=1}^{M_2} P(Y^N ∉ D_{i,j} | C_{1,i} + C_{2,j})
= (1/M_3) Σ_{k=1}^{M_3} P(Y^N ∉ D_k | C_k)
≥ min_{CB' : (1/M_3) Σ_{k=1}^{M_3} ||C'_k||^2 ≤ N(P_1 + P_2)} (1/M_3) Σ_{k=1}^{M_3} P(Y^N ∉ D'_k | C'_k)
≥ e^{−N E_su(R_1 + R_2, (P_1 + P_2)/σ^2)},  (54)

where CB' = {C'_1, ..., C'_{M_3}} is any codebook with M_3 codewords and D'_k is the optimum decision region associated with the codeword C'_k. This implies that

min{E_1, E_2} ≤ E_su(R_1 + R_2, (P_1 + P_2)/σ^2),  (55)

which completes the proof.

In Fig. 10, we illustrate this outer bound using an example. The solid curve is the achievable EER and the dash-dotted curve is the outer bound for the EER. Although the inner and outer bounds do not coincide, it is clear from Fig. 10 that for the rate pair (R_1, R_2) = (0.5, 0.5) there is a tradeoff between user 1's and user 2's error exponents.

Fig. 10. Error exponent region inner bound (solid) and outer bound (dash-dotted) for R_1 = R_2 = 0.5.

C. Operating Points with Tight Inner and Outer Bounds

From Theorem 3 and Theorem 4, we can show that the EER inner and outer bounds are tight for certain operating points (R_1, R_2). It is known that for a single-user channel the random coding exponent E_r(R, SNR)

and the sphere packing exponent E_sp(R, SNR) are tight for rates R ≥ R_crit, where R_crit is the critical rate [7], [9]. From Theorem 3, the achievable error exponents using uniform superposition and joint ML decoding are

E_1^{us,jm} = min{ E(R_1, P_1/σ^2), E_{t3}(R_1 + R_2, P_1/σ^2, P_2/σ^2) }  (56a)
E_2^{us,jm} = min{ E(R_2, P_2/σ^2), E_{t3}(R_1 + R_2, P_1/σ^2, P_2/σ^2) },  (56b)

where the superscript us,jm denotes uniform superposition and joint ML decoding, and

E(R, SNR) ≜ max{ E_r(R, SNR), E_ex(R, SNR) }  (57a)
E_{t3}(R, SNR_1, SNR_2) ≜ E_{t3}^{np}(R, SNR_1, SNR_2, SNR_1, SNR_2, α).  (57b)

The capacity region of the Gaussian MAC can be divided into four regions R_{12}, R_{13}, R_{23}, and R_3 as follows:

R_{12} ≜ {(R_1, R_2) : E(R_1, P_1/σ^2) ≤ E_{t3}(R_1+R_2, P_1/σ^2, P_2/σ^2), E(R_2, P_2/σ^2) ≤ E_{t3}(R_1+R_2, P_1/σ^2, P_2/σ^2)}  (58a)
R_{13} ≜ {(R_1, R_2) : E(R_1, P_1/σ^2) ≤ E_{t3}(R_1+R_2, P_1/σ^2, P_2/σ^2) ≤ E(R_2, P_2/σ^2)}  (58b)
R_{23} ≜ {(R_1, R_2) : E(R_2, P_2/σ^2) ≤ E_{t3}(R_1+R_2, P_1/σ^2, P_2/σ^2) ≤ E(R_1, P_1/σ^2)}  (58c)
R_3 ≜ {(R_1, R_2) : E_{t3}(R_1+R_2, P_1/σ^2, P_2/σ^2) ≤ E(R_1, P_1/σ^2), E_{t3}(R_1+R_2, P_1/σ^2, P_2/σ^2) ≤ E(R_2, P_2/σ^2)},  (58d)

depending on whether the bound for the type-1, type-2, or type-3 error dominates when using uniform superposition and joint ML decoding (see Fig. 11). In region R_{12}, each user attains the maximal achievable single-user error exponent. In region R_{13}, the first user achieves the maximal single-user error exponent, while the second user's error probability is dominated by the type-3 error. A similar statement holds for region R_{23}. In region R_3, the type-3 error is dominant over both the type-1 and type-2 errors. If (R_1, R_2) ∈ R_{12} with R_1 ≥ R_{1,crit} and R_2 ≥ R_{2,crit}, where R_{1,crit} and R_{2,crit} are the critical rates of a Gaussian single-user channel with SNR equal to P_1/σ^2 and P_2/σ^2, respectively, then

E_1^{us,jm} = E_r(R_1, P_1/σ^2) = E_sp(R_1, P_1/σ^2)  (59a)
E_2^{us,jm} = E_r(R_2, P_2/σ^2) = E_sp(R_2, P_2/σ^2),  (59b)

i.e., the EER inner and outer bounds are tight. In Fig. 11, the dotted region within R_{12} is the rate region with tight EER inner and outer bounds.

VI.
CONJECTURED EER OUTER BOUNDS FOR GAUSSIAN MULTI-USER CHANNELS

The EER outer bounds derived in the previous sections are essentially based on upper bounds on the error exponent of a single-user channel. In particular, the presented outer bounds are derived by transforming the original multi-user system into a single-user system and utilizing single-user error exponent upper bounds. In this section, our objective is to provide improved outer bounds for the EERs that explicitly incorporate the fact that two users are simultaneously communicating with one transmitter or receiver. In particular, we extend the minimum-distance error exponent upper bound [9, Ch. ] from a single-user setting to a multi-user setting. To do this, we first consider a


More information

Half-Duplex Gaussian Relay Networks with Interference Processing Relays

Half-Duplex Gaussian Relay Networks with Interference Processing Relays Half-Duplex Gaussian Relay Networks with Interference Processing Relays Bama Muthuramalingam Srikrishna Bhashyam Andrew Thangaraj Department of Electrical Engineering Indian Institute of Technology Madras

More information

Secret Key Agreement Using Asymmetry in Channel State Knowledge

Secret Key Agreement Using Asymmetry in Channel State Knowledge Secret Key Agreement Using Asymmetry in Channel State Knowledge Ashish Khisti Deutsche Telekom Inc. R&D Lab USA Los Altos, CA, 94040 Email: ashish.khisti@telekom.com Suhas Diggavi LICOS, EFL Lausanne,

More information

ECE Information theory Final (Fall 2008)

ECE Information theory Final (Fall 2008) ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1

More information

Information Theory Meets Game Theory on The Interference Channel

Information Theory Meets Game Theory on The Interference Channel Information Theory Meets Game Theory on The Interference Channel Randall A. Berry Dept. of EECS Northwestern University e-mail: rberry@eecs.northwestern.edu David N. C. Tse Wireless Foundations University

More information

An Outer Bound for the Gaussian. Interference channel with a relay.

An Outer Bound for the Gaussian. Interference channel with a relay. An Outer Bound for the Gaussian Interference Channel with a Relay Ivana Marić Stanford University Stanford, CA ivanam@wsl.stanford.edu Ron Dabora Ben-Gurion University Be er-sheva, Israel ron@ee.bgu.ac.il

More information

Lecture 10: Broadcast Channel and Superposition Coding

Lecture 10: Broadcast Channel and Superposition Coding Lecture 10: Broadcast Channel and Superposition Coding Scribed by: Zhe Yao 1 Broadcast channel M 0M 1M P{y 1 y x} M M 01 1 M M 0 The capacity of the broadcast channel depends only on the marginal conditional

More information

Multicoding Schemes for Interference Channels

Multicoding Schemes for Interference Channels Multicoding Schemes for Interference Channels 1 Ritesh Kolte, Ayfer Özgür, Haim Permuter Abstract arxiv:1502.04273v1 [cs.it] 15 Feb 2015 The best known inner bound for the 2-user discrete memoryless interference

More information

Degrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages

Degrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages Degrees of Freedom Region of the Gaussian MIMO Broadcast hannel with ommon and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and omputer Engineering University of Maryland, ollege

More information

arxiv: v1 [cs.it] 5 Feb 2016

arxiv: v1 [cs.it] 5 Feb 2016 An Achievable Rate-Distortion Region for Multiple Descriptions Source Coding Based on Coset Codes Farhad Shirani and S. Sandeep Pradhan Dept. of Electrical Engineering and Computer Science Univ. of Michigan,

More information

Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel

Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel Stefano Rini, Ernest Kurniawan and Andrea Goldsmith Technische Universität München, Munich, Germany, Stanford University,

More information

On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation

On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation Mehdi Ashraphijuo, Vaneet Aggarwal and Xiaodong Wang 1 arxiv:1308.3310v1 [cs.it] 15 Aug 2013

More information

On the Throughput, Capacity and Stability Regions of Random Multiple Access over Standard Multi-Packet Reception Channels

On the Throughput, Capacity and Stability Regions of Random Multiple Access over Standard Multi-Packet Reception Channels On the Throughput, Capacity and Stability Regions of Random Multiple Access over Standard Multi-Packet Reception Channels Jie Luo, Anthony Ephremides ECE Dept. Univ. of Maryland College Park, MD 20742

More information

Information Dimension

Information Dimension Information Dimension Mina Karzand Massachusetts Institute of Technology November 16, 2011 1 / 26 2 / 26 Let X would be a real-valued random variable. For m N, the m point uniform quantized version of

More information

(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute

(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute ENEE 739C: Advanced Topics in Signal Processing: Coding Theory Instructor: Alexander Barg Lecture 6 (draft; 9/6/03. Error exponents for Discrete Memoryless Channels http://www.enee.umd.edu/ abarg/enee739c/course.html

More information

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission

More information

Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function

Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function Dinesh Krithivasan and S. Sandeep Pradhan Department of Electrical Engineering and Computer Science,

More information

A Comparison of Superposition Coding Schemes

A Comparison of Superposition Coding Schemes A Comparison of Superposition Coding Schemes Lele Wang, Eren Şaşoğlu, Bernd Bandemer, and Young-Han Kim Department of Electrical and Computer Engineering University of California, San Diego La Jolla, CA

More information

LECTURE 10. Last time: Lecture outline

LECTURE 10. Last time: Lecture outline LECTURE 10 Joint AEP Coding Theorem Last time: Error Exponents Lecture outline Strong Coding Theorem Reading: Gallager, Chapter 5. Review Joint AEP A ( ɛ n) (X) A ( ɛ n) (Y ) vs. A ( ɛ n) (X, Y ) 2 nh(x)

More information

The Capacity of the Semi-Deterministic Cognitive Interference Channel and its Application to Constant Gap Results for the Gaussian Channel

The Capacity of the Semi-Deterministic Cognitive Interference Channel and its Application to Constant Gap Results for the Gaussian Channel The Capacity of the Semi-Deterministic Cognitive Interference Channel and its Application to Constant Gap Results for the Gaussian Channel Stefano Rini, Daniela Tuninetti, and Natasha Devroye Department

More information

Exact Random Coding Error Exponents of Optimal Bin Index Decoding

Exact Random Coding Error Exponents of Optimal Bin Index Decoding Exact Random Coding Error Exponents of Optimal Bin Index Decoding Neri Merhav Department of Electrical Engineering Technion - Israel Institute of Technology Technion City, Haifa 32000, ISRAEL E mail: merhav@ee.technion.ac.il

More information

Approaching Blokh-Zyablov Error Exponent with Linear-Time Encodable/Decodable Codes

Approaching Blokh-Zyablov Error Exponent with Linear-Time Encodable/Decodable Codes Approaching Blokh-Zyablov Error Exponent with Linear-Time Encodable/Decodable Codes 1 Zheng Wang, Student Member, IEEE, Jie Luo, Member, IEEE arxiv:0808.3756v1 [cs.it] 27 Aug 2008 Abstract We show that

More information

A Novel Asynchronous Communication Paradigm: Detection, Isolation, and Coding

A Novel Asynchronous Communication Paradigm: Detection, Isolation, and Coding A Novel Asynchronous Communication Paradigm: Detection, Isolation, and Coding The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation

More information

National University of Singapore Department of Electrical & Computer Engineering. Examination for

National University of Singapore Department of Electrical & Computer Engineering. Examination for National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:

More information

The Capacity of Finite Abelian Group Codes Over Symmetric Memoryless Channels Giacomo Como and Fabio Fagnani

The Capacity of Finite Abelian Group Codes Over Symmetric Memoryless Channels Giacomo Como and Fabio Fagnani IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 5, MAY 2009 2037 The Capacity of Finite Abelian Group Codes Over Symmetric Memoryless Channels Giacomo Como and Fabio Fagnani Abstract The capacity

More information

Analysis of a Randomized Local Search Algorithm for LDPCC Decoding Problem

Analysis of a Randomized Local Search Algorithm for LDPCC Decoding Problem Analysis of a Randomized Local Search Algorithm for LDPCC Decoding Problem Osamu Watanabe, Takeshi Sawai, and Hayato Takahashi Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology

More information

Intermittent Communication

Intermittent Communication Intermittent Communication Mostafa Khoshnevisan, Student Member, IEEE, and J. Nicholas Laneman, Senior Member, IEEE arxiv:32.42v2 [cs.it] 7 Mar 207 Abstract We formulate a model for intermittent communication

More information

Two Applications of the Gaussian Poincaré Inequality in the Shannon Theory

Two Applications of the Gaussian Poincaré Inequality in the Shannon Theory Two Applications of the Gaussian Poincaré Inequality in the Shannon Theory Vincent Y. F. Tan (Joint work with Silas L. Fong) National University of Singapore (NUS) 2016 International Zurich Seminar on

More information

On the complexity of maximizing the minimum Shannon capacity in wireless networks by joint channel assignment and power allocation

On the complexity of maximizing the minimum Shannon capacity in wireless networks by joint channel assignment and power allocation On the complexity of maximizing the minimum Shannon capacity in wireless networks by joint channel assignment and power allocation Mikael Fallgren Royal Institute of Technology December, 2009 Abstract

More information

Efficient Use of Joint Source-Destination Cooperation in the Gaussian Multiple Access Channel

Efficient Use of Joint Source-Destination Cooperation in the Gaussian Multiple Access Channel Efficient Use of Joint Source-Destination Cooperation in the Gaussian Multiple Access Channel Ahmad Abu Al Haija ECE Department, McGill University, Montreal, QC, Canada Email: ahmad.abualhaija@mail.mcgill.ca

More information

Lecture 4 Channel Coding

Lecture 4 Channel Coding Capacity and the Weak Converse Lecture 4 Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 15, 2014 1 / 16 I-Hsiang Wang NIT Lecture 4 Capacity

More information

Feedback Capacity of the Gaussian Interference Channel to Within Bits: the Symmetric Case

Feedback Capacity of the Gaussian Interference Channel to Within Bits: the Symmetric Case 1 arxiv:0901.3580v1 [cs.it] 23 Jan 2009 Feedback Capacity of the Gaussian Interference Channel to Within 1.7075 Bits: the Symmetric Case Changho Suh and David Tse Wireless Foundations in the Department

More information

Capacity Bounds for. the Gaussian Interference Channel

Capacity Bounds for. the Gaussian Interference Channel Capacity Bounds for the Gaussian Interference Channel Abolfazl S. Motahari, Student Member, IEEE, and Amir K. Khandani, Member, IEEE Coding & Signal Transmission Laboratory www.cst.uwaterloo.ca {abolfazl,khandani}@cst.uwaterloo.ca

More information

Information Theory for Wireless Communications. Lecture 10 Discrete Memoryless Multiple Access Channel (DM-MAC): The Converse Theorem

Information Theory for Wireless Communications. Lecture 10 Discrete Memoryless Multiple Access Channel (DM-MAC): The Converse Theorem Information Theory for Wireless Communications. Lecture 0 Discrete Memoryless Multiple Access Channel (DM-MAC: The Converse Theorem Instructor: Dr. Saif Khan Mohammed Scribe: Antonios Pitarokoilis I. THE

More information

Classical codes for quantum broadcast channels. arxiv: Ivan Savov and Mark M. Wilde School of Computer Science, McGill University

Classical codes for quantum broadcast channels. arxiv: Ivan Savov and Mark M. Wilde School of Computer Science, McGill University Classical codes for quantum broadcast channels arxiv:1111.3645 Ivan Savov and Mark M. Wilde School of Computer Science, McGill University International Symposium on Information Theory, Boston, USA July

More information

Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback

Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. XX, NO. Y, MONTH 204 Concatenated Coding Using Linear Schemes for Gaussian Broadcast Channels with Noisy Channel Output Feedback Ziad Ahmad, Student Member, IEEE,

More information

EE 4TM4: Digital Communications II. Channel Capacity

EE 4TM4: Digital Communications II. Channel Capacity EE 4TM4: Digital Communications II 1 Channel Capacity I. CHANNEL CODING THEOREM Definition 1: A rater is said to be achievable if there exists a sequence of(2 nr,n) codes such thatlim n P (n) e (C) = 0.

More information

Generalized Writing on Dirty Paper

Generalized Writing on Dirty Paper Generalized Writing on Dirty Paper Aaron S. Cohen acohen@mit.edu MIT, 36-689 77 Massachusetts Ave. Cambridge, MA 02139-4307 Amos Lapidoth lapidoth@isi.ee.ethz.ch ETF E107 ETH-Zentrum CH-8092 Zürich, Switzerland

More information

Capacity of AWGN channels

Capacity of AWGN channels Chapter 3 Capacity of AWGN channels In this chapter we prove that the capacity of an AWGN channel with bandwidth W and signal-tonoise ratio SNR is W log 2 (1+SNR) bits per second (b/s). The proof that

More information

820 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 2, FEBRUARY Stefano Rini, Daniela Tuninetti, and Natasha Devroye

820 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 2, FEBRUARY Stefano Rini, Daniela Tuninetti, and Natasha Devroye 820 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 2, FEBRUARY 2012 Inner and Outer Bounds for the Gaussian Cognitive Interference Channel and New Capacity Results Stefano Rini, Daniela Tuninetti,

More information

Tightened Upper Bounds on the ML Decoding Error Probability of Binary Linear Block Codes and Applications

Tightened Upper Bounds on the ML Decoding Error Probability of Binary Linear Block Codes and Applications on the ML Decoding Error Probability of Binary Linear Block Codes and Moshe Twitto Department of Electrical Engineering Technion-Israel Institute of Technology Haifa 32000, Israel Joint work with Igal

More information

Source-Channel Coding Theorems for the Multiple-Access Relay Channel

Source-Channel Coding Theorems for the Multiple-Access Relay Channel Source-Channel Coding Theorems for the Multiple-Access Relay Channel Yonathan Murin, Ron Dabora, and Deniz Gündüz Abstract We study reliable transmission of arbitrarily correlated sources over multiple-access

More information

An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel

An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel Forty-Ninth Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 8-3, An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel Invited Paper Bernd Bandemer and

More information

Lecture 9: Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1

Lecture 9: Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1 : Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1 Rayleigh Friday, May 25, 2018 09:00-11:30, Kansliet 1 Textbook: D. Tse and P. Viswanath, Fundamentals of Wireless

More information

Arimoto Channel Coding Converse and Rényi Divergence

Arimoto Channel Coding Converse and Rényi Divergence Arimoto Channel Coding Converse and Rényi Divergence Yury Polyanskiy and Sergio Verdú Abstract Arimoto proved a non-asymptotic upper bound on the probability of successful decoding achievable by any code

More information

Bounds on Mutual Information for Simple Codes Using Information Combining

Bounds on Mutual Information for Simple Codes Using Information Combining ACCEPTED FOR PUBLICATION IN ANNALS OF TELECOMM., SPECIAL ISSUE 3RD INT. SYMP. TURBO CODES, 003. FINAL VERSION, AUGUST 004. Bounds on Mutual Information for Simple Codes Using Information Combining Ingmar

More information

Approximate Capacity of Fast Fading Interference Channels with no CSIT

Approximate Capacity of Fast Fading Interference Channels with no CSIT Approximate Capacity of Fast Fading Interference Channels with no CSIT Joyson Sebastian, Can Karakus, Suhas Diggavi Abstract We develop a characterization of fading models, which assigns a number called

More information

Approximately achieving Gaussian relay. network capacity with lattice-based QMF codes

Approximately achieving Gaussian relay. network capacity with lattice-based QMF codes Approximately achieving Gaussian relay 1 network capacity with lattice-based QMF codes Ayfer Özgür and Suhas Diggavi Abstract In [1], a new relaying strategy, quantize-map-and-forward QMF scheme, has been

More information

ELEC546 Review of Information Theory

ELEC546 Review of Information Theory ELEC546 Review of Information Theory Vincent Lau 1/1/004 1 Review of Information Theory Entropy: Measure of uncertainty of a random variable X. The entropy of X, H(X), is given by: If X is a discrete random

More information

ON BEAMFORMING WITH FINITE RATE FEEDBACK IN MULTIPLE ANTENNA SYSTEMS

ON BEAMFORMING WITH FINITE RATE FEEDBACK IN MULTIPLE ANTENNA SYSTEMS ON BEAMFORMING WITH FINITE RATE FEEDBACK IN MULTIPLE ANTENNA SYSTEMS KRISHNA KIRAN MUKKAVILLI ASHUTOSH SABHARWAL ELZA ERKIP BEHNAAM AAZHANG Abstract In this paper, we study a multiple antenna system where

More information

Simultaneous Nonunique Decoding Is Rate-Optimal

Simultaneous Nonunique Decoding Is Rate-Optimal Fiftieth Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 1-5, 2012 Simultaneous Nonunique Decoding Is Rate-Optimal Bernd Bandemer University of California, San Diego La Jolla, CA

More information

Training-Based Schemes are Suboptimal for High Rate Asynchronous Communication

Training-Based Schemes are Suboptimal for High Rate Asynchronous Communication Training-Based Schemes are Suboptimal for High Rate Asynchronous Communication The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation

More information

Lecture 6 I. CHANNEL CODING. X n (m) P Y X

Lecture 6 I. CHANNEL CODING. X n (m) P Y X 6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder

More information

An Alternative Proof for the Capacity Region of the Degraded Gaussian MIMO Broadcast Channel

An Alternative Proof for the Capacity Region of the Degraded Gaussian MIMO Broadcast Channel IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 4, APRIL 2012 2427 An Alternative Proof for the Capacity Region of the Degraded Gaussian MIMO Broadcast Channel Ersen Ekrem, Student Member, IEEE,

More information

On the Duality between Multiple-Access Codes and Computation Codes

On the Duality between Multiple-Access Codes and Computation Codes On the Duality between Multiple-Access Codes and Computation Codes Jingge Zhu University of California, Berkeley jingge.zhu@berkeley.edu Sung Hoon Lim KIOST shlim@kiost.ac.kr Michael Gastpar EPFL michael.gastpar@epfl.ch

More information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information 204 IEEE International Symposium on Information Theory Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information Omur Ozel, Kaya Tutuncuoglu 2, Sennur Ulukus, and Aylin Yener

More information