Bounds and Protocols for a Two-way Multiple Node Channel

Debashis Dash

Contents

List of Figures
Acknowledgements
Abstract
1 Introduction
2 Preliminaries
  2.1 Superposition Coding
  2.2 Block Markov Encoding
  2.3 Two-Way Channel
  2.4 Auxiliary Random Variables
  2.5 Markov Chains
3 Main Results
  3.1 Notation
  3.2 Channel Models
    3.2.1 Full-duplex Channel
    3.2.2 Half-duplex Channel
  3.3 Definitions
  3.4 System Model
  3.5 Inner Bound
  3.6 Outer Bound
4 Inner Bound
  4.1 Code Generation
  4.2 Encoding Scheme
  4.3 Decoding Scheme
  4.4 Probability of Error
  4.5 Half-duplex Channel
    4.5.1 Channel
    4.5.2 Encoding Scheme
    4.5.3 Decoding Scheme
    4.5.4 Probability of Error
5 Outer Bound
  5.1 Outer Bound for $R_3$
  5.2 Outer Bound for $R_1$ and $R_2$
  5.3 Outer Bound for $R_1 + R_2$
6 Special Cases
  6.1 Gaussian Case
    6.1.1 Achievable Rate Region
    6.1.2 Symmetric Case
    6.1.3 Asymmetric Case
  6.2 Half-duplex MAC with Feedback
    6.2.1 Reduction to the Carleial Rate Region
    6.2.2 Reduction to the Cover-Leung Rate Region
7 Conclusions
Bibliography
Index

List of Figures

1.1 Three user hidden node topology
1.2 (a) Data in uplink and only feedback in the downlink; (b) no feedback (inner bound), genie-aided perfect feedback (outer bound), and MAC with finite feedback (two inner regions) with different amounts of feedback power
2.1 TDMA: users orthogonally separated in time
2.2 FDMA: users orthogonally separated in frequency
2.3 Superposition coding: codeword occupies the whole bandwidth and time
2.4 Markov chain formed by the channel blocks
2.5 Shannon's two-way channel
2.6 Half-duplex two-way channel
3.1 Feedback based coding in multiuser two-way channels. For clarity of representation, the broadcast channel is $p(y_1, y_2 \mid x) = p(y_1 \mid x)\, p(y_2 \mid x)$
4.1 Half-duplex two-way network: two states in the system
5.1 Markov chain model of the multiuser feedback scenario
6.1 (a) Uplink and (b) downlink for the half-duplex Gaussian channel
6.2 Power distribution for Gaussian MAC-MC with half-duplex links
6.3 Rate region without feedback, $R_{nf}$
6.4 The 3-d rate region for the MAC + BC half-duplex two-way channel
6.5 MAC sum-rate versus BC common rate with and without feedback. User powers are $P_1 = P_2 = 50$, gray node power $P = 50, 100$, $N = 1$ and $N_1 = N_2 = $ …
6.6 AWGN with pathloss: near-far situation
6.7 Percentage gain in the rates of the strong user (a) and weak user (b)
6.8 Behaviour of Gaussian capacity at high and low SNRs
6.9 Gaussian case with asymmetric power constraints
6.10 No feedback (inner bound), genie-aided perfect feedback (outer bound), and MAC with finite feedback (two inner regions) with different amounts of feedback power

Acknowledgements

ABSTRACT

Bounds and Protocols for a Two-way Multiple Node Channel

by Debashis Dash

Feedback is known to enlarge the capacity region of multiuser channels. However, typical analyses do not account for all the resources that feedback consumes. We show that feedback protocols lose their rate gains when analyzed with such resource accounting. Moreover, in typical multiuser systems without feedback, uplink and downlink code designs are decoupled and independent of each other. By jointly designing the uplink and the downlink system, we show that feedback can partially recover the rate gains, but only if the traffic is two-way. We find inner and outer bounds for a three-node system and show that our scheme can be used for user cooperation in hidden node topologies. In near-far situations, the weak user gains appreciably from such cooperation. We also show that the well-known inner bounds for multiuser channels with feedback can be derived as special cases of our generalized inner bound.

Chapter 1 Introduction

"The information theory of channels with feedback poses challenging problems which often have unexpected solutions." (G. Kramer, Ph.D. Thesis)

Feedback is known to improve the capacity of multiuser channels and, in some cases, to reduce computational complexity. It is interesting to note that most practical feedback protocols are based on genie-aided theoretical analyses, which either assume a noiseless feedback link or make some other practically difficult assumption such as full-duplex radios [6, 12, 10, 4, 32, 23]. Hence, it becomes necessary to impose practical constraints and see whether the existing feedback protocols still give us gains. We show in this thesis that such a practical analysis does indeed give a somewhat unexpected solution.

[Figure 1.1: Three user hidden node topology]

We focus on the three-node network shown in Figure 1.1 and show that if feedback has to share resources with the forward link in conventional multiple access,

the receiver (User 3) has to use an exponentially large power to offset the time loss. In essence, feedback is practically useless to implement in half-duplex networks, especially when there is no downlink data. To get an idea, consider Figure 1.2, which indicates that in order to do better than the well-known MAC rate region, even if the downlink user employs enormous power it cannot get close to the promised noiseless upper bound (more details in Chapter 6). But we claim that this is merely because we are not using the downlink channel optimally when we reserve it solely for feedback. In this thesis, we show that the correct way to analyse a practical feedback scheme is by using a two-way channel model.

[Figure 1.2: (a) Data in uplink and only feedback in the downlink; (b) no feedback (inner bound), genie-aided perfect feedback (outer bound), and MAC with finite feedback (two inner regions) with different amounts of feedback power.]

The two-way channel, first proposed and studied by Shannon in 1961 [29], is a natural model for many communication scenarios in which all parties may have data for each other. Shannon proposed both inner and outer bounds for the two-node two-way channel. Further, he also provided cases where the bounds match (deterministic channels like the modulo-2 adding channel, and non-deterministic channels like the binary erasure noiseless two-way channel and the push-to-talk channel), and where they do not match (e.g. the well-studied example of the binary multiplying channel [25, 26, 31, 21, 27, 28]).

Though comparatively less widely studied than the one-way channel model [30], both the inner and outer bounds were subsequently improved [15, 33, 16]. An interesting point to note is that while feedback is not needed to achieve the capacity of the two-way Gaussian channel, we show that it is beneficial in multiple-node networks like that in Figure 3.1. The gain can be attributed to cooperative beamforming by the two nodes in uplink communication.

Perhaps because two-way channels themselves have received limited attention, their multi-node extensions have received even less. Recent work on two-way networks can be attributed to [16, 19], where the authors study both inner and outer bounds for arbitrary network topologies with two-way networks as a special case. The elegant formulation using code trees [16] clearly demonstrates the complexity of incorporating feedback information in code construction for generalized networks. However, the inner and outer bounds derived in [16] are not computable in many cases of interest.

Our key contribution is an inner bound giving a larger rate region for the multiuser two-way channel. The inner bound is derived for both full-duplex and half-duplex channels. Our proposed coding strategy is inspired by the two-way coding scheme of Han [15] and generalizes our recent results in [9]. In fact, we show that the achievable rate region presented in [8] is a special case of the new coding structure.

The topology considered in this thesis is inspired by two applications. First, it resembles the hidden node topology commonly considered in the design of contention resolution protocols, where the two senders do not have a channel between them. Due to the lack of a direct link, cooperative coding is not feasible. However, since the receivers are also senders, feedback based on the received signals can be used to induce cooperation between the uplink senders (Users 1 and 2). To some extent, this conclusion was also independently derived for multiple access channels with noisy feedback [14]. Our results show that it holds for both full- and half-duplex networks, with and without downlink data (rate $R_3$).

Second, for cellular networks, the analysis shows that there is a sum-rate gain in jointly designing uplink and downlink communication. While current networks have decoupled uplink and downlink systems, a joint design can increase the system data rates.

We note that the usual min-cut max-flow bound [5, Chapter 14] does not apply to two-way networks. An extension of the bound to full-duplex two-way networks was considered in [18]. An improved outer bound was found for the conventional AWGN multiple access channel (where the third link does not have any data of its own to send) in [14]. In this thesis, we derive the outer bound from first principles, which demonstrates the challenge of deriving such a bound. The two-way channel implicitly allows sending feedback about the signal received in one mode (say MAC) while transmitting in the second mode (say BC); this leads to an infinite Markov chain (as shown in Figure 5.1). Our proof uses the techniques proposed in [33], where the authors studied the conventional two-node two-way network. As proposed by Shannon [29], the outer bound typically uses an arbitrary input distribution, whereas the inner bounds are obtained with independent inputs. Following the line of reasoning of [29, 33], our proposed outer bounds too use non-independent inputs to reflect how feedback can induce correlation in the inputs.

In a typical cellular multiuser network, it is common to have data both in the uplink and the downlink. For such scenarios, it is not clear whether it is better to use disjoint design criteria for the uplink and downlink or to design them together. While there has been a lot of work on the individual design of multiple access channels [20, 1, 12, 7, 23, 4, 32, 17] and broadcast channels [6, 13, 2, 11, 10, 22], there is no work on joint encoding of the two channels. Our analysis of feedback protocols using two-way channel models also answers this question: for half-duplex channels with data in both directions of the two-way link, it is better to design the uplink and downlink jointly.

The rest of the thesis is organized as follows. In Chapter 2, we discuss some basic principles which will be used frequently in the later chapters. In Chapter 3 we discuss the channel model, assumptions, and nomenclature, and give the main results of the work. Chapter 4 contains the proof of the inner bound, where we derive special encoding and decoding functions to find the half-duplex special case of the inner bound. Chapter 5 contains the proof of the outer bound, and Chapter 6 presents some interesting Gaussian half-duplex cases. There we also consider a pathloss model which simulates a near-far situation and show that such cooperation through feedback is more useful for the weak user. For the case when both uplink transmitters are equidistant from the receiver, we consider symmetric and asymmetric power constraints. In the last part of that chapter we show that some previously known results are special cases of our rate region of Chapter 4. We conclude our findings in Chapter 7.

Chapter 2 Preliminaries

In this chapter we review some basic information theoretic ideas, which will be used throughout the thesis to derive our inner and outer bounds. We start with a common encoding method used in multiuser communication systems, known as superposition coding.

2.1 Superposition Coding

The simplest way to separate users (or messages) when the resources are constrained is orthogonal allocation. Users (or messages) can be divided in time (TDMA), in frequency (FDMA), or both. As shown in Figure 2.1, the messages $W_1$ and $W_2$ are independently mapped into channel inputs $X_1$ and $X_2$ respectively, and they are sent over the channel in orthogonal time bands. The situation for FDMA, shown in Figure 2.2, is similar. Due to its orthogonal nature, the decoding scheme is very simple. However, this is not an optimal scheme if one can allow a more complicated multiuser decoding scheme; a quantitative comparison follows the figures below.

[Figure 2.1: TDMA: users orthogonally separated in time]

[Figure 2.2: FDMA: users orthogonally separated in frequency]
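
To make the comparison concrete, the following minimal sketch (an illustrative addition, not from the thesis) uses only the standard AWGN capacity formula $C(x) = \frac{1}{2}\log_2(1+x)$ and hypothetical powers to contrast a constant-power TDMA split with superposition coding plus successive decoding on a two-user Gaussian MAC.

```python
import math

def C(snr):
    """AWGN capacity in bits per channel use: 0.5*log2(1+SNR)."""
    return 0.5 * math.log2(1 + snr)

# Two users with powers P1, P2 and receiver noise N (illustrative values).
P1, P2, N = 10.0, 10.0, 1.0

# TDMA with a time split alpha: each user transmits only part of the time
# (simple constant-power variant).
alpha = 0.5
R1_tdma = alpha * C(P1 / N)
R2_tdma = (1 - alpha) * C(P2 / N)

# Superposition with successive decoding: decode user 2 first, treating
# user 1 as noise, subtract it, then decode user 1 interference-free.
R2_sic = C(P2 / (P1 + N))
R1_sic = C(P1 / N)

print(f"TDMA:          R1={R1_tdma:.3f}, R2={R2_tdma:.3f}, sum={R1_tdma + R2_tdma:.3f}")
print(f"Superposition: R1={R1_sic:.3f}, R2={R2_sic:.3f}, sum={R1_sic + R2_sic:.3f}")
# The superposition sum rate equals C((P1+P2)/N), the MAC sum capacity,
# and strictly exceeds the constant-power TDMA sum rate.
```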

Superposition coding is known to perform better than time and frequency (orthogonal) division of resources. As shown in Figure 2.3, the channel input $X$ is now a function of both $W_1$ and $W_2$: the cloud centers denote the message of the first user, and the alphabet points around each cloud center encode the message of the second user, forming a larger code $X$ which can take up the whole time-bandwidth space of the channel. But due to this structure it is more complicated to decode than any orthogonal scheme.

Superposition coding was first used in [6], and was shown to be optimal and to perform better than any orthogonal scheme [3]. The idea behind such a coding scheme is to use the channel simultaneously rather than dividing it between users or messages orthogonally. It was introduced in the context of the broadcast channel, so the authors also called it cooperative broadcasting. For Gaussian channels, the Gaussian codewords can simply be added with a proper power scaling.

For a more general channel, typical sequences are first selected for the input message layers, and then a typical channel sequence is selected which is jointly typical with all the message layer sequences.

Decoding of a superposition code is more complex than for an equivalent orthogonal code. The standard approach is to first decode the message with the best SINR (treating all other messages as noise) and then subtract (for Gaussian channels) its decoded codeword. The next message is then decoded following the same procedure, until the last message has been decoded.

[Figure 2.3: Superposition coding: codeword occupies the whole bandwidth and time]
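
The following toy simulation (an illustrative sketch, not the thesis's scheme) shows the successive decoding procedure just described for a two-layer Gaussian superposition code with BPSK layers: the stronger layer is decoded first, its contribution subtracted, and the weaker layer decoded afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                      # channel uses
P1, P2, N = 1.0, 4.0, 0.1        # illustrative powers / noise variance

# Gaussian superposition: the transmitted word is the power-scaled sum of
# the two users' codewords (BPSK layers for simplicity).
b1 = rng.integers(0, 2, n)
b2 = rng.integers(0, 2, n)
x = np.sqrt(P1) * (2 * b1 - 1) + np.sqrt(P2) * (2 * b2 - 1)
y = x + np.sqrt(N) * rng.standard_normal(n)

# Successive decoding: decode the strong layer first, treating the weak
# layer as noise; subtract it; then decode the weak layer.
b2_hat = (y > 0).astype(int)                     # strong layer
y_clean = y - np.sqrt(P2) * (2 * b2_hat - 1)     # strip decoded layer
b1_hat = (y_clean > 0).astype(int)               # weak layer

print("layer-2 bit error rate:", np.mean(b2_hat != b2))
print("layer-1 bit error rate:", np.mean(b1_hat != b1))
```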

2.2 Block Markov Encoding

The idea of block Markov superposition codes was introduced in [7]. Instead of superimposing multiple users' data, the authors superimposed messages (independent and incremental) from different blocks and used noiseless feedback to attain a larger rate region for a multiple access channel. Superposition is used to encode messages (of more than one block) at different layers, each of which can be decoded once enough information has been acquired about that layer.

[Figure 2.4: Markov chain formed by the channel blocks]

These layers give rise to different auxiliary random variables, which increase the resolution at which we can manage the rate division between the different users depending on the noise variance at the receiver, at the cost of increased decoder complexity. Since the coding is done over blocks as a whole, as compared to schemes which adaptively modify each channel symbol, the blocks form a Markov chain as shown in Figure 2.4. The block structure of the coding scheme makes it possible to strip off the part of the message that could be decoded after each block, which reduces the amount of data to be sent as feedback.

2.3 Two-Way Channel

The two-way channel was first introduced by Shannon in 1961 [29]. A two-way channel model utilizes the previous channel outputs, in addition to the message, to form the channel input. As shown in Figure 2.5, the channel inputs also depend on the channel outputs of the same user in addition to the message. Such a channel model can easily be extended to feedback schemes. The two-way channel initially introduced by Shannon, and those usually considered in the literature, are assumed to be full-duplex, in the sense that the transmitters and receivers can send and receive at the same time. In our system model, in order to impose more practical constraints, we consider only half-duplex transmitters and receivers.

[Figure 2.5: Shannon's two-way channel]

We call our channel model a two-way model due to the fact that it utilizes a function of the channel output (which separates out the feedback part of the message) and the message to form the channel input.

[Figure 2.6: Half-duplex two-way channel]
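
As a minimal illustration of the two-way interface (a toy sketch with hypothetical helper names, using the binary multiplying channel mentioned in Chapter 1), the loop below shows the defining feature of the model: each channel input is a function of the user's message and of that user's own past channel outputs.

```python
import random

def binary_multiplying_channel(x1, x2):
    """Shannon's binary multiplying channel: both users observe y = x1*x2."""
    y = x1 * x2
    return y, y

def make_encoder(message_bits):
    """Toy encoder interface: a real two-way code chooses x_i as a function
    f_i(w_i, y_i^(t-1)); here the adaptation rule is a trivial placeholder."""
    def encode(t, past_outputs):
        # e.g. resend the previous bit if the last output was ambiguous (0)
        if t > 0 and past_outputs and past_outputs[-1] == 0:
            return message_bits[t - 1]
        return message_bits[t]
    return encode

n = 8
w1 = [random.randint(0, 1) for _ in range(n)]
w2 = [random.randint(0, 1) for _ in range(n)]
enc1, enc2 = make_encoder(w1), make_encoder(w2)
y1_hist, y2_hist = [], []
for t in range(n):
    x1 = enc1(t, y1_hist)          # input depends on own past outputs
    x2 = enc2(t, y2_hist)
    y1, y2 = binary_multiplying_channel(x1, x2)
    y1_hist.append(y1)
    y2_hist.append(y2)
print("user 1 observed:", y1_hist)
```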

2.4 Auxiliary Random Variables

Rate regions are usually written in terms of the channel input and output random variables. Superposition schemes, however, typically give rise to random variables other than the channel inputs and outputs, due to the layered code structure. These additional random variables are called auxiliary random variables; they help in encoding the messages into the layered code to form the channel inputs. One of the major concerns with any superposition code, in order to keep the rate region computable, is the cardinality of such auxiliary variables [24].

2.5 Markov Chains

Definition 2.1 Three random variables $X$, $Y$ and $Z$ form a Markov chain (written $X \to Y \to Z$) if the joint density function has the form
$$p(x, y, z) = p(x)\, p(y \mid x)\, p(z \mid y).$$

Some well-known information theoretic properties of Markov chains that are used to prove outer bounds are:

Conditioning on functions of random variables: $H(A \mid B, C) = H(A \mid B, C, D)$ for $D = f(B, C)$. Typical use: $H(Y_n \mid W, Y^{n-1}) = H(Y_n \mid W, Y^{n-1}, X^n)$ for $X^n = f(W, Y^{n-1})$.

Conditioning reduces entropy: $H(A \mid B) \le H(A)$. Typical use: $H(Y_n \mid W, X^n, Y^{n-1}) \le H(Y_n \mid X^n, Y^{n-1})$.

Markov chain property: $H(A \mid B, C) = H(A \mid B)$ for a Markov chain $C \to B \to A$. Typical use: $H(Y_n \mid W, X_{1n}, X_{2n}, Y^{n-1}) = H(Y_n \mid X_{1n}, X_{2n}, Y^{n-1})$.

Memoryless channel property: $H(Y_n \mid X^n, Y^{n-1}) = H(Y_n \mid X_n)$ if $p(y^N \mid x^N) = \prod_i p(y_i \mid x_i)$. Another example: $H(Y_n \mid X_1^n, X_2^n, Y^{n-1}) = H(Y_n \mid X_{1n}, X_{2n})$.

Data processing theorem: the most common form is $I(X; Y) \ge I(X; Z)$ for $X \to Y \to Z$. When deriving outer bounds, the form $H(Z \mid X) \ge H(Z \mid Y)$ is also useful.
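
These identities are applied repeatedly in Chapter 5. As a sanity check, the short script below (an illustrative addition, not part of the thesis) verifies three of them numerically on a toy binary Markov chain $X \to Y \to Z$.

```python
import itertools
import math

# Toy Markov chain X -> Y -> Z on binary alphabets: p(x,y,z) = p(x)p(y|x)p(z|y).
px = {0: 0.3, 1: 0.7}
py_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}    # p(y|x)
pz_y = {0: {0: 0.75, 1: 0.25}, 1: {0: 0.4, 1: 0.6}}  # p(z|y)

p = {(x, y, z): px[x] * py_x[x][y] * pz_y[y][z]
     for x, y, z in itertools.product((0, 1), repeat=3)}

def H(vars_):
    """Joint entropy of a subset of the coordinates (0,1,2) = (X,Y,Z)."""
    marg = {}
    for xyz, pr in p.items():
        key = tuple(xyz[v] for v in vars_)
        marg[key] = marg.get(key, 0.0) + pr
    return -sum(pr * math.log2(pr) for pr in marg.values() if pr > 0)

def Hc(a, b):
    """Conditional entropy H(A|B) = H(A,B) - H(B)."""
    return H(sorted(set(a) | set(b))) - H(b)

# Conditioning reduces entropy: H(Z|X) >= H(Z|X,Y)
print(Hc([2], [0]) >= Hc([2], [0, 1]))                # True
# Markov chain property: H(Z|X,Y) == H(Z|Y)
print(abs(Hc([2], [0, 1]) - Hc([2], [1])) < 1e-12)    # True
# Data processing (entropy form): H(Z|X) >= H(Z|Y)
print(Hc([2], [0]) >= Hc([2], [1]))                   # True
```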

Chapter 3 Main Results

Before presenting our main results, we define our notation, the channel model we follow, and the performance criteria.

3.1 Notation

Throughout this thesis, random variables are represented by capital letters (e.g. $U_1, X_1, W_1$), their alphabet sets by the corresponding script symbols (e.g. $\mathcal{U}_1, \mathcal{X}_1, \mathcal{W}_1$), and particular instances of those random variables by lower-case symbols (e.g. $u_1, x_1, w_1$). In general, $A_{u,c}^b$ will denote random variable $A$ for the $c$-th channel use of block $b$ of user $u$. Boldface variables denote vectors. For example, $\mathbf{X}_1 = (X_1^1, X_1^2, \ldots, X_1^B)$ is the vector input to the channel for User 1 over $B$ blocks. Each of the block inputs is a vector ($n$-tuple) itself, i.e. $X_1^i = (X_{1,1}^i, X_{1,2}^i, \ldots, X_{1,n}^i)$.

3.2 Channel Models

We now describe the channel models considered in this work. We consider two types of channels, depending on certain assumptions on the transmitter-receiver capabilities.

If the transmitters and receivers can send and receive at the same time, they are said to be in full-duplex mode. Usually, by operating in full-duplex mode, we get a gain in the rate region; this gain is similar to the one we saw for superposition coding as compared to TDMA or FDMA in Chapter 2. On the other hand, if the transmitters and receivers can only send or receive, and not do both at the same time, they are said to be in half-duplex mode. The formal mathematical definitions of these channels are given below.

3.2.1 Full-duplex Channel

There are three variables associated with each terminal: $w_i$ represents the message, $x_i$ the channel input, and $y_i$ the channel output at terminal $i$. The two-way network is then described by the channel probability distribution $p(y_1, y_2, y_3 \mid x_1, x_2, x_3)$. In many cases of interest, the two-way channel can be decomposed such that $p(y_1, y_2, y_3 \mid x_1, x_2, x_3) = p(y_3 \mid x_1, x_2, x_3)\, p(y_1, y_2 \mid x_1, x_2, x_3)$. In the case of Gaussian channels, the channel input-output relation can be described as follows:

$$y_1 = x_1 + x_2 + x_3 + n_1, \qquad (3.1)$$
$$y_2 = x_1 + x_2 + x_3 + n_2, \qquad (3.2)$$
$$y_3 = x_1 + x_2 + x_3 + n_3, \qquad (3.3)$$

where $n_1$, $n_2$ and $n_3$ are independent zero-mean Gaussian random variables with variances $N_1$, $N_2$ and $N_3$ respectively. We further assume that each source operates under an average power constraint $P_i$, $i = 1, 2, 3$. Note that in our topology the input $x_2$ does not actually appear in the output $y_1$, nor $x_1$ in $y_2$, since we have assumed that there is no direct channel between Users 1 and 2.

3.2.2 Half-duplex Channel

Since the users are assumed to be half-duplex, the network time-shares between two possible states: multiple access channel (MAC) mode for an $\eta$ fraction of the time, and broadcast channel (BC) mode for the remaining $(1 - \eta)$ fraction, where $0 \le \eta \le 1$ denotes the time-sharing variable. In this case, the channel probability law can be written as

$$p(y_1, y_2, y_3 \mid x_1, x_2, x_3) = \begin{cases} p(y_3 \mid x_1, x_2) & \text{MAC mode} \\ p(y_1, y_2 \mid x_3) & \text{BC mode.} \end{cases} \qquad (3.4)$$

In the case of Gaussian channels, the input-output relation is given by

$$y_1 = x_3 + n_1, \qquad (3.5)$$
$$y_2 = x_3 + n_2, \qquad (3.6)$$
$$y_3 = x_1 + x_2 + n_3, \qquad (3.7)$$

where $n_1$, $n_2$ and $n_3$ are independent zero-mean Gaussian random variables with variances $N_1$, $N_2$ and $N_3$ respectively. We further assume that each source operates under an average power constraint $P_i$, $i = 1, 2, 3$, in its respective transmission mode.
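
A minimal simulation of this half-duplex model (assuming Gaussian inputs and illustrative parameter values) makes the two-state time sharing explicit:

```python
import numpy as np

rng = np.random.default_rng(1)

def mac_phase(x1, x2, N3):
    """MAC state (3.7): User 3 hears the superposition of Users 1 and 2."""
    return x1 + x2 + np.sqrt(N3) * rng.standard_normal(x1.shape)

def bc_phase(x3, N1, N2):
    """BC state (3.5)-(3.6): Users 1 and 2 each hear User 3's signal."""
    y1 = x3 + np.sqrt(N1) * rng.standard_normal(x3.shape)
    y2 = x3 + np.sqrt(N2) * rng.standard_normal(x3.shape)
    return y1, y2

# One block of n channel uses, time-shared with parameter eta (hypothetical
# values; powers P_i and noises N_i follow Section 3.2.2).
n, eta = 1000, 0.6
n_mac = int(eta * n)                    # uplink channel uses
P1, P2, P3 = 10.0, 10.0, 10.0
N1, N2, N3 = 1.0, 1.0, 1.0

x1 = np.sqrt(P1) * rng.standard_normal(n_mac)
x2 = np.sqrt(P2) * rng.standard_normal(n_mac)
y3 = mac_phase(x1, x2, N3)              # eta fraction: MAC mode

x3 = np.sqrt(P3) * rng.standard_normal(n - n_mac)
y1, y2 = bc_phase(x3, N1, N2)           # (1 - eta) fraction: BC mode
print(len(y3), "MAC uses;", len(y1), "BC uses")
```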

3.3 Definitions

We define the alphabet sets from which the input messages $m_1, m_2, m_3$ are chosen ($m_1 \in \mathcal{M}_1$, $m_2 \in \mathcal{M}_2$, $m_3 \in \mathcal{M}_3$) as $\mathcal{M}_1 = \{1, 2, \ldots, M_1\}$, $\mathcal{M}_2 = \{1, 2, \ldots, M_2\}$ and $\mathcal{M}_3 = \{1, 2, \ldots, M_3\}$. $P_{e1}$ and $P_{e2}$ are the probabilities of decoding error at User 3, defined by

$$P_{e1} = \frac{1}{M_1 M_2 M_3} \sum_{m_1=1}^{M_1} \sum_{m_2=1}^{M_2} \sum_{m_3=1}^{M_3} \Pr(\hat{m}_1 \neq m_1 \mid (m_1, m_2, m_3) \text{ was sent}),$$

$$P_{e2} = \frac{1}{M_1 M_2 M_3} \sum_{m_1=1}^{M_1} \sum_{m_2=1}^{M_2} \sum_{m_3=1}^{M_3} \Pr(\hat{m}_2 \neq m_2 \mid (m_1, m_2, m_3) \text{ was sent}).$$

Similarly, $P_{e3}^1$ and $P_{e3}^2$ are the probabilities of decoding error at User 1 and User 2 (in decoding what User 3 sent to both of them), defined by

$$P_{e3}^1 = \frac{1}{M_1 M_2 M_3} \sum_{m_1=1}^{M_1} \sum_{m_2=1}^{M_2} \sum_{m_3=1}^{M_3} \Pr(\hat{m}_3^1 \neq m_3 \mid (m_1, m_2, m_3) \text{ was sent}),$$

$$P_{e3}^2 = \frac{1}{M_1 M_2 M_3} \sum_{m_1=1}^{M_1} \sum_{m_2=1}^{M_2} \sum_{m_3=1}^{M_3} \Pr(\hat{m}_3^2 \neq m_3 \mid (m_1, m_2, m_3) \text{ was sent}).$$

Definition 3.1 (Achievable Region) The rate tuple $(R_1, R_2, R_3)$ is said to be achievable if there exists an $(M_1, M_2, M_3, n)$ code with

$$R_1 \le \frac{\log M_1}{n}, \quad R_2 \le \frac{\log M_2}{n}, \quad R_3 \le \frac{\log M_3}{n}$$

such that $P_{e1} \to 0$, $P_{e2} \to 0$, $P_{e3}^1 \to 0$ and $P_{e3}^2 \to 0$ as $n \to \infty$.

Definition 3.2 (Capacity Region) The closure of the set of all achievable rate tuples is called the capacity region of the two-way multiple access channel and is denoted by $C$.

3.4 System Model

We now define our system model in greater detail. Consider the simplest multiuser case with three users, as shown in Figure 3.1. Users have data for each other; in our case, Users 1 and 2 have messages for User 3, and User 3 has a common message for Users 1 and 2.

[Figure 3.1: Feedback based coding in multiuser two-way channels. For clarity of representation, the broadcast channel is $p(y_1, y_2 \mid x) = p(y_1 \mid x)\, p(y_2 \mid x)$.]

Figure 3.1 shows the coding structure at each user. Since each user has data to send to the other users, each user has an encoder to encode its transmissions and a decoder to decode the received transmissions. In decoupled systems, the encoder and decoder are independent. In our proposed scheme, however, the input to the decoder is also an input to the encoder and vice versa, thereby implicitly allowing feedback from receivers to transmitters.

The coding scheme can be understood to operate in three phases. In the first phase, Users 1 and 2 split their messages, $M_1$ at rate $R_1$ and $M_2$ at rate $R_2$, into two parts each; the rate pair $(R_1, R_2)$ lies outside the capacity region of the multiple access channel with no feedback [5]. Both Users 1 and 2 use superposition coding to encode their two-part messages. At the receiver of User 3, only the inner layer of the superposition code can be decoded, and a portion of the message from each of Users 1 and 2 is left undecoded (similar to [4]). In the second phase, User 3 strips away the portion of information it has decoded from the received signal. In essence, it compresses the received message down to the portion that could not be decoded. It then jointly encodes this codeword with the downlink message $M_3$ at rate $R_3$ and transmits it in the broadcast channel (BC) mode. Since each user knows the portion of its own message that could not be decoded, it uses this side information to decode both $M_3$ and the undecoded message of the other user. Finally, in phase three, Users 1 and 2 each know a part of the undecoded message sent by the other from previous transmissions. They superimpose that on top of their new messages, leading to a three-layer superposition code. The receiver at User 3 uses the previous reception and the current reception to decode the undecoded portion from the last instance and some new information (the inner two layers); the third layer remains undecodable. Thus, the system goes through the second and third phases for $b$ consecutive blocks; a schematic sketch of this pipeline follows.
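
The sketch below is purely schematic bookkeeping of this pipeline (all names and the message representation are hypothetical; the actual encoders and decoders are specified in Chapter 4). It tracks which message parts are available where, and shows the one-block delay of the outer layers:

```python
def run_blocks(msgs1, msgs2, msgs3):
    """Schematic bookkeeping for the three-phase scheme.

    Each uplink message is a pair (inner, outer): the inner part is
    decodable by User 3 immediately; the outer part is decoded one block
    later with the help of User 3's feedback."""
    residual1 = residual2 = None          # outer parts pending at User 3
    delivered_uplink, delivered_downlink = [], []
    for m1, m2, m3 in zip(msgs1, msgs2, msgs3):
        inner1, outer1 = m1               # phases 1/3: split + superimpose
        inner2, outer2 = m2
        # MAC mode: User 3 decodes the inner layers and, using the current
        # block, resolves the residual outer layers from the previous block.
        decoded = [inner1, inner2]
        if residual1 is not None:
            decoded += [residual1, residual2]
        delivered_uplink.append(decoded)
        # BC mode (phase 2): User 3 re-encodes the still-undecoded outer
        # parts (compressed feedback) jointly with its own message m3.
        residual1, residual2 = outer1, outer2
        delivered_downlink.append((m3, (outer1, outer2)))
    return delivered_uplink, delivered_downlink

up, down = run_blocks(
    msgs1=[("a1", "A1"), ("b1", "B1")],
    msgs2=[("a2", "A2"), ("b2", "B2")],
    msgs3=["d0", "d1"],
)
print(up)   # outer parts A1, A2 appear one block late, as in the scheme
```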

We are now ready for the two main results: inner and outer bounds for the half-duplex three-user channel model with hidden node topology.

3.5 Inner Bound

Theorem 3.1 All rate points $(R_1, R_2, R_3) \in R$ which satisfy

$$R_1 \le I(\tilde{U}_1; \tilde{U}_2, X_3, Y_3, \tilde{U}_3, W_3)$$
$$R_2 \le I(\tilde{U}_2; \tilde{U}_1, X_3, Y_3, \tilde{U}_3, W_3)$$
$$R_3 \le \min\big[I(\tilde{U}_3; X_1, Y_1, \tilde{U}_1, W_1),\ I(\tilde{U}_3; X_2, Y_2, \tilde{U}_2, W_2)\big]$$
$$R_1 + R_2 \le I(\tilde{U}_1, \tilde{U}_2; X_3, Y_3, \tilde{U}_3, W_3)$$

are achievable (i.e. $R \subseteq C$), where $X_1, X_2, X_3$ are the channel inputs, $Y_1, Y_2, Y_3$ are the channel outputs, and $U_1, U_2, U_3, \tilde{U}_1, \tilde{U}_2, \tilde{U}_3, W_1, W_2, W_3$ are auxiliary random variables whose joint distribution is of the form

$$P(u_1, u_2, u_3, \tilde{u}_1, \tilde{u}_2, \tilde{u}_3, w_1, w_2, w_3, x_1, x_2, x_3, y_1, y_2, y_3) = P_1(u_1)\, P_2(u_2)\, P_3(u_3)\, P(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3, w_1, w_2, w_3)$$
$$\cdot\, P(x_1 \mid u_1, \tilde{u}_1, w_1)\, P(x_2 \mid u_2, \tilde{u}_2, w_2)\, P(x_3 \mid u_3, \tilde{u}_3, w_3)\, P(y_1, y_2, y_3 \mid x_1, x_2, x_3).$$

We shall prove the inner bound and find its half-duplex special case in Chapter 4. The proof is based on typicality decoding arguments and superposition coding, building on Carleial's inner bound for a full-duplex MAC with feedback [4]. We further investigate the inner bound in Chapter 6 to find a number of interesting special cases.

3.6 Outer Bound

Theorem 3.2 The capacity region $C$ of the three-node half-duplex two-way network is a subset of the following region:

$$C \subseteq \{(R_1, R_2, R_3):$$
$$R_1 \le I(X_1; Y_2 \mid X_2, Z_2, W_3, Q) + I(X_1; Y_3 \mid X_2, Z_2, Z_3, Q)$$
$$R_2 \le I(X_2; Y_1 \mid X_1, Z_1, W_3, Q) + I(X_2; Y_3 \mid X_1, Z_1, Z_3, Q)$$
$$R_3 \le \min\big[I(X_3; Y_1 \mid X_1, Z_1, Q),\ I(X_3; Y_2 \mid X_2, Z_2, Q)\big]$$
$$R_1 + R_2 \le I(X_1, X_2; Y_3 \mid Z_3, Q)\},$$

where $X_1, X_2, X_3, Y_1, Y_2, Y_3, Z_1, Z_2, Z_3, Q$ are random variables whose joint distribution is of the form

$$p(q)\, p(z_1, z_2, z_3 \mid q)\, p(x_1 \mid z_1, q)\, p(x_2 \mid z_2, q)\, p(x_3 \mid z_3, q)\, p(y_3 \mid x_1, x_2, q)\, p(y_1 \mid x_3, q)\, p(y_2 \mid x_3, q),$$

and $Q$ is the time-sharing variable.

We shall prove the outer bound in Chapter 5 and find some special cases, such as its half-duplex extension and the no-feedback case. The proof of the outer bound builds on Han's outer bound for the two-user two-way channel [15]; it is based on the Markov structure of the channel input, output and auxiliary random variables.

Chapter 4 Inner Bound

In this chapter we prove Theorem 3.1. In the last part of the chapter, we also look into the half-duplex special case.

4.1 Code Generation

Generate $M_1 = |\mathcal{M}_1|$ independent identically distributed (i.i.d.) random vectors $\mathbf{U}_1$ according to $P_1(\mathbf{u}_1) = \prod_{i=1}^{n} P_1(u_{1,i})$, and randomly assign each random vector to one of the messages from the input alphabet $\mathcal{M}_1$ of User 1. Similarly, generate $M_2 = |\mathcal{M}_2|$ and $M_3 = |\mathcal{M}_3|$ i.i.d. random vectors $\mathbf{U}_2$ and $\mathbf{U}_3$ according to $P_2(\mathbf{u}_2) = \prod_{i=1}^{n} P_2(u_{2,i})$ and $P_3(\mathbf{u}_3) = \prod_{i=1}^{n} P_3(u_{3,i})$ respectively, and randomly assign each of these vectors to a member of the input alphabets of Users 2 and 3 respectively.
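
A sketch of this random coding step for a finite auxiliary alphabet (illustrative helper names; the distribution $P_1$ and the sizes are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_codebook(num_messages, n, pmf):
    """Draw `num_messages` i.i.d. length-n codewords, each symbol ~ pmf.

    This mirrors the construction above: P(u) = prod_i P(u_i), with one
    random vector assigned to each message index."""
    alphabet = np.arange(len(pmf))
    return rng.choice(alphabet, size=(num_messages, n), p=pmf)

# Hypothetical sizes: rate R over n uses gives M = 2^(nR) messages.
n, R = 12, 0.5
M = 2 ** int(n * R)
P1 = np.array([0.5, 0.5])         # P_1(u_1) on a binary auxiliary alphabet
U1 = generate_codebook(M, n, P1)  # codebook for User 1's auxiliary U_1
print(U1.shape)                   # (64, 12): one row per message
```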

4.2 Encoding Scheme

Suppose that in the $(b-1)$-th block the $n$-tuples $\mathbf{x}_1^{b-1}$, $\mathbf{x}_2^{b-1}$ and $\mathbf{x}_3^{b-1}$ were sent by Users 1, 2 and 3, and that at the end of the $(b-1)$-th block, $\mathbf{y}_1^{b-1}$, $\mathbf{y}_2^{b-1}$ and $\mathbf{y}_3^{b-1}$ were received. In the $b$-th block we define the feedback $\mathbf{w}_1^b$, $\mathbf{w}_2^b$ and $\mathbf{w}_3^b$ as

$$\mathbf{w}_1^b = (\mathbf{x}_1^{b-1}, \mathbf{y}_1^{b-1}), \quad \mathbf{w}_2^b = (\mathbf{x}_2^{b-1}, \mathbf{y}_2^{b-1}), \quad \mathbf{w}_3^b = (\mathbf{x}_3^{b-1}, \mathbf{y}_3^{b-1}),$$

and set the incremental parts as

$$\tilde{\mathbf{u}}_1^b = \mathbf{u}_1^{b-1}(m_1^{b-1}), \quad \tilde{\mathbf{u}}_2^b = \mathbf{u}_2^{b-1}(m_2^{b-1}), \quad \tilde{\mathbf{u}}_3^b = \mathbf{u}_3^{b-1}(m_3^{b-1}).$$

The encoding functions use the channel output and the information about the previous blocks $(1, 2, \ldots, b-1)$ already available, in addition to the new message for the present block, to construct the channel inputs for block $b$:

$$\mathbf{x}_1^b = f_1(\mathbf{u}_1^b(m_1^b), \tilde{\mathbf{u}}_1^b, \mathbf{w}_1^b), \quad \mathbf{x}_2^b = f_2(\mathbf{u}_2^b(m_2^b), \tilde{\mathbf{u}}_2^b, \mathbf{w}_2^b), \quad \mathbf{x}_3^b = f_3(\mathbf{u}_3^b(m_3^b), \tilde{\mathbf{u}}_3^b, \mathbf{w}_3^b).$$

4.3 Decoding Scheme

At the end of block $b$, $\mathbf{x}_1^b$ and $\mathbf{y}_1^b$ are known to User 1, $\mathbf{x}_2^b$ and $\mathbf{y}_2^b$ are known to User 2, and $\mathbf{x}_3^b$ and $\mathbf{y}_3^b$ are known to User 3. The decoding procedures for the three users are as follows. User 1 decodes the message sent by User 3 during block $b$ as $\hat{m}_3^b \in \mathcal{K}$ such that

$$(\mathbf{u}_3^b(\hat{m}_3^b), \mathbf{x}_1^b, \mathbf{y}_1^b, \tilde{\mathbf{u}}_1^b, \mathbf{w}_1^b) \in T_\epsilon(\tilde{U}_3, X_1, Y_1, \tilde{U}_1, W_1). \qquad (4.1)$$

Note that the information about $\mathbf{y}_1^{b-1}$ is hidden in $W_1$, and the incremental information about $m_3^b$ is in $\mathbf{y}_1^b$. Similarly, User 2 decodes the message sent by User 3 during block $b$ as $\hat{m}_3^b \in \mathcal{K}$ such that

$$(\mathbf{u}_3^b(\hat{m}_3^b), \mathbf{x}_2^b, \mathbf{y}_2^b, \tilde{\mathbf{u}}_2^b, \mathbf{w}_2^b) \in T_\epsilon(\tilde{U}_3, X_2, Y_2, \tilde{U}_2, W_2). \qquad (4.2)$$

User 3 decodes the messages sent by Users 1 and 2 during block $b-1$ as $\hat{m}_1^{b-1} \in \mathcal{I}$ and $\hat{m}_2^{b-1} \in \mathcal{J}$ such that

$$(\mathbf{u}_1^{b-1}(\hat{m}_1^{b-1}), \mathbf{u}_2^{b-1}(\hat{m}_2^{b-1}), \mathbf{x}_3^b, \mathbf{y}_3^b, \tilde{\mathbf{u}}_3^b, \mathbf{w}_3^b) \in T_\epsilon(\tilde{U}_1, \tilde{U}_2, X_3, Y_3, \tilde{U}_3, W_3).$$

4.4 Probability of Error

Due to the symmetry of the random code generation, any message tuple sent by the users for a given block yields the same probability of error. So, without loss of generality, we can assume that $(m_1^{b-1}, m_2^{b-1}, m_3^b) = (1, 1, 1)$ was sent. Consider decoding at User 1. For block $b$, define $E_k = \{\hat{m}_3^b = k \mid m_3^b = 1\}$, i.e. the event that User 1 decodes the message sent by User 3 during block $b$ as $k$ when User 3 sent the message $m_3^b = 1$. An error happens if either the correct codeword is not jointly typical with the channel output, or an incorrect codeword is found to be jointly typical. Therefore,

$$P_{e1}^b = \Pr\Big(\bar{E}_1 \cup \bigcup_{k \neq 1} E_k \,\Big|\, (1,1,1)\text{ was sent}\Big) \le P(\bar{E}_1) + \sum_{k \neq 1} P(E_k),$$

where $P$ denotes the conditional probability given that $(1, 1, 1)$ was sent (as defined in Cover and Thomas, p. 394).

By the properties of jointly typical sequences, $P(\bar{E}_1) \to 0$, and we can bound the second term as follows:

$$\begin{aligned}
P(E_k) &= \Pr\big((\mathbf{u}_3^b(k), \mathbf{x}_1^b, \mathbf{y}_1^b, \tilde{\mathbf{u}}_1^b, \mathbf{w}_1^b) \in T_\epsilon(\tilde{U}_3, X_1, Y_1, \tilde{U}_1, W_1) \mid (1,1,1)\text{ was sent}\big)\\
&= \sum_{(\tilde{\mathbf{u}}_3(k), \mathbf{x}_1, \mathbf{y}_1, \tilde{\mathbf{u}}_1, \mathbf{w}_1) \in T_\epsilon} P(\tilde{\mathbf{u}}_3)\, P(\mathbf{x}_1, \mathbf{y}_1, \tilde{\mathbf{u}}_1, \mathbf{w}_1)\\
&\le |T_\epsilon|\; 2^{-n(H(\tilde{U}_3) - \epsilon)}\; 2^{-n(H(X_1, Y_1, \tilde{U}_1, W_1) - \epsilon)}\\
&\le 2^{n(H(\tilde{U}_3, X_1, Y_1, \tilde{U}_1, W_1) + \epsilon)}\; 2^{-n(H(\tilde{U}_3) - \epsilon)}\; 2^{-n(H(X_1, Y_1, \tilde{U}_1, W_1) - \epsilon)}\\
&= 2^{-n(I(\tilde{U}_3; X_1, Y_1, \tilde{U}_1, W_1) - 3\epsilon)}.
\end{aligned}$$

Hence, it follows that

$$P_{e1}^b \le P(\bar{E}_1) + \sum_{k \neq 1} 2^{-n(I(\tilde{U}_3; X_1, Y_1, \tilde{U}_1, W_1) - 3\epsilon)} = P(\bar{E}_1) + (K-1)\, 2^{-n(I(\tilde{U}_3; X_1, Y_1, \tilde{U}_1, W_1) - 3\epsilon)} \le P(\bar{E}_1) + 2^{nR_3}\, 2^{-n(I(\tilde{U}_3; X_1, Y_1, \tilde{U}_1, W_1) - 3\epsilon)}.$$

So, to make the probability of error arbitrarily small, we need

$$R_3 \le I(\tilde{U}_3; X_1, Y_1, \tilde{U}_1, W_1). \qquad (4.3)$$

Following exactly the same steps for User 2, it can be shown that

$$R_3 \le I(\tilde{U}_3; X_2, Y_2, \tilde{U}_2, W_2). \qquad (4.4)$$

Now consider the decoding at User 3. For block $b$, define $E_{ij} = \{(\hat{m}_1^{b-1}, \hat{m}_2^{b-1}) = (i, j) \mid (m_1^{b-1}, m_2^{b-1}) = (1, 1)\}$, i.e. the event that User 3 decodes the messages sent by Users 1 and 2 during block $b-1$ as $i$ and $j$ respectively when $(m_1^{b-1}, m_2^{b-1}) = (1, 1)$ was sent.

An error happens if either the correct codewords are not jointly typical with the channel output, or an incorrect codeword is found to be jointly typical. Therefore,

$$P_{e3}^b = \Pr\Big(\bar{E}_{11} \cup \bigcup_{(i,j) \neq (1,1)} E_{ij} \,\Big|\, (1,1,1)\text{ was sent}\Big) \le P(\bar{E}_{11}) + \sum_{i \neq 1,\, j = 1} P(E_{i1}) + \sum_{i = 1,\, j \neq 1} P(E_{1j}) + \sum_{i \neq 1,\, j \neq 1} P(E_{ij}).$$

By the properties of jointly typical sequences, $P(\bar{E}_{11}) \to 0$, and we can bound the remaining terms as follows:

$$\begin{aligned}
P(E_{i1}) &= \Pr\big((\mathbf{u}_1^{b-1}(i), \mathbf{u}_2^{b-1}(1), \mathbf{x}_3^b, \mathbf{y}_3^b, \tilde{\mathbf{u}}_3^b, \mathbf{w}_3^b) \in T_\epsilon(\tilde{U}_1, \tilde{U}_2, X_3, Y_3, \tilde{U}_3, W_3) \mid (1,1,1)\text{ was sent}\big)\\
&= \sum_{(\tilde{\mathbf{u}}_1, \tilde{\mathbf{u}}_2, \mathbf{x}_3, \mathbf{y}_3, \tilde{\mathbf{u}}_3, \mathbf{w}_3) \in T_\epsilon} P(\tilde{\mathbf{u}}_1)\, P(\tilde{\mathbf{u}}_2, \mathbf{x}_3, \mathbf{y}_3, \tilde{\mathbf{u}}_3, \mathbf{w}_3)\\
&\le 2^{n(H(\tilde{U}_1, \tilde{U}_2, X_3, Y_3, \tilde{U}_3, W_3) + \epsilon)}\; 2^{-n(H(\tilde{U}_1) - \epsilon)}\; 2^{-n(H(\tilde{U}_2, X_3, Y_3, \tilde{U}_3, W_3) - \epsilon)}\\
&= 2^{-n(I(\tilde{U}_1; \tilde{U}_2, X_3, Y_3, \tilde{U}_3, W_3) - 3\epsilon)}.
\end{aligned}$$

Hence, it follows that

$$R_1 \le I(\tilde{U}_1; \tilde{U}_2, X_3, Y_3, \tilde{U}_3, W_3). \qquad (4.5)$$

Following the same steps, we can show that to make the probability of error arbitrarily small,

$$R_2 \le I(\tilde{U}_2; \tilde{U}_1, X_3, Y_3, \tilde{U}_3, W_3), \qquad (4.6)$$
$$R_1 + R_2 \le I(\tilde{U}_1, \tilde{U}_2; X_3, Y_3, \tilde{U}_3, W_3). \qquad (4.7)$$
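
The step from such union bounds to the rate conditions is simple arithmetic: with $2^{nR}$ wrong codewords, each jointly typical with probability at most $2^{-n(I - 3\epsilon)}$, the bound $2^{n(R - I + 3\epsilon)}$ vanishes exactly when $R < I - 3\epsilon$. A quick numeric illustration (hypothetical values):

```python
import math

def union_bound(n, R, I, eps):
    """2^{nR} incorrect codewords, each typical w.p. <= 2^{-n(I - 3*eps)}."""
    return 2.0 ** (n * (R - I + 3 * eps))

I_val, eps = 1.0, 0.01            # hypothetical mutual information value
for R in (0.9, 1.1):              # just below / just above I - 3*eps
    bounds = [union_bound(n, R, I_val, eps) for n in (50, 100, 200)]
    print(R, ["%.2e" % b for b in bounds])
# R = 0.9: the bound decays exponentially in n, so P(error) -> 0.
# R = 1.1: the bound blows up, and this proof technique gives nothing.
```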

4.5 Half-duplex Channel

In this case, the system is time-duplexed between a multiple access (MAC) phase and a multicast (MC) phase (Figure 4.1). During the MAC phase, Users 1 and 2 send their codewords to User 3; we also refer to this phase as the uplink. It takes up an $\eta$ fraction of the total time. During the MC phase, User 3 sends a common codeword (message) to Users 1 and 2; this happens for the remaining $1 - \eta$ fraction of the total time. We call this phase MC to distinguish it from the case when User 3 has independent messages for Users 1 and 2 (which we would call a broadcast, or BC, phase). We also refer to this phase as the downlink.

[Figure 4.1: Half-duplex two-way network: two states in the system]

4.5.1 Channel

Half-duplex two-way channel:

$$\omega(y_1, y_2, y_3 \mid x_1, x_2, x_3) = \omega_{MAC}(y_3 \mid x_1, x_2)\; \omega_{MC}(y_1, y_2 \mid x_3)$$

Test channel:

$$\omega'(y_1, y_2, y_3 \mid u_1, \tilde{u}_1, u_2, \tilde{u}_2, u_3, \tilde{w}_3) = \sum_{x_1, x_2, x_3} \omega_{MAC}(y_3 \mid x_1, x_2)\, \gamma_1(x_1 \mid u_1, \tilde{u}_1, \tilde{u}_2)\, \gamma_2(x_2 \mid u_2, \tilde{u}_1, \tilde{u}_2)\, \omega_{BC}(y_1, y_2 \mid x_3)\, \gamma_3(x_3 \mid u_3, \tilde{w}_3),$$

where

$$p(u_1, u_2, u_3, \tilde{u}_1, \tilde{u}_2, \tilde{w}_3) = p(u_1)\, p(u_2)\, p(u_3)\, p(\tilde{u}_1 \mid u_1, y_1)\, p(\tilde{u}_2 \mid u_2, y_2)\, p(\tilde{w}_3 \mid y_3).$$

(Note the independent new information and the dependent incremental and feedback information.)

4.5.2 Encoding Scheme

We shall denote the alphabets for the channel inputs and outputs by script letters. The coding scheme divides the input into blocks, each of $n$ channel uses. For each block, the channel input is formed, in general, from three parts:

New information, independent of the messages of any previous block (denoted by $u$, with alphabet $\mathcal{U}$);

Feedback information, obtained from the previous channel output (denoted by $\tilde{w}$, with alphabet $\tilde{\mathcal{W}}$);

Incremental information about the previous channel input, obtained using the previous feedback signal (denoted by $\tilde{u}$, with alphabet $\tilde{\mathcal{U}}$).

Let us define the input alphabets and the index sets first.

Input alphabets: $\mathcal{U}_1 = \mathcal{U}_{10} \times \mathcal{U}_{11}$ (codewords of length $n$), $\mathcal{U}_2 = \mathcal{U}_{20} \times \mathcal{U}_{22}$, and $\mathcal{U}_3$.

Index sets (for the input alphabets): $\mathcal{U}_{10} \leftrightarrow \mathcal{I}$ with $|\mathcal{U}_{10}| = I$, $\mathcal{U}_{11} \leftrightarrow \mathcal{J}$ with $|\mathcal{U}_{11}| = J$, $\mathcal{U}_{20} \leftrightarrow \mathcal{K}$ with $|\mathcal{U}_{20}| = K$, $\mathcal{U}_{22} \leftrightarrow \mathcal{L}$ with $|\mathcal{U}_{22}| = L$, and $\mathcal{U}_3 \leftrightarrow \mathcal{M}$ with $|\mathcal{U}_3| = M$.

Incremental alphabets: $\tilde{\mathcal{U}}_1 = \mathcal{U}_{10}$ and $\tilde{\mathcal{U}}_2 = \mathcal{U}_{20}$.

Feedback alphabet: $\tilde{\mathcal{W}}_3$.

So, in general, $\mathbf{x}_i^b = f_i^b(\mathbf{u}_i, \tilde{\mathbf{u}}_i, \tilde{\mathbf{w}}_i)$ for $i = 1, 2, 3$. But note that due to our MAC-MC configuration, Users 1 and 2 do not send any feedback information and User 3 does not send any incremental information. Hence the encoding functions are as follows:

User 1: $\mathbf{x}_1^b = f_1^b(\mathbf{u}_1^b(m_1^b), \tilde{\mathbf{u}}_1^b)$, with $f_1^b : \mathcal{U}_1^n \times \tilde{\mathcal{U}}_1^n \to \mathcal{X}_1^n$;
User 2: $\mathbf{x}_2^b = f_2^b(\mathbf{u}_2^b(m_2^b), \tilde{\mathbf{u}}_2^b)$, with $f_2^b : \mathcal{U}_2^n \times \tilde{\mathcal{U}}_2^n \to \mathcal{X}_2^n$;
User 3: $\mathbf{x}_3^b = f_3^b(\mathbf{u}_3^b(m), \tilde{\mathbf{w}}_3^b)$, with $f_3^b : \mathcal{U}_3^n \times \tilde{\mathcal{W}}_3^n \to \mathcal{X}_3^n$.

At the beginning of block $b$, Users 1 and 2 form the incremental information $\tilde{\mathbf{u}}_1^b$ and $\tilde{\mathbf{u}}_2^b$ based on $(\mathbf{y}_1^{b-1}, \mathbf{u}_1^{b-1})$ and $(\mathbf{y}_2^{b-1}, \mathbf{u}_2^{b-1})$ respectively. They form $\mathbf{x}_1$ and $\mathbf{x}_2$ as above and send them through the channel, using it for an $\eta$ fraction of the time. User 3 receives $\mathbf{y}_3^b$ and forms $\mathbf{x}_3$ as given above. The decoding part is explained in the next section. The incremental information and the feedback information do not assume any decoding; all decoding is lumped into the encoding functions $f_1$, $f_2$ and $f_3$. (This gives rise to the problem of bounding the size of the incremental and feedback alphabets.)

Encoded messages:

$$\mathbf{u}_1^b = (\mathbf{u}_{10}^b(i), \mathbf{u}_{11}^b(j)), \quad \mathbf{u}_1^b \in \mathcal{U}_1,\ \mathbf{u}_{10}^b \in \mathcal{U}_{10},\ \mathbf{u}_{11}^b \in \mathcal{U}_{11},\ i \in \mathcal{I},\ j \in \mathcal{J},$$
$$\mathbf{u}_2^b = (\mathbf{u}_{20}^b(k), \mathbf{u}_{22}^b(l)), \quad \mathbf{u}_2^b \in \mathcal{U}_2,\ \mathbf{u}_{20}^b \in \mathcal{U}_{20},\ \mathbf{u}_{22}^b \in \mathcal{U}_{22},\ k \in \mathcal{K},\ l \in \mathcal{L},$$
$$\mathbf{u}_3^b = \mathbf{u}_3^b(m), \quad \mathbf{u}_3^b \in \mathcal{U}_3,\ m \in \mathcal{M}.$$

Incremental messages (about $\mathbf{u}_1^{b-1}(i')$ and $\mathbf{u}_2^{b-1}(k')$ respectively):

$$\tilde{\mathbf{u}}_{10}^b(i') = (\mathbf{y}_1^{b-1}, \mathbf{u}_{10}^{b-1}(i')), \quad \tilde{\mathbf{u}}_{20}^b(k') = (\mathbf{y}_2^{b-1}, \mathbf{u}_{20}^{b-1}(k')).$$

Feedback message:

$$\tilde{\mathbf{w}}_3^b = \mathbf{y}_3^{b-1}.$$

4.5.3 Decoding Scheme

Let us consider block $b$, $1 \le b \le B$. It is divided into a MAC phase and an MC phase. During the MAC phase of the previous block, User 3 received $\mathbf{y}_3^{b-1}$ and decoded $(i', j', k', l')$; during block $b$, User 3 decodes $(\hat{i}, \hat{j}, \hat{k}, \hat{l})$. Similarly, during the MC phase of the previous block, Users 1 and 2 received $\mathbf{y}_1^{b-1}$ and $\mathbf{y}_2^{b-1}$; User 1 decoded $(m', k')$ and User 2 decoded $(m', i')$. During the current block's MC phase, User 1 decodes $(\hat{m}, \hat{k})$ and User 2 decodes $(\hat{m}, \hat{i})$.

User 1: find $\hat{k} \in \mathcal{K}$ and $\hat{m} \in \mathcal{M}$ such that

$$(\mathbf{u}_3^b(\hat{m}), \mathbf{y}_1^b, \tilde{\mathbf{u}}_{10}^b(i), \tilde{\mathbf{u}}_{20}^b(\hat{k})) \in A_\epsilon^n(U_3, Y_1, \tilde{U}_{10}, \tilde{U}_{20}).$$

User 2: find $\hat{i} \in \mathcal{I}$ and $\hat{m} \in \mathcal{M}$ such that

$$(\mathbf{u}_3^b(\hat{m}), \mathbf{y}_2^b, \tilde{\mathbf{u}}_{10}^b(\hat{i}), \tilde{\mathbf{u}}_{20}^b(k)) \in A_\epsilon^n(U_3, Y_2, \tilde{U}_{10}, \tilde{U}_{20}).$$

User 3: find $\hat{i}' \in \mathcal{I}$, $\hat{j} \in \mathcal{J}$, $\hat{k}' \in \mathcal{K}$ and $\hat{l} \in \mathcal{L}$ such that

$$(\tilde{\mathbf{u}}_{10}^b(\hat{i}'), \mathbf{u}_{11}^b(\hat{j}), \tilde{\mathbf{u}}_{20}^b(\hat{k}'), \mathbf{u}_{22}^b(\hat{l}), \tilde{\mathbf{w}}_3^b, \mathbf{y}_3^b) \in A_\epsilon^n(\tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3, Y_3)$$

and

$$(\tilde{\mathbf{u}}_{10}^{b-1}(i'), \mathbf{u}_{10}^{b-1}(\hat{i}'), \mathbf{u}_{11}^{b-1}(j'), \tilde{\mathbf{u}}_{20}^{b-1}(k'), \mathbf{u}_{20}^{b-1}(\hat{k}'), \mathbf{u}_{22}^{b-1}(l'), \tilde{\mathbf{w}}_3^{b-1}, \mathbf{y}_3^{b-1}) \in A_\epsilon^n(\tilde{U}_{10}, U_{10}, U_{11}, \tilde{U}_{20}, U_{20}, U_{22}, \tilde{W}_3, Y_3).$$

4.5.4 Probability of Error

List Decoding Error. During any block, User 3 makes a list of code indices $(i, k)$ that are typical with the channel output. Let

$$\psi_{ik}(\mathbf{y}) = \begin{cases} 1, & (\mathbf{u}_{10}^b(i), \mathbf{u}_{20}^b(k), \mathbf{y}_3^b) \in A_\epsilon^n(U_{10}, U_{20}, Y_3), \\ 0, & \text{otherwise.} \end{cases}$$

The size of the list can be estimated by taking the expected value of the number of sequences that are typical with the channel output. Let $S_\mathbf{y}$ be the size of the list made from the output signal $\mathbf{y}$, and let the correct indices be $i^*$ and $k^*$:

$$S_\mathbf{y} = \sum_{i,k} \psi_{ik}(\mathbf{y}),$$

$$\begin{aligned}
E[S_\mathbf{y}] &= E[\psi_{i^* k^*}(\mathbf{y})] + \sum_{i = i^*,\, k \neq k^*} E[\psi_{ik}(\mathbf{y})] + \sum_{i \neq i^*,\, k = k^*} E[\psi_{ik}(\mathbf{y})] + \sum_{i \neq i^*,\, k \neq k^*} E[\psi_{ik}(\mathbf{y})]\\
&\le 1 + (2^{nR_{10}} - 1)\, 2^{-n[I(U_{10}; Y_3 \mid \tilde{U}_{10}, U_{20}, U_{11}, \tilde{U}_{20}, U_{22}) - \epsilon]} + (2^{nR_{20}} - 1)\, 2^{-n[I(U_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{10}, U_{22}) - \epsilon]}\\
&\qquad + (2^{nR_{10}} - 1)(2^{nR_{20}} - 1)\, 2^{-n[I(U_{10}, U_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}) - \epsilon]}\\
&\le 1 + 2^{n(R_{10} - I(U_{10}; Y_3 \mid \tilde{U}_{10}, U_{20}, U_{11}, \tilde{U}_{20}, U_{22}) + \epsilon)} + 2^{n(R_{20} - I(U_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{10}, U_{22}) + \epsilon)}\\
&\qquad + 2^{n(R_{10} + R_{20} - I(U_{10}, U_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}) + \epsilon)}.
\end{aligned}$$

Decoding Errors. Assume that the indices used to generate the channel inputs during block $b$ were $(i' = 1, i = 1, j = 1, k' = 1, k = 1, l = 1, m = 1)$.

User 1 (events $E_{km}$: a wrong pair $(\hat{k}, \hat{m})$ passing the typicality check $(\mathbf{u}_3^b(\hat{m}), \mathbf{y}_1^b, \tilde{\mathbf{u}}_{10}^b(i), \tilde{\mathbf{u}}_{20}^b(\hat{k})) \in A_\epsilon^n(U_3, Y_1, \tilde{U}_{10}, \tilde{U}_{20})$):

$$P_{e1} \le (K-1)\,\Pr(E_{k1}) + (M-1)\,\Pr(E_{1m}) + (K-1)(M-1)\,\Pr(E_{km})$$
$$\le 2^{nR_{20}}\, 2^{-n[I(U_{20}; Y_1 \mid U_{10}, U_3) - \epsilon]} + 2^{nR_3}\, 2^{-n[I(U_3; Y_1 \mid U_{10}, U_{20}) - \epsilon]} + 2^{n(R_{20} + R_3)}\, 2^{-n[I(U_{20}, U_3; Y_1 \mid U_{10}) - \epsilon]}.$$

User 2 (events $E_{im}$, with check $(\mathbf{u}_3^b(\hat{m}), \mathbf{y}_2^b, \tilde{\mathbf{u}}_{10}^b(\hat{i}), \tilde{\mathbf{u}}_{20}^b(k)) \in A_\epsilon^n(U_3, Y_2, \tilde{U}_{10}, \tilde{U}_{20})$):

$$P_{e2} \le (I-1)\,\Pr(E_{i1}) + (M-1)\,\Pr(E_{1m}) + (I-1)(M-1)\,\Pr(E_{im})$$
$$\le 2^{nR_{10}}\, 2^{-n[I(U_{10}; Y_2 \mid U_{20}, U_3) - \epsilon]} + 2^{nR_3}\, 2^{-n[I(U_3; Y_2 \mid U_{10}, U_{20}) - \epsilon]} + 2^{n(R_{10} + R_3)}\, 2^{-n[I(U_{10}, U_3; Y_2 \mid U_{20}) - \epsilon]}.$$

User 3 (events $E_{i'jk'l} \cap F_{ik}$, where $E$ and $F$ denote wrong index tuples passing the two typicality checks of the previous section): applying the union bound over all fifteen nonempty patterns of wrong indices among $(\hat{i}', \hat{j}, \hat{k}', \hat{l})$,

$$P_{e3} \le (I-1)\,\Pr(EF) + (J-1)\,\Pr(E) + (K-1)\,\Pr(EF) + (L-1)\,\Pr(E)$$

$$\quad + (I-1)(J-1)\,\Pr(EF) + (I-1)(K-1)\,\Pr(EF) + (I-1)(L-1)\,\Pr(EF) + (J-1)(K-1)\,\Pr(EF)$$
$$\quad + (J-1)(L-1)\,\Pr(E) + (K-1)(L-1)\,\Pr(EF) + (I-1)(J-1)(K-1)\,\Pr(EF) + (I-1)(J-1)(L-1)\,\Pr(EF)$$
$$\quad + (I-1)(K-1)(L-1)\,\Pr(EF) + (J-1)(K-1)(L-1)\,\Pr(EF) + (I-1)(J-1)(K-1)(L-1)\,\Pr(E_{1111}F)$$

$$\begin{aligned}
&\le 2^{n[R_{10} - I(\tilde{U}_{10}; Y_3 \mid \tilde{U}_{20}, U_{11}, U_{22}, \tilde{W}_3) - I(U_{10}; Y_3 \mid U_{20}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3) + 6\epsilon]}\\
&\quad + 2^{n[R_{11} - I(U_{11}; Y_3 \mid \tilde{U}_{10}, \tilde{U}_{20}, U_{22}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3) + 3\epsilon]}\\
&\quad + 2^{n[R_{20} - I(\tilde{U}_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, U_{22}, \tilde{W}_3) - I(U_{20}; Y_3 \mid U_{10}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3) + 6\epsilon]}\\
&\quad + 2^{n[R_{22} - I(U_{22}; Y_3 \mid \tilde{U}_{10}, \tilde{U}_{22}, U_{11}, \tilde{W}_3) + 3\epsilon]}\\
&\quad + 2^{n[(R_{10} + R_{11}) - I(\tilde{U}_{10}, U_{11}; Y_3 \mid \tilde{U}_{20}, U_{22}, \tilde{W}_3) - I(U_{10}; Y_3 \mid U_{20}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3) + 6\epsilon]}\\
&\quad + 2^{n[(R_{10} + R_{20}) - I(\tilde{U}_{10}, \tilde{U}_{20}; Y_3 \mid U_{11}, U_{22}, \tilde{W}_3) - I(U_{10}, U_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3) + 6\epsilon]}\\
&\quad + 2^{n[(R_{10} + R_{22}) - I(\tilde{U}_{10}, U_{22}; Y_3 \mid U_{11}, \tilde{U}_{20}, \tilde{W}_3) - I(U_{10}; Y_3 \mid U_{20}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3) + 6\epsilon]}\\
&\quad + 2^{n[(R_{11} + R_{20}) - I(U_{11}, \tilde{U}_{20}; Y_3 \mid \tilde{U}_{10}, U_{22}, \tilde{W}_3) - I(U_{20}; Y_3 \mid U_{10}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3) + 6\epsilon]}\\
&\quad + 2^{n[(R_{11} + R_{22}) - I(U_{11}, U_{22}; Y_3 \mid \tilde{U}_{10}, \tilde{U}_{20}, \tilde{W}_3) + 3\epsilon]}\\
&\quad + 2^{n[(R_{20} + R_{22}) - I(\tilde{U}_{20}, U_{22}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{W}_3) - I(U_{20}; Y_3 \mid U_{10}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3) + 6\epsilon]}\\
&\quad + 2^{n[(R_{10} + R_{11} + R_{20}) - I(\tilde{U}_{10}, U_{11}, \tilde{U}_{20}; Y_3 \mid U_{22}, \tilde{W}_3) - I(U_{10}, U_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3) + 6\epsilon]}\\
&\quad + 2^{n[(R_{10} + R_{11} + R_{22}) - I(\tilde{U}_{10}, U_{11}, U_{22}; Y_3 \mid \tilde{U}_{20}, \tilde{W}_3) - I(U_{10}; Y_3 \mid U_{20}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3) + 6\epsilon]}\\
&\quad + 2^{n[(R_{10} + R_{20} + R_{22}) - I(\tilde{U}_{10}, \tilde{U}_{20}, U_{22}; Y_3 \mid U_{11}, \tilde{W}_3) - I(U_{10}, U_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3) + 6\epsilon]}\\
&\quad + 2^{n[(R_{11} + R_{20} + R_{22}) - I(U_{11}, \tilde{U}_{20}, U_{22}; Y_3 \mid \tilde{U}_{10}, \tilde{W}_3) - I(U_{20}; Y_3 \mid U_{10}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3) + 6\epsilon]}\\
&\quad + 2^{n[(R_{10} + R_{11} + R_{20} + R_{22}) - I(\tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}; Y_3 \mid \tilde{W}_3) - I(U_{10}, U_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3) + 6\epsilon]}.
\end{aligned}$$

Hence, to make the probabilities of error arbitrarily small (as $n \to \infty$), we have to keep the rates in the rate region defined by:

$$R_{10} \le \eta \min\big[I(U_{10}; Y_2 \mid U_{20}, U_3),\ I(U_{10}; Y_3 \mid \tilde{U}_{10}, U_{20}, U_{11}, \tilde{U}_{20}, U_{22}),\ I(\tilde{U}_{10}; Y_3 \mid \tilde{U}_{20}, U_{11}, U_{22}, \tilde{W}_3) + I(U_{10}; Y_3 \mid U_{20}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3)\big]$$
$$R_{11} \le \eta\, I(U_{11}; Y_3 \mid \tilde{U}_{10}, \tilde{U}_{20}, U_{22}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3)$$
$$R_{20} \le \eta \min\big[I(U_{20}; Y_1 \mid U_{10}, U_3),\ I(U_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{10}, U_{22}),\ I(\tilde{U}_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, U_{22}, \tilde{W}_3) + I(U_{20}; Y_3 \mid U_{10}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3)\big]$$
$$R_{22} \le \eta\, I(U_{22}; Y_3 \mid \tilde{U}_{10}, \tilde{U}_{22}, U_{11}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3)$$
$$R_{10} + R_{11} \le \eta\big[I(\tilde{U}_{10}, U_{11}; Y_3 \mid \tilde{U}_{20}, U_{22}, \tilde{W}_3) + I(U_{10}; Y_3 \mid U_{20}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3)\big]$$
$$R_{10} + R_{20} \le \eta \min\big[I(U_{10}, U_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}),\ I(\tilde{U}_{10}, \tilde{U}_{20}; Y_3 \mid U_{11}, U_{22}, \tilde{W}_3) + I(U_{10}, U_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3)\big]$$
$$R_{10} + R_{22} \le \eta\big[I(\tilde{U}_{10}, U_{22}; Y_3 \mid U_{11}, \tilde{U}_{20}, \tilde{W}_3) + I(U_{10}; Y_3 \mid U_{20}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3)\big]$$
$$R_{11} + R_{20} \le \eta\big[I(U_{11}, \tilde{U}_{20}; Y_3 \mid \tilde{U}_{10}, U_{22}, \tilde{W}_3) + I(U_{20}; Y_3 \mid U_{10}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3)\big]$$
$$R_{11} + R_{22} \le \eta\, I(U_{11}, U_{22}; Y_3 \mid \tilde{U}_{10}, \tilde{U}_{20}, \tilde{W}_3)$$
$$R_{20} + R_{22} \le \eta\big[I(\tilde{U}_{20}, U_{22}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{W}_3) + I(U_{20}; Y_3 \mid U_{10}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3)\big]$$
$$R_{10} + R_{11} + R_{20} \le \eta\big[I(\tilde{U}_{10}, U_{11}, \tilde{U}_{20}; Y_3 \mid U_{22}, \tilde{W}_3) + I(U_{10}, U_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3)\big]$$
$$R_{10} + R_{11} + R_{22} \le \eta\big[I(\tilde{U}_{10}, U_{11}, U_{22}; Y_3 \mid \tilde{U}_{20}, \tilde{W}_3) + I(U_{10}; Y_3 \mid U_{20}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3)\big]$$
$$R_{10} + R_{20} + R_{22} \le \eta\big[I(\tilde{U}_{10}, \tilde{U}_{20}, U_{22}; Y_3 \mid U_{11}, \tilde{W}_3) + I(U_{10}, U_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3)\big]$$
$$R_{11} + R_{20} + R_{22} \le \eta\big[I(U_{11}, \tilde{U}_{20}, U_{22}; Y_3 \mid \tilde{U}_{10}, \tilde{W}_3) + I(U_{20}; Y_3 \mid U_{10}, \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3)\big]$$
$$R_{10} + R_{11} + R_{20} + R_{22} \le \eta\big[I(\tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}; Y_3 \mid \tilde{W}_3) + I(U_{10}, U_{20}; Y_3 \mid \tilde{U}_{10}, U_{11}, \tilde{U}_{20}, U_{22}, \tilde{W}_3)\big]$$
$$R_3 \le (1 - \eta) \min\big[I(U_3; Y_1 \mid U_{10}, U_{20}),\ I(U_3; Y_2 \mid U_{10}, U_{20})\big]$$
$$R_{20} + R_3 \le (1 - \eta)\, I(U_{20}, U_3; Y_1 \mid U_{10})$$
$$R_{10} + R_3 \le (1 - \eta)\, I(U_{10}, U_3; Y_2 \mid U_{20})$$
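
To get a feel for the $\eta$ trade-off in the Gaussian half-duplex model of Section 3.2.2, the following rough sketch evaluates only a single-layer benchmark (standard Gaussian capacity formulas with illustrative parameter values, ignoring the feedback layers above, which can only enlarge the uplink side):

```python
import math

def C(snr):
    """AWGN capacity in bits per channel use: 0.5*log2(1+SNR)."""
    return 0.5 * math.log2(1 + snr)

# Half-duplex Gaussian model of Section 3.2.2 (illustrative values).
P1 = P2 = P3 = 10.0
N1 = N2 = N3 = 1.0

# Crude benchmark: eta of the time is a plain Gaussian MAC; (1 - eta) is
# a common-message broadcast limited by the weaker of Users 1 and 2.
for eta in (0.25, 0.5, 0.75):
    sum_rate_uplink = eta * C((P1 + P2) / N3)
    R3 = (1 - eta) * min(C(P3 / N1), C(P3 / N2))
    print(f"eta={eta:.2f}: R1+R2 <= {sum_rate_uplink:.3f}, R3 <= {R3:.3f}")
```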

Chapter 5 Outer Bound

This chapter contains the proof of Theorem 3.2 and a dependence balance bound on the input distributions. We denote channel inputs by $X$, channel outputs by $Y$, and messages by $W$, with subscripts and superscripts as described below. The encoding uses $B$ blocks of $n$ channel uses each, so we introduce the following notation. First, $A_{u,b}$ denotes block $b$ of User $u$; note that each block is of length $n$ symbols. The vector of the blocks before block $b$ is denoted by $A_u^b = (A_{u,1}, \ldots, A_{u,b-1})$. Finally, the vector of all blocks $b = 1, \ldots, B$ is denoted by $\mathbf{A}_u = (A_{u,1}, \ldots, A_{u,B})$. Boldface variables thus denote vectors of $B$ blocks, each of length $n$, drawn from the same distribution. For example, $\mathbf{X}_1 = (X_{1,1}, X_{1,2}, \ldots, X_{1,B})$ is the vector input to the channel for User 1 over $B$ blocks, and $X_{1,i} = (X_{1,i}^1, X_{1,i}^2, \ldots, X_{1,i}^n)$ is the channel input of User 1 for the $i$-th block, itself a vector ($n$-tuple), with each component independently drawn from the same distribution ($X_{1,i}^k \sim X_1$). Finally, we introduce the random variable $Z_{u,b} = (X_{u,b-1}, Y_{u,b-1})$.

The model in Figure 3.1 can be mathematically defined by a Markov chain, as shown in Figure 5.1. Note that the $X$'s are deterministic functions of the $W$'s and $Y$'s (i.e. $X_{1,b} = f_1(W_1, Y_{1,b-1})$, $X_{2,b} = f_2(W_2, Y_{2,b-1})$ and $X_{3,b} = g(W_3, Y_{3,b})$ for some deterministic encoding functions $f_1$, $f_2$, $g$).

[Figure 5.1: Markov chain model of the multiuser feedback scenario.]

We prove each of the inequalities in the following subsections. To avoid notational overload, we suppress conditioning on the time-sharing variable $q$ in the proofs, but the reader should modify the appropriate steps to account for time-sharing.

5.1 Outer Bound for $R_3$

We start by bounding $R_3$ as follows:

$$nBR_3 = H(W_3 \mid W_1, W_2) = H(W_3 \mid W_1, W_2, \mathbf{Y}_1) + I(W_3; \mathbf{Y}_1 \mid W_1, W_2) \qquad (5.1)$$
$$= H(W_3 \mid W_1, W_2, \mathbf{Y}_2) + I(W_3; \mathbf{Y}_2 \mid W_1, W_2). \qquad (5.2)$$

The first term can be upper bounded by an arbitrarily small positive number using Fano's theorem (similarly for all the other bounds given below). The mutual information (the second term in (5.1)) can be upper bounded as

$$\begin{aligned}
I(W_3; \mathbf{Y}_1 \mid W_1, W_2) &= H(\mathbf{Y}_1 \mid W_1, W_2) - H(\mathbf{Y}_1 \mid W_1, W_2, W_3)\\
&= \sum_{b=1}^{B} \big[H(Y_{1,b} \mid W_1, W_2, \mathbf{Y}_1^{b-1}) - H(Y_{1,b} \mid W_1, W_2, W_3, \mathbf{Y}_1^{b-1})\big]\\
&\stackrel{(a)}{\le} \sum_{b=1}^{B} \big[H(Y_{1,b} \mid W_1, W_2, \mathbf{Y}_1^{b-1}, \mathbf{X}_1^b) - H(Y_{1,b} \mid W_1, W_2, W_3, \mathbf{X}_1^b, \mathbf{Y}_1^{b-1}, \mathbf{Y}_3^b)\big]
\end{aligned}$$

$$\begin{aligned}
&\stackrel{(b)}{=} \sum_{b=1}^{B} \big[H(Y_{1,b} \mid W_1, W_2, \mathbf{X}_1^b, \mathbf{Y}_1^{b-1}, Z_1^b) - H(Y_{1,b} \mid W_1, W_2, W_3, \mathbf{X}_1^b, \mathbf{Y}_1^{b-1}, \mathbf{Y}_3^b, \mathbf{X}_3^b, Z_1^b)\big]\\
&\stackrel{(c)}{\le} \sum_{b=1}^{B} \big[H(Y_{1,b} \mid X_{1,b}, Z_{1,b}) - H(Y_{1,b} \mid X_{1,b}, X_{3,b}, Z_{1,b})\big]\\
&= \sum_{b=1}^{B} I(Y_{1,b}; X_{3,b} \mid X_{1,b}, Z_{1,b})\\
&\stackrel{(d)}{\le} \sum_{b=1}^{B} \sum_{i=1}^{n} I(Y_{1,b}^i; X_{3,b}^i \mid X_{1,b}^i, Z_{1,b}^i) \le nB\, I(X_3; Y_1 \mid X_1, Z_1),
\end{aligned}$$

where

(a): using $X_{1,b} = f_1(W_1, \mathbf{Y}_1^{b-1})$ in both terms; the inequality is due to introducing $\mathbf{Y}_3^b$ in the second term.
(b): using $Z_{1,b} = (X_{1,b-1}, Y_{1,b-1})$ in both terms, and in the second term the fact that $X_{3,b} = g(W_3, \mathbf{Y}_3^b)$.
(c): for the first term, we just remove a few variables and hence get an upper bound; for the second term, $Y_{1,b}$, being the channel output, is independent of all other terms given $X_{3,b}$ and $X_{1,b}$, even though the scheme uses feedback.
(d): using the DMC property for the $n$ channel uses in each block.

Hence $R_3 \le I(X_3; Y_1 \mid X_1, Z_1)$, and similarly from (5.2), $R_3 \le I(X_3; Y_2 \mid X_2, Z_2)$. So

$$R_3 \le \min\big[I(X_3; Y_1 \mid X_1, Z_1),\ I(X_3; Y_2 \mid X_2, Z_2)\big]. \qquad (5.3)$$

5.2 Outer Bound for $R_1$ and $R_2$

Note that

$$nBR_1 = H(W_1 \mid W_2, W_3) = H(W_1 \mid W_2, W_3, \mathbf{Y}_3) + I(W_1; \mathbf{Y}_3 \mid W_2, W_3).$$

The second term can be upper bounded as

$$I(W_1; \mathbf{Y}_3 \mid W_2, W_3) \le I(W_1; \mathbf{Y}_2, \mathbf{Y}_3 \mid W_2, W_3) = I(W_1; \mathbf{Y}_2 \mid W_2, W_3) + I(W_1; \mathbf{Y}_3 \mid W_2, W_3, \mathbf{Y}_2). \qquad (5.4)$$

The first term can be upper bounded very similarly to the proof of Section 5.1:

$$\begin{aligned}
I(W_1; \mathbf{Y}_2 \mid W_2, W_3) &= H(\mathbf{Y}_2 \mid W_2, W_3) - H(\mathbf{Y}_2 \mid W_1, W_2, W_3)\\
&= \sum_{b=1}^{B} \big[H(Y_{2,b} \mid W_2, W_3, \mathbf{Y}_2^{b-1}) - H(Y_{2,b} \mid W_1, W_2, W_3, \mathbf{Y}_2^{b-1})\big]\\
&\stackrel{(a)}{\le} \sum_{b=1}^{B} \big[H(Y_{2,b} \mid W_2, W_3, \mathbf{Y}_2^{b-1}, \mathbf{X}_2^b) - H(Y_{2,b} \mid W_1, W_2, W_3, \mathbf{X}_2^b, \mathbf{Y}_2^{b-1}, \mathbf{Y}_1^{b-1})\big]\\
&\stackrel{(b)}{=} \sum_{b=1}^{B} \big[H(Y_{2,b} \mid W_2, W_3, \mathbf{X}_2^{b-1}, \mathbf{Y}_2^{b-1}, Z_2^b) - H(Y_{2,b} \mid W_1, W_2, W_3, \mathbf{X}_1^b, \mathbf{Y}_1^{b-1}, \mathbf{Y}_2^{b-1}, \mathbf{X}_2^b, Z_2^b)\big]\\
&\stackrel{(c)}{\le} \sum_{b=1}^{B} \big[H(Y_{2,b} \mid X_{2,b}, Z_{2,b}, W_3) - H(Y_{2,b} \mid X_{1,b}, X_{2,b}, Z_{2,b}, W_3)\big]\\
&= \sum_{b=1}^{B} I(Y_{2,b}; X_{1,b} \mid X_{2,b}, Z_{2,b}, W_3)\\
&\stackrel{(d)}{\le} \sum_{b=1}^{B} \sum_{i=1}^{n} I(Y_{2,b}^i; X_{1,b}^i \mid X_{2,b}^i, Z_{2,b}^i, W_3) \le nB\, I(X_1; Y_2 \mid X_2, Z_2, W_3),
\end{aligned}$$

where

(a): using $X_{2,b} = f_2(W_2, \mathbf{Y}_2^{b-1})$ in both terms; the inequality is due to introducing $\mathbf{Y}_1^{b-1}$ in the second term.
(b): using $Z_{2,b} = (X_{2,b-1}, Y_{2,b-1})$ in both terms, and in the second term the fact that $X_{3,b} = g(W_3, \mathbf{Y}_3^b)$.
(c): for the first term, we just remove a few variables and hence get an upper bound


More information

On Network Interference Management

On Network Interference Management On Network Interference Management Aleksandar Jovičić, Hua Wang and Pramod Viswanath March 3, 2008 Abstract We study two building-block models of interference-limited wireless networks, motivated by the

More information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information 204 IEEE International Symposium on Information Theory Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information Omur Ozel, Kaya Tutuncuoglu 2, Sennur Ulukus, and Aylin Yener

More information

Lecture 1: The Multiple Access Channel. Copyright G. Caire 12

Lecture 1: The Multiple Access Channel. Copyright G. Caire 12 Lecture 1: The Multiple Access Channel Copyright G. Caire 12 Outline Two-user MAC. The Gaussian case. The K-user case. Polymatroid structure and resource allocation problems. Copyright G. Caire 13 Two-user

More information

On the Capacity of the Interference Channel with a Relay

On the Capacity of the Interference Channel with a Relay On the Capacity of the Interference Channel with a Relay Ivana Marić, Ron Dabora and Andrea Goldsmith Stanford University, Stanford, CA {ivanam,ron,andrea}@wsl.stanford.edu Abstract Capacity gains due

More information

Source and Channel Coding for Correlated Sources Over Multiuser Channels

Source and Channel Coding for Correlated Sources Over Multiuser Channels Source and Channel Coding for Correlated Sources Over Multiuser Channels Deniz Gündüz, Elza Erkip, Andrea Goldsmith, H. Vincent Poor Abstract Source and channel coding over multiuser channels in which

More information

Lecture 4 Channel Coding

Lecture 4 Channel Coding Capacity and the Weak Converse Lecture 4 Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 15, 2014 1 / 16 I-Hsiang Wang NIT Lecture 4 Capacity

More information

Lecture 5 Channel Coding over Continuous Channels

Lecture 5 Channel Coding over Continuous Channels Lecture 5 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 14, 2014 1 / 34 I-Hsiang Wang NIT Lecture 5 From

More information

National University of Singapore Department of Electrical & Computer Engineering. Examination for

National University of Singapore Department of Electrical & Computer Engineering. Examination for National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:

More information

The Sensor Reachback Problem

The Sensor Reachback Problem Submitted to the IEEE Trans. on Information Theory, November 2003. 1 The Sensor Reachback Problem João Barros Sergio D. Servetto Abstract We consider the problem of reachback communication in sensor networks.

More information

A digital interface for Gaussian relay and interference networks: Lifting codes from the discrete superposition model

A digital interface for Gaussian relay and interference networks: Lifting codes from the discrete superposition model A digital interface for Gaussian relay and interference networks: Lifting codes from the discrete superposition model M. Anand, Student Member, IEEE, and P. R. Kumar, Fellow, IEEE Abstract For every Gaussian

More information

Chapter 9 Fundamental Limits in Information Theory

Chapter 9 Fundamental Limits in Information Theory Chapter 9 Fundamental Limits in Information Theory Information Theory is the fundamental theory behind information manipulation, including data compression and data transmission. 9.1 Introduction o For

More information

Cut-Set Bound and Dependence Balance Bound

Cut-Set Bound and Dependence Balance Bound Cut-Set Bound and Dependence Balance Bound Lei Xiao lxiao@nd.edu 1 Date: 4 October, 2006 Reading: Elements of information theory by Cover and Thomas [1, Section 14.10], and the paper by Hekstra and Willems

More information

An Outer Bound for the Gaussian. Interference channel with a relay.

An Outer Bound for the Gaussian. Interference channel with a relay. An Outer Bound for the Gaussian Interference Channel with a Relay Ivana Marić Stanford University Stanford, CA ivanam@wsl.stanford.edu Ron Dabora Ben-Gurion University Be er-sheva, Israel ron@ee.bgu.ac.il

More information

X 1 : X Table 1: Y = X X 2

X 1 : X Table 1: Y = X X 2 ECE 534: Elements of Information Theory, Fall 200 Homework 3 Solutions (ALL DUE to Kenneth S. Palacio Baus) December, 200. Problem 5.20. Multiple access (a) Find the capacity region for the multiple-access

More information

5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010

5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010 5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010 Capacity Theorems for Discrete, Finite-State Broadcast Channels With Feedback and Unidirectional Receiver Cooperation Ron Dabora

More information

Interference Channel aided by an Infrastructure Relay

Interference Channel aided by an Infrastructure Relay Interference Channel aided by an Infrastructure Relay Onur Sahin, Osvaldo Simeone, and Elza Erkip *Department of Electrical and Computer Engineering, Polytechnic Institute of New York University, Department

More information

(Classical) Information Theory III: Noisy channel coding

(Classical) Information Theory III: Noisy channel coding (Classical) Information Theory III: Noisy channel coding Sibasish Ghosh The Institute of Mathematical Sciences CIT Campus, Taramani, Chennai 600 113, India. p. 1 Abstract What is the best possible way

More information

LECTURE 15. Last time: Feedback channel: setting up the problem. Lecture outline. Joint source and channel coding theorem

LECTURE 15. Last time: Feedback channel: setting up the problem. Lecture outline. Joint source and channel coding theorem LECTURE 15 Last time: Feedback channel: setting up the problem Perfect feedback Feedback capacity Data compression Lecture outline Joint source and channel coding theorem Converse Robustness Brain teaser

More information

ECE Information theory Final (Fall 2008)

ECE Information theory Final (Fall 2008) ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1

More information

Simultaneous Nonunique Decoding Is Rate-Optimal

Simultaneous Nonunique Decoding Is Rate-Optimal Fiftieth Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 1-5, 2012 Simultaneous Nonunique Decoding Is Rate-Optimal Bernd Bandemer University of California, San Diego La Jolla, CA

More information

Lecture 2. Capacity of the Gaussian channel

Lecture 2. Capacity of the Gaussian channel Spring, 207 5237S, Wireless Communications II 2. Lecture 2 Capacity of the Gaussian channel Review on basic concepts in inf. theory ( Cover&Thomas: Elements of Inf. Theory, Tse&Viswanath: Appendix B) AWGN

More information

Cooperative Communication with Feedback via Stochastic Approximation

Cooperative Communication with Feedback via Stochastic Approximation Cooperative Communication with Feedback via Stochastic Approximation Utsaw Kumar J Nicholas Laneman and Vijay Gupta Department of Electrical Engineering University of Notre Dame Email: {ukumar jnl vgupta}@ndedu

More information

Appendix B Information theory from first principles

Appendix B Information theory from first principles Appendix B Information theory from first principles This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes

More information

Secret Key Agreement Using Conferencing in State- Dependent Multiple Access Channels with An Eavesdropper

Secret Key Agreement Using Conferencing in State- Dependent Multiple Access Channels with An Eavesdropper Secret Key Agreement Using Conferencing in State- Dependent Multiple Access Channels with An Eavesdropper Mohsen Bahrami, Ali Bereyhi, Mahtab Mirmohseni and Mohammad Reza Aref Information Systems and Security

More information

The Capacity Region for Multi-source Multi-sink Network Coding

The Capacity Region for Multi-source Multi-sink Network Coding The Capacity Region for Multi-source Multi-sink Network Coding Xijin Yan Dept. of Electrical Eng. - Systems University of Southern California Los Angeles, CA, U.S.A. xyan@usc.edu Raymond W. Yeung Dept.

More information

A Summary of Multiple Access Channels

A Summary of Multiple Access Channels A Summary of Multiple Access Channels Wenyi Zhang February 24, 2003 Abstract In this summary we attempt to present a brief overview of the classical results on the information-theoretic aspects of multiple

More information

Capacity of All Nine Models of Channel Output Feedback for the Two-user Interference Channel

Capacity of All Nine Models of Channel Output Feedback for the Two-user Interference Channel Capacity of All Nine Models of Channel Output Feedback for the Two-user Interference Channel Achaleshwar Sahai, Vaneet Aggarwal, Melda Yuksel and Ashutosh Sabharwal 1 Abstract arxiv:1104.4805v3 [cs.it]

More information

EE 4TM4: Digital Communications II. Channel Capacity

EE 4TM4: Digital Communications II. Channel Capacity EE 4TM4: Digital Communications II 1 Channel Capacity I. CHANNEL CODING THEOREM Definition 1: A rater is said to be achievable if there exists a sequence of(2 nr,n) codes such thatlim n P (n) e (C) = 0.

More information

Feedback Capacity of a Class of Symmetric Finite-State Markov Channels

Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Nevroz Şen, Fady Alajaji and Serdar Yüksel Department of Mathematics and Statistics Queen s University Kingston, ON K7L 3N6, Canada

More information

Energy State Amplification in an Energy Harvesting Communication System

Energy State Amplification in an Energy Harvesting Communication System Energy State Amplification in an Energy Harvesting Communication System Omur Ozel Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland College Park, MD 20742 omur@umd.edu

More information

IN this paper, we show that the scalar Gaussian multiple-access

IN this paper, we show that the scalar Gaussian multiple-access 768 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 5, MAY 2004 On the Duality of Gaussian Multiple-Access and Broadcast Channels Nihar Jindal, Student Member, IEEE, Sriram Vishwanath, and Andrea

More information

An Achievable Rate for the Multiple Level Relay Channel

An Achievable Rate for the Multiple Level Relay Channel An Achievable Rate for the Multiple Level Relay Channel Liang-Liang Xie and P. R. Kumar Department of Electrical and Computer Engineering, and Coordinated Science Laboratory University of Illinois, Urbana-Champaign

More information

LECTURE 13. Last time: Lecture outline

LECTURE 13. Last time: Lecture outline LECTURE 13 Last time: Strong coding theorem Revisiting channel and codes Bound on probability of error Error exponent Lecture outline Fano s Lemma revisited Fano s inequality for codewords Converse to

More information

Random Access: An Information-Theoretic Perspective

Random Access: An Information-Theoretic Perspective Random Access: An Information-Theoretic Perspective Paolo Minero, Massimo Franceschetti, and David N. C. Tse Abstract This paper considers a random access system where each sender can be in two modes of

More information

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 On the Structure of Real-Time Encoding and Decoding Functions in a Multiterminal Communication System Ashutosh Nayyar, Student

More information

EE5139R: Problem Set 7 Assigned: 30/09/15, Due: 07/10/15

EE5139R: Problem Set 7 Assigned: 30/09/15, Due: 07/10/15 EE5139R: Problem Set 7 Assigned: 30/09/15, Due: 07/10/15 1. Cascade of Binary Symmetric Channels The conditional probability distribution py x for each of the BSCs may be expressed by the transition probability

More information

An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel

An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel Forty-Ninth Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 8-3, An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel Invited Paper Bernd Bandemer and

More information

Error Exponent Region for Gaussian Broadcast Channels

Error Exponent Region for Gaussian Broadcast Channels Error Exponent Region for Gaussian Broadcast Channels Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos Electrical Engineering and Computer Science Dept. University of Michigan, Ann Arbor, MI

More information

On the Duality between Multiple-Access Codes and Computation Codes

On the Duality between Multiple-Access Codes and Computation Codes On the Duality between Multiple-Access Codes and Computation Codes Jingge Zhu University of California, Berkeley jingge.zhu@berkeley.edu Sung Hoon Lim KIOST shlim@kiost.ac.kr Michael Gastpar EPFL michael.gastpar@epfl.ch

More information

Interactive Decoding of a Broadcast Message

Interactive Decoding of a Broadcast Message In Proc. Allerton Conf. Commun., Contr., Computing, (Illinois), Oct. 2003 Interactive Decoding of a Broadcast Message Stark C. Draper Brendan J. Frey Frank R. Kschischang University of Toronto Toronto,

More information

Can Feedback Increase the Capacity of the Energy Harvesting Channel?

Can Feedback Increase the Capacity of the Energy Harvesting Channel? Can Feedback Increase the Capacity of the Energy Harvesting Channel? Dor Shaviv EE Dept., Stanford University shaviv@stanford.edu Ayfer Özgür EE Dept., Stanford University aozgur@stanford.edu Haim Permuter

More information

Solutions to Homework Set #3 Channel and Source coding

Solutions to Homework Set #3 Channel and Source coding Solutions to Homework Set #3 Channel and Source coding. Rates (a) Channels coding Rate: Assuming you are sending 4 different messages using usages of a channel. What is the rate (in bits per channel use)

More information

Secret Key Agreement Using Asymmetry in Channel State Knowledge

Secret Key Agreement Using Asymmetry in Channel State Knowledge Secret Key Agreement Using Asymmetry in Channel State Knowledge Ashish Khisti Deutsche Telekom Inc. R&D Lab USA Los Altos, CA, 94040 Email: ashish.khisti@telekom.com Suhas Diggavi LICOS, EFL Lausanne,

More information

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity 5-859: Information Theory and Applications in TCS CMU: Spring 23 Lecture 8: Channel and source-channel coding theorems; BEC & linear codes February 7, 23 Lecturer: Venkatesan Guruswami Scribe: Dan Stahlke

More information

Approximately achieving the feedback interference channel capacity with point-to-point codes

Approximately achieving the feedback interference channel capacity with point-to-point codes Approximately achieving the feedback interference channel capacity with point-to-point codes Joyson Sebastian*, Can Karakus*, Suhas Diggavi* Abstract Superposition codes with rate-splitting have been used

More information

Source-Channel Coding Theorems for the Multiple-Access Relay Channel

Source-Channel Coding Theorems for the Multiple-Access Relay Channel Source-Channel Coding Theorems for the Multiple-Access Relay Channel Yonathan Murin, Ron Dabora, and Deniz Gündüz Abstract We study reliable transmission of arbitrarily correlated sources over multiple-access

More information

Half-Duplex Gaussian Relay Networks with Interference Processing Relays

Half-Duplex Gaussian Relay Networks with Interference Processing Relays Half-Duplex Gaussian Relay Networks with Interference Processing Relays Bama Muthuramalingam Srikrishna Bhashyam Andrew Thangaraj Department of Electrical Engineering Indian Institute of Technology Madras

More information

On Function Computation with Privacy and Secrecy Constraints

On Function Computation with Privacy and Secrecy Constraints 1 On Function Computation with Privacy and Secrecy Constraints Wenwen Tu and Lifeng Lai Abstract In this paper, the problem of function computation with privacy and secrecy constraints is considered. The

More information

EE/Stat 376B Handout #5 Network Information Theory October, 14, Homework Set #2 Solutions

EE/Stat 376B Handout #5 Network Information Theory October, 14, Homework Set #2 Solutions EE/Stat 376B Handout #5 Network Information Theory October, 14, 014 1. Problem.4 parts (b) and (c). Homework Set # Solutions (b) Consider h(x + Y ) h(x + Y Y ) = h(x Y ) = h(x). (c) Let ay = Y 1 + Y, where

More information

AN INTRODUCTION TO SECRECY CAPACITY. 1. Overview

AN INTRODUCTION TO SECRECY CAPACITY. 1. Overview AN INTRODUCTION TO SECRECY CAPACITY BRIAN DUNN. Overview This paper introduces the reader to several information theoretic aspects of covert communications. In particular, it discusses fundamental limits

More information

Performance-based Security for Encoding of Information Signals. FA ( ) Paul Cuff (Princeton University)

Performance-based Security for Encoding of Information Signals. FA ( ) Paul Cuff (Princeton University) Performance-based Security for Encoding of Information Signals FA9550-15-1-0180 (2015-2018) Paul Cuff (Princeton University) Contributors Two students finished PhD Tiance Wang (Goldman Sachs) Eva Song

More information

Reliable Computation over Multiple-Access Channels

Reliable Computation over Multiple-Access Channels Reliable Computation over Multiple-Access Channels Bobak Nazer and Michael Gastpar Dept. of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA, 94720-1770 {bobak,

More information

The Poisson Channel with Side Information

The Poisson Channel with Side Information The Poisson Channel with Side Information Shraga Bross School of Enginerring Bar-Ilan University, Israel brosss@macs.biu.ac.il Amos Lapidoth Ligong Wang Signal and Information Processing Laboratory ETH

More information

On the Rate-Limited Gelfand-Pinsker Problem

On the Rate-Limited Gelfand-Pinsker Problem On the Rate-Limited Gelfand-Pinsker Problem Ravi Tandon Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 74 ravit@umd.edu ulukus@umd.edu Abstract

More information

Sum Capacity of General Deterministic Interference Channel with Channel Output Feedback

Sum Capacity of General Deterministic Interference Channel with Channel Output Feedback Sum Capacity of General Deterministic Interference Channel with Channel Output Feedback Achaleshwar Sahai Department of ECE, Rice University, Houston, TX 775. as27@rice.edu Vaneet Aggarwal Department of

More information

On Two-user Fading Gaussian Broadcast Channels. with Perfect Channel State Information at the Receivers. Daniela Tuninetti

On Two-user Fading Gaussian Broadcast Channels. with Perfect Channel State Information at the Receivers. Daniela Tuninetti DIMACS Workshop on Network Information Theory - March 2003 Daniela Tuninetti 1 On Two-user Fading Gaussian Broadcast Channels with Perfect Channel State Information at the Receivers Daniela Tuninetti Mobile

More information

arxiv: v2 [cs.it] 28 May 2017

arxiv: v2 [cs.it] 28 May 2017 Feedback and Partial Message Side-Information on the Semideterministic Broadcast Channel Annina Bracher and Michèle Wigger arxiv:1508.01880v2 [cs.it] 28 May 2017 May 30, 2017 Abstract The capacity of the

More information

18.2 Continuous Alphabet (discrete-time, memoryless) Channel

18.2 Continuous Alphabet (discrete-time, memoryless) Channel 0-704: Information Processing and Learning Spring 0 Lecture 8: Gaussian channel, Parallel channels and Rate-distortion theory Lecturer: Aarti Singh Scribe: Danai Koutra Disclaimer: These notes have not

More information

Capacity Theorems for Relay Channels

Capacity Theorems for Relay Channels Capacity Theorems for Relay Channels Abbas El Gamal Department of Electrical Engineering Stanford University April, 2006 MSRI-06 Relay Channel Discrete-memoryless relay channel [vm 7] Relay Encoder Y n

More information

Optimal Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels

Optimal Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels Optimal Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels Bike Xie, Student Member, IEEE and Richard D. Wesel, Senior Member, IEEE Abstract arxiv:0811.4162v4 [cs.it] 8 May 2009

More information

The Unbounded Benefit of Encoder Cooperation for the k-user MAC

The Unbounded Benefit of Encoder Cooperation for the k-user MAC The Unbounded Benefit of Encoder Cooperation for the k-user MAC Parham Noorzad, Student Member, IEEE, Michelle Effros, Fellow, IEEE, and Michael Langberg, Senior Member, IEEE arxiv:1601.06113v2 [cs.it]

More information

NOMA: Principles and Recent Results

NOMA: Principles and Recent Results NOMA: Principles and Recent Results Jinho Choi School of EECS GIST September 2017 (VTC-Fall 2017) 1 / 46 Abstract: Non-orthogonal multiple access (NOMA) becomes a key technology in 5G as it can improve

More information

Classical codes for quantum broadcast channels. arxiv: Ivan Savov and Mark M. Wilde School of Computer Science, McGill University

Classical codes for quantum broadcast channels. arxiv: Ivan Savov and Mark M. Wilde School of Computer Science, McGill University Classical codes for quantum broadcast channels arxiv:1111.3645 Ivan Savov and Mark M. Wilde School of Computer Science, McGill University International Symposium on Information Theory, Boston, USA July

More information

IET Commun., 2009, Vol. 3, Iss. 4, pp doi: /iet-com & The Institution of Engineering and Technology 2009

IET Commun., 2009, Vol. 3, Iss. 4, pp doi: /iet-com & The Institution of Engineering and Technology 2009 Published in IET Communications Received on 10th March 2008 Revised on 10th July 2008 ISSN 1751-8628 Comprehensive partial decoding approach for two-level relay networks L. Ghabeli M.R. Aref Information

More information

3F1: Signals and Systems INFORMATION THEORY Examples Paper Solutions

3F1: Signals and Systems INFORMATION THEORY Examples Paper Solutions Engineering Tripos Part IIA THIRD YEAR 3F: Signals and Systems INFORMATION THEORY Examples Paper Solutions. Let the joint probability mass function of two binary random variables X and Y be given in the

More information

A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels

A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels Mehdi Mohseni Department of Electrical Engineering Stanford University Stanford, CA 94305, USA Email: mmohseni@stanford.edu

More information

Two Applications of the Gaussian Poincaré Inequality in the Shannon Theory

Two Applications of the Gaussian Poincaré Inequality in the Shannon Theory Two Applications of the Gaussian Poincaré Inequality in the Shannon Theory Vincent Y. F. Tan (Joint work with Silas L. Fong) National University of Singapore (NUS) 2016 International Zurich Seminar on

More information