Lecture 1: The Multiple Access Channel. Copyright G. Caire 12


Outline: Two-user MAC. The Gaussian case. The K-user case. Polymatroid structure and resource allocation problems.

Two-user MAC. [Figure: block diagram — encoder Tx1 maps message $W_1$ to $X_1$ and encoder Tx2 maps $W_2$ to $X_2$; the channel $P_{Y|X_1,X_2}$ produces $Y$; the receiver Rx outputs the estimates $(\hat{W}_1, \hat{W}_2)$.]

The networking approach to the MAC. The networking approach to the MAC is to avoid multiuser interference in the presence of the users' random access. Examples: Aloha, CSMA, 802.11, Packet-Reservation Multiple Access, TDMA/FDMA. Another line of thought focused on signal design that makes interference behave like noise. Examples: CDMA, UWB. Information theory shows that neither approach is optimal in general (although each can be optimal or near-optimal in some cases).

Definitions. Let $\{\mathcal{X}_1 \times \mathcal{X}_2, P_{Y|X_1,X_2}, \mathcal{Y}\}$ denote a stationary memoryless MAC. A $(2^{nR_1}, 2^{nR_2}, n)$ MAC code is defined by: two message sets $\mathcal{M}_1 = \{1,\dots,2^{nR_1}\}$ and $\mathcal{M}_2 = \{1,\dots,2^{nR_2}\}$; two encoding functions $f_1 : \mathcal{M}_1 \to \mathcal{X}_1^n$ and $f_2 : \mathcal{M}_2 \to \mathcal{X}_2^n$, such that $\mathbf{x}_1(m_1) = f_1(m_1)$ and $\mathbf{x}_2(m_2) = f_2(m_2)$; a decoding function $g : \mathcal{Y}^n \to \mathcal{M}_1 \times \mathcal{M}_2$.

Capacity region. The average probability of error is defined as $P_e(f_1, f_2, g) = \mathbb{P}\{(\hat{M}_1, \hat{M}_2) \neq (M_1, M_2)\}$, where $M_1, M_2$ are independent and uniformly distributed over $\mathcal{M}_1$ and $\mathcal{M}_2$, respectively, and where $X_1^n = \mathbf{x}_1(M_1)$, $X_2^n = \mathbf{x}_2(M_2)$, and $(\hat{M}_1, \hat{M}_2) = g(Y^n)$. The MAC capacity region $\mathcal{C}$ is the closure of the set of all pairs $(R_1, R_2)$ for which there exists a sequence of codes $(f_1^{(n)}, f_2^{(n)}, g^{(n)})$ with rates $(R_1, R_2)$ and $P_e(f_1^{(n)}, f_2^{(n)}, g^{(n)}) \to 0$ as $n \to \infty$.

Simple inner and outer bounds. Time-sharing inner bound (TDMA): let $C_1 = \max_{P_{X_1},\, x_2 \in \mathcal{X}_2} I(X_1; Y \mid X_2 = x_2)$ and $C_2 = \max_{P_{X_2},\, x_1 \in \mathcal{X}_1} I(X_2; Y \mid X_1 = x_1)$; then the following region is achievable: $\frac{R_1}{C_1} + \frac{R_2}{C_2} \le 1$. Sum-rate outer bound: $R_1 + R_2 \le \max_{P_{X_1} P_{X_2}} I(X_1, X_2; Y)$, where $(X_1, X_2) \sim P_{X_1}(x_1) P_{X_2}(x_2)$.
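As a numerical illustration (not from the slides), these bounds can be evaluated for the binary adder MAC $Y = X_1 + X_2$ with uniform inputs; the channel choice and the helper `mutual_info` are assumptions made for this example:

```python
import numpy as np

def mutual_info(p_xy):
    """I(X;Y) in bits from a joint pmf array p_xy[x, y]."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask])).sum())

# Binary adder MAC: Y = X1 + X2 with independent uniform binary inputs.
p = np.zeros((2, 2, 3))  # p[x1, x2, y]
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1, x2, x1 + x2] = 0.25

# C1 = I(X1; Y | X2 = 0): with X2 known, Y reveals X1 exactly, so C1 = 1 bit.
C1 = mutual_info(p[:, 0, :] / p[:, 0, :].sum())
# Sum-rate outer bound with uniform inputs: I(X1, X2; Y) = H(Y).
sum_rate = mutual_info(p.reshape(4, 3))

print(C1, sum_rate)  # → 1.0 1.5
```

Here $C_1 = C_2 = 1$, so the time-sharing bound only gives $R_1 + R_2 \le 1$, strictly inside the sum-rate outer bound of $1.5$ bits: TDMA is suboptimal for this channel.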

What does the capacity region look like? [Figure: inner and outer bounds on the capacity region of a DM-MAC — the time-division inner bound is the line joining $(C_1, 0)$ and $(0, C_2)$; the sum-rate outer bound is $R_1 + R_2 \le C_{12}$.] Notice: from a time-sharing argument, $\mathcal{C}$ must be a convex region.

Capacity region of the 2-user MAC. Theorem 1 (MAC capacity region). The MAC capacity region is the convex closure of the set of rate pairs satisfying $R_1 \le I(X_1; Y \mid X_2, Q)$, $R_2 \le I(X_2; Y \mid X_1, Q)$, $R_1 + R_2 \le I(X_1, X_2; Y \mid Q)$, for some $P_Q P_{X_1|Q} P_{X_2|Q}$. [Figure: the pentagon, with $R_1$-axis values $I(X_1; Y)$ and $I(X_1; Y \mid X_2)$ and $R_2$-axis values $I(X_2; Y)$ and $I(X_2; Y \mid X_1)$.]
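The pentagon of Theorem 1 can be computed for a concrete channel; the binary adder MAC and the entropy helper `H` below are illustrative choices, not part of the lecture. Here $Q$ is degenerate and $Y$ is a deterministic function of $(X_1, X_2)$, so $H(Y \mid X_1, X_2) = 0$ and each mutual information reduces to a conditional entropy of $Y$:

```python
import numpy as np

def H(p):
    """Entropy in bits of a pmf given as an array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Binary adder MAC Y = X1 + X2, independent uniform inputs, Q degenerate.
p = np.zeros((2, 2, 3))  # p[x1, x2, y]
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1, x2, x1 + x2] = 0.25

# Since H(Y | X1, X2) = 0: I(X1; Y | X2) = H(Y | X2) = H(X2, Y) - H(X2), etc.
I1 = H(p.sum(axis=0).ravel()) - H(p.sum(axis=(0, 2)))  # I(X1; Y | X2)
I2 = H(p.sum(axis=1).ravel()) - H(p.sum(axis=(1, 2)))  # I(X2; Y | X1)
I12 = H(p.sum(axis=(0, 1)))                            # I(X1, X2; Y) = H(Y)

print(I1, I2, I12)  # → 1.0 1.0 1.5
```

This reproduces the well-known adder-MAC pentagon $R_1 \le 1$, $R_2 \le 1$, $R_1 + R_2 \le 1.5$.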

Proof: Achievability. Fix $P_Q$, $P_{X_1|Q}$ and $P_{X_2|Q}$. We shall show that any point in the interior of $\mathcal{R}(X_1, X_2, Q)$ (the pentagon) is achievable. Codebook generation: generate a typical $\mathbf{q}$ with i.i.d. components $\sim P_Q$, and independently two codebooks $\{\mathbf{x}_1(m_1) : m_1 \in [1:2^{nR_1}]\}$ and $\{\mathbf{x}_2(m_2) : m_2 \in [1:2^{nR_2}]\}$ with independent entries $\sim P_{X_1|Q}(\cdot \mid q_i)$ and $\sim P_{X_2|Q}(\cdot \mid q_i)$, respectively. Encoding: to send message $m_1$, encoder 1 transmits $\mathbf{x}_1(m_1)$; similarly, encoder 2 sends $\mathbf{x}_2(m_2)$. Decoding: declare $(\hat{m}_1, \hat{m}_2)$ if this is the unique index pair such that $(\mathbf{x}_1(\hat{m}_1), \mathbf{x}_2(\hat{m}_2), \mathbf{y}) \in T_\epsilon^{(n)}(X_1, X_2, Y \mid \mathbf{q})$; otherwise declare an error.

By symmetry we have $P_e = \mathbb{P}(\mathcal{E} \mid M_1 = 1, M_2 = 1)$, where $\mathcal{E} = \{g(Y^n) \neq (1,1)\}$. When analyzing the random coding ensemble average error probability, $(Q^n, X_1^n(m_1), X_2^n(m_2), Y^n)$ are random vectors. We consider the possible joint conditional probability distributions of $(X_1^n(m_1), X_2^n(m_2), Y^n)$ given $Q^n$:
$m_1 = 1$, $m_2 = 1$: $P_{X_1|Q} P_{X_2|Q} P_{Y|X_1,X_2}$;
$m_1 \neq 1$, $m_2 = 1$: $P_{X_1|Q} P_{X_2|Q} P_{Y|X_2,Q}$;
$m_1 = 1$, $m_2 \neq 1$: $P_{X_1|Q} P_{X_2|Q} P_{Y|X_1,Q}$;
$m_1 \neq 1$, $m_2 \neq 1$: $P_{X_1|Q} P_{X_2|Q} P_{Y|Q}$.

The error event $\mathcal{E}$ is contained in the union of the following events:
$\mathcal{E}_1 = \{(Q^n, X_1^n(1), X_2^n(1), Y^n) \notin T_\epsilon^{(n)}(Q, X_1, X_2, Y)\}$;
$\mathcal{E}_2 = \{(Q^n, X_1^n(m_1), X_2^n(1), Y^n) \in T_\epsilon^{(n)}(Q, X_1, X_2, Y) \text{ for some } m_1 \neq 1\}$;
$\mathcal{E}_3 = \{(Q^n, X_1^n(1), X_2^n(m_2), Y^n) \in T_\epsilon^{(n)}(Q, X_1, X_2, Y) \text{ for some } m_2 \neq 1\}$;
$\mathcal{E}_4 = \{(Q^n, X_1^n(m_1), X_2^n(m_2), Y^n) \in T_\epsilon^{(n)}(Q, X_1, X_2, Y) \text{ for some } m_1 \neq 1, m_2 \neq 1\}$.

We bound each term: 1. By the LLN, $\mathbb{P}(\mathcal{E}_1 \mid M_1 = 1, M_2 = 1) \to 0$. 2. $\mathbb{P}(\mathcal{E}_2 \mid M_1 = 1, M_2 = 1) \le 2^{-n(I(X_1; Y \mid X_2, Q) - R_1 - \delta(\epsilon))}$. 3. $\mathbb{P}(\mathcal{E}_3 \mid M_1 = 1, M_2 = 1) \le 2^{-n(I(X_2; Y \mid X_1, Q) - R_2 - \delta(\epsilon))}$. 4. $\mathbb{P}(\mathcal{E}_4 \mid M_1 = 1, M_2 = 1) \le 2^{-n(I(X_1, X_2; Y \mid Q) - R_1 - R_2 - \delta(\epsilon))}$. The result follows from the union bound.

Proof: Converse. Assume that there is a sequence of $(R_1, R_2, n)$-codes such that $P_e^{(n)} \to 0$ as $n \to \infty$. The $n$-letter multivariate joint distribution is given by $(M_1, M_2, X_1^n, X_2^n, Y^n) \sim 2^{-n(R_1 + R_2)} \prod_{i=1}^n P_{Y|X_1,X_2}(y_i \mid x_{1i}, x_{2i})\, 1\{X_1^n = \mathbf{x}_1(m_1)\}\, 1\{X_2^n = \mathbf{x}_2(m_2)\}$.

Sum rate: from Fano's inequality we have
$H(M_1, M_2 \mid Y^n) \le 1 + n(R_1 + R_2) P_e^{(n)} \le n\epsilon_n$,
$H(M_1 \mid Y^n, M_2) \le H(M_1, M_2 \mid Y^n) \le n\epsilon_n$,
$H(M_2 \mid Y^n, M_1) \le H(M_1, M_2 \mid Y^n) \le n\epsilon_n$.
Therefore:
$n(R_1 + R_2) = H(M_1, M_2) = H(M_1, M_2) - H(M_1, M_2 \mid Y^n) + H(M_1, M_2 \mid Y^n) \le I(M_1, M_2; Y^n) + n\epsilon_n$,
$nR_1 = H(M_1 \mid M_2) = H(M_1 \mid M_2) - H(M_1 \mid Y^n, M_2) + H(M_1 \mid Y^n, M_2) \le I(M_1; Y^n \mid M_2) + n\epsilon_n$,
$nR_2 = H(M_2 \mid M_1) = H(M_2 \mid M_1) - H(M_2 \mid Y^n, M_1) + H(M_2 \mid Y^n, M_1) \le I(M_2; Y^n \mid M_1) + n\epsilon_n$.

Sum rate inequality:
$n(R_1 + R_2) \le I(M_1, M_2; Y^n) + n\epsilon_n$
$= \sum_{i=1}^n I(M_1, M_2; Y_i \mid Y^{i-1}) + n\epsilon_n$
$= \sum_{i=1}^n I(M_1, M_2, X_{1i}, X_{2i}; Y_i \mid Y^{i-1}) + n\epsilon_n$ (since $X_{1i}, X_{2i}$ are functions of $(M_1, M_2)$)
$= \sum_{i=1}^n \left[ I(X_{1i}, X_{2i}; Y_i \mid Y^{i-1}) + I(M_1, M_2; Y_i \mid Y^{i-1}, X_{1i}, X_{2i}) \right] + n\epsilon_n$
$\le \sum_{i=1}^n I(X_{1i}, X_{2i}; Y_i) + n\epsilon_n$,
where the last step holds because the channel is memoryless, so $I(M_1, M_2; Y_i \mid Y^{i-1}, X_{1i}, X_{2i}) = 0$, and conditioning reduces entropy, so $H(Y_i \mid Y^{i-1}) \le H(Y_i)$.

Individual rate inequalities:
$nR_1 \le I(M_1; Y^n \mid M_2) + n\epsilon_n$
$= \sum_{i=1}^n I(M_1; Y_i \mid Y^{i-1}, M_2) + n\epsilon_n$
$= \sum_{i=1}^n I(M_1; Y_i \mid Y^{i-1}, M_2, X_{2i}) + n\epsilon_n$
$\le \sum_{i=1}^n I(M_1, M_2, Y^{i-1}; Y_i \mid X_{2i}) + n\epsilon_n$
$= \sum_{i=1}^n I(X_{1i}, M_1, M_2, Y^{i-1}; Y_i \mid X_{2i}) + n\epsilon_n$
$= \sum_{i=1}^n \left[ I(X_{1i}; Y_i \mid X_{2i}) + I(M_1, M_2, Y^{i-1}; Y_i \mid X_{1i}, X_{2i}) \right] + n\epsilon_n$
$= \sum_{i=1}^n I(X_{1i}; Y_i \mid X_{2i}) + n\epsilon_n.$

Multiletter rate inequalities. We arrive at
$R_1 \le \frac{1}{n} \sum_{i=1}^n I(X_{1i}; Y_i \mid X_{2i}) + \epsilon_n$,
$R_2 \le \frac{1}{n} \sum_{i=1}^n I(X_{2i}; Y_i \mid X_{1i}) + \epsilon_n$,
$R_1 + R_2 \le \frac{1}{n} \sum_{i=1}^n I(X_{1i}, X_{2i}; Y_i) + \epsilon_n$.
Notice that these mutual informations depend on the marginal distributions at time $i$ induced by the specific sequence of codes that we have assumed to exist.

From multiletter to single-letter. Time-sharing argument: define a new random variable $Q \sim \mathrm{Uniform}[1:n]$, independent of $(X_1^n, X_2^n, Y^n)$. We can write $R_1 \le \frac{1}{n} \sum_{i=1}^n I(X_{1i}; Y_i \mid X_{2i}) + \epsilon_n = I(X_{1Q}; Y_Q \mid X_{2Q}, Q) + \epsilon_n$. Now we identify $X_1 = X_{1Q}$, $X_2 = X_{2Q}$, $Y = Y_Q$ and, letting $n \to \infty$, we obtain $R_1 \le I(X_1; Y \mid X_2, Q)$, $R_2 \le I(X_2; Y \mid X_1, Q)$, $R_1 + R_2 \le I(X_1, X_2; Y \mid Q)$ for some $(Q, X_1, X_2) \sim P_Q P_{X_1|Q} P_{X_2|Q}$.

The Gaussian MAC. The Gaussian MAC is described by $Y = g_1 X_1 + g_2 X_2 + Z$. Real case: $\mathcal{X}_1 = \mathcal{X}_2 = \mathcal{Y} = \mathbb{R}$, $g_1, g_2 \in \mathbb{R}$, $Z \sim \mathcal{N}(0, N_0/2)$. Complex circularly symmetric case: $\mathcal{X}_1 = \mathcal{X}_2 = \mathcal{Y} = \mathbb{C}$, $g_1, g_2 \in \mathbb{C}$, $Z \sim \mathcal{CN}(0, N_0)$. The input constraints are given by $\frac{1}{n} \sum_{i=1}^n |x_{k,i}(m_k)|^2 \le E_{s_k}$ for all $m_k \in [1:2^{nR_k}]$, $k = 1, 2$.

For the real case, we define $S_1 = 2 g_1^2 E_{s_1}/N_0$ and $S_2 = 2 g_2^2 E_{s_2}/N_0$, and the function $C(S) = \frac{1}{2}\log(1+S)$. For the complex case, we define $S_1 = |g_1|^2 E_{s_1}/N_0$ and $S_2 = |g_2|^2 E_{s_2}/N_0$, and the function $C(S) = \log(1+S)$. Choosing $X_1, X_2$ Gaussian and independent, we find the region $R_1 \le C(S_1)$, $R_2 \le C(S_2)$, $R_1 + R_2 \le C(S_1 + S_2)$. It is easy to see that this is in fact the capacity region (no convex hull is needed, since it is achieved by a single input distribution).
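A quick numerical check (the SNR values are illustrative, not from the slides) confirms that the successive-decoding corner of the pentagon lies exactly on the sum-rate face, i.e. $C(S_1/(1+S_2)) + C(S_2) = C(S_1 + S_2)$:

```python
import math

def C(snr):
    """Real-case Gaussian capacity in bits per channel use."""
    return 0.5 * math.log2(1 + snr)

S1, S2 = 10.0, 5.0  # receive SNRs (illustrative values)

# Pentagon bounds.
R1_max, R2_max, Rsum_max = C(S1), C(S2), C(S1 + S2)

# Corner point: decode user 1 first treating user 2's codeword as noise,
# then decode user 2 interference-free (successive decoding).
R1_corner, R2_corner = C(S1 / (1 + S2)), C(S2)

# The corner lies exactly on the sum-rate face.
assert abs(R1_corner + R2_corner - Rsum_max) < 1e-9
```

This identity is why successive decoding plus time sharing between the two corners achieves the whole dominant face of the pentagon.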

The Wyner-Cover pentagon. [Figure: pentagon with $R_1$-axis values $C(S_1/(1+S_2))$ and $C(S_1)$, and $R_2$-axis values $C(S_2/(1+S_1))$ and $C(S_2)$; the corner points are $(C(S_1), C(S_2/(1+S_1)))$ and $(C(S_1/(1+S_2)), C(S_2))$.]

Comparison with multiaccess techniques. [Figure: the two-user pentagon at high SNR and at low SNR, compared with the TDMA time-sharing line and with the point obtained by treating the other codeword as noise, $(C(S_1/(1+S_2)), C(S_2/(1+S_1)))$; the axis values are $C(S_1)$, $C(S_2)$, $C(S_1/(1+S_2))$, $C(S_2/(1+S_1))$.]
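The qualitative behavior in the figure can be reproduced numerically; the SNR values and the helper `symmetric_points` below are assumptions chosen for illustration:

```python
import math

def C(snr):
    """Real-case Gaussian capacity in bits per channel use."""
    return 0.5 * math.log2(1 + snr)

def symmetric_points(S):
    """Symmetric per-user rates for two users with equal receive SNR S."""
    capacity = C(2 * S) / 2   # symmetric point on the sum-rate face
    tdma = C(S) / 2           # each user transmits in half of the slots
    tin = C(S / (1 + S))      # treating the other codeword as noise
    return capacity, tdma, tin

cap_hi, tdma_hi, tin_hi = symmetric_points(100.0)  # high SNR
cap_lo, tdma_lo, tin_lo = symmetric_points(0.01)   # low SNR
# At high SNR both simple schemes fall short of the sum-rate face;
# at low SNR, treating the other codeword as noise is nearly optimal.
```

Running this with the values above, the gap between `cap_hi` and `tin_hi` exceeds one bit, while `cap_lo - tin_lo` is negligible, matching the two panels of the figure.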

Generalization to $K$ users. Theorem 2 ($K$-user MAC capacity region). The capacity region of the $K$-user MAC is given by the convex closure of the set of rates satisfying $\sum_{k \in \mathcal{K}} R_k \le I(X_{\mathcal{K}}; Y \mid X_{\mathcal{K}^c}, Q)$ for all $\mathcal{K} \subseteq [1:K]$ (we use the notation $X_{\mathcal{K}} = \{X_k : k \in \mathcal{K}\}$), for some $P_Q \prod_{k=1}^K P_{X_k|Q}$.

$K$-user Gaussian MAC. The Gaussian MAC with $K$ users is described by $Y = \sum_{k=1}^K g_k X_k + Z$, with $\frac{1}{n} \sum_{i=1}^n |x_{k,i}(m_k)|^2 \le E_{s_k}$ for all $k = 1,\dots,K$ and all messages $m_k$. Letting $S_k = 2 g_k^2 E_{s_k}/N_0$ (real case) or $S_k = |g_k|^2 E_{s_k}/N_0$ (complex case), we have $\mathcal{C}(S_1,\dots,S_K) = \left\{ \sum_{k \in \mathcal{K}} R_k \le C\!\left(\sum_{k \in \mathcal{K}} S_k\right) \;\forall\, \mathcal{K} \subseteq \{1,\dots,K\} \right\}$.

Resource allocation on the Gaussian MAC. In resource allocation problems we care about the transmit power (not the receive SNR): let $\gamma_k = S_k / |g_k|^2$, such that $\sum_{k \in \mathcal{K}} R_k \le C\!\left(\sum_{k \in \mathcal{K}} |g_k|^2 \gamma_k\right)$. We are interested in solving problems of the following form: maximize $\sum_{k=1}^K w_k R_k$, subject to $R \in \mathcal{C}$ and $\sum_{k=1}^K \gamma_k \le \Gamma$.

Polymatroid structure of $\mathcal{C}$. Definition 1 (Submodular rank function). Let $f : 2^{[1:K]} \to \mathbb{R}_+$ denote a set function with the following properties: 1. $f(\emptyset) = 0$ (normalized). 2. $f(\mathcal{K}) \ge f(\mathcal{K}')$ if $\mathcal{K} \supseteq \mathcal{K}'$ (non-decreasing). 3. $f(\mathcal{K}) + f(\mathcal{K}') \ge f(\mathcal{K} \cup \mathcal{K}') + f(\mathcal{K} \cap \mathcal{K}')$ (submodular). Definition 2 (Polymatroid). The polyhedron defined by the inequalities $\mathbf{x} \in \mathbb{R}_+^K$ and $\sum_{k \in \mathcal{K}} x_k \le f(\mathcal{K})$ for all $\mathcal{K} \subseteq [1:K]$, where $f : 2^{[1:K]} \to \mathbb{R}_+$ is a submodular rank function, is called a polymatroid.
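As a sketch (the SNR values and helper names are illustrative assumptions), one can verify numerically that the rank function $f(\mathcal{K}) = C\!\left(\sum_{k \in \mathcal{K}} S_k\right)$ of the Gaussian MAC region satisfies the three polymatroid properties:

```python
import math
from itertools import chain, combinations

def C(snr):
    """Real-case Gaussian capacity in bits per channel use."""
    return 0.5 * math.log2(1 + snr)

S = [3.0, 1.0, 0.5]  # per-user receive SNRs (illustrative values)

def f(subset):
    """Candidate rank function of the K-user Gaussian MAC region."""
    return C(sum(S[k] for k in subset))

K = range(len(S))
subsets = [set(c) for c in
           chain.from_iterable(combinations(K, r) for r in range(len(S) + 1))]

assert f(set()) == 0.0  # normalized
for A in subsets:
    for B in subsets:
        if A <= B:
            assert f(A) <= f(B) + 1e-12  # non-decreasing
        # submodularity: f(A) + f(B) >= f(A ∪ B) + f(A ∩ B)
        assert f(A) + f(B) + 1e-12 >= f(A | B) + f(A & B)
```

Submodularity here follows from the fact that $C$ is concave and increasing with $C(0) = 0$, applied to the additive set function $\mathcal{K} \mapsto \sum_{k \in \mathcal{K}} S_k$.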


Polymatroids and linear programming. Suppose that we are interested in the problem of maximizing the weighted rate-sum: maximize $\sum_{k=1}^K w_k R_k$, subject to $R \in \mathcal{C}$. This is a linear program over a polymatroid, whose $K!$ relevant vertices correspond to all possible decoding orders. Using the fact that $\mathcal{C}$ is a polymatroid, we immediately have the optimal solution. Let $\pi$ denote the permutation of $\{1,\dots,K\}$ that sorts the weights in increasing order: $w_{\pi_1} \le w_{\pi_2} \le \cdots \le w_{\pi_K}$.

Then, the weighted rate-sum is maximized by the vertex $R^\pi$, obtained by decoding in the order $\pi$, i.e., $\pi_1 \to \pi_2 \to \cdots \to \pi_K$. Namely, we have
$R_{\mathrm{sum}}(w) = \sum_{k=1}^K w_{\pi_k} C\!\left(\frac{|g_{\pi_k}|^2 \gamma_{\pi_k}}{1 + \sum_{j=k+1}^K |g_{\pi_j}|^2 \gamma_{\pi_j}}\right) = \sum_{k=1}^K (w_{\pi_k} - w_{\pi_{k-1}}) C\!\left(\sum_{j=k}^K |g_{\pi_j}|^2 \gamma_{\pi_j}\right)$,
where $w_{\pi_0} = 0$. Notice that the solution is a sum of concave functions of $\gamma_1,\dots,\gamma_K$, which can be further maximized subject to the constraint $\sum_{k=1}^K \gamma_k \le \Gamma$, if required.
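A minimal sketch of this greedy vertex solution, assuming the normalization $N_0/2 = 1$ so that the received SNR of user $k$ is $|g_k|^2 \gamma_k$ (the function name and example values are illustrative):

```python
import math

def C(snr):
    """Real-case Gaussian capacity in bits per channel use."""
    return 0.5 * math.log2(1 + snr)

def max_weighted_sum_rate(w, g2, gamma):
    """Vertex of the Gaussian MAC polymatroid maximizing sum_k w_k * R_k.
    w: weights, g2: channel gains |g_k|^2, gamma: transmit powers
    (normalized so the received SNR of user k is g2[k] * gamma[k])."""
    K = len(w)
    order = sorted(range(K), key=lambda k: w[k])  # decode in increasing-weight order
    R = [0.0] * K
    for pos, k in enumerate(order):
        # user k is decoded at position pos; users decoded later are interference
        interference = sum(g2[j] * gamma[j] for j in order[pos + 1:])
        R[k] = C(g2[k] * gamma[k] / (1 + interference))
    return R

R = max_weighted_sum_rate(w=[1.0, 2.0], g2=[1.0, 1.0], gamma=[10.0, 5.0])
# The vertex rates telescope: their sum equals the sum-rate bound C(S1 + S2).
assert abs(sum(R) - C(15.0)) < 1e-9
```

The sort implements the greedy rule from the slide: the user with the largest weight is decoded last and therefore sees no interference, which is exactly the telescoping structure of the second expression for $R_{\mathrm{sum}}(w)$.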

End of Lecture 1