Cut-Set Bound and Dependence Balance Bound


Lei Xiao (lxiao@nd.edu)

Date: 4 October 2006

Reading: Elements of Information Theory by Cover and Thomas [1, Section 14.10], and the paper by Hekstra and Willems [2].

I. THE CUT-SET BOUND

This section follows the discussion in [1, Section 14.10]. Consider $m$ nodes in a memoryless network. Each node $i$ transmits $X^{(i)}$ and receives $Y^{(i)}$. Communication from node $i$ to node $j$ takes place at rate $R^{(ij)}$. If we partition the network into a set of nodes $S$ and its complement $S^c$, the cut-set bound gives the following constraint on the achievable rates, in the sense that the probability of decoding error goes to zero as the codeword length approaches infinity.

Theorem 1: If the information rates $\{R^{(ij)}\}$ are achievable, then there exists some joint probability distribution $p(x^{(1)}, x^{(2)}, \ldots, x^{(m)})$ such that

$$\sum_{i \in S,\, j \in S^c} R^{(ij)} \le I\big(X^{(S)}; Y^{(S^c)} \,\big|\, X^{(S^c)}\big) \tag{1}$$

for all $S \subset \{1, 2, \ldots, m\}$.

Thus the total rate of information flow across a cut is bounded by the conditional mutual information across that cut. It is worth noting that the cut-set bound is usually not achievable, even for simple channels. For instance, the cut-set bound for the multiple access channel takes the same form as the capacity region, but it does not impose the independence constraint on the input distribution [1, Section 14.10].
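To make the bound concrete, here is a minimal Python sketch (an illustration added here, not part of the original notes) that evaluates the three cut-set constraints for a two-user binary adder multiple access channel $Y = X_1 + X_2$ with independent uniform inputs. The helper `cond_mutual_information` and the channel choice are assumptions for the example; the helper computes $I(A;B \mid C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)$ from a joint pmf.

```python
import math
from collections import defaultdict

def cond_mutual_information(joint, a_idx, b_idx, c_idx):
    """I(A; B | C) in bits from a joint pmf {outcome_tuple: prob};
    a_idx, b_idx, c_idx are tuples of coordinate indices.
    Uses I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)."""
    def H(idxs):
        m = defaultdict(float)
        for outcome, p in joint.items():
            m[tuple(outcome[i] for i in idxs)] += p
        return -sum(p * math.log2(p) for p in m.values() if p > 0)
    return H(a_idx + c_idx) + H(b_idx + c_idx) - H(a_idx + b_idx + c_idx) - H(c_idx)

# Binary adder MAC, Y = X1 + X2, with independent uniform inputs.
# Coordinates of each outcome tuple: (x1, x2, y).
joint = {(x1, x2, x1 + x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}

print(cond_mutual_information(joint, (0,), (2,), (1,)))   # R1 <= I(X1; Y | X2) = 1
print(cond_mutual_information(joint, (1,), (2,), (0,)))   # R2 <= I(X2; Y | X1) = 1
print(cond_mutual_information(joint, (0, 1), (2,), ()))   # R1+R2 <= I(X1,X2; Y) = 1.5
```

For this particular channel and input distribution the sum-rate cut of 1.5 bits happens to coincide with the sum capacity, but, as noted above, the cut-set bound is not achievable in general.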

II. THE DEPENDENCE BALANCE BOUND

We start with the concept of K-information (also known as multiple information), i.e., the mutual information among $K$ random variables, and some of its properties. The dependence balance bound is then derived using results on K-information.

A. The K-Information

Definition 1: The K-information $I_K$ is defined by

$$I_K(V_1; V_2; \cdots; V_K) = \sum_{k=1}^{K} (-1)^{k-1} \sum_{\substack{S \subseteq \{V_1, V_2, \ldots, V_K\} \\ |S| = k}} H(S), \tag{2}$$

where $V_1, V_2, \ldots, V_K$ are $K$ random variables (hence the name K-information), $H(S)$ denotes the joint entropy of the random variables in the set $S$, and $|S|$ is the cardinality of $S$. The conditional K-information is defined likewise:

$$I_K(V_1; V_2; \cdots; V_K \mid V_0) = \sum_{k=1}^{K} (-1)^{k-1} \sum_{\substack{S \subseteq \{V_1, V_2, \ldots, V_K\} \\ |S| = k}} H(S \mid V_0). \tag{3}$$

For the purpose of illustration, a few examples for small $K$ are given as follows:

$$I_1(A) = H(A) \tag{4a}$$
$$I_2(A; B) = H(A) + H(B) - H(A, B) = I(A; B) \tag{4b}$$
$$I_3(A; B; C) = H(A) + H(B) + H(C) - H(A, B) - H(A, C) - H(B, C) + H(A, B, C) \tag{4c}$$
$$\qquad\qquad\; = I(A; B) + I(C; B) - I(A, C; B) \tag{4d}$$

Some properties of the K-information:

- K-information is symmetric in its variables. This follows from the symmetry of the variables in the entropy function.
- In contrast to the mutual information between two random variables, K-information can in general be negative.
- Chaining property:
  $$I_K\big((V_1, V_2); V_3; \cdots; V_{K+1} \mid V_0\big) = I_K(V_1; V_3; \cdots; V_{K+1} \mid V_0) + I_K(V_2; V_3; \cdots; V_{K+1} \mid V_0, V_1) \tag{5}$$
- Recursive relation:
  $$I_K(V_1; V_2; \cdots; V_K \mid V_0) = I_{K-1}(V_1; V_2; \cdots; V_{K-1} \mid V_0) - I_{K-1}(V_1; V_2; \cdots; V_{K-1} \mid V_0, V_K) \tag{6}$$
  For the case $K = 3$, the above recursive relation gives
  $$I_3(A; B; C \mid D) = I_2(A; B \mid D) - I_2(A; B \mid C, D). \tag{7}$$

More discussion of (and references on) K-information can be found in Sunil Srinivasa's tutorial for EE80653 at http://www.nd.edu/~jnl/ee80653/tutorials/sunil.pdf (University of Notre Dame NetID and password required).
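The inclusion-exclusion definition (2) translates directly into code. Below is a short Python sketch (an illustration added here, with helper names of my own choosing) that computes $I_K$ from a joint pmf and uses the classic XOR triple to confirm that K-information can be negative: with $A, B$ i.i.d. Bernoulli(1/2) and $C = A \oplus B$, equation (7) gives $I_3(A;B;C) = I(A;B) - I(A;B \mid C) = 0 - 1 = -1$ bit.

```python
import itertools
import math

def entropy(pmf):
    """Shannon entropy in bits of a pmf given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(joint, idxs):
    """Marginal pmf of the coordinates listed in idxs, from a joint pmf
    keyed by tuples (v1, ..., vK)."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idxs)
        out[key] = out.get(key, 0.0) + p
    return out

def k_information(joint):
    """I_K(V1; ...; VK) via the inclusion-exclusion formula (2):
    sum over k of (-1)^(k-1) times the entropies of all size-k subsets."""
    K = len(next(iter(joint)))
    return sum((-1) ** (k - 1) * entropy(marginal(joint, idxs))
               for k in range(1, K + 1)
               for idxs in itertools.combinations(range(K), k))

# A, B i.i.d. Bernoulli(1/2), C = A xor B: pairwise independent but
# jointly dependent, so I_3(A; B; C) = -1 bit.
joint = {(a, b, a ^ b): 0.25 for a in (0, 1) for b in (0, 1)}
print(k_information(joint))  # -> -1.0
```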

B. The Dependence Balance Bound

We consider the constraint on the input distribution of a two-input channel with noiseless feedback to the encoders, as depicted in Fig. 1.

[Fig. 1. Two-input, one-output channel with feedback: encoders 1 and 2 map $W_1$ and $W_2$, together with the fed-back output, into the channel inputs $X_1^N$ and $X_2^N$, producing the common output $Y^N$.]

More specifically, the following are assumed:

- The two messages $W_1$ and $W_2$ at the inputs of the encoders are statistically independent, uniformly distributed, and of alphabet sizes $M_1$ and $M_2$, respectively. The messages are to be transmitted over the channel via $N$ transmissions.
- Each encoder is completely described by $N$ encoding functions, mapping the message ($W_1$ or $W_2$) and the previous channel outputs $Y^{n-1}$ into the next channel input (a toy sketch of such encoders follows this list), i.e.,
  $$X_{1n} = f_{1n}(W_1, Y^{n-1}) \tag{8a}$$
  $$X_{2n} = f_{2n}(W_2, Y^{n-1}), \quad n = 1, 2, \ldots, N. \tag{8b}$$
  Hence we have the Markov relation
  $$(W_1, W_2, Y^{n-1}) \to (X_{1n}, X_{2n}) \to Y_n. \tag{9}$$
- The channel is discrete memoryless: it takes the two inputs $X_1, X_2$ and generates the output $Y$ according to the conditional distribution $P(y \mid x_1, x_2)$.
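As referenced in the list above, here is a toy Python sketch of encoders with the functional form (8a)-(8b), over a noiseless binary adder channel $Y_n = X_{1n} + X_{2n}$. The strategies are arbitrary illustrations of my own, not an optimized code; the point is only that each channel input depends on that encoder's own message and the fed-back past outputs.

```python
# Toy feedback encoders with the structure of (8a)-(8b): X_{kn} is a
# function of (W_k, Y^{n-1}) only.

def f1(n, w1, y_past):
    # Send the message bit, then resend it only if the previous
    # common output was the ambiguous value y = 1.
    return w1 if n == 1 or y_past[-1] == 1 else 0

def f2(n, w2, y_past):
    return w2 if n == 1 or y_past[-1] == 1 else 0

def transmit(w1, w2, N=2):
    y_past = []
    for n in range(1, N + 1):
        x1, x2 = f1(n, w1, y_past), f2(n, w2, y_past)
        y_past.append(x1 + x2)  # common output, fed back to both encoders
    return y_past

for w1 in (0, 1):
    for w2 in (0, 1):
        print((w1, w2), "->", transmit(w1, w2))
```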

The dependence balance bound states that, for such an encoder pair with feedback, the input distribution must satisfy

$$\sum_{n=1}^{N} \Big( I(X_{1n}; X_{2n} \mid Y^n) - I(X_{1n}; X_{2n} \mid Y^{n-1}) \Big) = -\sum_{n=1}^{N} I_3(X_{1n}; X_{2n}; Y_n \mid Y^{n-1}) \ge 0. \tag{10}$$

Recall that mutual information measures the dependence between two random variables. The term $I(X_{1n}; X_{2n} \mid Y^{n-1})$ can be interpreted as the dependence that is consumed in transmission $n$, i.e., the amount of correlation the two separate encoders can exploit to guess each other's channel input prior to that transmission; the term $I(X_{1n}; X_{2n} \mid Y^n)$ can then be regarded as the dependence that is produced in that transmission. The dependence balance bound thus says that each code has to produce at least as much dependence as it consumes, hence the name dependence balance bound. The dependence balance bound also gives the K-information for $K = 3$ a physical interpretation as the dependence reduction [2].
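The equality in (10) is an identity that holds term by term; the Python sketch below (an added illustration, with helper names of my own) checks its single-letter form $I(X_1; X_2 \mid Y) - I(X_1; X_2) = -I_3(X_1; X_2; Y)$ numerically for the binary adder channel with correlated inputs.

```python
import math
from collections import defaultdict

def H(joint, idxs):
    """Joint entropy in bits of the coordinates idxs of a pmf keyed by tuples."""
    m = defaultdict(float)
    for outcome, p in joint.items():
        m[tuple(outcome[i] for i in idxs)] += p
    return -sum(q * math.log2(q) for q in m.values() if q > 0)

# Coordinates (x1, x2, y), binary adder y = x1 + x2, with correlated
# inputs: (X1, X2) uniform over {(0,0), (0,1), (1,0)}.
third = 1.0 / 3.0
joint = {(0, 0, 0): third, (0, 1, 1): third, (1, 0, 1): third}

I_consumed = H(joint, (0,)) + H(joint, (1,)) - H(joint, (0, 1))      # I(X1;X2)
I_produced = (H(joint, (0, 2)) + H(joint, (1, 2))
              - H(joint, (0, 1, 2)) - H(joint, (2,)))                # I(X1;X2|Y)
I3 = (H(joint, (0,)) + H(joint, (1,)) + H(joint, (2,))
      - H(joint, (0, 1)) - H(joint, (0, 2)) - H(joint, (1, 2))
      + H(joint, (0, 1, 2)))                                         # I_3(X1;X2;Y)

print(I_produced - I_consumed, -I3)  # the two values agree
```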

III. APPLICATION: THE TWO-WAY CHANNEL

The dependence balance bound was derived in [2] to provide a tight outer bound for the common-output two-way channel shown in Fig. 2.

[Fig. 2. Single-output two-way configuration: terminal 1 encodes $W_1$ into $X_1$ and decodes $\hat{W}_2$; terminal 2 encodes $W_2$ into $X_2$ and decodes $\hat{W}_1$; both observe the common output $Y$ of the two-input channel.]

The error probabilities are defined as

$$P_{e1} = \Pr\{\hat{W}_1 \ne W_1\} \tag{11}$$
$$P_{e2} = \Pr\{\hat{W}_2 \ne W_2\} \tag{12}$$

It can be shown that [2]

$$\log M_1 \le \sum_{n=1}^{N} I(X_{1n}; Y_n \mid X_{2n}, Y^{n-1}) + N\epsilon_1 \tag{13}$$
$$\log M_2 \le \sum_{n=1}^{N} I(X_{2n}; Y_n \mid X_{1n}, Y^{n-1}) + N\epsilon_2 \tag{14}$$

Note that the two-input channel with feedback must also satisfy the dependence balance constraint (10). Let $S$ be a random variable uniformly distributed over $\{1, \ldots, N\}$, and define

$$T = (S, Y^{S-1}) \tag{15}$$
$$X_1 = X_{1S} \tag{16}$$
$$X_2 = X_{2S} \tag{17}$$
$$Y = Y_S \tag{18}$$

The converse with the dependence balance bound can then be stated as follows.

Theorem 2: For each single-output two-way channel, the capacity region is contained in the set of rate pairs satisfying

$$0 \le R_1 \le I(X_1; Y \mid X_2, T) \tag{19}$$
$$0 \le R_2 \le I(X_2; Y \mid X_1, T) \tag{20}$$

for some $p(t, x_1, x_2, y) = p(t, x_1, x_2)\, p(y \mid x_1, x_2)$ satisfying $I(X_1; X_2 \mid T) \le I(X_1; X_2 \mid Y, T)$. By the support lemma, the alphabet size of $T$ can be bounded by three.
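As a sanity check on Theorem 2, the following sketch (an added illustration assuming, for simplicity, a degenerate time-sharing variable with $|T| = 1$, so $T$ is dropped throughout; function names are my own) evaluates the rate bounds (19)-(20) and the dependence balance condition for the binary adder channel used earlier. It shows how the condition admits independent inputs but excludes fully correlated ones from the maximization.

```python
import math
from collections import defaultdict

def H(joint, idxs):
    m = defaultdict(float)
    for outcome, p in joint.items():
        m[tuple(outcome[i] for i in idxs)] += p
    return -sum(q * math.log2(q) for q in m.values() if q > 0)

def cmi(joint, a, b, c):
    """I(A; B | C) from a joint pmf; a, b, c are tuples of coordinate indices."""
    return H(joint, a + c) + H(joint, b + c) - H(joint, a + b + c) - H(joint, c)

def theorem2_point(p_x1x2):
    """Rate bounds (19)-(20) and the dependence balance condition for the
    noiseless binary adder Y = X1 + X2, with T fixed to a single value."""
    joint = {(x1, x2, x1 + x2): p for (x1, x2), p in p_x1x2.items() if p > 0}
    r1 = cmi(joint, (0,), (2,), (1,))              # R1 <= I(X1; Y | X2)
    r2 = cmi(joint, (1,), (2,), (0,))              # R2 <= I(X2; Y | X1)
    balanced = cmi(joint, (0,), (1,), ()) <= cmi(joint, (0,), (1,), (2,)) + 1e-12
    return r1, r2, balanced

# Independent uniform inputs satisfy the constraint, since I(X1;X2) = 0:
print(theorem2_point({(0, 0): .25, (0, 1): .25, (1, 0): .25, (1, 1): .25}))
# Fully correlated inputs X1 = X2 violate it (1 > 0) and are excluded:
print(theorem2_point({(0, 0): .5, (1, 1): .5}))
```

The second point illustrates the role of the constraint: $X_1 = X_2$ gives $I(X_1; X_2 \mid T) = 1 > 0 = I(X_1; X_2 \mid Y, T)$, so this input distribution cannot arise from any feedback code and is removed from the region over which (19)-(20) are maximized.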

IV. COMPARISON OF THE TWO BOUNDS

The cut-set bound and the dependence balance bound differ in a number of respects. The following is a comparison between the two bounds.

- The cut-set bound applies to discrete memoryless networks of any form (for example, with or without feedback, and with scalar or vector channel outputs). The dependence balance bound derived in [2] applies only to two-input, single-output channels with noiseless feedback to the two individual encoders. Kramer generalized the dependence balance bound to more than two inputs in [3]; however, a single channel output, fed back to the encoders, is still required to apply the dependence balance bound.
- The cut-set bound directly gives an upper bound on the rates, while the dependence balance bound only provides an additional constraint on the admissible input distributions, which helps tighten the outer bound by shrinking the region over which one maximizes. In this sense, the cut-set bound and the dependence balance bound constrain different quantities.

- For the multiple access channel with feedback, the cut-set bound coincides with the capacity region in form [1], but it does not restrict the input distribution to a product form. Using the dependence balance bound and a parallel channel, it is shown in [2] that the dependence balance bound recovers the product-form constraint originally derived, by a different approach, in [4].

V. RECENT DEVELOPMENTS

Refined and generalized dependence balance bounds are developed and applied to the K-user Gaussian multiple access channel with feedback in [5]. Kramer and Savari studied networks of two-way channels with network coding in [6] and provided a new cut-set bound for such networks.

REFERENCES

[1] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley-Interscience, 1991.

[2] A. P. Hekstra and F. M. J. Willems, "Dependence balance bounds for single-output two-way channels," IEEE Trans. Inform. Theory, vol. 35, no. 1, pp. 44-53, Jan. 1989.

[3] G. Kramer, "Capacity results for the discrete memoryless network," IEEE Trans. Inform. Theory, vol. 49, no. 1, pp. 4-21, Jan. 2003.

[4] F. M. J. Willems, "The feedback capacity region of a class of discrete memoryless multiple access channels," IEEE Trans. Inform. Theory, vol. 28, no. 1, pp. 93-95, Jan. 1982.

[5] G. Kramer and M. Gastpar, "Dependence balance and the Gaussian multiaccess channel with feedback," in Proc. IEEE Information Theory Workshop, Punta del Este, Uruguay, Mar. 2006.

[6] G. Kramer and S. A. Savari, "Cut sets and information flow in networks of two-way channels," in Proc. IEEE Int. Symp. Information Theory, Chicago, IL, June 2004.