Optimal Independent Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels


Optimal Independent Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels

Bike Xie, Student Member, IEEE, and Richard D. Wesel, Senior Member, IEEE

arXiv:0811.4162v1 [cs.IT] 25 Nov 2008

Abstract

Let $X \to Y \to Z$ be a discrete memoryless degraded broadcast channel (DBC) with marginal transition probability matrices $T_{YX}$ and $T_{ZX}$. Denote by $\mathbf{q}$ the distribution of the channel input $X$. For any given $\mathbf{q}$ and any $H(Y|X) \le s \le H(Y)$, where $H(Y|X)$ is the conditional entropy of $Y$ given $X$ and $H(Y)$ is the entropy of $Y$, define the function $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ as the infimum of $H(Z|U)$, the conditional entropy of $Z$ given $U$, with respect to all discrete random variables $U$ such that a) $H(Y|U) = s$, and b) $U$ and $Y,Z$ are conditionally independent given $X$. This paper studies the function $F^*$, its properties and its calculation. It then applies these results to several classes of DBCs, including the broadcast Z channel, the input-symmetric DBC, which includes the degraded broadcast group-addition channel, and the discrete degraded multiplication channel. This paper provides independent encoding schemes and demonstrates that each achieves the boundary of the capacity region for the corresponding class of DBCs. The paper first represents the capacity region of the DBC $X \to Y \to Z$ with the function $F^*_{T_{YX},T_{ZX}}$. Secondly, it shows that the OR approach, an independent encoding scheme, achieves the optimal boundary of the capacity region for the multi-user broadcast Z channel. It then studies the input-symmetric DBC, introduces the permutation approach, an independent encoding scheme, for the input-symmetric DBC, and proves its optimality. As a consequence, the group-addition approach achieves the optimal boundary of the capacity region for the degraded broadcast group-addition channel. Finally, the paper studies the discrete degraded broadcast multiplication channel and shows that the multiplication approach achieves the boundary of the capacity region for the discrete degraded broadcast multiplication channel.

Keywords: Degraded broadcast channel, independent encoding, broadcast Z channel, input-symmetric, group-addition channel, multiplication channel.

I. Introduction

In the 1970s, Cover [1], Bergmans [2] and Gallager [3] established the capacity region for degraded broadcast channels. The optimal transmission strategy to achieve the boundary of the capacity region for degraded broadcast channels is generally a joint encoding scheme. In particular, the data sent to the user with the most degraded channel is encoded first.

This work was supported by the Defense Advanced Research Projects Agency, SPAWAR Systems Center, San Diego, California under Grant N. The authors are with the Electrical Engineering Department, University of California, Los Angeles, CA USA (e-mail: xbk@ee.ucla.edu; wesel@ee.ucla.edu).

Given the encoded bits for that user, an appropriate codebook for the second most degraded channel user is selected, and so forth. The joint encoding scheme is potentially too complex to implement. Fortunately, combining independently encoded streams, one for each user, can achieve the optimal boundary of the capacity region for some broadcast channels, including broadcast Gaussian channels [4], broadcast binary-symmetric channels [2] [5] [6] [7], discrete additive degraded broadcast channels [8] and two-user broadcast Z channels [9] [10].

Shannon's entropy power inequality (EPI) [11] gives a lower bound on the differential entropy of the sum of independent random variables. In Bergmans's remarkable paper [4], he applied the EPI to establish a converse showing the optimality of the independent encoding scheme given by [1] [2] for broadcast Gaussian channels. Mrs. Gerber's Lemma [12] provides a lower bound on the entropy of a sequence of binary-symmetric channel outputs. Wyner and Ziv obtained Mrs. Gerber's Lemma and applied it to establish a converse showing that the independent encoding scheme for broadcast binary-symmetric channels suggested by Cover [1] and Bergmans [2] achieves the optimal boundary of the capacity region [5]. The EPI and Mrs. Gerber's Lemma play the same significant role in proving the optimality of the independent encoding schemes for broadcast Gaussian channels and broadcast binary-symmetric channels.

Witsenhausen and Wyner studied a conditional entropy bound for the output of a discrete channel and applied the results to establish an outer bound on the capacity region for degraded broadcast channels [6] [7]. For broadcast binary-symmetric channels, this outer bound coincides with the capacity region. This paper borrows ideas from Witsenhausen and Wyner [7] to study a conditional entropy bound for the channel output of a discrete degraded broadcast channel and represents the capacity region of discrete degraded broadcast channels with this conditional entropy bound. This paper simplifies the expression of the conditional entropy bound for broadcast binary-symmetric channels and broadcast Z channels. For broadcast Z channels, the simplified expression of the conditional entropy bound demonstrates that the independent encoding scheme provided in [9], which is optimal for two-user broadcast Z channels, is also optimal for multi-user broadcast Z channels.

The input-symmetric channel was introduced by Witsenhausen and Wyner [7] and studied in [13] and [14]. This paper extends the definition of the input-symmetric channel to the definition of the input-symmetric degraded broadcast channel. It introduces the permutation approach, an independent encoding scheme, for the input-symmetric degraded broadcast channel and proves its optimality. The discrete additive degraded broadcast channel [8] is a special case of the input-symmetric degraded broadcast channel, and the optimal approach for the discrete additive degraded broadcast channel [8] is also a special case of the permutation approach. The degraded broadcast group-addition channel is a class of input-symmetric degraded broadcast channels. The permutation approach for the degraded broadcast group-addition channel turns out to be the group-addition approach. Finally, this paper studies the discrete degraded broadcast multiplication channel and shows that the optimal transmission strategy is just the multiplication approach, which is also an independent encoding scheme.

This paper is organized as follows. Section II defines the conditional entropy bound $F^*(\cdot)$ for the channel output of a discrete degraded broadcast channel and represents the capacity region of the discrete degraded broadcast channel with the function $F^*$. In Section III, we establish a number of theorems concerning various properties of $F^*$. Section IV evaluates $F^*(\cdot)$ and indicates the optimal transmission strategy for the discrete degraded broadcast channel. Section V proves the optimality of the OR approach, an independent encoding scheme, for the multi-user broadcast Z channel. Section VI introduces the input-symmetric degraded broadcast channel, provides the permutation approach, an independent encoding scheme, for the input-symmetric degraded broadcast channel, and proves its optimality. Section VII studies the discrete degraded broadcast multiplication channel and provides the multiplication approach, which achieves the capacity region for the discrete degraded broadcast multiplication channel.

II. The Conditional Entropy Bound $F^*(\cdot)$

Let $X \to Y \to Z$ be a discrete memoryless degraded broadcast channel where $X \in \{1,2,\ldots,k\}$, $Y \in \{1,2,\ldots,n\}$ and $Z \in \{1,2,\ldots,m\}$. Let $T_{YX}$ be an $n \times k$ stochastic matrix with entries $T_{YX}(j,i) = \Pr(Y=j|X=i)$ and $T_{ZX}$ be an $m \times k$ stochastic matrix with entries $T_{ZX}(j,i) = \Pr(Z=j|X=i)$. Thus, $T_{YX}$ and $T_{ZX}$ are the marginal transition probability matrices of the degraded broadcast channel. Let the vector $\mathbf{q}$ in the simplex $\Delta_k$ of probability $k$-vectors be the distribution of the channel input $X$. For $H(Y|X) \le s \le H(Y)$, where $H(Y|X)$ is the conditional entropy of $Y$ given $X$ and $H(Y)$ is the entropy of $Y$, define the function $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ as the infimum of $H(Z|U)$, the conditional entropy of $Z$ given $U$, with respect to all discrete random variables $U$ such that

a) $H(Y|U) = s$;

b) $U$ and $Y,Z$ are conditionally independent given $X$, i.e., the sequence $U,X,Y,Z$ forms a Markov chain $U \to X \to Y \to Z$.

We will use $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$, $F^*(\mathbf{q},s)$ and $F^*(s)$ interchangeably.

Theorem 1: $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ is monotonically nondecreasing in $s$, and the infimum in its definition is a minimum. Hence, $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ can be taken as the minimum of $H(Z|U)$ with respect to all discrete random variables $U$ such that

a) $H(Y|U) \le s$;

b) $U$ and $Y,Z$ are conditionally independent given $X$.

The proof of Theorem 1 will be given in Section III.

Theorem 2: The capacity region for the discrete memoryless degraded broadcast channel $X \to Y \to Z$ is the closure of the convex hull of all rate pairs $(R_1,R_2)$ satisfying

$0 \le R_1 \le I(X;Y)$,   (1)

$R_2 \le H(Z) - F^*_{T_{YX},T_{ZX}}(\mathbf{q},\ R_1 + H(Y|X))$,   (2)

for any $\mathbf{q} \in \Delta_k$, where $I(X;Y)$ is the mutual information between $X$ and $Y$, $H(Y|X)$ is the conditional entropy of $Y$ given $X$, and $H(Z)$ is the entropy of $Z$, all with respect to the channel input distribution $\mathbf{q}$.

Proof: The capacity region of the degraded broadcast channel is known [1] [3] [15] to be

$\overline{\mathrm{co}} \bigcup_{p(u),p(x|u)} \{(R_1,R_2) : R_1 \le I(X;Y|U),\ R_2 \le I(U;Z)\}$,   (3)

where $\overline{\mathrm{co}}$ denotes the closure of the convex hull operation, and $U$ is the auxiliary random variable, which satisfies the Markov chain $U \to X \to Y \to Z$ and $\|U\| \le \min(\|X\|,\|Y\|,\|Z\|)$. Rewriting (3), we have

$\overline{\mathrm{co}} \bigcup_{p(u),p(x|u)} \{(R_1,R_2) : R_1 \le I(X;Y|U),\ R_2 \le I(U;Z)\}$

$= \overline{\mathrm{co}} \bigcup_{p_X=\mathbf{q}\in\Delta_k} \bigcup_{p(u,x)\ \text{with}\ p_X=\mathbf{q}} \{(R_1,R_2) : R_1 \le I(X;Y|U),\ R_2 \le I(U;Z)\}$   (4)

$= \overline{\mathrm{co}} \bigcup_{p_X=\mathbf{q}\in\Delta_k} \bigcup_{p(u,x)\ \text{with}\ p_X=\mathbf{q}} \{(R_1,R_2) : R_1 \le H(Y|U) - H(Y|X),\ R_2 \le H(Z) - H(Z|U)\}$   (5)

$= \overline{\mathrm{co}} \bigcup_{p_X=\mathbf{q}\in\Delta_k} \{(R_1,R_2) : R_1 \le s - H(Y|X),\ R_2 \le H(Z) - F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)\}$   (6)

$= \overline{\mathrm{co}} \bigcup_{p_X=\mathbf{q}\in\Delta_k} \{(R_1,R_2) : 0 \le R_1 \le I(X;Y),\ R_2 \le H(Z) - F^*_{T_{YX},T_{ZX}}(\mathbf{q},\ R_1+H(Y|X))\}$,   (7)

where $p_X$ is the vector expression of the distribution of the channel input $X$. Some of these steps are justified as follows: (4) follows from the equivalence of $\bigcup_{p(u),p(x|u)}$ and $\bigcup_{p_X=\mathbf{q}\in\Delta_k}\bigcup_{p(u,x)\ \text{with}\ p_X=\mathbf{q}}$;

(6) follows from the definition of the conditional entropy bound $F^*$; (7) follows from the nondecreasing property of $F^*(s)$ in Theorem 1. Note that for a fixed distribution $p_X = \mathbf{q}$ of the channel input $X$, the terms $I(X;Y)$, $H(Z)$ and $H(Y|X)$ in (7) are constants. This theorem provides the relationship between the capacity region and the conditional entropy bound $F^*$ for a discrete degraded broadcast channel. It also motivates the further study of $F^*$.

III. Some Properties of $F^*(\cdot)$

In this section, we borrow ideas from [7] to establish several properties of the conditional entropy bound $F^*(\cdot)$. In [7], Witsenhausen and Wyner defined a conditional entropy bound $F(\cdot)$ for a pair of discrete random variables and provided some properties of $F(\cdot)$. The definition of $F(\cdot)$ is restated here. Let $X \to Z$ be a discrete memoryless channel with the $m \times k$ transition probability matrix $T$, where the entries $T(j,i) = \Pr(Z=j|X=i)$. Let $\mathbf{q}$ be the distribution of $X$. For any $\mathbf{q} \in \Delta_k$ and $0 \le s \le H(X)$, the function $F_T(\mathbf{q},s)$ is the infimum of $H(Z|U)$ with respect to all discrete random variables $U$ such that $H(X|U) = s$ and the sequence $U,X,Z$ is a Markov chain. By definition, $F_T(\mathbf{q},s) = F^*_{I,T}(\mathbf{q},s)$, where $I$ is an identity matrix. Since $F^*(\cdot)$ is the generalization of $F(\cdot)$, most of the properties of $F^*(\cdot)$ in this section are generalizations of the properties of $F(\cdot)$ in [7].

For any choice of $l > 0$, $\mathbf{w} = [w_1,\ldots,w_l]^T \in \Delta_l$ and $\mathbf{p}_j \in \Delta_k$, $j = 1,\ldots,l$, let $U$ be an $l$-ary random variable with distribution $\mathbf{w}$, and let $T_{XU} = [\mathbf{p}_1,\ldots,\mathbf{p}_l]$ be the transition probability matrix from $U$ to $X$. We can compute

$\mathbf{p} = p_X = T_{XU}\mathbf{w} = \sum_{j=1}^{l} w_j \mathbf{p}_j$   (8)

$\xi = H(Y|U) = \sum_{j=1}^{l} w_j h_n(T_{YX}\mathbf{p}_j)$   (9)

$\eta = H(Z|U) = \sum_{j=1}^{l} w_j h_m(T_{ZX}\mathbf{p}_j)$,   (10)

where $h_n : \Delta_n \to \mathbb{R}$ is the entropy function, i.e., $h_n(p_1,\ldots,p_n) = -\sum_{i=1}^{n} p_i \log p_i$. (All logarithms are natural in this paper.)

Thus, the choices of $U$ satisfying conditions a) and b) in the definition of $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ correspond to the choices of $l$, $\mathbf{w}$ and $\mathbf{p}_j$ for which (8) and (9) yield $\mathbf{p} = \mathbf{q}$ and $\xi = s$.

Let $S = \{(\mathbf{p},\ h_n(T_{YX}\mathbf{p}),\ h_m(T_{ZX}\mathbf{p})) \mid \mathbf{p} \in \Delta_k\} \subseteq \Delta_k \times [0,\log n] \times [0,\log m]$. Since $\Delta_k$ is $(k-1)$-dimensional, $\Delta_k \times [0,\log n] \times [0,\log m]$ is a $(k+1)$-dimensional convex polytope. The mapping $\mathbf{p} \mapsto (\mathbf{p},\ h_n(T_{YX}\mathbf{p}),\ h_m(T_{ZX}\mathbf{p}))$ assigns a point in $S$ to each $\mathbf{p} \in \Delta_k$. Because this mapping is continuous and the domain of the mapping, $\Delta_k$, is compact and connected, the image $S$ is also compact and connected. Let $C$ be the set of all $(\mathbf{p},\xi,\eta)$ satisfying (8), (9) and (10) for some choice of $l$, $\mathbf{w}$ and $\mathbf{p}_j$. By definition, the set $C$ is the convex hull of the set $S$. Thus, $C$ is compact, connected and convex.

Lemma 1: $C$ is the convex hull of $S$, and thus $C$ is compact, connected and convex.

Lemma 2: i) Every point of $C$ can be obtained by (8), (9) and (10) with $l \le k+1$. In other words, one only needs to consider random variables $U$ taking at most $k+1$ values. ii) Every extreme point of the intersection of $C$ with a two-dimensional plane can be obtained with $l \le k$.

The proof of Lemma 2 is the same as the proof of a similar lemma for $F(\cdot)$ in [7]. The details of the proof are given in Appendix A.

Let $\bar{C} = \{(\xi,\eta) \mid (\mathbf{q},\xi,\eta) \in C \text{ for some } \mathbf{q} \in \Delta_k\}$ be the projection of the set $C$ on the $(\xi,\eta)$-plane. Let $C_{\mathbf{q}} = \{(\xi,\eta) \mid (\mathbf{q},\xi,\eta) \in C\}$ be the projection on the $(\xi,\eta)$-plane of the intersection of $C$ with the two-dimensional plane $\mathbf{p} = \mathbf{q}$. By definition, $\bar{C} = \bigcup_{\mathbf{q}\in\Delta_k} C_{\mathbf{q}}$. Also, $\bar{C}$ and $C_{\mathbf{q}}$ are compact and convex. By definition, $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ is the infimum of all $\eta$ for which $C_{\mathbf{q}}$ contains the point $(s,\eta)$. Thus

$F^*_{T_{YX},T_{ZX}}(\mathbf{q},s) = \inf\{\eta \mid (\mathbf{q},s,\eta) \in C\} = \inf\{\eta \mid (s,\eta) \in C_{\mathbf{q}}\}$.   (11)
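As a numerical aside (not part of the original development), the parametrization (8)-(10) is direct to evaluate. The sketch below, with assumed example channel matrices and an assumed mixture $(l,\mathbf{w},\mathbf{p}_j)$, computes one point $(\mathbf{p},\xi,\eta)$ of the set $C$:

```python
import numpy as np

def entropy(p):
    """h(p) = -sum p_i log p_i, natural log, treating 0 log 0 as 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def point_of_C(T_YX, T_ZX, w, P_XU):
    """Compute (p, xi, eta) of equations (8)-(10).

    w    : length-l distribution of U
    P_XU : k-by-l matrix whose j-th column is p_j = p_{X|U=j}
    """
    p = P_XU @ w                                                          # (8)
    xi = sum(w[j] * entropy(T_YX @ P_XU[:, j]) for j in range(len(w)))    # (9)
    eta = sum(w[j] * entropy(T_ZX @ P_XU[:, j]) for j in range(len(w)))   # (10)
    return p, xi, eta

# Assumed example: a binary-input pair of degraded binary-symmetric channels.
a1, a2 = 0.1, 0.2
T_YX = np.array([[1 - a1, a1], [a1, 1 - a1]])
T_ZX = np.array([[1 - a2, a2], [a2, 1 - a2]])
w = np.array([0.5, 0.5])
P_XU = np.array([[1.0, 0.3], [0.0, 0.7]])  # columns are p_1 and p_2 (assumed)
print(point_of_C(T_YX, T_ZX, w, P_XU))
```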

Lemma 3: For any fixed $\mathbf{q}$ as the distribution of $X$, the domain of $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ in $s$ is $[H(Y|X), H(Y)]$, i.e., $[\sum_{i=1}^{k} q_i h_n(T_{YX}\mathbf{e}_i),\ h_n(T_{YX}\mathbf{q})]$, where $\mathbf{e}_i$ is the vector whose $i$th entry is 1 and all other entries are zero.

Proof: For any Markov chain $U \to X \to Y$, by the Data Processing Theorem [16], $H(Y|U) \ge H(Y|X)$ and the equality is achieved when the random variable $U = X$. One also has $H(Y|U) \le H(Y)$ and the equality is achieved when $U$ is a constant. Thus, the domain of $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ in $s$ is $[H(Y|X),H(Y)]$ for a fixed distribution of the channel input $X$. Since $\mathbf{q}$ is the distribution of $X$, $H(Y|X) = \sum_{i=1}^{k} q_i h_n(T_{YX}\mathbf{e}_i)$ and $H(Y) = h_n(T_{YX}\mathbf{q})$. Q.E.D.

Theorem 3: The function $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ is defined on the compact convex domain $\{(\mathbf{q},s) \mid \mathbf{q} \in \Delta_k,\ \sum_{i=1}^{k} q_i h_n(T_{YX}\mathbf{e}_i) \le s \le h_n(T_{YX}\mathbf{q})\}$, and for each $(\mathbf{q},s)$ in this domain, the infimum in its definition is a minimum, attainable with $U$ taking at most $k+1$ values.

Proof: By Lemma 3, the function $F^*$ is defined on the compact domain $\{(\mathbf{q},s) \mid \mathbf{q} \in \Delta_k,\ \sum_{i=1}^{k} q_i h_n(T_{YX}\mathbf{e}_i) \le s \le h_n(T_{YX}\mathbf{q})\}$. This domain is convex because $\Delta_k$ is convex, the entropy function $h_n(T_{YX}\mathbf{q})$ is concave in $\mathbf{q}$, and $\sum_{i=1}^{k} q_i h_n(T_{YX}\mathbf{e}_i)$ is linear in $\mathbf{q}$. For each $(\mathbf{q},s)$ in this domain, the set $\{\eta \mid (s,\eta) \in C_{\mathbf{q}}\}$ is non-empty. It is in fact a compact interval since $C_{\mathbf{q}}$ is compact. Therefore,

$F^*_{T_{YX},T_{ZX}}(\mathbf{q},s) = \inf\{\eta \mid (s,\eta) \in C_{\mathbf{q}}\} = \min\{\eta \mid (s,\eta) \in C_{\mathbf{q}}\} = \min\{\eta \mid (\mathbf{q},s,\eta) \in C\}$.   (12)

By Lemma 2 i), this minimum is attained with $U$ taking at most $k+1$ values. Q.E.D.

By Lemma 2 ii), the extreme points of $C_{\mathbf{q}}$ can be attained by convex combinations of at most $k$ points of $S$. Thus, every linear function of $(\xi,\eta)$ can attain its minimum with $U$ taking at most $k$ values, since every linear function of $(\xi,\eta)$ achieves its minimum over $C_{\mathbf{q}}$ at an extreme point of the compact set $C_{\mathbf{q}}$.

Lemma 4: The function $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ is jointly convex in $(\mathbf{q},s)$.

Proof: $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ is jointly convex in $(\mathbf{q},s)$ because $C$ is a convex set. In particular, the domain of $F^*$ is convex by Theorem 3.

For any two points $(\mathbf{q}_1,s_1)$ and $(\mathbf{q}_2,s_2)$ in the domain, and for any $0 \le \theta \le 1$,

$F^*_{T_{YX},T_{ZX}}(\theta\mathbf{q}_1 + (1-\theta)\mathbf{q}_2,\ \theta s_1 + (1-\theta)s_2)$
$= \min\{\eta \mid (\theta\mathbf{q}_1+(1-\theta)\mathbf{q}_2,\ \theta s_1+(1-\theta)s_2,\ \eta) \in C\}$
$\le \min\{\theta\eta_1 + (1-\theta)\eta_2 \mid (\mathbf{q}_1,s_1,\eta_1),\ (\mathbf{q}_2,s_2,\eta_2) \in C\}$
$= \theta F^*_{T_{YX},T_{ZX}}(\mathbf{q}_1,s_1) + (1-\theta)F^*_{T_{YX},T_{ZX}}(\mathbf{q}_2,s_2)$.

Therefore, $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ is jointly convex in $(\mathbf{q},s)$. Q.E.D.

Now we give the proof of Theorem 1. Since Theorem 3 has shown that the infimum in the definition of $F^*$ is a minimum, it suffices to show that $F^*(s) = F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ is monotonically nondecreasing in $s$. For any fixed $\mathbf{q}$, the domain of $s$ is $[H(Y|X),H(Y)]$. On the one hand,

$F^*(\mathbf{q},H(Y|X)) = \min\{H(Z|U) \mid p_X = \mathbf{q},\ H(Y|U) = H(Y|X)\} \le \min\{H(Z|U) \mid p_X = \mathbf{q},\ U = X\} = H(Z|X)$.   (13)

On the other hand, for any $s \in [H(Y|X),H(Y)]$,

$F^*(\mathbf{q},s) = \min\{H(Z|U) \mid p_X = \mathbf{q},\ H(Y|U) = s\}$
$\ge \min\{H(Z|U,X) \mid p_X = \mathbf{q},\ H(Y|U) = s\}$   (14)
$= H(Z|X)$,   (15)

where (14) follows from $H(Z|U) \ge H(Z|U,X)$ and (15) follows from the conditional independence of $Z$ and $U$ given $X$. Equations (13) and (15) imply that for any $s \in [H(Y|X),H(Y)]$,

$F^*(\mathbf{q},s) \ge F^*(\mathbf{q},H(Y|X))$.   (16)

Combining (16) and the fact that $F^*(\mathbf{q},s)$ is convex in $s$ for any fixed $\mathbf{q}$, we have that $F^*(\mathbf{q},s)$ is monotonically nondecreasing in $s$. Q.E.D.

The proof of Theorem 1 also gives one endpoint of $F^*(s)$,

$F^*(\mathbf{q},H(Y|X)) = H(Z|X)$,   (17)

which is achieved when $U = X$. The following theorem will provide the other endpoint,

$F^*(\mathbf{q},H(Y)) = H(Z)$,   (18)

which is obtained when $U$ is a constant.

Theorem 4: For $H(Y|X) \le s \le H(Y)$, a lower bound on $F^*(s)$ is

$F^*(s) \ge s + H(Z) - H(Y)$.   (19)

If $F^*(s)$ is differentiable in $s$, then

$0 \le dF^*(s)/ds \le 1$.   (20)

Proof:

$I(U;Z) \le I(U;Y)$   (21)
$H(Z) - H(Z|U) \le H(Y) - H(Y|U)$
$H(Z|U) \ge H(Y|U) + H(Z) - H(Y)$
$F^*(s) \ge s + H(Z) - H(Y)$.   (22)

Some of these steps are justified as follows:

(21) follows from the Data Processing Theorem [16]; (22) follows from the definition of $F^*(s)$. When the random variable $U$ is a constant, $H(Y|U) = H(Y)$ and $H(Z|U) = H(Z)$. Thus, the equality in $F^*(s) \ge s + H(Z) - H(Y)$ is attained when $s = H(Y)$. If $F^*(s)$ is differentiable, then for any $H(Y|X) \le s \le H(Y)$,

$dF^*(s)/ds \le dF^*(s)/ds|_{s=H(Y)} \le 1$.   (23)

The first inequality follows from the convexity of $F^*(s)$ and the second inequality follows from $F^*(s) \ge s + H(Z) - H(Y)$ and $F^*(H(Y)) = H(Z)$. Finally, $dF^*(s)/ds \ge 0$ because $F^*(s)$ is monotonically nondecreasing. The graph of the function $F^*(s) = F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ is shown in Fig. 1. Q.E.D.

[Fig. 1. The graph of the function $F^*(s) = F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$: it runs from $(H(Y|X), H(Z|X))$, where it is supported by a line of slope 0, to $(H(Y), H(Z))$, where it is supported by a line of slope 1.]

For a fixed vector $\mathbf{q}$ as the distribution of $X$, by Theorem 2, finding the maximum of $R_2 + \lambda R_1$ is equivalent to finding the minimum of $F^*(\mathbf{q},s) - \lambda s$. Theorem 4 indicates that for every $\lambda > 1$, the minimum of $F^*(\mathbf{q},s) - \lambda s$ is attained when $s = H(Y)$ and $F^*(s) = H(Z)$, i.e., $U$ is a constant. Thus, the non-trivial range of $\lambda$ is $0 \le \lambda \le 1$.

The following theorem is the key to the applications in Section V and is an extension and generalization of Theorem 2.4 in [7]. Let $\mathbf{X} = (X_1,\ldots,X_N)$ be a sequence of channel inputs to the degraded broadcast channel $X \to Y \to Z$. The corresponding channel outputs are $\mathbf{Y} = (Y_1,\ldots,Y_N)$ and $\mathbf{Z} = (Z_1,\ldots,Z_N)$. Thus, the pairs of channel outputs $(Y_i,Z_i)$, $i = 1,\ldots,N$, are conditionally independent of each other given the channel inputs $\mathbf{X}$. Note that the channel outputs $(Y_i,Z_i)$ do not have to be identically or independently distributed since $X_1,\ldots,X_N$ could be correlated and have different distributions. Denote $\mathbf{q}_i$ as the distribution of $X_i$ for $i = 1,\ldots,N$. Thus, $\mathbf{q} = \sum_i \mathbf{q}_i/N$ is the average of the distributions of the channel inputs. For any $\mathbf{q} \in \Delta_k$, define $F^*_{T^{(N)}_{YX},T^{(N)}_{ZX}}(\mathbf{q},Ns)$ to be the infimum of $H(\mathbf{Z}|U)$ with respect to all random variables $U$ and all possible channel inputs $\mathbf{X}$ such that $H(\mathbf{Y}|U) = Ns$, the average of the distributions of the channel inputs is $\mathbf{q}$, and $U \to \mathbf{X} \to \mathbf{Y} \to \mathbf{Z}$ is a Markov chain.

Theorem 5: For all $N = 1,2,\ldots$, and all $T_{YX}$, $T_{ZX}$, $\mathbf{q}$, and $H(Y|X) \le s \le H(Y)$, one has

$F^*_{T^{(N)}_{YX},T^{(N)}_{ZX}}(\mathbf{q},Ns) = N F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$.   (24)

Proof: We first prove that $F^*_{T^{(N)}_{YX},T^{(N)}_{ZX}}(\mathbf{q},Ns) \ge N F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$. Since

$Ns = H(\mathbf{Y}|U) = \sum_{i=1}^{N} H(Y_i|Y_1,\ldots,Y_{i-1},U)$   (25)
$= \sum_{i=1}^{N} s_i$,   (26)

where $s_i = H(Y_i|Y_1,\ldots,Y_{i-1},U)$ and (25) follows from the chain rule of entropy [15], we have

$H(\mathbf{Z}|U) = \sum_{i=1}^{N} H(Z_i|Z_1,\ldots,Z_{i-1},U)$   (27)
$\ge \sum_{i=1}^{N} H(Z_i|Z_1,\ldots,Z_{i-1},Y_1,\ldots,Y_{i-1},U)$   (28)
$= \sum_{i=1}^{N} H(Z_i|Y_1,\ldots,Y_{i-1},U)$   (29)
$\ge \sum_{i=1}^{N} F^*_{T_{YX},T_{ZX}}(\mathbf{q}_i,s_i)$   (30)
$\ge N F^*_{T_{YX},T_{ZX}}\big(\sum_{i=1}^{N} \mathbf{q}_i/N,\ \sum_{i=1}^{N} s_i/N\big)$   (31)
$= N F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$.   (32)

Some of these steps are justified as follows: (27) follows from the chain rule of entropy [15]; (28) holds because conditional entropy decreases when the conditioning increases; (29) follows from the fact that $Z_i$ and $Z_1,\ldots,Z_{i-1}$ are conditionally independent given $Y_1,\ldots,Y_{i-1}$; (30) follows from the definition of $F^*$, considering the Markov chain $(U,Y_1,\ldots,Y_{i-1}) \to X_i \to Y_i \to Z_i$; (31) is implied by applying Jensen's inequality to the convex function $F^*$.

By the definition of $F^*_{T^{(N)}_{YX},T^{(N)}_{ZX}}(\mathbf{q},Ns)$, Equation (32) implies that

$F^*_{T^{(N)}_{YX},T^{(N)}_{ZX}}(\mathbf{q},Ns) \ge N F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$.   (33)

On the other hand, in the case where $U$ is composed of $N$ independent identically distributed (i.i.d.) random variables $(U_1,\ldots,U_N)$, and each $U_i \to X_i$ achieves $p_{X_i} = \mathbf{q}$,

$H(Y_i|U_i) = s$ and $H(Z_i|U_i) = F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$, one has $H(\mathbf{Y}|U) = Ns$ and $H(\mathbf{Z}|U) = N F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$. Since $F^*_{T^{(N)}_{YX},T^{(N)}_{ZX}}$ is defined by taking the minimum,

$F^*_{T^{(N)}_{YX},T^{(N)}_{ZX}}(\mathbf{q},Ns) \le N F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$.   (34)

Combining (33) and (34), one has $F^*_{T^{(N)}_{YX},T^{(N)}_{ZX}}(\mathbf{q},Ns) = N F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$. Q.E.D.

Theorem 5 indicates that if the degraded broadcast channel $X \to Y \to Z$ is used $N$ times, then for a fixed $\mathbf{q}$ as the average of the distributions of the channel inputs, the conditional entropy bound $F^*_{T^{(N)}_{YX},T^{(N)}_{ZX}}(\mathbf{q},Ns)$ is achieved when the channel is used independently and identically for the $N$ uses, and each single use of the channel achieves the conditional entropy bound $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$.

IV. Evaluation of $F^*(\cdot)$

In this section, we evaluate $F^*(s) = F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ via a duality technique, which was also used for evaluating $F(\cdot)$ in [7]. This duality technique also provides the optimal transmission strategy for the degraded broadcast channel $X \to Y \to Z$ to achieve the maximum of $R_2 + \lambda R_1$ for any $\lambda \ge 0$. Theorem 3 shows that

$F^*_{T_{YX},T_{ZX}}(\mathbf{q},s) = \min\{\eta \mid (s,\eta) \in C_{\mathbf{q}}\} = \min\{\eta \mid (\mathbf{q},s,\eta) \in C\}$.   (35)

Thus, the function $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$ is determined by the lower boundary of $C_{\mathbf{q}}$. Since $C_{\mathbf{q}}$ is convex, its lower boundary can be described by the lines supporting its graph from below. The line with slope $\lambda$ in the $(\xi,\eta)$-plane supporting $C_{\mathbf{q}}$ has the equation

$\eta = \lambda\xi + \psi(\mathbf{q},\lambda)$,   (36)

where $\psi(\mathbf{q},\lambda)$ is the $\eta$-intercept of the tangent line with slope $\lambda$ for the function $F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)$.

15 Thus, ψ(q,λ) = min{f (q,ξ) λξ H(Y X) ξ H(Y)} (37) = min{η λξ (ξ,η) C q } (38) = min{η λξ (q,ξ,η) C}. (39) For H(Y X) s H(Y), the function F (s) = F T Y X,T ZX (q,s) can be reconstructed by F (s) = max{ψ(q,λ)+λs < λ < }. (40) Theorem 1 shows that the graph of F (s) is supported at s = H(Y X) by a line of slope 0, and Theorem 4 shows that the graph of F (s) is supported at s = H(Y) by a line of slope 1. Thus, for H(Y X) s H(Y), F (s) = max{ψ(q,λ)+λs 0 λ 1}. (41) Let L λ be a linear transformation (q,ξ,η) (q,η λξ). It maps C and S onto the sets C λ = {(q,η λξ) (q,ξ,η) C}, (42) and S λ = {(q,h m (T ZX q) λh n (T YX q)) q k }. (43) The lower boundaries of C λ and S λ are the graphs of ψ(q,λ) and φ(q,λ) = h m (T ZX q) λh n (T YX q) respectively. Since C is the convex hull of S, and thus C λ is the convex hull of S λ, ψ(q,λ) is the lower convex envelope of φ(q,λ) on k. In conclusion, ψ(,λ) can be obtained by forming the lower convex envelope of φ(,λ) for each λ and F (q,s) can be reconstructed from ψ(q,λ) by (41). This is the dual approach to 15

Theorem 2 represents the capacity region for a degraded broadcast channel by the function $F^*(\mathbf{q},s)$. Since $\psi(\mathbf{q},\lambda)$ and $F^*(\mathbf{q},s)$ can be constructed from each other by (37) and (41), the capacity region can also be determined by $\psi(\mathbf{q},\lambda)$ as shown below: for any $\lambda \ge 0$,

$\max_{\mathbf{q}\in\Delta_k} \max\{R_2 + \lambda R_1 \mid p_X = \mathbf{q}\}$
$= \max_{\mathbf{q}\in\Delta_k} \max_s \{H(Z) - F^*(\mathbf{q},s) + \lambda s - \lambda H(Y|X)\}$
$= \max_{\mathbf{q}\in\Delta_k} \big(H(Z) - \lambda H(Y|X) - \min_s\{F^*(\mathbf{q},s) - \lambda s\}\big)$
$= \max_{\mathbf{q}\in\Delta_k} \big(H(Z) - \lambda H(Y|X) - \psi(\mathbf{q},\lambda)\big)$.   (44)

We have shown the relationship among $F^*$, $\psi$ and the capacity region for the degraded broadcast channel. Now we state a theorem which provides the relationship among $F^*(\mathbf{q},s)$, $\psi(\mathbf{q},\lambda)$, $\phi(\mathbf{q},\lambda)$, and the optimal transmission strategies for the degraded broadcast channel.

Theorem 6: i) For any $0 \le \lambda \le 1$, if a point of the graph of $\psi(\cdot,\lambda)$ is the convex combination of $l$ points of the graph of $\phi(\cdot,\lambda)$ with arguments $\mathbf{p}_j$ and weights $w_j$, $j = 1,\ldots,l$, then

$F^*_{T_{YX},T_{ZX}}\big(\sum_j w_j\mathbf{p}_j,\ \sum_j w_j h_n(T_{YX}\mathbf{p}_j)\big) = \sum_j w_j h_m(T_{ZX}\mathbf{p}_j)$.   (45)

Furthermore, for a fixed channel input distribution $\mathbf{q} = \sum_j w_j\mathbf{p}_j$, the optimal transmission strategy to achieve the maximum of $R_2 + \lambda R_1$ is determined by $l$, $w_j$ and $\mathbf{p}_j$. In particular, the optimal transmission strategy is $\|U\| = l$, $\Pr(U = j) = w_j$ and $p_{X|U=j} = \mathbf{p}_j$, where $p_{X|U=j}$ denotes the conditional distribution of $X$ given $U = j$.

ii) For a predetermined channel input distribution $\mathbf{q}$, if the transmission strategy $\|U\| = l$, $\Pr(U = j) = w_j$ and $p_{X|U=j} = \mathbf{p}_j$ achieves $\max\{R_2 + \lambda R_1 \mid \sum_j w_j\mathbf{p}_j = \mathbf{q}\}$, then the point $(\mathbf{q},\psi(\mathbf{q},\lambda))$ is the convex combination of $l$ points of the graph of $\phi(\cdot,\lambda)$ with arguments $\mathbf{p}_j$ and weights $w_j$, $j = 1,\ldots,l$.

The proof is given in Appendix B. Note that if for some pair $(\mathbf{q},\lambda)$, $\psi(\mathbf{q},\lambda) = \phi(\mathbf{q},\lambda)$, then the corresponding optimal transmission strategy has $l = 1$, which means $U$ is a constant. Thus, the line $\eta = \lambda\xi + \psi(\mathbf{q},\lambda)$ supports the graph of $F^*(\mathbf{q},\cdot)$ at its endpoint $(h_n(T_{YX}\mathbf{q}),\ h_m(T_{ZX}\mathbf{q}))$.

A. Broadcast binary-symmetric channel

For the broadcast binary-symmetric channel $X \to Y \to Z$ with

$T_{YX} = \begin{pmatrix} 1-\alpha_1 & \alpha_1 \\ \alpha_1 & 1-\alpha_1 \end{pmatrix}$, $T_{ZX} = \begin{pmatrix} 1-\alpha_2 & \alpha_2 \\ \alpha_2 & 1-\alpha_2 \end{pmatrix}$,   (46)

where $0 < \alpha_1 < \alpha_2 < 1/2$, one has

$\phi(p,\lambda) = \phi((p,1-p)^T,\lambda) = h((1-\alpha_2)p + \alpha_2(1-p)) - \lambda h((1-\alpha_1)p + \alpha_1(1-p))$,   (47)

where $h(x) = -x\log x - (1-x)\log(1-x)$ is the binary entropy function. Taking the second derivative of $\phi(p,\lambda)$ with respect to $p$, we have

$\phi''(p,\lambda) = -\dfrac{(1-2\alpha_2)^2}{(\alpha_2 p + (1-\alpha_2)(1-p))((1-\alpha_2)p + \alpha_2(1-p))} + \dfrac{\lambda(1-2\alpha_1)^2}{(\alpha_1 p + (1-\alpha_1)(1-p))((1-\alpha_1)p + \alpha_1(1-p))}$,   (48)

which has the sign of

$\rho(p,\lambda) = -\big(\tfrac{1-\alpha_1}{1-2\alpha_1} - p\big)\big(\tfrac{1-\alpha_1}{1-2\alpha_1} - (1-p)\big) + \lambda\big(\tfrac{1-\alpha_2}{1-2\alpha_2} - p\big)\big(\tfrac{1-\alpha_2}{1-2\alpha_2} - (1-p)\big)$.   (49)

For any $0 \le \lambda \le 1$,

$\min_p \rho(p,\lambda) = \dfrac{\lambda}{4(1-2\alpha_2)^2} - \dfrac{1}{4(1-2\alpha_1)^2}$.   (50)

[Fig. 2. The construction of the lower convex envelope $\psi(\cdot,\lambda)$ from $\phi(\cdot,\lambda)$ for the broadcast binary-symmetric channel: $\phi$ is replaced by its minimum on the interval $(p_\lambda, 1-p_\lambda)$.]

Thus, for $\lambda \ge (1-2\alpha_2)^2/(1-2\alpha_1)^2$, $\phi''(p,\lambda) \ge 0$ for all $0 \le p \le 1$, and so $\psi(p,\lambda) = \phi(p,\lambda)$. In this case, the optimal transmission strategy achieving the maximum of $R_1$ also achieves the maximum of $R_2 + \lambda R_1$, and thus the optimal transmission strategy has $l = 1$, which means $U$ is a constant.

Note that $\phi(1/2+p,\lambda) = \phi(1/2-p,\lambda)$. For $\lambda < (1-2\alpha_2)^2/(1-2\alpha_1)^2$, $\phi(p,\lambda)$ has negative second derivative on an interval symmetric about $p = 1/2$. Let $p_\lambda = \arg\min_p \phi(p,\lambda)$ with $p_\lambda \le 1/2$. Thus $p_\lambda$ satisfies $\phi_p(p_\lambda,\lambda) = 0$. By symmetry, the envelope $\psi(\cdot,\lambda)$ is obtained by replacing $\phi(p,\lambda)$ on the interval $(p_\lambda, 1-p_\lambda)$ by its minimum over $p$, as shown in Fig. 2. Therefore, the lower envelope of $\phi(p,\lambda)$ is

$\psi(p,\lambda) = \phi(p_\lambda,\lambda)$ for $p_\lambda \le p \le 1-p_\lambda$, and $\psi(p,\lambda) = \phi(p,\lambda)$ otherwise.   (51)

For the predetermined distribution of $X$, $p_X = \mathbf{q} = (q, 1-q)^T$ with $p_\lambda < q < 1-p_\lambda$, $(q,\psi(q,\lambda))$ is the convex combination of the points $(p_\lambda,\psi(p_\lambda,\lambda))$ and $(1-p_\lambda,\psi(1-p_\lambda,\lambda))$. Therefore, by Theorem 6, $F^*(\mathbf{q},s) = h_2(T_{ZX}(p_\lambda, 1-p_\lambda)^T) = h(\alpha_2 + (1-2\alpha_2)p_\lambda)$ for $s = h_2(T_{YX}(p_\lambda, 1-p_\lambda)^T) = h(\alpha_1 + (1-2\alpha_1)p_\lambda)$, and $0 \le p_\lambda \le q$ or, equivalently, $1-q \le p_\lambda \le 1$. This defines $F^*(\mathbf{q},\cdot)$ on its entire domain $[h(\alpha_1),\ h(\alpha_1 + (1-2\alpha_1)q)]$, i.e., $[H(Y|X), H(Y)]$.
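Numerically, $p_\lambda$ can be found by minimizing $\phi(\cdot,\lambda)$ over $[0,1/2]$, after which Theorem 6 gives one point of $F^*(\mathbf{q},\cdot)$. A minimal sketch with assumed values of $\alpha_1$, $\alpha_2$ and $\lambda$ (not from the paper):

```python
import numpy as np
from scipy.optimize import minimize_scalar

h = lambda x: 0.0 if x in (0.0, 1.0) else -x*np.log(x) - (1-x)*np.log(1-x)

def phi(p, lam, a1, a2):
    """phi(p, lambda) of (47) for the broadcast binary-symmetric channel."""
    return h(a2 + (1 - 2*a2)*p) - lam * h(a1 + (1 - 2*a1)*p)

def p_lam(lam, a1, a2):
    """p_lambda = argmin_p phi(p, lambda), taken with p_lambda <= 1/2."""
    res = minimize_scalar(phi, bounds=(0.0, 0.5), args=(lam, a1, a2),
                          method='bounded')
    return res.x

# Assumed example: alpha1 = 0.1, alpha2 = 0.2; threshold (50) gives
# (1-2*a2)^2/(1-2*a1)^2 = 0.5625. Pick lam below it so the concave
# stretch exists; for very small lam the minimizer can sit at p = 0.
a1, a2, lam = 0.1, 0.2, 0.52
p = p_lam(lam, a1, a2)
# Point on F*(q, .) by Theorem 6: s = h(a1+(1-2a1)p), F* = h(a2+(1-2a2)p).
print(p, h(a1 + (1 - 2*a1)*p), h(a2 + (1 - 2*a2)*p))
```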

For the predetermined distribution of $X$, $\mathbf{q} = (q, 1-q)^T$ with $q < p_\lambda$ or $q > 1-p_\lambda$, one has $\phi(q,\lambda) = \psi(q,\lambda)$, which means that a line with slope $\lambda$ supports $F^*(\mathbf{q},\cdot)$ at the point $s = H(Y) = h(\alpha_1 + (1-2\alpha_1)q)$, and thus the optimal transmission strategy has $l = 1$, which means $U$ is a constant.

V. Broadcast Z Channels

The Z channel, shown in Fig. 3(a), is a binary asymmetric channel which is noiseless when symbol 1 is transmitted but noisy when symbol 0 is transmitted. The capacity of the Z channel was studied in [17]. The broadcast Z channel is a class of discrete memoryless broadcast channels whose component channels are Z channels. A two-user broadcast Z channel with marginal transition probability matrices

$T_{YX} = \begin{pmatrix} 1 & \alpha_1 \\ 0 & 1-\alpha_1 \end{pmatrix}$, $T_{ZX} = \begin{pmatrix} 1 & \alpha_2 \\ 0 & 1-\alpha_2 \end{pmatrix}$,   (52)

where $0 < \alpha_1 \le \alpha_2 < 1$, is shown in Fig. 3(b). The two-user broadcast Z channel is stochastically degraded and can be modeled as a physically degraded broadcast channel as shown in Fig. 4, where $\alpha = (\alpha_2-\alpha_1)/(1-\alpha_1)$ [9].

The OR approach for broadcast Z channels is an independent encoding scheme in which the transmitter first independently encodes the users' information messages into binary codewords and then broadcasts the binary OR of these encoded codewords. The OR approach achieves the whole boundary of the capacity region for the two-user broadcast Z channel [9] [10]. In this section, we will show that the OR approach also achieves the whole boundaries of the capacity regions for multi-user broadcast Z channels.

[Fig. 3. The broadcast Z channel: (a) the Z channel; (b) the two-user broadcast Z channel.]

[Fig. 4. The degraded version of the broadcast Z channel.]

A. $F^*$ for the broadcast Z channel

For the broadcast Z channel $X \to Y \to Z$ shown in Fig. 3(b) and Fig. 4 with

$T_{YX} = \begin{pmatrix} 1 & \alpha_1 \\ 0 & \beta_1 \end{pmatrix}$, $T_{ZX} = \begin{pmatrix} 1 & \alpha_2 \\ 0 & \beta_2 \end{pmatrix}$,   (53)

where $0 < \alpha_1 \le \alpha_2 < 1$, $\beta_1 = 1-\alpha_1$, and $\beta_2 = 1-\alpha_2$, one has

$\phi(p,\lambda) = \phi((1-p,p)^T,\lambda) = h(p\beta_2) - \lambda h(p\beta_1)$.   (54)

Taking the second derivative of $\phi(p,\lambda)$ with respect to $p$, we have

$\phi''(p,\lambda) = -\dfrac{\beta_2^2}{(1-p\beta_2)p\beta_2} + \dfrac{\lambda\beta_1^2}{(1-p\beta_1)p\beta_1}$,   (55)

which has the sign of

$\rho(p,\lambda) = p\beta_1\beta_2(1-\lambda) + \lambda\beta_1 - \beta_2$.   (56)

Let $\beta = \beta_2/\beta_1$. For the case $\beta \le \lambda \le 1$, $\phi''(p,\lambda) \ge 0$ for all $0 \le p \le 1$. Hence, $\phi(p,\lambda)$ is convex in $p$ and thus $\phi(p,\lambda) = \psi(p,\lambda)$ for all $0 \le p \le 1$. In this case, the optimal transmission strategy achieving the maximum of $R_1$ also achieves the maximum of $R_2 + \lambda R_1$, and the optimal transmission strategy has $l = 1$, i.e., $U$ is a constant. Note that the transmission strategy with $l = 1$ is a special case of the OR approach in which the only codeword for the second user is the all-ones codeword.

For the case $0 \le \lambda < \beta$, $\phi(p,\lambda)$ is concave in $p$ on $[0,\ \frac{\beta_2-\lambda\beta_1}{\beta_1\beta_2(1-\lambda)}]$ and convex on $[\frac{\beta_2-\lambda\beta_1}{\beta_1\beta_2(1-\lambda)},\ 1]$. The graph of $\phi(\cdot,\lambda)$ in this case is shown in Fig. 5.

[Fig. 5. The graphs of $\phi(\cdot,\lambda)$ and $\psi(\cdot,\lambda)$ for the broadcast Z channel.]

Since $\phi(0,\lambda) = 0$, $\psi(\cdot,\lambda)$, the lower convex envelope of $\phi(\cdot,\lambda)$, is constructed by drawing the tangent through the origin. Let $(p_\lambda,\phi(p_\lambda,\lambda))$ be the point of contact. The value of $p_\lambda$ is determined by $\phi_p(p_\lambda,\lambda) = \phi(p_\lambda,\lambda)/p_\lambda$, i.e.,

$\log(1-\beta_2 p_\lambda) = \lambda\log(1-\beta_1 p_\lambda)$.   (57)
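Equation (57) has no closed-form solution in general but is easily solved numerically. A small sketch (assumed $\beta_1$, $\beta_2$, $\lambda$; the handling of the boundary case is our own reading and is flagged in the comments) also traces $F^*(\mathbf{q},\cdot)$ via (58) below:

```python
import numpy as np
from scipy.optimize import brentq

h = lambda x: 0.0 if x in (0.0, 1.0) else -x*np.log(x) - (1-x)*np.log(1-x)

def p_lambda(beta1, beta2, lam):
    """Solve (57): log(1 - beta2*p) = lam*log(1 - beta1*p) for 0 <= lam < beta2/beta1.

    g(0) = 0 is a trivial root; the tangency point is the root in (0, 1].
    Assumption: if g(1) < 0 the tangency falls beyond p = 1, the envelope
    is the chord from the origin to (1, phi(1)), and we return p = 1.
    """
    g = lambda p: np.log(1 - beta2*p) - lam*np.log(1 - beta1*p)
    if g(1.0) < 0:
        return 1.0
    return brentq(g, 1e-9, 1.0)

# Assumed example: beta1 = 0.9, beta2 = 0.5 (alpha1 = 0.1, alpha2 = 0.5).
beta1, beta2, lam = 0.9, 0.5, 0.45        # lam < beta = beta2/beta1
print(p_lambda(beta1, beta2, lam))

# Trace F*(q, .) parametrically via (58): for p in [q, 1],
# s = q*h(beta1*p)/p and F* = q*h(beta2*p)/p.
q = 0.3
for p in np.linspace(q, 1.0, 5):
    print(q*h(beta1*p)/p, q*h(beta2*p)/p)
```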

Let $\mathbf{q} = (1-q, q)^T$ be the distribution of the channel input $X$. For $q \le p_\lambda$, $\psi(q,\lambda)$ is obtained as a convex combination of the points $(0,0)$ and $(p_\lambda,\phi(p_\lambda,\lambda))$ with weights $(p_\lambda-q)/p_\lambda$ and $q/p_\lambda$. By Theorem 6, this corresponds to $s = [(p_\lambda-q)/p_\lambda]\cdot 0 + [q/p_\lambda]\,h(\beta_1 p_\lambda) = q\,h(\beta_1 p_\lambda)/p_\lambda$ and $F^*(\mathbf{q},s) = (q/p_\lambda)\,h(\beta_2 p_\lambda)$. Hence, for the broadcast Z channel,

$F^*_{T_{YX},T_{ZX}}(\mathbf{q},\ q\,h(\beta_1 p)/p) = q\,h(\beta_2 p)/p$   (58)

for $p \in [q,1]$, which defines $F^*_{T_{YX},T_{ZX}}(\mathbf{q},\cdot)$ on its entire domain $[q\,h(\beta_1),\ h(q\beta_1)]$. Also by Theorem 6, the optimal transmission strategy $U \to X$ achieving $\max\{R_2 + \lambda R_1 \mid \sum_j w_j\mathbf{p}_j = \mathbf{q}\}$ is determined by $l = 2$, $w_1 = (p_\lambda-q)/p_\lambda$, $w_2 = q/p_\lambda$, $\mathbf{p}_1 = (1,0)^T$ and $\mathbf{p}_2 = (1-p_\lambda, p_\lambda)^T$.

[Fig. 6. The optimal transmission strategy for the broadcast Z channel.]

Since the optimal transmission strategy $U \to X$ is a Z channel as shown in Fig. 6, the random variable $X$ can also be constructed as the OR operation of two Bernoulli random variables with parameters $(p_\lambda-q)/p_\lambda$ and $1-p_\lambda$ respectively. Hence, the optimal transmission strategy for the broadcast Z channel is still an OR approach in this case. For $q > p_\lambda$, $\psi(q,\lambda) = \phi(q,\lambda)$, and so the optimal transmission strategy has $l = 1$, i.e., $U$ is a constant. Therefore, we have provided an alternative proof that the OR approach achieves the whole boundary of the capacity region of the two-user broadcast Z channel.

B. Multi-user broadcast Z channel

Let $\mathbf{X} = (X_1,\ldots,X_N)$ be a sequence of channel inputs to the broadcast Z channel $X \to Y \to Z$ satisfying (53). The corresponding channel outputs are $\mathbf{Y} = (Y_1,\ldots,Y_N)$ and $\mathbf{Z} = (Z_1,\ldots,Z_N)$. Thus, the pairs of channel outputs $(Y_i,Z_i)$, $i = 1,\ldots,N$, are conditionally independent of each other given the channel inputs $\mathbf{X}$. Note that the channel outputs $(Y_i,Z_i)$ do not have to be identically or independently distributed since $X_1,\ldots,X_N$ could be correlated and have different distributions.
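As an aside, the OR construction of Fig. 6 is easy to sanity-check by simulation: with $\Pr(B_1 = 1) = (p_\lambda - q)/p_\lambda$ and $\Pr(B_2 = 1) = 1 - p_\lambda$, the OR $X = B_1 \vee B_2$ satisfies $\Pr(X = 0) = q$. A minimal sketch, with assumed values of $q$ and $p_\lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)
q, p_lam = 0.3, 0.7                        # assumed: requires 0 < q <= p_lam <= 1

n = 10**6
b1 = rng.random(n) < (p_lam - q) / p_lam   # first Bernoulli stream
b2 = rng.random(n) < 1 - p_lam             # second Bernoulli stream
x = b1 | b2                                # broadcast the OR of the two streams

print(np.mean(x == 0))                     # ~ q, since (q/p_lam) * p_lam = q
```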

Lemma 5: Consider the Markov chain $U \to \mathbf{X} \to \mathbf{Y} \to \mathbf{Z}$ with $\sum_i \Pr(X_i = 0)/N = q$. If

$H(\mathbf{Y}|U) \ge N\frac{q}{p}h(\beta_1 p)$   (59)

for some $p \in [q,1]$, then

$H(\mathbf{Z}|U) \ge N\frac{q}{p}h(\beta_2 p)$   (60)
$= N\frac{q}{p}h(\beta_1 p\beta)$.   (61)

The proof is given in Appendix C.

Consider a $K$-user broadcast Z channel with marginal transition probability matrices

$T_{Y_j X} = \begin{pmatrix} 1 & \alpha_j \\ 0 & \beta_j \end{pmatrix}$,   (62)

where $0 < \alpha_1 \le \cdots \le \alpha_K < 1$ and $\beta_j = 1-\alpha_j$ for $j = 1,\ldots,K$. The $K$-user broadcast Z channel is stochastically degraded and can be modeled as a physically degraded broadcast channel as shown in Fig. 7.

[Fig. 7. The K-user broadcast Z channel.]

The OR approach for the $K$-user broadcast Z channel is to independently encode the $K$ users' information messages into $K$ binary codewords and broadcast the OR of these $K$ encoded codewords. The $j$th user then successively decodes the messages for User $K$, User $K-1$, $\ldots$, and finally for User $j$.

The codebook for the $j$th user is designed by the random coding technique according to the binary random variable $X^{(j)}$ with $\Pr\{X^{(j)} = 0\} = q^{(j)}$. Denote $X^{(i)} \vee X^{(j)}$ as the OR of $X^{(i)}$ and $X^{(j)}$. Hence, the channel input $X$ is the OR of $X^{(j)}$ for all $1 \le j \le K$, i.e., $X = X^{(1)} \vee \cdots \vee X^{(K)}$. From the coding theorem for degraded broadcast channels [2] [3], the achievable region of the OR approach for the $K$-user broadcast Z channel is determined by

$R_j \le I(Y_j;X^{(j)}|X^{(j+1)},\ldots,X^{(K)})$   (63)
$= H(Y_j|X^{(j+1)},\ldots,X^{(K)}) - H(Y_j|X^{(j)},X^{(j+1)},\ldots,X^{(K)})$   (64)
$= \big(\prod_{i=j+1}^{K} q^{(i)}\big)\,h\big(\beta_j \prod_{i=1}^{j} q^{(i)}\big) - \big(\prod_{i=j}^{K} q^{(i)}\big)\,h\big(\beta_j \prod_{i=1}^{j-1} q^{(i)}\big)$   (65)
$= \frac{q}{t_j}h(\beta_j t_j) - \frac{q}{t_{j-1}}h(\beta_j t_{j-1})$,   (66)

where $t_j = \prod_{i=1}^{j} q^{(i)}$ for $j = 1,\ldots,K$, and $q = \Pr(X = 0) = \prod_{i=1}^{K} q^{(i)}$. Denote $t_0 = 1$. Since $0 \le q^{(1)},\ldots,q^{(K)} \le 1$, one has

$1 = t_0 \ge t_1 \ge \cdots \ge t_K = q$.   (67)

We now state and prove that the achievable region of the OR approach is the capacity region for the multi-user broadcast Z channel. Fig. 8 shows the communication system for the $K$-user broadcast Z channel. $\mathbf{X} = (X_1,\ldots,X_N)$ is a length-$N$ codeword determined by the messages $W_1,\ldots,W_K$. $\mathbf{Y}_1,\ldots,\mathbf{Y}_K$ are the channel outputs corresponding to the channel input $\mathbf{X}$.

Theorem 7: If $\sum_{i=1}^{N} \Pr\{X_i = 0\}/N = q$, then no point $(R_1,\ldots,R_K)$ such that

$R_j \ge \frac{q}{t_j}h(\beta_j t_j) - \frac{q}{t_{j-1}}h(\beta_j t_{j-1})$, $j = 1,\ldots,K$,
$R_d = \frac{q}{t_d}h(\beta_d t_d) - \frac{q}{t_{d-1}}h(\beta_d t_{d-1}) + \delta$, for some $d \in \{1,\ldots,K\}$, $\delta > 0$,   (68)

is achievable, where the $t_j$ are as in (66) and (67).
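Before turning to the converse, note that the rate expressions (63)-(66) are direct to evaluate. The sketch below, with assumed crossover probabilities and assumed per-user parameters $q^{(j)}$ (illustration only), computes the OR-approach rate vector:

```python
import numpy as np

h = lambda x: 0.0 if x in (0.0, 1.0) else -x*np.log(x) - (1-x)*np.log(1-x)

def or_approach_rates(alphas, qs):
    """Rates (66) of the OR approach for the K-user broadcast Z channel.

    alphas : alpha_1 <= ... <= alpha_K (user j sees crossover alpha_j)
    qs     : q^(1), ..., q^(K), with Pr{X^(j) = 0} = q^(j)
    """
    betas = 1.0 - np.asarray(alphas, dtype=float)
    t = np.concatenate([[1.0], np.cumprod(qs)])   # t_0 = 1, t_j = prod_{i<=j} q^(i)
    q = t[-1]                                     # q = Pr(X = 0) = t_K
    return [q/t[j]*h(betas[j-1]*t[j]) - q/t[j-1]*h(betas[j-1]*t[j-1])
            for j in range(1, len(t))]

# Assumed example: three users.
print(or_approach_rates([0.05, 0.1, 0.2], [0.9, 0.8, 0.7]))
```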

[Fig. 8. The communication system for the multi-user broadcast Z channel: message $W_j \in \{1,\ldots,M_j\}$, $j = 1,\ldots,K$, jointly encoded into $\mathbf{X}(W_1,\ldots,W_K)$ and decoded from $\mathbf{Y}_j$ as $\hat{W}_j$.]

Proof (by contradiction): This proof borrows the idea of the proof of the converse of the coding theorem for broadcast Gaussian channels [2]. Lemma 5 plays the same role in this proof as the entropy power inequality does in the proof for broadcast Gaussian channels. Suppose that the rates of (68) are achievable, which means that the probability of decoding error for each receiver can be upper bounded by an arbitrarily small $\epsilon$ for sufficiently large $N$:

$\Pr\{\hat{W}_j(\mathbf{Y}_j) \ne W_j\} < \epsilon$, $j = 1,\ldots,K$.   (69)

By Fano's inequality, this implies that

$H(W_j|\mathbf{Y}_j) \le h(\epsilon) + \epsilon\log(M_j - 1)$, $j = 1,\ldots,K$.   (70)

Let $o(\epsilon)$ represent any function of $\epsilon$ such that $o(\epsilon) \ge 0$ and $o(\epsilon) \to 0$ as $\epsilon \to 0$. Equation (70) implies that $H(W_j|\mathbf{Y}_j)$, $j = 1,\ldots,K$, are all $o(\epsilon)$. Therefore,

$H(W_j) = H(W_j|W_{j+1},\ldots,W_K)$   (71)
$= I(W_j;\mathbf{Y}_j|W_{j+1},\ldots,W_K) + H(W_j|\mathbf{Y}_j,W_{j+1},\ldots,W_K)$   (72)
$\le I(W_j;\mathbf{Y}_j|W_{j+1},\ldots,W_K) + H(W_j|\mathbf{Y}_j)$   (73)
$= H(\mathbf{Y}_j|W_{j+1},\ldots,W_K) - H(\mathbf{Y}_j|W_j,W_{j+1},\ldots,W_K) + o(\epsilon)$,   (74)

where (71) follows from the independence of the $W_j$, $j = 1,\ldots,K$. From (68), (74) and the fact that $NR_j \le H(W_j)$,

$H(\mathbf{Y}_j|W_{j+1},\ldots,W_K) - H(\mathbf{Y}_j|W_j,W_{j+1},\ldots,W_K) \ge N\frac{q}{t_j}h(\beta_j t_j) - N\frac{q}{t_{j-1}}h(\beta_j t_{j-1}) - o(\epsilon)$.   (75)

Next, using Lemma 5 and (75), we show in Appendix D that

$H(\mathbf{Y}_j|W_{j+1},\ldots,W_K) \ge N\frac{q}{t_j}h(\beta_j t_j) - o(\epsilon)$,   (76)

where $N\delta$ should be added to the right side of (76) for $j \ge d$. Therefore, it follows from (76), with $j = K$, that

$H(\mathbf{Y}_K) \ge Nh(\beta_K q) + N\delta - o(\epsilon)$,   (77)

where $q = t_K = \sum_{i=1}^{N} \Pr(X_i = 0)/N$. Since $\epsilon$ can be arbitrarily small for sufficiently large $N$, $o(\epsilon) \to 0$ as $N \to \infty$. For sufficiently large $N$, $H(\mathbf{Y}_K) \ge Nh(\beta_K q) + N\delta/2$.

However, this contradicts

$H(\mathbf{Y}_K) \le \sum_{i=1}^{N} H(Y_{K,i})$   (78)
$= \sum_{i=1}^{N} h(\beta_K \Pr(X_i = 0))$   (79)
$\le Nh\big(\beta_K \sum_{i=1}^{N} \Pr(X_i = 0)/N\big)$   (80)
$= Nh(\beta_K q)$.   (81)

Some of these steps are justified as follows: (78) follows from $\mathbf{Y}_K = (Y_{K,1},\ldots,Y_{K,N})$; (80) is obtained by applying Jensen's inequality to the concave function $h(\cdot)$; (81) follows from $q = \sum_{i=1}^{N} \Pr(X_i = 0)/N$. The desired contradiction has been obtained, so the theorem is proved.

VI. Input-Symmetric Degraded Broadcast Channels

The input-symmetric channel was first introduced in [7] and further studied in [13] [14]. The definition of the input-symmetric channel is as follows: let $\Phi_n$ denote the symmetric group of permutations of $n$ objects, represented by the $n \times n$ permutation matrices. An $n$-input $m$-output channel with transition probability matrix $T_{m \times n}$ is input-symmetric if the set

$G_T = \{G \in \Phi_n \mid \exists\,\Pi \in \Phi_m \ \text{s.t.}\ TG = \Pi T\}$   (82)

is transitive, which means each element of $\{1,\ldots,n\}$ can be mapped to every other element of $\{1,\ldots,n\}$ by some permutation matrix in $G_T$ [7]. We extend the definition of the input-symmetric channel to the input-symmetric degraded broadcast channel as follows:

Definition 1 (Input-Symmetric Degraded Broadcast Channel): A discrete memoryless degraded broadcast channel $X \to Y \to Z$ with $|X| = k$, $|Y| = n$ and $|Z| = m$ is input-symmetric if the set

$G_{T_{YX},T_{ZX}} = G_{T_{YX}} \cap G_{T_{ZX}}$   (83)
$= \{G \in \Phi_k \mid \exists\,\Pi_{YX} \in \Phi_n,\ \Pi_{ZX} \in \Phi_m \ \text{s.t.}\ T_{YX}G = \Pi_{YX}T_{YX},\ T_{ZX}G = \Pi_{ZX}T_{ZX}\}$   (84)

is transitive.

Lemma 6: $G_{T_{YX},T_{ZX}}$ is a group under matrix multiplication.

Proof: Since $G_{T_{YX},T_{ZX}}$ is a subset of $\Phi_k$, which is a finite group under matrix multiplication, it suffices to show that $G_{T_{YX},T_{ZX}}$ is closed under matrix multiplication. Suppose $G_1, G_2 \in G_{T_{YX},T_{ZX}}$ such that $T_{YX}G_1 = \Pi_{YX,1}T_{YX}$, $T_{ZX}G_1 = \Pi_{ZX,1}T_{ZX}$, $T_{YX}G_2 = \Pi_{YX,2}T_{YX}$ and $T_{ZX}G_2 = \Pi_{ZX,2}T_{ZX}$. Thus,

$T_{YX}G_1G_2 = \Pi_{YX,1}\Pi_{YX,2}T_{YX}$,   (85)

and

$T_{ZX}G_1G_2 = \Pi_{ZX,1}\Pi_{ZX,2}T_{ZX}$.   (86)

Therefore, $G_1G_2 \in G_{T_{YX},T_{ZX}}$. Q.E.D.

Let $l = |G_{T_{YX},T_{ZX}}|$ and $G_{T_{YX},T_{ZX}} = \{G_1,\ldots,G_l\}$.

Lemma 7: $\sum_{i=1}^{l} G_i = \frac{l}{k}\mathbf{1}\mathbf{1}^T$, where $\frac{l}{k}$ is an integer and $\mathbf{1}$ is the all-ones vector.

Proof: Since $G_{T_{YX},T_{ZX}}$ is a group, for all $j = 1,\ldots,l$,

$G_j\big(\sum_{i=1}^{l} G_i\big) = \big(\sum_{i=1}^{l} G_i\big)G_j = \sum_{i=1}^{l} G_i$.   (87)

Hence, $\sum_{i=1}^{l} G_i$ has $k$ identical columns and $k$ identical rows since $G_{T_{YX},T_{ZX}}$ is transitive. Therefore, $\sum_{i=1}^{l} G_i = \frac{l}{k}\mathbf{1}\mathbf{1}^T$. Q.E.D.

Definition 2 (Smallest Transitive Subset of $G_{T_{YX},T_{ZX}}$): $\{G_{i_1},\ldots,G_{i_{l_s}}\}$ is a smallest transitive subset of $G_{T_{YX},T_{ZX}}$ if

$\sum_{j=1}^{l_s} G_{i_j} = \frac{l_s}{k}\mathbf{1}\mathbf{1}^T$,   (88)

where $\frac{l_s}{k}$ is the smallest feasible integer.

A. Some Examples

The class of input-symmetric degraded broadcast channels includes most of the common discrete memoryless degraded broadcast channels. For example, the broadcast binary-symmetric channel with marginal transition probability matrices (46) is input-symmetric since

$G_{T_{YX},T_{ZX}} = \big\{\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\big\}$   (89)

is transitive. Another interesting example is the broadcast symmetric binary-erasure channel with

$T_{YX} = \begin{pmatrix} 1-a_1 & 0 \\ a_1 & a_1 \\ 0 & 1-a_1 \end{pmatrix}$, $T_{ZX} = \begin{pmatrix} 1-a_2 & 0 \\ a_2 & a_2 \\ 0 & 1-a_2 \end{pmatrix}$,   (90)

where $0 \le a_1 \le a_2 \le 1$. It is input-symmetric since its $G_{T_{YX},T_{ZX}}$ is the same as that of the broadcast binary-symmetric channel shown in (89).

Definition 3 (Degraded Broadcast Group-Addition Channel): A degraded broadcast channel $X \to Y \to Z$ with $X,Y,Z \in \{1,\ldots,n\}$ is a degraded broadcast group-addition channel if there exist two $n$-ary random variables $N_1$ and $N_2$ such that $Y \equiv X \oplus N_1$ and $Z \equiv Y \oplus N_2$, as shown in Fig. 9, where $\equiv$ denotes identical distribution and $\oplus$ denotes group addition.

The class of degraded broadcast group-addition channels includes the broadcast binary-symmetric channel and the discrete additive degraded broadcast channel [8] as special cases.

Theorem 8: Degraded broadcast group-addition channels are input-symmetric.

[Fig. 9. The degraded broadcast group-addition channel.]

Proof: For the degraded broadcast group-addition channel $X \to Y \to Z$ with $X,Y,Z \in \{1,\ldots,n\}$, let $G_x$ for $x = 1,\ldots,n$ be the 0-1 matrices with entries

$G_x(i,j) = 1$ if $j \oplus x = i$, and $G_x(i,j) = 0$ otherwise, for $i,j = 1,\ldots,n$.   (91)

The $G_x$, $x = 1,\ldots,n$, are in fact permutation matrices and have the property that $G_{x_1}G_{x_2} = G_{x_2}G_{x_1} = G_{x_1\oplus x_2}$. Let $(\gamma_1,\ldots,\gamma_n)^T$ be the distribution of $N_1$. Since $Y$ has the same distribution as $X \oplus N_1$, one has

$T_{YX} = \sum_{x=1}^{n} \gamma_x G_x$.   (92)

Hence, $T_{YX}G_x = G_xT_{YX}$ for all $x = 1,\ldots,n$. Similarly, we have $T_{ZX}G_x = G_xT_{ZX}$ for all $x = 1,\ldots,n$, and so

$\{G_1,\ldots,G_n\} \subseteq G_{T_{YX},T_{ZX}}$.   (93)

Since the set $\{G_1,\ldots,G_n\}$ is transitive by definition, $G_{T_{YX},T_{ZX}}$ is also transitive and the degraded broadcast group-addition channel is input-symmetric. Q.E.D.

By definition, $\sum_{j=1}^{n} G_j = \mathbf{1}\mathbf{1}^T$, and hence $\{G_1,\ldots,G_n\}$ is a smallest transitive subset of $G_{T_{YX},T_{ZX}}$ for the degraded broadcast group-addition channel.
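For the cyclic group on $n$ elements, the matrices (91) are cyclic shifts, and the relations (91)-(93) can be checked directly. A small sketch with an assumed noise distribution (illustration only):

```python
import numpy as np

def G(x, n):
    """Permutation matrix of addition by x on Z_n: G[i, j] = 1 iff (j + x) % n == i."""
    M = np.zeros((n, n))
    for j in range(n):
        M[(j + x) % n, j] = 1.0
    return M

n = 4
gamma = np.array([0.7, 0.1, 0.1, 0.1])             # assumed distribution of N1
T_YX = sum(gamma[x] * G(x, n) for x in range(n))   # equation (92)

# Each G_x satisfies T_YX G_x = G_x T_YX, i.e., Pi_YX = G_x works in (84).
for x in range(n):
    assert np.allclose(T_YX @ G(x, n), G(x, n) @ T_YX)

# The shifts form a (smallest) transitive subset: their sum is the all-ones matrix.
assert np.allclose(sum(G(x, n) for x in range(n)), np.ones((n, n)))
print("Z_n shift matrices satisfy (91)-(93)")
```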

B. Optimal input distribution and capacity region

Consider the input-symmetric degraded broadcast channel $X \to Y \to Z$ with marginal transition probability matrices $T_{YX}$ and $T_{ZX}$. Recall that the set $C$ is the set of all $(\mathbf{p},\xi,\eta)$ satisfying (8), (9) and (10) for some choice of $l$, $\mathbf{w}$ and $\mathbf{p}_j$, $j = 1,\ldots,l$; the set $\bar{C} = \{(\xi,\eta) \mid (\mathbf{q},\xi,\eta) \in C \text{ for some } \mathbf{q}\}$ is the projection of the set $C$ on the $(\xi,\eta)$-plane; and the set $C_{\mathbf{q}} = \{(\xi,\eta) \mid (\mathbf{q},\xi,\eta) \in C\}$ is the projection on the $(\xi,\eta)$-plane of the intersection of $C$ with the two-dimensional plane $\mathbf{p} = \mathbf{q}$.

Lemma 8: For any permutation matrix $G \in G_{T_{YX},T_{ZX}}$ and $(\mathbf{p},\xi,\eta) \in C$, one has $(G\mathbf{p},\xi,\eta) \in C$.

Proof: Since $(\mathbf{p},\xi,\eta)$ satisfies (8), (9) and (10) for some choice of $l$, $\mathbf{w}$ and $\mathbf{p}_j$,

$\sum_{j=1}^{l} w_j G\mathbf{p}_j = G\mathbf{p}$   (94)

$\sum_{j=1}^{l} w_j h_n(T_{YX}G\mathbf{p}_j) = \sum_{j=1}^{l} w_j h_n(\Pi_{YX}T_{YX}\mathbf{p}_j) = \xi$   (95)

$\sum_{j=1}^{l} w_j h_m(T_{ZX}G\mathbf{p}_j) = \sum_{j=1}^{l} w_j h_m(\Pi_{ZX}T_{ZX}\mathbf{p}_j) = \eta$.   (96)

Hence, $(G\mathbf{p},\xi,\eta)$ satisfies (8), (9) and (10) for the choice of $l$, $\mathbf{w}$ and $G\mathbf{p}_j$. Q.E.D.

Corollary 1: For all $\mathbf{p} \in \Delta_k$ and $G \in G_{T_{YX},T_{ZX}}$, one has $C_{G\mathbf{p}} = C_{\mathbf{p}}$, and so $F^*(G\mathbf{p},s) = F^*(\mathbf{p},s)$ for any $H(Y|X) \le s \le H(Y)$.

Lemma 9: For any input-symmetric degraded broadcast channel, $\bar{C} = C_{\mathbf{u}}$, where $\mathbf{u}$ denotes the uniform distribution.

Proof: For any $(\xi,\eta) \in \bar{C}$, there exists a distribution $\mathbf{p}$ such that $(\mathbf{p},\xi,\eta) \in C$. Let $G_{T_{YX},T_{ZX}} = \{G_1,\ldots,G_l\}$. By Corollary 1, $(G_j\mathbf{p},\xi,\eta) \in C$ for all $j = 1,\ldots,l$. By the convexity of the set $C$,

$(\mathbf{q},\xi,\eta) = \big(\tfrac{1}{l}\sum_{j=1}^{l} G_j\mathbf{p},\ \xi,\ \eta\big) \in C$,   (97)

where $\mathbf{q} = \frac{1}{l}\sum_{j=1}^{l} G_j\mathbf{p}$.

Since $G_{T_{YX},T_{ZX}}$ is a group, for any permutation matrix $G \in G_{T_{YX},T_{ZX}}$,

$G\mathbf{q} = \frac{1}{l}\sum_{j=1}^{l} GG_j\mathbf{p} = \frac{1}{l}\sum_{j=1}^{l} G_j\mathbf{p} = \mathbf{q}$.   (98)

Hence, the $i$th entry and the $j$th entry of $\mathbf{q}$ are the same if $G$ permutes the $i$th row to the $j$th row. Since the set $G_{T_{YX},T_{ZX}}$ for an input-symmetric degraded broadcast channel is transitive, all the entries of $\mathbf{q}$ are the same, and so $\mathbf{q} = \mathbf{u}$. This implies that $(\xi,\eta) \in C_{\mathbf{u}}$. Since $(\xi,\eta)$ is arbitrarily taken from $\bar{C}$, one has $\bar{C} \subseteq C_{\mathbf{u}}$. On the other hand, by definition, $C_{\mathbf{u}} \subseteq \bar{C}$. Therefore, $\bar{C} = C_{\mathbf{u}}$. Q.E.D.

Now we state and prove that the uniformly distributed $X$ is optimal for input-symmetric degraded broadcast channels.

Theorem 9: For any input-symmetric degraded broadcast channel, the capacity region can be achieved by using transmission strategies such that the broadcast signal $X$ is uniformly distributed. As a consequence, the capacity region is

$\mathrm{co}\{(R_1,R_2) : R_1 \le s - h_n(T_{YX}\mathbf{e}_1),\ R_2 \le h_m(T_{ZX}\mathbf{u}) - F^*_{T_{YX},T_{ZX}}(\mathbf{u},s),\ h_n(T_{YX}\mathbf{e}_1) \le s \le \log(n)\}$,   (99)

where $\mathbf{e}_1 = (1,0,\ldots,0)^T$, $n = |Y|$, and $m = |Z|$.

Proof: Let $\mathbf{q} = (q_1,\ldots,q_k)^T$ be the distribution of the channel input $X$ for the input-symmetric degraded broadcast channel $X \to Y \to Z$. Since $G_{T_{YX}}$ is transitive, the columns of $T_{YX}$ are permutations of each other.

Thus,

$H(Y|X) = \sum_{i=1}^{k} q_i H(Y|X=i)$   (100)
$= \sum_{i=1}^{k} q_i h_n(T_{YX}\mathbf{e}_i)$   (101)
$= \sum_{i=1}^{k} q_i h_n(T_{YX}\mathbf{e}_1)$   (102)
$= h_n(T_{YX}\mathbf{e}_1)$,   (103)

which is independent of $\mathbf{q}$. Let $l = |G_{T_{YX},T_{ZX}}|$ and $G_{T_{YX},T_{ZX}} = \{G_1,\ldots,G_l\}$. Then

$H(Z) = h_m(T_{ZX}\mathbf{q})$   (104)
$= h_m(T_{ZX}G_i\mathbf{q})$ for each $i$   (105)
$= \frac{1}{l}\sum_{i=1}^{l} h_m(T_{ZX}G_i\mathbf{q})$   (106)
$\le h_m\big(T_{ZX}\tfrac{1}{l}\sum_{i=1}^{l} G_i\mathbf{q}\big)$   (107)
$= h_m(T_{ZX}\mathbf{u})$,   (108)

where (107) follows from Jensen's inequality applied to the concave entropy function. Since $\bar{C} = C_{\mathbf{u}}$ for the input-symmetric degraded broadcast channel,

$F^*(\mathbf{q},s) \ge F^*(\mathbf{u},s)$.   (109)

Plugging (103), (108) and (109) into (6), the expression of the capacity region for the degraded broadcast channel, one has

$\overline{\mathrm{co}}\bigcup_{p_X=\mathbf{q}\in\Delta_k}\{(R_1,R_2) : R_1 \le s - H(Y|X),\ R_2 \le H(Z) - F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)\}$   (110)
$\subseteq \overline{\mathrm{co}}\bigcup_{p_X=\mathbf{q}\in\Delta_k}\{(R_1,R_2) : R_1 \le s - h_n(T_{YX}\mathbf{e}_1),\ R_2 \le h_m(T_{ZX}\mathbf{u}) - F^*_{T_{YX},T_{ZX}}(\mathbf{u},s)\}$   (111)
$= \mathrm{co}\{(R_1,R_2) : R_1 \le s - h_n(T_{YX}\mathbf{e}_1),\ R_2 \le h_m(T_{ZX}\mathbf{u}) - F^*_{T_{YX},T_{ZX}}(\mathbf{u},s)\}$,   (112)

which means that the capacity region can be achieved by using transmission strategies such that the broadcast signal $X$ is uniformly distributed. On the other hand,

$\overline{\mathrm{co}}\bigcup_{p_X=\mathbf{q}\in\Delta_k}\{(R_1,R_2) : R_1 \le s - H(Y|X),\ R_2 \le H(Z) - F^*_{T_{YX},T_{ZX}}(\mathbf{q},s)\}$   (113)
$\supseteq \mathrm{co}\{(R_1,R_2) : p_X = \mathbf{u},\ R_1 \le s - H(Y|X),\ R_2 \le H(Z) - F^*_{T_{YX},T_{ZX}}(\mathbf{u},s)\}$   (114)
$= \mathrm{co}\{(R_1,R_2) : R_1 \le s - h_n(T_{YX}\mathbf{e}_1),\ R_2 \le h_m(T_{ZX}\mathbf{u}) - F^*_{T_{YX},T_{ZX}}(\mathbf{u},s)\}$.   (115)

Hence, Equation (99) expresses the capacity region for input-symmetric degraded broadcast channels. Q.E.D.

C. Permutation approach and its optimality

The permutation approach is an independent encoding scheme which achieves the capacity region for input-symmetric degraded broadcast channels. The block diagram of the permutation approach is shown in Fig. 10. $W_1$ is the message for User 1, who sees the better channel $T_{YX}$, and $W_2$ is the message for User 2, who sees the worse channel $T_{ZX}$. The permutation approach first independently encodes these two messages into two codewords $\mathbf{X}_1$ and $\mathbf{X}_2$, respectively.

[Fig. 10. The block diagram of the permutation approach: $W_1$ and $W_2$ are independently encoded into $\mathbf{X}_1$ and $\mathbf{X}_2$, and $X = g_{X_2}(X_1)$ is broadcast.]

[Fig. 11. The structure of the successive decoder for input-symmetric degraded broadcast channels.]

Let $G_s$ be a smallest transitive subset of $G_{T_{YX},T_{ZX}}$. Denote $k = |X|$ and $l_s = |G_s|$. We use the random coding technique to design the codebook for User 1 according to the $k$-ary random variable $X_1$ with distribution $\mathbf{p}_1$, and the codebook for User 2 according to the $l_s$-ary random variable $X_2$ with uniform distribution. Let $G_s = \{G_1,\ldots,G_{l_s}\}$. Define the permutation function $g_{x_2}(x_1) = x$ if the permutation matrix $G_{x_2}$ maps the $x_1$th column to the $x$th column, where $x_2 \in \{1,\ldots,l_s\}$ and $x, x_1 \in \{1,\ldots,k\}$. Hence, $g_{x_2}(x_1) = x$ if and only if the entry of $G_{x_2}$ in row $x$ and column $x_1$ is 1. The permutation approach then broadcasts $X$, which is obtained by applying the single-letter permutation function $X = g_{X_2}(X_1)$ to the symbols of the codewords $\mathbf{X}_1$ and $\mathbf{X}_2$. Since $X_2$ is uniformly distributed and $\sum_{j=1}^{l_s} G_j = \frac{l_s}{k}\mathbf{1}\mathbf{1}^T$, the broadcast signal $X$ is also uniformly distributed. User 2 receives $\mathbf{Z}$ and decodes the desired message directly. User 1 receives $\mathbf{Y}$ and successively decodes the message for User 2 and then for User 1. The structure of the successive decoder is shown in Fig. 11. Note that Decoder 1 in Fig. 11 is not a joint decoder even though it has two inputs $\mathbf{Y}$ and $\hat{\mathbf{X}}_2$.

In particular, for the degraded broadcast group-addition channel with $Y \equiv X \oplus N_1$ and $Z \equiv Y \oplus N_2$, the permutation function $g_{x_2}(x_1)$ is the group addition $x_2 \oplus x_1$. Hence the permutation approach for the degraded broadcast group-addition channel is the group-addition approach, which independently encodes the two messages for the two users and broadcasts the group addition of the two encoded codewords.
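The claim that the broadcast signal is uniform regardless of $\mathbf{p}_1$ is easy to verify by simulation in the group-addition case. A minimal sketch with an assumed alphabet size and an assumed $\mathbf{p}_1$ (symbol streams stand in for codewords; no channel coding is modeled):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4                                  # alphabet size; addition on Z_k assumed
p1 = np.array([0.5, 0.2, 0.2, 0.1])    # assumed distribution of user 1's symbols
N = 10**6

x1 = rng.choice(k, size=N, p=p1)       # user 1's (illustrative) symbol stream
x2 = rng.integers(k, size=N)           # user 2's symbols: uniform, as required
x = (x1 + x2) % k                      # broadcast X = g_{X2}(X1) = x2 (+) x1

# Since X2 is uniform and the shifts form a transitive set,
# the broadcast signal X is uniform regardless of p1.
print(np.bincount(x, minlength=k) / N)   # ~ [0.25, 0.25, 0.25, 0.25]
```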

The successive decoder for the degraded broadcast group-addition channel is shown in Fig. 12, where

$\tilde{y} = y \oplus (-\hat{x}_2)$.   (116)

[Fig. 12. The structure of the successive decoder for degraded broadcast group-addition channels.]

From the coding theorem for degraded broadcast channels [2] [3], the achievable region of the permutation approach for the input-symmetric degraded broadcast channel is determined by

$R_1 \le I(X;Y|X_2)$   (117)
$= H(Y|X_2) - H(Y|X)$   (118)
$= \sum_{x_2=1}^{l_s} \frac{1}{l_s}H(Y|X_2=x_2) - \sum_{x=1}^{k} \frac{1}{k}H(Y|X=x)$   (119)
$= \frac{1}{l_s}\sum_{x_2=1}^{l_s} h_n(T_{YX}G_{x_2}\mathbf{p}_1) - \frac{1}{k}\sum_{x=1}^{k} h_n(T_{YX}\mathbf{e}_x)$   (120)
$= \frac{1}{l_s}\sum_{x_2=1}^{l_s} h_n(\Pi_{YX,x_2}T_{YX}\mathbf{p}_1) - h_n(T_{YX}\mathbf{e}_1)$   (121)
$= h_n(T_{YX}\mathbf{p}_1) - h_n(T_{YX}\mathbf{e}_1)$,   (122)

and

$R_2 \le I(X_2;Z)$   (123)
$= H(Z) - H(Z|X_2)$   (124)
$= h_m(T_{ZX}\mathbf{u}) - \frac{1}{l_s}\sum_{x_2=1}^{l_s} h_m(T_{ZX}G_{x_2}\mathbf{p}_1)$   (125)
$= h_m(T_{ZX}\mathbf{u}) - \frac{1}{l_s}\sum_{x_2=1}^{l_s} h_m(\Pi_{ZX,x_2}T_{ZX}\mathbf{p}_1)$   (126)
$= h_m(T_{ZX}\mathbf{u}) - h_m(T_{ZX}\mathbf{p}_1)$,   (127)

where $\mathbf{u}$ is the $k$-ary uniform distribution, $\mathbf{p}_1$ is the distribution of $X_1$, and $\mathbf{e}_x$ is the 0-1 vector whose $x$th entry is 1 and all other entries are 0. Hence, the achievable region is

$\mathrm{co}\big[\bigcup_{\mathbf{p}_1\in\Delta_k}\{(R_1,R_2) : R_1 \le h_n(T_{YX}\mathbf{p}_1) - h_n(T_{YX}\mathbf{e}_1),\ R_2 \le h_m(T_{ZX}\mathbf{u}) - h_m(T_{ZX}\mathbf{p}_1)\}\big]$.   (129)

Define $\tilde{F}(s)$ as the infimum of $h_m(T_{ZX}\mathbf{p}_1)$ with respect to all distributions $\mathbf{p}_1$ such that $h_n(T_{YX}\mathbf{p}_1) = s$. Hence the achievable region (129) can be expressed as

$\mathrm{co}\{(R_1,R_2) : R_1 \le s - h_n(T_{YX}\mathbf{e}_1),\ R_2 \le h_m(T_{ZX}\mathbf{u}) - \mathrm{env}\,\tilde{F}(s),\ h_n(T_{YX}\mathbf{e}_1) \le s \le h_n(T_{YX}\mathbf{u})\}$,   (130)

where $\mathrm{env}\,\tilde{F}(s)$ denotes the lower convex envelope of $\tilde{F}(s)$. In order to show that the achievable region (130) is the same as the capacity region (99) for the input-symmetric degraded broadcast channel, it suffices to show that

$\mathrm{env}\,\tilde{F}(s) \le F^*(\mathbf{u},s)$.   (131)

For any $U \to X$ with uniformly distributed $X$,

$H(Z|U) = \sum_u \Pr(U=u)H(Z|U=u)$   (132)
$= \sum_u \Pr(U=u)h_m(T_{ZX}\mathbf{p}_{X|U=u})$   (133)
$\ge \sum_u \Pr(U=u)\tilde{F}(h_n(T_{YX}\mathbf{p}_{X|U=u}))$   (134)
$\ge \sum_u \Pr(U=u)\,\mathrm{env}\,\tilde{F}(h_n(T_{YX}\mathbf{p}_{X|U=u}))$   (135)
$\ge \mathrm{env}\,\tilde{F}\big(\sum_u \Pr(U=u)h_n(T_{YX}\mathbf{p}_{X|U=u})\big)$   (136)
$= \mathrm{env}\,\tilde{F}(H(Y|U))$,   (137)

where $\mathbf{p}_{X|U=u}$ is the conditional distribution of $X$ given $U = u$. Some of these steps are justified as follows: (134) follows from the definition of $\tilde{F}(s)$; (136) follows from Jensen's inequality applied to the convex function $\mathrm{env}\,\tilde{F}$. Therefore, by definition, $\mathrm{env}\,\tilde{F}(s) \le F^*(\mathbf{u},s)$.

The results of this subsection may be summarized in the following theorem.

Theorem 10: The permutation approach achieves the capacity region for input-symmetric degraded broadcast channels, which is expressed in (129), (130) and (99).

Corollary 2: The group-addition approach achieves the capacity region for degraded broadcast group-addition channels.

VII. Discrete Degraded Broadcast Multiplication Channels

Definition 4 (Discrete Degraded Broadcast Multiplication Channel): A discrete degraded broadcast channel $X \to Y \to Z$ with $X,Y,Z \in \{0,1,\ldots,n\}$ is a discrete degraded broadcast multiplication channel if there exist two $(n+1)$-ary random variables $N_1$ and $N_2$ such that $Y \equiv X \otimes N_1$ and $Z \equiv Y \otimes N_2$, as shown in Fig. 13, where $\otimes$ denotes ring multiplication.

[Fig. 13. The discrete degraded broadcast multiplication channel.]

By the definitions of ring multiplication and group addition, the multiplication of zero with any element of $\{0,1,\ldots,n\}$ is always zero, and $\{1,\ldots,n\}$ forms a group under the ring multiplication operation. Hence, the discrete degraded broadcast multiplication channel $X \to Y \to Z$ has the channel structure shown in Fig. 14. The sub-channel $\tilde{X} \to \tilde{Y} \to \tilde{Z}$ is a degraded broadcast group-addition channel with marginal transition probability matrices $T_{\tilde{Y}\tilde{X}}$ and $T_{\tilde{Z}\tilde{X}} = T_{\tilde{Z}\tilde{Y}}T_{\tilde{Y}\tilde{X}}$, where $\tilde{X},\tilde{Y},\tilde{Z} \in \{1,\ldots,n\}$. For the discrete degraded broadcast multiplication channel $X \to Y \to Z$, if the channel input $X$ is zero, the channel outputs $Y$ and $Z$ are zero for sure. If the channel input is a non-zero symbol, the channel output $Y$ is zero with probability $\alpha_1$ and $Z$ is zero with probability $\alpha_2$, where $\alpha_2 = \alpha_1 + (1-\alpha_1)\alpha$. Therefore, the marginal transition probability matrices for $X \to Y \to Z$ are

$T_{YX} = \begin{pmatrix} 1 & \alpha_1\mathbf{1}^T \\ \mathbf{0} & (1-\alpha_1)T_{\tilde{Y}\tilde{X}} \end{pmatrix}$, $T_{ZY} = \begin{pmatrix} 1 & \alpha\mathbf{1}^T \\ \mathbf{0} & (1-\alpha)T_{\tilde{Z}\tilde{Y}} \end{pmatrix}$,   (138)

and

$T_{ZX} = T_{ZY}T_{YX} = \begin{pmatrix} 1 & \alpha\mathbf{1}^T \\ \mathbf{0} & (1-\alpha)T_{\tilde{Z}\tilde{Y}} \end{pmatrix}\begin{pmatrix} 1 & \alpha_1\mathbf{1}^T \\ \mathbf{0} & (1-\alpha_1)T_{\tilde{Y}\tilde{X}} \end{pmatrix} = \begin{pmatrix} 1 & \alpha_2\mathbf{1}^T \\ \mathbf{0} & (1-\alpha_2)T_{\tilde{Z}\tilde{X}} \end{pmatrix}$,   (139)

where $\mathbf{1}$ is the all-ones vector and $\mathbf{0}$ is the all-zeros vector.
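The block structure (138)-(139) can be verified numerically. The sketch below assembles $T_{YX}$ and $T_{ZY}$ for an assumed 3-ary group-addition sub-channel and assumed zero-absorption probabilities, and checks (139); all parameter values are illustrative assumptions:

```python
import numpy as np

def shift(x, n):
    """Group-addition permutation matrix on Z_n."""
    M = np.zeros((n, n))
    for j in range(n):
        M[(j + x) % n, j] = 1.0
    return M

n = 3
gamma1 = np.array([0.8, 0.1, 0.1])     # assumed law of N1 restricted to {1..n}
gamma2 = np.array([0.9, 0.05, 0.05])   # assumed law of N2 restricted to {1..n}
T_yx = sum(gamma1[x] * shift(x, n) for x in range(n))   # sub-channel T_{Y~X~}
T_zy = sum(gamma2[x] * shift(x, n) for x in range(n))   # sub-channel T_{Z~Y~}

a1, a = 0.1, 0.2                       # Pr{Y=0 | X!=0} and Pr{Z=0 | Y!=0}
a2 = a1 + (1 - a1) * a

def block(alpha, T):
    """[[1, alpha*1^T], [0, (1-alpha)*T]] as in (138)."""
    top = np.concatenate([[1.0], alpha * np.ones(n)])
    bottom = np.concatenate([np.zeros((n, 1)), (1 - alpha) * T], axis=1)
    return np.vstack([top, bottom])

T_YX, T_ZY = block(a1, T_yx), block(a, T_zy)
T_ZX = T_ZY @ T_YX
assert np.allclose(T_ZX, block(a2, T_zy @ T_yx))        # equation (139)
print(T_ZX)
```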


More information

Chain Independence and Common Information

Chain Independence and Common Information 1 Chain Independence and Common Information Konstantin Makarychev and Yury Makarychev Abstract We present a new proof of a celebrated result of Gács and Körner that the common information is far less than

More information

Reliable Computation over Multiple-Access Channels

Reliable Computation over Multiple-Access Channels Reliable Computation over Multiple-Access Channels Bobak Nazer and Michael Gastpar Dept. of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA, 94720-1770 {bobak,

More information

Information Theory. Lecture 10. Network Information Theory (CT15); a focus on channel capacity results

Information Theory. Lecture 10. Network Information Theory (CT15); a focus on channel capacity results Information Theory Lecture 10 Network Information Theory (CT15); a focus on channel capacity results The (two-user) multiple access channel (15.3) The (two-user) broadcast channel (15.6) The relay channel

More information

Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets

Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets Shivaprasad Kotagiri and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame,

More information

On Multiple User Channels with State Information at the Transmitters

On Multiple User Channels with State Information at the Transmitters On Multiple User Channels with State Information at the Transmitters Styrmir Sigurjónsson and Young-Han Kim* Information Systems Laboratory Stanford University Stanford, CA 94305, USA Email: {styrmir,yhk}@stanford.edu

More information

Feedback Capacity of the Gaussian Interference Channel to Within Bits: the Symmetric Case

Feedback Capacity of the Gaussian Interference Channel to Within Bits: the Symmetric Case 1 arxiv:0901.3580v1 [cs.it] 23 Jan 2009 Feedback Capacity of the Gaussian Interference Channel to Within 1.7075 Bits: the Symmetric Case Changho Suh and David Tse Wireless Foundations in the Department

More information

Network Coding on Directed Acyclic Graphs

Network Coding on Directed Acyclic Graphs Network Coding on Directed Acyclic Graphs John MacLaren Walsh, Ph.D. Multiterminal Information Theory, Spring Quarter, 0 Reference These notes are directly derived from Chapter of R. W. Yeung s Information

More information

Information Theory for Wireless Communications. Lecture 10 Discrete Memoryless Multiple Access Channel (DM-MAC): The Converse Theorem

Information Theory for Wireless Communications. Lecture 10 Discrete Memoryless Multiple Access Channel (DM-MAC): The Converse Theorem Information Theory for Wireless Communications. Lecture 0 Discrete Memoryless Multiple Access Channel (DM-MAC: The Converse Theorem Instructor: Dr. Saif Khan Mohammed Scribe: Antonios Pitarokoilis I. THE

More information

On the Secrecy Capacity of Fading Channels

On the Secrecy Capacity of Fading Channels On the Secrecy Capacity of Fading Channels arxiv:cs/63v [cs.it] 7 Oct 26 Praveen Kumar Gopala, Lifeng Lai and Hesham El Gamal Department of Electrical and Computer Engineering The Ohio State University

More information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information 204 IEEE International Symposium on Information Theory Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information Omur Ozel, Kaya Tutuncuoglu 2, Sennur Ulukus, and Aylin Yener

More information

Interactive Decoding of a Broadcast Message

Interactive Decoding of a Broadcast Message In Proc. Allerton Conf. Commun., Contr., Computing, (Illinois), Oct. 2003 Interactive Decoding of a Broadcast Message Stark C. Draper Brendan J. Frey Frank R. Kschischang University of Toronto Toronto,

More information

Approximately achieving the feedback interference channel capacity with point-to-point codes

Approximately achieving the feedback interference channel capacity with point-to-point codes Approximately achieving the feedback interference channel capacity with point-to-point codes Joyson Sebastian*, Can Karakus*, Suhas Diggavi* Abstract Superposition codes with rate-splitting have been used

More information

Feedback Capacity of a Class of Symmetric Finite-State Markov Channels

Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Nevroz Şen, Fady Alajaji and Serdar Yüksel Department of Mathematics and Statistics Queen s University Kingston, ON K7L 3N6, Canada

More information

Joint Write-Once-Memory and Error-Control Codes

Joint Write-Once-Memory and Error-Control Codes 1 Joint Write-Once-Memory and Error-Control Codes Xudong Ma Pattern Technology Lab LLC, U.S.A. Email: xma@ieee.org arxiv:1411.4617v1 [cs.it] 17 ov 2014 Abstract Write-Once-Memory (WOM) is a model for many

More information

Subset Universal Lossy Compression

Subset Universal Lossy Compression Subset Universal Lossy Compression Or Ordentlich Tel Aviv University ordent@eng.tau.ac.il Ofer Shayevitz Tel Aviv University ofersha@eng.tau.ac.il Abstract A lossy source code C with rate R for a discrete

More information

List of Figures. Acknowledgements. Abstract 1. 1 Introduction 2. 2 Preliminaries Superposition Coding Block Markov Encoding...

List of Figures. Acknowledgements. Abstract 1. 1 Introduction 2. 2 Preliminaries Superposition Coding Block Markov Encoding... Contents Contents i List of Figures iv Acknowledgements vi Abstract 1 1 Introduction 2 2 Preliminaries 7 2.1 Superposition Coding........................... 7 2.2 Block Markov Encoding.........................

More information

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 Please submit the solutions on Gradescope. EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 1. Optimal codeword lengths. Although the codeword lengths of an optimal variable length code

More information

CS6304 / Analog and Digital Communication UNIT IV - SOURCE AND ERROR CONTROL CODING PART A 1. What is the use of error control coding? The main use of error control coding is to reduce the overall probability

More information

EC2252 COMMUNICATION THEORY UNIT 5 INFORMATION THEORY

EC2252 COMMUNICATION THEORY UNIT 5 INFORMATION THEORY EC2252 COMMUNICATION THEORY UNIT 5 INFORMATION THEORY Discrete Messages and Information Content, Concept of Amount of Information, Average information, Entropy, Information rate, Source coding to increase

More information

The Unbounded Benefit of Encoder Cooperation for the k-user MAC

The Unbounded Benefit of Encoder Cooperation for the k-user MAC The Unbounded Benefit of Encoder Cooperation for the k-user MAC Parham Noorzad, Student Member, IEEE, Michelle Effros, Fellow, IEEE, and Michael Langberg, Senior Member, IEEE arxiv:1601.06113v2 [cs.it]

More information

Remote Source Coding with Two-Sided Information

Remote Source Coding with Two-Sided Information Remote Source Coding with Two-Sided Information Basak Guler Ebrahim MolavianJazi Aylin Yener Wireless Communications and Networking Laboratory Department of Electrical Engineering The Pennsylvania State

More information

Side-information Scalable Source Coding

Side-information Scalable Source Coding Side-information Scalable Source Coding Chao Tian, Member, IEEE, Suhas N. Diggavi, Member, IEEE Abstract The problem of side-information scalable (SI-scalable) source coding is considered in this work,

More information

National University of Singapore Department of Electrical & Computer Engineering. Examination for

National University of Singapore Department of Electrical & Computer Engineering. Examination for National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:

More information

Noisy-Channel Coding

Noisy-Channel Coding Copyright Cambridge University Press 2003. On-screen viewing permitted. Printing not permitted. http://www.cambridge.org/05264298 Part II Noisy-Channel Coding Copyright Cambridge University Press 2003.

More information

On Scalable Coding in the Presence of Decoder Side Information

On Scalable Coding in the Presence of Decoder Side Information On Scalable Coding in the Presence of Decoder Side Information Emrah Akyol, Urbashi Mitra Dep. of Electrical Eng. USC, CA, US Email: {eakyol, ubli}@usc.edu Ertem Tuncel Dep. of Electrical Eng. UC Riverside,

More information

Chapter 2: Entropy and Mutual Information. University of Illinois at Chicago ECE 534, Natasha Devroye

Chapter 2: Entropy and Mutual Information. University of Illinois at Chicago ECE 534, Natasha Devroye Chapter 2: Entropy and Mutual Information Chapter 2 outline Definitions Entropy Joint entropy, conditional entropy Relative entropy, mutual information Chain rules Jensen s inequality Log-sum inequality

More information

Lecture 3. Mathematical methods in communication I. REMINDER. A. Convex Set. A set R is a convex set iff, x 1,x 2 R, θ, 0 θ 1, θx 1 + θx 2 R, (1)

Lecture 3. Mathematical methods in communication I. REMINDER. A. Convex Set. A set R is a convex set iff, x 1,x 2 R, θ, 0 θ 1, θx 1 + θx 2 R, (1) 3- Mathematical methods in communication Lecture 3 Lecturer: Haim Permuter Scribe: Yuval Carmel, Dima Khaykin, Ziv Goldfeld I. REMINDER A. Convex Set A set R is a convex set iff, x,x 2 R, θ, θ, θx + θx

More information

Capacity Region of the Permutation Channel

Capacity Region of the Permutation Channel Capacity Region of the Permutation Channel John MacLaren Walsh and Steven Weber Abstract We discuss the capacity region of a degraded broadcast channel (DBC) formed from a channel that randomly permutes

More information

EE 4TM4: Digital Communications II. Channel Capacity

EE 4TM4: Digital Communications II. Channel Capacity EE 4TM4: Digital Communications II 1 Channel Capacity I. CHANNEL CODING THEOREM Definition 1: A rater is said to be achievable if there exists a sequence of(2 nr,n) codes such thatlim n P (n) e (C) = 0.

More information

Random Access: An Information-Theoretic Perspective

Random Access: An Information-Theoretic Perspective Random Access: An Information-Theoretic Perspective Paolo Minero, Massimo Franceschetti, and David N. C. Tse Abstract This paper considers a random access system where each sender can be in two modes of

More information

The Capacity Region of the Gaussian MIMO Broadcast Channel

The Capacity Region of the Gaussian MIMO Broadcast Channel 0-0 The Capacity Region of the Gaussian MIMO Broadcast Channel Hanan Weingarten, Yossef Steinberg and Shlomo Shamai (Shitz) Outline Problem statement Background and preliminaries Capacity region of the

More information

Simultaneous Nonunique Decoding Is Rate-Optimal

Simultaneous Nonunique Decoding Is Rate-Optimal Fiftieth Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 1-5, 2012 Simultaneous Nonunique Decoding Is Rate-Optimal Bernd Bandemer University of California, San Diego La Jolla, CA

More information

Performance-based Security for Encoding of Information Signals. FA ( ) Paul Cuff (Princeton University)

Performance-based Security for Encoding of Information Signals. FA ( ) Paul Cuff (Princeton University) Performance-based Security for Encoding of Information Signals FA9550-15-1-0180 (2015-2018) Paul Cuff (Princeton University) Contributors Two students finished PhD Tiance Wang (Goldman Sachs) Eva Song

More information

ECE Information theory Final (Fall 2008)

ECE Information theory Final (Fall 2008) ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1

More information

Capacity of a channel Shannon s second theorem. Information Theory 1/33

Capacity of a channel Shannon s second theorem. Information Theory 1/33 Capacity of a channel Shannon s second theorem Information Theory 1/33 Outline 1. Memoryless channels, examples ; 2. Capacity ; 3. Symmetric channels ; 4. Channel Coding ; 5. Shannon s second theorem,

More information

On the Duality between Multiple-Access Codes and Computation Codes

On the Duality between Multiple-Access Codes and Computation Codes On the Duality between Multiple-Access Codes and Computation Codes Jingge Zhu University of California, Berkeley jingge.zhu@berkeley.edu Sung Hoon Lim KIOST shlim@kiost.ac.kr Michael Gastpar EPFL michael.gastpar@epfl.ch

More information

X 1 : X Table 1: Y = X X 2

X 1 : X Table 1: Y = X X 2 ECE 534: Elements of Information Theory, Fall 200 Homework 3 Solutions (ALL DUE to Kenneth S. Palacio Baus) December, 200. Problem 5.20. Multiple access (a) Find the capacity region for the multiple-access

More information

Chapter 2: Source coding

Chapter 2: Source coding Chapter 2: meghdadi@ensil.unilim.fr University of Limoges Chapter 2: Entropy of Markov Source Chapter 2: Entropy of Markov Source Markov model for information sources Given the present, the future is independent

More information

On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation

On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation Mehdi Ashraphijuo, Vaneet Aggarwal and Xiaodong Wang 1 arxiv:1308.3310v1 [cs.it] 15 Aug 2013

More information

On Common Information and the Encoding of Sources that are Not Successively Refinable

On Common Information and the Encoding of Sources that are Not Successively Refinable On Common Information and the Encoding of Sources that are Not Successively Refinable Kumar Viswanatha, Emrah Akyol, Tejaswi Nanjundaswamy and Kenneth Rose ECE Department, University of California - Santa

More information

Shannon s Noisy-Channel Coding Theorem

Shannon s Noisy-Channel Coding Theorem Shannon s Noisy-Channel Coding Theorem Lucas Slot Sebastian Zur February 2015 Abstract In information theory, Shannon s Noisy-Channel Coding Theorem states that it is possible to communicate over a noisy

More information

LECTURE 13. Last time: Lecture outline

LECTURE 13. Last time: Lecture outline LECTURE 13 Last time: Strong coding theorem Revisiting channel and codes Bound on probability of error Error exponent Lecture outline Fano s Lemma revisited Fano s inequality for codewords Converse to

More information

Energy State Amplification in an Energy Harvesting Communication System

Energy State Amplification in an Energy Harvesting Communication System Energy State Amplification in an Energy Harvesting Communication System Omur Ozel Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland College Park, MD 20742 omur@umd.edu

More information

5218 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER 2006

5218 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER 2006 5218 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER 2006 Source Coding With Limited-Look-Ahead Side Information at the Decoder Tsachy Weissman, Member, IEEE, Abbas El Gamal, Fellow,

More information

Representation of Correlated Sources into Graphs for Transmission over Broadcast Channels

Representation of Correlated Sources into Graphs for Transmission over Broadcast Channels Representation of Correlated s into Graphs for Transmission over Broadcast s Suhan Choi Department of Electrical Eng. and Computer Science University of Michigan, Ann Arbor, MI 80, USA Email: suhanc@eecs.umich.edu

More information

Upper Bounds on the Capacity of Binary Intermittent Communication

Upper Bounds on the Capacity of Binary Intermittent Communication Upper Bounds on the Capacity of Binary Intermittent Communication Mostafa Khoshnevisan and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame, Indiana 46556 Email:{mhoshne,

More information

Linear Codes, Target Function Classes, and Network Computing Capacity

Linear Codes, Target Function Classes, and Network Computing Capacity Linear Codes, Target Function Classes, and Network Computing Capacity Rathinakumar Appuswamy, Massimo Franceschetti, Nikhil Karamchandani, and Kenneth Zeger IEEE Transactions on Information Theory Submitted:

More information

Interference Channels with Source Cooperation

Interference Channels with Source Cooperation Interference Channels with Source Cooperation arxiv:95.319v1 [cs.it] 19 May 29 Vinod Prabhakaran and Pramod Viswanath Coordinated Science Laboratory University of Illinois, Urbana-Champaign Urbana, IL

More information

Lecture 2. Capacity of the Gaussian channel

Lecture 2. Capacity of the Gaussian channel Spring, 207 5237S, Wireless Communications II 2. Lecture 2 Capacity of the Gaussian channel Review on basic concepts in inf. theory ( Cover&Thomas: Elements of Inf. Theory, Tse&Viswanath: Appendix B) AWGN

More information

The Binary Energy Harvesting Channel. with a Unit-Sized Battery

The Binary Energy Harvesting Channel. with a Unit-Sized Battery The Binary Energy Harvesting Channel 1 with a Unit-Sized Battery Kaya Tutuncuoglu 1, Omur Ozel 2, Aylin Yener 1, and Sennur Ulukus 2 1 Department of Electrical Engineering, The Pennsylvania State University

More information

Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel

Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel Pritam Mukherjee Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 074 pritamm@umd.edu

More information

The Poisson Channel with Side Information

The Poisson Channel with Side Information The Poisson Channel with Side Information Shraga Bross School of Enginerring Bar-Ilan University, Israel brosss@macs.biu.ac.il Amos Lapidoth Ligong Wang Signal and Information Processing Laboratory ETH

More information

Performance of Polar Codes for Channel and Source Coding

Performance of Polar Codes for Channel and Source Coding Performance of Polar Codes for Channel and Source Coding Nadine Hussami AUB, Lebanon, Email: njh03@aub.edu.lb Satish Babu Korada and üdiger Urbanke EPFL, Switzerland, Email: {satish.korada,ruediger.urbanke}@epfl.ch

More information

New Results on the Equality of Exact and Wyner Common Information Rates

New Results on the Equality of Exact and Wyner Common Information Rates 08 IEEE International Symposium on Information Theory (ISIT New Results on the Equality of Exact and Wyner Common Information Rates Badri N. Vellambi Australian National University Acton, ACT 0, Australia

More information

A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources

A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources Wei Kang Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College

More information

5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010

5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010 5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010 Capacity Theorems for Discrete, Finite-State Broadcast Channels With Feedback and Unidirectional Receiver Cooperation Ron Dabora

More information

Universal Incremental Slepian-Wolf Coding

Universal Incremental Slepian-Wolf Coding Proceedings of the 43rd annual Allerton Conference, Monticello, IL, September 2004 Universal Incremental Slepian-Wolf Coding Stark C. Draper University of California, Berkeley Berkeley, CA, 94720 USA sdraper@eecs.berkeley.edu

More information

LOW-density parity-check (LDPC) codes were invented

LOW-density parity-check (LDPC) codes were invented IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 54, NO 1, JANUARY 2008 51 Extremal Problems of Information Combining Yibo Jiang, Alexei Ashikhmin, Member, IEEE, Ralf Koetter, Senior Member, IEEE, and Andrew

More information

The Communication Complexity of Correlation. Prahladh Harsha Rahul Jain David McAllester Jaikumar Radhakrishnan

The Communication Complexity of Correlation. Prahladh Harsha Rahul Jain David McAllester Jaikumar Radhakrishnan The Communication Complexity of Correlation Prahladh Harsha Rahul Jain David McAllester Jaikumar Radhakrishnan Transmitting Correlated Variables (X, Y) pair of correlated random variables Transmitting

More information

Layered Synthesis of Latent Gaussian Trees

Layered Synthesis of Latent Gaussian Trees Layered Synthesis of Latent Gaussian Trees Ali Moharrer, Shuangqing Wei, George T. Amariucai, and Jing Deng arxiv:1608.04484v2 [cs.it] 7 May 2017 Abstract A new synthesis scheme is proposed to generate

More information

Midterm Exam Information Theory Fall Midterm Exam. Time: 09:10 12:10 11/23, 2016

Midterm Exam Information Theory Fall Midterm Exam. Time: 09:10 12:10 11/23, 2016 Midterm Exam Time: 09:10 12:10 11/23, 2016 Name: Student ID: Policy: (Read before You Start to Work) The exam is closed book. However, you are allowed to bring TWO A4-size cheat sheet (single-sheet, two-sided).

More information

Capacity bounds for multiple access-cognitive interference channel

Capacity bounds for multiple access-cognitive interference channel Mirmohseni et al. EURASIP Journal on Wireless Communications and Networking, :5 http://jwcn.eurasipjournals.com/content///5 RESEARCH Open Access Capacity bounds for multiple access-cognitive interference

More information

On The Binary Lossless Many-Help-One Problem with Independently Degraded Helpers

On The Binary Lossless Many-Help-One Problem with Independently Degraded Helpers On The Binary Lossless Many-Help-One Problem with Independently Degraded Helpers Albrecht Wolf, Diana Cristina González, Meik Dörpinghaus, José Cândido Silveira Santos Filho, and Gerhard Fettweis Vodafone

More information

ProblemsWeCanSolveWithaHelper

ProblemsWeCanSolveWithaHelper ITW 2009, Volos, Greece, June 10-12, 2009 ProblemsWeCanSolveWitha Haim Permuter Ben-Gurion University of the Negev haimp@bgu.ac.il Yossef Steinberg Technion - IIT ysteinbe@ee.technion.ac.il Tsachy Weissman

More information

MATH32031: Coding Theory Part 15: Summary

MATH32031: Coding Theory Part 15: Summary MATH32031: Coding Theory Part 15: Summary 1 The initial problem The main goal of coding theory is to develop techniques which permit the detection of errors in the transmission of information and, if necessary,

More information

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity 5-859: Information Theory and Applications in TCS CMU: Spring 23 Lecture 8: Channel and source-channel coding theorems; BEC & linear codes February 7, 23 Lecturer: Venkatesan Guruswami Scribe: Dan Stahlke

More information

Shannon s noisy-channel theorem

Shannon s noisy-channel theorem Shannon s noisy-channel theorem Information theory Amon Elders Korteweg de Vries Institute for Mathematics University of Amsterdam. Tuesday, 26th of Januari Amon Elders (Korteweg de Vries Institute for

More information

On Function Computation with Privacy and Secrecy Constraints

On Function Computation with Privacy and Secrecy Constraints 1 On Function Computation with Privacy and Secrecy Constraints Wenwen Tu and Lifeng Lai Abstract In this paper, the problem of function computation with privacy and secrecy constraints is considered. The

More information

Degrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages

Degrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages Degrees of Freedom Region of the Gaussian MIMO Broadcast hannel with ommon and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and omputer Engineering University of Maryland, ollege

More information

arxiv: v1 [cs.it] 4 Jun 2018

arxiv: v1 [cs.it] 4 Jun 2018 State-Dependent Interference Channel with Correlated States 1 Yunhao Sun, 2 Ruchen Duan, 3 Yingbin Liang, 4 Shlomo Shamai (Shitz) 5 Abstract arxiv:180600937v1 [csit] 4 Jun 2018 This paper investigates

More information

Lecture 5 - Information theory

Lecture 5 - Information theory Lecture 5 - Information theory Jan Bouda FI MU May 18, 2012 Jan Bouda (FI MU) Lecture 5 - Information theory May 18, 2012 1 / 42 Part I Uncertainty and entropy Jan Bouda (FI MU) Lecture 5 - Information

More information

A Summary of Multiple Access Channels

A Summary of Multiple Access Channels A Summary of Multiple Access Channels Wenyi Zhang February 24, 2003 Abstract In this summary we attempt to present a brief overview of the classical results on the information-theoretic aspects of multiple

More information

Common Information. Abbas El Gamal. Stanford University. Viterbi Lecture, USC, April 2014

Common Information. Abbas El Gamal. Stanford University. Viterbi Lecture, USC, April 2014 Common Information Abbas El Gamal Stanford University Viterbi Lecture, USC, April 2014 Andrew Viterbi s Fabulous Formula, IEEE Spectrum, 2010 El Gamal (Stanford University) Disclaimer Viterbi Lecture 2

More information