Optimal Natural Encoding Scheme for Discrete Multiplicative Degraded Broadcast Channels


Bike Xie, Student Member, IEEE, and Richard D. Wesel, Senior Member, IEEE

Abstract—Certain degraded broadcast channels (DBCs) have the property that the boundary of the capacity region can be achieved by an encoder that combines independent codebooks (one for each receiver) using the same single-letter function that adds distortion to the channel. We call this the natural encoder for the DBC. Natural encoders are known to achieve the capacity region boundary of the broadcast Gaussian channel and the broadcast binary-symmetric channel. Recently, they have also been shown to achieve the capacity region of the broadcast Z channel. This paper shows that natural encoding achieves the capacity region boundary for discrete multiplicative DBCs. The optimality of the natural encoder also leads to a relatively simple expression for the capacity region of discrete multiplicative DBCs.

Index Terms—Conditional entropy bound, degraded broadcast channel, discrete multiplicative degraded broadcast channel, natural encoding.

I. Introduction

In the 1970s, Cover [1], Bergmans [2] and Gallager [3] established the capacity region for degraded broadcast channels (DBCs). The general optimal transmission strategy that achieves the boundary of the capacity region for DBCs is a joint encoding scheme. Certain DBCs have the property that the boundary of the capacity region can be achieved by an encoder that combines independent codebooks (one for each receiver) using the same single-letter function that adds distortion to the channel. We call this the natural encoder for the DBC. Natural encoders are known to achieve the capacity region boundary of the broadcast Gaussian channel [4], the broadcast binary-symmetric channel [2] [5], and discrete additive DBCs [6]. Recently, natural encoding has also been shown to achieve the capacity region of the two-user broadcast Z channel [7].
The discrete multiplicative degraded broadcast channel (DM-DBC) is a discrete DBC whose channel outputs are discrete multiplications (multiplications in a finite field) of the channel input and noise. This paper decomposes the DM-DBC into a group-additive DBC with an extra erasure (zero) symbol. Based on this decomposition, this paper studies a conditional entropy bound for the DM-DBC and proves that the natural encoding approach achieves the boundary of the capacity region for DM-DBCs.

This work was supported by the Defense Advanced Research Projects Agency SPAWAR Systems Center, San Diego, California, under Grant N66001-02-1-8938. This work was also supported by Rockwell Collins through contract #450769987. The authors are with the Electrical Engineering Department, University of California, Los Angeles, CA 90095 USA (email: xbk@ee.ucla.edu; wesel@ee.ucla.edu). This conference paper presents part of the arXiv submission [8], which will be submitted to the IEEE Transactions on Information Theory. For more details, please refer to [8].

This paper is organized as follows: Section II provides definitions and states some results from [8] that will be useful. Section III defines the discrete multiplicative DBC and decomposes it into a group-additive DBC with an extra erasure (zero) symbol. Section IV computes the optimal input distribution that achieves the boundary of the capacity region for DM-DBCs. Section V introduces the natural encoding approach and proves that it achieves the capacity region for DM-DBCs. Section VI concludes the paper.

II. Definitions and Preliminaries

Let X → Y → Z be a discrete memoryless DBC where X ∈ {1, 2, …, k}, Y ∈ {1, 2, …, n} and Z ∈ {1, 2, …, m}. Let T_YX be an n × k stochastic matrix with entries T_YX(j, i) = Pr(Y = j | X = i), and let T_ZX be an m × k stochastic matrix with entries T_ZX(j, i) = Pr(Z = j | X = i). Thus, T_YX and T_ZX are the marginal transition probability matrices of the DBC.

A. Conditional entropy bound F*, DBC capacity regions

Our results depend heavily on the function F*, which we will now define.
Let the vector q in the simplex Δ_k of probability k-vectors be the distribution of the channel input X. For any H(Y|X) ≤ s ≤ H(Y), define the function F*_{T_YX, T_ZX}(q, s) as the infimum of H(Z|U) with respect to all discrete random variables U such that a) H(Y|U) = s; b) U and Y, Z are conditionally independent given X, i.e., the sequence U, X, Y, Z forms a Markov chain U → X → Y → Z. The function F*(·) is an extension of the function F(·) introduced in [5]. We will use F*_{T_YX, T_ZX}(q, s), F*(q, s) and F*(s) interchangeably.

Theorem 1: F*_{T_YX, T_ZX}(q, s) is jointly convex in (q, s) and nondecreasing in s. The infimum in its definition is attainable. (See [8] for proof.)

Theorem 2: The capacity region for the discrete memoryless DBC X → Y → Z is the closure of the convex hull of all rate pairs (R₁, R₂) satisfying

0 ≤ R₁ ≤ I(X; Y),   (1)
R₂ ≤ H(Z) − F*_{T_YX, T_ZX}(q, R₁ + H(Y|X)),   (2)

for some q ∈ Δ_k. (See [8] for proof.)

B. Definitions of C, C̄_q, and the (ξ, η)-plane for DBCs

For any choice of the integer l, w = [w₁, …, w_l]ᵀ ∈ Δ_l and p_j ∈ Δ_k, j = 1, …, l, let U be an l-ary random variable with distribution w, and let T_XU = [p₁, …, p_l] be the transition probability matrix from U to X. We can

compute

p_X = Σ_{j=1}^{l} w_j p_j = T_XU w,   (3)
ξ = H(Y|U) = Σ_{j=1}^{l} w_j h_n(T_YX p_j),   (4)
η = H(Z|U) = Σ_{j=1}^{l} w_j h_m(T_ZX p_j),   (5)

where h_n : Δ_n → R is the entropy function, i.e., h_n(p₁, …, p_n) = −Σ_i p_i ln p_i. Let C be the set of all (p_X, ξ, η) satisfying (3), (4) and (5) for some choice of l, w and p_j. C is compact, connected, and convex. Let C̄ = {(ξ, η) | (q, ξ, η) ∈ C} be the projection of the set C onto the (ξ, η)-plane. Define C̄_q = {(ξ, η) | (q, ξ, η) ∈ C} as the projection onto the (ξ, η)-plane of the subset of C where p_X = q. C̄ and C̄_q are also compact and convex. By definition, F*_{T_YX, T_ZX}(q, s) is the infimum of all η for which C̄_q contains the point (s, η). Fig. 1(a) shows how F*_{T_YX, T_ZX}(q, s) forms the lower boundary of the region C̄_q.

C. U's that achieve F* and optimal U's for DBCs

In Fig. 1(a), the line with slope λ in the (ξ, η)-plane supporting C̄_q has the equation η = λξ + ψ(q, λ), where ψ(q, λ) is the η-intercept of the tangent line with slope λ for F*_{T_YX, T_ZX}(q, ·). Let φ(q, λ) = h_m(T_ZX q) − λ h_n(T_YX q). As illustrated in Fig. 1(b), ψ(q, λ) is the lower convex envelope of φ(·, λ) on Δ_k, as shown in [8]. For any λ ≥ 0, the associated point on the boundary of the capacity region may be found (from its unique value of λR₁ + R₂) as follows:

max_q max{λR₁ + R₂ | p_X = q}
= max_q max_s {H(Z) − F*(q, s) + λ(s − H(Y|X))}
= max_q (H(Z) − λH(Y|X) − min_s {F*(q, s) − λs})
= max_q (H(Z) − λH(Y|X) − ψ(q, λ)).   (6)

Theorem 3: This theorem has two parts: i) For any fixed λ ≥ 0, if a point of the graph of ψ(·, λ) is the convex combination of l points of the graph of φ(·, λ) with arguments p_j and weights w_j for j = 1, …, l, then

F*(Σ_j w_j p_j, Σ_j w_j h_n(T_YX p_j)) = Σ_j w_j h_m(T_ZX p_j).

Furthermore, for the fixed input distribution q = Σ_j w_j p_j, the optimal transmission strategy achieving the maximum of λR₁ + R₂ is determined by l, w and the p_j. In particular, the optimal transmission strategy is |U| = l, Pr(U = j) = w_j and p_{X|U=j} = p_j, where p_{X|U=j} denotes the conditional distribution of X given U = j.
ii) For a predetermined channel input distribution q, if the transmission strategy |U| = l, Pr(U = j) = w_j and p_{X|U=j} = p_j achieves max{λR₁ + R₂ | Σ_j w_j p_j = q}, then the point (q, ψ(q, λ)) is the convex combination of the l points (p_j, φ(p_j, λ)) of the graph of φ(·, λ). (See [8] for proof.)

Fig. 1. (a) Illustration of the curve F*(q, s) (shown in bold), the region C̄_q, and the point (0, ψ(q, λ)). (b) Illustration of φ(·, λ) and ψ(·, λ) for the broadcast Z channel.

D. Example: the two-user broadcast Z channel

An important example, because of the decomposition we will apply to the discrete multiplicative DBC, is the broadcast Z channel with marginal transition probability matrices

T_YX = [1 α₁; 0 β₁] and T_ZX = [1 α₂; 0 β₂],   (7)

where 0 < α₁ ≤ α₂ < 1 and β₁ + α₁ = β₂ + α₂ = 1. Write q = (1 − p, p)ᵀ, so that φ(p, λ) = h(β₂p) − λh(β₁p) with h(·) the binary entropy function, and let β = β₂/β₁. For λ ≥ β, φ''(p, λ) ≥ 0 for all 0 ≤ p ≤ 1. Therefore, φ(p, λ) is convex in p and thus φ(p, λ) = ψ(p, λ) for all 0 ≤ p ≤ 1. For 0 < λ < β, φ(p, λ) is concave in p for p ∈ [0, (β₂ − λβ₁)/(β₁β₂(1 − λ))] and convex in p for p ∈ [(β₂ − λβ₁)/(β₁β₂(1 − λ)), 1]. The graph of φ(·, λ) in this case is shown in Fig. 1(b). Since φ(0, λ) = 0, ψ(·, λ), the lower convex envelope of φ(·, λ), is constructed by drawing the tangent through the origin. Let (p*, φ(p*, λ)) be the point of contact. The value of p* is determined by the equation φ'_p(p*, λ) = φ(p*, λ)/p*, i.e.,

λ ln(1 − β₁p*) = ln(1 − β₂p*).   (8)

E. Input-symmetric degraded broadcast channels

Let Φ_n denote the set of all n × n permutation matrices. An n-input m-output channel with transition probability matrix T (of size m × n) is input-symmetric if the set

G_T = {G ∈ Φ_n | ∃ Π ∈ Φ_m s.t. T G = Π T},   (9)

where Π and G are permutation matrices, is transitive, which means each element of {1, …, n} can be mapped to every other element of {1, …, n} by some permutation matrix in G_T [5] [9]. An important property of input-symmetric channels is that the uniform input distribution achieves capacity.
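To make the tangent-through-the-origin construction of Section II-D concrete, the following small Python sketch (illustrative only; the values β₁ = 0.9, β₂ = 0.6 and λ = 0.5 are assumed, chosen so that λ < β = β₂/β₁ and the envelope is nontrivial) solves eq. (8) by bisection and checks that the resulting tangent line through the origin lies below φ(·, λ):

```python
import math

def hb(x):
    """Binary entropy in nats, with hb(0) = hb(1) = 0."""
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * math.log(x) - (1 - x) * math.log(1 - x)

def phi(p, lam, beta1, beta2):
    """phi(p, lambda) = h(beta2 p) - lambda h(beta1 p) for the Z channel of eq. (7)."""
    return hb(beta2 * p) - lam * hb(beta1 * p)

def contact_point(lam, beta1, beta2, tol=1e-12):
    """Solve eq. (8), lambda ln(1 - beta1 p) = ln(1 - beta2 p), by bisection.
    Assumes 0 < lambda < beta2/beta1, so a nontrivial root exists in (0, 1)."""
    f = lambda p: lam * math.log(1 - beta1 * p) - math.log(1 - beta2 * p)
    lo, hi = 1e-6, 1 - 1e-9   # f > 0 just right of the trivial root p = 0, f < 0 near 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

beta1, beta2, lam = 0.9, 0.6, 0.5
p_star = contact_point(lam, beta1, beta2)
slope = phi(p_star, lam, beta1, beta2) / p_star  # slope of the tangent through (0, 0)
```

For these assumed values the contact point is p* ≈ 0.83; the envelope ψ(p, λ) follows the tangent line slope·p on [0, p*] and coincides with φ(p, λ) on [p*, 1].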
Definition 1: Input-Symmetric Degraded Broadcast Channel: A discrete memoryless DBC X → Y → Z with |X| = k, |Y| = n and |Z| = m is input-symmetric if the set

G_{T_YX, T_ZX} = G_{T_YX} ∩ G_{T_ZX}   (10)
= {G ∈ Φ_k | ∃ Π_Y ∈ Φ_n, Π_Z ∈ Φ_m s.t. T_YX G = Π_Y T_YX, T_ZX G = Π_Z T_ZX}   (11)

is transitive.

Fig. 2. The group-additive degraded broadcast channel.

Fig. 3. The group-addition encoding approach.

Lemma 1: For any permutation matrix G ∈ G_{T_YX, T_ZX} and (p, ξ, η) ∈ C, one has (Gp, ξ, η) ∈ C. (See [8] for proof.)

Corollary 1: For all p ∈ Δ_k and G ∈ G_{T_YX, T_ZX}, one has C̄_{Gp} = C̄_p, and so F*(Gp, s) = F*(p, s) for any H(Y|X) ≤ s ≤ H(Y). (See [8] for proof.)

Corollary 2: A uniform distribution on the alphabet of X achieves the capacity region of any input-symmetric DBC. (See [8] for proof.)

III. Group-additive and Multiplicative DBCs

A. Group-additive DBC

Definition 2: Group-additive (GA) Degraded Broadcast Channel: A DBC X → Y → Z with X, Y, Z ∈ {1, …, n} is a group-additive DBC if there exist two n-ary random variables N₁ and N₂ such that Y ≡ X ⊕ N₁ and Z ≡ Y ⊕ N₂ as shown in Fig. 2, where ≡ denotes identical distribution and ⊕ denotes group addition.

The group-additive DBC is input-symmetric. It includes the broadcast binary-symmetric channel and the discrete additive degraded broadcast channel [6] as special cases. Fig. 3 shows the group-addition encoding approach, which is to independently encode the message for each of the two users and broadcast the group addition of the two resulting codewords. The group-addition encoding approach achieves the boundary of the capacity region for GA-DBCs [8].

B. Discrete multiplicative DBC

Definition 3: Discrete Multiplicative Degraded Broadcast Channel: A discrete memoryless DBC X → Y → Z with X, Y, Z ∈ {0, 1, …, n} is a discrete multiplicative DBC if there exist two (n + 1)-ary random variables N₁ and N₂ such that Y ≡ X ⊗ N₁ and Z ≡ Y ⊗ N₂ as shown in Fig. 4, where ⊗ denotes discrete multiplication.

Fig. 5. The channel structure of a discrete multiplicative degraded broadcast channel.

By the definitions of field and group, the discrete multiplication of zero and any element in {0, 1, …, n} is always zero. Also, {1, …, n} forms a group under discrete multiplication. Hence, the DM-DBC X → Y → Z has the channel structure shown in Fig. 5.
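The set G_T of eq. (9) admits a direct computational test: G ∈ G_T exactly when the rows of T G are a rearrangement of the rows of T. A minimal sketch (the example matrices are assumed for illustration, not taken from the paper):

```python
def matmul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def in_G(T, G):
    """Membership test for eq. (9): T @ G = Pi @ T for some permutation Pi,
    i.e. the rows of T @ G are a rearrangement of the rows of T."""
    rows = lambda M: sorted(tuple(round(x, 9) for x in row) for row in M)
    return rows(matmul(T, G)) == rows(T)

swap = [[0, 1], [1, 0]]
bsc = [[0.8, 0.2], [0.2, 0.8]]   # binary symmetric channel: input-symmetric
zch = [[1.0, 0.1], [0.0, 0.9]]   # Z channel: swapping the inputs is not a symmetry
```

Since {identity, swap} maps each input to every input, the test confirms the BSC is input-symmetric, while G_T for the Z channel contains only the identity and hence is not transitive.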
The sub-channel X̃ → Ỹ → Z̃ is a GA-DBC with marginal transition probability matrices T_ỸX̃ and T_Z̃X̃ = T_Z̃Ỹ T_ỸX̃, where X̃, Ỹ, Z̃ ∈ {1, …, n}. For the DM-DBC X → Y → Z, if the channel input is zero, then the channel outputs Y and Z are zeros. If the channel input is a non-zero symbol, the channel output Y is zero with probability α₁ and Z is zero with probability α₂, where α₂ = α₁ + (1 − α₁)α̃₂. Therefore, the marginal transition probability matrices for X → Y → Z are

T_YX = [1 α₁1ᵀ; 0 (1 − α₁)T_ỸX̃],   (12)
T_ZY = [1 α̃₂1ᵀ; 0 (1 − α̃₂)T_Z̃Ỹ],   (13)
T_ZX = T_ZY T_YX = [1 α₂1ᵀ; 0 (1 − α₂)T_Z̃X̃],   (14)

where 1 is an all-one vector and 0 is an all-zero vector.

IV. Optimal Input Distribution for DM-DBCs

The sub-channel X̃ → Ỹ → Z̃ is a group-additive DBC, which is input-symmetric, and hence G_{T_ỸX̃, T_Z̃X̃} is transitive. For any n × n permutation matrix G̃ ∈ G_{T_ỸX̃, T_Z̃X̃} with T_ỸX̃ G̃ = Π_Ỹ T_ỸX̃ and T_Z̃X̃ G̃ = Π_Z̃ T_Z̃X̃, the (n + 1) × (n + 1) permutation matrix

G = [1 0ᵀ; 0 G̃]   (15)

has

T_YX G = [1 α₁1ᵀ; 0 (1 − α₁)T_ỸX̃ G̃] = [1 0ᵀ; 0 Π_Ỹ] T_YX,   (16)

Fig. 4. The discrete multiplicative degraded broadcast channel.

and so G ∈ G_{T_YX}. Similarly, G ∈ G_{T_ZX}, and hence G ∈ G_{T_YX, T_ZX}. Therefore, any non-zero element in {0, 1, …, n} can be mapped to any other non-zero element in {0, 1, …, n} by some permutation matrix in G_{T_YX, T_ZX}; however, no matrix in G_{T_YX, T_ZX} maps zero to a non-zero element or a non-zero element to zero. Hence, any permutation matrix G ∈ G_{T_YX, T_ZX} has the structure (15) for

some G̃ ∈ G_{T_ỸX̃, T_Z̃X̃}. These results are summarized in the following lemma:

Lemma 2: Let G_{T_ỸX̃, T_Z̃X̃} = {G̃₁, …, G̃_l}. Then G_{T_YX, T_ZX} = {G₁, …, G_l}, where

G_j = [1 0ᵀ; 0 G̃_j], for j = 1, …, l.   (17)

Now we state and prove that the uniform distribution over the non-zero symbols is optimal for the DM-DBC.

Lemma 3: Let p_X = ((1 − q), q p̃ᵀ)ᵀ ∈ Δ_{n+1} be the distribution of the channel input X, where p̃ is the distribution of X̃. For any DM-DBC, C̄_{p_X} ⊆ C̄_{((1−q), quᵀ)ᵀ} and C̄ = ∪_{q∈[0,1]} C̄_{((1−q), quᵀ)ᵀ}, where u ∈ Δ_n denotes the uniform distribution. (See [8] for proof.)

Theorem 4: The capacity region of the DM-DBC can be achieved using transmission strategies for which X̃ is uniformly distributed, i.e., the distribution of X has p_X = ((1 − q), quᵀ)ᵀ for some q ∈ [0, 1]. As a consequence, the capacity region is the convex hull of the closure of all (R₁, R₂) satisfying

R₁ ≤ s − q h_{n+1}(T_YX e₁),   (18)
R₂ ≤ h((1 − α₂)q) + (1 − α₂)q ln(n) − F*_{T_YX, T_ZX}(((1 − q), quᵀ)ᵀ, s),   (19)

for some 0 ≤ q ≤ 1 and q h_{n+1}(T_YX e₁) ≤ s ≤ h((1 − α₂)q) + (1 − α₂)q ln(n), where h(·) is the binary entropy function and e₁ is the column indicator of the first non-zero input symbol.

Proof: Let p_X = ((1 − q), q p̃ᵀ)ᵀ be the distribution of the channel input X, where p̃ = (p̃₁, …, p̃_n)ᵀ. Since G_{T_ỸX̃} is transitive, the non-zero columns of T_YX are permutations of each other, and H(Y|X = 0) = 0, so

H(Y|X) = Σ_{i=0}^{n} Pr(X = i) H(Y|X = i)   (20)
= (1 − q) H(Y|X = 0) + Σ_{i=1}^{n} q p̃_i h_{n+1}(T_YX e_i)   (21)
= Σ_{i=1}^{n} q p̃_i h_{n+1}(T_YX e₁)   (22)
= q h_{n+1}(T_YX e₁),   (23)

which is independent of p̃. Let G_{T_YX, T_ZX} = {G₁, …, G_l}. Then

H(Z) = h_{n+1}(T_ZX p_X)   (24)
= (1/l) Σ_{i=1}^{l} h_{n+1}(T_ZX G_i p_X)   (25)
≤ h_{n+1}(T_ZX (1/l) Σ_{i=1}^{l} G_i p_X)   (26)
= h_{n+1}(T_ZX ((1 − q), quᵀ)ᵀ)   (27)
= h((1 − α₂)q) + (1 − α₂)q ln(n),   (28)

where (26) follows from Jensen's inequality and the concavity of the entropy function. Since C̄_{p_X} ⊆ C̄_{((1−q), quᵀ)ᵀ} from Lemma 3,

F*(p_X, s) ≥ F*(((1 − q), quᵀ)ᵀ, s).   (29)

Fig. 6. The natural encoding approach for DM-DBCs.
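Steps (24)–(28) can be checked numerically. The sketch below (an assumed example, not from the paper: n = 3, a circulant group-additive sub-channel over Z₃, α₂ = 0.2 and q = 0.6) confirms that p̃ = u attains the closed form h((1 − α₂)q) + (1 − α₂)q ln n and that no other p̃ does better:

```python
import math, random

def entropy(dist):
    """Entropy in nats of a probability vector."""
    return -sum(p * math.log(p) for p in dist if p > 0)

n, alpha2, q = 3, 0.2, 0.6
beta2 = 1.0 - alpha2
# Group-additive sub-channel over Z_3: circulant matrix, columns are cyclic shifts.
noise = [0.7, 0.2, 0.1]
T_sub = [[noise[(i - j) % n] for j in range(n)] for i in range(n)]

def H_Z(p_sub):
    """H(Z) for input p_X = ((1 - q), q * p_sub) pushed through T_ZX of eq. (14)."""
    z_sub = [sum(T_sub[i][j] * p_sub[j] for j in range(n)) for i in range(n)]
    return entropy([1.0 - q * beta2] + [q * beta2 * z for z in z_sub])

uniform = [1.0 / n] * n
closed_form = entropy([1.0 - q * beta2, q * beta2]) + q * beta2 * math.log(n)
```

The maximizing property holds because the circulant T_sub maps the uniform p̃ to a uniform output, and entropy is concave.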
Plugging (23), (28) and (29) into Theorem 2, the capacity region for DM-DBCs is

co[∪_{p_X} {(R₁, R₂) : R₁ ≤ s − H(Y|X), R₂ ≤ H(Z) − F*_{T_YX, T_ZX}(p_X, s)}]   (30)
⊆ co[∪_{q∈[0,1]} {(R₁, R₂) : R₁ ≤ s − q h_{n+1}(T_YX e₁), R₂ ≤ h((1 − α₂)q) + (1 − α₂)q ln(n) − F*_{T_YX, T_ZX}(((1 − q), quᵀ)ᵀ, s)}]   (31)
= co[∪_{p_X=((1−q), quᵀ)ᵀ} {(R₁, R₂) : R₁ ≤ s − q h_{n+1}(T_YX e₁), R₂ ≤ h((1 − α₂)q) + (1 − α₂)q ln(n) − F*_{T_YX, T_ZX}(((1 − q), quᵀ)ᵀ, s)}]   (32)
= co[∪_{p_X=((1−q), quᵀ)ᵀ} {(R₁, R₂) : R₁ ≤ s − H(Y|X), R₂ ≤ H(Z) − F*_{T_YX, T_ZX}(p_X, s)}]   (33)
⊆ co[∪_{p_X} {(R₁, R₂) : R₁ ≤ s − H(Y|X), R₂ ≤ H(Z) − F*_{T_YX, T_ZX}(p_X, s)}],   (34)

where co[·] denotes the convex hull of the closure. Note that (30) and (34) are identical; hence (30)–(34) are all equal. Therefore, (31) expresses the capacity region for the DM-DBC, which also means that the capacity region can be achieved by transmission strategies for which the broadcast signal X has distribution p_X = ((1 − q), quᵀ)ᵀ for some q ∈ [0, 1]. Q.E.D.

V. Natural Encoding and Capacity Region for DM-DBCs

The natural encoding approach for the discrete multiplicative DBC is shown in Fig. 6. W₁ is the message for User 1, who sees the better channel Y, and W₂ is the message for User 2, who sees the worse channel Z. The natural encoding approach first independently encodes these two messages into two codewords X₁ and X₂ respectively, and then broadcasts X, which is obtained by applying the single-letter multiplication function X = X₁ ⊗ X₂ to the symbols of the codewords X₁ and X₂. The distribution of X₂ is constrained to be p_{X₂} = ((1 − q), quᵀ)ᵀ for some q ∈ [0, 1], and hence the distribution of the broadcast signal X = X₁ ⊗ X₂ also has p_X = ((1 − q), quᵀ)ᵀ for some q ∈ [0, 1], which the previous section proved to be the optimal input distribution for the DM-DBC.
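As a concrete instance of the single-letter combining X = X₁ ⊗ X₂ (a toy sketch, not the paper's code construction: the field GF(5) and the random symbol streams are assumptions made for illustration), discrete multiplication is the mod-5 product, zero is absorbing, and multiplication by a fixed non-zero symbol permutes the non-zero group:

```python
import random

P = 5  # field size: symbols {0, ..., 4}, i.e. n = 4 non-zero symbols

def mult(a, b):
    """Discrete multiplication in GF(5)."""
    return (a * b) % P

random.seed(1)
# Per-symbol streams of the two independent codewords: X2 may be zero,
# X1 is drawn from the non-zero multiplicative group {1, ..., 4}.
X2 = [random.randrange(P) for _ in range(1000)]
X1 = [random.randrange(1, P) for _ in range(1000)]
X = [mult(a, b) for a, b in zip(X1, X2)]
```

So X inherits the zero (erasure) mass of X₂, while the non-zero part of X is uniform whenever the non-zero symbols multiplied in are uniform, matching the optimal form p_X = ((1 − q), quᵀ)ᵀ.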

User 2 receives Z and decodes the desired message directly. User 1 receives Y and successively decodes the message for User 2 and then the message for User 1. Let p_X = ((1 − q), q p̃ᵀ)ᵀ be the distribution of the channel input X, where p̃ is the distribution of the sub-channel input X̃. For the DM-DBC X → Y → Z,

φ(p_X, λ) = h_{n+1}(T_ZX p_X) − λ h_{n+1}(T_YX p_X)   (35)
= h_{n+1}([1 − q + qα₂; q(1 − α₂)T_Z̃X̃ p̃]) − λ h_{n+1}([1 − q + qα₁; q(1 − α₁)T_ỸX̃ p̃])   (36)
= h(q(1 − α₂)) + q(1 − α₂)h_n(T_Z̃X̃ p̃) − λ(h(q(1 − α₁)) + q(1 − α₁)h_n(T_ỸX̃ p̃))   (37)
= h(qβ₂) − λh(qβ₁) + qβ₂ (h_n(T_Z̃X̃ p̃) − (λβ₁/β₂) h_n(T_ỸX̃ p̃)),   (38)

where β₁ = 1 − α₁ and β₂ = 1 − α₂. For the sub-channel X̃ → Ỹ → Z̃, define φ̃(p̃, λβ₁/β₂) = h_n(T_Z̃X̃ p̃) − (λβ₁/β₂) h_n(T_ỸX̃ p̃). Define ϕ(q, p̃, λ) as follows:

ϕ(q, p̃, λ) = h(qβ₂) − λh(qβ₁) + qβ₂ ψ̃(p̃, λβ₁/β₂),   (39)

where ψ̃ is the lower convex envelope of φ̃ in p̃. With this definition, note that ψ(p_X, λ), the lower convex envelope of φ(p_X, λ), is also the lower envelope of ϕ(q, p̃, λ).

Lemma 4: ψ(((1 − q), quᵀ)ᵀ, λ), the lower envelope of φ(p_X, λ) in p_X at p_X = ((1 − q), quᵀ)ᵀ, lies on the lower envelope of ϕ(q, u, λ) in q. (See [8] for proof.)

Lemma 4 indicates that the lower envelope of φ(·, λ) at p_X = ((1 − q), quᵀ)ᵀ can be obtained in two steps. First, for any fixed q, the lower envelope of φ(p_X, λ) in p̃ is ϕ(q, p̃, λ). Second, for p̃ = u, the lower envelope of ϕ(q, u, λ) in q coincides with ψ(p_X, λ), the lower envelope of φ(p_X, λ) in p_X.

Theorem 5: The natural encoding approach with time sharing achieves the boundary of the capacity region for the discrete multiplicative DBC.

Proof: Theorem 4 shows that the boundary of the capacity region for the DM-DBC can be achieved by transmission strategies with X̃ uniformly distributed, i.e., with input distribution p_X = ((1 − q), quᵀ)ᵀ. For p_X = ((1 − q), quᵀ)ᵀ, ψ(((1 − q), quᵀ)ᵀ, λ) can be attained by the convex combination of points on the graph of ϕ(q, u, λ). Note that ϕ(q, u, λ) = h(qβ₂) − λh(qβ₁) + qβ₂ ψ̃(u, λβ₁/β₂), which is the sum of φ(q, λ) for a broadcast Z channel and q times the constant β₂ ψ̃(u, λβ₁/β₂). Hence, by a discussion analogous to Section II-D, ψ(((1 − q), quᵀ)ᵀ, λ) can be attained by the convex combination of two points on the graph of ϕ(q, u, λ).
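The chain (35)–(38) is a grouping identity for the entropy function and can be verified numerically. In this sketch all parameter values and the circulant sub-channel matrices are assumptions made for illustration:

```python
import math

def entropy(dist):
    """Entropy in nats of a probability vector."""
    return -sum(x * math.log(x) for x in dist if x > 0)

n, lam, q = 3, 0.8, 0.5
beta1, beta2 = 0.9, 0.7
circulant = lambda noise: [[noise[(i - j) % n] for j in range(n)] for i in range(n)]
T_Ysub = circulant([0.8, 0.15, 0.05])   # sub-channel to Y~
T_Zsub = circulant([0.6, 0.25, 0.15])   # sub-channel to Z~

def push(T, p):
    return [sum(T[i][j] * p[j] for j in range(n)) for i in range(n)]

p_sub = [0.5, 0.3, 0.2]

# Left side, eq. (35): entropies of the full (n+1)-ary output distributions.
y_full = [1 - q * beta1] + [q * beta1 * v for v in push(T_Ysub, p_sub)]
z_full = [1 - q * beta2] + [q * beta2 * v for v in push(T_Zsub, p_sub)]
lhs = entropy(z_full) - lam * entropy(y_full)

# Right side, eq. (38): binary-entropy terms plus the scaled sub-channel term.
phi_sub = entropy(push(T_Zsub, p_sub)) - (lam * beta1 / beta2) * entropy(push(T_Ysub, p_sub))
rhs = entropy([1 - q * beta2, q * beta2]) - lam * entropy([1 - q * beta1, q * beta1]) \
      + q * beta2 * phi_sub
```

Equality holds for every p̃, which is what lets the envelope computation split into the Z-channel part in q and the sub-channel part ψ̃.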
One point is at q = 0, where ϕ(0, u, λ) = 0. The other point is at q = p*, where p* is determined by (8). Note that the point (0, 0) on the graph of ϕ(q, u, λ) is also on the graph of φ(p_X, λ). By Theorem 3, the point (p*, ϕ(p*, u, λ)) is the convex combination of n points on the graph of φ(·, λ), which correspond to the group-addition encoding approach for the sub-channel X̃ → Ỹ → Z̃, because the group-addition encoding approach is optimal for the group-additive DBC X̃ → Ỹ → Z̃. Therefore, by Theorem 3, an optimal transmission strategy for the DM-DBC X → Y → Z has the structure shown in Fig. 7.

Fig. 7. The optimal transmission strategy for the discrete multiplicative degraded broadcast channel.

If the auxiliary random variable U = 0, then the channel input X = 0. If U is a non-zero symbol, then X = 0 with probability 1 − p*. In the case where U and X are both non-zero, X can be obtained as X̃ = Ũ ⊕ Ṽ, where ⊕ is the group operation of the group-additive degraded broadcast sub-channel X̃ → Ỹ → Z̃, Ũ is uniformly distributed and Ṽ is an n-ary random variable. By the definitions of group addition and discrete multiplication, the transmission strategy with the structure in Fig. 7 is the natural encoding approach. Q.E.D.

VI. Conclusion

This paper shows that the natural encoding of Fig. 6 is optimal for two-user DM-DBCs. Its achievable rate region is also a single-letter characterization of the capacity region for DM-DBCs. Hence, the capacity region for the DM-DBC of Fig. 4 is

co[∪_{p_U, p_V} {(R₁, R₂) : R₁ ≤ H(U ⊗ V ⊗ N₁ | U) − H(U ⊗ V ⊗ N₁ | U, V), R₂ ≤ H(U ⊗ V ⊗ N₁ ⊗ N₂) − H(U ⊗ V ⊗ N₁ ⊗ N₂ | U)}],   (40)

where the union is over the distributions of the independent random variables U and V, the channel input is X = U ⊗ V, and N₁, N₂ are the noise variables of Definition 3.

References

[1] T. M. Cover. Broadcast channels. IEEE Trans. Inform. Theory, IT-18:2–14, January 1972.
[2] P. P. Bergmans. Random coding theorem for broadcast channels with degraded components. IEEE Trans. Inform. Theory, IT-19:197–207, March 1973.
[3] R. G. Gallager. Capacity and coding for degraded broadcast channels. Probl. Pered. Inform., 10:3–14, July–Sept. 1974.
[4] P. P. Bergmans. A simple converse for broadcast channels with additive white Gaussian noise. IEEE Trans.
Inform. Theory, IT-20:279–280, March 1974.
[5] H. Witsenhausen and A. Wyner. A conditional entropy bound for a pair of discrete random variables. IEEE Trans. Inform. Theory, IT-21(5):493–501, Sep. 1975.
[6] R. Benzel. The capacity region of a class of discrete additive degraded interference channels. IEEE Trans. Inform. Theory, IT-25:228–231, Mar. 1979.
[7] B. Xie, M. Griot, A. I. Vila Casado and R. D. Wesel. Optimal transmission strategy and capacity region for broadcast Z channels. In IEEE Information Theory Workshop 2007, Lake Tahoe, USA, Sep. 2007.
[8] B. Xie and R. D. Wesel. Optimal Independent-Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels. arXiv:0811.4162v1, Jan. 14, 2009.
[9] B. Xie and R. D. Wesel. A mutual information invariance approach to symmetry in discrete memoryless channels. In Information Theory and Applications 2008, UCSD, San Diego, USA, Jan. 27–Feb. 1, 2008.