Polar Codes for Some Multi-terminal Communications Problems

Aria G. Sahebi and S. Sandeep Pradhan
Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA. Email: ariaghs@umich.edu, pradhanv@umich.edu

Abstract—It is shown that polar coding schemes achieve the known achievable rate regions for several multi-terminal communications problems, including lossy distributed source coding, multiple access channels and multiple description coding. The results are valid for arbitrary alphabet sizes (binary or non-binary) and arbitrary distributions (symmetric or asymmetric).

I. INTRODUCTION

Polar codes were recently proposed by Arikan [1] to achieve the symmetric capacity of binary-input channels. This result was later generalized to arbitrary discrete memoryless channels [2]-[5]. Polar coding schemes were also developed to achieve the symmetric rate-distortion function for arbitrary discrete memoryless sources [6]-[8]. Polar coding results for asymmetric cases are developed in [9]. Among the existing works on the application of polar codes to multi-terminal problems, we note [6], [10]-[12] for distributed source coding, [13], [14] for multiple access channels and [15] for broadcast channels. In [16], it is shown that nested polar codes can be used to achieve the Shannon capacity of arbitrary discrete memoryless channels and the Shannon rate-distortion function for discrete memoryless sources. In this paper, we show that nested polar codes can achieve the best known achievable rate regions for several multi-terminal communication systems. We present several examples, including the distributed source coding problem, multiple access channels, computation over MACs, broadcast channels and multiple description coding, to illustrate how these codes can be employed to obtain an optimal performance in multi-terminal cases. The results of this paper are general regarding the size of the alphabets (binary or non-binary), using the approach of [4].
The special case where the alphabets are binary is discussed in [16] for the lossy source coding problem. In addition, the results of this paper are general regarding the distributions; i.e., we do not assume uniform distributions on the channel inputs or the source alphabets.

This paper is organized as follows: In Section II we state some preliminaries. In Section III, we consider the distributed source coding problem and show that polar codes achieve the Berger-Tung rate region. In Section IV, we consider a distributed source coding problem in which the decoder is interested in decoding the sum of auxiliary random variables and show that polar codes have an optimal performance (this scheme has the Korner-Marton scheme as a special case). In Section V, we show that polar codes achieve the capacity region of multiple access channels. In Section VI, we show that polar codes have an optimal performance for the problem of computation over a MAC where the decoder is interested in the sum of the inputs. In Section VII, we study the performance of polar codes for broadcast channels. In Section VIII, we show that polar codes are optimal for the multiple description problem. Finally, in Section IX, we briefly discuss other possible problems and extensions to multiple-user (more than two) cases. (This work was supported by NSF grant CCF-1116021.)

II. PRELIMINARIES

1) Channel Parameters: For a channel (X, Y, W), assume X is equipped with the structure of a group (G, +) with |G| = q. The symmetric capacity is defined as Ī(W) = I(X; Y), where the channel input X is uniformly distributed over X and Y is the output of the channel. For d ∈ G, we define

Z_d(W) = (1/q) Σ_{x ∈ G} Σ_{y ∈ Y} sqrt( W(y|x) W(y|x + d) )

and for a subgroup H ≤ G we define Z_H(W) = Σ_{d ∉ H} Z_d(W).

2) Binary Polar Codes: For any N = 2^n, a polar code of length N designed for the channel (Z_2, Y, W) is a linear (coset) code characterized by a generator matrix G_N and a set A ⊆ {1, ..., N} of indices of almost perfect channels. The set A is a function of the channel.
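These parameters can be computed directly from the channel's transition matrix. The following is a minimal illustrative sketch (assumptions of this sketch: the input group is Z_q, the channel is given as a nested list with W[x][y] = W(y|x), and information is measured in bits):

```python
import math

def symmetric_capacity(W, q):
    """I-bar(W) = I(X; Y) in bits with X uniform over the input group Z_q."""
    ny = len(W[0])
    py = [sum(W[x][y] for x in range(q)) / q for y in range(ny)]
    return sum(
        (W[x][y] / q) * math.log2(W[x][y] / py[y])
        for x in range(q) for y in range(ny) if W[x][y] > 0
    )

def Z_d(W, q, d):
    """Z_d(W) = (1/q) * sum_{x,y} sqrt(W(y|x) W(y|x+d)), with x+d mod q."""
    return sum(
        math.sqrt(W[x][y] * W[(x + d) % q][y])
        for x in range(q) for y in range(len(W[0]))
    ) / q

def Z_H(W, q, H):
    """Z_H(W) = sum of Z_d(W) over d outside the subgroup H."""
    return sum(Z_d(W, q, d) for d in range(q) if d not in H)
```

For the binary symmetric channel with crossover probability 0.1, this gives Ī(W) = 1 − h(0.1) ≈ 0.531, Z_0(W) = 1 and Z_1(W) = 2·sqrt(0.9 · 0.1) = 0.6.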
The decoding algorithm for polar codes is a specific form of successive cancellation [1].

3) Polar Codes Over Abelian Groups: For any discrete memoryless channel, there always exists an Abelian group of the same size as the channel input alphabet. Polar codes for arbitrary discrete memoryless channels (over arbitrary Abelian groups) are introduced in [4]. For the various notations used in this paper, we refer the reader to [4] and [16].

III. DISTRIBUTED SOURCE CODING: THE BERGER-TUNG PROBLEM

In the distributed source coding problem, two separate sources X and Y communicate with a centralized decoder.

Let X, Y and X̂, Ŷ be the source and reconstruction alphabets of the two terminals, and assume X and Y have a joint distribution p_XY. Let d_1 : X × X̂ → R+ and d_2 : Y × Ŷ → R+ be the distortion measures for terminals X and Y respectively. We denote this source by (X, Y, X̂, Ŷ, p_XY, d_1, d_2). Let U and V be auxiliary random variables taking values from U and V respectively, satisfying the Markov chain U − X − Y − V, such that E{d_1(X, X̂)} ≤ D_1 and E{d_2(Y, Ŷ)} ≤ D_2 for some distortion levels D_1, D_2 ∈ R+. It is known from the Berger-Tung coding scheme that the tuple (R_1, R_2, D_1, D_2) is achievable if

R_1 ≥ I(X; U) − I(U; V), R_2 ≥ I(Y; V) − I(U; V) and R_1 + R_2 ≥ I(X; U) + I(Y; V) − I(U; V).

In this section, we prove the following theorem:

Theorem III.1. For a source (X, Y, X̂, Ŷ, p_XY, d_1, d_2), assume X̂ and Ŷ are finite. Then the Berger-Tung rate region is achievable using nested polar codes.

It suffices to show that the rates R_1 = I(X; U) − I(U; V) and R_2 = I(Y; V) are achievable. Let G be an Abelian group of size q, larger than or equal to the sizes of both U and V. Note that for the source Y, we can use a nested polar code as introduced in [16] to achieve the rate I(Y; V); furthermore, the outcome v_1^N of V_1^N is then available at the decoder with high probability. It remains to show that the rate R_1 = I(X; U) − I(U; V) is achievable when a sequence v_1^N with d_2(y_1^N, v_1^N) ≤ D_2 is available at the decoder. Given the test channel p_{U|X}, define the artificial channels (G, V × G, W_c) and (G, X × G, W_s) such that for s, z ∈ G, v ∈ V and x ∈ X,

W_c(v, z | s) = p_{VZ|S}(v, z | s) = (1/q) p_{V|U}(v | s − z), W_s(x, z | s) = p_{XZ|S}(x, z | s) = (1/q) p_{X|U}(x | s − z),

where Z is a random variable uniformly distributed over G which is independent of X, U and V, and S = U + Z. These channels are depicted in Figure 1 (the test channel for the inner code, the channel coding component) and Figure 2 (the test channel for the outer code, the source coding component). It is straightforward to show that in this case S is also uniformly distributed over G. Similarly to the point-to-point result [16], one can show that the symmetric capacities of the channels are given by Ī(W_c) = log q − H(U|V) and Ī(W_s) = log q − H(U|X).
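Membership of a rate pair in the Berger-Tung region stated above can be checked numerically once a joint pmf of (X, Y, U, V) is fixed. A small sketch (rates in bits; the joint pmf is a hypothetical toy example, represented as a dict keyed by (x, y, u, v)):

```python
import math
from collections import defaultdict

def marginal(p, idx):
    """Marginal pmf over the coordinates listed in idx."""
    m = defaultdict(float)
    for k, v in p.items():
        m[tuple(k[i] for i in idx)] += v
    return dict(m)

def mutual_info(p, ia, ib):
    """I(A; B) in bits, where A = coords ia and B = coords ib of p."""
    pab, pa, pb = marginal(p, ia + ib), marginal(p, ia), marginal(p, ib)
    return sum(
        v * math.log2(v / (pa[k[:len(ia)]] * pb[k[len(ia):]]))
        for k, v in pab.items() if v > 0
    )

def in_berger_tung(R1, R2, p):
    """Check the three Berger-Tung inequalities; coords of p are (x, y, u, v)."""
    IXU = mutual_info(p, (0,), (2,))
    IYV = mutual_info(p, (1,), (3,))
    IUV = mutual_info(p, (2,), (3,))
    return (R1 >= IXU - IUV and R2 >= IYV - IUV
            and R1 + R2 >= IXU + IYV - IUV)
```

For instance, with U = X, V = Y and a doubly symmetric binary source with P(X = Y) = 0.8, one gets I(X;U) = I(Y;V) = 1 and I(U;V) ≈ 0.278, so the pair (1, 1) lies in the region while (0.5, 0.5) does not.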
We employ a nested polar code in which the inner code is a good channel code for the channel W_c and the outer code is a good source code for W_s. The rate of this code is equal to R = Ī(W_s) − Ī(W_c) = I(X; U) − I(U; V). The rest of this section is devoted to some general definitions and lemmas which are used in the proofs.

Lemma III.1. The channel W_c is stochastically degraded with respect to the channel W_s.

Proof: In [16, Definition III.1], let the channel (X × G, V × G, W') be such that for z, z' ∈ G, v ∈ V and x ∈ X, W'(v, z | x, z') = p_{V|X}(v | x) 1_{z = z'}. By the Markov chain V − X − (U, Z), concatenating W_s with W' yields W_c.

Let N = 2^n for some positive integer n and let G_N be the corresponding generator matrix for polar codes. For i = 1, ..., N, and for z_1^N, a_1^N ∈ G^N, v_1^N ∈ V^N and x_1^N ∈ X^N, let

W_{c,N}^{(i)}(z_1^N, v_1^N, a_1^{i−1} | a_i) = Σ_{a_{i+1}^N ∈ G^{N−i}} (1/q^{N−1}) W_c^N(z_1^N, v_1^N | a_1^N G_N)

W_{s,N}^{(i)}(x_1^N, z_1^N, a_1^{i−1} | a_i) = Σ_{a_{i+1}^N ∈ G^{N−i}} (1/q^{N−1}) W_s^N(x_1^N, z_1^N | a_1^N G_N)

Let the random vectors X_1^N, U_1^N, V_1^N, Y_1^N be distributed according to p^N and let Z_1^N be a random vector uniformly distributed over G^N which is independent of (X_1^N, U_1^N, V_1^N, Y_1^N). Let S_1^N = U_1^N + Z_1^N and A_1^N = S_1^N G_N^{−1} (here, G_N^{−1} is the inverse of the mapping G_N : G^N → G^N). In other words, the joint distribution of the random vectors is given by

p(a_1^N, s_1^N, u_1^N, v_1^N, x_1^N, z_1^N) = (1/q^N) p_{XUV}^N(x_1^N, u_1^N, v_1^N) 1_{s_1^N = a_1^N G_N, u_1^N = a_1^N G_N − z_1^N}

A. Sketch of the proof

The following theorems from [4] state the standard channel coding and source coding polarization phenomena for the general case.

Theorem III.2. For any ε > 0 and 0 < β < 1/2, there exist a large N = 2^n and a partition {A_H | H ≤ G} of [1, N] such that for H ≤ G and i ∈ A_H, |Ī(W_{c,N}^{(i)}) − log |G/H|| < ε and Z_H(W_{c,N}^{(i)}) < 2^{−N^β}. Moreover, as ε → 0 (and N → ∞), |A_H|/N → p_H for some probabilities p_H, H ≤ G, adding up to one, with Σ_{H≤G} p_H log |G/H| = Ī(W_c).

Theorem III.3. For any ε > 0 and 0 < β < 1/2, there exist a large N = 2^n and a partition {B_H | H ≤ G} of [1, N] such that for H ≤ G and i ∈ B_H, |Ī(W_{s,N}^{(i)}) − log |G/H|| < ε and Z_H(W_{s,N}^{(i)}) < 2^{−N^β}. Moreover, as ε → 0 (and N → ∞), |B_H|/N → q_H for some probabilities q_H, H ≤ G, adding up to one, with Σ_{H≤G} q_H log |G/H| = Ī(W_s).
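The polarization phenomenon stated in Theorems III.2 and III.3 can be observed numerically in the simplest binary special case: for the binary erasure channel BEC(ε), the Bhattacharyya parameter of the two channels synthesized by Arikan's 2×2 transform is exactly 2Z − Z² and Z² [1]. The sketch below is only this standard binary illustration, not the Abelian-group construction of [4]:

```python
def polarize_bec(eps, n):
    """Erasure probabilities of the 2^n synthesized channels from BEC(eps);
    for the BEC the recursions Z- = 2Z - Z^2, Z+ = Z^2 are exact."""
    zs = [eps]
    for _ in range(n):
        zs = [z for x in zs for z in (2 * x - x * x, x * x)]
    return zs

zs = polarize_bec(0.5, 12)                   # N = 4096 synthesized channels
good = sum(z < 1e-3 for z in zs) / len(zs)   # almost perfect channels
bad = sum(z > 1 - 1e-3 for z in zs) / len(zs)  # almost useless channels
```

The multiset mean stays exactly at ε (the transform preserves the average erasure rate), while the fraction of near-perfect channels approaches the symmetric capacity 1 − ε = 0.5 as n grows.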
For subgroups H ≤ G and fixed 0 < β_1 < β < 1/2, define

A_H = { i ∈ [1, N] : Z_H(W_{c,N}^{(i)}) < 2^{−N^β} and there is no K ⪇ H with Z_K(W_{c,N}^{(i)}) < 2^{−N^β} }

B_H = { i ∈ [1, N] : Z_H(W_{s,N}^{(i)}) < 2^{−N^{β_1}} and there is no K ⪇ H with Z_K(W_{s,N}^{(i)}) < 2^{−N^{β_1}} }

For H, K ≤ G, define A_{H,K} = A_H ∩ B_K. Note that for large N, 2^{−N^β} < 2^{−N^{β_1}}. Since W_c is degraded with respect to W_s (Lemma III.1), this implies that for i ∈ A_H we have Z_H(W_{s,N}^{(i)}) < 2^{−N^{β_1}}, and hence i ∈ ∪_{K≤H} B_K.

Therefore, for K ≰ H, we have A_{H,K} = ∅. This means that {A_{H,K} | K ≤ H ≤ G} forms a partition of [1, N]. Note that as N increases, |A_H|/N → p_H and |B_H|/N → q_H.

The encoding and decoding rules are as follows. Let z_1^N ∈ G^N be an outcome of the random vector Z_1^N known to both the encoder and the decoder. Given K ≤ H ≤ G, let T_H be a transversal of H in G and let T_{K,H} be a transversal of K in H. Any element g of G can be represented as g = [g]_K + [g]_{K,H} + [g]_H for unique [g]_K ∈ K, [g]_{K,H} ∈ T_{K,H} and [g]_H ∈ T_H. Also note that T_{K,H} + T_H is a transversal T_K of K in G, so that g can be uniquely represented as g = [g]_K + [g]^K for some [g]_K ∈ K and [g]^K ∈ T_K, and [g]^K can in turn be uniquely represented as [g]^K = [g]_{K,H} + [g]_H.

Given a source sequence x_1^N, the encoding rule is as follows: For i ∈ [1, N], if i ∈ A_{H,K} for some K ≤ H ≤ G, the component [a_i]_K is uniformly distributed over K and is known to both the encoder and the decoder (and is independent of the other random variables). The component [a_i]^K is chosen randomly so that for g ∈ [a_i]_K + T_K,

P(a_i = g) = p_{A_i | X_1^N Z_1^N A_1^{i−1}}(g | x_1^N, z_1^N, a_1^{i−1}) / p_{A_i | X_1^N Z_1^N A_1^{i−1}}([a_i]_K + T_K | x_1^N, z_1^N, a_1^{i−1}).

Note that a_1^N can be decomposed as a_1^N = [a_1^N]_K + [a_1^N]_{K,H} + [a_1^N]_H, in which [a_1^N]_K is known to the decoder. The encoder sends [a_1^N]_{K,H} to the decoder and the decoder uses the channel code to recover [a_1^N]_H. The decoding rule is as follows: Given z_1^N, v_1^N, [a_1^N]_K and [a_1^N]_{K,H}, for i ∈ A_{H,K}, let

â_i = argmax_{g ∈ [a_i]_K + [a_i]_{K,H} + T_H} W_{c,N}^{(i)}(z_1^N, v_1^N, â_1^{i−1} | g).

Finally, the decoder outputs â_1^N G_N − z_1^N. Note that the rate of this code is equal to

R = (1/N) Σ_{K≤H≤G} |A_{H,K}| log |H/K| → Ī(W_s) − Ī(W_c) = I(X; U) − I(U; V).

IV. DISTRIBUTED SOURCE CODING: DECODING THE SUM OF VARIABLES

For a distributed source with joint distribution p_XY, reconstruction alphabet Ẑ and distortion measure d, let the auxiliary random variables U and V take values from a group G. Assume that U and V satisfy the Markov chain U − X − Y − V and assume E{d(X, Y, g(U + V))} ≤ D for some function g. For W = U + V, we show that the following rates are achievable for decoding W at the decoder:

R_1 = H(W) − H(U|X), R_2 = H(W) − H(V|Y).
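The coset decomposition g = [g]_K + [g]_{K,H} + [g]_H used in the encoding rule above can be verified exhaustively on a small example. A sketch for G = Z_8 with K = {0, 4} inside H = {0, 2, 4, 6} (the transversals chosen below are one arbitrary valid choice):

```python
from itertools import product

q = 8
K = [0, 4]      # subgroup K of H
T_KH = [0, 2]   # transversal of K in H = {0, 2, 4, 6}
T_H = [0, 1]    # transversal of H in G = Z_8

def decompose(g):
    """Return the unique (k, t_kh, t_h) with g = k + t_kh + t_h (mod 8)."""
    for k, t1, t2 in product(K, T_KH, T_H):
        if (k + t1 + t2) % q == g:
            return k, t1, t2

# every g in Z_8 has exactly one such representation
counts = {g: sum((k + t1 + t2) % q == g
                 for k, t1, t2 in product(K, T_KH, T_H))
          for g in range(q)}
```

Every g ∈ Z_8 has exactly one such representation (e.g. 7 = 4 + 2 + 1), and the transmitted middle component ranges over T_{K,H}, carrying log |H/K| = 1 bit per symbol, in agreement with the rate expression above.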
The X-encoder employs a nested polar code whose inner code is a good channel code for the channel (G, G, W_c) and whose outer code is a good source code for the test channel (G, X × G, W_{s,X}), where for s, t, q̃, z ∈ G and x ∈ X,

W_c(q̃ | s + t) = p_W(s + t − q̃), W_{s,X}(x, z | s) = (1/q) p_{X|U}(x | s − z).

Similarly, the Y-encoder employs a nested polar code whose inner code is a good channel code for the same channel W_c and whose outer code is a good source code for the test channel (G, Y × G, W_{s,Y}), where for t, r ∈ G and y ∈ Y, W_{s,Y}(y, r | t) = (1/q) p_{Y|V}(y | t − r). These channels are depicted in Figures 3-6 (the inner and outer codes of the two encoders). We need to show that W_c is degraded with respect to W_{s,X} (and, similarly, with respect to W_{s,Y}). To show this, in the definition of degradedness [16, Definition III.1], we let the channel (X × G, G, W') be such that for q̃, z ∈ G and x ∈ X, W'(q̃ | x, z) = p_{V|X}(z − q̃ | x).

V. MULTIPLE ACCESS CHANNELS

Let the finite sets X and Y be the input alphabets of a two-user MAC, let Z be the output alphabet, and assume the messages of the two users are independent. In order to show that nested polar codes achieve the capacity region of a MAC, it suffices to show that the rates R_1 = I(X; Z|Y) = H(X) − H(X|Y, Z) and R_2 = I(Y; Z) are achievable (to incorporate the time-sharing argument, see Section IX). It is known from the point-to-point result [16] that the terminal Y can communicate with the decoder at rate I(Y; Z), so that y_1^N is available at the decoder with high probability. It remains to show that the rate R_1 is achievable for the terminal X when y_1^N is available at the decoder. Let G be an Abelian group with |G| = |X| = q. Define the artificial channels (G, G, W_s) and (G, Y × Z × G, W_c) such that for u, s ∈ G, y ∈ Y and z ∈ Z,

W_s(s | u) = p_X(u − s), W_c(y, z, s | u) = p_X(u − s) p_{YZ|X}(y, z | u − s).

These channels are depicted in Figure 7 (the channel for the inner code) and Figure 8 (the channel for the outer code). Similarly to the previous cases, one can show that the symmetric capacities of the channels are equal to Ī(W_s) = log q − H(X) and Ī(W_c) = log q − H(X|Y, Z). We employ a nested polar code in which the inner code is a good source code for the test channel W_s and the outer code is a good channel code for W_c. The rate of this code is equal to R = Ī(W_c) − Ī(W_s) = I(X; Z|Y). Here, we only give a sketch of the proof. First note that the channel W_s is degraded with respect to W_c, so that the source code is contained in the channel code. For s_1^N, a_1^N ∈ G^N, y_1^N ∈ Y^N and z_1^N ∈ Z^N, let

W_{s,N}^{(i)}(s_1^N, a_1^{i−1} | a_i) = Σ_{a_{i+1}^N ∈ G^{N−i}} (1/q^{N−1}) W_s^N(s_1^N | a_1^N G_N)

W_{c,N}^{(i)}(y_1^N, z_1^N, s_1^N, a_1^{i−1} | a_i) = Σ_{a_{i+1}^N ∈ G^{N−i}} (1/q^{N−1}) W_c^N(y_1^N, z_1^N, s_1^N | a_1^N G_N)

Let the random vectors X_1^N, Y_1^N, Z_1^N be distributed according to p^N and let S_1^N be a random vector uniformly distributed over G^N which is independent of (X_1^N, Y_1^N, Z_1^N). Let U_1^N = X_1^N + S_1^N and A_1^N = U_1^N G_N^{−1}. The encoding and decoding rules are similar to those of the point-to-point channel coding result; i.e., at the encoder, the distribution p_{A_i | A_1^{i−1} S_1^N} is used for soft encoding, and at the decoder, W_{c,N}^{(i)}(y_1^N, z_1^N, s_1^N, a_1^{i−1} | a_i) is used in the successive cancellation decoder to decode a_1^N. The final decoder output is equal to â_1^N G_N − s_1^N. Note that since y_1^N is known to the decoder with high probability, it can be used as a channel output for the code of terminal X.

VI. COMPUTATION OVER MAC

In this section, we consider a simple computation problem over a MAC with input alphabets X, Y and output alphabet Z. The two input terminals of the MAC, X and Y, are trying to communicate with a centralized decoder which is interested only in the sum of the inputs W = X + Y, where + is the operation of an Abelian group G containing X and Y. We show that the rate R = min(H(X), H(Y)) − H(W|Z) is achievable using polar codes. The terminal X employs a nested polar code whose inner code is a good source code for the test channel (G, G, W_{s,X}) and whose outer code is a good channel code for the channel (G, Z × G, W_c), where for u, v, r, q̃ ∈ G and z ∈ Z,

W_{s,X}(r | u) = p_X(u − r), W_c(z, q̃ | u + v) = p_{WZ}(u + v − q̃, z).
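The rate expressions of Sections V and VI can be evaluated for toy channels. The sketch below (a hypothetical example; the MAC is a dict mapping (x, y) to a pmf over z, and rates are in bits) computes the MAC corner point (I(X; Z|Y), I(Y; Z)) and the computation rate min(H(X), H(Y)) − H(X + Y | Z):

```python
import math
from collections import defaultdict

def H(p):
    """Entropy in bits of a pmf given as a dict."""
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def marg(p, idx):
    m = defaultdict(float)
    for k, v in p.items():
        m[tuple(k[i] for i in idx)] += v
    return dict(m)

def joint(px, py, W):
    """p(x, y, z) for independent inputs px, py and MAC W[(x, y)][z]."""
    return {(x, y, z): px[x] * py[y] * w
            for x in px for y in py
            for z, w in W[(x, y)].items() if px[x] * py[y] * w > 0}

def mac_corner(px, py, W):
    p = joint(px, py, W)
    # R1 = I(X;Z|Y) = H(X) - H(X|Y,Z), using X independent of Y
    R1 = H(marg(p, (0,))) - (H(p) - H(marg(p, (1, 2))))
    # R2 = I(Y;Z)
    R2 = H(marg(p, (1,))) + H(marg(p, (2,))) - H(marg(p, (1, 2)))
    return R1, R2

def computation_rate(px, py, W, q):
    """min(H(X), H(Y)) - H(X+Y mod q | Z): the decoder wants only the sum."""
    p = joint(px, py, W)
    psz = defaultdict(float)
    for (x, y, z), v in p.items():
        psz[((x + y) % q, z)] += v
    HS_Z = H(dict(psz)) - H(marg(p, (2,)))
    return min(H(marg(p, (0,))), H(marg(p, (1,)))) - HS_Z
```

For the noiseless binary XOR MAC Z = X ⊕ Y with uniform inputs, the corner point is (1, 0); passing the output through a BSC(0.1) gives a computation rate of 1 − h(0.1) ≈ 0.531.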
Similarly, the terminal Y employs a nested polar code whose inner code is a good source code for the test channel (G, G, W_{s,Y}) and whose outer code is a good channel code for the same channel (G, Z × G, W_c), where for v, t ∈ G, W_{s,Y}(t | v) = p_Y(v − t). Note that the two terminals use the same channel code. These channels are depicted in Figures 9-12 (the inner and outer codes of the two terminals). Similarly to the previous cases, one can show that the symmetric capacities of the channels are equal to Ī(W_{s,X}) = log q − H(X) and Ī(W_{s,Y}) = log q − H(Y). We employ nested polar codes in which the inner codes are good source codes for the test channels W_{s,X} and W_{s,Y}, and the outer code is a good channel code for W_c. The rate of this code is equal to R = Ī(W_c) − max(Ī(W_{s,X}), Ī(W_{s,Y})) = min(H(X), H(Y)) − H(W|Z). It is worth noting that it can be shown that the intersection of the two source codes is contained in the common channel code.

VII. THE BROADCAST CHANNEL

In this section, we consider a broadcast channel (X, Y × Z, W_{YZ|X}, w) where X = G for some Abelian group G. Let X be a random variable over X such that E{w(X)} ≤ D and let Y, Z be the corresponding channel outputs. Let U, V be random variables over G satisfying the Markov chain (U, V) − X − (Y, Z) and such that there exists a function g : G^2 → X with g(U, V) = X. We show that the rates

R_1 = I(U; Y) − I(U; V) = H(U|V) − H(U|Y), R_2 = I(V; Z)

are achievable if the Markov chain U − Y − V holds in addition to the Markov chain above needed for Marton's bound. Note that the terminal Z can be served by a point-to-point channel code achieving the desired rate R_2. It remains to show that the rate R_1 is achievable when v_1^N is available at the encoder. Define the artificial channels (G, V × G, W_s) and (G, Y × G, W_c) such that for s, z ∈ G, v ∈ V and y ∈ Y,

W_s(v, z | s) = p_{UV}(s − z, v), W_c(y, z | s) = p_{UY}(s − z, y).

These channels are depicted in Figure 13 (the channel for the inner code) and Figure 14 (the channel for the outer code). Similarly to the previous cases, one can show that the symmetric capacities of the channels are equal to Ī(W_s) = log q − H(U|V) and Ī(W_c) = log q − H(U|Y). Note that to guarantee that W_s is degraded with respect to W_c, we need an additional condition on the auxiliary random variables; it suffices to assume that the Markov chain U − Y − V holds. We employ a nested polar code in which the inner code is a good source code for the test channel W_s and the outer code is a good channel code for W_c. The rate of this code is equal to R = Ī(W_c) − Ī(W_s) = I(U; Y) − I(U; V).

VIII. MULTIPLE DESCRIPTION CODING

Consider a multiple description problem in which a source X is to be reconstructed at three terminals U, V and W.

There are two encoders and three decoders. Terminals U and V have access to the outputs of their corresponding encoders, and terminal W has access to the outputs of both encoders. The goal is to find all achievable tuples (R_1, R_2, D_1, D_2, D_3), where R_1 and R_2 are the rates of the two encoders and D_1, D_2 and D_3 are the distortion levels at decoders U, V and W respectively, measured as the averages of the distortion measures d_1(X, U), d_2(X, V) and d_3(X, W). Let U, V and W be random variables such that E{d_1(X, U)} ≤ D_1, E{d_2(X, V)} ≤ D_2 and E{d_3(X, W)} ≤ D_3. We show that the tuple (R_1, R_2, D_1, D_2, D_3) is achievable if R_1 ≥ I(X; U), R_2 ≥ I(X; V) and R_1 + R_2 ≥ I(X; U, V, W) + I(U; V). It suffices to show that the rates R_1 = I(X; U) − I(U; V) + I(X; W|U, V) and R_2 = I(X; V) are achievable. The point-to-point source coding result implies that with R_2 = I(X; V) we can have v_1^N at the output of the second decoder with high probability. To achieve the rate R_1 when v_1^N is available, first note that R_1 = [H(U|V) − H(U|X)] + [H(W|U, V) − H(W|X, U, V)]. We use a code with rate R_11 = H(U|V) − H(U|X) for sending U and another code with rate R_12 = H(W|U, V) − H(W|X, U, V) for sending W. The corresponding channels are depicted in Figures 15-18 (the inner and outer codes for U and for W).

IX. OTHER PROBLEMS AND DISCUSSION

In this paper, we studied the main multi-terminal communication problems in their simplest forms (e.g., with no time sharing, etc.). The approach of this paper can be extended to more general formulations and to other similar problems, and to multiple-user (more than two) cases in a straightforward fashion. We briefly discuss examples of such extensions. First, consider the Berger-Tung rate region for the distributed source coding problem and let Q be the time-sharing random variable. We show that the rates R_1 = I(X; U|Q) and R_2 = I(Y; V|U, Q) are achievable for this problem.
To achieve these rates, note that R_1 = I(X, Q; U) − I(U; Q) and R_2 = I(Y, U, Q; V) − I(V; U, Q), and design an inner polar code of rate I(U; Q) and an outer polar code of rate I(U; X, Q) for the source X, and an inner polar code of rate I(V; U, Q) and an outer polar code of rate I(V; Y, U, Q) for the source Y, using suitably defined channels. Let us denote the channel depicted in Figure 1 symbolically by W(A|B) for generic random variables A and B. Then these channels are given by W(U|Q), W(U|X, Q), W(V|U, Q) and W(V|Y, U, Q) respectively. Next, we consider the multiple description coding problem with an additional auxiliary random variable and note that the corner points of the corresponding achievable region can be attained in the same fashion: each rate is split into components of the form H(·|·) − H(·|·) (R_1 = R_11 + R_12 + R_13 and R_2 = R_21 + R_22), and a nested polar code is designed for each component similarly to the other examples presented in the paper. Finally, consider a 3-user MAC with inputs W, X and Y and output Z. We have seen in Section V that with the rates R_X = I(X; Z|Y) and R_Y = I(Y; Z), we can have access to x_1^N and y_1^N at the decoder with high probability. The channels W(W|0) and W(W|X, Y, Z) can be used to design a nested polar code of rate R_W = I(W; Z|X, Y) for the terminal W.

REFERENCES

[1] E. Arikan, "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051-3073, 2009.
[2] E. Sasoglu, E. Telatar, and E. Arikan, "Polarization for arbitrary discrete memoryless channels," IEEE Information Theory Workshop, Dec. 2009, Lausanne, Switzerland.
[3] R. Mori and T. Tanaka, "Channel polarization on q-ary discrete memoryless channels by arbitrary kernels," Proceedings of the IEEE International Symposium on Information Theory, July 2010, Austin, TX.
[4] A. Sahebi and S. Pradhan, "Multilevel channel polarization for arbitrary discrete memoryless channels," IEEE Transactions on Information Theory, vol. 59, no. 12, pp. 7839-7857, 2013.
[5] W. Park and A. Barg, "Polar codes for q-ary channels, q = 2^r," 2012, online: http://arxiv.org/abs/1107.4965.
[6] S. Korada and R. Urbanke, "Polar codes are optimal for lossy source coding," in IEEE Information Theory Workshop (ITW), 2009, pp. 149-153.
[7] M. Karzand and E. Telatar, "Polar codes for q-ary source coding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT), 2010, pp. 909-912.
[8] A. Sahebi and S. Pradhan, "Polar codes for sources with finite reconstruction alphabets," in 50th Annual Allerton Conference on Communication, Control, and Computing, 2012, pp. 580-586.
[9] J. Honda and H. Yamamoto, "Polar coding without alphabet extension for asymmetric models," IEEE Transactions on Information Theory, vol. 59, no. 12, pp. 7829-7838, Dec. 2013.
[10] E. Arikan, "Source polarization," in Proceedings of the IEEE International Symposium on Information Theory (ISIT), 2010, pp. 899-903.
[11] E. Arikan, "Polar coding for the Slepian-Wolf problem based on monotone chain rules," in Proceedings of the IEEE International Symposium on Information Theory (ISIT), 2012, pp. 566-570.
[12] E. Abbe, "Randomness and dependencies extraction via polarization," in Information Theory and Applications Workshop (ITA), Feb. 2011, pp. 1-7.
[13] E. Sasoglu, E. Telatar, and E. Yeh, "Polar codes for the two-user binary-input multiple-access channel," in IEEE Information Theory Workshop (ITW), 2010, pp. 1-5.
[14] E. Abbe and E. Telatar, "Polar codes for the m-user MAC," 2010, online: http://arxiv.org/abs/1002.0777.
[15] N. Goela, E. Abbe, and M. Gastpar, "Polar codes for broadcast channels," 2013, online: http://arxiv.org/abs/1301.6150.
[16] A. Sahebi and S. Pradhan, "Nested polar codes achieve the Shannon rate-distortion function and the Shannon capacity," 2014, online: http://arxiv.org/abs/1401.6482.