Multi-Kernel Polar Codes: Proof of Polarization and Error Exponents
Multi-Kernel Polar Codes: Proof of Polarization and Error Exponents

Meryem Benammar, Valerio Bioglio, Frédéric Gabry, Ingmar Land
Mathematical and Algorithmic Sciences Lab, Paris Research Center, Huawei Technologies France SASU

arXiv:1709.10371v1 [cs.IT] 29 Sep 2017

Abstract. In this paper, we investigate a novel family of polar codes based on multi-kernel constructions, proving that this construction actually polarizes. To this end, we derive a new and more general proof of polarization, which gives sufficient conditions for kernels to polarize. Finally, we derive the convergence rate of the multi-kernel construction and relate it to the convergence rate of each of the constituent kernels.

I. INTRODUCTION

Channel polarization is a novel technique to create capacity-achieving codes over various channels [1]. In its original construction, a polar code is generated by a sub-matrix of the transformation matrix $T_2^{\otimes n}$, built from the binary kernel

$T_2 = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}.$

While the polarization of the Kronecker powers of the $T_2$ kernel is proved in [1], the proof has been generalized in [2] to the Kronecker powers of larger binary kernels. Since then, various kernels have been proposed, along with their convergence rates [3]. Recently, a novel family of polar codes, in which binary kernels of different sizes are mixed, has been proposed in [4]. These codes, coined multi-kernel polar codes, make it possible to construct polar codes of any block length, not only lengths that are powers of an integer as when a single kernel is used. In [4], the authors conjecture that the proposed mixed construction polarizes; the conjecture is supported by the density evolution algorithm used to calculate the bit reliabilities [5], but a rigorous convergence proof is missing. In this paper, we fill the gap left in [4] by presenting a proof of the polarization of multi-kernel polar codes.
To this end, we describe the polarization of any kernel through an inequality on the reliability transitions at every step of the polarization. This inequality differs from the ones used in other proofs [1], [2]; however, we will show that it holds for Arıkan's original kernel, along with other selected kernels. In addition, we derive the convergence rate of the obtained multi-kernel polar code from the convergence rates of the constituent kernels. This paper is organized as follows. Section II introduces the information-theoretic model and reviews the code construction of multi-kernel polar codes. Section III presents the main result of this paper, the proof of polarization, while Section IV derives the convergence rates, or error exponents, of the resulting multi-kernel polar code. (These results were derived when F. Gabry was still with the Mathematical and Algorithmic Sciences Lab of Huawei.)

II. SYSTEM MODEL AND CODE DEFINITION

In this section, we introduce the fundamental definitions related to polar coding and the underlying information-theoretic model. Moreover, we briefly present the multi-kernel construction of polar codes.

A. Channel model

Let $W : \mathcal{X} \to \mathcal{Y}$ be a discrete input channel defined by its associated probability mass function (pmf) $W(y|x)$, and let $W^N : \mathcal{X}^N \to \mathcal{Y}^N$ be its $N$-th memoryless extension with associated pmf

$W^N(y^N | x^N) = \prod_{i=1}^{N} W(y_i | x_i).$   (1)

Let $U^N = (U_1, \dots, U_N)$ be $N$ auxiliary random variables satisfying the Markov chain $(U_1, \dots, U_N) - X_i - Y_i$ for all $i \in [1:N]$, where $[1:N]$ denotes the set of integers from $1$ to $N$. In the following, the input alphabet $\mathcal{X}$ and the auxiliary alphabets $\mathcal{U}_i$ are all binary, i.e., for all $i \in [1:N]$, $\mathcal{X} = \mathcal{U}_i = \{0, 1\}$. The auxiliary variables, or bit components, $U_i$ are pairwise independent Bern(1/2) variables.

B.
Channel polarization

A polar code of length $N$ is a linear block code which maps the bits $u^N = (u_1, \dots, u_N)$ into the channel input array $(x_1, \dots, x_N)$ through a linear invertible mapping, i.e.,

$x^N = u^N G_N,$   (2)

where $G_N$ is referred to as the transformation matrix of the polar code. Since the bits $U_1, \dots, U_N$ are independent and Bern(1/2) distributed and $G_N$ is invertible, the channel inputs $X_1, \dots, X_N$ are also independent and Bern(1/2) distributed. This yields the information conservation principle

$I(U^N; Y^N) = I(X^N; Y^N) = N \, I(X;Y),$   (3)

which stems from the fact that $W^N$ is memoryless. In the following, since the input distribution $P_X$ is fixed, we write $I(W) = I(X;Y)$ to stress that $I(X;Y)$ depends only on the channel $W$.
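As a quick sanity check of (2)-(3), the following minimal Python sketch (ours, not from the paper) verifies that the $T_2$ map is a bijection of $\{0,1\}^2$, so uniform independent bits $U_1, U_2$ produce uniform channel inputs $X_1, X_2$:

```python
import itertools

# Arikan's T_2 kernel, an invertible map over GF(2)^2.
T2 = [[1, 0],
      [1, 1]]

def encode(u, G):
    """x^N = u^N G_N over GF(2), as in (2)."""
    n = len(G)
    return tuple(sum(u[i] * G[i][k] for i in range(n)) % 2 for k in range(n))

# Enumerate all inputs: a bijection maps the uniform distribution on
# (U_1, U_2) to the uniform distribution on (X_1, X_2).
images = [encode(u, T2) for u in itertools.product((0, 1), repeat=2)]
is_bijection = len(set(images)) == 4
```

Invertibility of $G_N$ is what makes the information conservation principle (3) hold with equality.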
To introduce the polarization principle, let us first use the independence of the bits $U_i$ to write

$I(U^N; Y^N) = \sum_{i=1}^{N} I(U_i; Y^N | U^{i-1})$   (4)
$= \sum_{i=1}^{N} I(U_i; Y^N, U^{i-1}).$   (5)

Let us then define the channel $W_N^{(i)}$ with output $Z_i = (Y^N, U^{i-1})$ and associated pmf

$W_N^{(i)}(z_i | u_i) = W_N^{(i)}(y^N, u^{i-1} | u_i).$   (6)

To polarize the channel $W^N$ means to create $N$ virtual channels $W_N^{(i)}$, each being either a degraded or an enhanced version of $W$. By a proper choice of the transformation matrix $G_N$, such as the one suggested by Arıkan [1], it can be shown that, as the code length $N$ goes to infinity, the resulting virtual channels $W_N^{(i)}$ become either perfectly reliable or totally noisy, i.e.,

$\forall i \in [1:N], \quad I(U_i; Y^N, U^{i-1}) \to 1 \text{ or } 0.$   (7)

Arıkan showed in [1] that the fraction of perfectly reliable channels, i.e., the fraction of bits that can be transmitted reliably, is given by the mutual information $I(X;Y)$.

C. Multi-kernel code construction

The transformation matrix $G_N$ of a polar code of length $N$ is defined by the $n$-fold Kronecker product of the binary kernel $T_2$, namely $G_N = T_2^{\otimes n}$, where $N = 2^n$. Note that the structure of the transformation matrix renders it invertible, and that the admissible block lengths $N$ are all powers of 2, which may be an impediment in practical applications requiring arbitrary block lengths. However, in [2] the authors prove that the kernel $T_2$ can be replaced by any polarizing kernel $T_l$ of dimension $l \times l$, leading to codes of block lengths of the form $N = l^n$. An example of such a kernel, which we will use throughout the document, is the $T_3$ kernel given by

$T_3 = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}.$   (8)

The multi-kernel polar code construction is introduced in [4], in which multiple binary kernels are used in the construction of the code. Consider to this end a collection of kernels $T_{l_1}, \dots, T_{l_m}$ where, for $j \in [1:m]$, $T_{l_j}$ is a binary matrix of dimension $l_j \times l_j$. A multi-kernel transformation matrix is constructed as the Kronecker product of these kernels,

$G_N = T_{l_1} \otimes T_{l_2} \otimes \dots \otimes T_{l_m}.$   (9)
Note that the size of the resulting code is $N = \prod_{j=1}^{m} l_j$. An $(N, K)$ multi-kernel polar code is defined by the transformation matrix $G_N$ in (9) and the information set $\mathcal{I}$ of size $|\mathcal{I}| = K$, or conversely by the frozen set $\mathcal{F} = [1:N] \setminus \mathcal{I}$. For the encoding, each frozen bit is set to zero, i.e., $u_i = 0$ for $i \in \mathcal{F}$, while the information is stored in the remaining bits, whose indices constitute the information set $\mathcal{I}$. Then, the channel input $x^N$ is obtained as $x^N = u^N G_N$.

Fig. 1. Polarization tree of a kernel $T_2 \otimes T_3 \otimes \dots \otimes T_{l_m}$.

In [4], the authors conjecture that the Kronecker product of polarizing kernels results in a polarizing transformation matrix, and they calculate the reliabilities of the bits through density evolution [5]. The information set $\mathcal{I}$ is then formed by the $K$ bit positions with the highest reliability. In this paper, we present a proof of this conjecture, confirming the soundness of the multi-kernel construction in [4]. Decoding of multi-kernel polar codes is performed similarly to that of Arıkan's polar codes, using successive cancellation decoding. At each step, a bit $u_i$ is decoded from the channel outputs $y^N$ using the previously made hard decisions on the bits $u^{i-1}$. In practice, the decoding is performed on the Tanner graph of the code, where each block of the decoder consists of the basic decoding operations of the kernels $T_{l_j}$, as explained in [4].

III. POLARIZATION OF MULTI-KERNEL POLAR CODES

In this section, we prove the main result of the paper: the polarization of multi-kernel polar codes. The approach we adopt is similar to the one proposed by Arıkan in [1], but differs in the last steps, where we show that polarization is highly kernel dependent.

A. Definitions

In the following, the sub-channels $W_N^{(i)}$, with $i \in [1:N]$, are denoted by $W_N^{(i)} = W^{b_1, \dots, b_m}$, where the index $b_j$, for $j \in [1:m]$, takes values in $[1:l_j]$, and $(b_1, \dots, b_m)$ is the mixed-radix decomposition of $i$ in the base $(l_1, \dots, l_m)$.
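The construction (9) and the mixed-radix indexing can be sketched in a few lines of Python; the $T_3$ entries are taken from (8), and the digit ordering of the decomposition (most significant digit first, 0-indexed) is our assumption:

```python
import numpy as np

T2 = np.array([[1, 0],
               [1, 1]], dtype=int)
T3 = np.array([[1, 1, 1],
               [1, 0, 1],
               [0, 1, 1]], dtype=int)

def multi_kernel_matrix(kernels):
    """G_N = T_{l1} x T_{l2} x ... x T_{lm} (Kronecker product mod 2), as in (9)."""
    G = np.array([[1]], dtype=int)
    for T in kernels:
        G = np.kron(G, T) % 2
    return G

def mixed_radix(i, bases):
    """Mixed-radix digits (b_1, ..., b_m) of i in base (l_1, ..., l_m),
    most significant digit first (digits 0-indexed)."""
    digits = []
    for l in reversed(bases):
        i, b = divmod(i, l)
        digits.append(b)
    return tuple(reversed(digits))

G6 = multi_kernel_matrix([T2, T3])   # N = 2 * 3 = 6
u = np.array([0, 1, 1, 0, 0, 1])     # frozen bits already set to 0
x = u.dot(G6) % 2                    # encoding x^N = u^N G_N
```

Each index $i \in [1:6]$ then corresponds to a distinct digit vector $(b_1, b_2)$ with $b_1 \in [1:2]$, $b_2 \in [1:3]$, identifying a leaf of the polarization tree of Figure 1.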
Due to the Kronecker structure of the transformation matrix $G_N$ in (9), channel polarization occurs iteratively: at step $m$, each of the previous channels $W^{b_1, \dots, b_{m-1}}$ is polarized into $l_m$ new channels $W^{b_1, \dots, b_{m-1}, 1}, \dots, W^{b_1, \dots, b_{m-1}, b_m}, \dots, W^{b_1, \dots, b_{m-1}, l_m}$, as shown in Figure 1. Let us assume that the transitions on the polarization tree occur uniformly at random, i.e., the indices $B_1, \dots, B_m$ are uniformly distributed:

$\forall b \in [1:l_j], \quad P(B_j = b) = \frac{1}{l_j}.$   (10)

The mutual information $I(W^{B_1, \dots, B_m})$ is then a random variable
depending on the random variables $B_1, \dots, B_m$, which we denote by $I_m = I(W^{B_1, \dots, B_m})$, where $I_0 = I(W)$ is a deterministic variable equal to the channel capacity. Finally, we give a formal definition of channel polarization for multi-kernel polar codes.

Definition 1. A multi-kernel polar code is polarizing if, for all $\epsilon > 0$,

$\lim_{m \to \infty} P(I_m \ge 1 - \epsilon) = 1 - \lim_{m \to \infty} P(I_m \le \epsilon) = I(W).$   (11)

B. Proof of polarization

Similarly to the case of $T_2$ kernels investigated by Arıkan, the proof of polarization of multi-kernel polar codes is twofold. First, we prove that the random process $\{I_m\}_{m \ge 0}$ converges in probability to a random variable $I_\infty$. Then, we show that, if the kernels are selected properly, the distribution of $I_\infty$ is a Bernoulli distribution with probability $I(W)$, so that $I_m$ satisfies Definition 1.

Lemma 1. The sequence $\{I_m\}_{m \ge 0}$ is a bounded martingale with respect to $B_1, \dots, B_m$, and thus converges in probability to a variable $I_\infty$ whose expected value satisfies $E(I_\infty) = I_0$.

Proof. To begin with, $0 \le I_m \le 1$ for all $m \in \mathbb{N}$, since the channel $W^{B_1, \dots, B_m}$ has binary inputs. For $m \in \mathbb{N}$, we have

$E_{B_{m+1}}[ I_{m+1} \mid B_1, \dots, B_m ] = E_{B_{m+1}}[ I(W^{B_1, \dots, B_m, B_{m+1}}) \mid B_1, \dots, B_m ]$   (12)
$= \sum_{b=1}^{l_{m+1}} P(B_{m+1} = b) \, I(W^{B_1, \dots, B_m, b})$   (13)
$= \frac{1}{l_{m+1}} \sum_{b=1}^{l_{m+1}} I(W^{B_1, \dots, B_m, b})$   (14)
$\stackrel{(a)}{=} I(W^{B_1, \dots, B_m}),$   (15)

where (a) is a consequence of the information conservation principle (3) and of (5). Hence

$E_{B_1, \dots, B_{m+1}}[ I_{m+1} ] = E_{B_1, \dots, B_m}\left[ E_{B_{m+1}}[ I_{m+1} \mid B_1, \dots, B_m ] \right] \stackrel{(a)}{=} E_{B_1, \dots, B_m}[ I_m ],$

where (a) is a consequence of (15). Thus $\{E(I_m)\}_{m \ge 0}$ is constant, and

$E(I_m) = E(I_{m-1}) = \dots = E(I_0) = I_0.$   (16)

Therefore $\{I_m\}_{m \ge 0}$ is a bounded martingale, hence uniformly integrable and convergent in probability to a random variable $I_\infty$ such that $E(I_\infty) = I_0$, by (16).

The convergence of the martingale $\{I_m\}_{m \ge 0}$ follows mainly from the information conservation property (3) and is thus universal. Polarization itself, on the other hand, depends on the kernel properties, as shown in the following theorem.

Theorem 1.
If, for every kernel $T_{l_m}$ forming the transformation matrix of the code and every past sequence $(b_1, \dots, b_{m-1})$, there exist $\alpha, \beta > 0$ such that the following inequality holds:

$\forall b_m \in [1:l_m], \quad \left| I(W^{b_1, \dots, b_{m-1}, b_m}) - I(W^{b_1, \dots, b_{m-1}}) \right| \ge I(W^{b_1, \dots, b_{m-1}})^{\alpha} \left( 1 - I(W^{b_1, \dots, b_{m-1}}) \right)^{\beta},$   (17)

then $I_\infty$ is a Bernoulli distributed variable with

$P(I_\infty = 1) = 1 - P(I_\infty = 0) = I_0.$   (18)

Proof. We present a novel proof showing clearly the dependency of the polarization process on the choice of the kernel. Let $m \in \mathbb{N}$ and let $(b_1, \dots, b_{m-1})$ be the vector of past indices in the polarization tree. Under the assumptions of Theorem 1, for all $b_m \in [1:l_m]$,

$\left| I(W^{b_1, \dots, b_{m-1}, b_m}) - I(W^{b_1, \dots, b_{m-1}}) \right| \ge I(W^{b_1, \dots, b_{m-1}})^{\alpha} \left( 1 - I(W^{b_1, \dots, b_{m-1}}) \right)^{\beta}.$   (19)

This implies that, on the cylinder set $(B_1, \dots, B_{m-1})$, the following inequality holds with probability 1:

$|I_m - I_{m-1}| \ge I_{m-1}^{\alpha} (1 - I_{m-1})^{\beta}.$   (20)

Hence, since $\{I_m\}_{m \ge 0}$ is a convergent sequence with limit $I_\infty$, and since the function $f : x \mapsto x^{\alpha}(1-x)^{\beta}$ is continuous, we obtain

$I_\infty^{\alpha} (1 - I_\infty)^{\beta} = 0,$   (21)

which in turn implies that $I_\infty = 0$ or $1$.

This can be seen as an alternative to the original proof of polarization by Arıkan, later extended in [2] to arbitrary kernels. Even if inequality (17) may seem somewhat restrictive, it is verified by a large family of kernels; in the following, we prove it for Arıkan's $T_2$ kernel and for the $T_3$ kernel given in (8).

C. Examples of kernel polarization

In the following, we prove that the kernels $T_2$ and $T_3$ are polarizing according to the multi-kernel definition by showing that the constraints in (17) are met for these two kernels. For the kernel $T_2$, consider a channel $W$ with binary input and finite output alphabet. Let $U_1, U_2$ be the Bern(1/2) auxiliary inputs and $X_1, X_2$ the binary channel inputs, where $(X_1, X_2) = (U_1, U_2) T_2$. The independence of $X_1, X_2$ implies that $(X_1, Y_1)$ and $(X_2, Y_2)$ are pairwise independent and

$H(X_1|Y_1) = H(X_2|Y_2) = H(X|Y) \triangleq H.$
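Before verifying (17) for specific kernels, the martingale of Lemma 1 and the polarization of Definition 1 can be illustrated numerically. The sketch below uses the binary erasure channel, for which the $T_2$ transform acts on capacities in closed form ($I \mapsto I^2$ and $I \mapsto 2I - I^2$); this is a standard BEC fact used here only for illustration, not derived in this paper:

```python
import random

def bec_t2_children(I):
    """Capacities of the two child channels of a BEC with capacity I under T_2."""
    return (I * I, 2 * I - I * I)

def sample_capacity_path(I0, steps, rng):
    """One root-to-leaf path in the polarization tree, branch chosen uniformly,
    i.e., one realization of I_m for the random indices B_1, ..., B_m."""
    I = I0
    for _ in range(steps):
        I = rng.choice(bec_t2_children(I))
    return I

rng = random.Random(0)
I0 = 0.5                                   # BEC(0.5): I(W) = 0.5
paths = [sample_capacity_path(I0, 30, rng) for _ in range(4000)]
mean_capacity = sum(paths) / len(paths)    # martingale: E[I_m] stays at I_0
frac_polarized = sum(1 for I in paths if I < 0.01 or I > 0.99) / len(paths)
```

Empirically the sample mean stays at $I_0$ while almost all paths end near 0 or 1, matching Lemma 1 and Theorem 1.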
The inequality in (17) yields the pair of constraints

$\left| I(U_1; Y^2) - I(X;Y) \right| \ge I(X;Y)^{\alpha} (1 - I(X;Y))^{\beta},$
$\left| I(U_2; Y^2, U_1) - I(X;Y) \right| \ge I(X;Y)^{\alpha} (1 - I(X;Y))^{\beta},$

but since $I(U_2; Y^2, U_1) = 2 I(X;Y) - I(U_1; Y^2)$, it suffices to prove the first inequality, which amounts to

$H(X_1 \oplus X_2 | Y^2) - H \ge (1-H)^{\alpha} H^{\beta}.$   (22)

We prove in Appendix A-A that this holds with the choice $\alpha = 1$ and $\beta = 2$.
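Inequality (22) with $\alpha = 1$, $\beta = 2$ can be checked numerically for the BSC family, where $H = h_2(p)$ and $H(X_1 \oplus X_2 | Y^2) = h_2(2p(1-p))$, since $X_1 \oplus X_2$ is observed through an effective BSC with crossover $p \star p = 2p(1-p)$. This is an illustrative check (our sketch), not a substitute for the proof in Appendix A-A:

```python
import math

def h2(p):
    """Binary entropy function."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def slack(p):
    """LHS minus RHS of (22) for a BSC(p) with alpha = 1, beta = 2."""
    H = h2(p)
    lhs = h2(2.0 * p * (1.0 - p)) - H      # H(X1 + X2 | Y^2) - H
    rhs = (1.0 - H) * H ** 2               # (1 - H)^1 * H^2
    return lhs - rhs

# Scan p over (0, 0.5); the inequality becomes tight as p -> 1/2.
ok = all(slack(p / 1000.0) >= -1e-9 for p in range(1, 500))
```

The margin is strictly positive inside the interval and vanishes only at the extremes $H = 0$ and $H = 1$, consistent with (21).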
As for $T_3$, under similar assumptions on $U_1, U_2, U_3$ and the independence of $(X_1, Y_1)$, $(X_2, Y_2)$ and $(X_3, Y_3)$, and defining $H(X_1|Y_1) = H(X_2|Y_2) = H(X_3|Y_3) \triangleq H$, the condition in (17) amounts to proving that

$\left| I(U_1; Y^3) - I(X;Y) \right| \ge I(X;Y)^{\alpha} (1 - I(X;Y))^{\beta},$
$\left| I(U_2; Y^3, U_1) - I(X;Y) \right| \ge I(X;Y)^{\alpha} (1 - I(X;Y))^{\beta},$
$\left| I(U_3; Y^3, U^2) - I(X;Y) \right| \ge I(X;Y)^{\alpha} (1 - I(X;Y))^{\beta}.$

Again, it suffices to prove the first two inequalities, which amounts to proving that

$H(X_1 \oplus X_2 \oplus X_3 | Y^3) - H \ge (1-H)^{\alpha} H^{\beta}$   (23)
$H - H(X_2 \oplus X_3 | Y^3, X_1 \oplus X_2 \oplus X_3) \ge (1-H)^{\alpha} H^{\beta}.$   (24)

In Appendix A-B, we show that this holds with $\alpha = \beta = 2$.

IV. RATE OF CONVERGENCE OF MULTI-KERNEL POLAR CODES

The rate of convergence of the sequence $\{I_m\}_{m \ge 0}$, which is related to the error exponent of the generator matrix $G_N = T_{l_1} \otimes \dots \otimes T_{l_m}$, is the asymptotic convergence rate of the probability of error. In this section, we show how to derive the convergence rate of a multi-kernel polar code from the rates of convergence of the constituent kernels $T_{l_1}, \dots, T_{l_m}$.

A. Definitions

Let us first extend the definition of the Bhattacharyya parameter, a key measure in the rate of convergence of polar codes, to the case of multi-kernel polar codes. We recall that the Bhattacharyya parameter associated with a binary input channel $W$ is

$Z(W) = \sum_{y \in \mathcal{Y}} \sqrt{W(y|0) \, W(y|1)}.$   (25)

According to the notation $W_N^{(i)} = W^{b_1, \dots, b_m}$, where $(b_1, \dots, b_m)$ is the mixed-radix decomposition of $i$ in the base $(l_1, \dots, l_m)$, we define a random Bhattacharyya parameter associated with the random realization of $B_1, \dots, B_m$ as follows.

Definition 2 (Bhattacharyya parameter). The random Bhattacharyya parameter associated with the random realization of $B_1, \dots, B_m$ is given by

$Z_m = Z(W^{B_1, \dots, B_m}) = \sum_{z} \sqrt{W^{B_1, \dots, B_m}(z|0) \, W^{B_1, \dots, B_m}(z|1)},$

where $z$ is defined in (6). The Bhattacharyya parameter is particularly useful since it yields bounds on the block error probability $P_e(N)$ of a length-$N$ code, see [2]. Next, we define the rate of convergence of a multi-kernel polar code based on the convergence of the sequence $\{Z_m\}_{m \ge 0}$.

Definition 3.
A multi-kernel polar code has rate of convergence $E$ if and only if the following properties hold:

1) for all $\gamma > E$,
$\lim_{m \to \infty} P\left( Z_m \le 2^{-N^{\gamma}} \right) = 0;$   (26)

2) for all $0 < \gamma < E$,
$\lim_{m \to \infty} P\left( Z_m \le 2^{-N^{\gamma}} \right) = I(W).$   (27)

The convergence rate relates directly to the error exponent, i.e., the rate of convergence of the block error probability to 0. As such, if a polar code has a convergence rate of $E$, then the block error probability satisfies

$P_e(N) \approx 2^{-N^{E}}.$   (28)

For polar codes based on Arıkan's kernel, i.e., $G_N = T_2^{\otimes n}$, the convergence rate was shown in [6] to be equal to $E = 0.5$. For polar codes formed by larger kernels, the authors in [2] derive the rate of convergence through the partial distances of the given kernel $T_l$ of size $l$, yielding, for instance, for the kernel $T_3$ proposed in (8) a rate of convergence of $E \approx 0.42$.

B. Calculation of the rate of convergence

The rate of convergence of a polar code can be derived for Arıkan's $T_2$ kernel, and more generally for arbitrary kernels $T_l$, through the following result.

Lemma 2 ([2, Theorem 4]). Let $T_l$ be an arbitrary kernel of size $l \times l$; then the rate of convergence of the associated polar code is given by

$E_l = \frac{1}{l} \sum_{i=1}^{l} \log_l D_i,$   (29)

where $D_i$ is the partial distance of the $i$-th row of the kernel matrix $T_l = (t_1, \dots, t_i, \dots, t_l)^T$, defined as

$D_i = d\left( t_i, \langle t_{i+1}, \dots, t_l \rangle \right),$   (30)

where $\langle t_{i+1}, \dots, t_l \rangle$ is the linear code spanned by the remaining rows of $T_l$ and $A^T$ denotes the transpose of the matrix $A$.

In the following, we show that the rate of convergence of a multi-kernel polar code can be derived from the rates of convergence of the constituent kernels.

Theorem 2. Consider a multi-kernel polar code in which each of the $s$ distinct constituent kernels $T_{l_j}$ has error exponent $E_{l_j}$, for $j \in [1:s]$, and is used with frequency $p_j$ in the Kronecker composition $G_N$ as $N \to \infty$. Then, the rate of convergence of the resulting multi-kernel polar code is given by

$E = \frac{\sum_{j=1}^{s} p_j \log_2(l_j) \, E_{l_j}}{\sum_{j=1}^{s} p_j \log_2(l_j)}.$   (31)

Proof.
Let $m$ be the index of the current iteration in the Kronecker product, let $i \in [1:N]$ and let $(b_1, \dots, b_m)$ be its mixed-radix decomposition. The proof of this theorem follows from the following inequality, which is a result of [2, Lemma 3]:

$Z(W^{b_1, \dots, b_{m-1}})^{D_{b_m}} \le Z(W^{b_1, \dots, b_m}) \le 2^{l_m - b_m} Z(W^{b_1, \dots, b_{m-1}})^{D_{b_m}},$   (32)
where $D_{b_m}$, with $b_m \in [1:l_m]$, is the partial distance of the row $t_{b_m}$ of the kernel $T_{l_m}$. The proof that items 1) and 2) of Definition 3 hold follows by analytic calculus and is presented in Appendix B. As a result, the rate of convergence of a multi-kernel polar code is a weighted average of the error exponents of the constituent kernels, where the weights are related to the frequency of occurrence of each kernel in the construction.

APPENDIX A
PROOF OF POLARIZATION OF T_2 AND T_3

A. Proof of polarization of T_2

To prove (22), note that

$H(X_1 \oplus X_2 | Y^2) = \sum_{y_1, y_2} P_{Y_1 Y_2}(y_1, y_2) H(X_1 \oplus X_2 | y_1, y_2)$   (33)
$\stackrel{(a)}{=} \sum_{y_1, y_2} P_{Y_1}(y_1) P_{Y_2}(y_2) H(X_1 \oplus X_2 | y_1, y_2)$   (34)
$\stackrel{(b)}{=} \sum_{y_1, y_2} P_{Y_1}(y_1) P_{Y_2}(y_2) \, h_2(q_{y_1} \star q_{y_2}),$   (35)

where (a) is a consequence of the independence of $Y_1$ and $Y_2$, and (b) follows since, conditioned on $(y_1, y_2)$, $X_1 \oplus X_2$ is a binary variable with $P(X_1 \oplus X_2 = 1 | y_1, y_2) = q_{y_1} \star q_{y_2}$, where $q_y = P(X = 1 | y)$, $\star$ denotes the binary convolution $a \star b = a(1-b) + b(1-a)$, and the last step is a result of the memorylessness of the channel. As defined, $q_{y_1}$ and $q_{y_2}$ satisfy

$\sum_{y_1} P_{Y_1}(y_1) h_2(q_{y_1}) = \sum_{y_2} P_{Y_2}(y_2) h_2(q_{y_2}) = H(X|Y) = H.$

Then, resorting to Mrs. Gerber's lemma [7], i.e., the convexity of $h_2(p \star h_2^{-1}(x))$ in $x$, we can write

$H(X_1 \oplus X_2 | Y^2) \ge \sum_{y_1} P_{Y_1}(y_1) \, h_2\left( q_{y_1} \star h_2^{-1}(H) \right)$   (36)
$\ge h_2\left( h_2^{-1}(H) \star h_2^{-1}(H) \right).$   (37)

Finally, it can be proved that, for all $a \in [0 : 1/2]$,

$h_2(a \star a) - h_2(a) \ge \left( 1 - h_2(a) \right) h_2(a)^2,$   (38)

which, with $a = h_2^{-1}(H)$, yields the desired result.

B. Proof of polarization of T_3

To prove (23), we first note that, similarly to (37),

$H(X_1 \oplus X_2 \oplus X_3 | Y^3) \ge h_2\left( h_2^{-1}(H) \star h_2^{-1}(H) \star h_2^{-1}(H) \right)$   (39)
$\stackrel{(a)}{\ge} h_2\left( h_2^{-1}(H) \star h_2^{-1}(H) \right) + \left( 1 - h_2\left( h_2^{-1}(H) \star h_2^{-1}(H) \right) \right) H^2$   (40)
$\stackrel{(b)}{\ge} H + (1-H)^2 H^2,$   (41)

where (a) follows similarly from (37) and (38) by keeping $X_2 \oplus X_3$ grouped, and (b) follows since $h_2(h_2^{-1}(H) \star h_2^{-1}(H)) \ge H$ and, by (45) below, $h_2(h_2^{-1}(H) \star h_2^{-1}(H)) \le 2H - H^2$. Next, to prove (24), we write

$H(X_2 \oplus X_3 | Y^3, X_1 \oplus X_2 \oplus X_3) = H(X_2 \oplus X_3, X_1 \oplus X_2 \oplus X_3 | Y^3) - H(X_1 \oplus X_2 \oplus X_3 | Y^3)$
$= H + H(X_2 \oplus X_3 | Y_2^3) - H(X_1 \oplus X_2 \oplus X_3 | Y^3),$

since, given $X_2 \oplus X_3$ and $Y^3$, the sum $X_1 \oplus X_2 \oplus X_3$ has conditional entropy $H(X_1|Y_1) = H$. Next, we note the following upper bound on $H(X_2 \oplus X_3 | Y_2^3)$:

$H(X_2 \oplus X_3 | Y_2^3) = \sum_{y_2, y_3} P_{Y_2}(y_2) P_{Y_3}(y_3) \, h_2(q_{y_2} \star q_{y_3})$   (42)
$\stackrel{(a)}{\le} H(X_2|Y_2) + H(X_3|Y_3) - H(X_2|Y_2) H(X_3|Y_3)$   (43)
$= 2H - H^2,$   (44)

where (a) is a consequence of Mrs. Gerber's lemma [7], in the form

$h_2(a \star b) \le h_2(a) + h_2(b) - h_2(a) h_2(b).$   (45)

Thus, we can finally write

$H(X_2 \oplus X_3 | Y^3, X_1 \oplus X_2 \oplus X_3) - H = H(X_2 \oplus X_3 | Y_2^3) - H(X_1 \oplus X_2 \oplus X_3 | Y^3) \stackrel{(a)}{\le} -(1-H)^2 H^2,$

where (a) follows similarly from (37) and (38) by keeping $X_2 \oplus X_3$ grouped, which yields (24).

APPENDIX B
ERROR EXPONENTS: PROOF OF THEOREM 2

In order to prove that the error exponent of a multi-kernel polar code is given by

$E = \frac{\sum_{j=1}^{s} p_j \log_2(l_j) \, E_{l_j}}{\sum_{j=1}^{s} p_j \log_2(l_j)},$   (46)

we need to prove that $E$ satisfies both conditions of Definition 3, namely:

Condition 1: for all $\gamma > E$,
$\lim_{m \to \infty} P\left( Z_m \le 2^{-N^{\gamma}} \right) = 0;$   (47)

Condition 2: for all $0 < \gamma < E$,
$\lim_{m \to \infty} P\left( Z_m \le 2^{-N^{\gamma}} \right) = I(W).$   (48)

Before proving that both these conditions hold, we list some preliminaries which are essential to the proof.
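The quantities used in the proof, namely the Bhattacharyya parameter (25), the partial distances (30), and the exponents (29) and (46), can be computed explicitly for small kernels. A sketch, in which the $T_3$ entries and our reading of formula (46) are assumptions:

```python
import itertools
import math
import numpy as np

def bhattacharyya(channel):
    """Z(W) = sum_y sqrt(W(y|0) W(y|1)), as in (25); `channel` maps each
    output y to the pair (W(y|0), W(y|1))."""
    return sum(math.sqrt(p0 * p1) for p0, p1 in channel.values())

def partial_distances(T):
    """D_i = d(t_i, <t_{i+1}, ..., t_l>) as in (30), by brute force over the
    code spanned by the rows below row i (feasible for small kernels)."""
    l = T.shape[0]
    dists = []
    for i in range(l):
        below = T[i + 1:]
        dists.append(min(
            int(np.sum((T[i] + np.asarray(c, dtype=int).dot(below)) % 2))
            for c in itertools.product([0, 1], repeat=below.shape[0])
        ))
    return dists

def kernel_exponent(T):
    """E_l = (1/l) * sum_i log_l(D_i), Lemma 2 / (29)."""
    l = T.shape[0]
    return sum(math.log(d, l) for d in partial_distances(T)) / l

def mixed_exponent(kernels, freqs):
    """Weighted average of Theorem 2 / (46) (our reading of the formula)."""
    num = sum(p * math.log2(T.shape[0]) * kernel_exponent(T) for T, p in zip(kernels, freqs))
    den = sum(p * math.log2(T.shape[0]) for T, p in zip(kernels, freqs))
    return num / den

T2 = np.array([[1, 0], [1, 1]])
T3 = np.array([[1, 1, 1], [1, 0, 1], [0, 1, 1]])  # entries of (8) as reconstructed
p = 0.1
z_bec = bhattacharyya({0: (1 - p, 0.0), 1: (0.0, 1 - p), "?": (p, p)})  # BEC(p): Z = p
```

For $T_2$ this yields partial distances $(1, 2)$ and $E = 0.5$, and for the $T_3$ above partial distances $(1, 2, 2)$ and $E \approx 0.42$, matching the values quoted in Section IV.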
A. Preliminaries

Let $m$ be the current iteration in the multi-kernel construction given by $G_N = \bigotimes_{k=1}^{m} T_{l_k}$, where the kernels $T_{l_k}$ take values in a set of $s$ distinct kernels $\{T_1, \dots, T_s\}$, kernel $T_j$ appearing with probability $p_j$ for $j \in [1:s]$. Let $(b_1, \dots, b_m)$ be the realization of the random variables $B_1, \dots, B_m$ up to this iteration. We use the shorthand $Z_m$ for the realization of the random Bhattacharyya parameter $Z(W^{b_1, \dots, b_m})$ of Definition 2. Let $D_{b_1}, \dots, D_{b_m}$ denote the corresponding partial distances, and let $N = \prod_{k=1}^{m} l_k$ be the block length at iteration $m$.

Definition 4. We define the occurrence set of a kernel $T_j$ in $G_N$ as

$N_m^{(j)} = \{ k \in [1:m] : T_{l_k} = T_j \}$   (49)

and the occurrence set of an index $i \in [1:l_j]$ in the kernel $T_j$ as

$N_m^{(i,j)} = \{ k \in N_m^{(j)} : b_k = i \},$   (50)

with cardinalities

$n_m^{(j)} = |N_m^{(j)}|$   (51)
$n_m^{(i,j)} = |N_m^{(i,j)}|.$   (52)

By the law of large numbers, and since the variables $B_1, \dots, B_m$ are each uniformly distributed, these cardinalities satisfy

$\lim_{m \to \infty} \frac{n_m^{(j)}}{m} = p_j,$   (53)
$\lim_{m \to \infty} \frac{n_m^{(i,j)}}{n_m^{(j)}} = \frac{1}{l_j}.$   (54)

As such, for all $\epsilon > 0$ there exists $M_0$ such that

$\left| \frac{n_m^{(i,j)}}{m} - \frac{p_j}{l_j} \right| \le \epsilon \quad \text{for all } m \ge M_0.$   (55)

In the following, we list some key properties verified by the sequence of Bhattacharyya parameters $\{Z_m\}_{m \ge 0}$.

Lemma 3. There exists $K \ge 1$ such that, for all $m$,

$Z_{m-1}^{D_{b_m}} \le Z_m \le K Z_{m-1}^{D_{b_m}}.$   (56)

Proof. This inequality follows from simple analytic manipulations of the result of [2, Lemma 3]:

$Z_{m-1}^{D_{b_m}} \le Z_m \le 2^{l_m - b_m} Z_{m-1}^{D_{b_m}}$   (57)
$\le 2^{l_{\max}} Z_{m-1}^{D_{b_m}}, \qquad l_{\max} = \max_{j \in [1:s]} l_j.$   (58)-(59)

The result then follows by defining $K = 2^{l_{\max}}$.

Lemma 4. For any polarizing kernel $T_l$, the partial distances $D_i$, $i \in [1:l]$, satisfy the following:
1) all partial distances are greater than 0 and bounded, i.e., $1 \le D_i \le l$ for all $i \in [1:l]$;
2) there exists at least one row $i$ of the kernel matrix $T_l$ whose partial distance satisfies $D_i \ge 2$.

Proof. The first claim follows from the invertibility of a polarizing kernel matrix and the definition of partial distances. Concerning the second claim, if all partial distances $D_i$ were equal to 1 for a kernel $T_l$, then (56) would become $Z_m \ge Z_{m-1}$ for all $m$.
Thus $\{Z_m\}_{m \ge 0}$ would be a non-decreasing positive sequence, which can converge to 0 only if it is constant and equal to 0, contradicting the polarization property.   (60)

The following lemma will be instrumental in the second part of the proof.

Lemma 5. For all $m_1$ and $m_2$ such that $m_2 \ge m_1$, we have

$Z_{m_2} \le \left( K^{m_2 - m_1} Z_{m_1} \right)^{\prod_{k=m_1+1}^{m_2} D_{b_k}}.$   (61)

Proof. Let $m_2 \ge m_1$ be two integers. From the right-hand side of inequality (56),

$Z_{m_2} \le K Z_{m_2 - 1}^{D_{b_{m_2}}} \le K \left( K Z_{m_2 - 2}^{D_{b_{m_2 - 1}}} \right)^{D_{b_{m_2}}} \le \dots \le K^{1 + D_{b_{m_2}} + D_{b_{m_2}} D_{b_{m_2 - 1}} + \dots} \, Z_{m_1}^{\prod_{k=m_1+1}^{m_2} D_{b_k}}$   (62)-(65)
$\stackrel{(a)}{\le} K^{(m_2 - m_1) \prod_{k=m_1+1}^{m_2} D_{b_k}} \, Z_{m_1}^{\prod_{k=m_1+1}^{m_2} D_{b_k}}$   (66)
$\stackrel{(b)}{=} \left( K^{m_2 - m_1} Z_{m_1} \right)^{\prod_{k=m_1+1}^{m_2} D_{b_k}},$   (67)

where (a) and (b) result from $K \ge 1$ and from the claims of Lemma 4: since $D_i \ge 1$, each partial product in the exponent of $K$ is at most the full product.

B. Proof of condition 1

To show condition 1, we rely on the left-hand side of (56). Given $\epsilon > 0$ and $m \ge M_0$ satisfying (55), we have

$Z_m \ge Z_{m-1}^{D_{b_m}} \ge \dots \ge Z_0^{\prod_{k=1}^{m} D_{b_k}}.$   (68)

The exponent of $Z_0$ can be upper bounded as

$\prod_{k=1}^{m} D_{b_k} = \prod_{j=1}^{s} \prod_{i=1}^{l_j} D_i^{\, n_m^{(i,j)}}$   (69)-(70)
$\stackrel{(a)}{\le} \prod_{j=1}^{s} \prod_{i=1}^{l_j} D_i^{\, m (p_j / l_j + \epsilon)}$   (71)-(73)
$= \prod_{j=1}^{s} l_j^{\, m (p_j + l_j \epsilon) E_{l_j}}$   (74)
$\stackrel{(b)}{\le} N^{E + \epsilon_1},$   (75)

where (a) stems from (55) and $E_{l_j}$ is as defined in (29), while (b) stems from $N = \prod_{j=1}^{s} l_j^{\, n_m^{(j)}}$ with $n_m^{(j)} \ge m (p_j - \epsilon)$, together with

$\frac{\sum_{j=1}^{s} (p_j + l_j \epsilon) E_{l_j} \log_2 l_j}{\sum_{j=1}^{s} (p_j - \epsilon) \log_2 l_j} \le E + \epsilon_1,$   (76)-(80)

with $\epsilon_1$ proportional to $\epsilon$. Thus, for $m \ge M_0$,

$Z_m \ge Z_0^{\, N^{E + \epsilon_1}} = 2^{-N^{E + \epsilon_1} \log_2(1/Z_0)} = 2^{-N^{E + \epsilon_1 + \log_N \log_2(1/Z_0)}} \ge 2^{-N^{E + \epsilon_2}}$   (81)-(85)

for $m$ large enough, with $\epsilon_2$ vanishing with $\epsilon$. Finally, for all $\gamma > E$,

$\lim_{m \to \infty} P\left( Z_m \le 2^{-N^{\gamma}} \right) = 0.$   (86)

C. Proof of condition 2

To prove condition 2, we rely on the right-hand side of (56), by showing that the exponential decay of $Z_m$ annihilates the role of the multiplicative constant $K$.

Lemma 6. For every $\epsilon > 0$, there exists $M_0$ such that

$P\left( Z_m \le K^{-(l_{\max} + 1)}, \; \forall m \ge M_0 \right) > I(W) - \epsilon,$   (87)

where $l_{\max} = \max_{j \in [1:s]} l_j$.

Proof. The proof of this lemma is a generalization of the proof of [8, Lemma 5]. Define the event

$\Omega = \{ w : \lim_{m \to \infty} Z_m(w) = 0 \}$   (88)
$= \{ w : \forall k > 0, \; \exists m_0 \text{ s.t. } Z_m(w) \le k, \; \forall m \ge m_0 \}$   (89)
$= \bigcap_{k > 0} \bigcup_{m_0} A_{m_0, k},$   (90)

where

$A_{m_0, k} = \{ w : Z_m(w) \le k, \; \forall m \ge m_0 \}.$   (91)

Since the multi-kernel polar code defined by $G_N$ polarizes, i.e., the sequence $\{Z_m\}_{m \ge 0}$ converges to a Bern$(1 - I(W))$ random variable, we have

$P(\Omega) = I(W).$   (92)

In the following, the notation $Z_m(w)$ is shortened to $Z_m$, the dependence on $w$ being implicit. Let $k$ be fixed. The sequence of sets $A_{m_0, k}$ is increasing in $m_0$, thus we can write

$A_{m_0, k} = \bigcup_{n \le m_0} A_{n, k}.$   (93)

If we define $A_{\infty, k} = \bigcup_{m_0} A_{m_0, k}$, we can write, by continuity of the probability measure,

$\lim_{m_0 \to \infty} P(A_{m_0, k}) = \lim_{m_0 \to \infty} P\left( \bigcup_{n \le m_0} A_{n, k} \right) = P\left( \lim_{m_0 \to \infty} \bigcup_{n \le m_0} A_{n, k} \right) = P(A_{\infty, k}).$   (94)-(97)

Thus, for all $k$ and all $\epsilon > 0$ there exists $M$ such that, for all $m \ge M$,

$P(A_{m, k}) \ge P(A_{\infty, k}) - \epsilon.$   (98)

More specifically, for $k = K^{-(l_{\max} + 1)}$, for all $\epsilon > 0$ there exists $M_0$ such that

$P\left( A_{M_0, K^{-(l_{\max} + 1)}} \right) \ge P\left( A_{\infty, K^{-(l_{\max} + 1)}} \right) - \epsilon$   (99)
$\ge P(\Omega) - \epsilon = I(W) - \epsilon,$   (100)-(102)

since $\Omega = \bigcap_{k > 0} A_{\infty, k} \subseteq A_{\infty, k}$, which concludes the proof.

The following lemma shows a key inequality on the exponential decay of the sequence $\{Z_m\}_{m \ge 0}$.
Lemma 7. For every $\epsilon > 0$, there exists $M_1$ such that, for all $m \ge M_1$,

$P\left( Z_m \le K^{-m/(4 l_{\max})} \right) > I(W) - \epsilon.$   (103)

Proof. Fix $\epsilon > 0$. Following the result of the previous lemma, let $M_0 > 0$ be such that

$P\left( A_{M_0, K^{-(l_{\max} + 1)}} \right) \ge I(W) - \epsilon.$   (104)

Besides, from the right-hand side of inequality (56), we also have

$Z_{m+1} \le K Z_m^{D_{b_{m+1}}}$   (105)
$= K Z_m^{D_{b_{m+1}} - 1} Z_m$   (106)
$\stackrel{(a)}{\le} K \left( K^{-(l_{\max} + 1)} \right)^{D_{b_{m+1}} - 1} Z_m,$   (107)-(108)

where (a) follows from

$Z_m \le K^{-(l_{\max} + 1)}$   (109)

for every $w \in A_{M_0, K^{-(l_{\max} + 1)}}$ and all $m \ge M_0$. Now, given the result of Lemma 4, we can bound $Z_{m+1}$ as

$Z_{m+1} \le Z_m K^{-l_{\max}}$ if $b_{m+1} = i^*_{m+1}$, and $Z_{m+1} \le Z_m K$ if $b_{m+1} \ne i^*_{m+1}$,   (110)-(111)

where $i^*_{m+1}$ is an index whose partial distance satisfies $D_{i^*_{m+1}} \ge 2$. Given two integers $m, m'$ such that $m \ge m' \ge M_0$,

$Z_m \le Z_{m'} \, K^{(m - m') \left( 1 - \alpha_{m'}^{m} (1 + l_{\max}) \right)},$   (112)

where $\alpha_{m'}^{m}$ is the fraction of occurrences of the indices $i^*$ between iterations $m'$ and $m$, i.e.,

$\alpha_{m'}^{m} = \frac{\left| \{ k \in [m'+1 : m] : b_k = i^*_k \} \right|}{m - m'}.$   (113)

If we define the typical set $T_{\alpha, m'}^{m}$ as the set of events for which each index $i$ of each kernel $T_j$ occurs at least an $\alpha p_j / l_j$ fraction of the time between $m'$ and $m$, we have that

$\alpha_{m'}^{m} \ge \alpha \sum_{j=1}^{s} \frac{p_j}{l_j} \ge \frac{\alpha}{l_{\max}}$   (114)

inside $T_{\alpha, m'}^{m}$. Note that the typical sets $T_{\alpha, m'}^{m}$ are non-increasing in $\alpha$ for fixed $m'$ and $m$, and increasing in $m$ for fixed $\alpha$ and $m'$. For

$\alpha_0 = \frac{2 l_{\max} + 1}{2 (l_{\max} + 1)}$   (115)

it holds that $\alpha_0 < 1$, so that $\alpha_0 p_j / l_j < p_j / l_j$ for all $j \in [1:s]$.   (116)

We now define the typical set $\tilde{T}_{m'}^{m}$ as the set of events for which each index $i$ of each kernel $T_j$ appears with a frequency at least equal to its asymptotic value $p_j / l_j$, up to a vanishing margin. By the law of large numbers,

$\lim_{m \to \infty} P\left( \tilde{T}_{m'}^{m} \right) = 1,$   (117)

hence, for all $\epsilon > 0$ there exists $M \ge M_0$ such that, for all $m \ge M$,

$P\left( \tilde{T}_{m'}^{m} \right) \ge 1 - \epsilon.$   (118)

Next, since $\tilde{T}_{m'}^{m} \subseteq T_{\alpha_0, m'}^{m}$, we have

$P\left( T_{\alpha_0, m'}^{m} \right) \ge P\left( \tilde{T}_{m'}^{m} \right).$   (119)

Combining (118) and (119), for all $\epsilon > 0$ there exists $M \ge M_0$ such that, for all $m \ge M$,

$P\left( T_{\alpha_0, m'}^{m} \right) \ge 1 - \epsilon.$   (120)

For such an $\epsilon$, choose $M_1 \ge 2 M_0$, so that, for all $m \ge M_1$, inside the set $T_{\alpha_0, M_0}^{m} \cap A_{M_0, K^{-(l_{\max} + 1)}}$,

$Z_m \le Z_{M_0} \, K^{(m - M_0) \left( 1 - \alpha_{M_0}^{m} (1 + l_{\max}) \right)}$   (121)
$\stackrel{(a)}{\le} Z_{M_0} \, K^{-(m - M_0)/(2 l_{\max})}$   (122)
$\stackrel{(b)}{\le} Z_{M_0} \, K^{-m/(4 l_{\max})}$   (123)
$\stackrel{(c)}{\le} K^{-m/(4 l_{\max})},$   (124)

where (a) follows from (114), (b) follows from the assumption $m \ge M_1 \ge 2 M_0$, which gives $m - M_0 \ge m/2$, together with the definition of $\alpha_0$, for which

$\frac{\alpha_0}{l_{\max}} (1 + l_{\max}) - 1 = \frac{1}{2 l_{\max}},$   (125)

and (c) stems from $Z_{M_0} \le 1$.
The lemma is now proved by noting that the probability of the set $T_{\alpha_0, M_0}^{m} \cap A_{M_0, K^{-(l_{\max} + 1)}}$ satisfies

$P\left( T_{\alpha_0, M_0}^{m} \cap A_{M_0, K^{-(l_{\max} + 1)}} \right) \ge (I(W) - \epsilon) - \epsilon$   (126)
$= I(W) - 2\epsilon.$   (127)

Now we are ready to prove condition 2 of Definition 3. Let $\epsilon > 0$, let $\alpha_0$ be as in (115), and let $\gamma < 1$ and $\epsilon' > 0$ be such that, for all $j \in [1:s]$,

$\alpha_0 \gamma E_{l_j} \ge E_{l_j} - \epsilon'.$   (128)

We define the constant $C$, independent of the block length, as

$C = \prod_{j=1}^{s} l_j^{\, p_j E_{l_j}}.$   (129)

Let $m_3$ be an integer large enough so that, by defining

$m_1 = \left\lceil \frac{8 l_{\max} \log_2 m_3}{\alpha_0 \log_2 C} \right\rceil$   (130)

and

$m_2 = m_1 + 8 l_{\max} \log_2 m_3,$   (131)
it holds that

$m_1 \ge \max(M_1, 8 l_{\max})$   (132)

and

$m_3 - m_2 \ge \gamma \, m_3,$   (133)

while the typical sets $T_{\alpha_0, m_1}^{m_2}$ and $T_{\alpha_0, m_2}^{m_3}$ verify

$P\left( T_{\alpha_0, m_1}^{m_2} \right) > 1 - \epsilon \quad \text{and} \quad P\left( T_{\alpha_0, m_2}^{m_3} \right) > 1 - \epsilon.$   (134)

We start the proof by bounding $Z_{m_2}$:

$Z_{m_2} \stackrel{(a)}{\le} \left( K^{m_2 - m_1} Z_{m_1} \right)^{\prod_{k=m_1+1}^{m_2} D_{b_k}}$   (135)
$\stackrel{(b)}{\le} \left( K^{m_2 - m_1} K^{-m_1/(4 l_{\max})} \right)^{\prod_{k=m_1+1}^{m_2} D_{b_k}}$   (136)
$\stackrel{(c)}{\le} \left( K^{-m_1/(8 l_{\max})} \right)^{\prod_{k=m_1+1}^{m_2} D_{b_k}}$   (137)
$\stackrel{(d)}{\le} K^{-\prod_{k=m_1+1}^{m_2} D_{b_k}}$   (138)
$\stackrel{(e)}{\le} K^{-\prod_{j=1}^{s} l_j^{\, E_{l_j} \alpha_0 p_j (m_2 - m_1)}} = K^{-C^{\alpha_0 (m_2 - m_1)}}$   (139)-(141)
$\stackrel{(f)}{\le} K^{-m_3^{\, 8 \alpha_0 l_{\max} \log_2 C}},$   (142)

where (a) stems from Lemma 5, (b) is the result of Lemma 7, (c) results from the definition of $m_2$ in (131), (d) is a consequence of the constraint (132), (e) stems from the definition of $T_{\alpha_0, m_1}^{m_2}$, and (f) is a result of (130) and (131). Let us now bound the term $Z_{m_3}$:

$Z_{m_3} \stackrel{(a)}{\le} \left( K^{m_3 - m_2} Z_{m_2} \right)^{\prod_{k=m_2+1}^{m_3} D_{b_k}}$   (143)
$\stackrel{(b)}{\le} \left( K^{m_3} K^{-m_3^{\, 8 \alpha_0 l_{\max} \log_2 C}} \right)^{\prod_{k=m_2+1}^{m_3} D_{b_k}}$   (144)
$\stackrel{(c)}{\le} K^{-\prod_{k=m_2+1}^{m_3} D_{b_k}}$   (145)-(146)
$\stackrel{(d)}{\le} K^{-\prod_{j=1}^{s} l_j^{\, E_{l_j} \alpha_0 p_j (m_3 - m_2)}}$   (147)-(148)
$\stackrel{(e)}{\le} K^{-\prod_{j=1}^{s} l_j^{\, E_{l_j} \alpha_0 p_j \gamma m_3}}$   (149)
$\stackrel{(f)}{\le} K^{-\prod_{j=1}^{s} l_j^{\, (E_{l_j} - \epsilon') p_j m_3}}$   (150)
$\stackrel{(g)}{\le} 2^{-N^{E - \epsilon''}},$   (151)

with $\epsilon''$ proportional to $\epsilon'$, where (a) is a consequence of Lemma 5, (b) stems from (142), (c) results from the non-limiting assumption that $m_3$ is large, (d) follows from the definition of $T_{\alpha_0, m_2}^{m_3}$, (e) follows from the assumption in (133), (f) is a consequence of (128), while (g) follows by noting that the block length at iteration $m_3$ is $N = \prod_{k=1}^{m_3} l_k$. The proof is concluded using calculations similar to (81)-(85), and by noticing that

$P\left( \{ w : Z_{m_1} \le K^{-m_1/(4 l_{\max})} \} \cap T_{\alpha_0, m_1}^{m_2} \cap T_{\alpha_0, m_2}^{m_3} \right) \ge I(W) - 4\epsilon.$   (152)

REFERENCES

[1] E. Arikan, "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051-3073, July 2009.
[2] S. B. Korada, E. Sasoglu, and R. Urbanke, "Polar codes: Characterization of exponent, bounds, and constructions," IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6253-6264, Dec. 2010.
[3] N. Presman, O. Shapira, S. Litsyn, T. Etzion, and A. Vardy, "Binary polarization kernels from code decompositions," IEEE Transactions on Information Theory, vol. 61, no. 5, May 2015.
[4] F. Gabry, V. Bioglio, I. Land, and J.-C. Belfiore, "Multi-kernel construction of polar codes," in IEEE International Conference on Communications (ICC), Paris, France, May 2017; preprint available on arXiv.
[5] R. Mori and T. Tanaka, "Performance of polar codes with the construction using density evolution," IEEE Communications Letters, vol. 13, no. 7, pp. 519-521, July 2009.
[6] E. Arikan and E. Telatar, "On the rate of channel polarization," in IEEE International Symposium on Information Theory (ISIT), 2009.
[7] A. D. Wyner and J. Ziv, "A theorem on the entropy of certain binary sequences and applications - Part I," IEEE Transactions on Information Theory, vol. 19, no. 6, pp. 769-772, Nov. 1973.
[8] E. Sasoglu, "Polarization and polar codes," Foundations and Trends in Communications and Information Theory, vol. 8, no. 4, pp. 259-381, 2012.
More informationChannel Polarization and Blackwell Measures
Channel Polarization Blackwell Measures Maxim Raginsky Abstract The Blackwell measure of a binary-input channel (BIC is the distribution of the posterior probability of 0 under the uniform input distribution
More informationJoint Write-Once-Memory and Error-Control Codes
1 Joint Write-Once-Memory and Error-Control Codes Xudong Ma Pattern Technology Lab LLC, U.S.A. Email: xma@ieee.org arxiv:1411.4617v1 [cs.it] 17 ov 2014 Abstract Write-Once-Memory (WOM) is a model for many
More informationPolar Codes for Arbitrary DMCs and Arbitrary MACs
Polar Codes for Arbitrary DMCs and Arbitrary MACs Rajai Nasser and Emre Telatar, Fellow, IEEE, School of Computer and Communication Sciences, EPFL Lausanne, Switzerland Email: rajainasser, emretelatar}@epflch
More informationβ-expansion: A Theoretical Framework for Fast and Recursive Construction of Polar Codes
β-expansion: A Theoretical Framework for Fast and Recursive Construction of Polar Codes Gaoning He, Jean-Claude Belfiore, Ingmar Land, Ganghua Yang Paris Research Center, Huawei Technologies 0 quai du
More informationOn the Convergence of the Polarization Process in the Noisiness/Weak- Topology
1 On the Convergence of the Polarization Process in the Noisiness/Weak- Topology Rajai Nasser Email: rajai.nasser@alumni.epfl.ch arxiv:1810.10821v2 [cs.it] 26 Oct 2018 Abstract Let W be a channel where
More informationPerformance of Polar Codes for Channel and Source Coding
Performance of Polar Codes for Channel and Source Coding Nadine Hussami AUB, Lebanon, Email: njh03@aub.edu.lb Satish Babu Korada and üdiger Urbanke EPFL, Switzerland, Email: {satish.korada,ruediger.urbanke}@epfl.ch
More informationUnified Scaling of Polar Codes: Error Exponent, Scaling Exponent, Moderate Deviations, and Error Floors
Unified Scaling of Polar Codes: Error Exponent, Scaling Exponent, Moderate Deviations, and Error Floors Marco Mondelli, S. Hamed Hassani, and Rüdiger Urbanke Abstract Consider transmission of a polar code
More informationPolar Codes: Graph Representation and Duality
Polar Codes: Graph Representation and Duality arxiv:1312.0372v1 [cs.it] 2 Dec 2013 M. Fossorier ETIS ENSEA/UCP/CNRS UMR-8051 6, avenue du Ponceau, 95014, Cergy Pontoise, France Email: mfossorier@ieee.org
More informationPolar Codes are Optimal for Lossy Source Coding
Polar Codes are Optimal for Lossy Source Coding Satish Babu Korada and Rüdiger Urbanke EPFL, Switzerland, Email: satish.korada,ruediger.urbanke}@epfl.ch Abstract We consider lossy source compression of
More informationChapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University
Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission
More informationLecture 4 Noisy Channel Coding
Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem
More informationAalborg Universitet. Bounds on information combining for parity-check equations Land, Ingmar Rüdiger; Hoeher, A.; Huber, Johannes
Aalborg Universitet Bounds on information combining for parity-check equations Land, Ingmar Rüdiger; Hoeher, A.; Huber, Johannes Published in: 2004 International Seminar on Communications DOI link to publication
More informationLower Bounds on the Graphical Complexity of Finite-Length LDPC Codes
Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes Igal Sason Department of Electrical Engineering Technion - Israel Institute of Technology Haifa 32000, Israel 2009 IEEE International
More informationLecture 7. Union bound for reducing M-ary to binary hypothesis testing
Lecture 7 Agenda for the lecture M-ary hypothesis testing and the MAP rule Union bound for reducing M-ary to binary hypothesis testing Introduction of the channel coding problem 7.1 M-ary hypothesis testing
More informationThe Method of Types and Its Application to Information Hiding
The Method of Types and Its Application to Information Hiding Pierre Moulin University of Illinois at Urbana-Champaign www.ifp.uiuc.edu/ moulin/talks/eusipco05-slides.pdf EUSIPCO Antalya, September 7,
More informationBounds on Mutual Information for Simple Codes Using Information Combining
ACCEPTED FOR PUBLICATION IN ANNALS OF TELECOMM., SPECIAL ISSUE 3RD INT. SYMP. TURBO CODES, 003. FINAL VERSION, AUGUST 004. Bounds on Mutual Information for Simple Codes Using Information Combining Ingmar
More informationA Comparison of Superposition Coding Schemes
A Comparison of Superposition Coding Schemes Lele Wang, Eren Şaşoğlu, Bernd Bandemer, and Young-Han Kim Department of Electrical and Computer Engineering University of California, San Diego La Jolla, CA
More informationDispersion of the Gilbert-Elliott Channel
Dispersion of the Gilbert-Elliott Channel Yury Polyanskiy Email: ypolyans@princeton.edu H. Vincent Poor Email: poor@princeton.edu Sergio Verdú Email: verdu@princeton.edu Abstract Channel dispersion plays
More informationSuperposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels
Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels Nan Liu and Andrea Goldsmith Department of Electrical Engineering Stanford University, Stanford CA 94305 Email:
More informationVariable Length Codes for Degraded Broadcast Channels
Variable Length Codes for Degraded Broadcast Channels Stéphane Musy School of Computer and Communication Sciences, EPFL CH-1015 Lausanne, Switzerland Email: stephane.musy@ep.ch Abstract This paper investigates
More informationGeneralized Partial Orders for Polar Code Bit-Channels
Fifty-Fifth Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 3-6, 017 Generalized Partial Orders for Polar Code Bit-Channels Wei Wu, Bing Fan, and Paul H. Siegel Electrical and Computer
More informationFeedback Capacity of a Class of Symmetric Finite-State Markov Channels
Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Nevroz Şen, Fady Alajaji and Serdar Yüksel Department of Mathematics and Statistics Queen s University Kingston, ON K7L 3N6, Canada
More informationMultiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets
Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets Shivaprasad Kotagiri and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame,
More informationAn instantaneous code (prefix code, tree code) with the codeword lengths l 1,..., l N exists if and only if. 2 l i. i=1
Kraft s inequality An instantaneous code (prefix code, tree code) with the codeword lengths l 1,..., l N exists if and only if N 2 l i 1 Proof: Suppose that we have a tree code. Let l max = max{l 1,...,
More informationEquidistant Polarizing Transforms
DRAFT 1 Equidistant Polarizing Transorms Sinan Kahraman Abstract arxiv:1708.0133v1 [cs.it] 3 Aug 017 This paper presents a non-binary polar coding scheme that can reach the equidistant distant spectrum
More informationEE229B - Final Project. Capacity-Approaching Low-Density Parity-Check Codes
EE229B - Final Project Capacity-Approaching Low-Density Parity-Check Codes Pierre Garrigues EECS department, UC Berkeley garrigue@eecs.berkeley.edu May 13, 2005 Abstract The class of low-density parity-check
More informationThe Gallager Converse
The Gallager Converse Abbas El Gamal Director, Information Systems Laboratory Department of Electrical Engineering Stanford University Gallager s 75th Birthday 1 Information Theoretic Limits Establishing
More informationCoding of memoryless sources 1/35
Coding of memoryless sources 1/35 Outline 1. Morse coding ; 2. Definitions : encoding, encoding efficiency ; 3. fixed length codes, encoding integers ; 4. prefix condition ; 5. Kraft and Mac Millan theorems
More information5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010
5958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 12, DECEMBER 2010 Capacity Theorems for Discrete, Finite-State Broadcast Channels With Feedback and Unidirectional Receiver Cooperation Ron Dabora
More informationLECTURE 13. Last time: Lecture outline
LECTURE 13 Last time: Strong coding theorem Revisiting channel and codes Bound on probability of error Error exponent Lecture outline Fano s Lemma revisited Fano s inequality for codewords Converse to
More informationA Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources
A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources Wei Kang Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College
More informationEE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018
Please submit the solutions on Gradescope. EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 1. Optimal codeword lengths. Although the codeword lengths of an optimal variable length code
More informationUpper Bounds on the Capacity of Binary Intermittent Communication
Upper Bounds on the Capacity of Binary Intermittent Communication Mostafa Khoshnevisan and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame, Indiana 46556 Email:{mhoshne,
More informationShannon s noisy-channel theorem
Shannon s noisy-channel theorem Information theory Amon Elders Korteweg de Vries Institute for Mathematics University of Amsterdam. Tuesday, 26th of Januari Amon Elders (Korteweg de Vries Institute for
More informationMultimedia Communications. Mathematical Preliminaries for Lossless Compression
Multimedia Communications Mathematical Preliminaries for Lossless Compression What we will see in this chapter Definition of information and entropy Modeling a data source Definition of coding and when
More informationCut-Set Bound and Dependence Balance Bound
Cut-Set Bound and Dependence Balance Bound Lei Xiao lxiao@nd.edu 1 Date: 4 October, 2006 Reading: Elements of information theory by Cover and Thomas [1, Section 14.10], and the paper by Hekstra and Willems
More informationOn The Binary Lossless Many-Help-One Problem with Independently Degraded Helpers
On The Binary Lossless Many-Help-One Problem with Independently Degraded Helpers Albrecht Wolf, Diana Cristina González, Meik Dörpinghaus, José Cândido Silveira Santos Filho, and Gerhard Fettweis Vodafone
More informationGuesswork Subject to a Total Entropy Budget
Guesswork Subject to a Total Entropy Budget Arman Rezaee, Ahmad Beirami, Ali Makhdoumi, Muriel Médard, and Ken Duffy arxiv:72.09082v [cs.it] 25 Dec 207 Abstract We consider an abstraction of computational
More information5 Mutual Information and Channel Capacity
5 Mutual Information and Channel Capacity In Section 2, we have seen the use of a quantity called entropy to measure the amount of randomness in a random variable. In this section, we introduce several
More informationLecture 6 I. CHANNEL CODING. X n (m) P Y X
6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder
More informationConstruction and Decoding of Product Codes with Non-Systematic Polar Codes
Construction and Decoding of Product Codes with on-systematic Polar Codes Valerio Bioglio, Carlo Condo, Ingmar Land Mathematical and Algorithmic Sciences Lab Huawei Technologies France SASU Email: {valerio.bioglio,carlo.condo,ingmar.land}@huawei.com
More informationRemote Source Coding with Two-Sided Information
Remote Source Coding with Two-Sided Information Basak Guler Ebrahim MolavianJazi Aylin Yener Wireless Communications and Networking Laboratory Department of Electrical Engineering The Pennsylvania State
More informationSymmetric Characterization of Finite State Markov Channels
Symmetric Characterization of Finite State Markov Channels Mohammad Rezaeian Department of Electrical and Electronic Eng. The University of Melbourne Victoria, 31, Australia Email: rezaeian@unimelb.edu.au
More informationOn the Duality between Multiple-Access Codes and Computation Codes
On the Duality between Multiple-Access Codes and Computation Codes Jingge Zhu University of California, Berkeley jingge.zhu@berkeley.edu Sung Hoon Lim KIOST shlim@kiost.ac.kr Michael Gastpar EPFL michael.gastpar@epfl.ch
More informationApplications of Algebraic Geometric Codes to Polar Coding
Clemson University TigerPrints All Dissertations Dissertations 5-015 Applications of Algebraic Geometric Codes to Polar Coding Sarah E Anderson Clemson University Follow this and additional works at: https://tigerprintsclemsonedu/all_dissertations
More informationSecret Key and Private Key Constructions for Simple Multiterminal Source Models
Secret Key and Private Key Constructions for Simple Multiterminal Source Models arxiv:cs/05050v [csit] 3 Nov 005 Chunxuan Ye Department of Electrical and Computer Engineering and Institute for Systems
More informationPolar Write Once Memory Codes
Polar Write Once Memory Codes David Burshtein, Senior Member, IEEE and Alona Strugatski Abstract arxiv:1207.0782v2 [cs.it] 7 Oct 2012 A coding scheme for write once memory (WOM) using polar codes is presented.
More informationLecture 9 Polar Coding
Lecture 9 Polar Coding I-Hsiang ang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 29, 2015 1 / 25 I-Hsiang ang IT Lecture 9 In Pursuit of Shannon s Limit Since
More informationThe Communication Complexity of Correlation. Prahladh Harsha Rahul Jain David McAllester Jaikumar Radhakrishnan
The Communication Complexity of Correlation Prahladh Harsha Rahul Jain David McAllester Jaikumar Radhakrishnan Transmitting Correlated Variables (X, Y) pair of correlated random variables Transmitting
More informationSC-Fano Decoding of Polar Codes
SC-Fano Decoding of Polar Codes Min-Oh Jeong and Song-Nam Hong Ajou University, Suwon, Korea, email: {jmo0802, snhong}@ajou.ac.kr arxiv:1901.06791v1 [eess.sp] 21 Jan 2019 Abstract In this paper, we present
More informationTime-invariant LDPC convolutional codes
Time-invariant LDPC convolutional codes Dimitris Achlioptas, Hamed Hassani, Wei Liu, and Rüdiger Urbanke Department of Computer Science, UC Santa Cruz, USA Email: achlioptas@csucscedu Department of Computer
More informationExercises with solutions (Set B)
Exercises with solutions (Set B) 3. A fair coin is tossed an infinite number of times. Let Y n be a random variable, with n Z, that describes the outcome of the n-th coin toss. If the outcome of the n-th
More informationITCT Lecture IV.3: Markov Processes and Sources with Memory
ITCT Lecture IV.3: Markov Processes and Sources with Memory 4. Markov Processes Thus far, we have been occupied with memoryless sources and channels. We must now turn our attention to sources with memory.
More informationHomework Set #2 Data Compression, Huffman code and AEP
Homework Set #2 Data Compression, Huffman code and AEP 1. Huffman coding. Consider the random variable ( x1 x X = 2 x 3 x 4 x 5 x 6 x 7 0.50 0.26 0.11 0.04 0.04 0.03 0.02 (a Find a binary Huffman code
More informationOn Scalable Coding in the Presence of Decoder Side Information
On Scalable Coding in the Presence of Decoder Side Information Emrah Akyol, Urbashi Mitra Dep. of Electrical Eng. USC, CA, US Email: {eakyol, ubli}@usc.edu Ertem Tuncel Dep. of Electrical Eng. UC Riverside,
More informationarxiv: v2 [cs.it] 28 May 2017
Feedback and Partial Message Side-Information on the Semideterministic Broadcast Channel Annina Bracher and Michèle Wigger arxiv:1508.01880v2 [cs.it] 28 May 2017 May 30, 2017 Abstract The capacity of the
More informationOn Common Information and the Encoding of Sources that are Not Successively Refinable
On Common Information and the Encoding of Sources that are Not Successively Refinable Kumar Viswanatha, Emrah Akyol, Tejaswi Nanjundaswamy and Kenneth Rose ECE Department, University of California - Santa
More informationarxiv: v1 [cs.it] 5 Sep 2008
1 arxiv:0809.1043v1 [cs.it] 5 Sep 2008 On Unique Decodability Marco Dalai, Riccardo Leonardi Abstract In this paper we propose a revisitation of the topic of unique decodability and of some fundamental
More informationSerially Concatenated Polar Codes
Received September 11, 2018, accepted October 13, 2018. Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000. Digital Object Identifier 10.1109/ACCESS.2018.2877720 Serially Concatenated
More informationAn Achievable Error Exponent for the Mismatched Multiple-Access Channel
An Achievable Error Exponent for the Mismatched Multiple-Access Channel Jonathan Scarlett University of Cambridge jms265@camacuk Albert Guillén i Fàbregas ICREA & Universitat Pompeu Fabra University of
More informationAchievable Rates for Probabilistic Shaping
Achievable Rates for Probabilistic Shaping Georg Böcherer Mathematical and Algorithmic Sciences Lab Huawei Technologies France S.A.S.U. georg.boecherer@ieee.org May 3, 08 arxiv:707.034v5 cs.it May 08 For
More informationNational University of Singapore Department of Electrical & Computer Engineering. Examination for
National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:
More informationIntermittent Communication
Intermittent Communication Mostafa Khoshnevisan, Student Member, IEEE, and J. Nicholas Laneman, Senior Member, IEEE arxiv:32.42v2 [cs.it] 7 Mar 207 Abstract We formulate a model for intermittent communication
More informationAN INTRODUCTION TO SECRECY CAPACITY. 1. Overview
AN INTRODUCTION TO SECRECY CAPACITY BRIAN DUNN. Overview This paper introduces the reader to several information theoretic aspects of covert communications. In particular, it discusses fundamental limits
More informationConcatenated Polar Codes
Concatenated Polar Codes Mayank Bakshi 1, Sidharth Jaggi 2, Michelle Effros 1 1 California Institute of Technology 2 Chinese University of Hong Kong arxiv:1001.2545v1 [cs.it] 14 Jan 2010 Abstract Polar
More informationReliable Computation over Multiple-Access Channels
Reliable Computation over Multiple-Access Channels Bobak Nazer and Michael Gastpar Dept. of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA, 94720-1770 {bobak,
More informationArimoto Channel Coding Converse and Rényi Divergence
Arimoto Channel Coding Converse and Rényi Divergence Yury Polyanskiy and Sergio Verdú Abstract Arimoto proved a non-asymptotic upper bound on the probability of successful decoding achievable by any code
More informationDepth versus Breadth in Convolutional Polar Codes
Depth versus Breadth in Convolutional Polar Codes Maxime Tremlay, Benjamin Bourassa and David Poulin,2 Département de physique & Institut quantique, Université de Sherrooke, Sherrooke, Quéec, Canada JK
More informationOn the Construction of Polar Codes for Channels with Moderate Input Alphabet Sizes
1 On the Construction of Polar Codes for Channels with Moderate Input Alphabet Sizes Ido Tal Department of Electrical Engineering, Technion, Haifa 32000, Israel. Email: idotal@ee.technion.ac.il Abstract
More informationConstruction of Barnes-Wall Lattices from Linear Codes over Rings
01 IEEE International Symposium on Information Theory Proceedings Construction of Barnes-Wall Lattices from Linear Codes over Rings J Harshan Dept of ECSE, Monh University Clayton, Australia Email:harshanjagadeesh@monhedu
More informationPolar Codes for Some Multi-terminal Communications Problems
Polar Codes for ome Multi-terminal Communications Problems Aria G. ahebi and. andeep Pradhan Department of Electrical Engineering and Computer cience, niversity of Michigan, Ann Arbor, MI 48109, A. Email:
More informationLecture 3: Lower Bounds for Bandit Algorithms
CMSC 858G: Bandits, Experts and Games 09/19/16 Lecture 3: Lower Bounds for Bandit Algorithms Instructor: Alex Slivkins Scribed by: Soham De & Karthik A Sankararaman 1 Lower Bounds In this lecture (and
More informationFeedback Capacity of the Gaussian Interference Channel to Within Bits: the Symmetric Case
1 arxiv:0901.3580v1 [cs.it] 23 Jan 2009 Feedback Capacity of the Gaussian Interference Channel to Within 1.7075 Bits: the Symmetric Case Changho Suh and David Tse Wireless Foundations in the Department
More informationOptimal Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels
Optimal Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels Bike Xie, Student Member, IEEE and Richard D. Wesel, Senior Member, IEEE Abstract arxiv:0811.4162v4 [cs.it] 8 May 2009
More informationCharacterising Probability Distributions via Entropies
1 Characterising Probability Distributions via Entropies Satyajit Thakor, Terence Chan and Alex Grant Indian Institute of Technology Mandi University of South Australia Myriota Pty Ltd arxiv:1602.03618v2
More informationDistributed Functional Compression through Graph Coloring
Distributed Functional Compression through Graph Coloring Vishal Doshi, Devavrat Shah, Muriel Médard, and Sidharth Jaggi Laboratory for Information and Decision Systems Massachusetts Institute of Technology
More informationSubset Typicality Lemmas and Improved Achievable Regions in Multiterminal Source Coding
Subset Typicality Lemmas and Improved Achievable Regions in Multiterminal Source Coding Kumar Viswanatha, Emrah Akyol and Kenneth Rose ECE Department, University of California - Santa Barbara {kumar,eakyol,rose}@ece.ucsb.edu
More informationPermuted Successive Cancellation Decoder for Polar Codes
Permuted Successive Cancellation Decoder for Polar Codes Harish Vangala, Emanuele Viterbo, and Yi Hong, Dept. of ECSE, Monash University, Melbourne, VIC 3800, Australia. Email: {harish.vangala, emanuele.viterbo,
More informationConnection to Branching Random Walk
Lecture 7 Connection to Branching Random Walk The aim of this lecture is to prepare the grounds for the proof of tightness of the maximum of the DGFF. We will begin with a recount of the so called Dekking-Host
More informationInformation Theory CHAPTER. 5.1 Introduction. 5.2 Entropy
Haykin_ch05_pp3.fm Page 207 Monday, November 26, 202 2:44 PM CHAPTER 5 Information Theory 5. Introduction As mentioned in Chapter and reiterated along the way, the purpose of a communication system is
More informationA Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization of Analog Transmission
Li and Kang: A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing 1 A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization
More informationPrimary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel
Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel Stefano Rini, Ernest Kurniawan and Andrea Goldsmith Technische Universität München, Munich, Germany, Stanford University,
More information