Polar Codes for Sources with Finite Reconstruction Alphabets

Aria G. Sahebi and S. Sandeep Pradhan
Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA.

Abstract: Polar codes are a new class of codes, introduced by Arikan, capable of achieving the symmetric capacity of discrete memoryless channels. It was later shown that polar codes achieve the symmetric rate-distortion bound for the lossy source coding problem when the size of the reconstruction alphabet is a prime. In this paper, we show that polar codes achieve the symmetric rate-distortion bound for finite-alphabet sources regardless of the size of the reconstruction alphabet.

I. INTRODUCTION

Polar codes were originally proposed by Arikan in [1] for discrete memoryless channels with a binary input alphabet. Polar codes over binary-input channels are coset codes capable of achieving the symmetric capacity of such channels. These codes are constructed from the Kronecker powers of the $2 \times 2$ matrix
$$F = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$$
and are the first known class of capacity-achieving codes with an explicit construction. It was subsequently shown by Korada and Urbanke [2] that polar codes employed with a successive cancellation encoder are also optimal for the lossy source coding problem when the size of the reconstruction alphabet is two. Karzand and Telatar extended this result to the case where the size of the reconstruction alphabet is a prime [3].

In an earlier work [4], we showed that polar codes can achieve the symmetric capacity of arbitrary discrete memoryless channels regardless of the size of the channel input alphabet. It is shown in [4] that, in general, when the channel input alphabet is an Abelian group $G$, channel polarization occurs in several levels: for each subgroup $H$ of $G$ there exists a corresponding asymptotic case in which the channel output can determine the coset of the subgroup to which the channel input belongs. A modification is then made to the encoding and decoding rules to achieve the symmetric capacity of the channel. As mentioned in [4], polar codes employed with the modified encoding rule fall into a class of codes called nested group codes.

In this paper, motivated by the results of [2]-[4], we show that polar codes achieve the symmetric rate-distortion bound for the lossy source coding problem when the size of the reconstruction alphabet is finite. We show that, as in the channel coding problem, polar transformations applied to test channels can converge to several asymptotic cases, each corresponding to a subgroup $H$ of the reconstruction alphabet $G$. We employ a modified randomized-rounding encoding rule to achieve the symmetric rate-distortion bound.

This paper is organized as follows. In Section II, we state some definitions and basic facts which are used in the paper. In Section III we show that polar codes achieve the symmetric rate-distortion bound when the size of the reconstruction alphabet is of the form $p^r$, where $p$ is a prime and $r$ is a positive integer. This result is generalized in Section IV to the case where the reconstruction alphabet is an arbitrary Abelian group of arbitrary size with an arbitrary group operation. We conclude in Section V.

II. PRELIMINARIES

1) Source and Channel Models: The source is modeled as a discrete-time random process $X$ with each sample taking values in a fixed finite set $\mathcal{X}$ with probability distribution $p_X$. The reconstruction alphabet is denoted by $\mathcal{U}$, and the quality of reconstruction is measured by a single-letter distortion function $d : \mathcal{X} \times \mathcal{U} \to \mathbb{R}^+$. We denote the source by $(\mathcal{X}, \mathcal{U}, p_X, d)$. A test channel is specified by a tuple $(\mathcal{U}, \mathcal{X}, W)$, where $\mathcal{U}$ is the channel input alphabet, $\mathcal{X}$ is the channel output alphabet and $W$ is the channel transition kernel.
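A small, self-contained illustration of these objects (my own sketch, not part of the paper; the alphabets, distortion matrix and sequences below are made up): a finite source alphabet, a reconstruction alphabet $\mathcal{U} = \mathbb{Z}_6$, a distortion matrix $d(x,u)$, and the per-letter average distortion of a reconstruction sequence.

```python
import numpy as np

# source alphabet X = {0,...,3}, reconstruction alphabet U = Z_6
p_X = np.array([0.4, 0.3, 0.2, 0.1])                 # source distribution on X
d = np.abs(np.arange(4)[:, None] - np.arange(6)[None, :]).astype(float)  # d(x, u) = |x - u|

rng = np.random.default_rng(0)
n = 12
x = rng.choice(4, size=n, p=p_X)                     # length-n source sequence
u = (x + rng.integers(0, 2, size=n)) % 6             # some reconstruction sequence in U^n

avg_distortion = d[x, u].mean()                      # (1/n) * sum_i d(x_i, u_i)
print("per-letter average distortion:", avg_distortion)
```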
2) Achievability and the Rate-Distortion Function: A transmission system with parameters $(n, \Theta, \Delta, \tau)$ for compressing a given source $(\mathcal{X}, \mathcal{U}, p_X, d)$ consists of an encoding mapping and a decoding mapping
$$e : \mathcal{X}^n \to \{1, 2, \dots, \Theta\}, \qquad g : \{1, 2, \dots, \Theta\} \to \mathcal{U}^n$$
such that the following condition is met:
$$P\big( d(X^n, g(e(X^n))) > \Delta \big) \le \tau,$$
where $X^n$ is the random vector of length $n$ generated by the source. In this transmission system, $n$ denotes the block length, $\log \Theta$ denotes the number of channel uses, $\Delta$ denotes the distortion level and $\tau$ denotes the probability of exceeding the distortion level. Given a source, a pair of non-negative real numbers $(R, D)$ is said to be achievable if for every $\epsilon > 0$ and all sufficiently large $n$ there exists a transmission system with parameters $(n, \Theta, \Delta, \tau)$ for compressing the source such that
$$\frac{1}{n} \log \Theta \le R + \epsilon, \qquad \Delta \le D + \epsilon, \qquad \tau \le \epsilon.$$
The optimal rate-distortion function $R(D)$ of the source is the infimum of the rates $R$ such that $(R, D)$ is achievable. It is known that the optimal rate-distortion function is given by
$$R(D) = \min_{p_{U|X}:\; E_{p_X p_{U|X}}\{d(X,U)\} \le D} I(X; U),$$
where $p_{U|X}$ is the conditional probability of $U$ given $X$ and $p_X p_{U|X}$ is the joint distribution of $X$ and $U$. The symmetric rate-distortion function $\bar{R}(D)$ is defined as
$$\bar{R}(D) = \min_{p_{U|X}:\; E_{p_X p_{U|X}}\{d(X,U)\} \le D,\; p_U \equiv \frac{1}{q}} I(X; U),$$
where $p_U$ is the marginal distribution of $U$, given by $p_U(u) = \sum_{x \in \mathcal{X}} p_X(x) p_{U|X}(u|x)$, and $q$ is the size of the reconstruction alphabet $\mathcal{U}$.

3) Binary Polar Codes: For any $N = 2^n$, a polar code of length $N$ designed for the channel $(\mathbb{Z}_2, \mathcal{Y}, W)$ is a linear code characterized by a generator matrix $G_N$ and a set of indices $A \subseteq \{1, \dots, N\}$ of (almost) perfect channels. The generator matrix for polar codes is defined as $G_N = B_N F^{\otimes n}$, where $B_N$ is a permutation of rows,
$$F = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix},$$
and $\otimes$ denotes the Kronecker product. The set $A$ is a function of the channel. The decoding algorithm for polar codes is a specific form of successive cancellation [1]. For the source coding problem, when the size of the reconstruction alphabet is binary, the encoding rule is a form of soft-decision successive encoding and the decoding rule is a matrix multiplication [2].

4) Groups, Rings and Fields: All groups referred to in this paper are Abelian groups. Given a group $(G, +)$, a subset $H$ of $G$ is called a subgroup of $G$ if it is closed under the group operation; in this case $(H, +)$ is a group in its own right. This is denoted by $H \le G$. A coset $C$ of a subgroup $H$ is a shift of $H$ by an arbitrary element $a \in G$, i.e., $C = a + H$ for some $a \in G$. For any subgroup $H$ of $G$, its cosets partition the group $G$. A transversal $T$ of a subgroup $H$ of $G$ is a subset of $G$ containing one and only one element from each coset (shift) of $H$. We give some examples in the following. The simplest nontrivial example of a group is $\mathbb{Z}_2$ with mod-2 addition, which is also a ring and a field with mod-2 multiplication. The group $\mathbb{Z}_2 \times \mathbb{Z}_2$ is also a ring, and a field, under component-wise mod-2 addition and a carefully defined multiplication. The group $\mathbb{Z}_4$ with mod-4 addition and multiplication is a ring but not a field, since the element $2 \in \mathbb{Z}_4$ does not have a multiplicative inverse. The subset $\{0, 2\}$ is a subgroup of $\mathbb{Z}_4$ since it is closed under mod-4 addition; $\{0\}$ and $\mathbb{Z}_4$ are the two other subgroups of $\mathbb{Z}_4$. The group $\mathbb{Z}_6$ is not a field. The subgroups of $\mathbb{Z}_6$ are $\{0\}$, $\{0,3\}$, $\{0,2,4\}$ and $\mathbb{Z}_6$.

5) Polar Codes over Abelian Groups: For any discrete memoryless channel, there always exists an Abelian group of the same size as the channel input alphabet. In general, an Abelian group need not admit a multiplication operation. Since polar encoders are characterized by a matrix multiplication, before using these codes for channels with arbitrary input alphabet sizes, a generator matrix for codes over Abelian groups needs to be properly defined. In Appendix A of [5], a convention is introduced to generate codes over groups using $\{0,1\}$-valued generator matrices.

6) Channel Parameters: For a test channel $(\mathcal{U}, \mathcal{X}, W)$, assume $\mathcal{U}$ is equipped with the structure of a group $(G, +)$. The symmetric capacity of the test channel is defined as $I_0(W) = I(U; X)$, where the channel input $U$ is uniformly distributed over $\mathcal{U}$ and $X$ is the output of the channel; i.e., for $|\mathcal{U}| = q$,
$$I_0(W) = \sum_{u \in \mathcal{U}} \sum_{x \in \mathcal{X}} \frac{1}{q} W(x|u) \log \frac{W(x|u)}{\frac{1}{q} \sum_{\tilde{u} \in \mathcal{U}} W(x|\tilde{u})}.$$
The Bhattacharyya distance between two distinct input symbols $u$ and $\tilde{u}$ is defined as
$$Z(W_{\{u,\tilde{u}\}}) = \sum_{x \in \mathcal{X}} \sqrt{W(x|u)\, W(x|\tilde{u})},$$
and the average Bhattacharyya distance is defined as
$$Z(W) = \frac{1}{q(q-1)} \sum_{\substack{u, \tilde{u} \in \mathcal{U} \\ u \ne \tilde{u}}} Z(W_{\{u,\tilde{u}\}}).$$
We use the following two quantities extensively in the paper:
$$Z_d(W) = \frac{1}{q} \sum_{u \in \mathcal{U}} \sum_{x \in \mathcal{X}} \sqrt{W(x|u)\, W(x|u+d)}, \qquad
D_d(W) = \frac{1}{q} \sum_{u \in \mathcal{U}} \sum_{x \in \mathcal{X}} \Big( \sqrt{W(x|u)} - \sqrt{W(x|u+d)} \Big)^2,$$
where $d$ is some element of $G$ and $+$ is the group operation.

7) Notation: We denote by $O(\epsilon)$ any function of $\epsilon$ which is right-continuous around $0$ and for which $O(\epsilon) \to 0$ as $\epsilon \to 0$. We write $a \approx_\epsilon b$ to mean $a = b + O(\epsilon)$. For positive integers $N$ and $r$, let $\{A_0, A_1, \dots, A_r\}$ be a partition of the index set $\{1, 2, \dots, N\}$. Given sets $T_t$ for $t = 0, \dots, r$, the direct sum $\bigoplus_{t=0}^{r} T_t^{A_t}$ is defined as the set of all tuples $u^N = (u_1, \dots, u_N)$ such that $u_i \in T_t$ whenever $i \in A_t$.
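The channel parameters above are direct sums over the transition matrix, so they are easy to compute numerically. The following sketch (mine, not from the paper) takes the group to be $\mathbb{Z}_q$ and a test channel given as a $q \times |\mathcal{X}|$ row-stochastic array, and evaluates $I_0(W)$, the average Bhattacharyya distance $Z(W)$, and the shift quantities $Z_d(W)$ and $D_d(W)$; the toy channel at the end is made up.

```python
import numpy as np

def channel_parameters(W, d=1):
    """W[u, x] = W(x|u) for a test channel with input alphabet Z_q (q = W.shape[0]).
    Returns (I0, Z_avg, Z_d, D_d); I0 is in bits."""
    q, m = W.shape
    p_out = W.mean(axis=0)                                  # (1/q) * sum_u W(x|u)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(W > 0, W / p_out, 1.0)             # zero terms contribute 0
    I0 = np.sum(W * np.log2(ratio)) / q
    Z_pair = np.sqrt(W[:, None, :] * W[None, :, :]).sum(axis=2)   # Z(W_{u, u'})
    Z_avg = (Z_pair.sum() - np.trace(Z_pair)) / (q * (q - 1))
    Wd = np.roll(W, -d, axis=0)                             # Wd[u, x] = W(x | u + d)
    Z_d = np.sum(np.sqrt(W * Wd)) / q
    D_d = np.sum((np.sqrt(W) - np.sqrt(Wd)) ** 2) / q
    return I0, Z_avg, Z_d, D_d

W = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])   # toy channel, q = 4
print(channel_parameters(W, d=2))
```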

III. POLAR CODES FOR SOURCES WITH RECONSTRUCTION ALPHABET $\mathbb{Z}_{p^r}$

In this section we consider sources whose reconstruction alphabet size is of the form $p^r$ for some prime $p$ and positive integer $r$. In this case, the reconstruction alphabet can be considered as a ring with addition and multiplication modulo $p^r$. We prove the achievability of the symmetric rate-distortion bound for these sources using polar codes; later, in Section IV, we generalize this result to sources with arbitrary finite reconstruction alphabets.

A. $\mathbb{Z}_{p^r}$ Rings

Let $G = \mathbb{Z}_{p^r} = \{0, 1, 2, \dots, p^r - 1\}$, with addition and multiplication modulo $p^r$, be the input alphabet of the test channel, where $p$ is a prime and $r$ is a positive integer. For $t = 0, 1, \dots, r$, define the subgroups $H_t$ of $G$ as
$$H_t = p^t G = \{0,\, p^t,\, 2p^t,\, \dots,\, p^r - p^t\},$$
and for $t = 0, 1, \dots, r-1$, define the subsets $K_t$ of $G$ as $K_t = H_t \setminus H_{t+1}$; i.e., $K_t$ is the set of elements of $G$ which are a multiple of $p^t$ but not a multiple of $p^{t+1}$. Note that $K_0$ is the set of all invertible elements of $G$ and $K_r := H_r = \{0\}$. The sets $K_0, K_1, \dots, K_r$ can thus be ordered by the decreasing "invertibility" of their elements. Let $T_t$ be a transversal of $H_t$ in $G$, i.e., a subset of $G$ containing one and only one element from each coset of $H_t$ in $G$. One valid choice is $T_t = \{0, 1, \dots, p^t - 1\}$. Note that, given $H_t$ and $T_t$, each element $g$ of $G$ can be represented uniquely as a sum $g = \hat{g} + \tilde{g}$, where $\hat{g} \in T_t$ and $\tilde{g} \in H_t$.
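The subgroup chain, the sets $K_t$ and the transversals $T_t$ are easy to enumerate. The following sketch (my own illustration, not from the paper) lists them for $G = \mathbb{Z}_8$ and checks the unique decomposition $g = \hat{g} + \tilde{g}$ with $\hat{g} \in T_t$ and $\tilde{g} \in H_t$.

```python
def subgroup_chain(p, r):
    """H_t = p^t * Z_{p^r}, K_t = H_t minus H_{t+1}, transversals T_t = {0,...,p^t - 1}."""
    q = p ** r
    H = [set(range(0, q, p ** t)) for t in range(r + 1)]    # H_r = {0}
    K = [H[t] - H[t + 1] for t in range(r)] + [{0}]
    T = [set(range(p ** t)) for t in range(r + 1)]
    return H, K, T

p, r = 2, 3                                                  # G = Z_8
q = p ** r
H, K, T = subgroup_chain(p, r)
for t in range(r + 1):
    print(f"H_{t} = {sorted(H[t])},  K_{t} = {sorted(K[t])},  T_{t} = {sorted(T[t])}")

t = 2
for g in range(q):
    g_hat = g % (p ** t)              # T_t-component (coset representative)
    g_tilde = g - g_hat               # H_t-component (a multiple of p^t)
    assert g_hat in T[t] and g_tilde in H[t] and (g_hat + g_tilde) % q == g
```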
B. Recursive Channel Transformation

1) The Basic Channel Transforms: For a test channel $(\mathcal{U} = G, \mathcal{X}, W)$ with $|G| = q$, the single-step channel transformations are given by
$$W^-(x_1, x_2 \mid u_1) = \sum_{u_2 \in G} \frac{1}{q}\, W(x_1 \mid u_1 + u_2)\, W(x_2 \mid u_2), \qquad (1)$$
$$W^+(x_1, x_2, u_1 \mid u_2) = \frac{1}{q}\, W(x_1 \mid u_1 + u_2)\, W(x_2 \mid u_2), \qquad (2)$$
for $x_1, x_2 \in \mathcal{X}$ and $u_1, u_2 \in G$. Repeating these operations $n$ times recursively, we obtain $N = 2^n$ channels $W_N^{(1)}, \dots, W_N^{(N)}$. For $i = 1, \dots, N$, these channels are given by
$$W_N^{(i)}(x^N, u^{i-1} \mid u_i) = \sum_{u_{i+1}^N \in G^{N-i}} \frac{1}{q^{N-1}}\, W^N(x^N \mid u^N G_N),$$
where $G_N$ is the generator matrix for polar codes. Let $V^N$ be a random vector uniformly distributed over $G^N$ and assume the random vectors $V^N$, $U^N$ and $X^N$ are distributed over $G^N \times G^N \times \mathcal{X}^N$ according to
$$p_{V^N U^N X^N}(v^N, u^N, x^N) = \frac{1}{q^N}\, \mathbb{1}\{u^N = v^N G_N\} \prod_{i=1}^{N} W(x_i \mid u_i). \qquad (3)$$
The conditional probability distributions induced by (3) are consistent with the channels $W_N^{(i)}$ defined above. We use this probability distribution extensively throughout the paper.
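A minimal numerical sketch of the single-step transforms (1) and (2) (mine, not from the paper; it assumes the group is $\mathbb{Z}_q$ and the channel is given as a $q \times |\mathcal{X}|$ array, and the toy channel is made up):

```python
import numpy as np

def polar_transform_step(W):
    """One application of (1) and (2) over Z_q.
    W[u, x] = W(x|u).  Returns
      W_minus[u1, x1, x2]     = (1/q) * sum_{u2} W(x1|u1+u2) W(x2|u2)
      W_plus[u2, x1, x2, u1]  = (1/q) * W(x1|u1+u2) W(x2|u2)."""
    q, m = W.shape
    W_minus = np.zeros((q, m, m))
    W_plus = np.zeros((q, m, m, q))
    for u1 in range(q):
        for u2 in range(q):
            joint = np.outer(W[(u1 + u2) % q], W[u2]) / q
            W_minus[u1] += joint
            W_plus[u2, :, :, u1] = joint
    return W_minus, W_plus

W = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])   # toy channel over Z_4
W_minus, W_plus = polar_transform_step(W)
assert np.allclose(W_minus.sum(axis=(1, 2)), 1.0)    # each W^-( . | u1) is a distribution
assert np.allclose(W_plus.sum(axis=(1, 2, 3)), 1.0)  # each W^+( . | u2) is a distribution
```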

C. Encoding and Decoding

Let $n$ be a positive integer and let $G_N$ be the generator matrix for polar codes, where $N = 2^n$. Let $\{A_t : 0 \le t \le r\}$ be a partition of the index set $\{1, 2, \dots, N\}$ and let $b^N$ be an arbitrary element of the set $\bigoplus_{t=0}^{r} H_t^{A_t}$. Assume that the partition $\{A_t : 0 \le t \le r\}$ and the vector $b^N$ are known to both the encoder and the decoder. The encoder maps a source sequence $x^N$ to a vector $v^N \in G^N$ by the following rule: for $i = 1, \dots, N$, if $i \in A_t$, let $v_i$ be a random element $g$ of the set $b_i + T_t$, picked with probability
$$P(v_i = g) = \frac{P_{V_i \mid V^{i-1}, X^N}(g \mid v^{i-1}, x^N)}{P_{V_i \mid V^{i-1}, X^N}(b_i + T_t \mid v^{i-1}, x^N)}.$$
This encoding rule is a generalization of the randomized-rounding encoding rule used for the binary case in [2]. Given a sequence $v^N \in G^N$, the decoder decodes it to $u^N = v^N G_N$. For the sake of analysis we assume that the vector $b^N$ is uniformly distributed over the set $\bigoplus_{t=0}^{r} H_t^{A_t}$, although it is common information between the encoder and the decoder. The average distortion is given by $E\{d(X^N, V^N G_N)\}$ and the rate of the code is
$$R = \frac{1}{N} \sum_{t=0}^{r} |A_t| \log |T_t| = \frac{1}{N} \sum_{t=0}^{r} |A_t|\, t \log p.$$

D. Test Channel Polarization

The following result has been proved in [5]: For all $\epsilon > 0$, there exists a number $N = N_\epsilon = 2^{n_\epsilon}$ and a partition $\{A_0^\epsilon, A_1^\epsilon, \dots, A_r^\epsilon\}$ of $\{1, \dots, N\}$ such that for $t = 0, \dots, r$ and $i \in A_t^\epsilon$, $Z_d(W_N^{(i)}) < O(\epsilon)$ if $d \in K_s$ for some $0 \le s < t$, and $Z_d(W_N^{(i)}) > 1 - O(\epsilon)$ if $d \in K_s$ for some $t \le s < r$. Furthermore, for $t = 0, \dots, r$ and $i \in A_t^\epsilon$, we have $I(W_N^{(i)}) = t \log p + O(\epsilon)$ and $Z_t(W_N^{(i)}) > 1 - O(\epsilon)$, where
$$Z_t(W) = \frac{1}{|H_t| - 1} \sum_{d \in H_t \setminus \{0\}} Z_d(W).$$
Moreover, as $\epsilon \to 0$, $|A_t^\epsilon| / N \to p_t$ for some probabilities $p_0, \dots, p_r$.

In Section III-E we show that for any $\beta < \frac{1}{2}$ and for $t = 0, \dots, r$,
$$\lim_{n \to \infty} P\big( Z_{t,n} > 1 - 2^{-2^{\beta n}} \big) = P\big( Z_{t,\infty} = 1 \big). \qquad (4)$$

Remark III.1. This observation implies the following stronger result: For all $\epsilon > 0$, there exists a number $N = N_\epsilon = 2^{n_\epsilon}$ and a partition $\{A_0^\epsilon, A_1^\epsilon, \dots, A_r^\epsilon\}$ of $\{1, \dots, N\}$ such that for $t = 0, \dots, r$ and $i \in A_t^\epsilon$, $I(W_N^{(i)}) = t \log p + O(\epsilon)$ and $Z_t(W_N^{(i)}) > 1 - 2^{-2^{\beta n_\epsilon}}$. Moreover, as $\epsilon \to 0$, $|A_t^\epsilon| / N \to p_t$ for the same probabilities $p_0, \dots, p_r$.
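The level structure of this polarization can be read off from the shift parameters $Z_d$. The sketch below (my own illustration, not from the paper; the channel is random and the normalization of $Z_d$ follows the definition in Section II) computes $Z_d(W)$ for every nonzero shift $d \in \mathbb{Z}_{p^r}$ and groups the values by the sets $K_t$; after polarization, an index $i$ belongs to $A_t$ when $Z_d(W_N^{(i)})$ is near $0$ for $d$ in $K_0, \dots, K_{t-1}$ and near $1$ for $d$ in $K_t, \dots, K_{r-1}$.

```python
import numpy as np

def shift_bhattacharyya(W, q):
    """Z_d(W) = (1/q) * sum_{u,x} sqrt(W(x|u) W(x|u+d)) for each d in Z_q."""
    return np.array([np.sum(np.sqrt(W * np.roll(W, -d, axis=0))) / q for d in range(q)])

p, r = 2, 3
q = p ** r
rng = np.random.default_rng(0)
W = rng.dirichlet(np.ones(5), size=q)       # a random test channel with 5 outputs
Z = shift_bhattacharyya(W, q)
for t in range(r):
    K_t = [d for d in range(1, q) if d % p ** t == 0 and d % p ** (t + 1) != 0]
    print(f"t = {t}:  Z_d for d in K_{t} =", np.round(Z[K_t], 3))
```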

E. Rate of Polarization

In this subsection we derive a rate-of-polarization result for the source coding problem. The proof does not assume that $q$ is a power of a prime, so the result derived here is valid for the general case as well. It is shown in [6], with a slight generalization, that if a random process $Z_n$ satisfies the two properties
$$Z_{n+1} \le k Z_n \quad \text{w.p. } \tfrac{1}{2}, \qquad (5)$$
$$Z_{n+1} \le Z_n^2 \quad \text{w.p. } \tfrac{1}{2}, \qquad (6)$$
for some constant $k$, then for any $\beta < \frac{1}{2}$, $\lim_{n \to \infty} P(Z_n < 2^{-2^{\beta n}}) = P(Z_\infty = 0)$. We prove that the random process $D_{d,n}$ satisfies these two properties.

First note that, by definition,
$$D_d(W^+) = \frac{1}{q^2} \sum_{u_2 \in G} \sum_{\substack{x_1, x_2 \in \mathcal{X} \\ u_1 \in G}} \Big( \sqrt{W(x_1|u_1{+}u_2)\, W(x_2|u_2)} - \sqrt{W(x_1|u_1{+}u_2{+}d)\, W(x_2|u_2{+}d)} \Big)^2.$$
If we add and subtract the term $\sqrt{W(x_1|u_1{+}u_2)\, W(x_2|u_2{+}d)}$ inside the parentheses and use the inequality $(a+b)^2 \le 2a^2 + 2b^2$ with
$$a = \sqrt{W(x_1|u_1{+}u_2)\, W(x_2|u_2)} - \sqrt{W(x_1|u_1{+}u_2)\, W(x_2|u_2{+}d)},$$
$$b = \sqrt{W(x_1|u_1{+}u_2)\, W(x_2|u_2{+}d)} - \sqrt{W(x_1|u_1{+}u_2{+}d)\, W(x_2|u_2{+}d)},$$
we obtain
$$D_d(W^+) \le \frac{2}{q^2} \sum_{u_2} \sum_{x_1, x_2, u_1} \Big[ W(x_1|u_1{+}u_2) \big( \sqrt{W(x_2|u_2)} - \sqrt{W(x_2|u_2{+}d)} \big)^2 + W(x_2|u_2{+}d) \big( \sqrt{W(x_1|u_1{+}u_2)} - \sqrt{W(x_1|u_1{+}u_2{+}d)} \big)^2 \Big]. \qquad (7)$$
This summation splits into two separate summations. For the first one,
$$\frac{2}{q^2} \sum_{u_2, x_1, x_2, u_1} W(x_1|u_1{+}u_2) \big( \sqrt{W(x_2|u_2)} - \sqrt{W(x_2|u_2{+}d)} \big)^2 = \frac{2}{q} \sum_{u_2, x_2} \big( \sqrt{W(x_2|u_2)} - \sqrt{W(x_2|u_2{+}d)} \big)^2 = 2 D_d(W), \qquad (8)$$
and similarly, for the second summation,
$$\frac{2}{q^2} \sum_{u_2, x_1, x_2, u_1} W(x_2|u_2{+}d) \big( \sqrt{W(x_1|u_1{+}u_2)} - \sqrt{W(x_1|u_1{+}u_2{+}d)} \big)^2 = 2 D_d(W). \qquad (9)$$
Therefore, it follows from (7), (8) and (9) that condition (5) is satisfied with $k = 4$.

Next we show that $D_d(W^-) \le D_d(W)^2$. For $d \in G$, define
$$S_d(W) = \frac{1}{q} \sum_{v \in G} \sum_{x \in \mathcal{X}} \sqrt{W(x|v)\, W(x|v+d)},$$
and note that $S_d(W) = S_{-d}(W)$ and that
$$D_d(W) = \frac{1}{q} \sum_{v, x} \big( \sqrt{W(x|v)} - \sqrt{W(x|v+d)} \big)^2 = 2 S_0(W) - 2 S_d(W),$$
so that $D_d(W)^2 = 4\big( S_0(W) - S_d(W) \big)^2$. Starting from
$$D_d(W^-) = \frac{1}{q} \sum_{v_1 \in G} \sum_{x_1, x_2 \in \mathcal{X}} \Big( \sqrt{W^-(x_1, x_2 \mid v_1)} - \sqrt{W^-(x_1, x_2 \mid v_1 + d)} \Big)^2,$$
expanding the squared term yields an expression consisting of four sums $D_1$, $D_2$, $D_3$, $D_4$ of products of the form $\sqrt{W(x_2|v_2)\, W(x_2|v_2')}\, \sqrt{W(x_1|v_1{+}v_2)\, W(x_1|v_1{+}v_2')}$, with the shift $d$ appearing in neither, both, or exactly one of the $x_1$-arguments. Evaluating the inner sums by shifting the summation variables, the terms without a mismatch reduce to $\sum_{a \in G} S_a(W)^2$ and the cross terms reduce to $\sum_{a \in G} S_a(W) S_{a-d}(W)$, so that $D_d(W^-)$ is controlled by these sums together with $S_0(W)$ and $S_d(W)$. We make use of the rearrangement inequality:

Lemma III.1. Let $\pi$ be an arbitrary permutation of the set $\{1, \dots, n\}$. If $a_1 \ge \dots \ge a_n$ and $b_1 \ge \dots \ge b_n$, then
$$\sum_{i=1}^{n} a_i b_i \ge \sum_{i=1}^{n} a_i b_{\pi(i)}.$$

Applying the rearrangement inequality to the sequence $(S_a(W))_{a \in G}$ gives the required comparison between $\sum_a S_a(W) S_{a-d}(W)$ and the corresponding products involving $S_0(W)$ and $S_d(W)$, from which $D_d(W^-) \le D_d(W)^2$ follows. Therefore condition (6) is also satisfied, and hence
$$\lim_{n \to \infty} P\big( D_{d,n} < 2^{-2^{\beta n}} \big) = P\big( D_{d,\infty} = 0 \big).$$
It has been shown in Appendix D of [5] that $D_d(W) < \epsilon$ implies $Z_d(W) > 1 - \epsilon$. This completes the rate-of-polarization result.

F. Polar Codes Achieve the Rate-Distortion Bound

The average distortion of the encoding and decoding rules described in Section III-C is given by
$$D_{avg} = \sum_{x \in \mathcal{X}^N} p_X(x) \sum_{b \in \bigoplus_t H_t^{A_t}} \frac{1}{\prod_{t=0}^{r} |H_t|^{|A_t|}} \sum_{v \in b \oplus \left( \bigoplus_t T_t^{A_t} \right)} \left( \prod_{t=0}^{r} \prod_{i \in A_t} \frac{P_{V_i|V^{i-1},X^N}(v_i \mid v^{i-1}, x)}{P_{V_i|V^{i-1},X^N}(b_i + T_t \mid v^{i-1}, x)} \right) d\big( x,\, v G_N \big).$$
This can be written as $D_{avg} = E_Q\{ d(X^N, V^N G_N) \}$, where the distribution $Q$ is defined by $Q(x) = p_X(x)$ and, for $i \in A_t$,
$$Q(v_i \mid v^{i-1}, x) = \frac{1}{|H_t|} \cdot \frac{P_{V_i|V^{i-1},X^N}(v_i \mid v^{i-1}, x)}{P_{V_i|V^{i-1},X^N}(b_i + T_t \mid v^{i-1}, x)},$$
and hence $Q(v, x) = p_X(x) \prod_{i=1}^{N} Q(v_i \mid v^{i-1}, x)$. Recall that $P(v, x) = p_X(x) \prod_{i=1}^{N} P_{V_i|V^{i-1},X^N}(v_i \mid v^{i-1}, x)$. The total variation distance between the distributions $P$ and $Q$ is
$$\| P - Q \|_{t.v.} = \sum_{v \in G^N,\, x \in \mathcal{X}^N} \big| Q(v, x) - P(v, x) \big| = \sum_{x \in \mathcal{X}^N} p_X(x) \sum_{v \in G^N} \big| Q(v \mid x) - P(v \mid x) \big|.$$
We have
$$\sum_{v} \big| Q(v|x) - P(v|x) \big| = \sum_{v} \Big| \prod_{i=1}^{N} Q(v_i | v^{i-1}, x) - \prod_{i=1}^{N} P(v_i | v^{i-1}, x) \Big| \stackrel{(a)}{\le} \sum_{t=0}^{r} \sum_{i \in A_t} \sum_{v} \Big( \prod_{j < i} Q(v_j | v^{j-1}, x) \Big) \big| Q(v_i | v^{i-1}, x) - P(v_i | v^{i-1}, x) \big| \Big( \prod_{j > i} P(v_j | v^{j-1}, x) \Big),$$
where in $(a)$ we used the telescoping inequality introduced in [2]. It is straightforward to show that
$$\| P - Q \|_{t.v.} \le \sum_{t=0}^{r} \sum_{i \in A_t} E_P \Big\{ \big| 1 - |H_t|\, P_{V_i|V^{i-1},X^N}( b_i + T_t \mid V^{i-1}, X^N ) \big| \Big\}.$$
It has been shown in Appendix D of [5] that if $Z_d(W) > 1 - \epsilon$ then $D_d(W) \le \sqrt{2\epsilon - \epsilon^2}$. Writing $P_{V_i|V^{i-1},X^N}(b_i + T_t \mid v^{i-1}, x)$ in terms of the channels $W_N^{(i)}$ and applying this bound to every shift $d \in H_t$, it follows that if $Z_d(W_N^{(i)}) > 1 - \epsilon$ for all $d \in H_t$, then for $\delta = 2 |H_t| \sqrt{2\epsilon - \epsilon^2}$ we have
$$E_P \Big\{ \big| 1 - |H_t|\, P_{V_i|V^{i-1},X^N}( b_i + T_t \mid V^{i-1}, X^N ) \big| \Big\} < \delta.$$
An argument similar to that of [2] then implies
$$D_{avg} = E_Q\{ d(X^N, V^N G_N) \} \le E_P\{ d(X^N, V^N G_N) \} + \sum_{t=0}^{r} |A_t|\, d_{max}\, \delta,$$
where $d_{max}$ is the maximum value of the distortion function. Note that $E_P\{ d(X^N, V^N G_N) \} \le D$. Therefore,
$$D_{avg} \le D + \sum_{t=0}^{r} |A_t|\, d_{max}\, \delta.$$
From the rate of polarization derived in Section III-E, we can choose $\epsilon = 2^{-2^{\beta n}}$; this implies that $D + \sum_t |A_t|\, d_{max}\, \delta \to D$ as $n \to \infty$. Also note that the rate of the code, $R = \frac{1}{N} \sum_{t=0}^{r} |A_t|\, t \log p$, converges to $\sum_{t=0}^{r} p_t\, t \log p$, and this last quantity is equal to the symmetric capacity of the test channel, since the mutual information is a martingale under the polar transform. This means that the rate $I(X; U)$ is achievable with distortion $D$.
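For concreteness, the following is a minimal sketch of the randomized-rounding selection step of Section III-C for a single index $i \in A_t$ (my own illustration, not from the paper). It assumes the successive-cancellation posterior $P_{V_i|V^{i-1},X^N}(\cdot \mid v^{i-1}, x^N)$ has already been computed; here it is replaced by a made-up distribution, since computing it requires the full SC recursion.

```python
import numpy as np

def randomized_rounding_step(posterior, b_i, t, p, r, rng):
    """Pick v_i from the coset b_i + T_t with probability proportional to the
    successive-cancellation posterior, as in the encoding rule of Section III-C.
    posterior[g] = P(V_i = g | v^{i-1}, x^N) for g in Z_{p^r}  (assumed given).
    b_i is the frozen element of H_t; T_t = {0, ..., p^t - 1}."""
    q = p ** r
    coset = [(b_i + g) % q for g in range(p ** t)]     # b_i + T_t
    weights = np.array([posterior[g] for g in coset])
    weights = weights / weights.sum()                  # renormalize within the coset
    return int(rng.choice(coset, p=weights))

p, r, t = 2, 3, 2
rng = np.random.default_rng(1)
posterior = rng.dirichlet(np.ones(p ** r))             # stand-in for the SC posterior
v_i = randomized_rounding_step(posterior, b_i=4, t=t, p=p, r=r, rng=rng)
print("selected v_i =", v_i)                           # lies in 4 + T_2 = {4, 5, 6, 7}
```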

IV. ARBITRARY RECONSTRUCTION ALPHABETS

For an arbitrary Abelian group $G$, the following polarization result has been proved in [5]: For all $\epsilon > 0$, there exists a number $N = N_\epsilon = 2^{n_\epsilon}$ and a partition $\{A_H^\epsilon : H \le G\}$ of $\{1, \dots, N\}$ such that for $H \le G$ and $i \in A_H^\epsilon$, $Z_d(W_N^{(i)}) < O(\epsilon)$ if $d \notin H$ and $Z_d(W_N^{(i)}) > 1 - O(\epsilon)$ if $d \in H \setminus \{0\}$. Furthermore, for $H \le G$ and $i \in A_H^\epsilon$, we have $I(W_N^{(i)}) = \log \frac{|G|}{|H|} + O(\epsilon)$ and $Z_H(W_N^{(i)}) > 1 - O(\epsilon)$, where
$$Z_H(W) = \frac{1}{|H| - 1} \sum_{d \in H \setminus \{0\}} Z_d(W).$$
Moreover, as $\epsilon \to 0$, $|A_H^\epsilon| / N \to p_H$ for some probabilities $\{p_H : H \le G\}$.

As mentioned earlier, the rate-of-polarization result derived in Section III-E is valid for the general case. Therefore, for any $\beta < \frac{1}{2}$ and for $H \le G$,
$$\lim_{n \to \infty} P\big( Z_{H,n} > 1 - 2^{-2^{\beta n}} \big) = P\big( Z_{H,\infty} = 1 \big).$$

Remark IV.1. This observation implies the following stronger result: For all $\epsilon > 0$, there exists a number $N = N_\epsilon = 2^{n_\epsilon}$ and a partition $\{A_H^\epsilon : H \le G\}$ of $\{1, \dots, N\}$ such that for $H \le G$ and $i \in A_H^\epsilon$, $I(W_N^{(i)}) = \log \frac{|G|}{|H|} + O(\epsilon)$ and $Z_H(W_N^{(i)}) > 1 - 2^{-2^{\beta n_\epsilon}}$. Moreover, as $\epsilon \to 0$, $|A_H^\epsilon| / N \to p_H$ for some probabilities $\{p_H : H \le G\}$.

The encoding and decoding rules for the general case are as follows. Let $n$ be a positive integer and let $G_N$ be the generator matrix for polar codes, where $N = 2^n$. Let $\{A_H : H \le G\}$ be a partition of the index set $\{1, 2, \dots, N\}$ and let $b^N$ be an arbitrary element of the set $\bigoplus_{H \le G} H^{A_H}$. Assume that the partition $\{A_H : H \le G\}$ and the vector $b^N$ are known to both the encoder and the decoder. For a subgroup $H$ of $G$, let $T_H$ be a transversal of $H$ in $G$. The encoder maps a source sequence $x^N$ to a vector $v^N \in G^N$ by the following rule: for $i = 1, \dots, N$, if $i \in A_H$, let $v_i$ be a random element $g$ of the set $b_i + T_H$, picked with probability
$$P(v_i = g) = \frac{P_{V_i \mid V^{i-1}, X^N}(g \mid v^{i-1}, x^N)}{P_{V_i \mid V^{i-1}, X^N}(b_i + T_H \mid v^{i-1}, x^N)}.$$
Given a sequence $v^N \in G^N$, the decoder decodes it to $u^N = v^N G_N$. It follows from the analysis of the $\mathbb{Z}_{p^r}$ case in a straightforward fashion that this encoding/decoding scheme achieves the symmetric rate-distortion bound when $G$ is an arbitrary Abelian group.

V. CONCLUSION

We have shown that polar codes achieve the symmetric rate-distortion bound for finite-alphabet sources regardless of the size of the reconstruction alphabet.

REFERENCES

[1] E. Arikan, "Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels," IEEE Transactions on Information Theory, vol. 55, no. 7, July 2009.
[2] S. B. Korada and R. Urbanke, "Polar Codes are Optimal for Lossy Source Coding," IEEE Transactions on Information Theory, vol. 56, no. 4, Apr. 2010.
[3] M. Karzand and E. Telatar, "Polar Codes for Q-ary Source Coding," Proceedings of the IEEE International Symposium on Information Theory, Austin, TX, 2010.
[4] A. G. Sahebi and S. S. Pradhan, "Multilevel Polarization of Polar Codes Over Arbitrary Discrete Memoryless Channels," Proc. 49th Allerton Conference on Communication, Control and Computing, Sept. 2011.
[5] A. G. Sahebi and S. S. Pradhan, "Multilevel Polarization of Polar Codes over Arbitrary Discrete Memoryless Channels," July 2011, online.
[6] E. Arikan and E. Telatar, "On the rate of channel polarization," Proceedings of the IEEE International Symposium on Information Theory, Seoul, Korea, 2009.
