Polar Codes are Optimal for Write-Efficient Memories
Qing Li
Department of Computer Science and Engineering
Texas A&M University, College Station, TX 77843
qingli@cse.tamu.edu

Anxiao (Andrew) Jiang
Department of Computer Science and Engineering
Texas A&M University, College Station, TX 77843
ajiang@cse.tamu.edu

Abstract—Write-efficient memories (WEM) [1] were introduced by Ahlswede and Zhang as a model for storing and updating information on a rewritable medium with constraints. A coding scheme for WEM using recently proposed polar codes is presented. The coding scheme achieves the rewriting capacity, and the rewriting and decoding operations can be done in time O(N log N), where N is the length of the codeword.

I. INTRODUCTION

Write-efficient memories (WEM) are models for storing and updating information on a rewritable medium with constraints. WEM is widely used in the data storage area: in flash memories, write-once memories (WOM) [11] and the recently proposed compressed rank modulation (CRM) [9] are examples of WEM, and codes based on WEM were proposed for phase-change memories [8]. The recently proposed scheme constructing polar codes for WOM that achieve capacity [3] motivates us to construct codes for WEM.

A. WEM with a maximal rewriting cost constraint

Let X = {0, 1, ..., q-1} be the storage alphabet, let R^+ = [0, +inf), and let phi : X x X -> R^+ be the rewriting cost function. Suppose that a memory consists of N cells. Given one cell state x^N = (x_1, x_2, ..., x_N) in X^N and another cell state y^N in X^N, the rewriting cost of changing x^N to y^N is phi(x^N, y^N) = sum_{i=1}^{N} phi(x_i, y_i).

Let M be a positive integer, and let D = {0, 1, ..., M-1}. Define the decoding function D : X^N -> D, i.e., D(x^N) = i. Given the current cell state x^N and the data j to rewrite, the rewriting function is R : X^N x D -> X^N such that D(R(x^N, j)) = j.

Definition 1. [5] An (N, M, q, d) WEM code is a collection of subsets C = {C_i : i in D}, where C_i is a subset of X^N and D(x^N) = i for every x^N in C_i, such that:
1) for all i != j, C_i and C_j are disjoint;
2) for all j in D and all x^N in C, there exists y^N = R(x^N, j) such that (1/N) phi(x^N, y^N) <= d.
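As a small illustration of the additive rewriting cost in Definition 1, the sketch below computes phi(x^N, y^N) = sum_i phi(x_i, y_i); the Hamming-style cost table is an assumed example, not one fixed by the paper:

```python
def rewriting_cost(x, y, phi):
    """Total cost of rewriting cell-state vector x into y: sum of per-cell costs."""
    assert len(x) == len(y)
    return sum(phi[a][b] for a, b in zip(x, y))

# Assumed example cost for q = 2: changing a cell costs 1, keeping it costs 0
# (the Hamming cost, one common choice of symmetric rewriting cost).
phi_hamming = [[0, 1], [1, 0]]

print(rewriting_cost([0, 0, 1, 1], [1, 0, 1, 0], phi_hamming))  # 2 cells change -> 2
```

The per-word constraint of Definition 1 then reads rewriting_cost(x, y, phi) <= N * d.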
The rewriting rate of an (N, M, q, d) WEM code is R = (1/N) log_2 M, and the rewriting capacity function R(q, d) is the largest d-admissible rate as N -> inf.

Let P(X x X) be the set of joint probability distributions over X x X. For a pair of random variables (X, Y) over (X, X), let P_XY denote the joint distribution, P_Y the marginal distribution, and P_{Y|X} the conditional distribution. E(.) denotes expectation. If X is uniformly distributed over {0, 1, ..., q-1}, we write X ~ U(q). Let P(q, d) = {P_XY in P(X x X) : P_X = P_Y, E(phi(X, Y)) <= d}. R(q, d) is determined as [5]:

  R(q, d) = max_{P_XY in P(q, d)} H(Y|X).

For WOM codes, a cell state can only increase, never decrease. WOM codes are special cases of WEM codes: for a WOM cell updated from x in X to y in X, the cost is phi(x, y) = 0 if y >= x, and +inf otherwise. Therefore, WOM codes are WEM codes with phi(.) defined as above and with d = 0.

In this paper, we focus on the symmetric rewriting capacity function R_s(q, d):

Definition 2. For X, Y in X with P_X, P_Y and P_XY, and phi : X x X -> R^+,

  R_s(q, d) = max_{P_XY in P_s(q, d)} H(Y|X),

where P_s(q, d) = {P_XY in P(X x X) : P_X = P_Y, X ~ U(q), E(phi(X, Y)) <= d}.

We present an example of WEM with R_s(q, d) below. S_{q,m} is the set of (mq)!/(m!)^q permutations of the multiset {1, ..., 1, 2, ..., 2, ..., q, ..., q}, in which each of the q symbols appears m times. We abuse notation and write u^{qm} = [u_1, u_2, ..., u_{qm}] for an element of S_{q,m}, denoting the mapping i -> u_i.

Example 3. A rewriting code for CRM with a maximal
rewriting cost constraint, (q, m, M, d), is defined by replacing X^N with S_{q,m}, replacing phi(.) with the Chebyshev distance between u^{qm}, v^{qm} in S_{q,m}, d_inf(u^{qm}, v^{qm}) = max_{j in {1, ..., qm}} |u_j - v_j|, and replacing the cost bound with d_inf <= d in Definition 1.

The above rewriting code is in fact an instance of WEM: for x, y in X, let phi(x, y) = 0 if |x - y| <= d, and +inf otherwise. Now the (q, m, M, d) CRM is a (qm, M, q, 0) WEM with X^N replaced by S_{q,m} and phi(.) as just defined.

B. WEM with an average rewriting cost constraint

With deterministic D(x^N) and R(x^N, i), suppose that message sequences M_1, M_2, ..., M_t (t -> inf) are written into the memory, where each M_i in D is uniformly distributed and is represented by the cell state x^N(i); the sequence of cell states forms a Markov chain. Its transition probability matrix is given by

  mu(y^N | x^N) = 1/M if there exists i such that y^N = R(x^N, i), and 0 otherwise.

Denote the stationary distribution of x^N by pi(x^N). The average rewriting cost D-bar is determined as follows:

  D-bar = lim_{t->inf} (1/t) sum_{i=1}^{t} (1/N) E(phi(x^N(i), x^N(i+1)))
        = sum_{x^N} pi(x^N) (1/N) sum_{y^N} mu(y^N | x^N) phi(x^N, y^N)
        = sum_{x^N} pi(x^N) (1/M) sum_{j} D_j(x^N),            (1)

where D_j(x^N) is the average rewriting cost of updating from x^N to data j.

Definition 4. An (N, M, q, d)_ave WEM code is a collection of subsets C = {C_i : i in D}, where C_i is a subset of X^N and D(x^N) = i for every x^N in C_i, such that:
1) for all i != j, C_i and C_j are disjoint;
2) the average rewriting cost satisfies D-bar <= d.

The rewriting rate of an (N, M, q, d)_ave code is R_ave = (1/N) log_2 M, and its rewriting capacity function R_ave(q, d) is the largest d-admissible rate as N -> inf. It is proved that R_ave(q, d) = R(q, d) [1]. Similarly, we focus on the symmetric rewriting capacity function R_s^ave(q, d), defined as in Definition 2.
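The Chebyshev cost of Example 3 can be illustrated directly; the helper names below are hypothetical:

```python
import math

def chebyshev(u, v):
    """d_inf(u, v) = max_j |u_j - v_j| between two multipermutations."""
    return max(abs(a - b) for a, b in zip(u, v))

def crm_cost(u, v, d):
    """The induced WEM cost: 0 if the Chebyshev distance is within d, else infinite."""
    return 0 if chebyshev(u, v) <= d else math.inf

u = [1, 1, 2, 2]          # a multipermutation in S_{2,2} (q = 2, m = 2)
v = [1, 2, 1, 2]
print(chebyshev(u, v), crm_cost(u, v, 1))   # -> 1 0
```

With this 0/inf cost, a (q, m, M, d) CRM code is exactly a (qm, M, q, 0) WEM code, as stated above.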
C. The outline

The connection between the rate-distortion theorem and the rewriting capacity theorem is presented in Section II. The binary polar WEM codes with an average rewriting cost constraint and with a maximal rewriting cost constraint are presented in Subsections A and B of Section III, respectively. The q-ary polar WEM codes, based on the recently proposed q-ary polar codes [10], are presented in Subsections A and B of Section IV for an average and a maximal rewriting cost constraint, respectively. Section V concludes.

II. LOSSY SOURCE CODING AND ITS DUALITY WITH WEM

In this section, we briefly present background on lossy source coding and its duality with WEM, which inspires our code constructions for WEM.

Let X also denote the source variable space, and let Y denote the reconstruction space. Let d : Y x X -> R^+ denote the distortion function; the distortion between a vector x^N and its reconstruction y^N is d(x^N, y^N) = sum_{i=1}^{N} d(x_i, y_i). A (q^{NR'}, N) rate-distortion code consists of an encoding function f_N : X^N -> {0, 1, ..., q^{NR'}-1} and a reproduction function g_N : {0, 1, ..., q^{NR'}-1} -> Y^N. The associated distortion is (1/N) E(d(X^N, g_N(f_N(X^N)))), where the expectation is with respect to the probability distribution on X^N. R'(q, D) is the infimum of rates R' such that the associated distortion is at most D as N -> inf. Let P'(q, D) = {P_XY in P(X x Y) : E(d(X, Y)) <= D}; then R'(q, D) = min_{P_XY in P'(q, D)} I(X; Y) [4]. (We use primes for rate-distortion quantities to distinguish them from rewriting quantities.)

We focus on the doubly symmetric rate-distortion function for (X, Y) over (X x X) with d(x, y): R'_s(q, D) = min_{P_XY in P'_s(q, D)} I(Y; X), where P'_s(q, D) = {P_XY in P(X x X) : P_X = P_Y, X ~ U(q), E(d(X, Y)) <= D}.

The duality between R_s(q, D) and R'_s(q, D) is captured by the following lemma, whose proof is omitted since it is straightforward.

Lemma 5. With the same d(.) and phi(.),

  R_s(q, D) + R'_s(q, D) = log_2 q.    (2)
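Lemma 5 can be sanity-checked numerically for q = 2 with Hamming cost and distortion: R_s(2, D) = h(D) (achieved by a BSC(D) test channel) and R'_s(2, D) = 1 - h(D). Restricting the maximization below to BSC-type test channels is an assumption justified by the symmetry constraints (uniform X, P_X = P_Y), not something the lemma itself imposes:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def wem_capacity(D, steps=4000):
    # R_s(2, D): maximize H(Y|X) over BSC-type P(y|x) with crossover p <= D;
    # for a BSC, uniform X and P_X = P_Y hold automatically.
    return max(h2(k / steps * min(D, 0.5)) for k in range(steps + 1))

def rd_function(D):
    # Rate-distortion function of a binary symmetric source: R(D) = 1 - h2(D).
    return 1.0 - h2(min(D, 0.5))

D = 0.11
# Lemma 5 with q = 2: R_s(2, D) + R'_s(2, D) = log2(2) = 1.
print(wem_capacity(D) + rd_function(D))
```

The printed sum agrees with log_2 q = 1 up to numerical precision.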
III. POLAR CODES ARE OPTIMAL FOR BINARY WEM

Inspired by Lemma 5, in this section we show that polar codes can be used to construct binary WEM codes achieving R_s(2, D), in a way related to the code construction for lossy source coding of [7].
A. A code construction for binary WEM with an average rewriting cost constraint

1) Background on polar codes [2]: Let W : {0, 1} -> Y be a binary-input discrete memoryless channel for some output alphabet Y. Let I(W) in [0, 1] denote the mutual information between the input and output of W with a uniform distribution on the input. Let G_N denote the n-th Kronecker power of the kernel (1 0; 1 1), where N = 2^n. Let the Bhattacharyya parameter be

  Z(W) = sum_{y in Y} sqrt(W_{Y|X}(y|0) W_{Y|X}(y|1)).

The polar code C_N(F, u_F), with frozen set F a subset of {0, 1, ..., N-1} and frozen bits u_F in {0, 1}^{|F|} (u_F denotes the subvector (u_i : i in F)), is the linear code

  C_N(F, u_F) = {x^N = u^N G_N : u_{F^c} in {0, 1}^{|F^c|}}.

The polar code ensemble is C_N(F) = {C_N(F, u_F) : u_F in {0, 1}^{|F|}}.

The secret of polar codes achieving I(W) lies in how F is selected: define sub-channel i, W_N^{(i)} : {0, 1} -> Y^N x {0, 1}^{i-1}, with input u_i, output (y^N, u^{i-1}) and transition probabilities

  W_N^{(i)}(y^N, u^{i-1} | u_i) = (1/2^{N-1}) sum_{u_{i+1}^N} W_N(y^N | u^N),

where W_N(y^N | u^N) = prod_i W(y_i | (u^N G_N)_i) and (u^N G_N)_i denotes the i-th element of u^N G_N. The fraction of sub-channels {W_N^{(i)}} that approach the noiseless channel, i.e., Z(W_N^{(i)}) <= 2^{-N^beta} for beta < 1/2, approaches I(W). The set F is chosen as the indices with large Z(W_N^{(i)}), that is, F = {i in {0, ..., N-1} : Z(W_N^{(i)}) >= 2^{-N^beta}} for beta < 1/2. Encoding is done by a linear transformation, and decoding is done by successive cancellation (SC).

2) Polar codes for lossy source coding: We sketch the binary polar code construction for lossy source coding of [7] as follows. Note that R'_s(2, D) can be obtained through the following optimization:

  min I(Y; X)
  s.t. sum_x (1/2) P(y|x) = 1/2 for each y,
       sum_{x, y} (1/2) P(y|x) d(y, x) <= D.    (3)

Let P(y|x) be the conditional distribution minimizing the objective of (3). P(y|x) plays the role of a channel; by convention we call it the test channel and denote it by W(y|x).
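The Kronecker-power transform G_N described above can be sketched as follows (the bit-reversal permutation sometimes included in G_N is omitted here, which only permutes indices):

```python
import numpy as np

KERNEL = np.array([[1, 0], [1, 1]], dtype=np.uint8)   # Arikan's 2x2 kernel

def polar_transform_matrix(n):
    """G_N: the n-fold Kronecker power of the kernel over GF(2), N = 2^n."""
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, KERNEL) % 2
    return G

G = polar_transform_matrix(3)                          # N = 8
u = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
x = (u @ G) % 2                                        # polar encoding: x^N = u^N G_N

# KERNEL @ KERNEL = I over GF(2), so G_N is an involution: applying the
# transform twice recovers u. This is why decoding later can reuse G_N itself.
assert np.array_equal((x @ G) % 2, u)
print(x)
```

The involution property is used implicitly wherever the text writes G_N^{-1}.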
Now we construct the source code achieving R'_s(2, D) using the polar code for W(y|x), and denote the rate of the source code by R': set F as the N(1 - R') sub-channels with the highest Z(W_N^{(i)}), set F^c as the remaining NR' sub-channels, and set u_F to an arbitrary value. A source word y^N is mapped to a codeword x^N in C_N(F, u_F), and x^N is described by the index u_{F^c} = (x^N G_N^{-1})_{F^c}. The reproduction process is the SC encoding scheme u-hat^N = U-hat(y^N, u_F): for each k from 0 to N-1,
1) if k in F, set u-hat_k = u_k;
2) else, set u-hat_k = m with posterior probability P(m | u-hat^{k-1}, y^N).
The reproduction word is u-hat^N G_N. Thus the average distortion D_N(F, u_F) (over the source word y^N and the encoder randomness for the code C_N(F, u_F)) is

  D_N(F, u_F) = sum_{y^N} P(y^N) sum_{u-hat_{F^c}} prod_{i in F^c} P(u-hat_i | u-hat^{i-1}, y^N) (1/N) d(y^N, u-hat^N G_N),

where u-hat_F = u_F. The expectation of D_N(F, u_F) over a uniform choice of u_F is D_N(F) = sum_{u_F} 2^{-|F|} D_N(F, u_F).

Let Q_{U,Y} denote the distribution defined by Q_Y(y^N) = P(y^N) and

  Q(u_i | u^{i-1}, y^N) = 1/2 if i in F, and P(u_i | u^{i-1}, y^N) otherwise.

Then D_N(F) equals E_Q((1/N) d(y^N, u^N G_N)), where E_Q(.) denotes expectation with respect to Q_{U,Y}.

It is proved that D_N(F) <= D + O(2^{-N^beta}) for beta < 1/2, that the rate of the above scheme is R' = |F^c|/N, and that polar codes achieve the rate-distortion bound (Theorem 3 of [7]). On the other hand, Theorem 2 of [3] states the strong converse result of rate-distortion theory. More precisely, if y^N is uniformly distributed over {0, 1}^N, then for all delta > 0, 0 < beta < 1/2, and N sufficiently large, with the above SC encoding process and the induced distribution Q,

  Q((1/N) d(u^N G_N, y^N) >= D + delta) < 2^{-N^beta}.

That is, for a given y^N, the above reproduction process obtains x^N = u^N G_N whose normalized distortion is below D + delta with probability approaching 1.

3) The code construction: We focus on the code construction with a symmetric rewriting cost function, which satisfies phi(x, y) = phi(x + z, y + z) for all x, y, z in {0, 1}, where + is over GF(2).
To construct codes for WEM achieving R_s(2, D), we utilize its related form R'_s(2, D) from (2) and the test channel W(y|x) for R'_s(2, D). Note that W(y|x) is a binary symmetric channel. The code construction for an (N, M, 2, D)_ave code with rate R is presented in Algorithm III.1:

Algorithm III.1 A code construction for (N, M, 2, D)_ave WEM
1: Set F as the NR sub-channels with the highest Z(W_N^{(i)}), and set F^c as the remaining N(1 - R) sub-channels.
2: The (N, M, 2, D)_ave code is C = {C_i : C_i = C_N(F, u_F(i))}, where u_F(i) is the binary representation of i for i in {0, 1, ..., M-1}.

Clearly, since G_N is of full rank [2], u_F(i) != u_F(j) implies that C_N(F, u_F(i)) and C_N(F, u_F(j)) are disjoint. The rewriting operation and the decoding operation are defined in Algorithm III.2 and Algorithm III.3, respectively, where the dither g^N is inspired by [3].

Algorithm III.2 The rewriting operation y^N = R(x^N, i)
1: Let v^N = x^N + g^N, where the dither g^N is known both to the decoding function and to the rewriting function and is chosen such that v^N is uniformly distributed over {0, 1}^N; + is over GF(2).
2: SC encode v^N; this results in u-hat^N = U-hat(v^N, u_F(i)) and y-hat^N = u-hat^N G_N.
3: y^N = y-hat^N + g^N.

Algorithm III.3 The decoding operation u_F(i) = D(x^N)
1: y^N = x^N + g^N.
2: u_F(i) = (y^N G_N^{-1})_F.

Lemma 6. D(R(x^N, i)) = i holds for each rewriting.

Proof: From the rewriting operation, y^N = y-hat^N + g^N = U-hat(v^N, u_F(i)) G_N + g^N. The decoding function computes ((y^N + g^N) G_N^{-1})_F = (U-hat(v^N, u_F(i)))_F = u_F(i); thus the decoding result is i.

4) The average rewriting cost analysis: From y^N = R(x^N, i), we know y^N = U-hat(v^N, u_F(i)) G_N + g^N = U-hat(x^N + g^N, u_F(i)) G_N + g^N, so phi(x^N, y^N) = phi(x^N, U-hat(x^N + g^N, u_F(i)) G_N + g^N), which equals phi(x^N + g^N, U-hat(x^N + g^N, u_F(i)) G_N) since phi(.) is symmetric. Denote w^N = x^N + g^N; then phi(x^N, y^N) = phi(w^N, U-hat(w^N, u_F(i)) G_N). The average rewriting cost is

  D-bar = lim_{t->inf} (1/t) sum_{i=1}^{t} (1/N) E(phi(x^N(i), x^N(i+1)))
        = lim_{t->inf} (1/t) sum_{i=1}^{t} (1/N) E(phi(w^N, U-hat(w^N, u_F(M_{i+1})) G_N))
        = sum_{w^N} pi(w^N) (1/M) sum_j D_j(w^N).    (4)
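Algorithms III.2 and III.3 and Lemma 6 can be illustrated with a toy stand-in for SC encoding: here the information bits are chosen at random rather than by successive cancellation (so there is no cost guarantee), but the dither-and-involution argument behind Lemma 6 goes through unchanged. The names and the frozen set below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def polar_G(n):
    """G_N = n-fold Kronecker power of [[1,0],[1,1]] over GF(2)."""
    G = np.array([[1]], dtype=np.uint8)
    K = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, K) % 2
    return G

n = 3
N = 2 ** n
G = polar_G(n)
F_set = [0, 1, 2]                              # hypothetical frozen set carrying the message
g = rng.integers(0, 2, N, dtype=np.uint8)      # dither, shared by rewrite and decode

def rewrite(x, msg_bits):
    """Algorithm III.2 with a toy stand-in for SC: frozen bits carry the message;
    information bits are random here (real SC would pick them to keep cost low)."""
    v = (x + g) % 2                            # would seed SC encoding; unused by this toy
    u = rng.integers(0, 2, N, dtype=np.uint8)  # stand-in for the SC choices
    u[F_set] = msg_bits
    return ((u @ G) % 2 + g) % 2               # y = u G_N + g

def decode(y):
    """Algorithm III.3: u = (y + g) G_N^{-1}; over GF(2), G_N is its own inverse."""
    return (((y + g) % 2) @ G % 2)[F_set]

x = rng.integers(0, 2, N, dtype=np.uint8)
msg = np.array([1, 0, 1], dtype=np.uint8)
assert (decode(rewrite(x, msg)) == msg).all()  # Lemma 6: D(R(x, i)) = i
```

Whatever the encoder does with the information bits, the frozen positions survive the transform, which is the whole content of Lemma 6.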
Let us focus on D_j(w^N), the average rewriting cost (here, over the probability of rewriting to data j and the randomness of the encoder) of updating w^N to a codeword representing j. Thus

  D_j(w^N) = sum_{u_{F^c}} prod_{i in F^c} P(u_i | u^{i-1}, w^N) (1/N) phi(w^N, u^N G_N),

where u_F = u_F(j). Therefore, interpreting phi(.) as d(.), D-bar is exactly the average distortion over the ensemble C_N(F), namely D_N(F). The following lemma from [7] bounds D-bar:

Lemma 7. [7] Let beta < 1/2 be a constant and let sigma_N = 2^{-N^beta}. When the polar code for the source code achieving R'_s(2, D) is constructed with F = {i in {0, 1, ..., N-1} : Z(W_N^{(i)}) >= 1 - 2 sigma_N^2}, then D_N(F) <= D + O(2^{-N^beta}), where D is the average rewriting cost constraint.

Therefore, with the same beta, sigma_N, F and polar code ensemble C_N(F), D-bar <= D + O(2^{-N^beta}).

According to [2],

  lim_{N->inf} (1/N) |{i in {0, 1, ..., N-1} : Z(W_N^{(i)}) <= 2^{-N^beta}}| = I(W) = R'_s(2, D),

and these nearly noiseless sub-channels all lie in F^c; this implies that for sufficiently large N there is a set F such that |F^c|/N <= R'_s(2, D) + epsilon for any epsilon > 0. In other words, the rate of the constructed WEM code is R = |F|/N = 1 - |F^c|/N >= 1 - R'_s(2, D) - epsilon = R_s(2, D) - epsilon. The complexity of the decoding and the rewriting operations is of order O(N log N) according to [2]. We summarize the theoretical performance of the above polar WEM code as follows:
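For a binary erasure channel the Bhattacharyya parameters of the sub-channels can be computed exactly (Z -> 2Z - Z^2 for the minus transform, Z -> Z^2 for the plus transform), which makes the polarization statement and the selection of F easy to visualize. Using a BEC here is an illustrative assumption only; the construction above works with the BSC test channel:

```python
def subchannel_Z(e, n):
    """Exact Bhattacharyya parameters of the N = 2^n sub-channels of a BEC(e)."""
    Z = [e]
    for _ in range(n):
        Z = [z for x in Z for z in (2 * x - x * x, x * x)]
    return Z

Z = subchannel_Z(0.5, 10)                            # N = 1024 sub-channels
F = [i for i, z in enumerate(Z) if z >= 0.99]        # nearly useless -> frozen set
good = [i for i, z in enumerate(Z) if z <= 0.01]     # nearly noiseless -> carry u_{F^c}
print(len(F) / len(Z), len(good) / len(Z))           # each fraction tends to 1/2 for e = 0.5
```

As n grows, almost every sub-channel falls into one of the two extreme groups, and the middle (unpolarized) fraction vanishes.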
Theorem 8. Let phi : X x X -> R^+ be a binary symmetric rewriting cost function. Fix a rewriting cost D and 0 < beta < 1/2. For any rate R < R_s(2, D), there exists a sequence of polar WEM codes of length N and rate R_N >= R such that, under the above rewriting operation, the average rewriting cost satisfies D-bar <= D + O(2^{-N^beta}). The decoding and rewriting operation complexity of the codes is O(N log N).

B. A code construction for binary WEM with a maximal rewriting cost constraint

The code construction, the rewriting operation, and the decoding operation are exactly the same as Algorithm III.1, Algorithm III.2, and Algorithm III.3, respectively. The rewriting capacity is guaranteed by Lemma 5, the decoding and rewriting complexity is the same as for polar codes, and the rewriting cost bound follows from the strong converse result of rate-distortion theory, i.e., Theorem 2 of [3]. Thus, we have:

Theorem 9. Let phi : X x X -> R^+ be a binary symmetric rewriting cost function. Fix a rewriting cost D, delta > 0, and 0 < beta < 1/2. For any rate R < R_s(2, D), there exists a sequence of polar WEM codes of length N and rate R_N >= R such that, under the above rewriting operation and the induced probability distribution Q, the rewriting cost between a current codeword y^N and its updated codeword x^N satisfies Q((1/N) phi(y^N, x^N) >= D + delta) < 2^{-N^beta}. The decoding and rewriting operation complexity of the codes is O(N log N).

IV. POLAR CODES ARE OPTIMAL FOR q-ARY WEM, q = 2^r

In this section, we extend the binary scheme to q-ary WEM with q = 2^r, considering that the length of polar codes is N = 2^n.

A. A code construction for q-ary WEM with an average rewriting cost constraint, q = 2^r

1) Background on q-ary polar codes, q = 2^r [10]: The storage alphabet is X with |X| = q, and for x in X, (x^0, x^1, ..., x^{r-1}) is its binary representation. Let W : X -> Y be a discrete memoryless channel. Its symmetric capacity is

  I(W) = sum_{x in X} sum_{y in Y} (1/q) W(y|x) log_2 [ W(y|x) / (sum_{x' in X} (1/q) W(y|x')) ].

Sub-channel i, W_N^{(i)}, is defined by W_N^{(i)}(y^N, u^{i-1} | u_i) = (1/q^{N-1}) sum_{u_{i+1}^N} W_N(y^N | u^N), where W_N(y^N | u^N) = prod_i W(y_i | (u^N G_N)_i).
Consider the following bit channels and their Bhattacharyya parameters. Fix k in {0, 1, ..., r}, and let

  W^{[r-k]}(y | u) = (1/2^k) sum_{x : (x^k, ..., x^{r-1}) = u} W(y | x),

where y in Y, u in {0, 1}^{r-k}, and (x^0, x^1, ..., x^{r-1}) is the binary representation of x in X. Define Z(W_{{x, x'}}) = sum_{y in Y} sqrt(W(y|x) W(y|x')); let Z_v(W) = (1/q) sum_{x in X} Z(W_{{x, x+v}}) for v in X \ {0}, and let Z_i(W) = (1/2^i) sum_{v in X_i} Z_v(W), where i = 0, 1, ..., r-1 and X_i = {v in X : i = max{j : v^j != 0}} with (v^0, v^1, ..., v^{r-1}) the binary representation of v.

For i in {0, ..., r-1} and j in {0, ..., N-1}, Z_i(W_N^{(j)}) converges to a limit Z_{i,inf} in {0, 1}, and with probability one (Z_{0,inf}, Z_{1,inf}, ..., Z_{r-1,inf}) takes one of the r+1 values (1, ..., 1, 0, ..., 0) with k leading ones, k = 0, 1, ..., r; this is Theorem 1.b of [10].

We write j in A_{k,N} iff (Z_0(W_N^{(j)}), Z_1(W_N^{(j)}), ..., Z_{r-1}(W_N^{(j)})) in R_k(epsilon), where

  R_k(epsilon) = (prod_{i=0}^{k-1} D_1) x (prod_{i=k}^{r-1} D_0), with D_0 = [0, epsilon) and D_1 = (1 - epsilon, 1].

The channel polarizes in the sense that, with k_i defined by i in A_{k_i,N}, (1/N) sum_{i=0}^{N-1} (r - k_i) -> I(W) as N -> inf.

For u^N in X^N, we also use its binary representation u = (u_{I(0,0)}, ..., u_{I(0,r-1)}, ..., u_{I(N-1,0)}, ..., u_{I(N-1,r-1)}) in {0, 1}^{Nr}, where (u_{I(i,0)}, ..., u_{I(i,r-1)}) is the binary representation of u_i in X and I(i, j) = i r + j. With i in A_{k_i,N}, the frozen bit set is F = {I(i, j) : i in {0, ..., N-1}, j in {0, ..., k_i - 1}}, a subset of {0, ..., Nr-1}, and the frozen bits are u_F in {0, 1}^{|F|}, the subvector (u_i : i in F). Finally, the polar code C_N(F, u_F), with F a subset of {0, ..., Nr-1} and u_F in {0, 1}^{|F|}, is the linear code C_N(F, u_F) = {x^N = u^N G_N : u_{F^c} in {0, 1}^{|F^c|}}, and the polar code ensemble is C_N(F) = {C_N(F, u_F) : u_F in {0, 1}^{|F|}}.

2) The code construction: We focus on the q-ary WEM code construction with a symmetric rewriting cost function, which satisfies phi(x, y) = phi(x + z, y + z) for all x, y, z in {0, 1, ..., q-1}, where + is over GF(q). As in the binary case, to construct codes for WEM achieving R_s(q, D) we utilize its related form R'_s(q, D) from Lemma 5 and its test channel W(y|x). The code construction is presented in Algorithm IV.1.

Algorithm IV.1 A code construction for (N, M, q, D)_ave WEM
1: With k_i defined by i in A_{k_i,N}, set F = {I(i, j) : i in {0, ..., N-1}, j in {0, ..., k_i - 1}}.
2: The (N, M, q, D)_ave code is C = {C_i : C_i = C_N(F, u_F(i))}, where u_F(i) is the binary representation of i for i in {0, 1, ..., M-1}.
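The index map I(i, j) = i r + j used in Algorithm IV.1 amounts to serializing each q-ary symbol into r bits. A minimal sketch (the LSB-first bit order within a symbol is an assumption; the paper does not fix the endianness):

```python
def symbols_to_bits(u, r):
    """Serialize q-ary symbols (q = 2^r): bit j of symbol i lands at I(i, j) = i*r + j."""
    return [(sym >> j) & 1 for sym in u for j in range(r)]

def bits_to_symbols(bits, r):
    """Inverse map: regroup r consecutive bits into one symbol."""
    return [sum(bits[i * r + j] << j for j in range(r)) for i in range(len(bits) // r)]

u = [5, 0, 7, 2]                     # q = 8, so r = 3 bits per symbol
bits = symbols_to_bits(u, 3)
print(bits)                          # -> [1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0]
assert bits_to_symbols(bits, 3) == u
```

Freezing the first k_i bits of symbol i then means fixing positions I(i, 0), ..., I(i, k_i - 1) in this serialized vector.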
The WEM code rate is R = |F|/N bits per cell, and the polar code rate is R' = |F^c|/(Nr). When R' approaches its optimum, R approaches R_s(q, D) by Lemma 5. The rewriting operation and the decoding operation are defined in Algorithm IV.2 and Algorithm IV.3, respectively, where the SC encoding is a generalization of q-ary lossy source coding and the dither g^N is inspired by [3] to make v^N uniformly distributed.

Algorithm IV.2 The rewriting operation y^N = R(x^N, i)
1: Let v^N = x^N + g^N, where the dither g^N is known both to the decoding function and to the rewriting function and is chosen such that v^N is uniformly distributed; + is over GF(q).
2: SC encode v^N into u-hat^N = U-hat(v^N, u_F(i)): for each j from 0 to N-1,

  u-hat_j = u_j if j in A_{r,N}, and u-hat_j = m with posterior probability P(m | u-hat^{j-1}, v^N) otherwise,

where m in {0, 1, ..., q-1}, and if j in A_{k,N} for k < r, the first k bits of u-hat_j are fixed. Let y-hat^N = u-hat^N G_N.
3: y^N = y-hat^N - g^N, where - is over GF(q).

Algorithm IV.3 The decoding operation u_F(i) = D(x^N)
1: y^N = x^N + g^N.
2: u_F(i) = (y^N G_N^{-1})_F.

The correctness of the above rewriting function can be verified similarly to Lemma 6.

3) The average rewriting cost analysis: Similar to the analysis of equation (4), we obtain

  D-bar = sum_{w^N} pi(w^N) (1/M) sum_j D_j(w^N).

In the following we first focus on D_j(w^N). Note that u-hat^N = U-hat(v^N, u_F(j)) is random, i.e., the SC encoding may produce different outputs for the same input. More precisely, in step i of the SC encoding process, for i in the union of A_{k,N} over k < r, u-hat_i = m with posterior probability P(m | u-hat^{i-1}, v^N), where if i in A_{k,N} the first k bits of u-hat_i are fixed and known. This implies that the probability of picking a vector u^N given v^N equals 0 if u^N disagrees with the frozen values on A_{r,N} (or with the fixed bits), and prod_{i in union_{k<r} A_{k,N}} P(u_i | u^{i-1}, v^N) otherwise.
Therefore, the average rewriting cost (over the probability of rewriting to data j and the randomness of the encoder) of updating w^N to a codeword representing j is

  D_j(w^N) = sum_{u_{F^c}} prod_{i in union_{k<r} A_{k,N}} P(u_i | u^{i-1}, w^N) (1/N) phi(w^N, u^N G_N),

where u_F = u_F(j), u_{F^c} in {0, 1}^{|F^c|}, and the summation over u_{F^c} accounts for A_{r,N} and for the fact that for i in A_{k,N} the first k bits of u_i are fixed. Thus we obtain

  D-bar = sum_{w^N} pi(w^N) (1/M) sum_j D_j(w^N)
        = sum_{w^N} pi(w^N) (1/2^{|F|}) sum_{u_F} sum_{u_{F^c}} prod_{i in union_{k<r} A_{k,N}} P(u_i | u^{i-1}, w^N) (1/N) phi(w^N, u^N G_N)
        < sum_{w^N} pi(w^N) sum_{u^N} (1/q^{|A_{r,N}|}) prod_{i in union_{k<r} A_{k,N}} P(u_i | u^{i-1}, w^N) (1/N) phi(w^N, u^N G_N).    (5)

Let Q_{U,W} denote the distribution defined by Q_W(w^N) = pi(w^N) and

  Q(u_i | u^{i-1}, w^N) = 1/q if i in A_{r,N}, and P(u_i | u^{i-1}, w^N) otherwise.

Then inequality (5) is equivalent to D-bar < E_Q((1/N) phi(w^N, u^N G_N)), where E_Q(.) denotes expectation with respect to Q_{U,W}. Similarly, let E_P(.) denote expectation with respect to P_{U,W}. The following three lemmas are proved in [7] for q = 2 and in [6] for prime q; they extend trivially to q = 2^r, and we omit their proofs.

Lemma 10. sum_{w^N, u^N} |Q(w^N, u^N) - P(w^N, u^N)| <= sum_{i in A_{r,N}} E_P( sum_{u_i} |1/q - P(u_i | u^{i-1}, w^N)| ).
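The key analytic tool behind these bounds is Pinsker's inequality, ||P - Q||_1 <= sqrt(2 D(P||Q)) with the KL divergence in nats; a quick randomized sanity check:

```python
import math
import random

def kl_nats(P, Q):
    """Kullback-Leibler divergence D(P||Q) in nats."""
    return sum(p * math.log(p / q) for p, q in zip(P, Q) if p > 0)

def l1_dist(P, Q):
    """L1 distance sum_i |P(i) - Q(i)| (twice the total variation)."""
    return sum(abs(p - q) for p, q in zip(P, Q))

random.seed(1)
for _ in range(1000):
    raw_p = [random.random() + 1e-9 for _ in range(4)]
    raw_q = [random.random() + 1e-9 for _ in range(4)]
    P = [v / sum(raw_p) for v in raw_p]
    Q = [v / sum(raw_q) for v in raw_q]
    assert l1_dist(P, Q) <= math.sqrt(2 * kl_nats(P, Q)) + 1e-12
print("Pinsker bound held on 1000 random distribution pairs")
```

In the lemmas above, the divergence to the uniform distribution is further bounded by the sub-channel mutual information, which is what connects the total-variation terms to the polarization sets A_{k,N}.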
7 Lemma. Let F be chosen such that for i A r,n, q u i E P ( q P (u i u i, w ) ) σ. Then, the average rewriting cost is bounded by E Q(ϕ(w, u G )) E P (ϕ(w, u G )) + A r,n d max σ, where d max def max x,y ϕ(x, y). Lemma 2. E P (ϕ(w, u G )) D. The following lemma, which is a modification of Lemma 5 [6], presents that it is sufficient to choose the set F as those bits with indexes I(i, j) for which i {,,..., }, j {,,..., k i } with i A ki,n. Lemma 3. If i A r,n, that is (Z (W (i) ),..., Z r (W (i) )) R r(ɛ), and let ɛ r ɛ, then q u i E P ( q P (u i u i, w ) ) 2ɛ. Proof: By Pinsker s inequality, for two distribution functions P and Q defined on X, P Q 2D(P Q), where D(P Q) is the Kullback-Leibler divergence between two distributions, that is D(P Q) i log 2( P (i) )P (i), we obtain that: q u i Q(i) E P q P (u i u i, w ) P (u i) q w,u i P (u i )P (u i, w ) P (u i u i, w )P (u i, w ), P (u i )P (u i, w ) w,u i P (u i, w ), 2 log 2 w,u i P (u i, w ), 2I(W (i) ), 2ɛ, P (u i, w ) P (u i )P (u i, w ) where the last inequality is based on the conclusion of Lemma [], that is if (Z (W (i) ),..., Z r (W (i) )) R k (ɛ) k,,..., r, then I(W (i) ) (r k) γ with γ max(k ɛ, (2 r k )ɛ log 2 e). Therefore, the performance of the above polar WEM code can be concluded as follows: Theorem 4. For a q-ary (q 2 r ) symmetric rewriting cost function ϕ : X X R +. Fix a rewriting cost D and < β < 2. For any rate R < Rs (q, D), there exists a sequence of polar WEM codes of length and rate R R, so that under the above rewriting operation, D satisfies D D+O(2 β ). The decoding and rewriting operation complexity of the codes is O( log ). B. A code construction for q-ary WEM with a maximal rewriting cost constraint, q 2 r Similarly, the code construction, the rewriting operation and the decoding operation are exactly the same as Algorithm IV., Algorithm IV.2, and Algorithm IV.3, respectively. ext, we mainly focus on its performance. Theorem 5. 
Theorem 15. Let phi : X x X -> R^+ be a q-ary (q = 2^r) symmetric rewriting cost function. Fix a rewriting cost D, delta > 0, and 0 < beta < 1/2. For any rate R < R_s(q, D), there exists a sequence of polar WEM codes of length N and rate R_N >= R such that, under the above rewriting operation and the induced probability distribution Q, the rewriting cost between a current codeword y^N and its updated codeword x^N satisfies Q((1/N) phi(y^N, x^N) >= D + delta) < 2^{-N^beta}. The decoding and rewriting operation complexity of the codes is O(N log N).

Proof: We focus on the rewriting cost analysis. This part of the proof is based on epsilon-strongly typical sequences [4] and on Theorems 4 and 5 of [3]; we give a sketch.

Recall that (x^N, y^N) in X^N x Y^N is epsilon-strongly typical with respect to p(x, y) over X x Y, denoted (x^N, y^N) in A_epsilon^{(N)}(X, Y), if the following hold, where C(a, b | x^N, y^N) denotes the number of indices i at which (x_i, y_i) = (a, b). First, for all (a, b) in X x Y with p(a, b) > 0, |C(a, b | x^N, y^N)/N - p(a, b)| < epsilon. Second, for all (a, b) with p(a, b) = 0, C(a, b | x^N, y^N) = 0.

In our case y^N = u^N G_N, and due to the full rank of G_N there is a one-to-one correspondence between u^N and y^N. We say that (u^N, x^N) in A_epsilon^{(N)}(U, X) if (x^N, u^N G_N) in A_epsilon^{(N)}(X, Y) with respect to (1/q) W(y|x), where W(y|x) is the test channel.

The first conclusion is that, for sufficiently large N, Q(A_epsilon^{(N)}(U, X)) > 1 - 2^{-N^beta} for 0 < beta < 1/2 and epsilon > 0, which is a generalization of Theorem 4 of [3], and where Q(A_epsilon^{(N)}(U, X)) = Q(for all a, b:
|C(a, b | u^N G_N, x^N)/N - (1/q) W(a|b)| <= epsilon). The outline is that, based on Lemma 10 and Lemma 11, we obtain

  sum_{(u^N, x^N) in A_epsilon^{(N)}(U,X)} |Q(u^N, x^N) - P(u^N, x^N)| <= |A_{r,N}| sigma_N d_max,

and thus

  Q(A_epsilon^{(N)}(U, X)) >= P(A_epsilon^{(N)}(U, X)) - |A_{r,N}| sigma_N d_max.

By lower bounding

  P(A_epsilon^{(N)}(U, X)) = P(for all a, b : |C(a, b | u^N G_N, x^N)/N - (1/q) W(a|b)| <= epsilon) >= 1 - 2 q^2 e^{-2 epsilon^2 N}

based on Hoeffding's inequality, we obtain the desired result by setting sigma_N = 2^{-N^beta} / (2 d_max).

The second conclusion is that, with y^N = R(x^N, i), for all delta > 0, 0 < beta < 1/2 and N sufficiently large, Q((1/N) phi(y^N, x^N) >= D + delta) < 2^{-N^beta}, which is a generalization of Theorem 5 of [3]. The outline is that

  Q((1/N) phi(x^N, y^N) >= D + delta)
    <= Q((1/N) phi(x^N, y^N) >= D + delta | (x^N, y^N) in A_epsilon^{(N)}(X, Y)) + Q((x^N, y^N) not in A_epsilon^{(N)}(X, Y))
    <= 2^{-N^beta},

where the last inequality is based on the conclusion just obtained, Q((x^N, y^N) not in A_epsilon^{(N)}(X, Y)) < 2^{-N^beta}, and on the fact that when (x^N, y^N) in A_epsilon^{(N)}(X, Y), for epsilon sufficiently small and N sufficiently large, (1/N) phi(y^N, x^N) <= D + delta.

V. CONCLUSION

Code constructions for WEM using recently proposed polar codes have been presented. Future work will explore error-correcting codes for WEM.

VI. ACKNOWLEDGMENT

This work was supported in part by the National Science Foundation (NSF) CAREER Award CCF and the NSF Grant CCF.

REFERENCES

[1] R. Ahlswede and Z. Zhang, "Coding for write-efficient memory," Inform. Comput., vol. 83, no. 1, pp. 80-97, Oct. 1989.
[2] E. Arikan, "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051-3073, July 2009.
[3] D. Burshtein and A. Strugatski, "Polar write once memory codes," in Proc. IEEE Int. Symp. Information Theory (ISIT), Boston, MA, July 2012.
[4] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley, 1991.
[5] F. Fu and R. W. Yeung, "On the capacity and error-correcting codes of write-efficient memories," IEEE Trans. Inf. Theory, vol. 46, no. 7, pp. 2299-2314, Nov. 2000.
[6] M. Karzand and E. Telatar, "Polar codes for q-ary source coding," in Proc. IEEE Int. Symp. Information Theory (ISIT), Austin, TX, June 2010.
[7] S. B. Korada and R. Urbanke, "Polar codes are optimal for lossy source coding," IEEE Trans. Inf. Theory, vol. 56, no. 4, pp. 1751-1768, April 2010.
[8] L. A. Lastras-Montano, M. Franceschini, T. Mittelholzer, J. Karidis, and M. Wegman, "On the lifetime of multilevel memories," in Proc. IEEE Int. Symp. Information Theory (ISIT), Seoul, Korea, 2009.
[9] Q. Li, "Compressed rank modulation," in Proc. 50th Annual Allerton Conference on Communication, Control and Computing (Allerton), Monticello, IL, October 2012.
[10] W. Park and A. Barg, "Polar codes for q-ary channels, q = 2^r," in Proc. IEEE Int. Symp. Information Theory (ISIT), Boston, MA, June 2012.
[11] R. L. Rivest and A. Shamir, "How to reuse a write-once memory," Inform. Control, vol. 55, pp. 1-19, 1982.
More informationConstruction of Polar Codes with Sublinear Complexity
1 Construction of Polar Codes with Sublinear Complexity Marco Mondelli, S. Hamed Hassani, and Rüdiger Urbanke arxiv:1612.05295v4 [cs.it] 13 Jul 2017 Abstract Consider the problem of constructing a polar
More informationWrite Once Memory Codes and Lattices for Flash Memories
Write Once Memory Codes and Lattices for Flash Memories Japan Advanced Institute of Science and Technology September 19, 2012 Seoul National University Japan Advanced Institute of Science and Technology
More informationSHARED INFORMATION. Prakash Narayan with. Imre Csiszár, Sirin Nitinawarat, Himanshu Tyagi, Shun Watanabe
SHARED INFORMATION Prakash Narayan with Imre Csiszár, Sirin Nitinawarat, Himanshu Tyagi, Shun Watanabe 2/40 Acknowledgement Praneeth Boda Himanshu Tyagi Shun Watanabe 3/40 Outline Two-terminal model: Mutual
More informationFlip-N-Write: A Simple Deterministic Technique to Improve PRAM Write Performance, Energy and Endurance. Presenter: Brian Wongchaowart March 17, 2010
Flip-N-Write: A Simple Deterministic Technique to Improve PRAM Write Performance, Energy and Endurance Sangyeun Cho Hyunjin Lee Presenter: Brian Wongchaowart March 17, 2010 Motivation Suppose that you
More informationJoint Coding for Flash Memory Storage
ISIT 8, Toronto, Canada, July 6-11, 8 Joint Coding for Flash Memory Storage Anxiao (Andrew) Jiang Computer Science Department Texas A&M University College Station, TX 77843, U.S.A. ajiang@cs.tamu.edu Abstract
More informationGeneralized Writing on Dirty Paper
Generalized Writing on Dirty Paper Aaron S. Cohen acohen@mit.edu MIT, 36-689 77 Massachusetts Ave. Cambridge, MA 02139-4307 Amos Lapidoth lapidoth@isi.ee.ethz.ch ETF E107 ETH-Zentrum CH-8092 Zürich, Switzerland
More informationMultiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets
Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets Shivaprasad Kotagiri and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame,
More informationTHE term rank modulation refers to the representation of information by permutations. This method
Rank-Modulation Rewrite Coding for Flash Memories Eyal En Gad, Eitan Yaakobi, Member, IEEE, Anxiao (Andrew) Jiang, Senior Member, IEEE, and Jehoshua Bruck, Fellow, IEEE 1 Abstract The current flash memory
More information4F5: Advanced Communications and Coding Handout 2: The Typical Set, Compression, Mutual Information
4F5: Advanced Communications and Coding Handout 2: The Typical Set, Compression, Mutual Information Ramji Venkataramanan Signal Processing and Communications Lab Department of Engineering ramji.v@eng.cam.ac.uk
More informationPractical Polar Code Construction Using Generalised Generator Matrices
Practical Polar Code Construction Using Generalised Generator Matrices Berksan Serbetci and Ali E. Pusane Department of Electrical and Electronics Engineering Bogazici University Istanbul, Turkey E-mail:
More informationPolar codes for the m-user MAC and matroids
Research Collection Conference Paper Polar codes for the m-user MAC and matroids Author(s): Abbe, Emmanuel; Telatar, Emre Publication Date: 2010 Permanent Link: https://doi.org/10.3929/ethz-a-005997169
More informationVariable-Rate Universal Slepian-Wolf Coding with Feedback
Variable-Rate Universal Slepian-Wolf Coding with Feedback Shriram Sarvotham, Dror Baron, and Richard G. Baraniuk Dept. of Electrical and Computer Engineering Rice University, Houston, TX 77005 Abstract
More informationECE598: Information-theoretic methods in high-dimensional statistics Spring 2016
ECE598: Information-theoretic methods in high-dimensional statistics Spring 06 Lecture : Mutual Information Method Lecturer: Yihong Wu Scribe: Jaeho Lee, Mar, 06 Ed. Mar 9 Quick review: Assouad s lemma
More informationAn instantaneous code (prefix code, tree code) with the codeword lengths l 1,..., l N exists if and only if. 2 l i. i=1
Kraft s inequality An instantaneous code (prefix code, tree code) with the codeword lengths l 1,..., l N exists if and only if N 2 l i 1 Proof: Suppose that we have a tree code. Let l max = max{l 1,...,
More informationA Formula for the Capacity of the General Gel fand-pinsker Channel
A Formula for the Capacity of the General Gel fand-pinsker Channel Vincent Y. F. Tan Institute for Infocomm Research (I 2 R, A*STAR, Email: tanyfv@i2r.a-star.edu.sg ECE Dept., National University of Singapore
More informationCut-Set Bound and Dependence Balance Bound
Cut-Set Bound and Dependence Balance Bound Lei Xiao lxiao@nd.edu 1 Date: 4 October, 2006 Reading: Elements of information theory by Cover and Thomas [1, Section 14.10], and the paper by Hekstra and Willems
More informationPerformance of Polar Codes for Channel and Source Coding
Performance of Polar Codes for Channel and Source Coding Nadine Hussami AUB, Lebanon, Email: njh03@aub.edu.lb Satish Babu Korada and üdiger Urbanke EPFL, Switzerland, Email: {satish.korada,ruediger.urbanke}@epfl.ch
More informationOptimality of Walrand-Varaiya Type Policies and. Approximation Results for Zero-Delay Coding of. Markov Sources. Richard G. Wood
Optimality of Walrand-Varaiya Type Policies and Approximation Results for Zero-Delay Coding of Markov Sources by Richard G. Wood A thesis submitted to the Department of Mathematics & Statistics in conformity
More informationRemote Source Coding with Two-Sided Information
Remote Source Coding with Two-Sided Information Basak Guler Ebrahim MolavianJazi Aylin Yener Wireless Communications and Networking Laboratory Department of Electrical Engineering The Pennsylvania State
More informationSOURCE CODING WITH SIDE INFORMATION AT THE DECODER (WYNER-ZIV CODING) FEB 13, 2003
SOURCE CODING WITH SIDE INFORMATION AT THE DECODER (WYNER-ZIV CODING) FEB 13, 2003 SLEPIAN-WOLF RESULT { X i} RATE R x ENCODER 1 DECODER X i V i {, } { V i} ENCODER 0 RATE R v Problem: Determine R, the
More informationLattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function
Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function Dinesh Krithivasan and S. Sandeep Pradhan Department of Electrical Engineering and Computer Science,
More informationHypothesis Testing with Communication Constraints
Hypothesis Testing with Communication Constraints Dinesh Krithivasan EECS 750 April 17, 2006 Dinesh Krithivasan (EECS 750) Hyp. testing with comm. constraints April 17, 2006 1 / 21 Presentation Outline
More informationArimoto Channel Coding Converse and Rényi Divergence
Arimoto Channel Coding Converse and Rényi Divergence Yury Polyanskiy and Sergio Verdú Abstract Arimoto proved a non-asymptotic upper bound on the probability of successful decoding achievable by any code
More information5 Mutual Information and Channel Capacity
5 Mutual Information and Channel Capacity In Section 2, we have seen the use of a quantity called entropy to measure the amount of randomness in a random variable. In this section, we introduce several
More informationMulti-Kernel Polar Codes: Proof of Polarization and Error Exponents
Multi-Kernel Polar Codes: Proof of Polarization and Error Exponents Meryem Benammar, Valerio Bioglio, Frédéric Gabry, Ingmar Land Mathematical and Algorithmic Sciences Lab Paris Research Center, Huawei
More informationX 1 : X Table 1: Y = X X 2
ECE 534: Elements of Information Theory, Fall 200 Homework 3 Solutions (ALL DUE to Kenneth S. Palacio Baus) December, 200. Problem 5.20. Multiple access (a) Find the capacity region for the multiple-access
More informationError-Correcting Schemes with Dynamic Thresholds in Nonvolatile Memories
2 IEEE International Symposium on Information Theory Proceedings Error-Correcting Schemes with Dynamic Thresholds in Nonvolatile Memories Hongchao Zhou Electrical Engineering Department California Institute
More informationDispersion of the Gilbert-Elliott Channel
Dispersion of the Gilbert-Elliott Channel Yury Polyanskiy Email: ypolyans@princeton.edu H. Vincent Poor Email: poor@princeton.edu Sergio Verdú Email: verdu@princeton.edu Abstract Channel dispersion plays
More informationHigh Sum-Rate Three-Write and Non-Binary WOM Codes
Submitted to the IEEE TRANSACTIONS ON INFORMATION THEORY, 2012 1 High Sum-Rate Three-Write and Non-Binary WOM Codes Eitan Yaakobi, Amir Shpilka Abstract Write-once memory (WOM) is a storage medium with
More informationLecture 4 Noisy Channel Coding
Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem
More informationEquivalence for Networks with Adversarial State
Equivalence for Networks with Adversarial State Oliver Kosut Department of Electrical, Computer and Energy Engineering Arizona State University Tempe, AZ 85287 Email: okosut@asu.edu Jörg Kliewer Department
More informationOn the Duality between Multiple-Access Codes and Computation Codes
On the Duality between Multiple-Access Codes and Computation Codes Jingge Zhu University of California, Berkeley jingge.zhu@berkeley.edu Sung Hoon Lim KIOST shlim@kiost.ac.kr Michael Gastpar EPFL michael.gastpar@epfl.ch
More informationLecture 2: August 31
0-704: Information Processing and Learning Fall 206 Lecturer: Aarti Singh Lecture 2: August 3 Note: These notes are based on scribed notes from Spring5 offering of this course. LaTeX template courtesy
More informationLecture 6 I. CHANNEL CODING. X n (m) P Y X
6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder
More informationShannon s Noisy-Channel Coding Theorem
Shannon s Noisy-Channel Coding Theorem Lucas Slot Sebastian Zur February 13, 2015 Lucas Slot, Sebastian Zur Shannon s Noisy-Channel Coding Theorem February 13, 2015 1 / 29 Outline 1 Definitions and Terminology
More informationSolutions to Homework Set #1 Sanov s Theorem, Rate distortion
st Semester 00/ Solutions to Homework Set # Sanov s Theorem, Rate distortion. Sanov s theorem: Prove the simple version of Sanov s theorem for the binary random variables, i.e., let X,X,...,X n be a sequence
More informationAn Achievable Error Exponent for the Mismatched Multiple-Access Channel
An Achievable Error Exponent for the Mismatched Multiple-Access Channel Jonathan Scarlett University of Cambridge jms265@camacuk Albert Guillén i Fàbregas ICREA & Universitat Pompeu Fabra University of
More informationCompression in the Space of Permutations
Compression in the Space of Permutations Da Wang Arya Mazumdar Gregory Wornell EECS Dept. ECE Dept. Massachusetts Inst. of Technology Cambridge, MA 02139, USA {dawang,gww}@mit.edu University of Minnesota
More informationELEMENT OF INFORMATION THEORY
History Table of Content ELEMENT OF INFORMATION THEORY O. Le Meur olemeur@irisa.fr Univ. of Rennes 1 http://www.irisa.fr/temics/staff/lemeur/ October 2010 1 History Table of Content VERSION: 2009-2010:
More informationChannel Polarization and Blackwell Measures
Channel Polarization Blackwell Measures Maxim Raginsky Abstract The Blackwell measure of a binary-input channel (BIC is the distribution of the posterior probability of 0 under the uniform input distribution
More informationOn Function Computation with Privacy and Secrecy Constraints
1 On Function Computation with Privacy and Secrecy Constraints Wenwen Tu and Lifeng Lai Abstract In this paper, the problem of function computation with privacy and secrecy constraints is considered. The
More informationThe Capacity of the Semi-Deterministic Cognitive Interference Channel and its Application to Constant Gap Results for the Gaussian Channel
The Capacity of the Semi-Deterministic Cognitive Interference Channel and its Application to Constant Gap Results for the Gaussian Channel Stefano Rini, Daniela Tuninetti, and Natasha Devroye Department
More informationThe sequential decoding metric for detection in sensor networks
The sequential decoding metric for detection in sensor networks B. Narayanaswamy, Yaron Rachlin, Rohit Negi and Pradeep Khosla Department of ECE Carnegie Mellon University Pittsburgh, PA, 523 Email: {bnarayan,rachlin,negi,pkk}@ece.cmu.edu
More informationProblemsWeCanSolveWithaHelper
ITW 2009, Volos, Greece, June 10-12, 2009 ProblemsWeCanSolveWitha Haim Permuter Ben-Gurion University of the Negev haimp@bgu.ac.il Yossef Steinberg Technion - IIT ysteinbe@ee.technion.ac.il Tsachy Weissman
More informationNational University of Singapore Department of Electrical & Computer Engineering. Examination for
National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:
More informationThe Compound Capacity of Polar Codes
The Compound Capacity of Polar Codes S. Hamed Hassani, Satish Babu Korada and Rüdiger Urbanke arxiv:97.329v [cs.it] 9 Jul 29 Abstract We consider the compound capacity of polar codes under successive cancellation
More informationThe Poisson Channel with Side Information
The Poisson Channel with Side Information Shraga Bross School of Enginerring Bar-Ilan University, Israel brosss@macs.biu.ac.il Amos Lapidoth Ligong Wang Signal and Information Processing Laboratory ETH
More informationInteractive Hypothesis Testing with Communication Constraints
Fiftieth Annual Allerton Conference Allerton House, UIUC, Illinois, USA October - 5, 22 Interactive Hypothesis Testing with Communication Constraints Yu Xiang and Young-Han Kim Department of Electrical
More informationDistributed Source Coding Using LDPC Codes
Distributed Source Coding Using LDPC Codes Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete May 29, 2010 Telecommunications Laboratory (TUC) Distributed Source Coding
More informationA Simple Converse Proof and a Unified Capacity Formula for Channels with Input Constraints
A Simple Converse Proof and a Unified Capacity Formula for Channels with Input Constraints Youjian Eugene Liu Department of Electrical and Computer Engineering University of Colorado at Boulder eugeneliu@ieee.org
More informationCapacity of a channel Shannon s second theorem. Information Theory 1/33
Capacity of a channel Shannon s second theorem Information Theory 1/33 Outline 1. Memoryless channels, examples ; 2. Capacity ; 3. Symmetric channels ; 4. Channel Coding ; 5. Shannon s second theorem,
More information(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute
ENEE 739C: Advanced Topics in Signal Processing: Coding Theory Instructor: Alexander Barg Lecture 6 (draft; 9/6/03. Error exponents for Discrete Memoryless Channels http://www.enee.umd.edu/ abarg/enee739c/course.html
More informationInformation Theory. Coding and Information Theory. Information Theory Textbooks. Entropy
Coding and Information Theory Chris Williams, School of Informatics, University of Edinburgh Overview What is information theory? Entropy Coding Information Theory Shannon (1948): Information theory is
More informationThe Communication Complexity of Correlation. Prahladh Harsha Rahul Jain David McAllester Jaikumar Radhakrishnan
The Communication Complexity of Correlation Prahladh Harsha Rahul Jain David McAllester Jaikumar Radhakrishnan Transmitting Correlated Variables (X, Y) pair of correlated random variables Transmitting
More information18.2 Continuous Alphabet (discrete-time, memoryless) Channel
0-704: Information Processing and Learning Spring 0 Lecture 8: Gaussian channel, Parallel channels and Rate-distortion theory Lecturer: Aarti Singh Scribe: Danai Koutra Disclaimer: These notes have not
More informationCapacity Region of Reversely Degraded Gaussian MIMO Broadcast Channel
Capacity Region of Reversely Degraded Gaussian MIMO Broadcast Channel Jun Chen Dept. of Electrical and Computer Engr. McMaster University Hamilton, Ontario, Canada Chao Tian AT&T Labs-Research 80 Park
More informationConstructing Polar Codes Using Iterative Bit-Channel Upgrading. Arash Ghayoori. B.Sc., Isfahan University of Technology, 2011
Constructing Polar Codes Using Iterative Bit-Channel Upgrading by Arash Ghayoori B.Sc., Isfahan University of Technology, 011 A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree
More informationMultiterminal Source Coding with an Entropy-Based Distortion Measure
Multiterminal Source Coding with an Entropy-Based Distortion Measure Thomas Courtade and Rick Wesel Department of Electrical Engineering University of California, Los Angeles 4 August, 2011 IEEE International
More informationOn the Rate-Limited Gelfand-Pinsker Problem
On the Rate-Limited Gelfand-Pinsker Problem Ravi Tandon Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 74 ravit@umd.edu ulukus@umd.edu Abstract
More informationConvexity/Concavity of Renyi Entropy and α-mutual Information
Convexity/Concavity of Renyi Entropy and -Mutual Information Siu-Wai Ho Institute for Telecommunications Research University of South Australia Adelaide, SA 5095, Australia Email: siuwai.ho@unisa.edu.au
More informationUniversal Incremental Slepian-Wolf Coding
Proceedings of the 43rd annual Allerton Conference, Monticello, IL, September 2004 Universal Incremental Slepian-Wolf Coding Stark C. Draper University of California, Berkeley Berkeley, CA, 94720 USA sdraper@eecs.berkeley.edu
More informationEntropies & Information Theory
Entropies & Information Theory LECTURE I Nilanjana Datta University of Cambridge,U.K. See lecture notes on: http://www.qi.damtp.cam.ac.uk/node/223 Quantum Information Theory Born out of Classical Information
More informationMultidimensional Flash Codes
Multidimensional Flash Codes Eitan Yaakobi, Alexander Vardy, Paul H. Siegel, and Jack K. Wolf University of California, San Diego La Jolla, CA 9093 0401, USA Emails: eyaakobi@ucsd.edu, avardy@ucsd.edu,
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science Transmission of Information Spring 2006
MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.44 Transmission of Information Spring 2006 Homework 2 Solution name username April 4, 2006 Reading: Chapter
More informationOn the Simulatability Condition in Key Generation Over a Non-authenticated Public Channel
On the Simulatability Condition in Key Generation Over a Non-authenticated Public Channel Wenwen Tu and Lifeng Lai Department of Electrical and Computer Engineering Worcester Polytechnic Institute Worcester,
More informationOn Scalable Coding in the Presence of Decoder Side Information
On Scalable Coding in the Presence of Decoder Side Information Emrah Akyol, Urbashi Mitra Dep. of Electrical Eng. USC, CA, US Email: {eakyol, ubli}@usc.edu Ertem Tuncel Dep. of Electrical Eng. UC Riverside,
More informationCapacity achieving multiwrite WOM codes
Capacity achieving multiwrite WOM codes Amir Shpilka Abstract arxiv:1209.1128v1 [cs.it] 5 Sep 2012 In this paper we give an explicit construction of a capacity achieving family of binary t-write WOM codes
More informationECEN 655: Advanced Channel Coding
ECEN 655: Advanced Channel Coding Course Introduction Henry D. Pfister Department of Electrical and Computer Engineering Texas A&M University ECEN 655: Advanced Channel Coding 1 / 19 Outline 1 History
More informationBioinformatics: Biology X
Bud Mishra Room 1002, 715 Broadway, Courant Institute, NYU, New York, USA Model Building/Checking, Reverse Engineering, Causality Outline 1 Bayesian Interpretation of Probabilities 2 Where (or of what)
More informationLayered Index-less Indexed Flash Codes for Improving Average Performance
Layered Index-less Indexed Flash Codes for Improving Average Performance Riki Suzuki, Tadashi Wadayama Department of Computer Science, Nagoya Institute of Technology Email: wadayama@nitech.ac.jp arxiv:1102.3513v1
More information2012 IEEE International Symposium on Information Theory Proceedings
Decoding of Cyclic Codes over Symbol-Pair Read Channels Eitan Yaakobi, Jehoshua Bruck, and Paul H Siegel Electrical Engineering Department, California Institute of Technology, Pasadena, CA 9115, USA Electrical
More informationReliable Computation over Multiple-Access Channels
Reliable Computation over Multiple-Access Channels Bobak Nazer and Michael Gastpar Dept. of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA, 94720-1770 {bobak,
More informationNoisy channel communication
Information Theory http://www.inf.ed.ac.uk/teaching/courses/it/ Week 6 Communication channels and Information Some notes on the noisy channel setup: Iain Murray, 2012 School of Informatics, University
More informationEE5139R: Problem Set 7 Assigned: 30/09/15, Due: 07/10/15
EE5139R: Problem Set 7 Assigned: 30/09/15, Due: 07/10/15 1. Cascade of Binary Symmetric Channels The conditional probability distribution py x for each of the BSCs may be expressed by the transition probability
More information