Coding for a Non-symmetric Ternary Channel


Nicolas Bitouzé and Alexandre Graell i Amat
Department of Electronics, Institut TELECOM-TELECOM Bretagne, 29238 Brest, France
nicolas.bitouze@telecom-bretagne.eu, alexandre.graell@telecom-bretagne.eu

Abstract—Non-symmetric ternary channels can be used to model the behavior of some memory devices. In this work, error correction coding for a non-symmetric ternary channel where some of the error transitions are not allowed is considered. We study distance properties of ternary codes over this channel and define the maximum likelihood (ML) decoding rule. It is shown that the ML decoding rule is too complex, since it depends on the channel error probability. A simpler alternative decoding rule, called d_A-decoding, is then proposed. It is shown that d_A-decoding and ML decoding are equivalent for values of p under a certain threshold. Assuming d_A-decoding, we characterize the error correction capabilities of ternary codes over the non-symmetric ternary channel. We also provide an upper bound and a constructive lower bound on the size of such codes given the code length and the minimum distance.

I. INTRODUCTION

Electrically erasable programmable read-only memories (EEPROMs) are semiconductor memories that retain their data contents when power is off. They can be read and written to like standard RAM, and they are suitable for applications where storage of small amounts of data is critical and periodic writing of new data is required. Typical applications are radio frequency identification tags, smart dust, and automotive applications including car audio and multimedia, chassis and safety, and power train. The communication channel underlying EEPROMs can be suitably modeled as a binary symmetric channel (BSC). Currently, very simple error correction codes based on the well-known Hamming codes, combined with hard decoding, are implemented on-chip to correct single bit errors [1]. However, next-generation devices have more stringent requirements in terms of reliability as well as storage density. A suitable modification of the physics of EEPROM memories allows the information to be stored in three levels, so that higher densities can be achieved. A suitable model of the resulting channel is the discrete memoryless non-symmetric ternary channel depicted in Fig. 1, denoted by H. The channel is characterized by an input alphabet X = {0, 1, 2}, an output alphabet Y = {0, 1, 2}, and a set of conditional probabilities P(y|x) such that the symbol 0 is received correctly with probability 1 − p and received as a 1 or a 2 with crossover probability p/2 each, while the symbols 1 and 2 are received as a 0 with crossover probability p/2 and received correctly with probability 1 − p/2. Hence, the transitions 1 → 2 and 2 → 1 are not allowed.

A. Graell i Amat was funded by a Marie Curie Intra-European Fellowship within the 6th European Community Framework Programme.

Fig. 1. Non-symmetric ternary channel H.
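To make the channel model concrete, the following minimal simulation sketch (ours; the function name channel_h is not from the paper) draws one use of H:

```python
import random

def channel_h(x, p):
    """One use of the non-symmetric ternary channel H of Fig. 1.

    0 is kept with probability 1 - p and becomes 1 or 2 with probability
    p/2 each; 1 and 2 are kept with probability 1 - p/2 and become 0 with
    probability p/2. Transitions between 1 and 2 never occur."""
    r = random.random()
    if x == 0:
        if r < 1 - p:
            return 0
        return 1 if r < 1 - p / 2 else 2
    return x if r < 1 - p / 2 else 0

# Example: transmit the word (0, 1, 2) symbol by symbol with p = 0.05.
print([channel_h(s, 0.05) for s in (0, 1, 2)])
```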
In this paper we consider coding for the non-symmetric ternary channel of Fig. 1. We define the maximum likelihood (ML) decoding rule over this channel and show that its implementation becomes too complex, since it depends on p. As an alternative, a simpler decoding rule, called d_A-decoding, is proposed, based on a more appropriate distance measure. It is shown that the proposed decoding rule is equivalent to ML decoding for the values of p of interest. We then study the error correction capabilities of ternary codes under the d_A-decoding rule. In particular, we derive an upper bound on the size of the code. We also derive a lower bound on the size of the code given the code length n and its minimum distance, which proves the existence of good codes. For small values of the minimum distance the lower bound and the upper bound are close.

II. MAXIMUM LIKELIHOOD DECODING

For later use, let C be a ternary code of length n, and let x = (x_1, ..., x_n) ∈ C and y = (y_1, ..., y_n) represent a codeword transmitted over channel H and a received vector, respectively.

Definition 1. Let u = (u_1, ..., u_n) and v = (v_1, ..., v_n) be two vectors in F_3^n. The distance¹ d_ML(u,v) between u and v is defined as:

d_ML(u,v) = ∞ if δ̄_1(u,v) > 0,
d_ML(u,v) = −δ_0(u,v) log(1 − p) − δ_1(u,v) log(1 − p/2) − δ̄_0(u,v) log(p/2) otherwise, (1)

where

δ_0(u,v) = |{i : u_i = v_i = 0}|,
δ_1(u,v) = |{i : u_i = v_i ≠ 0}|,
δ̄_0(u,v) = |{i : u_i ≠ v_i, u_i v_i = 0}|,
δ̄_1(u,v) = |{i : u_i ≠ v_i, u_i v_i ≠ 0}|. (2)

¹Note that d_ML is not formally a distance, as it is not symmetric and the identity of indiscernibles does not hold.

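A direct transcription of (1) and (2) into code (a sketch of ours, assuming integer symbols 0, 1, 2) may help fix the definition:

```python
import math

def d_ml(u, v, p):
    """d_ML of Definition 1; returns infinity when some position would
    require the forbidden transition between the nonzero symbols 1 and 2."""
    delta0 = sum(1 for a, b in zip(u, v) if a == b == 0)   # matching zeros
    delta1 = sum(1 for a, b in zip(u, v) if a == b != 0)   # matching nonzeros
    dbar0 = sum(1 for a, b in zip(u, v) if a != b and a * b == 0)
    dbar1 = sum(1 for a, b in zip(u, v) if a != b and a * b != 0)
    if dbar1 > 0:
        return math.inf
    return (-delta0 * math.log(1 - p)
            - delta1 * math.log(1 - p / 2)
            - dbar0 * math.log(p / 2))
```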
In (2), δ_0(u,v) and δ_1(u,v) denote the number of positions i where u_i and v_i are equal, with u_i = v_i = 0 and u_i = v_i ≠ 0, respectively. For symmetric channels, there is no need to distinguish between positions where u_i = v_i = 0 and positions where u_i = v_i ≠ 0, since the probability of the transition 0 → 0 is the same as the probability of any other non-error transition. However, for channel H, this distinction is required. Also, δ̄_0(u,v) and δ̄_1(u,v) denote the number of positions i where u_i and v_i differ, with u_i v_i = 0 and u_i v_i ≠ 0, respectively. Again, this distinction is not necessary for symmetric channels.

As the channel is memoryless, the probability P(y|x) of receiving y when codeword x is transmitted is

P(y|x) = ∏_{i=1}^{n} P(y_i|x_i). (3)

We can therefore relate d_ML(x,y) to P(y|x):

−log P(y|x) = −∑_{i=1}^{n} log P(y_i|x_i) = d_ML(x,y). (4)

Using d_ML, the ML decoding rule can be formulated as follows: Given a received word y, decode to the codeword x that minimizes the distance d_ML(x,y).

Proof: It is sufficient to prove that for a given y, as x varies among the codewords, d_ML(x,y) increases when P(x|y) decreases. For d_ML(x,y) = ∞, there is a position i such that the transition from x_i to y_i is not allowed, therefore P(x|y) = 0. We consider the case where d_ML(x,y) < ∞:

P(x|y) = P(x) P(y|x) / P(y) = (P(x)/P(y)) exp(−d_ML(x,y)). (5)

Since P(x) does not depend on the transmitted codeword x (codewords are assumed equiprobable) and P(y) is fixed for a given received word, we have:

P(x|y) ∝ exp(−d_ML(x,y)). (6)

We conclude by monotonicity of the exponential. ∎

Unfortunately, d_ML depends on the channel transition probability p. Therefore, ML decoding based on d_ML is cumbersome. To circumvent this drawback, a simple alternative decoding rule is proposed in the following.

Definition 2. Let u and v be two vectors in F_3^n. The distance² d_A(u,v) between u and v is defined as:

d_A(u,v) = ∑_{i=1}^{n} d_A(u_i, v_i), (7)

where

d_A(u_i, v_i) = 0 if u_i = v_i,
d_A(u_i, v_i) = 1 if u_i ≠ v_i and u_i v_i = 0,
d_A(u_i, v_i) = ∞ if u_i ≠ v_i and u_i v_i ≠ 0. (8)

Using this distance measure, we can define the following decoding rule, which does not depend on p: Given a received word y, decode to the codeword x that minimizes the distance d_A(x,y). In the rest of the paper we shall refer to this decoding rule as d_A-decoding. We denote by t_A the error correction capability of a code C over the channel H under d_A-decoding.

Note that d_A-decoding does not necessarily minimize the probability of error. However, the following theorem gives an upper bound on p under which, if less than or equal to t_A errors occur, d_A-decoding is equivalent to ML decoding:

Theorem 1. Let C be a ternary code, and let H be the ternary channel of Fig. 1 with transition error probability p. d_A-decoding and ML decoding of vectors transmitted with less than or equal to t_A errors over H are equivalent for all codes C of length n if and only if:

(p/2)/(1 − p) < ((1 − p)/(1 − p/2))^{(n−1)/2}. (9)

Proof: Direct implication: We assume that (9) does not hold. For n odd, we write n = 2m + 1. Consider the code C = {0^n, 1^n} consisting of two codewords, the all-zero codeword and the all-one codeword, and the received vector y = 0^{m+1} 1^m consisting of m + 1 zeros followed by m ones. Clearly, d_A-decoding decodes y to the all-zero codeword 0^n. On the other hand,

P(y|0^n) = (1 − p)^{m+1} (p/2)^m,
P(y|1^n) = (p/2)^{m+1} (1 − p/2)^m. (10)

Using the hypothesis, we obtain P(y|0^n) ≤ P(y|1^n). Therefore, ML decoding will not necessarily decode to 0^n.
If n is even, we use the same argument considering the same vectors with an extra zero appended at the end.

Converse: Only a sketch of the proof is given here. We prove that for a vector y of weight w and another vector x in F_3^n such that d_A(x,y) = d, if d is finite, then:

P(y|x) ≤ (p/2)^d (1 − p/2)^w (1 − p)^{n−w−d} if d ≤ n − w,
P(y|x) ≤ (p/2)^d (1 − p/2)^{n−d} otherwise. (11)

We denote this upper bound by B⁺(n, w, d). We find a similar lower bound, which we call B⁻(n, w, d), such that for all x, y ∈ F_3^n, if w denotes the weight of y and d the d_A-distance between x and y,

B⁻(n, w, d) ≤ P(y|x) ≤ B⁺(n, w, d). (12)

Then, we show that for all n, all w ≤ n and all d ≤ n − 1, the following inequality is equivalent to (9):

B⁻(n, w, d) > B⁺(n, w, d + 1). (13)

Thus, if (9) holds and if the minimization of d_A(x,y) yields a unique x (which happens whenever less than or equal to t_A errors occur), the x obtained is the same as the one obtained by maximizing P(x|y), i.e., the two decoding methods are equivalent. ∎

The upper bound on p given by (9) is depicted in Fig. 2. d_A-decoding and ML decoding are equivalent for all values of p under the curve. For reasonable values of n, i.e., n < 100, the equivalence holds for values of p < 0.1, compatible with the error rate in EEPROM memories. Therefore, the d_A-decoding rule can be considered instead of the more complex ML decoding rule with no loss in performance.

Fig. 2. Maximum value of p for equivalence between ML decoding and d_A-decoding rules.

²Note that in this case the identity of indiscernibles holds and the distance is symmetric. However, the triangle inequality does not hold anymore.
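For concreteness, a sketch (ours) of d_A, of brute-force d_A-decoding, and of condition (9):

```python
import math

def d_a(u, v):
    """d_A of Definition 2: 1 per transition through 0, infinite for 1 <-> 2."""
    total = 0
    for a, b in zip(u, v):
        if a != b:
            total += 1 if a * b == 0 else math.inf
    return total

def decode_da(code, y):
    """d_A-decoding: pick the codeword at minimum d_A from y."""
    return min(code, key=lambda x: d_a(x, y))

def equivalent(n, p):
    """Condition (9) for the equivalence of d_A- and ML decoding."""
    return (p / 2) / (1 - p) < ((1 - p) / (1 - p / 2)) ** ((n - 1) / 2)

# The code {0^3, 1^3} from the proof of Theorem 1:
print(decode_da([(0, 0, 0), (1, 1, 1)], (0, 0, 1)))  # -> (0, 0, 0)
print(equivalent(99, 0.1))  # True: p = 0.1 is under the curve of Fig. 2
```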

III. ERROR CORRECTION CAPABILITIES

In this section we analyze distance properties and error correction capabilities of ternary codes over the ternary channel H under the d_A-decoding rule defined in the previous section. To do so, instead of using the distance measure d_A directly, we require the definition of another distance measure:

Definition 3. Let u and v be two vectors in F_3^n. The distance d_B(u,v) between u and v is defined as:

d_B(u,v) = min_{w ∈ F_3^n} (d_A(u,w) + d_A(w,v)). (14)

It is easy to check that d_B is such that, for u_i and v_i in F_3:

d_B(u_i, v_i) = 0 if u_i = v_i,
d_B(u_i, v_i) = 1 if u_i ≠ v_i and u_i v_i = 0,
d_B(u_i, v_i) = 2 if u_i ≠ v_i and u_i v_i ≠ 0. (15)

Definition 4. Let x and x̂ be two codewords of C. We define the minimum d_B-distance of a code C, d_B,min, as

d_B,min = min_{x, x̂ ∈ C, x ≠ x̂} d_B(x, x̂). (16)

Then, assuming d_A-decoding, the error correction capability t_A of a code over the channel H is given by the following proposition:

Proposition 1. The error correction capability t_A of a code C over the ternary channel H is:

t_A = ⌊(d_B,min − 1)/2⌋. (17)

Proof: Let x and y be the transmitted and the received vectors, respectively. If a decoder implementing the d_A-decoding rule erroneously decodes y to x̂ ≠ x, then

d_A(x,y) ≥ d_A(x̂,y). (18)

Using (18) and Definition 3,

2 d_A(x,y) ≥ d_A(x,y) + d_A(y,x̂) ≥ d_B(x,x̂) ≥ d_B,min > 2⌊(d_B,min − 1)/2⌋. (19)

Therefore, we successfully d_A-decode y if

d_A(x,y) ≤ ⌊(d_B,min − 1)/2⌋, (20)

where d_A(x,y) is the number of errors that occurred during the transmission of x. Conversely, by Definitions 3 and 4, there exist two codewords x and x̂ and a vector y ∈ F_3^n such that

d_B(x,x̂) = d_B,min and d_B(x,x̂) = d_A(x,y) + d_A(y,x̂). (21)

Therefore, if d_A(x,y) > t_A, then

2 d_A(x,y) ≥ d_B,min = d_A(x,y) + d_A(y,x̂), (22)

thus d_A(y,x̂) ≤ d_A(x,y), and the d_A-decoder may fail to decode y to x. ∎
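A sketch (ours) of d_B and of the error correction capability t_A of Proposition 1:

```python
import itertools

def d_b(u, v):
    """d_B of Definition 3, using the per-symbol values (15)."""
    return sum(0 if a == b else (1 if a * b == 0 else 2)
               for a, b in zip(u, v))

def t_a(code):
    """t_A = floor((d_B,min - 1) / 2), per Proposition 1."""
    db_min = min(d_b(x, z) for x, z in itertools.combinations(code, 2))
    return (db_min - 1) // 2

# Example: {000, 111, 222} has d_B,min = 3 (e.g. d_B(000, 111) = 3), so t_A = 1.
print(t_a([(0, 0, 0), (1, 1, 1), (2, 2, 2)]))  # -> 1
```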

IV. A SPHERE PACKING BOUND

In this section we give a simple upper bound on the size of codes over the ternary channel H assuming d_A-decoding. The bound we introduce is a sphere-packing bound. However, its formulation is harder than for the case of symmetric channels. Since transitions between the symbols 1 and 2 are not allowed, the ternary space we deal with is not isotropic and has the shape of a hypercube of dimension n, centered on the all-zero vector (see Fig. 3 for n = 3). Therefore, spheres have smaller volumes if they are closer to the vertices of the hypercube. The goal here is to find how many spheres of a given radius can be placed in the ternary space. We give a lower bound on the volume of the spheres:

Fig. 3. F_3^3 with distance d_B.

Proposition 2. We denote by S(u, r) the sphere with center u and radius r in F_3^n:

S(u, r) = {v ∈ F_3^n : d_B(u,v) ≤ r}. (23)

The volume |S(u, r)| of S(u, r) is such that:

|S(u, r)| ≥ ∑_{d=0}^{r} ∑_{e'=0}^{⌊d/2⌋} (n choose e') (n − e' choose d − 2e'). (24)

Proof: We first prove that the smallest spheres are the ones centered at words of maximum weight (the vertices of the hypercube). Let n and r be two integers. For w ≤ n, let u_w be a vector of F_3^n of weight w. The volume of S(u_w, r) is independent of the choice of u_w. We denote it by V(n, w, r). For n > 0 and w ≤ n − 1, choose u_w ending in a 0 and denote by u'_w the vector of F_3^{n−1} obtained by removing the last symbol of u_w:

V(n, w, r) = |{v ∈ F_3^n : d_B(u_w,v) ≤ r}|
= |{w0 : w ∈ F_3^{n−1}, d_B(u'_w,w) ≤ r}| + |{w1 : w ∈ F_3^{n−1}, d_B(u'_w,w) ≤ r − 1}| + |{w2 : w ∈ F_3^{n−1}, d_B(u'_w,w) ≤ r − 1}|
= V(n − 1, w, r) + 2 V(n − 1, w, r − 1), (25)

where for w ∈ F_3^{n−1}, w0 denotes the vector of F_3^n obtained by appending a 0 at the end of w. Similarly, we show that for w ≤ n − 1,

V(n, w + 1, r) = V(n − 1, w, r) + V(n − 1, w, r − 1) + V(n − 1, w, r − 2). (26)

Therefore, if n > 0 and w ≤ n − 1,

V(n, w, r) − V(n, w + 1, r) = V(n − 1, w, r − 1) − V(n − 1, w, r − 2) ≥ 0. (27)

From (27) it follows that the spheres of minimal volume are the ones centered at words of maximum weight. Now, we give an expression for V(n, n, r). We consider the all-one vector 1^n. Let v ∈ F_3^n: v is in S(1^n, r) if and only if d_B(1^n,v) ≤ r. We denote this distance by d, and by e' the number of positions i where v_i = 2; each such position contributes 2 to d. The number of positions j where v_j = 0 is then d − 2e'. The number of vectors v that match given d and e' is therefore (n choose e')(n − e' choose d − 2e'). We conclude by summing over all possible d and e':

|S(1^n, r)| = ∑_{d=0}^{r} ∑_{e'=0}^{⌊d/2⌋} (n choose e') (n − e' choose d − 2e'). (28) ∎

It is now possible to formulate the sphere-packing bound for our channel:

Theorem 2. Let C be a code of length n and minimum d_B-distance d_B,min over the ternary channel H. If |C| denotes the size of C,

|C| ≤ 3^n / ( ∑_{d=0}^{t_A} ∑_{e'=0}^{⌊d/2⌋} (n choose e') (n − e' choose d − 2e') ). (29)

Proof: Let x and x̂ be two distinct codewords of C. Since d_B(x,x̂) ≥ 2 t_A + 1, the spheres S(x, t_A) and S(x̂, t_A) are non-intersecting. This implies that

|∪_{x ∈ C} S(x, t_A)| = ∑_{x ∈ C} |S(x, t_A)| ≥ |C| ∑_{d=0}^{t_A} ∑_{e'=0}^{⌊d/2⌋} (n choose e') (n − e' choose d − 2e'). (30)

Furthermore,

|∪_{x ∈ C} S(x, t_A)| ≤ |F_3^n| = 3^n. (31)

Combining (30) and (31) yields (29). ∎

Note that this bound becomes looser as larger values of d_B,min are considered, since the tightness of the lower bound on the volume of the spheres given by Proposition 2 also decays when d_B,min increases. This explains the results of Table I presented at the end of the article.
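A numerical sketch (ours) of the volume lower bound (24) and of the resulting bound (29); for n = 8 and d_B,min = 4 it gives floor(log_2 |C|) ≤ 9:

```python
from math import comb

def sphere_volume_lb(n, r):
    """Lower bound (24) on the volume of a d_B-sphere of radius r in F_3^n."""
    return sum(comb(n, e) * comb(n - e, d - 2 * e)
               for d in range(r + 1)
               for e in range(d // 2 + 1))

def sphere_packing_bound(n, db_min):
    """Upper bound (29) on the code size |C|."""
    t_a = (db_min - 1) // 2
    return 3 ** n // sphere_volume_lb(n, t_a)

# Example: n = 8, d_B,min = 4, so t_A = 1 and the sphere volume is >= 1 + 8 = 9.
print(sphere_packing_bound(8, 4))                    # -> 729
print(sphere_packing_bound(8, 4).bit_length() - 1)   # floor(log2 |C|) = 9
```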

V. CONSTRUCTIVE LOWER BOUND

In this section, we give a constructive lower bound on the size of codes over channel H and show the existence of good codes. We define mappings that are applied to binary codes to generate a set of codewords of F_3^n that respects a given minimum d_B-distance d_B,min.

A. Mappings and their topological properties

Let u be a vector of F_2^n and w_u its Hamming weight. We denote by g_u(j) (1 ≤ j ≤ w_u) the j-th non-zero coordinate of u. We define the mapping φ_u such that:

φ_u : F_3^{w_u} → F_3^n, û ↦ ∑_{j=1}^{w_u} û_j e_{g_u(j)}, (32)

where (e_i)_{1 ≤ i ≤ n} is the canonical basis of F_3^n. We also call E_u the subspace of F_3^n defined by E_u = φ_u(F_3^{w_u}). For instance, for u = 10011000, φ_u(012) = 00012000, and the elements of E_u are the vectors of the form a00bc000 for a, b, c ∈ F_3. We define another mapping ψ that transforms a binary word into a ternary word with no 0 coordinate by changing the symbols 0 into 1's and the symbols 1 into 2's:

ψ : F_2^n → F_3^n, u ↦ ∑_{i=1}^{n} (u_i + 1) e_i. (33)

These mappings have several topological properties regarding d_B:

Proposition 3. Let u and v be two vectors in F_2^n, and ψ the mapping defined in (33). It follows that:

d_B(ψ(u), ψ(v)) = 2 d_B(u,v). (34)

Proof: Since d_B(1, 2) = 2 d_B(0, 1) and d_B(a, a) = 0 for all a ∈ F_3, we have

d_B(ψ(u), ψ(v)) = ∑_{i=1}^{n} d_B((ψ(u))_i, (ψ(v))_i) = ∑_{i=1}^{n} d_B(u_i + 1, v_i + 1) = 2 ∑_{i=1}^{n} d_B(u_i, v_i) = 2 d_B(u,v). (35) ∎

Proposition 4. Let u ∈ F_2^n. For û, û' ∈ F_3^{w_u} it follows that:

d_B(φ_u(û), φ_u(û')) = d_B(û, û'). (36)

Proof:

d_B(φ_u(û), φ_u(û')) = d_B( ∑_{j=1}^{w_u} û_j e_{g_u(j)}, ∑_{j=1}^{w_u} û'_j e_{g_u(j)} ) = ∑_{j=1}^{w_u} d_B(û_j, û'_j) = d_B(û, û'). (37) ∎

Proposition 5. For u, v ∈ F_2^n, and û ∈ F_3^{w_u} and v̂ ∈ F_3^{w_v} both with no 0 coordinate, the following inequality holds:

d_B(φ_u(û), φ_v(v̂)) ≥ d_B(u,v). (38)

Proof: We denote by 1_m the vector of F_3^m with all coordinates equal to 1. As û and v̂ have no 0 coordinate:

d_B(φ_u(û), φ_v(v̂)) ≥ d_B(φ_u(1_{w_u}), φ_v(1_{w_v})) = d_B(u,v). (39) ∎

B. Construction and lower bound

The aim of this section is to build, for a given length n and a given minimum d_B-distance d_B,min, say d_B,min = d, an (n, M, d) code for the ternary channel H with reasonable M, where M = |C| is the cardinality of the code, starting from elementary binary codes. Let C' be an (n, k, d) binary code with minimum (Hamming) distance d_H,min equal to d, and denote by A_h its weight enumerator, i.e., the number of codewords of weight h, 0 ≤ h ≤ n. For all h such that A_h ≠ 0, let C'_h be an (h, k_h, ⌈d/2⌉) binary code. We consider the following ternary code:

C = ∪_{x' ∈ C'} φ_{x'}( ψ( C'_{w_{x'}} ) ). (40)

Proposition 6. The cardinality of the code C satisfies:

|C| = ∑_{h=0}^{n} A_h |C'_h|. (41)

Proof: Since for all x', φ_{x'} and ψ are trivially injective, it is enough to prove that the union in (40) is disjoint. For x', z' ∈ C' such that x' ≠ z', let x ∈ φ_{x'}(ψ(C'_{w_{x'}})) and z ∈ φ_{z'}(ψ(C'_{w_{z'}})). By Proposition 5, d_B(x,z) ≥ d_H(x', z') > 0, thus x ≠ z. ∎

Corollary 1.

|C| = ∑_{h=0}^{n} A_h 2^{k_h}. (42)
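Before proceeding to the distance property of the construction, here is a sketch (ours) of the mappings φ_u of (32) and ψ of (33), reproducing the example given above:

```python
def phi(u, u_hat):
    """phi_u of (32): place the ternary word u_hat on the support of the
    binary word u; all other coordinates are 0."""
    assert sum(u) == len(u_hat)
    out, it = [0] * len(u), iter(u_hat)
    for j, bit in enumerate(u):
        if bit == 1:
            out[j] = next(it)
    return tuple(out)

def psi(u):
    """psi of (33): binary 0 -> 1 and 1 -> 2, yielding a word with no 0."""
    return tuple(b + 1 for b in u)

# Example from the text: u = 10011000, phi_u(012) = 00012000.
print(phi((1, 0, 0, 1, 1, 0, 0, 0), (0, 1, 2)))  # -> (0, 0, 0, 1, 2, 0, 0, 0)
print(psi((1, 0, 1)))                            # -> (2, 1, 2)
```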

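As a worked instance of Corollary 1 (our own example, not taken from the paper's tables): take C' to be the (8, 4, 4) extended Hamming code, whose weight enumerator is A_0 = 1, A_4 = 14, A_8 = 1, and take single parity-check (h, h − 1, 2) codes as the inner codes C'_h, which have the required minimum distance ⌈4/2⌉ = 2:

```python
# Cardinality (42) of the ternary code built from the (8,4,4) extended
# Hamming code with single parity-check inner codes (our example).
A = {0: 1, 4: 14, 8: 1}   # weight enumerator of the (8,4,4) code
k = {0: 0, 4: 3, 8: 7}    # k_h = h - 1 for h > 0; the weight-0 codeword
                          # contributes a single (all-zero) ternary codeword
M = sum(A[h] * 2 ** k[h] for h in A)
print(M)  # -> 241, i.e. floor(log2 M) = 7 for n = 8 and d_B,min = 4
```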
Proposition 7. Let x and z be two distinct codewords of C. Then d_B(x,z) ≥ d.

Proof: We denote by x' the codeword of C' and by x̂ the codeword of C'_{w_{x'}} such that x = φ_{x'}(ψ(x̂)) (uniqueness is proved in the proof of Proposition 6). Likewise, we define z' and ẑ with respect to z. We consider two cases:

Case x' = z': In this case x̂ and ẑ are two different codewords of C'_{w_{x'}} (otherwise x = z). Thus, by the choice of the code C'_{w_{x'}}, it follows that d_B(x̂, ẑ) ≥ ⌈d/2⌉. By Propositions 3 and 4,

d_B(x,z) = d_B(φ_{x'}(ψ(x̂)), φ_{z'}(ψ(ẑ))) = d_B(ψ(x̂), ψ(ẑ)) = 2 d_B(x̂, ẑ) ≥ 2⌈d/2⌉ ≥ d. (43)

Case x' ≠ z': By the choice of C' it follows that d_H(x', z') ≥ d. Now, by Proposition 5,

d_B(x,z) = d_B(φ_{x'}(ψ(x̂)), φ_{z'}(ψ(ẑ))) ≥ d_B(x', z') = d_H(x', z') ≥ d. (44)

In both cases, d_B(x,z) ≥ d, which concludes the proof. ∎

C. Results

The constructive method proposed above gives a lower bound on the cardinality of ternary codes over H. We used this method to construct codes based on extended BCH codes as C' and codes obtained from the tables in [2], [3] as the inner codes C'_h. Note that full knowledge of the binary codes used in the construction is not required to compute the lower bound: given n and d, we only require the weight enumerator A_h of C', which can be found in [4]. On the other hand, for the codes C'_h, only knowledge of the dimension k_h is required. The results are shown in Table I. For given n and d_B,min, we report in the table the value ⌊log_2 M⌋. The upper bound on the size of codes over H of length n and minimum d_B-distance d_B,min is also given in the table in brackets (also in the form ⌊log_2 M⌋).

TABLE I
CONSTRUCTIVE LOWER BOUND AND UPPER BOUND (IN BRACKETS) ON THE SIZE (log_2 M) OF CODES, AS A FUNCTION OF n AND d_B,min.
[The table lists ⌊log_2 M⌋ for n = 8, 16, 32, 64, 128 and even values of d_B,min from 2 to 64; its numerical entries are too garbled in this copy to be reproduced reliably.]

VI. CONCLUSION

In this paper, coding for a particular non-symmetric ternary channel where some transitions are not allowed was addressed. We derived the maximum likelihood decoding rule for this channel and showed that it is too complex, since it depends on the error transition probability p. We then proposed an alternative decoding rule, called d_A-decoding, based on a more suitable distance measure. We showed that d_A-decoding and ML decoding are equivalent for the values of p of interest. Further, we analyzed the error correction capabilities of ternary codes over this particular channel under d_A-decoding. We gave an upper bound and a constructive lower bound on the code size, showing the existence of good codes. Following the proposed constructive method, we found good codes for several values of n and d_B,min. In this paper we did not make any considerations regarding the complexity of encoding and decoding. Further research includes finding binary codes better adapted to the construction method, as well as a simple method to map a binary input to the ternary codewords, so that the construction can be effectively used in memory devices.

REFERENCES

[1] T. J. Ting, T. Chang, T. Lin, C. S. Jenq, and K. K. C. Naiff, "A 50-ns CMOS 256K EEPROM," IEEE J. Solid-State Circuits, vol. 23, Oct. 1988.
[2] M. Grassl, "Searching for linear codes with large minimum distance," in Discovering Mathematics with Magma: Reducing the Abstract to the Concrete (W. Bosma and J. Cannon, eds.), vol. 19 of Algorithms and Computation in Mathematics, Heidelberg: Springer, 2006.
[3] M. Grassl, "Bounds on the minimum distance of linear codes and quantum codes." Online available at
[4] R. H. Morelos-Zaragoza, The Art of Error Correcting Coding. John Wiley & Sons, 2006.
