A Brief Review of Coding Theory


A Brief Review of Coding Theory
Pascal O. Vontobel
JTG Summer School, IIT Madras, Chennai, India
June 16-19, 2014

Reliable Communication

One of the main motivations for studying coding theory is the desire to reliably transmit information over noisy channels.

[Figure: stream of input symbols -> noisy channel -> stream of output symbols]

Discrete Memoryless Channels (Part 1)

[Figure: x_n, x_{n-1}, ..., x_2, x_1 -> DMC -> y_n, y_{n-1}, ..., y_2, y_1]

A simple class of channel models is the class of discrete memoryless channels (DMCs). A DMC is a statistical channel model that is characterized by

- a discrete (possibly countably infinite) input alphabet X,
- a discrete (possibly countably infinite) output alphabet Y,

Discrete Memoryless Channels (Part 2)

(list continued)

- a conditional probability mass function (pmf) P_{Y_i|X_i}(y_i|x_i) that tells us the probability of observing the output symbol y_i given that the input symbol x_i was sent,
- the fact that the transmission at different time indices is statistically independent, i.e., using x ≜ (x_1, ..., x_n) and y ≜ (y_1, ..., y_n) we have

    P_{Y|X}(y|x) = ∏_{i=1}^{n} P_{Y_i|X_i}(y_i|x_i).

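As an illustration (not from the slides), here is a minimal Python sketch of a DMC: the transition pmf `P` below is a hypothetical toy example (it happens to be a BSC with ε = 0.1), and the product form of `sequence_probability` is exactly the memorylessness property above.

```python
import random

# Hypothetical transition pmf of a toy DMC with X = Y = {0, 1}
# (chosen only for illustration; any row-stochastic table works).
P = {0: {0: 0.9, 1: 0.1},
     1: {0: 0.1, 1: 0.9}}

def dmc_transmit(x, pmf, rng=random.random):
    """Send each input symbol independently through the DMC."""
    y = []
    for xi in x:
        r, cum = rng(), 0.0
        for yi, p in pmf[xi].items():
            cum += p
            if r < cum:
                y.append(yi)
                break
    return y

def sequence_probability(x, y, pmf):
    """Memorylessness: P(y|x) factors as a product over time indices."""
    prob = 1.0
    for xi, yi in zip(x, y):
        prob *= pmf[xi][yi]
    return prob
```

For example, `sequence_probability([0, 1, 1], [0, 1, 0], P)` evaluates the product 0.9 · 0.9 · 0.1.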
The Binary Symmetric Channel

[Figure: transition diagram of the BSC; each input 0/1 is received correctly with probability 1 − ε and flipped with probability ε]

Let ε ∈ [0, 1]. A simple model is, e.g., the binary symmetric channel (BSC) with cross-over probability ε. It is a DMC

- with input alphabet X = {0, 1},
- with output alphabet Y = {0, 1},
- and with conditional probability mass function

    P_{Y_i|X_i}(y_i|x_i) = 1 − ε   if y_i = x_i,
                           ε       if y_i ≠ x_i.

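A one-line Python sketch of the BSC, added here as an illustration (not part of the original slides):

```python
import random

def bsc(x, eps, rng=random.random):
    """Flip each bit independently with cross-over probability eps."""
    return [xi ^ 1 if rng() < eps else xi for xi in x]

# Edge cases: eps = 0 is the noiseless channel, eps = 1 flips every bit.
assert bsc([0, 1, 0, 1], 0.0) == [0, 1, 0, 1]
assert bsc([0, 1, 0, 1], 1.0) == [1, 0, 1, 0]
```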
The Binary Erasure Channel

[Figure: transition diagram of the BEC; each input 0/1 is received correctly with probability 1 − δ and erased with probability δ]

Let δ ∈ [0, 1]. Another popular model is the binary erasure channel (BEC) with erasure probability δ. It is a DMC

- with input alphabet X = {0, 1},
- with output alphabet Y = {0, ?, 1}, where "?" denotes the erasure symbol,
- and with conditional probability mass function

    P_{Y_i|X_i}(y_i|x_i) = 1 − δ   if y_i = x_i,
                           δ       if y_i = ?.

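The BEC differs from the BSC only in that bits are lost rather than flipped; a minimal sketch (again not from the slides, with "?" standing in for the erasure symbol):

```python
import random

ERASURE = "?"  # stand-in for the erasure symbol

def bec(x, delta, rng=random.random):
    """Erase each bit independently with erasure probability delta."""
    return [ERASURE if rng() < delta else xi for xi in x]

assert bec([0, 1, 1], 0.0) == [0, 1, 1]       # delta = 0: nothing erased
assert bec([0, 1, 1], 1.0) == [ERASURE] * 3   # delta = 1: everything erased
```

Note that, unlike on the BSC, a non-erased output bit on the BEC is always correct.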
The Binary-Input AWGNC

Let σ² be a non-negative real number. Yet another popular model (which is, strictly speaking, not a DMC) is the binary-input additive white Gaussian noise channel (AWGNC). It is a memoryless channel model

- with discrete input alphabet X = {0, 1},
- with continuous output alphabet Y = ℝ,
- and with conditional probability density function

    p_{Y_i|X_i}(y_i|x_i) = (1 / √(2πσ²)) · exp(−(y_i − x̄_i)² / (2σ²)),

  where

    x̄_i ≜ 1 − 2x_i = +1   if x_i = 0,
                     −1   if x_i = 1.

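A minimal sketch of the binary-input AWGNC in Python (an illustration, not from the slides): the mapping x̄ = 1 − 2x sends bits to BPSK symbols ±1, and Gaussian noise is added per symbol.

```python
import random

def bi_awgn(x, sigma, gauss=random.gauss):
    """Map bits to BPSK symbols (0 -> +1, 1 -> -1) and add N(0, sigma^2) noise."""
    return [(1 - 2 * xi) + gauss(0.0, sigma) for xi in x]

# With sigma = 0 the channel simply outputs the BPSK symbols.
assert bi_awgn([0, 1, 1], 0.0) == [1, -1, -1]
```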
Uncoded Transmission

[Figure: x_n, x_{n-1}, ..., x_2, x_1 -> DMC -> y_n, y_{n-1}, ..., y_2, y_1]

Consider a BSC with cross-over probability ε ∈ [0, 1/2]. Assume that we use uncoded transmission, i.e., we directly send the information bits over the BSC.

- Our best decision about x_i will be x̂_i ≜ y_i.
- It is easily seen that the error probability is Pr(X̂_i ≠ X_i) = ε.

A Better Approach (Part 1)

[Figure: {u_t}_t -> Parser -> {(u_{tk+1}, ..., u_{tk+k})}_t -> Encoder -> {(x_{tn+1}, ..., x_{tn+n})}_t -> DMC -> {(y_{tn+1}, ..., y_{tn+n})}_t -> Decoder -> {(û_{tk+1}, ..., û_{tk+k})}_t or {(x̂_{tn+1}, ..., x̂_{tn+n})}_t]

- Firstly, we parse the string of information symbols into blocks of length k.
- Secondly, instead of sending the components of the information word (u_{tk+1}, ..., u_{tk+k}) over the channel, we map (encode) the information word to a codeword (x_{tn+1}, ..., x_{tn+n}), whose components are then sent over the channel.

A Better Approach (Part 2)

[Figure: the same parser/encoder/DMC/decoder block diagram as in Part 1]

Based on the observed channel output (y_{tn+1}, ..., y_{tn+n}) we make

- a decision (û_{tk+1}, ..., û_{tk+k}) about the information vector (u_{tk+1}, ..., u_{tk+k}),
- or a decision (x̂_{tn+1}, ..., x̂_{tn+n}) about the codeword (x_{tn+1}, ..., x_{tn+n}).

A Better Approach (Part 3)

Consider the following en-/de-coding scheme with U = X = {0, 1}, k = 1, and n = 5 that is used for data transmission over a BSC with cross-over probability ε ∈ [0, 1/2]. (Without loss of generality, we can focus on t = 0.)

[Figure: u_1 -> Parser -> (u_1) -> Encoder -> (x_1, ..., x_5) -> BSC -> (y_1, ..., y_5) -> Decoder -> (û_1) or (x̂_1, ..., x̂_5)]

- If (u_1) = (0) then we send the codeword x = (0, 0, 0, 0, 0).
- If (u_1) = (1) then we send the codeword x = (1, 1, 1, 1, 1).
- We use the decoder

    (û_1) = (0)   if y contains more zeros than ones,
            (1)   if y contains more ones than zeros.

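The scheme above can be sketched in a few lines of Python (an illustration, not from the slides): the encoder repeats the bit, and the decoder is a majority vote.

```python
def encode(u):
    """[5,1] repetition code: repeat the single information bit 5 times."""
    return [u] * 5

def decode(y):
    """Majority vote over the 5 received bits."""
    return 1 if sum(y) > len(y) // 2 else 0

assert decode(encode(0)) == 0
assert decode([1, 0, 1, 1, 0]) == 1  # two bit flips are corrected
assert decode([0, 0, 1, 0, 1]) == 0
```

The asserts show that this code corrects up to two bit flips per block; three or more flips cause a block error.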
A Better Approach (Part 4)

- For obvious reasons, the above coding scheme is called a repetition code.
- The rate of the code is R = k/n = 1/5.
- The error probability is

    Pr(Û_1 ≠ U_1) = (5 choose 3) (1 − ε)² ε³ + (5 choose 4) (1 − ε)¹ ε⁴ + (5 choose 5) (1 − ε)⁰ ε⁵,

  which for small ε is clearly smaller than in the uncoded case, but we have to pay for this improvement by sending more symbols over the channel.
- Despite this initial success, one has the feeling that one could construct much better rate-1/5 codes by taking k and n larger with n = 5k.

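The block error probability above is easy to evaluate numerically; a short sketch (an illustration, not from the slides):

```python
from math import comb

def rep5_error_prob(eps):
    """Block error probability of the [5,1] repetition code on a BSC:
    the majority vote fails iff 3 or more of the 5 bits are flipped."""
    return sum(comb(5, j) * eps**j * (1 - eps)**(5 - j) for j in range(3, 6))

eps = 0.01
assert rep5_error_prob(eps) < eps   # coding beats uncoded transmission
```

For ε = 0.01 this gives roughly 10⁻⁵, three orders of magnitude below the uncoded error probability ε.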
A Better Approach (Part 5)

- The code (or codebook) is the set of all codewords:

    C ≜ { x ∈ X^n | there exists a u ∈ U^k s.t. x = Encoder(u) }.

- The dimensionless rate of the code is R ≜ k/n.
- The dimensioned rate of the code is

    R ≜ (k · log₂|U|) / n   [bits per channel use].

- Note that if |U| = 2 then the dimensionless and the dimensioned rate are equal. In the following, we will mostly deal with the case |U| = |X| = 2 and so we will simply talk about the rate R.

A Better Approach (Part 6)

- An important quantity characterizing a code is the minimum Hamming distance

    d_min(C) ≜ min_{x, x′ ∈ C, x ≠ x′} d_H(x, x′),

  where d_H(x, x′) is the Hamming distance between x and x′.
- For a linear block code we have

    d_min(C) = min_{x ∈ C, x ≠ 0} w_H(x),

  where w_H(x) is the Hamming weight of x.

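Both definitions are straightforward to compute for small codes; a sketch (an illustration, not from the slides), using the [5,1] repetition code as the toy example:

```python
def hamming_distance(x, y):
    """Number of positions in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def d_min(code):
    """Minimum Hamming distance over all pairs of distinct codewords."""
    return min(hamming_distance(x, y) for x in code for y in code if x != y)

# Toy linear code: the [5,1] repetition code.
rep_code = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)]
assert d_min(rep_code) == 5
# For a linear code, d_min equals the minimum weight of a nonzero codeword.
assert d_min(rep_code) == min(sum(x) for x in rep_code if any(x))
```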
Information Theory (Part 1)

[Figure: the parser/encoder/DMC/decoder block diagram]

What does information theory tell us about our setup?

⇒ To every DMC we can associate a number called the capacity C [bits per channel use].

Information Theory (Part 2)

Channel Coding Theorem

- Let the (dimensioned) rate R be such that R < C.
- Fix an arbitrary ǫ > 0.
- Then there exists a sequence of encoders/decoders with information word length k_ℓ and block length n_ℓ with

    R = (k_ℓ · log₂|U|) / n_ℓ

  such that the block error probability fulfills

    Pr((Û_1, ..., Û_{k_ℓ}) ≠ (U_1, ..., U_{k_ℓ})) < ǫ

  as k_ℓ → ∞ (and therefore as n_ℓ → ∞).

Information Theory (Part 3)

Converse to the Channel Coding Theorem

- Let the (dimensioned) rate R be such that R > C.
- Then for any sequence of encoders/decoders with information word length k_ℓ and block length n_ℓ with

    R = (k_ℓ · log₂|U|) / n_ℓ

  the block error probability

    Pr((Û_1, ..., Û_{k_ℓ}) ≠ (U_1, ..., U_{k_ℓ}))

  is strictly bounded away from zero for any k_ℓ (and therefore also for any n_ℓ).

For more precise statements, see, e.g., Cover and Thomas [1].

Information Theory (Part 4)

- Note that the channel coding theorem is a purely existential result and is based on the use of so-called random codes, i.e., one can show that the average random code is good enough under maximum likelihood (ML) decoding.
- A random code can be constructed as follows: the ?-entries in the encoding table below must be filled with randomly selected elements of X. (Here shown for U = {0, 1}, k = 3, and n = 5.)

    (u_1, u_2, u_3)   (x_1, x_2, x_3, x_4, x_5)
    (0, 0, 0)         (?, ?, ?, ?, ?)
    (0, 0, 1)         (?, ?, ?, ?, ?)
    (0, 1, 0)         (?, ?, ?, ?, ?)
    (0, 1, 1)         (?, ?, ?, ?, ?)
    (1, 0, 0)         (?, ?, ?, ?, ?)
    (1, 0, 1)         (?, ?, ?, ?, ?)
    (1, 1, 0)         (?, ?, ?, ?, ?)
    (1, 1, 1)         (?, ?, ?, ?, ?)

- If one wants to generate a sequence of capacity-achieving (c.a.) codes then the ?-entries must be filled with randomly and independently selected elements from X according to the so-called c.a. input distribution. Moreover, k and n must go to ∞ whereby R = (k · log₂|U|) / n.

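Filling in the ?-entries of the table above is a one-liner per row; a sketch (an illustration, not from the slides), using i.i.d. uniform bits:

```python
import random
from itertools import product

def random_code_table(k, n, rng=random):
    """Fill the ?-entries of the encoding table with i.i.d. fair coin flips
    (uniform bits are the capacity-achieving input distribution for the
    BSC, BEC, and binary-input AWGNC)."""
    return {u: tuple(rng.randint(0, 1) for _ in range(n))
            for u in product((0, 1), repeat=k)}

table = random_code_table(k=3, n=5)
assert len(table) == 2**3                        # one row per information word
assert all(len(x) == 5 for x in table.values())  # one codeword per row
```

Note that distinct information words may, with small probability, be assigned the same codeword; this is harmless in the random-coding argument but is one more reason such codes are impractical as-is.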
Information Theory (Part 5)

For the binary-input AWGNC, the BSC, and the BEC, this means that all entries should be randomly and independently chosen such that there are about the same number of zeros and ones.

- However, encoding has extremely high memory complexity because the whole encoding table has to be stored.
- Moreover, ML decoding (or even some sub-optimal decoding) of such a code has extremely high memory and computational complexity.
- Encoding/decoding of such random codes of reasonable length and rate is highly impractical.
- We need codes with more structure!
- Luckily, the channel coding theorem imposes only mild constraints on the codes, i.e., it leaves a lot of freedom in designing good codes.

Coding Theory (Part 1.0)

- In order to obtain practical encoding and decoding schemes, people have restricted themselves to certain classes of codes that have some structure that can be exploited for encoding/decoding.
- Here we only discuss the case U = X = {0, 1}.
- Of course, by restricting oneself to certain classes of codes, it can happen that one loses in performance compared to the best possible coding scheme where no restrictions are imposed on the encoding and decoding complexity.

Coding Theory (Part 1.1)

Restriction: encoding map is linear over F_2.

- This allows one to use results from linear algebra.
- Encoding can be characterized by a k × n matrix G over F_2:

    C = { x ∈ F_2^n | there exists a u ∈ F_2^k such that x = u G }.

  G is called the generator matrix.
- The code C is a k-dimensional subspace of F_2^n. The parameter k is therefore often called the dimension of the code.
- A rank-(n − k) matrix H of size m × n over F_2 such that

    C = { x ∈ F_2^n | x H^T = 0 }

  is called a parity-check matrix. Note that m ≥ n − k. (It is clear that for a given code C there are many possible parity-check matrices.)

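The two characterizations above can be sketched directly in Python (an illustration, not from the slides), again using the [5,1] repetition code, whose generator and parity-check matrices are written out explicitly below:

```python
# Generator and parity-check matrix of the [5,1] repetition code.
G = [[1, 1, 1, 1, 1]]            # k x n = 1 x 5
H = [[1, 1, 0, 0, 0],            # (n - k) x n = 4 x 5; each row checks
     [1, 0, 1, 0, 0],            # that one bit equals the first bit
     [1, 0, 0, 1, 0],
     [1, 0, 0, 0, 1]]

def encode(u, G):
    """Codeword x = u G, with all arithmetic over F_2."""
    n = len(G[0])
    return [sum(ui * G[i][j] for i, ui in enumerate(u)) % 2 for j in range(n)]

def is_codeword(x, H):
    """x is a codeword iff x H^T = 0 over F_2."""
    return all(sum(xj * row[j] for j, xj in enumerate(x)) % 2 == 0
               for row in H)

assert encode([1], G) == [1, 1, 1, 1, 1]
assert is_codeword(encode([0], G), H) and is_codeword(encode([1], G), H)
assert not is_codeword([1, 0, 0, 0, 0], H)
```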
Coding Theory (Part 1.1)

Restriction: encoding map is linear over F_2 (continued).

- Some simplifications can be done in the ML decoder.
- The all-zero word is always a codeword. For analysis purposes, we can always assume that the all-zero codeword was sent. (For this statement we assumed that the channel is output-symmetric and that the decoder is symmetric.)
- The resulting codes are called binary linear block codes.
- A binary linear code of length n, dimension k, and minimum distance d_min is called an [n, k] binary linear code or an [n, k, d_min] binary linear code.

Coding Theory (Part 2)

Restriction: encoding map is linear over F_2 and cyclic shifts of codewords are again codewords.

- This allows one to use results from linear algebra and results about polynomials. (Fundamental theorem of algebra, discrete Fourier transform.)
- Encoding can be characterized by a monic degree-(n − k) polynomial g(X) ∈ F_2[X]:

    C = { c(X) ∈ F_2[X] | there exists a u(X) ∈ F_2[X] s.t. deg(u(X)) < k and s.t. c(X) = u(X) · g(X) }.

  g(X) is called the generator polynomial.
- There is a monic degree-k polynomial h(X) ∈ F_2[X] such that

    C = { c(X) ∈ F_2[X] | deg(c(X)) < n and c(X) · h(X) = 0 (mod X^n − 1) }.

  h(X) is called the parity-check polynomial.
- Encoding can be done very efficiently (especially in hardware).
- The resulting class of codes is called cyclic block codes.

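Polynomial encoding can be sketched in Python (an illustration, not from the slides). The example uses g(X) = 1 + X + X³, a standard generator of a [7,4] cyclic (Hamming) code, since g(X) divides X⁷ + 1 over F_2; the final check confirms the defining cyclic property.

```python
from itertools import product

def poly_mul_gf2(a, b):
    """Multiply two polynomials over F_2 (coefficients in ascending order)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

# g(X) = 1 + X + X^3 generates a [7,4] cyclic code, because
# X^7 + 1 = (1 + X)(1 + X + X^3)(1 + X^2 + X^3) over F_2.
g = [1, 1, 0, 1]

def encode(u):
    """c(X) = u(X) g(X) with deg u(X) < k = 4, padded to length n = 7."""
    c = poly_mul_gf2(u, g)
    return c + [0] * (7 - len(c))

def cyclic_shift(c):
    """Multiplication by X modulo X^7 + 1, i.e., a cyclic shift."""
    return [c[-1]] + c[:-1]

# Every cyclic shift of a codeword is again a codeword.
codewords = {tuple(encode(list(u))) for u in product((0, 1), repeat=4)}
assert len(codewords) == 16
assert all(tuple(cyclic_shift(list(c))) in codewords for c in codewords)
```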
Coding Theory (Part 3)

Some remarks:

- Cyclic block codes have traditionally been one of the most popular classes of codes. (Reed-Solomon codes, BCH codes, Reed-Muller codes, etc.)
- Within the class of linear block codes there are many special classes, e.g., the class of algebraic-geometry codes. (Here one can use the powerful Riemann-Roch Theorem.)
- Etc.
- See, e.g., the book by MacWilliams and Sloane [2] that contains many results on traditional coding theory.

Coding Theory (Part 4)

- Modern coding theory is based on codes that have a sparse graphical representation with small state-space sizes.
- For such codes, very efficient, although usually suboptimal, decoding algorithms are known (sum-product algorithm decoding, min-sum algorithm decoding, etc.).
- Designing good codes is about finding graphical representations where these decoding algorithms work well.

Traditional vs. Modern Coding and Decoding

                   Code design                Decoding
    Traditional    Reed-Solomon codes etc.    Berlekamp-Massey decoder etc.
    Modern         Codes on graphs            Iterative decoding
                   (LDPC/turbo codes, etc.)   (sum-product algorithm, etc.)

The Law of Large Numbers

The channel coding theorem and many other results in information theory rely on the law of large numbers. That is why coding/decoding works better the longer the codes are. However, in many practical applications one wants to limit delays. Given this, codes used in practice typically have block lengths of a few hundred up to a few thousand (and sometimes a few ten thousand).

References

[1] T. M. Cover and J. A. Thomas, Elements of Information Theory. Wiley Series in Telecommunications. New York: John Wiley & Sons, Inc. (A Wiley-Interscience Publication).
[2] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. New York: North-Holland.
[3] J. L. Massey, Applied Digital Information Theory I and II. Lecture Notes, ETH Zurich. Available online under free_docs.en.html.


More information

Chapter 7: Channel coding:convolutional codes

Chapter 7: Channel coding:convolutional codes Chapter 7: : Convolutional codes University of Limoges meghdadi@ensil.unilim.fr Reference : Digital communications by John Proakis; Wireless communication by Andreas Goldsmith Encoder representation Communication

More information

Polynomial Codes over Certain Finite Fields

Polynomial Codes over Certain Finite Fields Polynomial Codes over Certain Finite Fields A paper by: Irving Reed and Gustave Solomon presented by Kim Hamilton March 31, 2000 Significance of this paper: Introduced ideas that form the core of current

More information

Graph-based codes for flash memory

Graph-based codes for flash memory 1/28 Graph-based codes for flash memory Discrete Mathematics Seminar September 3, 2013 Katie Haymaker Joint work with Professor Christine Kelley University of Nebraska-Lincoln 2/28 Outline 1 Background

More information

Introduction to Convolutional Codes, Part 1

Introduction to Convolutional Codes, Part 1 Introduction to Convolutional Codes, Part 1 Frans M.J. Willems, Eindhoven University of Technology September 29, 2009 Elias, Father of Coding Theory Textbook Encoder Encoder Properties Systematic Codes

More information

ERROR CORRECTING CODES

ERROR CORRECTING CODES ERROR CORRECTING CODES To send a message of 0 s and 1 s from my computer on Earth to Mr. Spock s computer on the planet Vulcan we use codes which include redundancy to correct errors. n q Definition. A

More information

Chapter 6 Reed-Solomon Codes. 6.1 Finite Field Algebra 6.2 Reed-Solomon Codes 6.3 Syndrome Based Decoding 6.4 Curve-Fitting Based Decoding

Chapter 6 Reed-Solomon Codes. 6.1 Finite Field Algebra 6.2 Reed-Solomon Codes 6.3 Syndrome Based Decoding 6.4 Curve-Fitting Based Decoding Chapter 6 Reed-Solomon Codes 6. Finite Field Algebra 6. Reed-Solomon Codes 6.3 Syndrome Based Decoding 6.4 Curve-Fitting Based Decoding 6. Finite Field Algebra Nonbinary codes: message and codeword symbols

More information

Error Detection and Correction: Hamming Code; Reed-Muller Code

Error Detection and Correction: Hamming Code; Reed-Muller Code Error Detection and Correction: Hamming Code; Reed-Muller Code Greg Plaxton Theory in Programming Practice, Spring 2005 Department of Computer Science University of Texas at Austin Hamming Code: Motivation

More information

6.1.1 What is channel coding and why do we use it?

6.1.1 What is channel coding and why do we use it? Chapter 6 Channel Coding 6.1 Introduction 6.1.1 What is channel coding and why do we use it? Channel coding is the art of adding redundancy to a message in order to make it more robust against noise. It

More information

Cyclic codes: overview

Cyclic codes: overview Cyclic codes: overview EE 387, Notes 14, Handout #22 A linear block code is cyclic if the cyclic shift of a codeword is a codeword. Cyclic codes have many advantages. Elegant algebraic descriptions: c(x)

More information

Information redundancy

Information redundancy Information redundancy Information redundancy add information to date to tolerate faults error detecting codes error correcting codes data applications communication memory p. 2 - Design of Fault Tolerant

More information

CSCI 2570 Introduction to Nanocomputing

CSCI 2570 Introduction to Nanocomputing CSCI 2570 Introduction to Nanocomputing Information Theory John E Savage What is Information Theory Introduced by Claude Shannon. See Wikipedia Two foci: a) data compression and b) reliable communication

More information

Communications II Lecture 9: Error Correction Coding. Professor Kin K. Leung EEE and Computing Departments Imperial College London Copyright reserved

Communications II Lecture 9: Error Correction Coding. Professor Kin K. Leung EEE and Computing Departments Imperial College London Copyright reserved Communications II Lecture 9: Error Correction Coding Professor Kin K. Leung EEE and Computing Departments Imperial College London Copyright reserved Outline Introduction Linear block codes Decoding Hamming

More information

Exercise 1. = P(y a 1)P(a 1 )

Exercise 1. = P(y a 1)P(a 1 ) Chapter 7 Channel Capacity Exercise 1 A source produces independent, equally probable symbols from an alphabet {a 1, a 2 } at a rate of one symbol every 3 seconds. These symbols are transmitted over a

More information

Decoding Reed-Muller codes over product sets

Decoding Reed-Muller codes over product sets Rutgers University May 30, 2016 Overview Error-correcting codes 1 Error-correcting codes Motivation 2 Reed-Solomon codes Reed-Muller codes 3 Error-correcting codes Motivation Goal: Send a message Don t

More information

Noisy channel communication

Noisy channel communication Information Theory http://www.inf.ed.ac.uk/teaching/courses/it/ Week 6 Communication channels and Information Some notes on the noisy channel setup: Iain Murray, 2012 School of Informatics, University

More information

Quasi-cyclic Low Density Parity Check codes with high girth

Quasi-cyclic Low Density Parity Check codes with high girth Quasi-cyclic Low Density Parity Check codes with high girth, a work with Marta Rossi, Richard Bresnan, Massimilliano Sala Summer Doctoral School 2009 Groebner bases, Geometric codes and Order Domains Dept

More information

Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel

Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel Pål Ellingsen paale@ii.uib.no Susanna Spinsante s.spinsante@univpm.it Angela Barbero angbar@wmatem.eis.uva.es May 31, 2005 Øyvind Ytrehus

More information

ECEN 655: Advanced Channel Coding

ECEN 655: Advanced Channel Coding ECEN 655: Advanced Channel Coding Course Introduction Henry D. Pfister Department of Electrical and Computer Engineering Texas A&M University ECEN 655: Advanced Channel Coding 1 / 19 Outline 1 History

More information

1.6: Solutions 17. Solution to exercise 1.6 (p.13).

1.6: Solutions 17. Solution to exercise 1.6 (p.13). 1.6: Solutions 17 A slightly more careful answer (short of explicit computation) goes as follows. Taking the approximation for ( N K) to the next order, we find: ( N N/2 ) 2 N 1 2πN/4. (1.40) This approximation

More information

Codes on Graphs. Telecommunications Laboratory. Alex Balatsoukas-Stimming. Technical University of Crete. November 27th, 2008

Codes on Graphs. Telecommunications Laboratory. Alex Balatsoukas-Stimming. Technical University of Crete. November 27th, 2008 Codes on Graphs Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete November 27th, 2008 Telecommunications Laboratory (TUC) Codes on Graphs November 27th, 2008 1 / 31

More information

Reliable Computation over Multiple-Access Channels

Reliable Computation over Multiple-Access Channels Reliable Computation over Multiple-Access Channels Bobak Nazer and Michael Gastpar Dept. of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA, 94720-1770 {bobak,

More information

A Brief Encounter with Linear Codes

A Brief Encounter with Linear Codes Boise State University ScholarWorks Mathematics Undergraduate Theses Department of Mathematics 8-2014 A Brief Encounter with Linear Codes Brent El-Bakri Boise State University, brentelbakri@boisestate.edu

More information

The BCH Bound. Background. Parity Check Matrix for BCH Code. Minimum Distance of Cyclic Codes

The BCH Bound. Background. Parity Check Matrix for BCH Code. Minimum Distance of Cyclic Codes S-723410 BCH and Reed-Solomon Codes 1 S-723410 BCH and Reed-Solomon Codes 3 Background The algebraic structure of linear codes and, in particular, cyclic linear codes, enables efficient encoding and decoding

More information

Codes on graphs and iterative decoding

Codes on graphs and iterative decoding Codes on graphs and iterative decoding Bane Vasić Error Correction Coding Laboratory University of Arizona Funded by: National Science Foundation (NSF) Seagate Technology Defense Advanced Research Projects

More information

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Tara Javidi These lecture notes were originally developed by late Prof. J. K. Wolf. UC San Diego Spring 2014 1 / 8 I

More information

Section 3 Error Correcting Codes (ECC): Fundamentals

Section 3 Error Correcting Codes (ECC): Fundamentals Section 3 Error Correcting Codes (ECC): Fundamentals Communication systems and channel models Definition and examples of ECCs Distance For the contents relevant to distance, Lin & Xing s book, Chapter

More information

Introduction to Low-Density Parity Check Codes. Brian Kurkoski

Introduction to Low-Density Parity Check Codes. Brian Kurkoski Introduction to Low-Density Parity Check Codes Brian Kurkoski kurkoski@ice.uec.ac.jp Outline: Low Density Parity Check Codes Review block codes History Low Density Parity Check Codes Gallager s LDPC code

More information

4 An Introduction to Channel Coding and Decoding over BSC

4 An Introduction to Channel Coding and Decoding over BSC 4 An Introduction to Channel Coding and Decoding over BSC 4.1. Recall that channel coding introduces, in a controlled manner, some redundancy in the (binary information sequence that can be used at the

More information

Error Correction and Trellis Coding

Error Correction and Trellis Coding Advanced Signal Processing Winter Term 2001/2002 Digital Subscriber Lines (xdsl): Broadband Communication over Twisted Wire Pairs Error Correction and Trellis Coding Thomas Brandtner brandt@sbox.tugraz.at

More information

EE 229B ERROR CONTROL CODING Spring 2005

EE 229B ERROR CONTROL CODING Spring 2005 EE 229B ERROR CONTROL CODING Spring 2005 Solutions for Homework 1 1. Is there room? Prove or disprove : There is a (12,7) binary linear code with d min = 5. If there were a (12,7) binary linear code with

More information

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission

More information

Mapper & De-Mapper System Document

Mapper & De-Mapper System Document Mapper & De-Mapper System Document Mapper / De-Mapper Table of Contents. High Level System and Function Block. Mapper description 2. Demodulator Function block 2. Decoder block 2.. De-Mapper 2..2 Implementation

More information

Binary Convolutional Codes

Binary Convolutional Codes Binary Convolutional Codes A convolutional code has memory over a short block length. This memory results in encoded output symbols that depend not only on the present input, but also on past inputs. An

More information

Code design: Computer search

Code design: Computer search Code design: Computer search Low rate codes Represent the code by its generator matrix Find one representative for each equivalence class of codes Permutation equivalences? Do NOT try several generator

More information

Revision of Lecture 5

Revision of Lecture 5 Revision of Lecture 5 Information transferring across channels Channel characteristics and binary symmetric channel Average mutual information Average mutual information tells us what happens to information

More information

LDPC Codes. Intracom Telecom, Peania

LDPC Codes. Intracom Telecom, Peania LDPC Codes Alexios Balatsoukas-Stimming and Athanasios P. Liavas Technical University of Crete Dept. of Electronic and Computer Engineering Telecommunications Laboratory December 16, 2011 Intracom Telecom,

More information

ECEN 604: Channel Coding for Communications

ECEN 604: Channel Coding for Communications ECEN 604: Channel Coding for Communications Lecture: Introduction to Cyclic Codes Henry D. Pfister Department of Electrical and Computer Engineering Texas A&M University ECEN 604: Channel Coding for Communications

More information

Lecture 2 Linear Codes

Lecture 2 Linear Codes Lecture 2 Linear Codes 2.1. Linear Codes From now on we want to identify the alphabet Σ with a finite field F q. For general codes, introduced in the last section, the description is hard. For a code of

More information

Algebraic Codes for Error Control

Algebraic Codes for Error Control little -at- mathcs -dot- holycross -dot- edu Department of Mathematics and Computer Science College of the Holy Cross SACNAS National Conference An Abstract Look at Algebra October 16, 2009 Outline Coding

More information

Chapter 7. Error Control Coding. 7.1 Historical background. Mikael Olofsson 2005

Chapter 7. Error Control Coding. 7.1 Historical background. Mikael Olofsson 2005 Chapter 7 Error Control Coding Mikael Olofsson 2005 We have seen in Chapters 4 through 6 how digital modulation can be used to control error probabilities. This gives us a digital channel that in each

More information

Lecture 19 : Reed-Muller, Concatenation Codes & Decoding problem

Lecture 19 : Reed-Muller, Concatenation Codes & Decoding problem IITM-CS6845: Theory Toolkit February 08, 2012 Lecture 19 : Reed-Muller, Concatenation Codes & Decoding problem Lecturer: Jayalal Sarma Scribe: Dinesh K Theme: Error correcting codes In the previous lecture,

More information

Roll No. :... Invigilator's Signature :.. CS/B.TECH(ECE)/SEM-7/EC-703/ CODING & INFORMATION THEORY. Time Allotted : 3 Hours Full Marks : 70

Roll No. :... Invigilator's Signature :.. CS/B.TECH(ECE)/SEM-7/EC-703/ CODING & INFORMATION THEORY. Time Allotted : 3 Hours Full Marks : 70 Name : Roll No. :.... Invigilator's Signature :.. CS/B.TECH(ECE)/SEM-7/EC-703/2011-12 2011 CODING & INFORMATION THEORY Time Allotted : 3 Hours Full Marks : 70 The figures in the margin indicate full marks

More information

Assume that the follow string of bits constitutes one of the segments we which to transmit.

Assume that the follow string of bits constitutes one of the segments we which to transmit. Cyclic Redundancy Checks( CRC) Cyclic Redundancy Checks fall into a class of codes called Algebraic Codes; more specifically, CRC codes are Polynomial Codes. These are error-detecting codes, not error-correcting

More information

Solutions of Exam Coding Theory (2MMC30), 23 June (1.a) Consider the 4 4 matrices as words in F 16

Solutions of Exam Coding Theory (2MMC30), 23 June (1.a) Consider the 4 4 matrices as words in F 16 Solutions of Exam Coding Theory (2MMC30), 23 June 2016 (1.a) Consider the 4 4 matrices as words in F 16 2, the binary vector space of dimension 16. C is the code of all binary 4 4 matrices such that the

More information

9 THEORY OF CODES. 9.0 Introduction. 9.1 Noise

9 THEORY OF CODES. 9.0 Introduction. 9.1 Noise 9 THEORY OF CODES Chapter 9 Theory of Codes After studying this chapter you should understand what is meant by noise, error detection and correction; be able to find and use the Hamming distance for a

More information

Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes

Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes Igal Sason Department of Electrical Engineering Technion - Israel Institute of Technology Haifa 32000, Israel 2009 IEEE International

More information

MATH 433 Applied Algebra Lecture 21: Linear codes (continued). Classification of groups.

MATH 433 Applied Algebra Lecture 21: Linear codes (continued). Classification of groups. MATH 433 Applied Algebra Lecture 21: Linear codes (continued). Classification of groups. Binary codes Let us assume that a message to be transmitted is in binary form. That is, it is a word in the alphabet

More information

Chapter 9 Fundamental Limits in Information Theory

Chapter 9 Fundamental Limits in Information Theory Chapter 9 Fundamental Limits in Information Theory Information Theory is the fundamental theory behind information manipulation, including data compression and data transmission. 9.1 Introduction o For

More information

Cyclic Redundancy Check Codes

Cyclic Redundancy Check Codes Cyclic Redundancy Check Codes Lectures No. 17 and 18 Dr. Aoife Moloney School of Electronics and Communications Dublin Institute of Technology Overview These lectures will look at the following: Cyclic

More information

ECE8771 Information Theory & Coding for Digital Communications Villanova University ECE Department Prof. Kevin M. Buckley Lecture Set 2 Block Codes

ECE8771 Information Theory & Coding for Digital Communications Villanova University ECE Department Prof. Kevin M. Buckley Lecture Set 2 Block Codes Kevin Buckley - 2010 109 ECE8771 Information Theory & Coding for Digital Communications Villanova University ECE Department Prof. Kevin M. Buckley Lecture Set 2 Block Codes m GF(2 ) adder m GF(2 ) multiplier

More information

Physical Layer and Coding

Physical Layer and Coding Physical Layer and Coding Muriel Médard Professor EECS Overview A variety of physical media: copper, free space, optical fiber Unified way of addressing signals at the input and the output of these media:

More information

Reed Muller Error Correcting Codes

Reed Muller Error Correcting Codes Reed Muller Error Correcting Codes Ben Cooke Abstract. This paper starts by defining key terms and operations used with Reed Muller codes and binary numbers. Reed Muller codes are then defined and encoding

More information

Electrical and Information Technology. Information Theory. Problems and Solutions. Contents. Problems... 1 Solutions...7

Electrical and Information Technology. Information Theory. Problems and Solutions. Contents. Problems... 1 Solutions...7 Electrical and Information Technology Information Theory Problems and Solutions Contents Problems.......... Solutions...........7 Problems 3. In Problem?? the binomial coefficent was estimated with Stirling

More information

UNIT I INFORMATION THEORY. I k log 2

UNIT I INFORMATION THEORY. I k log 2 UNIT I INFORMATION THEORY Claude Shannon 1916-2001 Creator of Information Theory, lays the foundation for implementing logic in digital circuits as part of his Masters Thesis! (1939) and published a paper

More information

Coding Theory and Applications. Solved Exercises and Problems of Cyclic Codes. Enes Pasalic University of Primorska Koper, 2013

Coding Theory and Applications. Solved Exercises and Problems of Cyclic Codes. Enes Pasalic University of Primorska Koper, 2013 Coding Theory and Applications Solved Exercises and Problems of Cyclic Codes Enes Pasalic University of Primorska Koper, 2013 Contents 1 Preface 3 2 Problems 4 2 1 Preface This is a collection of solved

More information

Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes

Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes Thomas R. Halford and Keith M. Chugg Communication Sciences Institute University of Southern California Los Angeles, CA 90089-2565 Abstract

More information

Arrangements, matroids and codes

Arrangements, matroids and codes Arrangements, matroids and codes first lecture Ruud Pellikaan joint work with Relinde Jurrius ACAGM summer school Leuven Belgium, 18 July 2011 References 2/43 1. Codes, arrangements and matroids by Relinde

More information

Coding Theory: Linear-Error Correcting Codes Anna Dovzhik Math 420: Advanced Linear Algebra Spring 2014

Coding Theory: Linear-Error Correcting Codes Anna Dovzhik Math 420: Advanced Linear Algebra Spring 2014 Anna Dovzhik 1 Coding Theory: Linear-Error Correcting Codes Anna Dovzhik Math 420: Advanced Linear Algebra Spring 2014 Sharing data across channels, such as satellite, television, or compact disc, often

More information

Chapter 7 Reed Solomon Codes and Binary Transmission

Chapter 7 Reed Solomon Codes and Binary Transmission Chapter 7 Reed Solomon Codes and Binary Transmission 7.1 Introduction Reed Solomon codes named after Reed and Solomon [9] following their publication in 1960 have been used together with hard decision

More information

Lecture 7 September 24

Lecture 7 September 24 EECS 11: Coding for Digital Communication and Beyond Fall 013 Lecture 7 September 4 Lecturer: Anant Sahai Scribe: Ankush Gupta 7.1 Overview This lecture introduces affine and linear codes. Orthogonal signalling

More information

Lecture 6 I. CHANNEL CODING. X n (m) P Y X

Lecture 6 I. CHANNEL CODING. X n (m) P Y X 6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder

More information

Cyclic codes. I give an example of a shift register with four storage elements and two binary adders.

Cyclic codes. I give an example of a shift register with four storage elements and two binary adders. Good afternoon, gentleman! Today I give you a lecture about cyclic codes. This lecture consists of three parts: I Origin and definition of cyclic codes ;? how to find cyclic codes: The Generator Polynomial

More information

Lecture 4 Noisy Channel Coding

Lecture 4 Noisy Channel Coding Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem

More information

Lecture 3: Error Correcting Codes

Lecture 3: Error Correcting Codes CS 880: Pseudorandomness and Derandomization 1/30/2013 Lecture 3: Error Correcting Codes Instructors: Holger Dell and Dieter van Melkebeek Scribe: Xi Wu In this lecture we review some background on error

More information

Reed-Solomon codes. Chapter Linear codes over finite fields

Reed-Solomon codes. Chapter Linear codes over finite fields Chapter 8 Reed-Solomon codes In the previous chapter we discussed the properties of finite fields, and showed that there exists an essentially unique finite field F q with q = p m elements for any prime

More information

Optimum Soft Decision Decoding of Linear Block Codes

Optimum Soft Decision Decoding of Linear Block Codes Optimum Soft Decision Decoding of Linear Block Codes {m i } Channel encoder C=(C n-1,,c 0 ) BPSK S(t) (n,k,d) linear modulator block code Optimal receiver AWGN Assume that [n,k,d] linear block code C is

More information

New communication strategies for broadcast and interference networks

New communication strategies for broadcast and interference networks New communication strategies for broadcast and interference networks S. Sandeep Pradhan (Joint work with Arun Padakandla and Aria Sahebi) University of Michigan, Ann Arbor Distributed Information Coding

More information

Dr. Cathy Liu Dr. Michael Steinberger. A Brief Tour of FEC for Serial Link Systems

Dr. Cathy Liu Dr. Michael Steinberger. A Brief Tour of FEC for Serial Link Systems Prof. Shu Lin Dr. Cathy Liu Dr. Michael Steinberger U.C.Davis Avago SiSoft A Brief Tour of FEC for Serial Link Systems Outline Introduction Finite Fields and Vector Spaces Linear Block Codes Cyclic Codes

More information

An Introduction to (Network) Coding Theory

An Introduction to (Network) Coding Theory An Introduction to (Network) Coding Theory Anna-Lena Horlemann-Trautmann University of St. Gallen, Switzerland July 12th, 2018 1 Coding Theory Introduction Reed-Solomon codes 2 Introduction Coherent network

More information

channel of communication noise Each codeword has length 2, and all digits are either 0 or 1. Such codes are called Binary Codes.

channel of communication noise Each codeword has length 2, and all digits are either 0 or 1. Such codes are called Binary Codes. 5 Binary Codes You have already seen how check digits for bar codes (in Unit 3) and ISBN numbers (Unit 4) are used to detect errors. Here you will look at codes relevant for data transmission, for example,

More information

: Coding Theory. Notes by Assoc. Prof. Dr. Patanee Udomkavanich October 30, upattane

: Coding Theory. Notes by Assoc. Prof. Dr. Patanee Udomkavanich October 30, upattane 2301532 : Coding Theory Notes by Assoc. Prof. Dr. Patanee Udomkavanich October 30, 2006 http://pioneer.chula.ac.th/ upattane Chapter 1 Error detection, correction and decoding 1.1 Basic definitions and

More information

Lecture Notes on Channel Coding

Lecture Notes on Channel Coding Lecture Notes on Channel Coding arxiv:1607.00974v1 [cs.it] 4 Jul 2016 Georg Böcherer Institute for Communications Engineering Technical University of Munich, Germany georg.boecherer@tum.de July 5, 2016

More information

Iterative Encoding of Low-Density Parity-Check Codes

Iterative Encoding of Low-Density Parity-Check Codes Iterative Encoding of Low-Density Parity-Check Codes David Haley, Alex Grant and John Buetefuer Institute for Telecommunications Research University of South Australia Mawson Lakes Blvd Mawson Lakes SA

More information

Coding Theory and Applications. Linear Codes. Enes Pasalic University of Primorska Koper, 2013

Coding Theory and Applications. Linear Codes. Enes Pasalic University of Primorska Koper, 2013 Coding Theory and Applications Linear Codes Enes Pasalic University of Primorska Koper, 2013 2 Contents 1 Preface 5 2 Shannon theory and coding 7 3 Coding theory 31 4 Decoding of linear codes and MacWilliams

More information

Polar Coding. Part 1 - Background. Erdal Arıkan. Electrical-Electronics Engineering Department, Bilkent University, Ankara, Turkey

Polar Coding. Part 1 - Background. Erdal Arıkan. Electrical-Electronics Engineering Department, Bilkent University, Ankara, Turkey Polar Coding Part 1 - Background Erdal Arıkan Electrical-Electronics Engineering Department, Bilkent University, Ankara, Turkey Algorithmic Coding Theory Workshop June 13-17, 2016 ICERM, Providence, RI

More information

Coding Techniques for Data Storage Systems

Coding Techniques for Data Storage Systems Coding Techniques for Data Storage Systems Thomas Mittelholzer IBM Zurich Research Laboratory /8 Göttingen Agenda. Channel Coding and Practical Coding Constraints. Linear Codes 3. Weight Enumerators and

More information

Communication by Regression: Sparse Superposition Codes

Communication by Regression: Sparse Superposition Codes Communication by Regression: Sparse Superposition Codes Department of Statistics, Yale University Coauthors: Antony Joseph and Sanghee Cho February 21, 2013, University of Texas Channel Communication Set-up

More information

An introduction to basic information theory. Hampus Wessman

An introduction to basic information theory. Hampus Wessman An introduction to basic information theory Hampus Wessman Abstract We give a short and simple introduction to basic information theory, by stripping away all the non-essentials. Theoretical bounds on

More information

Lecture 6: Expander Codes

Lecture 6: Expander Codes CS369E: Expanders May 2 & 9, 2005 Lecturer: Prahladh Harsha Lecture 6: Expander Codes Scribe: Hovav Shacham In today s lecture, we will discuss the application of expander graphs to error-correcting codes.

More information

Lecture 4: Codes based on Concatenation

Lecture 4: Codes based on Concatenation Lecture 4: Codes based on Concatenation Error-Correcting Codes (Spring 206) Rutgers University Swastik Kopparty Scribe: Aditya Potukuchi and Meng-Tsung Tsai Overview In the last lecture, we studied codes

More information

LDPC Codes. Slides originally from I. Land p.1

LDPC Codes. Slides originally from I. Land p.1 Slides originally from I. Land p.1 LDPC Codes Definition of LDPC Codes Factor Graphs to use in decoding Decoding for binary erasure channels EXIT charts Soft-Output Decoding Turbo principle applied to

More information

18.2 Continuous Alphabet (discrete-time, memoryless) Channel

18.2 Continuous Alphabet (discrete-time, memoryless) Channel 0-704: Information Processing and Learning Spring 0 Lecture 8: Gaussian channel, Parallel channels and Rate-distortion theory Lecturer: Aarti Singh Scribe: Danai Koutra Disclaimer: These notes have not

More information