Polar Coding. Part 1 - Background. Erdal Arıkan. Electrical-Electronics Engineering Department, Bilkent University, Ankara, Turkey

Polar Coding, Part 1 - Background. Erdal Arıkan, Electrical-Electronics Engineering Department, Bilkent University, Ankara, Turkey. Algorithmic Coding Theory Workshop, June 13-17, 2016, ICERM, Providence, RI.

Outline: Sequential decoding and the cutoff rate; Guessing and cutoff rate; Boosting the cutoff rate; Pinsker's scheme; Massey's scheme; Polar coding.

Section: Sequential decoding and the cutoff rate

Tree coding and sequential decoding (SD). Consider a tree code (of rate 1/2). A path is chosen and transmitted. Given the channel output, the decoder searches the tree for the correct (transmitted) path. The tree structure turns the ML decoding problem into a tree-search problem. A depth-first search algorithm exists for this task, called sequential decoding (SD).

Search metric. SD uses a metric to distinguish the correct path from the incorrect ones. Fano's metric:

Γ(y^n, x^n) = log [ P(y^n | x^n) / P(y^n) ] - nR,

where n is the path length, x^n the candidate path, y^n the received sequence, and R the code rate.
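As a concrete illustration (a sketch of mine, not part of the talk), the snippet below evaluates Fano's metric for a candidate path over a BSC with crossover probability eps and uniform inputs, so that P(y) = 1/2 for every output symbol; the function name, the example sequences, and the parameter values are all illustrative choices.

```python
import math

def fano_metric(x, y, eps, R):
    """Fano metric (in bits) of candidate path x against received sequence y,
    over a BSC(eps) with uniform inputs, at code rate R bits per symbol."""
    assert len(x) == len(y)
    metric = 0.0
    for xi, yi in zip(x, y):
        p_y_given_x = 1 - eps if xi == yi else eps
        metric += math.log2(p_y_given_x / 0.5) - R   # log P(y|x)/P(y) - R per symbol
    return metric

if __name__ == "__main__":
    y = [0, 1, 1, 0, 0, 1, 0, 1]
    correct = [0, 1, 1, 0, 0, 1, 1, 1]   # differs from y in one position (one channel error)
    wrong = [1, 0, 1, 1, 0, 0, 0, 1]     # an incorrect path
    print("correct path  :", fano_metric(correct, y, eps=0.1, R=0.4))
    print("incorrect path:", fano_metric(wrong, y, eps=0.1, R=0.4))
```

With these numbers the correct path gets a positive metric while the incorrect one drifts strongly negative, which is exactly the behavior the next slide quantifies in expectation.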

History. Tree codes were introduced by Elias (1955) with the aim of reducing the complexity of ML decoding (the tree structure makes it possible to use search heuristics for ML decoding). Sequential decoding was introduced by Wozencraft (1957) as part of his doctoral thesis. Fano (1963) simplified the search algorithm and introduced the above metric.

Drift properties of the metric. On the correct path, the expected metric increment per channel symbol is

Σ_{x,y} p(x, y) [ log p(y|x)/p(y) - R ] = I(X; Y) - R.

On any incorrect path, the expectation is

Σ_{x,y} p(x) p(y) [ log p(y|x)/p(y) - R ] ≤ -R.

So the metric drifts upward along the correct path and downward along incorrect ones whenever R < I(X; Y). A properly designed SD scheme, given enough time, identifies the correct path with probability one at any rate R < I(X; Y).
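To make the drift concrete, here is a small sketch (mine, not from the talk) that evaluates both expectations exactly for a BSC with crossover probability eps and uniform inputs; fano_drifts and the chosen eps and R are illustrative assumptions.

```python
import math

def fano_drifts(eps, R):
    """Expected per-symbol Fano metric increment (bits) on the correct path and
    on an incorrect path, for a BSC(eps) with uniform inputs at rate R."""
    W = {(x, y): (1 - eps) if x == y else eps for x in (0, 1) for y in (0, 1)}
    Py = {y: 0.5 * W[(0, y)] + 0.5 * W[(1, y)] for y in (0, 1)}  # output marginal (= 0.5)

    # Correct path: (x, y) drawn from p(x) W(y|x)
    correct = sum(0.5 * W[(x, y)] * (math.log2(W[(x, y)] / Py[y]) - R)
                  for x in (0, 1) for y in (0, 1))
    # Incorrect path: x and y independent, (x, y) drawn from p(x) p(y)
    incorrect = sum(0.5 * Py[y] * (math.log2(W[(x, y)] / Py[y]) - R)
                    for x in (0, 1) for y in (0, 1))
    return correct, incorrect

if __name__ == "__main__":
    eps, R = 0.1, 0.4
    c, i = fano_drifts(eps, R)
    I = 1 + eps * math.log2(eps) + (1 - eps) * math.log2(1 - eps)  # BSC symmetric capacity
    print(f"correct-path drift   = {c:.4f}  (I - R = {I - R:.4f})")
    print(f"incorrect-path drift = {i:.4f}  (should be <= -R = {-R})")
```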

Computation problem in sequential decoding. Computation in sequential decoding is a random quantity, depending on the code rate R and the noise realization. Bursts of noise create barriers for the depth-first search algorithm, necessitating excessive backtracking in the search. Still, the average computation per decoded digit in sequential decoding can be kept bounded provided the code rate R is below the cutoff rate

R_0 = -log Σ_y [ Σ_x Q(x) √W(y|x) ]².

So, SD solves the coding problem for rates below R_0. Indeed, SD was the method of choice in space communications, albeit briefly.
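A quick way to evaluate this quantity numerically (my own sketch, not part of the talk): the function below computes R_0 for a given input distribution Q and a channel specified as a row-stochastic matrix with W[x, y] = W(y|x).

```python
import numpy as np

def cutoff_rate(Q, W, base=2):
    """R_0 = -log sum_y ( sum_x Q(x) * sqrt(W(y|x)) )^2 for a DMC.
    Q: input distribution, shape (|X|,); W: channel matrix, W[x, y] = W(y|x)."""
    inner = np.sqrt(W).T @ Q          # sum_x Q(x) sqrt(W(y|x)), one entry per output y
    return -np.log(np.sum(inner ** 2)) / np.log(base)

if __name__ == "__main__":
    eps = 0.1
    bsc = np.array([[1 - eps, eps], [eps, 1 - eps]])
    Q = np.array([0.5, 0.5])          # uniform input, optimal for the BSC by symmetry
    print("R0(BSC(0.1)) =", cutoff_rate(Q, bsc), "bits")
```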

References on complexity of sequential decoding. Achievability: Wozencraft (1957), Reiffen (1962), Fano (1963), Stiglitz and Yudkin (1964). Converse: Jacobs and Berlekamp (1967). Refinements: Wozencraft and Jacobs (1965), Savage (1966), Gallager (1968), Jelinek (1968), Forney (1974), Arıkan (1986), Arıkan (1994).

Section: Guessing and cutoff rate

A computational model for sequential decoding. SD visits nodes at level N in a certain order. No-look-ahead assumption: SD forgets what it saw beyond level N upon backtracking. Complexity measure G_N: the number of nodes searched (visited) at level N until the correct node is visited for the first time.

A bound on computational complexity. Let R be a fixed code rate. There exist tree codes of rate R for which E[G_N] is at most roughly e^{N(R - R_0)} (plus a constant); conversely, for any tree code of rate R, E[G_N] is at least roughly e^{N(R - R_0)} (up to factors polynomial in N). In particular, the expected search effort at level N can be kept bounded in N if and only if R < R_0.

The Guessing Problem. Alice draws a sample of a random variable X ~ P. Bob wishes to determine X by asking questions of the form "Is X equal to x?", which are answered truthfully by Alice. Bob's goal is to minimize the expected number of questions until he gets a YES answer.

Guessing with Side Information. Alice samples (X, Y) ~ P(x, y). Bob observes Y and is to determine X by asking the same type of questions, "Is X equal to x?". The goal is to minimize the expected number of guesses.

Optimal guessing strategies. Let G(x) be the number of guesses needed to determine X when X = x. The expected number of guesses is E[G(X)] = Σ_{x∈X} P(x) G(x). A guessing strategy minimizes E[G] if it guesses in decreasing order of probability, i.e., P(x) > P(x') implies G(x) < G(x').

Upper bound on guessing effort. For any optimal guessing function G*,

E[G*(X)] ≤ [ Σ_x √P(x) ]².

Proof. An optimal strategy guesses in decreasing order of probability, so G*(x) ≤ Σ_{x'} √(P(x')/P(x)) (every x' guessed no later than x has P(x') ≥ P(x), hence contributes at least 1 to the sum). Therefore

E[G*(X)] ≤ Σ_x P(x) Σ_{x'} √(P(x')/P(x)) = [ Σ_x √P(x) ]².

Lower bound on guessing effort. For any guessing function for a target r.v. X with M possible values,

E[G(X)] ≥ (1 + ln M)^{-1} [ Σ_x √P(x) ]².

For the proof we use the following variant of Hölder's inequality.

Lemma. Let a_i, p_i be positive numbers. Then

Σ_i a_i p_i ≥ [ Σ_i a_i^{-1} ]^{-1} [ Σ_i √p_i ]².

Proof. Let λ = 1/2 and put A_i = a_i^{-λ}, B_i = a_i^λ p_i^λ in Hölder's inequality

Σ_i A_i B_i ≤ [ Σ_i A_i^{1/(1-λ)} ]^{1-λ} [ Σ_i B_i^{1/λ} ]^λ,

which gives Σ_i √p_i ≤ [ Σ_i a_i^{-1} ]^{1/2} [ Σ_i a_i p_i ]^{1/2}.

Proof of Lower Bound. Let p_G(i) denote the probability of the value guessed at step i. Applying the Lemma with a_i = i,

E[G(X)] = Σ_{i=1}^M i p_G(i)
        ≥ [ Σ_{i=1}^M 1/i ]^{-1} [ Σ_{i=1}^M √p_G(i) ]²
        = [ Σ_{i=1}^M 1/i ]^{-1} [ Σ_x √P(x) ]²
        ≥ (1 + ln M)^{-1} [ Σ_x √P(x) ]².

Essence of the inequalities. For any set of real numbers p_1 ≥ p_2 ≥ ··· ≥ p_M > 0,

(1 + ln M)^{-1} [ Σ_{i=1}^M √p_i ]² ≤ Σ_{i=1}^M i p_i ≤ [ Σ_{i=1}^M √p_i ]².
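As a sanity check (my own sketch, not from the talk), the snippet below draws a random distribution, computes the optimal guessing effort Σ_i i·p_i with the probabilities sorted in decreasing order, and verifies that it sits between the two bounds; guessing_bounds_check, M, and the seed are illustrative choices.

```python
import math
import random

def guessing_bounds_check(M=1000, seed=0):
    random.seed(seed)
    p = [random.random() for _ in range(M)]
    s = sum(p)
    p = sorted((v / s for v in p), reverse=True)        # p_1 >= p_2 >= ... >= p_M

    effort = sum((i + 1) * pi for i, pi in enumerate(p))  # E[G*] = sum_i i * p_i
    upper = sum(math.sqrt(pi) for pi in p) ** 2           # [sum_i sqrt(p_i)]^2
    lower = upper / (1 + math.log(M))                     # (1 + ln M)^{-1} [sum_i sqrt(p_i)]^2
    assert lower <= effort <= upper
    return lower, effort, upper

if __name__ == "__main__":
    print("lower, E[G*], upper:", guessing_bounds_check())
```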

Guessing Random Vectors. Let X = (X_1, ..., X_n) ~ P(x_1, ..., x_n). Guessing X means asking questions of the form "Is X = x?" for possible values x = (x_1, ..., x_n) of X. Notice that coordinate-wise probes of the type "Is X_i = x_i?" are not allowed.

Complexity of Vector Guessing. Suppose X_i has M_i possible values, i = 1, ..., n. Then

[1 + ln(M_1 ··· M_n)]^{-1} ≤ E[G*(X_1, ..., X_n)] / [ Σ_{x_1,...,x_n} √P(x_1, ..., x_n) ]² ≤ 1.

In particular, if X_1, ..., X_n are i.i.d. ~ P with a common alphabet X,

[1 + n ln|X|]^{-1} ≤ E[G*(X_1, ..., X_n)] / [ Σ_{x∈X} √P(x) ]^{2n} ≤ 1.

Guessing with Side Information. Let (X, Y) be a pair of random variables with joint distribution P(x, y). Y is known; X is to be guessed as before. Let G(x|y) denote the number of guesses when X = x and Y = y.

Lower Bound. For any guessing strategy,

E[G(X|Y)] ≥ (1 + ln M)^{-1} Σ_y [ Σ_x √P(x, y) ]²,

where M is the number of possible values of X. Proof:

E[G(X|Y)] = Σ_y P(y) E[G(X|Y = y)]
          ≥ Σ_y P(y) (1 + ln M)^{-1} [ Σ_x √P(x|y) ]²
          = (1 + ln M)^{-1} Σ_y [ Σ_x √P(x, y) ]².

Upper bound. Optimal guessing functions satisfy

E[G*(X|Y)] ≤ Σ_y [ Σ_x √P(x, y) ]².

Proof:

E[G*(X|Y)] = Σ_y P(y) Σ_x P(x|y) G*(x|y)
           ≤ Σ_y P(y) [ Σ_x √P(x|y) ]²
           = Σ_y [ Σ_x √P(x, y) ]².

Generalization to Random Vectors. For optimal guessing functions,

[1 + ln(M_1 ··· M_k)]^{-1} ≤ E[G*(X_1, ..., X_k | Y_1, ..., Y_n)] / Σ_{y_1,...,y_n} [ Σ_{x_1,...,x_k} √P(x_1, ..., x_k, y_1, ..., y_n) ]² ≤ 1,

where M_i denotes the number of possible values of X_i.

A guessing decoder. Consider a block code with M codewords x_1, ..., x_M of block length N. Suppose a codeword is chosen at random and sent over a channel W. Given the channel output y, a guessing decoder decodes by asking questions of the form "Is the correct codeword the m-th one?", to which it receives a truthful YES or NO answer. On a NO answer it repeats the question with a new m. The complexity C for this decoder is the number of questions until a YES answer.

Optimal guessing decoder. An optimal guessing decoder is one that minimizes the expected complexity E[C]. Clearly, E[C] is minimized by generating the guesses in decreasing order of the likelihoods W(y|x_m): the 1st guess is the most likely codeword given y, the 2nd guess the second most likely, and so on, until the correct codeword is obtained and guessing stops. The complexity C then equals the number of guesses L.

Application to the guessing decoder. Take a block code C = {x_1, ..., x_M} with M = e^{NR} codewords of block length N. A codeword X is chosen at random and sent over a DMC W. Given the channel output vector Y, the decoder guesses X. This is a special case of guessing with side information where

P(X = x, Y = y) = e^{-NR} Π_{i=1}^N W(y_i | x_i),  x ∈ C.

Cutoff rate bound. Applying the lower bound on guessing with side information, with Q^N denoting the uniform distribution on the code,

E[G*(X|Y)] ≥ [1 + NR]^{-1} Σ_y [ Σ_x √P(x, y) ]²
           = [1 + NR]^{-1} e^{NR} Σ_y [ Σ_x Q^N(x) √W^N(y|x) ]²
           ≥ [1 + NR]^{-1} e^{N(R - R_0(W))},

where the last step uses the fact that, for a memoryless channel, Σ_y [ Σ_x Q^N(x) √W^N(y|x) ]² ≥ e^{-N R_0(W)} for every input distribution Q^N, with

R_0(W) = max_Q { -ln Σ_y [ Σ_x Q(x) √W(y|x) ]² }

the channel cutoff rate.
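To see the force of this bound numerically (a sketch of mine, assuming a BSC with crossover probability 0.1 and using the closed-form BSC cutoff rate in nats), evaluate the right-hand side at a rate slightly above R_0: the expected number of guesses, and hence the work of any guessing decoder, grows exponentially with the block length.

```python
import math

eps = 0.1
R0 = math.log(2.0 / (1.0 + 2.0 * math.sqrt(eps * (1.0 - eps))))  # BSC cutoff rate, nats
R = R0 + 0.05   # rate slightly above the cutoff rate, nats per channel use

for N in (100, 200, 400, 800):
    lower_bound = math.exp(N * (R - R0)) / (1.0 + N * R)   # (1 + NR)^{-1} e^{N(R - R0)}
    print(f"N = {N:4d}: E[guesses] >= {lower_bound:.3e}")
```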

Section: Boosting the cutoff rate

Boosting the cutoff rate. It was clear almost from the beginning that R_0 was at best shaky in its role as a limit to practical communications. There were many attempts to boost the cutoff rate by devising clever schemes for searching a tree. One striking example is Pinsker's scheme, which displayed the strange nature of R_0.

Section: Pinsker's scheme

Binary Symmetric Channel. We will describe Pinsker's scheme using the BSC example. For the BSC with crossover probability ε:

Capacity: C = 1 + ε log_2(ε) + (1 - ε) log_2(1 - ε)
Cutoff rate: R_0 = 1 - log_2(1 + 2 √(ε(1 - ε)))
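Here is a small sketch (mine, not in the talk) that tabulates these two quantities and their ratio, which is what the next figure plots; the function names and the sample crossover probabilities are illustrative.

```python
import math

def bsc_capacity(eps):
    if eps in (0.0, 1.0):
        return 1.0
    return 1 + eps * math.log2(eps) + (1 - eps) * math.log2(1 - eps)

def bsc_cutoff_rate(eps):
    return 1 - math.log2(1 + 2 * math.sqrt(eps * (1 - eps)))

if __name__ == "__main__":
    for eps in (0.001, 0.01, 0.05, 0.11, 0.25):
        C, R0 = bsc_capacity(eps), bsc_cutoff_rate(eps)
        print(f"eps = {eps:5.3f}:  C = {C:.4f}  R0 = {R0:.4f}  R0/C = {R0 / C:.3f}")
```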

[Figure: R_0 and C for the BSC, and the ratio R_0/C, plotted against the crossover probability ε.]

Pinsker's scheme. Based on the observations that, as ε → 0, R_0(ε)/C(ε) → 1 and R_0(ε) → 1, Pinsker (1965) proposed a concatenation scheme that achieved capacity within a constant average cost per decoded bit, irrespective of the level of reliability.

Pinsker's scheme. [Diagram: K² data streams d_1, ..., d_{K²} enter K² identical convolutional encoders (CE); their outputs u_1, ..., u_{K²} are jointly encoded by an inner block encoder, sent through N² independent copies of W, decoded by an inner ML block decoder, and the resulting estimates û_1, ..., û_{K²} feed K² independent sequential decoders (SD).] The inner block code does the initial clean-up at huge but finite complexity; the outer convolutional encoding (CE) and sequential decoding (SD) boost the reliability at little extra cost.

Discussion. Although Pinsker's scheme made a very strong theoretical point, it was not practical. There were many more attempts to get around the R_0 barrier in the 1960s:
D. Falconer, "A Hybrid Sequential and Algebraic Decoding Scheme," Sc.D. thesis, Dept. of Elec. Eng., M.I.T., 1967.
I. Stiglitz, "Iterative sequential decoding," IEEE Transactions on Information Theory, vol. 15, no. 6, Nov. 1969.
F. Jelinek and J. Cocke, "Bootstrap hybrid decoding for symmetrical binary input channels," Inform. Contr., vol. 18, no. 3, Apr. 1971.
It is fair to say that none of these schemes had any practical impact.

R_0 as practical capacity. The failure to beat the cutoff rate bound in a meaningful manner despite intense efforts elevated R_0 to the status of a realistic limit to reliable communications. R_0 appears as the key figure of merit for communication system design in the influential works of the period:
Wozencraft and Jacobs, Principles of Communication Engineering, 1965.
Wozencraft and Kennedy, "Modulation and demodulation for probabilistic coding," IEEE Trans. Inform. Theory, 1966.
Massey, "Coding and modulation in digital communications," Zürich, 1974.
Forney (1995) gives a first-hand account of this situation in his Shannon Lecture, "Performance and Complexity."

Other attempts to boost the cutoff rate. Efforts to beat the cutoff rate continue to this day: D. J. Costello and F. Jelinek; P. R. Chevillat and D. J. Costello Jr.; F. Hemmati; B. Radosavljevic, E. Arıkan, B. Hajek; J. Belzile and D. Haccoun; S. Kallel and K. Li; E. Arıkan. In fact, polar coding originates from such attempts.

Section: Massey's scheme

The R_0 debate. A case study by McEliece (1980) cast a big doubt on the significance of R_0 as a practical limit. McEliece's study was concerned with a Pulse Position Modulation (PPM) scheme, modeled as a q-ary erasure channel with erasure probability ε:

Capacity: C(q) = (1 - ε) log q
Cutoff rate: R_0(q) = log [ q / (1 + (q - 1)ε) ]

As the bandwidth (q) grew, R_0(q)/C(q) → 0. Algebraic coding (Reed-Solomon) scored a big win over probabilistic coding!
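A numerical sketch (mine, not from the talk) of McEliece's point, using the two formulas above: as the alphabet size q grows with the erasure probability fixed, the ratio R_0(q)/C(q) collapses.

```python
import math

def qec_capacity(q, eps):
    return (1 - eps) * math.log2(q)

def qec_cutoff_rate(q, eps):
    return math.log2(q / (1 + (q - 1) * eps))

if __name__ == "__main__":
    eps = 0.1
    for q in (2, 4, 16, 256, 4096):
        C, R0 = qec_capacity(q, eps), qec_cutoff_rate(q, eps)
        print(f"q = {q:5d}:  C = {C:7.3f}  R0 = {R0:6.3f}  R0/C = {R0 / C:.3f}")
```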

Massey meets the challenge. Massey (1981) showed that there was a different way of doing coding and modulation on a q-ary erasure channel that boosted R_0 effortlessly. Paradoxically, even as Massey restored the status of R_0, he exhibited the flaky nature of this parameter.

Channel splitting to boost the cutoff rate (Massey, 1981). Begin with a quaternary erasure channel (QEC) with erasure probability ε. Relabel the four inputs as pairs of bits. Then split the QEC into two binary erasure channels (BECs). The two BECs are fully correlated: erasures occur jointly.

Capacity and cutoff rate for one QEC vs. two BECs. Compare ordinary coding of the QEC with independent coding of the two BECs:

C(QEC) = 2(1 - ε),  R_0(QEC) = log [ 4 / (1 + 3ε) ]
C(BEC) = 1 - ε,     R_0(BEC) = log [ 2 / (1 + ε) ]

So C(QEC) = 2 C(BEC): splitting loses no capacity. But R_0(QEC) ≤ 2 R_0(BEC), with equality iff ε = 0 or 1: splitting strictly increases the total cutoff rate for 0 < ε < 1.
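The following sketch (mine, assuming the formulas on this slide) checks Massey's observation numerically across a few erasure probabilities.

```python
import math

def compare_split(eps):
    C_qec = 2 * (1 - eps)
    R0_qec = math.log2(4 / (1 + 3 * eps))
    C_bec = 1 - eps
    R0_bec = math.log2(2 / (1 + eps))
    return C_qec, 2 * C_bec, R0_qec, 2 * R0_bec

if __name__ == "__main__":
    for eps in (0.1, 0.3, 0.5, 0.9):
        cq, cb2, rq, rb2 = compare_split(eps)
        print(f"eps = {eps}: C(QEC) = {cq:.3f} = 2*C(BEC) = {cb2:.3f}, "
              f"R0(QEC) = {rq:.3f} <= 2*R0(BEC) = {rb2:.3f}")
```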

Cutoff rate improvement by splitting. [Figure: capacity and cutoff rate (bits) vs. erasure probability ε, showing the QEC capacity, the QEC cutoff rate, and the cutoff rate achievable after splitting into BECs.]

Comparison of Pinsker's and Massey's schemes.
Pinsker: construct a superchannel by combining independent copies of a given DMC W; split the superchannel into correlated subchannels; ignore the correlations between the subchannels, encoding and decoding them independently. Can be used universally; can achieve capacity; not practical.
Massey: split the given DMC W into correlated subchannels; ignore the correlations between the subchannels, encoding and decoding them independently. Applicable only to specific channels; cannot achieve capacity; practical.

A conservation law for the cutoff rate. [Diagram: K bits enter a block encoder, N symbols pass through N uses of the memoryless channel W, and a block decoder outputs K bits; the encoder-channel-decoder chain forms a derived (vector) channel of rate K/N.] Parallel channels theorem (Gallager, 1965):

R_0(derived vector channel) ≤ N R_0(W).

"Cleaning up" the channel by pre-/post-processing can only hurt R_0. This shows that boosting the cutoff rate requires more than one sequential decoder.

Section: Polar coding

Prescription for a new scheme. Consider small constructions. Retain independent encoding for the subchannels. Do not ignore the correlations between subchannels at the expense of capacity. This points to multi-level coding and successive cancellation decoding.

Multi-stage decoding architecture. [Diagram: data streams d_1, ..., d_N enter N convolutional encoders (CE); their outputs u_1, ..., u_N pass through a one-to-one mapper f_N to produce x_1, ..., x_N, which are sent over N independent copies of W; a soft-decision generator g_N turns the channel outputs y_1, ..., y_N into soft decisions l_1, ..., l_N, which feed N sequential decoders (SD) producing the estimates.] N convolutional encoders, N independent copies of W, N sequential decoders.

Notation. Let V : F_2 = {0, 1} → Y be an arbitrary binary-input memoryless channel. Let (X, Y) be an input-output ensemble for channel V with X uniform on F_2. The (symmetric) capacity is defined as

I(V) = I(X; Y) = Σ_{y∈Y} Σ_{x∈F_2} (1/2) V(y|x) log [ V(y|x) / ( (1/2) V(y|0) + (1/2) V(y|1) ) ].

The (symmetric) cutoff rate is defined as

R_0(V) = R_0(X; Y) = -log Σ_{y∈Y} [ Σ_{x∈F_2} (1/2) √V(y|x) ]².

The basic construction. Given two copies of a binary-input channel W : F_2 = {0, 1} → Y, combine them through the transformation X_1 = U_1 ⊕ U_2, X_2 = U_2 (U_1 enters an XOR whose output feeds the first copy of W, while U_2 feeds both the XOR and the second copy of W), and consider the two synthesized channels W⁻ : F_2 → Y² and W⁺ : F_2 → Y² × F_2 with

W⁻(y_1 y_2 | u_1) = Σ_{u_2} (1/2) W(y_1 | u_1 ⊕ u_2) W(y_2 | u_2),
W⁺(y_1 y_2 u_1 | u_2) = (1/2) W(y_1 | u_1 ⊕ u_2) W(y_2 | u_2).
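Here is a sketch of mine (not from the talk) that builds W⁻ and W⁺ from a binary-input channel given as a matrix W[x, y] = W(y|x), computes the symmetric capacity and cutoff rate of the Notation slide for each, and checks the conservation and creation relations stated on the next two slides. The function names and the BSC(0.1) example are my own choices.

```python
import numpy as np

def sym_I(V):
    """Symmetric capacity (bits) of a binary-input channel V[x, y] = V(y|x)."""
    Py = 0.5 * (V[0] + V[1])
    ratio = np.where(V > 0, V / np.where(Py > 0, Py, 1.0), 1.0)  # zero terms drop out
    return float(np.sum(0.5 * V * np.log2(ratio)))

def sym_R0(V):
    """Symmetric cutoff rate (bits) of a binary-input channel."""
    return float(-np.log2(np.sum((0.5 * (np.sqrt(V[0]) + np.sqrt(V[1]))) ** 2)))

def polar_transform(W):
    """Return (W_minus, W_plus) for a binary-input channel W[x, y] = W(y|x).
    W_minus has outputs (y1, y2); W_plus has outputs (y1, y2, u1)."""
    ny = W.shape[1]
    Wm = np.zeros((2, ny, ny))
    Wp = np.zeros((2, ny, ny, 2))
    for u1 in (0, 1):
        for u2 in (0, 1):
            for y1 in range(ny):
                for y2 in range(ny):
                    p = W[u1 ^ u2, y1] * W[u2, y2]
                    Wm[u1, y1, y2] += 0.5 * p       # sum over u2 with weight 1/2
                    Wp[u2, y1, y2, u1] += 0.5 * p   # output of W+ is (y1, y2, u1)
    return Wm.reshape(2, -1), Wp.reshape(2, -1)

if __name__ == "__main__":
    eps = 0.1
    W = np.array([[1 - eps, eps], [eps, 1 - eps]])   # example: BSC(0.1)
    Wm, Wp = polar_transform(W)
    print("I :", sym_I(Wm), "+", sym_I(Wp), "=", sym_I(Wm) + sym_I(Wp),
          " (2*I(W) =", 2 * sym_I(W), ")")
    print("R0:", sym_R0(Wm), "+", sym_R0(Wp), ">=", 2 * sym_R0(W))
```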

The 2x2 transformation is information lossless. With independent, uniform U_1, U_2,

I(W⁻) = I(U_1; Y_1 Y_2),  I(W⁺) = I(U_2; Y_1 Y_2 U_1).

Thus, by the chain rule,

I(W⁻) + I(W⁺) = I(U_1 U_2; Y_1 Y_2) = 2 I(W),

and I(W⁻) ≤ I(W) ≤ I(W⁺).

The 2x2 transformation creates cutoff rate. With independent, uniform U_1, U_2,

R_0(W⁻) = R_0(U_1; Y_1 Y_2),  R_0(W⁺) = R_0(U_2; Y_1 Y_2 U_1).

Theorem (2005). Correlation helps create cutoff rate:

R_0(W⁻) + R_0(W⁺) ≥ 2 R_0(W),

with equality iff W is a perfect channel, I(W) = 1, or a pure-noise channel, I(W) = 0. The cutoff rates start polarizing: R_0(W⁻) ≤ R_0(W) ≤ R_0(W⁺).

Recursive continuation. Do the same recursively. Given W: duplicate W and obtain W⁻ and W⁺. Duplicate W⁻ (resp. W⁺) and obtain W⁻⁻ and W⁻⁺ (resp. W⁺⁻ and W⁺⁺). Duplicate W⁻⁻ (resp. W⁻⁺, W⁺⁻, W⁺⁺) and obtain W⁻⁻⁻ and W⁻⁻⁺ (resp. W⁻⁺⁻ and W⁻⁺⁺, W⁺⁻⁻ and W⁺⁻⁺, W⁺⁺⁻ and W⁺⁺⁺). And so on.

Polarization Process. Evolution of I = I(W), I⁻ = I(W⁻), I⁺ = I(W⁺), I⁻⁻ = I(W⁻⁻), etc. [Figure, shown over several build-up slides: a binary tree of the successive values I, (I⁻, I⁺), (I⁻⁻, I⁻⁺, I⁺⁻, I⁺⁺), ..., drifting toward the extremes 0 and 1 as the recursion deepens.]
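For the BEC the recursion has a simple closed form (a standard fact, used here as an assumption rather than taken from these slides): if I = I(W) then I(W⁻) = I² and I(W⁺) = 2I - I². The sketch below (mine) iterates this n times and reports how the 2^n synthesized values spread toward 0 and 1.

```python
def bec_polarize(I0, n):
    """Iterate the BEC polarization recursion n times starting from I(W) = I0.
    Returns the list of 2**n synthesized symmetric capacities."""
    levels = [I0]
    for _ in range(n):
        levels = [f(I) for I in levels for f in (lambda t: t * t,            # I(W-) = I^2
                                                 lambda t: 2 * t - t * t)]   # I(W+) = 2I - I^2
    return levels

if __name__ == "__main__":
    caps = bec_polarize(I0=0.5, n=10)   # BEC with erasure probability 0.5
    good = sum(c > 0.99 for c in caps) / len(caps)
    bad = sum(c < 0.01 for c in caps) / len(caps)
    print(f"fraction near 1: {good:.3f}, fraction near 0: {bad:.3f}, "
          f"in between: {1 - good - bad:.3f}")
```

As n grows, the fraction of near-perfect synthesized channels approaches I(W) (here 0.5) and the fraction of useless ones approaches 1 - I(W); this is the polarization phenomenon the figure above illustrates.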


Coding Techniques for Data Storage Systems

Coding Techniques for Data Storage Systems Coding Techniques for Data Storage Systems Thomas Mittelholzer IBM Zurich Research Laboratory /8 Göttingen Agenda. Channel Coding and Practical Coding Constraints. Linear Codes 3. Weight Enumerators and

More information

Serially Concatenated Polar Codes

Serially Concatenated Polar Codes Received September 11, 2018, accepted October 13, 2018. Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000. Digital Object Identifier 10.1109/ACCESS.2018.2877720 Serially Concatenated

More information

Decoding Reed-Muller codes over product sets

Decoding Reed-Muller codes over product sets Rutgers University May 30, 2016 Overview Error-correcting codes 1 Error-correcting codes Motivation 2 Reed-Solomon codes Reed-Muller codes 3 Error-correcting codes Motivation Goal: Send a message Don t

More information

Reed-Solomon codes. Chapter Linear codes over finite fields

Reed-Solomon codes. Chapter Linear codes over finite fields Chapter 8 Reed-Solomon codes In the previous chapter we discussed the properties of finite fields, and showed that there exists an essentially unique finite field F q with q = p m elements for any prime

More information

Lecture 19 : Reed-Muller, Concatenation Codes & Decoding problem

Lecture 19 : Reed-Muller, Concatenation Codes & Decoding problem IITM-CS6845: Theory Toolkit February 08, 2012 Lecture 19 : Reed-Muller, Concatenation Codes & Decoding problem Lecturer: Jayalal Sarma Scribe: Dinesh K Theme: Error correcting codes In the previous lecture,

More information

An analysis of the computational complexity of sequential decoding of specific tree codes over Gaussian channels

An analysis of the computational complexity of sequential decoding of specific tree codes over Gaussian channels An analysis of the computational complexity of sequential decoding of specific tree codes over Gaussian channels B. Narayanaswamy, Rohit Negi and Pradeep Khosla Department of ECE Carnegie Mellon University

More information

Lecture 4 Channel Coding

Lecture 4 Channel Coding Capacity and the Weak Converse Lecture 4 Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 15, 2014 1 / 16 I-Hsiang Wang NIT Lecture 4 Capacity

More information

Approaching Blokh-Zyablov Error Exponent with Linear-Time Encodable/Decodable Codes

Approaching Blokh-Zyablov Error Exponent with Linear-Time Encodable/Decodable Codes Approaching Blokh-Zyablov Error Exponent with Linear-Time Encodable/Decodable Codes 1 Zheng Wang, Student Member, IEEE, Jie Luo, Member, IEEE arxiv:0808.3756v1 [cs.it] 27 Aug 2008 Abstract We show that

More information

Lecture 11: Polar codes construction

Lecture 11: Polar codes construction 15-859: Information Theory and Applications in TCS CMU: Spring 2013 Lecturer: Venkatesan Guruswami Lecture 11: Polar codes construction February 26, 2013 Scribe: Dan Stahlke 1 Polar codes: recap of last

More information

~1~~ ~ ~~~ ~~~

~1~~ ~ ~~~ ~~~ ~~ ~ IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 38, NO. 6, NOVEMBER 1992 1833 TABLE XI WEIGHT ENUMERATOR OF ck, A [52,27,9]-CODE 1 + 170~ + 442~ + 714y + 6188~ + 28560~ +5304Oyl4 + 77520~ + 308958~

More information

Lecture 4: Proof of Shannon s theorem and an explicit code

Lecture 4: Proof of Shannon s theorem and an explicit code CSE 533: Error-Correcting Codes (Autumn 006 Lecture 4: Proof of Shannon s theorem and an explicit code October 11, 006 Lecturer: Venkatesan Guruswami Scribe: Atri Rudra 1 Overview Last lecture we stated

More information

ECE Advanced Communication Theory, Spring 2009 Homework #1 (INCOMPLETE)

ECE Advanced Communication Theory, Spring 2009 Homework #1 (INCOMPLETE) ECE 74 - Advanced Communication Theory, Spring 2009 Homework #1 (INCOMPLETE) 1. A Huffman code finds the optimal codeword to assign to a given block of source symbols. (a) Show that cannot be a Huffman

More information

Low-complexity error correction in LDPC codes with constituent RS codes 1

Low-complexity error correction in LDPC codes with constituent RS codes 1 Eleventh International Workshop on Algebraic and Combinatorial Coding Theory June 16-22, 2008, Pamporovo, Bulgaria pp. 348-353 Low-complexity error correction in LDPC codes with constituent RS codes 1

More information

Feedback Capacity of a Class of Symmetric Finite-State Markov Channels

Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Nevroz Şen, Fady Alajaji and Serdar Yüksel Department of Mathematics and Statistics Queen s University Kingston, ON K7L 3N6, Canada

More information

Sequential Decoding of Binary Convolutional Codes

Sequential Decoding of Binary Convolutional Codes Sequential Decoding of Binary Convolutional Codes Yunghsiang S. Han Dept. Computer Science and Information Engineering, National Chi Nan University Taiwan E-mail: yshan@csie.ncnu.edu.tw Y. S. Han Sequential

More information

Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels

Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels Nan Liu and Andrea Goldsmith Department of Electrical Engineering Stanford University, Stanford CA 94305 Email:

More information

Lecture 12. Block Diagram

Lecture 12. Block Diagram Lecture 12 Goals Be able to encode using a linear block code Be able to decode a linear block code received over a binary symmetric channel or an additive white Gaussian channel XII-1 Block Diagram Data

More information

Lecture 8: Shannon s Noise Models

Lecture 8: Shannon s Noise Models Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecture 8: Shannon s Noise Models September 14, 2007 Lecturer: Atri Rudra Scribe: Sandipan Kundu& Atri Rudra Till now we have

More information

THIS paper is aimed at designing efficient decoding algorithms

THIS paper is aimed at designing efficient decoding algorithms IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 7, NOVEMBER 1999 2333 Sort-and-Match Algorithm for Soft-Decision Decoding Ilya Dumer, Member, IEEE Abstract Let a q-ary linear (n; k)-code C be used

More information

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122 Lecture 5: Channel Capacity Copyright G. Caire (Sample Lectures) 122 M Definitions and Problem Setup 2 X n Y n Encoder p(y x) Decoder ˆM Message Channel Estimate Definition 11. Discrete Memoryless Channel

More information

National University of Singapore Department of Electrical & Computer Engineering. Examination for

National University of Singapore Department of Electrical & Computer Engineering. Examination for National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:

More information

1 Ex. 1 Verify that the function H(p 1,..., p n ) = k p k log 2 p k satisfies all 8 axioms on H.

1 Ex. 1 Verify that the function H(p 1,..., p n ) = k p k log 2 p k satisfies all 8 axioms on H. Problem sheet Ex. Verify that the function H(p,..., p n ) = k p k log p k satisfies all 8 axioms on H. Ex. (Not to be handed in). looking at the notes). List as many of the 8 axioms as you can, (without

More information

CSCI 2570 Introduction to Nanocomputing

CSCI 2570 Introduction to Nanocomputing CSCI 2570 Introduction to Nanocomputing Information Theory John E Savage What is Information Theory Introduced by Claude Shannon. See Wikipedia Two foci: a) data compression and b) reliable communication

More information

Exponential Error Bounds for Block Concatenated Codes with Tail Biting Trellis Inner Codes

Exponential Error Bounds for Block Concatenated Codes with Tail Biting Trellis Inner Codes International Symposium on Information Theory and its Applications, ISITA004 Parma, Italy, October 10 13, 004 Exponential Error Bounds for Block Concatenated Codes with Tail Biting Trellis Inner Codes

More information

Introduction to Low-Density Parity Check Codes. Brian Kurkoski

Introduction to Low-Density Parity Check Codes. Brian Kurkoski Introduction to Low-Density Parity Check Codes Brian Kurkoski kurkoski@ice.uec.ac.jp Outline: Low Density Parity Check Codes Review block codes History Low Density Parity Check Codes Gallager s LDPC code

More information

LECTURE 15. Last time: Feedback channel: setting up the problem. Lecture outline. Joint source and channel coding theorem

LECTURE 15. Last time: Feedback channel: setting up the problem. Lecture outline. Joint source and channel coding theorem LECTURE 15 Last time: Feedback channel: setting up the problem Perfect feedback Feedback capacity Data compression Lecture outline Joint source and channel coding theorem Converse Robustness Brain teaser

More information

Aalborg Universitet. Bounds on information combining for parity-check equations Land, Ingmar Rüdiger; Hoeher, A.; Huber, Johannes

Aalborg Universitet. Bounds on information combining for parity-check equations Land, Ingmar Rüdiger; Hoeher, A.; Huber, Johannes Aalborg Universitet Bounds on information combining for parity-check equations Land, Ingmar Rüdiger; Hoeher, A.; Huber, Johannes Published in: 2004 International Seminar on Communications DOI link to publication

More information

An Achievable Error Exponent for the Mismatched Multiple-Access Channel

An Achievable Error Exponent for the Mismatched Multiple-Access Channel An Achievable Error Exponent for the Mismatched Multiple-Access Channel Jonathan Scarlett University of Cambridge jms265@camacuk Albert Guillén i Fàbregas ICREA & Universitat Pompeu Fabra University of

More information

Lecture 14 February 28

Lecture 14 February 28 EE/Stats 376A: Information Theory Winter 07 Lecture 4 February 8 Lecturer: David Tse Scribe: Sagnik M, Vivek B 4 Outline Gaussian channel and capacity Information measures for continuous random variables

More information

EE5139R: Problem Set 7 Assigned: 30/09/15, Due: 07/10/15

EE5139R: Problem Set 7 Assigned: 30/09/15, Due: 07/10/15 EE5139R: Problem Set 7 Assigned: 30/09/15, Due: 07/10/15 1. Cascade of Binary Symmetric Channels The conditional probability distribution py x for each of the BSCs may be expressed by the transition probability

More information

Performance of Polar Codes for Channel and Source Coding

Performance of Polar Codes for Channel and Source Coding Performance of Polar Codes for Channel and Source Coding Nadine Hussami AUB, Lebanon, Email: njh03@aub.edu.lb Satish Babu Korada and üdiger Urbanke EPFL, Switzerland, Email: {satish.korada,ruediger.urbanke}@epfl.ch

More information

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity 5-859: Information Theory and Applications in TCS CMU: Spring 23 Lecture 8: Channel and source-channel coding theorems; BEC & linear codes February 7, 23 Lecturer: Venkatesan Guruswami Scribe: Dan Stahlke

More information

3F1 Information Theory, Lecture 3

3F1 Information Theory, Lecture 3 3F1 Information Theory, Lecture 3 Jossy Sayir Department of Engineering Michaelmas 2011, 28 November 2011 Memoryless Sources Arithmetic Coding Sources with Memory 2 / 19 Summary of last lecture Prefix-free

More information

4 An Introduction to Channel Coding and Decoding over BSC

4 An Introduction to Channel Coding and Decoding over BSC 4 An Introduction to Channel Coding and Decoding over BSC 4.1. Recall that channel coding introduces, in a controlled manner, some redundancy in the (binary information sequence that can be used at the

More information

Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels

Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels Jilei Hou, Paul H. Siegel and Laurence B. Milstein Department of Electrical and Computer Engineering

More information

Exact Probability of Erasure and a Decoding Algorithm for Convolutional Codes on the Binary Erasure Channel

Exact Probability of Erasure and a Decoding Algorithm for Convolutional Codes on the Binary Erasure Channel Exact Probability of Erasure and a Decoding Algorithm for Convolutional Codes on the Binary Erasure Channel Brian M. Kurkoski, Paul H. Siegel, and Jack K. Wolf Department of Electrical and Computer Engineering

More information

Code design: Computer search

Code design: Computer search Code design: Computer search Low rate codes Represent the code by its generator matrix Find one representative for each equivalence class of codes Permutation equivalences? Do NOT try several generator

More information

Digital Communication Systems ECS 452. Asst. Prof. Dr. Prapun Suksompong 5.2 Binary Convolutional Codes

Digital Communication Systems ECS 452. Asst. Prof. Dr. Prapun Suksompong 5.2 Binary Convolutional Codes Digital Communication Systems ECS 452 Asst. Prof. Dr. Prapun Suksompong prapun@siit.tu.ac.th 5.2 Binary Convolutional Codes 35 Binary Convolutional Codes Introduced by Elias in 1955 There, it is referred

More information

Analyzing Large Communication Networks

Analyzing Large Communication Networks Analyzing Large Communication Networks Shirin Jalali joint work with Michelle Effros and Tracey Ho Dec. 2015 1 The gap Fundamental questions: i. What is the best achievable performance? ii. How to communicate

More information

Stabilization over discrete memoryless and wideband channels using nearly memoryless observations

Stabilization over discrete memoryless and wideband channels using nearly memoryless observations Stabilization over discrete memoryless and wideband channels using nearly memoryless observations Anant Sahai Abstract We study stabilization of a discrete-time scalar unstable plant over a noisy communication

More information

Information Theory. Lecture 10. Network Information Theory (CT15); a focus on channel capacity results

Information Theory. Lecture 10. Network Information Theory (CT15); a focus on channel capacity results Information Theory Lecture 10 Network Information Theory (CT15); a focus on channel capacity results The (two-user) multiple access channel (15.3) The (two-user) broadcast channel (15.6) The relay channel

More information

Exercise 1. = P(y a 1)P(a 1 )

Exercise 1. = P(y a 1)P(a 1 ) Chapter 7 Channel Capacity Exercise 1 A source produces independent, equally probable symbols from an alphabet {a 1, a 2 } at a rate of one symbol every 3 seconds. These symbols are transmitted over a

More information

On the Error Exponents of ARQ Channels with Deadlines

On the Error Exponents of ARQ Channels with Deadlines On the Error Exponents of ARQ Channels with Deadlines Praveen Kumar Gopala, Young-Han Nam and Hesham El Gamal arxiv:cs/06006v [cs.it] 8 Oct 2006 March 22, 208 Abstract We consider communication over Automatic

More information

Optimum Rate Communication by Fast Sparse Superposition Codes

Optimum Rate Communication by Fast Sparse Superposition Codes Optimum Rate Communication by Fast Sparse Superposition Codes Andrew Barron Department of Statistics Yale University Joint work with Antony Joseph and Sanghee Cho Algebra, Codes and Networks Conference

More information

Entropies & Information Theory

Entropies & Information Theory Entropies & Information Theory LECTURE I Nilanjana Datta University of Cambridge,U.K. See lecture notes on: http://www.qi.damtp.cam.ac.uk/node/223 Quantum Information Theory Born out of Classical Information

More information

Chapter 9 Fundamental Limits in Information Theory

Chapter 9 Fundamental Limits in Information Theory Chapter 9 Fundamental Limits in Information Theory Information Theory is the fundamental theory behind information manipulation, including data compression and data transmission. 9.1 Introduction o For

More information

On Two Probabilistic Decoding Algorithms for Binary Linear Codes

On Two Probabilistic Decoding Algorithms for Binary Linear Codes On Two Probabilistic Decoding Algorithms for Binary Linear Codes Miodrag Živković Abstract A generalization of Sullivan inequality on the ratio of the probability of a linear code to that of any of its

More information

Graph-based codes for flash memory

Graph-based codes for flash memory 1/28 Graph-based codes for flash memory Discrete Mathematics Seminar September 3, 2013 Katie Haymaker Joint work with Professor Christine Kelley University of Nebraska-Lincoln 2/28 Outline 1 Background

More information

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 Please submit the solutions on Gradescope. EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 1. Optimal codeword lengths. Although the codeword lengths of an optimal variable length code

More information

Entropy as a measure of surprise

Entropy as a measure of surprise Entropy as a measure of surprise Lecture 5: Sam Roweis September 26, 25 What does information do? It removes uncertainty. Information Conveyed = Uncertainty Removed = Surprise Yielded. How should we quantify

More information

The sequential decoding metric for detection in sensor networks

The sequential decoding metric for detection in sensor networks The sequential decoding metric for detection in sensor networks B. Narayanaswamy, Yaron Rachlin, Rohit Negi and Pradeep Khosla Department of ECE Carnegie Mellon University Pittsburgh, PA, 523 Email: {bnarayan,rachlin,negi,pkk}@ece.cmu.edu

More information

3F1 Information Theory, Lecture 3

3F1 Information Theory, Lecture 3 3F1 Information Theory, Lecture 3 Jossy Sayir Department of Engineering Michaelmas 2013, 29 November 2013 Memoryless Sources Arithmetic Coding Sources with Memory Markov Example 2 / 21 Encoding the output

More information

Shannon s Noisy-Channel Coding Theorem

Shannon s Noisy-Channel Coding Theorem Shannon s Noisy-Channel Coding Theorem Lucas Slot Sebastian Zur February 13, 2015 Lucas Slot, Sebastian Zur Shannon s Noisy-Channel Coding Theorem February 13, 2015 1 / 29 Outline 1 Definitions and Terminology

More information

(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute

(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute ENEE 739C: Advanced Topics in Signal Processing: Coding Theory Instructor: Alexander Barg Lecture 6 (draft; 9/6/03. Error exponents for Discrete Memoryless Channels http://www.enee.umd.edu/ abarg/enee739c/course.html

More information

Asymptotic redundancy and prolixity

Asymptotic redundancy and prolixity Asymptotic redundancy and prolixity Yuval Dagan, Yuval Filmus, and Shay Moran April 6, 2017 Abstract Gallager (1978) considered the worst-case redundancy of Huffman codes as the maximum probability tends

More information

S. Arnstein D. Bridwell A. Bucher Chase D. Falconer L. Greenspan Haccoun M. Heggestad

S. Arnstein D. Bridwell A. Bucher Chase D. Falconer L. Greenspan Haccoun M. Heggestad XV. PROCESSING AND TRANSMISSION OF INFORMATION Academic and Research Staff Prof. R. M. Gallager Prof. E. V. Hoversten Prof. I. M. Jacobs Prof. R. E. Kahn Prof. R. S. Kennedy S. Arnstein D. Bridwell A.

More information

Roll No. :... Invigilator's Signature :.. CS/B.TECH(ECE)/SEM-7/EC-703/ CODING & INFORMATION THEORY. Time Allotted : 3 Hours Full Marks : 70

Roll No. :... Invigilator's Signature :.. CS/B.TECH(ECE)/SEM-7/EC-703/ CODING & INFORMATION THEORY. Time Allotted : 3 Hours Full Marks : 70 Name : Roll No. :.... Invigilator's Signature :.. CS/B.TECH(ECE)/SEM-7/EC-703/2011-12 2011 CODING & INFORMATION THEORY Time Allotted : 3 Hours Full Marks : 70 The figures in the margin indicate full marks

More information

Lecture 11: Quantum Information III - Source Coding

Lecture 11: Quantum Information III - Source Coding CSCI5370 Quantum Computing November 25, 203 Lecture : Quantum Information III - Source Coding Lecturer: Shengyu Zhang Scribe: Hing Yin Tsang. Holevo s bound Suppose Alice has an information source X that

More information

LECTURE 13. Last time: Lecture outline

LECTURE 13. Last time: Lecture outline LECTURE 13 Last time: Strong coding theorem Revisiting channel and codes Bound on probability of error Error exponent Lecture outline Fano s Lemma revisited Fano s inequality for codewords Converse to

More information

Lecture 19: Elias-Bassalygo Bound

Lecture 19: Elias-Bassalygo Bound Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecturer: Atri Rudra Lecture 19: Elias-Bassalygo Bound October 10, 2007 Scribe: Michael Pfetsch & Atri Rudra In the last lecture,

More information

UNIT I INFORMATION THEORY. I k log 2

UNIT I INFORMATION THEORY. I k log 2 UNIT I INFORMATION THEORY Claude Shannon 1916-2001 Creator of Information Theory, lays the foundation for implementing logic in digital circuits as part of his Masters Thesis! (1939) and published a paper

More information

SIPCom8-1: Information Theory and Coding Linear Binary Codes Ingmar Land

SIPCom8-1: Information Theory and Coding Linear Binary Codes Ingmar Land SIPCom8-1: Information Theory and Coding Linear Binary Codes Ingmar Land Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.1 Overview Basic Concepts of Channel Coding Block Codes I:

More information

ABriefReviewof CodingTheory

ABriefReviewof CodingTheory ABriefReviewof CodingTheory Pascal O. Vontobel JTG Summer School, IIT Madras, Chennai, India June 16 19, 2014 ReliableCommunication Oneofthemainmotivationforstudyingcodingtheoryisthedesireto reliably transmit

More information

Notes 10: List Decoding Reed-Solomon Codes and Concatenated codes

Notes 10: List Decoding Reed-Solomon Codes and Concatenated codes Introduction to Coding Theory CMU: Spring 010 Notes 10: List Decoding Reed-Solomon Codes and Concatenated codes April 010 Lecturer: Venkatesan Guruswami Scribe: Venkat Guruswami & Ali Kemal Sinop DRAFT

More information

(Preprint of paper to appear in Proc Intl. Symp. on Info. Th. and its Applications, Waikiki, Hawaii, Nov , 1990.)

(Preprint of paper to appear in Proc Intl. Symp. on Info. Th. and its Applications, Waikiki, Hawaii, Nov , 1990.) (Preprint of paper to appear in Proc. 1990 Intl. Symp. on Info. Th. and its Applications, Waikiki, Hawaii, ov. 27-30, 1990.) CAUSALITY, FEEDBACK AD DIRECTED IFORMATIO James L. Massey Institute for Signal

More information

An Improved Sphere-Packing Bound for Finite-Length Codes over Symmetric Memoryless Channels

An Improved Sphere-Packing Bound for Finite-Length Codes over Symmetric Memoryless Channels POST-PRIT OF THE IEEE TRAS. O IFORMATIO THEORY, VOL. 54, O. 5, PP. 96 99, MAY 8 An Improved Sphere-Packing Bound for Finite-Length Codes over Symmetric Memoryless Channels Gil Wiechman Igal Sason Department

More information

Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel

Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel Pål Ellingsen paale@ii.uib.no Susanna Spinsante s.spinsante@univpm.it Angela Barbero angbar@wmatem.eis.uva.es May 31, 2005 Øyvind Ytrehus

More information

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Tara Javidi These lecture notes were originally developed by late Prof. J. K. Wolf. UC San Diego Spring 2014 1 / 8 I

More information