1 COMP2610/6261 Information Theory
Lecture 22: Hamming Codes
Mark Reid and Aditya Menon
Research School of Computer Science, The Australian National University
October 15th, 2014
2 Reminder: Repetition Codes
The repetition code R3 sends each source bit s three times: t = s s s. The channel adds noise η, and we receive r.
For a BSC with bit-flip probability f = 0.1, majority-vote decoding drives the error rate down to about 3%.
For general f, the probability of error per source bit is
p_b = 3f^2(1 - f) + f^3 = f^2(3 - 2f)
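As a quick sanity check on the repetition-code error formula (a minimal Python sketch, not code from the slides):

```python
# Error probability of R3 under majority-vote decoding on a BSC:
# decoding is wrong iff 2 or 3 of the 3 transmitted copies are flipped.
def r3_error_prob(f):
    return 3 * f**2 * (1 - f) + f**3   # = f**2 * (3 - 2*f)

print(round(r3_error_prob(0.1), 3))  # 0.028, i.e. about 3%
```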
3 This time
- Introduction to block codes: extending the basic repetition codes
- The (7,4) Hamming code
- Redundancy in (linear) block codes through parity-check bits
- Syndrome decoding
4 Outline
1. Motivation
2. The (7,4) Hamming code: Coding; Decoding; Syndrome Decoding; Error Probabilities
3. Wrapping up
9 Motivation
Goal: communication with small probability of error and high rate.
- Repetition codes introduce redundancy on a per-bit basis.
- Can we improve on repetition codes?
Idea: introduce redundancy on blocks of data instead.
Block Code: a block code is a rule for encoding a length-K sequence of source bits s into a length-N sequence of transmitted bits t. Redundancy means N > K. We focus on linear codes.
We will introduce a simple type of block code called the (7,4) Hamming code.
10 An Example: The (7,4) Hamming Code
Consider K = 4 and a 4-bit source message s. The repetition code R2 transmits each bit twice (N = 8), while the (7,4) Hamming code produces a 7-bit t: redundancy, but not repetition. How are these extra bits computed?
12 The (7,4) Hamming code: Coding
Consider K = 4, N = 7, and a 4-bit source s. It will help to think of the code in terms of three overlapping circles.
14 The (7,4) Hamming code: Coding
Copy the source bits into the first 4 transmitted bits.
15 The (7,4) Hamming code: Coding
Set the parity-check bits so that the number of ones within each circle is even. So we have s → encoder → t.
16 The (7,4) Hamming code: Coding
We have set
t_i = s_i for i = 1, ..., 4
t_5 = s_1 ⊕ s_2 ⊕ s_3
t_6 = s_2 ⊕ s_3 ⊕ s_4
t_7 = s_1 ⊕ s_3 ⊕ s_4
where ⊕ denotes modulo-2 addition.
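The parity equations can be sketched directly in Python (bit lists and the function name are my own conventions, not the slides'):

```python
# (7,4) Hamming encoder: copy the 4 source bits,
# then append 3 parity bits computed as mod-2 sums.
def hamming74_encode(s):
    s1, s2, s3, s4 = s
    t5 = (s1 + s2 + s3) % 2
    t6 = (s2 + s3 + s4) % 2
    t7 = (s1 + s3 + s4) % 2
    return [s1, s2, s3, s4, t5, t6, t7]

print(hamming74_encode([1, 0, 0, 0]))  # -> [1, 0, 0, 0, 1, 0, 1]
```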
17 The (7,4) Hamming code: Coding
In matrix form, t = G^T s with s = [s_1 s_2 s_3 s_4]^T and
G^T = [ I_4 ; P ] =
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
1 1 1 0
0 1 1 1
1 0 1 1
G is called the generator matrix of the code. The Hamming code is linear!
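The same encoding as a mod-2 matrix-vector product (a plain-Python sketch; the matrix entries follow the parity equations on the previous slide):

```python
# G^T = [I_4; P] for the (7,4) Hamming code.
GT = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
]

def mat_vec_mod2(M, v):
    # Matrix-vector product over GF(2): sum each row, reduce mod 2.
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

print(mat_vec_mod2(GT, [1, 0, 0, 0]))  # [1, 0, 0, 0, 1, 0, 1]
```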
21 Codewords
Each (unique) sequence that can be transmitted is called a codeword. For the (7,4) Hamming code there are 16 codewords in total; any other 7-bit string immediately implies corruption. Any two codewords differ in at least three bits, because each source bit belongs to at least two circles.
22 The (7,4) Hamming code: Coding
Write G^T = [G_1 G_2 G_3 G_4], where each G_i is a 7-dimensional bit vector. Then the transmitted message is
t = G^T s = s_1 G_1 ⊕ s_2 G_2 ⊕ s_3 G_3 ⊕ s_4 G_4
All codewords can be obtained as linear combinations of the rows of G:
Codewords = { Σ_{i=1}^{4} α_i G_i : α_i ∈ {0, 1} }
where G_i is the i-th row of G.
27 Decoding
We can encode a length-4 sequence s into a length-7 sequence t using 3 parity-check bits. The transmitted t can be corrupted by noise η, which can flip any of the 7 bits (including the parity-check bits), giving the received vector r.
How should we decode r? We could do this exhaustively using the 16 codewords: assuming a BSC and uniform p(s), the most probable explanation is the s whose codeword t(s) minimises the Hamming distance ‖t(s) ⊕ r‖_1. But we can find the most probable source vector in a more efficient way.
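The exhaustive (nearest-codeword) decoder described above can be sketched in a few lines (my own naming; this is the brute-force baseline, not the efficient method):

```python
from itertools import product

def encode(s):
    # (7,4) Hamming encoder.
    s1, s2, s3, s4 = s
    return [s1, s2, s3, s4,
            (s1 + s2 + s3) % 2, (s2 + s3 + s4) % 2, (s1 + s3 + s4) % 2]

def exhaustive_decode(r):
    # Try all 16 source words; pick the one whose codeword
    # has minimum Hamming distance to the received vector r.
    return min(product([0, 1], repeat=4),
               key=lambda s: sum(a != b for a, b in zip(encode(s), r)))

# t = 1000101 with its second bit flipped -> r = 1100101.
print(exhaustive_decode([1, 1, 0, 0, 1, 0, 1]))  # (1, 0, 0, 0)
```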
28 Decoding, Example 1
We have s → encoder → t → noise → r (the bit patterns are shown in the slides). (1) Detect the circles with wrong (odd) parity: which bit is responsible? (2) Identify the culprit bit and flip it; the decoded sequence is ŝ.

30 Decoding, Example 2
Same procedure: (1) detect the circles with odd parity; (2) identify the culprit bit and flip it to obtain ŝ.

32 Decoding, Example 3
Same procedure: (1) detect the circles with odd parity; (2) identify the culprit bit and flip it to obtain ŝ.
39 Optimal Decoding Algorithm: Syndrome Decoding
Given r = r_1, ..., r_7, assume a BSC with small noise level f:
1. Define the syndrome as the length-3 vector z that describes the pattern of violations of the parity bits r_5, r_6, r_7: z_i = 1 when the i-th parity bit does not match the parity computed from r. Flipping any single bit leads to a different syndrome.
2. Check the parity bits r_5, r_6, r_7 and identify the syndrome.
3. Unflip the single bit responsible for this pattern of violation. (The same syndrome could also have been caused by other, less probable noise patterns.)

z:        000   001   010   011   100   101   110   111
Flip bit: none  r_7   r_6   r_4   r_5   r_1   r_2   r_3

The optimal decoding algorithm unflips at most one bit.
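A sketch of syndrome decoding in Python, following the z → "flip bit" mapping on this slide (0-based indices and the `FLIP` table name are my own conventions):

```python
# Map each syndrome z = (z1, z2, z3) to the single bit to unflip (0-based).
FLIP = {(0, 0, 0): None, (0, 0, 1): 6, (0, 1, 0): 5, (0, 1, 1): 3,
        (1, 0, 0): 4, (1, 0, 1): 0, (1, 1, 0): 1, (1, 1, 1): 2}

def decode(r):
    # Compute the syndrome from the three parity-check equations.
    z = ((r[0] + r[1] + r[2] + r[4]) % 2,
         (r[1] + r[2] + r[3] + r[5]) % 2,
         (r[0] + r[2] + r[3] + r[6]) % 2)
    r = list(r)
    if FLIP[z] is not None:
        r[FLIP[z]] ^= 1          # unflip the single culprit bit
    return r[:4]                 # first 4 bits are the source estimate

# t = 1000101 with its second bit flipped -> r = 1100101.
print(decode([1, 1, 0, 0, 1, 0, 1]))  # [1, 0, 0, 0]
```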
40 Optimal Decoding Algorithm: Syndrome Decoding
When the noise level f on the BSC is small, it may be reasonable to assume that at most one bit is flipped in a block of 7 transmitted bits. The syndrome decoding method exactly recovers the source message in this case (cf. noise flipping one bit in the repetition code R3). But what happens if the noise flips more than one bit?
41 Decoding, Example 4: Flipping 2 Bits
We have s → encoder → t → noise → r, where the noise now flips two bits. (1) Detect the circles with wrong (odd) parity: which bit is responsible? (2) The decoder identifies a single culprit bit and flips it, but the decoded sequence ŝ is wrong: we have made 3 errors in the block, of which only 2 involve the actual message bits.
45 Syndrome Decoding: Matrix Form
Recall that we just need to compare the expected parity bits with the ones we actually received:
z_1 = r_1 ⊕ r_2 ⊕ r_3 ⊕ r_5
z_2 = r_2 ⊕ r_3 ⊕ r_4 ⊕ r_6
z_3 = r_1 ⊕ r_3 ⊕ r_4 ⊕ r_7
(In modulo-2 arithmetic −1 ≡ 1, so subtraction can be replaced by ⊕.) Hence
z = Hr, with H = [ P  I_3 ] =
1 1 1 0 1 0 0
0 1 1 1 0 1 0
1 0 1 1 0 0 1
What is the syndrome for a codeword?
46 Syndrome Decoding: Matrix Form
Recall that we obtain a codeword with t = G^T s. Assume we receive r = t ⊕ η with η = 0. The syndrome is then z = Hr = Ht = HG^T s = 0, because HG^T = P ⊕ P = 0.
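The identity HG^T = 0 (mod 2) is easy to verify numerically (plain-Python sketch with the matrices written out):

```python
# G^T = [I_4; P] and H = [P I_3] for the (7,4) Hamming code.
GT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
      [1, 1, 1, 0], [0, 1, 1, 1], [1, 0, 1, 1]]
H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

# Mod-2 matrix product H (3x7) times G^T (7x4): should be all zeros,
# so every codeword G^T s has zero syndrome.
HGT = [[sum(H[i][k] * GT[k][j] for k in range(7)) % 2
        for j in range(4)] for i in range(3)]
print(HGT)  # [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```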
48 Syndrome Decoding: Matrix Form
For the noisy case we have r = G^T s ⊕ η, so
z = Hr = HG^T s ⊕ Hη = Hη
Therefore syndrome decoding boils down to finding the most probable η satisfying Hη = z: a maximum-likelihood decoder.
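For a code this small, "most probable η with Hη = z" can be found by brute force: on a BSC with f < 1/2, fewer flipped bits are more probable, so we pick the minimum-weight solution (a sketch; function names are mine):

```python
from itertools import product

H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

def syndrome(v):
    # z = Hv over GF(2).
    return tuple(sum(h * x for h, x in zip(row, v)) % 2 for row in H)

def most_probable_noise(z):
    # Among all 2^7 noise vectors with syndrome z, the most probable
    # one under a BSC with f < 1/2 has the fewest 1s.
    candidates = [eta for eta in product([0, 1], repeat=7)
                  if syndrome(eta) == z]
    return min(candidates, key=sum)

print(most_probable_noise((1, 1, 0)))  # (0, 1, 0, 0, 0, 0, 0)
```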
54 Error Probabilities
Decoding error: occurs if at least one of the decoded bits ŝ_i does not match the corresponding source bit s_i, for i = 1, ..., 4.
Probability of block error: p_B = p(ŝ ≠ s)
Probability of bit error: p_b = (1/K) Σ_{k=1}^{K} p(ŝ_k ≠ s_k)
Rate: R = K/N = 4/7
What is the probability of block error for the (7,4) Hamming code with f = 0.1?
55 Leading-Term Error Probabilities
Block error: this occurs when 2 or more bits in the block of 7 are flipped. We can approximate p_B by its leading term:
p_B = Σ_{m=2}^{7} C(7, m) f^m (1 − f)^{7−m} ≈ C(7, 2) f^2 = 21 f^2
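Comparing the exact sum with the leading term at f = 0.1 (a small sketch; note the leading-term approximation is intended for small f, so at f = 0.1 the two differ noticeably):

```python
from math import comb

f = 0.1
# Exact block-error probability: 2 or more of the 7 bits flipped.
p_B_exact = sum(comb(7, m) * f**m * (1 - f)**(7 - m) for m in range(2, 8))
p_B_approx = 21 * f**2

print(round(p_B_exact, 4), round(p_B_approx, 4))  # 0.1497 0.21
```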
59 Leading-Term Error Probabilities
Bit error: given that a block error occurs, the noise must have corrupted 2 or more bits. The most probable case is that the noise corrupts exactly 2 bits, which induces 3 errors in the decoded vector:
p(ŝ_i ≠ s_i) ≈ (3/7) p_B for i = 1, ..., 7
All bits are equally likely to be corrupted (by symmetry), so
p_b ≈ (3/7) p_B ≈ 9 f^2
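A Monte Carlo simulation ties the pieces together: encode random source blocks, flip each bit with probability f, syndrome-decode, and count errors (a self-contained sketch with my own helper names; the leading-term formulas above are approximations, so the measured rates need not match them exactly):

```python
import random

def encode(s):
    s1, s2, s3, s4 = s
    return [s1, s2, s3, s4,
            (s1 + s2 + s3) % 2, (s2 + s3 + s4) % 2, (s1 + s3 + s4) % 2]

FLIP = {(0, 0, 0): None, (0, 0, 1): 6, (0, 1, 0): 5, (0, 1, 1): 3,
        (1, 0, 0): 4, (1, 0, 1): 0, (1, 1, 0): 1, (1, 1, 1): 2}

def decode(r):
    z = ((r[0] + r[1] + r[2] + r[4]) % 2,
         (r[1] + r[2] + r[3] + r[5]) % 2,
         (r[0] + r[2] + r[3] + r[6]) % 2)
    r = list(r)
    if FLIP[z] is not None:
        r[FLIP[z]] ^= 1
    return r[:4]

rng = random.Random(0)
f, trials = 0.1, 20000
bit_errs = block_errs = 0
for _ in range(trials):
    s = [rng.randint(0, 1) for _ in range(4)]
    r = [t ^ (rng.random() < f) for t in encode(s)]   # BSC: flip w.p. f
    wrong = sum(a != b for a, b in zip(s, decode(r)))
    bit_errs += wrong
    block_errs += wrong > 0

# Empirical p_B should be near the exact 0.1497; p_b lands below the
# 9 f^2 = 0.09 leading-term bound.
print(round(block_errs / trials, 3), round(bit_errs / (4 * trials), 3))
```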
60 What Can Be Achieved with Hamming Codes?
[Figure: p_b versus rate R, on linear and log scales, for the repetition codes R_3, R_5, ..., the Hamming code H(7,4), and BCH codes including BCH(15,7), BCH(31,16), BCH(511,76), BCH(1023,101); "more useful codes" lie towards low p_b and high rate.]
H(7,4) improves p_b at a moderate rate R = 4/7. BCH codes are a generalisation of Hamming codes; they do better than the repetition codes R_N, but the trade-off is still pretty depressing. Can we do better? What is achievable / non-achievable?
62 Summary
- The (7,4) Hamming code
- Redundancy in (linear) block codes through parity-check bits
- Syndrome decoding via identification of single-bit noise patterns
- Block error, bit error, rate
Reading: MacKay
An Introduction to Low Density Parity Check (LDPC) Codes Jian Sun jian@csee.wvu.edu Wireless Communication Research Laboratory Lane Dept. of Comp. Sci. and Elec. Engr. West Virginia University June 3,
More informationSolutions to problems from Chapter 3
Solutions to problems from Chapter 3 Manjunatha. P manjup.jnnce@gmail.com Professor Dept. of ECE J.N.N. College of Engineering, Shimoga February 28, 2016 For a systematic (7,4) linear block code, the parity
More informationECE 4450:427/527 - Computer Networks Spring 2017
ECE 4450:427/527 - Computer Networks Spring 2017 Dr. Nghi Tran Department of Electrical & Computer Engineering Lecture 5.2: Error Detection & Correction Dr. Nghi Tran (ECE-University of Akron) ECE 4450:427/527
More informationMATH 433 Applied Algebra Lecture 21: Linear codes (continued). Classification of groups.
MATH 433 Applied Algebra Lecture 21: Linear codes (continued). Classification of groups. Binary codes Let us assume that a message to be transmitted is in binary form. That is, it is a word in the alphabet
More informationFault Tolerant Computing CS 530 Information redundancy: Coding theory. Yashwant K. Malaiya Colorado State University
CS 530 Information redundancy: Coding theory Yashwant K. Malaiya Colorado State University March 30, 2017 1 Information redundancy: Outline Using a parity bit Codes & code words Hamming distance Error
More informationSolutions of Exam Coding Theory (2MMC30), 23 June (1.a) Consider the 4 4 matrices as words in F 16
Solutions of Exam Coding Theory (2MMC30), 23 June 2016 (1.a) Consider the 4 4 matrices as words in F 16 2, the binary vector space of dimension 16. C is the code of all binary 4 4 matrices such that the
More informationLecture 4 Noisy Channel Coding
Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem
More informationNoisy channel communication
Information Theory http://www.inf.ed.ac.uk/teaching/courses/it/ Week 6 Communication channels and Information Some notes on the noisy channel setup: Iain Murray, 2012 School of Informatics, University
More informationRun-length & Entropy Coding. Redundancy Removal. Sampling. Quantization. Perform inverse operations at the receiver EEE
General e Image Coder Structure Motion Video x(s 1,s 2,t) or x(s 1,s 2 ) Natural Image Sampling A form of data compression; usually lossless, but can be lossy Redundancy Removal Lossless compression: predictive
More informationMath 512 Syllabus Spring 2017, LIU Post
Week Class Date Material Math 512 Syllabus Spring 2017, LIU Post 1 1/23 ISBN, error-detecting codes HW: Exercises 1.1, 1.3, 1.5, 1.8, 1.14, 1.15 If x, y satisfy ISBN-10 check, then so does x + y. 2 1/30
More informationIntroduction to Convolutional Codes, Part 1
Introduction to Convolutional Codes, Part 1 Frans M.J. Willems, Eindhoven University of Technology September 29, 2009 Elias, Father of Coding Theory Textbook Encoder Encoder Properties Systematic Codes
More informationMATH 291T CODING THEORY
California State University, Fresno MATH 291T CODING THEORY Fall 2011 Instructor : Stefaan Delcroix Contents 1 Introduction to Error-Correcting Codes 3 2 Basic Concepts and Properties 6 2.1 Definitions....................................
More informationChannel Coding and Interleaving
Lecture 6 Channel Coding and Interleaving 1 LORA: Future by Lund www.futurebylund.se The network will be free for those who want to try their products, services and solutions in a precommercial stage.
More informationx n k m(x) ) Codewords can be characterized by (and errors detected by): c(x) mod g(x) = 0 c(x)h(x) = 0 mod (x n 1)
Cyclic codes: review EE 387, Notes 15, Handout #26 A cyclic code is a LBC such that every cyclic shift of a codeword is a codeword. A cyclic code has generator polynomial g(x) that is a divisor of every
More informationA Study of Topological Quantum Error Correcting Codes Part I: From Classical to Quantum ECCs
A Study of Topological Quantum Error Correcting Codes Part I: From Classical to Quantum ECCs Preetum Nairan preetum@bereley.edu Mar 3, 05 Abstract This survey aims to highlight some interesting ideas in
More informationCodes on graphs and iterative decoding
Codes on graphs and iterative decoding Bane Vasić Error Correction Coding Laboratory University of Arizona Funded by: National Science Foundation (NSF) Seagate Technology Defense Advanced Research Projects
More informationTop Ten Algorithms Class 6
Top Ten Algorithms Class 6 John Burkardt Department of Scientific Computing Florida State University... http://people.sc.fsu.edu/ jburkardt/classes/tta 2015/class06.pdf 05 October 2015 1 / 24 Our Current
More informationCoping with disk crashes
Lecture 04.03 Coping with disk crashes By Marina Barsky Winter 2016, University of Toronto Disk failure types Intermittent failure Disk crash the entire disk becomes unreadable, suddenly and permanently
More informationMATH/MTHE 406 Homework Assignment 2 due date: October 17, 2016
MATH/MTHE 406 Homework Assignment 2 due date: October 17, 2016 Notation: We will use the notations x 1 x 2 x n and also (x 1, x 2,, x n ) to denote a vector x F n where F is a finite field. 1. [20=6+5+9]
More informationA Novel Modification of Cyclic Redundancy Check for Message Length Detection
International Symposium on Information Theory and its Applications, ISITA2004 Parma, Italy, October 10 13, 2004 A Novel Modification of Cyclic Redundancy Check for Message Length Detection Shin-Lin Shieh,
More informationMATH3302. Coding and Cryptography. Coding Theory
MATH3302 Coding and Cryptography Coding Theory 2010 Contents 1 Introduction to coding theory 2 1.1 Introduction.......................................... 2 1.2 Basic definitions and assumptions..............................
More informationMore advanced codes 0 1 ( , 1 1 (
p. 1/24 More advanced codes The Shor code was the first general-purpose quantum error-correcting code, but since then many others have been discovered. An important example, discovered independently of
More informationError Correction and Trellis Coding
Advanced Signal Processing Winter Term 2001/2002 Digital Subscriber Lines (xdsl): Broadband Communication over Twisted Wire Pairs Error Correction and Trellis Coding Thomas Brandtner brandt@sbox.tugraz.at
More informationNotes 10: Public-key cryptography
MTH6115 Cryptography Notes 10: Public-key cryptography In this section we look at two other schemes that have been proposed for publickey ciphers. The first is interesting because it was the earliest such
More informationCSE 123: Computer Networks
CSE 123: Computer Networks Total points: 40 Homework 1 - Solutions Out: 10/4, Due: 10/11 Solutions 1. Two-dimensional parity Given below is a series of 7 7-bit items of data, with an additional bit each
More informationFully-parallel linear error block coding and decoding a Boolean approach
Fully-parallel linear error block coding and decoding a Boolean approach Hermann Meuth, Hochschule Darmstadt Katrin Tschirpke, Hochschule Aschaffenburg 8th International Workshop on Boolean Problems, 28
More informationELEC 519A Selected Topics in Digital Communications: Information Theory. Hamming Codes and Bounds on Codes
ELEC 519A Selected Topics in Digital Communications: Information Theory Hamming Codes and Bounds on Codes Single Error Correcting Codes 2 Hamming Codes (7,4,3) Hamming code 1 0 0 0 0 1 1 0 1 0 0 1 0 1
More informationLecture 14: Hamming and Hadamard Codes
CSCI-B69: A Theorist s Toolkit, Fall 6 Oct 6 Lecture 4: Hamming and Hadamard Codes Lecturer: Yuan Zhou Scribe: Kaiyuan Zhu Recap Recall from the last lecture that error-correcting codes are in fact injective
More informationCan You Hear Me Now?
Can You Hear Me Now? An Introduction to Coding Theory William J. Turner Department of Mathematics & Computer Science Wabash College Crawfordsville, IN 47933 19 October 2004 W. J. Turner (Wabash College)
More informationError Detection & Correction
Error Detection & Correction Error detection & correction noisy channels techniques in networking error detection error detection capability retransmition error correction reconstruction checksums redundancy
More informationLecture 4: Codes based on Concatenation
Lecture 4: Codes based on Concatenation Error-Correcting Codes (Spring 206) Rutgers University Swastik Kopparty Scribe: Aditya Potukuchi and Meng-Tsung Tsai Overview In the last lecture, we studied codes
More informationLecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity
5-859: Information Theory and Applications in TCS CMU: Spring 23 Lecture 8: Channel and source-channel coding theorems; BEC & linear codes February 7, 23 Lecturer: Venkatesan Guruswami Scribe: Dan Stahlke
More informationMatrix Embedding with Pseudorandom Coefficient Selection and Error Correction for Robust and Secure Steganography
Matrix Embedding with Pseudorandom Coefficient Selection and Error Correction for Robust and Secure Steganography Anindya Sarkar, Student Member, IEEE, Upamanyu Madhow, Fellow, IEEE, and B. S. Manjunath,
More informationMATH 291T CODING THEORY
California State University, Fresno MATH 291T CODING THEORY Spring 2009 Instructor : Stefaan Delcroix Chapter 1 Introduction to Error-Correcting Codes It happens quite often that a message becomes corrupt
More informationQuantum Error Correction and Fault Tolerance. Classical Repetition Code. Quantum Errors. Barriers to Quantum Error Correction
Quantum Error Correction and Fault Tolerance Daniel Gottesman Perimeter Institute The Classical and Quantum Worlds Quantum Errors A general quantum error is a superoperator: ρ ΣA k ρ A k (Σ A k A k = I)
More informationLecture 6: Expander Codes
CS369E: Expanders May 2 & 9, 2005 Lecturer: Prahladh Harsha Lecture 6: Expander Codes Scribe: Hovav Shacham In today s lecture, we will discuss the application of expander graphs to error-correcting codes.
More informationCyclic codes: overview
Cyclic codes: overview EE 387, Notes 14, Handout #22 A linear block code is cyclic if the cyclic shift of a codeword is a codeword. Cyclic codes have many advantages. Elegant algebraic descriptions: c(x)
More information11 Minimal Distance and the Parity Check Matrix
MATH32031: Coding Theory Part 12: Hamming Codes 11 Minimal Distance and the Parity Check Matrix Theorem 23 (Distance Theorem for Linear Codes) Let C be an [n, k] F q -code with parity check matrix H. Then
More informationIntroducing Low-Density Parity-Check Codes
Introducing Low-Density Parity-Check Codes Sarah J. Johnson School of Electrical Engineering and Computer Science The University of Newcastle Australia email: sarah.johnson@newcastle.edu.au Topic 1: Low-Density
More informationLecture 7 September 24
EECS 11: Coding for Digital Communication and Beyond Fall 013 Lecture 7 September 4 Lecturer: Anant Sahai Scribe: Ankush Gupta 7.1 Overview This lecture introduces affine and linear codes. Orthogonal signalling
More informationLocal correctability of expander codes
Local correctability of expander codes Brett Hemenway Rafail Ostrovsky Mary Wootters IAS April 4, 24 The point(s) of this talk Locally decodable codes are codes which admit sublinear time decoding of small
More informationLecture B04 : Linear codes and singleton bound
IITM-CS6845: Theory Toolkit February 1, 2012 Lecture B04 : Linear codes and singleton bound Lecturer: Jayalal Sarma Scribe: T Devanathan We start by proving a generalization of Hamming Bound, which we
More informationThe BCH Bound. Background. Parity Check Matrix for BCH Code. Minimum Distance of Cyclic Codes
S-723410 BCH and Reed-Solomon Codes 1 S-723410 BCH and Reed-Solomon Codes 3 Background The algebraic structure of linear codes and, in particular, cyclic linear codes, enables efficient encoding and decoding
More informationDigital Communications III (ECE 154C) Introduction to Coding and Information Theory
Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Tara Javidi These lecture notes were originally developed by late Prof. J. K. Wolf. UC San Diego Spring 2014 1 / 8 I
More informationLecture 11: Polar codes construction
15-859: Information Theory and Applications in TCS CMU: Spring 2013 Lecturer: Venkatesan Guruswami Lecture 11: Polar codes construction February 26, 2013 Scribe: Dan Stahlke 1 Polar codes: recap of last
More informationMATH 433 Applied Algebra Lecture 22: Review for Exam 2.
MATH 433 Applied Algebra Lecture 22: Review for Exam 2. Topics for Exam 2 Permutations Cycles, transpositions Cycle decomposition of a permutation Order of a permutation Sign of a permutation Symmetric
More informationExercise 1. = P(y a 1)P(a 1 )
Chapter 7 Channel Capacity Exercise 1 A source produces independent, equally probable symbols from an alphabet {a 1, a 2 } at a rate of one symbol every 3 seconds. These symbols are transmitted over a
More informationLinear Cyclic Codes. Polynomial Word 1 + x + x x 4 + x 5 + x x + x
Coding Theory Massoud Malek Linear Cyclic Codes Polynomial and Words A polynomial of degree n over IK is a polynomial p(x) = a 0 + a 1 x + + a n 1 x n 1 + a n x n, where the coefficients a 0, a 1, a 2,,
More informationWelcome to Comp 411! 2) Course Objectives. 1) Course Mechanics. 3) Information. I thought this course was called Computer Organization
Welcome to Comp 4! I thought this course was called Computer Organization David Macaulay ) Course Mechanics 2) Course Objectives 3) Information L - Introduction Meet the Crew Lectures: Leonard McMillan
More informationCoding Theory and Applications. Linear Codes. Enes Pasalic University of Primorska Koper, 2013
Coding Theory and Applications Linear Codes Enes Pasalic University of Primorska Koper, 2013 2 Contents 1 Preface 5 2 Shannon theory and coding 7 3 Coding theory 31 4 Decoding of linear codes and MacWilliams
More information