CS6304 / Analog and Digital Communication
UNIT IV - SOURCE AND ERROR CONTROL CODING

PART A

1. What is the use of error control coding?
Error control coding (also known as channel coding) adds redundancy to the transmitted message so as to reduce the overall probability of error at the receiver.

2. What is the difference between a systematic code and a non-systematic code?
In a systematic block code the message bits appear at the beginning of the codeword: the message bits are transmitted first and then the check bits. In a non-systematic block code it is not possible to identify the message bits and check bits; they are mixed within the block.

3. What is a repetition code?
A single message bit is encoded into a block of n identical bits, producing an (n, 1) block code. There are only two codewords in the code: the all-zeros word and the all-ones word. The code is called a repetition code since many redundant check bits are transmitted along with a single message bit.

4. What is the forward acting error correction method?
The method of controlling errors at the receiver by attempting to correct noise-induced errors is called forward acting error correction.

5. What is error detection?
The decoder accepts the received sequence and checks whether it matches a valid message sequence. If not, the decoder discards the received sequence and notifies the transmitter (over the reverse channel from the receiver to the transmitter) that errors have occurred and the message must be retransmitted. This method of error control is called error detection.

6. Define linear block code.
If each of the 2^k codewords can be expressed as a linear combination of k linearly independent code vectors, the code is called a linear block code. A code is linear if the sum of any two code vectors produces another code vector.

7. Give the properties of the syndrome in a linear block code.
- The syndrome depends only on the error pattern and not on the transmitted codeword.
- All error patterns that differ by a codeword have the same syndrome.

8. What is a Hamming code?
A Hamming code is a family of (n, k) linear block codes with:
Block length: n = 2^q - 1
Number of message bits: k = n - q
Number of parity bits: q = n - k

9. When is a code said to be cyclic?
A code is cyclic if it satisfies:
- Linearity property: the sum of any two codewords in the code is also a codeword.
- Cyclic property: any cyclic shift of a codeword in the code is also a codeword.

10. Give the difference between a linear block code and a cyclic code.
A linear block code can be simply represented in matrix form, whereas a cyclic code can be represented in polynomial form.
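As a concrete illustration of Questions 3 and 4, here is a minimal Python sketch (the function names are invented for this example) of a (5, 1) repetition code with forward acting error correction by majority vote:

```python
# Minimal sketch of a (5, 1) repetition code (Question 3) with
# forward-acting error correction by majority vote (Question 4).

def repetition_encode(bit, n=5):
    """Encode one message bit into a block of n identical bits."""
    return [bit] * n

def repetition_decode(block):
    """Correct errors by taking a majority vote over the received block."""
    return 1 if sum(block) > len(block) // 2 else 0

codeword = repetition_encode(1)       # [1, 1, 1, 1, 1]
received = [1, 0, 1, 1, 0]            # two bits flipped by channel noise
print(repetition_decode(received))    # 1 -- corrected, since errors <= 2
```

A (5, 1) repetition code corrects up to two errors per block, at the cost of a rate of only 1/5.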

11. What is the generator polynomial?
The generator polynomial g(x) is a polynomial of degree n - k that is a factor of x^n + 1; g(x) is the code polynomial of least degree in the code. According to this expansion, the polynomial g(x) has two terms with coefficient 1 separated by n - k - 1 terms.

12. What is the parity check polynomial?
The parity check polynomial h(x) is a polynomial of degree k that is a factor of x^n + 1, given by h(x) = (x^n + 1) / g(x). According to this expansion, the polynomial h(x) has two terms with coefficient 1 separated by k - 1 terms.

13. How is the syndrome polynomial calculated?
The syndrome polynomial S(x) is the remainder that results from dividing the received polynomial r(x) by the generator polynomial g(x):
r(x) = q(x) g(x) + S(x)

14. Give two properties of the syndrome in a cyclic code.
- The syndrome of a received word polynomial is also the syndrome of the corresponding error polynomial.
- When the error pattern e(x) is confined to the n - k parity-check positions (i.e. has degree less than n - k), the syndrome polynomial S(x) is identical to e(x).

15. Define Hamming distance (HD).
The number of bit positions in which two code vectors differ is known as the Hamming distance between them.
e.g., if c1 = 1 0 0 1 0 1 1 0 and c2 = 1 1 0 0 1 1 0 1, then HD = 5.

16. Define the weight of a code vector.
The number of non-zero components in a code vector is known as the weight of the code vector.
e.g., if c1 = 1 0 0 1 0 1 1 0, then w(c1) = 4.

17. Define minimum distance.
The minimum distance of a linear block code is the smallest Hamming distance between any pair of codewords in the code.
e.g., if c1 = 0 0 1 1 1 0 and c2 = 0 1 1 0 1 1, then c1 + c2 = 0 1 0 1 0 1 and d_min = 3.

18. Why are cyclic codes extremely well suited for error detection?
Cyclic codes are well suited for error detection because:
- They are easy to encode.
- They have a well-defined mathematical structure.

19. What is a syndrome?
The syndrome gives an indication of the errors present in the received vector Y. If YH^T = 0, there are no errors in Y and it is a valid code vector. A non-zero value of YH^T is called the syndrome; it indicates that Y is not a valid code vector and contains errors.

20. Define dual code.
Let there be an (n, k) block code satisfying HG^T = 0. Then the (n, n - k), i.e. (n, q), block code is called the dual code. For every (n, k) block code there exists a dual code of size (n, q).

21. Write the syndrome properties of linear block codes.
- The syndrome is obtained by S = YH^T.
- If Y = X, then S = 0, i.e. there is no error in the output.
- If Y is not equal to X, then S is non-zero, i.e. there is an error in the output.
- The syndrome depends only on the error pattern: S = EH^T.
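To make Question 13 concrete, here is a minimal Python sketch of the division r(x) = q(x) g(x) + S(x) over GF(2), with polynomials stored as coefficient lists, constant term first (a representation chosen for this example):

```python
# Syndrome S(x) = remainder of r(x) divided by g(x), arithmetic modulo 2.
# Polynomials are bit lists, constant term first: 1 + x + x^3 -> [1, 1, 0, 1].

def gf2_mod(r, g):
    """Return r(x) mod g(x) over GF(2) by polynomial long division."""
    r = r[:]                                      # work on a copy
    for i in range(len(r) - 1, len(g) - 2, -1):
        if r[i]:                                  # eliminate the leading term
            for j, gbit in enumerate(g):
                r[i - len(g) + 1 + j] ^= gbit
    return r[:len(g) - 1]                         # remainder: degree < deg g

g = [1, 1, 0, 1]             # g(x) = 1 + x + x^3, the (7,4) Hamming generator
r = [1, 0, 0, 1, 0, 1, 0]    # a received word r(x), low-order bit first
print(gf2_mod(r, g))         # syndrome coefficients [s0, s1, s2]
```

A zero remainder means r(x) is a valid code polynomial; any other remainder is the syndrome used for error correction.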

22. List the properties of the generator polynomial of cyclic codes.
- The generator polynomial g(p) is a factor of p^n + 1.
- The code polynomial, message polynomial and generator polynomial are related by x(p) = m(p) g(p).
- The generator polynomial is of degree q = n - k.

23. What is a convolutional code?
A convolutional code is one in which parity bits are continuously interleaved with the information (message) bits.

24. Define constraint length.
The constraint length K of a convolutional code is the number of shifts required for a single message bit to enter the shift register and finally come out of the encoder output:
K = M + 1
where M is the number of memory elements in the register.

25. What is information?
The information conveyed by a symbol sk is inversely related to its probability of occurrence pk:
I(sk) = log2 (1/pk) = -log2 pk

26. Give two properties of information.
- Information is non-negative: I(sk) >= 0.
- If the probability is lower, the information is higher, and vice versa: if I(sk) > I(si), then p(sk) < p(si).

27. What is entropy?
Entropy is the average amount of information per source symbol:
H(K) = - SUM pk log2 pk, summed over k = 0 to K - 1

28. Give two properties of entropy.
- H(K) = 0 if and only if pk = 1 for some k and all the remaining probabilities in the set are 0. This lower bound on entropy corresponds to no uncertainty.
- H(K) = log2 K if and only if pk = 1/K for all k, i.e. all symbols are equiprobable. This upper bound on entropy corresponds to maximum uncertainty.

29. What is the extremal property?
H(K) <= log2 K

30. What is the extension property?
The entropy of the n-th extension of a source is n times that of the primary source:
H(K^n) = n H(K)

31. What is a discrete source?
If a source emits symbols from a fixed finite alphabet {s0, s1, s2, ..., s(K-1)}, the source is said to be a discrete source.

32. State Shannon's first theorem (source coding theorem).
Distortionless coding is possible when L log2 r >= H(K), where L is the average codeword length, r is the size of the coding alphabet, and H(K) is the source entropy.

33. What is data compaction?
Data compaction is used to remove redundant information so that the decoder reconstructs the original data with no loss of information.
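A small Python sketch tying Questions 27-29 together, using the source probabilities of PART B Question 28 as sample input:

```python
# Entropy H = -sum(pk * log2(pk)) and its bounds 0 <= H <= log2(K).
from math import log2

def entropy(probs):
    """Average information per source symbol, in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

p = [0.45, 0.15, 0.15, 0.10, 0.15]    # source of PART B Question 28
H = entropy(p)
print(f"H = {H:.4f} bits/symbol")     # about 2.082 bits/symbol
print(H <= log2(len(p)))              # extremal property: H <= log2(5)
```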

34. What is a decision tree? Where is it used?
A decision tree is a tree that has an initial state and terminal states corresponding to the source symbols s0, s1, s2, ..., s(K-1). Once a terminal state emits its symbol, the decoder is reset to its initial state. The decision tree is used for the decoding of prefix codes.

35. What is an instantaneous code?
If no codeword is a prefix of any other codeword, the code is said to be an instantaneous code.
e.g., 0, 10, 110, 1110

36. What is a uniquely decipherable code?
If no codeword is a combination of other codewords, the code is said to be uniquely decipherable.
e.g., 0, 11, 101, 1001

37. How is the encoding operation performed in Lempel-Ziv coding of a binary sequence?
Encoding is performed by parsing the source data stream into segments that are the shortest subsequences not encountered previously.

38. What is a discrete channel?
The channel is said to be discrete when both the input and output alphabets have finite sizes.

39. What is a memoryless channel?
The channel is said to be memoryless when the current output symbol depends only on the current input symbol and not on any of the previous ones.

40. What is the important property when using the joint probability p(xj, yk)?
The sum of all the elements in the matrix is equal to 1.

41. What is the important property when using the conditional probability p(xj / yk)?
The sum of all the elements along a column is equal to 1.

42. What is the important property when using the conditional probability p(yk / xj)?
The sum of all the elements along a row is equal to 1.

43. State Shannon's third theorem (information capacity theorem).
The information capacity of a continuous channel of bandwidth B hertz, perturbed by additive white Gaussian noise of power spectral density N0/2 and limited in bandwidth to B, is
C = B log2 (1 + P / (N0 B)) bits per second
where P is the average transmitted power.

44. What are the two important points while considering a codeword?
- The codewords produced by the source encoder are in binary form.
- The source code is uniquely decodable.

45. What is prefix coding?
Prefix coding is a variable length coding algorithm. It assigns binary digits to the messages as per their probabilities of occurrence. A prefix of a codeword is any sequence that forms the initial part of the codeword. In a prefix code, no codeword is the prefix of any other codeword.
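A numeric sketch of the formula in Question 43; the bandwidth and power values below are arbitrary assumptions chosen for this example, not taken from the question bank:

```python
# C = B * log2(1 + P / (N0 * B)) for a bandlimited AWGN channel (Question 43).
from math import log2

def awgn_capacity(B, P, N0):
    """Capacity in bits/s.

    B  : bandwidth in Hz
    P  : average signal power in watts
    N0 : one-sided noise power spectral density in W/Hz
    """
    return B * log2(1 + P / (N0 * B))

# Sample values (assumptions): a 3 kHz channel with SNR P/(N0*B) of ~33.
print(awgn_capacity(B=3_000, P=1e-6, N0=1e-11))   # about 15,300 bits/s
```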

46. State the channel coding theorem for a discrete memoryless channel.
Given a source of M equally likely messages, with M >> 1, generating information at a rate R, and given a channel with capacity C: if R <= C, there exists a coding technique such that the output of the source may be transmitted over the channel with a probability of error in the received message that can be made arbitrarily small.

47. Define the channel capacity of a discrete memoryless channel.
The channel capacity of a discrete memoryless channel is the maximum average mutual information, where the maximization is taken with respect to the input probabilities P(xi):
C = max I(X; Y) over all input distributions P(xi)

48. Define mutual information.
Mutual information is the amount of information transferred when xi is transmitted and yi is received. It is denoted I(xi; yi) and given by
I(xi; yi) = log2 [ P(xi | yi) / P(xi) ] bits

49. Define code redundancy.
Code redundancy is the measure of the redundancy of bits in the encoded message sequence:
Redundancy = 1 - code efficiency
It should be as low as possible.

50. Define the rate of information transmission across the channel.
The rate of information transmission across the channel is
Dt = [H(X) - H(X/Y)] r bits/sec
where H(X) is the entropy of the source and H(X/Y) is the conditional entropy.

51. Define code variance.
Variance is the measure of variability in codeword lengths. It should be as small as possible:
Variance = SUM pk (nk - L)^2, summed over k
where pk is the probability of the k-th symbol, nk is the length of its codeword, and L is the average codeword length.

PART B

1. The generator matrix for a (6,3) block code is given below. Find all the code vectors of this code.
    [1 0 0 : 0 1 1]
G = [0 1 0 : 1 0 1]
    [0 0 1 : 1 1 0]
Considering the (7,4) code defined by the generator polynomial g(x) = 1 + x + x^3: the codeword 0111001 is sent over a noisy channel, producing a received word 0101001 that has a single error. Determine the syndrome polynomial S(x) and the error polynomial e(x).
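As a cross-check for problem 1, here is a brute-force Python sketch that lists all 2^3 code vectors via c = mG (mod 2) and reads off the minimum distance; for a linear code, d_min equals the smallest nonzero codeword weight, which also answers problem 5:

```python
# All code vectors of the (6,3) code of PART B problem 1: c = m G (mod 2).
from itertools import product

G = [[1, 0, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 1, 0]]

codewords = []
for m in product([0, 1], repeat=3):                     # all 8 messages
    c = [sum(m[i] * G[i][j] for i in range(3)) % 2      # mod-2 matrix product
         for j in range(6)]
    codewords.append(c)
    print(m, "->", c)

# For a linear code, d_min is the smallest nonzero codeword weight.
print("d_min =", min(sum(c) for c in codewords if any(c)))   # 3
```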

2. For a (6,3) systematic linear block code, the three parity check bits c4, c5, c6 are formed from the following equations:
c4 = d1 + d2 + d3;  c5 = d1 + d2;  c6 = d1 + d3
i) Write down the generator matrix.
ii) Construct all possible codewords.
3. For the code of Question 2, suppose that the received word is 01011. Decode this received word by finding the location of the error and the transmitted data bits.
4. A generator matrix of a (6,3) linear block code is given as
    [1 0 0 : 0 1 1]
G = [0 1 0 : 1 0 1]
    [0 0 1 : 1 1 0]
5. Determine d_min for the above code. Comment on its error correction and detection capabilities. If the received sequence is 1 0 1 1 0 1, determine the message bit sequence.
6. How is the syndrome calculated in cyclic codes?
7. The parity check matrix of a particular (7,4) linear block code is given by
    [1 1 1 0 1 0 0]
H = [1 1 0 1 0 1 0]
    [1 0 1 1 0 0 1]
Find whether the matrix belongs to a Hamming code. List all the code vectors. What is the minimum distance between code vectors? How many errors can be corrected and how many errors can be detected?
8. For a linear block code, prove with examples that:
- The syndrome depends only on the error pattern and not on the transmitted codeword.
- All error patterns that differ by a codeword have the same syndrome.
9. What is the use of syndromes? Explain syndrome decoding.
10. What are Hadamard codes and extended block codes?
11. An error control code has the following parity check matrix:
    [1 0 1]
    [1 1 0]
H = [0 1 1]
    [1 0 0]
    [0 1 0]
    [0 0 1]
- Determine the generator matrix G.
- Find the codeword that begins with 101.
- Decode the received codeword 110110. Comment on the error detection and correction capability of the code.
- What is the relation between G and H? Verify the same.
12. The generator matrix of a (6,3) systematic block code is given by
    [1 0 0 : 0 1 1]
G = [0 1 0 : 1 0 1]
    [0 0 1 : 1 1 0]
- Find the code vectors.
- Find the parity check matrix.
- Find the error syndrome.
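For the syndrome-based problems above (problems 8, 9 and 12 in particular), here is a minimal sketch of S = Y H^T and single-error correction by matching the syndrome to a column of H. The H used below is not given in the problems; it is derived from the G of problem 12 under the systematic assumption H = [P^T : I]:

```python
# Syndrome decoding for the (6,3) code of problem 12: S = y H^T (mod 2).
# H = [P^T : I3] is derived here from G = [I3 : P] (an assumption of this
# example); a single error is corrected by matching S to a column of H.

H = [[0, 1, 1, 1, 0, 0],
     [1, 0, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1]]

def syndrome(y):
    """S = y H^T over GF(2); S == (0, 0, 0) means y is a valid codeword."""
    return tuple(sum(y[j] * H[i][j] for j in range(6)) % 2 for i in range(3))

def correct_single_error(y):
    """Flip the bit whose column of H equals the syndrome, if any."""
    s = syndrome(y)
    if any(s):
        for j in range(6):
            if tuple(H[i][j] for i in range(3)) == s:
                y = y[:j] + [1 - y[j]] + y[j + 1:]
                break
    return y

received = [1, 0, 1, 1, 0, 1]             # received sequence from problem 5
print(syndrome(received))                  # (0, 0, 0): already a codeword
print(correct_single_error(received))      # unchanged; message bits are 101
```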

13. The parity check bits of an (8,4) linear block code are generated by
c5 = d1 + d2 + d4
c6 = d1 + d2 + d3
c7 = d1 + d3 + d4
c8 = d2 + d3 + d4
where d1, d2, d3 and d4 are message bits.
- Find the generator and parity check matrices of this code.
- List all code vectors.
- Find the error detecting and correcting capabilities of this code.
- Show through an example that this code detects up to 3 errors.
14. Find the generator matrix of a systematic (7,4) cyclic code if G(p) = p^3 + p + 1. Also find the parity check matrix.
15. Design the encoder for the (7,4) cyclic code generated by G(p) = p^3 + p + 1 and verify its operation for any message vector.
16. The generator polynomial of a (7,4) cyclic code is G(p) = p^3 + p + 1. Find all the code vectors for the code in non-systematic form.
17. The generator polynomial of a (7,4) cyclic code is G(p) = p^3 + p + 1. Find all the code vectors for the code in systematic form.
18. Discuss in detail the various definitions and principles of error control coding.
19. Discuss in detail Golay codes, BCH codes, shortened codes and burst error correcting codes.
20. Discuss cyclic redundancy check (CRC) codes in detail.
21. With block diagrams, explain the operation of a syndrome calculator for cyclic codes.
22. Design a syndrome calculator for the (7,4) cyclic Hamming code generated by the polynomial G(p) = p^3 + p + 1. Calculate the syndrome for Y = (1001101).
23. Find the (7,4) cyclic code that encodes the message sequence (1011) using the generator polynomial g(x) = 1 + x + x^3.
24. Verify whether g(x) = 1 + x + x^2 + x^4 is a valid generator polynomial for generating a cyclic code for the message (111).
25. Determine the encoded message for the following 8-bit data codes using the CRC generating polynomial P(x) = x^4 + x^3 + 1:
1. 11001100
2. 01011111
26. Calculate the systematic generator matrix for the polynomial g(x) = 1 + x + x^3. Also draw the encoder diagram.
28. A discrete memoryless source has an alphabet of five symbols whose probabilities are given by
[X] = [X1, X2, X3, X4, X5]
[P] = [0.45, 0.15, 0.15, 0.10, 0.15]
Compute the entropy. Find the amount of information gained by observing the source.
29. State and prove the upper and lower bounds of entropy.
30. A discrete memoryless source has an alphabet of five symbols whose probabilities are given by
[X] = [X1, X2, X3, X4, X5]
[P] = [0.45, 0.15, 0.15, 0.10, 0.15]
Construct two different codes for this source; for each, find the efficiency and the variance.
31. What is coding? Explain the steps involved in Shannon-Fano coding.
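A hedged sketch for problem 25, assuming the conventional systematic CRC procedure (append deg-P zero bits, divide modulo 2, append the remainder) and MSB-first bit strings:

```python
# CRC encoding for problem 25: divisor P(x) = x^4 + x^3 + 1 -> bits 11001.

def crc_encode(data, poly="11001"):
    """Append the CRC remainder of data(x) * x^4 divided by poly, over GF(2)."""
    r = len(poly) - 1                       # degree of P(x): 4 check bits
    bits = [int(b) for b in data] + [0] * r
    for i in range(len(data)):              # long division modulo 2
        if bits[i]:
            for j, p in enumerate(poly):
                bits[i + j] ^= int(p)
    return data + "".join(str(b) for b in bits[-r:])   # data + 4 CRC bits

print(crc_encode("11001100"))   # encoded message for data 1 of problem 25
print(crc_encode("01011111"))   # encoded message for data 2 of problem 25
```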

32. State and explain Shannon's first theorem.
33. Write short notes on:
(a) Binary communication channel
(b) Binary symmetric channel
34. What is entropy? Explain the properties and types of entropy.
35. Explain the relationship between joint and conditional entropy.
36. Prove that the mutual information of the channel is symmetric: I(X; Y) = I(Y; X).
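For problem 36, the symmetry follows in a few lines from the definitions used in Questions 47-48; a compact derivation in LaTeX notation:

```latex
% Mutual information is symmetric: I(X;Y) = I(Y;X).
\begin{align}
I(X;Y) &= H(X) - H(X \mid Y)
        = \sum_{x,y} p(x,y)\log_2 \frac{p(x \mid y)}{p(x)} \\
       &= \sum_{x,y} p(x,y)\log_2 \frac{p(x,y)}{p(x)\,p(y)}
        = \sum_{x,y} p(x,y)\log_2 \frac{p(y \mid x)}{p(y)} \\
       &= H(Y) - H(Y \mid X) = I(Y;X)
\end{align}
```

The middle expression is symmetric in x and y, which is what forces the two conditional forms to agree.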