SIDDHARTH GROUP OF INSTITUTIONS :: PUTTUR
Siddharth Nagar, Narayanavanam Road 517583

QUESTION BANK (DESCRIPTIVE)

Subject with Code : CODING THEORY & TECHNIQUES (16EC3810)
Course & Branch : M.Tech - DECS
Regulation : R16
Year & Sem : I-M.Tech & II-Sem

UNIT I

1a) Explain the concept of the mathematical and logarithmic measure of information, with an example. [5M]
 b) For a binary memoryless source with two symbols x1 and x2, show that the entropy H(X) is maximum when x1 and x2 are equiprobable. What are the lower and upper bounds on H(X)?

2a) For the given channel transition matrix, draw the channel diagram and determine the probabilities associated with the outputs, assuming equiprobable inputs. Also find the mutual information I(X;Y) for the channel.
 b) Explain the following: [3M]
    i. Entropy
    ii. Mutual information

3a) For a discrete memoryless source with K equiprobable symbols, the use of a fixed-length code provides the same efficiency as any other coding technique. Justify the above statement, and state the condition to be satisfied by K for achieving 100% efficiency. [8M]
 b) Explain prefix coding. [4M]

4a) State and prove the properties of entropy.
 b) Explain fixed-length and variable-length coding.

5. Consider a telegraph source having two symbols, dot and dash. The dot duration is 0.2 sec, and the dash duration is 3 times the dot duration. The probability of a dot occurring is twice that of a dash, and the time between symbols is 0.2 sec. Calculate the information rate of the telegraph source.

6a) Write a short note on the source coding theorem.
 b) Prove that H(X,Y) = H(X/Y) + H(Y) = H(Y/X) + H(X).

7. For the binary symmetric channel shown in the figure, P(x1) = β. Show that the mutual information I(X;Y) is given by

I(X;Y) = H(Y) + p log2 p + (1-p) log2(1-p). Determine I(X;Y) for p = 0.1 and β = 0.5.

8a) For the given channel transition matrix, draw the channel diagram and determine the probabilities associated with the outputs, assuming equiprobable inputs. Also find the mutual information I(X;Y) for the channel.
 b) State and prove the properties of information. [5M]

9. The binary symmetric channel is shown below. Find the rate of information transmission across this channel for p = 0.8 and p = 0.6. The symbols are generated at the rate of 1000 per second, with P(x0) = P(x1) = 1/2. Also find the channel information rate.

10a) A source emits one of four symbols S0, S1, S2 and S3 with probabilities 1/3, 1/4, 1/6 and 1/4 respectively. The successive symbols emitted by the source are statistically independent. Calculate the entropy of the source.
 b) Explain Shannon's second fundamental theorem on coding for memoryless noisy channels. [5M]
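As a numerical cross-check on question 7 (an illustrative sketch, not part of the question paper), the mutual information of a binary symmetric channel can be evaluated as I(X;Y) = H(Y) - H(p), where H(.) is the binary entropy function; the function names below are our own:

```python
from math import log2

def H(p):
    """Binary entropy in bits; H(0) = H(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_mutual_information(p, beta):
    """I(X;Y) = H(Y) - H(Y|X) for a BSC with crossover p and P(x1) = beta."""
    py1 = beta * (1 - p) + (1 - beta) * p   # P(Y = 1)
    return H(py1) - H(p)                    # for a BSC, H(Y|X) = H(p)

print(round(bsc_mutual_information(0.1, 0.5), 4))   # ≈ 0.531 bits/symbol
```

For β = 0.5 the output distribution is uniform, so H(Y) = 1 and I(X;Y) = 1 - H(0.1) ≈ 0.531 bits per symbol, matching the formula in question 7.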

UNIT II

1. A discrete memoryless source has five symbols x1, x2, x3, x4 and x5 with probabilities 0.4, 0.19, 0.16, 0.15 and 0.1 respectively. Construct a Shannon-Fano code for the source and calculate the code efficiency.

2a) The generator matrix for a (6,3) block code is given below. Find all the code vectors of this code.
    G =
 b) Explain the concept of syndrome decoding. [5M]

3. For a systematic linear block code, the three parity check digits c4, c5, c6 are given by
    c4 = d1 ⊕ d2 ⊕ d3
    c5 = d1 ⊕ d2
    c6 = d1 ⊕ d3
    i) Construct the generator matrix.
    ii) Construct the code generated by this matrix.
    iii) Determine the error correcting capability.
    iv) Prepare a suitable decoding table.

4. A DMS has the following alphabet with the probabilities of occurrence shown below:

    Symbol      S0     S1     S2     S3     S4     S5     S6
    Probability 0.125  0.0625 0.25   0.0625 0.125  0.125  0.25

    Generate the Huffman code with minimum code variance. Determine the code variance and the code efficiency.

5a) Explain the error detecting and correcting capability of linear block codes.
 b) Explain the matrix description of linear block codes with an example.
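The Huffman construction asked for in question 4 can be sketched programmatically (an illustrative sketch using Python's heapq; the helper name huffman_lengths is ours, not from any textbook). Because the probabilities in question 4 are all powers of 1/2, the average codeword length equals the source entropy and the efficiency is 100%:

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Return the codeword length assigned to each symbol by Huffman coding."""
    # Heap entries: (probability, unique id, symbol indices under this node).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    uid = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)     # two least probable nodes
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:                   # each merge adds one bit to these symbols
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, uid, s1 + s2))
        uid += 1
    return lengths

probs = [0.125, 0.0625, 0.25, 0.0625, 0.125, 0.125, 0.25]   # DMS of question 4
L = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))
H = -sum(p * log2(p) for p in probs)
print(L, H, H / L)   # average length 2.625, entropy 2.625, efficiency 1.0
```

The unique id in each heap entry breaks probability ties so that tuples never compare their list elements; which optimal tree the tie-break picks does not change the average length.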

6. The parity check matrix of a particular (7,4) linear block code is given by
    [H] =
    i) Find the generator matrix G.
    ii) List all the code vectors.
    iii) What is the minimum distance between code vectors?
    iv) How many errors can be detected? How many errors can be corrected?

7a) Compare Huffman coding and Shannon-Fano coding with an example.
 b) Explain the encoder implementation of linear block codes. [5M]

8. A DMS consists of three symbols x1, x2, x3 with probabilities 0.45, 0.35 and 0.2 respectively. Determine the minimum variance Huffman codes for symbol-by-symbol occurrence.

9a) Explain the Huffman coding algorithm applied to pairs of symbols.
 b) Distinguish between systematic and non-systematic codes.

10. Explain the following:
    i) Syndrome testing
    ii) Lempel-Ziv codes
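For question 3 of this unit, the full code can be enumerated directly to verify its error-correcting capability (an illustrative sketch; the systematic generator matrix below is derived from the stated parity equations c4 = d1⊕d2⊕d3, c5 = d1⊕d2, c6 = d1⊕d3):

```python
from itertools import product

# Systematic (6,3) generator matrix: codeword = (d1, d2, d3, c4, c5, c6).
G = [
    [1, 0, 0, 1, 1, 1],   # d1 feeds c4, c5, c6
    [0, 1, 0, 1, 1, 0],   # d2 feeds c4, c5
    [0, 0, 1, 1, 0, 1],   # d3 feeds c4, c6
]

def encode(d, G):
    """Codeword = d * G over GF(2)."""
    return [sum(di * gij for di, gij in zip(d, col)) % 2 for col in zip(*G)]

codewords = [encode(d, G) for d in product([0, 1], repeat=3)]
# For a linear code, minimum distance = minimum nonzero codeword weight.
d_min = min(sum(c) for c in codewords if any(c))
print(d_min, (d_min - 1) // 2)   # d_min = 3 -> corrects t = 1 error
```

With d_min = 3 the code detects 2 errors and corrects 1, which answers part iii) of question 3.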

UNIT III

1. The polynomial g(x) = x^4 + x + 1 is the generator polynomial for the (15,11) binary Hamming code.
    (i) Determine the generator matrix of this code in systematic form.
    (ii) Determine the generator polynomial of the dual code.

2. The generator polynomial over the binary field is g(x) = x^8 + x^6 + x^4 + x^2 + 1.
    (i) Find the lowest rate cyclic code whose generator polynomial is g(x). [8M]
    (ii) What is the rate of this code? [4M]

3. Form a parity check matrix for a (15,11) systematic Hamming code. Draw the encoding and decoding circuits.

4a) Draw the block diagram of a general type-II one-step majority-logic decoder and explain it.
 b) Determine the weight enumerator for the extended Hamming code of length 2^m.

5. Show that the minimum Hamming distance of a linear block code is equal to the minimum number of columns of its parity check matrix that are linearly dependent. Show also that the minimum Hamming distance of a Hamming code is always equal to 3.

6. The polynomial over the binary field is g(p) = p^8 + p^6 + p^4 + p^2 + 1. [4+4+4M]
    (i) Find the lowest rate cyclic code whose generator polynomial is g(p). What is the rate of this code?
    (ii) Find the minimum distance of the code found in (i).
    (iii) What is the coding gain for the code found in (i)?

7a) Explain applications of block codes for error control in data storage systems.
 b) Explain the encoder of the (7,4) Hamming code with a neat sketch.

8a) Explain the syndrome decoding procedure for Hamming codes. [5M]
 b) The parity check matrix of the (7,4) Hamming code is expressed as
    [H] =
    Evaluate the syndrome vector for single-bit errors.
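Systematic cyclic encoding with the (15,11) generator g(x) = x^4 + x + 1 from question 1 can be sketched as follows (an illustrative sketch; representing polynomials as Python integer bit masks, with bit i standing for x^i, is our own choice):

```python
def poly_mod(a, g):
    """Remainder of a(x) divided by g(x) over GF(2); polynomials as bit masks."""
    dg = g.bit_length() - 1
    while a.bit_length() - 1 >= dg:
        # XOR out the leading term of a(x) with an aligned copy of g(x).
        a ^= g << (a.bit_length() - 1 - dg)
    return a

def cyclic_encode(m, g, n, k):
    """Systematic encoding: c(x) = x^(n-k) m(x) + [x^(n-k) m(x) mod g(x)]."""
    shifted = m << (n - k)
    return shifted ^ poly_mod(shifted, g)

g = 0b10011                         # g(x) = x^4 + x + 1
c = cyclic_encode(0b1, g, 15, 11)   # encode the message m(x) = 1
print(poly_mod(c, g))               # 0 -> every codeword is divisible by g(x)
```

The printed remainder is 0, illustrating the defining property of cyclic codes that the syndrome computation in questions 3 and 8 relies on: a received word is a codeword exactly when g(x) divides it.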

9a) Explain the generalized weight enumerator with an example.
 b) Write a short note on perfect codes. [5M]

10a) Explain the probability of an undetected error for linear codes over a binary symmetric channel.
 b) Explain the following: [3+3M]
    i. Hamming distance
    ii. Error detecting and correcting capabilities of Hamming codes.
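Question 10a can be illustrated numerically (an illustrative sketch, not part of the question paper): over a BSC with crossover probability p, an error pattern goes undetected exactly when it equals a nonzero codeword, so P(undetected) = sum over w > 0 of A_w p^w (1-p)^(n-w), where A_w is the code's weight distribution. The (7,4) Hamming code has the standard weight distribution A_0 = 1, A_3 = 7, A_4 = 7, A_7 = 1:

```python
def p_undetected(weights, p, n):
    """P(undetected error) = sum_{w>0} A_w p^w (1-p)^(n-w) over a BSC."""
    return sum(A * p**w * (1 - p)**(n - w)
               for w, A in weights.items() if w > 0)

# Weight distribution of the (7,4) Hamming code.
A = {0: 1, 3: 7, 4: 7, 7: 1}
Pu = p_undetected(A, 0.01, 7)
print(Pu)   # ~6.8e-6 for crossover probability p = 0.01
```

The dominant term comes from the 7 weight-3 codewords, since triple errors are the lightest patterns the code cannot detect.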

UNIT IV

1. Construct the decoding table for a single-error-correcting (7,4) cyclic code whose generator polynomial is g(x) = x^3 + x^2 + 1.

2. Briefly explain the following: [4+4+4M]
    (a) Structural properties of convolutional codes.
    (b) Trellis diagram.
    (c) Convolutional interleaving.

3. Verify whether g(x) = 1 + x + x^2 + x^3 + x^4 is a valid generator polynomial for generating a cyclic code for the message [1 1 1].

4. Consider a (3,1,2) convolutional code with g(1) = (011), g(2) = (110), g(3) = (101). [4+4+4M]
    (a) Draw the encoder block diagram.
    (b) Find the generator matrix.
    (c) Find the code vector corresponding to the information sequence d = 10001.

5a) Draw the encoder circuit for the (7,4) cyclic code generated by g(x) = 1 + x + x^3.
 b) Explain how the syndrome is computed for cyclic codes, and how the error is detected from it.

6. Draw the circuit diagram of the error-trapping decoder for the (15,7) cyclic code generated by g(x) = 1 + x^4 + x^6 + x^7 + x^8 and explain it.

7a) Consider the (3,1,2) convolutional code with g(1) = (110), g(2) = (101), g(3) = (111).
    i) Draw the encoder block diagram.
    ii) Find the generator matrix.
    iii) Find the code word corresponding to the information sequence u = (1 1 1 0 1).
 b) Explain the decoding of convolutional codes based on the maximum likelihood criterion. [5M]

8. A convolutional code is described by g1 = [1 0 1], g2 = [1 1 1] and g3 = [1 1 1].

    a) Draw the encoder corresponding to this code. [4M]
    b) Draw the state transition diagram for this code. [4M]
    c) Draw the trellis diagram for this code. [4M]

9. A rate-1/3, K = 6 convolutional code is given by the generator polynomials
    g(1,1) = 1 + x^2 + x^3 + x^5
    g(1,2) = 1 + x + x^4 + x^5
    a) Write g(1) and the matrices [Gα] and [Hα]. [4M]
    b) Determine the Hamming distance and the error-correcting capability t for the code. [4M]
    c) Draw a possible decoder for the code, after checking whether the code is majority-logic decodable. [4M]

10a) Design a syndrome calculator for a (7,4) cyclic code generated by the polynomial g(p) = p^3 + p + 1. Evaluate the syndrome for Y = (1001101).
 b) Write the advantages and disadvantages of: [3+3M]
    i) Cyclic codes
    ii) Convolutional codes
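The encoding step of question 7a can be sketched in a few lines (an illustrative sketch; the tap convention below, where the first bit of each generator weights the current input bit and later bits weight older input bits, is our assumption, and a textbook that reads the generator bits in the opposite order will permute the outputs accordingly):

```python
def conv_encode(u, generators):
    """Feedforward rate-1/n convolutional encoder.

    Each generator is a tap tuple: tap 0 weights the current input bit,
    tap i weights the input bit received i steps earlier.
    """
    m = max(len(g) for g in generators) - 1   # encoder memory order
    state = [0] * m                           # shift register, most recent first
    out = []
    for bit in u:
        window = [bit] + state
        out.append(tuple(sum(t * w for t, w in zip(g, window)) % 2
                         for g in generators))
        state = window[:m]                    # shift the register by one
    return out

# (3,1,2) code of question 7a: g(1) = (110), g(2) = (101), g(3) = (111).
g = [(1, 1, 0), (1, 0, 1), (1, 1, 1)]
u = [1, 1, 1, 0, 1]
print(conv_encode(u, g))   # [(1, 1, 1), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, 0, 0)]
```

Tracing the shift-register contents step by step also yields the state transition and trellis diagrams asked for in question 8.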

UNIT V

1a) Write the stack sequential decoding algorithm for convolutional codes.
 b) Draw the code tree for the (3,1,2) code with L = 5 and decode the sequence r = (011, 100, 001, 111, 001, 011, 100).

2. Determine the generator polynomials of all primitive BCH codes of length 31. Use the Galois field GF(2^5) generated by p(X) = 1 + X^2 + X^5.

3. Explain in detail the Viterbi algorithm for decoding convolutional codes, with a suitable example.

4. It is required to determine a BCH code having approximately 1000 bits in a code word with the capability of correcting errors. Find d_min, m, r, k and the rate of the code.

5a) Explain the error correcting procedure for BCH codes.
 b) Explain the construction of the Galois field GF(2^m).

6. Write a short note on:
    a) BCH bounds.
    b) The iterative algorithm.

7. Construct GF(2^3) from GF(2) by using the third-degree irreducible polynomial p(x) = x^3 + x^2 + 1.

8. Consider the (31,21), t = 2 and (31,16), t = 3 BCH codes. Write the parity check matrices for the codes, and check whether the codes are majority-logic decodable. Find the syndrome bits in terms of the error bits.

9a) Explain the properties of Galois fields.
 b) Explain the construction of the Galois field GF(2^m).

10. Explain the following: [4+4+4M]
    (i) Free distance and coding gain
    (ii) Metric diversion effect
    (iii) Decoding procedure for BCH codes
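The field construction of question 7 can be checked by listing successive powers of x modulo p(x) (an illustrative sketch; representing field elements as integer bit masks, with bit i standing for x^i, is our own choice). If all 2^m - 1 nonzero elements appear before the sequence repeats, x is a primitive element and p(x) is primitive:

```python
def gf_power_table(p_mask, m):
    """Successive powers of x modulo p(x) over GF(2); elements as bit masks."""
    top = 1 << m
    elems, a = [], 1          # start from x^0 = 1
    for _ in range(2**m - 1):
        elems.append(a)
        a <<= 1               # multiply by x
        if a & top:
            a ^= p_mask       # reduce modulo p(x)
    return elems

# p(x) = x^3 + x^2 + 1 from question 7, as the bit mask 0b1101.
powers = gf_power_table(0b1101, 3)
print(powers)                 # 7 distinct nonzero elements -> p(x) is primitive
```

The same routine applied to p(X) = 1 + X^2 + X^5 (mask 0b100101, m = 5) produces the 31-element power table of GF(2^5) needed for the BCH generator polynomials of question 2.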
