EC2252 COMMUNICATION THEORY UNIT 5 INFORMATION THEORY

Discrete Messages and Information Content, Concept of Amount of Information, Average Information, Entropy, Information Rate, Source Coding to Increase Average Information per Bit, Shannon-Fano Coding, Huffman Coding, Lempel-Ziv (LZ) Coding, Shannon's Theorem, Channel Capacity, Bandwidth-S/N Tradeoff, Mutual Information and Channel Capacity, Rate Distortion Theory, Lossy Source Coding.

PART-A

1. What is information theory?
Information theory deals with the mathematical modelling and analysis of a communication system rather than with physical sources and physical channels.

2. Define lossless channel.
A channel described by a channel matrix with only one nonzero element in each column is called a lossless channel. In a lossless channel no source information is lost in transmission.

3. Define deterministic channel. (NOV/DEC-2009)
A channel described by a channel matrix with only one nonzero element in each row is called a deterministic channel, and this element must be unity.

4. Define noiseless channel.
A channel is called noiseless if it is both lossless and deterministic. The channel matrix has only one element in each row and in each column, and this element is unity. The input and output alphabets are of the same size.

5. What is channel redundancy? (APRIL-2004)
Redundancy is given as
Redundancy = 1 - code efficiency, i.e. γ = 1 - η.
The redundancy should be as low as possible.

6. Write about channel capacity.
The channel capacity of a discrete memoryless channel is the maximum average mutual information, where the maximization is taken with respect to the input probabilities P(xi):
C = max over {P(xi)} of I(X; Y)

7. What is the channel capacity of a BSC and a BEC?
For a BSC the channel capacity is C = 1 + p log2 p + (1 - p) log2 (1 - p).
For a BEC the channel capacity is C = 1 - p.

8. State the channel coding theorem for a discrete memoryless channel. (DEC-2003)
Given a source of M equally likely messages, with M >> 1, generating information at a rate R, and given a channel with capacity C: if R <= C, there exists a coding technique such that the output of the source may be transmitted over the channel with a probability of error in the received message that may be made arbitrarily small.

9. What is the channel capacity of a discrete signal? (APRIL/MAY-2004), (NOV/DEC-2006), (April/May-2008)
The channel capacity of a discrete channel is
C = max over P(xi) of I(X, Y)
where I(X, Y) is the mutual information.
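As a quick numerical illustration of the matrix criteria in questions 2-4, the following Python sketch (the function names and the example matrix are illustrative, not from the textbook) classifies a channel matrix, with rows indexed by inputs xi and columns by outputs yj, as lossless, deterministic or noiseless.

# Sketch only: classifies a channel matrix using the criteria of questions 2-4.
# Rows = input symbols xi, columns = output symbols yj, entries = P(yj | xi).

def is_lossless(P):
    # Lossless: exactly one nonzero entry in every column.
    return all(sum(1 for p in col if p > 0) == 1 for col in zip(*P))

def is_deterministic(P):
    # Deterministic: exactly one nonzero entry in every row, and that entry is 1.
    return all(sum(1 for p in row if p > 0) == 1 and max(row) == 1.0 for row in P)

def is_noiseless(P):
    # Noiseless: both lossless and deterministic (square, identity-like matrix).
    return is_lossless(P) and is_deterministic(P)

# Example: the 2x2 identity channel is noiseless.
P = [[1.0, 0.0],
     [0.0, 1.0]]
print(is_lossless(P), is_deterministic(P), is_noiseless(P))   # True True True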

10. What is the channel capacity of a binary symmetric channel with error probability of 0.2? (NOV-2003)
Given p = 0.2, hence 1 - p = 0.8.
Channel capacity of a binary symmetric channel: C = 1 + p log2 p + (1 - p) log2 (1 - p)
= 1 + 0.2 log2 0.2 + 0.8 log2 0.8 = 1 - 0.4644 - 0.2575 = 0.278 bits/message.

11. What happens when the number of coding alphabet increases?
When the number of symbols in the coding alphabet increases, the efficiency of the coding technique decreases.

12. Prove that I(xi xj) = I(xi) + I(xj) if xi and xj are independent.
If xi and xj are independent, P(xi xj) = P(xi) P(xj). Then
I(xi xj) = log [1/P(xi xj)] = log [1/(P(xi) P(xj))] = log [1/P(xi)] + log [1/P(xj)] = I(xi) + I(xj).

13. Prove that I(X; Y) = H(X) + H(Y) - H(X, Y).
We know the relation H(X, Y) = H(X/Y) + H(Y), therefore
H(X/Y) = H(X, Y) - H(Y) ......... (1)
Mutual information is given by
I(X; Y) = H(X) - H(X/Y) ......... (2)
Substituting equation (1) in (2):
I(X; Y) = H(X) - H(X, Y) + H(Y) = H(X) + H(Y) - H(X, Y).
Thus the required relation is proved.

14. What is a channel diagram and a channel matrix?
The transition probability diagram of the channel is called the channel diagram, and its matrix representation is called the channel matrix.

15. What is meant by source encoding?
The efficient representation of data generated by a discrete source is called source coding. The device that performs the representation is called a source encoder.

16. Write the source encoding theorem.
Given a discrete memoryless source of entropy H(X), the average code word length L for any distortionless source encoding scheme is bounded as L >= H(X). The entropy H(X) represents the fundamental limit on the average number of bits per source symbol.

17. Name the source coding techniques. (NOV-2004)
Prefix coding (instantaneous coding), Shannon-Fano coding and Huffman coding.
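The following Python sketch (illustrative only) evaluates the BSC and BEC capacity formulas quoted in question 7 and reproduces the numerical result of question 10 for p = 0.2.

# Sketch only: BSC and BEC capacity formulas of question 7,
# checked against the question 10 result for p = 0.2.
from math import log2

def bsc_capacity(p):
    # C = 1 + p log2 p + (1 - p) log2 (1 - p), i.e. 1 minus the binary entropy of p
    if p in (0.0, 1.0):
        return 1.0
    return 1 + p * log2(p) + (1 - p) * log2(1 - p)

def bec_capacity(p):
    # C = 1 - p for erasure probability p
    return 1 - p

print(round(bsc_capacity(0.2), 3))   # 0.278 bits/message, as in question 10
print(bec_capacity(0.2))             # 0.8 bits/message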

18. What is meant by prefix code? (DEC-2003)
Prefix coding is a variable-length coding algorithm that assigns binary digits to the messages according to their probabilities of occurrence. A prefix of a code word means any sequence that forms the initial part of that code word. A prefix code is defined as a code in which no code word is the prefix of any other code word.

19. Explain Shannon-Fano coding. (NOV/DEC-2003)
An efficient code can be obtained by the following simple procedure, known as the Shannon-Fano algorithm:
List the source symbols in order of decreasing probability.
Partition the set into two sets that are as close to equiprobable as possible, and assign 0 to the upper set and 1 to the lower set.
Continue this process, each time partitioning the sets with as nearly equal probabilities as possible, until further partitioning is not possible.

20. What are the types of correlation?
The types of correlation are cross-correlation and auto-correlation.

21. What is the difference between correlation and convolution? (Apr/May-2010)
In correlation, physical time t is a dummy variable that disappears after the integral is evaluated, so correlation is a function of the delay parameter τ. In convolution, τ is the dummy variable and the result is a function of t. Convolution is commutative, but correlation is not commutative.

22. Define signal.
A signal is defined as any physical quantity carrying information that varies with time. The value of a signal may be real or complex. Signals may be continuous-time or discrete-time.

23. Define entropy. (APRIL/MAY-2004), (NOV/DEC-2006), (APRIL/MAY-2012)
Entropy is the measure of the average information content per source symbol. It is given by the expression
H(X) = - Σ P(xi) log2 P(xi) bits/symbol
where P(xi) is the probability of occurrence of the i-th symbol.

24. A source is emitting symbols x1, x2 and x3 with probabilities 0.6, 0.3 and 0.1 respectively. What is the entropy of the source? (NOV-2003)
Let p1 = 0.6, p2 = 0.3, p3 = 0.1.
H(X) = - Σ P(xi) log2 P(xi) = -(0.6 log2 0.6 + 0.3 log2 0.3 + 0.1 log2 0.1)
= 0.442 + 0.521 + 0.332 = 1.295 bits/symbol.

25. Define mutual information. (April/May-2008)
The mutual information is defined as the amount of information transferred when xi is transmitted and yj is received. It is represented by I(xi, yj) and, averaged over the ensemble, is given as
I(X, Y) = H(X) - H(X/Y) bits/symbol
where H(X) is the entropy of the source and H(X/Y) is the conditional entropy of X given Y.

26. State the properties of mutual information. (APRIL/MAY-2005), (May/June-2010)
I(X, Y) = I(Y, X)
I(X, Y) >= 0
I(X, Y) = H(Y) - H(Y/X)
I(X, Y) = H(X) + H(Y) - H(X, Y)
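The following Python sketch is one possible implementation of the Shannon-Fano procedure listed in question 19, applied to the source of question 24; the splitting rule and function names are illustrative choices, not taken from T1.

# Sketch only: a minimal Shannon-Fano coder following the steps of question 19,
# applied to the question 24 source (probabilities 0.6, 0.3, 0.1).
from math import log2

def shannon_fano(symbols):
    # symbols: list of (name, probability) pairs, sorted by decreasing probability
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    # find the split that makes the two subsets as nearly equiprobable as possible
    total = sum(p for _, p in symbols)
    best_i, best_diff, run = 1, float("inf"), 0.0
    for i in range(1, len(symbols)):
        run += symbols[i - 1][1]
        diff = abs(run - (total - run))
        if diff < best_diff:
            best_i, best_diff = i, diff
    upper = shannon_fano(symbols[:best_i])
    lower = shannon_fano(symbols[best_i:])
    code = {s: "0" + c for s, c in upper.items()}
    code.update({s: "1" + c for s, c in lower.items()})
    return code

source = [("x1", 0.6), ("x2", 0.3), ("x3", 0.1)]
codes = shannon_fano(source)
avg_len = sum(p * len(codes[s]) for s, p in source)
entropy = -sum(p * log2(p) for _, p in source)
print(codes)                                  # {'x1': '0', 'x2': '10', 'x3': '11'}
print(round(avg_len, 3), round(entropy, 3))   # average length 1.4 vs entropy 1.295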

27. What is amount of information?
The amount of information gained after observing the event S = sk, which occurs with probability pk, is given by the logarithmic function
I(sk) = log2 (1/pk)
The unit of information is the bit.

28. What is meant by one bit?
One bit is the amount of information that we gain when one of two possible and equally likely (equiprobable) events occurs.

29. Give the relation between the different entropies.
H(X, Y) = H(X) + H(Y/X) = H(Y) + H(X/Y)
where H(X) is the entropy of the source, H(Y/X) and H(X/Y) are conditional entropies, H(Y) is the entropy of the destination, and H(X, Y) is the joint entropy of the source and destination.

30. Define information rate. (NOV/DEC-2006), (APRIL/MAY-2007)
The information rate is the average number of bits of information emitted per second. If the source X emits symbols at a rate of r symbols per second, the information rate of the source is
R = r H(X) bits/second
where H(X) is the entropy of the source and r is the rate at which messages are generated.

31. What is data compaction?
For efficient signal transmission the redundant information must be removed from the signal prior to transmission. This operation, performed with no loss of information, is ordinarily carried out on a signal in digital form and is referred to as data compaction or lossless data compression.

32. State the property of entropy.
0 <= H(X) <= log2 K, where K is the radix (size) of the alphabet X of the source.

33. What is differential entropy?
The average amount of information per sample value of x(t) is measured by
H(X) = - ∫ fX(x) log2 fX(x) dx bits/sample
where H(X) is the differential entropy of X.

34. What is source coding and entropy coding? (NOV/DEC-2004), (Apr-2010)
The conversion of the output of a DMS into a sequence of binary symbols is called source coding. The design of a variable-length code such that its average code word length approaches the entropy of the DMS is often referred to as entropy coding.

35. State the Shannon-Hartley theorem (channel capacity theorem) for a continuous channel. (Apr/May-2012)
The capacity C of an additive white Gaussian noise channel is
C = B log2 (1 + S/N)
where B is the channel bandwidth and S/N is the signal-to-noise ratio.

36. What is the entropy of a binary memoryless source? (APRIL/MAY-2009), (Nov/Dec-2010)
The entropy of a binary memoryless source is
H(X) = -p0 log2 p0 - (1 - p0) log2 (1 - p0)
where p0 is the probability of symbol 0 and p1 = 1 - p0 is the probability of transmitting symbol 1.

37. Give the relation between the different entropies.
H(X, Y) = H(X) + H(Y/X) = H(Y) + H(X/Y)
where H(X) is the entropy of the source, H(Y/X) and H(X/Y) are conditional entropies, H(Y) is the entropy of the destination, and H(X, Y) is the joint entropy of the source and destination.
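The following Python sketch checks the entropy relations of questions 26, 29 and 37 numerically on a small joint distribution p(x, y); the example probabilities are made up for illustration, not taken from the textbook.

# Sketch only: numerical check of H(X,Y) = H(X) + H(Y/X) and
# I(X;Y) = H(X) + H(Y) - H(X,Y) on an illustrative joint pmf.
from math import log2

p_xy = {("x1", "y1"): 0.4, ("x1", "y2"): 0.1,
        ("x2", "y1"): 0.2, ("x2", "y2"): 0.3}

def H(dist):
    # entropy of a distribution given as a dict of probabilities
    return -sum(p * log2(p) for p in dist.values() if p > 0)

p_x, p_y = {}, {}
for (x, y), p in p_xy.items():
    p_x[x] = p_x.get(x, 0) + p
    p_y[y] = p_y.get(y, 0) + p

H_xy = H(p_xy)
H_x, H_y = H(p_x), H(p_y)
H_x_given_y = H_xy - H_y          # H(X/Y) = H(X,Y) - H(Y)
H_y_given_x = H_xy - H_x          # H(Y/X) = H(X,Y) - H(X)
I_xy = H_x + H_y - H_xy           # I(X;Y) = H(X) + H(Y) - H(X,Y)

print(round(H_xy, 4), round(H_x + H_y_given_x, 4))   # both equal H(X,Y)
print(round(I_xy, 4), round(H_x - H_x_given_y, 4))   # both equal I(X;Y), and >= 0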

38. How is the efficiency of the coding technique measured?
Code efficiency = H(X) / L, where L is the average code word length.

39. What is a discrete memoryless source?
A source whose symbols, emitted during successive signalling intervals, are statistically independent is called a discrete memoryless source. Memoryless means that the symbol emitted at any time is independent of previous choices.

40. Define rate of information transmission across the channel.
The rate of information transmission across the channel is given as
Dt = [H(X) - H(X/Y)] r bits/sec
where H(X) is the entropy of the source and H(X/Y) is the conditional entropy.

41. For an AWGN channel with 4 kHz bandwidth and noise power spectral density N0/2 = 10^-12 W/Hz, the signal power required at the receiver is 0.1 mW. Calculate the capacity of this channel.
The channel capacity is C = B log2 (1 + S/N) bits/sec, with B = 4000 Hz and S = 0.1 x 10^-3 W.
The noise power is N = N0 B = 2 x 10^-12 x 4000 = 8 x 10^-9 W.
Hence S/N = (0.1 x 10^-3) / (8 x 10^-9) = 1.25 x 10^4, and
C = 4000 log2 (1 + 1.25 x 10^4) ≈ 54.4 x 10^3 bits/sec.

PART-B

1. Explain the procedure of the Shannon-Fano coding algorithm and the Huffman coding algorithm. (April/May-2004), (April/May-2005), (Nov/Dec-2006), (May/June-2010), (Nov/Dec-2009) P.No T1-7.23
Source coding definition - types of source coding - Shannon-Fano coding algorithm steps - Huffman coding algorithm steps.

2. State and prove the properties of mutual information. P.No T1-7.61
Mutual information definition - properties of mutual information with proof.

3. Explain the different types of channels. P.No T1-7.40

4. Calculate the capacity of a Gaussian channel. (April/May-2005), (April/May-2004) P.No T1-7.50
Gaussian channel capacity - explanation.
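The following Python sketch (illustrative only) evaluates the Shannon-Hartley formula of question 35 with the data of question 41, assuming the one-sided noise power spectral density N0 = 2 x 10^-12 W/Hz implied by the worked figures above.

# Sketch only: Shannon-Hartley capacity for the question 41 data.
from math import log2

B = 4000.0        # channel bandwidth in Hz
S = 0.1e-3        # received signal power in W
N0 = 2e-12        # assumed one-sided noise PSD in W/Hz (so N0/2 = 1e-12 W/Hz)

N = N0 * B                     # noise power: 8e-9 W
C = B * log2(1 + S / N)        # Shannon-Hartley capacity
print(N, round(C))             # 8e-09 W and about 54439 bits/sec (~54.4 kbit/s)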

5. Find the channel capacity of a binary erasure channel with P(x1) = α. (April/May-2008), (May/June-2010), (Nov/Dec-2007) P.No T1-7.49
Binary erasure channel capacity - explanation.

6. Draw the channel diagram of the binary erasure channel and obtain the channel matrix. P.No T1-7.48
Binary erasure channel capacity - explanation - draw the channel diagram - obtain the channel matrix.

7. Explain the information capacity theorem. (April/May-2005) P.No T1-7.67
Information capacity theorem - statement - proof.

8. Explain the BSC and the BEC. (Nov/Dec-2008) P.No T1-7.48
Binary erasure channel definition - binary symmetric channel definition.

REFERENCE BOOKS:
T1 - Simon Haykin, Communication Systems.
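As a closing numerical illustration related to Part-B questions 5 and 6, the following Python sketch sweeps the input probability P(x1) = α of a binary erasure channel with erasure probability p and locates the maximum of the mutual information; the sweep granularity and variable names are illustrative choices, not from T1.

# Sketch only: I(X;Y) of a binary erasure channel as a function of alpha = P(x1),
# showing that the maximum (the capacity) is 1 - p, reached at alpha = 0.5.
from math import log2

def mutual_info(alpha, p):
    # Joint pmf over (input, output) pairs for the BEC channel matrix
    #   x1: [1-p, p, 0],   x2: [0, p, 1-p]   (outputs y1, erasure e, y2)
    joint = {("x1", "y1"): alpha * (1 - p), ("x1", "e"): alpha * p,
             ("x2", "e"): (1 - alpha) * p, ("x2", "y2"): (1 - alpha) * (1 - p)}
    px = {"x1": alpha, "x2": 1 - alpha}
    py = {}
    for (x, y), pr in joint.items():
        py[y] = py.get(y, 0) + pr
    # I(X;Y) = sum over (x,y) of p(x,y) log2 [ p(x,y) / (p(x) p(y)) ]
    return sum(pr * log2(pr / (px[x] * py[y]))
               for (x, y), pr in joint.items() if pr > 0)

p = 0.2
best_I, best_alpha = max((mutual_info(a / 1000, p), a / 1000) for a in range(1001))
print(round(best_I, 4), best_alpha)   # 0.8 at alpha = 0.5, i.e. C = 1 - p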