
MAHALAKSHMI ENGINEERING COLLEGE - TRICHY
QUESTION BANK
SATELLITE COMMUNICATION, DEPT./SEM.: ECE/VIII
UNIT V: INFORMATION THEORY

PART-A

1. What is a binary symmetric channel? (AUC DEC 2006)

2. Define information rate. (AUC DEC 2007)
Information rate R is the average number of bits of information generated per second:
R = r H information bits/second
where H is the entropy of the source and r is the rate at which messages are generated.

3. Define entropy. (AUC MAY 2007) (AUC DEC 2007) (AUC MAY 2011)

The entropy H of a discrete random variable X is a measure of the amount of uncertainty associated with the value of X. Suppose one transmits 1000 bits (0s and 1s). If these bits are known ahead of transmission (to be a certain value with absolute probability), logic dictates that no information has been transmitted. If, however, each bit is equally and independently likely to be 0 or 1, then 1000 bits (in the information-theoretic sense) have been transmitted. Between these two extremes, information can be quantified as follows. If {x1, ..., xn} is the set of all messages that X could be, and p(x) is the probability of message x, then the entropy of X is defined as
H(X) = E[I(X)] = - Σx p(x) log2 p(x)
Here I(x) = -log2 p(x) is the self-information, which is the entropy contribution of an individual message, and E[.] is the expectation. An important property of entropy is that it is maximized when all the messages in the message space are equiprobable, p(x) = 1/n (i.e. most unpredictable), in which case H(X) = log2 n. The special case of entropy for a random variable with two outcomes is the binary entropy function, usually taken to logarithmic base 2:
H(p) = -p log2 p - (1 - p) log2(1 - p)

4. Define mutual information. (AUC MAY 2008)
The mutual information is defined as the amount of information transferred when xj is transmitted and yk is received. It is represented by I(xj, yk) and given as
I(xj, yk) = log2 [p(xj / yk) / p(xj)]
where p(xj / yk) is the conditional probability that xj was transmitted given that yk is received, and p(xj) is the probability of symbol xj being transmitted.

5. Find the entropy of the following source. (AUC DEC 2008)
Symbol       S0    S1    S2    S3    S4
Probability  0.4   0.3   0.1   0.1   0.1

6. State Shannon's first theorem. (AUC DEC 2008)
Source Coding Theorem (Shannon's first theorem): given a discrete memoryless source of entropy H(S), the average code-word length L for any distortionless source coding is bounded as L ≥ H(S). This theorem provides the mathematical tool for assessing data compaction, i.e. lossless data compression, of data generated by a discrete memoryless source. The entropy of a source is a function of the probabilities of the source symbols that constitute the alphabet of the source. For a discrete memoryless source, the source output is modeled as a discrete random variable S, which takes on symbols from a fixed finite alphabet, and the entropy is a measure of the average information content per source symbol. The source coding theorem is also known as the "noiseless coding theorem" in the sense that it establishes the condition for error-free encoding to be possible.
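As a quick numerical check for question 5, here is a minimal Python sketch (an illustrative addition, not part of the original bank) that evaluates H(X) = -Σ p log2 p; for the probabilities {0.4, 0.3, 0.1, 0.1, 0.1} it should print roughly 2.05 bits/symbol.

import math

def entropy(probs):
    """Entropy in bits of a discrete distribution given as a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Source of question 5
probs = [0.4, 0.3, 0.1, 0.1, 0.1]
print(f"H = {entropy(probs):.3f} bits/symbol")   # approx. 2.046 bits/symbol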

7. State the channel capacity theorem. (AUC MAY 2008)
Communications over a channel such as an Ethernet cable are the primary motivation of information theory. As anyone who has ever used a telephone (mobile or landline) knows, such channels often fail to produce an exact reconstruction of the signal; noise, periods of silence and other forms of signal corruption often degrade quality. How much information can one hope to communicate over a noisy (or otherwise imperfect) channel? Consider the communication process over a discrete channel: a source X feeds the communication channel, whose output is observed as Y. Here X represents the space of messages transmitted and Y the space of messages received during a unit time over the channel. Let p(y|x) be the conditional probability distribution function of Y given X. We consider p(y|x) to be an inherent fixed property of the channel (representing the nature of its noise). Then the joint distribution of X and Y is completely determined by the channel and by our choice of f(x), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity:
C = max over f(x) of I(X; Y)

This capacity has the following property, related to communicating at an information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N there exists a code of length N and rate R and a decoding algorithm such that the maximal probability of block error is at most ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error. Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity.

8. State the channel coding theorem. (AUC MAY 2007)

9. What is the channel capacity of a binary symmetric channel with error probability 0.2? (AUC DEC 2007)

10. Calculate the entropy of a source with a symbol set containing 64 symbols, each with probability pi = 1/64. (AUC APR 2008)

11. Compare Shannon and Huffman coding. (AUC MAY 2009)
Shannon's theorem gives an upper bound to the capacity of a link, in bits per second (bps), as a function of the available bandwidth and the signal-to-noise ratio of the link. The theorem can be stated as
C = B log2(1 + S/N)
where C is the achievable channel capacity, B is the bandwidth of the line, S is the average signal power and N is the average noise power. The signal-to-noise ratio S/N is usually expressed in decibels (dB), given by 10 log10(S/N); for example, a signal-to-noise ratio of 1000 is commonly expressed as 10 log10(1000) = 30 dB.

12. Determine the differential entropy. (AUC MAY 2010)
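Questions 9 and 10 above can be checked numerically. The sketch below (an addition for illustration, not part of the original bank) uses the standard results C = 1 - H(p) for a binary symmetric channel with crossover probability p, and H = log2 M for M equally likely symbols.

import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Question 9: BSC with crossover probability 0.2
p = 0.2
print(f"C_BSC = 1 - H({p}) = {1 - binary_entropy(p):.3f} bits/channel use")  # approx. 0.278

# Question 10: 64 equiprobable symbols
M = 64
print(f"H = log2({M}) = {math.log2(M):.0f} bits/symbol")  # 6 bits/symbol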

14. Define rate, bandwidth and bandwidth efficiency. (AUC DEC 2010)
Bandwidth efficiency is the ratio of the data rate in bits per second to the channel bandwidth utilized:
ρ = Rb / B
where Rb is the data rate and B is the channel bandwidth.

15. A source generates 3 messages with probabilities 0.5, 0.25, 0.25. Calculate the source entropy. (AUC DEC 2010) (AUC MAY 2010)

16. Differentiate between lossless and lossy coding. (AUC MAY 2011)
Lossless coding: lossless compression schemes are reversible, so the original data can be reconstructed exactly; lossless compression will always fail to compress some files.
Lossy coding: in lossy compression some loss of data is accepted in order to achieve higher compression; a lossy method can produce a much smaller file than lossless methods.

17. State Shannon's channel capacity theorem for a power- and band-limited channel. (AUC DEC 2011)
Refer to question No. 11.

18. What is information theory?
Information theory deals with the mathematical modelling and analysis of a communication system rather than with physical sources and physical channels.

19. What is a discrete memoryless source?
A source whose symbols emitted during successive signalling intervals are statistically independent is called a discrete memoryless source. Here "memoryless" means that the symbol emitted at any time is independent of previous choices.
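For question 15 (the same source reappears in question 41), the entropy works out as follows:

H = -\sum_k p_k \log_2 p_k = 0.5 \log_2 2 + 0.25 \log_2 4 + 0.25 \log_2 4 = 0.5 + 0.5 + 0.5 = 1.5 \text{ bits/message}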

20. What is the amount of information?
The amount of information gained after observing the event S = sk, which occurs with probability pk, is the logarithmic function
Ik = log2(1/pk)
The unit of information is the bit.

21. What is information rate?
Information rate R is the average number of bits of information per second:
R = r H information bits/second
where H is the entropy and r is the rate at which messages are generated.

22. What is meant by source encoding?
The efficient representation of data generated by a discrete source is called source coding. The device that performs the representation is called a source encoder.

23. Name the two source coding techniques.
Shannon-Fano coding and Huffman coding.

24. Write the expression for code efficiency.
η = H / N̄
where H is the entropy and N̄ is the average number of bits per message.

25. What is channel redundancy?
Redundancy is given as
Redundancy (γ) = 1 - code efficiency = 1 - η
The redundancy should be as low as possible.

26. Write about data compaction.
For efficient signal transmission, the redundant information should be removed from the signal prior to transmission. This operation, carried out with no loss of information, is ordinarily performed on a signal in digital form and is referred to as data compaction or lossless data compression.

27. Write about channel capacity.
The channel capacity of a discrete memoryless channel is the maximum average mutual information, where the maximization is taken with respect to the input probabilities P(xj):
C = max over {P(xj)} of I(X; Y)

28. Define channel efficiency and give its mathematical expression.
The transmission efficiency or channel efficiency is defined as the ratio of actual transinformation to maximum transinformation:
η = I(X;Y) / max I(X;Y) = I(X;Y) / C
where C is the channel capacity.

29. Define redundancy of the channel and give its mathematical expression.
The redundancy of the channel is defined as the ratio of the difference between maximum and actual transinformation to the maximum transinformation. It is denoted γ:
γ = 1 - η = [C - I(X;Y)] / C
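The code-efficiency and redundancy formulas of questions 24-25 can be exercised with a short Python sketch (illustrative only; the codeword lengths [1, 2, 3, 4, 4] are one valid Huffman assignment for the question-5 source, assumed here just for the example):

import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def code_efficiency(probs, lengths):
    """eta = H / N_bar, where N_bar is the average codeword length."""
    H = entropy(probs)
    n_bar = sum(p * l for p, l in zip(probs, lengths))
    return H / n_bar

probs = [0.4, 0.3, 0.1, 0.1, 0.1]           # source of question 5
lengths = [1, 2, 3, 4, 4]                   # assumed Huffman codeword lengths
eta = code_efficiency(probs, lengths)
print(f"efficiency = {eta:.3f}")            # approx. 0.975
print(f"redundancy = {1 - eta:.3f}")        # gamma = 1 - eta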

30. What is a discrete memoryless channel?
A discrete memoryless channel is a statistical model with an input X and an output Y that is a noisy version of X; both X and Y are random variables. The channel is said to be discrete when both X and Y are discrete. The channel is said to be memoryless when the current output symbol depends only on the current input symbol and not on any of the previous ones.

31. What is mutual information?
The mutual information is defined as the amount of information transferred when xj is transmitted and yk is received. It is represented by I(xj, yk) and given as
I(xj, yk) = log2 [p(xj / yk) / p(xj)]
where p(xj / yk) is the conditional probability that xj was transmitted given that yk is received, and p(xj) is the probability of symbol xj being transmitted.

32. Explain Shannon-Fano coding.
An efficient code can be obtained by the following simple procedure, known as the Shannon-Fano algorithm:
1. List the source symbols in order of decreasing probability.
2. Partition the set into two sets that are as close to equiprobable as possible, and assign 0 to the upper set and 1 to the lower set.
3. Continue this process, each time partitioning the sets with as nearly equal probabilities as possible, until further partitioning is not possible.

33. State the properties of mutual information.
I(X;Y) = I(Y;X)
I(X;Y) ≥ 0
I(X;Y) = H(Y) - H(Y/X)
I(X;Y) = H(X) + H(Y) - H(X,Y)

34. Give the relation between the different entropies.
H(X,Y) = H(X) + H(Y/X) = H(Y) + H(X/Y)
where H(X) is the entropy of the source, H(Y/X) and H(X/Y) are the conditional entropies, H(Y) is the entropy of the destination, and H(X,Y) is the joint entropy of the source and destination.

35. What is a channel diagram and a channel matrix?
The transition probability diagram of the channel is called the channel diagram, and its matrix representation is called the channel matrix.
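The identities in questions 33-34 can be verified numerically for any joint distribution. Below is a small Python sketch (an illustrative addition; the joint probabilities are made up for the example and happen to correspond to the transition matrix [0.9 0.1; 0.2 0.8] of Part-B question 10 with equiprobable inputs):

import math

def H(probs):
    """Entropy in bits of a probability list."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint pmf p(x, y) for a binary-input, binary-output channel
p_xy = [[0.45, 0.05],
        [0.10, 0.40]]

p_x = [sum(row) for row in p_xy]                         # marginal of X
p_y = [sum(col) for col in zip(*p_xy)]                   # marginal of Y
H_joint = H([p for row in p_xy for p in row])            # H(X,Y)

# H(Y|X) = sum over x of p(x) * H(Y | X = x)
H_y_given_x = sum(p_x[i] * H([p / p_x[i] for p in p_xy[i]]) for i in range(len(p_x)))

I1 = H(p_x) + H(p_y) - H_joint        # I(X;Y) = H(X) + H(Y) - H(X,Y)
I2 = H(p_y) - H_y_given_x             # I(X;Y) = H(Y) - H(Y|X)
print(f"I(X;Y) = {I1:.4f} = {I2:.4f} bits (non-negative)")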

36. What is uncertainty? Explain the difference between uncertainty and information.
The words uncertainty, surprise and information are all related to each other. Before an event occurs there is uncertainty, when the event occurs there is an amount of surprise, and after the occurrence of the event there is a gain of information. Consider a source which emits discrete symbols randomly from a fixed alphabet X = {x0, x1, x2, ..., xK-1}. The various symbols in X have probabilities p0, p1, p2, ..., which can be written as
P(X = xk) = pk,  k = 0, 1, 2, ..., K-1
and the set of probabilities satisfies 0 ≤ pk ≤ 1 and Σk pk = 1.
The idea of information is related to uncertainty or surprise. Consider the emission of symbol X = xk from the source. If the probability of xk is pk = 0, then such a symbol is impossible; similarly, when pk = 1 the symbol is certain. In both cases there is no surprise and hence no information is produced when xk is emitted. The lower the probability pk, the greater the surprise or uncertainty. Before the event X = xk occurs there is an amount of uncertainty, when the symbol occurs there is an amount of surprise, and after its occurrence there is a gain in the amount of information, the essence of which may be viewed as the resolution of uncertainty.

37. Explain Huffman coding.
Huffman coding results in an optimum code; it is the code with the highest efficiency. The procedure is as follows (see the Python sketch after question 39 below):
a. List the source symbols in order of decreasing probability.
b. Combine the probabilities of the two symbols having the lowest probabilities, and reorder the resulting probabilities. This step is called reduction. The same procedure is repeated until only two ordered probabilities remain.
c. Start encoding with the last reduction, which consists of exactly two ordered probabilities. Assign 0 as the first digit in the code words for all source symbols associated with the first probability, and assign 1 to the second probability.
d. Now go back and assign 0 and 1 to the second digit for the two probabilities that were combined in the previous reduction step, retaining all assignments made in step c.
e. Keep regressing this way until the first column is reached.

38. State Shannon's theorem.
Given a source of M equally likely messages, with M >> 1, which is generating information at a rate R, and given a channel with channel capacity C: if R < C, there exists a coding technique such that the output of the source may be transmitted over the channel with a probability of error in the received message that may be made arbitrarily small.

39. Write the mathematical expression for channel capacity.
C = B log2(1 + S/N)
where B is the bandwidth, S the average transmitted power and N the average noise power.
Shannon's theorem: a given communication system has a maximum rate of information C, known as the channel capacity.
- If the information rate R is less than C, then one can approach arbitrarily small error probabilities by using intelligent coding techniques.
- To get lower error probabilities, the encoder has to work on longer blocks of signal data. This entails longer delays and higher computational requirements.
Thus, if R ≤ C, transmission may be accomplished without error in the presence of noise. Unfortunately, Shannon's theorem is not a constructive proof; it merely states that such a coding method exists, so the proof cannot be used to develop a coding method that reaches the channel capacity. The negation of this theorem is also true: if R > C, then errors cannot be avoided regardless of the coding technique used.
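The Huffman procedure of question 37 can be sketched in a few lines of Python (an illustration added here, not part of the original bank); it returns the codeword length of each symbol, from which the average code length and efficiency follow.

import heapq
import math

def huffman_lengths(probs):
    """Return Huffman codeword lengths for the given symbol probabilities."""
    # Each heap entry: (probability, unique id, list of symbol indices in that subtree)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    uid = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)   # two least probable subtrees
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:                 # every symbol in them gains one more code bit
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, uid, s1 + s2))
        uid += 1
    return lengths

probs = [0.4, 0.2, 0.2, 0.1, 0.1]          # source used in Part-B question 7
lengths = huffman_lengths(probs)
avg_len = sum(p * l for p, l in zip(probs, lengths))
H = -sum(p * math.log2(p) for p in probs)
print(lengths, f"average length = {avg_len:.2f}, entropy = {H:.3f} bits")

For the probabilities {0.4, 0.2, 0.2, 0.1, 0.1} used in Part-B question 7 this gives an average code length of 2.2 bits/symbol against an entropy of about 2.12 bits/symbol, i.e. an efficiency of roughly 96.5%.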

40. Calculate the entropy of a source with symbol probabilities 0.6, 0.3 and 0.1. (AUC DEC 2011)
H = -(0.6 log2 0.6 + 0.3 log2 0.3 + 0.1 log2 0.1) ≈ 1.295 bits/symbol

41. A source generates three messages with probabilities 0.5, 0.25, 0.25. Calculate H(X). (AUC MAY 2012)
H(X) = 0.5 log2 2 + 0.25 log2 4 + 0.25 log2 4 = 1.5 bits/message

42. State the advantages of Lempel-Ziv coding. (AUC MAY 2012)
- The receiver does not require prior knowledge of the coding table constructed by the transmitter.
- It eliminates the need for a large buffer to store the received code words until the decoding dictionary is complete enough to decode them.
- It supports synchronous transmission.

43. Define mutual information.
Formally, the mutual information of two discrete random variables X and Y is defined as
I(X;Y) = Σx Σy p(x,y) log2 [ p(x,y) / (p1(x) p2(y)) ]
where p(x,y) is the joint probability distribution function of X and Y, and p1(x) and p2(y) are the marginal probability distribution functions of X and Y respectively. In the continuous case, the summation is replaced by a definite double integral:
I(X;Y) = ∫∫ p(x,y) log2 [ p(x,y) / (p1(x) p2(y)) ] dx dy
where p(x,y) is now the joint probability density function of X and Y, and p1(x) and p2(y) are the marginal probability density functions.

44. Define rate distortion theory.
Rate distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression; it addresses the problem of determining the minimal amount of entropy (or information) R that should be communicated over a channel so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding a given distortion D.

The functions that relate the rate and the distortion are found as the solution of the following minimization problem:
R(D) = min over p(y|x), subject to E[d(X,Y)] ≤ D, of I(X;Y)
In the above equation, I(X;Y) is the mutual information between the source X and its reconstruction Y.

46. An event has six possible outcomes with probabilities {1/2, 1/4, 1/8, 1/16, 1/32, 1/32}. Find the entropy of the source S. (AUC NOV/DEC 2010)

47. What is entropy? (AUC APR/MAY 2011)
The entropy of a source is defined as the average information produced per individual message or symbol in a particular interval; it is also called the average information content of the source.

48. What is information rate? (AUC NOV/DEC 2011)
Information rate R is the average number of bits of information per second:
R = r H information bits/second
where H is the entropy and r is the rate at which messages are generated.

PART-B

1. (i) Define mutual information. Find the relation between the mutual information and the joint entropy of the channel input and channel output.

2. Consider a discrete memoryless channel with input alphabet X, output alphabet Y and transition probabilities p(yk / xj). Find the mutual information of the channel and hence obtain the channel capacity.
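As an illustration of Part-B question 2 (and of question 10 later), the sketch below - an addition, not part of the original bank - computes I(X;Y) for a binary-input channel from its transition matrix and approximates the capacity by sweeping the input distribution:

import math

def mutual_information(p_x, channel):
    """I(X;Y) in bits for input pmf p_x and transition matrix channel[x][y] = p(y|x)."""
    p_y = [sum(p_x[i] * channel[i][j] for i in range(len(p_x)))
           for j in range(len(channel[0]))]
    I = 0.0
    for i, px in enumerate(p_x):
        for j, pyx in enumerate(channel[i]):
            if px > 0 and pyx > 0:
                I += px * pyx * math.log2(pyx / p_y[j])
    return I

channel = [[0.9, 0.1],      # example transition matrix from question 10
           [0.2, 0.8]]

# Capacity = maximum of I(X;Y) over input distributions; a fine grid suffices for binary input.
best = max((mutual_information([q, 1 - q], channel), q)
           for q in [k / 1000 for k in range(1, 1000)])
print(f"C is approx. {best[0]:.4f} bits/channel use at P(x0) = {best[1]:.3f}")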

3. (i) State and prove the properties of mutual information. Define mutual information and state any two properties. (4)

(ii) What are the implications of the information capacity theorem? (AUC NOV 2006)

(ii) Give the advantages and disadvantages of channel coding in detail.

4. Derive the channel capacity theorem.
Shannon's theorem gives an upper bound to the capacity of a link, in bits per second (bps), as a function of the available bandwidth and the signal-to-noise ratio of the link. The theorem can be stated as
C = B log2(1 + S/N)
where C is the achievable channel capacity, B is the bandwidth of the line, S is the average signal power and N is the average noise power. The signal-to-noise ratio S/N is usually expressed in decibels (dB), given by 10 log10(S/N); for example, a signal-to-noise ratio of 1000 is commonly expressed as 10 log10(1000) = 30 dB. (A plot of C/B against S/N in dB normally accompanies this result.)

(iii) Discuss the implication of the information capacity theorem. (AUC MAY 2007)
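A small numerical companion to question 4 (added for illustration; the bandwidth and SNR values are arbitrary examples, not taken from the original):

import math

def shannon_capacity(bandwidth_hz, snr_db):
    """C = B * log2(1 + S/N), with the SNR supplied in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a 3.1 kHz telephone-type channel at 30 dB SNR
C = shannon_capacity(3100, 30)
print(f"C = {C / 1000:.1f} kbit/s")   # about 30.9 kbit/s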

5. (i) Derive the expression for the channel capacity of a binary symmetric channel. Discuss in detail the binary symmetric channel and the binary erasure channel. Derive the channel capacity for the binary symmetric channel. (6)
(ii) Derive the channel capacity for a band-limited, power-limited Gaussian channel. (10) (AUC DEC 2010)
Also find the channel capacity of the binary symmetric channel. (AUC NOV 2008)
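For part (i), the standard derivation (summarised here as an aid; it is not worked out in the original text) is: for a binary symmetric channel with crossover probability p, H(Y|X) equals the binary entropy of p for any input distribution, and H(Y) is maximised at 1 bit by equiprobable inputs, so

C = \max_{P(x)} I(X;Y) = \max_{P(x)} \left[ H(Y) - H(Y|X) \right] = 1 - H_b(p), \qquad H_b(p) = -p\log_2 p - (1-p)\log_2(1-p)

With p = 0.2 (question 9 of Part-A) this gives C = 1 - 0.722 ≈ 0.278 bits per channel use.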

6. Give the Shannon-Hartley (information capacity) theorem. (AUC MAY 2009)
It can be stated as follows: the information capacity of a continuous channel of bandwidth B Hz, perturbed by additive white Gaussian noise of power spectral density N0/2 and limited in bandwidth to B, is given by
C = B log2(1 + P / (N0 B)) bits per second
where P is the average transmitted power. This theorem implies that, for a given average transmitted power P and channel bandwidth B, we can transmit information at the rate of C bits per second with arbitrarily small probability of error by employing sufficiently complex encoding systems.

7. Find the code words for the five symbols of the alphabet of a discrete memoryless source with probabilities {0.4, 0.2, 0.2, 0.1, 0.1} using Huffman coding, and determine the source entropy and average code-word length. (10) (AUC NOV 2006)

8. Consider a sequence of letters of the English alphabet with their probabilities of occurrence as given below.
Letter       A    I    L    M    N    O    P    Y
Probability  0.1  0.1  0.2  0.1  0.1  0.2  0.1  0.1

Symbol S 0 S 1 S 2 S 3 S 4 S 5 S 6 Probability 0.125 0.0625 0.25 0.0625 0.125 0.125 0.25 9.Generate the Huffman coding. Find average Coded Length, entropy and η. (AUC NOV 2007) Encode the following source using Huffman Coding.X = { x 1, x 2, x 3, x 4, x 5 } P(X) = {0.2, 0.02, 0.1, 0.38, 0.3 } (AUC MAY 2006) 10.(i) Derive the channel capacityfor Binary Symmetric channel.(6) (ii) Derive the (10) (AUC DEC 2010) (AUC MAY 2011) ii)the channel transition matrix [0.9 0.1 0.2 0.8 ]. Draw the channel diagram and determine the probabilities associated with output assuming equipropable inputs. (AUC MAY 2011) 11.Justify the need for an efficient source encoding process in order to increase the average transmitted information per bit, if the source emitted symbol are not equally likely with example.consider a discrete memory less source for your justification(auc DEC 2011)

12. A discrete memoryless source (DMS) has the following alphabet with probabilities of occurrence as shown below. (AUC MAY 2012)
Symbol       S0     S1      S2    S3      S4     S5     S6
Probability  0.125  0.0625  0.25  0.0625  0.125  0.125  0.25

13. Derive the Shannon-Hartley theorem for the channel capacity of a continuous channel having an average power limitation and perturbed by additive band-limited white Gaussian noise. Briefly state the properties of entropy.
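The source in questions 9 and 12 has dyadic probabilities (each a power of 1/2), so a Huffman code assigns each symbol a length of exactly -log2 pi and the average code length equals the entropy (a worked result added as an aid):

H = 2(0.25\log_2 4) + 3(0.125\log_2 8) + 2(0.0625\log_2 16) = 1.0 + 1.125 + 0.5 = 2.625 \text{ bits/symbol}, \qquad \bar{L} = 2.625, \ \eta = 100\%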

14. Five symbols of the alphabet of a discrete memoryless source and their probabilities are given below:
S = {s0, s1, s2, s3, s4},  P(S) = (0.4, 0.2, 0.2, 0.1, 0.1)
Code the symbols using Huffman coding. (12) (AUC NOV/DEC 2010)
Using Huffman code I, encode the following symbols: S = [0.3, 0.2, 0.25, 0.12, 0.05, 0.08]. Calculate the average code length, the entropy of the source, the code efficiency and the redundancy.

Efficiency = 96%

15. Write in detail the procedure of the Shannon-Fano coding scheme. Eight possible messages m1, m2, m3, m4, m5, m6, m7 and m8 from a source have probabilities P(m1) = 0.5, P(m2) = 0.15, P(m3) = 0.15, P(m4) = 0.08, P(m5) = 0.08, P(m6) = 0.02, P(m7) = 0.01, P(m8) = 0.01. Construct the Shannon-Fano code for these messages in order to increase the average information per bit, and find the coding efficiency. (AUC DEC 2011)
In Shannon-Fano coding, the symbols are arranged in order from most probable to least probable, and then divided into two sets whose total probabilities are as close as possible to being equal. All symbols then have the first digits of their codes assigned: symbols in the first set receive "0" and symbols in the second set receive "1". As long as any set with more than one member remains, the same process is repeated on those sets to determine successive digits of their codes. When a set has been reduced to one symbol, the symbol's code is complete and will not form the prefix of any other symbol's code.

The algorithm works, and it produces fairly efficient variable-length encodings; when the two smaller sets produced by a partitioning are in fact of equal probability, the one bit of information used to distinguish them is used most efficiently. Unfortunately, Shannon-Fano coding does not always produce optimal prefix codes. For this reason, Shannon-Fano is almost never used; Huffman coding is almost as computationally simple and produces prefix codes that always achieve the lowest expected code-word length. Shannon-Fano coding is used in the IMPLODE compression method, which is part of the ZIP file format, where a simple algorithm with high performance and minimum programming requirements is desired.
Shannon-Fano algorithm: a Shannon-Fano tree is built according to a specification designed to define an effective code table. The actual algorithm is simple:
1. For a given list of symbols, develop a corresponding list of probabilities or frequency counts so that each symbol's relative frequency of occurrence is known.
2. Sort the list of symbols according to frequency, with the most frequently occurring symbols at the left and the least common at the right.
3. Divide the list into two parts, with the total frequency count of the left part being as close to the total of the right as possible.
4. The left part of the list is assigned the binary digit 0, and the right part is assigned the digit 1. This means that the codes for the symbols in the first part will all start with 0, and the codes in the second part will all start with 1.
5. Recursively apply steps 3 and 4 to each of the two halves, subdividing groups and adding bits to the codes until each symbol has become a corresponding code leaf on the tree.
Example 1: The source of information A generates the symbols {A0, A1, A2, A3 and A4} with the corresponding probabilities {0.4, 0.3, 0.15, 0.1 and 0.05}. Encoding the source symbols using a binary encoder and a Shannon-Fano encoder gives the code assignments below.
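The original figure with the resulting code table is not reproduced here; as an illustrative substitute, the following compact Python sketch of the partitioning procedure described above chooses the recursive split point that balances the two halves:

def shannon_fano(symbols):
    """symbols: list of (name, probability), sorted by decreasing probability.
    Returns a dict mapping name -> binary code string."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total, running = sum(p for _, p in symbols), 0.0
    split, best_diff = 1, float("inf")
    # Find the split that makes the two halves as close to equiprobable as possible.
    for i in range(1, len(symbols)):
        running += symbols[i - 1][1]
        diff = abs(total - 2 * running)
        if diff < best_diff:
            best_diff, split = diff, i
    codes = {}
    for name, code in shannon_fano(symbols[:split]).items():
        codes[name] = "0" + code
    for name, code in shannon_fano(symbols[split:]).items():
        codes[name] = "1" + code
    return codes

source = [("A0", 0.4), ("A1", 0.3), ("A2", 0.15), ("A3", 0.1), ("A4", 0.05)]
print(shannon_fano(source))

For this source the sketch yields A0 -> 0, A1 -> 10, A2 -> 110, A3 -> 1110, A4 -> 1111, giving an average length of 0.4(1) + 0.3(2) + 0.15(3) + 0.1(4) + 0.05(4) = 2.05 bits/symbol.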

16. Explain the concept of the noiseless coding theorem and state its significance. Give the Shannon-Hartley (information capacity) theorem and discuss its implications in detail. (i) State and explain Shannon's theorem on channel capacity. (12) (ii) Discuss the source coding theorem. (6) (AUC MAY 2010) Also elaborate the channel coding theorem with an example.

Source Coding Theorem (Shannon's first theorem). The theorem can be stated as follows: given a discrete memoryless source of entropy H(S), the average code-word length L for any distortionless source coding is bounded as L ≥ H(S). This theorem provides the mathematical tool for assessing data compaction, i.e. lossless data compression, of data generated by a discrete memoryless source. The entropy of a source is a function of the probabilities of the source symbols that constitute the alphabet of the source.
Entropy of a discrete memoryless source: assume that the source output is modeled as a discrete random variable S which takes on symbols from a fixed finite alphabet {s0, s1, ..., sK-1} with probabilities P(S = sk) = pk, k = 0, 1, ..., K-1. The amount of information gained after observing the event S = sk is the logarithmic function I(sk) = log2(1/pk), and the entropy H(S) = Σk pk log2(1/pk) is a measure of the average information content per source symbol. The source coding theorem is also known as the "noiseless coding theorem" in the sense that it establishes the condition for error-free encoding to be possible.

Channel Coding Theorem (Shannon's second theorem). The channel coding theorem for a discrete memoryless channel is stated in two parts as follows:
(a) Let a discrete memoryless source with an alphabet S have entropy H(S) and produce symbols once every Ts seconds. Let a discrete memoryless channel have capacity C and be used once every Tc seconds. Then, if
H(S)/Ts ≤ C/Tc
there exists a coding scheme for which the source output can be transmitted over the channel and be reconstructed with an arbitrarily small probability of error.
(b) Conversely, if H(S)/Ts > C/Tc, it is not possible to transmit information over the channel and reconstruct it with an arbitrarily small probability of error.

Information Capacity Theorem (also known as the Shannon-Hartley law or Shannon's third theorem). It can be stated as follows: the information capacity of a continuous channel of bandwidth B Hz, perturbed by additive white Gaussian noise of power spectral density N0/2 and limited in bandwidth to B, is given by
C = B log2(1 + P / (N0 B)) bits per second
where P is the average transmitted power. This theorem implies that, for a given average transmitted power P and channel bandwidth B, we can transmit information at the rate of C bits per second with arbitrarily small probability of error by employing sufficiently complex encoding systems.

17. Explain in detail data compaction codings. Discuss data compaction. (AUC MAY 2007)
(i) An analog signal is band-limited to B Hz and sampled at the Nyquist rate. The sampled signals are quantized into 4 levels, each level representing one message. The probabilities of occurrence of the four messages are p1 = p3 = 1/8 and p2 = p4 = 3/8. Find the information rate of the source. (6)
(ii) Five source messages appear with probabilities m1 = 0.4, m2 = 0.15, m3 = 0.15, m4 = 0.15, m5 = 0.15. Find the coding efficiency for (1) Shannon-Fano coding and (2) Huffman coding. (10) (AUC DEC 2010)
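Part (i) of question 17 can be worked as follows (a worked example added as an aid; sampling at the Nyquist rate gives r = 2B messages per second):

H = 2\left(\tfrac18\log_2 8\right) + 2\left(\tfrac38\log_2\tfrac83\right) \approx 0.75 + 1.06 = 1.81 \text{ bits/message}, \qquad R = rH = 2B(1.81) \approx 3.62B \text{ bits/s}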

Explain in detail the discrete memoryless channel.

Explain in detail the concept of channel capacity.

18. Write short notes on differential entropy.
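As a reference for question 18, the defining expression (added here; it is not written out in the original) is

h(X) = -\int_{-\infty}^{\infty} f_X(x)\,\log_2 f_X(x)\,dx

For example, if X is uniform on (0, a), then h(X) = log2 a, which, unlike discrete entropy, can be negative when a < 1.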

19. Explain channel capacity.

Channel capacity

20. Explain the bandwidth versus signal-to-noise ratio trade-off for this theorem. (AUC MAY 2012)
(ii) Discuss rate distortion theory. (6) (AUC MAY 2010)

21. Explain data compression. Discuss the various techniques used for compression of information. (AUC MAY 2009)
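For the trade-off asked about in question 20, a useful worked consequence of C = B log2(1 + S/N) with S = Eb·C (added here as an aid) is the required energy per bit:

\frac{E_b}{N_0} = \frac{2^{C/B} - 1}{C/B} \ \xrightarrow{\ B \to \infty\ } \ \ln 2 \approx 0.693 \ (\text{about } -1.6\ \text{dB})

so increasing the bandwidth lets the system operate at a lower signal-to-noise ratio, down to the Shannon limit of -1.6 dB.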

22. Explain rate distortion theory.
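A compact statement that can anchor the answer to question 22 (added here; the Gaussian example is a standard result, not taken from the original text):

R(D) = \min_{p(\hat{x}\mid x):\ E[d(X,\hat{X})] \le D} I(X;\hat{X}), \qquad R(D) = \tfrac12\log_2\!\frac{\sigma^2}{D} \ \ (0 \le D \le \sigma^2)

where the second expression is the rate-distortion function of a Gaussian source of variance σ² under squared-error distortion, and R(D) = 0 for D > σ².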

23. (i) Derive the channel capacity of a continuous, band-limited, white Gaussian noise channel. (10)
(ii) Derive the expression for the channel capacity of a continuous channel. Also find the expression for the channel capacity of a continuous channel of infinite bandwidth and comment on the result. (AUC MAY 2006)
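For part (ii), the limiting value as the bandwidth grows without bound (a worked result added as an aid) is

C = B\log_2\!\left(1 + \frac{P}{N_0 B}\right) \ \xrightarrow{\ B \to \infty\ } \ \frac{P}{N_0}\log_2 e \approx 1.44\,\frac{P}{N_0}

showing that capacity does not grow indefinitely with bandwidth but saturates at a value set by the signal power to noise density ratio P/N0.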