Introduction to Information Theory. By Prof. S.J. Soni Asst. Professor, CE Department, SPCE, Visnagar


1 Introduction to Information Theory By Prof. S.J. Soni Asst. Professor, CE Department, SPCE, Visnagar

2 Introduction [B.P. Lathi] No practical means of communication is error free. In digital systems we can improve accuracy by reducing the error probability P_e. In all digital systems, P_e varies asymptotically as e^(-k E_b). By increasing E_b, the energy per bit, we can reduce P_e to any desired level.

3 Introduction Now the signal power is S_i = E_b R_b, where R_b is the bit rate. Hence, increasing E_b means either increasing the signal power S_i (for a given bit rate), decreasing the bit rate R_b (for a given power), or both. Because of physical limitations, however, S_i cannot be increased beyond a certain limit. Hence, to reduce P_e further, we must reduce R_b, the rate of transmission of information digits.

4 Introduction In the presence of channel noise it appeared that error-free communication requires P_e -> 0 only as R_b -> 0. Shannon, in 1948, published a paper titled "A Mathematical Theory of Communication" showing that P_e -> 0 can be achieved as long as R_b < C (the channel capacity). That is, we can still have error-free transmission at a nonzero rate.

5 Introduction Disturbances on a communication channel do not limit the accuracy of transmission; what they limit is the rate of transmission of information. Information theory is a mathematical science. The word "information" is very deceptive.

6 Information 1. John was dropped off at the airport by taxi. 2. The taxi brought John to the airport. 3. There is a traffic jam on highway NH5, between Mumbai and Pune in India. 4. There is a traffic jam on highway NH5 in India.

7 Information Syntactic Information It is related to the symbols we use to build up our message. Sentences 1 & 2 carry the same information, but they differ syntactically. Semantic Information The meaning of the message. Sentences 3 & 4 are syntactically different, and they also differ semantically.

8 Information Pragmatic Information It is related to the effect and the usage of the message. For people outside the country, sentences 3 & 4 carry little pragmatic value.

9 Syntactic Information For two sentences we may use different symbols, but the ultimate meaning remains the same. Block diagram: Source -> Source Encoder -> Channel Encoder -> Channel -> Channel Decoder -> Source Decoder -> Destination. What Shannon did was show the way to obtain the optimal source encoder and channel encoder.

10 Example Binary Data Consider three cities A, B and C. A asks B for the weather status, and also asks C.
B to A: Sunny 00 (1/4), Rainy 01 (1/4), Cloudy 10 (1/4), Foggy 11 (1/4)
C to A: Sunny 1110 (1/8), Rainy (1/8), Cloudy 10 (1/4), Smoggy 0 (1/2)

11 Example What is the difference between the communication link from B to A and the link from C to A? The cost of operating the communication link: the number of bits per message (per second) needed on average.

12 Example From C to A: L_average = 4(1/8) + 3(1/8) + 2(1/4) + 1(1/2) = 1 7/8 binits (binary digits)/message. From B to A: L_average = 2(1/4) + 2(1/4) + 2(1/4) + 2(1/4) = 2 binits/message [more than 1 7/8].
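
The average word lengths above can be checked with a short calculation. Below is a minimal Python sketch (not part of the original slides; average_length is an illustrative name) that computes L_average as the sum of P_i * L_i for both links, using only the probabilities and codeword lengths given above.

    from fractions import Fraction

    def average_length(probs_and_lengths):
        # Average codeword length: sum of P_i * L_i over all messages.
        return sum(Fraction(p) * length for p, length in probs_and_lengths)

    # B-to-A link: four equiprobable messages, each with a 2-binit codeword.
    b_to_a = [("1/4", 2)] * 4
    # C-to-A link: codeword lengths 1, 2, 3, 4 for probabilities 1/2, 1/4, 1/8, 1/8.
    c_to_a = [("1/2", 1), ("1/4", 2), ("1/8", 3), ("1/8", 4)]

    print(average_length(b_to_a))   # 2 binits/message
    print(average_length(c_to_a))   # 15/8, i.e. 1 7/8 binits/message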

13 Example Is it possible to find a mapping (a code) that is better than this? If it is possible, how low can we go? And if it is possible, how do we synthesize such a mapping? A code, e.g. 01, 11, 111, etc.

14 Measure of Information [B.P. Lathi] Commonsense Measure of Information Consider the following headlines in a morning paper. 1. There will be daylight tomorrow. 2. China invades India. 3. India invades China. The reader's interest, i.e. the amount of information, depends on the probabilities of occurrence of the events.

15 Measure of Information Commonsense Measure of Information Information is connected with the element of surprise, which is a result of uncertainty or unexpectedness. If P is the probability of occurrence of a message and I is the information gained from the message, it is evident from the preceding discussion that as P -> 1, I -> 0, and as P -> 0, I -> infinity; in general a smaller P gives a larger I. So, I ~ log(1/P).

16 Measure of Information Engineering Measure of Information From an engineering point of view, the amount of information in a message is proportional to the (minimum) time required to transmit the message. This implies that a message with higher probability can be transmitted in a shorter time than a message with lower probability. This fact may be verified with the three-city example, e.g. probability 1/8 -> 4 bits, probability 1/2 -> 1 bit.

17 Measure of Information Engineering Measure of Information Let us assume that for two equiprobable messages m1 and m2, we use the binary digits 0 and 1 respectively. For four equiprobable messages m1, m2, m3 and m4, we may use 00, 01, 10, 11 respectively. For eight equiprobable messages m1, .., m8, we may use 000, 001, .., 111 respectively.

18 Measure of Information Engineering Measure of Information In general, we need log_2(n) binary digits to encode each of n equiprobable messages. The probability P of any one message occurring is 1/n. Hence, to encode each message (with probability P), we need log_2(1/P) binary digits. The information I contained in a message with probability of occurrence P is therefore proportional to log_2(1/P): I = k log_2(1/P); taking k = 1 gives I = log_2(1/P) bits.
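
As a quick check of the formula I = log_2(1/P), the illustrative Python snippet below (not from the slides; self_information is a hypothetical helper name) prints the information content for a few probabilities from the three-city example.

    import math

    def self_information(p):
        # I = log2(1/P) bits for a message with probability of occurrence P.
        return math.log2(1.0 / p)

    # n equiprobable messages each have probability 1/n and need log2(n) bits.
    for p in (1/2, 1/4, 1/8):
        print(p, self_information(p))   # 1.0, 2.0 and 3.0 bits respectively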

19 Average Information per Message: Entropy of a Source [B.P. Lathi] Consider a memoryless source m emitting messages m_1, m_2, .., m_n with probabilities P_1, P_2, .., P_n respectively (P_1 + P_2 + .. + P_n = 1). A memoryless source implies that each message emitted is independent of the previous message(s). The information content of message m_i is I_i = log_2(1/P_i) bits.

20 Average Information per Message: Entropy of a Source Hence, the mean, or average, information per message emitted by the source is sum_{i=1..n} P_i I_i bits. The average information per message of a source m is called its entropy, denoted H(m): H(m) = sum_{i=1..n} P_i I_i = sum_{i=1..n} P_i log_2(1/P_i) = - sum_{i=1..n} P_i log_2(P_i) bits.
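
The entropy formula can be exercised directly. The sketch below (illustrative, not part of the slides) computes H(m) = -sum P_i log_2 P_i for the two sources of the three-city example.

    import math

    def entropy(probs):
        # H(m) = -sum(P_i * log2(P_i)) bits per message, for a memoryless source.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # C-to-A source: H = 1.75 bits/message (its 1 7/8-binit code is close to this bound).
    print(entropy([0.5, 0.25, 0.125, 0.125]))   # 1.75
    # B-to-A source: four equiprobable messages give exactly 2 bits/message.
    print(entropy([0.25] * 4))                  # 2.0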

21 Source Encoding Huffman Code The source encoding theorem says that to encode a source with entropy H(m), we need, on average, a minimum of H(m) binary digits per message. The number of digits in the codeword is the length of the codeword. Thus the average word length of an optimal code is H(m).

22 Huffman Code Example Consider six messages m1..m6 with probabilities 0.30, 0.25, 0.15, 0.12, 0.08 and 0.10 respectively. Source reduction (each step merges the two least probable messages):
Original source: 0.30, 0.25, 0.15, 0.12, 0.10, 0.08
Reduced source S1: 0.30, 0.25, 0.18, 0.15, 0.12
Reduced source S2: 0.30, 0.27, 0.25, 0.18
Reduced source S3: 0.43, 0.30, 0.27
Reduced source S4: 0.57, 0.43

23 Huffman Code Example Assigning 0 and 1 to the two messages of the final reduced source S4 and working back through S3, S2 and S1 to the original source gives each message a codeword; the resulting codeword lengths are 2, 2, 3, 3, 3 and 3. The optimum (Huffman) code obtained this way is called a compact code. The average length of the compact code is L = sum_i P_i L_i = 0.3(2) + 0.25(2) + 0.15(3) + 0.12(3) + 0.1(3) + 0.08(3) = 2.45 binary digits. The entropy H(m) of the source is H(m) = sum_i P_i log_2(1/P_i) = 2.418 bits. Hence, the minimum possible average length is 2.418 bits; by direct coding (the Huffman code) it is possible to attain an average length of 2.45 bits in this example.
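
The reduction procedure above can be reproduced with a standard heap-based Huffman construction. The Python sketch below is not from the slides; the individual bit patterns it produces may differ from the slide's table, but the codeword lengths and the 2.45-digit average match.

    import heapq, itertools

    def huffman_code(probs):
        # Build a binary Huffman code; returns a dict {symbol: codeword}.
        counter = itertools.count()   # tie-breaker so equal probabilities never compare dicts
        heap = [(p, next(counter), {sym: ""}) for sym, p in probs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, _, c1 = heapq.heappop(heap)   # two least probable nodes
            p2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c1.items()}
            merged.update({s: "1" + w for s, w in c2.items()})
            heapq.heappush(heap, (p1 + p2, next(counter), merged))
        return heap[0][2]

    probs = {"m1": 0.30, "m2": 0.25, "m3": 0.15, "m4": 0.12, "m5": 0.08, "m6": 0.10}
    code = huffman_code(probs)
    print(code)   # codeword lengths 2, 2, 3, 3, 3, 3
    print(sum(probs[s] * len(w) for s, w in code.items()))   # 2.45 binary digits/message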

24 Huffman Code Example With L = sum_i P_i L_i = 2.45 binary digits and H(m) = sum_i P_i log_2(1/P_i) = 2.418 bits: Code efficiency η = H(m) / L = 2.418 / 2.45 ≈ 0.987. Redundancy γ = 1 - η ≈ 0.013.

25 Huffman Code Even though the Huffman code is a variable-length code, it is uniquely decodable: a received sequence of Huffman-coded messages can be decoded in only one way, that is, without ambiguity. For example, the sequence m_1 m_5 m_2 m_1 m_4 m_3 m_6 would be encoded as a single binary string, and we can verify that this string can be decoded in only one way.

26 Example A memoryless source emits six messages with probabilities 0.3, 0.25, 0.15, 0.12, 0.1, and 0.08. Find the 4-ary (quaternary) Huffman code. Determine its average word length, the efficiency, and the redundancy. 1. Required number of messages = r + k(r-1) = 4 + 1(4-1) = 7, so add one dummy message of probability 0. 2. Create the reduction table; the last column contains 4 messages. 3. Calculate L = 0.3(1) + 0.25(1) + 0.15(1) + 0.12(2) + 0.1(2) + 0.08(2) = 1.3 4-ary digits. 4. H_4(m) = - sum_{i=1..6} P_i log_4(P_i) = H(m)/2 ≈ 1.209 4-ary units. 5. Code efficiency η = 1.209 / 1.3 ≈ 0.93; redundancy γ ≈ 0.07.

27 Other GTU Examples Apply the Huffman coding method to the following message ensemble: [X] = [X1 X2 X3 X4 X5 X6 X7], [P] = [ ]. Take M = 2. Calculate: (i) entropy, (ii) average length, (iii) efficiency. Define entropy and its unit. Explain the Huffman coding technique in detail. Explain how uncertainty and information are related and how the entropy of a discrete source is determined. Find a quaternary compact code for the source emitting symbols s1, s2, .., s11 with corresponding probabilities 0.21, 0.16, 0.12, 0.10, 0.10, 0.07, 0.07, 0.06, 0.05, 0.05, and 0.01, respectively.

28 Tutorial Problems Book: Modern Digital and Analog Communication Systems by B.P. Lathi. Chapter 13: Introduction to Information Theory. Exercise problems from Chapter 13.

29 Coding [Khalid Sayood] Coding is the assignment of binary sequences to elements of an alphabet. The set of binary sequences is called a code, and the individual members of the set are called codewords. An alphabet is a collection of symbols called letters, e.g. the alphabet used in writing books. Using the ASCII code, every letter gets a codeword of the same length; this is called fixed-length coding.

30 Statistical Methods [David Salomon] Statistical methods use variable-size codes, with the shorter codes assigned to symbols or groups of symbols that appear more often in the data (have a higher probability of occurrence). Designers and implementers of variable-size codes have to deal with two problems: assigning codes that can be decoded unambiguously, and assigning codes with the minimum average size.

31 Uniquely Decodable Codes [K. Sayood] The average length of the code is not the only important point in designing a good code. Consider the following example. Suppose the source alphabet consists of four letters a1, a2, a3 and a4 with probabilities P(a1) = 1/2, P(a2) = 1/4, P(a3) = P(a4) = 1/8. The entropy of this source is 1.75 bits/symbol.
Letters | Probability | Code 1 | Code 2 | Code 3 | Code 4
a1 | 0.5 | 0 | 0 | 0 | 0
a2 | 0.25 | 0 | 1 | 10 | 01
a3 | 0.125 | 1 | 00 | 110 | 011
a4 | 0.125 | 10 | 11 | 111 | 0111
Average length | | 1.125 | 1.25 | 1.75 | 1.875

32 Uniquely Decodable Codes Based on average length, Code 1 appears to be the best code. However, it is ambiguous, because a1 and a2 are assigned the same codeword 0. In Code 2, each symbol is assigned a distinct codeword, but the code is still ambiguous: to send a2 a1 a1 we send 100, which the decoder can decode as a2 a1 a1 or as a2 a3. Code 3 and Code 4 are both uniquely decodable (unambiguous). With Code 3, the decoder knows the moment a codeword is complete; with Code 4, we have to wait for the beginning of the next codeword before we know that the current codeword is complete. Because of this, Code 3 is called an instantaneous code, while Code 4 is not.
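
The instantaneous (prefix) property discussed above is easy to test mechanically. The sketch below assumes the codeword assignments shown in the table above; is_prefix_free is an illustrative helper, not from the book or the slides.

    def is_prefix_free(codewords):
        # Instantaneous (prefix) code: no codeword is a prefix of any other codeword.
        words = sorted(codewords)
        return not any(words[i + 1].startswith(words[i]) for i in range(len(words) - 1))

    codes = {
        "Code 1": ["0", "0", "1", "10"],
        "Code 2": ["0", "1", "00", "11"],
        "Code 3": ["0", "10", "110", "111"],
        "Code 4": ["0", "01", "011", "0111"],
    }
    for name, words in codes.items():
        print(name, "instantaneous" if is_prefix_free(words) else "not instantaneous")
    # Only Code 3 is instantaneous; Code 4 is uniquely decodable but not instantaneous.

Checking only adjacent words works because, after sorting, any codeword that is a prefix of a later codeword is also a prefix of its immediate successor.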

33 Sardinas-Patterson Theorem A code is uniquely decodable if and only if none of the dangling suffixes generated from its codewords (by repeatedly comparing codewords and previously obtained dangling suffixes) is itself a codeword.

34 Example Consider the code {a = 1, b = 011, c = 01110, d = 1110, e = 10011}. This code is an example of a code which is not uniquely decodable, since the string 011101110011 can be interpreted as the sequence of codewords 01110, 1110, 011, but also as the sequence of codewords 011, 1, 011, 10011. Two possible decodings of this encoded string are thus given by cdb and babe.

35 Example Here, the code is {1, 011, 01110, 1110, 10011}. 1 is a prefix of 1110 [dangling suffix 110], so the set becomes {1, 011, 01110, 1110, 10011, 110}. 1 is also a prefix of 10011 [dangling suffix 0011], so we get {1, 011, 01110, 1110, 10011, 110, 0011}. 011 is a prefix of 01110 [dangling suffix 10], so we get {1, 011, 01110, 1110, 10011, 110, 0011, 10}. For the added dangling suffix 10, which is a prefix of 10011, we obtain another dangling suffix, 011, which is already the codeword for b; hence the given code is not uniquely decodable.
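
The dangling-suffix procedure above can be automated. Below is an illustrative Python sketch of the Sardinas-Patterson test (function name and structure are my own); it reports the example code as not uniquely decodable and a prefix code as uniquely decodable.

    def sardinas_patterson(codewords):
        # Return True iff the code is uniquely decodable (Sardinas-Patterson test).
        C = set(codewords)

        def dangling(a, b):
            # If a is a proper prefix of b, return {dangling suffix of b}, else the empty set.
            return {b[len(a):]} if b.startswith(a) and len(b) > len(a) else set()

        suffixes = set()
        for x in C:
            for y in C:
                if x != y:
                    suffixes |= dangling(x, y)
        while True:
            if suffixes & C:
                return False               # some dangling suffix is itself a codeword
            new = set()
            for s in suffixes:
                for c in C:
                    new |= dangling(s, c) | dangling(c, s)
            if new <= suffixes:
                return True                # no new dangling suffixes appear
            suffixes |= new

    print(sardinas_patterson(["1", "011", "01110", "1110", "10011"]))   # False
    print(sardinas_patterson(["0", "10", "110", "111"]))                # True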

36 Other Examples [K. Sayood] Consider the code {0, 01, 11} and prove that it is uniquely decodable. Consider the code {0, 01, 10} and prove that it is not uniquely decodable.

37 GTU Examples What is a uniquely decodable code? Check which of the following codes are uniquely decodable and instantaneous: S1 = {101, 11, 00, 01, 100}, S2 = {0, 10, 110, 1110, ..}, S3 = {02, 12, 20, 21, 120}. What is the need for instantaneous codes? Explain instantaneous codes in detail with a suitable example.

38 Kraft-McMillan Inequality [Khalid Sayood] We divide this topic into two parts. The first part gives a necessary condition on the codeword lengths of uniquely decodable codes. The second part shows that we can always find a prefix code that satisfies this necessary condition.

39 Kraft-McMillan Inequality Let C be a code with N codewords of lengths l_1, l_2, .., l_N, with l_1 <= l_2 <= .. <= l_N. If C is uniquely decodable, then K(C) = sum_{i=1..N} 2^(-l_i) <= 1.
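
A quick numerical check of the inequality (illustrative sketch, not from the slides):

    def kraft_sum(lengths, r=2):
        # K(C) = sum of r^(-l_i); uniquely decodable r-ary codes must satisfy K(C) <= 1.
        return sum(r ** -l for l in lengths)

    # Codeword lengths of the compact code from the Huffman example: the bound is met exactly.
    print(kraft_sum([2, 2, 3, 3, 3, 3]))   # 1.0
    # Lengths 1, 2, 2, 2 give K(C) = 1.25 > 1, so no uniquely decodable binary code exists.
    print(kraft_sum([1, 2, 2, 2]))         # 1.25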

40 Shannon-Fano Coding [D. Salomon] For a source with symbol probabilities 0.25, 0.20, 0.15, 0.15, 0.10, 0.10 and 0.05, repeatedly splitting the (sorted) symbols into two sets of nearly equal total probability yields codeword lengths 2, 2, 3, 3, 3, 4 and 4. The average size of this code is 0.25(2) + 0.20(2) + 0.15(3) + 0.15(3) + 0.10(3) + 0.10(4) + 0.05(4) = 2.7 bits/symbol. The entropy is close to 2.67 bits/symbol, so the result is good.

41 Shannon-Fano Coding [D. Salomon] Repeat the calculation above, but place the first split between the third and the fourth symbol. Calculate the average size of the code and show that it is greater than 2.67 bits/symbol. This suggests that the Shannon-Fano method produces a better code when the splits are better.
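
A small Shannon-Fano sketch makes both calculations easy to repeat. It assumes the seven-symbol probabilities 0.25, 0.20, 0.15, 0.15, 0.10, 0.10, 0.05 (consistent with the 2.7-bit average and 2.67-bit entropy quoted above); the symbol names and the split heuristic are illustrative, not the book's.

    def shannon_fano(symbols):
        # symbols: list of (name, probability) pairs sorted by descending probability.
        if len(symbols) == 1:
            return {symbols[0][0]: ""}
        total, running = sum(p for _, p in symbols), 0.0
        best_diff, split = float("inf"), 1
        for i in range(1, len(symbols)):
            running += symbols[i - 1][1]
            diff = abs(2 * running - total)   # imbalance if we split just before index i
            if diff < best_diff:
                best_diff, split = diff, i
        code = {s: "0" + w for s, w in shannon_fano(symbols[:split]).items()}
        code.update({s: "1" + w for s, w in shannon_fano(symbols[split:]).items()})
        return code

    probs = [("a", 0.25), ("b", 0.20), ("c", 0.15), ("d", 0.15),
             ("e", 0.10), ("f", 0.10), ("g", 0.05)]
    code = shannon_fano(probs)
    print(code)                                     # codeword lengths 2, 2, 3, 3, 3, 4, 4
    print(sum(p * len(code[s]) for s, p in probs))  # about 2.7 bits/symbol

Forcing a different top-level split (instead of searching for the most balanced one) lets you carry out the exercise above and compare the resulting average sizes.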

42 GTU Examples Apply the Shannon-Fano coding algorithm to the given message ensemble: [X] = [X1 X2 X3 X4 X5 X6], P[X] = [ ]. Take M = 2. Find: (i) entropy, (ii) code words, (iii) average length, (iv) efficiency. Explain the Shannon-Fano code in detail with an example. Mention its advantages over other coding schemes.

43 Arithmetic Coding [David Salomon] If a statistical method assigns a 90% probability to a given character, the optimal code size would be 0.15 bits, whereas a Huffman coder would assign a 1-bit code to the symbol, about six times longer than necessary. Arithmetic coding bypasses the idea of replacing an input symbol with a specific code; it replaces a stream of input symbols with a single floating-point output number.

44 Suppose that we want to encode the message BILL GATES.
Character | Probability | Range
^ (space) | 1/10 | 0.00 - 0.10
A | 1/10 | 0.10 - 0.20
B | 1/10 | 0.20 - 0.30
E | 1/10 | 0.30 - 0.40
G | 1/10 | 0.40 - 0.50
I | 1/10 | 0.50 - 0.60
L | 2/10 | 0.60 - 0.80
S | 1/10 | 0.80 - 0.90
T | 1/10 | 0.90 - 1.00

45 Arithmetic Coding Encoding algorithm for arithmetic coding: low = 0.0; high = 1.0; while not EOF do range = high - low; read(c); high = low + range * high_range(c); low = low + range * low_range(c); end do; output(low);

46 Arithmetic Coding To encode the first character B properly, the final coded message has to be a number greater than or equal to 0.20 and less than 0.30: range = 1.0 - 0.0 = 1.0, high = 0.0 + 1.0 * 0.3 = 0.3, low = 0.0 + 1.0 * 0.2 = 0.2. After the first character is encoded, the low end of the range changes from 0.00 to 0.20 and the high end changes from 1.00 to 0.30.

47 Arithmetic Coding The next character to be encoded, the letter I, owns the range 0.50 to 0.60 within the new subrange of 0.20 to 0.30. So the new encoded number will fall somewhere in the 50th to 60th percentile of the currently established range. Thus, this number is further restricted to 0.25 to 0.26.

48 Arithmetic Coding Note that any number between 0.25 and 0.26 is a legal encoding number of BI. Thus, a number that is best suited for binary representation is selected. (Condition: the length of the encoded message is known or EOF is used.)

49 Arithmetic Coding Step by step for the first three characters:
B: range = 1 - 0 = 1, high = 0 + 1 * 0.3 = 0.3, low = 0 + 1 * 0.2 = 0.2
I: range = 0.3 - 0.2 = 0.1, high = 0.2 + 0.1 * 0.6 = 0.26, low = 0.2 + 0.1 * 0.5 = 0.25
L: range = 0.26 - 0.25 = 0.01, high = 0.25 + 0.01 * 0.8 = 0.258, low = 0.25 + 0.01 * 0.6 = 0.256

50 [Figure: successive subdivision of the interval [0, 1) as each character of BILL GATES is encoded; each panel shows the symbol ranges (space), A, B, E, G, I, L, S, T within the current subinterval.]

51 Arithmetic Coding
Character | Low | High
(start) | 0.0 | 1.0
B | 0.2 | 0.3
I | 0.25 | 0.26
L | 0.256 | 0.258
L | 0.2572 | 0.2576
^ (space) | 0.25720 | 0.25724
G | 0.257216 | 0.257220
A | 0.2572164 | 0.2572168
T | 0.25721676 | 0.2572168
E | 0.257216772 | 0.257216776
S | 0.2572167752 | 0.2572167756
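
The low/high table can be reproduced with exact rational arithmetic. The sketch below is illustrative (not the slides' code); it uses the cumulative ranges from the table on slide 44 and Python's Fraction type to avoid floating-point rounding.

    from fractions import Fraction as F

    # Cumulative ranges [low, high) for each character of BILL GATES (see slide 44).
    RANGES = {" ": (F(0), F(1, 10)), "A": (F(1, 10), F(2, 10)), "B": (F(2, 10), F(3, 10)),
              "E": (F(3, 10), F(4, 10)), "G": (F(4, 10), F(5, 10)), "I": (F(5, 10), F(6, 10)),
              "L": (F(6, 10), F(8, 10)), "S": (F(8, 10), F(9, 10)), "T": (F(9, 10), F(1))}

    def arithmetic_encode(message):
        # Narrow [low, high) for each character; any number in the final interval encodes the message.
        low, high = F(0), F(1)
        for ch in message:
            rng = high - low
            lo_c, hi_c = RANGES[ch]
            high = low + rng * hi_c   # both updates use the old low, as in the pseudocode
            low = low + rng * lo_c
        return low, high

    low, high = arithmetic_encode("BILL GATES")
    print(float(low), float(high))   # roughly 0.2572167752 and 0.2572167756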

52 Arithmetic Coding So, the final value 0.2572167752 (or any value between 0.2572167752 and 0.2572167756, if the length of the encoded message is known at the decoding end) will uniquely encode the message BILL GATES.

53 Arithmetic Coding Decoding is the inverse process. Since 0.2572167752 falls between 0.2 and 0.3, the first character must be B. We remove the effect of B by first subtracting the low value of B, 0.2, giving 0.0572167752, and then dividing by the width of the range of B, 0.1. This gives a value of 0.572167752, which falls in the range of I, and the process repeats.
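
Decoding can be sketched by continuing the encoder example above (reusing its RANGES table and the encoded value low); the loop below is illustrative and assumes the message length is known at the decoder.

    def arithmetic_decode(value, length):
        # Repeatedly find the symbol whose range contains the value, then remove its
        # effect (subtract the symbol's low, divide by its range width) and continue.
        out = []
        for _ in range(length):
            for ch, (lo_c, hi_c) in RANGES.items():
                if lo_c <= value < hi_c:
                    out.append(ch)
                    value = (value - lo_c) / (hi_c - lo_c)
                    break
        return "".join(out)

    print(arithmetic_decode(low, 10))   # BILL GATES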

54 Arithmetic Coding In summary, the encoding process is simply one of narrowing the range of possible numbers with every new symbol. The new range is proportional to the predefined probability attached to that symbol. Decoding is the inverse procedure, in which the range is expanded in proportion to the probability of each symbol as it is extracted.

55 Arithmetic Coding The coding rate theoretically approaches the high-order entropy. It is not as popular as Huffman coding because more complex arithmetic operations (multiplications and divisions for every symbol) are needed.

56 Example Apply arithmetic coding to the following string: SWISS_MISS

57 Shannon's Theorem & Channel Capacity Shannon's theorem gives an upper bound on the capacity of a link, in bits per second (bps), as a function of the available bandwidth and the signal-to-noise ratio of the link. The theorem can be stated as: C = B * log_2(1 + S/N), where C is the achievable channel capacity, B is the bandwidth of the line, S is the average signal power and N is the average noise power.

58 Shannon's Theorem & Channel Capacity The signal-to-noise ratio (S/N) is usually expressed in decibels (dB), given by the formula 10 * log_10(S/N); so, for example, a signal-to-noise ratio of 1000 is commonly expressed as 10 * log_10(1000) = 30 dB.

59 Shannon's Theorem & Channel Capacity Here is a graph showing the relationship between C/B and S/N (in dB).

60 Examples Here are two examples of the use of Shannon's theorem. Modem: for a typical telephone line with a signal-to-noise ratio of 30 dB and an audio bandwidth of 3 kHz, we get a maximum data rate of C = 3000 * log_2(1001) = 3000 * 9.967 = 29901 bps, which is a little less than 30 kbps (30720).

61 Examples Satellite TV Channel: for a satellite TV channel with a signal-to-noise ratio of 20 dB and a video bandwidth of 10 MHz, we get a maximum data rate of C = 10,000,000 * log_2(101), which is about 66 Mbps.

62 Other Examples The signal-to-noise ratio is often given in decibels. Assume that SNR_dB = 36 and the channel bandwidth is 2 MHz. Then S/N = 10^(36/10) ≈ 3981, and the theoretical channel capacity is C = 2,000,000 * log_2(1 + 3981) ≈ 2,000,000 * 11.96 ≈ 24 Mbps.
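
All three capacity figures follow from the same two formulas. The sketch below (illustrative names, not from the slides) converts the dB value back to a plain ratio and applies C = B * log_2(1 + S/N).

    import math

    def channel_capacity(bandwidth_hz, snr_db):
        # C = B * log2(1 + S/N), with the signal-to-noise ratio supplied in dB.
        snr = 10 ** (snr_db / 10)   # undo 10*log10(S/N)
        return bandwidth_hz * math.log2(1 + snr)

    print(channel_capacity(3_000, 30))         # telephone line: about 29.9 kbps
    print(channel_capacity(10_000_000, 20))    # satellite TV channel: about 66.6 Mbps
    print(channel_capacity(2_000_000, 36))     # SNR_dB = 36, B = 2 MHz: about 23.9 Mbps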

63 My Blog worldsj.wordpress.com
