

SOPHOMORE COLLEGE: MATHEMATICS OF THE INFORMATION AGE

SHANNON'S THEOREMS

Let's recall the fundamental notions of information and entropy. To repeat, Shannon's emphasis is on selecting a given message from a collection of possible messages. The set of all possible messages is called the source, and one might say that information (before transmission, at least) is less about what you do say than it is about what you can say.

There are various legends about Shannon's choice of the term "entropy" to describe the average information of a source. The one that's repeated most often is John von Neumann's advice: "Call it entropy. No one knows what entropy is, so if you call it that you will win any argument." This little pointer on linguistic wrangling notwithstanding, there are many similarities in form and substance between Shannon's definition of entropy and the term as it is used in physics, especially in thermodynamics and statistical mechanics.

For the definitions: Consider a source consisting of $N$ symbols $s_1, \dots, s_N$. In terms of asking yes-no questions, if it takes $I$ questions to select one particular symbol then a binary search is the best strategy, e.g., "Is the particular symbol in the first half of the list of symbols? Is it in the first half of the first half?" Etc. This leads to the relation

$$2^I = N, \quad \text{or} \quad I = \log_2 N.$$

This is assuming that the symbols are equally probable.

We talked a little about what this formula means if $N$ is not a power of 2, so that $\log_2 N$ is not a whole number. Take, say, $N = 9$, which has $\log_2 9 \approx 3.1699$. Playing the game optimally, the number of questions we need to ask for a particular round clearly satisfies

$$\lfloor \log_2 9 \rfloor \le \text{number of questions} \le \lfloor \log_2 9 \rfloor + 1,$$

so at least three and at most four questions. Likewise, if we're trying to determine one of $N$ possible symbols, then

$$\lfloor \log_2 N \rfloor \le \text{number of questions} \le \lfloor \log_2 N \rfloor + 1.$$

In general, the idea is that the number of questions one needs to ask on average, if the game is played many times, should tend to $\log_2 N$. Let's be a little more precise about this. If we play the game $m$ times, it's as if we have $m$ players, each playing the game once and picking out an unknown $m$-tuple $(s_1, s_2, \dots, s_m)$. There are $N^m$ possible unknown $m$-tuples, all equally probable. If it takes $I_m$ questions to pick one out then, as above,

$$\log_2 N^m \le I_m \le \log_2 N^m + 1.$$

Write this as

$$m \log_2 N \le I_m \le m \log_2 N + 1$$

or

$$\log_2 N \le \frac{I_m}{m} \le \log_2 N + \frac{1}{m}.$$
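To see these bounds in action, here is a minimal sketch (mine, not part of the notes) that counts the halving questions for $N = 9$ and for $m$-tuples; the per-symbol count settles toward $\log_2 N$.

```python
import math

def questions_needed(n_symbols: int) -> int:
    """Worst-case number of yes/no questions needed to single out one of
    n_symbols equally likely symbols by repeatedly asking about halves."""
    questions = 0
    remaining = n_symbols
    while remaining > 1:
        remaining = (remaining + 1) // 2   # keep the larger half (worst case)
        questions += 1
    return questions

N = 9
print(questions_needed(N), math.log2(N))   # 4 vs. 3.1699...

# Guessing an m-tuple of symbols is one game over N**m equally likely
# possibilities, so the cost per symbol tends to log2(N) as m grows.
for m in (1, 5, 20, 100):
    I_m = questions_needed(N ** m)
    print(m, I_m / m)                      # 4.0, 3.2, 3.2, 3.17, ...
```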

Now, $I_m$ is the number of questions it takes to guess $m$ symbols, so $I_m/m$ is the number of questions it takes to pick out one symbol, on average. We can make $I_m/m$ arbitrarily close to $\log_2 N$ by taking $m$ large.

Back to the general definition of information content. If, more realistically, the symbol $s_n$ has an associated probability $p_n$, then the information of the message $s_n$ is

$$I(s_n) = \log \frac{1}{p_n}$$

in bits.[1] Note that if the symbols are equally probable then they each have probability $1/N$ and the formula gives $I(s_n) = \log N$, as before. The entropy is the average information of the source, that is, it is the weighted average of the information of each of the symbols according to their respective probabilities:

$$H = \sum_{n=1}^{N} p_n I(s_n) = \sum_{n=1}^{N} p_n \log \frac{1}{p_n}.$$

Logs add... and so does information. Here's a nice bit of intuition bundling up information, logs, and old, familiar identities. Suppose that a message A occurs with probability $p_1$ and another message B occurs with probability $p_2$. What is the information associated with receiving both messages? Intuitively, we feel that the informations should add, that is, the information associated with receiving both messages (our surprise at receiving both) should be the sum of the information associated with each, provided they are independent. You wouldn't say "It's raining outside" and "The sidewalk is wet" are independent messages, but you would say that "It's raining outside" and "I play the trombone" are independent.

One fundamental idea in probability is that the probability of two independent events occurring is the product of the probabilities of each. (Think of coin tossing, or rolling a die, for common examples.) Thus, as we've written it, the probability of both messages occurring is $p_1 p_2$, and the information associated with that is

$$I(A \text{ and } B) = \log \frac{1}{p_1 p_2}.$$

But, we all know, the log of a product is the sum of the logs, and hence

$$I(A \text{ and } B) = \log \frac{1}{p_1 p_2} = \log \frac{1}{p_1} + \log \frac{1}{p_2} = I(A) + I(B).$$

Turning this around, if you ever wanted an intuitive reason why the log of a product should be the sum of the logs, think of it this way: the information of the product is the sum of the informations.

[1] We'll always use logs to base 2 and we'll drop the subscript. To compute, you can relate the log base 2 to the log base 10 (for example) by the formula

$$\log_2 x = \frac{\log_{10} x}{\log_{10} 2} = \frac{\log_{10} x}{0.3010}.$$
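Here is a small sketch of these definitions in code (mine, with made-up probabilities): information in bits, entropy as the weighted average, and the additivity of information over independent messages.

```python
import math

def info(p: float) -> float:
    """Information (surprise) of a message with probability p, in bits."""
    return math.log2(1 / p)

def entropy(probs) -> float:
    """Entropy of a source: the probability-weighted average information."""
    return sum(p * info(p) for p in probs if p > 0)

# Equally probable symbols: each carries log2(N) bits and H = log2(N).
N = 8
print(info(1 / N), entropy([1 / N] * N))     # 3.0  3.0

# Information adds over independent messages: I(A and B) = I(A) + I(B).
pA, pB = 0.25, 0.1                            # made-up probabilities
print(info(pA * pB), info(pA) + info(pB))     # both about 5.3219
```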

An important special case of entropy. Let's take a special case of an entropy calculation. Suppose we have a source consisting of just two messages A and B. Suppose A occurs with probability $p$. Since only A or B can occur, and one of them must occur, the probability that B occurs is then $1 - p$. Thus the entropy of the source is

$$H(\{A, B\}) = p \log \frac{1}{p} + (1 - p) \log \frac{1}{1 - p}.$$

How does this look as a function of $p$, for $0 < p < 1$, and for the limiting cases $p = 0$ and $p = 1$? Here's what the graph shows: at $p = 0$ or $p = 1$ the entropy is zero, as it should be (there is no uncertainty), and the entropy is a maximum at $p = 1/2$, also as it should be (each message is equally likely). The maximum value is 1. These facts can be verified with some calculus; I won't do that.

Unequal probability forces entropy down. Let's derive one general property of entropy, a property that can sometimes help your intuition. Again suppose we have a source of $N$ messages $S = \{s_1, s_2, \dots, s_N\}$. If the probabilities are equal then they are all equal to $1/N$, the information content of each message is $\log N$, and the entropy of the source is

$$H(S) = \sum_{n=1}^{N} \frac{1}{N} \log N = \frac{1}{N} \log N \sum_{n=1}^{N} 1 = \log N.$$

Makes sense. Let's show that if the messages are not equiprobable then the entropy goes down, i.e., let's show that $H(S) \le \log N$ and that equality holds only when the probabilities are equal. There are several ways one can show this (at least two ways to my knowledge). We'll use an approach based on optimization, just for practice in some ideas you may not have seen for a while (and certainly not in this context).[2]

[2] The other approach is via Jensen's inequality, a general result on convex functions. That's the better approach, really, but we'd have to develop Jensen's inequality. Worthwhile, but maybe another time.
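Before the derivation, a quick numerical sanity check of both facts (a sketch of mine, with a made-up skewed distribution): the two-message entropy peaks at 1 bit when $p = 1/2$, and unequal probabilities pull the entropy below $\log_2 N$.

```python
import math

def entropy(probs) -> float:
    """Entropy in bits; terms with probability 0 contribute nothing."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# Two-message source H(p) = p log(1/p) + (1-p) log(1/(1-p)):
for p in (0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0):
    print(f"H({p}) = {entropy([p, 1 - p]):.4f}")   # peaks at 1.0 when p = 0.5

# Unequal probabilities push the entropy below log2(N):
N = 4
uniform = [1 / N] * N
skewed = [0.7, 0.1, 0.1, 0.1]                       # made-up, sums to 1
print(entropy(uniform), math.log2(N))               # 2.0  2.0
print(entropy(skewed))                              # about 1.357 < 2
```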

Suppose the probabilities are $p_1, p_2, \dots, p_N$. We want to find the extremum of

$$H(p_1, p_2, \dots, p_N) = \sum_{n=1}^{N} p_n \log \frac{1}{p_n}$$

subject to the constraint

$$p_k > 0 \quad \text{and} \quad \sum_{n=1}^{N} p_n = 1.$$

The constraint is the fact that the probabilities add to 1. A job for Lagrange multipliers! We might as well use log to base $e$ here, since that only amounts to scaling the entropy, and we know how to differentiate that. Calling the constraint function

$$g(p_1, p_2, \dots, p_N) = \sum_{n=1}^{N} p_n,$$

at an extremum we have $\nabla H = \lambda \nabla g$. Now,

$$\frac{\partial}{\partial p_k}\left( p_k \log \frac{1}{p_k} \right) = \log \frac{1}{p_k} + p_k \left( -\frac{1}{p_k} \right) = \log \frac{1}{p_k} - 1,$$

so the condition on the gradients is, componentwise,

$$\log \frac{1}{p_k} - 1 = \lambda \cdot 1.$$

Thus all the $p_k$'s are equal, and since they add to 1 they must all equal $1/N$. The extremal is a maximum because

$$\frac{\partial^2}{\partial p_k \, \partial p_l} H(p_1, p_2, \dots, p_N) = -\frac{1}{p_k}\,\delta_{kl},$$

so the matrix of second derivatives is diagonal with negative entries. Done.

More on the meaning of entropy after we look briefly at a few sample problems that put entropy to work.

Fool's gold. Consider the following weighing problem.[3] You have 27 apparently identical gold coins. One of them is false and lighter, but is otherwise indistinguishable from the others. You also have a balance with two pans, but without standard, known weights for comparison weighings. Thus you have to compare the gold coins to each other, and any measurement will tell you if the loaded pans weigh the same or, if not, which weighs more. How many measurements are needed to find the false coin?

Before solving the problem, find a lower bound for the number of weighings. Here's an information theoretic way of analyzing the problem. First, the amount of information we need to select one of 27 items is $\log 27$ bits, for we want to select one coin from among 27, and "indistinguishable" here translates to "equiprobable." A weighing — putting some coins on one pan of the balance and other coins on the other pan — produces one of three outcomes:

(1) The pans balance

(2) The left pan is heavier
(3) The right pan is heavier

If there is a scheme so that these outcomes are also equiprobable, then a single weighing is worth $\log 3$ bits (each outcome has probability 1/3). In such a scheme, if there are $n$ weighings then we get $n \log 3$ bits of information. Thus we want to choose $n$ so that

$$n \log 3 \ge \log 27 = 3 \log 3,$$

so $n \ge 3$. Any weighing scheme that does not have equally probable outcomes will have a lower entropy, as we showed above. Thus we get less information from such a scheme and so would require more weighings to find the false coin.

This analysis suggests we should aim to find the fake coin with three weighings. Here's a way to weigh. Divide the coins into three piles of 9 coins each and compare two piles. If the scale balances, then the fake coin is in the remaining pile of 9 coins. If one pan is heavier, then the other pan has the fake coin. In any event, with one weighing we can now locate the fake coin within a group of nine coins. Within that group, weigh three of those coins against another three. The result of this weighing isolates the fake coin in a group of three. In this last group, weigh two of the coins against each other. That weighing will determine which coin is fake.[4]

Problem. Suppose instead that you have 12 coins, indistinguishable in appearance, but one is either heavier or lighter than the others. What weighing scheme will allow you to find the false coin?

[3] This problem comes from A Diary on Information Theory, by A. Rényi.

[4] Note that another approach in the second step might be lucky. Namely, in a group of nine coins, one of which is known to be bad, weigh four against four. If the scale balances then the leftover coin is the fake, and we've found it with two weighings. But if the scale doesn't balance then we're left with a group of four coins, and it will take two more weighings to find the fake coin.
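Here is a small simulation (a sketch of mine, not part of the notes) of the three-weighing strategy just described; it checks that a lighter fake among 27 coins is always found in exactly three weighings.

```python
def weigh(left, right, fake):
    """Balance comparison: -1 if the left pan is lighter, 1 if the right pan
    is lighter, 0 if the pans balance. The fake coin is the light one."""
    if fake in left:
        return -1
    if fake in right:
        return 1
    return 0

def find_fake(coins, fake):
    """Split into three equal piles, weigh two of them, keep the pile that
    must contain the fake, and repeat until one coin is left."""
    group, weighings = list(coins), 0
    while len(group) > 1:
        third = len(group) // 3
        a, b, c = group[:third], group[third:2 * third], group[2 * third:]
        result = weigh(a, b, fake)
        group = a if result == -1 else b if result == 1 else c
        weighings += 1
    return group[0], weighings

coins = range(27)
assert all(find_fake(coins, f) == (f, 3) for f in coins)
print("3 weighings always suffice for 27 coins")
```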

Place your bets. Here's an example of applying Shannon's entropy to a two-message source where the probabilities are not equal.[5] Suppose we have a roulette wheel with 32 numbers. (Real roulette wheels have more, but $32 = 2^5$, so our calculations will be easier.) You play the game by betting on one of the 32 numbers. You win if your number comes up and you lose if it doesn't. In a fair game all numbers are equally likely to occur. If we consider the source consisting of the 32 statements "The ball landed in number $n$," for $n$ going from 1 to 32, then all of those statements are equally likely and the information associated with each is

$$I = \log \frac{1}{1/32} = \log 32 = 5.$$

(It takes 5 bits to code the number.) But you don't care about the particular number, really. You want to know if you won or lost. You are interested in the two-message source

W = "You have won"
L = "You have lost"

What is the entropy of this source? The probability of W occurring is $1/32$ and the probability of L occurring is $31/32 = 1 - 1/32$. So the entropy of the source is

$$H = \frac{1}{32} \log 32 + \frac{31}{32} \log \frac{32}{31} = 0.15625 + 0.04437 = 0.20062.$$

That's a considerably smaller number than 5; L is a considerably more likely message. This tells us that, on average, you should be able to get the news that you won or lost in about 0.2 bits.

What does that mean, 0.2 bits? Doesn't it take at least one bit for each, say 1 for "You have won" and 0 for "You have lost"? Remember, the notions of 1/32 probability of winning and 31/32 probability of losing only make sense with the backdrop of playing the game many times. Probabilities are statements on percentages of occurrences in the long run. Thus the question, really, is: if you have a series of wins and losses, e.g.,

LLLLLWLLLLLLWWLLLL...,

how could you code your record so that, on average, it takes something around 0.2 bits per symbol?

Here's one method. We track the won/loss record in 8-event blocks. It's reasonable to expect that 8 events in a row will be losses. As we look at the record we ask a yes/no question: "Are the next 8 events all losses?" If the answer is yes, we let 0 represent that block of 8 losses in a row. So every time that this actually happens in the total record we will have resolved 8 outcomes with a single bit, or a single outcome (in that block) with 1/8 of a bit. If the answer to the yes/no question is no, that is, if the next 8 events were not all losses, then we have to record what they were. Here's the scheme:

0 means the next 8 results are LLLLLLLL;
1XXXXXXXX means the 8 X bits specify the actual record.

So, for example, 100010000 means the next 8 results are LLLWLLLL. So, starting the record at the beginning, we might have, say,

0 0 100010000 0,

meaning that the first 16 events are losses, the next 8 are LLLWLLLL, the next 8 are losses, and so on.

How well does this do? Look at a block of eight win-loss outcomes. The probability of a single loss is 31/32 and the probability of 8 straight losses is $(31/32)^8 \approx 0.776$ (the events are independent, so the probabilities multiply), and it takes 1 bit to specify this. The probability that the 8 results are not all losses is $1 - 0.776 = 0.224$, and, according to our scheme, it takes 9 bits to give these results. Thus, on average, the number of bits needed to specify 8 results is

$$0.776 \times 1 + 0.224 \times 9 = 2.792.$$

Finally, the average number of bits per result is therefore $2.792/8 = 0.349$. That's much closer to the 0.2 bits given by calculating the entropy.

Let me raise two questions:

Can we be more clever and get the average number of bits down further, and can we even be so sneaky as to break the entropy of 0.2 bits per symbol?

Is this just a game, or what?

[5] This example is from Silicon Dreams, by Robert W. Lucky.
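Before moving on, here is a quick check of the numbers above (a sketch of mine): the entropy of the win/lose source and the average cost per outcome of the 8-event block scheme.

```python
import math

def entropy(probs):
    """Entropy in bits."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

p_win = 1 / 32
print(entropy([p_win, 1 - p_win]))            # about 0.2006 bits per outcome

# Block scheme: one bit for "8 straight losses", nine bits otherwise.
p_all_losses = (1 - p_win) ** 8               # about 0.776
avg_bits_per_block = p_all_losses * 1 + (1 - p_all_losses) * 9
print(avg_bits_per_block, avg_bits_per_block / 8)   # about 2.79 and 0.349
```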

By way of example, let me address the second question first.

An example: Run length coding. Don't think that the strategy of coding blocks of repeated symbols in this way is an idle exercise. There are many examples of messages with few symbols, where the probabilities strongly favor one symbol over the others, thus indicating that substantial blocks of a particular symbol will occur often. The sort of coding scheme that makes use of this, much as we did above, is called run length coding.

Here's an example of how run length coding can be used to code a simple image efficiently. I say "simple" because we'll look at a 1-bit image — just black or white, no shades of gray. The idea behind run length coding, and using it to compress the digital representation of this image, is to code by blocks rather than by individual symbols. For example, 0X, where X is a binary number, means that the next X pixels are black; 1Y means that the next Y pixels are white. There are long blocks of a single pixel brightness, black or white, and coding blocks can result in a considerable savings of bits over coding each pixel with its own value.

You shouldn't think that 1-bit, black-and-white images are uncommon or not useful to consider. You wouldn't want them for photos — we certainly didn't do the palm tree justice — but text files are precisely 1-bit images. A page of text has letters (made up of black pixels) and white space in between and around the letters (white pixels). Fax machines, for example, treat text documents as images — they don't recognize and record the symbols on the page as characters, only as black or white pixels. Run length coding is a standard way of compressing black-and-white images for fax transmission.[6]

Run length coding can also be applied to grayscale images. In that case one codes for a run of pixels all of the same brightness. Typical grayscale images have enough repetition that the compression ratio is about 1.5 to 1, whereas for black-and-white images it may be as high as 10 to 1.

[6] Actually, fax machines combine run length coding with Huffman coding, a topic we'll discuss later.
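Here is a minimal run-length coder for a row of 1-bit pixels (a sketch of mine; it stores (value, run length) pairs rather than the exact 0X/1Y bit format described above).

```python
def rle_encode(pixels):
    """Collapse a row of 0/1 pixels into (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return [(value, length) for value, length in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the pixel row."""
    return [value for value, length in runs for _ in range(length)]

# A mostly-white row with one short black stroke, like a scan line of text.
row = [1] * 40 + [0] * 6 + [1] * 54
runs = rle_encode(row)
print(runs)                          # [(1, 40), (0, 6), (1, 54)]
assert rle_decode(runs) == row       # lossless
```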

How low can you go: The Noiseless Source Coding Theorem

Now let's consider the first question I raised in connection with the roulette problem. I'll phrase it like this: is there a limit to how clever we can be in the coding? The answer is yes — the entropy of the source sets a lower bound for how efficient the coding of a message can be. This is a really striking demonstration of how useful the notion of entropy is. I'd say it's a demonstration of how natural it is, but it only seems natural after it has been seen to be useful; much of mathematics seems to go this way.

The precise result is called Shannon's Noiseless Source Coding Theorem. Before stating and proving the result, we need a few clarifications on coding. The source symbols are the things to be selected (in other words, the things to be sent, if we're thinking in terms of communication). These can have any form: letters, pictures, sounds — they are uncoded. As before, call the source symbols $s_1, \dots, s_N$. The symbols have associated probabilities $p_i$, and the entropy is defined in terms of these probabilities.

Now we code each source symbol, and for simplicity we work exclusively with binary codes. A coded symbol is then represented by binary numbers, i.e., by strings of 0's and 1's. Each source symbol is encoded as a specified string of 0's and 1's — call these the coded source symbols. The 0 and 1 are the code alphabet.[7] A codeword is a string of coded source symbols, and a message is a sequence of codewords.

The main property we want of any code is that it be uniquely decodable. It should be possible to parse any codeword unambiguously into source symbols. For example, coding symbols $s_1$, $s_2$, $s_3$ and $s_4$ as

$$s_1 = 0, \quad s_2 = 01, \quad s_3 = 11, \quad s_4 = 00$$

is not a uniquely decodable code. The codeword 0011 could be either $s_4 s_3$ or $s_1 s_1 s_3$. We'll look at the finer points of this later.

Suppose we have a message using a total of $M$ source symbols. The probabilities of the source symbols are determined by their frequency counts in the message. Thus if the $i$-th source symbol occurs $m_i$ times in the message (its frequency count), then the probability associated with the $i$-th coded source symbol is the fraction of times it does occur, i.e.,

$$p_i = \frac{m_i}{M},$$

and the entropy of the source is

$$H(S) = \sum_i p_i \log \frac{1}{p_i} = \sum_i \frac{m_i}{M} \log \frac{M}{m_i}.$$

Changing $M$ changes the entropy, for it changes the probabilities of the source symbols that occur. That will in turn affect how we might code the source symbols.

[7] Thus we consider radix 2 codes. If the symbols in the code are drawn from an alphabet of $r$ elements then the code is referred to as radix $r$.
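Two quick illustrations in code (a sketch of mine): the ambiguity of the code above, and the entropy of a message computed from its frequency counts.

```python
import math
from collections import Counter

# The non-uniquely-decodable code from above: 0011 parses two ways.
code = {"s1": "0", "s2": "01", "s3": "11", "s4": "00"}
parse_a = code["s4"] + code["s3"]                 # "0011"
parse_b = code["s1"] + code["s1"] + code["s3"]    # "0011"
print(parse_a == parse_b)                         # True: ambiguous

def entropy_from_message(message):
    """H(S) = sum over source symbols of (m_i / M) * log2(M / m_i)."""
    counts = Counter(message)
    M = len(message)
    return sum((m / M) * math.log2(M / m) for m in counts.values())

# A made-up 10-symbol message; its entropy depends on the frequency counts.
print(entropy_from_message("AABABCAAAB"))
```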

So, with $M$ fixed, code the source symbols (with a uniquely decodable code), and let $A_M$ be the average of the lengths of all the coded source symbols. Thus $A_M$ is a weighted average of the lengths of the individual coded source symbols, weighted according to their respective probabilities:

$$A_M = \sum_i \frac{m_i}{M} \, \text{length}_i.$$

There are different ways of coding the source, and efficient coding of a message means making $A_M$ small. A code is optimal if $A_M$ is as small as possible. How small can it be? One version of Shannon's answer is:

Noiseless Source Coding Theorem. In the notation above, $A_M \ge H(S)$.

This is called the noiseless source coding theorem because it deals with codings of a source before any transmission; no noise has yet been introduced that might corrupt the message.

We're going to prove this by mathematical induction on $M$. To recall the principle of induction, we:

(1) Establish that the statement is true for $M = 1$, and
(2) Assume that the statement is true for all numbers up to and including $M$ (the induction hypothesis) and deduce that it is also true for $M + 1$.

The principle of mathematical induction says that if we can do this then the statement is true for all natural numbers.

The first step is to verify the statement of the theorem for $M = 1$. When $M = 1$ there is only one source symbol and the entropy $H = 0$. Any coding of the single message is at least one bit long (and an optimal coding would be exactly one bit, of course), so

$$A_1 \ge 1 > 0 = H.$$

We're on our way.

The induction hypothesis is that $A_M \ge H$ for a message with at most $M$ source symbols. Suppose we have a message with $M + 1$ source symbols. The entropy changes, and we want to show that $A_{M+1} \ge H(S)$. Here

$$H(S) = \sum_i \frac{m_i}{M+1} \log \frac{M+1}{m_i},$$

where $m_i$ is the frequency count of the $i$-th coded source symbol in the message with $M + 1$ source symbols.

Code $S$ and split the coded source symbols into two classes:

$S_0$ = all source symbols in $S$ whose coded source symbol starts with a 0
$S_1$ = all source symbols in $S$ whose coded source symbol starts with a 1

The numbers of source symbols in the message drawn from $S_0$ and $S_1$, respectively, are

$$M_0 = \sum_{S_0} m_i \quad \text{and} \quad M_1 = \sum_{S_1} m_i,$$

where the sums go over the coded source symbols in $S$ which are in $S_0$ and $S_1$.

Now

$$M_0 + M_1 = M + 1,$$

and we can assume that neither $M_0$ nor $M_1$ is zero. (If, say, $M_0 = 0$, then no coded source symbols of $S$ start with a 0, i.e., all coded source symbols start with a 1. But then this leading 1 is a wasted bit in coding the source symbols, and this is clearly not optimal.) Thus we can say that

$$1 \le M_0, M_1 \le M,$$

and we are set up to apply the induction hypothesis.

Now drop the first bit from the coded source symbols coming from $S_0$ and $S_1$. Then, first of all, the average of the lengths of the coded source symbols in $S$ is given by the weighted average of the lengths of the shortened coded source symbols in $S_0$ and $S_1$, plus 1 — adding the one comes from the extra bit that's been dropped — namely

$$A_{M+1} = 1 + \sum_{S_0} \frac{m_i}{M+1}\,\text{length}_i + \sum_{S_1} \frac{m_i}{M+1}\,\text{length}_i
= 1 + \frac{M_0}{M+1} \sum_{S_0} \frac{m_i}{M_0}\,\text{length}_i + \frac{M_1}{M+1} \sum_{S_1} \frac{m_i}{M_1}\,\text{length}_i.$$

The sums on the right-hand side are larger than if we use an optimal code for messages of size $M_0$ and $M_1$, so

$$A_{M+1} \ge 1 + \frac{M_0}{M+1} A_{M_0} + \frac{M_1}{M+1} A_{M_1},$$

and we can then apply the induction hypothesis to bound the average length by the entropy for messages of sizes $M_0$, $M_1$. That is,

$$A_{M+1} \ge 1 + \frac{M_0}{M+1} A_{M_0} + \frac{M_1}{M+1} A_{M_1}
\ge 1 + \frac{M_0}{M+1} \sum_{S_0} \frac{m_i}{M_0} \log \frac{M_0}{m_i} + \frac{M_1}{M+1} \sum_{S_1} \frac{m_i}{M_1} \log \frac{M_1}{m_i}.$$

This is the key observation, for now several algebraic miracles occur:

$$1 + \frac{M_0}{M+1} \sum_{S_0} \frac{m_i}{M_0} \log \frac{M_0}{m_i} + \frac{M_1}{M+1} \sum_{S_1} \frac{m_i}{M_1} \log \frac{M_1}{m_i}$$
$$= 1 + \sum_{S_0} \frac{m_i}{M+1} \log \frac{M_0}{m_i} + \sum_{S_1} \frac{m_i}{M+1} \log \frac{M_1}{m_i}$$
$$= 1 + \sum_{S} \frac{m_i}{M+1} \log \frac{1}{m_i} + \frac{M_0}{M+1} \log M_0 + \frac{M_1}{M+1} \log M_1$$
$$= \sum_{S} \frac{m_i}{M+1} \log \frac{M+1}{m_i} + 1 + \log \frac{1}{M+1} + \frac{M_0}{M+1} \log M_0 + \frac{M_1}{M+1} \log M_1$$
$$= H(S) + 1 + \log \frac{1}{M+1} + \frac{M_0}{M+1} \log M_0 + \frac{M_1}{M+1} \log M_1.$$

To complete the proof of the theorem, that $A_{M+1} \ge H(S)$, we have to show that

$$1 + \log \frac{1}{M+1} + \frac{M_0}{M+1} \log M_0 + \frac{M_1}{M+1} \log M_1 \ge 0.$$

Write

$$x = \frac{M_0}{M+1}, \quad \text{and hence} \quad 1 - x = \frac{M_1}{M+1},$$

since $M_0 + M_1 = M + 1$. Then (substitute $M_0 = x(M+1)$ and $M_1 = (1-x)(M+1)$; the $\log(M+1)$ terms cancel against $\log \frac{1}{M+1}$) what we need is the inequality

$$1 + x \log x + (1 - x) \log(1 - x) \ge 0, \quad \text{for } 0 < x < 1.$$

This is a calculus problem — the same one as finding the maximum of the entropy of a two-symbol source. The function has a unique minimum when $x = 1/2$, and its value there is zero. The induction is complete, and so is the proof of Shannon's source coding theorem.
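As a numerical check of that last inequality (a sketch of mine), the function $1 + x \log_2 x + (1 - x)\log_2(1 - x)$ is nonnegative on $(0, 1)$ and bottoms out at zero at $x = 1/2$:

```python
import math

def f(x: float) -> float:
    """1 + x*log2(x) + (1-x)*log2(1-x): one minus the binary entropy."""
    return 1 + x * math.log2(x) + (1 - x) * math.log2(1 - x)

xs = [k / 1000 for k in range(1, 1000)]
values = [f(x) for x in xs]
print(min(values))                       # 0.0, attained at x = 0.5
print(all(v >= -1e-12 for v in values))  # True: nonnegative up to rounding
print(f(0.5))                            # exactly 0.0
```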

Crossing the Channel

Shannon's greatest contribution was to apply his measure of information to the question of communication in the presence of noise. In fact, in Shannon's theory a communication channel can be abstracted to a system of inputs and outputs, where the outputs depend probabilistically on the inputs. Noise is present in any real system, and its effects are felt in what happens, probabilistically, to the inputs when they become outputs.

In the early days (the early 1940's) it was naturally assumed that increasing the transmission rate (now thought of in terms of bits per second, though they didn't use bits then) across a given communications channel increased the probability of errors in transmission. Shannon proved that this was not true, provided that the transmission rate was below what he termed the channel capacity. One way of stating Shannon's theorem is:

Shannon's Channel Coding Theorem. A channel with capacity C is capable, with suitable coding, of transmitting at any rate less than C bits per symbol with vanishingly small probability of error. For rates greater than C the probability of error cannot be made arbitrarily small.

There's a lot to define here, and we have to limit ourselves to just that, the definitions. This is not an easy theorem, and the proof is not constructive, i.e., Shannon doesn't tell us how to find the coding scheme, only that a coding scheme exists.

The Channel: Mutual Information

Here's the set-up. Suppose we have messages $s_1, s_2, \dots, s_N$ to select from, and send, at one end of a communication channel, and messages $r_1, r_2, \dots, r_M$ that are received at the other end. We need not have $N = M$. At the one end, the $s_j$'s are selected according to some probability distribution; say the probability that $s_j$ is selected is $P(s_j)$. Shannon's idea was to model the abstract relationship between the two ends of a channel as a set of conditional probabilities. So let's say a few words about that.

Conditional Probabilities and Bayes' Theorem. Conditional probabilities are introduced to deal with the fact that the occurrence of one event may influence the occurrence of another. We use the notation

$$P(A \mid B) = \text{The probability that A occurs given that B has occurred.}$$

This is often read simply as "the probability of A given B." This is not the same as the probability of A and B occurring. For conditional probability we watch for B and then look for A. For example,

A = The sidewalk is wet.
B = It is raining.

Evidently $P(A \mid B)$ is not the same as $P(A, B)$ (meaning the probability of A and B). It's also evident that $P(A \mid B) \ne P(B \mid A)$.

A formula for the conditional probability $P(A \mid B)$ is

$$P(A \mid B) = \frac{P(A, B)}{P(B)}.$$

One way to think of this is as follows. Imagine doing an experiment $N$ times (a large number) where both A and B can be observed outcomes. Then the probability that A occurs given that B has occurred is approximately the fraction of times that A occurs in the runs of the experiment when B has occurred, i.e.,

$$P(A \mid B) \approx \frac{\text{Number of occurrences of A and B}}{\text{Number of occurrences of B}}.$$

Now write this as

$$P(A \mid B) \approx \frac{\text{Number of occurrences of A and B}}{\text{Number of occurrences of B}}
= \frac{(\text{Number of occurrences of A and B})/N}{(\text{Number of occurrences of B})/N}
\approx \frac{P(A \text{ and } B)}{P(B)}.$$

If A and B are independent events, then $P(A \text{ and } B) = P(A)P(B)$ and so

$$P(A \mid B) = \frac{P(A \text{ and } B)}{P(B)} = P(A),$$

which makes sense: if A and B are independent, and you're A, who cares whether B happened. Interchanging A and B, we also then have

$$P(B \mid A) = \frac{P(B, A)}{P(A)}.$$

We can also write these formulas as

$$P(A, B) = P(A \mid B)P(B), \quad P(B, A) = P(B \mid A)P(A).$$

But we do have the symmetry $P(A, B) = P(B, A)$, and so

$$P(A \mid B)P(B) = P(B \mid A)P(A),$$

or

$$\frac{P(A \mid B)}{P(A)} = \frac{P(B \mid A)}{P(B)}.$$

This is known as Bayes' theorem. It's a workhorse in probability problems.
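Here is a small numerical illustration of the conditional probability formula and Bayes' theorem (a sketch of mine; the rain/sidewalk numbers are made up).

```python
# Made-up joint probabilities for A = "sidewalk is wet", B = "it is raining".
P_A_and_B = 0.28   # raining and wet
P_B = 0.30         # raining
P_A = 0.35         # wet (rain or sprinklers)

P_A_given_B = P_A_and_B / P_B          # P(A|B) = P(A,B) / P(B)
P_B_given_A = P_A_and_B / P_A          # P(B|A) = P(B,A) / P(A)
print(P_A_given_B, P_B_given_A)        # about 0.933 and 0.8: not the same

# Bayes' theorem recovers P(A|B) from P(B|A) and the two marginals.
print(P_B_given_A * P_A / P_B)         # about 0.933, same as P_A_given_B
```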

The Channel as Conditional Probabilities. Once again, we have a set of messages to send, $s_1, s_2, \dots, s_N$, and a set of messages that are received, $r_1, r_2, \dots, r_M$. To specify the channel is to specify the conditional probabilities

$$P(r_k \mid s_j) = \text{Probability that } r_k \text{ was received given that } s_j \text{ was sent.}$$

According to the formula for conditional probabilities,

$$P(r_k \mid s_j) = \frac{P(r_k, s_j)}{P(s_j)}.$$

Think of this as answering the question: given the input $s_j$, how probable is it that the output $r_k$ results?

You might find it helpful to think of this characterization of the channel as an $M \times N$ matrix. That is, write

$$P = \begin{pmatrix}
P(r_1 \mid s_1) & P(r_1 \mid s_2) & \cdots & P(r_1 \mid s_N) \\
P(r_2 \mid s_1) & P(r_2 \mid s_2) & \cdots & P(r_2 \mid s_N) \\
\vdots & \vdots & & \vdots \\
P(r_M \mid s_1) & P(r_M \mid s_2) & \cdots & P(r_M \mid s_N)
\end{pmatrix}$$

The $k$-th row,

$$\begin{pmatrix} P(r_k \mid s_1) & P(r_k \mid s_2) & P(r_k \mid s_3) & \cdots & P(r_k \mid s_N) \end{pmatrix},$$

is all about what might have led to the received message $r_k$: it might have come from $s_1$ with probability $P(r_k \mid s_1)$; it might have come from $s_2$ with probability $P(r_k \mid s_2)$; it might have come from $s_3$ with probability $P(r_k \mid s_3)$, and so on. The message $r_k$ had to have come from somewhere — one of the $s_j$ was sent — so the conditional probabilities of the possible causes satisfy

$$\sum_{j=1}^{N} P(s_j \mid r_k) = 1.$$

(In the other direction, whatever is sent is received as something, so each column of the matrix sums to 1: $\sum_{k=1}^{M} P(r_k \mid s_j) = 1$.)

If $s_j$ and $r_k$ are independent, then

$$P(r_k \mid s_j) = \frac{P(r_k, s_j)}{P(s_j)} = \frac{P(r_k)P(s_j)}{P(s_j)} = P(r_k),$$

and the channel matrix looks like

$$P = \begin{pmatrix}
P(r_1) & P(r_1) & \cdots & P(r_1) \\
P(r_2) & P(r_2) & \cdots & P(r_2) \\
\vdots & \vdots & & \vdots \\
P(r_M) & P(r_M) & \cdots & P(r_M)
\end{pmatrix}$$

What's the channel matrix for the binary symmetric channel? Here $\{s_1, s_2\} = \{0, 1\}$ and $\{r_1, r_2\}$ likewise is $\{0, 1\}$, and

$$P = \begin{pmatrix} p & 1-p \\ 1-p & p \end{pmatrix}$$
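A small sketch (mine) of the binary symmetric channel matrix in code: its columns sum to 1, and Bayes' theorem turns the channel probabilities $P(r \mid s)$ into posteriors $P(s \mid r)$ for an assumed input distribution.

```python
# Binary symmetric channel: a sent bit comes through unchanged with probability p.
p = 0.9
channel = [[p, 1 - p],      # row k: P(r = k | s = 0), P(r = k | s = 1)
           [1 - p, p]]

# Each column sums to 1: whatever is sent is received as something.
for j in range(2):
    print(sum(channel[k][j] for k in range(2)))   # 1.0 (up to rounding)

# Assume (for illustration) the inputs are sent with P(s=0) = 0.7, P(s=1) = 0.3.
prior = [0.7, 0.3]
P_r = [sum(channel[k][j] * prior[j] for j in range(2)) for k in range(2)]

# Bayes: P(s=j | r=k) = P(r=k | s=j) * P(s=j) / P(r=k).
posterior = [[channel[k][j] * prior[j] / P_r[k] for j in range(2)]
             for k in range(2)]
print(P_r)                    # output distribution
print(posterior[0])           # P(s=0 | r=0), P(s=1 | r=0); the row sums to 1
```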

Probability in, probability out. The $s_j$'s are selected with some probability $P(s_j)$. Consequently, the received messages also occur with some probability, say $P(r_k)$. One can show (via the conditional probability formulas above) that this is given by

$$\begin{pmatrix} P(r_1) \\ P(r_2) \\ \vdots \\ P(r_M) \end{pmatrix}
= \begin{pmatrix}
P(r_1 \mid s_1) & P(r_1 \mid s_2) & \cdots & P(r_1 \mid s_N) \\
P(r_2 \mid s_1) & P(r_2 \mid s_2) & \cdots & P(r_2 \mid s_N) \\
\vdots & \vdots & & \vdots \\
P(r_M \mid s_1) & P(r_M \mid s_2) & \cdots & P(r_M \mid s_N)
\end{pmatrix}
\begin{pmatrix} P(s_1) \\ P(s_2) \\ \vdots \\ P(s_N) \end{pmatrix}$$

Mutual Information and Entropy. Next let's look at things from the receiving end. Here the relevant quantity is

$$P(s_j \mid r_k) = \text{The probability that } s_j \text{ was sent given that } r_k \text{ was received.}$$

Now, the message $s_j$ had an a priori probability $P(s_j)$ of being selected. On receiving $r_k$ there is thus a change in status of the probability associated with $s_j$, from an a priori $P(s_j)$ to an a posteriori $P(s_j \mid r_k)$ (after reception of $r_k$). By the same token, according to how we think of information (quantitatively), we can say that there has been a change of status of the information associated with $s_j$ on receiving $r_k$, from an a priori $\log(1/P(s_j))$ to an a posteriori $\log(1/P(s_j \mid r_k))$. The former, $\log(1/P(s_j))$, is what we've always called the information associated with $s_j$; the latter, $\log(1/P(s_j \mid r_k))$, is something we haven't used or singled out, but it's along the same lines. What's really important here is the change in the information associated with $s_j$, and so we set

$$I(s_j, r_k) = \log \frac{1}{P(s_j)} - \log \frac{1}{P(s_j \mid r_k)} = \log \frac{P(s_j \mid r_k)}{P(s_j)},$$

and call this the mutual information of $s_j$ given $r_k$. It is a measure of the (amount of) information the channel transmits about $s_j$ if $r_k$ is received.

If $s_j$ and $r_k$ are independent, then $P(s_j \mid r_k) = P(s_j)$. Intuitively, there has been no change of status in the probability associated with $s_j$. To say it differently, there has been nothing more learned about $s_j$ as a result of receiving $r_k$, or there has been no change in the information associated with $s_j$ as a result of receiving $r_k$. In terms of mutual information, this is exactly the statement that $I(s_j, r_k) = 0$ — the mutual information is zero. The mutual information is only positive when $P(s_j \mid r_k) > P(s_j)$, i.e., when we are more certain of $s_j$ by virtue of having received $r_k$.

Just as the entropy of a source proved to be a more widely useful concept than the information of a particular message, here too it is useful to define a system mutual information by finding an average of the mutual informations of the individual messages. This depends on all of the source messages, S, and all of the received messages, R. It can be expressed in terms of the entropy $H(S)$ of the source S, the entropy $H(R)$ of the received messages R, both of which use our usual definition of entropy, and a conditional entropy $H(S \mid R)$, which is something new. For completeness, the conditional entropy is defined by

$$H(S \mid R) = \sum_{j,k} P(s_j \text{ and } r_k) \log \frac{1}{P(s_j \mid r_k)}.$$

This isn't so hard to motivate, but enough is enough — it shouldn't be hard to believe that there's something like a conditional entropy — and let's just use it to define the system mutual information, which is what we really want. This is

$$I(S, R) = H(S) - H(S \mid R).$$
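Continuing that numerical sketch (mine, with the same assumed $p = 0.9$ channel and a 0.7/0.3 input distribution), here is the system mutual information computed as $H(S) - H(S \mid R)$.

```python
import math

p = 0.9                      # probability a bit crosses the channel unchanged
prior = [0.7, 0.3]           # assumed input distribution P(s)
channel = [[p, 1 - p], [1 - p, p]]            # channel[k][j] = P(r=k | s=j)

joint = [[channel[k][j] * prior[j] for j in range(2)] for k in range(2)]
P_r = [sum(joint[k]) for k in range(2)]       # output distribution P(r)

def entropy(probs):
    return sum(q * math.log2(1 / q) for q in probs if q > 0)

H_S = entropy(prior)

# Conditional entropy H(S|R) = sum_{j,k} P(s_j, r_k) * log2( 1 / P(s_j | r_k) ),
# with P(s_j | r_k) = P(s_j, r_k) / P(r_k).
H_S_given_R = sum(joint[k][j] * math.log2(P_r[k] / joint[k][j])
                  for k in range(2) for j in range(2) if joint[k][j] > 0)

print(H_S, H_S_given_R, H_S - H_S_given_R)    # I(S,R) = H(S) - H(S|R)
```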

This does look very much like an averaged form of the mutual information above. In fact, one can show

$$I(S, R) = \sum_{j,k} P(r_k, s_j) \log \frac{P(r_k, s_j)}{P(r_k)P(s_j)}.$$

And if the mutual information $I(s_j, r_k)$ is supposed to measure the amount of information the channel transmits about $s_j$ given that $r_k$ was received, then $I(S, R)$ should be a measure of the amount of information (on average) of all the source messages S that the channel transmits, given all the received messages R. It's a measure of the change of status of the entropy of the source due to the conditional probabilities between the source messages and received messages — the difference between the uncertainty in the source before transmission and the uncertainty in the source after the messages have been received.

Channel Capacity. So now, finally, what is the channel capacity? It's the most information a channel can ever transmit. This Shannon defines to be

$$C = \max I(S, R),$$

where the maximum is taken over the possible probabilities for the source messages S. (That is, the maximum is taken over all possible ways, in terms of probabilities, of selecting the source messages.) It is in terms of this that Shannon proved his famous channel coding theorem. We state it again:

Shannon's Channel Coding Theorem. A channel with capacity C is capable, with suitable coding, of transmitting at any rate less than C bits per symbol with vanishingly small probability of error. For rates greater than C the probability of error cannot be made arbitrarily small.

Whew! I must report that we do not have the tools available for a proof.

An example: The symmetric binary channel

We send 0's and 1's, and only 0's and 1's. In a perfect world there is never an error in transmission. In an imperfect world, errors might occur in the process of sending our message: a 1 becomes a 0, or vice versa. Let's say that, with probability $p$, a transmitted 1 is received as a 1. Then, of course, with probability $1 - p$ a transmitted 1 is received as a 0. Symmetrically, let's say that the same thing applies to transmitting a 0: with probability $p$ a transmitted 0 is received as a 0, and with probability $1 - p$ a transmitted 0 is received as a 1. This describes the channel completely. It's called a binary symmetric channel, and it's usually represented schematically by a crossover diagram: the inputs 0 and 1 pass straight across to the same output with probability $p$ and cross over to the other output with probability $1 - p$.

This is a case where the channel capacity can be computed exactly, and the result is

$$C = 1 - H,$$

where $H = p \log(1/p) + (1-p) \log(1/(1-p))$ is the entropy of a two-message source with probabilities $p$ and $1 - p$, as computed earlier. The maximum of the mutual information between the sent and received messages (as in the definition of capacity) is attained when the two symbols are equally probable. That's probability 1/2, when 0 and 1 each have information content 1. It's as if, in a perfect world, the 0 and 1 were each worth a full bit before we sent them, but the noise in the channel, measured by the probability of error in transmission, takes away $H$ bits, leaving us with only $1 - H$ bits worth of information in transmitting each 0 or 1. That's a limit on the reliability of the channel to transmit information. Even this is not a trivial calculation, however.
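To close the loop numerically (a sketch of mine), here the mutual information of a binary symmetric channel is maximized by brute force over input distributions; the maximum sits at the uniform input and agrees with $1 - H$.

```python
import math

def entropy(probs):
    """Entropy in bits."""
    return sum(q * math.log2(1 / q) for q in probs if q > 0)

def mutual_information(prior, p):
    """I(S,R) = sum_{j,k} P(r_k, s_j) log2( P(r_k, s_j) / (P(r_k) P(s_j)) )
    for a binary symmetric channel with correct-transmission probability p,
    driven by the input distribution 'prior'."""
    channel = [[p, 1 - p], [1 - p, p]]            # channel[k][j] = P(r=k | s=j)
    joint = [[channel[k][j] * prior[j] for j in range(2)] for k in range(2)]
    P_r = [sum(joint[k]) for k in range(2)]
    return sum(joint[k][j] * math.log2(joint[k][j] / (P_r[k] * prior[j]))
               for k in range(2) for j in range(2) if joint[k][j] > 0)

p = 0.9                                  # assumed correct-transmission probability
# Brute-force search over input distributions P(s=0) = q, P(s=1) = 1 - q.
best_I, best_q = max((mutual_information([q, 1 - q], p), q)
                     for q in (i / 1000 for i in range(1, 1000)))
print(best_q, best_I)                    # maximum at q = 0.5
print(1 - entropy([p, 1 - p]))           # C = 1 - H, about 0.531: they agree
```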