Block 2: Introduction to Information Theory

Francisco J. Escribano, April 26, 2015

Table of contents
1. Motivation
2. Entropy
3. Source coding
4. Mutual information
5. Discrete channels
6. Entropy and mutual information for continuous random variables (RVs)
7. Channel capacity theorem
8. Conclusions
9. References

Motivation

Information Theory is a discipline established during the second half of the 20th century. It relies on solid mathematical foundations [1, 2, 3, 4]. It tries to address two basic questions:
- To what extent can we compress data for a more efficient usage of the limited communication resources? Entropy.
- What is the largest possible data transfer rate for given resources and conditions? Channel capacity.
The key concepts in Information Theory are entropy, $H(X)$, and mutual information, $I(X;Y)$, where $X$ and $Y$ are random variables of some kind.

Up to the 1940s, it was common wisdom in telecommunications that the error rate increased with increasing data rate. Claude Shannon demonstrated that error-free transmission may be possible under certain conditions. Information Theory provides strict bounds for any communication system:
- Maximum data compression: minimum $I(X;\hat{X})$.
- Maximum data transfer rate: maximum $I(X;Y)$.
Any given communication system works between these limits. The mathematics behind these results is not always constructive, but it provides guidelines to design algorithms that improve communications given a set of available resources. The resources in this context are known parameters such as available transmission power, available bandwidth, signal-to-noise ratio and the like.

Entropy

Consider a discrete memoryless data source that issues a symbol from a given set, chosen randomly and independently from the previous and subsequent ones:

$$\zeta = \{s_0, \ldots, s_{K-1}\}, \quad P(S = s_k) = p_k, \quad k = 0, 1, \ldots, K-1,$$

where $K$ is the source radix. The information quantity is a random variable defined as $I(s_k) = \log_2\left(\frac{1}{p_k}\right)$, with properties:
- $I(s_k) = 0$ if $p_k = 1$
- $I(s_k) > I(s_i)$ if $p_k < p_i$
- $I(s_k) \geq 0$ for $0 \leq p_k \leq 1$
- $I(s_l, s_k) = I(s_l) + I(s_k)$ (for independent symbols)

The source entropy is a measure of its information content, defined as

$$H(\zeta) = E_{\{p_k\}}[I(s_k)] = \sum_{k=0}^{K-1} p_k I(s_k) = \sum_{k=0}^{K-1} p_k \log_2\left(\frac{1}{p_k}\right).$$

Properties:
- If $p_j = 1$ and $p_k = 0$ for all $k \neq j$, then $H(\zeta) = 0$.
- $0 \leq H(\zeta) \leq \log_2(K)$.
- If $p_k = \frac{1}{K}$ for $k = 0, \ldots, K-1$, then $H(\zeta) = \log_2(K)$.

There is no potential information (uncertainty) in deterministic ("degenerate") sources.

E.g., for a binary source: $H(p) = -p \log_2(p) - (1-p)\log_2(1-p)$.

Figure 1: Entropy of a binary memoryless source (maximum of 1 bit at $p = 0.5$).
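A minimal sketch in Python (NumPy assumed available) of the entropy definition above, evaluated for the uniform and binary cases of Figure 1; the function name `entropy` is just illustrative:

```python
import numpy as np

def entropy(p):
    """Entropy H (in bits) of a discrete source with probability vector p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # convention: 0 * log2(1/0) = 0
    return float(np.sum(p * np.log2(1.0 / p)))

# A uniform source of radix K reaches the maximum log2(K)
K = 4
print(entropy([1.0 / K] * K))          # -> 2.0 bits/symbol

# Binary memoryless source: H(p) = -p log2(p) - (1-p) log2(1-p), as in Figure 1
for p in (0.1, 0.5, 0.9):
    print(p, entropy([p, 1.0 - p]))    # maximum of 1 bit at p = 0.5
```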

Source coding

The field of source coding addresses the issues related to handling the output data from a given source, from the point of view of Information Theory. One of the main issues is data compression, its theoretical limits and the related practical algorithms. The most important prerequisite in communications is to keep data integrity: any transformation has to be fully invertible. In related fields, like cryptography, some non-invertible algorithms are of utmost interest (e.g. hash functions). From the point of view of data communication (sequences of data symbols), it is useful to define the $n$-th extension of the source $\zeta$, denoted $\zeta^n$, as the source obtained by considering $n$ successive symbols from $\zeta$. Given that the sequence is independent and identically distributed (iid), for any $n$:

$$H(\zeta^n) = n\, H(\zeta)$$

We may choose to represent the data from the source by assigning to each symbol $s_k$ a corresponding codeword (binary, in our case). The aim of this source coding process is to represent the source symbols more efficiently. We only address here variable-length, binary, invertible codes: the codewords are unique blocks of binary symbols of length $l_k$, one for each symbol $s_k$. The correspondence between codewords and original data constitutes a code.

Average codeword length: $\bar{L} = E[l_k] = \sum_{k=0}^{K-1} p_k l_k$.

Coding efficiency: $\eta = \frac{L_{\min}}{\bar{L}}$.

Shannon's source coding theorem

This theorem establishes the limits for lossless data compression [5]: $N$ iid random variables, each with entropy $H(\zeta)$, can be compressed into $N\,H(\zeta)$ bits with negligible risk of information loss as $N \to \infty$. Conversely, if they are compressed into fewer than $N\,H(\zeta)$ bits, it is certain that some information will be lost. In practical terms, for a single random variable, this means

$$\bar{L}_{\min} \geq H(\zeta),$$

and, therefore, the coding efficiency can be defined as $\eta = \frac{H(\zeta)}{\bar{L}}$. Like other results in Information Theory, this theorem provides the limits, but does not tell us how to actually reach them.

Example of source code: Huffman coding

Huffman coding provides a practical algorithm to perform source coding within the limits shown. It is an instance of a class of codes called prefix codes: no binary word within the codeset is the prefix of any other one. Properties:
- Unique coding.
- Instantaneous decoding.
- The lengths $l_k$ meet the Kraft-McMillan inequality [2]: $\sum_{k=0}^{K-1} 2^{-l_k} \leq 1$.
- $\bar{L}$ is bounded by $H(\zeta) \leq \bar{L} < H(\zeta) + 1$, with $H(\zeta) = \bar{L}$ if and only if $p_k = 2^{-l_k}$.

Meeting the Kraft-McMillan inequality guarantees that a prefix code with the given $l_k$ can be constructed.

Huffman coding

Algorithm to perform Huffman coding:
1. List the symbols $s_k$ in a column, in order of decreasing probabilities.
2. Combine the probabilities of the last 2 symbols: the probabilities are added into a new dummy compound value/symbol.
3. Reorder the probability set in an adjacent column, placing the new dummy symbol probability as high as possible and retiring the values involved.
4. In the process of moving probabilities to the right, keep the values of the unaffected symbols (making room for the compound value if needed), but assign a 0 to one of the symbols affected and a 1 to the other (top or bottom, but always keep the same criterion along the process).
5. Start the process afresh in the new column.
6. When only two probabilities remain, assign the last 0 and 1 and stop.
7. Write the binary codeword corresponding to each original symbol by tracing back the trajectory of each of the original symbols and the dummy symbols they take part in, from the last column towards the initial one.
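The tabular procedure above is commonly implemented with a priority queue: repeatedly merge the two least probable entries, assigning a 0 to one branch and a 1 to the other. A minimal heap-based sketch in Python (names are illustrative; tie-breaking may yield different, but equally optimal, codewords):

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build an optimal binary prefix code for a dict {symbol: probability}.
    Heap-based equivalent of the tabular procedure above."""
    tie = count()  # tie-breaker so the heap never has to compare dicts
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {s: "0" for s in probs}
    while len(heap) > 1:
        p0, _, code0 = heapq.heappop(heap)   # least probable branch: bit '0'
        p1, _, code1 = heapq.heappop(heap)   # next least probable: bit '1'
        merged = {s: "0" + c for s, c in code0.items()}
        merged.update({s: "1" + c for s, c in code1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

probs = {"a": 0.4, "b": 0.2, "c": 0.2, "d": 0.1, "e": 0.1}
print(huffman_code(probs))   # one optimal code; exact bits depend on tie-breaking
```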

Huffman coding example

Figure 2: Example of Huffman coding: assigning the final codeword patterns proceeds from right to left.

To characterize the resulting code, it is important to calculate:

$$H(\zeta); \quad \bar{L} = \sum_{k=0}^{K-1} p_k l_k; \quad \eta = \frac{H(\zeta)}{\bar{L}}; \quad \sigma^2 = \sum_{k=0}^{K-1} p_k \left(l_k - \bar{L}\right)^2$$
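A small sketch (Python with NumPy, illustrative names) computing the four characterization quantities above from the symbol probabilities and codeword lengths:

```python
import numpy as np

def code_metrics(probs, lengths):
    """Return H(zeta), average length L, efficiency eta and length variance
    sigma^2 for symbol probabilities p_k and codeword lengths l_k."""
    p = np.asarray(probs, dtype=float)
    l = np.asarray(lengths, dtype=float)
    H = np.sum(p * np.log2(1.0 / p))
    L = np.sum(p * l)
    return H, L, H / L, np.sum(p * (l - L) ** 2)

# Lengths consistent with the Huffman sketch above (0.4, 0.2, 0.2, 0.1, 0.1)
H, L, eta, var = code_metrics([0.4, 0.2, 0.2, 0.1, 0.1], [2, 2, 2, 3, 3])
print(H, L, eta, var)   # ~2.12 bits, 2.2 bits, eta ~0.96, sigma^2 = 0.16
```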

Mutual information

Joint entropy

We extend the concept of entropy to two random variables (RVs). Two or more RVs are needed when analyzing communication channels from the point of view of Information Theory; these RVs can also be seen as a random vector. The underlying concept is the characterization of channel input versus channel output, and what we can learn about the former by observing the latter.

Joint entropy of two RVs:

$$H(X, Y) = \sum_{x \in X} \sum_{y \in Y} p(x, y) \log_2\left(\frac{1}{p(x, y)}\right) = E_{p(x,y)}\left[\log_2\left(\frac{1}{P(X, Y)}\right)\right]$$

Conditional entropy

Conditional entropy of two RVs:

$$H(Y|X) = \sum_{x \in X} p(x)\, H(Y|X = x) = \sum_{x \in X} \sum_{y \in Y} p(x)\, p(y|x) \log_2\left(\frac{1}{p(y|x)}\right) = \sum_{x \in X} \sum_{y \in Y} p(x, y) \log_2\left(\frac{1}{p(y|x)}\right) = E_{p(x,y)}\left[\log_2\left(\frac{1}{P(Y|X)}\right)\right]$$

$H(Y|X)$ is a measure of the uncertainty in $Y$ once $X$ is known.

Chain rule

There is an important expression that relates the joint and conditional entropy of two RVs:

$$H(X, Y) = H(X) + H(Y|X).$$

The expression points towards the following common-wisdom result: the joint knowledge about $X$ and $Y$ is the knowledge about $X$ plus the information in $Y$ not related to $X$. Proof:

$$p(x, y) = p(x)\, p(y|x); \quad \log_2(p(x, y)) = \log_2(p(x)) + \log_2(p(y|x));$$
$$E_{p(x,y)}[\log_2(P(X, Y))] = E_{p(x,y)}[\log_2(P(X))] + E_{p(x,y)}[\log_2(P(Y|X))],$$

and changing sign on both sides yields the chain rule. Corollary: $H(X, Y|Z) = H(X|Z) + H(Y|X, Z)$.

Relative entropy

$H(X)$ measures the quantity of information needed to describe the RV $X$ on average. The relative entropy $D(p\|q)$ measures the increase in the information needed to describe $X$ when it is characterized by means of a distribution $q(x)$ instead of $p(x)$:
- $X \sim p(x)$: $H(X)$ bits.
- $X \sim q(x)$: $H(X) + D(p\|q)$ bits.

Definition of relative entropy (also known as Kullback-Leibler divergence or, improperly, "distance"):

$$D(p\|q) = \sum_{x \in X} p(x) \log_2\left(\frac{p(x)}{q(x)}\right) = E_{p(x)}\left[\log_2\left(\frac{P(X)}{Q(X)}\right)\right]$$

Note the conventions: $\lim_{x \to 0} x \log(x) = 0$; $0 \log\left(\frac{0}{0}\right) = 0$; $p \log\left(\frac{p}{0}\right) = \infty$.

Relative entropy and mutual information

Properties of relative entropy:
1. $D(p\|q) \geq 0$.
2. $D(p\|q) = 0 \iff p(x) = q(x)$.
3. It is not symmetric; therefore, it is not a true distance from the metric point of view.

Mutual information of two RVs:

$$I(X; Y) = H(X) - H(X|Y) = \sum_{x \in X} \sum_{y \in Y} p(x, y) \log_2\left(\frac{p(x, y)}{p(x)\, p(y)}\right) = D\left(p(x, y)\,\|\,p(x)\, p(y)\right) = E_{p(x,y)}\left[\log_2\left(\frac{P(X, Y)}{P(X)\, P(Y)}\right)\right].$$

The mutual information between $X$ and $Y$ is the information in $X$, minus the information in $X$ not related to $Y$.
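As an illustration, the following Python sketch (NumPy assumed; names are illustrative) computes $H(X)$, $H(X|Y)$ and $I(X;Y)$ from a joint pmf given as a matrix, checking that the relative-entropy form matches $H(X) - H(X|Y)$:

```python
import numpy as np

def entropies(pxy):
    """From a joint pmf matrix pxy[i, j] = P(X=x_i, Y=y_j), return
    H(X), H(X|Y) and I(X;Y) in bits."""
    pxy = np.asarray(pxy, dtype=float)
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    mask = pxy > 0                      # skip zero-probability pairs
    HX = np.sum(px[px > 0] * np.log2(1.0 / px[px > 0]))
    # H(X|Y) = sum_{x,y} p(x,y) log2( p(y) / p(x,y) )
    py_b = np.broadcast_to(py, pxy.shape)
    HX_Y = np.sum(pxy[mask] * np.log2(py_b[mask] / pxy[mask]))
    # Mutual information as the relative entropy D(p(x,y) || p(x) p(y))
    I = np.sum(pxy[mask] * np.log2(pxy[mask] / np.outer(px, py)[mask]))
    return HX, HX_Y, I

# Noisy binary example: equiprobable input through a BSC with p = 0.1
p = 0.1
pxy = 0.5 * np.array([[1 - p, p], [p, 1 - p]])
HX, HX_Y, I = entropies(pxy)
print(HX, HX_Y, I, HX - HX_Y)   # I ~ 0.531 bits, equal to H(X) - H(X|Y)
```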

Properties of mutual information:
1. Symmetry: $I(X; Y) = I(Y; X)$.
2. Non-negativity: $I(X; Y) \geq 0$.

Figure 3: Relationship between entropy and mutual information ($H(X, Y)$, $H(X|Y)$, $I(X; Y)$, $H(Y|X)$, $H(X)$, $H(Y)$).

Discrete channels

Discrete memoryless channels

A communication channel is an input/output system where the output is a probabilistic function of the input. Such a channel consists of:
- An input alphabet $X = \{x_0, x_1, \ldots, x_{J-1}\}$, corresponding to the RV $X$.
- An output alphabet $Y = \{y_0, y_1, \ldots, y_{K-1}\}$, corresponding to the RV $Y$, a noisy version of the RV $X$.
- A set of transition probabilities linking input and output, $\{p(y_k|x_j)\}$, $k = 0, 1, \ldots, K-1$, $j = 0, 1, \ldots, J-1$, with $p(y_k|x_j) = P(Y = y_k \mid X = x_j)$.

$X$ and $Y$ are finite and discrete. The channel is memoryless, since the output symbol depends only on the current input symbol.

Channel matrix $P$, of size $J \times K$:

$$P = \begin{pmatrix} p(y_0|x_0) & p(y_1|x_0) & \cdots & p(y_{K-1}|x_0) \\ p(y_0|x_1) & p(y_1|x_1) & \cdots & p(y_{K-1}|x_1) \\ \vdots & \vdots & \ddots & \vdots \\ p(y_0|x_{J-1}) & p(y_1|x_{J-1}) & \cdots & p(y_{K-1}|x_{J-1}) \end{pmatrix}$$

The channel is said to be symmetric when each column is a permutation of any other, and the same holds with respect to the rows. Important property:

$$\sum_{k=0}^{K-1} p(y_k|x_j) = 1, \quad j = 0, 1, \ldots, J-1.$$

The output $Y$ is probabilistically determined by the input (a priori) probabilities and the channel matrix:
- $\vec{p}_X = (p(x_0), p(x_1), \ldots, p(x_{J-1}))$, with $p(x_j) = P(X = x_j)$,
- $\vec{p}_Y = (p(y_0), p(y_1), \ldots, p(y_{K-1}))$, with $p(y_k) = P(Y = y_k)$,
- $p(y_k) = \sum_{j=0}^{J-1} p(y_k|x_j)\, p(x_j)$, $k = 0, 1, \ldots, K-1$.

Therefore, $\vec{p}_Y = \vec{p}_X P$. When $J = K$ and $y_j$ is the correct decision when $x_j$ was sent, we can calculate the average symbol error probability as

$$P_e = \sum_{j=0}^{J-1} p(x_j) \sum_{k=0,\, k \neq j}^{K-1} p(y_k|x_j).$$

The probability of correct transmission is $1 - P_e$.
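A short Python sketch (NumPy assumed; function names are illustrative) of the relations $\vec{p}_Y = \vec{p}_X P$ and the average symbol error probability $P_e$:

```python
import numpy as np

def channel_output(px, P):
    """Output distribution p_Y = p_X P for a discrete memoryless channel with
    a priori probabilities px (length J) and J x K channel matrix P."""
    return np.asarray(px, dtype=float) @ np.asarray(P, dtype=float)

def symbol_error_probability(px, P):
    """P_e = sum_j p(x_j) sum_{k != j} p(y_k | x_j), assuming J = K and that
    y_j is the correct decision for x_j."""
    px, P = np.asarray(px, dtype=float), np.asarray(P, dtype=float)
    return float(np.sum(px * (1.0 - np.diag(P))))

# Binary symmetric channel with crossover probability 0.1 and non-uniform input
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
px = np.array([0.7, 0.3])
print(channel_output(px, P))             # [0.66, 0.34]
print(symbol_error_probability(px, P))   # 0.1
```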

Example of a discrete memoryless channel: modulation with hard decision, where the transition probabilities $p(y_k|x_j)$ link transmitted symbols $x_j$ and decided symbols $y_k$.

Figure 4: 16-QAM transmitted constellation. Figure 5: 16-QAM received constellation with noise.

Discrete memoryless channels

Example of a non-symmetric channel: the binary erasure channel, typical of storage systems. Reading data in storage systems can also be modeled as sending information through a channel with given probabilistic properties. An erasure marks complete uncertainty about the symbol read. There are methods to recover from erasures, based on the principles of Information Theory.

Figure 6: Diagram showing the binary erasure channel: inputs $x_0 = 0$, $x_1 = 1$; outputs $y_0 = 0$, $y_1 = \epsilon$ (erasure), $y_2 = 1$; each input is received correctly with probability $1 - p$ and erased with probability $p$:

$$P = \begin{pmatrix} 1-p & p & 0 \\ 0 & p & 1-p \end{pmatrix}$$

Channel capacity

Mutual information depends on $P$ and $\vec{p}_X$. Characterizing the possibilities of the channel requires removing the dependency on $\vec{p}_X$. Channel capacity is defined as

$$C = \max_{\vec{p}_X} I(X; Y).$$

This is the maximum average mutual information enabled by the channel, in bits per channel use, and the best we can get out of it in terms of reliable information transfer. It only depends on the channel transition probabilities $P$. If the channel is symmetric, the distribution that maximizes $I(X; Y)$ is the uniform one (equiprobable symbols).

Channel coding is a process where controlled redundancy is added to protect data integrity. A channel-encoded information sequence has $n$ bits, encoded from a block of $k$ information bits; the code rate is calculated as $R = \frac{k}{n} \leq 1$.

Noisy-channel coding theorem

Consider a discrete source $\zeta$ emitting symbols with period $T_s$. The binary information rate of such a source is $\frac{H(\zeta)}{T_s}$ (bit/s). Consider a discrete memoryless channel, used to send coded data every $T_c$ seconds. The maximum possible data transfer rate would be $\frac{C}{T_c}$ (bit/s). The noisy-channel coding theorem states the following [5]:
- If $\frac{H(\zeta)}{T_s} \leq \frac{C}{T_c}$, there exists a coding scheme that guarantees error-free transmission (i.e. $P_e$ arbitrarily small).
- Conversely, if $\frac{H(\zeta)}{T_s} > \frac{C}{T_c}$, the communication cannot be made reliable (i.e. we cannot have a bounded $P_e$, as small as desired).

Please note that, again, the theorem is asymptotic and not constructive: it does not say how to actually reach the limit.

Noisy-channel coding theorem

Example: a binary symmetric channel. Consider a binary source $\zeta = \{0, 1\}$ with equiprobable symbols, so $H(\zeta) = 1$ info bit/channel use. The source works at a rate of $\frac{1}{T_s}$ channel uses/s, yielding $\frac{H(\zeta)}{T_s}$ info bits/s. Consider an encoder with rate $\frac{k}{n}$ info/coded bits, working at $\frac{1}{T_c}$ channel uses/s; note that $\frac{k}{n} = \frac{T_c}{T_s}$. The maximum achievable rate is $\frac{C}{T_c}$ coded bits/s. If $\frac{H(\zeta)}{T_s} = \frac{1}{T_s} \leq \frac{C}{T_c}$, we could find a coding scheme such that $P_e$ is made arbitrarily small (as small as desired). This means that an appropriate coding scheme has to meet $\frac{k}{n} \leq C$ in order to exploit the possibilities of the noisy-channel coding theorem.

Noisy-channel coding theorem

The theorem also states that, if a bit error probability of $P_b$ is acceptable, coding rates up to

$$R(P_b) = \frac{C}{1 - H(P_b)}$$

are achievable; rates greater than that cannot be achieved with the given bit error probability. In a binary symmetric channel without noise (error probability $p = 0$), it can be shown that $C = \max_{p(x)} I(X; Y) = 1$ bit/channel use.

Figure 7: Diagram showing the error-free binary symmetric channel, with

$$P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$

Noisy-channel coding theorem

In a binary symmetric channel with error probability $p \neq 0$,

$$C = 1 - H(p) = 1 - \left(p \log_2\left(\frac{1}{p}\right) + (1-p) \log_2\left(\frac{1}{1-p}\right)\right) \text{ bits/channel use}.$$

Figure 8: Diagram showing the binary symmetric channel BSC($p$), with

$$P = \begin{pmatrix} 1-p & p \\ p & 1-p \end{pmatrix}$$

In the binary erasure channel with erasure probability $p$, $C = \max_{p(x)} I(X; Y) = 1 - p$ bits/channel use.
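A minimal Python sketch (illustrative names) evaluating the two capacity formulas above:

```python
import numpy as np

def binary_entropy(p):
    """H(p) in bits, with the convention 0 * log2(1/0) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bsc_capacity(p):
    """Capacity of the binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

def bec_capacity(p):
    """Capacity of the binary erasure channel with erasure probability p."""
    return 1.0 - p

print(bsc_capacity(0.0), bsc_capacity(0.11), bsc_capacity(0.5))  # 1.0, ~0.5, 0.0
print(bec_capacity(0.2))                                         # 0.8
```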

Entropy and mutual information for continuous RVs

Differential entropy

The differential entropy (or continuous entropy) of a continuous RV $X$ with pdf $f_X(x)$ is defined as

$$h(X) = \int_{-\infty}^{\infty} f_X(x) \log_2\left(\frac{1}{f_X(x)}\right) dx.$$

It does not measure an absolute quantity of information, hence the term "differential". The differential entropy of a continuous random vector $\vec{X}$ with joint pdf $f_{\vec{X}}(\vec{x})$ is defined as

$$h(\vec{X}) = \int f_{\vec{X}}(\vec{x}) \log_2\left(\frac{1}{f_{\vec{X}}(\vec{x})}\right) d\vec{x}, \quad \vec{X} = (X_0, \ldots, X_{N-1}), \quad f_{\vec{X}}(\vec{x}) = f_{\vec{X}}(x_0, \ldots, x_{N-1}).$$

Differential entropy

For a given variance value $\sigma^2$, the Gaussian RV exhibits the largest achievable differential entropy. This means the Gaussian RV has a special place in the domain of continuous RVs within Information Theory.

Properties of differential entropy:
- It is invariant under translations: $h(X + c) = h(X)$.
- Scaling: $h(aX) = h(X) + \log_2(|a|)$; for a random vector, $h(A\vec{X}) = h(\vec{X}) + \log_2(|\det A|)$.
- For a given variance $\sigma^2$, a Gaussian RV $X$ with variance $\sigma_X^2 = \sigma^2$ and any other RV $Y$ with variance $\sigma_Y^2 = \sigma^2$ satisfy $h(X) \geq h(Y)$.
- For a Gaussian RV $X$ with variance $\sigma_X^2$: $h(X) = \frac{1}{2} \log_2\left(2\pi e \sigma_X^2\right)$.
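A small numerical check (Python with NumPy, illustrative names) of the closed-form Gaussian differential entropy and of the scaling property $h(aX) = h(X) + \log_2|a|$:

```python
import numpy as np

def gaussian_differential_entropy(sigma2):
    """h(X) = 0.5 * log2(2 pi e sigma^2), in bits, for a Gaussian RV."""
    return 0.5 * np.log2(2 * np.pi * np.e * sigma2)

# Scaling property: h(aX) = h(X) + log2|a| (the variance scales by a^2)
sigma2, a = 2.0, 3.0
lhs = gaussian_differential_entropy(a**2 * sigma2)
rhs = gaussian_differential_entropy(sigma2) + np.log2(abs(a))
print(lhs, rhs)   # both ~4.13 bits
```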

Mutual information for continuous RVs

Mutual information of two continuous RVs $X$ and $Y$:

$$I(X; Y) = \int \int f_{X,Y}(x, y) \log_2\left(\frac{f_X(x|y)}{f_X(x)}\right) dx\, dy,$$

where $f_{X,Y}(x, y)$ is the joint pdf of $X$ and $Y$, and $f_X(x|y)$ is the conditional pdf of $X$ given $Y = y$. Properties:
- Symmetry: $I(X; Y) = I(Y; X)$.
- Non-negativity: $I(X; Y) \geq 0$.
- $I(X; Y) = h(X) - h(X|Y) = h(Y) - h(Y|X)$, where

$$h(X|Y) = \int \int f_{X,Y}(x, y) \log_2\left(\frac{1}{f_X(x|y)}\right) dx\, dy$$

is the conditional differential entropy of $X$ given $Y$.

Channel capacity theorem

Continuous channel capacity

Consider a Gaussian discrete memoryless channel, described as follows:
- $x(t)$ is a stationary stochastic process, with $m_x = 0$ and bandwidth $W_x = B$ Hz.
- The process is sampled with sampling period $T_s = \frac{1}{2B}$, so the samples $X_k = x(k\, T_s)$ are continuous RVs for all $k$, with $E[X_k] = 0$.
- An RV $X_k$ is transmitted every $T_s$ seconds over a noisy channel with bandwidth $B$, during a total of $T$ seconds ($n = 2BT$ total samples).
- The channel is AWGN, adding noise samples described by RVs $N_k$ with $m_n = 0$ and $S_n(f) = \frac{N_0}{2}$, so that $\sigma_n^2 = N_0 B$.
- The received samples are statistically independent RVs, described as $Y_k = X_k + N_k$.
- The cost constraint for any maximization of the mutual information is the signal power $E[X_k^2] = S$.

Continuous channel capacity

The channel capacity is defined as

$$C = \max_{f_{X_k}(x)} \left\{ I(X_k; Y_k) : E[X_k^2] = S \right\},$$

with $I(X_k; Y_k) = h(Y_k) - h(Y_k|X_k) = h(Y_k) - h(N_k)$. The maximum is reached only if $h(Y_k)$ is maximized, and this only happens if $f_{X_k}(x)$ is Gaussian. Therefore, $C = I(X_k; Y_k)$ with $X_k$ Gaussian and $E[X_k^2] = S$:
- $E[Y_k^2] = S + \sigma_n^2$, so $h(Y_k) = \frac{1}{2} \log_2\left(2\pi e (S + \sigma_n^2)\right)$,
- $h(N_k) = \frac{1}{2} \log_2\left(2\pi e \sigma_n^2\right)$,
- $C = I(X_k; Y_k) = h(Y_k) - h(N_k) = \frac{1}{2} \log_2\left(1 + \frac{S}{\sigma_n^2}\right)$ bits/channel use.

Finally, $C$ (bits/s) $= \frac{n}{T} \cdot C$ (bits/channel use):

$$C = B \log_2\left(1 + \frac{S}{N_0 B}\right) \text{ bits/s}$$

Shannon-Hartley theorem

The Shannon-Hartley theorem states that the capacity of a bandlimited AWGN channel with bandwidth $B$ and noise power spectral density $\frac{N_0}{2}$ is

$$C = B \log_2\left(1 + \frac{S}{N_0 B}\right) \text{ bits/s}.$$

This is the highest possible information transmission rate over this analog communication channel that can be accomplished with arbitrarily small error probability. Capacity increases (almost) linearly with $B$, whereas $S$ determines only a logarithmic increase: increasing the available bandwidth has a far larger impact on capacity than increasing the transmission power. The bandlimited, power-constrained AWGN channel is a very convenient model for real-world communications.
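A minimal Python sketch of the Shannon-Hartley formula; the numbers below are illustrative, not taken from the slides:

```python
import numpy as np

def awgn_capacity(B, S, N0):
    """Shannon-Hartley: C = B log2(1 + S / (N0 B)) in bits/s, for bandwidth
    B (Hz), signal power S (W) and one-sided noise PSD N0 (W/Hz)."""
    return B * np.log2(1.0 + S / (N0 * B))

# Illustrative telephone-like channel
B = 3e3               # 3 kHz bandwidth
N0 = 1e-9             # noise PSD
S = 1000 * N0 * B     # chosen so that SNR = S / (N0 B) = 1000 (30 dB)
print(awgn_capacity(B, S, N0))                            # ~29.9 kbit/s

# At this (high) SNR, doubling bandwidth helps more than doubling power
print(awgn_capacity(2 * B, S, N0), awgn_capacity(B, 2 * S, N0))  # ~53.8 vs ~32.9 kbit/s
```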

Implications of the channel capacity theorem

Consider an ideal system where $R_b = C$. Then $S = E_b C$, where $E_b$ is the average bit energy, and

$$\frac{C}{B} = \log_2\left(1 + \frac{E_b}{N_0} \frac{C}{B}\right) \quad \Longrightarrow \quad \frac{E_b}{N_0} = \frac{2^{C/B} - 1}{C/B}.$$

If we represent the spectral efficiency $\eta = \frac{R_b}{B}$ as a function of $\frac{E_b}{N_0}$, the previous expression is an asymptotic curve on that plane marking the border between the reliable zone and the unreliable zone. This curve helps us identify the parameter set of a communication system so that it achieves reliable transmission with a given quality (measured in terms of a limited, maximum error rate). When $B \to \infty$, $\frac{E_b}{N_0} \to \ln(2) = -1.6$ dB. This limit is known as the Shannon limit for the AWGN channel. The capacity in this limit is $C = \frac{S}{N_0} \log_2(e)$.
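A short sketch (Python with NumPy, illustrative names) evaluating the bound $\frac{E_b}{N_0} = \frac{2^{C/B} - 1}{C/B}$ for a few spectral efficiencies, showing the approach to the Shannon limit of about -1.6 dB:

```python
import numpy as np

def min_ebn0_db(spectral_efficiency):
    """Minimum Eb/N0 (in dB) for reliable transmission at eta = C/B bits/s/Hz:
    Eb/N0 = (2^eta - 1) / eta."""
    eta = np.asarray(spectral_efficiency, dtype=float)
    return 10 * np.log10((2.0 ** eta - 1.0) / eta)

for eta in (0.01, 0.5, 1.0, 2.0, 4.0):
    print(eta, min_ebn0_db(eta))
# As eta -> 0 the bound tends to 10*log10(ln 2) ~ -1.59 dB (the Shannon limit)
```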

Channel capacity tradeoffs

Figure 9: Working regions as determined by the Shannon-Hartley theorem (source: www.gaussianwaves.com). No channel coding applied.

The diagram illustrates the possible tradeoffs, involving $\frac{E_b}{N_0}$, $\frac{R_b}{B}$ and $P_b$.

Channel capacity tradeoffs

$P_b$ is a required target quality and further limits the attainable zone in the spectral efficiency/SNR plane, depending on the framework chosen (modulation kind, channel coding scheme, and so on).
- For fixed spectral efficiency (fixed $\frac{R_b}{B}$), we move along a horizontal line where we manage the $P_b$ versus $\frac{E_b}{N_0}$ tradeoff.
- For fixed SNR (fixed $\frac{E_b}{N_0}$), we move along a vertical line where we manage the $P_b$ versus $\frac{R_b}{B}$ tradeoff.

Figure 10: Working regions for given transmission schemes (source: www.comtechefdata.com).

Channel capacity tradeoffs

The lower, left-hand side of the plot is the so-called power-limited region. There, the $\frac{E_b}{N_0}$ is very poor and we have to sacrifice spectral efficiency to get a given transmission quality ($P_b$). An example of this is deep space communications, where the received SNR is extremely low due to the huge free-space losses in the link; the only way to get a reliable transmission is to drop the data rate to very low values.

The upper, right-hand side of the plot is the so-called bandwidth-limited region. There, the desired spectral efficiency $\frac{R_b}{B}$ for fixed $B$ (desired data rate) is traded off against unconstrained transmission power (unconstrained $\frac{E_b}{N_0}$), under a given $P_b$. An example of this would be a terrestrial DVB transmitting station, where $\frac{R_b}{B}$ is fixed (standardized), and where the transmitting power is only limited by regulatory or technological constraints.

Conclusions

Information Theory represents a cutting-edge research field with applications in communications, artificial intelligence, data mining, machine learning, robotics... We have seen three fundamental results from Shannon's 1948 seminal work that constitute the foundations of all modern communications:
- The source coding theorem, which states the limits and possibilities of lossless data compression.
- The noisy-channel coding theorem, which states the need for channel coding techniques to achieve a given performance using constrained resources. It establishes the asymptotic possibility of error-free transmission over discrete-input, discrete-output noisy channels.
- The Shannon-Hartley theorem, which establishes the absolute (asymptotic) limits for error-free transmission over AWGN channels, and describes the different tradeoffs involved among the given resources.

All these results build the attainable working zone for practical and feasible communication systems, managing and trading off constrained resources under a given target performance (BER). The plane of $\eta = \frac{R_b}{B}$ against $\frac{E_b}{N_0}$ (under a target BER) constitutes the playground for designing and bringing into practice any communication standard. Any movement over this plane has a direct impact on business and revenues in the telco domain. When addressing practical designs in communications, these results and limits are often not explicitly heeded, but they underlie all of them. There are lots of common-practice and common-wisdom rules of thumb in the domain, stating what to use when (regarding modulations, channel encoders and so on). Nevertheless, optimizing the designs so as to profit as much as possible from all the resources at hand requires making these limits explicit.

References

Bibliography

[1] J. M. Cioffi, Digital Communications - Coding (course). Stanford University, 2010. [Online]. Available: http://web.stanford.edu/group/cioffi/book
[2] T. M. Cover and J. A. Thomas, Elements of Information Theory. New Jersey: John Wiley & Sons, Inc., 2006.
[3] D. MacKay, Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003. [Online]. Available: http://www.inference.phy.cam.ac.uk/mackay/itila/book.html
[4] S. Haykin, Communication Systems. New York: John Wiley & Sons, Inc., 2001.
[5] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, 1948. [Online]. Available: http://plan9.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf