Digital Communications III (ECE 154C)
Introduction to Coding and Information Theory

Tara Javidi
UC San Diego, Spring 2014

These lecture notes were originally developed by the late Prof. J. K. Wolf.
Outline

I. Digital Communications Block Diagram
II. Overview of ECE 154C
I: Digital Communications Block Diagram

Note that the Source Encoder converts all types of information into a stream of binary digits.

Note that the Channel Encoder, in an attempt to protect the source-coded (binary) stream, judiciously adds redundant bits.

Sometimes the output of the source decoder must be an exact replica of the information (e.g. computer data). This is called NOISELESS CODING (aka lossless compression).

Other times the output of the source decoder need only be approximately equal to the information (e.g. music, TV, speech). This is called CODING WITH DISTORTION (aka lossy compression).
Overview II: What will we cover?

REFERENCE: CHAPTER 10, ZIEMER & TRANTER

SOURCE CODING - NOISELESS CODES

The basic idea is to use as few binary digits as possible and still be able to recover the information exactly.

Topics include: Huffman Codes, Shannon-Fano Codes, Tunstall Codes, Entropy of a Source, Lempel-Ziv Codes
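As a preview of the noiseless-coding topics above, here is a minimal Huffman-style code construction in Python. This is an illustrative sketch, not an implementation from the notes; the heap-of-partial-codebooks approach is just one convenient way to realize the merge-the-two-least-likely idea.

```python
import heapq

def huffman_code(probs):
    """Build a binary Huffman code for a dict {symbol: probability}."""
    # Heap entries: (probability, tie-breaker, {symbol: partial codeword}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least likely groups
        p2, _, c2 = heapq.heappop(heap)
        # Prepend one more bit to every codeword in each merged group.
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, tie, merged))
        tie += 1
    return heap[0][2]

probs = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}
code = huffman_code(probs)
avg_len = sum(probs[s] * len(code[s]) for s in probs)
print(code)     # {'A': '0', 'B': '10', 'C': '110', 'D': '111'}
print(avg_len)  # 1.75
```

For this skewed distribution the average length is 1.75 bits per letter, already better than the 2 bits of any fixed-length code on 4 letters.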
SOURCE CODING WITH DISTORTION

Again the idea is to use the minimum number of binary digits for a given value of distortion.

Topics include: Gaussian Sources, Optimal Quantizing
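As a preview of the quantizing topic, the sketch below (illustrative, not from the notes) applies a uniform midrise quantizer to Gaussian samples. For a fine step size Δ, the mean-squared distortion approaches the classical Δ²/12 value.

```python
import math
import random

def uniform_quantize(x, step):
    """Midrise uniform quantizer: map x to the midpoint of its cell."""
    return (math.floor(x / step) + 0.5) * step

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
for step in (1.0, 0.5, 0.25):
    mse = sum((x - uniform_quantize(x, step)) ** 2 for x in samples) / len(samples)
    print(step, mse)  # mse approaches step**2 / 12 as step shrinks
```

Halving the step size costs roughly one extra bit per sample (twice as many levels over the same range) but cuts the distortion by about a factor of 4 — the bits-versus-distortion trade-off this part of the course makes precise.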
CHANNEL CAPACITY OF A NOISY CHANNEL

Even if the channel is noisy, messages can be sent essentially error-free if extra digits are transmitted. The basic idea is to use as few extra digits as possible.

Topics covered: Channel Capacity, Mutual Information
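As a quick preview, the capacity of a binary symmetric channel with crossover probability p has the well-known closed form C = 1 - H₂(p), where H₂ is the binary entropy function; a short sketch (illustrative, not code from the notes):

```python
import math

def h2(p):
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1 - h2(p)

print(bsc_capacity(0.0))   # 1.0 (noiseless: one information bit per channel use)
print(bsc_capacity(0.11))  # about 0.5
print(bsc_capacity(0.5))   # 0.0 (output independent of input: useless channel)
```

At p ≈ 0.11 the capacity is about half a bit per channel use: reliable communication is possible, but only if roughly two channel uses are spent per information bit.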
CHANNEL CODING

Basic idea: detect errors that occurred on the channel and then correct them.

Topics covered: Hamming Codes, General Theory of Block Codes (Parity Check Matrix, Generator Matrix, Minimum Distance, etc.), LDPC Codes, Turbo Codes, Code Performance
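The Hamming code listed above can be sketched in a few lines. The following is an illustrative (7,4) systematic encoder and single-error corrector; the particular parity-check arrangement is one common choice, not necessarily the one the course will use.

```python
# Parity-check matrix H for a (7,4) Hamming code with codeword
# layout [d1, d2, d3, d4, p1, p2, p3]; all arithmetic is mod 2.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(d):
    """Encode 4 data bits into a (7,4) Hamming codeword."""
    d1, d2, d3, d4 = d
    return [d1, d2, d3, d4,
            (d1 + d2 + d4) % 2,
            (d1 + d3 + d4) % 2,
            (d2 + d3 + d4) % 2]

def correct(r):
    """Correct up to one bit error; return (codeword, data bits)."""
    syndrome = [sum(h * b for h, b in zip(row, r)) % 2 for row in H]
    if any(syndrome):
        # A nonzero syndrome equals the H-column of the flipped position.
        for i in range(7):
            if [row[i] for row in H] == syndrome:
                r = r[:]
                r[i] ^= 1
                break
    return r, r[:4]

c = encode([1, 0, 1, 1])
r = c[:]
r[2] ^= 1                 # flip one bit on the "channel"
fixed, data = correct(r)
print(data)               # [1, 0, 1, 1]
```

Because all seven columns of H are distinct and nonzero, every single-bit error produces a unique syndrome, which is exactly why the code corrects any one error.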
A Few Examples
Example 1: 4-letter DMS

Basic concepts came from one paper by one man named Claude Shannon! Shannon used simple models that capture the essence of the problem!

EXAMPLE 1: A simple model of a source (called a DISCRETE MEMORYLESS SOURCE, or DMS):

I.I.D. (Independent and Identically Distributed) source letters

Alphabet size of 4 (A, B, C, D)

P(A) = p_1, P(B) = p_2, P(C) = p_3, P(D) = p_4, with sum_i p_i = 1

Simplest code: A -> 00, B -> 01, C -> 10, D -> 11

Average length of code words: L = 2(p_1 + p_2 + p_3 + p_4) = 2

Q: Can we use fewer than 2 binary digits per source letter (on average) and still recover the information from the binary sequence?

A: It depends on the values of (p_1, p_2, p_3, p_4).
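To see why the answer depends on the probabilities, compare the 2-bit fixed-length code with the source entropy. This is an illustrative sketch (entropy is defined formally later in the course):

```python
import math

def entropy(probs):
    """Entropy in bits per source letter of a DMS."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Equally likely letters: 2 bits per letter is the best possible.
print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0

# Skewed letters: fewer than 2 bits per letter become achievable.
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75

# A matching variable-length code: A -> 0, B -> 10, C -> 110, D -> 111.
probs = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]
print(sum(p * l for p, l in zip(probs, lengths)))  # 1.75
```

When the four letters are equally likely, no code can beat 2 bits per letter; when they are skewed, a variable-length code that gives short words to likely letters does better.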
Example 2: Binary Symmetric Channel

EXAMPLE 2: A simple model for a noisy channel.

Channels, as you saw in ECE 154B, can be viewed as follows: if s_0(t) = -s_1(t) are equally likely (antipodal) signals, then

P(error) = Q( sqrt(2E/N_0) ) = p

Shannon considered a simpler channel called the binary symmetric channel (or BSC for short).

Mathematically,

P_{Y|X}(y|x) = 1 - p   if y = x
               p       if y != x

Q: Can we send information error-free over such a channel even though p != 0?

A: It depends on the rate of transmission (how many channel uses are allowed per information bit). Essentially, for a small enough transmission rate (to be defined precisely), the answer is YES!
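The effect of spending extra channel uses per information bit can be seen by simulating a BSC. The 3-fold repetition code below is a deliberately crude illustration (not one of the codes from the notes): it cuts the error rate from p to about 3p² - 2p³, at the cost of tripling the channel uses.

```python
import random

def bsc(bits, p, rng):
    """Pass a bit sequence through a BSC with crossover probability p."""
    return [b ^ (rng.random() < p) for b in bits]

rng = random.Random(1)
p = 0.1
n = 100_000
data = [rng.randrange(2) for _ in range(n)]

# Uncoded transmission: error rate is roughly p.
uncoded_errors = sum(a != b for a, b in zip(data, bsc(data, p, rng))) / n

# Send each bit 3 times, decode by majority vote.
sent = [b for b in data for _ in range(3)]
recv = bsc(sent, p, rng)
decoded = [int(sum(recv[3 * i:3 * i + 3]) >= 2) for i in range(n)]
coded_errors = sum(a != b for a, b in zip(data, decoded)) / n

print(uncoded_errors)  # about 0.1
print(coded_errors)    # about 0.028 = 3(0.1)^2 - 2(0.1)^3
```

Repetition alone can never reach error-free transmission at a fixed positive rate; Shannon's result is that cleverer codes can, as long as the rate stays below capacity.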
Example 3: DMS with Alphabet Size 8

EXAMPLE 3: Discrete Memoryless Source with an alphabet of 8 letters: {A, B, C, D, E, F, G, H}

Probabilities: {p_A, p_B, p_C, p_D, p_E, p_F, p_G, p_H}

See the following codes:

Q: Which codes are uniquely decodable? Which ones are instantaneously decodable? Compute the average length of the codewords for each code.
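The slide's code tables are not reproduced in these notes, so the codes below are hypothetical stand-ins; the check itself — no codeword is a prefix of another — is the standard test for instantaneous decodability:

```python
def is_prefix_free(codewords):
    """A code is instantaneously decodable iff no codeword is a prefix of another."""
    # In sorted order, any prefix relationship shows up between neighbors.
    words = sorted(codewords)
    return not any(words[i + 1].startswith(words[i]) for i in range(len(words) - 1))

# Hypothetical codes for an 8-letter alphabet:
fixed  = ["000", "001", "010", "011", "100", "101", "110", "111"]
prefix = ["0", "10", "110", "1110", "11110", "111110", "1111110", "1111111"]
bad    = ["0", "01", "10", "11", "000", "001", "010", "011"]

print(is_prefix_free(fixed))   # True
print(is_prefix_free(prefix))  # True
print(is_prefix_free(bad))     # False ("0" is a prefix of "01")
```

Note the distinction the question draws: every instantaneous (prefix-free) code is uniquely decodable, but the converse fails — some uniquely decodable codes require looking ahead before a letter can be emitted.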
EXAMPLE 4: Can you optimally design a code?

L = (1/2)(1) + (1/4)(2) + (1/8)(3) + (1/16)(4) + 4 (1/64)(6) = 2

We will see that this is an optimal code (not only among the single-letter constructions but overall).
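The arithmetic can be checked exactly with fractions. The probabilities below are inferred from the terms of the sum (a dyadic distribution on 8 letters):

```python
import math
from fractions import Fraction as F

# Inferred probabilities {1/2, 1/4, 1/8, 1/16, and 1/64 four times},
# with codeword lengths {1, 2, 3, 4, 6, 6, 6, 6}.
probs = [F(1, 2), F(1, 4), F(1, 8), F(1, 16)] + [F(1, 64)] * 4
lengths = [1, 2, 3, 4, 6, 6, 6, 6]

assert sum(probs) == 1
avg_len = sum(p * l for p, l in zip(probs, lengths))
print(avg_len)  # 2

# Each length equals -log2(probability), which is why no code can do better.
assert all(l == -math.log2(p) for p, l in zip(probs, lengths))
```

Because every probability is a power of 1/2 and each codeword length matches -log2 of its probability, the average length equals the source entropy, so the code is optimal overall.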
EXAMPLE 5:

L = 0.1 + 0.1 + 0.2 + 0.2 + 0.3 + 0.5 + 1 = 2.4

But here we can do better by encoding 2 (or more) source letters at a time.
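Encoding letters in pairs can be illustrated with a hypothetical two-letter source (the slide's own source table is not reproduced here). One letter at a time, a binary source needs 1 bit per letter no matter what; coding pairs exploits the skew:

```python
# Hypothetical source: P(A) = 0.9, P(B) = 0.1.
# Any single-letter binary code needs 1 bit per letter.
single_letter_rate = 1.0

# Code pairs of letters instead, with a Huffman code on the pair probabilities:
# AA -> 0, AB -> 10, BA -> 110, BB -> 111.
pair_probs   = {"AA": 0.81, "AB": 0.09, "BA": 0.09, "BB": 0.01}
pair_lengths = {"AA": 1, "AB": 2, "BA": 3, "BB": 3}

bits_per_pair = sum(pair_probs[s] * pair_lengths[s] for s in pair_probs)
bits_per_letter = bits_per_pair / 2
print(bits_per_letter)  # about 0.645, already well below 1 bit per letter
```

Coding longer blocks pushes the rate further down toward the source entropy (about 0.469 bits per letter for this source), which is the theme the course develops next.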