EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018
Please submit the solutions on Gradescope.

1. Optimal codeword lengths. Although the codeword lengths of an optimal variable-length code are complicated functions of the message probabilities $\{p_1, p_2, \ldots, p_m\}$, it can be said that less probable symbols are encoded into longer codewords. Suppose that the message probabilities are given in decreasing order $p_1 > p_2 \ge \cdots \ge p_m$.

(a) Prove that for any binary Huffman code, if the most probable message symbol has probability $p_1 < 1/3$, then that symbol must be assigned a codeword of length at least 2.

(b) Prove that for any binary Huffman code, if the most probable message symbol has probability $p_1 > 2/5$, then that symbol must be assigned a codeword of length 1.

Solution: Optimal codeword lengths. Let $\{c_1, c_2, \ldots, c_m\}$ be codewords of respective lengths $\{l_1, l_2, \ldots, l_m\}$ corresponding to probabilities $\{p_1, p_2, \ldots, p_m\}$.

(a) Suppose, for the sake of contradiction, that $l_1 = 1$. Without loss of generality, assume that $c_1 = 0$. For $x, y \in \{0, 1\}$, let $C_{xy}$ denote the set of codewords beginning with $xy$. The total probability of $C_{10}$ and $C_{11}$ is $1 - p_1 > 2/3$, so at least one of these two sets (without loss of generality, $C_{10}$) has probability greater than $1/3$. We can now obtain a better code by interchanging the subtree of the decoding tree beginning with 0 with the subtree beginning with 10; that is, we replace codewords of the form $10x\ldots$ by $0x\ldots$ and we let $c_1 = 10$. This improvement contradicts the assumption that $l_1 = 1$, and so $l_1 \ge 2$.

(b) We prove that if $p_1 > p_2$ and $p_1 > 2/5$, then $l_1 = 1$. Suppose, for the sake of contradiction, that $l_1 \ge 2$. Then there are no codewords of length 1; otherwise $c_1$ would not be a shortest codeword. Without loss of generality, we can assume that $c_1$ begins with 00. For $x, y \in \{0, 1\}$, let $C_{xy}$ denote the set of codewords beginning with $xy$. Then the sets $C_{01}$, $C_{10}$, and $C_{11}$ have total probability $1 - p_1 < 3/5$, so some two of these sets (without loss of generality, $C_{10}$ and $C_{11}$) have total probability less than $2/5$. We can now obtain a better code by interchanging the subtree of the decoding tree beginning with 1 with the subtree beginning with 00; that is, we replace codewords of the form $1x\ldots$ by $00x\ldots$ and codewords of the form $00y\ldots$ by $1y\ldots$. This improvement contradicts the assumption that $l_1 \ge 2$, and so $l_1 = 1$. (Note that $p_1 > p_2$ was a hidden assumption for this problem; otherwise, for example, the probabilities $\{0.49, 0.49, 0.02\}$ have the optimal code $\{00, 1, 01\}$.)
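Both thresholds can be spot-checked numerically. The following Python sketch is illustrative and not part of the original solution (the helper name and sampling choices are ad hoc): it builds a binary Huffman code with a heap and verifies claims (a) and (b) on randomly drawn distributions.

```python
import heapq
import itertools
import random

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code for the distribution `probs`."""
    counter = itertools.count()  # tie-breaker so the heap never compares lists
    heap = [(p, next(counter), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)  # two least probable subtrees
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:                # each merge adds one bit to every symbol inside
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(counter), s1 + s2))
    return lengths

random.seed(0)
for _ in range(10000):
    m = random.randint(3, 8)
    w = sorted((random.random() for _ in range(m)), reverse=True)
    p = [x / sum(w) for x in w]          # p_1 > p_2 >= ... >= p_m (almost surely)
    l1 = huffman_lengths(p)[0]           # length assigned to the most probable symbol
    if p[0] < 1 / 3:
        assert l1 >= 2                   # part (a)
    if p[0] > 2 / 5:
        assert l1 == 1                   # part (b)
print("claims (a) and (b) hold on all sampled distributions")
```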
2. Shannon-Fano-Elias and Arithmetic Coding. Let $p(x)$ be a PMF on the alphabet $\mathcal{X} = \{1, 2, 3, \ldots, m\}$. Assume that $p(x) > 0$ for all $x \in \mathcal{X}$. We define $F(x)$ and $\bar{F}(x)$ as

$$F(x) = \sum_{a < x} p(a), \qquad \bar{F}(x) = F(x) + \frac{p(x)}{2}.$$

An example of $\bar{F}(x)$ is shown in Figure 1. We first discuss the construction of Shannon-Fano-Elias codes, which form the basis for arithmetic coding.

[Figure 1: the staircase function $\bar{F}(x)$ for $\mathcal{X} = \{1, 2, 3\}$, with the value $\bar{F}(x_2) = 0.80$ marked.]

(a) Show that you can decode $x$ if $\bar{F}(x)$ is known.

(b) In general $\bar{F}(x)$ is a real number in $[0, 1]$. Thus, for storing $\bar{F}(x)$, we need to truncate it appropriately. Let $F_T(x) = \lfloor \bar{F}(x) \rfloor_{l(x)}$ be the truncation of $\bar{F}(x)$, written in binary, to $l(x)$ bits, where $l(x) = \lceil \log \frac{1}{p(x)} \rceil + 1$. Show that one can decode $x$ from $F_T(x)$.

(c) Let $C(x)$ be the $l(x)$ bits after the binary point in the representation of $F_T(x)$. Show that $C(x)$ forms a prefix code.

(d) Show that the average codelength $L$ of the code $C(x)$ satisfies $L < H(X) + 2$.

(e) Consider a sequence $x^n = (x_1, x_2, \ldots, x_n)$ over the extended alphabet $\mathcal{X}^n$, distributed i.i.d. according to the distribution $p(x)$. Show that

$$p(x^n) = p(x^{n-1})\, p(x_n), \qquad F(x^n) = F(x^{n-1}) + p(x^{n-1})\, F(x_n),$$

where $F(x^n) = \sum_{a^n < x^n} p(a^n)$, assuming lexicographic order over $a^n \in \mathcal{X}^n$. (Hint: see Figure 1.)
(f) Given a sequence $x^n$, give an efficient way to perform the Shannon-Fano-Elias encoding $C(x^n)$ of the sequence $x^n$.

(g) Suggest an efficient decoding algorithm for $C(x^n)$ which does not explicitly store the prefix codebook. (Hint: can you recursively decode $x_1, x_2, \ldots, x_n$ from $F_T(x^n)$?)

(h) The efficient algorithm described above for computing Shannon-Fano-Elias codes over the alphabet $\mathcal{X}^n$ is known as arithmetic coding. Show that the average codelength $L_{\mathrm{avg}} = E[l(C(x^n))]/n$ of arithmetic coding satisfies $L_{\mathrm{avg}} < H(X) + \frac{2}{n}$. Due to their efficiency and near-optimality, arithmetic codes are widely used for compression, and they are building blocks in compressors including GZIP, JPEG and MP4.

Solution: Shannon-Fano-Elias and Arithmetic Coding.

(a) Since $\bar{F}(x)$ is a strictly increasing function of $x$, it is invertible. Thus we can decode $x$ if $\bar{F}(x)$ is known.

(b) First, observe that $2^{-l(x)} = 2^{-\lceil \log \frac{1}{p(x)} \rceil - 1} \le p(x)/2$, and hence $F_T(x) \in (\bar{F}(x) - p(x)/2, \bar{F}(x)]$. Since $\bar{F}(x-1) < F(x) = \bar{F}(x) - p(x)/2$, the intervals $(\bar{F}(x) - p(x)/2, \bar{F}(x)]$ are disjoint for different $x$. Thus, by looking at the interval containing $F_T(x)$, we can decode $x$.

(c) As shown above, $F_T(x) \in (\bar{F}(x) - p(x)/2, \bar{F}(x)]$. Now any real number whose binary expansion begins with the $l(x)$ bits of $F_T(x)$ lies in $[F_T(x), F_T(x) + 2^{-l(x)}) \subseteq [F_T(x), F_T(x) + p(x)/2)$, which is contained in $(\bar{F}(x) - p(x)/2, \bar{F}(x) + p(x)/2)$. Again these intervals are disjoint for different $x$, and hence no codeword can be a prefix of another.

(d)

$$L = \sum_{x \in \mathcal{X}} p(x)\, l(x) = \sum_{x \in \mathcal{X}} p(x) \left( \left\lceil \log \frac{1}{p(x)} \right\rceil + 1 \right) < \sum_{x \in \mathcal{X}} p(x) \left( \log \frac{1}{p(x)} + 2 \right) = H(X) + 2.$$
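Before turning to parts (e)-(h), here is a small illustrative sketch of the construction in parts (b)-(d), using exact rational arithmetic. It is not part of the original solution, and the function name and example PMF are ad hoc.

```python
from fractions import Fraction
from math import ceil, log2

def sfe_code(pmf):
    """Shannon-Fano-Elias codewords for pmf = [p(1), ..., p(m)]."""
    codes, F = [], Fraction(0)            # F accumulates sum_{a < x} p(a)
    for p in pmf:
        Fbar = F + p / 2                  # midpoint of the interval (F, F + p)
        l = ceil(log2(1 / p)) + 1         # l(x) = ceil(log 1/p(x)) + 1
        bits = ""
        for _ in range(l):                # first l bits of Fbar after the binary point
            Fbar *= 2
            bits += str(int(Fbar))
            Fbar -= int(Fbar)
        codes.append(bits)
        F += p
    return codes

pmf = [Fraction(1, 4), Fraction(1, 2), Fraction(1, 8), Fraction(1, 8)]
codes = sfe_code(pmf)
print(codes)                              # ['001', '10', '1101', '1111']
# prefix-free, as shown in part (c)
assert not any(a != b and b.startswith(a) for a in codes for b in codes)
L = sum(p * len(c) for p, c in zip(pmf, codes))
H = -sum(p * log2(p) for p in pmf)
print(float(L), H)                        # average length 2.75 < H(X) + 2 = 3.75
```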
(e) $p(x^n) = p(x^{n-1})\, p(x_n)$ follows from the definition of independence. For the other equality, first note that in lexicographic order, $a^n < x^n$ if and only if either $a^{n-1} < x^{n-1}$, or $a^{n-1} = x^{n-1}$ and $a_n < x_n$. Thus

$$F(x^n) = \sum_{a^n < x^n} p(a^n) = \sum_{a^{n-1} < x^{n-1}} p(a^{n-1}) + \sum_{a_n < x_n} p(x^{n-1})\, p(a_n) = F(x^{n-1}) + p(x^{n-1})\, F(x_n).$$

(f) Use the recursion in (e) to compute $F(x^n)$ and $p(x^n)$ in linear time, then use the encoding given in parts (b) and (c). Note that computing and storing the entire PMF over $\mathcal{X}^n$ would take time exponential in $n$, but we do not need to do that to encode a single $x^n$.

(g) We show how to iteratively decode $x_1, x_2, \ldots, x_n$ from $F(x^n)$; decoding starting from $F_T(x^n)$ is along the same lines, but slightly more involved. Observe that $F(x_1) \le F(x^n) < F(x_1 + 1)$ (e.g., see Figure 1), so we can decode $x_1$ by using this inequality. Once we know $x_1$, we can use the recursion

$$F(x^n) = F(x_1) + p(x_1)\, F(x_2^n), \qquad \text{where } x_2^n = (x_2, \ldots, x_n).$$

This relation can be derived by noting that $a^n < x^n$ if and only if either $a_1 < x_1$, or $a_1 = x_1$ and $a_2^n < x_2^n$, and proceeding as in part (e). Continuing this recursion, we can obtain $x_2, \ldots, x_n$ as well.

(h) Applying the result in part (d) to $X^n$, we get

$$L_{\mathrm{avg}} = \frac{E[l(C(x^n))]}{n} < \frac{1}{n}\left(H(X^n) + 2\right) = \frac{1}{n}\left(nH(X) + 2\right) = H(X) + \frac{2}{n}.$$
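The recursions in (e) and (g) translate directly into code. The sketch below (illustrative, not part of the original solution; names are ad hoc) encodes a sequence into the pair $(F(x^n), p(x^n))$ and decodes it back, under the simplifying assumption that we decode from the exact $F(x^n)$, as in the first step of part (g), rather than from its truncation.

```python
from fractions import Fraction

def lower_cdf(pmf):
    """F(a) = sum_{b < a} p(b) for each symbol a, plus a final entry of 1."""
    F = [Fraction(0)]
    for p in pmf:
        F.append(F[-1] + p)
    return F

def encode_interval(seq, pmf):
    """Part (e): return (F(x^n), p(x^n)) by the linear-time recursion."""
    F_lo = lower_cdf(pmf)
    F, prob = Fraction(0), Fraction(1)
    for x in seq:                 # F(x^k) = F(x^{k-1}) + p(x^{k-1}) F(x_k)
        F += prob * F_lo[x]
        prob *= pmf[x]
    return F, prob

def decode_interval(F, n, pmf):
    """Part (g): recover x_1, ..., x_n from the exact F(x^n)."""
    F_lo = lower_cdf(pmf)
    out = []
    for _ in range(n):
        # find x_1 with F(x_1) <= F < F(x_1 + 1), then peel it off
        x = max(a for a in range(len(pmf)) if F_lo[a] <= F)
        out.append(x)
        F = (F - F_lo[x]) / pmf[x]
    return out

pmf = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]  # alphabet {0, 1, 2}
seq = [0, 2, 1, 1, 0, 2]
F, prob = encode_interval(seq, pmf)
assert decode_interval(F, len(seq), pmf) == seq
```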
3. Generate Discrete Distribution from Fair Coin Tosses. Suppose we have a fair coin which we can toss infinitely many times. Our target is to use this coin to generate some random variable $X$ which follows a desired discrete distribution $P$.

(a) Can you generate the discrete distribution $(\frac{1}{4}, \frac{3}{4})$? How many coin tosses do you need?

(b) Can you generate the discrete distribution $(\frac{1}{3}, \frac{2}{3})$? How many coin tosses do you need in expectation? Can you generate it within a fixed number of coin tosses?

(c) For general (possibly irrational) $p \in (0, 1)$, can you generate the discrete distribution $(p, 1 - p)$? How many coin tosses do you need in expectation? (Hint: consider the binary expansion of $p$.)

(d) For general discrete distributions $P = (p_1, \ldots, p_m)$, propose a scheme to generate $P$ using fair coin tosses. (Hint: understanding Shannon-Fano-Elias coding from Q2 might be useful.)

(e) (Bonus) In the setting of (d), propose a scheme to generate $P$ with the required number of coin tosses $T$ satisfying $E[T] \le H(P) + 2 = -\sum_{i=1}^m p_i \log p_i + 2$.

(f) Show that for any scheme that generates $P = (p_1, \ldots, p_m)$, the required number of coin tosses $T$ must satisfy $E[T] \ge H(P)$.

Solution: Generate Discrete Distribution from Fair Coin Tosses.

(a) Toss the coin twice; output $X = 1$ if the outcome is HH, and output $X = 2$ otherwise. Obviously $X$ follows the distribution $(\frac{1}{4}, \frac{3}{4})$, and we always use exactly 2 tosses.

(b) Toss the coin twice; output $X = 1$ if the outcome is HH, output $X = 2$ if the outcome is HT or TH, and repeat the process if the outcome is TT. Then

$$P(X = 1) = \frac{P(\mathrm{HH})}{P(\mathrm{HH}) + P(\mathrm{HT}) + P(\mathrm{TH})} = \frac{1}{3}.$$

The number of repetitions follows a geometric distribution with success probability $3/4$, so the expected number of tosses is $2 \cdot \frac{4}{3} = \frac{8}{3}$. We cannot generate this distribution within a fixed number (say, $N$) of coin tosses: with $N$ tosses, the probability of each outcome must be an integer multiple of $2^{-N}$, while $\frac{1}{3}$ is not.

(c) Consider the binary expansion $p = 0.a_1 a_2 a_3 \cdots$. Let $b_i \in \{0, 1\}$ be the outcome of the $i$-th coin toss, define the stopping time

$$T = \min\{n : 0.b_1 b_2 \cdots b_n \ne 0.a_1 a_2 \cdots a_n\},$$

and output $X = 1$ if $0.b_1 b_2 \cdots b_T < 0.a_1 a_2 \cdots a_T$, and $X = 2$ otherwise. One can think of this process as tossing the coin infinitely many times to generate $U = 0.b_1 b_2 b_3 \cdots$, and stopping as soon as we are sure whether $U < p$ or $U > p$. Clearly $U$ follows the uniform distribution on $[0, 1]$, so $P(X = 1) = P(U < p) = p$, as desired. For the expected number of coin tosses, note that $T > n$ if and only if $0.b_1 b_2 \cdots b_n = 0.a_1 a_2 \cdots a_n$, which occurs with probability $2^{-n}$. Hence,

$$E(T) = \sum_{n \ge 0} P(T > n) = \sum_{n \ge 0} 2^{-n} = 2.$$

(d) Similarly to (c), we consider $s_i = \sum_{j=1}^{i} p_j$ for $i = 0, 1, \ldots, m$, and toss the coin repeatedly. We stop as soon as we are sure that $U \in (s_{i-1}, s_i)$ for some $i = 1, \ldots, m$, and output $X = i$ in that case. Clearly, $P(X = i) = P(s_{i-1} < U < s_i) = s_i - s_{i-1} = p_i$. Algorithmically, we stop at the first time $T$ at which $0.b_1 b_2 \cdots b_T$ does not coincide with the first $T$ bits of any of $s_1, \ldots, s_{m-1}$.
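The scheme of part (c) is easy to simulate. The sketch below is illustrative, not part of the original solution; it peels off the bits $a_n$ of $p$ with floating-point arithmetic, so the expansion is only tracked to machine precision (which matters only on events of probability about $2^{-53}$).

```python
import random

def binary_from_coin(p):
    """Part (c): return (X, T) with X = 1 w.p. p, X = 2 w.p. 1 - p."""
    n = 0
    while True:
        n += 1
        b = 1 if random.random() < 0.5 else 0   # fair coin toss b_n
        p *= 2                                  # shift out a_n, the n-th bit of p
        a, p = (1, p - 1) if p >= 1 else (0, p)
        if b != a:                              # first disagreement decides U < p or U > p
            return (1 if b < a else 2), n

random.seed(1)
samples = [binary_from_coin(1 / 3) for _ in range(200000)]
print(sum(x == 1 for x, _ in samples) / len(samples))  # ~ 1/3
print(sum(t for _, t in samples) / len(samples))       # ~ E(T) = 2
```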
(e) We show that the scheme in (d) satisfies the weaker inequality $E(T) \le H(X) + 3$. By (d), $T > n$ if and only if $0.b_1 b_2 \cdots b_n$ coincides with the first $n$ bits of at least one of $s_1, \ldots, s_{m-1}$, which occurs with probability $A_n / 2^n$, where $A_n \le 2^n$ is the number of distinct first $n$-bit strings of $s_1, \ldots, s_{m-1}$. We can upper bound $A_n$ as

$$A_n \le 2^n - \sum_{i=1}^{m} \max\{\lceil 2^n s_i \rceil - \lceil 2^n s_{i-1} \rceil - 1,\, 0\} \le 2^n - \sum_{i=1}^{m} \max\{\lfloor 2^n p_i \rfloor - 1,\, 0\},$$

where the last inequality follows from $\lceil x + y \rceil \le \lceil x \rceil + \lceil y \rceil$. As a result,

$$E(T) = \sum_{n \ge 0} P(T > n) = \sum_{n \ge 0} \frac{A_n}{2^n} \le \sum_{i=1}^{m} \sum_{n \ge 0} \left( p_i - \frac{\max\{\lfloor 2^n p_i \rfloor - 1,\, 0\}}{2^n} \right).$$

Each summand is at most $\min\{p_i, 2^{1-n}\}$, so the inner sum can be further upper bounded as

$$\sum_{n \ge 0} \left( p_i - \frac{\max\{\lfloor 2^n p_i \rfloor - 1,\, 0\}}{2^n} \right) \le \sum_{n=0}^{\lceil \log \frac{1}{p_i} \rceil} p_i + \sum_{n = \lceil \log \frac{1}{p_i} \rceil + 1}^{\infty} 2^{1-n} = p_i \left( \left\lceil \log \frac{1}{p_i} \right\rceil + 1 \right) + 2^{1 - \lceil \log \frac{1}{p_i} \rceil}.$$

Consider the function $f(p) = p \log \frac{1}{p} + 3p - (n+1)p - 2^{-(n-1)}$ on $[2^{-n}, 2^{-n+1})$ (where $n$ is an integer); clearly $f(p)$ is concave and thus quasi-concave in $p$. As a result, $f(p) \ge \min\{f(2^{-n}), f(2^{-n+1})\} = 0$, implying that

$$p_i \left( \left\lceil \log \frac{1}{p_i} \right\rceil + 1 \right) + 2^{1 - \lceil \log \frac{1}{p_i} \rceil} \le p_i \log \frac{1}{p_i} + 3 p_i.$$

Hence,

$$E(T) \le \sum_{i=1}^{m} \left( p_i \log \frac{1}{p_i} + 3 p_i \right) = H(X) + 3.$$

For a better bound with a different scheme, see the corresponding theorem and the preceding discussion in Cover & Thomas.

(f) For any scheme, consider the shortest coin sequence $c_i$, of length $l_i$, under which $X = i$ is output. The sequences $c_1, \ldots, c_m$ must be prefix-free (why?). Since entropy serves as a lower bound for the average length of any prefix code, we have

$$E(T) \ge \sum_{i=1}^{m} p_i l_i \ge H(X).$$
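The scheme of part (d), and the bounds in (e) and (f), can be checked empirically. The following sketch (illustrative, not part of the original solution) tracks the dyadic interval $[0.b_1 \cdots b_n,\, 0.b_1 \cdots b_n + 2^{-n})$ with exact fractions and stops as soon as it avoids all interior boundaries $s_i$.

```python
import random
from fractions import Fraction
from math import log2

def sample_from_coin(pmf):
    """Part (d): return (X, T) for the interval scheme driven by fair coin tosses."""
    s = [Fraction(0)]
    for p in pmf:
        s.append(s[-1] + p)                  # boundaries s_0 = 0 < s_1 < ... < s_m = 1
    lo, hi, n = Fraction(0), Fraction(1), 0
    while any(lo < si < hi for si in s[1:-1]):
        n += 1                               # toss b_n and halve the dyadic interval
        mid = (lo + hi) / 2
        if random.random() < 0.5:
            hi = mid
        else:
            lo = mid
    x = next(i for i in range(1, len(s)) if s[i] >= hi)  # interval containing [lo, hi)
    return x, n

pmf = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 8), Fraction(1, 8)]
random.seed(2)
samples = [sample_from_coin(pmf) for _ in range(50000)]
freqs = [sum(x == i for x, _ in samples) / len(samples) for i in range(1, len(pmf) + 1)]
ET = sum(t for _, t in samples) / len(samples)
H = -sum(float(p) * log2(p) for p in pmf)
print(freqs)                                 # ~ [0.5, 0.25, 0.125, 0.125]
print(ET, "should lie between H(P) =", H, "and H(P) + 3")
```

For this dyadic PMF the scheme happens to be optimal, and the empirical average comes out at $E(T) = H(P) = 1.75$.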
4. Channel capacity. Find the capacity of the following channels with probability transition matrices:

(a) $\mathcal{X} = \mathcal{Y} = \{0, 1, 2\}$,
$$p(y|x) = \begin{bmatrix} 1/3 & 1/3 & 1/3 \\ 1/3 & 1/3 & 1/3 \\ 1/3 & 1/3 & 1/3 \end{bmatrix}$$

(b) $\mathcal{X} = \mathcal{Y} = \{0, 1, 2\}$,
$$p(y|x) = \begin{bmatrix} 1/2 & 1/2 & 0 \\ 0 & 1/2 & 1/2 \\ 1/2 & 0 & 1/2 \end{bmatrix}$$

(c) $\mathcal{X} = \mathcal{Y} = \{0, 1\}$ (the Z-channel),
$$p(y|x) = \begin{bmatrix} 1 & 0 \\ 1/2 & 1/2 \end{bmatrix}$$

Solution: Channel capacity.

(a) This is a symmetric channel, and by the known result for symmetric channels (Section 7.2 in Cover & Thomas), $C = \log|\mathcal{Y}| - H(\mathbf{r}) = \log 3 - \log 3 = 0$, where $\mathbf{r}$ is a row of the transition matrix. In this case, the output is independent of the input.

(b) Again the channel is symmetric: $C = \log|\mathcal{Y}| - H(\mathbf{r}) = \log 3 - \log 2 \approx 0.585$ bits.

(c) First we express $I(X;Y)$, the mutual information between the input and output of the Z-channel, as a function of $\alpha = \Pr(X = 1)$:

$$H(Y|X) = \Pr(X = 0) \cdot 0 + \Pr(X = 1) \cdot 1 = \alpha,$$
$$H(Y) = H(\Pr(Y = 1)) = H(\alpha/2),$$
$$I(X;Y) = H(Y) - H(Y|X) = H(\alpha/2) - \alpha.$$

Since $I(X;Y)$ is strictly concave in $\alpha$ (why?) and $I(X;Y) = 0$ when $\alpha = 0$ and $\alpha = 1$, the maximum mutual information is obtained for some value of $\alpha$ such that $0 < \alpha < 1$. Using elementary calculus, we determine that

$$\frac{d}{d\alpha} I(X;Y) = \frac{1}{2} \log \frac{1 - \alpha/2}{\alpha/2} - 1,$$

which is equal to zero for $\alpha = 2/5$. (It is reasonable that $\Pr(X = 1) < 1/2$, since $X = 1$ is the noisy input to the channel.) So the capacity of the Z-channel in bits is $C = H(1/5) - 2/5 \approx 0.722 - 0.400 = 0.322$.
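These capacities can be cross-checked numerically with the Blahut-Arimoto algorithm. The sketch below is not part of the original solution (it assumes NumPy is available); it computes the capacity of an arbitrary discrete memoryless channel from its transition matrix.

```python
import numpy as np

def blahut_arimoto(P, iters=5000):
    """Capacity in bits of a DMC with transition matrix P[x, y] = p(y|x)."""
    m = P.shape[0]
    q = np.full(m, 1.0 / m)              # input distribution, initially uniform
    D = np.zeros(m)
    for _ in range(iters):
        Py = q @ P                       # output distribution induced by q
        with np.errstate(divide="ignore", invalid="ignore"):
            # D[x] = KL(p(.|x) || Py), with the convention 0 log 0 = 0
            D = np.where(P > 0, P * np.log2(P / Py), 0.0).sum(axis=1)
        q = q * np.exp2(D)               # multiplicative Blahut-Arimoto update
        q /= q.sum()
    return float(q @ D)                  # ~ I(X;Y) under the optimizing input

channels = {
    "(a)": np.full((3, 3), 1 / 3),
    "(b)": np.array([[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]]),
    "(c)": np.array([[1.0, 0.0], [0.5, 0.5]]),
}
for name, P in channels.items():
    print(name, round(blahut_arimoto(P), 4))  # expect 0.0, 0.585, 0.322
```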
5. Choice of channels. Let $\mathcal{C}_1 = (\mathcal{X}_1, p_1(y_1|x_1), \mathcal{Y}_1)$ and $\mathcal{C}_2 = (\mathcal{X}_2, p_2(y_2|x_2), \mathcal{Y}_2)$ be two channels with capacities $C_1$ and $C_2$, respectively. Assume the output alphabets are distinct and do not intersect. Consider a channel $\mathcal{C}$ which is a union of the two channels $\mathcal{C}_1, \mathcal{C}_2$: at each time, one can send a symbol over $\mathcal{C}_1$ or over $\mathcal{C}_2$, but not both.

(a) Let

$$X = \begin{cases} X_1 & \text{with probability } \alpha \\ X_2 & \text{with probability } 1 - \alpha \end{cases}$$

where $X_1$ and $X_2$ are random variables taking values in $\mathcal{X}_1$ and $\mathcal{X}_2$, respectively. Show that

$$I(X;Y) = H_b(\alpha) + \alpha\, I(X_1;Y_1) + (1 - \alpha)\, I(X_2;Y_2),$$

where $H_b(\alpha)$ represents the binary entropy corresponding to $\alpha$.

(b) Let $C$ be the capacity of the channel $\mathcal{C}$. Use the result in part (a) to show that $2^C = 2^{C_1} + 2^{C_2}$.

(c) Let $C_1 = C_2$. Show that $C = C_1 + 1$, and give an intuitive explanation.

(d) Calculate the capacity of the channel shown below.

[Figure: transition diagram of a channel with crossover probabilities $p$ and $1 - p$ between inputs 0 and 1; per the solution, the channel is the union of a BSC($p$) and a zero-capacity channel.]
Solution: Choice of channels.

(a) Let

$$\theta(X) = \begin{cases} 1 & X \in \mathcal{X}_1 \\ 2 & X \in \mathcal{X}_2 \end{cases}$$

so that $\Pr(\theta = 1) = \alpha$. Since the output alphabets $\mathcal{Y}_1$ and $\mathcal{Y}_2$ are disjoint, $\theta$ is a function of $Y$ as well, so $I(X; \theta \mid Y) = 0$. Expanding $I(X; Y, \theta)$ in two ways,

$$I(X; Y, \theta) = I(X; Y) + I(X; \theta \mid Y) = I(X; Y),$$

and therefore

$$I(X; Y) = I(X; \theta) + I(X; Y \mid \theta) = H(\theta) - H(\theta \mid X) + \alpha\, I(X_1; Y_1) + (1 - \alpha)\, I(X_2; Y_2) = H_b(\alpha) + \alpha\, I(X_1; Y_1) + (1 - \alpha)\, I(X_2; Y_2),$$

where $H(\theta \mid X) = 0$ because $\theta$ is a function of $X$.

(b) It follows from (a) that

$$C = \sup_{\alpha} \{ H_b(\alpha) + \alpha C_1 + (1 - \alpha) C_2 \}.$$

Maximizing over $\alpha$ one gets the desired result: the maximum occurs where $H_b'(\alpha) + C_1 - C_2 = 0$, i.e., at $\alpha = 2^{C_1} / (2^{C_1} + 2^{C_2})$, and substituting this value back gives $C = \log(2^{C_1} + 2^{C_2})$.
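The identity in (b) can be sanity-checked by maximizing the expression from part (a) on a grid. A small sketch with illustrative capacity values (not part of the original solution):

```python
from math import log2

def Hb(a):
    """Binary entropy in bits."""
    return -a * log2(a) - (1 - a) * log2(1 - a) if 0 < a < 1 else 0.0

def union_capacity(C1, C2, grid=10**5):
    """Numerically maximize H_b(a) + a*C1 + (1-a)*C2 over a in [0, 1]."""
    return max(Hb(k / grid) + (k / grid) * C1 + (1 - k / grid) * C2
               for k in range(grid + 1))

C1, C2 = 0.7, 0.3                            # illustrative capacities
print(union_capacity(C1, C2))                # numeric maximum from part (a)
print(log2(2 ** C1 + 2 ** C2))               # closed form from part (b)
print(union_capacity(0.5, 0.5))              # part (c): equals C_1 + 1 = 1.5
```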
10 channel. As an extreme example, suppose both channels are BSC(0.5) channels. Then C 1 = C 2 = 0, but C = 1. This is because we have two possible channels and we can communicate 1 bit/transmission by sending through channel 1 if the input bit is 0 and sending through channel 2 if the input bit is 1. (d) This channel consists of a sum of a BSC and a zero-capacity channel. C = log ( 2 1 H(p) + 1 ) Thus Homework 3 Page 10 of 10