
EE 121: Introduction to Digital Communication Systems
Midterm Solutions

1. Consider the following discrete-time communication system. There are two equally likely messages to be transmitted, and they are encoded into an input process {X_n} such that if message 0 is transmitted, X_n = 1 for all n ≥ 0, and if message 1 is transmitted, X_n = −1 for all n ≥ 0. The process is then passed through an AWGN channel, so that the output process {Y_n} is given by Y_n = X_n + W_n, where {W_n} is a sequence of i.i.d. N(0, σ²) random variables. The received sequence then goes through a threshold receiver whose output is the process {Z_n}, where Z_n = sgn(Y_n).

[4] a) Is the process {X_n} stationary? Explain.
[4] b) Is the process {Z_n} stationary? Explain.
[4] c) Compute P(X_k = −1 | Z_k = 1). (Express your answer in terms of the Q-function.)
[4] d) Compute P(Z_{k+1} = 1 | Z_k = 1). Are the Z_n's independent?

a) Yes. For all k and all times n_1, n_2, ..., n_k,

P(X_{n_1} = 1, ..., X_{n_k} = 1) = 1/2,   P(X_{n_1} = −1, ..., X_{n_k} = −1) = 1/2.

Thus the joint distribution does not depend on the times n_1, ..., n_k, and the process is (strict-sense) stationary. Some students argued that the process is not stationary because it is not defined for negative times. This is accepted, although it is taken for granted that stationarity here refers only to positive times. Some students only verified wide-sense stationarity; this is accepted too.

b) Yes. {Z_n} is a time-invariant transformation of the two stationary processes {X_n} and {W_n}, and hence is stationary. (This is similar to one of the homework questions.)

c) By Bayes' rule,

P(X_k = −1 | Z_k = 1) = P(Z_k = 1 | X_k = −1) P(X_k = −1) / P(Z_k = 1).

By symmetry, P(Z_k = 1) = 1/2 and P(X_k = −1) = 1/2. Also,

P(Z_k = 1 | X_k = −1) = P(W_k > 1) = Q(1/σ),

and hence

P(X_k = −1 | Z_k = 1) = Q(1/σ).

d) Since X_{k+1} = X_k and the W_n's are independent, Z_{k+1} is conditionally independent of Z_k given X_k. Therefore

P(Z_{k+1} = 1 | Z_k = 1)
= P(Z_{k+1} = 1 | Z_k = 1, X_k = 1) P(X_k = 1 | Z_k = 1) + P(Z_{k+1} = 1 | Z_k = 1, X_k = −1) P(X_k = −1 | Z_k = 1)
= P(Z_{k+1} = 1 | X_k = 1) P(X_k = 1 | Z_k = 1) + P(Z_{k+1} = 1 | X_k = −1) P(X_k = −1 | Z_k = 1)
= P(Z_{k+1} = 1 | X_{k+1} = 1) P(X_k = 1 | Z_k = 1) + P(Z_{k+1} = 1 | X_{k+1} = −1) P(X_k = −1 | Z_k = 1)
= (1 − Q(1/σ))(1 − Q(1/σ)) + Q(1/σ) Q(1/σ)
= (1 − Q(1/σ))² + Q²(1/σ).

Since Q(1/σ) < 1/2, this is strictly greater than P(Z_{k+1} = 1) = 1/2, so the Z_n's are not independent.
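The answers to (c) and (d) are easy to check numerically. Below is an illustrative Monte Carlo sketch; the noise level sigma = 1.5, the seed, and the number of trials are arbitrary example choices, and scipy's norm.sf is used as the Q-function.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma, trials = 1.5, 200_000
q = norm.sf(1 / sigma)                                    # Q(1/sigma), since Q(x) = P(N(0,1) > x)

X   = rng.choice([1.0, -1.0], size=trials)                # X_k = X_{k+1}: same message on both uses
Zk  = np.sign(X + sigma * rng.standard_normal(trials))    # Z_k
Zk1 = np.sign(X + sigma * rng.standard_normal(trials))    # Z_{k+1}

cond = (Zk == 1)
print("P(X_k=-1 | Z_k=1):   ", np.mean(X[cond] == -1), " vs Q(1/sigma) =", q)
print("P(Z_{k+1}=1 | Z_k=1):", np.mean(Zk1[cond] == 1), " vs (1-Q)^2 + Q^2 =", (1 - q)**2 + q**2)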

2. Suppose a discrete-time source is modelled by a stationary zero-mean Gaussian process {X_n} with autocorrelation function R_X.

[5] a) Is the variance of X_n − X_{n−1} always less than that of X_n? Explain. If not, when does it hold? Does DPCM coding, where the difference between two consecutive samples is quantized, always improve the performance over PCM? Explain.

[5] b) Find the optimal linear predictor of X_n given X_{n−1}, and calculate the variance of the resulting error. How does the performance of a DPCM coder using this predictor compare with that in (a)?

a) Var(X_n − X_{n−1}) = Var(X_n) + Var(X_{n−1}) − 2 Cov(X_n, X_{n−1}) = 2R_X(0) − 2R_X(1).

Thus the variance of X_n − X_{n−1} is smaller than that of X_n if and only if R_X(0) < 2R_X(1). If this is not satisfied, the variance of the difference is actually greater than the variance of the individual sample, and DPCM will in fact do worse.

b) By solving the Yule-Walker equation, one finds that

X̂_n = (R_X(1)/R_X(0)) X_{n−1},

and the variance of the resulting error is

E[(X_n − (R_X(1)/R_X(0)) X_{n−1})²] = R_X(0) − R_X(1)²/R_X(0).

Since this is never greater than the variance of X_n (and is strictly smaller whenever R_X(1) ≠ 0), a DPCM coder using this predictor always performs at least as well as PCM, in contrast with the simple differencing scheme in (a).
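As a sanity check on these formulas, the short sketch below simulates a first-order Gauss-Markov source; the model R_X(k) = ρ^|k| with ρ = 0.9 is an arbitrary example choice. It compares the empirical variances of the raw sample (PCM), the naive difference X_n − X_{n−1}, and the optimal one-tap prediction error with the expressions above.

import numpy as np

rho, n = 0.9, 500_000
rng = np.random.default_rng(1)

# Stationary AR(1) process with unit variance: X_k = rho*X_{k-1} + sqrt(1-rho^2)*W_k.
x = np.zeros(n)
x[0] = rng.standard_normal()
w = rng.standard_normal(n)
for k in range(1, n):
    x[k] = rho * x[k-1] + np.sqrt(1 - rho**2) * w[k]

R0, R1 = 1.0, rho                                # R_X(0) and R_X(1) for this model
print("PCM, Var(X_n):               ", np.var(x), " theory:", R0)
print("Difference, Var(X_n-X_{n-1}):", np.var(np.diff(x)), " theory:", 2*R0 - 2*R1)
err = x[1:] - (R1 / R0) * x[:-1]                 # optimal one-tap prediction error
print("Optimal predictor error:     ", np.var(err), " theory:", R0 - R1**2 / R0)

For ρ = 0.9 both difference-based schemes beat PCM; re-running with ρ < 1/2 shows the naive difference doing worse than PCM while the one-tap predictor still does not, consistent with parts (a) and (b).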

3. Consider the following modulation scheme for symbols of n bits long. The first k bits pick the amplitude A_i and the last n − k bits pick the phase θ_j. The modulated waveform is A_i cos(2πt/T + θ_j) on [0, T].

[6] (a) Find an orthonormal basis for the waveforms.
[2] (b) What is the dimension of the space spanned by the waveforms?
[4] (c) What is the geometry of the signal constellation?

Solution

(a) A_i cos(2πt/T + θ_j) = A_i cos(θ_j) cos(2πt/T) − A_i sin(θ_j) sin(2πt/T).

Therefore, an orthonormal basis for the waveforms is given by

{ √(2/T) cos(2πt/T), √(2/T) sin(2πt/T) }.

(b) The dimension is 2.

(c) Each signal can be represented as a point in the 2-D plane with (A_i, θ_j) as its polar coordinates.
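A brief numerical sketch can confirm that the two basis functions in (a) are orthonormal on [0, T] and illustrate the constellation geometry of (c); the choices T = 1, k = 2 amplitude bits, n − k = 2 phase bits, and the specific amplitude and phase sets are assumptions made only for this example.

import numpy as np

T = 1.0
t, dt = np.linspace(0, T, 200_000, endpoint=False, retstep=True)
phi1 = np.sqrt(2/T) * np.cos(2*np.pi*t/T)
phi2 = np.sqrt(2/T) * np.sin(2*np.pi*t/T)
print("<phi1,phi1> =", np.sum(phi1*phi1)*dt)   # ~1
print("<phi2,phi2> =", np.sum(phi2*phi2)*dt)   # ~1
print("<phi1,phi2> =", np.sum(phi1*phi2)*dt)   # ~0

# Example constellation: 2^k = 4 amplitudes and 2^(n-k) = 4 phases.
amps   = [1.0, 2.0, 3.0, 4.0]
phases = [0.0, np.pi/2, np.pi, 3*np.pi/2]
for A in amps:
    for th in phases:
        s = A * np.cos(2*np.pi*t/T + th)
        c1, c2 = np.sum(s*phi1)*dt, np.sum(s*phi2)*dt    # coordinates in the basis
        # Each point sits at radius A*sqrt(T/2) and angle -theta (mod 2*pi).
        print(f"A={A}, theta={th:.2f} -> radius {np.hypot(c1, c2):.3f}, angle {np.arctan2(c2, c1):+.2f}")

Up to the common scale factor √(T/2), the coordinates form the polar pair (A_i, −θ_j): the constellation is a set of points spread over concentric circles, one circle per amplitude.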

4. Consider a source with an alphabet of N letters and probabilities p_1, ..., p_N, all non-zero.

[6] a) Argue that any prefix-condition binary code with codeword lengths l_1, ..., l_N must satisfy

Σ_{i=1}^{N} 2^{−l_i} ≤ 1.

(Hint: think in terms of the binary tree representation of the code.)

[6] b) Argue that equality must hold if the code minimizes the expected codeword length.

[6] c) (Bonus) Show that the expected codeword length of any prefix-condition binary code cannot be smaller than

−Σ_{i=1}^{N} p_i log₂ p_i.

(Hint: given the constraint in (b) on the codeword lengths, how small can the expected codeword length be even if the lengths could be fractional?)

a) A rigorous derivation is by induction. Every prefix-condition code corresponds to a binary tree in which all codewords are at the leaves. If there are only two codewords, the assertion is certainly true since l_1 = l_2 = 1. Assume it is true for all prefix-condition codes with n or fewer codewords, and consider a code with n + 1 codewords. Let L and R be the sets of codeword leaves in the left and right subtrees respectively. A codeword of length l_i can be thought of as a codeword of length l_i − 1 in its subtree. Applying the induction hypothesis to the left and right subtrees separately,

Σ_{i∈L} 2^{−(l_i−1)} ≤ 1,   Σ_{i∈R} 2^{−(l_i−1)} ≤ 1.

Equivalently, Σ_{i∈L} 2^{−l_i} ≤ 1/2 and Σ_{i∈R} 2^{−l_i} ≤ 1/2, which together imply

Σ_{i=1}^{n+1} 2^{−l_i} ≤ 1.

Any informal argument along these lines is given full credit.

b) When the average codeword length is minimized, every interior node of the tree has two children (otherwise a codeword could be shortened). Using an argument similar to the one above, one can show that equality must then hold.

c) Consider the problem of minimizing Σ_i p_i l_i subject to the constraint Σ_i 2^{−l_i} = 1, without requiring the l_i's to be integers. Using the method of Lagrange multipliers, one can show that the optimal solution is l_i* = −log₂ p_i. Hence the average codeword length of any code must be lower bounded by −Σ_i p_i log₂ p_i.
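To make parts (a)-(c) concrete, the following small sketch (the probability vector is an arbitrary example) computes Huffman codeword lengths with Python's heapq module, checks that the Kraft sum Σ_i 2^{−l_i} equals 1 for this optimal code, and compares the average length with the entropy lower bound −Σ_i p_i log₂ p_i of part (c).

import heapq, math

p = [0.4, 0.2, 0.2, 0.1, 0.05, 0.05]           # example source probabilities

# Huffman construction: repeatedly merge the two least probable subtrees;
# every merge adds one bit to the length of each codeword in the merged subtrees.
heap = [(pi, [i]) for i, pi in enumerate(p)]
heapq.heapify(heap)
lengths = [0] * len(p)
while len(heap) > 1:
    p1, ids1 = heapq.heappop(heap)
    p2, ids2 = heapq.heappop(heap)
    for i in ids1 + ids2:
        lengths[i] += 1
    heapq.heappush(heap, (p1 + p2, ids1 + ids2))

kraft   = sum(2.0 ** -l for l in lengths)
avg_len = sum(pi * l for pi, l in zip(p, lengths))
entropy = -sum(pi * math.log2(pi) for pi in p)
print("codeword lengths:", lengths)
print("Kraft sum:", kraft)                     # equals 1: the code tree is full, as in (b)
print("average length:", avg_len, ">= entropy:", entropy)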