
CE 108 HOMEWORK 4

EXERCISE 1.
Suppose you are sampling the output of a sensor at 10 kHz and quantize it with a uniform quantizer at 10 bits per sample. Assume that the marginal pdf of the signal is Gaussian with mean 0 Volts and variance 4 Volts².

What is the bit rate of the quantized signal?
10 kHz · 10 bits/sample = 100 kbits/s.

What would be a reasonable choice of the quantization step Δ?
For example, we could choose X_m = 4σ_x = 8 Volts. Then Δ = 2X_m / 2^10 ≈ 0.0156 Volts.

What is the power of the quantization error? (Assume that the high-rate hypothesis holds.)
σ_e² = Δ²/12 ≈ 2·10⁻⁵ Volts².

What is the resulting quantization SNR?
SNR = 6.02·R + 4.77 − 20·log₁₀(X_m/σ_x) ≈ 60.2 − 7.3 ≈ 53 dB.

With your choice of Δ, what is the probability that a sample is in the overload zone?
This is P(|x| > 4σ_x). For a Gaussian random variable this can be computed using the error function and is equal to about 6·10⁻⁵.
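As a quick numerical check of Exercise 1 (my own addition, not part of the original solution; all variable names are mine), the quantities above can be reproduced in a few lines of Python:

```python
import math

fs = 10_000          # sampling rate (Hz)
R = 10               # bits per sample
sigma_x = 2.0        # signal standard deviation (variance = 4 V^2)

bit_rate = fs * R                     # 100_000 bits/s
X_m = 4 * sigma_x                     # overload point: 4 standard deviations
delta = 2 * X_m / 2**R                # quantization step (~0.0156 V)
err_power = delta**2 / 12             # high-rate error power (~2e-5 V^2)
snr_db = 10 * math.log10(sigma_x**2 / err_power)   # ~52.9 dB

# P(|x| > 4*sigma) for a Gaussian: 2*Q(4) = erfc(4/sqrt(2)) ~ 6e-5
p_overload = math.erfc(X_m / (sigma_x * math.sqrt(2)))

print(bit_rate, delta, err_power, round(snr_db, 1), p_overload)
```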

EXERCISE 2.
Consider a random variable x with pdf uniform in [−1, 1]. Suppose you perform quantization with 3 bits using a midrise quantizer. Compute the theoretical variance of the quantization noise, divided into granular and overload components, choosing X_m = 0.5, X_m = 1, and X_m = 2. Additionally, compute the resulting quantization SNR.

Consider first the case X_m = 1. In this case, the probability of being in the overload zone is 0, hence the error is only due to the granular zone. The 8 quantization intervals have width Δ = 2X_m/2^3 = 1/4, and each contributes (1/2)·Δ³/12 to the error variance (the factor 1/2 is the pdf), so

  σ_e² = 8 · (1/2) · Δ³/12 = Δ³/3 = 1/192 ≈ 0.0052.

Since the variance of the signal is 2²/12 = 1/3, we have:

  SNR = 10·log₁₀(σ_x²/σ_e²) = 10·log₁₀(64) ≈ 18.06 dB.

[Figure: pdf f_x(x) and quantization error e(x) for X_m = 1.]

For the case X_m = 0.5 we will also need to consider the overload zone. Since Δ = 2X_m/2^3 = 1/8, the granular zone gives an error of σ_e,gr² = Δ³/3 = 1/1536 ≈ 0.00065. The outermost reconstruction level is X_m − Δ/2 = 7/16, so the error in the overload zone can be computed as:

  σ_e,ol² = 2 · ∫_{7/16}^{1} (x − 7/16)² · (1/2) dx = (1/3)·(9/16)³ ≈ 0.059.

Overall, the error variance is almost 0.06, which is much larger (more than 10 times) than before. This is due to the overwhelming overload zone. The SNR is equal to 7.4 dB.

[Figure: pdf f_x(x) and quantization error e(x) for X_m = 0.5.]

In the case X_m = 2, we have Δ = 2X_m/2^3 = 1/2. Hence, for x > 0, only 2 quantization levels are in the area where the variable has non-null probability. There is no overload error, but we expect a larger granular error. We need to modify the equation for the error variance: only the 4 intervals inside [−1, 1] contribute, so

  σ_e² = 4 · (1/2) · Δ³/12 = Δ³/6 = 1/48 ≈ 0.0208,

which is about 4 times larger than in the case X_m = 1. Now, SNR = 10·log₁₀((1/3)/(1/48)) = 10·log₁₀(16) ≈ 12 dB.

[Figure: pdf f_x(x) and quantization error e(x) for X_m = 2.]
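A short simulation (again my own sketch, not part of the assignment) confirms these closed-form values by quantizing uniform samples with a midrise quantizer and measuring the empirical error power:

```python
import math, random

def midrise_quantize(x, X_m, bits):
    """Midrise uniform quantizer on [-X_m, X_m]; outside values are clipped (overload)."""
    delta = 2 * X_m / 2**bits
    i = math.floor(x / delta)                             # interval index
    i = max(-2**(bits - 1), min(2**(bits - 1) - 1, i))    # clip to 2^bits levels
    return (i + 0.5) * delta                              # midpoint reconstruction

random.seed(0)
for X_m in (1.0, 0.5, 2.0):
    samples = [random.uniform(-1, 1) for _ in range(200_000)]
    mse = sum((x - midrise_quantize(x, X_m, 3))**2 for x in samples) / len(samples)
    snr = 10 * math.log10((1 / 3) / mse)
    print(f"X_m={X_m}: error variance ~ {mse:.4f}, SNR ~ {snr:.1f} dB")
```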

EXERCISE 3.
Consider a signal with non-uniform marginal distribution, which we need to quantize with 8 bits. Suppose that the optimal quantization thresholds are (for x > 0) b_i = 0.01·2^(i−1) (assume that the pdf of the signal is symmetric). Find a companding function g(x) such that the companded signal can be quantized using a uniform quantizer.

This would be any monotone function such that (for x > 0) g(b_i) = g(0.01·2^(i−1)) = (i−1)·Δ for some choice of Δ. For example, g(x) = log₂(|x|/0.01)·sgn(x) (with g(0) = 0) would do the trick with Δ = 1.
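As an illustration (my own sketch, using the thresholds as reconstructed above), one can verify that g maps the exponentially spaced thresholds onto a uniform grid:

```python
import math

def g(x):
    """Companding function: log2(|x|/0.01)*sgn(x), with g(0) = 0."""
    if x == 0:
        return 0.0
    return math.copysign(math.log2(abs(x) / 0.01), x)

# thresholds b_i = 0.01 * 2^(i-1) for i = 1..8 (positive side)
b = [0.01 * 2**(i - 1) for i in range(1, 9)]
print([round(g(bi), 6) for bi in b])   # -> [0.0, 1.0, 2.0, ..., 7.0]: uniform, step 1
```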

EXERCISE 4. (GRADS)
Prove that, under the hypothesis of high rate, the optimal choice of the value y_i for the interval [b_{i−1}, b_i] is the midpoint (b_{i−1} + b_i)/2.

The optimal value of y_i is the centroid

  y_i = ∫_{b_{i−1}}^{b_i} x·f_x(x) dx / ∫_{b_{i−1}}^{b_i} f_x(x) dx.

In the high-rate case we assume that f_x(x) is constant within [b_{i−1}, b_i]. Let f be such constant value. Then,

  y_i = ∫_{b_{i−1}}^{b_i} x·f dx / ∫_{b_{i−1}}^{b_i} f dx = ∫_{b_{i−1}}^{b_i} x dx / ∫_{b_{i−1}}^{b_i} dx = [(b_i² − b_{i−1}²)/2] / (b_i − b_{i−1}) = (b_{i−1} + b_i)/2.

Prove that for an optimal quantizer, the quantization error has mean equal to 0.

Since each y_i is optimal, it is y_i = ∫_{b_{i−1}}^{b_i} x·f_x(x) dx / ∫_{b_{i−1}}^{b_i} f_x(x) dx. Now,

  E[e] = E[x − y(x)] = Σ_i [ ∫_{b_{i−1}}^{b_i} x·f_x(x) dx − y_i·∫_{b_{i−1}}^{b_i} f_x(x) dx ]
       = Σ_i [ ∫_{b_{i−1}}^{b_i} x·f_x(x) dx − ∫_{b_{i−1}}^{b_i} x·f_x(x) dx ] = 0.

EXERCISE 5. (GRADS)
Consider a variable x with the following triangular pdf: f_x(x) = 1 − |x| for |x| ≤ 1, and f_x(x) = 0 for |x| > 1.

Find a companding function g(x) that transforms x into a uniform random variable.

Let z = g(x). Then f_z(g(x)) = f_x(x)/g'(x). We want f_z(z) to be uniform (constant) for all points z such that |g⁻¹(z)| ≤ 1. In other words, within this interval, it must be g'(x) = f_x(x)/C, where C is a constant. By integration, and forcing g(0) = 0 and C = 1, we obtain for −1 < x < 1:

  g(x) = x − (x²/2)·sgn(x),

which gives a variable z = g(x) uniform in [−0.5, 0.5].

Suppose you quantize the transformed variable with a uniform quantizer with 3 bits with no overload zone. What are the corresponding (non-uniform) quantization thresholds b_{i,x} for the original variable?

We need to find the inverse of g(x): for −1 < x < 1,

  g⁻¹(z) = (1 − √(1 − 2|z|))·sgn(z).

For z > 0, the quantization thresholds are b_{i,z} = i/8, and the corresponding thresholds for x are thus: b_{1,x} = 0.134; b_{2,x} = 0.29; b_{3,x} = 0.5; b_{4,x} = 1.
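The inverse mapping of Exercise 5 is easy to check numerically (my own sketch):

```python
import math

def g(x):
    """Companding function for the triangular pdf: g(x) = x - (x^2/2)*sgn(x)."""
    return x - math.copysign(x * x / 2, x)

def g_inv(z):
    """Inverse of g on (-1, 1): g_inv(z) = (1 - sqrt(1 - 2|z|))*sgn(z)."""
    return math.copysign(1 - math.sqrt(1 - 2 * abs(z)), z)

# uniform thresholds in the z domain (3 bits, no overload): b_{i,z} = i/8
for i in range(1, 5):
    z = i / 8
    print(f"b_{i},z = {z:.3f} -> b_{i},x = {g_inv(z):.3f}")
# prints ~0.134, 0.293, 0.500, 1.000, and g(g_inv(z)) round-trips to z
```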

EXERCISE 6.
Consider a signal x(n), sampled at F = 100 Hz, and suppose you quantize it using (1) scalar quantization (4 bits per sample) and (2) vector quantization (quantizing vectors of 2 samples and assigning 8 bits per vector).

1. Compute the bit rate in the two cases.
It is the same: 100 samples/s · 4 bits = 400 bits/s, and 50 vectors/s · 8 bits = 400 bits/s.

2. Prove that the expected quadratic quantization error using vector quantization cannot be higher than in the scalar quantization case (assuming that the scalar and the vector quantizer are optimal).
It is because, given an optimal scalar quantizer with interval set B = {b_i}, you can always construct an identical separable vector quantizer, defined by B×B (i.e., with assignment regions of the type [b_{i−1}, b_i]×[b_{j−1}, b_j]). Hence, the error of the optimal vector quantizer is at most as large as the error of this separable vector quantizer, which is identical to the error of the scalar quantizer.

EXERCISE 7.
Consider a 2-D vector quantizer with y_1 = (1, 2), y_2 = (1, 4), y_3 = (−1, 2), y_4 = (0, −2).

1. Show with a graph the optimal assignment regions {V_i}.
[Figure: the codewords y_1, ..., y_4 in the plane with their nearest-neighbor assignment regions.]

2. Quantize and compute the empirical quadratic error for the following signal, assuming that you quantize groups of two samples at a time: x = {−4 −3 −2 −1 0 1 2 3 4 5}.

[−4 −3] → y_4 = [0, −2] (e² = 17)
[−2 −1] → y_4 = [0, −2] (e² = 5)
[0 1] → y_1 = [1, 2] (e² = 2) (the same error is obtained with y_3)
[2 3] → y_1 = [1, 2] (e² = 2) (the same error is obtained with y_2)
[4 5] → y_2 = [1, 4] (e² = 10)
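The assignment step of Exercise 7 can be mirrored in a few lines (my own sketch; codeword order only matters for breaking the ties noted above):

```python
# Nearest-codeword quantization of consecutive sample pairs,
# reproducing the per-pair squared errors listed in Exercise 7.
codebook = [(1, 2), (1, 4), (-1, 2), (0, -2)]   # y_1 .. y_4
x = [-4, -3, -2, -1, 0, 1, 2, 3, 4, 5]

total = 0
for k in range(0, len(x), 2):
    v = (x[k], x[k + 1])
    # squared Euclidean distance to each codeword
    dists = [(v[0] - y[0])**2 + (v[1] - y[1])**2 for y in codebook]
    i = dists.index(min(dists))
    total += dists[i]
    print(f"{v} -> y_{i + 1} = {codebook[i]} (e^2 = {dists[i]})")
print("total squared error:", total)   # 17 + 5 + 2 + 2 + 10 = 36
```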

EXERCISE 8. (GRADS)
1. Prove that at each step of the LBG algorithm to design a vector quantizer, the expected quadratic norm of the error

  E[‖e‖²] = Σ_i ∫_{V_i} ‖x − y_i‖² f_x(x) dx

can never increase. (Remember that the LBG algorithm can be used when the joint pdf f_x(x) of the signal is known.)

At each step of the LBG algorithm, we minimize the expected quadratic error, either over the set of {V_i} (keeping the {y_i} constant) or over the set of {y_i} (keeping the {V_i} constant). Obviously, the error can never increase. E.g., suppose that at a certain point we have chosen a certain set {V_i} and a certain set of {y_i}, which gives an expected quadratic error of ē. Now we find the {y_i} that minimize the expected quadratic error for fixed {V_i}. The error cannot be larger than ē; otherwise, we may just keep the previous {y_i}!

2. Prove that at each step of the k-means algorithm to design a vector quantizer, the sample mean of the quadratic norm of the error

  Σ_i Σ_{x_k ∈ V_i} ‖x_k − y_i‖²

can never increase. (In this case, we start from a training sample {x_k}.)

Same as before, only that now, at each step of the algorithm, we minimize Σ_i Σ_{x_k ∈ V_i} ‖x_k − y_i‖², either over {V_i} or over {y_i}.

EXERCISE 9. (GRADS - OPTIONAL)
We want to design a quantizer with 2 bits for an exponential random variable with λ = 1/2, with thresholds b_0, ..., b_4 such that b_0 = 0 and b_4 = ∞.

Given the following choice of the b_i's: [0, 0.2, 0.6, 0.8, ∞], find the optimal choice of the y_i's.

  y_i = ∫_{b_{i−1}}^{b_i} x·λe^{−λx} dx / ∫_{b_{i−1}}^{b_i} λe^{−λx} dx
      = (integration by parts)
      = b_{i−1} + 1/λ − (b_i − b_{i−1}) / (e^{λ(b_i − b_{i−1})} − 1),

with y_i = b_{i−1} + 1/λ when b_i = ∞. Hence, y = [0.0983, 0.3933, 0.6983, 2.8000].
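The centroid step is easy to reproduce numerically (my own sketch, with λ = 1/2 as reconstructed above):

```python
import math

lam = 0.5   # rate of the exponential pdf

def centroid(a, b):
    """Centroid of [a, b] under the exponential pdf lam*exp(-lam*x)."""
    if math.isinf(b):
        return a + 1 / lam
    d = b - a
    return a + 1 / lam - d / (math.exp(lam * d) - 1)

b = [0, 0.2, 0.6, 0.8, math.inf]
y = [centroid(b[i], b[i + 1]) for i in range(4)]
print([round(v, 4) for v in y])   # [0.0983, 0.3933, 0.6983, 2.8]
```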

Given the set of y_i's given by your answer, find the optimal set of b_i's:

  b_i = (y_i + y_{i+1}) / 2

(except for b_0 and b_4, which are fixed). Hence, b = [0, 0.2458, 0.5458, 1.7492, ∞].

Now iterate, alternating between the design of the y_i's and of the b_i's, till convergence. This is the generalized Lloyd's method for optimal quantizer design. Iterating, I obtained the following optimal values:

  b: [0, 1.5081, 3.5432, 6.7304, ∞]
  y: [0.6602, 2.3560, 4.7304, 8.7304]

[Figure: exponential pdf f_x(x) with the resulting thresholds and reconstruction levels.]
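For completeness, a compact sketch of the full iteration (my own addition; the assignment only asks to alternate the two steps by hand) converges to the same values. It repeats the centroid helper from above so it runs on its own:

```python
import math

lam = 0.5

def centroid(a, b):
    """Centroid of [a, b] under the pdf lam*exp(-lam*x)."""
    if math.isinf(b):
        return a + 1 / lam
    d = b - a
    return a + 1 / lam - d / (math.exp(lam * d) - 1)

b = [0.0, 0.2, 0.6, 0.8, math.inf]
for _ in range(200):   # alternate centroid and midpoint steps until convergence
    y = [centroid(b[i], b[i + 1]) for i in range(4)]
    b = [0.0] + [(y[i] + y[i + 1]) / 2 for i in range(3)] + [math.inf]

print([round(v, 4) for v in b])   # [0, 1.5081, 3.5432, 6.7304, inf]
print([round(v, 4) for v in y])   # [0.6602, 2.356, 4.7304, 8.7304]
```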