Rate distortion for a Gaussian source

Assumptions: squared-error distortion d(x, y) = (x − y)²; given a distortion constraint E[(X − Y)²] ≤ D with D < σ², find a lower bound on I(X;Y):

I(X;Y) = h(X) − h(X|Y) = h(X) − h(X − Y | Y) ≥ h(X) − h(X − Y) ≥ ½ log(2πeσ²) − ½ log(2πeD) = ½ log(σ²/D)

(The first inequality holds because conditioning reduces entropy; the second because a Gaussian maximizes entropy for a given variance, here at most D.)

Suppose you don't know the characteristics of the source, but it looks somewhat like a Laplacian. Can you comment on R(D)? In lossless coding, an algorithm is optimal if it achieves the entropy of the source; R(D) plays the same role in lossy coding.
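The bound above is easy to evaluate numerically. A minimal sketch (the function name is mine, not from the slides):

```python
import math

def gaussian_rate_distortion(sigma2, D):
    """Shannon lower bound for a Gaussian source under squared error:
    R(D) = 0.5 * log2(sigma^2 / D) bits/sample for D < sigma^2,
    and 0 once the allowed distortion reaches the source variance."""
    if D >= sigma2:
        return 0.0
    return 0.5 * math.log2(sigma2 / D)

# Halving the allowed distortion always costs exactly 0.5 bit more.
r_quarter = gaussian_rate_distortion(1.0, 0.25)   # 0.5 * log2(4) = 1.0 bit
r_eighth = gaussian_rate_distortion(1.0, 0.125)   # 1.5 bits
```

Note the boundary behavior: once D ≥ σ², transmitting nothing and reconstructing the mean already meets the constraint, so the rate is zero.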
Probability Models

Uniform distribution, Gaussian distribution, Laplacian distribution:

f_X(x) = (1/(σ√2)) e^(−√2|x|/σ)
Quantization

Quantization: a process of representing a large, possibly infinite set of values with a much smaller set.

Scalar quantization: a mapping of an input value x into a finite number of output values y.

[Figure: an 8-level quantizer mapping inputs over the range −3.0 to 3.0 to the codes 000 through 111]
Input code → output value:

000 → −3.5
001 → −2.5
010 → −1.5
011 → −0.5
100 → 0.5
101 → 1.5
110 → 2.5
111 → 3.5
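The mapping in this table can be sketched as a small midrise quantizer (Δ = 1; clamping out-of-range inputs to the outermost levels is my assumption, the slide does not specify it):

```python
def midrise_quantize(x, delta=1.0, n_bits=3):
    """Uniform midrise quantizer matching the 8-level table above:
    returns (code index, reconstruction value), with the reconstruction
    at the midpoint of the selected interval."""
    m = 2 ** n_bits
    i = int(x // delta) + m // 2
    i = max(0, min(m - 1, i))          # clamp out-of-range inputs
    y = (i - m // 2 + 0.5) * delta     # interval midpoint
    return i, y

# x = 1.3 falls in [1, 2): code 101 (index 5), reconstruction 1.5
code, y = midrise_quantize(1.3)
```

With Δ = 1 and 3 bits this reproduces the table exactly: an input of −3.7 gives code 000 and output −3.5, an input of 1.3 gives code 101 and output 1.5.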
Types of uniform quantizers

Midrise quantizers have an even number of output levels. Midtread quantizers have an odd number of output levels, with zero as one of them.
Quantization error

Since the reconstruction values y_i are the midpoints of each interval, the quantization error must lie in [−Δ/2, Δ/2]. For a uniformly distributed source, the quantization error as a function of the input is a sawtooth sweeping [−Δ/2, Δ/2].
Quantization operation

Let M be the number of reconstruction levels, with decision boundaries {b_i}, i = 0, …, M.

Q(x) = y_i  if  b_{i−1} < x ≤ b_i

MSE = σ_q² = ∫ (x − Q(x))² f_X(x) dx = Σ_{i=1}^{M} ∫_{b_{i−1}}^{b_i} (x − y_i)² f_X(x) dx

Let's represent the quantizer output using fixed-length codewords.
If there are M levels, the rate is R = ⌈log₂ M⌉. For 8 levels we need 3-bit codewords.

Quantizer design problem: given f_X(x) and M, find the decision boundaries b_i and reconstruction levels y_i that minimize the MSE.
If we use VLCs to represent the y_i, with l_i the length of the codeword for y_i, the rate is

R = Σ_{i=1}^{M} l_i P(y_i),  where  P(y_i) = ∫_{b_{i−1}}^{b_i} f_X(x) dx

Problem: given a distortion constraint σ_q² ≤ D*, find the decision boundaries, reconstruction levels, and binary codes that minimize the rate.
Uniform Quantization of Uniform Source

Input: uniform on [−X_max, X_max]. Output: M-level uniform quantizer with Δ = 2X_max/M.

σ_q² = 2 Σ_{i=1}^{M/2} ∫_{(i−1)Δ}^{iΔ} (x − (2i−1)Δ/2)² (1/(2X_max)) dx = Δ²/12
Consider the quantization error instead: q = x − Q(x). Then q ∈ [−Δ/2, Δ/2] and is uniformly distributed, so

σ_q² = (1/Δ) ∫_{−Δ/2}^{Δ/2} q² dq = Δ²/12

Signal variance: E[X²] = ∫ x² f_X(x) dx = (2X_max)²/12

With fixed-length codewords, M levels need n bits: M = 2ⁿ.
SNR(dB) = 10 log₁₀(σ_s²/σ_q²) = 10 log₁₀( ((2X_max)²/12) · (12/Δ²) ) = 20 log₁₀ M

In terms of n, SNR = 6.02n dB. For every additional bit we gain 6.02 dB.
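The 6.02 dB/bit rule can be checked directly; a sketch (the function name is mine):

```python
import math

def uniform_quantizer_snr_db(n_bits):
    """SNR of an n-bit uniform quantizer on a uniform source:
    signal power (2*Xmax)^2/12 over noise power Delta^2/12, with
    Delta = 2*Xmax / 2^n, so the ratio reduces to M^2."""
    M = 2 ** n_bits
    return 10.0 * math.log10(M ** 2)

# 1 bit -> ~6.02 dB, 8 bits -> ~48.16 dB: 6.02 dB per added bit
snr_1 = uniform_quantizer_snr_db(1)
snr_8 = uniform_quantizer_snr_db(8)
```

X_max cancels out of the ratio, which is why the SNR depends only on the number of levels for a uniform source.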
Image compression

[Figure: the same image quantized at 1, 2, and 3 bits/pixel]
Uniform Quantization of Nonuniform Sources

Example nonuniform source: x ∈ [−100, 100], P(x ∈ [−1, 1]) = 0.95.

Problem: design an 8-level quantizer.

The previous approach (Δ = 200/8 = 25) leads to 95% of the sample values being represented by just two numbers: −12.5 and 12.5. For those samples the max quantization error (QE) is 12.5 and the min QE is 11.5.

Consider an alternative: step size Δ = 0.3. Max QE = 98.5, but 95% of the time QE < 0.15. The average distortion is less in the latter case.
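A Monte Carlo comparison of the two designs. The slide only fixes P(x ∈ [−1, 1]) = 0.95, so the Laplacian shape below is an assumption chosen to match that probability:

```python
import numpy as np

def quantize_midrise(x, delta, m=8):
    """Vectorized M-level uniform midrise quantizer."""
    i = np.clip(np.floor(x / delta) + m // 2, 0, m - 1)
    return (i - m // 2 + 0.5) * delta

rng = np.random.default_rng(0)
# assumed source: Laplacian with P(|x| <= 1) = 0.95, clipped to [-100, 100]
b = 1.0 / np.log(20.0)                 # solves 1 - exp(-1/b) = 0.95
x = np.clip(rng.laplace(0.0, b, 500_000), -100.0, 100.0)

mse_wide = np.mean((x - quantize_midrise(x, 200 / 8)) ** 2)   # Delta = 25
mse_narrow = np.mean((x - quantize_midrise(x, 0.3)) ** 2)     # Delta = 0.3
# despite its huge worst-case overload error, the small step wins on average
```

The small step trades a rare, large overload error for a tiny granular error on the 95% of samples near zero, so its average distortion is orders of magnitude lower here.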
Given M, minimize distortion

Write the distortion as a function of the step size and minimize it:

σ_q² = 2 Σ_{i=1}^{M/2−1} ∫_{(i−1)Δ}^{iΔ} (x − (2i−1)Δ/2)² f_X(x) dx   [granular error]
     + 2 ∫_{(M/2−1)Δ}^{∞} (x − (M−1)Δ/2)² f_X(x) dx                  [overload error]

To find the optimal step size, differentiate with respect to Δ and set the derivative equal to 0. This is computed using numerical methods; a closed form would be very difficult to derive.
Overload and Granular Error
Optimum Step Size

Using the Leibniz rule:

dσ_q²/dΔ = −Σ_{i=1}^{M/2−1} (2i−1) ∫_{(i−1)Δ}^{iΔ} (x − (2i−1)Δ/2) f_X(x) dx
          − (M−1) ∫_{(M/2−1)Δ}^{∞} (x − (M−1)Δ/2) f_X(x) dx = 0

As we change the step size, we trade off between the two noise profiles (granular vs. overload).
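Instead of solving the derivative condition symbolically, the distortion can simply be evaluated on a grid of step sizes. A sketch for a unit-variance Gaussian (Monte Carlo; the classic tabulated optimum for M = 8 is near 0.59):

```python
import numpy as np

def msqe(delta, M=8, n=400_000, seed=1):
    """Monte Carlo estimate of granular + overload MSE for an M-level
    uniform midrise quantizer driven by a unit-variance Gaussian."""
    x = np.random.default_rng(seed).standard_normal(n)
    i = np.clip(np.floor(x / delta) + M // 2, 0, M - 1)
    y = (i - M // 2 + 0.5) * delta
    return np.mean((x - y) ** 2)

# crude 1-D grid search over the step size
deltas = np.linspace(0.3, 1.2, 91)
best = deltas[np.argmin([msqe(d) for d in deltas])]
# too small a step raises overload noise, too large a step raises granular
# noise; the minimum sits in between
```

Fixing the random seed makes the estimated curve smooth in Δ (common random numbers), so the grid minimum is a stable estimate of the optimum.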
Optimum Δ

For the same number of levels, the optimum step size for a uniform pdf is smaller than for a Gaussian, which in turn is smaller than for a Laplacian. The Laplacian has more mass in its tails than the Gaussian, so the step size must be larger to control overload noise.
Mismatch Effects

We used the statistics of the source to determine the optimum Δ. However, the actual input may not have the same statistics, which leads to mismatch effects.

[Figure: variance mismatch; a 4-bit uniform quantizer designed for a Gaussian, driven by a Gaussian input of different variance]
Distribution Mismatch

The input pdf does not match the assumed pdf. To see the effect, compare the SNR of different 8-level quantizers: assume the source is uniform, Gaussian, Laplacian, or Gamma, and compute the optimum MSQE step size for a uniform quantizer in each case. The resulting optimum Δ increases in that order.
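The mismatch can be simulated: design the step size for a unit-variance Gaussian (0.586, the classic tabulated optimum for M = 8, is assumed here), then drive the same quantizer with a unit-variance Laplacian:

```python
import numpy as np

def snr_db(x, delta, M=8):
    """SNR of an M-level uniform midrise quantizer on samples x."""
    i = np.clip(np.floor(x / delta) + M // 2, 0, M - 1)
    y = (i - M // 2 + 0.5) * delta
    return 10.0 * np.log10(np.mean(x ** 2) / np.mean((x - y) ** 2))

rng = np.random.default_rng(2)
delta_gauss = 0.586                  # tabulated optimum for Gaussian, M = 8
n = 300_000
matched = snr_db(rng.standard_normal(n), delta_gauss)
laplacian = rng.laplace(0.0, 1 / np.sqrt(2), n)   # unit-variance Laplacian
mismatched = snr_db(laplacian, delta_gauss)
# the heavier Laplacian tails overload the Gaussian-optimized quantizer,
# costing several dB of SNR
```

Both sources have the same variance, so the entire SNR gap comes from the shape mismatch, mostly the extra overload noise from the Laplacian tails.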
Non-uniform quantization

For a uniform quantizer, the decision boundaries are determined by a single parameter, Δ. We can reduce the quantization error further if each decision boundary can be selected freely. Boundary selection can then be based on minimizing the error criterion.
pdf-optimized Quantization

Given the pdf, minimize the MSE

σ_q² = ∫ (x − Q(x))² f_X(x) dx = Σ_{i=1}^{M} ∫_{b_{i−1}}^{b_i} (x − y_i)² f_X(x) dx

by setting the derivative with respect to y_j to zero:

−2 ∫_{b_{j−1}}^{b_j} (x − y_j) f_X(x) dx = 0  ⇒  y_j = ∫_{b_{j−1}}^{b_j} x f_X(x) dx / ∫_{b_{j−1}}^{b_j} f_X(x) dx,  b_{j−1} < x ≤ b_j

i.e., y_j is the centroid of its interval.
If the y_j are determined, the b_j can be selected as the midpoints: differentiating σ_q² with respect to b_j gives

f_X(b_j) { (b_j − y_j)² − (b_j − y_{j+1})² } = 0  ⇒  b_j = (y_j + y_{j+1})/2

The Lloyd-Max quantizer solves iteratively for the b_j and y_j. Let us design a midrise quantizer, where b_0 = 0 and b_{M/2} = max(input). Problem: find the boundaries {b_1, b_2, …, b_{M/2−1}} and reconstruction levels {y_1, y_2, …, y_{M/2}}.
Let's put j = 1: we want to find b_1 and y_1 from

y_1 = ∫_0^{b_1} x f_X(x) dx / ∫_0^{b_1} f_X(x) dx

Guess y_1 and solve for b_1 numerically. Then find y_2 = 2b_1 − y_1, and find b_2 from

y_2 = ∫_{b_1}^{b_2} x f_X(x) dx / ∫_{b_1}^{b_2} f_X(x) dx
Continuing, we find all the b's and y's. The accuracy of these values depends on the initial guess, so we use a stopping criterion: find b_{M/2−1} using the previous equations and use it to find y_{M/2}; also compute y_{M/2} from the fact that we know b_{M/2}, and compare the two. If the difference is less than a threshold, stop; otherwise restart the process with a new guess for y_1.
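The boundary/centroid conditions above can be sketched empirically, using a sample-based Lloyd iteration rather than the slides' root-finding on the pdf integrals:

```python
import numpy as np

def lloyd_max(samples, M=4, iters=100):
    """Empirical Lloyd-Max design: alternate the two optimality
    conditions -- each y_j is the centroid of its cell, each b_j the
    midpoint of the neighboring y's -- until they are consistent."""
    y = np.quantile(samples, (np.arange(M) + 0.5) / M)   # initial guess
    for _ in range(iters):
        b = (y[:-1] + y[1:]) / 2.0          # midpoint condition
        cell = np.searchsorted(b, samples)  # which cell each sample is in
        y = np.array([samples[cell == j].mean() for j in range(M)])
    return y, b

rng = np.random.default_rng(3)
y, b = lloyd_max(rng.standard_normal(400_000), M=4)
# tabulated 4-level Gaussian optimum: levels near +/-0.4528 and +/-1.510
```

Starting from sample quantiles avoids empty cells, and with enough samples the result converges to the tabulated Max quantizer values for the Gaussian.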
4 bit Laplacian nonuniform quantizer
Compander

Often the source characteristics vary over time. A usual way to suppress mismatch effects is to use a compander. Example compressor:

c(x) = 2x            for −1 ≤ x ≤ 1
     = (2x + 4)/3    for x > 1
     = (2x − 4)/3    for x < −1
Expander (the inverse map):

c⁻¹(x) = x/2          for −2 ≤ x ≤ 2
       = (3x − 4)/2   for x > 2
       = (3x + 4)/2   for x < −2
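The two piecewise-linear maps are exact inverses, which a short sketch confirms:

```python
def compress(x):
    """Piecewise-linear compressor from the slides: stretches small
    inputs (slope 2) and squeezes large ones (slope 2/3)."""
    if -1.0 <= x <= 1.0:
        return 2.0 * x
    return (2.0 * x + 4.0) / 3.0 if x > 1.0 else (2.0 * x - 4.0) / 3.0

def expand(x):
    """Inverse map, applied after the uniform quantizer."""
    if -2.0 <= x <= 2.0:
        return x / 2.0
    return (3.0 * x - 4.0) / 2.0 if x > 2.0 else (3.0 * x + 4.0) / 2.0

# expand(compress(x)) recovers x; because small inputs are stretched
# before a uniform quantizer, they effectively see a finer step size
roundtrip_ok = all(abs(expand(compress(v)) - v) < 1e-9
                   for v in (-3.0, -0.5, 0.0, 0.7, 2.5))
```

Sandwiching a uniform quantizer between compress and expand yields the equivalent non-uniform quantizer of the next slide: fine cells near zero, coarse cells in the tails.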
Equivalent Non-uniform Quantizer
If the number of quantizer levels is large and the input is bounded by x_max, it is possible to choose c(x) such that the SNR of the compander is independent of the input pdf:

SNR = 10 log₁₀(3M²) − 20 log₁₀ α

This holds when the compressor slope satisfies c′(x) = x_max/(α|x|), i.e. a logarithmic characteristic. The A-law compander follows this shape away from the origin:

c(x) = sgn(x) · A|x|/(1 + ln A)                         for 0 ≤ |x|/x_max ≤ 1/A
     = sgn(x) · x_max (1 + ln(A|x|/x_max))/(1 + ln A)   for 1/A ≤ |x|/x_max ≤ 1