Nonuniform scalar quantizer

References: Sayood, Chap. 9; Gersho and Gray, Chaps. 5 and 6.

The basic idea: for a nonuniform source density, put smaller cells and levels where the density is larger, thereby making the quantization error smaller on average.

(Figure: an 8-level nonuniform quantizer for such a source, with binary indices 000, 001, 010, ..., 111 assigned to levels w_1, w_2, w_3, ..., w_8.)

Implementation (think of M as large, e.g. 2^16, when thinking of complexity):

1. Brute force.
Encoding by thresholding: compare x to t_1, t_2, ..., just as with USQ.
Complexity: M-1 comparisons and M-1 units of storage.
Decoding by table lookup: just as with USQ, store w_0, ..., w_{M-1} in a table; given c_I, the decoder uses I to address the table and outputs d(c_I) = w_I.
Complexity: 0 ops/sample and M units of storage.

2. Encoding by tree-structured thresholding: just as with USQ, successively compare x to test thresholds associated with the nodes of a binary tree (see the sketch below).
Complexity: log_2 M comparisons/sample and storage of M-1 test thresholds.
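A minimal sketch of the two encoders and the table-lookup decoder (the thresholds and levels below are made-up numbers, purely for illustration):

```python
import bisect

def encode_linear(x, thresholds):
    """Brute-force encoding: scan t_1, ..., t_{M-1} in order.
    Worst case M-1 comparisons per sample."""
    for i, t in enumerate(thresholds):
        if x < t:
            return i
    return len(thresholds)          # x is above every threshold

def encode_tree(x, thresholds):
    """Tree-structured encoding: binary search over the same sorted
    thresholds, i.e. about log2(M) comparisons per sample."""
    return bisect.bisect_right(thresholds, x)

def decode(index, levels):
    """Table-lookup decoding: 0 comparisons, M units of storage."""
    return levels[index]

# An 8-level example with made-up thresholds t_1..t_7 and levels w_1..w_8.
thresholds = [-2.0, -1.0, -0.4, 0.0, 0.4, 1.0, 2.0]
levels = [-2.5, -1.4, -0.7, -0.2, 0.2, 0.7, 1.4, 2.5]
i = encode_tree(0.3, thresholds)
assert i == encode_linear(0.3, thresholds)
print(i, decode(i, levels))
```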

3. Companding: encoder = compressor + uniform thresholds; decoder = uniform levels + expander.

The quantization rule Q is the composition

x --> z = c(x) --> uniform scalar quantizer --> z_hat --> y = Q(x) = c^{-1}(z_hat),

where the compressor c: (-∞, ∞) -> [0,1] is monotonically increasing, the uniform scalar quantizer on [0,1] has step size Δ = 1/M, and the expander is c^{-1}.

(Figure: plot of a compressor curve c(x); the M uniform cells on the vertical axis induce nonuniform cells on the horizontal axis.)

Thresholds: t_i = c^{-1}(i/M), i = 1, ..., M-1. Levels: y_i = c^{-1}(i/M - 1/(2M)), i = 1, ..., M.

One can implement any nonuniform quantizer with a compander. Conversely, any compander is the implementation of some quantizer.

Complexity: that of USQ, plus the compressor and expander functions.
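A minimal sketch of a compander under these definitions. The logistic compressor used here is a hypothetical choice, not one of the standard companders; any strictly increasing c mapping (-∞, ∞) onto (0, 1) works the same way:

```python
import math

M = 256  # number of levels

def c(x):
    """Hypothetical compressor: logistic map of (-inf, inf) onto (0, 1),
    monotonically increasing."""
    return 1.0 / (1.0 + math.exp(-x))

def c_inv(z):
    """Expander: inverse of the compressor."""
    return math.log(z / (1.0 - z))

def compander_quantize(x):
    """Q(x) = c^{-1}(uniform quantization of c(x) with step 1/M)."""
    z = c(x)
    # uniform scalar quantizer on [0,1]: cell index 0..M-1, level at midpoint
    i = min(int(z * M), M - 1)
    z_hat = (i + 0.5) / M            # = i/M + 1/(2M)
    return c_inv(z_hat)

# The implied thresholds and levels match t_i = c^{-1}(i/M) and
# y_i = c^{-1}(i/M - 1/(2M)):
t = [c_inv(i / M) for i in range(1, M)]                     # t_1..t_{M-1}
y = [c_inv(i / M - 1 / (2 * M)) for i in range(1, M + 1)]   # y_1..y_M
print(compander_quantize(0.3), y[min(int(c(0.3) * M), M - 1)])
```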

Examples:

A-law (European PCM standard: A = 87.56):

c(x) = (A|x| / (1 + ln A)) sgn(x),                          |x| <= x_max/A
c(x) = x_max ((1 + ln(A|x|/x_max)) / (1 + ln A)) sgn(x),    x_max/A < |x| <= x_max

(Figure: plot of the A-law compressor curve.)

µ-law (North American PCM standard: µ = 255):

c(x) = x_max (ln(1 + µ|x|/x_max) / ln(1 + µ)) sgn(x)

(Here c maps [-x_max, x_max] onto itself rather than onto [0,1]; the two conventions differ only by an affine rescaling.)
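A sketch of the two standard compressor curves exactly as written above (x_max, A, and µ as defined; the test values are arbitrary):

```python
import math

def a_law(x, x_max=1.0, A=87.56):
    """A-law compressor (European PCM standard)."""
    ax = abs(x)
    if ax <= x_max / A:
        mag = A * ax / (1.0 + math.log(A))
    else:
        mag = x_max * (1.0 + math.log(A * ax / x_max)) / (1.0 + math.log(A))
    return math.copysign(mag, x)

def mu_law(x, x_max=1.0, mu=255.0):
    """mu-law compressor (North American PCM standard)."""
    mag = x_max * math.log(1.0 + mu * abs(x) / x_max) / math.log(1.0 + mu)
    return math.copysign(mag, x)

# Both map [-x_max, x_max] onto itself and expand small amplitudes:
for x in (0.001, 0.01, 0.1, 1.0):
    print(x, a_law(x), mu_law(x))
```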

Things to do

1. Design quantizers to minimize MSE for given M and pdf.
2. Find an approximate expression for the MSE of a quantizer.
3. Find the OPTA function, δ_sq(R).
4. Find an approximate expression for the OPTA function.

Quantizer Design to minimize MSE

Given f_X(x) and M, choose the t_i's and y_i's to minimize MSE.
1. Iterative approach.
2. Asymptotic approach.

Key Properties of Optimal Quantizers (iterative design will be based on these):

1: For a given set of levels y_1, ..., y_M, the thresholds that minimize MSE are midway between the levels, i.e.

t_i = (y_i + y_{i+1})/2,   i = 1, ..., M-1.   (*)

2: For a given set of thresholds t_1, ..., t_{M-1} and source density f_X(x), the levels that minimize MSE are the centroids

y_i = E[X | t_{i-1} < X < t_i] = ∫ x f_X(x | t_{i-1} < X < t_i) dx,   i = 1, ..., M,   (**)

where

f_X(x | t_{i-1} < X < t_i) = f_X(x) / Pr(t_{i-1} < X < t_i) for t_{i-1} < x < t_i, and 0 otherwise.
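A numerical sketch of the two update rules (*) and (**) for a standard Gaussian density, using Riemann sums on a fine grid; the grid limits and spacing are arbitrary choices:

```python
import numpy as np

# standard Gaussian density on a fine uniform grid (truncated at +/- 8 sigma)
x = np.linspace(-8.0, 8.0, 200_001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def midpoint_thresholds(y):
    """(*): t_i = (y_i + y_{i+1}) / 2."""
    return (y[:-1] + y[1:]) / 2

def centroid_levels(t):
    """(**): y_i = E[X | t_{i-1} < X < t_i], by Riemann sums on the grid."""
    edges = np.concatenate(([x[0]], t, [x[-1] + dx]))
    levels = []
    for a, b in zip(edges[:-1], edges[1:]):
        m = (x >= a) & (x < b)
        p = f[m].sum() * dx                      # Pr(t_{i-1} < X < t_i)
        levels.append((x[m] * f[m]).sum() * dx / p)
    return np.array(levels)

# Sanity check: the centroids of the two cells split at t = 0 are
# +/- sqrt(2/pi) = +/- 0.7979 for a standard Gaussian.
print(centroid_levels(np.array([0.0])))
```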

Interpretation/derivation of (1): The encoder controls the decoder, so given x, the best thing is for the encoder to make the decoder produce the level closest to x. In other words, to minimize distortion for each x, we choose the thresholds so that Q(x) is the level closest to x. Since this minimizes (x - Q(x))^2 for each x, it minimizes the distortion

D = ∫ (x - Q(x))^2 f_X(x) dx.

It also implies the thresholds are halfway between the levels.

One can also derive (1) by equating the derivative of the distortion to zero, but why use a powerful mathematical tool when a simple observation suffices?

Exception to the rule: if the density is zero midway between two levels, then the threshold can be anywhere in the interval containing the midpoint where the density is zero.

Interpretation/derivation of (2): the decoder is an MMSE estimator.

Recall estimation theory: if you must estimate the value of X based on having observed the occurrence of an event A, and you want to choose the estimate x̂_A of X to minimize E[(X - x̂_A)^2 | A], then you should choose x̂_A = E[X | A]. (Recall: E(X - c)^2 is minimized by c = EX.)

Now,

D = Σ_{i=1}^{M} E[(X - Q(X))^2 | t_{i-1} < X < t_i] Pr(t_{i-1} < X < t_i)   (law of total expectation)
  = Σ_{i=1}^{M} E[(X - y_i)^2 | t_{i-1} < X < t_i] Pr(t_{i-1} < X < t_i).

We minimize the sum by minimizing each term, which is done by choosing y_i = E[X | t_{i-1} < X < t_i].
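A quick Monte Carlo check of this fact (purely illustrative; the cell and the offsets are arbitrary): within a fixed cell, the conditional mean attains a smaller conditional MSE than any shifted level.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(1_000_000)

# condition on the event A = {0.5 < X < 1.5}
A = (X > 0.5) & (X < 1.5)
xa = X[A]
centroid = xa.mean()                      # estimate of E[X | A]

def cond_mse(level):
    """Estimate of E[(X - level)^2 | A]."""
    return ((xa - level) ** 2).mean()

# the centroid minimizes the conditional MSE over all candidate levels
for lev in (centroid - 0.2, centroid, centroid + 0.2):
    print(f"level {lev:+.3f}: conditional MSE {cond_mse(lev):.4f}")
```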

Corollary to 1 and 2: Given a size M and a source density f_X(x), the best scalar quantizer (i.e. the one with smallest MSE) satisfies (*) and (**). Consequently, 1 and 2 are called optimality properties.

Notes:

There may be more than one optimal quantizer. (Figure: a density f(x) for which two different quantizers are optimal.)

Even if there is only one optimal quantizer, there may be more than one quantizer that satisfies the properties, in which case the best quantizer is one of them. In other words, the optimality properties are necessary for optimality, but not sufficient. (Figure: two quantizers that satisfy the optimality criteria for the same density f(x).) And not all quantizers that satisfy (*) and (**) are best.

If ln(f_X(x)) is a strictly concave function of x, then there is only one optimal quantizer, there is only one quantizer with M levels that satisfies (*) and (**), and this is the unique best quantizer. Example: if f_X is Gaussian, then ln(f_X(x)) = -x^2/(2σ^2) + constant, which is strictly concave.

For a Laplacian density, ln(f_X(x)) is not strictly concave. However, it has been shown that for this special case there is again only one quantizer with M levels that satisfies (*) and (**), and this is the unique best quantizer. Reference: P. E. Fleischer, "Sufficient conditions for achieving minimum distortion in a quantizer," IEEE Int. Conv. Rec., Part I, p. 104, 1964.

Fact: A quantizer satisfies (*) and (**) iff it is a local optimum.

Defn: A quantizer is locally optimum if all sufficiently small perturbations increase or maintain distortion.

What do we mean by "sufficiently small perturbation"? Consider replacing the thresholds {t_1, ..., t_{M-1}} and levels {y_1, ..., y_M} by {t_1 + εu_1, ..., t_{M-1} + εu_{M-1}} and {y_1 + εz_1, ..., y_M + εz_M} for some small ε and arbitrary sets {u_1, ..., u_{M-1}} and {z_1, ..., z_M}. A quantizer is locally optimum iff for any such u and z, there is an ε_o such that for all ε <= ε_o, the perturbed quantizer has the same or larger distortion.

Proof of Fact:

Local opt => (*) and (**): If a quantizer does not satisfy (*) and (**), then it can be improved by small perturbations, so it is not a local optimum. Equivalently, the contrapositive says that if it is a local optimum, then it satisfies (*) and (**).

(*) and (**) => local opt: If a quantizer satisfies (*) and (**), then it is locally optimum, because any small perturbation will cause it not to satisfy (*) and (**) and, consequently, will be a worse quantizer. (This is kind of a fuzzy argument.)

Iterative Design Algorithms

Lloyd Design Algorithm (1957, 1982):
1. Make an initial choice of quantization levels.
2. Use (*) to choose the best thresholds for these levels.
3. Use (**) to choose the best levels for these thresholds.
4. Use (*) to choose the best thresholds for these levels, etc.
Continue until (*) or (**) makes very little change (a sketch follows below).

Notes:

Each step decreases or maintains distortion, so the distortion of the algorithm converges. Works well in practice. The levels usually converge; convergence to a local minimum has been proven under certain conditions.

There is also a Lloyd-Max algorithm: J. Max, "Quantizing for minimum distortion," IRE Trans. Inform. Theory, vol. 6, pp. 7-12, Mar. 1960; S. P. Lloyd, "Least squares quantization in PCM" (Bell Labs memo, 1957), IEEE Trans. Inform. Theory, vol. 28, pp. 129-137, Mar. 1982. Optimum quantizers are often called Max or Lloyd-Max quantizers.
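A minimal sketch of the Lloyd iteration for a standard Gaussian source, run on a large training sample so that the centroid step (**) is just a conditional sample mean; M, the sample size, and the stopping tolerance are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(500_000)             # training samples
M = 8

# initial levels: M sample quantiles (any reasonable spread works)
y = np.quantile(X, (np.arange(M) + 0.5) / M)

prev_mse = np.inf
while True:
    t = (y[:-1] + y[1:]) / 2                 # (*): midpoint thresholds
    idx = np.searchsorted(t, X)              # cell index of each sample
    # (**): centroid levels; assumes no cell goes empty,
    # which holds here with quantile initialization
    y = np.array([X[idx == i].mean() for i in range(M)])
    mse = ((X - y[idx]) ** 2).mean()
    if prev_mse - mse < 1e-9:                # (*)/(**) barely change: stop
        break
    prev_mse = mse

snr_db = 10 * np.log10(X.var() / mse)
# about 14.6 dB at M = 8 (rate 3); cf. the Gaussian table below
print(f"MSE {mse:.5f}, SNR {snr_db:.2f} dB")
```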

Optimal Scalar Quantizers

The figures on the following pages plot the SNR attained by scalar quantizers with minimum MSE for a given rate, where rate is defined by R = log_2 M, for several different source densities. Tables give the plotted SNR's, together with those for optimized uniform scalar quantization. Also given are plots comparing the SNR's for the different densities.

(Figure: SNR (dB) vs. rate for the Gaussian source density, nonuniform vs. uniform scalar quantization.)

Gaussian density

Rate   Unif Q   Opt Q   Gain
 1      4.4      4.4    0
 2      9.25     9.3    0.05
 3     14.27    14.62   0.35
 4     19.38    20.22   0.84
 5     24.57    26.01   1.44
 6     29.83    31.89   2.06
 7     35.13    37.81   2.68
 8     40.34      -      -

(Figure: SNR (dB) vs. rate for the Laplacian source density, nonuniform vs. uniform scalar quantization.)

Laplacian density

Rate   Unif Q   Opt Q   Gain   Panter-Dite   diff
 1      3.01     3.01   0        -0.51       3.52
 2      7.07     7.54   0.47      5.51       2.03
 3     11.44    12.64   1.2      11.53       1.11
 4     15.96    18.13   2.17     17.55       0.58
 5     20.6     23.87   3.27     23.57       0.30
 6     25.36    29.74   4.38     29.59       0.15
 7     30.23    35.69   5.46     35.61       0.08
 8     35.14      -      -         -          -

(Panter-Dite = high-resolution approximation to the optimal SNR; diff = Opt Q minus Panter-Dite.)

(Figure: SNR (dB) vs. rate for the uniform source density; uniform and nonuniform scalar quantization coincide.)

Uniform density

Rate   Unif Q   Opt Q   Gain
 1      6.02     6.02   0
 2     12.04    12.04   0
 3     18.06    18.06   0
 4     24.08    24.08   0
 5     30.1     30.1    0
 6     36.12    36.12   0
 7     42.14    42.14   0
 8     48.17    48.17   0

(Figure: SNR (dB) vs. rate for nonuniform scalar quantization, comparing the uniform, Gaussian, and Laplacian source densities.)

(Figure: SNR (dB) vs. rate for uniform scalar quantization, comparing the uniform, Gaussian, and Laplacian source densities.)