Multimedia Communications. Scalar Quantization


Scalar Quantization

In many lossy compression applications we want to represent source outputs using a small number of codewords. The process of representing a large set of values with a much smaller set is called quantization. A quantizer consists of two mappings: an encoder mapping and a decoder mapping. The encoder divides the range of values that the source generates into a number of intervals, and each interval is represented by a distinct codeword.

[Figure: the input range divided into eight intervals with boundaries at -3, -2, -1, 0, 1, 2, 3, labeled with the 3-bit codewords 000 through 111]

Scalar Quantization

All the source outputs that fall into a particular interval are represented by the codeword of that interval. For every codeword generated by the encoder, the decoder generates a reconstruction value. Because a codeword represents an entire interval, there is no way of knowing which value in the interval was actually generated by the source.

Code   Output
000    -3.5
001    -2.5
010    -1.5
011    -0.5
100     0.5
101     1.5
110     2.5
111     3.5
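As a concrete illustration, here is a minimal Python sketch of this 3-bit quantizer (the step size of 1 and the eight intervals are taken from the table above; clamping out-of-range inputs to the two outer intervals is an assumption):

import math

DELTA = 1.0          # step size implied by the table above
M = 8                # number of intervals (3-bit codewords)

def encode(x):
    # Map x to the index of its interval; the index is the codeword.
    i = int(math.floor(x / DELTA)) + M // 2
    return min(max(i, 0), M - 1)     # clamp the two outer intervals

def decode(i):
    # Return the reconstruction value (interval midpoint) for index i.
    return (i - M // 2) * DELTA + DELTA / 2

x = 1.3
c = encode(x)
print(f"{x} -> codeword {c:03b} -> reconstruction {decode(c)}")
# 1.3 -> codeword 101 -> reconstruction 1.5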

Scalar Quantization

The construction of the intervals can be viewed as part of the design of the encoder, while the selection of the reconstruction values is part of the design of the decoder. The quality of the reconstruction depends on both the intervals and the reconstruction values, so encoder and decoder are designed as a pair. To design a quantizer: divide the input range into intervals, assign binary codes to these intervals, and find reconstruction values, all while satisfying a rate-distortion criterion.

Scalar Quantizers

[Figure: input-output map Q(x) of a scalar quantizer, with decision boundaries b_0, ..., b_5 on the x-axis and reconstruction levels y_0, ..., y_6 on the output axis]

Scalar Quantization

Notation:
- Source: random variable X with pdf f_X(x)
- M: number of intervals
- b_i, i = 0, 1, 2, ..., M: the M+1 end points of the intervals (decision boundaries); b_0 and b_M may be infinite
- y_i: the M reconstruction levels

[Figure: the real line partitioned by boundaries -∞, b_1, b_2, ..., b_6, +∞, with reconstruction levels y_0, y_1, ..., y_6 inside the intervals]

Scalar Quantization

Distortion: the mean squared quantization error,

σ_q² = Σ_{i=1}^{M} ∫_{b_{i-1}}^{b_i} (x − y_i)² f_X(x) dx

Rate: if fixed-length codewords are used to represent the quantizer output, the rate is

R = ⌈log₂ M⌉ bits/sample

so the selection of the decision boundaries does not affect the rate.

Scalar Quantization

If variable-length codewords are used, the rate is the average codeword length,

R = Σ_{i=1}^{M} l_i P(y_i),  with  P(y_i) = ∫_{b_{i-1}}^{b_i} f_X(x) dx

where l_i is the length of the codeword for y_i. Now the selection of the decision boundaries does affect the rate.

Level   Code 1   Code 2
y_1     1110     000
y_2     1100     001
y_3     100      010
y_4     00       011
y_5     01       100
y_6     101      101
y_7     1101     110
y_8     1111     111
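A small sketch of this rate computation in Python, using hypothetical interval probabilities (the slide does not specify them) to show how the P(y_i) determine the average rate of the variable-length Code 1 versus the fixed 3-bit rate of Code 2:

# Hypothetical probabilities P(y_i) of the eight intervals; they sum to 1.
probs = [0.03, 0.05, 0.15, 0.30, 0.25, 0.12, 0.06, 0.04]
code1 = ["1110", "1100", "100", "00", "01", "101", "1101", "1111"]
code2 = ["000", "001", "010", "011", "100", "101", "110", "111"]

def avg_rate(code, probs):
    # Average rate in bits/sample: sum over i of l_i * P(y_i).
    return sum(len(c) * p for c, p in zip(code, probs))

print(avg_rate(code1, probs))   # 2.63 bits/sample
print(avg_rate(code2, probs))   # 3.00 bits/sample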

Scalar Quantization

The problem of finding the optimum scalar quantizer:
1. Given a distortion constraint, find the decision boundaries, reconstruction levels, and binary codes that minimize the rate; or
2. Given a rate constraint R < R*, find the decision boundaries, reconstruction levels, and binary codes that minimize the distortion.

[Diagram: taxonomy of scalar quantizers, split into fixed-length versus variable-length codewords and into uniform versus non-uniform quantizers]

Uniform Quantizers

All intervals are the same size, except possibly for the two outer intervals (the decision boundaries are spaced evenly). The reconstruction values are also spaced evenly, with the same spacing as the decision boundaries; in the inner intervals they are the midpoints of the intervals. If zero is not a reconstruction level, the quantizer is called a midrise quantizer (M is even). If zero is a reconstruction level, it is called a midtread quantizer (M is odd).

Uniform Quantizers

[Figure: input-output maps of a midrise uniform quantizer (output levels ±Δ/2, ±3Δ/2, ±5Δ/2, ...) and a midtread uniform quantizer (output levels 0, ±Δ, ±2Δ, ..., with boundaries at ±Δ/2, ±3Δ/2, ±5Δ/2)]
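The difference is easy to state in code; a minimal Python sketch of the two ideal (unbounded) staircase maps:

import math

def midrise(x, delta=1.0):
    # Zero is a decision boundary; outputs are odd multiples of delta/2.
    return delta * (math.floor(x / delta) + 0.5)

def midtread(x, delta=1.0):
    # Zero is an output level; outputs are integer multiples of delta.
    return delta * math.floor(x / delta + 0.5)

print(midrise(0.2), midtread(0.2))   # 0.5 0.0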

Uniform Quantizers for Uniform Sources

Input: uniformly distributed in [−x_max, x_max]. For an M-level uniform quantizer, we divide [−x_max, x_max] into M equally sized intervals, each with a step size of

Δ = 2 x_max / M

[Figure: input-output map of an 8-level uniform quantizer with x_max = 4Δ, output levels ±Δ/2, ±3Δ/2, ±5Δ/2, ±7Δ/2]

Uniform Quantizers for Uniform Sources

For uniform distributions, the quantization error q = Q(x) − x of a uniform quantizer is itself uniformly distributed in [−Δ/2, +Δ/2]. Then E[q] = 0 and σ_q² = E[q²] = Δ²/12. If the quantizer output is encoded using n bits per sample (M = 2ⁿ), then Δ = 2x_max/2ⁿ, σ_x² = (2x_max)²/12, and

SNR(dB) = 10 log₁₀(σ_x²/σ_q²) = 10 log₁₀(2²ⁿ) = 6.02 n dB

[Figure: the quantization error x − Q(x), a sawtooth bounded by ±Δ/2, for x_max = 4Δ]
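A quick Monte Carlo check of both results (a sketch; the unit range x_max = 1, n = 4 bits, and the midrise quantizer are assumptions):

import numpy as np

rng = np.random.default_rng(0)
x_max, n = 1.0, 4
M = 2 ** n
delta = 2 * x_max / M

x = rng.uniform(-x_max, x_max, 1_000_000)
q = delta * (np.floor(x / delta) + 0.5)      # midrise uniform quantizer
err = x - q

print(np.var(err), delta**2 / 12)             # both ~ delta^2/12
print(10 * np.log10(np.var(x) / np.var(err)), 6.02 * n)   # both ~ 24.1 dB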

Uniform Quantizers for Non-uniform Sources

When the distribution is not uniform, it is not a good idea to find the step size by dividing the range of inputs by the number of levels (the range may even be infinite). Instead, for a given pdf an optimal step size can be found that minimizes the distortion D.

[Figure: input-output map of an 8-level uniform midrise quantizer applied to an unbounded input]

Uniform Quantizers for Non-uniform Sources

Given f_X(x), the value of the step size can be calculated using numerical techniques. Results for different values of M and different pdfs are given in Table 9.3 of the book. If the input is unbounded, the quantization error is no longer bounded: in the inner intervals the error is bounded and is called granular error; the unbounded error in the outer intervals is called overload error.

[Figure: the quantization error x − Q(x); a bounded sawtooth (granular noise) in the inner intervals, growing without bound (overload noise) beyond the last boundary]
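The numerical search itself is straightforward; a brute-force sketch for a unit-variance Gaussian source with M = 8 (the grid, the tail truncation at ±8σ, and the scan range for Δ are assumptions):

import numpy as np

x = np.linspace(-8, 8, 200001)                # dense grid, tails truncated
dx = x[1] - x[0]
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # unit-variance Gaussian

def distortion(delta, M=8):
    # MSE of an M-level uniform midrise quantizer: granular + overload.
    q = delta * (np.floor(x / delta) + 0.5)
    q = np.clip(q, -(M - 1) * delta / 2, (M - 1) * delta / 2)
    return np.sum((x - q)**2 * pdf) * dx

deltas = np.linspace(0.1, 1.5, 141)
best = min(deltas, key=distortion)
print(best, distortion(best))   # should land near 0.586 (cf. Table 9.3)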

Uniform Quantizers for Non-uniform Sources

The pdf of most non-uniform sources peaks at zero and decays away from the origin, so the overload probability is generally much smaller than the probability of the granular region. Increasing the step size reduces the overload error but increases the granular error. The loading factor is defined as the ratio of the maximum value the input can take in the granular region to the standard deviation of the input; a typical value for the loading factor is 4.

[Figure: the quantization error x − Q(x), again showing the granular and overload noise regions]

Adaptive Quantization

Mismatch effects occur when the pdf of the input signal changes; the reconstruction quality then degrades. Two approaches to quantizer adaptation:
- Forward: a block of data is processed and its mean and variance are sent as side information. A delay is necessary, and the block size matters: if the block is too large, the adaptation may not capture changes in the input statistics; if it is too small, the side information adds significant overhead.
- Backward: adaptation is based only on the quantized values, so no side information is needed. The Jayant quantizer is the classic example.

Adaptive Quantization

By studying the inputs and outputs of a quantizer, we can detect a mismatch from the distribution of the output values. If Δ (the quantizer step size) is smaller than it should be, the input falls in the outer levels of the quantizer an excessive number of times; if Δ is larger than it should be, the input falls in the inner levels an excessive number of times. The Jayant quantizer exploits this: if the input falls in the outer levels, the step size is expanded; if it falls in the inner levels, the step size is reduced.

Adaptive Quantization

In the Jayant quantizer, expansion and contraction of the step size is accomplished by assigning a multiplier M_k to each interval. If the (n−1)th input falls in the kth interval, the step size for the nth input is obtained by multiplying the step size used for the (n−1)th input by M_k. Multipliers for the inner levels are less than one; multipliers for the outer levels are greater than one. In math:

Δ_n = M_{l(n−1)} Δ_{n−1}

where l(n−1) is the index of the interval in which the (n−1)th input falls.

Adaptive Quantization

If the input values are small for a long period of time, the step size keeps shrinking; in a finite-precision system it would eventually become zero. The solution is to define a minimum value Δ_min below which the step size cannot go. Similarly, to avoid the step size becoming too large, a maximum value Δ_max is defined.

Adaptive Quantization

How do we choose the values of the multipliers? Stability criterion: once the quantizer is matched to the input, the product of the expansions and contractions should equal one, i.e. Π_k M_k^{n_k} = 1, where n_k is the number of times the input falls in the kth interval. This is commonly satisfied by choosing the multipliers as

M_k = γ^{l_k}

with γ > 1 and integers l_k (negative for the inner levels, positive for the outer levels).
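A minimal backward-adaptive sketch of the Jayant update in Python (the 2-bit quantizer, the multiplier values, and the clamping bounds are illustrative assumptions, not the book's tables):

# Illustrative 4-level Jayant quantizer: output levels +/-delta/2, +/-3delta/2.
MULTIPLIERS = [0.8, 1.6]     # inner interval < 1, outer interval > 1 (assumed)
D_MIN, D_MAX = 1e-4, 1e4     # clamp so the step size stays finite and nonzero

def jayant_encode(samples, delta=1.0):
    # Returns (interval index, reconstruction) pairs; a decoder that sees the
    # indices can track delta with exactly the same update rule.
    out = []
    for x in samples:
        k = 1 if abs(x) >= delta else 0                   # outer or inner
        y = (1.0 if x >= 0 else -1.0) * (delta / 2 if k == 0 else 3 * delta / 2)
        out.append((k, y))
        delta = min(max(delta * MULTIPLIERS[k], D_MIN), D_MAX)  # Jayant update
    return out

print(jayant_encode([0.1, 0.1, 2.0, 2.0, 0.5]))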

Non-uniform Scalar Quantizers

[Figure: input-output map Q(x) of a non-uniform quantizer, with unevenly spaced decision boundaries x_0, ..., x_5 and reconstruction levels y_0, ..., y_6]

Non-uniform Quantizers

Non-uniform quantizers attempt to decrease the average distortion by assigning more levels to the more probable regions. For a given M and input pdf, we need to choose {b_i} and {y_i} to minimize the distortion

σ_q² = Σ_{i=1}^{M} ∫_{b_{i-1}}^{b_i} (x − y_i)² f_X(x) dx

Non-uniform Quantizers

The optimum reconstruction levels depend on the boundaries, and the optimum boundaries depend on the reconstruction levels, so the two cannot be found independently. Instead, it is easier to state two separate optimality conditions:
- For a given partition (encoder), the optimum codebook (decoder) places each level at the centroid of its interval: y_i = ∫_{b_{i-1}}^{b_i} x f_X(x) dx / ∫_{b_{i-1}}^{b_i} f_X(x) dx
- For a given codebook (decoder), the optimum partition (encoder) places each boundary midway between neighboring levels: b_i = (y_i + y_{i+1}) / 2

The Lloyd Algorithm

It is difficult to solve both sets of equations analytically. An iterative algorithm known as the Lloyd algorithm solves the problem by alternately optimizing the encoder and the decoder until both conditions are met with sufficient accuracy:
1. Choose initial reconstruction values.
2. Find the optimum boundaries (midpoints).
3. Compute the distortion and the change in distortion.
4. If the decrease in distortion is still significant, find new reconstruction values (centroids) and go to step 2; otherwise stop.
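A sketch of the Lloyd iteration for a unit-variance Gaussian pdf evaluated on a dense grid (the grid, the initial levels, and the stopping threshold are assumptions):

import numpy as np

x = np.linspace(-6, 6, 100001)
dx = x[1] - x[0]
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

M = 8
y = np.linspace(-2, 2, M)            # step 1: initial reconstruction values
prev_d = np.inf
while True:
    b = (y[:-1] + y[1:]) / 2         # step 2: boundaries at the midpoints
    idx = np.searchsorted(b, x)      # interval index of every grid point
    d = np.sum((x - y[idx])**2 * pdf) * dx     # step 3: distortion
    if prev_d - d < 1e-9:            # step 4: no significant decrease -> stop
        break
    prev_d = d
    for i in range(M):               # new levels: centroids of the intervals
        m = idx == i
        y[i] = np.sum(x[m] * pdf[m]) / np.sum(pdf[m])

print(np.round(y, 3))   # approx. +/-0.245, +/-0.756, +/-1.344, +/-2.152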

Companded Quantization

[Block diagram: compressor → uniform quantizer → expander]

Instead of making the step size small for the intervals in which the input lies with high probability, stretch these intervals (compressor), use a uniform quantizer, and undo the stretching at the output (expander). This is equivalent to a non-uniform quantizer. Example: the µ-law compander,

c(x) = x_max · ln(1 + µ|x|/x_max) / ln(1 + µ) · sgn(x)

with its inverse

c⁻¹(y) = (x_max/µ) · [(1 + µ)^{|y|/x_max} − 1] · sgn(y)
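A sketch of companded quantization with the µ-law (µ = 255 as in North American telephony; the 8-level uniform quantizer and x_max = 1 are assumed for the example):

import numpy as np

MU, X_MAX = 255.0, 1.0

def compress(x):
    return X_MAX * np.log1p(MU * np.abs(x) / X_MAX) / np.log1p(MU) * np.sign(x)

def expand(y):
    return X_MAX / MU * np.expm1(np.abs(y) / X_MAX * np.log1p(MU)) * np.sign(y)

def companded_quantizer(x, M=8):
    delta = 2 * X_MAX / M
    y = compress(np.asarray(x, dtype=float))
    q = np.clip(delta * (np.floor(y / delta) + 0.5),
                -X_MAX + delta / 2, X_MAX - delta / 2)   # uniform midrise
    return expand(q)

print(companded_quantizer([0.02, 0.1, 0.9]))
# small inputs are reproduced much more finely than large ones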

Entropy-Constrained Quantization

Two approaches:
1) Keep the design of the quantizer the same and entropy-code the quantizer output.
2) Take into account that the selection of the decision boundaries affects the rate, and optimize jointly: entropy-constrained optimization. Minimize the distortion subject to the constraint that the output entropy (rate) not exceed a target, H(Q) = −Σ P(y_i) log₂ P(y_i) ≤ R*.
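One common way to carry out the joint optimization is a Lagrangian search over J = σ_q² + λ·H. A sketch for uniform step sizes on a Gaussian source (λ, the grid, and M = 32 levels are assumptions):

import numpy as np

x = np.linspace(-8, 8, 200001)
dx = x[1] - x[0]
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
lam = 0.05                                    # assumed Lagrange multiplier

def cost(delta, M=32):
    idx = np.clip(np.floor(x / delta) + M // 2, 0, M - 1).astype(int)
    y = (idx - M // 2 + 0.5) * delta          # midrise reconstruction levels
    d = np.sum((x - y)**2 * pdf) * dx         # distortion
    p = np.bincount(idx, weights=pdf * dx, minlength=M)
    p = p[p > 0]
    h = -np.sum(p * np.log2(p))               # output entropy = rate
    return d + lam * h

deltas = np.linspace(0.05, 1.0, 96)
print(min(deltas, key=cost))    # step size minimizing J = D + lambda*H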

High-Rate Optimum Quantization

The iterative method above is called the generalized Lloyd algorithm. At high rates, the optimum entropy-constrained quantizer is the uniform quantizer! At high rates, the distortion of a uniform quantizer is approximately

σ_q² ≈ Δ²/12

and its output entropy is approximately

H(Q) ≈ h(X) − log₂ Δ

where h(X) is the differential entropy of the source.