Gaussian source: given $d(x,y) = (x-y)^2$ and a distortion constraint $D$, find a lower bound on $I(X;Y)$


Assumptions: squared-error distortion $d(x,y) = (x-y)^2$ and $E[(X-Y)^2] \le D < \sigma^2$. Then

$$I(X;Y) = h(X) - h(X \mid Y) = h(X) - h(X-Y \mid Y) \ge h(X) - h(X-Y) \ge \tfrac{1}{2}\log 2\pi e\sigma^2 - \tfrac{1}{2}\log 2\pi e D = \tfrac{1}{2}\log\frac{\sigma^2}{D},$$

where the first inequality holds because conditioning cannot increase entropy, and the second because the Gaussian maximizes differential entropy among all random variables with variance at most $D$. Suppose you do not know the characteristics of the source, but it looks somewhat like a Laplacian: can you comment on $R(D)$? In lossless coding, an algorithm is optimal if it achieves the entropy of the source; $R(D)$ plays the same role for lossy coding.
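As a quick numerical check, here is a minimal Python sketch (the function name is illustrative) that evaluates this lower bound, $R(D) = \max\{0, \tfrac{1}{2}\log_2(\sigma^2/D)\}$, for a Gaussian source:

```python
import numpy as np

def gaussian_rate_distortion(sigma2: float, D: float) -> float:
    """Rate-distortion function of a memoryless Gaussian source
    with squared-error distortion, in bits per sample."""
    if D >= sigma2:
        return 0.0  # zero rate suffices: reconstruct with the mean
    return 0.5 * np.log2(sigma2 / D)

# Example: unit-variance source, distortion one tenth of the variance
print(gaussian_rate_distortion(1.0, 0.1))  # ~1.66 bits/sample
```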

Probability Models: uniform distribution, Gaussian distribution, Laplacian distribution. The Laplacian pdf with variance $\sigma^2$ is

$$f_X(x) = \frac{1}{\sqrt{2}\,\sigma}\, e^{-\sqrt{2}\,|x|/\sigma}.$$

Quantization: a process of representing a large, possibly infinite, set of values with a much smaller set. Scalar quantization: a mapping of an input value $x$ into a finite number of output values $y$. [Figure: staircase input-output map of an 8-level quantizer; inputs from $-3.0$ to $3.0$ are mapped to the 3-bit codes 000 through 111.]

Input codes and reconstruction outputs for this 8-level quantizer ($\Delta = 1$):

Code    Output
000     -3.5
001     -2.5
010     -1.5
011     -0.5
100      0.5
101      1.5
110      2.5
111      3.5
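A minimal sketch of this 8-level midrise uniform quantizer in Python (step size $\Delta = 1$, inferred from the table above):

```python
import numpy as np

DELTA = 1.0      # step size, inferred from the output levels above
M = 8            # number of levels

def quantize(x: np.ndarray) -> np.ndarray:
    """Midrise uniform quantizer: index each sample, clip to M levels,
    and reconstruct at the midpoint of its cell."""
    idx = np.floor(x / DELTA)                      # cell index
    idx = np.clip(idx, -M // 2, M // 2 - 1)        # limit to 8 cells
    return (idx + 0.5) * DELTA                     # midpoint reconstruction

print(quantize(np.array([-3.2, -0.1, 0.4, 2.7])))  # [-3.5 -0.5  0.5  2.5]
```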

Types of uniform quantizers: Midrise quantizers have an even number of output levels. Midtread quantizers have an odd number of output levels, including zero as one of them.

Quantization error: Since the reconstruction values $y_i$ are the midpoints of each interval, the quantization error must lie in $[-\Delta/2, \Delta/2]$. For a uniformly distributed source, the quantization error is a sawtooth function of the input, sweeping from $-\Delta/2$ to $\Delta/2$ within each cell. [Figure: quantization error vs. input.]

Quantization operation: Let $M$ be the number of reconstruction levels, with decision boundaries $\{b_i\}_{i=0}^{M}$. Then

$$Q(x) = y_i \quad \text{if } b_{i-1} < x \le b_i,$$

and the mean squared quantization error is

$$\mathrm{MSE} = \sigma_q^2 = \int_{-\infty}^{\infty} \left(x - Q(x)\right)^2 f_X(x)\,dx = \sum_{i=1}^{M} \int_{b_{i-1}}^{b_i} (x - y_i)^2 f_X(x)\,dx.$$

Let us first represent the quantizer output using fixed-length codewords.

If there are $M$ levels, the rate is $R = \lceil \log_2 M \rceil$; for 8 levels we need 3-bit codewords. Quantizer design problem: given $f_X(x)$ and $M$, find the decision boundaries $b_i$ and reconstruction levels $y_i$ that minimize the MSE.

If we use VLCs to represent the $y_i$, the rate is

$$R = \sum_{i=1}^{M} l_i P(y_i), \qquad P(y_i) = \int_{b_{i-1}}^{b_i} f_X(x)\,dx,$$

where $l_i$ is the length of the codeword for $y_i$. The design problem becomes: given a distortion constraint $\sigma_q^2 \le D^*$, find the decision boundaries, reconstruction levels, and binary codes that minimize the rate.
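As an illustration (assuming a unit-variance Gaussian source and an 8-level uniform quantizer with $\Delta = 0.5$; both choices are mine, not the slide's), the cell probabilities $P(y_i)$ and the entropy bound on the variable-length rate can be computed as:

```python
import numpy as np
from scipy.stats import norm

delta = 0.5
b = np.arange(-3, 4) * delta                    # interior boundaries b_1 ... b_7
edges = np.concatenate(([-np.inf], b, [np.inf]))
p = np.diff(norm.cdf(edges))                    # P(y_i) for the M = 8 cells

entropy = -np.sum(p * np.log2(p))               # lower bound on the VLC rate
print(p.round(3), round(entropy, 2))            # ~2.9 bits < 3-bit fixed-length rate
```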

Uniform Quantization of a Uniform Source: Input uniform on $[-X_{max}, X_{max}]$; output an $M$-level uniform quantizer with $\Delta = 2X_{max}/M$. Then

$$\sigma_q^2 = 2\sum_{i=1}^{M/2} \int_{(i-1)\Delta}^{i\Delta} \left(x - \frac{2i-1}{2}\Delta\right)^2 \frac{1}{2X_{max}}\,dx = \frac{\Delta^2}{12}.$$

Consider the quantization error instead: $q = x - Q(x)$, with $q \in [-\Delta/2, \Delta/2]$ and uniformly distributed, so

$$\sigma_q^2 = \frac{1}{\Delta}\int_{-\Delta/2}^{\Delta/2} q^2\,dq = \frac{\Delta^2}{12}.$$

The signal variance is

$$E[X^2] = \int_{-X_{max}}^{X_{max}} x^2 \frac{1}{2X_{max}}\,dx = \frac{X_{max}^2}{3} = \frac{(2X_{max})^2}{12}.$$

With fixed-length codewords, $M$ levels need $n$ bits, $M = 2^n$.

$$\mathrm{SNR(dB)} = 10\log_{10}\frac{\sigma_x^2}{\sigma_q^2} = 10\log_{10}\frac{(2X_{max})^2/12}{\Delta^2/12} = 20\log_{10} M = 20n\log_{10}2 \approx 6.02\,n\ \mathrm{dB}.$$

In terms of $n$, SNR $= 6.02n$ dB: for every additional bit we get an increase of about 6.02 dB.
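A quick Monte Carlo sanity check of the $\Delta^2/12$ noise model and the 6.02 dB/bit rule (a sketch; source and bit depth are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x_max, n = 1.0, 4                        # bounded uniform source, 4-bit quantizer
M = 2 ** n
delta = 2 * x_max / M

x = rng.uniform(-x_max, x_max, 1_000_000)
q = (np.floor(x / delta) + 0.5) * delta  # midrise reconstruction
noise = x - q

print(noise.var(), delta ** 2 / 12)      # should agree closely
snr_db = 10 * np.log10(x.var() / noise.var())
print(snr_db, 6.02 * n)                  # ~24.1 dB for n = 4
```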

Image compression. [Figure: the same image quantized at 1, 2, and 3 bits/pixel.]

Uniform Quantization of Nonuniform Sources. Example nonuniform source: $x \in [-100, 100]$ with $P(x \in [-1,1]) = 0.95$. Problem: design an 8-level quantizer. The previous approach gives $\Delta = 25$, so 95% of the sample values are represented by just two numbers, $-12.5$ and $12.5$; for those samples the maximum quantization error (QE) is 12.5 and the minimum is 11.5. Consider an alternative, $\Delta = 0.3$: the maximum QE grows to 98.5, but 95% of the time the QE is below 0.15. The average distortion is lower in the latter case.
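A sketch comparing the two step sizes numerically. The slide does not fully specify the source, so I assume a Laplacian scaled so that $P(|x| \le 1) = 0.95$, clipped to $[-100, 100]$; the exact numbers depend on that assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
# Assumed source: Laplacian with P(|x| <= 1) = 0.95, clipped to [-100, 100]
b = 1 / np.log(20)                      # scale giving 1 - exp(-1/b) = 0.95
x = np.clip(rng.laplace(0, b, 1_000_000), -100, 100)

def mse(x, delta, M=8):
    idx = np.clip(np.floor(x / delta), -M // 2, M // 2 - 1)
    return np.mean((x - (idx + 0.5) * delta) ** 2)

print(mse(x, 25.0))   # ~1.5e2: 95% of the samples err by roughly 12
print(mse(x, 0.3))    # ~2e-2: rare large overload errors, tiny granular error
```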

Given $M$, minimize the distortion: write the distortion as a function of the step size and minimize that function. For a symmetric unbounded source,

$$\sigma_q^2 = 2\sum_{i=1}^{M/2-1} \int_{(i-1)\Delta}^{i\Delta} \left(x - \frac{2i-1}{2}\Delta\right)^2 f_X(x)\,dx \;+\; 2\int_{\left(\frac{M}{2}-1\right)\Delta}^{\infty} \left(x - \frac{M-1}{2}\Delta\right)^2 f_X(x)\,dx,$$

where the first term is the granular error and the second the overload error. To find the optimal step size, differentiate with respect to the step size and set the derivative equal to 0; this is computed using numerical methods, since a closed form would be very difficult to derive.

Overload and Granular Error. [Figure: granular noise inside the quantizer range; overload noise in the tails beyond the outermost levels.]

Optimum Step Size: Using the Leibniz integral rule,

$$\frac{d\sigma_q^2}{d\Delta} = -\sum_{i=1}^{M/2-1}(2i-1)\int_{(i-1)\Delta}^{i\Delta}\left(x - \frac{2i-1}{2}\Delta\right) f_X(x)\,dx \;-\; (M-1)\int_{\left(\frac{M}{2}-1\right)\Delta}^{\infty}\left(x - \frac{M-1}{2}\Delta\right) f_X(x)\,dx = 0.$$

As we change the step size, we trade off between the two noise profiles: a larger $\Delta$ reduces overload noise but increases granular noise.
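A sketch of the numerical search for a unit-variance Gaussian, minimizing $\sigma_q^2(\Delta)$ directly instead of solving the derivative condition (scipy's bounded minimizer stands in for the root-finding):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.stats import norm

M = 8  # number of levels

def sigma_q2(delta: float) -> float:
    """Granular plus overload MSE of an M-level uniform midrise quantizer
    applied to a zero-mean, unit-variance Gaussian."""
    granular = sum(
        quad(lambda x, i=i: (x - (2 * i - 1) / 2 * delta) ** 2 * norm.pdf(x),
             (i - 1) * delta, i * delta)[0]
        for i in range(1, M // 2))
    overload = quad(lambda x: (x - (M - 1) / 2 * delta) ** 2 * norm.pdf(x),
                    (M // 2 - 1) * delta, np.inf)[0]
    return 2 * (granular + overload)

res = minimize_scalar(sigma_q2, bounds=(0.01, 2.0), method='bounded')
print(res.x)  # ~0.586 for M = 8, close to the classical tabulated value
```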

Optimum Δ: For the same number of levels, the optimum step size for the uniform pdf is smaller than for the Gaussian, which in turn is smaller than for the Laplacian. The Laplacian has more mass in its tails than the Gaussian, so the step size must be larger to control the overload noise.

Mismatch Effects: We used the statistics of the source to determine the optimum Δ. The actual input, however, may not have the same statistics, which leads to mismatch effects. [Figure: variance mismatch; SNR of a 4-bit Gaussian-optimized uniform quantizer as a function of the variance of the Gaussian input.]

Distribution Mismatch: the input pdf does not match the assumed pdf. Compare the SNR of different 8-level uniform quantizers, each designed with the optimum MSQE step size for an assumed source: uniform, Gaussian, Laplacian, or Gamma. The resulting Δ grows in that order (uniform smallest, Gamma largest), and the SNR drops whenever the actual input follows a different distribution than the one assumed.

Non-uniform quantization: For a uniform quantizer, the decision boundaries are determined by a single parameter, Δ. We can reduce the quantization error further if each decision boundary can be selected freely. Boundary selection can then be based on minimizing an error criterion.

pdf-optimized Quantization: Given the pdf, minimize the MSE

$$\sigma_q^2 = \int_{-\infty}^{\infty} \left(x - Q(x)\right)^2 f_X(x)\,dx = \sum_{i=1}^{M} \int_{b_{i-1}}^{b_i} (x - y_i)^2 f_X(x)\,dx.$$

Find the minimum by setting the derivative with respect to $y_j$ equal to zero:

$$-2\int_{b_{j-1}}^{b_j} (x - y_j)\, f_X(x)\,dx = 0 \quad\Longrightarrow\quad y_j = \frac{\int_{b_{j-1}}^{b_j} x f_X(x)\,dx}{\int_{b_{j-1}}^{b_j} f_X(x)\,dx},$$

i.e., each reconstruction level is the centroid of its cell $b_{j-1} \le x < b_j$.

If the $y_j$ are determined, the $b_j$ can be selected by setting the derivative of $\sigma_q^2$ with respect to $b_j$ to zero:

$$\frac{\partial}{\partial b_j}\left\{\int_{b_{j-1}}^{b_j}(x - y_j)^2 f_X(x)\,dx + \int_{b_j}^{b_{j+1}}(x - y_{j+1})^2 f_X(x)\,dx\right\} = f_X(b_j)\left[(b_j - y_j)^2 - (b_j - y_{j+1})^2\right] = 0,$$

which gives $b_j = (y_j + y_{j+1})/2$: each boundary is the midpoint of the two neighboring reconstruction levels. The Lloyd-Max quantizer solves iteratively for the $b_j$ and $y_j$. Let us design a midrise quantizer, where $b_0 = 0$ and $b_{M/2} = \max(\text{input})$. Problem: find the boundaries $\{b_1, b_2, \ldots, b_{M/2-1}\}$ and reconstruction levels $\{y_1, y_2, \ldots, y_{M/2}\}$.

Let us put $j = 1$; we want to find $b_1$ and $y_1$ from

$$y_1 = \frac{\int_{0}^{b_1} x f_X(x)\,dx}{\int_{0}^{b_1} f_X(x)\,dx}.$$

Guess $y_1$ and solve for $b_1$ numerically. Then find $y_2 = 2b_1 - y_1$ (from the midpoint condition), and find $b_2$ from

$$y_2 = \frac{\int_{b_1}^{b_2} x f_X(x)\,dx}{\int_{b_1}^{b_2} f_X(x)\,dx}.$$

Continue in this way to find all the $b$'s and $y$'s: find $b_{M/2-1}$ using the previous equations, and use it to find $y_{M/2}$. Also compute $y_{M/2}$ directly as the centroid of $[b_{M/2-1}, b_{M/2}]$, since we know $b_{M/2}$, and compare the two values. The accuracy of the final values depends on the initial guess, so this comparison serves as a stopping criterion: if the difference is less than a threshold, stop; otherwise adjust $y_1$ and start the process again. A code sketch follows.
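A minimal sketch of this iteration for the positive half of a midrise Lloyd-Max quantizer on a unit-variance Gaussian. Treating $6\sigma$ as $\max(\text{input})$ is my assumption, and bisection on $y_1$ is one simple way to implement the "adjust and restart" step:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import norm

M, B_MAX = 8, 6.0          # 8 levels; 6 sigma stands in for max(input)

def centroid(a, b):
    """Centroid of the cell [a, b] under a unit Gaussian pdf."""
    return quad(lambda x: x * norm.pdf(x), a, b)[0] / quad(norm.pdf, a, b)[0]

def mismatch(y1):
    """Recurse b_j, y_j forward from a guessed y1; return the gap between
    the recursed y_{M/2} and the centroid of the last cell."""
    b, y = 0.0, y1
    for _ in range(M // 2 - 1):
        if centroid(b, B_MAX) <= y:          # guess too large: ran off the end
            return 1.0
        b_next = brentq(lambda t: centroid(b, t) - y, b + 1e-9, B_MAX)
        y, b = 2 * b_next - y, b_next        # midpoint rule: y_{j+1} = 2 b_j - y_j
    return y - centroid(b, B_MAX)

y1 = brentq(mismatch, 0.05, 1.0)             # adjust y1 until both estimates agree
print(y1)                                    # ~0.245 for the 8-level Gaussian design
```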

4-bit Laplacian nonuniform quantizer. [Figure/table: its decision boundaries and reconstruction levels.]

Compander: Often the source characteristics vary over time. A usual way to suppress mismatch effects is to use a compander (a compressor, followed by a uniform quantizer, followed by an expander). Example compressor:

$$c(x) = \begin{cases} 2x, & -1 \le x \le 1,\\[4pt] \dfrac{2x}{3} + \dfrac{4}{3}, & x > 1,\\[4pt] \dfrac{2x}{3} - \dfrac{4}{3}, & x < -1. \end{cases}$$

Expander (the inverse mapping):

$$c^{-1}(x) = \begin{cases} \dfrac{x}{2}, & -2 \le x \le 2,\\[4pt] \dfrac{3x}{2} - 2, & x > 2,\\[4pt] \dfrac{3x}{2} + 2, & x < -2. \end{cases}$$
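A sketch of companded quantization with this pair. The 8-level, $\Delta = 1$ uniform quantizer in the middle is my choice; the slides do not fix it:

```python
import numpy as np

def compress(x):
    return np.where(np.abs(x) <= 1, 2 * x, (2 * x + np.sign(x) * 4) / 3)

def expand(x):
    return np.where(np.abs(x) <= 2, x / 2, (3 * x - np.sign(x) * 4) / 2)

def uniform_q(x, delta=1.0, M=8):
    idx = np.clip(np.floor(x / delta), -M // 2, M // 2 - 1)
    return (idx + 0.5) * delta

x = np.linspace(-4, 4, 9)
print(expand(compress(x)) - x)          # ~0: compressor and expander invert
print(expand(uniform_q(compress(x))))   # companded quantization of x
```

Note the effective cell width: $\Delta/2$ for $|x| < 1$ (where the compressor has slope 2) versus $3\Delta/2$ in the tails, i.e., fine cells where this source is assumed to concentrate.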

Equivalent Non-uniform Quantizer. [Figure: the compressor, uniform quantizer, and expander in cascade act as a single non-uniform quantizer with fine cells near zero and coarse cells in the tails.]

If the number of quantizer levels is large and the input is bounded by $x_{max}$, it is possible to choose a $c(x)$ such that the SNR of the compander is independent of the input pdf:

$$\mathrm{SNR} = 10\log_{10}(3M^2) - 20\log_{10}\alpha,$$

obtained with the logarithmic compressor

$$c(x) = \mathrm{sgn}(x)\left[x_{max} + \frac{x_{max}}{\alpha}\ln\frac{|x|}{x_{max}}\right], \qquad 0 < \frac{|x|}{x_{max}} \le 1.$$
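A sketch of why this choice of $c(x)$ makes the SNR pdf-independent, using the standard high-rate (fine-quantization) approximation; the derivation is mine, consistent with the slide's formula:

```latex
% High-rate approximation: the compressor locally stretches the axis by c'(x),
% so the effective step at input x is \Delta / c'(x) and
\sigma_q^2 \;\approx\; \frac{\Delta^2}{12}\, E\!\left[\frac{1}{c'(X)^2}\right],
\qquad \Delta = \frac{2 x_{max}}{M}.
% The choice c'(x) = x_{max} / (\alpha |x|) makes the pdf drop out of the ratio:
\sigma_q^2 \approx \frac{\Delta^2}{12}\cdot\frac{\alpha^2\, E[X^2]}{x_{max}^2}
\quad\Longrightarrow\quad
\mathrm{SNR} = \frac{\sigma_x^2}{\sigma_q^2} = \frac{3 M^2}{\alpha^2},
% which in decibels is
\mathrm{SNR(dB)} = 10\log_{10}(3M^2) - 20\log_{10}\alpha .
```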