Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur


Lesson 6: Theory of Quantization

Instructional Objectives

At the end of this lesson, the students should be able to:

1. Define quantization.
2. Distinguish between scalar and vector quantization.
3. Define quantization error and optimum scalar quantizer design criteria.
4. Design a Lloyd-Max quantizer.
5. Distinguish between uniform and non-uniform quantization.
6. Define the rate-distortion function.
7. State the source coding theorem.
8. Determine the minimum possible rate for a given SNR to encode a quantized Gaussian signal.

6.0 Introduction

In lesson-3, lesson-4 and lesson-5, we discussed several lossless compression schemes. Although lossless compression techniques guarantee exact reconstruction of images after decoding, their compression performance is very often limited. We have seen that with lossless coding schemes, the achievable compression is restricted by the source entropy, as given by Shannon's noiseless coding theorem. In lossless predictive coding, it is the prediction error that is encoded, and since the entropy of the prediction error is lower due to spatial redundancy, better compression ratios can be achieved. Even then, compression ratios better than 2:1 are often not possible for most practical images. For significant bandwidth reductions, lossless techniques are considered inadequate and lossy compression techniques are employed, where psycho-visual redundancy is exploited so that the loss in quality is not visually perceptible.

The main difference between the lossy and the lossless compression schemes is the introduction of the quantizer. In image compression systems, discussed in lesson-2, we have seen that quantization is usually applied to transform-domain image representations. Before discussing transform coding techniques, or lossy compression techniques in general, we need some basic background on the theory of quantization, which is the scope of the present lesson. In this lesson, we shall first present the definitions of scalar and vector quantization, and then consider the design issues of the optimum quantizer. In particular, we shall discuss Lloyd-Max quantizer design and then show the relationship between the rate-distortion function and the signal-to-noise ratio.

6.1 Quantization

Quantization is the process of mapping a set of continuous-valued samples into a smaller, finite number of output levels. Quantization is of two basic types: (a) scalar quantization and (b) vector quantization.

In scalar quantization, each sample is quantized independently. A scalar quantizer $Q(\cdot)$ is a function that maps a continuous-valued variable $s$, having a probability density function $p(s)$, into a discrete set of reconstruction levels $r_i$ $(i = 1, 2, \ldots, L)$ by applying a set of decision levels $d_i$ $(i = 1, 2, \ldots, L)$ to the continuous-valued samples $s$, such that

$$Q(s) = r_i \quad \text{if } s \in (d_{i-1}, d_i], \quad i = 1, 2, \ldots, L \qquad (6.1)$$

where $L$ is the number of output levels. In words, the output of the quantizer is the reconstruction level $r_i$ if the value of the sample lies within the range $(d_{i-1}, d_i]$.

In vector quantization, the samples are not quantized individually. Instead, a set of continuous-valued samples, expressed collectively as a vector, is represented by a limited number of vector states. In this lesson, we shall restrict our discussion to scalar quantization. In particular, we shall concentrate on scalar quantizer design, i.e., how to choose the $d_i$ and $r_i$ in equation (6.1).

The performance of a quantizer is determined by its distortion measure. Let $\hat{s} = Q(s)$ be the quantized variable. Then $\varepsilon = s - \hat{s}$ is the quantization error, and the distortion $D$ is measured as the expectation of the square of the quantization error (i.e., the mean-square error):

$$D = E\left[(s - \hat{s})^2\right]$$

We should design the $d_i$ and $r_i$ so that the distortion $D$ is minimized. There are two different approaches to optimal quantizer design:

(a) Minimize $D = E\left[(s - \hat{s})^2\right]$ with respect to $d_i$ and $r_i$ $(i = 1, 2, \ldots, L)$, subject to the constraint that $L$, the number of output states of the quantizer, is fixed. These quantizers perform non-uniform quantization in general and are known as Lloyd-Max quantizers. The design of Lloyd-Max quantizers is presented in the next section.

(b) Minimize $D = E\left[(s - \hat{s})^2\right]$ with respect to $d_i$ and $r_i$ $(i = 1, 2, \ldots, L)$, subject to the constraint that the entropy $H(\hat{s}) = C$ of the quantizer output is a constant, while the number of output states $L$ may vary. These quantizers are called entropy-constrained quantizers.
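As a concrete illustration of equation (6.1), the following minimal Python sketch maps samples to reconstruction levels given the decision levels; the particular levels used here are arbitrary placeholders for illustration, not the output of any design procedure in this lesson.

```python
import numpy as np

def scalar_quantize(s, d, r):
    """Scalar quantizer of equation (6.1): map each sample of s to the
    reconstruction level r[i] whenever s lies in the interval (d[i-1], d[i]].

    d : array of L+1 decision levels d_0 < d_1 < ... < d_L
    r : array of L reconstruction levels r_1, ..., r_L
    """
    # np.digitize with right=True returns i such that d[i-1] < s <= d[i];
    # clip keeps any sample outside (d_0, d_L] in the first or last cell.
    idx = np.clip(np.digitize(s, d, right=True), 1, len(r)) - 1
    return r[idx]

# Example with L = 4 arbitrary levels (illustration only)
d = np.array([-np.inf, -1.0, 0.0, 1.0, np.inf])  # decision levels d_0..d_4
r = np.array([-1.5, -0.5, 0.5, 1.5])             # reconstruction levels r_1..r_4
s = np.array([-2.3, -0.2, 0.7, 3.1])
print(scalar_quantize(s, d, r))                  # -> [-1.5 -0.5  0.5  1.5]
```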

In the case of fixed-length coding, the rate $R$ for a quantizer with $L$ states is given by $R = \log_2 L$, while $R \geq H(\hat{s})$ in the case of variable-length coding. Thus, Lloyd-Max quantizers are better suited for use with fixed-length coding, while entropy-constrained quantizers are more suitable for use with variable-length coding.

6.2 Design of Lloyd-Max Quantizers

The design of Lloyd-Max quantizers requires the minimization of

$$D = E\left[(s - r_i)^2\right] = \sum_{i=1}^{L} \int_{d_{i-1}}^{d_i} (s - r_i)^2\, p(s)\, ds \qquad (6.2)$$

Setting the partial derivatives of $D$ with respect to $d_i$ and $r_i$ $(i = 1, 2, \ldots, L)$ to zero and solving, we obtain the necessary conditions for minimization:

$$r_i = \frac{\displaystyle\int_{d_{i-1}}^{d_i} s\, p(s)\, ds}{\displaystyle\int_{d_{i-1}}^{d_i} p(s)\, ds}, \quad 1 \leq i \leq L \qquad (6.3)$$

$$d_i = \frac{r_i + r_{i+1}}{2}, \quad 1 \leq i \leq L-1 \qquad (6.4)$$

Mathematically, the decision and reconstruction levels are solutions to the above set of nonlinear equations. In general, closed-form solutions to equations (6.3) and (6.4) do not exist, and they need to be solved by numerical techniques. Using numerical techniques, these equations can be solved iteratively by first assuming an initial set of values for the decision levels $\{d_i\}$. For simplicity, one can start with the decision levels corresponding to uniform quantization, where the decision levels are equally spaced. Based on the initial set of decision levels, the reconstruction levels can be computed using equation (6.3), provided the pdf of the input variable to the quantizer is known. These reconstruction levels are then used in equation (6.4) to obtain updated values of $\{d_i\}$. The solutions of equations (6.3) and (6.4) are iterated until the decision and reconstruction levels converge. In most cases, convergence is achieved quite fast for a wide range of initial values.
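A minimal numerical sketch of this iteration in Python follows; it is not from the lesson, and the unit-variance Gaussian pdf, the dense grid, and the truncation range [-5, 5] are assumptions chosen purely for illustration. The integrals of equation (6.3) are approximated by sums over the grid (the uniform grid spacing cancels in the centroid ratio).

```python
import numpy as np

def lloyd_max(pdf, lo, hi, L, n_iter=100, n_grid=100001):
    """Iterative Lloyd-Max design of equations (6.3)-(6.4).

    pdf     : function returning p(s) for an array of samples s
    lo, hi  : finite range on which the pdf is (effectively) supported
    L       : number of reconstruction levels
    Returns (d, r): L+1 decision levels and L reconstruction levels.
    """
    s = np.linspace(lo, hi, n_grid)
    p = pdf(s)
    # Start from uniform quantization: equally spaced decision levels.
    d = np.linspace(lo, hi, L + 1)
    r = np.zeros(L)
    for _ in range(n_iter):
        # Equation (6.3): each r_i is the centroid of its decision interval.
        for i in range(L):
            mask = (s > d[i]) & (s <= d[i + 1])
            r[i] = np.sum(s[mask] * p[mask]) / np.sum(p[mask])
        # Equation (6.4): each interior d_i is the midpoint of adjacent r's.
        d[1:-1] = 0.5 * (r[:-1] + r[1:])
    return d, r

# 4-level Lloyd-Max quantizer for a unit-variance Gaussian pdf.
gauss = lambda s: np.exp(-0.5 * s**2) / np.sqrt(2.0 * np.pi)
d, r = lloyd_max(gauss, -5.0, 5.0, L=4)
print(np.round(r, 3))  # close to the classical tabulated values +/-0.453, +/-1.510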

6.3 Uniform and non-uniform quantization

The Lloyd-Max quantizers described above perform non-uniform quantization if the pdf of the input variable is not uniform. This is expected, since we should perform finer quantization (that is, decision levels more closely packed and consequently more reconstruction levels) wherever the pdf is large, and coarser quantization (that is, decision levels widely spaced apart and hence fewer reconstruction levels) wherever the pdf is low. In contrast, the reconstruction levels are equally spaced in uniform quantization, i.e.,

$$r_{i+1} - r_i = \theta, \quad 1 \leq i \leq L-1$$

where $\theta$ is a constant, defined as the quantization step-size. In case the pdf of the input variable $s$ is uniform in the interval $[A, B]$, i.e.,

$$p(s) = \begin{cases} \dfrac{1}{B-A} & A \leq s \leq B \\[4pt] 0 & \text{otherwise} \end{cases}$$

the design of the Lloyd-Max quantizer leads to a uniform quantizer, where

$$\theta = \frac{B-A}{L}, \qquad d_i = A + i\theta, \quad 0 \leq i \leq L, \qquad r_i = d_{i-1} + \frac{\theta}{2}, \quad 1 \leq i \leq L$$

If the pdf exhibits even-symmetric properties about its mean, e.g., Gaussian and Laplacian distributions, then the decision and reconstruction levels have certain symmetry relations for both uniform and non-uniform quantizers, as shown in Fig.6.1 and Fig.6.2 for some typical quantizer characteristics (reconstruction levels vs. input variable $s$) for $L$ even and odd, respectively.
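The closed-form uniform design above is easy to check numerically. A short Python sketch follows; the interval [0, 1] and L = 8 are arbitrary illustrative choices.

```python
import numpy as np

def uniform_quantizer_levels(A, B, L):
    """Uniform quantizer for a pdf that is uniform on [A, B]:
    step size theta = (B - A)/L, decision levels d_i = A + i*theta,
    reconstruction levels r_i = d_{i-1} + theta/2 (interval midpoints).
    The resulting mean-square distortion is the familiar theta^2 / 12."""
    theta = (B - A) / L
    d = A + theta * np.arange(L + 1)  # d_0, ..., d_L
    r = d[:-1] + theta / 2            # r_1, ..., r_L
    return theta, d, r

theta, d, r = uniform_quantizer_levels(0.0, 1.0, 8)
print(theta)  # 0.125
print(r)      # midpoints 0.0625, 0.1875, ..., 0.9375
```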

[Fig.6.1 and Fig.6.2: typical quantizer characteristics (reconstruction levels vs. input variable s) for L even and L odd, respectively]

When the pdf is even-symmetric about its mean, the quantizer needs to be designed for only $L/2$ levels or $(L-1)/2$ levels, depending on whether $L$ is even or odd, respectively.

6.4 Rate-Distortion Function and Source Coding Theorem

Shannon's coding theorem on noiseless channels considers the channel, as well as the encoding process, to be lossless. With the introduction of quantizers, the encoding process becomes lossy, even if the channel remains lossless. In most cases of lossy compression, a limit is generally specified on the maximum tolerable distortion $D$ from fidelity considerations. The question that arises is: given a distortion measure $D$, how do we obtain the smallest possible rate? The answer is provided by a branch of information theory known as rate-distortion theory. The corresponding function that relates the smallest possible rate to the distortion is called the rate-distortion function $R(D)$. The typical nature of a rate-distortion function is shown in Fig.6.3.

At no distortion ($D = 0$), i.e., for lossless encoding, the corresponding rate $R(0)$ is equal to the entropy, as per Shannon's coding theorem on noiseless channels. Rate-distortion functions can be computed analytically for simple sources and distortion measures. Computer algorithms exist to compute $R(D)$ when analytical methods fail or are impractical. In terms of the rate-distortion function, the source coding theorem is stated below.

Source Coding Theorem: There exists a mapping from the source symbols to codewords such that, for a given distortion $D$, $R(D)$ bits/symbol are sufficient to enable source reconstruction with an average distortion arbitrarily close to $D$. The actual rate $R$ thus satisfies

$$R \geq R(D)$$
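As an illustration of objective 8: for a memoryless Gaussian source of variance $\sigma^2$ under the mean-square distortion measure, the rate-distortion function has the well-known closed form $R(D) = \frac{1}{2}\log_2(\sigma^2/D)$ for $0 \leq D \leq \sigma^2$ (a standard result, not derived in this lesson). Writing the distortion as a signal-to-noise ratio, $\mathrm{SNR} = 10\log_{10}(\sigma^2/D)$ dB, the minimum rate follows directly from the SNR: roughly one extra bit per 6.02 dB. A minimal sketch (the SNR values below are arbitrary examples):

```python
import numpy as np

def gaussian_min_rate(snr_db):
    """Minimum rate R(D) in bits/sample for a memoryless Gaussian source,
    with the distortion expressed as SNR = 10*log10(sigma^2 / D) in dB:
        R(D) = 0.5 * log2(sigma^2 / D) = SNR_dB / (20 * log10(2))
    i.e. roughly one extra bit per 6.02 dB of SNR."""
    return snr_db / (20.0 * np.log10(2.0))

for snr in (6.02, 12.04, 30.0):
    print(f"SNR = {snr:5.2f} dB  ->  R >= {gaussian_min_rate(snr):.2f} bits/sample")
# -> about 1, 2 and 4.98 bits/sample, respectively
```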