VQ widely used in coding speech, image, and video

Scalar quantizers are special cases of vector quantizers (VQ): they are constrained to look at one sample at a time (memoryless). VQ does not have such a constraint, so better RD performance is expected. Source coding theorems and other results in rate-distortion theory (due to Shannon) imply that one can always do better (in the RD sense) if vectors of samples are coded as units. Note: we can code samples as units without necessarily exploiting or knowing the interdependency between the samples. VQ is widely used in coding speech, image, and video.

Brief description

Some main advantages:
- exploits dependency that may exist within an input vector
- ability to generate non-cubic multi-dimensional partitions of the input, which provides better compaction of the input space
- ability to track high-order statistical characteristics of the input

Some main disadvantages:
- encoding complexity and memory requirements increase exponentially with vector size (under a given rate) and with bit-rate
- lack of robustness: sensitivity to channel noise

Conventional VQ is severely limited to modest vector and codebook sizes; different, more robust methods are needed.

Brief description

VQ takes blocks of pixels instead of individual pixels.
1. Divide the image into blocks (common sizes: 24x24, 16x16, 8x8, ...).
2. Turn each block into a vector; e.g., a 3x3 block with pixels 1, ..., 9 becomes x = [x_1 x_2 x_3 x_4 x_5 x_6 x_7 x_8 x_9]^T.
3. Compare x with the best matching x̂ in the codebook. Codebook: a table consisting of representative vectors (reconstruction levels) {r_i}, i = 1, ..., M. Best matching: with respect to a chosen distance (error) measure.
4. Transmit the index k of that best matching vector: x̂ = Q(x) = r_k.
5. The receiver gets the index k and retrieves x̂ = r_k from its own stored codebook, which matches the transmitter's codebook.

Brief description

[Block diagram] Encoder: image block → vector x → codebook search → index → channel. Decoder: index → codebook look-up → x̂. Encoder and decoder hold identical codebooks.
Encoder mapping E: X ⊂ R^N → I; decoder mapping D: I → {r_i}, i ∈ I, r_i ∈ R^N.
Example: find the best matching vector in the codebook and send its binary index (00, 01, 10, ...).
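To make the block → index → reconstruction flow concrete, here is a minimal Python/NumPy sketch (not from the original slides; the 4x4 block size, the random toy codebook, and the function names are illustrative assumptions):

```python
import numpy as np

def image_to_vectors(image, block):
    """Steps 1-2: split the image into non-overlapping block x block tiles
    and flatten each tile into a vector of length block*block."""
    h, w = image.shape
    vectors = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            vectors.append(image[r:r + block, c:c + block].reshape(-1))
    return np.array(vectors)

def encode(vectors, codebook):
    """Steps 3-4: for each input vector, find the nearest code vector
    (squared-error distance) and keep only its index k."""
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

def decode(indices, codebook):
    """Step 5: the receiver looks up x_hat = r_k in its own copy of the codebook."""
    return codebook[indices]

# Toy usage: a 16x16 image, 4x4 blocks (N = 16), M = 16 code vectors.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(16, 16)).astype(float)
codebook = rng.uniform(0, 256, size=(16, 16))
indices = encode(image_to_vectors(image, block=4), codebook)   # transmitted over the channel
reconstructed_blocks = decode(indices, codebook)               # at the receiver
```

Only the indices are transmitted: with M = 16 entries, each 4x4 block costs log2(16) = 4 bits regardless of the block dimension.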

Motivation

Theorem (from source coding and rate-distortion theory): as vector size grows, performance improves in the rate-distortion sense.

Practical constraints: encoding complexity and memory requirements increase exponentially with vector size (under a given bit-rate) and with bit-rate; the codebook grows exponentially as a function of vector size N and bit-rate r.

Other problems: lack of robustness and sensitivity to channel noise.

Conventional VQ is severely limited to modest vector and codebook sizes; different, more robust VQ approaches are needed.

VQ Design

In VQ, the N-dimensional input set X ⊂ R^N is divided into M regions or cells ("quantization levels"):
V_i = { x ∈ X : Q(x) = r_i }, i = 1, ..., M,
where r_i is the i-th code vector (reconstruction level).

Optimal VQ. Let d(x, y) be a defined distance (error) measure between x and y. Examples:
MSE: D = E[d(x, y)] with d(x, y) = (1/N) Σ_{i=1}^{N} (x_i - y_i)^2
MAE: D = E[d(x, y)] with d(x, y) = (1/N) Σ_{i=1}^{N} |x_i - y_i|
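As a quick, hedged illustration of these two per-vector distortion measures (the function names are mine, not the lecture's):

```python
import numpy as np

def mse(x, y):
    """d(x, y) = (1/N) * sum_i (x_i - y_i)^2"""
    return np.mean((x - y) ** 2)

def mae(x, y):
    """d(x, y) = (1/N) * sum_i |x_i - y_i|"""
    return np.mean(np.abs(x - y))
```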

Def.: A vector quantizer is said to be optimal if the expected distortion
D = E[d(x, Q(x))] = ∫ d(x, Q(x)) f(x) dx
is minimized over all vector quantizers with M code vectors.

Two necessary optimality conditions:

1. The nearest-neighbor condition: for a given set of code vectors {r_i}, i = 1, ..., M, Q(x) must be a nearest-neighbor mapping, i.e.,
Q(x) = r_i iff d(x, r_i) ≤ d(x, r_j) for 1 ≤ j ≤ M.

2. The centroid condition: for a given set of partition cells {V_i}, i = 1, ..., M, each code vector r_i (1 ≤ i ≤ M) must be chosen so as to minimize the average distortion given the partition cell V_i; i.e., r_i is set to be the vector y that minimizes the conditional distortion
D_i(y) = E[d(x, y) | x ∈ V_i] = ∫_{V_i} d(x, y) f(x) dx
=> select r_i (1 ≤ i ≤ M) such that D_i(r_i) = min_y D_i(y)
=> r_i is the centroid of the cell V_i.
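For the squared-error distortion the centroid condition has a closed form; the slide does not spell out the derivation, so the following short sketch is added for completeness (a standard result):

```latex
% For d(x,y) = \lVert x - y \rVert^2:
%   D_i(y) = E[\,\lVert x - y \rVert^2 \mid x \in V_i\,]
% Setting the gradient with respect to y to zero,
\nabla_y D_i(y) = -2\, E[\,x - y \mid x \in V_i\,] = 0
\;\Longrightarrow\;
r_i = E[\,x \mid x \in V_i\,]
    = \frac{\int_{V_i} x \, f(x)\, dx}{\int_{V_i} f(x)\, dx}
```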

Remarks

Computation of the centroid of a particular cell depends on the distortion measure d(x, y).

Another, less important, necessary condition for optimality is that, for a given source distribution, points on the boundaries between nearest-neighbor cells occur with zero probability. This is automatically satisfied for continuous-valued input random vectors.

The nearest-neighbor and centroid conditions also hold for scalar quantizers. They are very important because they are frequently used as the basis for most VQ design algorithms.

Remarks (continued)

From the above optimality conditions: for given reconstruction levels (code vectors) {r_i}, i = 1, ..., M, the quantization levels are defined as regions with centroid r_i such that
V_i = { x : d(x, r_i) ≤ d(x, r_j) for all j }, i ∈ {1, ..., M}.
If MSE is used, d(x, r_i) = ||x - r_i||^2 and V_i = { x : ||x - r_i||^2 ≤ ||x - r_j||^2 for all j }.

Remarks (continued)

Simple algorithm for performing VQ:
1. For each input x, compute the distances d(x, r_i), i ∈ {1, ..., M}.
2. Choose i such that d(x, r_i) = min_{1 ≤ j ≤ M} d(x, r_j) (i.e., choose the level corresponding to the closest centroid).

Simple VQ Algorithm (continued)

If more than one quantization level is possible (a tie), use some predefined rule to make the decision, or simply decide arbitrarily.

Computational requirements: with M code vectors (i.e., a codebook of M entries), we have to compute M distances and make M - 1 comparisons for each input sample x. This, together with the memory required to store the centroids, puts a limit on the practical size of the codebooks.

For given quantization levels {V_i}, i = 1, ..., M, the optimal reconstruction levels in the mean-square sense (i.e., d(x, r_i) = ||x - r_i||^2) are
r_i = ∫_{V_i} x f(x) dx / ∫_{V_i} f(x) dx.
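In practice the density f(x) is unknown and the integrals are replaced by averages over the training vectors that fall in each cell. A minimal sketch of this empirical centroid update (the function name, the empty-cell re-seeding rule, and the NumPy types are assumptions):

```python
import numpy as np

def update_centroids(vectors, assignments, M, rng=None):
    """Empirical version of r_i = int_{V_i} x f(x) dx / int_{V_i} f(x) dx:
    each MSE-optimal reconstruction level is the mean of the training
    vectors assigned to cell V_i."""
    rng = rng or np.random.default_rng()
    centroids = np.empty((M, vectors.shape[1]))
    for i in range(M):
        members = vectors[assignments == i]
        if len(members) > 0:
            centroids[i] = members.mean(axis=0)
        else:
            # Empty cell: one common fix is to re-seed from a random training vector.
            centroids[i] = vectors[rng.integers(len(vectors))]
    return centroids
```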

Remarks (continued)

The equations V_i = { x : d(x, r_i) ≤ d(x, r_j) for all j } and r_i = ∫_{V_i} x f(x) dx / ∫_{V_i} f(x) dx can be used iteratively to design codebooks (find {r_i}) for vector quantizers that are optimal in the MSE sense.

The most popular and classical VQ design technique is the Generalized Lloyd Algorithm (GLA), also known as the LBG (Linde, Buzo and Gray) algorithm (1981). Another widely used algorithm is the Pairwise Nearest Neighbor (PNN) algorithm by Equitz (1989): it significantly reduces computation, requires no initial codebook, and gives comparable reconstructed-image quality.

Generalized Lloyd (LBG) algorithm

Based on the optimality conditions mentioned earlier. The most popular technique (although not the best), widely used for comparison with other codebook design methods; it is an adaptation of the k-means clustering algorithm.

Basic steps:
Step 1: Start with a training set of vectors (get a large quantity of representative vectors: train on one set, test with others).
Step 2: Start with an initial codebook of size M (selected from the training set); example: randomly selected vectors from the training set.
Step 3: Vector quantize each training vector using the current codebook (cluster the training data).

[Figure: training vectors clustered around M = 4 code vectors.]

Step 4: Use the centroids of the clusters as the updated codebook (for MSE, and for a stationary and ergodic input, the centroid is the mean of the cluster, since time/space averages replace statistical averages; centroid = center of mass).
Step 5: Repeat from Step 3 until the change in distortion between the old and new codebooks is smaller than a selected small threshold.
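Putting Steps 1-5 together, here is a compact LBG/GLA sketch (squared-error distortion; the random initialization, the relative-distortion stopping rule, and the parameter values are illustrative assumptions rather than the lecture's exact choices):

```python
import numpy as np

def lbg(train, M, tol=1e-4, max_iter=100, seed=0):
    """Design an M-entry codebook from training vectors `train`
    (shape: num_vectors x N) with the Generalized Lloyd / LBG algorithm."""
    rng = np.random.default_rng(seed)
    # Step 2: initial codebook = M vectors picked at random from the training set.
    codebook = train[rng.choice(len(train), size=M, replace=False)].astype(float)
    prev_distortion = np.inf
    for _ in range(max_iter):
        # Step 3: nearest-neighbor (Voronoi) clustering of the training data.
        dists = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = dists.argmin(axis=1)
        distortion = dists[np.arange(len(train)), assign].mean()
        # Step 5: stop when the relative drop in distortion falls below the threshold.
        if np.isfinite(prev_distortion) and prev_distortion - distortion <= tol * prev_distortion:
            break
        prev_distortion = distortion
        # Step 4: replace each code vector by the centroid (mean) of its cluster.
        for i in range(M):
            members = train[assign == i]
            if len(members) > 0:
                codebook[i] = members.mean(axis=0)
    return codebook, distortion
```

With M = 4, as in the figure above, this is exactly k-means clustering of the training vectors.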

Remarks on LBG

At each iteration, the LBG algorithm constructs {r_i} and {V_i} satisfying
V_i = { x : d(x, r_i) ≤ d(x, r_j) for all j } and r_i = ∫_{V_i} x f(x) dx / ∫_{V_i} f(x) dx.

LBG is guaranteed to converge, and it finds a locally optimal quantizer for the training set (which may not be locally optimal for the input x).

The final resulting codebook depends on the initial choice: the algorithm is influenced by the choice of the initial codebook (cluster centers) and by the choice and geometrical properties of the training data.

Remarks on LBG (continued)

LBG gives a locally optimal design for a fixed number of levels M. In coding, VQ is usually used in conjunction with entropy coding; limiting the entropy of the quantized signal, rather than the number of quantization levels, in the design process leads to entropy-constrained VQ (EC-VQ).

Initialization in LBG

This is the most important issue, since it can significantly affect the performance of the designed codebook. Several codebook initialization methods have been proposed. Popular ones:
1. Random selection from the training set.
2. Binary splitting for LBG codebook design: uses fixed perturbations of the current code vectors (centroids) to create more code vectors, twice as many at each step.

Basic steps of Binary Splitting for LBG:
Step 1: Start with the centroid of the training set.
Step 2: Perturb the current centroid(s) (usually in 2 opposite directions, so the size doubles at each iteration).
Step 3: VQ all the training vectors, and take the centroids of the new resulting clusters.
Step 4: Repeat from Step 2 until we get the desired number of centroids (code vectors) for the initial codebook.
Step 5: Run LBG.
Advantage: search complexity can be reduced by using tree-search VQ instead of exhaustive-search VQ.
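A minimal sketch of this splitting initialization (the perturbation factor eps, the number of inner Lloyd iterations, and the assumption that M is a power of two are mine, not the lecture's):

```python
import numpy as np

def lloyd_refine(train, codebook, iters=10):
    """A few Lloyd iterations: nearest-neighbor clustering, then centroid update."""
    for _ in range(iters):
        dists = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = dists.argmin(axis=1)
        for i in range(len(codebook)):
            members = train[assign == i]
            if len(members) > 0:
                codebook[i] = members.mean(axis=0)
    return codebook

def binary_split_init(train, M, eps=0.01):
    """Binary splitting: start from the global centroid (Step 1), split every
    centroid into two perturbed copies (Step 2), re-cluster the training data
    and take centroids (Step 3), and repeat until M code vectors exist (Step 4).
    Assumes M is a power of two."""
    codebook = train.mean(axis=0, keepdims=True)
    while len(codebook) < M:
        codebook = np.vstack([codebook * (1.0 + eps),
                              codebook * (1.0 - eps)])
        codebook = lloyd_refine(train, codebook)
    return codebook   # Step 5: use this as the initial codebook for the full LBG run.
```

Because the codebook is grown by successive splits, the split history naturally defines a binary tree over the code vectors, which is what enables the tree-search VQ (roughly log2(M) comparisons per input) mentioned above instead of the M-distance exhaustive search.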