D. VQ WITH 1ST-ORDER LOSSLESS CODING

VARIABLE-RATE VQ (AKA VQ WITH ENTROPY CODING)

Variable-Rate VQ = Quantization + Lossless Variable-Length Binary Coding

A range of options -- from simple to complex:

A. Uniform scalar quantization with variable-length coding, one index at a time.
   [Figure: uniform scalar quantizer cells of width Δ, each labeled with a variable-length binary codeword, e.g. 1100, 1101, 100, 00, 01, 101, 1110, 1111]
B. Nonuniform scalar quantization with variable-length coding -- one index at a time.
C. Scalar quantization with higher-order variable-length coding -- either block coding of n indices at a time or nth-order conditional coding of the indices.
D. k-dimensional VQ with variable-length coding, one index at a time.
E. k-dimensional VQ with higher-order variable-length coding -- either block coding of n indices at a time or nth-order conditional coding of the indices.
G. k-dimensional VQ with other types of lossless coding.

We focus mainly on E with block coding. A-D are special cases of E. Conditional coding is just a slight variation of E.

VQ-EC-1

D. VQ WITH 1ST-ORDER LOSSLESS CODING

[Block diagram: samples X_1 ... X_k → partition → index I → binary encoder → bits → binary decoder → index I → codebook → reproductions Y_1 ... Y_k]

Key characteristics:

k-dimensional VQ with partition S = {S_1, ..., S_M}, codebook C = {w_1, ..., w_M}, quantization rule Q, and binary prefix codebook B = C_b = {c_1, ..., c_M} with lengths {L_1, ..., L_M}.

Decompose encoder into "partition" and "binary encoder":
  Given x, the partition produces index i when x ∈ S_i.
  The binary encoder outputs binary codeword c_i with length L_i.

Decompose decoder into "binary decoder" and "codebook":
  The binary decoder decodes the bits into the index i.
  The codebook outputs w_i.

Rate = R = (1/k) Σ_i P_i L_i = (1/k) × (rate of the binary encoder) bits/sample

We often assume R = (1/k) H(I) = (1/k) H(Q(X))   ("VQ with entropy coding")

Distortion = D = (1/k) E ||X - Q(X)||²   (not affected by choice of lossless coder)

VQ-EC-2
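As a sanity check on these definitions, here is a minimal Python sketch (not from the notes) that measures D = (1/k) E||X - Q(X)||² and the idealized rate R = (1/k) H(I) for a small, arbitrarily chosen 2-dimensional codebook applied to an i.i.d. Gaussian source; the codebook, source, and sample size are illustrative assumptions.

# Minimal sketch (illustrative assumptions): rate and distortion of a k-dim VQ
# whose lossless stage is an ideal entropy coder, so R = H(I)/k.
import numpy as np

rng = np.random.default_rng(0)
k = 2                                    # VQ dimension
codebook = np.array([[-1.5, -1.5], [-1.5, 1.5], [1.5, -1.5], [1.5, 1.5],
                     [0.0, 0.0]])        # M = 5 codevectors (M need not be a power of 2)

X = rng.standard_normal((100_000, k))    # i.i.d. Gaussian source vectors

# Partition: nearest-neighbor rule produces the index I for each source vector.
d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
I = d2.argmin(axis=1)

# Distortion per sample (does not depend on the lossless coder).
D = d2[np.arange(len(X)), I].mean() / k

# Rate per sample, assuming an ideal entropy coder: R = H(I)/k.
P = np.bincount(I, minlength=len(codebook)) / len(X)
H = -(P[P > 0] * np.log2(P[P > 0])).sum()
R = H / k

print(f"D = {D:.4f} per sample, R = {R:.4f} bits/sample")

Note that the 5-point codebook above is deliberately not a power of 2 in size; for the idealized rate only H(I) matters, which echoes the point below that the quantizer size has no direct relation to the rate.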

E. VQ WITH BLOCK LOSSLESS CODING

[Block diagram: source vectors X_1 ... X_n → partition → indices I_1 ... I_n → binary encoder → binary codeword c → binary decoder → I_1 ... I_n → codebook → reproduction vectors Y_1 ... Y_n]

k-dimensional VQ with variable-length coding of indices in blocks of n.

Decompose encoder into "partition" and "binary encoder":
  The "partition" maps n successive k-dim'l source vectors X_1, ..., X_n into indices I_1, ..., I_n, where X_j = (X_j,1, ..., X_j,k).
  Losslessly encode the n indices at once, (I_1, ..., I_n), using an FVL block lossless code with prefix codebook C_b containing M^n binary codewords.

Decompose decoder into "binary decoder" and "codebook":
  Decode the binary codeword into the indices I_1, ..., I_n.
  Output the corresponding quantization vectors w_{I_1}, ..., w_{I_n} as reproductions of X_1, ..., X_n, respectively.

"Block lossless binary coding" is an easy-to-analyze paradigm for studying the benefits of variable-length coding (a.k.a. entropy coding).

We usually assume the source is stationary, so that X_j has the same pdf for all j, which will be denoted f_X(x), f(x), or f_k(x).

VQ-EC-3

SUMMARY OF CHARACTERISTICS

Quantizer = k-dimensional VQ
  k = dimension
  M = size (not necessarily a power of 2, not so important, may be infinite)
  S = {S_1, ..., S_M} = k-dimensional partition
  C = {w_1, ..., w_M} = k-dimensional codebook
  Q(x) = quantization rule

Binary Encoder:
  n = order of the binary encoder (i.e. its input blocklength)

[Block diagram as above: source vectors → partition → indices → binary encoder → binary codeword c; binary decoder → indices → codebook → reproduction vectors]

  C_b = {v_i : i ∈ I_n} = binary prefix codebook -- one codeword for each sequence of n indices,
  where I_n = set of cell index n-tuples = {i = (i_1, ..., i_n) : 1 ≤ i_1 ≤ M, ..., 1 ≤ i_n ≤ M}
  v_i = (v_{i,1}, ..., v_{i,L_i}) = binary codeword of length L_i for i

Derivative characteristics:
  quantization rule:  Q(x_j) = w_i  when x_j ∈ S_i
  encoding rule:  α(x_1, ..., x_n) = v_i  when i = (i_1, ..., i_n) and x_j ∈ S_{i_j}, j = 1, ..., n
  decoding rule:  β(v_i) = (w_{i_1}, w_{i_2}, ..., w_{i_n})

VQ-EC-4
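The encoding rule α and decoding rule β above are table lookups, which can be sketched directly in code. In this minimal, hypothetical sketch (not the notes' implementation), codeword_table stands in for the prefix codebook C_b over index n-tuples and inverse_table for its inverse; building such a table (e.g. by a Huffman design over the M^n tuples, as in the later sketch) is not shown here.

# Minimal sketch (assumptions, not the notes' implementation): the block
# lossless stage as two table lookups keyed by index n-tuples.
from typing import Dict, Tuple
import numpy as np

def alpha(x_blocks: np.ndarray, codebook: np.ndarray,
          codeword_table: Dict[Tuple[int, ...], str]) -> str:
    """Encoding rule: quantize n source vectors, then emit one binary codeword."""
    # Partition: nearest-neighbor quantization of each of the n source vectors.
    d2 = ((x_blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    i = tuple(int(j) for j in d2.argmin(axis=1))    # index n-tuple (i_1, ..., i_n)
    return codeword_table[i]                        # v_i, one prefix codeword per n-tuple

def beta(codeword: str, codebook: np.ndarray,
         inverse_table: Dict[str, Tuple[int, ...]]) -> np.ndarray:
    """Decoding rule: recover (i_1, ..., i_n), then output (w_{i_1}, ..., w_{i_n})."""
    i = inverse_table[codeword]
    return codebook[list(i)]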

PERFORMANCE

Distortion (same as usual)

D = (1/k) E ||X - Q(X)||² = (1/k) Σ_{i=1}^{M} ∫_{S_i} ||x - w_i||² f_k(x) dx

where X = (X_1, ..., X_k) and f_k(x) is its density. Distortion depends on S and C but not on C_b.

Rate

R = (1/(kn)) L̄ = (1/(kn)) Σ_{i ∈ I_n} P_i L_i   bits/sample

where L̄ = average length of the binary codewords
      P_i = probability of binary codeword v_i = Pr(X_1 ∈ S_{i_1}, ..., X_n ∈ S_{i_n}),  i ∈ I_n

From the lossless coding theorem,

H(I) ≤ L̄* < H(I) + 1

where L̄* = least average length of a prefix code for the given VQ and n, and
      H(I) = -Σ_{i ∈ I_n} P_i log_2 P_i = entropy of I (or of (Y_1, ..., Y_n)).

From now on we assume (unless otherwise stated) that

R = (1/(kn)) H(I) = (1/(kn)) H(I_1, ..., I_n) = (1/(kn)) H(Y_1, ..., Y_n)

We call this "VQ with nth-order entropy coding (EC)".

Note: The size of the quantizer has no direct relation to its rate.

VQ-EC-5

IMPLEMENTATION AND COMPLEXITY

Quantizer -- same issues as with fixed-rate coding.

Lossless coder -- table lookup is the brute-force method:
+ The table stores M^n binary codewords of various lengths.
+ M^n = 2^(n k R_f), where R_f = (1/k) log_2 M is the "fixed-rate" rate.
+ Complexity of a brute-force implementation of entropy coding increases exponentially with n k R_f.

OPTIMAL PERFORMANCE

OPTA functions we seek:

δ(k,n,R) = least MSE of k-dim'l VQs with nth-order entropy coding and rate R or less
S(k,n,R) = max SNR of k-dim'l VQs with nth-order entropy coding and rate R or less
δ(R) = inf_{k,n} δ(k,n,R) = least MSE of VQ with EC and rate R or less (any k, n)
S(R) = sup_{k,n} S(k,n,R) = max SNR of VQ with EC and rate R or less (any k, n)

VQ-EC-6
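The bound H(I) ≤ L̄* < H(I) + 1 is easy to verify numerically. The sketch below (illustrative assumptions: a 4-cell scalar quantizer, a unit Gaussian source, block order n = 2) builds a Huffman code over the M^n index n-tuples and compares its average length with H(I).

# Minimal sketch (illustrative): check H(I) <= Lbar* < H(I) + 1 by building a
# Huffman code over the index n-tuples of a small scalar quantizer (k=1, n=2).
import heapq
from itertools import count, product
import numpy as np

rng = np.random.default_rng(1)
edges = np.array([-1.0, 0.0, 1.0])          # M = 4 cells of a scalar quantizer
x = rng.standard_normal(200_000)
idx = np.searchsorted(edges, x)             # cell index of each sample

n = 2                                       # order of the block lossless code
pairs = list(zip(idx[0::2], idx[1::2]))     # successive index n-tuples (i_1, i_2)
M = len(edges) + 1
P = {t: 0.0 for t in product(range(M), repeat=n)}
for t in pairs:
    P[t] += 1.0 / len(pairs)

def huffman_lengths(probs):
    """Return codeword lengths of a binary Huffman code for the given pmf."""
    tie = count()
    heap = [(p, next(tie), [t]) for t, p in probs.items() if p > 0]
    heapq.heapify(heap)
    lengths = {t: 0 for _, _, (t,) in heap}
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for t in s1 + s2:
            lengths[t] += 1                  # every merge adds one bit
        heapq.heappush(heap, (p1 + p2, next(tie), s1 + s2))
    return lengths

L = huffman_lengths(P)
Lbar = sum(P[t] * L[t] for t in L)
H = -sum(p * np.log2(p) for p in P.values() if p > 0)
print(f"H(I) = {H:.3f}, Huffman Lbar = {Lbar:.3f}")   # expect H <= Lbar < H + 1

Dividing the printed L̄ by kn = 2 gives the rate in bits per sample; the brute-force table here has M^n = 16 entries, which is exactly the exponential growth in n k R_f noted above.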

HIGH-RESOLUTION ANALYSIS

As before, assume the VQ has mostly small cells, negligible overload distortion, large M, neighboring cells with similar sizes and shapes, point density approximately Λ(x), inertial profile approximately m(x).

Since quantizer size is unimportant (e.g. it is not related to rate), and can even be infinite, we use an unnormalized point density Λ(x), which is a function such that

1. ∫_A Λ(x) dx ≈ number of codevectors (or cells) in region A.
2. If A is small, but much larger than the cells in the vicinity of x, then Λ(x) vol(A) ≈ number of points (cells) in A.
3. Λ(x) ≥ 0, ∫ Λ(x) dx = M = total number of quantization points (can be ∞).
4. Ordinarily Λ(x) is a smooth or piecewise smooth function.
5. Λ(x) ≈ 1/vol(S_i) when x ∈ S_i.

DISTORTION: BENNETT'S INTEGRAL

Under high-resolution conditions, a derivation like that for the original Bennett's integral shows

D ≈ ∫ [ m(x) / Λ^(2/k)(x) ] f_k(x) dx

VQ-EC-7

RATE: ASYMPTOTIC ENTROPY FORMULA

Fact: If X_1, ..., X_n are identically distributed, then under high-resolution conditions,

R = (1/(kn)) H(I) ≈ h_k + (1/k) ∫ f_k(x) log_2 Λ(x) dx

where I = (I_1, ..., I_n), x = (x_1, ..., x_k), and

h_k = (1/k) h(X_1, ..., X_k) = kth-order differential entropy
    = -(1/k) ∫ f_k(x_1, ..., x_k) log_2 f_k(x_1, ..., x_k) dx_1 ... dx_k

We'll derive this shortly.

Note: Differential entropy is not the same thing as entropy.¹ For a continuous random variable or vector, entropy is infinite.

¹ Sometimes people use the term entropy when they really mean differential entropy.

VQ-EC-8
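Bennett's integral above can be checked numerically for k = 1, where the inertial profile is m(x) = 1/12. The sketch below uses an arbitrarily chosen unnormalized point density Λ(x) ∝ exp(-x²/6) for a unit Gaussian source and places cell boundaries so that each cell holds equal Λ-mass; the density, grid, and number of levels are all assumptions for illustration.

# Minimal numeric sketch (illustrative assumptions): Bennett's integral
# D ≈ ∫ m(x) f(x) / Λ(x)^(2/k) dx for a scalar quantizer (k = 1, m(x) = 1/12).
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 1_000_000)

N = 400                                        # number of levels (high resolution)
grid = np.linspace(-8.0, 8.0, 100_001)
dx = grid[1] - grid[0]
shape = np.exp(-grid**2 / 6)
lam = N * shape / (shape.sum() * dx)           # Λ(x): integrates to N over the grid

cum = np.cumsum(lam) * dx                      # "number of cells" to the left of x
thresholds = np.interp(np.arange(1, N), cum, grid)   # cell boundaries (equal Λ-mass)
levels = np.interp(np.arange(0.5, N), cum, grid)     # one reproduction level per cell

idx = np.searchsorted(thresholds, x)
D_measured = np.mean((x - levels[idx]) ** 2)

f = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)        # source density
D_bennett = ((1.0 / 12.0) * f / lam**2).sum() * dx   # Bennett's integral, k = 1
print(f"measured D = {D_measured:.3e}, Bennett's integral = {D_bennett:.3e}")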

Most Important Example: Uniform scalar quantizer with step size Δ and infinitely many levels

Λ(x) ≈ 1/Δ

Then from the approximate rate formula (with k = 1, so h_k = h and f_k = f),

R ≈ h + ∫ f(x) log_2 Λ(x) dx = h - log_2 Δ = h - (1/2) log_2 Δ² ≈ h - (1/2) log_2 (12 D)

since D ≈ Δ²/12. Equivalently,

D ≈ (1/12) 2^(2h) 2^(-2R)

VQ-EC-9

DERIVATION OF ASYMPTOTIC FORMULA FOR H(I)

First case: n = 1 (for simplicity)

H(I) = -Σ_i P_i log_2 P_i,   where P_i = Pr(X_1 ∈ S_i) = ∫_{S_i} f_k(x) dx

     = -Σ_i ( ∫_{S_i} f_k(x) dx ) log_2 ( ∫_{S_i} f_k(x) dx )

     ≈ -Σ_i ( f_k(w_i) vol(S_i) ) log_2 ( f_k(w_i) vol(S_i) )      because cells are small

     = -Σ_i ( f_k(w_i) log_2 f_k(w_i) ) vol(S_i) - Σ_i f_k(w_i) log_2 ( 1/Λ(w_i) ) vol(S_i)      recall Λ(x) ≈ 1/vol(S_i) when x ∈ S_i

     ≈ -∫ f_k(x) log_2 f_k(x) dx + ∫ f_k(x) log_2 Λ(x) dx = k h_k + ∫ f_k(x) log_2 Λ(x) dx

Dividing by k gives  (1/k) H(I) ≈ h_k + (1/k) ∫ f_k(x) log_2 Λ(x) dx.

VQ-EC-10
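A quick numeric check of this example (assuming a unit-variance Gaussian source and step size Δ = 0.1, both illustrative choices):

# Minimal numeric sketch (illustrative assumptions): for a uniform scalar
# quantizer with small step size Δ, compare the measured entropy H(I) with the
# high-resolution prediction R ≈ h - (1/2) log2(12 D).
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0
x = rng.normal(0.0, sigma, 1_000_000)
Delta = 0.1                                     # small step size (high resolution)

idx = np.floor(x / Delta).astype(int)           # uniform quantizer indices
xq = (idx + 0.5) * Delta                        # midpoint reproduction levels

D = np.mean((x - xq) ** 2)                      # measured distortion (≈ Delta^2/12)
_, counts = np.unique(idx, return_counts=True)
P = counts / counts.sum()
H = -(P * np.log2(P)).sum()                     # measured rate, assuming R = H(I)

h = 0.5 * np.log2(2 * np.pi * np.e * sigma**2)  # differential entropy of N(0, sigma^2)
print(f"measured R = {H:.3f},  predicted h - 0.5*log2(12 D) = {h - 0.5*np.log2(12*D):.3f}")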

General case: n ≥ 1

H(I) = -Σ_{i ∈ I_n} P_i log_2 P_i ≈ -Σ_i ( ∫_{S_i} f_{kn}(x) dx ) log_2 ( ∫_{S_i} f_{kn}(x) dx )

where x = (x_1, ..., x_n) with each x_j ∈ R^k, and S_i = S_{i_1} × S_{i_2} × ... × S_{i_n}

     ≈ -Σ_i ( f_{kn}(w_i) vol(S_i) ) log_2 ( f_{kn}(w_i) vol(S_i) ),   where w_i = (w_{i_1}, w_{i_2}, ..., w_{i_n})

     = -Σ_i ( f_{kn}(w_i) log_2 f_{kn}(w_i) ) vol(S_i) - Σ_i ( f_{kn}(w_i) log_2 vol(S_i) ) vol(S_i)

The first summation above can be approximated by the integral

-∫ f_{kn}(x) log_2 f_{kn}(x) dx = n k h_k      (*)

(The last equality holds when successive source vectors are independent; in general the integral equals h(X_1, ..., X_n).)

VQ-EC-11

Before approximating the second sum, note that

log_2 vol(S_i) = log_2 ( vol(S_{i_1}) vol(S_{i_2}) ... vol(S_{i_n}) ) = Σ_{j=1}^{n} log_2 vol(S_{i_j}) ≈ -Σ_{j=1}^{n} log_2 Λ(w_{i_j})

Substituting this into the second summation:

-Σ_i ( f_{kn}(w_i) log_2 vol(S_i) ) vol(S_i) ≈ Σ_i f_{kn}(w_i) ( Σ_{j=1}^{n} log_2 Λ(w_{i_j}) ) vol(S_i)

     ≈ ∫ f_{kn}(x_1, ..., x_n) Σ_{j=1}^{n} log_2 Λ(x_j) dx_1 ... dx_n

     = Σ_{j=1}^{n} ∫ f_{kn}(x_1, ..., x_n) log_2 Λ(x_j) dx_1 ... dx_n

     = Σ_{j=1}^{n} ∫ f_k(x_j) log_2 Λ(x_j) dx_j

     = n ∫ f_k(x_1) log_2 Λ(x_1) dx_1      because X_1, ..., X_n are identically distributed      (**)

Substituting (*) and (**) into the expression for H(I) gives

(1/(kn)) H(I) ≈ (1/(kn)) ( n k h_k + n ∫ f_k(x_1) log_2 Λ(x_1) dx_1 ) = h_k + (1/k) ∫ f_k(x) log_2 Λ(x) dx

VQ-EC-12
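The formula just derived can also be checked for a nonuniform quantizer. The sketch below reuses the construction from the Bennett's-integral sketch above (k = 1, n = 1, Λ(x) ∝ exp(-x²/6), unit Gaussian source, all illustrative assumptions) and compares the measured H(I) with h + ∫ f(x) log_2 Λ(x) dx.

# Minimal numeric sketch (illustrative assumptions): check the asymptotic
# entropy formula H(I) ≈ h(X) + ∫ f(x) log2 Λ(x) dx for a nonuniform scalar
# quantizer (k = 1, n = 1) with Λ(x) ∝ exp(-x^2/6) and a N(0,1) source.
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 1_000_000)

N = 400                                    # number of levels (high resolution)
grid = np.linspace(-8.0, 8.0, 100_001)
dx = grid[1] - grid[0]
shape = np.exp(-grid**2 / 6)
lam = N * shape / (shape.sum() * dx)       # unnormalized point density, integrates to N

cum = np.cumsum(lam) * dx                  # cumulative "number of cells" up to x
thresholds = np.interp(np.arange(1, N), cum, grid)

idx = np.searchsorted(thresholds, x)       # cell index of each sample
counts = np.bincount(idx, minlength=N)
P = counts[counts > 0] / len(x)
H_measured = -(P * np.log2(P)).sum()

f = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)
h = 0.5 * np.log2(2 * np.pi * np.e)                  # differential entropy of N(0,1)
H_predicted = h + (f * np.log2(lam)).sum() * dx      # h + ∫ f log2 Λ dx
print(f"H(I) measured = {H_measured:.3f}, predicted = {H_predicted:.3f}")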