Quantization

Some sort of quantization is necessary to represent continuous signals in digital form:

  x(t1, t2) --> [ Sampler ] --> x(n1, n2) --> [ Quantizer ] --> xq(n1, n2)

  (the Sampler and Quantizer together form the Digitizer, i.e., the A/D converter)

Quantization is also used for data reduction in virtually all lossy coding schemes.
Quantization – Basic Concepts

In quantization, the range of input values x is divided into countable, non-overlapping subsets $\{V_j\}_{j=1}^{M}$, called quantization levels.

Each quantization level $V_k$ (a subset of the input range), k = 1, ..., M, is assigned an index and a representative value $r_k$, called the reconstruction value or reconstruction level; together these form $\{r_j\}_{j=1}^{M}$.

A quantizer Q(.) with quantization levels $\{V_j\}_{j=1}^{M}$ and reconstruction levels $\{r_j\}_{j=1}^{M}$ is the mapping

  $Q(x) = r_k$, where $x \in V_k$.
Quantization – Basic Concepts

Two main types of quantizers:
- If the domain of the input signal is $R^k$ or $C^k$, i.e., the input x is a k-dimensional vector, the quantizer is a vector quantizer.
- In the special case where the dimension k = 1, i.e., the input is a scalar, the quantizer is a scalar quantizer.
A scalar quantizer is thus a special case of a vector quantizer.

Example 1: a scalar quantizer (M = 5). [Figure: the real line partitioned into intervals $V_1, ..., V_5$ by decision boundaries at 0, 1.5, 3, and 5, with reconstruction values $r_1 = -1$, $r_2 = 1$, $r_3 = 2.5$, $r_4 = 4$, $r_5 = 6$.]

Example 2: a vector quantizer (k = 2, M = 6). [Figure: the $(x_1, x_2)$ plane partitioned into six cells $V_1, ..., V_6$; usually $r_i$ = central point of $V_i$, i = 1, ..., 6.]
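To make the mapping concrete, here is a minimal Python sketch of the scalar-quantizer rule $Q(x) = r_k$ for $x \in V_k$. The boundary and reconstruction values are patterned on Example 1, where the figure is only partially recoverable (in particular, $r_3 = 2.5$ is an assumption), so treat them as illustrative:

```python
import numpy as np

# Illustrative values patterned on Example 1 (M = 5); r_3 = 2.5 is assumed.
boundaries = np.array([0.0, 1.5, 3.0, 5.0])   # edges separating V_1, ..., V_5
recon = np.array([-1.0, 1.0, 2.5, 4.0, 6.0])  # reconstruction values r_1, ..., r_5

def quantize(x):
    """Q(x) = r_k, where x falls in cell V_k."""
    k = np.searchsorted(boundaries, x)  # index of the cell V_k containing x
    return recon[k]

print(quantize(2.2))   # falls in V_3 -> 2.5
print(quantize(-7.0))  # falls in V_1 -> -1.0
```

np.searchsorted performs the cell lookup: it returns the index k of the interval $V_k$ into which x falls, so the mapping x -> index is the encoder E and the table lookup recon[k] is the decoder D of the next slide.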
Quantization – Basic Concepts

In the context of a communication system, a quantizer is the composition of two mappings:
- Encoder mapping E: X -> I
- Decoder mapping D: I -> $\{r_i\}$

  $Q = D \circ E$

When variable-length encoding is allowed, an entropy coder L can be included as part of the quantizer for convenience (or for joint optimization):

  $Q = D \circ L^{-1} \circ L \circ E$
Quantization – Basic Concepts: Quantizer Design

Objective: optimize the performance of the quantizer given some constraints on its structure.

Reason: a quantizer introduces noise and distortion; it is a lossy and irreversible operation, so it must be optimized to minimize distortion.
Quantization – Basic Concepts: Quantizer Design

Performance evaluation: performance is usually evaluated in terms of the reconstruction (quantization) error x - Q(x).

Common measures of quantization error:
- mean-square quantization error: $E_{Q2} = E\{ (x - Q(x))^2 \}$
- mean-absolute quantization error: $E_{Q1} = E\{ |x - Q(x)| \}$

Perceptual measures are more desirable, but more difficult to quantify and compare. They are based on the concept of noise masking: choose quantization and reconstruction levels such that the noise is minimally noticeable (MND) given some constraints, or not noticeable at all (JND, just-noticeable distortion).
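As a quick numerical illustration of the two error measures, the sketch below estimates $E_{Q2}$ and $E_{Q1}$ by Monte Carlo for a simple uniform quantizer. The Gaussian test signal and the step size are assumptions chosen for the demo, not values from the slides; the comparison against the classical fine-quantization result $E_{Q2} \approx \Delta^2/12$ is a sanity check:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)      # assumed test signal: standard Gaussian samples

# Uniform mid-rise quantizer with step size delta (illustrative choice).
delta = 0.5
Qx = delta * (np.floor(x / delta) + 0.5)

E_Q2 = np.mean((x - Qx) ** 2)     # mean-square quantization error
E_Q1 = np.mean(np.abs(x - Qx))    # mean-absolute quantization error

print(E_Q2, delta**2 / 12)        # E_Q2 is close to delta^2/12 for small delta
print(E_Q1)
```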
Quantization – Basic Concepts: Quantizer Design

Common design constraints on the quantizer structure include:
- the number of quantization levels M
- the entropy of the quantized signal
- the geometric structure of the quantization levels (e.g., uniform, nonuniform, ...)
- the maximum allowable distortion
- the maximum allowable bit-rate
Some Results from Information Theory

Suppose there is a discrete-domain, discrete-valued random source (signal) generating a discrete set of independent samples or messages (such as gray levels) $\{r_k\}$, with probabilities $p_k$, k = 1, ..., L.

The information associated with $r_k$ is defined as

  $I_k = -\log_2 p_k$ (bits),  where $p_k$ = probability that a sample has the value $r_k$.

Note:
- $\sum_{k=1}^{L} p_k = 1$
- $0 \le p_k \le 1 \Rightarrow I_k \ge 0$
- $I_k$ is large when an unlikely message is generated
- $p_k = 1$ (certain message) $\Rightarrow I_k = 0$
Some Results from Information Theory

Assume 256 equally-likely symbols $\{r_k\}_{k=1}^{256}$:

  $p_k = \frac{1}{256} = 2^{-8}$, k = 1, ..., 256  $\Rightarrow$  $I_k = 8$ bits
Some Results from Information Theory: Entropy

A measure of the average amount of information content in the signal (non-contextual information content).

The first-order entropy (also simply called entropy) of a signal that is discrete in both time/space and amplitude (such as an image) is given by

  $H = \sum_k p_k I_k = -\sum_k p_k \log_2 p_k$  bits/message (signal value)

H is the average information generated by the source, or contained in the signal. It is a lower bound on the average bit-rate (bits/sample) needed to encode the signal, under the assumption that samples are uncorrelated (or that no information about inter-sample dependency is available).
Some Results from Information Theory

Entropy is maximum for uniform distributions:

  $p_k = \frac{1}{L}$, k = 1, ..., L,  where L = total number of messages

  $H = \sum_{k=1}^{L} \frac{1}{L} \log_2 L = \log_2 L = \max_{p_k} H$
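A small sketch computing the first-order entropy and confirming that the uniform distribution attains the maximum $\log_2 L$; the example distributions are assumptions chosen for illustration:

```python
import numpy as np

def entropy(p):
    """First-order entropy H = -sum_k p_k log2 p_k, in bits per symbol."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                    # terms with p_k = 0 contribute nothing
    return -np.sum(p * np.log2(p))

L = 8
print(entropy(np.full(L, 1.0 / L)))        # uniform: log2(8) = 3.0 bits (maximum)
print(entropy([0.5, 0.25, 0.125, 0.125]))  # skewed: 1.75 bits < log2(4) = 2 bits
```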
Some Results from Information Theory

The first-order entropy is a measure of the information content in the signal (average information). It is a lower bound on the average number of bits per sample needed to represent the signal losslessly (assuming no information about inter-sample dependency is available, so that such dependency cannot be exploited to lower the bit-rate).
Some Results from Information Theory: Entropy Coding

Objective: use $n_k = -\log_2 p_k = I_k$ bits to code amplitude level k (i.e., $V_k$ or, equivalently, $r_k$).

Result: the average bit-rate, defined as $B = \sum_k p_k n_k$, is equal to the entropy H if $n_k = -\log_2 p_k$; the lower bound is then achieved (assuming independent samples).
Some Results from Information Theory: Entropy Coding

Two widely-used techniques for entropy coding are:
- Huffman coding
- Arithmetic coding

Note: the Huffman code is optimal in the sense that its average bit-rate $B_{Huf}$ does not exceed the average bit-rate of any other code, under the assumption that each symbol is coded separately and assigned a fixed codeword with an integer number of bits.

The average bit-rate of a Huffman code is within one bit of the entropy H:

  $H \le B_{Huf} < H + 1$;  $B_{Huf} = H$ if the $p_k$ are powers of 1/2.
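The following is a minimal sketch of Huffman code construction, computing codeword lengths only (which is all that is needed to evaluate $B_{Huf} = \sum_k p_k n_k$); it is a didactic toy, not a production coder. With the dyadic probabilities assumed below (powers of 1/2), the average bit-rate matches the entropy exactly, as the slide states:

```python
import heapq
import math

def huffman_lengths(probs):
    """Return the Huffman codeword length for each symbol probability."""
    # Heap entries: (subtree probability, tie-break counter, symbols in subtree).
    heap = [(p, k, [k]) for k, p in enumerate(probs)]
    heapq.heapify(heap)
    tie = len(probs)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)   # merge the two least likely subtrees;
        p2, _, s2 = heapq.heappop(heap)   # each merge adds one bit to every
        for k in s1 + s2:                 # symbol inside the merged subtree
            lengths[k] += 1
        heapq.heappush(heap, (p1 + p2, tie, s1 + s2))
        tie += 1
    return lengths

probs = [0.5, 0.25, 0.125, 0.125]         # assumed dyadic probabilities
n = huffman_lengths(probs)
B_huf = sum(p * l for p, l in zip(probs, n))
H = -sum(p * math.log2(p) for p in probs)
print(n, B_huf, H)                        # lengths [1, 2, 3, 3]; B_huf = H = 1.75
```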
Some Results from Information Theory: Rate-Distortion Theory

Provides useful results about:
- the minimum bit-rate achievable under a fixed maximum allowed distortion
- the minimum distortion achievable under a given maximum allowable bit-rate

This is important in quantizer design: it quantifies the tradeoff between bit-rate and distortion.

Rate-distortion function: $R(D) = R_D$ gives the minimum average rate $R_D$ (in bits per sample) required to represent (code) a random variable x (signal) under a fixed distortion D.

The distortion D is given by an error measure (MSE, MAE, ...). Example: $D = E[(x - y)^2]$, where y = representative value of x.
Rate-Distortion Theory

Example 1: the signal x (e.g., a row-by-row ordering of an image) is a Gaussian-distributed random variable with pdf

  $p_x(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{(x - m)^2}{2\sigma^2} \right)$

where m = mean = E[x], $\sigma^2$ = variance = $E[(x - m)^2]$, and x = value that the RV x takes.

In this case, the rate-distortion function of x is given by

  $R_D = \max\!\left(0, \tfrac{1}{2} \log_2 \tfrac{\sigma^2}{D}\right) = \begin{cases} \tfrac{1}{2} \log_2 \tfrac{\sigma^2}{D} & ; \; D \le \sigma^2 \\ 0 & ; \; D > \sigma^2 \end{cases}$
Rate-Distortion Theory

For a Gaussian-distributed RV x:

  $R_D = \max\!\left(0, \tfrac{1}{2} \log_2 \tfrac{\sigma^2}{D}\right) = \begin{cases} \tfrac{1}{2} \log_2 \tfrac{\sigma^2}{D} & ; \; D \le \sigma^2 \\ 0 & ; \; D > \sigma^2 \end{cases}$

Note: the maximum meaningful distortion is $D = \sigma^2$, since $R_D = 0$ when $D \ge \sigma^2$.
Rate-Distortion Theory

[Plot: $R_D = \max\!\left(0, \tfrac{1}{2} \log_2 \tfrac{\sigma^2}{D}\right)$ versus D; the curve decreases monotonically and reaches zero at $D = \sigma^2$.]
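A short sketch evaluating the Gaussian rate-distortion function above; the variance and the distortion values are arbitrary assumptions chosen to exercise both branches of the max:

```python
import numpy as np

def rate_gaussian(D, sigma2):
    """Gaussian R(D) = max(0, 0.5 * log2(sigma^2 / D))."""
    return np.maximum(0.0, 0.5 * np.log2(sigma2 / D))

sigma2 = 4.0                       # assumed source variance
for D in [0.25, 1.0, 4.0, 8.0]:    # assumed distortion levels
    print(D, rate_gaussian(D, sigma2))
# D = sigma^2/16 -> 2 bits; D = sigma^2/4 -> 1 bit;
# D = sigma^2 -> 0 bits; D > sigma^2 -> still 0 bits
```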
Rate-Distortion Theory

Note:
- For a continuous-valued RV, $H = \infty \Rightarrow R(D{=}0) = \infty$.
- For a discrete-valued, finite-alphabet RV, H is finite $\Rightarrow R(D{=}0) = H$, which is finite.
Rate-Distortion Theory

Example 2: for an image, each pixel can be modeled as a RV. Consider a block of M pixels $x = \{x(1), x(2), ..., x(M)\}$, where the x(i) are Gaussian RVs coded independently.

Code x using $y = \{y(1), y(2), ..., y(M)\}$, where y(i) = representative value of x(i).

We talk here about an average distortion $D_{avg}$. The arithmetic-average MSE distortion is

  $D_{avg} = \frac{1}{M} \sum_{k=1}^{M} E\left[ (x(k) - y(k))^2 \right]$
Rate-Distortion Theory

Example 2 (continued). Two cases:
- Fixed (desired) average distortion $D_{avg}$  =>  $R_D$ = ?
- Fixed (desired) average bit-rate $R_D$  =>  $D_{avg}$ = ?
Rate-Distortion Theory

Example 2 (continued). Fixed (desired) average distortion $D_{avg}$ => $R_D$ = ?

The rate-distortion function $R_D$ of the vector x is given by

  $R_D = \frac{1}{M} \sum_{k=1}^{M} \underbrace{\max\!\left(0, \tfrac{1}{2} \log_2 \tfrac{\sigma_k^2}{\theta}\right)}_{R_{D,k} \text{ of individual elements}}$,  with per-element distortion $D_k = \min(\theta, \sigma_k^2)$

where each $R_{D,k}$ is a function of $\theta$ and $\sigma_k^2$, and the parameter $\theta$ is determined by solving

  $D_{avg} = \frac{1}{M} \sum_{k=1}^{M} \min(\theta, \sigma_k^2)$

Note: if $\theta \ge \sigma_k^2$, then $D_k = \sigma_k^2$ and $R_{D,k} = 0$.
Rate-Distortion Theory

Example 2 (continued). Fixed (desired) rate $R_D$ => $D_{avg}$ = ?

1. $\theta$ is found by solving

   $R_D = \frac{1}{M} \sum_{k=1}^{M} \max\!\left(0, \tfrac{1}{2} \log_2 \tfrac{\sigma_k^2}{\theta}\right)$

2. The minimum attainable average distortion $D_{avg}$ is then given by

   $D_{avg} = \frac{1}{M} \sum_{k=1}^{M} \min(\theta, \sigma_k^2)$

Note: in general, $R_D$ is a convex and monotonically non-increasing function of the distortion D.
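Both cases can be solved numerically with a one-dimensional root search in $\theta$, since $D_{avg}(\theta)$ is monotonically increasing and $R_D(\theta)$ monotonically decreasing. A sketch, with assumed per-component variances and target values (scipy's brentq is used for the root search):

```python
import numpy as np
from scipy.optimize import brentq

sigma2 = np.array([4.0, 1.0, 0.25, 0.0625])  # assumed variances sigma_k^2

def rate(theta):
    """R_D(theta) = (1/M) * sum_k max(0, 0.5 * log2(sigma_k^2 / theta))."""
    return np.mean(np.maximum(0.0, 0.5 * np.log2(sigma2 / theta)))

def dist(theta):
    """D_avg(theta) = (1/M) * sum_k min(theta, sigma_k^2)."""
    return np.mean(np.minimum(theta, sigma2))

# Case 1: fixed average distortion -> solve dist(theta) = D_avg, read off the rate.
D_avg = 0.5                                  # assumed distortion target
theta = brentq(lambda t: dist(t) - D_avg, 1e-9, sigma2.max())
print("theta =", theta, " R_D =", rate(theta))

# Case 2: fixed rate -> solve rate(theta) = R, read off the distortion.
R = 1.0                                      # assumed rate target (bits/sample)
theta = brentq(lambda t: rate(t) - R, 1e-9, sigma2.max())
print("theta =", theta, " D_avg =", dist(theta))
```

Components with $\sigma_k^2 \le \theta$ drop out of the rate sum (they receive zero bits and contribute distortion $\sigma_k^2$), which is exactly the $R_{D,k} = 0$ case noted on the previous slide.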