Quantization


Quantization: Some sort of quantization is necessary to represent continuous signals in digital form. Block diagram: x(t1, t2) -> Sampler -> x(n1, n2) -> Quantizer -> x_q(n1, n2); the Sampler and Quantizer together form the Digitizer (A/D). Quantization is also used for data reduction in virtually all lossy coding schemes.

Quantization: Basic Concepts

In quantization, the range of input values x is divided into countable non-overlapping subsets, called quantization levels {V_j}, j = 1, ..., M. Each quantization level (subset of the input range) V_k, k = 1, ..., M, is assigned an index and a representative value r_k, called the reconstruction value or reconstruction level; the reconstruction levels are {r_j}, j = 1, ..., M. A quantizer Q(.) with quantization levels {V_j} and reconstruction levels {r_j} is the mapping:

Q(x) = r_k, where x ∈ V_k

Quantization: Basic Concepts

Two main types of quantizers:
- If the domain of the input signal is R^k or C^k, i.e. the input x is a k-dimensional vector, the quantizer is a vector quantizer.
- In the special case where the dimension k = 1, i.e. the input is a scalar, the quantizer is a scalar quantizer.
A scalar quantizer is a special case of a vector quantizer.

Example 1: scalar quantizer (M = 5) with reconstruction levels r_1 = -1, r_2 = 1, r_3 = 2.5, r_4 = 4, r_5 = 6 on the x-axis, with decision boundaries separating the cells V_1, ..., V_5.
Example 2: vector quantizer (k = 2, M = 6): the (x_1, x_2) plane is partitioned into cells V_1, ..., V_6. Usually r_i = central point of V_i, i = 1, ..., 6.
(Both examples appear as figures in the original slide.)

Quantization: Basic Concepts

In the context of a communication system, a quantizer is the composition of two mappings:
- Encoder mapping E: X -> I
- Decoder mapping D: I -> {r_i}
Q = D ∘ E
When variable-length encoding is allowed, an entropy coder L can be included as part of the quantizer for convenience (or for joint optimization):
Q = D ∘ L^(-1) ∘ L ∘ E
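A minimal sketch (not from the lecture notes) of the idea above: a scalar quantizer expressed as Q = D ∘ E, reusing the reconstruction levels of Example 1. The decision boundaries and the function names (encode, decode) are illustrative assumptions.

```python
import numpy as np

boundaries = np.array([0.0, 2.0, 3.0, 5.0])   # assumed boundaries splitting the line into M=5 cells V_1..V_5
recon = np.array([-1.0, 1.0, 2.5, 4.0, 6.0])  # reconstruction levels r_1..r_5 from Example 1

def encode(x):
    """Encoder mapping E: X -> I, returns the index k of the cell V_k containing x (0-based)."""
    return np.digitize(x, boundaries)

def decode(k):
    """Decoder mapping D: I -> {r_i}, returns the reconstruction level r_k."""
    return recon[k]

x = np.array([-3.2, 0.7, 2.9, 10.0])
xq = decode(encode(x))                        # Q(x) = D(E(x))
print(xq)                                     # -> [-1.   1.   2.5  6. ]
```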

Quantization: Quantizer Design

Objective: optimize the performance of the quantizer given some constraints on its structure.
Reason: the quantizer introduces noise and distortion; it is a lossy and irreversible operation, so it needs to be optimized to minimize the distortion.

Quantization: Quantizer Design

Performance evaluation: performance is usually evaluated in terms of the reconstruction (quantization) error x - Q(x). Common measures of quantization error:
- mean square quantization error: E_Q2 = E{ (x - Q(x))^2 }
- mean absolute quantization error: E_Q1 = E{ |x - Q(x)| }
Perceptual measures are more desirable, but more difficult to quantify and compare. They are based on the concept of noise masking: choose quantization and reconstruction levels such that the noise is minimally noticeable (MND) given some constraints, or not noticeable (JND).
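As a rough illustration (an assumption, not the notes' design), the two error measures can be estimated empirically for a simple uniform quantizer Q(x) = delta * round(x / delta):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)           # arbitrary test source
delta = 0.5                            # step size (illustrative)
xq = delta * np.round(x / delta)       # Q(x): uniform quantizer (assumed for this sketch)

eq2 = np.mean((x - xq) ** 2)           # mean square quantization error E{(x - Q(x))^2}
eq1 = np.mean(np.abs(x - xq))          # mean absolute quantization error E{|x - Q(x)|}
print(eq2, eq1)                        # eq2 is close to delta**2 / 12 for fine quantization
```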

Quantization: Quantizer Design

Common design constraints on the quantizer structure include:
- number of quantization levels M
- entropy of the quantized signal
- geometric structure of the quantization levels (e.g., uniform, nonuniform, ...)
- maximum allowable distortion
- maximum allowable bit-rate

Some Results from Information Theory

Suppose there is a discrete-domain, discrete-value random source (signal) generating a discrete set of independent samples or messages (such as gray levels) {r_k}, with probabilities p_k, k = 1, ..., L. The information associated with r_k is defined as

I_k = -log2 p_k (bits), where p_k = probability that a sample has the value r_k

Note: Σ_{k=1..L} p_k = 1 and 0 <= p_k <= 1, so I_k >= 0. I_k is large when an unlikely message is generated; p_k = 1 means a certain message, so I_k = 0.

Some Results from Information Theory

Assume M = 256 equally-likely symbols {r_k}:
p_k = 1/256, k = 1, ..., 256, so p_k = 2^(-8) and I_k = 8 bits

Some Results from Information Theory

Entropy: a measure of the average amount of information content in the signal (non-contextual information content). The first-order entropy (also simply called entropy) of a signal that is discrete in both time/space and amplitude (such as an image) is given by

H = Σ_k p_k I_k = -Σ_k p_k log2 p_k   bits/message (signal value)

H is the average information generated by the source, or contained in the signal. H is a lower bound on the average bit-rate (bits/sample) needed to encode the signal, under the assumption that samples are uncorrelated (or that no information about inter-sample dependency is available).
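A short sketch (not from the notes) of how the first-order entropy might be estimated from a signal's histogram; the function name and the random test image are illustrative.

```python
import numpy as np

def first_order_entropy(samples, levels=256):
    """First-order entropy H = -sum_k p_k log2 p_k in bits per sample."""
    hist = np.bincount(np.asarray(samples).ravel(), minlength=levels)
    p = hist / hist.sum()          # pmf p_k estimated from the histogram
    p = p[p > 0]                   # drop zero-probability levels (0 log 0 = 0)
    return -np.sum(p * np.log2(p))

# With uniformly distributed 8-bit gray levels, H approaches log2(256) = 8 bits,
# the maximum for L = 256 messages (see the next slide).
img = np.random.randint(0, 256, size=(64, 64))
print(first_order_entropy(img))
```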

Some Results from Information Theory

Entropy is maximum for uniform distributions:
p_k = 1/L, k = 1, ..., L, where L = total number of messages
H = -Σ_{k=1..L} (1/L) log2(1/L) = log2 L = max over {p_k} of H

Some Results from Information Theory

The first-order entropy is a measure of the information content in the signal (average information) and a lower bound on the average number of bits per sample needed to represent the signal losslessly (assuming no information about inter-sample dependency is available, so one cannot exploit dependency to lower the bit rate).

Some Results from Information Theory

Entropy coding. Objective: use n_k = -log2 p_k = I_k bits to code amplitude level k (i.e., V_k or, equivalently, r_k).
Result: the average bit-rate, defined as B = Σ_k p_k n_k, is equal to the entropy H if n_k = -log2 p_k, i.e. the lower bound is achieved (assuming independent samples).

Some Results from Information Theory

Entropy coding. Two widely-used techniques for entropy coding are:
- Huffman coding
- Arithmetic coding
Note: the Huffman code is optimal in the sense that its average bit-rate B_Huf does not exceed the average bit-rate of any other code, under the assumption that each symbol is coded separately and assigned a fixed codeword with an integer number of bits. The average bit-rate of a Huffman code is within one bit of the entropy:
H <= B_Huf <= H + 1; B_Huf = H if the p_k are powers of 1/2.
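A minimal Huffman sketch (not from the notes), computing only the codeword lengths with a heap so that H <= B_Huf <= H + 1 can be checked; the function name and the example pmf are assumptions for illustration.

```python
import heapq
import numpy as np

def huffman_lengths(p):
    """Return the Huffman codeword length n_k for each symbol probability p[k]."""
    # heap items: (probability, tie-breaker, list of symbol indices in this subtree)
    heap = [(pk, k, [k]) for k, pk in enumerate(p)]
    heapq.heapify(heap)
    lengths = [0] * len(p)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, t, s2 = heapq.heappop(heap)
        for k in s1 + s2:                 # every symbol in the merged subtree gets one more bit
            lengths[k] += 1
        heapq.heappush(heap, (p1 + p2, t, s1 + s2))
    return lengths

p = [0.5, 0.25, 0.125, 0.125]             # probabilities that are powers of 1/2
n = huffman_lengths(p)
H = -sum(pk * np.log2(pk) for pk in p)
B = sum(pk * nk for pk, nk in zip(p, n))  # average bit-rate B = sum_k p_k n_k
print(n, H, B)                            # here B equals H (1.75 bits/symbol)
```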

Some Results from Information Theory: Rate Distortion Theory

Rate-distortion theory provides useful results about:
- the minimum bit-rate achievable under a fixed maximum allowed distortion
- the minimum distortion achievable under a given maximum allowable bit-rate
This is important in quantizer design: there is a tradeoff between bit-rate and distortion.
Rate-distortion function: R(D) = R_D gives the minimum average rate R_D (in bits per sample) required to represent (code) a random variable x (signal) under a fixed distortion D. The distortion D is given by an error measure (MSE, MAE, ...), e.g. D = E[ (x - y)^2 ], where y = representative value of x.

Rate Distortion Theory

Example 1: the signal x (e.g., a row-by-row ordering of an image) is a Gaussian distributed random variable with pdf

p_x(x) = (1 / sqrt(2 π σ^2)) exp( -(x - m)^2 / (2 σ^2) )

where m = mean = E[x], σ^2 = variance = E[(x - m)^2], and x = value that the RV x takes.
In this case, the rate-distortion function of x is given by

R_D = max( 0, (1/2) log2(σ^2 / D) ) = (1/2) log2(σ^2 / D) for D <= σ^2, and 0 for D > σ^2

Rate Distortion Theory

For a Gaussian distributed RV x:

R_D = max( 0, (1/2) log2(σ^2 / D) ) = (1/2) log2(σ^2 / D) for D <= σ^2, and 0 for D > σ^2

Note: the maximum possible distortion is D = σ^2, since R_D = 0 when D = σ^2.
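A tiny sketch (not from the notes) that just evaluates this formula numerically; the function name and the sample values of D are illustrative.

```python
import numpy as np

def gaussian_rd(D, sigma2):
    """R_D = max(0, 0.5 * log2(sigma^2 / D)) for a Gaussian source."""
    return max(0.0, 0.5 * np.log2(sigma2 / D))

sigma2 = 1.0
for D in (0.01, 0.25, 1.0, 2.0):
    print(D, gaussian_rd(D, sigma2))   # the rate drops to 0 once D >= sigma^2
```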

Rate Distortion Theory

Plot: R_D = max( 0, (1/2) log2(σ^2 / D) ) as a function of D; R_D decreases monotonically from large values near D = 0 down to R_D = 0 at D = σ^2. (The original slide shows the curve.)

Rate Distortion Theory

Note:
- For a continuous-valued RV, H is infinite, so R(D = 0) = H is infinite.
- For a discrete-valued, finite-alphabet RV, H is finite, so R(D = 0) = H is finite.

Rate Distortion Theory

Example 2: for an image, each pixel can be modeled as a RV. Consider a block of M pixels x = { x(1), x(2), ..., x(M) }, where the x(i) are Gaussian RVs coded independently. Code using y = { y(1), y(2), ..., y(M) }, where y(i) = representative value of x(i). We talk here about the average distortion D_avg. The arithmetic average MSE distortion is

D_avg = (1/M) Σ_{k=1..M} E[ (x(k) - y(k))^2 ]

Rate Distortion Theory

Example 2 (continued). Two cases:
- fixed (desired) average distortion D_avg => R_D = ?
- fixed (desired) average bit-rate R_D => D_avg = ?

Rate Distortion Theory

Example 2 (continued). Fixed (desired) average distortion D_avg => R_D = ?
The rate-distortion function R_D of the vector x is given by

R_D = (1/M) Σ_{k=1..M} max( 0, (1/2) log2(σ_k^2 / θ) )

where each term is the R_{D,k} of an individual element with distortion D_k = min(θ, σ_k^2). The parameter θ (θ <= D_avg) is determined by solving

D_avg = (1/M) Σ_{k=1..M} min(θ, σ_k^2)

Note: if θ >= σ_k^2, then D_k = σ_k^2 and R_{D,k} = 0.

Rate Distortion Theory

Example 2 (continued). Fixed (desired) rate R_D => D_avg = ?
1. θ is found by solving R_D = (1/M) Σ_{k=1..M} max( 0, (1/2) log2(σ_k^2 / θ) ).
2. The minimum attainable average distortion is D_avg = (1/M) Σ_{k=1..M} min(θ, σ_k^2).

Note: in general, R_D is a convex and monotonically nonincreasing function of the distortion D.
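A rough sketch (not from the notes) of the first direction, fixed D_avg => R_D: solve for θ by bisection and average the per-component rates. The function name, the bisection approach, and the example variances are assumptions for illustration.

```python
import numpy as np

def rate_for_avg_distortion(sigma2, D_avg):
    """Given per-component variances sigma2[k] and a target average distortion D_avg,
    find theta with (1/M) * sum_k min(theta, sigma2[k]) = D_avg and return (theta, R_D)."""
    sigma2 = np.asarray(sigma2, dtype=float)
    lo, hi = 0.0, sigma2.max()
    for _ in range(100):                              # bisection on theta
        theta = 0.5 * (lo + hi)
        if np.mean(np.minimum(theta, sigma2)) < D_avg:
            lo = theta
        else:
            hi = theta
    R = np.mean(np.maximum(0.0, 0.5 * np.log2(sigma2 / theta)))
    return theta, R

sigma2 = [4.0, 1.0, 0.25, 0.0625]                     # illustrative component variances
print(rate_for_avg_distortion(sigma2, D_avg=0.5))
```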