> ", A " "<n. :-t. 'O "" (l, -...I. c-j "';> \ J, \l\ , iy-j: "a-j C 'C) \.,S \\ \ /tv '-> .,,J c,. <tt .

Similar documents
Objectives of Image Coding

Bandwidth: Communicate large complex & highly detailed 3D models through lowbandwidth connection (e.g. VRML over the Internet)

at Some sort of quantization is necessary to represent continuous signals in digital form

Gaussian source Assumptions d = (x-y) 2, given D, find lower bound of I(X;Y)

Coding for Discrete Source

BASICS OF COMPRESSION THEORY

C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University

Multimedia Communications. Scalar Quantization

Principles of Communications

COMM901 Source Coding and Compression. Quiz 1

2018/5/3. YU Xiangyu


Scalar and Vector Quantization. National Chiao Tung University Chun-Jen Tsai 11/06/2014

Compression methods: the 1 st generation

Digital Image Processing Lectures 25 & 26

Run-length & Entropy Coding. Redundancy Removal. Sampling. Quantization. Perform inverse operations at the receiver EEE

EE5356 Digital Image Processing. Final Exam. 5/11/06 Thursday 1 1 :00 AM-1 :00 PM

Chapter 2 Date Compression: Source Coding. 2.1 An Introduction to Source Coding 2.2 Optimal Source Codes 2.3 Huffman Code

= 4. e t/a dt (2) = 4ae t/a. = 4a a = 1 4. (4) + a 2 e +j2πft 2

Chapter 9 Fundamental Limits in Information Theory

CSCI 2570 Introduction to Nanocomputing

ECE Information theory Final

Lecture 4 Channel Coding

Being edited by Prof. Sumana Gupta 1. only symmetric quantizers ie the input and output levels in the 3rd quadrant are negative

Lecture 1: Shannon s Theorem

4. Quantization and Data Compression. ECE 302 Spring 2012 Purdue University, School of ECE Prof. Ilya Pollak

APPH 4200 Physics of Fluids

Digital communication system. Shannon s separation principle

Review of Quantization. Quantization. Bring in Probability Distribution. L-level Quantization. Uniform partition

Analysis of Rate-distortion Functions and Congestion Control in Scalable Internet Video Streaming

UNIT I INFORMATION THEORY. I k log 2

EE-597 Notes Quantization

X 1 : X Table 1: Y = X X 2

Communications Theory and Engineering

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory

Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi

EE376A - Information Theory Final, Monday March 14th 2016 Solutions. Please start answering each question on a new page of the answer booklet.

Source Coding: Part I of Fundamentals of Source and Video Coding

18.2 Continuous Alphabet (discrete-time, memoryless) Channel

Multiple Description Coding for quincunx images.

Encoder Decoder Design for Event-Triggered Feedback Control over Bandlimited Channels

The Secrets of Quantization. Nimrod Peleg Update: Sept. 2009

LECTURE 15. Last time: Feedback channel: setting up the problem. Lecture outline. Joint source and channel coding theorem

Network coding for multicast relation to compression and generalization of Slepian-Wolf

Introduction p. 1 Compression Techniques p. 3 Lossless Compression p. 4 Lossy Compression p. 5 Measures of Performance p. 5 Modeling and Coding p.

Motivation for Arithmetic Coding

Low-density parity-check (LDPC) codes

Lecture 22: Final Review

Pulse-Code Modulation (PCM) :

Chapter 2: Source coding

ECS 332: Principles of Communications 2012/1. HW 4 Due: Sep 7

Image Data Compression

Exercise 1. = P(y a 1)P(a 1 )

CS578- Speech Signal Processing

CMPT 365 Multimedia Systems. Final Review - 1

Block 2: Introduction to Information Theory

MARKOV CHAINS A finite state Markov chain is a sequence of discrete cv s from a finite alphabet where is a pmf on and for

1 Introduction to information theory

Encoder Decoder Design for Event-Triggered Feedback Control over Bandlimited Channels

Communication constraints and latency in Networked Control Systems

ECE Information theory Final (Fall 2008)

An instantaneous code (prefix code, tree code) with the codeword lengths l 1,..., l N exists if and only if. 2 l i. i=1

EE/Stat 376B Handout #5 Network Information Theory October, 14, Homework Set #2 Solutions

Basic Principles of Video Coding

Chapter 3 Source Coding. 3.1 An Introduction to Source Coding 3.2 Optimal Source Codes 3.3 Shannon-Fano Code 3.4 Huffman Code

Information Theory and Coding Techniques

National University of Singapore Department of Electrical & Computer Engineering. Examination for

EE376A - Information Theory Midterm, Tuesday February 10th. Please start answering each question on a new page of the answer booklet.

~,. :'lr. H ~ j. l' ", ...,~l. 0 '" ~ bl '!; 1'1. :<! f'~.., I,," r: t,... r':l G. t r,. 1'1 [<, ."" f'" 1n. t.1 ~- n I'>' 1:1 , I. <1 ~'..

Image Coding. Chapter 10. Contents. (Related to Ch. 10 of Lim.) 10.1

L. Yaroslavsky. Fundamentals of Digital Image Processing. Course

SCALABLE AUDIO CODING USING WATERMARKING

Various signal sampling and reconstruction methods

Information Theory. Coding and Information Theory. Information Theory Textbooks. Entropy

Encoder Decoder Design for Feedback Control over the Binary Symmetric Channel

Multimedia Networking ECE 599

Ch. 8 Math Preliminaries for Lossy Coding. 8.4 Info Theory Revisited

Lecture 5 Channel Coding over Continuous Channels

5. Density evolution. Density evolution 5-1

Performance Bounds for Joint Source-Channel Coding of Uniform. Departements *Communications et **Signal

Chapter 6 Reed-Solomon Codes. 6.1 Finite Field Algebra 6.2 Reed-Solomon Codes 6.3 Syndrome Based Decoding 6.4 Curve-Fitting Based Decoding

Overview. Analog capturing device (camera, microphone) PCM encoded or raw signal ( wav, bmp, ) A/D CONVERTER. Compressed bit stream (mp3, jpg, )

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018

Quantization 2.1 QUANTIZATION AND THE SOURCE ENCODER

Lecture 20: Quantization and Rate-Distortion

EC2252 COMMUNICATION THEORY UNIT 5 INFORMATION THEORY

Local Asymptotics and the Minimum Description Length

Sparse Regression Codes for Multi-terminal Source and Channel Coding

Transform Coding. Transform Coding Principle

Predictive Coding. Prediction Prediction in Images

Predictive Coding. Prediction

PCM Reference Chapter 12.1, Communication Systems, Carlson. PCM.1

Lecture 12. Block Diagram

4F5: Advanced Communications and Coding Handout 2: The Typical Set, Compression, Mutual Information

Chapter 4: Continuous channel and its capacity

Lecture 7. Union bound for reducing M-ary to binary hypothesis testing

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

10-704: Information Processing and Learning Fall Lecture 9: Sept 28

Lecture 18: Gaussian Channel

Image Compression. Fundamentals: Coding redundancy. The gray level histogram of an image can reveal a great deal of information about the image

Transcription:

~

l' '" >. 1.... A " '.",... "<n J. 'O "" (l,, :t I "...I cj '. "';> \ J, \l\, iyj: c It "aj C 'C) \.,S ( /tv... hj N \\ \ + <tt I '>.,,J c,..$ < 4S'

.,. r 1 """..,. 3.3 3 "" "",.. Q?., 0... < Jt... 2 :3 (c 1\4 1,. ", J ' 3 A 3 3,.. r \7 I : I + 1\ ", <<+ L:J \\. ' 4! t c."t + 3" 4. 1 3 \..., Q!1 ri " % j 1\ 3 ;: q\ if 1 ::> " ')) 11 3.,J\.,.g 0. '+ Q. (':!:, '"...J '',&.,..

Objectives of Image Coding

- Representation of an image with acceptable quality, using as small a number of bits as possible
- Applications:
  - Reduction of channel bandwidth for image transmission
  - Reduction of required storage

Issues in Image Coding

1. What to code?
   a. Image intensity
   b. Image transform coefficients
   c. Image model parameters
2. How to assign reconstruction levels
   a. Uniform spacing between reconstruction levels
   b. Nonuniform spacing between reconstruction levels
3. Bit assignment
   a. Equal-length bit assignment to each reconstruction level
   b. Unequal-length bit assignment to each reconstruction level

Methods of Reconstruction Level Assignments

Assumptions:
- Image intensity is to be coded
- Equal-length bit assignment

Scalar Case

1. Equal spacing of reconstruction levels (Uniform Quantization)

(Ex): Image intensity f: 0 to 255
Number of reconstruction levels: 4 (2 bits for equal bit assignment)

[Figure: intensity axis from 0 to 255 with the decision boundaries d_i and reconstruction levels r_i marked]

f tunifonft qu"'ti_ ~; ; I1pn 10.3 Eumple or unirorm quantizer. The Dumber or reconstructioa levels is 4./ is assumed to be betweed0 and 1. and I is the result or quantizinl/. The reconstruction levels and decisiod, bounlhries are denoted by r. and d.. respectively.
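The uniform case is simple enough to state directly in code. Below is a minimal sketch (not part of the original notes) of the 2-bit uniform quantizer from the example above, with f between 0 and 255; the equally spaced decision boundaries 0, 64, 128, 192, 256 and midpoint reconstruction levels 32, 96, 160, 224 are the same values used on the next slide.

```python
# Minimal sketch (illustrative): uniform quantization of an 8-bit intensity
# f in [0, 255] to 4 reconstruction levels (2 bits), with decision boundaries
# 0, 64, 128, 192, 256 and midpoint reconstruction levels 32, 96, 160, 224.
def uniform_quantize(f, num_levels=4, f_min=0.0, f_max=256.0):
    step = (f_max - f_min) / num_levels
    cell = min(int((f - f_min) // step), num_levels - 1)  # index of the decision interval
    return f_min + (cell + 0.5) * step                    # midpoint reconstruction level

for f in [0, 63, 64, 200, 255]:
    print(f, "->", uniform_quantize(f))  # 0->32.0, 63->32.0, 64->96.0, 200->224.0, 255->224.0
```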

Scalar Case (cont.)

2. Spacing based on some error criterion

(Ex): r_i: reconstruction levels (32, 96, 160, 224)
      d_i: decision boundaries (0, 64, 128, 192, 256)

Optimally choose r_i and d_i. To do this, assume f_min ≤ f ≤ f_max, let J be the number of reconstruction levels, and let p(f) be the probability density function for f.

Minimize over the r_i and d_i:

    E = E[(f − f̂)²] = ∫_{f_min}^{f_max} (f − f̂)² p(f) df

Setting the partial derivatives to zero gives the Lloyd-Max quantizer:

    ∂E/∂d_j = 0  ⟹  d_j = (r_{j−1} + r_j) / 2

    ∂E/∂r_j = 0  ⟹  r_j = [∫_{d_j}^{d_{j+1}} f p(f) df] / [∫_{d_j}^{d_{j+1}} p(f) df]

These are not simple linear equations.
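In practice the two conditions are solved iteratively, alternating between updating the boundaries (midpoints of adjacent levels) and the levels (centroids of each decision interval). Below is a minimal sketch, not part of the original notes, that approximates the solution for a zero-mean, unit-variance Gaussian source by Monte Carlo; the sample count, iteration count, and initialization are illustrative assumptions.

```python
# Minimal sketch (illustrative): Lloyd-Max iteration for a unit-variance,
# zero-mean Gaussian source, alternating between
#   d_j = (r_{j-1} + r_j)/2         (boundary update)
#   r_j = E[f | d_j <= f < d_{j+1}] (centroid update)
# using Monte Carlo samples as a stand-in for p(f).
import numpy as np

def lloyd_max_gaussian(num_levels, num_iters=100, num_samples=500_000, seed=0):
    rng = np.random.default_rng(seed)
    f = rng.standard_normal(num_samples)       # samples stand in for p(f)
    r = np.linspace(-2.0, 2.0, num_levels)     # initial reconstruction levels (a guess)
    for _ in range(num_iters):
        d = (r[:-1] + r[1:]) / 2.0             # decision boundaries: midpoints
        cells = np.digitize(f, d)              # decision interval of each sample
        r = np.array([f[cells == j].mean()     # reconstruction level: cell centroid
                      for j in range(num_levels)])
    d = (r[:-1] + r[1:]) / 2.0
    return r, d

r, d = lloyd_max_gaussian(4)
print("reconstruction levels:", np.round(r, 4))  # approx. -1.51, -0.45, 0.45, 1.51
print("decision boundaries:  ", np.round(d, 4))  # approx. -0.98, 0.00, 0.98
```

With four levels this should approximately reproduce the 2-bit Gaussian entries of Table 10.1 below.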

Scalar Case (cont.)

Solution to the Optimization Problem

Table 10.1 Placement of reconstruction and decision levels for the Lloyd-Max quantizer. For the uniform PDF, p_f(f0) is assumed uniform between −1 and 1. The Gaussian PDF is assumed to have a mean of 0 and variance of 1. For the Laplacian PDF, p_f(f0) = (1/(√2 σ)) exp(−√2 |f0| / σ) with σ = 1. The levels are symmetric about zero; only the non-negative reconstruction levels r_i and decision boundaries d_i are listed.

1 bit:
  Uniform:   r: 0.5000;  d: 0.0000, 1.0000
  Gaussian:  r: 0.7979;  d: 0.0000, ∞
  Laplacian: r: 0.7071;  d: 0.0000, ∞

2 bits:
  Uniform:   r: 0.2500, 0.7500;  d: 0.0000, 0.5000, 1.0000
  Gaussian:  r: 0.4528, 1.5104;  d: 0.0000, 0.9816, ∞
  Laplacian: r: 0.4198, 1.8340;  d: 0.0000, 1.1269, ∞

3 bits:
  Uniform:   r: 0.1250, 0.3750, 0.6250, 0.8750;  d: 0.0000, 0.2500, 0.5000, 0.7500, 1.0000
  Gaussian:  r: 0.2451, 0.7560, 1.3439, 2.1519;  d: 0.0000, 0.5005, 1.0500, 1.7479, ∞
  Laplacian: r: 0.2334, 0.8330, 1.6725, 3.0867;  d: 0.0000, 0.5332, 1.2527, 2.3796, ∞

4 bits:
  Uniform:   r: 0.0625, 0.1875, 0.3125, 0.4375, 0.5625, 0.6875, 0.8125, 0.9375
             d: 0.0000, 0.1250, 0.2500, 0.3750, 0.5000, 0.6250, 0.7500, 0.8750, 1.0000
  Gaussian:  r: 0.1284, 0.3880, 0.6568, 0.9423, 1.2562, 1.6180, 2.0690, 2.7326
             d: 0.0000, 0.2582, 0.5224, 0.7995, 1.0993, 1.4371, 1.8435, 2.4008, ∞
  Laplacian: r: 0.1240, 0.4048, 0.7287, 1.1110, 1.5778, 2.1773, 3.0169, 4.4311
             d: 0.0000, 0.2644, 0.5667, 0.9198, 1.3444, 1.8776, 2.5971, 3.7240, ∞

Figure 10.5 Example of a Lloyd-Max quantizer. The number of reconstruction levels is 4, and the probability density function for f is Gaussian with mean of 0 and variance of 1.

Figure 10.6 Comparison of average distortion D = E[(f − f̂)²] as a function of L, the number of reconstruction levels, for a uniform quantizer (dotted line) and the Lloyd-Max quantizer (solid line). The vertical axis is 10 log₁₀ D. The probability density function of f is assumed to be Gaussian with variance of 1.

Scalar Case (cont.)

For some densities, the optimal solution can be viewed as follows:

    f → Nonlinearity → g → Uniform quantizer → Nonlinearity⁻¹ → f̂

Figure 10.7 Nonuniform quantization by companding.

p(g) is a flat density, equally likely between g_min and g_max.

[Table: forward and inverse companding transformations for the Gaussian, Rayleigh, and Laplacian densities]
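As an illustration of the companding structure only (not the specific transformation pairs tabulated above): the sketch below assumes the Gaussian CDF as the forward nonlinearity, so that g has a flat density on (0, 1) as in Figure 10.7, applies a uniform quantizer to g, and maps the result back through the inverse CDF. It demonstrates the structure rather than the minimum-mean-square-error compressor.

```python
# Minimal sketch (illustrative): companding quantization of a Gaussian source.
# Forward nonlinearity: the Gaussian CDF, so g = F(f) is uniform on (0, 1).
# Then uniform quantization in the g domain, then the inverse CDF.
from statistics import NormalDist
import random

normal = NormalDist(mu=0.0, sigma=1.0)
L = 4                                  # number of reconstruction levels (assumed)

def compand_quantize(f):
    g = normal.cdf(f)                  # forward nonlinearity: flat density on (0, 1)
    cell = min(int(g * L), L - 1)      # uniform quantizer on (0, 1)
    g_hat = (cell + 0.5) / L           # midpoint reconstruction in the g domain
    return normal.inv_cdf(g_hat)       # inverse nonlinearity back to the f domain

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(5)]
print([round(compand_quantize(f), 4) for f in samples])
```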

Vector Case

f_1, f_2: two image intensities

One approach: separate level assignment for f_1 and f_2
- The previous discussions apply.

Another approach: joint level assignment for (f_1, f_2)
- Typically more efficient than separate level assignment.

Choose the reconstruction levels and decision boundaries to minimize

    E = ∫∫ [(f_1 − f̂_1)² + (f_2 − f̂_2)²] p(f_1, f_2) df_1 df_2

- The efficiency gain depends on the amount of correlation between f_1 and f_2.
- Finding the joint density is difficult.

(Ex): Extreme case: vector level assignment for a 256 × 256 image. The joint density p(x_1, x_2, ..., x_256) is difficult to get. The law of diminishing returns comes into play.
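A joint (vector) level assignment can be sketched with a k-means / LBG-style iteration, which alternates nearest-level assignment and centroid updates in direct analogy with the scalar Lloyd-Max conditions. The sketch below is illustrative, not from the notes: the correlated Gaussian pairs standing in for neighboring pixel intensities, the codebook size, and the iteration count are all assumptions.

```python
# Minimal sketch (illustrative): joint level assignment for pairs (f1, f2)
# using a k-means / LBG-style iteration: assign each training pair to its
# nearest reconstruction vector (squared error), then move each vector to the
# centroid of its cell.
import numpy as np

def design_vector_quantizer(samples, num_levels, num_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    # initialize reconstruction vectors with randomly chosen training samples
    levels = samples[rng.choice(len(samples), num_levels, replace=False)]
    for _ in range(num_iters):
        # squared-error distance from every sample to every reconstruction vector
        dists = ((samples[:, None, :] - levels[None, :, :]) ** 2).sum(axis=2)
        cells = dists.argmin(axis=1)
        # centroid update for each non-empty cell
        for j in range(num_levels):
            if np.any(cells == j):
                levels[j] = samples[cells == j].mean(axis=0)
    return levels

rng = np.random.default_rng(1)
f1 = rng.standard_normal(20000)
f2 = f1 + 0.3 * rng.standard_normal(20000)     # correlated, like adjacent pixels
pairs = np.stack([f1, f2], axis=1)
codebook = design_vector_quantizer(pairs, num_levels=4)
print(np.round(codebook, 3))  # reconstruction vectors line up along the f2 ~ f1 diagonal
```

Because the pairs are correlated, the reconstruction vectors concentrate along the diagonal, which is exactly where a separate (per-component) assignment would waste levels.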

Codeword Design: Bit Allocation

- After quantization, we need to assign a binary codeword to each quantization symbol.
- Options:
  - Uniform: equal number of bits for each symbol → inefficient.
  - Nonuniform: short codewords for the more probable symbols, longer codewords for the less probable ones.
- For nonuniform codes, the code has to be uniquely decodable.
  Example: L = 4, r_1 → 0, r_2 → 1, r_3 → 10, r_4 → 11.
  Suppose we receive 100. We can decode it two ways: either r_3 r_1 or r_2 r_1 r_1. Not uniquely decodable; one codeword is a prefix of another one.
- Use prefix codes instead to achieve unique decodability (see the sketch below).
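A minimal sketch, not from the notes, that enumerates every possible parse of the received string 100 under the ambiguous code from the example and under a prefix code; the prefix code {0, 10, 110, 111} is an assumed replacement chosen so that no codeword is a prefix of another.

```python
# Minimal sketch (illustrative): why {0, 1, 10, 11} is not uniquely decodable,
# versus a prefix code {0, 10, 110, 111} that is.
def all_parses(bits, code):
    """Return every way to split `bits` into codewords of `code`."""
    if not bits:
        return [[]]
    parses = []
    for symbol, word in code.items():
        if bits.startswith(word):
            for rest in all_parses(bits[len(word):], code):
                parses.append([symbol] + rest)
    return parses

ambiguous_code = {"r1": "0", "r2": "1", "r3": "10", "r4": "11"}
prefix_code    = {"r1": "0", "r2": "10", "r3": "110", "r4": "111"}

print(all_parses("100", ambiguous_code))  # [['r2', 'r1', 'r1'], ['r3', 'r1']] -> two decodings
print(all_parses("100", prefix_code))     # [['r2', 'r1']]                    -> one decoding
```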

Overview

- Goal: design variable-length codewords so that the average bit rate is minimized.
- Define the entropy to be

      H = − Σ_{i=1}^{L} p_i log_2 p_i

  where p_i is the probability that the i-th symbol is a_i.
- Since Σ_{i=1}^{L} p_i = 1, we have 0 ≤ H ≤ log_2 L.
- Entropy: the average amount of information a message contains.
- Example: L = 2
  - p_1 = 1, p_2 = 0 → H = 0. A symbol contains NO information.
  - p_1 = p_2 = 1/2 → H = 1. The maximum possible value.
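A short numerical check of the definition; the three-symbol distribution is an illustrative value, not from the notes.

```python
# Minimal sketch (illustrative): entropy H = -sum_i p_i log2(p_i), reproducing
# the L = 2 examples above and one three-symbol case.
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)  # treat 0*log2(0) as 0

print(entropy([1.0, 0.0]))       # 0.0    -> a certain symbol carries no information
print(entropy([0.5, 0.5]))       # 1.0    -> maximum for L = 2
print(entropy([0.7, 0.2, 0.1]))  # ~1.157 -> between 0 and log2(3) ~ 1.585
```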

Overview (cont.)

- Information theory: H is the theoretically minimum possible average bit rate.
- In practice, it is hard to achieve.
  Example: L symbols, p_i = 1/L → H = log_2 L; uniform-length coding achieves this.
- Huffman coding tells you how to do nonuniform bit allocation to different codewords so that you get unique decodability and get pretty close to the entropy.
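A minimal sketch of the Huffman construction, with made-up probabilities (not values from the notes): repeatedly merge the two least probable nodes and grow the codewords from the bottom up. The resulting prefix code's average length comes close to the entropy.

```python
# Minimal sketch (illustrative): Huffman code construction with a heap,
# repeatedly merging the two least probable nodes.
import heapq
from math import log2

def huffman_code(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> codeword."""
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)                      # tie-breaker so tuples stay comparable
    while len(heap) > 1:
        p0, _, code0 = heapq.heappop(heap)   # least probable node
        p1, _, code1 = heapq.heappop(heap)   # second least probable node
        merged = {s: "0" + w for s, w in code0.items()}
        merged.update({s: "1" + w for s, w in code1.items()})
        heapq.heappush(heap, (p0 + p1, counter, merged))
        counter += 1
    return heap[0][2]

probs = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}   # made-up example distribution
code = huffman_code(probs)
avg_len = sum(probs[s] * len(w) for s, w in code.items())
H = -sum(p * log2(p) for p in probs.values())
print(code)                            # a prefix code with lengths 1, 2, 3, 3 bits
print(round(avg_len, 3), round(H, 3))  # 1.9 bits/symbol vs entropy ~1.846
```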