Digital Image Processing Lectures 25 & 26


1 Lectures 25 & 26, Professor, Department of Electrical and Computer Engineering, Colorado State University, Spring 2015

2 Area 4: Image Encoding and Compression
Goal: To exploit the redundancies in the image in order to reduce the number of bits needed to represent an image or a sequence of images (e.g., video).
Applications:
- Image Transmission: e.g., HDTV, 3DTV, satellite/military communication, and teleconferencing.
- Image Storage: e.g., document storage and retrieval, medical image archives, weather maps, and geological surveys.
Categories of Techniques:
1. Pixel Encoding: PCM, run-length encoding, bit-plane encoding, Huffman encoding, entropy encoding.
2. Predictive Encoding: delta modulation, 2-D DPCM, inter-frame methods.
3. Transform-Based Encoding: DCT-based, WT-based, zonal encoding.
4. Others: vector quantization (clustering), neural network-based, hybrid encoding.

3 Encoding System
There are three steps involved in any encoding system (Fig. 1):
a. Mapping: removes redundancies in the image. Should be invertible.
b. Quantization: the mapped values are quantized using uniform or Lloyd-Max quantizers.
c. Coding: optimal codewords are assigned to the quantized values.
Figure 1: A Typical Image Encoding System.
Before we discuss several types of encoding systems, we need to review some basic results from information theory.

4 Measure of Information & Entropy
Assume there is a source (e.g., an image) that generates a discrete set of independent messages (e.g., grey levels) $r_k$ with probability $P_k$, $k \in [1, L]$, where $L$ is the number of messages (or number of levels).
Figure 2: Source and message.
The information associated with $r_k$ is
$$I_k = -\log_2 P_k \quad \text{bits}$$
Clearly, $\sum_{k=1}^{L} P_k = 1$. For equally likely levels (messages), the information can be transmitted as an $n$-bit binary number:
$$P_k = \frac{1}{L} = \frac{1}{2^n} \implies I_k = n \text{ bits}$$
For images, the $P_k$'s are obtained from the histogram.

5 As an example, consider a binary image with $r_0 =$ Black, $P_0 = 1$ and $r_1 =$ White, $P_1 = 0$; then $I_0 = 0$, i.e. the image carries no information.
Entropy: the average information generated by the source,
$$H = \sum_{k=1}^{L} P_k I_k = -\sum_{k=1}^{L} P_k \log_2 P_k \quad \text{avg. bits/pixel}$$
Entropy also represents a measure of redundancy. Let $L = 4$, $P_1 = P_2 = P_3 = 0$ and $P_4 = 1$; then $H = 0$, the most certain case and thus maximum redundancy. Now let $L = 4$, $P_1 = P_2 = P_3 = P_4 = 1/4$; then $H = 2$, the most uncertain case and hence least redundancy. Maximum entropy occurs when the levels are equally likely, $P_k = \frac{1}{L}\ \forall k \in [1, L]$; then
$$H_{\max} = \sum_{k=1}^{L} \frac{1}{L} \log_2 L = \log_2 L$$
Thus, $0 \le H \le H_{\max}$.
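
To make the entropy computation concrete, here is a minimal Python sketch (NumPy assumed; the test image and level count are illustrative) that estimates $H$ from an image histogram exactly as defined above.

```python
import numpy as np

def entropy(image, levels=256):
    """Entropy H = -sum P_k log2 P_k, with P_k taken from the histogram."""
    hist = np.bincount(image.ravel(), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0*log(0) is taken as 0
    return -np.sum(p * np.log2(p))

# 8-level test image: equally likely levels give H = log2(8) = 3 bits/pixel
rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(64, 64)).astype(np.uint8)
print(entropy(img, levels=8))         # close to 3.0
```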

6 Entropy and Coding
Entropy represents a lower bound on the number of bits required to code the coder inputs: for a set of coder inputs $v_k$, $k \in [1, L]$, with probabilities $P_k$, it is guaranteed that they cannot be coded with fewer than $H$ bits on the average. If we design a code with codewords $C_k$, $k \in [1, L]$, with corresponding word lengths $\beta_k$, the average number of bits required by the coder is
$$R(L) = \sum_{k=1}^{L} \beta_k P_k$$
Figure 3: Coder producing codewords $C_k$ with lengths $\beta_k$.
Shannon's Entropy Coding Theorem (1949): the average length $R(L)$ is bounded by
$$H \le R(L) \le H + \epsilon, \quad \epsilon = 1/L$$

7 That is, it is possible to encode a source with entropy $H$ without distortion using an average of $H + \epsilon$ bits/message, or to encode it with distortion using $H$ bits/message. The optimality of the coder depends on how close $R(L)$ is to $H$.
Example: Let $L = 2$, $P_1 = p$ and $P_2 = 1 - p$, $0 \le p \le 1$. The entropy is
$$H = -p \log_2 p - (1-p) \log_2 (1-p)$$
The figure above shows $H$ as a function of $p$. Clearly, since the source is binary, we can always use 1 bit/pixel; this corresponds to $H_{\max} = 1$ at $p = 1/2$. However, if $p = 1/8$, then $H \approx 0.54$, i.e. there is more redundancy, and it is possible to find a coding scheme that uses only about 0.54 bits/pixel.
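
As a quick check of the binary-entropy numbers above, a small sketch (NumPy assumed; the probe values are illustrative):

```python
import numpy as np

def H2(p):
    """Binary entropy H(p) = -p log2 p - (1-p) log2 (1-p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p*np.log2(p) - (1-p)*np.log2(1-p)

print(H2(0.5))    # 1.0   -> H_max: a full bit/pixel is needed
print(H2(1/8))    # ~0.54 -> far fewer bits suffice on average
```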

8 Remark: the maximum achievable compression is
$$C = \frac{\text{average bit rate of the original raw data } (B)}{\text{average bit rate of the encoded data } (R(L))}$$
Thus
$$\frac{B}{H + \epsilon} \le C \le \frac{B}{H}, \quad \epsilon = 1/L$$
Since a certain amount of distortion is inevitable in any image transmission, it is necessary to find the minimum number of bits to encode the image while allowing a certain level of distortion.
Rate Distortion Function
Let $D$ be a fixed distortion between the actual values $x$ and the reproduced values $\hat{x}$. The question is: allowing distortion $D$, what is the minimum number of bits required to encode the data? If we consider $x$ to be a Gaussian r.v. with variance $\sigma_x^2$, the distortion is
$$D = E[(x - \hat{x})^2]$$
The rate distortion function is defined by

9 $$R_D = \begin{cases} \frac{1}{2} \log_2 \frac{\sigma_x^2}{D} & 0 \le D \le \sigma_x^2 \\ 0 & D > \sigma_x^2 \end{cases} = \max\left[0, \tfrac{1}{2} \log_2 \tfrac{\sigma_x^2}{D}\right]$$
At maximum distortion, $D \ge \sigma_x^2$, $R_D = 0$, i.e. no information needs to be transmitted.
Figure 4: Rate Distortion Function $R_D$ versus $D$.
$R_D$ gives the number of bits required for distortion $D$. Since $R_D$ represents the number of bits/pixel,
$$N = 2^{R_D} = \left(\frac{\sigma_x^2}{D}\right)^{1/2}$$
$D$ is considered to be the quantization noise variance. This variance can be minimized using the Lloyd-Max quantizer. In the transform domain we can assume that $x$ is white (e.g., due to KL).
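
A small sketch of the Gaussian rate-distortion function above (NumPy assumed; the variance and distortion values are illustrative):

```python
import numpy as np

def R_D(D, var_x):
    """Rate-distortion function of a Gaussian source:
    R(D) = max(0, 0.5*log2(var_x / D))."""
    return max(0.0, 0.5 * np.log2(var_x / D))

var_x = 1.0
for D in (0.01, 0.25, 1.0, 2.0):
    print(D, R_D(D, var_x))     # 3.32, 1.0, 0.0, 0.0 bits
# quantizer levels N = 2**R = sqrt(var_x / D) over 0 < D <= var_x
```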

10 Pixel-Based Encoding
Encode each pixel ignoring inter-pixel dependencies. Among the methods are:
1. Entropy Coding: every block of the image is entropy encoded based on the $P_k$'s within that block. This produces a variable-length code for each block depending on the spatial activity within the block.
2. Run-Length Encoding: scan the image horizontally or vertically and, while scanning, assign each group of pixels with the same intensity to a pair $(g_i, l_i)$, where $g_i$ is the intensity and $l_i$ is the length of the run. This method can also be used for detecting edges and boundaries of an object. It is mostly used for images with a small number of gray levels and is not effective for highly textured images.

11 Example 1: Consider the following $8 \times 8$ image. The run-length codes using the vertical (continuous top-down) scanning mode are:
(4,9) (5,5) (4,3) (5,1) (6,3) (5,1) (4,3) (5,1) (6,1) (7,1) (6,1) (5,1) (4,3) (5,1) (6,3) (5,1) (4,3) (5,5) (4,10) (0,8)
i.e. a total of 20 pairs = 40 numbers. Horizontal scanning would lead to 34 pairs = 68 numbers, which is more than the actual number of pixels (i.e. 64).
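
A minimal run-length encoder along these lines (Python/NumPy; the toy image is illustrative, chosen so vertical runs are long, and is not the image of Example 1):

```python
import numpy as np

def run_length_encode(image, order='F'):
    """(intensity, run-length) pairs over a continuous scan.
    order='F' scans column by column (vertical), 'C' row by row."""
    seq = image.flatten(order=order)
    pairs = []
    g, run = seq[0], 1
    for v in seq[1:]:
        if v == g:
            run += 1
        else:
            pairs.append((int(g), run))
            g, run = v, 1
    pairs.append((int(g), run))
    return pairs

# 8x8 image whose columns are constant: vertical runs are long
img = np.tile(np.array([4, 4, 5, 6, 5, 4, 4, 0]), (8, 1))
print(len(run_length_encode(img, order='F')))  # 6 long vertical runs
print(len(run_length_encode(img, order='C')))  # 48 short horizontal runs
```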

12 Example 2: Let the transition probabilities for run-length encoding of a binary image (0: black, 1: white) be $p_0 = P(0|1)$ and $p_1 = P(1|0)$. Assuming all runs are independent, find (a) the average run lengths, (b) the entropies of white and black runs, and (c) the compression ratio.
Solution: A run of length $l \ge 1$ can be represented by a geometric r.v. $X_i$ with PMF
$$P(X_i = l) = p_i (1 - p_i)^{l-1}, \quad i = 0, 1$$
which corresponds to the first occurrence of a 0 or 1 after $l$ independent trials. (Note that $1 - P(0|1) = P(1|1)$ and $1 - P(1|0) = P(0|0)$.) Thus, for the average we have
$$\mu_{X_i} = \sum_{l=1}^{\infty} l \, P(X_i = l) = \sum_{l=1}^{\infty} l \, p_i (1 - p_i)^{l-1}$$
which, using the series $\sum_{n=1}^{\infty} n a^{n-1} = \frac{1}{(1-a)^2}$, reduces to $\mu_{X_i} = \frac{1}{p_i}$. The entropy is given by
$$H_{X_i} = -\sum_{l=1}^{\infty} P(X_i = l) \log_2 P(X_i = l) = -\sum_{l=1}^{\infty} p_i (1 - p_i)^{l-1} \left[\log_2 p_i + (l-1) \log_2 (1 - p_i)\right]$$

13 Using the same series formula, we get
$$H_{X_i} = -\frac{1}{p_i} \left[p_i \log_2 p_i + (1 - p_i) \log_2 (1 - p_i)\right]$$
The achievable compression ratio is
$$C = \frac{H_{X_0} + H_{X_1}}{\mu_{X_0} + \mu_{X_1}} = \frac{H_{X_0} P_0}{\mu_{X_0}} + \frac{H_{X_1} P_1}{\mu_{X_1}}$$
where $P_i = \frac{p_i}{p_0 + p_1}$ are the a priori probabilities of black and white pixels.
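
The formulas of Example 2 are easy to evaluate numerically; a sketch (NumPy assumed; the transition probabilities are illustrative):

```python
import numpy as np

def run_stats(p):
    """Mean and entropy of a geometric run with parameter p:
    mu = 1/p,  H = -(1/p) * [p log2 p + (1-p) log2 (1-p)]."""
    mu = 1.0 / p
    H = -(p*np.log2(p) + (1-p)*np.log2(1-p)) / p
    return mu, H

p0, p1 = 0.05, 0.10          # transition probabilities P(0|1), P(1|0)
mu0, H0 = run_stats(p0)
mu1, H1 = run_stats(p1)
C = (H0 + H1) / (mu0 + mu1)  # average bits/pixel after run-length coding
print(mu0, mu1, C)           # long runs -> well below 1 bit/pixel
```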

14 3. Huffman Encoding
The algorithm consists of the following steps:
1. Arrange the symbols with probabilities $P_k$ in decreasing order and consider them the leaf nodes of a tree.
2. Merge the two nodes with the smallest probabilities to form a new node whose probability is the sum of the two merged nodes. Go to Step 1 and repeat until only two nodes are left (the root nodes).
3. Arbitrarily assign 1's and 0's to each pair of branches merging into a node.
4. Read sequentially from the root node to each leaf node to form the associated code for each symbol.
Example 3: For the same image as in the previous example, which requires 3 bits/pixel using standard PCM, we can arrange the table on the next page.

15 The design table has columns: gray level, number of occurrences, $P_k$, codeword $C_k$, length $\beta_k$, $P_k \beta_k$, and $-P_k \log_2 P_k$, with the totals $R$ and $H$ accumulated in the last row (the numerical entries were lost in transcription). The codewords $C_k$ are obtained by constructing the binary tree as in Fig. 5.
Figure 5: Tree Structure for Huffman Encoding.

16 Note that in this case we have
$$R = \sum_{k=1}^{8} \beta_k P_k \ \text{bits/pixel}, \qquad H = -\sum_{k=1}^{8} P_k \log_2 P_k \ \text{bits/pixel}$$
Thus $R = H + 1/L$, i.e. an average of about 2 bits/pixel (instead of 3 bits/pixel using PCM) can be used to code the image. However, a drawback of the standard Huffman encoding method is that the codes have variable lengths.
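
A compact Huffman coder following the four steps above (pure Python, heap-based; the probability table is illustrative, not the table of Example 3):

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code for {symbol: probability} via a min-heap."""
    heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable nodes
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + w for s, w in c1.items()}   # prepend branch bits
        merged.update({s: '1' + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

probs = {0: 0.4, 1: 0.2, 2: 0.15, 3: 0.15, 4: 0.1}
code = huffman_code(probs)
R = sum(probs[s] * len(w) for s, w in code.items())   # average length
print(code, R)   # R lies between H and H+1, per Shannon's theorem
```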

17 Predictive Encoding
Idea: remove the mutual redundancy among successive pixels in a region of support (ROS) or neighborhood, and encode only the new information. The method is based on linear prediction. Let us start with 1-D linear predictors. An $N$-th order linear prediction of $x(n)$ based on the $N$ previous samples is generated using a 1-D autoregressive (AR) model:
$$\hat{x}(n) = a_1 x(n-1) + a_2 x(n-2) + \cdots + a_N x(n-N)$$
where the $a_i$'s are model coefficients determined from some sample signals. Instead of encoding $x(n)$, the prediction error
$$e(n) = x(n) - \hat{x}(n)$$
is encoded, as it requires a substantially smaller number of bits. At the receiver, $x(n)$ is reconstructed using the previously encoded values $x(n-k)$ and the encoded error signal, i.e.
$$x(n) = \hat{x}(n) + e(n)$$
This method is also referred to as differential PCM (DPCM).
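
A 1-D DPCM sketch of this scheme (NumPy assumed; the predictor coefficients, quantizer step, and test signal are illustrative):

```python
import numpy as np

def dpcm_1d(x, a, step):
    """1-D DPCM with an N-th order predictor and a uniform quantizer.
    a: predictor coefficients [a_1 ... a_N]; step: quantizer step size."""
    N = len(a)
    xo = np.zeros(len(x))            # reproduced values (decoder state)
    eo = np.zeros(len(x))            # quantized prediction errors
    for n in range(len(x)):
        past = [xo[n - i] if n - i >= 0 else 0.0 for i in range(1, N + 1)]
        xhat = np.dot(a, past)                      # prediction
        e = x[n] - xhat                             # prediction error
        eo[n] = step * np.round(e / step)           # quantize the error
        xo[n] = xhat + eo[n]                        # reproduced value
    return eo, xo

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=500) * 0.1) + 5      # slowly varying signal
eo, xo = dpcm_1d(x, a=[1.0], step=0.05)            # 1st-order predictor
print(np.var(x), np.var(eo))     # error variance << signal variance
```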

18 Minimum Variance Prediction
The predictor $\hat{x}(n) = \sum_{i=1}^{N} a_i x(n-i)$ is the best $N$-th order linear mean-squared predictor of $x(n)$ when the coefficients minimize the MSE
$$\epsilon = E\left[\left(x(n) - \hat{x}(n)\right)^2\right]$$
Minimizing with respect to the $a_k$'s yields the orthogonality property
$$\frac{\partial \epsilon}{\partial a_k} = -2 E\left[\left(x(n) - \hat{x}(n)\right) x(n-k)\right] = 0, \quad 1 \le k \le N$$
which leads to the normal equations
$$r_{xx}(k) - \sum_{i=1}^{N} a_i r_{xx}(k-i) = \sigma_e^2 \, \delta(k), \quad 0 \le k \le N$$
where $r_{xx}(k)$ is the autocorrelation of the data $x(n)$ and $\sigma_e^2$ is the variance of the driving process $e(n)$.

19 Plugging in the values $k \in [0, N]$ gives the AR Yule-Walker equations for solving for the $a_i$'s and $\sigma_e^2$, i.e.
$$\begin{bmatrix} r_{xx}(0) & r_{xx}(1) & \cdots & r_{xx}(N) \\ r_{xx}(1) & r_{xx}(0) & \cdots & r_{xx}(N-1) \\ \vdots & & \ddots & \vdots \\ r_{xx}(N) & r_{xx}(N-1) & \cdots & r_{xx}(0) \end{bmatrix} \begin{bmatrix} 1 \\ -a_1 \\ \vdots \\ -a_N \end{bmatrix} = \begin{bmatrix} \sigma_e^2 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \quad (1)$$
Note that the correlation matrix $R_x$ in this case is both Toeplitz and Hermitian. The solution to this system of linear equations is given by
$$\sigma_e^2 = \frac{1}{[R_x^{-1}]_{1,1}}, \qquad a_i = -\sigma_e^2 \, [R_x^{-1}]_{i+1,1}$$
where $[R_x^{-1}]_{i,j}$ is the $(i,j)$-th element of the matrix $R_x^{-1}$.
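
Equation (1) can also be solved directly from sample autocorrelations; a sketch (NumPy/SciPy assumed; the AR(2) test coefficients are illustrative):

```python
import numpy as np
from scipy.linalg import toeplitz

def yule_walker(x, N):
    """Solve the Yule-Walker equations for an N-th order AR predictor.
    Returns coefficients a_1..a_N and the driving variance sigma_e^2."""
    x = x - x.mean()
    r = np.array([np.dot(x[:len(x)-k], x[k:]) / len(x) for k in range(N + 1)])
    Rx = toeplitz(r[:N])                  # Toeplitz autocorrelation matrix
    a = np.linalg.solve(Rx, r[1:N+1])
    sigma_e2 = r[0] - np.dot(a, r[1:N+1])
    return a, sigma_e2

# AR(2) test: x(n) = 0.75 x(n-1) - 0.5 x(n-2) + e(n)
rng = np.random.default_rng(2)
x = np.zeros(10000)
for n in range(2, len(x)):
    x[n] = 0.75*x[n-1] - 0.5*x[n-2] + rng.normal()
print(yule_walker(x, 2))                  # close to ([0.75, -0.5], 1.0)
```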

20 In the 2-D case, an AR model with a non-symmetric half-plane (NSHP) ROS is used. This ROS is shown in Fig. 6 for an image scanned left-to-right and top-to-bottom.
Figure 6: A 1st Order 2-D AR Model with NSHP ROS.
For a 1st-order 2-D AR model,
$$x(m,n) = a_{01} x(m, n-1) + a_{11} x(m-1, n-1) + a_{10} x(m-1, n) + a_{1,-1} x(m-1, n+1) + e(m,n)$$
where the $a_{i,j}$'s are model coefficients. The best linear prediction of $x(m,n)$ is then
$$\hat{x}(m,n) = a_{01} x(m, n-1) + a_{11} x(m-1, n-1) + a_{10} x(m-1, n) + a_{1,-1} x(m-1, n+1)$$

21 Note that at every pixel, four previously scanned pixels are needed to generate the predicted value $\hat{x}(m,n)$. Fig. 7 shows the pixels that must be stored in the global state vector for this 1st-order predictor.
Figure 7: Global State Vector.
Assuming that the reproduced (quantized) values up to $(m, n-1)$ are available, we generate
$$\hat{x}_o(m,n) = a_{01} x_o(m, n-1) + a_{11} x_o(m-1, n-1) + a_{10} x_o(m-1, n) + a_{1,-1} x_o(m-1, n+1)$$
The prediction error is then applied to the quantizer:
$$e(m,n) := x(m,n) - \hat{x}_o(m,n) \quad \text{(quantizer input)}$$
The quantized value $e_o(m,n)$ is encoded and transmitted. It is also used to generate the reproduced value
$$x_o(m,n) = e_o(m,n) + \hat{x}_o(m,n) \quad \text{(reproduced value)}$$

22 The entire process at the transmitter and receiver is depicted in Fig. 8. Clearly, it is assumed that the model coefficients are available at the receiver.
Figure 8: Block Diagram of 2-D Predictive Encoding System.
It is interesting to note that
$$q(m,n) := \underbrace{x(m,n) - x_o(m,n)}_{\text{PCM quantization error}} = \underbrace{e(m,n) - e_o(m,n)}_{\text{DPCM quantization error}}$$
However, for the same quantization error $q(m,n)$, DPCM requires far fewer bits.
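
A sketch of the 2-D NSHP DPCM loop of Figs. 6-8 (NumPy assumed; the coefficients, quantizer step, and smooth test image are illustrative):

```python
import numpy as np

def dpcm_2d(x, a01, a11, a10, a1m1, step):
    """2-D DPCM with the 1st-order NSHP predictor of Fig. 6 and a
    uniform quantizer; pixels outside the image are taken as 0."""
    M, N = x.shape
    xo = np.zeros((M + 1, N + 2))        # reproduced values, zero-padded
    eo = np.zeros((M, N))
    for m in range(M):
        for n in range(N):
            i, j = m + 1, n + 1          # padded coordinates
            xhat = (a01*xo[i, j-1] + a11*xo[i-1, j-1] +
                    a10*xo[i-1, j] + a1m1*xo[i-1, j+1])
            e = x[m, n] - xhat
            eo[m, n] = step * np.round(e / step)   # transmitted value
            xo[i, j] = xhat + eo[m, n]             # decoder reproduction
    return eo, xo[1:, 1:-1]

rng = np.random.default_rng(3)
img = np.cumsum(np.cumsum(rng.normal(size=(64, 64)), 0), 1)  # smooth field
eo, rec = dpcm_2d(img, 0.95, -0.9, 0.95, 0.0, step=0.1)
print(np.var(img), np.var(eo))   # prediction error has far less variance
```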

23 Performance Analysis of DPCM
For straight PCM, the rate distortion function is
$$R_{PCM} = \frac{1}{2} \log_2 \left(\sigma_x^2 / \sigma_q^2\right) \quad \text{bits/pixel}$$
i.e. the number of bits required per pixel in the presence of a particular distortion $\sigma_q^2 = E[q^2(m,n)]$. For DPCM, the rate distortion function at the same distortion is
$$R_{DPCM} = \frac{1}{2} \log_2 \left(\sigma_e^2 / \sigma_q^2\right) \quad \text{bits/pixel}$$
Clearly, $\sigma_e^2 \le \sigma_x^2 \implies R_{DPCM} \le R_{PCM}$. The bit reduction of DPCM over PCM is
$$R_{PCM} - R_{DPCM} = \frac{1}{2} \log_2 \left(\sigma_x^2 / \sigma_e^2\right) \approx 1.66 \log_{10} \left(\sigma_x^2 / \sigma_e^2\right)$$
For example, $\sigma_x^2 / \sigma_e^2 = 10$ saves about 1.66 bits/pixel. The achieved compression depends on the inter-pixel redundancy: for an image with no redundancy (a random image), $\sigma_x^2 = \sigma_e^2$ and $R_{PCM} = R_{DPCM}$.

24 Transform-Based Encoding
Idea: reduce redundancy by applying a unitary transformation to blocks of the image; the redundancy-removed coefficients/features are then encoded. The process of transform-based encoding, or block quantization, is depicted in Fig. 9. The image is first partitioned into non-overlapping blocks. Each block is then unitarily transformed, and the principal coefficients are quantized and encoded.
Figure 9: Transform-Based Encoding Process.
Q1: What are the best mapping matrices A and B, so that maximum redundancy removal is achieved while the distortion due to coefficient reduction is minimized?
Q2: What is the best quantizer, i.e. the one that gives minimum quantization distortion?

25 Theorem: Let $x$ be a random vector representing blocks of an image and $y = Ax$ its transformed version, with components $y(k)$ that are mutually uncorrelated. These components are quantized to $y_o$, then encoded and transmitted. At the receiver, the decoded values are reconstructed using matrix $B$, i.e. $x_o = B y_o$. The objective is to find the optimum matrices $A$ and $B$ and the optimum quantizer such that $D = E[\|x - x_o\|^2]$ is minimized.
1. The optimum matrices are $A = \Psi^T$ and $B = \Psi$, i.e. the KL transform pair.
2. The optimum quantizer is the Lloyd-Max quantizer.
Proof: See Jain's book (ch. 11).
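
A block-KLT sketch in the spirit of the theorem (NumPy assumed; the block size, number of retained coefficients, and test image are illustrative, and the sample covariance stands in for the true ensemble covariance):

```python
import numpy as np

def klt_blocks(image, b=8, keep=10):
    """Block KLT: estimate the covariance of b*b blocks, transform with
    its eigenvectors (KL basis), keep the top 'keep' coefficients."""
    h, w = (s - s % b for s in image.shape)
    blocks = (image[:h, :w].reshape(h//b, b, w//b, b)
                           .transpose(0, 2, 1, 3).reshape(-1, b*b))
    mean = blocks.mean(axis=0)
    cov = np.cov(blocks - mean, rowvar=False)
    evals, Psi = np.linalg.eigh(cov)           # ascending eigenvalues
    Psi = Psi[:, ::-1]                         # principal vectors first
    y = (blocks - mean) @ Psi                  # A = Psi^T, per block
    y[:, keep:] = 0                            # discard low-variance coeffs
    rec = y @ Psi.T + mean                     # B = Psi reconstructs
    return rec, evals[::-1]

rng = np.random.default_rng(4)
img = np.cumsum(np.cumsum(rng.normal(size=(128, 128)), 0), 1)
rec, evals = klt_blocks(img, b=8, keep=10)
print(evals[:5] / evals.sum())   # energy concentrates in a few coefficients
```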

26 Bit Allocation
The goal is to optimally allocate a given total number of bits $M$ to the $N$ (retained) components of $y_o$ so that the distortion
$$D = \frac{1}{N} \sum_{k=1}^{N} E[(y(k) - y_o(k))^2] = \frac{1}{N} \sum_{k=1}^{N} \sigma_k^2 f(m_k)$$
is minimized, where $f(\cdot)$ is the quantizer distortion function (monotonic convex with $f(0) = 1$ and $f(\infty) = 0$), $\sigma_k^2$ is the variance of coefficient $y(k)$, and $m_k$ is the number of bits allocated to $y_o(k)$. Optimal bit allocation involves finding the $m_k$'s that minimize $D$ subject to $M = \sum_{k=1}^{N} m_k$. Note that coefficients with higher variance contain more information than those with lower variance; thus more bits are assigned to them to improve performance.
i. Shannon's Allocation Strategy
$$m_k = m_k(\theta) = \max\left(0, \frac{1}{2} \log_2 \frac{\sigma_k^2}{\theta}\right)$$
where $\theta$ must be found to produce an average rate of $p = M/N$ bits per pixel (bpp).

27 ii. Segall's Allocation Strategy
$$m_k(\theta) = \begin{cases} \log_2 \left(\frac{1.46 \, \sigma_k^2}{\theta}\right) & 0 < \theta \le 0.083 \, \sigma_k^2 \\ \log_2 \left(\frac{\sigma_k^2}{\theta}\right) & 0.083 \, \sigma_k^2 < \theta \le \sigma_k^2 \\ 0 & \theta > \sigma_k^2 \end{cases}$$
where $\theta$ solves $\sum_{k=1}^{N} m_k(\theta) = M$.
iii. Huang/Schultheiss Allocation Strategy
This bit allocation approximates the optimal non-uniform allocation for Gaussian coefficients, giving
$$\hat{m}_k = \frac{M}{N} + \frac{1}{2} \log_2 \sigma_k^2 - \frac{1}{2N} \sum_{i=1}^{N} \log_2 \sigma_i^2, \qquad m_k = \text{Int}[\hat{m}_k] \ \text{ with } M = \sum_{k=1}^{N} m_k \text{ fixed}$$
Figs. 10 and 11 show reconstructed images of Lena and Barbara using the Shannon and Segall bit allocation methods for an average of $p = 1.5$ bpp, together with the corresponding error images ($\text{SNR}_{\text{Barb}} = 17.24$ dB for Shannon and $16.90$ dB for Segall).
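
The Shannon and Huang/Schultheiss strategies are straightforward to implement; a sketch (NumPy assumed; the variance profile and rate targets are illustrative, and a bisection search on $\theta$ is one simple way to meet the average-rate constraint):

```python
import numpy as np

def shannon_allocation(var, p):
    """Shannon's strategy: m_k = max(0, 0.5*log2(var_k/theta)), with theta
    found by bisection so that mean(m_k) equals the target p bpp."""
    def rate(theta):
        return np.mean(np.maximum(0.0, 0.5*np.log2(var/theta)))
    lo, hi = var.min()*1e-9, var.max()      # rate(lo) > p > rate(hi)
    for _ in range(200):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if rate(mid) > p else (lo, mid)
    theta = 0.5*(lo + hi)
    return np.maximum(0.0, 0.5*np.log2(var/theta))

def huang_schultheiss(var, M):
    """Integer allocation m_k ~ M/N + 0.5*log2(var_k) - average of same."""
    N = len(var)
    m = M/N + 0.5*np.log2(var) - np.mean(0.5*np.log2(var))
    return np.maximum(0, np.round(m)).astype(int)

var = np.array([100.0, 40.0, 10.0, 4.0, 1.0, 0.5, 0.2, 0.1])
print(shannon_allocation(var, p=1.5))    # fractional bits, average 1.5
print(huang_schultheiss(var, M=12))      # integers summing to about 12
```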

28 Figure 10: Reconstructed & Error Images, Shannon's (1.5 bpp).

29 Figure 11: Reconstructed & Error Images, Segall's (1.5 bpp).
