Basic Principles of Video Coding


1 Basic Principles of Video Coding
Introduction
Categories of Video Coding Schemes
Information Theory
Overview of Video Coding Techniques: predictive coding, transform coding, quantization, entropy coding, motion estimation

2 Basic Principles of Video Coding
Fig. 3.1 A typical video coding system. The encoder consists of analysis/source encoding (driven by a source model), lossy quantization (quantizer parameters) and lossless binary encoding (parameter statistics); the resulting bit stream passes over a channel subject to noise; the decoder performs binary decoding, dequantization and synthesis/source decoding.

3 Basic Principles of Video Coding
Analysis/Synthesis & Source Encoding/Decoding
The source model used in the analysis/synthesis part of a video coding system may make assumptions about the spatial and temporal correlation between pixels of a sequence. It might also consider the shape and motion of objects, or illumination effects. If the source model consists of statistically independent pixels, the parameters of this model would be the luminance and chrominance amplitudes. However, if we use an object model, the parameters would be the shape, texture and motion of individual objects. Depending on the source model, different source coding schemes can be employed.

4 Basic Principles of Video Coding
Quantization/Dequantization
The parameters of the source model are quantized into a finite set of symbols. The quantized parameters depend on the desired trade-off between bit rate and distortion.
Binary Encoding/Decoding
The quantized parameters are finally mapped into binary codewords using lossless coding techniques. The resulting bit stream is transmitted over the communication channel, where noise may corrupt it. The decoder performs the reverse processes, i.e., binary decoding, dequantization and synthesis.

5 Categories of Source Coding Schemes
Waveform Coding
Assuming statistical independence between pixels, the simplest waveform coding technique is pulse code modulation (PCM). It is not used in practice because of its inefficiency. Predictive coding exploits the correlation between adjacent pixels by coding the prediction errors between the predicted pixels and the pixels to be coded. To exploit the correlation within a block of pixels, the pixel block is transformed using unitary transforms, such as the Karhunen-Loeve (KLT), discrete cosine (DCT), or discrete wavelet (DWT) transforms. The transform serves to decorrelate the original pixels and concentrate the signal energy into a few coefficients, which are then coded.

6 Categories of Source Coding Schemes
Content-based Coding
Block-based coding techniques approximate the shape of objects in a scene with square blocks of fixed size, which results in high prediction errors in boundary blocks. Content-based coding techniques segment a video frame into regions corresponding to different objects and code those objects separately. In object-based analysis-synthesis coding, the objects are segmented and described by individual object models. The shape of an object is described by a 2-D silhouette; the motion by a motion vector field; and the texture by its color waveform. The decoder synthesizes the object using the current shape and motion parameters as well as the color parameters from the preceding frame.

7 Categories of Source Coding Schemes
Model-based Coding
If the object type in a video sequence is known, e.g., a human head, we can use a specially designed wireframe model to describe the object. This approach is called model-based coding, which is highly efficient as it adapts to the shape of the object. In semantic coding, the object type as well as the parameters describing its behaviors, e.g., facial expressions, are coded and transmitted. This coding scheme has the potential of achieving very high coding efficiency, because the number of possible behaviors is small; hence the number of bits required to specify them is correspondingly small.

8 Categories of Source Coding Schemes
Table 3.1 Comparison of source models, parameter sets and coding techniques.

Source Model | Encoding Parameters | Coding Technique
Statistically independent pixels | Color of each pixel | PCM
Statistically dependent pixels | Color of each block | Transform coding, predictive coding and vector quantization
Translationally moving blocks | Color and motion vector of each block | Block-based hybrid coding
Unknown moving objects | Shape, motion and color of each object | Object-based analysis-synthesis coding
Known moving objects | Shape, motion and color of each known object | Model-based (or knowledge-based) coding
Known moving objects with known behavior | Shape, color and behavior of each object | Semantic coding

9 Source Model
Information Theory
Consider a discrete-time, ergodic source which generates sequences {x(n)} of N source symbols. The sequences can be considered as realizations of random sequences {X(n)}, with each random variable X(n) assuming one of K amplitude values x_k, k = 1, 2, ..., K. The source is a discrete-amplitude source if the set of amplitudes {x_k} is finite; otherwise it is a continuous-amplitude source. The source is memoryless if successive samples are statistically independent. A sequence is ergodic if its ensemble averages equal its time averages; ergodicity implies stationarity.

10 Source Model
Source Entropy
For discrete-amplitude memoryless sources, the entropy is
H(X) = −Σ_{k=1}^{K} P(x_k) log₂ P(x_k)
and for discrete-amplitude sources with memory,
H(X) = −lim_{N→∞} (1/N) Σ_{all x} P(x) log₂ P(x)
where x is a vector of N successive samples x(n), x(n+1), ..., x(n+N−1). Memory can only reduce the entropy:
H(X)_with memory ≤ H(X)_without memory ≤ log₂ K
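The memoryless entropy formula above is straightforward to evaluate numerically. The sketch below (plain Python; the 8-symbol distributions are illustrative, not from the slides) computes H(X) and confirms that a uniform distribution attains the maximum log₂ K:

```python
import math

def entropy(probs):
    """Entropy H(X) = -sum_k P(x_k) log2 P(x_k) of a memoryless discrete source."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform source over K = 8 symbols attains the maximum entropy log2 K = 3 bits.
uniform = [1.0 / 8] * 8
print(entropy(uniform))   # 3.0

# A skewed distribution over the same 8 symbols has lower entropy:
# non-uniform probabilities introduce redundancy log2 K - H(X).
skewed = [0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625, 0.0078125, 0.0078125]
print(entropy(skewed))    # 1.984375
```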

11 Source Model
The source redundancy, given by
R(X) = log₂ K − H(X)
is due to two reasons: a non-uniform distribution of probabilities and the presence of memory. For a memoryless source with P(x_k) = 1/K, H(X) = log₂ K and the redundancy is zero.
For continuous-amplitude memoryless sources,
h(X) = −∫ p_X(x) log₂ p_X(x) dx
and for continuous-amplitude sources with memory,
h(X) = −lim_{N→∞} (1/N) ∫ ... ∫ p_X(x) log₂ p_X(x) dx

12 Source Model
It can be shown that for a Gaussian pdf with a given source variance σ_X²,
h(X)_with memory ≤ h(X)_without memory = (1/2) log₂(2πeσ_X²)
Note that h(X) is formally called the differential entropy, and that it is useful to define an entropy power
Q = (1/(2πe)) 2^{2h(X)}
which has a maximum value equal to σ_X² for a memoryless Gaussian source.

13 Source Entropy
Fig. 3.2 A frame of Calendar and its histogram. (Entropy 7.67 bits/pixel)

14 Source Entropy
Fig. 3.3 Frame-difference image between two successive frames of Calendar and its histogram. (Entropy 5.3 bits/pixel)

15 Rate Distortion Theory
Fig. 3.4 A typical communication system: Source → Source Encoder (x → y) → Channel Encoder (y → z) → Channel → Channel Decoder (z̃ → ỹ) → Source Decoder (ỹ → x̃) → Destination.

16 Rate Distortion Theory
In Figure 3.4, the source produces N-tuples, or blocks of N pixels, x, from which the source coder produces symbols y. The channel coder converts the symbols y into channel symbols z, which the channel transmits. The channel is characterised by a capacity of C bits per pixel, or NC bits per channel symbol. The channel decoder receives z̃ and converts it to ỹ, which the source decoder uses to produce x̃, made available to the destination. For a given P(z), the mutual information is defined as
I(z, z̃) = Σ_{z, z̃} P(z) P(z̃|z) log₂ [P(z̃|z) / P(z̃)]  bits/symbol
where P(z̃|z) is the conditional probability; I(z, z̃) is a measure of the amount of information about z that is conveyed to z̃ by the channel.

17 Rate Distortion Theory
Shannon has shown that the channel capacity is given by
C = (1/N) max_{P(z)} I(z, z̃)  bits/pixel
where the maximization is over all possible distributions P(z) on the channel symbols z.
Example: If the channel is noiseless, i.e.,
P(z̃|z) = 1 if z̃ = z, and 0 otherwise,
then I(z, z̃) = H(z) bits/symbol.

18 Rate Distortion Theory
That is, the channel conveys all of the information about z to z̃. If the set {z} has K members and they are all equally likely, I(z, z̃) is maximized. In this case, C = (1/N) log₂ K bits/pixel, or K = 2^{NC}. If the channel is noisy, the bit rate will be less than C. However, transmission at rates near C is still possible by employing error-correction coding in the channel encoder, at the expense of transmission delay between the input and output.
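The noiseless-channel example can be checked numerically. The sketch below (an illustrative Python fragment; the K = 4 channel is made up for the example) evaluates I(z, z̃) from P(z) and P(z̃|z), and confirms that an identity channel with equally likely inputs yields log₂ K bits/symbol:

```python
import math

def mutual_information(p_z, p_zt_given_z):
    """I(z, z~) = sum_{z, z~} P(z) P(z~|z) log2( P(z~|z) / P(z~) ), in bits/symbol."""
    K = len(p_z)
    # Output distribution P(z~) = sum_z P(z) P(z~|z)
    p_zt = [sum(p_z[z] * p_zt_given_z[z][zt] for z in range(K)) for zt in range(K)]
    info = 0.0
    for z in range(K):
        for zt in range(K):
            joint = p_z[z] * p_zt_given_z[z][zt]
            if joint > 0:
                info += joint * math.log2(p_zt_given_z[z][zt] / p_zt[zt])
    return info

# Noiseless channel over K = 4 symbols: P(z~|z) is the identity matrix, so with
# equally likely inputs I(z, z~) = H(z) = log2 K = 2 bits/symbol.
K = 4
identity = [[1.0 if i == j else 0.0 for j in range(K)] for i in range(K)]
print(mutual_information([1.0 / K] * K, identity))   # 2.0
```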

19 Rate Distortion Theory
For distortionless communication, i.e., x̃ = x, we require
NC ≥ H(y) = H(x)  bits/block.
In most video applications, distortionless communication is generally not a requirement. Distortion is tolerated as long as it is not perceivable subjectively. Since the channel capacity needs only to satisfy NC ≥ H(y), considerable savings in bit rate may be possible if H(y) < H(x). In this case, the source coding is irreversible, and information is lost.

20 Rate Distortion Theory
As x̃ ≠ x, we define a distortion measure d(x, x̃), and the average distortion is d̄ = E[d(x, x̃)]. If the conditional probability distribution between input and output is P(x̃|x), the mutual information is then given by
I(x, x̃) = Σ_{x, x̃} P(x) P(x̃|x) log₂ [P(x̃|x) / P(x̃)]
In lossy coding, an acceptable distortion threshold D is determined such that
d̄ = Σ_{x, x̃} P(x, x̃) d(x, x̃) ≤ D

21 Rate Distortion Theory
The rate-distortion function is then defined as
R(D) = (1/N) min_{P(x̃|x): d̄ ≤ D} I(x, x̃)  bits/pixel
R(D) gives the lower bound on the channel capacity required to achieve d̄ ≤ D. For a memoryless zero-mean Gaussian source with variance σ_x² and the mean square error criterion,
R_G(D) = (1/2) log₂(σ_x²/D) for 0 ≤ D ≤ σ_x², and 0 for D > σ_x²,
i.e., no information needs to be transmitted if D ≥ σ_x².
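The Gaussian rate-distortion function is easy to evaluate; the sketch below assumes the memoryless zero-mean Gaussian source and MSE criterion given above. It shows that each additional bit per sample reduces the achievable distortion by a factor of 4 (about 6 dB):

```python
import math

def rate_distortion_gaussian(D, variance):
    """R(D) = (1/2) log2(variance / D) for 0 < D <= variance, else 0
    (memoryless zero-mean Gaussian source, mean square error criterion)."""
    if D >= variance:
        return 0.0   # the distortion target is met by transmitting nothing
    return 0.5 * math.log2(variance / D)

variance = 1.0
print(rate_distortion_gaussian(0.25, variance))    # 1.0 bit/pixel
print(rate_distortion_gaussian(0.0625, variance))  # 2.0 bits/pixel: one extra bit quarters D
print(rate_distortion_gaussian(2.0, variance))     # 0.0: D exceeds the source variance
```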

22 Video Coding Standards - Overview
Fig. 3.5 A generic video standard encoder. Main blocks: T transform, Q quantiser, P picture memory, F loop filter, CC coding control, ME motion estimation, VLC variable length coder. Side information: h flag for INTRA/INTER, t flag for transmitted or not, q quantization parameters, v coded bit stream, m motion vector, f loop filter on/off.

23 Video Coding Standards - Overview
Fig. 3.6 A generic video standard decoder: the buffer output is demultiplexed; the variable length decoder, inverse quantization and inverse transform recover the prediction error, which is added to the motion-compensated prediction; side information controls the variable length decoder and the motion compensation.

24 Video Coding Standards - Overview
All video coding standards, such as H.261, H.263, H.264, MPEG-1, MPEG-2 and MPEG-4, employ the block-based hybrid predictive transform coding technique. The image frame is sub-divided into blocks of fixed size. Each block is motion-compensated from the previous frame, resulting in a predicted image. The encoder subtracts this predicted image from the original image and transmits the prediction error. If the prediction is inaccurate, i.e., the prediction error exceeds a threshold, the block of pixels rather than the prediction errors is transformed. Motion vectors need to be transmitted separately so that the decoder can perform the same motion compensation to reconstruct the image block.
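Motion compensation in these standards relies on block matching. A minimal exhaustive-search sketch follows; the toy frames, block size and search range are illustrative, and real encoders use faster search strategies, sub-pixel refinement and rate-constrained cost functions rather than plain SAD:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized pixel blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(ref, cur, bx, by, bsize, srange):
    """Exhaustive block-matching motion estimation: find the displacement
    (dx, dy) within +/- srange minimizing the SAD between the current block
    at (bx, by) and a candidate block in the reference frame."""
    def block(frame, x, y):
        return [row[x:x + bsize] for row in frame[y:y + bsize]]
    target = block(cur, bx, by)
    best = None
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            x, y = bx + dx, by + dy
            if 0 <= x <= len(ref[0]) - bsize and 0 <= y <= len(ref) - bsize:
                cost = sad(block(ref, x, y), target)
                if best is None or cost < best[0]:
                    best = (cost, dx, dy)
    return best  # (minimum SAD, dx, dy)

# Toy frames: a bright 2x2 patch in the reference moves one pixel to the right
# in the current frame, so the best match for the current block lies one pixel
# to the left in the reference.
ref = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for y in (3, 4):
    for x in (3, 4):
        ref[y][x] = 255
        cur[y][x + 1] = 255
print(full_search(ref, cur, 4, 3, 2, 2))   # (0, -1, 0): zero SAD at one pixel left
```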

25 Predictive Coding
In predictive coding, the pixel itself is not coded; instead its value is predicted from the neighbouring pixels in the same frame or in the previous frame, to exploit the correlation that exists between adjacent pixels. Fig. 3.4 shows the block diagram of a generic lossy predictive coding system. In the encoder, an input sample s is first predicted from the previously reconstructed samples ŝ stored in the memory to form the predicted pixel s_p. The prediction error e_p is then quantized and coded using a variable-length coder. The decoder structure resembles the prediction loop of the encoder, where the reconstructed samples ŝ give the decoder output.

26 Predictive Coding
Fig. 3.4 A lossy predictive coding system.

27 Predictive Coding
Error Analysis of Predictive Coding
From Fig. 3.4, the prediction error e_p is
e_p = s − s_p.
The reconstructed sample ŝ is given by
ŝ = s_p + ê_p,
where ê_p is the quantized prediction error, i.e.,
ê_p = e_p + e_q,
where e_q is the quantization error. From the above three equations, we can show that
ŝ − s = e_q.
This relation states that the coding error between the input and output of the predictive coder is the quantization error.
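The relation ŝ − s = e_q can be verified with a small DPCM loop. This is a minimal sketch: the previous-reconstructed-sample predictor, the sample values and the uniform quantizer step are illustrative choices, not the specific predictor of Fig. 3.4:

```python
def quantize(e, step=4):
    """Uniform midtread quantizer; introduces the quantization error e_q."""
    return step * round(e / step)

def dpcm(samples, step=4):
    """DPCM with a previous-reconstructed-sample predictor (s_p = previous s_hat).
    Encoder and decoder share the same prediction loop, so the overall coding
    error s_hat - s equals the quantization error of each prediction error."""
    recon, prev = [], 0
    for s in samples:
        s_p = prev                       # prediction from the reconstructed past
        e_hat = quantize(s - s_p, step)  # quantized prediction error e_hat
        prev = s_p + e_hat               # reconstruction s_hat = s_p + e_hat
        recon.append(prev)
    return recon

samples = [10, 13, 19, 30, 28, 25]
recon = dpcm(samples)
# The coding error s_hat - s is the quantization error of each e_p and is
# therefore bounded by half the quantizer step.
print([sh - s for s, sh in zip(samples, recon)])   # [-2, -1, 1, -2, 0, -1]
```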

28 Predictive Coding
Optimal Linear Predictor Design
Let the current pixel be s, and s_k, k = 1, 2, ..., K, the previous pixels which are used to predict s. In linear prediction, the predicted value for s is given by
s_p = Σ_{k=1}^{K} a_k s_k
where the a_k are called the prediction coefficients and K is the order of prediction. To determine the prediction coefficients, we minimize the mean square error (MSE) between s and s_p.

29 Predictive Coding
Let S_k be the random variables (RVs) corresponding to s_k, and S_p the RV corresponding to s_p. The MSE is
σ_p² = E[(S − S_p)²] = E[(S − Σ_{k=1}^{K} a_k S_k)²].
Minimizing σ_p² with respect to the prediction coefficients a_k, i.e., setting ∂σ_p²/∂a_m = 0, we have
E[(S − Σ_{k=1}^{K} a_k S_k) S_m] = 0,  m = 1, 2, ..., K.
The above relation is the orthogonality principle for the linear minimum-MSE estimator, which states that the prediction error is orthogonal to each past sample used for prediction.

30 Predictive Coding
Let R(k, m) = E[S_k S_m] be the correlation between S_k and S_m, and write the current pixel as S = S_0. The orthogonality condition can thus be rewritten as
Σ_{k=1}^{K} a_k R(k, m) = R(0, m),  m = 1, 2, ..., K,
or in matrix form,
[R] a = r,
which gives
a = [R]^{−1} r,
where [R] is the K×K matrix with entries R(k, m), a = (a_1, a_2, ..., a_K)^T and r = (R(0, 1), R(0, 2), ..., R(0, K))^T.

31 Predictive Coding
Thus, the minimum MSE is
σ_p² = E[(S − S_p) S] = R(0, 0) − Σ_{k=1}^{K} a_k R(0, k) = R(0, 0) − r^T a = R(0, 0) − r^T [R]^{−1} r.
For a stationary source, the autocorrelation of a pixel is a constant, independent of its spatial location, i.e., R(m, m) = R(0, 0), m = 1, 2, ..., K. Furthermore, the correlations between two pixels are symmetrical, i.e., R(k, m) = R(m, k).
Exercise: Show that the minimum MSE is σ_p² = E[(S − S_p) S].
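For a concrete case, the normal equations can be solved directly. The sketch below assumes a K = 2 predictor and the first-order Markov correlation model R(k, m) = ρ^|k−m| (the same model used for the KLT example on Slide 38); for this model only the nearest past sample contributes:

```python
def solve2(R, r):
    """Solve the 2x2 normal equations [R] a = r by Cramer's rule."""
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    a1 = (r[0] * R[1][1] - R[0][1] * r[1]) / det
    a2 = (R[0][0] * r[1] - r[0] * R[1][0]) / det
    return a1, a2

# First-order stationary Markov source: R(k, m) = rho ** abs(k - m).
rho = 0.95
R = [[1.0, rho], [rho, 1.0]]   # correlations among the two past samples
r = [rho, rho * rho]           # correlations with the current sample
a1, a2 = solve2(R, r)
print(a1, a2)                  # a1 = rho, a2 = 0: only the nearest sample helps

# Minimum prediction error variance: R(0,0) - r^T a = 1 - rho**2.
sigma_p2 = 1.0 - (r[0] * a1 + r[1] * a2)
print(sigma_p2)
```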

32 Transform Coding
Transform coding is a waveform coding technique whereby the image is sub-divided into non-overlapping blocks of pixels. Each block of pixels is either a vector or an array, depending on whether a 1-D or 2-D transform is used. It is then transformed by a unitary transformation matrix A to obtain a block of uncorrelated transform coefficients, so that a large fraction of its total energy is packed into relatively few transform coefficients. The efficiency of a transform is therefore determined by its energy compaction property. The coefficients are quantized separately by scalar quantizers and converted into binary codewords using binary encoding. In the decoder, the quantized coefficients are recovered and the pixel block is reconstructed through an inverse transform.

33 Transform Coding
1-D Transform Coding
Fig. 3.5 A 1-D transform coding system: the input vector u = (u(0), u(1), ..., u(N−1))^T is transformed by T into v = (v(0), v(1), ..., v(N−1))^T; each coefficient is quantized by its own quantizer Q_1, ..., Q_N; the inverse transform T^{−1} reconstructs the vector from the quantized coefficients.

34 Transform Coding
Figure 3.5 depicts the 1-D transform coding process. It can be expressed conveniently in matrix notation. Let u and v denote the N×1 image vector before and after transformation, respectively:
v = T u
The transform coefficients v(n), n = 0, 1, ..., N−1, are quantized by a bank of N quantizers, optimised based on the statistics of the coefficients, to give v′. The inverse transform is
u′ = T^{−1} v′
where T^{−1} is the N×N inverse transformation matrix. For a unitary transformation, the matrix inverse is given by T^{−1} = T*^T.

35 Transform Coding
If the matrix T is real, it is also an orthogonal matrix, i.e., T^{−1} = T^T. The problem is to find the optimum matrices T and T^{−1} such that the overall average mean square distortion
D = (1/N) E[(u − u′)^T (u − u′)]
is minimized. This optimum transform is the Karhunen-Loeve transform.

36 Transform Coding
2-D Transform Coding
The 2-D transformation can be extended from the 1-D case, i.e.,
V = T U T^T
where U and V denote the N×N image matrix before and after transformation, respectively. The inverse transform is
U = T^T V T
A 2-D transform can be computed as two separable 1-D transforms: the 1-D transformation of the rows is performed first, followed by the 1-D transformation of the columns of the resulting transform coefficients.

37 Transform Coding
Karhunen-Loeve Transform (KLT)
For a real N×1 image vector u, the basis vectors of the KLT are given by the orthonormalized eigenvectors {w_k} and eigenvalues {λ_k} of its covariance matrix R, that is,
R w_k = λ_k w_k,  0 ≤ k ≤ N−1.
The KLT of u is defined as
v = W*^T u
and the inverse transform is
u = W v = Σ_{k=0}^{N−1} v(k) w_k
where w_k is the kth column of W.

38 Transform Coding
Using matrix diagonalization, we know that
W*^T R W = D = Diag{λ_k}.
The basis vectors of the KLT of an 8×8 first-order stationary Markov source, whose covariance matrix has entries R(k, m) = ρ^{|k−m|}, i.e.,
R =
[ 1   ρ   ρ²  ...  ρ⁷ ]
[ ρ   1   ρ   ...  ρ⁶ ]
[ ...              ... ]
[ ρ⁷  ρ⁶  ...  ρ   1  ]
where the correlation coefficient ρ is close to 1, are shown in Fig. 3.6.
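The decorrelating property is easy to check in the smallest case. The sketch below (plain Python; the 2×2 Markov covariance and the value of ρ are illustrative) builds the KLT from the known eigenvectors of R = [[1, ρ], [ρ, 1]], which are (1, 1)/√2 and (1, −1)/√2 with eigenvalues 1 + ρ and 1 − ρ, and shows that the transform-domain covariance W^T R W is diagonal:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Covariance of a first-order Markov sample pair with correlation rho.
rho = 0.9
R = [[1.0, rho], [rho, 1.0]]

# KLT basis: the orthonormalized eigenvectors of R, as columns of W.
s = 1.0 / math.sqrt(2.0)
W = [[s, s], [s, -s]]

# The transform-domain covariance W^T R W is diagonal: the KLT decorrelates
# the samples and packs the energy unevenly (1 + rho versus 1 - rho).
D = matmul(matmul(transpose(W), R), W)
print(D)   # approximately [[1.9, 0.0], [0.0, 0.1]]
```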

39 Transform Coding
Fig. 3.6 Basis vectors of 8×8 transforms.

40 Transform Coding
The KLT, even though it is optimal in the mean square error sense for a particular image, is dependent on the statistics of the image data. Hence, there is no general fast algorithm for it. It can be replaced by sub-optimal transforms, such as the discrete cosine and Hadamard transforms.

41 Discrete Cosine Transform
Discrete Cosine Transform (DCT)
The N×N cosine transform matrix C = {c(k, n)} is defined as
c(k, n) = 1/√N,  k = 0, 0 ≤ n ≤ N−1
c(k, n) = √(2/N) cos[π(2n+1)k / 2N],  1 ≤ k ≤ N−1, 0 ≤ n ≤ N−1
The 1-D DCT of a sequence {u(n), 0 ≤ n ≤ N−1} is defined as
v(k) = α(k) Σ_{n=0}^{N−1} u(n) cos[π(2n+1)k / 2N],  0 ≤ k ≤ N−1
where α(0) = √(1/N) and α(k) = √(2/N) for 1 ≤ k ≤ N−1.

42 Discrete Cosine Transform
The inverse DCT is given by
u(n) = Σ_{k=0}^{N−1} α(k) v(k) cos[π(2n+1)k / 2N],  0 ≤ n ≤ N−1.
The basis vectors of the 8×8 DCT are shown in Figure 3.6. The 2-D DCT pair is given by the following equations:
V = C U C^T,  v(k, l) = Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} c(k, m) u(m, n) c(l, n)
U = C*^T V C*,  u(m, n) = Σ_{k=0}^{N−1} Σ_{l=0}^{N−1} c(k, m) v(k, l) c(l, n)
Since the DCT is a separable transform, the above equations can be evaluated by first transforming each row of U and then transforming each column of the intermediate result to obtain V.
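The DCT pair above can be implemented directly from the definition (an O(N²) sketch rather than a fast algorithm; the ramp input is an illustrative test signal). On a highly correlated input, almost all of the energy lands in the first two coefficients, and the orthogonal transform reconstructs the input exactly:

```python
import math

def dct_matrix(N):
    """N x N DCT matrix with c(k, n) = alpha(k) cos(pi (2n+1) k / 2N),
    alpha(0) = sqrt(1/N) and alpha(k) = sqrt(2/N) for k >= 1."""
    C = []
    for k in range(N):
        alpha = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        C.append([alpha * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N)])
    return C

def dct1d(u):
    C = dct_matrix(len(u))
    return [sum(C[k][n] * u[n] for n in range(len(u))) for k in range(len(u))]

def idct1d(v):
    C = dct_matrix(len(v))   # orthogonal matrix: the inverse is the transpose
    return [sum(C[k][n] * v[k] for k in range(len(v))) for n in range(len(v))]

# A smooth (highly correlated) 8-sample ramp: the DCT packs almost all of the
# signal energy into v(0) and v(1), and the transform is perfectly invertible.
u = [1, 2, 3, 4, 5, 6, 7, 8]
v = dct1d(u)
u_rec = idct1d(v)
print([round(x, 6) for x in u_rec])                       # recovers the input
print(sum(c * c for c in v[:2]) / sum(c * c for c in v))  # energy fraction in v(0), v(1)
```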

43 Discrete Cosine Transform
Fig. 3.7 Basis images of 8×8 DCT.

44 Discrete Cosine Transform
Properties of DCT
1. The DCT is real and orthogonal, i.e., C = C*, C^{−1} = C^T.
2. It is not the real part of the unitary DFT, but it is related to it.
3. The DCT has a fast algorithm similar to that of the FFT.
4. It has excellent energy compaction for highly correlated data, e.g., image data.
5. The N×N DCT is very close to the KLT for a first-order stationary Markov source of length N whose autocorrelation matrix R is given on Slide 38, where the correlation coefficient ρ is close to 1.

45 Discrete Cosine Transform
Fig. 3.8 Distribution of the variances of the transform coefficients (in decreasing order) of a stationary Markov sequence with N = 16 and ρ = 0.95, for the KLT, DCT and HT.

46 Discrete Cosine Transform
Example: an image reconstructed from an increasing number of retained DCT coefficients per 8×8 block (1, 3, 6, ..., 36 coefficients, and finally all 64); the reconstruction quality improves steadily as more coefficients are kept.

48 Hadamard Transform
Hadamard Transform (HT)
The elements of the basis vectors of the HT take only the binary values ±1 and are, therefore, well suited for digital signal processing. The Hadamard transform matrices H_n are N×N matrices, where N = 2^n and n is an integer. They can be generated by the core matrix
H_1 = (1/√2) [ 1  1 ; 1 −1 ]
and the Kronecker product recursion
H_n = H_1 ⊗ H_{n−1} = (1/√2) [ H_{n−1}  H_{n−1} ; H_{n−1}  −H_{n−1} ]

49 Hadamard Transform
The number of sign transitions in a basis vector of the HT is called its sequency. The HT of an N×1 vector u is written as
v = H u
and the inverse transform is given by
u = H v
where H = H_n, n = log₂ N. The 2-D HT pair for N×N images is obtained by substituting C by H in the DCT transform pair on Slide 42. The basis vectors of the 8×8 HT are shown in Fig. 3.6.
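The Kronecker recursion on Slide 48 is a few lines of code. The sketch below builds the normalized Hadamard matrix in natural (not sequency) order and verifies that H is symmetric and orthonormal, so the same matrix serves as the inverse transform:

```python
import math

def hadamard(n):
    """Normalized 2^n x 2^n Hadamard matrix built from the Kronecker recursion
    H_n = (1/sqrt(2)) [[H_{n-1}, H_{n-1}], [H_{n-1}, -H_{n-1}]]."""
    H = [[1.0]]
    for _ in range(n):
        s = 1.0 / math.sqrt(2.0)
        H = [[s * x for x in row] + [s * x for x in row] for row in H] + \
            [[s * x for x in row] + [-s * x for x in row] for row in H]
    return H

H = hadamard(3)   # 8 x 8, natural (not sequency) order
N = len(H)
# H H^T = I: the rows are orthonormal, so u = H v inverts v = H u.
prod = [[sum(H[i][k] * H[j][k] for k in range(N)) for j in range(N)] for i in range(N)]
print(all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
          for i in range(N) for j in range(N)))   # True
```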

50 Hadamard Transform
Properties of HT
1. The HT is real, symmetric, and orthogonal, i.e., H = H* = H^T = H^{−1}.
2. It has a fast transform. The 1-D transformation can be implemented in N log₂ N additions.
3. The HT has good to very good energy compaction for highly correlated images.

51 Hadamard Transform
Example
Consider a 4×4 data matrix U. Its 2-D transform by the 4×4 HT is
V = H U H^T
where
H = (1/2) [ 1  1  1  1 ; 1  1 −1 −1 ; 1 −1 −1  1 ; 1 −1  1 −1 ]
Note that H is sequency-ordered. Since H is symmetric and orthogonal, the data matrix is recovered as U = H V H^T.

53 Transform Coding System
Encoder: U → Forward Transform → V → Zigzag Scanning → Quantizer → Encoder → B
Decoder: B → Decoder → Inverse Quantizer → V′ → Inverse Transform → U′
Fig. 3.9 A typical transform coding system.

54 Transform Coding System
1. Divide the N×M image into non-overlapping blocks of size p×q and transform each block to obtain V_i, i = 1, 2, ..., I, where I = NM/pq.
2. Scan the coefficients in a zig-zag order as illustrated below. The rationale for doing this is that the variances of the coefficients decrease monotonically along the zig-zag scan path.
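The zig-zag scan order itself can be generated programmatically. A minimal sketch for a square p×p block follows (the (row, column) indexing convention is an assumption for the example):

```python
def zigzag_order(p):
    """Zig-zag scan order for a p x p block: coefficients are visited along
    anti-diagonals, in roughly decreasing order of variance."""
    order = []
    for d in range(2 * p - 1):
        diag = [(i, d - i) for i in range(p) if 0 <= d - i < p]
        # odd diagonals run top-right to bottom-left, even ones the reverse
        order.extend(diag if d % 2 else diag[::-1])
    return order

print(zigzag_order(4)[:6])   # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```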

55 Transform Coding System
3. The coefficients are quantized uniformly or non-uniformly. Note that distortion is introduced in the quantization process, which controls the bit rate.
4. The quantized coefficients can be compressed further, and losslessly, by employing entropy coding, such as run-length and Huffman coding.
5. The codewords are then transmitted over the communication channel.
6. In the receiver, the decoder carries out the reverse process.

56 Transform Coding System
Fig. 3.10 PSNR (dB) versus bit rate (bpp) comparisons of various transform coders (KLT, DCT, HT) for a stationary Markov sequence with ρ = 0.95 and a block size of 8×8 pixels.


More information

Fault Tolerance Technique in Huffman Coding applies to Baseline JPEG

Fault Tolerance Technique in Huffman Coding applies to Baseline JPEG Fault Tolerance Technique in Huffman Coding applies to Baseline JPEG Cung Nguyen and Robert G. Redinbo Department of Electrical and Computer Engineering University of California, Davis, CA email: cunguyen,

More information

Introduction p. 1 Compression Techniques p. 3 Lossless Compression p. 4 Lossy Compression p. 5 Measures of Performance p. 5 Modeling and Coding p.

Introduction p. 1 Compression Techniques p. 3 Lossless Compression p. 4 Lossy Compression p. 5 Measures of Performance p. 5 Modeling and Coding p. Preface p. xvii Introduction p. 1 Compression Techniques p. 3 Lossless Compression p. 4 Lossy Compression p. 5 Measures of Performance p. 5 Modeling and Coding p. 6 Summary p. 10 Projects and Problems

More information

Lecture 7 Predictive Coding & Quantization

Lecture 7 Predictive Coding & Quantization Shujun LI (李树钧): INF-10845-20091 Multimedia Coding Lecture 7 Predictive Coding & Quantization June 3, 2009 Outline Predictive Coding Motion Estimation and Compensation Context-Based Coding Quantization

More information

Multimedia & Computer Visualization. Exercise #5. JPEG compression

Multimedia & Computer Visualization. Exercise #5. JPEG compression dr inż. Jacek Jarnicki, dr inż. Marek Woda Institute of Computer Engineering, Control and Robotics Wroclaw University of Technology {jacek.jarnicki, marek.woda}@pwr.wroc.pl Exercise #5 JPEG compression

More information

ECE Information theory Final

ECE Information theory Final ECE 776 - Information theory Final Q1 (1 point) We would like to compress a Gaussian source with zero mean and variance 1 We consider two strategies In the first, we quantize with a step size so that the

More information

EE67I Multimedia Communication Systems

EE67I Multimedia Communication Systems EE67I Multimedia Communication Systems Lecture 5: LOSSY COMPRESSION In these schemes, we tradeoff error for bitrate leading to distortion. Lossy compression represents a close approximation of an original

More information

The information loss in quantization

The information loss in quantization The information loss in quantization The rough meaning of quantization in the frame of coding is representing numerical quantities with a finite set of symbols. The mapping between numbers, which are normally

More information

Information Theory. Coding and Information Theory. Information Theory Textbooks. Entropy

Information Theory. Coding and Information Theory. Information Theory Textbooks. Entropy Coding and Information Theory Chris Williams, School of Informatics, University of Edinburgh Overview What is information theory? Entropy Coding Information Theory Shannon (1948): Information theory is

More information

Department of Electrical Engineering, Polytechnic University, Brooklyn Fall 05 EL DIGITAL IMAGE PROCESSING (I) Final Exam 1/5/06, 1PM-4PM

Department of Electrical Engineering, Polytechnic University, Brooklyn Fall 05 EL DIGITAL IMAGE PROCESSING (I) Final Exam 1/5/06, 1PM-4PM Department of Electrical Engineering, Polytechnic University, Brooklyn Fall 05 EL512 --- DIGITAL IMAGE PROCESSING (I) Y. Wang Final Exam 1/5/06, 1PM-4PM Your Name: ID Number: Closed book. One sheet of

More information

CHAPTER 3. Transformed Vector Quantization with Orthogonal Polynomials Introduction Vector quantization

CHAPTER 3. Transformed Vector Quantization with Orthogonal Polynomials Introduction Vector quantization 3.1. Introduction CHAPTER 3 Transformed Vector Quantization with Orthogonal Polynomials In the previous chapter, a new integer image coding technique based on orthogonal polynomials for monochrome images

More information

Image Compression. Qiaoyong Zhong. November 19, CAS-MPG Partner Institute for Computational Biology (PICB)

Image Compression. Qiaoyong Zhong. November 19, CAS-MPG Partner Institute for Computational Biology (PICB) Image Compression Qiaoyong Zhong CAS-MPG Partner Institute for Computational Biology (PICB) November 19, 2012 1 / 53 Image Compression The art and science of reducing the amount of data required to represent

More information

Statistical signal processing

Statistical signal processing Statistical signal processing Short overview of the fundamentals Outline Random variables Random processes Stationarity Ergodicity Spectral analysis Random variable and processes Intuition: A random variable

More information

Lossless Image and Intra-frame Compression with Integer-to-Integer DST

Lossless Image and Intra-frame Compression with Integer-to-Integer DST 1 Lossless Image and Intra-frame Compression with Integer-to-Integer DST Fatih Kamisli, Member, IEEE arxiv:1708.07154v1 [cs.mm] 3 Aug 017 Abstract Video coding standards are primarily designed for efficient

More information

Rate-Constrained Multihypothesis Prediction for Motion-Compensated Video Compression

Rate-Constrained Multihypothesis Prediction for Motion-Compensated Video Compression IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL 12, NO 11, NOVEMBER 2002 957 Rate-Constrained Multihypothesis Prediction for Motion-Compensated Video Compression Markus Flierl, Student

More information

MAHALAKSHMI ENGINEERING COLLEGE-TRICHY QUESTION BANK UNIT V PART-A. 1. What is binary symmetric channel (AUC DEC 2006)

MAHALAKSHMI ENGINEERING COLLEGE-TRICHY QUESTION BANK UNIT V PART-A. 1. What is binary symmetric channel (AUC DEC 2006) MAHALAKSHMI ENGINEERING COLLEGE-TRICHY QUESTION BANK SATELLITE COMMUNICATION DEPT./SEM.:ECE/VIII UNIT V PART-A 1. What is binary symmetric channel (AUC DEC 2006) 2. Define information rate? (AUC DEC 2007)

More information

Compression. Encryption. Decryption. Decompression. Presentation of Information to client site

Compression. Encryption. Decryption. Decompression. Presentation of Information to client site DOCUMENT Anup Basu Audio Image Video Data Graphics Objectives Compression Encryption Network Communications Decryption Decompression Client site Presentation of Information to client site Multimedia -

More information

Basics of DCT, Quantization and Entropy Coding

Basics of DCT, Quantization and Entropy Coding Basics of DCT, Quantization and Entropy Coding Nimrod Peleg Update: April. 7 Discrete Cosine Transform (DCT) First used in 97 (Ahmed, Natarajan and Rao). Very close to the Karunen-Loeve * (KLT) transform

More information

CSE 408 Multimedia Information System Yezhou Yang

CSE 408 Multimedia Information System Yezhou Yang Image and Video Compression CSE 408 Multimedia Information System Yezhou Yang Lots of slides from Hassan Mansour Class plan Today: Project 2 roundup Today: Image and Video compression Nov 10: final project

More information

Revision of Lecture 4

Revision of Lecture 4 Revision of Lecture 4 We have completed studying digital sources from information theory viewpoint We have learnt all fundamental principles for source coding, provided by information theory Practical

More information

MAHALAKSHMI ENGINEERING COLLEGE QUESTION BANK. SUBJECT CODE / Name: EC2252 COMMUNICATION THEORY UNIT-V INFORMATION THEORY PART-A

MAHALAKSHMI ENGINEERING COLLEGE QUESTION BANK. SUBJECT CODE / Name: EC2252 COMMUNICATION THEORY UNIT-V INFORMATION THEORY PART-A MAHALAKSHMI ENGINEERING COLLEGE QUESTION BANK DEPARTMENT: ECE SEMESTER: IV SUBJECT CODE / Name: EC2252 COMMUNICATION THEORY UNIT-V INFORMATION THEORY PART-A 1. What is binary symmetric channel (AUC DEC

More information

Image Compression - JPEG

Image Compression - JPEG Overview of JPEG CpSc 86: Multimedia Systems and Applications Image Compression - JPEG What is JPEG? "Joint Photographic Expert Group". Voted as international standard in 99. Works with colour and greyscale

More information

Real-Time Audio and Video

Real-Time Audio and Video MM- Multimedia Payloads MM-2 Raw Audio (uncompressed audio) Real-Time Audio and Video Telephony: Speech signal: 2 Hz 3.4 khz! 4 khz PCM (Pulse Coded Modulation)! samples/sec x bits = 64 kbps Teleconferencing:

More information

Multimedia Communications Fall 07 Midterm Exam (Close Book)

Multimedia Communications Fall 07 Midterm Exam (Close Book) Multimedia Communications Fall 07 Midterm Exam (Close Book) 1. (20%) (a) For video compression using motion compensated predictive coding, compare the advantages and disadvantages of using a large block-size

More information

MARKOV CHAINS A finite state Markov chain is a sequence of discrete cv s from a finite alphabet where is a pmf on and for

MARKOV CHAINS A finite state Markov chain is a sequence of discrete cv s from a finite alphabet where is a pmf on and for MARKOV CHAINS A finite state Markov chain is a sequence S 0,S 1,... of discrete cv s from a finite alphabet S where q 0 (s) is a pmf on S 0 and for n 1, Q(s s ) = Pr(S n =s S n 1 =s ) = Pr(S n =s S n 1

More information

Lecture 4 Noisy Channel Coding

Lecture 4 Noisy Channel Coding Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem

More information

Revision of Lecture 5

Revision of Lecture 5 Revision of Lecture 5 Information transferring across channels Channel characteristics and binary symmetric channel Average mutual information Average mutual information tells us what happens to information

More information

UNIT I INFORMATION THEORY. I k log 2

UNIT I INFORMATION THEORY. I k log 2 UNIT I INFORMATION THEORY Claude Shannon 1916-2001 Creator of Information Theory, lays the foundation for implementing logic in digital circuits as part of his Masters Thesis! (1939) and published a paper

More information

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Tara Javidi These lecture notes were originally developed by late Prof. J. K. Wolf. UC San Diego Spring 2014 1 / 8 I

More information

CODING SAMPLE DIFFERENCES ATTEMPT 1: NAIVE DIFFERENTIAL CODING

CODING SAMPLE DIFFERENCES ATTEMPT 1: NAIVE DIFFERENTIAL CODING 5 0 DPCM (Differential Pulse Code Modulation) Making scalar quantization work for a correlated source -- a sequential approach. Consider quantizing a slowly varying source (AR, Gauss, ρ =.95, σ 2 = 3.2).

More information

Estimation-Theoretic Delayed Decoding of Predictively Encoded Video Sequences

Estimation-Theoretic Delayed Decoding of Predictively Encoded Video Sequences Estimation-Theoretic Delayed Decoding of Predictively Encoded Video Sequences Jingning Han, Vinay Melkote, and Kenneth Rose Department of Electrical and Computer Engineering University of California, Santa

More information

Coding for Discrete Source

Coding for Discrete Source EGR 544 Communication Theory 3. Coding for Discrete Sources Z. Aliyazicioglu Electrical and Computer Engineering Department Cal Poly Pomona Coding for Discrete Source Coding Represent source data effectively

More information

Information Theory - Entropy. Figure 3

Information Theory - Entropy. Figure 3 Concept of Information Information Theory - Entropy Figure 3 A typical binary coded digital communication system is shown in Figure 3. What is involved in the transmission of information? - The system

More information

Objective: Reduction of data redundancy. Coding redundancy Interpixel redundancy Psychovisual redundancy Fall LIST 2

Objective: Reduction of data redundancy. Coding redundancy Interpixel redundancy Psychovisual redundancy Fall LIST 2 Image Compression Objective: Reduction of data redundancy Coding redundancy Interpixel redundancy Psychovisual redundancy 20-Fall LIST 2 Method: Coding Redundancy Variable-Length Coding Interpixel Redundancy

More information

The Karhunen-Loeve, Discrete Cosine, and Related Transforms Obtained via the Hadamard Transform

The Karhunen-Loeve, Discrete Cosine, and Related Transforms Obtained via the Hadamard Transform The Karhunen-Loeve, Discrete Cosine, and Related Transforms Obtained via the Hadamard Transform Item Type text; Proceedings Authors Jones, H. W.; Hein, D. N.; Knauer, S. C. Publisher International Foundation

More information

A Video Codec Incorporating Block-Based Multi-Hypothesis Motion-Compensated Prediction

A Video Codec Incorporating Block-Based Multi-Hypothesis Motion-Compensated Prediction SPIE Conference on Visual Communications and Image Processing, Perth, Australia, June 2000 1 A Video Codec Incorporating Block-Based Multi-Hypothesis Motion-Compensated Prediction Markus Flierl, Thomas

More information

Chapter 2 Data Coding and Image Compression

Chapter 2 Data Coding and Image Compression Chapter 2 Data Coding and Image Compression The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Shannon, Claude

More information

BASIC COMPRESSION TECHNIQUES

BASIC COMPRESSION TECHNIQUES BASIC COMPRESSION TECHNIQUES N. C. State University CSC557 Multimedia Computing and Networking Fall 2001 Lectures # 05 Questions / Problems / Announcements? 2 Matlab demo of DFT Low-pass windowed-sinc

More information

EC2252 COMMUNICATION THEORY UNIT 5 INFORMATION THEORY

EC2252 COMMUNICATION THEORY UNIT 5 INFORMATION THEORY EC2252 COMMUNICATION THEORY UNIT 5 INFORMATION THEORY Discrete Messages and Information Content, Concept of Amount of Information, Average information, Entropy, Information rate, Source coding to increase

More information

Wavelet-based Image Coding: An Overview

Wavelet-based Image Coding: An Overview This is page 1 Printer: Opaque this Wavelet-based Image Coding: An Overview Geoffrey M. Davis Aria Nosratinia ABSTRACT This paper presents an overview of wavelet-based image coding. We develop the basics

More information

Run-length & Entropy Coding. Redundancy Removal. Sampling. Quantization. Perform inverse operations at the receiver EEE

Run-length & Entropy Coding. Redundancy Removal. Sampling. Quantization. Perform inverse operations at the receiver EEE General e Image Coder Structure Motion Video x(s 1,s 2,t) or x(s 1,s 2 ) Natural Image Sampling A form of data compression; usually lossless, but can be lossy Redundancy Removal Lossless compression: predictive

More information

3 rd Generation Approach to Video Compression for Multimedia

3 rd Generation Approach to Video Compression for Multimedia 3 rd Generation Approach to Video Compression for Multimedia Pavel Hanzlík, Petr Páta Dept. of Radioelectronics, Czech Technical University in Prague, Technická 2, 166 27, Praha 6, Czech Republic Hanzlip@feld.cvut.cz,

More information

C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University

C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University Quantization C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University http://www.csie.nctu.edu.tw/~cmliu/courses/compression/ Office: EC538 (03)5731877 cmliu@cs.nctu.edu.tw

More information

SIGNAL COMPRESSION. 8. Lossy image compression: Principle of embedding

SIGNAL COMPRESSION. 8. Lossy image compression: Principle of embedding SIGNAL COMPRESSION 8. Lossy image compression: Principle of embedding 8.1 Lossy compression 8.2 Embedded Zerotree Coder 161 8.1 Lossy compression - many degrees of freedom and many viewpoints The fundamental

More information

Proyecto final de carrera

Proyecto final de carrera UPC-ETSETB Proyecto final de carrera A comparison of scalar and vector quantization of wavelet decomposed images Author : Albane Delos Adviser: Luis Torres 2 P a g e Table of contents Table of figures...

More information

Module 4. Multi-Resolution Analysis. Version 2 ECE IIT, Kharagpur

Module 4. Multi-Resolution Analysis. Version 2 ECE IIT, Kharagpur Module 4 Multi-Resolution Analysis Lesson Multi-resolution Analysis: Discrete avelet Transforms Instructional Objectives At the end of this lesson, the students should be able to:. Define Discrete avelet

More information

2. SPECTRAL ANALYSIS APPLIED TO STOCHASTIC PROCESSES

2. SPECTRAL ANALYSIS APPLIED TO STOCHASTIC PROCESSES 2. SPECTRAL ANALYSIS APPLIED TO STOCHASTIC PROCESSES 2.0 THEOREM OF WIENER- KHINTCHINE An important technique in the study of deterministic signals consists in using harmonic functions to gain the spectral

More information

Channel capacity. Outline : 1. Source entropy 2. Discrete memoryless channel 3. Mutual information 4. Channel capacity 5.

Channel capacity. Outline : 1. Source entropy 2. Discrete memoryless channel 3. Mutual information 4. Channel capacity 5. Channel capacity Outline : 1. Source entropy 2. Discrete memoryless channel 3. Mutual information 4. Channel capacity 5. Exercices Exercise session 11 : Channel capacity 1 1. Source entropy Given X a memoryless

More information

CS6304 / Analog and Digital Communication UNIT IV - SOURCE AND ERROR CONTROL CODING PART A 1. What is the use of error control coding? The main use of error control coding is to reduce the overall probability

More information

Basics of DCT, Quantization and Entropy Coding. Nimrod Peleg Update: Dec. 2005

Basics of DCT, Quantization and Entropy Coding. Nimrod Peleg Update: Dec. 2005 Basics of DCT, Quantization and Entropy Coding Nimrod Peleg Update: Dec. 2005 Discrete Cosine Transform (DCT) First used in 974 (Ahmed, Natarajan and Rao). Very close to the Karunen-Loeve * (KLT) transform

More information

3F1 Information Theory, Lecture 3

3F1 Information Theory, Lecture 3 3F1 Information Theory, Lecture 3 Jossy Sayir Department of Engineering Michaelmas 2013, 29 November 2013 Memoryless Sources Arithmetic Coding Sources with Memory Markov Example 2 / 21 Encoding the output

More information

Quantization. Introduction. Roadmap. Optimal Quantizer Uniform Quantizer Non Uniform Quantizer Rate Distorsion Theory. Source coding.

Quantization. Introduction. Roadmap. Optimal Quantizer Uniform Quantizer Non Uniform Quantizer Rate Distorsion Theory. Source coding. Roadmap Quantization Optimal Quantizer Uniform Quantizer Non Uniform Quantizer Rate Distorsion Theory Source coding 2 Introduction 4 1 Lossy coding Original source is discrete Lossless coding: bit rate

More information

Vector Quantization and Subband Coding

Vector Quantization and Subband Coding Vector Quantization and Subband Coding 18-796 ultimedia Communications: Coding, Systems, and Networking Prof. Tsuhan Chen tsuhan@ece.cmu.edu Vector Quantization 1 Vector Quantization (VQ) Each image block

More information

Analysis of methods for speech signals quantization

Analysis of methods for speech signals quantization INFOTEH-JAHORINA Vol. 14, March 2015. Analysis of methods for speech signals quantization Stefan Stojkov Mihajlo Pupin Institute, University of Belgrade Belgrade, Serbia e-mail: stefan.stojkov@pupin.rs

More information

Compression. What. Why. Reduce the amount of information (bits) needed to represent image Video: 720 x 480 res, 30 fps, color

Compression. What. Why. Reduce the amount of information (bits) needed to represent image Video: 720 x 480 res, 30 fps, color Compression What Reduce the amount of information (bits) needed to represent image Video: 720 x 480 res, 30 fps, color Why 720x480x20x3 = 31,104,000 bytes/sec 30x60x120 = 216 Gigabytes for a 2 hour movie

More information

Laboratory 1 Discrete Cosine Transform and Karhunen-Loeve Transform

Laboratory 1 Discrete Cosine Transform and Karhunen-Loeve Transform Laboratory Discrete Cosine Transform and Karhunen-Loeve Transform Miaohui Wang, ID 55006952 Electronic Engineering, CUHK, Shatin, HK Oct. 26, 202 Objective, To investigate the usage of transform in visual

More information

ELEMENT OF INFORMATION THEORY

ELEMENT OF INFORMATION THEORY History Table of Content ELEMENT OF INFORMATION THEORY O. Le Meur olemeur@irisa.fr Univ. of Rennes 1 http://www.irisa.fr/temics/staff/lemeur/ October 2010 1 History Table of Content VERSION: 2009-2010:

More information

Filterbank Optimization with Convex Objectives and the Optimality of Principal Component Forms

Filterbank Optimization with Convex Objectives and the Optimality of Principal Component Forms 100 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 49, NO. 1, JANUARY 2001 Filterbank Optimization with Convex Objectives and the Optimality of Principal Component Forms Sony Akkarakaran, Student Member,

More information

Predictive Coding. Lossy or lossless. Feedforward or feedback. Intraframe or interframe. Fixed or Adaptive

Predictive Coding. Lossy or lossless. Feedforward or feedback. Intraframe or interframe. Fixed or Adaptive Predictie Coding Predictie coding is a compression tecnique based on te difference between te original and predicted alues. It is also called DPCM Differential Pulse Code Modulation Lossy or lossless Feedforward

More information

EE5585 Data Compression April 18, Lecture 23

EE5585 Data Compression April 18, Lecture 23 EE5585 Data Compression April 18, 013 Lecture 3 Instructor: Arya Mazumdar Scribe: Trevor Webster Differential Encoding Suppose we have a signal that is slowly varying For instance, if we were looking at

More information

Digital Image Processing

Digital Image Processing Digital Image Processing, 2nd ed. Digital Image Processing Chapter 7 Wavelets and Multiresolution Processing Dr. Kai Shuang Department of Electronic Engineering China University of Petroleum shuangkai@cup.edu.cn

More information

Lecture 5 Channel Coding over Continuous Channels

Lecture 5 Channel Coding over Continuous Channels Lecture 5 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 14, 2014 1 / 34 I-Hsiang Wang NIT Lecture 5 From

More information

Rate-Distortion Based Temporal Filtering for. Video Compression. Beckman Institute, 405 N. Mathews Ave., Urbana, IL 61801

Rate-Distortion Based Temporal Filtering for. Video Compression. Beckman Institute, 405 N. Mathews Ave., Urbana, IL 61801 Rate-Distortion Based Temporal Filtering for Video Compression Onur G. Guleryuz?, Michael T. Orchard y? University of Illinois at Urbana-Champaign Beckman Institute, 45 N. Mathews Ave., Urbana, IL 68 y

More information

ELECTRONICS & COMMUNICATIONS DIGITAL COMMUNICATIONS

ELECTRONICS & COMMUNICATIONS DIGITAL COMMUNICATIONS EC 32 (CR) Total No. of Questions :09] [Total No. of Pages : 02 III/IV B.Tech. DEGREE EXAMINATIONS, APRIL/MAY- 207 Second Semester ELECTRONICS & COMMUNICATIONS DIGITAL COMMUNICATIONS Time: Three Hours

More information

Soft-Output Trellis Waveform Coding

Soft-Output Trellis Waveform Coding Soft-Output Trellis Waveform Coding Tariq Haddad and Abbas Yongaçoḡlu School of Information Technology and Engineering, University of Ottawa Ottawa, Ontario, K1N 6N5, Canada Fax: +1 (613) 562 5175 thaddad@site.uottawa.ca

More information