A Lossless Image Coder With Context Classification, Adaptive Prediction and Adaptive Entropy Coding


A Lossless Image Coder With Context Classification, Adaptive Prediction and Adaptive Entropy Coding

Authors: Golchin, Farshid; Paliwal, Kuldip
Published: 1998
Conference Title: Proc. IEEE Conf. Acoustics, Speech and Signal Processing

Copyright Statement: 1998 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

Downloaded from Griffith Research Online.

A LOSSLESS IMAGE CODER WITH CONTEXT CLASSIFICATION, ADAPTIVE PREDICTION AND ADAPTIVE ENTROPY CODING

Farshid Golchin and Kuldip K. Paliwal
School of Microelectronic Engineering
Griffith University
Brisbane, QLD 4111, Australia
{F.Golchin, K.Paliwal}@me.gu.edu.au

ABSTRACT

In this paper, we combine a context classification scheme with adaptive prediction and entropy coding to produce an adaptive lossless image coder. In this coder, we maximize the benefits of adaptivity by using both adaptive prediction and adaptive entropy coding. The adaptive prediction is closely tied to the classification of contexts within the image. These contexts are defined with respect to the local edge, texture or gradient characteristics, as well as the local activity within small blocks of the image. For each context, an optimal predictor is found and used for the prediction of all pixels belonging to that particular context. Once the predicted values have been removed from the original image, a clustering algorithm is used to design a separate, optimal entropy coding scheme for encoding the prediction residual. Blocks of residual pixels are classified into a finite number of classes, and the members of each class are encoded using the entropy coder designed for that particular class. The combination of these two powerful techniques produces some of the best lossless coding results reported so far.

1. INTRODUCTION

Image coding is an essential component of many digital transmission and storage systems. Generally speaking, image coding algorithms can be divided into two categories: lossy and lossless. Lossy compression techniques are more popular, but they lead to coding distortions (or artifacts) which are not tolerable in certain applications. In areas such as medical image coding and some satellite imaging, for instance, coding artifacts can have potentially adverse consequences or can corrupt data which has been obtained at great cost. It is in these areas that lossless image coding methods are utilized. Although lossless methods offer significantly lower compression ratios, the fact that they preserve the original data has made them indispensable.

In recent years, a great deal of research in image coding has concentrated on lossy methods. The development of better transforms, quantizers and, in particular, adaptivity in coders has resulted in significant advances in this area. An adaptive coder is able to adapt its workings to the local characteristics within the different regions of an image. The use of context [1] information to provide adaptivity has demonstrated significant improvements in lossless compression results. However, many of these techniques are based on heuristics and often do not fully exploit the gains offered by adaptive coding.

* The first author is supported in part by a CSIRO Division of Telecomm. and Industrial Physics postgraduate scholarship.

A lossless image coder in its simplest form is implemented using a fixed predictor and a fixed entropy coder. The predictor makes a prediction for each pixel based on previously transmitted values. The predicted values are then subtracted from the original values, leaving the prediction error (or residual). An entropy coder is then used to encode the residual signal.
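As a concrete illustration of this baseline (a minimal sketch, not the coder proposed in this paper), the following Python fragment computes the residual of a fixed (north + west)/2 predictor; an entropy coder would then encode the residual array:

```python
# A minimal sketch of the fixed-predictor baseline described above:
# each pixel is predicted from previously transmitted neighbours and
# only the residual is passed to the entropy coder.
import numpy as np

def fixed_prediction_residual(img: np.ndarray) -> np.ndarray:
    """Residual of a simple fixed predictor: x_hat = (north + west) // 2."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            north = img[r - 1, c] if r > 0 else 0
            west = img[r, c - 1] if c > 0 else 0
            pred[r, c] = (north + west) // 2
    return img - pred  # the entropy coder encodes this residual

# The decoder mirrors the loop: it rebuilds pred from already-decoded
# pixels and adds the transmitted residual, so reconstruction is exact.
```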
Adaptivity can be provided by making the predictor and/or the entropy coder adaptive. An adaptive predictor is able to adapt to the characteristics of various textures or the direction of edges and hence produce more accurate predictions. An adaptive entropy coder, on the other hand, can adjust its workings to better match the local statistics of an image and produce shorter-length codes.

2. CONTEXT CLASSIFICATION AND ADAPTIVE PREDICTION

In this paper, we concentrate on one particular type of adaptive prediction, which is performed by switching among a finite number of predictors. The decision of which predictor to use is made for each pixel being encoded and is based solely on previously transmitted pixel values. The difficulty with this type of adaptive prediction scheme lies in the trade-off that exists between the number of predictors and computational complexity. Since the appropriate predictor is often selected using an exhaustive search [2], the computational complexity is high and can only be reduced by limiting the number of predictors. We have previously presented an optimal scheme for the design of a small number of predictors [2]; however, this scheme requires a computationally intensive design process.

The main performance obstacle in the path of the adaptive prediction schemes mentioned above is the exhaustive search required to map the local context from the pixel domain to an appropriate predictor. The performance can be significantly improved if a faster mapping is used. Such a mapping has recently been proposed by Wu and Memon [6][5]. The coder, named CALIC [5], has many distinct features which contribute to its state-of-the-art performance; however, its most significant (and possibly understated) feature is its context classification (or quantization). In CALIC, a fast method is used to classify the causal neighborhood (context) of the pixel to be encoded into a finite number of contexts. For each of the possible contexts, a bias value (a scalar) is maintained which is added to the prediction made by a less sophisticated linear predictor in order to improve its performance. CALIC assumes that the predictor is consistently repeating a similar prediction error over the same context, which can be compensated for by adding the bias value.

In this paper, we take a slightly different approach and set out to exploit this fast mapping to the fullest. We do so by using context classification to select a linear predictor from a set of 1024 linear predictors. In this fashion, a pixel-by-pixel decision is made as to which linear predictor should be used for predicting that particular pixel.

As done in [6], we define the prediction context of a pixel X (see Fig. 1) as:

C(X) = (w, ww, nw, n, nn, ne, 2n - nn, 2w - ww),   (1)

where each of the elements is the pixel in that direction (north, west, etc.) with respect to the pixel being encoded. The two values 2n - nn and 2w - ww do not correspond to actual pixel values. These values are calculated from the values of the pixels n, nn, w and ww, and are used to provide some measure of the gradient of intensity within the context. For the purpose of notation, we also index the values in C(X) such that C_1(X) = w, C_2(X) = ww, ..., C_8(X) = 2w - ww.

Next, we calculate the value μ, which is the mean of all pixels in the context of X:

\mu = \frac{1}{8} \sum_{i=1}^{8} C_i(X).   (2)

Using the value μ as a reference point, we are now in a position to generate the 8-bit binary string S(X) which defines the shape or texture of the context:

S(X) = (S_1(X), S_2(X), \ldots, S_8(X)),   (3)

where

S_i(X) = 0 if C_i(X) \ge \mu, and S_i(X) = 1 otherwise.   (4)

The 8-bit binary string S(X) can also be interpreted as a binary number ranging between 0 and 255. We use this number as an index for the classification of the context C(X). This index classifies each context according to its texture, gradient, edges and so on. This type of classification is arguably the key factor in the success of CALIC. However, rather than using the Gradient Adjusted Predictor (GAP) used in CALIC to produce a reference value, we have used the mean value μ, which is easier to calculate.

So far, the shape of the context has been classified into one of 256 different shapes. However, since in calculating S(X) all magnitude information has been discarded and only sign information retained, S(X) does not contain any information about the strength of textures, the steepness of gradients or the sharpness of edges. In order to incorporate some of this information into the context classification, we also calculate a separate index corresponding to the level of activity (or energy) within the context. To calculate the energy index, we first need to define some measure of the energy within the context. We do so by calculating the standard deviation of the values in each context:

\sigma = \sqrt{\frac{1}{8} \sum_{i=1}^{8} (C_i(X) - \mu)^2}.   (5)

[Figure 1: The prediction context of pixel X.]

The standard deviation σ is then quantized into one of 4 levels using a Lloyd-Max [8] scalar quantizer. This quantizer is designed once, using data from a set of training images, and is kept for all future usage. In this fashion, the quantization index q(X) ∈ {1, 2, 3, 4} is found for the context of each pixel to be predicted. This index provides the activity information which is not contained in the index S(X) calculated previously. The two indices complement each other in terms of information, and their combination is used to produce an effective classification for each context:

I(X) = 256 (q(X) - 1) + S(X).   (6)

In this way, I(X) becomes an integer between 0 and 1023, which is used as the final classification index for each context. We design an optimum 4th-order linear predictor for each of the 1024 contexts, using either training data or the image to be encoded. In coding, the context of each pixel is classified and then the appropriate predictor is used for the prediction of that particular pixel.
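The classification of Eqs. (1)-(6) is simple enough to sketch directly. In the following Python fragment the Lloyd-Max decision boundaries are placeholder values (in the paper they are trained once on a set of training images), and the packing of q(X) and S(X) into I(X) follows Eq. (6):

```python
# A sketch of the context classification of Eqs. (1)-(6); boundary
# values below are illustrative placeholders, not trained values.
import numpy as np

SIGMA_BOUNDS = (2.0, 6.0, 14.0)  # placeholder 4-level Lloyd-Max boundaries

def context(img, r, c):
    """C(X) of Eq. (1); assumes r >= 2 and 2 <= c < width - 1."""
    img = np.asarray(img, dtype=np.int32)  # avoid uint8 overflow in 2n - nn
    w, ww = img[r, c - 1], img[r, c - 2]
    nw, n = img[r - 1, c - 1], img[r - 1, c]
    nn, ne = img[r - 2, c], img[r - 1, c + 1]
    return np.array([w, ww, nw, n, nn, ne, 2 * n - nn, 2 * w - ww], dtype=float)

def context_index(img, r, c):
    """I(X) of Eq. (6): an integer in 0..1023."""
    C = context(img, r, c)
    mu = C.mean()                                   # Eq. (2)
    S = sum(int(C[i] < mu) << i for i in range(8))  # Eqs. (3)-(4): + -> 0, - -> 1
    sigma = C.std()                                 # Eq. (5)
    q = 1 + sum(sigma > b for b in SIGMA_BOUNDS)    # activity index q(X) in {1,..,4}
    return 256 * (q - 1) + S                        # Eq. (6)
```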
It should be noted that the predictors for each context are designed using a mean-squared-error (MSE) criterion.

3. THE MINIMUM-ENTROPY CLUSTERING ALGORITHM

The adaptive prediction scheme described in the previous section decorrelates neighboring pixels in the image. However, upon inspecting the residual image, it becomes clear that large residual values (prediction errors) tend to be confined to certain areas, while small residual values are likewise grouped together. This non-uniformity in the local statistics of the areas within the residual image can be exploited in entropy coding. This is done by using multiple entropy coders, each of which is matched to a particular type of region in the residual image. The question remains as to how the different areas within the image can be classified into different types, and how the entropy coders can be designed. To this end, we utilize the Minimum-Entropy Clustering (MEC) [9] algorithm.

Our aim in using the Minimum-Entropy Clustering algorithm is to classify blocks of samples and then encode the samples in each block using a suitable entropy coder. The classification and the design of the entropy coders should be performed in a way such that the overall entropy is minimized. In order to design a coding system with N entropy coders, the MEC algorithm operates as follows (a sketch is given at the end of this section):

1. Initialization: N probability distribution functions (PDFs) are defined. These PDFs define the initial classes.

2. Minimum-code-length classification: Each block of samples B = (b_0, b_1, \ldots, b_{m-1}) is classified as belonging to

the class C such that the code length (after entropy coding)

L = -\sum_{i=0}^{m-1} \log_2 p(b_i \mid C)

is minimized. This is similar to the nearest-neighbor selection in VQ design.

3. Re-estimate class statistics: Estimate the new class PDFs. This ensures that the class statistics are matched to those of the samples in that class, and hence the entropy is further reduced. This step is similar to the centroid calculation in VQ design.

4. Iteration: Stop if a maximum number of iterations is reached or the classes have converged. Otherwise, go to Step 2.

Steps 2 and 3 form the core of the MEC algorithm. In each of these two steps, the overall entropy is reduced. There are a number of choices available for the initial classification. However, a better definition of the initial classes can reduce the number of iterations required. Since we wish to group together blocks with similar statistics, a reasonable choice for the initial classification is one based on the variance of the blocks. To do so, we choose a classification similar to that used by Chen and Smith [3]. The variance of each block is estimated and the blocks are sorted in order of increasing (or decreasing) variance. Using the sorted list, m - 1 threshold values (of variance) are selected and used to classify the blocks into m classes. After this classification, the class PDFs are estimated and used to define the initial classes.

This algorithm may be used in either a parametric or a non-parametric form. In its parametric form, all distributions are modeled as Generalized Gaussian (GG) distributions [4] whose shape parameter and variance are estimated. In this case, Step 3 of the algorithm becomes a maximum-likelihood estimation of the shape parameter and variance. The main disadvantage of using the parametric form of the MEC algorithm is the additional processing required in the calculation of probabilities (Step 2) and the parameter estimation (Step 3). In the non-parametric version of the MEC algorithm, frequency tables are maintained for each of the classes. These frequency tables are updated in Step 3 of the algorithm. After the MEC algorithm has converged, these frequency tables are used to design the entropy coders. It is for this reason that the frequency tables must also be known at the decoder. In the parametric case, the probabilities can be efficiently described to the decoder by transmitting two parameters (shape and variance) per class. In the non-parametric version, the frequency tables must be explicitly transmitted. To reduce the amount of side information, we may take advantage of the symmetry in the frequency tables: one half of each frequency table is quantized using 6-8 bits and then transmitted to the decoder. If adaptive entropy coders are used, the probability tables quickly adapt to match the source statistics, and any mismatches caused by quantization rapidly disappear.

There is a trade-off between the two versions of the MEC algorithm mentioned above. The non-parametric version offers faster execution times at the expense of added side information; although the parametric version is slower to run, its parameters are much more concise to transmit. In terms of compression, however, the two versions of the MEC algorithm produce quite similar results. The results in this paper have been obtained using the non-parametric version of the MEC algorithm.
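A compact sketch of the non-parametric MEC loop follows, assuming the residual samples have already been offset to non-negative symbol indices (e.g. by adding 255); the add-one smoothing of the frequency tables and the convergence test are our own illustrative choices:

```python
# A sketch of non-parametric Minimum-Entropy Clustering: classify each
# block by minimum code length (Step 2), then re-estimate per-class
# frequency tables (Step 3); both steps can only lower the total entropy.
import numpy as np

def mec(blocks, n_classes, n_symbols, max_iters=20):
    """blocks: list of 1-D arrays of non-negative symbol indices (e.g. flattened 8x8 blocks)."""
    # Initialization: variance-ordered split into n_classes, as in Chen and Smith [3].
    order = np.argsort([b.var() for b in blocks])
    labels = np.empty(len(blocks), dtype=int)
    for k, idx in enumerate(np.array_split(order, n_classes)):
        labels[idx] = k
    for _ in range(max_iters):
        # Step 3: re-estimate class PDFs as (add-one smoothed) frequency tables.
        pdfs = np.ones((n_classes, n_symbols))
        for b, k in zip(blocks, labels):
            np.add.at(pdfs[k], b, 1)
        pdfs /= pdfs.sum(axis=1, keepdims=True)
        codelen = -np.log2(pdfs)
        # Step 2: minimum-code-length classification, L = -sum_i log2 p(b_i | C).
        new_labels = np.array([codelen[:, b].sum(axis=1).argmin() for b in blocks])
        if np.array_equal(new_labels, labels):  # Step 4: classes have converged
            break
        labels = new_labels
    return labels, pdfs
```

On the coder side, each class's final frequency table would then seed an adaptive entropy coder, mirroring the non-parametric version described above.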
4. A CONTEXT CLASSIFICATION BASED IMAGE CODER

In this section, we examine the design of the adaptive predictor and of the entropy coders used to encode the prediction residuals. As mentioned in the previous section, the predictors can be designed either from a training set of images or specifically for the image to be encoded. There is a trade-off between the two methods which parallels the trade-offs experienced in quantizer design. If a training set of images is used, the training is performed off-line; however, the predictors are not designed for the particular image and are slightly inferior in performance. Alternatively, if the predictors are designed for the image to be encoded, then some online training is required and the predictor coefficients must be transmitted to the receiver. In this paper, we examine both of the above-mentioned scenarios and compare their performance.

The training algorithm for the predictors is as follows:

1. For each pixel in the image (training image, or image to be encoded), the context C(X) is found.
2. Calculate and remove the average value μ from the pixels in C(X).
3. Take the sign of each of the mean-removed values (+ = 0 and - = 1) to make the 8-bit binary string S(X).
4. Calculate the standard deviation of the values in C(X).
5. Quantize the standard deviation into one of 4 levels to find the energy index q(X).
6. Classify the pixel as belonging to one of the 1024 contexts by combining S(X) and q(X).
7. Update the autocorrelation values for that context.
8. Return to Step 1 (until the entire image has been processed).

The standard deviation values are quantized using a Lloyd-Max quantizer. Once the classification has been completed and the autocorrelation values are finalized, the autocorrelation values are used to design a 4th-order optimum linear predictor for each context (a sketch is given at the end of this section). If a particular context appears rarely or not at all (in the training set), it is allocated a trivial second-order predictor with two coefficients of 0.5. It should also be noted that if the predictors are specifically designed for the image to be encoded, their coefficients must be transmitted as side information and contribute around 0.01 to 0.02 bpp to the overall bit-rate. All predictor coefficients are quantized using 10 bits per coefficient. It is interesting to note that within a typical image, only a fraction of the 1024 possible contexts appear. This is a significant advantage with respect to the transmission of the predictor coefficients, since only a small portion of them needs to be transmitted.

The MEC algorithm was used to define a classification scheme and design the entropy coders. Classification was made on blocks of 8x8 pixels, since this was experimentally found to produce the best results. The 8x8 blocks were classified into 16 classes and hence encoded using 16 different entropy coders. As mentioned previously, along with the classification scheme, the frequency tables for the entropy coders must also be transmitted to the decoder. For a 512x512-pixel image this results in an added bit-rate of 0.01 bpp; for a 256x256-pixel image this overhead increases to approximately 0.04 bpp.
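The per-context predictor design can be sketched as follows. The choice of the four causal neighbours (w, n, nw, ne) and the minimum-count threshold are our assumptions, since the text does not spell them out; rare contexts receive the trivial (0.5, 0.5) fallback on w and n:

```python
# A sketch of per-context MSE-optimal predictor design: accumulate
# normal-equation statistics per context, then solve for 4 coefficients.
import numpy as np

N_CTX, ORDER, MIN_COUNT = 1024, 4, 32  # MIN_COUNT is an assumed rarity threshold

def design_predictors(img, context_index):
    img = np.asarray(img, dtype=np.int32)
    R = np.zeros((N_CTX, ORDER, ORDER))  # per-context autocorrelation matrices
    p = np.zeros((N_CTX, ORDER))         # per-context cross-correlation vectors
    count = np.zeros(N_CTX, dtype=int)
    h, w = img.shape
    for r in range(2, h):
        for c in range(2, w - 1):
            i = context_index(img, r, c)
            v = np.array([img[r, c - 1], img[r - 1, c],           # w, n
                          img[r - 1, c - 1], img[r - 1, c + 1]],  # nw, ne (assumed)
                         dtype=float)
            R[i] += np.outer(v, v)
            p[i] += v * img[r, c]
            count[i] += 1
    # Trivial fallback predictor 0.5*w + 0.5*n for rare or unseen contexts.
    coeffs = np.tile([0.5, 0.5, 0.0, 0.0], (N_CTX, 1))
    for i in np.flatnonzero(count >= MIN_COUNT):
        # Solve the normal equations R a = p for the MSE-optimal coefficients.
        coeffs[i] = np.linalg.solve(R[i] + 1e-6 * np.eye(ORDER), p[i])
    return coeffs  # would be quantized to 10 bits per coefficient for transmission
```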

5. RESULTS AND CONCLUSIONS

The coding results are listed in Table 1. The context classification based adaptive coder trained on a training set is referred to as CCBAC-TS, and the coder trained on the specific image is referred to as CCBAC-SI. A set of ten images, which did not include the test images, was used for training CCBAC-TS. The values quoted in the table are based on actual file sizes and include all overheads. The results obtained from CALIC [6] are also provided for comparison.

[Table 1: A comparison of lossless compression results (quoted in bpp) for CALIC, CCBAC-TS and CCBAC-SI on the fruit, lena, man and mldhill images.]

From Table 1, it is clear that both proposed methods significantly outperform CALIC, which is considered to be the state of the art in lossless image coding. As expected, the image-specific training algorithm CCBAC-SI outperforms the pre-trained predictors used in CCBAC-TS. What is surprising, however, is the small difference between the two sets of results. This suggests that in many cases the pre-trained predictors may be advantageous, since they require no online training. The use of off-line training (using a training set of images) is even more appropriate when the coder is being used on satellite or medical images. For instance, if the coder is going to be used for encoding X-ray images, then the predictors can be trained using a suite of X-ray images.

Both of the tested coders outperform CALIC. We must, however, acknowledge the increased computational complexity of both coders compared to CALIC. Work is currently in progress toward further optimizing the speed of the presented algorithms.

The prediction scheme presented in this paper demonstrates the advantages of using multiple predictors together with the fast context classification scheme presented. The fact that the context classification scheme does not use an exhaustive search allows a large number of contexts to be utilized. The combination of this type of prediction with an adaptive entropy coding scheme such as the MEC algorithm was shown to produce some of the best results obtained in lossless image coding to date.

REFERENCES

[1] T.V. Ramabadran and K. Chen, "The use of contextual information in the reversible compression of medical images," IEEE Trans. Medical Imaging, vol. 11, June 1992.
[2] F. Golchin and K.K. Paliwal, "Clustering-based adaptive prediction and entropy coding for use in lossless image coding," accepted for ICIP-97, October 1997.
[3] W.H. Chen and C.H. Smith, "Adaptive coding of monochrome and color images," IEEE Trans. Commun., vol. COM-25, November 1977.
[4] N. Farvardin and J.W. Modestino, "Optimum quantizer performance for a class of non-Gaussian memoryless sources," IEEE Trans. Inform. Theory, vol. IT-30, May 1984.
[5] X. Wu, "Context selection and quantization for lossless image coding," Proc. Data Compression Conf. '95, March 1995.
[6] X. Wu and N. Memon, "CALIC - a context based lossless image codec," Proc. Int. Conf. on Acoust., Speech, Signal Processing, Atlanta, GA, May 1996.
[7] Y. Linde, A. Buzo and R.M. Gray, "An algorithm for vector quantizer design," IEEE Trans. Commun., vol. COM-28, January 1980.
[8] S.P. Lloyd, "Least squares quantization in PCM," IEEE Trans. Inform. Theory, vol. IT-28, March 1982.
[9] F. Golchin and K.K. Paliwal, "Minimum-entropy clustering and its application to lossless image coding," accepted for ICIP-97, October 1997.
