BASICS OF COMPRESSION THEORY
- Felicity Phelps
1 BASICS OF COMPRESSION THEORY
Why Compression?
Task: storage and transport of multimedia information.
E.g.: non-interlaced HDTV: horizontal resolution x vertical resolution x bits per pixel x frame rate = an enormous raw bit rate!!
Solutions:
- Develop technologies for higher bandwidth
- Find ways to reduce the number of bits to transmit without compromising the information content
2 Types of Compression
Lossless (entropy coding):
- Does not lose information (i.e., the signal is perfectly reconstructed after decompression)
- Produces a variable bit rate
- Is not guaranteed to actually reduce the data size
Lossy:
- Loses some information (i.e., the signal is not perfectly reconstructed after decompression)
- Can produce any desired constant bit rate

Lossless Compression
3 Data Sources
Model the data production as a source of symbols.
- A source has a vocabulary of N symbols
- Each symbol is represented with log2 N bits
- speech: 8 bits/sample: N = 2^8 = 256 symbols
- color image: 24 bits/sample: N = 2^24 symbols
- 8x8 image blocks: 512 bits/block: N = 2^512 symbols

Lossless Compression
Coding: translate each symbol in the vocabulary into a binary codeword.
Codewords may have different binary lengths. Example: symbols (a,b,c,d):
  a (00) -> 000
  b (01) -> 001
  c (10) -> 01
  d (11) -> 1
Goal: minimize the average symbol length
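The encode/decode round trip above can be sketched in a few lines of Python. This is a minimal illustration, not part of the slides: the four-symbol prefix code a->000, b->001, c->01, d->1 is a hypothetical completion of the slide's example, and the function names are my own.

```python
# Variable-length prefix code: no codeword is a prefix of another,
# so a bit stream can be decoded symbol by symbol without separators.
CODE = {"a": "000", "b": "001", "c": "01", "d": "1"}

def encode(symbols):
    return "".join(CODE[s] for s in symbols)

def decode(bits):
    inverse = {v: k for k, v in CODE.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:          # a complete codeword has been read
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

print(encode("dada"))               # the frequent symbol d costs 1 bit
print(decode(encode("abcd")))
```

If d really is the most frequent symbol, strings dominated by d shrink well below the 2 bits/symbol of the fixed-length code.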
4 Average Symbol Length
l(i) = binary length of the i-th symbol
Source emits M symbols (one every T seconds)
Symbol i has been emitted m(i) times:
  M = sum_{i=1..N} m(i)
Number of bits emitted:
  L_tot = sum_{i=1..N} m(i) l(i)
Average length per symbol:
  L = (1/M) sum_{i=1..N} m(i) l(i)
Average bit rate: L / T

Minimum Average Symbol Length
Basic idea for reducing the average symbol length: assign shorter codewords to symbols that appear more frequently, longer codewords to symbols that appear less frequently.
Theorem: the average binary length of the encoded symbols is always greater than or equal to the entropy H of the source.
What is the entropy of a source of symbols? We need some basic concepts of statistics first...
5 Symbol Probabilities
Probability P(i) of a symbol: corresponds to its relative frequency m(i)/M
Axioms of probability:
  0 <= P(i) <= 1
  sum_{i=1..N} P(i) = 1
Average symbol length:
  L = sum_{i=1..N} (m(i)/M) l(i) = sum_{i=1..N} P(i) l(i)

Entropy
Entropy (definition):
  H = - sum_{i=1..N} P(i) log2 P(i)
The entropy is a function of a given source of symbols (more precisely: of a probability distribution).
The entropy is highest (equal to log2 N) if all symbols are equally probable.
The entropy is small (but always >= 0) when some symbols are much more likely to appear than others.
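The definition translates directly into code. The sketch below (function name and example distributions are illustrative) checks the two extreme cases described above: a uniform distribution and a distribution with one certain symbol.

```python
from math import log2

def entropy(p):
    """H = -sum p_i * log2(p_i), with the convention 0 * log2(0) = 0."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # equally probable: H = log2(4) = 2
skewed  = [1.0, 0.0, 0.0, 0.0]       # one certain symbol: H = 0
print(entropy(uniform), entropy(skewed))
```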
6 Example: N = 4
  P = {1/4, 1/4, 1/4, 1/4}: H = -4 (1/4) log2(1/4) = 2
  P = {1, 0, 0, 0}: H = 0

Case Study 1: Huffman coding
The Huffman code assignment procedure is based on a binary tree structure.
1. Associate the leaves of a binary tree with the list of probabilities.
2. Take the two smallest probabilities in the list and make the corresponding nodes siblings. Generate an intermediate node as their parent, and label the branch from the parent to one child node 0 and the branch from the parent to the other child 1.
3. Replace the probabilities and associated nodes in the list by the single new intermediate node with the sum of the two probabilities. If the list contains only one element, quit. Otherwise, go to step 2.
Codeword formation: follow the path from the root of the tree to the symbol, accumulating the labels of all the branches.
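The tree-merging procedure can be sketched with a heap of partial codes. This is an illustrative implementation (names and the four-symbol test distribution are my own), not the slides' worked example.

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code from {symbol: probability} by repeatedly
    merging the two least probable nodes, as in the procedure above."""
    # One leaf per symbol; the counter i breaks probability ties so the
    # heap never has to compare the dict payloads.
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, code0 = heapq.heappop(heap)   # two smallest probabilities
        p1, i, code1 = heapq.heappop(heap)
        # Label one branch 0 and the other branch 1.
        merged = {s: "0" + c for s, c in code0.items()}
        merged.update({s: "1" + c for s, c in code1.items()})
        heapq.heappush(heap, (p0 + p1, i, merged))
    return heap[0][2]

code = huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
print(code)  # frequent symbols get shorter codewords
```

For this dyadic distribution the average length, 0.5·1 + 0.25·2 + 2·0.125·3 = 1.75 bits/symbol, exactly equals the entropy.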
7 Example
N = 8 symbols: {a,b,c,d,e,f,g,h}, 3 bits per symbol (N = 2^3 = 8)
P(a) = 0.0…, P(b) = 0.0…, P(c) = 0.0…, P(d) = 0.09, P(e) = 0.…, P(f) = 0.…, P(g) = 0.…, P(h) = 0.…
Average length per symbol (before coding):
  L = sum_{i=1..8} 3 P(i) = 3 bits/symbol
Entropy:
  H = - sum_{i=1..8} P(i) log2 P(i) = 2.… bits/symbol
Average length per symbol (with Huffman coding): L_Huf = 2.… bits/symbol
Efficiency of the code: H / L_Huf = 9…%
[Figure: Huffman tree showing, for each symbol, its probability, the original fixed-length codeword, and the resulting Huffman codeword]
8 Huffman Coding (cont'd)
Probabilities are estimated from the relative frequencies.
To decode the bit stream, the decoder needs the Huffman table.
Decoding the bit stream:
- without coding: the fixed-length bit stream decodes as bcbgfb
- with Huffman coding: the same stream decodes as ghaed (variable length)
We can decode it because it is a prefix code (no codeword is the prefix of another codeword).
Do we always reduce the length? No! We reduce the average symbol length.

Case Study 2: CCITT Group 3 1-D fax encoding
Facsimile: bilevel image. A page scanned (left-to-right) at 200 dpi: several Mb (1728 pixels on each line).
Coding strategy:
- Count each run of white/black pixels
- Encode the run lengths with Huffman coding
- Note that the probabilities of run lengths are not the same for black and white pixels
- Decompose each run length n as n = 64 k + m (with 0 <= m < 64), and create Huffman tables for both k and m
- Special codewords for end-of-line, end-of-page, synchronization
Average compression ratio on text documents: about 10-to-1
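The two steps that feed the fax Huffman tables — extracting run lengths from a scanline and splitting each into a makeup part k and a terminating part m — can be sketched as below. This is an illustrative sketch (names and the example line are my own), using the G3 convention that a line starts with a white run, here with white = 0.

```python
def run_lengths(line):
    """Lengths of alternating runs in a bilevel scanline. The first run
    is white (0); a zero-length first run handles lines starting black."""
    runs, current, count = [], 0, 0
    for pixel in line:
        if pixel == current:
            count += 1
        else:
            runs.append(count)
            current, count = pixel, 1
    runs.append(count)
    return runs

def decompose(n):
    """Split a run length as n = 64*k + m with 0 <= m < 64, so that k
    and m can be coded from separate (makeup / terminating) tables."""
    return divmod(n, 64)

line = [0] * 100 + [1] * 3 + [0] * 25
print(run_lengths(line))   # [100, 3, 25]
print(decompose(100))      # (1, 36)
```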
9 Lossy Compression
Goal: minimize the distortion for a given compression ratio.
Ideally, we would optimize the system based on perceptual distortion (which is difficult to compute).
We'll need a few more concepts from statistics.
10 Some Definitions
x = the original value of a sample
x^ = the sample value after compression and decompression
e = x^ - x = error (a.k.a. noise, a.k.a. distortion)
σ_x² = power (variance) of the signal
σ_e² = power (variance) of the error
Signal-to-noise ratio: SNR = 10 log10(σ_x² / σ_e²)

Lossy Compression: Simple Examples
Subsampling: retain only samples on a subsampling grid (spatial/temporal).
- Has fundamental limitations: see sampling theorem
Quantization: quantize with fewer levels N (larger quantization step Δ = V/N)
  SNR_PCM = 10 log10(σ_x² / σ_e²) = 10 log10(a N²)
where a depends on the signal statistics.
Can we do better than simple quantization (PCM)?
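The N² dependence means every doubling of N (one extra bit) buys about 6 dB of SNR. The sketch below (all names and the uniform test signal are illustrative, not from the slides) measures this on a midrise uniform quantizer.

```python
import math
import random

def uniform_quantize(x, n_levels, v=2.0):
    """Midrise uniform quantizer on [-v/2, v/2) with step delta = v / n_levels."""
    delta = v / n_levels
    index = min(n_levels - 1, max(0, int((x + v / 2) / delta)))
    return -v / 2 + (index + 0.5) * delta

def snr_db(signal, reconstructed):
    p_x = sum(x * x for x in signal) / len(signal)
    p_e = sum((xr - x) ** 2 for x, xr in zip(signal, reconstructed)) / len(signal)
    return 10 * math.log10(p_x / p_e)

random.seed(0)
signal = [random.uniform(-1, 1) for _ in range(10000)]
for n in (16, 32, 64):   # doubling N should add roughly 6 dB
    q = [uniform_quantize(x, n) for x in signal]
    print(n, round(snr_db(signal, q), 1))
```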
11 Differential PCM
If the (n-1)-th sample has value x(n-1), what is the most probable value for x(n)?
Simplest case: the predicted value for x(n) is x(n-1).
Differential PCM (DPCM): instead of quantizing and transmitting x(n), quantize and transmit the residual d(n) = x(n) - x(n-1).
Decoding: x^(n) = x(n-1) + d^(n), where d^(n) is the quantized version of d(n).
We will show that SNR_DPCM > SNR_PCM.
More precisely: we will compare SNR_DPCM and SNR_PCM when the same number of bits is used to quantize x(n) (in PCM) and d(n) (in DPCM).

SNR of DPCM Coding
What is the equivalent quantization error?
  e(n) = x^(n) - x(n) = x(n-1) + d^(n) - x(n) = d^(n) - d(n) = e_d(n)
So the equivalent quantization error is equal to the quantization error for the residual. In particular, σ_e² = σ_ed²
12 SNR of DPCM Coding (cont'd)
Now we can compute SNR_DPCM:
  SNR_DPCM = 10 log10(σ_x² / σ_e²) = 10 log10(σ_x² / σ_ed²)
           = 10 log10((σ_x² / σ_d²)(σ_d² / σ_ed²))
           = 10 log10(σ_x² / σ_d²) + SNR_d
where SNR_d is the signal-to-noise ratio consequent to the quantization of d(n).
IMPORTANT: SNR_d = SNR_PCM, because it only depends on the number of quantization levels!

SNR of DPCM Coding (cont'd)
Now, we expect that (in general) σ_d² < σ_x², and hence
  10 log10(σ_x² / σ_d²) > 0
Therefore, for the same number of quantization levels (i.e., for the same output bit rate), we have SNR_DPCM > SNR_PCM.
13 Example
[Figure: a sample signal x(n) and its much smaller residual d(n)]

Closed-loop Encoding
Problem: to be able to reconstruct x^(n) = x(n-1) + d^(n), the decoder must know x(n-1)!!
Closed-loop differential quantization: rather than
  d(n) = x(n) - x(n-1)
we compute (at the encoder)
  d(n) = x(n) - x^(n-1)
so that
  x^(n) = x^(n-1) + d^(n)
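A minimal sketch of the closed-loop scheme (function names, the toy quantizer, and the sample values are illustrative): because the encoder predicts from its own reconstruction x^(n-1), the decoder stays in sync and the reconstruction error never exceeds the quantizer's per-sample error, instead of accumulating.

```python
def quantize(d, step=0.1):
    """Toy uniform quantizer for the prediction residual."""
    return step * round(d / step)

def dpcm_encode(samples):
    """Closed-loop DPCM: d(n) = x(n) - x_hat(n-1), predicted from the
    *reconstructed* previous sample, mirroring the decoder exactly."""
    x_hat_prev, residuals = 0.0, []
    for x in samples:
        d_hat = quantize(x - x_hat_prev)
        residuals.append(d_hat)
        x_hat_prev = x_hat_prev + d_hat    # x_hat(n) = x_hat(n-1) + d_hat(n)
    return residuals

def dpcm_decode(residuals):
    x_hat_prev, out = 0.0, []
    for d_hat in residuals:
        x_hat_prev = x_hat_prev + d_hat
        out.append(x_hat_prev)
    return out

samples = [0.0, 0.23, 0.51, 0.74, 0.69]
decoded = dpcm_decode(dpcm_encode(samples))
print([round(abs(a - b), 3) for a, b in zip(samples, decoded)])
```

With step 0.1, every reconstructed sample stays within 0.05 of the original, however long the sequence.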
14 [Figure 1: Closed-loop differential encoder (a), with the quantizer and predictor in a feedback loop, and decoder (b)]

Predictive Encoding
Higher-order linear prediction: instead of d(n) = x(n) - x^(n-1), quantize and transmit
  d(n) = x(n) - sum_{m=1..M} a_m x^(n-m)
The coefficients are computed based on a large segment of the signal.
The decoder must know the coefficients (we need to transmit them).
Note: DPCM is the simplest case of predictive encoding.
15 Differential PCM (cont'd)
Differential PCM not only reduces the quantization noise but also the signal entropy.
[Figure: a sample signal x(n) and its residual d(n), with histograms P(x) and P(d); P(d) is much more peaked]

Transform Coding
Consider segments formed by M samples. E.g.:
- an image frame
- an 8x8 pixel block (M = 64)
- a segment of speech
Apply a suitable invertible transformation (typically a matrix multiplication):
  x(1), ..., x(M) -> y(1), ..., y(M)   (channels)
Quantize and transmit y(1), ..., y(M).
16 [Figure 2: Transform coder (a): x(1), ..., x(M) pass through transform T; each channel y(n) is quantized by its own quantizer Q_n. Decoder (b): the quantized channels pass through the inverse transform T^-1 to give x^(1), ..., x^(M)]

Some Definitions
y^(1), ..., y^(M) = quantized versions of y(1), ..., y(M)
x^(1), ..., x^(M) = inverse transform of y^(1), ..., y^(M)
Distortion (error):
  e_X(n) = x^(n) - x(n)
  e_Y(n) = y^(n) - y(n)
Overall distortion:
  D_X = (1/M) sum_{n=1..M} σ_ex(n)²
  D_Y = (1/M) sum_{n=1..M} σ_ey(n)²
17 Channel Power Distribution
Without transformation (y(n) = x(n), the PCM case), all channels have the same error variance σ_ex² because the source is stationary. Therefore,
  D_X = D_Y = σ_ex² = σ_ey² = D_PCM
Otherwise, the channels y(n) will (in general) have different variances σ_ey(n)² (as a consequence of the transformation). This turns out to be the key factor in transform coding.

Transform Coding (cont'd)
Important: quantizers in different channels may have different numbers of levels.
Bit budget B: if we quantize y(1), ..., y(M) using B(1), ..., B(M) bits respectively, it must be that
  sum_{n=1..M} B(n) = B
Optimal bit allocation: allocate more bits to those channels that have the highest variance.
It can be shown that the transformation increases the equivalent quantization SNR.
The transformation also reduces the signal entropy!
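The slides don't derive the allocation rule, but a standard high-resolution result gives each channel the average budget B/M plus a correction that grows with the log of its variance. A sketch under that assumption (names and the example variances are illustrative; fractional allocations are left unrounded):

```python
from math import log2, prod

def allocate_bits(variances, total_bits):
    """Textbook high-rate allocation: b_n = B/M + 0.5*log2(var_n / gm),
    where gm is the geometric mean of the channel variances."""
    m = len(variances)
    gm = prod(variances) ** (1.0 / m)
    return [total_bits / m + 0.5 * log2(v / gm) for v in variances]

bits = allocate_bits([16.0, 4.0, 1.0, 1.0], total_bits=16)
print([round(b, 2) for b in bits])  # high-variance channels get more bits
```

Note that the corrections sum to zero by construction, so the budget constraint sum B(n) = B holds automatically.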
18 Discrete Cosine Transform (DCT)
The DCT is a kind of transform coding with matrix T defined as
  t_pm = sqrt(1/M)                              for p = 0
  t_pm = sqrt(2/M) cos(π p (2m + 1) / (2M))     for p = 1, 2, ..., M-1
It has the property that different channels represent the signal power along different (spatial or temporal) frequencies (similarly to the discrete Fourier transform).

What Do the DCT Coefficients Represent?
The DCT coefficients represent the spatial frequency content within an MxM image block.
The (0,0) coefficient is called the DC coefficient, and is proportional to the average of the MxM image values in the block.
As we move to the right of the block, the coefficients represent the energy in higher horizontal frequencies.
As we move down the block, the coefficients represent the energy in higher vertical frequencies.
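The definition above can be checked numerically: building T for M = 8 and multiplying it by its transpose should give the identity, confirming orthonormality (and hence T^-1 = T^T). A small sketch, with illustrative function names:

```python
from math import cos, pi, sqrt

def dct_matrix(m):
    """Orthonormal DCT matrix: t[p][k] = c(p) * cos(pi*p*(2k+1)/(2m)),
    with c(0) = sqrt(1/m) and c(p) = sqrt(2/m) for p > 0."""
    t = []
    for p in range(m):
        c = sqrt(1.0 / m) if p == 0 else sqrt(2.0 / m)
        t.append([c * cos(pi * p * (2 * k + 1) / (2 * m)) for k in range(m)])
    return t

def gram(t):
    """Compute T * T^T to check orthonormality."""
    m = len(t)
    return [[sum(t[i][k] * t[j][k] for k in range(m)) for j in range(m)]
            for i in range(m)]

T = dct_matrix(8)
I = gram(T)
print(all(abs(I[i][j] - (1.0 if i == j else 0.0)) < 1e-9
          for i in range(8) for j in range(8)))  # True: T is orthonormal
```

Row 0 applied to a constant block of ones gives sqrt(8) times the average value, matching the DC-coefficient property above.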
19 Examples of DCTs
[Figure: example image blocks containing horizontal and vertical centerband frequencies, shown together with their DCTs in the (f_x, f_y) plane]

Subband Coding
It is a kind of transform coding (although operating on the whole signal).
It is not implemented as a matrix multiplication but as a bank of filters followed by decimation (i.e., subsampling).
Again, each channel represents the signal energy content along different frequencies.
20 A Simple Scheme of Subband Coding (2 channels)
[Figure: the input x(n) is split by a low-pass and a high-pass filter (frequency responses covering the signal band); each branch is decimated by 2, transmitted, interpolated by 2, and the branches are recombined to give x^(n)]

N-Band Subband Coding
[Figure: the frequency axis in Hz divided into adjacent bands n-1, n, n+1 by the filter responses H(f)]
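The two-channel scheme can be sketched with the simplest possible filter pair: Haar-style averaging (low band) and differencing (high band), each decimated by 2. This is a stand-in for the slides' filter bank, not the actual filters used there, and the names are illustrative; it shows the key property that the band signals reconstruct the input exactly.

```python
def analyze(x):
    """Two-channel Haar-style filter bank: average (low band) and
    difference (high band), each decimated by 2. len(x) must be even."""
    low = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    high = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return low, high

def synthesize(low, high):
    """Interleave sums and differences to invert analyze() exactly."""
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]
    return out

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0]
low, high = analyze(x)
print(low, high)                  # the low band carries most of the energy
print(synthesize(low, high) == x) # True: perfect reconstruction
```

For smooth signals the high band has small variance, so (as with DPCM and transform coding) it can be quantized with few bits.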
21 Coding - Conclusions
Lossless coding:
- Can only compress so much (depending on the source entropy)
- Produces a variable bit rate
Lossy coding:
- Can produce any desired constant bit rate
- Uses DPCM or transform coding for improved SNR
In general:
  x(n) -> Lossy coder -> Lossless coder -> x^(n)

References
A. Gersho, R. Gray, Vector Quantization and Signal Compression, Kluwer Academic Publishers, 1992
22 Some Details for the More Curious...

Orthonormal Transformations
We will consider linear and orthogonal transformations:
  X = [x(1), ..., x(M)]^T,  Y = [y(1), ..., y(M)]^T
Then Y = TX, where T is an MxM orthonormal matrix (i.e., T^-1 = T^T, det(T) = ±1).
Fact 1: If T is orthonormal, then D_X = D_Y.
23 Distortion
Without transformation (T = I, the PCM case), there are B/M bits per channel, therefore:
  D_X = D_PCM = σ_ex² = C σ_x² 2^(-2B/M)
With transformation T:
Fact 2: Under optimal bit allocation,
  D_Y = C (prod_{n=1..M} σ_y(n)²)^(1/M) 2^(-2B/M)

Coding Gain
The coding gain (CG) measures the performance of transform coding with respect to direct quantization:
  CG = D_PCM / D_Y = σ_x² / (prod_{n=1..M} σ_y(n)²)^(1/M)
Thus, transform coding (under optimal bit allocation) always performs at least as well as PCM.
It can be shown that the coding gain reaches its maximum when T is the Karhunen-Loeve transform, i.e., when the channels are decorrelated.
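Since an orthonormal T preserves total variance for a stationary source, σ_x² equals the arithmetic mean of the channel variances, so CG reduces to the arithmetic/geometric mean ratio, which is >= 1 by the AM-GM inequality. A small sketch (names and the example variances are illustrative):

```python
from math import prod

def coding_gain(channel_variances):
    """CG = arithmetic mean / geometric mean of the channel variances.
    Equals 1 when all variances match (no gain) and grows as the
    transform concentrates the signal power into few channels."""
    m = len(channel_variances)
    am = sum(channel_variances) / m
    gm = prod(channel_variances) ** (1.0 / m)
    return am / gm

print(coding_gain([1.0, 1.0, 1.0, 1.0]))   # 1.0: no gain
print(coding_gain([3.4, 0.4, 0.1, 0.1]))   # > 1: the transform helps
```

Both examples keep the same total variance; only the second, concentrated distribution yields a gain, which is exactly what the Karhunen-Loeve transform maximizes.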
More informationEE 5345 Biomedical Instrumentation Lecture 12: slides
EE 5345 Biomedical Instrumentation Lecture 1: slides 4-6 Carlos E. Davila, Electrical Engineering Dept. Southern Methodist University slides can be viewed at: http:// www.seas.smu.edu/~cd/ee5345.html EE
More informationVector Quantization and Subband Coding
Vector Quantization and Subband Coding 18-796 ultimedia Communications: Coding, Systems, and Networking Prof. Tsuhan Chen tsuhan@ece.cmu.edu Vector Quantization 1 Vector Quantization (VQ) Each image block
More informationChapter 2 Date Compression: Source Coding. 2.1 An Introduction to Source Coding 2.2 Optimal Source Codes 2.3 Huffman Code
Chapter 2 Date Compression: Source Coding 2.1 An Introduction to Source Coding 2.2 Optimal Source Codes 2.3 Huffman Code 2.1 An Introduction to Source Coding Source coding can be seen as an efficient way
More information2018/5/3. YU Xiangyu
2018/5/3 YU Xiangyu yuxy@scut.edu.cn Entropy Huffman Code Entropy of Discrete Source Definition of entropy: If an information source X can generate n different messages x 1, x 2,, x i,, x n, then the
More informationPredictive Coding. Prediction Prediction in Images
Prediction Prediction in Images Predictive Coding Principle of Differential Pulse Code Modulation (DPCM) DPCM and entropy-constrained scalar quantization DPCM and transmission errors Adaptive intra-interframe
More informationEE368B Image and Video Compression
EE368B Image and Video Compression Homework Set #2 due Friday, October 20, 2000, 9 a.m. Introduction The Lloyd-Max quantizer is a scalar quantizer which can be seen as a special case of a vector quantizer
More informationPredictive Coding. Prediction
Predictive Coding Prediction Prediction in Images Principle of Differential Pulse Code Modulation (DPCM) DPCM and entropy-constrained scalar quantization DPCM and transmission errors Adaptive intra-interframe
More informationFault Tolerance Technique in Huffman Coding applies to Baseline JPEG
Fault Tolerance Technique in Huffman Coding applies to Baseline JPEG Cung Nguyen and Robert G. Redinbo Department of Electrical and Computer Engineering University of California, Davis, CA email: cunguyen,
More informationPredictive Coding. Lossy or lossless. Feedforward or feedback. Intraframe or interframe. Fixed or Adaptive
Predictie Coding Predictie coding is a compression tecnique based on te difference between te original and predicted alues. It is also called DPCM Differential Pulse Code Modulation Lossy or lossless Feedforward
More informationLossless Image and Intra-frame Compression with Integer-to-Integer DST
1 Lossless Image and Intra-frame Compression with Integer-to-Integer DST Fatih Kamisli, Member, IEEE arxiv:1708.07154v1 [cs.mm] 3 Aug 017 Abstract Video coding standards are primarily designed for efficient
More informationSource Coding: Part I of Fundamentals of Source and Video Coding
Foundations and Trends R in sample Vol. 1, No 1 (2011) 1 217 c 2011 Thomas Wiegand and Heiko Schwarz DOI: xxxxxx Source Coding: Part I of Fundamentals of Source and Video Coding Thomas Wiegand 1 and Heiko
More informationCSCI 2570 Introduction to Nanocomputing
CSCI 2570 Introduction to Nanocomputing Information Theory John E Savage What is Information Theory Introduced by Claude Shannon. See Wikipedia Two foci: a) data compression and b) reliable communication
More informationIntroduction to Video Compression H.261
Introduction to Video Compression H.6 Dirk Farin, Contact address: Dirk Farin University of Mannheim Dept. Computer Science IV L 5,6, 683 Mannheim, Germany farin@uni-mannheim.de D.F. YUV-Colorspace Computer
More informationInformation Theory and Statistics Lecture 2: Source coding
Information Theory and Statistics Lecture 2: Source coding Łukasz Dębowski ldebowsk@ipipan.waw.pl Ph. D. Programme 2013/2014 Injections and codes Definition (injection) Function f is called an injection
More informationE303: Communication Systems
E303: Communication Systems Professor A. Manikas Chair of Communications and Array Processing Imperial College London Principles of PCM Prof. A. Manikas (Imperial College) E303: Principles of PCM v.17
More informationReduce the amount of data required to represent a given quantity of information Data vs information R = 1 1 C
Image Compression Background Reduce the amount of data to represent a digital image Storage and transmission Consider the live streaming of a movie at standard definition video A color frame is 720 480
More informationCMPT 365 Multimedia Systems. Lossless Compression
CMPT 365 Multimedia Systems Lossless Compression Spring 2017 Edited from slides by Dr. Jiangchuan Liu CMPT365 Multimedia Systems 1 Outline Why compression? Entropy Variable Length Coding Shannon-Fano Coding
More informationChapter 3 Source Coding. 3.1 An Introduction to Source Coding 3.2 Optimal Source Codes 3.3 Shannon-Fano Code 3.4 Huffman Code
Chapter 3 Source Coding 3. An Introduction to Source Coding 3.2 Optimal Source Codes 3.3 Shannon-Fano Code 3.4 Huffman Code 3. An Introduction to Source Coding Entropy (in bits per symbol) implies in average
More informationCSEP 590 Data Compression Autumn Arithmetic Coding
CSEP 590 Data Compression Autumn 2007 Arithmetic Coding Reals in Binary Any real number x in the interval [0,1) can be represented in binary as.b 1 b 2... where b i is a bit. x 0 0 1 0 1... binary representation
More informationCompression. Reality Check 11 on page 527 explores implementation of the MDCT into a simple, working algorithm to compress audio.
C H A P T E R 11 Compression The increasingly rapid movement of information around the world relies on ingenious methods of data representation, which are in turn made possible by orthogonal transformations.the
More informationThe Karhunen-Loeve, Discrete Cosine, and Related Transforms Obtained via the Hadamard Transform
The Karhunen-Loeve, Discrete Cosine, and Related Transforms Obtained via the Hadamard Transform Item Type text; Proceedings Authors Jones, H. W.; Hein, D. N.; Knauer, S. C. Publisher International Foundation
More informationShannon-Fano-Elias coding
Shannon-Fano-Elias coding Suppose that we have a memoryless source X t taking values in the alphabet {1, 2,..., L}. Suppose that the probabilities for all symbols are strictly positive: p(i) > 0, i. The
More informationEMBEDDED ZEROTREE WAVELET COMPRESSION
EMBEDDED ZEROTREE WAVELET COMPRESSION Neyre Tekbıyık And Hakan Şevki Tozkoparan Undergraduate Project Report submitted in partial fulfillment of the requirements for the degree of Bachelor of Science (B.S.)
More informationModule 5 EMBEDDED WAVELET CODING. Version 2 ECE IIT, Kharagpur
Module 5 EMBEDDED WAVELET CODING Lesson 13 Zerotree Approach. Instructional Objectives At the end of this lesson, the students should be able to: 1. Explain the principle of embedded coding. 2. Show the
More informationPrinciples of Communications
Principles of Communications Weiyao Lin, PhD Shanghai Jiao Tong University Chapter 4: Analog-to-Digital Conversion Textbook: 7.1 7.4 2010/2011 Meixia Tao @ SJTU 1 Outline Analog signal Sampling Quantization
More information+ (50% contribution by each member)
Image Coding using EZW and QM coder ECE 533 Project Report Ahuja, Alok + Singh, Aarti + + (50% contribution by each member) Abstract This project involves Matlab implementation of the Embedded Zerotree
More information