Real-Time Audio and Video


MM-2: Multimedia Payloads

Raw audio (uncompressed audio):
- Telephony: the speech signal occupies roughly 200 Hz - 3.4 kHz, rounded up to a 4 kHz band. PCM (Pulse Code Modulation): 8,000 samples/sec x 8 bits = 64 kbps.
- Teleconferencing: 16,000 samples/sec x 8 bits = 128 kbps.
- Music (CD): human hearing spans roughly 20 Hz - 22 kHz. 44,100 samples/sec x 16 bits = 705.6 kbps (mono) = 1.4 Mbps (stereo).

NOTICE: This chapter is a prerequisite for the next chapter: Multimedia over IP.

MM-3: Multimedia Payloads (cont.)

Raw video (uncompressed video):
- Videoconferencing (MPEG-1, small screen): 352 pixels x 240 lines x 12 bits x 30 frames/sec = 30.4 Mbps.
- TV (MPEG-2, NTSC): 720 pixels x 480 lines x 24 bits (3 x 8: red, green, blue) x 30 frames/sec = 249 Mbps.
- HDTV: 1920 pixels x 1080 lines x 24 bits (3 x 8: red, green, blue) x 30 frames/sec = 1.493 Gbps.

MM-4: Compression

The previous two slides show that transmission of raw signals, especially of video signals, would require enormous bandwidth. Since video signals have a high degree of redundancy, it is clear that this redundancy should be removed before transmitting or storing the signal. Removing or reducing the redundancy in signals is called compression.

Pipeline: digitized image -> data redundancy reduction -> coding -> transmission/storage -> decoding -> reconstruction.

MPEG = Moving Picture Experts Group.
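The bit-rate arithmetic above is easy to check programmatically. The sketch below simply recomputes the raw rates from the figures on these slides; it is illustrative only (plain Python, nothing assumed beyond the numbers already given).

    # Raw (uncompressed) bit rates from the figures above.
    def audio_rate(samples_per_sec, bits_per_sample, channels=1):
        return samples_per_sec * bits_per_sample * channels  # bits per second

    def video_rate(width, height, bits_per_pixel, fps):
        return width * height * bits_per_pixel * fps  # bits per second

    print(audio_rate(8_000, 8))            # telephony PCM: 64_000 (64 kbps)
    print(audio_rate(44_100, 16, 2))       # CD stereo: 1_411_200 (~1.4 Mbps)
    print(video_rate(352, 240, 12, 30))    # MPEG-1 size: 30_412_800 (~30.4 Mbps)
    print(video_rate(1920, 1080, 24, 30))  # HDTV: 1_492_992_000 (~1.493 Gbps)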

MM-5: Compression (cont.)

The key concepts used in video compression are:
- Discrete Cosine Transform (DCT)
- Quantization
- Zig-zag scan
- Run-Length Encoding (RLE)
- Entropy coding
- YUV color spaces and subsampling
- Temporal compression

The first five concepts provide spatial compression and will first be applied to the compression of still grayscale images. The techniques will then be extended to still color images, giving the foundation for JPEG, the major standard for transmission of still images on the Internet. The techniques of JPEG can be further extended to moving images (video) and audio, giving MPEG, the most common video standard used on the Internet.

MM-6: Discrete Cosine Transform (DCT)

The DCT belongs to the family of Fourier transforms; it is similar to the discrete Fourier transform (DFT). The Fourier transform of 1D signals (e.g. time-domain signals like audio) shows how the signal's energy (read: the signal's information content) is distributed across frequencies. For example, we may discover that most of the signal's energy is concentrated in a small range of frequencies, which leaves room for compression of 1D temporal signals. The Fourier transform can also be applied to 2D signals, i.e. images. In the case of a single image (no temporal component assumed), the image is a function of two spatial variables, and the reasoning is the same as in 1D: energy concentration enables spatial compression.

MM-7: One-dimensional DCT

As with the Fourier transform, there are two versions of the DCT: one-dimensional and two-dimensional.

S(f) = DCT(s(t)), s(t) = IDCT(S(f))

Here s(t) is the time-domain (original) signal and S(f) is the frequency-domain (transformed) signal. S(f) shows explicitly the frequency components of the signal and the distribution of the signal's energy (i.e. the signal's information content) among those components. The DCT is a lossless transformation: the original time-domain signal can be fully restored from the frequency-domain signal by applying the inverse DCT.

MM-8: Two-dimensional DCT

S(p,q) = DCT(s(x,y)), s(x,y) = IDCT(S(p,q))

Here s(x,y) is the spatial signal (the pixel values of the image) and S(p,q) is the frequency-domain (transformed) signal. The concept of frequency of a time-domain signal can intuitively be related to the rate of change of s(t) in time. For spatial functions, frequency is related to the rate of change of pixel values along the spatial axes of s(x,y).
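As a concrete illustration of the forward/inverse pair above, the following minimal sketch applies a separable 2D DCT to a random block and restores it exactly. It assumes NumPy and SciPy are available (scipy.fftpack.dct with norm='ortho' gives the orthonormal DCT-II used here).

    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(a):
        # Separable 2D DCT: transform columns, then rows.
        return dct(dct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

    def idct2(b):
        return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

    block = np.random.randint(0, 256, (8, 8)).astype(float)
    coeffs = dct2(block)                 # S(p,q)
    restored = idct2(coeffs)             # s(x,y) back again
    print(np.allclose(block, restored))  # True: the DCT is lossless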

MM-9

We consider the functions s(x,y) and S(p,q) to be discrete functions defined over sets of integers:

x = 0, 1, ..., M-1; y = 0, 1, ..., N-1; and likewise p = 0, 1, ..., M-1; q = 0, 1, ..., N-1.

In that case s(x,y) and S(p,q) become matrices:

A = [s(x,y)] of size M x N, B = [S(p,q)] of size M x N.

MM-10

Image compression is based on the fact that the DCT tends to concentrate the signal's energy into a small region of the frequency domain (the part of the frequency domain closest to the origin). We may pick a threshold value h and nullify all elements of S(p,q) whose magnitude is smaller than h:

S'(p,q) = S(p,q) if |S(p,q)| > h, and S'(p,q) = 0 otherwise.

The matrix S'(p,q) has fewer nonzero elements than S(p,q), so there is less information to transmit or store (only the nonzero elements are sent). However, the removal of the small elements of S(p,q) leads to lossy compression, since IDCT(S'(p,q)) is no longer equal to s(x,y). By varying h we can trade the compression rate against the quality of the restored image.

MM-11

Forward transform (2D DCT of the M x N matrix A = [A_mn]):

B_pq = alpha_p * alpha_q * sum_{m=0..M-1} sum_{n=0..N-1} A_mn * cos(pi*(2m+1)*p / 2M) * cos(pi*(2n+1)*q / 2N)

Inverse transform:

A_mn = sum_{p=0..M-1} sum_{q=0..N-1} alpha_p * alpha_q * B_pq * cos(pi*(2m+1)*p / 2M) * cos(pi*(2n+1)*q / 2N)

where alpha_0 = sqrt(1/M) and alpha_p = sqrt(2/M) for p > 0 (and analogously with N for alpha_q).

MM-12

[Figure: an M x N block of grayscale pixels maps to an M x N block of DCT coefficients, and back.]
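A small sketch of the thresholding idea of MM-10. The helpers repeat the dct2/idct2 pair from the previous snippet; the synthetic smooth image and the threshold value are arbitrary choices made only to show the effect, and the reduction-factor line anticipates the definition given on slide MM-18 below.

    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(a):
        return dct(dct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

    def idct2(b):
        return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

    # A smooth synthetic image: most DCT energy lands near the origin.
    x, y = np.meshgrid(np.arange(64), np.arange(64))
    img = 128 + 60 * np.sin(x / 10.0) + 40 * np.cos(y / 15.0)

    h = 1.0                                   # arbitrary threshold
    S = dct2(img)
    S_thr = np.where(np.abs(S) > h, S, 0.0)   # S'(p,q): nullify small coefficients
    approx = idct2(S_thr)                     # lossy reconstruction

    rf = img.size / np.count_nonzero(S_thr)   # reduction factor (slide MM-18)
    print(rf, np.abs(img - approx).max())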

MM-13

The DCT is a lossless transformation:

A = IDCT(DCT(A))

i.e. applying the forward transform and then the inverse transform restores the original matrix exactly.

MM-14

[Figure: bar graph of the absolute values of the top-left elements of B = DCT(A), with the DC component at the origin; axes are horizontal and vertical frequencies.]

Large coefficient values are concentrated in the upper left corner (small indices); very small or zero coefficient values are concentrated in the lower right corner (large indices). Notice: the plot is obtained using log10(abs(B)), with the elements of B smaller than 1 set to 1; otherwise the plot would be entirely blue, since all larger elements are concentrated in a very small area in the upper left corner.

MM-15 / MM-16

These slides illustrate the benefit of the DCT: a typical image consists of many smooth areas and fewer edges, and edges have sharp changes of pixel values. The low-frequency DCT coefficients represent the smooth areas of the original image, while the high-frequency coefficients represent the edges. The excerpt from the image shown on the slide contains two edges, therefore the high-frequency coefficients are very small.

[Figure: the first row A(1,:) of the image and its 1D DCT B(1,:) = dct(A(1,:)); only the AC components are plotted, the DC coefficient is not shown.]

MM-17

Since the DCT matrix has many small coefficient values, they can be set to zero, which reduces the number of nonzero values. This can be achieved with quantization (shown later). The zeroed elements can then be eliminated with run-length encoding (see later), reducing the number of bits to be transferred, i.e. the required transmission bit rate.

MM-18

The following slides illustrate the effect of removing the small coefficients from the DCT matrix. The reduction factor is defined as:

rf = (number of pixels in the original image) / (number of nonzero coefficients in the DCT matrix)

[Figure: original image A next to IDCT(reduced(DCT(A))) at rf = 37; the two are visually very similar.]

MM-19 / MM-20

[Figures: the same comparison at progressively larger reduction factors (rf = 74 and beyond); the reconstruction quality degrades gradually as rf grows.]

MM-21

[Figure: original image A next to IDCT(reduced(DCT(A))) at a very large reduction factor, rf = 473; the loss is now clearly visible.]

MM-22

In the standards discussed later, the DCT is not applied to the entire image; it is applied to smaller blocks of the image instead. The standard block size is 8 by 8 pixels. For that purpose the image [A(p,q)] of size M x N is divided into non-overlapping 8 x 8 blocks [A_ij(p,q)], and the DCT is applied to each block independently: B_ij(p,q) = DCT(A_ij(p,q)).

MM-23

[Figure: one 8 x 8 block of the image and the corresponding 8 x 8 block of its DCT coefficients.]

MM-24: Quantization

Quantization is a procedure used to encode an analog value into a binary number. The number of bits required to represent the encoded value depends on the number of quantization levels. An L-level quantizer maps a continuous signal s in [0, s_max] to a quantized signal s' = Q(s) in {0, 1, 2, ..., L-1}; log2(L) bits are required for the binary representation. With the quantization step size (quantum) h = s_max / L:

s' = Q(s) = k, if k*h <= s < (k+1)*h, for k = 0, 1, ..., L-1.

This quantizer is called a uniform quantizer because it has a constant step size, independent of the magnitude of the input signal.
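A minimal sketch of the uniform quantizer above together with the mid-point dequantizer defined on the next slide, assuming inputs in [0, s_max]; the sample values are arbitrary.

    import numpy as np

    def quantize(s, s_max, L):
        h = s_max / L                       # step size (quantum)
        k = np.floor(np.asarray(s) / h)     # index of the interval containing s
        return np.clip(k, 0, L - 1).astype(int)

    def dequantize(k, s_max, L):
        h = s_max / L
        return k * h + h / 2                # reconstruct at interval mid-points

    s = np.array([0.03, 0.42, 0.77, 0.99])
    k = quantize(s, s_max=1.0, L=8)         # 3 bits per sample
    print(k, dequantize(k, 1.0, 8))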

MM-25: Quantization (cont.)

The reverse of quantization is called dequantization. An L-level dequantizer maps the quantized signal s' in {0, 1, 2, ..., L-1} back to a reconstructed signal at the interval mid-points:

s_hat = Q^-1(s') = s'*h + h/2, so s_hat takes values in {h/2, 3h/2, ..., (L-1)*h + h/2}.

MM-26: Quantization (cont.)

Comments:
- Quantization can be applied to a discrete signal too. For example, the signal {0, dx, 2dx, ..., (N-1)dx}, with dx = s_max/N, can be quantized to {0, h, 2h, ..., (L-1)h} with L < N.
- Quantization can also be applied to signals that have negative values.
- Quantization reduces the number of bits by which the signal is encoded, and therefore compresses the signal. This kind of compression is lossy compression: some information is lost, i.e. the reconstructed signal is not equal to the original signal.
- The ratio of the number of bits of the original signal to the number of bits used in transmission (or storage) after compression is called the compression factor. The compression factor and the quality of the signal are in inverse relationship: in general, a higher compression factor leads to lower signal quality.

MM-27 / MM-28: Quantization of Images

In an image block, every pixel is quantized independently:

image [s(p,q)] -> quantizer Q -> quantized image [s(p,q)'] -> dequantizer Q^-1 -> reconstructed image [s_hat(p,q)]

Comments:
- Quantization of an image does not reduce the number of pixels in the image; it decreases the number of bits needed to encode each pixel. (Decreasing the number of pixels in an image is called downsampling and will be discussed later.)
- Quantization of an image increases the number of pixels that have identical values. This gives a chance for further lossless compression by entropy coding, as shown later. If the pixels with equal values are adjacent, another lossless compression technique can be applied: Run-Length Encoding (see later).

MM-29 / MM-30: Quantization of Images after DCT

DCT-transformed images can also be quantized:

image [s(p,q)] -> DCT -> transformed image [S(p,q)] -> quantizer Q -> quantized DCT [S(p,q)']
quantized DCT [S(p,q)'] -> dequantizer Q^-1 -> reconstructed transform [S_hat(p,q)] -> IDCT -> reconstructed image [s_hat(p,q)]

Comment: quantization of DCT-transformed images takes full advantage of the fact that the DCT concentrates the information in the upper left corner of the matrix. Consequently, the insignificant DCT coefficients are nullified by quantization. The nullified coefficients are generally adjacent, which contributes to a higher compression factor when the zeroed coefficients are removed by run-length encoding.

MM-31: Quantization of DCT Blocks

The same applies per block: each transformed block B_ij(p,q) = DCT(A_ij(p,q)) is quantized to B_ij(p,q)', dequantized at the receiver to B_hat_ij(p,q), and inverse-transformed to the reconstructed block A_hat_ij(p,q).

MM-32: Quantization Tables

Since different DCT coefficients carry different weights of information, they can be quantized with different quantization step sizes. The step sizes are stored in a quantization table (quantization matrix) QT:

B_ij(p,q)' = round( B_ij(p,q) / QT(p,q) )
B_hat_ij(p,q) = B_ij(p,q)' * QT(p,q)
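A sketch of table-driven quantization of one 8 x 8 coefficient block, following the two formulas above; QT here is the JPEG luminance table reproduced on slide MM-34 below.

    import numpy as np

    QT = np.array([                      # JPEG luminance quantization table
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99]])

    def quantize_block(B, qt=QT):
        return np.round(B / qt).astype(int)   # B'(p,q) = round(B(p,q)/QT(p,q))

    def dequantize_block(Bq, qt=QT):
        return Bq * qt                        # B_hat(p,q) = B'(p,q)*QT(p,q)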

MM-33: Quantization Table with Scaling Factor

The elements of a quantization table can be multiplied by a scaling factor (SF) in order to control the degree of compression and the resulting output bit rate:

B_ij(p,q)' = round( B_ij(p,q) / (SF * QT(p,q)) )

MM-34: Quantization Tables (cont.)

Quantization tables are standardized. JPEG defines the following luminance quantization table:

16  11  10  16  24  40  51  61
12  12  14  19  26  58  60  55
14  13  16  24  40  57  69  56
14  17  22  29  51  87  80  62
18  22  37  56  68 109 103  77
24  35  55  64  81 104 113  92
49  64  78  87 103 121 120 101
72  92  95  98 112 100 103  99

The table contains quantization step sizes. It is designed according to the visual sensitivity of the human eye to different frequency components: low-frequency coefficients are assigned smaller step sizes because the eye is more sensitive to changes in the low-frequency components. The table can be scaled to achieve a desired bit rate (see later).

MM-35: Zig-Zag Scan

The result of quantization is an 8 x 8 matrix. In order to obtain a bit stream suitable for run-length encoding and entropy coding, and later for transmission, the 2D matrix must be mapped into a 1D array. This is done by scanning the matrix elements in some order. JPEG uses the zig-zag order, which tends to group the low-frequency coefficients towards the beginning of the resulting 1D array. The scan starts at the DC component (index 1) and visits the AC components in the following order:

 1   2   6   7  15  16  28  29
 3   5   8  14  17  27  30  43
 4   9  13  18  26  31  42  44
10  12  19  25  32  41  45  54
11  20  24  33  40  46  53  55
21  23  34  39  47  52  56  61
22  35  38  48  51  57  60  62
36  37  49  50  58  59  63  64

MM-36: Run-Length Encoding (RLE)

After quantization, especially after quantization of DCT coefficients, there are many repeated zeros in the zig-zag stream. Each nonzero element is paired with the number of zeros preceding it; the DC value goes first, unpaired. The pairs (run length, nonzero element) are called symbols, and the symbols are fed to the entropy coder. An end-of-block (EOB) marker terminates the sequence. For instance, a zig-zag stream beginning 5 0 0 2 3 0 0 4 0 0 0 ... encodes as: 5 (2,2) (0,3) (2,4) EOB.
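A compact sketch of the zig-zag scan and the (run, value) symbol generation described above. The zig-zag order is derived from anti-diagonals rather than hard-coded, and reproduces the index table on slide MM-35.

    import numpy as np

    def zigzag_indices(n=8):
        # Walk anti-diagonals, alternating direction, as in the table above.
        order = []
        for d in range(2 * n - 1):
            diag = [(i, d - i) for i in range(n) if 0 <= d - i < n]
            order.extend(diag if d % 2 else diag[::-1])
        return order

    def rle_symbols(block):
        seq = [block[i, j] for i, j in zigzag_indices(block.shape[0])]
        dc, ac = seq[0], seq[1:]
        symbols, run = [dc], 0
        for v in ac:
            if v == 0:
                run += 1
            else:
                symbols.append((run, v))   # (number of zeros preceding v, v)
                run = 0
        symbols.append('EOB')              # trailing zeros collapse to end-of-block
        return symbols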

MM-37: Entropy Coding

A message can be considered a string of symbols c1 c2 c3 c4 ..., where each symbol ck takes values from a finite set of symbols {S1, S2, ..., SM}. Examples:
- A textual message (a string of printable ASCII characters) takes values from the set of 95 symbols {Space, !, ", ..., 1, 2, ..., A, B, C, ..., ~}.
- An audio or video message after RLE is a DC value followed by (run, value) symbols, e.g. 5 (2,2) (0,3) (2,4) ...

The question is how many bits we need to encode a message. Theoretically (Shannon), the minimal required number of bits per symbol is:

b_i = log2(1 / p_i)

where b_i is the number of bits to encode symbol S_i, and p_i is the probability that a symbol c_k has value S_i, i.e. Prob(c_k = S_i).

MM-38: Entropy Coding (cont.)

The probability can be estimated from the symbol frequency count:

p_i ~ n_i / sum_j n_j

where n_i is the number of occurrences of symbol S_i in the message, or in the relevant class of messages. This estimate is more accurate for longer messages.

MM-39: Entropy Coding (cont.)

The name entropy coding comes from the fact that the information entropy of a source that generates the messages is defined as:

H(S) = sum_j p_j * log2(1 / p_j) = - sum_j p_j * log2(p_j)

One popular entropy coding technique is Huffman coding, which uses symbol codes with variable bit lengths. Huffman coding is a lossless compression algorithm (the decoded message is identical to the original message).

MM-40: Huffman Coding

Suppose we generate messages (character strings) using symbols from a set of 7 characters {A, B, C, D, E, F, G}. Since the set has 7 characters, we can encode each character with 3 bits, for example:

A - 000, B - 001, C - 010, D - 011, E - 100, F - 101, G - 110

Suppose a specific message: BABBAGE. Then the binary-encoded message is

B A B B A G E = 001 000 001 001 000 110 100

The string contains 21 bits.

MM-41: Huffman Coding (cont.)

Now, instead of straightforward fixed-length binary codes, we use Huffman codes for the characters that appear in the message (one consistent 0/1 labeling of the tree derived below):

Codebook: B - 0, A - 10, E - 110, G - 111

The message is now: B A B B A G E = 0 10 0 0 10 111 110

The character B is the most frequent character in this message, therefore it uses the smallest number of bits (just one bit: 0). The string contains 13 bits, which is only 62% of the previously encoded message. This encoding technique provides unambiguous and easy decoding, because it has the unique prefix property: no code is a prefix of any other code.

MM-42: Huffman Coding (cont.)

Deriving the Huffman codebook:

1. Find the frequency count for all characters that appear in the message:

Symbol  Count  Probability
A       2      2/7
B       3      3/7
E       1      1/7
G       1      1/7
Total   7

2. Sort the symbols according to their probabilities: B (3/7), A (2/7), E (1/7), G (1/7).

MM-43: Huffman Coding (cont.)

3. Join the two rightmost (least probable) nodes to form a new node, and label the two arcs with 0 and 1: E (1/7) and G (1/7) form a node with probability 2/7.
4. Repeat step 3: A (2/7) joins the new node (2/7) to form a node with probability 4/7.

MM-44: Huffman Coding (cont.)

5. Re-sort the new nodes: the subtree of weight 4/7 now precedes B (3/7).
6. Repeat step 3: B (3/7) joins the 4/7 node to form the root (7/7).
7. Read the leaves in terms of 0s and 1s. With the labeling chosen here: B = 0, A = 10, E = 110, G = 111.
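For reference, a minimal Huffman codebook builder (the standard heap-based construction, not taken from the slides). It reproduces the code lengths derived above; the exact 0/1 labels depend on tie-breaking, as noted on MM-44.

    import heapq
    from collections import Counter

    def huffman_codebook(message):
        # Each heap entry: (subtree weight, tiebreak id, {symbol: code}).
        heap = [(count, i, {sym: ''}) for i, (sym, count)
                in enumerate(Counter(message).items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            w0, _, c0 = heapq.heappop(heap)   # two least probable subtrees
            w1, _, c1 = heapq.heappop(heap)
            merged = {s: '0' + c for s, c in c0.items()}
            merged.update({s: '1' + c for s, c in c1.items()})
            heapq.heappush(heap, (w0 + w1, tie, merged))
            tie += 1
        return heap[0][2]

    book = huffman_codebook("BABBAGE")
    print(book)                                   # e.g. {'B': '0', 'A': '10', ...}
    print(sum(len(book[c]) for c in "BABBAGE"))   # 13 bits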

MM-45: Huffman Coding (cont.)

In summary, Huffman compression consists of the following steps:
1. Generate the string of symbols (the longer the better).
2. Derive the symbol statistics (frequency counts).
3. Derive the Huffman codebook from the statistics.
4. Encode the message.
5. Send the codebook to the receiver.
6. Send the message to the receiver (can be done in several packets).

MM-46: JPEG Grayscale Images

Grayscale image -> blocking (8 x 8 blocks) -> DCT (8 x 8 DCT coefficients) -> quantizer Q with quantization table QT (8 x 8 quantized DCT) -> zig-zag -> RLE (symbols) -> variable-length coding VLC with a codebook (entropy coding; JPEG uses Huffman coding) -> 1D binary array, ready for transmission or storage.

JPEG = Joint Photographic Experts Group. Adopted as an international standard in 1992. It also works for color images (see later).

MM-47: Color Images

Color images have three color components (planes). Each component is treated in a similar way as a grayscale image. Two major color spaces are used in digital video: RGB and YUV.

MM-48: RGB and YUV

RGB: the picture has three components, R (red), G (green) and B (blue). Each component has 256 levels (about as many as the human eye can distinguish), therefore each component is encoded with 8 bits, 24 bits in total per pixel. RGB is used in image sensors and image displays (CCD, CRT, LCD).

YUV: the picture has three components, Y (luminance: intensity, brightness) and U and V (chrominance: color content). The YUV color space is better suited to human perception, since humans deal with brightness and color separately. YUV is used in JPEG, MPEG-1 and MPEG-2. There are variants of YUV such as YIQ and YCbCr. Encoding of YUV requires fewer bits (see later).

[Figure: a color image and its red component.]

MM-49 / MM-50 / MM-51: RGB (cont.)

[Figures: the color image and its green and blue components; histograms Hist(R), Hist(G), Hist(B) of the three RGB color components. As the histograms show, all intensities (0..255) are significant in all three components.]

MM-52: Color Translation

YUV values can be derived from RGB values by a simple mapping:

Y = 0.30 R + 0.59 G + 0.11 B
U = 0.493 (B - Y)
V = 0.877 (R - Y)

or, in matrix form (standard YUV coefficients):

[Y]   [ 0.299  0.587  0.114] [R]
[U] = [-0.147 -0.289  0.436] [G]
[V]   [ 0.615 -0.515 -0.100] [B]

The Y component contains the information about the brightness (intensity) of the image, while the U and V components contain the information about the color.
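A small sketch of the matrix form above applied to RGB pixels (NumPy; the coefficients are the standard YUV ones quoted on the slide, and the sample pixel is arbitrary).

    import numpy as np

    RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                        [-0.147, -0.289,  0.436],
                        [ 0.615, -0.515, -0.100]])

    def rgb_to_yuv(rgb):
        # rgb: (..., 3) array of R, G, B values; returns Y, U, V in the same layout.
        return rgb @ RGB2YUV.T

    pixel = np.array([200.0, 120.0, 40.0])     # an orange-ish pixel
    print(rgb_to_yuv(pixel))                   # Y large, U negative, V positive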

MM-53: Color Translation (cont.)

The mapping is reversible: the RGB components can be derived from the corresponding YUV components:

[R]   [1.000  0.000  1.140] [Y]
[G] = [1.000 -0.395 -0.581] [U]
[B]   [1.000  2.032  0.000] [V]

The YIQ color space is similar to the YUV color space. It is used in NTSC (National Television System Committee) television:

[Y]   [0.299  0.587  0.114] [R]
[I] = [0.596 -0.275 -0.321] [G]
[Q]   [0.212 -0.523  0.311] [B]

MM-54: Color Translation (cont.)

While the RGB values are all in the range (0..255), the YUV and YIQ values are not. In order to force the luminance and the chrominances into the same range, the YUV components are scaled and shifted. The result is the YCbCr color space:

[Y ]   [ 0.257  0.504  0.098] [R]   [ 16]
[Cb] = [-0.148 -0.291  0.439] [G] + [128]
[Cr]   [ 0.439 -0.368 -0.071] [B]   [128]

[R]   [1.164  0.000  1.596] [Y  -  16]
[G] = [1.164 -0.392 -0.813] [Cb - 128]
[B]   [1.164  2.017  0.000] [Cr - 128]

MM-55 / MM-56: YUV

[Figures: the color image and its Y and U components.]

MM-57 / MM-58: YUV (cont.)

[Figures: the V component of the color image; logarithmic histograms log(Hist(Y)), log(Hist(U)), log(Hist(V)) of the three YUV components.]

The histograms show that all intensities (0..255) are significant in the Y component, while the U and V components occupy only limited intensity ranges. This indicates that the U and V components can use a higher compression ratio than the Y component.

MM-59 / MM-60: YUV (cont.)

[Figures: DCT(Y), the reduced DCT(Y), and the reconstruction IDCT(reduced(DCT(Y))) at a moderate reduction factor; the reconstructed Y plane is visually very close to the original.]

MM-61 / MM-62: YUV (cont.)

[Figures: DCT(U), the reduced DCT(U), and the reconstruction IDCT(reduced(DCT(U))); the U plane tolerates a considerably larger reduction factor than Y.]

MM-63 / MM-64: YUV (cont.)

[Figures: DCT(V), the reduced DCT(V), and the reconstruction IDCT(reduced(DCT(V))); the V plane tolerates the largest reduction factor of the three.]

MM-65: YUV (cont.)

By examining the previous slides we can conclude that grayscale images and the Y component of the YUV color space tolerate only a moderate reduction factor, while the U and V components tolerate much larger reduction factors without diminishing the perceived quality. This means that the U and V components can use larger step sizes in quantization. For this reason JPEG defines two quantization tables, one for the luminance and another for the chrominances.

Luminance quantization table (Y):

16  11  10  16  24  40  51  61
12  12  14  19  26  58  60  55
14  13  16  24  40  57  69  56
14  17  22  29  51  87  80  62
18  22  37  56  68 109 103  77
24  35  55  64  81 104 113  92
49  64  78  87 103 121 120 101
72  92  95  98 112 100 103  99

Chrominance quantization table (U,V / I,Q / Cb,Cr):

17  18  24  47  99  99  99  99
18  21  26  66  99  99  99  99
24  26  56  99  99  99  99  99
47  66  99  99  99  99  99  99
99  99  99  99  99  99  99  99
99  99  99  99  99  99  99  99
99  99  99  99  99  99  99  99
99  99  99  99  99  99  99  99

MM-66: Chrominance Subsampling

Besides the smaller resolution in terms of quantization levels, the U and V components also require smaller spatial resolution: the U and V components (or I and Q, or Cb and Cr) need fewer pixels than the Y component. Therefore the sampling of the chrominance components can be done at lower rates. Examples:

- 4:4:4 - all U/V pixels are sampled (no subsampling)
- 4:2:2 - every second U/V pixel horizontally, every pixel vertically
- 4:1:1 - every fourth U/V pixel horizontally, every pixel vertically

MM-67: Chrominance Subsampling (cont.)

JPEG/MPEG use the lowest subsampling rate, the 4:2:0 format, which requires the smallest bandwidth: every second U/V pixel both horizontally and vertically, so each chroma sample covers a 2 x 2 block of luma pixels. There are two variants of 4:2:0 with different grid offsets: in JPEG/MPEG-1/MJPEG each subsampled chroma value is the average of the four surrounding chroma samples (U' = (U1 + U2 + U3 + U4)/4, and likewise for V), while in MPEG-2 it is the average of the two vertically adjacent samples (U' = (U1 + U2)/2).

MM-68: Chrominance Subsampling (cont.)

Summary:

Format  Used by                              Subsampling ratio: hor  ver
4:4:4   (no subsampling)                     1/1                     1/1
4:2:2   SMPTE                                1/2                     1/1
4:2:0   JPEG/MPEG-1/MJPEG, MPEG-2, PAL-DV    1/2                     1/2
4:1:1   NTSC-DV                              1/4                     1/1

SMPTE = The Society of Motion Picture and Television Engineers.
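A sketch of 4:2:0 chroma subsampling by 2 x 2 averaging (the JPEG/MPEG-1 variant described above); for brevity it assumes even plane dimensions.

    import numpy as np

    def subsample_420(chroma):
        # Average each 2x2 neighborhood into one chroma sample.
        h, w = chroma.shape
        c = chroma.reshape(h // 2, 2, w // 2, 2)
        return c.mean(axis=(1, 3))

    U = np.arange(16.0).reshape(4, 4)
    print(subsample_420(U))   # 2x2 result: one sample per 2x2 luma block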

MM-69: Chrominance Subsampling (cont.)

[Figure: JPEG components and blocks. The source (camera) RGB image is translated to YUV; the Y plane is divided into 8 x 8 blocks (Block #0, Block #1, ..., Block #47, ...), and the subsampled U and V planes are divided into 8 x 8 blocks of their own.]

MM-70: JPEG Color Images

Color image (RGB) -> rgb2yuv translation -> Y, U, V planes -> blocking and subsampling (8 x 8 blocks) -> DCT (8 x 8 DCT coefficients) -> quantizer Q with quantization table QT (8 x 8 quantized DCT) -> zig-zag -> RLE (symbols) -> variable-length coding VLC with a codebook (entropy coding; JPEG uses Huffman coding) -> 1D binary array, ready for transmission or storage.

MM-71: Compression of Moving Pictures

The compression techniques considered so far apply to still pictures. This kind of compression is called spatial compression or intra-frame compression. In the case of moving pictures, the source generates a sequence of pictures called frames; a typical frame rate is 30 frames per second. It is reasonable to expect that two adjacent frames are very similar, since the picture does not change very rapidly in 1/30 of a second. In most cases most of a picture is unchanged, while only parts of the picture are moving. There is an obvious redundancy between frames, called temporal redundancy or inter-frame redundancy. The rest of this chapter discusses compression techniques that exploit the temporal redundancy in moving pictures. The core of temporal compression is the predictive coder and the predictive decoder.

MM-72: Predictive Coding

This slide shows the essence of temporal compression:

frame #1 - frame #2 (shifted 3 pixels down and 3 pixels right) = difference

In the difference, many pixels have equal value, so a very high spatial compression ratio can be achieved. Send frame #1 and the difference; frame #2 can then be constructed by adding the difference to frame #1.

MM-73: Predictive Coding (cont.)

Following the general idea of sending differences instead of actual frames, first consider a single pixel. The sender sends:

s(1), e(2) = s(2) - s(1), e(3) = s(3) - s(2), ...

and the receiver reconstructs:

s(1), s(2) = s(1) + e(2), s(3) = s(2) + e(3), ...

However, if the sender sends the quantized errors

e(2)' = Q[s(2) - s(1)] = e(2) + q(2), e(3)' = Q[s(3) - s(2)] = e(3) + q(3), ...

where q(n) is the quantization error, the receiver is forced to reconstruct the signal using e(n)' instead of e(n):

s~(1) = s(1)
s~(2) = s~(1) + e(2)' = s(2) + q(2)
s~(3) = s~(2) + e(3)' = s(3) + q(2) + q(3)
...
s~(n) = s(n) + q(2) + q(3) + ... + q(n)

A huge cumulative quantization error is possible!

MM-74: Predictive Coding (cont.)

An approach that avoids cumulative quantization errors is to send the quantized prediction error instead of the quantized difference between two adjacent samples. The same predictor circuit runs at the sender's and the receiver's side, and the sender forms the prediction error against the reconstructed signal:

prediction error:           ds(n)  = s(n) - s^(n)'
quantized prediction error: ds(n)' = Q[ds(n)] = ds(n) + q(n)
reconstructed signal:       s(n)'  = s^(n)' + ds(n)' = s^(n)' + [s(n) - s^(n)'] + q(n) = s(n) + q(n)

No cumulative quantization error!

MM-75: Predictive Coding (cont.) - Predictor

The simplest predictor is the linear predictor, which is based on the Taylor expansion of continuous functions. For the continuous version f(t) of the audio/video signal and a sampling interval dt:

f(t + dt) = sum_{k>=0} f^(k)(t) * dt^k / k! = f(t) + f'(t)*dt + O(dt^2)

If dt is small enough, then

f(t + dt) ~ f(t) + f'(t)*dt ~ f(t) + [f(t) - f(t - dt)] = 2 f(t) - f(t - dt)

If s(n+1), s(n) and s(n-1) represent samples of f(t + dt), f(t) and f(t - dt), then:

s(n+1) ~ 2 s(n) - s(n-1), or equivalently s(n) ~ 2 s(n-1) - s(n-2)

MM-76: Predictive Coding (cont.) - Predictor (cont.)

In the general case, the prediction of s(n) can be based on several past samples of s:

s^(n) = a1*s(n-1) + a2*s(n-2) + a3*s(n-3) + ... + aK*s(n-K)

This can be applied to the quantized (reconstructed) samples as well:

s^(n)' = a1*s(n-1)' + a2*s(n-2)' + a3*s(n-3)' + ... + aK*s(n-K)'

In order to simplify the implementation and reduce the memory requirements, the predictor can be based on only one past sample:

s^(n)' = s(n-1)'

Such a simple predictor is good enough for stationary pictures, i.e. pictures in which the details do not move but may change their YUV values.
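A sketch of the closed-loop DPCM idea from MM-74, using the trivial one-sample predictor s^(n)' = s(n-1)' and a coarse uniform quantizer; the test signal is arbitrary. It shows numerically that the reconstruction error stays within one quantization step instead of accumulating as on MM-73.

    import numpy as np

    def q(x, step=4):                      # coarse mid-tread quantizer
        return step * np.round(x / step)

    def dpcm_encode_decode(s, step=4):
        recon = np.empty_like(s, dtype=float)
        pred = 0.0                         # predictor memory (same at both ends)
        for n, sample in enumerate(s):
            e_q = q(sample - pred, step)   # quantized prediction error (what is sent)
            recon[n] = pred + e_q          # receiver's reconstruction s(n)'
            pred = recon[n]                # predict from the reconstructed value
        return recon

    s = np.cumsum(np.random.randn(1000)) * 3.0
    r = dpcm_encode_decode(s)
    print(np.abs(s - r).max())             # bounded by step/2: no drift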

" ( )' Q - ( s n ) ( ) ˆ( )' n s n s ˆ( s n)' Predictor "s n ( )' s n DPCM The encoding method described in previous slides is called Differential Pulse Code Modulation (DPCM). "s n ( )' s n ˆ( )' s( n)' Predictor Predictive Coding (cont.) ( s n ) s( n) ' DPCM DCPM is often used to encode audio signals and DC values after block DCT "s ( n) ' " Inverse ( )' DPCM s n MM-77 The predictive encoder can also be applied to blocks of pixels. Let A n ( ) be the block in n-th frame. Then: Sender A n A( n) A n Predictive Coding (cont.) ( ) " ( )' Q - ˆ( )' Aˆ( n )' Predictor A p q [ (, )] "A n A n ( )' VLC VLC - Predictor s memory contains block A( n )' Receiver "A n ( )' Frame n- A n ˆ( )' Predictor Frame n A( n)' MM-7 ( )' A n " lock A( n )' A( n)' Picture Higher compression ratio and lower bandwidth can be achieved if we differentially encode DCT of blocks instead of blocks themselves: A n "A( n) ( ) - ˆ( )' A n Predictive Coding (cont.) " n DCT ( ) Predictor DCT of prediction error Q " ( n) ' IDCT VLC MM-79 MM- Interframe Prediction with Motion Compensation The predictive encoder is the simplest interframe prediction algorithm. It works well in cases of stationary pictures. In case where the picture details are moving a more sophisticated predictor is required. The more sophisticated predictors use techniques based on motion estimation and motion compensation. There are two basic approaches in prediction with motion compensation: " Forward interframe prediction " -directional interframe prediction Higher compression ratio and lower bandwidth can be achieved if we differentially encode DCT of blocks instead of blocks themselves:

MM-81: Forward Interframe Prediction

The simplest case: a stationary picture. The details of the picture are not moving; only pixel values change from frame to frame. The illustration ignores the DCT for the sake of simplicity (with the DCT a much better spatial compression can be achieved).

At the sender: the source (camera) generates frame (n-1) and frame (n). The sender subtracts the locally predicted frame (n-1) from frame (n), using co-sited blocks of pixels, and sends the difference to the receiver.

MM-82: Forward Interframe Prediction (cont.)

At the receiver: the difference received from the sender is added to the reconstructed frame (n-1) to produce the reconstructed frame (n), which is displayed.

Comments:
1. Most pixels of the difference are 0.
2. The differences of the details that have changed take pixel values in a smaller range and can be encoded with fewer bits.

MM-83: Forward Interframe Prediction (cont.)

A less simple case: a picture detail moves by translation, so the corresponding blocks are no longer co-sited. A motion estimator at the sender finds the motion vector v = (v_x, v_y), and the motion compensator predicts the block from the shifted position in the previous frame:

A^(x, y; n)' = A^(x - v_x, y - v_y; n-1)'

The sender transmits the difference dA(x, y; n)' together with the motion vector v.

MM-84: Forward Interframe Prediction (cont.)

At the receiver: the received difference is added to the motion-compensated prediction, built from the reconstructed frame (n-1) using the received motion vector v, to reconstruct frame (n) for display.

MM-85: Forward Interframe Prediction (cont.)

An even less simple case: a picture detail moves by translation and rotation. That is still OK: there will simply be more error details, and more blocks will be affected. In this example, the blocks marked 1, 2 and 3 each contain some error, which is taken care of by the motion-compensated prediction algorithm.

MM-86: Macroblocks

- What happens if the errors mentioned on the previous slide become too big? Then sending the difference images could require more information, i.e. more bits, than sending the image itself (i.e. its quantized DCT).
- What if different parts of the image move in different directions and by different amounts?

Both problems are addressed by introducing macroblocks (MB). The entire image is divided into non-overlapping blocks, and each MB has its own motion vector. The size of the MB is a compromise between decreasing the prediction errors and decreasing the amount of additional side information (the motion vectors) that must be sent along with the difference images. MPEG has chosen 16 x 16 pixel MBs.

MM-87 / MM-88: Macroblocks (cont.)

[Figure: the image (frame) is covered by an MB grid (MB_11, MB_12, MB_13, ..., MB_21, ...); each MB in a frame carries its own motion vector.]

An MB consists of 4 adjacent 8 x 8 luminance blocks (Y0, Y1, Y2, Y3: 16 x 16 luma pixels carrying the brightness information, used for motion compensation) and two 8 x 8 chrominance blocks (Cb and Cr, carrying the color information) that cover the entire MB. For this reason the chrominance pixels are downsampled in the 4:2:0 format: for every 4 luma pixels there is one chroma pixel of each kind. The 8 x 8 pixel blocks are the units used for DCT and quantization.

MM-89: Motion Estimation

The motion estimation algorithm is not standardized by MPEG or other standards; it is left to the implementer (vendor). A straightforward approach is an exhaustive search within some neighborhood of the MB, called the search area: for a given MB M(p,q) in frame (n), find the best matching block within the search area R(M) in frame (n-1). The best matching block has the size of the MB, but does not have to be aligned with the MB grid. Assumption: sub-pixel motions can be ignored.

MM-90: Motion Estimation (cont.)

In order to define what the best matching block is, a matching criterion has to be established. A commonly used measure of similarity is the mean-square error, computed for a single candidate vector v = (x, y):

MSE(v) = (1/|M|) * sum_{(i,j) in M} [ S_n(i,j) - S_{n-1}(i+x, j+y) ]^2

where S_n is the current image (frame n), S_{n-1} is the previous image, and the sum runs over the pixel indices of the MB. Only the luminance pixel values (the Y component) are used in this equation. The values of v = (x, y) are chosen so that the displaced block stays inside the search area.

MM-91: Motion Estimation (cont.)

The best match is the one with the minimal MSE:

v_o = (x_o, y_o) = argmin_{v in V(M)} MSE(v), with MSE(v_o) = min_v MSE(v)

where v_o = (x_o, y_o) is the optimal value of v, the estimated motion vector. There are several cases:

- MSE(0) = 0: the MB did not move and did not change its luminance.
- MSE(v_o) = 0: the MB moved by v_o but did not change its luminance.
- MSE(v_o) <= maxerr: the MB moved by v_o and changed its luminance.
- MSE(v_o) > maxerr: the error is too big; no matching block could be found, so intra-coding is used for this MB.

The best search strategy is left to the implementer and is still a matter of research. The algorithm above is called the exhaustive block-matching algorithm. There are rapid ("intelligent") search strategies, which are not considered here.

MM-92: Bi-directional Interframe Prediction

What happens if the moving object covers or uncovers other objects? In the case shown on the slide, forward interframe prediction would put too much information into the difference frame. A better result would be achieved if the prediction were based on the future frame instead of the past frame (n-1).
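A sketch of the exhaustive block-matching search with the MSE criterion above (NumPy; the block size and search range are illustrative parameters, and cur/prev stand for the luminance planes of frames n and n-1).

    import numpy as np

    def mse(block, cand):
        return float(np.mean((block - cand) ** 2))

    def motion_estimate(cur, prev, top, left, bsize=16, srange=7):
        # Exhaustive search: best (dy, dx) for the MB at (top, left) in cur.
        block = cur[top:top + bsize, left:left + bsize].astype(float)
        best, best_v = np.inf, (0, 0)
        for dy in range(-srange, srange + 1):
            for dx in range(-srange, srange + 1):
                r, c = top + dy, left + dx
                if r < 0 or c < 0 or r + bsize > prev.shape[0] or c + bsize > prev.shape[1]:
                    continue                      # candidate leaves the frame
                cand = prev[r:r + bsize, c:c + bsize].astype(float)
                err = mse(block, cand)
                if err < best:
                    best, best_v = err, (dy, dx)
        return best_v, best                       # motion vector and its MSE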

MM-93: Bi-directional Interframe Prediction (cont.)

For that reason MPEG specifies bi-directional motion compensation, which allows a choice between forward and backward prediction, or a combination of both. This handles covered and uncovered regions much better. The current frame is predicted from the previous frame (forward prediction, motion vector v_f) and from the future frame (backward prediction, motion vector v_b), combined:

A^(x, y; n)' = a_b * A^(x + v_bx, y + v_by; n+1)' + a_f * A^(x - v_fx, y - v_fy; n-1)'

with weighting coefficients a_b ~ 0.5 and a_f ~ 0.5. The weighting coefficients are side information that has to be sent to the receiver along with the motion vectors.

MM-94: Bi-directional Interframe Prediction (cont.)

How can we make a prediction based on future frames? By sending the future frame before the actual frame. This requires reordering the frames, which are naturally generated in display order, into a rearranged transmit order. For that purpose MPEG defines three kinds of frames:

- I-frames (intra frames): frames encoded entirely in intra mode. Only spatial compression, no temporal compression (like JPEG); the most information to transmit. An I-frame is used as a reference for predictive and bi-directional coding, and is inserted periodically to stop error propagation (video conferencing, however, cannot afford periodic refresh).
- P-frames (predictive frames): frames used for forward prediction. They enable both spatial and temporal compression: less information to transfer, higher compression ratio. To decode a P-frame, the previous I-frame or P-frame is needed; the corresponding motion vectors are transmitted along with the P-frame. A P-frame needs three to five times fewer bits than an I-frame.
- B-frames (bi-directional frames): used for bi-directional decoding. They contain the picture difference relative to the previous or the future P-frame or I-frame (whichever needs fewer bits to transfer), together with motion vectors and side information. A B-frame has about 50% of the bits of a P-frame. B-frames provide better quality of the decoded pictures, but require reordering in transmission, which introduces delay, so they cannot be used in real-time streams (interactive video, video conferencing).

MM-95: Group of Pictures

The three types of frames are typically arranged in a sequence called a group of pictures (GOP):

Display order:  I1 B2 B3 P4 B5 B6 P7 B8 B9 P10 ...
Transmit order: 1  4  2  3  7  5  6  10  8  9  ...

In forward prediction each P-frame depends on the preceding I- or P-frame; in backward prediction each B-frame additionally depends on the following I- or P-frame. Parameters of this GOP structure: N = 18 (number of pictures in the GOP), M = 3 (spacing of the P-frames).

MM-96: Group of Pictures (cont.)

Comments:
- MPEG-2 does not insist on a fixed GOP structure. For example, after a shot change a new GOP sequence can be started, which requires an I-frame to start with.
- Transmitted I/P/B frames are sometimes called anchor frames: they carry the new information needed for the reconstruction of the true frames. The previously reconstructed I- and P-frames are called reference frames. Frames to be reconstructed are called target frames.
- I-frames provide random access points into the bit stream of a GOP.
- A frame can contain different (and contradictory) motions. Therefore not all MBs of a P-frame are predicted: if the prediction error is too big, the coder can decide to intra-code that MB. Similarly, the MBs of a B-frame can be forward, backward or combined predicted, or intra-coded.
- B-frames can be discarded without error propagation (temporal scalability).

[Figure: GOP dependency diagram, I ... P ... P ... P, with arrows indicating forward and backward prediction.]

MM-97: Group of Pictures (cont.) - Compression Ratio of a GOP

Average compression factor of a single picture:

CF = b / c

where b is the number of bits for the uncompressed image and c is the number of bits for the compressed image.

Average compression factor of a GOP:

CF_GOP = (n_I + n_P + n_B) * b / (n_I*c_I + n_P*c_P + n_B*c_B) = (n_I + n_P + n_B) / (n_I/CF_I + n_P/CF_P + n_B/CF_B)

where n_i is the number of pictures of type i (i = I, P, B), c_i is the number of bits for a compressed image of type i, and CF_i is the average compression factor for pictures of type i.

Example: given the GOP IBBPBBPBBPBBPBBPBB (n_I = 1, n_P = 5, n_B = 12), with typical average compression factors per picture type CF_I = 10, CF_P = 25 and CF_B = 80, the average compression factor of the GOP is:

CF_GOP = 18 / (1/10 + 5/25 + 12/80) = 18 / 0.45 = 40
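The GOP arithmetic is easy to verify; a tiny sketch checking the worked example above (the per-type CF values are the illustrative ones used there):

    # Average compression factor of a GOP from per-type compression factors.
    def cf_gop(counts, cfs):
        n_total = sum(counts.values())
        return n_total / sum(counts[t] / cfs[t] for t in counts)

    gop = "IBBPBBPBBPBBPBBPBB"
    counts = {t: gop.count(t) for t in "IPB"}          # {'I': 1, 'P': 5, 'B': 12}
    print(cf_gop(counts, {'I': 10, 'P': 25, 'B': 80})) # 40.0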