Multimedia Information Systems


Multimedia Information Systems. Samson Cheung, EE 639, Fall 2004. Lecture 3 & 4: Color, Video, and Fundamentals of Data Compression

Color Science. Light is an electromagnetic wave; its color is characterized by the wavelength content of the light. Visible range: 400 nm to 700 nm. A signal (e.g., daylight) is described by its spectral power distribution E(λ).

Human Vision Sensors. Rods: night vision. Cones: red, green, and blue, with response curves B_R(λ), B_G(λ), B_B(λ). The cone responses to a light E(λ) are
q_R = ∫ E(λ) B_R(λ) dλ,  q_G = ∫ E(λ) B_G(λ) dλ,  q_B = ∫ E(λ) B_B(λ) dλ,
and the overall response is V = q_R + q_G + q_B.

Image Formation. A surface with spectral reflectance S(λ), illuminated by E(λ), produces the cone responses
q_R = ∫ E(λ) S(λ) B_R(λ) dλ,  q_G = ∫ E(λ) S(λ) B_G(λ) dλ,  q_B = ∫ E(λ) S(λ) B_B(λ) dλ.

Gamma Correction. The light emitted from a CRT is roughly proportional to the drive voltage raised to a power; this power is called gamma, γ ≈ 2.2. Not corrected: R' = R^γ. Gamma correction pre-distorts the signal with the inverse power, R' = R^(1/γ), so that the displayed output (R^(1/γ))^γ = R is linear.
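The correction can be sketched in a few lines of Python (a minimal illustration; only the value γ ≈ 2.2 comes from the slide, the function names are ours):

```python
# Minimal sketch of CRT gamma correction, assuming gamma = 2.2 as on the slide.
GAMMA = 2.2

def crt_response(voltage):
    """CRT light output is roughly the voltage raised to the power gamma."""
    return voltage ** GAMMA

def gamma_correct(intensity):
    """Pre-distort with the inverse power so the displayed result is linear."""
    return intensity ** (1.0 / GAMMA)

# Displaying a gamma-corrected value recovers the original intensity.
r = 0.5
displayed = crt_response(gamma_correct(r))
```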

Different Color Coordinates. Different numerical representations facilitate different applications. Coordinate systems: RGB (monitor); CMYK (printer); CIE chromaticity XYZ; L*a*b* and L*u*v*; HSI, HSV, HLS; YIQ, YUV, YCbCr.

CIE 1931 Chromaticity Diagram. Plots the gamut of visible colors in normalized-intensity (chromaticity) coordinates.

L*u*v* and L*a*b*. Nonlinear transformations of the XYZ tristimulus space, designed so that geometric distances correspond more uniformly to perceptual distances between colors seen under the same reference illuminant. L*u*v* is used for additive color, L*a*b* for subtractive color.

HSI, HLS, HSV: Intuitive Color. HLS = Hue, Lightness, Saturation; HSI = Hue, Saturation, Intensity; HSV = Hue, Saturation, Value. All are non-linear transforms of RGB.

YUV, YCbCr, YIQ. Linear transforms of RGB that achieve energy compaction. YUV: analog color space for television. YIQ: analog color space for NTSC composite; YUV rotated by 33° to further band-limit the Q component. YCbCr: discrete and scaled version of YUV.
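A sketch of the RGB-to-YUV linear transform, using the standard ITU-R BT.601 coefficients (the exact coefficients are an assumption; the slide only states that the transform is linear):

```python
# Sketch of the YUV linear transform, assuming BT.601 luma coefficients.
def rgb_to_yuv(r, g, b):
    """Map linear RGB in [0, 1] to luma Y and chroma U, V."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: weighted sum of R, G, B
    u = 0.492 * (b - y)                     # blue color difference
    v = 0.877 * (r - y)                     # red color difference
    return y, u, v

# Pure white carries only luma: Y = 1, U = V = 0.
y, u, v = rgb_to_yuv(1.0, 1.0, 1.0)
```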

Analog Video Signals. Types of analog signals: component (3 wires: R, G, B); S-video (2 wires: 1 for luminance, 1 for chrominance); composite (1 wire: everything frequency-multiplexed together, as in the NTSC frequency spectrum).

Analog Television. All analog television standards use interlaced signals.

Digital Video Signals. Chroma subsampling formats: 4:2:2 (chrominance subsampled by 2 horizontally) and 4:2:0 (chrominance subsampled by 2 both horizontally and vertically).

HDTV. Larger frame size; non-interlaced or progressive ("P") format; 16:9 aspect ratio. Mandated as a standard by the FCC by 2006.

Fundamentals of Compression. Two types: lossless (files, high-quality media) and lossy (trades quality for bitrate). Information content is measured by the average entropy of an n-symbol source X:
H(X) = -Σ_{i=1}^{n} p_i log2 p_i.
Shannon's Source Coding Theorem: N i.i.d. random variables, each with entropy H(X), can be compressed into N·H(X) bits with negligible loss as N → ∞.
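The entropy formula above is easy to evaluate directly (a minimal sketch; the example distributions are ours):

```python
import math

def entropy(probs):
    """Average information content H(X) = -sum p_i * log2(p_i), in bits/symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform 4-symbol source needs 2 bits/symbol; a certain outcome needs 0.
h_uniform = entropy([0.25, 0.25, 0.25, 0.25])
h_certain = entropy([1.0])
```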

Lossless Compression. Memoryless sources: approximate the entropy using variable-length codes, i.e., Huffman codes and arithmetic codes. Sources with memory: exploit redundancy via run-length codes, library-based codes, context-based methods, transform coding, linear prediction, and others.

Huffman Code. In a binary Huffman tree, leaf nodes represent symbols and each branch represents a 0 or 1; the traversal from root to leaf gives the codeword.

Huffman Code (cont.). Advantages: prefix property, so there is no decoding ambiguity; easy to decode by traversing down the tree. Disadvantages: each symbol must use at least one bit (mitigated by blocking symbols together); changing statistics require updating the Huffman tree to reflect them.
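The tree construction can be sketched with a min-heap of subtrees (a minimal illustration, not the lecture's implementation; the example probabilities are ours):

```python
import heapq

def huffman_codes(probs):
    """Build Huffman codewords by repeatedly merging the two lightest subtrees."""
    # Each heap entry: (weight, unique tiebreak, {symbol: partial codeword}).
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        # Prepend 0 to one subtree's codes and 1 to the other's.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Dyadic probabilities give code lengths equal to -log2(p): 1, 2, 3, 3 bits.
codes = huffman_codes({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
```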

Arithmetic Coding. Represents the entire sequence of symbols as a single real number. Each symbol is mapped to an interval whose length equals its probability of occurrence: higher probability → longer interval → lower precision needed → fewer bits. High-level steps:
1. Select the interval that corresponds to the symbol.
2. Send leading bits once they are known.
3. Renormalize the selected interval.
4. Partition it again according to the symbol probabilities.
5. Go back to step 1.

Example of Arithmetic Coding.

Arithmetic Coding (cont.). Advantages: asymptotically optimal; can use less than one bit to code a symbol; easier than Huffman to update statistics, since no tree has to be rebuilt. Disadvantages: higher implementation cost (normalization requires multiplication, except in the binary case); very sensitive to transmission errors.
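The core interval-narrowing step can be sketched as follows (a toy illustration with floats; the renormalization and bit-output steps of a real coder are omitted, and the example model is ours):

```python
def narrow(symbols, model):
    """Narrow [low, high) by each symbol's sub-interval; the final interval
    identifies the whole sequence (renormalization/bit output omitted)."""
    low, high = 0.0, 1.0
    for s in symbols:
        p_lo, p_hi = model[s]          # symbol's slice of the unit interval
        span = high - low
        low, high = low + span * p_lo, low + span * p_hi
    return low, high

model = {"a": (0.0, 0.8), "b": (0.8, 1.0)}   # P(a) = 0.8, P(b) = 0.2
low, high = narrow("aab", model)
# interval width = 0.8 * 0.8 * 0.2 = 0.128, so about -log2(0.128) ~ 3 bits suffice
```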

Run-Length Code. Commonly used in image and video coding: encode the data as (run, level) pairs, where run counts positions up to and including the non-zero level.
Input:  0 0 0 0 1 0 0 0 0 0 5 0 0 4 0 0 0 0 8 0 0 9 0
Output: (5,1) (6,5) (3,4) (5,8) (3,9) (1,0)
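A short sketch reproducing the slide's convention (run = number of positions consumed, including the level; a trailing zero run is emitted as (run, 0)):

```python
def run_level_encode(seq):
    """Emit (run, level) pairs; run counts positions up to and including the level."""
    pairs, run = [], 0
    for v in seq:
        run += 1
        if v != 0:
            pairs.append((run, v))
            run = 0
    if run:
        pairs.append((run, 0))   # trailing zeros
    return pairs

data = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 5, 0, 0, 4, 0, 0, 0, 0, 8, 0, 0, 9, 0]
pairs = run_level_encode(data)   # the slide's example sequence
```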

Library-Based Codes. Use fixed-length codewords to represent variable-length strings of symbols/characters that commonly occur together. The encoder and decoder build up the same dictionary dynamically while receiving the data: the codec places longer and longer repeated entries into the dictionary, then emits the code for an element, rather than the string itself, once the element is already in the dictionary. LZ77: gzip. LZW: UNIX compress, GIF, V.42 bis.
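The dictionary-growing idea can be sketched with a minimal LZW encoder (an illustration under the usual 256-entry initial byte dictionary, not any particular tool's implementation):

```python
def lzw_encode(data):
    """Minimal LZW sketch: grow a dictionary of seen strings, emit their codes."""
    dictionary = {chr(i): i for i in range(256)}   # start with single bytes
    w, out = "", []
    for ch in data:
        if w + ch in dictionary:
            w += ch                                # keep extending the match
        else:
            out.append(dictionary[w])              # emit code for longest match
            dictionary[w + ch] = len(dictionary)   # add the new, longer string
            w = ch
    if w:
        out.append(dictionary[w])
    return out

# Repetitive input compresses: 8 characters become 5 codes.
codes = lzw_encode("abababab")
```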

Context-Based Prediction. One probability distribution is maintained for each context of previously coded neighboring pixels. (The slide shows example 4-pixel binary contexts 0000, 0001, 0010, each with its own probability table, e.g., P(0)=0.4 / P(1)=0.6 for one context and P(0)=0.9 / P(1)=0.1 for another.) Even a simple context requires LOTS of tables: 81 (= 3^4, a four-pixel context over three values). Two solutions: model the contexts, e.g., context-tree weighting (Willems, Shtarkov, Tjalkens, 1995), or hand-pick them, as in H.264.
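The per-context tables can be sketched as adaptive symbol counts (a toy of our own, not the slide's exact tables; Laplace smoothing is our assumption):

```python
from collections import defaultdict

# One Laplace-smoothed count table [count_of_0, count_of_1] per context tuple.
counts = defaultdict(lambda: [1, 1])

def update(context, symbol):
    """Record one observation of `symbol` under `context`."""
    counts[context][symbol] += 1

def prob(context, symbol):
    """Current estimate of P(symbol | context)."""
    c = counts[context]
    return c[symbol] / (c[0] + c[1])

update((0, 0, 0, 0), 1)
p = prob((0, 0, 0, 0), 1)   # 2/3 after one observed 1 under this context
# A four-pixel context over three values already needs 3**4 = 81 such tables.
```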

Transform Coding. Need to code X_1, X_2, …, X_N. Goal: devise a transform T: (X_1, X_2, …, X_N) → (Y_1, Y_2, …, Y_N) such that Y_1, Y_2, …, Y_N are easier to compress.
1. Burrows-Wheeler Transform.
2. Decorrelating ("whitening") transform: transform the X's into independent Y's. Since
H(X_1) + H(X_2) + … + H(X_N) ≥ H(X_1, X_2, …, X_N),
with equality if and only if the variables are all independent, independent components can be coded separately at no loss in efficiency.

Burrows-Wheeler Transform. Form all cyclic rotations of the input, sort them, and output the last column:

BURROWS        BURROWS
URROWSB        OWSBURR
RROWSBU        ROWSBUR
ROWSBUR  sort  RROWSBU
OWSBURR        SBURROW
WSBURRO        URROWSB
SBURROW        WSBURRO

Last column: S R R U W B O. A move-to-front (MTF) pass then maps the highly repetitive output to small indices, which are compressed with run-length and Huffman coding. Used in bzip and bzip2; the best text compression to date. Burrows and Wheeler, 1994; a readable account is given by Mark Nelson, Dr. Dobb's Journal, 1996.
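The two stages above can be sketched directly (a minimal illustration without the end-of-string marker a practical BWT needs for inversion):

```python
def bwt(s):
    """Burrows-Wheeler transform: sort all cyclic rotations, keep the last column."""
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def mtf(s, alphabet):
    """Move-to-front: recently seen symbols get small indices (runs become 0s)."""
    table = sorted(alphabet)
    out = []
    for ch in s:
        i = table.index(ch)
        out.append(i)
        table.insert(0, table.pop(i))   # move the symbol to the front
    return out

last_col = bwt("BURROWS")               # "SRRUWBO", as on the slide
indices = mtf(last_col, set(last_col))  # the repeated R yields an index of 0
```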

Linear Prediction. Predict each pixel from its causal neighbors and code the residual
d(x, y) = 4·I(x, y) − I(x−1, y) − I(x, y−1) − I(x−1, y−1);
the kernel [−1 −1; −1 4] and all its shifted versions form the transform. Many other transforms exist (DCT, wavelet), but these are typically not exactly lossless because of their floating-point representation.
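The residual computation can be sketched as follows (using the kernel as written on the slide; the example image is ours):

```python
# Sketch of the slide's prediction residual with the causal kernel
#   [[-1, -1],
#    [-1,  4]]
# d(x, y) = 4*I(x, y) - I(x-1, y) - I(x, y-1) - I(x-1, y-1).
# Integer arithmetic keeps the transform exactly invertible, hence lossless.

def residual(img, x, y):
    return 4 * img[y][x] - img[y][x - 1] - img[y - 1][x] - img[y - 1][x - 1]

img = [
    [10, 10, 10],
    [10, 12, 10],
    [10, 10, 10],
]
d = residual(img, 1, 1)   # 4*12 - 10 - 10 - 10 = 18
```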