
LOCO: a new method for lossless and near-lossless image compression. Nimrod Peleg, Update: Dec. 2003

Credit: Suggested by HP Labs, 1996. Developed by M.J. Weinberger, G. Seroussi and G. Sapiro, "LOCO-I: A Low Complexity, Context-Based, Lossless Image Compression Algorithm", Proceedings of the IEEE Data Compression Conference, pp. 140-149, 1996.

General Block Diagram (figure): Digital Source Image Data → Context Modeling → Prediction → Error Encoding → Compressed Data, with a separate Run Mode path.

Context-Based Algorithm. Efficient coding needs a statistical model that gives a good prediction of each pixel value. The statistical distribution of a pixel is predicted from previously coded pixels. The best distribution for coding is the minimum-entropy distribution; to approach it, each pixel is assigned to a context (728 context types in LOCO).

Prediction. After the context is determined, a simple prediction is performed. The predictor is NOT context dependent; it is the same for all contexts. It is a simple and effective edge detector based on 3 local gradients. The prediction error is the difference between the predicted and the actual pixel value.

Error Coding. Basic assumption: statistical information is available for each context. A geometric distribution is assumed for the prediction error, and LOCO keeps 2 parameters per context: average and decay factor. LOCO uses Golomb-Rice entropy coding, which is optimal for a two-sided geometric distribution (a one-parameter, Huffman-like method).

The Golomb-Rice technique is simple, low-complexity and efficient: Huffman-like complexity with almost arithmetic-coding efficiency.

Run Mode. When all local gradients are zero, we can assume a smooth area in the image. In this situation we skip the prediction and prediction-error coding stages, and stay in run mode until the condition x = b is no longer true. This saves many bits on long runs. (Pixel template: c, a, d on the row above; b to the left of the current pixel x.)
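
A rough sketch of the run test in Python, for illustration only (the helper below is hypothetical; the standard additionally codes the run length itself with adaptive codes):

```python
def run_length(row, start, b):
    """Count consecutive pixels equal to the left neighbour b,
    i.e. how long the condition x == b stays true.
    Illustrative only, not the standard's run-coding procedure."""
    n = 0
    while start + n < len(row) and row[start + n] == b:
        n += 1
    return n
```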

General Scheme (figure): single frames enter Prediction; the prediction error goes through Entropy Coding into the compressed bitstream sent to the network; Context Modeling supplies the statistical parameters.

Context Modeling. Every pixel is assigned to a context based on locally calculated parameters. Why? Prediction errors of pixels in the same context have the same statistical distribution.

The assumption: the prediction error follows a two-sided geometric distribution. (Figure: prediction-error distribution, peaked at 0.) The decay factor is context dependent, and there is a systematic bias for every context.

Calculating the context number. There is a tradeoff between the number of contexts and the quality of the statistics estimated for each context. In the LOCO algorithm there are 728 contexts, created from the local gradients: D1 = d - a, D2 = a - c, D3 = c - b. (Pixel template: c, a, d on the row above; b to the left of the current pixel x.)

Prediction. The LOCO predictor is causal: it uses only nearby pixels that have already been scanned (c, a, d on the row above and b to the left of x). Px = min(a, b) if c >= max(a, b); max(a, b) if c <= min(a, b); a + b - c otherwise.
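
A minimal Python sketch of this median edge detector (for illustration, not the reference implementation):

```python
def loco_predict(a, b, c):
    """LOCO fixed predictor (median edge detector).
    a = pixel above, b = pixel to the left, c = pixel above-left,
    following the slide's template (c a d / b x)."""
    if c >= max(a, b):
        return min(a, b)   # edge detected: take the smaller neighbour
    if c <= min(a, b):
        return max(a, b)   # edge in the other direction: take the larger
    return a + b - c       # smooth area: planar interpolation

# The case discussed in the example further below: when c equals a,
# the predictor returns b, detecting an edge a plain average would blur.
assert loco_predict(100, 50, 100) == 50
```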

The principle (figure: the values of a, b, c and x plotted as a surface): x should lie on the plane created by a, b and c.

For example (figure: bar plot of the values of b, c and a): here the lowest value x can get is the value of b (from what we already know), exactly what the predictor formula above returns.

Why not take the average, x = (a + b + c) / 3? (Figure.)

Let's look at this case: when c is very close to a, the average gives x_avg ≈ (2a + b) / 3. But the LOCO predictor gives Px = b. (Figure.)

Prediction Example (figures): Original Image and Predicted Image (SNR = 25.3 dB).

The Quantizer. The gradients are quantized by a 9-level quantizer with thresholds -T3, -T2, -T1, 0, T1, T2, T3, giving the quantized values (Q1, Q2, Q3), which together determine the context number Q.
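
A Python sketch of how such a quantizer and context index could look. The numeric thresholds are the JPEG-LS defaults for 8-bit data, an assumption here; the slide only names them T1..T3:

```python
def quantize_gradient(d, t1=3, t2=7, t3=21):
    """Map a local gradient D to one of 9 levels {-4 .. 4}.
    t1..t3 play the role of the slide's T1..T3 (JPEG-LS 8-bit
    defaults, assumed)."""
    sign = -1 if d < 0 else 1
    d = abs(d)
    if d == 0:
        q = 0
    elif d < t1:
        q = 1
    elif d < t2:
        q = 2
    else:
        q = 3 if d < t3 else 4
    return sign * q

def context_number(d1, d2, d3):
    """One way to pack (Q1, Q2, Q3) into a single index: 9^3 = 729
    triples, of which (0, 0, 0) is handled by run mode, matching the
    slide's count of 728 coding contexts."""
    q1, q2, q3 = (quantize_gradient(d) for d in (d1, d2, d3))
    return (q1 + 4) * 81 + (q2 + 4) * 9 + (q3 + 4)
```

(The standard itself further merges sign-symmetric triples; the 728 count above follows the slide.)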

Entropy Coding. Golomb-Rice is an optimal entropy coder for a geometric distribution. Golomb-Rice codes do not require tables (vs. Huffman) and have one parameter only: k. For an 8-bit value, the most significant 8 - k bits are coded in unary and the k least significant bits in binary.

Golomb-Rice coding example. Take the number x = 14 (binary 0000 1110) with k = 2. Then x / 2^k = 3 with remainder 2. The unary part codes the quotient 11_2 (= 3_10) as 000 followed by a separating 1; the remainder is sent as the k binary bits 10. The resulting bitstream: 000 1 10.
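
A small Python encoder reproducing the slide's example (an illustrative sketch):

```python
def golomb_rice_encode(x, k):
    """Golomb-Rice code of a non-negative integer x with parameter k:
    the quotient x >> k in unary (zeros, then a separating 1),
    followed by the k low bits of x in binary."""
    q = x >> k
    bits = "0" * q + "1"                              # unary part + separator
    if k:
        bits += format(x & ((1 << k) - 1), f"0{k}b")  # k-bit remainder
    return bits

# The slide's example: x = 14, k = 2 -> quotient 3, remainder 2
assert golomb_rice_encode(14, 2) == "000110"          # '000' + '1' + '10'
```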

Context parameters: k is calculated from the sum of error magnitudes; the bias is calculated from the sum of errors.
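
A plausible reading of this rule is the well-known JPEG-LS parameter choice, sketched below under the assumption that A accumulates error magnitudes and N counts the samples seen in the context:

```python
def rice_parameter(A, N):
    """Smallest k such that N * 2^k >= A, where A is the running sum
    of prediction-error magnitudes in the context and N the number of
    samples coded there (the JPEG-LS rule, assumed to match the slide)."""
    k = 0
    while (N << k) < A:
        k += 1
    return k
```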

Run mode: a special mode which allows efficient compression of a sequence of pixels with the same value. (Figure: example of a constant run.)

Lossy Mode. In lossy (near-lossless) mode, the maximal allowed restoration error per pixel is Near (Near = 0 gives lossless coding). The method: the prediction error is quantized with a step of 2·Near + 1, i.e. QuantizedError ≈ PredictionError / (2·Near + 1).
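
A Python sketch of this quantizer using the JPEG-LS near-lossless formulas (an assumption; the slide gives only the step size):

```python
def quantize_error(err, near):
    """Quantize the prediction error with step 2*near + 1, so the
    restored error differs from the true one by at most `near`."""
    if err > 0:
        return (near + err) // (2 * near + 1)
    return -((near - err) // (2 * near + 1))

def dequantize_error(q, near):
    """Restore the (approximate) prediction error."""
    return q * (2 * near + 1)

# near = 0 reduces to lossless coding; for near = 3 every error is
# restored to within +/- 3:
assert all(abs(e - dequantize_error(quantize_error(e, 3), 3)) <= 3
           for e in range(-255, 256))
```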

And the profit is: a narrower geometric distribution, hence shorter Golomb-Rice codes, and more time spent in run mode. (Figure: the narrower prediction-error distribution.)

(Figures: compression ratio vs. Near, and SNR [dB] vs. Near, for Near values from 0 to 30.)

Explanation (last graphs: Lena for different Near values): at low values, the compression ratio grows linearly with Near, but at high Near values there is almost no further improvement in the compression ratio. The reason is that LOCO loses its contexts at high Near values, and therefore loses its efficiency. The quality of the compressed image at high values is also very poor, as the SNR graph shows.

Compression with different Near values (figures): Near = 3 and Near = 10.

Benchmark Comparison. Download free from UBC.