MP/BME 574 Application Solutions

1. (2 pts) a) From first principles in class, we expect the entropy of the checkerboard image to be 1 bit since this is the bit depth of the image and the frequency of each value (0 and 1) is identical. Recall also that a flat image (i.e. constant valued at all pixels) will have entropy 0 since there is no information content by definition.

A = [1 0; 0 1];
I = repmat(A,32,32);
size(I)
ans = 64 64
entropy(I)
ans = 1
Flat = ones(64);
entropy(Flat)
ans = 0

b) We can evaluate the compression ratio for run length encoding (RLE) by evaluating the fundamental repeated matrix R = [1 0; 0 1]. The RLE result for this 2 X 2 matrix, written as (value, run length) pairs, is:

RLE = [1 1 0 1; 0 1 1 1], with compression ratio (CR) = 4/8. RLE actually increases the size of the compressed image. For a 64 X 64 image the R block is repeated in a 32 X 32 array, so the CR will be the same for the larger image as it is for the block.

c) There is no compression gained by variable length (Huffman) encoding either, because the bit depth of the information in the checkerboard is 1 bit, which is equal to the entropy.

d) However, significant compression can be attained by using a block encoding scheme, which is natural to this problem since we know the checkerboard image is generated from this approach. Using the R block: R = [1 0; 0 1] contains 4 bits, which is repeated in a 32 X 32 array. Therefore, the compression ratio is:

CR = (64 X 64 X 1 bit)/(2 X 5 bits + 4 bits) = 4096/14 = 293.

2. (2 pts) Repeating problem 1 for the square test image. a) Now the entropy is expected to be substantially lower because the bit depth of the image is much larger than the expected information content. The entropy is 0.16 bits/pixel, much lower than the bit depth of the image, which is 7 bits. Recall that the uniform intensity of the values within the square is 128.
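The entropy and RLE arithmetic above can be checked numerically. The following is a small sketch in Python/NumPy rather than the course's MATLAB; the helper names `entropy_bits` and `rle_row` are mine, not from the solution:

```python
import numpy as np

def entropy_bits(img):
    """Shannon entropy in bits/pixel from the gray-level histogram."""
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return float((p * np.log2(1 / p)).sum())

# 64 x 64 checkerboard built from the 2 x 2 block A = [1 0; 0 1]
A = np.array([[1, 0], [0, 1]])
I = np.tile(A, (32, 32))
flat = np.ones((64, 64))

print(entropy_bits(I))     # 1.0 (two equiprobable values)
print(entropy_bits(flat))  # 0.0 (no information content)

def rle_row(row):
    """Run-length encode one row as (value, run length) pairs."""
    runs, start = [], 0
    for k in range(1, len(row) + 1):
        if k == len(row) or row[k] != row[start]:
            runs.append((row[start], k - start))
            start = k
    return runs

# RLE of the 2 x 2 block: every run has length 1, so 4 values become
# 8 numbers and CR = 4/8 -- RLE expands the checkerboard.
rle = [rle_row(r) for r in A]
n_rle = sum(2 * len(runs) for runs in rle)
print(4 / n_rle)  # 0.5

# Block encoding: 64*64*1 bit raw vs. 2 size fields (5 bits each) + 4-bit block
print(round((64 * 64 * 1) / (2 * 5 + 4)))  # 293
```

The two-symbol entropy comes out to exactly 1 bit/pixel for the checkerboard and 0 for the flat image, matching the hand calculation.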

Square = zeros(64);
Insert = 128.*ones(8);
Square(29:36,29:36) = Insert;
entropy(Square)
ans = 0.16

b) For RLE, rows 1-28 and 37-64 are entirely zeros. If we allow breaks in the RLE at each image row, then each of these rows collapses to [0 64]. The compressed image by rows is then:

[0 64] X 28 rows = 7 bits X 28 = 196 bits
[0 28; 128 8; 0 28] X 8 rows = 22 bits X 8 = 176 bits
[0 64] X 28 rows = 7 bits X 28 = 196 bits
Total = 568 bits
CR = (64 X 64 X 7)/568 = 50.5

c) There is a bit depth of 7 for this image and an entropy of 0.12, so variable length encoding should significantly compress this image.

Pixel value | Probability of pixel intensity, i | Huffman code
0           | 0.9844                            | 0
128         | 0.01563                           | 1
Bit depth = 7 bits/pixel; Entropy = 0.16 bit/pixel; CR = 7.

Note that the primary compression for this image comes from assigning the 7 bit value for the squares to a 1 bit number. Any further savings is limited because the bit depth cannot be less than 1 bit/pixel. In a more complex image the CR could potentially be much higher.

d) The square image is less amenable to a block encoding scheme than the checkerboard. I opted to use the example I showed in class for the block encoding of a binary border. The problem is simplified by assuming a priori knowledge that the image contains a square. The fundamental boundary block (the 2 X 2 corner of the border) is then:

R = [1 1; 1 0];

Now the square boundary can be encoded using 4 replicates of R combined with 2 reflections and 1 rotation. The block itself is 4 bits.
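As a cross-check on parts 2(a)-(b), here is a sketch in Python/NumPy (the course code is MATLAB). The two-symbol theoretical entropy evaluates to about 0.12 bits/pixel; the per-row bit costs (7 bits for an all-zero row, 22 bits for a row crossing the square) are taken from the solution's accounting, not derived independently:

```python
import numpy as np

# 64 x 64 zero image with an 8 x 8 square of intensity 128
# (the MATLAB Square/Insert construction from the solution)
square = np.zeros((64, 64))
square[28:36, 28:36] = 128

# Two-symbol entropy: p(128) = 64/4096, p(0) = 4032/4096
_, counts = np.unique(square, return_counts=True)
p = counts / counts.sum()
H = float((p * np.log2(1 / p)).sum())
print(H)  # ~0.116 bits/pixel, far below the 7-bit depth

# Row-wise RLE bit count using the solution's per-row costs
total_bits = 28 * 7 + 8 * 22 + 28 * 7
print(total_bits)                # 568
print(64 * 64 * 7 / total_bits)  # ~50.5
```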

[Figure: reflections and rotation of the boundary block R, omitted.]

Total = 4 bits X 4 + 1 bit (reflection) + 1 bit (rotation) + 1 bit (reflection) + 6 bits X 2 (matrix size) = 31 bits.
CR = 4096/31 = 132.

3. (5 pts) These are all lossless compression methods that exploit coding and interpixel redundancies.

4. (25 pts) a. [Figure: test image; its discrete cosine transform (log scale), with the DC or zero frequency term in the corner and axes running to kxmax and kymax; and its DFT k-space image (log scale), with the DC or zero frequency term at the center and axes running from -kxmax to kxmax and -kymax to kymax.]
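The bit accounting for the border scheme is simple enough to verify directly. This sketch assumes a 1-bit flag for each reflection and rotation and two 6-bit matrix-size fields, as described above:

```python
# Bit accounting for the border block scheme in part 2(d):
# four copies of the 4-bit block R, one bit each for two
# reflections and one rotation, plus two 6-bit size fields.
block_bits = 4 * 4
flag_bits = 1 + 1 + 1
size_bits = 6 * 2
total = block_bits + flag_bits + size_bits
print(total)                        # 31
print(round(64 * 64 * 1 / total))   # 132
```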

b. [Figure: thresholded DFT coefficients and the inverse DFT reconstruction; thresholded DCT coefficients and the inverse DCT reconstruction.]

c. The primary artifacts are blurring and ringing. The blurring results from the loss of detail coefficients representing higher frequency harmonics in the images. The ringing is due to the sharp truncation applied when thresholding the values in transform space. Note the slightly crisper depiction of structures for the thresholded discrete cosine transform coefficients relative to the DFT result.

d. The relatively clear depiction of the basic structures in the images in part c is a manifestation of psychovisual redundancy. Lossy compression methods such as JPEG exploit this type of information redundancy.

DCT and DFT Thresholding Code:

I = phantom;
FI = fftshift(fft2(fftshift(I)));
figure; imagesc(abs(FI)); axis('image'); colormap('gray')
figure; imagesc(log(abs(FI))); axis('image'); colormap('gray')
idx = find(abs(FI) > .274*max(max(FI)));
size(idx)
FI_thresh = zeros(256);
FI_thresh(idx) = FI(idx);
figure; imagesc(log(abs(FI_thresh))); axis('image'); colormap('gray')
figure; imagesc(abs(fftshift(ifft2(fftshift(FI_thresh))))); axis('image'); colormap('gray')

CI = dct2(I);
figure; imagesc(log(abs(CI))); axis('image'); colormap('gray')
idx = find(abs(CI) > .224*max(max(CI)));
size(idx)
CI_thresh = zeros(256);
CI_thresh(idx) = CI(idx);
figure; imagesc(log(abs(CI_thresh))); axis('image'); colormap('gray')
figure; imagesc(abs(idct2(CI_thresh))); axis('image'); colormap('gray')
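The MATLAB thresholding code can be mirrored in Python/NumPy. The sketch below substitutes a synthetic two-cosine image for the Shepp-Logan phantom and uses an illustrative 10% threshold, so its numbers are not those of the solution; it only demonstrates the principle that keeping the few large transform coefficients reconstructs the image:

```python
import numpy as np

# Toy 64 x 64 "image" made of two exact-frequency cosines
y, x = np.mgrid[0:64, 0:64]
img = np.cos(2 * np.pi * x / 32) + 0.5 * np.cos(2 * np.pi * y / 16)

# Forward 2-D DFT; zero out everything below 10% of the peak magnitude
F = np.fft.fft2(img)
mask = np.abs(F) > 0.1 * np.abs(F).max()
F_thresh = np.where(mask, F, 0)

# Inverse DFT of the thresholded spectrum recovers the image almost
# exactly, because only four coefficients were nonzero to begin with
recon = np.real(np.fft.ifft2(F_thresh))
err = float(np.abs(recon - img).max())
print(int(mask.sum()), err)
```

With a natural image like the phantom, the same thresholding discards many small high-frequency coefficients, producing the blurring and ringing discussed in part c.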

MP/BME 574 Application Solutions figure;imagesc(log(abs(fi_thresh)));axis('image');colormap('gray') figure;imagesc(abs(fftshift(ifft2(fftshift(fi_thresh)))));axis('image');colormap('gray') CI_thresh= zeros(256); CI = dct2(fftshift(i)); figure;imagesc(log(abs(ci)));axis('image');colormap('gray') [i,j] =find(abs(ci)>.224*max(max(ci))) size(j) CI_thresh(i,j) = CI(i,j); figure;imagesc(log(abs(ci_thresh)));axis('image');colormap('gray') CI_thresh = zeros(256); CI_thresh(i,j) = CI(i,j); figure;imagesc(log(abs(ci_thresh)));axis('image');colormap('gray') figure;imagesc(abs(idct2(ci_thresh)));axis('image');colormap('gray') 5. (3 pts) Problem 5: The key to the : compressed images is as follows: A = Discrete Fourier Transform (DFT), B = Wavelet encoding, C = JPEG encoding, D = Original Uncompressed Image, and E Discrete Cosine Transform (DCT). The major evidence allowing one to determine the compression type used includes: blurring indicated by the line profiles at left and visible block encoding in the magnified Image C. Clearly A and E are either DFT or DCT due to the blurring in the line profiles at left. There is less blurring in Image E suggesting it is DCT because this transform is known to more efficiently encode spatial frequency information (needs fewer coefficients). This leaves images B-D as Wavelet, Original or JPEG. Since C shows block encoding, it is JPEG. B also shows compression artifacts, so it is wavelet and D is the original. 3 Line Profiles for Test Images A-F Location of profile cuts across boundary and lesion in upper left corner of image A B C D E 2 4 6 8 2 A B C D E

Display Code Used for Problem 5:

[Px,Py,P,xi,yi] = improfile;
Pa = improfile(A,xi,yi);
Pb = improfile(B,xi,yi);
Pc = improfile(C,xi,yi);
Pd = improfile(D,xi,yi);
Pe = improfile(E,xi,yi);
x = 1:length(Pa);
figure; plot(x,Pa,x,Pb,x,Pc(:,1,1),x,Pd,x,Pe)
