Digital Image Processing Lectures 25 & 26


Lectures 25 & 26, Professor, Department of Electrical and Computer Engineering, Colorado State University, Spring 2015

Area 4: Image Encoding and Compression
Goal: To exploit the redundancies in the image in order to reduce the number of bits needed to represent an image or a sequence of images (e.g., video).
Applications:
Image Transmission: e.g., HDTV, 3DTV, satellite/military communication, and teleconferencing.
Image Storage: e.g., document storage & retrieval, medical image archives, weather maps, and geological surveys.
Categories of Techniques:
1 Pixel Encoding: PCM, run-length encoding, bit-plane, Huffman encoding, entropy encoding
2 Predictive Encoding: Delta modulation, 2-D DPCM, inter-frame methods
3 Transform-Based Encoding: DCT-based, WT-based, zonal encoding
4 Others: Vector quantization (clustering), neural network-based, hybrid encoding

Encoding System
There are three steps involved in any encoding system (Fig. 1):
a. Mapping: Removes redundancies in the images. Should be invertible.
b. Quantization: Mapped values are quantized using uniform or Lloyd-Max quantizers.
c. Coding: Optimal codewords are assigned to the quantized values.
Figure 1: A Typical Image Encoding System.
However, before we discuss several types of encoding systems, we need to review some basic results from information theory.

Measure of Information & Entropy
Assume there is a source (e.g., an image) that generates a discrete set of independent messages (e.g., grey levels), $r_k$, with probability $P_k$, $k \in [1, L]$, with $L$ being the number of messages (or number of levels).
Figure 2: Source and message.
Then, the information associated with $r_k$ is
$$I_k = -\log_2 P_k \ \text{bits}$$
Clearly, $\sum_{k=1}^{L} P_k = 1$. For equally likely levels (messages), the information can be transmitted as an $n$-bit binary number:
$$P_k = \frac{1}{L} = \frac{1}{2^n} \implies I_k = n \ \text{bits}$$
For images, the $P_k$'s are obtained from the histogram.

As an example, consider a binary image with $r_0$ = Black, $P_0 = 1$ and $r_1$ = White, $P_1 = 0$; then $I_k = 0$, i.e., no information.
Entropy: Average information generated by the source
$$H = \sum_{k=1}^{L} P_k I_k = -\sum_{k=1}^{L} P_k \log_2 P_k \quad \text{Avg. bits/pixel}$$
Entropy also represents a measure of redundancy. Let $L = 4$, $P_1 = P_2 = P_3 = 0$ and $P_4 = 1$; then $H = 0$, i.e., the most certain case and thus maximum redundancy. Now, let $L = 4$, $P_1 = P_2 = P_3 = P_4 = 1/4$; then $H = 2$, i.e., the most uncertain case and hence the least redundant. Maximum entropy occurs when the levels are equally likely, $P_k = \frac{1}{L}$ for all $k \in [1, L]$; then
$$H_{max} = -\sum_{k=1}^{L} \frac{1}{L} \log_2 \frac{1}{L} = \log_2 L$$
Thus,
$$0 \le H \le H_{max}$$
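As an added illustration (a minimal sketch, not part of the original notes), the entropy of an 8-bit image can be estimated from its normalized histogram; the synthetic image array below is only a placeholder:

```python
import numpy as np

def entropy_bits_per_pixel(img, levels=256):
    """Estimate H = -sum_k P_k log2 P_k from the image histogram."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    P = hist / hist.sum()       # P_k, normalized histogram
    P = P[P > 0]                # drop empty bins (0*log 0 = 0)
    return -np.sum(P * np.log2(P))

# Example: a synthetic 8-bit image (placeholder for a real image)
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
print(f"H = {entropy_bits_per_pixel(img):.3f} bits/pixel")  # close to 8 for uniform levels
```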

Entropy and Coding
Entropy represents the lower bound on the number of bits required to code the coder inputs, i.e., for a set of coder inputs $v_k$, $k \in [1, L]$, with probabilities $P_k$, it is guaranteed that it is not possible to code them using fewer than $H$ bits on the average. If we design a code with codewords $C_k$, $k \in [1, L]$, with corresponding word lengths $\beta_k$, the average number of bits required by the coder is
$$R(L) = \sum_{k=1}^{L} \beta_k P_k$$
Figure 3: Coder producing codewords $C_k$'s with lengths $\beta_k$'s.
Shannon's Entropy Coding Theorem (1949): The average length $R(L)$ is bounded by
$$H \le R(L) \le H + \epsilon, \quad \epsilon = 1/L$$

That is, it is possible to encode without distortion a source with entropy $H$ using an average of $H + \epsilon$ bits/message; alternatively, it is possible to encode the source with distortion using $H$ bits/message. The optimality of the coder depends on how close $R(L)$ is to $H$.
Example: Let $L = 2$, $P_1 = p$ and $P_2 = 1 - p$, $0 \le p \le 1$. Thus, the entropy is
$$H = -p \log_2 p - (1-p) \log_2 (1-p)$$
The figure above shows $H$ as a function of $p$. Clearly, since the source is binary, we can use 1 bit/pixel. This corresponds to $H_{max} = 1$ at $p = 1/2$. However, if $p = 1/8$, then $H \approx 0.54$, i.e., there is more redundancy, and it is possible to find a coding scheme that uses only about 0.54 bits/pixel.
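A quick numeric check of the binary entropy formula above (an added illustration; the value quoted for p = 1/8 comes from this expression):

```python
import numpy as np

def binary_entropy(p):
    """H(p) = -p log2 p - (1-p) log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

print(binary_entropy(0.5))    # 1.0 bit/pixel (maximum)
print(binary_entropy(0.125))  # about 0.544 bits/pixel
```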

Remark: The maximum achievable compression is
$$C = \frac{\text{Average bit rate of original raw data } (B)}{\text{Average bit rate of encoded data } (R(L))}$$
Thus
$$\frac{B}{H + \epsilon} \le C \le \frac{B}{H}, \quad \epsilon = 1/L$$
Since a certain amount of distortion is inevitable in any image transmission, it is necessary to find the minimum number of bits needed to encode the image while allowing a certain level of distortion.
Rate Distortion Function
Let $D$ be a fixed distortion between the actual values $x$ and the reproduced values $\hat{x}$. Then the question is: allowing distortion $D$, what is the minimum number of bits required to encode the data? If we consider $x$ to be a Gaussian r.v. with variance $\sigma_x^2$, the distortion is
$$D = E[(x - \hat{x})^2]$$
The rate distortion function is defined by

$$R_D = \begin{cases} \frac{1}{2} \log_2 \frac{\sigma_x^2}{D} & 0 \le D \le \sigma_x^2 \\ 0 & D > \sigma_x^2 \end{cases} = \max\left[0, \ \frac{1}{2} \log_2 \frac{\sigma_x^2}{D}\right]$$
At maximum distortion, $D \ge \sigma_x^2$, $R_D = 0$, i.e., no information needs to be transmitted.
Figure 4: Rate Distortion Function $R_D$ versus $D$.
$R_D$ gives the number of bits required for distortion $D$. Since $R_D$ represents the number of bits/pixel,
$$N = 2^{R_D} = \left(\frac{\sigma_x^2}{D}\right)^{1/2}$$
Here $D$ is interpreted as the quantization noise variance. This variance can be minimized using the Lloyd-Max quantizer. In the transform domain we can assume that $x$ is white (e.g., due to the KL transform).
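The following short Python sketch (an added illustration, not from the notes) evaluates the Gaussian rate distortion function above over a few assumed distortion values:

```python
import numpy as np

def gaussian_rate_distortion(D, sigma2_x):
    """R_D = max(0, 0.5*log2(sigma_x^2 / D)) for a Gaussian source."""
    D = np.asarray(D, dtype=float)
    return np.maximum(0.0, 0.5 * np.log2(sigma2_x / D))

sigma2_x = 4.0                       # assumed source variance
D = np.array([0.25, 1.0, 4.0, 8.0])  # candidate distortion levels
for d, r in zip(D, gaussian_rate_distortion(D, sigma2_x)):
    print(f"D = {d:5.2f}  ->  R_D = {r:.2f} bits/pixel")
# D >= sigma_x^2 gives R_D = 0: no bits need to be sent.
```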

Pixel-Based Encoding
Encode each pixel, ignoring inter-pixel dependencies. Among these methods are:
1. Entropy Coding: Every block of an image is entropy encoded based upon the $P_k$'s within the block. This produces a variable-length code for each block, depending on the spatial activity within the blocks.
2. Run-Length Encoding: Scan the image horizontally or vertically and, while scanning, map each group of pixels with the same intensity into a pair $(g_i, l_i)$, where $g_i$ is the intensity and $l_i$ is the length of the run. This method can also be used for detecting edges and boundaries of an object. It is mostly used for images with a small number of gray levels and is not effective for highly textured images.

Example 1: Consider the following 8 x 8 image.
4 4 4 4 4 4 4 0
4 5 5 5 5 5 4 0
4 5 6 6 6 5 4 0
4 5 6 7 6 5 4 0
4 5 6 6 6 5 4 0
4 5 5 5 5 5 4 0
4 4 4 4 4 4 4 0
4 4 4 4 4 4 4 0
The run-length codes using the vertical (continuous top-down) scanning mode are:
(4,9) (5,5) (4,3) (5,1) (6,3) (5,1) (4,3) (5,1) (6,1) (7,1) (6,1) (5,1) (4,3) (5,1) (6,3) (5,1) (4,3) (5,5) (4,10) (0,8)
i.e., a total of 20 pairs = 40 numbers. Horizontal scanning would lead to 34 pairs = 68 numbers, which is more than the actual number of pixels (i.e., 64).
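As an added illustration (a minimal sketch, not part of the notes), the run-length pairs above can be reproduced in a few lines of Python; the column-major raveling implements the continuous top-down vertical scan:

```python
import numpy as np

def run_length_encode(seq):
    """Map a 1-D sequence into (value, run_length) pairs."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, l) for v, l in runs]

img = np.array([
    [4, 4, 4, 4, 4, 4, 4, 0],
    [4, 5, 5, 5, 5, 5, 4, 0],
    [4, 5, 6, 6, 6, 5, 4, 0],
    [4, 5, 6, 7, 6, 5, 4, 0],
    [4, 5, 6, 6, 6, 5, 4, 0],
    [4, 5, 5, 5, 5, 5, 4, 0],
    [4, 4, 4, 4, 4, 4, 4, 0],
    [4, 4, 4, 4, 4, 4, 4, 0],
])

vertical = run_length_encode(img.ravel(order="F"))    # column-by-column scan
horizontal = run_length_encode(img.ravel(order="C"))  # row-by-row scan
print(len(vertical), vertical)   # 20 pairs, as in the example
print(len(horizontal))           # 34 pairs
```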

Example 2: Let the transition probabilities for run-length encoding of a binary image (0: black and 1: white) be $p_0 = P(0|1)$ and $p_1 = P(1|0)$. Assuming all runs are independent, find (a) the average run lengths, (b) the entropies of the white and black runs, and (c) the compression ratio.
Solution: A run of length $l \ge 1$ can be represented by a geometric r.v. $X_i$ with PMF
$$P(X_i = l) = p_i (1 - p_i)^{l-1}, \quad i = 0, 1$$
which corresponds to the first occurrence of a 0 or a 1 after $l$ independent trials. (Note that $1 - P(0|1) = P(1|1)$ and $1 - P(1|0) = P(0|0)$.) Thus, for the average we have
$$\mu_{X_i} = \sum_{l=1}^{\infty} l\, P(X_i = l) = \sum_{l=1}^{\infty} l\, p_i (1 - p_i)^{l-1}$$
which, using the series $\sum_{n=1}^{\infty} n a^{n-1} = \frac{1}{(1-a)^2}$, reduces to $\mu_{X_i} = \frac{1}{p_i}$. The entropy is given by
$$H_{X_i} = -\sum_{l=1}^{\infty} P(X_i = l) \log_2 P(X_i = l) = -\sum_{l=1}^{\infty} p_i (1 - p_i)^{l-1} \left[\log_2 p_i + (l-1)\log_2(1 - p_i)\right]$$

Using the same series formula, we get
$$H_{X_i} = -\frac{1}{p_i}\left[p_i \log_2 p_i + (1 - p_i)\log_2(1 - p_i)\right]$$
The achievable compression ratio is
$$C = \frac{H_{X_0} + H_{X_1}}{\mu_{X_0} + \mu_{X_1}} = \frac{H_{X_0} P_0}{\mu_{X_0}} + \frac{H_{X_1} P_1}{\mu_{X_1}}$$
where $P_i = \frac{p_i}{p_0 + p_1}$ are the a priori probabilities of black and white pixels.
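A small numerical sketch (an added illustration; the parameter value is an assumption) that checks the geometric-run mean and entropy formulas against direct summation:

```python
import numpy as np

def run_stats(p, lmax=500):
    """Mean and entropy (bits) of a geometric run-length r.v. with parameter p."""
    l = np.arange(1, lmax + 1)
    pmf = p * (1 - p) ** (l - 1)        # P(X = l), truncated at lmax
    mean = np.sum(l * pmf)              # should approach 1/p
    nz = pmf[pmf > 0]
    entropy = -np.sum(nz * np.log2(nz))  # should approach the closed form
    return mean, entropy

p = 0.1
mean, H = run_stats(p)
H_closed = -(1 / p) * (p * np.log2(p) + (1 - p) * np.log2(1 - p))
print(mean, 1 / p)   # ~10.0 vs 10.0
print(H, H_closed)   # both ~4.69 bits per run
```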

3. Huffman Encoding: The algorithm consists of the following steps.
1 Arrange the symbols with probabilities $P_k$ in decreasing order and consider them as the leaf nodes of a tree.
2 Merge the two nodes with the smallest probabilities to form a new node whose probability is the sum of the two merged nodes. Go to Step 1 and repeat until only two nodes are left (the root's branches).
3 Arbitrarily assign 1's and 0's to each pair of branches merging into a node.
4 Read sequentially from the root node to each leaf node to form the associated code for each symbol.
Example 3: For the same image as in the previous example, which requires 3 bits/pixel using standard PCM, we can arrange the table on the next page.

Gray level | # occurrences | P_k   | C_k  | β_k | P_k β_k | -P_k log2 P_k
0          | 8             | 0.125 | 0000 | 4   | 0.5     | 0.375
1          | 0             | 0     | -    | 0   | -       | -
2          | 0             | 0     | -    | 0   | -       | -
3          | 0             | 0     | -    | 0   | -       | -
4          | 31            | 0.484 | 1    | 1   | 0.484   | 0.507
5          | 16            | 0.25  | 01   | 2   | 0.5     | 0.5
6          | 8             | 0.125 | 001  | 3   | 0.375   | 0.375
7          | 1             | 0.016 | 0001 | 4   | 0.064   | 0.095
Total      | 64            | 1     |      |     | R       | H
The codewords $C_k$ are obtained by constructing the binary tree as in Fig. 5.
Figure 5: Tree Structure for Huffman Encoding.

Note that in this case we have
$$R = \sum_{k=1}^{8} \beta_k P_k = 1.923 \ \text{bits/pixel}$$
$$H = -\sum_{k=1}^{8} P_k \log_2 P_k = 1.852 \ \text{bits/pixel}$$
Thus,
$$1.852 \le R = 1.923 \le H + \frac{1}{L} = 1.977$$
i.e., an average of about 2 bits/pixel (instead of 3 bits/pixel using PCM) can be used to code the image. However, the drawback of the standard Huffman encoding method is that the codes have variable lengths.
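Example 3's numbers can be checked with a short Huffman sketch in Python (an added illustration, not the lecture's code); heapq implements the repeated merging of the two least-probable nodes. The bit labels differ from Fig. 5 since Step 3 assigns them arbitrarily, but the codeword lengths match:

```python
import heapq
from math import log2

def huffman_code(probs):
    """Build a Huffman code {symbol: bitstring} from {symbol: probability}."""
    # Each heap entry: (probability, tie-breaker, {symbol: partial code})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # two least probable nodes
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c0.items()}
        merged.update({s: "1" + code for s, code in c1.items()})
        heapq.heappush(heap, (p0 + p1, counter, merged))
        counter += 1
    return heap[0][2]

# Gray-level probabilities from Example 3 (levels 1-3 never occur)
probs = {0: 8/64, 4: 31/64, 5: 16/64, 6: 8/64, 7: 1/64}
code = huffman_code(probs)
print({s: len(c) for s, c in code.items()})     # lengths 4, 1, 2, 3, 4 as in the table
R = sum(p * len(code[s]) for s, p in probs.items())
H = -sum(p * log2(p) for p in probs.values())
print(f"R = {R:.3f}, H = {H:.3f} bits/pixel")   # R ~ 1.92, H ~ 1.85
```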

Predictive Encoding
Idea: Remove the mutual redundancy among successive pixels in a region of support (ROS) or neighborhood and encode only the new information. This method is based upon linear prediction. Let us start with 1-D linear predictors. An $N$th order linear prediction of $x(n)$ based on $N$ previous samples is generated using a 1-D autoregressive (AR) model:
$$\hat{x}(n) = a_1 x(n-1) + a_2 x(n-2) + \cdots + a_N x(n-N)$$
The $a_i$'s are model coefficients determined from sample signals. Now, instead of encoding $x(n)$, the prediction error
$$e(n) = x(n) - \hat{x}(n)$$
is encoded, as it requires a substantially smaller number of bits. Then, at the receiver, we reconstruct $x(n)$ using the previously encoded values $x(n-k)$ and the encoded error signal, i.e.,
$$x(n) = \hat{x}(n) + e(n)$$
This method is also referred to as differential PCM (DPCM).

Minimum Variance Prediction
The predictor $\hat{x}(n) = \sum_{i=1}^{N} a_i x(n-i)$ is the best $N$th order linear mean-squared predictor of $x(n)$ when it minimizes the MSE
$$\epsilon = E\left[\left(x(n) - \hat{x}(n)\right)^2\right]$$
Minimizing with respect to the $a_k$'s results in the following orthogonality property:
$$\frac{\partial \epsilon}{\partial a_k} = -2E\left[\left(x(n) - \hat{x}(n)\right)x(n-k)\right] = 0, \quad 1 \le k \le N$$
which leads to the normal equations
$$r_{xx}(k) - \sum_{i=1}^{N} a_i r_{xx}(k-i) = \sigma_e^2 \delta(k), \quad 0 \le k \le N$$
where $r_{xx}(k)$ is the autocorrelation of the data $x(n)$ and $\sigma_e^2$ is the variance of the driving process $e(n)$.

Plugging in the different values of $k \in [0, N]$ gives the AR Yule-Walker equations for solving for the $a_i$'s and $\sigma_e^2$, i.e.,
$$\begin{bmatrix} r_{xx}(0) & r_{xx}(1) & \cdots & r_{xx}(N) \\ r_{xx}(1) & r_{xx}(0) & \cdots & r_{xx}(N-1) \\ \vdots & \vdots & \ddots & \vdots \\ r_{xx}(N) & r_{xx}(N-1) & \cdots & r_{xx}(0) \end{bmatrix} \begin{bmatrix} 1 \\ -a_1 \\ \vdots \\ -a_N \end{bmatrix} = \begin{bmatrix} \sigma_e^2 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \quad (1)$$
Note that the correlation matrix $R_x$ in this case is both Toeplitz and Hermitian. The solution to this system of linear equations is given by
$$\sigma_e^2 = \frac{1}{[R_x^{-1}]_{1,1}}, \qquad a_i = -\sigma_e^2 [R_x^{-1}]_{i+1,1}$$
where $[R_x^{-1}]_{i,j}$ is the $(i,j)$th element of the matrix $R_x^{-1}$.
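A minimal Python sketch (added illustration) of equation (1): estimate the autocorrelation of a 1-D signal, build the Toeplitz correlation matrix, and solve for the predictor coefficients and prediction-error variance. The AR(2) test signal and all names are assumptions used only to exercise the code:

```python
import numpy as np

def ar_predictor(x, N):
    """Estimate r_xx(0..N) and solve the Yule-Walker (normal) equations
    for the predictor coefficients a_1..a_N and error variance sigma_e^2."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    M = len(x)
    r = np.array([np.dot(x[:M - k], x[k:]) / M for k in range(N + 1)])
    # N x N Toeplitz correlation matrix (the reduced, equivalent form of Eq. (1))
    R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
    a = np.linalg.solve(R, r[1:])        # R a = [r(1), ..., r(N)]^T
    sigma_e2 = r[0] - np.dot(a, r[1:])   # prediction-error variance
    return a, sigma_e2

# Synthetic AR(2) test signal (assumed example): x(n) = 0.75 x(n-1) - 0.5 x(n-2) + e(n)
rng = np.random.default_rng(0)
e = rng.normal(0.0, 1.0, 20000)
x = np.zeros_like(e)
for n in range(2, len(x)):
    x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + e[n]

a, sigma_e2 = ar_predictor(x, N=2)
print(a, sigma_e2)   # approximately [0.75, -0.5] and 1.0
```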

In the 2-D case, an AR model with a non-symmetric half-plane (NSHP) ROS is used. This ROS is shown in Fig. 6 when the image is scanned from left-to-right and top-to-bottom.
Figure 6: A 1st Order 2-D AR Model with NSHP ROS.
For a 1st order 2-D AR model,
$$x(m,n) = a_{01} x(m, n-1) + a_{11} x(m-1, n-1) + a_{10} x(m-1, n) + a_{1,-1} x(m-1, n+1) + e(m,n)$$
where the $a_{i,j}$'s are model coefficients. Then, the best linear prediction of $x(m,n)$ is
$$\hat{x}(m,n) = a_{01} x(m, n-1) + a_{11} x(m-1, n-1) + a_{10} x(m-1, n) + a_{1,-1} x(m-1, n+1)$$

Note that at every pixel, four previously scanned pixels are needed to generate the predicted value $\hat{x}(m,n)$. Fig. 7 shows the pixels that need to be stored in the global state vector for this 1st order predictor.
Figure 7: Global State Vector.
Assuming that the reproduced (quantized) values up to $(m, n-1)$ are available, we generate
$$\hat{x}_o(m,n) = a_{01} x_o(m, n-1) + a_{11} x_o(m-1, n-1) + a_{10} x_o(m-1, n) + a_{1,-1} x_o(m-1, n+1)$$
Then, the prediction error
$$e(m,n) := x(m,n) - \hat{x}_o(m,n) \quad \text{(quantizer input)}$$
is applied to the quantizer. The quantized value $e_o(m,n)$ is encoded and transmitted. It is also used to generate the reproduced value
$$x_o(m,n) = e_o(m,n) + \hat{x}_o(m,n) \quad \text{(reproduced value)}$$

The entire process at the transmitter and receiver is depicted in Fig. 8. Clearly, it is assumed that the model coefficients are available at the receiver.
Figure 8: Block Diagram of 2-D Predictive Encoding System.
It is interesting to note that
$$q(m,n) := x(m,n) - x_o(m,n) \ \text{(PCM quantization error)} = e(m,n) - e_o(m,n) \ \text{(DPCM quantization error)}$$
However, for the same quantization error $q(m,n)$, DPCM requires far fewer bits.
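The following Python sketch (an added illustration, not the lecture's implementation) runs the 1st order NSHP DPCM loop above with a simple uniform quantizer; the coefficients, step size, and test image are assumed values for demonstration only:

```python
import numpy as np

def dpcm_codec(img, coeffs=(0.95, -0.9, 0.95, 0.0), step=8.0):
    """2-D DPCM with a 1st-order NSHP predictor and a uniform quantizer.
    coeffs = (a01, a11, a10, a1m1); returns reproduced image and quantized errors."""
    a01, a11, a10, a1m1 = coeffs
    x = img.astype(float)
    M, N = x.shape
    xo = np.zeros((M, N))   # reproduced values x_o(m, n)
    eo = np.zeros((M, N))   # quantized prediction errors e_o(m, n)
    for m in range(M):
        for n in range(N):
            # Predictor uses reproduced values; out-of-range neighbors treated as 0
            xhat = 0.0
            if n > 0:               xhat += a01  * xo[m, n - 1]
            if m > 0 and n > 0:     xhat += a11  * xo[m - 1, n - 1]
            if m > 0:               xhat += a10  * xo[m - 1, n]
            if m > 0 and n < N - 1: xhat += a1m1 * xo[m - 1, n + 1]
            e = x[m, n] - xhat                    # prediction error (quantizer input)
            eo[m, n] = step * np.round(e / step)  # uniform quantization
            xo[m, n] = xhat + eo[m, n]            # reproduced value (decoder does the same)
    return xo, eo

rng = np.random.default_rng(1)
img = np.cumsum(rng.integers(-2, 3, size=(64, 64)), axis=1) + 128  # smooth-ish test image
xo, eo = dpcm_codec(img)
print("max |x - x_o| =", np.max(np.abs(img - xo)))   # bounded by step/2
print("quantized error variance vs. pixel variance:", eo.var(), img.var())
```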

Performance Analysis of DPCM
For straight PCM, the rate distortion function is
$$R_{PCM} = \frac{1}{2} \log_2 \left(\sigma_x^2 / \sigma_q^2\right) \ \text{bits/pixel}$$
i.e., the number of bits required per pixel in the presence of a particular distortion $\sigma_q^2 = E[q^2(m,n)]$. For DPCM, the rate distortion function is
$$R_{DPCM} = \frac{1}{2} \log_2 \left(\sigma_e^2 / \sigma_q^2\right) \ \text{bits/pixel}$$
for the same distortion. Clearly, $\sigma_e^2 \le \sigma_x^2 \implies R_{DPCM} \le R_{PCM}$. The bit reduction of DPCM over PCM is
$$R_{PCM} - R_{DPCM} = \frac{1}{2} \log_2 \left(\sigma_x^2 / \sigma_e^2\right) \approx \frac{1}{0.6} \log_{10} \left(\sigma_x^2 / \sigma_e^2\right)$$
The achieved compression depends on the inter-pixel redundancy; for an image with no redundancy (a random image), $\sigma_x^2 = \sigma_e^2$ and $R_{PCM} = R_{DPCM}$.

Transform-Based Encoding
Idea: Reduce redundancy by applying a unitary transformation to blocks of the image; the redundancy-removed coefficients/features are then encoded. The process of transform-based encoding, or block quantization, is depicted in Fig. 9. The image is first partitioned into non-overlapping blocks. Each block is then unitary transformed, and the principal coefficients are quantized and encoded.
Figure 9: Transform-Based Encoding Process.
Q1: What are the best mapping matrices A and B, so that maximum redundancy removal is achieved while the distortion due to discarding coefficients is minimized?
Q2: What is the best quantizer that gives minimum quantization distortion?

Theorem: Let $x$ be a random vector representing blocks of an image and $y$ be the transformed version $y = Ax$ with components $y(k)$ that are mutually uncorrelated. These components are quantized to $y_o$ and then encoded and transmitted. At the receiver, the decoded values are reconstructed using matrix $B$, i.e., $x_o = B y_o$. The objective is to find the optimum matrices $A$ and $B$ and the optimum quantizer such that $D = E[\|x - x_o\|^2]$ is minimized.
1. The optimum matrices are $A = \Psi^t$ and $B = \Psi$, i.e., the KL transform pair.
2. The optimum quantizer is the Lloyd-Max quantizer.
Proof: See Jain's book (Ch. 11).
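A compact Python sketch of block-wise KL transform coding (an added illustration under simplifying assumptions: the KL basis is estimated from the blocks of the image itself, and low-variance coefficients are simply dropped rather than optimally quantized; all names are mine):

```python
import numpy as np

def klt_block_code(img, B=8, keep=16):
    """Block-wise KL transform coding: keep the 'keep' largest-variance
    coefficients per BxB block and reconstruct with the transposed basis."""
    M, N = img.shape
    blocks = (img.astype(float)
                 .reshape(M // B, B, N // B, B)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, B * B))             # one row per block
    mean = blocks.mean(axis=0)
    C = np.cov(blocks - mean, rowvar=False)       # block covariance
    eigval, Psi = np.linalg.eigh(C)               # KL basis (columns of Psi)
    order = np.argsort(eigval)[::-1]              # sort by decreasing variance
    A = Psi[:, order].T                           # forward transform, A = Psi^t
    y = (blocks - mean) @ A.T                     # transform coefficients
    y[:, keep:] = 0.0                             # zonal coefficient selection
    rec = y @ A + mean                            # inverse transform, B = Psi
    return (rec.reshape(M // B, N // B, B, B)
               .transpose(0, 2, 1, 3)
               .reshape(M, N))

rng = np.random.default_rng(2)
img = np.cumsum(np.cumsum(rng.normal(0, 4, (256, 256)), axis=0), axis=1)  # correlated test image
rec = klt_block_code(img, B=8, keep=16)
print("MSE with 16 of 64 coefficients kept:", np.mean((img - rec) ** 2))
```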

Bit Allocation
The goal is to optimally allocate a given total number of bits ($M$) to the $N$ (retained) components of $y_o$ so that the distortion
$$D = \frac{1}{N} \sum_{k=1}^{N} E[(y(k) - y_o(k))^2] = \frac{1}{N} \sum_{k=1}^{N} \sigma_k^2 f(m_k)$$
is minimized, where
$f(\cdot)$: quantizer distortion function (monotonic convex with $f(0) = 1$ and $f(\infty) = 0$),
$\sigma_k^2$: variance of coefficient $y(k)$,
$m_k$: number of bits allocated to $y_o(k)$.
Optimal bit allocation involves finding the $m_k$'s that minimize $D$ subject to $M = \sum_{k=1}^{N} m_k$. Note that coefficients with higher variance contain more information than those with lower variance; thus, more bits are assigned to them to improve the performance.
i. Shannon's Allocation Strategy
$$m_k = m_k(\theta) = \max\left(0, \ \frac{1}{2} \log_2 \frac{\sigma_k^2}{\theta}\right)$$
$\theta$: must be chosen to produce an average rate of $p = \frac{M}{N}$ bits per pixel (bpp).

ii. Segall Allocation Strategy
$$m_k(\theta) = \begin{cases} \frac{1}{1.78} \log_2\left(\frac{1.46\,\sigma_k^2}{\theta}\right) & 0.083\,\sigma_k^2 \ge \theta > 0 \\ \frac{1}{1.57} \log_2\left(\frac{\sigma_k^2}{\theta}\right) & \sigma_k^2 \ge \theta > 0.083\,\sigma_k^2 \\ 0 & \theta > \sigma_k^2 \end{cases}$$
where $\theta$ solves $\sum_{k=1}^{N} m_k(\theta) = M$.
iii. Huang/Schultheiss Allocation Strategy
This bit allocation approximates the optimal non-uniform allocation for Gaussian coefficients, giving
$$\hat{m}_k = \frac{M}{N} + \frac{1}{2} \log_2 \sigma_k^2 - \frac{1}{2N} \sum_{i=1}^{N} \log_2 \sigma_i^2, \qquad m_k = \mathrm{Int}[\hat{m}_k], \quad \text{with } M = \sum_{k=1}^{N} m_k \text{ fixed}$$
Figs. 10 and 11 show reconstructed images of Lena and Barbara using the Shannon ($\mathrm{SNR}_{Lena} = 20.55$ dB and $\mathrm{SNR}_{Barb} = 17.24$ dB) and Segall ($\mathrm{SNR}_{Lena} = 21.23$ dB and $\mathrm{SNR}_{Barb} = 16.90$ dB) bit allocation methods for an average of $p = 1.5$ bpp, together with the corresponding error images.
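A short Python sketch (added illustration) of Shannon's allocation rule above, finding θ by bisection so that the average rate equals the target p; the coefficient variances are assumed example values:

```python
import numpy as np

def shannon_bit_allocation(sigma2, p, iters=100):
    """Find theta so that mean_k max(0, 0.5*log2(sigma_k^2/theta)) = p,
    then return the (real-valued) allocation m_k(theta)."""
    sigma2 = np.asarray(sigma2, dtype=float)
    rate = lambda theta: np.mean(np.maximum(0.0, 0.5 * np.log2(sigma2 / theta)))
    lo, hi = 1e-12, sigma2.max()      # rate(lo) is large, rate(hi) = 0
    for _ in range(iters):            # bisection on theta (rate is decreasing in theta)
        mid = 0.5 * (lo + hi)
        if rate(mid) > p:
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    return np.maximum(0.0, 0.5 * np.log2(sigma2 / theta)), theta

# Assumed example: exponentially decaying coefficient variances of a 64-dim block
sigma2 = 100.0 * 0.8 ** np.arange(64)
m, theta = shannon_bit_allocation(sigma2, p=1.5)
print(theta, m[:8])   # most bits go to the high-variance coefficients
print(np.mean(m))     # ~1.5 bpp on average
```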

Figure 10: Reconstructed & Error Images - Shannon's (1.5 bpp).

Figure 11: Reconstructed & Error Images - Segall's (1.5 bpp).