
ITB J. ICT, Vol. 5, No. 3, 2011, 173-184

Lossless Compression Performance of a Simple Counter-Based Entropy Coder

Armein Z. R. Langi 1,2

1 ITB Research Center on Information and Communication Technology
2 Information Technology RG, School of Electrical Engineering and Informatics
Institut Teknologi Bandung, Jalan Ganeca 10, Bandung, 40116, Indonesia
Email: armein.z.r.langi@stei.itb.ac.id

Abstract. This paper describes the performance of a simple counter-based entropy coder, as compared to other entropy coders, especially the Huffman coder. Lossless data compressors such as the Huffman coder and the arithmetic coder are designed to perform well over a wide range of data entropy. As a result, these coders require significant computational resources that can become the performance bottleneck of a compression implementation. In contrast, counter-based coders are designed to be optimal over a limited entropy range only. This paper shows that the encoding and decoding processes of a counter-based coder can be simple and fast, very suitable for hardware and software implementations. It also reports that the performance of the designed coder is comparable to that of a much more complex Huffman coder.

Keywords: entropy coder; counter-based coder; lossless compression; Rice coders.

1 Introduction

Compression schemes reduce the number of bits required to represent signals and data [1]. Image communication and storage are examples of applications that benefit from image compression, because the compression results in (i) faster (real-time) image transmission through band-limited channels and (ii) lower requirements for storage space. Non-compressed images require many data bits, making real-time transmission impossible through band-limited channels such as 48 kbit-per-second (kbps) and 112 kbps integrated services digital network (ISDN) links, as well as 9.6 kbps voice-grade telephone or radio channels [1]. For example, transmitting a 256×256 pixel grayscale still image over a voice-grade line would require at least 54.6 s. Furthermore, one HDTV format needs 60 frames of 1280×720 pixels per second. Using 24 bpp colour pixels, this HDTV format would require an impractical channel capacity of 1,440 Mbps. As another example, space explorations by National Aeronautics and Space Administration (NASA) missions generate huge amounts of science data, requiring real-time compression [2]. Consequently, international bodies such as the Consultative Committee for Space Data Systems (CCSDS) have defined compression standards for real-time systems [3].

Received December 22nd, 2010, Revised November 7th, 2011, Accepted for publication November 7th, 2011.
Copyright 2011 Published by LPPM ITB, ISSN: 1978-3086, DOI: 10.5614/itbj.ict.2011.5.3.2

In signal and data compression, lossless compression in the form of an entropy coder is always needed. As shown in Figure 1, a value quantization block converts the input samples into quantized values according to a prescribed target compression ratio (CR) [4]. A lossless entropy encoder then compresses the quantized values to obtain the desired efficient representations (bitstreams). The quantized values can be recovered by inverting the encoding process at the decoder: an entropy decoder recovers the values losslessly, and a value dequantization block then reconstructs the signal samples.

Figure 1 A typical use of a lossless coder in an image compression system.

Since there are fast algorithms for value quantization, such as wavelet scalar quantization (WSQ) [5], the entropy coder is usually the time-performance bottleneck. Entropy coders such as zero run-length (ZRL) coders and Huffman coders must perform statistical modeling of the data distribution and assign shorter codewords to samples that occur more often. This reduces the average number of representation bits, thus achieving compression on average. If the codeword table does not have to be transmitted, the Huffman code is the shortest-length code for statistically independent data. However, the Huffman coder is designed to obtain optimal performance over the whole range of entropy values [4]. As a result, the encoding cost is high, because the coder must create a table of codewords for the input data through calculation of the data distribution. Such processing can be very time consuming.

This paper therefore proposes to use counter-based coders [2], because such coders are much simpler. A counter-based coder consists of a set of subcoders, each operating at a specific range of entropy values (statistics). Such subcoders can be designed to be optimal only over a narrow range of entropy [2]. Each subcoder is simply a counter, counting the number of zeros or ones in a data stream. As a result, the encoder and decoder are very simple and fast.

However, we need to study the performance of counter-based coders in practice in the context of transform coding, since the work in [2] was for differential pulse code modulation schemes. Furthermore, Rice coders have always been thought of as special-purpose coders, as compared to the Huffman coder. We therefore study and design such counter coders for use in a transform-coding compression scheme. The entropy range of the wavelet coefficients can be used to design a particular Rice coder. We can then obtain a performance comparable to that of the Huffman coder at a fraction of the computational cost [6].

2 Entropy Coders

In principle, an entropy coder is a coder that allocates a number of bits to a codeword proportionally to the information value of the codeword [4].

2.1 First-Order Entropy

Let us assume that the input data consist of a sequence of fixed-length codewords coming from a zero-memory information source (i.e., a source that produces statistically independent codewords). The source has an alphabet S having symbols s_1, s_2, ..., s_q. For a particular set of input data, the probabilities of symbol occurrences are P(s_1), P(s_2), ..., P(s_q), satisfying

$$P(s_i) \ge 0, \quad i = 1, \ldots, q \qquad (1)$$

$$\sum_{i=1}^{q} P(s_i) = 1 \qquad (2)$$

If a symbol s_i rarely occurs (i.e., P(s_i) is very small), its occurrence has a high surprise value. Hence symbol s_i is said to carry a high amount of information I(s_i), which is inversely proportional to the magnitude of P(s_i). If s_i occurs, we are said to have received an amount of information I(s_i) from the source, defined as

$$I(s_i) = \log \frac{1}{P(s_i)} \qquad (3)$$

Since the probability of receiving an amount of information I(s_i) is P(s_i), the average amount of information received per symbol is called the first-order entropy of S, denoted H(S):

$$H(S) = \sum_{i=1}^{q} P(s_i)\, I(s_i) = \sum_{i=1}^{q} P(s_i) \log \frac{1}{P(s_i)} \qquad (4)$$

In other words, the source S produces an average amount of information H(S) for each occurrence of a symbol. If we use the binary logarithm (i.e., $\log_2(1/P(s_i))$), the unit of information is bits. To carry this amount of information, the source needs data resources, namely codewords. Using a code W, each symbol s_i needs a distinct codeword w_i. If each codeword w_i is l_i bits long, then the average number of bits required to transfer information from source S using code W is

$$R = \sum_{i=1}^{q} P(s_i)\, l_i \qquad (5)$$

Since we have $l_i \ge I(s_i)$, it follows that

$$H(S) \le R \qquad (6)$$

This means that for a zero-memory source the entropy is the lower bound of the average bits per symbol. In other words, an entropy coder can achieve an efficient representation of S, i.e., a small R, but R will not be smaller than the entropy. The minimum achievable length is the data entropy. Thus the performance of an entropy coder lies in how close the resulting R is to H(S). It should be noted that it is possible to have R < H(S) if the source happens to have memory. For statistically independent data, the entropy is simply the first-order entropy [2].

Efficient lossless codes depend on statistical distributions. An entropy coder allocates a number of bits proportional to the amount of information. A rarely occurring symbol gets more bits, while highly frequent symbols get codewords with fewer bits. As a result, the average bit usage is efficient.
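As an aside (not part of the original paper), Eq. (4) is straightforward to compute. A minimal Python sketch, assuming the symbols are 8-bit values and estimating P(s_i) by counting:

```python
import math
from collections import Counter

def first_order_entropy(data: bytes) -> float:
    """First-order entropy H(S) of Eq. (4), in bits per symbol."""
    n = len(data)
    # H(S) = sum_i P(s_i) * log2(1 / P(s_i))
    return sum((c / n) * math.log2(n / c) for c in Counter(data).values())

# Any lossless code for this 9-symbol stream needs at least H bits/symbol
# on average (Eq. (6)), assuming statistically independent symbols. The
# same stream reappears in the ZRL example of Section 2.2, Eq. (7).
stream = bytes([0x02, 0x01, 0x00, 0x00, 0x00, 0x00, 0x07, 0x00, 0x7F])
print(f"H = {first_order_entropy(stream):.3f} bits/symbol")
```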

2.2 Zero Run-Length Coder

Suppose that we have a source producing many consecutive zero symbols, i.e., P(s) is high for s = 0x00. In this case we can use ZRL, which is a very simple code. It simply counts the number of consecutive zeros in the input and produces a zero symbol followed by a number representing the counting result. For example, consider a source that uses an alphabet consisting of fixed 8-bit codewords, {0x00, 0x01, 0x02, ..., 0xFF}. If a ZRL encoder encounters a stream of nine input data:

(0x02, 0x01, 0x00, 0x00, 0x00, 0x00, 0x07, 0x00, 0x7F) (7)

it will produce a stream of eight output data:

(0x02, 0x01, 0x00, 0x04, 0x07, 0x00, 0x01, 0x7F) (8)

Notice that there are five zeros (0x00) in the input stream; the first four consecutive zeros are replaced by a zero symbol 0x00 and a counting result 0x04, and the last zero is replaced by a zero symbol 0x00 and a counting result 0x01. Given this simple coding rule, a ZRL decoder can easily recover the original stream from the encoder output stream (the code is uniquely decodable) without any data loss. Notice also that the number of data symbols has been reduced from nine to eight, performing a lossless compression. This compression is effective as long as the input stream contains a significant number of consecutive zeros. Otherwise, ZRL will in fact produce an expansion instead of a compression.
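The ZRL rule just described can be sketched in a few lines of Python (our illustration, not the paper's implementation; we assume run lengths are capped at 0xFF so that a count fits in one 8-bit symbol):

```python
def zrl_encode(stream, zero=0x00, max_run=0xFF):
    """Replace each run of zeros with the pair (zero, run length)."""
    out, run = [], 0
    for sym in stream:
        if sym == zero and run < max_run:
            run += 1
        else:
            if run:                       # flush any pending zero run
                out += [zero, run]
                run = 0
            if sym == zero:               # run was at max_run: start anew
                run = 1
            else:
                out.append(sym)
    if run:
        out += [zero, run]
    return out

def zrl_decode(stream, zero=0x00):
    """Invert zrl_encode: expand each (zero, count) pair back into zeros."""
    out, it = [], iter(stream)
    for sym in it:
        if sym == zero:
            out += [zero] * next(it)
        else:
            out.append(sym)
    return out
```

Applied to the nine symbols of Eq. (7), zrl_encode returns the eight symbols of Eq. (8), and zrl_decode recovers the original stream.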

2.3 Huffman Coder

The variable-length Huffman coder is a well-known and widely used coder because its compression-ratio performance is close to optimal (i.e., the actual compression ratio is very close to the input entropy). In an optimal code, occurrences of long codewords are rare, hence the number of bits is reduced. A Huffman coder maps the input samples using a table of variable-length codewords. The codewords are designed such that the most frequent data sample has the shortest codeword. Consequently, the resulting bit rate is always within H and H+1 bits per symbol, regardless of the input entropy H.

A Huffman coder maintains the statistical distribution of the source. The coder uses the distribution to estimate the information value of each symbol. It then creates a codeword for each symbol according to its information value. Finally, it uses that codeword to represent the corresponding symbol from the source. The process of a Huffman coder is thus usually computationally costly, because it consists of two passes. The first pass reads all data samples to generate the statistical distribution, which is then used to allocate bits to the codewords. Bits are allocated to codewords according to a hierarchical tree structure of symbols, in which the hierarchy reflects the importance of the symbols. The first bit is given to the symbols with the lowest information values. The remaining bits are distributed by traversing the tree structure in such a way that symbols with more information value receive more bits, while ensuring that the resulting code is always uniquely decodable. The second pass encodes the sample data using the resulting codeword table. This two-pass Huffman scheme must then include the codeword table in the bitstream to allow the decoder to decode the data correctly.

A single-pass Huffman code is even more computationally expensive. To avoid having to send the codeword table, both encoder and decoder must maintain an adaptive codeword table. They adapt the table as each incoming data sample changes the symbol distribution. As a result, both encoder and decoder must periodically create a new tree structure and codeword table.
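To make the cost comparison concrete, here is a compact and merely illustrative Python sketch of the two-pass scheme, building the codeword table with a priority queue. The per-symbol statistics and tree construction in the first pass are exactly the overhead that the counter coders of Section 3 avoid:

```python
import heapq
from collections import Counter

def huffman_table(data):
    """Pass 1: build a codeword table from the symbol distribution.
    (Degenerate single-symbol inputs are ignored for brevity.)"""
    heap = [(freq, i, {sym: ""}) for i, (sym, freq)
            in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f0, _, c0 = heapq.heappop(heap)   # the two least likely subtrees
        f1, i, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (f0 + f1, i, merged))
    return heap[0][2]

def huffman_encode(data):
    """Pass 2: emit the bits; the table itself must also be sent
    in the bitstream so the decoder can invert the mapping."""
    table = huffman_table(data)
    return table, "".join(table[sym] for sym in data)
```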

3 Counter-Based Coders

We propose to use the counter coder as an alternative, computationally more efficient entropy coder. One basic counter coder, called Ψ1 (or PSI1), works as follows [2]. Given a block of data samples x(j) (for j = 1, ..., J), Ψ1 replaces every sample in the block with a number of consecutive zero bits '0' followed by a closing one bit '1' (see Table 1). For example, if a sample happens to have a value of 55, Ψ1 encodes it using 56 bits, i.e., 55 zeros followed by a one. Hence, the encoding algorithm is simple. The reconstruction is obviously simple too: a decoder just counts the number of zero bits until a one appears, and the counting result is the sample value.

Define now a function i(j) such that a particular data block consists of the symbols {s_{i(1)}, s_{i(2)}, ..., s_{i(J)}}. The total number of bits required to encode a data block is then (for index j and block size J)

$$L_{\Psi_1} = \sum_{j=1}^{J} \left[ x(j) + 1 \right] \qquad (9)$$

Notice that this code basically allocates more codeword bits to data samples with higher sample values, i.e., $x(j) > x(l) \Rightarrow l_{i(j)} > l_{i(l)}$. This means the Ψ1 code is optimal if the statistical distribution of the data samples is monotonically decreasing, i.e., $x_i > x_k \Rightarrow P(s_i) \le P(s_k)$. The average codeword length of the Ψ1 code is

$$R_{\Psi_1} = \sum_{i=1}^{q} P(s_i)\,(x_i + 1) \qquad (10)$$

It has been studied elsewhere [3] that this code is optimal for a (monotonically decreasing distribution) source with a first-order entropy around 2, i.e., 1.5 ≤ H ≤ 2.5. It should be clear that in a monotonically decreasing distribution, the probability that a sample has a large value is low. As a result, the probability of a long codeword length in Eq. (9) is low. Thus the code is optimal. We say that this range is the natural entropy range of Ψ1. For sources with entropies outside that range, there are several variations of Ψ1.

Table 1 A codeword table of the Ψ1 code.

Symbols s_i | Samples x_i | Sample Data d_i | Codewords w_i | Length l_i
s_1         | 0           | 0000 0000       | 1             | 1
s_2         | 1           | 0000 0001       | 01            | 2
s_3         | 2           | 0000 0010       | 001           | 3
s_4         | 3           | 0000 0011       | 0001          | 4
...         | ...         | ...             | ...           | ...
s_256       | 255         | 1111 1111       | 0000...0001   | 256
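The Ψ1 rule of Table 1 reduces to a counting loop. A minimal sketch (ours, not the paper's), using character strings for bits where a real implementation would pack them:

```python
def psi1_encode(samples):
    """Psi1: each sample x becomes x '0' bits plus a closing '1' (Table 1)."""
    return "".join("0" * x + "1" for x in samples)

def psi1_decode(bits):
    """Count zeros until a '1' appears; the count is the sample value."""
    samples, run = [], 0
    for b in bits:
        if b == "0":
            run += 1
        else:
            samples.append(run)
            run = 0
    return samples

# psi1_encode([0, 2, 1]) -> "1" + "001" + "01" == "100101", i.e.
# L = J + sum of x(j) = 3 + 3 = 6 bits, as in Eq. (9);
# psi1_decode("100101") -> [0, 2, 1].
```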

For example, if H > 2.5, it is safe to assume that the k least significant bits (LSBs) of a sample datum d(j) are completely random. In this case there is no need to perform any compression on those LSBs. We can then split d(j) into two portions: the k LSBs and the (8-k) most significant bits (MSBs). The MSBs are coded using the Ψ1 code before being sent to the bitstream, while the LSBs are sent uncoded. A decoder first recovers the MSBs using a Ψ1 decoder, and then concatenates the results with the uncoded LSBs, resulting in the desired d(j). It can be shown that this approach has a natural entropy range of 1.5 + k ≤ H ≤ 2.5 + k. This code is called Ψ1,k (see the sketch after this section).

For the range 0.75 ≤ H ≤ 1.5, we can first use Ψ1, resulting in a bitstream with many '1' bits. To exploit this, we can cascade it with another simple code. This simple code first complements the results of Ψ1 (i.e., converting '0' into '1' and vice versa), and groups the resulting bits into binary 3-tuples. It finally codes each binary 3-tuple in such a way that a tuple with many '0' bits uses a short codeword. This code is called Ψ0.

For sources with entropy H < 0.75, we can cascade the Ψ1 coder with another coder, such as the ZRL coder described above. A ZRL encoder processes the samples and provides its outputs to a counter coder. In effect, the ZRL coder brings the data entropy into the counter code's natural range.
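A sketch of the Ψ1,k split described above (again ours, reusing psi1_encode and psi1_decode from the previous sketch, and assuming 8-bit samples with 1 ≤ k ≤ 7):

```python
def psi1k_encode(samples, k):
    """Psi1,k: code the (8-k) MSBs with Psi1; pass the k random LSBs raw."""
    msbs = psi1_encode(x >> k for x in samples)
    lsbs = "".join(format(x & ((1 << k) - 1), f"0{k}b") for x in samples)
    return msbs + lsbs

def psi1k_decode(bits, k, n_samples):
    """Recover the MSBs with the Psi1 decoder, then append the raw LSBs."""
    ones, split = 0, 0
    while ones < n_samples:               # the Psi1 part ends after the
        ones += bits[split] == "1"        # n_samples-th closing '1'
        split += 1
    msbs, lsbs = psi1_decode(bits[:split]), bits[split:]
    return [(m << k) | int(lsbs[j * k:(j + 1) * k], 2)
            for j, m in enumerate(msbs)]
```

Only the MSB stream has to fall into Ψ1's natural entropy range; leaving the k random LSBs uncoded is what shifts the natural range to 1.5 + k ≤ H ≤ 2.5 + k.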

4 Compression Performance

To observe the performance of this simple counter coder, we have used the WSQ scheme shown in Figure 1 [5]. Here we used the image sample Lena.pgm, having dimensions of 512×512 pixels at 8 bits per pixel (bpp) grayscale, hence a size of 2,097,152 bits. The scheme first quantizes the image according to a set of prescribed rates: 0.1, 0.2, ..., 0.9, 1, 2, ..., 6, and 7 bpp. The resulting data become the input data of a lossless encoder. In addition to the counter code, we have used a Huffman coder for comparison. The lossless encoder produces compressed bitstreams, and a lossless decoder can then reproduce the output data from the bitstreams.

Given a prescribed rate, the value quantization block produces image data, from which the coder produces the bitstream. We measured the size of the bitstream. The compression ratio (CR) is the ratio between the original image size (in this case 2,097,152 bits) and the bitstream size. We measured the CR at each prescribed rate for the counter coder, and measured the corresponding performance of the Huffman coder for comparison. We expect the resulting CRs to approximate the prescribed CR.

Table 2 and Figure 2 show the results of the coders for the various prescribed rates. Overall, the performance of the counter coder is comparable to that of the Huffman coder. At very low rates the Huffman coder still outperforms the counter coder. However, in most cases both coders perform better than the expected rates.

We can also measure the entropy of the image data at each prescribed rate, and compare the CR performances relative to that of the entropy. As shown in Figure 2, for a high prescribed CR (thus low entropy), both the counter coder and the Huffman coder are able to outperform the entropy. For high entropy (low prescribed CR), the counter coder outperforms the Huffman coder: the counter coder's performance is comparable to the entropy, while the Huffman coder underperforms the entropy.

Table 2 A comparison of compression ratio (CR) performance. The relative performance is the coder's CR divided by the entropy-bound CR.

Prescribed Rate (bpp) | Prescribed CR | Entropy CR | Huffman CR | Counter CR | Huffman (rel.) | Counter (rel.)
0.1 | 80.00 | 44.44 | 53.33 | 50.00 | 120% | 113%
0.2 | 40.00 | 27.59 | 32.00 | 29.63 | 116% | 107%
0.3 | 26.67 | 21.05 | 24.24 | 22.86 | 115% | 109%
0.4 | 20.00 | 16.67 | 20.00 | 18.18 | 120% | 109%
0.5 | 16.00 | 14.81 | 17.78 | 16.33 | 120% | 110%
0.6 | 13.33 | 13.33 | 15.69 | 14.55 | 118% | 109%
0.7 | 11.43 | 11.94 | 13.79 | 12.70 | 116% | 106%
0.8 | 10.00 | 10.96 | 12.50 | 11.59 | 114% | 106%
0.9 | 8.89  | 10.13 | 11.43 | 10.67 | 113% | 105%
1   | 8.00  | 9.30  | 10.39 | 9.76  | 112% | 105%
2   | 4.00  | 4.97  | 4.79  | 4.85  | 96%  | 98%
3   | 2.67  | 3.16  | 2.92  | 3.10  | 92%  | 98%
4   | 2.00  | 2.30  | 2.16  | 2.32  | 94%  | 101%
5   | 1.60  | 1.79  | 1.75  | 1.82  | 98%  | 101%
6   | 1.33  | 1.47  | 1.44  | 1.47  | 98%  | 100%
7   | 1.14  | 1.24  | 1.21  | 1.22  | 98%  | 98%

What is the image quality associated with this CR performance level? For each prescribed rate the scheme produces a bitstream, and the decoder can then use the bitstream to produce a reconstructed image. We then compare the quality of the reconstructed image with that of the original image. One way to measure image quality is through the peak signal-to-noise ratio (PSNR). This measure compares the two images and calculates the energy of the image differences. PSNR is then the ratio of the peak-value energy to the energy of the image differences. A PSNR level of 30 dB or more is considered good quality.
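For 8-bit images, the PSNR described above is conventionally computed as PSNR = 10 log10(255^2 / MSE), where the MSE is the mean energy of the image differences. A minimal NumPy sketch (ours, not the paper's code):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB: peak-value energy over the mean difference energy."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```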

Figure 2 Compression ratio (CR) performance of the counter code as compared to the Huffman coder, relative to the data entropy (CR performance relative to entropy versus prescribed CR, on a logarithmic scale).

Table 3 and Figure 3 show the compression performances of the coders at various levels of image quality, after being combined with the WSQ scheme shown in Figure 1. The WSQ decoder uses the image data from the bitstream to generate a reconstructed image. We can then compare the reconstructed image with the original one, and measure its quality in terms of PSNR. Here, the counter coder outperforms the Huffman coder for high-quality compression. The Huffman coder outperformed the counter coder at lower image quality levels, although both outperform the entropy. Beyond 37 dB PSNR, the CRs of the coders are basically similar.

Table 3 Compression ratio performance versus compression quality.

Quality PSNR (dB) | CR Entropy Coder | CR Huffman Coder | CR Counter Coder
28.06 | 44.4 | 53.33 | 50.00
31.56 | 27.6 | 32.00 | 29.63
34.14 | 16.7 | 20.00 | 18.18
36.42 | 11.0 | 12.50 | 11.59
37.24 | 9.3  | 10.39 | 9.76
40.95 | 5.0  | 4.79  | 4.85
49.67 | 2.3  | 2.16  | 2.32
51.4  | 1.5  | 1.44  | 1.47

Figure 3 Compression ratio performances in a scalar quantization scheme (CR, on a logarithmic scale, versus PSNR in dB, for the entropy bound, the Huffman coder, and the counter coder).

5 Conclusions

Simple counter coders can be used effectively to compress images in a WSQ scheme. The counter coders are optimal for monotonically decreasing distributions, and this kind of distribution is natural for the indices of the WSQ. In this paper we have shown that the counter coder's performance, especially in CR and PSNR, matches that of the Huffman coder. This results in an almost optimal performance, comparable to Huffman coding, at a fraction of the code complexity.

Acknowledgements

This work was supported in part by Riset Unggulan (RU) ITB.

Nomenclature

CR = Compression ratio
dB = Decibel
SNR = Signal-to-noise ratio
PSNR = Peak SNR

References

[1] Jayant, N., Signal Compression: Technology Targets and Research Directions, IEEE J. Selected Areas in Communications, 10(5), pp. 796-818, July 1992.

[2] Rice, R.F., Some Practical Universal Noiseless Coding Techniques, Part III, Module PSI14,K+, JPL Publication 91-3, NASA, JPL California Institute of Technology, 124p, November 1991.

[3] CCSDS, Image Data Compression, Recommended Standard CCSDS 122.0-B-1, Consultative Committee for Space Data Systems, Nov. 2005. (Available at http://public.ccsds.org, accessed 4 March 2011.)

[4] Langi, A., Review of Data Compression Methods and Algorithms, Technical Report DSP-RTG 2010-9, Institut Teknologi Bandung, Sep. 2010.

[5] Bradley, J.N. & Brislawn, C.N., The Wavelet/Scalar Quantization Compression Standard for Digital Fingerprint Images, Proc. IEEE Int. Symp. Circuits and Systems, London, May 30-June 2, 1994.

[6] Langi, A.Z.R., An FPGA Implementation of a Simple Lossless Data Compression Coprocessor, Proc. International Conference on Electrical Engineering and Informatics (ICEEI 2011), Bandung, July 2011.