INTEGER SUB-OPTIMAL KARHUNEN-LOEVE TRANSFORM FOR MULTI-CHANNEL LOSSLESS EEG COMPRESSION


Yodchanan Wongsawat, Soontorn Oraintara and K. R. Rao
Department of Electrical Engineering, University of Texas at Arlington, 416 Yates St., Arlington, TX, 76019-0016 USA
Email: yodchana@gauss.uta.edu, oraintar@uta.edu, and rao@uta.edu

ABSTRACT

In this paper, a method for approximating the Karhunen-Loeve Transform (KLT) for the purpose of multi-channel lossless electroencephalogram (EEG) compression is proposed. The approximation yields a near-optimal transform for the case of a Markov process, but significantly reduces the computational complexity. The sub-optimal KLT is further parameterized by a ladder factorization, rendering a structure that remains reversible under quantization of its coefficients, called the IntSKLT. The IntSKLT is then used to compress multi-channel EEG signals. Lossless coding results show that the degradation in compression ratio using the IntSKLT is about 3%, while the computational complexity is reduced by more than 60%.

1. INTRODUCTION

In some countries, recording of electroencephalogram (EEG) signals for diagnosis legally has to be done losslessly, so that the signals reach expert analysts without distortion. For this reason, there has been interest in lossless EEG signal compression. Generally, EEG signals are measured from electrodes at different positions on the human scalp. The number of channels (N) can be as high as a few hundred (or perhaps a few thousand in the near future), and neighboring channels can be highly correlated. In order to efficiently compress multi-channel signals, this inter-channel redundancy must be exploited. Although these multi-channel signals are correlated, their correlation models are unpredictable due to the unknown combination of the cortical potentials. Thus, data-independent transforms such as the DCT, DST, or DFT [1] usually fail to efficiently decorrelate these types of signals.
This problem can be solved by employing the optimal transform, which decorrelates the EEG signals using the eigenvectors of their correlation matrix. This optimal transform is known as the Karhunen-Loeve Transform (KLT) [1]. However, calculating an N-point KLT can be complicated, especially for large N (the number of electrodes, or channels). Several schemes for single-channel lossless EEG compression have been proposed [2], [3], but few methods address the multi-channel case. An efficient approach to losslessly compress multi-channel EEG signals, employing the KLT to reduce inter-channel redundancy, is proposed in [4], where the KLT is approximated with finite-precision, reversible operations. The key to the approximation of these transforms is the ladder-type, also known as lifting, factorization; the matrix factorization methods in [5], [6], [7] are used. The resulting approximation is called the integer KLT (IntKLT). According to [7], for a given matrix the factorization is not unique. Each solution results in a different permutation and dynamic range of coefficients, which are of particular importance to lossless coding applications. To demonstrate this, let us consider an example.

(This work was supported in part by NSF Grant ECS-0528964.)

Example 1, lifting factorization: Let A be a 3 x 3 matrix with det A = 1:

    A = [ 0.6    0.64    0.48
          0.8   -0.48   -0.36
          0      0.6    -0.8  ].

Two lifting factorizations of A can be written, each a product of unit-triangular lifting matrices; denote them (1) and (2). Both factorizations can be approximated with reversibility preserved. The magnitudes of the lifting coefficients in (1) are all less than or equal to one, whereas those in (2) are as high as 4.92. Such large coefficients can significantly impact the dynamic range of the internal nodes of the transform, and thus the lossless coding performance. The problem becomes more severe as N increases.
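As a minimal illustration of the KLT idea described above (not the authors' implementation), the sketch below builds a KLT from the eigenvectors of a sample correlation matrix and verifies that the transformed channels are decorrelated. The channel model, channel count and sample count are arbitrary choices for this example.

```python
# Sketch: decorrelating correlated channels with a KLT built from the
# eigenvectors of the sample correlation matrix.
import numpy as np

rng = np.random.default_rng(0)
N, T = 8, 4096                       # channels, samples per channel

# Synthesize correlated channels: each channel is a noisy copy of a common
# source, so all channels are strongly correlated with each other.
source = rng.standard_normal(T)
X = np.vstack([source + 0.3 * rng.standard_normal(T) for _ in range(N)])

R = (X @ X.T) / T                    # sample correlation matrix (N x N)
eigvals, V = np.linalg.eigh(R)       # eigendecomposition; V is orthogonal
klt = V[:, ::-1].T                   # rows = eigenvectors, largest variance first

Y = klt @ X                          # transform coefficients
Ry = (Y @ Y.T) / T                   # correlation of the outputs

off_diag = Ry - np.diag(np.diag(Ry))
print(np.max(np.abs(off_diag)))      # close to zero: channels decorrelated
```

Because the transform is built from the same sample correlation matrix it is applied to, the output correlation matrix is diagonal up to floating-point error, with the output variances (the eigenvalues) sorted from large to small.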
Three considerations arise in using the IntKLT for multi-channel coding. First, in approximating a reversible structure, since the factorization of the KLT is not unique, finding the best solution in the sense of minimum dynamic range of the coefficients is very difficult. An obvious approach is to compare all possible factorizations; this, however, is impractical for a large number of channels, since the number of solutions is of order O(N!), where N is the number of channels. Second, since the KLT is a signal-dependent transform, its parameters must be transmitted as side information, which is of order O(N^2); this side information should also be minimized. Third, the calculation of the KLT and the implementation of the IntKLT are highly complex, especially for large N.

In this paper, an efficient approximation with reversibility preserved for a large KLT is proposed. The signals are divided into groups following a divide-and-conquer philosophy. For each group, the signals are decorrelated by the group's own KLT, referred to as the marginal KLT in [8]. Each of the small KLTs is realized by a lifting factorization, rendering a reversible transform with small rational lifting coefficients. In order to obtain small lifting coefficients, appropriate permutation matrices have to be selected. Finding the right permutation matrices for factorizing each marginal KLT, instead of the full-sized N-point KLT, is more practical [7]. As a result, the proposed simplified transform can significantly reduce computational complexity compared with the IntKLT. For convenience, this proposed transform is called the integer sub-optimal KLT (IntSKLT). The performance of the IntSKLT is presented via the decay of its sorted variances, since our focus is on the investigation of the IntSKLT for removing inter-channel redundancy of EEG data. The IntSKLT is also embedded in the inter-channel decorrelation part of the multi-channel EEG lossless scheme in [4] (Figure 6), in place of the IntKLT, to verify its coding performance.
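The reversibility-under-quantization property that the above relies on can be illustrated with a minimal lifting ("ladder") example. The coefficients below are illustrative (roughly a 45-degree rotation in lifting form), not taken from any factorization in the paper: each step adds a rounded multiple of one sample to the other, and the inverse simply subtracts the same rounded quantities in reverse order, so integer inputs are recovered exactly.

```python
# Sketch: lifting steps stay perfectly invertible even though each step
# rounds its product to an integer. Coefficients are illustrative only.
def lift_fwd(x0, x1, a, b, c):
    # three ladder steps; each adds a rounded multiple of the *other* sample
    x1 = x1 + round(a * x0)
    x0 = x0 + round(b * x1)
    x1 = x1 + round(c * x0)
    return x0, x1

def lift_inv(y0, y1, a, b, c):
    # undo the steps in reverse order with exact integer subtraction
    y1 = y1 - round(c * y0)
    y0 = y0 - round(b * y1)
    y1 = y1 - round(a * y0)
    return y0, y1

a, b, c = -0.4142, 0.7071, -0.4142   # ~ 45-degree rotation in lifting form
for x in [(5, 9), (-120, 37), (32767, -32768)]:
    assert lift_inv(*lift_fwd(*x, a, b, c), a, b, c) == x
print("perfect reconstruction on integer inputs")
```

The dynamic-range concern from Example 1 shows up here too: if a, b, c were large, the intermediate values x0, x1 would grow far beyond the input word length, hurting lossless coding.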

Figure 1: Structure of the sub-optimal KLT for N-channel signals.

2. SUB-OPTIMAL KLT

In order to solve the complexity issue caused by the large dimension of the KLT, the N channels of EEG signals are equally divided into two groups of N/2 channels along an arbitrary scan order (in this paper, the scanning orders in Figure 7 are used with N = 64). Each group is decorrelated by its local marginal KLT, and the outputs are sorted by variance from large to small. To further reduce the dependency, the largest N/4 channels from each of the two groups are combined and decorrelated by their N/2-point KLT. Under the assumption that the EEG signals are by then highly decorrelated, the remaining N/2 channels of small signals are exported directly to the outputs. Figure 1 shows a block diagram for the case of N = 64. A self-similar structure is used to further reduce the complexity: the same structure is repeated for each of the N/2-point KLTs. Every marginal KLT is updated over a period of m seconds of EEG data (in this paper, m = 8 is used). The details of each N/2 = 32-point sub-optimal KLT are illustrated in Figure 2 for the case of N = 64. Figure 2 also shows that the recursive division of the channels stops at eight channels, a size for which it is feasible to optimize for the best parameters. From the implementation point of view, the N = 64 channels are clustered into groups of eight and processed locally (Figure 7(b)). The optimality of the proposed approximation of the KLT depends on the statistics of the signals and on how they are clustered.

2.1 Markov process

In this section, let us consider the special case of a Markov process. Assume that the correlation between the i-th and j-th channels is given by

    [R_x]_{i,j} = E{x_i x_j} = ρ^|i-j|,  i, j = 0, 1, ..., N-1.
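The correlation model just defined can be generated and examined numerically. This is an illustrative sketch (not the authors' code), with N = 64 and ρ = 0.95 chosen arbitrarily for the example:

```python
# Sketch: the first-order Markov correlation model [R_x]_{ij} = rho^|i-j|
# and the variance concentration its KLT achieves.
import numpy as np

def markov_corr(N, rho):
    """Kac-Murdock-Szego matrix: [R_x]_{ij} = rho ** |i - j|."""
    idx = np.arange(N)
    return rho ** np.abs(idx[:, None] - idx[None, :])

R = markov_corr(64, 0.95)
eigvals = np.linalg.eigvalsh(R)[::-1]     # KLT output variances, descending
print(eigvals[:4] / eigvals.sum())        # a few coefficients carry most energy
```

For this model tr(R_x) = N and det(R_x) = (1 - ρ^2)^(N-1), which is what makes a closed-form expression for the maximum coding gain possible.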
It can be shown that the maximum achievable coding gain [9] for this case is

    G_N = ( (1/N) tr(R_x) ) / det(R_x)^(1/N) = (1 - ρ^2)^(-(1 - 1/N)).

2.1.1 Four-channel case

Consider a simple case of N = 4, where the signals are decorrelated using the structure in Figure 3. It is easy to see that the marginal KLTs T_1 and T_2 are

    T_i = (1/√2) [ 1   1
                   1  -1 ],    (3)

and the corresponding eigenvalues are 1 ± ρ. The coding gain after the first stage (the z_i) is (1 - ρ^2)^(-1/2). Applying a 2-point KLT to the larger components z_0 and z_2 results in a sub-optimal coding gain of

    Ĝ_4 = (1 - ρ^2)^(-1/2) [1 - (1/4) ρ^2 (1 + ρ)^2]^(-1/4).

It can be shown that 0.9036 ≈ (2/3)^(1/4) ≤ Ĝ_4/G_4 ≤ 1 for 0 ≤ ρ ≤ 1.

Figure 3: Approximated 4-point KLT.

2.1.2 2^p-channel case

Extending to the case of N = 2^p, assume that the three N/2-point KLTs used in the approximation in Figure 1 are obtained from the eigenvectors of the correlation matrices of the corresponding inputs, i.e. the three KLTs are the marginal KLTs. Figure 4 plots the ratio Ĝ_N/G_N as a function of ρ for the case of N = 64. It is clear that when these sub-KLTs are optimal, the degradation in coding gain is insignificant (less than 0.3%).

Figure 4: Ĝ_N/G_N for the case of N = 64.

In order to apply the proposed sub-optimal KLT to multi-channel EEG compression, two points should be noted. First, in the proposed recursive structure the N/2-point KLTs are themselves further approximated, and thus the difference in coding gain accumulates. Second, and perhaps more importantly, the EEG signals, although highly correlated, may not form a Markov process; in fact, some correlation exists among channels beyond immediate neighbors. Hence, the clustering of the inputs into groups (Figure 7(b)) also has an impact on the coding performance.
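The four-channel bound just stated can be checked numerically. This sketch (not from the paper) evaluates G_4 = (1 - ρ^2)^(-3/4), Ĝ_4, and their ratio on a grid of ρ values; the grid size is arbitrary:

```python
# Numeric check: G4_hat / G4 should stay between (2/3)**(1/4) ~= 0.9036 and 1
# for 0 <= rho < 1, as claimed for the four-channel case.
import numpy as np

rho = np.linspace(0.0, 0.9999, 10000)
G4 = (1 - rho**2) ** (-3 / 4)
G4_hat = (1 - rho**2) ** (-1 / 2) * (1 - rho**2 * (1 + rho) ** 2 / 4) ** (-1 / 4)
ratio = G4_hat / G4

print(ratio.min(), ratio.max())   # stays within [(2/3)**0.25, 1]
```

The upper bound follows because (1 + ρ)^2 ≤ 4 on [0, 1], and the lower bound (2/3)^(1/4) is approached as ρ → 1.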
3. INTEGER KLT

In order to construct a reversible approximation of the N-point sub-optimal KLT of Section 2, a reversible structure is designed for each KLT by exploiting the lifting factorization in [7] together with the slight modification in [4]. The N-point KLT matrix, K_{N×N}, is factorized using single-row non-zero off-diagonal factorization as

    K_{N×N} = P_L S_N S_{N-1} ... S_1 S_0 P_R,    (4)

where P_L and P_R are permutation matrices that exchange the rows and the columns, respectively, S_n (n = 1, 2, ..., N) are single-row non-zero off-diagonal matrices, S_0 is a lower triangular matrix, and N is the number of inputs. To clarify (4), an example of this factorization for the case of a 4-channel KLT is shown in Figure 5. As Figure 5 suggests, the magnitudes of the rational lifting coefficients, s_ij, depend directly on the selection of the permutation matrices. For some choices of P_L and P_R, the coefficients obtained in the S_i can be very large, which degrades lossless coding performance. In order to make the selection of good P_L and P_R feasible, the size of the smallest KLT is limited

Figure 2: Structure of the sub-optimal KLT for 32-channel signals, composed of 16-point sub-optimal KLTs and a 16-point KLT.

to N = 8. Therefore, the reversible approximation of the sub-optimal KLT of Section 2 can be constructed from 8-point IntKLTs. Regarding computational complexity, an N-point IntKLT requires (3/2)N(N-1) lifting coefficients, whereas an N-point IntSKLT requires (3/2)·8·(8-1)·3^(log2(N/8)) = (28/9) N^(log2 3) ≈ 3.1 N^1.585 lifting coefficients. The ratio between the two counts therefore grows without bound (as N^0.415) for large N.

4. MULTI-CHANNEL EEG LOSSLESS CODER

The performance of the proposed IntSKLT is evaluated in the multi-channel EEG lossless coder proposed in [4]; the block diagram of this coder is shown in Figure 6. Differential pulse code modulation (a forward difference) is used to remove the DC bias in each EEG channel. Inter-channel decorrelation is then performed using the IntKLT, with the resolution at all internal nodes fixed to sixteen bits. The IntDCT-IV [10] is applied to reduce temporal redundancy. Finally, the statistical redundancy is removed by Huffman coding. As mentioned, using the IntKLT may introduce undue computational complexity for a large number of channels; this is resolved by replacing the IntKLT with the IntSKLT. For the details of the coder, the reader is referred to [4].

5. SIMULATION RESULTS

In the simulation, eight seconds of sixty-four channels of EEG signals, sampled at 1.024 kHz and digitized to sixteen bits, are used.
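The coefficient counts from Section 3 can be sanity-checked for the N = 64 configuration used in these simulations. This sketch is illustrative, not the authors' code:

```python
# Check of the lifting-coefficient counts for N = 64 channels:
# a full N-point IntKLT needs (3/2)N(N-1) coefficients, while the IntSKLT
# uses 3**log2(N/8) eight-point IntKLTs of (3/2)*8*7 = 84 coefficients each.
import math

N = 64
full_intklt = 3 * N * (N - 1) // 2            # full 64-point IntKLT
blocks = 3 ** int(math.log2(N // 8))          # number of 8-point IntKLTs
intsklt = blocks * (3 * 8 * (8 - 1) // 2)     # equals (28/9) * N**log2(3)

reduction = 1 - intsklt / full_intklt
print(blocks, intsklt, full_intklt, f"{100 * reduction:.1f}% fewer coefficients")
```

For N = 64 this gives 27 eight-point IntKLTs and a 62.5% reduction in lifting coefficients, consistent with the "twenty-seven eight-point IntKLTs" and "more than 60%" figures reported below.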
In order to achieve efficient channel partitioning, two types of channel scanning (spiral and clustering), shown in Figure 7, are applied. Figure 8 compares the (sorted) output variances when the IntKLT, the IntSKLT and the IntDCT are used to remove the inter-channel correlation. Since the IntDCT is signal-independent, the majority of its output variances are much higher. On the other hand, the output variances in the case of the IntSKLT are similar to those obtained from the IntKLT. Figures 8(a) and (b) show the results for the spiral and clustering scans, respectively. The output variances obtained by the IntSKLT with clustering scanning are noticeably closer to those of the IntKLT than with spiral scanning.

The lossless coder described in Section 4 is applied to the EEG data. Table 1 summarizes the compression ratios obtained with the IntKLT and the IntSKLT under different channel scans. When there is no inter-channel decorrelation, i.e. the IntKLT block in Figure 6 is removed, the compression ratio is 2.53. The maximum compression ratio of 2.84 is achieved when the full 64-point IntKLT is applied. When the IntSKLT, consisting of twenty-seven eight-point IntKLTs, is used, compression ratios of 2.80 and 2.82 are obtained for spiral and clustering scanning, respectively. When the signals are randomly grouped, the average compression ratio is 2.78. This shows that the ordering of the EEG channels has some impact on the coding performance. Although using the IntSKLT costs about 3% in compression ratio compared with the IntKLT, the computational cost is reduced by more than 60%.

Table 2 compares the coding results of the coder in [4] (with the IntKLT replaced by the IntSKLT) against the existing algorithms Shorten (a lossless linear-prediction-based coder of order 6) [11], lossless JPEG2000 [12] and GZIP [13]. At a higher computational cost than these existing lossless algorithms, the coding gain is improved by more than 20%.
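The rate bookkeeping behind Table 2 can be reproduced from the bits-per-sample figures. This sketch assumes only the 16-bit source word length and the bps values reported in Table 2:

```python
# Sketch of the Table 2 rate bookkeeping: CR = 16 / bps and
# compression percentage = (16 - bps) * 100 / 16, for a 16-bit source.
bits_per_sample = 16
coders = {"IntSKLT": 5.67, "Shorten": 7.14, "JPEG2000": 8.11, "GZIP": 11.11}

for name, bps in coders.items():
    cr = bits_per_sample / bps                              # compression ratio
    pct = (bits_per_sample - bps) * 100 / bits_per_sample   # compression %
    print(f"{name}: CR = {cr:.2f}, compression = {pct:.2f}%")
```

The computed values match the CR and compression-percentage rows of Table 2 to two decimal places.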
The proposed coder yields the best compression ratio at 2.82, while Shorten, lossless JPEG2000 and GZIP yield compression ratios of 2.24, 1.97 and 1.44, respectively.

6. CONCLUSIONS

This paper presents a method for efficiently approximating the KLT for the purpose of lossless multi-channel EEG compression. The approximation is done by dividing the signals into small groups

Figure 5: Example of the structure of the factorization used in [4] for the case of a 4 x 4 matrix (lifting coefficients s_01, ..., s_43 in the factors S_0, S_1, S_2, S_3, S_4, with permutations P_L and P_R).

Figure 6: Multi-channel EEG lossless coder: multi-channel EEG signals -> scanning (spiral or clustering) -> DPCM -> inter-channel decorrelation (IntKLT) -> time-frequency transform (IntDCT-IV) -> entropy coding (Huffman) -> bit stream.

Table 1: Compression ratio (CR) of the 64-channel EEG lossless compression coder (Figure 6) using the IntKLT and the IntSKLT for channel decorrelation

    64-channel EEG signals              CR
    No channel decorrelation            2.53
    64-point IntKLT                     2.84
    64-point IntSKLT (spiral scan)      2.80
    64-point IntSKLT (clustered)        2.82
    64-point IntSKLT (random)           2.78

Table 2: Encoding of 16 bit-per-sample (bps) 64-channel EEG signals with various lossless coding algorithms; compression percentage is defined as (16 - output bit rate) x 100 / 16

    64 channels         IntSKLT   Shorten   JPEG2000   GZIP
    Compression (%)     64.56     55.37     49.33      30.56
    CR                  2.82      2.24      1.97       1.44
    bps                 5.67      7.14      8.11       11.11

and using a number of small-size KLTs. A special case of a Markov process is discussed. The sub-optimal KLT is then further approximated by using a lifting factorization, resulting in the integer-to-integer mapping IntSKLT. It is shown that the number of lifting coefficients of the IntSKLT is of order O(N^1.585), instead of O(N^2) for the IntKLT. Not only is the computational cost reduced, but the amount of side information that must be transmitted with the compressed data is also reduced. Despite the complexity reduction, the new transform yields near-optimal multi-channel EEG lossless compression performance. Optimal channel ordering/mapping requires further study.

REFERENCES

[1] K. R. Rao and P. C. Yip, The Transform and Data Compression Handbook, Boca Raton: CRC Press, 2001.
[2] G. Antoniol and P. Tonella, "EEG data compression techniques," IEEE Trans. Biomed. Eng., vol. 44, pp. 105-114, Feb. 1997.
[3] N. Memon, X. Kong, and J. Cinkler, "Context-based lossless and near-lossless compression of EEG signals," IEEE Trans. Biomed. Eng., vol. 3, pp. 231-238, Sep. 1999.
[4] Y. Wongsawat, S. Oraintara, T. Tanaka and K. R. Rao, "Lossless multi-channel EEG compression," accepted for presentation at IEEE ISCAS, 2006.
[5] P. Hao and Q. Shi, "Reversible integer KLT for progressive-to-lossless compression of multiple component images," IEEE ICIP, vol. 1, pp. 633-636, Sept. 2003.
[6] L. Galli and S. Salzo, "Lossless hyperspectral compression using KLT," IEEE IGARSS, vol. 1, pp. 313-316, Sept. 2004.
[7] P. Hao and Q. Shi, "Matrix factorizations for reversible integer mapping," IEEE Trans. Signal Processing, vol. 49, pp. 2314-2324, Oct. 2001.
[8] M. Gastpar, P.-L. Dragotti and M. Vetterli, "The distributed, partial, and conditional Karhunen-Loeve transforms," IEEE DCC, pp. 283-292, Mar. 2003.
[9] P. P. Vaidyanathan, Multirate Systems and Filter Banks, Englewood Cliffs, NJ: Prentice Hall, 1993.
[10] R. Geiger, Y. Yokotani, G. Schuller and J. Herre, "Improved integer transforms using multi-dimensional lifting," IEEE ICASSP, vol. 2, pp. 1005-1008, May 2004.
[11] Shorten software: http://www.etree.org/shncom.html/
[12] JPEG2000 software: http://www.kakadusoftware.com/
[13] GZIP software: http://www.gzip.org/

Figure 7: Pattern of the scanning schemes over the 64 scalp electrode positions (FP1, FPZ, FP2, ..., O1, OZ, O2): (a) spiral and (b) clustering into eight clusters of eight channels (VEOU and HEOL are included in cluster 8).

Figure 8: Performance analysis of the 64-point transforms (IntKLT, IntDCT, IntSKLT) using (a) spiral scan data and (b) clustering scan data, where the x-axis indexes the transform coefficients and the y-axis shows their (sorted) variances.