Fast image compression using matrix K-L transform


Daoqiang Zhang, Songcan Chen*
Department of Computer Science and Engineering, Nanjing University of Aeronautics & Astronautics, Nanjing 210016, P.R. China

* Corresponding author. Tel: +86-25-489-2805; Fax: +86-25-489-3777. E-mail: dqzhang@nuaa.edu.cn (D.Q. Zhang), s.chen@nuaa.edu.cn (S.C. Chen).

Abstract: A novel matrix K-L transform (MatKLT) is proposed in this paper as an extension of the conventional K-L transform (KLT) for fast image compression. Experimental results on 18 publicly available benchmark images show that the MatKLT is tens to hundreds of times faster than the standard KLT with comparable image compression quality.

Keywords: Karhunen-Loeve transform (KLT); Matrix KLT (MatKLT); Image compression; Fast algorithm

1. Introduction

Transform coding is one of the most important methods for lossy image compression. However, although it is the optimal linear dimension-reduction transform in the mean-square (reconstruction) error sense, the Karhunen-Loeve transform (KLT) [1], or principal component analysis (PCA) [2], can hardly be used in image compression because of the slow speed of deriving the transform from the covariance matrix constructed from the given training data. Given a set of n-dimensional training data {x_1, x_2, ..., x_L}, the Karhunen-Loeve transform of any n-dimensional vector x is defined as

    y = Φ^T (x − m_x),    (1)

where m_x = (1/L) Σ_{i=1}^{L} x_i is the mean vector of the training data, and the columns of Φ are the eigenvectors of the covariance matrix C_x = (1/L) Σ_{i=1}^{L} (x_i − m_x)(x_i − m_x)^T. The larger the scale (dimension) of the covariance matrix, the slower the computation of its eigenvectors, and hence the slower the derivation of the transform and the subsequent compression or encoding.
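For reference, Eq. (1) amounts to the following minimal NumPy sketch (ours, for illustration; the paper's experiments were actually run in Matlab). It estimates m_x and C_x from the training vectors and keeps the leading eigenvectors; the function name and the parameter d (the number of retained components) are ours:

```python
import numpy as np

def klt_basis(X, d):
    """Classical KLT of Eq. (1): X is an (L, n) array whose rows are the
    vectorized training blocks. Returns the mean m_x and the (n, d) matrix
    Phi whose columns are the d leading eigenvectors of the covariance C_x."""
    m_x = X.mean(axis=0)
    Xc = X - m_x                       # centered training vectors
    C = Xc.T @ Xc / X.shape[0]         # n x n covariance matrix C_x
    w, V = np.linalg.eigh(C)           # eigenvalues in ascending order
    Phi = V[:, ::-1][:, :d]            # keep the d largest eigenvectors
    return m_x, Phi

# y = Phi.T @ (x - m_x) then compresses an n-vector x to d coefficients.
```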

To mitigate this problem, two methods are usually adopted. The first is to replace the KLT with the Discrete Cosine Transform (DCT), as in the JPEG standard [1]. Although the DCT achieves much faster compression than the KLT, it causes a relatively large degradation of compression quality at the same compression ratio. The second is to use parallel techniques [3][4] such as neural networks, including associative memories [5] and the adaptive principal component extraction (APEX) network [2]. Despite being powerful parallel methods, neural networks [5] must still trade off compression speed against quality. In addition, to approximate the classical KLT accurately enough, neural-network realizations of the KLT need more training steps and thus more time.

Apart from the above-mentioned methods, reducing the scale of the covariance matrix of the original KLT is obviously a cheaper alternative. To the best of our knowledge, no other researchers have made attempts in this direction. To introduce our idea, let us recall how the covariance matrix is constructed in the KLT for image compression: first, partition the image to be compressed into a set of non-overlapping subimage blocks of a specified size; then concatenate each block row by row into a vector; finally, collect all these concatenated vectors to construct the needed covariance matrix, from which the eigenvalues and corresponding eigenvectors, and hence the needed transform, can be found. Obviously, the scale of the block determines the efficiency of the computation. The smaller the block, the faster the covariance matrix, and hence the transform, can be computed. However, a smaller block limits, in turn, the achievable compression ratio, making the KLT inapplicable to low-bit-rate encoding.

To improve this situation, we change the traditional construction, which works directly on the concatenated vectors. Encouraged by previous successes [6], in this paper a matrix K-L (linear) transform technique is proposed as an extension of the KLT for fast image compression. Its main idea is to construct the covariance matrix directly from a matrix-type rather than vector-type representation, yielding what we call the generalized covariance matrix (GCM), whose scale is smaller than that of the covariance matrix constructed from the concatenated vectors. As a consequence, computing the transform from the GCM is greatly sped up. Taking an image block concatenated into an n-dimensional vector as an example, the scale of the KLT covariance matrix is n × n, while that of the GCM is just m × m when it is constructed from the m × p matrix obtained by rearranging that n-dimensional vector, where m and p satisfy mp = n. The reduction ratio between the two scales thus reaches p², an impressive result for large p, and such a reduction naturally speeds up finding the transform greatly. Experimental results on 18 publicly available benchmark images show that the MatKLT is tens to hundreds of times faster than the standard KLT with comparable image compression quality.
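To make the reduction concrete, take the 16×16 block size used later in the experiments: n = 256, so the ordinary KLT must eigendecompose a 256 × 256 covariance matrix. Rearranging each block vector with p = 4 gives m = n/p = 64, so the GCM is only 64 × 64, and the scale ratio is n²/m² = p² = 16. Since eigendecomposition time grows roughly cubically with the matrix order, the runtime gain can be even larger, approaching (n/m)³ = p³ = 64 in this case.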

The rest of this paper is organized as follows. Section 2 formulates the matrix K-L transform (MatKLT). Section 3 describes the image compression experiments based on the MatKLT and reports the results. Section 4 draws our conclusions.

2. Matrix K-L transform

A. Formulation of the MatKLT

There are two main steps in our proposed MatKLT. The first is the reassembling (rearranging) step, which rearranges the components of a concatenated subimage-block vector into a new matrix of dimension m × p; this includes the subimage block matrix itself as a special case and is therefore more flexible. For example, the vector x = [1, 2, 3, 3, 5, 2, 7, 0, 2, 1]^T can be rearranged into

    A = [1, 2, 3, 3, 5, 2, 7, 0, 2, 1]^T    or    A = [1 3 5 7 2
                                                        2 3 2 0 1],

corresponding to p = 1 and p = 5 respectively (a one-line reshape; see the sketch after Eq. (3)). The second step performs the MatKLT on the matrices obtained in the first step. Below is a derivation of our MatKLT.

Suppose we are given a matrix-type random variable A = [a_ij] of dimension m × p, and we intend to find a linear transform Φ that compresses A to a B described by

    B = Φ^T (A − Ā),    (2)

where Ā = E(A) is the mean of A, E(·) is the mathematical expectation, and Φ = (φ_1, φ_2, ..., φ_d) ∈ R^{m×d} is the transform to be found. Equation (2) implements a dimensionality reduction from A to B. Reversing the process, we can reconstruct A from B as Â = Φ B + Ā by means of Eq. (2).

Now the key to implementing compression is to find the transform Φ. As in the KLT, we adopt the reconstruction error criterion (REC) and minimize it to seek the optimal transform Φ. Define the REC as follows:

    REC(Φ) = E{ ||A − Â||² } = E{ ||(A − Ā) − ΦΦ^T (A − Ā)||² }
           = E{ tr[ ((A − Ā) − ΦΦ^T(A − Ā)) ((A − Ā) − ΦΦ^T(A − Ā))^T ] }
           = E{ ||A − Ā||² } − tr( Φ^T E[(A − Ā)(A − Ā)^T] Φ ),    (3)

where tr(·) denotes the trace of a matrix, I_d is the identity matrix of dimension d, and ||A|| = ( Σ_{i=1}^{m} Σ_{j=1}^{p} a_ij² )^{1/2} is the Frobenius norm. The last two lines of Eq. (3) follow from the constraint Φ^TΦ = I_d and two properties of the trace and the Frobenius norm: 1) ||A||² = tr(AA^T); 2) tr(AB) = tr(BA).
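As an aside, the rearrangement step above is a single reshape. The printed example is consistent with filling the new matrix column by column, so a Fortran-order reshape (our reading; the paper does not state the fill order explicitly) reproduces it:

```python
import numpy as np

x = np.array([1, 2, 3, 3, 5, 2, 7, 0, 2, 1])  # the 10-dimensional example vector
A = x.reshape(2, 5, order='F')                # p = 5: fill the 2 x 5 matrix column by column
# A = [[1, 3, 5, 7, 2],
#      [2, 3, 2, 0, 1]]
```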

Set R = E[(A − Ā)(A − Ā)^T], i.e., the GCM of the matrix-type random variable A. From Eq. (3), minimizing the REC with respect to Φ is equivalent to maximizing J(Φ) defined by

    J(Φ) = tr( Φ^T R Φ )    (4)

subject to the constraint Φ^TΦ = I_d. Equations (3) and (4) give an optimization formulation identical to that of the KLT, except that the KLT covariance matrix constructed from vector-type random variables is replaced with the new GCM constructed from matrix-type ones. In fact, the following derivation of the transform is identical to that of the KLT; for completeness of the description, we still give it here. To maximize J(Φ) subject to the constraint, we define the Lagrangian

    L(Φ) = J(Φ) − Σ_{j=1}^{d} λ_j ( φ_j^T φ_j − 1 ),    (5)

where the λ_j are Lagrange multipliers. Differentiating Eq. (5) with respect to the φ_j and setting the corresponding derivatives to 0, we have

    R φ_j = λ_j φ_j,    j = 1, 2, ..., d.    (6)

This is an eigenvalue-eigenvector system in (λ_j, φ_j). Since R is symmetric and positive semi-definite, its eigenvalues are non-negative, and Eq. (6) yields the maximum value J(Φ) = Σ_{j=1}^{d} λ_j, equivalently the minimum REC(Φ) = Σ_{j=d+1}^{m} λ_j. Here we take the eigenvectors of R corresponding to the d largest eigenvalues to construct the needed transform Φ = (φ_1, φ_2, ..., φ_d), which minimizes the REC and at the same time maximally retains the original information of the data. As a result, the original data are effectively compressed.

B. Un-correlation analysis

As is well known, the KLT removes the correlation between the original data components to achieve compression. Our MatKLT also possesses this property, as analyzed below. Write the rows of B as b_j = φ_j^T (A − Ā), j = 1, 2, ..., d.

Then for any b_i and b_j,

    E(b_i b_j^T) = E[ φ_i^T (A − Ā)(A − Ā)^T φ_j ] = φ_i^T R φ_j = λ_j φ_i^T φ_j = { λ_j,  i = j
                                                                                   { 0,    i ≠ j.    (7)

In the above derivation we used Eq. (6). Equation (7) tells us that any two different row vectors b_i and b_j of B are indeed uncorrelated, which helps the subsequent compression. In particular, when the matrix A is kept as a concatenated vector (p = 1), our MatKLT reduces to the ordinary KLT.

C. Real reconstruction

In a real implementation we are generally given only a limited sample set of matrix data {A_i, i = 1, 2, ..., N}. The covariance matrix (GCM) R and the mean Ā are then estimated from the given samples as

    R = (1/N) Σ_{i=1}^{N} (A_i − Ā)(A_i − Ā)^T    and    Ā = (1/N) Σ_{i=1}^{N} A_i.    (8)

Substituting these into Eq. (6) and solving it, we obtain a transform Φ for the given dataset. To implement decompression or reconstruction, we use

    Â = Φ B + Ā,    (9)

which restores the original image with minimal error.

Now let us discuss the effect of p on the compression speed. According to the definition of R, its scale becomes smaller as p increases, owing to the relation mp = n. Therefore, to obtain a high speedup for the MatKLT, p should take as large a value as possible. On the other hand, as can be seen from Eq. (2), p cannot be too large if a high compression ratio is to be achieved. So p controls the tradeoff between speed and compression ratio (or compression quality). In the next section we give comparative experimental results on 18 publicly available benchmark images to illustrate the characteristics of our MatKLT.
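To make Eqs. (2), (6), (8) and (9) concrete, here is a minimal NumPy sketch of the whole fit/compress/reconstruct cycle (ours, not the authors' Matlab implementation; the function names are illustrative):

```python
import numpy as np

def matklt_fit(As, d):
    """Estimate the MatKLT from sample matrices As of shape (N, m, p).
    Returns the mean matrix A_bar (m, p) and the transform Phi (m, d)."""
    A_bar = As.mean(axis=0)
    Ac = As - A_bar                            # centered samples
    # GCM of Eq. (8): R = (1/N) sum_i (A_i - A_bar)(A_i - A_bar)^T, an m x m matrix
    R = np.einsum('nij,nkj->ik', Ac, Ac) / As.shape[0]
    w, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
    Phi = V[:, ::-1][:, :d]                    # d largest eigenvectors, Eq. (6)
    return A_bar, Phi

def matklt_compress(A, A_bar, Phi):
    return Phi.T @ (A - A_bar)                 # B = Phi^T (A - A_bar), Eq. (2)

def matklt_reconstruct(B, A_bar, Phi):
    return Phi @ B + A_bar                     # A_hat = Phi B + A_bar, Eq. (9)
```

With the 16×16 blocks and p = 2 used below, each A_i is 128 × 2 and R is only 128 × 128, versus the 256 × 256 covariance matrix of the ordinary KLT.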

3. Image compression experiments

A. Configuration

In our experiments we compare the proposed MatKLT with the classical KLT and DCT algorithms on 18 benchmark images. Specifically, we compare the compressed image quality, evaluated by the peak signal-to-noise ratio (PSNR), and the corresponding run time of the algorithms. All algorithms are executed on the same computer, equipped with a 1.7 GHz Intel Pentium 4 processor, 256 MB of memory and a 40 GB hard disk, and simulated in Matlab 6.5 by MathWorks Inc. It is important to note that no run-time optimizations were applied to any algorithm, since we are only concerned with the relative execution speed of the KLT and the MatKLT. Accordingly, we compare only the compressed image quality between the DCT and the other two algorithms; we do not list the execution time of the DCT, because its program structure differs greatly from those of the KLT and MatKLT, so such a timing comparison would make little sense.

The images used in the experiments are of size 512x512, except Barbara (720x580), Boats (720x576), Camera (256x256), Columbia (480x480) and Goldhill (720x576). For all images we use a block size of 16x16, except Barbara (20x20). We use d = 16 in Eqs. (4) and (5) for the KLT (p = 1) for all images except Barbara (d = 20). To maintain the same compression ratio, d = 8 (p = 2) and d = 4 (p = 4) are used in Eqs. (4) and (5) for the MatKLT (for Barbara, d = 10 (p = 2) or d = 5 (p = 4)). In all cases, therefore, 16 components (20 for Barbara) are kept in the transformed image matrix. Furthermore, no quantization step is used in any algorithm, and 8 bits are used to represent each component of the compressed matrix B in Eq. (2). Thus the compression ratio is 16:1 for all images, except Barbara (20:1).

B. Results

Table 1 shows the comparison of compressed image quality (in PSNR) and execution time between the KLT and the MatKLT. Here we compute the PSNR as

    PSNR (dB) = 10 log_10 [ 255² / ( (1/(mp)) Σ_{i=1}^{m} Σ_{j=1}^{p} (a_ij − â_ij)² ) ],    (10)

where a_ij and â_ij denote the pixel values of the original image A = [a_ij]_{m×p} and the reconstructed image Â = [â_ij]_{m×p} respectively.
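A direct transcription of Eq. (10), assuming 8-bit images (peak value 255); the function name is ours:

```python
import numpy as np

def psnr(a, a_hat):
    """PSNR of Eq. (10) in dB for two 8-bit images of the same shape."""
    mse = np.mean((a.astype(float) - a_hat.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```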

Table 1. Comparison of algorithm performance: PSNR in dB, with execution time in parentheses (unit: second).

Test image   KLT (p=1)      MatKLT (p=2)   MatKLT (p=4)    DCT
Baboon       22.89 (3.3)    22.7 (0.1)     22.43 (0.02)    22.43
Barbara      24.5 (3.28)    23.84 (0.44)   23.32 (0.03)    20.77
Boats        28.63 (2.8)    27.95 (0.1)    26.9 (0.02)     23.26
Bridge       23.73 (3.09)   23.40 (0.1)    22.9 (0.03)     21.3
Camera       24.37 (2.47)   23.39 (0.09)   22.83 (0.03)    20.84
Columbia     29.38 (2.49)   28.53 (0.06)   26.73 (0.02)    22.6
Couple       26.52 (2.88)   26.07 (0.08)   25.38 (0.02)    23.6
Crowd        27.23 (2.67)   26.64 (0.08)   25.57 (0.02)    21.45
Goldhill     29.33 (3.02)   28.96 (0.09)   28.20 (0.03)    24.9
Lake         26.06 (2.86)   25.47 (0.08)   24.50 (0.02)    21.8
Lax          23.26 (3.25)   23.09 (0.09)   22.87 (0.02)    21.98
Lena         30.8 (2.89)    30.39 (0.09)   29.6 (0.02)     25.7
Man          27.43 (3.05)   27.2 (0.08)    26.48 (0.02)    23.43
Milkdrop     23.26 (3.25)   23.09 (0.09)   22.87 (0.02)    21.98
Peppers      29.73 (3.08)   28.86 (0.09)   27.70 (0.02)    22.66
Plane        27.48 (2.6)    26.95 (0.08)   25.67 (0.02)    22.49
Woman        27.68 (3.05)   27.29 (0.09)   26.63 (0.02)    23.65

From Table 1 it can be observed that the KLT achieves the best reconstructed image quality, which confirms its optimality in transform image coding. The image reconstructed by the MatKLT (p = 2) is only slightly worse, incurring about 0.5 dB of degradation compared with the KLT. Even for p = 4, the degradation of the MatKLT is still only about 1-2 dB. By comparison, the compressed image quality of the DCT is very poor: as shown in Table 1, the DCT generally loses about 5 dB relative to the KLT, and for some images, such as 'woman2', the degradation is even greater. Fig. 1 gives the images of 'Lena' reconstructed by the KLT, MatKLT and DCT. In Fig. 1 there are no apparent visual differences between the images reconstructed by the KLT and the MatKLT, while the DCT-reconstructed image is much degraded.

On the other hand, the execution speed of the MatKLT (p = 2, 4) is much improved over the KLT (p = 1). From the same table, the speedup of the MatKLT (p = 2) over the KLT (p = 1) is about 30:1, and that of the MatKLT (p = 4) over the KLT (p = 1) is about 300:1 for most images. The price paid for this large speedup is only a 1-2 dB degradation of the reconstructed image compared with the KLT. Fig. 2 shows the performance of the MatKLT under different values of p for the 'Lena' image with block size 32x32. As seen from the figure, as p grows the quality of the reconstructed image degrades gradually, while the execution time is greatly reduced. However, to keep a balance among speed, compression ratio and image quality, we confine ourselves to p = 2 and 4 in this paper.

4. Conclusions

We have presented a novel matrix K-L transform and applied it to image compression. The experimental results show that the MatKLT method requires much less computation time than the KLT, at the price of a slight degradation of compressed image quality. The method has the potential to be a faster technique for image data reduction, especially for real-time and progressive decoding applications.

Our next step is to compare the execution times of the MatKLT, KLT and DCT algorithms implemented with optimizations, e.g. adopting the C programming language instead of the Matlab used in this paper. It is also worth mentioning, regarding the K-L transform or PCA implementation, that although we employed the so-called batch methods [2] for computing the transform in our experiments, in practice Hebbian-based neural networks can implement the MatKLT algorithm more effectively and adaptively; this will be one of the subjects of our future research.

Acknowledgements

We would like to thank the reviewers for their valuable suggestions for improving the presentation of this paper. We also thank the National Science Foundations of China and of Jiangsu under Grant Nos. 60473035 and BK2002092, the Jiangsu Natural Science Key Project (BK200400), the Jiangsu QingLan Project Foundation and the Returnee's Foundation of the China Scholarship Council for their partial support.

References

[1] R.C. Gonzalez, R.E. Woods, Digital Image Processing, 2nd Edition, Prentice-Hall, Jan. 2002.
[2] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd Edition, Prentice-Hall, Jul. 1998.
[3] S. Bhama, H. Singh, N.D. Phate, "Parallelism for the faster implementation of the K-L transform for image compression", Pattern Recognition Letters, vol. 14, pp. 651-659, Aug. 1993.
[4] S. Costa, S. Fiori, "Image compression using principal component neural networks", Image and Vision Computing, vol. 19, pp. 649-668, Aug. 2001.
[5] C.C. Wang, C.R. Tsai, "Data compression by the recursive algorithm of exponential bidirectional associative memory", IEEE Trans. Syst. Man Cybern., vol. 28, no. 4, pp. 125-134, Apr. 1998.
[6] Jian Yang, Jingyu Yang, "From image vector to matrix: a straightforward image projection technique - IMPCA vs. PCA", Pattern Recognition, vol. 35, no. 9, pp. 1997-1999, Sep. 2002.

Fig. 1. Compressed images at 0.5 bpp without quantization: (a) original 'Lena' image, (b) KLT result (PSNR = 30.8), (c) MatKLT (p = 2, PSNR = 30.4), (d) MatKLT (p = 4, PSNR = 29.6), (e) DCT result (PSNR = 25.2).

Fig. 2. Comparison of the performance of five methods: 1. KLT (i.e., MatKLT with p = 1); 2. MatKLT (p = 2); 3. MatKLT (p = 4); 4. MatKLT (p = 8); 5. MatKLT (p = 16).