HOSVD Based Image Processing Techniques


András Rövid, Imre J. Rudas, Szabolcs Sergyán
Óbuda University, John von Neumann Faculty of Informatics
Bécsi út 96/b, 1034 Budapest, Hungary
rovid.andras@nik.uni-obuda.hu, rudas@uni-obuda.hu, sergyan.szabolcs@nik.uni-obuda.hu

László Szeidl
Széchenyi István University, System Theory Laboratory
Egyetem tér 1., 9026 Győr, Hungary
szeidl@sze.hu

Abstract: In the framework of the paper an improved method for image resolution enhancement is introduced. The enhancement is performed in another representation domain, where the image function is expressed with the help of polylinear functions on a higher order singular value decomposition (HOSVD) basis. The paper gives a detailed description of how to determine the polylinear functions corresponding to an image and how to process them in order to obtain a higher resolution image. Furthermore, the proposed approach and the Fourier-based approach will be compared from the point of view of their effectiveness.

Key Words: Image resolution, HOSVD, approximation, numerical reconstruction

1 Introduction

Nowadays the importance of image processing and machine vision-based applications is increasing significantly. In order to perform the desired tasks more accurately and reliably, the related algorithms need continuous improvement. An important factor is the data representation domain in which the processing is performed. Representing an image in another domain may open new possibilities for its processing. In many cases these representations express the image intensity function as a combination of simpler functions (components) having useful predefined properties.

Let us mention some applications and concepts of the related methods and algorithms from the field of image processing where the applied domain plays a significant role. Image resolution enhancement, filtering [1], image compression [2], etc. can be performed much more effectively when working in another image representation domain [3]. In the frequency domain, for instance, image compression is much more efficient than in the spatial one. On the other hand, to represent the image in the frequency domain without a meaningful decline in quality, a relatively large number of trigonometric components is needed. Another well known application is the resolution enhancement of an image, investigated in this paper in more detail, focusing on its realization and effectiveness in the proposed domain.

Numerical reconstruction, or recovery of a continuous intensity surface from discrete image data samples, is needed, for example, when the image is rescaled or remapped from one pixel grid to another. In case of image enlargement the color components, or in case of grayscale images the intensity, of the missing pixels should be estimated. One common way to estimate the color values of such pixels is to interpolate the discrete source image. Several issues affect the perceived quality of the interpolated images: sharpness of edges, freedom from artifacts and reconstruction of high frequency details. There are numerous methods approximating the image intensity function based on the color and location of known image points, such as bilinear, bicubic or spline interpolation, all working in the spatial domain of the input image.
A new data representation domain is introduced in the paper, connected to the higher order singular value decomposition (HOSVD), which is a generalized form of the well known singular value decomposition (SVD). As shown in the upcoming sections, any n-variable smooth function can be expressed with the help of a system of orthonormal one-variable smooth functions on an HOSVD basis. The main aim of the paper is to numerically reconstruct these specially determined one-variable functions using the HOSVD and to show how this approach can support certain image processing tasks and problems. The paper is organized as follows: Section 2 gives a closer view of how to express a multidimensional function using polylinear functions on an HOSVD basis and how to reconstruct these polylinear functions.

Section 3 shows how this representation can be applied in image processing for resolution enhancement, while in Section 4 the proposed method is compared to the well known Fourier transformation from the approximation error point of view. Section 5 presents the experimental results and, finally, conclusions are reported.

2 Theoretical background

The approximation methods of mathematics are widely used in theory and practice for many problems. If we consider an n-variable smooth function $f(\mathbf{x})$, $\mathbf{x} = (x_1,\dots,x_N)^T$, $x_n \in [a_n,b_n]$, $1 \le n \le N$, then we can approximate it with the series

$$f(\mathbf{x}) \approx \sum_{k_1=1}^{I_1} \cdots \sum_{k_N=1}^{I_N} \alpha_{k_1,\dots,k_N}\, p_{1,k_1}(x_1)\cdots p_{N,k_N}(x_N), \qquad (1)$$

where the system of orthonormal functions $p_{n,k_n}(x_n)$ can be chosen in the classical way as orthonormal polynomials or trigonometric functions in the separate variables, and the numbers of functions $I_n$ appearing in (1) are large enough.

With the help of the Higher Order Singular Value Decomposition (HOSVD) a new approximation method was developed in [7], [5], in which a specially determined system of orthonormal functions depending on the function $f(\mathbf{x})$ is used instead of some other system of orthonormal polynomials or trigonometric functions.

Assume that the function $f(\mathbf{x})$ can be given with some functions $w_{n,i}(x_n)$, $x_n \in [a_n,b_n]$, in the form

$$f(\mathbf{x}) = \sum_{k_1=1}^{I_1} \cdots \sum_{k_N=1}^{I_N} \alpha_{k_1,\dots,k_N}\, w_{1,k_1}(x_1)\cdots w_{N,k_N}(x_N). \qquad (2)$$

Denote by $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ the N-dimensional tensor determined by the elements $\alpha_{i_1,\dots,i_N}$, $1 \le i_n \le I_n$, $1 \le n \le N$, and let us use the following notations (see [4]): $\mathcal{A} \times_n U$ denotes the n-mode tensor-matrix product, and $\mathcal{A} \bigotimes_{n=1}^{N} U_n$ the multiple product $\mathcal{A} \times_1 U_1 \times_2 U_2 \cdots \times_N U_N$. The n-mode tensor-matrix product is defined in the following way. Let $\mathcal{A}$ be an $M_1 \times \cdots \times M_N$ tensor and $U$ a $K_n \times M_n$ matrix; then $\mathcal{A} \times_n U$ is an $M_1 \times \cdots \times M_{n-1} \times K_n \times M_{n+1} \times \cdots \times M_N$ tensor for which the relation

$$(\mathcal{A} \times_n U)_{m_1,\dots,m_{n-1},k_n,m_{n+1},\dots,m_N} = \sum_{m_n=1}^{M_n} a_{m_1,\dots,m_n,\dots,m_N}\, U_{k_n,m_n}$$

holds. A detailed discussion of tensor notations and operations is given in [4]. We also note that the sign we use for the n-mode product may differ from the one used in [4].

Using this definition, the function (2) can be rewritten in tensor product form as

$$f(\mathbf{x}) = \mathcal{A} \bigotimes_{n=1}^{N} \mathbf{w}_n(x_n), \qquad (3)$$

where $\mathbf{w}_n(x_n) = (w_{n,1}(x_n),\dots,w_{n,I_n}(x_n))^T$, $1 \le n \le N$.

Based on the HOSVD it was proved in [6] that under mild conditions (3) can be represented in the form

$$f(\mathbf{x}) = \mathcal{D} \bigotimes_{n=1}^{N} \mathbf{w}_n(x_n), \qquad (4)$$

where $\mathcal{D} \in \mathbb{R}^{r_1 \times \cdots \times r_N}$ is a special (so-called core) tensor with the following properties:

1. $r_n = \mathrm{rank}_n(\mathcal{A})$ is the n-mode rank of the tensor $\mathcal{A}$, i.e. the rank of the linear space spanned by the n-mode vectors of $\mathcal{A}$:
$\{(a_{i_1,\dots,i_{n-1},1,i_{n+1},\dots,i_N},\dots,a_{i_1,\dots,i_{n-1},I_n,i_{n+1},\dots,i_N})^T : 1 \le i_j \le I_j,\ j \ne n\}$;

2. all-orthogonality of the tensor $\mathcal{D}$: two subtensors $\mathcal{D}_{i_n=\alpha}$ and $\mathcal{D}_{i_n=\beta}$ (obtained by keeping the n-th index of the elements of $\mathcal{D}$ fixed at $i_n=\alpha$ and $i_n=\beta$, respectively) are orthogonal for all possible values of $n$, $\alpha$ and $\beta$: $\langle \mathcal{D}_{i_n=\alpha}, \mathcal{D}_{i_n=\beta} \rangle = 0$ when $\alpha \ne \beta$. Here the scalar product $\langle \mathcal{D}_{i_n=\alpha}, \mathcal{D}_{i_n=\beta} \rangle$ denotes the sum of products of the corresponding elements of the two subtensors;

3. ordering: $\|\mathcal{D}_{i_n=1}\| \ge \|\mathcal{D}_{i_n=2}\| \ge \cdots \ge \|\mathcal{D}_{i_n=r_n}\| > 0$ for all possible values of $n$, where $\|\mathcal{D}_{i_n=\alpha}\| = \sqrt{\langle \mathcal{D}_{i_n=\alpha}, \mathcal{D}_{i_n=\alpha} \rangle}$ denotes the Kronecker-norm of the tensor $\mathcal{D}_{i_n=\alpha}$.

The components $w_{n,i}(x_n)$ of the vector-valued functions $\mathbf{w}_n(x_n) = (w_{n,1}(x_n),\dots,w_{n,r_n}(x_n))^T$, $1 \le n \le N$, are orthonormal in the $L_2$ sense on the interval $[a_n,b_n]$, i.e. for every $n$:

$$\int_{a_n}^{b_n} w_{n,i_n}(x_n)\, w_{n,j_n}(x_n)\, dx_n = \delta_{i_n,j_n}, \qquad 1 \le i_n, j_n \le r_n,$$

where $\delta_{i,j}$ is the Kronecker delta ($\delta_{i,j}=1$ if $i=j$ and $\delta_{i,j}=0$ if $i \ne j$).
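To make the decomposition above more tangible, the following Python/NumPy sketch (an illustration only, not the authors' implementation; the helper names mode_n_product, unfold and hosvd are ours) computes the n-mode product defined above and an untruncated HOSVD of a small 3-way tensor from the SVDs of its mode unfoldings, then checks the reconstruction $\mathcal{A} = \mathcal{D} \times_1 U_1 \times_2 U_2 \times_3 U_3$. In the discrete setting of (7) and (9), the columns of each factor play the role of the sampled orthonormal functions $w_{n,k_n}$.

```python
import numpy as np

def mode_n_product(tensor, matrix, n):
    """n-mode tensor-matrix product: contracts mode n of `tensor`
    with a matrix of shape (K_n, M_n), where M_n = tensor.shape[n]."""
    t = np.moveaxis(tensor, n, 0)                  # bring mode n to the front
    shape = t.shape
    t = matrix @ t.reshape(shape[0], -1)           # multiply along that mode
    t = t.reshape((matrix.shape[0],) + shape[1:])
    return np.moveaxis(t, 0, n)                    # move the mode back

def unfold(tensor, n):
    """Mode-n unfolding: rows are indexed by mode n."""
    return np.moveaxis(tensor, n, 0).reshape(tensor.shape[n], -1)

def hosvd(tensor):
    """Plain (untruncated) HOSVD: U_n = left singular vectors of the
    mode-n unfolding, core D = A x_1 U_1^T x_2 U_2^T ... x_N U_N^T."""
    factors = [np.linalg.svd(unfold(tensor, n), full_matrices=False)[0]
               for n in range(tensor.ndim)]
    core = tensor
    for n, U in enumerate(factors):
        core = mode_n_product(core, U.T, n)        # project onto the basis
    return core, factors

A = np.random.rand(8, 6, 3)
D, Us = hosvd(A)
recon = D
for n, U in enumerate(Us):
    recon = mode_n_product(recon, U, n)
print(np.allclose(A, recon))                       # True: exact reconstruction
```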

The form (4) was called in [6] the HOSVD canonical form of the function (2).

The canonical form can be reconstructed numerically as follows. Let us decompose the intervals $[a_n,b_n]$, $n = 1..N$, into $M_n$ disjoint subintervals $\Delta_{n,m_n}$, $1 \le m_n \le M_n$, as

$$\xi_{n,0} = a_n < \xi_{n,1} < \cdots < \xi_{n,M_n} = b_n, \qquad \Delta_{n,m_n} = [\xi_{n,m_n-1}, \xi_{n,m_n}).$$

Assume that the functions $w_{n,k_n}(x_n)$, $x_n \in [a_n,b_n]$, $1 \le n \le N$, in equation (2) are piecewise continuously differentiable, and assume also that we can observe the values of the function $f(\mathbf{x})$ at the points

$$y_{i_1,\dots,i_N} = (x_{1,i_1},\dots,x_{N,i_N}), \qquad x_{n,m_n} \in \Delta_{n,m_n},\ 1 \le m_n \le M_n,\ 1 \le n \le N. \qquad (5)$$

Based on the HOSVD, a new method was developed in [6] for the numerical reconstruction of the canonical form of the function $f(\mathbf{x})$ using the values $f(y_{i_1,\dots,i_N})$, $1 \le i_n \le M_n$, $1 \le n \le N$. We discretize the function $f(\mathbf{x})$ over all grid points as $b_{m_1,\dots,m_N} = f(y_{m_1,\dots,m_N})$ and construct the N-dimensional tensor $\mathcal{B} = (b_{m_1,\dots,m_N})$ from these values; the size of this tensor is $M_1 \times \cdots \times M_N$. Further, we discretize the vector-valued functions $\mathbf{w}_n(x_n)$ over the discretization points $x_{n,m_n}$ and construct the matrices $W_n$ from the discretized values as

$$W_n = \begin{pmatrix} w_{n,1}(x_{n,1}) & w_{n,2}(x_{n,1}) & \cdots & w_{n,r_n}(x_{n,1}) \\ w_{n,1}(x_{n,2}) & w_{n,2}(x_{n,2}) & \cdots & w_{n,r_n}(x_{n,2}) \\ \vdots & \vdots & & \vdots \\ w_{n,1}(x_{n,M_n}) & w_{n,2}(x_{n,M_n}) & \cdots & w_{n,r_n}(x_{n,M_n}) \end{pmatrix}. \qquad (6)$$

Then, by (4) and (6), the tensor $\mathcal{B}$ can simply be given as

$$\mathcal{B} = \mathcal{D} \bigotimes_{n=1}^{N} W_n. \qquad (7)$$

For further details see [5].

3 Resolution Enhancement on HOSVD basis

Let $f(\mathbf{x})$, $\mathbf{x} = (x_1,x_2,x_3)^T$, represent the image function, where $x_1$ and $x_2$ correspond to the vertical and horizontal coordinates of the pixel, respectively, and $x_3$ is related to the color components of the pixel, i.e. the red, green and blue color components in the case of an RGB image. Based on the notes discussed in the previous section, the function $f(\mathbf{x})$ can be approximated as

$$f(\mathbf{x}) \approx \sum_{k_1=1}^{I_1} \sum_{k_2=1}^{I_2} \sum_{k_3=1}^{I_3} \alpha_{k_1,k_2,k_3}\, w_{1,k_1}(x_1)\, w_{2,k_2}(x_2)\, w_{3,k_3}(x_3). \qquad (8)$$

The red, green and blue color components of the pixels can be stored in an $m \times n \times 3$ tensor, where $n$ and $m$ correspond to the width and height of the image, respectively. Let $\mathcal{B}$ denote this tensor. The first step is to reconstruct the functions $w_{n,k_n}$, $1 \le n \le 3$, $1 \le k_n \le I_n$, based on the HOSVD of the tensor $\mathcal{B}$:

$$\mathcal{B} = \mathcal{D} \bigotimes_{n=1}^{3} U^{(n)}, \qquad (9)$$

where $\mathcal{D}$ is the so-called core tensor. As described in the previous section, the columns of the matrices $U^{(n)}$, $1 \le n \le 3$, represent the discretized form of the functions $w_{n,k_n}(x_n)$ corresponding to the appropriate dimension $n$.

Our goal is to demonstrate the effectiveness of image scaling in the HOSVD-based domain. Let $s \in \{1,2,\dots\}$ denote the number of pixels to be injected between each neighbouring pixel pair in the horizontal and vertical directions. First, let us consider the first column $U^{(1)}_1$ of the matrix $U^{(1)}$. Based on the previous sections it can be seen that the value $w_{1,1}(x_{1,1})$ corresponds to the 1st element of $U^{(1)}_1$, $w_{1,1}(x_{1,2})$ to the 2nd element, ..., and $w_{1,1}(x_{1,M_1})$ to the $M_1$-th element of $U^{(1)}_1$. To scale the image in the HOSVD-based domain, the matrices $U^{(i)}$, $i = 1..2$, should be updated, depending on $s$, as follows: the number of columns remains the same, while the number of rows is extended according to the factor $s$. Let us denote the matrix obtained in this way by $V^{(1)}$. For example, consider the column $U^{(1)}_1$ of $U^{(1)}$. The elements of $V^{(1)}_1$ are determined as

$$V^{(1)}_1(1) := U^{(1)}_1(1),\quad V^{(1)}_1(s+2) := U^{(1)}_1(2),\quad V^{(1)}_1(2s+3) := U^{(1)}_1(3),\ \dots,\ V^{(1)}_1((M_1-1)s + M_1) := U^{(1)}_1(M_1).$$
The missing elements of $V^{(1)}_1$ can be determined by interpolation; in this paper cubic spline interpolation was applied. The remaining columns should be processed similarly. After every matrix element has been determined, the enlarged image can be obtained using equation (9).
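The enlargement step just described can be sketched as follows, reusing the hosvd and mode_n_product helpers from the Section 2 sketch; the use of SciPy's CubicSpline and the (s+1)-spaced placement of the original samples follow the element assignment given above, but this is an illustration of the idea under those assumptions, not the authors' code.

```python
import numpy as np
from scipy.interpolate import CubicSpline
# Assumes mode_n_product() and hosvd() from the Section 2 sketch are in scope.

def stretch_columns(U, s):
    """Place the rows of U (s+1) samples apart on a finer grid and fill the
    gaps column-wise with cubic spline interpolation."""
    M = U.shape[0]
    coarse = np.arange(M) * (s + 1)              # original samples keep their values
    fine = np.arange((M - 1) * (s + 1) + 1)      # dense target grid
    return CubicSpline(coarse, U, axis=0)(fine)  # interpolate every column at once

def enlarge(image, s):
    """Enlarge an H x W x 3 image roughly (s+1)-fold per spatial axis by
    up-sampling the first two HOSVD factor matrices (the colour factor U3
    is left untouched) and recomposing with the core tensor."""
    D, (U1, U2, U3) = hosvd(image.astype(float))
    V1, V2 = stretch_columns(U1, s), stretch_columns(U2, s)
    out = mode_n_product(mode_n_product(mode_n_product(D, V1, 0), V2, 1), U3, 2)
    return np.clip(out, 0.0, 255.0)              # assuming an 8-bit intensity range

# Usage, assuming img is an H x W x 3 array:
# bigger = enlarge(img, s=2)   # injects 2 pixels between neighbouring pixels
```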

4 Fourier vs. HOSVD

The proposed approach introduced in the previous sections uses orthonormal functions $w_{n,i}(x_n)$, $x_n \in [a_n,b_n]$ (see Section 2), for approximating an n-variable smooth function. We saw how these functions can be numerically reconstructed and what properties they have. Comparing the proposed approach to the Fourier transformation, similarities can be observed in their behaviour. As is well known, the Fourier transformation is built on trigonometric functions, while the HOSVD approach uses the functions $w_{n,i}(x_n)$, which are specific to the approximated n-variable function. In both cases the functions form an orthonormal basis. Since in the case of the HOSVD the basis functions are tailored to the function, far fewer components are needed than in the Fourier-based approach to achieve the same approximation accuracy.

Let us mention some common, widely used applications of both approaches. In Fourier-based smoothing, some of the higher frequencies of the frequency domain are dismissed, resulting in a smoothed image (low-pass filter). With the HOSVD, a similar effect can be observed when dismissing the polylinear functions corresponding to the smaller singular values in certain dimensions. The same concept can also be used for data compression. In the opposite case, i.e. when only the functions corresponding to the smaller singular values are maintained, the result is an edge detector; in the Fourier approach, detecting edges in an image corresponds to dismissing the lower frequency components (high-pass filter).

The examples show that with the HOSVD a much smaller number of basis functions is enough than with the Fourier-based approach to represent the image without significant information loss. When dismissing high frequency components there is a frequency threshold, depending on the concrete image, below which dismissing further frequencies introduces wave-like artifacts into the image as noise. With the proposed approach the ratio of maintained to dismissed components may be significantly smaller while still achieving the same quality as when trigonometric functions are applied.

As described in the previous section, the columns of the matrices $U^{(n)}$, $1 \le n \le 3$, contain the control points of the functions $w_{n,k_n}(x_n)$ corresponding to the appropriate dimension $n$. This means that there are as many one-variable functions for a dimension as there are columns in the orthonormal matrix corresponding to that dimension. The number of these functions can be further decreased by dismissing some columns of the orthonormal matrices obtained by the HOSVD. Let $C_n$, $0 \le C_n \le I_n$, $n = 1..N$, stand for the number of dismissed columns in the n-th dimension. Dismissing some of the columns is equivalent to dismissing some of the frequencies in the Fourier approach:

$$f(\mathbf{x}) \approx \sum_{k_1=1}^{I_1-C_1} \sum_{k_2=1}^{I_2-C_2} \sum_{k_3=1}^{I_3-C_3} \alpha_{k_1,k_2,k_3}\, w_{1,k_1}(x_1)\, w_{2,k_2}(x_2)\, w_{3,k_3}(x_3). \qquad (10)$$

The examples below clearly show that the proposed approach has good compression capabilities, which further extends its applicability in the field of image processing.
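The column dismissal in (10) amounts to keeping only the leading columns of each factor matrix and the matching block of the core tensor. A rough sketch follows, again reusing the helpers introduced earlier; the function name and argument layout are our assumptions, not the paper's code.

```python
import numpy as np
# Assumes hosvd() and mode_n_product() from the Section 2 sketch are in scope.

def truncate_hosvd(image, keep):
    """Keep only the first keep[n] columns of U^(n) and the matching block of
    the core tensor, i.e. dismiss C_n = I_n - keep[n] components per mode,
    as in equation (10)."""
    D, Us = hosvd(image.astype(float))
    Us = [U[:, :k] for U, k in zip(Us, keep)]       # leading columns only
    D = D[tuple(slice(0, k) for k in keep)]         # matching core block
    approx = D
    for n, U in enumerate(Us):
        approx = mode_n_product(approx, U, n)
    return approx

# e.g. keep 50 row-functions, 50 column-functions and all 3 colour functions;
# the fewer components kept, the stronger the smoothing / compression effect:
# approx = truncate_hosvd(img, keep=(50, 50, 3))
```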
5 Examples

Part-1 (HOSVD vs. Fourier)

In this section some approximations performed by the proposed approach and by the Fourier-based approach can be observed. As the number of used components decreases, the observable differences in quality become more significant. In the examples below the same number of components has been used in both the HOSVD-based and the Fourier-based case, in order to show how the form of the determined functions influences the quality. The differences in quality of the images below are much more visible in color (see the electronic version of the paper).

Part-2 (Resolution Enhancement)

The pictures illustrate the effectiveness of image zooming using the proposed approach. The results obtained by the HOSVD are compared to the results obtained by the bilinear and bicubic image interpolation methods.

6 Conclusion

In the present paper a new image representation domain and reconstruction technique has been introduced. The results show how the efficiency of certain tasks depends on the applied domain. Image rescaling has been performed using the proposed technique and has been compared to other well known image interpolation methods. Using this technique the resulting image maintains the edges more accurately than the other well-known image interpolation methods. Furthermore, some properties of the proposed representation domain have been compared to the corresponding properties of the Fourier-based approximation. The results show that in the proposed domain some tasks can be performed more efficiently than in other domains.

Figure 1: Original image (24-bit RGB)
Figure 2: HOSVD-based approximation using 7500 components composed from polylinear functions on HOSVD basis
Figure 3: Fourier-based approximation using 7500 components composed from trigonometric functions
Figure 4: HOSVD-based approximation using 2700 components composed from polylinear functions on HOSVD basis
Figure 5: Fourier-based approximation using 2700 components composed from trigonometric functions
Figure 6: The original image.

Figure 7: The 10x enlarged rectangular area using bilinear interpolation.
Figure 8: The 10x enlarged rectangular area using bicubic interpolation.
Figure 9: The 10x enlarged rectangular area using the proposed HOSVD-based method. Smoother edges can be observed.

Acknowledgements: The research was supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences.

References:
[1] P. Bojarczak and S. Osowski, Denoising of Images - A Comparison of Different Filtering Approaches, WSEAS Transactions on Computers, Vol. 3, Issue 3, pp. 738-743, July 2004.
[2] A. Khashman and K. Dimililer, Image Compression using Neural Networks and Haar Wavelet, WSEAS Transactions on Signal Processing, Vol. 4, No. 5, pp. 330-339, 2008.
[3] R. Kountchev, S. Rubin, M. Milanova, V. Todorov and R. Kountcheva, Non-Linear Image Representation Based on IDP with NN, WSEAS Transactions on Signal Processing, ISSN 1790-5052, Vol. 5, Issue 9, pp. 315-325, September 2009.
[4] L. De Lathauwer, B. De Moor and J. Vandewalle, A Multilinear Singular Value Decomposition, SIAM Journal on Matrix Analysis and Applications, Vol. 21, No. 4, pp. 1253-1278, 2000.
[5] L. Szeidl and P. Várlaki, HOSVD Based Canonical Form for Polytopic Models of Dynamic Systems, Journal of Advanced Computational Intelligence and Intelligent Informatics, ISSN 1343-0130, Vol. 13, No. 1, pp. 52-60, 2009.
[6] L. Szeidl, P. Baranyi, Z. Petres and P. Várlaki, Numerical Reconstruction of the HOSVD Based Canonical Form of Polytopic Dynamic Models, in 3rd International Symposium on Computational Intelligence and Intelligent Informatics, Agadir, Morocco, 2007, pp. 111-116.
[7] L. Szeidl, I. Rudas, A. Rövid and P. Várlaki, HOSVD Based Method for Surface Data Approximation and Compression, in 12th International Conference on Intelligent Engineering Systems, ISBN 978-1-4244-2083-4, Miami, Florida, February 25-29, 2008, pp. 197-202.
[8] M. Nickolaus, L. Yue and N. Do Minh, Image Interpolation Using Multiscale Geometric Representations, in Proceedings of the SPIE, Vol. 6498, pp. 1-11, 2007.
[9] S. Yuan, M. Abe, A. Taguchi and M. Kawamata, High Accuracy Bicubic Interpolation Using Image Local Features, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. E90-A, No. 8, pp. 1611-1615, 2007.