ON COMPUTING MAXIMUM/MINIMUM SINGULAR VALUES OF A GENERALIZED TENSOR SUM


Electronic Transactions on Numerical Analysis. Volume 43, pp. 244–254, 2015.
Copyright © 2015, Kent State University. ISSN 1068-9613.

ON COMPUTING MAXIMUM/MINIMUM SINGULAR VALUES OF A GENERALIZED TENSOR SUM

ASUKA OHASHI AND TOMOHIRO SOGABE

Abstract. We consider the efficient computation of the maximum/minimum singular values of a generalized tensor sum T. The computation is based on two approaches: first, the Lanczos bidiagonalization method is reconstructed over tensor space, which leads to a memory-efficient algorithm with a simple implementation, and second, a promising initial guess given in Tucker decomposition form is proposed. From the results of numerical experiments, we observe that our computation is useful for matrices being near symmetric, and it has the potential of becoming a method of choice for other cases if a suitable core tensor can be given.

Key words. generalized tensor sum, Lanczos bidiagonalization method, maximum and minimum singular values

AMS subject classifications. 65F10

1. Introduction. We consider computing the maximum/minimum singular values of a generalized tensor sum

(1.1)  T := I_n ⊗ I_m ⊗ A + I_n ⊗ B ⊗ I_l + C ⊗ I_m ⊗ I_l ∈ R^{lmn×lmn},

where A ∈ R^{l×l}, B ∈ R^{m×m}, C ∈ R^{n×n}, I_m is the m × m identity matrix, the symbol "⊗" denotes the Kronecker product, and the matrix T is assumed to be large, sparse, and nonsingular. Such matrices T arise in a finite-difference discretization of three-dimensional constant-coefficient partial differential equations, such as

(1.2)  { −a · (∇ ∘ ∇) + b · ∇ + c } u(x, y, z) = g(x, y, z)  in Ω,
       u(x, y, z) = 0  on ∂Ω,

where Ω = (0, 1) × (0, 1) × (0, 1), a, b ∈ R³, c ∈ R, and the symbol "∘" denotes the elementwise product. If a = (1, 1, 1), then a · (∇ ∘ ∇) = Δ. With regard to efficient numerical methods for linear systems of the form T x = f, see [3, 9].

The Lanczos bidiagonalization method is widely known as an efficient method to compute the maximum/minimum singular values of a large and sparse matrix. For its recent successful variants, see, e.g., [2, 6, 7, 10, 12], and for other successful methods, see, e.g., [5].
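To make the Kronecker structure of (1.1) concrete, the following sketch assembles T explicitly in NumPy (the paper's experiments used MATLAB; the tiny random A, B, C here are purely illustrative). Forming T densely like this is feasible only for very small l, m, n; avoiding it is precisely the point of the method developed below.

```python
import numpy as np

def generalized_tensor_sum(A, B, C):
    """Assemble T = I_n (x) I_m (x) A + I_n (x) B (x) I_l + C (x) I_m (x) I_l
    from (1.1) explicitly; feasible only for tiny l, m, n."""
    l, m, n = A.shape[0], B.shape[0], C.shape[0]
    Il, Im, In = np.eye(l), np.eye(m), np.eye(n)
    return (np.kron(In, np.kron(Im, A))
            + np.kron(In, np.kron(B, Il))
            + np.kron(C, np.kron(Im, Il)))

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # l = 3
B = rng.standard_normal((4, 4))   # m = 4
C = rng.standard_normal((2, 2))   # n = 2
T = generalized_tensor_sum(A, B, C)
print(T.shape)  # (24, 24), i.e., lmn x lmn
```

A useful sanity check on this construction: the eigenvalues of T are exactly the sums of eigenvalues of A, B, and C, which is the fact exploited by the initial guess of Section 4.2.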
The Lanczos bidiagonalization method does not require T and T^T themselves but only the results of the matrix-vector multiplications T v and T^T v. Even though one stores, as usual, only the non-zero entries of T, the storage required grows cubically with n under the assumption that l = m = n and, as it is often the case, that the number of non-zero entries of A, B, and C grows linearly with n. In order to avoid large memory usage, we consider the Lanczos bidiagonalization method over tensor space. Advantages of this approach are a low memory requirement and a very simple implementation. In fact, the required memory for storing T grows only linearly under the above assumptions. Using the tensor structure, we present a promising initial guess in order to improve the speed of convergence of the Lanczos bidiagonalization method over tensor space, which is a major contribution of this paper.

Received October 31, 2013. Accepted July 29, 2015. Published online on October 16, 2015. Recommended by H. Sadok.
Graduate School of Information Science and Technology, Aichi Prefectural University, 1522-3 Ibaragabasama, Nagakute-shi, Aichi, 480-1198, Japan (d1412@cis.aichi-pu.ac.jp).
Graduate School of Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8603, Japan (sogabe@na.cse.nagoya-u.ac.jp).

This paper is organized as follows. In Section 2, the Lanczos bidiagonalization method is introduced, and an algorithm is presented. In Section 3, some basic operations on tensors are described. In Section 4, we consider the Lanczos bidiagonalization method over tensor space and propose a promising initial guess using the tensor structure of the matrix T. Numerical experiments and concluding remarks are given in Sections 5 and 6, respectively.

2. The Lanczos bidiagonalization method. The Lanczos bidiagonalization method, which is due to Golub and Kahan [4], is suitable for computing maximum/minimum singular values. In particular, the method is widely used for large and sparse matrices. It employs sequences of projections of a matrix onto judiciously chosen low-dimensional subspaces and computes the singular values of the obtained matrix. By means of the projections, computing these singular values is more efficient than for the original matrix since the obtained matrix is smaller and has a simpler structure.

For a matrix M ∈ R^{l×n} (l ≥ n), the Lanczos bidiagonalization method displayed in Algorithm 1 calculates a sequence of vectors p_i ∈ R^n and q_i ∈ R^l and scalars α_i and β_i, where i = 1, 2, ..., k. Here, k represents the number of bidiagonalization steps and is typically much smaller than either one of the matrix dimensions l and n.

Algorithm 1 The Lanczos bidiagonalization method [4].
1: Choose an initial vector p_1 ∈ R^n such that ∥p_1∥_2 = 1.
2: q_1 := M p_1;
3: α_1 := ∥q_1∥_2;
4: q_1 := q_1 / α_1;
5: for i = 1, 2, ..., k do
6:   r_i := M^T q_i − α_i p_i;
7:   β_i := ∥r_i∥_2;
8:   p_{i+1} := r_i / β_i;
9:   q_{i+1} := M p_{i+1} − β_i q_i;
10:  α_{i+1} := ∥q_{i+1}∥_2;
11:  q_{i+1} := q_{i+1} / α_{i+1};
12: end for

After k steps, Algorithm 1 yields the following decompositions:

(2.1)  M P_k = Q_k D_k,    M^T Q_k = P_k D_k^T + r_k e_k^T,

where the vectors e_k and r_k denote the k-th canonical and the k-th residual vector in Algorithm 1, respectively, and the matrices D_k, P_k, and Q_k are given as

(2.2)  D_k =
       [ α_1  β_1                    ]
       [      α_2  β_2               ]
       [           ...   ...         ]
       [            α_{k−1}  β_{k−1} ]
       [                     α_k     ]  ∈ R^{k×k},

       P_k = (p_1, p_2, ..., p_k) ∈ R^{n×k},    Q_k = (q_1, q_2, ..., q_k) ∈ R^{l×k}.
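Algorithm 1 translates nearly line for line into NumPy. The sketch below is our own transcription (not the paper's code), trimmed to stop once D_k is full; the k-th residual vector r_k is formed separately so that both identities in (2.1) can be checked.

```python
import numpy as np

def lanczos_bidiag(M, k, p1):
    """Algorithm 1: k steps of Golub-Kahan Lanczos bidiagonalization.

    Returns (D, P, Q, r) satisfying M P = Q D and M^T Q = P D^T + r e_k^T,
    with D upper bidiagonal (alpha_i on the diagonal, beta_i above it)."""
    p = p1 / np.linalg.norm(p1)              # step 1
    q = M @ p                                # step 2
    alpha = [np.linalg.norm(q)]              # step 3
    q = q / alpha[0]                         # step 4
    P, Q, beta = [p], [q], []
    for i in range(k - 1):                   # steps 5-12, stopped once D_k is full
        r = M.T @ Q[i] - alpha[i] * P[i]         # step 6
        beta.append(np.linalg.norm(r))           # step 7
        P.append(r / beta[i])                    # step 8
        qn = M @ P[i + 1] - beta[i] * Q[i]       # step 9
        alpha.append(np.linalg.norm(qn))         # step 10
        Q.append(qn / alpha[i + 1])              # step 11
    r = M.T @ Q[k - 1] - alpha[k - 1] * P[k - 1]   # k-th residual vector r_k
    D = np.diag(alpha) + np.diag(beta, k=1)
    return D, np.column_stack(P), np.column_stack(Q), r

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 20))
k = 8
D, P, Q, r = lanczos_bidiag(M, k, rng.standard_normal(20))
sigma_max_est = np.linalg.svd(D, compute_uv=False)[0]   # approximates sigma_1(M)
```

The singular values of the small bidiagonal D_k are then cheap to compute, and the extreme ones approximate those of M, as discussed next.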
The matrices P_k and Q_k are column orthogonal, i.e., P_k^T P_k = Q_k^T Q_k = I_k. Now, the singular triplets of the matrices M and D_k are explained. Let σ_1^{(M)}, σ_2^{(M)}, ..., σ_n^{(M)} be the singular values of M such that σ_1^{(M)} ≥ σ_2^{(M)} ≥ ··· ≥ σ_n^{(M)}. Moreover, let u_i^{(M)} ∈ R^l

and v_i^{(M)} ∈ R^n, where i = 1, 2, ..., n, be the left and right singular vectors corresponding to the singular values σ_i^{(M)}, respectively. Then, {σ_i^{(M)}, u_i^{(M)}, v_i^{(M)}} is referred to as a singular triplet of M, and the relations between M and its singular triplets are given as

M v_i^{(M)} = σ_i^{(M)} u_i^{(M)},    M^T u_i^{(M)} = σ_i^{(M)} v_i^{(M)},

where i = 1, 2, ..., n. Similarly, with regard to D_k in (2.2), let {σ_i^{(D_k)}, u_i^{(D_k)}, v_i^{(D_k)}} be the singular triplets of D_k. Then, the relations between D_k and its singular triplets are

(2.3)  D_k v_i^{(D_k)} = σ_i^{(D_k)} u_i^{(D_k)},    D_k^T u_i^{(D_k)} = σ_i^{(D_k)} v_i^{(D_k)},

where i = 1, 2, ..., k. Moreover, {σ̃_i^{(M)}, ũ_i^{(M)}, ṽ_i^{(M)}} denotes the approximate singular triplet of M. They are determined from the singular triplets of D_k as follows:

(2.4)  σ̃_i^{(M)} := σ_i^{(D_k)},    ũ_i^{(M)} := Q_k u_i^{(D_k)},    ṽ_i^{(M)} := P_k v_i^{(D_k)}.

Then, it follows from (2.1), (2.3), and (2.4) that

(2.5)  M ṽ_i^{(M)} = σ̃_i^{(M)} ũ_i^{(M)},
       M^T ũ_i^{(M)} = σ̃_i^{(M)} ṽ_i^{(M)} + r_k e_k^T u_i^{(D_k)},

where i = 1, 2, ..., k. Equations (2.5) imply that the approximate singular triplet {σ̃_i^{(M)}, ũ_i^{(M)}, ṽ_i^{(M)}} is acceptable for the singular triplet {σ_i^{(M)}, u_i^{(M)}, v_i^{(M)}} if the value of ∥r_k∥_2 |e_k^T u_i^{(D_k)}| is sufficiently small.

3. Some basic operations on tensors. This section provides a brief explanation of tensors. For further details, see, e.g., [1, 8, 11].

A tensor is a multidimensional array. In particular, a first-order tensor is a vector, a second-order tensor is a matrix, and a third-order tensor, which is mainly used in this paper, has three indices. Third-order tensors are denoted by X, Y, P, Q, R, and S. An element (i, j, k) of a third-order tensor X is denoted by x_{ijk}. When the size of a tensor X is I × J × K, the ranges of i, j, and k are i = 1, 2, ..., I, j = 1, 2, ..., J, and k = 1, 2, ..., K, respectively.

We describe the definitions of some basic operations on tensors. Let x_{ijk} and y_{ijk} be elements of the tensors X, Y ∈ R^{I×J×K}.
Then, addition is defined by elementwise summation of X and Y:

(X + Y)_{ijk} := x_{ijk} + y_{ijk},

scalar-tensor multiplication is defined by the product of the scalar λ and each element of X:

(λX)_{ijk} := λ x_{ijk},

and the dot product is defined by the summation of elementwise products of X and Y:

(X, Y) := Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (X ∘ Y)_{ijk},

where the symbol "∘" denotes the elementwise product. Then, the norm is defined as

∥X∥ := √(X, X).
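These operations are exactly NumPy's elementwise array arithmetic; a minimal sketch (ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 4, 5))   # third-order tensors of size I x J x K
Y = rng.standard_normal((3, 4, 5))

S = X + Y                          # elementwise addition (X + Y)_{ijk}
lamX = 2.5 * X                     # scalar-tensor multiplication
dot = np.sum(X * Y)                # dot product (X, Y): sum over X o Y
norm_X = np.sqrt(np.sum(X * X))    # norm ||X|| = sqrt((X, X))
```

Note that this tensor norm is simply the 2-norm of the vectorized tensor, which is why Algorithms 1 and 2 below produce the same scalars α_i and β_i.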

Let us define some tensor multiplications: an n-mode product, which is denoted by the symbol "×_n", is a product of a matrix M and a tensor X. The n-mode product for a third-order tensor has three different types. The 1-mode product of X ∈ R^{I×J×K} and M ∈ R^{P×I} is defined as

(X ×_1 M)_{pjk} := Σ_{i=1}^{I} x_{ijk} m_{pi},

the 2-mode product of X ∈ R^{I×J×K} and M ∈ R^{P×J} is defined as

(X ×_2 M)_{ipk} := Σ_{j=1}^{J} x_{ijk} m_{pj},

and the 3-mode product of X ∈ R^{I×J×K} and M ∈ R^{P×K} is defined as

(X ×_3 M)_{ijp} := Σ_{k=1}^{K} x_{ijk} m_{pk},

where i = 1, 2, ..., I, j = 1, 2, ..., J, k = 1, 2, ..., K, and p = 1, 2, ..., P. Finally, the operator vec vectorizes a tensor by combining all column vectors of the tensor into one long vector:

vec : R^{I×J×K} → R^{IJK},

and the operator vec^{−1} reshapes a tensor from one long vector:

vec^{−1} : R^{IJK} → R^{I×J×K}.

We will see that the vec^{−1}-operator plays an important role in reconstructing the Lanczos bidiagonalization method over tensor space.

4. The Lanczos bidiagonalization method over tensor space with a promising initial guess.

4.1. The Lanczos bidiagonalization method over tensor space. If the Lanczos bidiagonalization method is applied to the generalized tensor sum T in (1.1), the following matrix-vector multiplications are required:

(4.1)  T p = (I_n ⊗ I_m ⊗ A + I_n ⊗ B ⊗ I_l + C ⊗ I_m ⊗ I_l) p,
       T^T p = (I_n ⊗ I_m ⊗ A^T + I_n ⊗ B^T ⊗ I_l + C^T ⊗ I_m ⊗ I_l) p,

where p ∈ R^{lmn}. In an implementation, however, computing the multiplications (4.1) becomes complicated since it requires the non-zero structure of a large matrix T. Here, the relations between the vec-operator and the Kronecker product are represented by

(I_n ⊗ I_m ⊗ A) vec(P) = vec(P ×_1 A),
(I_n ⊗ B ⊗ I_l) vec(P) = vec(P ×_2 B),
(C ⊗ I_m ⊗ I_l) vec(P) = vec(P ×_3 C),

where P ∈ R^{l×m×n} is such that vec(P) = p. Using these relations, the multiplications (4.1) can be described by

(4.2)  T p = vec(P ×_1 A + P ×_2 B + P ×_3 C),
       T^T p = vec(P ×_1 A^T + P ×_2 B^T + P ×_3 C^T).
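The relations behind (4.2) can be verified numerically. The sketch below (our own helper names) assumes column-major vectorization, i.e., the first tensor index runs fastest; that is the convention under which the Kronecker ordering in (1.1) lines up with the mode products.

```python
import numpy as np

def mode_product(X, M, mode):
    """n-mode product X x_mode M for a third-order tensor X (mode in {1, 2, 3})."""
    if mode == 1:
        return np.einsum('pi,ijk->pjk', M, X)
    if mode == 2:
        return np.einsum('pj,ijk->ipk', M, X)
    return np.einsum('pk,ijk->ijp', M, X)

def vec(X):
    """Column-major vectorization: index i runs fastest, then j, then k."""
    return X.reshape(-1, order='F')

rng = np.random.default_rng(3)
l, m, n = 3, 4, 2
A = rng.standard_normal((l, l))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, n))
P = rng.standard_normal((l, m, n))
Il, Im, In = np.eye(l), np.eye(m), np.eye(n)
T = (np.kron(In, np.kron(Im, A)) + np.kron(In, np.kron(B, Il))
     + np.kron(C, np.kron(Im, Il)))

# T p = vec(P x_1 A + P x_2 B + P x_3 C), as in (4.2)
lhs = T @ vec(P)
rhs = vec(mode_product(P, A, 1) + mode_product(P, B, 2) + mode_product(P, C, 3))
print(np.allclose(lhs, rhs))  # True
```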

Then, an implementation based on (4.2) only requires the non-zero structures of the matrices A, B, and C, which are much smaller than T, and thus it is simplified. We now consider the Lanczos bidiagonalization method over tensor space. Applying the vec^{−1}-operator to (4.2) yields

vec^{−1}(T p) = P ×_1 A + P ×_2 B + P ×_3 C,
vec^{−1}(T^T p) = P ×_1 A^T + P ×_2 B^T + P ×_3 C^T.

Then, the Lanczos bidiagonalization method over tensor space for T is obtained and summarized in Algorithm 2.

Algorithm 2 The Lanczos bidiagonalization method over tensor space.
1: Choose an initial tensor P_1 ∈ R^{l×m×n} such that ∥P_1∥ = 1.
2: Q_1 := P_1 ×_1 A + P_1 ×_2 B + P_1 ×_3 C;
3: α_1 := ∥Q_1∥;
4: Q_1 := Q_1 / α_1;
5: for i = 1, 2, ..., k do
6:   R_i := Q_i ×_1 A^T + Q_i ×_2 B^T + Q_i ×_3 C^T − α_i P_i;
7:   β_i := ∥R_i∥;
8:   P_{i+1} := R_i / β_i;
9:   Q_{i+1} := P_{i+1} ×_1 A + P_{i+1} ×_2 B + P_{i+1} ×_3 C − β_i Q_i;
10:  α_{i+1} := ∥Q_{i+1}∥;
11:  Q_{i+1} := Q_{i+1} / α_{i+1};
12: end for

The maximum/minimum singular values of T are computed by a singular value decomposition of the matrix D_k in (2.2), whose entries α_i and β_i are obtained from Algorithm 2. The convergence of Algorithm 2 can be monitored by

(4.3)  ∥R_k∥ |e_k^T u_i^{(D_k)}|,

where u_i^{(D_k)} is computed by the singular value decomposition of D_k in (2.2).

4.2. A promising initial guess. We consider utilizing the eigenvectors of the small matrices A, B, and C for determining a promising initial guess for Algorithm 2. We propose the initial guess P_1 given in Tucker decomposition form:

(4.4)  P_1 := S ×_1 P_A ×_2 P_B ×_3 P_C,

where S ∈ R^{2×2×2} is the core tensor such that Σ_{i=1}^{2} Σ_{j=1}^{2} Σ_{k=1}^{2} s_{ijk} = 1 and s_{ijk} ≥ 0, and P_A = [x_{i_M}^{(A)}, x_{i_m}^{(A)}], P_B = [x_{j_M}^{(B)}, x_{j_m}^{(B)}], P_C = [x_{k_M}^{(C)}, x_{k_m}^{(C)}]. The rest of this section defines the vectors x_{i_M}^{(A)}, x_{j_M}^{(B)}, x_{k_M}^{(C)}, x_{i_m}^{(A)}, x_{j_m}^{(B)}, and x_{k_m}^{(C)}. Let {λ_i^{(A)}, x_i^{(A)}}, {λ_j^{(B)}, x_j^{(B)}}, and {λ_k^{(C)}, x_k^{(C)}} be eigenpairs of the matrices A, B, and C, respectively. Then, x_{i_M}^{(A)}, x_{j_M}^{(B)}, and x_{k_M}^{(C)} are the eigenvectors corresponding to the eigenvalues λ_{i_M}^{(A)}, λ_{j_M}^{(B)}, and λ_{k_M}^{(C)} of A, B, and C, where

(i_M, j_M, k_M) = argmax_{(i,j,k)} { |λ_i^{(A)} + λ_j^{(B)} + λ_k^{(C)}| }.
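Algorithm 2 itself is a direct rewrite of Algorithm 1 with vectors replaced by tensors and matrix-vector products replaced by the mode-product sums of (4.2). A NumPy transcription (ours, not the paper's MATLAB code) is sketched below; running it to the full dimension on a tiny problem lets us compare the singular values of D_k against a dense SVD of the explicitly assembled T.

```python
import numpy as np

def mode_prods(X, A, B, C):
    """X x_1 A + X x_2 B + X x_3 C, i.e., vec^{-1}(T vec(X)) by (4.2)."""
    return (np.einsum('pi,ijk->pjk', A, X)
            + np.einsum('pj,ijk->ipk', B, X)
            + np.einsum('pk,ijk->ijp', C, X))

def lanczos_bidiag_tensor(A, B, C, k, P1):
    """Algorithm 2: Lanczos bidiagonalization over tensor space. Only the
    small factors A, B, C are touched; T itself is never formed. Returns
    the upper bidiagonal D_k and the tensor sequences {P_i}, {Q_i}."""
    P = P1 / np.linalg.norm(P1)
    Q = mode_prods(P, A, B, C)
    alpha = [np.linalg.norm(Q)]
    Q = Q / alpha[0]
    Ps, Qs, beta = [P], [Q], []
    for i in range(k - 1):
        R = mode_prods(Qs[i], A.T, B.T, C.T) - alpha[i] * Ps[i]
        beta.append(np.linalg.norm(R))
        Ps.append(R / beta[i])
        Qn = mode_prods(Ps[i + 1], A, B, C) - beta[i] * Qs[i]
        alpha.append(np.linalg.norm(Qn))
        Qs.append(Qn / alpha[i + 1])
    D = np.diag(alpha) + np.diag(beta, k=1)
    return D, Ps, Qs

rng = np.random.default_rng(4)
l = m = n = 2
A, B, C = (rng.standard_normal((2, 2)) for _ in range(3))
k = l * m * n                        # run to full size for this tiny example
D, Ps, Qs = lanczos_bidiag_tensor(A, B, C, k, rng.standard_normal((l, m, n)))
sig = np.linalg.svd(D, compute_uv=False)   # extremes approximate sigma_max/min of T
```

In practice k is far smaller than lmn; the full run here is only to make the comparison exact.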

Similarly, x_{i_m}^{(A)}, x_{j_m}^{(B)}, and x_{k_m}^{(C)} are the eigenvectors corresponding to the eigenvalues λ_{i_m}^{(A)}, λ_{j_m}^{(B)}, and λ_{k_m}^{(C)} of A, B, and C, where

(i_m, j_m, k_m) = argmin_{(i,j,k)} { |λ_i^{(A)} + λ_j^{(B)} + λ_k^{(C)}| }.

Here, we note that the eigenvector corresponding to the maximum absolute eigenvalue of T is given by x_{k_M}^{(C)} ⊗ x_{j_M}^{(B)} ⊗ x_{i_M}^{(A)} and that the eigenvector corresponding to the minimum absolute eigenvalue of T is given by x_{k_m}^{(C)} ⊗ x_{j_m}^{(B)} ⊗ x_{i_m}^{(A)}.

5. Numerical examples. In this section, we report the results of numerical experiments using test matrices given below. All computations were carried out using MATLAB version R2011b on an HP Z800 workstation with two 2.66 GHz Xeon processors and 24 GB of memory running under a Windows 7 operating system. The maximum/minimum singular values were computed by Algorithm 2 with random initial guesses and with the proposed initial guess (4.4). From (4.3), the stopping criteria we used were ∥R_k∥ |e_k^T u_M^{(D_k)}| < 10^{−10} for the maximum singular value σ_M^{(D_k)} and ∥R_k∥ |e_k^T u_m^{(D_k)}| < 10^{−10} for the minimum singular value σ_m^{(D_k)}. Algorithm 2 was stopped when both criteria were satisfied.

5.1. Test matrices. The test matrices T arise from a 7-point central difference discretization of the PDE (1.2) over an (n + 1) × (n + 1) × (n + 1) grid, and they are written as a generalized tensor sum of the form

T = I_n ⊗ I_n ⊗ A + I_n ⊗ B ⊗ I_n + C ⊗ I_n ⊗ I_n ∈ R^{n³×n³},

where A, B, C ∈ R^{n×n}. To be specific, the matrices A, B, and C are given by

(5.1)  A = (1/h²) a_1 M_1 + (1/2h) b_1 M_2 + c I_n,
(5.2)  B = (1/h²) a_2 M_1 + (1/2h) b_2 M_2,
(5.3)  C = (1/h²) a_3 M_1 + (1/2h) b_3 M_2,

where a_i and b_i (i = 1, 2, 3) correspond to the i-th elements of a and b in (1.2), respectively, and h, M_1, and M_2 are given as

(5.4)  h = 1/(n + 1),

(5.5)  M_1 =
       [  2  −1              ]
       [ −1   2  −1          ]
       [      ...  ...  ...  ]
       [          −1   2  −1 ]
       [              −1   2 ]  ∈ R^{n×n},

       M_2 =
       [  0   1              ]
       [ −1   0   1          ]
       [      ...  ...  ...  ]
       [          −1   0   1 ]
       [              −1   0 ]  ∈ R^{n×n}.

As can be seen from (5.1)–(5.5), the matrix T has high symmetry when ∥a∥_2 is much larger than ∥b∥_2 and low symmetry otherwise.

5.2. Initial guesses used in the numerical examples. In our numerical experiments, we set S in (4.4) to be a diagonal tensor, i.e., s_{ijk} = 0 except for i = j = k = 1 and i = j = k = 2. Then, the proposed initial guess (4.4) is represented by the following convex combination of rank-one tensors:

(5.6)  P_1 = s_{111} ( x_{i_M}^{(A)} ◦ x_{j_M}^{(B)} ◦ x_{k_M}^{(C)} ) + s_{222} ( x_{i_m}^{(A)} ◦ x_{j_m}^{(B)} ◦ x_{k_m}^{(C)} ),

where the symbol "◦" denotes the outer product and s_{111} + s_{222} = 1 with s_{111}, s_{222} ≥ 0. As seen in Section 4.2, the vectors x_{i_M}^{(A)}, x_{j_M}^{(B)}, x_{k_M}^{(C)}, x_{i_m}^{(A)}, x_{j_m}^{(B)}, and x_{k_m}^{(C)} are determined by specific eigenvectors of the matrices A, B, and C. Since these matrices are tridiagonal Toeplitz matrices, it is widely known that the eigenvalues and eigenvectors are given in analytical form as follows: let D be a tridiagonal Toeplitz matrix

   D =
   [ d_1  d_3                ]
   [ d_2  d_1  d_3           ]
   [      ...  ...  ...      ]
   [           d_2  d_1  d_3 ]
   [                d_2  d_1 ]  ∈ R^{n×n},

where d_2 d_3 > 0. Then, the eigenvalues λ_k^{(D)} are computed by

λ_k^{(D)} = d_1 + 2d cos( kπ/(n + 1) ),    k = 1, 2, ..., n,

where d = sgn(d_2) √(d_2 d_3), and the corresponding eigenvectors x_k^{(D)} are given by

x_k^{(D)} = √(2/(n + 1)) ( (d_2/d_3)^{1/2} sin(kπ/(n + 1)), (d_2/d_3)^{2/2} sin(2kπ/(n + 1)), ..., (d_2/d_3)^{n/2} sin(nkπ/(n + 1)) )^T,    k = 1, 2, ..., n.

5.3. Numerical results. First, we use the matrix T with the parameters a = (1, 1, 1), b = (1, 1, 1), and c = 1 in (5.1)–(5.3). The convergence history of Algorithm 2 with the proposed initial guess (5.6) using s_{111} = s_{222} = 0.5 is displayed in Figure 5.1 with the number of iterations required by Algorithm 2 on the horizontal axis and the log_{10} of the residual norms on the vertical axis. Here, the k-th residual norms are computed by ∥R_k∥ |e_k^T u_M^{(D_k)}| for the maximum singular value and ∥R_k∥ |e_k^T u_m^{(D_k)}| for the minimum singular value, and the size of the tensor in Algorithm 2 is 20 × 20 × 20. As illustrated in Figure 5.1, Algorithm 2 with the proposed initial guess required 68 iterations for the maximum singular value and 406 iterations for the minimum singular value. From Figure 5.1, we observe a smooth convergence behavior and faster convergence for the maximum singular value.
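The analytic eigenpairs of Section 5.2 can be verified numerically. In the sketch below (function names are ours), the eigenvectors are left unnormalized, since scaling does not affect the eigenvector check; the (d_2/d_3)^{j/2} weighting and the sign convention d = sgn(d_2)√(d_2 d_3) are as stated above, assuming d_2 d_3 > 0.

```python
import numpy as np

def tridiag_toeplitz(n, d1, d2, d3):
    """Tridiagonal Toeplitz matrix: d1 on the diagonal, d2 below, d3 above."""
    return d1 * np.eye(n) + d2 * np.eye(n, k=-1) + d3 * np.eye(n, k=1)

def analytic_eigenpairs(n, d1, d2, d3):
    """Closed-form eigenpairs for d2*d3 > 0:
    lambda_k = d1 + 2*sgn(d2)*sqrt(d2*d3)*cos(k*pi/(n+1)),
    (x_k)_j  = (d2/d3)^{j/2} * sin(j*k*pi/(n+1))  (unnormalized)."""
    k = np.arange(1, n + 1)
    d = np.sign(d2) * np.sqrt(d2 * d3)
    lam = d1 + 2 * d * np.cos(k * np.pi / (n + 1))
    j = np.arange(1, n + 1)
    X = (d2 / d3) ** (j[:, None] / 2) * np.sin(np.outer(j, k) * np.pi / (n + 1))
    return lam, X          # columns of X are the eigenvectors

n = 6
D = tridiag_toeplitz(n, 2.0, -1.0, -0.5)     # nonsymmetric example, d2*d3 > 0
lam, X = analytic_eigenpairs(n, 2.0, -1.0, -0.5)
```

For M_1 = tridiag(−1, 2, −1) this reproduces the familiar spectrum 2 − 2cos(kπ/(n+1)).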
Next, using the matrix T with the same parameters as for Figure 5.1, the variation in the number of iterations is displayed in Figure 5.2, where the dependence of the number of iterations on the value s_{111} in the proposed initial guess is given in Figure 5.2(a), and the dependence on the tensor size is given in Figure 5.2(b). For comparison, the numbers of iterations required by Algorithm 2 with initial guesses being random numbers are also displayed in Figure 5.2(b).

FIG. 5.1. The convergence history for Algorithm 2 with the proposed initial guess.

In Figure 5.2(a), the horizontal axis denotes the value of s_{111}, varying from 0 to 1 incrementally with a stepsize of 0.1, and the vertical axis denotes the number of iterations required by Algorithm 2 with the proposed initial guess. Here, the value of s_{222} is computed by s_{222} = 1 − s_{111}, and the matrix T used for Figure 5.2(a) is obtained from the discretization of the PDE (1.2) over a 21 × 21 × 21 grid. On the other hand, in Figure 5.2(b), the horizontal axis denotes the value of n, where the size of the tensor is n × n × n (n = 5, 10, ..., 35), and the vertical axis denotes the number of iterations required by Algorithm 2 with the proposed initial guess using s_{111} = s_{222} = 0.5. Here, this initial guess is referred to as the typical case.

FIG. 5.2. The variation in the number of iterations for the matrix T with a = b = (1, 1, 1), c = 1: (a) number of iterations versus the value of s_{111}; (b) number of iterations versus the tensor size.

In Figure 5.2(a), the number of iterations hardly depends on the value of s_{111}, but there is a big difference in this number between the cases s_{111} = 0.9 and s_{111} = 1.
We therefore ran Algorithm 2 with the proposed initial guess using the values s_{111} = 0.90, 0.91, ..., 0.99. As a result, we confirm that the numbers of iterations are almost the same as those for the cases s_{111} = 0, 0.1, ..., 0.9. From these results, we find almost no dependency of the number of iterations on the choice of the parameter s_{111}, and this implies the robustness of the proposed initial guess. It seems that the high number of iterations required for the case s_{111} = 1 is due to the fact that the given initial guess only has a very small component of the singular vector corresponding to the minimum singular value. In fact, for a symmetric matrix, such a choice means that the proposed initial guess includes no component of the singular vector corresponding to the minimum singular value. In Figure 5.2(b) we observe that Algorithm 2 in the typical case requires fewer iterations than with an initial guess of random numbers, and the gap grows as n increases.

In what follows, we use matrices T with higher or lower symmetry than for the matrices used in Figures 5.1 and 5.2. A matrix T with higher symmetry is created from the parameters a = (10, 10, 10), b = (1, 1, 1), and c = 1, and a matrix T with lower symmetry from the parameters a = (1, 1, 1), b = (10, 10, 10), and c = 1. The variation in the number of iterations by the value of s_{111} in the proposed initial guess is presented in Figure 5.3. Here, the matrices T arise from the discretization of the PDE (1.2) with the above parameters over a 21 × 21 × 21 grid.

FIG. 5.3. The variation in the number of iterations versus the value of s_{111}: (a) T with higher symmetry; (b) T with lower symmetry.

In Figure 5.3, the variation in the number of iterations for the matrices with high and low symmetry showed similar tendencies as in Figure 5.2(a). Furthermore, the variation in the number of iterations required by Algorithm 2 with the proposed initial guess using the values s_{111} = 0.90, 0.91, ..., 0.99 in Figure 5.3 has the same behavior as that in Figure 5.2(a). For the low-symmetry case, the choice s_{111} = 1 was optimal, unlike the other cases. In the following example, the variation in the number of iterations versus the tensor size is displayed in Figure 5.4.
For comparison, we ran Algorithm 2 with several initial guesses: random numbers and the proposed one with the typical case s_{111} = 0.5. According to Figure 5.4(a), Algorithm 2 in the typical case required fewer iterations than for the initial guesses using random numbers when T had higher symmetry. On the other hand, Figure 5.4(b) indicates that Algorithm 2 in the typical case required as many iterations as for a random initial guess when T had lower symmetry. From Figures 5.2(b) and 5.4, we observe that the initial guess using s_{111} < 1 improves the speed of convergence of Algorithm 2 except for the case where T has lower symmetry. As can be observed in Figure 5.4(b), the typical case shows no advantage over the random initial guess for the low-symmetry matrix. On the other hand, for some cases the proposed initial guess could still become a method of choice, for instance, for the case s_{111} = 1 displayed in Figure 5.5. It is likely, though it requires further investigation, that the result in Figure 5.5 indicates a potential for improvement of the proposed initial guess (4.4) even for the low-symmetry case.

FIG. 5.4. The variation in the number of iterations versus the tensor size: (a) T with higher symmetry; (b) T with lower symmetry.

FIG. 5.5. The variation in the number of iterations required by Algorithm 2 in the suitable case (s_{111} = 1) versus the tensor size when T has lower symmetry.

In fact, we only used a diagonal tensor as initial guess, which is a subtensor of the core tensor. For low-symmetry matrices, an experimental investigation of the optimal choice of a full core tensor will be considered in future work.

6. Concluding remarks. In this paper, first, we derived the Lanczos bidiagonalization method over tensor space from the conventional Lanczos bidiagonalization method using the vec^{−1}-operator in order to compute the maximum/minimum singular values of a generalized tensor sum T. The resulting method achieved a low memory requirement and a very simple implementation since it only required the non-zero structure of the matrices A, B, and C. Next, we proposed an initial guess given in Tucker decomposition form using eigenvectors corresponding to the maximum/minimum eigenvalues of T. Computing the eigenvectors of T was easy since the eigenpairs of T were obtained from the eigenpairs of A, B, and C. Finally, from the results of the numerical experiments, we showed that the maximum/minimum singular values of T were successfully computed by the Lanczos bidiagonalization method over tensor space with some of the proposed initial guesses.
We see that the proposed initial guesses improved the speed of convergence of the Lanczos bidiagonalization method over tensor space for the high-symmetry case and that it could become a method of choice for other cases if a suitable core tensor can be found. Future work will be devoted to experimental investigations using the full core tensor in the proposed initial guess in order to choose an optimal initial guess for low-symmetry matrices. If the generalized tensor sum (1.1) is sufficiently close to a symmetric matrix, our initial guess

works very well, but in general, restarting techniques are important for a further improvement of the speed of convergence to the minimum singular value. In this case, restarting techniques should be combined not in vector space but in tensor space. Thus, constructing a general framework in tensor space and combining Algorithm 2 with successful restarting techniques, e.g., [2, 6, 7], are topics of future work. With regard to other methods, the presented approach may be applied to other successful variants of the Lanczos bidiagonalization method, e.g., [10], and to Jacobi–Davidson-type singular value decomposition methods, e.g., [5].

Acknowledgments. This work has been supported in part by JSPS KAKENHI (Grant No. 2628688). We wish to express our gratitude to Dr. D. Savostyanov, University of Southampton, for constructive comments at the NASCA2013 conference. We are grateful to Dr. T. S. Usuda and Dr. H. Yoshioka of Aichi Prefectural University for their support and encouragement. We would like to thank the anonymous referees for informing us of reference [11] and for many useful comments that enhanced the quality of the manuscript.

REFERENCES

[1] B. W. BADER AND T. G. KOLDA, Algorithm 862: MATLAB tensor classes for fast algorithm prototyping, ACM Trans. Math. Software, 32 (2006), pp. 635–653.
[2] J. BAGLAMA AND L. REICHEL, An implicitly restarted block Lanczos bidiagonalization method using Leja shifts, BIT Numer. Math., 53 (2013), pp. 285–310.
[3] J. BALLANI AND L. GRASEDYCK, A projection method to solve linear systems in tensor format, Numer. Linear Algebra Appl., 20 (2013), pp. 27–43.
[4] G. GOLUB AND W. KAHAN, Calculating the singular values and pseudo-inverse of a matrix, J. Soc. Indust. Appl. Math. Ser. B Numer. Anal., 2 (1965), pp. 205–224.
[5] M. E. HOCHSTENBACH, A Jacobi–Davidson type method for the generalized singular value problem, Linear Algebra Appl., 431 (2009), pp. 471–487.
[6] Z. JIA AND D. NIU, A refined harmonic Lanczos bidiagonalization method and an implicitly restarted algorithm for computing the smallest singular triplets of large matrices, SIAM J. Sci. Comput., 32 (2010), pp. 714–744.
[7] E. KOKIOPOULOU, C. BEKAS, AND E. GALLOPOULOS, Computing smallest singular triplets with implicitly restarted Lanczos bidiagonalization, Appl. Numer. Math., 49 (2004), pp. 39–61.
[8] T. G. KOLDA AND B. W. BADER, Tensor decompositions and applications, SIAM Rev., 51 (2009), pp. 455–500.
[9] D. KRESSNER AND C. TOBLER, Krylov subspace methods for linear systems with tensor product structure, SIAM J. Matrix Anal. Appl., 31 (2010), pp. 1688–1714.
[10] D. NIU AND X. YUAN, A harmonic Lanczos bidiagonalization method for computing interior singular triplets of large matrices, Appl. Math. Comput., 218 (2012), pp. 7459–7467.
[11] B. SAVAS AND L. ELDÉN, Krylov-type methods for tensor computations I, Linear Algebra Appl., 438 (2013), pp. 891–918.
[12] M. STOLL, A Krylov–Schur approach to the truncated SVD, Linear Algebra Appl., 436 (2012), pp. 2795–2806.