ON COMPUTING MAXIMUM/MINIMUM SINGULAR VALUES OF A GENERALIZED TENSOR SUM


Electronic Transactions on Numerical Analysis, Volume 43, pp. 244-254, 2015.
Copyright 2015, Kent State University. ISSN 1068-9613.

ASUKA OHASHI AND TOMOHIRO SOGABE

Abstract. We consider the efficient computation of the maximum/minimum singular values of a generalized tensor sum T. The computation is based on two approaches: first, the Lanczos bidiagonalization method is reconstructed over tensor space, which leads to a memory-efficient algorithm with a simple implementation, and second, a promising initial guess given in Tucker decomposition form is proposed. From the results of numerical experiments, we observe that our computation is useful for nearly symmetric matrices, and it has the potential of becoming a method of choice for other cases if a suitable core tensor can be given.

Key words. generalized tensor sum, Lanczos bidiagonalization method, maximum and minimum singular values

AMS subject classifications. 65F10

Received October 31, 2013. Accepted July 29, 2015. Published online on October 16, 2015. Recommended by H. Sadok.
Graduate School of Information Science and Technology, Aichi Prefectural University, Ibaragabasama, Nagakute-shi, Aichi, Japan (d1412@cs.aichi-pu.ac.jp).
Graduate School of Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Japan (sogabe@na.cse.nagoya-u.ac.jp).

1. Introduction. We consider computing the maximum/minimum singular values of a generalized tensor sum

(1.1)  T := I_n ⊗ I_m ⊗ A + I_n ⊗ B ⊗ I_l + C ⊗ I_m ⊗ I_l ∈ R^{lmn×lmn},

where A ∈ R^{l×l}, B ∈ R^{m×m}, C ∈ R^{n×n}, I_m is the m×m identity matrix, the symbol "⊗" denotes the Kronecker product, and the matrix T is assumed to be large, sparse, and nonsingular. Such matrices T arise in a finite-difference discretization of three-dimensional constant-coefficient partial differential equations, such as

(1.2)  [-a·(∇ ∗ ∇) + b·∇ + c] u(x, y, z) = g(x, y, z)  in Ω,
       u(x, y, z) = 0  on ∂Ω,

where Ω = (0, 1) × (0, 1) × (0, 1), a, b ∈ R^3, c ∈ R, and the symbol "∗" denotes the elementwise product. If a = (1, 1, 1), then a·(∇ ∗ ∇) = Δ. With regard to efficient numerical methods for linear systems of the form Tx = f, see [3, 9].

The Lanczos bidiagonalization method is widely known as an efficient method to compute the maximum/minimum singular values of a large and sparse matrix. For its recent successful variants, see, e.g., [2, 6, 7, 10, 12], and for other successful methods, see, e.g., [5]. The Lanczos bidiagonalization method does not require T and T^T themselves but only the results of the matrix-vector multiplications Tv and T^T v. Even though one stores, as usual, only the non-zero entries of T, the required storage grows cubically with n under the assumption that l = m = n and, as is often the case, that the number of non-zero entries of A, B, and C grows linearly with n. In order to avoid large memory usage, we consider the Lanczos bidiagonalization method over tensor space. Advantages of this approach are a low memory requirement and a very simple implementation. In fact, the required memory for storing T grows only linearly under the above assumptions. Using the tensor structure, we present a promising initial guess in order to improve the speed of convergence of the Lanczos bidiagonalization method over tensor space, which is a major contribution of this paper.
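To make the structure of (1.1) concrete, the following is a minimal NumPy sketch (not part of the paper) that assembles T explicitly from small A, B, and C via Kronecker products. The function name and sizes are illustrative only; the whole point of the paper is to avoid ever forming T like this for large n.

import numpy as np

def generalized_tensor_sum(A, B, C):
    """Assemble T = I_n (x) I_m (x) A + I_n (x) B (x) I_l + C (x) I_m (x) I_l explicitly.
    Only feasible for small sizes; used here just to illustrate (1.1)."""
    l, m, n = A.shape[0], B.shape[0], C.shape[0]
    Il, Im, In = np.eye(l), np.eye(m), np.eye(n)
    return (np.kron(In, np.kron(Im, A))
            + np.kron(In, np.kron(B, Il))
            + np.kron(C, np.kron(Im, Il)))

# small example: T is lmn x lmn
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
C = rng.standard_normal((5, 5))
T = generalized_tensor_sum(A, B, C)
print(T.shape)  # (60, 60) = (3*4*5, 3*4*5)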

This paper is organized as follows. In Section 2, the Lanczos bidiagonalization method is introduced, and an algorithm is presented. In Section 3, some basic operations on tensors are described. In Section 4, we consider the Lanczos bidiagonalization method over tensor space and propose a promising initial guess using the tensor structure of the matrix T. Numerical experiments and concluding remarks are given in Sections 5 and 6, respectively.

2. The Lanczos bidiagonalization method. The Lanczos bidiagonalization method, which is due to Golub and Kahan [4], is suitable for computing maximum/minimum singular values. In particular, the method is widely used for large and sparse matrices. It employs sequences of projections of a matrix onto judiciously chosen low-dimensional subspaces and computes the singular values of the obtained matrix. By means of the projections, computing these singular values is more efficient than for the original matrix since the obtained matrix is smaller and has a simpler structure.

For a matrix M ∈ R^{l×n} (l ≥ n), the Lanczos bidiagonalization method displayed in Algorithm 1 calculates a sequence of vectors p_i ∈ R^n and q_i ∈ R^l and scalars α_i and β_i, where i = 1, 2, ..., k. Here, k represents the number of bidiagonalization steps and is typically much smaller than either one of the matrix dimensions l and n.

Algorithm 1 The Lanczos bidiagonalization method [4].
1: Choose an initial vector p_1 ∈ R^n such that ‖p_1‖_2 = 1.
2: q_1 := M p_1;
3: α_1 := ‖q_1‖_2;
4: q_1 := q_1 / α_1;
5: for i = 1, 2, ..., k do
6:   r_i := M^T q_i - α_i p_i;
7:   β_i := ‖r_i‖_2;
8:   p_{i+1} := r_i / β_i;
9:   q_{i+1} := M p_{i+1} - β_i q_i;
10:  α_{i+1} := ‖q_{i+1}‖_2;
11:  q_{i+1} := q_{i+1} / α_{i+1};
12: end for

After k steps, Algorithm 1 yields the following decompositions:

(2.1)  M P_k = Q_k D_k,   M^T Q_k = P_k D_k^T + r_k e_k^T,

where the vectors e_k and r_k denote the k-th canonical vector and the k-th residual vector in Algorithm 1, respectively, and the matrices D_k, P_k, and Q_k are given as

(2.2)  D_k = [ α_1  β_1
                    α_2  β_2
                         ⋱    ⋱
                            α_{k-1}  β_{k-1}
                                     α_k     ]  ∈ R^{k×k},

       P_k = (p_1, p_2, ..., p_k) ∈ R^{n×k},   Q_k = (q_1, q_2, ..., q_k) ∈ R^{l×k}.

Here, P_k and Q_k are column orthogonal matrices, i.e., P_k^T P_k = Q_k^T Q_k = I_k.
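The following is a minimal NumPy sketch (mine, not the authors' code) of Algorithm 1 without reorthogonalization, together with the extraction of the extreme singular values from the bidiagonal matrix D_k of (2.2) via a small SVD, as described below in (2.4). Function names and the random test matrix are illustrative assumptions.

import numpy as np

def lanczos_bidiag(matvec, rmatvec, n, k, rng=np.random.default_rng(0)):
    """k steps of Golub-Kahan bidiagonalization (Algorithm 1, no reorthogonalization).
    matvec(x) = M @ x, rmatvec(y) = M.T @ y; returns the scalars alpha_i and beta_i."""
    p = rng.standard_normal(n)
    p /= np.linalg.norm(p)
    q = matvec(p)
    alpha = [np.linalg.norm(q)]
    q /= alpha[0]
    beta = []
    for i in range(k):
        r = rmatvec(q) - alpha[-1] * p          # step 6
        beta.append(np.linalg.norm(r))          # step 7
        p = r / beta[-1]                        # step 8
        q = matvec(p) - beta[-1] * q            # step 9
        alpha.append(np.linalg.norm(q))         # step 10
        q = q / alpha[-1]                       # step 11
    return np.array(alpha), np.array(beta)

def extreme_singular_values(alpha, beta, k):
    """Build the k x k upper bidiagonal D_k of (2.2) and return its extreme singular values."""
    D = np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1)
    s = np.linalg.svd(D, compute_uv=False)
    return s[0], s[-1]

# usage on a small dense example; compare with the true extreme singular values of M
M = np.random.default_rng(1).standard_normal((200, 150))
a, b = lanczos_bidiag(lambda x: M @ x, lambda y: M.T @ y, 150, 60)
print(extreme_singular_values(a, b, 60), np.linalg.svd(M, compute_uv=False)[[0, -1]])

Without reorthogonalization or restarting, the estimate of the minimum singular value may converge slowly; this is exactly the issue the tensor-space variant and the proposed initial guess address later in the paper.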

Now, the singular triplets of the matrices M and D_k are explained. Let σ_1^{(M)}, σ_2^{(M)}, ..., σ_n^{(M)} be the singular values of M such that σ_1^{(M)} ≥ σ_2^{(M)} ≥ ... ≥ σ_n^{(M)}. Moreover, let u_i^{(M)} ∈ R^l and v_i^{(M)} ∈ R^n, where i = 1, 2, ..., n, be the left and right singular vectors corresponding to the singular values σ_i^{(M)}, respectively. Then, {σ_i^{(M)}, u_i^{(M)}, v_i^{(M)}} is referred to as a singular triplet of M, and the relations between M and its singular triplets are given as

  M v_i^{(M)} = σ_i^{(M)} u_i^{(M)},   M^T u_i^{(M)} = σ_i^{(M)} v_i^{(M)},

where i = 1, 2, ..., n. Similarly, with regard to D_k in (2.2), let {σ_i^{(D_k)}, u_i^{(D_k)}, v_i^{(D_k)}} be the singular triplets of D_k. Then, the relations between D_k and its singular triplets are

(2.3)  D_k v_i^{(D_k)} = σ_i^{(D_k)} u_i^{(D_k)},   D_k^T u_i^{(D_k)} = σ_i^{(D_k)} v_i^{(D_k)},

where i = 1, 2, ..., k. Moreover, {σ̃_i^{(M)}, ũ_i^{(M)}, ṽ_i^{(M)}} denotes the approximate singular triplet of M. They are determined from the singular triplets of D_k as follows:

(2.4)  σ̃_i^{(M)} := σ_i^{(D_k)},   ũ_i^{(M)} := Q_k u_i^{(D_k)},   ṽ_i^{(M)} := P_k v_i^{(D_k)}.

Then, it follows from (2.1), (2.3), and (2.4) that

(2.5)  M ṽ_i^{(M)} = σ̃_i^{(M)} ũ_i^{(M)},
       M^T ũ_i^{(M)} = σ̃_i^{(M)} ṽ_i^{(M)} + r_k e_k^T u_i^{(D_k)},

where i = 1, 2, ..., k. Equations (2.5) imply that the approximate singular triplet {σ̃_i^{(M)}, ũ_i^{(M)}, ṽ_i^{(M)}} is acceptable for the singular triplet {σ_i^{(M)}, u_i^{(M)}, v_i^{(M)}} if the value of ‖r_k‖_2 |e_k^T u_i^{(D_k)}| is sufficiently small.

3. Some basic operations on tensors. This section provides a brief explanation of tensors. For further details, see, e.g., [1, 8, 11].

A tensor is a multidimensional array. In particular, a first-order tensor is a vector, a second-order tensor is a matrix, and a third-order tensor, which is mainly used in this paper, has three indices. Third-order tensors are denoted by X, Y, P, Q, R, and S. An element (i, j, k) of a third-order tensor X is denoted by x_{ijk}. When the size of a tensor X is I × J × K, the ranges of i, j, and k are i = 1, 2, ..., I, j = 1, 2, ..., J, and k = 1, 2, ..., K, respectively.

We describe the definitions of some basic operations on tensors. Let x_{ijk} and y_{ijk} be elements of the tensors X, Y ∈ R^{I×J×K}. Then, addition is defined by elementwise summation of X and Y:

  (X + Y)_{ijk} := x_{ijk} + y_{ijk},

scalar-tensor multiplication is defined by the product of the scalar λ and each element of X:

  (λX)_{ijk} := λ x_{ijk},

and the dot product is defined by the summation of elementwise products of X and Y:

  (X, Y) := Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (X ∗ Y)_{ijk},

where the symbol "∗" denotes the elementwise product. Then, the norm is defined as ‖X‖ := √(X, X).
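As a small NumPy check (mine, not from the paper) of these definitions, the dot product is simply the sum of elementwise products and the norm is the Frobenius norm of the array; the test sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 5))   # tensor in R^{I x J x K}
Y = rng.standard_normal((3, 4, 5))

dot = np.sum(X * Y)                  # (X, Y) = sum of elementwise products
norm = np.sqrt(np.sum(X * X))        # ||X|| = sqrt((X, X))
print(np.isclose(dot, X.ravel() @ Y.ravel()), np.isclose(norm, np.linalg.norm(X)))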

Let us define some tensor multiplications: an n-mode product, which is denoted by the symbol "×_n", is a product of a matrix M and a tensor X. The n-mode product for a third-order tensor has three different types. The 1-mode product of X ∈ R^{I×J×K} and M ∈ R^{P×I} is defined as

  (X ×_1 M)_{pjk} := Σ_{i=1}^{I} x_{ijk} m_{pi},

the 2-mode product of X ∈ R^{I×J×K} and M ∈ R^{P×J} is defined as

  (X ×_2 M)_{ipk} := Σ_{j=1}^{J} x_{ijk} m_{pj},

and the 3-mode product of X ∈ R^{I×J×K} and M ∈ R^{P×K} is defined as

  (X ×_3 M)_{ijp} := Σ_{k=1}^{K} x_{ijk} m_{pk},

where i = 1, 2, ..., I, j = 1, 2, ..., J, k = 1, 2, ..., K, and p = 1, 2, ..., P.

Finally, the operator vec vectorizes a tensor by combining all column vectors of the tensor into one long vector:

  vec : R^{I×J×K} → R^{IJK},

and the operator vec^{-1} reshapes a tensor from one long vector:

  vec^{-1} : R^{IJK} → R^{I×J×K}.

We will see that the vec^{-1}-operator plays an important role in reconstructing the Lanczos bidiagonalization method over tensor space.

4. The Lanczos bidiagonalization method over tensor space with a promising initial guess.

4.1. The Lanczos bidiagonalization method over tensor space. If the Lanczos bidiagonalization method is applied to the generalized tensor sum T in (1.1), the following matrix-vector multiplications are required:

(4.1)  T p = (I_n ⊗ I_m ⊗ A + I_n ⊗ B ⊗ I_l + C ⊗ I_m ⊗ I_l) p,
       T^T p = (I_n ⊗ I_m ⊗ A^T + I_n ⊗ B^T ⊗ I_l + C^T ⊗ I_m ⊗ I_l) p,

where p ∈ R^{lmn}. In an implementation, however, computing the multiplications (4.1) becomes complicated since it requires the non-zero structure of the large matrix T. Here, the relations between the vec-operator and the Kronecker product are represented by

  (I_n ⊗ I_m ⊗ A) vec(P) = vec(P ×_1 A),
  (I_n ⊗ B ⊗ I_l) vec(P) = vec(P ×_2 B),
  (C ⊗ I_m ⊗ I_l) vec(P) = vec(P ×_3 C),

where P ∈ R^{l×m×n} is such that vec(P) = p. Using these relations, the multiplications (4.1) can be described by

(4.2)  T p = vec(P ×_1 A + P ×_2 B + P ×_3 C),
       T^T p = vec(P ×_1 A^T + P ×_2 B^T + P ×_3 C^T).
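The following NumPy sketch (mine, not from the paper) verifies the relation (4.2) on a small example: the n-mode products are written with einsum, vec is column-major (Fortran-order) flattening, and the explicit Kronecker-product form (4.1) is used only as a reference.

import numpy as np

def mode1(P, M):  # (P x_1 M)_{pjk} = sum_i m_{pi} p_{ijk}
    return np.einsum('pi,ijk->pjk', M, P)

def mode2(P, M):  # (P x_2 M)_{ipk} = sum_j m_{pj} p_{ijk}
    return np.einsum('pj,ijk->ipk', M, P)

def mode3(P, M):  # (P x_3 M)_{ijp} = sum_k m_{pk} p_{ijk}
    return np.einsum('pk,ijk->ijp', M, P)

def vec(P):                      # column-major vectorization
    return P.reshape(-1, order='F')

def unvec(p, shape):             # vec^{-1}
    return p.reshape(shape, order='F')

# check vec^{-1}(T p) = P x_1 A + P x_2 B + P x_3 C on a small example
rng = np.random.default_rng(0)
l, m, n = 3, 4, 5
A, B, C = rng.standard_normal((l, l)), rng.standard_normal((m, m)), rng.standard_normal((n, n))
P = rng.standard_normal((l, m, n))
T = (np.kron(np.eye(n), np.kron(np.eye(m), A))
     + np.kron(np.eye(n), np.kron(B, np.eye(l)))
     + np.kron(C, np.kron(np.eye(m), np.eye(l))))
lhs = unvec(T @ vec(P), (l, m, n))
rhs = mode1(P, A) + mode2(P, B) + mode3(P, C)
print(np.allclose(lhs, rhs))     # True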

Then, an implementation based on (4.2) only requires the non-zero structures of the matrices A, B, and C, which are much smaller than that of T, and thus it is simplified. We now consider the Lanczos bidiagonalization method over tensor space. Applying the vec^{-1}-operator to (4.2) yields

  vec^{-1}(T p) = P ×_1 A + P ×_2 B + P ×_3 C,
  vec^{-1}(T^T p) = P ×_1 A^T + P ×_2 B^T + P ×_3 C^T.

Then, the Lanczos bidiagonalization method over tensor space for T is obtained and summarized in Algorithm 2.

Algorithm 2 The Lanczos bidiagonalization method over tensor space.
1: Choose an initial tensor P_1 ∈ R^{l×m×n} such that ‖P_1‖ = 1.
2: Q_1 := P_1 ×_1 A + P_1 ×_2 B + P_1 ×_3 C;
3: α_1 := ‖Q_1‖;
4: Q_1 := Q_1 / α_1;
5: for i = 1, 2, ..., k do
6:   R_i := Q_i ×_1 A^T + Q_i ×_2 B^T + Q_i ×_3 C^T - α_i P_i;
7:   β_i := ‖R_i‖;
8:   P_{i+1} := R_i / β_i;
9:   Q_{i+1} := P_{i+1} ×_1 A + P_{i+1} ×_2 B + P_{i+1} ×_3 C - β_i Q_i;
10:  α_{i+1} := ‖Q_{i+1}‖;
11:  Q_{i+1} := Q_{i+1} / α_{i+1};
12: end for

The maximum/minimum singular values of T are computed by a singular value decomposition of the matrix D_k in (2.2), whose entries α_i and β_i are obtained from Algorithm 2. The convergence of Algorithm 2 can be monitored by

(4.3)  ‖R_k‖ |e_k^T u_i^{(D_k)}|,

where u_i^{(D_k)} is computed by the singular value decomposition of D_k in (2.2).

4.2. A promising initial guess. We consider utilizing the eigenvectors of the small matrices A, B, and C for determining a promising initial guess for Algorithm 2. We propose the initial guess P_1 that is given in Tucker decomposition form:

(4.4)  P_1 := S ×_1 P_A ×_2 P_B ×_3 P_C,

where S ∈ R^{2×2×2} is the core tensor such that Σ_{i=1}^{2} Σ_{j=1}^{2} Σ_{k=1}^{2} s_{ijk} = 1 and s_{ijk} ≥ 0, P_A = [x^{(A)}_{i_M}, x^{(A)}_{i_m}], P_B = [x^{(B)}_{j_M}, x^{(B)}_{j_m}], and P_C = [x^{(C)}_{k_M}, x^{(C)}_{k_m}]. The rest of this section defines the vectors x^{(A)}_{i_M}, x^{(B)}_{j_M}, x^{(C)}_{k_M}, x^{(A)}_{i_m}, x^{(B)}_{j_m}, and x^{(C)}_{k_m}.

Let {λ_i^{(A)}, x_i^{(A)}}, {λ_j^{(B)}, x_j^{(B)}}, and {λ_k^{(C)}, x_k^{(C)}} be eigenpairs of the matrices A, B, and C, respectively. Then, x^{(A)}_{i_M}, x^{(B)}_{j_M}, and x^{(C)}_{k_M} are the eigenvectors corresponding to the eigenvalues λ^{(A)}_{i_M}, λ^{(B)}_{j_M}, and λ^{(C)}_{k_M} of A, B, and C, where

  (i_M, j_M, k_M) = argmax_{(i,j,k)} |λ_i^{(A)} + λ_j^{(B)} + λ_k^{(C)}|.

Similarly, x^{(A)}_{i_m}, x^{(B)}_{j_m}, and x^{(C)}_{k_m} are the eigenvectors corresponding to the eigenvalues λ^{(A)}_{i_m}, λ^{(B)}_{j_m}, and λ^{(C)}_{k_m} of A, B, and C, where

  (i_m, j_m, k_m) = argmin_{(i,j,k)} |λ_i^{(A)} + λ_j^{(B)} + λ_k^{(C)}|.

Here, we note that the eigenvector corresponding to the maximum absolute eigenvalue of T is given by x^{(C)}_{k_M} ⊗ x^{(B)}_{j_M} ⊗ x^{(A)}_{i_M} and that the eigenvector corresponding to the minimum absolute eigenvalue of T is given by x^{(C)}_{k_m} ⊗ x^{(B)}_{j_m} ⊗ x^{(A)}_{i_m}.
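The following is a minimal, self-contained NumPy sketch (mine, not the authors' MATLAB code) of Algorithm 2: it applies vec^{-1}(Tp) and vec^{-1}(T^Tp) through n-mode products written with einsum and extracts the extreme singular values from D_k. The function name, the random test matrices, and the step count are illustrative assumptions.

import numpy as np

def lanczos_bidiag_tensor(A, B, C, P1, k):
    """k steps of the Lanczos bidiagonalization method over tensor space (Algorithm 2).
    P1 has shape (l, m, n); returns approximations of sigma_max(T) and sigma_min(T)."""
    tnorm = lambda X: np.sqrt(np.sum(X * X))                     # tensor norm ||X||
    def Tmul(X, A, B, C):                                        # X x_1 A + X x_2 B + X x_3 C
        return (np.einsum('pi,ijk->pjk', A, X)
                + np.einsum('pj,ijk->ipk', B, X)
                + np.einsum('pk,ijk->ijp', C, X))
    P = P1 / tnorm(P1)
    Q = Tmul(P, A, B, C)
    alpha, beta = [tnorm(Q)], []
    Q = Q / alpha[0]
    for i in range(k):
        R = Tmul(Q, A.T, B.T, C.T) - alpha[-1] * P               # step 6
        beta.append(tnorm(R))
        P = R / beta[-1]                                         # steps 7-8
        Q = Tmul(P, A, B, C) - beta[-1] * Q                      # step 9
        alpha.append(tnorm(Q))
        Q = Q / alpha[-1]                                        # steps 10-11
    D = np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1)            # D_k from (2.2)
    s = np.linalg.svd(D, compute_uv=False)
    return s[0], s[-1]

# usage with random A, B, C and a random initial tensor
rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3)), rng.standard_normal((4, 4)), rng.standard_normal((5, 5))
print(lanczos_bidiag_tensor(A, B, C, rng.standard_normal((3, 4, 5)), 30))

Only A, B, C and a handful of tensors of size l×m×n are stored, which is the memory advantage over working with T itself.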

5. Numerical examples. In this section, we report the results of numerical experiments using test matrices given below. All computations were carried out using MATLAB version R2011b on an HP Z800 workstation with two 2.66 GHz Xeon processors and 24 GB of memory running under a Windows 7 operating system.

The maximum/minimum singular values were computed by Algorithm 2 with random initial guesses and with the proposed initial guess (4.4). From (4.3), the stopping criteria we used were ‖R_k‖ |e_k^T u_M^{(D_k)}| < 10^{-10} for the maximum singular value σ_M^{(D_k)} and ‖R_k‖ |e_k^T u_m^{(D_k)}| < 10^{-10} for the minimum singular value σ_m^{(D_k)}. Algorithm 2 was stopped when both criteria were satisfied.

5.1. Test matrices. The test matrices T arise from a 7-point central difference discretization of the PDE (1.2) over an (n+1) × (n+1) × (n+1) grid, and they are written as a generalized tensor sum of the form

  T = I_n ⊗ I_n ⊗ A + I_n ⊗ B ⊗ I_n + C ⊗ I_n ⊗ I_n ∈ R^{n^3 × n^3},

where A, B, C ∈ R^{n×n}. To be specific, the matrices A, B, and C are given by

(5.1)  A = (a_1/h^2) M_1 + (b_1/(2h)) M_2 + c I_n,
(5.2)  B = (a_2/h^2) M_1 + (b_2/(2h)) M_2,
(5.3)  C = (a_3/h^2) M_1 + (b_3/(2h)) M_2,

where a_i and b_i (i = 1, 2, 3) correspond to the i-th elements of a and b in (1.2), respectively, and h, M_1, and M_2 are given as

(5.4)  h = 1/(n+1),
(5.5)  M_1 = tridiag(-1, 2, -1) ∈ R^{n×n},   M_2 = tridiag(-1, 0, 1) ∈ R^{n×n},

where tridiag(s, d, t) denotes the tridiagonal Toeplitz matrix with subdiagonal entries s, diagonal entries d, and superdiagonal entries t. As can be seen from (5.1)-(5.5), the matrix T has high symmetry when ‖a‖_2 is much larger than ‖b‖_2 and low symmetry otherwise.
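The sketch below (mine, and assuming the reconstructed central-difference matrices M_1 = tridiag(-1, 2, -1) and M_2 = tridiag(-1, 0, 1) given above) builds the test matrices (5.1)-(5.3) and selects the index triples (i_M, j_M, k_M) and (i_m, j_m, k_m) of Section 4.2 by brute force over the eigenvalues of A, B, and C.

import numpy as np

def test_matrices(n, a, b, c):
    """A, B, C of (5.1)-(5.3), assuming M1 = tridiag(-1, 2, -1) and M2 = tridiag(-1, 0, 1)."""
    h = 1.0 / (n + 1)
    M1 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    M2 = np.eye(n, k=1) - np.eye(n, k=-1)
    A = a[0] / h**2 * M1 + b[0] / (2 * h) * M2 + c * np.eye(n)
    B = a[1] / h**2 * M1 + b[1] / (2 * h) * M2
    C = a[2] / h**2 * M1 + b[2] / (2 * h) * M2
    return A, B, C

def extreme_eigvec_triples(A, B, C):
    """Eigenvectors attaining the maximum/minimum of |lam_i^A + lam_j^B + lam_k^C|.
    For the test matrices the eigenvalues are real; np.abs also covers a complex case."""
    lamA, XA = np.linalg.eig(A)
    lamB, XB = np.linalg.eig(B)
    lamC, XC = np.linalg.eig(C)
    S = np.abs(lamA[:, None, None] + lamB[None, :, None] + lamC[None, None, :])
    iM, jM, kM = np.unravel_index(np.argmax(S), S.shape)
    im, jm, km = np.unravel_index(np.argmin(S), S.shape)
    return (XA[:, iM], XB[:, jM], XC[:, kM]), (XA[:, im], XB[:, jm], XC[:, km])

A, B, C = test_matrices(30, (1, 1, 1), (1, 1, 1), 1)
(xa_M, xb_M, xc_M), (xa_m, xb_m, xc_m) = extreme_eigvec_triples(A, B, C)
# eigenvector of T for the eigenvalue of maximum modulus: np.kron(xc_M, np.kron(xb_M, xa_M))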

5.2. Initial guesses used in the numerical examples. In our numerical experiments, we set S in (4.4) to be a diagonal tensor, i.e., s_{ijk} = 0 except for i = j = k = 1 and i = j = k = 2. Then, the proposed initial guess (4.4) is represented by the following convex combination of rank-one tensors:

(5.6)  P_1 = s_111 (x^{(A)}_{i_M} ∘ x^{(B)}_{j_M} ∘ x^{(C)}_{k_M}) + s_222 (x^{(A)}_{i_m} ∘ x^{(B)}_{j_m} ∘ x^{(C)}_{k_m}),

where the symbol "∘" denotes the outer product and s_111 + s_222 = 1 with s_111, s_222 ≥ 0.

As seen in Section 4.2, the vectors x^{(A)}_{i_M}, x^{(B)}_{j_M}, x^{(C)}_{k_M}, x^{(A)}_{i_m}, x^{(B)}_{j_m}, and x^{(C)}_{k_m} are determined by specific eigenvectors of the matrices A, B, and C. Since these matrices are tridiagonal Toeplitz matrices, it is widely known that the eigenvalues and eigenvectors are given in analytical form as follows: let D be a tridiagonal Toeplitz matrix

  D = [ d_1  d_3
        d_2  d_1  d_3
             ⋱    ⋱    ⋱
                  d_2  d_1  d_3
                       d_2  d_1 ]  ∈ R^{n×n},

where d_2 ≠ 0 and d_3 ≠ 0. Then, the eigenvalues λ_k^{(D)} are computed by

  λ_k^{(D)} = d_1 + 2d cos(kπ/(n+1)),   k = 1, 2, ..., n,

where d = sgn(d_2) √(d_2 d_3), and the corresponding eigenvectors x_k^{(D)} are given by

  x_k^{(D)} = √(2/(n+1)) [ (d_2/d_3)^{1/2} sin(kπ/(n+1)), (d_2/d_3)^{2/2} sin(2kπ/(n+1)), ..., (d_2/d_3)^{n/2} sin(nkπ/(n+1)) ]^T,   k = 1, 2, ..., n.
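As a sketch (mine, under the analytic formulas reconstructed above), the following computes the closed-form eigenpairs of a tridiagonal Toeplitz matrix and assembles the rank-two initial guess (5.6); the helper names and the extra normalization of P_1 are my own additions.

import numpy as np

def toeplitz_tridiag_eigpairs(d1, d2, d3, n):
    """Analytic eigenpairs of the n x n tridiagonal Toeplitz matrix with diagonal d1,
    subdiagonal d2, and superdiagonal d3 (requires d2*d3 > 0 for real eigenpairs)."""
    k = np.arange(1, n + 1)
    j = np.arange(1, n + 1)
    d = np.sign(d2) * np.sqrt(d2 * d3)
    lam = d1 + 2 * d * np.cos(k * np.pi / (n + 1))
    # column k-1 holds x_k; component j is (d2/d3)^(j/2) * sin(j*k*pi/(n+1)), scaled by sqrt(2/(n+1))
    X = np.sqrt(2 / (n + 1)) * (d2 / d3) ** (j[:, None] / 2) * np.sin(np.outer(j, k) * np.pi / (n + 1))
    return lam, X

def rank_two_initial_guess(xa_M, xb_M, xc_M, xa_m, xb_m, xc_m, s111=0.5):
    """Initial tensor P_1 of (5.6): convex combination of two rank-one (outer-product) tensors."""
    s222 = 1.0 - s111
    P1 = (s111 * np.einsum('i,j,k->ijk', xa_M, xb_M, xc_M)
          + s222 * np.einsum('i,j,k->ijk', xa_m, xb_m, xc_m))
    return P1 / np.sqrt(np.sum(P1 * P1))   # normalize so that ||P_1|| = 1, as required by Algorithm 2

Feeding this P_1 to the Algorithm 2 sketch above with s111 = 0.5 would correspond to the "typical case" used in the experiments of Section 5.3.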

5.3. Numerical results. First, we use the matrix T with the parameters a = (1, 1, 1), b = (1, 1, 1), and c = 1 in (5.1)-(5.3). The convergence history of Algorithm 2 with the proposed initial guess (5.6) using s_111 = s_222 = 0.5 is displayed in Figure 5.1 with the number of iterations required by Algorithm 2 on the horizontal axis and the log_10 of the residual norms on the vertical axis. Here, the k-th residual norms are computed by ‖R_k‖ |e_k^T u_M^{(D_k)}| for the maximum singular value and ‖R_k‖ |e_k^T u_m^{(D_k)}| for the minimum singular value.

[FIG. 5.1. The convergence history for Algorithm 2 with the proposed initial guess.]

As illustrated in Figure 5.1, Algorithm 2 with the proposed initial guess required 68 iterations for the maximum singular value and 46 iterations for the minimum singular value. From Figure 5.1, we observe a smooth convergence behavior and faster convergence for the maximum singular value.

Next, using the matrix T with the same parameters as for Figure 5.1, the variation in the number of iterations is displayed in Figure 5.2, where the dependence of the number of iterations on the value s_111 in the proposed initial guess is given in Figure 5.2(a), and the dependence on the tensor size is given in Figure 5.2(b). For comparison, the numbers of iterations required by Algorithm 2 with initial guesses consisting of random numbers are also displayed in Figure 5.2(b). In Figure 5.2(a), the horizontal axis denotes the value of s_111, varying from 0 to 1 incrementally with a stepsize of 0.1, and the vertical axis denotes the number of iterations required by Algorithm 2 with the proposed initial guess. Here, the value of s_222 is computed by s_222 = 1 - s_111, and the matrix T used for Figure 5.2(a) is obtained from the discretization of the PDE (1.2) over a grid. On the other hand, in Figure 5.2(b), the horizontal axis denotes the value of n, where the size of the tensor is n × n × n (n = 5, 10, ..., 35), and the vertical axis denotes the number of iterations required by Algorithm 2 with the proposed initial guess using s_111 = s_222 = 0.5. Here, this initial guess is referred to as the typical case.

[FIG. 5.2. The variation in the number of iterations for the matrix T with a = b = (1, 1, 1), c = 1: (a) number of iterations versus the value of s_111; (b) number of iterations versus the tensor size.]

In Figure 5.2(a), the number of iterations hardly depends on the value of s_111, but there is a big difference in this number between the cases s_111 = 0.9 and s_111 = 1. We therefore ran Algorithm 2 with the proposed initial guess using the values s_111 = 0.90, 0.91, ..., 0.99. As a result, we confirm that the numbers of iterations are almost the same as those for the cases s_111 = 0, 0.1, ..., 0.9. From these results, we find almost no dependency of the number of iterations on the choice of the parameter s_111, and this implies the robustness of the proposed initial guess.

It seems that the high number of iterations required for the case s_111 = 1 is due to the fact that the given initial guess only has a very small component of the singular vector corresponding to the minimum singular value. In fact, for a symmetric matrix, such a choice means that the proposed initial guess includes no component of the singular vector corresponding to the minimum singular value. In Figure 5.2(b) we observe that Algorithm 2 in the typical case requires fewer iterations than with an initial guess of random numbers, and the gap grows as n increases.

In what follows, we use matrices T with higher or lower symmetry than for the matrices used in Figures 5.1 and 5.2. A matrix T with higher symmetry and a matrix T with lower symmetry are created by varying the parameters a, b, and c in (5.1)-(5.3); as noted in Section 5.1, the symmetry of T is governed by the relative sizes of ‖a‖_2 and ‖b‖_2. The variation in the number of iterations with the value of s_111 in the proposed initial guess is presented in Figure 5.3. Here, the matrices T arise from the discretization of the PDE (1.2) with these parameter choices over a grid.

[FIG. 5.3. The variation in the number of iterations versus the value of s_111: (a) T with higher symmetry; (b) T with lower symmetry.]

In Figure 5.3, the variation in the number of iterations for the matrices with high and low symmetry showed similar tendencies as in Figure 5.2(a). Furthermore, the variation in the number of iterations required by Algorithm 2 with the proposed initial guess using the values s_111 = 0.90, 0.91, ..., 0.99 in Figure 5.3 has the same behavior as that in Figure 5.2(a). For the low-symmetry case, the choice s_111 = 1 was optimal, unlike the other cases.

In the following example, the variation in the number of iterations versus the tensor size is displayed in Figure 5.4. For comparison, we ran Algorithm 2 with several initial guesses: random numbers and the proposed one with the typical case s_111 = 0.5. According to Figure 5.4(a), Algorithm 2 in the typical case required fewer iterations than for the initial guesses using random numbers when T had higher symmetry. On the other hand, Figure 5.4(b) indicates that Algorithm 2 in the typical case required as many iterations as for a random initial guess when T had lower symmetry.

From Figures 5.2(b) and 5.4, we observe that the initial guess using s_111 < 1 improves the speed of convergence of Algorithm 2 except for the case where T has lower symmetry. As can be observed in Figure 5.4(b), the typical case shows no advantage over the random initial guess for the low-symmetry matrix. On the other hand, for some cases the proposed initial guess could still become a method of choice, for instance, for the case s_111 = 1 displayed in Figure 5.5. It is likely, though it requires further investigation, that the result in Figure 5.5 indicates a potential for improvement of the proposed initial guess (4.4) even for the low-symmetry case.

[FIG. 5.4. The variation in the number of iterations versus the tensor size: (a) T with higher symmetry; (b) T with lower symmetry.]

[FIG. 5.5. The variation in the number of iterations required by Algorithm 2 in the suitable case (s_111 = 1) versus the tensor size when T has lower symmetry.]

In fact, we only used a diagonal tensor as initial guess, which is a subtensor of the core tensor. For low-symmetry matrices, an experimental investigation of the optimal choice of a full core tensor will be considered in future work.

6. Concluding remarks. In this paper, first, we derived the Lanczos bidiagonalization method over tensor space from the conventional Lanczos bidiagonalization method using the vec^{-1}-operator in order to compute the maximum/minimum singular values of a generalized tensor sum T. The resulting method achieved a low memory requirement and a very simple implementation since it only required the non-zero structure of the matrices A, B, and C. Next, we proposed an initial guess given in Tucker decomposition form using eigenvectors corresponding to the maximum/minimum eigenvalues of T. Computing the eigenvectors of T was easy since the eigenpairs of T were obtained from the eigenpairs of A, B, and C. Finally, from the results of the numerical experiments, we showed that the maximum/minimum singular values of T were successfully computed by the Lanczos bidiagonalization method over tensor space with some of the proposed initial guesses. We see that the proposed initial guesses improved the speed of convergence of the Lanczos bidiagonalization method over tensor space for the high-symmetry case and that they could become a method of choice for other cases if a suitable core tensor can be found.

Future work is devoted to experimental investigations using the full core tensor in the proposed initial guess in order to choose an optimal initial guess for low-symmetry matrices.

If the generalized tensor sum (1.1) is sufficiently close to a symmetric matrix, our initial guess works very well, but in general, restarting techniques are important for a further improvement of the speed of convergence to the minimum singular value. In this case, restarting techniques should be combined not in vector space but in tensor space. Thus, constructing a general framework in tensor space and combining Algorithm 2 with successful restarting techniques, e.g., [2, 6, 7], are topics of future work. With regard to other methods, the presented approach may be applied to other successful variants of the Lanczos bidiagonalization method, e.g., [10], and to Jacobi-Davidson-type singular value decomposition methods, e.g., [5].

Acknowledgments. This work has been supported in part by JSPS KAKENHI. We wish to express our gratitude to Dr. D. Savostyanov, University of Southampton, for constructive comments at the NASCA2013 conference. We are grateful to Dr. T. S. Usuda and Dr. H. Yoshioka of Aichi Prefectural University for their support and encouragement. We would like to thank the anonymous referees for informing us of reference [11] and many useful comments that enhanced the quality of the manuscript.

REFERENCES

[1] B. W. BADER AND T. G. KOLDA, Algorithm 862: MATLAB tensor classes for fast algorithm prototyping, ACM Trans. Math. Software, 32 (2006).
[2] J. BAGLAMA AND L. REICHEL, An implicitly restarted block Lanczos bidiagonalization method using Leja shifts, BIT Numer. Math., 53 (2013).
[3] J. BALLANI AND L. GRASEDYCK, A projection method to solve linear systems in tensor format, Numer. Linear Algebra Appl., 20 (2013).
[4] G. GOLUB AND W. KAHAN, Calculating the singular values and pseudo-inverse of a matrix, J. Soc. Indust. Appl. Math. Ser. B Numer. Anal., 2 (1965).
[5] M. E. HOCHSTENBACH, A Jacobi-Davidson type method for the generalized singular value problem, Linear Algebra Appl., 431 (2009).
[6] Z. JIA AND D. NIU, A refined harmonic Lanczos bidiagonalization method and an implicitly restarted algorithm for computing the smallest singular triplets of large matrices, SIAM J. Sci. Comput., 32 (2010).
[7] E. KOKIOPOULOU, C. BEKAS, AND E. GALLOPOULOS, Computing smallest singular triplets with implicitly restarted Lanczos bidiagonalization, Appl. Numer. Math., 49 (2004).
[8] T. G. KOLDA AND B. W. BADER, Tensor decompositions and applications, SIAM Rev., 51 (2009).
[9] D. KRESSNER AND C. TOBLER, Krylov subspace methods for linear systems with tensor product structure, SIAM J. Matrix Anal. Appl., 31 (2010).
[10] D. NIU AND X. YUAN, A harmonic Lanczos bidiagonalization method for computing interior singular triplets of large matrices, Appl. Math. Comput., 218 (2012).
[11] B. SAVAS AND L. ELDÉN, Krylov-type methods for tensor computations I, Linear Algebra Appl., 438 (2013).
[12] M. STOLL, A Krylov-Schur approach to the truncated SVD, Linear Algebra Appl., 436 (2012).
