Bulletin of the Iranian Mathematical Society


ISSN: X (Print), ISSN: (Online)

Bulletin of the Iranian Mathematical Society, Vol. 41 (2015), No. 4.

Title: Convergence analysis of the global FOM and GMRES methods for solving matrix equations AXB = C with SPD coefficients

Author(s): M. Mohseni Moghadam, A. Rivaz, A. Tajaddini and F. Saberi Movahed

Published by the Iranian Mathematical Society

Bull. Iranian Math. Soc. Vol. 41 (2015), No. 4. Online ISSN:

CONVERGENCE ANALYSIS OF THE GLOBAL FOM AND GMRES METHODS FOR SOLVING MATRIX EQUATIONS AXB = C WITH SPD COEFFICIENTS

M. MOHSENI MOGHADAM, A. RIVAZ, A. TAJADDINI AND F. SABERI MOVAHED

(Communicated by Davod Khojasteh Salkuyeh)

Abstract. In this paper, we study the convergence behavior of the global FOM (Gl-FOM) and global GMRES (Gl-GMRES) methods for solving the matrix equation AXB = C, where A and B are symmetric positive definite (SPD). We present some new theoretical results for these methods, such as computable exact expressions and upper bounds for the norm of the error and residual. In particular, the obtained upper bounds for the Gl-FOM method help us to predict the behavior of the Frobenius norm of the Gl-FOM residual. We also explore the worst-case convergence behavior of these methods. Finally, some numerical experiments are given to show the performance of the theoretical results.

Keywords: Convergence analysis; global FOM; global GMRES; global Lanczos algorithm; worst-case behavior.
MSC(2010): Primary: 65F10; Secondary: 15A.

1. Introduction

In this paper, we consider the matrix equation
(1.1) AXB = C,
where A ∈ R^{n×n}, B ∈ R^{s×s}, C ∈ R^{n×s}, and the matrices A and B are symmetric positive definite (SPD). Various methods have been devoted to finding solutions of the matrix equation AXB = C with special structure, such as diagonal, triangular, reflexive, symmetric, centro-symmetric or skew-symmetric solutions, or the least squares solution X; see [7, 8, 13, 18] and the references therein.

During the last years, several projection methods have been proposed to solve matrix equations. The main idea developed in these methods is to

Article electronically published on August 16, 2015. Received: 18 May 2014; Accepted: 27 June. Corresponding author. © 2015 Iranian Mathematical Society

construct suitable bases of the Krylov subspace and project the large problem onto a smaller one; see [9–11, 20, 21] for more details. For example, Beik and Salkuyeh [2] have proposed two global Krylov subspace methods for solving the following general coupled linear matrix equations
(1.2) ∑_{j=1}^{p} A_{ij} X_j B_{ij} = C_i, i = 1, ..., p.
Furthermore, using the Schur complement formula, Beik [1] has studied some convergence results of the Gl-GMRES method for solving the generalized Sylvester equations and the matrix equations (1.2). In the case where B is the identity matrix I_{s×s}, Jbilou et al. [14] have presented the properties of the Gl-FOM and Gl-GMRES methods for solving the matrix equation AX = C. Then, using the ⋄ product, Bouyouli et al. [4] have studied the convergence properties of the Gl-FOM and Gl-GMRES methods. When A is an SPD matrix and B = I_{s×s} with s = 1, the Conjugate Gradient (CG) and MINRES methods are two well-known Krylov subspace approaches for solving the linear system Ax = b. An overview of the convergence analysis of these methods is given in [17]. Also, an interesting work was recently done by Bouyouli et al. [5]: based on the relationship between the CG and Lanczos methods, they derived some convergence results for the error and the residual of the CG method.

In this paper, we discuss the convergence analysis of the Gl-FOM and Gl-GMRES methods for solving the matrix equation (1.1). Our main interest is to understand the behavior of the error and residual of these methods. Both methods use a global Lanczos-based algorithm on a matrix Krylov subspace. In addition, since these methods are the global orthogonal residual and global minimal residual methods, we call them the G-OR-L and G-MR-L methods, respectively.

This paper is organized as follows. In Section 2, we give some preliminary tools; we also recall the matrix Krylov subspaces and give a brief description of the global Arnoldi algorithm and its properties. In Section 3, we present the G-OR-L and G-MR-L methods and their mathematical properties for solving the matrix equation (1.1), and we prove some convergence results for these methods. In particular, we discuss the importance of the spectral information of A and B for the convergence behavior of the Frobenius norm of the G-OR-L residual. In Section 4, we investigate the worst-case convergence behavior of these methods. For the special case AX = C, where A is a diagonalizable matrix, we also prove the worst-case convergence behavior of the Gl-GMRES method; moreover, our proof is shorter than the one given in [3]. Numerical experiments are presented in the last section.

2. Preliminaries

We use the following notations. R^{n×s} denotes the real vector space of n×s matrices, and e_j denotes the jth column of an identity matrix of suitable dimension. Let X, Y ∈ R^{n×s}. The vector vec(X) ∈ R^{ns} is defined by vec(X) = [x_1^T, ..., x_s^T]^T, where x_i is the ith column of X. The Frobenius inner product is defined by ⟨X, Y⟩_F = trace(Y^T X), where trace(·) denotes the trace of the square matrix Y^T X. The associated norm is the Frobenius norm, denoted by ‖·‖_F. The notation X ⊥_F Y means that ⟨X, Y⟩_F = 0. The Kronecker product of matrices A ∈ R^{k×l} and B ∈ R^{n×m} is defined by A ⊗ B = [a_{ij} B]. This product has the following properties; for more details we refer to [4].

(1) (A ⊗ B)(C ⊗ D) = AC ⊗ BD, provided that AC and BD exist.
(2) (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}, provided that A^{-1} and B^{-1} exist.
(3) (A ⊗ B)^T = A^T ⊗ B^T.
(4) vec(ABC) = (C^T ⊗ A) vec(B), provided that ABC exists.
(5) If A ∈ R^{k×l}, B ∈ R^{l×m} and C ∈ R^{m×k}, then trace(ABC) = vec(A^T)^T (I_k ⊗ B) vec(C).

We give the following proposition, whose proof is straightforward.

Proposition 2.1. Assume that A ∈ R^{n×n} and B ∈ R^{s×s} are SPD. The map ⟨·,·⟩_{(A,B)} : R^{n×s} × R^{n×s} → R defined by ⟨X, Y⟩_{(A,B)} = trace(Y^T A X B) is an inner product, called the (A, B)-inner product. Moreover, the norm associated with the (A, B)-inner product is ‖X‖_{(A,B)} = ‖vec(X)‖_{B⊗A}, X ∈ R^{n×s}.

Let E = [E_1, E_2, ..., E_p] ∈ R^{n×ps} and F = [F_1, F_2, ..., F_l] ∈ R^{n×ls}, where the E_i and F_j are n×s matrices. The matrices E^T ⋄ F and E^T ⋄_{(A,B)} F are defined by (E^T ⋄ F)_{ij} = ⟨E_i, F_j⟩_F and (E^T ⋄_{(A,B)} F)_{ij} = ⟨E_i, F_j⟩_{(A,B)}, respectively. One can see that the ⋄ product satisfies the following properties; see [4] for more details.

Proposition 2.2. Let E, F ∈ R^{n×ps}, L ∈ R^{p×p}, K ∈ R^{s×s}, y ∈ R^p and α ∈ R. Then we have
(1) E^T ⋄ (F(L ⊗ I_s)(I_p ⊗ K)) = (E^T ⋄ (F(I_p ⊗ K))) L.
(2) E^T ⋄ (F(I_s ⊗ L)(y ⊗ I_s)) = (E^T ⋄ (F(I_s ⊗ L))) y.
(3) E^T ⋄ (A E (I_p ⊗ B)) = E^T ⋄_{(A,B)} E = Ẽ^T (B ⊗ A) Ẽ, where Ẽ = [vec(E_1), vec(E_2), ..., vec(E_p)].

2.1. The generalized matrix Krylov subspace. Let A ∈ R^{n×n} and B ∈ R^{s×s} be arbitrary matrices. The generalized matrix Krylov subspace associated with

the triplet (A, V, B), where V ∈ R^{n×s}, is defined by
(2.1) GK_i(A, V, B) = span{V, AVB, ..., A^{i−1} V B^{i−1}}.
The following definition of [16, p. 411] is a basic tool for describing the structure of GK_i(A, V, B).

Definition 2.3. Let p(x, y) = ∑_{i,j=0}^{k} c_{ij} x^i y^j be a polynomial in two variables x and y with real coefficients c_{ij}. The matrix p(C : D) is defined by p(C : D) = ∑_{i,j=0}^{k} c_{ij} (C^i ⊗ D^j), where C ∈ R^{m×m} and D ∈ R^{n×n}.

Remark 2.4. Suppose that K_i(G, v) = span{v, Gv, ..., G^{i−1} v} is the classical Krylov subspace with v = vec(V) and G = B^T ⊗ A. The map T : GK_i(A, V, B) → K_i(G, v) given by Z ↦ vec(Z) is an isomorphism. So, GK_i(A, V, B) is the set of all matrices Z such that vec(Z) = p_k(B^T : A) v, in which p_k(x, y) = ∑_{j=0}^{k} c_j x^j y^j and k ≤ i − 1.

To construct an orthonormal basis of GK_k(A, V, B), we apply the global Arnoldi algorithm, which can be summarized as follows.

1: Set β = ‖V‖_F and V_1 = V/β.
2: for j = 1, ..., k do
3:   W_j = A V_j B
4:   for i = 1, ..., j do
5:     h_{ij} = ⟨W_j, V_i⟩_F
6:     W_j = W_j − h_{ij} V_i
7:   end for
8:   h_{j+1,j} = ‖W_j‖_F. If h_{j+1,j} = 0 then stop.
9:   V_{j+1} = W_j / h_{j+1,j}.
10: end for

Let V_k = [V_1, ..., V_k]. It is easy to see that V_k^T ⋄ V_k = I_k and V_k^T ⋄ V_{k+1} = 0_{k×1}. In addition, we have A V_k (I_k ⊗ B) = [A V_1 B, ..., A V_k B]. From the algorithm above, it follows that A V_j B = ∑_{i=1}^{j+1} h_{ij} V_i for j = 1, ..., k. Hence, we get
A V_k (I_k ⊗ B) = [∑_{i=1}^{2} h_{i1} V_i, ..., ∑_{i=1}^{k} h_{ik} V_i] + h_{k+1,k} [0, ..., 0, V_{k+1}].
So, it is clear that
(2.2) A V_k (I_k ⊗ B) = V_k (H_k ⊗ I_s) + h_{k+1,k} V_{k+1} (e_k^T ⊗ I_s),
where H_k = [h_{ij}] is the k×k upper Hessenberg matrix whose nonzero entries h_{ij} are produced by the global Arnoldi algorithm.
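The global Arnoldi process above is easy to prototype. The following NumPy sketch (our own illustration with sizes of our choosing, not the authors' Matlab code) builds the F-orthonormal basis and the Hessenberg matrix H_k, and checks the relation A V_j B = ∑_{i=1}^{j+1} h_{ij} V_i that underlies (2.2):

```python
import numpy as np

def global_arnoldi(A, B, V, k):
    """Global Arnoldi sketch: build an F-orthonormal basis V_1, ..., V_{k+1}
    of GK_{k+1}(A, V, B) and the (k+1) x k upper Hessenberg matrix H with
    A V_j B = sum_{i=1}^{j+1} h_{ij} V_i (the matrix form of relation (2.2))."""
    Vs = [V / np.linalg.norm(V, "fro")]
    H = np.zeros((k + 1, k))
    for j in range(k):
        W = A @ Vs[j] @ B
        for i in range(j + 1):
            H[i, j] = np.sum(W * Vs[i])        # h_ij = <A V_j B, V_i>_F
            W = W - H[i, j] * Vs[i]
        H[j + 1, j] = np.linalg.norm(W, "fro")
        if H[j + 1, j] == 0.0:                 # happy breakdown
            break
        Vs.append(W / H[j + 1, j])
    return Vs, H

rng = np.random.default_rng(1)
n, s, k = 8, 5, 4
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD test matrix
B = rng.standard_normal((s, s)); B = B @ B.T + s * np.eye(s)   # SPD test matrix
Vs, H = global_arnoldi(A, B, rng.standard_normal((n, s)), k)

# F-orthonormality of the basis and the Arnoldi relation.
assert abs(np.sum(Vs[0] * Vs[1])) < 1e-10
for j in range(k):
    assert np.allclose(A @ Vs[j] @ B, sum(H[i, j] * Vs[i] for i in range(j + 2)))
```

Note that the Frobenius inner product is computed simply as `np.sum(W * Vs[i])`, so no vec/Kronecker machinery is needed at this stage.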

Using relation (2.2), part (3) of Proposition 2.2 implies that H_k = Ṽ_k^T (B ⊗ A) Ṽ_k, where Ṽ_k = [vec(V_1), ..., vec(V_k)]. In the particular case where A and B are SPD, we state an important property of the matrix H_k, which is easy to prove.

Proposition 2.5. If A and B are SPD, then the matrix H_k is SPD and tridiagonal. Moreover, all entries of the three diagonals of H_k are positive.

As a consequence of the above proposition, when A and B are SPD, the matrices V_1, V_2, ..., V_k constructed by the global Arnoldi algorithm satisfy the three-term recurrence
h_{j+1,j} V_{j+1} = A V_j B − h_{jj} V_j − h_{j−1,j} V_{j−1}.
Therefore, the generalized global Arnoldi algorithm is transformed into the following algorithm, which is in fact a version of the global Lanczos algorithm. We call it the generalized global Lanczos (GGL) algorithm.

Algorithm 1 The generalized global Lanczos (GGL) algorithm
1: Set β = ‖V‖_F, V_1 = V/β, h_{0,1} = 0 and V_0 = 0.
2: for j = 1, ..., k do
3:   W_j = A V_j B − h_{j−1,j} V_{j−1}
4:   h_{jj} = ⟨W_j, V_j⟩_F
5:   W_j = W_j − h_{jj} V_j
6:   h_{j+1,j} = ‖W_j‖_F. If h_{j+1,j} = 0 then stop.
7:   V_{j+1} = W_j / h_{j+1,j}.
8: end for

3. Convergence analysis

In this section, we present some convergence results for the Gl-FOM and Gl-GMRES methods for solving the matrix equation (1.1). Both of these methods use the GGL algorithm to construct an orthonormal basis of a generalized matrix Krylov subspace. In addition, since these methods are the global orthogonal residual and global minimal residual methods, we call them the G-OR-L and G-MR-L methods, respectively.

3.1. The G-OR-L method. Assume that X_0 ∈ R^{n×s} is an initial guess and R_0 = C − A X_0 B is its corresponding residual. The G-OR-L method generates approximate solutions of the form X_m^{or} = X_0 + Z_m with Z_m ∈ GK_m(A, R_0, B) such that
(3.1) R_m^{or} := C − A X_m^{or} B ⊥_F GK_m(A, R_0, B),
where m = 1, 2, .... The matrix R_m^{or} is called the mth residual associated with X_m^{or}.
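As a quick illustration of the GGL algorithm and of Proposition 2.5, the following NumPy sketch (our own prototype with illustrative sizes, not code from the paper) runs the three-term recurrence and checks that the resulting H_k is symmetric, tridiagonal and positive definite:

```python
import numpy as np

def global_lanczos(A, B, V, k):
    """GGL sketch for SPD A, B: the global Arnoldi orthogonalization collapses
    to the three-term recurrence
    h_{j+1,j} V_{j+1} = A V_j B - h_jj V_j - h_{j-1,j} V_{j-1}."""
    Vprev = np.zeros_like(V)                    # V_0 = 0
    Vcur = V / np.linalg.norm(V, "fro")
    diag, off = [], []
    beta = 0.0                                  # h_{0,1} = 0
    for _ in range(k):
        W = A @ Vcur @ B - beta * Vprev
        alpha = np.sum(W * Vcur)                # h_jj = <W_j, V_j>_F
        W = W - alpha * Vcur
        beta = np.linalg.norm(W, "fro")         # h_{j+1,j}
        diag.append(alpha); off.append(beta)
        if beta == 0.0:
            break
        Vprev, Vcur = Vcur, W / beta
    m = len(diag)
    # Assemble the symmetric tridiagonal H_m from the recurrence coefficients.
    H = np.diag(diag) + np.diag(off[:m - 1], 1) + np.diag(off[:m - 1], -1)
    return H, off[-1]

rng = np.random.default_rng(2)
n, s = 7, 4
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD
B = rng.standard_normal((s, s)); B = B @ B.T + s * np.eye(s)   # SPD
H, h_next = global_lanczos(A, B, rng.standard_normal((n, s)), 3)

# Proposition 2.5: H is SPD and tridiagonal with positive off-diagonal entries.
assert np.allclose(H, H.T)
assert np.all(np.linalg.eigvalsh(H) > 0) and np.all(np.diag(H, 1) > 0)
```

The design point is that only two basis matrices need to be kept in memory at a time, which is exactly what makes the GGL version cheaper than full global Arnoldi.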

Let {V_1, ..., V_m} be the basis of GK_m(A, R_0, B) constructed by the GGL algorithm. The approximation X_m^{or} can be written as
(3.2) X_m^{or} = X_0 + V_m (α_m^{or} ⊗ I_s),
where V_m = [V_1, ..., V_m]. It is also easy to see that α_m^{or} solves the linear system
(3.3) H_m α_m^{or} = ‖R_0‖_F e_1.
Moreover, as H_m is SPD, α_m^{or} is the unique solution of (3.3). Now, we show how to compute the Frobenius norm of the residual without computing the residual explicitly.

Proposition 3.1. Under the same assumptions as in Proposition 2.5, the residual matrix R_m^{or} satisfies the relation
(3.4) ‖R_m^{or}‖_F = ‖R_0‖_F ∏_{i=1}^{m} h_{i+1,i} / det(H_m).
So, ‖R_m^{or}‖_F = 0 if and only if h_{m+1,m} = 0.

Proof. We have R_m^{or} = R_0 − A V_m (I_m ⊗ B)(α_m^{or} ⊗ I_s). So, using relations (2.2) and (3.3), we obtain
R_m^{or} = R_0 − V_m (H_m α_m^{or} ⊗ I_s) − h_{m+1,m} (e_m^T α_m^{or}) V_{m+1} = −h_{m+1,m} (e_m^T α_m^{or}) V_{m+1}.
Therefore, ‖R_m^{or}‖_F = h_{m+1,m} |α_m^{or(m)}|, where α_m^{or(m)} is the last component of α_m^{or}. On the other hand, from (3.3) and Cramer's rule, it follows that
α_m^{or(m)} = (−1)^{m+1} ‖R_0‖_F ∏_{i=1}^{m−1} h_{i+1,i} / det(H_m).
Using Proposition 2.5, we have h_{i+1,i} > 0 for i = 1, ..., m − 1. Hence ‖R_m^{or}‖_F = 0 if and only if h_{m+1,m} = 0.

We study the G-OR-L method further by considering the properties of the error associated with X_m^{or}, i.e., X − X_m^{or}, where X is the solution of the matrix equation (1.1). Since the G-OR-L method is an orthogonal projection method onto the generalized matrix Krylov subspace GK_m(A, R_0, B), the error of the G-OR-L method has a minimization property. It is not hard to prove the following theorem.

Theorem 3.2. Assume that X is the solution of (1.1) and X_0 ∈ R^{n×s}. Then X_m^{or} is the mth approximation of the G-OR-L method if and only if
‖X − X_m^{or}‖_{(A,B)} = min_{X̃ ∈ X_0 + GK_m(A, R_0, B)} ‖X − X̃‖_{(A,B)}.
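Proposition 3.1 and system (3.3) can be checked numerically. The sketch below (NumPy, small random SPD matrices of our choosing; an illustration, not the authors' implementation) builds the basis by global Arnoldi, forms the G-OR-L iterate, and compares the true residual norm with the determinant formula (3.4):

```python
import numpy as np

rng = np.random.default_rng(3)
n, s, m = 6, 3, 4
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD
B = rng.standard_normal((s, s)); B = B @ B.T + s * np.eye(s)   # SPD
C = rng.standard_normal((n, s))
X0 = np.zeros((n, s))
R0 = C - A @ X0 @ B
beta = np.linalg.norm(R0, "fro")

# m steps of the global Arnoldi/Lanczos process on GK_m(A, R0, B).
Vs, Hbar = [R0 / beta], np.zeros((m + 1, m))
for j in range(m):
    W = A @ Vs[j] @ B
    for i in range(j + 1):
        Hbar[i, j] = np.sum(W * Vs[i])
        W = W - Hbar[i, j] * Vs[i]
    Hbar[j + 1, j] = np.linalg.norm(W, "fro")
    Vs.append(W / Hbar[j + 1, j])              # assumes no breakdown
H = Hbar[:m, :m]

# G-OR-L step: solve H_m alpha = ||R0||_F e_1 (system (3.3)).
alpha = np.linalg.solve(H, beta * np.eye(m)[:, 0])
Xm = X0 + sum(alpha[j] * Vs[j] for j in range(m))
Rm = C - A @ Xm @ B

# Formula (3.4): ||R_m^or||_F = ||R0||_F * prod_i h_{i+1,i} / det(H_m).
pred = beta * np.prod(np.diag(Hbar, -1)) / np.linalg.det(H)
assert np.isclose(np.linalg.norm(Rm, "fro"), pred)
```

This makes the practical point of Proposition 3.1 concrete: the residual norm is available from the small matrix H_m alone, without forming R_m^{or}.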

By considering the minimization property of the (A, B)-norm of the error of the G-OR-L method, we establish further properties of the error. First, we give two expressions for ‖X − X_m^{or}‖_{(A,B)}. In what follows, we set D_0 = X − X_0 and D_m = X − X_m^{or}.

Theorem 3.3. The error D_m = X − X_m^{or} associated with X_m^{or} satisfies
(16-i) ‖D_m‖²_{(A,B)} = ‖R_m^{or}‖²_F (V_{m+1}^T ⋄_{(A^{-1},B^{-1})} V_{m+1}).
Also, suppose that K_{m+1} = [R_0, ..., A^m R_0 B^m]. If {R_0, ..., A^m R_0 B^m} is a linearly independent set, then
(16-ii) ‖D_m‖²_{(A,B)} = 1 / (e_1^T (K_{m+1}^T ⋄_{(A^{-1},B^{-1})} K_{m+1})^{-1} e_1).

Proof. (i) Since D_m = A^{-1} R_m^{or} B^{-1}, we have
‖D_m‖²_{(A,B)} = vec(D_m)^T (B ⊗ A) vec(D_m) = vec(R_m^{or})^T (B^{-1} ⊗ A^{-1}) vec(R_m^{or}) = ‖R_m^{or}‖²_F vec(V_{m+1})^T (B^{-1} ⊗ A^{-1}) vec(V_{m+1}) = ‖R_m^{or}‖²_F (V_{m+1}^T ⋄_{(A^{-1},B^{-1})} V_{m+1}).

(ii) We can write X_m^{or} = X_0 + K_m (a_m ⊗ I_s) for some a_m ∈ R^m, so that R_m^{or} = R_0 − A K_m (I_m ⊗ B)(a_m ⊗ I_s), where K_m = [R_0, ..., A^{m−1} R_0 B^{m−1}]. So,
‖D_m‖²_{(A,B)} = trace((X − X_m^{or})^T A (X − X_m^{or}) B) = ⟨D_0 − K_m (a_m ⊗ I_s), R_m^{or}⟩_F = ⟨D_0, R_m^{or}⟩_F = ⟨D_0, R_0 − A K_m (I_m ⊗ B)(a_m ⊗ I_s)⟩_F.
From [12], the matrix K_m^T ⋄_{(A,B)} K_m is nonsingular. Now, the orthogonality relation (3.1) yields a_m = (K_m^T ⋄_{(A,B)} K_m)^{-1} (K_m^T ⋄ R_0). In addition, we have R_0 = A D_0 B. Therefore,
‖D_m‖²_{(A,B)} = D_0^T ⋄_{(A,B)} D_0 − (D_0^T ⋄_{(A,B)} K_m) a_m = D_0^T ⋄_{(A,B)} D_0 − (D_0^T ⋄_{(A,B)} K_m)(K_m^T ⋄_{(A,B)} K_m)^{-1} (K_m^T ⋄_{(A,B)} D_0) = M_m / (K_m^T ⋄_{(A,B)} K_m),
where
M_m = [D_0^T ⋄_{(A,B)} D_0, D_0^T ⋄_{(A,B)} K_m; K_m^T ⋄_{(A,B)} D_0, K_m^T ⋄_{(A,B)} K_m]
and M/F denotes the Schur complement of F in M. As K_{m+1} = [A D_0 B, A K_m (I_m ⊗ B)] and A^{-1} K_{m+1} (I_{m+1} ⊗ B^{-1}) = [D_0, K_m],

we get
(A^{-1} K_{m+1} (I_{m+1} ⊗ B^{-1}))^T ⋄ K_{m+1} = [D_0^T ⋄_{(A,B)} D_0, D_0^T ⋄_{(A,B)} K_m; K_m^T ⋄_{(A,B)} D_0, K_m^T ⋄_{(A,B)} K_m].
On the other hand,
(A^{-1} K_{m+1} (I_{m+1} ⊗ B^{-1}))^T ⋄ K_{m+1} = K_{m+1}^T ⋄_{(A^{-1},B^{-1})} K_{m+1},
so we obtain
‖D_m‖²_{(A,B)} = (K_{m+1}^T ⋄_{(A^{-1},B^{-1})} K_{m+1}) / (K_m^T ⋄_{(A,B)} K_m) = det(K_{m+1}^T ⋄_{(A^{-1},B^{-1})} K_{m+1}) / det(K_m^T ⋄_{(A,B)} K_m) = 1 / (e_1^T (K_{m+1}^T ⋄_{(A^{-1},B^{-1})} K_{m+1})^{-1} e_1).

Remark 3.4. (1) Proposition 3.1 together with relation (16-i) shows that h_{m+1,m} = 0 if and only if ‖D_m‖_{(A,B)} = 0. (2) From [12], the matrix K_{m+1}^T ⋄_{(A^{-1},B^{-1})} K_{m+1} is SPD, so e_1^T (K_{m+1}^T ⋄_{(A^{-1},B^{-1})} K_{m+1})^{-1} e_1 > 0.

Now, we introduce some upper bounds for ‖D_m‖_{(A,B)}. First, we state the following relations:
‖(B ⊗ A)^{-1}‖_2 = 1/λ_min(B ⊗ A) = 1/(λ_min(A) λ_min(B)) = ‖A^{-1}‖_2 ‖B^{-1}‖_2,
‖B ⊗ A‖_2 = λ_max(B ⊗ A) = λ_max(A) λ_max(B) = ‖A‖_2 ‖B‖_2.
Therefore, κ(B ⊗ A) = κ(A) κ(B), where κ(Z) = ‖Z‖_2 ‖Z^{-1}‖_2. In the following, the abbreviation UB is used to denote Upper Bound.

Theorem 3.5. The error D_m = X − X_m^{or} associated with X_m^{or} satisfies the relations
(UB.1) ‖D_m‖_{(A,B)} ≤ ‖R_m^{or}‖_F sqrt(κ(A)κ(B) / (‖A‖_2 ‖B‖_2)),
(UB.2) ‖D_m‖_{(A,B)} ≤ ‖R_m^{or}‖_F (κ(A)κ(B) + 1) / (2 sqrt(κ(A)κ(B) (V_{m+1}^T ⋄_{(A,B)} V_{m+1}))),
(UB.3) ‖D_m‖_{(A,B)} ≤ ‖R_m^{or}‖_F (κ(A)κ(B) + 1) / (2 sqrt(‖A‖_2 ‖B‖_2)),
(UB.4) ‖D_m‖²_{(A,B)} ≤ γ ‖R_m^{or}‖_F, where γ = κ(A)κ(B) ‖R_0‖_F / (‖A‖_2 ‖B‖_2) + ‖α_m^{or}‖_2.

Proof. (1): Let G = B ⊗ A and v = vec(V_{m+1}). Applying the Courant–Fischer theorem [12], we get
v^T G^{-1} v ≤ 1/(λ_min(A) λ_min(B)) = κ(A)κ(B) / (‖A‖_2 ‖B‖_2).
Relation (16-i) together with the above inequality gives (UB.1).
(2): Since v^T G^{-1} v = V_{m+1}^T ⋄_{(A^{-1},B^{-1})} V_{m+1} and v^T G v = V_{m+1}^T ⋄_{(A,B)} V_{m+1}, the Kantorovich inequality [12] implies that
(3.5) V_{m+1}^T ⋄_{(A^{-1},B^{-1})} V_{m+1} ≤ (κ(A)κ(B) + 1)² / (4 κ(A)κ(B) (V_{m+1}^T ⋄_{(A,B)} V_{m+1})).
Multiplying both sides of (3.5) by ‖R_m^{or}‖²_F gives (UB.2). On the other hand, using the Courant–Fischer theorem, we obtain
1/(v^T G v) ≤ 1/λ_min(G) = κ(A)κ(B) / (‖A‖_2 ‖B‖_2),
which gives (UB.3).
(3): We have
(3.6) ‖D_m‖²_{(A,B)} = ⟨G vec(X − X_m^{or}), vec(X − X_m^{or})⟩
and
(3.7) ‖C − A X_m^{or} B‖²_F = ‖A (X − X_m^{or}) B‖²_F = ‖vec(A (X − X_m^{or}) B)‖²_2 = ⟨G² vec(X − X_m^{or}), vec(X − X_m^{or})⟩.
Let y = vec(X − X_m^{or}) and z = y/‖y‖_2. Using a result given in [15], we get ⟨Gz, z⟩² ≤ ⟨G²z, z⟩, or equivalently, ⟨Gy, y⟩² ≤ ‖y‖²_2 ⟨G²y, y⟩. Since ‖vec(X − X_m^{or})‖_2 = ‖D_m‖_F, it follows from (3.6) and (3.7) that
‖D_m‖²_{(A,B)} ≤ ‖D_m‖_F ‖C − A X_m^{or} B‖_F.
From the relation X − X_m^{or} = D_0 − V_m (α_m^{or} ⊗ I_s), we have ‖D_m‖_F ≤ ‖D_0‖_F + ‖V_m (α_m^{or} ⊗ I_s)‖_F. Therefore, as ‖V_m (α_m^{or} ⊗ I_s)‖_F = ‖α_m^{or}‖_2 and
‖D_0‖_F = ‖vec(D_0)‖_2 = ‖(B ⊗ A)^{-1} vec(R_0)‖_2 ≤ ‖A^{-1}‖_2 ‖B^{-1}‖_2 ‖vec(R_0)‖_2,
the proof is complete.

It can be shown that the (A, B)-norm of the error at each step of the G-OR-L method satisfies ‖X − X_m^{or}‖_{(A,B)} ≤ ‖X − X_0‖_{(A,B)}. However, the Frobenius norm of the residual at each step of the G-OR-L method may oscillate. In the next result, we give an upper bound for ‖R_m^{or}‖_F.

Theorem 3.6. Assume that R_m^{or} is the G-OR-L residual obtained at step m. Then
(3.8) ‖R_m^{or}‖_F ≤ min{α_1 ‖R_0‖_F, α_2 ‖R_0‖_F},
where α_1 = sqrt(κ(A)κ(B)) and α_2 = ((κ(A)κ(B) + 1)/2) sqrt(‖A‖_2 ‖B‖_2 / (κ(A)κ(B) (V_1^T ⋄_{(A,B)} V_1))).

Proof. The Courant–Fischer theorem together with relation (16-i) implies that
‖R_m^{or}‖²_F ≤ ‖D_m‖²_{(A,B)} ‖A‖_2 ‖B‖_2.
On the other hand, similarly to Theorem 3.5, we obtain
‖X − X_0‖_{(A,B)} ≤ ‖R_0‖_F sqrt(κ(A)κ(B) / (‖A‖_2 ‖B‖_2))
and
‖X − X_0‖_{(A,B)} ≤ ‖R_0‖_F (κ(A)κ(B) + 1) / (2 sqrt(κ(A)κ(B) (V_1^T ⋄_{(A,B)} V_1))).
Therefore, using the above relations, we have
‖R_m^{or}‖²_F ≤ ‖A‖_2 ‖B‖_2 ‖D_m‖²_{(A,B)} ≤ ‖A‖_2 ‖B‖_2 ‖X − X_0‖²_{(A,B)} ≤ κ(A)κ(B) ‖R_0‖²_F,
and
‖R_m^{or}‖²_F ≤ ‖A‖_2 ‖B‖_2 ‖X − X_0‖²_{(A,B)} ≤ ((κ(A)κ(B) + 1)² ‖A‖_2 ‖B‖_2 / (4 κ(A)κ(B) (V_1^T ⋄_{(A,B)} V_1))) ‖R_0‖²_F.
Now, since ‖X − X_0‖²_{(A,B)} − ‖D_m‖²_{(A,B)} ≥ 0, the results follow.

Remark 3.7. Since ‖R_m^{or}‖_F ≤ min{α_1 ‖R_0‖_F, α_2 ‖R_0‖_F}, we expect that the Frobenius norm of the residuals will not oscillate too much when α_1 and α_2 are small. On the other hand, it is easy to see that α_2 ≤ (κ(A)κ(B) + 1)/2, so we have
‖R_m^{or}‖_F ≤ min{ sqrt(κ(A)κ(B)) ‖R_0‖_F, ((κ(A)κ(B) + 1)/2) ‖R_0‖_F }.
Hence, we conclude that when κ(A) = 1 and κ(B) = 1, then ‖R_m^{or}‖_F ≤ ‖R_0‖_F. We also consider the special case where κ(A) and κ(B) are close to 1. We will come back to these points in the numerical illustrations section.

3.2. The G-MR-L method. Let X_0 ∈ R^{n×s} be an initial guess and R_0 = C − A X_0 B be its corresponding residual. The G-MR-L method generates approximate solutions of the form X_m^{mr} = X_0 + Z_m with Z_m ∈ GK_m(A, R_0, B) such that
(3.9) R_m^{mr} := C − A X_m^{mr} B ⊥_F A GK_m(A, R_0, B) B,

where m = 1, 2, .... The matrix R_m^{mr} is called the mth residual associated with X_m^{mr}.

Proposition 3.8. Assume that {V_1, ..., V_m} is the basis of GK_m(A, R_0, B) constructed by the GGL algorithm. The approximation X_m^{mr} can be written as
(3.10) X_m^{mr} = X_0 + V_m (α_m^{mr} ⊗ I_s),
where α_m^{mr} is the unique solution of the linear system
(3.11) (H_m² + h²_{m+1,m} e_m e_m^T) α_m^{mr} = ‖R_0‖_F H_m e_1.

Proof. We have X_m^{mr} = X_0 + V_m (α_m^{mr} ⊗ I_s) with R_m^{mr} = R_0 − A V_m (I_m ⊗ B)(α_m^{mr} ⊗ I_s). Let W_m = A V_m (I_m ⊗ B). The orthogonality relation (3.9) yields (W_m^T ⋄ W_m) α_m^{mr} = W_m^T ⋄ R_0. Since R_0 = ‖R_0‖_F V_1, using (2.2) and Proposition 2.2, we obtain
W_m^T ⋄ W_m = H_m² + h²_{m+1,m} e_m e_m^T and W_m^T ⋄ R_0 = ‖R_0‖_F H_m e_1.
As H_m² + h²_{m+1,m} e_m e_m^T is SPD, the solution of (3.11) exists and is unique.

Proposition 3.9. The residual matrix R_m^{mr} satisfies the relation
(3.12) R_m^{mr} = h_{m+1,m} α_m^{mr(m)} (h_{m+1,m} V_m (H_m^{-1} e_m ⊗ I_s) − V_{m+1}),
where α_m^{mr(m)} is the last component of α_m^{mr}. In addition, we get
(3.13) ‖R_m^{mr}‖_F = h_{m+1,m} |α_m^{mr(m)}| sqrt(h²_{m+1,m} ‖H_m^{-1} e_m‖²_2 + 1).
So, ‖R_m^{mr}‖_F = 0 if and only if h_{m+1,m} = 0.

Proof. We have R_m^{mr} = R_0 − A V_m (I_m ⊗ B)(α_m^{mr} ⊗ I_s). Using relations (2.2) and (3.11), it follows that
R_m^{mr} = R_0 − V_m (H_m α_m^{mr} ⊗ I_s) − h_{m+1,m} (e_m^T α_m^{mr}) V_{m+1} = R_0 − ‖R_0‖_F V_m (e_1 ⊗ I_s) + h²_{m+1,m} (e_m^T α_m^{mr}) V_m (H_m^{-1} e_m ⊗ I_s) − h_{m+1,m} (e_m^T α_m^{mr}) V_{m+1} = h_{m+1,m} (e_m^T α_m^{mr}) (h_{m+1,m} V_m (H_m^{-1} e_m ⊗ I_s) − V_{m+1}).
Therefore, ‖R_m^{mr}‖²_F = ⟨R_m^{mr}, R_m^{mr}⟩_F = h²_{m+1,m} (α_m^{mr(m)})² (h²_{m+1,m} ‖H_m^{-1} e_m‖²_2 + 1). From (3.11) and Cramer's rule, we obtain
α_m^{mr(m)} = (−1)^{m+1} ‖R_0‖_F ∏_{i=1}^{m−1} h_{i+1,i} / det(H_m + h²_{m+1,m} H_m^{-1} e_m e_m^T).
Using Proposition 2.5, we have h_{i+1,i} > 0 for i = 1, ..., m − 1. Hence ‖R_m^{mr}‖_F = 0 if and only if h_{m+1,m} = 0.
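The small m×m system (3.11) is the normal-equations form of a least-squares problem with the (m+1)×m matrix [H_m; h_{m+1,m} e_m^T] — a standard Gl-GMRES reformulation, not stated explicitly in the paper. It can be confirmed on synthetic data of our choosing:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 5
# A symmetric tridiagonal SPD H_m with positive off-diagonals, as the GGL
# algorithm produces for SPD A and B (illustrative values), plus h_{m+1,m}.
d = 4.0 + rng.random(m)
e = 0.5 + rng.random(m - 1)
H = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
h_next = 0.7                                   # h_{m+1,m}
beta = 3.0                                     # ||R_0||_F
em = np.eye(m)[:, -1]

# System (3.11): (H_m^2 + h_{m+1,m}^2 e_m e_m^T) alpha = ||R_0||_F H_m e_1.
alpha_ne = np.linalg.solve(H @ H + h_next**2 * np.outer(em, em), beta * H[:, 0])

# Equivalent least-squares form: minimize ||beta e_1 - Hbar_m alpha||_2 with
# Hbar_m = [H_m; h_{m+1,m} e_m^T], since Hbar^T Hbar = H^2 + h^2 e_m e_m^T
# and Hbar^T (beta e_1) = beta H e_1 (H is symmetric).
Hbar = np.vstack([H, h_next * em])
rhs = np.zeros(m + 1); rhs[0] = beta
alpha_ls = np.linalg.lstsq(Hbar, rhs, rcond=None)[0]
assert np.allclose(alpha_ne, alpha_ls)
```

In practice the least-squares form is usually preferred for conditioning reasons, but for the small tridiagonal H_m of the GGL process either route is cheap.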

The next result shows that the Frobenius norm of the residual has a minimization property.

Theorem 3.10. Let X_0 ∈ R^{n×s} be an initial guess and R_0 = C − A X_0 B be its corresponding residual. Then X_m^{mr} is the mth approximation of the G-MR-L method if and only if
‖C − A X_m^{mr} B‖_F = min_{X ∈ X_0 + GK_m(A, R_0, B)} ‖C − A X B‖_F.

Proof. The proof is similar to that of Theorem 3.2 and is omitted.

4. The worst-case behavior of the G-OR-L and G-MR-L methods

The worst-case convergence behavior of many well-known Krylov subspace methods for normal matrices is described by the min-max approximation problem on the discrete set of the matrix eigenvalues [17],
min_{p_m ∈ P_m, p_m(0)=1} max_{λ ∈ σ(A)} |p_m(λ)|.
We investigate the worst-case convergence behavior of the G-OR-L and G-MR-L methods.

4.1. The worst-case behavior of the G-OR-L method. First, we prove the following lemma.

Lemma 4.1. Let X̃ ∈ X_0 + GK_m(A, R_0, B), where X_0 ∈ R^{n×s}. If X is the solution of (1.1), then vec(X − X̃) = p_m(B : A) vec(D_0), where p_m(x, y) = ∑_{i=0}^{m} c_i x^i y^i with p_m(0, 0) = 1.

Proof. We have X̃ = X_0 + ∑_{i=1}^{m} d_i A^{i−1} R_0 B^{i−1}, where d_i ∈ R, i = 1, ..., m. As R_0 = A D_0 B, we get vec(X − X̃) = ∑_{i=0}^{m} c_i (B ⊗ A)^i vec(D_0) with c_0 = 1 and c_i = −d_i, i = 1, ..., m. Now, if p_m(x, y) = ∑_{i=0}^{m} c_i x^i y^i, then the result follows.

Let A = V^T D V and B = U^T Λ U with V^T V = I_n and U^T U = I_s, where D and Λ are the diagonal matrices whose elements are the eigenvalues µ_1, ..., µ_n of A and the eigenvalues λ_1, ..., λ_s of B, respectively. If V = [v_1, ..., v_n] and U = [u_1, ..., u_s], then vec(D_0) can be written as vec(D_0) = ∑_{i=1}^{s} ∑_{j=1}^{n} a_{ij} (u_i ⊗ v_j), where a_{ij} ∈ R. From [16, Theorem 1, p. 411], we obtain
(4.1) p_m(B : A) vec(D_0) = ∑_{i=1}^{s} ∑_{j=1}^{n} a_{ij} p_m(λ_i, µ_j) (u_i ⊗ v_j).
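Identity (4.1) can be checked directly for symmetric A and B. In the NumPy sketch below (with sizes and polynomial coefficients of our choosing), p(B : A) = ∑_k c_k (B^k ⊗ A^k) is applied to a random vector and compared with its eigenvector expansion:

```python
import numpy as np

rng = np.random.default_rng(6)
n, s = 4, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2    # symmetric
B = rng.standard_normal((s, s)); B = (B + B.T) / 2    # symmetric
mu, V = np.linalg.eigh(A)                              # A = V diag(mu) V^T
lam, U = np.linalg.eigh(B)                             # B = U diag(lam) U^T
c = [1.0, -0.3, 0.05]                 # p(x, y) = sum_k c_k x^k y^k, p(0, 0) = 1

# p(B : A) = sum_k c_k (B^k kron A^k) = sum_k c_k (B kron A)^k.
G = np.kron(B, A)
P = sum(ck * np.linalg.matrix_power(G, k) for k, ck in enumerate(c))

d0 = rng.standard_normal(n * s)
lhs = P @ d0
# Expansion (4.1): the vectors u_i kron v_j are eigenvectors of B kron A with
# eigenvalues lam_i * mu_j, so P scales each component by p(lam_i, mu_j).
rhs = np.zeros(n * s)
for i in range(s):
    for j in range(n):
        w = np.kron(U[:, i], V[:, j])
        p_val = sum(ck * (lam[i] * mu[j]) ** k for k, ck in enumerate(c))
        rhs += (w @ d0) * p_val * w
assert np.allclose(lhs, rhs)
```

This is the mechanism behind Theorems 4.2 and 4.3: the polynomial acts independently on each eigenpair (λ_i, µ_j), so only the values p(λ, µ) on σ(B) × σ(A) matter.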

Theorem 4.2. Let X be the solution of the matrix equation (1.1). If X_m^{or} is the mth approximation of the G-OR-L method, then
(4.2) ‖X − X_m^{or}‖_{(A,B)} ≤ ‖X − X_0‖_{(A,B)} min_{p_m ∈ P_m, p_m(0,0)=1} max_{λ ∈ σ(B), µ ∈ σ(A)} |p_m(λ, µ)|,
where P_m denotes the set of polynomials in two variables x, y of the form p_m(x, y) = ∑_{i=0}^{m} c_i x^i y^i with value one at the origin.

Proof. Using Theorem 3.2 and Lemma 4.1, we obtain
‖X − X_m^{or}‖_{(A,B)} = min_{p_m ∈ P_m, p_m(0,0)=1} ‖p_m(B : A) vec(D_0)‖_{B⊗A}.
Let ⟨·,·⟩ be the standard inner product in R^{ns}. From (4.1), it is straightforward to verify that
‖p_m(B : A) vec(D_0)‖²_{B⊗A} = ⟨p_m(B : A) vec(D_0), (B ⊗ A) p_m(B : A) vec(D_0)⟩ = ⟨p_m(B : A) vec(D_0), p_m(B : A)(B ⊗ A) vec(D_0)⟩ = ∑_{i=1}^{s} ∑_{j=1}^{n} a²_{ij} p²_m(λ_i, µ_j) λ_i µ_j.
As ‖vec(D_0)‖²_{B⊗A} = ∑_{i=1}^{s} ∑_{j=1}^{n} a²_{ij} λ_i µ_j, we get
‖p_m(B : A) vec(D_0)‖_{B⊗A} ≤ max_{i,j} |p_m(λ_i, µ_j)| ‖vec(D_0)‖_{B⊗A} = max_{λ ∈ σ(B), µ ∈ σ(A)} |p_m(λ, µ)| ‖vec(D_0)‖_{B⊗A}.
Finally, since ‖vec(D_0)‖_{B⊗A} = ‖X − X_0‖_{(A,B)}, the proof is complete.

4.2. The worst-case behavior of the G-MR-L method. In this section, we study the worst-case behavior of the G-MR-L method. The main result of this section is derived under the assumption that the matrices A and B are SPD.

Theorem 4.3. If X_m^{mr} is the mth approximation of the G-MR-L method, then
(4.3) ‖C − A X_m^{mr} B‖_F ≤ ‖C − A X_0 B‖_F min_{p_m ∈ P_m, p_m(0,0)=1} max_{λ ∈ σ(B), µ ∈ σ(A)} |p_m(λ, µ)|,
where P_m denotes the set of polynomials in two variables x, y of the form p_m(x, y) = ∑_{i=0}^{m} c_i x^i y^i with value one at the origin.

Proof. Let X ∈ X_0 + GK_m(A, R_0, B) and R = C − A X B. Then
vec(R) = vec(R_0) + ∑_{i=1}^{m} c_i (B ⊗ A)^i vec(R_0) = (U ⊗ V)^T (I + ∑_{i=1}^{m} c_i (Λ ⊗ D)^i) (U ⊗ V) vec(R_0) = (U ⊗ V)^T p_m(Λ : D)(U ⊗ V) vec(R_0),
in which p_m(x, y) = 1 + ∑_{i=1}^{m} c_i x^i y^i, where c_i ∈ R, i = 1, ..., m. Hence, we have
‖C − A X B‖_F = ‖vec(R)‖_2 ≤ ‖(U ⊗ V)^T‖_2 ‖p_m(Λ : D)‖_2 ‖U ⊗ V‖_2 ‖vec(R_0)‖_2.
Now, using [16, Theorem 1, p. 411], it follows that
‖C − A X B‖_F ≤ max_{λ ∈ σ(B), µ ∈ σ(A)} |p_m(λ, µ)| ‖R_0‖_F.
Together with Theorem 3.10, we conclude
‖C − A X_m^{mr} B‖_F = min_{X ∈ X_0 + GK_m(A, R_0, B)} ‖C − A X B‖_F ≤ min_{p_m ∈ P_m, p_m(0,0)=1} max_{λ ∈ σ(B), µ ∈ σ(A)} |p_m(λ, µ)| ‖R_0‖_F.

Remark 4.4. In the process of proving Theorem 4.3, we also consider the following cases:
(1): A and B are diagonalizable. When A and B are diagonalizable, i.e., A = V^{-1} D V and B = U^{-1} Λ U with D = diag(µ_1, ..., µ_n) and Λ = diag(λ_1, ..., λ_s), we obtain the convergence bound
(4.4) ‖C − A X_m^{mr} B‖_F ≤ ‖C − A X_0 B‖_F κ(V) κ(U) min_{p_m ∈ P_m, p_m(0,0)=1} max_{λ ∈ σ(B), µ ∈ σ(A)} |p_m(λ, µ)|.
Moreover, if we assume that B = I_s, then σ(B) = {1} and κ(U) = 1. In this case, we deduce that Theorem 4.3 coincides with the result on the Gl-GMRES method given in [3, Theorem 5].
(2): A and B are normal matrices. If A and B are normal matrices, then Theorem 4.3 holds. In addition, if B = I_s, then Theorem 4.3 reduces to Theorem 7 in [3].

Remark 4.5. Interesting related work was recently done by Bellalij et al. [3] for the matrix equation AX = C. They presented the worst-case convergence behavior of the Gl-GMRES method for diagonalizable matrices. Although all results presented in [3] are correct, as discussed in Remark 4.4, when B =

I_s, Theorem 4.3 leads to Theorems 5 and 7 in [3]. Furthermore, a comparison of our proof with that of [3] shows that Theorem 4.3 admits an easier proof of the worst-case convergence behavior of the Gl-GMRES method.

5. Numerical illustrations

The G-OR-L and G-MR-L methods are summarized in the following algorithm. It is important to notice that the computational requirements increase as the iterate m becomes large. To remedy this problem, we use the algorithm in a restarted mode, i.e., X_0 = X_m^{or} (X_0 = X_m^{mr}), where X_m^{or} (X_m^{mr}) is the last computed approximate solution obtained with the G-OR-L (G-MR-L) method.

Algorithm 2 The G-OR-L(m) and G-MR-L(m) methods
1: Choose X_0.
2: Compute R_0 = C − A X_0 B and set V_1 = R_0/‖R_0‖_F. Use the generalized global Lanczos algorithm to compute the basis {V_1, V_2, ..., V_m} of GK_m(A, R_0, B) and the matrix H_m.
3: G-OR-L: compute the approximate solution X_m^{or} = X_0 + V_m(α_m^{or} ⊗ I_s), where α_m^{or} solves equation (3.3).
4: If the approximation X_m^{or} is suitable then stop, else set X_0 = X_m^{or} and go to 2.
5: G-MR-L: compute the approximate solution X_m^{mr} = X_0 + V_m(α_m^{mr} ⊗ I_s), where α_m^{mr} solves equation (3.11).
6: If the approximation X_m^{mr} is suitable then stop, else set X_0 = X_m^{mr} and go to 2.

In this section, we present some numerical examples to illustrate the quality of the presented theoretical results. To do so, we use the matrices A_1(n) = tridiag(−1, 10, −1) and B_1(s) = tridiag(−1, 10, −1), together with two further test matrices A_2(n) and B_2(s), where n and s are the orders of the matrices A and B, respectively. Also, we use some matrices from the Matrix Market website. The entries of the matrix C in all examples are random values uniformly distributed on [0, 1], and X_0 = 0. All numerical tests were run in Matlab.

Example 5.1. In this example, we pursue two goals. First, we examine the influence of the condition numbers of the matrices A and B on the speed of convergence of the G-OR-L(m) and G-MR-L(m) methods.
Then, we study the behavior of the Frobenius norm of the residual of the G-OR-L(m) and G-MR-L(m) methods. As stopping criteria, we use ‖C − A X_m^{or} B‖_F ≤ 10⁻⁶ and ‖C − A X_m^{mr} B‖_F ≤ 10⁻⁶ for the G-OR-L(m) and G-MR-L(m) methods, respectively.
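The restarted scheme of Algorithm 2 can be prototyped compactly. The sketch below (NumPy, dense matrices, no breakdown handling; our own simplified illustration, not the authors' Matlab implementation) runs G-OR-L(m) on a small analogue of the test pair A_1, B_1:

```python
import numpy as np

def g_or_l_restarted(A, B, C, m=3, tol=1e-6, max_restarts=200):
    """Restarted G-OR-L(m) sketch in the spirit of Algorithm 2: m global
    Arnoldi/Lanczos steps per cycle, then restart from X_m^or."""
    n, s = C.shape
    X = np.zeros((n, s))
    for _ in range(max_restarts):
        R = C - A @ X @ B
        beta = np.linalg.norm(R, "fro")
        if beta <= tol:
            break
        Vs, Hbar = [R / beta], np.zeros((m + 1, m))
        for j in range(m):
            W = A @ Vs[j] @ B
            for i in range(j + 1):
                Hbar[i, j] = np.sum(W * Vs[i])
                W = W - Hbar[i, j] * Vs[i]
            Hbar[j + 1, j] = np.linalg.norm(W, "fro")
            Vs.append(W / Hbar[j + 1, j])      # assumes no breakdown
        # G-OR-L step (3.3): H_m alpha = ||R||_F e_1, then restart.
        alpha = np.linalg.solve(Hbar[:m, :m], beta * np.eye(m)[:, 0])
        X = X + sum(alpha[j] * Vs[j] for j in range(m))
    return X

# Small analogue of the test matrices A_1, B_1 = tridiag(-1, 10, -1).
n, s = 50, 20
A = 10.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = 10.0 * np.eye(s) - np.eye(s, k=1) - np.eye(s, k=-1)
C = np.random.default_rng(5).random((n, s))
X = g_or_l_restarted(A, B, C, m=3)
assert np.linalg.norm(C - A @ X @ B, "fro") <= 1e-6
```

As the analysis predicts for these well-conditioned matrices (κ(A) and κ(B) close to 1), very few restarts are needed to reach the 10⁻⁶ tolerance.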

Tables 1 and 2 report the final Frobenius norm of the residuals, the number of iterations and the CPU time required for convergence of each method. We note that the number of iterations refers to the number of restarts. As shown in Tables 1 and 2, the number of iterations increases when A or B has a very large condition number; if the condition numbers of A and B are small, the number of iterations decreases.

Table 1. Effectiveness of the G-OR-L and G-MR-L methods: m = 3.
(A_1, B_1), n = 2000, s = 100, κ(A) = 1.5: final residual norms of order 1e−08; number of iterations: 6 for both G-OR-L(3) and G-MR-L(3).
(A_2, B_2), n = 1000, s = 500, κ(A) = 3, κ(B) = 3: final residual norms of order 1e−07.

Table 2. Effectiveness of the G-OR-L and G-MR-L methods: m = 20.
(GR3030, B_1), n = 900, s = 10, κ(A) = 3.8e+02: final residual norms of order 1e−07.
(NOS4, B_1), n = 100, s = 10, κ(A) = 2.7e+03: final residual norms of order 1e−07.
(A_1, NOS5), n = 10, s = 468, κ(B) = 2.9e+04: final residual norms of order 1e−07.

In line with the aforementioned stopping criteria, we investigate this example further by considering the convergence behavior of the Frobenius norm of the residual at each iteration of the G-OR-L(m) and G-MR-L(m) methods, respectively.

The G-OR-L method. In Figures 1–5, the left plots show the Frobenius norm of the G-OR-L residuals and the right plots show the values of α_1 and α_2. As shown in Figures 1–3, α_1 and α_2 are small. Moreover, as expected from Remark 3.7, the Frobenius norm of the residuals associated with the matrices (A_1(2000), B_1(100)), (A_2(1000), B_2(500)) and (GR3030, B_1(10)) decreases. The figures associated with the matrices (NOS4, B_1(10)) and (B_1(10), NOS5), especially (B_1(10), NOS5), illustrate that α_1 and α_2 are large.
Therefore, we expect that the rate of change of the Frobenius norm of the residuals

associated with these matrices increases. As seen in Figures 4 and 5, the Frobenius norm of the residuals associated with the matrices (NOS4, B_1(10)) and (B_1(10), NOS5), especially (B_1(10), NOS5), oscillates considerably.

Figure 1. Illustration of ‖R_m^{or}‖_F associated with (A_1(2000), B_1(100)).
Figure 2. Illustration of ‖R_m^{or}‖_F associated with (A_2(1000), B_2(500)).
Figure 3. Illustration of ‖R_m^{or}‖_F associated with (GR3030, B_1(10)).
Figure 4. Illustration of ‖R_m^{or}‖_F associated with (NOS4, B_1(10)).

The G-MR-L method. It can be shown that the Frobenius norm of the residual at each step of the G-MR-L method satisfies ‖R_m^{mr}‖_F ≤ ‖R_0‖_F.

Figure 5. Illustration of ‖R_m^{or}‖_F associated with (B_1(10), NOS5).

Therefore, since we use the G-MR-L method in a restarted mode, we expect a monotonically decreasing behavior of the Frobenius norm of the residual. The plots of Figure 6 show that the Frobenius norm of the residual is indeed monotonically decreasing.

Figure 6. Illustration of ‖R_m^{mr}‖_F associated with the matrices (A_1(2000), B_1(100)), (A_2(1000), B_2(500)), (GR3030, B_1(10)), (NOS4, B_1(10)) and (B_1(10), NOS5).

We summarize what we have stated so far. As we have seen, the Frobenius norm of the residual of the G-OR-L(m) method may oscillate. In contrast, the Frobenius norm of the residual of the G-MR-L(m) method decreases monotonically. This contrast is one reason for the observed difference in the speed of convergence of the G-OR-L(m) and G-MR-L(m) methods.

Example 5.2. In this example, we study the behavior of the (A, B)-norm of the error of the G-OR-L(m) method. The test matrices are the same as in the previous example, and the iterations are stopped when the (A, B)-norm of the error is sufficiently small. In Table 3, we report the final values of the (A, B)-norm of the errors and the final upper bounds derived in Theorem 3.5. The obtained results indicate that these upper bounds are informative; in particular, the upper bound (UB.2) is sharper than the others. Finally, Figure 7 shows the values of the (A, B)-norm of the errors and the upper bound (UB.2) together with the number of iterations.

Table 3. The final values of ‖X − X_m^{or}‖_{(A,B)} and the quality of the upper bounds (UB.1)–(UB.4), for the pairs (A_1(2000), B_1(100)), (A_2(1000), B_2(500)), (GR3030, B_1(10)), (NOS4, B_1(10)) and (B_1(10), NOS5); the reported quantities range in order of magnitude between 1e−05 and 1e−07.

Figure 7. Illustration of ‖X − X_m^{or}‖_{(A,B)} and the values of the upper bound (UB.2).

6. Conclusion and further works

We have considered the convergence behavior of the Gl-FOM and Gl-GMRES methods for solving the matrix equation AXB = C, where A and B are SPD. More precisely, some new theoretical results on these methods have been established, such as computable exact expressions and upper bounds for the norm of the error and residual, and the worst-case convergence behavior of these methods. In particular, our upper bounds for the norm of the Gl-FOM error depend on the condition numbers of the matrices A and B and on information generated by the Gl-FOM method. In the numerical section, we have explored the convergence behavior of these methods; the numerical results show the efficiency of the theoretical results. The relationship between the norm of the Gl-GMRES residual, the spectral information of A and B, and the information generated by the Gl-GMRES method can be the subject of further investigation. Furthermore, the theoretical results presented in this paper can be used for the Gl-FOM and Gl-GMRES methods in order to solve generalized Sylvester equations with SPD coefficients.

References

[1] F. P. A. Beik, Theoretical results on the global GMRES method for solving generalized Sylvester matrix equations, Bull. Iranian Math. Soc. 40 (2014), no. 5.
[2] F. P. A. Beik and D. K. Salkuyeh, On the global Krylov subspace methods for solving general coupled matrix equations, Comput. Math. Appl. 62 (2011), no. 12.
[3] M. Bellalij, K. Jbilou and H. Sadok, New convergence results on the global GMRES method for diagonalizable matrices, J. Comput. Appl. Math. 219 (2008), no. 2.
[4] R. Bouyouli, K. Jbilou, R. Sadaka and H. Sadok, Convergence properties of some block Krylov subspace methods for multiple linear systems, J. Comput. Appl. Math. 196 (2006), no. 2.
[5] R. Bouyouli, G. Meurant, L. Smoch and H. Sadok, New results on the convergence of the conjugate gradient method, Numer. Linear Algebra Appl. 16 (2009), no. 3.
[6] J. B. Conway, A Course in Functional Analysis, Springer-Verlag, New York.
[7] D. S. Cvetković-Ilić, The reflexive solutions of the matrix equation AXB = C, Comput. Math. Appl. 51 (2006), no. 6-7.
[8] H. Dai, On the symmetric solutions of linear matrix equations, Linear Algebra Appl. 131 (1990) 1-7.
[9] M. Heyouni, The global Hessenberg and CMRH methods for linear systems with multiple right-hand sides, Numer. Algorithms 26 (2001), no. 4.
[10] M. Heyouni, Extended Arnoldi methods for large low-rank Sylvester matrix equations, Appl. Numer. Math. 60 (2010), no. 11.
[11] M. Heyouni and A. Essai, Matrix Krylov subspace methods for linear systems with multiple right-hand sides, Numer. Algorithms 40 (2005), no. 2.
[12] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge.
[13] G. X. Huang, F. Yin and K. Guo, An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB = C, J. Comput. Appl. Math. 212 (2008), no. 2.
[14] K. Jbilou, A. Messaoudi and H. Sadok, Global FOM and GMRES algorithms for matrix equations, Appl. Numer. Math. 31 (1999), no. 1.
[15] F. Kittaneh, Notes on some inequalities for Hilbert space operators, Publ. Res. Inst. Math. Sci. 24 (1988), no. 2.
[16] P. Lancaster and M. Tismenetsky, The Theory of Matrices with Applications, Academic Press, New York.
[17] J. Liesen and P. Tichý, Convergence analysis of Krylov subspace methods, GAMM Mitt. Ges. Angew. Math. Mech. 27 (2004), no. 2.
[18] Y. A. Peng, X. Hu and L. Zhang, An iteration method for the symmetric solutions and the optimal approximation solution of the matrix equation AXB = C, Appl. Math. Comput. 160 (2005), no. 3.
[19] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Press, New York.
[20] K. Zhang and C. Gu, Flexible global generalized Hessenberg methods for linear systems with multiple right-hand sides, J. Comput. Appl. Math. 263 (2014).
[21] K. Zhang and C. Gu, A global transpose-free method with quasi-minimal residual strategy for the Sylvester equations, Filomat 27 (2013), no. 8.

(Mahmood Mohseni Moghadam) Department of Mathematics and Computer Science, Shahid Bahonar University of Kerman, P.O. Box, Kerman, Iran
E-mail address: mohseni@uk.ac.ir

(Azim Rivaz) Department of Mathematics and Computer Science, Shahid Bahonar University of Kerman, P.O. Box, Kerman, Iran
E-mail address:

(Azita Tajaddini) Department of Mathematics and Computer Science, Shahid Bahonar University of Kerman, P.O. Box, Kerman, Iran
E-mail address:

(Farid Saberi Movahed) Department of Mathematics and Computer Science, Shahid Bahonar University of Kerman, P.O. Box, Kerman, Iran
E-mail address:


List Scheduling and LPT Oliver Braun (09/05/2017)

List Scheduling and LPT Oliver Braun (09/05/2017) List Scheduling and LPT Oliver Braun (09/05/207) We investigate the classical scheduling proble P ax where a set of n independent jobs has to be processed on 2 parallel and identical processors (achines)

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

DIFFERENTIAL EQUATIONS AND RECURSION RELATIONS FOR LAGUERRE FUNCTIONS ON SYMMETRIC CONES

DIFFERENTIAL EQUATIONS AND RECURSION RELATIONS FOR LAGUERRE FUNCTIONS ON SYMMETRIC CONES TRANSACTIONS OF THE AMERICAN MATHEMATICAL SOCIETY Volue 359, Nuber 7, July 2007, Pages 3239 3250 S 0002-9947(07)04062-7 Article electronically published on February 8, 2007 DIFFERENTIAL EQUATIONS AND RECURSION

More information

Fixed-to-Variable Length Distribution Matching

Fixed-to-Variable Length Distribution Matching Fixed-to-Variable Length Distribution Matching Rana Ali Ajad and Georg Böcherer Institute for Counications Engineering Technische Universität München, Gerany Eail: raa2463@gail.co,georg.boecherer@tu.de

More information

A SIMPLE METHOD FOR FINDING THE INVERSE MATRIX OF VANDERMONDE MATRIX. E. A. Rawashdeh. 1. Introduction

A SIMPLE METHOD FOR FINDING THE INVERSE MATRIX OF VANDERMONDE MATRIX. E. A. Rawashdeh. 1. Introduction MATEMATIČKI VESNIK MATEMATIQKI VESNIK Corrected proof Available online 40708 research paper originalni nauqni rad A SIMPLE METHOD FOR FINDING THE INVERSE MATRIX OF VANDERMONDE MATRIX E A Rawashdeh Abstract

More information

Lecture 21 Nov 18, 2015

Lecture 21 Nov 18, 2015 CS 388R: Randoized Algoriths Fall 05 Prof. Eric Price Lecture Nov 8, 05 Scribe: Chad Voegele, Arun Sai Overview In the last class, we defined the ters cut sparsifier and spectral sparsifier and introduced

More information

A1. Find all ordered pairs (a, b) of positive integers for which 1 a + 1 b = 3

A1. Find all ordered pairs (a, b) of positive integers for which 1 a + 1 b = 3 A. Find all ordered pairs a, b) of positive integers for which a + b = 3 08. Answer. The six ordered pairs are 009, 08), 08, 009), 009 337, 674) = 35043, 674), 009 346, 673) = 3584, 673), 674, 009 337)

More information

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization Recent Researches in Coputer Science Support Vector Machine Classification of Uncertain and Ibalanced data using Robust Optiization RAGHAV PAT, THEODORE B. TRAFALIS, KASH BARKER School of Industrial Engineering

More information

Lecture 9 November 23, 2015

Lecture 9 November 23, 2015 CSC244: Discrepancy Theory in Coputer Science Fall 25 Aleksandar Nikolov Lecture 9 Noveber 23, 25 Scribe: Nick Spooner Properties of γ 2 Recall that γ 2 (A) is defined for A R n as follows: γ 2 (A) = in{r(u)

More information

Polygonal Designs: Existence and Construction

Polygonal Designs: Existence and Construction Polygonal Designs: Existence and Construction John Hegean Departent of Matheatics, Stanford University, Stanford, CA 9405 Jeff Langford Departent of Matheatics, Drake University, Des Moines, IA 5011 G

More information

e-companion ONLY AVAILABLE IN ELECTRONIC FORM

e-companion ONLY AVAILABLE IN ELECTRONIC FORM OPERATIONS RESEARCH doi 10.1287/opre.1070.0427ec pp. ec1 ec5 e-copanion ONLY AVAILABLE IN ELECTRONIC FORM infors 07 INFORMS Electronic Copanion A Learning Approach for Interactive Marketing to a Custoer

More information

G G G G G. Spec k G. G Spec k G G. G G m G. G Spec k. Spec k

G G G G G. Spec k G. G Spec k G G. G G m G. G Spec k. Spec k 12 VICTORIA HOSKINS 3. Algebraic group actions and quotients In this section we consider group actions on algebraic varieties and also describe what type of quotients we would like to have for such group

More information

Compressive Distilled Sensing: Sparse Recovery Using Adaptivity in Compressive Measurements

Compressive Distilled Sensing: Sparse Recovery Using Adaptivity in Compressive Measurements 1 Copressive Distilled Sensing: Sparse Recovery Using Adaptivity in Copressive Measureents Jarvis D. Haupt 1 Richard G. Baraniuk 1 Rui M. Castro 2 and Robert D. Nowak 3 1 Dept. of Electrical and Coputer

More information

New Classes of Positive Semi-Definite Hankel Tensors

New Classes of Positive Semi-Definite Hankel Tensors Miniax Theory and its Applications Volue 017, No., 1 xxx New Classes of Positive Sei-Definite Hankel Tensors Qun Wang Dept. of Applied Matheatics, The Hong Kong Polytechnic University, Hung Ho, Kowloon,

More information

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds

More information

Reed-Muller Codes. m r inductive definition. Later, we shall explain how to construct Reed-Muller codes using the Kronecker product.

Reed-Muller Codes. m r inductive definition. Later, we shall explain how to construct Reed-Muller codes using the Kronecker product. Coding Theory Massoud Malek Reed-Muller Codes An iportant class of linear block codes rich in algebraic and geoetric structure is the class of Reed-Muller codes, which includes the Extended Haing code.

More information