An algorithm for symmetric generalized inverse eigenvalue problems


Linear Algebra and its Applications 296 (1999) 79–98

An algorithm for symmetric generalized inverse eigenvalue problems

Hua Dai
Department of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, People's Republic of China

Received 23 November 1998; accepted 26 April 1999
Submitted by P. Lancaster

Abstract

Using QR-like decomposition with column pivoting and least squares techniques, we propose a new and efficient algorithm for solving symmetric generalized inverse eigenvalue problems, and give its locally quadratic convergence analysis. We also present some numerical experiments which illustrate the behaviour of our algorithm. © 1999 Elsevier Science Inc. All rights reserved.

AMS classification: 65F15; 65H15

Keywords: Eigenvalue; Inverse problems; QR-like decomposition; Least squares; Gauss–Newton method

1. Introduction

Let A(c) and B(c) be real n×n twice continuously differentiable symmetric matrix-valued functions depending on c = (c_1, ..., c_n)^T ∈ R^n, and let B(c) be positive definite whenever c ∈ Ω, an open subset of R^n. The generalized inverse eigenvalue problem (GIEP) under consideration is as follows. (The research was supported by the National Natural Science Foundation of China and the Jiangsu Province Natural Science Foundation.)

GIEP. Given n real numbers λ_1^* ≤ λ_2^* ≤ ... ≤ λ_n^*, find c^* ∈ R^n such that the symmetric generalized eigenvalue problem A(c)x = λB(c)x has the prescribed eigenvalues λ_1^*, λ_2^*, ..., λ_n^*. (1)

The GIEP arises in various areas of application, such as the discrete analogue of inverse Sturm–Liouville problems [12] and structural design [13]. A special case of the GIEP is obtained when A(c) and B(c) are defined by

A(c) = A_0 + \sum_{j=1}^n c_j A_j, \quad B(c) = I,

where A_0, A_1, ..., A_n are real symmetric n×n matrices and I is the identity matrix. This problem is known as the algebraic inverse eigenvalue problem. There is a large literature on various aspects of the existence theory as well as numerical methods for the problem; see, for example, [1–4,10,14,17,18,20,23–25,27–30]. The well-known additive and multiplicative inverse eigenvalue problems [8,9] are also included in our formulation by taking

A(c) = A + diag(c_1, ..., c_n), \quad B(c) = I

and

A(c) = A, \quad B(c) = diag(c_1, ..., c_n),

respectively, where A is an n×n constant matrix; these problems have received considerable discussion, see, for example, [30] and the references contained therein. Recently, Dai and Lancaster [5] considered another special case of the GIEP, in which A(c) and B(c) are the affine families

A(c) = A_0 + \sum_{i=1}^n c_i A_i, \quad B(c) = B_0 + \sum_{i=1}^n c_i B_i,

where A_0, A_1, ..., A_n, B_0, B_1, ..., B_n are real symmetric n×n matrices and B(c) is positive definite whenever c ∈ Ω. Let λ_1(c) ≤ λ_2(c) ≤ ... ≤ λ_n(c) be the eigenvalues of the symmetric generalized eigenvalue problem A(c)x = λB(c)x. Then there is a well-defined map λ : Ω → R^n given by λ(c) = (λ_1(c), ..., λ_n(c))^T. The numerical algorithm analysed in [5] is Newton's method for solving the nonlinear system

λ(c) − λ^* = 0, (2)

where λ^* = (λ_1^*, ..., λ_n^*)^T. Each step in the numerical solution of the system (2) by Newton's method, however, involves the complete solution of the generalized eigenvalue problem A(c)x = λB(c)x.

Based on determinant evaluations originating with Lancaster [15] and Biegler-König [2], an approach to the GIEP has been studied in [7] as well, but it is not computationally attractive for real symmetric matrices. When λ_1^*, ..., λ_n^* include multiple eigenvalues,
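As a point of reference, the forward map λ(c) for the affine families above can be evaluated with a standard dense solver. The following is a minimal NumPy/SciPy sketch with toy data (our own illustration, not code from the paper; for simplicity B(c) = I, which is trivially positive definite). This is the kind of computation that a Newton step on (2) must perform once per iteration, and that the algorithm proposed in this paper avoids.

```python
import numpy as np
from scipy.linalg import eigh

def eigenvalue_map(c, A_mats, B_mats):
    """lambda(c) for A(c) = A0 + sum_i c_i A_i, B(c) = B0 + sum_i c_i B_i."""
    A = A_mats[0] + sum(ci * Ai for ci, Ai in zip(c, A_mats[1:]))
    B = B_mats[0] + sum(ci * Bi for ci, Bi in zip(c, B_mats[1:]))
    # generalized symmetric-definite solve; returns ascending eigenvalues
    return eigh(A, B, eigvals_only=True)

sym = lambda M: (M + M.T) / 2
rng = np.random.default_rng(0)
n = 3
A_mats = [sym(rng.standard_normal((n, n))) for _ in range(n + 1)]
B_mats = [np.eye(n)] + [np.zeros((n, n))] * n   # B(c) = I for simplicity
lam = eigenvalue_map(np.zeros(n), A_mats, B_mats)  # eigenvalues of A0
```

With B(c) = I and c = 0 the map simply returns the ordered eigenvalues of A_0.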

however, the eigenvalues λ_1(c), ..., λ_n(c) of the generalized eigenvalue problem A(c)x = λB(c)x are not, in general, differentiable [26] at a solution c^*. Furthermore, the eigenvectors are not unique, and they cannot generally be defined to be continuous functions of c at c^*. A modification of the GIEP has been considered in [5], but in the modified problem the number of given eigenvalues and their multiplicities must satisfy a certain condition.

In this paper, we propose a new, efficient and reliable algorithm for solving the GIEP, which consists of an extension of ideas developed in [6,19,20]. The problem formulation and our algorithm do not need to be changed when multiple eigenvalues are present: the algorithm is suitable for both the distinct and multiple eigenvalue cases, and no step of the iterative process involves solving the generalized eigenvalue problem A(c)x = λB(c)x.

The paper is organized as follows. In Section 2 we recall some necessary differentiability theory for the QR-like decomposition of a matrix depending on several parameters. In Section 3 a new algorithm based on QR-like decomposition with column pivoting and least squares techniques is described, and its locally quadratic convergence analysis is given. In Section 4 some numerical examples are presented to illustrate the behaviour of our algorithm.

For our considerations we shall need the following notation. A solution to the GIEP will always be denoted by c^*. ||·||_2 denotes the Euclidean vector norm or the induced spectral norm, and ||·||_F the Frobenius matrix norm. For an n×m matrix A = [a_1, ..., a_m], where a_i is the ith column vector of A, we define a vector col(A) by col(A) = [a_1^T, ..., a_m^T]^T, and the norm ||A||_w := max_{j=1,...,m} ||a_j||_2. The symbol ⊗ denotes the Kronecker product of matrices.
2. QR-like decomposition and differentiability

Following Li [19], we first define a QR-like decomposition of a real n×n matrix A with index m (1 ≤ m ≤ n) to be a factorization

A = QR, \quad R = \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix}, \qquad (3)

where Q ∈ R^{n×n} is orthogonal, R_{11} is (n−m)×(n−m) upper triangular, and R_{22} is m×m square. When m = 1, this is just a classical QR decomposition of A. The existence of a QR-like decomposition is obvious; in fact, we need only construct a ``partial'' QR decomposition, see [11], for example. In general, however, it is not unique, as the following theorem shows.

Theorem 2.1 (See [6,19]). Let A be an n×n matrix whose first n−m columns are linearly independent and let A = QR be a QR-like decomposition with index m. Then A = \hat Q \hat R is also a QR-like decomposition with index m if and only if

Q = \hat Q D, \quad R = D^T \hat R, \qquad (4)

where D = diag(D_{11}, D_{22}), D_{11} is an orthogonal diagonal matrix, and D_{22} is an m×m orthogonal matrix.

The following is a theorem on perturbation of the QR-like decomposition.

Theorem 2.2 (See [6]). Let A_1 ∈ R^{n×n} have its first n−m columns linearly independent and let A_1 = Q_1 R_1 be a QR-like decomposition with index m. Let A_2 ∈ R^{n×n} be any matrix satisfying ||A_1 − A_2||_2 < ε. Then, for sufficiently small ε, A_2 has a QR-like decomposition with index m, A_2 = Q_2 R_2, such that

||Q_1 − Q_2||_2 ≤ κ_1 ε, \quad ||R_1 − R_2||_2 ≤ κ_2 ε, \qquad (5)

where κ_1, κ_2 are constants independent of A_2.

Note that the linear independence hypothesis ensures that the R_{11} blocks of R and \hat R are nonsingular. In order to make the submatrix R_{11} of the QR-like decomposition nonsingular, we admit a permutation of the columns of the matrix A. The QR-like decomposition with column pivoting of A ∈ R^{n×n} may be expressed as

AP = QR, \qquad (6)

where P is an n×n permutation matrix and R has the form (3). If rank(A) = n−m, then the permutation matrix P can be chosen such that the first n−m columns of the matrix AP are linearly independent and the block upper triangular matrix R satisfies

|e_1^T R e_1| ≥ |e_2^T R e_2| ≥ ... ≥ |e_{n−m}^T R e_{n−m}| > 0, \quad R_{22} = 0. \qquad (7)

Now let A(c) = (a_{ij}(c)) ∈ R^{n×n} be a continuously differentiable matrix-valued function of c ∈ R^n. Here, the differentiability of A(c) with respect to c means that, for any c^0 ∈ R^n, we have

A(c) = A(c^0) + \sum_{j=1}^n \frac{\partial A(c^0)}{\partial c_j}(c_j − c_j^0) + o(||c − c^0||_2), \qquad (8)

where c = (c_1, ..., c_n)^T, c^0 = (c_1^0, ..., c_n^0)^T, and ∂A(c^0)/∂c_j = (∂a_{ij}(c)/∂c_j)|_{c=c^0}. Note that if A(c) is twice differentiable, then o(||c − c^0||_2) in (8) may be replaced by O(||c − c^0||_2^2).

The next result, which follows from Theorem 3.2 in [6], concerns the existence of a locally smooth QR-like decomposition of A(c).

Theorem 2.3. Let A(c) ∈ R^{n×n} be twice continuously differentiable at c^0 ∈ R^n and assume that rank(A(c^0)) ≥ n−m. Let P be a permutation matrix such that the first n−m columns of A(c^0)P are linearly independent, and let

A(c^0)P = Q^0 R^0, \quad R^0 = \begin{pmatrix} R_{11}^0 & R_{12}^0 \\ 0 & R_{22}^0 \end{pmatrix} \qquad (9)

be a QR-like decomposition of A(c^0)P with index m. Then there exists a neighbourhood N(c^0) of c^0 in R^n such that, for any c ∈ N(c^0), there is a QR-like decomposition of A(c)P with index m

A(c)P = Q(c)R(c), \quad R(c) = \begin{pmatrix} R_{11}(c) & R_{12}(c) \\ 0 & R_{22}(c) \end{pmatrix} \qquad (10)

with the following properties:
1. Q(c^0) = Q^0, R(c^0) = R^0.
2. All elements of Q(c) and R(c) are continuous in N(c^0).
3. R_{22}(c) and the diagonal elements r_{jj}(c), j = 1, ..., n−m, of R_{11}(c) are continuously differentiable at c^0. Moreover, if we write

(Q^0)^T \frac{\partial A(c^0)}{\partial c_j} P = \begin{pmatrix} A_{j,11} & A_{j,12} \\ A_{j,21} & A_{j,22} \end{pmatrix}, \quad A_{j,11} ∈ R^{(n−m)×(n−m)}, \quad j = 1, ..., n, \qquad (11)

then

R_{22}(c) = R_{22}^0 + \sum_{j=1}^n \left[ A_{j,22} − A_{j,21}(R_{11}^0)^{-1} R_{12}^0 \right](c_j − c_j^0) + O(||c − c^0||_2^2). \qquad (12)

3. A new algorithm

We first present a new formulation of the GIEP, which is an extension of ideas developed in [6,19,20]. For convenience, we assume that only the first eigenvalue is multiple, with multiplicity m, i.e.,

λ_1^* = ... = λ_m^* < λ_{m+1}^* < ... < λ_n^*. \qquad (13)

There is no difficulty in generalizing all our results to an arbitrary set of given eigenvalues.
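In floating point, the rank deficiency underlying this formulation can be observed with an ordinary column-pivoted QR routine: a full column-pivoted QR is in particular a QR-like decomposition with index m. The sketch below (NumPy/SciPy; `qr_like_residual` is our own illustrative name, not the paper's) forms A − λB for a trial eigenvalue λ; when λ is an eigenvalue of the pencil with multiplicity m, A − λB has rank n−m, pivoting pushes the dependent columns to the rear, and the trailing m×m block of R vanishes — the quantity the least squares formulation below drives to zero.

```python
import numpy as np
from scipy.linalg import qr

def qr_like_residual(A, B, lam, m):
    """Column-pivoted QR of A - lam*B; return the trailing m-by-m block R22.
    At an eigenvalue lam of multiplicity m, rank(A - lam*B) = n - m,
    so the pivoted factor R has a vanishing trailing block."""
    R = qr(A - lam * B, mode='r', pivoting=True)[0]
    return R[-m:, -m:]

n = 5
A = np.diag([2.0, 2.0, 2.0, 5.0, 7.0])   # eigenvalue 2 with multiplicity 3
B = np.eye(n)
R22 = qr_like_residual(A, B, 2.0, 3)     # 3x3 block, vanishes
r55 = qr_like_residual(A, B, 5.0, 1)     # scalar r_nn, vanishes
```

For a λ that is not an eigenvalue (say λ = 3 here), the returned block stays bounded away from zero.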

Compute the QR-like decomposition with column pivoting of A(c) − λ_1^* B(c) with index m,

(A(c) − λ_1^* B(c)) P_1(c) = Q_1(c) R_1(c), \quad R_1(c) = \begin{pmatrix} R_{11}(c) & R_{12}(c) \\ 0 & R_{22}(c) \end{pmatrix}, \qquad (14)

where R_{11}(c) ∈ R^{(n−m)×(n−m)}, and n−m QR decompositions with column pivoting of A(c) − λ_i^* B(c), i = m+1, ..., n,

(A(c) − λ_i^* B(c)) P_i(c) = Q_i(c) R_i(c), \quad R_i(c) = \begin{pmatrix} R_{11}^{(i)}(c) & R_{12}^{(i)}(c) \\ 0 & r_{nn}^{(i)}(c) \end{pmatrix}, \quad i = m+1, ..., n, \qquad (15)

where R_{11}^{(i)}(c) ∈ R^{(n−1)×(n−1)}. We assume the permutation matrices P_i(c) ∈ R^{n×n}, i = 1, m+1, ..., n, are constant matrices in a sufficiently small neighbourhood of c for each i. If column pivoting is performed such that

det R_{11}(c) ≠ 0, \quad |e_1^T R_{11}(c) e_1| ≥ |e_2^T R_{11}(c) e_2| ≥ ... ≥ |e_{n−m}^T R_{11}(c) e_{n−m}| ≥ ||R_{22}(c)||_w \qquad (16)

and

|e_1^T R_i(c) e_1| ≥ |e_2^T R_i(c) e_2| ≥ ... ≥ |e_n^T R_i(c) e_n| = |r_{nn}^{(i)}(c)|, \quad i = m+1, ..., n, \qquad (17)

then the symmetric generalized eigenvalue problem A(c)x = λB(c)x has the eigenvalues λ_1^*, λ_{m+1}^*, ..., λ_n^*, in which λ_1^* is a multiple eigenvalue with multiplicity m, if and only if

R_{22}(c) = 0, \quad r_{nn}^{(i)}(c) = 0, \quad i = m+1, ..., n. \qquad (18)

Thus we consider solving the following least squares problem: minimize F(c), with

F(c) = \frac{1}{2}\left( ||R_{22}(c)||_F^2 + \sum_{i=m+1}^n \big(r_{nn}^{(i)}(c)\big)^2 \right). \qquad (19)

In fact, the GIEP has a solution c^* precisely when the function F(c) defined by (19) attains the minimal value zero at c^*.

It is worth noting that F(c) may not be uniquely determined for any c because of the nonuniqueness of QR-like and QR decompositions. However, we

shall show that such nonuniqueness does not affect the effectiveness of our algorithm (see Theorem 3.1). If m = 1, i.e., the given eigenvalues λ_1^*, λ_2^*, ..., λ_n^* are distinct, we may consider solving the nonlinear system

\begin{pmatrix} r_{nn}^{(1)}(c) \\ r_{nn}^{(2)}(c) \\ \vdots \\ r_{nn}^{(n)}(c) \end{pmatrix} = 0. \qquad (20)

The formulation (20) has been studied by Li [20] in the case of the algebraic inverse eigenvalue problem. If m = 1 and a solution of the GIEP exists, (19) and (20) are equivalent.

Let c^k be sufficiently close to c^*. It follows from Theorem 2.3 that the matrix-valued function R_{22}(c) and the n−m functions r_{nn}^{(i)}(c), i = m+1, ..., n, are continuously differentiable at c^k, and can be expressed as

R_{22}(c) = R_{22}(c^k) + \sum_{j=1}^n \frac{\partial R_{22}(c^k)}{\partial c_j}(c_j − c_j^k) + O(||c − c^k||_2^2)
         = R_{22}(c^k) + \sum_{j=1}^n \left[ T_{j,22}(c^k) − T_{j,21}(c^k) R_{11}^{-1}(c^k) R_{12}(c^k) \right](c_j − c_j^k) + O(||c − c^k||_2^2) \qquad (21)

and

r_{nn}^{(i)}(c) = r_{nn}^{(i)}(c^k) + \sum_{j=1}^n \frac{\partial r_{nn}^{(i)}(c^k)}{\partial c_j}(c_j − c_j^k) + O(||c − c^k||_2^2)
             = r_{nn}^{(i)}(c^k) + \sum_{j=1}^n \left[ t_{j,22}^{(i)}(c^k) − t_{j,21}^{(i)}(c^k) \big(R_{11}^{(i)}(c^k)\big)^{-1} R_{12}^{(i)}(c^k) \right](c_j − c_j^k) + O(||c − c^k||_2^2), \quad i = m+1, ..., n, \qquad (22)

where

Q_1^T(c)\left( \frac{\partial A(c)}{\partial c_j} − λ_1^* \frac{\partial B(c)}{\partial c_j} \right) P_1(c) = \begin{pmatrix} T_{j,11}(c) & T_{j,12}(c) \\ T_{j,21}(c) & T_{j,22}(c) \end{pmatrix}, \quad T_{j,11}(c) ∈ R^{(n−m)×(n−m)},

Q_i^T(c)\left( \frac{\partial A(c)}{\partial c_j} − λ_i^* \frac{\partial B(c)}{\partial c_j} \right) P_i(c) = \begin{pmatrix} T_{j,11}^{(i)}(c) & t_{j,12}^{(i)}(c) \\ t_{j,21}^{(i)}(c) & t_{j,22}^{(i)}(c) \end{pmatrix}, \quad T_{j,11}^{(i)}(c) ∈ R^{(n−1)×(n−1)}. \qquad (23)
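The expansions (21)–(23) can be checked numerically. The sketch below is our own illustration (B(c) = I, random symmetric A_j; `rnn_and_grad` is an invented name): for m = 1 it evaluates r_nn together with the gradient formula t_{j,22} − t_{j,21} R_{11}^{-1} R_{12}, and compares against a central difference of r_nn^2 — squaring removes the sign ambiguity of the QR factor noted in Theorem 2.1.

```python
import numpy as np
from scipy.linalg import qr

def rnn_and_grad(c, A_mats, lam):
    """r_nn of a column-pivoted QR of A(c) - lam*I and its gradient per
    (22)-(23): d r_nn / d c_j = t_{j,22} - t_{j,21} R11^{-1} R12,
    where [T_j] = Q^T (dA/dc_j) P.  Here A(c) = A0 + sum_j c_j A_j."""
    n = len(c)
    A = A_mats[0] + sum(cj * Aj for cj, Aj in zip(c, A_mats[1:]))
    Q, R, piv = qr(A - lam * np.eye(n), pivoting=True)
    w = np.linalg.solve(R[:-1, :-1], R[:-1, -1])      # R11^{-1} R12
    grad = np.empty(n)
    for j in range(n):
        T = Q.T @ A_mats[j + 1][:, piv]               # Q^T (dA/dc_j) P
        grad[j] = T[-1, -1] - T[-1, :-1] @ w
    return R[-1, -1], grad

sym = lambda M: (M + M.T) / 2
rng = np.random.default_rng(1)
n = 4
A_mats = [sym(rng.standard_normal((n, n))) for _ in range(n + 1)]
c = rng.standard_normal(n)
lam = 0.3  # a generic shift, assumed not an eigenvalue of A(c)
r, g = rnn_and_grad(c, A_mats, lam)

h = 1e-6
fd = np.empty(n)
for j in range(n):
    e = np.zeros(n); e[j] = h
    rp, _ = rnn_and_grad(c + e, A_mats, lam)
    rm, _ = rnn_and_grad(c - e, A_mats, lam)
    fd[j] = (rp**2 - rm**2) / (4 * h)   # approximates r_nn * d r_nn / d c_j
```

The products r · g_j agree with the finite differences to discretization accuracy, which is consistent with property 3 of Theorem 2.3.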

Let

f(c) = \begin{pmatrix} col(R_{22}(c)) \\ r_{nn}^{(m+1)}(c) \\ \vdots \\ r_{nn}^{(n)}(c) \end{pmatrix}. \qquad (24)

Then

F(c) = \frac{1}{2} f^T(c) f(c). \qquad (25)

We apply the Gauss–Newton method (see [21]) to solve the least squares problem (19). By use of (21) and (22), one step of the Gauss–Newton method for the solution of (19) has the following form

J_f^T(c^k) J_f(c^k) (c^{k+1} − c^k) = −J_f^T(c^k) f(c^k), \qquad (26)

where

J_f(c) = \begin{pmatrix} col\big(\partial R_{22}(c)/\partial c_1\big) & \cdots & col\big(\partial R_{22}(c)/\partial c_n\big) \\ \partial r_{nn}^{(m+1)}(c)/\partial c_1 & \cdots & \partial r_{nn}^{(m+1)}(c)/\partial c_n \\ \vdots & & \vdots \\ \partial r_{nn}^{(n)}(c)/\partial c_1 & \cdots & \partial r_{nn}^{(n)}(c)/\partial c_n \end{pmatrix} \qquad (27)

with

\frac{\partial R_{22}(c)}{\partial c_j} = T_{j,22}(c) − T_{j,21}(c) R_{11}^{-1}(c) R_{12}(c), \quad \frac{\partial r_{nn}^{(i)}(c)}{\partial c_j} = t_{j,22}^{(i)}(c) − t_{j,21}^{(i)}(c) \big(R_{11}^{(i)}(c)\big)^{-1} R_{12}^{(i)}(c), \quad i = m+1, ..., n. \qquad (28)

Thus the new method for solving the GIEP may be summarized as follows.

Algorithm 3.1.
1. Choose an initial approximation c^0 to c^*, and for k = 0, 1, 2, ...
2. Compute A(c^k) − λ_i^* B(c^k), i = 1, m+1, ..., n, and ∂A(c^k)/∂c_j − λ_i^* ∂B(c^k)/∂c_j, i = 1, m+1, ..., n, j = 1, ..., n.
3. Compute the QR-like decomposition with column pivoting of A(c^k) − λ_1^* B(c^k) with index m:

(A(c^k) − λ_1^* B(c^k)) P_1(c^k) = Q_1(c^k) R_1(c^k), \quad R_1(c^k) = \begin{pmatrix} R_{11}(c^k) & R_{12}(c^k) \\ 0 & R_{22}(c^k) \end{pmatrix},

and the n−m QR decompositions with column pivoting of A(c^k) − λ_i^* B(c^k), i = m+1, ..., n:

(A(c^k) − λ_i^* B(c^k)) P_i(c^k) = Q_i(c^k) R_i(c^k), \quad R_i(c^k) = \begin{pmatrix} R_{11}^{(i)}(c^k) & R_{12}^{(i)}(c^k) \\ 0 & r_{nn}^{(i)}(c^k) \end{pmatrix}.

4. If \sqrt{ ||R_{22}(c^k)||_F^2 + \sum_{i=m+1}^n \big(r_{nn}^{(i)}(c^k)\big)^2 } is small enough, stop; otherwise continue.
5. Form f(c^k) and J_f(c^k) using (24) and (27).
6. Find c^{k+1} by solving the linear system (26).
7. Go to 2.

We first compare the computational requirements of Algorithm 3.1 and the algorithms in [5]. Since Steps 2 and 6 are indispensable in all the algorithms for solving the GIEP, our comparison does not include their cost. It is well known that the QR decomposition with column pivoting of each A(c^k) − λ_i^* B(c^k) requires about (2/3)n^3 flops, while the QR-like decomposition with column pivoting of A(c^k) − λ_1^* B(c^k) with index m requires about (2/3)n^2(n−m) flops (see, for example, [11]). Therefore Step 3 requires approximately (2/3)(n^3 + n^2)(n−m) flops. It is easy to verify that Step 5 requires approximately n^4 flops. Thus Algorithm 3.1 requires approximately n^4 + (2/3)(n^3 + n^2)(n−m) flops per iteration, while the algorithms in [5] require only about n^4 + 8n^3 flops per iteration. However, our numerical experiments showed that Algorithm 3.1 generally took fewer iterations than the algorithms in [5]. In the next section we comment on the numerical behaviour of the algorithms.

Now we prove that the iterates c^{k+1} generated by Algorithm 3.1 do not vary with different QR-like decompositions of (A(c^k) − λ_1^* B(c^k)) P_1(c^k) and different QR decompositions of (A(c^k) − λ_i^* B(c^k)) P_i(c^k), i = m+1, ..., n.
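For the distinct-eigenvalue case (m = 1) and the algebraic inverse eigenvalue problem (B(c) = I), Steps 2–6 of Algorithm 3.1 collapse to a few lines. The following NumPy/SciPy sketch is our own illustrative reduction with toy data, not the paper's code; it assembles f(c) and J_f(c) from the closed form (28) and takes Gauss–Newton steps (26).

```python
import numpy as np
from scipy.linalg import qr

def residual_and_jacobian(c, A_mats, lam_targets):
    """f(c) and J_f(c) of Algorithm 3.1 for m = 1, B(c) = I:
    f_i = r_nn of a column-pivoted QR of A(c) - lam_i* I, and
    J[i, j] = t_{j,22} - t_{j,21} R11^{-1} R12 with [T_j] = Q^T A_j P."""
    n = len(c)
    A = A_mats[0] + sum(cj * Aj for cj, Aj in zip(c, A_mats[1:]))
    f, J = np.empty(n), np.empty((n, n))
    for i, lam in enumerate(lam_targets):
        Q, R, piv = qr(A - lam * np.eye(n), pivoting=True)
        f[i] = R[-1, -1]
        w = np.linalg.solve(R[:-1, :-1], R[:-1, -1])   # R11^{-1} R12
        for j in range(n):
            T = Q.T @ A_mats[j + 1][:, piv]            # Q^T (dA/dc_j) P
            J[i, j] = T[-1, -1] - T[-1, :-1] @ w
    return f, J

def gauss_newton_step(c, A_mats, lam_targets):
    # One step of (26); with m = 1 the system is square, and the
    # least-squares solve reduces to a linear solve when J_f is nonsingular.
    f, J = residual_and_jacobian(c, A_mats, lam_targets)
    return c - np.linalg.lstsq(J, f, rcond=None)[0]

# toy additive IEP: A(c) = A0 + diag(c), exact solution c* = 0
sym = lambda M: (M + M.T) / 2
rng = np.random.default_rng(0)
n = 4
A0 = np.diag([1.0, 2.0, 4.0, 7.0]) + 0.05 * sym(rng.standard_normal((n, n)))
A_mats = [A0] + [np.outer(e, e) for e in np.eye(n)]   # A_j = e_j e_j^T
targets = np.linalg.eigvalsh(A0)                      # eigenvalues at c* = 0
c = 0.01 * np.ones(n)                                 # start near c*
for _ in range(8):
    c = gauss_newton_step(c, A_mats, targets)
```

Started close to the solution, the residual norm collapses at the locally quadratic rate established in Theorem 3.2 below.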

Theorem 3.1. In Algorithm 3.1, for any fixed k, suppose

(A(c^k) − λ_1^* B(c^k)) P_1(c^k) = Q_1(c^k) R_1(c^k) = \tilde Q_1(c^k) \tilde R_1(c^k), \quad \tilde R_1(c^k) = \begin{pmatrix} \tilde R_{11}(c^k) & \tilde R_{12}(c^k) \\ 0 & \tilde R_{22}(c^k) \end{pmatrix} \qquad (29)

are two (different) QR-like decompositions with column pivoting of A(c^k) − λ_1^* B(c^k) with index m, and

(A(c^k) − λ_i^* B(c^k)) P_i(c^k) = Q_i(c^k) R_i(c^k) = \tilde Q_i(c^k) \tilde R_i(c^k), \quad \tilde R_i(c^k) = \begin{pmatrix} \tilde R_{11}^{(i)}(c^k) & \tilde R_{12}^{(i)}(c^k) \\ 0 & \tilde r_{nn}^{(i)}(c^k) \end{pmatrix}, \quad i = m+1, ..., n, \qquad (30)

are two (different) QR decompositions with column pivoting of A(c^k) − λ_i^* B(c^k), i = m+1, ..., n, and let J_f(c^k), f(c^k) and \tilde J_f(c^k), \tilde f(c^k) be obtained in Step 5 of Algorithm 3.1 corresponding to the two different decompositions of (29) and (30). Then

J_f^T(c^k) J_f(c^k) = \tilde J_f^T(c^k) \tilde J_f(c^k), \qquad (31)
J_f^T(c^k) f(c^k) = \tilde J_f^T(c^k) \tilde f(c^k). \qquad (32)

Proof. It follows from Theorem 2.1 that there exist a partitioned orthogonal matrix D = diag(D_{11}, D_{22}), where D_{11} is an orthogonal diagonal matrix and D_{22} is an m×m orthogonal matrix, and n−m orthogonal diagonal matrices D_i = diag(d_1^{(i)}, ..., d_n^{(i)}) with d_j^{(i)} = ±1, i = m+1, ..., n, such that

Q_1(c^k) = \tilde Q_1(c^k) D, \quad R_1(c^k) = D^T \tilde R_1(c^k), \quad Q_i(c^k) = \tilde Q_i(c^k) D_i, \quad R_i(c^k) = D_i \tilde R_i(c^k), \quad i = m+1, ..., n. \qquad (33)

By use of (23), (28) and (33), we have

R_{22}(c^k) = D_{22}^T \tilde R_{22}(c^k), \quad r_{nn}^{(i)}(c^k) = d_n^{(i)} \tilde r_{nn}^{(i)}(c^k),
\frac{\partial R_{22}(c^k)}{\partial c_j} = D_{22}^T \frac{\partial \tilde R_{22}(c^k)}{\partial c_j}, \quad j = 1, ..., n, \quad \frac{\partial r_{nn}^{(i)}(c^k)}{\partial c_j} = d_n^{(i)} \frac{\partial \tilde r_{nn}^{(i)}(c^k)}{\partial c_j}, \quad i = m+1, ..., n. \qquad (34)

From (34) and the properties of the Kronecker product (see [16]), we obtain

col(R_{22}(c^k)) = (I ⊗ D_{22}^T) col(\tilde R_{22}(c^k)), \quad col\left( \frac{\partial R_{22}(c^k)}{\partial c_j} \right) = (I ⊗ D_{22}^T) col\left( \frac{\partial \tilde R_{22}(c^k)}{\partial c_j} \right), \quad j = 1, ..., n. \qquad (35)

From (24), (27), (34) and (35), the matrices J_f(c^k), \tilde J_f(c^k) and the vectors f(c^k), \tilde f(c^k) obtained in Algorithm 3.1 satisfy

J_f(c^k) = diag(I ⊗ D_{22}^T, d_n^{(m+1)}, ..., d_n^{(n)}) \tilde J_f(c^k), \qquad (36)
f(c^k) = diag(I ⊗ D_{22}^T, d_n^{(m+1)}, ..., d_n^{(n)}) \tilde f(c^k). \qquad (37)

Hence (31) and (32) hold.

Although QR-like and QR decompositions are not unique, Theorem 3.1 shows that the iterates c^{k+1} generated by Algorithm 3.1 do not depend on the decompositions used.

Theorem 3.2. Suppose that the GIEP has a solution c^*, and that in Algorithm 3.1 P_i(c^k) = P_i(c^*), i = 1, m+1, ..., n, are independent of k when ||c^* − c^k||_2 is sufficiently small. Assume also that J_f(c^*) ∈ R^{(m^2+n−m)×n}, corresponding to a QR-like decomposition of (A(c^*) − λ_1^* B(c^*)) P_1(c^*) with index m and to QR decompositions of (A(c^*) − λ_i^* B(c^*)) P_i(c^*), i = m+1, ..., n, is of full rank. Then Algorithm 3.1 is locally quadratically convergent.

Proof. First form the QR-like decomposition of (A(c^*) − λ_1^* B(c^*)) P_1(c^*) with index m and the n−m QR decompositions of (A(c^*) − λ_i^* B(c^*)) P_i(c^*), i = m+1, ..., n:

(A(c^*) − λ_1^* B(c^*)) P_1(c^*) = Q_1(c^*) R_1(c^*), \quad (A(c^*) − λ_i^* B(c^*)) P_i(c^*) = Q_i(c^*) R_i(c^*), \quad i = m+1, ..., n. \qquad (38)

Note that the matrix J_f(c^*) corresponding to the decompositions (38) has, by assumption, full rank, so that J_f^T(c^*) J_f(c^*) is invertible. Assuming that ||c^* − c^k||_2 is sufficiently small, we can form a QR-like decomposition of (A(c^k) − λ_1^* B(c^k)) P_1(c^*) with index m and n−m QR decompositions of (A(c^k) − λ_i^* B(c^k)) P_i(c^*), i = m+1, ..., n:

(A(c^k) − λ_1^* B(c^k)) P_1(c^*) = \tilde Q_1(c^k) \tilde R_1(c^k), \quad (A(c^k) − λ_i^* B(c^k)) P_i(c^*) = \tilde Q_i(c^k) \tilde R_i(c^k), \quad i = m+1, ..., n. \qquad (39)

It follows from Theorem 2.2 that

||Q_i(c^*) − \tilde Q_i(c^k)||_2 ≤ κ_1^{(i)} ε, \quad ||R_i(c^*) − \tilde R_i(c^k)||_2 ≤ κ_2^{(i)} ε, \quad i = 1, m+1, ..., n, \qquad (40)

where

ε = \max_{i=1,m+1,...,n} \left\{ \big\| \left[ (A(c^*) − A(c^k)) − λ_i^* (B(c^*) − B(c^k)) \right] P_i(c^*) \big\|_2 \right\}.

Corresponding to the decompositions (39), we obtain a matrix \tilde J_f(c^k) ∈ R^{(m^2+n−m)×n}. From the definition of J_f(c^*) and (40) we know that ||J_f(c^*) − \tilde J_f(c^k)||_2 is sufficiently small, and so is ||J_f^T(c^*) J_f(c^*) − \tilde J_f^T(c^k) \tilde J_f(c^k)||_2, when c^k is sufficiently close to c^*. Therefore \tilde J_f^T(c^k) \tilde J_f(c^k) is invertible, \tilde J_f(c^k) has full rank, and ||\tilde J_f(c^k)||_2 is bounded.

The QR-like decomposition and n−m QR decompositions obtained in Algorithm 3.1 at c^k are not necessarily (39). Write them

(A(c^k) − λ_1^* B(c^k)) P_1(c^*) = Q_1(c^k) R_1(c^k), \quad (A(c^k) − λ_i^* B(c^k)) P_i(c^*) = Q_i(c^k) R_i(c^k), \quad i = m+1, ..., n. \qquad (41)

It follows from Theorem 3.1 and (36) that J_f(c^k) corresponding to the decompositions (41) also has full rank, ||J_f^T(c^*) J_f(c^*) − J_f^T(c^k) J_f(c^k)||_2 is sufficiently small, and ||J_f(c^k)||_2 is bounded if ||c^* − c^k||_2 is small enough. Using the perturbation theory for the inversion of a matrix (see, for example, [22]), we have

\left\| \left[ J_f^T(c^k) J_f(c^k) \right]^{-1} \right\|_2 ≤ \left\| \left[ J_f^T(c^*) J_f(c^*) \right]^{-1} \right\|_2 + ω(ε) \qquad (42)

for sufficiently small ||c^* − c^k||_2, where ω(ε) ≥ 0 is a continuous function of ε and ω(0) = 0.

Now we use Theorem 2.3 to extend smoothly the decompositions (41) to a neighbourhood of c^k, which may be assumed to include c^*. Then, abbreviating (21) and (22),

R_{22}(c^*) = R_{22}(c^k) + \sum_{j=1}^n \frac{\partial R_{22}(c^k)}{\partial c_j}(c_j^* − c_j^k) + O(||c^* − c^k||_2^2),
r_{nn}^{(i)}(c^*) = r_{nn}^{(i)}(c^k) + \sum_{j=1}^n \frac{\partial r_{nn}^{(i)}(c^k)}{\partial c_j}(c_j^* − c_j^k) + O(||c^* − c^k||_2^2).

But R_{22}(c^*) = 0 and r_{nn}^{(i)}(c^*) = 0, i = m+1, ..., n, and we have

f(c^k) + J_f(c^k)(c^* − c^k) = O(||c^* − c^k||_2^2).

Since ||J_f(c^k)||_2 is bounded, then

J_f^T(c^k) J_f(c^k)(c^* − c^k) = −J_f^T(c^k) f(c^k) + O(||c^* − c^k||_2^2). \qquad (43)

Comparing (43) with Eq. (26) defining c^{k+1}, we have

J_f^T(c^k) J_f(c^k)(c^* − c^{k+1}) = O(||c^* − c^k||_2^2).

It follows from this and (42) that

||c^* − c^{k+1}||_2 = O(||c^* − c^k||_2^2),

as required.

4. Numerical experiments

We now present some of our numerical experiments with Algorithm 3.1, and also give a numerical comparison between Algorithm 3.1 and the algorithms in [5] for our examples. The following tests were made on a SUN workstation; double precision arithmetic was used throughout. The starting points were chosen close to the solution, so that few iterations were required for convergence. We were interested in verifying that locally quadratic convergence takes place in practice, in both the distinct and multiple eigenvalue cases. The iterations were stopped when the norm ||f(c^k)||_2 was less than 10^{-9}. For convenience, all vectors are written as row vectors, and when specifying a symmetric matrix we write only its upper triangular part.

Example 4.1. This is a generalized inverse eigenvalue problem with distinct eigenvalues. Let n = 5, let A_0 and B_0 be fixed diagonal matrices, let A_1 = B_1 = I, and let A_2, ..., A_5 and B_2, ..., B_5 be fixed real symmetric 5×5 matrices, with

A(c) = A_0 + \sum_{i=1}^n c_i A_i, \quad B(c) = B_0 + \sum_{i=1}^n c_i B_i.

Five distinct eigenvalues are prescribed. With a starting point c^0 close to the solution, Algorithm 3.1 converges to a solution c^* = (1, 1, 1, 1, 1), and the results are displayed in Table 1.

[Table 1 reports, for each iteration k, ||c^* − c^k||_2 and ||f(c^k)||_2 for Algorithm 3.1, together with ||c^* − c^k||_2 and ||λ^* − λ(c^k)||_2 for Algorithm 2.1 in [5].]

Example 4.2. This is a generalized inverse eigenvalue problem with multiple eigenvalues. Let n = 6,

with A_0 a fixed real symmetric 6×6 matrix, B_0 a fixed diagonal matrix, fixed vectors u_1, ..., u_6 ∈ R^6, and a fixed upper triangular array V = (v_{k,j}), and define

A_k = u_k u_k^T, \quad B_k = \sum_{j=k+1}^6 v_{k,j}(e_k e_j^T + e_j e_k^T), \quad k = 1, ..., 6,

A(c) = A_0 + \sum_{i=1}^6 c_i A_i, \quad B(c) = B_0 + \sum_{i=1}^6 c_i B_i.

The eigenvalues are prescribed to be λ^* = (1.5, 1.5, 1.5, 3.0, 9.0, 33.0), so that λ_1^* = 1.5 has multiplicity m = 3. With a starting point c^0 close to the solution, Algorithm 3.1 and Algorithm 2.1 in [5] converge to the locally unique solution c^* = (3.0, 4.0, 2.0, 3.0, 1.0, 7.0). Choosing only the target eigenvalues λ^* = (1.5, 1.5, 1.5) and using the same starting point c^0, Algorithm 5.1 in [5] also converges to the same solution c^*. Table 2 displays the residuals.

Example 4.3. This is also an example with multiple eigenvalues. Let n = 8, let A_0 be a fixed real symmetric 8×8 matrix with entries a_{kj}, let B_0 be a fixed diagonal matrix, let B_1 = I, and from a fixed array of coefficients b_{k,j} define

A_k = \sum_{j=1}^{k-1} a_{kj}(e_k e_j^T + e_j e_k^T) + a_{kk} e_k e_k^T, \quad k = 1, ..., 8,

B_k = \sum_{j=k+1}^8 b_{k,j}(e_k e_j^T + e_j e_k^T), \quad k = 2, ..., 8,

[Table 2 reports, for each iteration k, ||c^* − c^k||_2 and ||f(c^k)||_2 for Algorithm 3.1, together with ||c^* − c^k||_2 and ||λ^* − λ(c^k)||_2 for Algorithms 2.1 and 5.1 in [5], for Example 4.2. Table 3 reports the same quantities for Example 4.3. In both examples all three methods exhibit the expected quadratic decrease of the errors and residuals.]

A(c) = A_0 + \sum_{i=1}^n c_i A_i, \quad B(c) = B_0 + \sum_{i=1}^n c_i B_i.

The given eigenvalues are λ^* = (2, 2, 2, 3.0, 10.0, 18.0, 36.0, 723.0), so that λ_1^* = 2 has multiplicity 3. With a starting point c^0 close to the solution, Algorithm 3.1 and Algorithm 2.1 in [5] converge to the exact solution c^* = (1, 1, 1, 1, 1, 1, 1, 1). Specifying one eigenvalue of multiplicity 3 and two distinct eigenvalues, i.e. λ^* = (2, 2, 2, 3.0, 10.0), and using the same starting point c^0, the solution found by Algorithm 5.1 in [5] is precisely c^*. The results are displayed in Table 3.

Example 4.4. In this example we consider the usefulness of the GIEP in structural design. Fig. 1 shows a 10-bar truss structure with Young's modulus E = 6.95 × 10^{10} N/m^2, weight density 2650 kg/m^3, bar length l = 1 m, acceleration of gravity g = 9.8 m/s^2, and a nonstructural mass of 425 kg at all nodes. The design parameters are the areas of cross section of the bars. Since the number of design parameters exceeds the order of the global stiffness and mass matrices by two, the areas of cross section of the 7th and 8th bars are fixed. The global stiffness and mass matrices of the structure can be expressed, respectively, as

[Fig. 1. 10-bar truss structure.]

[Table 4 reports, for each iteration k, ||c^* − c^k||_2 and ||f(c^k)||_2 for Algorithm 3.1, together with ||c^* − c^k||_2 and ||λ^* − λ(c^k)||_2 for Algorithm 2.1 in [5].]

A(c) = A_0 + \sum_{i=1}^8 c_i A_i, \quad B(c) = B_0 + \sum_{i=1}^8 c_i B_i,

where A_0, B_0, A_i and B_i (i = 1, ..., 8) are 8×8 symmetric matrices, and c_1, ..., c_8 are the areas of cross section of bars 1–6, 9 and 10, respectively. The frequencies of the structure are prescribed to be ω_j = 5j (j = 1, ..., 8), i.e., the given eigenvalues are λ_j^* = (2πω_j)^2 (j = 1, ..., 8). With a starting point c^0 close to the solution, Algorithm 3.1 and Algorithm 2.1 in [5] converge to a solution c^*. The results are displayed in Table 4.

These examples and our other numerical experiments with Algorithm 3.1 indicate that quadratic convergence indeed occurs in practice. We observed in most of our tests that Algorithm 3.1 took fewer iterations than the algorithms in [5].

Acknowledgements

The author would like to thank Professor Peter Lancaster for his careful reading of a preliminary version of this paper. The author is also indebted to the referees for their comments and suggestions.

References

[1] F.W. Biegler-König, Sufficient conditions for the solubility of inverse eigenvalue problems, Linear Algebra Appl. 40 (1981) 89–100.
[2] F.W. Biegler-König, A Newton iteration process for inverse eigenvalue problems, Numer. Math. 37 (1981) 349–354.
[3] Z. Bohte, Numerical solution of the inverse algebraic eigenvalue problem, Comput. J. 10 (1968) 385–388.

[4] H. Dai, Sufficient condition for the solubility of an algebraic inverse eigenvalue problem (in Chinese), Math. Numer. Sinica 11 (1989) 333–336.
[5] H. Dai, P. Lancaster, Newton's method for a generalized inverse eigenvalue problem, Numer. Linear Algebra Appl. 4 (1997) 1–21.
[6] H. Dai, P. Lancaster, Numerical methods for finding multiple eigenvalues of matrices depending on parameters, Numer. Math. 76 (1997) 189–208.
[7] H. Dai, Numerical methods for solving generalized inverse eigenvalue problems (in Chinese), J. Nanjing University of Aeronautics & Astronautics, to appear.
[8] A.C. Downing, A.S. Householder, Some inverse characteristic value problems, J. Assoc. Comput. Mach. 3 (1956) 203–207.
[9] S. Friedland, Inverse eigenvalue problems, Linear Algebra Appl. 17 (1977) 15–51.
[10] S. Friedland, J. Nocedal, M.L. Overton, The formulation and analysis of numerical methods for inverse eigenvalue problems, SIAM J. Numer. Anal. 24 (1987) 634–667.
[11] G.H. Golub, C.F. Van Loan, Matrix Computations, 2nd ed., Johns Hopkins University Press, Baltimore, MD, 1989.
[12] O. Hald, On discrete and numerical Sturm–Liouville problems, Ph.D. Dissertation, New York University, New York, 1972.
[13] K.T. Joseph, Inverse eigenvalue problem in structural design, AIAA J. 30 (1992) 2890–2896.
[14] V.N. Kublanovskaya, On an approach to the solution of the inverse eigenvalue problem, Zap. Naucn. Sem. Leningrad. Otdel. Mat. Inst. V.A. Steklova Akad. Nauk SSSR (1970) 138–149.
[15] P. Lancaster, Algorithms for lambda-matrices, Numer. Math. 6 (1964) 388–394.
[16] P. Lancaster, M. Tismenetsky, The Theory of Matrices with Applications, Academic Press, New York, 1985.
[17] L.L. Li, Some sufficient conditions for the solvability of inverse eigenvalue problems, Linear Algebra Appl. 148 (1991) 225–236.
[18] L.L. Li, Sufficient conditions for the solvability of algebraic inverse eigenvalue problems, Linear Algebra Appl. 221 (1995) 117–129.
[19] R.C. Li, Compute multiply nonlinear eigenvalues, J. Comput. Math. 10 (1992) 1–20.
[20] R.C. Li, Algorithms for inverse eigenvalue problems, J. Comput. Math. 10 (1992) 97–111.
[21] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[22] G.W. Stewart, J.-G. Sun, Matrix Perturbation Theory, Academic Press, Boston, 1990.
[23] J.-G. Sun, On the sufficient conditions for the solubility of algebraic inverse eigenvalue problems (in Chinese), Math. Numer. Sinica 9 (1987) 49–59.
[24] J.-G. Sun, Q. Ye, The unsolvability of inverse algebraic eigenvalue problems almost everywhere, J. Comput. Math. 4 (1986) 212–226.
[25] J.-G. Sun, The unsolvability of multiplicative inverse eigenvalue problems almost everywhere, J. Comput. Math. 4 (1986) 227–244.
[26] J.-G. Sun, Multiple eigenvalue sensitivity analysis, Linear Algebra Appl. 137/138 (1990) 183–211.
[27] S.F. Xu, On the necessary conditions for the solvability of algebraic inverse eigenvalue problems, J. Comput. Math. 10 (1992) 93–97.
[28] S.F. Xu, On the sufficient conditions for the solvability of algebraic inverse eigenvalue problems, J. Comput. Math. 10 (1992) 171–180.
[29] Y.H. Zhang, On the sufficient conditions for the solubility of algebraic inverse eigenvalue problems with real symmetric matrices (in Chinese), Math. Numer. Sinica 14 (1992) 297–303.
[30] S.Q. Zhou, H. Dai, The Algebraic Inverse Eigenvalue Problem (in Chinese), Henan Science and Technology Press, Zhengzhou, China, 1991.


More information

Numerical Methods I: Eigenvalues and eigenvectors

Numerical Methods I: Eigenvalues and eigenvectors 1/25 Numerical Methods I: Eigenvalues and eigenvectors Georg Stadler Courant Institute, NYU stadler@cims.nyu.edu November 2, 2017 Overview 2/25 Conditioning Eigenvalues and eigenvectors How hard are they

More information

Linearized methods for ordinary di erential equations

Linearized methods for ordinary di erential equations Applied Mathematics and Computation 104 (1999) 109±19 www.elsevier.nl/locate/amc Linearized methods for ordinary di erential equations J.I. Ramos 1 Departamento de Lenguajes y Ciencias de la Computacion,

More information

This can be accomplished by left matrix multiplication as follows: I

This can be accomplished by left matrix multiplication as follows: I 1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method

More information

Meng Fan *, Ke Wang, Daqing Jiang. Abstract

Meng Fan *, Ke Wang, Daqing Jiang. Abstract Mathematical iosciences 6 (999) 47±6 wwwelseviercom/locate/mbs Eistence and global attractivity of positive periodic solutions of periodic n-species Lotka± Volterra competition systems with several deviating

More information

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix

More information

Institute for Advanced Computer Studies. Department of Computer Science. On the Perturbation of. LU and Cholesky Factors. G. W.

Institute for Advanced Computer Studies. Department of Computer Science. On the Perturbation of. LU and Cholesky Factors. G. W. University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR{95{93 TR{3535 On the Perturbation of LU and Cholesky Factors G. W. Stewart y October, 1995

More information

Intrinsic products and factorizations of matrices

Intrinsic products and factorizations of matrices Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

Widely applicable periodicity results for higher order di erence equations

Widely applicable periodicity results for higher order di erence equations Widely applicable periodicity results for higher order di erence equations István Gy½ori, László Horváth Department of Mathematics University of Pannonia 800 Veszprém, Egyetem u. 10., Hungary E-mail: gyori@almos.uni-pannon.hu

More information

Linear Analysis Lecture 16

Linear Analysis Lecture 16 Linear Analysis Lecture 16 The QR Factorization Recall the Gram-Schmidt orthogonalization process. Let V be an inner product space, and suppose a 1,..., a n V are linearly independent. Define q 1,...,

More information

On matrix equations X ± A X 2 A = I

On matrix equations X ± A X 2 A = I Linear Algebra and its Applications 326 21 27 44 www.elsevier.com/locate/laa On matrix equations X ± A X 2 A = I I.G. Ivanov,V.I.Hasanov,B.V.Minchev Faculty of Mathematics and Informatics, Shoumen University,

More information

Sums of diagonalizable matrices

Sums of diagonalizable matrices Linear Algebra and its Applications 315 (2000) 1 23 www.elsevier.com/locate/laa Sums of diagonalizable matrices J.D. Botha Department of Mathematics University of South Africa P.O. Box 392 Pretoria 0003

More information

Polynomial functions over nite commutative rings

Polynomial functions over nite commutative rings Polynomial functions over nite commutative rings Balázs Bulyovszky a, Gábor Horváth a, a Institute of Mathematics, University of Debrecen, Pf. 400, Debrecen, 4002, Hungary Abstract We prove a necessary

More information

An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB =C

An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB =C Journal of Computational and Applied Mathematics 1 008) 31 44 www.elsevier.com/locate/cam An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation

More information

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ISSUED 24 FEBRUARY 2018 1 Gaussian elimination Let A be an (m n)-matrix Consider the following row operations on A (1) Swap the positions any

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Absolute value equations

Absolute value equations Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West

More information

THE INVERSE PROBLEM OF CENTROSYMMETRIC MATRICES WITH A SUBMATRIX CONSTRAINT 1) 1. Introduction

THE INVERSE PROBLEM OF CENTROSYMMETRIC MATRICES WITH A SUBMATRIX CONSTRAINT 1) 1. Introduction Journal of Computational Mathematics, Vol22, No4, 2004, 535 544 THE INVERSE PROBLEM OF CENTROSYMMETRIC MATRICES WITH A SUBMATRIX CONSTRAINT 1 Zhen-yun Peng Department of Mathematics, Hunan University of

More information

Derivations, derivatives and chain rules

Derivations, derivatives and chain rules Linear Algebra and its Applications 302±303 (1999) 231±244 www.elsevier.com/locate/laa Derivations, derivatives and chain rules Rajendra Bhatia *, Kalyan B. Sinha 1 Indian Statistical Institute, 7, S.J.S.

More information

Practical Linear Algebra: A Geometry Toolbox

Practical Linear Algebra: A Geometry Toolbox Practical Linear Algebra: A Geometry Toolbox Third edition Chapter 12: Gauss for Linear Systems Gerald Farin & Dianne Hansford CRC Press, Taylor & Francis Group, An A K Peters Book www.farinhansford.com/books/pla

More information

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination J.M. Peña 1 Introduction Gaussian elimination (GE) with a given pivoting strategy, for nonsingular matrices

More information

Conjugate Gradient (CG) Method

Conjugate Gradient (CG) Method Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous

More information

Perturbation results for nearly uncoupled Markov. chains with applications to iterative methods. Jesse L. Barlow. December 9, 1992.

Perturbation results for nearly uncoupled Markov. chains with applications to iterative methods. Jesse L. Barlow. December 9, 1992. Perturbation results for nearly uncoupled Markov chains with applications to iterative methods Jesse L. Barlow December 9, 992 Abstract The standard perturbation theory for linear equations states that

More information

UMIACS-TR July CS-TR 2494 Revised January An Updating Algorithm for. Subspace Tracking. G. W. Stewart. abstract

UMIACS-TR July CS-TR 2494 Revised January An Updating Algorithm for. Subspace Tracking. G. W. Stewart. abstract UMIACS-TR-9-86 July 199 CS-TR 2494 Revised January 1991 An Updating Algorithm for Subspace Tracking G. W. Stewart abstract In certain signal processing applications it is required to compute the null space

More information

A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES

A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Journal of Mathematical Sciences: Advances and Applications Volume, Number 2, 2008, Pages 3-322 A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Department of Mathematics Taiyuan Normal University

More information

Modified Gauss Seidel type methods and Jacobi type methods for Z-matrices

Modified Gauss Seidel type methods and Jacobi type methods for Z-matrices Linear Algebra and its Applications 7 (2) 227 24 www.elsevier.com/locate/laa Modified Gauss Seidel type methods and Jacobi type methods for Z-matrices Wen Li a,, Weiwei Sun b a Department of Mathematics,

More information

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT

More information

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin On the stability of invariant subspaces of commuting matrices Tomaz Kosir and Bor Plestenjak September 18, 001 Abstract We study the stability of (joint) invariant subspaces of a nite set of commuting

More information

Exact a posteriori error analysis of the least squares nite element method 1

Exact a posteriori error analysis of the least squares nite element method 1 Applied Mathematics and Computation 116 (2000) 297±305 www.elsevier.com/locate/amc Exact a posteriori error analysis of the least squares nite element method 1 Jinn-Liang Liu * Department of Applied Mathematics,

More information

Notes on Time Series Modeling

Notes on Time Series Modeling Notes on Time Series Modeling Garey Ramey University of California, San Diego January 17 1 Stationary processes De nition A stochastic process is any set of random variables y t indexed by t T : fy t g

More information

a11 a A = : a 21 a 22

a11 a A = : a 21 a 22 Matrices The study of linear systems is facilitated by introducing matrices. Matrix theory provides a convenient language and notation to express many of the ideas concisely, and complicated formulas are

More information

Dense LU factorization and its error analysis

Dense LU factorization and its error analysis Dense LU factorization and its error analysis Laura Grigori INRIA and LJLL, UPMC February 2016 Plan Basis of floating point arithmetic and stability analysis Notation, results, proofs taken from [N.J.Higham,

More information

An alternative theorem for generalized variational inequalities and solvability of nonlinear quasi-p M -complementarity problems

An alternative theorem for generalized variational inequalities and solvability of nonlinear quasi-p M -complementarity problems Applied Mathematics and Computation 109 (2000) 167±182 www.elsevier.nl/locate/amc An alternative theorem for generalized variational inequalities and solvability of nonlinear quasi-p M -complementarity

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.

More information

MTH 2530: Linear Algebra. Sec Systems of Linear Equations

MTH 2530: Linear Algebra. Sec Systems of Linear Equations MTH 0 Linear Algebra Professor Chao Huang Department of Mathematics and Statistics Wright State University Week # Section.,. Sec... Systems of Linear Equations... D examples Example Consider a system of

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

ON THE QR ITERATIONS OF REAL MATRICES

ON THE QR ITERATIONS OF REAL MATRICES Unspecified Journal Volume, Number, Pages S????-????(XX- ON THE QR ITERATIONS OF REAL MATRICES HUAJUN HUANG AND TIN-YAU TAM Abstract. We answer a question of D. Serre on the QR iterations of a real matrix

More information

Matrices, Moments and Quadrature, cont d

Matrices, Moments and Quadrature, cont d Jim Lambers CME 335 Spring Quarter 2010-11 Lecture 4 Notes Matrices, Moments and Quadrature, cont d Estimation of the Regularization Parameter Consider the least squares problem of finding x such that

More information

BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION

BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION K Y BERNETIKA VOLUM E 46 ( 2010), NUMBER 4, P AGES 655 664 BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION Guang-Da Hu and Qiao Zhu This paper is concerned with bounds of eigenvalues of a complex

More information

On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems

On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems On the Local Convergence of an Iterative Approach for Inverse Singular Value Problems Zheng-jian Bai Benedetta Morini Shu-fang Xu Abstract The purpose of this paper is to provide the convergence theory

More information

Initial condition issues on iterative learning control for non-linear systems with time delay

Initial condition issues on iterative learning control for non-linear systems with time delay Internationa l Journal of Systems Science, 1, volume, number 11, pages 15 ±175 Initial condition issues on iterative learning control for non-linear systems with time delay Mingxuan Sun and Danwei Wang*

More information

ACI-matrices all of whose completions have the same rank

ACI-matrices all of whose completions have the same rank ACI-matrices all of whose completions have the same rank Zejun Huang, Xingzhi Zhan Department of Mathematics East China Normal University Shanghai 200241, China Abstract We characterize the ACI-matrices

More information

14.2 QR Factorization with Column Pivoting

14.2 QR Factorization with Column Pivoting page 531 Chapter 14 Special Topics Background Material Needed Vector and Matrix Norms (Section 25) Rounding Errors in Basic Floating Point Operations (Section 33 37) Forward Elimination and Back Substitution

More information

Chapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices

Chapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices Chapter 7 Iterative methods for large sparse linear systems In this chapter we revisit the problem of solving linear systems of equations, but now in the context of large sparse systems. The price to pay

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP) MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

More information

Linear Algebra Methods for Data Mining

Linear Algebra Methods for Data Mining Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 1. Basic Linear Algebra Linear Algebra Methods for Data Mining, Spring 2007, University of Helsinki Example

More information

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i ) Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical

More information

An improved generalized Newton method for absolute value equations

An improved generalized Newton method for absolute value equations DOI 10.1186/s40064-016-2720-5 RESEARCH Open Access An improved generalized Newton method for absolute value equations Jingmei Feng 1,2* and Sanyang Liu 1 *Correspondence: fengjingmeilq@hotmail.com 1 School

More information

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson

More information

2 Computing complex square roots of a real matrix

2 Computing complex square roots of a real matrix On computing complex square roots of real matrices Zhongyun Liu a,, Yulin Zhang b, Jorge Santos c and Rui Ralha b a School of Math., Changsha University of Science & Technology, Hunan, 410076, China b

More information

UMIACS-TR July CS-TR 2721 Revised March Perturbation Theory for. Rectangular Matrix Pencils. G. W. Stewart.

UMIACS-TR July CS-TR 2721 Revised March Perturbation Theory for. Rectangular Matrix Pencils. G. W. Stewart. UMIAS-TR-9-5 July 99 S-TR 272 Revised March 993 Perturbation Theory for Rectangular Matrix Pencils G. W. Stewart abstract The theory of eigenvalues and eigenvectors of rectangular matrix pencils is complicated

More information

Dichotomy Of Poincare Maps And Boundedness Of Some Cauchy Sequences

Dichotomy Of Poincare Maps And Boundedness Of Some Cauchy Sequences Applied Mathematics E-Notes, 2(202), 4-22 c ISSN 607-250 Available free at mirror sites of http://www.math.nthu.edu.tw/amen/ Dichotomy Of Poincare Maps And Boundedness Of Some Cauchy Sequences Abar Zada

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13 STAT 309: MATHEMATICAL COMPUTATIONS I FALL 208 LECTURE 3 need for pivoting we saw that under proper circumstances, we can write A LU where 0 0 0 u u 2 u n l 2 0 0 0 u 22 u 2n L l 3 l 32, U 0 0 0 l n l

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Review of Basic Concepts in Linear Algebra

Review of Basic Concepts in Linear Algebra Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra

More information

AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES

AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES ZHONGYUN LIU AND HEIKE FAßBENDER Abstract: A partially described inverse eigenvalue problem

More information

THE RELATION BETWEEN THE QR AND LR ALGORITHMS

THE RELATION BETWEEN THE QR AND LR ALGORITHMS SIAM J. MATRIX ANAL. APPL. c 1998 Society for Industrial and Applied Mathematics Vol. 19, No. 2, pp. 551 555, April 1998 017 THE RELATION BETWEEN THE QR AND LR ALGORITHMS HONGGUO XU Abstract. For an Hermitian

More information

A new interpretation of the integer and real WZ factorization using block scaled ABS algorithms

A new interpretation of the integer and real WZ factorization using block scaled ABS algorithms STATISTICS,OPTIMIZATION AND INFORMATION COMPUTING Stat., Optim. Inf. Comput., Vol. 2, September 2014, pp 243 256. Published online in International Academic Press (www.iapress.org) A new interpretation

More information

EIGENVALUES AND EIGENVECTORS 3

EIGENVALUES AND EIGENVECTORS 3 EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

Lecture 11: Eigenvalues and Eigenvectors

Lecture 11: Eigenvalues and Eigenvectors Lecture : Eigenvalues and Eigenvectors De nition.. Let A be a square matrix (or linear transformation). A number λ is called an eigenvalue of A if there exists a non-zero vector u such that A u λ u. ()

More information

Main matrix factorizations

Main matrix factorizations Main matrix factorizations A P L U P permutation matrix, L lower triangular, U upper triangular Key use: Solve square linear system Ax b. A Q R Q unitary, R upper triangular Key use: Solve square or overdetrmined

More information

Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method

Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian Matrices Using Newton s Method Journal of Mathematics Research; Vol 6, No ; 014 ISSN 1916-9795 E-ISSN 1916-9809 Published by Canadian Center of Science and Education Solution of the Inverse Eigenvalue Problem for Certain (Anti-) Hermitian

More information

COMP 558 lecture 18 Nov. 15, 2010

COMP 558 lecture 18 Nov. 15, 2010 Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to

More information

A fast randomized algorithm for overdetermined linear least-squares regression

A fast randomized algorithm for overdetermined linear least-squares regression A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm

More information

MTH 5102 Linear Algebra Practice Final Exam April 26, 2016

MTH 5102 Linear Algebra Practice Final Exam April 26, 2016 Name (Last name, First name): MTH 5 Linear Algebra Practice Final Exam April 6, 6 Exam Instructions: You have hours to complete the exam. There are a total of 9 problems. You must show your work and write

More information

A note on the unique solution of linear complementarity problem

A note on the unique solution of linear complementarity problem COMPUTATIONAL SCIENCE SHORT COMMUNICATION A note on the unique solution of linear complementarity problem Cui-Xia Li 1 and Shi-Liang Wu 1 * Received: 13 June 2016 Accepted: 14 November 2016 First Published:

More information

A Note on Simple Nonzero Finite Generalized Singular Values

A Note on Simple Nonzero Finite Generalized Singular Values A Note on Simple Nonzero Finite Generalized Singular Values Wei Ma Zheng-Jian Bai December 21 212 Abstract In this paper we study the sensitivity and second order perturbation expansions of simple nonzero

More information

Lecture 4 Orthonormal vectors and QR factorization

Lecture 4 Orthonormal vectors and QR factorization Orthonormal vectors and QR factorization 4 1 Lecture 4 Orthonormal vectors and QR factorization EE263 Autumn 2004 orthonormal vectors Gram-Schmidt procedure, QR factorization orthogonal decomposition induced

More information

ECON0702: Mathematical Methods in Economics

ECON0702: Mathematical Methods in Economics ECON0702: Mathematical Methods in Economics Yulei Luo SEF of HKU January 12, 2009 Luo, Y. (SEF of HKU) MME January 12, 2009 1 / 35 Course Outline Economics: The study of the choices people (consumers,

More information

216 S. Chandrasearan and I.C.F. Isen Our results dier from those of Sun [14] in two asects: we assume that comuted eigenvalues or singular values are

216 S. Chandrasearan and I.C.F. Isen Our results dier from those of Sun [14] in two asects: we assume that comuted eigenvalues or singular values are Numer. Math. 68: 215{223 (1994) Numerische Mathemati c Sringer-Verlag 1994 Electronic Edition Bacward errors for eigenvalue and singular value decomositions? S. Chandrasearan??, I.C.F. Isen??? Deartment

More information

Max-Min Problems in R n Matrix

Max-Min Problems in R n Matrix Max-Min Problems in R n Matrix 1 and the essian Prerequisite: Section 6., Orthogonal Diagonalization n this section, we study the problem of nding local maxima and minima for realvalued functions on R

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions

Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2013 Main problem of linear algebra 2: Given

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Lecture 12: Diagonalization

Lecture 12: Diagonalization Lecture : Diagonalization A square matrix D is called diagonal if all but diagonal entries are zero: a a D a n 5 n n. () Diagonal matrices are the simplest matrices that are basically equivalent to vectors

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors 5 Eigenvalues and Eigenvectors 5.2 THE CHARACTERISTIC EQUATION DETERMINANATS n n Let A be an matrix, let U be any echelon form obtained from A by row replacements and row interchanges (without scaling),

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9 STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9 1. qr and complete orthogonal factorization poor man s svd can solve many problems on the svd list using either of these factorizations but they

More information

Inverse Eigenvalue Problem with Non-simple Eigenvalues for Damped Vibration Systems

Inverse Eigenvalue Problem with Non-simple Eigenvalues for Damped Vibration Systems Journal of Informatics Mathematical Sciences Volume 1 (2009), Numbers 2 & 3, pp. 91 97 RGN Publications (Invited paper) Inverse Eigenvalue Problem with Non-simple Eigenvalues for Damped Vibration Systems

More information

Spectrally arbitrary star sign patterns

Spectrally arbitrary star sign patterns Linear Algebra and its Applications 400 (2005) 99 119 wwwelseviercom/locate/laa Spectrally arbitrary star sign patterns G MacGillivray, RM Tifenbach, P van den Driessche Department of Mathematics and Statistics,

More information

MAT 610: Numerical Linear Algebra. James V. Lambers

MAT 610: Numerical Linear Algebra. James V. Lambers MAT 610: Numerical Linear Algebra James V Lambers January 16, 2017 2 Contents 1 Matrix Multiplication Problems 7 11 Introduction 7 111 Systems of Linear Equations 7 112 The Eigenvalue Problem 8 12 Basic

More information

Chapter 3. Linear and Nonlinear Systems

Chapter 3. Linear and Nonlinear Systems 59 An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them Werner Heisenberg (1901-1976) Chapter 3 Linear and Nonlinear Systems In this chapter

More information

SOLUTIONS: ASSIGNMENT Use Gaussian elimination to find the determinant of the matrix. = det. = det = 1 ( 2) 3 6 = 36. v 4.

SOLUTIONS: ASSIGNMENT Use Gaussian elimination to find the determinant of the matrix. = det. = det = 1 ( 2) 3 6 = 36. v 4. SOLUTIONS: ASSIGNMENT 9 66 Use Gaussian elimination to find the determinant of the matrix det 1 1 4 4 1 1 1 1 8 8 = det = det 0 7 9 0 0 0 6 = 1 ( ) 3 6 = 36 = det = det 0 0 6 1 0 0 0 6 61 Consider a 4

More information