On the Quadratic Convergence of the Falk–Langemeyer Method

Ivan Slapničar*    Vjeran Hari†

Abstract

The Falk–Langemeyer method for solving a real definite generalized eigenvalue problem, $Ax = \lambda Bx$, $x \ne 0$, is proved to be quadratically convergent under an arbitrary cyclic pivot strategy if the eigenvalues of the problem are simple. The term "quadratically convergent" roughly means that the sum of squares of the off-diagonal elements of the matrices from the sequence of matrix pairs generated by the method tends to zero quadratically per cycle.

Key words: generalized eigenvalue problem, Jacobi method, quadratic convergence, asymptotic convergence

AMS(MOS) subject classification. 65F15, 65F30

*Department of Mathematics, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, University of Split, R. Boskovica b.b., YU Split, Yugoslavia
†Department of Mathematics, University of Zagreb, pp.87, YU-4000 Zagreb, Yugoslavia

1 Introduction

In this paper we study the asymptotic convergence of the method established in 1960 by S. Falk and P. Langemeyer in []. Their method solves the generalized eigenvalue problem

$$ A x = \lambda B x, \qquad x \ne 0, \qquad (1) $$

where A and B are real symmetric matrices of order n such that the pair (A, B) is definite. By definition, the pair (A, B) is definite if the matrices A and B are Hermitian or real symmetric and there exist real constants a and b such that the matrix aA + bB is positive definite. The Falk–Langemeyer method is the most commonly used Jacobi-type method for solving problem (1). Its advantage over other methods of solving problem (1) is that it applies to the widest class of starting pairs. Although it is not, in general, the fastest method for solving the given problem, in some cases it is the most appropriate. The QR method [] is usually several times faster, at least on conventional computers, but it solves problem (1) only if the matrix B is positive definite (or a positive definitizing shift for the pair is known in advance) and if the matrix B is well conditioned for the Cholesky decomposition. The Falk–Langemeyer method is superior to the QR method in terms of numerical stability if the matrix B is badly conditioned for the Cholesky decomposition. It is also superior to the QR method if approximate eigenvectors are known, i.e. if the matrices A and B are almost diagonal. This happens in the course of modeling the parameters of a system, where a sequence of matrix pairs differing only slightly from each other has to be reduced. It also happens in various subspace iteration techniques (see []). Another reason why Jacobi-type methods have attracted attention recently is that they are adaptable for parallel processing (see [], [0]). The Jacobi-type method for solving problem (1) recently proposed by Veselić in [5] lies between the previously mentioned methods in both speed and requirements. Although Veselić's method works for definite matrix pairs, a linear combination A − μB which is reasonably well conditioned for the J-symmetric Cholesky decomposition must be known in advance. This method is one of the implicit methods, i.e. it works only on the eigenvector matrix, and is therefore approximately two times faster than the Falk–Langemeyer method. The Jacobi-type method considered by Zimmermann in [9] is closely related to the Falk–Langemeyer method (this is briefly described in Section 3) but requires a positive definite matrix B. In [9] the convergence of this method is proved under the assumption that the starting matrices are almost diagonal. The same conclusion holds for the Falk–Langemeyer method, as we shall show in this paper. In [4] Hari studied the asymptotic convergence of a complex extension of Zimmermann's method (also for positive definite B). He showed that this method converges quadratically under the cyclic pivot strategies if the eigenvalues of the problem are simple, while in the case of multiple eigenvalues the method can be modified so that the quadratic convergence persists. We are interested only in cyclic pivot strategies since some of them are amenable to parallel processing. These results, the informal analysis of the convergence properties of the Falk–Langemeyer method performed by Hari in [7], and numerical investigation suggested that the Falk–Langemeyer method behaves in a similar fashion. In this paper we prove that the Falk–Langemeyer method is quadratically convergent if the eigenvalues of the problem are simple and the pivot strategy is cyclic. The technique of the proof, originally established by the late J. H. Wilkinson in [6] (cf. [6]), is similar to that used in [4]. Two main problems had to be solved: neither of the matrices A and B has to be positive definite, and the transformation matrices are not orthogonal and are therefore difficult to estimate. Both problems were solved using the results about almost diagonal definite matrix pairs from [7].

The paper is organized as follows. In Section 2 we state the known results about almost diagonal definite matrix pairs from [7] to the extent necessary for understanding the rest of the paper. In Section 3 we describe the Falk–Langemeyer method, show that it always works for definite matrix pairs (without the use of definitizing shifts), and give its algorithm. We also briefly describe the Zimmermann method from [9] and [4] and relate it to the Falk–Langemeyer method. Section 4 is the central section of the paper. We first state the known result about the quadratic convergence of the Zimmermann method from [4] and show to what extent this result can be applied to the Falk–Langemeyer method. We introduce the measure ε̃_k which we use for defining and proving quadratic convergence. Then we prove the quadratic convergence of the Falk–Langemeyer method under the assumptions that the eigenvalues of the problem are simple, the pivot strategy is arbitrary cyclic, and the matrices A and B are almost diagonal. At the end we show that the quadratic convergence implies the convergence of the Falk–Langemeyer method. In Section 5 we give the quadratic convergence results for parallel and serial strategies, briefly explain the possible modification of the Falk–Langemeyer method in case of multiple eigenvalues, and briefly discuss numerical experiments.

Most of the results presented in the paper are part of an M.S. thesis [3] done under the supervision of Professor V. Hari. We would like to thank Professor K. Veselić from Fernuniversität Hagen for his helpful suggestions. We would also like to thank both reviewers for their comments which helped us clarify some important parts of the paper.

2 Almost Diagonal Definite Matrix Pairs

Here we consider the structure of almost diagonal definite matrix pairs. We first state some properties of definite matrix pairs. Then we introduce the chordal metric for measuring the distance between eigenvalues of definite matrix pairs. We define the measures for the almost diagonality of a square matrix and of a pair of square matrices. At the end we state an important theorem from [7]. The theorem and its corollary reveal the structure of almost diagonal definite matrix pairs. All results are given for the general case of Hermitian matrices, even though in the rest of the paper we shall consider only the case of real symmetric matrices.

A definite matrix pair (A, B) has some important properties:

a) There exists a nonsingular matrix F such that

$$ F^* A F = \mathrm{diag}(a_1, \ldots, a_n) = D_A, \qquad F^* B F = \mathrm{diag}(b_1, \ldots, b_n) = D_B. \qquad (2) $$

The ratios $\lambda_i = a_i/b_i$, i = 1, ..., n, of the real numbers a_i, b_i are the eigenvalues of the pair (A, B) and are unique up to ordering. If $[f_1, \ldots, f_n]$ denotes the partition of F by columns, the vectors f_i, i = 1, ..., n, are the corresponding eigenvectors. The matrices D_A and D_B are not uniquely determined by the pair (A, B). In the real symmetric case F* can be changed to F^T in relation (2).

b) The Crawford constant c(A, B),

$$ c(A,B) = \inf \{\, | x^* (A + iB) x | \; ; \; x \in \mathbf{C}^n, \ \| x \|_2 = 1 \,\}, \qquad (3) $$

is positive.

Therefore, A and B share no common null-subspace and |a_i| + |b_i| > 0, i = 1, ..., n, independently of the choice of F. Note that the choice x = e_i (the i-th coordinate vector) in relation (3) for i = 1, ..., n implies

$$ d_i = \sqrt[4]{(a_{ii})^2 + (b_{ii})^2} > 0, \quad i = 1, \ldots, n, \qquad (4) $$

where A = (a_ij) and B = (b_ij). Hence the matrix

$$ D = \mathrm{diag}(1/d_1, \ldots, 1/d_n) \qquad (5) $$

is positive definite. In the real symmetric case, for n ≠ 2, only real vectors x can be taken in relation (3).

c) There exists a real number φ such that the matrix B_φ from the pair (A_φ, B_φ),

$$ A_\varphi = A \cos\varphi - B \sin\varphi, \qquad B_\varphi = A \sin\varphi + B \cos\varphi, \qquad (6) $$

is positive definite. The matrices A and B can be simultaneously diagonalized if and only if the same holds for the matrices A_φ and B_φ.

The proofs of the above properties are simple (see [4]). If some f_i is a vector from the null-subspace of B, the eigenvalue λ_i is infinite. Such eigenvalues are not badly posed because they are zero eigenvalues of the pair (B, A), counting their multiplicities. Hence, it is better to define the eigenvalues as pairs of numbers λ_i = [a_i, b_i], i = 1, ..., n. It is also necessary to choose a finite metric for measuring the distance between eigenvalues. Such is the chordal metric. Let $R^2 = R \times R$ and $R^2_0 = R^2 \setminus \{[0,0]\}$, where R is the set of real numbers. We say that the pairs [a, b], [c, d] from $R^2_0$ are equivalent if ad − bc = 0, and we write [a, b] ∼ [c, d]. It is easily seen that ∼ is an equivalence relation on $R^2_0$. Let $R^2_0/\!\sim$ be the set of equivalence classes. Let λ, μ be elements of $R^2_0/\!\sim$ and let [a, b], [c, d] be their representatives, respectively. The chordal distance between [a, b] and [c, d] is defined by the formula

$$ \chi([a,b],[c,d]) = \frac{| a d - b c |}{\sqrt{a^2 + b^2}\,\sqrt{c^2 + d^2}}. $$
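For illustration, the chordal distance transcribes directly into code. The following sketch is ours, not part of the paper; NumPy is assumed, and the function name is our choice:

```python
import numpy as np

def chordal_distance(ab, cd):
    """Chordal distance between eigenvalue representatives [a, b] and [c, d].

    Both arguments represent equivalence classes in R^2_0, so the result
    does not depend on the scaling of either representative.
    """
    a, b = ab
    c, d = cd
    return abs(a * d - b * c) / (np.hypot(a, b) * np.hypot(c, d))

# Scale invariance and boundedness by 1:
assert chordal_distance([2.0, 1.0], [4.0, 2.0]) < 1e-15            # equivalent pairs
assert abs(chordal_distance([1.0, 0.0], [0.0, 1.0]) - 1.0) < 1e-15  # infinite vs zero eigenvalue
```

The second assertion illustrates why the chordal metric is convenient here: the "infinite" eigenvalue [1, 0] is at finite distance 1 from the zero eigenvalue [0, 1].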

It is easily seen that χ is constant when [a, b] and [c, d] vary over λ and μ, respectively. This defines a metric $\tilde\chi : R^2_0/\!\sim \times R^2_0/\!\sim \to R$ by $\tilde\chi(\lambda,\mu) = \chi([a,b],[c,d])$, where [a, b] and [c, d] are any representatives of λ and μ, respectively. However, for the sake of simplicity we shall use χ for both functions χ and χ̃. We see that χ(λ, μ) ≤ 1 for all λ, μ. The proof of these and some other properties of the chordal metric can be found in [], [4] and [3].

From now on, let n denote the order of the matrices A and B and let p denote the number of distinct eigenvalues of the pair (A, B). We assume that

n ≥ 3, p ≥ 2,

and that the pair (A, B) is definite. Note that if p = 1 then A = λB, so λ = λ_1 = ⋯ = λ_n and all vectors are eigenvectors of the pair (A, B).

The off-norm of a square matrix A is the quantity

$$ S(A) = \sqrt{ \sum_{\substack{i,j=1 \\ i \ne j}}^{n} | a_{ij} |^2 } \; = \; \| A - \mathrm{diag}(A) \|, $$

where ‖·‖ denotes the Euclidean matrix norm. The off-norm of the pair (A, B) is the quantity

$$ \varepsilon(A,B) = \sqrt{ S^2(A) + S^2(B) }. \qquad (7) $$

Where no misunderstanding can arise, ε shall be used instead of ε(A, B). Let

$$ \lambda_1 = \cdots = \lambda_{t_1}, \quad \lambda_{t_1+1} = \cdots = \lambda_{t_2}, \quad \ldots, \quad \lambda_{t_{p-1}+1} = \cdots = \lambda_{t_p}, \qquad (8) $$

where

$$ \lambda_{t_i} = [s_i, c_i], \qquad s_i^2 + c_i^2 = 1, \quad i = 1, \ldots, p, \qquad (9) $$

be all the eigenvalues of the pair (A, B). Thus, we assume that the pair (A, B) has p distinct eigenvalues λ_{t_1}, ..., λ_{t_p} with the appropriate multiplicities

$$ n_i = t_i - t_{i-1}, \quad i = 1, \ldots, p, \qquad t_0 = 0, \qquad (10) $$

and that representatives which behave as sine and cosine are chosen. Since p > 1 we can define the quantities

$$ \gamma_i = \frac{1}{3} \min_{\substack{1 \le j \le p \\ j \ne i}} \chi(\lambda_{t_i}, \lambda_{t_j}), \qquad \gamma = \min_{1 \le i \le p} \gamma_i. \qquad (11) $$
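In code, the off-norms S(A) and ε(A, B) of relation (7) amount to the following. This is an illustrative sketch of ours, not from the paper; NumPy is assumed:

```python
import numpy as np

def off_norm(A):
    """S(A): Euclidean (Frobenius) norm of the off-diagonal part of a square matrix."""
    return np.linalg.norm(A - np.diag(np.diag(A)))

def pair_off_norm(A, B):
    """epsilon(A, B) = sqrt(S(A)^2 + S(B)^2), relation (7)."""
    return np.hypot(off_norm(A), off_norm(B))
```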

Note that γ > 0. In the analysis we shall need the matrices Ã and B̃ defined by

$$ \tilde A = D A D, \qquad \tilde B = D B D, \qquad (12) $$

where the matrix D is defined by relations (5) and (4). Since D is positive definite, the matrices Ã and B̃ are congruent to the matrices A and B, respectively. Let us partition the matrices Ã and B̃,

$$ \tilde A = \begin{bmatrix} \tilde A_{11} & \cdots & \tilde A_{1p} \\ \vdots & & \vdots \\ \tilde A_{p1} & \cdots & \tilde A_{pp} \end{bmatrix}, \qquad \tilde B = \begin{bmatrix} \tilde B_{11} & \cdots & \tilde B_{1p} \\ \vdots & & \vdots \\ \tilde B_{p1} & \cdots & \tilde B_{pp} \end{bmatrix}, \qquad (13) $$

where Ã_ii and B̃_ii are diagonal blocks of order n_i, i = 1, ..., p, and the n_i are defined by relation (10). Relation (13) shall be written as Ã = (Ã_ij) and B̃ = (B̃_ij). Let the matrices A and B be partitioned according to relation (13). The departure from the block-diagonal form of the pair (A, B) is the quantity

$$ \Delta(A,B) = \sqrt{ \Delta^2(A) + \Delta^2(B) }, \qquad \text{where} \quad \Delta^2(A) = \sum_{i=1}^{p} \sum_{\substack{j=1 \\ j \ne i}}^{p} \| A_{ij} \|^2, \quad \Delta^2(B) = \sum_{i=1}^{p} \sum_{\substack{j=1 \\ j \ne i}}^{p} \| B_{ij} \|^2. $$

Theorem 1 Let (A, B) be a definite pair and let the matrices Ã and B̃ be defined by relations (12), (5) and (4). If

$$ \varepsilon(\tilde A, \tilde B) < \gamma, \qquad (14) $$

then there exists a permutation matrix P such that for the matrices Ã′ = P^T Ã P and B̃′ = P^T B̃ P, partitioned according to relation (13),

$$ \| c_i \tilde A'_{ii} - s_i \tilde B'_{ii} \| \le \frac{2}{\gamma_i} \sum_{\substack{j=1 \\ j \ne i}}^{p} \| c_i \tilde A'_{ij} - s_i \tilde B'_{ij} \|^2, \quad i = 1, \ldots, p. \qquad (15) $$

On both sides of the inequalities (15) the Euclidean matrix norm can be substituted by the spectral norm.

Proof: The proof of this theorem is found in [7]. Q.E.D.

Corollary 2 Let relation (14) hold for the definite pair (A, B). Then there exists a permutation matrix P such that for the matrices Ã′ = P^T Ã P = (ã′_ij), B̃′ = P^T B̃ P = (b̃′_ij), A′ = P^T A P = (a′_ij) and B′ = P^T B P = (b′_ij), partitioned according to relation (13),

$$ \sum_{i=1}^{p} \| c_i \tilde A'_{ii} - s_i \tilde B'_{ii} \| \le \frac{4}{\gamma}\, \Delta^2(\tilde A', \tilde B'), \qquad (16) $$

$$ \sum_{i=1}^{p} \sum_{j=t_{i-1}+1}^{t_i} \chi([s_i,c_i],[\tilde a'_{jj}, \tilde b'_{jj}]) = \sum_{i=1}^{p} \sum_{j=t_{i-1}+1}^{t_i} | c_i \tilde a'_{jj} - s_i \tilde b'_{jj} | \le \frac{4}{\gamma}\, \Delta^2(\tilde A', \tilde B'), \qquad (17) $$

$$ \chi([s_i,c_i],[\tilde a'_{jj}, \tilde b'_{jj}]) = | c_i \tilde a'_{jj} - s_i \tilde b'_{jj} | \le \frac{2}{\gamma_i}\, \Delta^2(\tilde A', \tilde B'), \quad j = t_{i-1}+1, \ldots, t_i, \; i = 1, \ldots, p. \qquad (18) $$

Proof: By Theorem 1 there exists a permutation matrix P such that relation (15) holds for the matrices Ã′ and B̃′. The Cauchy–Schwarz inequality implies

$$ \| c_i \tilde A'_{ij} - s_i \tilde B'_{ij} \|^2 \le \big( | c_i |\, \| \tilde A'_{ij} \| + | s_i |\, \| \tilde B'_{ij} \| \big)^2 \le \| \tilde A'_{ij} \|^2 + \| \tilde B'_{ij} \|^2, \quad i \ne j. \qquad (19) $$

From relations (15) and (19), the definition of Δ(Ã′, B̃′), and the symmetry of the matrices Ã′ and B̃′ it follows that

$$ \| c_i \tilde A'_{ii} - s_i \tilde B'_{ii} \| \le \frac{2}{\gamma_i} \sum_{\substack{j=1 \\ j \ne i}}^{p} \big( \| \tilde A'_{ij} \|^2 + \| \tilde B'_{ij} \|^2 \big) \le \frac{2}{\gamma_i}\, \Delta^2(\tilde A', \tilde B'), \quad i = 1, \ldots, p. \qquad (20) $$

Finally, relations (15), (19) and (20) and the definition of Δ(Ã′, B̃′) imply

$$ \sum_{i=1}^{p} \| c_i \tilde A'_{ii} - s_i \tilde B'_{ii} \| \le \sum_{i=1}^{p} \sum_{\substack{j=1 \\ j \ne i}}^{p} \frac{2}{\gamma_i} \big( \| \tilde A'_{ij} \|^2 + \| \tilde B'_{ij} \|^2 \big) \le \frac{4}{\gamma}\, \Delta^2(\tilde A', \tilde B'), $$

which completes the proof of relation (16). The equalities in relations (17) and (18) follow from the definition of the chordal metric and the fact that it does not depend upon the choice of the representatives. The inequality in relation (17) now follows from relation (16), and the inequality in relation (18) from relation (20). Q.E.D.

Theorem 1 and Corollary 2 reveal the structure of almost diagonal definite matrix pairs in both the Hermitian and the real symmetric case. Relation (18) implies that for i = 1, ..., p the pairs [ã′_jj, b̃′_jj], j in {t_{i−1}+1, ..., t_i}, approximate the eigenvalues λ_{t_i} with an error of order of magnitude Δ²(Ã′, B̃′) in the chordal metric. Relation (16) implies that the blocks Ã′_ii and B̃′_ii, i = 1, ..., p, are proportional, with the proportionality constants being λ_{t_i}, also with an error of order Δ²(Ã′, B̃′). This proportionality becomes apparent when Δ(Ã′, B̃′) is small enough compared to γ. Note that relations (15) and (18) do not imply that the off-diagonal elements of the blocks Ã′_ii and B̃′_ii tend to zero together with Δ(Ã′, B̃′). Relation (15) shows that for fixed i the proportionality of the blocks Ã′_ii and B̃′_ii depends on the local separation γ_i of the eigenvalue λ_{t_i} from the other eigenvalues and on the quantities ‖c_i Ã′_ij − s_i B̃′_ij‖, j = 1, ..., p, j ≠ i.

3 The Falk–Langemeyer method

In this section we define the Falk–Langemeyer method, show that it always works for definite matrix pairs, and give its algorithm. At the end of the section we briefly describe the method of Zimmermann from [4] and [9], because it is closely related to the Falk–Langemeyer method. This relationship is also described.

The Falk–Langemeyer method solves problem (1) by constructing a sequence of "congruent" matrix pairs

$$ (A^{(1)}, B^{(1)}),\; (A^{(2)}, B^{(2)}),\; \ldots \qquad (21) $$

where A^{(1)} = A, B^{(1)} = B and

$$ A^{(k+1)} = F_k^T A^{(k)} F_k, \qquad B^{(k+1)} = F_k^T B^{(k)} F_k, \quad k \ge 1. \qquad (22) $$

Note that the transformation (22) with a nonsingular matrix F_k preserves the eigenvalues of the pair (A^{(k)}, B^{(k)}). This is a Jacobi-type method, hence the transformation matrices are chosen as nonsingular elementary plane matrices. An elementary plane matrix F = (f_ij) differs from the identity matrix only at the positions (l,l), (l,m), (m,l) and (m,m), where 1 ≤ l < m ≤ n. The matrix

$$ \hat F = \begin{bmatrix} f_{ll} & f_{lm} \\ f_{ml} & f_{mm} \end{bmatrix} $$

is called the (l,m)-restriction of the square matrix F = (f_ij). For each k, the (l,m)-restriction of the matrix F_k has the form

$$ \hat F_k = \begin{bmatrix} 1 & \alpha_k \\ -\beta_k & 1 \end{bmatrix}, \qquad (23) $$

where the real parameters α_k and β_k are chosen to satisfy the condition

$$ a^{(k+1)}_{lm} = 0, \qquad b^{(k+1)}_{lm} = 0. \qquad (24) $$

Here A^{(k)} = (a^{(k)}_{ij}) and B^{(k)} = (b^{(k)}_{ij}). The indices l and m are called pivot indices and the pair (l,m) is called the pivot pair. As k varies the pivot pair also varies, hence l = l(k) and m = m(k). The transition from the pair (A^{(k)}, B^{(k)}) to the pair (A^{(k+1)}, B^{(k+1)}) is called the k-th step of the method. The manner in which we choose the elements which are to be annihilated in the k-th step (or just the indices (l,m) of those elements) is called the pivot strategy. The pivot strategy is cyclic if every sequence of N = n(n−1)/2 successive pairs (l,m) contains all pairs (i,j), 1 ≤ i < j ≤ n. A sequence of N successive steps is referred to as a cycle. The two most common cyclic pivot strategies are the column-cyclic strategy and the row-cyclic strategy. The former is defined by the sequence of pairs

(1,2), (1,3), (2,3), (1,4), (2,4), (3,4), ..., (1,n), (2,n), ..., (n−1,n),

and the latter by the sequence of pairs

(1,2), (1,3), ..., (1,n), (2,3), (2,4), ..., (2,n), (3,4), ..., (n−1,n).

These two strategies are also called serial strategies.
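For concreteness, the two serial orderings can be generated as follows. This is an illustrative sketch of ours, not code from the paper:

```python
def row_cyclic_pairs(n):
    """Pivot pairs (l, m), 1 <= l < m <= n, in row-cyclic order."""
    return [(l, m) for l in range(1, n + 1) for m in range(l + 1, n + 1)]

def column_cyclic_pairs(n):
    """Pivot pairs (l, m), 1 <= l < m <= n, in column-cyclic order."""
    return [(l, m) for m in range(2, n + 1) for l in range(1, m)]

# For n = 4 one cycle has N = n(n-1)/2 = 6 steps:
# row-cyclic:    (1,2) (1,3) (1,4) (2,3) (2,4) (3,4)
# column-cyclic: (1,2) (1,3) (2,3) (1,4) (2,4) (3,4)
```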

Parallel cyclic strategies are cyclic strategies which enable the simultaneous execution of approximately n/2 steps on parallel computers. These strategies have recently attracted considerable attention (see [], [0]). We state the quadratic convergence results for serial and parallel strategies in Section 5.

Note that if the eigenvectors are needed, we must calculate the sequence of matrices F^{(1)}, F^{(2)}, ..., where

$$ F^{(1)} = I, \qquad F^{(k+1)} = F^{(k)} F_k, \quad k \ge 1. \qquad (25) $$

From relations (22) and (25) we obtain, for k ≥ 2,

$$ F^{(k)} = F_1 \cdots F_{k-1}, \qquad A^{(k)} = (F^{(k)})^T A^{(1)} F^{(k)}, \qquad B^{(k)} = (F^{(k)})^T B^{(1)} F^{(k)}. $$

We shall now derive one step of the algorithm. Note that only the (l,m)-restrictions of the involved matrices are needed. Since (22) is a congruence transformation, the pairs (A^{(k)}, B^{(k)}) are definite for every k, and the pairs of the corresponding (l,m)-restrictions are definite as well. Let (the index k is omitted for the sake of simplicity)

$$ \hat F^T \hat A \hat F = \hat A', \qquad \hat F^T \hat B \hat F = \hat B', $$

or, respectively,

$$ \begin{bmatrix} 1 & -\beta \\ \alpha & 1 \end{bmatrix} \begin{bmatrix} a_{ll} & a_{lm} \\ a_{lm} & a_{mm} \end{bmatrix} \begin{bmatrix} 1 & \alpha \\ -\beta & 1 \end{bmatrix} = \begin{bmatrix} a'_{ll} & a'_{lm} \\ a'_{lm} & a'_{mm} \end{bmatrix}, \qquad \begin{bmatrix} 1 & -\beta \\ \alpha & 1 \end{bmatrix} \begin{bmatrix} b_{ll} & b_{lm} \\ b_{lm} & b_{mm} \end{bmatrix} \begin{bmatrix} 1 & \alpha \\ -\beta & 1 \end{bmatrix} = \begin{bmatrix} b'_{ll} & b'_{lm} \\ b'_{lm} & b'_{mm} \end{bmatrix}. \qquad (26) $$

Condition (24) now reads a′_lm = b′_lm = 0. From relation (26) the system in the unknowns α and β is obtained:

$$ a'_{lm} = \alpha a_{ll} + (1 - \alpha\beta) a_{lm} - \beta a_{mm} = 0, \qquad b'_{lm} = \alpha b_{ll} + (1 - \alpha\beta) b_{lm} - \beta b_{mm} = 0. \qquad (27) $$

Eliminating the nonlinear terms in both equations we obtain

$$ \alpha = \frac{\nabla_m}{\nu}, \qquad \beta = \frac{\nabla_l}{\nu}, \qquad (28) $$

where ν is a solution of the equation

$$ \nu^2 - \nu \nabla_{lm} - \nabla_l \nabla_m = 0 \qquad (29) $$

and

$$ \nabla_l = a_{ll} b_{lm} - b_{ll} a_{lm}, \qquad \nabla_m = a_{mm} b_{lm} - b_{mm} a_{lm}, \qquad \nabla_{lm} = a_{ll} b_{mm} - a_{mm} b_{ll}. \qquad (30) $$

Defining

$$ \nabla = (\nabla_{lm})^2 + 4 \nabla_l \nabla_m, \qquad (31) $$

we obtain

$$ \nu = \tfrac{1}{2}\,\mathrm{sgn}(\nabla_{lm}) \big( | \nabla_{lm} | \pm \sqrt{\nabla} \big). $$

The algorithm is more stable if α and β are smaller in modulus, so we take

$$ \nu = \nu_+ = \tfrac{1}{2}\,\mathrm{sgn}(\nabla_{lm}) \big( | \nabla_{lm} | + \sqrt{\nabla} \big). \qquad (32) $$

From the above formula we see that a necessary condition for carrying out this step is ∇ ≥ 0. Let us show that this condition is fulfilled in each step, due to the definiteness of the pairs (A^{(k)}, B^{(k)}), k ≥ 1.

Proposition 3 Let the pair (A, B) be definite. Then the following holds:

(i) ∇ ≥ 0;

(ii) The following statements are equivalent:

(a) ∇ = 0;

(b) ∇_lm = ∇_l = ∇_m = 0;

(c) There exist real constants s and t, |s| + |t| > 0, such that s Â + t B̂ = O.

Proof: The proof can be found in [4] and [3], but for completeness of exposition we present it below.

Using relation (6) we can define the pair (A_φ, B_φ) such that the matrix B_φ is positive definite. Let us calculate the quantities (∇_lm)_φ, (∇_l)_φ, (∇_m)_φ and (∇)_φ from the pair (A_φ, B_φ) using relations (30) and (31). It is easy to verify that

∇_l = (∇_l)_φ, ∇_m = (∇_m)_φ, ∇_lm = (∇_lm)_φ, ∇ = (∇)_φ.

Therefore, without loss of generality, we can assume that the matrix B from the pair (A, B) is positive definite. The statement (c) is now equivalent to the statement Â = c B̂, c real.

(i) With the notation

$$ x = a_{ll} \sqrt{\frac{b_{mm}}{b_{ll}}}, \qquad y = a_{mm} \sqrt{\frac{b_{ll}}{b_{mm}}}, \qquad z = \frac{b_{lm}}{\sqrt{b_{ll} b_{mm}}}, $$

the following identity holds:

$$ \nabla = b_{ll} b_{mm} \big[ (x - y)^2 + 4 (x z - a_{lm})(y z - a_{lm}) \big]. $$

Since the right side of the above relation is a quadratic polynomial P in a_lm with positive leading coefficient, it attains its minimum at a_lm = z(x+y)/2, so we have

$$ \nabla = b_{ll} b_{mm} P(a_{lm}) \ge b_{ll} b_{mm} P\Big( \frac{z(x+y)}{2} \Big) = b_{ll} b_{mm} (x - y)^2 (1 - z^2) = \nabla_{lm}^2 \Big( 1 - \frac{b_{lm}^2}{b_{ll} b_{mm}} \Big) \ge 0. \qquad (33) $$

In the last inequality we have used the assumption that the matrix B, and therefore the matrix B̂, is positive definite.

(ii) Let (a) hold. Relation (33) implies that ∇_lm = 0. The matrix B is by assumption positive definite. Therefore b_ll > 0, b_mm > 0, and the equality ∇_lm = 0 can be written as

$$ \frac{b_{mm}}{b_{ll}}\, a_{ll} = a_{mm}. $$

Using this relation we can write

$$ \nabla_m = a_{mm} b_{lm} - b_{mm} a_{lm} = \frac{b_{mm}}{b_{ll}} ( a_{ll} b_{lm} - b_{ll} a_{lm} ) = \frac{b_{mm}}{b_{ll}}\, \nabla_l, $$

or b_ll ∇_m = b_mm ∇_l. From the definition of ∇, since ∇_lm = 0, we conclude that ∇_l = ∇_m = 0. This gives (b).

Let (b) hold. Then

$$ a_{mm} = \frac{b_{mm}}{b_{ll}}\, a_{ll}, \qquad a_{lm} = \frac{b_{lm}}{b_{ll}}\, a_{ll}. $$

Therefore Â = c B̂, where

$$ c = \frac{a_{ll}}{b_{ll}} = \frac{a_{mm}}{b_{mm}}, $$

and (c) holds. Let (c) hold. Then obviously (b) holds, and if (b) holds, then (a) holds. Q.E.D.

Now we see that the Falk–Langemeyer method can be applied to all definite matrix pairs. Note that definitizing shifts are not used and need not be known.

We have two special cases in the algorithm. If ∇ = 0, then the matrices Â and B̂ are proportional, as shown in Proposition 3. Therefore the two equations in the system (27) are linearly dependent and the system has a parametric solution in one of the following forms:

$$ (\alpha,\beta) = \Big( \frac{c\, b_{mm} - b_{lm}}{b_{ll} - c\, b_{lm}},\; c \Big), \qquad (\alpha,\beta) = \Big( \frac{c\, a_{mm} - a_{lm}}{a_{ll} - c\, a_{lm}},\; c \Big), $$

$$ (\alpha,\beta) = \Big( c,\; \frac{c\, b_{ll} + b_{lm}}{b_{mm} + c\, b_{lm}} \Big), \qquad (\alpha,\beta) = \Big( c,\; \frac{c\, a_{ll} + a_{lm}}{a_{mm} + c\, a_{lm}} \Big), $$

where c is real. For every c at least one of the quotients is well defined, due to the definiteness of the pair (Â, B̂). It is best to set c = 0 to ensure that α_k and β_k tend to zero together with ε(A^{(k)}, B^{(k)}) as k → ∞ (see step (5a) in Algorithm 4). Setting c = 0 also reduces the operation count. This choice yields four possibilities for (α, β):

$$ \Big( -\frac{b_{lm}}{b_{ll}},\; 0 \Big), \qquad \Big( -\frac{a_{lm}}{a_{ll}},\; 0 \Big), \qquad (34) $$

$$ \Big( 0,\; \frac{b_{lm}}{b_{mm}} \Big), \qquad \Big( 0,\; \frac{a_{lm}}{a_{mm}} \Big). \qquad (35) $$

Due to the definiteness of the pair (A, B), we have

$$ | a_{ii} | + | b_{ii} | > 0, \quad i = 1, \ldots, n, \qquad (36) $$

so at least one quotient is defined in each of the relations (34) and (35). In order to obtain a better condition of the transformation matrix, we choose the relation in which the defined quotient has the smaller absolute value. If both quotients in the chosen relation are defined, then they are equal, and for numerical reasons it is better to choose the one in which the sum of squares of the numerator and the denominator is greater.

The second special case is when ∇ > 0 and ∇_lm = 0. This means that the diagonals of the matrices Â and B̂ are proportional, while the matrices themselves are not. Then sgn(∇_lm) is not defined. Since ∇_l ∇_m > 0, we have sgn(∇_l) = sgn(∇_m). Substituting sgn(∇_lm) by sgn(∇_l) in equation (32) gives

$$ \nu = \mathrm{sgn}(\nabla_l) \sqrt{ \nabla_l \nabla_m }. $$

Inserting this in equation (28) gives, after a simple calculation,

$$ \alpha = \sqrt{\frac{b_{mm}}{b_{ll}}} = \sqrt{\frac{a_{mm}}{a_{ll}}}, \qquad \beta = \frac{1}{\alpha}. \qquad (37) $$

Relation (36) implies that at least one of the quotients b_mm/b_ll and a_mm/a_ll is defined and different from zero. If both quotients are defined then they are equal, and it is better to choose the one in which the sum of squares of the numerator and the denominator is greater.

We can now define the algorithm of the method:

Algorithm 4 A definite matrix pair (A, B) is given.

(1) Set k = 1, A^{(1)} = A, B^{(1)} = B, F^{(1)} = I, and choose the pivot strategy.

(2) Choose the pivot pair (l, m) = (l(k), m(k)) according to the strategy.

(3) If a^{(k)}_{lm} = b^{(k)}_{lm} = 0, then set A^{(k+1)} = A^{(k)}, B^{(k+1)} = B^{(k)}, F^{(k+1)} = F^{(k)}, k = k + 1, and go to step (2). Otherwise go to step (4).

(4) Calculate the quantities ∇_l^{(k)}, ∇_m^{(k)}, ∇_lm^{(k)} and ∇^{(k)} from the formulas

$$ \nabla_l^{(k)} = a^{(k)}_{ll} b^{(k)}_{lm} - b^{(k)}_{ll} a^{(k)}_{lm}, \qquad \nabla_m^{(k)} = a^{(k)}_{mm} b^{(k)}_{lm} - b^{(k)}_{mm} a^{(k)}_{lm}, $$
$$ \nabla_{lm}^{(k)} = a^{(k)}_{ll} b^{(k)}_{mm} - a^{(k)}_{mm} b^{(k)}_{ll}, \qquad \nabla^{(k)} = (\nabla_{lm}^{(k)})^2 + 4 \nabla_l^{(k)} \nabla_m^{(k)}. $$

(5) (a) If ∇^{(k)} = 0, perform the following steps: If |b^{(k)}_{ll}| ≥ |a^{(k)}_{ll}|, then set α_k = −b^{(k)}_{lm}/b^{(k)}_{ll}; otherwise set α_k = −a^{(k)}_{lm}/a^{(k)}_{ll}. If |b^{(k)}_{mm}| ≥ |a^{(k)}_{mm}|, then set β_k = b^{(k)}_{lm}/b^{(k)}_{mm}; otherwise set β_k = a^{(k)}_{lm}/a^{(k)}_{mm}. Finally, if |α_k| ≤ |β_k|, then set β_k = 0; otherwise set α_k = 0.

(b) If ∇^{(k)} > 0, perform the following steps:

(i) If ∇_lm^{(k)} ≠ 0, then calculate

$$ \nu_k = \tfrac{1}{2}\,\mathrm{sgn}(\nabla_{lm}^{(k)}) \big( | \nabla_{lm}^{(k)} | + \sqrt{\nabla^{(k)}} \big), \qquad \alpha_k = \frac{\nabla_m^{(k)}}{\nu_k}, \qquad \beta_k = \frac{\nabla_l^{(k)}}{\nu_k}. $$

(ii) If ∇_lm^{(k)} = 0, then, according to relation (37), calculate

$$ \alpha_k = \sqrt{\frac{b^{(k)}_{mm}}{b^{(k)}_{ll}}} = \sqrt{\frac{a^{(k)}_{mm}}{a^{(k)}_{ll}}}, \qquad \beta_k = \frac{1}{\alpha_k}. $$

If both quotients for α_k are defined, then choose the one in which the sum of squares of the numerator and the denominator is greater.

(6) Perform the transformation

$$ A^{(k+1)} = F_k^T A^{(k)} F_k, \qquad B^{(k+1)} = F_k^T B^{(k)} F_k, \qquad (38) $$
$$ F^{(k+1)} = F^{(k)} F_k. \qquad (39) $$

(7) Set k = k + 1 and go to step (2).

Since the matrices A^{(k)}, B^{(k)}, A^{(k+1)} and B^{(k+1)} are symmetric, it is enough to store and to transform only their upper triangles. In the transformation (38) only the l-th and m-th rows and columns of the matrices A^{(k)} and B^{(k)} are changed, and in the transformation (39) only the l-th and m-th columns of the matrix F^{(k)} are changed. Note that the eigenvalues can be found without calculating the matrices F^{(k)}, k ≥ 1, so the transformation (39) can be omitted. This reduces the operation count by about fifty percent. Stopping criteria for the infinite iterative procedure defined by this algorithm are described in Section 5. From now on, the term "Falk–Langemeyer method" denotes the method described by Algorithm 4.
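To make the above concrete, here is a sketch of one step of Algorithm 4 in Python/NumPy. It is our illustration, not code from the paper: function and variable names are ours, the "sum of squares greater" tie-break is our reading of steps (5a) and (5b)(ii), and pivot-strategy bookkeeping is left to the caller.

```python
import numpy as np

def fl_step(A, B, l, m, F=None, tol=0.0):
    """One step of the Falk-Langemeyer method (Algorithm 4) on the pivot (l, m).

    A, B are real symmetric arrays of order n, modified in place; l < m are
    0-based pivot indices; F, if given, accumulates the eigenvector matrix.
    """
    all_, alm, amm = A[l, l], A[l, m], A[m, m]
    bll, blm, bmm = B[l, l], B[l, m], B[m, m]
    if alm == 0.0 and blm == 0.0:               # step (3): nothing to annihilate
        return
    # step (4): the quantities of relation (30) and (31)
    nab_l = all_ * blm - bll * alm
    nab_m = amm * blm - bmm * alm
    nab_lm = all_ * bmm - amm * bll
    nab = nab_lm**2 + 4.0 * nab_l * nab_m
    if nab <= tol:                               # step (5a): Ahat, Bhat proportional
        alpha = -blm / bll if abs(bll) >= abs(all_) else -alm / all_
        beta = blm / bmm if abs(bmm) >= abs(amm) else alm / amm
        if abs(alpha) <= abs(beta):
            beta = 0.0
        else:
            alpha = 0.0
    elif nab_lm != 0.0:                          # step (5b)(i), the generic case (32), (28)
        nu = 0.5 * np.sign(nab_lm) * (abs(nab_lm) + np.sqrt(nab))
        alpha, beta = nab_m / nu, nab_l / nu
    else:                                        # step (5b)(ii): proportional diagonals, (37)
        # by (36) at least one ratio is defined and positive; tie-break is ours
        alpha = (np.sqrt(bmm / bll) if bll**2 + bmm**2 >= all_**2 + amm**2
                 else np.sqrt(amm / all_))
        beta = 1.0 / alpha
    # step (6): congruence with the plane matrix whose restriction is (23)
    Fk = np.eye(A.shape[0])
    Fk[l, l], Fk[l, m], Fk[m, l], Fk[m, m] = 1.0, alpha, -beta, 1.0
    A[:] = Fk.T @ A @ Fk
    B[:] = Fk.T @ B @ Fk
    A[l, m] = A[m, l] = 0.0                      # exact zeros by construction (24)
    B[l, m] = B[m, l] = 0.0
    if F is not None:
        F[:] = F @ Fk                            # relation (39)
```

In exact arithmetic ∇^{(k)} ≥ 0 by Proposition 3; the tol parameter merely guards against roundoff. For brevity the sketch uses dense matrix products, whereas a production version would update only the l-th and m-th rows and columns, as noted above.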

The Zimmermann method. We shall now relate the Falk–Langemeyer method to another method for solving the generalized eigenvalue problem. This method is due to K. Zimmermann, who roughly described it in her thesis [9]. Later on, in his thesis [4], Hari derived its algorithm and proved its quadratic convergence.

The Zimmermann method is defined for symmetric matrix pairs (A, B) where the matrix B is positive definite. We shall denote this fact by B > 0. At the beginning of the iterative procedure the initial pair (A, B) is normalized so that

$$ A^{(1)} = D A D, \qquad B^{(1)} = D B D, \qquad \text{where} \quad D = \mathrm{diag}\Big( \frac{1}{\sqrt{b_{11}}}, \ldots, \frac{1}{\sqrt{b_{nn}}} \Big). $$

Therefore b^{(1)}_{ii} = 1, i = 1, ..., n. The Zimmermann method constructs a sequence of pairs (A^{(k)}, B^{(k)}), k ≥ 1, by the rule

$$ A^{(k+1)} = Z_k^T A^{(k)} Z_k, \qquad B^{(k+1)} = Z_k^T B^{(k)} Z_k, \quad k \ge 1. $$

The nonsingular matrices Z_k are chosen to preserve the units on the diagonal of B^{(k+1)} (automatic normalization at each step) and to annihilate the pivot elements. In [4] it is shown that for k ≥ 1,

$$ \hat Z_k = \frac{1}{\sqrt{1 - (b^{(k)}_{lm})^2}} \begin{bmatrix} \cos\varphi_k & \sin\psi_k \\ -\sin\varphi_k & \cos\psi_k \end{bmatrix}, $$

where

$$ \cos\varphi_k = \cos\sigma_k + \tau_k (\sin\sigma_k - \tau_k \cos\sigma_k), \qquad \sin\varphi_k = \sin\sigma_k - \tau_k (\cos\sigma_k + \tau_k \sin\sigma_k), $$
$$ \cos\psi_k = \cos\sigma_k - \tau_k (\sin\sigma_k + \tau_k \cos\sigma_k), \qquad \sin\psi_k = \sin\sigma_k + \tau_k (\cos\sigma_k - \tau_k \sin\sigma_k), $$

$$ \tau_k = \frac{\sqrt{1 + b^{(k)}_{lm}} - \sqrt{1 - b^{(k)}_{lm}}}{\sqrt{1 + b^{(k)}_{lm}} + \sqrt{1 - b^{(k)}_{lm}}}, $$

$$ \tan 2\sigma_k = \frac{2 a^{(k)}_{lm} - (a^{(k)}_{ll} + a^{(k)}_{mm})\, b^{(k)}_{lm}}{(a^{(k)}_{mm} - a^{(k)}_{ll}) \sqrt{1 - (b^{(k)}_{lm})^2}}, \qquad -\frac{\pi}{4} \le \sigma_k \le \frac{\pi}{4}. $$

If a^{(k)}_{lm} = b^{(k)}_{lm} = 0, we set σ_k = 0. If the (l,m)-restrictions of A^{(k)} and B^{(k)} are proportional and b^{(k)}_{lm} ≠ 0, we set σ_k = π/4.

If the matrix B is not positive definite but the pair (A, B) is, then there exists a definitizing shift μ such that the matrix A − μB is positive definite. If this shift is known in advance, then the Zimmermann method can be applied to the pair (A, B) in the sense that each Z_k is computed from the pair (B^{(k)}, A^{(k)} − μ B^{(k)}).

Although the Zimmermann method seems quite different from the Falk–Langemeyer method, the two methods are closely related. The following theorem gives a precise formulation of this relationship. For this occasion only, we assume that in step (5a) of Algorithm 4 (that is, when ∇^{(k)} = 0) the parameters α_k and β_k are computed according to the formulae (37). For this version of the Falk–Langemeyer method the following holds:

Theorem 5 Let A and B be symmetric matrices of order n and let B be positive definite. Let the sequences (A^{(k)}, B^{(k)}), k ≥ 1, and (A^{(k)′}, B^{(k)′}), k ≥ 1, be generated from the starting pair (A, B) by the Falk–Langemeyer and the Zimmermann method, respectively. If the corresponding pivot strategies are the same, then

$$ A^{(k)\prime} = D^{(k)} A^{(k)} D^{(k)}, \qquad B^{(k)\prime} = D^{(k)} B^{(k)} D^{(k)}, \quad k \ge 1, $$

where

$$ D^{(k)} = \mathrm{diag}\Big( \frac{1}{\sqrt{b^{(k)}_{11}}}, \ldots, \frac{1}{\sqrt{b^{(k)}_{nn}}} \Big), \quad k \ge 1. $$

Proof: The proof of this theorem is found in [4], Section 1.3. Q.E.D.

Let us suppose again that the matrix B is not positive definite while the pair (A, B) is, and that a positive definitizing shift μ is known in advance. Let us apply to the pair (A, B) the Zimmermann method in the sense mentioned above and the version of the Falk–Langemeyer method which we used in Theorem 5. It is easy to see that the parameters α_k and β_k of the Falk–Langemeyer method are invariant under the transformation (A, B) → (B, A − μB). Therefore, Theorem 5 holds in this case as well, with

$$ D^{(k)} = \mathrm{diag}\Big( \frac{1}{\sqrt{a^{(k)}_{11} - \mu b^{(k)}_{11}}}, \ldots, \frac{1}{\sqrt{a^{(k)}_{nn} - \mu b^{(k)}_{nn}}} \Big), \quad k \ge 1. $$

We can conclude that if the matrix B of the starting pair is positive definite or a definitizing shift is known in advance, then the Falk–Langemeyer (Zimmermann) method is the fast scaled (normalized) version of the Zimmermann (Falk–Langemeyer) method.
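The scaling in Theorem 5 is easy to realize in code. The following sketch is ours, not from the paper; it only performs the diagonal scaling that maps a Falk–Langemeyer iterate onto the corresponding Zimmermann iterate:

```python
import numpy as np

def zimmermann_scaling(A, B):
    """Scale (A, B) by D = diag(1/sqrt(b_ii)) as in Theorem 5 (requires b_ii > 0)."""
    d = 1.0 / np.sqrt(np.diag(B))
    return A * np.outer(d, d), B * np.outer(d, d)
```

After the scaling the diagonal of the second matrix consists of ones, which is exactly the normalization the Zimmermann method maintains at every step.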

4 Quadratic convergence

In this section we prove that the Falk–Langemeyer method is quadratically convergent if the starting definite pair has simple eigenvalues and the pivot strategy is cyclic. Definitizing shifts are not used and need not be known. We first state the result about the quadratic convergence of the Zimmermann method, and show to what extent this result can be applied to the Falk–Langemeyer method if the matrix B is positive definite. Then we define the quadratic convergence for the Falk–Langemeyer method. In Subsection 4.1 we prove preliminary results which we use in the proof of the quadratic convergence of the Falk–Langemeyer method in Subsection 4.2.

The result about the quadratic convergence of the Zimmermann method can be summarized as follows. Let the sequence (A^{(k)}, B^{(k)}), k ≥ 1, be generated by the Zimmermann method from the pair (A, B), B > 0, and let ε_k = ε(A^{(k)}, B^{(k)}), where ε is defined by relation (7). Note that ε_k is a natural measure for the convergence of the Zimmermann method since each matrix B^{(k)} has units along the diagonal. We say that the Zimmermann method is quadratically convergent on the pair (A, B) if ε_k → 0 as k → ∞ and there exist a constant c₀ > 0 and an integer r₀ ≥ 1 such that for r ≥ r₀,

$$ \varepsilon_{(r+1)N+1} \le c_0\, \varepsilon_{rN+1}^2. $$

Hence, of special importance are the conditions under which the above relation holds for r = 1. We call them asymptotic assumptions. Let

$$ \sigma = \mathrm{spr}(A,B) = \max_{1 \le i \le n} | \lambda_i |, \qquad \delta = \frac{1}{3} \min_{i \ne j} | \lambda_i - \lambda_j |. $$

Theorem 6 Let the sequence (A^{(k)}, B^{(k)}), k ≥ 1, be generated by the Zimmermann method from the starting pair (A, B), B > 0, and let the asymptotic assumptions

$$ S(B^{(1)}) \le \frac{1}{2N}, \qquad \sqrt{1 + \sigma^2}\; \varepsilon_1 < \delta \qquad (40) $$

hold. If the eigenvalues of the pair (A, B) are simple and the pivot strategy is cyclic, then

$$ \varepsilon_{N+1} \le \sqrt{N (1 + \sigma^2)}\; \frac{\varepsilon_1^2}{\delta}. \qquad (41) $$

Proof: The proof of this theorem is found in [4], Section 3.3. Q.E.D.

In Theorem 6 the term involving σ appears in the assumption (40) and in the assertion (41) because the matrix B^{(1)} is not diagonal and the matrix A is not normalized. From Theorem 5 we see that Theorem 6 holds for the Falk–Langemeyer method provided that step (5a) of Algorithm 4 is appropriately changed, the matrix B is positive definite, and the pairs (A^{(k)}, B^{(k)}) generated by the Falk–Langemeyer method are normalized so that b^{(k)}_{ii} = 1, i = 1, ..., n, k ≥ 1.

In the rest of this section we prove that the Falk–Langemeyer method defined by Algorithm 4 is quadratically convergent on definite matrix pairs with simple eigenvalues if the pivot strategy is cyclic. We first have to define the measure for the quadratic convergence. Let (A, B) be a definite pair. We shall use the measure ε̃ = ε̃(A, B) defined by

ε̃(A, B) = ε(Ã, B̃),

where Ã and B̃ are given by relations (12), (5) and (4). The measure ε̃ enables us to use the results of Corollary 2, and it takes into account the diagonal elements of the matrices A and B. Note that the measure ε(A, B) is generally not a proper measure for the almost diagonality of the pair (A, B), since it takes no account of the diagonals of the matrices A and B.

Let the sequence of pairs

$$ (A^{(1)}, B^{(1)}),\; (A^{(2)}, B^{(2)}),\; \ldots \qquad (42) $$

be generated by the Falk–Langemeyer method from the starting definite pair (A, B). For k ≥ 1 we set

$$ \tilde\varepsilon_k = \tilde\varepsilon(A^{(k)}, B^{(k)}) = \varepsilon(\tilde A^{(k)}, \tilde B^{(k)}), \qquad (43) $$

$$ \tilde A^{(k)} = D_k A^{(k)} D_k, \qquad \tilde B^{(k)} = D_k B^{(k)} D_k, \qquad (44) $$

$$ D_k = \mathrm{diag}\big( 1/d^{(k)}_1, \ldots, 1/d^{(k)}_n \big), \qquad (45) $$

$$ d^{(k)}_i = \sqrt[4]{(a^{(k)}_{ii})^2 + (b^{(k)}_{ii})^2}, \quad i = 1, \ldots, n. \qquad (46) $$

From relations (44), (45) and (46) we see that the pairs (Ã^{(k)}, B̃^{(k)}) are normalized in the sense that

$$ (\tilde a^{(k)}_{ii})^2 + (\tilde b^{(k)}_{ii})^2 = 1, \quad i = 1, \ldots, n. \qquad (47) $$
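In code, the normalization (44)-(46) and the measure ε̃ of relation (43) look as follows. This is an illustrative sketch of ours; pair_off_norm is the helper from the earlier sketch:

```python
import numpy as np

def normalized_pair(A, B):
    """Return (Atilde, Btilde) of relations (44)-(46)."""
    d = (np.diag(A)**2 + np.diag(B)**2) ** 0.25   # d_i, relation (46)
    Dk = 1.0 / d                                   # diagonal of D_k, relation (45)
    return A * np.outer(Dk, Dk), B * np.outer(Dk, Dk)

def eps_tilde(A, B):
    """epsilon-tilde(A, B) = epsilon(Atilde, Btilde), relation (43)."""
    At, Bt = normalized_pair(A, B)
    return pair_off_norm(At, Bt)   # defined earlier
```

After the normalization the diagonal pairs satisfy (ã_ii)² + (b̃_ii)² = 1, which is exactly relation (47).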

Definition 7 The Falk–Langemeyer method is quadratically convergent on the pair (A, B) if ε̃_k → 0 as k → ∞ and there exist a constant c₀ > 0 and an integer r₀ ≥ 1 such that for r ≥ r₀,

$$ \tilde\varepsilon_{(r+1)N+1} \le c_0\, \tilde\varepsilon_{rN+1}^2. \qquad (48) $$

From Definition 7 we see that ultimately ε̃_k decreases quadratically per cycle. At the end of Subsection 4.2 we shall show that the quadratic convergence implies the convergence of the sequence (42) towards a pair of diagonal matrices (D_A, D_B), where

$$ D_A = \mathrm{diag}(a_1, \ldots, a_n), \qquad D_B = \mathrm{diag}(b_1, \ldots, b_n). \qquad (49) $$

Here λ_i = [a_i, b_i], i = 1, ..., n, are the eigenvalues of the pair (A, B). Finally, we shall show that ultimately the quadratic reduction of ε̃_{rN+1} implies the quadratic reduction of ε_{rN+1} and vice versa. (Here ε_k measures the off-diagonal elements of the pairs from the sequence (42) and should not be confused with the quantity used in connection with the Zimmermann method.)

In order to be able to observe the measure ε̃ we must solve one more problem. The transformation matrices F_k are calculated from the unnormalized pairs (A^{(k)}, B^{(k)}) and are therefore difficult to estimate. To solve this problem we shall observe the sequence obtained from the pair (A, B) by the following process: normalization, one step of the method, normalization, one step of the method, ... This sequence reads

$$ (\tilde A_*^{(1)}, \tilde B_*^{(1)}),\; (A_*^{(2)}, B_*^{(2)}),\; (\tilde A_*^{(2)}, \tilde B_*^{(2)}),\; (A_*^{(3)}, B_*^{(3)}),\; (\tilde A_*^{(3)}, \tilde B_*^{(3)}),\; \ldots, \qquad (50) $$

where

$$ (\tilde A_*^{(1)}, \tilde B_*^{(1)}) = (\tilde A^{(1)}, \tilde B^{(1)}), \qquad (51) $$

and for k ≥ 1,

$$ A_*^{(k+1)} = \tilde F_{*k}^T \tilde A_*^{(k)} \tilde F_{*k}, \qquad B_*^{(k+1)} = \tilde F_{*k}^T \tilde B_*^{(k)} \tilde F_{*k}, \qquad (52) $$

$$ \tilde A_*^{(k+1)} = D_{*,k+1} A_*^{(k+1)} D_{*,k+1}, \qquad \tilde B_*^{(k+1)} = D_{*,k+1} B_*^{(k+1)} D_{*,k+1}, \qquad (53) $$

$$ D_{*,k+1} = \mathrm{diag}\big( 1/d^{(k+1)}_{*1}, \ldots, 1/d^{(k+1)}_{*n} \big), \qquad (54) $$

$$ d^{(k+1)}_{*i} = \sqrt[4]{(a^{(k+1)}_{*ii})^2 + (b^{(k+1)}_{*ii})^2}, \quad i = 1, \ldots, n. \qquad (55) $$

Of course, the sequences (42) and (50) are generated using the same pivot strategy. The matrices F̃_{*k} are calculated according to Algorithm 4, but from the pairs (Ã_*^{(k)}, B̃_*^{(k)}). Since in the transition from (Ã_*^{(k)}, B̃_*^{(k)}) to (A_*^{(k+1)}, B_*^{(k+1)}) only the diagonal elements at the positions (l,l) and (m,m) change, we conclude that

$$ d^{(k+1)}_{*i} = \sqrt[4]{(a^{(k+1)}_{*ii})^2 + (b^{(k+1)}_{*ii})^2} = \sqrt[4]{(\tilde a^{(k)}_{*ii})^2 + (\tilde b^{(k)}_{*ii})^2} = 1, \quad i = 1, \ldots, n,\; i \ne l, m. \qquad (56) $$

We will now show that the operations of normalization and of carrying out one step of the algorithm commute. This is equivalent to showing that Ã_*^{(k)} = Ã^{(k)} and B̃_*^{(k)} = B̃^{(k)} for k ≥ 1. Let F̃_k be the transformation matrices obtained according to Algorithm 4 from the pairs (Ã^{(k)}, B̃^{(k)}), k ≥ 1. The following proposition shows that the matrices F_k and F̃_k are simply related.

Proposition 8 For k ≥ 1, F̃_k = D_k^{-1} F_k D_k.

Proof: Because of relations (44), (45) and (46) we have

$$ \hat{\tilde A}^{(k)} = \begin{bmatrix} \dfrac{a^{(k)}_{ll}}{(d^{(k)}_l)^2} & \dfrac{a^{(k)}_{lm}}{d^{(k)}_l d^{(k)}_m} \\[2mm] \dfrac{a^{(k)}_{lm}}{d^{(k)}_l d^{(k)}_m} & \dfrac{a^{(k)}_{mm}}{(d^{(k)}_m)^2} \end{bmatrix}, \qquad \hat{\tilde B}^{(k)} = \begin{bmatrix} \dfrac{b^{(k)}_{ll}}{(d^{(k)}_l)^2} & \dfrac{b^{(k)}_{lm}}{d^{(k)}_l d^{(k)}_m} \\[2mm] \dfrac{b^{(k)}_{lm}}{d^{(k)}_l d^{(k)}_m} & \dfrac{b^{(k)}_{mm}}{(d^{(k)}_m)^2} \end{bmatrix}. \qquad (57) $$

The assertion is now obtained by simply using relation (57) in Algorithm 4 and calculating the matrix F̃_k. Q.E.D.

Proposition 9 For k ≥ 1 the following holds:

(i) (Ã_*^{(k)}, B̃_*^{(k)}) = (Ã^{(k)}, B̃^{(k)});

(ii) D_k = D_{*1} D_{*2} D_{*3} ⋯ D_{*k}.

Proof: The proof is by induction with respect to k.

(i) For k = 1 the assertion holds due to relation (51). Suppose that the assertion holds for some k. This means that

$$ \tilde A_*^{(k)} = \tilde A^{(k)}, \qquad \tilde B_*^{(k)} = \tilde B^{(k)}, \qquad \tilde F_{*k} = \tilde F_k. \qquad (58) $$

From relation (52) it follows that A_*^{(k+1)} = F̃_{*k}^T Ã_*^{(k)} F̃_{*k}, which, because of relation (58), implies that A_*^{(k+1)} = F̃_k^T Ã^{(k)} F̃_k. Since relation (44) and Proposition 8 imply

$$ A_*^{(k+1)} = D_k F_k^T D_k^{-1} \cdot D_k A^{(k)} D_k \cdot D_k^{-1} F_k D_k = D_k F_k^T A^{(k)} F_k D_k = D_k A^{(k+1)} D_k, \qquad (59) $$

we conclude that the normalizations of the matrices A_*^{(k+1)} and A^{(k+1)} give the same matrix. Now we use the same argument to show that B̃_*^{(k+1)} = B̃^{(k+1)} for k ≥ 1, and (i) is proved.

(ii) For k = 1 the assertion is trivially fulfilled, since D_{*1} = D_1. Let the assertion hold for some k. From relations (59) and (53) and assertion (i) we obtain

$$ D_{*,k+1} D_k A^{(k+1)} D_k D_{*,k+1} = D_{*,k+1} A_*^{(k+1)} D_{*,k+1} = \tilde A_*^{(k+1)} = \tilde A^{(k+1)} = D_{k+1} A^{(k+1)} D_{k+1}. $$

It is obvious that D_{k+1} = D_k D_{*,k+1}, and inserting the induction assumption we conclude that (ii) holds. Q.E.D.

From Proposition 9 we see that relations (50), (52) and (53) can be written as

$$ (\tilde A^{(1)}, \tilde B^{(1)}),\; (A_*^{(2)}, B_*^{(2)}),\; (\tilde A^{(2)}, \tilde B^{(2)}),\; (A_*^{(3)}, B_*^{(3)}),\; (\tilde A^{(3)}, \tilde B^{(3)}),\; \ldots \qquad (60) $$

$$ A_*^{(k+1)} = \tilde F_k^T \tilde A^{(k)} \tilde F_k, \qquad B_*^{(k+1)} = \tilde F_k^T \tilde B^{(k)} \tilde F_k, \qquad (61) $$

$$ \tilde A^{(k+1)} = D_{*,k+1} A_*^{(k+1)} D_{*,k+1}, \qquad \tilde B^{(k+1)} = D_{*,k+1} B_*^{(k+1)} D_{*,k+1}. \qquad (62) $$

Relations (60), (61), (62), (54) and (55) define the normalized Falk–Langemeyer method. We use the normalized method only as an aid to obtain information about the quantity ε̃_k.
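Proposition 9(i) is easy to sanity-check numerically: normalizing and then performing one step gives, after renormalization, the same pair as normalizing the unnormalized iterate. The following sketch is ours; it reuses fl_step and normalized_pair from the earlier sketches:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = np.eye(n) + 0.01 * A                 # (A, B) definite: B is positive definite

A1, B1 = A.copy(), B.copy()              # one unnormalized Falk-Langemeyer step
fl_step(A1, B1, 0, 1)

A2, B2 = normalized_pair(A, B)           # normalize first, then one step
fl_step(A2, B2, 0, 1)

# the normalizations of both results coincide (Proposition 9(i))
for X, Y in zip(normalized_pair(A1, B1), normalized_pair(A2, B2)):
    assert np.allclose(X, Y)
```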

4.1 Preliminaries

Here we define the asymptotic assumptions and prove several lemmas which are used later in the proof of the quadratic convergence of the Falk–Langemeyer method. The quadratic convergence proof is based on the idea of Wilkinson (see [8]), which consists in estimating the growth of the already annihilated elements in the current cycle. To this end we must estimate the transformation parameters α̃_k and β̃_k, and also the growth of all off-diagonal elements in the current cycle. These two tasks are solved in Lemma 11, Lemma 13, Lemma 14 and Lemma 15. Lemma 10 gives us two numeric relations which are used in the proof. Lemma 11 and Lemma 12 estimate the transformation parameters α̃_k and β̃_k, and the measure ε̃_k, in one step. Lemma 13, Lemma 14 and Lemma 15 estimate the growth of α̃_k, β̃_k and ε̃_k during N consecutive steps. Lemma 15 is the most important for the proof of the quadratic convergence. In this subsection we do not assume that the pivot strategy is cyclic; therefore the results of this subsection hold for any pivot strategy. However, if the pivot strategy is cyclic, then Lemma 13, Lemma 14 and Lemma 15 explain the behaviour of α̃_k, β̃_k and ε̃_k during one cycle.

As we said in Section 1, the quadratic convergence can always be expected if the eigenvalues of problem (1) are simple. We will therefore use two quadratic convergence assumptions:

(A1) The eigenvalues of the pair (A, B) are simple, i.e. p = n ≥ 3.

(A2) The pair (A, B) is almost diagonal, i.e.

$$ \tilde\varepsilon_1 < \frac{\gamma}{12N}. $$

Asymptotic assumption (A2) is similar to the assumptions used in Theorem 6 and in the convergence results of some other Jacobi-type methods (see [4], []). Assumption (A1) implies

$$ N \ge 3 \qquad (63) $$

and

$$ \varepsilon_k = \Delta_k, \qquad \tilde\varepsilon_k = \tilde\Delta_k, \quad k \ge 1, \qquad (64) $$

where Δ_k = Δ(A^{(k)}, B^{(k)}) and Δ̃_k = Δ(Ã^{(k)}, B̃^{(k)}). We shall use the notation

$$ \tilde a_k = | \tilde a^{(k)}_{lm} |, \qquad \tilde b_k = | \tilde b^{(k)}_{lm} |, \quad k \ge 1. \qquad (65) $$

Lemma 10 Let r be an integer such that r ≥ 3 and let x be a nonnegative real number satisfying rx ≤ 1/2. Then the following inequalities hold:

$$ (1 - x)^{-r} \le 1 + \frac{7}{2}\, r x, \qquad (1 + x)^{r} \le 1 + \frac{3}{2}\, r x. $$

Proof: The proof of this lemma is elementary and can be found in [4]. Q.E.D.

The following lemma shows how the transformation parameters α̃_k and β̃_k from the matrices F̃_k are bounded by means of ε̃_k.

Lemma 11 Let the assumption (A1) hold. If for some k

$$ \tilde\varepsilon_k < \frac{\gamma}{3N}, \qquad (66) $$

then

$$ \max\{ | \tilde\alpha_k |, | \tilde\beta_k | \} \le \frac{0.34}{\gamma} \sqrt{ (\tilde a_k)^2 + (\tilde b_k)^2 }. \qquad (67) $$

Proof: Suppose that for some k relation (66) holds. Then Theorem 1 and Corollary 2 hold for the pair (Ã^{(k)}, B̃^{(k)}) as well. The assumption (A1) and the relations (63), (64) and (18) imply that there exists an ordering of the eigenvalues of the pair (A, B) such that

$$ \chi(\lambda_i, [\tilde a^{(k)}_{ii}, \tilde b^{(k)}_{ii}]) \le \frac{2}{\gamma}\, \tilde\varepsilon_k^2 < \frac{2\gamma}{9N^2} \le \frac{2\gamma}{81} < 0.025\,\gamma, \quad i = 1, \ldots, n. \qquad (68) $$

(Since p = n, the eigenvalues can be ordered so that the matrix P from Theorem 1 and Corollary 2 is the identity matrix.)

Applying the triangle inequality twice and using the definition (11) and relation (68), we obtain

$$ | \tilde\nabla^{(k)}_{lm} | = | \tilde a^{(k)}_{ll} \tilde b^{(k)}_{mm} - \tilde a^{(k)}_{mm} \tilde b^{(k)}_{ll} | = \chi([\tilde a^{(k)}_{ll}, \tilde b^{(k)}_{ll}], [\tilde a^{(k)}_{mm}, \tilde b^{(k)}_{mm}]) $$
$$ \ge \chi(\lambda_l, \lambda_m) - \chi(\lambda_l, [\tilde a^{(k)}_{ll}, \tilde b^{(k)}_{ll}]) - \chi(\lambda_m, [\tilde a^{(k)}_{mm}, \tilde b^{(k)}_{mm}]) > 3\gamma - 2 \cdot 0.025\,\gamma = 2.95\,\gamma. \qquad (69) $$

It is obvious that |∇̃^{(k)}_{lm}| ≠ 0. This excludes the cases (5a) and (5b)(ii) of Algorithm 4. Therefore, we have

$$ \max\{ | \tilde\alpha_k |, | \tilde\beta_k | \} \le \frac{2}{| \tilde\nabla^{(k)}_{lm} | + \sqrt{\tilde\nabla^{(k)}}}\, \max\{ | \tilde\nabla^{(k)}_l |, | \tilde\nabla^{(k)}_m | \}. \qquad (70) $$

From the Cauchy–Schwarz inequality and relations (47) and (65) we have

$$ | \tilde\nabla^{(k)}_l | = | \tilde a^{(k)}_{ll} \tilde b^{(k)}_{lm} - \tilde b^{(k)}_{ll} \tilde a^{(k)}_{lm} | \le \sqrt{(\tilde a^{(k)}_{ll})^2 + (\tilde b^{(k)}_{ll})^2}\, \sqrt{(\tilde a^{(k)}_{lm})^2 + (\tilde b^{(k)}_{lm})^2} = \sqrt{(\tilde a_k)^2 + (\tilde b_k)^2}. $$

The same estimate holds for ∇̃^{(k)}_m, and therefore

$$ \max\{ | \tilde\nabla^{(k)}_l |, | \tilde\nabla^{(k)}_m | \} \le \sqrt{ (\tilde a_k)^2 + (\tilde b_k)^2 }. \qquad (71) $$

Since

$$ (\tilde a_k)^2 + (\tilde b_k)^2 \le \frac{\tilde\varepsilon_k^2}{2}, \qquad (72) $$

relations (69), (71) and (72) imply

$$ \sqrt{\tilde\nabla^{(k)}} = \sqrt{ (\tilde\nabla^{(k)}_{lm})^2 + 4 \tilde\nabla^{(k)}_l \tilde\nabla^{(k)}_m } \ge \sqrt{ (2.95\,\gamma)^2 - 2 \tilde\varepsilon_k^2 } \ge \sqrt{ (2.95)^2 - \frac{2}{81} }\; \gamma \ge 2.933\,\gamma. $$

The assertion (67) now follows from relations (70), (69), (71) and the above relation. Q.E.D.

The following lemma gives the relation between ε̃_k and ε̃_{k+1}. It is used later in the proof of Lemma 15.

Lemma 12 Let the assumption (A1) hold. If for some k relation (66) holds, then

$$ \tilde\varepsilon_{k+1}^2 \le \frac{1 + 0.494\, \tilde\varepsilon_k/\gamma}{1 - 0.077\, \tilde\varepsilon_k/\gamma} \big[ \tilde\varepsilon_k^2 - 2 (\tilde a_k^2 + \tilde b_k^2) \big]. \qquad (73) $$

Proof: Suppose that relation (66) holds for some k. Relation (62), together with the definition of ε̃_{k+1}, implies

$$ \tilde\varepsilon_{k+1}^2 = S^2(D_{*,k+1} A_*^{(k+1)} D_{*,k+1}) + S^2(D_{*,k+1} B_*^{(k+1)} D_{*,k+1}). \qquad (74) $$

If

$$ m_{k+1} = \min\{ d^{(k+1)}_{*1}, \ldots, d^{(k+1)}_{*n} \}, $$

then relation (74) implies

$$ \tilde\varepsilon_{k+1}^2 \le \frac{S^2(A_*^{(k+1)})}{(m_{k+1})^4} + \frac{S^2(B_*^{(k+1)})}{(m_{k+1})^4} = \frac{\varepsilon^2(A_*^{(k+1)}, B_*^{(k+1)})}{(m_{k+1})^4}. \qquad (75) $$

Let us define the vectors

$$ \tilde a_{l\cdot} = (\tilde a^{(k)}_{l,1}, \ldots, \tilde a^{(k)}_{l,l-1}, \tilde a^{(k)}_{l,l+1}, \ldots, \tilde a^{(k)}_{l,m-1}, \tilde a^{(k)}_{l,m+1}, \ldots, \tilde a^{(k)}_{l,n}), $$
$$ \tilde a_{m\cdot} = (\tilde a^{(k)}_{m,1}, \ldots, \tilde a^{(k)}_{m,l-1}, \tilde a^{(k)}_{m,l+1}, \ldots, \tilde a^{(k)}_{m,m-1}, \tilde a^{(k)}_{m,m+1}, \ldots, \tilde a^{(k)}_{m,n}), $$

and the column counterparts ã_{·l} and ã_{·m}, where generally a^T denotes the transposed vector a. Let a_{l·}, a_{m·}, a_{·l} and a_{·m} be the row and column vectors defined in the same way, but from the elements of the matrix A_*^{(k+1)}. Relation (61) implies that

$$ \begin{bmatrix} a_{l\cdot} \\ a_{m\cdot} \end{bmatrix} = \hat{\tilde F}_k^T \begin{bmatrix} \tilde a_{l\cdot} \\ \tilde a_{m\cdot} \end{bmatrix}, \qquad [\, a_{\cdot l},\, a_{\cdot m} \,] = [\, \tilde a_{\cdot l},\, \tilde a_{\cdot m} \,]\, \hat{\tilde F}_k. $$

Therefore,

$$ \Big\| \begin{bmatrix} a_{l\cdot} \\ a_{m\cdot} \end{bmatrix} \Big\| \le \| \hat{\tilde F}_k^T \|_2\, \Big\| \begin{bmatrix} \tilde a_{l\cdot} \\ \tilde a_{m\cdot} \end{bmatrix} \Big\|, \qquad \| [\, a_{\cdot l}, a_{\cdot m} \,] \| \le \| \hat{\tilde F}_k \|_2\, \| [\, \tilde a_{\cdot l}, \tilde a_{\cdot m} \,] \|. $$

The off-diagonal elements of the matrix Ã^{(k)} which are changed in the transformation (61) are exactly the elements of the vectors ã_{l·}, ã_{m·}, ã_{·l} and ã_{·m}, with the exception of ã^{(k)}_{lm} and ã^{(k)}_{ml}, which are annihilated. Since ‖F̂̃_k^T‖₂ = ‖F̂̃_k‖₂, we conclude that

$$ S^2(A_*^{(k+1)}) \le S^2(\tilde A^{(k)}) + ( \| \hat{\tilde F}_k \|_2^2 - 1 ) \big( \| \tilde a_{l\cdot} \|^2 + \| \tilde a_{m\cdot} \|^2 + \| \tilde a_{\cdot l} \|^2 + \| \tilde a_{\cdot m} \|^2 \big) - 2 \tilde a_k^2. $$

Since ‖F̂̃_k‖₂ ≥ 1 (see further in the proof), we conclude that

$$ S^2(A_*^{(k+1)}) \le \| \hat{\tilde F}_k \|_2^2 \big( S^2(\tilde A^{(k)}) - 2 \tilde a_k^2 \big). $$

By applying a similar analysis to the matrix B̃^{(k)}, we obtain

$$ S^2(B_*^{(k+1)}) \le \| \hat{\tilde F}_k \|_2^2 \big( S^2(\tilde B^{(k)}) - 2 \tilde b_k^2 \big). $$

Adding the two previous inequalities and using the definitions of ε(A_*^{(k+1)}, B_*^{(k+1)}) and ε̃_k gives

$$ \varepsilon^2(A_*^{(k+1)}, B_*^{(k+1)}) \le \| \hat{\tilde F}_k \|_2^2 \big[ \tilde\varepsilon_k^2 - 2 (\tilde a_k^2 + \tilde b_k^2) \big]. $$

Inserting this inequality into relation (75), we obtain

$$ \tilde\varepsilon_{k+1}^2 \le \frac{\| \hat{\tilde F}_k \|_2^2}{(m_{k+1})^4} \big[ \tilde\varepsilon_k^2 - 2 (\tilde a_k^2 + \tilde b_k^2) \big]. \qquad (76) $$

To complete the proof we must find an upper bound for ‖F̂̃_k‖₂² and a lower bound for m_{k+1}.

Relation (56) implies that

$$ m_{k+1} = \min\{ 1,\; d^{(k+1)}_{*l},\; d^{(k+1)}_{*m} \}. \qquad (77) $$

Relation (61) implies that

$$ a^{(k+1)}_{*ll} = \tilde a^{(k)}_{ll} - 2 \tilde\beta_k \tilde a^{(k)}_{lm} + \tilde\beta_k^2 \tilde a^{(k)}_{mm}, \qquad b^{(k+1)}_{*ll} = \tilde b^{(k)}_{ll} - 2 \tilde\beta_k \tilde b^{(k)}_{lm} + \tilde\beta_k^2 \tilde b^{(k)}_{mm}, $$
$$ a^{(k+1)}_{*mm} = \tilde\alpha_k^2 \tilde a^{(k)}_{ll} + 2 \tilde\alpha_k \tilde a^{(k)}_{lm} + \tilde a^{(k)}_{mm}, \qquad b^{(k+1)}_{*mm} = \tilde\alpha_k^2 \tilde b^{(k)}_{ll} + 2 \tilde\alpha_k \tilde b^{(k)}_{lm} + \tilde b^{(k)}_{mm}. $$

Therefore,

$$ (d^{(k+1)}_{*l})^4 = (a^{(k+1)}_{*ll})^2 + (b^{(k+1)}_{*ll})^2 = 1 + 4 \tilde\beta_k^2 \big[ (\tilde a^{(k)}_{lm})^2 + (\tilde b^{(k)}_{lm})^2 \big] + \tilde\beta_k^4 $$
$$ \quad - 4 \tilde\beta_k ( \tilde a^{(k)}_{ll} \tilde a^{(k)}_{lm} + \tilde b^{(k)}_{ll} \tilde b^{(k)}_{lm} ) + 2 \tilde\beta_k^2 ( \tilde a^{(k)}_{ll} \tilde a^{(k)}_{mm} + \tilde b^{(k)}_{ll} \tilde b^{(k)}_{mm} ) - 4 \tilde\beta_k^3 ( \tilde a^{(k)}_{mm} \tilde a^{(k)}_{lm} + \tilde b^{(k)}_{mm} \tilde b^{(k)}_{lm} ). \qquad (78) $$

Using relation (47) and the Cauchy–Schwarz inequality in relation (78), we obtain

$$ (d^{(k+1)}_{*l})^4 \ge 1 - 4 | \tilde\beta_k | (1 + \tilde\beta_k^2) \sqrt{ (\tilde a^{(k)}_{lm})^2 + (\tilde b^{(k)}_{lm})^2 } - 2 \tilde\beta_k^2. \qquad (79) $$

Using a similar argument, we obtain

$$ (d^{(k+1)}_{*m})^4 \ge 1 - 4 | \tilde\alpha_k | (1 + \tilde\alpha_k^2) \sqrt{ (\tilde a^{(k)}_{lm})^2 + (\tilde b^{(k)}_{lm})^2 } - 2 \tilde\alpha_k^2. \qquad (80) $$

Relations (77), (79), (80), (72) and Lemma 11 now imply

$$ m_{k+1}^4 \ge 1 - 4\, \frac{0.34}{\gamma}\, \frac{\tilde\varepsilon_k}{\sqrt 2} \Big( 1 + \Big( \frac{0.34}{\gamma}\, \frac{\tilde\varepsilon_k}{\sqrt 2} \Big)^2 \Big) \frac{\tilde\varepsilon_k}{\sqrt 2} - 2 \Big( \frac{0.34}{\gamma}\, \frac{\tilde\varepsilon_k}{\sqrt 2} \Big)^2. \qquad (81) $$

Since χ is a chordal metric, from relation (11) we see that γ ≤ 1/3. Relations (66) and (63) therefore imply

$$ \tilde\varepsilon_k < \frac{\gamma}{3N} \le \frac{1}{9N} \le \frac{1}{27}. \qquad (82) $$

Inserting relation (82) and the assumption (66) into relation (81), and taking into account that N ≥ 3, we obtain

$$ m_{k+1}^4 \ge 1 - 0.077\, \frac{\tilde\varepsilon_k}{\gamma}. \qquad (83) $$

We shall now estimate ‖F̂̃_k‖₂². Since

$$ \| \hat{\tilde F}_k \|_2^2 \le \| \hat{\tilde F}_k \|_1 \| \hat{\tilde F}_k \|_\infty, $$

where ‖A‖₁ = max_j Σ_i |a_ij| and ‖A‖_∞ = max_i Σ_j |a_ij| for A = (a_ij), we obtain

$$ \| \hat{\tilde F}_k \|_2^2 \le \big( 1 + \max\{ | \tilde\alpha_k |, | \tilde\beta_k | \} \big)^2. $$

Lemma 11 and relation (66) now imply

$$ \| \hat{\tilde F}_k \|_2^2 \le \Big( 1 + \frac{0.34}{\gamma}\, \frac{\tilde\varepsilon_k}{\sqrt 2} \Big)^2 \le 1 + 0.494\, \frac{\tilde\varepsilon_k}{\gamma}. \qquad (84) $$

Relation (73) now follows from relations (76), (83) and (84). Q.E.D.

We shall now prove that if the assumptions (A1) and (A2) hold, then Lemma 11 and Lemma 12 hold during N consecutive steps.

Lemma 13 Let the asymptotic assumptions (A1) and (A2) hold. Then for each k in {1, ..., N},

$$ \tilde\varepsilon_k \le \frac{\tilde\varepsilon_1}{1 - 0.3 (k-1)\, \tilde\varepsilon_1/\gamma}, \qquad \tilde\varepsilon_k < \frac{\gamma}{3N}. $$

Proof: The proof is by induction. For k = 1 the lemma is trivially fulfilled. Suppose that the lemma holds for some k in {1, ..., N−1}. From the second inequality in the induction assumption we conclude that, for the chosen k, Lemma 11 and Lemma 12 hold. From Lemma 12 it follows that

$$ \tilde\varepsilon_{k+1}^2 \le \frac{1 + 0.494\, \tilde\varepsilon_k/\gamma}{1 - 0.077\, \tilde\varepsilon_k/\gamma}\, \tilde\varepsilon_k^2 \le \frac{\tilde\varepsilon_k^2}{(1 - 0.494\, \tilde\varepsilon_k/\gamma)(1 - 0.077\, \tilde\varepsilon_k/\gamma)} \le \frac{\tilde\varepsilon_k^2}{(1 - 0.3\, \tilde\varepsilon_k/\gamma)^2}. \qquad (85) $$

Hence

$$ \tilde\varepsilon_{k+1} \le \frac{\tilde\varepsilon_k}{1 - 0.3\, \tilde\varepsilon_k/\gamma}. $$

Inserting the induction assumption in this inequality we obtain

$$ \tilde\varepsilon_{k+1} \le \frac{\tilde\varepsilon_1}{1 - 0.3(k-1)\, \tilde\varepsilon_1/\gamma - 0.3\, \tilde\varepsilon_1/\gamma} = \frac{\tilde\varepsilon_1}{1 - 0.3 k\, \tilde\varepsilon_1/\gamma}, $$

and the first assertion of the lemma is proved. From this assertion for k+1, because of the asymptotic assumption (A2), we now have

$$ \tilde\varepsilon_{k+1} \le \frac{\tilde\varepsilon_1}{1 - 0.3 k\, \tilde\varepsilon_1/\gamma} < \frac{\gamma/(12N)}{1 - 0.3 (N-1)/(12N)} < \frac{\gamma}{3N}, $$

which completes the proof. Q.E.D.

Lemma 14 If the asymptotic assumptions (A1) and (A2) hold, then the assertions (67) and (73) of Lemma 11 and Lemma 12 hold for every k in {1, ..., N}.

Proof: The assertion follows immediately from the second assertion of Lemma 13. Q.E.D.

The next lemma explains the behaviour of S(Ã^{(k)}), S(B̃^{(k)}) and ε̃_k, and of the transformation parameters α̃_k and β̃_k, during N consecutive steps. Let us define the quantity

$$ c_N = \frac{1 + 0.494/(3N)}{1 - 0.077/(3N)}. \qquad (86) $$

Lemma 15 Let the asymptotic assumptions (A1) and (A2) hold. Then:

(i) For k = 1, ..., N,

$$ S^2(\tilde A^{(k+1)}) \le (c_N)^k S^2(\tilde A^{(1)}), \qquad S^2(\tilde B^{(k+1)}) \le (c_N)^k S^2(\tilde B^{(1)}), \qquad \tilde\varepsilon_{k+1}^2 \le (c_N)^k \tilde\varepsilon_1^2. $$

(ii) For any choice ω̃_k in {α̃_k, β̃_k}, 1 ≤ k ≤ N,

$$ \sum_{k=1}^{N} \tilde\omega_k^2 \le 0.46\, \frac{\tilde\varepsilon_1^2}{\gamma^2}. $$

Proof: (i) Because of Lemma 12 and Lemma 14, for k = 1, ..., N,

$$ \tilde\varepsilon_{k+1}^2 \le c_N \big( \tilde\varepsilon_k^2 - 2 (\tilde a_k^2 + \tilde b_k^2) \big) \le c_N \big\{ c_N \big[ \tilde\varepsilon_{k-1}^2 - 2 (\tilde a_{k-1}^2 + \tilde b_{k-1}^2) \big] - 2 (\tilde a_k^2 + \tilde b_k^2) \big\} $$
$$ \le \cdots \le (c_N)^k \tilde\varepsilon_1^2 - 2 \sum_{j=1}^{k} (c_N)^{k-j+1} (\tilde a_j^2 + \tilde b_j^2). \qquad (87) $$

From relation (87) it immediately follows that

$$ \tilde\varepsilon_{k+1}^2 \le (c_N)^k \tilde\varepsilon_1^2 \le (c_N)^N \tilde\varepsilon_1^2, \quad k = 1, \ldots, N. \qquad (88) $$

Using Lemma 10 we obtain

$$ (c_N)^N < \Big( 1 + \frac{3}{2} \cdot \frac{0.494}{3N} \cdot N \Big) \Big( 1 + \frac{7}{2} \cdot \frac{0.077}{3N} \cdot N \Big) < 1.566. \qquad (89) $$

Inserting this inequality into relation (88), we obtain

$$ \tilde\varepsilon_{k+1}^2 \le 1.566\, \tilde\varepsilon_1^2, \quad k = 1, \ldots, N. $$

From the proof of Lemma 12 we see that the above estimates hold for the quantities S²(Ã^{(k+1)}) and S²(B̃^{(k+1)}) as well. Therefore (i) is proved.

(ii) Since c_N > 1, from relation (87) for k = N we have

$$ \tilde\varepsilon_{N+1}^2 \le (c_N)^N \tilde\varepsilon_1^2 - 2 \sum_{k=1}^{N} (\tilde a_k^2 + \tilde b_k^2). $$

Since ε̃²_{N+1} ≥ 0, this inequality implies

$$ \sum_{k=1}^{N} (\tilde a_k^2 + \tilde b_k^2) \le \frac{(c_N)^N}{2}\, \tilde\varepsilon_1^2 \le 0.783\, \tilde\varepsilon_1^2. $$

The above inequality together with Lemma 11 and Lemma 14 imply

$$ \sum_{k=1}^{N} \tilde\omega_k^2 \le \sum_{k=1}^{N} \max\{ \tilde\alpha_k^2, \tilde\beta_k^2 \} \le \Big( \frac{0.34}{\gamma} \Big)^2 \sum_{k=1}^{N} (\tilde a_k^2 + \tilde b_k^2) \le 0.09\, \frac{\tilde\varepsilon_1^2}{\gamma^2} \le 0.46\, \frac{\tilde\varepsilon_1^2}{\gamma^2}, $$

and the lemma is proved. Q.E.D.

4.2 The proof

Here we prove that the Falk–Langemeyer method is quadratically convergent if the assumptions (A1) and (A2) are fulfilled and the pivot strategy is cyclic. Then we prove that the quadratic convergence implies the convergence of the sequence of pairs (42) towards a pair of diagonal matrices. At the end we prove that the measures ε̃_k and ε_k are equivalent in the sense that ultimately the quadratic reduction of ε̃_{kN+1} implies the quadratic reduction of ε_{kN+1} and vice versa. We can now prove the central theorem of the paper.

Theorem 16 Let the asymptotic assumptions (A1) and (A2) hold and let the sequence (A^{(k)}, B^{(k)}), k ≥ 1, be generated by the Falk–Langemeyer method from the pair (A, B). Then for any cyclic strategy,

$$ \tilde\varepsilon_{N+1} \le \sqrt{N}\; \frac{\tilde\varepsilon_1^2}{\gamma}. $$

Proof: Let us fix some k in {1, ..., N}. Then the pivot pair (l, m) is also fixed. We want to know what happens with the element at this position until the end of the cycle. Therefore, we will observe the elements ã^{(r)}_{lm}, r = k+1, ..., N. We know that ã^{(k+1)}_{lm} = 0 and that the elements ã^{(r)}_{lm} actually change at most 2(n−2) times. Let r_1, ..., r_s, s ≤ 2n − 4, denote those values of r for which ã^{(r)}_{lm} changes in the r-th step. Let us introduce the notation:

$$ z_i = a^{(r_i+1)}_{*lm}, \qquad \tilde z_i = \tilde a^{(r_i+1)}_{lm}, $$
$$ d^{(i)}_j = \sqrt[4]{ (a^{(r_i+1)}_{*jj})^2 + (b^{(r_i+1)}_{*jj})^2 }, \quad j \in \{l, m\}, \qquad m^{(i)}_{lm} = \min\{ d^{(i)}_l, d^{(i)}_m \}, $$
$$ d_N = \sqrt{ 1 - \frac{0.077}{3N} }. \qquad (90) $$

Performing the r_1-th step according to Algorithm 4 gives

$$ z_1 = (0 - \tilde a^{(r_1)} \tilde\omega_{r_1}) \cdots $$


More information

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Yongge Tian China Economics and Management Academy, Central University of Finance and Economics,

More information

CHAPTER 3 Further properties of splines and B-splines

CHAPTER 3 Further properties of splines and B-splines CHAPTER 3 Further properties of splines and B-splines In Chapter 2 we established some of the most elementary properties of B-splines. In this chapter our focus is on the question What kind of functions

More information

THE REAL POSITIVE DEFINITE COMPLETION PROBLEM. WAYNE BARRETT**, CHARLES R. JOHNSONy and PABLO TARAZAGAz

THE REAL POSITIVE DEFINITE COMPLETION PROBLEM. WAYNE BARRETT**, CHARLES R. JOHNSONy and PABLO TARAZAGAz THE REAL POSITIVE DEFINITE COMPLETION PROBLEM FOR A SIMPLE CYCLE* WAYNE BARRETT**, CHARLES R JOHNSONy and PABLO TARAZAGAz Abstract We consider the question of whether a real partial positive denite matrix

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 998 Comments to the author at krm@mathsuqeduau All contents copyright c 99 Keith

More information

Parallel Singular Value Decomposition. Jiaxing Tan

Parallel Singular Value Decomposition. Jiaxing Tan Parallel Singular Value Decomposition Jiaxing Tan Outline What is SVD? How to calculate SVD? How to parallelize SVD? Future Work What is SVD? Matrix Decomposition Eigen Decomposition A (non-zero) vector

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 3: Positive-Definite Systems; Cholesky Factorization Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 11 Symmetric

More information

Asymptotic Quadratic Convergence of the Parallel Block-Jacobi EVD Algorithm for Hermitian Matrices

Asymptotic Quadratic Convergence of the Parallel Block-Jacobi EVD Algorithm for Hermitian Matrices Asymptotic Quadratic Convergence of the Parallel Block-Jacobi EVD Algorithm for ermitian Matrices Gabriel Okša Yusaku Yamamoto Marián Vajteršic Technical Report 016-01 March/June 016 Department of Computer

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2

More information

Chapter 3. Linear and Nonlinear Systems

Chapter 3. Linear and Nonlinear Systems 59 An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them Werner Heisenberg (1901-1976) Chapter 3 Linear and Nonlinear Systems In this chapter

More information

Linear Algebra, 4th day, Thursday 7/1/04 REU Info:

Linear Algebra, 4th day, Thursday 7/1/04 REU Info: Linear Algebra, 4th day, Thursday 7/1/04 REU 004. Info http//people.cs.uchicago.edu/laci/reu04. Instructor Laszlo Babai Scribe Nick Gurski 1 Linear maps We shall study the notion of maps between vector

More information

The Solution of Linear Systems AX = B

The Solution of Linear Systems AX = B Chapter 2 The Solution of Linear Systems AX = B 21 Upper-triangular Linear Systems We will now develop the back-substitution algorithm, which is useful for solving a linear system of equations that has

More information

We wish to solve a system of N simultaneous linear algebraic equations for the N unknowns x 1, x 2,...,x N, that are expressed in the general form

We wish to solve a system of N simultaneous linear algebraic equations for the N unknowns x 1, x 2,...,x N, that are expressed in the general form Linear algebra This chapter discusses the solution of sets of linear algebraic equations and defines basic vector/matrix operations The focus is upon elimination methods such as Gaussian elimination, and

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

NOTES ON LINEAR ALGEBRA CLASS HANDOUT

NOTES ON LINEAR ALGEBRA CLASS HANDOUT NOTES ON LINEAR ALGEBRA CLASS HANDOUT ANTHONY S. MAIDA CONTENTS 1. Introduction 2 2. Basis Vectors 2 3. Linear Transformations 2 3.1. Example: Rotation Transformation 3 4. Matrix Multiplication and Function

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

Math 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm

Math 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm Math 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm References: Trefethen & Bau textbook Eigenvalue problem: given a matrix A, find

More information

. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in

. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in Vectors and Matrices Continued Remember that our goal is to write a system of algebraic equations as a matrix equation. Suppose we have the n linear algebraic equations a x + a 2 x 2 + a n x n = b a 2

More information

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

A connection between number theory and linear algebra

A connection between number theory and linear algebra A connection between number theory and linear algebra Mark Steinberger Contents 1. Some basics 1 2. Rational canonical form 2 3. Prime factorization in F[x] 4 4. Units and order 5 5. Finite fields 7 6.

More information

Algorithms to solve block Toeplitz systems and. least-squares problems by transforming to Cauchy-like. matrices

Algorithms to solve block Toeplitz systems and. least-squares problems by transforming to Cauchy-like. matrices Algorithms to solve block Toeplitz systems and least-squares problems by transforming to Cauchy-like matrices K. Gallivan S. Thirumalai P. Van Dooren 1 Introduction Fast algorithms to factor Toeplitz matrices

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

Linear algebra 2. Yoav Zemel. March 1, 2012

Linear algebra 2. Yoav Zemel. March 1, 2012 Linear algebra 2 Yoav Zemel March 1, 2012 These notes were written by Yoav Zemel. The lecturer, Shmuel Berger, should not be held responsible for any mistake. Any comments are welcome at zamsh7@gmail.com.

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

Lecture 11: Eigenvalues and Eigenvectors

Lecture 11: Eigenvalues and Eigenvectors Lecture : Eigenvalues and Eigenvectors De nition.. Let A be a square matrix (or linear transformation). A number λ is called an eigenvalue of A if there exists a non-zero vector u such that A u λ u. ()

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated. Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

Symmetric and self-adjoint matrices

Symmetric and self-adjoint matrices Symmetric and self-adjoint matrices A matrix A in M n (F) is called symmetric if A T = A, ie A ij = A ji for each i, j; and self-adjoint if A = A, ie A ij = A ji or each i, j Note for A in M n (R) that

More information

Linear Systems and Matrices

Linear Systems and Matrices Department of Mathematics The Chinese University of Hong Kong 1 System of m linear equations in n unknowns (linear system) a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.......

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 1998 Comments to the author at krm@maths.uq.edu.au Contents 1 LINEAR EQUATIONS

More information

Balance properties of multi-dimensional words

Balance properties of multi-dimensional words Theoretical Computer Science 273 (2002) 197 224 www.elsevier.com/locate/tcs Balance properties of multi-dimensional words Valerie Berthe a;, Robert Tijdeman b a Institut de Mathematiques de Luminy, CNRS-UPR

More information

Linear Algebra and Eigenproblems

Linear Algebra and Eigenproblems Appendix A A Linear Algebra and Eigenproblems A working knowledge of linear algebra is key to understanding many of the issues raised in this work. In particular, many of the discussions of the details

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

16 Chapter 3. Separation Properties, Principal Pivot Transforms, Classes... for all j 2 J is said to be a subcomplementary vector of variables for (3.

16 Chapter 3. Separation Properties, Principal Pivot Transforms, Classes... for all j 2 J is said to be a subcomplementary vector of variables for (3. Chapter 3 SEPARATION PROPERTIES, PRINCIPAL PIVOT TRANSFORMS, CLASSES OF MATRICES In this chapter we present the basic mathematical results on the LCP. Many of these results are used in later chapters to

More information

Vectors and matrices: matrices (Version 2) This is a very brief summary of my lecture notes.

Vectors and matrices: matrices (Version 2) This is a very brief summary of my lecture notes. Vectors and matrices: matrices (Version 2) This is a very brief summary of my lecture notes Matrices and linear equations A matrix is an m-by-n array of numbers A = a 11 a 12 a 13 a 1n a 21 a 22 a 23 a

More information

1. Introduction Let the least value of an objective function F (x), x2r n, be required, where F (x) can be calculated for any vector of variables x2r

1. Introduction Let the least value of an objective function F (x), x2r n, be required, where F (x) can be calculated for any vector of variables x2r DAMTP 2002/NA08 Least Frobenius norm updating of quadratic models that satisfy interpolation conditions 1 M.J.D. Powell Abstract: Quadratic models of objective functions are highly useful in many optimization

More information

Numerical Methods I: Eigenvalues and eigenvectors

Numerical Methods I: Eigenvalues and eigenvectors 1/25 Numerical Methods I: Eigenvalues and eigenvectors Georg Stadler Courant Institute, NYU stadler@cims.nyu.edu November 2, 2017 Overview 2/25 Conditioning Eigenvalues and eigenvectors How hard are they

More information

Coins with arbitrary weights. Abstract. Given a set of m coins out of a collection of coins of k unknown distinct weights, we wish to

Coins with arbitrary weights. Abstract. Given a set of m coins out of a collection of coins of k unknown distinct weights, we wish to Coins with arbitrary weights Noga Alon Dmitry N. Kozlov y Abstract Given a set of m coins out of a collection of coins of k unknown distinct weights, we wish to decide if all the m given coins have the

More information

Matrix Arithmetic. j=1

Matrix Arithmetic. j=1 An m n matrix is an array A = Matrix Arithmetic a 11 a 12 a 1n a 21 a 22 a 2n a m1 a m2 a mn of real numbers a ij An m n matrix has m rows and n columns a ij is the entry in the i-th row and j-th column

More information

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin On the stability of invariant subspaces of commuting matrices Tomaz Kosir and Bor Plestenjak September 18, 001 Abstract We study the stability of (joint) invariant subspaces of a nite set of commuting

More information

JACOBI S ITERATION METHOD

JACOBI S ITERATION METHOD ITERATION METHODS These are methods which compute a sequence of progressively accurate iterates to approximate the solution of Ax = b. We need such methods for solving many large linear systems. Sometimes

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 3: Iterative Methods PD

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

Intrinsic products and factorizations of matrices

Intrinsic products and factorizations of matrices Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences

More information

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix

More information

NORMS ON SPACE OF MATRICES

NORMS ON SPACE OF MATRICES NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

A Finite Element Method for an Ill-Posed Problem. Martin-Luther-Universitat, Fachbereich Mathematik/Informatik,Postfach 8, D Halle, Abstract

A Finite Element Method for an Ill-Posed Problem. Martin-Luther-Universitat, Fachbereich Mathematik/Informatik,Postfach 8, D Halle, Abstract A Finite Element Method for an Ill-Posed Problem W. Lucht Martin-Luther-Universitat, Fachbereich Mathematik/Informatik,Postfach 8, D-699 Halle, Germany Abstract For an ill-posed problem which has its origin

More information

Jacobi method for small matrices

Jacobi method for small matrices Jacobi method for small matrices Erna Begović University of Zagreb Joint work with Vjeran Hari CIME-EMS Summer School 24.6.2015. OUTLINE Why small matrices? Jacobi method and pivot strategies Parallel

More information

j=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent.

j=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent. Lecture Notes: Orthogonal and Symmetric Matrices Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Orthogonal Matrix Definition. Let u = [u

More information

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) D. ARAPURA Gaussian elimination is the go to method for all basic linear classes including this one. We go summarize the main ideas. 1.

More information

LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT FALL 2006 PRINCETON UNIVERSITY

LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT FALL 2006 PRINCETON UNIVERSITY LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT 204 - FALL 2006 PRINCETON UNIVERSITY ALFONSO SORRENTINO 1 Adjoint of a linear operator Note: In these notes, V will denote a n-dimensional euclidean vector

More information

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT Math Camp II Basic Linear Algebra Yiqing Xu MIT Aug 26, 2014 1 Solving Systems of Linear Equations 2 Vectors and Vector Spaces 3 Matrices 4 Least Squares Systems of Linear Equations Definition A linear

More information

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms Christopher Engström November 14, 2014 Hermitian LU QR echelon Contents of todays lecture Some interesting / useful / important of matrices Hermitian LU QR echelon Rewriting a as a product of several matrices.

More information

16. Local theory of regular singular points and applications

16. Local theory of regular singular points and applications 16. Local theory of regular singular points and applications 265 16. Local theory of regular singular points and applications In this section we consider linear systems defined by the germs of meromorphic

More information

Basic Concepts in Matrix Algebra

Basic Concepts in Matrix Algebra Basic Concepts in Matrix Algebra An column array of p elements is called a vector of dimension p and is written as x p 1 = x 1 x 2. x p. The transpose of the column vector x p 1 is row vector x = [x 1

More information

Lecture 2 INF-MAT : A boundary value problem and an eigenvalue problem; Block Multiplication; Tridiagonal Systems

Lecture 2 INF-MAT : A boundary value problem and an eigenvalue problem; Block Multiplication; Tridiagonal Systems Lecture 2 INF-MAT 4350 2008: A boundary value problem and an eigenvalue problem; Block Multiplication; Tridiagonal Systems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University

More information

Math 471 (Numerical methods) Chapter 3 (second half). System of equations

Math 471 (Numerical methods) Chapter 3 (second half). System of equations Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular

More information

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Orthogonalization and least squares methods

Orthogonalization and least squares methods Chapter 3 Orthogonalization and least squares methods 31 QR-factorization (QR-decomposition) 311 Householder transformation Definition 311 A complex m n-matrix R = [r ij is called an upper (lower) triangular

More information

Spectral inequalities and equalities involving products of matrices

Spectral inequalities and equalities involving products of matrices Spectral inequalities and equalities involving products of matrices Chi-Kwong Li 1 Department of Mathematics, College of William & Mary, Williamsburg, Virginia 23187 (ckli@math.wm.edu) Yiu-Tung Poon Department

More information

MATRIX ALGEBRA. or x = (x 1,..., x n ) R n. y 1 y 2. x 2. x m. y m. y = cos θ 1 = x 1 L x. sin θ 1 = x 2. cos θ 2 = y 1 L y.

MATRIX ALGEBRA. or x = (x 1,..., x n ) R n. y 1 y 2. x 2. x m. y m. y = cos θ 1 = x 1 L x. sin θ 1 = x 2. cos θ 2 = y 1 L y. as Basics Vectors MATRIX ALGEBRA An array of n real numbers x, x,, x n is called a vector and it is written x = x x n or x = x,, x n R n prime operation=transposing a column to a row Basic vector operations

More information

Math 577 Assignment 7

Math 577 Assignment 7 Math 577 Assignment 7 Thanks for Yu Cao 1. Solution. The linear system being solved is Ax = 0, where A is a (n 1 (n 1 matrix such that 2 1 1 2 1 A =......... 1 2 1 1 2 and x = (U 1, U 2,, U n 1. By the

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

Review of Vectors and Matrices

Review of Vectors and Matrices A P P E N D I X D Review of Vectors and Matrices D. VECTORS D.. Definition of a Vector Let p, p, Á, p n be any n real numbers and P an ordered set of these real numbers that is, P = p, p, Á, p n Then P

More information