MATRICES WITH POSITIVE DEFINITE HERMITIAN PART: INEQUALITIES AND LINEAR SYSTEMS ROY MATHIAS

Abstract. The Hermitian and skew-Hermitian parts of a square matrix $A$ are defined by $H(A) \equiv (A + A^*)/2$ and $S(A) \equiv (A - A^*)/2$. We show that the function $f(A) = (H(A^{-1}))^{-1}$ is convex with respect to the Loewner partial order on the cone of matrices with positive definite Hermitian part. That is, for any matrices $A$ and $B$ with positive definite Hermitian part,

$\{f(A) + f(B)\}/2 - f(\{A + B\}/2)$ is positive semidefinite.

Using this basic fact we prove a variety of inequalities involving norms, Hadamard products and submatrices, and a perturbation result for the function $f$. These results are generalizations of results for positive definite matrices. Often the quantity

$\kappa_H(A) \equiv \|H(A^{-1})^{-1}\|_2 \, \|H(A)^{-1}\|_2$

plays the role that $\kappa_2(A) \equiv \|A\|_2 \|A^{-1}\|_2$ plays in inequalities involving positive definite matrices. ($\|\cdot\|_2$ denotes the spectral norm.) Finally we derive a bound on the backward and forward error in $\hat{x}$, the solution to

(1) $Ax = b$, with $H(A)$ positive definite,

computed by Gaussian elimination without pivoting in finite precision. This result is analogous to Wilkinson's result for positive definite matrices and gives a rigorous criterion for deciding when it is numerically safe not to pivot when solving (1).

Key words. Positive definite matrix, LU factorization, Loewner partial order, matrix convexity, Hadamard product, condition number

AMS(MOS) subject classifications. 15A48, 15A23, 15A45, 15A12

SIAM J. Matrix Anal. Appl., Vol. 13, No. 2, pp. -, (c) SIAM

1. Introduction. Let $M_n(\mathbf{C})$ (respectively, $M_n(\mathbf{R})$) denote the space of $n \times n$ complex (respectively, real) matrices. We call $A \in M_n(\mathbf{C})$ positive definite (respectively, positive semidefinite) if $A$ is Hermitian and $x^* A x > 0$ (respectively, $x^* A x \ge 0$) for all nonzero $x \in \mathbf{C}^n$. The Hermitian part of $A$ is $H(A) \equiv (A + A^*)/2$ and the skew-Hermitian part of $A$ is $S(A) \equiv (A - A^*)/2$.

Research supported by an Eliezer Naddor postdoctoral fellowship in the Mathematical Sciences from the Johns Hopkins University during the year in which the author was in residence at the Department of Computer Science at Cornell University. Present address: Department of Mathematics, The College of William and Mary, Williamsburg, VA. na.mathias@na-net.ornl.gov

Matrices with positive definite Hermitian part have many properties analogous to those of positive definite matrices. We discuss some of these in this paper. In Sections 2 and 3 we derive a variety of inequalities for matrices with positive definite Hermitian part. Most of these involve principal submatrices, condition numbers or Hadamard products, and they generalize well known results for positive definite matrices. The two underlying facts are the formula for the Hermitian part of the inverse (Lemma 2.1) and a basic convexity result (Theorem 2.2). The results in Section 2 are applied in Section 4, where we derive error bounds for Gaussian elimination applied to a matrix with positive definite Hermitian part in finite precision arithmetic.

The leading principal minors (in fact all the principal minors) of a matrix $A$ with positive definite Hermitian part are positive, and hence a linear system $Ax = b$ can be solved by Gaussian elimination without pivoting. This fact can be exploited in practical algorithms. However, Gaussian elimination without pivoting can lead to serious element growth, and in finite precision arithmetic this tends to result in an unacceptably inaccurate solution (see e.g. [8, p. 87] for a simple example). In [8] the authors showed (under the reasonable assumption that $\| \, |L||U| \, \| \approx \| \, |\hat{L}||\hat{U}| \, \|$) that $\hat{x}$, the solution computed by Gaussian elimination without pivoting, satisfies $(A + E)\hat{x} = b$ where

$\|E\|_F \le u c_n \|H + S^T H^{-1} S\|_2,$

where $u$ is machine precision and $c_n$ is a linear function of $n$. Using this result they argued that it is safe not to pivot when solving $Ax = b$ provided the ratio $\|H + S^T H^{-1} S\|_2 / \|A\|_2$ is not large. (In Lemma 2.1 we show that this quantity is at least 1.) We show that it is not necessary to make the assumption $\| \, |L||U| \, \| \approx \| \, |\hat{L}||\hat{U}| \, \|$, and thereby give a sufficient a priori condition for the LU factorization in finite precision arithmetic (without pivoting) of a matrix with positive definite Hermitian part to run to completion with positive pivots.
These results are in Section 4. All the results in this paper may be viewed as generalizations of results for positive definite matrices.

If $A$ is Hermitian we use $\lambda_{\max}(A)$ (respectively, $\lambda_{\min}(A)$) to denote the algebraically largest (respectively, smallest) eigenvalue of $A$. The spectral norm ($\|\cdot\|_2$) and the Frobenius norm ($\|\cdot\|_F$) are defined on $M_n$ by

$\|A\|_2 \equiv \sqrt{\lambda_{\max}(A^* A)} = \max\{\|Ax\|_2 : \|x\|_2 = 1, \ x \in \mathbf{C}^n\}$

and

$\|A\|_F \equiv \sqrt{\textstyle\sum |a_{ij}|^2}.$

We define $|A| \equiv [\,|a_{ij}|\,]$. We write $A \le B$ if $B - A$ is positive semidefinite. If $T$ is positive definite then we use $T^{1/2}$ to denote the unique positive definite square root of $T$. We will frequently use the fact that for Hermitian matrices $A, B \in M_n(\mathbf{C})$

$\lambda_{\min}(A + B) \ge \lambda_{\min}(A) + \lambda_{\min}(B) \ge \lambda_{\min}(A) - \|B\|_2,$

and that for a positive definite matrix $A$

$\|A\|_2 = [\lambda_{\min}(A^{-1})]^{-1}.$

2. Matrices with Positive Definite Hermitian Part. In this section we develop some of the properties of matrices with positive definite Hermitian part, in particular the properties of the Hermitian part of the inverse of such a matrix. Previous research on matrices with positive definite Hermitian or skew-Hermitian part [11, 5, 6, 13] has concentrated on the properties of $AA^{-*}$, especially interlacing inequalities for the arguments of the eigenvalues of $AA^{-*}$. (The eigenvalues of $AA^{-*}$ all have unit modulus.) We start by determining the Hermitian part of the inverse of a matrix.

Lemma 2.1. Let $A$ have positive definite Hermitian part, and let $H = H(A)$ and $S = S(A)$. Then $A$ is invertible, and $A^{-1}$ has positive definite Hermitian part given by

(2.1) $H(A^{-1}) = (A^{-1} + A^{-*})/2 = (H + S^* H^{-1} S)^{-1},$

and we have the inequalities

(2.2) $\|A^{-1}\|_2 \le \|H^{-1}\|_2$ and $\|A\|_2 \le \|H + S^* H^{-1} S\|_2.$

The first inequality in (2.2) is a problem in [9].

Proof. Let $A = H + S$ satisfy the conditions of the lemma. Since $X^{-1} + Y^{-1} = X^{-1}(X + Y)Y^{-1}$ for any nonsingular $X, Y \in M_n$ we have

$\{(A^{-1} + A^{-*})/2\}^{-1} = \{(H + S)^{-1}(2H)(H - S)^{-1}/2\}^{-1} = (H - S)H^{-1}(H + S) = H - SH^{-1}S = H + S^* H^{-1} S.$

Taking inverses now yields (2.1). By (2.1), the first inequality in (2.2) applied to $A^{-1}$ yields the second, so it suffices to prove the first. Using the fact that $x^* S x$ is imaginary for any $x \in \mathbf{C}^n$ and that $H$ is positive definite we have

$\|A^{-1}\|_2^{-1} = \min\{\|Ax\|_2 : \|x\|_2 = 1\} \ge \min\{|x^* A x| : \|x\|_2 = 1\} = \min\{|x^* H x + x^* S x| : \|x\|_2 = 1\} \ge \min\{|x^* H x| : \|x\|_2 = 1\} = \|H^{-1}\|_2^{-1}.$

The inequality now follows by taking inverses. $\Box$

From (2.1) one can easily show that for $A$ with positive definite Hermitian part

(2.3) $H(A^{-1}) \le [H(A)]^{-1}.$
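For readers who wish to experiment, the identity (2.1) and the inequalities (2.2) are easy to check numerically. The following sketch (Python with NumPy; ours, not part of the original paper, and all function names are our own) builds a random matrix with positive definite Hermitian part and verifies both facts:

```python
import numpy as np

rng = np.random.default_rng(0)

def herm(A):
    """Hermitian part H(A) = (A + A*)/2."""
    return (A + A.conj().T) / 2

def skew(A):
    """Skew-Hermitian part S(A) = (A - A*)/2."""
    return (A - A.conj().T) / 2

n = 5
# Random A = H + S with H positive definite: H = B*B + I guarantees H > 0.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = B.conj().T @ B + np.eye(n)
S = skew(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
A = H + S

# (2.1): H(A^{-1}) = (H + S* H^{-1} S)^{-1}
lhs = herm(np.linalg.inv(A))
rhs = np.linalg.inv(H + S.conj().T @ np.linalg.solve(H, S))
assert np.allclose(lhs, rhs)

# (2.2): ||A^{-1}||_2 <= ||H^{-1}||_2 and ||A||_2 <= ||H + S* H^{-1} S||_2
norm = lambda X: np.linalg.norm(X, 2)
assert norm(np.linalg.inv(A)) <= norm(np.linalg.inv(H)) + 1e-12
assert norm(A) <= norm(H + S.conj().T @ np.linalg.solve(H, S)) + 1e-12
```

The small tolerances only absorb roundoff; in exact arithmetic (2.1) is an identity.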

We will refine this inequality in the next section. One can also use (2.1) to derive the bound

$\lambda_{\min}(H(A^{-1})) \ge [\lambda_{\max}(H) + \|S\|_2^2 / \lambda_{\min}(H)]^{-1} > 0,$

which is used in [4] to prove the convergence of a variant of the conjugate gradient method for solving a linear system $Ax = b$ when $A$ has positive definite Hermitian part.

Define the functions $f$ and $\kappa_H$ on the cone of matrices with positive definite Hermitian part by

(2.4) $f(A) \equiv [(A^{-1} + A^{-*})/2]^{-1} = H + S^* H^{-1} S,$

and

(2.5) $\kappa_H(A) \equiv \|H + S^* H^{-1} S\|_2 \, \|H^{-1}\|_2,$

where $H = H(A)$ and $S = S(A)$. We define $\kappa_2(A) \equiv \|A\|_2 \|A^{-1}\|_2$ for any nonsingular $A$. Notice that $\kappa_2(A) = \kappa_H(A)$ if $A$ is positive definite. In the next theorem we show that $f$ is convex with respect to the partial order $\le$. Many of the following results are based on this fact.

Theorem 2.2. Let $f$ be defined by (2.4). Then $f$ is convex with respect to the partial order $\le$. That is, for any $A_1, A_2 \in M_n$ with positive definite Hermitian part and any $t \in [0, 1]$,

(2.6) $f(tA_1 + (1-t)A_2) \le t f(A_1) + (1-t) f(A_2).$

Furthermore, suppose that $A \in M_n$ has positive definite Hermitian part and is partitioned as

(2.7) $A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$ with $A_{11} \in M_k$, $A_{22} \in M_{n-k}$,

and let $f(A)$ be partitioned in the same way. Then

1. $f(A)$ is positive definite and $f(A^*) = f(A)$.
2. $f(XAX^*) = X f(A) X^*$ for any nonsingular $X \in M_n$.
3. $f(A_{22} - A_{21} A_{11}^{-1} A_{12}) \le f(A)_{22}$.
4. $\|f(A_{22} - A_{21} A_{11}^{-1} A_{12})\|_2 \le \|f(A)_{22}\|_2$.
5. $f(A_{11} \oplus A_{22}) \le f(A)_{11} \oplus f(A)_{22}$.
6. $f(A_{11}) \le f(A)_{11}$.

Proof. To prove the convexity of $f$ we will use the following fact, which is essentially a theorem in [10]. Let

$X = \begin{pmatrix} X_{11} & X_{12} \\ X_{12}^* & X_{22} \end{pmatrix}$

be Hermitian with $X_{11}$ positive definite. Then $X$ is positive semidefinite if and only if $X_{12}^* X_{11}^{-1} X_{12} \le X_{22}$.

Let $A_i = H_i + S_i \in M_n$ be given with $H_i$ positive definite and $S_i$ skew-Hermitian. Then

$0 \le \begin{pmatrix} H_i^{1/2} & H_i^{-1/2} S_i \end{pmatrix}^* \begin{pmatrix} H_i^{1/2} & H_i^{-1/2} S_i \end{pmatrix} = \begin{pmatrix} H_i & S_i \\ S_i^* & S_i^* H_i^{-1} S_i \end{pmatrix}, \quad i = 1, 2,$

and hence for $t \in [0, 1]$

$0 \le t \begin{pmatrix} H_1 & S_1 \\ S_1^* & S_1^* H_1^{-1} S_1 \end{pmatrix} + (1-t) \begin{pmatrix} H_2 & S_2 \\ S_2^* & S_2^* H_2^{-1} S_2 \end{pmatrix} = \begin{pmatrix} tH_1 + (1-t)H_2 & tS_1 + (1-t)S_2 \\ (tS_1 + (1-t)S_2)^* & t S_1^* H_1^{-1} S_1 + (1-t) S_2^* H_2^{-1} S_2 \end{pmatrix}.$

By the criterion above, the positive semidefiniteness of this last matrix implies

$[tS_1 + (1-t)S_2]^* [tH_1 + (1-t)H_2]^{-1} [tS_1 + (1-t)S_2] \le t S_1^* H_1^{-1} S_1 + (1-t) S_2^* H_2^{-1} S_2,$

from which the assertion (2.6) follows.

The statements 1. and 2. follow immediately from the definitions. To prove 3. note that $(A_{22} - A_{21} A_{11}^{-1} A_{12})^{-1} = (A^{-1})_{22}$, see e.g. [10, 0.7.3]. The norm inequality in 4. follows from this. To prove 5. let $Q$ be the nonsingular matrix $I_k \oplus (-I_{n-k})$ and note that

$A_{11} \oplus A_{22} = (A + QAQ^*)/2$ and $f(A)_{11} \oplus f(A)_{22} = (f(A) + Qf(A)Q^*)/2.$

Now use the convexity of $f$ to obtain the desired inequality:

$f(A_{11} \oplus A_{22}) = f((A + QAQ^*)/2) \le [f(A) + Qf(A)Q^*]/2 = f(A)_{11} \oplus f(A)_{22}.$

Finally, 6. follows immediately from 5. $\Box$

We have shown that (2.6) implies 5., and hence 6. It is not hard to show that if $f$ is any function from $M_n$ to $M_n$ such that $f(QAQ^*) = Qf(A)Q^*$ for any unitary $Q \in M_n$, then the convexity inequality (2.6) holds if and only if the submatrix inequality 6. holds, see e.g. [7].

We collect several useful facts about $\kappa_H$ in the following theorem. These results reduce to well known results if one restricts $A$ to be symmetric positive definite (in which case $\kappa_H(A) = \kappa_2(A)$).

Theorem 2.3. Let $\kappa_H$ be defined by (2.5) and let $A, B \in M_n$ have positive definite Hermitian part, with $A$ partitioned as in (2.7). Then

1. $\kappa_H(A) = \kappa_H(A^*) = \kappa_H(A^{-1}) = \kappa_H(cA)$ for any $c > 0$.
2. $\kappa_H(A) = \kappa_H(QAQ^*)$ for any unitary $Q \in M_n$.
3. $\kappa_H(A) \ge \|A\|_2 \|A^{-1}\|_2 = \kappa_2(A)$.
4. $\kappa_H(tA + (1-t)B) \le \max\{\kappa_H(A), \kappa_H(B)\}$ for any $t \in [0, 1]$.
5. $\kappa_H(A + \alpha I) \le \kappa_H(A)$ for any $\alpha \ge 0$.
6. $\kappa_H(A) \ge \kappa_H(A_{11} \oplus A_{22}) \ge \kappa_H(A_{11})$.
7. $\kappa_H(A) \ge \kappa_H(A_{22} - A_{21} A_{11}^{-1} A_{12})$.

Proof. The statements in 1. and 2. follow immediately from the definitions. The inequalities in (2.2) imply 3.

Since $\kappa_H(cX) = \kappa_H(X)$ for any $c > 0$ and any $X$ with positive definite Hermitian part, it suffices to prove 4. under the additional assumption

(2.8) $\lambda_{\min}((A + A^*)/2) = \lambda_{\min}((B + B^*)/2) = 1,$

in which case

$\kappa_H(A) = \|f(A)\|_2$ and $\kappa_H(B) = \|f(B)\|_2.$

Let $C = tA + (1-t)B$. Then, by (2.8),

$\lambda_{\min}((C + C^*)/2) \ge t \lambda_{\min}((A + A^*)/2) + (1-t) \lambda_{\min}((B + B^*)/2) = 1,$

or, equivalently, $\|[(C + C^*)/2]^{-1}\|_2 \le 1$. By Theorem 2.2 we have

$f(C) \le t f(A) + (1-t) f(B).$

Since $f(C)$ is positive definite this implies

$\|f(C)\|_2 \le \|t f(A) + (1-t) f(B)\|_2 \le t\|f(A)\|_2 + (1-t)\|f(B)\|_2 \le \max\{\kappa_H(A), \kappa_H(B)\}.$

Combining this with the bound on $\|[(C + C^*)/2]^{-1}\|_2$ gives the result.

The inequality in 5. is a special case of 4. The second inequality in 6. is immediate. To prove the first, let $Q$ be the unitary matrix $I_k \oplus (-I_{n-k})$ and note that $A_{11} \oplus A_{22} = (A + QAQ^*)/2$. The result now follows from 4. and 2. Finally, to show 7., let $(A^{-1})_{22}$ be the $(n-k) \times (n-k)$ submatrix in the bottom right corner of $A^{-1}$. Then $(A_{22} - A_{21} A_{11}^{-1} A_{12})^{-1} = (A^{-1})_{22}$ [10, 0.7.3]. So by 1., then 6., and finally 1. again, we have

$\kappa_H(A_{22} - A_{21} A_{11}^{-1} A_{12}) = \kappa_H((A_{22} - A_{21} A_{11}^{-1} A_{12})^{-1}) = \kappa_H((A^{-1})_{22}) \le \kappa_H(A^{-1}) = \kappa_H(A). \ \Box$

We will generalize 4. and 6. in the next section. Notice that if $A$ has positive definite Hermitian part and $B$ is a principal submatrix of $A$ then, combining 3. and 6., we have $\kappa_2(B) \le \kappa_H(B) \le \kappa_H(A)$. That is, we have a bound on the 2-norm condition number of any principal submatrix of a matrix with positive definite Hermitian part.

Finally we give a perturbation result for the function $f$.

Lemma 2.4. Let $A = H + S$ have positive definite Hermitian part and let $E$ be such that

$\|H^{-1}\|_2 \|E\|_2 \le 1/2.$

Then

(2.9) $\|f(A) - f(A + E)\|_2 \le 6\|E\|_2 \|H^{-1}\|_2 \|H + S^* H^{-1} S\|_2 = 6\|E\|_2 \kappa_H(A).$

Proof. Suppose that $\epsilon \|H^{-1}\|_2 \le 1/2$. Then by standard arguments we have

$(H - \epsilon I)^{-1} = H^{-1/2}(I - \epsilon H^{-1})^{-1} H^{-1/2} \le H^{-1/2}(I - \epsilon \|H^{-1}\|_2 I)^{-1} H^{-1/2} \le H^{-1/2}[(1 + 2\epsilon\|H^{-1}\|_2) I] H^{-1/2} = (1 + 2\epsilon\|H^{-1}\|_2) H^{-1}.$

Let $A$ and $E$ satisfy the conditions of the lemma and set $\epsilon = \|E\|_2$. Then $\epsilon\|H^{-1}\|_2 \le 1/2$. Define

$F = (E + E^*)/2, \quad G = (E - E^*)/2.$

Then $\|F\|_2 \le \epsilon$ and $\|G\|_2 \le \epsilon$. Furthermore, the condition $\epsilon\|H^{-1}\|_2 \le 1/2$ implies

$1 + 2\epsilon\|H^{-1}\|_2 \le 2$ and $2G^* H^{-1} G - \epsilon I \le 0.$

So

$f(A + E) = (H + F) + (S + G)^*(H + F)^{-1}(S + G)$
$\le (H + \epsilon I) + (S + G)^*(H - \epsilon I)^{-1}(S + G)$
$\le (H + \epsilon I) + (1 + 2\epsilon\|H^{-1}\|_2)(S + G)^* H^{-1}(S + G)$
$= H + S^* H^{-1} S + \epsilon I + 2\epsilon\|H^{-1}\|_2 S^* H^{-1} S + (1 + 2\epsilon\|H^{-1}\|_2)\{S^* H^{-1} G + G^* H^{-1} S + G^* H^{-1} G\}$
$\le H + S^* H^{-1} S + 2\epsilon\|H^{-1}\|_2 \{H + S^* H^{-1} S\} - \epsilon I + 2\{S^* H^{-1} G + G^* H^{-1} S\} + 2 G^* H^{-1} G$
$= f(A) + 2\epsilon\|H^{-1}\|_2 f(A) + 2\{S^* H^{-1} G + G^* H^{-1} S\} + (2G^* H^{-1} G - \epsilon I)$
$\le f(A) + 2\epsilon\|H^{-1}\|_2 f(A) + 2\{S^* H^{-1} G + G^* H^{-1} S\}.$

Now we will bound the norm of the final term in this expression:

$\|S^* H^{-1} G\|_2 \le \|S^* H^{-1/2}\|_2 \|H^{-1/2}\|_2 \|G\|_2 \le \epsilon \sqrt{\|S^* H^{-1} S\|_2} \sqrt{\|H^{-1}\|_2} \le \epsilon \sqrt{\|H + S^* H^{-1} S\|_2 \|H^{-1}\|_2} \le \epsilon \|H + S^* H^{-1} S\|_2 \|H^{-1}\|_2.$

(For the final inequality we have used the fact that $\|H + S^* H^{-1} S\|_2 \|H^{-1}\|_2 \ge 1$.) Thus,

$\lambda_{\max}(f(A + E) - f(A)) \le 2\epsilon\|H^{-1}\|_2 \|H + S^* H^{-1} S\|_2 + 2\|S^* H^{-1} G + G^* H^{-1} S\|_2 \le 6\epsilon\|H^{-1}\|_2 \|H + S^* H^{-1} S\|_2.$

A similar argument shows that

$\lambda_{\min}(f(A + E) - f(A)) \ge -6\epsilon\|H^{-1}\|_2 \|H + S^* H^{-1} S\|_2.$

Combining these two we have the inequality in (2.9). The equality in (2.9) follows from the definition of $\kappa_H(A)$. $\Box$

A simpler approach to bounding $\|f(A + E) - f(A)\|_2$ is to bound $\|A^{-1} - (A + E)^{-1}\|_2$ and then use the fact that $f(A)^{-1} = (A^{-1} + A^{-*})/2$. However, this gives an inequality of the form (2.9) but with $\kappa_H(A)$ replaced by $\kappa_H(A)^2$. Note that if we restrict $A$ and $E$ to be Hermitian then we have a result which is stronger than (2.9):

$\|f(A + E) - f(A)\|_2 = \|E\|_2,$

regardless of the value of $\kappa_H(A)$. However, the bound (2.9) is quite satisfactory for our purposes, since our results in Theorem 4.1, when restricted to Hermitian matrices, reduce to the bounds proved for Hermitian matrices in [14] (up to a constant).

3. Further Inequalities. In this section we prove some additional inequalities that will not be used in Section 4. The first is a refinement of (2.3).

Corollary 3.1. Let $A = H + S$ have positive definite Hermitian part. Then

(3.1) $\alpha [H(A)]^{-1} \le H(A^{-1}) \le \beta [H(A)]^{-1}$

if and only if

(3.2) $\alpha \le (1 + \max\{|\lambda|^2 : \lambda \text{ an eigenvalue of } H(A)^{-1} S(A)\})^{-1}$ and $\beta \ge (1 + \min\{|\lambda|^2 : \lambda \text{ an eigenvalue of } H(A)^{-1} S(A)\})^{-1}.$

Proof. By (2.1) the first inequality is equivalent to

(3.3) $\alpha H(A)^{-1} \le (H(A) + S(A)^* H(A)^{-1} S(A))^{-1}.$

It is known that for positive definite $X, Y \in M_n$ we have $X \le Y$ if and only if the spectral radius of $Y^{-1} X$ is less than or equal to 1 [10, Theorem 7.7.3]. Using this fact and elementary manipulations one can show that (3.3) holds if and only if $\alpha$ is less than or equal to the first right hand side in (3.2). The proof of the second inequality is similar. $\Box$

The first inequality in (3.1) is Theorem 2 in [11]. However, the proof there depended on the fact that $A$ was real; here it does not.

Recall that the Hadamard product of $A = [a_{ij}] \in M_n$ and $B = [b_{ij}] \in M_n$ is $A \circ B = [a_{ij} b_{ij}]$. Thus, the results in part 5 of Theorem 2.2 and part 6 of Theorem 2.3 may be stated as

(3.4) $f(A \circ B) \le f(A) \circ B$ for all $A \in M_n(\mathbf{C})$ with $H(A)$ positive definite,
(3.5) $\kappa_H(A \circ B) \le \kappa_H(A)$ for all $A \in M_n(\mathbf{C})$ with $H(A)$ positive definite,

where $B = J_k \oplus J_{n-k}$ ($J_i$ is the $i \times i$ matrix of ones).
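The inequalities (3.4) and (3.5) are easy to illustrate numerically in this special case, where $A \circ B$ is simply the block-diagonal pinching of $A$. The sketch below (Python/NumPy; ours, not from the paper, with `f` and `kappa_H` implementing (2.4) and (2.5)) checks both in the Loewner order:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2

def f(A):
    """f(A) = [(A^{-1} + A^{-*})/2]^{-1} = H + S* H^{-1} S, eq. (2.4)."""
    Ainv = np.linalg.inv(A)
    return np.linalg.inv((Ainv + Ainv.conj().T) / 2)

def kappa_H(A):
    """kappa_H(A) = ||H + S* H^{-1} S||_2 ||H^{-1}||_2, eq. (2.5)."""
    H = (A + A.conj().T) / 2
    return np.linalg.norm(f(A), 2) * np.linalg.norm(np.linalg.inv(H), 2)

# A with positive definite Hermitian (here: symmetric) part
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n) + (M - M.T)   # H = MM^T + I > 0, S = M - M^T

# B = J_k (+) J_{n-k}: A o B is the block-diagonal part of A
B = np.zeros((n, n))
B[:k, :k] = 1.0
B[k:, k:] = 1.0

# (3.4): f(A o B) <= f(A) o B in the Loewner order
D = f(A) * B - f(A * B)
assert np.linalg.eigvalsh((D + D.T) / 2).min() >= -1e-10

# (3.5): kappa_H(A o B) <= kappa_H(A)
assert kappa_H(A * B) <= kappa_H(A) * (1 + 1e-12)
```

Here `*` is the entrywise (Hadamard) product of NumPy arrays, so `A * B` zeroes the off-diagonal blocks exactly as $J_k \oplus J_{n-k}$ prescribes.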
In fact, (3.4) and (3.5) are true more generally, as is part 4 of Theorem 2.3. First we will provide some preliminary facts and definitions.

We call a norm $\|\cdot\|$ on $M_n$ monotone if $\|A\| \le \|B\|$ whenever $A$ and $B$ are positive semidefinite matrices with $A \le B$. We call a norm $\|\cdot\|$ unitarily invariant if $\|A\| = \|UAV\|$ for any $A \in M_n$ and unitary $U, V \in M_n$. A unitarily invariant norm must be monotone. However, the monotone norm $\|A\| = \max |a_{ij}|$ is not unitarily invariant. Let $\|\cdot\|$ be a norm on $M_n$. Then we define $\kappa_{H,\|\cdot\|}$ on the cone of $n \times n$ matrices with positive definite Hermitian part by

(3.6) $\kappa_{H,\|\cdot\|}(A) = \|H + S^* H^{-1} S\| \, \|H^{-1}\|, \quad \text{where } H = H(A), \ S = S(A).$

For $x \in \mathbf{R}^n$ let $D_x \in M_n$ denote the diagonal matrix with $(i, i)$ entry $x_i$. For $A \in M_n$ and $x, y \in \mathbf{R}^n$ we have $A \circ (xy^*) = D_x A D_y$. We call a matrix $B \in M_n$ a correlation matrix if it is positive definite and its main diagonal entries are all 1. If $B \in M_n$ is a correlation matrix and $\|\cdot\|$ is a unitarily invariant norm then, by [2, (33)] or [3, Corollary 2], one can show that for any $A \in M_n(\mathbf{C})$

(3.7) $\|A \circ B\| \le \|A\|,$

and, by [1, Lemma 2], it follows that for any positive definite $H \in M_n$

(3.8) $(H \circ B)^{-1} \le H^{-1} \circ B.$

We will extend the definition of $f$ to

$P_n = \{A \in M_n : \mathrm{range}\, S(A) \subseteq \mathrm{range}\, H(A), \text{ and } H(A) \text{ is positive semidefinite}\}$

by

(3.9) $f(A) = H(A) + S(A)^* H(A)^{\dagger} S(A).$

($A^{\dagger}$ denotes the Moore-Penrose inverse of $A$, see e.g. [10, p. 421].) Notice that even though the function $f$ is not continuous on $P_n$ we do have

$f(A) = \lim_{\epsilon \downarrow 0} f(A + \epsilon I)$ for all $A \in P_n.$

With this extended definition of $f$ we no longer have the equality in part 2 of Theorem 2.2, but we do have the inequality

(3.10) $f(XAX^*) \le X f(A) X^*$ for all $X \in M_n.$

If one restricts $A$ and $B$ to be positive definite then the following result is Theorem 10.C.3 in [12].

Theorem 3.2. Let $A, B \in M_n$ have positive definite Hermitian part and let $\|\cdot\|$ be a monotone norm on $M_n$. Then

$\kappa_{H,\|\cdot\|}(A + B) \le \max\{\kappa_{H,\|\cdot\|}(A), \kappa_{H,\|\cdot\|}(B)\}.$

Proof. The proof is essentially the same as that of part 4 of Theorem 2.3, except that one uses the matrix convexity of the function $g(H) = H^{-1}$ on the positive definite matrices (see e.g. [12]) to obtain the bound on $\|[(C + C^*)/2]^{-1}\|$. $\Box$

Theorem 3.3. Let $A \in M_n$ have positive definite Hermitian part and let $B \in M_n$ be positive semidefinite. Then

(3.11) $0 \le f(A \circ B) \le f(A) \circ B.$

Proof. Let $A, B$ satisfy the conditions of the theorem. The left hand inequality is immediate. Because $B$ is positive semidefinite we may write $B = \sum_{i=1}^n \lambda_i x_i x_i^*$ with $\lambda_i \ge 0$. So now, by the convexity and homogeneity of $f$ and the inequality (3.10),

$f(A \circ B) = f\Bigl(A \circ \sum_{i=1}^n \lambda_i x_i x_i^*\Bigr) = f\Bigl(\sum_{i=1}^n \lambda_i A \circ (x_i x_i^*)\Bigr) = f\Bigl(\sum_{i=1}^n \lambda_i D_{x_i} A D_{x_i}^*\Bigr) \le \sum_{i=1}^n \lambda_i f(D_{x_i} A D_{x_i}^*) \le \sum_{i=1}^n \lambda_i D_{x_i} f(A) D_{x_i}^* = f(A) \circ B. \ \Box$

It appears that the next result has not been observed except in the case where $A$ is positive definite and $B$ is the correlation matrix $J_k \oplus J_{n-k}$ [12, 10.D.2].

Theorem 3.4. Let $A \in M_n$ have positive definite Hermitian part, let $B \in M_n$ be a correlation matrix, and let $\|\cdot\|$ be any unitarily invariant norm on $M_n$. Then

(3.12) $\kappa_{H,\|\cdot\|}(A \circ B) \le \kappa_{H,\|\cdot\|}(A).$

Proof. Let $A$ and $B$ satisfy the conditions of the theorem and let $\|\cdot\|$ be a unitarily invariant norm. Then, since by the previous result $f(A \circ B) \le f(A) \circ B$, taking norms and applying (3.7) we have

$\|f(A \circ B)\| \le \|f(A) \circ B\| \le \|f(A)\|.$

Also, $H(A \circ B) = H \circ B$, so by (3.8) and (3.7)

$\|(H \circ B)^{-1}\| \le \|H^{-1} \circ B\| \le \|H^{-1}\|.$

Combining these two inequalities gives (3.12). $\Box$

4. Computation of the LU Factorization. In this section we analyze the backward stability of the outer product LU factorization algorithm without pivoting (described below) when applied to a matrix with positive definite Hermitian part using finite precision arithmetic. We will assume that all matrices are real in this section. (In this case $(A + A^T)/2$, the symmetric part of $A$, is the same as the Hermitian part of $A$.) These results generalize those in [14] for positive definite matrices and the bounds for the exact LU factors of a matrix with positive definite Hermitian part in [8].

We assume the model of floating point arithmetic in [9, Section 2.4] and let $u$ denote unit roundoff. There are many reasons to avoid pivoting; we will only mention two, and refer to [9] for a more complete discussion. Firstly, block algorithms [9] perform better without pivoting. Secondly, pivoting will usually destroy sparsity.

Although we consider the outer product LU factorization algorithm, the gaxpy LU factorization algorithm, with the computations organized in the natural way (e.g. [9, Algorithm 3.2.4]), computes exactly the same LU factors in floating point arithmetic as the outer product algorithm, so the results are valid for the gaxpy algorithm also. The gaxpy algorithm is often preferred in practice; see [9, Section 1.4.8] for a discussion of some of the issues. Block LU factorization algorithms (see e.g. [9, Algorithms 3.2.5, 3.2.6]) typically will not produce exactly the same computed LU factorization as (4.1), but one may expect the error analysis to produce similar conclusions, since we have shown in Section 2 that $\kappa_2(B) \le \kappa_H(A)$ for any principal submatrix $B$ of a matrix $A$ with positive definite Hermitian part.

The outer product algorithm that we will consider is

(4.1)
    for k = 1 to n-1
        for j = k+1 to n
            a_jk = a_jk / a_kk
            for i = k+1 to n
                a_ji = a_ji - a_jk * a_ki
            end
        end
    end

It runs to completion provided that at the $k$-th stage $a_{kk} \ne 0$. The algorithm overwrites $A$ with $U$ and the strictly lower triangular part of $L$.

Theorem 4.1. Let $A \in M_n(\mathbf{R})$ have positive definite Hermitian part, and let $H = H(A)$ and $S = S(A)$. Then $L$ and $U$, the exact LU factors of $A$, satisfy

(4.2) $\| \, |L| \, |U| \, \|_F \le n \|H + S^T H^{-1} S\|_2.$

Let $u$ be machine precision.
If

(4.3) $24 n^{3/2} \kappa_H(A) u \le 1,$

then the LU factorization algorithm (4.1) runs to completion, and the computed factors $\hat{L}$ and $\hat{U}$ satisfy

(4.4) $\|\hat{L}\hat{U} - A\|_F \le 5 u n^{3/2} \|H + S^T H^{-1} S\|_2.$
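Algorithm (4.1) is short enough to state executably. The following Python transcription (ours; the paper gives only the pseudocode, and the name `lu_nopivot` is our own) overwrites a copy of $A$ with $U$ and the multipliers of $L$, and checks the exact-factor bound (4.2) on a random matrix with positive definite symmetric part:

```python
import numpy as np

def lu_nopivot(A):
    """Outer-product LU without pivoting, a transcription of algorithm (4.1).

    Overwrites a copy of A with U (upper triangle) and the
    multipliers of L (strictly lower triangle)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n - 1):
        if A[k, k] == 0.0:
            raise ZeroDivisionError("zero pivot at stage %d" % k)
        for j in range(k + 1, n):
            A[j, k] /= A[k, k]                 # multiplier l_{jk}
            for i in range(k + 1, n):
                A[j, i] -= A[j, k] * A[k, i]   # a_{ji} <- a_{ji} - l_{jk} a_{ki}
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return L, U

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n) + (M - M.T)            # positive definite symmetric part

L, U = lu_nopivot(A)
assert np.allclose(L @ U, A)                    # exact factors, up to roundoff
assert np.all(np.diag(U) > 0)                   # pivots are positive

# (4.2): || |L||U| ||_F <= n ||H + S^T H^{-1} S||_2
H, S = (A + A.T) / 2, (A - A.T) / 2
bound = n * np.linalg.norm(H + S.T @ np.linalg.solve(H, S), 2)
assert np.linalg.norm(np.abs(L) @ np.abs(U), 'fro') <= bound
```

In a production code the $k$-th step would of course be vectorized (a rank-one update of the trailing block); the triple loop is kept here to mirror (4.1) line by line.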

Proof. Let $A$ have positive definite Hermitian part and be partitioned as

(4.5) $A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \quad A_{22} \in M_{n-1},$

and let $\tilde{A} = A_{22} - A_{21} A_{11}^{-1} A_{12}$. Given a matrix $X$, let $X_1$ denote the first column of $X$, and $X^{(1)}$ denote the first row of $X$ (note $X^{(1)}$ is $1 \times n$). First we will prove the following inequalities, which will be used several times:

(4.6) $\|A_1\|_2^2 \le A_{11} \|H + S^T H^{-1} S\|_2$ and $\|A^{(1)}\|_2^2 \le A_{11} \|H + S^T H^{-1} S\|_2.$

It suffices to prove the first inequality, as the second inequality is the first with $A$ replaced by $A^T$. Notice that

$A_1 = (H + S)_1 = [(H^{1/2} + SH^{-1/2})(H^{1/2})]_1 = (H^{1/2} + SH^{-1/2})(H^{1/2})_1.$

Also, because $S$ is skew-symmetric we have $A_{11} = H_{11} + S_{11} = H_{11}$, and hence

$\|(H^{1/2})_1\|_2^2 = (H^{1/2} H^{1/2})_{11} = H_{11} = A_{11}.$

So, combining these two results, and using the skew-symmetry of $S$ for the final equality, we have

$\|A_1\|_2^2 = \|(H^{1/2} + SH^{-1/2})(H^{1/2})_1\|_2^2 \le \|H^{1/2} + SH^{-1/2}\|_2^2 \, \|(H^{1/2})_1\|_2^2 = \|(H^{1/2} + SH^{-1/2})(H^{1/2} + SH^{-1/2})^T\|_2 \, A_{11} = A_{11} \|H + S^T H^{-1} S\|_2.$

We prove (4.2) by induction on $n$ (it is also proved in [8] in another way). Let $L$ and $U$ be partitioned in the same way as $A$. Then

$\| |L||U| \|_F = \| |L_1||U^{(1)}| + |L_{22}||U_{22}| \|_F \le \| |L_1||U^{(1)}| \|_F + \| |L_{22}||U_{22}| \|_F \le \|L_1\|_2 \|U^{(1)}\|_2 + (n-1)\|f(\tilde{A})\|_2 \le \|A_1 / A_{11}\|_2 \|A^{(1)}\|_2 + (n-1)\|f(A)\|_2 \le \|H + S^T H^{-1} S\|_2 + (n-1)\|H + S^T H^{-1} S\|_2 = n\|H + S^T H^{-1} S\|_2.$

We have used the induction hypothesis for the second inequality, Theorem 2.2 for the next, and (4.6) for the last.

We now consider floating point arithmetic with precision $u$. Let $\hat{L}$ and $\hat{U}$ be the computed LU factors. Again we will use induction on the order of the matrices. It is clear that the assertions are true when $n = 1$. After one step of the LU factorization we have computed

$\hat{L}_1 = \mathrm{fl}(A_1 / A_{11}) = L_1 + F, \quad \hat{U}^{(1)} = U^{(1)}, \quad \text{and} \quad \hat{\tilde{A}} = \mathrm{fl}(A_{22} - \hat{L}_{21} \hat{U}_{12}) = \tilde{A} + E,$

where $E$ and $F$ satisfy the component-wise bounds

(4.7) $|E| \le u(|A_{22}| + |A_{21} A_{12} / A_{11}|) + 2u|A_{21} A_{12} / A_{11}|,$
(4.8) $|F| \le u|L_1| = u|A_1| / A_{11}.$

(We have used $u^2 \le u$ to obtain the bound on $|E|$.) Note that by (4.6) and the fact that $A_{21}$ is a submatrix of $A_1$ we have

$\|A_{21} / \sqrt{A_{11}}\|_2^2 \le \|A_1 / \sqrt{A_{11}}\|_2^2 \le \|H + S^T H^{-1} S\|_2.$

Similarly,

$\|A_{12} / \sqrt{A_{11}}\|_2^2 \le \|H + S^T H^{-1} S\|_2.$

Thus

$\|E\|_F \le u \| |A_{22}| \|_F + 3u \| |A_{21} A_{12} / A_{11}| \|_F = u \|A_{22}\|_F + 3u \|A_{21} A_{12} / A_{11}\|_F \le u\sqrt{n}\|H + S^T H^{-1} S\|_2 + 3u\|H + S^T H^{-1} S\|_2 \le 4u\sqrt{n}\|H + S^T H^{-1} S\|_2,$

which implies

(4.9) $\|E\|_2 \le \|E\|_F \le 4u\sqrt{n}\|H + S^T H^{-1} S\|_2.$

It is immediate from (4.8) and (4.6) that

(4.10) $\|F\|_2 \le u\|A_1\|_2 / A_{11} \le u\,(\|H + S^T H^{-1} S\|_2 / A_{11})^{1/2}.$

First we will show that if $H(A)$ is positive definite and the condition (4.3) holds then the LU factorization runs to completion with positive pivots. Our proof is by induction. The case $n = 1$ is immediate. Assume that $A \in M_n(\mathbf{R})$ has $H(A)$ positive definite and that $A$ satisfies (4.3); we will show that $\tilde{A} + E \in M_{n-1}$ has positive definite Hermitian part and also satisfies (4.3). To do this we must compute a bound on $\kappa_H(\tilde{A} + E)$. First,

$\lambda_{\min}([(\tilde{A} + E) + (\tilde{A} + E)^T]/2) \ge \lambda_{\min}([\tilde{A} + \tilde{A}^T]/2) - \|E\|_2 \ge \lambda_{\min}([\tilde{A} + \tilde{A}^T]/2) - 4u\sqrt{n}\|H + S^T H^{-1} S\|_2 \ge \lambda_{\min}([\tilde{A} + \tilde{A}^T]/2)\,(1 - 4u\sqrt{n}\,\kappa_H(\tilde{A})).$

Now, using Theorem 2.2 and Lemma 2.4 for the second inequality, and the bound on $\|E\|_2$ for the third, we have

(4.11) $\|f(\tilde{A} + E)\|_2 \le \|f(\tilde{A})\|_2 + \|f(\tilde{A} + E) - f(\tilde{A})\|_2 \le \|f(\tilde{A})\|_2 + 6\|E\|_2 \kappa_H(\tilde{A}) \le \|f(\tilde{A})\|_2 \,(1 + 12u\sqrt{n}\,\kappa_H(\tilde{A})).$

Combining these two bounds, and then using the fact that $\kappa_H(\tilde{A}) \le \kappa_H(A)$ (Theorem 2.3), yields

$\kappa_H(\tilde{A} + E) \le \kappa_H(\tilde{A})(1 + 12u\sqrt{n}\,\kappa_H(\tilde{A}))(1 - 4u\sqrt{n}\,\kappa_H(\tilde{A}))^{-1} \le \kappa_H(A)(1 + 12u\sqrt{n}\,\kappa_H(A))(1 - 4u\sqrt{n}\,\kappa_H(A))^{-1}.$

We can now show that $\tilde{A} + E$ satisfies the condition (4.3):

$24u(n-1)^{3/2}\kappa_H(\tilde{A} + E) \le 24u(n-1)^{3/2}\kappa_H(A)\,\frac{1 + 12u\sqrt{n}\,\kappa_H(A)}{1 - 4u\sqrt{n}\,\kappa_H(A)} \le 24u(n-1)^{3/2}\kappa_H(A)\,\frac{1}{(1 - 12u\sqrt{n}\,\kappa_H(A))(1 - 4u\sqrt{n}\,\kappa_H(A))} \le 24u(n-1)^{3/2}\kappa_H(A)\,\frac{1}{1 - 16u\sqrt{n}\,\kappa_H(A)} = 24un^{3/2}\kappa_H(A)\,\sqrt{\frac{n-1}{n}}\,\frac{n-1}{n}\,\frac{1}{1 - 16u\sqrt{n}\,\kappa_H(A)} = 24un^{3/2}\kappa_H(A)\,\sqrt{\frac{n-1}{n}}\,\frac{n-1}{n - 16un^{3/2}\kappa_H(A)} \le 24un^{3/2}\kappa_H(A) \le 1.$

Thus we have shown that if $A$ satisfies (4.3) then so does $\tilde{A} + E \in M_{n-1}$, and hence by induction the finite precision LU factorization will run to completion with positive pivots.

Finally, we show that under the same condition (4.3) we have (4.4). Using the inductive hypothesis, the bound (4.11), and the condition (4.3) (for the third inequality), we have

$\|\hat{L}_{22}\hat{U}_{22} - (\tilde{A} + E)\|_F \le 5u(n-1)^{3/2}\|f(\tilde{A} + E)\|_2 \le 5u(n-1)^{3/2}(1 + 12u\sqrt{n}\,\kappa_H(A))\|H + S^T H^{-1} S\|_2 \le 5u(n-1)^{3/2}(1 + 1/(2n))\|H + S^T H^{-1} S\|_2 = 5u(n-1)\sqrt{n}\,\sqrt{(n-1)/n}\,(1 + 1/(2n))\|H + S^T H^{-1} S\|_2 \le 5u(n-1)\sqrt{n}\,(1 - 1/(2n))(1 + 1/(2n))\|H + S^T H^{-1} S\|_2 \le 5u(n-1)\sqrt{n}\,\|H + S^T H^{-1} S\|_2.$

Also,

$\|\hat{L}_1\hat{U}^{(1)} - L_1 U^{(1)}\|_F = \|F U^{(1)}\|_F \le \|F\|_2 \|U^{(1)}\|_2 \le u\|H + S^T H^{-1} S\|_2.$

We now combine these bounds to obtain (4.4):

$\|\hat{L}\hat{U} - A\|_F = \|\hat{L}\hat{U} - LU\|_F$

$= \|\hat{L}_1\hat{U}^{(1)} + \hat{L}_{22}\hat{U}_{22} - L_1 U^{(1)} - L_{22} U_{22}\|_F \le \|\hat{L}_1\hat{U}^{(1)} - L_1 U^{(1)}\|_F + \|\hat{L}_{22}\hat{U}_{22} - L_{22} U_{22}\|_F = \|\hat{L}_1\hat{U}^{(1)} - L_1 U^{(1)}\|_F + \|\hat{L}_{22}\hat{U}_{22} - \tilde{A}\|_F \le \|\hat{L}_1\hat{U}^{(1)} - L_1 U^{(1)}\|_F + \|\hat{L}_{22}\hat{U}_{22} - (\tilde{A} + E)\|_F + \|E\|_F \le u\|H + S^T H^{-1} S\|_2 + 5u\sqrt{n}(n-1)\|H + S^T H^{-1} S\|_2 + 4u\sqrt{n}\|H + S^T H^{-1} S\|_2 \le 5n^{3/2}u\|H + S^T H^{-1} S\|_2. \ \Box$

Corollary 4.2. Let $A \in M_n(\mathbf{R})$ have positive definite Hermitian part and suppose that the condition (4.3) holds. Then the computed LU factors of $A$ satisfy

(4.12) $\| \, |\hat{L}| \, |\hat{U}| \, \|_F \le n[1 + 30un^{3/2}\kappa_H(A)] \, \|H + S^T H^{-1} S\|_2.$

Proof. By Theorem 4.1 we have $\hat{L}\hat{U} = A + E$ with

$\|E\|_2 \le \|E\|_F \le 5un^{3/2}\|H + S^T H^{-1} S\|_2.$

So, by the first part of Theorem 4.1 (applied to $A + E$, whose exact LU factors are $\hat{L}$ and $\hat{U}$) and then Lemma 2.4, we have the desired bound:

$\| \, |\hat{L}| \, |\hat{U}| \, \|_F \le n\|f(A + E)\|_2 \le n\|f(A)\|_2 + n\|f(A + E) - f(A)\|_2 \le n\|H + S^T H^{-1} S\|_2 + 6n\|E\|_2 \kappa_H(A) \le n\|H + S^T H^{-1} S\|_2 + 6n(5un^{3/2}\|H + S^T H^{-1} S\|_2)\kappa_H(A) = n(1 + 30un^{3/2}\kappa_H(A))\|H + S^T H^{-1} S\|_2. \ \Box$

Now consider a linear system $Ax = b$ with $H(A)$ positive definite. Let $\hat{L}$ and $\hat{U}$ be the LU factors computed by algorithm (4.1), let $\hat{y}$ be the computed solution to $\hat{L}y = b$, and let $\hat{x}$ be the computed solution to $\hat{U}x = \hat{y}$. Then, combining the bound (4.12) with [9, Theorem 3.3.2], we know that $(A + E)\hat{x} = b$ where

(4.13) $\|E\|_2 \le nu(3 + 5n + 150un^{3/2}\kappa_H(A))\|H + S^T H^{-1} S\|_2 + O(u^2) \equiv c_{n,u,H}(A)\|H + S^T H^{-1} S\|_2 + O(u^2).$

Thus we have a rigorous a priori upper bound on the backward error of the solution computed without pivoting. Using the fact that if $Ax = b$ and $(A + E)y = b$ then (see e.g. [10, (5.8.7)])

(4.14) $\frac{\|x - y\|_2}{\|x\|_2} \le \frac{\|E\|_2 \|A^{-1}\|_2}{1 - \|E\|_2 \|A^{-1}\|_2},$

one can derive an a priori upper bound on the relative error in the computed solution $\hat{x}$.
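The a priori criterion can be tried out directly. The sketch below (Python/NumPy; ours, not from the paper) solves $Ax = b$ with the no-pivot factorization (4.1) in its vectorized outer-product form, computes the normwise backward error of the computed solution, and evaluates the ratio $\|H + S^T H^{-1} S\|_2 / \|A\|_2$ that the bound (4.13) suggests monitoring:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n) + 5 * (M - M.T)   # H(A) > 0, with a sizeable skew part
b = rng.standard_normal(n)

# Gaussian elimination without pivoting (outer-product form, cf. (4.1)),
# followed by the two triangular solves.
T = A.copy()
for k in range(n - 1):
    T[k+1:, k] /= T[k, k]
    T[k+1:, k+1:] -= np.outer(T[k+1:, k], T[k, k+1:])
L = np.tril(T, -1) + np.eye(n)
U = np.triu(T)
y = np.linalg.solve(L, b)       # in practice: forward substitution
x_hat = np.linalg.solve(U, y)   # in practice: back substitution

# Normwise backward error of x_hat: smallest ||E||_2/||A||_2 with (A+E)x_hat = b
eta = np.linalg.norm(b - A @ x_hat, 2) / (np.linalg.norm(A, 2) * np.linalg.norm(x_hat, 2))

# The quantity the criterion monitors; it is at least 1 by Lemma 2.1
H, S = (A + A.T) / 2, (A - A.T) / 2
ratio = np.linalg.norm(H + S.T @ np.linalg.solve(H, S), 2) / np.linalg.norm(A, 2)

assert ratio >= 1.0 - 1e-12     # Lemma 2.1: ||A||_2 <= ||H + S^T H^{-1} S||_2
assert eta < 1e-12              # modest ratio: not pivoting is harmless here
```

For this well behaved example the backward error is of the order of machine precision, consistent with (4.13); a very large `ratio` would signal (heuristically, as the text cautions) that pivoting may be worthwhile.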

Now let us compare $\hat{x}$ with $\hat{x}_{\mathrm{piv}}$, the solution computed by Gaussian elimination with pivoting, in order to decide when it is worth pivoting. From [9, (3.4.3)] we have

(4.15) $\|E_{\mathrm{piv}}\|_2 \le nu(3\sqrt{n} + 5n^2)\rho\|A\|_2 + O(u^2) \equiv nu\rho c_n\|A\|_2 + O(u^2),$

where $\rho$ is the growth factor. Ignoring the factors $c_{n,u,H}(A)$ and $nu\rho c_n$, the ratio of the bounds in (4.13) and (4.15) is $\|H + S^T H^{-1} S\|_2 / \|A\|_2 \ge 1$. Thus, if the ratio $\|H + S^T H^{-1} S\|_2 / \|A\|_2$ is not large, it is reasonable to expect that $\hat{x}$ is not significantly worse than $\hat{x}_{\mathrm{piv}}$ from the standpoint of backward error or relative error (in view of (4.14)). This is of course a heuristic, because the inequalities (4.13) and (4.15) are worst case bounds.

First we observe that $\|A\|_2 \approx \|H + S^T H^{-1} S\|_2$ does not imply that one will obtain as accurate an answer without pivoting as with pivoting. One can construct a contrived example with $\|A\|_2 = \|H + S^T H^{-1} S\|_2$ whose leading entry is a small number $\epsilon > 0$; for small $\epsilon$, solving $Ax = b$ with pivoting will, in general, produce a considerably more accurate solution than solving it without pivoting. While on the other hand, a large value of $\|H + S^T H^{-1} S\|_2 / \|A\|_2$ does not imply that Gaussian elimination without pivoting will give significantly worse results. For one thing, we only use $\|H + S^T H^{-1} S\|_2$ to bound $\| \, |\hat{L}||\hat{U}| \, \|_F$, but the fact that $\|H + S^T H^{-1} S\|_2$ is large does not imply that $\| \, |\hat{L}||\hat{U}| \, \|_F$ is large. This is in contrast to the case when $A$ is positive definite, when we have

$\|A\|_2 \le \| \, |L||U| \, \|_F \le n\|A\|_2.$

Also, a large value of $\| \, |\hat{L}||\hat{U}| \, \|_F$ need not imply a large relative error. Both these points are illustrated by the numerical example in Section 3 of [8].

Acknowledgement: I am grateful to Izchak Lewkowicz for pointing out an error in the proof of Lemma 2.1.

REFERENCES

[1] T. Ando. Concavity of certain maps on positive definite matrices and applications to Hadamard products. Linear Algebra Appl., 26:203-241.
[2] T. Ando, R. A. Horn, and C. R. Johnson. The singular values of a Hadamard product: A basic inequality.
Linear Multilinear Algebra, 21:345-365.
[3] R. B. Bapat and V. S. Sunder. On majorization and Schur products. Linear Algebra Appl., 72:107-117.
[4] S. C. Eisenstat, H. C. Elman, and M. H. Schultz. Variational iterative methods for nonsymmetric systems of linear equations. SIAM J. Numer. Anal., 20(4):345-357.
[5] K. Fan. On real matrices with positive definite symmetric component. Linear Multilinear Algebra, 1:1-4.
[6] K. Fan. On strictly dissipative matrices. Linear Algebra Appl., 9:223-241.
[7] S. Friedland and M. Katz. On a matrix inequality. Linear Algebra Appl., 85:185-190, 1987.

[8] G. H. Golub and C. F. Van Loan. Unsymmetric positive definite linear systems. Linear Algebra Appl., 28:85-97.
[9] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, second edition.
[10] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, New York.
[11] C. R. Johnson. An inequality for matrices whose symmetric part is positive definite. Linear Algebra Appl., 6:13-18.
[12] A. W. Marshall and I. Olkin. Inequalities: Theory of Majorization and its Applications. Academic Press, London.
[13] R. C. Thompson. Dissipative matrices and related results. Linear Algebra Appl., 11:255-269.
[14] J. H. Wilkinson. A priori error analysis of algebraic processes. In Proceedings of the International Congress of Mathematicians, pages 629-640, 1968.


More information

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i ) Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical

More information

Dense LU factorization and its error analysis

Dense LU factorization and its error analysis Dense LU factorization and its error analysis Laura Grigori INRIA and LJLL, UPMC February 2016 Plan Basis of floating point arithmetic and stability analysis Notation, results, proofs taken from [N.J.Higham,

More information

Department of Mathematics Technical Report May 2000 ABSTRACT. for any matrix norm that is reduced by a pinching. In addition to known

Department of Mathematics Technical Report May 2000 ABSTRACT. for any matrix norm that is reduced by a pinching. In addition to known University of Kentucky Lexington Department of Mathematics Technical Report 2000-23 Pinchings and Norms of Scaled Triangular Matrices 1 Rajendra Bhatia 2 William Kahan 3 Ren-Cang Li 4 May 2000 ABSTRACT

More information

Matrix Inequalities by Means of Block Matrices 1

Matrix Inequalities by Means of Block Matrices 1 Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,

More information

Extremal Characterizations of the Schur Complement and Resulting Inequalities

Extremal Characterizations of the Schur Complement and Resulting Inequalities Extremal Characterizations of the Schur Complement and Resulting Inequalities Chi-Kwong Li and Roy Mathias Department of Mathematics College of William & Mary Williamsburg, VA 23187. E-mail: ckli@math.wm.edu

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Lecture 5. Ch. 5, Norms for vectors and matrices. Norms for vectors and matrices Why?

Lecture 5. Ch. 5, Norms for vectors and matrices. Norms for vectors and matrices Why? KTH ROYAL INSTITUTE OF TECHNOLOGY Norms for vectors and matrices Why? Lecture 5 Ch. 5, Norms for vectors and matrices Emil Björnson/Magnus Jansson/Mats Bengtsson April 27, 2016 Problem: Measure size of

More information

Linear Operators Preserving the Numerical Range (Radius) on Triangular Matrices

Linear Operators Preserving the Numerical Range (Radius) on Triangular Matrices Linear Operators Preserving the Numerical Range (Radius) on Triangular Matrices Chi-Kwong Li Department of Mathematics, College of William & Mary, P.O. Box 8795, Williamsburg, VA 23187-8795, USA. E-mail:

More information

arxiv: v3 [math.ra] 22 Aug 2014

arxiv: v3 [math.ra] 22 Aug 2014 arxiv:1407.0331v3 [math.ra] 22 Aug 2014 Positivity of Partitioned Hermitian Matrices with Unitarily Invariant Norms Abstract Chi-Kwong Li a, Fuzhen Zhang b a Department of Mathematics, College of William

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information

Algorithms to solve block Toeplitz systems and. least-squares problems by transforming to Cauchy-like. matrices

Algorithms to solve block Toeplitz systems and. least-squares problems by transforming to Cauchy-like. matrices Algorithms to solve block Toeplitz systems and least-squares problems by transforming to Cauchy-like matrices K. Gallivan S. Thirumalai P. Van Dooren 1 Introduction Fast algorithms to factor Toeplitz matrices

More information

Journal of Inequalities in Pure and Applied Mathematics

Journal of Inequalities in Pure and Applied Mathematics Journal of Inequalities in Pure and Applied Mathematics http://jipam.vu.edu.au/ Volume 7, Issue 1, Article 34, 2006 MATRIX EQUALITIES AND INEQUALITIES INVOLVING KHATRI-RAO AND TRACY-SINGH SUMS ZEYAD AL

More information

An Analog of the Cauchy-Schwarz Inequality for Hadamard. Products and Unitarily Invariant Norms. Roger A. Horn and Roy Mathias

An Analog of the Cauchy-Schwarz Inequality for Hadamard. Products and Unitarily Invariant Norms. Roger A. Horn and Roy Mathias An Analog of the Cauchy-Schwarz Inequality for Hadamard Products and Unitarily Invariant Norms Roger A. Horn and Roy Mathias The Johns Hopkins University, Baltimore, Maryland 21218 SIAM J. Matrix Analysis

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

Perturbation results for nearly uncoupled Markov. chains with applications to iterative methods. Jesse L. Barlow. December 9, 1992.

Perturbation results for nearly uncoupled Markov. chains with applications to iterative methods. Jesse L. Barlow. December 9, 1992. Perturbation results for nearly uncoupled Markov chains with applications to iterative methods Jesse L. Barlow December 9, 992 Abstract The standard perturbation theory for linear equations states that

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

or H = UU = nx i=1 i u i u i ; where H is a non-singular Hermitian matrix of order n, = diag( i ) is a diagonal matrix whose diagonal elements are the

or H = UU = nx i=1 i u i u i ; where H is a non-singular Hermitian matrix of order n, = diag( i ) is a diagonal matrix whose diagonal elements are the Relative Perturbation Bound for Invariant Subspaces of Graded Indenite Hermitian Matrices Ninoslav Truhar 1 University Josip Juraj Strossmayer, Faculty of Civil Engineering, Drinska 16 a, 31000 Osijek,

More information

as specic as the Schur-Horn theorem. Indeed, the most general result in this regard is as follows due to Mirsky [4]. Seeking any additional bearing wo

as specic as the Schur-Horn theorem. Indeed, the most general result in this regard is as follows due to Mirsky [4]. Seeking any additional bearing wo ON CONSTRUCTING MATRICES WITH PRESCRIBED SINGULAR VALUES AND DIAGONAL ELEMENTS MOODY T. CHU Abstract. Similar to the well known Schur-Horn theorem that characterizes the relationship between the diagonal

More information

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

Compound matrices and some classical inequalities

Compound matrices and some classical inequalities Compound matrices and some classical inequalities Tin-Yau Tam Mathematics & Statistics Auburn University Dec. 3, 04 We discuss some elegant proofs of several classical inequalities of matrices by using

More information

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin On the stability of invariant subspaces of commuting matrices Tomaz Kosir and Bor Plestenjak September 18, 001 Abstract We study the stability of (joint) invariant subspaces of a nite set of commuting

More information

Direct Methods for Solving Linear Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le

Direct Methods for Solving Linear Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le Direct Methods for Solving Linear Systems Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le 1 Overview General Linear Systems Gaussian Elimination Triangular Systems The LU Factorization

More information

Minimum number of non-zero-entries in a 7 7 stable matrix

Minimum number of non-zero-entries in a 7 7 stable matrix Linear Algebra and its Applications 572 (2019) 135 152 Contents lists available at ScienceDirect Linear Algebra and its Applications www.elsevier.com/locate/laa Minimum number of non-zero-entries in a

More information

Direct Methods for Solving Linear Systems. Matrix Factorization

Direct Methods for Solving Linear Systems. Matrix Factorization Direct Methods for Solving Linear Systems Matrix Factorization Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University c 2011

More information

CHARACTERIZATIONS. is pd/psd. Possible for all pd/psd matrices! Generating a pd/psd matrix: Choose any B Mn, then

CHARACTERIZATIONS. is pd/psd. Possible for all pd/psd matrices! Generating a pd/psd matrix: Choose any B Mn, then LECTURE 6: POSITIVE DEFINITE MATRICES Definition: A Hermitian matrix A Mn is positive definite (pd) if x Ax > 0 x C n,x 0 A is positive semidefinite (psd) if x Ax 0. Definition: A Mn is negative (semi)definite

More information

Maximizing the numerical radii of matrices by permuting their entries

Maximizing the numerical radii of matrices by permuting their entries Maximizing the numerical radii of matrices by permuting their entries Wai-Shun Cheung and Chi-Kwong Li Dedicated to Professor Pei Yuan Wu. Abstract Let A be an n n complex matrix such that every row and

More information

Singular Value and Norm Inequalities Associated with 2 x 2 Positive Semidefinite Block Matrices

Singular Value and Norm Inequalities Associated with 2 x 2 Positive Semidefinite Block Matrices Electronic Journal of Linear Algebra Volume 32 Volume 32 (2017) Article 8 2017 Singular Value Norm Inequalities Associated with 2 x 2 Positive Semidefinite Block Matrices Aliaa Burqan Zarqa University,

More information

On the Schur Complement of Diagonally Dominant Matrices

On the Schur Complement of Diagonally Dominant Matrices On the Schur Complement of Diagonally Dominant Matrices T.-G. Lei, C.-W. Woo,J.-Z.Liu, and F. Zhang 1 Introduction In 1979, Carlson and Markham proved that the Schur complements of strictly diagonally

More information

then kaxk 1 = j a ij x j j ja ij jjx j j: Changing the order of summation, we can separate the summands, kaxk 1 ja ij jjx j j: let then c = max 1jn ja

then kaxk 1 = j a ij x j j ja ij jjx j j: Changing the order of summation, we can separate the summands, kaxk 1 ja ij jjx j j: let then c = max 1jn ja Homework Haimanot Kassa, Jeremy Morris & Isaac Ben Jeppsen October 7, 004 Exercise 1 : We can say that kxk = kx y + yk And likewise So we get kxk kx yk + kyk kxk kyk kx yk kyk = ky x + xk kyk ky xk + kxk

More information

Positive definiteness of tridiagonal matrices via the numerical range

Positive definiteness of tridiagonal matrices via the numerical range Electronic Journal of Linear Algebra Volume 3 ELA Volume 3 (998) Article 9 998 Positive definiteness of tridiagonal matrices via the numerical range Mao-Ting Chien mtchien@math.math.scu.edu.tw Michael

More information

Interpolating the arithmetic geometric mean inequality and its operator version

Interpolating the arithmetic geometric mean inequality and its operator version Linear Algebra and its Applications 413 (006) 355 363 www.elsevier.com/locate/laa Interpolating the arithmetic geometric mean inequality and its operator version Rajendra Bhatia Indian Statistical Institute,

More information

ON THE COMPLETE PIVOTING CONJECTURE FOR A HADAMARD MATRIX OF ORDER 12 ALAN EDELMAN 1. Department of Mathematics. and Lawrence Berkeley Laboratory

ON THE COMPLETE PIVOTING CONJECTURE FOR A HADAMARD MATRIX OF ORDER 12 ALAN EDELMAN 1. Department of Mathematics. and Lawrence Berkeley Laboratory ON THE COMPLETE PIVOTING CONJECTURE FOR A HADAMARD MATRIX OF ORDER 12 ALAN EDELMAN 1 Department of Mathematics and Lawrence Berkeley Laboratory University of California Berkeley, California 94720 edelman@math.berkeley.edu

More information

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix

More information

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Lecture 2 INF-MAT 4350 2009: 7.1-7.6, LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Tom Lyche and Michael Floater Centre of Mathematics for Applications, Department of Informatics,

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 207, v 260) Contents Matrices and Systems of Linear Equations Systems of Linear Equations Elimination, Matrix Formulation

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.

More information

Institute for Advanced Computer Studies. Department of Computer Science. On the Adjoint Matrix. G. W. Stewart y ABSTRACT

Institute for Advanced Computer Studies. Department of Computer Science. On the Adjoint Matrix. G. W. Stewart y ABSTRACT University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR{97{02 TR{3864 On the Adjoint Matrix G. W. Stewart y ABSTRACT The adjoint A A of a matrix A

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

Numerical Linear Algebra And Its Applications

Numerical Linear Algebra And Its Applications Numerical Linear Algebra And Its Applications Xiao-Qing JIN 1 Yi-Min WEI 2 August 29, 2008 1 Department of Mathematics, University of Macau, Macau, P. R. China. 2 Department of Mathematics, Fudan University,

More information

Bidiagonal decompositions, minors and applications

Bidiagonal decompositions, minors and applications Electronic Journal of Linear Algebra Volume 25 Volume 25 (2012) Article 6 2012 Bidiagonal decompositions, minors and applications A. Barreras J. M. Pena jmpena@unizar.es Follow this and additional works

More information

ON THE QR ITERATIONS OF REAL MATRICES

ON THE QR ITERATIONS OF REAL MATRICES Unspecified Journal Volume, Number, Pages S????-????(XX- ON THE QR ITERATIONS OF REAL MATRICES HUAJUN HUANG AND TIN-YAU TAM Abstract. We answer a question of D. Serre on the QR iterations of a real matrix

More information

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant

More information

Convexity of the Joint Numerical Range

Convexity of the Joint Numerical Range Convexity of the Joint Numerical Range Chi-Kwong Li and Yiu-Tung Poon October 26, 2004 Dedicated to Professor Yik-Hoi Au-Yeung on the occasion of his retirement. Abstract Let A = (A 1,..., A m ) be an

More information

Basic Concepts in Linear Algebra

Basic Concepts in Linear Algebra Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University February 2, 2015 Grady B Wright Linear Algebra Basics February 2, 2015 1 / 39 Numerical Linear Algebra Linear

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Stanford Mathematics Department Math 205A Lecture Supplement #4 Borel Regular & Radon Measures

Stanford Mathematics Department Math 205A Lecture Supplement #4 Borel Regular & Radon Measures 2 1 Borel Regular Measures We now state and prove an important regularity property of Borel regular outer measures: Stanford Mathematics Department Math 205A Lecture Supplement #4 Borel Regular & Radon

More information

ESTIMATION OF ERROR. r = b Abx a quantity called the residual for bx. Then

ESTIMATION OF ERROR. r = b Abx a quantity called the residual for bx. Then ESTIMATION OF ERROR Let bx denote an approximate solution for Ax = b; perhaps bx is obtained by Gaussian elimination. Let x denote the exact solution. Then introduce r = b Abx a quantity called the residual

More information

Linear Algebra: Characteristic Value Problem

Linear Algebra: Characteristic Value Problem Linear Algebra: Characteristic Value Problem . The Characteristic Value Problem Let < be the set of real numbers and { be the set of complex numbers. Given an n n real matrix A; does there exist a number

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Decompositions, numerical aspects Gerard Sleijpen and Martin van Gijzen September 27, 2017 1 Delft University of Technology Program Lecture 2 LU-decomposition Basic algorithm Cost

More information

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects Numerical Linear Algebra Decompositions, numerical aspects Program Lecture 2 LU-decomposition Basic algorithm Cost Stability Pivoting Cholesky decomposition Sparse matrices and reorderings Gerard Sleijpen

More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 GENE H GOLUB Issues with Floating-point Arithmetic We conclude our discussion of floating-point arithmetic by highlighting two issues that frequently

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Lecture 2 INF-MAT : A boundary value problem and an eigenvalue problem; Block Multiplication; Tridiagonal Systems

Lecture 2 INF-MAT : A boundary value problem and an eigenvalue problem; Block Multiplication; Tridiagonal Systems Lecture 2 INF-MAT 4350 2008: A boundary value problem and an eigenvalue problem; Block Multiplication; Tridiagonal Systems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University

More information

CHAPTER 6. Direct Methods for Solving Linear Systems

CHAPTER 6. Direct Methods for Solving Linear Systems CHAPTER 6 Direct Methods for Solving Linear Systems. Introduction A direct method for approximating the solution of a system of n linear equations in n unknowns is one that gives the exact solution to

More information

A Note on Eigenvalues of Perturbed Hermitian Matrices

A Note on Eigenvalues of Perturbed Hermitian Matrices A Note on Eigenvalues of Perturbed Hermitian Matrices Chi-Kwong Li Ren-Cang Li July 2004 Let ( H1 E A = E H 2 Abstract and à = ( H1 H 2 be Hermitian matrices with eigenvalues λ 1 λ k and λ 1 λ k, respectively.

More information

Norms and Perturbation theory for linear systems

Norms and Perturbation theory for linear systems CHAPTER 7 Norms and Perturbation theory for linear systems Exercise 7.7: Consistency of sum norm? Observe that the sum norm is a matrix norm. This follows since it is equal to the l -norm of the vector

More information

Review of Basic Concepts in Linear Algebra

Review of Basic Concepts in Linear Algebra Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra

More information

2 W. LAWTON, S. L. LEE AND ZUOWEI SHEN is called the fundamental condition, and a sequence which satises the fundamental condition will be called a fu

2 W. LAWTON, S. L. LEE AND ZUOWEI SHEN is called the fundamental condition, and a sequence which satises the fundamental condition will be called a fu CONVERGENCE OF MULTIDIMENSIONAL CASCADE ALGORITHM W. LAWTON, S. L. LEE AND ZUOWEI SHEN Abstract. Necessary and sucient conditions on the spectrum of the restricted transition operators are given for the

More information

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1 Scientific Computing WS 2018/2019 Lecture 9 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 9 Slide 1 Lecture 9 Slide 2 Simple iteration with preconditioning Idea: Aû = b iterative scheme û = û

More information

ACM106a - Homework 2 Solutions

ACM106a - Homework 2 Solutions ACM06a - Homework 2 Solutions prepared by Svitlana Vyetrenko October 7, 2006. Chapter 2, problem 2.2 (solution adapted from Golub, Van Loan, pp.52-54): For the proof we will use the fact that if A C m

More information

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice 3. Eigenvalues and Eigenvectors, Spectral Representation 3.. Eigenvalues and Eigenvectors A vector ' is eigenvector of a matrix K, if K' is parallel to ' and ' 6, i.e., K' k' k is the eigenvalue. If is

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination J.M. Peña 1 Introduction Gaussian elimination (GE) with a given pivoting strategy, for nonsingular matrices

More information

Necessary And Sufficient Conditions For Existence of the LU Factorization of an Arbitrary Matrix.

Necessary And Sufficient Conditions For Existence of the LU Factorization of an Arbitrary Matrix. arxiv:math/0506382v1 [math.na] 19 Jun 2005 Necessary And Sufficient Conditions For Existence of the LU Factorization of an Arbitrary Matrix. Adviser: Charles R. Johnson Department of Mathematics College

More information

Z-Pencils. November 20, Abstract

Z-Pencils. November 20, Abstract Z-Pencils J. J. McDonald D. D. Olesky H. Schneider M. J. Tsatsomeros P. van den Driessche November 20, 2006 Abstract The matrix pencil (A, B) = {tb A t C} is considered under the assumptions that A is

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

ETNA Kent State University

ETNA Kent State University C 8 Electronic Transactions on Numerical Analysis. Volume 17, pp. 76-2, 2004. Copyright 2004,. ISSN 1068-613. etnamcs.kent.edu STRONG RANK REVEALING CHOLESKY FACTORIZATION M. GU AND L. MIRANIAN Abstract.

More information

BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION

BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION K Y BERNETIKA VOLUM E 46 ( 2010), NUMBER 4, P AGES 655 664 BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION Guang-Da Hu and Qiao Zhu This paper is concerned with bounds of eigenvalues of a complex

More information

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman Kernels of Directed Graph Laplacians J. S. Caughman and J.J.P. Veerman Department of Mathematics and Statistics Portland State University PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu

More information

Math Camp Notes: Linear Algebra I

Math Camp Notes: Linear Algebra I Math Camp Notes: Linear Algebra I Basic Matrix Operations and Properties Consider two n m matrices: a a m A = a n a nm Then the basic matrix operations are as follows: a + b a m + b m A + B = a n + b n

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Chapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices

Chapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices Chapter 7 Iterative methods for large sparse linear systems In this chapter we revisit the problem of solving linear systems of equations, but now in the context of large sparse systems. The price to pay

More information
