AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION
AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION

HUAJUN HUANG AND TIN-YAU TAM

Abstract. The m-th root of the diagonal of the upper triangular matrix R_m in the QR decomposition of AX^m B = Q_m R_m converges, and the limit is given by the moduli of the eigenvalues of X with some ordering, where A, B, X ∈ C^{n×n} are nonsingular. The asymptotic behavior of the strictly upper triangular part of R_m is discussed, and some computational experiments are presented.

1. Introduction

The QR decomposition [] of a nonsingular X ∈ C^{n×n} asserts that X = QR, where Q ∈ C^{n×n} is unitary and R ∈ C^{n×n} is upper triangular with positive diagonal entries; the decomposition is unique. It is simply a matrix version of the traditional Gram-Schmidt process applied to the columns of X. The diagonal entries of R have a very nice geometric interpretation: r_{ii}, i = 1, …, n, is the distance from the i-th column of X to the space spanned by the first i−1 columns of X. We denote by

(1)  a(X) := diag(a_1(X), …, a_n(X)) = diag(r_{11}, …, r_{nn})

the diagonal matrix of R. In this paper it is shown in Section 2 that, given nonsingular A, B, X ∈ C^{n×n} and the QR decomposition AX^m B = Q_m R_m, the sequence of matrices

(2)  { [a(AX^m B)]^{1/m} }_{m=1}^∞ = { (diag R_m)^{1/m} }_{m=1}^∞

converges, and the limit is given by the moduli of the eigenvalues of X. The asymptotic behavior of the strictly upper triangular part of R_m is studied in Section 3. Some computational experiments using MAPLE and MATLAB are discussed in the last section.

2000 AMS Mathematics Subject Classification: Primary 15A23, 15A18.
Key words: eigenvalues, QR decomposition.
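As an illustration of the result just described, the convergence can be observed numerically. The sketch below is not part of the paper: it uses NumPy, and takes X diagonal so that its eigenvalue moduli (2 > 1.9 > 1.8 > 1.7) are known a priori; A and B are random, so the ordering permutation ω is almost surely the identity.

```python
import numpy as np

rng = np.random.default_rng(0)

# X with known eigenvalue moduli; A, B random nonsingular (almost surely).
X = np.diag([2.0, 1.9, 1.8, 1.7])
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

def a(M):
    """Diagonal of R in the QR decomposition M = QR, normalized positive."""
    R = np.linalg.qr(M, mode="r")
    return np.abs(np.diag(R))

m = 100
approx = a(A @ np.linalg.matrix_power(X, m) @ B) ** (1.0 / m)
print(approx)  # close to [2, 1.9, 1.8, 1.7]
```

The ratio |λ_1|/|λ_4| is kept small here on purpose; as Section 4 of the paper discusses, large ratios make the floating point computation diverge from the theoretical limit.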
2. Convergence of [a(AX^m B)]^{1/m}

Let {e_1, …, e_n} be the standard basis of C^n, that is, e_i has 1 as its only nonzero entry, at the i-th position. We identify a permutation ω ∈ S_n with the unique permutation matrix (also written ω) in the general linear group GL_n(C) for which ωe_i = e_{ω(i)}. The matrix representation of ω under the standard basis is ω = [e_{ω(1)}, …, e_{ω(n)}]. Given a matrix A ∈ C^{n×n}, let A(i|j) denote the submatrix formed by the first i rows and the first j columns of A, 1 ≤ i, j ≤ n.

Theorem 2.1. Let A, B, X ∈ GL_n(C). Let X = Y^{-1}DY be the Jordan decomposition of X, where D is the Jordan form of X and diag D = diag(λ_1, …, λ_n) satisfies |λ_1| ≥ ⋯ ≥ |λ_n|. Then

(3)  lim_{m→∞} a(AX^m B)^{1/m} = diag(|λ_{ω(1)}|, …, |λ_{ω(n)}|),

where the permutation ω is uniquely determined by Y B:

(4)  rank ω(i|j) = rank (Y B)(i|j)  for 1 ≤ i, j ≤ n.

Proof. Let X_m := AX^m B have the QR decomposition X_m = Q_m R_m. Then

(5)  a_1(X_m) ⋯ a_k(X_m) = det R_m(k|k) = [det(X_m(n|k)^* X_m(n|k))]^{1/2}.

That is, the product a_1(X_m) ⋯ a_k(X_m) is uniquely determined by the first k columns of X_m.

Set D_0 := diag D. Then D = CD_0 for a unit upper triangular matrix C commuting with D_0. By LU decomposition [, p.1], Y B = LωU for some (unique) permutation matrix ω, some unit lower triangular matrix L, and some nonsingular upper triangular matrix U. By block multiplication,

(Y B)(i|j) = L(i|i) ω(i|j) U(j|j).

So ω satisfies rank ω(i|j) = rank (Y B)(i|j) for 1 ≤ i, j ≤ n. Obviously rank ω(i|j) is the number of nonzero entries in ω(i|j). Thus it is easy to verify that ω_{ij} is a nonzero entry 1 if and only if

rank ω(i|j) − rank ω(i|j−1) − rank ω(i−1|j) + rank ω(i−1|j−1) = 1.

So the permutation matrix ω is uniquely determined by rank ω(i|j), 1 ≤ i, j ≤ n. Hence ω is uniquely determined by Y B.
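The rank condition (4) that pins down ω can be made concrete. The following sketch (not from the paper; the function name is ours) recovers the permutation matrix from the ranks of the leading submatrices of M = LωU, using the rank-jump criterion stated above:

```python
import numpy as np

def perm_from_rank_condition(M):
    """Recover the permutation matrix omega satisfying
    rank omega(i|j) = rank M(i|j) for all leading submatrices."""
    n = M.shape[0]
    r = np.zeros((n + 1, n + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            r[i, j] = np.linalg.matrix_rank(M[:i, :j])
    # omega_ij = 1 exactly when the rank jumps at position (i, j)
    return r[1:, 1:] - r[:-1, 1:] - r[1:, :-1] + r[:-1, :-1]

rng = np.random.default_rng(1)
n = 5
# Build M = L omega U: L unit lower triangular, U nonsingular upper triangular.
L = np.tril(rng.standard_normal((n, n)), -1) + np.eye(n)
U = np.triu(rng.standard_normal((n, n)))
omega = np.eye(n)[rng.permutation(n)]   # a random permutation matrix
M = L @ omega @ U

print(perm_from_rank_condition(M))  # recovers omega
```

Since L(i|i) and U(j|j) are invertible, every leading submatrix of M has the same rank as the corresponding submatrix of ω, which is exactly what the recovery relies on.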
Let L = [l_{pq}]_{n×n}. Then from X_m = AX^m B,

X_m(n|k) = AY^{-1} C^m D_0^m (LωU)(n|k)
         = AY^{-1} C^m diag(λ_1, …, λ_n)^m [ l_{pω(q)} ]_{n×k} U(k|k)
(6)      = AY^{-1} C^m [ (λ_p/λ_{ω(q)})^m l_{pω(q)} ]_{n×k} diag(λ_{ω(1)}, …, λ_{ω(k)})^m U(k|k).

Denote

(7)  H_m := AY^{-1} C^m [ (λ_p/λ_{ω(q)})^m l_{pω(q)} ]_{n×k}.

From (5) and the expression for X_m(n|k),

[a_1(X_m) ⋯ a_k(X_m)]^{1/m} = [det(X_m(n|k)^* X_m(n|k))]^{1/(2m)}
  = |det U(k|k)|^{1/m} · |λ_{ω(1)} ⋯ λ_{ω(k)}| · [det(H_m^* H_m)]^{1/(2m)}.

Clearly lim_{m→∞} |det U(k|k)|^{1/m} = 1 since U(k|k) is a nonsingular constant matrix. It remains to prove lim_{m→∞} [det(H_m^* H_m)]^{1/(2m)} = 1, since this implies

lim_{m→∞} [a_1(X_m) ⋯ a_k(X_m)]^{1/m} = |λ_{ω(1)} ⋯ λ_{ω(k)}|,

and thus lim_{m→∞} a_i(AX^m B)^{1/m} = |λ_{ω(i)}| for 1 ≤ i ≤ n.

Viewing C as a constant matrix, the entries of C^m are polynomials in m since C is unit upper triangular. In (7), each entry of AY^{-1}C^m is a polynomial in m, and l_{pω(q)} = 0 for p < ω(q). Therefore the (q', q) entry of H_m^* H_m has the form

(8)  Σ_{p ≥ ω(q'), p' ≥ ω(q)}  conj[(λ_p/λ_{ω(q')})^m] (λ_{p'}/λ_{ω(q)})^m f_{pp'}(m),

where f_{pp'}(m) is a polynomial in m. So det(H_m^* H_m) is a sum of summands, each of which is a product of terms of the form

conj[(λ_p/λ_{ω(q')})^m] (λ_{p'}/λ_{ω(q)})^m f_{pp'}(m),  p ≥ ω(q'), p' ≥ ω(q).

Notice that |λ_p/λ_{ω(q')}| ≤ 1 in each of the summands. If some |λ_p/λ_{ω(q')}| < 1 in a summand, then the summand approaches 0 as m goes to infinity.
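Two elementary limits drive this step of the proof: the m-th root of any polynomial in m tends to 1, so polynomial factors are invisible after taking m-th roots, while a geometric factor r^m with |r| < 1 crushes any polynomial. A quick numerical sanity check (illustrative only, not from the paper):

```python
# f(m)^(1/m) -> 1 for any polynomial f, while r^m * f(m) -> 0 when |r| < 1.
def f(m):
    return 5 * m**3 + 2 * m + 7

for m in (10, 100, 1000, 10000):
    print(m, f(m) ** (1.0 / m), 0.9**m * f(m))
```

The first column of values drifts down toward 1, while the second, after an initial polynomial-driven rise, collapses toward 0.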
Let E_{pq} ∈ C^{n×k} be the matrix whose only nonzero entry 1 is at the (p, q) position. Let

Ω := { (p, q) ∈ {1, …, n} × {1, …, k} : l_{pω(q)} ≠ 0, |λ_p| = |λ_{ω(q)}| },

L_0 := Σ_{(p,q)∈Ω} l_{pω(q)} E_{pω(q)},

H'_m := AY^{-1} C^m Σ_{(p,q)∈Ω} (λ_p/λ_{ω(q)})^m l_{pω(q)} E_{pω(q)}
      = AY^{-1} C^m Σ_{(p,q)∈Ω} (λ_p/|λ_p|)^m (|λ_{ω(q)}|/λ_{ω(q)})^m l_{pω(q)} E_{pω(q)}
      = AY^{-1} C^m D'_0{}^m L_0 diag( (|λ_{ω(1)}|/λ_{ω(1)})^m, …, (|λ_{ω(k)}|/λ_{ω(k)})^m ),

where D'_0 := diag(λ_1/|λ_1|, …, λ_n/|λ_n|). We remark that L_0 ∈ C^{n×k} is of full rank, since it is obtained from L by picking some of the columns after the column permutation by ω and deleting some entries (but not the diagonal entries) of L. The discussion of det(H_m^* H_m) in the preceding paragraph shows that

(9)  lim_{m→∞} [ det(H_m^* H_m) − det(H'_m{}^* H'_m) ] = 0.

We have

det(H'_m{}^* H'_m) = det[ L_0^* (D'_0{}^*)^m (C^*)^m (AY^{-1})^* (AY^{-1}) C^m D'_0{}^m L_0 ].

Given two positive semi-definite matrices P, Q ∈ C^{n×n}, recall the Löwner partial order [, p.1] [5, p.1]: P ≥ Q whenever P − Q is positive semi-definite. It follows that det P ≥ det Q. Now suppose that (AY^{-1})^*(AY^{-1}) has minimal eigenvalue α > 0 and maximal eigenvalue β. Then

α I_n ≤ (AY^{-1})^*(AY^{-1}) ≤ β I_n.

Thus

L_0^* (D'_0{}^*)^m (C^*)^m (AY^{-1})^*(AY^{-1}) C^m D'_0{}^m L_0
  ≥ L_0^* (D'_0{}^*)^m (C^*)^m (α I_n) C^m D'_0{}^m L_0
  = α L_0^* (C^*)^m C^m L_0.

Now det[L_0^*(C^*)^m C^m L_0] = f(m), where f is a polynomial. Then [, p.19] for all m ∈ N,

(10)  det(H'_m{}^* H'_m) ≥ det[α L_0^*(C^*)^m C^m L_0] = α^k f(m).
Likewise,

(11)  det(H'_m{}^* H'_m) ≤ β^k f(m).

From (9), (10), (11), we get lim_{m→∞} [det(H_m^* H_m)]^{1/(2m)} = lim_{m→∞} [det(H'_m{}^* H'_m)]^{1/(2m)} = 1. This completes the proof.

Corollary 2.2. Let X = Y^{-1}DY be the Jordan decomposition of X ∈ GL_n(C), where D is the Jordan form of X and diag D = diag(λ_1, …, λ_n) satisfies |λ_1| ≥ ⋯ ≥ |λ_n|. Then

lim_{m→∞} a(X^m)^{1/m} = diag(|λ_{ω(1)}|, …, |λ_{ω(n)}|),

where the permutation ω is uniquely determined by Y: rank ω(i|j) = rank Y(i|j) for 1 ≤ i, j ≤ n.

Remark 2.3. In Theorem 2.1 there are many choices for the Jordan decomposition of X, so the permutation matrix ω may not be uniquely fixed by X. However, the result of Theorem 2.1 implies that diag(|λ_{ω(1)}|, …, |λ_{ω(n)}|) is uniquely fixed by X and B. The phenomenon may be understood in the following way. Let X = Y'^{-1}D'Y' be another Jordan decomposition of X, where the moduli of the diagonal entries of D' are in non-increasing order. We obtain another permutation ω' by

rank ω'(i|j) = rank (Y'B)(i|j),  for 1 ≤ i, j ≤ n.

So Y'B = L'ω'U' for some unique unit lower triangular matrix L' and upper triangular matrix U'. Suppose

(12)  |D_0| := diag(|λ_1|, …, |λ_n|) = diag(ε_1 I_{t_1}, ε_2 I_{t_2}, …, ε_s I_{t_s}),  ε_1 > ⋯ > ε_s.

That is, there are t_i copies of the eigenvalue modulus ε_i of X. Both D and D' are block diagonal according to the row and column partition γ := (t_1, …, t_s). There is a block diagonal matrix P according to γ such that D' = P^{-1}DP. It follows from X = Y^{-1}DY = Y'^{-1}D'Y' that (PY'Y^{-1})D = D(PY'Y^{-1}), that is, PY'Y^{-1} commutes with D. Note that D_0 = diag D is a polynomial of D [3, p.17], and |D_0| in (12) is a polynomial of D_0 by the Lagrange-Sylvester interpolation polynomial [1, Chapter V]. Thus PY'Y^{-1} commutes with |D_0|, which implies that PY'Y^{-1} is block diagonal according to γ, and so is T := Y'Y^{-1}. So Y' = TY and

L'ω'U' = Y'B = TYB = TLωU = L_1 T_1 ωU,

where L_1 is unit lower triangular and T_1 is block diagonal according to γ. From the LU decomposition discussed in the proof of Theorem 2.1,

rank ω'(i|j) = rank (T_1 ω)(i|j)  for 1 ≤ i, j ≤ n.

In particular, writing p_i := t_1 + ⋯ + t_i, since T_1 is block diagonal,

(13)  rank ω'(p_i|j) = rank (T_1 ω)(p_i|j) = rank ω(p_i|j),  1 ≤ i ≤ s, 1 ≤ j ≤ n.

Partition the rows of ω and ω' by γ, and partition the columns of ω and ω' by (1, 1, …, 1), respectively. The (i, j)-block of ω has a nonzero entry (clearly 1) if and only if

rank ω(p_i|j) − rank ω(p_{i−1}|j) − rank ω(p_i|j−1) + rank ω(p_{i−1}|j−1) ≠ 0,

where p_0 := 0, 1 ≤ i ≤ s, 1 ≤ j ≤ n. A similar result holds for ω'. By (13), the nonzero entries of ω and ω' are located in the same block positions. This implies that λ_{ω(j)} and λ_{ω'(j)} have the same moduli for 1 ≤ j ≤ n.

Remark 2.4. There is no similar convergence pattern for AW^m X^m B in general. For example, let A = B = I_n, W = diag(1, 2, …, n), and let X = ω ≠ I_n be a permutation. Then

AW^m X^m B = diag(1^m, 2^m, …, n^m) ω^m = ω^m diag([ω^m(1)]^m, [ω^m(2)]^m, …, [ω^m(n)]^m).

3. Asymptotic behavior of the off-diagonal entries of R_m

Theorem 2.1 presents the asymptotic behavior of the diagonal entries of R_m in the QR decomposition of AX^m B = Q_m R_m. We now investigate the entries in the strictly upper triangular part of R_m.

Theorem 3.1. Under the same assumptions as in Theorem 2.1, let R_m = [ r^{(m)}_{ij} ]_{n×n} in the QR decomposition of AX^m B = Q_m R_m. Then

(14)  lim sup_{m→∞} |r^{(m)}_{ij}|^{1/m} ≤ max_{i≤k≤j} |λ_{ω(k)}| = |λ_{min_{i≤k≤j} ω(k)}|,  1 ≤ i ≤ j ≤ n.

Proof. From (6) with k = n,

X_m = AY^{-1} C^m [ (λ_p/λ_{ω(q)})^m l_{pω(q)} ]_{n×n} diag(λ^m_{ω(1)}, …, λ^m_{ω(n)}) U.
We know that l_{pω(q)} = 0 for p < ω(q), and |λ_1| ≥ ⋯ ≥ |λ_n|. So |λ_p/λ_{ω(q)}| ≤ 1 whenever l_{pω(q)} ≠ 0. The entries of C^m are polynomials in m. Thus

H_m = AY^{-1} C^m [ (λ_p/λ_{ω(q)})^m l_{pω(q)} ]_{n×n}

satisfies ‖H_m‖ ≤ g(m) for a polynomial g and every m ∈ N. So in the QR decomposition H_m = Q'_m R'_m, the entries of R'_m = [ r'^{(m)}_{ij} ]_{n×n} are bounded by the polynomial g(m). Now

Q_m R_m = X_m = Q'_m R'_m diag(λ^m_{ω(1)}, …, λ^m_{ω(n)}) U.

Therefore, by the uniqueness of the QR decomposition,

R_m = diag( (|λ_{ω(1)}|/λ_{ω(1)})^m, …, (|λ_{ω(n)}|/λ_{ω(n)})^m ) R'_m diag(λ^m_{ω(1)}, …, λ^m_{ω(n)}) U,

and

(15)  |r^{(m)}_{ij}| = | Σ_{k=1}^n r'^{(m)}_{ik} λ^m_{ω(k)} u_{kj} | = | Σ_{k=i}^j r'^{(m)}_{ik} λ^m_{ω(k)} u_{kj} | ≤ Σ_{k=i}^j |r'^{(m)}_{ik}| |u_{kj}| |λ_{ω(k)}|^m.

In other words, |r^{(m)}_{ij}| ≤ Σ_{k=i}^j g_{ikj}(m) |λ_{ω(k)}|^m for some polynomials g_{ikj} and every m ∈ N. This leads to inequality (14).

In AX^m B = Q_m R_m, define the matrix

(16)  R_m^{(1/m)} := [ |r^{(m)}_{ij}|^{1/m} ]_{n×n}.

In general { R_m^{(1/m)} }_{m=1}^∞ may not converge. See the following example:

Example 3.2. Let A := I_2, X := [ −1 1 ; 0 1 ] and B := I_2. Then X^2 = I_2, so

AX^m B = X for m odd,  AX^m B = I_2 for m even,

and

R_{2m+1}^{(1/(2m+1))} = [ 1 1 ; 0 1 ],  R_{2m}^{(1/(2m))} = [ 1 0 ; 0 1 ].

Clearly { |r^{(m)}_{12}|^{1/m} }_{m=1}^∞ = {1, 0, 1, 0, …} does not converge.

Even when { R_m^{(1/m)} }_{m=1}^∞ converges for some A, X and B, unlike the diagonal entries, given 1 ≤ i < j ≤ n the sequence { |r^{(m)}_{ij}|^{1/m} }_{m=1}^∞ may not converge to any eigenvalue modulus of X. The following is an example.
Example 3.3. This example indicates that although lim_{m→∞} R_m^{(1/m)} may exist, { |r^{(m)}_{12}|^{1/m} }_{m=1}^∞ does not necessarily converge to any eigenvalue modulus of X. Let 1 > a > b > 0 and

A := I_2,  X := [ a 0 ; 0 b ],  B := [ 1 0 ; 1 1 ],

so that ω = I_2. By direct computation,

R_m = [ √(a^{2m} + b^{2m})   b^{2m}/√(a^{2m} + b^{2m}) ; 0   a^m b^m/√(a^{2m} + b^{2m}) ].

So

lim_{m→∞} R_m^{(1/m)} = [ a  b²/a ; 0  b ],

and lim_{m→∞} |r^{(m)}_{12}|^{1/m} = b²/a is not any eigenvalue modulus of X.

Remark 3.4. Needless to say, there is no convergence of the sequence {Q_m}_{m=1}^∞ in general. For example, if

A = B := I_2,  X := [ 0 1 ; 1 0 ],

then {Q_m}_{m=1}^∞ = {X, I, X, I, …} does not converge.

4. Numerical experiments

In this section, we use MATLAB and MAPLE to investigate { R_m^{(1/m)} }_{m=1}^∞ in AX^m B = Q_m R_m. First we study the convergence of { R_m^{(1/m)} }_{m=1}^∞ when A, X, B are randomly generated (possibly with some restrictions on ω). Then we discuss the convergence of [a(AX^m B)]^{1/m} in floating point computation.

If A, X, B are randomly generated, then almost surely ω = I_n, and

R_m^{(1/m)} → [ |λ_1| |λ_1| ⋯ |λ_1| ; 0 |λ_2| ⋯ |λ_2| ; ⋮ ⋱ ⋮ ; 0 0 ⋯ |λ_n| ]

as m → ∞; that is, the limit is the upper triangular matrix whose (i, j) entry is |λ_i| for i ≤ j. However, if ω is fixed first, nonsingular A, X = Y^{-1}DY, unit lower triangular L and upper triangular U are randomly generated, and B is constructed by Y B = LωU, then usually we still have

(17)  lim_{m→∞} |r^{(m)}_{ij}|^{1/m} = max_{i≤k≤j} |λ_{ω(k)}| = |λ_{min_{i≤k≤j} ω(k)}|,  for 1 ≤ i ≤ j ≤ n.

In other words, usually the above limit exists and the equality in (14) holds. This phenomenon can be understood from the proof of Theorem 3.1. According to (15), up to a unimodular factor,

r^{(m)}_{ij} = Σ_{k=i}^j r'^{(m)}_{ik} u_{kj} λ^m_{ω(k)}.

When A, X, B are randomly generated as above, the right side of the above equation is almost surely dominated by those λ^m_{ω(k)} terms with the highest modulus. Thus (17) holds in almost all situations.

Example 4.1. This example illustrates the convergence pattern (17) for randomly generated matrices with a certain fixed ω. Let X := diag(10, 9, 8, 7, 6). Let ω denote the permutation matrix corresponding to (2 5 4 1 3). Thus (|λ_{ω(1)}|, …, |λ_{ω(5)}|) = (9, 6, 7, 10, 8). Suppose B = LωU, and A, L, U are randomly generated. Using symbolic computation for AX^m B = Q_m R_m in MAPLE, we see that

lim_{m→∞} R_m^{(1/m)} = [ 9 9 9 10 10 ; 0 6 7 10 10 ; 0 0 7 10 10 ; 0 0 0 10 10 ; 0 0 0 0 8 ].

This is exactly the limit in (17).

Next we compute the discrepancy between a(AX^m B)^{1/m} and the eigenvalue moduli of X for randomly generated A, X, B. Again it is almost surely that ω = I_n. Therefore, theoretically we have

lim_{m→∞} a(AX^m B)^{1/m} = diag(|λ_1|, …, |λ_n|).

However, the following example reveals that the floating point computation differs vastly from the symbolic one. We randomly generate A, X, B ∈ GL_4(C) as below.
Example 4.2. Let A, X, B ∈ GL_4(C) be the randomly generated complex matrices

A := [⋯],  X := [⋯],  B := [⋯].

The moduli of the eigenvalues of X are |λ| ≈ (8.31, 7.09, 6.3, 1.5). We compare the floating point plot (Figure 1b) of a(AX^m B)^{1/m} − |λ(X)| with the symbolic one (Figure 1a) in MAPLE:

[Figure 1a (symbolic); Figure 1b (floating point)]

Evidently the symbolic plot matches Theorem 2.1 but the floating point plot does not. The discrepancy may be caused by the instability of the QR decomposition compounded by the powering. In order to identify the components in which the computation departs from our theoretical result, for each 1 ≤ i ≤ 4, we compare the floating point computation of a_i(AX^m B)^{1/m} − |λ_i(X)| with the symbolic one in MAPLE. The plots for a_i(AX^m B)^{1/m} − |λ_i(X)| versus m (m = 10, …, 300) in symbolic and floating point computations are given below:

(1) a_1(AX^m B)^{1/m} − |λ_1(X)| versus m (notice that |λ_1|/|λ_1| = 1):
[Figure 2a (symbolic); Figure 2b (floating point)]

(2) a_2(AX^m B)^{1/m} − |λ_2(X)| versus m (notice that |λ_1|/|λ_2| ≈ 1.17):

[Figure 3a (symbolic); Figure 3b (floating point)]

(3) a_3(AX^m B)^{1/m} − |λ_3(X)| versus m (notice that |λ_1|/|λ_3| ≈ 1.3):

[Figure 4a (symbolic); Figure 4b (floating point)]

(4) a_4(AX^m B)^{1/m} − |λ_4(X)| versus m (notice that |λ_1|/|λ_4| ≈ 5.5):
[Figure 5a (symbolic); Figure 5b (floating point)]

In general, we observe that for generic nonsingular randomly generated A, X, B: when |λ_k| = |λ_1|, the floating point plot of a_k(AX^m B)^{1/m} − |λ_k(X)| approaches 0 as m → ∞; otherwise, a_k(AX^m B)^{1/m} − |λ_k(X)| does not approach 0. Moreover, the divergence appears earlier whenever |λ_1|/|λ_k| is larger. The phenomenon may be interpreted in this way: theoretically, a_k(AX^m B) is dominated by f(m)|λ_k|^m for certain f(m) bounded by polynomials. However, in the floating point computation, the round-off errors may disturb a_k(AX^m B) by adding some |λ_1|^m terms with coefficients of small modulus. When m is sufficiently large, the sequence { [a_k(AX^m B)]^{1/m} }_{m=1}^∞ will go to |λ_1(X)| instead of |λ_k(X)| in the floating point computation.

References

[1] F.R. Gantmacher, The Theory of Matrices, I, Chelsea Publishing Company, New York, 1959.
[2] R.A. Horn and C.R. Johnson, Matrix Analysis, Cambridge University Press, 1985.
[3] J.E. Humphreys, Introduction to Lie Algebras and Representation Theory, Springer, New York, 1972.
[4] D. Serre, Matrices: Theory and Applications, Springer, New York, 2002.
[5] X. Zhan, Matrix Inequalities, Lecture Notes in Mathematics 1790, Springer, Berlin, 2002.
[6] F. Zhang, Matrix Theory: Basic Results and Techniques, Springer, New York, 1999.

Department of Mathematics and Statistics, Auburn University, AL, USA
E-mail addresses: huanghu@auburn.edu, tamtiny@auburn.edu
Orthonormal vectors and QR factorization 4 1 Lecture 4 Orthonormal vectors and QR factorization EE263 Autumn 2004 orthonormal vectors Gram-Schmidt procedure, QR factorization orthogonal decomposition induced
More informationOn Euclidean distance matrices
On Euclidean distance matrices R. Balaji and R. B. Bapat Indian Statistical Institute, New Delhi, 110016 November 19, 2006 Abstract If A is a real symmetric matrix and P is an orthogonal projection onto
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More informationMath 18.6, Spring 213 Problem Set #6 April 5, 213 Problem 1 ( 5.2, 4). Identify all the nonzero terms in the big formula for the determinants of the following matrices: 1 1 1 2 A = 1 1 1 1 1 1, B = 3 4
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More information(Linear equations) Applied Linear Algebra in Geoscience Using MATLAB
Applied Linear Algebra in Geoscience Using MATLAB (Linear equations) Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots
More informationMathematical Foundations
Chapter 1 Mathematical Foundations 1.1 Big-O Notations In the description of algorithmic complexity, we often have to use the order notations, often in terms of big O and small o. Loosely speaking, for
More informationDiagonalizing Matrices
Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B
More informationh r t r 1 (1 x i=1 (1 + x i t). e r t r = i=1 ( 1) i e i h r i = 0 r 1.
. Four definitions of Schur functions.. Lecture : Jacobi s definition (ca 850). Fix λ = (λ λ n and X = {x,...,x n }. () a α = det ( ) x α i j for α = (α,...,α n ) N n (2) Jacobi s def: s λ = a λ+δ /a δ
More informationA matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and
Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA
ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND
More informationRecall the convention that, for us, all vectors are column vectors.
Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists
More informationTHE EIGENVALUE PROBLEM
THE EIGENVALUE PROBLEM Let A be an n n square matrix. If there is a number λ and a column vector v 0 for which Av = λv then we say λ is an eigenvalue of A and v is an associated eigenvector. Note that
More informationComputational Linear Algebra
Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.
More informationI = i 0,
Special Types of Matrices Certain matrices, such as the identity matrix 0 0 0 0 0 0 I = 0 0 0, 0 0 0 have a special shape, which endows the matrix with helpful properties The identity matrix is an example
More informationMathematical Methods wk 2: Linear Operators
John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm
More informationJORDAN NORMAL FORM. Contents Introduction 1 Jordan Normal Form 1 Conclusion 5 References 5
JORDAN NORMAL FORM KATAYUN KAMDIN Abstract. This paper outlines a proof of the Jordan Normal Form Theorem. First we show that a complex, finite dimensional vector space can be decomposed into a direct
More information1 Linear Algebra Problems
Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and
More informationFundamentals of Engineering Analysis (650163)
Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is
More informationLecture 4 Eigenvalue problems
Lecture 4 Eigenvalue problems Weinan E 1,2 and Tiejun Li 2 1 Department of Mathematics, Princeton University, weinan@princeton.edu 2 School of Mathematical Sciences, Peking University, tieli@pku.edu.cn
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More informationPart IB Numerical Analysis
Part IB Numerical Analysis Definitions Based on lectures by G. Moore Notes taken by Dexter Chua Lent 206 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after
More informationOn the adjacency matrix of a block graph
On the adjacency matrix of a block graph R. B. Bapat Stat-Math Unit Indian Statistical Institute, Delhi 7-SJSS Marg, New Delhi 110 016, India. email: rbb@isid.ac.in Souvik Roy Economics and Planning Unit
More informationThe Singular Value Decomposition (SVD) and Principal Component Analysis (PCA)
Chapter 5 The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) 5.1 Basics of SVD 5.1.1 Review of Key Concepts We review some key definitions and results about matrices that will
More informationMath Linear Algebra Final Exam Review Sheet
Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of
More informationACM106a - Homework 2 Solutions
ACM06a - Homework 2 Solutions prepared by Svitlana Vyetrenko October 7, 2006. Chapter 2, problem 2.2 (solution adapted from Golub, Van Loan, pp.52-54): For the proof we will use the fact that if A C m
More informationStat 159/259: Linear Algebra Notes
Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the
More informationMAT Linear Algebra Collection of sample exams
MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system
More informationLecture 7: Positive Semidefinite Matrices
Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.
More informationCheat Sheet for MATH461
Cheat Sheet for MATH46 Here is the stuff you really need to remember for the exams Linear systems Ax = b Problem: We consider a linear system of m equations for n unknowns x,,x n : For a given matrix A
More informationLinear Algebra: Matrix Eigenvalue Problems
CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given
More informationMINIMAL NORMAL AND COMMUTING COMPLETIONS
INTERNATIONAL JOURNAL OF INFORMATION AND SYSTEMS SCIENCES Volume 4, Number 1, Pages 5 59 c 8 Institute for Scientific Computing and Information MINIMAL NORMAL AND COMMUTING COMPLETIONS DAVID P KIMSEY AND
More informationHomework 2 Foundations of Computational Math 2 Spring 2019
Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.
More informationLecture 2: Computing functions of dense matrices
Lecture 2: Computing functions of dense matrices Paola Boito and Federico Poloni Università di Pisa Pisa - Hokkaido - Roma2 Summer School Pisa, August 27 - September 8, 2018 Introduction In this lecture
More informationEigenvalues and Eigenvectors
/88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix
More information