ON THE QR ITERATIONS OF REAL MATRICES

HUAJUN HUANG AND TIN-YAU TAM

Abstract. We answer a question of D. Serre on the QR iterations of a real matrix with nonreal eigenvalues whose moduli are distinct except for the conjugate pairs. Numerical experiments by MATLAB are performed.

1. Introduction

There are many numerical methods for the computation of the eigenvalues of a given A ∈ GL_n(K), with K = R or C. One of the most efficient is the QR method [3]. Define a sequence {A_k}_{k∈N} ⊂ GL_n(K) of matrices by

    A_1 := A,    A_{j+1} := R_j Q_j,

where A_j = Q_j R_j is the QR decomposition of A_j, j = 1, 2, .... Notice that

(1.1)    A_{j+1} = Q_j^{-1} A_j Q_j.

So the eigenvalues of each A_j are identical with those of A, counting multiplicities. One hopes to have some sort of convergence of the sequence {A_k}_{k∈N}, so that the limit would provide the eigenvalues of A. If we write

    P_k := Q_1 Q_2 ⋯ Q_k,    U_k := R_k R_{k-1} ⋯ R_1,

then [3]

(1.2)    A^k = P_k U_k,    Q_k = P_{k-1}^{-1} P_k,    R_k = U_k U_{k-1}^{-1},

(1.3)    A_k = P_{k-1}^{-1} A P_{k-1} = U_{k-1} A U_{k-1}^{-1}.

In Wilkinson's book [6] one finds the following classical result.

Theorem 1.1. Let A ∈ GL_n(C) be such that the moduli of the eigenvalues λ_1, ..., λ_n of A are distinct, that is,

(1.4)    |λ_1| > |λ_2| > ⋯ > |λ_n| (> 0).

Let A = Y^{-1} diag(λ_1, ..., λ_n) Y. Assume that Y admits an LU decomposition Y = LU. Then the strictly lower triangular part of A_k converges to zero and the diagonal part of A_k converges to D := diag(λ_1, ..., λ_n).

Though Theorem 1.1 is a rather satisfactory result, in many applications one encounters A ∈ GL_n(R). If A has nonreal eigenvalues, then they occur in complex conjugate pairs and the assumption (1.4) does not hold for A. D. Serre [3, p.174] asserts that

AMS Mathematics Subject Classification. Primary 15A3, 65F1.
Key words: QR iterations, eigenvalues, QR decomposition, LU decomposition.
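The iteration and the identities (1.1)-(1.3) are easy to exercise numerically. A minimal sketch, assuming Python with NumPy (the 2x2 symmetric example matrix and the helper `qr_iterate` are ours, not from the paper):

```python
import numpy as np

def qr_iterate(A, steps):
    """Run the QR iteration A_{j+1} = R_j Q_j and return the final iterate."""
    Ak = np.array(A, dtype=float)
    for _ in range(steps):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q  # by (1.1) this equals Q^{-1} A_j Q: eigenvalues are preserved
    return Ak

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # distinct eigenvalue moduli, so Theorem 1.1 applies
A50 = qr_iterate(A, 50)
# The (2,1) entry decays like (lambda_2/lambda_1)^k and the diagonal
# approaches the eigenvalues of A.
```

Since this example is symmetric with eigenvalues of distinct moduli, after 50 steps the off-diagonal entries are negligible and the diagonal carries the eigenvalues.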

When A ∈ M_n(R), one makes the following assumption. Let p be the number of real eigenvalues and 2q that of nonreal eigenvalues; then there are p + q distinct eigenvalue moduli. In that case, {A_k}_{k∈N} might converge to a block-triangular form, the diagonal blocks being 2×2 or 1×1. The limits of the diagonal blocks provide trivially the eigenvalues of A.

The assertion has never been proved nor disproved, as pointed out by Serre [3, p.175]. Evidently the quoted paragraph is to be interpreted as saying that the strictly lower triangular block part of {A_k}_{k∈N} converges to zero. Indeed the diagonal blocks of {A_k}_{k∈N} may not converge, even though the eigenvalues of these diagonal blocks converge to the eigenvalues of A (see Proposition 4.2).

In Section 2, Theorem 2.1 gives an affirmative answer to the question of Serre under a very mild condition: namely, if a real matrix A = Y^{-1} D Y has eigenvalues with distinct moduli (up to conjugate pairs), where D is given in (2.2), and Y admits a certain block LU decomposition, then the strictly lower triangular block part of {A_k}_{k∈N} converges to zero, and the diagonal blocks (of 2×2 or 1×1 form) provide the eigenvalues of A. In Section 3 we exhibit, based on some numerical experiments, that if Y (A = Y^{-1} D Y) does not have the block LU decomposition of Theorem 2.1, then the conclusion of Theorem 2.1 may not hold. In other words, Serre's assertion is not true for this kind of matrices. In Section 4 we provide some quantitative analysis of the 2×2 case. In Section 5 we prove that, unlike the real case, the complex case still behaves well even if Y does not admit an LU decomposition, as long as (1.4) is satisfied.

2. An answer to Serre's question

The assumption of Serre on A ∈ GL_n(R) amounts to saying that the eigenvalues of A have distinct moduli except for the conjugate pairs. It may be interpreted as the real counterpart of (1.4) in Theorem 1.1. By the real Jordan canonical form [2, Theorem 3.4.5], A admits the decomposition

(2.1)    A = Y^{-1} D Y,

where
(2.2)    D := diag( λ_1 E_{θ_1}, ..., λ_m E_{θ_m} ),    λ_1 > ⋯ > λ_m > 0,

with

    E_{θ_i} := 1 if θ_i = 0,    E_{θ_i} := -1 if θ_i = π,
    E_{θ_i} := [ cos θ_i  -sin θ_i ; sin θ_i  cos θ_i ] if θ_i ∈ (0, π).

In general Y has the Bruhat decomposition Y = LωU, where L is unit lower triangular, U is upper triangular, and ω is a permutation matrix uniquely determined by Y. If Y admits a block LU decomposition analogous to that in Theorem 1.1, we have the following result. Since such matrices Y form a dense subset of GL_n(R), a randomly chosen A ∈ GL_n(R) almost surely satisfies the above requirements.

Theorem 2.1. Let A ∈ GL_n(R) be a matrix such that the eigenvalues of A have distinct moduli except for the conjugate pairs. With the above notations, let γ = (γ_1, ..., γ_m), where γ_i is the size of E_{θ_i}, i = 1, ..., m. Let [M]_γ be the block form

of M corresponding to the partition γ. Let

    t := max{ λ_2/λ_1, ..., λ_m/λ_{m-1} }.

If Y = LωU and [ω]_γ is block diagonal (for example, if ω is the identity matrix), then the strictly lower triangular block part of [A_k]_γ converges to zero in O(t^k), and the eigenvalues of the ith diagonal block of [A_k]_γ converge to the eigenvalues of λ_i E_{θ_i} in O(t^k).

Proof: Let Y^{-1} = QR, so that

    A^k = Y^{-1} D^k Y = QR D^k LωU = QR (D^k L D^{-k}) D^k ωU.

Denote [L]_γ = (L_ij)_{m×m}, where the (i,j) block of L is L_ij, of size γ_i × γ_j. Then

    [D^k L D^{-k}]_γ = ( (λ_i/λ_j)^k E_{θ_i}^k L_ij E_{θ_j}^{-k} ).

Let D_0 := diag [L]_γ = diag( L_11, ..., L_mm ), where diag [L]_γ denotes the block diagonal part of [L]_γ, and denote

    D_k := D^k D_0 D^{-k} = diag( E_{θ_1}^k L_11 E_{θ_1}^{-k}, ..., E_{θ_m}^k L_mm E_{θ_m}^{-k} ),    k = 1, 2, ....

Using L_ij = 0 for i < j and ||E_{θ_i}||_2 = 1 for all i, where ||·||_2 is the spectral norm, we get

    D^k L D^{-k} = (I_n + O(t^k)) D_k.

So

    A^k = QR (I_n + O(t^k)) D_k D^k ωU = Q (I_n + O(t^k)) R D_k D^k ωU = Q O_k T_k R D_k D^k ωU,

where O_k T_k is the QR decomposition of the last I_n + O(t^k). By the Gram-Schmidt process one has

(2.3)    O_k = I_n + O(t^k),    T_k = I_n + O(t^k).

Since [T_k R D_k D^k ωU]_γ is a block upper triangular matrix, its Q-component C_k in the QR decomposition is a block diagonal (orthogonal) matrix according to γ. So the QR decomposition of A^k is

    A^k = P_k U_k = (Q O_k C_k)(C_k^{-1} T_k R D_k D^k ωU).

Hence, by the uniqueness of the QR decomposition,

    P_k = Q O_k C_k,    U_k = C_k^{-1} T_k R D_k D^k ωU.

Therefore, by (2.3),

(2.4)    A_k = Q_k R_k = P_{k-1}^{-1} P_k U_k U_{k-1}^{-1}
             = C_{k-1}^{-1} O_{k-1}^{-1} O_k T_k R D R^{-1} T_{k-1}^{-1} C_{k-1}
             = C_{k-1}^{-1} R D R^{-1} C_{k-1} + O(t^k).

Because C_{k-1}^{-1} R D R^{-1} C_{k-1} is block upper triangular, the entries of the strictly lower triangular blocks of A_k approach zero in O(t^k). Moreover, by block multiplication, the ith diagonal block of C_{k-1}^{-1} R D R^{-1} C_{k-1} is similar to that of D, namely λ_i E_{θ_i}. So the eigenvalues of the ith diagonal block of [A_k]_γ approach those of λ_i E_{θ_i} in O(t^k).

Numerical experiments demonstrate the convergence rate in Theorem 2.1. From the computational point of view, the assumption that [ω]_γ is in block diagonal form does not impose any difficulty: A will first be reduced to a Hessenberg form to achieve a drastic cost reduction [3, p.176]. Thus we may assume that A ∈ GL_n(R) is in irreducible (nonreduced) Hessenberg form. Those nonsingular Y for which A = Y^{-1} D Y then have the required LωU decomposition in Theorem 2.1, according to the following result.

Proposition 2.2. Suppose that A ∈ GL_n(R) in Theorem 2.1 is in irreducible Hessenberg form. Then any Y ∈ GL_n(R) such that A = Y^{-1} D Y, where D is given in (2.2), has the decomposition Y = LωU with [ω]_γ in block diagonal form.

Proof: For any θ ∈ R, if

    P := [ 1  i ; 1  -i ],

then

    diag( e^{iθ}, e^{-iθ} ) = P [ cos θ  -sin θ ; sin θ  cos θ ] P^{-1}.

Let S ∈ GL_n(C) be in block diagonal form such that the 2×2 diagonal blocks of [S]_γ are P and the 1×1 blocks are 1, according to the partition γ. Then A = Y^{-1} S^{-1} D̃ S Y, where D̃ := S D S^{-1} is a diagonal matrix whose diagonal blocks in [D̃]_γ are either ±λ_j or λ_j diag( e^{iθ_j}, e^{-iθ_j} ). We claim that the matrix Z := SY admits an LU decomposition; the argument follows [3, p.179] (there are some typos in the proof there). First notice that the rows of Z are left eigenvectors of A: if z_1, ..., z_n denote the rows of Z, then z_j A = μ_j z_j, j = 1, ..., n, with μ_j = ±λ_j or λ_j e^{±iθ_j}, since ZA = D̃Z. Then span{z_1, ..., z_q} is invariant under right multiplication by A, and one has span{z_1, ..., z_q} ⊕ span{e_{q+1}, ..., e_n} = C^n. In other words, det( z_i e_j^T )_{1≤i,j≤q} ≠ 0 for each q.
In other words, the leading principal minors of Z of every order q are nonzero. So Z = SY admits an LU decomposition: SY = LU for some unit lower triangular matrix L and upper triangular matrix U. Then Y = S^{-1} L U. Now the matrix S^{-1} L is in lower triangular block form with diagonal blocks of size 1×1 or 2×2. Applying Gaussian elimination to S^{-1} L, one gets Y = L̃ωŨ, where L̃ is (real) unit lower triangular, Ũ is (real) upper triangular, and ω is a permutation matrix with [ω]_γ in block diagonal form, corresponding to the partition γ. The permutation matrix ω is unique.

In general the strictly lower triangular part of the (real) sequence {A_k}_{k∈N} does not converge to zero (compare [2, p.114]).
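A 2x2 rotation already illustrates this: each iterate of a rotation is again a rotation up to sign conjugation, so the modulus of the (2,1) entry stays at |sin θ| forever. A minimal sketch, assuming Python with NumPy (the example matrix is ours):

```python
import numpy as np

# Rotation by pi/2: the eigenvalues are +-i, nonreal, so the strictly lower
# triangular part of A_k cannot tend to zero.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
Ak = A.copy()
for _ in range(50):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q  # orthogonally similar to A, and still an orthogonal matrix
# |(A_k)_{21}| remains 1 for every k.
```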

Proposition 2.3. Suppose that A ∈ M_n(R) has nonreal eigenvalues. Then the strictly lower triangular part of {A_k}_{k∈N} does not converge to zero.

Proof: The sequence {A_k}_{k∈N} is contained in the compact set

    { X ∈ M_n(R) : ||X||_F = ||A||_F },

where ||A||_F = (tr A^T A)^{1/2} denotes the Frobenius norm of A. So there is a convergent subsequence {A_{k_i}}_{i∈N}. If the strictly lower triangular part of the sequence {A_k}_{k∈N} converged to zero, then this subsequence would converge to a real upper triangular matrix U. By the continuity of the eigenvalues (counting multiplicities) [3, p.44], the eigenvalues of A would be the diagonal entries of U and hence would be real, a contradiction.

The argument in the above proof works for real singular matrices having nonreal eigenvalues as well.

3. Numerical experiments

We now discuss some numerical experiments which show that the conclusion of Theorem 2.1 may not hold if the condition on Y in Theorem 2.1 is not satisfied. Let

    A = Y^{-1} [ a cos c  -a sin c      0         0
                 a sin c   a cos c      0         0
                    0         0      b cos d  -b sin d
                    0         0      b sin d   b cos d ] Y,

where Y = LωU, with L a fixed 4×4 unit lower triangular matrix, U = I, and ω a permutation matrix whose block form [ω]_γ is not block diagonal. Clearly the condition of Theorem 2.1 is not satisfied for Y. With a = 2 and b = 1/2, numerically we have the following pattern convergence (not actual convergence) of the corresponding matrices. We use the formula (1.3), A_k = P_{k-1}^{-1} A P_{k-1}, instead of A_k = R_{k-1} Q_{k-1}, to compute A_k via MAPLE and MATLAB.

(1) If c = 0 and d = 1 (the eigenvalues of A occur as two distinct complex conjugate pairs), then A_k, Q_k and P_k display pattern convergence.
(2) If c = 0 and d = π (-1/2 is a double eigenvalue of A), then A_k, Q_k and P_k display pattern convergence.
(3) If c = π and d = 1 (-2 is a double eigenvalue of A), then A_k, Q_k and P_k display pattern convergence.
(4) If c = 0 and d = π/2 (the eigenvalues of A occur as two distinct complex conjugate pairs), then the sequences {A_k} and {P_k} are periodic.
(5) If c = d = π/2, then A_k = A for all k ∈ N.

In each of the above cases, no desired convergence (in the fashion of Theorem 2.1) occurs for the lower triangular block part of A_k. We also used A_k = R_{k-1} Q_{k-1} to compute A_k; the computed lower triangular block part of A_k then tends to zero. Probably the roundoff errors perturb A_k so that the computed Y acquires a block LU decomposition in the computational process. Denote

by L_k the maximal entry in modulus of the lower left block of A_k. The convergence rate of L_k to 0 is exactly the convergence rate of A_k to the block upper triangular form. Denote

    c(k) := L_k / t^k,    where t = λ_2/λ_1 = 1/4.

When A meets the conditions in Theorem 2.1, we know that c(k) ≤ M, k = 1, 2, ..., for some constant M depending on A alone. However, for the above five cases, in which A does not satisfy the conditions in Theorem 2.1, numerical experiments show that we still have c(k) ≤ M, where M is now determined by A and the number of digits used in the floating-point computations. We applied A_k = R_{k-1} Q_{k-1} in MAPLE to compute A_k for the first case (c = 0, d = 1), using 10-, 20-, 30- and 35-digit floating-point arithmetic, respectively, and plotted c(k) against k.

[Four plots of c(k): computed with 10-, 20-, 30- and 35-digit arithmetic respectively; the vertical scales are of order 1E12, 1E21, 1E32 and 1E36.]

The plots of c(k) display a similar pattern in the different floating-point precisions. Roughly speaking, when using n-digit floating-point arithmetic, the upper bound M of the computed c(k) is around the scale 10^n. A similar phenomenon holds for the other four cases.
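For contrast with the five cases above, when Y does satisfy the hypothesis of Theorem 2.1, the predicted block convergence at rate t^k is plainly visible. A minimal sketch, assuming Python with NumPy (the 3x3 example with λ_1 = 3, θ_1 = 1, λ_2 = 1 and Y unit lower triangular, so that ω = I, is ours, not from the paper):

```python
import numpy as np

th = 1.0                                   # theta_1 of the 2x2 block E_theta1
D = np.zeros((3, 3))
D[:2, :2] = 3.0 * np.array([[np.cos(th), -np.sin(th)],
                            [np.sin(th),  np.cos(th)]])   # lambda_1 E_theta1
D[2, 2] = 1.0                                             # lambda_2 E_0
L = np.array([[1.0,  0.0, 0.0],
              [0.5,  1.0, 0.0],
              [0.3, -0.2, 1.0]])   # Y = L: block LU hypothesis holds, omega = I
A = np.linalg.solve(L, D @ L)      # A = Y^{-1} D Y
Ak = A.copy()
for _ in range(100):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q
B = Ak[:2, :2]   # leading 2x2 diagonal block of [A_k]_gamma
```

Here t = λ_2/λ_1 = 1/3, so after 100 steps the lower left block is far below machine precision; the 2x2 block itself keeps rotating, but its trace and determinant (hence its eigenvalues, 3 e^{±iθ_1}) converge, and the (3,3) entry tends to λ_2 = 1.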

4. Analysis of the real case with nonreal eigenvalues

In Theorem 2.1 we saw that the QR iterations of almost all real matrices converge to a block upper triangular form with 2×2 or 1×1 diagonal blocks. Thus it is important to study the 2×2 real matrix with nonreal eigenvalues in a quantitative fashion.

Proposition 4.1. Suppose A ∈ GL_2(R) has nonreal eigenvalues. Let

    A = Y^{-1} ( s [ cos θ  -sin θ ; sin θ  cos θ ] ) Y,    s := √(det A),

where

    Y = [ y_11  y_12 ; y_21  y_22 ] ∈ SL_2(R),    θ ∈ (0, 2π) \ {π}.

Denote

    u := y_11 y_12 + y_21 y_22,    v := y_11² + y_21²,
    r := ( u² + v² + 1 + √( (u² + v² + 1)² - 4v² ) ) / 2.

Then the modulus of the (2,1) entry c_k of A_k satisfies

(4.1)    s v |sin θ| / r ≤ |c_k| ≤ min{ s r |sin θ| / v, ||A||_2 },

where ||A||_2 is the spectral norm of A.

Proof: The singular values of A_k and A are the same, and thus the entries of A_k are bounded above by ||A||_2. Notice that

    A^k / s^k = Y^{-1} [ cos kθ  -sin kθ ; sin kθ  cos kθ ] Y
              = [ y_22  -y_12 ; -y_21  y_11 ] [ cos kθ  -sin kθ ; sin kθ  cos kθ ] [ y_11  y_12 ; y_21  y_22 ]

(4.2)         = [ cos kθ - u sin kθ   * ; v sin kθ   * ].

Let A^k = P_k U_k with

    U_k = s^k [ a_k  * ; 0  1/a_k ].

By the Gram-Schmidt process, a_k is the length of the first column of A^k / s^k, so

(4.3)    a_k² = (cos kθ - u sin kθ)² + v² sin² kθ
             = (u² + v² + 1)/2 + √( ((u² + v² - 1)/2)² + u² ) · cos(2kθ + ζ),

where ζ is a constant. Since

    ( (u² + v² + 1)/2 )² - ( ((u² + v² - 1)/2)² + u² ) = v²,

the maximum value of the right-hand side of (4.3) equals r, and its minimum equals v²/r.

In other words,

(4.4)    v²/r ≤ a_k² ≤ r.

On the other hand, from (1.2) and (1.3),

    A_k = P_{k-1}^{-1} P_k U_k U_{k-1}^{-1} = U_{k-1} A U_{k-1}^{-1},

where

(4.5)    U_{k-1} = s^{k-1} [ a_{k-1}  * ; 0  1/a_{k-1} ],

so that the (2,1) entry of A_k is

(4.6)    c_k = s v sin θ / a_{k-1}².

By (4.4) and (4.6), the modulus of the (2,1) entry c_k of A_k is bounded by

    s v |sin θ| / r ≤ |c_k| = s v |sin θ| / a_{k-1}² ≤ s r |sin θ| / v.

This completes the proof of (4.1).

Now we are able to study the convergence of the QR iterations of a matrix A ∈ GL_2(R) with nonreal eigenvalues. It is sufficient to consider A ∈ SL_2(R).

Proposition 4.2. (1) Suppose A ∈ SL_2(R) has nonreal eigenvalues. Then {A_k} converges if and only if A is an orthogonal matrix. In this case, Q_k = A and R_k = I, k = 1, 2, ....
(2) If A ∈ SL_2(R) has nonreal eigenvalues and is not an orthogonal matrix, then each of the sequences {A_k}_{k∈N}, {P_k}_{k∈N}, {U_k}_{k∈N}, {Q_k}_{k∈N}, {R_k}_{k∈N} is bounded below and above but not convergent.

Proof: We adopt the notations of Proposition 4.1.
(1) From (4.6), if A_k converges, then c_k has to converge. By (4.3), we have two possibilities:
(a) ((u² + v² - 1)/2)² + u² = 0, that is, u = 0 and v = 1. By the definitions of u and v, the matrix Y is then orthogonal, and thus A is an orthogonal matrix.
(b) θ = π/2 or 3π/2 and cos ζ = 0. So u² + v² = 1 and a_k = 1 by (4.3). We have

    A = P_1 U_1 = [ cos η  -sin η ; sin η  cos η ] [ 1  t ; 0  1 ]

for some t ∈ R and η ∈ (0, 2π) \ {π}. If t = 0 then A is an orthogonal matrix. If t ≠ 0, we have

    A_2 = U_1 A U_1^{-1} = [ 1  t ; 0  1 ] [ cos η  -sin η ; sin η  cos η ] [ 1  -t ; 0  1 ]
        = [ cos η + t sin η   -(1 + t²) sin η ; sin η   cos η - t sin η ].

Since a_1 = 1 here as well, the first column of A_2 must have unit length:

    (cos η + t sin η)² + sin² η = 1 + 2t sin η cos η + t² sin² η = 1.

Hence t = -2 cos η / sin η. In this situation we have A_1 = A_3 = A_5 = ⋯ and A_2 = A_4 = ⋯. Moreover, A_1 = A_2 if and only if cos η = 0, contradicting t ≠ 0. The converse is obviously true.

(2) By (1.2) and (4.5),

    R_k = U_k U_{k-1}^{-1} = [ a_k/a_{k-1}  * ; 0  a_{k-1}/a_k ],

since s := √(det A) = 1. By the first part, A_k does not converge, so in (4.3) we have ((u² + v² - 1)/2)² + u² ≠ 0. Thus a_k/a_{k-1} has a finite positive upper bound and lower bound but does not converge. Hence the entries of R_k are bounded above and below in absolute value but not convergent. Now

    Q_k = A_k R_k^{-1} = [ *  * ; v sin θ / (a_{k-1} a_k)  * ].

If a_{k-1} a_k does not converge, then Q_k does not converge. If a_{k-1} a_k converges, then A belongs to case (1)(b), and thus Q_k does not converge. Neither P_k nor U_k converges, since A_k does not converge.

5. Some remarks on Theorem 1.1

Theorem 1.1 does not say that {A_k}_{k∈N} converges to an upper triangular matrix. In fact, as is pointed out in [3, p.178], when the eigenvalues of A have distinct arguments the sequence {A_k}_{k∈N} does not converge, in contrast to an incorrect assertion in [2, p.114]. In general Y in Theorem 1.1 may not admit an LU decomposition. Instead, it has the Bruhat decomposition Y = LωU for some permutation matrix ω ≠ I_n. However, this will not cause any trouble, based on two observations. First, the set of nonsingular matrices without LU decomposition is of measure zero in GL_n(C) [1, p.47]. So a randomly chosen A ∈ GL_n(C) almost surely satisfies the conditions in Theorem 1.1. Secondly, in practice a preliminary reduction of A ∈ GL_n(C) to a Hessenberg form drastically reduces the cost of each QR step [3, p.176]. So A will first be turned into a Hessenberg form, and thus we may assume that A is in irreducible (nonreduced) Hessenberg form. Those nonsingular Y satisfying A = Y^{-1} D Y then have LU decomposition [3, p.179].
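This well-behaved complex convergence is easy to check numerically. A minimal sketch, assuming Python with NumPy (the diagonal D and the random unit lower triangular Y are our own choices, for which ω = I_n):

```python
import numpy as np

rng = np.random.default_rng(0)
D = np.diag([4.0 + 0.0j, 2.0j, 1.0 + 0.0j])  # moduli 4 > 2 > 1, one nonreal entry
Y = np.eye(3) + np.tril(rng.standard_normal((3, 3)), -1)  # unit lower triangular,
                                                          # so Y = LU trivially
A = np.linalg.solve(Y.astype(complex), D @ Y)             # A = Y^{-1} D Y
Ak = A.copy()
for _ in range(200):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q
# diag(A_k) -> (4, 2i, 1) and the strictly lower triangular part -> 0,
# both at the rate t^k with t = 1/2.
```

Note that the diagonal entries of A_k and the moduli of its lower triangular entries do not depend on how each QR factorization is normalized, so the unnormalized NumPy factorization suffices for this check.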
Nevertheless, for the general case Y = LωU, the diagonal part of {A_k}_{k∈N} converges to D_ω := ω^{-1} D ω and the strictly lower triangular part of A_k converges to zero; these are parts of the statements of Theorem 5.1. Theorem 5.1 is obviously a generalization of Theorem 1.1.

We now introduce some notations. Given an n×n permutation matrix ω, we denote the corresponding permutation of {1, ..., n} by the same symbol ω, so that ω e_j = e_{ω(j)}, j = 1, ..., n, where {e_1, ..., e_n} is the standard basis of R^n. We write O(t^k) for a matrix whose entries are less than or equal to C t^k in absolute value, for some constant C > 0.

Theorem 5.1. Let A ∈ GL_n(C) be such that the moduli of the eigenvalues λ_1, ..., λ_n of A are distinct, that is,

(5.1)    |λ_1| > |λ_2| > ⋯ > |λ_n| (> 0).

Let A = Y^{-1} D Y, where D := diag(λ_1, ..., λ_n), and let ω be the permutation matrix uniquely determined by the Bruhat decomposition Y = LωU, where L is unit lower triangular and U is upper triangular. Denote

    H := diag( u_11/|u_11|, ..., u_nn/|u_nn| ),
    D_ω := diag( λ_ω(1), ..., λ_ω(n) ),
    C_ω := diag( λ_ω(1)/|λ_ω(1)|, ..., λ_ω(n)/|λ_ω(n)| ).

Then

    C_ω^{k-1} A_k C_ω^{-(k-1)} = H^{-1} R D_ω R^{-1} H + O(t^k),    Q_k = C_ω + O(t^k),

where

    t := max{ |λ_2|/|λ_1|, ..., |λ_n|/|λ_{n-1}| } < 1

and Y^{-1}ω = QR is the QR decomposition of Y^{-1}ω. In particular:

(1) lim_{k→∞} C_ω^{k-1} A_k C_ω^{-(k-1)} = lim_{k→∞} C_ω^{k} R_k C_ω^{-(k-1)} = H^{-1} R D_ω R^{-1} H.
(2) lim_{k→∞} Q_k = C_ω.
(3) The strictly lower triangular part of A_k converges to zero in O(t^k).
(4) The diagonal part of A_k converges to D_ω in O(t^k).

Proof: Under the assumption, A^k = Y^{-1} D^k Y. Let Y = LωU, where L is unit lower triangular, U is upper triangular, and ω is the permutation matrix uniquely determined by Y. Let Y^{-1}ω = QR be the QR decomposition of Y^{-1}ω. Then

    A^k = QRω^{-1} D^k LωU = QRω^{-1} (D^k L D^{-k}) D^k ωU.

Notice that the unit lower triangular matrix D^k L D^{-k} = I_n + O(t^k), since the (i,j) entry of D^k L D^{-k} with i > j is l_ij (λ_i/λ_j)^k, whose absolute value is at most M t^k, where M := max_{1≤j<i≤n} |l_ij|. Hence

    A^k = QRω^{-1} (I_n + O(t^k)) D^k ωU = Q (I_n + O(t^k)) Rω^{-1} D^k ωU = Q O_k T_k R D_ω^k U,

where O_k T_k is the QR decomposition of the last I_n + O(t^k). By the Gram-Schmidt process we have O_k = I_n + O(t^k) and T_k = I_n + O(t^k). So

    A^k = P_k U_k = (Q O_k H C_ω^k)(C_ω^{-k} H^{-1} T_k R D_ω^k U)

is the QR decomposition of A^k, since C_ω and H are diagonal unitary. By the uniqueness of the QR decomposition and by (1.2),

    P_k = Q O_k H C_ω^k = Q H C_ω^k + O(t^k),
    U_k = C_ω^{-k} H^{-1} T_k R D_ω^k U = C_ω^{-k} H^{-1} R D_ω^k U + O(t^k),

(5.2)    Q_k = P_{k-1}^{-1} P_k = C_ω + O(t^k),
(5.3)    R_k = U_k U_{k-1}^{-1} = C_ω^{-k} H^{-1} R D_ω R^{-1} H C_ω^{k-1} + O(t^k),
(5.4)    A_k = Q_k R_k = C_ω^{-(k-1)} H^{-1} R D_ω R^{-1} H C_ω^{k-1} + O(t^k).

Since C_ω is diagonal unitary,

(5.5)    C_ω^{k-1} A_k C_ω^{-(k-1)} = H^{-1} R D_ω R^{-1} H + O(t^k).

Conclusion (2) follows from (5.2), and (1) from (5.5) and (5.3). From (5.4) the diagonal part of A_k is

    diag A_k = diag[ C_ω^{-(k-1)} H^{-1} R D_ω R^{-1} H C_ω^{k-1} ] + O(t^k) = D_ω + O(t^k),

and (4) follows immediately. Similarly (3) follows from (5.4).

Corollary 5.2. The following statements are equivalent.
(1) The sequence {A_k}_{k∈N} converges.
(2) The sequence {R_k}_{k∈N} converges.
(3) The arguments of λ_ω(i) and λ_ω(j) are equal whenever the (i,j) entry of R D_ω R^{-1} is nonzero, i < j.

Proof: (1) ⇔ (2) follows from Theorem 5.1 (1) and (2). From (5.4),

    A_k = H^{-1} C_ω^{-(k-1)} R D_ω R^{-1} C_ω^{k-1} H + O(t^k).

So A_k converges if and only if C_ω^{-(k-1)} R D_ω R^{-1} C_ω^{k-1} converges. Thus (1) and (3) are equivalent.

Our proof of Theorem 5.1 is a modification of the proof in [3]. Part of the theorem has been discussed in [6, pp. 519-520], wherein the proof relies on a very careful observation of the pivoting procedure.

Acknowledgement: The authors are thankful for the referee's helpful comments.

References

[1] S. Helgason, Differential Geometry, Lie Groups, and Symmetric Spaces, Academic Press, New York.
[2] R.A. Horn and C.R. Johnson, Matrix Analysis, Cambridge Univ. Press.
[3] D. Serre, Matrices: Theory and Applications, Springer, New York.
[4] G.W. Stewart, Introduction to Matrix Computations, Academic Press, New York.
[5] D. Watkins, Understanding the QR algorithm, SIAM Rev. 24 (1982).
[6] J.H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford Science Publications, Oxford.

Department of Mathematics and Statistics, Auburn University, AL, USA
E-mail address: huanghu@auburn.edu, tamtiny@auburn.edu


More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9 STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9 1. qr and complete orthogonal factorization poor man s svd can solve many problems on the svd list using either of these factorizations but they

More information

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) D. ARAPURA Gaussian elimination is the go to method for all basic linear classes including this one. We go summarize the main ideas. 1.

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant

More information

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination J.M. Peña 1 Introduction Gaussian elimination (GE) with a given pivoting strategy, for nonsingular matrices

More information

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,

More information

1 Number Systems and Errors 1

1 Number Systems and Errors 1 Contents 1 Number Systems and Errors 1 1.1 Introduction................................ 1 1.2 Number Representation and Base of Numbers............. 1 1.2.1 Normalized Floating-point Representation...........

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

Gaussian Elimination and Back Substitution

Gaussian Elimination and Back Substitution Jim Lambers MAT 610 Summer Session 2009-10 Lecture 4 Notes These notes correspond to Sections 31 and 32 in the text Gaussian Elimination and Back Substitution The basic idea behind methods for solving

More information

Characterization of half-radial matrices

Characterization of half-radial matrices Characterization of half-radial matrices Iveta Hnětynková, Petr Tichý Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic Abstract Numerical radius r(a) is the

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Jordan Normal Form Revisited

Jordan Normal Form Revisited Mathematics & Statistics Auburn University, Alabama, USA Oct 3, 2007 Jordan Normal Form Revisited Speaker: Tin-Yau Tam Graduate Student Seminar Page 1 of 19 tamtiny@auburn.edu Let us start with some online

More information

Matrix Inequalities by Means of Block Matrices 1

Matrix Inequalities by Means of Block Matrices 1 Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

Maximizing the numerical radii of matrices by permuting their entries

Maximizing the numerical radii of matrices by permuting their entries Maximizing the numerical radii of matrices by permuting their entries Wai-Shun Cheung and Chi-Kwong Li Dedicated to Professor Pei Yuan Wu. Abstract Let A be an n n complex matrix such that every row and

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018 MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

QR decomposition: History and its Applications

QR decomposition: History and its Applications Mathematics & Statistics Auburn University, Alabama, USA Dec 17, 2010 decomposition: and its Applications Tin-Yau Tam èuî Æâ w f ŒÆêÆ ÆÆ Page 1 of 37 email: tamtiny@auburn.edu Website: www.auburn.edu/

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Lecture 12 Eigenvalue Problem. Review of Eigenvalues Some properties Power method Shift method Inverse power method Deflation QR Method

Lecture 12 Eigenvalue Problem. Review of Eigenvalues Some properties Power method Shift method Inverse power method Deflation QR Method Lecture Eigenvalue Problem Review of Eigenvalues Some properties Power method Shift method Inverse power method Deflation QR Method Eigenvalue Eigenvalue ( A I) If det( A I) (trivial solution) To obtain

More information

MATH 304 Linear Algebra Lecture 34: Review for Test 2.

MATH 304 Linear Algebra Lecture 34: Review for Test 2. MATH 304 Linear Algebra Lecture 34: Review for Test 2. Topics for Test 2 Linear transformations (Leon 4.1 4.3) Matrix transformations Matrix of a linear mapping Similar matrices Orthogonality (Leon 5.1

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 1 Eigenvalues and Eigenvectors Among problems in numerical linear algebra, the determination of the eigenvalues and eigenvectors of matrices is second in importance only to the solution of linear

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

Linear Operators Preserving the Numerical Range (Radius) on Triangular Matrices

Linear Operators Preserving the Numerical Range (Radius) on Triangular Matrices Linear Operators Preserving the Numerical Range (Radius) on Triangular Matrices Chi-Kwong Li Department of Mathematics, College of William & Mary, P.O. Box 8795, Williamsburg, VA 23187-8795, USA. E-mail:

More information

Solving Linear Systems of Equations

Solving Linear Systems of Equations November 6, 2013 Introduction The type of problems that we have to solve are: Solve the system: A x = B, where a 11 a 1N a 12 a 2N A =.. a 1N a NN x = x 1 x 2. x N B = b 1 b 2. b N To find A 1 (inverse

More information

Geometric Mapping Properties of Semipositive Matrices

Geometric Mapping Properties of Semipositive Matrices Geometric Mapping Properties of Semipositive Matrices M. J. Tsatsomeros Mathematics Department Washington State University Pullman, WA 99164 (tsat@wsu.edu) July 14, 2015 Abstract Semipositive matrices

More information

Key words. Strongly eventually nonnegative matrix, eventually nonnegative matrix, eventually r-cyclic matrix, Perron-Frobenius.

Key words. Strongly eventually nonnegative matrix, eventually nonnegative matrix, eventually r-cyclic matrix, Perron-Frobenius. May 7, DETERMINING WHETHER A MATRIX IS STRONGLY EVENTUALLY NONNEGATIVE LESLIE HOGBEN 3 5 6 7 8 9 Abstract. A matrix A can be tested to determine whether it is eventually positive by examination of its

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

A fast randomized algorithm for overdetermined linear least-squares regression

A fast randomized algorithm for overdetermined linear least-squares regression A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm

More information

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i ) Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical

More information

Solution Set 7, Fall '12

Solution Set 7, Fall '12 Solution Set 7, 18.06 Fall '12 1. Do Problem 26 from 5.1. (It might take a while but when you see it, it's easy) Solution. Let n 3, and let A be an n n matrix whose i, j entry is i + j. To show that det

More information

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For

More information

Interlacing Inequalities for Totally Nonnegative Matrices

Interlacing Inequalities for Totally Nonnegative Matrices Interlacing Inequalities for Totally Nonnegative Matrices Chi-Kwong Li and Roy Mathias October 26, 2004 Dedicated to Professor T. Ando on the occasion of his 70th birthday. Abstract Suppose λ 1 λ n 0 are

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Gaussian Elimination without/with Pivoting and Cholesky Decomposition

Gaussian Elimination without/with Pivoting and Cholesky Decomposition Gaussian Elimination without/with Pivoting and Cholesky Decomposition Gaussian Elimination WITHOUT pivoting Notation: For a matrix A R n n we define for k {,,n} the leading principal submatrix a a k A

More information

Mathematical Methods wk 2: Linear Operators

Mathematical Methods wk 2: Linear Operators John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

Eigenvalue and Eigenvector Problems

Eigenvalue and Eigenvector Problems Eigenvalue and Eigenvector Problems An attempt to introduce eigenproblems Radu Trîmbiţaş Babeş-Bolyai University April 8, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) Eigenvalue and Eigenvector Problems

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.)

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.) page 121 Index (Page numbers set in bold type indicate the definition of an entry.) A absolute error...26 componentwise...31 in subtraction...27 normwise...31 angle in least squares problem...98,99 approximation

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Diagonalization by a unitary similarity transformation

Diagonalization by a unitary similarity transformation Physics 116A Winter 2011 Diagonalization by a unitary similarity transformation In these notes, we will always assume that the vector space V is a complex n-dimensional space 1 Introduction A semi-simple

More information

MAPPING AND PRESERVER PROPERTIES OF THE PRINCIPAL PIVOT TRANSFORM

MAPPING AND PRESERVER PROPERTIES OF THE PRINCIPAL PIVOT TRANSFORM MAPPING AND PRESERVER PROPERTIES OF THE PRINCIPAL PIVOT TRANSFORM OLGA SLYUSAREVA AND MICHAEL TSATSOMEROS Abstract. The principal pivot transform (PPT) is a transformation of a matrix A tantamount to exchanging

More information

Some bounds for the spectral radius of the Hadamard product of matrices

Some bounds for the spectral radius of the Hadamard product of matrices Some bounds for the spectral radius of the Hadamard product of matrices Tin-Yau Tam Mathematics & Statistics Auburn University Georgia State University, May 28, 05 in honor of Prof. Jean H. Bevis Some

More information

Frame Diagonalization of Matrices

Frame Diagonalization of Matrices Frame Diagonalization of Matrices Fumiko Futamura Mathematics and Computer Science Department Southwestern University 00 E University Ave Georgetown, Texas 78626 U.S.A. Phone: + (52) 863-98 Fax: + (52)

More information

Some Formulas for the Principal Matrix pth Root

Some Formulas for the Principal Matrix pth Root Int. J. Contemp. Math. Sciences Vol. 9 014 no. 3 141-15 HIKARI Ltd www.m-hiari.com http://dx.doi.org/10.1988/ijcms.014.4110 Some Formulas for the Principal Matrix pth Root R. Ben Taher Y. El Khatabi and

More information

Math Matrix Algebra

Math Matrix Algebra Math 44 - Matrix Algebra Review notes - (Alberto Bressan, Spring 7) sec: Orthogonal diagonalization of symmetric matrices When we seek to diagonalize a general n n matrix A, two difficulties may arise:

More information

Vector and Matrix Norms. Vector and Matrix Norms

Vector and Matrix Norms. Vector and Matrix Norms Vector and Matrix Norms Vector Space Algebra Matrix Algebra: We let x x and A A, where, if x is an element of an abstract vector space n, and A = A: n m, then x is a complex column vector of length n whose

More information

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated. Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces

More information

Applied Linear Algebra

Applied Linear Algebra Applied Linear Algebra Gábor P. Nagy and Viktor Vígh University of Szeged Bolyai Institute Winter 2014 1 / 262 Table of contents I 1 Introduction, review Complex numbers Vectors and matrices Determinants

More information

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J Class Notes 4: THE SPECTRAL RADIUS, NORM CONVERGENCE AND SOR. Math 639d Due Date: Feb. 7 (updated: February 5, 2018) In the first part of this week s reading, we will prove Theorem 2 of the previous class.

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix

More information

5 Compact linear operators

5 Compact linear operators 5 Compact linear operators One of the most important results of Linear Algebra is that for every selfadjoint linear map A on a finite-dimensional space, there exists a basis consisting of eigenvectors.

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information