QR decomposition: History and its Applications


1 QR decomposition: History and its Applications. Tin-Yau Tam, Mathematics & Statistics, Auburn University, Alabama, USA. Dec 17, 2010. tamtiny@auburn.edu Website: tamtiny

2 1. QR decomposition. Recall the QR decomposition of A ∈ GL_n(C): A = QR, where Q ∈ GL_n(C) is unitary and R ∈ GL_n(C) is upper triangular with positive diagonal entries. Such a decomposition is unique. Set a(A) := diag(r_11, ..., r_nn), where A is written in column form A = (a_1 ··· a_n). Geometric interpretation of a(A): r_ii is the distance (w.r.t. the 2-norm) between a_i and span{a_1, ..., a_{i−1}}, i = 2, ..., n.

3 Example: [12 −51 4; 6 167 −68; −4 24 −41] = [6/7 −69/175 −58/175; 3/7 158/175 6/175; −2/7 6/35 −33/35][14 21 −14; 0 175 −70; 0 0 35]. QR decomposition is the matrix version of the Gram-Schmidt orthonormalization process. QR decomposition can be extended to rectangular matrices, i.e., if A ∈ C^{m×n} with m ≥ n (tall matrix) and full rank, then A = QR where Q ∈ C^{m×n} has orthonormal columns and R ∈ C^{n×n} is upper triangular with positive diagonal entries.
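A minimal illustrative sketch in Python with NumPy (the language choice is ours; the talk prescribes none): it computes the QR decomposition of the example matrix above, whose missing entries were reconstructed here from the legible Q-factor fragments of the slide, and normalizes NumPy's output to the unique form with positive diagonal in R. The helper name qr_positive is hypothetical.

```python
import numpy as np

# QR decomposition normalized so that diag(R) > 0 -- the unique form
# quoted on the slide for invertible (or tall full-rank) A.
def qr_positive(A):
    Q, R = np.linalg.qr(A)                 # NumPy's QR; diagonal signs arbitrary
    s = np.sign(np.diag(R))
    s[s == 0] = 1.0                        # guard; cannot occur for full-rank A
    return Q * s, s[:, None] * R           # flip matching column/row signs

A = np.array([[12.0, -51.0,   4.0],
              [ 6.0, 167.0, -68.0],
              [-4.0,  24.0, -41.0]])
Q, R = qr_positive(A)
print(np.round(R, 10))                     # diag(R) = (14, 175, 35)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(3)))
```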

4 2. QR history. When Erhard Schmidt presented the formulae on p. 442 of E. Schmidt, Zur Theorie der linearen und nichtlinearen Integralgleichungen. I. Teil: Entwicklung willkürlicher Funktionen nach Systemen vorgeschriebener, Math. Ann., 63 (1907), he said that essentially the same formulae were in J. P. Gram, Ueber die Entwickelung reeller Functionen in Reihen mittelst der Methode der kleinsten Quadrate, Jrnl. für die reine und angewandte Math., 94 (1883). Modern writers, however, distinguish the two procedures, sometimes using the term Gram-Schmidt for the Schmidt form and modified Gram-Schmidt for the Gram version.

5 But Gram-Schmidt orthonormalization appeared earlier in the work of Laplace and Cauchy. In the theory of semisimple Lie groups, the Gram-Schmidt process is extended to the Iwasawa decomposition G = KAN. A JSTOR search: the term Gram-Schmidt orthogonalization process first appears on p. 57 of Y. K. Wong, An Application of Orthogonalization Process to the Theory of Least Squares, Annals of Mathematical Statistics, 6 (1935).

6 In 1801 Gauss predicted the orbit of the asteroid Ceres using the method of least squares. Since then, the principle of least squares has been the standard procedure for the analysis of scientific data. Least squares problem: find x̂ attaining min_x ||Ax − b||_2. The solution is characterized by r ⊥ R(A), where r = b − Ax is the residual vector, or equivalently, given by the normal equation A*Ax = A*b. With A = QR and A ∈ C^{m×n} tall with full rank: R*Rx = R*Q*b ⟺ Rx = Q*b.
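A minimal sketch of this observation (same assumed Python setting; lstsq_qr is a hypothetical name): once A = QR is known, the normal equations collapse to a single triangular solve.

```python
import numpy as np
from scipy.linalg import solve_triangular

# Least squares via QR: for tall full-rank A, the normal equations
# A* A x = A* b reduce to R x = Q* b after substituting A = QR.
def lstsq_qr(A, b):
    Q, R = np.linalg.qr(A)                       # reduced QR: Q m-by-n, R n-by-n
    return solve_triangular(R, Q.conj().T @ b)   # back substitution

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = rng.standard_normal(100)
x = lstsq_qr(A, b)
print(np.allclose(A.T @ (b - A @ x), 0, atol=1e-10))  # residual orthogonal to R(A)
```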

7 There are several methods for computing the QR decomposition: GS or modified GS, Givens rotations (real A), or Householder reflections. Disadvantage of GS: it is sensitive to rounding error (orthogonality of the computed vectors can be lost quickly or may even be completely lost); hence modified Gram-Schmidt. Idea of modified GS: perform the projection step as a sequence of projections, each applied to the vector already updated by the previous ones, which guards against the errors introduced in computation.

8 Example: A = [1+ε 1 1; 1 1+ε 1; 1 1 1+ε] with ε so small that 3+2ε will be computed accurately but 3+2ε+ε² will be computed as 3+2ε; for example, ε below the square root of the machine epsilon. Then the computed Q is approximately [(1+ε)/√(3+2ε) −1/√2 −1/√2; 1/√(3+2ε) 1/√2 0; 1/√(3+2ε) 0 1/√2], so θ_12 ≈ θ_13 ≈ π/2 but θ_23 ≈ π/3: the computed q_2 and q_3 are far from orthogonal. See the references for a heuristic analysis of why classical Gram-Schmidt is not stable.
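The instability is easy to reproduce. A sketch on the 3×3 matrix above, contrasting classical with modified Gram-Schmidt (cgs and mgs are hypothetical helper names):

```python
import numpy as np

# Classical Gram-Schmidt: every projection uses the *original* column.
def cgs(A):
    m, n = A.shape
    Q = np.zeros((m, n))
    for j in range(n):
        v = A[:, j] - Q[:, :j] @ (Q[:, :j].T @ A[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q

# Modified Gram-Schmidt: project the *updated* vector, one q at a time.
def mgs(A):
    Q = A.astype(float).copy()
    n = Q.shape[1]
    for j in range(n):
        Q[:, j] /= np.linalg.norm(Q[:, j])
        for k in range(j + 1, n):
            Q[:, k] -= (Q[:, j] @ Q[:, k]) * Q[:, j]
    return Q

eps = 1e-8                                 # eps**2 vanishes next to 3 in doubles
A = np.ones((3, 3)) + eps * np.eye(3)
for f in (cgs, mgs):
    Q = f(A)
    print(f.__name__, np.linalg.norm(Q.T @ Q - np.eye(3)))
# cgs loses orthogonality almost completely (error about 0.7);
# mgs keeps it to roughly unit roundoff times cond(A), about 1e-8 here.
```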

9 Computing QR by Householder reflections. A Householder reflection is a reflection about some hyperplane. Consider Q_v = I − 2vv*, ||v||_2 = 1. Q_v sends v to −v and fixes pointwise the hyperplane orthogonal to v. Householder reflections are Hermitian and unitary. Let e_1 = (1, 0, ..., 0)^T and recall A = (a_1 ··· a_n) ∈ GL_n(C). If ||a_1||_2 = α_1, set u = a_1 − α_1 e_1, v = u/||u||_2, Q_1 = I − 2vv*, so that Q_1 a_1 = (α_1, 0, ..., 0)^T = α_1 e_1.

10 Then Q_1 A = [α_1 ∗; 0 A_1]. After t iterations of this process, t ≤ n − 1, R = Q_t ··· Q_2 Q_1 A is upper triangular. So, with Q = Q_1 Q_2 ··· Q_t, A = QR is the QR decomposition of A. This method has greater numerical stability than GS. On the other hand, GS produces the vector q_j after the jth iteration, while Householder reflections produce all the vectors only at the end.
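A sketch of this procedure for real matrices (in floating point one chooses the sign of α_1 to avoid cancellation, a detail the slide's exact-arithmetic description omits; householder_qr is a hypothetical name):

```python
import numpy as np

# Householder QR: at step k, reflect the k-th trailing column onto a
# multiple of e_1, zeroing out its subdiagonal entries.
def householder_qr(A):
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.eye(m)
    for k in range(min(n, m - 1)):
        a = A[k:, k]
        alpha = -np.copysign(np.linalg.norm(a), a[0])  # sign avoids cancellation
        u = a.copy()
        u[0] -= alpha                                  # u = a - alpha * e_1
        nu = np.linalg.norm(u)
        if nu == 0.0:
            continue                                   # column already reduced
        v = u / nu
        H = np.eye(m - k) - 2.0 * np.outer(v, v)       # Q_v = I - 2 v v^T
        A[k:, k:] = H @ A[k:, k:]
        Q[:, k:] = Q[:, k:] @ H                        # accumulate Q_1 Q_2 ... Q_t
    return Q, A                                        # A has become R

A = np.array([[12.0, -51.0, 4.0], [6.0, 167.0, -68.0], [-4.0, 24.0, -41.0]])
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0))
```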

11 3. An asymptotic result. Theorem 3.1. (Huang and Tam 2007) Given A ∈ GL_n(C), let A = Y^{−1}JY be the Jordan decomposition of A, where J is the Jordan form of A, diag J = diag(λ_1, ..., λ_n) satisfying |λ_1| ≥ ··· ≥ |λ_n|. Then lim_{m→∞} [a(A^m)]^{1/m} = diag(|λ_{ω(1)}|, ..., |λ_{ω(n)}|), where the permutation ω is uniquely determined by the Gelfand-Naimark decomposition Y = LωU: rank ω(i|j) = rank Y(i|j), 1 ≤ i, j ≤ n. Here ω(i|j) denotes the submatrix formed by the first i rows and the first j columns of ω.

12 The Gelfand-Naimark decomposition Y = LωU is different from the Gaussian decomposition Y = P^T LU obtained by Gaussian elimination with row exchanges. None of P, U and L in the Gaussian decomposition is unique, but ω and diag U are unique in the Gelfand-Naimark decomposition. H. Huang and T.Y. Tam, An asymptotic behavior of QR decomposition, Linear Algebra and Its Applications, 424 (2007). H. Huang and T.Y. Tam, An asymptotic result on the a-component in Iwasawa decomposition, Journal of Lie Theory, 17 (2007).

13 Numerical experiments: computing the discrepancy between [a(A^m)]^{1/m} and the eigenvalue moduli of randomly generated A ∈ GL_n(C). [Figure: graph of ||[a(A^m)]^{1/m} − diag(|λ_1|, ..., |λ_n|)||_2 versus m, m = 1, ..., 100.]

14 If we consider |a_1(A^m)^{1/m} − |λ_1(A)|| instead of ||[a(A^m)]^{1/m} − diag(|λ_1|, ..., |λ_n|)||_2 for the above example, convergence occurs. [Figure: graph of |a_1(A^m)^{1/m} − |λ_1(A)|| versus m, m = 1, ..., 100.]
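A quick numerical check of Theorem 3.1 in the same sketch style (a(·) is read off as |diag R| from the QR factorization, per slide 2; for a generic random A the permutation ω is the identity):

```python
import numpy as np

def a_of(A):
    # |diag(R)| from the QR factorization: equals a(A) for full-rank A
    return np.abs(np.diag(np.linalg.qr(A)[1]))

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
mods = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]   # |lambda_1| >= ... >= |lambda_n|

for m in (10, 50, 200):
    approx = a_of(np.linalg.matrix_power(A, m)) ** (1.0 / m)
    print(m, np.abs(approx - mods))   # the first component converges fastest,
                                      # echoing the two experiments above
```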

15 4. QR algorithm. Because of Abel's theorem (1824), the roots of a general fifth-degree polynomial cannot be expressed by radicals. Thus the computation of eigenvalues of A ∈ C^{n×n} has to be approximate. Given A ∈ GL_n(C), define a sequence {A_k}_{k∈N} of matrices with A_1 := A = Q_1 R_1 and, if A_k = Q_k R_k, k = 1, 2, ..., then A_{k+1} := R_k Q_k = Q_k* Q_k R_k Q_k = Q_k* A_k Q_k. So the eigenvalues are fixed in the process. One hopes to have some sort of convergence of the sequence {A_k}_{k∈N} so that the limit would provide the eigenvalue approximation of A.
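The basic (unshifted) iteration is a three-line loop; a sketch on a symmetric test matrix, whose iterates converge to a diagonal matrix:

```python
import numpy as np

# Plain QR iteration from the slide: A_{k+1} = R_k Q_k = Q_k* A_k Q_k,
# a unitary similarity, so the spectrum never changes.
def qr_iteration(A, steps=500):
    Ak = A.astype(float).copy()
    for _ in range(steps):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return Ak

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
Ak = qr_iteration(A)
print(np.round(np.diag(Ak), 6))                           # eigenvalue approximations
print(np.round(np.sort(np.linalg.eigvalsh(A))[::-1], 6))  # reference values
```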

16 Theorem 4.1. (Francis (1961/62), Kublanovskaja (1961), Huang and Tam (2005)) Suppose that the moduli of the eigenvalues λ_1, ..., λ_n of A ∈ GL_n(C) are distinct: |λ_1| > |λ_2| > ··· > |λ_n| (> 0). Let A = Y^{−1} diag(λ_1, ..., λ_n)Y and assume Y = LωU, where ω is a permutation, L is lower triangular and U is unit upper triangular. Then 1. the strictly lower triangular part of A_k converges to zero; 2. diag A_k → diag(λ_{ω(1)}, ..., λ_{ω(n)}). H. Huang and T.Y. Tam, On the QR iterations of real matrices, Linear Algebra and Its Applications, 408 (2005).

17 The QR algorithm rates as one of the most important algorithmic developments of the past century. Parlett (2000): The QR algorithm solves the eigenvalue problem in a very satisfactory way... What makes experts in matrix computations happy is that this algorithm is a genuinely new contribution to the field of numerical analysis and not just a refinement of ideas given by Newton, Gauss, Hadamard, or Schur. Higham (2003): The QR algorithm for solving the nonsymmetric eigenvalue problem is one of the jewels in the crown of matrix computations.

18 A common misconception. Horn and Johnson's Matrix Analysis (p. 114): Under some circumstances (for example, if all the eigenvalues of A_0 have distinct absolute values), the iterates A_k will converge to an upper triangular matrix as k → ∞... Quarteroni, Sacco, and Saleri's Numerical Mathematics: Let A ∈ R^{n×n} be a matrix with real eigenvalues such that |λ_1| > |λ_2| > ··· > |λ_n|. Then lim_{k→∞} T^(k) = [λ_1 t_12 ··· t_1n; 0 λ_2 ··· t_2n; ...; 0 ··· 0 λ_n]. Wikipedia: Under certain conditions, the matrices A_k converge to a triangular matrix, the Schur form of A.

19 D. Serre's Theory of Matrices: Let us recall that the sequence A_k is not always convergent. For example, if A is already triangular, its QR factorization is Q = D, R = D^{−1}A, with d_j = a_jj/|a_jj|. Hence, A_1 = D^{−1}AD is triangular, with the same diagonal as that of A. By induction, A_k is triangular, with the same diagonal as that of A, so that A_k = D^{−k}AD^k... Hence, the part above the diagonal of A_k does not converge. Summing up, a convergence theorem may concern only the diagonal of A_k and what is below it.
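Serre's example is easy to reproduce (a sketch; NumPy's qr does not enforce a positive diagonal, so we renormalize to the convention the quote uses):

```python
import numpy as np

# One QR step in the positive-diagonal convention: A -> R Q.
def qr_step(A):
    Q, R = np.linalg.qr(A)
    s = np.sign(np.diag(R))            # renormalize so that diag(R) > 0
    return (s[:, None] * R) @ (Q * s)

# Triangular A with a negative diagonal entry: the diagonal of the
# iterates never changes, but the superdiagonal flips sign forever.
A = np.array([[1.0, 1.0], [0.0, -1.0]])
for k in range(4):
    print(k, A[0, 1])                  # prints 1, -1, 1, -1: no convergence
    A = qr_step(A)
```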

20 More on the QR algorithm. The QR algorithm is numerically stable because it proceeds by orthogonal/unitary similarity transforms. A Hessenberg-form QR algorithm: it is cost effective to first convert A to an upper Hessenberg form with a finite sequence of orthogonal similarities; determining the QR decomposition of an upper Hessenberg matrix then costs 6n² + O(n) arithmetic operations. A practical QR algorithm will use shifts to increase separation and accelerate convergence. The QR algorithm is further extended to semisimple Lie groups: H. Huang, R.R. Holmes and T.Y. Tam, Asymptotic behavior of Iwasawa and Cholesky iterations, manuscript.
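A sketch of the practical recipe, using SciPy's hessenberg for the one-time reduction and the simple shift μ = H[n−1, n−1] (one standard choice among several; production codes also deflate converged eigenvalues, omitted here):

```python
import numpy as np
from scipy.linalg import hessenberg

# Shifted QR on Hessenberg form: factor H - mu*I = QR, set H = RQ + mu*I.
# Each step is again a similarity of the original A, and the Hessenberg
# structure is preserved, keeping every iteration cheap.
def shifted_qr(A, steps=200):
    H = hessenberg(A)                      # one-time orthogonal reduction
    n = H.shape[0]
    I = np.eye(n)
    for _ in range(steps):
        mu = H[-1, -1]                     # Rayleigh-quotient-style shift
        Q, R = np.linalg.qr(H - mu * I)
        H = R @ Q + mu * I
    return H

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
A = A + A.T                                # symmetric: real spectrum
H = shifted_qr(A)
print(np.round(np.sort(np.diag(H)), 6))
print(np.round(np.sort(np.linalg.eigvalsh(A)), 6))
```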

21 5. Application in MIMO. In radio, multiple-input and multiple-output, or MIMO, is the use of multiple antennas at both the transmitter and receiver to improve communication performance.

22 MIMO is one of several forms of smart antenna technology. MIMO offers significant increases in data throughput and link range without additional bandwidth or transmit power. MIMO is an important part of modern wireless communication standards such as IEEE 802.11n (Wi-Fi) and 4G.

23 In MIMO systems, a transmitter sends multiple streams through multiple transmit antennas. The transmit streams go through a matrix channel which consists of all N_t N_r paths between the N_t transmit antennas at the transmitter and the N_r receive antennas at the receiver. The receiver then obtains the received signal vectors through its multiple receive antennas and decodes them into the original information.

24 Mathematical description. Let x = (x_1, ..., x_n)^T be a vector to be transmitted over a noisy channel. Each x_i is chosen from a finite-size alphabet X. A general MIMO system is modeled as r = Hx + ξ, where H is the m×n full-column-rank channel matrix (tall, i.e., m ≥ n), known to the receiver; ξ = (ξ_1, ..., ξ_m)^T is a white Gaussian noise vector with E(ξξ*) = σ²I; and r = (r_1, ..., r_m)^T is the observed received vector. Our task is to detect/estimate the vector x̂ = (x̂_1, ..., x̂_n)^T ∈ X^n given the noisy observation r. The QR decomposition of the channel matrix H can be used to form the back-cancellation detector.

25 A. Successive Cancellation Detection Using QR Decomposition. (1) QR decomposition. Let H = QR be the QR decomposition of the m×n channel matrix H, i.e., Q is an m×n matrix with orthonormal columns and R is an n×n upper triangular matrix. So r = Hx + ξ gives Q*r = Rx + Q*ξ. Set r̃ := Q*r and ξ̃ := Q*ξ, so that [r̃_1; r̃_2; ...; r̃_n] = [r_11 r_12 ··· r_1n; 0 r_22 ··· r_2n; ...; 0 0 ··· r_nn][x_1; ...; x_n] + [ξ̃_1; ...; ξ̃_n].

26 (2) Hard decision. Estimate x_n by making the hard decision x̂_n := Quant[r̃_n/r_nn], where Quant(t) is the element of X that is closest (w.r.t. the 2-norm) to t. (3) Cancellation. Backward substitutions: x̂_n := Quant[r̃_n/r_nn], x̂_k := Quant[(r̃_k − Σ_{i=k+1}^n r_ki x̂_i)/r_kk], k = n−1, n−2, ..., 1. The algorithm is essentially the least squares solution via QR decomposition.
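A sketch of Algorithm A for a toy real-valued alphabet X = {−1, +1} (BPSK; quant and sc_detect are hypothetical names):

```python
import numpy as np

ALPHABET = np.array([-1.0, 1.0])           # toy finite alphabet X

def quant(t):
    # Quant(t): the element of the alphabet closest to t
    return ALPHABET[np.argmin(np.abs(ALPHABET - t))]

# Successive-cancellation detection: r = Hx + noise, H = QR, then back
# substitution with a hard decision at each step.
def sc_detect(H, r):
    Q, R = np.linalg.qr(H)
    y = Q.T @ r                            # y = Rx + Q^T(noise)
    n = R.shape[0]
    xhat = np.zeros(n)
    for k in range(n - 1, -1, -1):
        xhat[k] = quant((y[k] - R[k, k+1:] @ xhat[k+1:]) / R[k, k])
    return xhat

rng = np.random.default_rng(3)
H = rng.standard_normal((8, 4))            # 8 receive, 4 transmit antennas
x = rng.choice(ALPHABET, size=4)
r = H @ x + 0.05 * rng.standard_normal(8)  # mildly noisy observation
print(sc_detect(H, r), x)                  # detected vs transmitted symbols
```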

27 B. Optimally Ordered Detection. Golden et al. proposed a vertical Bell Laboratories layered space-time (V-BLAST) system with an optimal ordered detection algorithm that maximizes the signal-to-noise ratio (SNR). G.D. Golden, G.J. Foschini, R.A. Valenzuela, and P.W. Wolniansky, Detection algorithm and initial laboratory results using V-BLAST space-time communication architecture, Electron. Lett., vol. 35, pp. 14–15, Jan. 1999. The idea is (equivalently) to find an n×n permutation matrix P such that if H̃ := HP = (h̃_1 ··· h̃_n), then the distance between h̃_n and the span of the other columns h̃_1, ..., h̃_{n−1} is maximal. Then repeat the process by stripping off the column vectors one by one.

28 Doing so amounts to finding a subchannel whose SNR is the highest among all n possible subchannels. Geometrically it is to find the column of H whose distance to the span of the other columns of H is maximal, and then to proceed recursively. Equivalently, r = H̃x̃ + ξ, where H̃ := HP and x̃ = P^T x, i.e., if we precode a vector with the permutation matrix P and apply Algorithm A to detect x̃, we get the optimally ordered successive-cancellation detector of Golden et al.
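A brute-force sketch of this ordering rule (hypothetical names; practical V-BLAST implementations use fast recursive updates rather than recomputing each distance from scratch):

```python
import numpy as np

def dist_to_span(h, B):
    # 2-norm distance from h to the column span of B, via least squares
    if B.shape[1] == 0:
        return np.linalg.norm(h)
    c = np.linalg.lstsq(B, h, rcond=None)[0]
    return np.linalg.norm(h - B @ c)

# Greedy ordering: at each stage, the stream detected next is the one
# whose column is farthest from the span of the remaining columns.
def detection_order(H):
    cols = list(range(H.shape[1]))
    order = []
    while cols:
        d = [dist_to_span(H[:, j], H[:, [i for i in cols if i != j]])
             for j in cols]
        best = cols[int(np.argmax(d))]
        order.append(best)
        cols.remove(best)
    return order                            # first entry is detected first

rng = np.random.default_rng(4)
H = rng.standard_normal((6, 4))
print(detection_order(H))
```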

29 A recent optimal decomposition, called the equal-diagonal QR decomposition, or briefly the QRS decomposition, is introduced in J.K. Zhang, A. Kavčić and K.M. Wong, Equal-Diagonal QR Decomposition and its Application to Precoder Design for Successive-Cancellation Detection, IEEE Transactions on Information Theory, 51 (2005). The QRS decomposition is applied to precoded successive-cancellation detection, where we assume that both the transmitter and the receiver have perfect channel knowledge.

30 Theorem 5.1. (Zhang, Kavčić and Wong 2005) For any channel matrix H, there exists a unitary precoder S such that the nonzero diagonal entries of the upper triangular matrix R in HS = QR are all equal. Nice properties: the precoder S and the resulting successive-cancellation detector have many nice properties. The minimum Euclidean distance between two signal points at the channel output is equal to the minimum Euclidean distance between two constellation points at the precoder input, up to a multiplicative factor that equals the diagonal entry in the R-factor.

31 The superchannel HS naturally exhibits an optimally ordered column permutation, i.e., the optimal detection order for the vertical Bell Labs layered space-time (V-BLAST) detector is the natural order. The precoder S minimizes the block error probability of the successive-cancellation detector. A lower and an upper bound for the free distance at the channel output are expressible in terms of the diagonal entries of the R-factor in the QR decomposition of a channel matrix. The precoder S maximizes the lower bound of the channel's free distance subject to a power constraint. For the optimal precoder S, the performance of the detector is asymptotically (at large signal-to-noise ratios (SNRs)) equivalent to that of the maximum-likelihood detector (MLD) that uses the same precoder.

32 Recall the QRS decomposition HS = QR. Let H = UΣV* be the SVD of H, where Σ = [diag(s_1, ..., s_n); O] is m×n. Then set S = VŜ, so that HS = HVŜ = UΣŜ = U[diag(s_1, ..., s_n)Ŝ; O], where the unitary Ŝ is to be determined so that R has equal diagonal entries. Clearly the R-factors of ΣŜ and UΣŜ are the same. WLOG, assume that m = n, i.e., Σ = diag(s_1, ..., s_n). The problem is reduced to finding a unitary Ŝ such that ΣŜ = QR and R has identical diagonal entries.

33 Theorem 5.2. (Kostant 1973) Let Σ = diag(s_1, ..., s_n) ∈ C^{n×n} with s_1 ≥ ··· ≥ s_n > 0. For any unitary S ∈ C^{n×n}, if ΣS = QR, where Q has orthonormal columns and R ∈ C^{n×n} is upper triangular, then ∏_{i=1}^k r_[i] ≤ ∏_{i=1}^k s_i for k = 1, ..., n−1 (1), and ∏_{i=1}^n r_ii = ∏_{i=1}^n s_i (2), where r_[1] ≥ ··· ≥ r_[n] is the decreasing rearrangement of r_11, ..., r_nn. Conversely, if (1) and (2) are satisfied, then there is an n×n unitary S such that the diagonal entries of R in ΣS = QR are those r_ii, i = 1, ..., n.

34 B. Kostant, On convexity, the Weyl group and Iwasawa decomposition, Ann. Sci. École Norm. Sup. (4), 6 (1973). Clearly Kostant (1973) implies Zhang, Kavčić and Wong (2005). The proof of Kostant is not constructive; indeed his result is true for semisimple Lie groups. The construction of the S precoder in [ZKW] is not very cost effective and involves a number of steps; determinants are also involved. Shuangchi He and Tin-Yau Tam, On equal-diagonal QR decomposition, manuscript.

35 Use induction. WLOG assume that m = n. The 2×2 case of ΣS = QR: for any s_1 ≥ μ ≥ s_2, [s_1 0; 0 s_2][cos θ −sin θ; sin θ cos θ] = [s_1 cos θ −s_1 sin θ; s_2 sin θ s_2 cos θ] and ||(s_1 cos θ, s_2 sin θ)^T||_2 = √(s_1² cos²θ + s_2² sin²θ) = μ for some θ ∈ R. In particular, take μ = √(s_1 s_2). Suppose that the statement holds true for n−1. For n > 2, let k be the smallest index such that s_{k−1} ≥ λ ≥ s_k, where λ := (∏_{i=1}^n s_i)^{1/n}.
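The 2×2 step can be checked numerically (a sketch assuming s_1 > s_2 > 0; the rotation angle comes from solving s_1² cos²θ + s_2² sin²θ = s_1 s_2 in closed form):

```python
import numpy as np

# 2x2 step of the equal-diagonal construction: rotate so that
# r_11 = sqrt(s1*s2); the determinant then forces r_22 = sqrt(s1*s2).
def equal_diagonal_2x2(s1, s2):
    assert s1 > s2 > 0                 # distinct positive singular values
    mu2 = s1 * s2                      # target value of r_11 squared
    t = np.arcsin(np.sqrt((s1**2 - mu2) / (s1**2 - s2**2)))
    S = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return np.diag([s1, s2]) @ S

R = np.linalg.qr(equal_diagonal_2x2(3.0, 2.0))[1]
print(np.abs(np.diag(R)))              # both entries equal sqrt(6)
```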

36 There is a unitary S_1 ∈ C^{2×2} such that diag(s_1, s_k)S_1 has R-factor A_1 := [λ ∗; 0 s_1 s_k/λ]. Set S_2 = S_1 ⊕ I_{n−2}. Then A_2 := diag(s_1, s_k, s_2, ..., s_{k−1}, s_{k+1}, ..., s_n)S_2 = A_1 ⊕ diag(s_2, ..., s_{k−1}, s_{k+1}, ..., s_n). Note that λ^{n−1} = (s_1 s_k/λ) ∏_{i=2, i≠k}^n s_i. By the inductive hypothesis, there exists a unitary S_3 ∈ C^{(n−1)×(n−1)} such that the R-factor of A_3 := diag(s_1 s_k/λ, s_2, ..., s_{k−1}, s_{k+1}, ..., s_n)S_3 has equal diagonal entries λ. Then take S_4 = 1 ⊕ S_3.

37 THANK YOU FOR YOUR ATTENTION
