Lecture 2 INF-MAT: LU, symmetric LU, Positive (semi)definite, Cholesky, Semi-Cholesky
1 Lecture 2 INF-MAT: LU, symmetric LU, Positive (semi)definite, Cholesky, Semi-Cholesky. Tom Lyche and Michael Floater, Centre of Mathematics for Applications, Department of Informatics, University of Oslo. August 27, 2009
2 Triangular Matrices (from week 1)
Recall: The product of two upper (lower) triangular matrices is upper (lower) triangular. A triangular matrix is nonsingular if and only if all diagonal elements are nonzero. The inverse of a nonsingular upper (lower) triangular matrix is upper (lower) triangular. A matrix is unit triangular if it is triangular with 1's on the diagonal. The product of two unit upper (lower) triangular matrices is unit upper (lower) triangular. A unit upper (lower) triangular matrix is invertible and its inverse is unit upper (lower) triangular.
3 LU Factorization
We say that A = LR is an LU factorization of A ∈ R^{n,n} if L ∈ R^{n,n} is lower (left) triangular and R ∈ R^{n,n} is upper (right) triangular. In addition we will assume that L is unit triangular.
Example:
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1/2 & 1 \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 0 & 3/2 \end{pmatrix}
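As an illustration (not part of the original slides), the factorization above can be computed with a short Python/NumPy sketch; the function name `lu_nopivot` is our own, and the code assumes all leading principal submatrices are nonsingular:

```python
import numpy as np

def lu_nopivot(A):
    """LU factorization A = L R without pivoting.
    L is unit lower triangular, R is upper triangular.
    Assumes the leading principal submatrices are nonsingular."""
    n = A.shape[0]
    L = np.eye(n)
    R = A.astype(float).copy()
    for k in range(n - 1):
        # multipliers for column k, then eliminate below the diagonal
        L[k+1:, k] = R[k+1:, k] / R[k, k]
        R[k+1:, k:] -= np.outer(L[k+1:, k], R[k, k:])
    return L, R

A = np.array([[2.0, 1.0], [1.0, 2.0]])
L, R = lu_nopivot(A)
# L = [[1, 0], [1/2, 1]], R = [[2, 1], [0, 3/2]], as on the slide
```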
4 Example
Not every matrix has an LU factorization. An LU factorization of
A = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}
must satisfy the equations
\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ l_1 & 1 \end{pmatrix} \begin{pmatrix} r_1 & r_2 \\ 0 & r_3 \end{pmatrix} = \begin{pmatrix} r_1 & r_2 \\ l_1 r_1 & l_1 r_2 + r_3 \end{pmatrix}
for the unknowns l_1 in L and r_1, r_2, r_3 in R. Comparing (1,1)-elements we see that r_1 = 0, and this makes it impossible to satisfy the condition 1 = l_1 r_1 for the (2,1)-element. We conclude that A has no LU factorization.
5 Submatrices
Let A ∈ C^{n,n} and r = [r_1, ..., r_k] with 1 ≤ r_1 < ... < r_k ≤ n.
Principal submatrix: B = A(r, r), i.e. b_{i,j} = a_{r_i, r_j}.
Leading principal submatrix: B = A_k := A(1:k, 1:k).
The determinant of a (leading) principal submatrix is called a (leading) principal minor.
Example: the principal submatrices of
A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}
are [1], [5], [9], \begin{pmatrix} 1 & 2 \\ 4 & 5 \end{pmatrix}, \begin{pmatrix} 1 & 3 \\ 7 & 9 \end{pmatrix}, \begin{pmatrix} 5 & 6 \\ 8 & 9 \end{pmatrix}, and A. The leading principal submatrices are [1], \begin{pmatrix} 1 & 2 \\ 4 & 5 \end{pmatrix}, and A.
6 Existence and Uniqueness of LU
Theorem. Suppose the leading principal submatrices A_k of A ∈ C^{n,n} are nonsingular for k = 1, ..., n−1. Then A has a unique LU factorization.
Example:
\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}
Here A_1 = [1] is nonsingular, so this singular matrix still has a unique LU factorization.
7 Proof
Proof by induction on n. For n = 1: [a_{11}] = [1][a_{11}].
Suppose that A_{n−1} has a unique LU factorization A_{n−1} = L_{n−1} R_{n−1}, and that A_1, ..., A_{n−1} are nonsingular. Since A_{n−1} is nonsingular it follows that L_{n−1} and R_{n−1} are nonsingular. But then
A = \begin{pmatrix} A_{n-1} & b \\ c^T & a_{nn} \end{pmatrix} = \begin{pmatrix} L_{n-1} & 0 \\ c^T R_{n-1}^{-1} & 1 \end{pmatrix} \begin{pmatrix} R_{n-1} & v \\ 0 & a_{nn} - c^T R_{n-1}^{-1} v \end{pmatrix} = LR, \quad v = L_{n-1}^{-1} b,
is an LU factorization of A. Since L_{n−1} and R_{n−1} are nonsingular, the (2,1) block c^T R_{n-1}^{-1} of L is uniquely determined, and then r_{nn} = a_{nn} − c^T R_{n-1}^{-1} v is also determined uniquely by the construction. Thus the LU factorization is unique.
8 Using block multiplication one can show:
Lemma. Suppose A = LR is the LU factorization of A ∈ R^{n,n}. For k = 1, ..., n let A_k, L_k, R_k be the leading principal submatrices of A, L, R, respectively. Then A_k = L_k R_k is the LU factorization of A_k for k = 1, ..., n.
Example: for a singular 3×3 matrix A = LR with a_{11} = 1 one has A_1 = [1] = [1][1] = L_1 R_1 and A_2 = L_2 R_2; since A is singular, R(3,3) = 0.
9 A Converse
Theorem. Suppose A ∈ C^{n,n} has an LU factorization. If A is nonsingular then the leading principal submatrices A_k are nonsingular for k = 1, ..., n−1 and the LU factorization is unique.
Proof: Suppose A is nonsingular with the LU factorization A = LR. Since A is nonsingular it follows that L and R are nonsingular. By the lemma above we have A_k = L_k R_k. L_k is unit lower triangular and therefore nonsingular. R_k is nonsingular since its diagonal entries are among the nonzero diagonal entries of R. But then A_k is nonsingular for all k, and uniqueness follows from the existence and uniqueness theorem.
Remark: The LU factorization of a singular matrix need not be unique. For the zero matrix any unit lower triangular matrix can be used as L in an LU factorization.
10 Symmetric LU Factorization
For a symmetric matrix the LU factorization can be written in a special form:
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1/2 & 1 \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 0 & 3/2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1/2 & 1 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ 0 & 3/2 \end{pmatrix} \begin{pmatrix} 1 & 1/2 \\ 0 & 1 \end{pmatrix}
In the last product the first and last matrices are transposes of each other: A = LDL^T is the symmetric LU factorization. It corresponds to the LU factorization A = LR with R = DL^T.
Definition. Suppose A ∈ R^{n,n}. A factorization A = LDL^T, where L is unit lower triangular and D is diagonal, is called a symmetric LU factorization.
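The LDL^T factorization can be sketched in Python/NumPy as follows (our illustration, not from the slides; the function name `ldlt` is ours, and D is returned as a vector of diagonal entries):

```python
import numpy as np

def ldlt(A):
    """Symmetric LU factorization A = L diag(d) L^T.
    L is unit lower triangular, d holds the diagonal of D.
    Assumes A is symmetric with nonsingular leading principal submatrices."""
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for k in range(n):
        # d_k = a_kk - sum_{j<k} L_kj^2 d_j
        d[k] = A[k, k] - L[k, :k] ** 2 @ d[:k]
        # column k of L below the diagonal
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] * d[:k] @ L[k, :k]) / d[k]
    return L, d

A = np.array([[2.0, 1.0], [1.0, 2.0]])
L, d = ldlt(A)
# L = [[1, 0], [1/2, 1]], d = [2, 3/2], matching the slide example
```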
11 LDLT Characterization Theorem Suppose A R n,n is nonsingular. Then A has a symmetric LU factorization A = LDL T if and only if A = A T and A k is nonsingular for k = 1,..., n 1. The symmetric LU factorization is unique.
12 Block LU Factorization
Suppose A ∈ R^{n,n} is a block matrix of the form
A := \begin{pmatrix} A_{11} & \cdots & A_{1m} \\ \vdots & & \vdots \\ A_{m1} & \cdots & A_{mm} \end{pmatrix},   (1)
where each diagonal block A_{ii} is square. We call the factorization
A = LR = \begin{pmatrix} I & & & \\ L_{21} & I & & \\ \vdots & & \ddots & \\ L_{m1} & \cdots & L_{m,m-1} & I \end{pmatrix} \begin{pmatrix} R_{11} & R_{12} & \cdots & R_{1m} \\ & R_{22} & \cdots & R_{2m} \\ & & \ddots & \vdots \\ & & & R_{mm} \end{pmatrix}   (2)
a block LU factorization of A. Here the ith diagonal block I in L and R_{ii} in R have the same order as A_{ii}.
13 Block LU
The results for elementwise LU factorization carry over to block LU factorization as follows.
Theorem. Suppose A ∈ R^{n,n} is a block matrix of the form (1), and the leading principal block submatrices
A_k := \begin{pmatrix} A_{11} & \cdots & A_{1k} \\ \vdots & & \vdots \\ A_{k1} & \cdots & A_{kk} \end{pmatrix}
are nonsingular for k = 1, ..., m−1. Then A has a unique block LU factorization (2). Conversely, if A is nonsingular and has a block LU factorization, then A_k is nonsingular for k = 1, ..., m−1.
14 Why Block LU? The number of flops for the block LU factorization is the same as for the ordinary LU factorization. An advantage of the block method is that it combines many of the operations into matrix operations.
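The 2×2 block case can be sketched in Python/NumPy (our illustration, not from the slides; the name `block_lu_2x2` is ours). With A partitioned so that A11 is nonsingular, the factorization is A = [I 0; L21 I][A11 A12; 0 S] with L21 = A21 A11^{-1} and Schur complement S = A22 − L21 A12:

```python
import numpy as np

def block_lu_2x2(A, p):
    """Block LU of A with a p-by-p leading block A11 (assumed nonsingular).
    Returns unit-block-lower L and block-upper R with A = L @ R."""
    A = A.astype(float)
    n = A.shape[0]
    A11, A12 = A[:p, :p], A[:p, p:]
    A21, A22 = A[p:, :p], A[p:, p:]
    L = np.eye(n)
    # L21 solves L21 @ A11 = A21, i.e. L21 = A21 @ inv(A11)
    L21 = np.linalg.solve(A11.T, A21.T).T
    L[p:, :p] = L21
    R = np.zeros((n, n))
    R[:p, :p], R[:p, p:] = A11, A12
    R[p:, p:] = A22 - L21 @ A12  # Schur complement
    return L, R

A = np.array([[4.0, 1, 0, 0],
              [1, 4, 1, 0],
              [0, 1, 4, 1],
              [0, 0, 1, 4]])
L, R = block_lu_2x2(A, 2)
```

Note that the (2,2) block of R is computed by a matrix-matrix product, which is the point of the block formulation.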
15 The PLU Factorization
A nonsingular matrix A ∈ R^{n,n} has an LU factorization if and only if the leading principal submatrices A_k are nonsingular for k = 1, ..., n−1. This condition seems fairly restrictive. However, for a nonsingular matrix A there is always a permutation of the rows so that the permuted matrix has an LU factorization. We obtain a factorization of the form P^T A = LR, or equivalently A = PLR, where P is a permutation matrix, L is unit lower triangular, and R is upper triangular. We call this a PLU factorization of A.
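A minimal Python/NumPy sketch of PLU with partial pivoting (our own code, not from the slides; the function name `plu` is ours). Applied to the matrix [0 1; 1 1], which has no LU factorization, it still produces A = PLR:

```python
import numpy as np

def plu(A):
    """PLU factorization with partial pivoting: A = P @ L @ R.
    P is a permutation matrix, L unit lower triangular, R upper triangular."""
    n = A.shape[0]
    R = A.astype(float).copy()
    L = np.eye(n)
    perm = np.arange(n)
    for k in range(n - 1):
        # choose the largest pivot in column k (partial pivoting)
        q = k + np.argmax(np.abs(R[k:, k]))
        if q != k:
            R[[k, q], :] = R[[q, k], :]
            L[[k, q], :k] = L[[q, k], :k]
            perm[[k, q]] = perm[[q, k]]
        L[k+1:, k] = R[k+1:, k] / R[k, k]
        R[k+1:, k:] -= np.outer(L[k+1:, k], R[k, k:])
    P = np.eye(n)[:, perm]  # rows of A were reordered by perm
    return P, L, R

A = np.array([[0.0, 1.0], [1.0, 1.0]])
P, L, R = plu(A)
# P @ L @ R reproduces A even though A has no LU factorization
```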
16 Positive (Semi)Definite Matrices
Suppose A ∈ R^{n,n} is a square matrix. The function f : R^n → R given by
f(x) = x^T A x = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} x_i x_j
is called a quadratic form. We say that A is
(i) positive definite if x^T A x > 0 for all nonzero x ∈ R^n.
(ii) positive semidefinite if x^T A x ≥ 0 for all x ∈ R^n.
(iii) negative (semi)definite if −A is positive (semi)definite.
(iv) symmetric positive (semi)definite if A is symmetric in addition to being positive (semi)definite.
(v) symmetric negative (semi)definite if A is symmetric in addition to being negative (semi)definite.
17 Observations
A matrix is positive definite if it is positive semidefinite and in addition
x^T A x = 0 ⟹ x = 0.   (3)
A positive definite matrix must be nonsingular. Indeed, if Ax = 0 for some x ∈ R^n then x^T A x = 0, which by (3) implies that x = 0.
The matrix \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} is positive definite.
The zero matrix is symmetric positive semidefinite, while the unit matrix is symmetric positive definite.
The second derivative matrix T = tridiag(−1, 2, −1) ∈ R^{n,n} is symmetric positive definite.
18 Useful Results Theorem Let m, n be positive integers. If A R n,n is positive semidefinite and X R n,m then B := X T AX R m,m is positive semidefinite. If in addition A is positive definite and X has linearly independent columns then B is positive definite. Proof. Let y R m and set x := Xy. Then y T By = x T Ax 0. If A is positive definite and X has linearly independent columns then x is nonzero if y is nonzero and y T By = x T Ax > 0. Taking A := I and X := A we obtain Corollary Let m, n be positive integers. If A R m,n then A T A is positive semidefinite. If in addition A has linearly independent columns then A T A is positive definite.
19 More Useful Results Theorem Any principal submatrix of a positive (semi)definite matrix is positive (semi)definite. Proof. Suppose the submatrix B is defined by the rows and columns r 1,..., r k of A. Then B := X T AX, where X = [e r1,..., e rk ] R n,k, and B is positive (semi)definite by Theorem 7. If A is positive definite then the leading principal submatrices are nonsingular and we obtain: Corollary A positive definite matrix has a unique LU factorization.
20 What about the Eigenvalues?
Theorem. A positive (semi)definite matrix A has positive (nonnegative) eigenvalues. Conversely, if A has positive (nonnegative) eigenvalues and orthonormal eigenvectors then it is positive (semi)definite.
Proof. Consider the positive definite case. If Ax = λx with x ≠ 0, then λ = x^T A x / x^T x > 0.
Suppose conversely that A ∈ R^{n,n} has eigenpairs (λ_j, u_j), j = 1, ..., n, where the eigenvalues are positive and the eigenvectors satisfy u_i^T u_j = δ_{ij} for i, j = 1, ..., n. Let U := [u_1, ..., u_n] ∈ R^{n,n} and D := diag(λ_1, ..., λ_n). Then AU = UD and U^T U = I, so U^T A U = D. Let x ∈ R^n be nonzero and define c ∈ R^n by Uc = x. Then
x^T A x = (Uc)^T A (Uc) = c^T U^T A U c = c^T D c = \sum_{j=1}^{n} λ_j c_j^2 > 0.
The positive semidefinite case is similar.
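The eigenvalue criterion can be checked numerically for the second derivative matrix T = tridiag(−1, 2, −1) mentioned earlier (our check, not from the slides; the closed-form eigenvalues λ_j = 2 − 2cos(jπ/(n+1)) are a standard fact used here only for verification):

```python
import numpy as np

# second derivative matrix T = tridiag(-1, 2, -1) of order n
n = 5
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# T is symmetric, so positive eigenvalues <=> positive definite
eigvals = np.linalg.eigvalsh(T)
assert np.all(eigvals > 0)

# known closed form: lambda_j = 2 - 2*cos(j*pi/(n+1)), j = 1..n
j = np.arange(1, n + 1)
exact = 2 - 2 * np.cos(j * np.pi / (n + 1))
assert np.allclose(np.sort(eigvals), np.sort(exact))
```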
21 What about the Determinant?
Theorem. If A is positive (semi)definite then det(A) > 0 (det(A) ≥ 0).
Proof. Since the determinant of a matrix is equal to the product of its eigenvalues, this follows from the previous theorem.
22 The Symmetric Case
Lemma. If A is symmetric positive semidefinite then for all i, j
1. |a_{ij}| ≤ (a_{ii} + a_{jj})/2,
2. |a_{ij}| ≤ \sqrt{a_{ii} a_{jj}}.
Proof. For all i, j and α, β ∈ R,
0 ≤ (α e_i + β e_j)^T A (α e_i + β e_j) = α^2 a_{ii} + β^2 a_{jj} + 2αβ a_{ij}.
Taking α = 1, β = ±1 gives a_{ii} + a_{jj} ± 2 a_{ij} ≥ 0, which proves 1.
2. follows trivially from 1. if a_{ii} = a_{jj} = 0. Suppose one of them, say a_{ii}, is positive. Taking α = a_{ij}, β = −a_{ii} we find
0 ≤ a_{ij}^2 a_{ii} + a_{ii}^2 a_{jj} − 2 a_{ij}^2 a_{ii} = a_{ii}(a_{ii} a_{jj} − a_{ij}^2).
But then a_{ii} a_{jj} − a_{ij}^2 ≥ 0 and 2. follows.
23 A Consequence
If A is symmetric positive semidefinite and one diagonal element is zero, say a_{ii} = 0, then all elements in row i and column i must also be zero. For since |a_{ij}| ≤ \sqrt{a_{ii} a_{jj}} we have a_{ij} = 0 for all j, and by symmetry a_{ji} = 0 for all j.
In particular, if A ∈ R^{n,n} is symmetric positive semidefinite and a_{11} = 0 then A has the form
A = \begin{pmatrix} 0 & 0^T \\ 0 & B \end{pmatrix},  B ∈ R^{n−1,n−1}.
Consequently, a symmetric matrix with a zero diagonal element but a nonzero element elsewhere in the corresponding row cannot be positive semidefinite.
24 Cholesky Factorization
Definition.
1. A factorization A = R^T R, where R is upper triangular with positive diagonal elements, is called a Cholesky factorization.
2. A factorization A = R^T R, where R is upper triangular with nonnegative diagonal elements, is called a semi-Cholesky factorization.
Theorem. Let A ∈ R^{n,n}.
1. A has a Cholesky factorization if and only if it is symmetric positive definite.
2. A has a semi-Cholesky factorization if and only if it is symmetric positive semidefinite.
25 Proof Outline, Positive Semidefinite Case
If A = R^T R is a semi-Cholesky factorization then A is symmetric positive semidefinite.
Conversely, suppose A ∈ R^{n,n} is symmetric positive semidefinite. We use induction and partition A as
A = \begin{pmatrix} α & v^T \\ v & B \end{pmatrix},  α ∈ R, v ∈ R^{n−1}, B ∈ R^{n−1,n−1}.
Here α = a_{11} = e_1^T A e_1 ≥ 0. If α = 0 then v = 0. The principal submatrix B is positive semidefinite, so by induction B has a semi-Cholesky factorization B = R_1^T R_1, and
R = \begin{pmatrix} 0 & 0^T \\ 0 & R_1 \end{pmatrix}
gives a semi-Cholesky factorization of A.
26 Proof Continued
A = \begin{pmatrix} α & v^T \\ v & B \end{pmatrix}, now with α > 0. Set β := \sqrt{α}. Then C := B − v v^T/α is symmetric positive semidefinite, and by induction C has a semi-Cholesky factorization C = R_1^T R_1. Then
R := \begin{pmatrix} β & v^T/β \\ 0 & R_1 \end{pmatrix}
gives a semi-Cholesky factorization A = R^T R.
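The outer-product recursion in this proof translates directly into code. A Python/NumPy sketch (ours, not from the slides; the function name `semi_cholesky` and the tolerance `tol` for testing α > 0 are our assumptions):

```python
import numpy as np

def semi_cholesky(A, tol=1e-12):
    """Semi-Cholesky A = R^T R via the outer-product recursion:
    if alpha > 0, row k of R is [sqrt(alpha), v^T/sqrt(alpha)] and we
    recurse on the Schur complement C = B - v v^T / alpha;
    if alpha == 0, then v = 0 and row k of R stays zero."""
    A = A.astype(float).copy()
    n = A.shape[0]
    R = np.zeros((n, n))
    for k in range(n):
        alpha = A[k, k]
        if alpha > tol:
            beta = np.sqrt(alpha)
            R[k, k] = beta
            R[k, k+1:] = A[k, k+1:] / beta
            # Schur complement update C = B - v v^T / alpha
            A[k+1:, k+1:] -= np.outer(R[k, k+1:], R[k, k+1:])
        # else: alpha == 0 forces v = 0; row k of R remains zero
    return R

A = np.array([[4.0, 2.0], [2.0, 10.0]])
R = semi_cholesky(A)
# R = [[2, 1], [0, 3]] and R^T R reproduces A

B = np.array([[0.0, 0.0], [0.0, 1.0]])  # semidefinite with a_11 = 0
R0 = semi_cholesky(B)                   # first row of R0 is zero
```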
27 Criteria, Symmetric Positive Semidefinite Case
Theorem. The following are equivalent for a symmetric matrix A ∈ R^{n,n}:
1. A is positive semidefinite.
2. A has only nonnegative eigenvalues.
3. A = B^T B for some B ∈ R^{n,n}.
4. All principal minors are nonnegative.
28 Criteria, Symmetric Positive Definite Case
Theorem. The following are equivalent for a symmetric matrix A ∈ R^{n,n}:
1. A is positive definite.
2. A has only positive eigenvalues.
3. All leading principal minors are positive.
4. A = B^T B for a nonsingular B ∈ R^{n,n}.
29 Banded Case
Recall that a matrix A has bandwidth d ≥ 0 if a_{ij} = 0 for |i − j| > d. (Semi-)Cholesky factorization preserves bandwidth.
Corollary. The Cholesky factor R := \begin{pmatrix} β & v^T/β \\ 0 & R_1 \end{pmatrix} has the same bandwidth as A.
Proof. Suppose A = \begin{pmatrix} α & v^T \\ v & B \end{pmatrix} ∈ R^{n,n} has bandwidth d ≥ 0. Then v^T = [u^T, 0^T] with u ∈ R^d, so
v v^T = \begin{pmatrix} u \\ 0 \end{pmatrix} \begin{pmatrix} u^T & 0^T \end{pmatrix} = \begin{pmatrix} u u^T & 0 \\ 0 & 0 \end{pmatrix},
and C = B − v v^T/α differs from B only in the upper left d × d corner. Thus C has the same bandwidth as B and A. By induction on n, C = R_1^T R_1, where R_1 has the same bandwidth as C. But then R has the same bandwidth as A.
30 Towards an Algorithm
Since A is symmetric we only need to use the upper part of A. The first row of R is (β, v^T/β) if α > 0 and zero if α = 0. We store the first row of R in the first row of A, and the upper part of C = B − v v^T/α in the upper part of A(2:n, 2:n). The first row of R and the upper part of C can be computed as follows:
if A(1,1) > 0
  A(1,1) = sqrt(A(1,1))
  A(1,2:n) = A(1,2:n)/A(1,1)
  for i = 2:n
    A(i,i:n) = A(i,i:n) - A(1,i)*A(1,i:n)
  end
end   (4)
31 Cholesky and Semi-Cholesky [bandcholesky]
function R=bandcholesky(A,d)
n=length(A);
for k=1:n
  kp=min(n,k+d);
  if A(k,k)>0
    A(k,k)=sqrt(A(k,k));
    A(k,k+1:kp)=A(k,k+1:kp)/A(k,k);
    for i=k+1:kp
      A(i,i:kp)=A(i,i:kp)-A(k,i)*A(k,i:kp);
    end
  else
    A(k,k:kp)=zeros(1,kp-k+1);
  end
end
R=triu(A);
Note that kp is computed before the if-test so that it is defined in both branches.
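The banded algorithm above can be sketched in Python/NumPy (our translation, not from the slides; 0-based indexing, and the name `band_cholesky` is ours):

```python
import numpy as np

def band_cholesky(A, d):
    """Banded (semi-)Cholesky: overwrite the upper triangle of A with R,
    where A = R^T R and A has bandwidth d (0-based translation of the
    MATLAB bandcholesky above)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n):
        kp = min(n, k + d + 1)  # one past the last in-band column
        if A[k, k] > 0:
            A[k, k] = np.sqrt(A[k, k])
            A[k, k+1:kp] /= A[k, k]
            for i in range(k + 1, kp):
                A[i, i:kp] -= A[k, i] * A[k, i:kp]
        else:
            A[k, k:kp] = 0.0
    return np.triu(A)

T = 2.0 * np.eye(4) - np.eye(4, k=1) - np.eye(4, k=-1)
R = band_cholesky(T, 1)
# R is upper bidiagonal and R^T R reproduces T
```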
32 Comments
We overwrite the upper triangle of A with the elements of R. Row k of R is zero for those k where r_{kk} = 0; we reduce round-off noise by forcing those rows to be zero. There are many versions of the Cholesky factorization, see the Golub–Van Loan book. The algorithm is based on outer products v v^T. An advantage of this formulation is that it can be extended to positive semidefinite matrices.
33 Banded Forward Substitution [bandforwardsolve]
Solves the lower triangular system R^T y = b, where R is upper triangular and banded with r_{kj} = 0 for |j − k| > d.
function y=bandforwardsolve(R,b,d)
n=length(b); y=b(:);
for k=1:n
  km=max(1,k-d);
  y(k)=(y(k)-R(km:k-1,k)'*y(km:k-1))/R(k,k);
end
34 Banded Backward Substitution [bandbacksolve]
Solves the upper triangular system Rx = y, where R is upper triangular and banded with r_{kj} = 0 for |j − k| > d.
function x=bandbacksolve(R,y,d)
n=length(y); x=y(:);
for k=n:-1:1
  kp=min(n,k+d);
  x(k)=(x(k)-R(k,k+1:kp)*x(k+1:kp))/R(k,k);
end
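Together, forward and backward substitution solve R^T R x = b. A Python/NumPy sketch of both solvers (ours, not from the slides; 0-based indexing, function names ours), applied to the tridiagonal matrix T = tridiag(−1, 2, −1):

```python
import numpy as np

def band_forward_solve(R, b, d):
    """Solve R^T y = b, with R upper triangular of bandwidth d."""
    n = len(b)
    y = b.astype(float).copy()
    for k in range(n):
        km = max(0, k - d)
        y[k] = (y[k] - R[km:k, k] @ y[km:k]) / R[k, k]
    return y

def band_back_solve(R, y, d):
    """Solve R x = y, with R upper triangular of bandwidth d."""
    n = len(y)
    x = y.astype(float).copy()
    for k in range(n - 1, -1, -1):
        kp = min(n, k + d + 1)
        x[k] = (x[k] - R[k, k+1:kp] @ x[k+1:kp]) / R[k, k]
    return x

# solve T x = b for tridiagonal T = tridiag(-1, 2, -1)
T = 2.0 * np.eye(4) - np.eye(4, k=1) - np.eye(4, k=-1)
R = np.linalg.cholesky(T).T      # Cholesky factor with T = R^T R
b = T @ np.ones(4)               # exact solution is all ones
x = band_back_solve(R, band_forward_solve(R, b, 1), 1)
```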
35 Number of Flops, Discussion
Full matrix: about n^3/3 flops, half of what is needed for Gaussian elimination.
Banded matrix with bandwidth d: O(nd^2) flops.
The method is restricted to positive (semi)definite matrices.
There are many versions of the Cholesky factorization tuned to different machine architectures.
The symmetric LU factorization can be used for many symmetric matrices that are not positive definite.
More informationMAT 1332: CALCULUS FOR LIFE SCIENCES. Contents. 1. Review: Linear Algebra II Vectors and matrices Definition. 1.2.
MAT 1332: CALCULUS FOR LIFE SCIENCES JING LI Contents 1 Review: Linear Algebra II Vectors and matrices 1 11 Definition 1 12 Operations 1 2 Linear Algebra III Inverses and Determinants 1 21 Inverse Matrices
More informationLinear Analysis Lecture 16
Linear Analysis Lecture 16 The QR Factorization Recall the Gram-Schmidt orthogonalization process. Let V be an inner product space, and suppose a 1,..., a n V are linearly independent. Define q 1,...,
More informationComputing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm
Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 13, 2009 Today
More informationNumerical Methods I Solving Square Linear Systems: GEM and LU factorization
Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,
More informationChapter 3. Matrices. 3.1 Matrices
40 Chapter 3 Matrices 3.1 Matrices Definition 3.1 Matrix) A matrix A is a rectangular array of m n real numbers {a ij } written as a 11 a 12 a 1n a 21 a 22 a 2n A =.... a m1 a m2 a mn The array has m rows
More informationMATH Topics in Applied Mathematics Lecture 12: Evaluation of determinants. Cross product.
MATH 311-504 Topics in Applied Mathematics Lecture 12: Evaluation of determinants. Cross product. Determinant is a scalar assigned to each square matrix. Notation. The determinant of a matrix A = (a ij
More informationMAT 2037 LINEAR ALGEBRA I web:
MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear
More informationMath 240 Calculus III
The Calculus III Summer 2015, Session II Wednesday, July 8, 2015 Agenda 1. of the determinant 2. determinants 3. of determinants What is the determinant? Yesterday: Ax = b has a unique solution when A
More informationMultivariate Statistical Analysis
Multivariate Statistical Analysis Fall 2011 C. L. Williams, Ph.D. Lecture 4 for Applied Multivariate Analysis Outline 1 Eigen values and eigen vectors Characteristic equation Some properties of eigendecompositions
More information5. Direct Methods for Solving Systems of Linear Equations. They are all over the place...
5 Direct Methods for Solving Systems of Linear Equations They are all over the place Miriam Mehl: 5 Direct Methods for Solving Systems of Linear Equations They are all over the place, December 13, 2012
More informationChapter 2:Determinants. Section 2.1: Determinants by cofactor expansion
Chapter 2:Determinants Section 2.1: Determinants by cofactor expansion [ ] a b Recall: The 2 2 matrix is invertible if ad bc 0. The c d ([ ]) a b function f = ad bc is called the determinant and it associates
More informationENGG5781 Matrix Analysis and Computations Lecture 8: QR Decomposition
ENGG5781 Matrix Analysis and Computations Lecture 8: QR Decomposition Wing-Kin (Ken) Ma 2017 2018 Term 2 Department of Electronic Engineering The Chinese University of Hong Kong Lecture 8: QR Decomposition
More information12. Cholesky factorization
L. Vandenberghe ECE133A (Winter 2018) 12. Cholesky factorization positive definite matrices examples Cholesky factorization complex positive definite matrices kernel methods 12-1 Definitions a symmetric
More informationScientific Computing
Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting
More informationMath 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008
Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Exam 2 will be held on Tuesday, April 8, 7-8pm in 117 MacMillan What will be covered The exam will cover material from the lectures
More informationNumerical Linear Algebra
Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix
More informationChapter 4 No. 4.0 Answer True or False to the following. Give reasons for your answers.
MATH 434/534 Theoretical Assignment 3 Solution Chapter 4 No 40 Answer True or False to the following Give reasons for your answers If a backward stable algorithm is applied to a computational problem,
More informationChapter 7. Tridiagonal linear systems. Solving tridiagonal systems of equations. and subdiagonal. E.g. a 21 a 22 a A =
Chapter 7 Tridiagonal linear systems The solution of linear systems of equations is one of the most important areas of computational mathematics. A complete treatment is impossible here but we will discuss
More informationLinear Algebra. Linear Equations and Matrices. Copyright 2005, W.R. Winfrey
Copyright 2005, W.R. Winfrey Topics Preliminaries Systems of Linear Equations Matrices Algebraic Properties of Matrix Operations Special Types of Matrices and Partitioned Matrices Matrix Transformations
More information9. Numerical linear algebra background
Convex Optimization Boyd & Vandenberghe 9. Numerical linear algebra background matrix structure and algorithm complexity solving linear equations with factored matrices LU, Cholesky, LDL T factorization
More information7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix
EE507 - Computational Techniques for EE 7. LU factorization Jitkomut Songsiri factor-solve method LU factorization solving Ax = b with A nonsingular the inverse of a nonsingular matrix LU factorization
More informationMatrix Multiplication Chapter IV Special Linear Systems
Matrix Multiplication Chapter IV Special Linear Systems By Gokturk Poyrazoglu The State University of New York at Buffalo BEST Group Winter Lecture Series Outline 1. Diagonal Dominance and Symmetry a.
More informationLINEAR SYSTEMS (11) Intensive Computation
LINEAR SYSTEMS () Intensive Computation 27-8 prof. Annalisa Massini Viviana Arrigoni EXACT METHODS:. GAUSSIAN ELIMINATION. 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS:. JACOBI. 2. GAUSS-SEIDEL 2 CHOLESKY
More informationMATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018
MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S
More informationChapter 3. Determinants and Eigenvalues
Chapter 3. Determinants and Eigenvalues 3.1. Determinants With each square matrix we can associate a real number called the determinant of the matrix. Determinants have important applications to the theory
More informationThe matrix will only be consistent if the last entry of row three is 0, meaning 2b 3 + b 2 b 1 = 0.
) Find all solutions of the linear system. Express the answer in vector form. x + 2x + x + x 5 = 2 2x 2 + 2x + 2x + x 5 = 8 x + 2x + x + 9x 5 = 2 2 Solution: Reduce the augmented matrix [ 2 2 2 8 ] to
More informationMTH 464: Computational Linear Algebra
MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University February 6, 2018 Linear Algebra (MTH
More informationA Review of Matrix Analysis
Matrix Notation Part Matrix Operations Matrices are simply rectangular arrays of quantities Each quantity in the array is called an element of the matrix and an element can be either a numerical value
More informationDeterminants Chapter 3 of Lay
Determinants Chapter of Lay Dr. Doreen De Leon Math 152, Fall 201 1 Introduction to Determinants Section.1 of Lay Given a square matrix A = [a ij, the determinant of A is denoted by det A or a 11 a 1j
More informationKnowledge Discovery and Data Mining 1 (VO) ( )
Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory
More information