STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9
Date: October 22, 2018, version 1.0. Comments, bug reports: lekheng@galton.uchicago.edu.

1. qr and complete orthogonal factorization

- poor man's svd: one can solve many problems on the svd list using either of these factorizations, but they are much cheaper to compute
- there are direct algorithms for computing qr and complete orthogonal factorizations in a finite number of arithmetic steps; recall that svd is spectral in nature, so in general only iterative algorithms exist (by Galois and Abel, roots of general polynomials of degree five or more cannot be obtained in finitely many arithmetic steps), although for any fixed precision (fixed number of decimal places) we can compute svd in finitely many steps
- there are several versions of qr factorization
- version 1: for any A ∈ C^{m×n} with n ≤ m, there exist a unitary matrix Q ∈ C^{m×m} (i.e., Q^*Q = QQ^* = I_m) and an upper-triangular matrix R ∈ C^{m×n} (i.e., r_{ij} = 0 whenever i > j) such that

      A = QR = Q [R_1; 0]                                            (1.1)

  where [R_1; 0] stacks R_1 on top of an (m − n) × n zero block, and R_1 ∈ C^{n×n} is an upper-triangular square matrix; if A has full column rank, i.e., rank(A) = n, then R_1 is nonsingular; this is called the full qr factorization of A
- version 2: for any A ∈ C^{m×n} with n ≤ m, there exist a matrix Q_1 ∈ C^{m×n} with orthonormal columns (i.e., Q_1^*Q_1 = I_n but Q_1Q_1^* ≠ I_m unless m = n) and an upper-triangular square matrix R_1 ∈ C^{n×n} such that

      A = Q_1R_1                                                     (1.2)

  R_1 here is in fact the same R_1 as in (1.1); Q_1 is the first n columns of Q in (1.1), i.e., Q = [Q_1, Q_2] where Q_2 ∈ C^{m×(m−n)} is the last m − n columns of Q; in fact we obtain (1.2) from (1.1) by simply multiplying out

      A = QR = [Q_1, Q_2] [R_1; 0] = Q_1R_1 + Q_2·0 = Q_1R_1

  as before, if A has full column rank, i.e., rank(A) = n, then R_1 is nonsingular; this is called the reduced qr factorization of A
- version 3: for any A ∈ C^{m×n} with rank(A) = r, there exist a permutation matrix Π ∈ C^{n×n}, a unitary matrix Q ∈ C^{m×m}, and a nonsingular, upper-triangular square matrix R_1 ∈ C^{r×r} such that

      AΠ = Q [R_1 S; 0 0]                                            (1.3)

  where S ∈ C^{r×(n−r)} is just some matrix with no special properties; this is called the rank-retaining qr decomposition of A
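- before moving on, here is a quick numerical illustration of versions 1 and 2, not part of the original notes; it is a minimal sketch assuming numpy is available (version 3 is illustrated after the complete orthogonal factorization below)

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 4
A = rng.standard_normal((m, n))          # full column rank with probability 1

# version 1: full qr, Q is m x m unitary, R is m x n with a zero bottom block
Q, R = np.linalg.qr(A, mode='complete')
print(np.allclose(Q.conj().T @ Q, np.eye(m)))   # Q^*Q = I_m
print(np.allclose(R[n:, :], 0))                 # bottom (m - n) x n block is 0
print(np.allclose(A, Q @ R))

# version 2: reduced qr, Q1 is m x n with orthonormal columns, R1 is n x n
Q1, R1 = np.linalg.qr(A, mode='reduced')
print(np.allclose(Q1.conj().T @ Q1, np.eye(n))) # Q1^*Q1 = I_n
print(np.allclose(A, Q1 @ R1))
```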
- we may also write (1.3) as

      A = QRΠ^T = Q [R_1 S; 0 0] Π^T                                 (1.4)

- version 4: for any A ∈ C^{m×n} with rank(A) = r, there exist a unitary matrix Q ∈ C^{m×m}, a unitary matrix U ∈ C^{n×n}, and a nonsingular, lower-triangular square matrix L ∈ C^{r×r} such that

      A = Q [L 0; 0 0] U^*                                           (1.5)

  this is called the complete orthogonal factorization of A
- it can be obtained from a full qr factorization of [R_1 S]^* ∈ C^{n×r}, which has full column rank:

      [R_1 S]^* = Z [R_2; 0]                                         (1.6)

  where Z ∈ C^{n×n} is unitary and R_2 ∈ C^{r×r} is a nonsingular, upper-triangular square matrix; observe from (1.4) and (1.6) that

      A = Q [R_1 S; 0 0] Π^T = Q [R_2^* 0; 0 0] Z^*Π^T = Q [L 0; 0 0] U^*

  where we set L = R_2^* and U = ΠZ
- note that for a matrix that is not of full column rank, a qr decomposition would necessarily mean either version 3 or 4
- there are yet other variants of qr factorization that can be obtained using essentially the same algorithms (Givens and Householder qr): A = QR, A = LQ, A = RQ, A = QL, where Q is unitary, R is upper-triangular, and L is lower-triangular; using such variants, we could for instance make the lower-triangular matrix L in (1.5) an upper-triangular matrix instead
- the qr factorization is sometimes regarded as a generalization of the polar form a = re^{iθ} of a complex number a ∈ C to matrices; we will see later that we may always choose our R so that r_{ii} ≥ 0
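- before the aside on permutation matrices, here is a sketch of the construction (1.5)-(1.6), not part of the original notes; it assumes numpy and scipy, uses scipy's column-pivoted qr for version 3, and the rank tolerance 1e-10 is an arbitrary choice for illustration

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
m, n, r_true = 7, 5, 3
A = rng.standard_normal((m, r_true)) @ rng.standard_normal((r_true, n))  # rank 3

# version 3: column-pivoted qr, A @ Pi = Q @ R with R = [R1 S; 0 0]
Q, R, piv = qr(A, pivoting=True)
Pi = np.eye(n)[:, piv]                       # A @ Pi == A[:, piv]
r = int(np.sum(np.abs(np.diag(R)) > 1e-10))  # numerical rank

# full qr of [R1 S]^*, an n x r matrix of full column rank, as in (1.6)
Z, R2 = np.linalg.qr(R[:r, :].conj().T, mode='complete')
L = R2[:r, :].conj().T                       # r x r, lower triangular, nonsingular
U = Pi @ Z

# check A = Q [L 0; 0 0] U^* as in (1.5)
M = np.zeros((m, n))
M[:r, :r] = L
print(np.allclose(A, Q @ M @ U.conj().T))
```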
2. aside: permutation matrices

- the permutation matrix Π in (1.3) comes from performing column pivoting in the algorithm
- recall that a permutation matrix is simply the identity matrix with its rows and columns permuted, e.g.

      Π = [0 1 0; 0 0 1; 1 0 0]                                      (2.1)

- multiplying a matrix A ∈ C^{m×n} by an n×n permutation matrix on the right, i.e., AΠ, has the effect of permuting the columns of A according to precisely the way the columns of Π are permuted from the identity, e.g.

      [a b c; d e f; g h i] [0 1 0; 0 0 1; 1 0 0] = [c a b; f d e; i g h]

- multiplying a matrix A ∈ C^{m×n} by an m×m permutation matrix on the left, i.e., ΠA, has the effect of permuting the rows of A according to precisely the way the rows of Π are permuted from the identity, e.g.

      [0 1 0; 0 0 1; 1 0 0] [a b c; d e f; g h i] = [d e f; g h i; a b c]

- multiplying a square matrix A ∈ C^{n×n} by an n×n permutation matrix on the left and its transpose on the right, i.e., ΠAΠ^T, has the effect of permuting the diagonal of A: entries on the diagonal stay on the diagonal and entries off the diagonal stay off the diagonal, e.g.

      [0 1 0; 0 0 1; 1 0 0] [a b c; d e f; g h i] [0 1 0; 0 0 1; 1 0 0]^T = [e f d; h i g; b c a]

  note that a, e, i stay on the diagonal as expected
- permutation matrices are always orthogonal (also unitary, since they have real entries), i.e.,

      Π^TΠ = ΠΠ^T = I,  or  Π^{−1} = Π^T = Π^*

- we don't store permutation matrices as matrices of floating point numbers; we store just the permutation, e.g., (2.1) can be stored as (3, 1, 2) since it takes column 3 to column 1, column 1 to column 2, column 2 to column 3
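- a small numpy illustration of these facts, not part of the original notes: it stores the permutation in (2.1) as a 0-based index vector and applies it with fancy indexing rather than a matrix product

```python
import numpy as np

A = np.arange(1, 10).reshape(3, 3)    # stand-in for [a b c; d e f; g h i]
piv = np.array([2, 0, 1])             # (2.1) stored as (3, 1, 2), 0-based here
Pi = np.eye(3)[:, piv]                # the explicit matrix, rarely formed in practice

print(np.array_equal(A @ Pi, A[:, piv]))            # column permutation
print(np.array_equal(Pi @ A, A[np.argsort(piv)]))   # row permutation
print(np.array_equal(Pi.T @ Pi, np.eye(3)))         # Pi^{-1} = Pi^T
print(np.diag(Pi @ A @ Pi.T))                       # diagonal of A, permuted: [5 9 1]
```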
3. existence and uniqueness of qr

- if A ∈ C^{m×n} has full column rank, i.e., rank(A) = n ≤ m, then we will show existence and (some kind of) uniqueness of its reduced qr factorization
- uniqueness is easy if m = n: suppose A = Q_1R_1 = Q_2R_2 where Q_1, Q_2 ∈ C^{n×n} are unitary and R_1, R_2 ∈ C^{n×n} are nonsingular; then

      Q_2^*Q_1 = R_2R_1^{−1}

- note that the left-hand side is unitary and the right-hand side is upper-triangular; the only matrices that are both unitary and upper-triangular are diagonal matrices of the form

      D = diag(e^{iθ_1}, ..., e^{iθ_n})

  so we get Q_2 = Q_1D^* and R_2 = DR_1; qr factorization is unique up to such unimodular scaling
- more generally, we could also get uniqueness without requiring m = n; this follows from Gram-Schmidt, which we can also use to establish existence

4. Gram-Schmidt orthogonalization

- suppose A ∈ C^{n×n} is square and full-rank, so all the column vectors of A are linearly independent
- consider the qr factorization

      A = [a_1, ..., a_n] = [q_1, ..., q_n] [r_11 r_12 ... r_1n; 0 r_22 ... r_2n; ...; 0 0 ... r_nn] = QR

- from this matrix equation, we get

      a_1 = r_11 q_1
      a_2 = r_12 q_1 + r_22 q_2
      ...
      a_n = r_1n q_1 + r_2n q_2 + ... + r_nn q_n

  and from these we can deduce an algorithm
- first note that a_1 = r_11 q_1, and so

      r_11 = ||a_1||_2,  q_1 = a_1/||a_1||_2

- next, from a_2 = r_12 q_1 + r_22 q_2 we get

      r_12 = q_1^*a_2,  r_22 = ||a_2 − r_12 q_1||_2,  q_2 = (a_2 − r_12 q_1)/r_22

- in general, we get a_k = Σ_{j=1}^{k} r_jk q_j and hence

      q_k = (a_k − Σ_{j=1}^{k−1} r_jk q_j)/r_kk,  r_jk = q_j^*a_k for j < k

- note that r_kk ≠ 0: since a_1, ..., a_n are linearly independent,

      a_k ∉ span{a_1, ..., a_{k−1}} = span{q_1, ..., q_{k−1}}

  and so a_k − Σ_{j=1}^{k−1} r_jk q_j ≠ 0, and so

      r_kk = ||a_k − Σ_{j=1}^{k−1} r_jk q_j||_2 ≠ 0                  (4.1)

- this is the Gram-Schmidt algorithm; there are two ways to see it:
  given a list of linearly independent vectors a_1, ..., a_n ∈ C^n, it produces a list of orthonormal vectors q_1, ..., q_n that spans the same subspace;
  given a matrix A ∈ C^{n×n} of full rank, it produces a qr factorization A = QR
- so we have established the existence of qr; in fact, it is clear that if we started from a list of linearly independent vectors a_1, ..., a_n ∈ C^m where n ≤ m, or equivalently a matrix A ∈ C^{m×n} of full column rank rank(A) = n ≤ m, the Gram-Schmidt algorithm would still produce a list of orthonormal vectors q_1, ..., q_n, or equivalently a matrix Q ∈ C^{m×n} with orthonormal columns; the only difference is that the algorithm would terminate at step n when it runs out of input vectors
- note that this is a special qr factorization since r_kk > 0 for all k = 1, ..., n (because r_kk is chosen to be a norm); in fact, requiring r_kk > 0 gives us uniqueness (not just uniqueness up to unimodular scaling)
- now what if A ∈ C^{m×n} is not full rank, i.e., a_1, ..., a_n are not linearly independent? in this case Gram-Schmidt could fail since r_kk in (4.1) can now be 0
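- before discussing the fix, here is a direct transcription of these formulas into numpy, not part of the original notes; it is a sketch for the real case (use conjugates for complex input) and assumes A has full column rank so every r_kk > 0

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt: reduced qr of a matrix with full column rank."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        v = A[:, k].astype(float)
        for j in range(k):
            R[j, k] = Q[:, j] @ A[:, k]      # r_jk = q_j^* a_k
            v = v - R[j, k] * Q[:, j]
        R[k, k] = np.linalg.norm(v)          # > 0 iff a_1,...,a_k independent
        Q[:, k] = v / R[k, k]
    return Q, R

A = np.random.default_rng(1).standard_normal((5, 3))
Q, R = gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(3)), np.allclose(A, Q @ R))
```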
- we need to modify Gram-Schmidt so that it finds a subset of a_1, ..., a_n that is linearly independent; this is equivalent to finding a permutation matrix Π so that the first r = rank(A) columns of AΠ are linearly independent
- this can be done adaptively and corresponds to column pivoting; we will discuss this later when we discuss the Givens and Householder qr algorithms, which are what is used in practice
- the truth is that Gram-Schmidt is really a lousy algorithm: it is numerically unstable; for example, if a_1 and a_2 are almost parallel, then a_2 − r_12 q_1 is almost zero and roundoff error becomes significant
- because of such numerical instability the computed q_1, q_2, ..., q_k gradually lose their orthogonality
- however it is not difficult to fix Gram-Schmidt by reorthogonalization, essentially by applying Gram-Schmidt a second time to the output q_1, q_2, ..., q_k of the first round of Gram-Schmidt; in exact arithmetic, q_1, q_2, ..., q_k is already orthogonal and applying Gram-Schmidt a second time has no effect, but in the presence of rounding error, reorthogonalization has a real effect, making the output of the second round orthogonal; the nice thing is that there is no need to do a third round of Gram-Schmidt: twice suffices (for subtle reasons)

5. modified Gram-Schmidt algorithm

- we didn't discuss this in lectures but I'm adding this discussion of modified Gram-Schmidt, a way to improve the numerical stability of Gram-Schmidt
- note that q_k can be rewritten as

      q_k = (1/r_kk) (a_k − Σ_{j=1}^{k−1} (q_j^*a_k) q_j) = (1/r_kk) (a_k − Σ_{j=1}^{k−1} q_jq_j^* a_k) = (1/r_kk) (I − Σ_{j=1}^{k−1} q_jq_j^*) a_k

- if we define P_i = q_iq_i^* ∈ C^{n×n}, then P_i is an orthogonal projector that satisfies P_i^2 = P_i and P_iP_j = 0 if i ≠ j, so we can write

      q_k = (1/r_kk) (I − Σ_{j=1}^{k−1} P_j) a_k = (1/r_kk) ∏_{j=1}^{k−1} (I − P_j) a_k

- although the classical Gram-Schmidt process is numerically unstable, the modified Gram-Schmidt method, which applies the projectors I − P_j one at a time, partially alleviates this difficulty
- note that A = QR = [r_11 q_1, r_12 q_1 + r_22 q_2, ...]; if we let r_i^T = [0, ..., 0, r_ii, r_{i,i+1}, ..., r_in] denote the i-th row of R, we may define A^{(k)} ∈ C^{m×(n−k+1)} by

      A − Σ_{i=1}^{k−1} q_i r_i^T = [0, A^{(k)}]

  where the first k − 1 columns on the right are zero
- if we write A^{(k)} = [z, B], then we compute

      r_kk = ||z||_2,  q_k = z/r_kk,  [r_{k,k+1}, ..., r_{k,n}] = q_k^*B

  which yields

      A^{(k+1)} = B − q_k [r_{k,k+1}, ..., r_{k,n}]

- this process is numerically more stable than classical Gram-Schmidt, although still not as good as Householder or Givens qr
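- a numpy sketch of this sweep, not part of the original notes, followed by a comparison on a nearly rank-deficient matrix (a Läuchli-type test matrix, chosen here for illustration); gram_schmidt refers to the sketch in the previous section

```python
import numpy as np

def modified_gram_schmidt(A):
    """MGS: same Q, R as classical GS in exact arithmetic, better in roundoff."""
    m, n = A.shape
    V = A.astype(float)             # V holds the successive A^(k) blocks
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(V[:, k])           # r_kk = ||z||_2
        Q[:, k] = V[:, k] / R[k, k]                 # q_k = z / r_kk
        R[k, k+1:] = Q[:, k] @ V[:, k+1:]           # [r_{k,k+1} ... r_{kn}]
        V[:, k+1:] -= np.outer(Q[:, k], R[k, k+1:]) # A^(k+1) = B - q_k r_k^T
    return Q, R

# loss of orthogonality: classical GS vs MGS on nearly parallel columns
eps = 1e-8
A = np.array([[1.0, 1.0, 1.0], [eps, 0, 0], [0, eps, 0], [0, 0, eps]])
for f in (gram_schmidt, modified_gram_schmidt):   # gram_schmidt: earlier sketch
    Q, _ = f(A)
    print(f.__name__, np.linalg.norm(Q.T @ Q - np.eye(3)))
```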
6. back substitution

- backsolve or back substitution refers to a simple, intuitive way of solving linear systems of the form Rx = y or Lx = y where R is upper-triangular and L is lower-triangular
- take Rx = y for illustration:

      [r_11 r_12 ... r_1n; 0 r_22 ... r_2n; ...; 0 0 ... r_nn] [x_1; x_2; ...; x_n] = [y_1; y_2; ...; y_n]

  i.e.,

      y_1 = r_11 x_1 + r_12 x_2 + ... + r_1n x_n
      ...
      y_{n−1} = r_{n−1,n−1} x_{n−1} + r_{n−1,n} x_n
      y_n = r_nn x_n

- start at the bottom and work our way up:

      x_n = y_n/r_nn
      x_{n−1} = (y_{n−1} − r_{n−1,n}(y_n/r_nn))/r_{n−1,n−1}
      ...

- this requires that r_kk ≠ 0 for all k = 1, ..., n, which is guaranteed if R is nonsingular
- for example, we could use the qr factorization to solve a linear system: given A ∈ C^{n×n} nonsingular and b ∈ C^n,
  step 1: find a qr factorization A = QR
  step 2: form y = Q^*b
  step 3: backsolve Rx = y to get x
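- a sketch of back substitution and the three-step qr solve in numpy, not part of the original notes (real case)

```python
import numpy as np

def backsolve(R, y):
    """Solve Rx = y for nonsingular upper-triangular R, bottom row first."""
    n = len(y)
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = (y[k] - R[k, k+1:] @ x[k+1:]) / R[k, k]
    return x

rng = np.random.default_rng(2)
A, b = rng.standard_normal((4, 4)), rng.standard_normal(4)
Q, R = np.linalg.qr(A)          # step 1: A = QR
y = Q.T @ b                     # step 2: y = Q^* b
x = backsolve(R, y)             # step 3: Rx = y
print(np.allclose(A @ x, b))
```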
7. general principle for factoring matrices

- it is easy to solve Ax = b if
  A is unitary or orthogonal (includes permutation matrices), or
  A is upper- or lower-triangular (includes diagonal matrices);
  Ax = b with such A can be solved with O(n^2) flops
- if A represents a special orthogonal matrix like the discrete Fourier or wavelet transform, then Ax = b can in fact be solved in O(n log n) flops using algorithms like the fast Fourier or fast wavelet transform
- if A is not of one of these forms, we factorize A into a product of matrices of these forms; this includes all the basic matrix factorizations: lu, qr, svd, evd
- actually, to the above list we could also add
  A is bidiagonal or tridiagonal (or banded, i.e., a_{ij} = 0 if |i − j| > b for some bandwidth b ≪ n),
  A is Toeplitz or Hankel, i.e., a_{ij} = a_{i−j} or a_{ij} = a_{i+j}, constant on the diagonals or the opposite diagonals, or
  A is semiseparable
- Ax = b with bidiagonal or tridiagonal A can be solved in O(n) flops; Ax = b with Toeplitz or Hankel A can be solved in O(n^2 log n) flops; these are often called structured matrices
- for example, a tridiagonal system

      [b_1 c_1; a_2 b_2 c_2; a_3 b_3 c_3; ...; a_n b_n] [x_1; x_2; ...; x_n] = [d_1; d_2; ...; d_n]

  may be solved by first computing

      c'_1 = c_1/b_1,  c'_i = c_i/(b_i − a_i c'_{i−1}),  i = 2, 3, ..., n−1,

  and

      d'_1 = d_1/b_1,  d'_i = (d_i − a_i d'_{i−1})/(b_i − a_i c'_{i−1}),  i = 2, 3, ..., n,

  followed by back substitution

      x_n = d'_n,  x_i = d'_i − c'_i x_{i+1},  i = n−1, n−2, ..., 1

  (a code sketch of this tridiagonal solve appears at the end of these notes)
- in this course we will just restrict ourselves to unitary and triangular factors, but we will discuss a general principle for solving linear systems and least squares problems based on rank-retaining factorizations that works with any structured matrices
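- the O(n) tridiagonal solve above (often called the Thomas algorithm) transcribed into numpy, not part of the original notes; it is a sketch that assumes no zero pivots arise (e.g. a diagonally dominant diagonal), since no pivoting is done

```python
import numpy as np

def tridiag_solve(a, b, c, d):
    """Solve the tridiagonal system above in O(n) flops; a is the subdiagonal
    (a[0] unused), b the diagonal, c the superdiagonal (c[-1] unused), d the
    right-hand side."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# check against a dense solve on a diagonally dominant example
n, rng = 6, np.random.default_rng(3)
a, c = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
b, d = np.full(n, 4.0), rng.standard_normal(n)
T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(tridiag_solve(a, b, c, d), np.linalg.solve(T, d)))
```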
More information