Contents

1 Repeated Gram Schmidt
  1.1 Local errors
  1.2 Propagation of the errors
Gram-Schmidt orthogonalisation

Gerard Sleijpen

December 7

(Mathematical Institute, Utrecht University, P.O. Box , 3508 TA Utrecht, the Netherlands. sleijpen@math.uu.nl. Version: December)

1 Repeated Gram Schmidt

A sequence $x_1, x_2, \ldots$ of vectors of dimension $n$ is orthogonalized by the Gram-Schmidt process into a sequence $v_1, v_2, \ldots$ of orthonormal vectors such that, for each $k$, the vectors $v_1, \ldots, v_k$ span the same space as the first $k$ vectors $x_i$. The construction of the vectors $v_k$ is recursive. If $V_k$ is the matrix with columns $v_1, \ldots, v_k$, then $v_{k+1}$ is constructed by the Gram-Schmidt process as follows:

$$\tilde x = x_{k+1} - V_k (V_k^* x_{k+1}), \qquad v_{k+1} = \tilde x / \|\tilde x\|_2.$$

In exact arithmetic, the operator $I - V_k V_k^*$ projects any $n$-vector onto the space orthogonal to $\mathrm{span}(V_k)$. This will not be the case in rounded arithmetic, for two reasons:

1. Local errors. The application of $I - V_k V_k^*$ to $x_{k+1}$ will introduce rounding errors. In particular, the computed $v_{k+1}$ will not be orthogonal to $V_k$.

2. Propagation of the errors. The operator $I - V_k V_k^*$ is not an exact orthogonal projector (see 1). Therefore, even an exact application of this operator does not lead to a vector $v_{k+1}$ that is orthogonal to $V_k$.

The negative effects of both aspects can be diminished by repeating the Gram-Schmidt orthogonalization.

1.1 Local errors

In rounded arithmetic, with $u$ the machine precision, we have (neglecting $O(u^2)$ terms)

$$\tilde x^{(1)} \equiv \tilde x + \Delta\tilde x = x - VV^*x - V\Delta_1 + \Delta_2$$

with $\|\Delta_1\|_2 \le n\sqrt{k}\,u\,\|x\|_2$ and $\|\Delta_2\|_2 \le k\,u\,\|V\|_2\,\|V^*x\|_2 + u\,\|x\|_2$. The error $\Delta v$ in $v \equiv \tilde x^{(1)}/\|\tilde x^{(1)}\|_2$ can be bounded by

$$\|\Delta v\|_2 \le \Big( n\sqrt{k}\,\frac{\|x\|_2}{\|\tilde x\|_2} + k\sqrt{k}\,\frac{\|V^*x\|_2}{\|\tilde x\|_2} \Big)\,u + u \le (n+k)\sqrt{k}\,\frac{\|x\|_2}{\|\tilde x\|_2}\,u + u. \qquad (1)$$

If $V$ is nearly orthogonal, then $\|x\|_2/\|\tilde x\|_2$ is the reciprocal of the sine of the angle $\varphi$ between $x$ and the space spanned by $V$.
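In Matlab (the language of the figures below), one step of this recursion could read as follows. This is an illustrative sketch of ours, not code from the note, and the names gs_step, V, x are ours; it also returns $\|x\|_2/\|\tilde x\|_2$, the computable quantity used below to detect a small angle between $x$ and $\mathrm{span}(V)$:

function [v,gamma,sineinv] = gs_step(V,x)
% One classical Gram-Schmidt step: orthogonalize x against the
% (nearly) orthonormal columns of the n-by-k matrix V.
  h = V'*x;                 % coordinates of x in span(V)
  xt = x - V*h;             % component orthogonal to span(V)
  gamma = norm(xt);         % ||xt||_2
  sineinv = norm(x)/gamma;  % 1/sin(phi); large if x is nearly in span(V)
  v = xt/gamma;             % candidate new basis vector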
The component of the error $\Delta v$ in $\mathrm{span}(V)$ is essentially equal to $-V\Delta_1$ and can be bounded in norm by $n\sqrt{k}\,u/\sin(\varphi)$ if $\|I - V^*V\|_2 \ll 1$. Therefore, to keep the loss of orthogonality due to local rounding errors less than $\delta$, with $0 < \delta \ll 1$, the computable quantity $1/\sin(\varphi) = \|x\|_2/\|\tilde x\|_2$ should be less than $\delta/(n\sqrt{k}\,u)$.

If this requirement does not hold, then another Gram-Schmidt sweep can be applied. This produces the vector

$$\tilde x^{(2)} \equiv \tilde x + (I - VV^*)\Delta_2 - V\Delta_1' + \Delta_2'$$

with $\|\Delta_1'\|_2 \le n\sqrt{k}\,u\,\|\tilde x\|_2$ and $\|\Delta_2'\|_2 \le k\,u\,\|V\|_2\,\|V^*\tilde x\|_2 + u\,\|\tilde x\|_2$. In the estimate of the perturbation, we assumed that $\|\tilde x^{(1)}\|_2 \approx \|\tilde x\|_2$, which is acceptable if, say, $\|x\|_2/\|\tilde x\|_2 \le 0.1/(n\sqrt{k}\,u)$. Note that the vector $x$ is, up to machine precision, in the span of $V_k$ if $\|x\|_2/\|\tilde x\|_2 > 0.1/(n\sqrt{k}\,u)$.

The error terms $V\Delta_1'$ and $\Delta_2'$ are of the order of machine precision, relative to $\|\tilde x\|_2$. The term $(I - VV^*)\Delta_2$ can be a factor $1/\sin(\varphi)$ larger, but it is orthogonal to $V$ and does not contribute to a loss of orthogonality. Therefore, the vector $\tilde x^{(2)}$ will be, up to machine precision, orthogonal to $V$. Note that $h \equiv V^*x \doteq h_1 + V^*\tilde x^{(1)}$, where $h_1 \equiv V^*x + \Delta_1$, and that $x = \tilde x^{(2)} + Vh - (I - VV^*)\Delta_2 + O(u)$.

Notes 1

1. The estimate for $\Delta_2$ can be refined. Since $\mathrm{fl}(\gamma + \eta\alpha) = (\gamma + \eta\alpha(1+\xi'))(1+\xi) = \gamma + \eta\alpha + \eta\alpha\xi' + (\gamma + \eta\alpha)\xi$, we see that $|\Delta_2| \le u\,|Vh| + u \sum_{j=2}^k |V_j h_j|$, where $h_j \equiv V_j^* x$ and $V_j \equiv [v_1, \ldots, v_j]$. Hence

$$\|\Delta_2\|_2 \le u\,\|V\|_2\,\|h\|_2 + u \sum_j \|V_j h_j\|_2 \le u\,(\sqrt{k} + k - 1)\,\|h\|_2 \approx k\,u\,\|h\|_2.$$

2. The error vector $\Delta_2$ will have some randomness. Therefore an estimate $\|V^*\Delta_2\|_2 \le \sqrt{k}\,\|\Delta_2\|_2$ is rather pessimistic, and $\sqrt{k/n}\,\|\Delta_2\|_2$ would be more realistic.

3. With modified Gram-Schmidt, errors are also subject to the subsequent orthogonalizations. Therefore, with the modified approach, the component of the error $\Delta_2$ in the space spanned by $V$ can be significantly smaller. However, the major advantage of the modified approach is in the $\Delta_1$ term. For the error in the intermediate terms, it is the sine of the angle between the intermediate vectors $x - V_j h_j$ and the space spanned by $V$ that is of importance, rather than the sine of the angle between $x$ and this space. If, for instance, the angle between $x$ and $v_1$ is small, while the angle with the space spanned by the other vectors $v_i$ is non-small, then only the error in the inner product $v_1^*x$ is of significance, and the $k$ term in the estimate (1) for $\|\Delta_1\|_2$ can be skipped. But also in this approach the error is proportional to $1/\sin(\varphi)$. If $x$ has a small angle with $V$, but large angles with all $v_i$ (as, for instance, for $x = \sum_{i=1}^k v_i + \epsilon v_{k+1}$), then, also with the modified approach, the $k$ term will show up.

4. If we have a good estimate $w$ for the vector in the space spanned by $V$ that is close to $x$ (close to $Vh$, where $h \equiv V^*x$), then orthogonalization of $x$ against $w$, followed by the Gram-Schmidt procedure,

$$\beta \equiv \frac{w^*x}{w^*w}, \quad x' = x - w\beta, \quad h' = V^*x', \quad \tilde x = x' - Vh', \quad h = \hat h\beta + h'$$

(with $\hat h$ such that $w = V\hat h$), is stable (i.e., the rounding errors will be of the order of machine precision): the vector $x'$ will be nearly orthogonal to $V$. Therefore, the orthogonalization of $x'$ against $V$ will be stable. The rounding errors in the computation of $x'$ will be largely diminished by the subsequent orthogonalization. The solution of the Jacobi-Davidson correction equation is orthogonal to the Ritz vector. Therefore, the angle between this correction vector and the search subspace is large, and there is usually no need to repeat the Gram-Schmidt orthogonalization.
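The effect of a second sweep is easy to observe numerically. The following small experiment is a sketch of ours (not from the note): for a vector that makes a tiny angle with $\mathrm{span}(V)$, one classical Gram-Schmidt sweep leaves a loss of orthogonality of order $u/\sin(\varphi)$, while a second sweep reduces it to the order of machine precision:

n = 1000; k = 20;
[V,~] = qr(randn(n,k),0);             % orthonormal basis of a random subspace
x = V*randn(k,1) + 1e-10*randn(n,1);  % x makes a tiny angle with span(V)
xt = x - V*(V'*x);                    % first sweep
disp(norm(V'*xt)/norm(xt))            % large: order u/sin(phi)
xt = xt - V*(V'*xt);                  % second sweep ("twice is enough")
disp(norm(V'*xt)/norm(xt))            % order of machine precision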
function [V,H]=RepGramSchmidt(X,kappa,delta,i0)
  [n,k]=size(X);
  v=X(:,1); gamma=norm(v);
  H=gamma; V=v/gamma;
  for j=2:k
    v=X(:,j);
    h=V'*v; v=v-V*h;
    gamma=norm(v); beta=norm(h);
    for i=2:i0
      if gamma<delta*beta | gamma>kappa*beta, break, end
      hc=V'*v; h=h+hc; v=v-V*hc;
      gamma=norm(v); beta=norm(hc);
    end
    H=[H,h;zeros(1,j)];
    if gamma>delta*beta
      H(j,j)=gamma; V=[V,v/gamma];
    end
  end
return

Figure 1: Matlab code for repeated Gram-Schmidt. For i0=1, we have classical Gram-Schmidt. The parameter delta determines when a vector x is considered to be in the span of V; kappa determines when the orthogonalization is repeated.
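The routine of Figure 1 can be exercised as follows. This usage sketch is ours, with parameter values chosen only for illustration (the choice kappa = 0.5 is discussed in Discussion 9 below):

X = randn(100,10);                      % columns to be orthogonalized
[V,H] = RepGramSchmidt(X,0.5,1e-13,2);  % at most two sweeps per column
disp(norm(V'*V-eye(size(V,2))))         % loss of orthogonality
disp(norm(V*H-X))                       % QR property: X = V*H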
1.2 Propagation of the errors

Consider an $n$ by $k$ matrix $V$ and the interaction matrix $M \equiv V^*V$.

Lemma 2 If $M$ is non-singular, then $VM^{-1}V^*$ is the orthogonal projection onto the subspace $\mathrm{span}(V)$, and $I - VM^{-1}V^*$ projects onto the orthogonal complement of this subspace.

We assume that $\epsilon \equiv \|E\|_2 < 1$, where $E \equiv I - M$. Then $M$ is non-singular. With $i$-times repeated Gram-Schmidt applied to a vector $x$, we intend to approximate the component

$$\tilde x \equiv (I - VM^{-1}V^*)\,x$$

of $x$ that is orthogonal to $\mathrm{span}(V)$. Therefore, with the result $\tilde x^{(i)} \equiv (I - VV^*)^i x$ of $i$ sweeps of (classical) Gram-Schmidt, we are interested in the error $\tilde x^{(i)} - \tilde x$.

Lemma 3 For each $i = 0, 1, 2, \ldots$ we have

$$(I - VV^*)^i - (I - VM^{-1}V^*) = VM^{-1}E^iV^* \qquad (2)$$

and

$$(I - VV^*)^i - (I - VV^*)^{i+1} = VE^iV^*. \qquad (3)$$

Proof. A simple induction argument using $(I - VV^*)V = VE$ leads to (3). With (3), we find that $(I - VV^*)^i - I = -V(I + E + E^2 + \cdots + E^{i-1})V^*$, and, with a Neumann expansion $M^{-1} = (I - E)^{-1} = I + E + E^2 + \cdots$, we obtain (2).

Hence

$$\|\tilde x^{(i)} - \tilde x\|_2 = \|VM^{-1}E^iV^*x\|_2 = \|M^{-\frac12}E^iV^*x\|_2 \le \epsilon^i\,\|M^{-\frac12}V^*x\|_2 \le \frac{\epsilon^i}{\sqrt{1-\epsilon}}\,\|V^*x\|_2.$$

Here we used the fact that the commutativity of $M = V^*V$ and $E = I - M$ implies that $(VM^{-1}E^iV^*x)^*(VM^{-1}E^iV^*x) = (M^{-\frac12}E^iV^*x)^*(M^{-\frac12}E^iV^*x)$.

The computable quantity $\tau_1 \equiv \|V^*x\|_2/\|\tilde x^{(1)}\|_2$ is close to the cotangent $\tau$ of the angle between $x$ and $\mathrm{span}(V)$:

$$\tau = \frac{\|VM^{-1}V^*x\|_2}{\|\tilde x\|_2} = \frac{\|M^{-\frac12}V^*x\|_2}{\|\tilde x\|_2}.$$

The relative error in $\tilde x^{(i)}$ can be bounded in terms of $\tau$:

Theorem 4

$$\frac{\|\tilde x^{(i)} - \tilde x\|_2}{\|\tilde x\|_2} \le \epsilon^i\,\tau.$$

Now we can relate $\tau$ to $\tau_1$:

$$\tau_1 = \frac{\|V^*x\|_2}{\|\tilde x^{(1)}\|_2} \ge \sqrt{1-\epsilon}\;\frac{\|M^{-\frac12}V^*x\|_2}{\|\tilde x\|_2}\;\frac{\|\tilde x\|_2}{\|\tilde x^{(1)}\|_2} \ge \tau\,\frac{\sqrt{1-\epsilon}}{1 + \epsilon\tau}.$$

Similarly,

$$\tau_1 = \frac{\|V^*x\|_2}{\|\tilde x^{(1)}\|_2} \le \tau\,\frac{\sqrt{1+\epsilon}}{1 - \epsilon\tau}.$$

Therefore:

Corollary 5 If $\epsilon(\tau + 1) \ll 1$ then $\tau \approx \tau_1$.
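The geometric decay of Theorem 4 can be observed numerically. The following check is a sketch of ours: it builds a deliberately non-orthonormal $V$, computes the exact projection of Lemma 2, and compares it with $i$ sweeps of classical Gram-Schmidt (we approximate $\tau$ by the computable $\|V^*x\|_2/\|\tilde x\|_2$):

n = 200; k = 10;
[Q,~] = qr(randn(n,k),0);
V = Q + 1e-4*randn(n,k);        % nearly orthonormal columns
M = V'*V; ep = norm(eye(k)-M);  % ep approximates epsilon = ||E||_2
x = randn(n,1);
xe = x - V*(M\(V'*x));          % exact projection (I - V*inv(M)*V')*x
xt = x; tau = norm(V'*x)/norm(xe);
for i = 1:4
  xt = xt - V*(V'*xt);          % one classical Gram-Schmidt sweep
  fprintf('i=%d: error %8.2e, bound %8.2e\n', i, norm(xt-xe)/norm(xe), ep^i*tau)
end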
To estimate how much the computable cotangent $\tau_i \equiv \|V^*\tilde x^{(i-1)}\|_2/\|\tilde x^{(i)}\|_2$ reduces in a Gram-Schmidt sweep, note that, by (3),

$$\frac{\|\tilde x^{(i)} - \tilde x^{(i+1)}\|_2}{\|\tilde x^{(i)}\|_2} = \frac{\|VE^iV^*x\|_2}{\|\tilde x^{(i)}\|_2} \le \epsilon^i\,\tau\,\frac{\sqrt{1+\epsilon}}{1 - \epsilon^i\tau} \approx \epsilon^i\,\tau.$$

Hence

$$\tau_{i+1} \equiv \frac{\|V^*\tilde x^{(i)}\|_2}{\|\tilde x^{(i+1)}\|_2} = \frac{\|EV^*\tilde x^{(i-1)}\|_2}{\|\tilde x^{(i+1)}\|_2} \le \epsilon\,\frac{\|V^*\tilde x^{(i-1)}\|_2}{\|\tilde x^{(i)}\|_2}\,\frac{\|\tilde x^{(i)}\|_2}{\|\tilde x^{(i+1)}\|_2} \approx \epsilon\,\tau_i \approx \epsilon^i\,\tau.$$

The following result tells us what the effect is of expanding a basis of a subspace with a vector that is not exactly orthogonal to this space.

Theorem 6 Consider a vector $v$ such that $\|v\|_2 = 1$. Put $V_+ \equiv [V, v]$. Then, with $\epsilon \equiv \|V^*V - I\|_2$ and $\delta \equiv \|V^*v\|_2$, we have that

$$\|V_+^*V_+ - I\|_2 \le \tfrac12\Big(\epsilon + \sqrt{\epsilon^2 + 4\delta^2}\Big) \le \min\Big(\epsilon + \delta,\; \epsilon + \frac{\delta^2}{\epsilon}\Big). \qquad (4)$$

Proof. If $\mu_i$ are the eigenvalues of $E \equiv I - V^*V$ and $\nu_i$ are the components of $V^*v$ in the directions of the associated eigenvectors of $E$, then the eigenvalues $\lambda_j$ of $E_+ \equiv I - V_+^*V_+$ satisfy

$$\lambda_j = \sum_i \frac{\nu_i^2}{\lambda_j - \mu_i}. \qquad (5)$$

Since $\max_i |\mu_i| \le \epsilon$, we have that

$$\sum_i \frac{\nu_i^2}{\lambda - \mu_i} \le \sum_i \frac{\nu_i^2}{\lambda - \epsilon} = \frac{\delta^2}{\lambda - \epsilon} \qquad \text{for } \lambda \ge \epsilon. \qquad (6)$$

Then $\lambda_+ \equiv \tfrac12(\epsilon + \sqrt{\epsilon^2 + 4\delta^2}) \ge \epsilon$ satisfies $\lambda_+ = \delta^2/(\lambda_+ - \epsilon)$. From (5) and (6) we can conclude that $\lambda_j \le \lambda_+$ for all eigenvalues $\lambda_j$ of $E_+$, which proves the theorem.

Notes 7 The estimate in (4) is based on a worst-case situation: all eigenvalues $\mu_i$ of $E$ are allowed to be equal to $\epsilon$. In practice, the eigenvalues will be more or less equally distributed over negative and positive values, and the factor 4 in (4) can be replaced by a smaller value. In numerical experiments, 1.5 appeared to be appropriate.

Corollary 8 If we expand $V$ with $v \equiv \tilde x^{(i)}/\|\tilde x^{(i)}\|_2$, $V_+ \equiv [V, v]$, then we have that $\delta \equiv \|V^*v\|_2 \le \epsilon\,\tau_i$ and

$$\|V_+^*V_+ - I\|_2 \le \epsilon\,\big(1 + \min(\tau_i, \tau_i^2)\big).$$

Proof. Note that

$$\delta = \|V^*v\|_2 = \frac{\|V^*\tilde x^{(i)}\|_2}{\|\tilde x^{(i)}\|_2} = \frac{\|EV^*\tilde x^{(i-1)}\|_2}{\|\tilde x^{(i)}\|_2} \le \epsilon\,\tau_i.$$
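A quick sanity check of the bound (4), again a sketch of ours, compares both sides for a random, mildly non-orthonormal basis:

k = 20;
[Q,~] = qr(randn(100,k),0);
V = Q*diag(1+1e-3*randn(k,1));    % mildly non-orthonormal columns
v = randn(100,1); v = v/norm(v);
ep = norm(V'*V-eye(k)); de = norm(V'*v);
lhs = norm([V,v]'*[V,v]-eye(k+1));
rhs = (ep+sqrt(ep^2+4*de^2))/2;   % the bound of Theorem 6
fprintf('lhs %8.2e <= bound %8.2e\n', lhs, rhs)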
Discussion 9

0. Apparently $\delta \approx \epsilon^i\,\tau$. If $i$ is such that $\epsilon^i\,\tau \le u$, we have orthogonality up to machine precision.

1. If, say, $\tau_i \approx \epsilon^{i-1}\tau \le 0.1$, then the loss of orthogonality is hardly affected by expansion with $\tilde x^{(i)}/\|\tilde x^{(i)}\|_2$.

2. If $\tau$ is of the order of $0.1/(n\sqrt{k}\,u)$ or larger, then $x$ may be considered to be in the span of $V$: lucky breakdown (see 1.1). Therefore, we may assume that $\tau$ stays below this bound, and we take tol accordingly.

3. If $\epsilon \le$ tol, then $\epsilon\tau$ is small. In particular, $\tau_2 \approx \epsilon\tau$ is then modest, and we may assume that expansion with $\tilde x^{(2)}/\|\tilde x^{(2)}\|_2$ leads to negligible non-orthogonality (see 1: twice is enough).

4. If $\tau > 10^8$, we have to repeat Gram-Schmidt in order to avoid pollution by local rounding errors (see 1.1).

5. Suppose we expand $V_k$ by $v_{k+1} \equiv \tilde x^{(i)}/\|\tilde x^{(i)}\|_2$. Since the $\tau_i$ are computable, we may estimate the loss of orthogonality $\|V_k^*V_k - I_k\|_2$ by $\epsilon_k$, which can be computed recursively as

$$\epsilon_{k+1} = \tfrac12\,\epsilon_k\Big(1 + \sqrt{1 + 4\tau_i^2}\Big).$$

We may add a modest multiple of $u$ to accommodate the local rounding errors (see 4).

6. If, for each $k$, we select $i$ such that $\tau_i \le \mathrm{tol}/\epsilon_k$, then $\delta \le \mathrm{tol}$ and we know a priori that $\|V_m^*V_m - I_m\|_2 \le m\,\mathrm{tol}$. The recursively computed upper bound $\epsilon_m$ may be much smaller than $m\,\mathrm{tol}$. The criterion $\tau_i \le \mathrm{tol}/\epsilon_k$ is a dynamical one.

7. If $v_{k+1}, \ldots, v_{k+j}$ have been formed by two sweeps of Gram-Schmidt, then these vectors do not pose an orthogonality problem (see 1 and 3). Therefore, if the next expansion vector requires two sweeps of Gram-Schmidt, the second sweep can be restricted to the vectors $v_1, \ldots, v_k$. Unfortunately, the second sweep can not be restricted to the vectors created by unrepeated Gram-Schmidt, since a vector that is sufficiently orthogonal to its predecessors need not be orthogonal enough to the subsequent ones.

8. In the standard strategy, Gram-Schmidt is repeated if the sine of the angle is less than $\kappa$, with $\kappa = 0.5$ as a popular choice. This is equivalent to the criterion: repeat if $\tau_i \ge \sqrt{1-\kappa^2}/\kappa$. For the popular choice $\kappa = 0.5$, $\tau_i$ will be less than $\sqrt{3}$, and we allow $\epsilon_k$ to grow by a factor $\tfrac12(1 + \sqrt{1 + 4\tau_i^2}) \approx 2.3$ in each expansion step. In $k = 45$ steps, the orthogonality may be completely lost (i.e., $\|V_k^*V_k - I\|_2 \approx 1$).

Remark 10 Rather than orthogonalizing to full precision, as is the aim of repeated Gram-Schmidt, one can also orthogonalize with the operator $I - VM^{-1}V^*$, or with some accurate and convenient approximation of this operator:

$$\tilde x = x - V\tilde h \quad\text{where}\quad \tilde h \equiv h + Eh \quad\text{with}\quad h \equiv V^*x. \qquad (7)$$

Here, we assumed that $\|E\|_2^2 \le u$, which justifies the approximation of $M^{-1} = (I - E)^{-1}$ by $I + E$. The approach in (7) avoids large errors due to loss of orthogonality in the columns of $V$. It can not diminish local rounding errors. However, the criterion for keeping local rounding errors small is much less strict than the criterion for keeping small the errors that are due to a non-orthogonal $V$.
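In Matlab, one orthogonalization step with the corrected projector of (7) could read as follows. This is a sketch of ours (name proj_step is ours); here E stands for $I - V^*V$, assumed to be kept up to date as $V$ is expanded:

function xt = proj_step(V,E,x)
% Approximate projection (7): xt = (I - V*inv(M)*V')*x, with inv(M) =
% inv(I-E) replaced by I+E; valid when norm(E)^2 is of the order of u.
  h = V'*x;
  xt = x - V*(h + E*h);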
To compute $E$, we have to evaluate the inner products that form the coordinates of $V^*\tilde x^{(1)}$, as in the second sweep of Gram-Schmidt. However, in the variant in (7), we do not have to update the vector $\tilde x^{(1)}$ to form $\tilde x^{(2)}$. Instead, we have to update the low-dimensional vector $h$ in all subsequent orthogonalization steps. As a more efficient variant, we can store in $E$ only those vectors $V^*\tilde x^{(1)}$ for which the cotangent $\|V^*x\|_2/\|\tilde x^{(1)}\|_2$ is non-small, e.g., as in the criterion for repeating Gram-Schmidt.

With $L$ the strictly lower triangular part of $E$ and $U$ the strictly upper triangular part, we have that $(I+U)(I+L) = I + L + U + UL$ and $\|UL\|_2 \le \|E\|_2^2 \le u$ if $\|E\|_2^2 \le u$. Therefore, if we neglect errors of order $u$, then we have that

$$VM^{-1}V^* = V(I + E)V^* = \big(V(I+U)\big)\big(V(I+U)\big)^*.$$

Note that $V(I+U) = V + VU$: the term $VU$ is precisely the update of the orthogonalization that has been skipped. If, for some reason (such as a restart), there is a need to form the more accurate orthogonal basis, then this can easily be done. If $V$ has to be updated by some $k$ by $l$ matrix $S$, then $V(S + US)$ efficiently incorporates the postponed orthogonalization. In the Arnoldi process, the upper Hessenberg matrix $H$ should also be updated. Since $U$ is upper triangular, the updated matrix $(I-U)H(I+U)$ is also upper Hessenberg.
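The postponed update might be realized as follows. This is a sketch of ours, assuming E holds $I - V^*V$ (as in the text), a square Hessenberg matrix H, and a given k-by-l update matrix S:

U = triu(E,1);                 % strictly upper triangular part of E
Vacc = V + V*U;                % the more accurate basis V(I+U)
W = V*(S + U*S);               % update by S; equals Vacc*S up to O(u)
k = size(U,1);
H = (eye(k)-U)*H*(eye(k)+U);   % consistent update of the Hessenberg matrix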
function [V,H,E]=RepGramSchmidt(X,kappa,kappa0,delta)
  [n,k]=size(X);
  v=X(:,1); gamma=norm(v);
  H=gamma; V=v/gamma; E=0;
  for j=2:k
    v=X(:,j);
    h=V'*v; h=h-E*h; v=v-V*h;
    gamma=norm(v); beta=norm(h);
    hcs=zeros(j-1,1);
    if gamma>delta*beta
      if gamma<kappa0*beta
        hc=V'*v; h=h+hc; v=v-V*hc;
        gamma=norm(v); beta=norm(hc);
      elseif gamma<kappa*beta
        hcs=(V'*v)/gamma;
      end
    end
    H=[H,h;zeros(1,j)];
    if gamma>delta*beta
      H(j,j)=gamma; V=[V,v/gamma];
      E=[E,hcs;hcs',0];
    end
  end
return

Figure 2: Matlab code for Gram-Schmidt with modified projections. The parameter delta determines when a vector x is considered to be in the span of V. kappa0 determines the size of the accepted local errors (kappa0 = 1.e-3 means that we accept an error of the order of u/kappa0). kappa determines when to modify the projections. Note that the matrix E returned here records the skipped inner products, i.e. (approximately) $V^*V - I$, which is the negative of the $E \equiv I - M$ used in the text; this is why the corrected coefficient vector reads h-E*h.
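A usage sketch of ours for this routine: for random input, the returned E should match the recorded inner products $V^*V - I$ up to terms of the order of machine precision, and the factorization X = V*H still holds:

X = randn(100,10);
[V,H,E] = RepGramSchmidt(X,0.5,1e-3,1e-13);
disp(norm(V'*V-eye(10)-E))   % small: E records the postponed corrections
disp(norm(V*H-X))            % the factorization X = V*H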