Math 577 Assignment 7
Thanks to Yu Cao.

1. Solution. The linear system being solved is $Ax = 0$, where $A$ is the $(n-1)\times(n-1)$ tridiagonal matrix

$$A = \begin{pmatrix} 2 & -1 & & \\ -1 & 2 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 2 \end{pmatrix}, \qquad x = (U_1, U_2, \dots, U_{n-1})^T.$$

In the Jacobi method, $A$ is split into two parts, $A = D + E$, where $D$ is the diagonal matrix with the same diagonal entries as $A$, and $E$ has zero diagonal and the same off-diagonal entries as $A$. Then

$$Ax = 0 \iff Dx = -Ex \iff x = -D^{-1}Ex \iff x = R_J x, \qquad \text{where } R_J = -D^{-1}E.$$

Thus the weighted Jacobi method can be rewritten as

$$x_{q+1} = \omega x_J + (1-\omega)x_q = \omega R_J x_q + (1-\omega)x_q = \big(\omega R_J + (1-\omega)I\big)x_q = R_\omega x_q,$$

where

$$R_\omega = \omega R_J + (1-\omega)I. \tag{1}$$

The exact solution $x$ (if the iteration converges to it) is left unchanged by the iteration, i.e.,

$$x = R_\omega x. \tag{2}$$

By (1) and (2),

$$e_{q+1} = R_\omega e_q, \qquad \text{where } e_q := x_q - x.$$

By induction, $e_q = R_\omega^q e_0$, where $e_0 = x_0 - x$ and $x_0$ is the initial guess.
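As a quick sanity check of the splitting and of formula (1) (not part of the original assignment; a minimal NumPy sketch with an assumed small size n and parameter omega), one can form $D$, $E$, $R_J$, and $R_\omega$ explicitly and confirm that $R_\omega$ coincides with $I - \omega D^{-1}A = I - \tfrac{\omega}{2}A$, which is derived in part (iii) below:

```python
import numpy as np

n = 8                      # assumed problem size for illustration
omega = 2.0 / 3.0          # weighted Jacobi parameter

# Tridiagonal model matrix A = tridiag(-1, 2, -1), size (n-1) x (n-1)
A = 2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)

D = np.diag(np.diag(A))    # diagonal part of A
E = A - D                  # off-diagonal part of A

R_J = -np.linalg.solve(D, E)                       # Jacobi iteration matrix -D^{-1} E
R_omega = omega * R_J + (1 - omega) * np.eye(n - 1)

# R_omega should equal I - omega * D^{-1} A = I - (omega/2) A
assert np.allclose(R_omega, np.eye(n - 1) - 0.5 * omega * A)
print("max |R_omega - (I - (omega/2) A)| =",
      np.max(np.abs(R_omega - (np.eye(n - 1) - 0.5 * omega * A))))
```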
Notice that $A$ is $(n-1)\times(n-1)$, symmetric, positive definite, and $x = 0$ is the exact solution. Hence $e_q = x_q$.

(i) For $\omega = 1$, see Figure 1(a) and Appendix errorfig.m.

(ii) For $\omega = 2/3$, see Figure 1(b) and Appendix errorfig2.m.

(iii) No. Note that

$$R_\omega = \omega R_J + (1-\omega)I = \omega(-D^{-1}E) + (1-\omega)D^{-1}D = I - \omega D^{-1}(E+D) = I - \omega D^{-1}A = I - \frac{\omega}{2}A.$$

Hence

$$\lambda(R_\omega) = 1 - \frac{\omega}{2}\lambda(A).$$

The eigenvalues of $A$ are

$$\lambda_k(A) = 4\sin^2\!\Big(\frac{k\pi}{2n}\Big), \qquad 1 \le k \le n-1,$$

and the corresponding eigenvectors of $A$ are exactly the $n-1$ initial guesses $w_k$ ($1 \le k \le n-1$), whose $n-1$ components are

$$w_{k,j} = \sin\!\Big(\frac{jk\pi}{n}\Big), \qquad 1 \le j \le n-1.$$

Then the eigenvalues of $R_\omega$ are

$$\lambda_k(R_\omega) = 1 - 2\omega\sin^2\!\Big(\frac{k\pi}{2n}\Big), \qquad 1 \le k \le n-1,$$

while the eigenvectors of $R_\omega$ are the same as those of $A$. Expand the initial error in the eigenvectors of $R_\omega$:

$$e_0 = \sum_{k=1}^{n-1} c_k w_k.$$

After $m$ iterations,

$$e_m = R_\omega^m e_0 = \sum_{k=1}^{n-1} c_k R_\omega^m w_k = \sum_{k=1}^{n-1} c_k \lambda_k^m w_k, \qquad \lambda_k := \lambda_k(R_\omega).$$

In this problem, where an eigenvector of $A$ is taken as the initial guess, choosing $w_k$ gives

$$e_m = \lambda_k^m w_k.$$
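The figures referenced in (i) and (ii) come from the appendices errorfig.m and errorfig2.m, which are not reproduced in this transcription. The following NumPy sketch of the same experiment (assumed parameters: $n = 64$, $\omega = 2/3$, 100 sweeps, a few sample wavenumbers $k$) checks that the error with initial guess $w_k$ decays like $\lambda_k(R_\omega)^m$:

```python
import numpy as np

n, omega, iters = 64, 2.0 / 3.0, 100
A = 2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)
R_omega = np.eye(n - 1) - 0.5 * omega * A      # weighted Jacobi iteration matrix

j = np.arange(1, n)
for k in (1, 3, 6):                            # a few wavenumbers for illustration
    w_k = np.sin(j * k * np.pi / n)            # eigenvector initial guess
    lam_k = 1.0 - 2.0 * omega * np.sin(k * np.pi / (2 * n)) ** 2

    x = w_k.copy()
    for _ in range(iters):
        x = R_omega @ x                        # e_{q+1} = R_omega e_q (exact solution is 0)

    # error after `iters` sweeps should shrink by lam_k ** iters
    ratio = np.linalg.norm(x, np.inf) / np.linalg.norm(w_k, np.inf)
    print(f"k={k}: observed {ratio:.3e}, predicted {abs(lam_k) ** iters:.3e}")
```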
Hence, when $w_k$ is the initial guess, the rate of convergence is governed by

$$\lambda_k = 1 - 2\omega\sin^2\!\Big(\frac{k\pi}{2n}\Big) \tag{3}$$
$$\phantom{\lambda_k} = 1 - \omega\Big(1 - \cos\frac{k\pi}{n}\Big). \tag{4}$$

The smaller $|\lambda_k|$ is, the faster the convergence. By (4) one can see that the iteration converges for $0 < \omega \le 1$. By (3), we have

$$\lambda_1 = 1 - 2\omega\sin^2\!\Big(\frac{\pi}{2n}\Big) \approx 1 - \frac{\omega\pi^2 h^2}{2}, \qquad h = \frac{1}{n}.$$

Thus when $n$ is large, say $n = 64$ as in this case, $\lambda_1$ is always close to $1$, no matter what value of $\omega \in (0,1]$ one takes. Therefore there is no optimal $\omega$ that reduces the error effectively for all $w_k$.

2. (p.275 Ex.) Solution. (a) First, the Givens rotation is represented by an $n\times n$ orthogonal matrix $G(i,j,\theta) = (g_{kl})$ whose nonzero elements are

$$g_{kk} = 1 \ (k \ne i,j), \qquad g_{ii} = g_{jj} = c, \qquad g_{ij} = s, \qquad g_{ji} = -s, \qquad c = \cos\theta, \ s = \sin\theta.$$

Second, when a Givens rotation matrix $G(i,j,\theta)$ multiplies another matrix $A$ from the left, $GA$, only rows $i$ and $j$ of $A$ are affected. Thus we restrict attention to the following problem: given $a$ and $b$, find $c = \cos\theta$ and $s = \sin\theta$ such that

$$\begin{pmatrix} c & s \\ -s & c \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} r \\ 0 \end{pmatrix}.$$

An obvious solution is

$$r = \sqrt{a^2 + b^2}, \qquad c = a/r, \qquad s = b/r.$$
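A minimal sketch of this $2\times 2$ building block (the values of $a$ and $b$ below are arbitrary illustrative choices, and the helper name givens is mine):

```python
import numpy as np

def givens(a, b):
    """Return (c, s, r) with [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)          # sqrt(a^2 + b^2), computed robustly
    if r == 0.0:
        return 1.0, 0.0, 0.0    # nothing to rotate
    return a / r, b / r, r

a, b = 3.0, 4.0                  # assumed example values
c, s, r = givens(a, b)
G = np.array([[c, s], [-s, c]])
print(G @ np.array([a, b]))      # -> approximately [5, 0]
```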
Here we use Givens rotations to obtain the QR factorization of the Hessenberg matrix $\tilde H_n$ in Algorithm 35.1. Let $M_k = G(k, k+1, \theta_k)$ be the $(n+1)\times(n+1)$ rotation whose angle $\theta_k$ is chosen so that

$$\begin{pmatrix} c(\theta_k) & s(\theta_k) \\ -s(\theta_k) & c(\theta_k) \end{pmatrix}\begin{pmatrix} h_{kk} \\ h_{k+1,k} \end{pmatrix} = \begin{pmatrix} \sqrt{h_{kk}^2 + h_{k+1,k}^2} \\ 0 \end{pmatrix}.$$

Then $M_n \cdots M_2 M_1 \tilde H_n$ gives the upper-triangular matrix $\tilde R_n$, and

$$\tilde H_n = M_1^T M_2^T \cdots M_n^T \tilde R_n = M \tilde R_n$$

gives the full QR factorization of $\tilde H_n$. Now we solve the least squares problem of Algorithm 35.1: find $y$ to minimize $\|\tilde H_n y - \|b\| e_1\|$. Following Algorithm 11.2, we obtain:

Algorithm I. Solving the least squares problem in Algorithm 35.1
1. Compute the full QR factorization $\tilde H_n = M \tilde R_n$.
2. Compute the vector $M^T(\|b\| e_1)$.
3. Solve the upper-triangular system $\tilde R_n y = M^T(\|b\| e_1)$ for $y$.

Here the QR factorization is implemented by the Givens rotations described above. To count the work for the algorithm in the box, we go into the details of the QR factorization by Givens rotations.

Algorithm II. Compute the QR factorization of a Hessenberg matrix by Givens rotations
1: for k = 1 : n do
2:   r = sqrt(h_{kk}^2 + h_{k+1,k}^2)
3:   c = h_{kk} / r
4:   s = h_{k+1,k} / r
5:   G_k = [c  s; -s  c]
6:   [H_{k,1:n}; H_{k+1,1:n}] = G_k [H_{k,1:n}; H_{k+1,1:n}]
7: end for
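The following is a runnable NumPy sketch of Algorithms I and II (not the assignment's original code; the helper name hessenberg_lstsq, the random Hessenberg test matrix, and the scalar $\beta = \|b\|$ are assumptions), checked against a generic least squares solver:

```python
import numpy as np

def hessenberg_lstsq(H, beta):
    """Solve min_y || H y - beta*e1 || for an (n+1) x n upper-Hessenberg H
    using Givens rotations (Algorithms I and II above)."""
    n = H.shape[1]
    R = H.astype(float).copy()
    g = np.zeros(n + 1)
    g[0] = beta                       # right-hand side  beta * e1
    for k in range(n):                # Algorithm II: zero the subdiagonal
        r = np.hypot(R[k, k], R[k + 1, k])
        c, s = R[k, k] / r, R[k + 1, k] / r
        G = np.array([[c, s], [-s, c]])
        R[k:k + 2, k:] = G @ R[k:k + 2, k:]   # rotate rows k, k+1
        g[k:k + 2] = G @ g[k:k + 2]           # apply the same rotation to M^T(beta*e1)
    # back-substitution on the top n x n upper-triangular block
    y = np.linalg.solve(np.triu(R[:n, :n]), g[:n])
    return y

rng = np.random.default_rng(0)
n = 5
H = np.triu(rng.standard_normal((n + 1, n)), k=-1)   # random upper-Hessenberg test matrix
beta = 2.0
y_ref = np.linalg.lstsq(H, beta * np.eye(n + 1)[:, 0], rcond=None)[0]
print(np.allclose(hessenberg_lstsq(H, beta), y_ref))  # -> True
```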
The work in one loop iteration of Algorithm II is dominated by its line 6: a multiplication of a $2\times 2$ matrix by a $2\times n$ matrix, which costs $6n$ flops. Thus the total work in Algorithm II is $O(n^2)$. The work for Algorithm I is dominated by the cost of the QR factorization and is therefore also $O(n^2)$.

(b) Note that $\tilde H_{n-1}$ is a submatrix of $\tilde H_n$, so one can obtain the QR factorization of $\tilde H_n$ from that of $\tilde H_{n-1}$, which was computed by Givens rotations. If the problem at step $n-1$ has already been solved, then when we deal with $\tilde H_n$ the $\tilde H_{n-1}$ block is already upper triangular; we only need to process the last column of $\tilde H_n$. Algorithm II can be modified as follows.

Algorithm III. Update the QR factorization of a Hessenberg matrix by Givens rotations
1: for k = 1 : n-1 do
2:   [h_{k,n}; h_{k+1,n}] = G_k [h_{k,n}; h_{k+1,n}]
3: end for
4: r = sqrt(h_{nn}^2 + h_{n+1,n}^2)
5: c = h_{nn} / r
6: s = h_{n+1,n} / r
7: G_n = [c  s; -s  c]
8: [h_{n,n}; h_{n+1,n}] = G_n [h_{n,n}; h_{n+1,n}]

Note that $G_k$ ($k = 1, 2, \dots, n-1$) was already constructed when $\tilde H_{n-1}$ was factorized. The work is dominated by the for loop, whose operation count is $6(n-1) = O(n)$ flops. Hence the total work for Algorithm I is improved to $O(n)$.

3. (p.275 Ex.) Solution. Suppose the initial guess is $x_0 = m$. Then $A(x - m) = b - Am$. Modify the right-hand side of $Ax = b$ to $\tilde b = b - Am$. The initial guess for $A\tilde x = \tilde b$ is then again $\tilde x_0 = 0$, with $\tilde r_0 = \tilde b$. Use Algorithm 35.1 to solve $A\tilde x = \tilde b$ (i.e., simply replace $b$ by $b - Am$) and set $x_n = \tilde x_n + m$, which solves the original problem.
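A quick numerical illustration of this shift on an assumed small test problem (a direct solve stands in for Algorithm 35.1, which would return the same $\tilde x_n$ in exact arithmetic):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)   # assumed well-conditioned test matrix
b = rng.standard_normal(n)
m = rng.standard_normal(n)                        # nonzero initial guess

b_tilde = b - A @ m                     # shifted right-hand side
x_tilde = np.linalg.solve(A, b_tilde)   # solve A x_tilde = b_tilde with zero initial guess
x = x_tilde + m                         # shift back

print(np.allclose(A @ x, b))            # -> True: x solves the original system
```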
4. Show that $\varphi(x)$ is minimized over $x = x_{n-1} + \alpha p_{n-1}$ (with free choice of $\alpha$) when $\alpha = \alpha_n = r_{n-1}^T r_{n-1} / p_{n-1}^T A p_{n-1}$.

Solution. Note that

$$\varphi(x) = \tfrac{1}{2}x^T A x - x^T b.$$

Denote $x(\alpha) = x_{n-1} + \alpha p_{n-1}$. For the $\alpha$ that minimizes $\varphi(x(\alpha))$, the chain rule gives

$$\frac{d}{d\alpha}\varphi(x(\alpha)) = 0 \;\Longrightarrow\; \varphi'(x)^T \frac{d}{d\alpha}x(\alpha) = 0 \;\Longrightarrow\; \Big(\tfrac{1}{2}(A^T x + Ax) - b\Big)^T p_{n-1} = 0 \;\Longrightarrow\; (Ax - b)^T p_{n-1} = 0,$$

since $A = A^T$. Then

$$\big(A(x_{n-1} + \alpha p_{n-1}) - b\big)^T p_{n-1} = 0$$
$$(Ax_{n-1} - b + \alpha A p_{n-1})^T p_{n-1} = 0$$
$$(-r_{n-1} + \alpha A p_{n-1})^T p_{n-1} = 0$$
$$p_{n-1}^T(-r_{n-1} + \alpha A p_{n-1}) = 0$$

$$\alpha = \frac{p_{n-1}^T r_{n-1}}{p_{n-1}^T A p_{n-1}}. \tag{5}$$

According to Algorithm 38.1, $p_{n-1} = r_{n-1} + \beta_{n-1} p_{n-2}$. Then

$$p_{n-1}^T r_{n-1} = r_{n-1}^T r_{n-1} + \beta_{n-1} p_{n-2}^T r_{n-1}. \tag{6}$$

Since $r_{n-1} \perp \mathcal{K}_{n-1}$ and $p_{n-2} \in \mathcal{K}_{n-1}$, we have

$$p_{n-2}^T r_{n-1} = 0. \tag{7}$$

By (5), (6), and (7),

$$\alpha = \frac{r_{n-1}^T r_{n-1}}{p_{n-1}^T A p_{n-1}}.$$
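A small numerical check, on an assumed symmetric positive definite test matrix, that this $\alpha$ minimizes $\varphi$ along the search line (here $p$ is taken equal to $r$, as in the first CG step, so $p^T r = r^T r$ holds exactly):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # assumed symmetric positive definite test matrix
b = rng.standard_normal(n)

x_prev = rng.standard_normal(n)      # stand-in for x_{n-1}
r_prev = b - A @ x_prev
p_prev = r_prev.copy()               # first CG step: p_0 = r_0

phi = lambda x: 0.5 * x @ A @ x - x @ b
alpha_formula = (r_prev @ r_prev) / (p_prev @ A @ p_prev)

# brute-force minimization of phi along x_prev + alpha * p_prev
alphas = np.linspace(alpha_formula - 1, alpha_formula + 1, 2001)
values = [phi(x_prev + a * p_prev) for a in alphas]
alpha_grid = alphas[int(np.argmin(values))]
print(abs(alpha_grid - alpha_formula) < 1e-3)   # -> True: grid minimum matches the formula
```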
5. (p.302 Ex.) Solution. By assumption, the 2-norm condition number is

$$\kappa = \lambda_{\max}/\lambda_{\min} = 24/1 = 24.$$

Since

$$\frac{\|e_n\|_A}{\|e_0\|_A} \le 2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^n,$$

the number of iterations required to guarantee the bound $10^{-6}$ can be estimated from

$$2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^n \le 10^{-6} \;\Longrightarrow\; n \ge \frac{\log(2\cdot 10^{6})}{\log\!\big((\sqrt{\kappa}+1)/(\sqrt{\kappa}-1)\big)} \approx 35.1.$$

Therefore, 36 steps of the CG iteration should be taken to be sure of reducing the initial error $\|e_0\|_A$ by a factor of $10^{6}$.

6. Give an argument for the following statement: if the condition number $\kappa$ is large but not too large, convergence of the CG iteration to a specified tolerance can be expected in $O(\sqrt{\kappa})$ iterations.

Solution. Suppose we wish to perform enough iterations to reduce the norm of the error by a factor of $\varepsilon$, i.e.,

$$\frac{\|e_n\|_A}{\|e_0\|_A} \le \varepsilon.$$

Since

$$\frac{\|e_n\|_A}{\|e_0\|_A} \le 2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^n,$$

the number of iterations required to achieve the bound $\varepsilon$ can be estimated from

$$2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^n \le \varepsilon \;\Longrightarrow\; n \ge \frac{\log(2/\varepsilon)}{\log\!\big((\sqrt{\kappa}+1)/(\sqrt{\kappa}-1)\big)}.$$

As $\kappa \to \infty$,

$$\log\frac{\sqrt{\kappa}+1}{\sqrt{\kappa}-1} = \log\Big(1 + \frac{2}{\sqrt{\kappa}-1}\Big) \sim \frac{2}{\sqrt{\kappa}},$$

so

$$n \approx \frac{\sqrt{\kappa}}{2}\log\frac{2}{\varepsilon} = O(\sqrt{\kappa}).$$

The assumption that $\kappa$ is not too large ensures the problem itself remains meaningful: if $\kappa$ is very large, the problem is ill-conditioned, and even when the relative error meets the specified tolerance the computed solution can be useless.
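Both estimates can be reproduced in a few lines (the helper name cg_iters and the sample condition numbers in the loop are assumed, for illustration only):

```python
import math

def cg_iters(kappa, tol):
    """Smallest n with 2*((sqrt(kappa)-1)/(sqrt(kappa)+1))**n <= tol."""
    s = math.sqrt(kappa)
    return math.ceil(math.log(2.0 / tol) / math.log((s + 1.0) / (s - 1.0)))

print(cg_iters(24, 1e-6))                      # -> 36, as in Problem 5

for kappa in (1e2, 1e4, 1e6):                  # assumed sample condition numbers
    n = cg_iters(kappa, 1e-6)
    print(f"kappa={kappa:.0e}: n={n}, n/sqrt(kappa)={n / math.sqrt(kappa):.2f}")
    # the ratio approaches 0.5*log(2/tol), about 7.25, confirming n = O(sqrt(kappa))
```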
7. (p.188 Ex.) Solution.

(a) True. If $\lambda$ is an eigenvalue of $A$, then $\det(A - \lambda I) = 0$. Consequently, $\det\big((A - \mu I) - (\lambda - \mu)I\big) = \det(A - \lambda I) = 0$, and thus $\lambda - \mu$ is an eigenvalue of $A - \mu I$.

(c) True. If $\lambda$ is an eigenvalue of the real matrix $A$, then $\lambda$ is a root of the real-coefficient polynomial $\det(A - xI)$, i.e., $\det(A - \lambda I) = 0$. The complex conjugate root theorem states that if $P$ is a polynomial in one variable with real coefficients and $a + bi$ is a root of $P$ with $a$ and $b$ real, then its complex conjugate $a - bi$ is also a root of $P$. Hence $\det(A - \bar\lambda I) = 0$, and therefore $\bar\lambda$ is also an eigenvalue of $A$.

(d) True. If $\lambda$ is an eigenvalue of $A$ and $A$ is nonsingular, then $\lambda \ne 0$. Let $v$ be a nonzero vector with $Av = \lambda v$. Then $A^{-1}(Av) = (A^{-1}A)v = v = \lambda^{-1}(\lambda v) = \lambda^{-1}(Av)$. Since $A$ is nonsingular and $v \ne 0$, $Av \ne 0$. Thus $Av$ is an eigenvector of $A^{-1}$ with eigenvalue $\lambda^{-1}$.

(e) False. Consider, for example,

$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$

All the eigenvalues of $A$ are zero, but $A \ne 0$.

(f) True. If $A$ is hermitian, i.e. $A = A^*$, then

$$A = Q\Lambda Q^* = Q\,|\Lambda|\,\big(\operatorname{sign}(\Lambda)Q^*\big), \tag{8}$$

where $|\Lambda|$ and $\operatorname{sign}(\Lambda)$ denote the diagonal matrices whose entries are the numbers $|\lambda_j|$ and $\operatorname{sign}(\lambda_j)$, respectively. Since $\operatorname{sign}(\Lambda)Q^*$ is unitary whenever $Q$ is unitary, (8) is an SVD of $A$, with the singular values equal to $|\lambda_j|$, the absolute values of the diagonal entries of $\Lambda$. If desired, these numbers can be put into nonincreasing order by inserting suitable permutation matrices as factors in the left-hand unitary matrix $Q$ and the right-hand unitary matrix $\operatorname{sign}(\Lambda)Q^*$ of (8).
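A quick numerical illustration of (f), using an assumed random hermitian test matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                         # hermitian test matrix

eigvals = np.linalg.eigvalsh(A)            # real eigenvalues of the hermitian matrix
singvals = np.linalg.svd(A, compute_uv=False)

# the singular values are the absolute values of the eigenvalues
print(np.allclose(np.sort(np.abs(eigvals)), np.sort(singvals)))   # -> True
```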
8. (p.195 Ex 25.2) Solution.

(a) In the case of linear convergence, $e_{k+1} \le C e_k$ with $C < 1$ for all sufficiently large $k$. Without loss of generality, we may assume that $e_{k+1} \le C e_k$ with $C < 1$ for all $k \ge 0$. By induction, $e_n \le C^n e_0$, so that

$$\frac{e_n}{e_0} \le C^n.$$

Denote $\epsilon = \epsilon_{\mathrm{machine}}$. Requiring $C^n \le \epsilon$ gives

$$n\log C \le \log\epsilon \;\Longrightarrow\; n \ge \frac{\log\epsilon}{\log C},$$

since $\log C < 0$. As $C < 1$, $C^n$ is monotonically decreasing, so to ensure accuracy $O(\epsilon_{\mathrm{machine}})$ one only needs this lower bound on $n$. Hence the number of steps needed is $O(\log(1/\epsilon_{\mathrm{machine}}))$. Because the work of each step is $O(1)$, the total work is $O(\log(1/\epsilon_{\mathrm{machine}}))$.

(b) In this case $e_{k+1} \le C (e_k)^{\alpha}$ with $\alpha > 1$. Again assume this holds for all $k \ge 0$. Since $(e_k)_{k\ge 0}$ is convergent, without loss of generality we may assume $0 < e_0 < 1$. By induction,

$$e_n \le C^{1+\alpha+\alpha^2+\cdots+\alpha^{n-1}} (e_0)^{\alpha^n}.$$

Then

$$\frac{e_n}{e_0} \le M (e_0)^{\alpha^n}, \qquad \text{where } M = C^{1+\alpha+\alpha^2+\cdots+\alpha^{n-1}}\, e_0^{-1} > 1.$$

Requiring $M (e_0)^{\alpha^n} \le \epsilon$ gives

$$(e_0)^{\alpha^n} \le \frac{\epsilon}{M}$$
$$\alpha^n \log(e_0) \le \log\frac{\epsilon}{M}$$
$$\alpha^n \ge \frac{\log\epsilon - \log M}{\log(e_0)} \ge \frac{\log\epsilon}{\log(e_0)}$$
$$n\log\alpha \ge \log\!\left(\frac{\log\epsilon}{\log(e_0)}\right), \qquad \text{since } \log(e_0) < 0.$$

Since $\alpha > 1$, $\log\alpha > 0$. Thus

$$n \ge \frac{\log\!\big(\log\epsilon/\log(e_0)\big)}{\log\alpha} = \frac{\log(-\log\epsilon) - \log(-\log(e_0))}{\log\alpha} = O\big(\log(-\log\epsilon)\big).$$

Consequently, the total work requirement in this case is $O\big(\log(-\log\epsilon_{\mathrm{machine}})\big)$.
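A small sketch contrasting the two work estimates; the constants ($C = 1/2$, $\alpha = 2$) and the starting error $e_0 = 1/2$ are assumed purely for illustration:

```python
import numpy as np

eps = np.finfo(float).eps          # machine epsilon, about 2.2e-16

def steps_until(update, e0, tol):
    """Count iterations of e <- update(e) until e <= tol."""
    e, n = e0, 0
    while e > tol:
        e = update(e)
        n += 1
    return n

linear = steps_until(lambda e: 0.5 * e, 0.5, eps)       # e_{k+1} = C e_k with C = 0.5
quadratic = steps_until(lambda e: e ** 2, 0.5, eps)     # e_{k+1} = e_k^2 (C = 1, alpha = 2)

print(linear)      # about 51 steps, roughly log2(1/eps): O(log(1/eps))
print(quadratic)   # only about 6 steps: O(log(log(1/eps)))
```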