Math 502 Fall 2005 Solutions to Homework 5
1. The i-th row of D^{-1}(L+U) is

    r_i = [ a_{i,1}/a_{i,i}  ...  a_{i,i-1}/a_{i,i}  0  a_{i,i+1}/a_{i,i}  ...  a_{i,M}/a_{i,i} ].

Since A is strictly row diagonally dominant,

    \|r_i\|_1 = \sum_{j \ne i} |a_{i,j}| / |a_{i,i}| < 1,   i = 1, ..., M.

Therefore \|D^{-1}(L+U)\|_\infty = \max_{1 \le i \le M} \|r_i\|_1 < 1. Let x = A^{-1}b. Then x = -D^{-1}(L+U)x + D^{-1}b, and hence the differences e^{(n)} = x^{(n)} - x satisfy e^{(n+1)} = -D^{-1}(L+U)e^{(n)}. By induction

    \|e^{(n)}\|_\infty \le \|D^{-1}(L+U)\|_\infty^n \|e^{(0)}\|_\infty \to 0   as n \to \infty.

Therefore x^{(n)} \to x as n \to \infty.

2. (⇒) Suppose \hat{x} \in K minimizes \phi(x) over K. Let y \in K be any other point of K and consider F(s) = \phi(\hat{x} + sy), s \in R. Clearly F(0) = \phi(\hat{x}) \le F(s) for all s. Thus F'(0) = 0. Since F'(s) = y^T \nabla\phi(\hat{x} + sy), this shows that for every y \in K, y^T \nabla\phi(\hat{x}) = 0. Hence \nabla\phi(\hat{x}) \perp K.

(⇐) Suppose \nabla\phi(\hat{x}) \perp K. Let x \in K be another point of K. Then x - \hat{x} = y \in K. Using the fact that \nabla\phi(\hat{x}) = A\hat{x} - b we have

    \phi(x) = \phi(\hat{x} + y) = (1/2)(\hat{x} + y)^T A(\hat{x} + y) - (\hat{x} + y)^T b
            = (1/2)\hat{x}^T A\hat{x} + y^T A\hat{x} + (1/2)y^T A y - \hat{x}^T b - y^T b
            = \phi(\hat{x}) + y^T \nabla\phi(\hat{x}) + (1/2)y^T A y.

Since \nabla\phi(\hat{x}) \perp K and A is positive definite, this shows \phi(\hat{x}) \le \phi(x) for all x \in K.

3. The (i,j) entry (AU)_{i,j} of the product is computed by the expression

    (AU)_{i,j} = -u_{i-1,j} - u_{i+1,j} + 4u_{i,j} - u_{i,j-1} - u_{i,j+1}.        (1)

If we assume, for convenience in counting, u_{0,j} = u_{M+1,j} = 0 for j = 1, ..., M and u_{i,0} = u_{i,M+1} = 0 for i = 1, ..., M, then (1) is valid for 1 \le i, j \le M. Since the evaluation of (1) requires 1 multiplication and 4 subtractions (inverse additions), computing all components in this way requires M^2 multiplications and 4M^2 subtractions. If we now eliminate the padded zeros, of which there are 4M, this reduces the number of subtractions to 4M^2 - 4M.

It is also easy to count flops using the block structure of A. Multiplication by T requires M multiplications and 2M - 2 subtractions, and this is done M times. The off-diagonal -I blocks require M subtractions for the first and last blocks, and 2M subtractions for the other M - 2 blocks. The total is the same: M^2 multiplications and 4M^2 - 4M subtractions.

Earlier in the semester we showed that if A is an (m x m) matrix and x is an (m x 1) column vector, then computing Ax requires m^2 multiplications and m^2 - m additions. In the present context m = M^2, so computing BU requires M^4 multiplications and M^4 - M^2 additions.
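For problem 3, here is a small MATLAB sketch of evaluating (1) on a padded array; the variable names are illustrative and not taken from the assignment:

M = 4;
U = rand(M,M);                       % an arbitrary grid function
Upad = zeros(M+2,M+2);               % pad with the boundary zeros used in the count
Upad(2:M+1,2:M+1) = U;
AU = zeros(M,M);
for i = 1:M
  for j = 1:M                        % 1 multiplication and 4 subtractions per entry
    AU(i,j) = 4*Upad(i+1,j+1) - Upad(i,j+1) - Upad(i+2,j+1) ...
              - Upad(i+1,j) - Upad(i+1,j+2);
  end
end                                  % M^2 multiplications, 4M^2 subtractions in all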
4. Consider f(x) = 1 - cos(x), x \in [0, \pi]. Using basic calculus we find f(x) is strictly increasing with f(0) = 0 and f(\pi) = 2. Also, if 0 < a < b < \pi then f(a) = min{ f(x) : x \in [a, b] } and f(b) = max{ f(x) : x \in [a, b] }. From these observations we obtain

    1 - cos(\pi/(M+1)) \le 1 - cos(i\pi/(M+1)) \le 1 - cos(M\pi/(M+1)),   i = 1, ..., M.

By a trigonometric identity

    cos(M\pi/(M+1)) = cos(\pi - \pi/(M+1)) = -cos(\pi/(M+1)).

Therefore

    \lambda_{min} = min_{1 \le i,j \le M} \lambda_{i,j} = 4(1 - cos(\pi/(M+1))),
    \lambda_{max} = max_{1 \le i,j \le M} \lambda_{i,j} = 4(1 - cos(M\pi/(M+1))) = 4(1 + cos(\pi/(M+1))).

Hence the spectral radius and 2-norm condition number are

    \rho(A) = \lambda_{max} = 4(1 + cos(\pi/(M+1))),
    \kappa_2(A) = \lambda_{max}/\lambda_{min} = (1 + cos(\pi/(M+1))) / (1 - cos(\pi/(M+1))).

Since cos x \approx 1 - x^2/2 for x near 0, it follows that for M large

    1 + cos(\pi/(M+1)) \approx 2 - \pi^2/(2(M+1)^2),   1 - cos(\pi/(M+1)) \approx \pi^2/(2(M+1)^2),

so that

    \rho(A) \approx 8 - 2\pi^2/(M+1)^2,   \kappa_2(A) \approx 4(M+1)^2/\pi^2.

5. For convenience we simplify the notation describing the iteration to

    x_{k+1} = (I - A)x_k + v,   with x_1 = v, where Av = \lambda v.

Then x_1 = v, x_2 = (I - A)v + v = (1 - \lambda)v + v, and

    x_3 = (I - A)[(1 - \lambda)v + v] + v = (1 - \lambda)^2 v + (1 - \lambda)v + v.

Clearly each of these has the form x_k = q_k(\lambda)v, where q_k(\lambda) is a polynomial of degree k - 1. We show by induction that this holds in general with

    q_k(\lambda) = 1 + (1 - \lambda) + ... + (1 - \lambda)^{k-1} = \sum_{j=0}^{k-1} (1 - \lambda)^j.

We have already verified this for k = 1, 2, 3. Suppose it is valid for x_k. Then

    x_{k+1} = (I - A) q_k(\lambda) v + v = q_k(\lambda)(1 - \lambda)v + v,

and it follows that q_{k+1}(\lambda) = 1 + (1 - \lambda)q_k(\lambda) = \sum_{j=0}^{k} (1 - \lambda)^j. The sequence {x_k} will converge if and only if

    lim_{k \to \infty} q_k(\lambda) = \sum_{j=0}^{\infty} (1 - \lambda)^j

is a convergent power series. Since this is a geometric series with ratio r = 1 - \lambda, it converges if and only if |1 - \lambda| < 1, in which case the sum is 1/\lambda.
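As a quick numerical aside, the criterion |1 - \lambda| < 1 is easy to check in MATLAB; the diagonal test matrix and starting vector below are arbitrary choices, not part of the problem:

A = diag([0.5 1.2 1.9]);      % eigenvalues in (0,2), so |1 - lambda| < 1 for each
v = [1; 1; 1];
x = v;                        % x_1 = v
for k = 1:200
  x = (eye(3) - A)*x + v;     % x_{k+1} = (I - A)x_k + v
end
disp(norm(x - A\v))           % small: the iterates approach A^{-1}v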
In the context of the original problem \lambda is one of the eigenvalues of A, all of which lie in the interval (0, 8). If \lambda_{i,j} \in (0, 2) and F = V_{i,j}, then the iteration will converge. However, the majority of the eigenvalues lie in the interval (2, 8), and if F is a corresponding eigenvector then the iteration will not converge. Any F \in R^m can be represented in terms of the basis of eigenvectors {V_{i,j}}_{i,j=1}^{M}. From our work with projections we know

    F = \sum_{i,j=1}^{M} (V_{i,j}^T F) V_{i,j}.

Using this F in the iteration scheme we obtain

    U^{(k)} = h^2 \sum_{i,j=1}^{M} (V_{i,j}^T F) q_k(\lambda_{i,j}) V_{i,j}.

If V_{i,j}^T F \ne 0 for some eigenvector with eigenvalue \lambda_{i,j} \in (2, 8), then the sequence will not converge. In particular, if F is the column vector of ones the sequence will not converge. (You need explicit formulas for the eigenvectors to actually show this. Since I didn't provide them, the general argument is sufficient.)

6. The diagonal matrix D = 4I, where I is the (m x m) identity, is the diagonal portion of A in the splitting A = L + D + U. Using this we find L + U = A - 4I and D^{-1}(L + U) = (1/4)(A - 4I). Therefore the eigenvalues of D^{-1}(L + U) are (\lambda_{i,j} - 4)/4, where \lambda_{i,j} is an eigenvalue of A. From problem 4 we know the minimum and maximum eigenvalues of A satisfy 0 < \lambda_{min} < 4 < \lambda_{max} < 8, so the eigenvalues of D^{-1}(L + U) lie in (-1, 1), with the smallest and largest being

    (1/4)(\lambda_{min} - 4) = -cos(\pi/(M+1)),   (1/4)(\lambda_{max} - 4) = cos(\pi/(M+1)).

Clearly these have the same absolute value, so the spectral radius of D^{-1}(L + U) is

    \rho(D^{-1}(L + U)) = cos(\pi/(M+1)).

Note that in general \rho(A) = \rho(-A). Using calculus it is easy to show cos x = 1 - x^2/2 + O(x^4) as x \to 0. Hence

    \rho(D^{-1}(L + U)) = 1 - \pi^2/(2(M+1)^2) + O((M+1)^{-4})   as M \to \infty.

Thus \rho(D^{-1}(L + U)) approaches 1 quadratically in h = (M+1)^{-1}. Below is a table showing the dependence of \rho(D^{-1}(L + U)) on M.

    M      m = M^2      \rho(D^{-1}(L + U))
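The numerical entries of the table are not reproduced here, but they follow directly from the formula above; a minimal MATLAB sketch (the values of M are illustrative):

for M = [10 20 30]               % illustrative grid sizes
  fprintf('M = %3d   m = %5d   rho = %.6f\n', M, M^2, cos(pi/(M+1)));
end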
7. From problem 4 we know \kappa_2(A) = (1 + cos\theta)/(1 - cos\theta), where \theta = \pi/(M+1). Using the trigonometric identities 1 + cos\theta = 2cos^2(\theta/2) and 1 - cos\theta = 2sin^2(\theta/2), we have

    \sqrt{\kappa_2(A)} = cos(\theta/2)/sin(\theta/2)

since \theta \in (0, \pi). Therefore

    C = (\sqrt{\kappa_2(A)} - 1) / (\sqrt{\kappa_2(A)} + 1)
      = (cos(\theta/2) - sin(\theta/2)) / (cos(\theta/2) + sin(\theta/2))
      = (cos(\pi/(2(M+1))) - sin(\pi/(2(M+1)))) / (cos(\pi/(2(M+1))) + sin(\pi/(2(M+1)))).

By using calculus we obtain

    (cos(\theta/2) - sin(\theta/2)) / (cos(\theta/2) + sin(\theta/2)) = 1 - \theta + O(\theta^2)   as \theta \to 0+.

Hence

    C = (\sqrt{\kappa_2(A)} - 1) / (\sqrt{\kappa_2(A)} + 1) = 1 - \pi/(M+1) + O((M+1)^{-2})   as M \to \infty.

Thus C approaches 1 linearly in h = (M+1)^{-1}. Below is a table showing the dependence of C on M.

    M      m = M^2      C

8. A script for Jacobi iteration is:

% Script File: Jacobi

amp = 6;
M = 3;
h = 1/(M+1);
max_loops = 5*M*M;
epsilon = h*h;
rho = cos(pi/(M+1));

x = (1:M)*h;
y = x;

u = zeros(M,M);                    % initialize exact solution and
f = zeros(M,M);                    % right hand side of differential equation
for i = 1:M
  for j = 1:M
    u(i,j) = amp*x(i)*(1-x(i))*y(j)*(1-y(j));
    f(i,j) = 2*amp*(x(i)*(1-x(i)) + y(j)*(1-y(j)));
  end
end
5 5 U = zeros(m+2,m+2) V = zeros(m+2,m+2) F = (h^2)*f % approximate solution % temporary storage % right hand side of linear system err = zeros(max_loops,) e_start = max(max(abs(u - U(2:M+,2:M+)))) % data for plotting k = e = e_start while ((e > epsilon) & (k < max_loops)) k = k + for i = 2:M+ % compute next Jacobi iterate for j = 2:M+ V(i,j) =.25*(U(i-,j) + U(i+,j) +... U(i,j-) + U(i,j+) + F(i-,j-)) U = V e = max(max(abs(u - U(2:M+,2:M+)))) % compute norm of error err(k) = e if (e <= epsilon) disp(sprintf(' convergence achieved after %2d interations',k)) else disp(' maximum number of loops computed without convergence') plot(:k,[e_start err(:k)]) xlabel('') ylabel('max norm error') hold on plot(:k,[e_start,rho.^(:k)*e_start],'-.') leg('','\rho^k') plot(k,err(k),'*') hold off
9. A script for Conjugate Gradient iteration is:

% Script File: mycg

amp = 6;
M = 5;
h = 1/(M+1);
max_loops = 5*M*M;
epsilon = h*h;
alfa = pi/(M+1);
c = cos(alfa/2);
s = sin(alfa/2);
CKA = (c - s)/(c + s);

x = (1:M)*h;
y = x;

u = zeros(M,M);                    % initialize exact solution and
f = zeros(M,M);                    % right hand side of differential equation
for i = 1:M
  for j = 1:M
    u(i,j) = amp*x(i)*(1-x(i))*y(j)*(1-y(j));
    f(i,j) = 2*amp*(x(i)*(1-x(i)) + y(j)*(1-y(j)));
  end
end

U_exact = zeros(M+2,M+2);
U_exact(2:M+1,2:M+1) = u;          % pad u with zeros for computing A-norm

U = zeros(M+2,M+2);                % approximate solution
R = zeros(M+2,M+2);                % residual vector
P = zeros(M+2,M+2);                % search direction
V = zeros(M+2,M+2);                % temporary storage

R(2:M+1,2:M+1) = h*h*f;            % initialization of R and P
P = R;
rho_0 = norm(R,'fro');             % 2-norm of previous residual. This is
rho_0 = rho_0*rho_0;               % used to avoid re-computing this quantity

err = zeros(max_loops,1);
e_start = A_norm(U_exact - U);     % data for plotting

k = 0;
e = e_start;
while ((e > epsilon) & (k < max_loops))
  k = k + 1;
  for i = 2:M+1                    % compute V = A*P
    for j = 2:M+1
      V(i,j) = 4*P(i,j) - P(i-1,j) - ...
               P(i+1,j) - P(i,j-1) - P(i,j+1);
    end
  end

  sum = 0;
  for i = 2:M+1
    for j = 2:M+1
      sum = sum + P(i,j)*V(i,j);   % sum = A-norm of P squared
    end
  end

  alfa = rho_0/sum;
  U = U + alfa*P;                  % update U
  R = R - alfa*V;                  % and the residual R
  rho_1 = norm(R,'fro');           % 2-norm of current residual
  rho_1 = rho_1*rho_1;
  beta = rho_1/rho_0;
  P = R + beta*P;                  % update search direction
  rho_0 = rho_1;                   % save to avoid re-computing

  e = A_norm(U_exact - U);
  err(k) = e;
end

if (e <= epsilon)
  disp(sprintf(' convergence achieved after %2d iterations',k))
else
  disp(' maximum number of loops computed without convergence')
end

plot(0:k,[e_start; err(1:k)])
xlabel('')
ylabel('A-norm of error')
hold on
plot(0:k,[2*e_start, 2*(CKA.^(1:k))*e_start],'-.')
legend('','2C^k')
plot(k,err(k),'*')
hold off
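The conjugate gradient script calls a helper A_norm that is not listed here. A minimal sketch consistent with how the script stores grid functions (padded (M+2)-by-(M+2) arrays and the 5-point stencil) might look like the following; treat it as an assumed implementation, not necessarily the one distributed with the assignment:

function a = A_norm(W)
% A_NORM  A-norm of a padded grid function W: sqrt of the sum of W.*(A*W)
% over the interior points. Assumed implementation; the helper used with
% the original script was not listed.
M  = size(W,1) - 2;
AW = 4*W(2:M+1,2:M+1) - W(1:M,2:M+1) - W(3:M+2,2:M+1) - ...
     W(2:M+1,1:M) - W(2:M+1,3:M+2);
a  = sqrt(sum(sum(W(2:M+1,2:M+1).*AW)));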
Figure 1. The errors e_n = \|u - U_n\|_\infty in the Jacobi iterates U_n for the model problem with M = , compared with the sequence \rho^n e_0, where \rho = \rho(D^{-1}(L+U)). Convergence was achieved after 46 iterations.

Figure 2. The errors e_n = \|u - U_n\|_\infty in the Jacobi iterates U_n for the model problem with M = 2, compared with the sequence \rho^n e_0, where \rho = \rho(D^{-1}(L+U)). Convergence was achieved after 6 iterations.
Figure 3. The errors e_n = \|u - U_n\|_\infty in the Jacobi iterates U_n for the model problem with M = 3, compared with the sequence \rho^n e_0, where \rho = \rho(D^{-1}(L+U)). Convergence was achieved after 45 iterations.

Figure 4. The errors e_n = \|u - U_n\|_A in the conjugate gradient iterates U_n for the model problem with M = , compared with the sequence 2C^n e_0, where C = (\sqrt{\kappa_2(A)} - 1)/(\sqrt{\kappa_2(A)} + 1). Convergence was achieved after 7 iterations.
Figure 5. The errors e_n = \|u - U_n\|_A in the conjugate gradient iterates U_n for the model problem with M = 3, compared with the sequence 2C^n e_0, where C = (\sqrt{\kappa_2(A)} - 1)/(\sqrt{\kappa_2(A)} + 1). Convergence was achieved after 26 iterations.

Figure 6. The errors e_n = \|u - U_n\|_A in the conjugate gradient iterates U_n for the model problem with M = 5, compared with the sequence 2C^n e_0, where C = (\sqrt{\kappa_2(A)} - 1)/(\sqrt{\kappa_2(A)} + 1). Convergence was achieved after 47 iterations.
More information