Chapter 3: Newton methods

3.1 Linear and nonlinear eigenvalue problems

When solving linear eigenvalue problems we want to find values $\lambda \in \mathbb{C}$ such that $\lambda I - A$ is singular. Here $A \in \mathbb{F}^{n\times n}$ is a given real or complex matrix. Equivalently, we want to find values $\lambda \in \mathbb{C}$ such that there is a nontrivial (nonzero) $x$ that satisfies

(3.1) \[ (A - \lambda I)x = 0 \quad\Longleftrightarrow\quad Ax = \lambda x. \]

In the linear eigenvalue problem (3.1) the eigenvalue $\lambda$ appears linearly. However, since the unknown $\lambda$ is multiplied with the unknown vector $x$, the problem is in fact nonlinear. We have $n+1$ unknowns $\lambda, x_1, \ldots, x_n$ that are not uniquely defined by the $n$ equations in (3.1). We noticed earlier that the length of the eigenvector $x$ is not determined. Therefore, we add to (3.1) a further equation that fixes it. The straightforward condition is

(3.2) \[ \|x\|_2^2 = x^* x = 1, \]

which determines $x$ up to a complex scalar of modulus 1, in the real case up to $\pm 1$. Another way to normalize $x$ is to request that

(3.3) \[ c^T x = 1 \]

for some nonzero vector $c$. Eq. (3.3) is linear in $x$ and thus simpler. However, the combined equations (3.1), (3.3) are nonlinear anyway. Furthermore, $c$ must be chosen such that it has a strong component in the (unknown) direction of the sought eigenvector. This requires some knowledge about the solution.

In nonlinear eigenvalue problems we have to find values $\lambda \in \mathbb{C}$ such that

(3.4) \[ A(\lambda)x = 0, \]

where $A(\lambda)$ is a matrix whose elements depend on $\lambda$ in a nonlinear way. An example is a matrix polynomial,

(3.5) \[ A(\lambda) = \sum_{k=0}^{d} \lambda^k A_k, \qquad A_k \in \mathbb{F}^{n\times n}. \]

The linear eigenvalue problem (3.1) is the special case with $d = 1$: $A(\lambda) = A_0 - \lambda A_1$ with $A_0 = A$, $A_1 = I$.
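As a small illustration of the two normalizations, the following sketch (the matrix $A$ and the vector $c$ are made-up data) rescales an eigenvector returned by numpy, which already satisfies the unit-norm condition, so that it satisfies the linear condition $c^T x = 1$ instead:

```python
import numpy as np

# Made-up example data: a symmetric 2x2 matrix and a normalization vector c.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, V = np.linalg.eig(A)
x = V[:, 0]                # numpy returns eigenvectors with unit 2-norm
c = np.array([1.0, 0.0])
x_c = x / (c @ x)          # rescaled so that c^T x_c = 1
print(c @ x_c)             # 1.0
```

Note the caveat from the text: if $c$ happened to be (nearly) orthogonal to the eigenvector, the division would break down, which is why $c$ needs a strong component in the direction of the sought eigenvector.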
Quadratic eigenvalue problems are matrix polynomials in which $\lambda$ appears squared [6],

(3.6) \[ Ax + \lambda Kx + \lambda^2 Mx = 0. \]

Matrix polynomials can be linearized, i.e., they can be transformed into a linear eigenvalue problem of size $dn$, where $d$ is the degree of the polynomial and $n$ is the size of the matrix. Thus, the quadratic eigenvalue problem (3.6) can be transformed into a linear eigenvalue problem of size $2n$. Setting $y = \lambda x$ we get

\[ \begin{pmatrix} A & O \\ O & I \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \lambda \begin{pmatrix} -K & -M \\ I & O \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \]

or

\[ \begin{pmatrix} A & K \\ O & I \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \lambda \begin{pmatrix} O & -M \\ I & O \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. \]

Notice that other linearizations are possible [2, 6]. Notice also the relation with the transformation of higher order to first order ODEs [5, p. 478].

Instead of looking at the nonlinear system (3.1) (complemented with (3.3) or (3.2)) we may look at the nonlinear scalar equation

(3.7) \[ f(\lambda) := \det A(\lambda) = 0 \]

and apply some zero finder. Here the question arises how to compute $f(\lambda)$ and in particular $f'(\lambda) = \frac{d}{d\lambda}\det A(\lambda)$. This is what we discuss next.

3.2 Zeros of the determinant

We first consider the computation of eigenvalues, and subsequently eigenvectors, by means of computing zeros of the determinant $\det A(\lambda)$. Gaussian elimination with partial pivoting (GEPP) applied to $A(\lambda)$ provides the decomposition

(3.8) \[ P(\lambda)A(\lambda) = L(\lambda)U(\lambda), \]

where $P(\lambda)$ is the permutation matrix due to partial pivoting, $L(\lambda)$ is a lower unit triangular matrix, and $U(\lambda)$ is an upper triangular matrix. From well-known properties of the determinant function, equation (3.8) gives

\[ \det P(\lambda)\, \det A(\lambda) = \det L(\lambda)\, \det U(\lambda). \]

Taking the particular structures of the factors in (3.8) into account, we get

(3.9) \[ f(\lambda) = \det A(\lambda) = \pm \prod_{i=1}^{n} u_{ii}(\lambda). \]

The derivative of $\det A(\lambda)$ is

(3.10) \[ f'(\lambda) = \pm \sum_{i=1}^{n} u_{ii}'(\lambda) \prod_{j \ne i} u_{jj}(\lambda) = \pm \sum_{i=1}^{n} \frac{u_{ii}'(\lambda)}{u_{ii}(\lambda)} \prod_{j=1}^{n} u_{jj}(\lambda) = \sum_{i=1}^{n} \frac{u_{ii}'(\lambda)}{u_{ii}(\lambda)}\, f(\lambda). \]

How can we compute the derivatives $u_{ii}'$ of the diagonal elements of $U(\lambda)$?
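The linearization above can be checked numerically. The following sketch (random made-up matrices $A$ and $K$, and $M = I$ so that the right-hand matrix of the pencil is invertible) builds the first $2n \times 2n$ pencil and verifies that each pencil eigenvalue makes $A + \lambda K + \lambda^2 M$ singular:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))   # made-up data
K = rng.standard_normal((n, n))
M = np.eye(n)                     # identity mass matrix keeps the pencil simple

# Companion-type linearization from the text: with y = lambda*x,
#   [A 0; 0 I][x; y] = lambda [-K -M; I 0][x; y]
I, O = np.eye(n), np.zeros((n, n))
P = np.block([[A, O], [O, I]])
Q = np.block([[-K, -M], [I, O]])

# Q is invertible whenever M is, so the pencil eigenvalues are the
# eigenvalues of Q^{-1} P.
lam = np.linalg.eigvals(np.linalg.solve(Q, P))

# Each lambda should (numerically) make A + lam*K + lam^2*M singular:
# its smallest singular value should be tiny.
residuals = [np.linalg.svd(A + l * K + l**2 * M, compute_uv=False)[-1]
             for l in lam]
print(max(residuals))
```

The $2n = 6$ pencil eigenvalues are exactly the eigenvalues of the quadratic problem; the smallest singular value of $A + \lambda K + \lambda^2 M$ at each of them is at roundoff level.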
3.2.1 Algorithmic differentiation

A clever way to compute derivatives of a function is algorithmic differentiation, see e.g. [1]. Here we assume that we have an algorithm available that computes the value $f(\lambda)$ of a function $f$, given the input argument $\lambda$. By algorithmic differentiation a new algorithm is obtained that computes, besides $f(\lambda)$, the derivative $f'(\lambda)$.

The idea is easily explained by means of the Horner scheme to evaluate polynomials. Let

\[ f(z) = \sum_{i=0}^{n} c_i z^i \]

be a polynomial of degree $n$. $f(z)$ can be written in the form

\[ f(z) = c_0 + z(c_1 + z(c_2 + \cdots + z(c_n)\cdots)), \]

which gives rise to the recurrence

\[ p_n := c_n, \qquad p_i := z\,p_{i+1} + c_i, \quad i = n-1, n-2, \ldots, 0, \qquad f(z) := p_0. \]

Note that each of the $p_i$ can be considered as a function (polynomial) in $z$. We use the above recurrence to determine the derivatives $dp_i$:

\[ dp_n := 0, \quad p_n := c_n, \]
\[ dp_i := p_{i+1} + z\,dp_{i+1}, \quad p_i := z\,p_{i+1} + c_i, \quad i = n-1, n-2, \ldots, 0, \]
\[ f'(z) := dp_0, \quad f(z) := p_0. \]

Here, to each assignment in the original algorithm its derivative is prepended.

We can proceed in a similar fashion for computing $\det A(\lambda)$. We need to be able to compute the derivatives $a_{ij}'$, though. Then we can differentiate each single assignment in the GEPP algorithm. If we restrict ourselves to the standard eigenvalue problem $Ax = \lambda x$, then $A(\lambda) = A - \lambda I$ and $a_{ij}' = -\delta_{ij}$, with $\delta$ the Kronecker delta.

3.2.2 Hyman's algorithm

In a Newton iteration we have to compute the determinant for possibly many values $\lambda$. Using the factorization (3.8) leads to computational costs of $\frac{2}{3}n^3$ flops (floating point operations) for each factorization, i.e., per iteration step. If this algorithm were used to compute all eigenvalues, an excessive number of flops would be required. Can we do better?

The strategy is to transform $A$ by a similarity transformation to a Hessenberg matrix, i.e., a matrix $H$ whose entries below the first subdiagonal are zero: $h_{ij} = 0$ for $i > j+1$.
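The differentiated Horner recurrence can be written down directly; a minimal sketch:

```python
def horner_with_derivative(c, z):
    """Evaluate p(z) = sum_i c[i] * z**i and its derivative p'(z) by the
    Horner scheme, where each assignment of the original recurrence is
    preceded by its differentiated counterpart."""
    n = len(c) - 1
    p, dp = c[n], 0.0
    for i in range(n - 1, -1, -1):
        dp = p + z * dp      # derivative of the assignment below
        p = z * p + c[i]
    return p, dp

# p(z) = 1 + 2z + 3z^2, so p(2) = 17 and p'(2) = 2 + 6z = 14
print(horner_with_derivative([1, 2, 3], 2.0))  # (17.0, 14.0)
```

The derivative comes essentially for free: one extra multiply-add per coefficient.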
Any matrix A can be transformed into a similar Hessenberg matrix H by means of a sequence of elementary unitary matrices called Householder transformations. The details are given in Section 4.3.
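The details of that reduction belong to Section 4.3; for concreteness, a compact numpy sketch of a Householder-based reduction to Hessenberg form (real case, unoptimized, for illustration only) might look as follows:

```python
import numpy as np

def hessenberg(A):
    """Reduce A to Hessenberg form H = S^T A S by Householder
    reflections (a sketch; see Section 4.3 for the details)."""
    H = A.astype(float).copy()
    n = H.shape[0]
    S = np.eye(n)
    for k in range(n - 2):
        v = H[k + 1:, k].copy()
        alpha = -np.sign(v[0] if v[0] != 0 else 1.0) * np.linalg.norm(v)
        if alpha == 0:
            continue                 # column already zero below the diagonal
        v[0] -= alpha
        v /= np.linalg.norm(v)
        # apply the reflector P = I - 2 v v^T from the left and the right
        H[k + 1:, :] -= 2.0 * np.outer(v, v @ H[k + 1:, :])
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
        S[:, k + 1:] -= 2.0 * np.outer(S[:, k + 1:] @ v, v)
    return H, S

A = np.array([[4., 1., 2., 3.],
              [1., 3., 1., 2.],
              [2., 1., 3., 1.],
              [3., 2., 1., 4.]])
H, S = hessenberg(A)
print(np.allclose(np.tril(H, -2), 0.0), np.allclose(S.T @ A @ S, H))
```

The first reflector only touches rows and columns $2,\ldots,n$, so the zeros created in column 1 survive the right multiplication; the same holds at every later step.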
Let $S^* A S = H$, where $S$ is unitary; $S$ is the product of the just mentioned Householder transformations. Then

\[ Ax = \lambda x \quad\Longleftrightarrow\quad Hy = \lambda y, \qquad x = Sy. \]

So $A$ and $H$ have equal eigenvalues ($A$ and $H$ are similar) and the eigenvectors are transformed by $S$.

We now assume that $H$ is unreduced, i.e., $h_{i+1,i} \ne 0$ for all $i$; otherwise we can split $Hx = \lambda x$ into smaller problems. Let $\lambda$ be an eigenvalue of $H$ and

(3.11) \[ (H - \lambda I)x = 0, \]

i.e., $x$ is an eigenvector of $H$ associated with the eigenvalue $\lambda$. Then the last component of $x$ cannot be zero, $x_n \ne 0$. The proof is by contradiction. Let $x_n = 0$. Then (for $n = 4$)

\[ \begin{pmatrix} h_{11}-\lambda & h_{12} & h_{13} & h_{14} \\ h_{21} & h_{22}-\lambda & h_{23} & h_{24} \\ & h_{32} & h_{33}-\lambda & h_{34} \\ & & h_{43} & h_{44}-\lambda \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}. \]

The last equation reads $h_{n,n-1}x_{n-1} + (h_{nn} - \lambda)\cdot 0 = 0$, from which $x_{n-1} = 0$ follows, since we assumed that $H$ is unreduced, i.e., that $h_{n,n-1} \ne 0$. In the same way we obtain $x_{n-2} = 0, \ldots, x_1 = 0$. But the zero vector cannot be an eigenvector. Therefore, $x_n$ must not be zero. Without loss of generality we can set $x_n = 1$.

We continue to expose the procedure for problem size $n = 4$. If $\lambda$ is an eigenvalue, then there are $x_i$, $1 \le i < n$, such that

(3.12) \[ \begin{pmatrix} h_{11}-\lambda & h_{12} & h_{13} & h_{14} \\ h_{21} & h_{22}-\lambda & h_{23} & h_{24} \\ & h_{32} & h_{33}-\lambda & h_{34} \\ & & h_{43} & h_{44}-\lambda \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}. \]

If $\lambda$ is not an eigenvalue, then we determine the $x_i$ such that

(3.13) \[ \begin{pmatrix} h_{11}-\lambda & h_{12} & h_{13} & h_{14} \\ h_{21} & h_{22}-\lambda & h_{23} & h_{24} \\ & h_{32} & h_{33}-\lambda & h_{34} \\ & & h_{43} & h_{44}-\lambda \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ 1 \end{pmatrix} = \begin{pmatrix} p(\lambda) \\ 0 \\ 0 \\ 0 \end{pmatrix}. \]

We determine the $n-1$ numbers $x_{n-1}, x_{n-2}, \ldots, x_1$ by

\[ x_i = -\frac{1}{h_{i+1,i}} \Big( (h_{i+1,i+1} - \lambda)x_{i+1} + h_{i+1,i+2}x_{i+2} + \cdots + h_{i+1,n}x_n \Big), \quad i = n-1, \ldots, 1. \]

The $x_i$ are functions of $\lambda$; in fact, $x_i \in P_{n-i}$. The first equation in (3.13) gives

(3.14) \[ (h_{11} - \lambda)x_1 + h_{12}x_2 + \cdots + h_{1n}x_n = p(\lambda). \]
Eq. (3.13) is obtained from looking at the last column of

\[ \begin{pmatrix} h_{11}-\lambda & h_{12} & h_{13} & h_{14} \\ h_{21} & h_{22}-\lambda & h_{23} & h_{24} \\ & h_{32} & h_{33}-\lambda & h_{34} \\ & & h_{43} & h_{44}-\lambda \end{pmatrix} \begin{pmatrix} 1 & & & x_1 \\ & 1 & & x_2 \\ & & 1 & x_3 \\ & & & 1 \end{pmatrix} = \begin{pmatrix} h_{11}-\lambda & h_{12} & h_{13} & p(\lambda) \\ h_{21} & h_{22}-\lambda & h_{23} & 0 \\ & h_{32} & h_{33}-\lambda & 0 \\ & & h_{43} & 0 \end{pmatrix}. \]

Taking determinants and expanding along the last column yields

\[ \det(H - \lambda I) = \Big( (-1)^{n-1} \prod_{i=1}^{n-1} h_{i+1,i} \Big)\, p(\lambda) = c\, p(\lambda). \]

So $p(\lambda)$ is a constant multiple of the determinant of $H - \lambda I$. Therefore, we can solve $p(\lambda) = 0$ instead of $\det(H - \lambda I) = 0$. Since the quantities $x_1, x_2, \ldots, x_{n-1}$, and thus $p(\lambda)$, are differentiable functions of $\lambda$, we can get $p'(\lambda)$ by algorithmic differentiation. For $i = n-1, \ldots, 1$ we have

\[ x_i' = -\frac{1}{h_{i+1,i}} \Big( -x_{i+1} + (h_{i+1,i+1} - \lambda)x_{i+1}' + h_{i+1,i+2}x_{i+2}' + \cdots + h_{i+1,n}x_n' \Big). \]

Finally,

\[ \frac{1}{c} f'(\lambda) = p'(\lambda) = -x_1 + (h_{11} - \lambda)x_1' + h_{12}x_2' + \cdots + h_{1n}x_n'. \]

Algorithm 3.1 implements Hyman's algorithm; it returns $p(\lambda)$ and $p'(\lambda)$ for a given input parameter $\lambda$ [7].

Algorithm 3.1 Hyman's algorithm
1: Choose a value $\lambda$.
2: $x_n := 1$; $dx_n := 0$;
3: for $i = n-1$ downto $1$ do
4:  $s := (\lambda - h_{i+1,i+1})x_{i+1}$; $ds := x_{i+1} + (\lambda - h_{i+1,i+1})dx_{i+1}$;
5:  for $j = i+2$ to $n$ do
6:   $s := s - h_{i+1,j}x_j$; $ds := ds - h_{i+1,j}dx_j$;
7:  end for
8:  $x_i := s/h_{i+1,i}$; $dx_i := ds/h_{i+1,i}$;
9: end for
10: $s := (h_{11} - \lambda)x_1$; $ds := -x_1 + (h_{11} - \lambda)dx_1$;
11: for $i = 2$ to $n$ do
12:  $s := s + h_{1,i}x_i$; $ds := ds + h_{1,i}dx_i$;
13: end for
14: $p(\lambda) := s$; $p'(\lambda) := ds$;

This algorithm computes $p(\lambda) = \det(H - \lambda I)/c$ and its derivative $p'(\lambda)$ for a Hessenberg matrix $H$ in $O(n^2)$ operations. Inside a Newton iteration the new iterate is obtained by

\[ \lambda_{k+1} = \lambda_k - \frac{p(\lambda_k)}{p'(\lambda_k)}, \qquad k = 0, 1, \ldots \]
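Hyman's algorithm translates almost line by line into code; a sketch with 0-based indexing and real arithmetic, followed by a Newton iteration on a small made-up Hessenberg matrix:

```python
import numpy as np

def hyman(H, lam):
    """Hyman's algorithm for an unreduced Hessenberg matrix H: returns
    (p(lam), p'(lam)) with det(H - lam*I) = c * p(lam) for a constant c,
    in O(n^2) operations."""
    n = H.shape[0]
    x, dx = np.zeros(n), np.zeros(n)
    x[-1], dx[-1] = 1.0, 0.0
    for i in range(n - 2, -1, -1):
        s = (lam - H[i + 1, i + 1]) * x[i + 1]
        ds = x[i + 1] + (lam - H[i + 1, i + 1]) * dx[i + 1]
        for j in range(i + 2, n):
            s -= H[i + 1, j] * x[j]
            ds -= H[i + 1, j] * dx[j]
        x[i], dx[i] = s / H[i + 1, i], ds / H[i + 1, i]
    p = (H[0, 0] - lam) * x[0]
    dp = -x[0] + (H[0, 0] - lam) * dx[0]
    for i in range(1, n):
        p += H[0, i] * x[i]
        dp += H[0, i] * dx[i]
    return p, dp

# Newton iteration on p; the constant c cancels in p/p'.
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam = 5.0
for _ in range(20):
    p, dp = hyman(H, lam)
    lam -= p / dp
print(lam)  # converges to the nearby eigenvalue 3 + sqrt(3) of H
```

For this example the eigenvalues of $H$ are $3$ and $3 \pm \sqrt{3}$, and the iteration started at $5$ converges to $3 + \sqrt{3} \approx 4.732$.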
The factor $c$ cancels. An initial guess $\lambda_0$ has to be chosen to start the iteration; clearly, a good guess reduces the iteration count of the Newton method. The iteration is considered converged when $|f(\lambda_k)|$ is sufficiently small. The vector $x = (x_1, x_2, \ldots, x_{n-1}, 1)^T$ is then a good approximation of the corresponding eigenvector.

Remark 3.1. Higher order derivatives of $f$ can be computed in an analogous fashion. Higher order zero finders (e.g. Laguerre's zero finder) are then applicable [3].

3.2.3 Computing multiple zeros

If we have found a zero $z$ of $f(x) = 0$ and want to compute another one, we want to avoid recomputing the already found $z$. We can explicitly deflate the zero by defining a new function

(3.15) \[ f_1(x) := \frac{f(x)}{x - z}, \]

and apply our method of choice to $f_1$. This procedure can in particular be done with polynomials. The coefficients of $f_1$ are, however, very sensitive to inaccuracies in $z$. We can proceed similarly for multiple zeros $z_1, \ldots, z_m$.

Explicit deflation is not recommended, and it is often not feasible since $f$ is not given explicitly. For the reciprocal Newton correction of $f_1$ in (3.15) we get

\[ \frac{f_1'(x)}{f_1(x)} = \frac{\dfrac{f'(x)}{x-z} - \dfrac{f(x)}{(x-z)^2}}{\dfrac{f(x)}{x-z}} = \frac{f'(x)}{f(x)} - \frac{1}{x-z}. \]

Then a Newton correction becomes

(3.16) \[ x_{k+1} = x_k - \frac{1}{\dfrac{f'(x_k)}{f(x_k)} - \dfrac{1}{x_k - z}}, \]

and similarly for multiple zeros $z_1, \ldots, z_m$. Working with (3.16) is called implicit deflation. Here, $f$ is not modified; in this way errors in $z$ are not propagated to $f_1$.

3.3 Newton methods for the constrained matrix problem

We consider the nonlinear eigenvalue problem (3.4) equipped with the normalization condition (3.3),

(3.17) \[ T(\lambda)x = 0, \qquad c^T x = 1, \]

where $c$ is some given vector. At a solution $(x, \lambda)$, $x \ne 0$, the matrix $T(\lambda)$ is singular. Note that $x$ is defined only up to a (nonzero) multiplicative factor; $c^T x = 1$ is just one way to normalize $x$. Another one would be $\|x\|_2 = 1$, cf. the next section. Solving (3.17) is equivalent to finding a zero of the nonlinear function $f(x, \lambda)$,

(3.18) \[ f(x, \lambda) = \begin{pmatrix} T(\lambda)x \\ c^T x - 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \]
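Implicit deflation is easy to sketch on a scalar example. The following code (a made-up quadratic with zeros 1 and 2) applies Newton with the reciprocal correction derived above, extended to a list of already found zeros, without ever forming the deflated function explicitly:

```python
def newton_deflated(f, df, x, zeros, tol=1e-12, maxit=100):
    """Newton's method applied implicitly to f(x) / prod_j (x - z_j):
    only the reciprocal correction is modified, f itself is untouched."""
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:        # converged; avoid dividing by ~0
            break
        r = df(x) / fx - sum(1.0 / (x - z) for z in zeros)
        x = x - 1.0 / r
    return x

f = lambda x: (x - 1.0) * (x - 2.0)
df = lambda x: 2.0 * x - 3.0
z1 = newton_deflated(f, df, 5.0, [])      # plain Newton finds 2.0
z2 = newton_deflated(f, df, 5.0, [z1])    # deflated run finds 1.0
print(z1, z2)
```

From the same starting value, the undeflated iteration converges to the nearest zero 2, while the implicitly deflated run avoids it and converges to 1.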
To apply Newton's zero finding method to (3.18) we need the Jacobian $J$ of $f$,

(3.19) \[ J(x, \lambda) \equiv \frac{\partial f(x, \lambda)}{\partial (x, \lambda)} = \begin{pmatrix} T(\lambda) & T'(\lambda)x \\ c^T & 0 \end{pmatrix}. \]

Here, $T'(\lambda)$ denotes the (elementwise) derivative of $T$ with respect to $\lambda$. Then a step of Newton's iteration is given by

(3.20) \[ \begin{pmatrix} x_{k+1} \\ \lambda_{k+1} \end{pmatrix} = \begin{pmatrix} x_k \\ \lambda_k \end{pmatrix} - J(x_k, \lambda_k)^{-1} f(x_k, \lambda_k), \]

or, with the abbreviations $T_k := T(\lambda_k)$ and $T_k' := T'(\lambda_k)$, an equation for the Newton corrections is obtained,

(3.21) \[ \begin{pmatrix} T_k & T_k' x_k \\ c^T & 0 \end{pmatrix} \begin{pmatrix} x_{k+1} - x_k \\ \lambda_{k+1} - \lambda_k \end{pmatrix} = - \begin{pmatrix} T_k x_k \\ c^T x_k - 1 \end{pmatrix}. \]

If $x_k$ is normalized, $c^T x_k = 1$, then the second equation in (3.21) yields

(3.22) \[ c^T(x_{k+1} - x_k) = 0 \quad\Longleftrightarrow\quad c^T x_{k+1} = 1. \]

The first equation in (3.21) gives

\[ T_k(x_{k+1} - x_k) + (\lambda_{k+1} - \lambda_k)T_k' x_k = -T_k x_k \quad\Longleftrightarrow\quad T_k x_{k+1} = -(\lambda_{k+1} - \lambda_k)T_k' x_k. \]

We introduce the auxiliary vector $u_{k+1}$ by

(3.23) \[ T_k u_{k+1} = T_k' x_k. \]

Note that

(3.24) \[ x_{k+1} = -(\lambda_{k+1} - \lambda_k)u_{k+1}. \]

So $u_{k+1}$ points in the right direction; it just needs to be normalized. Premultiplying (3.24) by $c^T$ and using (3.22) gives

\[ 1 = c^T x_{k+1} = -(\lambda_{k+1} - \lambda_k)\, c^T u_{k+1}, \]

or

(3.25) \[ \lambda_{k+1} = \lambda_k - \frac{1}{c^T u_{k+1}}. \]

In summary, we get the procedure of Algorithm 3.2. If the linear eigenvalue problem is solved by Algorithm 3.3, then $T'(\lambda)x = -x$. In each iteration step a linear system has to be solved, which requires the factorization of a matrix.

We now change the way we normalize $x$. Problem (3.17) becomes

(3.26) \[ T(\lambda)x = 0, \qquad \|x\|_2 = 1, \]

with the corresponding nonlinear system of equations

(3.27) \[ f(x, \lambda) = \begin{pmatrix} T(\lambda)x \\ \tfrac{1}{2}(x^T x - 1) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \]
Algorithm 3.2 Newton iteration for solving (3.18)
1: Choose a starting vector $x_0 \in \mathbb{R}^n$ with $c^T x_0 = 1$. Set $k := 0$.
2: repeat
3:  Solve $T(\lambda_k)u_{k+1} = T'(\lambda_k)x_k$ for $u_{k+1}$;  (3.23)
4:  $\mu_k := c^T u_{k+1}$;
5:  $x_{k+1} := u_{k+1}/\mu_k$;  (normalize $u_{k+1}$)
6:  $\lambda_{k+1} := \lambda_k - 1/\mu_k$;  (3.25)
7:  $k := k+1$;
8: until some convergence criterion is satisfied

The Jacobian now is

(3.28) \[ J(x, \lambda) \equiv \frac{\partial f(x, \lambda)}{\partial (x, \lambda)} = \begin{pmatrix} T(\lambda) & T'(\lambda)x \\ x^T & 0 \end{pmatrix}. \]

The Newton step (3.21) changes into

(3.29) \[ \begin{pmatrix} T_k & T_k' x_k \\ x_k^T & 0 \end{pmatrix} \begin{pmatrix} x_{k+1} - x_k \\ \lambda_{k+1} - \lambda_k \end{pmatrix} = - \begin{pmatrix} T_k x_k \\ \tfrac{1}{2}(x_k^T x_k - 1) \end{pmatrix}. \]

If $x_k$ is normalized, $\|x_k\| = 1$, then the second equation in (3.29) gives

(3.30) \[ x_k^T(x_{k+1} - x_k) = 0 \quad\Longleftrightarrow\quad x_k^T x_{k+1} = 1. \]

The correction $\Delta x_k := x_{k+1} - x_k$ is thus orthogonal to the current approximation. The first equation in (3.29) is identical with the first equation in (3.21). Again, we employ the auxiliary vector $u_{k+1}$ defined in (3.23). Premultiplying (3.24) by $x_k^T$ and using (3.30) gives

\[ 1 = x_k^T x_{k+1} = -(\lambda_{k+1} - \lambda_k)\, x_k^T u_{k+1}, \]

or

(3.31) \[ \lambda_{k+1} = \lambda_k - \frac{1}{x_k^T u_{k+1}}. \]

The next iterate $x_{k+1}$ is obtained by normalizing $u_{k+1}$,

(3.32) \[ x_{k+1} = u_{k+1}/\|u_{k+1}\|. \]

Algorithm 3.3 Newton iteration for solving (3.27)
1: Choose a starting vector $x_0 \in \mathbb{R}^n$ with $\|x_0\| = 1$. Set $k := 0$.
2: repeat
3:  Solve $T(\lambda_k)u_{k+1} = T'(\lambda_k)x_k$ for $u_{k+1}$;  (3.23)
4:  $\mu_k := x_k^T u_{k+1}$;
5:  $\lambda_{k+1} := \lambda_k - 1/\mu_k$;  (3.31)
6:  $x_{k+1} := u_{k+1}/\|u_{k+1}\|$;  (normalize $u_{k+1}$)
7:  $k := k+1$;
8: until some convergence criterion is satisfied
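A minimal numpy sketch of Algorithm 3.3, applied to the standard linear problem $T(\lambda) = A - \lambda I$ (so that $T'(\lambda) = -I$); the matrix and the starting values are made up for illustration:

```python
import numpy as np

def newton_nlevp(T, Tprime, x0, lam0, tol=1e-10, maxit=50):
    """Newton iteration with 2-norm normalization (Algorithm 3.3 sketch).
    T(lam) and Tprime(lam) are callables returning matrices."""
    x, lam = x0 / np.linalg.norm(x0), lam0
    for _ in range(maxit):
        u = np.linalg.solve(T(lam), Tprime(lam) @ x)   # auxiliary vector
        mu = x @ u
        lam = lam - 1.0 / mu                           # eigenvalue update
        x = u / np.linalg.norm(u)                      # normalize
        if np.linalg.norm(T(lam) @ x) < tol:
            break
    return lam, x

# Standard eigenvalue problem: T(lam) = A - lam*I, T'(lam) = -I.
A = np.diag([1.0, 2.0, 7.0]) + np.triu(np.ones((3, 3)), 1)
T = lambda lam: A - lam * np.eye(3)
Tp = lambda lam: -np.eye(3)
lam, x = newton_nlevp(T, Tp, np.ones(3), 6.5)
print(lam)  # close to the eigenvalue 7 of A
```

Each step solves one linear system with the matrix $T(\lambda_k)$, as noted in the text; for the linear problem this step is a shifted solve reminiscent of inverse iteration, but with the shift updated at every step.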
3.4 Successive linear approximations

Ruhe [4] suggested the following method, which is not derived as a Newton method. It is based on an expansion of $T(\lambda)$ at some approximate eigenvalue $\lambda_k$,

(3.33) \[ T(\lambda)x \approx (T(\lambda_k) - \vartheta T'(\lambda_k))x = 0, \qquad \lambda = \lambda_k - \vartheta. \]

Equation (3.33) is a generalized eigenvalue problem with eigenvalue $\vartheta$. If $\lambda_k$ is a good approximation of an eigenvalue, then it is straightforward to compute the smallest (in modulus) eigenvalue $\vartheta$ of

(3.34) \[ T(\lambda_k)x = \vartheta\, T'(\lambda_k)x \]

and to update $\lambda_k$ by $\lambda_{k+1} = \lambda_k - \vartheta$.

Algorithm 3.4 Algorithm of successive linear problems
1: Start with an approximation $\lambda_1$ of an eigenvalue of $T(\lambda)$.
2: for $k = 1, 2, \ldots$ do
3:  Solve the linear eigenvalue problem $T(\lambda_k)u = \vartheta T'(\lambda_k)u$.
4:  Choose an eigenvalue $\vartheta$ smallest in modulus.
5:  $\lambda_{k+1} := \lambda_k - \vartheta$;
6:  if $|\vartheta|$ is small enough then
7:   Return with eigenvalue $\lambda_{k+1}$ and associated eigenvector $u/\|u\|$.
8:  end if
9: end for

Remark 3.2. If $T$ is twice continuously differentiable, and $\hat\lambda$ is an eigenvalue of the nonlinear problem such that $T(\hat\lambda)$ is singular and $0$ is an algebraically simple eigenvalue of $T'(\hat\lambda)^{-1}T(\hat\lambda)$, then the method of Algorithm 3.4 converges quadratically towards $\hat\lambda$.

Bibliography

[1] P. Arbenz and W. Gander, Solving nonlinear eigenvalue problems by algorithmic differentiation, Computing, 36 (1986).
[2] D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann, Structured polynomial eigenvalue problems: Good vibrations from good linearizations, SIAM J. Matrix Anal. Appl., 28 (2006).
[3] B. Parlett, Laguerre's method applied to the matrix eigenvalue problem, Math. Comput., 18 (1964).
[4] A. Ruhe, Algorithms for the nonlinear eigenvalue problem, SIAM J. Numer. Anal., 10 (1973).
[5] G. Strang, Introduction to Applied Mathematics, Wellesley-Cambridge Press, Wellesley, 1986.
[6] F. Tisseur and K. Meerbergen, The quadratic eigenvalue problem, SIAM Rev., 43 (2001).
[7] R. C. Ward, The QR algorithm and Hyman's method on vector computers, Math. Comput., 30 (1976).
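The method of successive linear problems can be sketched in a few lines. The example data (a small diagonal quadratic problem) are made up; since numpy has no generalized eigensolver, the generalized problem in step 3 of Algorithm 3.4 is reduced to an ordinary one under the assumption that $T'(\lambda_k)$ is invertible:

```python
import numpy as np

def slp(T, Tprime, lam, tol=1e-10, maxit=50):
    """Successive linear problems (Algorithm 3.4 sketch): at each step
    solve T(lam) u = theta * T'(lam) u and subtract the theta of
    smallest modulus from lam."""
    for _ in range(maxit):
        # reduce to an ordinary eigenproblem (assumes T'(lam) invertible)
        W = np.linalg.solve(Tprime(lam), T(lam))
        theta_all, U = np.linalg.eig(W)
        j = np.argmin(np.abs(theta_all))
        theta, u = theta_all[j], U[:, j]
        lam = lam - theta
        if abs(theta) < tol:
            break
    return lam, u / np.linalg.norm(u)

# Made-up quadratic problem T(lam) = A + lam*K + lam^2*M.
A = np.diag([2.0, 5.0])
K = -3.0 * np.eye(2)
M = np.eye(2)
T = lambda lam: A + lam * K + lam**2 * M
Tp = lambda lam: K + 2.0 * lam * M
lam, u = slp(T, Tp, 0.5)
print(lam)  # a real eigenvalue of the quadratic problem (here 1)
```

For this diagonal example the first diagonal entry of $T(\lambda)$ is $\lambda^2 - 3\lambda + 2$, so the iteration started at $0.5$ converges to the eigenvalue $1$.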
JUST THE MATHS SLIDES NUMBER 96 MATRICES 6 (Eigenvalues and eigenvectors) by AJHobson 96 The statement of the problem 962 The solution of the problem UNIT 96 - MATRICES 6 EIGENVALUES AND EIGENVECTORS 96
More informationECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems
ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems Part I: Review of basic theory of eigenvalue problems 1. Let A C n n. (a) A scalar λ is an eigenvalue of an n n A
More information6.4 Krylov Subspaces and Conjugate Gradients
6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P
More informationMATH 423 Linear Algebra II Lecture 20: Geometry of linear transformations. Eigenvalues and eigenvectors. Characteristic polynomial.
MATH 423 Linear Algebra II Lecture 20: Geometry of linear transformations. Eigenvalues and eigenvectors. Characteristic polynomial. Geometric properties of determinants 2 2 determinants and plane geometry
More informationNumerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725
Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10-725/36-725 Last time: proximal gradient descent Consider the problem min g(x) + h(x) with g, h convex, g differentiable, and h simple
More informationOrthogonal Transformations
Orthogonal Transformations Tom Lyche University of Oslo Norway Orthogonal Transformations p. 1/3 Applications of Qx with Q T Q = I 1. solving least squares problems (today) 2. solving linear equations
More informationMath Linear Algebra Final Exam Review Sheet
Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of
More informationCheat Sheet for MATH461
Cheat Sheet for MATH46 Here is the stuff you really need to remember for the exams Linear systems Ax = b Problem: We consider a linear system of m equations for n unknowns x,,x n : For a given matrix A
More informationFinal Review Written by Victoria Kala SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015
Final Review Written by Victoria Kala vtkala@mathucsbedu SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015 Summary This review contains notes on sections 44 47, 51 53, 61, 62, 65 For your final,
More informationLinear algebra I Homework #1 due Thursday, Oct Show that the diagonals of a square are orthogonal to one another.
Homework # due Thursday, Oct. 0. Show that the diagonals of a square are orthogonal to one another. Hint: Place the vertices of the square along the axes and then introduce coordinates. 2. Find the equation
More informationNumerical Methods - Numerical Linear Algebra
Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear
More information1 Number Systems and Errors 1
Contents 1 Number Systems and Errors 1 1.1 Introduction................................ 1 1.2 Number Representation and Base of Numbers............. 1 1.2.1 Normalized Floating-point Representation...........
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationLagrange multipliers. Portfolio optimization. The Lagrange multipliers method for finding constrained extrema of multivariable functions.
Chapter 9 Lagrange multipliers Portfolio optimization The Lagrange multipliers method for finding constrained extrema of multivariable functions 91 Lagrange multipliers Optimization problems often require
More informationDirect methods for symmetric eigenvalue problems
Direct methods for symmetric eigenvalue problems, PhD McMaster University School of Computational Engineering and Science February 4, 2008 1 Theoretical background Posing the question Perturbation theory
More informationLinear Algebra review Powers of a diagonalizable matrix Spectral decomposition
Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing
More informationStat 159/259: Linear Algebra Notes
Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the
More informationComputation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.
Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces
More informationMAT Linear Algebra Collection of sample exams
MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system
More informationLinear Algebra review Powers of a diagonalizable matrix Spectral decomposition
Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2018 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing
More information2 Eigenvectors and Eigenvalues in abstract spaces.
MA322 Sathaye Notes on Eigenvalues Spring 27 Introduction In these notes, we start with the definition of eigenvectors in abstract vector spaces and follow with the more common definition of eigenvectors
More informationMath Ordinary Differential Equations
Math 411 - Ordinary Differential Equations Review Notes - 1 1 - Basic Theory A first order ordinary differential equation has the form x = f(t, x) (11) Here x = dx/dt Given an initial data x(t 0 ) = x
More informationLinear Algebra Primer
Introduction Linear Algebra Primer Daniel S. Stutts, Ph.D. Original Edition: 2/99 Current Edition: 4//4 This primer was written to provide a brief overview of the main concepts and methods in elementary
More informationLU Factorization. A m x n matrix A admits an LU factorization if it can be written in the form of A = LU
LU Factorization A m n matri A admits an LU factorization if it can be written in the form of Where, A = LU L : is a m m lower triangular matri with s on the diagonal. The matri L is invertible and is
More informationEigenvalues and Eigenvectors
Eigenvalues and Eigenvectors week -2 Fall 26 Eigenvalues and eigenvectors The most simple linear transformation from R n to R n may be the transformation of the form: T (x,,, x n ) (λ x, λ 2,, λ n x n
More informationSymmetric Matrices and Eigendecomposition
Symmetric Matrices and Eigendecomposition Robert M. Freund January, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 Symmetric Matrices and Convexity of Quadratic Functions
More informationMATH 225 Summer 2005 Linear Algebra II Solutions to Assignment 1 Due: Wednesday July 13, 2005
MATH 225 Summer 25 Linear Algebra II Solutions to Assignment 1 Due: Wednesday July 13, 25 Department of Mathematical and Statistical Sciences University of Alberta Question 1. [p 224. #2] The set of all
More information6 EIGENVALUES AND EIGENVECTORS
6 EIGENVALUES AND EIGENVECTORS INTRODUCTION TO EIGENVALUES 61 Linear equations Ax = b come from steady state problems Eigenvalues have their greatest importance in dynamic problems The solution of du/dt
More informationConjugate Gradient (CG) Method
Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous
More informationSome Notes on Linear Algebra
Some Notes on Linear Algebra prepared for a first course in differential equations Thomas L Scofield Department of Mathematics and Statistics Calvin College 1998 1 The purpose of these notes is to present
More informationComputational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science
Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding
More informationChapter 5 Eigenvalues and Eigenvectors
Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n
More information