SEC. 11.2 POWER METHOD
We now describe the power method for computing the dominant eigenpair. Its extension to the inverse power method is practical for finding any eigenvalue provided that a good initial approximation is known. Some schemes for finding eigenvalues use other methods that converge fast but have limited precision; the inverse power method is then invoked to refine the numerical values and gain full precision. To discuss the situation, we will need the following definitions.

Definition 11.10. If λ1 is an eigenvalue of A that is larger in absolute value than any other eigenvalue, it is called the dominant eigenvalue. An eigenvector V1 corresponding to λ1 is called a dominant eigenvector.
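For a 2 × 2 matrix the dominant eigenvalue can be computed directly from the characteristic polynomial; the following sketch (the matrix is our own illustration, not from the text) finds both roots and picks the one of largest absolute value:

```python
import math

# Hypothetical 2x2 example: A = [[2, 1], [1, 2]].
# Its characteristic polynomial is t^2 - tr(A) t + det(A) = t^2 - 4t + 3,
# with roots 3 and 1, so the dominant eigenvalue is 3.
a, b, c, d = 2.0, 1.0, 1.0, 2.0
tr, det = a + d, a * d - b * c
disc = math.sqrt(tr * tr - 4.0 * det)       # discriminant of the quadratic
lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0
dominant = max((lam1, lam2), key=abs)       # eigenvalue of largest magnitude
print(dominant)  # 3.0
```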
600 CHAP. 11 EIGENVALUES AND EIGENVECTORS

Definition 11.11. An eigenvector V is said to be normalized if the coordinate of largest magnitude is equal to unity (i.e., the largest coordinate in the vector V is the number 1). It is easy to normalize an eigenvector [v1 v2 ... vn] by forming a new vector V = (1/c)[v1 v2 ... vn], where c = vj and |vj| = max_{1≤i≤n} {|vi|}.

Suppose that the matrix A has a dominant eigenvalue λ and that there is a unique normalized eigenvector V that corresponds to λ. This eigenpair λ, V can be found by the following iterative procedure called the power method. Start with the vector

(1)   X0 = [1 1 ... 1].

Generate the sequence {Xk} recursively, using

(2)   Yk = A Xk,   X_{k+1} = (1/c_{k+1}) Yk,

where c_{k+1} is the coordinate of Yk of largest magnitude (in the case of a tie, choose the coordinate that comes first). The sequences {Xk} and {ck} will converge to V and λ, respectively:

(3)   lim_{k→∞} Xk = V   and   lim_{k→∞} ck = λ.

Remark. If X0 is an eigenvector and X0 ≠ V, then some other starting vector must be chosen.

Example 11.5. Use the power method to find the dominant eigenvalue and eigenvector for the matrix

      A = [  0  11   -5
            -2  17   -7
            -4  26  -10 ].

Start with X0 = [1 1 1] and use the formulas in (2) to generate the sequence of vectors {Xk} and constants {ck}. The first iteration produces

      A X0 = [6 8 12] = 12 [1/2 2/3 1] = c1 X1.

The second iteration produces

      A X1 = [7/3 10/3 16/3] = (16/3) [7/16 5/8 1] = c2 X2.
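The recursion in (2) translates directly into code. A minimal sketch in Python (no external libraries; the entries of A below are an assumed reconstruction chosen to be consistent with the eigenvalues 4, 2, 1 and the eigenvectors reported later in Example 11.6):

```python
def power_method(A, x0, max_iter=50, tol=1e-10):
    """Power method per (1)-(2): Y_k = A X_k, X_{k+1} = Y_k / c_{k+1},
    where c_{k+1} is the coordinate of Y_k of largest magnitude."""
    x = list(x0)
    c = 0.0
    for _ in range(max_iter):
        # Y_k = A X_k
        y = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
        c_new = max(y, key=abs)           # largest-magnitude coordinate (first on ties)
        x_new = [yi / c_new for yi in y]  # normalized so the largest coordinate is 1
        if abs(c_new - c) < tol:
            return c_new, x_new
        c, x = c_new, x_new
    return c, x

# Assumed reconstruction of the example matrix (eigenvalues 4, 2, 1).
A = [[0.0, 11.0, -5.0], [-2.0, 17.0, -7.0], [-4.0, 26.0, -10.0]]
lam, v = power_method(A, [1.0, 1.0, 1.0])
print(lam)  # close to 4.0
print(v)    # close to [0.4, 0.6, 1.0]
```

Since the error is attenuated by the ratio λ2/λ1 = 1/2 each step, roughly 35 to 40 iterations suffice for the tolerance used here.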
Table 11.1  Power Method Used in Example 11.5 to Find the Normalized Dominant Eigenvector V1 = [2/5 3/5 1] and Corresponding Eigenvalue λ1 = 4 (rows tabulate A Xk = Yk = c_{k+1} X_{k+1} for k = 0, 1, ..., 10)

Iteration generates a sequence {Xk} of normalized vectors. The sequence of vectors converges to V1 = [2/5 3/5 1], and the sequence of constants converges to λ1 = 4 (see Table 11.1). It can be proved that the rate of convergence is linear.

Theorem 11.18 (Power Method). Assume that the n × n matrix A has n distinct eigenvalues λ1, λ2, ..., λn and that they are ordered in decreasing magnitude; that is,

(4)   |λ1| > |λ2| ≥ |λ3| ≥ ··· ≥ |λn|.

If X0 is chosen appropriately, then the sequences {Xk = [x1^(k) x2^(k) ... xn^(k)]} and {ck} generated recursively by

(5)   Yk = A Xk

and

(6)   X_{k+1} = (1/c_{k+1}) Yk,

where

(7)   c_{k+1} = yj^(k)   and   |yj^(k)| = max_{1≤i≤n} {|yi^(k)|},
will converge to the dominant eigenvector V1 and eigenvalue λ1, respectively. That is,

(8)   lim_{k→∞} Xk = V1   and   lim_{k→∞} ck = λ1.

Proof. Since A has n distinct eigenvalues, there are n corresponding eigenvectors Vj, for j = 1, 2, ..., n, that are linearly independent, normalized, and form a basis for n-dimensional space. Hence the starting vector X0 can be expressed as the linear combination

(9)   X0 = b1 V1 + b2 V2 + ··· + bn Vn.

Assume that X0 = [x1 x2 ... xn] was chosen in such a manner that b1 ≠ 0. Also assume that the coordinates of X0 are scaled so that max_{1≤j≤n} {|xj|} = 1. Because {Vj} are eigenvectors of A, the multiplication A X0, followed by normalization, produces

(10)  Y0 = A X0 = A(b1 V1 + b2 V2 + ··· + bn Vn)
         = b1 A V1 + b2 A V2 + ··· + bn A Vn
         = b1 λ1 V1 + b2 λ2 V2 + ··· + bn λn Vn
         = λ1 ( b1 V1 + b2 (λ2/λ1) V2 + ··· + bn (λn/λ1) Vn )

and

      X1 = (λ1/c1) ( b1 V1 + b2 (λ2/λ1) V2 + ··· + bn (λn/λ1) Vn ).

After k iterations we arrive at

(11)  Yk = A Xk
         = (λ1^k/(c1 c2 ··· ck)) A ( b1 V1 + b2 (λ2/λ1)^k V2 + ··· + bn (λn/λ1)^k Vn )
         = (λ1^k/(c1 c2 ··· ck)) ( b1 λ1 V1 + b2 λ2 (λ2/λ1)^k V2 + ··· + bn λn (λn/λ1)^k Vn )
         = (λ1^{k+1}/(c1 c2 ··· ck)) ( b1 V1 + b2 (λ2/λ1)^{k+1} V2 + ··· + bn (λn/λ1)^{k+1} Vn )

and

      Xk = (λ1^k/(c1 c2 ··· ck)) ( b1 V1 + b2 (λ2/λ1)^k V2 + ··· + bn (λn/λ1)^k Vn ).
Since we assumed that |λj|/|λ1| < 1 for each j = 2, 3, ..., n, we have

(12)  lim_{k→∞} bj (λj/λ1)^k Vj = 0   for each j = 2, 3, ..., n.

Hence it follows that

(13)  lim_{k→∞} Xk = lim_{k→∞} (b1 λ1^k/(c1 c2 ··· ck)) V1.

We have required that both Xk and V1 be normalized and their largest component be 1. Hence the limiting vector on the left side of (13) will be normalized, with its largest component being 1. Consequently, the limit of the scalar multiple of V1 on the right side of (13) exists and its value must be 1; that is,

(14)  lim_{k→∞} b1 λ1^k/(c1 c2 ··· ck) = 1.

Therefore, the sequence of vectors {Xk} converges to the dominant eigenvector:

(15)  lim_{k→∞} Xk = V1.

Replacing k with k - 1 in the terms of the sequence in (14) yields

      lim_{k→∞} b1 λ1^{k-1}/(c1 c2 ··· c_{k-1}) = 1,

and dividing both sides of this result into (14) yields

      lim_{k→∞} λ1/ck = lim_{k→∞} [ b1 λ1^k/(c1 c2 ··· ck) ] / [ b1 λ1^{k-1}/(c1 c2 ··· c_{k-1}) ] = 1/1 = 1.

Therefore, the sequence of constants {ck} converges to the dominant eigenvalue:

(16)  lim_{k→∞} ck = λ1,

and the proof of the theorem is complete.

Speed of Convergence

In the light of equation (12) we see that the coefficient of Vj in Xk goes to zero in proportion to (λj/λ1)^k and that the speed of convergence of {Xk} to V1 is governed by the terms (λ2/λ1)^k. Consequently, the rate of convergence is linear. Similarly, the
Table 11.2  Comparison of the Rate of Convergence of the Power Method and Acceleration of the Power Method Using Aitken's Δ² Technique (rows tabulate ck Xk = Yk alongside the accelerated values ĉk X̂k for k = 1, 2, ..., 10)

convergence of the sequence of constants {ck} to λ1 is linear. The Aitken Δ² method can be used for any linearly convergent sequence {pk} to form a new sequence,

      p̂k = pk - (p_{k+1} - pk)² / (p_{k+2} - 2 p_{k+1} + pk),

that converges faster. In Example 11.5 this Aitken Δ² method can be applied to speed up convergence of the sequence of constants {ck}, as well as the first two components of the sequence of vectors {Xk}. A comparison of the results obtained with this technique and the original sequences is shown in Table 11.2.

Shifted-Inverse Power Method

We will now discuss the shifted-inverse power method. It requires a good starting approximation for an eigenvalue, and then iteration is used to obtain a precise solution. Other procedures, such as the QR and Givens methods, are used first to obtain the starting approximations. Cases involving complex eigenvalues, multiple eigenvalues, or the presence of two eigenvalues with the same magnitude or approximately the same magnitude will cause computational difficulties and require more advanced methods. Our illustrations will focus on the case where the eigenvalues are distinct. The shifted-inverse power method is based on the following three results (the proofs are left as exercises).

Theorem 11.19 (Shifting Eigenvalues). Suppose that λ, V is an eigenpair of A. If α is any constant, then λ - α, V is an eigenpair of the matrix A - αI.

Theorem 11.20 (Inverse Eigenvalues). Suppose that λ, V is an eigenpair of A. If λ ≠ 0, then 1/λ, V is an eigenpair of the matrix A⁻¹.
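Aitken's Δ² formula is easy to exercise on a model sequence. In the sketch below, pk = 4 + (1/2)^k is our own stand-in for the constants ck of Example 11.5, whose error shrinks with ratio λ2/λ1 = 1/2; because the error of this model is exactly geometric, every accelerated term hits the limit at once:

```python
def aitken(p):
    """Aitken's delta-squared acceleration:
    p_hat[k] = p[k] - (p[k+1] - p[k])**2 / (p[k+2] - 2*p[k+1] + p[k])."""
    return [p[k] - (p[k + 1] - p[k]) ** 2 / (p[k + 2] - 2 * p[k + 1] + p[k])
            for k in range(len(p) - 2)]

# Linearly convergent model sequence p_k = 4 + (1/2)^k (limit 4, ratio 1/2).
p = [4 + 0.5 ** k for k in range(8)]
print(aitken(p))  # every accelerated term equals 4.0
```

For sequences whose error is only asymptotically geometric, such as the actual {ck}, the accelerated terms are not exact but converge markedly faster than the originals.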
Figure 11.12  The location of α for the shifted-inverse power method.

Theorem 11.21. Suppose that λ, V is an eigenpair of A. If α ≠ λ, then 1/(λ - α), V is an eigenpair of the matrix (A - αI)⁻¹.

Theorem 11.22 (Shifted-Inverse Power Method). Assume that the n × n matrix A has distinct eigenvalues λ1, λ2, ..., λn and consider the eigenvalue λj. Then a constant α can be chosen so that µ1 = 1/(λj - α) is the dominant eigenvalue of (A - αI)⁻¹. Furthermore, if X0 is chosen appropriately, then the sequences {Xk = [x1^(k) x2^(k) ... xn^(k)]} and {ck} are generated recursively by

(17)  Yk = (A - αI)⁻¹ Xk

and

(18)  X_{k+1} = (1/c_{k+1}) Yk,

where

(19)  c_{k+1} = yj^(k)   and   |yj^(k)| = max_{1≤i≤n} {|yi^(k)|},

and will converge to the dominant eigenpair µ1, Vj of the matrix (A - αI)⁻¹. Finally, the corresponding eigenvalue for the matrix A is given by the calculation

(20)  λj = 1/µ1 + α.

Remark. For practical implementations of Theorem 11.22, a linear system solver is used to compute Yk in each step by solving the linear system (A - αI)Yk = Xk.

Proof. Without loss of generality, we may assume that λ1 < λ2 < ··· < λn. Select a number α (α ≠ λj) that is closer to λj than to any of the other eigenvalues (see Figure 11.12); that is,

(21)  |λj - α| < |λi - α|   for each i = 1, 2, ..., j - 1, j + 1, ..., n.

According to Theorem 11.21, 1/(λj - α), Vj is an eigenpair of the matrix (A - αI)⁻¹. Relation (21) implies that 1/|λi - α| < 1/|λj - α| for each i ≠ j, so that µ1 = 1/(λj - α) is the dominant eigenvalue of the matrix (A - αI)⁻¹. The shifted-inverse power method uses a modification of the power method to determine the eigenpair µ1, Vj. Then the calculation λj = 1/µ1 + α produces the desired eigenvalue of the matrix A.
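Following the Remark, each step can solve (A - αI)Yk = Xk rather than forming the inverse. A sketch with a small Gaussian elimination solver (the entries of A are an assumed reconstruction matching the eigenvalues 4, 2, 1 of Example 11.6; α = 4.2 as in Case (i)):

```python
def solve(M, b):
    """Solve M y = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(M, b)]  # augmented working copy
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    y = [0.0] * n
    for r in range(n - 1, -1, -1):
        y[r] = (M[r][n] - sum(M[r][c] * y[c] for c in range(r + 1, n))) / M[r][r]
    return y

def shifted_inverse_power(A, alpha, x0, max_iter=50, tol=1e-10):
    """Iterate (17)-(19), solving (A - alpha*I) Y_k = X_k at each step."""
    n = len(A)
    B = [[A[i][j] - (alpha if i == j else 0.0) for j in range(n)] for i in range(n)]
    x, c = list(x0), 0.0
    for _ in range(max_iter):
        y = solve(B, x)
        c_new = max(y, key=abs)           # signed coordinate of largest magnitude
        x = [yi / c_new for yi in y]
        if abs(c_new - c) < tol:
            break
        c = c_new
    return c_new, x                       # dominant eigenpair mu, V of (A - alpha*I)^(-1)

# Assumed reconstruction of the example matrix (eigenvalues 4, 2, 1).
A = [[0.0, 11.0, -5.0], [-2.0, 17.0, -7.0], [-4.0, 26.0, -10.0]]
mu, v = shifted_inverse_power(A, 4.2, [1.0, 1.0, 1.0])
print(mu)           # close to -5.0
print(1/mu + 4.2)   # close to 4.0, recovering lambda_1 via (20)
```

The convergence ratio here is |µ2/µ1| = (1/2.2)/5 ≈ 0.09, so far fewer iterations are needed than for the plain power method on the same matrix.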
Table 11.3  Shifted-Inverse Power Method for the Matrix (A - 4.2I)⁻¹ in Example 11.6: Convergence to the Eigenvector V1 = [2/5 3/5 1] and µ1 = -5 (rows tabulate (A - αI)⁻¹ Xk = c_{k+1} X_{k+1} for k = 0, 1, ..., 8)

Example 11.6. Employ the shifted-inverse power method to find the eigenpairs of the matrix

      A = [  0  11   -5
            -2  17   -7
            -4  26  -10 ].

Use the fact that the eigenvalues of A are λ1 = 4, λ2 = 2, and λ3 = 1, and select an appropriate α and starting vector for each case.

Case (i): For the eigenvalue λ1 = 4, we select α = 4.2 and the starting vector X0 = [1 1 1]. First form the matrix A - 4.2I and compute the solution of the linear system (A - 4.2I)Y0 = X0 to get the vector Y0; then c1 is the coordinate of Y0 of largest magnitude and X1 = (1/c1)Y0. Iteration generates the values given in Table 11.3. The sequence {ck} converges to µ1 = -5, which is the dominant eigenvalue of (A - 4.2I)⁻¹, and {Xk} converges to V1 = [2/5 3/5 1]. The eigenvalue λ1 of A is given by the computation λ1 = 1/µ1 + α = 1/(-5) + 4.2 = -0.2 + 4.2 = 4.

Case (ii): For the eigenvalue λ2 = 2, we select α = 2.1 and the starting vector X0 = [1 1 1]. Form the matrix A - 2.1I and compute the solution of the linear system (A - 2.1I)Y0 = X0 to obtain the vector Y0; then c1 is the coordinate of Y0 of largest magnitude and X1 = (1/c1)Y0. Iteration produces the
Table 11.4  Shifted-Inverse Power Method for the Matrix (A - 2.1I)⁻¹ in Example 11.6: Convergence to the Dominant Eigenvector V2 = [1/4 1/2 1] and µ1 = -10 (rows tabulate (A - αI)⁻¹ Xk = c_{k+1} X_{k+1} for k = 0, 1, ..., 6)

Table 11.5  Shifted-Inverse Power Method for the Matrix (A - 0.875I)⁻¹ in Example 11.6: Convergence to the Dominant Eigenvector V3 = [1/2 1/2 1] and µ1 = 8 (rows tabulate (A - αI)⁻¹ Xk = c_{k+1} X_{k+1} for k = 0, 1, ..., 6)

values given in Table 11.4. The dominant eigenvalue of (A - 2.1I)⁻¹ is µ1 = -10, and the eigenpair of the matrix A is λ2 = 1/(-10) + 2.1 = -0.1 + 2.1 = 2 and V2 = [1/4 1/2 1].

Case (iii): For the eigenvalue λ3 = 1, we select α = 0.875 and the starting vector X0 = [1 0 1]. Iteration produces the values given in Table 11.5. The dominant eigenvalue of (A - 0.875I)⁻¹ is µ1 = 8, and the eigenpair of the matrix A is λ3 = 1/8 + 0.875 = 0.125 + 0.875 = 1 and V3 = [1/2 1/2 1]. The sequence {Xk} of vectors with the starting vector [1 0 1] converged in seven iterations. (Computational difficulties were encountered when X0 = [1 1 1] was used, and convergence took significantly longer.)
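The arithmetic of formula (20) in the three cases can be checked directly. The sketch below takes the (α, µ) pairs as (4.2, -5), (2.1, -10), and (0.875, 8); the signs of µ are an assumption recovered from the computations 1/µ + α shown in the text:

```python
# (alpha, mu) pairs for the three cases of Example 11.6 (mu signs assumed,
# recovered from the arithmetic 1/mu + alpha in the text).
cases = [(4.2, -5.0), (2.1, -10.0), (0.875, 8.0)]
for alpha, mu in cases:
    lam = 1.0 / mu + alpha  # formula (20): lambda_j = 1/mu_1 + alpha
    print(lam)
# prints 4.0, 2.0, 1.0
```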
Numerical Methods Using MATLAB, 4th Edition, 2004, John H. Mathews and Kurtis K. Fink, Prentice-Hall Inc., Upper Saddle River, New Jersey, USA
B Differential Equations A differential equation is an equation of the form ( n) F[ t; x(, xʹ (, x ʹ ʹ (, x ( ; α] = 0 dx d x ( n) d x where x ʹ ( =, x ʹ ʹ ( =,, x ( = n A differential equation describes
More informationMath 240 Calculus III
Generalized Calculus III Summer 2015, Session II Thursday, July 23, 2015 Agenda 1. 2. 3. 4. Motivation Defective matrices cannot be diagonalized because they do not possess enough eigenvectors to make
More informationSOLUTIONS ABOUT ORDINARY POINTS
238 CHAPTER 6 SERIES SOLUTIONS OF LINEAR EQUATIONS In Problems 23 and 24 use a substitution to shift the summation index so that the general term of given power series involves x k. 23. nc n x n2 n 24.
More informationNumerical Method for Solution of Fuzzy Nonlinear Equations
1, Issue 1 (218) 13-18 Journal of Advanced Research in Modelling and Simulations Journal homepage: www.akademiabaru.com/armas.html ISSN: XXXX-XXXX Numerical for Solution of Fuzzy Nonlinear Equations Open
More informationSingular Value Decomposition
Singular Value Decomposition (Com S 477/577 Notes Yan-Bin Jia Sep, 7 Introduction Now comes a highlight of linear algebra. Any real m n matrix can be factored as A = UΣV T where U is an m m orthogonal
More informationSection 4.4 Reduction to Symmetric Tridiagonal Form
Section 4.4 Reduction to Symmetric Tridiagonal Form Key terms Symmetric matrix conditioning Tridiagonal matrix Similarity transformation Orthogonal matrix Orthogonal similarity transformation properties
More information10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method
54 CHAPTER 10 NUMERICAL METHODS 10. ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after
More informationMath/Music: Structure and Form Fall 2011 Worksheet on Continued Fractions
Math/Music: Structure and Form Fall 20 Worksheet on Continued Fractions Playing around with π The famous mathematical constant π is irrational, with a non-repeating, non-terminating decimal expansion.
More informationElements of linear algebra
Elements of linear algebra Elements of linear algebra A vector space S is a set (numbers, vectors, functions) which has addition and scalar multiplication defined, so that the linear combination c 1 v
More informationThe following definition is fundamental.
1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic
More informationUsually, when we first formulate a problem in mathematics, we use the most familiar
Change of basis Usually, when we first formulate a problem in mathematics, we use the most familiar coordinates. In R, this means using the Cartesian coordinates x, y, and z. In vector terms, this is equivalent
More informationLinear Combination. v = a 1 v 1 + a 2 v a k v k
Linear Combination Definition 1 Given a set of vectors {v 1, v 2,..., v k } in a vector space V, any vector of the form v = a 1 v 1 + a 2 v 2 +... + a k v k for some scalars a 1, a 2,..., a k, is called
More informationComputer Derivations of Numerical Differentiation Formulae. Int. J. of Math. Education in Sci. and Tech., V 34, No 2 (March-April 2003), pp
Computer Derivations o Numerical Dierentiation Formulae By Jon H. Matews Department o Matematics Caliornia State University Fullerton USA Int. J. o Mat. Education in Sci. and Tec. V No (Marc-April ) pp.8-87.
More informationFind the general solution of the system y = Ay, where
Math Homework # March, 9..3. Find the general solution of the system y = Ay, where 5 Answer: The matrix A has characteristic polynomial p(λ = λ + 7λ + = λ + 3(λ +. Hence the eigenvalues are λ = 3and λ
More informationSolving Linear Systems of Equations
November 6, 2013 Introduction The type of problems that we have to solve are: Solve the system: A x = B, where a 11 a 1N a 12 a 2N A =.. a 1N a NN x = x 1 x 2. x N B = b 1 b 2. b N To find A 1 (inverse
More informationEigenvalues and Eigenvectors
Eigenvalues and Eigenvectors week -2 Fall 26 Eigenvalues and eigenvectors The most simple linear transformation from R n to R n may be the transformation of the form: T (x,,, x n ) (λ x, λ 2,, λ n x n
More information2. Review of Linear Algebra
2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear
More informationChapter 7 Duopoly. O. Afonso, P. B. Vasconcelos. Computational Economics: a concise introduction
Chapter 7 Duopoly O. Afonso, P. B. Vasconcelos Computational Economics: a concise introduction O. Afonso, P. B. Vasconcelos Computational Economics 1 / 21 Overview 1 Introduction 2 Economic model 3 Numerical
More informationBackground on Linear Algebra - Lecture 2
Background on Linear Algebra - Lecture September 6, 01 1 Introduction Recall from your math classes the notion of vector spaces and fields of scalars. We shall be interested in finite dimensional vector
More informationChap 3. Linear Algebra
Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions
More informationAnnouncements Monday, November 13
Announcements Monday, November 13 The third midterm is on this Friday, November 17. The exam covers 3.1, 3.2, 5.1, 5.2, 5.3, and 5.5. About half the problems will be conceptual, and the other half computational.
More informationMATH 583A REVIEW SESSION #1
MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),
More informationEigenvalues and Eigenvectors of Matrices over a Finite Field by. Kristopher R. Lewis
Eigenvalues and Eigenvectors of Matrices over a Finite Field by Kristopher R Lewis Abstract: This paper is a study of eigenvalues and their corresponding eigenvectors of x matrices A with entries in Z
More informationEFFECTIVE MODAL MASS & MODAL PARTICIPATION FACTORS Revision F
EFFECTIVE MODA MASS & MODA PARTICIPATION FACTORS Revision F By Tom Irvine Email: tomirvine@aol.com March 9, 1 Introduction The effective modal mass provides a method for judging the significance of a vibration
More information