Section 4.1 The Power Method
Key terms: dominant eigenvalue; eigenpair; infinity norm and 2-norm; power method; scaled power method; power method for symmetric matrices. The notation used varies a bit from that in the text.
Terminology and Properties

Dominant eigenvalue: the eigenvalue of largest absolute value.
Eigenpair: an eigenvalue λ and an associated eigenvector p, written (λ, p).

Property 1. If (λ, p) is an eigenpair of A, then for any positive integer r, (λ^r, p) is an eigenpair of A^r.

The power method is a way to approximate the dominant eigenvalue. We do this indirectly by first approximating an eigenvector associated with the dominant eigenvalue. Our development uses a number of properties of eigenvalues and information about bases and subspaces.
Property 1. If (λ, p) is an eigenpair of A, then for any positive integer r, (λ^r, p) is an eigenpair of A^r.
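The proof is a short induction on r (a standard argument, sketched here rather than taken from the slides): since Ap = λp,

    A^r p = A^(r-1)(A p) = A^(r-1)(λ p) = λ A^(r-1) p = ... = λ^r p,

so p is an eigenvector of A^r with eigenvalue λ^r.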
We form the ratio of the j-th components of u_{k+1} and u_k, assuming the denominator is not zero. If we assume that |λ_1| > |λ_2| ≥ ... ≥ |λ_n|, then this ratio tends to λ_1, assuming that p_{1,j} ≠ 0 (the j-th component of the dominant eigenvector is nonzero). Hence, the ratio converges toward the dominant eigenvalue, and the convergence is linear with asymptotic rate constant |λ_2|/|λ_1|.
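A short worked expansion shows where the rate constant comes from (the standard development, writing u_0 = c_1 p_1 + c_2 p_2 + ... + c_n p_n in terms of an eigenvector basis with c_1 ≠ 0):

    u_k = A^k u_0 = c_1 λ_1^k [ p_1 + (c_2/c_1)(λ_2/λ_1)^k p_2 + ... + (c_n/c_1)(λ_n/λ_1)^k p_n ].

Since |λ_i/λ_1| < 1 for i ≥ 2, the bracketed vector tends to p_1, so the ratio of j-th components of u_{k+1} and u_k tends to λ_1, and the error at each step shrinks roughly by the factor |λ_2/λ_1|.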
We demonstrate this graphically in Example 1 and then from a numerical point of view in Example 2. (Figure 1.)
Next we compute the vectors u_1 = A u_0, u_2 = A u_1, u_3 = A u_2, .... Then we form the ratio of corresponding components. As shown in the table below, the size of the entries in the vectors can grow quite rapidly, and we may need the number k of vectors to be large to get an accurate approximation. (We have only included enough steps to indicate the behavior of the method.) It appears that the ratios of terms are approaching 6.
    % Unscaled Power Method Code
    A=[7 4 5;-2 2 -2;1 0 3];   % example matrix
    u=[1 1 1]';                % sets the initial guess
    for k=1:15                 % <== number of iterations
        v=A*u                  % generates the next approximate eigenvector
        v./u                   % computes the ratio of corresponding entries of the two
                               % most recent approximate eigenvectors and prints it
        u=v;                   % updates the eigenvector for the next time through the loop
    end

(The preceding code prints output at each iteration.) It is recommended that you use the following modification, which constructs a table of the eigenvector and eigenvalue approximations.

    pvec=[ ]; pval=[ ];
    for k=1:15
        v=A*u;
        pvec=[pvec; v'];
        w=v./u;
        pval=[pval; w'];
        u=v;
    end
    pvec, pval
This can be entered at the command line as follows:

    pvec=[ ]; pval=[ ];
    for k=1:15, v=A*u; pvec=[pvec; v']; w=v./u; pval=[pval; w']; u=v; end, pvec, pval

Remember to enter the matrix A and the initial guess u prior to executing the code. Using this code with the input data

    A=[7 4 5;-2 2 -2;1 0 3]; u=[1 1 1]';

we obtained the information in the table. (We haven't shown all the output.) It appears that the ratios of corresponding entries are converging to 6. Hence we conjecture that the dominant eigenvalue is 6. There is a modification of the power method, called the scaled power method, which inhibits the growth of the size of the entries of the approximate eigenvectors u_k and is recommended for use in computational work.
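As a quick cross-check (not part of the slides), MATLAB's built-in eig function can be used to confirm the conjecture for this example matrix:

    A=[7 4 5;-2 2 -2;1 0 3];
    lam = eig(A);               % all eigenvalues of A
    [~, idx] = max(abs(lam));   % locate the eigenvalue of largest magnitude
    lam(idx)                    % dominant eigenvalue; equals 6 for this example

(For this matrix, eig returns the eigenvalues 2, 4, and 6, consistent with the conjecture.)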
The SCALED POWER METHOD

The scaled power method can be described as follows. Let the initial guess u_0 be chosen as in Equation (1). We define the sequences of vectors

    v_{k+1} = A u_k,    u_{k+1} = v_{k+1} / m_{k+1},

where m_{k+1} is the entry of v_{k+1} of largest magnitude (with its sign), so each scaled vector u_{k+1} has infinity norm 1 and the scale factors m_{k+1} approximate the dominant eigenvalue.
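As a concrete first step (worked by hand for the example matrix, not shown on the slides), with A = [7 4 5; -2 2 -2; 1 0 3] and u_0 = [1 1 1]^T:

    v_1 = A u_0 = [16, -2, 4]^T,    m_1 = 16,    u_1 = v_1/16 = [1, -0.125, 0.25]^T,

so 16 is the first (rough) approximation to the dominant eigenvalue, and every later u_k keeps its entries between -1 and 1.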
Scaled Power Method code

Here we will use the infinity norm for the scaling and determine the approximation to the dominant eigenvalue at each step.

    pvec=[ ]; pval=[ ];
    [mv,kk]=max(abs(u)); u=u/u(kk);    % scale the initial guess by its entry of largest magnitude
    pvec=u'; pval=u(kk);
    format long e
    for k=1:15
        v=A*u;
        pval=[pval; v(kk)];            % eigenvalue approximation: entry of v at the previous max-magnitude index
        [mv,kk]=max(abs(v));           % locate the entry of v of largest magnitude
        w=v./v(kk);                    % scale v by that entry (keeps its sign)
        u=w;
        pvec=[pvec; w'];
    end
    [pvec, pval]

In this case the display contains the scaled eigenvectors w and the maximum value (with the appropriate sign) of the unscaled eigenvector v as an approximate dominant eigenvalue. In order to get the sign of the approximations to the dominant eigenvalue correct, we use the indicated code to find the entry of the approximate eigenvector that has the largest magnitude. This code also appears in the for loop.

    A=[7 4 5;-2 2 -2;1 0 3];
    u=[1 1 1]';                % sets the initial guess
Next use the code for the scaled power method.

    pvec=[ ]; pval=[ ];
    [mv,kk]=max(abs(u)); u=u/u(kk);
    pvec=u'; pval=u(kk);
    format long e
    for k=1:15, v=A*u; pval=[pval; v(kk)]; [mv,kk]=max(abs(v)); w=v./v(kk); u=w; pvec=[pvec; w']; end, [pvec,pval]

The output is also displayed in format short.
The power method is an iterative scheme, so a convergence tolerance must be specified and a stopping condition implemented. Three possibilities for the stopping condition immediately come to mind. The iteration could be terminated when any of the following is true:

- convergence of the eigenvalue: |λ^(k) - λ^(k-1)| < TOL
- convergence of the eigenvector: ||u^(k) - u^(k-1)|| < TOL
- convergence of the residual: ||A u^(k) - λ^(k) u^(k)|| < TOL

where TOL denotes the specified convergence tolerance and λ^(k) is used to denote the approximation to the eigenvalue during the k-th iteration. The author argues that using the check for convergence of the eigenvector is preferred. (See page 267.)
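In MATLAB these tests might be written as follows (a sketch; the variable names lambda_new, lambda_old, u_new, and u_old are illustrative and not from the slides):

    eig_test = abs(lambda_new - lambda_old) < TOL;            % eigenvalue check
    vec_test = norm(u_new - u_old, inf) < TOL;                % eigenvector check (preferred)
    res_test = norm(A*u_new - lambda_new*u_new, inf) < TOL;   % residual check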
power_method: approximate the dominant eigenvalue and an associated eigenvector for an arbitrary matrix using the power method.

calling sequences:
    [lambda, v] = power_method ( A, x, TOL, Nmax )
    lambda = power_method ( A, x, TOL, Nmax )
    power_method ( A, x, TOL, Nmax )

inputs:
    A      square matrix whose dominant eigenvalue is to be approximated
    x      initial approximation to the eigenvector corresponding to the dominant eigenvalue
    TOL    absolute error convergence tolerance (convergence is measured in terms of the
           infinity norm of the difference between successive terms in the eigenvector sequence)
    Nmax   maximum number of iterations to be performed

outputs:
    lambda approximation to the dominant eigenvalue of A
    v      an eigenvector of A corresponding to the eigenvalue lambda; the vector will be
           normalized to unit length in the infinity (maximum) norm
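The slides do not reproduce the body of the m-file; a minimal sketch that is consistent with the interface above (using infinity-norm scaling and the eigenvector convergence check; the name power_method_sketch marks it as an illustration, not the author's actual m-file) could look like this:

    function [lambda, v] = power_method_sketch ( A, x, TOL, Nmax )
    % illustrative scaled power method with an eigenvector-based stopping test
    [~, kk] = max(abs(x));
    u = x / x(kk);                      % scale the initial guess to infinity norm 1
    for k = 1:Nmax
        w = A*u;
        lambda = w(kk);                 % eigenvalue approximation (entry at previous max index)
        [~, kk] = max(abs(w));
        w = w / w(kk);                  % rescale so the largest-magnitude entry is 1
        if norm(w - u, inf) < TOL       % eigenvector convergence check
            u = w;
            break
        end
        u = w;
    end
    v = u;                              % normalized approximate eigenvector
    end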
Example: Use the m-file power_method to approximate the dominant eigenvalue of a matrix whose eigenvalues are λ_1 = -12, λ_2 = -3 and λ_3 = 3. Let's start with the vector x^(0) = [1 0 0]^T.

    A=[-2 -2 3; ; ]; x=[1 0 0]'; TOL = 5E-6; Nmax=20;
    power_method ( A, x, TOL, Nmax )

The last column of the output is called the convergence column. See the next slide.
The last column is an estimate for the asymptotic rate of linear convergence of the sequence {λ^(j)} toward the value λ_1 = -12. This column was computed as a ratio of errors in successive approximations, for example |λ^(j) - λ_1| / |λ^(j-1) - λ_1|. Note that the values in this column approach the value predicted by theory: |λ_2|/|λ_1| = 3/12 = 0.25.

If the matrix is symmetric, there is a variation of the power method that uses the 2-norm instead of the infinity norm. Details are in the text. For a discussion about special cases and their effect on the available m-file for the power method, see Some Final Comments Regarding the Power Method in the text.
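If the successive eigenvalue approximations λ^(j) are collected in a column vector (called lamseq below; this variable name is illustrative, not produced by the m-file), the convergence column can be reproduced with a few lines such as:

    lam1 = -12;                          % known dominant eigenvalue for this example
    err  = abs(lamseq - lam1);           % error in each approximation
    rate = err(2:end) ./ err(1:end-1);   % ratio of successive errors; should approach 0.25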
More information