Eigenvalues by row operations


Barton L. Willis
Department of Mathematics
University of Nebraska at Kearney
Kearney, Nebraska 68849

May 2015

1 Introduction

There is a wonderful article, "Down with Determinants!," by Sheldon Axler that shows how linear algebra can be done better without determinants. [1] The article received the Lester R. Ford Award for expository writing from the Mathematical Association of America. Axler's article shows how to prove some standard results in linear algebra without using determinants, but it doesn't show how to do calculations without using determinants. My article will show you two ways to find the eigenvalues of a matrix without using determinants. The first method (Section 2) uses only row operations; the second method (Section 3) applies ideas from Axler's article. A complete description of the second method is in an article by William A. McWorter and Leroy F. Meyers [2]; I do not know of a reference for the first method. Section 4 concludes with a method for solving generalized eigenvalue problems.

This article started as class notes for my spring 2015 linear algebra class. Since then, I have more than doubled the length, but I have kept some of the original style of the notes.

2 The row reduction method

A number z is an eigenvalue of a square matrix A provided A - zI is singular. The best way to determine whether a matrix is singular is to reduce it to a triangular form. So it seems that a good scheme for finding the eigenvalues of a matrix A would be to find a triangular form of A - zI. We'll show that A - zI has a triangular form with a simple representation, but finding it requires a trick. I'll show you how to do it for a 3 × 3 matrix; you'll need to convince yourself that the process can be done for an arbitrary square matrix. To illustrate, let's try to reduce the matrix A - zI, for a particular 3 × 3 matrix A,

to a triangular form. If the 1,1 entry were a nonzero constant, we could use it as a pivot and eliminate below it; but it involves z, so the value of z that makes it vanish would need to be handled in a separate calculation. This plan would result in a triangular form that is not simple (multiple cases). A better scheme is to swap rows 1 and 2. Now the 1,1 entry is a nonzero constant; we'll call it a good pivot because it is nonzero and constant. (My students will recognize the terms good pivot and bad pivot.) Row operations of the form R2 -> R2 - (...)R1 and R3 -> R3 - (...)R1 then place zeros below the diagonal in the first column.

Now we have trouble. The 2,2 and the 3,2 entries are both bad pivots (they are non-constant). This time, a row swap will not place a good pivot in the 2,2 position. We could try swapping columns, but for this matrix it doesn't help, because every entry of the 2,2 sub-matrix is a bad pivot. (The 2,2 sub-matrix is everything on row 2 and below and everything in column 2 and to the right; this concept generalizes to the i,j sub-matrix.)

It's time for a trick. We'll try to do a row operation of the form R2 -> R2 - θR3 that makes the 2,2 entry a good pivot. What value should we choose for θ? After the row operation, the 2,2 entry is the old 2,2 entry (quadratic in z) minus θ times the 3,2 entry (first degree in z). We'd like the new 2,2 entry to be constant and nonzero. Can this be arranged? Certainly: define θ to be the quotient of the 2,2 entry divided by the 3,2 entry, where by quotient we mean the quotient without the remainder. For our matrix, θ = -z + 5, and the row operation R2 -> R2 - (-z + 5)R3 makes the 2,2 entry the nonzero constant 6.

We were successful; the 2,2 entry is a good pivot. It's now possible to do R3 -> R3 + ((z + 1)/6)R2. The result is a triangular matrix whose diagonal entries are a constant, the constant 6, and, up to a constant factor, a cubic polynomial in z.
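The quotient-without-remainder that defines θ is ordinary polynomial long division with the remainder discarded. Here is a minimal sketch over exact rationals, with polynomials stored as coefficient lists, lowest degree first; the dividend is a stand-in quadratic chosen so that the quotient comes out to -z + 5, as in the example:

```python
from fractions import Fraction

def polydivmod(num, den):
    """Polynomial division for coefficient lists ordered lowest degree first.

    Returns (quotient, remainder) using exact rational arithmetic."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    if len(num) < len(den):
        return [Fraction(0)], num
    q = [Fraction(0)] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):
        # peel off the current leading term of the dividend
        q[i] = num[i + len(den) - 1] / den[-1]
        for j, d in enumerate(den):
            num[i + j] -= q[i] * d
    return q, num[:len(den) - 1] or [Fraction(0)]

# Dividing -z^2 + 4z + 7 (a stand-in for the 2,2 entry) by z + 1
# (a stand-in for the 3,2 entry) gives theta = -z + 5, remainder 2.
theta, rem = polydivmod([7, 4, -1], [1, 1])
print(theta, rem)
```

Only the quotient is used for θ; the remainder is exactly what is left in the 2,2 position after the row operation, which is why choosing θ this way drops the degree of that entry.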

This is the echelon form we desired. The characteristic polynomial is the product of the diagonal entries times (-1)^k, where k is the number of row swaps. We did one row swap, so the characteristic polynomial is the negative of that product; it is a cubic polynomial in z.

Let's find the eigenvectors. To start, it might seem that we should first find the roots of the characteristic polynomial. Any computer algebra system will tell us the solutions, but they are big ugly messes of nested radicals involving i. (You might guess that the imaginary part of such a root is nonzero, but it need not be: every cubic polynomial with real coefficients has at least one real root, and of the three roots of this polynomial, one is real.) What does a mathematician do when things get unbearably messy? We give the messy thing a name and push forward.

Let's try this approach. Let z be an eigenvalue. In the row-reduced matrix, regard z as that eigenvalue and use the fact that the characteristic polynomial vanishes at z; this makes the 3,3 entry of the triangular matrix zero. Now do back substitution. Naming the unknowns a, b, c, we see that c is free. Choosing c = 0 makes a = 0 and b = 0; eigenvectors must be nonzero, so the choice c = 0 doesn't work. Choosing c nonzero gives b as a quadratic polynomial in z and a as a cubic expression in z. We could stop now, but the expression for a can be simplified: expanding a and reducing it with the characteristic equation gives a representation for a that is also quadratic in z. The eigenvector is

    (a, b, c),    (1)

with entries that are polynomials of degree at most two in z, where z is any eigenvalue. Since the characteristic polynomial has three distinct roots, we were able to find all the eigenvectors using only one back substitution!
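The back substitution step can be sketched as follows. Once the last diagonal entry of the triangular form vanishes at an eigenvalue, the last unknown is free; set it to a nonzero value and solve upward. The upper-triangular matrix U below is my own stand-in, not the article's example, and the sketch assumes only the last pivot is zero:

```python
from fractions import Fraction

def null_vector(u):
    """Back substitution on a singular upper-triangular matrix.

    Assumes exactly the last diagonal entry is zero (as happens when the
    cubic diagonal entry is evaluated at an eigenvalue).  The last unknown
    is free; choose 1 (not 0!) and solve the remaining rows upward."""
    n = len(u)
    v = [Fraction(0)] * n
    v[-1] = Fraction(1)                      # free variable: any nonzero choice
    for i in range(n - 2, -1, -1):
        s = sum(Fraction(u[i][j]) * v[j] for j in range(i + 1, n))
        v[i] = -s / Fraction(u[i][i])
    return v

# Stand-in triangular matrix with a zero in the last diagonal position.
U = [[1, 2, 3],
     [0, 1, 4],
     [0, 0, 0]]
print([int(x) for x in null_vector(U)])  # [5, -4, 1]
```

Choosing the free variable to be 0 would force every other unknown to 0 as well, reproducing the observation above that c = 0 yields only the zero vector.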

Does the row reduction method work when the characteristic polynomial has a degenerate root? It does, but there is a slight twist. To illustrate, let's find the eigenvectors of a second 3 × 3 matrix. We begin by subtracting zI. Swapping rows 1 and 2 places a good pivot in the 1,1 position, and a row operation of the form R3 -> R3 - (...)R1 eliminates in the first column. Following the trick we did in the first example, we now apply R2 -> R2 - θR3, where θ is the quotient (without remainder) of the 2,2 entry divided by the 3,2 entry; here θ = (2 - z) ÷ (z - 2) = -1. This time the trick doesn't make the 2,2 entry a good pivot (the slight twist): the division is exact, so the new 2,2 entry is zero. Nevertheless, swapping rows 2 and 3 gives a triangular form whose last two diagonal entries are, up to constant factors, z - 2 and z^2 - 6z + 8 = (z - 4)(z - 2). The characteristic polynomial is (z - 2)^2 (z - 4); the eigenvalues are 2 and 4.

Unlike the first example, we can't find all three eigenvectors with a single back substitution. Substituting z -> 2 and solving gives two independent eigenvectors. The geometric multiplicity of this eigenvalue is 2 because the 2,2 and the 3,3 entries of the triangularized matrix have a common root. For this example, the common root was easy to identify; for more involved problems, it might be necessary to use the polynomial resultant to detect the common roots. [3] Substituting z -> 4 gives the third eigenvector.
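Detecting a common root of two diagonal entries, as the resultant does, can also be sketched with a polynomial gcd over the rationals: a nonconstant gcd exposes the shared root. The quadratic paired below with z^2 - 6z + 8 is my own example of an entry vanishing at z = 2; coefficient lists are ordered lowest degree first:

```python
from fractions import Fraction

def poly_mod(f, g):
    """Remainder of f divided by g; coefficient lists, lowest degree first."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    while len(f) >= len(g) and any(f):
        coef = f[-1] / g[-1]
        shift = len(f) - len(g)
        for j, c in enumerate(g):
            f[shift + j] -= coef * c
        f.pop()                          # leading coefficient is now zero
    while len(f) > 1 and f[-1] == 0:     # trim any leftover leading zeros
        f.pop()
    return f

def poly_gcd(f, g):
    """Monic gcd by the Euclidean algorithm.

    A gcd of degree >= 1 means the two polynomials share a root."""
    while len(g) > 0 and any(g):
        f, g = g, poly_mod(f, g)
    return [c / f[-1] for c in f]        # normalize to a monic polynomial

# gcd(z^2 - 4z + 4, z^2 - 6z + 8) = z - 2: the common root is z = 2.
print(poly_gcd([4, -4, 1], [8, -6, 1]))
```

A gcd that is the constant 1 certifies that the diagonal entries have no eigenvalue in common, so a single back substitution suffices, as in the first example.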

3 Method of McWorter and Meyers

Our second method comes from "Computing eigenvalues and eigenvectors without determinants," by William A. McWorter and Leroy F. Meyers. [2] After reading just two pages out of ten, I had a slap-on-the-forehead moment. The article uses an idea that I've used to prove theorems, but I had never connected the idea to a computational method.

Here is the idea. Let A be a square matrix, and let x be any nonzero vector. Find the greatest integer m such that the set {x, Ax, A^2 x, ..., A^m x} is linearly independent. Then there are scalars α_0, ..., α_m such that

    (α_0 I + α_1 A + α_2 A^2 + ... + α_m A^m + A^{m+1}) x = 0.

Equivalently, there are scalars z_1, ..., z_{m+1} such that

    (A - z_1 I)(A - z_2 I) ... (A - z_{m+1} I) x = 0.

Either (A - z_2 I) ... (A - z_{m+1} I)x is an eigenvector of A with eigenvalue z_1, or (A - z_2 I) ... (A - z_{m+1} I)x = 0. But the left side of this last equation is a nontrivial linear combination of the linearly independent set {x, Ax, A^2 x, ..., A^m x}, so it cannot vanish. Thus (A - z_2 I) ... (A - z_{m+1} I)x ≠ 0, and it is an eigenvector of A corresponding to the eigenvalue z_1.

Let's try this method on the example we worked using the row reduction method, with the same matrix A. Choose a nonzero vector x, and do row reduction on the matrix with columns x, Ax, A^2 x, A^3 x. From this we discover that A^3 x is a linear combination of x, Ax, and A^2 x; that is, (α_0 I + α_1 A + α_2 A^2 + A^3)x = 0 for scalars α_0, α_1, α_2. Factor this as

    (A - z_1 I)(A - z_2 I)(A - z_3 I)x = 0,

where z_1, z_2, and z_3 are the zeros of the cubic polynomial z^3 + α_2 z^2 + α_1 z + α_0. The eigenvector corresponding to the eigenvalue z_3 is (A - z_1 I)(A - z_2 I)x; its entries are polynomials in z_1 and z_2. Showing that this vector is parallel to the eigenvector of Eq. (1) is an algebraic nightmare. I've been able to verify it numerically, but I have not been able to prove it.

4 Generalized eigenvalues

Let A and B be square matrices of the same size. A complex number z is a generalized eigenvalue of (A, B) provided A - zB is singular. Determinants give a quick characterization of the generalized eigenvalues: z is a generalized eigenvalue of (A, B) provided det(A - zB) = 0. Can the computational method given in McWorter and Meyers be extended to find generalized eigenvalues? Can Axler's proofs be modified to develop a theory of generalized eigenvalues without resorting to determinants? I don't know. But I do know that the row operation method given in this article can be used to find generalized eigenvalues. Again, I demonstrate this by way of an example. Let's find the generalized eigenvalues of (A, B), where

    A = 1 2 3        B = 1 0 1
        4 5 6            0 1 0
        7 8 9            1 0 1

(When B is invertible, the generalized eigenvalues of (A, B) are simply the eigenvalues of B^{-1}A. In this example, the matrix B is singular.) We need to row reduce

    A - zB = 1 - z    2        3 - z
             4        5 - z    6
             7 - z    8        9 - z

Swapping rows 1 and 2 and eliminating in the first column yields a matrix whose first row is (4, 5 - z, 6) and whose remaining nonzero entries are polynomials in z of degree at most two.
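A short numeric sketch of the singularity condition det(A - zB) = 0 for this example follows; the entries of B in the code are my reading of the example and should be treated as an assumption of the sketch:

```python
from fractions import Fraction

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# z is a generalized eigenvalue of (A, B) exactly when A - z*B is singular.
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]   # assumed reading; note B is singular

def shifted(z):
    z = Fraction(z)
    return [[A[i][j] - z * B[i][j] for j in range(3)] for i in range(3)]

# det(A - zB) is linear in z here, so two sample values pin it down.
print(det3(shifted(0)), det3(shifted(1)))
```

With this reading, det(A - zB) evaluates to 0 at z = 0 and to 24 at z = 1, so the determinant is 24z and z = 0 is the only generalized eigenvalue, matching the row-reduction result.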

To eliminate in the second column, it would be advantageous to swap the second and third columns; doing so would place first-degree polynomials in the 2,2 and 3,2 positions. Instead of swapping columns, let's try to proceed as before. The quotient (without remainder) of the 3,2 entry divided by the 2,2 entry is 1, so we do the row operation R3 -> R3 - R2. The second column still does not have a good pivot. What do we do? We use our trick again, this time with the third row, because the 2,2 entry now has the greatest degree. The required quotient is a first-degree polynomial in z, and after the corresponding row operation the second column finally has a good pivot. Swapping the second and third rows and eliminating in the second column gives a triangular form whose diagonal entries are 4, 4, and a constant multiple of z. The only generalized eigenvalue is 0. As a check, we have det(A - zB) = 24z.

5 Acknowledgment

Sheldon Axler read a draft of this article and gave me the reference to the paper by McWorter and Meyers.

6 Conclusion

As presented in linear algebra books, all computations except eigenvalues rely on row reduction. Why should the eigenvalue problem be any different? This article shows how to solve both eigenvalue and generalized eigenvalue problems using a pure row reduction method.

One method that is not discussed in this article is due to Bareiss. [5] His method reduces a matrix to triangular form, but it changes the determinant. For example, applying his method to the matrix

    1 - z    2
    3        4 - z

yields the triangular matrix

    1 - z    2
    0        z^2 - 5z - 2

Based on the triangular form, it might seem that 1 is an eigenvalue of the matrix with rows (1, 2) and (3, 4), but it is not.

You may download software (written in Maxima [4]) for the row reduction method from my web page. [6] I end with a quote from Sheldon Axler: "Down with determinants!"

References

[1] Sheldon Axler, Down with Determinants!, American Mathematical Monthly 102 (1995) 139-154.

[2] William A. McWorter and Leroy F. Meyers, Computing eigenvalues and eigenvectors without determinants, Mathematics Magazine 71 (1998) 24-33.

[3] planetmath.org/encyclopedia/resultant.html.

[4] The Maxima Computer Algebra Project, maxima.sourceforge.net/.

[5] E. H. Bareiss, Sylvester's Identity and Multistep Integer-preserving Gaussian Elimination, Math. Comp. 22 (1968) 565-578.

[6] Web page of Barton Willis, www.unk.edu/acad/math/people/willisb/.