Unit VIII Matrices, Eigenvalues and Eigenvectors


These materials were developed for use at Grinnell College, and neither Grinnell College nor the author, Mark Schneider, assumes any responsibility for their suitability or completeness for use elsewhere, nor liability for any damages or injury that may result from their use elsewhere.

In this unit, we will investigate some of the techniques appropriate to matrix usage in physics. We will concentrate on two particular operations: the inversion of matrices, and the discovery of eigenvalues and eigenvectors.

The theoretical analysis of a system often proceeds naturally in one direction, starting with a model of the system and producing predictions for experimental data. Almost invariably, there are undetermined parameters in the model, which must be determined by experiment. Naturally, the understanding of these undetermined parameters will be of great interest, so we need to have methods of going back from the experimental data to the basic parameters. The connection between these two arenas is not uncommonly made through a matrix. So, to be able to interpret data, one must be able to invert the conversion matrix. An example of this is given in Koonin's book Computational Physics in section 5.4, where experimental high-energy electron scattering data are used to deduce the charge distribution of a nucleus. While we will not actually work through this example, I encourage you to glance at it to get some idea of a real application.

First, however, we should refamiliarize ourselves with simple matrix multiplication, and then we will move on to a practical method of inversion.

Guidebook Entry VIII.1: Matrix Multiplication

Recall that the multiplication of two matrices proceeds in a way that mixes the rows and columns. In particular, for a pair of 2x2 matrices, the product is given by

$$\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} \alpha a + \beta c & \alpha b + \beta d \\ \gamma a + \delta c & \gamma b + \delta d \end{pmatrix}.$$

Make a spreadsheet that multiplies two arbitrary 2x2 matrices. For format, you should do something like the following: allocate cells A1:B2 for the first matrix, A4:B5 for the second matrix, and A7:B8 for the product matrix. Check your sheet by verifying the product

$$\begin{pmatrix} -1 & 2 \\ 4 & -1 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 4 & 2 \end{pmatrix} = \begin{pmatrix} 7 & 2 \\ 0 & 6 \end{pmatrix}.$$
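If you would like a quick cross-check of that product before building the spreadsheet, Mathematica (which we will use later in this unit anyway) can do it in a few lines. This is just an optional sanity check, not part of the guidebook entry:

m1 = {{-1, 2}, {4, -1}};
m2 = {{1, 2}, {4, 2}};
m1 . m2    (* should return {{7, 2}, {0, 6}} *)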

In the next activity, you will use your multiplication techniques to invert one of these matrices. The technique we will use is more applicable to large matrices, but we will try it out on this minimal matrix first.

Guidebook Entry VIII.2: Inverting a 2x2 Matrix

The textbook technique usually encountered for inverting matrices involves adjoints and cofactors [Cramer's rule]--useful for proving some general results, but computationally expensive for actual calculations. Instead, we will use a method known as the Gauss-Jordan method.

Let us say we wish to invert a matrix A. In other words, we want to find the matrix A^{-1} such that

$$A^{-1} A = I,$$

where I is the identity matrix. It is difficult to guess A^{-1} directly, but it is not difficult to get to the identity matrix step by step by a series of matrix multiplications, so that we will have

$$T_3 T_2 T_1 A = I,$$

and therefore

$$T_3 T_2 T_1 = A^{-1}.$$

Let's try this out on a real example. Consider the matrix we saw above,

$$\begin{pmatrix} 1 & 2 \\ 4 & 2 \end{pmatrix}.$$

Multiply this on the left by the matrix

$$T_1 = \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}.$$

What is your result? You should have effectively replaced the first row with the second row minus the first row. Check to see that this is true. This has now placed a zero in the upper right element, a good step toward creating the identity matrix.

With the result above, how can you combine the top and bottom rows in a way that allows you to replace the bottom row with a new row that has a zero in the left position?
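As a quick, optional check of this first step in Mathematica rather than your spreadsheet:

t1 = {{-1, 1}, {0, 1}};
a  = {{1, 2}, {4, 2}};
t1 . a    (* gives {{3, 0}, {4, 2}}: the first row is now row 2 minus row 1 *)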

How can you write that operation as a matrix (T2)? Multiply the intermediate result above by this matrix, and make sure that your result agrees with your multiplication by hand.

You should now have a matrix that is purely diagonal, although not yet the identity matrix. Show that you can produce the identity matrix by multiplying by another diagonal matrix. What is that matrix (T3) for your example?

Calculate the matrix T3 T2 T1, and then show by direct multiplication that this is the inverse of our original matrix.
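For reference, here is a hedged Mathematica sketch of the whole elimination chain. The particular T2 and T3 shown are only one possible choice (your own answers may differ depending on the order in which you clear the elements). The function gaussJordanInverse at the end is a hypothetical helper name, not a Mathematica built-in; it carries out the same strategy for an arbitrary n x n matrix, assuming every pivot it meets is nonzero. For a 3x3 matrix it uses six eliminations plus one rescaling, i.e., the seven multiplications mentioned in the homework extension described below.

a  = {{1, 2}, {4, 2}};
t1 = {{-1, 1}, {0, 1}};        (* row 1 -> row 2 - row 1 *)
t2 = {{1, 0}, {-4/3, 1}};      (* row 2 -> row 2 - (4/3) row 1 *)
t3 = {{1/3, 0}, {0, 1/2}};     (* rescale the diagonal to 1 *)
t3 . t2 . t1 . a               (* the identity matrix *)
t3 . t2 . t1 == Inverse[a]     (* True *)

(* The same strategy for a general n x n matrix (no pivoting). *)
gaussJordanInverse[mat_?MatrixQ] := Module[{n = Length[mat], work = mat, acc, t},
  acc = IdentityMatrix[n];
  Do[
    If[i != j,
      t = IdentityMatrix[n];
      t[[i, j]] = -work[[i, j]]/work[[j, j]];   (* clear element (i, j) *)
      work = t . work;
      acc = t . acc],
    {j, n}, {i, n}];
  DiagonalMatrix[1/Diagonal[work]] . acc]       (* final diagonal rescaling *)

gaussJordanInverse[{{1, 2}, {4, 2}}]            (* {{-1/3, 1/3}, {2/3, -1/6}} *)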

Now, write a spreadsheet that attempts this inversion automatically for an arbitrary 2x2 matrix. To do this, I suggest you make your sheet as follows. Place two multiplying routines side by side. The left one starts out multiplying the original matrix [A] from the left; the right one starts out multiplying the identity matrix from the left. Now make the left multiplicand [T1] for the left multiplication such that it produces a result with a zero in the upper right corner. You should be able to do this such that the sheet will do it automatically for any starting matrix. Use this T1 matrix also as the left multiplicand of the identity matrix. Then cut and paste the multiplying routines down immediately below the originals (leave a space to keep things pretty). In each case, the new right-hand multiplicand is the result from the multiplication immediately above. You should be able to generate the correct T matrices based on the values in certain cells of the various results. Once you think you have this working, test it on several matrices by directly multiplying the resulting inverses by the original matrices. Give some examples below.

In a homework problem, you will extend this method to 3x3 matrices. In that case, you simply need to work systematically through the off-diagonal elements, getting each to turn to zero one at a time. With seven multiplications, you should have your inverse.

Now let's turn to the eigenvalue problem. This turns up often in physics, most familiarly in complex oscillating systems (searching for normal modes) and in quantum mechanics in its matrix form. Let's look at a real problem, which is developed in Acton's book Numerical Methods that Work at the beginning of Chapter 8.

Guidebook Entry VIII.3: Coupled Oscillations

Imagine that we have three identical objects connected in series by a set of springs as shown:

[Figure: a wall attached by a spring of constant 3k to a mass m (displacement s1), which is attached by a spring of constant 2k to a second mass m (displacement s2), which is attached by a spring of constant k to a third mass m (displacement s3).]

Show that this system can be described by the differential equations

$$-\frac{m}{k}\frac{d^2 s_1}{dt^2} = 5 s_1 - 2 s_2,$$
$$-\frac{m}{k}\frac{d^2 s_2}{dt^2} = -2 s_1 + 3 s_2 - s_3,$$
$$-\frac{m}{k}\frac{d^2 s_3}{dt^2} = -s_2 + s_3.$$

The usual game played here is to assume normal mode solutions; that is, each object oscillates sinusoidally, all at the same frequency. The solutions can then be written as

$$s_i = A_i \cos\omega t.$$

If we define a new version of the frequency as

$$x = \frac{\omega^2 m}{k},$$

show that these simultaneous differential equations reduce to the following algebraic equations:

$$x A_1 = 5 A_1 - 2 A_2$$
$$x A_2 = -2 A_1 + 3 A_2 - A_3$$

xa 3 = A 2 + A 3 Show that this can also be written as the matrix equation 5 x 2 0 A 1 0 2 3 x 1 A 2 = 0. 0 1 1 x A 3 0 If this were an equation with scalars, we would say that one of the two multiplicands must be non-invertable, that is, zero. Matrices are a bit more interesting. They may not have an inverse and still be non-zero. This affords us the opportunity to have some interesting solutions here. We know that a matrix can't be inverted if its determinant is zero. So, what we can do is find the determinant of this matrix, solve for the x values that make this go to zero, and we have found the normal mode solutions. These are also called eigenvalue solutions, or eigensolutions, which comes from the German word for "same." The appropriateness of this terminology is clearer if we rewrite the matrix equation as 5 2 0 A 1 A 1 2 3 1 A 2 = x A 2 0 1 1 A 3 A 3 where the vector is called an eigenvector, and x is the characteristic value, or eigenvalue. Unit VII Page 6

Let's use Mathematica to investigate the solutions to this a bit. First, let's enter the matrix. To do this, you simply enter the numbers as a list of lists:

a={{5-x,-2,0},{-2,3-x,-1},{0,-1,1-x}}

Taking the determinant is no more complicated than

Det[a]

which should give you a cubic polynomial in x. This suggests there are three zeros, which agrees nicely with the number of degrees of freedom of our oscillating system.

At this point, you may wish to solve for the eigenvalues analytically using Solve[%==0,x], where the % refers to the output line immediately above, which must then be placed into the form of an equation, hence the ==0. In case you have some intervening outputs, you can always refer to, say, output line 23 as %23. Try using Solve, and describe your results. You may want to get actual numbers, which you can do with N[%], as we have used before.

If you are just interested in numerical values for these roots, you may find it easier to first plot the determinant:

Plot[%,{x,-10,10}]

to see where the zeros might be, and then close in on them individually by making a good estimate of their location in the command

FindRoot[%==0,{x,0}]

where here I am guessing there is a root near x = 0. (Keep in mind that % must refer back to the determinant itself, not to the plot, so you may need %n with the appropriate output number.) Do this, sketch the curve, and find all three roots.
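For convenience, here is the same sequence collected into one block that you could paste in all at once. It stores the determinant in a variable (p is just a name I have chosen) so that none of the later commands depend on which output line % happens to point at:

a = {{5 - x, -2, 0}, {-2, 3 - x, -1}, {0, -1, 1 - x}};
p = Det[a]                 (* the cubic polynomial in x *)
Solve[p == 0, x]           (* exact roots *)
N[%]                       (* their numerical values *)
Plot[p, {x, -10, 10}]
FindRoot[p == 0, {x, 0}]   (* close in on the root near x = 0 *)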

Now let's see how smart Mathematica can be. Enter a new matrix, the matrix whose eigenvalues we actually want to find:

b={{5,-2,0},{-2,3,-1},{0,-1,1}}

Let Mathematica do the hard work and find the eigenvalues:

Eigenvalues[N[b]]

and the eigenvectors:

Eigenvectors[N[b]]

The use of N forces the matrix to be interpreted numerically, which then gives real numbers as a result. How do the eigenvalues compare with the values you got earlier?
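One optional way to convince yourself of Mathematica's answer is to ask for the eigenvalues and eigenvectors together with the built-in Eigensystem, and then verify each pair directly: b.v should equal the eigenvalue times v, so each entry of the check below should be a list of (numerical) zeros.

b = {{5, -2, 0}, {-2, 3, -1}, {0, -1, 1}};
{vals, vecs} = Eigensystem[N[b]];
vals                                              (* compare with your three roots *)
Chop[Table[b . vecs[[i]] - vals[[i]] vecs[[i]], {i, 3}]]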