MAT2342: Introduction to Linear Algebra. Mike Newman, 5 October 2017, assignment 1.


[/28] You must show your work. You may use software or solvers of some sort to check calculations of eigenvalues, but you should solve them by hand. You may use software or solvers to calculate matrix products. There are some suggested links on the course webpage; more to be added, and of course others exist as well.

[8] 1. Consider a model of three species, two predators (R and S) and one prey (P). The populations are linked by the following dynamical system, where R_k and S_k are the populations of the two predators and P_k is the population of prey (in 1000's), all at time k.

    R_{k+1} =  0.7 R_k - 0.1 S_k + 0.1 P_k
    S_{k+1} = -0.1 R_k + 0.7 S_k + 0.1 P_k
    P_{k+1} = -0.4 R_k - 0.4 S_k + 1.2 P_k

a) Give the transition matrix A.

        [  0.7  -0.1  0.1 ]
    A = [ -0.1   0.7  0.1 ]
        [ -0.4  -0.4  1.2 ]

b) The initial population is R_0 = 30, S_0 = 40 and P_0 = 90. Without calculating eigenvalues or eigenvectors, give the populations after three time steps.

We calculate A^3 x_0 as follows.

    [  0.268  -0.244  0.244 ] [ 30 ]   [ 20.24 ]
    [ -0.244   0.268  0.244 ] [ 40 ] = [ 25.36 ]
    [ -0.976  -0.976  1.488 ] [ 90 ]   [ 65.6  ]

Actually it might have been better to calculate the following.

    A x_0 = A (30, 40, 90)^T = (26, 34, 80)^T
    A x_1 = A (26, 34, 80)^T = (22.8, 29.2, 72)^T
    A x_2 = A (22.8, 29.2, 72)^T = (20.24, 25.36, 65.6)^T

Why is the second method better than the first?

c) Find the eigenvalues of the transition matrix A.
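The arithmetic in part (b) is easy to check numerically before turning to eigenvalues. A minimal pure-Python sketch (no external libraries; the matrix and initial populations are those given above):

```python
# Iterate x_{k+1} = A x_k three times, as in part (b) of problem 1.
A = [[0.7, -0.1, 0.1],
     [-0.1, 0.7, 0.1],
     [-0.4, -0.4, 1.2]]

def mat_vec(M, v):
    # Matrix-vector product, computed entry by entry.
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

x = [30.0, 40.0, 90.0]  # (R_0, S_0, P_0)
for k in range(3):
    x = mat_vec(A, x)
    print(f"after step {k + 1}:", [round(t, 2) for t in x])
# after step 3: [20.24, 25.36, 65.6]
```

Repeating the matrix-vector product three times also uses far fewer multiplications than forming A^3 first, which is one possible answer to the "why is the second method better" question.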

We start by finding the characteristic polynomial.

    det(A - λI) = det [ 0.7-λ   -0.1    0.1   ]
                      [ -0.1    0.7-λ   0.1   ]
                      [ -0.4    -0.4    1.2-λ ]

Expanding along the first row,

    det(A - λI)
    = (0.7-λ) det [ 0.7-λ  0.1; -0.4  1.2-λ ] - (-0.1) det [ -0.1  0.1; -0.4  1.2-λ ] + (0.1) det [ -0.1  0.7-λ; -0.4  -0.4 ]
    = (0.7-λ)((0.7-λ)(1.2-λ) + 0.04) + (0.1)((-0.1)(1.2-λ) + 0.04) + (0.1)(0.04 + (0.4)(0.7-λ))
    = (0.7-λ)(λ^2 - 1.9λ + 0.88) + (0.1)(0.1)(λ - 0.8) + (0.1)(-0.4)(λ - 0.8)
    = (0.7-λ)(λ - 0.8)(λ - 1.1) + (0.1)(0.1)(λ - 0.8) + (0.1)(-0.4)(λ - 0.8)
    = (λ - 0.8)((0.7-λ)(λ - 1.1) + 0.01 - 0.04)
    = (λ - 0.8)(-λ^2 + 1.8λ - 0.8)
    = -(λ - 0.8)(λ - 0.8)(λ - 1)

The eigenvalues are λ = 0.8 of multiplicity two and λ = 1 of multiplicity one.

d) Find a basis for each eigenspace of the transition matrix.

For λ = 0.8 we find the kernel of A - 0.8I.

    [ -0.1  -0.1  0.1 ]       [ 1  1  -1 ]
    [ -0.1  -0.1  0.1 ]   ->  [ 0  0   0 ]
    [ -0.4  -0.4  0.4 ]       [ 0  0   0 ]

There is one pivot in the first column and two free variables, so two parameters. A basis for the eigenspace is

    { (-1, 1, 0)^T , (1, 0, 1)^T }.

Note: other bases are possible! It depends on how the free variables are chosen. The free variables must be chosen so as to avoid linear dependencies; the easiest way to do this (and the simplest to calculate with, too) is the way we did it: set each free variable to one and the others to zero. But this is not the only way.

For λ = 1 we find the kernel of A - I.

    [ -0.3  -0.1  0.1 ]       [ 1  0  -1/4 ]
    [ -0.1  -0.3  0.1 ]   ->  [ 0  1  -1/4 ]
    [ -0.4  -0.4  0.2 ]       [ 0  0   0   ]

Two pivots and one free variable. A basis for the eigenspace is (1/4, 1/4, 1)^T, or equivalently (1, 1, 4)^T.

e) Is the transition matrix diagonalizable? If so, give a decomposition A = P D P^{-1}.

For λ = 0.8 we have a multiplicity of two and a dimension of two; for λ = 1 we have a multiplicity of one and a dimension of one. Since each dimension is equal to its multiplicity, we know that A = P D P^{-1} exists. Specifically, the columns of P are the vectors in all of the eigenbases, and the diagonal of D is the corresponding eigenvalues. So

        [ -1  1  1 ]          [ 0.8   0   0 ]
    P = [  1  0  1 ]    D  =  [  0   0.8  0 ]
        [  0  1  4 ]          [  0    0   1 ]

NB: other answers are possible, by either changing the order of the eigenvectors and eigenvalues (consistently) or choosing other bases for each eigenspace.
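Each claimed eigenpair can be verified straight from the definition A v = λ v. A quick pure-Python check (the tolerance is an arbitrary choice):

```python
# Verify A v = lambda v for the eigenpairs found in parts (c) and (d).
A = [[0.7, -0.1, 0.1],
     [-0.1, 0.7, 0.1],
     [-0.4, -0.4, 1.2]]

def mat_vec(M, v):
    # Matrix-vector product, computed entry by entry.
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

pairs = [
    (0.8, [-1, 1, 0]),  # lambda = 0.8, first basis vector
    (0.8, [1, 0, 1]),   # lambda = 0.8, second basis vector
    (1.0, [1, 1, 4]),   # lambda = 1
]
for lam, v in pairs:
    Av = mat_vec(A, v)
    assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(3))
print("all three eigenpairs check out")
```

Three independent eigenvectors for a 3x3 matrix is exactly the count that part (e) needs for diagonalizability.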
f) Give the vector of initial populations in terms of the eigenbasis.

We solve (30, 40, 90)^T = c_1 (-1, 1, 0)^T + c_2 (1, 0, 1)^T + c_3 (1, 1, 4)^T by row-reducing.
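The coefficients produced by the row reduction, c_1 = 30, c_2 = 50, c_3 = 10, can be verified by substituting back into the eigenbasis. A one-line pure-Python check:

```python
# Check that 30 v1 + 50 v2 + 10 v3 reproduces the initial populations.
v1, v2, v3 = [-1, 1, 0], [1, 0, 1], [1, 1, 4]
c1, c2, c3 = 30, 50, 10
x0 = [c1 * a + c2 * b + c3 * c for a, b, c in zip(v1, v2, v3)]
print(x0)  # [30, 40, 90]
```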

    [ -1  1  1 | 30 ]       [ 1  0  0 | 30 ]
    [  1  0  1 | 40 ]   ->  [ 0  1  0 | 50 ]
    [  0  1  4 | 90 ]       [ 0  0  1 | 10 ]

This gives c_1 = 30, c_2 = 50, c_3 = 10.

g) Give an exact expression for the population vector at time k in terms of the eigenvalues and eigenvectors.

    x_k = A^k x_0 = A^k (30 v_1 + 50 v_2 + 10 v_3)
        = 30 A^k v_1 + 50 A^k v_2 + 10 A^k v_3
        = 30 (0.8)^k (-1, 1, 0)^T + 50 (0.8)^k (1, 0, 1)^T + 10 (1)^k (1, 1, 4)^T

h) Describe the long-term behaviour. Do the populations stabilize, disappear, grow without bound?

Eventually the (0.8)^k terms approach zero, which leaves only the last term. So in the long term the populations approach 10 (1)^k (1, 1, 4)^T = (10, 10, 40)^T, and so stabilize.

[8] 2. In each case the eigenvalues of a transition matrix are given, as well as a basis for each eigenspace. Determine whether the populations will eventually stabilize, disappear, or grow without bound. Determine the eventual (approximate) ratio between the two populations. In doing so, you may assume the initial population vector is not an eigenvector.

a) λ_1 = 1 with given eigenvector v_1, and λ_2 = 0.5 with given eigenvector v_2.

We imagine that we have written x_0 in terms of the eigenbasis, so we have constants c_1 and c_2 such that x_0 = c_1 v_1 + c_2 v_2. The dominant eigenvalue is λ_1 = 1. Thus in the long run we have the estimate x_k ≈ (1)^k c_1 v_1. So the eventual tendency is for x_k to approach the constant vector c_1 v_1: the populations will stabilize. The ratio of the first population to the second population will eventually approach the ratio of the two entries of v_1, since the common factor (1)^k c_1 cancels.

Note: there is a small wrinkle: what happens if c_1 = 0? Then the above doesn't make sense. But if c_1 = 0 then x_0 = c_2 v_2, and hence x_0 would be an eigenvector for λ_2. Since we are told x_0 is not an eigenvector, we know that c_1 ≠ 0, so the above is valid.

b) λ_1 = 1.5 with given eigenvector v_1, and λ_2 = 0.5 with given eigenvector v_2.

The dominant eigenvalue is λ_1 = 1.5. Thus in the long run we have the estimate x_k ≈ (1.5)^k c_1 v_1. So the eventual tendency is a_k ≈ (1.5)^k c_1 times the first entry of v_1 and b_k ≈ (1.5)^k c_1 times the second entry.

The populations grow exponentially. The ratio of the first population to the second population will eventually approach the ratio of the two entries of v_1, since the common factor (1.5)^k c_1 cancels. As before, we know that c_1 ≠ 0 since x_0 is not an eigenvector. Note that if c_1 = 0 then x_0 would be a multiple of v_2, and hence x_k would converge to zero, since λ_2 < 1.

c) λ_1 = 1.5 with given eigenvector v_1, and λ_2 = 1 with given eigenvector v_2.

The dominant eigenvalue is λ_1 = 1.5. Thus in the long run we have the estimate x_k ≈ (1.5)^k c_1 v_1. So the eventual tendency is a_k ≈ (1.5)^k c_1 times the first entry of v_1 and b_k ≈ (1.5)^k c_1 times the second entry. The populations grow exponentially. The ratio of the first population to the second population will eventually approach the ratio of the two entries of v_1. As before, we know that c_1 ≠ 0 since x_0 is not an eigenvector. Note that if c_1 = 0 then x_0 would be a multiple of v_2, and hence x_k would remain constant, since λ_2 = 1.

Note: here something slightly different happens. In the previous examples, the remaining eigenvalues satisfied |λ| < 1, so their contribution goes to zero. In this example, the second eigenvalue is λ_2 = 1, so its contribution does not go to zero. However, its relative contribution (compared to the contribution from the dominant term) goes to zero, so the ratio is still given by the eigenvector.

[8] 3. Consider the transition matrix

    A = [ 17/13  -24/13 ]
        [ 12/13   -7/13 ]

where we define x_{k+1} = A x_k with x_k = (a_k, b_k)^T.

a) Explain why the trajectories for this matrix are rotations (hint: what are the eigenvalues?).

We find the eigenvalues.

    det(A - λI) = det [ 17/13 - λ   -24/13    ]
                      [ 12/13       -7/13 - λ ]
                = (17/13 - λ)(-7/13 - λ) + (24/13)(12/13)
                = λ^2 - (10/13)λ - 119/169 + 288/169
                = λ^2 - (10/13)λ + 1

Using the quadratic formula we find the roots.

    λ = [ 10/13 ± √((10/13)^2 - 4(1)(1)) ] / 2(1)
      = (1/13)(5 ± √(25 - 169))
      = (1/13)(5 ± √(-144))
      = (5 ± 12i)/13

These are complex roots. They are conjugate (which we expected, since the polynomial is real). Therefore they correspond to a rotation.

b) Give the values r, θ and the matrices Q, R_θ such that A = r Q R_θ Q^{-1}, as we defined them in class.

We take λ = 5/13 - (12/13)i. In order to write this in polar form we consider a right-angle triangle with horizontal leg 5/13, vertical leg 12/13 and angle θ between them.
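The complex-eigenvalue computation above can be double-checked with Python's built-in complex arithmetic (a sketch using the standard `cmath` module; the entries are those of A in problem 3):

```python
import cmath

# Eigenvalues of A = (1/13)[[17, -24], [12, -7]] via the quadratic formula.
a, b, c, d = 17 / 13, -24 / 13, 12 / 13, -7 / 13
tr = a + d           # trace, 10/13
det = a * d - b * c  # determinant, 1
disc = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
print(lam1)               # (5 + 12i)/13, approximately 0.3846 + 0.9231i
print(abs(lam1))          # modulus r, which should be 1 (up to rounding)
print(cmath.phase(lam2))  # -arctan(12/5), the (negative) rotation angle
```

A modulus of exactly 1 is what makes the trajectories rotate without spiralling in or out.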

We have |λ| = √((5/13)^2 + (12/13)^2) = √(169/169) = 1 and θ = arctan(12/5). Actually the angle is negative, so we write λ = 5/13 - (12/13)i = e^{-iθ}. Thus we have

    r = 1,   R_θ = [ cos θ  -sin θ ]   with cos θ = 5/13, sin θ = 12/13.
                   [ sin θ   cos θ ]

In order to find Q we need an eigenvector for λ = (5 - 12i)/13.

    A - λI = [ 17/13 - (5/13 - 12i/13)   -24/13                  ]   =   [ (12 + 12i)/13   -24/13         ]
             [ 12/13                     -7/13 - (5/13 - 12i/13) ]       [ 12/13           (-12 + 12i)/13 ]

Multiplying R_1 by (1 - i) gives ( 24/13, (-24 + 24i)/13 ). Scaling R_1 by 13/24 and R_2 by 13/12 then gives two equal rows, and subtracting one from the other leaves

    [ 1   -1 + i ]
    [ 0    0     ]

This was maybe a little overly detailed..., but in any case we have a basis for the eigenspace: { (2, 1 + i)^T } (take the free variable equal to 1 + i). We write this vector in terms of real and imaginary parts as (2, 1)^T + i (0, 1)^T. Then these two vectors give the columns of Q.

    Q = [ 2  0 ]
        [ 1  1 ]

Thus we have a decomposition for A.

    A = r Q R_θ Q^{-1} = (1) [ 2  0 ] [ cos θ  -sin θ ] [ 2  0 ]^{-1}
                             [ 1  1 ] [ sin θ   cos θ ] [ 1  1 ]

c) Describe the long-term behaviour of x_k. Specifically: will the two populations (a_k and b_k) die out, grow exponentially, ...? Will the ratio of the two populations (a_k/b_k) eventually stabilize? Explain briefly; you may use the meanings of r, Q, R_θ given in class.

Since r = 1, the trajectories will neither spiral outwards nor inwards, and so will remain on an elliptical curve. There will be a rotation by θ = arctan(12/5), but this will not be a circular rotation, since the columns of Q are not orthonormal. In slightly more detail, the transformation consists of expressing x in terms of the basis formed by the columns of Q, performing a rotation by θ on the vector of those coefficients, taking the result as a linear combination of the columns of Q, and then multiplying by r (which does nothing here, since r = 1).

[4] 4. Let

    A = [ 1  p ]
        [ 0  1 ]

with p ≠ 0. Show that A is not diagonalizable. As part of this exercise, you should find all the eigenvalues (including multiplicities) and a basis for every eigenspace.

We find the eigenvalues and eigenvectors.

    det(A - λI) = det [ 1 - λ   p     ] = (1 - λ)(1 - λ)
                      [ 0       1 - λ ]

So the only eigenvalue is λ = 1, of multiplicity two. Notice that the number of arithmetical operations we did was zero, due to the 0's below the diagonal. This might serve as a reminder that the eigenvalues

of a triangular matrix are in fact the entries on the diagonal. In any case, we row-reduce A - λI.

    A - I = [ 0  p ]   -> (R_1 <- (1/p) R_1) ->   [ 0  1 ]
            [ 0  0 ]                              [ 0  0 ]

So the eigenvector is (1, 0)^T (are you sure you understand where this vector came from?). Since the basis for the eigenspace has size one, the dimension of the eigenspace is one, which is less than the multiplicity. The matrix is not diagonalizable.

[+] 5. (bonus) Let

    A = [ 1  p ]
        [ 0  1 ]

with p ≠ 0. Prove (by induction or otherwise) that

    A^k = [ 1  kp ]
          [ 0  1  ]

Let x_0 = (a_0, b_0)^T and define x_{k+1} = A x_k. Give formulae for a_k and b_k in terms of a_0, b_0, p, k.

Since A^1 = A, the claim is certainly true for k = 1. Now assume that it is true for some particular k. Then we find A^{k+1}.

    A^{k+1} = A^k A = [ 1  kp ] [ 1  p ] = [ 1  p + kp ] = [ 1  (k+1)p ]
                      [ 0  1  ] [ 0  1 ]   [ 0  1      ]   [ 0  1      ]

Thus A^k = [1, kp; 0, 1] for all positive k (in fact it is true for all integers k). Using the above, we find x_k.

    x_k = A^k x_0 = [ 1  kp ] [ a_0 ] = [ a_0 + kp b_0 ]     so     a_k = a_0 + kp b_0,   b_k = b_0.
                    [ 0  1  ] [ b_0 ]   [ b_0          ]

Note: a diagonalizable matrix A can be written as A = P D P^{-1} where D is zero aside from 1x1 blocks (a funny way of saying that D is diagonal). An arbitrary matrix can be written as A = P J P^{-1} where J is zero aside from square blocks, each with a constant diagonal and 1's on the superdiagonal. This is the Jordan decomposition. So if A is a non-diagonalizable 2x2 matrix, then there is an invertible matrix P such that

    A = P [ λ  1 ] P^{-1}
          [ 0  λ ]

If we let the columns of P be v_1 and v_2, then we can find constants c_1, c_2 with x_0 = c_1 v_1 + c_2 v_2. Then we can evaluate x_k as follows.

    x_k = A^k x_0 = (P J P^{-1})^k x_0 = P J^k P^{-1} x_0 = P J^k (c_1, c_2)^T

Now, using the above, we find J^k.

    J^k = [ λ  1 ]^k = [ λ^k  kλ^{k-1} ] = λ^k [ 1  k/λ ]
          [ 0  λ ]     [ 0    λ^k      ]       [ 0  1   ]

So this gives x_k.

    x_k = P J^k (c_1, c_2)^T = P λ^k (c_1 + c_2 k/λ, c_2)^T = λ^k ( (c_1 + c_2 k/λ) v_1 + c_2 v_2 )

This is much like what would have happened for a diagonalizable matrix, except for the extra term c_2 k/λ in the coefficient of v_1. What happens if λ > 1, or if λ < 1, or if λ < 0? What if λ = 1?
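The formula A^k = [1, kp; 0, 1] and the resulting expressions for a_k and b_k are easy to confirm numerically (a sketch; the values p = 7, a_0 = 2, b_0 = 3 are arbitrary test choices, not part of the problem):

```python
# Confirm A^k = [[1, k*p], [0, 1]] for the first few k, and the formulas
# a_k = a_0 + k*p*b_0, b_k = b_0 from problem 5.
def mat_mul(X, Y):
    # 2x2 matrix product.
    return [[sum(X[i][r] * Y[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

p = 7.0
A = [[1.0, p], [0.0, 1.0]]
Ak = [[1.0, 0.0], [0.0, 1.0]]  # A^0 = I
for k in range(1, 6):
    Ak = mat_mul(Ak, A)
    assert Ak == [[1.0, k * p], [0.0, 1.0]]

a0, b0 = 2.0, 3.0
ak, bk = a0, b0
for k in range(1, 6):
    ak, bk = ak + p * bk, bk  # one step of x_{k+1} = A x_k
assert (ak, bk) == (a0 + 5 * p * b0, b0)
print("A^k and the closed-form populations agree for k = 1..5")
```

All the numbers involved are exactly representable as floats, so the equality checks are safe here; with non-integer p a tolerance would be the more careful choice.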