
CHAPTER III APPLICATIONS

The eigenvalues are λ = 5, 5. An orthonormal basis of eigenvectors consists of , .

The eigenvalues are λ = , . A basis of eigenvectors consists of , , which are not perpendicular. However, the matrix is not symmetric, so there is no special reason to expect that the eigenvectors will be perpendicular.

The eigenvalues are , , . An orthonormal basis is , , .

4 The columns of the matrix form an orthonormal basis of eigenvectors corresponding to the eigenvalues 4, , 6.

6 (P^t AP)^t = P^t A^t (P^t)^t = P^t AP, since A^t = A and (P^t)^t = P.

(a) , (b) , .
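The transpose identity above, (P^t AP)^t = P^t AP for symmetric A, is what guarantees that conjugating by an orthogonal P preserves symmetry, and for an orthonormal eigenbasis it makes P^t AP diagonal. A minimal NumPy sketch (the matrix here is hypothetical, not one of the book's exercises):

```python
import numpy as np

# Hypothetical symmetric matrix -- not one of the book's exercises.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is for symmetric matrices: it returns the eigenvalues in ascending
# order and an orthonormal set of eigenvectors as the columns of P.
eigenvalues, P = np.linalg.eigh(A)

# Because A is symmetric and P is orthogonal, P^t A P is the diagonal
# matrix of eigenvalues, and P^t P = I.
D = P.T @ A @ P
print(np.allclose(D, np.diag(eigenvalues)), np.allclose(P.T @ P, np.eye(2)))
```

Here `eigh` does both jobs at once: it orders the eigenvalues and returns orthonormal eigenvectors, so P^t AP comes out diagonal without any extra normalization.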

The eigenvalues are  with multiplicity  and  with multiplicity . A basis for the eigenspace corresponding to the first eigenvalue is , . Applying Gram–Schmidt to this yields , . An eigenvector of length 1 for the other eigenvalue is . An orthonormal basis of eigenvectors is , , (the last normalized by a factor involving √6).

The characteristic polynomial is [(λ + )(λ + ) − 6]. Put X = λ + ; the equation then factors as (X − 4)(X + ) = 0, so the roots are X = 4, . That means the eigenvalues are λ =  with multiplicity  and λ =  with multiplicity . For λ = 4, a normalized eigenvector is u = . For λ = , v₁ = , v₂ =  form a basis of eigenvectors. Applying Gram–Schmidt yields u₁ = , u₂ = . However, it would also make sense to reverse the order and apply Gram–Schmidt to obtain u₁ = , u₂ = .
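The Gram–Schmidt step used in these solutions, orthonormalizing a basis of an eigenspace, can be sketched as follows (the input vectors are hypothetical stand-ins, since the originals were lost in this transcription):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors, in order."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for u in basis:
            w = w - (w @ u) * u        # subtract the projection onto each earlier u
        basis.append(w / np.linalg.norm(w))
    return basis

# Hypothetical eigenspace basis (not the book's vectors).
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, 0.0, 1.0])
u1, u2 = gram_schmidt([v1, v2])

print(np.isclose(u1 @ u2, 0.0), np.isclose(np.linalg.norm(u2), 1.0))
```

Reversing the input order produces a different, equally valid orthonormal basis of the same eigenspace, exactly as the solution remarks.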

4 (a) v₁′ = v₁ and v₂′ = v₂ − cv₁ for an appropriate scalar c, so it is clear that v₂′ can't be zero unless v₂ is a multiple of v₁. (b) v₃′ = v₃ − yv₂′ − zv₁′ for appropriate scalars y, z. If it were zero, that would say that v₃ is a linear combination of v₂′ and v₁′, which we know are linear combinations of v₁ and v₂. Hence, v₃ would be a linear combination of v₁ and v₂, which by assumption is not the case. (The same argument works in general for any number of vectors, as long as they form a linearly independent set. If any vᵢ′ = 0, then that would, after a bit of algebra, give a way to express vᵢ in terms of v₁, …, vᵢ₋₁.)

(a) , (b) . Replacing θ by −θ doesn't change the cosine entries and changes the signs of the sine entries.

The P matrix is . Its inverse is its transpose, so the components of gj in the new coordinate system are obtained by applying P^t.

4 Such a matrix is . The diagonal entries are , . Such a matrix is . The diagonal entries are 4, , .

6 Let A, B be orthogonal, i.e., they are invertible and A^t = A^-1, B^t = B^-1. Then AB is invertible and (AB)^t = B^t A^t = B^-1 A^-1 = (AB)^-1. The inverse of an orthogonal matrix is orthogonal. For, if A^t is the inverse of A, then A is the inverse of A^t. But (A^t)^t = A, so A^t has the property that its transpose is its inverse.

7 The rows are also mutually perpendicular unit vectors. The reason is that another way to characterize an orthogonal matrix P is to say that P^t is the inverse of P, i.e., P^t P = PP^t = I. However, it is easy to see from this that P^t is also orthogonal (its transpose (P^t)^t = P is also its inverse). Hence, the columns of P^t are mutually perpendicular unit vectors. But these are the rows of P.
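The closure facts proved above — a product of orthogonal matrices is orthogonal, and so are inverses and transposes — are easy to spot-check numerically. A sketch with hypothetical rotation angles:

```python
import numpy as np

def rotation(t):
    """2x2 rotation matrix; rotations are the basic orthogonal matrices."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def is_orthogonal(M, tol=1e-10):
    """M is orthogonal exactly when M^t M = I."""
    return np.allclose(M.T @ M, np.eye(M.shape[0]), atol=tol)

A = rotation(0.7)   # hypothetical angles, just for the check
B = rotation(1.2)

print(is_orthogonal(A @ B),            # products stay orthogonal
      is_orthogonal(np.linalg.inv(A)), # so do inverses
      is_orthogonal(A.T))              # and transposes (rows are orthonormal too)
```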

4 The principal axes are given by the basis vectors u₁ = , u₂ = . In the new coordinate system the equation is (x′)² + (y′)² = , which is the equation of an ellipse.

4 The eigenvalues are , . The conic is a hyperbola. The principal axes may be specified by the unit vectors u₁ = , u₂ = . The equation in the new coordinates is (x′)² + (y′)² = . The points closest to the origin are at x′ = , y′ = ±. In the original coordinates, these points are ±( /, /).

4 The principal axes are those along the unit vectors u₁ = , u₂ = . The equation in the new coordinate system is (x′)² + (y′)² = . The curve is a hyperbola. The points closest to the origin are given by x′ = , y′ = ±. In the original coordinates these are the points ±(4, ). There is no upper bound on the distance of points to the origin for a hyperbola.

4.4 The principal axes are along the unit vectors given by u₁ = , u₂ = , u₃ = . The equation in the new coordinate system is (x′)² + (y′)² + (z′)² = . This is an elliptic hyperboloid of one sheet centered on the x′-axis.
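The recipe running through these conic problems — diagonalize the symmetric matrix of the quadratic form, read the type off the eigenvalue signs, and take the eigenvectors as principal axes — can be sketched like this (the coefficients are hypothetical, not from the exercises):

```python
import numpy as np

def classify_conic(a, b, c):
    """Classify a x^2 + 2b xy + c y^2 = 1 via the eigenvalues of its matrix."""
    Q = np.array([[a, b], [b, c]], dtype=float)
    lam, P = np.linalg.eigh(Q)       # columns of P are the principal axes
    if np.all(lam > 0):
        kind = "ellipse"             # both eigenvalues positive
    elif lam[0] * lam[1] < 0:
        kind = "hyperbola"           # eigenvalues of opposite sign
    else:
        kind = "degenerate"          # a zero eigenvalue
    return kind, lam, P

kind, lam, P = classify_conic(2.0, 1.0, 2.0)   # hypothetical coefficients
print(kind)  # ellipse
```

Both eigenvalues positive gives an ellipse and opposite signs give a hyperbola, matching the case analysis in the solutions above; the columns of P point along the principal axes.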

Compare this with Exercise  in Section 7. The equations are x = λ(x + y), y = λ(x + y), x² + xy + y² = . This in effect says that (x, y) gives the components of an eigenvector for the matrix, with 1/λ as eigenvalue. However, since we already classified the conic in the aforementioned exercise, it is easier to use that information here. In the new coordinates the equation is (x′)² + (y′)² = . The maximum distance to the origin is at x′ = ±, y′ = , and the minimum distance to the origin is at x′ = , y′ = ± /. Since the change of coordinates is orthogonal, we may still measure squared distance by (x′)² + (y′)². Hence, the maximum square distance to the origin is  and the minimum square distance to the origin is /. Note that the problem did not ask for the locations in the original coordinates where the maximum and minimum are attained. (Look in the previous section for an analogous problem.)

The equations say that the given matrix sends (x, y, z) to λ(x, y, z), together with x² + y² + z² = . Thus we need to find eigenvectors of length 1 for the given matrix. However, we already did this in the aforementioned exercise. The answers are ±( , , ), ±( , , ), ±( , , ). The values of the function at these three points are respectively , , . Since a continuous function on a closed bounded set must attain both a maximum and a minimum value, the maximum is at the third point and the minimum is at the first point.

The Lagrange multiplier condition yields the equations x = λx, y = λy, z = −λz, x² + y² = z². If λ ≠ 1, then the first two equations show that x = y = 0. From this, the last equation shows that z = 0. Hence, (0, 0, 0) is one candidate extreme point. If

λ = 1, then the third equation shows that z = 0, and then the last equation shows that x = y = 0. Hence, that gives the same point. Finally, we have to consider all points where ∇g = 0. Again, (0, 0, 0) is the only such point. Since x² + y² + z² ≥ 0 and it does attain the value 0 on the given surface, it follows that 0 is its minimum value. Note that in this example ∇g = 0 didn't give us any other candidates to examine, but in general that might not have been the case.

4 On the circle x² + y² = , f(x, y) =  + 4xy, so maximizing f(x, y) is the same as maximizing xy. The level sets xy = c are hyperbolas. Some of these intersect the circle and some don't. The boundary between the two classes consists of the level curves xy =  and xy = . The first is tangent to the circle in the first and third quadrants, and the second is tangent in the second and fourth quadrants. As you move on the circle toward one of these points of tangency, you cross level curves with either successively higher or successively lower values of c. Hence, the points of tangency are either maximum or minimum points for xy. The maximum points occur in the first and third quadrants, with xy attaining the value  at those points. Note also that the points of tangency are exactly where the normal to the circle and the normal to the level curve are parallel.

6 The characteristic polynomial of the given matrix is det(A − λI) = ( − λ)(( − λ)( − λ) − ) − ( − λ) = (λ + )(λ² + λ) = (λ + )(λ + )λ. Hence, the eigenvalues are ω²(m/k) = λ = , , and . As indicated in the problem statement, the eigenvalue 0 corresponds to a non-oscillatory solution in which the system moves freely at constant velocity. For λ = , we have ω = √(k/m), yielding basic eigenvector u = . This corresponds to both end particles moving with equal displacements in opposite directions and the middle particle staying still. For λ = , we have ω = √( k/m), yielding basic eigenvector u = . This corresponds to both end particles moving together with equal displacements in the same direction and the middle particle moving with twice that displacement in the opposite direction.
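The principle behind the constrained-extremum solutions above is that, over unit vectors, the extreme values of a quadratic form x^t A x are the smallest and largest eigenvalues of A, attained at the corresponding unit eigenvectors. A numerical sketch with a hypothetical symmetric matrix:

```python
import numpy as np

# Hypothetical symmetric matrix -- not one from the text.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])

lam, P = np.linalg.eigh(A)        # ascending eigenvalues, orthonormal columns
v_min, v_max = P[:, 0], P[:, -1]  # unit eigenvectors for the extreme eigenvalues

# The quadratic form attains its extreme values over unit vectors at the
# extreme eigenvalues, exactly as the Lagrange-multiplier analysis predicts.
print(np.isclose(v_max @ A @ v_max, lam[-1]),
      np.isclose(v_min @ A @ v_min, lam[0]))
```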

6 The characteristic polynomial of the given matrix is det(A − λI) = ( − λ)(( − λ)( − λ) − ) − ( − λ) = (λ + )(λ² + 4λ + ). Hence, the eigenvalues are ω²(m/k) = λ = ,  + √ , and  − √ . For λ = , we have ω = √(k/m), yielding basic eigenvector u = . This corresponds to both end particles moving with equal displacements in opposite directions and the middle particle staying still. For λ =  + √ , we have ω = √(( + √ )k/m), yielding basic eigenvector u = . This corresponds to both end particles moving together with equal displacements in the same direction and the middle particle moving with  times that displacement in the opposite direction. For λ =  − √ , we have ω = √(( − √ )k/m), yielding basic eigenvector u = . This corresponds to both end particles moving together with equal displacements in the same direction and the middle particle moving with  times that displacement in the same direction. Notice that the intuitive significance of this last normal mode is not so clear.

6 The given information about the first normal mode tells us that a corresponding basic eigenvector is v₁ = . Any basic eigenvector for the second normal mode must be perpendicular to v₁, so we can take v₂ = . Hence, the relation between the displacements for the second normal mode is x = x.
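Both chain problems above follow the same recipe: the eigenvalues of the coupling matrix give ω²(m/k), and the eigenvectors give the mode shapes. As a runnable sketch, here is the recipe applied to an assumed matrix — the standard chain of three equal masses joined by two identical springs (the book's actual matrices were lost in this transcription) — which reproduces the mode shapes described in the first chain problem:

```python
import numpy as np

# Assumed coupling matrix for three equal masses joined by two identical
# springs (m x'' = -k A x); not taken from the text.
A = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])

lam, P = np.linalg.eigh(A)   # each eigenvalue is omega^2 * (m/k) for one mode

print(np.allclose(lam, [0.0, 1.0, 3.0]))
# lam = 0: the non-oscillatory mode, all particles translating together.
# lam = 1: mode shape proportional to (1, 0, -1) -- ends opposite, middle still.
# lam = 3: mode shape proportional to (1, -2, 1) -- ends together, middle
#          moving twice as far in the opposite direction.
```

If the book's matrix differs, the eigenvalues change, but the procedure — eigenvalues give the frequencies, eigenvectors give the relative displacements — is the same.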

6.4 For one normal mode, ω = √(k/m), and the relative motions of the particles satisfy x = x. For the other normal mode, ω = √(6k/m), and the relative motions of the particles satisfy x = x.

6.5 For one normal mode, λ =  +  and ω = √(( + )k/m). The eigenspace is obtained by reducing the matrix . Note that this matrix must be singular, so the first row must be a multiple of the second row. (The multiple is in fact ; check it!) Hence, the reduced matrix is . The relative motions of the particles satisfy x = x. A similar analysis shows that for the other normal mode, ω = √(( + )k/m), and the relative motions of the particles satisfy x = x.

6.6 The information given tells us that two of the eigenvectors are  and . Any basic eigenvector for the third normal mode must be perpendicular to these. If its components are v₁, v₂, v₃, then the dot product with each of the two known eigenvectors must vanish, giving two linear equations in v₁, v₂, v₃. By the usual method, we find that a basis for the null space of this system is given by , whence we conclude that the relative motions satisfy x = x, x = .

7.1 The set is not linearly independent.
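The "usual method" in the solution above finds a vector perpendicular to two known eigenvectors by solving a 2×3 homogeneous system; in R³ the cross product produces the same one-dimensional null space directly. The two known modes below are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical known normal modes (the originals were lost in transcription).
v1 = np.array([1.0, 1.0, 1.0])
v2 = np.array([1.0, 0.0, -1.0])

# The cross product is perpendicular to both inputs, so it spans the null
# space of the two dot-product equations v . v1 = 0 and v . v2 = 0.
v3 = np.cross(v1, v2)

print(np.isclose(v3 @ v1, 0.0), np.isclose(v3 @ v2, 0.0))
```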

7.2 (a) For λ = , a basis for the corresponding eigenspace is . For λ = , a basis for the corresponding eigenspace is , . (b) An orthonormal basis of eigenvectors is , , . (c) We have P =  and P^t AP = . The answers would be different if the eigenvalues or eigenvectors were chosen in some other order.

7.3 The columns of P must also be unit vectors.

7.4 (a) For λ = 6,  constitutes a basis for the corresponding eigenspace. For λ = ,  constitutes a basis for the corresponding eigenspace. (b) P = . The equation is 6u² + v² = 4. (c) The conic is an ellipse with principal axes along the axes determined by the two basic eigenvectors.

7.5 First find an orthonormal basis of eigenvectors for the given matrix. The eigenvalues are λ = , , . The corresponding basis is , , . Picking new coordinates with these as unit vectors on the new axes, the equation is (x′)² + (y′)² − (z′)² = 6. This is a hyperboloid of one sheet. Its axis is the z′-axis, and this points along the third vector in the above list.
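As the last solution notes, the eigenvalue signs alone identify the quadric x^t A x = const: three positive give an ellipsoid, two positive and one negative a hyperboloid of one sheet, one positive and two negative a hyperboloid of two sheets. A sketch with a hypothetical matrix in the one-sheet case:

```python
import numpy as np

# Hypothetical symmetric matrix chosen to have signature (+, +, -).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, -1.0]])

lam = np.linalg.eigvalsh(A)                    # ascending eigenvalues
pos, neg = int(np.sum(lam > 0)), int(np.sum(lam < 0))

# Two positive and one negative eigenvalue: x^t A x = 1 is a hyperboloid of
# one sheet, with axis along the eigenvector of the negative eigenvalue.
print(pos, neg)
```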

If the eigenvalues had been given in another order, you would have listed the basic eigenvectors in that order also. You might also conceivably have picked the negative of one of those eigenvectors as a basic eigenvector. The new axes would be the same, except that they would be labeled differently and some might have their directions reversed. The graph would be the same, but its equation would look different because of the labeling of the coordinates. Note that just to identify the graph, all you needed was the eigenvalues. The orthonormal basis of eigenvectors is only necessary if you want to sketch the surface relative to the original coordinate axes.

7.6 One normal mode x = v cos(ωt) has λ = , ω = √(k/m), v = . In this mode, x₁ = x₂, so the particles move in the same direction with equal displacements. A second normal mode x = v cos(ωt) has λ = , ω = √( k/m), v = . In this mode, x₁ = −x₂, so the particles move in opposite directions with equal displacements.

7.7 (a) Not diagonalizable: the characteristic equation has a triple root λ, but A − λI has rank , which shows that the corresponding eigenspace has dimension less than three. (b) If you subtract the eigenvalue from each diagonal entry and compute the rank of the resulting matrix, you find it is one. So the eigenspace corresponding to λ =  has dimension 3 − 1 = 2. The other eigenspace necessarily has dimension one. Hence, the matrix is diagonalizable. (c) The matrix is real and symmetric, so it is diagonalizable.
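The rank computation in the last solution is a general diagonalizability test: a matrix is diagonalizable exactly when, for each eigenvalue λ, the geometric multiplicity n − rank(A − λI) matches the algebraic multiplicity, i.e. when the geometric multiplicities sum to n. A numerical sketch (the example matrices are hypothetical):

```python
import numpy as np

def is_diagonalizable(A, tol=1e-8):
    """Sum the geometric multiplicities n - rank(A - lam*I) over the distinct
    eigenvalues and compare with n."""
    n = A.shape[0]
    lam = np.linalg.eigvals(A)
    used = np.zeros(len(lam), dtype=bool)
    total_geometric = 0
    for i, l in enumerate(lam):
        if used[i]:
            continue                      # skip repeats of an eigenvalue
        used |= np.isclose(lam, l, atol=tol)
        total_geometric += n - np.linalg.matrix_rank(A - l * np.eye(n), tol=tol)
    return total_geometric == n

J = np.array([[1.0, 1.0], [0.0, 1.0]])    # Jordan block: not diagonalizable
S = np.array([[2.0, 1.0], [1.0, 2.0]])    # symmetric: always diagonalizable
print(is_diagonalizable(J), is_diagonalizable(S))
```

For the Jordan block, the double eigenvalue has rank(A − I) = 1 and hence a one-dimensional eigenspace, which is exactly the failure mode described in part (a).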