Extreme Values and Positive/Negative Definite Matrix Conditions


Extreme Values and Positive/Negative Definite Matrix Conditions
James K. Peterson
Department of Biological Sciences and Department of Mathematical Sciences, Clemson University
November 8, 2016

Outline
1 Linear Systems of ODE
2 The Characteristic Equation
3 Finding The General Solution
4 Symmetric Problems
5 A Canonical Form For a Symmetric Matrix
6 Signed Definite Matrices
7 A Deeper Look at Extremals

Linear Systems of ODE

Let's review linear systems of first order ODEs. These have the form

x'(t) = a x(t) + b y(t)
y'(t) = c x(t) + d y(t)
x(0) = x_0, y(0) = y_0

for any numbers a, b, c and d and initial conditions x_0 and y_0. The full problem is called, as usual, an Initial Value Problem, or IVP for short. The two initial conditions are just called the ICs for the problem to save writing.

Linear Systems of ODE

For example, we might be interested in the system

x'(t) = 2 x(t) + 3 y(t)
y'(t) = 4 x(t) + 5 y(t)
x(0) = 5, y(0) = 3

Here the ICs are x(0) = 5 and y(0) = 3. Another sample problem might be the one below.

x'(t) = 14 x(t) + 5 y(t)
y'(t) = 4 x(t) + 8 y(t)
x(0) = 2, y(0) = 7
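For readers who want to experiment, here is a minimal sketch (using scipy, which these notes themselves do not use) that integrates the first sample IVP numerically; the time span and printout are just illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Coefficient matrix and ICs for the first sample IVP above.
A = np.array([[2.0, 3.0],
              [4.0, 5.0]])
z0 = [5.0, 3.0]  # x(0) = 5, y(0) = 3

def rhs(t, z):
    # Right-hand side of z' = A z with z = (x, y).
    return A @ z

sol = solve_ivp(rhs, (0.0, 1.0), z0)
print(sol.y[:, -1])  # approximate (x(1), y(1))
```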

The Characteristic Equation

For linear first order problems like u' = 3u and so forth, we find the solution has the form u(t) = A e^{3t} for some number A. We then determine the value of A to use by looking at the initial condition. To find the solutions here, we begin by rewriting the model in matrix-vector notation:

\[
\begin{bmatrix} x'(t) \\ y'(t) \end{bmatrix}
= \begin{bmatrix} a & b \\ c & d \end{bmatrix}
\begin{bmatrix} x(t) \\ y(t) \end{bmatrix}.
\]

The matrix is called the coefficient matrix of this model. The initial conditions can then be redone in vector form as

\[
\begin{bmatrix} x(0) \\ y(0) \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix}.
\]

The Characteristic Equation

Now it seems reasonable to believe that if a constant times e^{rt} solves a first order linear problem like u' = ru, perhaps a vector times e^{rt} will work here. Let's make this formal. So let's look at the problem below:

x'(t) = 3 x(t) + 2 y(t)
y'(t) = 4 x(t) + 5 y(t)
x(0) = 2, y(0) = 3

Assume the solution has the form V e^{rt}. Let's denote the components of V as follows:

\[
V = \begin{bmatrix} V_1 \\ V_2 \end{bmatrix}.
\]

The Characteristic Equation

We assume the solution is

\[
\begin{bmatrix} x(t) \\ y(t) \end{bmatrix} = V e^{rt}.
\]

Then the derivative of V e^{rt} is

\[
\left( V e^{rt} \right)'
= \begin{bmatrix} V_1 e^{rt} \\ V_2 e^{rt} \end{bmatrix}'
= \begin{bmatrix} V_1 (e^{rt})' \\ V_2 (e^{rt})' \end{bmatrix}
= \begin{bmatrix} V_1 r e^{rt} \\ V_2 r e^{rt} \end{bmatrix}
= r e^{rt} \begin{bmatrix} V_1 \\ V_2 \end{bmatrix}
= r V e^{rt}.
\]

Hence,

\[
\begin{bmatrix} x'(t) \\ y'(t) \end{bmatrix} = r V e^{rt}.
\]

The Characteristic Equation

When we plug these terms into the matrix-vector form of the problem, we find

\[
r V e^{rt} = \begin{bmatrix} 3 & 2 \\ 4 & 5 \end{bmatrix} V e^{rt}.
\]

Rewrite as

\[
r V e^{rt} - \begin{bmatrix} 3 & 2 \\ 4 & 5 \end{bmatrix} V e^{rt} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]

Recall that the identity matrix I has the form

\[
I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad I V = V.
\]

The Characteristic Equation

So

\[
r V e^{rt} - \begin{bmatrix} 3 & 2 \\ 4 & 5 \end{bmatrix} V e^{rt}
= r I V e^{rt} - \begin{bmatrix} 3 & 2 \\ 4 & 5 \end{bmatrix} V e^{rt}
= \left( r I - \begin{bmatrix} 3 & 2 \\ 4 & 5 \end{bmatrix} \right) V e^{rt}
= \left( \begin{bmatrix} r & 0 \\ 0 & r \end{bmatrix} - \begin{bmatrix} 3 & 2 \\ 4 & 5 \end{bmatrix} \right) V e^{rt}
= \begin{bmatrix} r-3 & -2 \\ -4 & r-5 \end{bmatrix} V e^{rt}.
\]

Plugging this into our model, we find

\[
\begin{bmatrix} r-3 & -2 \\ -4 & r-5 \end{bmatrix} V e^{rt} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]

The Characteristic Equation

But e^{rt} is never 0, so we want r satisfying

\[
\begin{bmatrix} r-3 & -2 \\ -4 & r-5 \end{bmatrix} V = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]

For each r, we get two equations in V_1 and V_2:

(r - 3) V_1 - 2 V_2 = 0,
-4 V_1 + (r - 5) V_2 = 0.

Let A_r be this matrix. Any r for which det A_r ≠ 0 tells us these two lines have different slopes and so cross only at the origin, implying V_1 = 0 and V_2 = 0. Thus

\[
\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} e^{rt} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},
\]

which will not satisfy nonzero initial conditions. So reject these r. Any value of r for which det A_r = 0 gives an infinite number of solutions, which allows us to pick one that matches the initial conditions we have.

The Characteristic Equation

The equation

\[
\det(r I - A) = \det \begin{bmatrix} r-3 & -2 \\ -4 & r-5 \end{bmatrix} = 0
\]

is called the characteristic equation of this linear system. The characteristic equation is a quadratic, so there are three possibilities:

the roots are real and distinct — this is the only case we handle in this class.
the real roots are the same — not done in this class.
the roots are complex.
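As a quick numerical sanity check, the roots of the characteristic equation should agree with the eigenvalues numpy computes directly; here is a minimal sketch for the coefficient matrix of the example above.

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [4.0, 5.0]])

# Characteristic polynomial r^2 - (trace) r + (det), built from A itself.
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
print(np.roots(coeffs))      # roots of the characteristic equation: 7 and 1
print(np.linalg.eigvals(A))  # eigenvalues of A -- the same numbers
```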

The Characteristic Equation

Example: Derive the characteristic equation for the system below.

x'(t) = 8 x(t) + 9 y(t)
y'(t) = 3 x(t) - 2 y(t)
x(0) = 1, y(0) = 4

Solution: The matrix-vector form is

\[
\begin{bmatrix} x'(t) \\ y'(t) \end{bmatrix}
= \begin{bmatrix} 8 & 9 \\ 3 & -2 \end{bmatrix}
\begin{bmatrix} x(t) \\ y(t) \end{bmatrix},
\qquad
\begin{bmatrix} x(0) \\ y(0) \end{bmatrix} = \begin{bmatrix} 1 \\ 4 \end{bmatrix}.
\]

The Characteristic Equation

Solution (continued): The coefficient matrix A is thus

\[
A = \begin{bmatrix} 8 & 9 \\ 3 & -2 \end{bmatrix}.
\]

Assume the solution has the form V e^{rt} and plug this into the system:

\[
r V e^{rt} - \begin{bmatrix} 8 & 9 \\ 3 & -2 \end{bmatrix} V e^{rt} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]

Rewrite using the identity matrix I and factor:

\[
\left( r I - \begin{bmatrix} 8 & 9 \\ 3 & -2 \end{bmatrix} \right) V e^{rt} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]

The Characteristic Equation

Solution (continued): Since e^{rt} is never 0, we find r and V satisfy

\[
\left( r I - \begin{bmatrix} 8 & 9 \\ 3 & -2 \end{bmatrix} \right) V = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]

If r is chosen so that det(rI - A) ≠ 0, the only solution to this system of two linear equations in the two unknowns V_1 and V_2 is V_1 = 0 and V_2 = 0. This leads to x(t) = 0 and y(t) = 0 always, and this solution does not satisfy the initial conditions. Hence, we must find the r which give det(rI - A) = 0. The characteristic equation is thus

\[
\det \begin{bmatrix} r-8 & -9 \\ -3 & r+2 \end{bmatrix}
= (r-8)(r+2) - 27 = r^2 - 6r - 43 = 0.
\]
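The same check works for this example: the roots of r^2 - 6r - 43 = 0 should be exactly the eigenvalues of the coefficient matrix. A small sketch:

```python
import numpy as np

A = np.array([[8.0, 9.0],
              [3.0, -2.0]])
print(np.roots([1.0, -6.0, -43.0]))  # roots of r^2 - 6r - 43 = 0
print(np.linalg.eigvals(A))          # should match: 3 +/- sqrt(52)
```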

Finding The General Solution

The roots of the characteristic equation are called eigenvalues. We usually label the eigenvalues with the largest one as r_1, although we don't have to. In fact, in the examples, we list them from small to large!

Example: The eigenvalues are -2 and -1. So r_1 = -1 and r_2 = -2. Since e^{-2t} decays faster than e^{-t}, we say the root r_1 = -1 is the dominant part of the solution.

Example: The eigenvalues are -2 and 3. So r_1 = 3 and r_2 = -2. Since e^{-2t} decays and e^{3t} grows, we say the root r_1 = 3 is the dominant part of the solution.

Example: The eigenvalues are 2 and 3. So r_1 = 3 and r_2 = 2. Since e^{2t} grows slower than e^{3t}, we say the root r_1 = 3 is the dominant part of the solution.

For each eigenvalue r we want to find nonzero vectors V so that

(r I - A) V = 0,

where, to help with our writing, we let 0 be the two dimensional zero vector. These nonzero V are called the eigenvectors for eigenvalue r and satisfy A V = r V.

Finding The General Solution

For eigenvalue r_1: find V so that (r_1 I - A) V = 0. There will be an infinite number of V's that solve this; we pick one and call it eigenvector E_1.

For eigenvalue r_2: find V so that (r_2 I - A) V = 0. There will again be an infinite number of V's that solve this; we pick one and call it eigenvector E_2.

The general solution to our model will be

\[
\begin{bmatrix} x(t) \\ y(t) \end{bmatrix} = A E_1 e^{r_1 t} + B E_2 e^{r_2 t},
\]

where A and B are arbitrary. We use the ICs to find A and B. Best to show all this with some examples.
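The whole procedure on this slide can be condensed into a few lines of numpy; this is a sketch of the idea (the helper name is ours), valid in the distinct real eigenvalue case these notes treat.

```python
import numpy as np

def solve_linear_ivp(A, z0):
    """Return (r, E, c) with z(t) = c[0] E[:,0] e^{r[0] t} + c[1] E[:,1] e^{r[1] t}.

    Assumes the 2x2 matrix A has two distinct real eigenvalues,
    the only case handled in this class.
    """
    r, E = np.linalg.eig(A)     # eigenvalues r and eigenvectors as the columns of E
    c = np.linalg.solve(E, z0)  # match the ICs: c[0] E_1 + c[1] E_2 = z0
    return r, E, c
```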

Finding The General Solution

Example: For the system below,

\[
\begin{bmatrix} x'(t) \\ y'(t) \end{bmatrix}
= \begin{bmatrix} -20 & 12 \\ -13 & 5 \end{bmatrix}
\begin{bmatrix} x(t) \\ y(t) \end{bmatrix},
\qquad
\begin{bmatrix} x(0) \\ y(0) \end{bmatrix} = \begin{bmatrix} 1 \\ -2 \end{bmatrix},
\]

Find the characteristic equation.
Find the general solution.
Solve the IVP.

Finding The General Solution

Solution: The characteristic equation is

\[
\det \left( r \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix} -20 & 12 \\ -13 & 5 \end{bmatrix} \right) = 0.
\]

Thus

\[
0 = \det \begin{bmatrix} r+20 & -12 \\ 13 & r-5 \end{bmatrix}
= (r+20)(r-5) + 156
= r^2 + 15r + 56
= (r+8)(r+7).
\]

Hence, the eigenvalues, or roots of the characteristic equation, are r_1 = -8 and r_2 = -7. Note, since this is just a calculation, we are not following our labeling scheme.

Finding The General Solution

Solution (continued): For eigenvalue r_1 = -8, substitute the value into

\[
\begin{bmatrix} r+20 & -12 \\ 13 & r-5 \end{bmatrix}
\begin{bmatrix} V_1 \\ V_2 \end{bmatrix}
= \begin{bmatrix} 12 & -12 \\ 13 & -13 \end{bmatrix}
\begin{bmatrix} V_1 \\ V_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]

This system of equations should be collinear: i.e. the rows should be multiples of one another; i.e. both give rise to the same line. Our rows are multiples, so we can pick any row to find V_2 in terms of V_1. Picking the top row, we get 12 V_1 - 12 V_2 = 0, implying V_2 = V_1. Letting V_1 = a, we find V_1 = a and V_2 = a; so

\[
\begin{bmatrix} V_1 \\ V_2 \end{bmatrix} = a \begin{bmatrix} 1 \\ 1 \end{bmatrix}.
\]

Finding The General Solution

Solution (continued): Choose E_1. The vector

\[
E_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}
\]

is our choice for an eigenvector corresponding to eigenvalue r_1 = -8. So one of the solutions is

\[
\begin{bmatrix} x_1(t) \\ y_1(t) \end{bmatrix} = E_1 e^{-8t} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} e^{-8t}.
\]

Finding The General Solution

Next, for eigenvalue r_2 = -7, let's find our eigenvector.

Solution (continued): For eigenvalue r_2 = -7, substitute the value into

\[
\begin{bmatrix} r+20 & -12 \\ 13 & r-5 \end{bmatrix}
\begin{bmatrix} V_1 \\ V_2 \end{bmatrix}
= \begin{bmatrix} 13 & -12 \\ 13 & -12 \end{bmatrix}
\begin{bmatrix} V_1 \\ V_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]

This system of equations should be collinear: i.e. the rows should be multiples of one another; i.e. both give rise to the same line. Our rows are multiples, so we can pick any row to find V_2 in terms of V_1. Picking the top row, we get 13 V_1 - 12 V_2 = 0, implying V_2 = (13/12) V_1. Letting V_1 = b, we find V_1 = b and V_2 = (13/12) b; so

\[
\begin{bmatrix} V_1 \\ V_2 \end{bmatrix} = b \begin{bmatrix} 1 \\ 13/12 \end{bmatrix}.
\]

Finding The General Solution

Solution (continued): Choose E_2. The vector

\[
E_2 = \begin{bmatrix} 1 \\ 13/12 \end{bmatrix}
\]

is our choice for an eigenvector corresponding to eigenvalue r_2 = -7. So the other solution is

\[
\begin{bmatrix} x_2(t) \\ y_2(t) \end{bmatrix} = E_2 e^{-7t} = \begin{bmatrix} 1 \\ 13/12 \end{bmatrix} e^{-7t}.
\]

Finding The General Solution

Solution (continued): The general solution is

\[
\begin{bmatrix} x(t) \\ y(t) \end{bmatrix}
= A E_1 e^{-8t} + B E_2 e^{-7t}
= A \begin{bmatrix} 1 \\ 1 \end{bmatrix} e^{-8t} + B \begin{bmatrix} 1 \\ 13/12 \end{bmatrix} e^{-7t}.
\]

Find A and B using the ICs:

\[
\begin{bmatrix} x(0) \\ y(0) \end{bmatrix} = \begin{bmatrix} 1 \\ -2 \end{bmatrix}
= A \begin{bmatrix} 1 \\ 1 \end{bmatrix} e^{0} + B \begin{bmatrix} 1 \\ 13/12 \end{bmatrix} e^{0}
= A \begin{bmatrix} 1 \\ 1 \end{bmatrix} + B \begin{bmatrix} 1 \\ 13/12 \end{bmatrix}.
\]

Finding The General Solution

Solution (continued): So

A + B = 1,
A + (13/12) B = -2.

Subtracting the bottom equation from the top equation, we get -(1/12) B = 3, or B = -36. Thus, A = 1 - B = 37. So

\[
\begin{bmatrix} x(t) \\ y(t) \end{bmatrix}
= 37 \begin{bmatrix} 1 \\ 1 \end{bmatrix} e^{-8t} - 36 \begin{bmatrix} 1 \\ 13/12 \end{bmatrix} e^{-7t}.
\]
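A numpy cross-check of this hand computation (a sketch, with the eigenvectors entered exactly as chosen above):

```python
import numpy as np

A = np.array([[-20.0, 12.0],
              [-13.0,  5.0]])
print(np.linalg.eigvals(A))  # -8 and -7

E = np.array([[1.0, 1.0],
              [1.0, 13.0 / 12.0]])     # columns are E_1 and E_2
print(np.linalg.solve(E, [1.0, -2.0]))  # coefficients A and B: 37 and -36
```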

Symmetric Problems

Let's specialize to the case where the coefficient matrix is a symmetric matrix. Let's start with a general symmetric matrix A given by

\[
A = \begin{bmatrix} a & b \\ b & d \end{bmatrix},
\]

where a, b and d are arbitrary nonzero numbers. The characteristic equation here is r^2 - (a+d) r + ad - b^2 = 0. Note that the term ad - b^2 is the determinant of A. The roots are given by

\[
r = \frac{(a+d) \pm \sqrt{(a+d)^2 - 4(ad - b^2)}}{2}
= \frac{(a+d) \pm \sqrt{a^2 + 2ad + d^2 - 4ad + 4b^2}}{2}
= \frac{(a+d) \pm \sqrt{a^2 - 2ad + d^2 + 4b^2}}{2}
= \frac{(a+d) \pm \sqrt{(a-d)^2 + 4b^2}}{2}.
\]

It is easy to see the term in the square root is always positive, implying two real roots.

Symmetric Problems

Note we can find the eigenvectors with a standard calculation. For the eigenvalue

\[
\lambda_1 = \frac{(a+d) + \sqrt{(a-d)^2 + 4b^2}}{2},
\]

we must find the vectors V so that

\[
\begin{bmatrix} \lambda_1 - a & -b \\ -b & \lambda_1 - d \end{bmatrix}
\begin{bmatrix} V_1 \\ V_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]

We can use the top equation to find the needed relationship between V_1 and V_2. We have

\[
\left( \frac{(a+d) + \sqrt{(a-d)^2 + 4b^2}}{2} - a \right) V_1 - b V_2 = 0.
\]

Thus, taking V_2 = \frac{d - a + \sqrt{(a-d)^2 + 4b^2}}{2}, we get V_1 = b. Thus, the first eigenvector is

\[
E_1 = \begin{bmatrix} b \\ \dfrac{d - a + \sqrt{(a-d)^2 + 4b^2}}{2} \end{bmatrix}.
\]

Symmetric Problems

The second eigenvector is a similar calculation. For the eigenvalue

\[
\lambda_2 = \frac{(a+d) - \sqrt{(a-d)^2 + 4b^2}}{2},
\]

we must find the vector V so that

\[
\begin{bmatrix} \lambda_2 - a & -b \\ -b & \lambda_2 - d \end{bmatrix}
\begin{bmatrix} V_1 \\ V_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]

We find

\[
\left( \frac{(a+d) - \sqrt{(a-d)^2 + 4b^2}}{2} - a \right) V_1 - b V_2 = 0.
\]

Thus, taking V_2 = \frac{d - a - \sqrt{(a-d)^2 + 4b^2}}{2}, we get V_1 = b. Thus, the second eigenvector is

\[
E_2 = \begin{bmatrix} b \\ \dfrac{d - a - \sqrt{(a-d)^2 + 4b^2}}{2} \end{bmatrix}.
\]

Symmetric Problems

Note that the inner product <E_1, E_2> is

\[
\langle E_1, E_2 \rangle
= b^2 + \left( \frac{d - a + \sqrt{(a-d)^2 + 4b^2}}{2} \right)
        \left( \frac{d - a - \sqrt{(a-d)^2 + 4b^2}}{2} \right)
= b^2 + \frac{(d-a)^2 - \left( (a-d)^2 + 4b^2 \right)}{4}
= b^2 - b^2 = 0.
\]

Hence, these two eigenvectors are orthogonal to each other. Note, the two eigenvalues are

\[
\lambda_1 = \frac{(a+d) + \sqrt{(a-d)^2 + 4b^2}}{2},
\qquad
\lambda_2 = \frac{(a+d) - \sqrt{(a-d)^2 + 4b^2}}{2}.
\]

The only way both eigenvalues can be zero is if both a + d = 0 and (a-d)^2 + 4b^2 = 0. That only happens if a = b = d = 0, which we explicitly ruled out at the beginning of our discussion because we said a, b and d were nonzero. However, both eigenvalues can be negative, both can be positive, or they can be of mixed sign, as our examples show.
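The orthogonality is easy to confirm numerically; a minimal sketch using the eigenvector formulas just derived, on a sample symmetric matrix:

```python
import numpy as np

a, b, d = 3.0, 2.0, 6.0
s = np.sqrt((a - d)**2 + 4 * b**2)

E1 = np.array([b, (d - a + s) / 2.0])  # eigenvector for lambda_1
E2 = np.array([b, (d - a - s) / 2.0])  # eigenvector for lambda_2
print(np.dot(E1, E2))  # 0.0 up to rounding: the eigenvectors are orthogonal
```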

Symmetric Problems

Example: For

\[
A = \begin{bmatrix} 3 & 2 \\ 2 & 6 \end{bmatrix},
\]

the eigenvalues are (9 ± 5)/2, and both are positive.

Example: For

\[
A = \begin{bmatrix} 3 & 5 \\ 5 & 6 \end{bmatrix},
\]

the eigenvalues are (9 ± \sqrt{9 + 100})/2, giving λ_1 ≈ 9.72 and λ_2 ≈ -0.72, so the eigenvalues have mixed sign.

A Canonical Form For a Symmetric Matrix

Now let's look at symmetric matrices more abstractly. Don't worry, there is a payoff here in understanding! Let A be a general symmetric matrix. Then it has two distinct real eigenvalues λ_1 and λ_2. Consider the matrix P given by

\[
P = \begin{bmatrix} E_1 & E_2 \end{bmatrix},
\]

whose transpose is then

\[
P^T = \begin{bmatrix} E_1^T \\ E_2^T \end{bmatrix}.
\]

Thus,

\[
P^T A P
= \begin{bmatrix} E_1^T \\ E_2^T \end{bmatrix} A \begin{bmatrix} E_1 & E_2 \end{bmatrix}
= \begin{bmatrix} E_1^T \\ E_2^T \end{bmatrix} \begin{bmatrix} A E_1 & A E_2 \end{bmatrix}
= \begin{bmatrix} E_1^T \\ E_2^T \end{bmatrix} \begin{bmatrix} \lambda_1 E_1 & \lambda_2 E_2 \end{bmatrix}.
\]

A Canonical Form For a Symmetric Matrix

After we do the final multiplications, we have

\[
P^T A P = \begin{bmatrix}
\lambda_1 \langle E_1, E_1 \rangle & \lambda_2 \langle E_1, E_2 \rangle \\
\lambda_1 \langle E_2, E_1 \rangle & \lambda_2 \langle E_2, E_2 \rangle
\end{bmatrix}.
\]

We know the eigenvectors are orthogonal, so we must have

\[
P^T A P = \begin{bmatrix}
\lambda_1 \langle E_1, E_1 \rangle & 0 \\
0 & \lambda_2 \langle E_2, E_2 \rangle
\end{bmatrix}.
\]

One last step and we are done! There is no reason we can't choose, as our eigenvectors, vectors of length one: here just replace E_1 by the new vector E_1/||E_1||, where ||E_1|| is the usual Euclidean length of the vector. Similarly, replace E_2 by E_2/||E_2||. Assuming this is done, we have <E_1, E_1> = 1 and <E_2, E_2> = 1. We are left with the identity

\[
P^T A P = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}.
\]

It is easy to see P^T P = P P^T = I, telling us P^T = P^{-1}. Thus, we can rewrite as

\[
A = P \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} P^T.
\]
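In numpy, eigh does exactly this for a symmetric matrix: it returns unit-length orthogonal eigenvectors, so P^T A P comes out diagonal. A minimal sketch (note eigh lists eigenvalues in ascending order, not largest first):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
lam, P = np.linalg.eigh(A)     # unit eigenvectors as the columns of P

print(P.T @ A @ P)             # diag(lam), up to rounding
print(P @ np.diag(lam) @ P.T)  # recovers A itself
```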

A Canonical Form For a Symmetric Matrix

This is an important thing. We have shown the matrix A can be decomposed into the product A = P Λ P^T, where Λ is the diagonal matrix whose entries are the eigenvalues of A, with the most positive one in the (1,1) position. It is now clear how we solve an equation like A X = b. We rewrite as P Λ P^T X = b, which leads to the solution X = P Λ^{-1} P^T b, and it is clear the reciprocal eigenvalue sizes determine how large the solution can get. The eigenvectors here are independent vectors in R^2, and since they span R^2, they form a basis. This is called an orthonormal basis because the vectors are mutually perpendicular, or orthogonal, and have length one. Hence, any vector in R^2 can be written as a linear combination of this basis. That is, if V is such a vector, then we have V = V_1 E_1 + V_2 E_2, and the components V_1 and V_2 are known as the components of V relative to the basis {E_1, E_2}. We often just refer to this basis as E. Hence, a vector V has many possible representations.
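Here is that solve written out as a sketch: with A = P Λ P^T, the solution of A X = b is X = P Λ^{-1} P^T b, which in code is one line.

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([1.0, 4.0])

lam, P = np.linalg.eigh(A)
X = P @ ((P.T @ b) / lam)  # X = P Lambda^{-1} P^T b
print(X, A @ X)            # A @ X reproduces b
```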

A Canonical Form For a Symmetric Matrix

The one you are most used to is the one which uses the basis vectors

\[
e_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}
\quad \text{and} \quad
e_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix},
\]

which is called the standard basis. When we write V = \begin{bmatrix} 3 \\ 5 \end{bmatrix}, unless otherwise stated, we assume these are the components of V with respect to the standard basis. Now let's go back to V. Since the vectors E_1 and E_2 are orthogonal, we can take inner products on both sides of the representation of V with respect to the basis E to get

\[
\langle V, E_1 \rangle
= \langle V_1 E_1, E_1 \rangle + \langle V_2 E_2, E_1 \rangle
= V_1 \langle E_1, E_1 \rangle + V_2 \langle E_2, E_1 \rangle,
\]

and since <E_2, E_1> = 0, as the vectors are perpendicular, and <E_1, E_1> = <E_2, E_2> = 1, as our eigenvectors have length one, we have V_1 = <V, E_1>, and similarly V_2 = <V, E_2>. So we can decompose V as

V = <V, E_1> E_1 + <V, E_2> E_2.

A Canonical Form For a Symmetric Matrix

Another way to look at this is that the two eigenvectors can be used to find a representation of the data vector b and the solution vector X as follows:

b = <b, E_1> E_1 + <b, E_2> E_2 = b_1 E_1 + b_2 E_2,
X = <X, E_1> E_1 + <X, E_2> E_2 = X_1 E_1 + X_2 E_2.

So A X = b becomes

\[
A \left( X_1 E_1 + X_2 E_2 \right) = b_1 E_1 + b_2 E_2
\quad \Longrightarrow \quad
\lambda_1 X_1 E_1 + \lambda_2 X_2 E_2 = b_1 E_1 + b_2 E_2.
\]

The only way this equation works is if the coefficients on the eigenvectors match. So we have X_1 = λ_1^{-1} b_1 and X_2 = λ_2^{-1} b_2. This shows very clearly how the solution depends on the size of the reciprocal eigenvalues. Thus, if our problem has a very small eigenvalue, we would expect our solution vector to be unstable. Also, if one of the eigenvalues is 0, we would have real problems! We can address this somewhat by finding a way to force all the eigenvalues to be positive.
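The instability claim is easy to see in a sketch: give a symmetric matrix one tiny eigenvalue and watch the corresponding eigen-coordinate of the solution blow up (the matrix here is an artificial example of ours).

```python
import numpy as np

# A nearly singular symmetric matrix: one eigenvalue is tiny.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-8]])
lam, P = np.linalg.eigh(A)
print(lam)           # one eigenvalue is on the order of 5e-9

b = np.array([1.0, -1.0])
coords = P.T @ b     # components b_1, b_2 of b in the eigenbasis
print(coords / lam)  # X_i = b_i / lambda_i: the tiny eigenvalue amplifies its coordinate
```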

Signed Definite Matrices

A matrix is said to be a positive definite matrix if x^T A x > 0 for all nonzero vectors x. If we multiply this out, we find the inequality below:

a x_1^2 + 2b x_1 x_2 + d x_2^2 > 0.

If we complete the square, we find

\[
a \left( x_1 + (b/a) x_2 \right)^2 + \left( (ad - b^2)/a \right) x_2^2 > 0.
\]

Now if the leading term a > 0, and if the determinant of A, ad - b^2, is positive, we have a(x_1 + (b/a)x_2)^2 + ((ad - b^2)/a) x_2^2 always positive. Note, since the determinant is positive, ad > b^2, which forces d to be positive as well. So in this case a > 0, d > 0 and ad - b^2 > 0, and the expression x^T A x > 0 in this case. Now recall what we found about the eigenvalues here. We had the eigenvalues

\[
r = \frac{(a+d) \pm \sqrt{(a-d)^2 + 4b^2}}{2}.
\]

Signed Definite Matrices

Since ad - b^2 > 0, the term

\[
(a-d)^2 + 4b^2 = a^2 - 2ad + d^2 + 4b^2 < a^2 - 2ad + 4ad + d^2 = (a+d)^2.
\]

Thus, the square root is smaller than a + d, as a and d are positive. The first root is then always positive. The second root is too, as

\[
(a+d) - \sqrt{(a-d)^2 + 4b^2} > (a+d) - (a+d) = 0.
\]

So both eigenvalues are positive if a and d are positive and ad - b^2 > 0. Note the argument can go the other way: if we assume the matrix is positive definite, then we are forced to have a > 0 and ad - b^2 > 0, which gives the same result. We conclude our symmetric matrix A is positive definite if and only if a > 0, d > 0 and the determinant of A is positive too. Note a positive definite matrix has positive eigenvalues.
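For the 2x2 symmetric case, the test is just two sign checks; here is a sketch (the helper name is ours) together with a numerical cross-check against the eigenvalues:

```python
import numpy as np

def is_positive_definite(a, b, d):
    # 2x2 symmetric case: positive definite iff a > 0 and det = ad - b^2 > 0.
    return a > 0 and a * d - b * b > 0

a, b, d = 3.0, 2.0, 6.0
print(is_positive_definite(a, b, d))         # True
print(np.linalg.eigvalsh([[a, b], [b, d]]))  # both eigenvalues positive: 2 and 7
```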

Signed Definite Matrices

A similar argument holds if we have the determinant of A positive but a < 0. The determinant condition will then force d < 0 too. We find that x^T A x < 0. In this case, we say the matrix is negative definite. The eigenvalues are still

\[
r = \frac{(a+d) \pm \sqrt{(a-d)^2 + 4b^2}}{2}.
\]

But now, since ad - b^2 > 0, the term

\[
(a-d)^2 + 4b^2 = a^2 - 2ad + d^2 + 4b^2 < a^2 - 2ad + 4ad + d^2 = (a+d)^2,
\]

so the square root is smaller than |a + d|. Since a and d are negative, a + d < 0, and so the second root is always negative. The first root's sign is determined by

\[
(a+d) + \sqrt{(a-d)^2 + 4b^2} < (a+d) + |a+d| = (a+d) - (a+d) = 0.
\]

So both eigenvalues are negative. We have found the matrix A is negative definite if a and d are negative and the determinant of A is positive. Note a negative definite matrix has negative eigenvalues.

A Deeper Look at Extremals

We can now rephrase what we said about second order tests for extremals for functions of two variables. Recall we had:

Theorem. If the partials of f are zero at the point (x_0, y_0), we can determine if that point is a local minimum or local maximum of f using a second order test. We must assume the second order partials are continuous at the point (x_0, y_0).

If f_xx^0 > 0 and det(H_0) > 0, then f(x_0, y_0) is a local minimum.
If f_xx^0 < 0 and det(H_0) > 0, then f(x_0, y_0) is a local maximum.
If det(H_0) < 0, then f(x_0, y_0) is a local saddle.

We just don't know anything if det(H_0) = 0.

A Deeper Look at Extremals

The Hessian at the critical point is

\[
H_0 = \begin{bmatrix} A & B \\ B & D \end{bmatrix},
\]

and we see:

f_xx^0 > 0 and det(H_0) > 0 tells us H_0 is positive definite and both eigenvalues are positive.
f_xx^0 < 0 and det(H_0) > 0 tells us H_0 is negative definite and both eigenvalues are negative.
We haven't proven it yet, but if the eigenvalues are nonzero and differ in sign, this gives us a saddle.

Thus our theorem becomes:

Theorem. Suppose the gradient of f is zero at the point (x_0, y_0) and the second order partials are continuous at (x_0, y_0).

If H_0 is positive definite, then f(x_0, y_0) is a local minimum.
If H_0 is negative definite, then f(x_0, y_0) is a local maximum.
If the eigenvalues of H_0 are nonzero and of mixed sign, then f(x_0, y_0) is a local saddle.
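The rephrased theorem translates directly into a sketch that classifies a critical point from the eigenvalues of its Hessian (the function name and sample matrix are ours):

```python
import numpy as np

def classify_critical_point(H0):
    """Classify a critical point from its 2x2 Hessian H0."""
    lam = np.linalg.eigvalsh(H0)
    if np.all(lam > 0):
        return "local minimum"   # H0 positive definite
    if np.all(lam < 0):
        return "local maximum"   # H0 negative definite
    if lam[0] * lam[1] < 0:
        return "local saddle"    # nonzero eigenvalues of mixed sign
    return "inconclusive"        # some eigenvalue is zero

print(classify_critical_point(np.array([[2.0, 1.0],
                                        [1.0, 3.0]])))  # local minimum
```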

A Deeper Look at Extremals

Homework 39

For these problems:

Write the matrix-vector form.
Derive the characteristic equation.
Find the two eigenvalues. Label the largest one as r_1 and the other as r_2.
Find the two associated eigenvectors as unit vectors.
Define P = [ E_1  E_2 ].
Compute P^T A P, where A is the coefficient matrix of the ODE system.
Show A = P Λ P^T for an appropriate Λ.
Write the general solution.
Solve the IVP.

A Deeper Look at Extremals

Homework 39 Continued

39.1
x' = x + 2y
y' = 2x - 6y
x(0) = 4, y(0) = 6.

39.2
x' = 2x + 3y
y' = 3x + 7y
x(0) = 2, y(0) = 3.

A Deeper Look at Extremals

Homework 39

The following matrix is the Hessian H_0 at the critical point (1, 1) of an extremal value problem.

Determine if H_0 is positive or negative definite.
Determine if the critical point is a maximum or a minimum.
Find the two eigenvalues of H_0. Label the largest one as r_1 and the other as r_2.
Find the two associated eigenvectors as unit vectors.
Define P = [ E_1  E_2 ].
Compute P^T H_0 P.
Show H_0 = P Λ P^T for an appropriate Λ.

A Deeper Look at Extremals

Homework 39 Continued

39.3
\[
H_0 = \begin{bmatrix} 1 & 3 \\ 3 & 1 \end{bmatrix}
\]

39.4
\[
H_0 = \begin{bmatrix} 7 & 7 \\ 7 & 40 \end{bmatrix}
\]