Extreme Values and Positive/Negative Definite Matrix Conditions
1 Extreme Values and Positive/Negative Definite Matrix Conditions James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 8, 2016
2 Outline 1 Linear Systems of ODE 2 The Characteristic Equation 3 Finding The General Solution 4 Symmetric Problems 5 A Canonical Form For a Symmetric Matrix 6 Signed Definite Matrices 7 A Deeper Look at Extremals
3 Linear Systems of ODE Let's review linear systems of first order ODEs. These have the form

x'(t) = a x(t) + b y(t)
y'(t) = c x(t) + d y(t)
x(0) = x_0, y(0) = y_0

for any numbers a, b, c and d and initial conditions x_0 and y_0. The full problem is called, as usual, an Initial Value Problem, or IVP for short. The two initial conditions are just called the ICs for the problem to save writing.
4 Linear Systems of ODE For example, we might be interested in the system

x'(t) = 2 x(t) + 3 y(t)
y'(t) = 4 x(t) + 5 y(t)
x(0) = 5, y(0) = 3

Here the ICs are x(0) = 5 and y(0) = 3. Another sample problem might be the one below.

x'(t) = 14 x(t) + 5 y(t)
y'(t) = 4 x(t) + 8 y(t)
x(0) = 2, y(0) = 7
5 The Characteristic Equation For linear first order problems like u' = 3u and so forth, we find the solution has the form u(t) = A e^{3t} for some number A. We then determine the value of A to use by looking at the initial condition. To find the solutions here, we begin by rewriting the model in matrix-vector notation:

[ x'(t) ; y'(t) ] = [ a b ; c d ] [ x(t) ; y(t) ].

The matrix is called the coefficient matrix of this model. The initial conditions can then be redone in vector form as

[ x(0) ; y(0) ] = [ x_0 ; y_0 ].
6 The Characteristic Equation Now it seems reasonable to believe that if a constant times e^{rt} solves a first order linear problem like u' = ru, perhaps a vector times e^{rt} will work here. Let's make this formal. So let's look at the problem below:

x'(t) = 3 x(t) + 2 y(t)
y'(t) = -4 x(t) + 5 y(t)
x(0) = 2, y(0) = 3

Assume the solution has the form V e^{rt}. Let's denote the components of V as follows: V = [ V_1 ; V_2 ].
7 The Characteristic Equation We assume the solution is [ x(t) ; y(t) ] = V e^{rt}. Then the derivative of V e^{rt} is

( V e^{rt} )' = [ V_1 e^{rt} ; V_2 e^{rt} ]' = [ V_1 (e^{rt})' ; V_2 (e^{rt})' ] = [ V_1 r e^{rt} ; V_2 r e^{rt} ] = r V e^{rt}.

Hence, [ x'(t) ; y'(t) ] = r V e^{rt}.
8 The Characteristic Equation When we plug these terms into the matrix-vector form of the problem, we find

r V e^{rt} = [ 3 2 ; -4 5 ] V e^{rt}.

Rewrite as

r V e^{rt} - [ 3 2 ; -4 5 ] V e^{rt} = [ 0 ; 0 ].

Recall that the identity matrix I has the form I = [ 1 0 ; 0 1 ] and that I V = V.
9 The Characteristic Equation So

r V e^{rt} - [ 3 2 ; -4 5 ] V e^{rt} = r I V e^{rt} - [ 3 2 ; -4 5 ] V e^{rt}
= ( r I - [ 3 2 ; -4 5 ] ) V e^{rt}
= ( [ r 0 ; 0 r ] - [ 3 2 ; -4 5 ] ) V e^{rt}
= [ r-3 -2 ; -(-4) r-5 ] V e^{rt}.

Plugging this into our model, we find

[ r-3 -2 ; 4 r-5 ] V e^{rt} = [ 0 ; 0 ].
10 The Characteristic Equation But e^{rt} is never 0, so we want r satisfying

[ r-3 -2 ; 4 r-5 ] V = [ 0 ; 0 ].

For each r, we get two equations in V_1 and V_2:

(r-3) V_1 - 2 V_2 = 0,
4 V_1 + (r-5) V_2 = 0.

Let A_r be this matrix. Any r for which det A_r ≠ 0 tells us these two lines have different slopes and so cross only at the origin, implying V_1 = 0 and V_2 = 0. Thus

[ x ; y ] = [ 0 ; 0 ] e^{rt} = [ 0 ; 0 ],

which will not satisfy nonzero initial conditions. So reject these r. Any value of r for which det A_r = 0 gives an infinite number of solutions, which allows us to pick one that matches the initial conditions we have.
11 The Characteristic Equation The equation

det( r I - A ) = det [ r-3 -2 ; 4 r-5 ] = 0

is called the characteristic equation of this linear system. The characteristic equation is a quadratic, so there are three possibilities:

two distinct real roots (this is the only case we handle in this class);
a repeated real root (not done in this class);
the roots are complex.
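The three possibilities come straight from the discriminant of the quadratic. A minimal sketch in Python (the function name and interface are mine, not from the notes):

```python
def classify_roots(a, b, c, d):
    """Classify the roots of det(rI - A) = r^2 - (a+d) r + (ad - bc)
    for A = [[a, b], [c, d]] using the discriminant of the quadratic."""
    disc = (a + d) ** 2 - 4 * (a * d - b * c)
    if disc > 0:
        return "two distinct real roots"
    if disc == 0:
        return "repeated real root"
    return "complex roots"
```

For instance, the matrix [ 0 1 ; -1 0 ] has discriminant -4 and so falls in the complex case.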
12 The Characteristic Equation Example Derive the characteristic equation for the system below:

x'(t) = 8 x(t) + 9 y(t)
y'(t) = 3 x(t) - 2 y(t)
x(0) = 1, y(0) = 4

Solution The matrix-vector form is

[ x'(t) ; y'(t) ] = [ 8 9 ; 3 -2 ] [ x(t) ; y(t) ],  [ x(0) ; y(0) ] = [ 1 ; 4 ].
13 The Characteristic Equation Solution The coefficient matrix A is thus A = [ 8 9 ; 3 -2 ]. Assume the solution has the form V e^{rt} and plug this into the system:

r V e^{rt} - [ 8 9 ; 3 -2 ] V e^{rt} = [ 0 ; 0 ].

Rewrite using the identity matrix I and factor:

( r I - [ 8 9 ; 3 -2 ] ) V e^{rt} = [ 0 ; 0 ].
14 The Characteristic Equation Solution Since e^{rt} is never 0, we find r and V satisfy

( r I - [ 8 9 ; 3 -2 ] ) V = [ 0 ; 0 ].

If r is chosen so that det( r I - A ) ≠ 0, the only solution to this system of two linear equations in the two unknowns V_1 and V_2 is V_1 = 0 and V_2 = 0. This leads to x(t) = 0 and y(t) = 0 always, and this solution does not satisfy the initial conditions. Hence, we must find the r which give det( r I - A ) = 0. The characteristic equation is thus

det [ r-8 -9 ; -3 r+2 ] = (r-8)(r+2) - 27 = r^2 - 6r - 43 = 0.
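The expansion can be double-checked numerically: for any 2 x 2 matrix the characteristic polynomial is r^2 minus the trace times r plus the determinant. A short sketch (the function name is mine):

```python
def char_poly(a, b, c, d):
    """Return the coefficients (1, p, q) of det(rI - A) = r^2 + p r + q
    for A = [[a, b], [c, d]]: p is minus the trace, q is the determinant."""
    return (1, -(a + d), a * d - b * c)
```

Here char_poly(8, 9, 3, -2) reproduces the coefficients of r^2 - 6r - 43.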
15 Finding The General Solution The roots of the characteristic equation are called eigenvalues. We usually organize the eigenvalues with the largest one first, although we don't have to. In fact, in the examples, we organize from small to large!
Example: the eigenvalues are -2 and -1, so r_1 = -1 and r_2 = -2. Since e^{-2t} decays faster than e^{-t}, we say the root r_1 = -1 is the dominant part of the solution.
Example: the eigenvalues are -2 and 3, so r_1 = 3 and r_2 = -2. Since e^{-2t} decays and e^{3t} grows, we say the root r_1 = 3 is the dominant part of the solution.
Example: the eigenvalues are 2 and 3, so r_1 = 3 and r_2 = 2. Since e^{2t} grows slower than e^{3t}, we say the root r_1 = 3 is the dominant part of the solution.
16 Finding The General Solution For each eigenvalue r we want to find nonzero vectors V so that ( r I - A ) V = 0, where to help with our writing we let 0 be the two dimensional zero vector.
17 Finding The General Solution These nonzero V are called the eigenvectors for eigenvalue r and satisfy A V = r V.
18 Finding The General Solution For eigenvalue r_1: find V so that ( r_1 I - A ) V = 0. There will be an infinite number of V's that solve this; we pick one and call it eigenvector E_1. For eigenvalue r_2: find V so that ( r_2 I - A ) V = 0. There will again be an infinite number of V's that solve this; we pick one and call it eigenvector E_2. The general solution to our model will be

[ x(t) ; y(t) ] = A E_1 e^{r_1 t} + B E_2 e^{r_2 t},

where A and B are arbitrary constants. We use the ICs to find A and B. Best to show all this with some examples.
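The whole recipe (eigenvalues from the characteristic equation, one eigenvector per eigenvalue, then the constants from the ICs) can be sketched in a few lines of Python. This is a sketch under the notes' assumption of two distinct real eigenvalues; all names are mine:

```python
import math

def solve_2x2_ivp(a, b, c, d, x0, y0):
    """Solve x' = a x + b y, y' = c x + d y with x(0) = x0, y(0) = y0,
    assuming the characteristic equation has two distinct real roots."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    assert disc > 0, "only the distinct real root case is handled"
    r1 = (tr + math.sqrt(disc)) / 2
    r2 = (tr - math.sqrt(disc)) / 2

    def eigvec(r):
        # Top row of (rI - A)V = 0 gives (r - a) V1 - b V2 = 0.
        if abs(b) > 1e-12:
            return (1.0, (r - a) / b)
        if abs(r - a) > 1e-12:      # b = 0 and r = d: V1 must be 0
            return (0.0, 1.0)
        return (1.0, c / (a - d))   # b = 0 and r = a (a != d: roots distinct)

    e1, e2 = eigvec(r1), eigvec(r2)
    # Solve A_coef * e1 + B_coef * e2 = (x0, y0) by Cramer's rule.
    den = e1[0] * e2[1] - e2[0] * e1[1]
    A_coef = (x0 * e2[1] - e2[0] * y0) / den
    B_coef = (e1[0] * y0 - x0 * e1[1]) / den
    return lambda t: (
        A_coef * e1[0] * math.exp(r1 * t) + B_coef * e2[0] * math.exp(r2 * t),
        A_coef * e1[1] * math.exp(r1 * t) + B_coef * e2[1] * math.exp(r2 * t),
    )
```

The returned function evaluates [ x(t) ; y(t) ] at any t; at t = 0 it reproduces the initial conditions.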
19 Finding The General Solution Example For the system below

[ x'(t) ; y'(t) ] = [ -20 12 ; -13 5 ] [ x(t) ; y(t) ],  [ x(0) ; y(0) ] = [ 1 ; -2 ]

Find the characteristic equation. Find the general solution. Solve the IVP.
20 Finding The General Solution Solution The characteristic equation is

det( r [ 1 0 ; 0 1 ] - [ -20 12 ; -13 5 ] ) = 0.

Thus

0 = det [ r+20 -12 ; 13 r-5 ] = (r+20)(r-5) + 156 = r^2 + 15r + 56 = (r+8)(r+7).

Hence, the eigenvalues or roots of the characteristic equation are r_1 = -8 and r_2 = -7. Note since this is just a calculation, we are not following our labeling scheme.
21 Finding The General Solution Solution For eigenvalue r_1 = -8, substitute the value into

[ r+20 -12 ; 13 r-5 ] [ V_1 ; V_2 ] = [ 0 ; 0 ]  to get  [ 12 -12 ; 13 -13 ] [ V_1 ; V_2 ] = [ 0 ; 0 ].

This system of equations should be collinear: i.e. the rows should be multiples; i.e. both give rise to the same line. Our rows are multiples, so we can pick any row to find V_2 in terms of V_1. Picking the top row, we get 12 V_1 - 12 V_2 = 0, implying V_2 = V_1. Letting V_1 = a, we find V_1 = a and V_2 = a: so

[ V_1 ; V_2 ] = a [ 1 ; 1 ].
22 Finding The General Solution Solution Choose E_1: The vector E_1 = [ 1 ; 1 ] is our choice for an eigenvector corresponding to eigenvalue r_1 = -8. So one of the solutions is

[ x_1(t) ; y_1(t) ] = E_1 e^{-8t} = [ 1 ; 1 ] e^{-8t}.
23 Finding The General Solution Next, for eigenvalue r_2 = -7, let's find our eigenvector. Solution For eigenvalue r_2 = -7, substitute the value into

[ r+20 -12 ; 13 r-5 ] [ V_1 ; V_2 ] = [ 0 ; 0 ]  to get  [ 13 -12 ; 13 -12 ] [ V_1 ; V_2 ] = [ 0 ; 0 ].

This system of equations should be collinear: i.e. the rows should be multiples; i.e. both give rise to the same line. Our rows are multiples, so we can pick any row to find V_2 in terms of V_1. Picking the top row, we get 13 V_1 - 12 V_2 = 0, implying V_2 = (13/12) V_1. Letting V_1 = b, we find V_1 = b and V_2 = (13/12) b: so

[ V_1 ; V_2 ] = b [ 1 ; 13/12 ].
24 Finding The General Solution Solution Choose E_2: The vector E_2 = [ 1 ; 13/12 ] is our choice for an eigenvector corresponding to eigenvalue r_2 = -7. So the other solution is

[ x_2(t) ; y_2(t) ] = E_2 e^{-7t} = [ 1 ; 13/12 ] e^{-7t}.
25 Finding The General Solution Solution The general solution:

[ x(t) ; y(t) ] = A E_1 e^{-8t} + B E_2 e^{-7t} = A [ 1 ; 1 ] e^{-8t} + B [ 1 ; 13/12 ] e^{-7t}.

Find A and B: use the ICs.

[ x(0) ; y(0) ] = [ 1 ; -2 ] = A [ 1 ; 1 ] e^0 + B [ 1 ; 13/12 ] e^0 = A [ 1 ; 1 ] + B [ 1 ; 13/12 ].
26 Finding The General Solution Solution So

A + B = 1
A + (13/12) B = -2

Subtracting the bottom equation from the top equation, we get -(1/12) B = 3, or B = -36. Thus, A = 1 - B = 37. So

[ x(t) ; y(t) ] = 37 [ 1 ; 1 ] e^{-8t} - 36 [ 1 ; 13/12 ] e^{-7t}.
27 Symmetric Problems Let's specialize to the case where the coefficient matrix is a symmetric matrix. Let's start with a general symmetric matrix A given by

A = [ a b ; b d ]

where a, b and d are arbitrary nonzero numbers. The characteristic equation here is r^2 - (a+d) r + (ad - b^2) = 0. Note that the term ad - b^2 is the determinant of A. The roots are given by

r = ( (a+d) ± sqrt( (a+d)^2 - 4(ad - b^2) ) ) / 2
  = ( (a+d) ± sqrt( a^2 + 2ad + d^2 - 4ad + 4b^2 ) ) / 2
  = ( (a+d) ± sqrt( a^2 - 2ad + d^2 + 4b^2 ) ) / 2
  = ( (a+d) ± sqrt( (a-d)^2 + 4b^2 ) ) / 2.

It is easy to see the term under the square root is always nonnegative, implying two real roots.
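The closed form is easy to compute with directly. A quick sketch (the function name is mine):

```python
import math

def sym_eigs(a, b, d):
    """Eigenvalues of the symmetric matrix [[a, b], [b, d]] from the
    closed form ((a+d) +/- sqrt((a-d)^2 + 4 b^2)) / 2 derived above."""
    s = math.sqrt((a - d) ** 2 + 4 * b * b)
    return ((a + d + s) / 2, (a + d - s) / 2)
```

The term under the square root is a sum of squares, so the two roots are always real, matching the argument above.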
28 Symmetric Problems Note we can find the eigenvectors with a standard calculation. For eigenvalue λ_1 = ( (a+d) + sqrt( (a-d)^2 + 4b^2 ) ) / 2, we must find the vectors V so that

[ λ_1 - a  -b ; -b  λ_1 - d ] [ V_1 ; V_2 ] = [ 0 ; 0 ].

We can use the top equation to find the needed relationship between V_1 and V_2. We have

( λ_1 - a ) V_1 - b V_2 = 0.

Thus, choosing V_1 = b, we have V_2 = λ_1 - a = ( d - a + sqrt( (a-d)^2 + 4b^2 ) ) / 2. Thus, the first eigenvector is

E_1 = [ b ; ( d - a + sqrt( (a-d)^2 + 4b^2 ) ) / 2 ].
29 Symmetric Problems The second eigenvector is a similar calculation. For eigenvalue λ_2 = ( (a+d) - sqrt( (a-d)^2 + 4b^2 ) ) / 2, we must find the vector V so that

[ λ_2 - a  -b ; -b  λ_2 - d ] [ V_1 ; V_2 ] = [ 0 ; 0 ].

We find

( λ_2 - a ) V_1 - b V_2 = 0.

Thus, choosing V_1 = b, we have V_2 = λ_2 - a = ( d - a - sqrt( (a-d)^2 + 4b^2 ) ) / 2. Thus, the second eigenvector is

E_2 = [ b ; ( d - a - sqrt( (a-d)^2 + 4b^2 ) ) / 2 ].
30 Symmetric Problems Note that < E_1, E_2 > is

< E_1, E_2 > = b^2 + ( ( d - a + sqrt( (a-d)^2 + 4b^2 ) ) / 2 ) ( ( d - a - sqrt( (a-d)^2 + 4b^2 ) ) / 2 )
= b^2 + ( (d-a)^2 - ( (a-d)^2 + 4b^2 ) ) / 4
= b^2 - b^2 = 0.

Hence, these two eigenvectors are orthogonal to each other. Note the two eigenvalues are

λ_1 = ( (a+d) + sqrt( (a-d)^2 + 4b^2 ) ) / 2,  λ_2 = ( (a+d) - sqrt( (a-d)^2 + 4b^2 ) ) / 2.

The only way both eigenvalues can be zero is if both a + d = 0 and (a-d)^2 + 4b^2 = 0. That only happens if a = b = d = 0, which we explicitly ruled out at the beginning of our discussion because we said a, b and d were nonzero. However, both eigenvalues can be negative, both can be positive, or they can be of mixed sign, as our examples show.
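The cancellation above can be confirmed numerically from the eigenvector formulas. A sketch (helper names are mine; valid when b is nonzero so the top-row equation is usable):

```python
import math

def sym_eigvecs(a, b, d):
    """Eigenvectors E1, E2 of the symmetric matrix [[a, b], [b, d]]
    built from the formulas above; assumes b != 0."""
    s = math.sqrt((a - d) ** 2 + 4 * b * b)
    E1 = (b, (d - a + s) / 2)
    E2 = (b, (d - a - s) / 2)
    return E1, E2

def dot(u, v):
    # Standard inner product < u, v > on R^2.
    return u[0] * v[0] + u[1] * v[1]
```

For any choice of a, b, d with b nonzero, dot(E1, E2) comes out to zero up to rounding.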
31 Symmetric Problems Example For A = [ 3 2 ; 2 6 ] the eigenvalues are (9 ± 5)/2, that is, 7 and 2, and both are positive. Example A symmetric matrix can also have eigenvalues of mixed sign: this happens exactly when the determinant ad - b^2 is negative, since the determinant is the product λ_1 λ_2 of the eigenvalues.
32 A Canonical Form For a Symmetric Matrix Now let's look at symmetric matrices more abstractly. Don't worry, there is a payoff here in understanding! Let A be a general symmetric matrix as above. Then it has two distinct eigenvalues λ_1 and λ_2 with eigenvectors E_1 and E_2. Consider the matrix P given by P = [ E_1 E_2 ], whose transpose is then P^T = [ E_1^T ; E_2^T ]. Thus,

P^T A P = [ E_1^T ; E_2^T ] A [ E_1 E_2 ] = [ E_1^T ; E_2^T ] [ A E_1  A E_2 ] = [ E_1^T ; E_2^T ] [ λ_1 E_1  λ_2 E_2 ].
33 A Canonical Form For a Symmetric Matrix After we do the final multiplications, we have

P^T A P = [ λ_1 < E_1, E_1 >  λ_2 < E_1, E_2 > ; λ_1 < E_2, E_1 >  λ_2 < E_2, E_2 > ].

We know the eigenvectors are orthogonal, so we must have

P^T A P = [ λ_1 < E_1, E_1 >  0 ; 0  λ_2 < E_2, E_2 > ].

One last step and we are done! There is no reason we can't choose, as our eigenvectors, vectors of length one: here just replace E_1 by the new vector E_1 / ||E_1||, where ||E_1|| is the usual Euclidean length of the vector. Similarly, replace E_2 by E_2 / ||E_2||. Assuming this is done, we have < E_1, E_1 > = 1 and < E_2, E_2 > = 1. We are left with the identity

P^T A P = [ λ_1 0 ; 0 λ_2 ].

It is easy to see P^T P = P P^T = I, telling us P^T = P^{-1}. Thus, we can rewrite as

A = P [ λ_1 0 ; 0 λ_2 ] P^T.
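Putting the pieces together, the identity P^T A P = Λ can be verified numerically. A sketch with hand-rolled 2 x 2 helpers (all names are mine; assumes b nonzero):

```python
import math

def matmul(A, B):
    # 2 x 2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def sym_diagonalize(a, b, d):
    """Return (P, lam) with A = P diag(lam) P^T for the symmetric matrix
    A = [[a, b], [b, d]]; columns of P are unit eigenvectors."""
    s = math.sqrt((a - d) ** 2 + 4 * b * b)
    lam = ((a + d + s) / 2, (a + d - s) / 2)
    cols = []
    for lam_i in lam:
        v = (b, lam_i - a)          # from (lam_i - a) V1 - b V2 = 0, V1 = b
        n = math.hypot(*v)
        cols.append((v[0] / n, v[1] / n))
    P = [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]
    return P, lam
```

Multiplying out P^T A P for any such A produces a diagonal matrix whose entries are the two eigenvalues, largest first.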
34 A Canonical Form For a Symmetric Matrix This is an important result. We have shown the matrix A can be decomposed into the product A = P Λ P^T, where Λ is the diagonal matrix whose entries are the eigenvalues of A, with the most positive one in the (1,1) position. It is now clear how we solve an equation like A X = b. We rewrite it as P Λ P^T X = b, which leads to the solution X = P Λ^{-1} P^T b, and it is clear the reciprocal eigenvalue sizes determine how large the solution can get. The eigenvectors here are independent vectors in R^2, and since they span R^2, they form a basis. This is called an orthonormal basis because the vectors are orthogonal (perpendicular) and have length one. Hence, any vector in R^2 can be written as a linear combination of this basis. That is, if V is such a vector, then we have V = V_1 E_1 + V_2 E_2, and the components V_1 and V_2 are known as the components of V relative to the basis { E_1, E_2 }. We often just refer to this basis as E. Hence, a vector V has many possible representations.
35 A Canonical Form For a Symmetric Matrix The one you are most used to is the one which uses the basis vectors

e_1 = [ 1 ; 0 ] and e_2 = [ 0 ; 1 ],

which is called the standard basis. When we write V = [ 3 ; 5 ], unless otherwise stated, we assume these are the components of V with respect to the standard basis. Now let's go back to V. Since the vectors E_1 and E_2 are orthogonal, we can take inner products on both sides of the representation of V with respect to the basis E to get

< V, E_1 > = < V_1 E_1, E_1 > + < V_2 E_2, E_1 > = V_1 < E_1, E_1 > + V_2 < E_2, E_1 > = V_1,

as < E_2, E_1 > = 0 because the vectors are perpendicular, and < E_1, E_1 > = < E_2, E_2 > = 1 as our eigenvectors have length one. So we have V_1 = < V, E_1 > and similarly V_2 = < V, E_2 >. So we can decompose V as

V = < V, E_1 > E_1 + < V, E_2 > E_2.
36 A Canonical Form For a Symmetric Matrix Another way to look at this is that the two eigenvectors can be used to find a representation of the data vector b and the solution vector X as follows:

b = < b, E_1 > E_1 + < b, E_2 > E_2 = b_1 E_1 + b_2 E_2
X = < X, E_1 > E_1 + < X, E_2 > E_2 = X_1 E_1 + X_2 E_2

So A X = b becomes

A ( X_1 E_1 + X_2 E_2 ) = b_1 E_1 + b_2 E_2
λ_1 X_1 E_1 + λ_2 X_2 E_2 = b_1 E_1 + b_2 E_2.

The only way this equation works is if the coefficients on the eigenvectors match. So we have X_1 = λ_1^{-1} b_1 and X_2 = λ_2^{-1} b_2. This shows very clearly how the solution depends on the size of the reciprocal eigenvalues. Thus, if our problem has a very small eigenvalue, we would expect our solution vector to be unstable. Also, if one of the eigenvalues is 0, we would have real problems! We can address this somewhat by finding a way to force all the eigenvalues to be positive.
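The coefficient-matching recipe translates directly into a solver: project b onto each unit eigenvector, divide by the eigenvalue, and recombine. A sketch (names are mine; assumes b, the off-diagonal entry, is nonzero and no eigenvalue is zero):

```python
import math

def solve_sym(a, b_, d, rhs):
    """Solve [[a, b_], [b_, d]] X = rhs by expanding rhs in the
    orthonormal eigenbasis and dividing each component by its eigenvalue."""
    s = math.sqrt((a - d) ** 2 + 4 * b_ * b_)
    lams = ((a + d + s) / 2, (a + d - s) / 2)
    X = [0.0, 0.0]
    for lam in lams:
        v = (b_, lam - a)                 # eigenvector from the top row
        n = math.hypot(*v)
        e = (v[0] / n, v[1] / n)          # unit eigenvector
        coef = (rhs[0] * e[0] + rhs[1] * e[1]) / lam   # < b, E > / lambda
        X[0] += coef * e[0]
        X[1] += coef * e[1]
    return X
```

Dividing by a tiny eigenvalue blows up the corresponding coefficient, which is exactly the instability described above.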
37 Signed Definite Matrices A matrix is said to be a positive definite matrix if x^T A x > 0 for all nonzero vectors x. If we multiply this out for our symmetric A, we find the inequality below:

a x_1^2 + 2b x_1 x_2 + d x_2^2 > 0.

If we complete the square, we find

a ( x_1 + (b/a) x_2 )^2 + ( (ad - b^2)/a ) x_2^2 > 0.

Now if the leading term a > 0, and if the determinant of A, ad - b^2, is positive, then a ( x_1 + (b/a) x_2 )^2 + ( (ad - b^2)/a ) x_2^2 is always positive for nonzero x. Note since the determinant is positive, ad > b^2, which forces d to be positive as well. So in this case, a > 0, d > 0 and ad - b^2 > 0, and the expression x^T A x > 0. Now recall what we found about the eigenvalues here. We had the eigenvalues were

r = ( (a+d) ± sqrt( (a-d)^2 + 4b^2 ) ) / 2.
38 Signed Definite Matrices Since ad - b^2 > 0, the term under the square root satisfies

(a-d)^2 + 4b^2 = a^2 - 2ad + d^2 + 4b^2 < a^2 - 2ad + 4ad + d^2 = (a+d)^2.

Thus, the square root is smaller than a + d, as a and d are positive. The first root is then always positive. The second root is too, as

(a+d) - sqrt( (a-d)^2 + 4b^2 ) > (a+d) - (a+d) = 0.

So both eigenvalues are positive if a and d are positive and ad - b^2 > 0. Note the argument can go the other way. If we assume the matrix is positive definite, then we are forced to have a > 0 and ad - b^2 > 0, which gives the same result. We conclude our symmetric matrix A is positive definite if and only if a > 0, d > 0 and the determinant of A is positive too. Note a positive definite matrix has positive eigenvalues.
39 Signed Definite Matrices A similar argument holds if we have the determinant of A positive but a < 0. The determinant condition will then force d < 0 too. We find that x^T A x < 0 for all nonzero x. In this case, we say the matrix is negative definite. The eigenvalues are still

r = ( (a+d) ± sqrt( (a-d)^2 + 4b^2 ) ) / 2.

But now, since ad - b^2 > 0, the term

(a-d)^2 + 4b^2 = a^2 - 2ad + d^2 + 4b^2 < a^2 - 2ad + 4ad + d^2 = (a+d)^2,

so the square root is smaller than |a+d| = -(a+d). Since a and d are negative, a + d < 0, and so the second root is always negative. The first root's sign is determined by

(a+d) + sqrt( (a-d)^2 + 4b^2 ) < (a+d) - (a+d) = 0.

So both eigenvalues are negative. We have found the matrix A is negative definite if a and d are negative and the determinant of A is positive. Note a negative definite matrix has negative eigenvalues.
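The two sign tests combine into one 2 x 2 classifier, and its verdict can be cross-checked against the eigenvalue signs. A sketch (all names are mine):

```python
import math

def classify_sym(a, b, d):
    """Classify the symmetric matrix [[a, b], [b, d]] using the leading
    entry / determinant test from the notes."""
    det = a * d - b * b
    if det > 0:
        return "positive definite" if a > 0 else "negative definite"
    if det < 0:
        return "indefinite"
    return "degenerate"  # a zero eigenvalue; the test above is silent here

def sym_eigs(a, b, d):
    # Closed-form eigenvalues of the symmetric matrix.
    s = math.sqrt((a - d) ** 2 + 4 * b * b)
    return ((a + d + s) / 2, (a + d - s) / 2)
```

A positive determinant with a > 0 yields two positive eigenvalues, a positive determinant with a < 0 yields two negative ones, and a negative determinant yields mixed signs, matching the derivation above.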
40 A Deeper Look at Extremals We can now rephrase what we said about second order tests for extremals for functions of two variables. Recall we had:

Theorem If the partials of f are zero at the point (x_0, y_0), we can determine if that point is a local minimum or local maximum of f using a second order test. We must assume the second order partials are continuous at the point (x_0, y_0). Write f_xx^0 for f_xx(x_0, y_0) and H_0 for the Hessian at (x_0, y_0).
If f_xx^0 > 0 and det(H_0) > 0, then f(x_0, y_0) is a local minimum.
If f_xx^0 < 0 and det(H_0) > 0, then f(x_0, y_0) is a local maximum.
If det(H_0) < 0, then f(x_0, y_0) is a local saddle.
We just don't know anything if det(H_0) = 0.
41 A Deeper Look at Extremals The Hessian at the critical point is H_0 = [ A B ; B D ] and we see:
f_xx^0 > 0 and det(H_0) > 0 tells us H_0 is positive definite and both eigenvalues are positive.
f_xx^0 < 0 and det(H_0) > 0 tells us H_0 is negative definite and both eigenvalues are negative.
We haven't proven it yet, but if the eigenvalues are nonzero and differ in sign, this gives us a saddle.
Thus our theorem becomes:

Theorem Suppose the gradient of f is zero at the point (x_0, y_0) and the second order partials are continuous at (x_0, y_0).
If H_0 is positive definite, then f(x_0, y_0) is a local minimum.
If H_0 is negative definite, then f(x_0, y_0) is a local maximum.
If the eigenvalues of H_0 are nonzero and of mixed sign, then f(x_0, y_0) is a local saddle.
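The restated theorem translates directly into code. A sketch (the function name is mine, and it assumes the gradient is already zero at the point being tested):

```python
def classify_critical_point(fxx, fxy, fyy):
    """Second order test at a critical point whose Hessian is
    H0 = [[fxx, fxy], [fxy, fyy]]."""
    det = fxx * fyy - fxy * fxy
    if det > 0:
        return "local minimum" if fxx > 0 else "local maximum"
    if det < 0:
        return "saddle"
    return "inconclusive"  # det(H0) = 0: the test tells us nothing
```

For example, f(x, y) = x^2 + y^2 has Hessian [ 2 0 ; 0 2 ] at the origin, which the test labels a local minimum.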
42 A Deeper Look at Extremals Homework 39 For these problems:
Write the matrix-vector form.
Derive the characteristic equation.
Find the two eigenvalues. Label the largest one as r_1 and the other as r_2.
Find the two associated eigenvectors as unit vectors.
Define P = [ E_1 E_2 ].
Compute P^T A P, where A is the coefficient matrix of the ODE system.
Show A = P Λ P^T for an appropriate Λ.
Write the general solution.
Solve the IVP.
43 A Deeper Look at Extremals Homework 39 Continued
39.1
x' = x + 2y
y' = 2x - 6y
x(0) = 4, y(0) = 2
39.2
x' = x + 3y
y' = 3x + 7y
x(0) = 2, y(0) = 3
44 A Deeper Look at Extremals Homework 39 The following matrix is the Hessian H_0 at the critical point (1, 1) of an extremal value problem.
Determine if H_0 is positive or negative definite.
Determine if the critical point is a maximum or a minimum.
Find the two eigenvalues of H_0. Label the largest one as r_1 and the other as r_2.
Find the two associated eigenvectors as unit vectors.
Define P = [ E_1 E_2 ].
Compute P^T H_0 P.
Show H_0 = P Λ P^T for an appropriate Λ.
45 A Deeper Look at Extremals Homework 39 Continued
H_0 = [ ]
H_0 = [ ]
What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix
More information22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes
Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one
More informationHölder s and Minkowski s Inequality
Hölder s and Minkowski s Inequality James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University September 1, 218 Outline Conjugate Exponents Hölder s
More informationEigenvectors and Hermitian Operators
7 71 Eigenvalues and Eigenvectors Basic Definitions Let L be a linear operator on some given vector space V A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding
More informationCMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA
CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA Andrew ID: ljelenak August 25, 2018 This assignment reviews basic mathematical tools you will use throughout
More informationEigenvalues and Eigenvectors
LECTURE 3 Eigenvalues and Eigenvectors Definition 3.. Let A be an n n matrix. The eigenvalue-eigenvector problem for A is the problem of finding numbers λ and vectors v R 3 such that Av = λv. If λ, v are
More informationNotes on Row Reduction
Notes on Row Reduction Francis J. Narcowich Department of Mathematics Texas A&M University September The Row-Reduction Algorithm The row-reduced form of a matrix contains a great deal of information, both
More informationKnowledge Discovery and Data Mining 1 (VO) ( )
Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory
More informationProject One: C Bump functions
Project One: C Bump functions James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 2, 2018 Outline 1 2 The Project Let s recall what the
More informationUnit 2: Lines and Planes in 3 Space. Linear Combinations of Vectors
Lesson10.notebook November 28, 2012 Unit 2: Lines and Planes in 3 Space Linear Combinations of Vectors Today's goal: I can write vectors as linear combinations of each other using the appropriate method
More informationLinear Algebra: Matrix Eigenvalue Problems
CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given
More informationChapter 5. Eigenvalues and Eigenvectors
Chapter 5 Eigenvalues and Eigenvectors Section 5. Eigenvectors and Eigenvalues Motivation: Difference equations A Biology Question How to predict a population of rabbits with given dynamics:. half of the
More information1. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal
. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal 3 9 matrix D such that A = P DP, for A =. 3 4 3 (a) P = 4, D =. 3 (b) P = 4, D =. (c) P = 4 8 4, D =. 3 (d) P
More informationAPPLICATIONS The eigenvalues are λ = 5, 5. An orthonormal basis of eigenvectors consists of
CHAPTER III APPLICATIONS The eigenvalues are λ =, An orthonormal basis of eigenvectors consists of, The eigenvalues are λ =, A basis of eigenvectors consists of, 4 which are not perpendicular However,
More informationPredator - Prey Model Trajectories are periodic
Predator - Prey Model Trajectories are periodic James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 4, 2013 Outline Showing The PP Trajectories
More informationColumn 3 is fine, so it remains to add Row 2 multiplied by 2 to Row 1. We obtain
Section Exercise : We are given the following augumented matrix 3 7 6 3 We have to bring it to the diagonal form The entries below the diagonal are already zero, so we work from bottom to top Adding the
More informationFunctional Analysis Review
Outline 9.520: Statistical Learning Theory and Applications February 8, 2010 Outline 1 2 3 4 Vector Space Outline A vector space is a set V with binary operations +: V V V and : R V V such that for all
More informationEigenvalues, Eigenvectors, and Diagonalization
Week12 Eigenvalues, Eigenvectors, and Diagonalization 12.1 Opening Remarks 12.1.1 Predicting the Weather, Again Let us revisit the example from Week 4, in which we had a simple model for predicting the
More informationLecture 8 : Eigenvalues and Eigenvectors
CPS290: Algorithmic Foundations of Data Science February 24, 2017 Lecture 8 : Eigenvalues and Eigenvectors Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Hermitian Matrices It is simpler to begin with
More informationMATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors.
MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors. Orthogonal sets Let V be a vector space with an inner product. Definition. Nonzero vectors v 1,v
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More informationLinear Algebra Exercises
9. 8.03 Linear Algebra Exercises 9A. Matrix Multiplication, Rank, Echelon Form 9A-. Which of the following matrices is in row-echelon form? 2 0 0 5 0 (i) (ii) (iii) (iv) 0 0 0 (v) [ 0 ] 0 0 0 0 0 0 0 9A-2.
More information(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =
. (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)
More informationLecture 7. Econ August 18
Lecture 7 Econ 2001 2015 August 18 Lecture 7 Outline First, the theorem of the maximum, an amazing result about continuity in optimization problems. Then, we start linear algebra, mostly looking at familiar
More information18.06 Problem Set 8 Solution Due Wednesday, 22 April 2009 at 4 pm in Total: 160 points.
86 Problem Set 8 Solution Due Wednesday, April 9 at 4 pm in -6 Total: 6 points Problem : If A is real-symmetric, it has real eigenvalues What can you say about the eigenvalues if A is real and anti-symmetric
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More informationIntroduction to Vectors
Introduction to Vectors K. Behrend January 31, 008 Abstract An introduction to vectors in R and R 3. Lines and planes in R 3. Linear dependence. 1 Contents Introduction 3 1 Vectors 4 1.1 Plane vectors...............................
More informationExtra Problems for Math 2050 Linear Algebra I
Extra Problems for Math 5 Linear Algebra I Find the vector AB and illustrate with a picture if A = (,) and B = (,4) Find B, given A = (,4) and [ AB = A = (,4) and [ AB = 8 If possible, express x = 7 as
More informationLinear algebra II Homework #1 solutions A = This means that every eigenvector with eigenvalue λ = 1 must have the form
Linear algebra II Homework # solutions. Find the eigenvalues and the eigenvectors of the matrix 4 6 A =. 5 Since tra = 9 and deta = = 8, the characteristic polynomial is f(λ) = λ (tra)λ+deta = λ 9λ+8 =
More informationCheat Sheet for MATH461
Cheat Sheet for MATH46 Here is the stuff you really need to remember for the exams Linear systems Ax = b Problem: We consider a linear system of m equations for n unknowns x,,x n : For a given matrix A
More informationRepeated Eigenvalues and Symmetric Matrices
Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one
More informationCS 143 Linear Algebra Review
CS 143 Linear Algebra Review Stefan Roth September 29, 2003 Introductory Remarks This review does not aim at mathematical rigor very much, but instead at ease of understanding and conciseness. Please see
More informationHölder s and Minkowski s Inequality
Hölder s and Minkowski s Inequality James K. Peterson Deartment of Biological Sciences and Deartment of Mathematical Sciences Clemson University Setember 10, 2018 Outline 1 Conjugate Exonents 2 Hölder
More informationLecture 1 Systems of Linear Equations and Matrices
Lecture 1 Systems of Linear Equations and Matrices Math 19620 Outline of Course Linear Equations and Matrices Linear Transformations, Inverses Bases, Linear Independence, Subspaces Abstract Vector Spaces
More informationMA 265 FINAL EXAM Fall 2012
MA 265 FINAL EXAM Fall 22 NAME: INSTRUCTOR S NAME:. There are a total of 25 problems. You should show work on the exam sheet, and pencil in the correct answer on the scantron. 2. No books, notes, or calculators
More informationREVIEW OF DIFFERENTIAL CALCULUS
REVIEW OF DIFFERENTIAL CALCULUS DONU ARAPURA 1. Limits and continuity To simplify the statements, we will often stick to two variables, but everything holds with any number of variables. Let f(x, y) be
More informationComputational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science
Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding
More informationBasic Calculus Review
Basic Calculus Review Lorenzo Rosasco ISML Mod. 2 - Machine Learning Vector Spaces Functionals and Operators (Matrices) Vector Space A vector space is a set V with binary operations +: V V V and : R V
More informationLINEAR ALGEBRA KNOWLEDGE SURVEY
LINEAR ALGEBRA KNOWLEDGE SURVEY Instructions: This is a Knowledge Survey. For this assignment, I am only interested in your level of confidence about your ability to do the tasks on the following pages.
More information14 Singular Value Decomposition
14 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing
More informationMatrix-Vector Products and the Matrix Equation Ax = b
Matrix-Vector Products and the Matrix Equation Ax = b A. Havens Department of Mathematics University of Massachusetts, Amherst January 31, 2018 Outline 1 Matrices Acting on Vectors Linear Combinations
More informationOld Math 330 Exams. David M. McClendon. Department of Mathematics Ferris State University
Old Math 330 Exams David M. McClendon Department of Mathematics Ferris State University Last updated to include exams from Fall 07 Contents Contents General information about these exams 3 Exams from Fall
More informationk is a product of elementary matrices.
Mathematics, Spring Lecture (Wilson) Final Eam May, ANSWERS Problem (5 points) (a) There are three kinds of elementary row operations and associated elementary matrices. Describe what each kind of operation
More informationHomogeneous Constant Matrix Systems, Part II
4 Homogeneous Constant Matrix Systems, Part II Let us now expand our discussions begun in the previous chapter, and consider homogeneous constant matrix systems whose matrices either have complex eigenvalues
More informationCS168: The Modern Algorithmic Toolbox Lecture #8: How PCA Works
CS68: The Modern Algorithmic Toolbox Lecture #8: How PCA Works Tim Roughgarden & Gregory Valiant April 20, 206 Introduction Last lecture introduced the idea of principal components analysis (PCA). The
More informationMatrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =
30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can
More information1 Last time: least-squares problems
MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that
More informationRecitation 9: Probability Matrices and Real Symmetric Matrices. 3 Probability Matrices: Definitions and Examples
Math b TA: Padraic Bartlett Recitation 9: Probability Matrices and Real Symmetric Matrices Week 9 Caltech 20 Random Question Show that + + + + +... = ϕ, the golden ratio, which is = + 5. 2 2 Homework comments
More informationSolutions to Final Practice Problems Written by Victoria Kala Last updated 12/5/2015
Solutions to Final Practice Problems Written by Victoria Kala vtkala@math.ucsb.edu Last updated /5/05 Answers This page contains answers only. See the following pages for detailed solutions. (. (a x. See
More informationMATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003
MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 1. True or False (28 points, 2 each) T or F If V is a vector space
More informationYORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions
YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 222 3. M Test # July, 23 Solutions. For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For
More information