Differential Equations (part 3): Systems of First-Order Differential Equations (by Evan Dummit, 2016)

Contents

6 Systems of First-Order Linear Differential Equations
  6.1 General Theory of (First-Order) Linear Systems
  6.2 Eigenvalue Method (Nondefective Coefficient Matrices)

6 Systems of First-Order Linear Differential Equations

In many (perhaps most) applications of differential equations, we have not one but several quantities which change over time and interact with one another. Examples include the populations of the various species in an ecosystem, the concentrations of molecules involved in a chemical reaction, the motion of objects in a physical system, and the availability and production of items (goods, labor, materials) in economic processes.

In this chapter, we will outline the basic theory of systems of differential equations. As with the other differential equations we have studied, we cannot solve arbitrary systems in full generality: in fact, it is very difficult even to solve individual nonlinear differential equations, let alone a system of nonlinear equations. We will therefore restrict our attention to systems of linear differential equations: as with our study of higher-order linear differential equations, there is an underlying vector-space structure to the solutions which we will explain. We will discuss how to solve many examples of homogeneous systems having constant coefficients.

6.1 General Theory of (First-Order) Linear Systems

Before we start our discussion of systems of linear differential equations, we first observe that we can reduce any system of linear differential equations to a system of first-order linear differential equations (in more variables): if we define new variables equal to the higher-order derivatives of our old variables, then we can rewrite the old system as a system of first-order equations.

Example: Convert the single 3rd-order equation $y''' + y' = 0$ to a system of first-order equations.

If we define new variables $z = y'$ and $w = y'' = z'$, then the original equation tells us that $y''' = -y'$, so $w' = y''' = -y' = -z$. Thus, this single 3rd-order equation is equivalent to the first-order system $y' = z$, $z' = w$, $w' = -z$.

Example: Convert the system $y_1'' + y_1 = 0$, $y_2'' + y_1' + y_2'\sin(x) = e^x$ to a system of first-order equations.

If we define new variables $z_1 = y_1'$ and $z_2 = y_2'$, then $z_1' = y_1'' = -y_1$, and $z_2' = y_2'' = e^x - y_1' - y_2'\sin(x) = e^x - z_1 - z_2\sin(x)$. So this system is equivalent to the first-order system $y_1' = z_1$, $y_2' = z_2$, $z_1' = -y_1$, $z_2' = e^x - z_1 - z_2\sin(x)$.

Thus, whatever we can show about solutions of systems of first-order linear equations will carry over to arbitrary systems of linear differential equations, so from now on we will talk only about systems of first-order linear differential equations.

Definition: The standard form of a system of first-order linear differential equations with unknown functions $y_1, y_2, \dots, y_n$ is
$$y_1' = a_{1,1}(x)\,y_1 + a_{1,2}(x)\,y_2 + \cdots + a_{1,n}(x)\,y_n + q_1(x)$$
$$y_2' = a_{2,1}(x)\,y_1 + a_{2,2}(x)\,y_2 + \cdots + a_{2,n}(x)\,y_n + q_2(x)$$
$$\vdots$$
$$y_n' = a_{n,1}(x)\,y_1 + a_{n,2}(x)\,y_2 + \cdots + a_{n,n}(x)\,y_n + q_n(x)$$
for some functions $a_{i,j}(x)$ and $q_i(x)$, $1 \le i, j \le n$.
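This conversion is exactly how numerical solvers expect higher-order equations to be supplied. As an illustration (mine, not part of the original notes), here is a sketch using scipy's `solve_ivp` on the first-order system equivalent to $y''' + y' = 0$; with initial data $y(0) = 1$, $y'(0) = 0$, $y''(0) = -1$, the exact solution is $y = \cos(x)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# State vector u = (y, z, w) with z = y', w = y''.
# The equation y''' + y' = 0 becomes the system y' = z, z' = w, w' = -z.
def rhs(x, u):
    y, z, w = u
    return [z, w, -z]

# Initial data y(0) = 1, y'(0) = 0, y''(0) = -1; the exact solution is y = cos(x).
sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, -1.0],
                rtol=1e-10, atol=1e-10, dense_output=True)
error = abs(sol.sol(5.0)[0] - np.cos(5.0))
print(error)  # tiny: the numerical solution tracks cos(x)
```

The same pattern works for any of the conversions above: one state component per unknown function and per derivative below the highest.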
We can write this system more compactly using matrices: if
$$A = \begin{pmatrix} a_{1,1}(x) & \cdots & a_{1,n}(x) \\ \vdots & \ddots & \vdots \\ a_{n,1}(x) & \cdots & a_{n,n}(x) \end{pmatrix}, \quad \mathbf{q} = \begin{pmatrix} q_1(x) \\ \vdots \\ q_n(x) \end{pmatrix}, \quad \mathbf{y} = \begin{pmatrix} y_1(x) \\ \vdots \\ y_n(x) \end{pmatrix} \ \text{ so that } \ \mathbf{y}' = \begin{pmatrix} y_1'(x) \\ \vdots \\ y_n'(x) \end{pmatrix},$$
we can write the system more compactly as $\mathbf{y}' = A\mathbf{y} + \mathbf{q}$.

We say that the system is homogeneous if $\mathbf{q} = \mathbf{0}$, and it is nonhomogeneous otherwise. Most of the time we will be dealing with systems with constant coefficients, in which the entries of $A$ are constant functions.

An initial condition for this system consists of $n$ pieces of information: $y_1(x_0) = b_1$, $y_2(x_0) = b_2$, $\dots$, $y_n(x_0) = b_n$, where $x_0$ is the starting value for $x$ and the $b_i$ are constants. Equivalently, it is a condition of the form $\mathbf{y}(x_0) = \mathbf{b}$ for some vector $\mathbf{b}$.

We also have a version of the Wronskian in this setting for checking whether function vectors are linearly independent:

Definition: Given $n$ vectors $\mathbf{v}_1 = \begin{pmatrix} y_{1,1}(x) \\ \vdots \\ y_{n,1}(x) \end{pmatrix}, \dots, \mathbf{v}_n = \begin{pmatrix} y_{1,n}(x) \\ \vdots \\ y_{n,n}(x) \end{pmatrix}$ of length $n$ with functions as entries, their Wronskian is defined as the determinant
$$W = \begin{vmatrix} y_{1,1} & y_{1,2} & \cdots & y_{1,n} \\ y_{2,1} & y_{2,2} & \cdots & y_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n,1} & y_{n,2} & \cdots & y_{n,n} \end{vmatrix}.$$

By our results on row operations and determinants, we immediately see that $n$ function vectors $\mathbf{v}_1, \dots, \mathbf{v}_n$ of length $n$ are linearly independent if their Wronskian is not the zero function.

Many of the theorems about general systems of first-order linear equations are very similar to the theorems about $n$th-order linear equations.

Theorem (Existence-Uniqueness): For a system of first-order linear differential equations in the standard form above, if the coefficient functions $a_{i,j}(x)$ and nonhomogeneous terms $q_i(x)$ are each continuous in an interval around $x_0$, for all $1 \le i, j \le n$, then the system with initial conditions $y_1(x_0) = b_1, \dots, y_n(x_0) = b_n$ has a unique solution $(y_1, y_2, \dots, y_n)$ on that interval.

This theorem is not trivial to prove, and we will omit the proof.

Example: The system $y' = e^x\,y + \sin(x)\,z$, $z' = 3x^2\,y$ has a unique solution for every initial condition $y(x_0) = b_1$, $z(x_0) = b_2$.
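As a concrete check of the Wronskian criterion, here is a small sketch (with hypothetical function vectors of my choosing, not from the notes): take $\mathbf{v}_1(x) = (e^x, e^x)$ and $\mathbf{v}_2(x) = (e^{2x}, 2e^{2x})$, whose Wronskian is $2e^{3x} - e^{3x} = e^{3x}$, never zero, so the vectors are linearly independent.

```python
import math

# Hypothetical function vectors of length 2: v1(x) = (e^x, e^x), v2(x) = (e^{2x}, 2e^{2x}).
def v1(x):
    return (math.exp(x), math.exp(x))

def v2(x):
    return (math.exp(2 * x), 2 * math.exp(2 * x))

def wronskian(x):
    # Determinant of the 2x2 matrix whose columns are v1(x) and v2(x).
    (a, c), (b, d) = v1(x), v2(x)
    return a * d - b * c

# W(x) = 2e^{3x} - e^{3x} = e^{3x} is nonzero, so v1 and v2 are independent.
print(wronskian(0.0))  # 1.0
```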
Proposition: Suppose $\mathbf{y}_{\mathrm{par}}$ is one solution to the matrix system $\mathbf{y}' = A\mathbf{y} + \mathbf{q}$. Then the general solution $\mathbf{y}_{\mathrm{gen}}$ to this equation may be written as $\mathbf{y}_{\mathrm{gen}} = \mathbf{y}_{\mathrm{par}} + \mathbf{y}_{\mathrm{hom}}$, where $\mathbf{y}_{\mathrm{hom}}$ is a solution to the homogeneous system $\mathbf{y}' = A\mathbf{y}$.

Proof: Suppose that $\mathbf{y}_1$ and $\mathbf{y}_2$ are solutions to the general equation. Then $(\mathbf{y}_1 - \mathbf{y}_2)' = \mathbf{y}_1' - \mathbf{y}_2' = (A\mathbf{y}_1 + \mathbf{q}) - (A\mathbf{y}_2 + \mathbf{q}) = A(\mathbf{y}_1 - \mathbf{y}_2)$, meaning that the difference $\mathbf{y}_1 - \mathbf{y}_2$ is a solution to the homogeneous equation.
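For constant $A$ and constant $\mathbf{q}$, this proposition is easy to verify numerically. A sketch (with sample numbers of my choosing, using the closed form $\mathbf{y}(x) = e^{Ax}\mathbf{y}(0) + A^{-1}(e^{Ax} - I)\mathbf{q}$, which is valid when $A$ is invertible): the difference of two nonhomogeneous solutions is $e^{Ax}$ applied to the difference of their initial values, which is exactly a homogeneous solution.

```python
import numpy as np
from scipy.linalg import expm

# Sample invertible coefficient matrix and constant forcing term (my choice).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
q = np.array([1.0, 0.0])

def solve(x, y0):
    # Closed-form solution of y' = Ay + q for constant invertible A and constant q:
    # y(x) = e^{Ax} y0 + A^{-1} (e^{Ax} - I) q.
    E = expm(A * x)
    return E @ y0 + np.linalg.solve(A, (E - np.eye(2)) @ q)

x = 0.7
a0, b0 = np.array([2.0, -1.0]), np.array([0.5, 3.0])
diff = solve(x, a0) - solve(x, b0)
hom = expm(A * x) @ (a0 - b0)  # the homogeneous solution through a0 - b0
print(np.allclose(diff, hom))  # True
```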
Theorem (Homogeneous Systems): If the coefficient functions $a_{i,j}(x)$ are continuous on an interval $I$ for each $1 \le i, j \le n$, then the set of solutions $\mathbf{y}$ to the homogeneous system $\mathbf{y}' = A\mathbf{y}$ on $I$ is an $n$-dimensional vector space.

Proof: First, the solution space is a subspace, since it satisfies the subspace criterion:
- S1: The zero function is a solution.
- S2: If $\mathbf{y}_1$ and $\mathbf{y}_2$ are solutions, then $(\mathbf{y}_1 + \mathbf{y}_2)' = \mathbf{y}_1' + \mathbf{y}_2' = A\mathbf{y}_1 + A\mathbf{y}_2 = A(\mathbf{y}_1 + \mathbf{y}_2)$, so $\mathbf{y}_1 + \mathbf{y}_2$ is also a solution.
- S3: If $\alpha$ is a scalar and $\mathbf{y}$ is a solution, then $(\alpha\mathbf{y})' = \alpha\mathbf{y}' = \alpha(A\mathbf{y}) = A(\alpha\mathbf{y})$, so $\alpha\mathbf{y}$ is also a solution.

Now we need to show that the solution space is $n$-dimensional; we will do this by finding a basis. Choose any $x_0$ in $I$. By the existence part of the existence-uniqueness theorem, for each $1 \le i \le n$ there exists a solution $\mathbf{z}_i$ such that $\mathbf{z}_i(x_0)$ is the $i$th unit coordinate vector of $\mathbb{R}^n$; that is, $z_{i,i}(x_0) = 1$ and $z_{i,j}(x_0) = 0$ for all $j \ne i$. The functions $\mathbf{z}_1, \mathbf{z}_2, \dots, \mathbf{z}_n$ are linearly independent, because their Wronskian matrix evaluated at $x = x_0$ is the identity matrix. (In particular, the Wronskian is not the zero function.)

Now suppose $\mathbf{y}$ is any solution to the homogeneous equation, with $\mathbf{y}(x_0) = \mathbf{c}$. Then the function $\mathbf{z} = c_1\mathbf{z}_1 + c_2\mathbf{z}_2 + \cdots + c_n\mathbf{z}_n$ also has $\mathbf{z}(x_0) = \mathbf{c}$ and is a solution to the homogeneous equation. But by the uniqueness part of the existence-uniqueness theorem, there is only one such function, so we must have $\mathbf{y}(x) = \mathbf{z}(x)$ for all $x$: therefore $\mathbf{y} = c_1\mathbf{z}_1 + c_2\mathbf{z}_2 + \cdots + c_n\mathbf{z}_n$, meaning that $\mathbf{y}$ is in the span of $\mathbf{z}_1, \mathbf{z}_2, \dots, \mathbf{z}_n$. This is true for any solution function $\mathbf{y}$, so $\mathbf{z}_1, \mathbf{z}_2, \dots, \mathbf{z}_n$ span the solution space. Since they are also linearly independent, they form a basis of the solution space, and because there are $n$ of them, the solution space is $n$-dimensional.

If we combine the above results, we can write down a fairly nice form for the solutions of a general system of first-order differential equations:

Corollary: The general solution to the nonhomogeneous equation $\mathbf{y}' = A\mathbf{y} + \mathbf{q}$ has the form $\mathbf{y} = \mathbf{y}_{\mathrm{par}} + C_1\mathbf{z}_1 + C_2\mathbf{z}_2 + \cdots + C_n\mathbf{z}_n$, where $\mathbf{y}_{\mathrm{par}}$ is any one particular solution of the nonhomogeneous equation, $\mathbf{z}_1, \dots, \mathbf{z}_n$ are a basis for the solutions to the homogeneous equation, and $C_1, \dots, C_n$ are arbitrary constants.

This corollary says that, in order to find the general solution, we only need to find one function which satisfies the nonhomogeneous equation, and then solve the homogeneous equation.

6.2 Eigenvalue Method (Nondefective Coefficient Matrices)

We now restrict our discussion to homogeneous first-order systems with constant coefficients: those of the form
$$y_1' = a_{1,1}\,y_1 + a_{1,2}\,y_2 + \cdots + a_{1,n}\,y_n$$
$$y_2' = a_{2,1}\,y_1 + a_{2,2}\,y_2 + \cdots + a_{2,n}\,y_n$$
$$\vdots$$
$$y_n' = a_{n,1}\,y_1 + a_{n,2}\,y_2 + \cdots + a_{n,n}\,y_n$$
which we will write in matrix form as $\mathbf{y}' = A\mathbf{y}$, with $\mathbf{y} = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}$ and $A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{pmatrix}$.
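For constant coefficients, the basis $\mathbf{z}_1, \dots, \mathbf{z}_n$ constructed in the proof above can be realized concretely: the columns of the matrix exponential $\Phi(x) = e^{A(x - x_0)}$ are solutions whose values at $x_0$ are the unit coordinate vectors. A quick numerical sketch (sample matrix of my choosing, not from the notes):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # sample 2x2 constant-coefficient system
x0 = 0.0

def Phi(x):
    # Columns of Phi(x) are the solutions z_1, ..., z_n with z_i(x0) = i-th unit vector.
    return expm(A * (x - x0))

# The Wronskian matrix at x0 is the identity, so W(x0) = 1 is nonzero:
print(np.linalg.det(Phi(x0)))  # 1.0
# Any solution with y(x0) = c is Phi(x) @ c, i.e. c_1 z_1 + ... + c_n z_n.
c = np.array([3.0, -2.0])
y = Phi(1.5) @ c
```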
Our starting point for solving such systems is to observe that if $\mathbf{v}$ is an eigenvector of $A$ with eigenvalue $\lambda$, then $\mathbf{y} = e^{\lambda x}\mathbf{v}$ is a solution to $\mathbf{y}' = A\mathbf{y}$. This follows simply by differentiating $\mathbf{y} = e^{\lambda x}\mathbf{v}$ with respect to $x$: we see $\mathbf{y}' = \lambda e^{\lambda x}\mathbf{v} = \lambda\mathbf{y} = A\mathbf{y}$.

In the event that $A$ has $n$ linearly independent eigenvectors, we will therefore obtain $n$ solutions to the differential equation. If these solutions are linearly independent, then since we know the solution space is $n$-dimensional, we would be able to conclude that our solutions are a basis for the solution space.

Theorem (Eigenvalue Method): If $A$ has $n$ linearly independent eigenvectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n$ with associated eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$, then the general solution to the matrix differential system $\mathbf{y}' = A\mathbf{y}$ is given by $\mathbf{y} = C_1 e^{\lambda_1 x}\mathbf{v}_1 + C_2 e^{\lambda_2 x}\mathbf{v}_2 + \cdots + C_n e^{\lambda_n x}\mathbf{v}_n$, where $C_1, \dots, C_n$ are arbitrary constants.

Recall that if $\lambda$ is a root of the characteristic equation $k$ times, we say that $\lambda$ has multiplicity $k$, and if the eigenspace for $\lambda$ has dimension less than $k$, we say that $\lambda$ is defective. The theorem allows us to solve the matrix differential system for any nondefective matrix.

Proof: By the observation above, each of $e^{\lambda_1 x}\mathbf{v}_1, e^{\lambda_2 x}\mathbf{v}_2, \dots, e^{\lambda_n x}\mathbf{v}_n$ is a solution to $\mathbf{y}' = A\mathbf{y}$. We claim that they are a basis for the solution space. By our earlier results, the solution space of the system $\mathbf{y}' = A\mathbf{y}$ is $n$-dimensional: thus, if we can show that these solutions are linearly independent, we can conclude that they are a basis.

We can compute the Wronskian of these solutions: after factoring out the exponentials from each column, we obtain $W = e^{(\lambda_1 + \cdots + \lambda_n)x}\det(M)$, where $M$ is the matrix whose columns are $\mathbf{v}_1, \dots, \mathbf{v}_n$. The exponential is always nonzero, and the vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n$ are (by hypothesis) linearly independent, meaning that $\det(M)$ is nonzero. Thus, $W$ is nonzero, so $e^{\lambda_1 x}\mathbf{v}_1, \dots, e^{\lambda_n x}\mathbf{v}_n$ are linearly independent. Since these solutions are therefore a basis for the solution space, we immediately conclude that the general solution to $\mathbf{y}' = A\mathbf{y}$ has the form $\mathbf{y} = C_1 e^{\lambda_1 x}\mathbf{v}_1 + C_2 e^{\lambda_2 x}\mathbf{v}_2 + \cdots + C_n e^{\lambda_n x}\mathbf{v}_n$ for arbitrary constants $C_1, \dots, C_n$.

The theorem allows us to solve all homogeneous systems of linear differential equations whose coefficient matrix $A$ has $n$ linearly independent eigenvectors. (Such matrices are called nondefective matrices.)

Example: Find all functions $y_1$ and $y_2$ such that $y_1' = y_1 - 3y_2$ and $y_2' = y_1 + 5y_2$.

The coefficient matrix is $A = \begin{pmatrix} 1 & -3 \\ 1 & 5 \end{pmatrix}$, whose characteristic polynomial is $\det(tI - A) = \begin{vmatrix} t-1 & 3 \\ -1 & t-5 \end{vmatrix} = (t-1)(t-5) + 3 = t^2 - 6t + 8 = (t-2)(t-4)$. Thus, the eigenvalues of $A$ are $\lambda = 2, 4$.

For $\lambda = 2$, we want to find the nullspace of $2I - A = \begin{pmatrix} 1 & 3 \\ -1 & -3 \end{pmatrix}$. By row-reducing, we find the row-echelon form $\begin{pmatrix} 1 & 3 \\ 0 & 0 \end{pmatrix}$, so the 2-eigenspace is 1-dimensional and is spanned by $\begin{pmatrix} 3 \\ -1 \end{pmatrix}$.

For $\lambda = 4$, we want to find the nullspace of $4I - A = \begin{pmatrix} 3 & 3 \\ -1 & -1 \end{pmatrix}$. By row-reducing, we find the row-echelon form $\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}$, so the 4-eigenspace is 1-dimensional and is spanned by $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$.

Thus, the general solution to the system is $\mathbf{y} = C_1 \begin{pmatrix} 3 \\ -1 \end{pmatrix} e^{2x} + C_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{4x}$.

Example: Find all functions $y_1, y_2, y_3$ such that $y_1' = -y_2$, $y_2' = y_1 - 2y_2 - y_3$, and $y_3' = y_1 - y_2 - y_3$.

The coefficient matrix is $A = \begin{pmatrix} 0 & -1 & 0 \\ 1 & -2 & -1 \\ 1 & -1 & -1 \end{pmatrix}$, whose characteristic polynomial is $\det(tI - A) = t^3 + 3t^2 + 2t = t(t+1)(t+2)$. Thus, the eigenvalues of $A$ are $\lambda = 0, -1, -2$.

For $\lambda = 0$, we want to find the nullspace of $-A = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 2 & 1 \\ -1 & 1 & 1 \end{pmatrix}$. By row-reducing, we find the row-echelon form $\begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}$, so the 0-eigenspace is 1-dimensional and is spanned by $\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$.

For $\lambda = -1$, we want to find the nullspace of $-I - A = \begin{pmatrix} -1 & 1 & 0 \\ -1 & 1 & 1 \\ -1 & 1 & 0 \end{pmatrix}$. By row-reducing, we find the row-echelon form $\begin{pmatrix} 1 & -1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$, so the $(-1)$-eigenspace is 1-dimensional and is spanned by $\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$.

For $\lambda = -2$, we want to find the nullspace of $-2I - A = \begin{pmatrix} -2 & 1 & 0 \\ -1 & 0 & 1 \\ -1 & 1 & -1 \end{pmatrix}$. By row-reducing, we find the row-echelon form $\begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{pmatrix}$, so the $(-2)$-eigenspace is 1-dimensional and is spanned by $\begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}$.

Thus, the general solution to the system is $\mathbf{y} = C_1\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + C_2\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}e^{-x} + C_3\begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}e^{-2x}$.

In the event that the coefficient matrix has complex-conjugate eigenvalues, we generally want to rewrite the resulting solutions as real-valued functions. Suppose $A$ has a complex eigenvalue $\lambda = a + bi$ with associated eigenvector $\mathbf{v} = \mathbf{w}_1 + i\mathbf{w}_2$. Then $\bar\lambda = a - bi$ has an eigenvector $\bar{\mathbf{v}} = \mathbf{w}_1 - i\mathbf{w}_2$ (the conjugate of $\mathbf{v}$), so we obtain the two solutions $e^{\lambda x}\mathbf{v}$ and $e^{\bar\lambda x}\bar{\mathbf{v}}$ to the system $\mathbf{y}' = A\mathbf{y}$. Now we observe that $\frac{1}{2}(e^{\lambda x}\mathbf{v} + e^{\bar\lambda x}\bar{\mathbf{v}}) = e^{ax}(\mathbf{w}_1\cos(bx) - \mathbf{w}_2\sin(bx))$ and $\frac{1}{2i}(e^{\lambda x}\mathbf{v} - e^{\bar\lambda x}\bar{\mathbf{v}}) = e^{ax}(\mathbf{w}_1\sin(bx) + \mathbf{w}_2\cos(bx))$, and these latter solutions are real-valued. Thus, to obtain real-valued solutions, we can replace the two complex-valued solutions $e^{\lambda x}\mathbf{v}$ and $e^{\bar\lambda x}\bar{\mathbf{v}}$ with the two real-valued solutions $e^{ax}(\mathbf{w}_1\cos(bx) - \mathbf{w}_2\sin(bx))$ and $e^{ax}(\mathbf{w}_1\sin(bx) + \mathbf{w}_2\cos(bx))$, which are simply the real part and imaginary part of $e^{\lambda x}\mathbf{v}$, respectively.

Example: Find all real-valued functions $y_1$ and $y_2$ such that $y_1' = -y_2$ and $y_2' = y_1$.

The coefficient matrix is $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, whose characteristic polynomial is $\det(tI - A) = \begin{vmatrix} t & 1 \\ -1 & t \end{vmatrix} = t^2 + 1$. Thus, the eigenvalues of $A$ are $\lambda = \pm i$.

For $\lambda = i$, we want to find the nullspace of $iI - A = \begin{pmatrix} i & 1 \\ -1 & i \end{pmatrix}$. By row-reducing, we find the row-echelon form $\begin{pmatrix} 1 & -i \\ 0 & 0 \end{pmatrix}$, so the $i$-eigenspace is 1-dimensional and spanned by $\begin{pmatrix} i \\ 1 \end{pmatrix}$.

For $\lambda = -i$ we can take the complex conjugate of the eigenvector for $\lambda = i$ to see that $\begin{pmatrix} -i \\ 1 \end{pmatrix}$ is an eigenvector. The general solution, as a complex-valued function, is $\mathbf{y} = C_1\begin{pmatrix} i \\ 1 \end{pmatrix}e^{ix} + C_2\begin{pmatrix} -i \\ 1 \end{pmatrix}e^{-ix}$.

We want real-valued solutions, so we must replace the complex-valued solutions with real-valued ones. We have $\lambda = i$ and $\mathbf{v} = \begin{pmatrix} i \\ 1 \end{pmatrix}$, so that $\mathbf{w}_1 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ and $\mathbf{w}_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$. Thus, the equivalent real-valued solutions are $\mathbf{w}_1\cos(x) - \mathbf{w}_2\sin(x) = \begin{pmatrix} -\sin(x) \\ \cos(x) \end{pmatrix}$ and $\mathbf{w}_1\sin(x) + \mathbf{w}_2\cos(x) = \begin{pmatrix} \cos(x) \\ \sin(x) \end{pmatrix}$. The system's general solution is then $\mathbf{y} = C_1\begin{pmatrix} -\sin(x) \\ \cos(x) \end{pmatrix} + C_2\begin{pmatrix} \cos(x) \\ \sin(x) \end{pmatrix}$.

Example: Find all real-valued functions $y_1$ and $y_2$ such that $y_1' = 3y_1 - 2y_2$ and $y_2' = y_1 + y_2$.

The coefficient matrix is $A = \begin{pmatrix} 3 & -2 \\ 1 & 1 \end{pmatrix}$, whose characteristic polynomial is $\det(tI - A) = \begin{vmatrix} t-3 & 2 \\ -1 & t-1 \end{vmatrix} = (t-3)(t-1) + 2 = t^2 - 4t + 5$. Thus, the eigenvalues of $A$ are $\lambda = 2 \pm i$ by the quadratic formula.

For $\lambda = 2 + i$, we want to find the nullspace of $(2+i)I - A = \begin{pmatrix} -1+i & 2 \\ -1 & 1+i \end{pmatrix}$. By row-reducing, we find the row-echelon form $\begin{pmatrix} 1 & -1-i \\ 0 & 0 \end{pmatrix}$, so the $(2+i)$-eigenspace is 1-dimensional and spanned by $\begin{pmatrix} 1+i \\ 1 \end{pmatrix}$.

For $\lambda = 2 - i$ we can take the complex conjugate of the eigenvector for $\lambda = 2 + i$ to see that $\begin{pmatrix} 1-i \\ 1 \end{pmatrix}$ is an eigenvector. The general solution, as a complex-valued function, is $\mathbf{y} = C_1\begin{pmatrix} 1+i \\ 1 \end{pmatrix}e^{(2+i)x} + C_2\begin{pmatrix} 1-i \\ 1 \end{pmatrix}e^{(2-i)x}$.

We want real-valued solutions, so we must replace the complex-valued solutions with real-valued ones. We have $\lambda = 2+i$ and $\mathbf{v} = \begin{pmatrix} 1+i \\ 1 \end{pmatrix}$, so that $\mathbf{w}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $\mathbf{w}_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$. Thus, the real-valued solutions are $e^{2x}\begin{pmatrix} \cos(x) - \sin(x) \\ \cos(x) \end{pmatrix}$ and $e^{2x}\begin{pmatrix} \sin(x) + \cos(x) \\ \sin(x) \end{pmatrix}$. The system's general solution is then $\mathbf{y} = C_1 e^{2x}\begin{pmatrix} \cos(x) - \sin(x) \\ \cos(x) \end{pmatrix} + C_2 e^{2x}\begin{pmatrix} \sin(x) + \cos(x) \\ \sin(x) \end{pmatrix}$.
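These computations are easy to sanity-check numerically. A sketch (mine, not from the notes) using `numpy.linalg.eig` on a matrix with characteristic polynomial $t^2 - 4t + 5$ and eigenvalues $2 \pm i$: the real part of the complex solution $e^{\lambda x}\mathbf{v}$ is itself a solution of $\mathbf{y}' = A\mathbf{y}$.

```python
import numpy as np

A = np.array([[3.0, -2.0], [1.0, 1.0]])  # characteristic polynomial t^2 - 4t + 5
lams, vecs = np.linalg.eig(A)
print(np.sort_complex(np.round(lams, 10)))  # the eigenvalues 2 - i and 2 + i

# Take the eigenpair with positive imaginary part; the real part of e^{lambda x} v
# and its derivative should satisfy y' = A y at any sample point x.
k = int(np.argmax(lams.imag))
lam, v = lams[k], vecs[:, k]
x = 0.3
y = (np.exp(lam * x) * v).real         # real part of the complex solution
dy = (lam * np.exp(lam * x) * v).real  # its derivative
print(np.allclose(A @ y, dy))  # True
```

The same check applied to the imaginary part confirms the second real-valued solution.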
Example: Find all real-valued functions $y_1, y_2, y_3$ such that $y_1' = y_1 - y_2$, $y_2' = y_1 + y_2$, and $y_3' = 5y_1 - y_3$.

The coefficient matrix is $A = \begin{pmatrix} 1 & -1 & 0 \\ 1 & 1 & 0 \\ 5 & 0 & -1 \end{pmatrix}$, whose characteristic polynomial is $\det(tI - A) = (t+1)(t^2 - 2t + 2)$. The eigenvalues of $A$ are $\lambda = -1, 1 \pm i$ by the quadratic formula.

For $\lambda = -1$, we want to find the nullspace of $-I - A = \begin{pmatrix} -2 & 1 & 0 \\ -1 & -2 & 0 \\ -5 & 0 & 0 \end{pmatrix}$. By row-reducing, we find the row-echelon form $\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}$, so the $(-1)$-eigenspace is 1-dimensional and spanned by $\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$.

For $\lambda = 1 + i$, we want to find the nullspace of $(1+i)I - A = \begin{pmatrix} i & 1 & 0 \\ -1 & i & 0 \\ -5 & 0 & 2+i \end{pmatrix}$. By row-reducing, we find that the $(1+i)$-eigenspace is 1-dimensional and spanned by $\begin{pmatrix} 1 \\ -i \\ 2-i \end{pmatrix}$.

For $\lambda = 1 - i$ we can take the complex conjugate of the eigenvector for $\lambda = 1 + i$ to see that $\begin{pmatrix} 1 \\ i \\ 2+i \end{pmatrix}$ is an eigenvector. The general solution, as a complex-valued function, is $\mathbf{y} = C_1 e^{-x}\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} + C_2 e^{(1+i)x}\begin{pmatrix} 1 \\ -i \\ 2-i \end{pmatrix} + C_3 e^{(1-i)x}\begin{pmatrix} 1 \\ i \\ 2+i \end{pmatrix}$, but we need to replace the complex-valued solutions with real-valued ones.

We have $\lambda = 1+i$ and $\mathbf{v} = \begin{pmatrix} 1 \\ -i \\ 2-i \end{pmatrix}$, so that $\mathbf{w}_1 = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}$ and $\mathbf{w}_2 = \begin{pmatrix} 0 \\ -1 \\ -1 \end{pmatrix}$. Thus, the real-valued solutions are $e^{x}\begin{pmatrix} \cos(x) \\ \sin(x) \\ 2\cos(x) + \sin(x) \end{pmatrix}$ and $e^{x}\begin{pmatrix} \sin(x) \\ -\cos(x) \\ 2\sin(x) - \cos(x) \end{pmatrix}$.

Then the general solution is $\mathbf{y} = C_1 e^{-x}\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} + C_2 e^{x}\begin{pmatrix} \cos(x) \\ \sin(x) \\ 2\cos(x) + \sin(x) \end{pmatrix} + C_3 e^{x}\begin{pmatrix} \sin(x) \\ -\cos(x) \\ 2\sin(x) - \cos(x) \end{pmatrix}$.

Well, you're at the end of my handout. Hope it was helpful.

Copyright notice: This material is copyright Evan Dummit, 2016. You may not reproduce or distribute this material without my express permission.
Quizzes for Math 304 QUIZ. A system of linear equations has augmented matrix 2 4 4 A = 2 0 2 4 3 5 2 a) Write down this system of equations; b) Find the reduced row-echelon form of A; c) What are the pivot
More information1. In this problem, if the statement is always true, circle T; otherwise, circle F.
Math 1553, Extra Practice for Midterm 3 (sections 45-65) Solutions 1 In this problem, if the statement is always true, circle T; otherwise, circle F a) T F If A is a square matrix and the homogeneous equation
More informationLU Factorization. A m x n matrix A admits an LU factorization if it can be written in the form of A = LU
LU Factorization A m n matri A admits an LU factorization if it can be written in the form of Where, A = LU L : is a m m lower triangular matri with s on the diagonal. The matri L is invertible and is
More informationMATH 320, WEEK 11: Eigenvalues and Eigenvectors
MATH 30, WEEK : Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors We have learned about several vector spaces which naturally arise from matrix operations In particular, we have learned about the
More informationLinear differential equations with constant coefficients Method of undetermined coefficients
Linear differential equations with constant coefficients Method of undetermined coefficients e u+vi = e u (cos vx + i sin vx), u, v R, i 2 = -1 Quasi-polynomial: Q α+βi,k (x) = e αx [cos βx ( f 0 + f 1
More informationMath 369 Exam #2 Practice Problem Solutions
Math 369 Exam #2 Practice Problem Solutions 2 5. Is { 2, 3, 8 } a basis for R 3? Answer: No, it is not. To show that it is not a basis, it suffices to show that this is not a linearly independent set.
More information1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued)
1 A linear system of equations of the form Sections 75, 78 & 81 a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2 a m1 x 1 + a m2 x 2 + + a mn x n = b m can be written in matrix
More information5 and A,1 = B = is obtained by interchanging the rst two rows of A. Write down the inverse of B.
EE { QUESTION LIST EE KUMAR Spring (we will use the abbreviation QL to refer to problems on this list the list includes questions from prior midterm and nal exams) VECTORS AND MATRICES. Pages - of the
More information4. Linear transformations as a vector space 17
4 Linear transformations as a vector space 17 d) 1 2 0 0 1 2 0 0 1 0 0 0 1 2 3 4 32 Let a linear transformation in R 2 be the reflection in the line = x 2 Find its matrix 33 For each linear transformation
More informationCHAPTER 5 REVIEW. c 1. c 2 can be considered as the coordinates of v
CHAPTER 5 REVIEW Throughout this note, we assume that V and W are two vector spaces with dimv = n and dimw = m. T : V W is a linear transformation.. A map T : V W is a linear transformation if and only
More informationChapter Two Elements of Linear Algebra
Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to
More information1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true?
. Let m and n be two natural numbers such that m > n. Which of the following is/are true? (i) A linear system of m equations in n variables is always consistent. (ii) A linear system of n equations in
More informationMATH 304 Linear Algebra Lecture 34: Review for Test 2.
MATH 304 Linear Algebra Lecture 34: Review for Test 2. Topics for Test 2 Linear transformations (Leon 4.1 4.3) Matrix transformations Matrix of a linear mapping Similar matrices Orthogonality (Leon 5.1
More informationNotes on the matrix exponential
Notes on the matrix exponential Erik Wahlén erik.wahlen@math.lu.se February 14, 212 1 Introduction The purpose of these notes is to describe how one can compute the matrix exponential e A when A is not
More informationMTH 464: Computational Linear Algebra
MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University March 2, 2018 Linear Algebra (MTH 464)
More informationLecture 22: Section 4.7
Lecture 22: Section 47 Shuanglin Shao December 2, 213 Row Space, Column Space, and Null Space Definition For an m n, a 11 a 12 a 1n a 21 a 22 a 2n A = a m1 a m2 a mn, the vectors r 1 = [ a 11 a 12 a 1n
More informationLec 2: Mathematical Economics
Lec 2: Mathematical Economics to Spectral Theory Sugata Bag Delhi School of Economics 24th August 2012 [SB] (Delhi School of Economics) Introductory Math Econ 24th August 2012 1 / 17 Definition: Eigen
More informationPROBLEM SET. Problems on Eigenvalues and Diagonalization. Math 3351, Fall Oct. 20, 2010 ANSWERS
PROBLEM SET Problems on Eigenvalues and Diagonalization Math 335, Fall 2 Oct. 2, 2 ANSWERS i Problem. In each part, find the characteristic polynomial of the matrix and the eigenvalues of the matrix by
More informationLINEAR ALGEBRA SUMMARY SHEET.
LINEAR ALGEBRA SUMMARY SHEET RADON ROSBOROUGH https://intuitiveexplanationscom/linear-algebra-summary-sheet/ This document is a concise collection of many of the important theorems of linear algebra, organized
More information1. TRUE or FALSE. 2. Find the complete solution set to the system:
TRUE or FALSE (a A homogenous system with more variables than equations has a nonzero solution True (The number of pivots is going to be less than the number of columns and therefore there is a free variable
More informationMethod of Frobenius. General Considerations. L. Nielsen, Ph.D. Dierential Equations, Fall Department of Mathematics, Creighton University
Method of Frobenius General Considerations L. Nielsen, Ph.D. Department of Mathematics, Creighton University Dierential Equations, Fall 2008 Outline 1 The Dierential Equation and Assumptions 2 3 Main Theorem
More informationEXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)
EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily
More informationLecture 12: Diagonalization
Lecture : Diagonalization A square matrix D is called diagonal if all but diagonal entries are zero: a a D a n 5 n n. () Diagonal matrices are the simplest matrices that are basically equivalent to vectors
More informationELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices
ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex
More informationLinear Algebra. and
Instructions Please answer the six problems on your own paper. These are essay questions: you should write in complete sentences. 1. Are the two matrices 1 2 2 1 3 5 2 7 and 1 1 1 4 4 2 5 5 2 row equivalent?
More informationMath 205, Summer I, Week 4b: Continued. Chapter 5, Section 8
Math 205, Summer I, 2016 Week 4b: Continued Chapter 5, Section 8 2 5.8 Diagonalization [reprint, week04: Eigenvalues and Eigenvectors] + diagonaliization 1. 5.8 Eigenspaces, Diagonalization A vector v
More informationChapter 1: Systems of Linear Equations
Chapter : Systems of Linear Equations February, 9 Systems of linear equations Linear systems Lecture A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where
More informationLecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems
University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 45 Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems Peter J. Hammond latest revision 2017 September
More informationMA 262, Fall 2017, Final Version 01(Green)
INSTRUCTIONS MA 262, Fall 2017, Final Version 01(Green) (1) Switch off your phone upon entering the exam room. (2) Do not open the exam booklet until you are instructed to do so. (3) Before you open the
More informationTest 3, Linear Algebra
Test 3, Linear Algebra Dr. Adam Graham-Squire, Fall 2017 Name: I pledge that I have neither given nor received any unauthorized assistance on this exam. (signature) DIRECTIONS 1. Don t panic. 2. Show all
More informationMATH SOLUTIONS TO PRACTICE MIDTERM LECTURE 1, SUMMER Given vector spaces V and W, V W is the vector space given by
MATH 110 - SOLUTIONS TO PRACTICE MIDTERM LECTURE 1, SUMMER 2009 GSI: SANTIAGO CAÑEZ 1. Given vector spaces V and W, V W is the vector space given by V W = {(v, w) v V and w W }, with addition and scalar
More informationVII Selected Topics. 28 Matrix Operations
VII Selected Topics Matrix Operations Linear Programming Number Theoretic Algorithms Polynomials and the FFT Approximation Algorithms 28 Matrix Operations We focus on how to multiply matrices and solve
More informationRemark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial
More informationLINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS
LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,
More informationLecture 11: Finish Gaussian elimination and applications; intro to eigenvalues and eigenvectors (1)
Lecture 11: Finish Gaussian elimination and applications; intro to eigenvalues and eigenvectors (1) Travis Schedler Tue, Oct 18, 2011 (version: Tue, Oct 18, 6:00 PM) Goals (2) Solving systems of equations
More informationBASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x
BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,
More information