12.4 The Diagonalization Process

We now have the background to understand the main ideas behind the diagonalization process.

Definition: Eigenvalue, Eigenvector. Let $A$ be an $n\times n$ matrix over $\mathbb{R}$. $\lambda$ is an eigenvalue of $A$ if for some nonzero column vector $x \in \mathbb{R}^n$ we have $A x = \lambda x$. $x$ is called an eigenvector corresponding to the eigenvalue $\lambda$.

Example 12.4.1. Find the eigenvalues and corresponding eigenvectors of the matrix $A = \begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix}$.

We want to find nonzero vectors $x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$ and real numbers $\lambda$ such that
$$A x = \lambda x \;\Leftrightarrow\; \begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} - \lambda \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Leftrightarrow\; \left( \begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right) \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Leftrightarrow\; \begin{pmatrix} 2-\lambda & 1 \\ 2 & 3-\lambda \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \qquad (12.4a)$$

The last matrix equation will have nonzero solutions if and only if
$$\det \begin{pmatrix} 2-\lambda & 1 \\ 2 & 3-\lambda \end{pmatrix} = 0,$$
or $(2-\lambda)(3-\lambda) - 2 = 0$, which simplifies to $\lambda^2 - 5\lambda + 4 = 0$. Therefore, the solutions to this quadratic equation, $\lambda_1 = 1$ and $\lambda_2 = 4$, are the eigenvalues of $A$. We now have to find eigenvectors associated with each eigenvalue.

Case 1. For $\lambda_1 = 1$, equation (12.4a) becomes
$$\begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$
which reduces to the single equation $x_1 + x_2 = 0$. From this, $x_1 = -x_2$. This means the solution set of this equation is (in column notation)
$$E_1 = \left\{ \begin{pmatrix} -c \\ c \end{pmatrix} \;\middle|\; c \in \mathbb{R} \right\}.$$
So any column vector of the form $\begin{pmatrix} -c \\ c \end{pmatrix}$, where $c$ is any nonzero real number, is an eigenvector associated with $\lambda_1 = 1$. The reader should verify that, for example,
$$\begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} 1 \\ -1 \end{pmatrix} = 1 \begin{pmatrix} 1 \\ -1 \end{pmatrix},$$
so that $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$ is an eigenvector associated with the eigenvalue 1.

Case 2. For $\lambda_2 = 4$, equation (12.4a) becomes
$$\begin{pmatrix} -2 & 1 \\ 2 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$
which reduces to the single equation $-2 x_1 + x_2 = 0$, so that $x_2 = 2 x_1$. The solution set of the equation is
$$E_2 = \left\{ \begin{pmatrix} c/2 \\ c \end{pmatrix} \;\middle|\; c \in \mathbb{R} \right\}.$$
Therefore, all eigenvectors of $A$ associated with the eigenvalue $\lambda_2 = 4$ are of the form $\begin{pmatrix} c/2 \\ c \end{pmatrix}$, where $c$ can be any nonzero number.
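These computations can be checked mechanically. The short sketch below is not part of the original text; it uses Sage (the system that appears in the Sage Note later in this section), the matrix and expected results are the ones from Example 12.4.1, and the variable names are ours.

    A = Matrix(QQ, [[2, 1], [2, 3]])

    # The characteristic polynomial det(x*I - A) should factor as (x - 1)*(x - 4).
    print(A.charpoly().factor())

    # Each item is (eigenvalue, basis of the eigenspace, multiplicity).
    for item in A.eigenvectors_right():
        print(item)

    # Direct check that (1, -1) is an eigenvector for the eigenvalue 1.
    v = vector(QQ, [1, -1])
    print(A * v == 1 * v)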

The following theorems summarize the more important aspects of this example.

Theorem 12.4.1. Let $A$ be any $n\times n$ matrix over $\mathbb{R}$. Then $\lambda \in \mathbb{R}$ is an eigenvalue of $A$ if and only if $\det(A - \lambda I) = 0$. The equation $\det(A - \lambda I) = 0$ is called the characteristic equation, and the left side of this equation is called the characteristic polynomial of $A$.

Theorem 12.4.2. Nonzero eigenvectors corresponding to distinct eigenvalues are linearly independent.

The solution space of $(A - \lambda I)x = 0$ is called the eigenspace of $A$ corresponding to $\lambda$. This terminology is justified by Exercise 2 of this section.

We now consider the main aim of this section. Given an $n\times n$ (square) matrix $A$, we would like to "change" $A$ into a diagonal matrix $D$, perform our tasks with the simpler matrix $D$, and then describe the results in terms of the given matrix $A$.

Definition: Diagonalizable Matrix. An $n\times n$ matrix $A$ is called diagonalizable if there exists an invertible $n\times n$ matrix $P$ such that $P^{-1} A P$ is a diagonal matrix $D$. The matrix $P$ is said to diagonalize the matrix $A$.

Example 12.4.2. We will now diagonalize the matrix $A$ of Example 12.4.1. Form the matrix $P$ as follows. Let $P^{(1)}$ be the first column of $P$. Choose for $P^{(1)}$ any eigenvector from $E_1$. We may as well choose a simple vector in $E_1$, so $P^{(1)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$ is our candidate. Similarly, let $P^{(2)}$ be the second column of $P$, and choose for $P^{(2)}$ any eigenvector from $E_2$. The vector $P^{(2)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ is a reasonable choice. Thus
$$P = \begin{pmatrix} 1 & 1 \\ -1 & 2 \end{pmatrix} \quad\text{and}\quad P^{-1} = \frac{1}{3} \begin{pmatrix} 2 & -1 \\ 1 & 1 \end{pmatrix},$$
so that
$$P^{-1} A P = \frac{1}{3} \begin{pmatrix} 2 & -1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ -1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}.$$
Notice that the elements on the main diagonal of $D$ are the eigenvalues of $A$, where $D_{ii}$ is the eigenvalue corresponding to the eigenvector $P^{(i)}$.

Remarks:
(1) The first step in the diagonalization process is the determination of the eigenvalues. The ordering of the eigenvalues is purely arbitrary. If we designate $\lambda_1 = 4$ and $\lambda_2 = 1$, the columns of $P$ would be interchanged and $D$ would be $\begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix}$ (see Exercise 3b of this section). Nonetheless, the final outcome of the application to which we are applying the diagonalization process would be the same.
(2) If $A$ is an $n\times n$ matrix with distinct eigenvalues, then $P$ is also an $n\times n$ matrix whose columns $P^{(1)}, P^{(2)}, \ldots, P^{(n)}$ are $n$ linearly independent vectors.
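The diagonalization above can be confirmed directly. Here is a short Sage sketch (not from the text); it builds $P$ from the two chosen eigenvectors and also shows the effect of reversing their order, as described in Remark (1). The matrix names are ours.

    A = Matrix(QQ, [[2, 1], [2, 3]])

    # Columns of P are the chosen eigenvectors for the eigenvalues 1 and 4.
    P = Matrix(QQ, [[1, 1], [-1, 2]])
    print(P.inverse() * A * P)     # diagonal matrix with entries 1 and 4

    # Reversing the order of the eigenvectors reverses the diagonal entries.
    Q = Matrix(QQ, [[1, 1], [2, -1]])
    print(Q.inverse() * A * Q)     # diagonal matrix with entries 4 and 1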

Example 12.4.3. Diagonalize the matrix
$$A = \begin{pmatrix} 1 & 12 & -18 \\ 0 & -11 & 18 \\ 0 & -6 & 10 \end{pmatrix}.$$
$$\det(A - \lambda I) = \det \begin{pmatrix} 1-\lambda & 12 & -18 \\ 0 & -11-\lambda & 18 \\ 0 & -6 & 10-\lambda \end{pmatrix} = (1-\lambda) \det \begin{pmatrix} -11-\lambda & 18 \\ -6 & 10-\lambda \end{pmatrix} = (1-\lambda)\bigl((-\lambda - 11)(10-\lambda) + 108\bigr) = (1-\lambda)\bigl(\lambda^2 + \lambda - 2\bigr).$$
Hence, the equation $\det(A - \lambda I) = 0$ becomes
$$(1-\lambda)\bigl(\lambda^2 + \lambda - 2\bigr) = -(\lambda - 1)^2 (\lambda + 2) = 0.$$
Therefore, our eigenvalues for $A$ are $\lambda_1 = -2$ and $\lambda_2 = 1$. We note that we do not have three distinct eigenvalues, but we proceed as in the previous example.

Case 1. For $\lambda_1 = -2$, the equation $(A - \lambda I)x = 0$ becomes
$$\begin{pmatrix} 3 & 12 & -18 \\ 0 & -9 & 18 \\ 0 & -6 & 12 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
Using Mathematica, we can row reduce the matrix:
$$\text{RowReduce}\left[\begin{pmatrix} 3 & 12 & -18 \\ 0 & -9 & 18 \\ 0 & -6 & 12 \end{pmatrix}\right] = \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{pmatrix}.$$
In equation form, the matrix equation is then equivalent to
$$x_1 = -2 x_3, \qquad x_2 = 2 x_3.$$
Therefore, the solution, or eigenspace, corresponding to $\lambda_1 = -2$ consists of vectors of the form
$$\begin{pmatrix} -2 x_3 \\ 2 x_3 \\ x_3 \end{pmatrix} = x_3 \begin{pmatrix} -2 \\ 2 \\ 1 \end{pmatrix}.$$
Therefore $\begin{pmatrix} -2 \\ 2 \\ 1 \end{pmatrix}$ is an eigenvector corresponding to the eigenvalue $\lambda_1 = -2$, and can be used for our first column of $P$:
$$P^{(1)} = \begin{pmatrix} -2 \\ 2 \\ 1 \end{pmatrix}.$$
Before we continue, we make the observation: $E_1$ is a subspace of $\mathbb{R}^3$ with basis $\{P^{(1)}\}$ and $\dim E_1 = 1$.

Case 2. If $\lambda_2 = 1$, then the equation $(A - \lambda I)x = 0$ becomes
$$\begin{pmatrix} 0 & 12 & -18 \\ 0 & -12 & 18 \\ 0 & -6 & 9 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
Without the aid of any computer technology, it should be clear that all three equations that correspond to this matrix equation are equivalent to $2 x_2 - 3 x_3 = 0$, or $x_2 = \frac{3}{2} x_3$. Notice that $x_1$ can take on any value, so any vector of the form
$$x_1 \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + x_3 \begin{pmatrix} 0 \\ 3/2 \\ 1 \end{pmatrix}$$
will solve the matrix equation.

We note that the solution set contains two independent variables, $x_1$ and $x_3$. Further, note that we cannot express the eigenspace $E_2$ as multiples of a single vector as in Case 1. However, it can be written as
$$E_2 = \left\{ x_1 \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + x_3 \begin{pmatrix} 0 \\ 3/2 \\ 1 \end{pmatrix} \;\middle|\; x_1, x_3 \in \mathbb{R} \right\}.$$
We can replace any vector in a basis with a nonzero multiple of that vector. Simply for aesthetic reasons, we will multiply the second vector that generates $E_2$ by 2. Therefore, the eigenspace $E_2$ is a subspace of $\mathbb{R}^3$ with basis
$$\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 3 \\ 2 \end{pmatrix} \right\}$$
and so $\dim E_2 = 2$.

What this means with respect to the diagonalization process is that $\lambda_2 = 1$ gives us both Column 2 and Column 3 of the diagonalizing matrix. The order is not important. Let
$$P^{(2)} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \quad\text{and}\quad P^{(3)} = \begin{pmatrix} 0 \\ 3 \\ 2 \end{pmatrix},$$
and so
$$P = \begin{pmatrix} -2 & 1 & 0 \\ 2 & 0 & 3 \\ 1 & 0 & 2 \end{pmatrix}.$$
The reader can verify (see Exercise 5 of this section) that
$$P^{-1} = \begin{pmatrix} 0 & 2 & -3 \\ 1 & 4 & -6 \\ 0 & -1 & 2 \end{pmatrix} \quad\text{and}\quad P^{-1} A P = \begin{pmatrix} -2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
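A sketch in Sage, not from the text, confirming this example. The entries of $A$ are as reconstructed above, and the run shows that the repeated eigenvalue 1 still contributes a two-dimensional eigenspace, so $A$ is diagonalizable.

    A = Matrix(QQ, [[1, 12, -18], [0, -11, 18], [0, -6, 10]])
    print(A.charpoly().factor())          # factors as (x - 1)^2 * (x + 2), up to ordering

    # Each item is (eigenvalue, basis of the eigenspace, algebraic multiplicity).
    for eigenvalue, basis, multiplicity in A.eigenvectors_right():
        print(eigenvalue, len(basis), multiplicity)

    # Columns of P are the eigenvectors chosen in the text.
    P = Matrix(QQ, [[-2, 1, 0], [2, 0, 3], [1, 0, 2]])
    print(P.inverse() * A * P)            # diagonal matrix with entries -2, 1, 1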

In doing Example 12.4.3, the given $3\times 3$ matrix $A$ produced only two, not three, distinct eigenvalues, yet we were still able to diagonalize $A$. The reason we were able to do so was that we were able to find three linearly independent eigenvectors. Again, the main idea is to produce a matrix $P$ that does the diagonalizing. If $A$ is an $n\times n$ matrix, $P$ will be an $n\times n$ matrix, and its $n$ columns must be linearly independent eigenvectors. The main question in the study of diagonalizability is "When can it be done?" This is summarized in the following theorem.

Theorem 12.4.3. Let $A$ be an $n\times n$ matrix. Then $A$ is diagonalizable if and only if $A$ has $n$ linearly independent eigenvectors.

Outline of a proof: ($\Leftarrow$) Assume that $A$ has $n$ linearly independent eigenvectors, $P^{(1)}, P^{(2)}, \ldots, P^{(n)}$, with corresponding eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. We want to prove that $A$ is diagonalizable. Column $i$ of the $n\times n$ matrix $A P$ is $A P^{(i)}$ (see Exercise 7 of this section). Then, since $P^{(i)}$ is an eigenvector of $A$ associated with the eigenvalue $\lambda_i$, we have $A P^{(i)} = \lambda_i P^{(i)}$ for $i = 1, 2, \ldots, n$. But this means that $A P = P D$, where $D$ is the diagonal matrix with diagonal entries $\lambda_1, \lambda_2, \ldots, \lambda_n$. If we multiply both sides of the equation by $P^{-1}$, we get the desired $P^{-1} A P = D$.

($\Rightarrow$) The proof in this direction involves a concept that is not covered in this text (the rank of a matrix), so we refer the interested reader to virtually any linear algebra text for a proof.

We now give an example of a matrix that is not diagonalizable.

Example 12.4.4. Let us attempt to diagonalize the matrix
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 1 & -1 & 4 \end{pmatrix}.$$
$$\det(A - \lambda I) = \det \begin{pmatrix} 1-\lambda & 0 & 0 \\ 0 & 2-\lambda & 1 \\ 1 & -1 & 4-\lambda \end{pmatrix} = (1-\lambda) \det \begin{pmatrix} 2-\lambda & 1 \\ -1 & 4-\lambda \end{pmatrix} = (1-\lambda)\bigl((2-\lambda)(4-\lambda) + 1\bigr) = (1-\lambda)\bigl(\lambda^2 - 6\lambda + 9\bigr) = (1-\lambda)(\lambda - 3)^2.$$
So $\det(A - \lambda I) = 0$ when $\lambda = 1$ or $\lambda = 3$. Therefore there are two eigenvalues, $\lambda_1 = 1$ and $\lambda_2 = 3$. Since $\lambda_1$ is an eigenvalue of degree one, it will have an eigenspace of dimension 1. Since $\lambda_2$ is a double root of the characteristic equation, the dimension of its eigenspace must be 2 in order to be able to diagonalize.

Case 1. For $\lambda_1 = 1$, the equation $(A - \lambda I)x = 0$ becomes
$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & -1 & 3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
A quick Mathematica evaluation makes the solution to this system obvious:
$$\text{RowReduce}[A - \text{IdentityMatrix}[3]] = \begin{pmatrix} 1 & 0 & 4 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$
There is one free variable, $x_3$, and
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} -4 x_3 \\ -x_3 \\ x_3 \end{pmatrix} = x_3 \begin{pmatrix} -4 \\ -1 \\ 1 \end{pmatrix}.$$
Hence, $\left\{ \begin{pmatrix} -4 \\ -1 \\ 1 \end{pmatrix} \right\}$ is a basis for the eigenspace of $\lambda_1 = 1$.

Case 2. For $\lambda_2 = 3$, the equation $(A - \lambda I)x = 0$ becomes
$$\begin{pmatrix} -2 & 0 & 0 \\ 0 & -1 & 1 \\ 1 & -1 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
$$\text{RowReduce}[A - 3\,\text{IdentityMatrix}[3]] = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix}.$$
Once again there is only one free variable in the row reduction, and so the dimension of the eigenspace will be one:
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ x_3 \\ x_3 \end{pmatrix} = x_3 \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}.$$
Hence, $\left\{ \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} \right\}$ is a basis for the eigenspace of $\lambda_2 = 3$. This means that $\lambda_2 = 3$ produces only one column for $P$. Since we began with only two eigenvalues, we had hoped that one of them would produce an eigenspace of dimension two, or, in matrix terms, two linearly independent columns of $P$. Since $A$ does not have three linearly independent eigenvectors, $A$ cannot be diagonalized.
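The failure to diagonalize can also be seen computationally. A short Sage sketch, not from the text, using the matrix as reconstructed above:

    A = Matrix(QQ, [[1, 0, 0], [0, 2, 1], [1, -1, 4]])
    print(A.charpoly().factor())          # factors as (x - 1) * (x - 3)^2, up to ordering

    # The double eigenvalue 3 contributes only a one-dimensional eigenspace,
    # so A has just two independent eigenvectors in all.
    for eigenvalue, basis, multiplicity in A.eigenvectors_right():
        print(eigenvalue, len(basis), multiplicity)

    # Row reducing A - 3I shows rank 2, hence a null space of dimension 1.
    print((A - 3*identity_matrix(QQ, 3)).rref())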

Mathematica Note

Diagonalization can be easily done with a few built-in functions of Mathematica. Here is a $3\times 3$ matrix we've selected because the eigenvalues are very simple, and could be found by hand with a little work.

A = {{4, 1, 0}, {1, 5, 1}, {0, 1, 4}};

The set of linearly independent eigenvectors of A can be computed:

Eigenvectors[A]
{{1, 2, 1}, {-1, 0, 1}, {1, -1, 1}}

The rows of this matrix are the eigenvectors, so we transpose the result to get our diagonalizing matrix P, whose columns are eigenvectors.

P = Transpose[Eigenvectors[A]]
{{1, -1, 1}, {2, 0, -1}, {1, 1, 1}}

We then use P to diagonalize. The entries in the diagonal matrix are the eigenvalues of A.

Inverse[P].A.P
{{6, 0, 0}, {0, 4, 0}, {0, 0, 3}}

We could have gotten the eigenvalues directly this way:

Eigenvalues[A]
{6, 4, 3}

Most matrices that are selected at random will not have "nice" eigenvalues. Here is a new matrix A that looks similar to the one above.

A = {{8, 1, 0}, {1, 5, 1}, {0, 1, 7}};

Asking for the eigenvalues first, we see that the result is returned symbolically as the three roots of a cubic equation. The default for Mathematica is to leave these uncomputed. Since the entries of A are exact numbers, Mathematica is capable of giving an exact solution, but it's very messy. The easiest way around the problem is to make the entries in A approximate. The following expression redefines A as approximate.

A = N[A]

Now we can get approximate eigenvalues, and the approximations are very good for most purposes.

Eigenvalues[A]
{8.3772, 7.2739, 4.3489}

We can verify that the matrix can be diagonalized, although due to round-off error some of the off-diagonal entries of the "diagonal" matrix are nonzero.

P = Transpose[Eigenvectors[A]]
Inverse[P].A.P

The Chop function will set small numbers to zero. The default threshold for "small" is $10^{-10}$, but that can be adjusted, if desired.

Diag = Chop[Inverse[P].A.P]

We can't use the name D here because Mathematica reserves it for the differentiation function. If you experiment with more matrices, you will undoubtedly encounter situations where some eigenvalues are complex. The process is the same, although we've avoided these just for simplicity.

Sage Note

We start by defining the same matrix as we did in Mathematica. We also declare D and P to be variables.

A = Matrix(QQ, [[4, 1, 0], [1, 5, 1], [0, 1, 4]]); A
[4 1 0]
[1 5 1]
[0 1 4]

var('D, P')
(D, P)

We have been working with "right eigenvectors," since the $x$ in $A x = \lambda x$ is a column vector to the right of $A$. It's not so common, but still desirable in some situations, to consider "left eigenvectors," so Sage allows either one.

The right_eigenmatrix method returns a pair of matrices: the diagonal matrix D, whose diagonal entries are the eigenvalues, and the diagonalizing matrix P, which is made up of columns that are eigenvectors corresponding to the eigenvalues in D.

(D, P) = A.right_eigenmatrix(); (D, P)

The diagonal entries of D are the eigenvalues 6, 4 and 3, and the columns of P are corresponding eigenvectors. We should note here that P is not unique, because even if an eigenspace has dimension one, any nonzero vector in that space will serve as an eigenvector. For that reason, the P generated by Sage isn't identical to the one generated by Mathematica, but they both work. Here we verify the result for our Sage calculation. Recall that an asterisk is used for matrix multiplication in Sage.

P.inverse()*A*P

This reproduces the diagonal matrix D. Here is a second matrix, again the same as we used with Mathematica.

A = Matrix(QQ, [[8, 1, 0], [1, 5, 1], [0, 1, 7]]); A
[8 1 0]
[1 5 1]
[0 1 7]

Here we've already specified that the underlying system is the rational numbers. Since the eigenvalues are not rational, Sage will revert to approximate numbers by default. We'll just pull out the matrix of eigenvectors this time and display rounded entries. Here the diagonalizing matrix looks very different from the result from Mathematica, but this is because the eigenvalues are not in the same order in the two calculations. They both diagonalize, but with a different diagonal matrix.

P = A.right_eigenmatrix()[1]
P.numerical_approx(digits=3)

D = P.inverse()*A*P; D.numerical_approx(digits=3)
[ 4.35 0.000 0.000]
[0.000  7.27 0.000]
[0.000 0.000  8.38]

EXERCISES FOR SECTION 12.4

A Exercises

1. (a) List three different eigenvectors of $A = \begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix}$, the matrix of Example 12.4.1, associated with the two eigenvalues 1 and 4. Verify your results.
(b) Choose one of the three eigenvectors corresponding to 1 and one of the three eigenvectors corresponding to 4, and show that the two chosen vectors are linearly independent.

2. (a) Verify that $E_1$ and $E_2$ in Example 12.4.1 are vector spaces over $\mathbb{R}$. Since they are also subsets of $\mathbb{R}^2$, they are called subvector-spaces, or subspaces for short, of $\mathbb{R}^2$. Since these are subspaces consisting of eigenvectors, they are called eigenspaces.
(b) Use the definition of dimension in the previous section to find $\dim E_1$ and $\dim E_2$. Note that $\dim E_1 + \dim E_2 = \dim \mathbb{R}^2$. This is not a coincidence.

3. (a) Verify that $P^{-1} A P$ is indeed equal to $\begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}$, as indicated in Example 12.4.2.
(b) Choose $P^{(1)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ and $P^{(2)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$ and verify that the new value of $P$ satisfies $P^{-1} A P = \begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix}$.
(c) Take any two linearly independent eigenvectors of the matrix $A$ of Example 12.4.2 and verify that $P^{-1} A P$ is a diagonal matrix.

4. (a) Let $A$ and $P$ be as in Example 12.4.3. Compute $P^{-1} A P$.
(b) If you choose the columns of $P$ in the reverse order, what is $P^{-1} A P$?

5. Diagonalize the following, if possible:
(a) … (b) … (c) … (d) … (e) … (f) …

6. Without doing any actual matrix multiplications, determine the value of …

B Exercises

7. Let $A$ and $P$ be as in Example 12.4.3. Show that the columns of the matrix $A P$ can be found by computing $A P^{(1)}, A P^{(2)}, \ldots, A P^{(n)}$.

8. Prove that if $P$ is an $n\times n$ matrix and $D$ is a diagonal matrix with diagonal entries $d_1, d_2, \ldots, d_n$, then $P D$ is the matrix obtained from $P$ by multiplying column $i$ of $P$ by $d_i$, $i = 1, 2, \ldots, n$.

C Exercise

9. (a) There is an option to the Mathematica functions Eigenvectors and Eigenvalues called Cubics that will use the cubic equation to find exact eigenvalues of a matrix like
$$\begin{pmatrix} 8 & 1 & 0 \\ 1 & 5 & 1 \\ 0 & 1 & 7 \end{pmatrix}.$$
Use that option to find the exact eigenvalues of the matrix. Diagonalize the matrix using the Cubics option and then convert the result to a matrix of approximate numbers to compare your result with the approximate result we found in the Mathematica Note.

12.5 Some Applications

A large and varied number of applications involve computations of powers of matrices. These applications can be found in science, the social sciences, economics, the analysis of relationships within groups, engineering, and, indeed, any area where mathematics is used and, therefore, where programs are to be developed. We will consider a few diverse examples here.

To aid your understanding of the following examples, we develop a helpful technique to compute $A^m$, $m > 1$. If $A$ can be diagonalized, then there is a matrix $P$ such that $P^{-1} A P = D$, where $D$ is a diagonal matrix, and
$$A^m = P D^m P^{-1} \quad\text{for all } m \ge 1. \qquad (12.5a)$$
You are asked to prove this equation in Exercise 9 of Section 5.4. The condition that $D$ be a diagonal matrix is not necessary, but when it is, the calculation on the right side is particularly easy to perform. Although the formal proof of equation (12.5a) is done by induction, the reason why it is true is easily seen by writing out an example such as $m = 3$. To get this, solve $P^{-1} A P = D$ for $A$ and substitute:
$$\begin{aligned}
A^m &= \bigl(P D P^{-1}\bigr)^m \\
&= \bigl(P D P^{-1}\bigr)\bigl(P D P^{-1}\bigr)\bigl(P D P^{-1}\bigr) \\
&= P D \bigl(P^{-1} P\bigr) D \bigl(P^{-1} P\bigr) D P^{-1} \qquad\text{by associativity of matrix multiplication} \\
&= P D I D I D P^{-1} \\
&= P D D D P^{-1} \\
&= P D^3 P^{-1}.
\end{aligned}$$
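A small Sage sketch, not from the text, illustrating equation (12.5a) with the matrix and the diagonalizing matrix of Example 12.4.2; the variable names and the choice of the power are ours.

    A = Matrix(QQ, [[2, 1], [2, 3]])
    P = Matrix(QQ, [[1, 1], [-1, 2]])
    D = P.inverse() * A * P        # diagonal matrix with entries 1 and 4

    m = 10
    print(P * D^m * P.inverse())   # only the diagonal entries need to be raised to the power m
    print(A^m)                     # the same matrix, computed by repeated multiplication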

Example 12.5.1: Recursion. Consider the computation of terms of the Fibonacci sequence, which we examined in Chapter 8:
$$F_1 = F_2 = 1, \qquad F_k = F_{k-1} + F_{k-2} \ \text{ for } k \ge 3.$$
In order to formulate the calculation in matrix form, we introduce the "dummy equation" $F_{k-1} = F_{k-1}$ so that now we have two equations,
$$F_k = F_{k-1} + F_{k-2}$$
$$F_{k-1} = F_{k-1}.$$
These two equations can be expressed in matrix form as
$$\begin{pmatrix} F_k \\ F_{k-1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} F_{k-1} \\ F_{k-2} \end{pmatrix} = A \begin{pmatrix} F_{k-1} \\ F_{k-2} \end{pmatrix} \quad\text{if } k \ge 3, \text{ where } A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}.$$
We can use induction to prove that if $k \ge 3$,
$$\begin{pmatrix} F_k \\ F_{k-1} \end{pmatrix} = A \begin{pmatrix} F_{k-1} \\ F_{k-2} \end{pmatrix} = A^2 \begin{pmatrix} F_{k-2} \\ F_{k-3} \end{pmatrix} = \cdots = A^{k-2} \begin{pmatrix} F_2 \\ F_1 \end{pmatrix} = A^{k-2} \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Next, by diagonalizing $A$ and using the fact that $A^m = P D^m P^{-1}$, we can show that
$$F_k = \frac{1}{\sqrt{5}} \left( \left(\frac{1 + \sqrt{5}}{2}\right)^{k} - \left(\frac{1 - \sqrt{5}}{2}\right)^{k} \right).$$
See Exercise 1a of this section.

Comments:
(1) An equation of the form $F_k = a F_{k-1} + b F_{k-2}$, where $a$ and $b$ are given constants, is referred to as a linear homogeneous second-order difference equation. The conditions $F_0 = c_0$ and $F_1 = c_1$, where $c_0$ and $c_1$ are constants, are called initial conditions. Those of you who are familiar with differential equations may recognize that this language parallels what is used in differential equations. Difference (also known as recurrence) equations move forward discretely, that is, in a finite number of positive steps, while a differential equation moves continuously, that is, takes an infinite number of infinitesimal steps.
(2) A recurrence relationship of the form $F_k = a F_{k-1} + b$, where $a$ and $b$ are constants, is called a first-order difference equation. In order to write out the sequence, we need to know one initial condition. Equations of this type can be solved similarly to the method outlined in Example 12.5.1 by introducing the superfluous equation $1 = 0 \cdot F_{k-1} + 1$ to obtain the matrix equation
$$\begin{pmatrix} F_k \\ 1 \end{pmatrix} = \begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix} \begin{pmatrix} F_{k-1} \\ 1 \end{pmatrix}, \qquad\text{so that}\qquad \begin{pmatrix} F_k \\ 1 \end{pmatrix} = \begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix}^{k} \begin{pmatrix} F_0 \\ 1 \end{pmatrix}.$$
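A quick Sage sketch, not from the text, of the iteration in Example 12.5.1: repeated powers of $A$ applied to $(F_2, F_1)$ generate the Fibonacci numbers. The variable names and the range of $k$ shown are ours.

    A = Matrix(QQ, [[1, 1], [1, 0]])
    start = vector(QQ, [1, 1])          # the vector (F_2, F_1)

    for k in range(3, 11):
        Fk = (A^(k - 2) * start)[0]     # first entry of A^(k-2) * (F_2, F_1) is F_k
        print(k, Fk)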

Example 12.5.2: Graph Theory. Consider the graph in Figure 12.5.1.

Figure 12.5.1: A graph on the vertices a, b, and c.

From the procedures outlined in Section 6.4, the adjacency matrix of this graph is
$$A = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}.$$
Recall that $A^k$ is the adjacency matrix of the relation $r^k$, where $r$ is the relation $\{(a,a), (a,b), (b,a), (b,c), (c,b), (c,c)\}$ of the above graph. Also recall that in computing $A^k$, we used Boolean arithmetic. What happens if we use "regular" arithmetic? For example,
$$A^2 = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}.$$
How can we interpret this? We note that $\bigl(A^2\bigr)_{33} = 2$ and that there are two paths of length two from $c$ (the third node) to $c$. Also, $\bigl(A^2\bigr)_{13} = 1$, and there is one path of length two from $a$ to $c$. The reader should verify these claims from the graph in Figure 12.5.1.

Theorem 12.5.1. The entry $\bigl(A^k\bigr)_{ij}$ is the number of paths, or walks, of length $k$ from node $v_i$ to node $v_j$.

How do we find $A^k$ for possibly large values of $k$? From the discussion at the beginning of this section, we know that $A^k = P D^k P^{-1}$ if $A$ is diagonalizable. We leave it to the reader to show that $\lambda = 1$, $2$, and $-1$ are eigenvalues of $A$ with eigenvectors
$$\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \qquad \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix},$$
respectively, so that
$$A^k = P \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2^k & 0 \\ 0 & 0 & (-1)^k \end{pmatrix} P^{-1}, \qquad\text{where } P = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & -2 \\ -1 & 1 & 1 \end{pmatrix} \text{ and } P^{-1} = \frac{1}{6} \begin{pmatrix} 3 & 0 & -3 \\ 2 & 2 & 2 \\ 1 & -2 & 1 \end{pmatrix}.$$
See Exercise 5 of this section for the completion of this example.

Example 12.5.3: Matrix Calculus. Those who have studied calculus recall that the Maclaurin series is a useful way of expressing many common functions. For example,
$$e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}.$$
Indeed, calculators and computers use these series for calculations. Given a polynomial $f(x)$, we defined the matrix-polynomial $f(A)$ for square matrices in Chapter 5. Hence, we are in a position to describe $e^A$ for an $n\times n$ matrix $A$ as a limit of polynomials. Formally, we write
$$e^A = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{A^k}{k!}.$$
Again we encounter the need to compute high powers of a matrix. Let $A$ be an $n\times n$ diagonalizable matrix. Then there exists an invertible $n\times n$ matrix $P$ such that $P^{-1} A P = D$, a diagonal matrix, so that
$$e^A = e^{P D P^{-1}} = \sum_{k=0}^{\infty} \frac{\bigl(P D P^{-1}\bigr)^k}{k!} = P \left( \sum_{k=0}^{\infty} \frac{D^k}{k!} \right) P^{-1}.$$
The infinite sum in the middle of this final expression can be easily evaluated if $D$ is diagonal. All entries of powers off the diagonal are zero, and the $i$th entry of the diagonal is
$$\left( \sum_{k=0}^{\infty} \frac{D^k}{k!} \right)_{ii} = \sum_{k=0}^{\infty} \frac{D_{ii}^{\,k}}{k!} = e^{D_{ii}}.$$
For example, if $A = \begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix}$, the first matrix we diagonalized in Section 12.4, we found that $P = \begin{pmatrix} 1 & 1 \\ -1 & 2 \end{pmatrix}$ and $D = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}$. Therefore,
$$e^A = \begin{pmatrix} 1 & 1 \\ -1 & 2 \end{pmatrix} \begin{pmatrix} e & 0 \\ 0 & e^4 \end{pmatrix} \cdot \frac{1}{3} \begin{pmatrix} 2 & -1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} \frac{2e + e^4}{3} & \frac{-e + e^4}{3} \\ \frac{-2e + 2e^4}{3} & \frac{e + 2e^4}{3} \end{pmatrix} \approx \begin{pmatrix} 20.0116 & 17.2933 \\ 34.5866 & 37.3049 \end{pmatrix}.$$
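This last computation can be reproduced in Sage. The sketch below is not from the text; it builds $e^A$ from the diagonalization and compares it with Sage's built-in matrix exponential, which is shown in the Sage Note that follows. The variable names are ours.

    A = Matrix(QQ, [[2, 1], [2, 3]])
    P = Matrix(QQ, [[1, 1], [-1, 2]])

    expD = diagonal_matrix([e, e^4])    # exponentiate the diagonal entries 1 and 4
    print(P * expD * P.inverse())       # e^A computed from the diagonalization
    print(A.exp())                      # Sage's built-in matrix exponential, for comparison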

Comments on Example 12.5.3:
(1) Many of the ideas of calculus can be developed using matrices. For example, if
$$A(t) = \begin{pmatrix} t^3 & 3t^2 + 8t \\ e^t & 2 \end{pmatrix}$$
then
$$\frac{d\,A(t)}{dt} = \begin{pmatrix} 3t^2 & 6t + 8 \\ e^t & 0 \end{pmatrix}.$$
(2) Many of the basic formulas in calculus are true in matrix calculus. For example,
$$\frac{d\,\bigl(A(t) + B(t)\bigr)}{dt} = \frac{d\,A(t)}{dt} + \frac{d\,B(t)}{dt},$$
and if $A$ is a constant matrix,
$$\frac{d\,e^{A t}}{dt} = A\, e^{A t}.$$
(3) Matrix calculus can be used to solve systems of differential equations in a manner similar to the procedure used in ordinary differential equations.

Mathematica Note

Mathematica's matrix exponential function is MatrixExp.

MatrixExp[{{2, 1}, {2, 3}}]
$$\frac{1}{3} \begin{pmatrix} 2e + e^4 & -e + e^4 \\ -2e + 2e^4 & e + 2e^4 \end{pmatrix}$$

Sage Note

Sage's matrix exponential method is called exp.

A = Matrix(QQ, [[2, 1], [2, 3]]); A.exp()
[ 2/3*e + 1/3*e^4 -1/3*e + 1/3*e^4]
[-2/3*e + 2/3*e^4  1/3*e + 2/3*e^4]
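Differentiation of a matrix function such as the $A(t)$ in Comment (1) is done entry by entry. A small Sage sketch, not from the text, using the entries as reconstructed above; the names At and dAt are ours.

    t = var('t')
    At = Matrix(SR, [[t^3, 3*t^2 + 8*t], [exp(t), 2]])

    # Differentiate the matrix entry by entry.
    dAt = At.apply_map(lambda entry: entry.diff(t))
    print(dAt)      # [3*t^2  6*t + 8]
                    # [  e^t        0]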

EXERCISES FOR SECTION 12.5

A Exercises

1. (a) Write out all the details of Example 12.5.1 to show that the formula for $F_k$ given in the text is correct.
(b) Use induction to prove the assertion made in Example 12.5.1 that $\begin{pmatrix} F_k \\ F_{k-1} \end{pmatrix} = A^{k-2} \begin{pmatrix} F_2 \\ F_1 \end{pmatrix}$.

2. (a) Do Example 8.3.8 of Chapter 8 using the method outlined in Example 12.5.1. Note that the terminology characteristic equation, characteristic polynomial, and so on, introduced in Chapter 8, comes from the language of matrix algebra.
(b) What is the significance of Algorithm 8.3.1, part c, with respect to this section?

3. Solve $S(k) = 5 S(k-1) + 4$, with $S(0) = 0$, using the method of this section.

4. How many paths are there of length 6 from vertex 1 to vertex 3 in Figure 12.5.2? How many paths of length 6 are there from vertex 2 to vertex 2? Hint: The characteristic polynomial of the adjacency matrix is $\lambda^4$.

Figure 12.5.2: A directed graph on four vertices.

5. Use the matrix $A$ of Example 12.5.2 to:
(a) Determine the number of paths of length 2 that exist from vertex $a$ to each of the vertices in Example 12.5.2. Verify using the graph. Do the same for vertices $b$ and $c$.
(b) Verify all the details of Example 12.5.2.
(c) Use Example 12.5.2 to determine the number of paths of length 4 there are from each node in the graph of Figure 12.5.1 to every node in the graph. Verify your results using the graph.

6. Let $A = \cdots$, a $2\times 2$ matrix.
(a) Find $e^A$.
(b) Recall that $\sin x = \sum_{k=0}^{\infty} \dfrac{(-1)^k x^{2k+1}}{(2k+1)!}$ and compute $\sin A$.
(c) Formulate a reasonable definition of the natural logarithm of a matrix and compute $\ln A$.

7. We noted in Chapter 5 that since matrix algebra is not commutative under multiplication, certain difficulties arise. Let $A = \cdots$ and $B = \cdots$ be $2\times 2$ matrices.
(a) Compute $e^A$, $e^B$, and $e^{A+B}$. Compare $e^A e^B$, $e^B e^A$, and $e^{A+B}$.
(b) Show that if $\mathbf{0}$ is the $2\times 2$ zero matrix, then $e^{\mathbf{0}} = I$.
(c) Prove that if $A$ and $B$ are two matrices that do commute, then $e^{A+B} = e^A e^B$, thereby proving that $e^A$ and $e^B$ commute.
(d) Prove that for any matrix $A$, $\bigl(e^A\bigr)^{-1} = e^{-A}$.

8. Another observation for adjacency matrices: For the matrix in Example 12.5.2, note that the sum of the elements in the row corresponding to the node $a$ (that is, the first row) gives the outdegree of $a$. Similarly, the sum of the elements in any given column gives the indegree of the node corresponding to that column.

Figure 12.5.3: Directed graphs.

(a) Using the matrix $A$ of Example 12.5.2, find the outdegree and the indegree of each node. Verify by the graph.
(b) Repeat part (a) for the directed graphs in Figure 12.5.3.

Applied Discrete Structures by Alan Doerr & Kenneth Levasseur is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 3.0 United States License.
