Introduction to Diagonalization
Gilbert Parker
For a square matrix A, a process called diagonalization can sometimes give us more insight into how the transformation x ↦ Ax works. The insight has a strong geometric flavor, but we will see later that it can also be very useful in applications.

The idea is to try to find a new basis B for R^n so that, if we compute in terms of B-coordinates, A works like a diagonal matrix D. We would like to find B-coordinates and a diagonal matrix D so that

    x ↦ Ax            (a computation in standard coordinates)

is the same as

    [x]_B ↦ D[x]_B    (a computation done in B-coordinates)

— the same geometric points, but named in two different coordinate systems.

The point pictured below at the lower left (named x or [x]_B in the two different coordinate systems) is moved by the transformation to the point pictured at the upper right (named Ax in one coordinate system or D[x]_B in the other coordinate system). Geometrically, the one point is moved to the other point; but we describe that action using two different coordinate systems: as x ↦ Ax or as [x]_B ↦ D[x]_B.
Changing coordinates: remember that multiplication by the change of basis matrix P_B converts B-coordinates into standard coordinates, and the inverse matrix converts standard coordinates into B-coordinates:

    P_B [x]_B = x        [x]_B = P_B^{-1} x

So what we want is to find (if possible) a basis B and a diagonal matrix D so that

    P D P^{-1} x = Ax

Reading the left side from right to left: first, multiplication by P^{-1} converts x into [x]_B (B-coordinates); then we compute D[x]_B; finally, multiplication by P converts the result D[x]_B back into standard coordinates. In other words, we want a basis B and a diagonal matrix D so that A factors as

    P D P^{-1} = A     (*)

The same ideas apply just as well in R^n, except that it's harder to draw a picture in R^3 (and impossible to draw in R^n for n > 3).

Definition  If there is a basis B for R^n and a diagonal matrix D such that the factorization (*) is true, we say that the matrix A is diagonalizable.

If we can do this, it's a good thing because (if we compute in B-coordinates): i) diagonal matrices are easy to work with, and ii) it's easy to visualize geometrically what a mapping z ↦ Dz does to z. Both i) and ii) can be important in applications.

Unfortunately, it is not always possible to diagonalize a square matrix A. We'll learn something in these notes about when it is possible, but more details will need to wait until Chapter 5. For now, introducing the basic ideas i) will let us practice with changing bases, and ii) will be good preparation for the later material in Chapter 5.
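The factorization idea can be sketched numerically. In this illustration (not from the notes), P and D are chosen for the example; the point is only that A = P D P^{-1} applied to x agrees with "convert to B-coordinates, apply D, convert back":

```python
import numpy as np

# Build a diagonalizable matrix A = P D P^{-1} from a chosen basis matrix P
# (columns are the basis vectors) and a diagonal matrix D of stretch factors.
P = np.array([[1.0, 1.0],
              [-1.0, 2.0]])
D = np.diag([3.0, 6.0])
A = P @ D @ np.linalg.inv(P)

# Applying A directly agrees with the three-step route through B-coordinates.
x = np.array([7.0, 5.0])
x_B = np.linalg.solve(P, x)            # [x]_B = P^{-1} x
assert np.allclose(A @ x, P @ (D @ x_B))
```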
The first example gives an illustration of why diagonalization is useful.

Example  This very elementary example is in R^2. Exactly the same ideas apply for n×n matrices A, but working in R^2 with a 2×2 matrix A makes the visualization much easier.

If D is a diagonal matrix, what does the mapping x ↦ Dx do to x geometrically? That is, how does D operate on R^2? To be specific, suppose

    D = [3 0; 0 6]

When we multiply the basis vectors e₁ and e₂ by D, we just get a rescaling. These vectors are mapped onto scalar multiples of themselves, and the scalars involved are the diagonal entries of D:

    D e₁ = [3 0; 0 6][1; 0] = [3; 0] = 3 e₁,   and   D e₂ = [3 0; 0 6][0; 1] = [0; 6] = 6 e₂

For any x, something similar happens:

    D x = [3 0; 0 6][x₁; x₂] = [3x₁; 6x₂]

D simply rescales each coordinate of x: by a factor of 3 in the first coordinate and by a factor of 6 in the second coordinate. In this particular example, the diagonal entries of D are both bigger than 1, so multiplying x by D stretches or dilates each coordinate. This is pictured below (for standard coordinates), where D = [3 0; 0 6]:
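A minimal numerical check of this rescaling, using the example's D = diag(3, 6):

```python
import numpy as np

# D = diag(3, 6) stretches the first coordinate by 3 and the second by 6.
D = np.diag([3, 6])
e1, e2 = np.array([1, 0]), np.array([0, 1])

assert np.array_equal(D @ e1, 3 * e1)   # D e1 = 3 e1
assert np.array_equal(D @ e2, 6 * e2)   # D e2 = 6 e2

# For a general point, each coordinate is rescaled separately.
assert np.array_equal(D @ np.array([2, 1]), np.array([6, 6]))  # [3*2, 6*1]
```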
Example  For a more general matrix like

    A = [4 1; 2 5]

it's not so clear geometrically what A does to x. However, we can get some insight by doing our calculations in terms of a new basis for R^2. First of all, it's a fact (just check by multiplying, if you like) that

    A = [4 1; 2 5] = [1 1; -1 2] · [3 0; 0 6] · [2/3 -1/3; 1/3 1/3]
                   =      P      ·      D      ·       P^{-1}

For now, don't worry about where the matrix P came from. Such questions come up in Chapter 5.

P is invertible so, by the Invertible Matrix Theorem, its columns are linearly independent and span R^2. Therefore we can use the columns of P as a new basis B for R^2:

    B = {b₁, b₂} = {[1; -1], [1; 2]}

(We looked at this basis as an example in the preceding lecture.) Then P is the matrix that the textbook calls P_B, the change of coordinates matrix from B-coordinates to standard coordinates:  P_B [x]_B = x.
The new basis lets us see more clearly how A operates. (See also the figure below.) Suppose, for example, that x = [7; 5]. Then

    A x = [4 1; 2 5][7; 5] = [33; 39]    (standard coordinates)

We can also calculate Ax in a more roundabout way, but one which gives us new insight:

    A x = [1 1; -1 2] · [3 0; 0 6] · [2/3 -1/3; 1/3 1/3] · [7; 5] = [33; 39]
             P              D              P^{-1}             x

In the second calculation, look at what happens to x, step-by-step, as each of the three matrices, in turn, does its work:

    1. Multiplication by P^{-1} converts x into B-coordinates:
           [x]_B = [2/3 -1/3; 1/3 1/3][7; 5] = [3; 4]
    2. The diagonal matrix D stretches the B-coordinates by factors of 3 and 6
       (in the directions corresponding to b₁ and b₂):
           D [x]_B = [3 0; 0 6][3; 4] = [9; 24]
    3. Finally, multiplication by P converts the (stretched) B-coordinates back
       into standard coordinates:
           P [9; 24] = [1 1; -1 2][9; 24] = [33; 39]
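The three-step calculation can be checked numerically. The matrices here are the reconstruction of this example (A = [4 1; 2 5], P = [1 1; -1 2], D = diag(3, 6), x = (7, 5)); treat them as an illustration of the pattern:

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 5.0]])
P = np.array([[1.0, 1.0], [-1.0, 2.0]])
D = np.diag([3.0, 6.0])
x = np.array([7.0, 5.0])

x_B = np.linalg.solve(P, x)   # step 1: [x]_B = P^{-1} x
stretched = D @ x_B           # step 2: stretch B-coordinates by 3 and 6
result = P @ stretched        # step 3: back to standard coordinates

assert np.allclose(x_B, [3, 4])
assert np.allclose(stretched, [9, 24])
assert np.allclose(result, A @ x)   # same answer as computing Ax directly
```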
Use the figure below to check each of the preceding steps graphically, as accurately as you can.

    1) x = [7; 5] has B-coordinates [x]_B = [3; 4].
    2) Stretching [x]_B by factors of 3 and 6 in the b₁ and b₂ directions gives
       the point with B-coordinates [9; 24].
    3) Converting these B-coordinates back to standard coordinates gives Ax = [33; 39].

So the effect of A on a point x (when described in standard coordinates) is the same as the effect of the diagonal matrix D on the same point x (when described in B-coordinates). D stretches the B-coordinates by a factor of 3 in the direction of the positive b₁-axis and by a factor of 6 in the direction of the positive b₂-axis.
There is another important observation in this example: look especially at the point x = b₁ = [1; -1]. Then

    A b₁ = [4 1; 2 5][1; -1] = [3; -3] = 3 [1; -1] = 3 b₁

Again, it's helpful to look at what's going on in more detail, using the same 3-step process (follow each step using the preceding figure):

    A b₁ = P D P^{-1} b₁
    P^{-1} b₁ = [1; 0]     (the B-coordinates of b₁ — the unit vector in the b₁ direction)
    D [1; 0] = [3; 0]      (stretched B-coordinates)
    P [3; 0] = 3 b₁        (converted back into standard coordinates)

You should analyze the equation A b₂ = [4 1; 2 5][1; 2] = [6; 12] = 6 [1; 2] = 6 b₂ in the same way.

Thus the matrix A maps each of the vectors b₁ and b₂ to a scalar multiple of itself. Such vectors are called eigenvectors of A; the scalars (3 for the vector b₁ and 6 for the vector b₂) are called the eigenvalues associated with the eigenvectors b₁ and b₂.

The reason we were able to do such a nice analysis of how this matrix A works is that we were able to write down (using the matrix P that I gave you in the beginning) a very special new basis — a basis whose members are eigenvectors of the matrix A. The general definition is:

Definition  A nonzero vector x is called an eigenvector of the n×n matrix A if Ax = λx for some scalar λ. The scalar λ is called an eigenvalue of A (associated with the eigenvector x).

(It's traditional, in almost all books, to use the Greek letter λ (lambda) to denote an eigenvalue. Some books call eigenvectors and eigenvalues by the loose translations "characteristic vectors" and "characteristic values".)
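The defining equation Ax = λx is easy to test numerically. numpy's `np.linalg.eig` returns the eigenvalues of a matrix together with unit eigenvectors (as columns); the matrix below uses the entries reconstructed for this example, so treat the specific numbers as illustrative:

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 5.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)  # columns of `eigenvectors` pair with `eigenvalues`

# Every (eigenvalue, eigenvector) pair satisfies A v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

assert np.allclose(sorted(eigenvalues), [3.0, 6.0])  # the two stretch factors
```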
In the preceding example there is nothing special about the specific eigenvalues 3 and 6. They might be any other numbers λ₁ and λ₂ (where perhaps even λ₁ = λ₂). In general, if A can be factored as A = P D P^{-1}, where P has columns b₁, b₂ and D = [λ₁ 0; 0 λ₂], then b₁ and b₂ are eigenvectors of A with eigenvalues λ₁ and λ₂. To see why: we can take B = {b₁, b₂} as a new basis for R^2 (why?), and then calculate

    A b₁ = P D P^{-1} b₁ = P D [1; 0] = P [λ₁; 0] = λ₁ b₁
           (here P^{-1} b₁ = [1; 0], the B-coordinates of b₁, since b₁ = 1·b₁ + 0·b₂)

Similarly, A b₂ = λ₂ b₂.

In this case, the geometric interpretation of how A operates on a point is just as it was earlier: convert the standard coordinates x of the point to B-coordinates; then multiplication by D rescales the first B-coordinate by a factor of λ₁ and the second by a factor of λ₂. (Of course if, say, 0 < λ₁ < 1, then the rescaling of the first coordinate is a contraction of the first coordinate of x rather than a stretch; and if, say, λ₂ < 0, the rescaling of the second coordinate also reverses the sign of the second coordinate of x, as well as either stretching or contracting.) Again, A acts like a diagonal matrix D if you calculate in B-coordinates.

Exactly the same algebraic manipulations and geometric interpretation apply in R^n with an n×n matrix A when it can be factored as A = P D P^{-1}, where D is a diagonal matrix. This motivates the following definition. To repeat the definition given earlier:

Definition  Suppose A is an n×n matrix. If A can be factored as A = P D P^{-1}, where D is diagonal, then we say that A is a diagonalizable matrix.

(It is not always possible to factor A in this way. Based on the geometric notions and ideas used above, can you think of a matrix A that isn't diagonalizable? Can you think of a matrix for which there are no eigenvectors?)
The preceding discussion proves the following theorem (where n = 2):

Theorem 1  Suppose A is a diagonalizable 2×2 matrix, say A = P [λ₁ 0; 0 λ₂] P^{-1}, where P = [b₁ b₂]. Then i) the columns of P are eigenvectors of the matrix A, and ii) they form a basis for R^2. The eigenvalues λ₁ and λ₂ for these eigenvectors are the scalars found on the diagonal of D in the corresponding columns.

Moreover, a completely similar argument works for an n×n matrix A if A = P D P^{-1} where D is diagonal. Therefore we can say:

Theorem 1 (general version)  Suppose A is an n×n diagonalizable matrix, say

    A = P · diag(λ₁, λ₂, λ₃, ..., λ_n) · P^{-1},   where P = [b₁ b₂ ... b_n].

Then i) the columns of P are eigenvectors of the matrix A, and ii) they form a basis for R^n. The eigenvalues for these eigenvectors are the scalars found on the diagonal of D in the corresponding columns (for example, λ₁ for b₁, etc.).

It turns out that the converse of Theorem 1 is also true. That is:

Theorem 2  If A is an n×n matrix and if there is a basis B = {b₁, b₂, ..., b_n} for R^n consisting of eigenvectors of A (with corresponding eigenvalues λ₁, λ₂, ..., λ_n), then A is diagonalizable. Specifically, we can factor A = P D P^{-1}, where

    P = [b₁ b₂ ... b_n]   and   D = diag(λ₁, λ₂, ..., λ_n).
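Theorems 1 and 2 together can be sketched numerically: an eigenvector basis, stacked as the columns of P, with the matching eigenvalues on the diagonal of D, yields the factorization A = P D P^{-1}. The 2×2 matrix here is illustrative:

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 5.0]])
eigenvalues, P = np.linalg.eig(A)  # columns of P: eigenvectors (a basis, since
                                   # the eigenvalues here are distinct)
D = np.diag(eigenvalues)

assert np.allclose(P @ D @ np.linalg.inv(P), A)  # Theorem 2: A = P D P^{-1}
assert np.allclose(A @ P, P @ D)                 # equivalent check, avoiding P^{-1}
```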
First we illustrate the meaning of Theorem 2 with another example, using n = 2 again to keep the notation as simple as possible. Then we will give the proof of the theorem in the case n = 2. The general proof for an n×n matrix is done in exactly the same way. (Not surprisingly, the proof closely parallels the ideas in the example.)

Example  Suppose A = [7 -2; 4 1], and let b₁ = [1; 1] and b₂ = [1; 2], so that B = {b₁, b₂} is a basis for R^2 (why?). Since

    A b₁ = [7 -2; 4 1][1; 1] = [5; 5] = 5 b₁,

b₁ is an eigenvector with eigenvalue 5. Similarly,

    A b₂ = [7 -2; 4 1][1; 2] = [3; 6] = 3 b₂,

so b₂ is an eigenvector with eigenvalue 3. Thus B = {b₁, b₂} is a basis for R^2 consisting of eigenvectors of A. This tells us that A is diagonalizable, because:

Let P = [b₁ b₂] = [1 1; 1 2]. Because the columns of P are linearly independent, P is invertible. Let D = [5 0; 0 3], where the diagonal entries are the eigenvalues corresponding to the columns of P (in the same order). Theorem 2 asserts that, having done this, it now turns out that A = P D P^{-1}.

To illustrate Theorem 2, we now verify that this is true. If we multiply A = P D P^{-1} on both sides by P (on the right), we see that the equation A = P D P^{-1} is equivalent to AP = PD. We check that this is true rather than writing down P^{-1}:

    AP = [7 -2; 4 1][1 1; 1 2] = [5 3; 5 6]   and   PD = [1 1; 1 2][5 0; 0 3] = [5 3; 5 6]
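The example's check AP = PD takes one line to confirm numerically (using the entries reconstructed for this example: A = [7 -2; 4 1], eigenvectors (1,1) and (1,2), eigenvalues 5 and 3):

```python
import numpy as np

A = np.array([[7.0, -2.0], [4.0, 1.0]])
P = np.array([[1.0, 1.0], [1.0, 2.0]])   # columns b1, b2
D = np.diag([5.0, 3.0])                  # eigenvalues, in the same order

assert np.allclose(A @ P, P @ D)                 # AP = PD ...
assert np.allclose(P @ D @ np.linalg.inv(P), A)  # ... hence A = P D P^{-1}
```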
Proof of Theorem 2 (for the case n = 2)  Suppose A is 2×2 and that B = {b₁, b₂} is a basis where b₁, b₂ are eigenvectors of A, with corresponding eigenvalues λ₁ and λ₂. Let P = [b₁ b₂] and D = [λ₁ 0; 0 λ₂]. Since b₁ and b₂ are linearly independent, P is invertible. To complete the proof that A is diagonalizable, we show now that A = P D P^{-1}. As in the preceding example, we just need to verify that AP = PD, and, to do this, we simply need to remember the definition of matrix multiplication:

    PD = P [λ₁ 0; 0 λ₂] = [ P[λ₁; 0]  P[0; λ₂] ] = [λ₁b₁  λ₂b₂],   and
    AP = A [b₁ b₂] = [Ab₁  Ab₂] = [λ₁b₁  λ₂b₂]

(the last step because b₁ is an eigenvector of A with eigenvalue λ₁, and b₂ is an eigenvector of A with eigenvalue λ₂). Therefore AP and PD have the same columns, so AP = PD.

The Big Picture, so far

In Chapter 4, we have been discussing vector spaces V (where V might not be R^n). After discussing linear independence and spanning sets in this more general setting, we were led to the idea of a basis B = {b₁, ..., b_n} for V. From that, the Unique Representation Theorem (p. 216) led to the idea of using B to define coordinates for each x in V: the coordinate vector is [x]_B = [c₁; c₂; ...; c_n] if x = c₁b₁ + c₂b₂ + ... + c_n b_n. The coordinate vector [x]_B is a vector in R^n.

We saw that the mapping x ↦ [x]_B from V to R^n is an isomorphism (a one-to-one, onto linear transformation). This mapping preserves all vector space operations and all
linear dependency and independency relations. For example, a set of vectors in V is linearly independent (or dependent) if and only if the corresponding set of coordinate vectors in R^n is linearly independent (or dependent). Vector space computations in V are perfectly mirrored by the corresponding coordinate vector computations in R^n.

In the special case when V = R^n, we could choose a basis B = {b₁, ..., b_n} different from the standard basis {e₁, ..., e_n}. Each vector x in R^n then gets a new name: its coordinate vector [x]_B relative to the basis B. The matrix P_B = [b₁ ... b_n] is the operator that changes B-coordinates into standard coordinates according to the formula P_B [x]_B = x. Since the columns of P_B are linearly independent, the change of coordinates matrix P_B is always invertible, and P_B^{-1} is the operator that converts standard coordinates into B-coordinates: [x]_B = P_B^{-1} x. For any invertible n×n matrix A, the column vectors can be used as a basis B for R^n and for B-coordinates; in that case, P_B is just the matrix A.

If a matrix A can be factored into the form P D P^{-1}, where D is a diagonal matrix, then A is called diagonalizable, because in that case A acts like a diagonal matrix when computations are done relative to the basis B that consists of the columns of P. This led us to the idea of eigenvectors and eigenvalues of the matrix A. We proved that a 2×2 matrix is diagonalizable if and only if R^2 has a basis consisting of eigenvectors of A, and indicated that a completely similar proof works for an n×n matrix A operating on R^n.

The conceptual idea of diagonalization and its relation to a basis of eigenvectors is nicely motivated geometrically and not very hard. You may have noticed, however, that in the preceding examples, a factorization of a given matrix A into P D P^{-1} was given, or a basis of eigenvectors was given so that the matrices P and D could be created.
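The change-of-coordinates machinery summarized above can be sketched with any invertible basis matrix. The basis below is an arbitrary illustrative choice, not one from the notes:

```python
import numpy as np

# P_B converts B-coordinates to standard coordinates; its inverse converts back.
b1, b2 = np.array([2.0, 1.0]), np.array([1.0, 1.0])
P_B = np.column_stack([b1, b2])   # change of coordinates matrix [b1 b2]

c = np.array([3.0, -1.0])         # some B-coordinates [x]_B
x = P_B @ c                       # x = 3*b1 + (-1)*b2, in standard coordinates
assert np.allclose(x, 3 * b1 - 1 * b2)

# Going the other way recovers the B-coordinates: [x]_B = P_B^{-1} x.
assert np.allclose(np.linalg.solve(P_B, x), c)
```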
But if you are simply given an n×n matrix A, then trying i) to find its eigenvalues and eigenvectors (if it has any at all), and ii) to determine whether R^n does or does not have a basis consisting of eigenvectors of A, are harder questions. Be aware of those questions, but try to ignore your worries about them until we get into Chapter 5. For now, just focus on the concept of diagonalization, what it means, and how diagonalization is connected to eigenvectors and eigenvalues.
Here are proofs for some of the results about diagonalization that were presented without proof in class.
Suppose A is an n×n matrix. In what follows, terms like eigenvectors, eigenvalues, and eigenspaces all refer to the matrix A.
More informationLinear algebra and differential equations (Math 54): Lecture 10
Linear algebra and differential equations (Math 54): Lecture 10 Vivek Shende February 24, 2016 Hello and welcome to class! As you may have observed, your usual professor isn t here today. He ll be back
More informationRecitation 8: Graphs and Adjacency Matrices
Math 1b TA: Padraic Bartlett Recitation 8: Graphs and Adjacency Matrices Week 8 Caltech 2011 1 Random Question Suppose you take a large triangle XY Z, and divide it up with straight line segments into
More informationAnnouncements Monday, November 13
Announcements Monday, November 13 The third midterm is on this Friday, November 17 The exam covers 31, 32, 51, 52, 53, and 55 About half the problems will be conceptual, and the other half computational
More informationThis last statement about dimension is only one part of a more fundamental fact.
Chapter 4 Isomorphism and Coordinates Recall that a vector space isomorphism is a linear map that is both one-to-one and onto. Such a map preserves every aspect of the vector space structure. In other
More informationMAT224 Practice Exercises - Week 7.5 Fall Comments
MAT224 Practice Exercises - Week 75 Fall 27 Comments The purpose of this practice exercise set is to give a review of the midterm material via easy proof questions You can read it as a summary and you
More informationSolutions to Final Exam
Solutions to Final Exam. Let A be a 3 5 matrix. Let b be a nonzero 5-vector. Assume that the nullity of A is. (a) What is the rank of A? 3 (b) Are the rows of A linearly independent? (c) Are the columns
More informationChapter 8. Rigid transformations
Chapter 8. Rigid transformations We are about to start drawing figures in 3D. There are no built-in routines for this purpose in PostScript, and we shall have to start more or less from scratch in extending
More informationDesigning Information Devices and Systems I Spring 2016 Official Lecture Notes Note 21
EECS 6A Designing Information Devices and Systems I Spring 26 Official Lecture Notes Note 2 Introduction In this lecture note, we will introduce the last topics of this semester, change of basis and diagonalization.
More informationMATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION
MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether
More informationAnswers in blue. If you have questions or spot an error, let me know. 1. Find all matrices that commute with A =. 4 3
Answers in blue. If you have questions or spot an error, let me know. 3 4. Find all matrices that commute with A =. 4 3 a b If we set B = and set AB = BA, we see that 3a + 4b = 3a 4c, 4a + 3b = 3b 4d,
More information3 Fields, Elementary Matrices and Calculating Inverses
3 Fields, Elementary Matrices and Calculating Inverses 3. Fields So far we have worked with matrices whose entries are real numbers (and systems of equations whose coefficients and solutions are real numbers).
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationMatrix Basic Concepts
Matrix Basic Concepts Topics: What is a matrix? Matrix terminology Elements or entries Diagonal entries Address/location of entries Rows and columns Size of a matrix A column matrix; vectors Special types
More informationChapter 6: Orthogonality
Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products
More informationMATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)
MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m
More informationMITOCW ocw f99-lec30_300k
MITOCW ocw-18.06-f99-lec30_300k OK, this is the lecture on linear transformations. Actually, linear algebra courses used to begin with this lecture, so you could say I'm beginning this course again by
More informationLinearization of Differential Equation Models
Linearization of Differential Equation Models 1 Motivation We cannot solve most nonlinear models, so we often instead try to get an overall feel for the way the model behaves: we sometimes talk about looking
More informationMAC Module 3 Determinants. Learning Objectives. Upon completing this module, you should be able to:
MAC 2 Module Determinants Learning Objectives Upon completing this module, you should be able to:. Determine the minor, cofactor, and adjoint of a matrix. 2. Evaluate the determinant of a matrix by cofactor
More informationLECTURE 16: TENSOR PRODUCTS
LECTURE 16: TENSOR PRODUCTS We address an aesthetic concern raised by bilinear forms: the source of a bilinear function is not a vector space. When doing linear algebra, we want to be able to bring all
More information1 Last time: inverses
MATH Linear algebra (Fall 8) Lecture 8 Last time: inverses The following all mean the same thing for a function f : X Y : f is invertible f is one-to-one and onto 3 For each b Y there is exactly one a
More informationEigenvalues and eigenvectors
Roberto s Notes on Linear Algebra Chapter 0: Eigenvalues and diagonalization Section Eigenvalues and eigenvectors What you need to know already: Basic properties of linear transformations. Linear systems
More informationSystems of Equations 1. Systems of Linear Equations
Lecture 1 Systems of Equations 1. Systems of Linear Equations [We will see examples of how linear equations arise here, and how they are solved:] Example 1: In a lab experiment, a researcher wants to provide
More informationChapter 3: Theory Review: Solutions Math 308 F Spring 2015
Chapter : Theory Review: Solutions Math 08 F Spring 05. What two properties must a function T : R m R n satisfy to be a linear transformation? (a) For all vectors u and v in R m, T (u + v) T (u) + T (v)
More informationEigenspaces and Diagonalizable Transformations
Chapter 2 Eigenspaces and Diagonalizable Transformations As we explored how heat states evolve under the action of a diffusion transformation E, we found that some heat states will only change in amplitude.
More informationLINEAR ALGEBRA REVIEW
LINEAR ALGEBRA REVIEW SPENCER BECKER-KAHN Basic Definitions Domain and Codomain. Let f : X Y be any function. This notation means that X is the domain of f and Y is the codomain of f. This means that for
More informationTopic 2 Quiz 2. choice C implies B and B implies C. correct-choice C implies B, but B does not imply C
Topic 1 Quiz 1 text A reduced row-echelon form of a 3 by 4 matrix can have how many leading one s? choice must have 3 choice may have 1, 2, or 3 correct-choice may have 0, 1, 2, or 3 choice may have 0,
More informationMATH 223 FINAL EXAM APRIL, 2005
MATH 223 FINAL EXAM APRIL, 2005 Instructions: (a) There are 10 problems in this exam. Each problem is worth five points, divided equally among parts. (b) Full credit is given to complete work only. Simply
More informationCoordinate Systems. S. F. Ellermeyer. July 10, 2009
Coordinate Systems S F Ellermeyer July 10, 009 These notes closely follow the presentation of the material given in David C Lay s textbook Linear Algebra and its Applications (rd edition) These notes are
More informationChapters 5 & 6: Theory Review: Solutions Math 308 F Spring 2015
Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 205. If A is a 3 3 triangular matrix, explain why det(a) is equal to the product of entries on the diagonal. If A is a lower triangular or diagonal
More informationReal Analysis Prof. S.H. Kulkarni Department of Mathematics Indian Institute of Technology, Madras. Lecture - 13 Conditional Convergence
Real Analysis Prof. S.H. Kulkarni Department of Mathematics Indian Institute of Technology, Madras Lecture - 13 Conditional Convergence Now, there are a few things that are remaining in the discussion
More informationMITOCW ocw f99-lec17_300k
MITOCW ocw-18.06-f99-lec17_300k OK, here's the last lecture in the chapter on orthogonality. So we met orthogonal vectors, two vectors, we met orthogonal subspaces, like the row space and null space. Now
More information0.2 Vector spaces. J.A.Beachy 1
J.A.Beachy 1 0.2 Vector spaces I m going to begin this section at a rather basic level, giving the definitions of a field and of a vector space in much that same detail as you would have met them in a
More informationThe Gauss-Jordan Elimination Algorithm
The Gauss-Jordan Elimination Algorithm Solving Systems of Real Linear Equations A. Havens Department of Mathematics University of Massachusetts, Amherst January 24, 2018 Outline 1 Definitions Echelon Forms
More information