Final Review Sheet

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]


The final will cover Chapters 1, 2, 3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5, 6 and 7. This is essentially all material covered this term.

Watch out for these common mistakes:

- When writing a system of equations as an augmented matrix, remember that you must first put all of the variables on one side of the equation.
- Make sure you can correctly find the solution to a system of equations once it's in reduced row echelon form. Don't forget about any of the equations. If you have a variable that doesn't appear in any of your equations (so the augmented matrix has a column of entirely 0's), it does NOT mean that that variable is zero. What does it mean?
- If you have a linear transformation from R^m to R^n, does it correspond to an n x m matrix or an m x n matrix? How can you remember?
- Remember that the entries of the matrix of some linear transformation cannot depend on any of the variables. If you have a specific function from R^m to R^n, you should end up with a specific matrix.
- If you are composing the linear transformations T(x) = Ax and S(x) = Bx, does the composition S ∘ T correspond to the matrix AB or to BA? How can you remember?
- If you are asked to find bases for the image and kernel of an n x m matrix (n rows, m columns), what type of vectors should be in the image: n-dimensional vectors or m-dimensional vectors? What about the kernel?
- If you are trying to find a basis for the image of A by Gauss-Jordan, remember that your answer should be columns of the original matrix.
- We know that the rank and nullity of a matrix sum to either the number of rows or the number of columns. Which one is it? How can you remember?
- When you are working with arbitrary vector spaces, make sure you remember what your coordinates are, and what your vectors are. For instance, for a space of polynomials, things like x or x^2 are your vectors, and so you should treat them in the same way you would treat vectors.
- To see if something is a linear space, or a linear transformation, you really just need to understand what's happening to the coordinates. If everything is linear in those coordinates, then you have something linear.
- When you are trying to find the matrix of some linear transformation, pay attention to the basis. You can do this in the exact same way you would for a linear function from R^m to R^n, but you need to remember to always use the given basis, instead of the standard basis.
- When you see a polynomial like 2 + 3x + 6x^2, it's very tempting to immediately turn it into the vector (2, 3, 6), but this is only valid if you are working with the basis A = (1, x, x^2). If you are working with, say, B = (1, 1 + 3x, 1 + x^2), then 2 + 3x + 6x^2 corresponds to (-5, 1, 6).
- When finding a basis for a vector space V, your answer should always be a list of elements that are in V. Something like the vector (1, 2, 3) isn't in P_2.
- Don't just memorize the formula for Gram-Schmidt; you will forget it. Focus on understanding it. Why are we subtracting off the things that we are? If you understand this, you should be able to come up with the formula on your own if you ever forget it.
- When dealing with orthonormal bases and orthogonal projections, don't get confused about A^T A and A A^T. Both of these show up in different situations. How can you tell which one is the right one to use?
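The change-of-basis point above can be checked numerically. The sketch below (using numpy, with the basis encoded as a matrix whose columns hold the standard-basis coordinates of each basis polynomial — an assumed encoding) recovers the B-coordinates of 2 + 3x + 6x^2:

```python
import numpy as np

# Columns: standard coordinates of the basis polynomials 1, 1 + 3x, 1 + x^2.
M = np.array([[1.0, 1.0, 1.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])

# Standard-basis coordinates of 2 + 3x + 6x^2.
p = np.array([2.0, 3.0, 6.0])

# The B-coordinates c satisfy M c = p.
c = np.linalg.solve(M, p)
print(c)  # [-5.  1.  6.]
```

Indeed, -5(1) + 1(1 + 3x) + 6(1 + x^2) = 2 + 3x + 6x^2.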

- While the formula x = (A^T A)^(-1) A^T b technically works for finding a least squares solution, it's usually a mistake to try using it. Just solve the system of equations A^T A x = A^T b normally. We know ways to solve systems of equations that are much easier than finding the inverse of a matrix.
- When counting the number of inversions in a pattern, remember that you need to think about all possible pairs of entries.
- If you are finding the determinant of a matrix by Gauss-Jordan, remember that factoring something out of a row will change the determinant (in what way?).
- It can be tempting to use fancy techniques like Cramer's rule or the adjoint matrix to do things like solving systems of equations, or finding inverses of matrices, but this is usually a bad idea. Unless you have a very good reason not to, you should probably use more basic techniques like Gauss-Jordan. This is usually faster, easier to remember, and harder to mess up.
- Remember that λ = 0 can be an eigenvalue of a matrix. Really, there's nothing special about λ = 0 when you are talking about eigenvalues.
- If A is an n x n matrix, then its characteristic polynomial must have degree n. If it doesn't, you've done something wrong.
- When finding eigenvectors, remember that v = 0 is NOT an eigenvector. If v = 0 is the only solution to the system Av = λv, then λ is NOT an eigenvalue of A; you must have made a mistake in finding the eigenvalues.

Make sure you are comfortable with the following:

Systems of Linear Equations (1.1):

- Know how to use elimination or substitution to solve simple systems of linear equations.
- Know how to recognize when a system has infinitely many solutions, or no solutions. If a system has infinitely many solutions, how would you go about finding all of them?
- Understand solutions to systems of equations geometrically, in terms of intersecting lines/planes.
- Know how to recognize situations when you must solve a system of linear equations.
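The least-squares advice above — solve A^T A x = A^T b rather than forming the inverse — can be sketched in numpy. The matrix and right-hand side below are an assumed example (an inconsistent 3x2 system):

```python
import numpy as np

# An inconsistent system A x = b (no exact solution).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Solve the normal equations A^T A x = A^T b directly,
# instead of computing (A^T A)^{-1} explicitly.
x = np.linalg.solve(A.T @ A, A.T @ b)
print(x)  # [ 5. -3.]
```

Here A^T A = [[3, 3], [3, 5]] and A^T b = (6, 0), giving x = (5, -3).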
Gauss-Jordan Elimination (1.2):

- Understand how a system of linear equations can be written as an augmented matrix. What are the dimensions of this matrix? If you have m variables and n equations, how many rows and columns does the augmented matrix have?
- Do you lose any information by switching to the augmented matrix? Can you always get back to the original system?
- Know the elementary row operations. What do they represent in terms of the original system of equations? Why doesn't applying an elementary row operation change the set of solutions to the system of equations?
- Know what it means for a matrix to be in reduced row echelon form.
- Know the Gauss-Jordan algorithm. That is, know how to turn any matrix into a matrix in reduced row echelon form.
- Know how to read off the solution to a system of equations, once the augmented matrix has been written in reduced row echelon form. What does the final augmented matrix look like if the system has only one solution? If the system is inconsistent? What if the system has infinitely many solutions? How do you recognize this? And how can you find all solutions in this case?
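For checking your hand computations, sympy can row-reduce an augmented matrix. The system below is an assumed example with infinitely many solutions:

```python
from sympy import Matrix

# Augmented matrix for the system
#   x + 2y + 3z = 4
#  2x + 4y + 7z = 9
aug = Matrix([[1, 2, 3, 4],
              [2, 4, 7, 9]])

rref, pivots = aug.rref()
print(rref)    # the reduced row echelon form
print(pivots)  # (0, 2): x and z are leading, y is free
```

The rref is [[1, 2, 0, 1], [0, 0, 1, 1]], so z = 1 and x = 1 - 2y with y free.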

- Know what leading variables and free variables are. What do they mean in terms of finding solutions to systems of equations? What does the number of free variables mean in terms of the set of solutions? If there is only 1 solution, how many free variables are there? What if the set of solutions forms a line? A plane?

Rank of a Matrix (1.3):

- Know what the rank of a matrix is, how to find it, and why it is important.
- If A is an n x m matrix, what is the largest possible value of rank(A)?
- If the n x m matrix A is the coefficient matrix of a system of linear equations (i.e. the system is Ax = b), under what conditions on m, n and rank(A) will:
  - The system always have at least one solution? (i.e. never be inconsistent)
  - Never have more than one solution?
  - Always have exactly one solution?
- Can you interpret the above in terms of a linear transformation being injective, surjective, or both?
- If the system Ax = b has a solution, how can you find the dimension of the set of solutions? Does this quantity depend on b?
- If an n x n matrix has rank n, what is its reduced row echelon form? What if an n x m matrix has maximal possible rank? What can you say about its rref?
- If a system of equations has n equations and m unknowns with n < m, is it possible for the system to have exactly one solution? What about no solutions?
- Remember, intuitively you can think of the rank of a system of equations as being the actual number of equations. You can always rewrite the system Ax = b as a system with exactly rank(A) equations. In this situation, adding each equation really does decrease the dimension of the set of solutions by one.

Linear Transformations (2.1):

- Understand what it means for a function T : R^m -> R^n to be linear. Know how this is different from saying that T is affine. (Before taking this course, when you used the term "linear", you probably meant affine. Remember the distinction.)
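The idea that rank counts the "actual" number of equations can be illustrated numerically. The matrix below is an assumed example whose third row is the sum of the first two, so only two of its three equations carry information:

```python
import numpy as np

# Third row = first row + second row, so the rank is 2, not 3.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [1.0, 3.0, 7.0]])

print(np.linalg.matrix_rank(A))  # 2
```

By rank-nullity, the kernel of this 3x3 matrix then has dimension 3 - 2 = 1.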
- Understand why you can completely describe a linear transformation T : R^m -> R^n by simply giving the matrix of coefficients. What are the dimensions of this matrix? Make sure you understand this! The relationship between a matrix and a linear transformation is the single most important thing we will learn this term. Everything else we do will be based on this.
- Understand how a system of linear equations can be written as Ax = b for some matrix A, and thus can be interpreted as T(x) = b for a linear transformation T. This means that understanding linear transformations is the same thing as understanding systems of linear equations.
- Given an n x m matrix, know how to use it to define a linear transformation R^m -> R^n. Remember to pay attention to the dimensions. How would you calculate the image of a specific vector v in R^m?
- Know how to find the matrix corresponding to an explicitly given linear transformation. For instance, what matrix corresponds to the map T(a, b, c) = (a + 2b - 3c, b - c)? Be sure to get the dimensions right, and remember that the entries of the final matrix can't depend on the inputs a, b and c.
- Know what the standard basis vectors e_1, e_2, ..., e_m of R^m are, and know how any vector v in R^m can be written as a sum v = x_1 e_1 + x_2 e_2 + ... + x_m e_m (this is called a linear combination of e_1, e_2, ..., e_m).
- Know why a function T : R^m -> R^n that satisfies the two properties T(x + y) = T(x) + T(y) and T(kx) = kT(x) must be linear. Understand why in this case the vectors T(e_1), T(e_2), ..., T(e_m) completely determine the function T, and know how to easily find the matrix representing T from these vectors.
- Know how to find the matrix representing simple linear transformations like T(x) = 0, T(x) = x or T(x) = kx.
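The recipe "the columns of the matrix are T(e_1), ..., T(e_m)" can be sketched directly, using the example map from this section (reconstructed here as T(a, b, c) = (a + 2b - 3c, b - c)):

```python
import numpy as np

def T(v):
    # The map T(a, b, c) = (a + 2b - 3c, b - c) from R^3 to R^2.
    a, b, c = v
    return np.array([a + 2*b - 3*c, b - c])

# The columns of the matrix of T are the images of the standard basis.
A = np.column_stack([T(e) for e in np.eye(3)])
print(A)  # a 2 x 3 matrix: [[1, 2, -3], [0, 1, -1]]
```

Note the dimensions: a map from R^3 to R^2 corresponds to a 2 x 3 matrix.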

Geometric Transformations (2.2):

- Understand why most common geometric transformations are linear. (This is because addition and scalar multiplication of vectors can be defined geometrically, so any transformation that preserves the pictures defining addition/multiplication should be linear.)
- Know how to find the 2 x 2 matrix representing a rotation of the plane about the origin, through an angle of θ.
- Know how to find matrices representing other transformations, such as reflections, orthogonal projections, scalings, or shears. Also know how to find matrices of geometric transformations in higher dimensions (e.g. rotations or reflections in three dimensions).

Composition of linear transformations (2.3):

- If S, T : R^m -> R^n are linear transformations, understand why S + T is also a linear transformation. What about kT, where k is in R? How do you find the matrices corresponding to these, in terms of the matrices of S and T?
- Now assume that S : R^m -> R^p and T : R^p -> R^n are linear transformations. Understand why the composition (T ∘ S)(x) = T(S(x)) is also a linear transformation. If A is the matrix corresponding to S and B is the matrix corresponding to T, know how to find the matrix corresponding to T ∘ S in terms of A and B. This is denoted by BA, and is called the matrix product. Make sure you know how to calculate it.
- Make sure you understand why the matrix product is defined the way that it is. It is not arbitrary; it is defined in exactly the correct way to make it agree with composition of functions.
- Understand why we generally don't have AB = BA, and why it's possible to have AB = 0 with A, B ≠ 0 (or even A^n = 0 with A ≠ 0). These might seem strange if you are used to multiplication of real numbers, but if you think of matrix multiplication as function composition, they may seem much more natural.
- Remember that the product AB will not even be defined for some choices of matrices A and B. Under what conditions will it exist?
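As a quick sketch of the rotation matrix mentioned above (the check on e_1 is an assumed example):

```python
import numpy as np

def rotation(theta):
    # 2 x 2 matrix rotating the plane about the origin through angle theta.
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

R = rotation(np.pi / 2)
# Rotating e_1 by 90 degrees should give (approximately) e_2.
print(R @ np.array([1.0, 0.0]))
```

The columns of the matrix are exactly where e_1 and e_2 land, which is how the formula is derived in the first place.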
- What does this mean in terms of function composition?
- Know how to use the function interpretation of matrix multiplication to show that matrix multiplication is associative (i.e. A(BC) = (AB)C) without any extra calculations.
- Know what the identity matrix I_n is, and why it is significant. What is I_n A, or A I_m (where A is n x m)?
- Know how to use matrix multiplication to find the matrices corresponding to complicated linear transformations, by writing them in terms of simpler ones (such as writing orthogonal projection onto a line as a composition of two rotations and a simpler orthogonal projection). When you are doing this, be sure to pay attention to the order in which you are applying the transformations. The first one you apply should be on the right (for the same reason that f(g(x)) means applying g first, and then f).

Inverses (2.4):

- Know what it means for a function f : X -> Y to be injective, surjective or bijective. What do these mean in terms of the equation f(x) = b?
- Understand why a bijective function f : X -> Y must have an inverse f^(-1) : Y -> X. If T is a linear function which is bijective, understand why T^(-1) must also be linear. If A and A^(-1) are the matrices corresponding to these linear transformations, then what are A A^(-1) and A^(-1) A? What is (A^(-1))^(-1)?
- Know how to tell if an n x m matrix A has an inverse. What must be true of n, m and rank(A)?
- If A is invertible, understand why the system of equations Ax = b can be rewritten as x = A^(-1) b. This gives us a very easy way to solve any linear equation involving A.
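The strange-looking facts about matrix multiplication (AB ≠ BA, and AB = 0 with A, B ≠ 0) are easy to exhibit; the pair below is an assumed example:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])

print(A @ B)  # the zero matrix, even though A and B are both nonzero
print(B @ A)  # NOT zero: order matters
print(A @ A)  # A^2 = 0 with A != 0 is also possible
```

Thinking of A and B as maps makes this natural: B kills the second coordinate, and A only looks at the second coordinate, so A ∘ B sends everything to 0.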

- Know how to find the inverse of a matrix. If A is an n x n invertible matrix, we must find some X with AX = I_n. Doing this gives us n systems of linear equations (one for each column) for the entries of X. Understand why this can be written as a single augmented matrix [A | I_n]. What do you get when you write this augmented matrix in rref?
- What would happen in the above procedure if A were not invertible? Would you be able to finish the process and get an (incorrect) answer for A^(-1)? Is it necessary to make sure that A is invertible before trying to calculate the inverse, or will you figure out that it isn't invertible in the process of trying to find A^(-1)?
- Understand why knowing that AB = BA = I_n automatically implies that B = A^(-1). What if we only had BA = I_n?
- Know how to tell when a 2 x 2 matrix A = [a b; c d] is invertible. If it is invertible, what is the inverse, in terms of a, b, c and d? In Chapter 6, we will do the same thing for n x n matrices.

Image and Kernel (3.1):

- Know what the image of a function f : X -> Y is, and how to find it. If T is a linear transformation, know how to find the image of T.
- Know what the span, span(v_1, v_2, ..., v_m), of a set of vectors v_1, v_2, ..., v_m is. What does this mean geometrically? What is the span of a single vector? Two non-parallel vectors?
- Understand why the image of a matrix A is just the span of its columns.
- Understand what the kernel of a linear transformation is. What does it mean in terms of systems of equations, and why should we care about it?
- Know how to use Gauss-Jordan elimination to find the kernel of a matrix. You should be able to express the kernel as the span of a set of vectors. How many vectors do you get, and what do they correspond to?
- If you know the kernel of a matrix A, what does that tell you about the solutions to a system of equations like Ax = b? If you have one solution, how do you find the others? Geometrically, what does the set of solutions look like, and how does it relate to ker(A)?
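The [A | I_n] procedure described above can be carried out in sympy; the 2x2 matrix below is an assumed example:

```python
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [1, 1]])

# Row-reduce the augmented matrix [A | I]. If A is invertible, the left
# block becomes I and the right block becomes A^{-1}.
aug = A.row_join(eye(2))
rref, _ = aug.rref()
A_inv = rref[:, 2:]
print(A_inv)  # Matrix([[1, -1], [-1, 2]])
```

If A were not invertible, the left block of the rref would contain a row of zeros, and the procedure would fail visibly rather than produce a wrong answer.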
If ker(t ) = { 0}, where T is a linear transformation, what can you conclude about T? Subspaces of R n (3.2): Understand what a subspace of R n is. This is just a subset of R n which satisfies a few simple properties (namely that you can t get out of it by adding two vectors, or by taking the scalar multiple of a vector). Know what these look like in a low number of dimensions. What is a 1-dimensional subspace? A 2- dimensional subspace? What do the subspaces of R 2 or R 3 look like? Know how to check if something is a subspace (this just amounts to checking the the properties hold). Know why ker(t ) and im(t ) are subspaces, and more generally, span( v 1, v 2,..., v m ) is a subspace of R n for any v 1, v 2,..., v m R n. Understand why having v 1, v 2,..., v m W, for W a subspace of R n, implies that span( v 1, v 2,..., v m ) W. Bases and Dimension (3.2,3.3): Given a list of vectors v 1, v 2,..., v m, understand what it means for some of them to be redundant, and understand why deleting the redundant vectors doesn t change the span. Know why a set of vectors v 1, v 2,..., v m is linearly independent (i.e. has no redundant vectors) if and only if it has no nontrivial relations in the form c 1 v 1 + c 2 v 2 + + c m v m = 0. If the columns, v 1, v 2,..., v n of an n m matrix A are linearly independent, then what is ker(a)? In general, how do relations, c 1 v 1 + c 2 v 2 + + c m v m = 0, between the columns of a matrix A relate to elements of ker(a)? Know what it means for a set of vectors v 1, v 2,..., v m W to be a basis for W. Why does this mean that any w W can be written uniquely as w = c 1 v 1 + c 2 v 2 + + c m v m? 5

- Know why any two bases of a subspace W must have the same number of elements (called the dimension of W). Understand why this definition of dimension lines up with your intuitive understanding of dimension. What is the dimension of a line? Of a plane? Of R^n?
- Know how to use Gauss-Jordan elimination to find bases for ker(A) and im(A). How do the dimensions of these spaces relate to the rank of A? What is dim(ker(A)) + dim(im(A))?

Linear spaces (4.1):

- Know what it means for a set to be a linear space (vector space), and know how to recognize if something is a linear space. Be familiar with the common examples of linear spaces (P_n, R^{n x m}, C, etc.). Know how to construct other linear spaces as subspaces of these (and how to recognize if a given subset of one of these is linear).
- When you are doing this, it's very important to keep track of what the elements of your space are. For instance, in P_2 the elements are polynomials a + bx + cx^2. This means that things like x or x^2 should be treated like vectors, not numbers (and so there is no "value of x"; these are functions), and the coefficients a, b and c should be treated as coordinates. In particular, you should never have x appearing in the entries of a vector or matrix. Something like [1 x; x x^2] is just as meaningless as [e_1 e_2; e_2 e_3].
- Understand how pretty much everything we learned about R^n in chapters 2 and 3 can be done for any linear space. Once you understand and internalize this idea, the material from chapter 4 will start to seem much easier. There is very little new material in this chapter. Almost everything we learn here is just something you already learned earlier in the term, just phrased in a slightly more general way.
- If you are stuck on a problem about linear spaces, it is a good idea to think about what the equivalent problem about R^n would be. If you know how to solve that problem, you should be able to solve the original one in essentially the same way.
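The relationship dim(ker(A)) + dim(im(A)) = number of columns can be checked with sympy, which computes both bases for you (the matrix below is an assumed rank-1 example):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])   # second row is twice the first, so rank 1

ker_basis = A.nullspace()    # basis for ker(A)
im_basis = A.columnspace()   # basis for im(A), taken from columns of A

# rank + nullity should equal the number of columns.
print(len(im_basis), len(ker_basis))  # 1 2
```

Note that `columnspace` returns columns of the original matrix, matching the advice earlier in this sheet about finding a basis for the image.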
- In particular, understand how the concepts of linear independence, span, bases and dimension generalize to linear spaces.
- Know how to find a basis for a vector space, and use this to determine the dimension. Generally, this amounts to writing out what the elements of V look like. For instance, elements of P_2 look like a + bx + cx^2 for arbitrary a, b and c. Describing the elements of P_2 in this form is exactly the same thing as writing the elements of P_2 as a linear combination of the vectors 1, x and x^2 (where the coefficients are your choice of coordinates a, b and c). If there are no relations between your chosen coordinates (i.e. any choices of a, b and c give you an element of P_2), then the set 1, x, x^2 is a basis.
- For example, if V is the set of polynomials in P_2 with f'(1) = 0, then an arbitrary element of V can be written as f(x) = a + bx + cx^2 with b = -2c. This is equivalent to saying f(x) = a - 2cx + cx^2, where there is now no restriction on a and c. Thus V has basis 1, x^2 - 2x.
- What are the dimensions of the spaces P_n, R^{n x m} and C? What is a simple choice of basis for each space?
- If B = (f_1, ..., f_n) is a basis for V, understand how B allows you to think of V as being the space R^n. Given some f in V and a basis B for V, what do you need to do to find the vector [f]_B in R^n corresponding to f? Remember that this depends on your choice of basis B.

Linear Transformations (4.2):

- Understand what it means for a function T : V -> W to be a linear transformation, and know how to recognize when a function is linear.
- To figure out if T is linear, you should think about what it does to the coordinates of your vectors.
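The worked example with f'(1) = 0 can be verified symbolically; the sketch below checks that the general element a - 2cx + cx^2 really does satisfy the condition for every a and c:

```python
from sympy import symbols, diff

x, a, c = symbols('x a c')

# General element of V = { f in P_2 : f'(1) = 0 }, written with b = -2c.
f = a - 2*c*x + c*x**2
fprime = diff(f, x)          # -2c + 2cx

# The derivative vanishes at x = 1 regardless of a and c.
print(fprime.subs(x, 1))     # 0
```

In particular both basis elements, 1 and x^2 - 2x, lie in V, and together the coordinates a and c are unrestricted, so V is 2-dimensional.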

- If you understand why a linear space is just the same thing as R^n, determining whether T is linear is just the same thing as determining whether a map R^m -> R^n is linear, which is likely something you can do.
- Be familiar with simple examples of linear transformations. For instance:
  - Derivatives or integrals.
  - The map T(f) = f(c) from P_n to R, where c is a constant.
  - The map T(x) = v · x from R^n to R, where v in R^n is a constant vector.
  - The maps T(X) = AX or T(X) = XB (or T(X) = AXB), where X is in R^{n x m}, and A and B are constant matrices of the right dimensions.
- Know how concepts such as the image and kernel, or rank-nullity, generalize to this context. Again, make sure you understand why these aren't anything new; these are exactly the same things we considered in chapters 2 and 3, just in a slightly different context. If you know how to work with linear transformations from R^m to R^n, then general linear transformations shouldn't be any harder.

The matrix of a linear transformation (3.4, 4.3):

- If T : V -> V is a linear transformation, and B = (f_1, ..., f_n) is a basis for V, understand how T can be thought of as a map from R^n to R^n, and thus as corresponding to a matrix [T]_B (and remember that this depends on the choice of B).
- Understand why [T]_B [f]_B = [T(f)]_B, for any f in V.
- Know how to find the matrix [T]_B. This is another situation where understanding the case for maps T : R^n -> R^n helps a lot. To find the matrix corresponding to T : R^n -> R^n, one simply computes T(e_1), ..., T(e_n) and takes these to be the columns of the matrix. To find the matrix of a map T : V -> V one does essentially the same thing, except with the basis B = (f_1, ..., f_n) instead of the standard basis for R^n. Namely, one computes T(f_1), T(f_2), ..., T(f_n), and finds the coordinate vector of each one with respect to the basis B (which essentially amounts to writing each T(f_i) as a linear combination of f_1, ..., f_n). Again, remember that this depends on the basis.
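The column-by-column recipe can be sketched for the derivative map on P_2 (an assumed example, using the basis B = (1, x, x^2)): D(1) = 0, D(x) = 1, D(x^2) = 2x, and the B-coordinates of these images form the columns.

```python
from sympy import Matrix

# Matrix of D : P_2 -> P_2 (differentiation) in the basis B = (1, x, x^2).
# Columns are the B-coordinates of D(1) = 0, D(x) = 1, D(x^2) = 2x.
D = Matrix([[0, 1, 0],
            [0, 0, 2],
            [0, 0, 0]])

# Check on f(x) = 2 + 3x + 6x^2, whose B-coordinates are (2, 3, 6):
f = Matrix([2, 3, 6])
print(D * f)  # (3, 12, 0), the coordinates of f'(x) = 3 + 12x
```

This is exactly the identity [D]_B [f]_B = [D(f)]_B from the bullet above.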
- If the basis B is not the standard basis of your space, then make sure you do NOT use the standard basis when you write T(f_i) as a coordinate vector.
- Also, remember that everything you know from chapter 2 still applies. If dim(V) = n, what are the dimensions of [T]_B?
- If A and B are two different bases for the same space V, know how to find the change of basis matrix S = S_{A->B} satisfying S [f]_A = [f]_B, and know how to use this to find [f]_B, given [f]_A. Understand why S_{B->A} = (S_{A->B})^(-1).
- If T : V -> V is a linear transformation, understand how to compute [T]_B given [T]_A and S = S_{B->A}. Make sure you don't get S_{B->A} and S_{A->B} mixed up. Which side does each change of basis matrix go on? How can you remember?
- Know what it means for two matrices A and B to be similar, and why we sometimes say this means that A and B "represent the same function".

Orthogonality (5.1):

- Know what the dot product of two vectors v, w in R^n is, and know how to use this to compute the lengths of vectors, and angles between two vectors. In particular, understand why v and w are perpendicular (orthogonal) if and only if v · w = 0.
- Know what it means for a set of vectors u_1, ..., u_m in R^n to be orthonormal. Understand why an orthonormal set of vectors is automatically linearly independent, and why a set of n orthonormal vectors in R^n is automatically a basis.
- Understand why e_1, ..., e_n is an orthonormal basis for R^n. Are there other orthonormal bases? Will a basis of R^n usually be orthonormal?

- If u_1, ..., u_n is an orthonormal basis of R^n, and x is in R^n, know how to easily find the coordinates of x with respect to the basis u_1, ..., u_n (that is, find c_1, ..., c_n such that x = c_1 u_1 + ... + c_n u_n).
- If V is a subspace of R^n, understand what the orthogonal projection proj_V(x) of x in R^n onto V is. Know how to compute proj_V(x), if you are given an orthonormal basis u_1, ..., u_m for V.

The Gram-Schmidt Process (5.2):

- Understand why it is often important to find an orthonormal basis for a subspace of R^n.
- If v is a nonzero vector, understand how to find a unit vector u parallel to v.
- If u is a unit vector and w is any other vector, know how to find the constant k for which w' = w - k u is perpendicular to u.
- If (u, w) is a basis for a subspace V, why is (u, w') also a basis for V?
- If two vectors v and w form a basis for V, know how to use the above two bullet points to find an orthonormal basis for V.
- In general, if v_1, ..., v_m is a basis for V, know how to use the Gram-Schmidt process to find an orthonormal basis for V.
- Make sure you really understand how to do this process. If you simply try to memorize the formulas without understanding them, you will almost certainly get something wrong. Focus on understanding why the formula is what it is. For instance, ask yourself:
  - Why do we only turn the first vector into a unit vector at the start?
  - When we want to find v'_j we need to subtract off multiples of some other vectors. Which vectors are we subtracting, and why? How do we figure out what multiples of these vectors to use?
  - How do we know that v'_j is perpendicular to u_1, ..., u_{j-1}?
  - At which point do we turn each vector into a unit vector?
  - How do you know that u_1, ..., u_m is still a basis for the same space as v_1, ..., v_m?
- Make sure that you can actually do these computations. Don't just learn the formulas and think you'll be able to use them correctly on the test. Practice them!
- What would you do if I asked you to find an orthonormal basis for some subspace V of R^n, but didn't give you a basis to start with?

Orthogonal Transformations (5.3):

- Understand what the transpose of a matrix is. If A is an n × m matrix, what are the dimensions of A^T? How would you find A^T if you knew A? If A and B are matrices (of the appropriate dimensions), what is (A + B)^T? What is (kA)^T? (AB)^T? (A^T)^T?
- Understand why the dot product v · w can be thought of as the matrix product v^T w. What does this mean about (Av) · w, or (Av) · (Aw), where A is an n × n matrix?
- Know what it means for a linear transformation T : R^n → R^n to be orthogonal. If A is the matrix representing T, what must be true about A? If A is an orthogonal matrix (i.e. the matrix of an orthogonal transformation), what must be true about the columns of A? In terms of linear transformations, what must be true about the vectors T(e_1), T(e_2), ..., T(e_n) in order for T to be orthogonal?
- If V is a subspace of R^n, understand why the map T : R^n → R^n given by T(x) = proj_V(x) is linear. If u_1, ..., u_m is an orthonormal basis for V, know how to find the matrix representing T(x) = proj_V(x). Remember that this should be an n × n matrix.

Least Squares (5.4):

- If V is a subspace of R^n, what is the space V^⊥? How does this relate to proj_V? How does dim V^⊥ relate to dim V?
- If A is an m × n matrix, then A represents a linear transformation R^m → R^n. A^T also represents a linear transformation. Which spaces does it map between? Understand why ker(A^T) = (im A)^⊥. How can you use this observation to determine whether a vector v ∈ R^n is perpendicular to im A?
- If x ∈ R^n and x* = proj_{im A}(x), what can you say about A^T(x − x*)?
- What is the relationship between rank(A) and rank(A^T)? Understand why ker(A) = ker(A^T A). If A : R^m → R^n is injective, what can you say about A^T A?
- Understand why, in real life, it is often unreasonable to assume that the systems of equations you consider will be consistent. If a system of equations does not literally have a solution, what should you try to do?
- Understand what the least-squares solution to a system of equations Ax = b is. How is this different than simply asking for a solution to the equation? Remember, if you are trying to find the least-squares solution to Ax = b, then you are not actually trying to solve this equation. Therefore you CANNOT use techniques like Gauss-Jordan, or any of our other tricks, to solve it. DO NOT think of these as actual systems of equations.
- If x* is a least-squares solution to Ax = b, understand why x* is an actual solution to A^T A x = A^T b. Know how to use this to find least-squares solutions.
- When solving problems like this, make sure you remember the difference between A^T A and A A^T; these are very different matrices. This can be a little confusing, as both of these products do show up, in different contexts (one for finding least-squares solutions, the other for finding matrices of orthogonal projections). If you get confused, try to think about whether you want an m × m matrix or an n × n matrix.

Determinants (6.1):

- Understand why the determinant of a 2 × 2 matrix A = [a b; c d], det A = ad − bc, determines whether A is invertible. What does this mean geometrically?
- If A is a 3 × 3 matrix with columns u, v and w, understand why it is reasonable to define det A to be u · (v × w). Why is this zero if and only if A is invertible? What does this mean geometrically?
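The least-squares recipe, and the distinction between the two transpose products, can be checked on a small hypothetical example. Note that numpy follows the usual matrix convention (a 3 × 2 matrix maps R^2 → R^3); the particular A and b below are illustrative:

```python
import numpy as np

# An inconsistent system A x = b (three equations, two unknowns):
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])

# Least squares: solve the normal equations  A^T A x = A^T b.
x_star = np.linalg.solve(A.T @ A, A.T @ b)

# A x* is the orthogonal projection of b onto im(A), so the residual
# b - A x* is perpendicular to im(A):  A^T (b - A x*) = 0.
residual = b - A @ x_star
assert np.allclose(A.T @ residual, 0.0)

# Projection matrices use the OTHER product: with Q holding an
# orthonormal basis of a subspace in its columns, the projection is Q Q^T.
Q, _ = np.linalg.qr(A)        # orthonormal basis for im(A)
P = Q @ Q.T                   # 3 x 3 matrix of projection onto im(A)
assert np.allclose(P @ b, A @ x_star)   # same projection, two routes
```

Here A^T A is the small (2 × 2) matrix used for the normal equations, while Q Q^T is the big (3 × 3) projection matrix, matching the m × m versus n × n hint above.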
- Know how to compute the determinant of a 3 × 3 matrix. The formula for this is quite complicated, but there is an easy way to remember it.
- Know how to generalize the 3 × 3 case to an n × n matrix. Computing the determinant of an n × n matrix involves taking a bunch of products of n entries. How can you tell which products to include, and which not to include? For each such product, how do you figure out if you should add or subtract it?
- If a matrix has a lot of zeros, do you necessarily consider all patterns when computing the determinant? How do you figure out which ones you need to consider?
- How do you find the determinant of an upper triangular matrix?

Properties of determinants (6.2):

- Understand what it means to say that the determinant is linear in each of its rows and columns. Why is this true?
- Know what happens to the determinant of a matrix A when you:
  - Switch two rows of A.
  - Multiply the i-th row of A by k.
  - Add k times the j-th row to the i-th row.
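These row-operation effects (and the triangular-matrix shortcut) are easy to verify numerically. A quick sketch with one hypothetical 3 × 3 matrix:

```python
import numpy as np

A = np.array([[ 2.0, 1.0, 1.0],
              [ 4.0, 1.0, 0.0],
              [-2.0, 2.0, 1.0]])
d = np.linalg.det(A)

# Swapping two rows flips the sign of the determinant:
B = A[[1, 0, 2]]
assert np.isclose(np.linalg.det(B), -d)

# Multiplying a row by k multiplies the determinant by k:
C = A.copy(); C[0] *= 3.0
assert np.isclose(np.linalg.det(C), 3.0 * d)

# Adding a multiple of one row to another leaves it unchanged
# (this is why Gauss-Jordan is a fast way to compute determinants):
D = A.copy(); D[2] += 5.0 * D[0]
assert np.isclose(np.linalg.det(D), d)

# Determinant of an upper triangular matrix = product of the diagonal:
U = np.triu(A)
assert np.isclose(np.linalg.det(U), np.prod(np.diag(U)))
```

The third property is the key one: row reduction to triangular form only changes the determinant in ways you can track, and the triangular determinant is then read off the diagonal.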

- Understand how the above three properties allow you to find the determinant of any matrix by using Gauss-Jordan. Usually, this will be the fastest way to find a determinant.
- If two rows of a matrix A are equal, what can you say about det A?
- Understand what effect applying a matrix A : R^n → R^n has on the volume of some object in R^n. What does this have to do with the relation det(AB) = (det A)(det B)?
- What does the fact that det(AB) = det(A) det(B) imply about det(A^{-1})? det(A^m)? det(S^{-1}AS)?
- Understand how we can define the determinant of a linear transformation T : V → V. Why does this not depend on the choice of basis?
- Understand why det(A^T) = det(A). What does this imply about the determinant of an orthogonal matrix?

Eigenvalues and Eigenvectors (7.1):

- Understand why the standard basis is often not the best basis to use when working with some matrix. For instance, is it (usually) easy to find A^1000? Would it be easier if A were diagonal?
- Know what it means to say that v is an eigenvector of A, and that λ is the corresponding eigenvalue. If v is an eigenvector with eigenvalue 0, what is Av?
- Know what an eigenbasis for A is. If B is an eigenbasis for A, understand why the matrix [A]_B is diagonal.
- Understand why A has an eigenbasis if and only if it is diagonalizable, that is, if there is some matrix S for which S^{-1}AS is diagonal. How does S relate to the eigenbasis? What are the entries on the diagonal of S^{-1}AS?
- Know how to simplify (S^{-1}AS)^t, and how to use this to compute A^t, when A is diagonalizable (but not necessarily diagonal).
- Know why some matrices, such as [1 1; 0 1] or [0 −1; 1 0], are NOT diagonalizable.
- Know how to find the eigenvalues and eigenvectors of a geometrical transformation. What are the eigenvalues/eigenvectors of a reflection? An orthogonal projection? A rotation?

Computing Eigenvalues (7.2):

- Understand why λ is an eigenvalue of A if and only if ker(λI_n − A) ≠ {0}.
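The payoff of diagonalization, computing A^t cheaply, can be sketched numerically. A hypothetical diagonalizable (but not diagonal) matrix:

```python
import numpy as np

# A diagonalizable (but not diagonal) matrix, chosen as an illustrative example:
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of S form an eigenbasis; D has the eigenvalues on its diagonal.
eigvals, S = np.linalg.eig(A)
D = np.diag(eigvals)

# S^{-1} A S is diagonal:
assert np.allclose(np.linalg.inv(S) @ A @ S, D)

# Since A = S D S^{-1}, we get A^t = S D^t S^{-1}, and D^t is cheap
# (just raise each diagonal entry to the t-th power):
t = 10
A_t = S @ np.diag(eigvals ** t) @ np.linalg.inv(S)
assert np.allclose(A_t, np.linalg.matrix_power(A, t))
```

Everything collapses in (S D S^{-1})^t because the inner S^{-1} S pairs cancel, leaving S D^t S^{-1}; that is the whole reason eigenbases make large powers easy.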
- Understand why this means that λ is an eigenvalue of A if and only if det(λI_n − A) = 0. Know how to use this to find all of the eigenvalues of a matrix, by just finding the roots of a polynomial.
- Know how to find the characteristic polynomial of a matrix. This is just finding a determinant, but it may be difficult to use Gauss-Jordan here (why?). This is a situation where you would likely want to use a different method of finding a determinant (such as our explicit formulas for n = 2 or n = 3, or looking at patterns for larger matrices).
- Know what the algebraic multiplicity of an eigenvalue is, and how to find it. Understand why an n × n matrix can have at most n eigenvalues.
- If A is upper triangular, how can you easily find all of the eigenvalues, together with their algebraic multiplicities?

Finding Eigenvectors (7.3):

- If λ is an eigenvalue of a matrix A, know how to find all eigenvectors of A corresponding to λ. This is essentially just solving a system of equations (which system?). How do you know that there has to be a nonzero solution to that system?
- Know what the eigenspace E_λ of a matrix A is. How does this relate to the eigenvectors of A? What is the geometric multiplicity of an eigenvalue? How do these relate to finding an eigenbasis for A?
- Understand how you can find the geometric multiplicity of an eigenvalue by finding the rank of a matrix.
- Understand why ge. mu.(λ) ≤ al. mu.(λ) for any λ. If p_A(λ) has n real roots (with multiplicity), what must be true about al. mu.(λ) and ge. mu.(λ) in order for A to be diagonalizable? If al. mu.(λ) = 1, is it necessary to determine ge. mu.(λ)? In particular, if p_A(λ) has n distinct real roots, why must A be diagonalizable?

Eigenvalues and Eigenvectors of Linear Transformations (7.2, 7.3):

- Understand why the matrices A and S^{-1}AS (for any invertible S) have the same eigenvalues, characteristic polynomial, and algebraic and geometric multiplicities. Do they have the same eigenspaces? What does this mean in terms of linear transformations and bases? Understand why we say that this means that the above quantities depend only on the linear transformation, not the choice of basis.
- Understand why the product of the eigenvalues of a matrix is equal to its determinant. Why is this obvious for a diagonal matrix? Why does that imply it must also be true for a diagonalizable matrix?
- Know what the trace of a matrix is, and understand why it is equal to the sum of the eigenvalues. Why does this mean that the trace of A doesn't depend on the choice of basis?
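Several of the facts above can be checked in a few lines. A sketch with a hypothetical example (an upper-triangular, non-diagonalizable matrix for the multiplicity check, plus a similarity transform for the invariance check):

```python
import numpy as np

# Upper triangular, so the eigenvalues are the diagonal entries:
# λ = 2 with algebraic multiplicity 2, λ = 3 with algebraic multiplicity 1.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
n = A.shape[0]

# Geometric multiplicity of λ is dim ker(λI - A) = n - rank(λI - A):
geo_2 = n - np.linalg.matrix_rank(2.0 * np.eye(n) - A)
assert geo_2 == 1   # ge. mu.(2) = 1 < 2 = al. mu.(2), so A is NOT diagonalizable

# Trace = sum of the eigenvalues, determinant = their product:
eigvals = np.linalg.eigvals(A)
assert np.isclose(eigvals.sum(), np.trace(A))        # 2 + 2 + 3 = 7
assert np.isclose(eigvals.prod(), np.linalg.det(A))  # 2 * 2 * 3 = 12

# Both quantities are invariant under similarity (change of basis):
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
B = np.linalg.inv(S) @ A @ S
assert np.isclose(np.trace(B), np.trace(A))
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
```

The rank computation is exactly the "find the geometric multiplicity by finding the rank of a matrix" step described above.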