EXAM 2 REVIEW DAVID SEAL


3. Linear Systems and Matrices

3.2. Matrices and Gaussian Elimination. At this point in the course, you have all had plenty of practice with Gaussian elimination. Be able to row reduce any matrix you're given. My advice to you concerning not making mistakes is the following:

(1) Avoid fractions! Use least common multiples rather than deal with them. Computers don't make mistakes adding fractions; we do.

(2) Take small steps. When I do these problems, you will never see me write down the operation 2R1 + 3R2. Instead I would break this up into three steps: 1.) R2 = 3R2, 2.) R1 = 2R1, 3.) R2 = R1 + R2. In fact you can kind of cheat and do operations 1 and 2 at once. This isn't an elementary row operation, but it doesn't hurt to do that.

(3) Use back substitution after you have your matrix in upper triangular form. You don't need your matrix in reduced row echelon form to find a solution, just row echelon form.

What I mean by this third piece of advice is the following. Suppose you've already performed the following row operations:

    [ A | b ]  --row ops-->  [ 4  2  0 | 0 ]
                             [ 0  6  1 | 1 ]
                             [ 0  0  2 | 2 ]

This is now in row echelon form, but not reduced row echelon form (see next section), because the off-diagonal entries are nonzero. But as far as we care, this is enough information to solve for all the variables. Write down the equation described by the 3rd row: 2z = 2, so z = 1. Then write down the equation described by the 2nd row: 6y + z = 1, and solve for y. For practice see problems 11, 12 and 23, 24. Practice enough of 11-18 until you don't have to think about doing these problems; it just becomes mechanical.

3.3. Reduced Row-Echelon Matrices. The most important theorem from this section is

Theorem 1 (The Three Possibilities Theorem). Every linear system has either a unique solution, infinitely many solutions, or no solutions.

Date: Updated: March 29, 2009.
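The back-substitution step can be sketched in a few lines of Python. This is a generic routine for an upper-triangular augmented system, run here on a system consistent with the example above (2z = 2, 6y + z = 1; the first row's entries are illustrative). Exact `Fraction` arithmetic is used, in the spirit of avoiding arithmetic slips:

```python
from fractions import Fraction

def back_substitute(U):
    """Solve an upper-triangular augmented system: each row of U holds the
    coefficients followed by the right-hand-side entry. Assumes nonzero
    diagonal entries, i.e. a unique solution."""
    n = len(U)
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):  # start from the last equation
        rhs = U[i][n] - sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = rhs / U[i][i]
    return x

# Rows: 4x + 2y = 0 (illustrative), 6y + z = 1, 2z = 2.
U = [[Fraction(4), Fraction(2), Fraction(0), Fraction(0)],
     [Fraction(0), Fraction(6), Fraction(1), Fraction(1)],
     [Fraction(0), Fraction(0), Fraction(2), Fraction(2)]]
print(back_substitute(U))  # z = 1 first, then y, then x
```

Solving the last equation first and working upward is exactly the "don't bother with reduced row echelon form" shortcut described above.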

Exercise: If A is an n × n matrix, what are the possible outcomes of solving A x = 0? Hint: What could A's reduced row echelon form look like?

Know how to put a matrix in reduced row echelon form. Usually if you're interested in solving an equation you don't do this, but this vocabulary term could come up and you should know it. For practice try problems 1-4.

3.4. Matrix Operations. Know if and when multiplication/addition is defined for matrices and how to do it. These are easy problems to do, but in case they show up on the exam you want to make sure you don't make any arithmetic mistakes! I'm a bit reluctant to suggest problems from this section, but you should be able to do any of 1-16 with your eyes closed. See problems 17, 18, 19. Do these look familiar now? Can you produce a vector or vectors that uniquely describe the kernel of a particular matrix for each of these problems? What are the dimensions of these linear subspaces? (Hint: the dimension is the number of free variables that are necessary to describe the space.)

3.5. Inverses of Matrices. The important definition:

Definition. An n × n matrix A (square matrices only!) is said to be invertible if there exists a matrix B such that AB = BA = I, where I is the n × n identity matrix.

Fact. Inverses, when they exist, are unique. This is a very nice feature to have when you define something! You wouldn't want to get in a fight with your friend over who found the better inverse. We denote the (unique) inverse matrix of A by A^(-1).

You DO NOT need to be prepared to find the inverse of a matrix using row operations! Prof. Bertrand said this type of problem is too long for an exam situation. One problem you might run into is: given two matrices, are they inverses of each other? To check this you need to know how to multiply two matrices together and the definition of the inverse. The following problem is short enough to appear on an exam.

Exercise: If A and B are invertible matrices, is AB an invertible matrix?
If so, what's the inverse matrix? Can you prove this?

3.6. Determinants. We'll first begin with an extremely important fact. If A is a square matrix, then A is invertible if and only if det(A) ≠ 0. You can add this to Theorem 7 given on page 193.

See your notes from discussion section about what I suggested on how to go about finding the determinant of a matrix. You can always do this through cofactor expansion; the thing to keep in mind is the checkerboard pattern that shows up when evaluating the sign of each coefficient. See problems 1, 2. I don't imagine you'll be asked to evaluate the determinant of any large (i.e., larger than 4 × 4) matrix.

One other trick to keep in mind is what sort of row operations you can do to a matrix when evaluating the determinant. See property 5 listed in the text. This says you're allowed to add a multiple of any row to another row, and this doesn't change the determinant. You have to be very careful not to misuse this property! It doesn't mean you can arbitrarily multiply a row by a number like we're used to doing with systems of equations. See problems 7, 9.
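Cofactor expansion with the checkerboard signs, and property 5 (adding a multiple of one row to another leaves the determinant unchanged), can both be sketched in a few lines. The 3 × 3 matrix below is a made-up example, not one from the text:

```python
def det(A):
    """Determinant via cofactor expansion along the first row; the
    (-1)**j factor is the alternating 'checkerboard' sign pattern."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]  # delete row 0, column j
        total += (-1) ** j * A[0][j] * det(minor)
    return total

A = [[1, 2, 0], [3, 1, 4], [0, 2, 2]]
print(det(A))  # -18: nonzero, so A is invertible

# Property 5: the operation R2 = R2 + 2*R1 does not change the determinant.
A2 = [row[:] for row in A]
A2[1] = [a + 2 * b for a, b in zip(A2[1], A2[0])]
print(det(A2))  # -18 again
```

Multiplying a row by a scalar, by contrast, would multiply the determinant by that scalar, which is exactly the misuse warned about above.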

Exercise: If A is an n × n matrix, express det(λA) in terms of λ and the matrix A. Hint: The answer is not λ det(A).

4. Vector Spaces

4.2. The Vector Space R^n and Subspaces. If we have a vector space V and a subset W ⊆ V, a natural question to ask is whether or not W itself forms a vector space. This means it needs to satisfy all the properties of a vector space that are listed on page 236 of your text. The bad news is that this is quite a long list, but the good news is we don't have to check every property on the list, because most of them are inherited from the original vector space V. In short, in order to see if W is a vector space, we need only check if W passes the following test.

Theorem 2. If V is a vector space and W ⊆ V is a non-empty subset, then W itself is a vector space if and only if it satisfies the following two conditions:
(1) Additive Closure: If a ∈ W and b ∈ W, then a + b ∈ W.
(2) Multiplicative Closure: If λ ∈ R and a ∈ W, then λa ∈ W.

The statement of this theorem has the term "non-empty" as one hypothesis for the theorem to be true. In most applications of this theorem, we actually replace the statement "non-empty" with requiring that 0 ∈ W. I.e., the theorem above is equivalent to the following theorem.

Theorem 3. If V is a vector space and W ⊆ V, then W itself is a vector space if and only if it satisfies the following three conditions:
(1) Additive Closure: If a ∈ W and b ∈ W, then a + b ∈ W.
(2) Multiplicative Closure: If λ ∈ R and a ∈ W, then λa ∈ W.
(3) Non-Empty: 0 ∈ W.

Note that the two closure conditions are properties on the long laundry list of properties we require for a set to be a vector space.

Example. Consider W := {a = (x, y) ∈ R^2 : x = 2y}. Since W ⊆ R^2, we may be interested in whether W itself forms a vector space. To answer this question we need only check two items:
(1) Additive Closure: An arbitrary element in W can be described by (2y, y) where y ∈ R. Let (2y, y), (2z, z) ∈ W. Then (2y, y) + (2z, z) = (2y + 2z, y + z) ∈ W since 2y + 2z = 2(y + z).
(2) Multiplicative Closure: We need to check that if λ ∈ R and a ∈ W, then λa ∈ W. Again, an arbitrary element in W can be described by (2y, y) where y ∈ R. Let λ ∈ R and (2y, y) ∈ W. Then λ(2y, y) = (2λy, λy) ∈ W since the first coordinate is exactly twice the second coordinate.

Note: it is possible to write this set as the kernel of a matrix. In fact, you can check that W = ker(A), where A = [1  -2] is a 1 × 2 matrix. We actually have a theorem that says the kernel of any matrix is indeed a linear subspace.

Example. Consider W := {a = (x, y, z) ∈ R^3 : z ≥ 0}. In order for this to be a linear subspace of R^3, it needs to pass two tests. In fact, this set passes the additive closure test, but it doesn't pass multiplicative closure! For example, (0, 0, 5) ∈ W, but (-1) · (0, 0, 5) = (0, 0, -5) ∉ W.
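These closure tests can be spot-checked numerically: a single failing example disproves closure, though passing examples alone don't prove it (that takes the algebra above). A minimal sketch, using the two sets from the examples:

```python
def in_W1(v):
    """W1 = {(x, y) in R^2 : x = 2y} -- a genuine subspace."""
    return v[0] == 2 * v[1]

def in_W2(v):
    """W2 = {(x, y, z) in R^3 : z >= 0} -- NOT a subspace."""
    return v[2] >= 0

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    return tuple(c * x for x in v)

# W1: spot checks of both closure conditions pass.
print(in_W1(add((4, 2), (6, 3))))    # True: (10, 5) satisfies x = 2y
print(in_W1(scale(-7, (4, 2))))      # True: (-28, -14) satisfies x = 2y

# W2: multiplicative closure fails on a single counterexample.
print(in_W2((0, 0, 5)))              # True
print(in_W2(scale(-1, (0, 0, 5))))   # False: (0, 0, -5) has z < 0
```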

Definition. If A is an m × n matrix, we define ker(A) := {x ∈ R^n : Ax = 0}. This is also called the nullspace of A. Note that ker(A) lives in R^n.

Definition. If A is an m × n matrix, we define Image(A) := {y ∈ R^m : Ax = y for some x ∈ R^n}. This is also called the range of A. Note that Image(A) lives in R^m.

Theorem 4. If A is an m × n matrix, then ker(A) is a linear subspace of R^n and Image(A) is a linear subspace of R^m.

Exercise: Show this theorem is true.

In light of section 4.4, I think likely candidates from this section will be proving a set of elements is not a vector space. For example, see problems 7, 9, 10, 13. Problems 15-22 serve as excellent practice leading up to section 4.4, but you should get the gist after doing problems from that section.

4.3. Linear Combinations and Independence of Vectors. If we have a collection of vectors {v_1, v_2, ..., v_k}, we can form many vectors by taking linear combinations of them. We call this space the span of the collection of vectors, and we have the following theorem:

Theorem 5. If {v_1, v_2, ..., v_k} is a collection of vectors in some vector space V, then span{v_1, v_2, ..., v_k} := {w : w = c_1 v_1 + c_2 v_2 + ... + c_k v_k for some scalars c_i ∈ R} is a linear subspace of V.

For a concrete example, we can take two vectors v_1 = (1, 1, 0) and v_2 = (1, 0, 0), which both lie in R^3. Then the set W = span{(1, 1, 0), (1, 0, 0)} describes a plane that lives in R^3. This set is a linear subspace by the previous theorem. In fact, we can be a bit more descriptive and write W = {(x, y, z) ∈ R^3 : z = 0}.

If we continue with this example, it is possible to write W in many other ways. In fact, we could have written W = span{(1, 1, 0), (1, 0, 0), (-5, 1, 0)} = span{(-1, 1, 0), (2, 1, 0)}. These examples illustrate the fact that our choice of vectors need not be unique. What is unique is the least number of vectors that are required to describe the set.
In fact this is so important that we give it a name, and call it the dimension of a vector space. This is the content of section 4.4. In our example, dim(W) = 2, but right now we don't have enough tools to show this. In order to make this statement precise, we need to introduce the following important definition.

Definition. Vectors {v_1, v_2, ..., v_k} are said to be linearly independent if, whenever c_1 v_1 + c_2 v_2 + ... + c_k v_k = 0 for some scalars c_i, it must follow that c_i = 0 for each i.

OK, so definitions are all fine and good, but how do we check if vectors are linearly independent? The nice thing about this definition is that it always boils down to solving a linear system.

Example. As a concrete example, let's check if the vectors {v_1, v_2} are linearly independent, where v_1 = (4, 2, 6, 4) and v_2 = (2, 6, 1, 4).

We need to solve the problem c_1 v_1 + c_2 v_2 = 0. This reduces to asking what the solutions are to

    c_1 (4, 2, 6, 4) + c_2 (2, 6, 1, 4) = (0, 0, 0, 0).

We can write this problem as a matrix equation A c = 0, where

    A = [ 4  2 ]        c = [ c_1 ]
        [ 2  6 ]            [ c_2 ]
        [ 6  1 ]
        [ 4  4 ]

and solve it using Gaussian elimination:

    [ 4  2 ]                 [ 1  0 ]
    [ 2  6 ]   --row ops-->  [ 0  1 ]
    [ 6  1 ]                 [ 0  0 ]
    [ 4  4 ]                 [ 0  0 ]

Thus c_1 = c_2 = 0 is the only solution to this problem, and so these two vectors are linearly independent.

To demonstrate that a collection of vectors is not linearly independent, it suffices to find a non-trivial combination of the vectors that sums to 0. For example, see Example 6 in the textbook.

We have another method for checking whether vectors are linearly independent. This method is more complicated to apply, so I encourage you to become familiar with using Gaussian elimination (row operations) for checking independence. But to be complete, I'll include this other method as well. Essentially what happened in the last problem was that we were able to do row operations to a matrix and get the identity matrix in a part of it. Being row equivalent to the identity matrix is equivalent to being invertible, and this is equivalent to having a non-zero determinant. What makes this tricky is keeping track of which part of which matrix we are considering, because determinants are only defined for square matrices. We'll do the special case first:

Theorem 6 (Independence of n Vectors in R^n). The n vectors {v_1, v_2, ..., v_n} are linearly independent if and only if det(A) ≠ 0, where (as usual) A = [v_1 v_2 ... v_n] is the square n × n matrix given by putting in the vectors v_i as columns.

Note: this theorem currently only applies to a collection of n vectors in R^n, but we can easily extend it to other collections of vectors. We'll do this in a minute. What this theorem doesn't give us is the combination that sums to 0. I do think seeing a proof of this is instructive.
It tells you why we even care about taking determinants here, as well as being a good illustration of how one goes about checking for independence.
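Theorem 6 translates directly into code: place the vectors as columns of a square matrix and test whether the determinant is nonzero. The vectors below are made-up examples, and the determinant is computed by cofactor expansion as in section 3.6:

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

def independent(vectors):
    """Theorem 6: n vectors in R^n are linearly independent iff the
    matrix with those vectors as columns has nonzero determinant."""
    A = [list(col) for col in zip(*vectors)]  # transpose: vectors as columns
    return det(A) != 0

print(independent([(1, 1, 0), (1, 0, 0), (0, 0, 1)]))   # True
print(independent([(1, 1, 0), (1, 0, 0), (2, 1, 0)]))   # False: third = first + second
```

As the theorem's statement warns, this test only says yes or no; when the answer is "dependent," it does not produce the coefficients, which is why row reduction remains the primary tool.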

Proof (If det(A) ≠ 0, then the vectors are independent): Suppose c_1 v_1 + ... + c_n v_n = 0 for some scalars c_i. Then as usual, this leads to solving the system A c = 0, where

    A = [v_1 v_2 ... v_n],   c = (c_1, c_2, ..., c_n).

Since det(A) ≠ 0, we know A is invertible, which also means A is row equivalent to the identity matrix. This means we can do row operations to turn the system into:

    [ A | 0 ] = [ v_1 v_2 ... v_n | 0 ]  --row ops-->  [ I | 0 ].

Evidently c_1 = c_2 = ... = c_n = 0, so the vectors are linearly independent.

To summarize: Know the definition of linear independence. Know that checking for independence always results in asking what the solutions are to the equation A c = 0. If c = 0 is the only solution, then the vectors are independent; if there's a non-zero c that solves this, they are dependent. Finding c gives you the coefficients c_1, c_2, ... that demonstrate linear dependence. Gaussian elimination (row operations) is a method that will give you the coefficients.

Exercise: Is the set of vectors {(0, 0, 1), (1, 0, 1), (0, 0, 0)} linearly independent?

Exercise: Is the set of vectors {(1, 1, 1), (1, 5, 1), (-1, 17, 0), (0, 1, 0)} linearly independent? Hint: There is a one-line solution, or you can write down the full linear system and solve it.

5. Bases and Dimension for Vector Spaces

We'll begin with the major definition of the section.

Definition. A collection of vectors {v_1, v_2, ..., v_n} is a basis for a vector space V if it satisfies:
(1) {v_1, v_2, ..., v_n} are linearly independent.
(2) V = span{v_1, v_2, ..., v_n}.

Fact: The number of vectors in any basis for a finite dimensional vector space is unique. We define dim(V) = n, where n is the number of vectors in a basis.

Theorem 7. If dim(V) = n, and S = {v_1, v_2, ..., v_n} is a collection of n linearly independent vectors, then S is a basis for V.

Fact: dim(R^n) = n. This is a deep, non-trivial result that depends on the previous theorem! Why is this true?
For R^3, we have the basis S = {(1, 1, 1), (1, 1, 0), (1, 0, 0)}, and there are 3 vectors here.

I'm guessing a problem very similar to your homework problems will show up with high probability. All of your homework problems asked you to find a basis for the kernel of a matrix. Know how to do this.
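The kernel-basis computation can be sketched end to end: row reduce to reduced row echelon form, then write down one basis vector per free (non-pivot) column. This is a minimal sketch with exact `Fraction` arithmetic; the matrix A below is a hypothetical example, not one from the homework:

```python
from fractions import Fraction

def rref(A):
    """Reduced row echelon form via Gauss-Jordan; also returns pivot columns."""
    A = [[Fraction(x) for x in row] for row in A]
    pivots, r = [], 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column: it is a free column
        A[r], A[piv] = A[piv], A[r]            # swap the pivot row up
        A[r] = [x / A[r][c] for x in A[r]]     # scale pivot to 1
        for i in range(len(A)):
            if i != r and A[i][c] != 0:        # clear the rest of the column
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == len(A):
            break
    return A, pivots

def kernel_basis(A):
    """One basis vector of ker(A) per free column: set that free variable
    to 1, the other free variables to 0, and read off the pivot variables."""
    R, pivots = rref(A)
    n = len(A[0])
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[free] = Fraction(1)
        for row, p in zip(R, pivots):
            v[p] = -row[free]   # pivot variable expressed via the free one
        basis.append(v)
    return basis

A = [[1, 2, 3], [0, 1, 1]]   # rank 2, so ker(A) is 1-dimensional in R^3
print(kernel_basis(A))
```

The number of basis vectors returned is the number of free variables, which is exactly the dimension of the kernel, tying this back to the hint in section 3.4.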

Practice problems 12-20 until you know how to do this!

Good Luck!