Second Midterm Exam, April 14, 2011: Answers

Similar documents
k is a product of elementary matrices.

1 Last time: determinants

18.06 Problem Set 3 Due Wednesday, 27 February 2008 at 4 pm in

LECTURES 14/15: LINEAR INDEPENDENCE AND BASES

REVIEW FOR EXAM II. The exam covers sections , the part of 3.7 on Markov chains, and

Matrices related to linear transformations

MATH 310, REVIEW SHEET 2

Math 308 Spring Midterm Answers May 6, 2013

LINEAR ALGEBRA REVIEW

Math 320, spring 2011 before the first midterm

Math 110 Linear Algebra Midterm 2 Review October 28, 2017

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Exam 2 Solutions. (a) Is W closed under addition? Why or why not? W is not closed under addition. For example,

Homework Set #8 Solutions

Graduate Mathematical Economics Lecture 1

Undergraduate Mathematical Economics Lecture 1

Linear Algebra, Spring 2005

MATH 304 Linear Algebra Lecture 10: Linear independence. Wronskian.

is Use at most six elementary row operations. (Partial

Chapter 3. Directions: For questions 1-11 mark each statement True or False. Justify each answer.

MATH 213 Linear Algebra and ODEs Spring 2015 Study Sheet for Midterm Exam. Topics

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

Instructions Please answer the five problems on your own paper. These are essay questions: you should write in complete sentences.

MAT 1302B Mathematical Methods II

Math 301 Test III. Dr. Holmes. November 4, 2004

Diagonalization. MATH 1502 Calculus II Notes. November 4, 2008

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2

(b) If a multiple of one row of A is added to another row to produce B then det(b) =det(a).

MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix.

EXAM 2 REVIEW DAVID SEAL

1 Last time: inverses

Chapter 2. Square matrices

Math 369 Exam #2 Practice Problem Solutions

Final Exam Practice Problems Answers Math 24 Winter 2012

2018 Fall 2210Q Section 013 Midterm Exam II Solution

MTH 464: Computational Linear Algebra

MA 265 FINAL EXAM Fall 2012

Spring 2015 Midterm 1 03/04/15 Lecturer: Jesse Gell-Redman

Math 215 HW #9 Solutions

ECON 186 Class Notes: Linear Algebra

The definition of a vector space (V, +, )

ANSWERS (5 points) Let A be a 2 2 matrix such that A =. Compute A. 2

Math 205, Summer I, Week 3a (continued): Chapter 4, Sections 5 and 6. Week 3b. Chapter 4, [Sections 7], 8 and 9

Linear independence, span, basis, dimension - and their connection with linear systems

Lecture 22: Section 4.7

Chapter 2 Notes, Linear Algebra 5e Lay

b for the linear system x 1 + x 2 + a 2 x 3 = a x 1 + x 3 = 3 x 1 + x 2 + 9x 3 = 3 ] 1 1 a 2 a

1 Matrices and Systems of Linear Equations

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

This MUST hold matrix multiplication satisfies the distributive property.

Linear Algebra: Lecture notes from Kolman and Hill 9th edition.

Linear Algebra (MATH ) Spring 2011 Final Exam Practice Problem Solutions

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

Second Exam. Math , Spring March 2015

Math 308 Midterm November 6, 2009

Worksheet for Lecture 15 (due October 23) Section 4.3 Linearly Independent Sets; Bases

Practice problems for Exam 3 A =

Row Space, Column Space, and Nullspace

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture

Review for Exam 2 Solutions

EA = I 3 = E = i=1, i k

Carleton College, winter 2013 Math 232, Solutions to review problems and practice midterm 2 Prof. Jones 15. T 17. F 38. T 21. F 26. T 22. T 27.

Review 1 Math 321: Linear Algebra Spring 2010

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

MATH 2030: EIGENVALUES AND EIGENVECTORS

Study Guide for Linear Algebra Exam 2

Linear Algebra. Introduction. Marek Petrik 3/23/2017. Many slides adapted from Linear Algebra Lectures by Martin Scharlemann

Math 416, Spring 2010 The algebra of determinants March 16, 2010 THE ALGEBRA OF DETERMINANTS. 1. Determinants

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008

1 Matrices and Systems of Linear Equations. a 1n a 2n

Linear Algebra Highlights

NAME MATH 304 Examination 2 Page 1

Example: 2x y + 3z = 1 5y 6z = 0 x + 4z = 7. Definition: Elementary Row Operations. Example: Type I swap rows 1 and 3

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

MATH 15a: Linear Algebra Practice Exam 2

Some Notes on Linear Algebra

Determinants: summary of main results

MAT 242 CHAPTER 4: SUBSPACES OF R n

Math 1553 Introduction to Linear Algebra

(a) II and III (b) I (c) I and III (d) I and II and III (e) None are true.

Designing Information Devices and Systems I Spring 2018 Lecture Notes Note 8

Introduction to Determinants

Linear Algebra, Summer 2011, pt. 2

Math 1060 Linear Algebra Homework Exercises 1 1. Find the complete solutions (if any!) to each of the following systems of simultaneous equations:

Lecture Notes in Linear Algebra

Math 265 Midterm 2 Review

Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur

MATH 221, Spring Homework 10 Solutions

Linear Equations in Linear Algebra

SCHOOL OF BUSINESS, ECONOMICS AND MANAGEMENT BUSINESS MATHEMATICS / MATHEMATICAL ANALYSIS

ANSWERS. Answer: Perform combo(3,2,-1) on I then combo(1,3,-4) on the result. The elimination matrix is

Determinants of 2 2 Matrices

22m:033 Notes: 3.1 Introduction to Determinants

1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true?

Topic 15 Notes Jeremy Orloff

Matrices. Chapter Keywords and phrases. 3.2 Introduction

MATH10212 Linear Algebra B Homework 6. Be prepared to answer the following oral questions if asked in the supervision class:

Solutions to Final Exam

Linear Algebra: Lecture Notes. Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway


Mathematics 340, Spring 2011, Lectures 1 and 2 (Wilson)
Second Midterm Exam, April 14, 2011: Answers

Problem 1 (… points)

(a) Consider the matrices [the three 2×2 matrices did not survive transcription], as vectors in the vector space M_22 of all 2×2 matrices. Do these matrices span M_22?

These three vectors cannot span M_22: We have seen (e.g. in one of the homework problems) that this is a space of dimension 4, so no set of fewer than 4 vectors could span it.

(b) Now use the matrices [four 2×2 matrices, also lost in transcription]. Are these four vectors in M_22 linearly independent?

You could write a lot solving equations, but it is also apparent, if you happen to look at them right, that the third matrix is the sum of the first and last. So one of the matrices is a linear combination of some of the others, so they cannot be linearly independent.

Problem 2 (… points)

Let S = {v_1, v_2, ..., v_k} be a set of k vectors in some vector space V. Prove that S is linearly dependent if and only if (at least) one of the vectors v_j can be written as a linear combination of all the other vectors in S.

Here we have to prove the fact that I used in answering problem 1(b)!

Assume the vectors in S are linearly dependent. That means there is some combination a_1 v_1 + a_2 v_2 + ... + a_k v_k = 0 giving the zero vector, without all of the coefficients a_i being zero. Hence in particular a_j ≠ 0 for some j. Rearranging the terms, a_j v_j = -(the sum of all the terms a_i v_i except for i = j). But since a_j ≠ 0 (and hence -a_j ≠ 0) we can multiply by 1/(-a_j) and get

    v_j = -(a_1/a_j) v_1 - (a_2/a_j) v_2 - ... - (a_k/a_j) v_k,

where the sum on the right includes terms for each v_i except v_j. So v_j can be written as a linear combination of the other vectors.

Now assume v_j can be written as a linear combination of the others, and we have to show the vectors are linearly dependent. We have v_j = a_1 v_1 + a_2 v_2 + ... + a_k v_k, where the right-hand sum extends over all subscripts from 1 to k except j and the coefficients a_i are some numbers. We can add -v_j to both sides and arrange the terms to get

    0 = a_1 v_1 + a_2 v_2 + ... + (-1) v_j + ... + a_k v_k,

where the coefficients are the same as above except that we now have a term (-1) v_j. This is a linear combination of the vectors in S that gives 0: Many of those coefficients might be zero, but the coefficient on v_j is definitely not zero, so this is a linear combination giving 0 with not all of the coefficients equal to zero, so the vectors are linearly dependent.
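Both parts of Problem 1, and the criterion proved in Problem 2, come down to a rank computation once each matrix is flattened into a coordinate vector in R^4. Here is a minimal sketch in NumPy; the exam's actual matrices were lost in the transcription, so the four matrices below are hypothetical stand-ins, rigged so that the third is the sum of the first and last, as in the answer to 1(b).

```python
# Rank test behind Problems 1 and 2 (hypothetical matrices, not the exam's).
import numpy as np

m1 = np.array([[1, 0], [0, 0]])
m2 = np.array([[0, 1], [0, 0]])
m4 = np.array([[0, 0], [1, 1]])
m3 = m1 + m4  # deliberately dependent on m1 and m4

# Flatten each 2x2 matrix into a coordinate vector in R^4 and stack them
# as the rows of a 4x4 matrix.
V = np.vstack([m.flatten() for m in (m1, m2, m3, m4)])

# dim(M_22) = 4, so: the set spans M_22 iff rank = 4, and it is linearly
# independent iff rank = (number of vectors).
rank = np.linalg.matrix_rank(V)
print(rank)                # 3 here
print(rank == V.shape[0])  # False: these four matrices are not independent
```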

Problem 3 (… points)

Let v_1, v_2, v_3, v_4, and v_5 be five vectors in R^3 (their entries were given on the exam but did not survive transcription). Find a basis for R^3 consisting of some of the vectors v_1, ..., v_5.

I will write the vectors as the columns of a 3×5 matrix A. Now row reduce A, getting either a Row Echelon Form A_R (there might be variations in this matrix) or the Reduced Row Echelon Form A_RR; the row-reduced matrices themselves were lost in transcription, but the RREF is used again in Problem 6 below. In either case we see that the first, third, and fourth columns contain leading entries, so we use the first, third, and fourth vectors v_1, v_3, and v_4 from the original set for our basis.

(This is a procedure you could use mechanically. You could also have seen that v_2 is twice v_1, so throw out v_2, and proceeded like that, discarding vectors that are linear combinations of the ones you don't throw out.)
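The pivot-column procedure is easy to automate. Below is a minimal sketch using SymPy; since the exam's five vectors were lost in the transcription, the vectors here are hypothetical stand-ins (with v2 = 2*v1, so that, as in the answer above, column 2 carries no pivot).

```python
# Pick basis vectors from a spanning list via the pivot columns of the RREF.
from sympy import Matrix

v1, v2, v3, v4, v5 = [1, 0, 1], [2, 0, 2], [0, 1, 1], [1, 1, 0], [3, 1, 2]
A = Matrix.hstack(*(Matrix(v) for v in (v1, v2, v3, v4, v5)))

R, pivot_cols = A.rref()   # pivot_cols lists the leading-entry columns
print(pivot_cols)          # (0, 2, 3): keep v1, v3, v4
basis = [A.col(j) for j in pivot_cols]
```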

Problem 4 (… points)

Let V be any vector space. Suppose for some vector u in V and some number c, c ⊙ u = 0_V, where ⊙ is whatever the scalar multiplication is on V and 0_V is whatever is the zero vector for the space V. Prove that either c = 0 (the real number 0) or u = 0_V.

(This is part (c) of a theorem from Chapter 4: parts (a) and (b) were that 0 ⊙ u = 0_V and c ⊙ 0_V = 0_V for any vector u and for any number c. You may use those facts in your proof for this problem if you find them helpful.)

Suppose c ⊙ u = 0_V and c ≠ 0: we need to show that in this case u = 0_V. Since c ≠ 0, there is a number c^(-1) such that c^(-1) c = 1. Multiply that number on both sides of our equation c ⊙ u = 0_V, getting c^(-1) ⊙ (c ⊙ u) = c^(-1) ⊙ 0_V. By an earlier part of this same theorem, c^(-1) ⊙ 0_V = 0_V. By the defining requirements for a vector space, c^(-1) ⊙ (c ⊙ u) = (c^(-1) c) ⊙ u, i.e. c^(-1) ⊙ (c ⊙ u) = 1 ⊙ u, and by another of the defining requirements for a vector space 1 ⊙ v = v for any vector v. Putting these pieces together,

    u = 1 ⊙ u = (c^(-1) c) ⊙ u = c^(-1) ⊙ (c ⊙ u) = c^(-1) ⊙ 0_V = 0_V,

and we are through.

Problem 5 (… points)

(a) Show that the set of vectors [a b; c d] in M_22 which satisfy a + b + c + d = 0 is a subspace of M_22.

It is clear from the way it is described that the set in question is a subset of M_22, so we just have to show (i) the set is not empty, (ii) it is closed under addition, and (iii) it is closed under scalar multiplication.

Since [0 0; 0 0] is in the set, the set is not empty. For (ii), suppose [a b; c d] and [e f; g h] are both in the set, i.e. a + b + c + d = 0 and e + f + g + h = 0. Their sum is [a+e b+f; c+g d+h], so to determine whether that sum is in our set we ask whether (a+e) + (b+f) + (c+g) + (d+h) is 0. Rearranging that as (a+b+c+d) + (e+f+g+h) = 0 + 0 = 0, we see the sum is in the set. For (iii) we ask a similar question: if r is any real number and [a b; c d] is in our set, i.e. a + b + c + d = 0, is r[a b; c d] = [ra rb; rc rd] in the set, i.e. is ra + rb + rc + rd = 0? But ra + rb + rc + rd = r(a + b + c + d) = r · 0 = 0, so the set is closed under scalar multiplication and we are through.

(b) Show that the set of vectors [a b; c d] in M_22 which satisfy a + b + c + d = 1 is not a subspace of M_22.

We could test this set against (i), (ii), and (iii) above. It is definitely not empty. But if we use a more sophisticated version of that test, not just checking to see whether some vector is in the set but specifically whether the zero vector of the space is in the set, we see that it fails: the zero vector in M_22 is [0 0; 0 0], and the sum 0 + 0 + 0 + 0 = 0, not 1, so the zero vector is not in the set and hence it cannot be a subspace. You could also check and find that it fails both (ii) and (iii), but finding that it does not contain the zero vector is probably easier.
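The subspace test in Problem 5 is a proof, not a computation, but a few numeric spot checks illustrate why the entry-sum-zero condition survives addition and scaling, while the entry-sum-one condition already fails the zero-vector test. The sample matrices below are arbitrary choices, not from the exam.

```python
# Numeric illustration (not a proof) of Problem 5's subspace tests.
import numpy as np

def entry_sum(m):
    return float(m.sum())

x = np.array([[1.0, 2.0], [-1.5, -1.5]])  # entry sum 0
y = np.array([[3.0, -3.0], [0.5, -0.5]])  # entry sum 0
print(entry_sum(x + y))    # 0.0: closed under addition
print(entry_sum(2.5 * x))  # 0.0: closed under scalar multiplication

zero = np.zeros((2, 2))
print(entry_sum(zero) == 1.0)  # False: zero matrix fails a + b + c + d = 1
```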

Problem 6 (… points)

Let A = [the same 3×5 matrix that appeared in Problem 3; its entries were lost in transcription].

(a) Find a basis for the null space of A, the space of solutions of A x = 0.

In the answer to Problem 3 I found the reduced row echelon form of this matrix. Reading off its nonzero rows, the solution space is the set of solutions of x_1 + 2x_2 + 3x_5 = 0, x_3 - x_5 = 0, and x_4 + 4x_5 = 0. Those can be rewritten as x_1 = -2x_2 - 3x_5, x_3 = x_5, and x_4 = -4x_5. We can choose any arbitrary values for x_2 and x_5 and use those equations to determine values for x_1, x_3, and x_4, giving as solutions

    (x_1, x_2, x_3, x_4, x_5) = (-2x_2 - 3x_5, x_2, x_5, -4x_5, x_5),

or equivalently

    x_2 (-2, 1, 0, 0, 0) + x_5 (-3, 0, 1, -4, 1).

So a basis for the solution space is the set of two vectors (-2, 1, 0, 0, 0) and (-3, 0, 1, -4, 1).

(b) What is the rank of A?

You could look at the row echelon form and read off the number of non-zero rows, 3. Or you could use the fact that the rank plus the dimension of the solution space must equal the number of columns: from (a) we know the dimension of the solution space is 2, and there are 5 columns, so the rank must be 5 - 2 = 3.
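Problem 6 can be replayed in SymPy. The sketch below reuses the hypothetical 3×5 matrix from the Problem 3 sketch above (the exam's actual matrix was lost), computes a null space basis, and confirms the rank-nullity bookkeeping used in part (b).

```python
# Null space basis and rank-nullity check for a 3x5 matrix.
from sympy import Matrix

A = Matrix([[1, 2, 0, 1, 3],
            [0, 0, 1, 1, 1],
            [1, 2, 1, 0, 2]])

null_basis = A.nullspace()  # column vectors spanning {x : Ax = 0}
print(len(null_basis))      # 2 free variables -> nullity 2
print(A.rank())             # 3
print(A.rank() + len(null_basis) == A.cols)  # True: rank + nullity = columns
```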

Problem 7 (… points)

For the vector space V = P_2 of polynomials with degree at most 2:

(1) The set B = {1 + t, 1 + t^2, t^2} is a basis for V.
(2) The set B' = {t, 1 + t, 1 + t^2} is also a basis for V.

(a) Find the change of basis matrix (what the book denotes P_{B'→B}) that tells how coordinates with respect to B' change to become coordinates with respect to B.

We find the coordinates of each vector in B' with respect to B, and write those as the columns of a matrix. For the first vector in B', t, we could either set up equations and solve them or just think it through. I see that I could get 1 as (1 + t^2) - t^2, and then t as (1 + t) - 1 = (1 + t) - [(1 + t^2) - t^2]. So t = (1)(1 + t) + (-1)(1 + t^2) + (1)(t^2), i.e. the coordinates of t with respect to B are 1, -1, and 1, or as a column vector (1, -1, 1)^T.

Next we find the coordinates of 1 + t with respect to B: since the first vector in B is exactly 1 + t, the coordinates are 1, 0, and 0, or (1, 0, 0)^T. Similarly, 1 + t^2 is exactly the second vector in B, so its coordinates are 0, 1, and 0, or (0, 1, 0)^T. So the matrix we want is

    P_{B'→B} = [  1  1  0 ]
               [ -1  0  1 ]
               [  1  0  0 ]

(b) Find the coordinates of 1 + t - t^2 with respect to B'.

Reasoning as before (you could instead set this up as a system of equations to be solved): 1 = (1 + t) - t, and t^2 = (1 + t^2) - 1 = (1 + t^2) - [(1 + t) - t] = (1 + t^2) - (1 + t) + t. So

    1 + t - t^2 = (1 + t) - t^2 = (1 + t) - [(1 + t^2) - (1 + t) + t] = (-1)(t) + (2)(1 + t) + (-1)(1 + t^2),

and hence the coordinates as a vector are (-1, 2, -1)^T.

(c) Find the coordinates of 1 + t - t^2 with respect to B.

This time the calculations are simpler: 1 + t and t^2 are themselves basis vectors in B, so we can see 1 + t - t^2 as (1)(1 + t) + (0)(1 + t^2) + (-1)(t^2), i.e. the coordinates as a vector are (1, 0, -1)^T.

(d) As a check on your work, multiply the matrix you got in (a) on the left of the coordinate vector resulting from (b) and show that you get the vector corresponding to (c).

If we multiply, we do get

    [  1  1  0 ] [ -1 ]   [  1 ]
    [ -1  0  1 ] [  2 ] = [  0 ]
    [  1  0  0 ] [ -1 ]   [ -1 ]
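Problem 7 is fully checkable by identifying a polynomial c_0 + c_1 t + c_2 t^2 with the vector (c_0, c_1, c_2). The sketch below rebuilds the change-of-basis matrix from part (a) and reproduces the check in part (d); all the numbers come from the worked answer above.

```python
# Change-of-basis check for Problem 7.
import numpy as np

# Columns: the B-coordinates of each B' vector (t, 1+t, 1+t^2).
P = np.array([[ 1, 1, 0],
              [-1, 0, 1],
              [ 1, 0, 0]])

coords_Bprime = np.array([-1, 2, -1])  # 1 + t - t^2 in basis B' (part (b))
coords_B = P @ coords_Bprime
print(coords_B)                        # [ 1  0 -1 ], matching part (c)

# Sanity check in the monomial basis {1, t, t^2}:
M = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 1, 1]])  # columns: 1+t, 1+t^2, t^2 as monomial coordinates
print(M @ coords_B)        # [ 1  1 -1 ], i.e. 1 + t - t^2
```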

(c) Find the inverse A. Again there is a brute-force way to do this, but since A adj(a) = det(a) I 3, A has to be det(a) adj(a) (so long as det(a), which is the case here but anyway would have to be true ( ) for A to exist). So A = adj(a) =. Problem 9 ( points) Prove that the only subspaces of R are [ ]. The subspace consisting of just. The subspace consisting of all multiples of some non-zero vector v, and 3. The whole space R. Let V be some subspace of R. Since V is a subspace it must at least contain = [ ] : If that is all it contains we are through since that is case (). So we can assume there is something other than in V, call it u. Let W be the set of all scalar multiples of u. That makes W the span of { u}, which is a subspace of R. Since V is also a subspace (of R ), it is closed under scalar multiplication, so W is contained in V. If V consists only of W, we are again through since that is case (). If we are not yet through, then V must contain some vector v that is not in W and hence not a multiple of u. Hence in the set S = { u, v}, neither vector is a linear combination (multiple) of preceding ones, so S is a linearly independent set of two vectors in the space R, which has dimension. We know a set of n linearly independent vectors in a space of dimension n must be a basis for that space, so S is a basis for R. But S is contained in V, and V has to be closed under linear combinations since it is a subspace, so the span of the vectors in S must be contained in V. But since S was a basis for R, that span is all of R. So V contains all of R. But V is also contained in R, so V = R, which is case (3), and we are through. 6