Math 24 Winter 2010 Sample Solutions to the Midterm


(1.) (a.) Find a basis {v1, v2} for the plane P in R^3 with equation x + y − z = 0.

We can take as v1 and v2 any two non-collinear vectors satisfying the equation of P.

(b.) You know from multivariable calculus that the coefficient vector v3 of the equation of P is perpendicular to the plane P. Therefore β = {v1, v2, v3} is linearly independent, and forms an ordered basis for R^3. Let T : R^3 → R^3 be the perpendicular projection onto the plane P. In other words, T(v) is the perpendicular projection of v onto P. What is [T]_β?

As v1 and v2 are in the plane, T(v1) = v1 and T(v2) = v2. As v3 is a vector perpendicular to the plane, its projection is the origin: T(v3) = 0. Therefore

[T(v1)]_β = [v1]_β = (1, 0, 0),  [T(v2)]_β = [v2]_β = (0, 1, 0),  [T(v3)]_β = [0]_β = (0, 0, 0),

and these coordinate vectors are the columns of [T]_β, so

[T]_β =
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 0 ].

(c.) Let α be the standard ordered basis for R^3. Find the change of coordinate matrices Q_{βα}, which changes α coordinates into β coordinates, and Q_{αβ}, which changes β coordinates into α coordinates. Do not use matrix inversion (if you know how to invert matrices) to do this problem. Find each matrix by explicitly computing the coordinates of the appropriate vectors in the appropriate bases. If you wish, you can check your work by verifying that Q_{βα} Q_{αβ} = I.

Q_{αβ} is the matrix whose columns are the α (standard) coordinates of the vectors in β; that is, its columns are v1, v2, and v3. To find the β coordinates of the vectors in α (the standard basis vectors) we need to solve the vector equations

e1 = a v1 + b v2 + c v3,  e2 = a v1 + b v2 + c v3,  e3 = a v1 + b v2 + c v3.

Solving these three systems gives the coordinate vectors [e1]_β, [e2]_β, and [e3]_β, and using these coordinates as the columns of Q_{βα} gives the change of coordinate matrix Q_{βα}.

(d.) Find the matrix of T in the standard basis, [T]_α. (If you want to use the result of part (b) but were not able to do part (b), you may pretend [T]_β is the matrix supplied on the exam; that matrix is not the correct answer to part (b).)

[T]_α = [T]^α_α = Q_{αβ} [T]^β_β Q_{βα}.

(e.) Use your answers to find the perpendicular projection of the given vector onto the plane P. If you want to check your work here, note that the resulting point should in fact be on the plane P, and the line between it and the given vector should be perpendicular to P. (If you used the wrong answer to part (b) supplied in part (d), the point you found will not be on P, but the line between it and the given vector will still be perpendicular to P.)

[T(v)]_α = [T]_α [v]_α.
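The change-of-basis computation in part (d) can be checked numerically. The sketch below uses a hypothetical plane x + y − z = 0 with hypothetical basis vectors (the exam's numerical data did not survive transcription): v1, v2 span the plane, v3 is normal to it, and [T]_α is obtained as Q_{αβ} [T]_β Q_{βα}.

```python
import numpy as np

# Hypothetical data (the original exam's plane is not recoverable from this
# transcription): take P to be the plane x + y - z = 0 in R^3.
v1 = np.array([1.0, 0.0, 1.0])   # in P: 1 + 0 - 1 = 0
v2 = np.array([0.0, 1.0, 1.0])   # in P: 0 + 1 - 1 = 0
v3 = np.array([1.0, 1.0, -1.0])  # coefficient (normal) vector of P

# Q_{alpha beta}: columns are the beta vectors in standard coordinates.
Q_ab = np.column_stack([v1, v2, v3])
# Q_{beta alpha} is its inverse (the exam computes it by hand instead).
Q_ba = np.linalg.inv(Q_ab)

# [T]_beta: T fixes v1 and v2 and sends the normal v3 to 0.
T_beta = np.diag([1.0, 1.0, 0.0])

# Change of basis: [T]_alpha = Q_{alpha beta} [T]_beta Q_{beta alpha}.
T_alpha = Q_ab @ T_beta @ Q_ba

print(np.round(T_alpha @ v3, 10))  # projection of the normal is approximately 0
print(T_alpha @ v1)                # vectors in P are fixed
```

A projection matrix built this way is idempotent (T_alpha @ T_alpha equals T_alpha), which gives another quick sanity check on the hand computation.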

(2.) Recall that if A is an n × m matrix, the linear transformation L_A : F^m → F^n is defined by L_A(v) = Av, and A is the matrix that represents L_A in the standard ordered bases for F^m and F^n. This means that the columns of A are L_A(e1), L_A(e2), ..., L_A(em), written as column vectors, where e1, e2, ..., em are the standard basis vectors for F^m.

(a.) Show that an n × n matrix is invertible if and only if its columns are linearly independent.

By a corollary in the textbook, an n × n matrix A is invertible if and only if the linear transformation L_A : F^n → F^n is invertible. By a theorem in the textbook, L_A is invertible if and only if it is onto. By the theorem on page 68, the range of L_A is spanned by L_A(e1), L_A(e2), ..., L_A(en), so L_A is invertible if and only if the vectors L_A(e1), L_A(e2), ..., L_A(en) span F^n. Since F^n is an n-dimensional vector space, by the corollary on pages 47-48 the n vectors L_A(e1), ..., L_A(en) span F^n if and only if they are linearly independent. The vectors L_A(e1), ..., L_A(en) are the columns of A, hence they are linearly independent if and only if the columns of A are linearly independent. Putting all this together, we see that an n × n matrix A is invertible if and only if its columns are linearly independent.

(b.) Determine whether the given matrix is invertible. (Hint: Check the sum of its columns.)

Because the columns c1, ..., cn sum to zero, the relation c1 + c2 + · · · + cn = 0 is a nontrivial linear dependence, so the columns are not linearly independent, and therefore by part (a) the matrix is not invertible.

(c.) Determine whether the linear transformation T : M_{2×2}(R) → P_3(R) defined on the exam, which sends a matrix with entries a, b, c, d to a polynomial whose four coefficients are linear combinations of a, b, c, and d, is invertible.

If β = {E11, E12, E21, E22} is the standard basis for M_{2×2}(R) and α = {1, x, x^2, x^3} is the standard basis for P_3(R), then [T]^α_β is the matrix of part (b). By part (b) this matrix is not invertible, and therefore by Theorem 2.18, T is not invertible.
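The criterion of part (a) is easy to test numerically: a square matrix whose columns satisfy a nontrivial linear relation has rank below n and determinant 0. A minimal sketch with a made-up 3 × 3 matrix whose columns sum to zero (the exam's matrix is not recoverable from this transcription):

```python
import numpy as np

# Made-up 3x3 matrix whose columns sum to the zero vector.
A = np.array([[ 1.0,  2.0, -3.0],
              [ 0.0,  1.0, -1.0],
              [ 2.0,  0.0, -2.0]])

print(A.sum(axis=1))   # the column sum c1 + c2 + c3: the zero vector

# c1 + c2 + c3 = 0 is a nontrivial dependence among the columns,
# so by part (a) the matrix is not invertible.
print(np.linalg.matrix_rank(A))   # 2, less than 3
```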

(3.) (a.) Find a basis for the subspace W of R^4 consisting of all solutions to the given homogeneous system of four equations in the variables w, x, y, z.

By subtracting multiples of the first two equations from the last two, we can convert this to the equivalent system

w + x + z = 0
y + z = 0,

which can be rewritten as w = −x − z, y = −z. This system is satisfied by exactly the vectors of the form

(w, x, y, z) = (−x − z, x, −z, z) = x(−1, 1, 0, 0) + z(−1, 0, −1, 1),

which shows the solution set is spanned by the set {(−1, 1, 0, 0), (−1, 0, −1, 1)}. Since this set is clearly linearly independent, it forms a basis for the solution set.

(b.) Find a basis for the subspace Z of R^4 spanned by the four given vectors (the coefficient vectors of the four equations in part (a)).

We know (Theorem 1.9 on page 44) that some subset of this set forms a basis. Using our algorithm for finding a linearly independent set with the same span, we can begin with the last two vectors: since neither is a multiple of the other, they form a linearly independent set. Checking whether the second given vector is in their span, we find that it is a linear combination of them; checking the first given vector, we find that it too is in their span. Therefore the last two of the given vectors form a basis for the span Z of the four vectors.

(c.) Let v be any vector in R^4. Show that v is in W if and only if v · z = 0 for every element z of Z (where v · z denotes the familiar dot product).

Let S be the set of the four spanning vectors from part (b).

First, by the definition of W, we see that v = (w, x, y, z) is in W if and only if v satisfies the equations of the system in part (a), which can be rewritten as

s · (w, x, y, z) = 0 for each coefficient vector s ∈ S.

So v is in W if and only if s · v = 0 for every s ∈ S. Since the dot product is commutative, we can write this as v · s = 0 for every s ∈ S. Now, by definition, Z = span(S). So we must show that the following two statements are equivalent:

(i) For every s ∈ S, we have v · s = 0.
(ii) For every z ∈ span(S), we have v · z = 0.

We will do this for any set S ⊆ R^n. First, it is clear that (ii) implies (i), since if s ∈ S then s ∈ span(S), so by (ii) we have v · s = 0. To show that (i) implies (ii), assume that (i) holds, and let z ∈ span(S). We must show v · z = 0. By the definition of span, we can write

z = r1 s1 + r2 s2 + · · · + rk sk

for some scalars r1, r2, ..., rk and elements s1, s2, ..., sk of S. Now by properties of the dot product, we have

v · z = v · (r1 s1 + r2 s2 + · · · + rk sk) = r1 (v · s1) + r2 (v · s2) + · · · + rk (v · sk).

By (i) we have v · si = 0 for i = 1, 2, ..., k, and so we can conclude v · z = 0.

(d.) Let A be the coefficient matrix of the system in part (a), and let A^t be the transpose of A. Then L_A and L_{A^t} are linear transformations from R^4 to R^4. Two pairs of the spaces W, Z, N(L_A), R(L_A), N(L_{A^t}), and R(L_{A^t}) are equal. Which ones?

N(L_A) = W and R(L_{A^t}) = Z.

To see the first equality, note that N(L_A) consists of all solutions to the matrix equation Ax = 0, which is equivalent to the system of equations in part (a), and W also consists of all solutions to that system. To see the second, note that R(L_{A^t}) is the span of the columns of A^t, which are the rows of A, and Z is also the span of the rows of A. The interesting fact that comes out of this (which holds for any matrix A in M_{m×n}(R), as you can see by looking at the proof in part (c)) is that R(L_{A^t}) consists of all vectors that are perpendicular to every vector in N(L_A).
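The fact noted in part (d), that R(L_{A^t}) consists of all vectors perpendicular to every vector in N(L_A), can be checked numerically. A sketch with a hypothetical rank-2 matrix A (the exam's entries are garbled in this transcription):

```python
import numpy as np

# Hypothetical 4x4 matrix standing in for the exam's A; its rows span
# Z = R(L_{A^t}).
A = np.array([[1.0, 1.0,  1.0,  2.0],
              [2.0, 2.0, -4.0, -2.0],
              [0.0, 0.0,  1.0,  1.0],
              [1.0, 1.0, -1.0,  0.0]])

# A basis of N(L_A) from the SVD: the right singular vectors whose
# singular value is (numerically) zero.
u, s, vh = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = vh[rank:]            # rows span N(L_A)

# Every row of A is perpendicular to every null-space basis vector,
# so R(L_{A^t}) is orthogonal to N(L_A).
print(rank, np.max(np.abs(A @ null_basis.T)))
```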

(4.) Suppose that T and U are linear transformations from R^5 to R^5, and rank(T) = rank(U) = 3.

(a.) What is the largest possible rank of the composition TU?

The largest possible rank is 3.

Show this rank is possible by giving an example.

Let T(x1, x2, x3, x4, x5) = (x1, x2, x3, 0, 0) and U(x1, x2, x3, x4, x5) = (x1, x2, x3, 0, 0). Both U and T have rank 3, since their range is the three-dimensional subspace spanned by (1, 0, 0, 0, 0), (0, 1, 0, 0, 0), and (0, 0, 1, 0, 0). Now

TU(x1, x2, x3, x4, x5) = T(U(x1, x2, x3, x4, x5)) = T(x1, x2, x3, 0, 0) = (x1, x2, x3, 0, 0).

We see TU has rank 3, since its range is also the three-dimensional subspace spanned by (1, 0, 0, 0, 0), (0, 1, 0, 0, 0), and (0, 0, 1, 0, 0).

Prove that no larger rank is possible.

The range of TU is contained in the range of T. (To see this: suppose that v ∈ R(TU). This means that, for some w, we have v = TU(w) = T(U(w)), which shows that v ∈ R(T).) Therefore the dimension of R(TU) is at most the dimension of R(T). But now we have

rank(TU) = dim(R(TU)) ≤ dim(R(T)) = rank(T) = 3.

(b.) What is the smallest possible rank of the composition TU?

The smallest possible rank is 1.

Show this rank is possible by giving an example.

Let T(x1, x2, x3, x4, x5) = (x3, x4, x5, 0, 0) and U(x1, x2, x3, x4, x5) = (x1, x2, x3, 0, 0). Both U and T have rank 3, since their range is the three-dimensional subspace spanned by (1, 0, 0, 0, 0), (0, 1, 0, 0, 0), and (0, 0, 1, 0, 0). Now

TU(x1, x2, x3, x4, x5) = T(U(x1, x2, x3, x4, x5)) = T(x1, x2, x3, 0, 0) = (x3, 0, 0, 0, 0).

We see TU has rank 1, since its range is the one-dimensional subspace spanned by (1, 0, 0, 0, 0).

Prove that no smaller rank is possible.

We need to show that rank(TU) ≥ 1. We will suppose that rank(TU) = 0 and derive a contradiction. Because rank(TU) = 0, we have R(TU) = {0}, so for every vector v ∈ R^5 we have TU(v) = 0, that is, T(U(v)) = 0. This means that for every vector w = U(v) in R(U), we have T(w) = T(U(v)) = 0. This shows that R(U) ⊆ N(T). But we know that dim(R(U)) = rank(U) = 3, and by the Dimension Theorem,

dim(N(T)) = nullity(T) = dim(domain(T)) − rank(T) = 5 − 3 = 2.

Now we have a three-dimensional subspace, R(U), contained in a two-dimensional subspace, N(T). This is impossible, so we have a contradiction.
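The two examples in problem (4) translate directly into matrices, and NumPy's matrix_rank confirms the ranks of the compositions. A sketch (the matrices encode the coordinate formulas used in the solutions above; the helper coord_map is an illustrative name, not from the exam):

```python
import numpy as np

def coord_map(images):
    """Matrix whose i-th column is the image of the i-th standard basis vector."""
    return np.column_stack(images)

e = np.eye(5)
zero = np.zeros(5)

# U(x1,...,x5) = (x1, x2, x3, 0, 0): rank 3.
U = coord_map([e[0], e[1], e[2], zero, zero])
# Part (a): taking T = U gives rank(TU) = 3, the largest possible.
T_big = U
# Part (b): T(x1,...,x5) = (x3, x4, x5, 0, 0) gives rank(TU) = 1.
T_small = coord_map([zero, zero, e[0], e[1], e[2]])

print(np.linalg.matrix_rank(T_big @ U))    # 3
print(np.linalg.matrix_rank(T_small @ U))  # 1
```

Note that the matrix of the composition TU (apply U first, then T) is the product T @ U, matching the order used in the solutions.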

(5.) Suppose that W and Z are subspaces of a vector space V, that W ∩ Z = {0}, and that span(W ∪ Z) = V. Show that if X is a basis for W and Y is a basis for Z, then X ∪ Y is a basis for V.

To show X ∪ Y is a basis, we need to show that it is linearly independent and spans V.

First we show that X ∪ Y is linearly independent. To do this, we need to show that no nontrivial linear combination of vectors from X ∪ Y equals zero. Suppose then that

(1) a1 x1 + a2 x2 + · · · + an xn + b1 y1 + b2 y2 + · · · + bm ym = 0,

where xi ∈ X and yj ∈ Y. We must show that ai = 0 and bj = 0 for all i and j. We can rewrite equation (1) as

(2) a1 x1 + a2 x2 + · · · + an xn = −(b1 y1 + b2 y2 + · · · + bm ym).

Because the left hand side of equation (2) is a linear combination of vectors from X, which is a basis for W, it must be in W; that is, a1 x1 + a2 x2 + · · · + an xn ∈ W. Similarly, the right hand side of equation (2) is a linear combination of vectors from Y, which is a basis for Z, so −(b1 y1 + b2 y2 + · · · + bm ym) ∈ Z. We are given that W ∩ Z = {0}, so 0 is the only vector that is in both W and Z. This means we have

a1 x1 + a2 x2 + · · · + an xn = −(b1 y1 + b2 y2 + · · · + bm ym) = 0.

But now we have a1 x1 + a2 x2 + · · · + an xn = 0, and since the xi come from the linearly independent set X, we have ai = 0 for all i. Similarly, since b1 y1 + b2 y2 + · · · + bm ym = 0 and the yj come from the linearly independent set Y, we have bj = 0 for all j.

Now we show that X ∪ Y spans V. To do this, we must show that any vector in V can be expressed as a linear combination of vectors from X ∪ Y. Let v be any vector of V. Because v is in the span of W ∪ Z, we can write

(3) v = a1 w1 + a2 w2 + · · · + an wn + b1 z1 + b2 z2 + · · · + bm zm,

where wi ∈ W and zj ∈ Z.

Because W is a subspace, a1 w1 + a2 w2 + · · · + an wn ∈ W, and since any element of W can be expressed as a linear combination of elements of the basis X, we can write

a1 w1 + a2 w2 + · · · + an wn = c1 x1 + c2 x2 + · · · + ck xk,

where xi ∈ X. Similarly, we can write

b1 z1 + b2 z2 + · · · + bm zm = d1 y1 + d2 y2 + · · · + dl yl,

where yj ∈ Y. Substituting back into equation (3), we get

v = c1 x1 + c2 x2 + · · · + ck xk + d1 y1 + d2 y2 + · · · + dl yl,

which expresses v as a linear combination of elements from X ∪ Y. This completes the proof.
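For subspaces of R^4, the conclusion of problem (5) amounts to a concrete matrix fact: stacking a basis for W and a basis for Z as the columns of a 4 × 4 matrix yields a matrix of full rank. A sketch with hypothetical subspaces (not from the exam):

```python
import numpy as np

# Hypothetical subspaces of V = R^4: W = span(X) and Z = span(Y), chosen
# so that W intersect Z = {0} and span(W union Z) = V.
X = [np.array([1.0, 0.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0, 0.0])]        # basis for W
Y = [np.array([0.0, 0.0, 1.0, 1.0]),
     np.array([0.0, 0.0, 1.0, -1.0])]       # basis for Z

# X union Y is a basis for R^4 exactly when the matrix with these four
# vectors as columns has full rank.
B = np.column_stack(X + Y)
print(np.linalg.matrix_rank(B))   # 4
```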

Outline of an alternative proof for (2)(a): Instead of using the proof given above, you can prove more directly that a square matrix A is invertible if and only if its columns are linearly independent.

First, show that an n × n matrix A is invertible if and only if there is a matrix B such that AB = I_n. To do this, you can use the ideas in exercises 9 and 10(a) in the textbook. A way to do exercise 9 is to relate the properties of A, B, and AB to properties of L_A, L_B, and L_{AB}.

Second, show that the i-th column of AB is a linear combination of the columns of A, with the coefficients given by the i-th column of B. Therefore, there is a B such that AB = I_n if and only if the columns of I_n (the standard basis vectors) are all in the span of the columns of A. Now the standard basis vectors are all in the span of the columns of A if and only if the columns of A span F^n, which, since there are n columns and F^n has dimension n, is the case if and only if the columns of A are linearly independent.

In problem (4) we can prove a more general fact. Let V, W, and Z be vector spaces with dimensions m, n, and p respectively. If U : V → W and T : W → Z are linear transformations with rank(U) = r and rank(T) = s, then we have

rank(TU) ≤ r;  rank(TU) ≤ s;  rank(TU) ≥ 0;  rank(TU) ≥ (r + s) − n.

(Note that, by the Dimension Theorem, we know that r ≤ m, r ≤ n, s ≤ n, and s ≤ p.)

To prove this, you can let T′ be the restriction of T with domain the subspace R(U) ⊆ W and codomain the subspace R(T) ⊆ Z. That is, the domain of T′ is R(U), and for any w ∈ R(U), we have T′(w) = T(w). Now, you can prove that the range of TU is the range of T′. Therefore the problem becomes to find upper and lower bounds on the rank of T′. By the Dimension Theorem,

rank(T′) ≤ dim(domain(T′)) = dim(R(U)) = rank(U) = r;
rank(T′) ≤ dim(codomain(T′)) = dim(R(T)) = rank(T) = s.

You can also prove that the null space of T′ is N(T) ∩ R(U). In particular, N(T′) ⊆ N(T), so that nullity(T′) ≤ nullity(T) and, again by the Dimension Theorem, we have

rank(T′) = dim(domain(T′)) − nullity(T′) ≥ dim(domain(T′)) − nullity(T) = dim(R(U)) − (n − rank(T)) = r − (n − s) = (r + s) − n.

You can also give examples where the rank is as small and as large as these constraints permit. If we let {v1, ..., vm} be a basis for V, {w1, ..., wn} be a basis for W, and {z1, ..., zp} be a basis for Z, and let U(vi) = wi for i ≤ r and U(vi) = 0 for i > r, we can obtain the largest and smallest possible rank for TU by setting, respectively, T(wi) = zi for i ≤ s and T(wi) = 0 for i > s, or T(wi) = 0 for i ≤ n − s and T(wi) = z_{i−(n−s)} for i > n − s.
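The general bounds rank(TU) ≤ min(r, s) and rank(TU) ≥ max(0, (r + s) − n) can be spot-checked numerically on random matrices of prescribed rank. A sketch (the factor shapes are chosen so that the generic ranks are exactly r and s; the dimensions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = p = 5
r, s = 3, 2

# Random matrices of prescribed rank: a generic product of an (n x r) and
# an (r x m) Gaussian factor has rank exactly r.
U = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))   # U : V -> W
T = rng.standard_normal((p, s)) @ rng.standard_normal((s, n))   # T : W -> Z

rU = np.linalg.matrix_rank(U)
rT = np.linalg.matrix_rank(T)
rTU = np.linalg.matrix_rank(T @ U)

# The bounds proved above hold for every choice of T and U.
assert max(0, rT + rU - n) <= rTU <= min(rT, rU)
print(rU, rT, rTU)
```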