Solution to Homework 1


Sec 1.2

1. (a) Yes. It is condition (VS 3). (b) No. If $x$ and $y$ are both zero vectors, then by condition (VS 3), $x = x + y = y$. (c) No. Let $e$ be the zero vector; we have $1e = 2e$ even though $1 \neq 2$. (d) No. It is false when $a = 0$. (e) Yes. (f) No. It has $m$ rows and $n$ columns. (g) No. (h) No. For example, $x^n + (-x^n) = 0$, which is not of degree $n$. (i) Yes. (j) Yes. (k) Yes. That is the definition.

8. By (VS 7) and (VS 8), we have $(a + b)(x + y) = a(x + y) + b(x + y) = ax + ay + bx + by$.

14. Yes. All the conditions are preserved when the field is the real numbers.

15. No. A real-valued vector multiplied by a complex scalar need not be a real-valued vector.

22. Since each entry can be $0$ or $1$ and there are $mn$ entries, there are $2^{mn}$ vectors in that space.
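The count in Exercise 22 is easy to sanity-check by brute force. Below is a minimal sketch (not part of the original solutions), assuming Python; the sizes $m = 2$, $n = 3$ are an arbitrary illustration.

```python
from itertools import product

# Brute-force check of Exercise 22: a matrix over Z_2 is an independent
# 0/1 choice per entry, so |M_{m x n}(Z_2)| = 2^(m*n).
m, n = 2, 3
matrices = list(product([0, 1], repeat=m * n))   # one tuple per matrix
assert len(matrices) == 2 ** (m * n) == 64
```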

Sec 1.3

1. (a) No. We must make sure that the field and the operations of $V$ and $W$ are the same. Otherwise take, for example, $V = \mathbb{R}$ and $W = \mathbb{Q}$: then $W$ is a vector space over $\mathbb{Q}$ but not over $\mathbb{R}$, so it is not a subspace of $V$. (b) No. Every subspace must contain $0$. (c) Yes. We can choose $W = \{0\}$. (d) No. Let $V = \mathbb{R}$, $E_1 = \{0\}$ and $E_2 = \{1\}$. Then $E_1 \cap E_2 = \varnothing$ is not a subspace. (e) Yes. Only the entries on the diagonal can be nonzero. (f) No. The trace is the sum of the diagonal entries, not the product. (g) No. But the two spaces are isomorphic; that is, they are the same in view of structure.

8. Just check whether each set is closed under addition and scalar multiplication and whether it contains $0$; here $s$ and $t$ range over $\mathbb{R}$. (a) Yes. It is the line $t(3, 1, -1)$. (b) No. It does not contain $(0, 0, 0)$. (c) Yes. It is a plane with normal vector $(2, -7, 1)$. (d) Yes. It is a plane with normal vector $(1, -4, -1)$. (e) No. It does not contain $(0, 0, 0)$. (f) No. Both $(\sqrt{3}, \sqrt{5}, 0)$ and $(0, \sqrt{6}, \sqrt{3})$ are elements of $W_6$, but their sum $(\sqrt{3}, \sqrt{5} + \sqrt{6}, \sqrt{3})$ is not an element of $W_6$.

11. No in general, but yes when $n = 0$, since $W$ is not closed under addition. For example, when $n = 2$, $(x^2 + x) + (-x^2) = x$ is not in $W$.

19. The condition is sufficient, since if $W_1 \subseteq W_2$ or $W_2 \subseteq W_1$, then $W_1 \cup W_2$ is $W_2$ or $W_1$, a subspace of course. To see it is necessary, assume that neither $W_1 \subseteq W_2$ nor $W_2 \subseteq W_1$ holds. Then we can find some $x \in W_1 \setminus W_2$ and $y \in W_2 \setminus W_1$. If $W_1 \cup W_2$ were a subspace, $x + y$ would be a vector in $W_1$ or in $W_2$, say $W_1$. But this would make $y = (x + y) - x$ a vector in $W_1$, contradicting the hypothesis that $y \in W_2 \setminus W_1$.

23. (a) We have $(x_1 + x_2) + (y_1 + y_2) = (x_1 + y_1) + (x_2 + y_2) \in W_1 + W_2$ and $c(x_1 + x_2) = cx_1 + cx_2 \in W_1 + W_2$ whenever $x_1, y_1 \in W_1$ and $x_2, y_2 \in W_2$; and $0 = 0 + 0 \in W_1 + W_2$. Finally, $W_1 = \{x + 0 : x \in W_1,\ 0 \in W_2\} \subseteq W_1 + W_2$, and similarly for $W_2$. (b) If $U$ is a subspace containing both $W_1$ and $W_2$, then $x + y$ must be a vector in $U$ for all $x \in W_1$ and $y \in W_2$; hence $W_1 + W_2 \subseteq U$.

28. By the previous exercise we have $(M_1 + M_2)^t = M_1^t + M_2^t = -(M_1 + M_2)$ and $(cM)^t = cM^t = -cM$. Since in addition the zero matrix is skew-symmetric, the set $W_1$ of all skew-symmetric matrices is a subspace. Next,
$$M_{n \times n}(F) = \left\{ \tfrac{1}{2}(A - A^t) + \tfrac{1}{2}(A + A^t) : A \in M_{n \times n}(F) \right\} = W_1 + W_2,$$
and $W_1 \cap W_2 = \{0\}$; the first equality holds because $\tfrac{1}{2}(A - A^t)$ is skew-symmetric and $\tfrac{1}{2}(A + A^t)$ is symmetric. If $F$ is of characteristic 2, we have $W_1 = W_2$.

30. If $V = W_1 \oplus W_2$ and some vector $y \in V$ can be represented as $y = x_1 + x_2 = x_1' + x_2'$, where $x_1, x_1' \in W_1$ and $x_2, x_2' \in W_2$, then we have $x_1 - x_1' \in W_1$ and $x_1 - x_1' = x_2' - x_2 \in W_2$. But since $W_1 \cap W_2 = \{0\}$, we get $x_1 = x_1'$ and $x_2 = x_2'$. Conversely, if each vector in $V$ can be written uniquely as $x_1 + x_2$, then $V = W_1 + W_2$. Now if $x \in W_1 \cap W_2$ and $x \neq 0$, then $x = x + 0$ with $x \in W_1$, $0 \in W_2$, and also $x = 0 + x$ with $0 \in W_1$, $x \in W_2$, i.e. two distinct representations, a contradiction.
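The decomposition in Exercise 28 can be checked computationally. A minimal sketch (not from the original solutions), assuming Python with sympy; the sample matrix is an arbitrary choice over a field of characteristic $\neq 2$:

```python
from sympy import Matrix, Rational

# Check of Exercise 28: every square matrix is a skew-symmetric part
# plus a symmetric part, A = (A - A^t)/2 + (A + A^t)/2.
A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 10]])
skew = Rational(1, 2) * (A - A.T)     # in W_1 (skew-symmetric)
sym  = Rational(1, 2) * (A + A.T)     # in W_2 (symmetric)
assert skew.T == -skew and sym.T == sym and skew + sym == A
```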

31. (a) If $v + W$ is a subspace, we have $0 = v + (-v) \in v + W$, and thus $-v \in W$ and $v \in W$. Conversely, if $v \in W$, we actually have $v + W = W$, a subspace. (b) We prove that $v_1 + W = v_2 + W$ if and only if $(v_1 - v_2) + W = W$. This is because $(-v_1) + (v_1 + W) = W$ and $(-v_1) + (v_2 + W) = (v_2 - v_1) + W$. So if $(v_1 - v_2) + W = W$, a subspace, then $v_1 - v_2 \in W$ by part (a); and if $v_1 - v_2 \in W$, we conclude that $(v_1 - v_2) + W = W$. (c) We have $(v_1 + W) + (v_2 + W) = (v_1 + v_2) + W = (v_1' + v_2') + W = (v_1' + W) + (v_2' + W)$, since by part (b) $v_1 - v_1' \in W$ and $v_2 - v_2' \in W$, and thus $(v_1 + v_2) - (v_1' + v_2') \in W$. On the other hand, since $v_1 - v_1' \in W$ implies $av_1 - av_1' = a(v_1 - v_1') \in W$, we have $a(v_1 + W) = a(v_1' + W)$. (d) $V/W$ is closed under the operations because $V$ is closed. Commutativity and associativity of addition hold because addition in $V$ is commutative and associative. For the zero element we have $(x + W) + (0 + W) = x + W$; for the inverse element, $(x + W) + (-x + W) = 0 + W$; for the identity element of multiplication, $1(x + W) = x + W$. The distributive laws and the associativity of scalar multiplication likewise follow from the corresponding properties in $V$. One more thing must be checked, namely that the operations are well defined; but that is exactly part (c).

Sec 1.4

1. (a) Yes. Just pick every coefficient to be zero. (b) No. By definition it should be $\{0\}$. (c) Yes. Every subspace that contains $S$ as a subset contains $\mathrm{span}(S)$, and $\mathrm{span}(S)$ is a subspace. (d) No. Multiplying an equation by the constant $0$ can change the solution set of the system. (e) Yes. (f) No. For example, $0x = 3$ has no solution.

10. Any symmetric $2 \times 2$ matrix can be written as follows:
$$\begin{pmatrix} a & b \\ b & c \end{pmatrix} = aM_1 + cM_2 + bM_3,$$
where $M_1$, $M_2$, $M_3$ are the three given matrices.

11. For $x \neq 0$ the statement is just the definition of linear combination, and the set is a line; for $x = 0$ both sides of the equation are the set containing only the zero vector, and the set is the origin.

14. We prove $\mathrm{span}(S_1 \cup S_2) \subseteq \mathrm{span}(S_1) + \mathrm{span}(S_2)$ first. For $v \in \mathrm{span}(S_1 \cup S_2)$ we have $v = \sum_{i=1}^{n} a_i x_i + \sum_{j=1}^{m} b_j y_j$ with $x_i \in S_1$ and $y_j \in S_2$. Since the first summation is in $\mathrm{span}(S_1)$ and the second is in $\mathrm{span}(S_2)$, we have $v \in \mathrm{span}(S_1) + \mathrm{span}(S_2)$. For the converse, let $u + v \in \mathrm{span}(S_1) + \mathrm{span}(S_2)$ with $u \in \mathrm{span}(S_1)$ and $v \in \mathrm{span}(S_2)$. We can write $u = \sum_{i=1}^{n} a_i x_i$ and $v = \sum_{j=1}^{m} b_j y_j$ with $x_i \in S_1$ and $y_j \in S_2$. This means $u + v \in \mathrm{span}(S_1 \cup S_2)$.
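Membership in a span, which the Sec 1.4 exercises above turn on, can be tested mechanically with a rank comparison. A hedged sketch, assuming Python with sympy; the vectors are arbitrary illustrations, not taken from the exercises:

```python
from sympy import Matrix

# v is a linear combination of the columns of B iff appending v to B
# leaves the rank unchanged.
B = Matrix([[1, 2],
            [0, 1],
            [1, 0]])                      # columns span a plane in R^3
v_in, v_out = Matrix([4, 1, 2]), Matrix([0, 0, 1])
assert B.row_join(v_in).rank() == B.rank() == 2    # v_in lies in the span
assert B.row_join(v_out).rank() == 3               # v_out does not
```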

Sec 1.5

1. (a) No. For example, take $S = \{(1, 0), (2, 0), (0, 1)\}$; then $(0, 1)$ is not a linear combination of the other two. (b) Yes, because $1 \cdot 0 = 0$. (c) No. It is independent, by the remark after the definition of linear independence. (d) No. For example, $S = \{(1, 0), (2, 0), (0, 1)\}$ is linearly dependent, but its subset $\{(1, 0), (0, 1)\}$ is linearly independent. (e) Yes. This is the contrapositive of Theorem 1.6. (f) Yes. This is the definition.

2. (a) Linearly dependent. (b) Linearly independent. (c) Linearly independent. (d) Linearly dependent. (e) Linearly dependent. (f) Linearly independent. (g) Linearly dependent. (h) Linearly independent. (i) Linearly independent. (j) Linearly dependent.

9. The condition is sufficient, since if $u = tv$ for some $t \in F$, then $u - tv = 0$ is a nontrivial dependence. It is also necessary, since if $au + bv = 0$ for some $a, b \in F$ with at least one of the two coefficients nonzero, we may assume $a \neq 0$, and then $u = -\frac{b}{a}v$.

10. Pick $v_1 = (1, 0, 0)$, $v_2 = (0, 1, 0)$, $v_3 = (1, 1, 0)$. None of the three is a multiple of another, yet they are dependent, since $v_1 + v_2 - v_3 = 0$.

13. (a) Sufficiency: If $\{u + v, u - v\}$ is linearly independent, then $a(u + v) + b(u - v) = 0$ implies $a = b = 0$. Assuming $cu + dv = 0$, we can deduce that $\frac{c+d}{2}(u + v) + \frac{c-d}{2}(u - v) = 0$, and hence $\frac{c+d}{2} = \frac{c-d}{2} = 0$. This means $c = d = 0$ if the characteristic is not two. Necessity: If $\{u, v\}$ is linearly independent, then $au + bv = 0$ implies $a = b = 0$. Assuming $c(u + v) + d(u - v) = 0$, we can deduce that $(c + d)u + (c - d)v = 0$, and hence $c + d = c - d = 0$. This means $c = d = 0$ if the characteristic is not two. (b) Similar to (a).
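The example used in 1(a) and 1(d) above can be checked mechanically: a list of column vectors is linearly dependent exactly when the matrix built from them has a nontrivial null space. A small sketch, assuming Python with sympy:

```python
from sympy import Matrix

S = Matrix([[1, 2, 0],
            [0, 0, 1]])                  # columns (1,0), (2,0), (0,1)
assert S.nullspace() != []               # dependent: 2*(1,0) - (2,0) = 0
assert Matrix([[1, 0], [0, 1]]).nullspace() == []   # subset independent
```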

15. Sufficiency: If $u_1 = 0$, then $S$ is linearly dependent. Otherwise, suppose $u_{k+1} \in \mathrm{span}(\{u_1, u_2, \ldots, u_k\})$ for some $k$, say $u_{k+1} = a_1 u_1 + a_2 u_2 + \cdots + a_k u_k$. Then $a_1 u_1 + a_2 u_2 + \cdots + a_k u_k - u_{k+1} = 0$ is a nontrivial representation of $0$. Necessity: If $S$ is linearly dependent, there is some integer $k$ for which there is a nontrivial representation $a_1 u_1 + a_2 u_2 + \cdots + a_k u_k + a_{k+1} u_{k+1} = 0$. Furthermore, we may assume $a_{k+1} \neq 0$; otherwise we may decrease $k$ until $a_{k+1} \neq 0$. Hence $u_{k+1} = -\frac{1}{a_{k+1}}(a_1 u_1 + a_2 u_2 + \cdots + a_k u_k)$, and so $u_{k+1} \in \mathrm{span}(\{u_1, u_2, \ldots, u_k\})$.

Sec 1.6

1. (a) No. The empty set is its basis. (b) Yes. This is a result of the Replacement Theorem. (c) No. For example, the set of all polynomials has no finite basis. (d) No. $\mathbb{R}^2$ has $\{(1, 0), (0, 1)\}$ and $\{(1, 1), (0, 1)\}$ as bases. (e) Yes. This is the corollary after the Replacement Theorem. (f) No. It is $n + 1$. (g) No. It is $mn$. (h) Yes. This is the Replacement Theorem. (i) No. For $S = \{1, 2\}$, a subset of $\mathbb{R}$, we have $5 = 1 + 2 \cdot 2 = 3 \cdot 1 + 2$. (j) Yes. This is Theorem 1.11. (k) Yes. They are $\{0\}$ and $V$ respectively. (l) Yes. This is Corollary 2 after the Replacement Theorem.

5. It is impossible, since the dimension of $\mathbb{R}^3$ is three.

7. First, $\{u_1, u_2\}$ is linearly independent. Since $u_3 = -4u_1$ and $u_4 = -3u_1 + 7u_2$, we can check that $\{u_1, u_2, u_5\}$ is linearly independent, and hence it is a basis.

11. Since $\{u, v\}$ is a basis for $V$, $\dim V = 2$. So if $\{u + v, au\}$ is a set of 2 linearly independent vectors, then it is a basis. Now if $c_1(u + v) + c_2(au) = 0$, then we get $(c_1 + ac_2)u + c_1 v = 0$. By the linear independence of $\{u, v\}$, we have $c_1 + ac_2 = 0$ and $c_1 = 0$. As $a$ is nonzero, we get $c_1 = c_2 = 0$. Hence $\{u + v, au\}$ is linearly independent and is a basis for $V$. Similarly, $c_1(au) + c_2(bv) = 0$ implies $c_1 = c_2 = 0$ as $a$ and $b$ are nonzero, so $\{au, bv\}$ is linearly independent, hence a basis for $V$.
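In coordinates relative to the basis $\{u, v\}$, Exercise 11 reduces to a determinant computation, which can be confirmed symbolically. A sketch (my own illustration, assuming Python with sympy):

```python
from sympy import Matrix, symbols

# Relative to {u, v}: u + v = (1, 1) and au = (a, 0).  The pair is a
# basis exactly when the coordinate matrix has nonzero determinant.
a, b = symbols('a b', nonzero=True)
assert Matrix([[1, a], [1, 0]]).det() == -a      # nonzero since a != 0
assert Matrix([[a, 0], [0, b]]).det() == a*b     # {au, bv}, nonzero too
```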

12. Suppose $\{u, v, w\}$ is a basis for $V$; then $\dim V = 3$, so $\{u + v + w, v + w, w\}$ is a basis if it is linearly independent. From $c_1(u + v + w) + c_2(v + w) + c_3 w = 0$ we can rearrange to get $c_1 u + (c_1 + c_2)v + (c_1 + c_2 + c_3)w = 0$. By the linear independence of $\{u, v, w\}$, we have $c_1 = c_1 + c_2 = c_1 + c_2 + c_3 = 0$, and then obviously $c_1 = c_2 = c_3 = 0$. So $\{u + v + w, v + w, w\}$ is linearly independent and hence a basis.

22. One can see that the dimension of $W_1$ does not change after intersecting with $W_2$, so it is natural to think that $W_1$ is contained in $W_2$. More rigorously, we claim that $W_1 \subseteq W_2$ is the required condition. It is obvious that $W_1 \subseteq W_2$ implies $\dim(W_1 \cap W_2) = \dim(W_1)$: in that case $W_1 \cap W_2$ is just $W_1$, so their dimensions are equal. For the converse, we show that $\dim(W_1 \cap W_2) = \dim(W_1)$ implies $W_1 \subseteq W_2$. Suppose $W_1$ is not contained in $W_2$; that means there is some vector $v \in W_1 \setminus W_2$. Let $\beta$ be a basis of $W_1 \cap W_2$; in particular, $\beta \subseteq W_1$. Note that $v$ is in $W_1$ but not in $W_2$, so $v \notin \mathrm{span}(\beta) = W_1 \cap W_2$. But then $\beta \cup \{v\}$ is linearly independent in $W_1$, and the size of this linearly independent set is greater than $\dim(W_1 \cap W_2) = \dim(W_1)$, which leads to a contradiction. Hence $W_1$ must be contained in $W_2$.

23. (a) Note that $W_1 = \mathrm{span}(\{v_1, v_2, \ldots, v_k\})$ is to have the same dimension as $W_2 = \mathrm{span}(\{v_1, v_2, \ldots, v_k, v\})$, so $v$ cannot help in generating more combinations; in other words, $v$ must already be in $\mathrm{span}(\{v_1, v_2, \ldots, v_k\})$. Hence we claim that $v \in \mathrm{span}(\{v_1, v_2, \ldots, v_k\}) = W_1$ is the condition. To show the condition is necessary, suppose $v \notin W_1$. Let $\beta$ be a basis of $W_1$. Note that $\beta \subseteq W_1 \subseteq W_2$ and $v \in W_2$, so $\beta \cup \{v\}$ is a linearly independent set in $W_2$. But then $\dim(W_2) = \dim(W_1) = |\beta| < |\beta| + 1 = |\beta \cup \{v\}| \leq \dim(W_2)$ leads to a contradiction. Hence $v \in W_1$. To check that the condition is sufficient: if $v \in \mathrm{span}(\{v_1, v_2, \ldots, v_k\})$, then $\mathrm{span}(\{v_1, v_2, \ldots, v_k, v\}) = \mathrm{span}(\{v_1, v_2, \ldots, v_k\})$; hence $W_1$ and $W_2$ are the same, and so are their dimensions. (b) By the condition above, $\dim(W_1) \neq \dim(W_2)$ implies $v \notin W_1$. Then, following the same argument as in (a), we arrive at $\dim(W_1) = |\beta| < |\beta| + 1 = |\beta \cup \{v\}| \leq \dim(W_2)$. Hence $\dim(W_1) < \dim(W_2)$.
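Going back to Exercise 12 above, the coordinate computation can be double-checked in the same determinant style. A small sketch, assuming Python with sympy:

```python
from sympy import Matrix

# Relative to the basis {u, v, w}, the vectors u+v+w, v+w, w have the
# coordinate columns below.  The matrix is triangular with 1s on the
# diagonal, so its determinant is 1 and the vectors form a basis.
C = Matrix([[1, 0, 0],
            [1, 1, 0],
            [1, 1, 1]])   # columns: coords of u+v+w, v+w, w
assert C.det() == 1 and C.rank() == 3
```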

26. For any $f \in P_n(\mathbb{R})$ with $f(a) = 0$, we have $f(x) = (x - a)g(x)$ for some $g \in P_{n-1}(\mathbb{R})$. So the subspace is
$$\{f \in P_n(\mathbb{R}) : f(a) = 0\} = \{f \in P_n(\mathbb{R}) : f(x) = (x - a)g(x),\ g \in P_{n-1}(\mathbb{R})\}.$$
But $P_{n-1}(\mathbb{R})$ is of dimension $n$, so is the subspace. More explicitly, $\{1, x, \ldots, x^{n-1}\}$ is a basis for $P_{n-1}(\mathbb{R})$, and one can easily check that $\{(x - a), (x - a)x, \ldots, (x - a)x^{n-1}\}$ is a basis for the subspace. Or one can simply observe that $\{x - a, x^2 - a^2, \ldots, x^n - a^n\}$ is another basis for the subspace.

29. (a) Suppose $\{u_1, u_2, \ldots, u_k\}$ is a basis for $W_1 \cap W_2$. Then one can extend it to a basis $\{u_1, \ldots, u_k, v_1, \ldots, v_m\}$ for $W_1$ and to a basis $\{u_1, \ldots, u_k, w_1, \ldots, w_p\}$ for $W_2$. We want to show that $\{u_1, \ldots, u_k, v_1, \ldots, v_m, w_1, \ldots, w_p\}$ is a basis for $W_1 + W_2$. Suppose we have
$$\sum_{i=1}^{k} a_i u_i + \sum_{i=1}^{m} b_i v_i + \sum_{i=1}^{p} c_i w_i = 0.$$
One can rearrange this to get
$$\sum_{i=1}^{m} b_i v_i = -\sum_{i=1}^{k} a_i u_i - \sum_{i=1}^{p} c_i w_i.$$
The left-hand side is in $W_1$, while the right-hand side is in $W_2$; so both are in $W_1 \cap W_2$. In particular, $\sum_{i=1}^{m} b_i v_i$ can be expressed as a combination of $\{u_1, u_2, \ldots, u_k\}$:
$$\sum_{i=1}^{m} b_i v_i = s_1 u_1 + s_2 u_2 + \cdots + s_k u_k.$$
We can rearrange this to get $s_1 u_1 + s_2 u_2 + \cdots + s_k u_k - b_1 v_1 - b_2 v_2 - \cdots - b_m v_m = 0$. Note that $\{u_1, \ldots, u_k, v_1, \ldots, v_m\}$ is a basis; its linear independence implies that all the coefficients are zero, in particular $b_1 = b_2 = \cdots = b_m = 0$. Now the original relation reduces to
$$\sum_{i=1}^{k} a_i u_i + \sum_{i=1}^{p} c_i w_i = 0.$$
Again, the linear independence of $\{u_1, \ldots, u_k, w_1, \ldots, w_p\}$ implies that $a_1 = \cdots = a_k = c_1 = \cdots = c_p = 0$. Hence the set $\{u_1, \ldots, u_k, v_1, \ldots, v_m, w_1, \ldots, w_p\}$ is linearly independent.

Now for any vector $z \in W_1 + W_2$ we have $z = x + y$, where $x \in W_1$ and $y \in W_2$. Then we can write $x$ and $y$ as follows:
$$x = a_1 u_1 + \cdots + a_k u_k + b_1 v_1 + \cdots + b_m v_m,$$
$$y = c_1 u_1 + \cdots + c_k u_k + d_1 w_1 + \cdots + d_p w_p.$$
So we can rearrange the sum to get
$$z = \sum_{i=1}^{k} (a_i + c_i) u_i + \sum_{i=1}^{m} b_i v_i + \sum_{i=1}^{p} d_i w_i \in \mathrm{span}(\{u_1, \ldots, u_k, v_1, \ldots, v_m, w_1, \ldots, w_p\}).$$
Hence we have checked that $\{u_1, \ldots, u_k, v_1, \ldots, v_m, w_1, \ldots, w_p\}$ is a basis for $W_1 + W_2$. Counting dimensions,
$$\dim(W_1 + W_2) = k + m + p = (k + m) + (k + p) - k = \dim(W_1) + \dim(W_2) - \dim(W_1 \cap W_2).$$
(b) $V$ is the direct sum of $W_1$ and $W_2$ if and only if $V = W_1 + W_2$ and $W_1 \cap W_2 = \{0\}$, and $W_1 \cap W_2 = \{0\}$ if and only if $\dim(W_1 \cap W_2) = 0$. From the formula above, this happens exactly when $\dim(V) = \dim(W_1) + \dim(W_2)$.

Sec 2.1

15. To show that $T$ is linear: for any $f, g \in P(\mathbb{R})$, write $f(x) = \sum_{i=0}^{n} a_i x^i$ and $g(x) = \sum_{i=0}^{m} b_i x^i$. Without loss of generality we may assume $n \geq m$ and set $b_i = 0$ for $i > m$, so that $g(x) = \sum_{i=0}^{n} b_i x^i$. Then
$$T(f(x) + g(x)) = \int_0^x \big(f(t) + g(t)\big)\,dt = \sum_{i=0}^{n} \int_0^x (a_i + b_i) t^i\,dt = \sum_{i=0}^{n} \int_0^x a_i t^i\,dt + \sum_{i=0}^{n} \int_0^x b_i t^i\,dt = T(f(x)) + T(g(x)),$$
and similarly $T(\alpha f(x)) = \alpha T(f(x))$. To prove that $T$ is one-to-one, suppose $T(f(x)) = T(g(x))$. Doing the integration,
$$\int_0^x f(t)\,dt = \sum_{i=0}^{n} \frac{a_i}{i+1} x^{i+1} \quad\text{and}\quad \int_0^x g(t)\,dt = \sum_{i=0}^{n} \frac{b_i}{i+1} x^{i+1},$$
and comparing coefficients gives $a_i = b_i$ for all $0 \leq i \leq n$. Hence $f = g$ and $T$ is one-to-one. However, $T$ is not onto: for a nonzero constant $c \in P(\mathbb{R})$ there is no $f \in P(\mathbb{R})$ with $T(f(x)) = \int_0^x f(t)\,dt = c$, since every polynomial in the range of $T$ vanishes at $x = 0$.
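The "not onto" observation for the integration operator is easy to see concretely. A minimal sketch (my own addition), assuming Python with sympy; the sample polynomial is an arbitrary choice:

```python
from sympy import symbols, integrate

x, t = symbols('x t')

# T(f)(x) = int_0^x f(t) dt.  Every image vanishes at x = 0, so nonzero
# constants are never hit and T is not onto.
f = 3*t**2 + 2*t + 1
Tf = integrate(f, (t, 0, x))       # x**3 + x**2 + x
assert Tf.subs(x, 0) == 0          # constant term of T(f) is always 0
```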

17. Recall the rank-nullity formula: $\dim V = \dim R(T) + \dim N(T)$. (a) Suppose $\dim V < \dim W$ but $T$ is onto. Then $R(T) = W$. But $\dim N(T) \geq 0$, so
$$\dim V < \dim W = \dim R(T) \leq \dim R(T) + \dim N(T) = \dim V,$$
which is a contradiction. Hence $T$ cannot be onto. (b) Suppose $\dim V > \dim W$ but $T$ is one-to-one. Then $N(T) = \{0\}$ and $\dim N(T) = 0$. But $\dim R(T) \leq \dim W$, so
$$\dim V > \dim W \geq \dim R(T) = \dim R(T) + \dim N(T) = \dim V,$$
which is again a contradiction. Hence $T$ cannot be one-to-one.

18. Note that $N(T)$ lies in the domain while $R(T)$ lies in the codomain. Consider the linear transformation $T(x, y) = (y, 0)$. One can easily check that $N(T) = \{(x, 0) : x \in \mathbb{R}\} = R(T)$.

19. Consider $T(x, y) = (-y, x)$ and $U(x, y) = (2x, 2y)$. Then $R(T) = \mathbb{R}^2 = R(U)$ and $N(T) = \{(0, 0)\} = N(U)$. Here $T$ is a rotation and $U$ is a scaling; one can also take $U$ to be any other invertible linear map distinct from $T$, and the conclusion is the same.

Sec 2.2

13. Suppose $T: V \to W$ and $U: V \to W$ are nonzero and $R(T) \cap R(U) = \{0\}$. If $\alpha T + \beta U$ is the zero linear transformation from $V$ to $W$ (that is, the $0$ in $\mathcal{L}(V, W)$), we want to show that $\alpha = \beta = 0$. The assumption means $(\alpha T + \beta U)(v) = 0$ for all $v \in V$, so $\alpha T(v) = -\beta U(v)$ for all $v \in V$. Note that the left-hand side is in $R(T)$, while the right-hand side is in $R(U)$. Then $R(T) \cap R(U) = \{0\}$ implies that $\alpha T(v) = 0$ and $\beta U(v) = 0$ for all $v \in V$. Since $T$ and $U$ are nonzero linear transformations, it must be that $\alpha = \beta = 0$, and the linear independence follows.
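The example in Exercise 18 of Sec 2.1 above can be verified directly by writing $T$ as a matrix. A short sketch, assuming Python with sympy:

```python
from sympy import Matrix

# T(x, y) = (y, 0) as a matrix: its null space and column space coincide
# (both are the x-axis, spanned by (1, 0)).
T = Matrix([[0, 1],
            [0, 0]])
assert T.nullspace() == T.columnspace() == [Matrix([1, 0])]
```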

16. Let $n = \dim V$. Suppose $\{v_1, v_2, \ldots, v_k\}$ is a basis for $N(T)$. One can extend it to a basis $\beta = \{v_1, v_2, \ldots, v_k, v_{k+1}, \ldots, v_n\}$ for $V$. Note that $\{Tv_{k+1}, \ldots, Tv_n\}$ is a basis for $R(T)$. Since $\dim W = \dim V = n$, we can extend it to a basis $\gamma = \{w_1, w_2, \ldots, w_k, Tv_{k+1}, \ldots, Tv_n\}$ for $W$. Now we can check that $[T]^{\gamma}_{\beta}$ is diagonal:
$$[T]^{\gamma}_{\beta} = \big(\, [Tv_1]_{\gamma} \mid [Tv_2]_{\gamma} \mid \cdots \mid [Tv_n]_{\gamma} \,\big).$$
One can see that $Tv_1, Tv_2, \ldots, Tv_k$ are zero vectors and $Tv_{k+1}, \ldots, Tv_n$ are basis vectors in $W$. So $\{[Tv_{k+1}]_{\gamma}, \ldots, [Tv_n]_{\gamma}\}$ is $\{e_{k+1}, \ldots, e_n\}$, where $e_j$ is the vector with a $1$ at the $j$th entry and $0$ elsewhere. Hence
$$[T]^{\gamma}_{\beta} = \begin{pmatrix} O & O \\ O & I_{n-k} \end{pmatrix}.$$

Sec 2.3

12. Note that $UT$ is a transformation from $V$ to $Z$. (a) Suppose $T(x) = T(y)$ for some $x, y \in V$. Applying $U$, we get $UT(x) = UT(y)$. As $UT$ is one-to-one, $x = y$; hence $T$ is one-to-one. However, $U$ may fail to be one-to-one: one example is $T(x, y) = (x, y, 0)$ and $U(x, y, z) = (x, y)$, where $T$ and $UT$ are one-to-one while $U$ is not. (b) Suppose $z \in Z$. As $UT$ is onto, there exists some $v \in V$ such that $UT(v) = z$. Taking $w = T(v)$, we have $U(w) = U(T(v)) = z$; hence $U$ is onto. However, $T$ may fail to be onto: a counterexample is the one in (a). (c) Suppose $UT(x) = UT(y)$. As $U$ is one-to-one, $T(x) = T(y)$; again, as $T$ is one-to-one, $x = y$. So $UT$ is one-to-one. Next let $z \in Z$. As $U$ is onto, there exists some $w \in W$ such that $U(w) = z$; for this $w$, as $T$ is onto, there exists some $v \in V$ such that $T(v) = w$. Hence we have found $v \in V$ such that $UT(v) = U(w) = z$, so $UT$ is onto.

13. Denote by $(M)_{ij}$ the entry of $M$ in the $i$th row and $j$th column. Then one can simply check from the definition that the two traces are equal:
$$\operatorname{tr}(AB) = \sum_{i=1}^{n} (AB)_{ii} = \sum_{i=1}^{n} \sum_{j=1}^{n} (A)_{ij}(B)_{ji},$$
$$\operatorname{tr}(BA) = \sum_{j=1}^{n} (BA)_{jj} = \sum_{j=1}^{n} \sum_{i=1}^{n} (B)_{ji}(A)_{ij} = \sum_{j=1}^{n} \sum_{i=1}^{n} (A)_{ij}(B)_{ji}.$$
As the sums are finite, we can interchange the order of summation and get the result.
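The trace identity just proved can also be confirmed symbolically for generic matrices. A minimal sketch (not part of the original solution), assuming Python with sympy:

```python
from sympy import Matrix, symbols, expand

# tr(AB) = tr(BA) for generic 2x2 matrices, mirroring the index
# computation above.
A = Matrix(2, 2, symbols('a:4'))
B = Matrix(2, 2, symbols('b:4'))
assert expand((A*B).trace() - (B*A).trace()) == 0
```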

17. Suppose $T: V \to V$ is a linear transformation such that $T = T^2$. Denote $W_1 = \{y \in V : T(y) = y\}$ and $W_2 = \{y \in V : T(y) = 0\}$ $(= N(T))$. Note that every vector $x \in V$ can be written as $x = T(x) + (x - T(x))$; in other words, $x = u + v$, where $u = T(x)$ and $v = x - T(x)$. We claim that $u \in W_1$ and $v \in W_2$. Firstly, $T(u) = T(T(x)) = T(x) = u$, so $u \in W_1$. Next, $T(v) = T(x - T(x)) = T(x) - T(T(x)) = T(x) - T(x) = 0$, so $v \in W_2$. Moreover, $V = W_1 \oplus W_2$, that is, $W_1 \cap W_2 = \{0\}$: for any $y \in W_1 \cap W_2$ we have both $T(y) = y$ and $T(y) = 0$, so $y = 0$. To interpret this: $V$ is decomposed into two subspaces, $V = W_1 \oplus W_2$; a vector in $V$ can be expressed as a sum of two parts, one from each subspace, and $T$ preserves the first part and removes the other. That is just a projection. Hence $T$ is the projection of $V$ onto the subspace $W_1$, where $V = W_1 \oplus W_2$.

Sec 2.4

4. As $A$ and $B$ are invertible, there exist $A^{-1}$ and $B^{-1}$ such that $A^{-1}A = AA^{-1} = I$ and $B^{-1}B = BB^{-1} = I$. So it is easy to check that $(B^{-1}A^{-1})(AB) = I$ and $(AB)(B^{-1}A^{-1}) = I$. Hence $AB$ is invertible and $B^{-1}A^{-1}$ is its inverse; in other words, $(AB)^{-1} = B^{-1}A^{-1}$.

6. $\Phi$ is an isomorphism if it is a one-to-one and onto linear transformation. It is easy to check that $\Phi$ is linear:
$$\Phi(C + D) = B^{-1}(C + D)B = B^{-1}CB + B^{-1}DB = \Phi(C) + \Phi(D),$$
$$\Phi(\alpha C) = B^{-1}(\alpha C)B = \alpha B^{-1}CB = \alpha \Phi(C).$$
Note that $B$ is invertible. For any $C$ and $D$ such that $\Phi(C) = \Phi(D)$, we have $B^{-1}CB = B^{-1}DB$; hence $C = D$, and $\Phi$ is one-to-one. Moreover, given $A$, we can find $C = BAB^{-1}$ such that $\Phi(C) = B^{-1}(BAB^{-1})B = A$; hence $\Phi$ is onto. So it is an isomorphism.
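Both Sec 2.4 facts above are easy to spot-check numerically. A hedged sketch, assuming Python with sympy; the sample invertible matrices are arbitrary choices:

```python
from sympy import Matrix

# (AB)^(-1) = B^(-1) A^(-1), and the conjugation map Phi(C) = B^(-1) C B
# from Exercise 6 is inverted by C -> B C B^(-1).
A = Matrix([[2, 1], [1, 1]])
B = Matrix([[1, 1], [0, 1]])
assert (A * B).inv() == B.inv() * A.inv()
C = Matrix([[0, 1], [2, 3]])
assert B * (B.inv() * C * B) * B.inv() == C
```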

Sec 3.2

2. (g) By elementary row operations, we get the RREF (reduced row echelon form), from which the rank can be read off as the number of nonzero rows.

5. (h) By computing the RREF of $(A \mid I)$, we see that the rank is just 3, so the inverse does not exist.

Sec 3.3

3. (g) Consider
$$\begin{cases} x_1 + 2x_2 + x_3 + x_4 = 1 \\ x_2 - x_3 + x_4 = 1. \end{cases}$$
One particular solution is $x_1 = x_2 = x_3 = 0$ and $x_4 = 1$. Now we consider the homogeneous system
$$\begin{cases} x_1 + 2x_2 + x_3 + x_4 = 0 \\ x_2 - x_3 + x_4 = 0 \end{cases} \quad\Longrightarrow\quad \begin{cases} x_1 = -3x_3 + x_4 \\ x_2 = x_3 - x_4. \end{cases}$$
We choose $(-3, 1, 1, 0)$ and $(1, -1, 0, 1)$ as a basis for the solution set of the homogeneous system. Then all the solutions can be written as $(0, 0, 0, 1) + s(-3, 1, 1, 0) + t(1, -1, 0, 1)$. Hence the solutions to the system are
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} -3s + t \\ s - t \\ s \\ t + 1 \end{pmatrix} \quad\text{for all } s, t \in \mathbb{R}.$$
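The parametric solution of 3(g) can be reproduced mechanically. A sketch (my own addition), assuming Python with sympy:

```python
from sympy import Matrix, linsolve, symbols

# The system from 3(g); linsolve returns the same parametric family,
# with x3, x4 as the free variables.
x1, x2, x3, x4 = symbols('x1:5')
A = Matrix([[1, 2, 1, 1],
            [0, 1, -1, 1]])
b = Matrix([1, 1])
print(linsolve((A, b), x1, x2, x3, x4))
# -> {(-3*x3 + x4 - 1, x3 - x4 + 1, x3, x4)}; setting x3 = s, x4 = t + 1
#    recovers (-3s + t, s - t, s, t + 1).
assert A * Matrix([0, 0, 0, 1]) == b                # particular solution
assert A * Matrix([-3, 1, 1, 0]) == Matrix([0, 0])  # homogeneous basis
assert A * Matrix([1, -1, 0, 1]) == Matrix([0, 0])  # homogeneous basis
```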

4. (b) First, rewrite the system of linear equations as $Ax = b$:
$$\begin{cases} x_1 + 2x_2 - x_3 = 5 \\ x_1 + x_2 + x_3 = 1 \\ 2x_1 - 2x_2 + x_3 = 4 \end{cases} \quad\Longrightarrow\quad \begin{pmatrix} 1 & 2 & -1 \\ 1 & 1 & 1 \\ 2 & -2 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 5 \\ 1 \\ 4 \end{pmatrix}.$$
i. By computing the RREF of $(A \mid I)$, we get
$$A^{-1} = \frac{1}{9} \begin{pmatrix} 3 & 0 & 3 \\ 1 & 3 & -2 \\ -4 & 6 & -1 \end{pmatrix}.$$
ii. As we now have $A^{-1}$, the solution $x$ is just $A^{-1}b$:
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \frac{1}{9} \begin{pmatrix} 3 & 0 & 3 \\ 1 & 3 & -2 \\ -4 & 6 & -1 \end{pmatrix} \begin{pmatrix} 5 \\ 1 \\ 4 \end{pmatrix} = \begin{pmatrix} 3 \\ 0 \\ -2 \end{pmatrix}.$$

Sec 3.4

12. (a) Check that $(1, 0, 1, 1, 1, 0)$ and $(1, 1, 1, 0, 0, 0)$ are solutions to the system, so $S \subseteq V$. Suppose $a(1, 0, 1, 1, 1, 0) + b(1, 1, 1, 0, 0, 0) = (0, 0, 0, 0, 0, 0)$. One can easily solve this to get $a = b = 0$; hence the two vectors are linearly independent.
(b) Consider the system
$$\begin{cases} x_1 - x_2 + 2x_4 - 3x_5 + x_6 = 0 \\ 2x_1 - x_2 - x_3 + 3x_4 - 4x_5 + 4x_6 = 0. \end{cases}$$
One can solve for any two of the variables, say $x_1$ and $x_2$:
$$\begin{cases} x_1 = x_3 - x_4 + x_5 - 3x_6 \\ x_2 = x_3 + x_4 - 2x_5 - 2x_6. \end{cases}$$
We then get a basis for the solution set by setting each of $x_3$, $x_4$, $x_5$, $x_6$ in turn equal to $1$ and the others to $0$:
$$(1, 1, 1, 0, 0, 0),\quad (-1, 1, 0, 1, 0, 0),\quad (1, -2, 0, 0, 1, 0),\quad (-3, -2, 0, 0, 0, 1).$$
To extend $S$ to a basis for $V$, we compute the RREF to select two more linearly independent vectors from this list. Hence we choose $(-1, 1, 0, 1, 0, 0)$ and $(-3, -2, 0, 0, 0, 1)$. Therefore, the extended basis is
$$\{(1, 0, 1, 1, 1, 0),\ (1, 1, 1, 0, 0, 0),\ (-1, 1, 0, 1, 0, 0),\ (-3, -2, 0, 0, 0, 1)\}.$$
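The extended basis just found can be verified directly. A short check (my own addition), assuming Python with sympy:

```python
from sympy import Matrix

# All four vectors solve the homogeneous system, and they are linearly
# independent, hence a basis of the 4-dimensional solution space V.
A = Matrix([[1, -1, 0, 2, -3, 1],
            [2, -1, -1, 3, -4, 4]])
basis = [Matrix([1, 0, 1, 1, 1, 0]),
         Matrix([1, 1, 1, 0, 0, 0]),
         Matrix([-1, 1, 0, 1, 0, 0]),
         Matrix([-3, -2, 0, 0, 0, 1])]
assert all(A * v == Matrix([0, 0]) for v in basis)
assert Matrix.hstack(*basis).rank() == 4
```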

Sec 4.1

For Exercises 5, 7 and 8, we write
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$

5. Suppose $B$ is obtained by interchanging the rows of $A$:
$$B = \begin{pmatrix} c & d \\ a & b \end{pmatrix}.$$
Then the determinants can be easily computed: $\det(B) = cb - ad = -(ad - bc) = -\det(A)$.

7. Note that
$$A^t = \begin{pmatrix} a & c \\ b & d \end{pmatrix}.$$
Then the determinants are equal: $\det(A^t) = ad - bc = \det(A)$.

8. As $A$ is upper triangular, we have $c = 0$. Then $\det(A) = ad - bc = ad$, which is the product of the diagonal entries of $A$.

Sec 4.2

26. Negating one row of the matrix $A$ changes $\det(A)$ by a factor of $-1$. To obtain $-A$, we can negate every row of $A$, that is, negate $n$ times. So $\det(-A) = (-1)^n \det(A)$. For $\det(-A) = \det(A)$ to hold, either $\det(A) = 0$, which means $A$ is not invertible, or $(-1)^n = 1$, which means $n$ is even or $F$ is of characteristic 2.

27. It is obvious that $A$ is not of full rank, so $\det(A) = 0$.
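The sign rule in Exercise 26 is a one-liner to test. A randomized sketch (not part of the original solution), assuming Python with sympy:

```python
from sympy import randMatrix

# det(-A) = (-1)^n det(A), for one even and one odd size.
for n in (2, 3):
    A = randMatrix(n, n)
    assert (-A).det() == (-1)**n * A.det()
```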

Sec 4.3

12. Note that $\det(Q^t) = \det(Q)$. So if $QQ^t = I$, we have $\det(QQ^t) = \det(I)$, and then
$$\det(Q)^2 = \det(Q)\det(Q^t) = \det(QQ^t) = \det(I) = 1.$$
Hence $\det(Q) = \pm 1$.

15. Note that $\det(Q^{-1}) = \det(Q)^{-1}$ for an invertible matrix $Q \in M_{n \times n}(F)$. If $A$ and $B$ are similar, there exists some invertible matrix $Q$ such that $A = Q^{-1}BQ$. So
$$\det(A) = \det(Q^{-1}BQ) = \det(Q^{-1})\det(B)\det(Q) = \det(B).$$
Therefore, $A$ and $B$ have the same determinant.
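Both facts just proved can be spot-checked on concrete matrices. A hedged sketch, assuming Python with sympy; the rotation matrix and the invertible $P$ are arbitrary choices:

```python
from sympy import Matrix, simplify, symbols, cos, sin

# A rotation matrix Q satisfies Q Q^t = I with det(Q) = 1 (consistent
# with det = +-1), and conjugation by an invertible P preserves det.
t = symbols('t', real=True)
Q = Matrix([[cos(t), -sin(t)],
            [sin(t),  cos(t)]])
assert (Q * Q.T).applyfunc(simplify) == Matrix.eye(2)
assert simplify(Q.det()) == 1

P = Matrix([[1, 2], [1, 3]])   # det(P) = 1, so P is invertible
B = Matrix([[4, 1], [2, 0]])
assert (P.inv() * B * P).det() == B.det()
```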

22. (a) Note that $\beta = \{1, x, \ldots, x^n\}$ and $\gamma = \{e_1, e_2, \ldots, e_{n+1}\}$, and
$$[T]^{\gamma}_{\beta} = \big(\, [T(1)]_{\gamma} \mid [T(x)]_{\gamma} \mid [T(x^2)]_{\gamma} \mid \cdots \mid [T(x^n)]_{\gamma} \,\big).$$
We have $T(x^j) = (c_0^j, c_1^j, \ldots, c_n^j)$ for $j = 0, 1, 2, \ldots, n$. As $\gamma$ is just the standard basis, $T(x^j)$ has its usual representation under $\gamma$. Hence
$$M = [T]^{\gamma}_{\beta} = \begin{pmatrix} 1 & c_0 & \cdots & c_0^n \\ 1 & c_1 & \cdots & c_1^n \\ \vdots & \vdots & & \vdots \\ 1 & c_n & \cdots & c_n^n \end{pmatrix}.$$
(b) Using Exercise 22 from Sec 2.4, we know that $T$ is an isomorphism; in other words, $T$ is an invertible transformation, and so $M$ is an invertible matrix. Hence $\det(M) \neq 0$.
(c) We recall that
$$a^n - b^n = (a - b)(a^{n-1} + a^{n-2}b + \cdots + ab^{n-2} + b^{n-1}).$$
Then we are ready to expand the determinant. Subtracting the first row from every other row (which does not change the determinant),
$$\det(M) = \det \begin{pmatrix} 1 & c_0 & \cdots & c_0^n \\ 1 & c_1 & \cdots & c_1^n \\ \vdots & \vdots & & \vdots \\ 1 & c_n & \cdots & c_n^n \end{pmatrix} = \det \begin{pmatrix} 1 & c_0 & \cdots & c_0^n \\ 0 & c_1 - c_0 & \cdots & c_1^n - c_0^n \\ \vdots & \vdots & & \vdots \\ 0 & c_n - c_0 & \cdots & c_n^n - c_0^n \end{pmatrix}.$$
By the factorization above, every entry of the $i$th row (for $i = 1, \ldots, n$) contains the factor $c_i - c_0$, so
$$\det(M) = \prod_{i=1}^{n} (c_i - c_0) \cdot \det \begin{pmatrix} 1 & c_0 & c_0^2 & \cdots & c_0^n \\ 0 & 1 & c_1 + c_0 & \cdots & \sum_{k=0}^{n-1} c_1^{\,n-1-k} c_0^{\,k} \\ \vdots & \vdots & & & \vdots \\ 0 & 1 & c_n + c_0 & \cdots & \sum_{k=0}^{n-1} c_n^{\,n-1-k} c_0^{\,k} \end{pmatrix}.$$
Observe that each column, working from the last back to the third, can be reduced by $c_0$ times the previous column; these column operations remove all the $c_0$ terms. Expanding along the first column, we get
$$\det(M) = \prod_{i=1}^{n} (c_i - c_0) \cdot \det \begin{pmatrix} 1 & c_1 & \cdots & c_1^{n-1} \\ \vdots & \vdots & & \vdots \\ 1 & c_n & \cdots & c_n^{n-1} \end{pmatrix},$$
where the remaining determinant is the analogous determinant of order $n$ in $c_1, \ldots, c_n$. In the same fashion, we expand this determinant:
$$\det(M) = \prod_{i=1}^{n} (c_i - c_0) \prod_{i=2}^{n} (c_i - c_1) \cdots \prod_{i=n}^{n} (c_i - c_{n-1}) = \prod_{0 \leq i < j \leq n} (c_j - c_i).$$
Basically these are the major rules for handling determinants; please make sure you do not get them wrong.
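The product formula just derived can be verified symbolically for a small case. A sketch (my own addition), assuming Python with sympy; $n = 3$ is an arbitrary choice:

```python
from functools import reduce
from operator import mul
from sympy import Matrix, symbols

# Vandermonde check for n = 3: a 4x4 matrix with rows (1, c_i, c_i^2, c_i^3).
c = symbols('c0:4')
M = Matrix(4, 4, lambda i, j: c[i]**j)
expected = reduce(mul, (c[j] - c[i] for i in range(4)
                        for j in range(i + 1, 4)), 1)
assert (M.det() - expected).expand() == 0
```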

Sec 4.4

1. (a) True. (b) False. (It depends.) (c) True. (d) False. (e) False. (f) True. (g) True. (h) False. (i) True. (j) True. (k) True. And, of course, please make sure you understand the reasons behind each fact.