MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS


1. HW 1: Due September 4

1.1.21. Suppose v, w ∈ R^n and c is a scalar. Prove that Span(v + cw, w) = Span(v, w).

We must prove two things: that every element of Span(v + cw, w) is in Span(v, w), and that every element of Span(v, w) is in Span(v + cw, w).

If x ∈ Span(v + cw, w), then x = a(v + cw) + bw for some scalars a, b ∈ R. Then x = av + (b + ac)w, so x is in Span(v, w).

Similarly, if x ∈ Span(v, w), then x = dv + ew for some scalars d, e ∈ R. We can rearrange this as x = dv + dcw − dcw + ew = d(v + cw) + (e − dc)w. Hence, x is in Span(v + cw, w).

1.1.29. (a) Using only the properties listed in Exercise 28, prove that for any x ∈ R^n, we have 0x = 0.

We will be extremely formalistic in this exercise. On most problems, you don't have to show quite so much work, but the idea here is to see carefully that the properties in Exercise 28 are really all that's required to do linear algebra. For any vector x ∈ R^n, we have:

    0x + x = 0x + 1x     by (h)
           = (0 + 1)x    by (g)
           = 1x
           = x           by (h) again

Now add −x (which exists by property (d)) to both sides of the equation:

    (0x + x) + (−x) = x + (−x)
    0x + (x + (−x)) = x + (−x)    by (b)
    0x + 0 = 0                    by (d)
    0x = 0                        by (a) and (c)
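Note: the span equality in 1.1.21 is easy to sanity-check numerically, since two spans coincide exactly when pooling the two spanning sets does not increase the rank. Here is a short NumPy sketch (an illustration only, not part of the assigned solution; the dimension n = 5, the seed, and the value of c are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    n, c = 5, 2.7
    v, w = rng.standard_normal(n), rng.standard_normal(n)

    S1 = np.column_stack([v + c * w, w])   # columns spanning Span(v + cw, w)
    S2 = np.column_stack([v, w])           # columns spanning Span(v, w)

    # Equal spans <=> pooling all four columns adds no new directions.
    r1 = np.linalg.matrix_rank(S1)
    r2 = np.linalg.matrix_rank(S2)
    r12 = np.linalg.matrix_rank(np.column_stack([S1, S2]))
    assert r1 == r2 == r12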

(b) Prove that (−1)x = −x.

First, we have to observe that −x is uniquely characterized by the property that x + (−x) = 0. If y were some other vector with the property that x + y = 0, then we could write:

    y + (x + (−x)) = (y + x) + (−x)    by (b)
    y + 0 = 0 + (−x)                   by (d) and our assumption on y
    y = −x                             by (c)

Hence, it suffices to show that x + (−1)x = 0. This is seen as follows:

    x + (−1)x = 1x + (−1)x    by (h)
              = (1 − 1)x      by (g)
              = 0x
              = 0             by part (a) of the problem

as required.

1.2.16. Let y ∈ R^n. If x · y = 0 for all x ∈ R^n, then prove that y = 0.

Let's follow the hint and consider the dot products of y with some strategically chosen vectors. For i = 1, ..., n, let e_i be the vector with a 1 in the i-th entry and 0 everywhere else. Observe that

    e_i · y = 0·y_1 + ··· + 0·y_{i−1} + 1·y_i + 0·y_{i+1} + ··· + 0·y_n = y_i.

Thus, since e_i · y = 0 for all i (which is true by assumption), we see that y_1 = ··· = y_n = 0, and hence y = 0.
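Note: 1.2.16 reappears later (it is the key lemma in 1.4.15(b)), and the computation e_i · y = y_i is simple to confirm in NumPy. A tiny sketch, with an arbitrary dimension and seed:

    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.standard_normal(6)

    # The rows of the identity matrix are e_1, ..., e_n, so E @ y collects
    # the dot products e_i . y, which are exactly the coordinates y_i.
    E = np.eye(6)
    assert np.allclose(E @ y, y)

    # Contrapositive of the problem: a nonzero y fails "x . y = 0 for all x",
    # witnessed already by x = y itself.
    assert y @ y > 0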

2. HW 2: Due September 11

1.3.12. Suppose a ≠ 0 and P ⊆ R^3 is the plane through the origin with normal vector a. Suppose P is spanned by u and v, and assume that u · v = 0.

(a) Show that for every x ∈ P, we have x = proj_u(x) + proj_v(x).

Since x ∈ P, and P = Span(u, v), we can write x = su + tv for some scalars s, t. We will try to figure out what s and t must be. Compute the dot product of x with u:

    x · u = (su + tv) · u = s(u · u) + t(v · u) = s(u · u),

so s = (x · u)/(u · u). A similar computation shows that t = (x · v)/(v · v). Plugging in these values for s and t, we have:

    x = ((x · u)/(u · u)) u + ((x · v)/(v · v)) v = proj_u(x) + proj_v(x)

by definition.

(b) Show that for any x ∈ R^3, we have x = proj_a(x) + proj_u(x) + proj_v(x).

Following the hint, let w = x − proj_a(x), which is just the perpendicular part of x with respect to a. Thus, w · a = 0, so w ∈ P. Hence, by applying the previous part, we can write

    w = proj_u(w) + proj_v(w),

and therefore

    x = proj_a(x) + proj_u(w) + proj_v(w).

We just have to check that proj_u(x) = proj_u(w) and proj_v(x) = proj_v(w), and then we'll be done.

For the first of these equations, let's write proj_a(x) = ca, where c = (x · a)/(a · a). We have:

    proj_u(x) = ((x · u)/(u · u)) u
              = (((w + ca) · u)/(u · u)) u
              = ((w · u)/(u · u)) u + c((a · u)/(u · u)) u
              = ((w · u)/(u · u)) u
              = proj_u(w),

as required. Here we used the fact that a · u = 0, which is true since u ∈ P. A similar reasoning applies to show that proj_v(x) = proj_v(w).

1.4.15. (a) Prove or give a counterexample: If A is an m × n matrix and x ∈ R^n satisfies Ax = 0, then either every entry of A is zero or x = 0.

This is definitely false: it would be saying that no homogeneous linear equation can have any nonzero solutions! As a very basic counterexample, let A = [1 −1] and x = (1, 1), so that Ax = [1 − 1] = [0] (which we're considering as a vector in R^1).

(b) Prove or give a counterexample: If A is an m × n matrix, and Ax = 0 for every vector x ∈ R^n, then every entry of A is 0.

Following the hint, notice that the entries of Ax are the dot products A_i · x, where A_1, ..., A_m are the rows of A. If Ax = 0 for all x, then A_i · x = 0 for all x, and therefore A_i = 0 by Problem 1.2.16 (from last week, solved above). Hence we deduce that A is the zero matrix.
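Note: both parts of 1.3.12 can be confirmed numerically once u · v = 0 is enforced. In the sketch below, the helper proj, the seed, and the particular vectors are all illustrative choices rather than data from the problem:

    import numpy as np

    def proj(u, x):
        # Projection of x onto the line spanned by u: ((x . u)/(u . u)) u.
        return (x @ u) / (u @ u) * u

    rng = np.random.default_rng(2)
    u = rng.standard_normal(3)
    v = rng.standard_normal(3)
    v = v - proj(u, v)          # force u . v = 0
    a = np.cross(u, v)          # a normal vector to P = Span(u, v)

    x = 2.0 * u - 3.0 * v       # an arbitrary point of the plane P
    assert np.allclose(x, proj(u, x) + proj(v, x))               # part (a)

    y = rng.standard_normal(3)  # an arbitrary point of R^3
    assert np.allclose(y, proj(a, y) + proj(u, y) + proj(v, y))  # part (b)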

3. HW 3: Due September 18

1.5.12. In each case, give positive integers m and n and an example of an m × n matrix A with the stated property, or explain why none can exist.

(a) Ax = b is inconsistent for every b ∈ R^m.

If b = 0, then the system Ax = b always has at least one solution, namely x = 0. Therefore, this can't happen.

(b) Ax = b has one solution for every b ∈ R^m.

This will be true for any nonsingular n × n matrix. The most basic example is m = n = 1 and A = [1].

(c) Ax = b has no solutions for some b ∈ R^m and one solution for every other b ∈ R^m.

This definitely can't happen. For instance, if Ax = b has no solutions, then Ax = 2b also has no solutions, since if x were a solution to Ax = 2b, then (1/2)x would be a solution to Ax = b.

(d) Ax = b has infinitely many solutions for every b ∈ R^m.

This will be true for any m × n matrix A with m < n and rank(A) = m. An example is m = 1, n = 2, A = [1 1].

(e) Ax = b is inconsistent for some b ∈ R^m and has infinitely many solutions whenever it is consistent.

This will be true for any m × n matrix A where rank(A) < m (which guarantees that it's sometimes inconsistent) and rank(A) < n (which guarantees that there are infinitely many solutions). For instance, we could take m = n = 2 and

    A = [ 1 0 ]
        [ 0 0 ].

(f) There are vectors b_1, b_2, b_3 ∈ R^m such that Ax = b_1 has no solutions, Ax = b_2 has one solution, and Ax = b_3 has infinitely many solutions.

We saw in class that this cannot happen.

1.5.13. Suppose A is an m × n matrix with rank m, and v_1, ..., v_k ∈ R^n with Span(v_1, ..., v_k) = R^n. Prove that Span(Av_1, ..., Av_k) = R^m.

We need to show that every vector in R^m can be written as a linear combination of Av_1, ..., Av_k. Since rank(A) = m, for any b ∈ R^m, we know that the system Ax = b has a solution, which means that b = Aw for some vector w ∈ R^n. Since the vectors v_1, ..., v_k span all of R^n, we can write w = c_1 v_1 + ··· + c_k v_k for some scalars c_1, ..., c_k. We then observe:

    b = Aw = A(c_1 v_1 + ··· + c_k v_k) = c_1 Av_1 + ··· + c_k Av_k.

So b ∈ Span(Av_1, ..., Av_k), as required.

1.5.14. Let A be an m × n matrix with row vectors A_1, ..., A_m.

(a) Suppose A_1 + ··· + A_m = 0. Deduce that rank(A) < m.

First proof: For any vector x = (x_1, ..., x_n), we have

    0 = (A_1 + ··· + A_m) · x = A_1 · x + ··· + A_m · x,

which is the sum of the entries of Ax. This means that if Ax = b has solutions, then the sum of the entries of b must be zero. Equivalently, if b is any vector whose sum of entries is nonzero, then Ax = b has no solutions. This implies that rank(A) < m.

Second proof: Let's perform some row operations on A. First, add each of the first m − 1 rows to the last row, and call the resulting matrix B. By assumption, the new m-th row will be all zeros. If we then perform Gaussian elimination to obtain an echelon form of B (which is also an echelon form of A), there will be at least one row of zeros at the bottom, and therefore rank(A) < m.

(b) More generally, suppose there is some linear combination c_1 A_1 + ··· + c_m A_m = 0, where some c_i ≠ 0. Show that rank(A) < m.

We can easily adapt the first proof from above to show that if Ax = b has solutions, then c_1 b_1 + ··· + c_m b_m = 0. If we choose b = (0, ..., 1, ..., 0), where the 1 is in the i-th entry, then c_1 b_1 + ··· + c_m b_m = c_i ≠ 0, so Ax = b has no solutions, and hence rank(A) < m.
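Note: 1.5.14(a) is easy to test: build a matrix whose last row is minus the sum of the others, and check that the rank drops and that a right-hand side with nonzero entry sum is unattainable. A short sketch (the sizes and seed are arbitrary):

    import numpy as np

    rng = np.random.default_rng(3)
    m, n = 4, 6
    A = rng.standard_normal((m - 1, n))
    A = np.vstack([A, -A.sum(axis=0)])       # force A_1 + ... + A_m = 0

    assert np.linalg.matrix_rank(A) < m      # the rank drops, as in part (a)

    # b = (1, ..., 1) has entry sum m != 0, so Ax = b must be inconsistent:
    b = np.ones(m)
    aug = np.column_stack([A, b])
    assert np.linalg.matrix_rank(aug) > np.linalg.matrix_rank(A)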

4. HW 4: Due September 27

2.1.7. Find all 2 × 2 matrices

    A = [ a b ]
        [ c d ]

satisfying:

(a) A^2 = I

We need to solve four non-linear equations:

    a^2 + bc = 1
    ab + bd = 0
    ac + cd = 0
    bc + d^2 = 1.

Combining the first and fourth equations tells you that a^2 = d^2 = 1 − bc, which means that a = ±d. We can consider two cases: either a = d ≠ 0, or a = −d.

In the first case, the second and third equations tell us that b = c = 0, and therefore the first and fourth equations say that a^2 = d^2 = 1. Hence, the only two matrices we obtain are

    [ 1 0 ]     [ −1  0 ]
    [ 0 1 ] and [  0 −1 ].

In the second case, the second and third equations don't give any constraint on b and c. The only constraint is that 1 − bc ≥ 0, since otherwise a^2 and d^2 would be negative. For any b and c with bc ≤ 1, we obtain two possible matrices, namely

    [ ±√(1 − bc)   b           ]
    [ c            ∓√(1 − bc)  ].

(b) A^2 = O

We proceed similarly to the preceding problem, where now the right-hand sides of all four equations are 0. As before, we deduce that a^2 = d^2. If a = d, then the second and third equations give b = c = 0, and then the first and fourth give a = d = 0. If a = −d, then b and c can be any numbers with bc ≤ 0 (i.e. they have opposite signs, or at least one of them is 0), and then we get

    A = [ ±√(−bc)   b        ]
        [ c         ∓√(−bc)  ].

(c) A^2 = −I

In this case, note that we can't have a = d ≠ 0, since that would force a^2 = d^2 = −1. Hence the only solutions are

    [ ±√(−1 − bc)   b            ]
    [ c             ∓√(−1 − bc)  ],

for any b and c with bc ≤ −1.

2.1.14. Find all 2 × 2 matrices A that commute with all 2 × 2 matrices B.

Suppose

    A = [ a b ]     B = [ e f ]
        [ c d ],        [ g h ],

and that for every such B, we have AB = BA. This means that for all possible e, f, g, h ∈ R, we have:

    ae + bg = ae + cf
    ce + dg = ag + ch
    af + bh = be + df
    cf + dh = bg + dh.

In particular, we can plug in (e, f, g, h) = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), or (0, 0, 0, 1). This tells us that b = c = 0 and a = d, i.e. A must be a multiple of the identity matrix. (And we already know that any multiple of the identity matrix commutes with all matrices B.)

2.2.7. (a) Calculate A_θ A_φ and A_φ A_θ.

    A_θ A_φ = [ cos θ  −sin θ ] [ cos φ  −sin φ ]
              [ sin θ   cos θ ] [ sin φ   cos φ ]

            = [ cos θ cos φ − sin θ sin φ   −(sin θ cos φ + cos θ sin φ) ]
              [ sin θ cos φ + cos θ sin φ     cos θ cos φ − sin θ sin φ  ]
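Note: the family of solutions found in 2.1.7(a) can be spot-checked numerically. The sketch below picks arbitrary b and c (flipping a sign if needed so that bc ≤ 1) and confirms that the resulting matrix squares to the identity:

    import numpy as np

    rng = np.random.default_rng(4)
    b, c = rng.uniform(-2, 2, size=2)
    if b * c > 1:
        c = -c                        # ensure bc <= 1, so the root is real

    a = np.sqrt(1 - b * c)
    A = np.array([[a, b],
                  [c, -a]])           # the a = -d family from part (a)
    assert np.allclose(A @ A, np.eye(2))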

And A_φ A_θ equals the same matrix, as can be seen by swapping the roles of θ and φ everywhere.

(b) Use your answer to part (a) to derive the addition formulas for sine and cosine.

Geometrically, rotating the plane by θ and then rotating it by φ is the same as rotating it by θ + φ all at once. Hence, the above matrix is equal to A_{θ+φ}. This means, in particular, that

    cos(θ + φ) = cos θ cos φ − sin θ sin φ
    sin(θ + φ) = sin θ cos φ + cos θ sin φ.

(Note: If, like me, you always have difficulty remembering the angle addition formulas, you can easily remember them using this method!)

2.2.8. For 0 ≤ θ ≤ π, prove that ‖A_θ x‖ = ‖x‖ and that the angle between x and A_θ x equals θ.

To avoid writing lots of square roots, let's just compute ‖A_θ x‖^2. We have:

    ‖A_θ x‖^2 = ‖(x_1 cos θ − x_2 sin θ, x_1 sin θ + x_2 cos θ)‖^2
              = (x_1 cos θ − x_2 sin θ)^2 + (x_1 sin θ + x_2 cos θ)^2
              = x_1^2 cos^2 θ − 2x_1x_2 cos θ sin θ + x_2^2 sin^2 θ
                  + x_1^2 sin^2 θ + 2x_1x_2 cos θ sin θ + x_2^2 cos^2 θ
              = (x_1^2 + x_2^2)(cos^2 θ + sin^2 θ)
              = x_1^2 + x_2^2 = ‖x‖^2.

If φ denotes the angle between x and A_θ x, then

    cos φ = (x · A_θ x)/(‖x‖ ‖A_θ x‖)
          = (x_1(x_1 cos θ − x_2 sin θ) + x_2(x_1 sin θ + x_2 cos θ))/‖x‖^2
          = ((x_1^2 + x_2^2) cos θ)/‖x‖^2
          = cos θ.

Since θ and φ are both between 0 and π and have the same cosine, we must have θ = φ.
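Note: both 2.2.7 and 2.2.8 are natural candidates for a numerical spot check. In this sketch the helper rot and the specific angles are illustrative choices:

    import numpy as np

    def rot(t):
        # Rotation of the plane through angle t.
        return np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])

    theta, phi = 0.7, 1.9
    assert np.allclose(rot(theta) @ rot(phi), rot(theta + phi))   # 2.2.7(a)
    assert np.allclose(rot(theta) @ rot(phi), rot(phi) @ rot(theta))

    x = np.array([3.0, -4.0])
    y = rot(theta) @ x
    assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))       # 2.2.8
    angle = np.arccos(x @ y / (x @ x))    # since |x||y| = |x|^2 = x . x
    assert np.isclose(angle, theta)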

2.3.16. Suppose A is an n × n matrix satisfying A^10 = O. Prove that the matrix I_n − A is invertible.

There's actually nothing special about 10 here; let's assume A^m = O for some integer m > 1. The trick is first to remember some facts about factoring polynomials:

    t^2 − 1 = (t − 1)(t + 1)
    t^3 − 1 = (t − 1)(t^2 + t + 1)
    t^4 − 1 = (t − 1)(t^3 + t^2 + t + 1)

and in general

    t^m − 1 = (t − 1)(t^{m−1} + t^{m−2} + ··· + t + 1).

Analogous formulas hold for matrices:

    A^m − I_n = (A − I_n)(A^{m−1} + A^{m−2} + ··· + A + I_n).

We can prove this by just expanding out the right side and canceling a lot of terms. Now if A^m = O, we see that

    −I_n = (A − I_n)(A^{m−1} + A^{m−2} + ··· + A + I_n),

and hence

    I_n = (I_n − A)(A^{m−1} + A^{m−2} + ··· + A + I_n),

so

    (I_n − A)^{−1} = A^{m−1} + A^{m−2} + ··· + A + I_n.

5. HW 5: Due October 11

2.4.12. Suppose A and B are two m × n matrices with the same reduced echelon form. Show that there exists an invertible matrix E so that EA = B. Is the converse true?

Let C denote the reduced echelon form of both A and B. Since row operations can be realized by multiplying on the left by elementary matrices, there are elementary matrices E_1, ..., E_k such that C = E_k ··· E_1 A, and elementary matrices E'_1, ..., E'_l such that C = E'_l ··· E'_1 B. Equating these two, and using the fact that elementary matrices are invertible, we have:

    E_k ··· E_1 A = E'_l ··· E'_1 B
    B = (E'_1)^{−1} ··· (E'_l)^{−1} E_k ··· E_1 A.

Hence we define E = (E'_1)^{−1} ··· (E'_l)^{−1} E_k ··· E_1, which is invertible since it is a product of invertible matrices, and EA = B.

To see the converse, suppose EA = B, where E is invertible. This means that E can be row reduced to the identity. It follows that E is equal to a product of elementary matrices. Thus, there is a sequence of row operations taking A to B. By the uniqueness of the reduced echelon form, we see that A and B must have the same reduced echelon form.

2.5.12. Suppose A is a symmetric n × n matrix. If x, y ∈ R^n are vectors satisfying Ax = 2x and Ay = 3y, show that x and y are orthogonal.

Using the fact that A is symmetric, we have

    x · Ay = x^T A y = y^T A^T x = y^T A x = Ax · y.

Also, x · Ay = x · 3y = 3(x · y), and Ax · y = 2x · y = 2(x · y). Putting these together, we see that 2(x · y) = 3(x · y), and hence x · y = 0.
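Note: 2.3.16 can be spot-checked with a nilpotent matrix. A strictly upper triangular n × n matrix satisfies A^n = O, so the geometric-series formula above should produce the inverse of I − A; this sketch (sizes arbitrary) confirms it:

    import numpy as np

    n = 5
    A = np.triu(np.ones((n, n)), k=1)    # strictly upper triangular: A^n = O
    assert np.allclose(np.linalg.matrix_power(A, n), 0)

    # B = I + A + ... + A^{n-1} should invert I - A.
    B = sum(np.linalg.matrix_power(A, j) for j in range(n))
    assert np.allclose((np.eye(n) - A) @ B, np.eye(n))
    assert np.allclose(B @ (np.eye(n) - A), np.eye(n))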

6. HW 6: Due October 16

3.1.13. Let V ⊆ R^n be a subspace. Show that V ⊆ (V^⊥)^⊥. Do you think more is true?

For any vector v ∈ V, and any vector w ∈ V^⊥, we have v · w = 0 (since w is orthogonal to every vector in V). That is, v is orthogonal to every vector in V^⊥, so v ∈ (V^⊥)^⊥. This shows that V ⊆ (V^⊥)^⊥. The reverse inclusion was shown in class.

3.1.14. Let V and W be subspaces of R^n with the property that V ⊆ W. Prove that W^⊥ ⊆ V^⊥.

For any vector u ∈ W^⊥, we have u · w = 0 for every vector w ∈ W. In particular, this is true for every w ∈ V. Thus, u ∈ V^⊥.

3.2.10. Let A be an m × n matrix, and let B be an n × p matrix.

(a) Prove that N(B) ⊆ N(AB).

For any vector v ∈ N(B), we have Bv = 0, and hence (AB)v = A(Bv) = 0. (Note: here the 0 in the first equation is in R^n, and the 0 in the second is in R^m.) Hence, v ∈ N(AB).

(b) Prove that C(AB) ⊆ C(A).

If v ∈ C(AB), then there is a vector w ∈ R^p such that (AB)w = v. (I.e., the system (AB)x = v is consistent.) Rewriting this, we see that A(Bw) = v, which means that the system Ax = v is consistent as well. Hence, v ∈ C(A).

(c) If A is n × n and nonsingular, prove that N(B) = N(AB).

We already know from part (a) that N(B) ⊆ N(AB); we need to prove that N(AB) ⊆ N(B). For any v ∈ N(AB), we have ABv = 0. Multiplying both sides on the left by A^{−1} (which exists because A is nonsingular), we see that Bv = 0, so v ∈ N(B).

(d) If B is n × n and nonsingular, prove that C(AB) = C(A).

We already know from part (b) that C(AB) ⊆ C(A); we just need to show that C(A) ⊆ C(AB). For any vector v ∈ C(A), there is a vector w ∈ R^n such that Aw = v. Then (AB)(B^{−1}w) = Aw = v, so v ∈ C(AB) as well.

3.2.11. Let A be an m × n matrix. Prove that N(A^T A) = N(A).

The inclusion N(A) ⊆ N(A^T A) follows from the previous problem; we need to see that N(A^T A) ⊆ N(A). If v ∈ N(A^T A), then A^T Av = 0, so Av ∈ N(A^T). Also, Av ∈ C(A), by definition. And since C(A)^⊥ = N(A^T), the vector Av is orthogonal to itself, so Av · Av = 0, which forces Av = 0. Hence v ∈ N(A), as required.

7. HW 7: Due October 23

3.3.10. Suppose v_1, ..., v_k are nonzero vectors such that v_i · v_j = 0 whenever i ≠ j. Prove that {v_1, ..., v_k} is linearly independent.

Suppose that c_1 v_1 + ··· + c_k v_k = 0; we will show that this implies that c_1 = ··· = c_k = 0, which means that the vectors v_1, ..., v_k are linearly independent. For each i = 1, ..., k, taking the dot product of this equation with v_i gives:

    c_1(v_1 · v_i) + ··· + c_i(v_i · v_i) + ··· + c_k(v_k · v_i) = 0.

By hypothesis, all of the terms in this sum except for the i-th are 0, so c_i(v_i · v_i) = 0. Moreover, since v_i ≠ 0, we have v_i · v_i ≠ 0, so we can divide and deduce c_i = 0.
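Note: 3.2.11 has a useful numerical consequence: A^T A and A always have the same rank (both equal n minus the dimension of the common null space). The sketch below builds a deliberately rank-deficient A and checks this, together with a shared null vector; the sizes and the planted dependency are arbitrary:

    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.standard_normal((7, 4))
    A[:, 3] = A[:, 0] - 2 * A[:, 1]     # plant a column dependency

    # N(A^T A) = N(A) forces equal ranks.
    assert np.linalg.matrix_rank(A.T @ A) == np.linalg.matrix_rank(A)

    # By construction v = (1, -2, 0, -1) lies in N(A), hence in N(A^T A).
    v = np.array([1.0, -2.0, 0.0, -1.0])
    assert np.allclose(A @ v, 0) and np.allclose(A.T @ (A @ v), 0)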

3.3.11. Suppose v_1, ..., v_n are nonzero, mutually orthogonal vectors in R^n.

(a) Prove that they form a basis for R^n.

By the previous problem, we see that v_1, ..., v_n are linearly independent, and any n linearly independent vectors in R^n must span R^n.

(b) Given any x ∈ R^n, give an explicit formula for the coordinates of x with respect to the basis {v_1, ..., v_n}.

Suppose x = a_1 v_1 + ··· + a_n v_n. By definition, the coordinates of x are the coefficients a_1, ..., a_n. Just as in the previous problem, for each i = 1, ..., n, we take the dot product of each side with v_i:

    x · v_i = a_1(v_1 · v_i) + ··· + a_i(v_i · v_i) + ··· + a_n(v_n · v_i) = a_i(v_i · v_i),

and therefore a_i = (x · v_i)/(v_i · v_i).

(c) Deduce from your answer to part (b) that x = Σ_{i=1}^n proj_{v_i}(x).

We have

    x = Σ_{i=1}^n a_i v_i = Σ_{i=1}^n ((x · v_i)/(v_i · v_i)) v_i = Σ_{i=1}^n proj_{v_i}(x).

3.3.15. Suppose k > n. Prove that any k vectors in R^n must form a linearly dependent set.

Let v_1, ..., v_k be the vectors, and let A be the n × k matrix whose columns are v_1, ..., v_k. Since rank(A) ≤ n < k, there must be a nonzero vector c with Ac = 0. But this means that

    c_1 v_1 + ··· + c_k v_k = 0

and the coefficients c_i are not all zero, as required.

3.3.19. Let A be an n × n matrix. Prove that if A is nonsingular and {v_1, ..., v_k} is linearly independent, then {Av_1, ..., Av_k} is likewise linearly independent. Give an example to show that the result is false if A is singular.

Suppose we have a linear relation c_1(Av_1) + ··· + c_k(Av_k) = 0. We may rewrite this as A(c_1 v_1 + ··· + c_k v_k) = 0. Since A is nonsingular, the only solution to Ax = 0 is the zero vector, so c_1 v_1 + ··· + c_k v_k = 0. Therefore, c_1 = ··· = c_k = 0, since v_1, ..., v_k are linearly independent. We have thus shown that Av_1, ..., Av_k are linearly independent as well.

If

    A = [ 1 0 ]
        [ 0 0 ]

and v_1 = (0, 1), then Av_1 = (0, 0), so the set {Av_1} is linearly dependent even though {v_1} is linearly independent.

Note: The proof of 3.3.21 is essentially the same.
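Note: the coordinate formula from 3.3.11(b) can be exercised numerically. The sketch builds a mutually orthogonal (but deliberately not orthonormal) basis from a QR factorization, then reconstructs a random x; the sizes, scales, and seed are illustrative choices:

    import numpy as np

    rng = np.random.default_rng(6)
    Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
    V = Q * np.array([1.0, 2.0, 3.0, 4.0])   # columns v_i: orthogonal, lengths 1..4

    x = rng.standard_normal(4)
    a = (x @ V) / np.einsum('ij,ij->j', V, V)   # a_i = (x . v_i)/(v_i . v_i)
    assert np.allclose(V @ a, x)                # x = a_1 v_1 + ... + a_n v_n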

3.3.22. Let A be an n × n matrix, and suppose v_1, v_2, v_3 ∈ R^n are nonzero vectors such that Av_1 = v_1, Av_2 = 2v_2, and Av_3 = 3v_3. Prove that {v_1, v_2, v_3} is linearly independent.

Following the hint, we'll first show that {v_1, v_2} is linearly independent. If not, then v_2 is a nonzero multiple of v_1, say v_2 = av_1, where a ≠ 0. On the one hand, we have Av_2 = 2v_2 = 2av_1, but on the other hand Av_2 = A(av_1) = av_1. Since a ≠ 2a and v_1 ≠ 0, this is impossible. Hence, {v_1, v_2} is linearly independent.

Next, if {v_1, v_2, v_3} is linearly dependent, then v_3 must be in the span of v_1, v_2: say v_3 = bv_1 + cv_2, where b and c are not both zero. Then Av_3 = 3v_3 = 3bv_1 + 3cv_2, but at the same time Av_3 = A(bv_1 + cv_2) = bv_1 + 2cv_2. Combining these:

    3bv_1 + 3cv_2 = bv_1 + 2cv_2,

and hence

    2bv_1 + cv_2 = 0.

Since {v_1, v_2} is linearly independent, this forces b = c = 0, a contradiction. Hence, {v_1, v_2, v_3} is linearly independent.

3.4.20. Let U and V be subspaces of R^n. Prove that if U ∩ V = {0}, then dim(U + V) = dim U + dim V.

Suppose that dim U = k and dim V = l; we will prove that dim(U + V) = k + l. Let {u_1, ..., u_k} be a basis for U and {v_1, ..., v_l} be a basis for V. We claim that {u_1, ..., u_k, v_1, ..., v_l} is a basis for U + V.

It is simple to check that these vectors span U + V. Any vector x ∈ U + V can be written as u + v, where u ∈ U and v ∈ V. We can write u = a_1 u_1 + ··· + a_k u_k and v = b_1 v_1 + ··· + b_l v_l for some a_1, ..., a_k, b_1, ..., b_l ∈ R, and therefore

    x = u + v = a_1 u_1 + ··· + a_k u_k + b_1 v_1 + ··· + b_l v_l,

as required.

To see that {u_1, ..., u_k, v_1, ..., v_l} is linearly independent, suppose we have a linear relation:

    c_1 u_1 + ··· + c_k u_k + d_1 v_1 + ··· + d_l v_l = 0.

Rewrite this as

    c_1 u_1 + ··· + c_k u_k = −d_1 v_1 − ··· − d_l v_l.

The left-hand side is an element of U, and the right-hand side is an element of V, and they are equal to each other. Since U ∩ V = {0}, each side must equal 0. Since {u_1, ..., u_k} and {v_1, ..., v_l} are each linearly independent, we see that c_1 = ··· = c_k = d_1 = ··· = d_l = 0, as required.
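Note: 3.4.20 translates directly into a rank statement: if the columns of matrices U and V are bases of the two subspaces and the subspaces meet only in {0}, the pooled columns have rank k + l. Randomly chosen subspaces of R^6 of dimensions 2 and 3 intersect only in {0} with probability 1, so the following sketch (arbitrary sizes and seed) should pass:

    import numpy as np

    rng = np.random.default_rng(7)
    U = rng.standard_normal((6, 2))   # columns: a basis of U, so dim U = 2
    V = rng.standard_normal((6, 3))   # columns: a basis of V, so dim V = 3

    rank_sum = np.linalg.matrix_rank(U) + np.linalg.matrix_rank(V)
    assert np.linalg.matrix_rank(np.column_stack([U, V])) == rank_sum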