Math 54 Homework 3 Solutions 9/4


1.8

1.8.2 Here A = [1/3 0 0; 0 1/3 0; 0 0 1/3], so

T(u) = Au = (1/3)·(3, 6, -9) = (1, 2, -3),   and   T(v) = A(a, b, c) = (a/3, b/3, c/3).

1.8.8 The number of rows of a matrix is the size (dimension) of the space it maps to; the number of columns is the size of the space it maps from. Therefore A must be a 7 × 5 matrix.

1.8.12 We can solve this by row reducing the augmented matrix [A b], just like before, and seeing if there is a solution. If there is, b is in the range, and if not then it isn't.

Row reducing [A b] produces a row of the form [0 0 0 0 | c] with c ≠ 0. Now we can see there's a pivot in the final column, hence there is no solution, meaning b is not in the range of the transformation.

1.8.14 T scales each vector by a factor of 2.

1.8.16 T projects each vector onto the y-axis and scales that by a factor of 2.

1.8.17
T(2u) = 2T(u) = 2·[4, 1] = [8, 2],
T(3v) = 3T(v) = 3·[-1, 3] = [-3, 9],
T(2u + 3v) = 2T(u) + 3T(v) = [8, 2] + [-3, 9] = [5, 11].

1.8.22 (a) True. If the components of x are [x_1, x_2, ..., x_n] and the columns of A are v_1, v_2, ..., v_n, then Ax is the linear combination x_1 v_1 + x_2 v_2 + ... + x_n v_n.
(b) True. This is because if A is a matrix, v and w are vectors of the right dimension, and r is a real number, then A(rv) = r(Av) and A(v + w) = Av + Aw.
(c) False. This is an existence question. A uniqueness question would be "Is there only one vector v in R^n such that T(v) = c?"
(d) True. This is the definition of a linear transformation.
(e) True. This follows from the definition mentioned in part (d): since the zero vector 0 equals 0·0 (the scalar zero times the zero vector), we see that T(0) = T(0·0) = 0·T(0) = 0.
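A quick numerical cross-check for range questions like 1.8.12: b is in the range of x ↦ Ax exactly when rank[A b] = rank A, i.e. when no pivot lands in the augmented column. A minimal sketch in NumPy, with illustrative matrices rather than the exercise's entries:

```python
import numpy as np

# Illustrative A and b (not the exercise's entries): is b in the range of x -> Ax?
A = np.array([[1., -3., 2.],
              [0.,  1., -4.],
              [2., -6., 4.]])
b = np.array([1., 0., 3.])

aug = np.column_stack([A, b])
in_range = np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(A)
print(in_range)  # False here: row reduction of [A b] puts a pivot in the last column
```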

1.8.30 Since the vectors v_1, ..., v_p span R^n, we can write any element v of R^n in the form v = a_1 v_1 + a_2 v_2 + ... + a_p v_p. Then

T(v) = T(a_1 v_1 + a_2 v_2 + ... + a_p v_p)
     = T(a_1 v_1) + T(a_2 v_2) + ... + T(a_p v_p)
     = a_1 T(v_1) + a_2 T(v_2) + ... + a_p T(v_p)
     = a_1·0 + a_2·0 + ... + a_p·0 = 0.

In the second and third lines, we just used the definition of a linear operator.

1.8.31 If {v_1, v_2, v_3} is linearly dependent, there must be some numbers a, b, c for which a v_1 + b v_2 + c v_3 = 0 and not all of a, b, c are 0. Hitting the equation with T, we get

T(a v_1 + b v_2 + c v_3) = T(0) = 0,   i.e.   a T(v_1) + b T(v_2) + c T(v_3) = 0.

Since not all of a, b, c are 0, we have shown that {T(v_1), T(v_2), T(v_3)} is linearly dependent (with the same coefficients as for {v_1, v_2, v_3}).

1.8.32 There is an absolute value in the definition, so let's see what happens when we multiply by -1. We see T(1, 1) = (-1, 3), while T(-1, -1) = (-3, 3). But if T were linear, we would have T(-1, -1) = -T(1, 1) = (1, -3), which is not what happens, so T is not linear.

1.9

1.9.2 The standard matrix of T is the one whose columns are T(e_1), T(e_2) and T(e_3). Hence it is just [1 -2 3; 4 9 -8].
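The "multiply the input by -1" test from 1.8.32 is also easy to run numerically. Here is a minimal sketch with a hypothetical map S involving an absolute value (a stand-in, not the exercise's exact formula); any linear map would have to satisfy S(-v) = -S(v):

```python
import numpy as np

# Hypothetical map with an absolute value in its definition (stand-in example).
def S(v):
    x1, x2 = v
    return np.array([4 * x1 - 2 * x2, 3 * abs(x2)])

v = np.array([1.0, 1.0])
print(S(-v))                       # [-2.  3.]
print(-S(v))                       # [-2. -3.]
print(np.allclose(S(-v), -S(v)))   # False, so S cannot be linear
```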

1.9.4 T(e_1) = e_1 and T(e_2) = 2e_1 + e_2. Thus the standard matrix is [1 2; 0 1].

1.9.6 Look graphically at what is happening: if we rotate e_1 by 3π/2 radians, we get -e_2, and if we rotate e_2, we get e_1. That is to say, T(e_1) = -e_2 and T(e_2) = e_1. Thus the matrix is [0 1; -1 0].

1.9.9 Thinking graphically again: if we reflect e_1 nothing happens, and if we then rotate it by π/2 we get e_2. If we reflect e_2 over the x-axis we get -e_2, and if we rotate -e_2 we get e_1. The matrix is then [0 1; 1 0].

1.9.19 First, let's calculate: T(1, 0, 0) = (1, 0); T(0, 1, 0) = (-5, 1); and T(0, 0, 1) = (4, -6). Hence the matrix that implements T is [1 -5 4; 0 1 -6].

1.9.22 If the first coordinate has to be 0, this means 2x_1 = x_2. Plugging that in to the second coordinate, we get -3x_1 + x_2 = -3x_1 + 2x_1 = -x_1, and we want this to equal 1. Hence x_1 = -1, x_2 = -2 makes the first two coordinates work out. We don't have any more variables, so now the last thing we need to do is check that this solution also works for the third coordinate. Luckily it does: 2x_1 - 3x_2 = -2 + 6 = 4, just as we wanted. So, the solution is x = (-1, -2).

Alternately (and more systematically), the standard matrix for T is [2 -1; -3 1; 2 -3], which gives the augmented matrix [2 -1 0; -3 1 1; 2 -3 4]. Row reducing gives [1 -1/2 0; 0 1 -2; 0 0 0]. The final augmented matrix tells us the system is consistent, x_2 = -2, and then x_1 = -1.
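The matrix in 1.9.6 can also be read off from the general rotation matrix [cos θ, -sin θ; sin θ, cos θ] evaluated at θ = 3π/2; a small check:

```python
import numpy as np

theta = 3 * np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.round(R).astype(int))              # [[0 1], [-1 0]], matching the matrix above
print(np.round(R @ np.array([1.0, 0.0])))   # e_1 maps to (0, -1) = -e_2
```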

1.9.26 (a, b) Recall from the section in the book that T is onto iff its standard matrix has a pivot in every row, and one-to-one iff the standard matrix has a pivot in every column. From #2, the standard matrix is [1 -2 3; 4 9 -8]. There are more columns than rows, so T cannot have a pivot in every column, hence cannot be one-to-one. However, subtracting 4 times the first row from the second row reduces the standard matrix to [1 -2 3; 0 17 -20], so there is a pivot in every row and hence T is onto.

1.9.33 (optional) If B is some matrix, then Be_i, the product of B with the vector e_i, just gives the ith column of B. Since we are assuming T(x) = Ax = Bx for all x, choosing x = e_1, e_2, ..., e_n gives us that each column of A equals the corresponding column of B (i.e. the fact that Ae_1 = Be_1 tells us their first columns are the same, etc.).

1.9.35 If T is onto then m ≤ n, and if T is one-to-one then m ≥ n. As stated in #20, T is onto iff its standard matrix has a pivot in every row. If the standard matrix has more rows than columns, i.e. m > n, this cannot happen; so if T in fact is onto, then we need m ≤ n. Similarly, since T is one-to-one iff its standard matrix has a pivot in every column, and a matrix with more columns than rows (n > m) cannot have a pivot in every column, T being one-to-one means m ≥ n.

1.9.36 Because it asks: for each w in the codomain of T, does there exist some vector v in the domain of T such that T(v) = w? Alternatively, let A be the standard matrix of T. We know from an earlier section that the equation Av = w has a solution iff w is in the span of the columns of A. Hence, being onto asks: for each w in the codomain, does there exist a solution to Av = w?
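The pivot counts in 1.9.26 can be confirmed with a rank computation, since the rank equals the number of pivot positions. A short check, using the standard matrix as reconstructed in #2 above:

```python
import numpy as np

A = np.array([[1., -2., 3.],
              [4.,  9., -8.]])     # standard matrix from #2, as reconstructed above
r = np.linalg.matrix_rank(A)
print(r == A.shape[0])  # True: a pivot in every row, so T is onto
print(r == A.shape[1])  # False: no pivot in every column, so T is not one-to-one
```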

2.1

2.1.1

-2A = -2·[2 0 -1; 4 -5 2] = [-4 0 2; -8 10 -4];

B - 2A = B + (-2A) = [7 -5 1; 1 -4 -3] + [-4 0 2; -8 10 -4] = [3 -5 3; -7 6 -7], using the last line for -2A;

CD = [1·3 + 2·(-1), 1·5 + 2·4; (-2)·3 + 1·(-1), (-2)·5 + 1·4] = [1 13; -7 -6].

The third expression, AC, is not defined, because the number of columns in A and the number of rows in C are unequal.

2.1.2

A + 3B = [2 0 -1; 4 -5 2] + 3·[7 -5 1; 1 -4 -3] = [23 -15 2; 7 -17 -7];

DB = [3·7 + 5·1, 3·(-5) + 5·(-4), 3·1 + 5·(-3); (-1)·7 + 4·1, (-1)·(-5) + 4·(-4), (-1)·1 + 4·(-3)] = [26 -35 -12; -3 -11 -13].

The quantity 2C - 3E is not defined, because C and E (and hence 2C and 3E) have different dimensions. The quantity EC is not defined because the number of columns of E is unequal to the number of rows of C.
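The entry-by-entry computations in 2.1.1 and 2.1.2 are easy to confirm numerically; a quick check, using the matrices as reconstructed from the computations above:

```python
import numpy as np

# A, B, C, D as reconstructed from the computations above.
A = np.array([[2, 0, -1], [4, -5, 2]])
B = np.array([[7, -5, 1], [1, -4, -3]])
C = np.array([[1, 2], [-2, 1]])
D = np.array([[3, 5], [-1, 4]])

print(-2 * A)      # [[-4  0  2], [-8 10 -4]]
print(B - 2 * A)   # [[ 3 -5  3], [-7  6 -7]]
print(A + 3 * B)   # [[23 -15  2], [ 7 -17 -7]]
print(D @ B)       # [[26 -35 -12], [-3 -11 -13]]
print(C @ D)       # [[ 1 13], [-7 -6]]
```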

2.1.10 Clearly B ≠ C. Computing the rest,

AB = [3·1 + 6·3, 3·(-1) + 6·4; 1·1 + 2·3, 1·(-1) + 2·4] = [21 21; 7 7],

AC = [3·3 + 6·2, 3·5 + 6·1; 1·3 + 2·2, 1·5 + 2·1] = [21 21; 7 7],

so AB = AC even though B ≠ C. What this shows is that you can't divide (i.e. cancel) nonzero matrices like you do real numbers.

2.1.12 A column of B should be a vector that A cancels, i.e. a solution to Av = 0. One such v is (2, 1) (which you can find either by using row reduction to find a non-trivial solution, or just by noticing that the first column of A times -2 equals the second column), and any multiple of this also works: say (4, 2). Hence B = [2 4; 1 2] works.

2.1.15 (a) False: multiplication of column vectors is not defined.
(b) False: switching around A and B in the sentence makes it true.
(c) True: the distributive property continues to hold for matrix multiplication, by linearity.
(d) True: it doesn't matter whether we add the entries and then transpose them, or transpose the entries and then add them.
(e) False: transposes switch the order of products.

2.1.18 It is all zeros. Each entry of the third column of AB is computed by going along a row of A and down the third column of B, multiplying numbers and adding as we go along. Since all the numbers from that column of B are 0, so will the entries of the third column of AB be.

2.1.22 If we write B in terms of its columns, say B = [v_1 v_2 ... v_n], then AB = [Av_1 Av_2 ... Av_n]. So say the columns {v_1, ..., v_n} are linearly dependent: say a_1 v_1 + a_2 v_2 + ... + a_n v_n = 0 and not all of the a_i's are 0. Then by linearity of matrix multiplication, a_1 Av_1 + a_2 Av_2 + ... + a_n Av_n = 0, i.e. {Av_1, ..., Av_n} are linearly dependent as well. We said in the first sentence that these are the columns of AB, and this is what we initially set out to prove.
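The cancellation failure in 2.1.10 can likewise be verified in a couple of lines, using the matrices as reconstructed from the computations above:

```python
import numpy as np

A = np.array([[3, 6], [1, 2]])
B = np.array([[1, -1], [3, 4]])
C = np.array([[3, 5], [2, 1]])

print(A @ B)   # [[21 21], [7 7]]
print(A @ C)   # [[21 21], [7 7]]
print(np.array_equal(A @ B, A @ C), np.array_equal(B, C))  # True False
```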

2.1.23 If Ax = 0, then CAx = C(Ax) = C·0 = 0. But CA = I_n, which means CAx = I_n x = x. Putting the two sentences together, this means x = 0.

If A had more columns than rows, the columns would be linearly dependent: after row reducing, there couldn't be a pivot in every column. By the last problem, this would mean that the columns of CA were linearly dependent; but clearly the columns of I_n are not (i.e. they are linearly independent), so if CA = I_n then A cannot have more columns than rows.

2.1.31 As stated two problems ago, if we write A in terms of its column vectors, say A = [v_1 v_2 ... v_n], where each v_i is in R^m, then I_m A = [I_m v_1  I_m v_2 ... I_m v_n]. But I_m v = v for any v in R^m, hence I_m A = [v_1 v_2 ... v_n] = A.

2.1.32 The column definition of A·I_n says that the ith column of A·I_n is a linear combination of the columns of A, whose coefficients are given by the ith column of I_n. But by definition, all of these coefficients are 0 except for the ith one, which is 1: hence, rereading this definition, it says that the ith column of A·I_n is the ith column of A. This then verbatim means that A·I_n = A, as desired.
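To see 2.1.23 concretely, here is a small hypothetical example (not taken from the exercises) of a matrix A with a left inverse C. Since CA = I_2, the argument above says Ax = 0 has only the trivial solution, which matches A having a pivot in every column:

```python
import numpy as np

# Hypothetical example: A is 3x2 and C is a left inverse of A, so CA = I_2.
A = np.array([[1, 0], [2, 1], [0, 3]])
C = np.array([[1, 0, 0], [-2, 1, 0]])

print(C @ A)                     # [[1 0], [0 1]] = I_2
print(np.linalg.matrix_rank(A))  # 2: a pivot in every column, so Ax = 0 forces x = 0
```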