Winter 2017 Ma 1b Analytical Problem Set 2 Solutions


1. (5 pts) From Ch. 1.10 in Apostol: Problems 1, 3, 5, 7, 9. Also, when appropriate, exhibit a basis for S.

Solution.

(1.10.1) Yes, S is a subspace of V_3 with basis {(0, 0, 1), (0, 1, 0)} and dimension 2. Suppose (x_1, y_1, z_1), (x_2, y_2, z_2) ∈ S. Then x_1 = 0 and x_2 = 0, so x_1 + x_2 = 0 and (x_1, y_1, z_1) + (x_2, y_2, z_2) = (x_1 + x_2, y_1 + y_2, z_1 + z_2) ∈ S, and S is closed under vector addition. For any a ∈ R and (x_1, y_1, z_1) ∈ S we have x_1 = 0, so ax_1 = 0 and a(x_1, y_1, z_1) = (ax_1, ay_1, az_1) ∈ S, and S is closed under scalar multiplication. Thus S is a subspace of V_3. The set {(0, 0, 1), (0, 1, 0)} is independent in S because for any a, b ∈ R, a(0, 0, 1) + b(0, 1, 0) = (0, 0, 0) gives us (0, b, a) = (0, 0, 0), so a = b = 0. Moreover L({(0, 0, 1), (0, 1, 0)}) = S because for any (x, y, z) ∈ S, x = 0, so (x, y, z) = (0, y, z) = y(0, 1, 0) + z(0, 0, 1). Thus {(0, 0, 1), (0, 1, 0)} is a basis of S, and S has dimension 2.

(1.10.3) Yes, S is a subspace of V_3 with basis {(-1, 0, 1), (-1, 1, 0)} and dimension 2. Suppose (x_1, y_1, z_1), (x_2, y_2, z_2) ∈ S. Then x_1 + y_1 + z_1 = 0 and x_2 + y_2 + z_2 = 0, so (x_1 + x_2) + (y_1 + y_2) + (z_1 + z_2) = (x_1 + y_1 + z_1) + (x_2 + y_2 + z_2) = 0 and (x_1, y_1, z_1) + (x_2, y_2, z_2) = (x_1 + x_2, y_1 + y_2, z_1 + z_2) ∈ S, and S is closed under vector addition. For any a ∈ R and (x_1, y_1, z_1) ∈ S we have x_1 + y_1 + z_1 = 0, so ax_1 + ay_1 + az_1 = 0 and a(x_1, y_1, z_1) = (ax_1, ay_1, az_1) ∈ S, and S is closed under scalar multiplication. Thus S is a subspace of V_3. The set {(-1, 0, 1), (-1, 1, 0)} is independent in S because for any a, b ∈ R, a(-1, 0, 1) + b(-1, 1, 0) = (0, 0, 0) gives us (-a - b, b, a) = (0, 0, 0), so a = b = 0. Moreover L({(-1, 0, 1), (-1, 1, 0)}) = S because for any (x, y, z) ∈ S, x + y + z = 0, so (x, y, z) = (-y - z, y, z) = y(-1, 1, 0) + z(-1, 0, 1). Thus {(-1, 0, 1), (-1, 1, 0)} is a basis of S, and S has dimension 2.

(1.10.5) Yes, S is a subspace of V_3 with basis {(1, 1, 1)} and dimension 1. Suppose (x_1, y_1, z_1), (x_2, y_2, z_2) ∈ S. Then x_1 = y_1 = z_1 and x_2 = y_2 = z_2, so x_1 + x_2 = y_1 + y_2 = z_1 + z_2 and (x_1, y_1, z_1) + (x_2, y_2, z_2) = (x_1 + x_2, y_1 + y_2, z_1 + z_2) ∈ S, and S is closed under vector addition. For any a ∈ R and (x_1, y_1, z_1) ∈ S we have x_1 = y_1 = z_1, so ax_1 = ay_1 = az_1 and a(x_1, y_1, z_1) = (ax_1, ay_1, az_1) ∈ S, and S is closed under scalar multiplication. Thus S is a subspace of V_3. The set {(1, 1, 1)} is independent in S because for any a ∈ R, a(1, 1, 1) = (0, 0, 0) gives us (a, a, a) = (0, 0, 0), so a = 0. Moreover L({(1, 1, 1)}) = S because for any (x, y, z) ∈ S, x = y = z, so (x, y, z) = (x, x, x) = x(1, 1, 1). Thus {(1, 1, 1)} is a basis of S, and S has dimension 1.

(1.10.7) No, S is not a subspace of V_3, since (1, 1, 0), (1, -1, 0) ∈ S but (1, 1, 0) + (1, -1, 0) = (2, 0, 0) ∉ S.

(1.10.9) Yes, S is a subspace of V_3 with basis {(1, 2, 3)} and dimension 1. Suppose (x_1, y_1, z_1), (x_2, y_2, z_2) ∈ S. Then y_1 = 2x_1, z_1 = 3x_1, y_2 = 2x_2, and z_2 = 3x_2, so y_1 + y_2 = 2(x_1 + x_2), z_1 + z_2 = 3(x_1 + x_2), and (x_1, y_1, z_1) + (x_2, y_2, z_2) = (x_1 + x_2, y_1 + y_2, z_1 + z_2) ∈ S, and S is closed under vector addition. For any a ∈ R and (x_1, y_1, z_1) ∈ S we have y_1 = 2x_1 and z_1 = 3x_1, so ay_1 = 2ax_1, az_1 = 3ax_1, and a(x_1, y_1, z_1) = (ax_1, ay_1, az_1) ∈ S, and S is closed under scalar multiplication. Thus S is a subspace of V_3. The set {(1, 2, 3)} is independent in S because for any a ∈ R, a(1, 2, 3) = (0, 0, 0) gives us (a, 2a, 3a) = (0, 0, 0), so a = 0. Moreover L({(1, 2, 3)}) = S because for any (x, y, z) ∈ S, y = 2x and z = 3x, so (x, y, z) = (x, 2x, 3x) = x(1, 2, 3). Thus {(1, 2, 3)} is a basis of S, and S has dimension 1.
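Remark (not part of the assigned solution). The claimed bases can also be sanity-checked numerically. The following is a minimal Python/numpy sketch, purely for illustration: for each subspace of 1.10.1, 1.10.3, 1.10.5, and 1.10.9 it checks that the listed vectors satisfy the defining equations and are linearly independent. The vectors and equations are exactly those from the solutions above; the variable names are illustrative choices.

    import numpy as np

    # Claimed bases from 1.10.1, 1.10.3, 1.10.5, 1.10.9 (1.10.7 is not a subspace).
    claimed_bases = {
        "x = 0":          [(0, 0, 1), (0, 1, 0)],
        "x + y + z = 0":  [(-1, 0, 1), (-1, 1, 0)],
        "x = y = z":      [(1, 1, 1)],
        "y = 2x, z = 3x": [(1, 2, 3)],
    }
    defining_equations = {
        "x = 0":          lambda v: v[0] == 0,
        "x + y + z = 0":  lambda v: v[0] + v[1] + v[2] == 0,
        "x = y = z":      lambda v: v[0] == v[1] == v[2],
        "y = 2x, z = 3x": lambda v: v[1] == 2 * v[0] and v[2] == 3 * v[0],
    }

    for name, basis in claimed_bases.items():
        # Every proposed basis vector must lie in S ...
        assert all(defining_equations[name](v) for v in basis)
        # ... and the vectors must be independent: rank equals their number.
        assert np.linalg.matrix_rank(np.array(basis, dtype=float)) == len(basis)
        print(name, "-> dim S =", len(basis))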

2. (8 pts) From Ch. 1.10 in Apostol: Problem 24.

Solution.

(a) We adapt the proof of Theorem 1.7(a). Let B = {x_1, ..., x_k} be any independent set of elements in S. If L(B) = S, then B is a basis for S. If not, then there is some element y in S which is not in L(B). Adjoin this element to B and consider the new set B' = {x_1, ..., x_k, y}. If this set were dependent, there would be scalars c_1, ..., c_{k+1}, not all zero, such that

c_1 x_1 + ... + c_k x_k + c_{k+1} y = O.

We could not have c_{k+1} = 0, for then c_1 = ... = c_k = 0 by the linear independence of x_1, ..., x_k, contrary to our assumption that B' is dependent. Thus c_{k+1} ≠ 0, and we could solve this equation for y and find that y ∈ L(B), contradicting our assumption that y is not in L(B). Therefore the set B' is independent and contains k + 1 elements. If L(B') = S, then B' is a basis of S. If L(B') ≠ S, we can argue with B' as we did with B, getting a new set B'' which contains k + 2 elements and is independent. We can repeat the process until we arrive at a basis in a total of dim V - k steps or fewer, since otherwise we would eventually obtain an independent set with dim V + 1 elements, contradicting Theorem 1.5. This basis is finite and has at most dim V elements, so S is finite dimensional and dim S ≤ dim V.

(b) The "if" part holds trivially by substitution. The "only if" part follows from applying Theorem 1.7(b) to the basis for S constructed in part (a): that basis for S is then a basis for V, so taking spans gives S = V.

(c) Apply Theorem 1.7(a).

(d) Consider the example V = V_2 and S = R × {0} ⊆ V_2. Then {(1, 1), (1, -1)} is a basis for V which contains no basis for S, because (1, 1), (1, -1) ∉ S, so no nonempty subset of it is contained in S, and the empty set is not a basis for S since S contains nonzero elements.
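Remark (not part of the assigned solution). The proof of part (a) is essentially an algorithm: keep adjoining elements of S outside the current span until the span is all of S. The Python sketch below illustrates this procedure on a concrete subspace of R^4; the subspace, the starting independent set, and the helper name extend_to_basis are my own illustrative choices (a finite spanning set stands in for "all of S").

    import numpy as np

    def extend_to_basis(independent, spanning_set):
        """Greedily adjoin vectors of S (given here by a finite spanning set)
        that lie outside the span of the current list, as in part (a)."""
        basis = [np.asarray(v, dtype=float) for v in independent]
        for w in spanning_set:
            candidate = basis + [np.asarray(w, dtype=float)]
            # Adjoin w only if it increases the rank, i.e. w is not in L(basis).
            if np.linalg.matrix_rank(np.array(candidate)) > len(basis):
                basis = candidate
        return basis

    # S = span{e_1, e_2, e_3} inside V = R^4, starting from one independent vector of S.
    spanning_set = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)]
    basis = extend_to_basis([(1, 1, 0, 0)], spanning_set)
    print("dim S =", len(basis), "<= dim V =", 4)   # dim S = 3 <= dim V = 4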

3. Let V be a vector space and U, W subspaces of V. Prove that if U ∩ W = {0} then dim L(U, W) = dim U + dim W. (Do not appeal to a more general result for dim L(U, W).)

Solution. If U or W is infinite dimensional, suppose for the sake of contradiction that L(U, W) is finite dimensional. Since U and W are subspaces of L(U, W), part (a) of the previous problem implies that U and W are finite dimensional, a contradiction. Thus L(U, W) is infinite dimensional, and both sides of the claimed equality are infinite.

It remains to consider the case where U and W are both finite dimensional. Let S = {x_1, ..., x_n} be a basis for U and T = {y_1, ..., y_m} be a basis for W. A finite linear combination of elements of U or W is a finite linear combination of elements of S ∪ T, because we can rewrite each element of U as a finite linear combination of elements of S, rewrite each element of W as a finite linear combination of elements of T, and distribute to get a finite linear combination of elements of S ∪ T. Thus L(U, W) ⊆ L(S, T). Since S ∪ T ⊆ U ∪ W, it follows that L(S, T) ⊆ L(U, W), and hence L(U, W) = L(S, T) (i.e., S ∪ T spans L(U, W)).

S ∪ T is linearly independent by the following argument. Suppose

a_1 x_1 + ... + a_n x_n + b_1 y_1 + ... + b_m y_m = 0.

Then

a_1 x_1 + ... + a_n x_n = -(b_1 y_1 + ... + b_m y_m).

The left-hand side lies in U and the right-hand side lies in W, so their common value lies in U ∩ W = {0}, and hence

a_1 x_1 + ... + a_n x_n = b_1 y_1 + ... + b_m y_m = 0.

Since S is linearly independent and T is linearly independent, we get a_i = 0 for all 1 ≤ i ≤ n and b_j = 0 for all 1 ≤ j ≤ m. Therefore S ∪ T is linearly independent.

S ∪ T spans L(U, W) and is linearly independent, so S ∪ T is a basis of L(U, W) and dim L(U, W) = |S ∪ T| = |S| + |T| = dim U + dim W. (Here |S ∪ T| = |S| + |T| because a common element of S and T would be a nonzero vector in U ∩ W = {0}.)
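Remark (not part of the assigned solution). The dimension count can be checked numerically on a concrete instance: pick bases of two subspaces of R^4 that meet only in {0} and compare ranks. The specific subspaces below are my own illustrative choice.

    import numpy as np

    # U = span{(1,0,0,0), (0,1,0,0)} and W = span{(0,0,1,1)} satisfy U ∩ W = {0}.
    U_basis = np.array([[1, 0, 0, 0],
                        [0, 1, 0, 0]], dtype=float)
    W_basis = np.array([[0, 0, 1, 1]], dtype=float)

    dim_U = np.linalg.matrix_rank(U_basis)
    dim_W = np.linalg.matrix_rank(W_basis)
    # dim L(U, W) is the dimension of the span of S ∪ T,
    # i.e. the rank of the stacked basis vectors.
    dim_LUW = np.linalg.matrix_rank(np.vstack([U_basis, W_basis]))

    assert dim_LUW == dim_U + dim_W
    print(dim_U, dim_W, dim_LUW)   # 2 1 3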

4. (4 pts) From Ch. 2.4 in Apostol: Problems 11-14. Determine whether T is linear.

Solution.

(2.4.11) Yes, T is linear.

Proof. In V_2, vector addition and scalar multiplication are defined in terms of rectangular coordinates, so it is necessary to describe what T does to points given in rectangular coordinates. The point with polar coordinates (r, θ) is (x, y) = (r cos θ, r sin θ) in rectangular coordinates, and (r, θ + ϕ) is

(r cos(θ + ϕ), r sin(θ + ϕ)) = (r(cos θ cos ϕ - sin θ sin ϕ), r(sin θ cos ϕ + cos θ sin ϕ))
                             = (r cos θ cos ϕ - r sin θ sin ϕ, r sin θ cos ϕ + r cos θ sin ϕ)
                             = (x cos ϕ - y sin ϕ, y cos ϕ + x sin ϕ),

so T(x, y) = (x cos ϕ - y sin ϕ, y cos ϕ + x sin ϕ). Suppose (x_1, y_1), (x_2, y_2) ∈ V_2. Then

T((x_1, y_1) + (x_2, y_2)) = T(x_1 + x_2, y_1 + y_2)
                           = ((x_1 + x_2) cos ϕ - (y_1 + y_2) sin ϕ, (y_1 + y_2) cos ϕ + (x_1 + x_2) sin ϕ)
                           = (x_1 cos ϕ - y_1 sin ϕ, y_1 cos ϕ + x_1 sin ϕ) + (x_2 cos ϕ - y_2 sin ϕ, y_2 cos ϕ + x_2 sin ϕ)
                           = T(x_1, y_1) + T(x_2, y_2),

so T preserves vector addition. Suppose a ∈ R and (x_1, y_1) ∈ V_2. We have

T(a(x_1, y_1)) = T(ax_1, ay_1)
               = (ax_1 cos ϕ - ay_1 sin ϕ, ay_1 cos ϕ + ax_1 sin ϕ)
               = a(x_1 cos ϕ - y_1 sin ϕ, y_1 cos ϕ + x_1 sin ϕ)
               = aT(x_1, y_1),

so T preserves scalar multiplication. Hence T is linear.

(2.4.12) Yes, T is linear.

Proof. T maps a point with polar coordinates (r, θ) onto the point with polar coordinates (r, ϕ - θ), where ϕ is the angle of elevation of the fixed line. In V_2, vector addition and scalar multiplication are defined in terms of rectangular coordinates, so it is necessary to describe what T does to points given in rectangular coordinates. The point with polar coordinates (r, θ) is (x, y) = (r cos θ, r sin θ) in rectangular coordinates, and (r, ϕ - θ) is

(r cos(ϕ - θ), r sin(ϕ - θ)) = (r(cos ϕ cos θ + sin ϕ sin θ), r(sin ϕ cos θ - cos ϕ sin θ))
                             = (r cos θ cos ϕ + r sin θ sin ϕ, r cos θ sin ϕ - r sin θ cos ϕ)
                             = (x cos ϕ + y sin ϕ, x sin ϕ - y cos ϕ),

so T(x, y) = (x cos ϕ + y sin ϕ, x sin ϕ - y cos ϕ). Suppose (x_1, y_1), (x_2, y_2) ∈ V_2. Then

T((x_1, y_1) + (x_2, y_2)) = T(x_1 + x_2, y_1 + y_2)
                           = ((x_1 + x_2) cos ϕ + (y_1 + y_2) sin ϕ, (x_1 + x_2) sin ϕ - (y_1 + y_2) cos ϕ)
                           = (x_1 cos ϕ + y_1 sin ϕ, x_1 sin ϕ - y_1 cos ϕ) + (x_2 cos ϕ + y_2 sin ϕ, x_2 sin ϕ - y_2 cos ϕ)
                           = T(x_1, y_1) + T(x_2, y_2),

so T preserves vector addition. Suppose a ∈ R and (x_1, y_1) ∈ V_2. We have

T(a(x_1, y_1)) = T(ax_1, ay_1)
               = (ax_1 cos ϕ + ay_1 sin ϕ, ax_1 sin ϕ - ay_1 cos ϕ)
               = a(x_1 cos ϕ + y_1 sin ϕ, x_1 sin ϕ - y_1 cos ϕ)
               = aT(x_1, y_1),

so T preserves scalar multiplication. Hence T is linear.

(2.4.13) No, T is not linear:

T((0, 1)) + T((1, 0)) = (1, 1) + (1, 1) = (2, 2) ≠ (1, 1) = T(1, 1) = T((0, 1) + (1, 0)),

so T does not preserve vector addition.
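Remark (not part of the assigned solution). The rectangular formulas derived in 2.4.11 and 2.4.12 say that both maps are given by 2×2 matrices, which makes their linearity automatic. The Python sketch below builds those matrices from the formulas above (the angle value is an arbitrary illustrative choice) and spot-checks both linearity conditions on random vectors.

    import numpy as np

    phi = 0.7   # an arbitrary fixed angle, for illustration only

    # Matrices read off from T(x, y) in 2.4.11 and 2.4.12 above.
    rotation   = np.array([[np.cos(phi), -np.sin(phi)],
                           [np.sin(phi),  np.cos(phi)]])
    reflection = np.array([[np.cos(phi),  np.sin(phi)],
                           [np.sin(phi), -np.cos(phi)]])

    rng = np.random.default_rng(0)
    u, v, a = rng.normal(size=2), rng.normal(size=2), rng.normal()

    for A in (rotation, reflection):
        # Matrix maps preserve vector addition and scalar multiplication.
        assert np.allclose(A @ (u + v), A @ u + A @ v)
        assert np.allclose(A @ (a * u), a * (A @ u))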

(2.4.14) Yes, T is linear.

Proof. In V_2, vector addition and scalar multiplication are defined in terms of rectangular coordinates, so it is necessary to describe what T does to points given in rectangular coordinates. The point with polar coordinates (r, θ) is (x, y) = (r cos θ, r sin θ) in rectangular coordinates, and (2r, θ) is (2r cos θ, 2r sin θ) = (2x, 2y), so T(x, y) = (2x, 2y). Suppose (x_1, y_1), (x_2, y_2) ∈ V_2. Then

T((x_1, y_1) + (x_2, y_2)) = T(x_1 + x_2, y_1 + y_2)
                           = (2(x_1 + x_2), 2(y_1 + y_2))
                           = (2x_1 + 2x_2, 2y_1 + 2y_2)
                           = (2x_1, 2y_1) + (2x_2, 2y_2)
                           = T(x_1, y_1) + T(x_2, y_2),

so T preserves vector addition. Suppose a ∈ R and (x_1, y_1) ∈ V_2. Then

T(a(x_1, y_1)) = T(ax_1, ay_1) = (2ax_1, 2ay_1) = a(2x_1, 2y_1) = aT(x_1, y_1),

so T preserves scalar multiplication. Hence T is linear.
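Remark (not part of the assigned solution). As a final numerical spot-check of the polar-to-rectangular step in 2.4.14: doubling r in polar coordinates doubles both rectangular coordinates. The values below are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(1)
    r, theta = rng.uniform(0.1, 5.0), rng.uniform(0.0, 2 * np.pi)

    xy         = np.array([r * np.cos(theta), r * np.sin(theta)])          # point (r, theta)
    xy_doubled = np.array([2 * r * np.cos(theta), 2 * r * np.sin(theta)])  # point (2r, theta)

    assert np.allclose(xy_doubled, 2 * xy)   # T(x, y) = (2x, 2y), as derived above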