
Lecture 1s: Isomorphisms of Vector Spaces (pages 246-249)

Definition: L is said to be one-to-one if L(u_1) = L(u_2) implies u_1 = u_2.

Example: The mapping L : R^4 → R^2 defined by L(a, b, c, d) = (a, d) is not one-to-one. One counterexample is that L(1, 2, 1, 2) = (1, 2) and L(1, 1, 2, 2) = (1, 2), so L(1, 2, 1, 2) = L(1, 1, 2, 2) even though (1, 2, 1, 2) ≠ (1, 1, 2, 2).

The mapping L : R^2 → R^4 defined by L(a, b) = (a, b, a, b) is one-to-one. To prove this, we assume that L(a_1, b_1) = L(a_2, b_2). That means (a_1, b_1, a_1, b_1) = (a_2, b_2, a_2, b_2). From this we have a_1 = a_2 and b_1 = b_2, which means (a_1, b_1) = (a_2, b_2). And so we have shown that L(a_1, b_1) = L(a_2, b_2) implies (a_1, b_1) = (a_2, b_2).

The mapping L : R^3 → P_2 defined by L(a, b, c) = a + (a + b)x + (a - c)x^2 is one-to-one. To prove this, we assume that L(a_1, b_1, c_1) = L(a_2, b_2, c_2). That means

a_1 + (a_1 + b_1)x + (a_1 - c_1)x^2 = a_2 + (a_2 + b_2)x + (a_2 - c_2)x^2.

This means that a_1 = a_2, a_1 + b_1 = a_2 + b_2, and a_1 - c_1 = a_2 - c_2. Plugging the fact that a_1 = a_2 into the other two equations gives us a_1 + b_1 = a_1 + b_2, so b_1 = b_2, and a_1 - c_1 = a_1 - c_2, so c_1 = c_2. And since a_1 = a_2, b_1 = b_2, and c_1 = c_2, we have (a_1, b_1, c_1) = (a_2, b_2, c_2), and thus L is one-to-one.

Lemma 4.7.1: L is one-to-one if and only if Null(L) = {0}.

Proof of Lemma 4.7.1: We will need to prove both directions of this if and only if statement.

(If L is one-to-one, then Null(L) = {0}): Suppose that L is one-to-one, and let x ∈ Null(L). Then L(x) = 0. But we also know that L(0) = 0. Since L is one-to-one and L(x) = L(0), we have x = 0. As such, we see that 0 is the only element of Null(L).

(If Null(L) = {0}, then L is one-to-one): Suppose that Null(L) = {0}, and further suppose that L(u_1) = L(u_2). Then L(u_1 - u_2) = L(u_1) - L(u_2) = 0. This means that u_1 - u_2 is in the nullspace of L. But 0 is the only element of the nullspace, so we have u_1 - u_2 = 0. By the uniqueness of the additive inverse, this means that u_1 = u_2, and this shows that L is one-to-one.
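
Lemma 4.7.1 gives a concrete test for the R^n examples above: write the mapping as a matrix, and it is one-to-one exactly when the nullspace is trivial, i.e. when the rank equals the number of columns. The following numpy check is my own sketch, not part of the text.

```python
import numpy as np

# Matrix of L(a, b, c, d) = (a, d), mapping R^4 to R^2
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

# Matrix of L(a, b) = (a, b, a, b), mapping R^2 to R^4
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

def is_one_to_one(M):
    # Lemma 4.7.1: one-to-one iff Null(M) = {0}, i.e. nullity zero,
    # i.e. rank equal to the number of columns
    return np.linalg.matrix_rank(M) == M.shape[1]

print(is_one_to_one(A))  # False: nullity is 4 - 2 = 2
print(is_one_to_one(B))  # True: nullity is 2 - 2 = 0
```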

Definition: L : U → V is said to be onto if for every v ∈ V there exists some u ∈ U such that L(u) = v. That is, Range(L) = V.

Example: The mapping L : R^4 → R^2 defined by L(a, b, c, d) = (a, d) is onto. To prove this, let (s_1, s_2) ∈ R^2. Then (s_1, 0, 0, s_2) ∈ R^4 is such that L(s_1, 0, 0, s_2) = (s_1, s_2). (Note: I could have also used (s_1, 1, 1, s_2) or (s_1, s_1, s_2, s_2), or countless other vectors from R^4. We need to be able to find at least one vector u such that L(u) = v, not exactly one.)

The mapping L : R^2 → R^4 defined by L(a, b) = (a, b, a, b) is not onto. One counterexample is that there is no (a, b) ∈ R^2 such that L(a, b) = (1, 2, 3, 4), since we would need to have a = 1 and a = 3, which is not possible.

The mapping L : R^3 → P_2 defined by L(a, b, c) = a + (a + b)x + (a - c)x^2 is onto. To prove this, let s_0 + s_1 x + s_2 x^2 ∈ P_2. To show that L is onto, we need to find (a, b, c) ∈ R^3 such that L(a, b, c) = s_0 + s_1 x + s_2 x^2. That is, we need to find a, b, c ∈ R such that a + (a + b)x + (a - c)x^2 = s_0 + s_1 x + s_2 x^2. Setting the coefficients equal to each other, we see that we need a = s_0, a + b = s_1, and a - c = s_2. Plugging a = s_0 into the other two equations, we get that s_0 + b = s_1, so b = s_1 - s_0, and s_0 - c = s_2, so c = s_0 - s_2. And so we have found (s_0, s_1 - s_0, s_0 - s_2) ∈ R^3 such that

L(s_0, s_1 - s_0, s_0 - s_2) = s_0 + (s_0 + (s_1 - s_0))x + (s_0 - (s_0 - s_2))x^2 = s_0 + s_1 x + s_2 x^2.

Theorem 4.7.2: The linear mapping L : U → V has an inverse mapping L^{-1} : V → U if and only if L is one-to-one and onto.

Proof of Theorem 4.7.2: First, let's suppose that L has an inverse mapping. To see that L is one-to-one, we assume that L(u_1) = L(u_2). Then u_1 = L^{-1}(L(u_1)) = L^{-1}(L(u_2)) = u_2, and so we see that L is one-to-one. To see that L is onto, let v ∈ V. Then L^{-1}(v) ∈ U is such that L(L^{-1}(v)) = v, and this means that L is onto.

Now, let's suppose that L is one-to-one and onto, and let's try to define a mapping M : V → U that will turn out to be L^{-1}. There are two things we need to do to define any mapping from V to U. The first is to make sure we define M for every element of V. So, let v ∈ V. Then, since L is onto, we know that there is some u ∈ U such that L(u) = v. So if we say M(v) = u such that L(u) = v, then we are defining M for every element of V. But the other thing we have to do when defining a mapping is to make sure that we don't send our vector v to two different vectors in U. However, because L is one-to-one, we know that if L(u_1) = v and L(u_2) = v, then L(u_1) = L(u_2), so u_1 = u_2. And so, we can define M(v) to be the unique vector u ∈ U such that L(u) = v. And this mapping M satisfies M(L(u)) = M(v) = u and L(M(v)) = L(u) = v, which means that M = L^{-1}.
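
In coordinates, the mapping L(a, b, c) = a + (a + b)x + (a - c)x^2 is multiplication by a 3x3 matrix, so Theorem 4.7.2 can be checked numerically: the matrix is invertible, and its inverse recovers exactly the formulas b = s_1 - s_0 and c = s_0 - s_2 from the onto proof above. This numpy sketch is my own illustration, not from the text.

```python
import numpy as np

# On coefficient vectors, L(a, b, c) = a + (a + b)x + (a - c)x^2
# sends (a, b, c) to (a, a + b, a - c):
M = np.array([[1.0, 0.0,  0.0],
              [1.0, 1.0,  0.0],
              [1.0, 0.0, -1.0]])

M_inv = np.linalg.inv(M)           # exists since det(M) = -1 is nonzero

s = np.array([2.0, 5.0, -1.0])     # target polynomial 2 + 5x - x^2
a, b, c = M_inv @ s
print(a, b, c)                     # 2.0 3.0 3.0, i.e. (s0, s1 - s0, s0 - s2)
assert np.allclose(M @ [a, b, c], s)   # L really hits the target
```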

Definition: If U and V are vector spaces over R, and if L : U → V is a linear, one-to-one, and onto mapping, then L is called an isomorphism (or a vector space isomorphism), and U and V are said to be isomorphic.

Note: The reason that we include the alternate name "vector space isomorphism" is that there are lots of different definitions for an isomorphism in the world of mathematics. You may have encountered a definition that only requires a function that is one-to-one and onto (but not linear). The word "isomorphism" comes from the Greek words meaning "same form", so the definition of an isomorphism depends on what form you want to have be the same. In linear algebra, linear combinations are an important part of the form of a vector space, so we add the requirement that our function preserve linear combinations. You'll find that different areas of study have a different concept of form. You get used to it...

Example: We've seen that the linear mapping L : R^3 → P_2 defined by L(a, b, c) = a + (a + b)x + (a - c)x^2 is both one-to-one and onto, so L is an isomorphism, and R^3 and P_2 are isomorphic.

Theorem 4.7.3: Suppose that U and V are finite-dimensional vector spaces over R. Then U and V are isomorphic if and only if they are of the same dimension.

Proof of Theorem 4.7.3: We will need to prove both directions of this if and only if statement.

(If U and V are isomorphic, then they have the same dimension): Suppose that U and V are isomorphic, let L be an isomorphism from U to V, and let B = {u_1, ..., u_n} be a basis for U. Then I claim that C = {L(u_1), ..., L(u_n)} is a basis for V. First, let's show that C is linearly independent. To that end, let t_1, ..., t_n ∈ R be such that

t_1 L(u_1) + ... + t_n L(u_n) = 0_V.

Then

L(t_1 u_1 + ... + t_n u_n) = 0_V,

and so we see that t_1 u_1 + ... + t_n u_n ∈ Null(L). But, since L is an isomorphism, L is one-to-one, so we know that Null(L) = {0_U}. This means that

t_1 u_1 + ... + t_n u_n = 0_U.

Since the vectors {u_1, ..., u_n} are a basis for U, they are linearly independent, so we know that t_1 = ... = t_n = 0. As such, we've shown that the vectors {L(u_1), ..., L(u_n)} are also linearly independent.

Now we need to show that they are a spanning set for V. That is, given any v ∈ V, we need to find s_1, ..., s_n ∈ R such that

s_1 L(u_1) + ... + s_n L(u_n) = v.

Since L is an isomorphism, we know that L is onto, so there is a u ∈ U such that L(u) = v. And since {u_1, ..., u_n} is a basis for U, we know that there are r_1, ..., r_n ∈ R such that

r_1 u_1 + ... + r_n u_n = u.

Then

v = L(u) = L(r_1 u_1 + ... + r_n u_n) = r_1 L(u_1) + ... + r_n L(u_n),

and so we see that v is in the span of C = {L(u_1), ..., L(u_n)}. And since we have shown that C is a linearly independent spanning set for V, we know that it is a basis for V. And this means that the dimension of V is n, which is the same as the dimension of U.
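
The first half of this proof can be watched numerically: push any basis of R^3 through the isomorphism L : R^3 → P_2 above, and the images are again linearly independent, hence a basis of P_2. The particular basis used in this sketch is my own example.

```python
import numpy as np

M = np.array([[1.0, 0.0,  0.0],       # the isomorphism R^3 -> P_2, in coordinates
              [1.0, 1.0,  0.0],
              [1.0, 0.0, -1.0]])

# Columns of B form a (non-standard) basis of R^3
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

images = M @ B                        # column i holds the coefficients of L(u_i)
print(np.linalg.matrix_rank(B))       # 3: the u_i are a basis
print(np.linalg.matrix_rank(images))  # 3: the L(u_i) are a basis of P_2
```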

(If U and V have the same dimension, then they are isomorphic): Let U and V have the same dimension, let B = {u_1, ..., u_n} be a basis for U, and let C = {v_1, ..., v_n} be a basis for V. Then I claim that the linear mapping L : U → V defined by

L(t_1 u_1 + ... + t_n u_n) = t_1 v_1 + ... + t_n v_n

is an isomorphism. I will take for granted that it is a linear mapping, and instead focus on showing that it is one-to-one and onto.

To see that L is one-to-one, suppose that

L(r_1 u_1 + ... + r_n u_n) = L(s_1 u_1 + ... + s_n u_n).

Then

r_1 v_1 + ... + r_n v_n = s_1 v_1 + ... + s_n v_n.

Bringing everything to one side, we get that

(r_1 - s_1)v_1 + ... + (r_n - s_n)v_n = 0_V.

But since {v_1, ..., v_n} is a basis for V, it is linearly independent, so we have that r_i - s_i = 0 for all i, or that r_i = s_i for all i. This means that

r_1 u_1 + ... + r_n u_n = s_1 u_1 + ... + s_n u_n,

and so L is one-to-one.

Now let's see that L is onto. To that end, let v ∈ V. Then there are s_1, ..., s_n ∈ R such that

v = s_1 v_1 + ... + s_n v_n.

And we then have

L(s_1 u_1 + ... + s_n u_n) = s_1 v_1 + ... + s_n v_n,

so s_1 u_1 + ... + s_n u_n ∈ U is such that L(s_1 u_1 + ... + s_n u_n) = v, and we see that L is onto.
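
The mapping in this direction is easy to build concretely: compute the B-coordinates of a vector, then reuse them as C-coordinates. A minimal numpy sketch, with two bases of R^2 of my own choosing playing the roles of B and C:

```python
import numpy as np

B = np.array([[1.0, 1.0],   # columns u_1, u_2: a basis of U = R^2
              [0.0, 1.0]])
C = np.array([[2.0, 0.0],   # columns v_1, v_2: a basis of V = R^2
              [1.0, 1.0]])

# L(t_1 u_1 + ... + t_n u_n) = t_1 v_1 + ... + t_n v_n has matrix C B^{-1}
L = C @ np.linalg.inv(B)

u = np.array([3.0, 2.0])
t = np.linalg.solve(B, u)          # the B-coordinates of u
print(np.allclose(L @ u, C @ t))   # True: L(u) = t_1 v_1 + t_2 v_2
```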

For once, the book has generalized more than I would, because I find the following statement to be quite important in the study of linear algebra.

Corollary to Theorem 4.7.3: If U is a vector space with dimension n, then U is isomorphic to R^n.

That is, every finite-dimensional vector space is really just the same as R^n. This is fabulous, because we know a lot of things about R^n, and R^n is easy to work with. I'll discuss the wonders of this new revelation in the next lecture.
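
For P_2, the isomorphism promised by the corollary is the coordinate map a + bx + cx^2 → (a, b, c): polynomial arithmetic on one side matches vector arithmetic on the other. A small sketch of my own using numpy's Polynomial class (coefficients stored in increasing degree):

```python
import numpy as np
from numpy.polynomial import Polynomial

p = Polynomial([1.0, 2.0, 0.0])   # 1 + 2x
q = Polynomial([0.0, 1.0, 3.0])   # x + 3x^2

def coords(poly):
    # The coordinate isomorphism P_2 -> R^3: a + bx + cx^2 |-> (a, b, c)
    return poly.coef

# A linear combination in P_2 becomes the same combination in R^3
lhs = coords(2 * p + q)            # coordinates of 2 + 5x + 3x^2
rhs = 2 * coords(p) + coords(q)
print(lhs, np.allclose(lhs, rhs))  # [2. 5. 3.] True
```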

But for now, we finish with one last theorem.

Theorem 4.7.4: If U and V are n-dimensional vector spaces over R, then a linear mapping L : U → V is one-to-one if and only if it is onto.

Proof of Theorem 4.7.4: Let U and V be n-dimensional vector spaces over R, and let B = {u_1, ..., u_n} be a basis for U and C = {v_1, ..., v_n} be a basis for V.

Now suppose that L is one-to-one. To show that L is onto, I am first going to show that the set C_1 = {L(u_1), ..., L(u_n)} is a basis for V. And to do that, I first want to show that C_1 is linearly independent. To that end, suppose that t_1, ..., t_n ∈ R are such that

t_1 L(u_1) + ... + t_n L(u_n) = 0.

Using the linearity of L we get

L(t_1 u_1 + ... + t_n u_n) = 0,

and so we see that t_1 u_1 + ... + t_n u_n is in the nullspace of L. But, since L is one-to-one, Lemma 4.7.1 tells us that Null(L) = {0}. So we have

t_1 u_1 + ... + t_n u_n = 0.

At this point we use the fact that {u_1, ..., u_n} is a basis for U, and therefore is linearly independent, to see that t_1 = ... = t_n = 0. And this means that our set C_1 = {L(u_1), ..., L(u_n)} is also linearly independent.

We also know that C_1 contains n vectors. This is actually not immediately obvious, since in general one could have L(u_i) = L(u_j) for some i ≠ j; but because L is one-to-one, we know that L(u_i) ≠ L(u_j) whenever i ≠ j, because the vectors u_i are all distinct. An even easier way to see this is to note that the set C_1 is linearly independent, and a linearly independent set cannot contain duplicate vectors. Either way, we have found that C_1 is a linearly independent set containing dim(V) elements of V, so by the two-out-of-three rule (aka Theorem 4.3.4(3)), C_1 is also a spanning set for V, and therefore is a basis for V.

But what I actually need is the fact that C_1 is a spanning set for V. Because now, given any v ∈ V, there are scalars s_1, ..., s_n such that

s_1 L(u_1) + ... + s_n L(u_n) = v.

This means that s_1 u_1 + ... + s_n u_n ∈ U is such that

L(s_1 u_1 + ... + s_n u_n) = s_1 L(u_1) + ... + s_n L(u_n) = v,

and at long last we have shown that L is onto.
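
The two-out-of-three step is also visible numerically: n linearly independent vectors in an n-dimensional space automatically span, so every target vector has coordinates with respect to them. A quick check, with vectors of my own choosing:

```python
import numpy as np

# Columns: three linearly independent vectors in R^3
C1 = np.array([[1.0, 1.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 1.0]])
assert np.linalg.matrix_rank(C1) == 3   # independent, so by two-out-of-three: a basis

v = np.array([4.0, 2.0, 0.0])           # an arbitrary target vector
s = np.linalg.solve(C1, v)              # its coordinates: C1 @ s = v
print(s, np.allclose(C1 @ s, v))        # [ 3.  1. -1.] True
```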

Now, we turn to the other direction of our proof. So, let us assume that L is onto, and show that L is also one-to-one. The first step in that process will be to note that, because L is onto, for every vector v_i in our basis C = {v_1, ..., v_n} for V, there is a vector w_i in U such that L(w_i) = v_i. I claim that the set B_1 = {w_1, ..., w_n} of these vectors is a basis for U.

To prove this claim, I will first show that B_1 is linearly independent. To that end, let t_1, ..., t_n ∈ R be such that

t_1 w_1 + ... + t_n w_n = 0.

Then we see that L(t_1 w_1 + ... + t_n w_n) = L(0) = 0, so t_1 L(w_1) + ... + t_n L(w_n) = 0, so

t_1 v_1 + ... + t_n v_n = 0.

Since {v_1, ..., v_n} is linearly independent, we have t_1 = ... = t_n = 0. And so we have shown that B_1 = {w_1, ..., w_n} is also linearly independent. And, as noted earlier, this means that B_1 does not contain any repeated entries, which means that B_1 has n = dim(U) elements. Again using the two-out-of-three rule, B_1 is a spanning set for U, and thus is a basis for U. But again, it is the fact that B_1 is a spanning set for U that I want to use.

Now, to show that L is one-to-one, suppose that L(u) = L(w). Since B_1 is a spanning set for U, we know that there are scalars a_1, ..., a_n and b_1, ..., b_n such that

u = a_1 w_1 + ... + a_n w_n and w = b_1 w_1 + ... + b_n w_n.

Then

L(u) = L(a_1 w_1 + ... + a_n w_n) = a_1 L(w_1) + ... + a_n L(w_n) = a_1 v_1 + ... + a_n v_n,

and similarly

L(w) = L(b_1 w_1 + ... + b_n w_n) = b_1 L(w_1) + ... + b_n L(w_n) = b_1 v_1 + ... + b_n v_n,

which means that

a_1 v_1 + ... + a_n v_n = b_1 v_1 + ... + b_n v_n,

and from this we get

(a_1 - b_1)v_1 + ... + (a_n - b_n)v_n = 0.

Since {v_1, ..., v_n} is linearly independent, we have (a_1 - b_1) = ... = (a_n - b_n) = 0, which means that a_i = b_i for all 1 ≤ i ≤ n. And this means that u = w, so we have shown that L is one-to-one.
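
In matrix language, Theorem 4.7.4 says that for a square matrix the two properties stand or fall together, since each one is equivalent to full rank. A closing sketch of my own:

```python
import numpy as np

def one_to_one(M):
    return np.linalg.matrix_rank(M) == M.shape[1]   # trivial nullspace

def onto(M):
    return np.linalg.matrix_rank(M) == M.shape[0]   # columns span the codomain

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])    # rank 2: both properties hold
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # rank 1: both properties fail together

for M in (A, B):
    print(one_to_one(M), onto(M))   # True True, then False False
```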