Chapter 4 Isomorphism and Coordinates

Recall that a vector space isomorphism is a linear map that is both one-to-one and onto. Such a map preserves every aspect of the vector space structure. In other words, if L : V → W is an isomorphism, then any true statement you can make about V using abstract vector notation, vector addition, and scalar multiplication will transfer to a true statement about W when L is applied to the entire statement. We make this more precise with some examples.

Example. If L : V → W is an isomorphism, then the set {v_1, ..., v_n} is linearly independent in V if and only if the set {L(v_1), ..., L(v_n)} is linearly independent in W. The dimension of the subspace spanned by the first set equals the dimension of the subspace spanned by the second set. In particular, the dimension of V equals that of W.

This last statement about dimension is only one part of a more fundamental fact.

Theorem 4.0.1. Suppose V and W are finite-dimensional vector spaces. Then V is isomorphic to W if and only if dim V = dim W.

Proof. Suppose that V and W are isomorphic, and let L : V → W be an isomorphism. Then L is one-to-one, so dim ker L = 0. Since L is onto, we also have dim im L = dim W. Plugging these into the rank-nullity theorem for L shows that dim V = dim W.

Now suppose that dim V = dim W = n, and choose bases {v_1, ..., v_n} and {w_1, ..., w_n} for V and W, respectively. For any vector v in V, we write v = a_1 v_1 + ... + a_n v_n, and define

L(v) = L(a_1 v_1 + ... + a_n v_n) = a_1 w_1 + ... + a_n w_n.

We claim that L is linear, one-to-one, and onto. (Proof omitted.)

In particular, any 2-dimensional real vector space is necessarily isomorphic to R^2, for example. This helps to explain why so many problems in these other spaces ended up reducing to solving systems of equations just like those we saw in R^n.

Looking at the proof, we see that isomorphisms are constructed by sending bases to bases. In particular, there is a different isomorphism V → W for each choice of basis for V and for W.
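The example above can be checked concretely in R^2: an isomorphism of R^2 (multiplication by an invertible 2 × 2 matrix) sends linearly independent sets to linearly independent sets, and for two vectors in R^2 independence is equivalent to a nonzero 2 × 2 determinant. A minimal sketch; the matrix and vectors are illustrative choices of our own, not from the notes:

```python
# An isomorphism of R^2 (here, multiplication by an invertible 2x2 matrix)
# sends linearly independent sets to linearly independent sets.  For two
# vectors in R^2, independence means a nonzero 2x2 determinant.

def apply(L, v):
    """Multiply the 2x2 matrix L by the vector v."""
    return [L[0][0]*v[0] + L[0][1]*v[1],
            L[1][0]*v[0] + L[1][1]*v[1]]

def det2(u, v):
    """Determinant of the 2x2 matrix with columns u and v."""
    return u[0]*v[1] - u[1]*v[0]

L = [[2, 1],
     [1, 1]]              # det = 1, so L is an isomorphism R^2 -> R^2
v1, v2 = [1, 0], [1, 1]   # linearly independent: det2(v1, v2) = 1

w1, w2 = apply(L, v1), apply(L, v2)
print(det2(v1, v2), det2(w1, w2))  # both nonzero, so {w1, w2} is independent
```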

One special case of this is when we look at isomorphisms V → V. Such an isomorphism is called a change of coordinates.

If S = {v_1, ..., v_n} is a basis for V, we say the n-tuple (a_1, ..., a_n) is the coordinate vector of v with respect to S if v = a_1 v_1 + ... + a_n v_n. We denote this vector as [v]_S.

Example. Find the coordinates for (1, 3) with respect to the basis S = {(1, 1), (−1, 1)}. We set (1, 3) = a(1, 1) + b(−1, 1), which leads to the equations a − b = 1 and a + b = 3. This system has solution a = 2, b = 1. Thus (1, 3) = 2(1, 1) + 1(−1, 1), so that [(1, 3)]_S = (2, 1).

Example. Find the coordinates for t^2 + 3t + 2 with respect to the basis S = {t^2 + 1, t + 1, t − 1}. We set t^2 + 3t + 2 = a(t^2 + 1) + b(t + 1) + c(t − 1). Collecting like terms gives t^2 + 3t + 2 = at^2 + (b + c)t + (a + b − c). This leads to the system of equations

a = 1
b + c = 3
a + b − c = 2

The solution is a = 1, b = 2, c = 1. Thus we have t^2 + 3t + 2 = 1(t^2 + 1) + 2(t + 1) + 1(t − 1), so that [t^2 + 3t + 2]_S = (1, 2, 1).

Note that for any vector v in an n-dimensional vector space V and for any basis S for V, the coordinate vector [v]_S is an element of R^n.

Proposition 4.0.2. For any basis S for an n-dimensional vector space V, the correspondence v ↦ [v]_S is an isomorphism from V to R^n.

Corollary 4.0.3. Every n-dimensional vector space over R is isomorphic to R^n.
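Finding a coordinate vector always comes down to solving a linear system, as in the polynomial example above. A small sketch that solves that same 3 × 3 system by Gaussian elimination; the `solve` helper is our own, written in pure Python:

```python
# Finding [t^2 + 3t + 2]_S for S = {t^2 + 1, t + 1, t - 1} by solving the
# 3x3 system from the example above.  The solver is a plain Gauss-Jordan
# elimination with partial pivoting.

def solve(A, b):
    """Solve Ax = b for a small invertible square matrix A."""
    n = len(A)
    # Augmented matrix, so row operations act on b as well.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Pivot: bring the largest entry in this column to the diagonal.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Rows correspond to the coefficients of t^2, t, and 1 in
# a(t^2 + 1) + b(t + 1) + c(t - 1) = t^2 + 3t + 2.
A = [[1, 0, 0],
     [0, 1, 1],
     [1, 1, -1]]
b = [1, 3, 2]

print(solve(A, b))  # [1.0, 2.0, 1.0], i.e. [t^2 + 3t + 2]_S = (1, 2, 1)
```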

Chapter 5 Linear Maps R^n → R^m

Since every finite-dimensional vector space over R is isomorphic to R^n, any problem we have in such a vector space that can be expressed entirely in terms of vector operations can be transferred to one in R^n. Since our ultimate goal is to understand linear maps V → W, we will focus our efforts on understanding linear maps R^n → R^m, without worrying about expressing things in abstract terms.

Remark. Unlike any previous section, we focus specifically on R^n in this chapter. To emphasize the distinction, we use x to denote an arbitrary vector in R^n.

5.1 Linear maps from R^n to R

We've already seen above that the linear maps R → R are precisely those of the form L(x) = ax for some real number a. For the next step, we allow our domain to have multiple dimensions, but insist that our target space be R. We will discover that linear maps L : R^n → R are already familiar to us.

Theorem 5.1.1. If L : R^n → R is a linear map, then there is some vector a such that L(x) = a · x.

Proof. For j = 1, ..., n, we set e_j equal to the jth standard basis vector in R^n. Set a = (a_1, ..., a_n), where each a_j = L(e_j), and consider an arbitrary vector x = (x_1, ..., x_n) in R^n. We compute

L(x) = L(x_1 e_1 + ... + x_n e_n) = x_1 L(e_1) + ... + x_n L(e_n) = x_1 a_1 + ... + x_n a_n = x · a.

Remark. Wait, didn't we say that we weren't going to think about dot products? Then we would be studying inner product spaces rather than vector spaces! Yes, and that's still true. Within a given vector space, we will not be performing any dot products, and so in particular will never speak of length or angle. And in fact our definition of linear map did not use the notion of dot product; it used only vector addition and scalar multiplication. What we've shown is that every linear map from R^n to R has the form

f(x_1, ..., x_n) = a_1 x_1 + ... + a_n x_n
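The proof of Theorem 5.1.1 is constructive: evaluate L on the standard basis to recover a. A short sketch, where the particular linear map `f` is an illustrative choice of our own:

```python
# Recovering the vector a with L(x) = a . x from a linear map L : R^3 -> R,
# exactly as in the proof of Theorem 5.1.1: set a_j = L(e_j).

def f(x):
    """An example linear map R^3 -> R."""
    return 2*x[0] - x[1] + 3*x[2]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

n = 3
# Standard basis vectors e_1, ..., e_n of R^n.
e = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
a = [f(e[j]) for j in range(n)]
print(a)                 # [2, -1, 3]

x = [5, 7, -2]
print(f(x), dot(a, x))   # the two values agree for every x
```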

for some fixed real numbers a_1, ..., a_n. It just so happens that we have a name for this type of operation, and we call it the dot product, but this is just a convenient way to explain what linear maps do; we're not studying the algebraic or geometric properties of the dot product in R^n.

5.2 Linear Maps R^n → R^m

One of the first things you learn in vector calculus is that functions with multiple outputs can be thought of as a list of functions with one output. Thus given an arbitrary function f : R^2 → R^3, say, we think of it as f(x, y) = (f_1(x, y), f_2(x, y), f_3(x, y)), where each component function f_j is a map R^2 → R^1. We thus expect to find that linear maps from R^n to R^m are those whose component functions are linear maps from R^n to R, which we saw in the last section are just dot products. This is the content of the following.

Theorem 5.2.1. The function L : R^n → R^m is linear if and only if each component function L_j : R^n → R is linear.

Proof. Omitted.

Thus any linear map R^n → R^m is built up from a bunch of dot products in each component. In the next section we will make use of this fact to come up with a nice way to present linear maps.

5.3 Matrices

There are many ways to write vectors in R^n. For example, the same vector in R^3 can be represented as

3i + 2j − 4k,   ⟨3, 2, −4⟩,   (3, 2, −4),   [3, 2, −4],   \begin{bmatrix} 3 \\ 2 \\ -4 \end{bmatrix}.

We will focus on these last two for the time being. In particular, whenever we have a dot product x · y of two vectors x and y (in that order), we will write the first as a row in square brackets and the second as a column in square brackets. Thus we have, for example,

\begin{bmatrix} 1 & 2 & 3 \end{bmatrix} \begin{bmatrix} 2 \\ 3 \\ -4 \end{bmatrix} = 2 + 6 - 12 = -4.

Note that we are also avoiding commas in the row vector.
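The row-times-column convention above is just the dot product in disguise, which a one-liner makes explicit (the helper name is our own):

```python
# The row-times-column product from the text: a 1x3 row against a 3x1
# column is the dot product of the two vectors.

def row_times_col(row, col):
    return sum(r * c for r, c in zip(row, col))

print(row_times_col([1, 2, 3], [2, 3, -4]))  # 2 + 6 - 12 = -4
```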

Now suppose L is an arbitrary linear map from R^n to R. Then given input vector x, L(x) is the dot product a · x for some fixed vector a. Thus we may write

L \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}.

Now suppose L is a linear map from R^n to R^m, and the ith component function is the dot product with a_i. Then we can write

L \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} a_1 \cdot x \\ a_2 \cdot x \\ \vdots \\ a_m \cdot x \end{bmatrix}.

Thus we can think of any linear map from R^n to R^m as multiplication by a matrix, assuming we define multiplication in exactly this way.

Definition 5.3.1. If A = (a_ij) is an m × n matrix and x is an n × 1 column vector, the product Ax is defined to be the m × 1 column vector whose ith entry is the dot product of the ith row of A with x.

Thus we are led to the fortuitous observation that every linear map L : R^n → R^m has the form L(x) = Ax for some m × n matrix A. Thus linear maps from R to itself are just multiplication by a 1 × 1 matrix; i.e., multiplication by a constant. This agrees with what we saw earlier.

We now note an important fact about compositions of linear maps.

Theorem 5.3.2. Suppose L : R^n → R^m and T : R^m → R^p are linear maps. Then the composition T ∘ L : R^n → R^p is a linear map.

Suppose L is represented by the m × n matrix A and T is represented by the p × m matrix B. Because T ∘ L is also linear, it is represented by some p × n matrix C. We now show how to construct C from A and B.

We begin with a motivating example. Suppose L maps from R^2 to R^2, as does T, and suppose the components of L dot with a = (a_1, a_2) and b = (b_1, b_2), while the components of T dot with c = (c_1, c_2) and d = (d_1, d_2). Then

T ∘ L(x) = T \begin{bmatrix} a \cdot x \\ b \cdot x \end{bmatrix} = T \begin{bmatrix} a_1 x_1 + a_2 x_2 \\ b_1 x_1 + b_2 x_2 \end{bmatrix} = \begin{bmatrix} c_1(a_1 x_1 + a_2 x_2) + c_2(b_1 x_1 + b_2 x_2) \\ d_1(a_1 x_1 + a_2 x_2) + d_2(b_1 x_1 + b_2 x_2) \end{bmatrix} = \begin{bmatrix} c_1 a_1 + c_2 b_1 & c_1 a_2 + c_2 b_2 \\ d_1 a_1 + d_2 b_1 & d_1 a_2 + d_2 b_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.
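Definition 5.3.1 translates directly into code: each entry of Ax is one dot product. A minimal sketch with an example matrix of our own choosing:

```python
# Definition 5.3.1 as code: the product Ax is the column vector whose ith
# entry is the dot product of the ith row of A with x.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 0, 2],
     [3, -1, 4]]      # a 2x3 matrix, so x -> Ax is a linear map R^3 -> R^2
x = [1, 1, 1]
print(matvec(A, x))   # [3, 6]
```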

Thus if L is multiplication by A = \begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \end{bmatrix} and T is multiplication by B = \begin{bmatrix} c_1 & c_2 \\ d_1 & d_2 \end{bmatrix}, then T ∘ L is multiplication by C = (c_ij), where c_ij is the dot product of the ith row of B with the jth column of A. In other words, we have

\begin{bmatrix} c_1 & c_2 \\ d_1 & d_2 \end{bmatrix} \begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \end{bmatrix} = \begin{bmatrix} c_1 a_1 + c_2 b_1 & c_1 a_2 + c_2 b_2 \\ d_1 a_1 + d_2 b_1 & d_1 a_2 + d_2 b_2 \end{bmatrix}.

This may seem a strange way to define the product of two matrices, but since we're thinking of matrices as representing linear maps, it only makes sense that the product of two should be the matrix of the composition, so the definition is essentially forced upon us.

Remark. According to this definition, we cannot just multiply any two matrices. Their sizes have to match up in a nice way. In particular, for the dot products to make sense in computing AB, the rows of A have to have just as many elements as the columns of B. In short, the product AB is defined as long as A is m × p and B is p × n, in which case the product is m × n.

Proposition 5.3.3. Matrix multiplication is associative when it is defined. In other words, for any matrices A, B, and C we have A(BC) = (AB)C, as long as all the individual products in this identity are defined.

Proof. It is straightforward, though incredibly tedious, to prove this directly using our algebraic definition of matrix multiplication. What is far easier, however, is simply to note that function composition is always associative, when it is defined. The result follows.

There are some particularly special linear maps: the zero map and the identity. It is not too hard to see that the zero map R^n → R^m can be represented as multiplication by the zero matrix 0_{m×n}. The identity map R^n → R^n is represented by the aptly named identity matrix I_n, which has 1s on its main diagonal and 0s elsewhere. Note that it follows that IA = AI = A for appropriately sized I, while A0 = 0A = 0 for appropriately sized 0.
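The defining property of the matrix product, that BA represents the composition T ∘ L, can be checked numerically: applying B after A to a vector must give the same result as applying BA directly. A sketch with small example matrices of our own:

```python
# Matrix multiplication as composition: c_ij is the dot product of the
# ith row of B with the jth column of A, and B(Ax) = (BA)x for every x.

def matvec(M, x):
    """Apply the matrix M to the vector x (dot each row with x)."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in M]

def matmul(B, A):
    """Product BA, where B is p x m and A is m x n; result is p x n."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))]
            for i in range(len(B))]

A = [[1, 2],
     [0, 1]]   # L : R^2 -> R^2
B = [[3, 0],
     [1, 1]]   # T : R^2 -> R^2

x = [1, 4]
print(matvec(B, matvec(A, x)))   # T(L(x))
print(matvec(matmul(B, A), x))   # (BA)x -- the same vector
```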