CHANGE OF BASIS AND ALL OF THAT

LANCE D. DRAGER

1. Introduction

The goal of these notes is to provide an apparatus for dealing with change of basis in vector spaces, matrices of linear transformations, and how the matrix changes when the basis changes. We hope this apparatus will make these computations easier to remember and work with.

To introduce the basic idea, suppose that $V$ is a vector space and $v_1, v_2, \dots, v_n$ is an ordered list of vectors from $V$. If a vector $x$ is a linear combination of $v_1, v_2, \dots, v_n$, we have

(1.1)    $x = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n$

for some scalars $c_1, c_2, \dots, c_n$. The formula (1.1) can be written in matrix notation as

(1.2)    $x = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}.$

The object $\begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$ might seem a bit strange, since it is a row vector whose entries are vectors from $V$, rather than scalars. Nonetheless, the matrix product in (1.2) makes sense. If you think about it, it makes sense to multiply a matrix of vectors with a (compatibly sized) matrix of scalars, since the entries in the product are linear combinations of vectors. It would not make sense to multiply two matrices of vectors together, unless you have some way to multiply two vectors together and get some object; you don't have that in a general vector space.

More generally than (1.2), suppose that we have vectors $x_1, x_2, \dots, x_m$ that are all linear combinations of $v_1, v_2, \dots, v_n$. Then we can write

(1.3)    $x_1 = a_{11} v_1 + a_{21} v_2 + \cdots + a_{n1} v_n$
         $x_2 = a_{12} v_1 + a_{22} v_2 + \cdots + a_{n2} v_n$
         $\vdots$
         $x_m = a_{1m} v_1 + a_{2m} v_2 + \cdots + a_{nm} v_n$

for scalars $a_{ij}$. The indices tell you that $a_{ij}$ occurs as the coefficient of $v_i$ in the expression of $x_j$. This choice of the order of the indices makes things work out nicely later. These equations can be summarized as

(1.4)    $x_j = \sum_{i=1}^{n} v_i a_{ij}, \qquad j = 1, 2, \dots, m,$

where I am writing the scalars $a_{ij}$ to the right of the vectors to make the equation easier to remember; it means the same thing as writing the scalars on the left.
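When the $v_i$ are column vectors, a row of vectors is literally a matrix and (1.3) is ordinary matrix multiplication. Here is a minimal NumPy sketch of this point of view (the vectors and coefficients are made up purely for illustration):

    import numpy as np

    # Columns of V are the vectors v_1, v_2, v_3 of a row of vectors in R^3.
    V = np.array([[1., 0., 2.],
                  [0., 1., 1.],
                  [1., 1., 0.]])

    # Column j of A holds the coefficients expressing x_j, as in (1.3).
    A = np.array([[1., 2.],
                  [0., 1.],
                  [3., 1.]])

    X = V @ A  # columns: x_1 = v_1 + 3 v_3,  x_2 = 2 v_1 + v_2 + v_3
    print(X)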

Using our previous idea, we can write (1.3) in matrix form as

(1.5)    $\begin{bmatrix} x_1 & x_2 & \cdots & x_m \end{bmatrix} = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}.$

To write (1.5) in briefer form, let $\mathcal{X} = \begin{bmatrix} x_1 & x_2 & \cdots & x_m \end{bmatrix}$, $\mathcal{V} = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$, and let $A$ be the $n \times m$ matrix with entries $a_{ij}$. Then we can write (1.5) as

(1.6)    $\mathcal{X} = \mathcal{V} A.$

The object $\mathcal{V}$ is a row vector whose entries are vectors. For brevity, we'll refer to this as a row of vectors. We'll usually use upper case script letters for a row of vectors, with the corresponding lower case bold letters for the entries. Thus, if we say that $\mathcal{U}$ is a row of $k$ vectors, you know that $\mathcal{U} = \begin{bmatrix} u_1 & u_2 & \cdots & u_k \end{bmatrix}$.

You want to remember that the short form (1.6) is equivalent to (1.3) and (1.4). It is also good to recall, from the usual definition of matrix multiplication, that (1.6) means

(1.7)    $x_j = \mathcal{V} \operatorname{col}_j(A),$

where $\operatorname{col}_j(A)$ is the column vector formed by the $j$-th column of $A$.

The multiplication between rows of vectors and matrices of scalars has the following properties, similar to matrix algebra with matrices of scalars.

Theorem 1.1. Let $V$ be a vector space and let $\mathcal{U}$ be a row of vectors from $V$. Let $A$ and $B$ denote scalar matrices, with sizes appropriate to the operation being considered. Then we have the following.
(1) $\mathcal{U} I = \mathcal{U}$, where $I$ is the identity matrix of the right size.
(2) $(\mathcal{U} A)B = \mathcal{U}(AB)$, the associative law.
(3) $(s\mathcal{U})A = \mathcal{U}(sA) = s(\mathcal{U}A)$ for any scalar $s$.
(4) $\mathcal{U}(A + B) = \mathcal{U}A + \mathcal{U}B$, the distributive law.

The proofs of these properties are the same as the proofs of the corresponding statements for the algebra of scalar matrices; they're not illuminating enough to write out here. The associative law will be particularly important in what follows.

2. Ordered Bases

Let $V$ be a vector space. An ordered basis of $V$ is an ordered list of vectors, say $v_1, v_2, \dots, v_n$, that span $V$ and are linearly independent.

We can think of this ordered list as a row of vectors, say $\mathcal{V} = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$. Since $\mathcal{V}$ is an ordered basis, we know that any vector $v \in V$ can be written uniquely as

$v = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix} = \mathcal{V} c,$

where $c$ is the column vector with entries $c_1, c_2, \dots, c_n$. Thus, another way to state the properties of an ordered basis $\mathcal{V}$ is that for every vector $v \in V$ there is a unique column vector $c \in \mathbb{R}^n$ such that $v = \mathcal{V} c$. This unique column vector is called the coordinate vector of $v$ with respect to the basis $\mathcal{V}$. We will use the notation $[v]_{\mathcal{V}}$ for this vector. Thus, the coordinate vector $[v]_{\mathcal{V}}$ is defined to be the unique column vector such that

(2.1)    $v = \mathcal{V}[v]_{\mathcal{V}}.$

The following proposition records some facts (obvious, we hope) about coordinate vectors. Proving this proposition for yourself would be an instructive exercise.

Proposition 2.1. Let $V$ be a vector space of dimension $n$ and let $\mathcal{V}$ be an ordered basis of $V$. Then the following statements hold.
(1) $[sv]_{\mathcal{V}} = s[v]_{\mathcal{V}}$ for all $v \in V$ and scalars $s$.
(2) $[v + w]_{\mathcal{V}} = [v]_{\mathcal{V}} + [w]_{\mathcal{V}}$ for all vectors $v, w \in V$.
(3) $v \mapsto [v]_{\mathcal{V}}$ is a one-to-one correspondence between $V$ and $\mathbb{R}^n$.

More generally, if $\mathcal{U}$ is a row of $m$ vectors in $V$, our discussion of (1.7) shows

(2.2)    $\mathcal{U} = \mathcal{V}A \iff \operatorname{col}_j(A) = [u_j]_{\mathcal{V}}, \qquad j = 1, 2, \dots, m.$

Since the coordinates of a vector with respect to the basis $\mathcal{V}$ are unique, we get the following fact, which is stated as a proposition for later reference.

Proposition 2.2. Let $V$ be a vector space of dimension $n$, and let $\mathcal{V}$ be an ordered basis of $V$. Then, if $\mathcal{U}$ is a row of $m$ vectors from $V$, there is a unique $n \times m$ matrix $A$ so that $\mathcal{U} = \mathcal{V}A$.
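In $\mathbb{R}^n$, finding the coordinate vector $[x]_{\mathcal{V}}$ amounts to solving the linear system $\mathcal{V}c = x$. A small NumPy sketch, with an arbitrarily chosen basis:

    import numpy as np

    # Columns of V form an (arbitrarily chosen) ordered basis of R^3.
    V = np.array([[1., 1., 0.],
                  [0., 1., 1.],
                  [1., 0., 1.]])
    x = np.array([2., 3., 5.])

    # [x]_V is the unique c with V c = x, as in (2.1).
    c = np.linalg.solve(V, x)
    print(c)
    print(np.allclose(V @ c, x))  # True: x = V [x]_V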

We can apply the machine we've built up to prove the following theorem.

Theorem 2.3. Let $V$ be a vector space of dimension $n$ and let $\mathcal{V}$ be an ordered basis of $V$. Let $\mathcal{U}$ be a row of $n$ vectors from $V$ and let $A$ be the $n \times n$ matrix such that $\mathcal{U} = \mathcal{V}A$. Then $\mathcal{U}$ is an ordered basis of $V$ if and only if $A$ is invertible.

Proof. Since $\mathcal{U}$ contains $n$ vectors, $\mathcal{U}$ is a basis if and only if its entries are linearly independent. To check this, we want to consider the equation

(2.3)    $c_1 u_1 + c_2 u_2 + \cdots + c_n u_n = 0.$

If we let $c$ be the column vector with entries $c_j$, we have

$0 = c_1 u_1 + c_2 u_2 + \cdots + c_n u_n = \mathcal{U}c = (\mathcal{V}A)c = \mathcal{V}(Ac).$

By Proposition 2.1, $\mathcal{V}(Ac)$ can be zero if and only if $Ac = 0$.

Suppose that $A$ is invertible. Then $Ac = 0$ implies that $c = 0$. Thus, all the $c_i$'s in (2.3) must be zero, so $u_1, u_2, \dots, u_n$ are linearly independent.

Conversely, suppose that $u_1, u_2, \dots, u_n$ are linearly independent. Let $c$ be a vector such that $Ac = 0$. Then (2.3) holds. By linear independence, $c$ must be $0$. Thus, the equation $Ac = 0$ has only the trivial solution, and so $A$ is invertible.

3. Change of Basis

Suppose that we have two ordered bases $\mathcal{U}$ and $\mathcal{V}$ of an $n$-dimensional vector space $V$. From the last section, there is a unique matrix $A$ so that $\mathcal{U} = \mathcal{V}A$. We will denote this matrix by $S_{\mathcal{V}\mathcal{U}}$. Thus, the defining equation of $S_{\mathcal{V}\mathcal{U}}$ is

(3.1)    $\mathcal{U} = \mathcal{V} S_{\mathcal{V}\mathcal{U}}.$

Proposition 3.1. Let $V$ be an $n$-dimensional vector space. Let $\mathcal{U}$, $\mathcal{V}$, and $\mathcal{W}$ be ordered bases of $V$. Then we have the following.
(1) $S_{\mathcal{U}\mathcal{U}} = I$.
(2) $S_{\mathcal{U}\mathcal{W}} = S_{\mathcal{U}\mathcal{V}} S_{\mathcal{V}\mathcal{W}}$.
(3) $S_{\mathcal{U}\mathcal{V}} = [S_{\mathcal{V}\mathcal{U}}]^{-1}$.

Proof. For the first statement, note that $\mathcal{U} = \mathcal{U}I$, so, by uniqueness, $S_{\mathcal{U}\mathcal{U}} = I$. For the second statement, recall the defining equations

$\mathcal{W} = \mathcal{U}S_{\mathcal{U}\mathcal{W}}, \qquad \mathcal{V} = \mathcal{U}S_{\mathcal{U}\mathcal{V}}, \qquad \mathcal{W} = \mathcal{V}S_{\mathcal{V}\mathcal{W}}.$

Compute as follows:

$\mathcal{W} = \mathcal{V}S_{\mathcal{V}\mathcal{W}} = (\mathcal{U}S_{\mathcal{U}\mathcal{V}})S_{\mathcal{V}\mathcal{W}} = \mathcal{U}(S_{\mathcal{U}\mathcal{V}}S_{\mathcal{V}\mathcal{W}}),$

and so, by uniqueness, $S_{\mathcal{U}\mathcal{W}} = S_{\mathcal{U}\mathcal{V}}S_{\mathcal{V}\mathcal{W}}$. For the last statement, set $\mathcal{W} = \mathcal{U}$ in the second statement. This yields $S_{\mathcal{U}\mathcal{V}}S_{\mathcal{V}\mathcal{U}} = S_{\mathcal{U}\mathcal{U}} = I$, so $S_{\mathcal{U}\mathcal{V}}$ and $S_{\mathcal{V}\mathcal{U}}$ are inverses of each other.
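In $\mathbb{R}^n$, where rows of vectors are just matrices (see the remarks after Example 3.3 below), the statements of Proposition 3.1 are easy to test numerically. A sketch with two illustrative bases of $\mathbb{R}^2$:

    import numpy as np

    # Two bases of R^2, stored as matrix columns.
    U = np.array([[1., 2.],
                  [2., 3.]])
    V = np.array([[1., 0.],
                  [1., 1.]])

    # U = V S_VU gives S_VU = V^{-1} U; similarly S_UV = U^{-1} V.
    S_VU = np.linalg.solve(V, U)
    S_UV = np.linalg.solve(U, V)

    # Part (3) of Proposition 3.1: the two transition matrices are inverses.
    print(np.allclose(S_UV @ S_VU, np.eye(2)))  # True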

The matrices $S_{\mathcal{U}\mathcal{V}}$ tell you how to change coordinates from one basis to another, as detailed in the following proposition.

Proposition 3.2. Let $V$ be a vector space of dimension $n$ and let $\mathcal{U}$ and $\mathcal{V}$ be ordered bases of $V$. Then we have

$[v]_{\mathcal{U}} = S_{\mathcal{U}\mathcal{V}}[v]_{\mathcal{V}}$

for all vectors $v \in V$.

Proof. The defining equations are

$v = \mathcal{U}[v]_{\mathcal{U}}, \qquad v = \mathcal{V}[v]_{\mathcal{V}}.$

Thus, we have

$v = \mathcal{V}[v]_{\mathcal{V}} = (\mathcal{U}S_{\mathcal{U}\mathcal{V}})[v]_{\mathcal{V}} = \mathcal{U}(S_{\mathcal{U}\mathcal{V}}[v]_{\mathcal{V}}),$

and so, by uniqueness, $[v]_{\mathcal{U}} = S_{\mathcal{U}\mathcal{V}}[v]_{\mathcal{V}}$.

Thus, $S_{\mathcal{U}\mathcal{V}}$ tells you how to compute the $\mathcal{U}$-coordinates of a vector from the $\mathcal{V}$-coordinates. Another way to look at this is the following diagram:

(3.2)
                  V
          [.]_V  / \  [.]_U
                v   v
              R^n ----> R^n
                  S_UV

In this diagram, $[\,\cdot\,]_{\mathcal{V}}$ represents the operation of taking coordinates with respect to $\mathcal{V}$, and $[\,\cdot\,]_{\mathcal{U}}$ represents taking coordinates with respect to $\mathcal{U}$. The arrow labeled $S_{\mathcal{U}\mathcal{V}}$ represents the operation of left multiplication by the matrix $S_{\mathcal{U}\mathcal{V}}$. We say that the diagram (3.2) commutes, meaning that the two ways of taking a vector from $V$ to the right-hand $\mathbb{R}^n$ yield the same result.

Example 3.3. Consider the space $P_3$ of polynomials of degree less than 3. Of course, $\mathcal{P} = \begin{bmatrix} 1 & x & x^2 \end{bmatrix}$ is an ordered basis of $P_3$. Consider

$\mathcal{V} = \begin{bmatrix} 2 + 3x + 2x^2 & 2x + x^2 & 2 + 2x + 2x^2 \end{bmatrix}.$

Show that $\mathcal{V}$ is a basis of $P_3$ and find the transition matrices $S_{\mathcal{P}\mathcal{V}}$ and $S_{\mathcal{V}\mathcal{P}}$. Let $p(x) = 1 + x + 5x^2$. Use the transition matrix to calculate the coordinates of $p(x)$ with respect to $\mathcal{V}$, and verify the computation.

Solution. By reading off the coefficients, we have

(3.3)    $\begin{bmatrix} 2 + 3x + 2x^2 & 2x + x^2 & 2 + 2x + 2x^2 \end{bmatrix} = \begin{bmatrix} 1 & x & x^2 \end{bmatrix} \begin{bmatrix} 2 & 0 & 2 \\ 3 & 2 & 2 \\ 2 & 1 & 2 \end{bmatrix}.$

Thus, $\mathcal{V} = \mathcal{P}A$, where $A$ is the $3 \times 3$ matrix in the previous equation. A quick check with a calculator shows that $A$ is invertible, so $\mathcal{V}$ is an ordered basis by Theorem 2.3.

Equation (3.3) shows that $S_{\mathcal{P}\mathcal{V}} = A$. But then we have $S_{\mathcal{V}\mathcal{P}} = S_{\mathcal{P}\mathcal{V}}^{-1} = A^{-1}$. A calculation shows

$S_{\mathcal{V}\mathcal{P}} = \begin{bmatrix} 1 & 1 & -2 \\ -1 & 0 & 1 \\ -1/2 & -1 & 2 \end{bmatrix}.$

Since $p(x) = 1 + x + 5x^2$, we have

$p(x) = \begin{bmatrix} 1 & x & x^2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 5 \end{bmatrix},$

so we have $[p(x)]_{\mathcal{P}} = \begin{bmatrix} 1 & 1 & 5 \end{bmatrix}^T$. What we want is $[p(x)]_{\mathcal{V}}$. From Proposition 3.2 we have

$[p(x)]_{\mathcal{V}} = S_{\mathcal{V}\mathcal{P}}[p(x)]_{\mathcal{P}} = \begin{bmatrix} 1 & 1 & -2 \\ -1 & 0 & 1 \\ -1/2 & -1 & 2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 5 \end{bmatrix} = \begin{bmatrix} -8 \\ 4 \\ 17/2 \end{bmatrix}.$

The significance of this calculation is that we are claiming that

$1 + x + 5x^2 = p(x) = \mathcal{V}[p(x)]_{\mathcal{V}} = \begin{bmatrix} 2 + 3x + 2x^2 & 2x + x^2 & 2 + 2x + 2x^2 \end{bmatrix} \begin{bmatrix} -8 \\ 4 \\ 17/2 \end{bmatrix} = -8(2 + 3x + 2x^2) + 4(2x + x^2) + \tfrac{17}{2}(2 + 2x + 2x^2).$

The reader is invited to carry out the algebra to show this is correct.

A few remarks are in order before we attempt computations in $\mathbb{R}^n$. If $\mathcal{U} = \begin{bmatrix} u_1 & u_2 & \cdots & u_m \end{bmatrix}$ is a row of vectors in $\mathbb{R}^n$, each vector $u_i$ is a column vector. We can construct an $n \times m$ matrix $U$ by letting the first column of $U$ have the entries in $u_1$, the second column of $U$ be $u_2$, and so on. The difference between the row of vectors $\mathcal{U}$ and the matrix $U$ is just one of point of view, depending on what we want to emphasize. If $\mathcal{U}$ is a row of vectors in $\mathbb{R}^n$, we'll use the notation $\operatorname{mat}(\mathcal{U})$ for the corresponding matrix. Fortunately, the two possible notions of matrix multiplication are compatible, i.e.,

$\mathcal{V} = \mathcal{U}A \iff \operatorname{mat}(\mathcal{V}) = \operatorname{mat}(\mathcal{U})A,$

since one description of the matrix multiplication $\operatorname{mat}(\mathcal{U})A$ is that the $j$-th column of $\operatorname{mat}(\mathcal{U})A$ is a linear combination of the columns of $\operatorname{mat}(\mathcal{U})$, using coefficients from the $j$-th column of $A$; and that is exactly what we're thinking when we look at $\mathcal{U}A$.
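Returning to Example 3.3, the computation is easy to reproduce in NumPy (the arrays below just restate the example's data):

    import numpy as np

    # S_PV = A from (3.3); columns hold the P-coordinates of V's entries.
    A = np.array([[2., 0., 2.],
                  [3., 2., 2.],
                  [2., 1., 2.]])

    S_VP = np.linalg.inv(A)              # S_VP = [S_PV]^{-1}
    p_P = np.array([1., 1., 5.])         # [p(x)]_P for p(x) = 1 + x + 5x^2

    p_V = S_VP @ p_P
    print(p_V)                           # [-8.   4.   8.5]
    print(np.allclose(A @ p_V, p_P))     # True: the combination reproduces p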

In $\mathbb{R}^n$ we have the standard basis $\mathcal{E} = \begin{bmatrix} e_1 & e_2 & \cdots & e_n \end{bmatrix}$, where $e_j$ is the column vector with $1$ in the $j$-th row and all other entries zero. If $x \in \mathbb{R}^n$, we have

$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = x_1 \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} + x_2 \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix} + \cdots + x_n \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix} = x_1 e_1 + x_2 e_2 + \cdots + x_n e_n = \mathcal{E}x.$

This leads to the observation that the standard basis has the property that $[x]_{\mathcal{E}} = x$ for all $x \in \mathbb{R}^n$. In terms of matrix multiplication, this is not surprising, since $x = \mathcal{E}x$ is equivalent to $x = \operatorname{mat}(\mathcal{E})x$, and $\operatorname{mat}(\mathcal{E})$ is the identity matrix.

Suppose that $\mathcal{V}$ is an ordered basis of $\mathbb{R}^n$. The transition matrix $S_{\mathcal{E}\mathcal{V}}$ is defined by $\mathcal{V} = \mathcal{E}S_{\mathcal{E}\mathcal{V}}$. This is equivalent to the matrix equation $\operatorname{mat}(\mathcal{V}) = \operatorname{mat}(\mathcal{E})S_{\mathcal{E}\mathcal{V}}$, but $\operatorname{mat}(\mathcal{E})S_{\mathcal{E}\mathcal{V}} = IS_{\mathcal{E}\mathcal{V}} = S_{\mathcal{E}\mathcal{V}}$. Thus, we have the important fact that $S_{\mathcal{E}\mathcal{V}} = \operatorname{mat}(\mathcal{V})$.

Example 3.4. Let $\mathcal{U}$ be the ordered basis of $\mathbb{R}^2$ given by

$u_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \qquad u_2 = \begin{bmatrix} 2 \\ 3 \end{bmatrix}.$

(1) Show that $\mathcal{U}$ is a basis and find $S_{\mathcal{E}\mathcal{U}}$ and $S_{\mathcal{U}\mathcal{E}}$.
(2) Let $y = \begin{bmatrix} 3 & 5 \end{bmatrix}^T$. Find $[y]_{\mathcal{U}}$ and express $y$ as a linear combination of $\mathcal{U}$.

Solution. Let $A$ be the matrix such that $\mathcal{U} = \mathcal{E}A$. Then, as above,

$A = \operatorname{mat}(\mathcal{U}) = \begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix}.$

The determinant of $A$ is $-1$, so $A$ is invertible. Then, since $\mathcal{E}$ is a basis, $\mathcal{U}$ is a basis. We have $S_{\mathcal{E}\mathcal{U}} = A$, and so

$S_{\mathcal{U}\mathcal{E}} = [S_{\mathcal{E}\mathcal{U}}]^{-1} = \begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix}^{-1} = \begin{bmatrix} -3 & 2 \\ 2 & -1 \end{bmatrix}.$

For the last part, we note that $[y]_{\mathcal{E}} = y = \begin{bmatrix} 3 & 5 \end{bmatrix}^T$, and we have

$[y]_{\mathcal{U}} = S_{\mathcal{U}\mathcal{E}}[y]_{\mathcal{E}} = \begin{bmatrix} -3 & 2 \\ 2 & -1 \end{bmatrix} \begin{bmatrix} 3 \\ 5 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$

By saying that $[y]_{\mathcal{U}} = \begin{bmatrix} 1 & 1 \end{bmatrix}^T$, we're saying that

$y = \begin{bmatrix} 3 \\ 5 \end{bmatrix} = \mathcal{U}[y]_{\mathcal{U}} = \begin{bmatrix} u_1 & u_2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = u_1 + u_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix} + \begin{bmatrix} 2 \\ 3 \end{bmatrix};$

the reader is invited to check the vector algebra to see that this claim is correct.

Example 3.5. Consider the ordered basis $\mathcal{U}$ of $\mathbb{R}^3$ given by

$u_1 = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}, \qquad u_2 = \begin{bmatrix} 3 \\ -2 \\ -1 \end{bmatrix}, \qquad u_3 = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}.$

Let $\mathcal{V}$ be the ordered basis of $\mathbb{R}^3$ given by

$v_1 = \begin{bmatrix} 2 \\ 2 \\ -1 \end{bmatrix}, \qquad v_2 = \begin{bmatrix} 2 \\ 1 \\ 6 \end{bmatrix}, \qquad v_3 = \begin{bmatrix} 0 \\ -2 \\ 4 \end{bmatrix}.$

(1) Find $S_{\mathcal{U}\mathcal{V}}$ and $S_{\mathcal{V}\mathcal{U}}$.
(2) Suppose that $[x]_{\mathcal{U}} = \begin{bmatrix} 1 & 2 & -1 \end{bmatrix}^T$. Find $[x]_{\mathcal{V}}$ and express $x$ as a linear combination of the basis $\mathcal{V}$. Verify.

Solution. Since it's easier to find the transition matrix between a given basis and $\mathcal{E}$, we go through $\mathcal{E}$. Thus, we have

$S_{\mathcal{E}\mathcal{U}} = \operatorname{mat}(\mathcal{U}) = \begin{bmatrix} -1 & 3 & 1 \\ -1 & -2 & 2 \\ -1 & -1 & 2 \end{bmatrix}, \qquad S_{\mathcal{E}\mathcal{V}} = \operatorname{mat}(\mathcal{V}) = \begin{bmatrix} 2 & 2 & 0 \\ 2 & 1 & -2 \\ -1 & 6 & 4 \end{bmatrix}.$

But then we can compute

$S_{\mathcal{U}\mathcal{V}} = S_{\mathcal{U}\mathcal{E}}S_{\mathcal{E}\mathcal{V}} = [S_{\mathcal{E}\mathcal{U}}]^{-1}S_{\mathcal{E}\mathcal{V}} = \begin{bmatrix} -1 & 3 & 1 \\ -1 & -2 & 2 \\ -1 & -1 & 2 \end{bmatrix}^{-1} \begin{bmatrix} 2 & 2 & 0 \\ 2 & 1 & -2 \\ -1 & 6 & 4 \end{bmatrix} = \begin{bmatrix} -26 & 37 & 46 \\ -3 & 5 & 6 \\ -15 & 24 & 28 \end{bmatrix}.$

From this, we have

$S_{\mathcal{V}\mathcal{U}} = [S_{\mathcal{U}\mathcal{V}}]^{-1} = \frac{1}{20} \begin{bmatrix} -4 & 68 & -8 \\ -6 & -38 & 18 \\ 3 & 69 & -19 \end{bmatrix}.$

For the second part, we're given the coordinates of $x$ with respect to $\mathcal{U}$ and we want the coordinates with respect to $\mathcal{V}$. We can calculate these by

$[x]_{\mathcal{V}} = S_{\mathcal{V}\mathcal{U}}[x]_{\mathcal{U}} = \frac{1}{20} \begin{bmatrix} -4 & 68 & -8 \\ -6 & -38 & 18 \\ 3 & 69 & -19 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix} = \begin{bmatrix} 7 \\ -5 \\ 8 \end{bmatrix}.$

To verify this, note that by saying $[x]_{\mathcal{V}} = \begin{bmatrix} 7 & -5 & 8 \end{bmatrix}^T$, we're claiming that

$x = \mathcal{V}[x]_{\mathcal{V}} = \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} \begin{bmatrix} 7 \\ -5 \\ 8 \end{bmatrix} = 7v_1 - 5v_2 + 8v_3 = 7\begin{bmatrix} 2 \\ 2 \\ -1 \end{bmatrix} - 5\begin{bmatrix} 2 \\ 1 \\ 6 \end{bmatrix} + 8\begin{bmatrix} 0 \\ -2 \\ 4 \end{bmatrix} = \begin{bmatrix} 4 \\ -7 \\ -5 \end{bmatrix}.$

On the other hand, saying $[x]_{\mathcal{U}} = \begin{bmatrix} 1 & 2 & -1 \end{bmatrix}^T$ is saying that

$x = \mathcal{U}[x]_{\mathcal{U}} = \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix} = u_1 + 2u_2 - u_3 = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix} + 2\begin{bmatrix} 3 \\ -2 \\ -1 \end{bmatrix} - \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix} = \begin{bmatrix} 4 \\ -7 \\ -5 \end{bmatrix}.$
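The arithmetic in Example 3.5 is tedious by hand; the following NumPy sketch reruns it (the matrices restate the example's data, and the final check confirms that both coordinate vectors describe the same vector $x$):

    import numpy as np

    # mat(U) and mat(V) from Example 3.5.
    U = np.array([[-1., 3., 1.],
                  [-1., -2., 2.],
                  [-1., -1., 2.]])
    V = np.array([[2., 2., 0.],
                  [2., 1., -2.],
                  [-1., 6., 4.]])

    S_UV = np.linalg.solve(U, V)      # [S_EU]^{-1} S_EV, going through E
    S_VU = np.linalg.inv(S_UV)

    x_U = np.array([1., 2., -1.])
    x_V = S_VU @ x_U
    print(x_V)                        # [ 7. -5.  8.]

    # Both coordinate vectors must describe the same vector x.
    print(np.allclose(U @ x_U, V @ x_V))  # True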

4. Matrix Representations of Linear Transformations

We want to study linear transformations between finite dimensional vector spaces. So suppose that $V$ is a vector space of dimension $n$, $W$ is a vector space of dimension $p$, and $L \colon V \to W$ is a linear transformation. Choose ordered bases $\mathcal{V}$ for $V$ and $\mathcal{W}$ for $W$. For each basis vector $v_j$ in $\mathcal{V}$, the image $L(v_j)$ is an element of $W$, and so can be expressed as a linear combination of the basis vectors in $\mathcal{W}$. Thus, we have

$L(v_1) = a_{11}w_1 + a_{21}w_2 + \cdots + a_{p1}w_p$
$L(v_2) = a_{12}w_1 + a_{22}w_2 + \cdots + a_{p2}w_p$
$\vdots$
$L(v_n) = a_{1n}w_1 + a_{2n}w_2 + \cdots + a_{pn}w_p$

for some scalars $a_{ij}$. We can rewrite this system of equations in matrix form as

(4.1)    $\begin{bmatrix} L(v_1) & L(v_2) & \cdots & L(v_n) \end{bmatrix} = \begin{bmatrix} w_1 & w_2 & \cdots & w_p \end{bmatrix} \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{p1} & a_{p2} & \cdots & a_{pn} \end{bmatrix}.$

If we introduce the notation $L(\mathcal{V}) = \begin{bmatrix} L(v_1) & L(v_2) & \cdots & L(v_n) \end{bmatrix}$, we can write (4.1) simply as

(4.2)    $L(\mathcal{V}) = \mathcal{W}A.$

The matrix $A$ is unique, and we will denote it by $[L]_{\mathcal{W}\mathcal{V}}$ and call it the matrix of $L$ with respect to $\mathcal{V}$ and $\mathcal{W}$. Thus, the defining equation of $[L]_{\mathcal{W}\mathcal{V}}$ is

$L(\mathcal{V}) = \mathcal{W}[L]_{\mathcal{W}\mathcal{V}}.$

Another way to look at the specification of this matrix is to note that (4.1) says that $L(v_j) = \mathcal{W}\operatorname{col}_j(A)$, and so $\operatorname{col}_j(A) = [L(v_j)]_{\mathcal{W}}$. Thus, another way to interpret the definition of the matrix of $L$ is

Column $j$ of $[L]_{\mathcal{W}\mathcal{V}}$ $=$ $[L(v_j)]_{\mathcal{W}}$;

in other words, column $j$ of $[L]_{\mathcal{W}\mathcal{V}}$ is the coordinates, with respect to the basis $\mathcal{W}$ in the target space, of the image under $L$ of the $j$-th basis vector in the source space.

We will need one fact about the way we've defined $L(\mathcal{V})$, which is just a restatement of the fact that $L$ is linear.

Proposition 4.1. Let $L \colon V \to W$ be a linear transformation, let $\mathcal{U}$ be any row of $k$ vectors in $V$ and let $A$ be a $k \times \ell$ matrix. Then

(4.3)    $L(\mathcal{U}A) = L(\mathcal{U})A.$

Proof. Let $\mathcal{V} = \mathcal{U}A$. Then we have

$v_j = \mathcal{U}\operatorname{col}_j(A) = \sum_{i=1}^{k} u_i a_{ij}.$

Thus, since $L$ is linear, we have

$L(v_j) = L\Bigl(\sum_{i=1}^{k} u_i a_{ij}\Bigr) = \sum_{i=1}^{k} L(u_i)a_{ij} = L(\mathcal{U})\operatorname{col}_j(A).$

Thus, the $j$-th entry of the left-hand side of (4.3) is equal to the $j$-th entry of the right-hand side, for all $j = 1, 2, \dots, \ell$.
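The column-by-column description of $[L]_{\mathcal{W}\mathcal{V}}$ translates directly into code: for each basis vector $v_j$ of the source, solve for the $\mathcal{W}$-coordinates of $L(v_j)$. A NumPy sketch with a made-up map $L \colon \mathbb{R}^2 \to \mathbb{R}^3$ and made-up bases:

    import numpy as np

    # A made-up linear map L: R^2 -> R^3, given by its standard matrix.
    M = np.array([[1., 0.],
                  [1., 1.],
                  [0., 2.]])

    V = np.array([[1., 1.],      # basis of the source R^2 (columns)
                  [0., 1.]])
    W = np.array([[1., 0., 0.],  # basis of the target R^3 (columns)
                  [1., 1., 0.],
                  [0., 1., 1.]])

    # Column j of [L]_WV is [L(v_j)]_W: solve W a = L(v_j) for each j.
    L_WV = np.column_stack([np.linalg.solve(W, M @ V[:, j])
                            for j in range(V.shape[1])])
    print(L_WV)  # equivalently: np.linalg.solve(W, M @ V)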

Next, we want to describe the action of $L$ on elements of $V$ in terms of coordinates. To do this, let $x \in V$ be arbitrary, so we have $x = \mathcal{V}[x]_{\mathcal{V}}$. Then, we have

$L(x) = L(\mathcal{V}[x]_{\mathcal{V}}) = L(\mathcal{V})[x]_{\mathcal{V}} = (\mathcal{W}[L]_{\mathcal{W}\mathcal{V}})[x]_{\mathcal{V}} = \mathcal{W}([L]_{\mathcal{W}\mathcal{V}}[x]_{\mathcal{V}}).$

On the other hand, the defining equation of $[L(x)]_{\mathcal{W}}$ is $L(x) = \mathcal{W}[L(x)]_{\mathcal{W}}$. Comparing with the last computation, we get the important equation

(4.4)    $[L(x)]_{\mathcal{W}} = [L]_{\mathcal{W}\mathcal{V}}[x]_{\mathcal{V}}.$

Thus the matrix $[L]_{\mathcal{W}\mathcal{V}}$ tells you how to compute the coordinates of $L(x)$ from the coordinates of $x$.

Another way to look at (4.4) is the following. Let $n$ be the dimension of $V$ and let $p$ be the dimension of $W$. Then (4.4) is equivalent to the fact that the following diagram commutes:

(4.5)
                   L
              V ------> W
        [.]_V |         | [.]_W
              v         v
             R^n -----> R^p
                [L]_WV

where the arrow labeled $[L]_{\mathcal{W}\mathcal{V}}$ means the operation of left multiplication by that matrix. The vertical arrows are each a one-to-one correspondence, so the diagram says that $L$ and multiplication by $[L]_{\mathcal{W}\mathcal{V}}$ are the same thing under the correspondence.

Example 4.2. Let $D \colon P_3 \to P_3$ be the derivative operator. Find the matrix of $D$ with respect to the basis $\mathcal{U} = \begin{bmatrix} 1 & x & x^2 \end{bmatrix}$. Use this matrix to compute $D$ of $p(x) = 5x^2 + 2x + 3$.

Solution. To find $[D]_{\mathcal{U}\mathcal{U}}$ we want to find the matrix that satisfies $D(\mathcal{U}) = \mathcal{U}[D]_{\mathcal{U}\mathcal{U}}$. But we have

$D(\mathcal{U}) = \begin{bmatrix} 0 & 1 & 2x \end{bmatrix} = \begin{bmatrix} 1 & x & x^2 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix}.$

Thus, we've calculated that

$[D]_{\mathcal{U}\mathcal{U}} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix}.$

Now consider $p(x)$. We have

$p(x) = 5x^2 + 2x + 3 = \begin{bmatrix} 1 & x & x^2 \end{bmatrix} \begin{bmatrix} 3 \\ 2 \\ 5 \end{bmatrix}.$

Thus, $[p(x)]_{\mathcal{U}} = \begin{bmatrix} 3 & 2 & 5 \end{bmatrix}^T$. But then we have

$[p'(x)]_{\mathcal{U}} = [D(p(x))]_{\mathcal{U}} = [D]_{\mathcal{U}\mathcal{U}}[p(x)]_{\mathcal{U}} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 3 \\ 2 \\ 5 \end{bmatrix} = \begin{bmatrix} 2 \\ 10 \\ 0 \end{bmatrix},$

which tells us the coordinates of $p'(x)$. But the coordinates of $p'(x)$ tell us what $p'(x)$ is, namely,

$p'(x) = \mathcal{U}[p'(x)]_{\mathcal{U}} = \begin{bmatrix} 1 & x & x^2 \end{bmatrix} \begin{bmatrix} 2 \\ 10 \\ 0 \end{bmatrix} = 10x + 2,$

which is, of course, what you get from taking the derivative of $p(x)$ using the rules from calculus.
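A quick machine check of Example 4.2. The derivative is also available from NumPy's polynomial utilities, which store coefficients in the same low-degree-first order as $[\,\cdot\,]_{\mathcal{U}}$:

    import numpy as np

    # [D]_UU from Example 4.2, with U = [1, x, x^2].
    D_UU = np.array([[0., 1., 0.],
                     [0., 0., 2.],
                     [0., 0., 0.]])

    p_U = np.array([3., 2., 5.])   # p(x) = 5x^2 + 2x + 3, low degree first
    print(D_UU @ p_U)              # [ 2. 10.  0.]  i.e. p'(x) = 10x + 2

    # The same derivative from NumPy's polynomial utilities.
    print(np.polynomial.polynomial.polyder([3., 2., 5.]))  # [ 2. 10.]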

5. Composition of Linear Transformations

The next proposition says that the composition of two linear transformations is a linear transformation.

Proposition 5.1. Let $T \colon U \to V$ and $S \colon V \to W$ be linear transformations, where $U$, $V$ and $W$ are vector spaces. Let $L \colon U \to W$ be defined by $L = S \circ T$, i.e., $L(u) = S(T(u))$. Then $L$ is a linear transformation.

Proof. We have to show that $L$ preserves addition and scalar multiplication. To show $L$ preserves addition, let $u_1$ and $u_2$ be vectors in $U$. Then we have

$L(u_1 + u_2) = S(T(u_1 + u_2))$         by the definition of $L$
$\phantom{L(u_1 + u_2)} = S(T(u_1) + T(u_2))$    since $T$ is linear
$\phantom{L(u_1 + u_2)} = S(T(u_1)) + S(T(u_2))$  since $S$ is linear
$\phantom{L(u_1 + u_2)} = L(u_1) + L(u_2)$        by the definition of $L$.

Thus, $L(u_1 + u_2) = L(u_1) + L(u_2)$ and $L$ preserves addition.

For scalar multiplication, let $u$ be a vector in $U$ and let $\alpha$ be a scalar. Then we have

$L(\alpha u) = S(T(\alpha u))$    by the definition of $L$
$\phantom{L(\alpha u)} = S(\alpha T(u))$   since $T$ is linear
$\phantom{L(\alpha u)} = \alpha S(T(u))$   since $S$ is linear
$\phantom{L(\alpha u)} = \alpha L(u)$      by the definition of $L$.

Thus, $L(\alpha u) = \alpha L(u)$, so $L$ preserves scalar multiplication. This completes the proof that $L$ is linear.

The next proposition says that the matrix of a composition of linear transformations is the product of the matrices of the transformations.

Proposition 5.2. Let $T \colon U \to V$ and $S \colon V \to W$ be linear transformations, where $U$, $V$ and $W$ are vector spaces. Let $\mathcal{U}$ be an ordered basis for $U$, $\mathcal{V}$ an ordered basis for $V$, and $\mathcal{W}$ an ordered basis for $W$. Then we have

$[S \circ T]_{\mathcal{W}\mathcal{U}} = [S]_{\mathcal{W}\mathcal{V}}[T]_{\mathcal{V}\mathcal{U}}.$

Proof. The defining equations are

(5.1)    $(S \circ T)(\mathcal{U}) = \mathcal{W}[S \circ T]_{\mathcal{W}\mathcal{U}}$
(5.2)    $T(\mathcal{U}) = \mathcal{V}[T]_{\mathcal{V}\mathcal{U}}$
(5.3)    $S(\mathcal{V}) = \mathcal{W}[S]_{\mathcal{W}\mathcal{V}}.$

To derive (5.1), begin by applying $S$ to both sides of (5.2). This gives

$(S \circ T)(\mathcal{U}) = S(T(\mathcal{U})) = S(\mathcal{V}[T]_{\mathcal{V}\mathcal{U}})$    by (5.2)
$\phantom{(S \circ T)(\mathcal{U})} = S(\mathcal{V})[T]_{\mathcal{V}\mathcal{U}}$          by Proposition 4.1
$\phantom{(S \circ T)(\mathcal{U})} = (\mathcal{W}[S]_{\mathcal{W}\mathcal{V}})[T]_{\mathcal{V}\mathcal{U}}$  by (5.3)
$\phantom{(S \circ T)(\mathcal{U})} = \mathcal{W}([S]_{\mathcal{W}\mathcal{V}}[T]_{\mathcal{V}\mathcal{U}}).$

Thus, we have

$(S \circ T)(\mathcal{U}) = \mathcal{W}([S]_{\mathcal{W}\mathcal{V}}[T]_{\mathcal{V}\mathcal{U}}),$

and comparing this with the defining equation (5.1) shows that $[S \circ T]_{\mathcal{W}\mathcal{U}} = [S]_{\mathcal{W}\mathcal{V}}[T]_{\mathcal{V}\mathcal{U}}$.
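In coordinates, Proposition 5.2 reduces to the associativity of matrix multiplication, which is easy to illustrate numerically (the matrices below are random stand-ins for $[S]_{\mathcal{W}\mathcal{V}}$ and $[T]_{\mathcal{V}\mathcal{U}}$):

    import numpy as np

    rng = np.random.default_rng(0)

    # Random stand-ins for [S]_WV (2x3) and [T]_VU (3x2).
    S_WV = rng.integers(-3, 4, size=(2, 3)).astype(float)
    T_VU = rng.integers(-3, 4, size=(3, 2)).astype(float)

    u = rng.standard_normal(2)    # [u]_U for some vector u

    # Applying T, then S, in coordinates agrees with the product matrix.
    print(np.allclose(S_WV @ (T_VU @ u), (S_WV @ T_VU) @ u))  # True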

6. Change of Basis for Linear Transformations

This computation addresses the question of how the matrix of a linear transformation changes when we change the bases we're using.

Proposition 6.1. Let $L \colon V \to W$ be a linear transformation, where $V$ and $W$ are vector spaces of dimension $n$ and $m$ respectively. Let $\mathcal{U}$ and $\mathcal{V}$ be ordered bases for $V$ and let $\mathcal{X}$ and $\mathcal{Y}$ be ordered bases for $W$. Then

(6.1)    $[L]_{\mathcal{Y}\mathcal{V}} = S_{\mathcal{Y}\mathcal{X}}[L]_{\mathcal{X}\mathcal{U}}S_{\mathcal{U}\mathcal{V}}.$

So, if we know $[L]_{\mathcal{X}\mathcal{U}}$ and then decide to use new bases $\mathcal{V}$ for $V$ and $\mathcal{Y}$ for $W$, this formula tells us how to compute the matrix of $L$ with respect to the new bases from the matrix with respect to the old bases.

Proof. The defining equations of the two matrices of $L$ are

(6.2)    $L(\mathcal{U}) = \mathcal{X}[L]_{\mathcal{X}\mathcal{U}}$
(6.3)    $L(\mathcal{V}) = \mathcal{Y}[L]_{\mathcal{Y}\mathcal{V}}.$

The defining equations for the two transition matrices in (6.1) are

(6.4)    $\mathcal{X} = \mathcal{Y}S_{\mathcal{Y}\mathcal{X}}$
(6.5)    $\mathcal{V} = \mathcal{U}S_{\mathcal{U}\mathcal{V}}.$

Then, we have

$L(\mathcal{V}) = L(\mathcal{U}S_{\mathcal{U}\mathcal{V}})$    by (6.5)
$\phantom{L(\mathcal{V})} = L(\mathcal{U})S_{\mathcal{U}\mathcal{V}}$
$\phantom{L(\mathcal{V})} = (\mathcal{X}[L]_{\mathcal{X}\mathcal{U}})S_{\mathcal{U}\mathcal{V}}$    by (6.2)
$\phantom{L(\mathcal{V})} = \mathcal{X}([L]_{\mathcal{X}\mathcal{U}}S_{\mathcal{U}\mathcal{V}})$
$\phantom{L(\mathcal{V})} = (\mathcal{Y}S_{\mathcal{Y}\mathcal{X}})([L]_{\mathcal{X}\mathcal{U}}S_{\mathcal{U}\mathcal{V}})$    by (6.4)
$\phantom{L(\mathcal{V})} = \mathcal{Y}(S_{\mathcal{Y}\mathcal{X}}[L]_{\mathcal{X}\mathcal{U}}S_{\mathcal{U}\mathcal{V}}).$

Thus, we have $L(\mathcal{V}) = \mathcal{Y}(S_{\mathcal{Y}\mathcal{X}}[L]_{\mathcal{X}\mathcal{U}}S_{\mathcal{U}\mathcal{V}})$. Comparing with the defining equation (6.3), we see that $[L]_{\mathcal{Y}\mathcal{V}} = S_{\mathcal{Y}\mathcal{X}}[L]_{\mathcal{X}\mathcal{U}}S_{\mathcal{U}\mathcal{V}}$.

This equation, and much of our work above, can be brought together in the commutative "pup tent" diagram. The ridge of the tent is $L \colon V \to W$; the triangular ends of the tent are the diagrams corresponding to (3.2), with $V$ over the two left corners and $W$ over the two right corners; the slanted rectangular sides of the tent are the diagrams (4.5); and the rectangular floor is

              [L]_XU
         R^n -------> R^m
          |            |
     S_VU |            | S_YX
          v            v
         R^n -------> R^m
              [L]_YV

The fact that the floor commutes is exactly (6.1).

If we consider a linear operator $L \colon V \to V$ from a vector space to itself, we usually use the same basis for both sides. Thus, if $\mathcal{U}$ is a basis for $V$, the matrix $[L]_{\mathcal{U}\mathcal{U}}$ of $L$ with respect to $\mathcal{U}$ is defined by

$L(\mathcal{U}) = \mathcal{U}[L]_{\mathcal{U}\mathcal{U}}$

and the action of $L$ in coordinates with respect to this basis is given by

$[L(v)]_{\mathcal{U}} = [L]_{\mathcal{U}\mathcal{U}}[v]_{\mathcal{U}}.$

If we have another basis $\mathcal{V}$ of $V$, we can use (6.1) to find

(6.6)    $[L]_{\mathcal{V}\mathcal{V}} = S_{\mathcal{V}\mathcal{U}}[L]_{\mathcal{U}\mathcal{U}}S_{\mathcal{U}\mathcal{V}}.$

If we've been working with the basis $\mathcal{U}$, we're most likely to know the transition matrix $S_{\mathcal{U}\mathcal{V}}$, since it is defined by $\mathcal{V} = \mathcal{U}S_{\mathcal{U}\mathcal{V}}$, i.e., it tells us how to express the new basis elements in $\mathcal{V}$ as linear combinations of the old basis elements in $\mathcal{U}$. But then we also know $S_{\mathcal{V}\mathcal{U}}$, since $S_{\mathcal{V}\mathcal{U}} = S_{\mathcal{U}\mathcal{V}}^{-1}$. Thus, we can rewrite (6.6) as

$[L]_{\mathcal{V}\mathcal{V}} = S_{\mathcal{U}\mathcal{V}}^{-1}[L]_{\mathcal{U}\mathcal{U}}S_{\mathcal{U}\mathcal{V}}.$
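A numerical sketch of (6.6): conjugate $[L]_{\mathcal{U}\mathcal{U}}$ by $S_{\mathcal{U}\mathcal{V}}$ (the two matrices below are made up, with $S_{\mathcal{U}\mathcal{V}}$ invertible). Similar matrices must have the same eigenvalues, which gives a quick sanity check:

    import numpy as np

    # Made-up data: [L]_UU and an invertible S_UV.
    L_UU = np.array([[1., 1.],
                     [0., 2.]])
    S_UV = np.array([[1., 1.],
                     [1., 2.]])

    # (6.6): [L]_VV = S_UV^{-1} [L]_UU S_UV.
    L_VV = np.linalg.solve(S_UV, L_UU @ S_UV)
    print(L_VV)

    # Similar matrices share eigenvalues.
    print(np.sort(np.linalg.eigvals(L_UU)), np.sort(np.linalg.eigvals(L_VV)))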

Example 6.2. Recall the operator $D \colon P_3 \to P_3$ from Example 4.2. Let $\mathcal{U}$ be the basis in Example 4.2 and let

$\mathcal{V} = \begin{bmatrix} x^2 + x + 2 & 4x^2 + 5x + 6 & x^2 + x + 1 \end{bmatrix}.$

Show that $\mathcal{V}$ is a basis of $P_3$. Find the matrix of $D$ with respect to $\mathcal{V}$. Let $p(x)$ be the polynomial such that $[p(x)]_{\mathcal{V}} = \begin{bmatrix} -1 & 1 & 2 \end{bmatrix}^T$. Find the coordinates with respect to $\mathcal{V}$ of $p'(x)$. Verify the computation.

Solution. The basis $\mathcal{U}$ from Example 4.2 was $\mathcal{U} = \begin{bmatrix} 1 & x & x^2 \end{bmatrix}$. To see if $\mathcal{V}$ is a basis, we can write $\mathcal{V} = \mathcal{U}A$ for some matrix $A$. In fact, we have

$\begin{bmatrix} x^2 + x + 2 & 4x^2 + 5x + 6 & x^2 + x + 1 \end{bmatrix} = \begin{bmatrix} 1 & x & x^2 \end{bmatrix} \begin{bmatrix} 2 & 6 & 1 \\ 1 & 5 & 1 \\ 1 & 4 & 1 \end{bmatrix}.$

Thus, we see that

$A = \begin{bmatrix} 2 & 6 & 1 \\ 1 & 5 & 1 \\ 1 & 4 & 1 \end{bmatrix}.$

Punching it into the calculator shows that $A$ is invertible. This implies that $\mathcal{V}$ is a basis, and we have $S_{\mathcal{U}\mathcal{V}} = A$. From Example 4.2, we recall that

$[D]_{\mathcal{U}\mathcal{U}} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix}.$

From our work above, we find that

$[D]_{\mathcal{V}\mathcal{V}} = S_{\mathcal{V}\mathcal{U}}[D]_{\mathcal{U}\mathcal{U}}S_{\mathcal{U}\mathcal{V}} = S_{\mathcal{U}\mathcal{V}}^{-1}[D]_{\mathcal{U}\mathcal{U}}S_{\mathcal{U}\mathcal{V}} = \begin{bmatrix} 2 & 6 & 1 \\ 1 & 5 & 1 \\ 1 & 4 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 2 & 6 & 1 \\ 1 & 5 & 1 \\ 1 & 4 & 1 \end{bmatrix} = \begin{bmatrix} -3 & -11 & -3 \\ 2 & 8 & 2 \\ -5 & -21 & -5 \end{bmatrix}.$

Next, we compute $[p'(x)]_{\mathcal{V}}$. We're given that $[p(x)]_{\mathcal{V}} = \begin{bmatrix} -1 & 1 & 2 \end{bmatrix}^T$. Thus,

$[p'(x)]_{\mathcal{V}} = [D(p(x))]_{\mathcal{V}} = [D]_{\mathcal{V}\mathcal{V}}[p(x)]_{\mathcal{V}} = \begin{bmatrix} -3 & -11 & -3 \\ 2 & 8 & 2 \\ -5 & -21 & -5 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} -14 \\ 10 \\ -26 \end{bmatrix},$

so we have $[p'(x)]_{\mathcal{V}} = \begin{bmatrix} -14 & 10 & -26 \end{bmatrix}^T$.

To verify this computation, note that our computation of $[p'(x)]_{\mathcal{V}}$ implies that

$p'(x) = \mathcal{V}[p'(x)]_{\mathcal{V}} = \begin{bmatrix} x^2 + x + 2 & 4x^2 + 5x + 6 & x^2 + x + 1 \end{bmatrix} \begin{bmatrix} -14 \\ 10 \\ -26 \end{bmatrix} = -14(x^2 + x + 2) + 10(4x^2 + 5x + 6) - 26(x^2 + x + 1) = 10x + 6$

(the reader is invited to check the algebra on the last step). On the other hand, we know $[p(x)]_{\mathcal{V}}$, so

$p(x) = \mathcal{V}[p(x)]_{\mathcal{V}} = \begin{bmatrix} x^2 + x + 2 & 4x^2 + 5x + 6 & x^2 + x + 1 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \\ 2 \end{bmatrix} = -(x^2 + x + 2) + (4x^2 + 5x + 6) + 2(x^2 + x + 1) = 5x^2 + 6x + 6$

(as the reader is invited to check). Thus, we see that we have computed $p'(x)$ correctly.

Before doing an example in $\mathbb{R}^n$, some remarks are in order. Suppose that we have a linear transformation $L \colon \mathbb{R}^n \to \mathbb{R}^m$. Let $\mathcal{E}_n$ be the standard basis of $\mathbb{R}^n$ and $\mathcal{E}_m$ be the standard basis of $\mathbb{R}^m$. The matrix $[L]_{\mathcal{E}_m\mathcal{E}_n}$ of $L$ with respect to the standard bases is defined by

$L(\mathcal{E}_n) = \mathcal{E}_m[L]_{\mathcal{E}_m\mathcal{E}_n}.$

The equivalent matrix equation is

$\operatorname{mat}(L(\mathcal{E}_n)) = \operatorname{mat}(\mathcal{E}_m)[L]_{\mathcal{E}_m\mathcal{E}_n} = I[L]_{\mathcal{E}_m\mathcal{E}_n} = [L]_{\mathcal{E}_m\mathcal{E}_n}.$

Thus, $[L]_{\mathcal{E}_m\mathcal{E}_n}$ is the matrix

$\operatorname{mat}(L(\mathcal{E}_n)) = \operatorname{mat}\bigl(\begin{bmatrix} L(e_1) & L(e_2) & \cdots & L(e_n) \end{bmatrix}\bigr),$

i.e., the matrix whose columns are the column vectors $L(e_1), L(e_2), \dots, L(e_n)$. This is just the way we've previously described the standard matrix of $L$. Since $[x]_{\mathcal{E}_n} = x$ for $x \in \mathbb{R}^n$ and $[y]_{\mathcal{E}_m} = y$ for $y \in \mathbb{R}^m$, the equation $[L(x)]_{\mathcal{E}_m} = [L]_{\mathcal{E}_m\mathcal{E}_n}[x]_{\mathcal{E}_n}$ for the coordinate action of $L$ just becomes $L(x) = [L]_{\mathcal{E}_m\mathcal{E}_n}x$, again agreeing with our discussion of the standard matrix of a linear transformation.
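Returning to Example 6.2, here is the same computation done in NumPy (the arrays restate the example's data):

    import numpy as np

    # Example 6.2: A = S_UV, and [D]_UU from Example 4.2.
    A = np.array([[2., 6., 1.],
                  [1., 5., 1.],
                  [1., 4., 1.]])
    D_UU = np.array([[0., 1., 0.],
                     [0., 0., 2.],
                     [0., 0., 0.]])

    D_VV = np.linalg.solve(A, D_UU @ A)   # S_UV^{-1} [D]_UU S_UV
    print(D_VV)       # [[ -3. -11.  -3.], [  2.   8.   2.], [ -5. -21.  -5.]]

    p_V = np.array([-1., 1., 2.])
    print(D_VV @ p_V)  # [-14.  10. -26.] = [p'(x)]_V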

Example 6.3. Let $T \colon \mathbb{R}^2 \to \mathbb{R}^2$ be the linear transformation whose standard matrix is

$A = \begin{bmatrix} 14/5 & 2/5 \\ 2/5 & 11/5 \end{bmatrix}.$

Let $\mathcal{U}$ be the basis of $\mathbb{R}^2$ with

$u_1 = \begin{bmatrix} 1 \\ -2 \end{bmatrix}, \qquad u_2 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}.$

Find the matrix of $T$ with respect to $\mathcal{U}$. Discuss the significance of this matrix.

Solution. Letting $\mathcal{E}$ denote the standard basis of $\mathbb{R}^2$, we have $[T]_{\mathcal{E}\mathcal{E}} = A$, where $A$ is given above. As usual in $\mathbb{R}^n$, we have

$S_{\mathcal{E}\mathcal{U}} = \operatorname{mat}(\mathcal{U}) = \begin{bmatrix} 1 & 2 \\ -2 & 1 \end{bmatrix}.$

Thus, we have

$[T]_{\mathcal{U}\mathcal{U}} = S_{\mathcal{U}\mathcal{E}}[T]_{\mathcal{E}\mathcal{E}}S_{\mathcal{E}\mathcal{U}} = S_{\mathcal{E}\mathcal{U}}^{-1}[T]_{\mathcal{E}\mathcal{E}}S_{\mathcal{E}\mathcal{U}} = \begin{bmatrix} 1 & 2 \\ -2 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 14/5 & 2/5 \\ 2/5 & 11/5 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ -2 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix},$

so we have

$[T]_{\mathcal{U}\mathcal{U}} = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}.$

This is a diagonal matrix, i.e., the only nonzero entries are on the main diagonal. This has a special significance. The defining equation $T(\mathcal{U}) = \mathcal{U}[T]_{\mathcal{U}\mathcal{U}}$ becomes

$\begin{bmatrix} T(u_1) & T(u_2) \end{bmatrix} = \begin{bmatrix} u_1 & u_2 \end{bmatrix} \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix} = \begin{bmatrix} 2u_1 & 3u_2 \end{bmatrix}.$

Thus, $T(u_1) = 2u_1$ and $T(u_2) = 3u_2$. Thus, geometrically, $T$ is a dilation by a factor of 2 in the $u_1$ direction and a dilation by a factor of 3 in the $u_2$ direction. Also, if $c = [x]_{\mathcal{U}}$, then

$[T(x)]_{\mathcal{U}} = [T]_{\mathcal{U}\mathcal{U}}[x]_{\mathcal{U}} = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 2c_1 \\ 3c_2 \end{bmatrix}.$

Thus, $T$ is easier to understand in this coordinate system than in the standard coordinate system.

The same sort of reasoning leads to an important concept. Let $A$ and $B$ be $n \times n$ matrices. We say $A$ and $B$ are similar if there is a nonsingular matrix $P$ such that $B = P^{-1}AP$. To see the significance of this concept, let $T \colon \mathbb{R}^n \to \mathbb{R}^n$ be the linear transformation whose standard matrix is $A$. Let $\mathcal{P}$ be the row of vectors in $\mathbb{R}^n$ formed by the columns of $P$. Then we have $\mathcal{P} = \mathcal{E}P$ (why?). Since $P$ is invertible, $\mathcal{P}$ is a basis and $S_{\mathcal{E}\mathcal{P}} = P$. The matrix of $T$ with respect to $\mathcal{P}$ is given by

$[T]_{\mathcal{P}\mathcal{P}} = S_{\mathcal{P}\mathcal{E}}[T]_{\mathcal{E}\mathcal{E}}S_{\mathcal{E}\mathcal{P}} = S_{\mathcal{E}\mathcal{P}}^{-1}[T]_{\mathcal{E}\mathcal{E}}S_{\mathcal{E}\mathcal{P}} = P^{-1}AP = B.$

Thus, $[T]_{\mathcal{P}\mathcal{P}} = B$. The conclusion is that if $B = P^{-1}AP$, the matrices $A$ and $B$ represent the same linear transformation with respect to different bases. Indeed, if $A$ is the matrix of the transformation with respect to the standard basis, then $B$ is the matrix of the same transformation with respect to the basis formed by the columns of $P$. If we can find $P$ so that $P^{-1}AP = D$, where $D$ is a diagonal matrix, then in the new basis formed by the columns of $P$, the corresponding transformation is just a dilation along each of the basis directions, and so is easier to understand in the new coordinate system.

Department of Mathematics, Texas Tech University, Lubbock, TX 79409-1042
URL: http://www.math.ttu.edu/~drager
E-mail address: lance.drager@ttu.edu

8 LANCE D DRAGER The same sort of reasoning leads to an important concept Let A and B be n n matrices We say A and B are similar if there is a nonsingular matrix P such that B = P AP To see the significance of this concept, let T : R n R n be the linear transformation whose standard matrix is A Let P the the row of vectors in R n formed by the columns of P Then we have P = EP (why?) Since P is invertible, P is a basis and S EP = P The matrix of T with respect to P is given by [T ] PP = S PE [T ] EE S EP = S EP [T ] EE S EP = P AP = B Thus, [T ] PP = B The conclusion is that if B = P AP, the matrices A and B represent the same linear transformation with respect to different bases Indeed, if A is the matrix of the transformation with respect to the standard basis, then B is the matrix of the same transformation with respect to the basis formed by the columns of P If we can find P so that P AP = D, where D is a diagonal matrix, then in the new basis formed by the columns of P, the corresponding transformation is just a dilation along each of the basis directions, and so is easier to understand in the new coordinate system Department of Mathematics, Texas Tech University, Lubbock, TX 79409-04 URL: http://wwwmathttuedu/~drager E-mail address: lancedrager@ttuedu