LECTURE 7: LINEAR TRANSFORMATION (CHAPTER 4 IN THE BOOK)

Everything marked as optional is not required by the course syllabus. In this lecture, F is a fixed field and all vector spaces are over F. One can assume F = R or C.

1. Definitions

1.1. Linear maps.

Definition 1.1. A mapping L from a vector space V into a vector space W is said to be a linear transformation (or linear map, or linear homomorphism) if

    L(αv_1 + βv_2) = αL(v_1) + βL(v_2)

for all v_1, v_2 ∈ V and for all scalars α and β.

Remark:
1. A map L from V to W is denoted by L : V → W.
2. By the definition, "L is a linear map" is equivalent to the two conditions
   (a) L(v_1 + v_2) = L(v_1) + L(v_2), and
   (b) L(αv) = αL(v).
3. If L : V → W is a linear map, then we have
   (a) L(0_V) = 0_W;
   (b) L(α_1 v_1 + α_2 v_2 + ... + α_k v_k) = α_1 L(v_1) + α_2 L(v_2) + ... + α_k L(v_k) (the image of a linear combination of some vectors is the same linear combination of the images of these vectors);
   (c) L(-v) = -L(v).
4. A linear map L : V → V (from V to V itself) is called a linear operator on V, or a linear endomorphism of V.

Example 1.2.
(1) The identity map L : V → V defined by L(v) = v.
(2) The map L : V → V defined by L(v) = 2v is a linear map / linear operator on V (this is a scalar multiplication).
(3) The map L : V → V defined by L(v) = v + a is not a linear operator on V unless a is the zero vector 0.
(4) Rotation: let R_θ : R^2 → R^2 be the map that sends a vector in the 2d plane to its rotation by angle θ counterclockwise. Then

    R_θ((x, y)^T) = (x cos θ - y sin θ, x sin θ + y cos θ)^T
                  = ( cos θ  -sin θ ) ( x )
                    ( sin θ   cos θ ) ( y ).

    (A numerical check of this example is sketched right after the remaining examples below.)

(5) In general, let A be an m × n matrix. The map L_A defined by

    L_A : F^n → F^m,   x ↦ Ax,

is a linear map from F^n to F^m. The claim follows from the arithmetic of matrices:

    L_A(αx_1 + βx_2) = A(αx_1 + βx_2) = A(αx_1) + A(βx_2) = αAx_1 + βAx_2 = αL_A(x_1) + βL_A(x_2).

(6) Recall that F[x] is the space of polynomials with coefficients in F. Let p = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 be a polynomial in F[x]. The derivative map D : F[x] → F[x] defined by

    D(p) = (d/dx) p = n a_n x^{n-1} + (n-1) a_{n-1} x^{n-2} + ... + a_1

is a linear map. Similarly, indefinite integration is a linear map I : F[x] → F[x] defined by

    I(p) = ∫ p dx = (1/(n+1)) a_n x^{n+1} + (1/n) a_{n-1} x^n + ... + (1/2) a_1 x^2 + a_0 x.

The following examples are about function spaces; you may skip them if you wish.

(7) Recall that C^k([a, b]) is the space of k-times continuously differentiable functions on the interval [a, b]. Then D : C^n([a, b]) → C^{n-1}([a, b]) defined by D(f) = (d/dx) f is a linear map.
(8) Now consider integration: L : C([a, b]) → R defined by L(f) = ∫_a^b f(x) dx is a linear transformation.
(9) Fix a point x_0 ∈ [a, b]. The evaluation map eval_{x_0} : C([a, b]) → R given by eval_{x_0}(f) = f(x_0) is a linear map.
(10) Definite integration gives linear maps:

    L : C([a, b]) → R defined by L(f) = ∫_a^b f(x) dx;
    I : C([a, b]) → C^1([a, b]) defined by (I(f))(t) = ∫_a^t f(x) dx.
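As mentioned in Example 1.2(4), the rotation example can be checked numerically. The following minimal Python/NumPy sketch is not part of the original notes; the angle θ = π/3 and the two test vectors are arbitrary choices made only for illustration. It verifies the defining identity of Definition 1.1 for L = R_θ, and the same check applies verbatim to any matrix map L_A as in Example (5).

    # Minimal sketch: the rotation R_theta of Example 1.2(4) as a matrix, and a
    # numerical check of L(a*v1 + b*v2) = a*L(v1) + b*L(v2) from Definition 1.1.
    import numpy as np

    theta = np.pi / 3  # rotate by 60 degrees counterclockwise (arbitrary choice)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    v1 = np.array([1.0, 2.0])
    v2 = np.array([-3.0, 0.5])
    alpha, beta = 2.0, -1.5

    lhs = R @ (alpha * v1 + beta * v2)      # L applied to the linear combination
    rhs = alpha * (R @ v1) + beta * (R @ v2)  # the same linear combination of images
    print(np.allclose(lhs, rhs))            # True, up to floating-point rounding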

1.2. Kernel and image.

Definition 1.3. Let L : V → W be a linear map. The kernel of L, denoted ker(L), is defined by

    ker(L) = { v ∈ V : L(v) = 0_W }.

Definition 1.4. Let L : V → W be a linear map and let S be a subspace of V. The image of S, denoted L(S), is defined by

    L(S) = { w ∈ W : w = L(v) for some v ∈ S }.

The image of the entire vector space, image(L) = L(V), is called the range or image of L.

Lemma 1.5. If L : V → W is a linear transformation and S is a subspace of V, then
(i) ker(L) is a subspace of V;
(ii) L(S) is a subspace of W.

Proof. (1) It is obvious that ker(L) is nonempty, since 0_V ∈ ker(L). Now we show that ker(L) is closed under scalar multiplication and addition. Let v, v_1, v_2 ∈ ker(L) and let α ∈ F. For closure under scalar multiplication:

    L(αv) = αL(v) = α 0_W = 0_W.

Therefore αv ∈ ker(L). For closure under addition:

    L(v_1 + v_2) = L(v_1) + L(v_2) = 0_W + 0_W = 0_W.

Therefore v_1 + v_2 ∈ ker(L), and hence ker(L) is a subspace of V.

(2) L(S) is nonempty, since 0_W = L(0_V) ∈ L(S). For closure under scalar multiplication: if w ∈ L(S), then w = L(v) for some v ∈ S. For any scalar α,

    αw = αL(v) = L(αv).

Since αv ∈ S, it follows that αw ∈ L(S), and hence L(S) is closed under scalar multiplication. For closure under addition: if w_1, w_2 ∈ L(S), then there exist v_1, v_2 ∈ S such that L(v_1) = w_1 and L(v_2) = w_2. Thus

    w_1 + w_2 = L(v_1) + L(v_2) = L(v_1 + v_2),

and hence L(S) is closed under addition. It follows that L(S) is a subspace of W.

Remark: One can visualize the relationship between these spaces by the following diagram, which is called a short exact sequence. (Here 0 represents the zero vector space.)

    0 → ker(L) → V → L(V) → 0.

Example 1.6.
(1) Let A be an m × n matrix and consider the map L_A : F^n → F^m defined by L_A(x) = Ax. Then the kernel ker(L_A) of L_A is the solution space of the equation Ax = 0, and the image L_A(F^n) of L_A is the column space of A.
(2) Recall that P_n = { p ∈ F[x] : p has degree less than or equal to n }, and recall the derivative operator D : P_n → P_n. We have ker(D) = P_0, i.e. the space of constant polynomials, and D(P_n) = P_{n-1}.
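Example 1.6(1) can be explored with a computer algebra system. The sketch below is not part of the original notes and uses an arbitrarily chosen 3 × 3 matrix; SymPy returns a basis of the kernel (the null space of A) and of the image (the column space of A).

    # Sketch (illustration only) for Example 1.6(1): for a concrete matrix A,
    # ker(L_A) is the solution space of Ax = 0 and L_A(F^n) is the column space of A.
    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [2, 4, 6],
                [1, 1, 1]])

    print(A.nullspace())     # a basis of ker(L_A); here a single vector
    print(A.columnspace())   # a basis of the image of L_A (two of the columns of A)
    print(A.rank())          # dimension of the image; the nullity is then 3 - 2 = 1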

1.3. Quick review of maps. Recall that a map between two sets, say f : A → B, is called an injection (or injective) if for each element b ∈ B there is at most one a ∈ A such that f(a) = b. The map f is called a surjection (or surjective) if f(A) = B. If f is both injective and surjective, it is called a bijection (or bijective). For a bijection f : A → B, each element a ∈ A corresponds to a unique element b ∈ B. Therefore we can define the inverse map f^{-1} : B → A by f(a) ↦ a. The inverse map f^{-1} is characterised by the property that it is the unique map, say h, such that h ∘ f = id_A, where id_A : A → A is the identity map given by a ↦ a. Two maps f : A → B and g : B → C can be composed to form a map g ∘ f : A → C by a ↦ g(f(a)).

1.4. Linear isomorphism.

Lemma 1.7. Let L : V → W be a linear map. Then ker(L) = 0 if and only if L is an injection.

Proof. (⇒) Suppose ker(L) = 0. Suppose there were two elements v_1 ≠ v_2 in V such that L(v_1) = L(v_2) = w for some w ∈ W. Then L(v_1 - v_2) = 0_W, i.e. 0_V ≠ v_1 - v_2 ∈ ker(L). This contradicts ker(L) = 0.
(⇐) Suppose that L is an injection; we claim that ker(L) is the zero space. We prove this by contradiction: suppose ker(L) is not zero. Then there is a nonzero vector v ∈ ker(L), and we have L(v) = 0_W = L(0_V). This contradicts the assumption that L is an injection, since v ≠ 0_V.

Definition 1.8. A linear map L : V → W is called a linear isomorphism if ker(L) = 0 and L(V) = W. We write V ≅ W (or V ≅_L W to record the map).

Suppose L : V → W is a linear isomorphism. Then it is a bijection, its inverse L^{-1} : W → V is well defined, and L^{-1} is also a linear isomorphism.

Exercise (Spaces that are linearly isomorphic to each other have the same dimension). Suppose L : V → W is a linear isomorphism; then dim V = dim W. (Hint: check that L sends any basis of V to a linearly independent subset of W, hence dim V ≤ dim W; apply the same argument to L^{-1} : W → V to get dim W ≤ dim V; done.)

Example 1.9 (an important example). Let V be a finite dimensional vector space. Suppose V has dimension n and S = { v_1, ..., v_n } is a basis of V. Then the map sending each vector v in V to its coordinate vector [v]_S with respect to the basis S is a linear isomorphism from V to F^n. Similarly, the inverse of this map is also an isomorphism, given by

    (x_1, x_2, ..., x_n) ↦ x_1 v_1 + x_2 v_2 + ... + x_n v_n.

In summary, the example says that V ≅ F^n if V has dimension n. Roughly speaking, if two vector spaces V and W are isomorphic, one can treat them as the same in many situations. However, isomorphisms are not unique. (For example, the isomorphisms V ≅ F^n are all different for different bases.)

Notation. Let V be an n-dimensional vector space with basis S = { v_1, v_2, ..., v_n }. Let ι_S : F^n → V denote the isomorphism from F^n to V given by the basis S. More precisely,

    ι_S((x_1, ..., x_n)) = x_1 v_1 + x_2 v_2 + ... + x_n v_n   for (x_1, x_2, ..., x_n) ∈ F^n,

and

    ι_S^{-1}(v) = [v]_S   for v ∈ V.

Example 1.10. Consider the basis S = { 1, x, x^2, ..., x^n } of P_n. Then

    ι_S((a_0, a_1, ..., a_n)) = a_0 + a_1 x + ... + a_n x^n ∈ P_n,
    ι_S^{-1}(a_0 + a_1 x + ... + a_n x^n) = (a_0, a_1, ..., a_n) ∈ F^{n+1}

give the isomorphism between F^{n+1} and P_n.

Example 1.11 (Change of basis: from the standard basis). Consider the case V = F^n and a basis S = { v_1, v_2, ..., v_n }, with the v_i written as column vectors. Define the matrix (by abuse of notation)

    S = (v_1 v_2 ... v_n).

Then, for x = (x_1, ..., x_n) ∈ F^n and v ∈ V = F^n, we have

    ι_S(x) = Sx   and   [v]_S = ι_S^{-1}(v) = S^{-1} v.

(Note that S is an invertible matrix since { v_1, ..., v_n } is a basis!)
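Example 1.11 can be illustrated numerically. The following Python/NumPy sketch is not from the notes; the basis vectors v_1, v_2 and the vector v are arbitrary choices. It computes [v]_S by solving Sx = v rather than forming S^{-1} explicitly, which is the usual numerical practice.

    # Sketch (illustration only) for Example 1.11: with the basis vectors as the
    # columns of the matrix S, coordinates satisfy v = S [v]_S, so [v]_S = S^{-1} v.
    import numpy as np

    v1 = np.array([1.0, 1.0])
    v2 = np.array([1.0, -1.0])
    S = np.column_stack([v1, v2])      # the basis S = {v1, v2} of R^2, as a matrix

    v = np.array([3.0, 5.0])
    coords = np.linalg.solve(S, v)     # [v]_S, computed without inverting S explicitly
    print(coords)                      # [ 4. -1.]  since v = 4*v1 - 1*v2
    print(S @ coords)                  # iota_S([v]_S) reconstructs v, i.e. [3. 5.]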

1.5. Composition of linear maps.

Definition 1.12. Suppose L : V → W and T : W → U are two linear maps. Define the composition T ∘ L : V → U of L and T by (T ∘ L)(v) = T(L(v)).

Exercise. Check that T ∘ L is a linear map.

Notation (Commutative diagram). Let Q : V → U be a linear map. Suppose Q = T ∘ L; then we say that the diagram

    V --L--> W --T--> U,   together with the direct arrow Q : V → U,

is commutative: applying L and then T gives the same result as applying Q.

Example 1.13. Let L_A : F^n → F^m and L_B : F^m → F^t be two maps represented by matrices A and B respectively. Then L_B ∘ L_A = L_{BA}. In fact,

    (L_B ∘ L_A)(x) = L_B(L_A(x)) = L_B(Ax) = B(Ax) = (BA)x = L_{BA}(x).

2. Matrix Representations of Linear Transformations

The following theorem says that every linear map from F^n to F^m is given by a matrix.

Theorem 2.1. If L : F^n → F^m is a linear map from F^n to F^m, then there is an m × n matrix A such that L(x) = Ax for each x ∈ F^n. In fact, A = (a_1 a_2 ... a_n), where the j-th column is a_j = L(e_j) (the e_j's are the standard basis vectors).

Proof. This is in fact clear. Write each vector x = (x_1, ..., x_n)^T in F^n in terms of the standard basis:

    x = x_1 e_1 + x_2 e_2 + ... + x_n e_n.

Now

    L(x) = x_1 L(e_1) + x_2 L(e_2) + ... + x_n L(e_n)
         = x_1 a_1 + x_2 a_2 + ... + x_n a_n
         = (a_1 a_2 ... a_n) (x_1, x_2, ..., x_n)^T
         = Ax.

Notation. We will identify an m × n matrix A with the corresponding linear map L_A : F^n → F^m by abuse of notation.
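Theorem 2.1 suggests a concrete recipe: evaluate L on the standard basis vectors and use the results as columns. The sketch below is not part of the notes and uses a made-up linear map L : R^3 → R^2 chosen only for illustration.

    # Sketch of Theorem 2.1: the j-th column of the representing matrix is L(e_j).
    import numpy as np

    def L(x):
        # L(x1, x2, x3) = (x1 + 2*x3, 3*x2 - x1), a linear map chosen for illustration
        return np.array([x[0] + 2 * x[2], 3 * x[1] - x[0]])

    e = np.eye(3)                                          # columns: standard basis of R^3
    A = np.column_stack([L(e[:, j]) for j in range(3)])    # A = (L(e_1) L(e_2) L(e_3))

    x = np.array([2.0, -1.0, 4.0])
    print(A @ x)    # [10. -5.]
    print(L(x))     # matches: L(x) = Ax for every x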

Theorem 2.2 (Matrix Representation Theorem). If S = { v_1, v_2, ..., v_n } and T = { w_1, w_2, ..., w_m } are ordered bases for vector spaces V and W, respectively, then corresponding to each linear transformation L : V → W there is an m × n matrix A such that

    [L(v)]_T = A [v]_S   for each v ∈ V.

Proof. Consider the diagram

    F^n --L_A--> F^m
     |            |
    ι_S          ι_T
     v            v
     V ----L----> W

where ι_S : F^n → V is the isomorphism given by the basis S and, similarly, ι_T : F^m → W is the isomorphism given by the basis T. Then ι_T^{-1} ∘ L ∘ ι_S : F^n → F^m is represented by a certain matrix A (by Theorem 2.1). Now

    [L(v)]_T = ι_T^{-1}(L(v)) = (ι_T^{-1} ∘ L ∘ ι_S)(ι_S^{-1}(v)) = A [v]_S.

Definition 2.3. Under the notation of the above theorem, we let [L]_{S,T} denote the matrix A representing L under the bases S and T.

2.1. Change of basis. Let V be a vector space of dimension n, and let S = { v_1, ..., v_n } and T = { u_1, ..., u_n } be two bases of V. The change of basis matrix can be viewed via the diagram below:

    F^n --[id]_{S,T}--> F^n
     |                   |
    ι_S                 ι_T
     v                   v
     V --------id------> V

Here [id]_{S,T} is the change of basis matrix. Clearly we have

    [id]_{T,S} = ([id]_{S,T})^{-1}.

When V = F^n, write the v_i and u_j as column vectors and (abusing notation) form the matrices

    S = (v_1 v_2 ... v_n)   and   T = (u_1 u_2 ... u_n).

Then the change of basis matrix is

    [id]_{S,T} = ι_T^{-1} ∘ ι_S = T^{-1} S.

For any vector v ∈ V, we have

    [v]_T = ι_T^{-1}(v) = ι_T^{-1}(ι_S(ι_S^{-1}(v))) = (ι_T^{-1} ∘ ι_S)([v]_S) = T^{-1} S [v]_S.

Notation. Let E denote the standard basis { e_1, e_2, ..., e_n }. Clearly (e_1 e_2 ... e_n) = I_n. By the discussion in Example 1.11, [id]_{S,E} = S and [id]_{E,S} = S^{-1}.
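The formula [id]_{S,T} = T^{-1} S from Section 2.1 can be tested numerically. The sketch below is not part of the notes; the bases S and T of R^2 and the coordinate vector are arbitrary choices.

    # Sketch (illustration only) of Section 2.1: [id]_{S,T} = T^{-1} S and
    # [v]_T = T^{-1} S [v]_S for bases S, T of R^2 given as matrices of columns.
    import numpy as np

    S = np.array([[1.0, 1.0],
                  [0.0, 1.0]])               # columns: basis S = {v_1, v_2}
    T = np.array([[2.0, 0.0],
                  [1.0, 1.0]])               # columns: basis T = {u_1, u_2}

    M = np.linalg.solve(T, S)                # the change of basis matrix [id]_{S,T} = T^{-1} S

    v_S = np.array([3.0, -2.0])              # coordinates of some vector v with respect to S
    v = S @ v_S                              # the vector itself, in standard coordinates
    v_T = M @ v_S                            # its coordinates with respect to T

    print(np.allclose(T @ v_T, v))           # True: both coordinate vectors describe the same v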

2.2. Matrix representation and change of basis.

Theorem 2.4. Let S and S' be two bases of V, and let T and T' be two bases of W. Let L : V → W be a linear map, and let [L]_{S,T} be the matrix representing L under the bases S and T. Then

    [L]_{S',T'} = [id_W]_{T,T'} [L]_{S,T} [id_V]_{S',S}.

Proof. Clear from the following diagram:

    F^n --[id_V]_{S',S}--> F^n --[L]_{S,T}--> F^m --[id_W]_{T,T'}--> F^m
     |                      |                  |                      |
    ι_{S'}                 ι_S                ι_T                   ι_{T'}
     v                      v                  v                      v
     V --------id_V-------> V -------L------> W --------id_W-------> W

Lemma 2.5. Let S = { v_1, ..., v_n } and T = { u_1, ..., u_m } be ordered bases for F^n and F^m, respectively. If L : F^n → F^m is a linear transformation, then

    [L]_{S,T} = T^{-1} (L(v_1) L(v_2) ... L(v_n)),

where T = (u_1 ... u_m) as a matrix. In particular, the j-th column of [L]_{S,T} is T^{-1} L(v_j).

Proof. Let S = (v_1 v_2 ... v_n), and let A be the matrix representing L with respect to the standard bases. Now

    [L]_{S,T} = [id_W]_{E,T} A [id_V]_{S,E} = T^{-1} A S = T^{-1} A (v_1 v_2 ... v_n)
              = (T^{-1} A v_1  T^{-1} A v_2  ...  T^{-1} A v_n)
              = (T^{-1} L(v_1)  T^{-1} L(v_2)  ...  T^{-1} L(v_n)).

The corollary below gives a way to calculate [L]_{S,T} by row operations.

Corollary 2.6. Under the assumptions of Lemma 2.5, the reduced row echelon form of

    (u_1 ... u_m | L(v_1) ... L(v_n))

is (I_m | [L]_{S,T}).

Proof. This follows from the fact that (T | L(v_1) ... L(v_n)) is row equivalent to

    T^{-1} (T | L(v_1) ... L(v_n)) = (I_m | T^{-1} L(v_1) ... T^{-1} L(v_n)) = (I_m | [L]_{S,T}).

2.3. Similarity.

Theorem 2.7. Let T = { v_1, ..., v_n } and T' = { u_1, ..., u_n } be two ordered bases for a vector space V, and let L be a linear operator on V. Let S be the transition matrix representing the change from T' to T. If A is the matrix representing L with respect to T, and B is the matrix representing L with respect to T', then B = S^{-1} A S.

Proof. By definition, S = [id]_{T',T} and S^{-1} = [id]_{T,T'}, while A = [L]_{T,T} and B = [L]_{T',T'}. Now Theorem 2.4 implies

    B = [L]_{T',T'} = [id]_{T,T'} [L]_{T,T} [id]_{T',T} = S^{-1} A S.

Definition 2.8. Let A and B be n × n matrices. B is said to be similar to A if there exists a nonsingular matrix S such that B = S^{-1} A S.
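Lemma 2.5 and Theorem 2.7 can be checked on a small example. The sketch below is not part of the notes; the matrix A and the basis S (written as a matrix of column vectors) are arbitrary choices. It computes the matrix of L_A in the basis S once column-by-column via Lemma 2.5 (with the same basis on source and target) and once as S^{-1} A S via Theorem 2.7.

    # Sketch (illustration only) tying together Lemma 2.5 and Theorem 2.7.
    import numpy as np

    A = np.array([[1.0, 2.0],
                  [0.0, 3.0]])              # L = L_A on R^2, w.r.t. the standard basis
    S = np.array([[1.0, 1.0],
                  [0.0, 1.0]])              # columns v_1, v_2: a second basis of R^2

    # Lemma 2.5 with T = S: the j-th column of [L]_{S,S} is S^{-1} L(v_j) = S^{-1} A v_j.
    B_cols = np.column_stack([np.linalg.solve(S, A @ S[:, j]) for j in range(2)])

    # Theorem 2.7: B = S^{-1} A S is the matrix of L in the new basis.
    B = np.linalg.inv(S) @ A @ S

    print(np.allclose(B, B_cols))           # True: the two computations agree
    print(B)                                # [[1. 0.] [0. 3.]]: diagonal in this basis

In this particular example the columns of S happen to be eigenvectors of A, so the matrix of L in the new basis is diagonal, i.e. about as simple as one could hope for.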

Upshot: similar matrices can be thought of as the matrix representations of one linear map under different bases. A very important problem is to find a nonsingular matrix S such that S^{-1} A S is as simple as possible.

Example 2.9. The following are some very important facts (we will not give proofs right now).
(1) A 2 × 2 matrix over R is similar to one of the following matrices (with a_1, a_2 ∈ R, λ ∈ R, θ ∈ [0, 2π)):

    ( a_1  0  ),   ( λ  1 ),   ( cos θ  -sin θ )
    ( 0   a_2 )    ( 0  λ )    ( sin θ   cos θ ).

(2) Any matrix over C is similar to an upper triangular matrix. (In fact, one can say more.)
(3) Any symmetric matrix over R is similar to a diagonal matrix.

2.4. Nullity-Rank theorem (revisited). Let L : V → W be a linear map between finite dimensional spaces V and W.

Definition 2.10.
(i) The nullity of L is defined as the dimension of ker(L). It is denoted by null(L).
(ii) The rank of L is defined as the dimension of L(V). It is denoted by rank(L).

Theorem 2.11. Let L : V → W be a linear map. We have

    null(L) + rank(L) = dim V.

Proof. Suppose that dim V = n and dim W = m. Fix any basis S of V and any basis T of W. Then [L]_{S,T} is an m × n matrix. Recall that N([L]_{S,T}) is the null space of the matrix [L]_{S,T}, and let Col([L]_{S,T}) be its column space. Now

    ι_S(N([L]_{S,T})) = ker(L)   and   ι_T(Col([L]_{S,T})) = L(V).

Therefore dim ker(L) = dim N([L]_{S,T}) and dim L(V) = dim Col([L]_{S,T}) = dim Row([L]_{S,T}). Hence the theorem follows from the nullity-rank theorem for matrices.
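Theorem 2.11 can also be verified numerically. The sketch below is not part of the notes; the matrix A and the basis matrix S are arbitrary choices. It counts the nullity of L_A from the singular values, checks that nullity plus rank equals dim V, and observes in passing that a similar matrix has the same rank.

    # Sketch (illustration only): rank-nullity for L_A : R^3 -> R^3, and invariance
    # of the rank under similarity.
    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],
                  [1.0, 1.0, 1.0]])
    S = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])          # invertible, so its columns form a basis of R^3

    s = np.linalg.svd(A, compute_uv=False)   # singular values of A
    nullity = int(np.sum(s < 1e-10))         # dim ker(L_A), counted numerically
    rank = np.linalg.matrix_rank(A)          # dim L_A(R^3)
    print(nullity + rank == A.shape[1])      # True: Theorem 2.11 with dim V = 3

    B = np.linalg.inv(S) @ A @ S             # the same operator in the basis given by S
    print(np.linalg.matrix_rank(B) == rank)  # True: the rank does not depend on the basis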