Linear Algebra Short Course Lecture 2

Linear Algebra Short Course Lecture 2
Matthew J. Holland (matthew-h@is.naist.jp)
Mathematical Informatics Lab, Graduate School of Information Science, NAIST

Some useful references
- Introduction to linear maps: Axler (1997, Ch. 3)
- Metric space of linear maps: Rudin (1976, Ch. 9)
- Excellent review of matrix basics: Horn and Johnson (1985, Ch. 0)
- Very accessible matrix algebra; basic identities and inequalities: Magnus and Neudecker (1999, Ch. 1-3, 11)
- Invariant quantities: Axler (1997, Ch. 10) (note high dependency on previous chapters)

Lecture contents
1. Linear transformations and their classes
2. Transformations and space structure
3. Matrices and their role in the theory


Linearity: from sets to functions
The stage for our current theory is vector spaces U, V, W with common field F, assumed R or C. Our focus shifts from sets with a linearity property to functions with a linearity property.
Defn. We call T : U → W a linear transformation (or map) when for all u, u′ ∈ U and α ∈ F,
T(u + u′) = T(u) + T(u′), T(αu) = αT(u). (*)
The naming is natural: T maps any linear combination of, say, u_1, …, u_m ∈ U to the corresponding linear combination of their images T(u_1), …, T(u_m).
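As a quick numerical sanity check (my own addition, not part of the slides), the two conditions in (*) can be spot-checked for a map given by an arbitrary matrix; numpy and the particular 2×3 matrix are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))      # an arbitrary 2x3 real matrix
T = lambda u: A @ u                  # T(u) := Au, a map R^3 -> R^2

u, v = rng.standard_normal(3), rng.standard_normal(3)
alpha = 2.5

# additivity: T(u + v) = T(u) + T(v)
additive = np.allclose(T(u + v), T(u) + T(v))
# homogeneity: T(alpha * u) = alpha * T(u)
homogeneous = np.allclose(T(alpha * u), alpha * T(u))
```

Both checks pass for any matrix map, consistent with the first example on the next slide.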

Linearity: from sets to functions
Some additional notation. Denote by L(U, W) the set of all linear maps from U to W,
L(U, W) := {T : U → W ; T is linear}.
When T ∈ L(U, U), call T a linear operator on U, and denote L(U) := L(U, U). Linear operators are without question the key focus of linear algebra.

Linear maps and bases
The bases of the domain/co-domain of a linear map play a key role. Let B_U = {u_1, …, u_m} be a basis of U.
Example. (*) Linear maps on U are completely determined by where they map the vectors of B_U. That is, for linear maps S, T ∈ L(U, W),
S(u_i) = T(u_i), i = 1, …, m ⟹ S = T.
Example. (*) Similarly, given arbitrary vectors w_1, …, w_m ∈ W, the only linear map T ∈ L(U, W) which satisfies T(u_i) = w_i, i = 1, …, m is that defined by
T(u) := α_1 w_1 + ⋯ + α_m w_m, u ∈ U,
where u = α_1 u_1 + ⋯ + α_m u_m.
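The construction in the second example can be carried out concretely (a sketch of my own, assuming U = R^2 with basis vectors stored as columns of a matrix U, and targets w_i as columns of W; the specific numbers are arbitrary):

```python
import numpy as np

# Basis B_U = {u_1, u_2} of R^2 (columns of U) and targets w_1, w_2 in R^3.
U = np.array([[1., 1.],
              [0., 1.]])            # columns are the basis vectors
W = np.array([[1., 0.],
              [0., 1.],
              [2., 3.]])            # column i is the prescribed w_i = T(u_i)

def T(u):
    # expand u in the basis: solve U @ alpha = u, then map to sum_i alpha_i w_i
    alpha = np.linalg.solve(U, u)
    return W @ alpha

# T agrees with the prescribed values on the basis...
ok_basis = all(np.allclose(T(U[:, i]), W[:, i]) for i in range(2))
# ...and everywhere else is forced by linearity, e.g. for u = 2u_1 - 3u_2:
u = 2 * U[:, 0] - 3 * U[:, 1]
ok_linear = np.allclose(T(u), 2 * W[:, 0] - 3 * W[:, 1])
```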

Various linear maps
(*) For A ∈ R^{m×n}, the map S(x) := Ax satisfies S ∈ L(R^n, R^m).
(*) If P(R) is the set of polynomials on R, note that
T(p) := ∫_a^b p(x) dx satisfies T ∈ L(P(R), R),
T(p) := p′ satisfies T ∈ L(P(R), P(R)).
(*) Counter-example: for A ∈ R^{m×n}, the map defined S(x) := Ax + m for m ≠ 0 is not linear, i.e., S ∉ L(R^n, R^m).
(*) For x = (x_1, …, x_n) ∈ F^n, note T(x) := (x_{π(1)}, …, x_{π(n)}), where π is an arbitrary permutation, satisfies T ∈ L(F^n).
(*) T defined (Tp)(x) := βx^3 p(x) for fixed β ∈ R satisfies T ∈ L(P(R)).
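The counter-example is easy to witness numerically (my own illustration; the matrix and shift are arbitrary): an affine map S(x) = Ax + m with m ≠ 0 breaks additivity, since S(u + v) and S(u) + S(v) differ by exactly m.

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
m = np.array([1., 1.])               # a nonzero shift
S = lambda x: A @ x + m              # S(x) := Ax + m, affine but not linear

u, v = np.array([1., 0.]), np.array([0., 1.])
# S(u + v) = A(u+v) + m, while S(u) + S(v) = A(u+v) + 2m: additivity fails
affine_additive = np.allclose(S(u + v), S(u) + S(v))
```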

Various linear maps (more)
(*) All linear operators on one-dimensional spaces are simply scalar multiplications.
(*) Additivity is not a superfluous requirement; find a map T : R^2 → R such that T(αx) = αT(x) but T ∉ L(R^2, R).
(*) Extensions of linear maps. Let U ⊆ V be a subspace, and T ∈ L(U, W). Construct a map T̄ ∈ L(V, W) such that T̄(u) = T(u) for all u ∈ U.

Classes of linear maps
Linear spaces come in many varying forms. With the standard algebraic operations, L(U, V) is yet another example.
Example. (*) If U, V are vector spaces on field F, define operations for arbitrary S, T ∈ L(U, V) by
(αT)(·) := αT(·), α ∈ F,
(T + S)(·) := T(·) + S(·).
Consider what the additive inverse/identity are, recalling in particular VM.5 from Lec 1, and show L(U, V) is a vector space on F. What is dim L(U, V)? This motivates some new tools.
(*) If F is R or C, note that the operator norm
‖T‖ := sup_{‖x‖_2 ≤ 1} ‖T(x)‖_2
is a valid norm on L(F^n, F^m).

Products via compositions
A quasi-multiplication operation is naturally defined between elements of L(U, V) and L(V, W).
Defn. For T ∈ L(U, V), S ∈ L(V, W), we define the product ST by the composition (ST)(u) := S(T(u)), u ∈ U.
(*) As one would hope, ST ∈ L(U, W).
(*) This extends naturally to the general case of m ≥ 2 multiplicands, i.e., where T_1 ∈ L(V_0, V_1), T_2 ∈ L(V_1, V_2), …, T_m ∈ L(V_{m−1}, V_m).
(*) The product is almost like that seen on fields. Prove:
- the analogue of associativity of multiplication on fields (FM.3);
- existence of a multiplicative identity, i.e., there exists I ∈ L(V) such that IT = T for all T ∈ L(U, V), and vice versa.
But commutativity need not hold, i.e., ST need not equal TS.
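Both points can be illustrated with matrix maps (my own sketch; all matrices here are arbitrary choices): the composition ST of two matrix maps is again linear, and even for square matrices the product generally fails to commute.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))      # T in L(R^3, R^4), T(u) = Au
B = rng.standard_normal((2, 4))      # S in L(R^4, R^2), S(v) = Bv

T = lambda u: A @ u
S = lambda v: B @ v
ST = lambda u: S(T(u))               # the product ST, i.e. the composition

u, w = rng.standard_normal(3), rng.standard_normal(3)
# ST lands in L(R^3, R^2) and is itself linear
composed_linear = np.allclose(ST(2 * u + w), 2 * ST(u) + ST(w))

# commutativity need not hold: a simple 2x2 pair with PQ != QP
P = np.array([[0., 1.], [0., 0.]])
Q = np.array([[1., 0.], [0., 0.]])
commutes = np.allclose(P @ Q, Q @ P)
```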

Lecture contents
1. Linear transformations and their classes
2. Transformations and space structure
3. Matrices and their role in the theory


Transformation-induced structure
T ∈ L(U, V) induces all sorts of interesting structure on U and V.
Defn. The nullspace (or kernel) and range (or image) of T are
null T := {u ∈ U : Tu = 0},
range T := T(U) := {v ∈ V : v = Tu for some u ∈ U}.
The structure we promised is easily observed.
(*) null T and range T are subspaces of U and V, respectively.
(*) Let D ∈ L(P(R)) be the derivative operation. What is null D?
(*) Same D, but now D ∈ L(P_k(R)), where P_k(R) restricts the polynomials to degree k > 0 or less. What is range D?

Transformation-induced structure
The key structural theorem for T ∈ L(U, V) is as follows.
Thm. (**) Let dim U < ∞. Then dim range T < ∞ and
dim U = dim null T + dim range T.
This is a huge generalization of the key points of G. Strang's fundamental theorems.
Example. (*) Let A ∈ R^{m×n}. Define T(x) := Ax and S(y) := A^T y. Then note
range T = col A = row A^T, range S = col A^T = row A,
and of course the nullspaces coincide with the usual nullspaces of the matrices. The rest is just preservation of rowspaces in reducing to row-echelon form. The rank is just rank A = dim range T.
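The rank-nullity identity can be verified numerically for a matrix map (my own sketch; the rank-1 matrix is an arbitrary example, and the nullity is estimated by counting near-zero singular values, a standard numerical proxy):

```python
import numpy as np
from numpy.linalg import matrix_rank

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])         # T(x) := Ax, T in L(R^3, R^2); row 2 = 2 * row 1

n = A.shape[1]                       # dim U = 3
rank = matrix_rank(A)                # dim range T

# dim null T via the SVD: count the (near-)zero singular values
s = np.linalg.svd(A, compute_uv=False)
nullity = n - int(np.sum(s > 1e-12))

total = nullity + rank               # should equal dim U = 3
```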

Transformation info encoded in subspaces
A review of basic terms.
Defn. We call a map T : U → V injective if
u ≠ u′ ⟹ T(u) ≠ T(u′),
and surjective if range T = V. If both, we call T bijective, or say it is a one-to-one mapping from U onto V.
(*) If T ∈ L(U, V) is injective and {u_1, …, u_k} ⊆ U is independent, then {T(u_1), …, T(u_k)} ⊆ V is independent. What about if not injective?
(*) Similarly, if [{u_1, …, u_k}] = U and T is surjective, then [{T(u_1), …, T(u_k)}] = V. What if not surjective?

Transformation info encoded in subspaces
The structural results furnish handy conditions for these properties. Assume general T ∈ L(U, V).
(*) T injective ⟺ null T = {0}.
(*) Thus injectivity is equivalent to dim U = dim range T.
(*) If dim U > dim V, then T cannot be injective.
(*) If dim U < dim V, then T cannot be surjective.
(*) Thus, there exists a surjective T ∈ L(U, V) ⟺ dim V ≤ dim U.
Let T ∈ L(F^n, F^m). Statements about generalized linear systems follow naturally from these results:
(*) In terms of m and n, what can we say about the existence and uniqueness of solutions to T(x) = 0 and T(x) = b, x ∈ F^n, b ∈ F^m?

Invertibility of linear maps
Defn. We say T ∈ L(U, V) is invertible if there exists T^{−1} ∈ L(V, U) such that
T^{−1}T = I ∈ L(U), TT^{−1} = I ∈ L(V),
where I is the identity map on the respective space. Note: we are requiring T^{−1} be linear.
(*) Justify the notation T^{−1}; show the inverse, if it exists, is unique.
(*) The following fact should be verified:
T is invertible ⟺ T is bijective.
The key to the ⟸ direction is proving the inverse is linear.
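For a matrix map the two defining identities are easy to check directly (my own illustration; the 2×2 matrix is an arbitrary invertible example):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])             # invertible: det A = 1
T = lambda x: A @ x
A_inv = np.linalg.inv(A)
T_inv = lambda y: A_inv @ y          # the (linear) inverse map

x = np.array([3., -2.])
# T^{-1} T = I on U, and T T^{-1} = I on V
left = np.allclose(T_inv(T(x)), x)
right = np.allclose(T(T_inv(x)), x)
```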

Basic isomorphism theorems
Defn. If there exists an invertible T ∈ L(U, V), then we say U and V are isomorphic.
(*) If U, V are isomorphic, then dim U < ∞ ⟺ dim V < ∞.
(*) Let dim U, dim V < ∞. Then
U and V isomorphic ⟺ dim U = dim V.
This important basic fact says we can always find invertible linear maps between any finite-dimensional U, V of equal dimension.

Specializing to linear operators
Things often become easier when we focus on linear operators, namely T ∈ L(U).
(*) Assuming dim U < ∞, the following are equivalent: (1) T is invertible; (2) T is injective; (3) T is surjective.
The finite-dimensional requirement is not vacuous:
(*) Define T ∈ L(P(R)) by (Tp)(x) := 5x^3 p(x). Note injectivity need not imply surjectivity.
(*) For U on F with dim U < ∞ and S, T ∈ L(U), we have:
- ST invertible ⟺ S, T both invertible;
- ST = I ⟺ TS = I;
- T = αI for some α ∈ F ⟺ ST = TS for all S ∈ L(U).

Lecture contents
1. Linear transformations and their classes
2. Transformations and space structure
3. Matrices and their role in the theory


Matrices as arrays of field elements
Defn. In general, an m × n matrix B on field F is simply an array
B = [ b_11 ⋯ b_1n
       ⋮   ⋱   ⋮
      b_m1 ⋯ b_mn ],  b_ij ∈ F,
with addition/multiplication operations defined. Some notation: B =: [b_ij]. Let b_i denote the ith row and b^(j) the jth column. Recall for B, B′ ∈ F^{m×n}, C ∈ F^{n×l}, x ∈ F^n, α ∈ F:
B + B′ = [b_ij + b′_ij],
αB = [αb_ij],
Bx = x_1 b^(1) + ⋯ + x_n b^(n) = (b_1^T x, …, b_m^T x),
BC = [Bc^(1) ⋯ Bc^(l)], whose ith row is b_i^T C.
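The two views of Bx recalled above (a combination of the columns of B, or entrywise via the rows) agree, as a quick check shows (my own sketch; the 3×2 matrix is arbitrary):

```python
import numpy as np

B = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])             # a 3x2 matrix
x = np.array([10., -1.])

# Bx as a linear combination of the columns b^(1), b^(2)
col_view = x[0] * B[:, 0] + x[1] * B[:, 1]
# Bx entrywise via the rows: the i-th entry is b_i^T x
row_view = np.array([B[i, :] @ x for i in range(3)])

same = np.allclose(B @ x, col_view) and np.allclose(B @ x, row_view)
```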

The many faces of matrices
Matrices are quite multifaceted; in particular, we're interested in:
- matrices as linear maps;
- matrices as representations of linear maps.
The first is easy: we already showed that B ∈ F^{m×n} specifies a map in L(F^n, F^m). Countless matrix identities and inequalities are well-known and very useful (Magnus and Neudecker, 1999).
The latter is more subtle. The basic idea is that there exist equivalence classes of matrices unified by a unique underlying linear map, whose characteristics specify properties of all the matrices in the equivalence class.

Matrix representations of abstract objects
Let T ∈ L(U, V) with dim U, dim V < ∞, and fix bases B_U := {u_1, …, u_n}, B_V := {v_1, …, v_m}. Recalling that
T(u_j) = a_{1j} v_1 + ⋯ + a_{mj} v_m, 1 ≤ j ≤ n
uniquely represents each T(u_j) ∈ V, the scalars a_ij completely specify T.
Defn. Given the above discussion, we define
M(T; B_U, B_V) := [ a_11 ⋯ a_1n
                     ⋮   ⋱   ⋮
                    a_m1 ⋯ a_mn ],
called the matrix representation of T. If U = V, denote M(T; B_U) := M(T; B_U, B_U).
(*) For fixed bases, note the map T ↦ M(T) ∈ F^{m×n} is a bijection.

Matrix representations of abstract objects
Let T ∈ L(U, V), S ∈ L(V, W). Fix bases B_U, B_V, B_W.
(*) Natural properties hold; the representation of the product is the product of the representations:
M(ST; B_U, B_W) = M(S; B_V, B_W) M(T; B_U, B_V).
Things extend naturally to vectors. For u ∈ U, define
M(u; B_U) := (α_1, …, α_n)^T,
where u = α_1 u_1 + ⋯ + α_n u_n is its B_U expansion.
(*) Then, handily, verify M(T(u); B_V) = M(T; B_U, B_V) M(u; B_U).
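A concrete instance of both properties (my own choice of example): represent the derivative operator D on P_3(R) in the monomial basis B = {1, x, x^2, x^3}. Then M(D)M(u) gives the coordinates of p′, and the product rule M(DD) = M(D)M(D) gives p″.

```python
import numpy as np

# M(D; B) for the derivative D on P_3(R), basis B = {1, x, x^2, x^3}:
# D(x^j) = j x^(j-1), so column j has the scalar j in row j-1.
D = np.zeros((4, 4))
for j in range(1, 4):
    D[j - 1, j] = j

# p(x) = 2 + 3x - x^3 as a coordinate vector in basis B
p = np.array([2., 3., 0., -1.])
dp = D @ p                           # coordinates of p'(x) = 3 - 3x^2

# representation of the product: M(DD) = M(D) M(D) recovers p''
d2p = (D @ D) @ p                    # coordinates of p''(x) = -6x
```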

Additional properties of T ↦ M(T)
(*) First, note F^{m×n} is a vector space. What is dim F^{m×n}?
(*) Then, note for U, V on field F, and M defined by T ↦ M(T; B_U, B_V) for fixed bases, we have linearity, i.e.,
M ∈ L(L(U, V), F^{m×n}),
and furthermore M is invertible.
(*) Using this, prove dim L(U, V) = (dim U)(dim V).
(*) For T ∈ L(F^n, F^m) and M(T) = [c_ij] ∈ F^{m×n} with respect to the standard bases,
T(x) = M(T)x = x_1 c^(1) + ⋯ + x_n c^(n), x ∈ F^n.

Matrix representations of abstract objects
Why is this useful? Fixing bases, we may equivalently consider T(u) = v ∈ V or M(T(u)) = M(T)M(u). The former is abstract (u, v might be functions, etc.); the latter is concrete (typically F is R or C).
This idea is central to linear algebra! It says some U, V and U′, V′ can be very different, yet the transformations L(U, V) and L(U′, V′) are fundamentally linked. This link is explicitly captured by matrix representations.

Links between genuinely distinct spaces
Example. (*) Consider T ∈ L(P_m(C), P_{m+2}(C)) and S ∈ L(R^m, R^{m+2}) defined for β ∈ C by
(Tp)(x) := βx^2 p(x), p ∈ P_m(C),
S(u) := (0, 0, βu_1, …, βu_m), u ∈ R^m.
With respect to the standard bases of each space, verify
M(T) = M(S) = [ 0 ⋯ 0
                0 ⋯ 0
                β
                  ⋱
                    β ],
an (m + 2) × m complex matrix: two zero rows atop a β-scaled diagonal.
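The shared matrix is easy to build and apply (my own sketch; β = 2 and m = 3 are arbitrary, and only the real map S is exercised here):

```python
import numpy as np

beta, m = 2.0, 3
# M(S): beta on a diagonal shifted down by two rows, an (m+2) x m matrix,
# matching S(u) = (0, 0, beta*u_1, ..., beta*u_m)
M_S = np.zeros((m + 2, m))
for i in range(m):
    M_S[i + 2, i] = beta

u = np.array([1., -1., 4.])
out = M_S @ u                        # = (0, 0, 2, -2, 8)
```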

Some of the key questions of LA
Our next natural questions touch some of the fundamental objectives of linear algebra. Let T ∈ L(U, V), with associated matrices
A := M(T; B_U, B_V), A′ := M(T; B′_U, B′_V).
What information about T can we decode from A and A′? Is this information consistent between A and A′?

Decoding information from matrix representations
Defn. Let V be a vector space on F, with dim V = n. Given G ∈ F^{n×n}, if there exist bases B_1 = {b_1, …, b_n}, B_2 = {b′_1, …, b′_n} such that
G = M(I; B_1, B_2) = [ M(b_1; B_2) ⋯ M(b_n; B_2) ],
then we call G a change-of-basis matrix on V from B_1 to B_2. We shall often denote G_{1,2} := G in this case.
(*) Every invertible A ∈ F^{n×n} is a change-of-basis matrix.
(*) Conversely, every change-of-basis matrix is invertible, easily seen using the fact
I = M(I ∘ I; B, B) = M(I; B′, B) M(I; B, B′).
The above facts are very important. Now we look at nomenclature.

Decoding information from matrix representations
(*) First, note importantly that if G_{1,2} is a change-of-basis matrix on V from B_1 to B_2, then G_{1,2}^{−1} = G_{2,1}.
(*) With this, one may readily confirm
M(T; B_1) = G_{2,1} M(T; B_2) G_{1,2}.
Defn. We call two square matrices A, B ∈ F^{n×n} similar, denoted A ∼ B, if there exists a change-of-basis matrix G such that A = G^{−1} B G.
(*) Note similarity is an equivalence relation (i.e., check symmetry, reflexivity, transitivity).

Decoding information from matrix representations
So, in the special case of an operator T ∈ L(U), we have
M(T; B) ∼ M(T; B′)
for any bases B, B′.
(*) Thus, if we know A = M(T; B_1), and some A′ ∼ A, then it is guaranteed there exists a basis B_2 s.t. A′ = M(T; B_2). Hence the equivalence class of matrices similar to M(T; B_1) can be considered the class of matrices with underlying map T.

Decoding information from matrix representations
It is well-known that similar matrices A ∼ A′ share many invariants, such as:
- det A = det A′;
- trace A = trace A′;
- A and A′ share eigenvalues;
- A and A′ share a characteristic polynomial;
- A and A′ share the same canonical forms (sparse, convenient forms).
These facts are very nice for decoding information about two distinct matrices (since knowing similarity is sufficient). However, this tells us little intrinsic information about T itself! Can we define invariants in terms of just T that coincide with the invariants of its matrix representations? The answer is yes; this will be handled mainly in Lecture 3.
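The first three invariants can be verified numerically by constructing a similar pair directly (my own sketch; the random seed is arbitrary, and a generic random G is taken to be invertible, which holds with probability one):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
G = rng.standard_normal((3, 3))      # generic, hence invertible (w.p. 1)
A = np.linalg.inv(G) @ B @ G         # A ~ B by construction

same_det = np.isclose(np.linalg.det(A), np.linalg.det(B))
same_trace = np.isclose(np.trace(A), np.trace(B))
# spectra agree up to ordering; sort before comparing
eigs_A = np.sort_complex(np.linalg.eigvals(A))
eigs_B = np.sort_complex(np.linalg.eigvals(B))
same_eigs = np.allclose(eigs_A, eigs_B)
```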

Issues with more general arguments
All the key ideas we just briefly introduced related to a linear operator T ∈ L(U), with square matrix M(T; B). Which assumptions are critical?
The deepest results are for operators only; thus T ∈ L(U, U) = L(U). Of course this implies for any bases B, B′ of U that M(T; B, B′) is square, but that is not enough; we are interested in matrix representations M(T; B, B) only (save for the change-of-basis matrix). The reason for this is in the next example.

Issues with more general arguments
Example. (*) Let T ∈ L(U) with n × n matrix A = M(T; B_1, B_2) = [a_ij], and let A′ be the same as A, save for the kth row, which is defined
a′_k := a_k + λ a_l,
that is, by an elementary operation on A. Find a basis B′_2 such that A′ = M(T; B_1, B′_2).
Properties such as trace and determinant need not be preserved over such operations, and extensions as above are infeasible.
(*) Rank is interesting; recall it is preserved across elementary operations (which need not preserve similarity). It is also preserved across similar matrices, i.e., A ∼ A′ ⟹ rank A = rank A′.

Lecture contents
1. Linear transformations and their classes
2. Transformations and space structure
3. Matrices and their role in the theory


References
Axler, S. (1997). Linear Algebra Done Right. Springer, 2nd edition.
Horn, R. A. and Johnson, C. R. (1985). Matrix Analysis. Cambridge University Press, 1st edition.
Magnus, J. R. and Neudecker, H. (1999). Matrix Differential Calculus with Applications in Statistics and Econometrics. Wiley, 3rd edition.
Rudin, W. (1976). Principles of Mathematical Analysis. McGraw-Hill, 3rd edition.