
Linear Algebra, 4th day, Thursday 7/1/04. REU 2004.
Info: http://people.cs.uchicago.edu/laci/reu04
Instructor: Laszlo Babai. Scribe: Nick Gurski

1 Linear maps

We shall study the notion of maps between vector spaces, called linear maps or homomorphisms. A function f: V → W between vector spaces defined over the same field F is a linear map if

(1) f(x + y) = f(x) + f(y)
(2) f(λx) = λ f(x)

for all x, y ∈ V and all λ ∈ F. These two conditions are equivalent to the single condition that f distributes over linear combinations:

    f(Σ_i λ_i x_i) = Σ_i λ_i f(x_i).

Now assume that we have a basis e = {e_1, e_2, ..., e_n} for V and a basis e' for W. We would like to assign to the linear map f a matrix which depends on the two bases we have chosen and represents the function f completely. We do this by defining the matrix [f]_(e,e') to have as its i-th column the coordinate vector [f(e_i)]_{e'}. Thus we can write this matrix as

    [f]_(e,e') = [ [f(e_1)]_{e'} ... [f(e_n)]_{e'} ];

if W is of dimension k, this is a k × n matrix. Our next goal is to show that the matrix [f]_(e,e') gives the same geometric information as the linear map f. We begin this task with a theorem.

Theorem 4.1. Let e = {e_1, ..., e_n} be a basis for V. Then for any set of vectors w_1, ..., w_n ∈ W, there exists a unique linear map f: V → W with the property that f(e_i) = w_i for all i.
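The column-by-column construction of [f] can be sketched numerically. This is not from the notes; it is a minimal illustration, assuming the standard bases of R^n and R^k and a made-up map f:

```python
import numpy as np

# Build the matrix [f] of a linear map f: R^n -> R^k, column by column,
# as [f(e_1) ... f(e_n)] -- each column is f applied to a basis vector.
def matrix_of(f, n):
    cols = [f(e) for e in np.eye(n)]   # f(e_i) for the standard basis
    return np.column_stack(cols)

# Hypothetical example: f(x, y, z) = (x + 2y, 3z)
f = lambda v: np.array([v[0] + 2*v[1], 3*v[2]])
A = matrix_of(f, 3)                    # a 2 x 3 matrix, since k = 2, n = 3

x = np.array([1.0, 1.0, 1.0])
assert np.allclose(A @ x, f(x))        # [f][x] agrees with [f(x)]
```

The assertion at the end checks the coordinate formula [f]_(e,e')[x]_e = [f(x)]_{e'} that appears later as Exercise 4.6.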

Proof. We first show uniqueness; existence follows easily afterwards. Assuming there is such an f, its value on any vector x ∈ V is determined, since e is a basis: writing x = Σ_{i=1}^n λ_i e_i, we have

    f(x) = f(Σ_{i=1}^n λ_i e_i) = Σ_{i=1}^n λ_i f(e_i) = Σ_{i=1}^n λ_i w_i,

and this shows uniqueness. To show the existence of such a map, we define f(x) = Σ_{i=1}^n λ_i w_i and check that this is a linear map. This is left as a simple exercise.

The theorem means that the matrix [f] is determined by the images f(e_i) of the basis vectors of V, and by the definition of a basis these images determine the linear map f as well. Conversely, if we know where f sends each basis vector, then there is a unique linear map that agrees with f on e. Thus the matrix [f] and the linear map f give the same information.

Example 4.2 (The derivative of polynomials). Remember that the collection of all polynomials of degree at most n, denoted P_n, is a vector space of dimension n + 1 which is spanned by the basis 1, x, x^2, ..., x^n. There is a linear map D which sends the polynomial p(x) to its derivative polynomial p'(x); thus we have a linear map D: P_n → P_{n−1}. We shall compute the matrix [D] with respect to the natural bases. We know that D(x^k) = k x^{k−1}, so D(e_k) = k e_{k−1}, where k ranges from 0 to n. This shows that the k-th column of [D] is all zeros except that its (k−1)-st entry is k; [D] is an n × (n + 1) matrix. For n = 3, the matrix for D: P_3 → P_2 is shown below.

    [D] = [ 0 1 0 0
            0 0 2 0
            0 0 0 3 ]

Example 4.3 (Projection onto the plane). We have a linear map π: R^3 → R^2 which forgets the z-coordinate. Using the standard bases, we have

    π(e_1) = e_1,   π(e_2) = e_2,   π(e_3) = 0,

and so the 2 × 3 matrix [π] is

    [π] = [ 1 0 0
            0 1 0 ]
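The derivative matrix of Example 4.2 can be checked by hand for n = 3, representing a polynomial by its coefficient vector in the monomial basis (the polynomial p below is a made-up example):

```python
import numpy as np

# The matrix of D: P_3 -> P_2 in the bases 1, x, x^2, x^3 and 1, x, x^2.
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])

# p(x) = 5 + 4x - x^2 + 2x^3, stored as coefficients [5, 4, -1, 2]
p = np.array([5, 4, -1, 2])
dp = D @ p   # coefficients of p'(x) = 4 - 2x + 6x^2
assert np.array_equal(dp, np.array([4, -2, 6]))
```

Applying the matrix to a coordinate vector is the same as differentiating the polynomial, which is exactly the sense in which [D] "represents" D.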

Example 4.4 (Rotation of the plane by the angle α). We denote by ρ_α the map which rotates R^2 by the fixed angle α. Using the standard basis, we find that

    [ρ_α] = [ cos α  −sin α
              sin α   cos α ]

If we use a different basis, however, the matrix for ρ_α changes. Let e_1 be the vector (1, 0) as before, but now let e_2 be the vector obtained by rotating e_1 by α. In general (for α not a multiple of π), these vectors are linearly independent, and thus form a basis for R^2. Now we will compute [ρ_α] with respect to this basis. By definition, ρ_α(e_1) = e_2; using some basic geometry we can determine ρ_α(e_2) = −e_1 + (2 cos α) e_2, and thus find that the matrix [ρ_α] is now

    [ 0  −1
      1  2 cos α ]

This example shows that when computing the matrix associated to a linear map, it is important to remember which bases are involved.

We can now make two definitions that will be useful later.

Definition 4.5. Let A be an n × n matrix (it is important that A is a square matrix here). Then we define the trace of A, denoted tr A, to be the sum of the diagonal entries of A. If A is the 2 × 2 matrix

    [ a b
      c d ],

then we define the determinant of A to be det A = ad − bc. We will later see how to define the determinant for n × n matrices.

If we are given a linear map f: V → V, then the trace and determinant of the matrix [f] do not depend on which basis we have chosen. In the examples above of the two matrices associated to the rotation map, it is easy to check that both matrices have trace 2 cos α and determinant 1.

Assume that we are given two matrices, A = (a_ij) and B = (b_jk), where A is an r × s matrix and B is an s × t matrix. Then we can multiply them to get the r × t matrix C = AB, where C = (c_ik) and

    c_ik = Σ_{j=1}^{s} a_ij b_jk.

Note that A must have the same number of columns as B has rows, or the indices will not match as they should.

Let f: V → W be a linear map, with bases e for V and e' for W. Given a vector x ∈ V, we would like to determine the coordinates of the vector f(x) with respect to e' in terms of the coordinates of x with respect to e and the matrix [f]_(e,e').
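The basis-independence of trace and determinant claimed in Example 4.4 is easy to check numerically. A small sketch, with α = 0.7 as an assumed sample value:

```python
import numpy as np

# The two matrices of the rotation map rho_alpha, in the standard basis
# and in the basis {e1, rho_alpha(e1)}; their entries differ, but trace
# (2 cos alpha) and determinant (1) agree.
a = 0.7
R_std = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
R_rot = np.array([[0.0, -1.0],
                  [1.0, 2*np.cos(a)]])

assert np.isclose(np.trace(R_std), np.trace(R_rot))            # both 2 cos a
assert np.isclose(np.linalg.det(R_std), np.linalg.det(R_rot))  # both 1
```

One can go further and verify that the change-of-basis matrix P = [e_1 | ρ_α(e_1)] satisfies P⁻¹ R_std P = R_rot, which is the general reason these invariants agree.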

Exercise 4.6. Using the definition of matrix multiplication above, verify the formula

    [f]_(e,e') [x]_e = [f(x)]_{e'}.

We can use this exercise to study the relationship between the composite of two linear maps and the product of their matrices. Let f: V → W and g: W → Z be two linear maps, and let V, W, Z have bases e, e', e'', respectively. We have the composite map gf, which is defined on vectors as (gf)(x) = g(f(x)); we also have matrices [f]_(e,e') and [g]_(e',e''), as well as a matrix for the composite gf with respect to e and e''. The goal is to prove that

    [gf]_(e,e'') = [g]_(e',e'') [f]_(e,e').

To do this, a lemma is required.

Lemma 4.7. If A is a k × n matrix, then A = 0 if and only if Ax = 0 for all x ∈ F^n.

Proof. If A = 0, it is obvious that Ax = 0 for all x. Assume that A ≠ 0; we will produce a vector x such that Ax ≠ 0, and that will complete the proof. Since A is nonzero, it has a nonzero entry; call it a_ij. Then Ae_j = Σ_l a_lj ε_l, where ε_l is the vector that is zero except for entry l, which is 1. This vector is nonzero, since its i-th entry a_ij is nonzero.

Corollary 4.8. If Ax = Bx for all vectors x, then A = B as matrices.

Proof. If Ax = Bx for all x, then (A − B)x = 0 for all x, and therefore A − B = 0 by the lemma above.

Now we can show that [gf] = [g][f] by showing that these two matrices are equal when applied to any vector x. On the left side, we have

    [gf][x] = [(gf)(x)]

by the exercise above. On the right, we have

    [g][f][x] = [g][f(x)] = [g(f(x))];

but [(gf)(x)] = [g(f(x))], since that is exactly how the composition of two functions is defined. This proves that [gf] = [g][f].

Returning to our example of the linear map ρ_α which rotates by α, we can use the fact that [ρ_α][ρ_β] = [ρ_α ρ_β] to prove the angle-sum formulas from trigonometry. Since rotating by β and then rotating by α is the same as rotating by α + β, we get that

    [ρ_α][ρ_β] = [ρ_{α+β}] = [ cos(α+β)  −sin(α+β)
                               sin(α+β)   cos(α+β) ]

Multiplying the matrices for ρ_α and ρ_β together, the result is

    [ cos α cos β − sin α sin β   −(sin α cos β + cos α sin β)
      sin α cos β + cos α sin β    cos α cos β − sin α sin β  ]
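The identity [gf] = [g][f] can be sanity-checked numerically. A sketch with two made-up maps f: R^3 → R^2 and g: R^2 → R^3, again building matrices column by column from images of standard basis vectors:

```python
import numpy as np

def matrix_of(f, n):
    # Matrix of a linear map on R^n: columns are f(e_i).
    return np.column_stack([f(e) for e in np.eye(n)])

f = lambda v: np.array([v[0] + v[1], v[1] - v[2]])    # f: R^3 -> R^2
g = lambda v: np.array([2*v[0], v[0] + v[1], -v[1]])  # g: R^2 -> R^3

F = matrix_of(f, 3)
G = matrix_of(g, 2)
GF = matrix_of(lambda v: g(f(v)), 3)   # matrix of the composite g∘f

assert np.allclose(GF, G @ F)          # [gf] = [g][f]
```

Note the order: the matrix of "first f, then g" is G @ F, matching the convention that matrices act on column vectors from the left.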

Comparing these two matrices entrywise gives the desired identities.

Now we turn to using matrices to solve systems of linear equations. If A = [a_1 a_2 ... a_n] is a k × n matrix with columns a_i, and x is any vector, then Ax = x_1 a_1 + x_2 a_2 + ... + x_n a_n. Therefore the equation Ax = 0, where x is the variable, always has the trivial solution x = 0.

Exercise 4.9. Ax = 0 has a nontrivial solution if and only if the columns of A are linearly dependent, if and only if the rank of A is less than n.

Exercise 4.10. If A is a k × n matrix with k < n, then Ax = 0 has a nontrivial solution. (Hint: use Exercise 4.9.)

Exercise 4.11 (Showing the existence of a nontrivial solution). Using the exercise above, we can show that the system of equations

    2x + 3y − z = 0
    7x + 8y + 5z = 0

has a nontrivial solution. Solving this system is equivalent to solving Ax = 0, where

    A = [ 2 3 −1
          7 8  5 ]    and    x = (x, y, z)ᵀ.

In this case, k = 2 and n = 3, so there must be a nontrivial solution.

If we try to solve the general equation Ax = b, this is equivalent to solving

    Σ_i x_i a_i = b,

that is, to finding out whether b is in the span of the columns a_i.

Exercise 4.12. The equation Ax = b is solvable if and only if b ∈ Span{a_i}, if and only if the rank of A is the same as the rank of the augmented matrix [A | b].

Let U = {x : Ax = 0}; this is called the solution space. It is easy to see that U is a subspace of F^n. It is obvious that U contains the zero vector. If x, y ∈ U, then x + y ∈ U, since

    A(x + y) = Ax + Ay = 0 + 0 = 0.

Similarly, if x ∈ U and λ ∈ F, then λx ∈ U.

Theorem 4.13. The dimension of U is n − rk(A).
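A nontrivial solution for the 2 × 3 system above can be computed explicitly. This sketch uses the SVD (the right-singular vectors for zero singular values span the null space); numpy itself has no dedicated null-space routine, so this is one standard workaround:

```python
import numpy as np

# The system 2x + 3y - z = 0, 7x + 8y + 5z = 0:  k = 2 < n = 3,
# so a nontrivial solution must exist.
A = np.array([[2.0, 3.0, -1.0],
              [7.0, 8.0,  5.0]])

_, _, Vt = np.linalg.svd(A)
x = Vt[-1]                      # spans the (1-dimensional) null space
assert np.allclose(A @ x, 0)    # Ax = 0 ...
assert np.linalg.norm(x) > 0    # ... and x is nontrivial
```

Here rk(A) = 2, so by Theorem 4.13 below the solution space is 1-dimensional, and the single vector x spans it.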

Proof. Define φ: F^n → F^k by φ(x) = Ax; φ is linear because A is a matrix. The kernel of φ is exactly the subspace U defined above. We know that dim(ker φ) = n − dim(im φ), so we only need to show that dim(im φ) = rk(A). But the image of φ is the set of all vectors of the form Ax, and this subspace is spanned by the columns of A; hence the dimension of the image is exactly the rank of A.

It is important to note that Ax = b may or may not have a solution. Assume that it does have a solution, and call that solution x_0. To find all solutions of Ax = b, we can subtract the two equations

    Ax = b
    Ax_0 = b

to get A(x − x_0) = 0. Thus all solutions of Ax = b are translates of the vector x_0 by elements of U. Another way to say this: x − x_0 is a solution of Ax = 0 (here x denotes any solution of Ax = b), so x − x_0 ∈ U; this is the same as saying that x ∈ x_0 + U, or that x has the form x_0 + w for some w ∈ U.
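Both claims of this final discussion, dim U = n − rk(A) and the description of the solution set as x_0 + U, can be illustrated on a small made-up matrix (A and b below are chosen for the example, not taken from the notes):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # rank 1, n = 3
n = A.shape[1]
rank = np.linalg.matrix_rank(A)

_, _, Vt = np.linalg.svd(A)
U_basis = Vt[rank:]                  # rows spanning the null space U
assert len(U_basis) == n - rank      # Theorem 4.13: dim U = n - rk(A) = 2

# b lies in the column space, so Ax = b is solvable; lstsq returns
# one particular solution x0.
b = np.array([6.0, 12.0])
x0, *_ = np.linalg.lstsq(A, b, rcond=None)

w = 2*U_basis[0] - 3*U_basis[1]      # an arbitrary element of U
assert np.allclose(A @ (x0 + w), b)  # x0 + w also solves Ax = b
```

The last assertion is the affine-subspace picture of the notes: adding any element of the solution space U to one particular solution x_0 yields another solution of Ax = b.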