Lecture 14: The Rank-Nullity Theorem


Math 108a Professor: Padraic Bartlett Lecture 14: The Rank-Nullity Theorem Week 6 UCSB 2013

In today's talk, the last before we introduce the concept of matrices, we prove what is arguably the strongest theorem we've seen thus far this quarter: the rank-nullity theorem!

1 The Rank-Nullity Theorem: What It Is

The rank-nullity theorem is the following result:

Theorem. Let U, V be a pair of finite-dimensional vector spaces, and let T : U → V be a linear map. Then the following equation holds:

    dimension(null(T)) + dimension(range(T)) = dimension(U).

Though we have never mentioned this theorem in class before, we have proved that it holds in a number of specific situations in earlier classes! For example, in lecture 11, where we were describing why the null space was interesting, we considered the linear map T : R^2 → R, T(x, y) = 2x − y. Using this map, we made the following observations:

1. The null space of T is the line {(x, 2x) | x ∈ R}, and

2. for any a ∈ R, T^{-1}(a) is just a copy of this line shifted by some constant: i.e. T^{-1}(a) = {(0, −a) + n | n ∈ null(T)}.

We illustrated this situation before with a picture of T collapsing each of these shifted lines to a single point of R.

Graphically, what we did here is decompose the domain of our linear map, R^2, into range(T)-many copies of the null space. If we look at the dimensions of the objects we studied here, we took a two-dimensional object (the domain) and broke it into copies of a one-dimensional object (the null space), with as many copies of this one-dimensional object as we have sets T^{-1}(a) (i.e. elements a in the range).

In general, we said in that class that we could always break the domain into copies of the null space, with as many copies as we have sets T^{-1}(a), i.e. elements a in the range! This should lead you to believe that the rank-nullity theorem is true: it certainly seems like it should always hold, given our work in that earlier lecture.
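A quick numerical sanity check (not part of the original lecture): for a matrix A, the map x ↦ Ax has range equal to the column space and null space equal to the kernel of A, so we can verify the theorem for the example T(x, y) = 2x − y with numpy. The helper `null_space_basis` and the tolerance `tol` are illustrative choices, not anything from the lecture.

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Columns spanning null(A): the right singular vectors of A
    whose singular values are (numerically) zero."""
    _, s, Vt = np.linalg.svd(A)
    # pad s so every row of Vt gets a singular value (0 for the extras)
    s_full = np.concatenate([s, np.zeros(Vt.shape[0] - len(s))])
    return Vt[s_full <= tol].T

# The map T(x, y) = 2x - y from the lecture, as the 1x2 matrix A:
A = np.array([[2.0, -1.0]])

rank = np.linalg.matrix_rank(A)          # dimension of range(T)
nullity = null_space_basis(A).shape[1]   # dimension of null(T)

# rank-nullity: 1 + 1 == 2 == dimension of the domain R^2
assert rank + nullity == A.shape[1]
```

Note that the null-space basis vector found here is proportional to (1, 2), exactly the direction of the line {(x, 2x)} described above.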

However, this might not feel a lot like a proper proof to some of you: there's a pretty picture here, but where is the rigor? How are we turning this decomposition into an argument about dimension (which is a statement about bases, which we're not saying anything about here)? The point of this lecture is to take the above intuition and turn it into a proper argument. We do so as follows:

2 The Proof: Some Useful Lemmas

First, recall the following theorem, which we proved over the course of two days earlier in the class (see lecture 6):

Theorem. The idea of dimension is well defined. In other words: suppose that U is a vector space with two different bases B_1, B_2, each containing finitely many elements. Then there are as many elements in B_1 as there are in B_2.

We will need this theorem to prove the rank-nullity theorem. We will also need the following:

Theorem. Suppose that U is an n-dimensional vector space with basis B, and that S is a subspace of U. Then S is also finite-dimensional, and in particular has dimension no greater than n.

Proof. We prove this statement by contradiction. Suppose that S is a subspace with dimension greater than n. Using this fact, create a set T of n + 1 linearly independent vectors in S as follows. First, pick any nonzero vector v ∈ S, and put v in T. Then repeat the following process until T contains n + 1 vectors:

1. Look at T. If it has no more than n vectors in it, then it cannot span S, because S has dimension greater than n. So there is some vector w ∈ S that is not in the span of T.

2. Put w in T. Notice that if T was linearly independent before we added w, it is still linearly independent, because w was not in the span of T.

So: we now have a set of n + 1 linearly independent vectors in S, which means in particular we have a set of n + 1 linearly independent vectors in U! This is a problem, because U is n-dimensional. To see why, notice that we can grow this set T of n + 1 linearly independent vectors into a basis for U as follows. Start with T. If T spans U, stop. Otherwise, repeat the following process until we get a linearly independent set that spans U:

1. If T does not span U, then there is some element b in the basis B of U that is not in the span of T (because otherwise, if T contained all of a basis for U in its span, it would be forced to contain all of U itself in its span!).

2. Add b to T. Again, notice that if T was linearly independent before we added b, it is still linearly independent, because b was not in the span of T.

Eventually, this process will stop, as after at most n steps your set T will at the least contain all of the elements of B, which is a basis for U! So this creates a linearly independent set that spans U: i.e. a basis! And in particular a basis with at least n + 1 elements, because T started with n + 1 elements and only had more things added to it later. However, B is a basis for U with n elements, and we've proven that dimension is well-defined: i.e. that a vector space cannot have two different bases of different sizes! Therefore, this is impossible, and we have a contradiction. Consequently, our assumption must be false: in other words, S must have dimension no greater than n.

3 The Main Proof

With these tools set up, we proceed with the main proof:

Theorem. Let U, V be a pair of finite-dimensional vector spaces, and let T : U → V be a linear map. Then the following equation holds:

    dimension(null(T)) + dimension(range(T)) = dimension(U).

Proof. We prove this as follows: we will create a basis for U with as many elements as dimension(null(T)) + dimension(range(T)), which will clearly demonstrate the above equality. We do this as follows:

1. By using Theorem 2, we can see that the null space of T is finite-dimensional. Take some basis N_B for the null space of T.

2. Now: let B be a basis for U, and let R_B be a set that is empty for now. If N_B is already a basis for U, stop. Otherwise, repeat the following process:

   (a) Look at N_B ∪ R_B. If it is a basis for U, stop. Otherwise, there is some vector b in the basis set B that is not in the span of N_B ∪ R_B. (Again, this is because if not, N_B ∪ R_B would contain the entirety of a basis for U in its span, which would force it to contain U itself in its span!)

   (b) Put b in R_B. Notice that if N_B ∪ R_B was linearly independent before we added b, it is still linearly independent, because b was not in the span of N_B ∪ R_B.

   As before, this process will eventually end, because we only have finitely many elements in B.

3. At the end of this process, we have two sets R_B, N_B such that their union is a basis for U. Look at the set T(R_B) = {T(r) | r ∈ R_B}. We claim that T(R_B) is a basis for the range of T. To prove this, we will simply show that this set spans the range and is linearly independent.

4. To see that T(R_B) spans the range: take any v in the range of T. Because v is in the range, there is some u ∈ U such that T(u) = v. Write u as some linear combination

of elements in our basis R_B ∪ N_B: i.e.

    u = Σ_{r_i ∈ R_B} λ_i r_i + Σ_{n_i ∈ N_B} γ_i n_i.

Let w be the following vector:

    w = Σ_{r_i ∈ R_B} λ_i r_i.

In other words, w is just u if you get rid of all of the parts made using vectors in the null space! Now, simply observe that on one hand, w is an element of the span of R_B, and on the other hand

    v = T(u) = T( Σ_{r_i ∈ R_B} λ_i r_i + Σ_{n_i ∈ N_B} γ_i n_i )
             = T( Σ_{r_i ∈ R_B} λ_i r_i ) + T( Σ_{n_i ∈ N_B} γ_i n_i )
             = Σ_{r_i ∈ R_B} λ_i T(r_i) + Σ_{n_i ∈ N_B} γ_i T(n_i)
             = Σ_{r_i ∈ R_B} λ_i T(r_i) + 0
             = Σ_{r_i ∈ R_B} λ_i T(r_i),

because things in the null space get mapped to 0! Therefore, given any v in the range, we have expressed it as a linear combination of elements of T(R_B). In other words, T(R_B) spans the range of T.

5. To show that T(R_B) is linearly independent: take any linear combination of vectors in T(R_B) that is equal to 0:

    Σ_i λ_i T(r_i) = 0.

We want to show that all of the coefficients λ_i are 0. To see this, start by using the fact that T is linear:

    0 = Σ_i λ_i T(r_i) = T( Σ_i λ_i r_i ).

This means that the vector Σ_i λ_i r_i is in the null space of T. Therefore, we can write it as a linear combination of vectors in N_B: i.e. we can find constants γ_i such that

    Σ_i λ_i r_i = Σ_{n_i ∈ N_B} γ_i n_i.

But this means that we have

    Σ_i λ_i r_i − Σ_{n_i ∈ N_B} γ_i n_i = 0,

which is a linear combination of elements of our basis R_B ∪ N_B that equals 0! Because this set is a basis, it is linearly independent; therefore all of the coefficients in this linear combination must be 0. In particular, this means that all of the λ_i are 0, which proves what we wanted: that the set T(R_B) is linearly independent!

Consequently, we have created a basis for U of the form N_B ∪ R_B, where T(R_B) is a basis for the range and N_B is a basis for the null space. This means that, in particular, the dimension of U is the number of elements in N_B (i.e. the dimension of the null space) plus the number of elements in R_B (i.e. the dimension of the range). So we're done!
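The construction in the proof can also be carried out numerically. The sketch below (an illustration with numpy, not part of the lecture; the function name, the choice of the standard basis for B, and the tolerance-based rank test are all my own assumptions) builds a null-space basis N_B, greedily extends it by standard basis vectors to a basis of R^n exactly as in step 2, and then checks that the images of the added vectors form a basis of the range.

```python
import numpy as np

def rank_nullity_basis(A, tol=1e-10):
    """Mirror the proof's construction for the map T(x) = A x on R^n.

    Returns (N_B, R_B): the columns of N_B are a basis of null(A), and
    the columns of A @ R_B are a basis of range(A); together the columns
    of N_B and R_B form a basis of the domain R^n."""
    m, n = A.shape

    # Basis of null(A): right singular vectors with zero singular value.
    _, s, Vt = np.linalg.svd(A)
    s_full = np.concatenate([s, np.zeros(n - len(s))])
    N_B = Vt[s_full <= tol].T                      # shape (n, nullity)

    # Step 2 of the proof, with B = the standard basis: keep a standard
    # basis vector whenever it is not in the span of what we have so far.
    kept = [N_B[:, j] for j in range(N_B.shape[1])]
    R_B_cols = []
    for j in range(n):
        e = np.eye(n)[:, j]
        candidate = np.column_stack(kept + R_B_cols + [e])
        # full column rank <=> e was not in the span, so adding it
        # preserves linear independence
        if np.linalg.matrix_rank(candidate, tol=tol) == candidate.shape[1]:
            R_B_cols.append(e)
    R_B = np.column_stack(R_B_cols) if R_B_cols else np.zeros((n, 0))
    return N_B, R_B

# The running example T(x, y) = 2x - y:
A = np.array([[2.0, -1.0]])
N_B, R_B = rank_nullity_basis(A)
assert N_B.shape[1] + R_B.shape[1] == A.shape[1]   # rank-nullity for dim(U) = 2
# A @ R_B has full column rank, so T(R_B) is a basis of range(T):
assert np.linalg.matrix_rank(A @ R_B) == R_B.shape[1]
```

The rank test on `candidate` plays the role of the "is b in the span of N_B ∪ R_B?" check from the proof: a matrix whose columns are linearly independent has rank equal to its number of columns.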