Row Space and Column Space of a Matrix


Summary: To an $m \times n$ matrix $A = (a_{ij})$ we can naturally associate subspaces of $K^n$ and of $K^m$, called the row space of $A$ and the column space of $A$, respectively. Their dimensions are called the row rank and the column rank of $A$, respectively. These spaces play a useful role in studying properties of the matrix $A$ and of the system of linear equations having $A$ as its coefficient matrix. It is a remarkable fact that the row rank and the column rank of a matrix are always the same. Moreover, they are also equal to another quantity, called the determinantal rank. The common value is simply called the rank of $A$, and it is an important number associated to the matrix. The column space of $A$ turns out to be the same as the range of the linear map $f_A : K^n \to K^m$ associated to the $m \times n$ matrix $A$.

Basic Definitions

Definition. Let $A = (a_{ij})$ be any $m \times n$ matrix with entries in $K$. The row space of $A$, denoted $R(A)$, is defined as the linear span of the rows of $A$, whereas the column space of $A$, denoted $C(A)$, is defined as the linear span of the columns of $A$. In other words, if, as usual, we denote by $A_1, \ldots, A_m$ the row vectors of $A$ and by $A^1, \ldots, A^n$ the column vectors of $A$, then $R(A) := L(A_1, \ldots, A_m)$ and $C(A) := L(A^1, \ldots, A^n)$. The dimension of $R(A)$ is called the row rank of $A$ and denoted by $\rho(A)$, while the dimension of $C(A)$ is called the column rank of $A$ and denoted by $\kappa(A)$.

Note that $\rho(A) \le m$, since $R(A)$ is spanned by $m$ vectors, whereas $\rho(A) \le n$, since $R(A)$ is a subspace of $K^n$. Thus $\rho(A) \le \min\{m, n\}$. Similarly, $\kappa(A) \le \min\{m, n\}$.

Examples

Examples:
1. If $A = I_n$ is the $n \times n$ identity matrix, then clearly $R(A) = K^n = C(A)$, and thus $\rho(A) = n = \kappa(A)$ in this case.
2. If $A$ is an $m \times n$ matrix in row-echelon form and $A$ has $r$ pivots, then it is clear that the first $r$ rows of $A$ are linearly independent (check!), whereas the remaining $m - r$ rows consist entirely of zeros. It follows that the first $r$ rows form a basis of $R(A)$ and $\rho(A) = r$.
3. If $A$ is an $m \times n$ matrix in reduced row-echelon form and $A$ has $r$ pivots appearing in the columns indexed by $j_1, j_2, \ldots, j_r$, then these $r$ column vectors are the standard basis vectors $e_1, \ldots, e_r$ of $K^m$, and so they are linearly independent. Moreover, any other column vector of $A$ has its last $m - r$ coordinates equal to $0$. It follows that the set of columns $\{A^{j_1}, A^{j_2}, \ldots, A^{j_r}\}$ containing the pivots forms a basis of $C(A)$ and $\kappa(A) = r$. Combined with the previous example, we see that $\rho(A) = \kappa(A)$ in this case.
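As a quick computational illustration of Examples 2 and 3, here is a minimal sketch in Python using the sympy library (the library and the matrix are illustrative choices, not prescribed by these notes): the pivot positions of a reduced row-echelon form can be read off directly and compared with the rank.

```python
from sympy import Matrix

# An illustrative 3x4 matrix over the rationals.
A = Matrix([
    [1, 2, 0, 3],
    [2, 4, 1, 7],
    [3, 6, 1, 10],
])

R, pivot_cols = A.rref()    # R = reduced row-echelon form, pivot_cols = pivot column indices

print(R)                    # the first two rows are nonzero, the last row is zero
print(pivot_cols)           # (0, 2): pivots sit in the 1st and 3rd columns
print(len(pivot_cols))      # 2 = number of pivots
print(A.rank())             # 2: rho(A) = kappa(A) = rank(A)
```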

Effect of an Elementary Row Operation

We will first examine the effect of an elementary row operation on the row space of a rectangular matrix. Let $A$ be an $m \times n$ matrix and let $A_1, \ldots, A_m$ denote the row vectors of $A$. Suppose $A'$ is the $m \times n$ matrix obtained from $A$ by an elementary row operation. Then the rows of $A'$ look like
$$A_1, \ldots, A_j, \ldots, A_i, \ldots, A_m$$
or
$$A_1, \ldots, A_{i-1},\, cA_i,\, A_{i+1}, \ldots, A_m \qquad (c \in K,\ c \neq 0)$$
or
$$A_1, \ldots, A_i + cA_j, \ldots, A_j, \ldots, A_m \qquad (c \in K),$$
depending on the type of the elementary row operation used to obtain $A'$ from $A$. In each case, it is clear that $R(A) = R(A')$. So: the row space of a matrix is unaltered by an elementary row operation.
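This invariance is easy to check numerically. The following sympy sketch (the matrices below are illustrative choices) applies a type III operation via its elementary matrix and confirms that the row space is unchanged.

```python
from sympy import Matrix

A = Matrix([
    [1, 2, 3],
    [4, 5, 6],
])

# Type III operation R2 -> R2 - 4*R1, encoded by the elementary matrix E.
E = Matrix([
    [1, 0],
    [-4, 1],
])
A_prime = E * A

# Both matrices have the same reduced row-echelon form, so R(A) = R(A').
print(A.rref()[0] == A_prime.rref()[0])              # True

# Equivalently: stacking A on top of A' does not raise the rank,
# so every row of A already lies in the row space of A'.
print(A.col_join(A_prime).rank() == A_prime.rank())  # True
```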

Finding the Row Rank and a Basis of the Row Space

The observation on the previous slide, together with the second example (of a matrix in row-echelon form), implies the following. It also shows that our current usage of the term row rank is consistent with the earlier usage, where it was taken to be the number of pivots in a row-echelon form of the given matrix.

Theorem. Let $A$ be an $m \times n$ matrix and let $r$ be the number of pivots in a row-echelon form of $A$. Then the row rank of $A$ is $\rho(A) = r$. Moreover, the first $r$ rows of a row-echelon form $\tilde{A}$ of $A$ form a basis of the row space $R(A)$ of $A$.

Since the reduced row-echelon form of a matrix $A$ is also a row-echelon form of $A$, the above theorem applies equally well to it and shows that the nonzero rows of the reduced row-echelon form of $A$ form a basis of $R(A)$.
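Here is a sketch of the theorem in sympy, reusing the same illustrative matrix as above: the nonzero rows of an echelon form give a basis of the row space.

```python
from sympy import Matrix

A = Matrix([
    [1, 2, 0, 3],
    [2, 4, 1, 7],
    [3, 6, 1, 10],
])

E = A.echelon_form()    # a row-echelon form of A (zero rows at the bottom)
r = A.rank()            # number of pivots

# The first r rows of the echelon form are a basis of R(A); the rest are zero rows.
row_basis = [E.row(i) for i in range(r)]
print(row_basis)
print(all(all(x == 0 for x in E.row(i)) for i in range(r, E.rows)))   # True
```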

The case of the column space

Let us examine the effect of an elementary row operation on the column space of a matrix. We begin with a simple example.

Example: Consider the $2 \times 2$ matrix
$$A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}.$$
It is clear that
$$C(A) = L\!\left( \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 2 \\ 4 \end{bmatrix} \right) = L\!\left( \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right)$$
and $\kappa(A) = 1$. Suppose we perform an elementary row operation to transform $A$ to $A'$. For example, interchanging the two rows changes $A$ to
$$A' = \begin{bmatrix} 2 & 4 \\ 1 & 2 \end{bmatrix} \quad \text{and} \quad C(A) \text{ to } C(A') = L\!\left( \begin{bmatrix} 2 \\ 1 \end{bmatrix} \right).$$
On the other hand, multiplying the first row of $A$ by $2$ changes $A$ to

Example continued

$$A'' = \begin{bmatrix} 2 & 4 \\ 2 & 4 \end{bmatrix} \quad \text{and} \quad C(A) \text{ to } C(A'') = L\!\left( \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right).$$
If we perform an elementary row operation of type III, for instance $R_2 \to R_2 - 2R_1$, then $A$ changes to
$$A''' = \begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix} \quad \text{and} \quad C(A) \text{ to } C(A''') = L\!\left( \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right).$$
Thus we see that in each case the column space changes completely. Indeed, the only vector that any two of the column spaces $C(A)$, $C(A')$, $C(A'')$ and $C(A''')$ have in common is the zero vector [geometrically, they correspond to distinct lines passing through the origin]. However, the dimensions are always the same, and moreover, the first column appears to give a basis of the column space in each case. It turns out that this is a general phenomenon.

Effect of an Elementary Row Operation on the Column Space

Theorem. Let $A$ be an $m \times n$ matrix and let $A'$ be the $m \times n$ matrix obtained from $A$ by an elementary row operation. Then for any $j_1 < \cdots < j_k$ in $\{1, \ldots, n\}$ and any $c_1, \ldots, c_k \in K$,
$$c_1 A^{j_1} + \cdots + c_k A^{j_k} = 0 \iff c_1 (A')^{j_1} + \cdots + c_k (A')^{j_k} = 0. \qquad (*)$$
Hence if $S := \{A^{j_1}, \ldots, A^{j_k}\}$ and $S' := \{(A')^{j_1}, \ldots, (A')^{j_k}\}$, then $S$ is linearly independent $\iff$ $S'$ is linearly independent, and $S$ is a basis of $C(A)$ $\iff$ $S'$ is a basis of $C(A')$. In particular, $\dim C(A) = \dim C(A')$, i.e., $\kappa(A) = \kappa(A')$.

Proof: Note that $A^j = A e_j$ and $(A')^j = A' e_j$, where $e_j$ is the standard basis vector in $K^n$ with $1$ in the $j$-th place and $0$ elsewhere.

Proof of Theorem Continued

Since $A' = EA$ (and $A = E^{-1} A'$) for some elementary matrix $E$, the equivalence (*) follows upon multiplying on the left by $E$ or by $E^{-1}$. Now (*) clearly implies that $S$ is linearly independent $\iff$ $S'$ is linearly independent. Moreover, if $\{A^{j_1}, \ldots, A^{j_k}\}$ spans $C(A)$, then any column $A^j$ can be expressed as a linear combination of $A^{j_1}, \ldots, A^{j_k}$, thus leading to a dependence relation among $A^j, A^{j_1}, \ldots, A^{j_k}$, and by (*), a similar dependence relation holds among $(A')^j, (A')^{j_1}, \ldots, (A')^{j_k}$. This shows that $S$ spans $C(A)$ $\iff$ $S'$ spans $C(A')$. Consequently, $S$ is a basis of $C(A)$ $\iff$ $S'$ is a basis of $C(A')$. In particular, $C(A)$ and $C(A')$ have the same dimension, i.e., $\kappa(A) = \kappa(A')$.

The results of the above theorem can be summarized as follows. The column space of a matrix can change upon an elementary row operation, but the column rank remains the same. Moreover, if a set of columns of a transformed matrix forms a basis of its column space, then the corresponding columns of the original matrix form a basis of the column space of the original matrix.

Repeatedly applying the above result and using Example 3 (of a matrix in reduced row-echelon form), we obtain the following.

Theorem. The column rank of a matrix is the number of pivots in its row-echelon form. In particular, if $A$ is any $m \times n$ matrix, then $\rho(A) = \kappa(A)$, that is, row-rank$(A)$ = column-rank$(A)$. Moreover, if the pivots occur in the columns indexed by $j_1, j_2, \ldots, j_r$ in a row-echelon form of $A$, then the columns of $A$ indexed by $j_1, j_2, \ldots, j_r$ form a basis of $C(A)$.
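The procedure in the theorem is easy to carry out in sympy (again with the same illustrative matrix as before): the pivot positions come from the reduced row-echelon form, but the basis vectors are taken from the original matrix.

```python
from sympy import Matrix

A = Matrix([
    [1, 2, 0, 3],
    [2, 4, 1, 7],
    [3, 6, 1, 10],
])

_, pivot_cols = A.rref()                      # pivot column indices, read off from the RREF
col_basis = [A.col(j) for j in pivot_cols]    # the corresponding columns of the ORIGINAL A

print(pivot_cols)                             # (0, 2)
print(col_basis)                              # a basis of C(A)

# These pivot columns have rank equal to rank(A), so they indeed form a basis of C(A).
B = Matrix.hstack(*col_basis)
print(B.rank() == A.rank())                   # True
```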

The Case of a Square Matrix

Now that the row rank and the column rank of any rectangular matrix $A$ have been shown to be equal, we may as well use just the word rank to describe them, and use the notation $\mathrm{rank}(A)$ rather than $\rho(A)$ or $\kappa(A)$. Let us see how the rank relates to determinants, etc., in the special case of a square matrix.

If $A$ is a square matrix of size $n \times n$, then we know that any row-echelon form of $A$ is either upper triangular with nonzero entries on the diagonal (these nonzero entries are, in fact, its pivots), or it has a row of zeros. By what we have seen earlier, these two possibilities correspond exactly to the conditions $\mathrm{rank}(A) = n$ and $\mathrm{rank}(A) < n$. Thus we see that
$$\mathrm{rank}(A) = n \iff A \text{ is invertible} \iff \det(A) \neq 0.$$
These conditions are also equivalent to saying that the homogeneous system $Ax = 0$ has only the trivial solution, or that the system $Ax = b$ has a unique solution for every $b \in K^n$.
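Here is a small sympy check of these equivalences on two hypothetical $2 \times 2$ matrices, one invertible and one singular.

```python
from sympy import Matrix

A = Matrix([[1, 2], [3, 4]])   # det(A) = -2, rank 2
B = Matrix([[1, 2], [2, 4]])   # det(B) =  0, rank 1

for M in (A, B):
    print(M.rank() == M.rows,      # full rank?
          M.det() != 0,            # nonzero determinant?
          M.nullspace() == [])     # Mx = 0 has only the trivial solution?
# prints: True True True   (for A)
#         False False False (for B)
```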

Determinantal Rank of a Matrix

There is yet another way to characterize the rank of a matrix. Recall that by a minor of an $m \times n$ matrix $A$ we mean the determinant of a square submatrix of $A$; if this submatrix is of size $k \times k$, then the minor may be referred to as a $k \times k$ minor. By convention, the $0 \times 0$ minor is always equal to $1$. Thus an $m \times n$ matrix has $\binom{m}{k}\binom{n}{k}$ possible $k \times k$ minors for each $k = 0, 1, \ldots, \min\{m, n\}$. Now here is the definition.

Definition. The determinantal rank of an $m \times n$ matrix $A$ is defined by
$$\text{det-rank}(A) := \max\{k : A \text{ has a nonzero } k \times k \text{ minor}\}.$$

In other words, det-rank$(A) = r$ if and only if some $r \times r$ minor of $A$ is nonzero and every $(r+1) \times (r+1)$ minor of $A$ is zero [note that the last condition implies that every $s \times s$ minor of $A$ is zero whenever $s \ge r + 1$]. It is easy to see that the determinantal rank of a matrix in row-echelon form is equal to its rank. This is, in fact, true in general.

Determinantal Rank is Equal to Rank

Theorem. Let $A$ be an $m \times n$ matrix. Then det-rank$(A) = \mathrm{rank}(A)$.

Proof: Suppose $\mathrm{rank}(A) = r$. Then we can find a basis of the row space $R(A)$ consisting of $r$ rows of $A$ (do you see why?). We may assume without loss of generality that the first $r$ rows $A_1, \ldots, A_r$ of $A$ constitute a basis of $R(A)$. In particular, if $B$ is the $r \times n$ matrix formed by the first $r$ rows of $A$, then $\mathrm{rank}(B) = r$. Hence the column rank of $B$ is also $r$, and so $r$ of its columns, say $B^{j_1}, \ldots, B^{j_r}$, are linearly independent. These columns constitute an $r \times r$ submatrix $M$ of $A$ of rank $r$. Consequently, $\det(M) \neq 0$. Thus we see that some $r \times r$ minor of $A$ is nonzero. On the other hand, since any $r + 1$ rows of $A$ are linearly dependent, the same holds for the rows of any $(r+1) \times (r+1)$ submatrix of $A$, and hence every $(r+1) \times (r+1)$ minor of $A$ is zero. It follows that det-rank$(A) = r$, as desired.
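The determinantal rank can also be computed by brute force, testing every square submatrix. The helper `det_rank` below is an illustrative sketch of this (exponential in cost, so only sensible for tiny matrices), and its output agrees with the ordinary rank.

```python
from itertools import combinations
from sympy import Matrix

def det_rank(A):
    """Largest k such that A has a nonzero k x k minor (0 for the zero matrix)."""
    best = 0
    for k in range(1, min(A.rows, A.cols) + 1):
        for rows in combinations(range(A.rows), k):
            for cols in combinations(range(A.cols), k):
                if A[list(rows), list(cols)].det() != 0:   # a k x k minor of A
                    best = k
    return best

A = Matrix([
    [1, 2, 0, 3],
    [2, 4, 1, 7],
    [3, 6, 1, 10],
])
print(det_rank(A), A.rank())   # 2 2
```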

Column Space of $A$ and the Range of $f_A$

As seen earlier, for any linear map $f : V \to W$ of finite-dimensional vector spaces, the range of $f$ is a subspace of $W$ given by $R(f) := \{f(v) : v \in V\}$, and the dimension of $R(f)$ is called the rank of $f$. It is easy to see how this relates to the notion of (row and column) rank of a matrix. Let $A$ be an $m \times n$ matrix with entries in $K$ and let $f_A : K^n \to K^m$ be the linear map corresponding to $A$ [given by $f_A(x) = Ax$ for $x \in K^n$]. Then
$$R(f_A) = \{Ax : x = [x_1, \ldots, x_n]^T \in K^n\} = \{A(x_1 e_1 + \cdots + x_n e_n) : x_1, \ldots, x_n \in K\} = \{x_1 A^1 + \cdots + x_n A^n : x_1, \ldots, x_n \in K\} = C(A).$$
Thus we see that the range of $f_A$ is precisely the column space of $A$, and in particular, $\mathrm{rank}(f_A) = \kappa(A) = \rho(A)$.
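The key identity $Ax = x_1 A^1 + \cdots + x_n A^n$ used above can also be verified symbolically; the sympy sketch below does this for an arbitrarily chosen $2 \times 3$ example.

```python
from sympy import Matrix, symbols

x1, x2, x3 = symbols('x1 x2 x3')
A = Matrix([
    [1, 2, 0],
    [3, 1, 4],
])
x = Matrix([x1, x2, x3])

# A*x is the linear combination x1*A^1 + x2*A^2 + x3*A^3 of the columns of A,
# so every vector in the range of f_A lies in C(A), and conversely.
print(A * x == x1 * A.col(0) + x2 * A.col(1) + x3 * A.col(2))   # True
```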

Column Space and Null Space; Rank and Nullity

Just as the range of $f_A : K^n \to K^m$ is precisely the column space of the corresponding matrix $A$, the null space of $f_A$ is the set $N(A) := \{x \in K^n : Ax = 0\}$, which is the set of all solutions of the homogeneous system $Ax = 0$. We call it simply the null space of $A$, and its dimension the nullity of $A$. The Rank-Nullity Theorem for linear transformations then takes the form
$$\mathrm{rank}(A) + \mathrm{nullity}(A) = n$$
for any $m \times n$ matrix $A$.

We have already seen a method to find a basis for $R(A)$ and for $C(A)$: obtain a row-echelon form $\tilde{A}$ of $A$. Then the first $r$ rows of $\tilde{A}$, where $r = \mathrm{rank}(A)$, give a basis of $R(A)$, whereas the columns of $A$ corresponding to the columns containing pivots in $\tilde{A}$ give a basis of $C(A)$. To obtain a basis of $N(A)$, one observes that $\tilde{A} = PA$ for some invertible $m \times m$ matrix $P$, and hence $N(A) = N(\tilde{A})$.

Finding a Basis for the Null Space

As noted above, if $\tilde{A}$ is a row-echelon form of $A$, then $N(A) = N(\tilde{A})$. In other words, the solutions of $Ax = 0$ are the same as the solutions of $\tilde{A}x = 0$. Let $r = \mathrm{rank}(A)$ and let $s = n - r$ be the nullity of $A$. We have seen that the solutions of $\tilde{A}x = 0$ can be found by back substitution; they involve $s$ free parameters, say $t_1, \ldots, t_s$, and can be expressed as
$$x = t_1 v_1 + \cdots + t_s v_s$$
for some $v_1, \ldots, v_s \in K^n$. These vectors $v_1, \ldots, v_s$ constitute a basis of $N(A)$. You are encouraged to work through the tutorial problems and find explicit bases for row spaces, column spaces and null spaces in several numerical examples.
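As a final illustration (with the same example matrix as before), sympy's nullspace method returns exactly such vectors $v_1, \ldots, v_s$, one per free parameter, and the rank-nullity relation can be checked alongside.

```python
from sympy import Matrix

A = Matrix([
    [1, 2, 0, 3],
    [2, 4, 1, 7],
    [3, 6, 1, 10],
])

null_basis = A.nullspace()    # basis vectors v_1, ..., v_s of N(A)
print(null_basis)             # two basis vectors, one per free variable
print(A.rank() + len(null_basis) == A.cols)                   # True: rank + nullity = n
print(all(A * v == Matrix.zeros(3, 1) for v in null_basis))   # True: each v solves Ax = 0
```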