Working with Block Structured Matrices


Numerical linear algebra lies at the heart of modern scientific computing and computational science. Today it is not uncommon to perform numerical computations with matrices having millions of components. The key to understanding how to implement such algorithms is to exploit the underlying structure within the matrices. In these notes we touch on a few ideas and tools for dissecting matrix structure. Specifically, we are concerned with block matrix structures.

1. Rows and Columns

Let $A \in \mathbb{R}^{m \times n}$, so that $A$ has $m$ rows and $n$ columns. Denote the element of $A$ in the $i$th row and $j$th column by $A_{ij}$. Denote the $m$ rows of $A$ by $A_{1\cdot}, A_{2\cdot}, A_{3\cdot}, \dots, A_{m\cdot}$ and the $n$ columns of $A$ by $A_{\cdot 1}, A_{\cdot 2}, A_{\cdot 3}, \dots, A_{\cdot n}$. For example, if
\[
A = \begin{bmatrix} 3 & 2 & -1 & 5 & 7 & 3 \\ -2 & 27 & 32 & -100 & 0 & 0 \\ -89 & 0 & 47 & 22 & -21 & 33 \end{bmatrix},
\]
then $A_{24} = -100$,
\[
A_{1\cdot} = \begin{bmatrix} 3 & 2 & -1 & 5 & 7 & 3 \end{bmatrix},\quad
A_{2\cdot} = \begin{bmatrix} -2 & 27 & 32 & -100 & 0 & 0 \end{bmatrix},\quad
A_{3\cdot} = \begin{bmatrix} -89 & 0 & 47 & 22 & -21 & 33 \end{bmatrix},
\]
and
\[
A_{\cdot 1} = \begin{bmatrix} 3 \\ -2 \\ -89 \end{bmatrix},\
A_{\cdot 2} = \begin{bmatrix} 2 \\ 27 \\ 0 \end{bmatrix},\
A_{\cdot 3} = \begin{bmatrix} -1 \\ 32 \\ 47 \end{bmatrix},\
A_{\cdot 4} = \begin{bmatrix} 5 \\ -100 \\ 22 \end{bmatrix},\
A_{\cdot 5} = \begin{bmatrix} 7 \\ 0 \\ -21 \end{bmatrix},\
A_{\cdot 6} = \begin{bmatrix} 3 \\ 0 \\ 33 \end{bmatrix}.
\]
Exercise: If
\[
C = \begin{bmatrix}
3 & -4 & 1 & 1 & 0 & 0 \\
0 & -2 & 2 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 2 & -4 \\
0 & 0 & 0 & 0 & 3 & 1
\end{bmatrix},
\]
what are $C_{44}$, $C_{4\cdot}$, and $C_{\cdot 4}$? For example, $C_{2\cdot} = \begin{bmatrix} 0 & -2 & 2 & 0 & 1 & 0 \end{bmatrix}$ and $C_{\cdot 2} = (-4, -2, 0, 0, 0)^T$.

The block structuring of a matrix into its rows and columns is of fundamental importance and is extremely useful in understanding the properties of a matrix. In particular, for $A \in \mathbb{R}^{m \times n}$ it allows us to write
\[
A = \begin{bmatrix} A_{1\cdot} \\ A_{2\cdot} \\ A_{3\cdot} \\ \vdots \\ A_{m\cdot} \end{bmatrix}
\quad\text{and}\quad
A = \begin{bmatrix} A_{\cdot 1} & A_{\cdot 2} & A_{\cdot 3} & \cdots & A_{\cdot n} \end{bmatrix}.
\]
These are called the row and column block representations of $A$, respectively.
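The row and column block representations map directly onto array slicing in numerical software. The following minimal sketch uses Python with NumPy (our choice of environment, not part of the original notes); note that NumPy indices start at 0 rather than 1.

```python
import numpy as np

# The example matrix A from above.
A = np.array([[  3,  2,  -1,    5,   7,  3],
              [ -2, 27,  32, -100,   0,  0],
              [-89,  0,  47,   22, -21, 33]])

print(A[1, 3])   # the (2,4) entry A_24 = -100 (0-based indices)
print(A[0, :])   # the row A_1.
print(A[:, 3])   # the column A_.4
```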

1.1. Matrix-vector multiplication.

Let $A \in \mathbb{R}^{m \times n}$ and $x \in \mathbb{R}^n$. In terms of its coordinates (or components), we can write $x = (x_1, x_2, \dots, x_n)^T$ with each $x_j \in \mathbb{R}$. The term $x_j$ is called the $j$th component of $x$. For example, if $x = (5, -100, 22)^T$, then $n = 3$, $x_1 = 5$, $x_2 = -100$, $x_3 = 22$. We define the matrix-vector product $Ax$ by
\[
Ax = \begin{bmatrix} A_{1\cdot}x \\ A_{2\cdot}x \\ A_{3\cdot}x \\ \vdots \\ A_{m\cdot}x \end{bmatrix},
\]
where, for each $i = 1, 2, \dots, m$, $A_{i\cdot}x$ is the dot product of the $i$th row of $A$ with $x$ and is given by
\[
A_{i\cdot}x = \sum_{j=1}^{n} A_{ij}x_j.
\]
For example, if
\[
A = \begin{bmatrix} 3 & 2 & -1 & 5 & 7 & 3 \\ -2 & 27 & 32 & -100 & 0 & 0 \\ -89 & 0 & 47 & 22 & -21 & 33 \end{bmatrix}
\quad\text{and}\quad
x = \begin{bmatrix} 1 \\ -1 \\ 0 \\ 0 \\ 2 \\ 3 \end{bmatrix},
\]
then
\[
Ax = \begin{bmatrix} 24 \\ -29 \\ -32 \end{bmatrix}.
\]
Exercise: If
\[
C = \begin{bmatrix}
3 & -4 & 1 & 1 & 0 & 0 \\
0 & -2 & 2 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 2 & -4 \\
0 & 0 & 0 & 0 & 3 & 1
\end{bmatrix}
\quad\text{and}\quad
x = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \\ 2 \\ 3 \end{bmatrix},
\]
what is $Cx$?
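The dot-product definition above translates directly into a loop over rows. Here is a sketch (the helper name matvec_by_rows is ours, not the notes'), checked against the worked example:

```python
import numpy as np

def matvec_by_rows(A, x):
    """Row-space view: the i-th entry of Ax is the dot product A_i. x."""
    m, n = A.shape
    y = np.zeros(m)
    for i in range(m):
        y[i] = A[i, :] @ x          # sum_j A_ij x_j
    return y

A = np.array([[  3,  2,  -1,    5,   7,  3],
              [ -2, 27,  32, -100,   0,  0],
              [-89,  0,  47,   22, -21, 33]])
x = np.array([1, -1, 0, 0, 2, 3])
print(matvec_by_rows(A, x))         # [ 24. -29. -32.], matching the example
```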

Note that if $A \in \mathbb{R}^{m \times n}$ and $x \in \mathbb{R}^n$, then $Ax$ is always well defined, with $Ax \in \mathbb{R}^m$. In terms of components, the $i$th component of $Ax$ is given by the dot product of the $i$th row of $A$ (i.e. $A_{i\cdot}$) with $x$ (i.e. $A_{i\cdot}x$). The view of the matrix-vector product described above is the row-space perspective, where the term row space will be given a more rigorous definition at a later time. But there is a very different way of viewing the matrix-vector product based on a column-space perspective. This view uses the notion of a linear combination of a collection of vectors.

Given $k$ vectors $v^1, v^2, \dots, v^k \in \mathbb{R}^n$ and $k$ scalars $\alpha_1, \alpha_2, \dots, \alpha_k \in \mathbb{R}$, we can form the vector
\[
\alpha_1 v^1 + \alpha_2 v^2 + \cdots + \alpha_k v^k \in \mathbb{R}^n.
\]
Any vector of this kind is said to be a linear combination of the vectors $v^1, v^2, \dots, v^k$, where the $\alpha_1, \alpha_2, \dots, \alpha_k$ are called the coefficients in the linear combination. The set of all such vectors formed as linear combinations of $v^1, v^2, \dots, v^k$ is said to be the linear span of $v^1, v^2, \dots, v^k$ and is denoted
\[
\mathrm{Span}\{v^1, v^2, \dots, v^k\} := \{\alpha_1 v^1 + \alpha_2 v^2 + \cdots + \alpha_k v^k \mid \alpha_1, \alpha_2, \dots, \alpha_k \in \mathbb{R}\}.
\]
Returning to the matrix-vector product, one has that
\[
Ax = \begin{bmatrix}
A_{11}x_1 + A_{12}x_2 + A_{13}x_3 + \cdots + A_{1n}x_n \\
A_{21}x_1 + A_{22}x_2 + A_{23}x_3 + \cdots + A_{2n}x_n \\
\vdots \\
A_{m1}x_1 + A_{m2}x_2 + A_{m3}x_3 + \cdots + A_{mn}x_n
\end{bmatrix}
= x_1 A_{\cdot 1} + x_2 A_{\cdot 2} + x_3 A_{\cdot 3} + \cdots + x_n A_{\cdot n},
\]
which is a linear combination of the columns of $A$. That is, we can view the matrix-vector product $Ax$ as taking a linear combination of the columns of $A$, where the coefficients in the linear combination are the coordinates of the vector $x$. We now have two fundamentally different ways of viewing the matrix-vector product $Ax$.

Row-space view of $Ax$:
\[
Ax = \begin{bmatrix} A_{1\cdot}x \\ A_{2\cdot}x \\ A_{3\cdot}x \\ \vdots \\ A_{m\cdot}x \end{bmatrix}.
\]
Column-space view of $Ax$:
\[
Ax = x_1 A_{\cdot 1} + x_2 A_{\cdot 2} + x_3 A_{\cdot 3} + \cdots + x_n A_{\cdot n}.
\]

2. Matrix Multiplication

We now build on our notion of a matrix-vector product to define a notion of a matrix-matrix product, which we call matrix multiplication. Given two matrices $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times k}$, note that each of the columns of $B$ resides in $\mathbb{R}^n$, i.e. $B_{\cdot j} \in \mathbb{R}^n$ for $j = 1, 2, \dots, k$. Therefore each of the matrix-vector products $AB_{\cdot j}$ is well defined for $j = 1, 2, \dots, k$.
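The column-space view is just as easy to implement: accumulate the columns of $A$ scaled by the components of $x$. A sketch (again with a helper name of our own choosing) that confirms both views agree:

```python
import numpy as np

def matvec_by_columns(A, x):
    """Column-space view: Ax = x_1 A_.1 + x_2 A_.2 + ... + x_n A_.n."""
    m, n = A.shape
    y = np.zeros(m)
    for j in range(n):
        y += x[j] * A[:, j]         # accumulate the linear combination
    return y

A = np.array([[  3,  2,  -1,    5,   7,  3],
              [ -2, 27,  32, -100,   0,  0],
              [-89,  0,  47,   22, -21, 33]])
x = np.array([1, -1, 0, 0, 2, 3])
print(matvec_by_columns(A, x))                       # [ 24. -29. -32.]
print(np.allclose(matvec_by_columns(A, x), A @ x))   # True: the two views agree
```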

The well-definedness of the products $AB_{\cdot j}$ allows us to define a matrix-matrix product that exploits the block column structure of $B$ by setting
\[
(1)\qquad AB := \begin{bmatrix} AB_{\cdot 1} & AB_{\cdot 2} & AB_{\cdot 3} & \cdots & AB_{\cdot k} \end{bmatrix}.
\]
Note that the $j$th column of $AB$ is $(AB)_{\cdot j} = AB_{\cdot j} \in \mathbb{R}^m$ and that $AB \in \mathbb{R}^{m \times k}$, i.e. if $H \in \mathbb{R}^{m \times n}$ and $L \in \mathbb{R}^{n \times k}$, then $HL \in \mathbb{R}^{m \times k}$. Also note that if $T \in \mathbb{R}^{s \times t}$ and $M \in \mathbb{R}^{r \times \ell}$, then the matrix product $TM$ is only defined when $t = r$. For example, if
\[
A = \begin{bmatrix} 3 & 2 & -1 & 5 & 7 & 3 \\ -2 & 27 & 32 & -100 & 0 & 0 \\ -89 & 0 & 47 & 22 & -21 & 33 \end{bmatrix}
\quad\text{and}\quad
B = \begin{bmatrix} 1 & 0 \\ -1 & 2 \\ 0 & 1 \\ 0 & 0 \\ 2 & -1 \\ 3 & 0 \end{bmatrix},
\]
then
\[
AB = \begin{bmatrix} 24 & -4 \\ -29 & 86 \\ -32 & 68 \end{bmatrix}.
\]
Exercise: If
\[
C = \begin{bmatrix}
3 & -4 & 1 & 1 & 0 & 0 \\
0 & -2 & 2 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 2 & -4 \\
0 & 0 & 0 & 0 & 3 & 1
\end{bmatrix}
\quad\text{and}\quad
D = \begin{bmatrix} 0 & 2 \\ 4 & 3 \\ 0 & -2 \\ 4 & 5 \\ 5 & 2 \\ 4 & 3 \end{bmatrix},
\]
is $CD$ well defined, and if so, what is it?

The formula (1) can be used to give further insight into the individual components of the matrix product $AB$. By the definition of the matrix-vector product we have, for each $j = 1, 2, \dots, k$,
\[
AB_{\cdot j} = \begin{bmatrix} A_{1\cdot}B_{\cdot j} \\ A_{2\cdot}B_{\cdot j} \\ \vdots \\ A_{m\cdot}B_{\cdot j} \end{bmatrix}.
\]
Consequently,
\[
(AB)_{ij} = A_{i\cdot}B_{\cdot j}, \qquad i = 1, 2, \dots, m,\ \ j = 1, 2, \dots, k.
\]
That is, the element of $AB$ in the $i$th row and $j$th column, $(AB)_{ij}$, is the dot product of the $i$th row of $A$ with the $j$th column of $B$.
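Formula (1) is itself an algorithm: build $AB$ one column at a time. The sketch below (our construction, checked against the worked example) also illustrates the entrywise dot-product characterization:

```python
import numpy as np

def matmul_by_columns(A, B):
    """Build AB one column at a time: (AB)_.j = A B_.j, as in formula (1)."""
    m, n = A.shape
    p, k = B.shape
    assert n == p, "the product is defined only when the inner dimensions agree"
    AB = np.zeros((m, k))
    for j in range(k):
        AB[:, j] = A @ B[:, j]
    return AB

A = np.array([[  3,  2,  -1,    5,   7,  3],
              [ -2, 27,  32, -100,   0,  0],
              [-89,  0,  47,   22, -21, 33]])
B = np.array([[1, 0], [-1, 2], [0, 1], [0, 0], [2, -1], [3, 0]])
print(matmul_by_columns(A, B))   # matches the AB computed above
print(A[1, :] @ B[:, 0])         # (AB)_21 = -29: one row dotted with one column
```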

2.1. Elementary Matrices.

We define the elementary unit coordinate matrices in $\mathbb{R}^{m \times n}$ in much the same way as we define the elementary unit coordinate vectors. Given $i \in \{1, 2, \dots, m\}$ and $j \in \{1, 2, \dots, n\}$, the elementary unit coordinate matrix $E_{ij} \in \mathbb{R}^{m \times n}$ is the matrix whose $ij$ entry is $1$, with all other entries taking the value zero. This is a slight abuse of notation, since the notation $E_{ij}$ is supposed to represent the $ij$th entry in the matrix $E$. To avoid confusion, we reserve the use of the letter $E$, when speaking of matrices, to the elementary matrices.

Exercise (multiplication of square elementary matrices): Let $i, k \in \{1, 2, \dots, m\}$ and $j, l \in \{1, 2, \dots, m\}$. Show the following for elementary matrices in $\mathbb{R}^{m \times m}$, first for $m = 3$ and then in general.
\[
(1)\qquad E_{ij}E_{kl} = \begin{cases} E_{il} & \text{if } j = k, \\ 0 & \text{otherwise.} \end{cases}
\]
(2) For any $\alpha \in \mathbb{R}$, if $i \neq j$, then $(I_{m \times m} - \alpha E_{ij})(I_{m \times m} + \alpha E_{ij}) = I_{m \times m}$, so that $(I_{m \times m} + \alpha E_{ij})^{-1} = (I_{m \times m} - \alpha E_{ij})$.

(3) For any $\alpha \in \mathbb{R}$ with $\alpha \neq 0$, $(I + (\alpha^{-1} - 1)E_{ii})(I + (\alpha - 1)E_{ii}) = I$, so that $(I + (\alpha - 1)E_{ii})^{-1} = (I + (\alpha^{-1} - 1)E_{ii})$.

Exercise (elementary permutation matrices): Let $i, l \in \{1, 2, \dots, m\}$ and consider the matrix $P_{il} \in \mathbb{R}^{m \times m}$ obtained from the identity matrix by interchanging its $i$th and $l$th rows. We call such a matrix an elementary permutation matrix. Again we are abusing notation, but again we reserve the letter $P$ for permutation matrices (and later for projection matrices). Show the following are true, first for $m = 3$ and then in general.

(1) $P_{il}P_{il} = I_{m \times m}$, so that $P_{il}^{-1} = P_{il}$.

(2) $P_{il}^T = P_{il}$.

(3) $P_{il} = I - E_{ii} - E_{ll} + E_{il} + E_{li}$.

Exercise (three elementary row operations as matrix multiplication): In this exercise we show that the three elementary row operations can be performed by left multiplication by an invertible matrix. Let $A \in \mathbb{R}^{m \times n}$, $\alpha \in \mathbb{R}$, and let $i, l \in \{1, 2, \dots, m\}$ and $j \in \{1, 2, \dots, n\}$. Show that the following results hold, first for $m = n = 3$ and then in general.

(1) (row interchanges) Given $A \in \mathbb{R}^{m \times n}$, the matrix $P_{il}A$ is the same as the matrix $A$ except with the $i$th and $l$th rows interchanged.

(2) (row multiplication) Given $\alpha \in \mathbb{R}$ with $\alpha \neq 0$, show that the matrix $(I + (\alpha - 1)E_{ii})A$ is the same as the matrix $A$ except with the $i$th row replaced by $\alpha$ times the $i$th row of $A$.

(3) Show that the matrix $E_{ij}A$ is the matrix that contains the $j$th row of $A$ in its $i$th row, with all other entries equal to zero.

(4) (replace a row by itself plus a multiple of another row) Given $\alpha \in \mathbb{R}$ and $i \neq j$, show that the matrix $(I + \alpha E_{ij})A$ is the same as the matrix $A$ except with the $i$th row replaced by itself plus $\alpha$ times the $j$th row of $A$.
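The identities in these exercises are easy to test numerically before proving them. A small sketch (the constructor E(i, j, m) is our own, using 1-based arguments to match the notes):

```python
import numpy as np

def E(i, j, m):
    """Elementary unit coordinate matrix E_ij in R^{m x m} (1-based i, j)."""
    M = np.zeros((m, m))
    M[i - 1, j - 1] = 1.0
    return M

m, alpha = 3, 5.0
I = np.eye(m)

print(np.allclose(E(1, 2, m) @ E(2, 3, m), E(1, 3, m)))   # E_12 E_23 = E_13
print(np.allclose(E(1, 2, m) @ E(3, 1, m), 0))            # j != k gives the zero matrix
print(np.allclose((I - alpha*E(1, 2, m)) @ (I + alpha*E(1, 2, m)), I))  # mutual inverses

P13 = I - E(1, 1, m) - E(3, 3, m) + E(1, 3, m) + E(3, 1, m)  # swap rows 1 and 3
print(np.allclose(P13 @ P13, I))                             # P_13 is its own inverse
```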

2.2. Associativity of matrix multiplication.

Note that the definition of matrix multiplication tells us that this operation is associative. That is, if $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times k}$, and $C \in \mathbb{R}^{k \times s}$, then $AB \in \mathbb{R}^{m \times k}$ so that $(AB)C$ is well defined, and $BC \in \mathbb{R}^{n \times s}$ so that $A(BC)$ is well defined; moreover,
\[
(2)\qquad (AB)C = \begin{bmatrix} (AB)C_{\cdot 1} & (AB)C_{\cdot 2} & \cdots & (AB)C_{\cdot s} \end{bmatrix},
\]
where, for each $l = 1, 2, \dots, s$,
\[
(AB)C_{\cdot l}
= \begin{bmatrix} AB_{\cdot 1} & AB_{\cdot 2} & AB_{\cdot 3} & \cdots & AB_{\cdot k} \end{bmatrix} C_{\cdot l}
= C_{1l}AB_{\cdot 1} + C_{2l}AB_{\cdot 2} + \cdots + C_{kl}AB_{\cdot k}
= A\bigl[C_{1l}B_{\cdot 1} + C_{2l}B_{\cdot 2} + \cdots + C_{kl}B_{\cdot k}\bigr]
= A(BC_{\cdot l}).
\]
Therefore we may write (2) as
\[
(AB)C = \begin{bmatrix} (AB)C_{\cdot 1} & \cdots & (AB)C_{\cdot s} \end{bmatrix}
= \begin{bmatrix} A(BC_{\cdot 1}) & \cdots & A(BC_{\cdot s}) \end{bmatrix}
= A\begin{bmatrix} BC_{\cdot 1} & \cdots & BC_{\cdot s} \end{bmatrix}
= A(BC).
\]
Due to this associativity property, we may dispense with the parentheses and simply write $ABC$ for this triple matrix product. Obviously, longer products are possible.

Exercise: Consider the following matrices:
\[
A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix},\quad
B = \begin{bmatrix} 1 & 2 \\ 3 & 2 \\ 3 & 0 \end{bmatrix},\quad
C = \begin{bmatrix} 3 & 0 & 7 \\ 2 & 0 & 1 \end{bmatrix},\quad
D = \begin{bmatrix} 2 & 3 \\ 2 & 2 \\ 0 & 1 \end{bmatrix},\quad
F = \begin{bmatrix} 0 & 4 & 0 \\ 2 & 3 & 2 \\ 3 & 0 & 2 \end{bmatrix},\quad
G = \begin{bmatrix} 0 & 3 \\ 0 & 8 \\ 5 & 5 \end{bmatrix}.
\]
Using these matrices, which pairs can be multiplied together, and in what order? Which triples can be multiplied together, and in what order (e.g. the triple product $BAC$ is well defined)? Which quadruples can be multiplied together, and in what order? Perform all of these multiplications.

3. Block Matrix Multiplication

To illustrate the general idea of block structures, consider the following matrix:
\[
A = \begin{bmatrix}
3 & -4 & 1 & 1 & 0 & 0 \\
0 & -2 & 2 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 2 & -4 \\
0 & 0 & 0 & 0 & 3 & 1
\end{bmatrix}.
\]

Visual inspection tells us that this matrix has structure. But what is it, and how can it be represented? We re-write the matrix given above, blocking out some key structures:
\[
A = \begin{bmatrix}
3 & -4 & 1 & 1 & 0 & 0 \\
0 & -2 & 2 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 2 & -4 \\
0 & 0 & 0 & 0 & 3 & 1
\end{bmatrix}
= \begin{bmatrix} B & I_{3 \times 3} \\ 0_{2 \times 3} & C \end{bmatrix},
\quad\text{where}\quad
B = \begin{bmatrix} 3 & -4 & 1 \\ 0 & -2 & 2 \\ 0 & 0 & 1 \end{bmatrix},\quad
C = \begin{bmatrix} 1 & 2 & -4 \\ 0 & 3 & 1 \end{bmatrix},
\]
$I_{3 \times 3}$ is the $3 \times 3$ identity matrix, and $0_{2 \times 3}$ is the $2 \times 3$ zero matrix. Having established this structure for the matrix $A$, it can now be exploited in various ways. As a simple example, we consider how it can be used in matrix multiplication. Consider the matrix
\[
M = \begin{bmatrix} 1 & 2 \\ 0 & 1 \\ -1 & 0 \\ 4 & 2 \\ -4 & 3 \\ -2 & 0 \end{bmatrix}.
\]
The matrix product $AM$ is well defined, since $A$ is $5 \times 6$ and $M$ is $6 \times 2$. We show how to compute this matrix product using the structure of $A$. To do this, we must first block decompose $M$ conformally with the block decomposition of $A$. Another way to say this is that we must give $M$ a block structure that allows us to do block matrix multiplication with the blocks of $A$. The correct block structure for $M$ is
\[
M = \begin{bmatrix} X \\ Y \end{bmatrix},
\quad\text{where}\quad
X = \begin{bmatrix} 1 & 2 \\ 0 & 1 \\ -1 & 0 \end{bmatrix}
\quad\text{and}\quad
Y = \begin{bmatrix} 4 & 2 \\ -4 & 3 \\ -2 & 0 \end{bmatrix},
\]
since then $X$ can multiply $\begin{bmatrix} B \\ 0_{2 \times 3} \end{bmatrix}$ and $Y$ can multiply $\begin{bmatrix} I_{3 \times 3} \\ C \end{bmatrix}$.

This gives
\[
AM = \begin{bmatrix} B & I_{3 \times 3} \\ 0_{2 \times 3} & C \end{bmatrix}\begin{bmatrix} X \\ Y \end{bmatrix}
= \begin{bmatrix} BX + Y \\ CY \end{bmatrix}
= \begin{bmatrix}
\begin{bmatrix} 2 & 2 \\ -2 & -2 \\ -1 & 0 \end{bmatrix} + \begin{bmatrix} 4 & 2 \\ -4 & 3 \\ -2 & 0 \end{bmatrix} \\[2pt]
\begin{bmatrix} 4 & 8 \\ -14 & 9 \end{bmatrix}
\end{bmatrix}
= \begin{bmatrix} 6 & 4 \\ -6 & 1 \\ -3 & 0 \\ 4 & 8 \\ -14 & 9 \end{bmatrix}.
\]
Block structured matrices and their matrix product are a very powerful tool in matrix analysis. Consider the matrices $M \in \mathbb{R}^{n \times m}$ and $T \in \mathbb{R}^{m \times k}$ given by
\[
M = \begin{bmatrix} A^{n_1 \times m_1} & B^{n_1 \times m_2} \\ C^{n_2 \times m_1} & D^{n_2 \times m_2} \end{bmatrix}
\quad\text{and}\quad
T = \begin{bmatrix} E^{m_1 \times k_1} & F^{m_1 \times k_2} & G^{m_1 \times k_3} \\ H^{m_2 \times k_1} & J^{m_2 \times k_2} & K^{m_2 \times k_3} \end{bmatrix},
\]
where $n = n_1 + n_2$, $m = m_1 + m_2$, and $k = k_1 + k_2 + k_3$. The block structures for the matrices $M$ and $T$ are said to be conformal with respect to matrix multiplication, since
\[
MT = \begin{bmatrix} AE + BH & AF + BJ & AG + BK \\ CE + DH & CF + DJ & CG + DK \end{bmatrix}.
\]
Similarly, one can conformally block structure matrices with respect to matrix addition (how is this done?).
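The blockwise computation above is easy to check numerically. A sketch (NumPy's np.block assembles a matrix from blocks) confirming that the block formula reproduces the full product:

```python
import numpy as np

B = np.array([[3, -4, 1],
              [0, -2, 2],
              [0,  0, 1]])
C = np.array([[1, 2, -4],
              [0, 3,  1]])
A = np.block([[B, np.eye(3)],
              [np.zeros((2, 3)), C]])      # the 5 x 6 matrix A above

X = np.array([[1, 2], [0, 1], [-1, 0]])
Y = np.array([[4, 2], [-4, 3], [-2, 0]])
M = np.vstack([X, Y])

AM_blocks = np.vstack([B @ X + Y, C @ Y])  # the block formula [BX + Y; CY]
print(np.allclose(A @ M, AM_blocks))       # True
print(AM_blocks)                           # AM computed blockwise
```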

Exercise: Consider the matrix
\[
H = \begin{bmatrix}
1 & 2 & 3 & 2 & 0 & 0 & 0 & 0 \\
1 & 3 & 0 & 1 & 0 & 0 & 0 & 0 \\
1 & 2 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 4 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 2 & 7 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 2 & 3 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 8 & 5
\end{bmatrix}.
\]
Does $H$ have a natural block structure that might be useful in performing a matrix-matrix multiply, and if so, describe it by giving the blocks? Describe a conformal block decomposition of the matrix
\[
M = \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \\ 5 \\ 6 \\ 2 \\ 3 \end{bmatrix}
\]
that would be useful in performing the matrix product $HM$. Compute the matrix product $HM$ using this conformal decomposition.

Exercise: Let $T \in \mathbb{R}^{m \times n}$ with $T \neq 0$, and let $I$ be the $m \times m$ identity matrix. Consider the block structured matrix $A = \begin{bmatrix} I & T \end{bmatrix}$.
(i) If $A \in \mathbb{R}^{k \times s}$, what are $k$ and $s$?
(ii) Construct a non-zero $s \times n$ matrix $B$ such that $AB = 0$.

The examples given above illustrate how block matrix multiplication works and why it might be useful. One of the most powerful uses of block structures is in understanding and implementing standard matrix factorizations or reductions.

4. Gauss-Jordan Elimination Matrices and Reduction to Reduced Echelon Form

In this section we show that Gauss-Jordan elimination can be represented as a consequence of left multiplication by a specially designed matrix called a Gauss-Jordan elimination matrix. Consider the vector $v \in \mathbb{R}^m$ block decomposed as
\[
v = \begin{bmatrix} a \\ \alpha \\ b \end{bmatrix},
\]
where $a \in \mathbb{R}^s$, $\alpha \in \mathbb{R}$, and $b \in \mathbb{R}^t$, with $m = s + 1 + t$. In this vector we refer to the $\alpha$ entry as the pivot and assume that $\alpha \neq 0$. We wish to determine a matrix $G$ such that $Gv = e_{s+1}$, where, for $j = 1, \dots, n$, $e_j$ is the unit coordinate vector having a one in the $j$th position and zeros elsewhere. We claim that the matrix
\[
G = \begin{bmatrix} I_{s \times s} & -\alpha^{-1}a & 0 \\ 0 & \alpha^{-1} & 0 \\ 0 & -\alpha^{-1}b & I_{t \times t} \end{bmatrix}
\]
does the trick. Indeed,
\[
(3)\qquad Gv = \begin{bmatrix} I_{s \times s} & -\alpha^{-1}a & 0 \\ 0 & \alpha^{-1} & 0 \\ 0 & -\alpha^{-1}b & I_{t \times t} \end{bmatrix}\begin{bmatrix} a \\ \alpha \\ b \end{bmatrix}
= \begin{bmatrix} a - a \\ 1 \\ -b + b \end{bmatrix}
= \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = e_{s+1}.
\]
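The matrix $G$ differs from the identity only in one column, which makes it cheap to build. Here is a minimal sketch (the function name gjem is ours; it uses 0-based pivot positions, so position s corresponds to $e_{s+1}$ in the notes):

```python
import numpy as np

def gjem(v, s):
    """Gauss-Jordan elimination matrix G with G v = e_{s+1}, as in (3).

    v is partitioned as (a, alpha, b) with a = v[:s], alpha = v[s]
    (the pivot, assumed nonzero), and b = v[s+1:]."""
    v = np.asarray(v, dtype=float)
    alpha = v[s]
    G = np.eye(len(v))
    G[:, s] = -v / alpha        # column s carries -a/alpha above and -b/alpha below
    G[s, s] = 1.0 / alpha       # ... and 1/alpha in the pivot position
    return G

v = np.array([2.0, 3.0, -2.0, 5.0])
print(gjem(v, 1) @ v)           # [0. 1. 0. 0.] = e_2
```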

The matrix $G$ is called a Gauss-Jordan elimination matrix, or GJEM for short. Note that $G$ is invertible, since
\[
G^{-1} = \begin{bmatrix} I & a & 0 \\ 0 & \alpha & 0 \\ 0 & b & I \end{bmatrix}.
\]
Moreover, for any vector of the form $w = \begin{bmatrix} x \\ 0 \\ y \end{bmatrix}$, where $x \in \mathbb{R}^s$ and $y \in \mathbb{R}^t$, we have $Gw = w$.

The GJEM matrices perform precisely the operations required in order to execute Gauss-Jordan elimination. That is, each elimination step can be realized as left multiplication of the augmented matrix by the appropriate GJEM. For example, consider the linear system
\[
\begin{aligned}
2x_1 + x_2 + 3x_3 &= 5 \\
2x_1 + 2x_2 + 4x_3 &= 8 \\
4x_1 + 2x_2 + 7x_3 &= 11 \\
5x_1 + 3x_2 + 4x_3 &= 10
\end{aligned}
\]
and its associated augmented matrix
\[
A = \begin{bmatrix} 2 & 1 & 3 & 5 \\ 2 & 2 & 4 & 8 \\ 4 & 2 & 7 & 11 \\ 5 & 3 & 4 & 10 \end{bmatrix}.
\]
The first step of Gauss-Jordan elimination is to transform the first column of this augmented matrix into the first unit coordinate vector. The procedure described in (3) can be employed for this purpose. In this case the pivot is the $(1,1)$ entry of the augmented matrix, and so
\[
s = 0,\quad a \text{ is void},\quad \alpha = 2,\quad t = 3,\quad\text{and}\quad b = \begin{bmatrix} 2 \\ 4 \\ 5 \end{bmatrix},
\]
which gives
\[
G_1 = \begin{bmatrix} 1/2 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ -2 & 0 & 1 & 0 \\ -5/2 & 0 & 0 & 1 \end{bmatrix}.
\]
Multiplying these two matrices gives
\[
G_1 A = \begin{bmatrix} 1/2 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ -2 & 0 & 1 & 0 \\ -5/2 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 2 & 1 & 3 & 5 \\ 2 & 2 & 4 & 8 \\ 4 & 2 & 7 & 11 \\ 5 & 3 & 4 & 10 \end{bmatrix}
= \begin{bmatrix} 1 & 1/2 & 3/2 & 5/2 \\ 0 & 1 & 1 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 1/2 & -7/2 & -5/2 \end{bmatrix}.
\]

We now repeat this process to transform the second column of this matrix into the second unit coordinate vector. In this case the $(2,2)$ position becomes the pivot, so that
\[
s = 1,\quad a = (1/2),\quad \alpha = 1,\quad t = 2,\quad\text{and}\quad b = \begin{bmatrix} 0 \\ 1/2 \end{bmatrix},
\]
yielding
\[
G_2 = \begin{bmatrix} 1 & -1/2 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & -1/2 & 0 & 1 \end{bmatrix}.
\]
Again, multiplying these two matrices gives
\[
G_2 G_1 A = \begin{bmatrix} 1 & -1/2 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & -1/2 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 1/2 & 3/2 & 5/2 \\ 0 & 1 & 1 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 1/2 & -7/2 & -5/2 \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & -4 & -4 \end{bmatrix}.
\]
Repeating the process on the third column transforms it into the third unit coordinate vector. In this case the pivot is the $(3,3)$ entry, so that
\[
s = 2,\quad a = \begin{bmatrix} 1 \\ 1 \end{bmatrix},\quad \alpha = 1,\quad t = 1,\quad\text{and}\quad b = (-4),
\]
yielding
\[
G_3 = \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 4 & 1 \end{bmatrix}.
\]
Multiplying these matrices gives
\[
G_3 G_2 G_1 A = \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 4 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & -4 & -4 \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix},
\]
which is in reduced echelon form. Therefore the system is consistent, and the unique solution is
\[
x = \begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix}.
\]
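The whole reduction above can be replayed mechanically by re-using the gjem helper from the earlier sketch, taking the pivot column of the current partially reduced matrix at each step:

```python
import numpy as np

def gjem(v, s):
    v = np.asarray(v, dtype=float)
    alpha = v[s]
    G = np.eye(len(v))
    G[:, s] = -v / alpha
    G[s, s] = 1.0 / alpha
    return G

A = np.array([[2., 1., 3., 5.],     # augmented matrix of the example system
              [2., 2., 4., 8.],
              [4., 2., 7., 11.],
              [5., 3., 4., 10.]])

G1 = gjem(A[:, 0], 0)
G2 = gjem((G1 @ A)[:, 1], 1)
G3 = gjem((G2 @ G1 @ A)[:, 2], 2)
print(G3 @ G2 @ G1 @ A)
# [[ 1.  0.  0.  0.]
#  [ 0.  1.  0.  2.]
#  [ 0.  0.  1.  1.]
#  [ 0.  0.  0.  0.]]   -> x = (0, 2, 1)
```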

Observe that
\[
G_3 G_2 G_1 = \begin{bmatrix} 3 & -1/2 & -1 & 0 \\ 1 & 1 & -1 & 0 \\ -2 & 0 & 1 & 0 \\ -10 & -1/2 & 4 & 1 \end{bmatrix}
\]
and that
\[
(G_3 G_2 G_1)^{-1} = G_1^{-1} G_2^{-1} G_3^{-1}
= \begin{bmatrix} 2 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ 4 & 0 & 1 & 0 \\ 5 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 1/2 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1/2 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & -4 & 1 \end{bmatrix}
= \begin{bmatrix} 2 & 1 & 3 & 0 \\ 2 & 2 & 4 & 0 \\ 4 & 2 & 7 & 0 \\ 5 & 3 & 4 & 1 \end{bmatrix}.
\]
In particular, reduced Gauss-Jordan form can always be achieved by multiplying the augmented matrix on the left by an invertible matrix which can be written as a product of Gauss-Jordan elimination matrices.

Exercise: What are the Gauss-Jordan elimination matrices that transform the vector $(2, 3, -2, 5)^T$ into $e_j$ for $j = 1, 2, 3, 4$, and what are the inverses of these matrices?

5. The Four Fundamental Subspaces and Echelon Form

Let $A \in \mathbb{R}^{m \times n}$. We associate with $A$ its four fundamental subspaces:
\[
\begin{aligned}
\mathrm{Ran}(A) &:= \{Ax \mid x \in \mathbb{R}^n\}, & \mathrm{Null}(A) &:= \{x \mid Ax = 0\},\\
\mathrm{Ran}(A^T) &:= \{A^T y \mid y \in \mathbb{R}^m\}, & \mathrm{Null}(A^T) &:= \{y \mid A^T y = 0\}.
\end{aligned}
\]
In the text it is shown that
\[
(4)\qquad \mathrm{rank}(A) + \mathrm{nullity}(A) = n
\quad\text{and}\quad
\mathrm{rank}(A^T) + \mathrm{nullity}(A^T) = m,
\]
where
\[
(5)\qquad
\begin{aligned}
\mathrm{rank}(A) &:= \dim \mathrm{Ran}(A), & \mathrm{nullity}(A) &:= \dim \mathrm{Null}(A),\\
\mathrm{rank}(A^T) &:= \dim \mathrm{Ran}(A^T), & \mathrm{nullity}(A^T) &:= \dim \mathrm{Null}(A^T).
\end{aligned}
\]
Observe that
\[
\mathrm{Null}(A) = \{x \mid Ax = 0\}
= \{x \mid A_{i\cdot}x = 0,\ i = 1, 2, \dots, m\}
= \{A_{1\cdot}, A_{2\cdot}, \dots, A_{m\cdot}\}^{\perp}
= \mathrm{Span}\{A_{1\cdot}, A_{2\cdot}, \dots, A_{m\cdot}\}^{\perp}
= \mathrm{Ran}(A^T)^{\perp}.
\]

Since for any subspace $S \subset \mathbb{R}^n$ we have $(S^{\perp})^{\perp} = S$, we obtain
\[
(6)\qquad \mathrm{Null}(A)^{\perp} = \mathrm{Ran}(A^T)
\quad\text{and}\quad
\mathrm{Null}(A^T)^{\perp} = \mathrm{Ran}(A).
\]
The equivalences in (6) are called the Fundamental Theorem of the Alternative. By combining this with the rank-plus-nullity-equals-dimension statement in (4), we find that
\[
\mathrm{rank}(A^T) = \dim \mathrm{Ran}(A^T) = \dim \mathrm{Null}(A)^{\perp} = n - \mathrm{nullity}(A) = \mathrm{rank}(A).
\]
Consequently, the row rank of a matrix equals the column rank of a matrix; i.e., the dimensions of the row and column spaces of a matrix are the same!

These observations have consequences for computing bases for the four fundamental subspaces of a matrix using reduced echelon form. Again consider $A \in \mathbb{R}^{m \times n}$ and form the augmented matrix $[A\ \ I_m]$. Apply the necessary sequence of Gauss-Jordan eliminations to put this matrix in reduced echelon form. If a non-zero pivot is found at every step (so that the pivots occupy the first $k$ columns), then the reduced echelon form looks like
\[
(7)\qquad \begin{bmatrix} I_k & T & R_1 & R_2 \\ 0 & 0 & F & I_{(m-k)} \end{bmatrix},
\]
where $T \in \mathbb{R}^{k \times (n-k)}$, $R_1 \in \mathbb{R}^{k \times k}$, $R_2 \in \mathbb{R}^{k \times (m-k)}$, and $F \in \mathbb{R}^{(m-k) \times k}$. In particular, $\mathrm{rank}(A) = k$ and $\mathrm{nullity}(A) = n - k$.

A basis for $\mathrm{Ran}(A)$: Observe that (7) implies that
\[
GA = \begin{bmatrix} I_k & T \\ 0 & 0 \end{bmatrix},
\quad\text{where}\quad
G = \begin{bmatrix} R_1 & R_2 \\ F & I_{(m-k)} \end{bmatrix}
\]
is an invertible matrix. This representation tells us that $GA_{\cdot j} = e_j$ for $j = 1, 2, \dots, k$. Therefore the vectors $A_{\cdot j}$, $j = 1, 2, \dots, k$, are linearly independent (since $G$ is invertible and the vectors $e_j$, $j = 1, 2, \dots, k$, are linearly independent). Since $\mathrm{rank}(A) = k$, these vectors necessarily form a basis for $\mathrm{Ran}(A)$.

The reduced echelon form (7) provides a second way to obtain a basis for $\mathrm{Ran}(A)$. To see this, observe that if $b \in \mathrm{Ran}(A)$, then we can obtain a solution to $Ax = b$. In particular,
\[
GAx = Gb = \begin{bmatrix} R_1 & R_2 \\ F & I_{(m-k)} \end{bmatrix}\begin{bmatrix} b_1 \\ b_2 \end{bmatrix},
\]
where $b = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$, with $b_1 \in \mathbb{R}^k$ and $b_2 \in \mathbb{R}^{(m-k)}$, is a decomposition of $b$ that is conformal to the block structure of $G$.

But the bottom block of $GAx = \begin{bmatrix} I_k & T \\ 0 & 0 \end{bmatrix}x$ is zero, so
\[
0 = \begin{bmatrix} F & I_{(m-k)} \end{bmatrix}\begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = Fb_1 + b_2,
\]
or, equivalently, $b_2 = -Fb_1$. This is a necessary and sufficient condition for $b$ to be an element of $\mathrm{Ran}(A)$. Therefore, $b \in \mathrm{Ran}(A)$ if and only if there is a vector $b_1 \in \mathbb{R}^k$ such that
\[
b = \begin{bmatrix} I_k \\ -F \end{bmatrix} b_1.
\]
Since the matrix $\begin{bmatrix} I_k \\ -F \end{bmatrix}$ has $k = \mathrm{rank}(A)$ linearly independent columns, these columns must form a basis for $\mathrm{Ran}(A)$.

Basis for $\mathrm{Null}(A)$: Since $G$ is invertible, we know that $Ax = 0$ if and only if $GAx = 0$, or, equivalently, $[I_k\ \ T]x = 0$. Since $\mathrm{nullity}(A) = n - k$, we need only find $n - k$ linearly independent vectors $x$ such that $[I_k\ \ T]x = 0$. For this, observe that
\[
\begin{bmatrix} I_k & T \end{bmatrix}\begin{bmatrix} -T \\ I_{(n-k)} \end{bmatrix} = -T + T = 0,
\]
where the matrix $\begin{bmatrix} -T \\ I_{(n-k)} \end{bmatrix}$ has $n - k$ linearly independent columns. Therefore the columns of this matrix necessarily form a basis for $\mathrm{Null}(A)$.

Basis for $\mathrm{Ran}(A^T)$: Recall that $\mathrm{rank}(A^T) = \mathrm{rank}(A) = k$. Therefore, to obtain a basis for $\mathrm{Ran}(A^T)$, we need only obtain $k$ linearly independent vectors in the row space of $A$. But, by construction, the matrix $\begin{bmatrix} I_k & T \\ 0 & 0 \end{bmatrix}$ is row equivalent to $A$, with the rows of $[I_k\ \ T]$ necessarily linearly independent. Consequently, the columns of the matrix
\[
\begin{bmatrix} I_k & T \end{bmatrix}^T = \begin{bmatrix} I_k \\ T^T \end{bmatrix}
\]
necessarily form a basis for the range of $A^T$.

Basis for $\mathrm{Null}(A^T)$: By (6), $\mathrm{Null}(A^T) = \mathrm{Ran}(A)^{\perp}$. Therefore, to obtain a basis for $\mathrm{Null}(A^T)$, we need only find $m - k$ linearly independent vectors in $\mathrm{Ran}(A)^{\perp}$. But we have already seen that the columns of the matrix $\begin{bmatrix} I_k \\ -F \end{bmatrix}$ form a basis for $\mathrm{Ran}(A)$, and
\[
\begin{bmatrix} I_k \\ -F \end{bmatrix}^T \begin{bmatrix} F^T \\ I_{(m-k)} \end{bmatrix}
= \begin{bmatrix} I_k & -F^T \end{bmatrix}\begin{bmatrix} F^T \\ I_{(m-k)} \end{bmatrix}
= F^T - F^T = 0,
\]
with the columns of $\begin{bmatrix} F^T \\ I_{(m-k)} \end{bmatrix}$ necessarily linearly independent. Therefore the columns of this matrix must form a basis for $\mathrm{Null}(A^T)$.

Recap: Putting this all together, we find that if $A \in \mathbb{R}^{m \times n}$ is such that $[A\ \ I_m]$ has reduced echelon form
\[
(8)\qquad \begin{bmatrix} I_k & T & R_1 & R_2 \\ 0 & 0 & F & I_{(m-k)} \end{bmatrix},
\]
with $T \in \mathbb{R}^{k \times (n-k)}$, $R_1 \in \mathbb{R}^{k \times k}$, and $F \in \mathbb{R}^{(m-k) \times k}$,

then bases for $\mathrm{Ran}(A)$, $\mathrm{Null}(A)$, $\mathrm{Ran}(A^T)$, and $\mathrm{Null}(A^T)$ are given by the columns of the matrices
\[
(9)\qquad
\begin{bmatrix} I_k \\ -F \end{bmatrix},\quad
\begin{bmatrix} -T \\ I_{(n-k)} \end{bmatrix},\quad
\begin{bmatrix} I_k \\ T^T \end{bmatrix},\quad\text{and}\quad
\begin{bmatrix} F^T \\ I_{(m-k)} \end{bmatrix},
\]
respectively.

Example: Consider the matrix
\[
A = \begin{bmatrix} 1 & 0 & 4 & -4 \\ 1 & 1 & 7 & -3 \\ 2 & 3 & 17 & -5 \end{bmatrix}.
\]
Reducing $[A\ \ I_3]$ to reduced echelon form gives
\[
\begin{bmatrix} 1 & 0 & 4 & -4 & 1 & 0 & 0 \\ 0 & 1 & 3 & 1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & -3 & 1 \end{bmatrix}.
\]
Using the notation from (8), we have $m = 3$, $n = 4$, $k = 2$, $n - k = 2$, $m - k = 1$,
\[
T = \begin{bmatrix} 4 & -4 \\ 3 & 1 \end{bmatrix},\quad
R_1 = \begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix},\quad
R_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix},\quad\text{and}\quad
F = \begin{bmatrix} 1 & -3 \end{bmatrix}.
\]
This may be easier to see by conformally partitioning the reduced echelon form of $[A\ \ I_3]$ as
\[
\begin{bmatrix} I_k & T & R_1 & R_2 \\ 0 & 0 & F & I_{(m-k)} \end{bmatrix}
= \left[\begin{array}{cc|cc|cc|c}
1 & 0 & 4 & -4 & 1 & 0 & 0 \\
0 & 1 & 3 & 1 & -1 & 1 & 0 \\ \hline
0 & 0 & 0 & 0 & 1 & -3 & 1
\end{array}\right].
\]
Then the bases of the four fundamental subspaces are given as the columns of the following matrices:
\[
\mathrm{Ran}(A):\ \begin{bmatrix} I_k \\ -F \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 3 \end{bmatrix},\qquad
\mathrm{Null}(A):\ \begin{bmatrix} -T \\ I_{(n-k)} \end{bmatrix} = \begin{bmatrix} -4 & 4 \\ -3 & -1 \\ 1 & 0 \\ 0 & 1 \end{bmatrix},
\]
\[
\mathrm{Ran}(A^T):\ \begin{bmatrix} I_k \\ T^T \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 4 & 3 \\ -4 & 1 \end{bmatrix},\qquad
\mathrm{Null}(A^T):\ \begin{bmatrix} F^T \\ I_{(m-k)} \end{bmatrix} = \begin{bmatrix} 1 \\ -3 \\ 1 \end{bmatrix}.
\]
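For experimenting with these constructions, here is a small sketch using SymPy's exact row reduction (SymPy is our choice, not the notes'; Matrix.rref() returns the reduced row echelon form together with the pivot-column indices). The helper name is ours. It assumes, as in (8), that the pivots of $A$ occur in its first $k$ columns, and it returns the pivot-column basis of $\mathrm{Ran}(A)$ (the first construction in the text) rather than the $[I_k;\,-F]$ form.

```python
import sympy as sp

def four_subspace_bases(A):
    """Bases for Ran(A), Null(A), Ran(A^T), Null(A^T) from rref([A I]).

    Assumes the pivots of A occur in its first k columns, as in (8)."""
    A = sp.Matrix(A)
    m, n = A.shape
    R, pivots = sp.Matrix.hstack(A, sp.eye(m)).rref()
    k = sum(1 for p in pivots if p < n)      # rank(A)
    T = R[:k, k:n]                           # from the [I_k  T] block
    G_bottom = R[k:, n:]                     # rows y with y^T A = 0
    ran_A   = A[:, :k]                       # pivot columns of A
    null_A  = sp.Matrix.vstack(-T, sp.eye(n - k))
    ran_AT  = sp.Matrix.vstack(sp.eye(k), T.T)
    null_AT = G_bottom.T
    return ran_A, null_A, ran_AT, null_AT

bases = four_subspace_bases([[1, 0, 4, -4], [1, 1, 7, -3], [2, 3, 17, -5]])
for M in bases:
    print(M)   # e.g. Null(A) prints as Matrix([[-4, 4], [-3, -1], [1, 0], [0, 1]])
```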

Example: Consider the matrix
\[
A = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 3 & 5 \\ 2 & 5 & 8 \end{bmatrix}.
\]
It has reduced echelon form
\[
G[A\ \ I] = \begin{bmatrix} 1 & 0 & -1 & 3 & -2 & 0 \\ 0 & 1 & 2 & -1 & 1 & 0 \\ 0 & 0 & 0 & -1 & -1 & 1 \end{bmatrix}.
\]
Show that the bases of the four fundamental subspaces are given as the columns of the following matrices, and identify which subspace goes with which matrix:
\[
\begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix},\quad
\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{bmatrix},\quad
\begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix},\quad
\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 2 \end{bmatrix}.
\]
It may happen that the reduced echelon form of $[A\ \ I]$ does not take the form given in (8) if a zero pivot occurs. In this case, formulas similar to the ones given in (9) are possible and can be described in a straightforward manner, but the notational overhead is quite high. Nonetheless, it is helpful to see what is possible; therefore we consider one more reduced echelon structure below. The proof is left to the reader.

Suppose $A \in \mathbb{R}^{m \times n}$ is such that $[A\ \ I_m]$ has reduced echelon form
\[
(10)\qquad
\begin{bmatrix}
I_{k_1} & T_1 & 0 & T_2 & R_{11} & R_{12} \\
0 & 0 & I_{k_2} & T_{22} & R_{21} & R_{22} \\
0 & 0 & 0 & 0 & F & I_{(m-k)}
\end{bmatrix},
\]
where $k := k_1 + k_2$ is the rank of $A$, $n - k = t_1 + t_2 =: t$ is the nullity of $A$, $T_1 \in \mathbb{R}^{k_1 \times t_1}$, $T_2 \in \mathbb{R}^{k_1 \times t_2}$, and $T_{22} \in \mathbb{R}^{k_2 \times t_2}$. Then the bases of the four fundamental subspaces are given as the columns of the following matrices:
\[
\mathrm{Ran}(A):\ \begin{bmatrix} I_k \\ -F \end{bmatrix},\qquad
\mathrm{Ran}(A^T):\ \begin{bmatrix} I_{k_1} & 0 \\ T_1^T & 0 \\ 0 & I_{k_2} \\ T_2^T & T_{22}^T \end{bmatrix},\qquad
\mathrm{Null}(A):\ \begin{bmatrix} -T_1 & -T_2 \\ I_{t_1} & 0 \\ 0 & -T_{22} \\ 0 & I_{t_2} \end{bmatrix},\qquad
\mathrm{Null}(A^T):\ \begin{bmatrix} F^T \\ I_{(m-k)} \end{bmatrix}.
\]
Example: Consider the matrix
\[
A = \begin{bmatrix}
1 & 0 & 5 & 0 & 4 \\
0 & 1 & 2 & 0 & 4 \\
0 & 0 & 0 & 1 & 3 \\
0 & 2 & 4 & 0 & 8
\end{bmatrix}.
\]

The reduced echelon form of $[A\ \ I_4]$ is the matrix
\[
\begin{bmatrix}
1 & 0 & 5 & 0 & 4 & 1 & 0 & 0 & 0 \\
0 & 1 & 2 & 0 & 4 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 3 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & 1
\end{bmatrix}.
\]
Show that the bases of the four fundamental subspaces are given as the columns of the following matrices, and identify which subspace goes with which matrix:
\[
\begin{bmatrix} -5 & -4 \\ -2 & -4 \\ 1 & 0 \\ 0 & -3 \\ 0 & 1 \end{bmatrix},\quad
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 2 & 0 \end{bmatrix},\quad
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 5 & 2 & 0 \\ 0 & 0 & 1 \\ 4 & 4 & 3 \end{bmatrix},\quad
\begin{bmatrix} 0 \\ -2 \\ 0 \\ 1 \end{bmatrix}.
\]
Hint: Identify all of the pieces in the reduced echelon form (10) from the partitioning
\[
\begin{bmatrix}
I_{k_1} & T_1 & 0 & T_2 & R_{11} & R_{12} \\
0 & 0 & I_{k_2} & T_{22} & R_{21} & R_{22} \\
0 & 0 & 0 & 0 & F & I_{(m-k)}
\end{bmatrix}
= \left[\begin{array}{cc|c|c|c|ccc|c}
1 & 0 & 5 & 0 & 4 & 1 & 0 & 0 & 0 \\
0 & 1 & 2 & 0 & 4 & 0 & 1 & 0 & 0 \\ \hline
0 & 0 & 0 & 1 & 3 & 0 & 0 & 1 & 0 \\ \hline
0 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & 1
\end{array}\right],
\]
so that $k_1 = 2$, $k_2 = 1$, $t_1 = 1$, $t_2 = 1$, $T_1 = (5, 2)^T$, $T_2 = (4, 4)^T$, $T_{22} = (3)$, and $F = \begin{bmatrix} 0 & -2 & 0 \end{bmatrix}$.
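A quick numerical sanity check of this last exercise is straightforward: the claimed null-space matrices must be annihilated by $A$ and $A^T$, and the rank must equal $k_1 + k_2$. A minimal sketch:

```python
import numpy as np

A = np.array([[1, 0, 5, 0, 4],
              [0, 1, 2, 0, 4],
              [0, 0, 0, 1, 3],
              [0, 2, 4, 0, 8]])

N  = np.array([[-5, -4], [-2, -4], [1, 0], [0, -3], [0, 1]])  # candidate Null(A) basis
NT = np.array([[0], [-2], [0], [1]])                          # candidate Null(A^T) basis

print(np.allclose(A @ N, 0))       # True: the columns lie in Null(A)
print(np.allclose(A.T @ NT, 0))    # True: the column lies in Null(A^T)
print(np.linalg.matrix_rank(A))    # 3 = k_1 + k_2
```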