Chapter 3. Vector Spaces and Subspaces. Po-Ning Chen, Professor. Department of Electrical and Computer Engineering. National Chiao Tung University


Chapter 3. Vector Spaces and Subspaces. Po-Ning Chen, Professor. Department of Electrical and Computer Engineering, National Chiao Tung University, Hsin Chu, Taiwan 30010, R.O.C.

3.1 Spaces of vectors 3-1
What is a space? Answer: an area that is free to be occupied. What is the space of vectors? Answer: an area that is free to be occupied by vectors. Example: v = [v_1; v_2] can occupy a 2-dimensional plane.
What is a vector space? Answer: A vector space is not just a space occupied by vectors. It is a mathematical structure formed by vectors together with rules (axioms) for handling these vectors. In other words, this term is reserved specifically for this use in mathematics!

3.1 Spaces of vectors 3-2
Definition (Vector space): A vector space is a space of vectors in V and scalars in F (which must be a field; the definition of a field is on Slide 3-4) with two (binary) operations,
  vector addition: v + w
  scalar-to-vector multiplication: a v
that satisfy the following axioms:
1. Commutativity of vector addition: v + w = w + v
2. Associativity of vector addition: u + (v + w) = (u + v) + w
3. Identity element of vector addition: there exists a unique zero vector 0 such that v + 0 = v for every vector v in the space
4. Inverse elements of vector addition: for every v, there exists a unique −v such that v + (−v) = 0
5. Identity element of scalar(-to-vector) multiplication: there exists a unique scalar 1 such that 1 v = v

3.1 Spaces of vectors 3-3
6. Compatibility of scalar(-to-vector) multiplication with scalar (or field) multiplication: a(b v) = (ab) v
7. Distributivity of scalar(-to-vector) multiplication with respect to vector addition: a(v + w) = a v + a w
8. Distributivity of scalar(-to-vector) multiplication with respect to scalar (or field) addition: (a + b) v = a v + b v
Examples of vector spaces:
1. V = R^n and F = R (with the usual real-number addition and multiplication).
2. V = the set of all real-valued functions f(t) and F = R. Note: we only need scalar and vector additions and scalar-to-vector multiplication, and {f(t)} satisfies this demand.
3. V = the set of all 2 × 2 matrices and F = R (with the usual matrix addition and scalar-to-matrix multiplication).

3.1 Spaces of vectors 3-4
A field is a set F with two binary operations, + and ·, such that the following axioms hold:
1. Closure under + and ·: a + b ∈ F and a · b ∈ F.
2. Associativity of + and ·: a + (b + c) = (a + b) + c and a · (b · c) = (a · b) · c.
3. Commutativity of + and ·: a + b = b + a and a · b = b · a.
4. Existence of + and · identity elements: a + 0 = a and a · 1 = a (and 0 ≠ 1).
5. Existence of + and · inverse elements: a + (−a) = 0 and, for any a ≠ 0, a · a^{-1} = 1, where 0 has no multiplicative inverse.
6. Distributivity of · over +: a · (b + c) = a · b + a · c.

3.1 Spaces of vectors 3-5
Some important notes:
A vector is not necessarily of the form [v_1; ...; v_n].
A vector (in its most general form) is an element of a vector space.
Examples of vector spaces:
1. V = R^n and F = R. vector = [v_1; ...; v_n]
2. V = the set of all real-valued functions f(t) and F = R. vector = f(t)
3. V = the set of all 2 × 2 matrices and F = R. vector = [v_{1,1} v_{1,2}; v_{2,1} v_{2,2}]
In fact, it makes no difference whether we keep a (column) vector in R^4 or rearrange it into the 2 × 2 form [v_{1,1} v_{1,2}; v_{2,1} v_{2,2}], as long as the operations are defined term-wise.

3.1 Sub-vector space of a vector space 3-6
In terminology, we usually say sub(-vector)space of a (vector) space for simplicity.
Definition (Subspace): A sub(-vector)space S of a (vector) space V satisfies
1. all the vectors it contains belong to the vector space V, and
2. the eight axioms/rules.
Equivalently,
  v ∈ S and w ∈ S  ⟹  v + w ∈ S, and
  v ∈ S and a ∈ F  ⟹  a v ∈ S.
These two equivalent conditions imply the validity of Conditions 1 and 2.
Exercise. Prove that the zero vector must belong to any subspace S of the three-dimensional real vector space R^3 with field R.
Proof: a v must belong to S for every a ∈ F = R. The proof is completed by taking a = 0.

3.1 Sub-vector space of a vector space 3-7
Examples of subspaces of R^3:
  The single point 0. (A subspace always contains the zero vector.)
  Any line passing through the origin.
  Any plane passing through the origin.
  Choose any two vectors v and w; the set consisting of all linear combinations of the two vectors, i.e., a v + b w.
The last example is exactly equivalent to the two equivalent conditions defining a subspace, i.e.,
  v ∈ S and w ∈ S  ⟹  v + w ∈ S, and
  v ∈ S and a ∈ F  ⟹  a v ∈ S.
These conditions will be useful in proving the legitimacy of a subspace!

3.1 Column space 3-8
Following the last example on the previous slide:
  S = { b ∈ R^3 : c_1 v + c_2 w = b for some c_1, c_2 ∈ R }
    = { b ∈ R^3 : [v w] [c_1; c_2] = b for some c_1, c_2 ∈ R }
    = { b ∈ R^3 : A x = b for some x ∈ R^2 },   where A = [v w].
Every subspace of R^m can be written in this form:
  { b ∈ R^m : A_{m×n} x_{n×1} = b_{m×1} for some x ∈ R^n }.
Since it consists of all linear combinations of the column vectors of A, it is also termed the column space of A (denoted by C(A)).
So, given A, a subspace is defined (through our old friend Ax = b).
If A_{m×m} is invertible, S = V = R^m. Example: A = I.
If A_{m×m} is non-invertible, S ⊊ V = R^m. Example: A = the all-zero matrix.
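Since membership in C(A) is exactly solvability of Ax = b, it can be spot-checked numerically: b ∈ C(A) if and only if appending b to A does not increase the rank. Below is a minimal MATLAB/Octave sketch of this test; the particular A and b are made up purely for illustration.

A = [1 0; 2 1; 0 0];          % illustrative 3-by-2 matrix
b = [3; 4; 0];                % candidate right-hand side
if rank([A, b]) == rank(A)    % b adds nothing new to the column space
    disp('b lies in C(A): Ax = b is solvable.')
else
    disp('b is not in C(A): Ax = b has no solution.')
end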

3.1 Column space 3-9
Ax = b is solvable if, and only if, b ∈ C(A).
C(A) is sometimes viewed as the subspace spanned by the column vectors of A.

3.1 Column space 3-10
Examples of C(A):
  If A_{n×n} is invertible, C(A) = R^n.
  If A is n × 1, C(A) is a line in the space R^n.
  If A = [a a ⋯ a]_{n×n} (all columns equal to a), C(A) is a line in the space R^n.
Exercise: Does there exist an A such that
  C(A) = { b ∈ R^3 : Ax = b for some x ∈ R^2 }
is an empty set?
(Hint: Which element must be contained in every subspace? The subspace consisting of only this element is denoted by Z and is called the zero subspace.)

3.1 Column space 3-11
Exercise: If C(A_{n×n}) = R^n, is it possible that Ax = b has no solution for some b ∈ R^n?
(Hint: C(A) = { b ∈ R^n : A_{n×n} x_{n×1} = b_{n×1} for some x ∈ R^n } is exactly the set of b such that Ax = b admits solutions.)
Note from the textbook: We can also consider a subspace of a subspace S; in the textbook it is denoted by SS. We can certainly go further and take SSS, a subspace of the subspace SS of a subspace S of a space V.

3.2 Nullspace of a matrix A 3-12
Before introducing the nullspace, we give a formal definition of the column space.
Definition (Column space): The column space of a matrix A, denoted by C(A), consists of all vectors that are linear combinations of the column vectors of A. It can always be represented as
  C(A) = { b ∈ R^m : A_{m×n} x_{n×1} = b_{m×1} for some x ∈ R^n }.
Definition (Nullspace): The nullspace of a matrix A, denoted by N(A), consists of all vectors x that nullify the linear combination of the column vectors of A (i.e., Ax = 0). It can always be represented as
  N(A) = { x ∈ R^n : A_{m×n} x_{n×1} = 0_{m×1} }.
N(A) is a vector space. N(A) is a subspace of R^n. However, the nullspace N(A) is not necessarily (and often is not) a subspace of the column space C(A).

3.2 Determination of nullspace 3-13
How can we easily determine the nullspace of A?
Recall that in the linear equations Ax = b, x is called a solution. Specifically, when b = 0, a solution x ≠ 0 is called a special solution.
Answer: the method of special solutions. A subspace of R^3 can only be one of the following:
  a single point 0 — determined by no special solution;
  a line passing through 0 — determined by one special solution;
  a plane passing through 0 — determined by two special solutions;
  R^3 itself — determined by three special solutions.

3.2 Determination of nullspace 3-14
Example. A = [1 2 2; 0 2 0]
  N(A) = { x ∈ R^3 : [1 2 2; 0 2 0] [x_1; x_2; x_3] = 0 }
The second row of A indicates x_2 = 0.
The first row of A (i.e., x_1 + 2x_3 = 0, after using x_2 = 0) indicates that x = [−2; 0; 1] is a special solution.
We then know (by intuition) that N(A) is the line passing through [0; 0; 0] and [−2; 0; 1].
The question is: Can we always rely on our intuition? Is there a systematic method to determine N(A)?

3.2 Determination of nullspace 3-15
Example (continued). The solutions of Ax = 0 do not change when we multiply by a proper (invertible) matrix, as in EAx = E0 = 0. So we perform forward and backward eliminations (with no row exchange) and pivot normalization to obtain
  [1 0 2; 0 1 0] [x_1; x_2; x_3] = 0.
Recall that this is called the reduced row echelon form. We then have
  x_1 + 2x_3 = 0
  x_2 = 0
We conclude that N(A) is the intersection of two non-parallel planes.
What will happen if we need to perform row exchanges during forward elimination?

3.2 Determination of nullspace 3-16
Example. A = [0 2 0; 1 2 2]
  N(A) = { x ∈ R^3 : [0 2 0; 1 2 2] [x_1; x_2; x_3] = 0 }
The first row of A indicates x_2 = 0.
The second row of A (i.e., x_1 + 2x_3 = 0, after using x_2 = 0) indicates that x = [−2; 0; 1] is a special solution.
We then know (by intuition) that N(A) is the line passing through [0; 0; 0] and [−2; 0; 1].
The question is: Can we always rely on our intuition? Is there a systematic method to determine N(A)?

3.2 Determination of nullspace 3-17
Example (continued). The solutions of Ax = 0 do not change when we multiply by a proper (invertible) matrix, as in EAx = E0 = 0. So we perform forward and backward eliminations (with a row exchange) and pivot normalization to obtain
  [1 0 2; 0 1 0] [x_1; x_2; x_3] = 0.
Recall that this is called the reduced row echelon form. We then have
  x_1 + 2x_3 = 0
  x_2 = 0
We conclude that N(A) is the intersection of two non-parallel planes.
What will happen if N(A) is a plane?

3.2 Determination of nullspace 3-18
Example. A = [1 1 2 3; 2 2 8 10; 3 3 10 13]
Let us begin with forward elimination on the first column. We obtain
  [1 1 2 3; 0 0 4 4; 0 0 4 4]   (1st pivot: the entry 1 in row 1, column 1)
We continue to work on the second row (choosing the left-most non-zero entry as the next pivot):
  [1 1 2 3; 0 0 4 4; 0 0 0 0]   (2nd pivot: the entry 4 in row 2, column 3)
Since we only have two pivots, the nullspace N(A) is an intersection of two hyperplanes in R^4, which can be represented as the set of all linear combinations of two special solutions.

3.2 Determination of nullspace 3-19
Proceed with the backward elimination:
  [1 1 0 1; 0 0 4 4; 0 0 0 0]   (pivots in row 1, column 1 and row 2, column 3)
and pivot normalization:
  [1 1 0 1; 0 0 1 1; 0 0 0 0]
So we know that
  x_1, x_3 are pivot variables, and
  x_2, x_4 are free variables (free to take any values for the special solutions).
We then assign (x_2, x_4) = (1, 0) and (x_2, x_4) = (0, 1) and obtain the special solutions
  s_1 = [−1; 1; 0; 0]  and  s_2 = [−1; 0; −1; 1].
Then N(A) consists of all linear combinations of the above two vectors.

3.2 Determination of nullspace 3-20
Remark. If we have three free variables, we may assign (1, 0, 0), (0, 1, 0) and (0, 0, 1) to them to obtain the special solutions.
Exercise 1. How about four free variables?
Exercise 2. How about zero free variables?

3.2 Determination of nullspace 3-21
In MATLAB: a basis of N(A) can be determined by
  null(A)    % take A from the previous example
Note that the basis (and hence the set of special solutions) is not unique! The result is
  ans =
    -0.4059   -0.6597
     0.7623    0.1373
     0.3564   -0.5225
    -0.3564    0.5225
Recall that the reduced row echelon form can be obtained by
  rref(A)
The result is
  ans =
     1     1     0     1
     0     0     1     1
     0     0     0     0
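If the special solutions themselves are wanted (free variables set to 1 and 0 in turn, as on the previous slides) rather than an orthonormal basis, MATLAB's rational-basis option of null can be used. A short sketch, assuming the same A as above; the exact numerical output is not reproduced here.

A = [1 1 2 3; 2 2 8 10; 3 3 10 13];
N = null(A, 'r');     % 'r' requests the rational basis, i.e., the special solutions
R = rref(A);          % reduced row echelon form, for comparison with the hand computation
disp(N)
disp(norm(A * N))     % should be (numerically) zero, confirming that A*N = 0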

3.2 Echelon matrix 3-22
It is very useful to find the (row) echelon matrix of a matrix A.
Row echelon form: a matrix satisfying the following two conditions.
  Every all-zero row is below the non-all-zero rows.
  The leading coefficient (the first nonzero number from the left, also called the pivot) of a non-all-zero row is always strictly to the right of the leading coefficient of the row above it.
(Row) reduced echelon form: a matrix in row echelon form whose leading coefficients equal one and are the only non-zero entries in their columns.
Example (p marks a pivot, x marks an arbitrary entry):
  A -> U = [p x x x x x x; 0 p x x x x x; 0 0 0 0 0 p x; 0 0 0 0 0 0 0]   (row echelon form)
    -> R = [1 0 x x x 0 x; 0 1 x x x 0 x; 0 0 0 0 0 1 x; 0 0 0 0 0 0 0]   (row reduced echelon form)
In the above example, N(A) = N(U) = N(R) is a 4-dimensional hyperplane (in a 7-dimensional space) since there are four free variables.

3.2 Echelon matrix 3-23
If A_{n×n} is invertible, then R = I.
Exercise. Is the all-zero matrix in row echelon form?

3.2 Echelon matrix 3-24
Example (Problem 20). Suppose column 1 + column 3 + column 5 = 0 in a 4 by 5 matrix with four pivots. Which column is sure to have no pivot (and which variable is free)? What is the special solution? What is the nullspace?
Answer: Since column 1 + column 3 + column 5 = 0, column 5 has no pivot.
It is not possible for column 1 or column 3 to have no pivot. (Check the row reduced echelon matrix R under the assumption that column 1 (or column 3) has no pivot; a contradiction will be obtained.)
Since column 5 has no pivot, the fifth variable is free.
Since there are four pivots, column 5 has no pivot, and column 1 + column 3 + column 5 = 0, the row reduced echelon matrix is
  R = [1 0 0 0 −1; 0 1 0 0 0; 0 0 1 0 −1; 0 0 0 1 0]

3.2 Echelon matrix 3-25
The special solution is of the form x = [x_1; x_2; x_3; x_4; 1] satisfying Rx = 0. Hence,
  x = [1; 0; 1; 0; 1].
The nullspace contains all multiples of x = [1; 0; 1; 0; 1] (a line in R^5).

3.2 Echelon matrix 3-26
Example (Problem 21). Construct a matrix whose nullspace consists of all combinations of (2, 2, 1, 0) and (3, 1, 0, 1).
Answer: Two special solutions imply two free variables and two pivots. So R is an m × 4 matrix with m ≥ 2. Take m = 2 for simplicity. R is of the form
  R = [1 0 r_{1,3} r_{1,4}; 0 1 r_{2,3} r_{2,4}]
The two special solutions (Rx = 0) give
  R = [1 0 −2 −3; 0 1 −2 −1]
The answer is not unique! For example,
  R = [1 0 −2 −3; 0 1 −2 −1; 0 0 0 0]
also works.
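The construction in Problem 21 can be checked mechanically: place the prescribed special solutions as the columns of a matrix S; the pivot rows of S then give −F, and R = [I F] satisfies RS = 0. A small MATLAB sketch with the same numbers (S, F, R are just local variable names):

S = [2 3; 2 1; 1 0; 0 1];   % the two prescribed special solutions as columns
F = -S(1:2, :);             % the pivot rows of S are -F
R = [eye(2), F];            % R = [ I F ] then has nullspace spanned by the columns of S
disp(R)                     % gives [1 0 -2 -3; 0 1 -2 -1], as above
disp(R * S)                 % the 2-by-2 zero matrix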

3.2 Echelon matrix 3-27
Example (Problem 22). Construct a matrix whose nullspace consists of all multiples of (4, 3, 2, 1).
Answer: One special solution implies one free variable and three pivots. So R is an m × 4 matrix with m ≥ 3. Take m = 3 for simplicity. R is of the form
  R = [1 0 0 r_{1,4}; 0 1 0 r_{2,4}; 0 0 1 r_{3,4}]
The special solution (Rx = 0) gives
  R = [1 0 0 −4; 0 1 0 −3; 0 0 1 −2]

3.2 Echelon matrix 3-28
Example (Problem 23). Construct a matrix whose column space contains (1, 1, 5) and (0, 3, 1) and whose nullspace contains (1, 1, 2).
Answer: A linear combination of the column vectors, c_1 a_1 + c_2 a_2 + c_3 a_3, can be made equal to (1, 1, 5) and to (0, 3, 1). So let c_1 = 1 and c_2 = c_3 = 0 for the first requirement, and let c_2 = 1 and c_1 = c_3 = 0 for the second. We then obtain
  A = [1 0 a_{1,3}; 1 3 a_{2,3}; 5 1 a_{3,3}]
Solving A [1; 1; 2] = 0 gives
  A = [1 0 −0.5; 1 3 −2; 5 1 −3]

3.2 Echelon matrix 3-29
What is the relationship between the column space and the nullspace of a symmetric matrix A?
Answer: Their vectors are perpendicular to each other, i.e.,
  a ∈ N(A) and b ∈ C(A)  ⟹  a · b = 0.
Proof: a ∈ N(A) implies Aa = 0. b ∈ C(A) implies Ax = b for some x. Hence, by A^T = A, we obtain
  a · b = b^T a = x^T A^T a = x^T A a = x^T 0 = 0.
For n ≠ m, the column space of A_{m×n} is a vector space of m-dimensional vectors, but the nullspace of A_{m×n} is a vector space of n-dimensional vectors. So we can talk about the relation between the column space and the nullspace only when A_{m×n} is a square matrix, i.e., m = n.
To connect a vector space of A with its nullspace in general, the text subsequently introduces the row space.
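The perpendicularity claim is easy to spot-check numerically for any symmetric singular matrix: take an arbitrary member b = Ax of C(A) and dot it with a basis of N(A). A minimal MATLAB sketch; the matrix and the vector x are only illustrative.

A = [1 2 3; 2 4 6; 3 6 9];   % symmetric, rank 1 (illustrative)
a = null(A);                 % columns span N(A)
x = [1; -2; 5];              % arbitrary x, so b = A*x is an arbitrary member of C(A)
b = A * x;
disp(b' * a)                 % every entry should be (numerically) zero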

3.2 Echelon matrix 3-30
Definition (Row space): The row space of a matrix A, denoted by R(A), consists of all vectors that are linear combinations of the row vectors of A. It can always be represented as
  R(A) = { b ∈ R^n : A^T x = b_{n×1} for some x ∈ R^m }.
What is the relationship between the row space and the nullspace of a matrix A?
Answer: Their vectors are perpendicular to each other, i.e.,
  a ∈ N(A) and b ∈ R(A)  ⟹  a · b = 0.
Proof: a ∈ N(A) implies Aa = 0. b ∈ R(A) implies A^T x = b for some x. Hence, we obtain
  a · b = b^T a = x^T A a = x^T 0 = 0.
For a symmetric matrix, row space = column space.
The row space can also be defined in terms of the reduced row echelon matrix R.

3.2 Echelon matrix 3-31
Exercise. What is the relationship between the column space of A and the nullspace of A^T?

3.3 The rank and row reduced form 3-32
Now we turn to another important quantitative index of a matrix: the rank.
Definition (Rank): The rank of a matrix A is the number of non-zero pivots.
How can we determine the rank?
Answer: By elimination, A -> upper triangular U -> row reduced echelon R.
The rank r is
  = the dimension of the row space (the non-pivot rows are linear combinations of the pivot rows!)
  = the dimension of the column space

3.3 The rank and row reduced form 3-33
First note that
  C(A) = { b ∈ R^m : A_{m×n} x_{n×1} = b_{m×1} for some x ∈ R^n }
and
  { a ∈ R^m : E_{m×m} b_{m×1} = a_{m×1} for some b ∈ C(A) }
have the same dimension if E is invertible. So C(A) and C(R) have the same dimension, since R = EA for some invertible E. Apparently, the non-pivot columns of R are linear combinations of the pivot columns (cf. Slide 3-22).
Hence the rank r is also
  = the number of (linearly) independent rows in A
  = the number of (linearly) independent columns in A.
The dimension of the nullspace of A is (n − r). I.e., (n − r) = the number of free variables = the number of special solutions (for Ax = 0).
Definition (Independence): A set of vectors {v_1, ..., v_m} is said to be linearly independent if the linear combination a_1 v_1 + ⋯ + a_m v_m equals 0 only when all the coefficients are zero (a_1 = a_2 = ⋯ = a_m = 0).

3.3 Pivot columns 3-34
How do we locate the r pivot columns of A?
Answer: R = rref(A). The r pivot columns of R = EA and the r pivot columns of A are at exactly the same positions.
By R = EA, an interesting observation is that the first r columns of E^{-1} are exactly the r pivot columns of A! This is because the r pivot columns of R form an r × r identity matrix.
Example. A = [1 3 0 2 −1; 0 0 1 4 −3; 1 3 1 6 −4],  R = [1 3 0 2 −1; 0 0 1 4 −3; 0 0 0 0 0]
  A = E^{-1} R = [d_{1,1} d_{1,2} d_{1,3}; d_{2,1} d_{2,2} d_{2,3}; d_{3,1} d_{3,2} d_{3,3}] [1 3 0 2 −1; 0 0 1 4 −3; 0 0 0 0 0]
so the pivot columns of A (columns 1 and 3) equal the first two columns of E^{-1}, namely [d_{1,1}; d_{2,1}; d_{3,1}] and [d_{1,2}; d_{2,2}; d_{3,2}].

3.3 Pivot columns 3-35
Can we find the special solutions directly, using the fact that the r pivot columns of R form an r × r identity matrix?
Answer: Of course. For a matrix A_{m×n}, there are (n − r) special solutions. These special solutions satisfy
  R x_1 = 0,  R x_2 = 0,  ...,  R x_{n−r} = 0.
Equivalently,
  R_{m×n} [x_1 x_2 ⋯ x_{n−r}]_{n×(n−r)} = R_{m×n} N_{n×(n−r)} = 0_{m×(n−r)}.
By reordering the columns of R (so that the pivot columns come first), it takes the form
  R_{m×n} = [I_{r×r}  F_{r×(n−r)}; 0_{(m−r)×r}  0_{(m−r)×(n−r)}]   (the F block sits in the free-variable columns)
and then
  N_{n×(n−r)} = [−F_{r×(n−r)}; I_{(n−r)×(n−r)}].
So, to get the special solutions: collect the free-variable columns of R, negate them, and add an identity matrix in the lower (n − r) rows.
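The recipe above (negate the free-variable columns of R and insert an identity block in the free-variable rows) is mechanical enough to code directly. A hedged MATLAB sketch that avoids the column reordering by writing −F into the pivot rows in place:

A = [1 3 0 2 -1; 0 0 1 4 -3; 1 3 1 6 -4];   % the running example
[R, piv] = rref(A);                          % piv lists the pivot columns
[m, n] = size(A);
r = numel(piv);
free = setdiff(1:n, piv);                    % free-variable columns
N = zeros(n, n - r);
N(free, :) = eye(n - r);                     % identity block in the free-variable rows
N(piv, :)  = -R(1:r, free);                  % minus F in the pivot-variable rows
disp(N)                                      % the special solutions, one per column
disp(norm(A * N))                            % should be 0: every column solves Ax = 0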

3.3 Pivot columns 3-36
Example. A = [1 3 0 2 −1; 0 0 1 4 −3; 1 3 1 6 −4],  R = [1 3 0 2 −1; 0 0 1 4 −3; 0 0 0 0 0]
Reading the columns: column 1 is a (linearly independent) pivot column of A; column 2 is a linear combination of column 1; column 3 is a pivot column of A; columns 4 and 5 are linear combinations of columns 1 and 3.
  N = [−3 −2 1; 1 0 0; 0 −4 3; 0 1 0; 0 0 1]
(Rows 2, 4, 5 carry the identity matrix in the free-variable positions; rows 1 and 3 carry −F.)
Example. A = [1 2 3],  R = [1 2 3]
  N = [−2 −3; 1 0; 0 1]
(Row 1 carries −F; rows 2 and 3 carry the identity matrix.)

3.3 Pivot columns 3-37
In summary:
  The pivot columns are not linear combinations of earlier columns.
  The free columns are linear combinations of earlier pivot columns.
Final question: How do we find the matrix E such that R = rref(A) = EA?
Answer: Forward and backward eliminations (with possibly row exchanges) on the matrix [A_{m×n} I_{m×m}] will yield
  E_{m×m} [A_{m×n}  I_{m×m}]_{m×(n+m)} = [R_{m×n}  E_{m×m}]_{m×(n+m)}.
Example. A = [1 3 0 2 −1; 0 0 1 4 −3; 1 3 1 6 −4]
  rref([1 3 0 2 −1 1 0 0; 0 0 1 4 −3 0 1 0; 1 3 1 6 −4 0 0 1])
    = [1 3 0 2 −1 0 −1 1; 0 0 1 4 −3 0 1 0; 0 0 0 0 0 1 1 −1]
(The last three columns give one valid E with EA = R; E is not unique because A is not of full row rank.)

3.3 Pivot columns 3-38
Example. A = [0 1 0; 1 0 0].
  rref([0 1 0 1 0; 1 0 0 0 1]) = [1 0 0 0 1; 0 1 0 1 0]
so R = [1 0 0; 0 1 0] and E = [0 1; 1 0] (a row exchange).
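The [A I] device can also be tried directly in MATLAB: since every row operation applied to A is recorded in the appended identity block, the last m columns of rref([A, eye(m)]) give one valid E with EA = rref(A) (E is not unique when A does not have full row rank). A sketch on the 3-by-5 running example:

A = [1 3 0 2 -1; 0 0 1 4 -3; 1 3 1 6 -4];
[m, n] = size(A);
Aug = rref([A, eye(m)]);    % row-reduce the augmented matrix [A I]
R = Aug(:, 1:n);            % first n columns: rref(A)
E = Aug(:, n+1:end);        % last m columns: an elimination matrix E
disp(norm(E * A - R))       % should be 0, i.e., EA = R
disp(norm(R - rref(A)))     % should be 0 as well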

3.3 Rank revisited 3-39
The matrix A can be decomposed into a sum of outer products of r pairs of vectors, i.e., A = sum_{i=1}^{r} u_i v_i^T.
How do we prove this? Or, how do we determine these pairs {u_i, v_i}, i = 1, ..., r?
Answer: (Denote Ẽ = E^{-1} for convenience.)
  A_{m×n} = Ẽ_{m×m} R_{m×n} = [ẽ_1 ẽ_2 ⋯ ẽ_m] [r_1^T; ⋮; r_r^T; 0^T; ⋮; 0^T]
where r_1^T, ..., r_r^T are the r pivot rows of R and the remaining (m − r) rows are all-zero. Splitting R row by row,
  A = [ẽ_1 ẽ_2 ⋯ ẽ_m] ( [r_1^T; 0^T; ⋮; 0^T] + ⋯ + [0^T; ⋮; r_r^T; ⋮; 0^T] )

3.3 Rank revisited 3-40
  [ẽ_1 ⋯ ẽ_m] [r_1^T; 0^T; ⋮; 0^T] + ⋯ + [ẽ_1 ⋯ ẽ_m] [0^T; ⋮; r_r^T; ⋮; 0^T] = ẽ_1 r_1^T + ⋯ + ẽ_r r_r^T
The determination of Ẽ and R (with A = Ẽ R) therefore gives the outer-product decomposition of A.

3.3 Rank revisited 3-41
Exercise 1 (Problem 25): Show that every m-by-n matrix of rank r reduces to an (m by r) matrix times an (r by n) matrix. Also, show that the columns of the m by r matrix equal the pivot columns of A.
Hint: A_{m×n} = [ẽ_1 ⋯ ẽ_r]_{m×r} [r_1^T; ⋮; r_r^T]_{r×n}, and [r_1^T; ⋮; r_r^T] = [I_{r×r} F_{r×(n−r)}] (with column exchanges).
Exercise 2 (Problem 16): If A_{m×n} = ẽ_A r_A^T and B_{n×k} = ẽ_B r_B^T are two rank-1 matrices, prove that the rank of AB is 1 if, and only if, r_A^T ẽ_B ≠ 0.
Tip 1: If A_{m×n} = sum_{i=1}^{r} ẽ_i r_i^T is a rank-r matrix (so m ≥ r and n ≥ r), and B_{n×k} = sum_{j=1}^{r̄} ē_j r̄_j^T is a rank-r̄ matrix (so n ≥ r̄ and k ≥ r̄), show that
  A_{m×n} B_{n×k} = [Aē_1 ⋯ Aē_{r̄}]_{m×r̄} R̄_{r̄×k}   (denote Ẽ_AB = [Aē_1 ⋯ Aē_{r̄}])
What is the condition under which AB remains of rank r̄? Answer: Ẽ_AB has rank r̄, i.e., Aē_1, ..., Aē_{r̄} are linearly independent.
Tip 2: rank(AB) ≤ rank(B).
Tip 3: rank(AB) ≤ rank(A).
Tip 4: rank(A) = rank(A^T).

3.4 The complete solution of Ax = b 3-42
Recall how we completely solve A_{m×n} x_{n×1} = 0_{m×1}. (Here, "completely" means that we wish to find all solutions.)
Answer: With (after column reordering)
  R = rref(A) = [I_{r×r}  F_{r×(n−r)}; 0_{(m−r)×r}  0_{(m−r)×(n−r)}]   and   N_{n×(n−r)} = [−F_{r×(n−r)}; I_{(n−r)×(n−r)}],
every solution x of Ax = 0 is of the form
  x^(null)_{n×1} = N_{n×(n−r)} v_{(n−r)×1}   for some v ∈ R^{n−r}.
How do we completely solve A_{m×n} x_{n×1} = b_{m×1}?
Answer: Identify one particular solution x^(p) with A_{m×n} x^(p)_{n×1} = b_{m×1}. Then every solution x of Ax = b is of the form x^(p) + x^(null).

3.4 The complete solution of Ax = b 3-43
Justification of the above answer (i.e., proof of the above answer):
- Suppose that w satisfies Aw = b but cannot be expressed as x^(p) + x^(null).
- Then A(w − x^(p)) = Aw − Ax^(p) = b − b = 0. Hence (w − x^(p)) ∈ N(A).
- This implies w = x^(p) + x^(null) for some x^(null) ∈ N(A); a contradiction is accordingly obtained.
Exercise: When is there only one solution to Ax = b?
Answer: When N(A) consists of only 0, and x^(p) exists! Note that x^(p) exists if, and only if, b ∈ C(A).

3.4 The complete solution of Ax = b 3-44
How do we find one x^(p)?
Answer:
  [R d] = rref([A b]) = [I_{r×r}  F_{r×(n−r)}  d_{1:r}; 0_{(m−r)×r}  0_{(m−r)×(n−r)}  0_{(m−r)×1}]   where d_{1:r} = (d_1, ..., d_r)^T.
One x^(p) is the solution of Ax = b (or Rx = EAx = Eb = d) obtained by setting all free variables to zero and the pivot variables equal to d_1, d_2, ..., d_r.
Example (Problem 3). A = [1 3 3; 2 6 9; −1 −3 3] and b = [1; 5; 5]. Find one x^(p).
  rref([A b]) = [1 3 0 −2; 0 0 1 1; 0 0 0 0]   (so d_1 = −2 and d_2 = 1)
Then x^(p) = [−2; 0; 1] (the free variable x_2 is set to 0).
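The two ingredients (a particular solution read off from rref([A b]) with the free variables set to zero, plus the nullspace part) combine into a small complete-solution sketch in MATLAB. The data are those of Problem 3; xp, N, v are local names.

A = [1 3 3; 2 6 9; -1 -3 3];
b = [1; 5; 5];
[m, n] = size(A);
[Rd, piv] = rref([A, b]);      % reduce the augmented matrix [A b]
if any(piv > n)                % a pivot in the last column means b is not in C(A)
    disp('No solution.')
else
    r = numel(piv);
    xp = zeros(n, 1);
    xp(piv) = Rd(1:r, end);    % pivot variables take the values d_1, ..., d_r
    N = null(A, 'r');          % special solutions of Ax = 0
    disp(xp)                   % the particular solution, here (-2, 0, 1)
    disp(norm(A * xp - b))     % should be 0
    % every solution of Ax = b is xp + N*v for an arbitrary v in R^(n-r)
end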

3.4 The complete solution of Ax = b 3-45
Based on what we have learned about rank, we can summarize the solutions of A_{m×n} x_{n×1} = b_{m×1} as follows.
  r = m, r = n   square and invertible          1 solution                       N(A) = {0_{n×1}}
  r = m, r < n   rectangular (smaller height)   infinitely many solutions        N(A) = combinations of (n−r) linearly independent n-by-1 vectors
  r < m, r = n   rectangular (smaller width)    0 or 1 solution                  N(A) = {0_{n×1}}
  r < m, r < n   not full rank                  0 or infinitely many solutions   N(A) = combinations of (n−r) linearly independent n-by-1 vectors
It may be easier to memorize this via the following equivalent table.
  r = m, r = n   square and invertible, C(A) = R^m        1 solution                       N(A) = combinations of (n−r) = 0 linearly independent n-by-1 vectors
  r = m, r < n   rectangular (smaller height), C(A) = R^m  infinitely many solutions        N(A) = combinations of (n−r) > 0 linearly independent n-by-1 vectors
  r < m, r = n   rectangular (smaller width), C(A) ⊊ R^m   0 or 1 solution                  N(A) = combinations of (n−r) = 0 linearly independent n-by-1 vectors
  r < m, r < n   not full rank, C(A) ⊊ R^m                 0 or infinitely many solutions   N(A) = combinations of (n−r) > 0 linearly independent n-by-1 vectors

3.4 Denote for convenience Ẽ = E^{-1} 3-46
  (a) r = m, r = n   square and invertible          1 solution                       C(A) = R^m (= R^r)
  (b) r = m, r < n   rectangular (smaller height)   infinitely many solutions        C(A) = R^m (= R^r)
  (c) r < m, r = n   rectangular (smaller width)    0 or 1 solution                  C(A) = combinations of r linearly independent m-by-1 vectors
  (d) r < m, r < n   not full rank                  0 or infinitely many solutions   C(A) = combinations of r linearly independent m-by-1 vectors
To see the last column, write (with the pivot columns ordered first, and x_{r×1}, x_{(n−r)×1} the pivot-variable and free-variable parts of x)
  C(A) = { b ∈ R^m : A_{m×n} x_{n×1} = b_{m×1} for some x ∈ R^n }
       = { b ∈ R^m : Ẽ_{m×m} [I_{r×r} F_{r×(n−r)}; 0_{(m−r)×r} 0_{(m−r)×(n−r)}] x_{n×1} = b_{m×1} for some x ∈ R^n }
       = { b ∈ R^m : Ẽ_{m×m} [x_{r×1} + F_{r×(n−r)} x_{(n−r)×1}; 0_{(m−r)×1}] = b_{m×1} for some x ∈ R^n }
which in the four cases becomes
  (a) { b ∈ R^r : Ẽ_{r×r} x_{r×1} = b_{r×1} for some x ∈ R^r }
  (b) { b ∈ R^r : Ẽ_{r×r} x_{r×1} + Ẽ_{r×r} F_{r×(n−r)} x_{(n−r)×1} = b_{r×1} for some x ∈ R^n }
  (c) { b ∈ R^m : Ẽ_{m×r} x_{r×1} = b_{m×1} for some x ∈ R^r }
  (d) { b ∈ R^m : Ẽ_{m×r} x_{r×1} + Ẽ_{m×r} F_{r×(n−r)} x_{(n−r)×1} = b_{m×1} for some x ∈ R^n }
(Here Ẽ_{m×r} denotes the first r columns of Ẽ.)

3.4 Denote for convenience Ẽ = E^{-1} 3-47
In every one of the four cases (a)-(d), the column space can therefore be written using only the first r columns of Ẽ:
  C(A) = { b ∈ R^m : Ẽ_{m×r} x_{r×1} = b_{m×1} for some x ∈ R^r }   for (a), (b), (c), and (d) alike.

3.4 Denote for convenience Ẽ = E^{-1} 3-48
Example. Find the condition on b under which Ax = b has solutions.
Answer:
  [R d] = rref([A b]) = [I_{r×r}  F_{r×(n−r)}  d_{1:r}; 0_{(m−r)×r}  0_{(m−r)×(n−r)}  d_{r+1:m}]
We will have (m − r) conditions, namely d_{r+1}(b) = 0, ..., d_m(b) = 0.
Example. Find the condition on b under which [1 2 3 5; 2 4 8 12; 3 6 7 13] x = b has solutions.
Answer:
  rref([A b]) = [1 2 0 2  4b_1 − (3/2)b_2; 0 0 1 1  −b_1 + (1/2)b_2; 0 0 0 0  −5b_1 + b_2 + b_3]
so the condition is d_3(b) = −5b_1 + b_2 + b_3 = 0.

3.4 Column space revisited 3-49
The above example gives an alternative way to define the column space of A:
  C(A) = { b ∈ R^m : A_{m×n} x_{n×1} = b_{m×1} for some x ∈ R^n }
       = { b ∈ R^m : Ẽ_{m×r} x_{r×1} = b_{m×1} for some x ∈ R^r }
       = { b ∈ R^m : d_{r+1}(b) = 0, ..., d_m(b) = 0 }
Recall that, by R = EA, an interesting observation is that the first r columns of Ẽ = E^{-1} are exactly the r pivot columns of A! Hence the above Ẽ_{m×r} consists exactly of the r pivot columns of A.
Example. Find the column space of A_{3×4} = [1 2 3 5; 2 4 8 12; 3 6 7 13].
Answer:
  C(A) = { b ∈ R^3 : [1 3; 2 8; 3 7] x_{2×1} = b_{3×1} for some x ∈ R^2 }
       = { b ∈ R^3 : −5b_1 + b_2 + b_3 = 0 }

3.5 Independence, basis and dimension 3-50
From the previous section, we learned that the dimension of the column space C(A_{m×n}) is the rank r; i.e., C(A) consists of all linear combinations of r linearly independent m × 1 vectors.
These linearly independent vectors span the space C(A). So they can serve as a basis of the vector space C(A).
Definition (Basis): Independent vectors that span the space are called a basis of the space.
Note: Span = all vectors in the space can be represented as a unique linear combination of the basis vectors. (Please note again that "unique" is an important keyword here!) Uniqueness follows from independence, as can be seen in the exercise at the bottom of this slide.
Example. The all-zero vector 0 is always dependent on other vectors. Why?
Exercise (Uniqueness). Suppose {v_1, ..., v_n} is a basis, and a vector v = a_1 v_1 + ⋯ + a_n v_n = b_1 v_1 + ⋯ + b_n v_n. Show that a_i = b_i for 1 ≤ i ≤ n. (Hint: See p. 172 of the text.)

3.5 Independence, basis and dimension 3-51
Example. Prove that if the columns of A are linearly independent, then Ax = 0 has a unique solution x = 0.
Proof: Suppose Ax = sum_{i=1}^{n} x_i a_i = 0 for some non-zero x. Then the columns of A are not linearly independent according to the definition on Slide 3-33, which contradicts the assumption that the columns of A are linearly independent.
How do we examine whether a group of vectors is independent or not?
Answer: Place them as the column vectors of a matrix A. Then
  they are independent if, and only if, Ax = 0 has no non-zero solution;
  they are independent if, and only if, N(A) = N(R) = {0};
  they are independent if, and only if, there are no free variables;
  they are independent if, and only if, A_{m×n} has full column rank, i.e., r = n.
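The checklist translates directly into MATLAB: place the candidate vectors as the columns of A and test whether the rank equals the number of columns (equivalently, whether the nullspace is trivial). A minimal sketch; the three test vectors are illustrative (the third is deliberately dependent on the first two).

v1 = [1; 0; 2]; v2 = [0; 1; 1]; v3 = [2; 1; 5];   % v3 = 2*v1 + v2
A = [v1, v2, v3];
if rank(A) == size(A, 2)
    disp('The columns are linearly independent.')
else
    disp('The columns are dependent; a dependence relation is:')
    disp(null(A, 'r'))          % a non-zero x with Ax = 0
end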

3.5 Column space and row space of A_{m×n} with rank r 3-52
After the introduction of independence, we can re-define the column space and row space as follows.
Column space = all linear combinations of r independent columns. These r independent columns are a basis of the column space. The dimension of C(A) is r.
Row space = all linear combinations of r independent rows. These r independent rows are a basis of the row space. The dimension of R(A) = C(A^T) is r.
Null space = all linear combinations of (n − r) independent vectors, each of which is perpendicular to the r independent rows of A. These (n − r) independent vectors are a basis of the null space. The dimension of N(A) is (n − r).

3.5 Column space and row space of A_{m×n} and R_{m×n} 3-53
Column space = linear combinations of r independent columns. It is possible that C(A) ≠ C(R). However, they have the same dimension.
Row space = linear combinations of r independent rows. R(A) = R(R). In fact, R(EA) = R(A) for invertible E.
Null space = linear combinations of (n − r) independent vectors, each of which is perpendicular to the r independent rows. N(A) = N(R). In fact, N(EA) = N(A) for invertible E.
Tip: C(AE) = C(A) for invertible E.

3.5 Basis 3-54
The number of basis vectors for a vector space is unique.
Proof: Suppose w_1, ..., w_n and v_1, ..., v_m are two bases of the same vector space, and suppose n > m. By the definition of basis, w_1, ..., w_n can be represented as linear combinations of v_1, ..., v_m. Hence,
  W = [w_1 ⋯ w_n] = [V a_1 ⋯ V a_n] = V A_{m×n}
where V = [v_1 ⋯ v_m] and A = [a_1 ⋯ a_n]. Then m < n implies that Ax = 0 has a non-zero solution (there are at least n − m free variables). Consequently, Wx = VAx = V0 = 0, i.e., a non-zero linear combination of w_1, ..., w_n equals 0, which implies that one of the w_i (one with a non-zero coefficient) is linearly dependent on the remaining vectors. A contradiction to the linear independence property of a basis is obtained.
The number of basis vectors is also called the dimension (維度) of the space.
The number of basis vectors is sometimes referred to as the degree of freedom (自由度) of the space.

3.5 Basis 3-55
This is an extension of the dimension or degree-of-freedom notion of the Euclidean space.
Example. Find a basis of the vector space of polynomials of degree at most m, a_m x^m + a_{m−1} x^{m−1} + ⋯ + a_0, where each a_i ∈ R.
Answer: In this vector space, V is the set of polynomials of degree at most m and the scalar field is F = R. One basis is {1, x, x^2, ..., x^m}. All the vectors (i.e., polynomials) can be represented as a linear combination of this basis.
Back in the Euclidean space, how do we find a basis for (the span of) a set of vectors?
Answer:
  Approach 1) Put them as the rows of a matrix A. The r pivot rows of R = rref(A) are an answer.
  Approach 2) Put them as the columns of a matrix A. Determine the r pivot columns through R. Then the r pivot columns of A (not of R) are an answer.
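Approach 2 is particularly easy to automate, because rref reports the pivot columns directly. A hedged MATLAB sketch that extracts a basis for the span of a given set of vectors (placed as the columns of A; the example reuses the matrix of Slide 3-36):

A = [1 3 0 2 -1; 0 0 1 4 -3; 1 3 1 6 -4];   % the given vectors, as columns
[R, piv] = rref(A);                          % piv = indices of the pivot columns
basis = A(:, piv);                           % pivot columns of A (not of R) form a basis of the span
disp(basis)
disp(rank(basis) == numel(piv))              % sanity check: the chosen columns are independent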

3.5 Extension to matrix space and function space 3-56
The extension to matrix space and function space is only for your reference and is excluded from our focus (specifically, it is excluded from the exam). It is, however, an important concept in the area of communications, e.g., finding a basis of a group of communication signals.
Exercise. Can 0 be part of a basis? Hint: 0 is linearly dependent on any vector, including itself.
Definition (Independence): A set of vectors (v_1, ..., v_m) is said to be linearly independent if the linear combination a_1 v_1 + ⋯ + a_m v_m equals 0 only when all the coefficients are zero (a_1 = a_2 = ⋯ = a_m = 0).
We may then refine the definition of basis as:
Definition (Basis): Independent (non-zero) vectors that span the space are called a basis of the space.
Note: Span = all vectors in the space can be represented as a unique linear combination of the basis vectors. (Please note again that "unique" is an important keyword here!)

3.5 Extension to matrix space and function space 3-57
Exercise. What is the basis of the null space {0}? What is the dimension of this null space?
Hint 1: Put A = [0] and find the rank r of A.
Hint 2: If we choose 0 to be the basis of such a null space, can all vectors in the space {0} be represented as a unique linear combination of the basis?
Answer: The dimension is 0 and no basis exists.
Example (Challenge). What are all the matrices whose column space is spanned by the two vectors v_1 = [1; 2; 0] and v_2 = [2; 3; 0]?
Answer: All 3 × n matrices of rank r = 2 all of whose columns are linear combinations of v_1 and v_2, i.e.,
  [v_1 v_2] [c_{1,1} c_{1,2} ⋯ c_{1,n}; c_{2,1} c_{2,2} ⋯ c_{2,n}]
with n ≥ 2 and the 2 × n coefficient matrix C having two linearly independent columns (so that the product indeed has rank 2).
Important notion: All the columns of A must lie in C(A), and the r pivot columns form a basis.

3.5 Extension to matrix space and function space 3-58
Example (Challenge). What are all the matrices whose null space is spanned by the two vectors v_1 = [1; 2; 0] and v_2 = [2; 3; 0]?
Answer: All m × 3 matrices of rank r = 1 all of whose rows are multiples of the vector w = [0; 0; 1] (because w^T v_1 = w^T v_2 = 0), i.e.,
  C_{m×3} = [c_{1,1}; ⋮; c_{m,1}] w^T
with m ≥ 1 and at least one c_{i,1} non-zero.
Important notion: All the rows of A must lie in the orthogonal complement of N(A), i.e., the row space R(A). Also, (n − r) = 2 is the dimension of N(A_{m×n}).
Example (Challenge). Can the matrices in each of the above two examples form a vector space?

3.6 Dimensions of the four subspaces 3-59
What are the four subspaces?
  Column space C(A)
  Nullspace N(A)
  Row space R(A) = C(A^T)
  Left nullspace N(A^T): the solutions y to A^T y = 0; equivalently, the solutions y to y^T A = 0^T. Since y^T appears on the left of A, it is named the left nullspace.

3.6 Determination of the left nullspace 3-60
Exercise (Review of key idea). The last (m − r) rows of E are a basis of the left nullspace of A.
Recall that the basis (the special solutions, i.e., [−F_{r×(n−r)}; I_{(n−r)×(n−r)}]) for the nullspace of A can be obtained through
  R = rref(A) = [I_{r×r}  F_{r×(n−r)}; 0_{(m−r)×r}  0_{(m−r)×(n−r)}].
This exercise tells you that the special solutions for the left nullspace are given by E_{m×m}.

3.6 Determination of the left nullspace 3-61
Proof: That the last (m − r) rows of E_{m×m} A_{m×n} = R are all zeros implies
  [E_{r×m}; E_{(m−r)×m}] A_{m×n} = R_{m×n}   and   E_{(m−r)×m} A_{m×n} = 0_{(m−r)×n}.
Hence A^T_{n×m} E^T_{m×(m−r)} = 0_{n×(m−r)}.
Observe that the left nullspace requires A^T_{n×m} y_{m×1} = 0_{n×1}, that the columns of E^T_{m×(m−r)} are linearly independent (because E is invertible), and that the dimension of the left nullspace is (m − r). The proof is then completed.
Recall from Slide 3-49 that, by R = EA, an interesting observation is that the first r columns of Ẽ = E^{-1} are exactly the r pivot columns of A! So these r columns are a basis of the column space of A. Now we have further shown that the last (m − r) rows of E are a basis of the left nullspace of A.
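The claim can be tested with the same [A I] device used on Slide 3-37: take E from the last m columns of rref([A, eye(m)]), keep its last m − r rows, and check that each of them is a left nullvector of A. A hedged MATLAB sketch on the running 3-by-5 example:

A = [1 3 0 2 -1; 0 0 1 4 -3; 1 3 1 6 -4];
[m, n] = size(A);
Aug = rref([A, eye(m)]);
E = Aug(:, n+1:end);               % an elimination matrix with E*A = rref(A)
r = rank(A);
Y = E(r+1:m, :);                   % last m - r rows of E
disp(norm(Y * A))                  % should be 0: each row y of Y satisfies y*A = 0
disp(size(null(A'), 2) == m - r)   % the left nullspace indeed has dimension m - r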

3.6 Column space and left nullspace (cf. slides 3-48 and 3-49) 3-62
Example. Find the condition on b under which Ax = b has solutions.
Answer:
  [R d] = rref([A b]) = [I_{r×r}  F_{r×(n−r)}  d_{1:r}; 0_{(m−r)×r}  0_{(m−r)×(n−r)}  d_{r+1:m}]
We will have (m − r) conditions, namely d_{r+1}(b) = 0, ..., d_m(b) = 0.
Example. Find the condition on b under which [1 2 3 5; 2 4 8 12; 3 6 7 13] x = b has solutions.
Answer:
  rref([A b]) = [1 2 0 2  4b_1 − (3/2)b_2; 0 0 1 1  −b_1 + (1/2)b_2; 0 0 0 0  −5b_1 + b_2 + b_3]
so the condition is d_3(b) = −5b_1 + b_2 + b_3 = 0.

3.6 Column space and left nullspace (cf. slides 3-48 and 3-49) 3-63
The above example gives an alternative way to define the column space of A:
  C(A) = { b ∈ R^m : A_{m×n} x_{n×1} = Ẽ_{m×m} R_{m×n} x_{n×1} = b_{m×1} for some x ∈ R^n }
       = { b ∈ R^m : Ẽ_{m×r} x_{r×1} = b_{m×1} for some x ∈ R^r }
       = { b ∈ R^m : d_{r+1}(b) = 0, ..., d_m(b) = 0 }
Recall that, by R = EA, an interesting observation is that the first r columns of Ẽ = E^{-1} are exactly the r pivot columns of A! Hence the above Ẽ_{m×r} consists exactly of the r pivot columns of A.
Example. Find the column space of [1 2 3 5; 2 4 8 12; 3 6 7 13].
Answer:
  C(A) = { b ∈ R^3 : [1 3; 2 8; 3 7] x_{2×1} = b_{3×1} for some x ∈ R^2 }
       = { b ∈ R^3 : −5b_1 + b_2 + b_3 = 0 }

3.6 Column space and left nullspace (cf. slides 3-48 and 3-49) 3-64
  C(A) = { b ∈ R^m : d_{r+1}(b) = 0, ..., d_m(b) = 0 }
yields the (m − r) basis vectors of the left nullspace (the coefficient vectors of the conditions d_{r+1}(b) = 0, ..., d_m(b) = 0), because the vectors in C(A) are orthogonal to those in N(A^T).
Example. Find the left nullspace of [1 2 3 5; 2 4 8 12; 3 6 7 13], whose column space is written as
  C(A) = { b ∈ R^3 : −5b_1 + b_2 + b_3 = 0 }
Answer:
  N(A^T) = { y ∈ R^3 : y = c [−5; 1; 1] for some c ∈ R }
Note that [−5 1 1] is exactly the last row of E (cf. Slide 3-60).

3.6 Column spaces and row spaces 3-65
Tip. Suppose A_{m×n} = B_{m×r} C_{r×n} and all three matrices have rank r. Then
  C(A_{m×n}) = C(B_{m×r})   and   R(A_{m×n}) = R(C_{r×n}).
If the rank of A is less than r, then we can only claim C(A_{m×n}) ⊆ C(B_{m×r}) and R(A_{m×n}) ⊆ R(C_{r×n}).
For example, for
  [1 1; 0 0] = [1 0; 0 1] [1 1; 0 0]    (A = B C)
we have C(A) ⊊ C(B). They are equal when A and B have the same rank.
Exercise. Given A_{m×n} = B_{m×r} C_{r×n}, show that the rank of A is no greater than r.
Example. A_{m×n} = Ẽ_{m×m} R_{m×n}, where R = rref(A). Then
  C(A_{m×n}) = C(A_{m×r}) = C(Ẽ_{m×r})   and   R(A_{m×n}) = R(R_{m×n}),
where A_{m×r} = Ẽ_{m×r} consists of the r pivot columns of A.