Linear Algebra. Min Yan


Linear Algebra

Min Yan

January 2, 2018


Contents

1 Vector Space
  1.1 Definition
    1.1.1 Axioms of Vector Space
    1.1.2 Consequence of Axiom
  1.2 Span and Linear Independence
    1.2.1 Definition
    1.2.2 System of Linear Equations
    1.2.3 Gaussian Elimination
    1.2.4 Row Echelon Form
    1.2.5 Reduced Row Echelon Form
    1.2.6 Calculation of Span and Linear Independence
  1.3 Basis
    1.3.1 Definition
    1.3.2 Coordinate
    1.3.3 Construct Basis from Spanning Set
    1.3.4 Construct Basis from Linearly Independent Set
    1.3.5 Dimension
    1.3.6 On Infinite Dimensional Vector Space

2 Linear Transformation
  2.1 Definition
    2.1.1 Geometry and Example
    2.1.2 Linear Transformation of Linear Combination
    2.1.3 Linear Transformation between Euclidean Spaces
  2.2 Operation of Linear Transformation
    2.2.1 Addition, Scalar Multiplication, Composition
    2.2.2 Dual
    2.2.3 Matrix Operation
  2.3 Onto, One-to-one, and Inverse
    2.3.1 Definition
    2.3.2 Onto and One-to-one for Linear Transformation
    2.3.3 Isomorphism
    2.3.4 Invertible Matrix
  2.4 Matrix of Linear Transformation
    2.4.1 Matrix of General Linear Transformation
    2.4.2 Change of Basis
    2.4.3 Similar Matrix

3 Subspace
  3.1 Span
    3.1.1 Calculation of Span
    3.1.2 Calculation of Extension to Basis
  3.2 Range and Kernel
    3.2.1 Range
    3.2.2 Rank
    3.2.3 Kernel
    3.2.4 General Solution of Linear Equation
  3.3 Sum and Direct Sum
    3.3.1 Sum of Subspace
    3.3.2 Direct Sum
    3.3.3 Projection
    3.3.4 Blocks of Linear Transformation
  3.4 Quotient Space
    3.4.1 Construction of the Quotient
    3.4.2 Universal Property
    3.4.3 Direct Summand

4 Inner Product
  4.1 Inner Product
    4.1.1 Definition
    4.1.2 Geometry
    4.1.3 Adjoint
  4.2 Orthogonal Vector
    4.2.1 Orthogonal Set
    4.2.2 Isometry
    4.2.3 Orthogonal Matrix
    4.2.4 Gram-Schmidt Process
  4.3 Orthogonal Subspace
    4.3.1 Orthogonal Complement
    4.3.2 Orthogonal Projection

5 Determinant
  5.1 Algebra
    5.1.1 Multilinear and Alternating Function
    5.1.2 Column Operation
    5.1.3 Row Operation
    5.1.4 Cofactor Expansion
    5.1.5 Cramer's Rule
  5.2 Geometry
    5.2.1 Volume
    5.2.2 Orientation
    5.2.3 Determinant of Linear Operator
    5.2.4 Geometric Axiom for Determinant

6 General Linear Algebra
  6.1 Complex Linear Algebra
    6.1.1 Complex Number
    6.1.2 Complex Vector Space
    6.1.3 Complex Linear Transformation
    6.1.4 Complexification and Conjugation
    6.1.5 Conjugate Pair of Subspaces
    6.1.6 Complex Inner Product
  6.2 Module over Ring
    6.2.1 Field and Ring
    6.2.2 Abelian Group
    6.2.3 Polynomial
    6.2.4 Trisection of Angle

7 Spectral Theory
  7.1 Eigenspace
    7.1.1 Invariant Subspace
    7.1.2 Eigenspace
    7.1.3 Characteristic Polynomial
    7.1.4 Diagonalisation
    7.1.5 Complex Eigenvalue of Real Operator
  7.2 Orthogonal Diagonalisation
    7.2.1 Normal Operator
    7.2.2 Commutative *-Algebra
    7.2.3 Hermitian Operator
    7.2.4 Unitary Operator
  7.3 Canonical Form
    7.3.1 Generalised Eigenspace
    7.3.2 Nilpotent Operator
    7.3.3 Jordan Canonical Form
    7.3.4 Rational Canonical Form

8 Tensor
  8.1 Bilinear
    8.1.1 Bilinear Map
    8.1.2 Bilinear Function
    8.1.3 Quadratic Form
  8.2 Hermitian
    8.2.1 Sesquilinear Function
    8.2.2 Hermitian Form
    8.2.3 Completing the Square
    8.2.4 Signature
    8.2.5 Positive Definite
  8.3 Multilinear
  8.4 Invariant of Linear Operator
    8.4.1 Symmetric Function

Chapter 1

Vector Space

Linear algebra describes the most basic mathematical structure. The key object in linear algebra is the vector space, which is characterised by the operations of addition and scalar multiplication. The key relation between objects is the linear transformation, which is characterised by preserving the two operations. The key example is the Euclidean space, which is the model for all finite dimensional vector spaces.

The theory of linear algebra can be developed over any field, which is a number system where the usual four arithmetic operations are defined. In fact, a more general theory (of modules) can be developed over any ring, which is a system where only the addition, subtraction and multiplication (no division) are defined. Since the linear algebra of real vector spaces already reflects most of the true spirit of linear algebra, we will concentrate on real vector spaces until Chapter ??.

1.1 Definition

1.1.1 Axioms of Vector Space

Definition 1.1.1. A (real) vector space is a set V, together with the operations of addition and scalar multiplication

    u + v: V × V → V,    au: R × V → V,

such that the following are satisfied.

1. Commutativity: u + v = v + u.
2. Associativity for Addition: (u + v) + w = u + (v + w).
3. Zero: There is an element 0 ∈ V satisfying u + 0 = u = 0 + u.
4. Negative: For any u, there is v (to be denoted −u), such that u + v = 0 = v + u.
5. One: 1u = u.

6. Associativity for Scalar Multiplication: (ab)u = a(bu).
7. Distributivity in R: (a + b)u = au + bu.
8. Distributivity in V: a(u + v) = au + av.

Due to the associativity of addition, we may write u + v + w and even longer expressions without ambiguity.

Example 1.1.1. The zero vector space {0} consists of the single element 0. This leaves no choice for the two operations: 0 + 0 = 0, c0 = 0. It can easily be verified that all eight axioms hold.

Example 1.1.2. The Euclidean space R^n is the set of n-tuples

    x = (x_1, x_2, ..., x_n),    x_i ∈ R.

The i-th number x_i is the i-th coordinate of the vector. The Euclidean space is a vector space with coordinate-wise addition and scalar multiplication

    (x_1, x_2, ..., x_n) + (y_1, y_2, ..., y_n) = (x_1 + y_1, x_2 + y_2, ..., x_n + y_n),
    a(x_1, x_2, ..., x_n) = (ax_1, ax_2, ..., ax_n).

Geometrically, we often express a vector in Euclidean space as a dot, or as an arrow from the origin 0 = (0, 0, ..., 0) to the dot. Figure 1.1.1 shows that the addition is described by the parallelogram, and the scalar multiplication is described by stretching and shrinking.

[Figure 1.1.1: Euclidean space R^2. The sum of (x_1, x_2) and (y_1, y_2) is the parallelogram vertex (x_1 + y_1, x_2 + y_2); the multiples 2(x_1, x_2), 0.5(x_1, x_2) and −(x_1, x_2) stretch, shrink and flip the vector.]

For the purpose of calculation (especially when mixed with matrices), it is more convenient to write the vector as a vertical n × 1 matrix, or the transpose (indicated

by superscript T) of a horizontal 1 × n matrix:

        [x_1]
        [x_2]
    x = [ ⋮ ] = (x_1 x_2 ⋯ x_n)^T.
        [x_n]

We can write

    [x_1]   [y_1]   [x_1 + y_1]        [x_1]   [ax_1]
    [ ⋮ ] + [ ⋮ ] = [    ⋮    ],     a [ ⋮ ] = [  ⋮ ].
    [x_n]   [y_n]   [x_n + y_n]        [x_n]   [ax_n]

Example 1.1.3. All polynomials of degree no more than n form a vector space

    P_n = {a_0 + a_1 t + a_2 t^2 + ⋯ + a_n t^n}.

The addition and scalar multiplication are the usual operations on functions. In fact, the coefficients of the polynomial provide a one-to-one correspondence

    a_0 + a_1 t + a_2 t^2 + ⋯ + a_n t^n ∈ P_n  ⟷  (a_0, a_1, a_2, ..., a_n) ∈ R^{n+1}.

Since the one-to-one correspondence preserves the addition and scalar multiplication, it identifies the polynomial space P_n with the Euclidean space R^{n+1}, as far as the two operations are concerned. Such identifications are isomorphisms. The rigorous definition of isomorphism and discussion of the concept will appear in Section ??.

Example 1.1.4. An m × n matrix A is mn numbers arranged in m rows and n columns. The number a_ij in the i-th row and j-th column of A is called the (i, j)-entry of A. We also indicate the matrix by A = (a_ij). All m × n matrices form a vector space M_{m×n} with the obvious addition and scalar multiplication. For example, in M_{3×2} we have

    [x_11  x_12]   [y_11  y_12]   [x_11 + y_11  x_12 + y_12]
    [x_21  x_22] + [y_21  y_22] = [x_21 + y_21  x_22 + y_22],
    [x_31  x_32]   [y_31  y_32]   [x_31 + y_31  x_32 + y_32]

      [x_11  x_12]   [ax_11  ax_12]
    a [x_21  x_22] = [ax_21  ax_22].
      [x_31  x_32]   [ax_31  ax_32]

We have an isomorphism

    [x_1  y_1]
    [x_2  y_2] ∈ M_{3×2}  ⟷  (x_1, x_2, x_3, y_1, y_2, y_3) ∈ R^6,
    [x_3  y_3]

that can be used to translate linear algebra problems about matrices into linear algebra problems in Euclidean spaces. We also have the general transpose isomorphism that identifies m × n matrices with n × m matrices (see Example 2.3.7 for the general formula):

        [x_1  y_1]
    A = [x_2  y_2] ∈ M_{3×2}  ⟷  A^T = [x_1  x_2  x_3] ∈ M_{2×3}.
        [x_3  y_3]                     [y_1  y_2  y_3]

Example 1.1.2 gives an isomorphism

    (x_1, x_2, ..., x_n) ∈ R^n  ⟷  (x_1 x_2 ⋯ x_n)^T ∈ M_{n×1}.

The transpose is also an isomorphism

    x = (x_1 x_2 ⋯ x_n)^T ∈ M_{n×1}  ⟷  x^T = (x_1 x_2 ⋯ x_n) ∈ M_{1×n}.

The addition, scalar multiplication, and transpose of matrices are defined in the most obvious way. However, even simple definitions need to be justified. See Section ?? for the justification of addition and scalar multiplication. See Definition 4.1.5 for the justification of transpose.

Example 1.1.5. All sequences (x_n)_{n=1}^∞ of real numbers form a vector space V, with the addition and scalar multiplication given by

    (x_n) + (y_n) = (x_n + y_n),    a(x_n) = (ax_n).

Example 1.1.6. All smooth functions form a vector space C^∞, with the usual addition and scalar multiplication of functions. This vector space is not isomorphic to a Euclidean space because it is infinite dimensional.

Exercise 1.1. Prove that (a + b)(x + y) = ax + by + bx + ay in any vector space.

Exercise 1.2. Introduce the following addition and scalar multiplication on R^2:

    (x_1, x_2) + (y_1, y_2) = (x_1 + y_2, x_2 + y_1),    a(x_1, x_2) = (ax_1, ax_2).

Check which axioms of vector space are true, and which are false.
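Exercises like 1.2 can be probed numerically before proving anything. The following Python sketch (not part of the text) tests the twisted addition of Exercise 1.2 against Axioms 1 and 2 on sample points; a failing sample disproves an axiom, while passing samples prove nothing.

```python
# Exercise 1.2's operations on R^2 (an illustrative check, not a proof):
# (x1, x2) + (y1, y2) = (x1 + y2, x2 + y1), a(x1, x2) = (a x1, a x2).
def add(x, y):
    return (x[0] + y[1], x[1] + y[0])

x, y, z = (1, 2), (3, 5), (0, 4)

# Axiom 1 (commutativity) fails: add(x, y) != add(y, x) in general.
print(add(x, y), add(y, x))            # (6, 5) versus (5, 6)

# Axiom 2 (associativity) also fails on this sample.
print(add(add(x, y), z), add(x, add(y, z)))   # (10, 5) versus (6, 9)
```

Scalar multiplication here is the usual one, so the axioms that mention only scalars (for example Axiom 5) still hold; the failures come from the addition.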

Exercise 1.3. Introduce the following addition and scalar multiplication on R^2:

    (x_1, x_2) + (y_1, y_2) = (x_1 + y_1, 0),    a(x_1, x_2) = (ax_1, 0).

Check which axioms of vector space are true, and which are false.

Exercise 1.4. Introduce the following addition and scalar multiplication on R^2:

    (x_1, x_2) + (y_1, y_2) = (x_1 + Ay_1, x_2 + By_2),    a(x_1, x_2) = (ax_1, ax_2).

Show that this makes R^2 into a vector space if and only if A = B = 1.

Exercise 1.5. Show that all convergent sequences form a vector space.

Exercise 1.6. Show that all even smooth functions form a vector space.

Exercise 1.7. Explain that the transpose of matrices satisfies

    (A + B)^T = A^T + B^T,    (aA)^T = aA^T,    (A^T)^T = A.

Exercise 2.75 gives a conceptual explanation of these equalities.

1.1.2 Consequence of Axiom

Now we establish some basic properties of vector spaces. You can directly verify these properties in Euclidean spaces. However, the proof for general vector spaces can only use the axioms.

Proposition 1.1.2. The zero vector is unique.

Proof. Suppose 0_1 and 0_2 are two zero vectors. Applying the first equality in Axiom 3 to u = 0_1 and 0 = 0_2, we get 0_1 + 0_2 = 0_1. Applying the second equality in Axiom 3 to 0 = 0_1 and u = 0_2, we get 0_2 = 0_1 + 0_2. Combining the two equalities, we get 0_2 = 0_1 + 0_2 = 0_1.

Proposition 1.1.3. If u + v = u, then v = 0.

By Axiom 1, we also have that v + u = u implies v = 0. Both properties are the cancellation law.

Proof. Suppose u + v = u. By Axiom 4, there is w such that w + u = 0. (We use w instead of v in the axiom, because v is already used in the proposition.) Then

    v = 0 + v          (Axiom 3)
      = (w + u) + v    (choice of w)
      = w + (u + v)    (Axiom 2)
      = w + u          (assumption)
      = 0.             (choice of w)
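The chain of equalities in this proof can be traced numerically in R^3, which is essentially the direct verification that Exercise 1.8 below asks for. The sketch uses numpy, an assumption of this illustration rather than part of the text.

```python
import numpy as np

# Trace the proof of Proposition 1.1.3 in R^3: from u + v = u,
# conclude v = 0 by adding the negative w = -u on the left.
u = np.array([2.0, -1.0, 3.0])
v = np.zeros(3)                              # the only v with u + v = u

assert np.array_equal(u + v, u)              # the assumption u + v = u
w = -u                                       # Axiom 4: w + u = 0
steps = [v,
         np.zeros(3) + v,                    # Axiom 3
         (w + u) + v,                        # choice of w
         w + (u + v),                        # Axiom 2
         w + u]                              # assumption
for s in steps:
    assert np.array_equal(s, np.zeros(3))    # every step equals 0
```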

Proposition 1.1.4. au = 0 if and only if a = 0 or u = 0.

Proof. First we prove 0u = 0. By Axiom 7, we have 0u + 0u = (0 + 0)u = 0u. By Proposition 1.1.3, we get 0u = 0.

Next we prove a0 = 0. By Axioms 8 and 3, we have a0 + a0 = a(0 + 0) = a0. By Proposition 1.1.3, we get a0 = 0.

The equalities 0u = 0 and a0 = 0 give the "if" part of the proposition. The "only if" part means that au = 0 implies a = 0 or u = 0. This is the same as: au = 0 and a ≠ 0 imply u = 0. So we assume au = 0 and a ≠ 0, and apply Axioms 5, 6 and a0 = 0 (just proved) to get

    u = 1u = (a^{−1}a)u = a^{−1}(au) = a^{−1}0 = 0.

Exercise 1.8. Directly verify Propositions 1.1.2, 1.1.3, 1.1.4 in R^n.

Exercise 1.9. Prove that the vector v in Axiom 4 is unique. This justifies the notation −u.

Exercise 1.10. Prove the more general version of the cancellation law: u + v_1 = u + v_2 implies v_1 = v_2.

Exercise 1.11. We use Exercise 1.9 to define u − v = u + (−v). Prove the following properties:

    (−1)u = −u,    −(−u) = u,    −(u − v) = −u + v,    −(u + v) = −u − v.

1.2 Span and Linear Independence

The repeated use of addition and scalar multiplication gives the linear combination

    a_1 v_1 + a_2 v_2 + ⋯ + a_n v_n.

By using the axioms, it is easy to verify the usual properties of linear combinations, such as

    (a_1 v_1 + ⋯ + a_n v_n) + (b_1 v_1 + ⋯ + b_n v_n) = (a_1 + b_1)v_1 + ⋯ + (a_n + b_n)v_n,
    c(a_1 v_1 + ⋯ + a_n v_n) = ca_1 v_1 + ⋯ + ca_n v_n.
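The two linear-combination identities above can be spot-checked in R^3 with a short numpy sketch (an illustration on sample data, not a proof):

```python
import numpy as np

v = [np.array([1.0, 0.0, 2.0]),
     np.array([0.0, 1.0, 1.0]),
     np.array([3.0, 1.0, 0.0])]
a = [2.0, -1.0, 0.5]
b = [1.0, 4.0, -2.0]
c = 3.0

# lin(coeffs) computes coeffs[0] v_1 + coeffs[1] v_2 + coeffs[2] v_3.
lin = lambda coeffs: sum(ci * vi for ci, vi in zip(coeffs, v))

# (a_1 v_1 + ...) + (b_1 v_1 + ...) = (a_1 + b_1) v_1 + ...
assert np.allclose(lin(a) + lin(b), lin([ai + bi for ai, bi in zip(a, b)]))

# c(a_1 v_1 + ...) = (c a_1) v_1 + ...
assert np.allclose(c * lin(a), lin([c * ai for ai in a]))
```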

[Figure 1.2.1: Linear combinations of two vectors u and v, such as u + v, 2u + v, 2u + 3v, 3u − v, −u − 2v and 5u − 4v, together with the multiples 0.5u, 1.5u, 2u of u.]

The linear combination produces many more vectors from several seed vectors. If we start with a nonzero seed vector u, then all its linear combinations au form a straight line passing through the origin 0. If we start with two non-parallel vectors u and v, then all their linear combinations au + bv form a plane passing through the origin 0.

Exercise 1.12. Suppose each of w_1, w_2, ..., w_m is a linear combination of v_1, v_2, ..., v_n. Prove that a linear combination of w_1, w_2, ..., w_m is also a linear combination of v_1, v_2, ..., v_n.

1.2.1 Definition

We often have mechanisms that produce new objects from existing objects. For example, addition and multiplication produce new numbers from two existing numbers. The linear combination is then comparable to all the new objects that can be produced from several seed objects. For example, by using the mechanism +1 and starting with the seed number 1, we can produce all the natural numbers N. For another example, by using multiplication and starting with all prime numbers as seeds, we can produce N.

Two questions naturally arise. The first is whether the mechanism and the seed objects can produce all the objects. The answer is yes for the mechanism +1 and seed 1 producing all natural numbers. The answer is no for the following:

1. mechanism +2, seed 1, producing N;
2. mechanism +1, seed 2, producing N;
3. mechanism +1, seed 1, producing Q (the rational numbers).

The answer is also yes for multiplication and all prime numbers producing N.

The second question is whether the way new objects are produced is unique. For example, 12 is obtained by applying +1 to 1 eleven times, and this process

is unique. For another example, 12 is obtained by multiplying the prime numbers 2, 2, 3 together, and this collection of prime numbers is unique.

For the linear combination, the first question is span, and the second question is linear independence.

Definition 1.2.1. A set of vectors v_1, v_2, ..., v_n spans V if any vector in V can be expressed as a linear combination of v_1, v_2, ..., v_n. The vectors are linearly independent if the coefficients in the linear combination are unique:

    a_1 v_1 + a_2 v_2 + ⋯ + a_n v_n = b_1 v_1 + b_2 v_2 + ⋯ + b_n v_n
        ⟹  a_1 = b_1, a_2 = b_2, ..., a_n = b_n.

The vectors are linearly dependent if they are not linearly independent.

Example 1.2.1. The standard basis vector e_i in R^n has the i-th coordinate 1 and all other coordinates 0. For example, the standard basis vectors of R^3 are

    e_1 = (1, 0, 0),    e_2 = (0, 1, 0),    e_3 = (0, 0, 1).

For any vector in R^n, we can easily get the expression

    (x_1, x_2, ..., x_n) = x_1 e_1 + x_2 e_2 + ⋯ + x_n e_n.

This shows that any vector can be expressed as a linear combination of the standard basis vectors, and therefore the vectors span R^n. Moreover, the equality also implies that, if two expressions on the right are equal,

    x_1 e_1 + x_2 e_2 + ⋯ + x_n e_n = y_1 e_1 + y_2 e_2 + ⋯ + y_n e_n,

then the two expressions on the left are also equal:

    (x_1, x_2, ..., x_n) = (y_1, y_2, ..., y_n).

Of course this means exactly x_1 = y_1, x_2 = y_2, ..., x_n = y_n. We conclude that the standard basis vectors are also linearly independent.

Example 1.2.2. Any polynomial of degree no more than n is of the form

    p(t) = a_0 + a_1 t + a_2 t^2 + ⋯ + a_n t^n.

The formula can be interpreted as saying that p(t) is a linear combination of the monomials 1, t, t^2, ..., t^n. Therefore the monomials span P_n. Moreover, if two linear combinations of monomials are equal,

    a_0 + a_1 t + a_2 t^2 + ⋯ + a_n t^n = b_0 + b_1 t + b_2 t^2 + ⋯ + b_n t^n,

then we may regard the equality as two polynomials being equal. From the high school experience on two polynomials being equal, we know that this means the coefficients are equal: a_0 = b_0, a_1 = b_1, ..., a_n = b_n. We conclude that the monomials are also linearly independent.
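The expansion in Example 1.2.1 can be written out with numpy (the library choice is an illustration, not part of the text); the columns of the identity matrix are exactly e_1, e_2, e_3.

```python
import numpy as np

e = np.eye(3)                    # columns are e_1, e_2, e_3 of R^3
x = np.array([5.0, -2.0, 7.0])

# (x_1, x_2, x_3) = x_1 e_1 + x_2 e_2 + x_3 e_3, as in Example 1.2.1.
combo = x[0] * e[:, 0] + x[1] * e[:, 1] + x[2] * e[:, 2]
assert np.array_equal(combo, x)

# Independence: the only coefficients producing the zero vector are 0.
coeffs = np.linalg.solve(e, np.zeros(3))
assert np.array_equal(coeffs, np.zeros(3))
```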

Exercise 1.13. Show that the following matrices in M_{3×2} span the vector space and are linearly independent:

    [1 0]  [0 0]  [0 0]  [0 1]  [0 0]  [0 0]
    [0 0], [1 0], [0 0], [0 0], [0 1], [0 0].
    [0 0]  [0 0]  [1 0]  [0 0]  [0 0]  [0 1]

Proposition 1.2.2. A set of vectors v_1, v_2, ..., v_n is linearly independent if and only if the linear combination expression of the zero vector is unique:

    a_1 v_1 + a_2 v_2 + ⋯ + a_n v_n = 0  ⟹  a_1 = a_2 = ⋯ = a_n = 0.

Proof. The property in the proposition is the special case b_1 = ⋯ = b_n = 0 of the definition of linear independence. Conversely, if the special case holds, then

    a_1 v_1 + a_2 v_2 + ⋯ + a_n v_n = b_1 v_1 + b_2 v_2 + ⋯ + b_n v_n
        ⟹  (a_1 − b_1)v_1 + (a_2 − b_2)v_2 + ⋯ + (a_n − b_n)v_n = 0
        ⟹  a_1 − b_1 = a_2 − b_2 = ⋯ = a_n − b_n = 0.    (special case)

The second implication is obtained by applying the special case.

Proposition 1.2.3. A set of vectors is linearly dependent if and only if one vector is a linear combination of the other vectors.

Proof. By Proposition 1.2.2, v_1, v_2, ..., v_n are linearly dependent if and only if there are a_1, a_2, ..., a_n, not all 0, such that

    a_1 v_1 + a_2 v_2 + ⋯ + a_n v_n = 0.

If a_i ≠ 0, then the equality implies

    v_i = −(a_1/a_i)v_1 − ⋯ − (a_{i−1}/a_i)v_{i−1} − (a_{i+1}/a_i)v_{i+1} − ⋯ − (a_n/a_i)v_n.

This expresses the i-th vector as a linear combination of the other vectors.

Conversely, if

    v_i = a_1 v_1 + ⋯ + a_{i−1} v_{i−1} + a_{i+1} v_{i+1} + ⋯ + a_n v_n,

then

    0 = a_1 v_1 + ⋯ + a_{i−1} v_{i−1} − 1·v_i + a_{i+1} v_{i+1} + ⋯ + a_n v_n,

where the i-th coefficient is −1 ≠ 0. By Proposition 1.2.2, the vectors are linearly dependent.

Proposition 1.2.4. A single vector is linearly dependent if and only if it is the zero vector. Two vectors are linearly dependent if and only if one is a scalar multiple of the other.

Proof. The zero vector 0 is linearly dependent because 1·0 = 0, with the coefficient 1 ≠ 0. Conversely, if v ≠ 0 and av = 0, then by Proposition 1.1.4 we have a = 0. This proves that a non-zero vector is linearly independent.
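Proposition 1.2.3 can be seen concretely in R^3 with a numpy sketch (an illustration, not a proof); the three vectors chosen here reappear in Example 1.2.3.

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([4.0, 5.0, 6.0])
v3 = np.array([7.0, 8.0, 9.0])

# One vector is a linear combination of the others: v3 = 2 v2 - v1,
# so by Proposition 1.2.3 the three vectors are linearly dependent.
assert np.allclose(2 * v2 - v1, v3)

# Equivalently (Proposition 1.2.2), the not-all-zero coefficients
# (-1, 2, -1) express the zero vector.
assert np.allclose(-v1 + 2 * v2 - v3, np.zeros(3))
```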

By Proposition 1.2.3, two vectors u and v are linearly dependent if and only if either u is a linear combination of v, or v is a linear combination of u. Since a linear combination of a single vector is simply a scalar multiple, the proposition follows.

Exercise 1.14. Suppose a set of vectors is linearly independent. Prove that any smaller set is still linearly independent.

Exercise 1.15. Suppose a set of vectors is linearly dependent. Prove that any bigger set is still linearly dependent.

Exercise 1.16. Suppose v_1, v_2, ..., v_n are linearly independent. Prove that v_1, v_2, ..., v_n, v_{n+1} are still linearly independent if and only if v_{n+1} is not a linear combination of v_1, v_2, ..., v_n.

Exercise 1.17. Show that v_1, ..., v_i, ..., v_j, ..., v_n span V if and only if v_1, ..., v_j, ..., v_i, ..., v_n span V. Moreover, the linear independence of the two sets is also equivalent.

Exercise 1.18. Show that v_1, v_2, ..., v_n span V if and only if v_1, ..., cv_i, ..., v_n, c ≠ 0, span V. Moreover, the linear independence is also equivalent.

Exercise 1.19. Show that v_1, v_2, ..., v_n span V if and only if v_1, ..., v_i + cv_j, ..., v_j, ..., v_n span V. Moreover, the linear independence is also equivalent.

1.2.2 System of Linear Equations

We try to calculate the concepts of span and linear independence in Euclidean space.

Example 1.2.3. For the vectors v_1 = (1, 2, 3), v_2 = (4, 5, 6), v_3 = (7, 8, 9) to span R^3 means that any vector b = (b_1, b_2, b_3) ∈ R^3 can be expressed as a linear combination

    x_1 v_1 + x_2 v_2 + x_3 v_3 = (x_1 + 4x_2 + 7x_3, 2x_1 + 5x_2 + 8x_3, 3x_1 + 6x_2 + 9x_3) = (b_1, b_2, b_3).

It is easier to see the meaning in vertical form:

        [1]       [4]       [7]   [x_1 + 4x_2 + 7x_3]   [b_1]
    x_1 [2] + x_2 [5] + x_3 [8] = [2x_1 + 5x_2 + 8x_3] = [b_2].
        [3]       [6]       [9]   [3x_1 + 6x_2 + 9x_3]   [b_3]

We find that spanning means that the system of linear equations

    x_1 + 4x_2 + 7x_3 = b_1,
    2x_1 + 5x_2 + 8x_3 = b_2,
    3x_1 + 6x_2 + 9x_3 = b_3,

has a solution for every right side.
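Whether this system is solvable for every right side can be settled numerically. The sketch below uses numpy's matrix_rank, a tool the text has not developed yet, so treat it as a black box: the coefficient matrix has rank 2 < 3, so the columns do not span R^3.

```python
import numpy as np

# Coefficient matrix of Example 1.2.3; its columns are the vectors
# (1, 2, 3), (4, 5, 6), (7, 8, 9).
A = np.array([[1.0, 4.0, 7.0],
              [2.0, 5.0, 8.0],
              [3.0, 6.0, 9.0]])

rank = np.linalg.matrix_rank(A)
print(rank)                      # 2

# rank < 3 means some right sides b admit no solution (no span), and
# the homogeneous system has nontrivial solutions (dependence).
assert rank == 2
```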

By Proposition 1.2.2, for the vectors to be linearly independent means that

        [1]       [4]       [7]   [x_1 + 4x_2 + 7x_3]   [0]
    x_1 [2] + x_2 [5] + x_3 [8] = [2x_1 + 5x_2 + 8x_3] = [0]  ⟹  x_1 = x_2 = x_3 = 0.
        [3]       [6]       [9]   [3x_1 + 6x_2 + 9x_3]   [0]

In other words, the homogeneous system of linear equations

    x_1 + 4x_2 + 7x_3 = 0,
    2x_1 + 5x_2 + 8x_3 = 0,
    3x_1 + 6x_2 + 9x_3 = 0,

has only the trivial solution x_1 = x_2 = x_3 = 0.

Example 1.2.4. For the polynomials p_1(t) = 1 + 2t + 3t^2, p_2(t) = 4 + 5t + 6t^2, p_3(t) = 7 + 8t + 9t^2 to span the vector space P_2 means that any polynomial b_1 + b_2 t + b_3 t^2 can be expressed as a linear combination

    b_1 + b_2 t + b_3 t^2 = x_1(1 + 2t + 3t^2) + x_2(4 + 5t + 6t^2) + x_3(7 + 8t + 9t^2)
                          = (x_1 + 4x_2 + 7x_3) + (2x_1 + 5x_2 + 8x_3)t + (3x_1 + 6x_2 + 9x_3)t^2.

The equality is the same as the system of linear equations

    x_1 + 4x_2 + 7x_3 = b_1,
    2x_1 + 5x_2 + 8x_3 = b_2,
    3x_1 + 6x_2 + 9x_3 = b_3.

Then p_1(t), p_2(t), p_3(t) span P_2 if and only if the system has a solution for all b_1, b_2, b_3. Similarly, the three polynomials are linearly independent if and only if

    0 = x_1(1 + 2t + 3t^2) + x_2(4 + 5t + 6t^2) + x_3(7 + 8t + 9t^2)
      = (x_1 + 4x_2 + 7x_3) + (2x_1 + 5x_2 + 8x_3)t + (3x_1 + 6x_2 + 9x_3)t^2

implies x_1 = x_2 = x_3 = 0. This is the same as requiring that the homogeneous system of linear equations

    x_1 + 4x_2 + 7x_3 = 0,
    2x_1 + 5x_2 + 8x_3 = 0,
    3x_1 + 6x_2 + 9x_3 = 0,

has only the trivial solution x_1 = x_2 = x_3 = 0.

We see that the problems of span and linear independence in general vector spaces may be translated into similar problems in Euclidean spaces. In fact, the translation is given by the isomorphism

    a_0 + a_1 t + a_2 t^2 ∈ P_2  ⟷  (a_0, a_1, a_2) ∈ R^3.
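The isomorphism lets us compute with the polynomials of Example 1.2.4 as coefficient vectors. A numpy sketch (the least-squares call is an assumption of this illustration, used only to probe solvability of one instance):

```python
import numpy as np

# Coefficient vectors of p1 = 1 + 2t + 3t^2, p2 = 4 + 5t + 6t^2,
# p3 = 7 + 8t + 9t^2 under a_0 + a_1 t + a_2 t^2 <-> (a_0, a_1, a_2).
A = np.column_stack([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0],
                     [7.0, 8.0, 9.0]])

# Expressing b(t) = 2 + t as x_1 p1 + x_2 p2 + x_3 p3 is exactly the
# system A x = b of Example 1.2.4; this particular b is solvable
# (for instance x = (-2, 1, 0)), even though the columns do not span.
b = np.array([2.0, 1.0, 0.0])
x = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(A @ x, b)     # b(t) lies in the span of p1, p2, p3
```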

Example 1.2.5. Suppose v_1, v_2, v_3 span V. We ask whether

    v_1 + 2v_2 + 3v_3,    4v_1 + 5v_2 + 6v_3,    7v_1 + 8v_2 + 9v_3

still span V. The assumption is that any x ∈ V can be expressed as x = b_1 v_1 + b_2 v_2 + b_3 v_3 for some scalars b_1, b_2, b_3. What we try to conclude is that any x can be expressed as

    x = x_1(v_1 + 2v_2 + 3v_3) + x_2(4v_1 + 5v_2 + 6v_3) + x_3(7v_1 + 8v_2 + 9v_3)

for some scalars x_1, x_2, x_3. By

    b_1 v_1 + b_2 v_2 + b_3 v_3 = x_1(v_1 + 2v_2 + 3v_3) + x_2(4v_1 + 5v_2 + 6v_3) + x_3(7v_1 + 8v_2 + 9v_3)
                                = (x_1 + 4x_2 + 7x_3)v_1 + (2x_1 + 5x_2 + 8x_3)v_2 + (3x_1 + 6x_2 + 9x_3)v_3,

the question becomes the following: for any b_1, b_2, b_3 (which give any x ∈ V), find x_1, x_2, x_3 satisfying

    x_1 + 4x_2 + 7x_3 = b_1,
    2x_1 + 5x_2 + 8x_3 = b_2,
    3x_1 + 6x_2 + 9x_3 = b_3.

In other words, we want the system of equations to have a solution for every right side. By Example 1.2.3, this means that the vectors (1, 2, 3), (4, 5, 6), (7, 8, 9) span R^3.

We may also ask whether the linear independence of v_1, v_2, v_3 implies the linear independence of v_1 + 2v_2 + 3v_3, 4v_1 + 5v_2 + 6v_3, 7v_1 + 8v_2 + 9v_3. By a similar argument, the question becomes whether the homogeneous system of linear equations

    x_1 + 4x_2 + 7x_3 = 0,
    2x_1 + 5x_2 + 8x_3 = 0,
    3x_1 + 6x_2 + 9x_3 = 0,

has only the trivial solution x_1 = x_2 = x_3 = 0. By Example 1.2.3, this means that the vectors (1, 2, 3), (4, 5, 6), (7, 8, 9) are linearly independent.

In general, we want to know whether vectors v_1, v_2, ..., v_n ∈ R^m span the Euclidean space or are linearly independent. We use the vectors to form the columns of a matrix

                 [a_11  a_12  ⋯  a_1n]
    A = (a_ij) = [a_21  a_22  ⋯  a_2n] = (v_1  v_2  ⋯  v_n),    v_i = (a_1i, a_2i, ..., a_mi)^T,    (1.2.1)
                 [ ⋮     ⋮        ⋮  ]
                 [a_m1  a_m2  ⋯  a_mn]

and then denote the linear combination of the vectors as A x:

A x = x_1 v_1 + x_2 v_2 + ⋯ + x_n v_n
    = x_1 (a_11, a_21, ..., a_m1) + x_2 (a_12, a_22, ..., a_m2) + ⋯ + x_n (a_1n, a_2n, ..., a_mn)
    = (a_11 x_1 + a_12 x_2 + ⋯ + a_1n x_n, a_21 x_1 + a_22 x_2 + ⋯ + a_2n x_n, ..., a_m1 x_1 + a_m2 x_2 + ⋯ + a_mn x_n).      (1.2.2)

Then A x = b means the system of linear equations

a_11 x_1 + a_12 x_2 + ⋯ + a_1n x_n = b_1,
a_21 x_1 + a_22 x_2 + ⋯ + a_2n x_n = b_2,
⋯
a_m1 x_1 + a_m2 x_2 + ⋯ + a_mn x_n = b_m,

and the columns of A correspond to the variables. We call A the coefficient matrix of the system, and call b = (b_1, b_2, ..., b_m) the right side. The augmented matrix of the system is

          [ a_11 a_12 ⋯ a_1n b_1 ]
(A b) =   [ a_21 a_22 ⋯ a_2n b_2 ]
          [  ⋮    ⋮       ⋮    ⋮  ]
          [ a_m1 a_m2 ⋯ a_mn b_m ]

Example 1.2.3 shows that vectors span the Euclidean space if and only if the corresponding system A x = b has solution for all b, and vectors are linearly independent if and only if the homogeneous system A x = 0 has only the trivial solution (or the solution of A x = b is unique, by Definition 1.2.1). Example 1.2.4 shows that the span and linear independence of vectors in a general (finite dimensional) vector space can be translated to the Euclidean space via an isomorphism, and can then be further interpreted as the existence and uniqueness of the solution of a system of linear equations. We summarise the discussion in the following dictionary:

v_1, v_2, ..., v_n span R^m            ↔  A x = b has solution for all b
v_1, v_2, ..., v_n linearly independent ↔  the solution of A x = b is unique

Exercise 1.20. Translate whether the vectors span the Euclidean space or are linearly independent into a problem about some systems of linear equations.
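Formula (1.2.2) says that A x can be computed either row by row (each entry is a dot product) or as a combination of the columns. A small sketch (our own, not from the text) computes both and confirms they agree:

```python
# A x computed two ways: row picture (each entry is a dot product) and
# column picture (x1 times column 1 plus x2 times column 2, and so on).
A = [[1, 4, 7],
     [2, 5, 8],
     [3, 6, 9]]
x = [1, 1, 1]

row_picture = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

col_picture = [0, 0, 0]
for j in range(3):          # add x_j times the j-th column of A
    for i in range(3):
        col_picture[i] += x[j] * A[i][j]

print(row_picture, col_picture)  # [12, 15, 18] [12, 15, 18]
```

The column picture is the one used throughout this section: A x = b asks whether b is a linear combination of the columns.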

1. (1, 2, 3), (2, 3, 1), (3, 1, 2).
2. (1, 2, 3, 4), (2, 3, 4, 5), (3, 4, 5, 6).
3. (1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12).
4. (1, 2, 3), (2, 3, 1), (3, 1, a).
5. (1, 2, 3, 4), (2, 3, 4, 5), (3, 4, a, a).
6. (1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, a).

Exercise 1.21. Translate whether the polynomials span a suitable polynomial vector space or are linearly independent into a problem about some systems of linear equations.

1. 1 + 2t + 3t^2, 2 + 3t + t^2, 3 + t + 2t^2.
2. 1 + 2t + 3t^2 + 4t^3, 2 + 3t + 4t^2 + 5t^3, 3 + 4t + 5t^2 + 6t^3.
3. 3 + 2t + t^2, 6 + 5t + 4t^2, 9 + 8t + 7t^2, 12 + 11t + 10t^2.
4. 1 + 2t + 3t^2, 2 + 3t + t^2, 3 + t + at^2.
5. 1 + 2t + 3t^2 + 4t^3, 2 + 3t + 4t^2 + 5t^3, 3 + 4t + at^2 + at^3.
6. 3 + 2t + t^2, 6 + 5t + 4t^2, a + 8t + 7t^2, b + 11t + 10t^2.

Exercise 1.22. Translate whether the matrices span a suitable matrix vector space or are linearly independent into a problem about some systems of linear equations.

1. [ 1 2 ; 3 4 ], [ 3 4 ; 1 2 ], [ 2 1 ; 4 3 ], [ 4 3 ; 2 1 ].
2. [ 1 4 ; 2 3 ], [ 4 3 ; 1 2 ], [ 3 2 ; 4 1 ], [ 2 1 ; 3 4 ].
3. [ 1 2 3 ; 4 5 6 ], [ 4 5 6 ; 1 2 3 ].
4. [ 1 3 5 ; 2 4 6 ], [ 5 1 3 ; 6 2 4 ], [ 3 5 1 ; 4 6 2 ].

Exercise 1.23. Show that a system is homogeneous if and only if x = 0 is a solution.

Exercise 1.24. Let α = { v_1, v_2, ..., v_m } be a set of vectors in V. Let

β = { a_11 v_1 + a_21 v_2 + ⋯ + a_m1 v_m, a_12 v_1 + a_22 v_2 + ⋯ + a_m2 v_m, ..., a_1n v_1 + a_2n v_2 + ⋯ + a_mn v_m }

be a set of linear combinations of α. Let A = (a_ij) be the m × n coefficient matrix.

1. Suppose β spans V. Prove that α spans V.
2. Suppose α spans V, and the columns of A span R^m. Prove that β spans V.
3. If β is linearly independent, can you conclude that α is linearly independent?
4. Suppose α is linearly independent, and the columns of A are linearly independent. Prove that β is linearly independent.

Exercise 1.25. Explain Exercises 1.17, 1.18, 1.19 by using Exercise 1.24.

1.2.3 Gaussian Elimination

In high school, we learned to solve a system of linear equations by simplifying the system. The simplification is done by combining equations to eliminate variables.

Example 1.2.6. To solve the system of linear equations

x_1 + 4x_2 + 7x_3 = 10,
2x_1 + 5x_2 + 8x_3 = 11,
3x_1 + 6x_2 + 9x_3 = 12,

we may eliminate x_1 in the second and third equations by eq_2 − 2eq_1 (multiply the first equation by −2 and add to the second equation) and eq_3 − 3eq_1 (multiply the first equation by −3 and add to the third equation) to get

x_1 + 4x_2 + 7x_3 = 10,
−3x_2 − 6x_3 = −9,
−6x_2 − 12x_3 = −18.

Then we use eq_3 − 2eq_2 and −(1/3)eq_2 (multiplying −1/3 to the second equation) to get

x_1 + 4x_2 + 7x_3 = 10,
x_2 + 2x_3 = 3,
0 = 0.

The third equation is trivial. We get x_2 = 3 − 2x_3 from the second equation. Then we substitute this into the first equation to get x_1 = −2 + x_3. We conclude the general solution

x_1 = −2 + x_3,  x_2 = 3 − 2x_3,  x_3 arbitrary.

We can use Gaussian elimination because the method does not change the solutions of the system. For example, if eq_1, eq_2, eq_3 hold, then eq′_1 = eq_1, eq′_2 = eq_2 − 2eq_1, eq′_3 = eq_3 − 3eq_1 hold. Conversely, if eq′_1, eq′_2, eq′_3 hold, then eq_1 = eq′_1, eq_2 = eq′_2 + 2eq′_1, eq_3 = eq′_3 + 3eq′_1 hold.

The existence of solution means that there are x_1, x_2, x_3 satisfying

x_1 (1, 2, 3) + x_2 (4, 5, 6) + x_3 (7, 8, 9) = (10, 11, 12).

In other words, the vector (10, 11, 12) is a linear combination of (1, 2, 3), (4, 5, 6), (7, 8, 9).
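A general solution can always be double-checked by substitution. The sketch below (ours, not from the text) plugs the one-parameter family of Example 1.2.6 back into the three original equations for a range of values of the free variable x_3:

```python
# Verify the general solution of Example 1.2.6: for every value of the
# free variable x3, the triple (x1, x2, x3) = (-2 + x3, 3 - 2*x3, x3)
# should satisfy all three original equations.
def solves(x1, x2, x3):
    return (x1 + 4*x2 + 7*x3 == 10 and
            2*x1 + 5*x2 + 8*x3 == 11 and
            3*x1 + 6*x2 + 9*x3 == 12)

assert all(solves(-2 + t, 3 - 2*t, t) for t in range(-5, 6))
print("general solution verified")
```

Such a check confirms that the row combinations did not change the solution set, which is the point of the paragraph above.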

Example 1.2.7. By Example 1.2.3, for (1, 2, 3), (4, 5, 6), (7, 8, 9) to span R^3 means that the system of linear equations

x_1 + 4x_2 + 7x_3 = b_1,
2x_1 + 5x_2 + 8x_3 = b_2,
3x_1 + 6x_2 + 9x_3 = b_3,

has solution for all b_1, b_2, b_3. By the same elimination as in Example 1.2.6, we get

x_1 + 4x_2 + 7x_3 = b_1,                (eq″_1 = eq′_1 = eq_1)
x_2 + 2x_3 = (2/3)b_1 − (1/3)b_2,       (eq″_2 = −(1/3)eq′_2 = (2/3)eq_1 − (1/3)eq_2)
0 = b_1 − 2b_2 + b_3.                   (eq″_3 = eq′_3 − 2eq′_2 = eq_1 − 2eq_2 + eq_3)

The last equation shows that there is no solution unless b_1 − 2b_2 + b_3 = 0. Therefore the three vectors do not span R^3.

We also know that, for the vectors to be linearly independent means that the homogeneous system of linear equations

x_1 + 4x_2 + 7x_3 = 0,
2x_1 + 5x_2 + 8x_3 = 0,
3x_1 + 6x_2 + 9x_3 = 0,

has only the trivial solution x_1 = x_2 = x_3 = 0. By the same elimination, we get

x_1 + 4x_2 + 7x_3 = 0,
x_2 + 2x_3 = 0,
0 = 0.

It is easy to see that the simplified system has the non-trivial solution x_3 = 1, x_2 = −2 (from the second equation), x_1 = −4(−2) − 7·1 = 1 (back substitution). Indeed, we can verify that

v_1 − 2v_2 + v_3 = (1, 2, 3) − 2(4, 5, 6) + (7, 8, 9) = (0, 0, 0).

This explicitly shows that the three vectors are linearly dependent.

1.2.4 Row Echelon Form

The Gaussian elimination in Example 1.2.6 is equivalent to the following row operations on the augmented matrix

[ 1 4 7 10 ]               [ 1  4   7   10 ]               [ 1 4 7 10 ]
[ 2 5 8 11 ]  R_2 − 2R_1   [ 0 −3  −6  −9  ]  R_3 − 2R_2   [ 0 1 2 3  ]      (1.2.3)
[ 3 6 9 12 ]  R_3 − 3R_1   [ 0 −6 −12 −18  ]  −(1/3)R_2    [ 0 0 0 0  ]

In general, we use three types of row operations that do not change the solutions of the system of linear equations.

• R_i ↔ R_j: exchange the i-th and j-th rows.
• cR_i: multiply a number c ≠ 0 to the i-th row.
• R_i + cR_j: add the c multiple of the j-th row to the i-th row.

We use the third operation to create 0 (and therefore a simpler matrix). We use the first operation to rearrange the equations from the most complicated (i.e., longest) to the simplest (i.e., shortest). The simplest shape one can achieve by the three row operations is the row echelon form. For the system in Example 1.2.3, the row echelon form is

[ • ∗ ∗ ∗ ]
[ 0 • ∗ ∗ ]      • ≠ 0,  ∗ arbitrary.      (1.2.4)
[ 0 0 0 0 ]

The entries indicated by • are called the pivots. The rows and columns containing the pivots are pivot rows and pivot columns. In the row echelon form (1.2.4), the first and second rows are pivot rows, and the first and second columns are pivot columns.

In general, a row echelon form has the shape of an upside down staircase, and the shape is characterised by the locations of the pivots. The pivots are the leading nonzero entries in the rows. They appear in the first several rows in later and later positions, and the subsequent rows are completely zero. The following are all the 2 × 3 row echelon forms

[ • ∗ ∗ ]  [ • ∗ ∗ ]  [ • ∗ ∗ ]  [ 0 • ∗ ]  [ 0 • ∗ ]  [ 0 0 • ]  [ 0 0 0 ]
[ 0 • ∗ ], [ 0 0 • ], [ 0 0 0 ], [ 0 0 • ], [ 0 0 0 ], [ 0 0 0 ], [ 0 0 0 ].

Exercise 1.26. Explain that the row operations do not change the solutions of the system of linear equations.

Exercise 1.27. How does the exchange of columns of A affect the solution of A x = b? What about multiplying a nonzero number to a column of A?

Exercise 1.28. Why can the shape in (1.2.4) not be further improved? How can you improve the following shapes to the upside down staircase by using row operations?
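The three row operations are simple enough to implement directly. The following sketch (our own illustration; the function names are not from the text) encodes each operation and replays the reduction (1.2.3):

```python
from fractions import Fraction

# The three elementary row operations, acting on a matrix given as a list
# of rows. Each returns a new matrix and leaves the input unchanged.
def swap(M, i, j):               # R_i <-> R_j
    M = [row[:] for row in M]
    M[i], M[j] = M[j], M[i]
    return M

def scale(M, i, c):              # c R_i, with c != 0
    assert c != 0
    return [[c * a for a in row] if k == i else row[:]
            for k, row in enumerate(M)]

def add_multiple(M, i, j, c):    # R_i + c R_j
    M = [row[:] for row in M]
    M[i] = [a + c * b for a, b in zip(M[i], M[j])]
    return M

# Replay the reduction (1.2.3) of the augmented matrix of Example 1.2.6.
M = [[Fraction(n) for n in row]
     for row in [[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9, 12]]]
M = add_multiple(M, 1, 0, -2)     # R2 - 2R1
M = add_multiple(M, 2, 0, -3)     # R3 - 3R1
M = add_multiple(M, 2, 1, -2)     # R3 - 2R2
M = scale(M, 1, Fraction(-1, 3))  # -(1/3) R2
print(M)  # row echelon form [[1, 4, 7, 10], [0, 1, 2, 3], [0, 0, 0, 0]]
```

Exact rational arithmetic (`Fraction`) is used so that pivots are never blurred by floating-point rounding.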

1. 0 0 0 0 0.
2. .
3. 0 0 0 0 0 0.
4. 0 0 0.

Exercise 1.29. Display all the 2 × 2 row echelon forms. How about 3 × 2 matrices?

Exercise 1.30. How many m × n row echelon forms are there?

The row operations (1.2.3) and the resulting row echelon form (1.2.4) are for the augmented matrix of the system of linear equations in Example 1.2.6. We note that the system has solution because the last column is not pivot, which is the same as having no row of the form (0 0 ⋯ 0 •) with • ≠ 0 (indicating the contradictory equation 0 = • ≠ 0). Moreover, we note that x_1, x_2 can be expressed in terms of the later variable x_3 precisely because we can divide by their nonzero coefficients at the two pivots. In other words, the two variables are not free (they are determined by the other variable x_3) because they correspond to pivot columns of A. On the other hand, the variable x_3 is free precisely because it corresponds to a non-pivot column. We summarise the discussion in the following dictionary:

free variables in A x = b      ↔  non-pivot columns in A
non-free variables in A x = b  ↔  pivot columns in A

Theorem 1.2.5. A system of linear equations A x = b has solution if and only if b is not a pivot column in the augmented matrix (A b). Moreover, the solution is unique if and only if all columns of A are pivot.

Uniqueness means no freedom. No freedom means all variables are non-free. By the dictionary above, this means that all columns are pivot.

Example 1.2.8. To solve the system of linear equations

x_1 + 4x_2 + 7x_3 = 10,
2x_1 + 5x_2 + 8x_3 = 11,
3x_1 + 6x_2 + ax_3 = b,

we carry out the following row operations on the augmented matrix

        [ 1 4 7 10 ]  R_2 − 2R_1  [ 1  4  7     10     ]  R_3 − 2R_2  [ 1  4  7     10     ]
(A b) = [ 2 5 8 11 ]  R_3 − 3R_1  [ 0 −3 −6    −9      ]              [ 0 −3 −6    −9      ]
        [ 3 6 a b  ]              [ 0 −6 a − 21 b − 30 ]              [ 0  0 a − 9  b − 12 ]
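Code cannot carry the symbols a and b through the elimination, but it can confirm the claimed last row numerically. The sketch below (ours, not from the text) runs the same three operations for many concrete values of a and b and checks that the last row is always [0, 0, a − 9, b − 12]:

```python
# Numeric check of the elimination in Example 1.2.8: after R2 - 2R1,
# R3 - 3R1 and R3 - 2R2, the last row of the augmented matrix should be
# [0, 0, a - 9, b - 12] for any choice of a and b.
def last_row(a, b):
    M = [[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, a, b]]
    M[1] = [x - 2 * y for x, y in zip(M[1], M[0])]  # R2 - 2R1
    M[2] = [x - 3 * y for x, y in zip(M[2], M[0])]  # R3 - 3R1
    M[2] = [x - 2 * y for x, y in zip(M[2], M[1])]  # R3 - 2R2
    return M[2]

for a in range(5, 14):
    for b in range(8, 17):
        assert last_row(a, b) == [0, 0, a - 9, b - 12]
print("last row is [0, 0, a - 9, b - 12] in all tested cases")
```

In particular, the last row vanishes exactly when a = 9 and b = 12, which drives the case analysis that follows.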

The row echelon form depends on the values of a and b. If a ≠ 9, then the result of the row operations is already a row echelon form

[ • ∗ ∗ ∗ ]
[ 0 • ∗ ∗ ]
[ 0 0 • ∗ ]

Because all columns of A are pivot, and the last column b is not pivot, the system has unique solution.

If a = 9, then the result of the row operations is

[ • ∗ ∗ ∗ ]
[ 0 • ∗ ∗ ]
[ 0 0 0 b − 12 ]

If b ≠ 12, then this is the row echelon form

[ • ∗ ∗ ∗ ]
[ 0 • ∗ ∗ ]
[ 0 0 0 • ]

Because the last column is pivot, the system has no solution when a = 9 and b ≠ 12. On the other hand, if b = 12, then the result of the row operations is the row echelon form

[ • ∗ ∗ ∗ ]
[ 0 • ∗ ∗ ]
[ 0 0 0 0 ]

Since the last column is not pivot, the system has solution. Since the third column is not pivot, the solution has a free variable. Therefore the solution is not unique.

Exercise 1.31. Solve the systems of linear equations. Compare the systems and their solutions.

1. x_1 + 2x_2 = 3, 4x_1 + 5x_2 = 6.
2. x_1 = 3, 4x_1 = 6.
3. x_1 = 3, 5x_2 = 6.
4. x_1 + 2x_2 = 3.
5. 4x_1 + 5x_2 = 6.
6. x_1 + 2x_2 = 3, 4x_1 + 5x_2 = 6, 7x_1 + 8x_2 = 9.
7. x_1 + 2x_2 + 3x_3 = 0, 4x_1 + 5x_2 + 6x_3 = 0.
8. x_1 + 4x_2 = 0, 2x_1 + 5x_2 = 0, 3x_1 + 6x_2 = 0.
9. x_1 + 2x_2 = 1, 4x_1 + 5x_2 = 1.
10. x_1 + 2x_2 = 1.
11. x_1 + 2x_2 = 1, 4x_1 + 5x_2 = 1, 7x_1 + 8x_2 = 1.

Exercise 1.32. Solve the systems of linear equations. Compare the systems and their solutions.

26 CHAPTER 1. VECTOR SPACE 1. x 1 + 2x 2 = a, 4x 1 + 5x 2 = b. 2. x 1 = a, 4x 1 = b. 3. x 1 = a, 5x 2 = b. 4. x 1 + 2x 2 = a. 5. 6. 7. 8. x 1 + 2x 2 = a, 4x 1 + 5x 2 = b, 7x 1 + 8x 2 = c. x 1 + ax 2 = 3, 4x 1 + 5x 2 = 6. x 1 + ax 2 = 1, 4x 1 + 5x 2 = 1. x 1 + ax 2 = b, 4x 1 + 5x 2 = 6. 9. 10. 11. 12. x 1 + ax 2 = 3, 4x 1 + bx 2 = 6. x 1 + ax 2 = 3, 4x 1 + 5x 2 = b. ax 1 + bx 2 = 3, 4x 1 + 5x 2 = 6. ax 1 + 2x 2 = b, 4x 1 + 5x 2 = 6. Exercise 1.33. Carry out the row operation and explain what the row operation tells you about some system of linear equations. 1 2 3 4 1 2 4 3 1 2 3 1 2 3 1. 3 4 5 6 5 6 7 8. 3. 3 4 6 5 5 6 a 7. 5. 2 3 4 3 4 1. 7. 2 3 1 3 1 2. 7 8 9 10 7 8 b 9 4 1 2 1 2 3 1 2 3 4 1 2 3 4 2. 3 4 5 6 5 6 7 a. 4. 3 4 5 6 5 6 7 8. 1 2 3 1 6. 2 3 1 2. 7 8 9 b 7 8 a b 3 1 2 3 Exercise 1.34. Solve system of linear equations. 1. a 1 x 1 = b 1, a 2 x 2 = b 2,. a n x n = b n. 4. x 1 + x 2 = b 1, x 2 + x 3 = b 2,. x n 1 + x n = b n 1. 2. 3. x 1 x 2 = b 1, x 2 x 3 = b 2,. x n 1 x n = b n 1. x 1 x 2 = b 1, x 2 x 3 = b 2,. x n 1 x n = b n 1, x n x 1 = b n. 5. 6. x 1 + x 2 = b 1, x 2 + x 3 = b 2,. x n 1 + x n = b n 1, x n + x 1 = b n. x 1 + x 2 + x 3 = b 1, x 2 + x 3 + x 4 = b 2,. x n 2 + x n 1 + x n = b n 2.

7. x_1 + x_2 + x_3 = b_1, x_2 + x_3 + x_4 = b_2, ..., x_{n−2} + x_{n−1} + x_n = b_{n−2}, x_{n−1} + x_n + x_1 = b_{n−1}, x_n + x_1 + x_2 = b_n.

8. x_1 + x_2 + x_3 + ⋯ + x_n = b_1, x_2 + x_3 + ⋯ + x_n = b_2, ..., x_n = b_n.

9. x_2 + x_3 + ⋯ + x_{n−1} + x_n = b_1, x_1 + x_3 + ⋯ + x_{n−1} + x_n = b_2, ..., x_1 + x_2 + ⋯ + x_{n−2} + x_{n−1} = b_n.

1.2.5 Reduced Row Echelon Form

The row echelon form is not unique. For example, we may further apply row operations to (1.2.3) to get

[ 1 4 7 10 ]               [ 1 0 −1 −2 ]
[ 0 1 2 3  ]  R_1 − 4R_2   [ 0 1  2  3 ]
[ 0 0 0 0  ]               [ 0 0  0  0 ]

Both matrices are row echelon forms of the augmented matrix of the system of linear equations in Example 1.2.6.

More generally, for a row echelon form of the shape (1.2.4), we may use the similar idea to eliminate the entries above the pivots. Then we may further divide the rows by the pivot entries, so that all pivot entries become 1:

[ 1 0 a_1 b_1 ]
[ 0 1 a_2 b_2 ]      (1.2.5)
[ 0 0 0   0   ]

The result is the simplest matrix (although the shape cannot be further improved) we can achieve by row operations, and is called the reduced row echelon form. The reduced row echelon form is characterised by the properties that the pivot entries are 1, and the entries above the pivots are 0.

If we start with the augmented matrix of a system of linear equations, then the reduced row echelon form (1.2.5) means the equations

x_1 + a_1 x_3 = b_1,  x_2 + a_2 x_3 = b_2.

By moving the (non-pivot) free variable x_3 to the right, the equations become the general solution

x_1 = b_1 − a_1 x_3,  x_2 = b_2 − a_2 x_3,  x_3 arbitrary.

We see that the reduced row echelon form is equivalent to the general solution. The equivalence also explicitly explains that pivot column means non-free and non-pivot column means free.
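The whole procedure (find a pivot, scale its row to 1, clear the column below and above) fits in a short routine. This is our own sketch of the standard algorithm, not code from the text, again using exact rationals:

```python
from fractions import Fraction

# A small reduced row echelon form routine: for each column, find a pivot,
# move it up, scale the pivot row so the pivot is 1, and clear the rest of
# the column (both below and above the pivot).
def rref(M):
    M = [[Fraction(a) for a in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                      # non-pivot column: skip it
        M[r], M[pivot] = M[pivot], M[r]   # move the pivot row up
        M[r] = [a / M[r][c] for a in M[r]]            # make the pivot 1
        for i in range(rows):
            if i != r and M[i][c] != 0:               # clear the column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

R = rref([[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9, 12]])
print(R)  # [[1, 0, -1, -2], [0, 1, 2, 3], [0, 0, 0, 0]]
```

The output is exactly the reduced row echelon form displayed above, from which the general solution of Example 1.2.6 can be read off.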

Example 1.2.9. The row operations (1.2.3) and the subsequent reduced row echelon form can also be regarded as for the augmented matrix of the homogeneous system

x_1 + 4x_2 + 7x_3 + 10x_4 = 0,
2x_1 + 5x_2 + 8x_3 + 11x_4 = 0,
3x_1 + 6x_2 + 9x_3 + 12x_4 = 0.

We have

[ 1 4 7 10 0 ]     [ 1 4 7 10 0 ]     [ 1 0 −1 −2 0 ]
[ 2 5 8 11 0 ]  →  [ 0 1 2 3  0 ]  →  [ 0 1  2  3 0 ]
[ 3 6 9 12 0 ]     [ 0 0 0 0  0 ]     [ 0 0  0  0 0 ]

We can read the general solution directly from the reduced row echelon form

x_1 = x_3 + 2x_4,  x_2 = −2x_3 − 3x_4,  x_3, x_4 arbitrary.

If we delete x_3 (equivalent to setting x_3 = 0), then we delete the third column and apply the same row operations to the remaining four columns

[ 1 4 10 0 ]     [ 1 4 10 0 ]     [ 1 0 −2 0 ]
[ 2 5 11 0 ]  →  [ 0 1 3  0 ]  →  [ 0 1  3 0 ]
[ 3 6 12 0 ]     [ 0 0 0  0 ]     [ 0 0  0 0 ]

The result is still a reduced row echelon form, and corresponds to the general solution

x_1 = 2x_4,  x_2 = −3x_4,  x_4 arbitrary.

This is obtained precisely from the previous general solution by setting x_3 = 0.

Exercise 1.35. Display all the 2 × 2, 2 × 3, 3 × 2 and 3 × 4 reduced row echelon forms.

Exercise 1.36. Explain that the reduced row echelon form of any matrix is unique.

Exercise 1.37. Given the reduced row echelon form of the augmented matrix of a system of linear equations, find the general solution.

1. [ 1 0 a_1 b_1 ; 0 0 1 b_2 ; 0 0 0 0 ]
2. [ 1 0 a_1 0 b_1 ; 0 0 0 1 b_2 ; 0 0 0 0 0 ]
3. [ 1 0 0 b_1 ; 0 1 0 b_2 ; 0 0 1 b_3 ]
4. [ 1 a_1 a_2 b_1 ; 0 0 0 0 ; 0 0 0 0 ]
5. [ 1 a_1 a_2 0 b_1 ; 0 0 0 0 0 ; 0 0 0 0 0 ]
6. [ 1 a_1 0 a_2 b_1 ; 0 0 1 a_3 b_2 ]
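As with Example 1.2.6, the two-parameter general solution can be checked by substitution. A short sketch of ours (not from the text):

```python
# Check the general solution read off in Example 1.2.9:
# x1 = x3 + 2*x4, x2 = -2*x3 - 3*x4, with x3 and x4 free.
def is_solution(x1, x2, x3, x4):
    return (x1 + 4*x2 + 7*x3 + 10*x4 == 0 and
            2*x1 + 5*x2 + 8*x3 + 11*x4 == 0 and
            3*x1 + 6*x2 + 9*x3 + 12*x4 == 0)

for x3 in range(-3, 4):
    for x4 in range(-3, 4):
        assert is_solution(x3 + 2*x4, -2*x3 - 3*x4, x3, x4)

# Setting x3 = 0 recovers the solution of the smaller system obtained by
# deleting the third column: x1 = 2*x4, x2 = -3*x4.
assert is_solution(2, -3, 0, 1)
print("both general solutions verified")
```

The last assertion is the "delete x_3" observation of the example in miniature.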

7. [ 1 0 a_1 a_2 b_1 ; 0 1 a_3 a_4 b_2 ]
8. [ 1 0 a_1 b_1 ; 0 1 a_2 b_2 ]
9. [ 0 1 0 a_1 b_1 ; 0 0 1 a_2 b_2 ]
10. [ 1 0 a_1 b_1 ; 0 1 a_2 b_2 ; 0 0 0 0 ; 0 0 0 0 ]
11. [ 1 0 a_1 0 a_2 b_1 ; 0 1 a_3 0 a_4 b_2 ; 0 0 0 1 a_5 b_3 ; 0 0 0 0 0 0 ]
12. [ 0 1 0 a_1 0 a_2 b_1 ; 0 0 1 a_3 0 a_4 b_2 ; 0 0 0 0 1 a_5 b_3 ; 0 0 0 0 0 0 0 ]

Exercise 1.38. Given the general solution of a system of linear equations, find the reduced row echelon form. Moreover, which system is homogeneous?

1. x_1 = x_3, x_2 = 1 + x_3; x_3 arbitrary.
2. x_1 = x_3, x_2 = 1 + x_3; x_3, x_4 arbitrary.
3. x_2 = x_4, x_3 = 1 + x_4; x_1, x_4 arbitrary.
4. x_2 = x_4, x_3 = x_4 − x_5; x_1, x_4, x_5 arbitrary.
5. x_1 = 1 − x_2 + 2x_5, x_3 = 1 + 2x_5, x_4 = 3 + x_5; x_2, x_5 arbitrary.
6. x_1 = 1 + 2x_2 + 3x_4, x_3 = 4 + 5x_4 + 6x_5; x_2, x_4, x_5 arbitrary.
7. x_1 = 2x_2 + 3x_4 − x_6, x_3 = 5x_4 + 6x_5 − 4x_6; x_2, x_4, x_5, x_6 arbitrary.

1.2.6 Calculation of Span and Linear Independence

Example 1.2.10. In Example 1.2.3, the span and linear independence of the vectors v_1 = (1, 2, 3), v_2 = (4, 5, 6), v_3 = (7, 8, 9) is translated into the existence (for all right side b) and uniqueness of the solution of A x = b, where A = ( v_1 v_2 v_3 ). The same row operations in (1.2.3) (restricted to the first three columns only) can be applied to the augmented matrix

                            [ 1 4 7 b_1 ]     [ 1 4 7 b_1  ]
(A b) = ( v_1 v_2 v_3 b ) = [ 2 5 8 b_2 ]  →  [ 0 1 2 b′_2 ]
                            [ 3 6 9 b_3 ]     [ 0 0 0 b′_3 ]

Note that b′ = (b_1, b′_2, b′_3) is obtained from b by row operations. The row operations can be reversed: each operation R_i ↔ R_j, cR_i, R_i + cR_j is undone by R_i ↔ R_j, c^{−1}R_i, R_i − cR_j, respectively:

b  → (R_i ↔ R_j) → (cR_i) → (R_i + cR_j) →  b′,
b′ → (R_i ↔ R_j) → (c^{−1}R_i) → (R_i − cR_j) →  b.

We start with b′ = (0, 0, 1) (so that b′_3 = 1 ≠ 0), and use the reverse row operations to get b. Then by Theorem 1.2.5, this b is not a linear combination of (1, 2, 3), (4, 5, 6), (7, 8, 9). This shows that the three vectors do not span R^3. Moreover, since the third column is not pivot, by Theorem 1.2.5, the solution is not unique. Therefore the three vectors are linearly dependent.

If we change v_3 to (7, 8, a), with a ≠ 9, then the same row operations give

                    [ 1 4 7 b_1 ]     [ 1 4 7     b_1  ]
( v_1 v_2 v_3 b ) = [ 2 5 8 b_2 ]  →  [ 0 1 2     b′_2 ]
                    [ 3 6 a b_3 ]     [ 0 0 a − 9 b′_3 ]

Since a − 9 ≠ 0, the first three columns are pivot, and the last column is never pivot for all b. By Theorem 1.2.5, the solution always exists and is unique. We conclude that the following statements are equivalent.

1. (1, 2, 3), (4, 5, 6), (7, 8, a) span R^3.
2. (1, 2, 3), (4, 5, 6), (7, 8, a) are linearly independent.
3. a ≠ 9.

The discussion in Example 1.2.10 can be extended to the following criteria.

Proposition 1.2.6. Let v_1, v_2, ..., v_n ∈ R^m and A = ( v_1 v_2 ⋯ v_n ). The following are equivalent.

1. v_1, v_2, ..., v_n span R^m.
2. A x = b has solution for all b ∈ R^m.
3. All rows of A are pivot. In other words, the row echelon form of A has no zero row (0 0 ⋯ 0).

The following are also equivalent.

1. v_1, v_2, ..., v_n are linearly independent.
2. The solution of A x = b is unique.
3. All columns of A are pivot.

Example 1.2.11. Recall the row operations in Example 1.2.8

[ 1 4 7 10 ]     [ 1  4  7     10     ]
[ 2 5 8 11 ]  →  [ 0 −3 −6    −9      ]
[ 3 6 a b  ]     [ 0  0  a − 9 b − 12 ]
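Proposition 1.2.6 reduces both questions to counting pivots. The sketch below (our own forward-elimination routine, not code from the text) counts pivots and confirms the criterion a ≠ 9 for the columns (1, 2, 3), (4, 5, 6), (7, 8, a):

```python
from fractions import Fraction

# Count pivots by forward elimination. By Proposition 1.2.6, the columns
# of a matrix span R^m exactly when every row is pivot, and they are
# linearly independent exactly when every column is pivot.
def pivot_count(M):
    M = [[Fraction(a) for a in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            factor = M[i][c] / M[r][c]
            M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Columns (1,2,3), (4,5,6), (7,8,a): three pivots exactly when a != 9.
assert pivot_count([[1, 4, 7], [2, 5, 8], [3, 6, 9]]) == 2
assert pivot_count([[1, 4, 7], [2, 5, 8], [3, 6, 10]]) == 3
print("pivot counts match the a != 9 criterion")
```

With a = 9 only two of the three rows and columns are pivot, so the columns neither span R^3 nor are independent; with a ≠ 9 all three are pivot and both properties hold.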

By Proposition 1.2.6, the vectors (1, 2, 3), (4, 5, 6), (7, 8, a), (10, 11, b) span R^3 if and only if a ≠ 9 or b ≠ 12, and the four vectors are always linearly dependent.

If we delete the third column and carry out the same row operations, then we find that (1, 2, 3), (4, 5, 6), (10, 11, b) span R^3 if and only if b ≠ 12. Moreover, the three vectors are linearly independent if and only if b ≠ 12.

If we delete both the third and fourth columns, then we find that (1, 2, 3), (4, 5, 6) do not span R^3, and the two vectors are linearly independent.

The two vectors (1, 2, 3), (4, 5, 6) in Example 1.2.11 form a 3 × 2 matrix. Since each column has at most one pivot, the number of pivots is at most 2. Therefore among the 3 rows, at least one row is not pivot. By Proposition 1.2.6, we conclude that v_1, v_2 cannot span R^3.

The four vectors (1, 2, 3), (4, 5, 6), (7, 8, a), (10, 11, b) in Example 1.2.11 form a 3 × 4 matrix. Since each row has at most one pivot, the number of pivots is at most 3. Therefore among the 4 columns, at least one column is not pivot. By Proposition 1.2.6, we conclude that the four vectors are linearly dependent.

The argument above can be extended to general results.

Proposition 1.2.7. If n vectors span R^m, then n ≥ m. If n vectors in R^m are linearly independent, then n ≤ m.

Equivalently, if n < m, then n vectors cannot span R^m, and if n > m, then n vectors in R^m are linearly dependent.

Example 1.2.12. Without any calculation, we know that the vectors (1, 0, 2, π), (log 2, e, 100, 0.5), (√3, e^{−1}, sin 1, 2.3) cannot span R^4. We also know that the vectors (1, log 2, √3), (0, e, e^{−1}), (2, 100, sin 1), (π, 0.5, 2.3) are linearly dependent.

In Example 1.2.10 and 1.2.6, we saw that three vectors in R^3 span R^3 if and only if they are linearly independent. In general, we have the following result.

Theorem 1.2.8. Given n vectors in R^m, any two of the following imply the third.

• The vectors span the Euclidean space.
• The vectors are linearly independent.
• m = n.

Proof. The theorem can be split into two statements. The first is that, if n vectors in R^m span R^m and are linearly independent, then m = n. The second is that, for n vectors in R^n, spanning R^n is equivalent to linear independence.

The first statement is a consequence of Proposition 1.2.7. For the second statement, we consider the n × n matrix with the n vectors as the columns. By Proposition 1.2.6, we have

1. span R^n ⟺ there are n pivot rows.
2. linear independence ⟺ there are n pivot columns.

Then the second statement follows from

number of pivot rows = number of pivots = number of pivot columns.

Theorem 1.2.8 can be rephrased in terms of systems of linear equations. In fact, the proof of Theorem 1.2.8 essentially follows our theory of systems of linear equations.

Theorem 1.2.9. For a system of linear equations A x = b, any two of the following imply the third.

• The system has solution for all b.
• The solution of the system is unique.
• The number of equations is the same as the number of variables.

Exercise 1.39. Determine whether the vectors span the vector space. Determine whether they are linearly independent.

1. (1, 1, 0, 0), (−1, 1, 0, 0), (0, 0, 1, 1), (0, 0, 1, −1).
2. [ 1 0 ; 0 1 ], [ 1 0 ; 0 −1 ], [ 0 1 ; 1 0 ], [ 0 1 ; −1 0 ].
3. [ 1 2 ; 3 4 ], [ 5 6 ; 7 8 ].
4. [ 1 2 ; 3 4 ], [ 2 4 ; 6 8 ].
5. e_1 − e_2, e_2 − e_3, ..., e_{n−1} − e_n, e_n − e_1.
6. e_1, e_1 + 2e_2, e_1 + 2e_2 + 3e_3, ..., e_1 + 2e_2 + ⋯ + n e_n.

Exercise 1.40. Prove that if u, v, w span V, then the following linear combinations of the vectors also span V.

1. u + 2v + 3w, 2u + 3v + 4w, 3u + 4v + w, 4u + v + 2w.
2. u + 2v + 3w, 4u + 5v + 6w, 7u + 8v + aw, 10u + 11v + bw, a ≠ 9 or b ≠ 12.

1.3. BASIS 33 Exercise 1.41. Prove that if u, v, w are linearly independent, then the linear combinations of vectors are still linearly dependent.. 1. u + 2 v + 3 w, 2 u + 3 v + 4 w, 3 u + 4 v + w. 2. u + 2 v + 3 w, 4 u + 5 v + 6 w, 7 u + 8 v + a w, a 9. 3. u + 2 v + 3 w, 4 u + 5 v + 6 w, 10 u + 11 v + b w, b 12. Exercise 1.42. Prove that for any u, v, w, the linear combinations of vectors are always linearly dependent. 1. u + 4 v + 7 w, 2 u + 5 v + 8 w, 3 u + 6 v + 9 w. 2. u + 2 v + 3 w, 2 u + 3 v + 4 w, 3 u + 4 v + w, 4 u + v + 2 w. 3. u + 2 v + 3 w, 4 u + 5 v + 6 w, 7 u + 8 v + a w, 10 u + 11 v + b w. 1.3 Basis We learned the concepts of span and linear independence, and also developed the method of calculating the concepts in the Euclidean space. To calculate the concepts in any vector space, we may translate from general vector space to the Euclidean space. The translation is by an isomorphism of vector spaces, and is constructed from a basis. 1.3.1 Definition Definition 1.3.1. A set of vectors is a basis if they span the vector space and are linearly independent. In other words, any vector can be uniquely expressed as a linear combination of the vectors in the basis. The basis is the perfect situation for a collection of vectors. Similarly, we may regard 1 as the basis for N with respect to the mechanism +1, and regard all prime numbers as the basis for N with respect to the multiplication. Example 1.2.1 gives the standard basis of the Euclidean space. Example 1.2.2 shows that the monomials form a basis of the polynomial vector space. Exercise 1.13 gives a basis of the matrix vector space. Example 1.2.10 shows that (1, 2, 3), (4, 5, 6), (7, 8, a) form a basis of R 3 if and only if a 9 Example 1.3.1. To determine wether v 1 = (1, 1, 0), v 2 = (1, 0, 1), v 3 = (1, 1, 1)