Math 3108: Linear Algebra


Math 3108: Linear Algebra Instructor: Jason Murphy Department of Mathematics and Statistics Missouri University of Science and Technology 1 / 323

Contents. Chapter 1: Slides 3-70. Chapter 2: Slides 71-118. Chapter 3: Slides 119-136. Chapter 4: Slides 137-190. Chapter 5: Slides 191-234. Chapter 6: Slides 235-287. Chapter 7: Slides 288-323. 2 / 323

Chapter 1. Linear Equations in Linear Algebra 1.1 Systems of Linear Equations 1.2 Row Reduction and Echelon Forms. Our first application of linear algebra is the use of matrices to efficiently solve linear systems of equations. 3 / 323

A linear system of m equations in n unknowns can be represented by a matrix with m rows and n + 1 columns: the system

a11 x1 + ... + a1n xn = b1
...
am1 x1 + ... + amn xn = bm

corresponds to the matrix

[A | b] = [ a11 ... a1n | b1 ]
          [ ...          ... ]
          [ am1 ... amn | bm ]

Similarly, every such matrix corresponds to a linear system. 4 / 323

A is the m × n coefficient matrix:

A = [ a11 ... a1n ]
    [ ...         ]
    [ am1 ... amn ]

aij is the element in row i and column j. [A | b] is called the augmented matrix. 5 / 323

Example. The 2 × 2 system

3x + 4y = 5
6x + 7y = 8

corresponds to the augmented matrix

[ 3 4 | 5 ]
[ 6 7 | 8 ]

6 / 323

To solve linear systems, we manipulate and combine the individual equations (in such a way that the solution set of the system is preserved) until we arrive at a simple enough form that we can determine the solution set. 7 / 323

Example. Let us solve

3x + 4y = 5
6x + 7y = 8.

Multiply the first equation by -2 and add it to the second:

3x + 4y = 5
0x - y = -2.

Multiply the second equation by 4 and add it to the first:

3x + 0y = -3
0x - y = -2.

Multiply the first equation by 1/3 and the second by -1:

x + 0y = -1
0x + y = 2.

8 / 323

Example. (continued) We have transformed the linear system 3x + 4y = 5, 6x + 7y = 8 into x + 0y = -1, 0x + y = 2 in such a way that the solution set is preserved. The second system clearly has solution set {(-1, 2)}. Remark. For linear systems, the solution set S satisfies one of the following: S contains a single point (consistent system), S contains infinitely many points (consistent system), or S is empty (inconsistent system). 9 / 323

The manipulations used to solve the linear system above correspond to elementary row operations on the augmented matrix for the system. Elementary row operations. Replacement: replace a row by the sum of itself and a multiple of another row. Interchange: interchange two rows. Scaling: multiply all entries in a row by a nonzero constant. Row operations do not change the solution set for the associated linear system. 10 / 323

Example. (revisited)

[ 3 4 | 5 ]  R2 ← -2R1 + R2  [ 3  4 | 5 ]   R1 ← 4R2 + R1  [ 3  0 | -3 ]
[ 6 7 | 8 ]                  [ 0 -1 | -2 ]                 [ 0 -1 | -2 ]

R1 ← (1/3)R1  [ 1  0 | -1 ]  R2 ← -R2  [ 1 0 | -1 ]
              [ 0 -1 | -2 ]            [ 0 1 |  2 ]

(i) it is simple to determine the solution set for the last matrix; (ii) row operations preserve the solution set. 11 / 323
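As a quick illustration (not part of the course materials), the three elementary row operations can be coded directly in plain Python; `Fraction` keeps the arithmetic exact. The function names `replacement` and `scale` are our own:

```python
from fractions import Fraction

# The augmented matrix [A | b] for 3x + 4y = 5, 6x + 7y = 8.
M = [[Fraction(3), Fraction(4), Fraction(5)],
     [Fraction(6), Fraction(7), Fraction(8)]]

def replacement(M, i, j, c):
    """R_i <- R_i + c * R_j: replace row i by itself plus a multiple of row j."""
    M[i] = [a + c * b for a, b in zip(M[i], M[j])]

def scale(M, i, c):
    """R_i <- c * R_i (c must be nonzero, so the operation is reversible)."""
    assert c != 0
    M[i] = [c * a for a in M[i]]

replacement(M, 1, 0, Fraction(-2))   # R2 <- -2*R1 + R2
replacement(M, 0, 1, Fraction(4))    # R1 <- 4*R2 + R1
scale(M, 0, Fraction(1, 3))          # R1 <- (1/3)*R1
scale(M, 1, Fraction(-1))            # R2 <- -R2
# rows are now [1, 0, -1] and [0, 1, 2], i.e. x = -1, y = 2
```

Each operation is invertible (replacement with -c, scaling by 1/c), which is exactly why the solution set is preserved.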

It is always possible to apply a series of row reductions to put an augmented matrix into echelon form or reduced echelon form, from which it is simple to discern the solution set. Echelon form: Nonzero rows are above any row of zeros. The leading entry (first nonzero element) of each row is in a column to the right of the leading entry of the row above it. All entries in a column below a leading entry are zeros. Reduced echelon form: (two additional conditions) The leading entry of each nonzero row equals 1. Each leading 1 is the only nonzero entry in its column. 12 / 323

Examples.

[ 3 -9 12 -9 6 15 ]
[ 0  2 -4  4 2 -6 ]     not in echelon form
[ 0  3 -6  6 4 -5 ]

[ 3 -9 12 -9 6 15 ]
[ 0  2 -4  4 2 -6 ]     echelon form, not reduced
[ 0  0  0  0 1  4 ]

[ 1 0 -2 3 0 -24 ]
[ 0 1 -2 2 0  -7 ]      reduced echelon form
[ 0 0  0 0 1   4 ]

13 / 323

Remark. Every matrix can be put into reduced echelon form in a unique manner. Definition. A pivot position in a matrix is a location that corresponds to a leading 1 in its reduced echelon form. A pivot column is a column that contains a pivot position. Remark. Pivot positions lie in columns corresponding to dependent variables for the associated systems. 14 / 323

Row Reduction Algorithm. 1. Begin with the leftmost column; if necessary, interchange rows to put a nonzero entry in the first row. 2. Use row replacement to create zeros below the pivot. 3. Repeat steps 1. and 2. with the sub-matrix obtained by removing the first column and first row. Repeat the process until there are no more nonzero rows. This puts the matrix into echelon form. 4. Beginning with the rightmost pivot, create zeros above each pivot. Rescale each pivot to 1. Work upward and to the left. This puts the matrix into reduced echelon form. 15 / 323
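The four steps above can be sketched in plain Python (a sketch of ours, not the course's code); the forward pass and the backward pass are merged by clearing entries both below and above each pivot, and `Fraction` avoids floating-point error:

```python
from fractions import Fraction

def rref(M):
    """Return the reduced row echelon form of M (a list of rows)."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # Step 1: find a nonzero entry at or below pivot_row; swap it up.
        pr = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pr is None:
            continue  # no pivot in this column
        M[pivot_row], M[pr] = M[pr], M[pivot_row]
        # Rescale the pivot to 1.
        p = M[pivot_row][col]
        M[pivot_row] = [x / p for x in M[pivot_row]]
        # Steps 2 and 4: create zeros below *and* above the pivot.
        for r in range(rows):
            if r != pivot_row and M[r][col] != 0:
                c = M[r][col]
                M[r] = [x - c * y for x, y in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

A = [[3, -9, 12, -9, 6, 15],
     [3, -7,  8, -5, 8,  9],
     [0,  3, -6,  6, 4, -5]]
R = rref(A)
# R matches the reduced echelon form computed in the next slides:
# [1, 0, -2, 3, 0, -24], [0, 1, -2, 2, 0, -7], [0, 0, 0, 0, 1, 4]
```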

Example.

[ 3 -9 12 -9 6 15 ]  R2 ← -R1 + R2  [ 3 -9 12 -9 6 15 ]  R3 ← -(3/2)R2 + R3  [ 3 -9 12 -9 6 15 ]
[ 3 -7  8 -5 8  9 ]                 [ 0  2 -4  4 2 -6 ]                      [ 0  2 -4  4 2 -6 ]
[ 0  3 -6  6 4 -5 ]                 [ 0  3 -6  6 4 -5 ]                      [ 0  0  0  0 1  4 ]

The matrix is now in echelon form. 16 / 323

Example. (continued)

[ 3 -9 12 -9 6 15 ]    [ 3 -9 12 -9 0  -9 ]    [ 3 -9 12 -9 0  -9 ]
[ 0  2 -4  4 2 -6 ]  → [ 0  2 -4  4 0 -14 ]  → [ 0  1 -2  2 0  -7 ]
[ 0  0  0  0 1  4 ]    [ 0  0  0  0 1   4 ]    [ 0  0  0  0 1   4 ]

   [ 3 0 -6 9 0 -72 ]    [ 1 0 -2 3 0 -24 ]
→  [ 0 1 -2 2 0  -7 ]  → [ 0 1 -2 2 0  -7 ].
   [ 0 0  0 0 1   4 ]    [ 0 0  0 0 1   4 ]

The matrix is now in reduced echelon form. 17 / 323

Solving systems. Find the augmented matrix [A | b] for the given linear system. Put the augmented matrix into reduced echelon form [A' | b']. Find solutions to the system associated to [A' | b']. Express dependent variables in terms of free variables if necessary. 18 / 323

Example 1. The system

2x - 4y + 4z = -6
x - 2y + 2z = -3
x - y + 0z = -2

has augmented matrix

[ 2 -4 4 | -6 ]                              [ 1 0 -2 | -1 ]
[ 1 -2 2 | -3 ]   with reduced echelon form  [ 0 1 -2 |  1 ],
[ 1 -1 0 | -2 ]                              [ 0 0  0 |  0 ]

that is,

x - 2z = -1
y - 2z = 1.

The solution set is S = {(-1 + 2z, 1 + 2z, z) : z ∈ R}. 19 / 323

Example 2. The system

2x - 4y + 4z = -6
x - 2y + 2z = -4
x - y + 0z = -2

has augmented matrix

[ 2 -4 4 | -6 ]                     [ 1 0 -2 | -1 ]
[ 1 -2 2 | -4 ]   which reduces to  [ 0 1 -2 |  1 ],
[ 1 -1 0 | -2 ]                     [ 0 0  0 |  1 ]

that is,

x - 2z = -1
y - 2z = 1
0 = 1.

The solution set is empty: the system is inconsistent. This is always the case when a pivot position lies in the last column. 20 / 323

Row equivalent matrices. Two matrices are row equivalent if they are connected by a sequence of elementary row operations. Two matrices are row equivalent if and only if they have the same reduced echelon form. We write A ~ B to denote that A and B are row equivalent. 21 / 323

Chapter 1. Linear Equations in Linear Algebra 1.3 Vector Equations 1.4 The Matrix Equation Ax = b. 22 / 323

A matrix with one column or one row is called a vector, for example

[ 1 ]
[ 2 ]   or   [ 1 2 3 ].
[ 3 ]

By using vector arithmetic, for example

  [ 1 ]     [ 4 ]   [ α + 4β  ]
α [ 2 ] + β [ 5 ] = [ 2α + 5β ],
  [ 3 ]     [ 6 ]   [ 3α + 6β ]

we can write linear systems as vector equations. 23 / 323

The linear system

x + 2y + 3z = 4
5x + 6y + 7z = 8
9x + 10y + 11z = 12

is equivalent to the vector equation

  [ 1 ]     [ 2  ]     [ 3  ]   [ 4  ]
x [ 5 ] + y [ 6  ] + z [ 7  ] = [ 8  ],
  [ 9 ]     [ 10 ]     [ 11 ]   [ 12 ]

in that they have the same solution sets, namely S = {(-2 + z, 3 - 2z, z) : z ∈ R}. 24 / 323

Geometric interpretation. The solution set S may be interpreted in different ways: S consists of the points of intersection of the three planes

x + 2y + 3z = 4
5x + 6y + 7z = 8
9x + 10y + 11z = 12.

S consists of the coefficients of the linear combinations of the vectors (1, 5, 9), (2, 6, 10), and (3, 7, 11) that yield the vector (4, 8, 12). 25 / 323

Linear combinations and the span. The set of linear combinations of the vectors v1, ..., vn is called the span of these vectors:

span{v1, ..., vn} = {α1 v1 + ... + αn vn : α1, ..., αn ∈ R}.

A vector equation x v1 + y v2 = v3 is consistent (that is, has solutions) if and only if v3 ∈ span{v1, v2}. 26 / 323

Example. Determine whether or not

[ 1 ]         [ 1 ]  [ 1 ]  [ 1 ]
[ 1 ] ∈ span{ [ 2 ], [ 3 ], [ 4 ] }.
[ 1 ]         [ 3 ]  [ 4 ]  [ 5 ]

This is equivalent to the existence of a solution to:

  [ 1 ]     [ 1 ]     [ 1 ]   [ 1 ]
x [ 2 ] + y [ 3 ] + z [ 4 ] = [ 1 ].
  [ 3 ]     [ 4 ]     [ 5 ]   [ 1 ]

The associated system is

x + y + z = 1
2x + 3y + 4z = 1
3x + 4y + 5z = 1.

27 / 323

The augmented matrix is

[ 1 1 1 | 1 ]
[ 2 3 4 | 1 ].
[ 3 4 5 | 1 ]

The reduced echelon form is

[ 1 0 -1 | 0 ]
[ 0 1  2 | 0 ].
[ 0 0  0 | 1 ]

The system is inconsistent. Thus (1, 1, 1) ∉ span{(1, 2, 3), (1, 3, 4), (1, 4, 5)}. 28 / 323

Geometric description of span. Let

S = span{(0, 1, 1)},   T = span{(0, 1, 1), (1, 0, 1)}.

Then S is the line through the points (0, 0, 0) and (0, 1, 1), and T is the plane through the points (0, 0, 0), (0, 1, 1), and (1, 0, 1). 29 / 323

Geometric description of span. (continued) Write v1 = (0, 1, 1), v2 = (1, 0, 1). The following are equivalent: Is v3 spanned by v1 and v2? Can v3 be written as a linear combination of v1 and v2? Is v3 in the plane containing the vectors v1 and v2? 30 / 323

Cartesian equation for span. Recall the definition of the plane T above. A point (x, y, z) belongs to T when the following vector equation is consistent:

  [ 0 ]     [ 1 ]   [ x ]
α [ 1 ] + β [ 0 ] = [ y ].
  [ 1 ]     [ 1 ]   [ z ]

The augmented matrix and its reduced echelon form are as follows:

[ 0 1 | x ]    [ 1 0 | y         ]
[ 1 0 | y ] →  [ 0 1 | x         ].
[ 1 1 | z ]    [ 0 0 | z - x - y ]

Thus the Cartesian equation for the plane is 0 = z - x - y. 31 / 323

Matrix equations. Consider a matrix of the form A = [a1 a2 a3], where the aj are column vectors. The product of A with a column vector is defined by

  [ x ]
A [ y ] := x a1 + y a2 + z a3.
  [ z ]

Thus all linear systems can be represented by matrix equations of the form AX = b. 32 / 323

Example. (Revisited) The system

x + y + z = 1
2x + 3y + 4z = 1
3x + 4y + 5z = 1

is equivalent to the matrix equation AX = b, where

    [ 1 1 1 ]       [ x ]       [ 1 ]
A = [ 2 3 4 ],  X = [ y ],  b = [ 1 ].
    [ 3 4 5 ]       [ z ]       [ 1 ]

Remark. AX = b has a solution if and only if b is a linear combination of the columns of A. 33 / 323

Question. When does the matrix equation AX = b have a solution for every b ∈ R^m? Answer. When the columns of A span R^m. An equivalent condition is the following: the reduced echelon form of A has a pivot position in every row. To illustrate this, we study a non-example: 34 / 323

Non-example. Let

    [ 1 1 1 ]                             [ 1 0 -1 ]
A = [ 2 3 4 ]   with reduced echelon form [ 0 1  2 ].
    [ 3 4 5 ]                             [ 0 0  0 ]

This means that for any b1, b2, b3 ∈ R, we will have

[ 1 1 1 | b1 ]    [ 1 0 -1 | f1(b1, b2, b3) ]
[ 2 3 4 | b2 ] →  [ 0 1  2 | f2(b1, b2, b3) ]
[ 3 4 5 | b3 ]    [ 0 0  0 | f3(b1, b2, b3) ]

for some linear functions f1, f2, f3. However, the equation f3(b1, b2, b3) = 0 imposes a constraint on the choices of b1, b2, b3. That is, we cannot solve AX = b for arbitrary choices of b. 35 / 323

If instead the reduced echelon form of A had a pivot in every row, then we could use the reduced echelon form of the augmented system to find a solution to AX = b. 36 / 323

Chapter 1. Linear Equations in Linear Algebra 1.5 Solution Sets of Linear Systems 37 / 323

The system of equations AX = b is homogeneous if b = 0 and inhomogeneous if b ≠ 0. For homogeneous systems: The augmented matrix for a homogeneous system has a column of zeros. Elementary row operations will not change this column. Thus, for homogeneous systems it is sufficient to work with the coefficient matrix alone. 38 / 323

Example.

2x - 4y + 4z = -6
x - 2y + 2z = -3
x - y = -2

[ 2 -4 4 | -6 ]    [ 1 0 -2 | -1 ]
[ 1 -2 2 | -3 ] →  [ 0 1 -2 |  1 ].
[ 1 -1 0 | -2 ]    [ 0 0  0 |  0 ]

The solution set is x = -1 + 2z, y = 1 + 2z, z ∈ R. On the other hand,

2x - 4y + 4z = 0
x - 2y + 2z = 0
x - y = 0

has solution set x = 2z, y = 2z, z ∈ R. 39 / 323

The solution set for the previous inhomogeneous system AX = b can be represented in parametric vector form:

2x - 4y + 4z = -6                          [ x ]   [ -1 ]     [ 2 ]
x - 2y + 2z = -3   ⟹  x = -1 + 2z,    X = [ y ] = [  1 ] + z [ 2 ],  z ∈ R.
x - y = -2             y = 1 + 2z          [ z ]   [  0 ]     [ 1 ]

The parametric form for the homogeneous system AX = 0 is given by

                       [ x ]     [ 2 ]
x = 2z,  y = 2z,   X = [ y ] = z [ 2 ],  z ∈ R.
                       [ z ]     [ 1 ]

Both solution sets are parametrized by the free variable z ∈ R. 40 / 323

Example. Express the solution set for AX = b in parametric vector form, where

          [ 1 1 -1 1  1 | 1 ]
[A | b] = [ 1 1  0 2  0 | 2 ].
          [ 0 0  2 2 -2 | 2 ]

Row reduction leads to

[ 1 1 0 2  0 | 2 ]
[ 0 0 1 1 -1 | 1 ],
[ 0 0 0 0  0 | 0 ]

which means the solution set is

x1 = 2 - x2 - 2x4,  x3 = 1 - x4 + x5,  x2, x4, x5 ∈ R.

41 / 323

Example. (continued) In parametric form, the solution set is given by

    [ 2 ]      [ -1 ]      [ -2 ]      [ 0 ]
    [ 0 ]      [  1 ]      [  0 ]      [ 0 ]
X = [ 1 ] + x2 [  0 ] + x4 [ -1 ] + x5 [ 1 ],
    [ 0 ]      [  0 ]      [  1 ]      [ 0 ]
    [ 0 ]      [  0 ]      [  0 ]      [ 1 ]

where x2, x4, x5 ∈ R. To solve the corresponding homogeneous system, simply erase the first vector. 42 / 323

General form of solution sets. If Xp is any particular solution to AX = b, then any other solution to AX = b may be written in the form X = Xp + Xh, where Xh is some solution to AX = 0. Indeed, given any solution X,

A(X - Xp) = AX - AXp = b - b = 0,

which means that X - Xp solves the homogeneous system. Thus, to find the general solution to the inhomogeneous problem, it suffices to (1) find the general solution to the homogeneous problem, and (2) find any particular solution to the inhomogeneous problem. Remark. Something similar happens for linear ODE. 43 / 323
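The particular-plus-homogeneous structure can be checked numerically on the 3 × 3 system from the earlier example (a small sketch of ours; `matvec` is just matrix-vector multiplication written out):

```python
# Coefficient matrix and right-hand side of the earlier example:
# 2x - 4y + 4z = -6, x - 2y + 2z = -3, x - y = -2.
A = [[2, -4, 4], [1, -2, 2], [1, -1, 0]]
b = [-6, -3, -2]

def matvec(A, x):
    """Compute the product Ax row by row."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Particular solution (take z = 0 in the parametric form X = (-1, 1, 0) + z(2, 2, 1)).
Xp = [-1, 1, 0]
for z in [1, 2, 5]:
    Xh = [2 * z, 2 * z, z]              # a homogeneous solution
    X = [p + h for p, h in zip(Xp, Xh)]
    assert matvec(A, X) == b            # Xp + Xh solves AX = b
    assert matvec(A, Xh) == [0, 0, 0]   # Xh solves AX = 0
```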

Example. (Line example) Suppose the solution set of AX = b is a line passing through the points p = (1, 1, 2), q = (0, 3, 1). Find the parametric form of the solution set. First note that v = p q is parallel to this line. As q belongs to the solution set, the solution set is therefore X = q + tv, t R. Note that we may also write this as X = (1 t)q + tp, t R. Note also that the solution set to AX = 0 is simply tv, t R. 44 / 323

Example. (Plane example) Suppose the solution set of AX = b is a plane passing through p = (1, 1, 2), q = (0, 3, 1), r = (2, 1, 0). This time we form the vectors v 1 = p q, v 2 = p r. (Note that v 1 and v 2 are linearly independent, i.e. one is not a multiple of the other.) Then the plane is given by X = p + t 1 v 1 + t 2 v 2, t 1, t 2 R. (The solution set to AX = 0 is then the span of v 1 and v 2.) 45 / 323

Chapter 1. Linear Equations in Linear Algebra 1.7 Linear Independence 46 / 323

Definition. A set of vectors S = {v1, ..., vn} is (linearly) independent if

x1 v1 + ... + xn vn = 0  ⟹  x1 = ... = xn = 0

for any x1, ..., xn ∈ R. Equivalently, S is independent if the only solution to AX = 0 is X = 0, where A = [v1 ... vn]. Otherwise, we call S (linearly) dependent. 47 / 323

Example. Let v1 = (1, 2, 3), v2 = (4, 5, 6), v3 = (7, 8, 9). Then

     [ 1 4 7 ]    [ 1 0 -1 ]
A := [ 2 5 8 ] →  [ 0 1  2 ].
     [ 3 6 9 ]    [ 0 0  0 ]

In particular, the equation AX = 0 has a nontrivial solution set, namely

      [  1 ]
X = z [ -2 ],  z ∈ R.
      [  1 ]

Thus the vectors are dependent. 48 / 323

Dependence has another useful characterization: The vectors {v1, ..., vn} are dependent if and only if (at least) one of the vectors can be written as a linear combination of the others. Continuing from the previous example, we found that AX = 0 for X = (1, -2, 1) (for example). This means

v1 - 2v2 + v3 = 0,  i.e.  v1 = 2v2 - v3.

49 / 323
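The relation v1 - 2v2 + v3 = 0 is easy to verify directly, and a brute-force search (our own quick check, not part of the slides) confirms that every small integer relation among these three vectors is a multiple of (1, -2, 1):

```python
# The vectors from the example above.
v1, v2, v3 = (1, 2, 3), (4, 5, 6), (7, 8, 9)

def comb(c1, c2, c3):
    """The linear combination c1*v1 + c2*v2 + c3*v3."""
    return tuple(c1 * a + c2 * b + c3 * c for a, b, c in zip(v1, v2, v3))

# The nontrivial solution X = (1, -2, 1) found by row reduction:
assert comb(1, -2, 1) == (0, 0, 0)   # v1 - 2*v2 + v3 = 0, so the set is dependent

# Every nontrivial integer relation in a small range is a multiple of (1, -2, 1),
# reflecting the fact that the null space here is one-dimensional.
for c1 in range(-3, 4):
    for c2 in range(-3, 4):
        for c3 in range(-3, 4):
            if comb(c1, c2, c3) == (0, 0, 0) and (c1, c2, c3) != (0, 0, 0):
                assert (c2, c3) == (-2 * c1, c1)
```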

Some special cases. If S = {v1, v2}, then S is dependent if and only if v1 is a scalar multiple of v2 (if and only if v1 and v2 are co-linear). If 0 ∈ S, then S is always dependent. Indeed, if S = {0, v1, ..., vn}, then a nontrivial solution to AX = 0 is

0 = 1·0 + 0v1 + ... + 0vn.

50 / 323

Pivot columns. Consider

                    [ 1 2 3 4 ]    [ 1 2 0 -2 ]
A = [v1 v2 v3 v4] = [ 2 4 5 6 ] →  [ 0 0 1  2 ].
                    [ 3 6 7 8 ]    [ 0 0 0  0 ]

In particular, the vector equation AX = 0 has solution set

       [ -2 ]      [  2 ]
       [  1 ]      [  0 ]
X = x2 [  0 ] + x4 [ -2 ],  x2, x4 ∈ R.
       [  0 ]      [  1 ]

Thus {v1, v2, v3, v4} are dependent. The pivot columns of A are relevant: By considering (x2, x4) ∈ {(1, 0), (0, 1)}, we find that v1 and v3 can be combined to produce v2 or v4:

v2 = 2v1,   v4 = -2v1 + 2v3.

51 / 323

Let A be an m × n matrix. We write A ∈ R^(m×n). The number of pivots is bounded above by min{m, n}. If m < n (a "short" matrix), the columns of A are necessarily dependent. If m > n (a "tall" matrix), the rows of A are necessarily dependent. Example.

                    [ 1 1 1 1 ]    [ 1 0 -1 0 ]
A = [v1 v2 v3 v4] = [ 1 2 3 5 ] →  [ 0 1  2 0 ].
                    [ 2 3 4 5 ]    [ 0 0  0 1 ]

The columns of A are necessarily dependent; indeed, setting the free variable x3 = 1 yields the nontrivial combination v1 - 2v2 + v3 = 0. 52 / 323

Example. (continued)

    [ 1 1 1 1 ]
A = [ 1 2 3 5 ]
    [ 2 3 4 5 ]

Are the rows of A dependent or independent? The rows of A are the columns of the transpose of A, denoted A^T:

      [ 1 1 2 ]    [ 1 0 0 ]
A^T = [ 1 2 3 ] →  [ 0 1 0 ].
      [ 1 3 4 ]    [ 0 0 1 ]
      [ 1 5 5 ]    [ 0 0 0 ]

Now note that: Each column of A^T is a pivot column. ⟹ the solution set of A^T X = 0 is X = 0. ⟹ the columns of A^T are independent. ⟹ the rows of A are independent. 53 / 323

Chapter 1. Linear Equations in Linear Algebra 1.8 Introduction to Linear Transformations 1.9 The Matrix of a Linear Transformation 54 / 323

Definition. A linear transformation from R^n to R^m is a function T : R^n → R^m such that

T(u + v) = T(u) + T(v) for all u, v ∈ R^n,
T(αv) = αT(v) for all v ∈ R^n and α ∈ R.

Note that for any linear transformation, we necessarily have T(0) = T(0 + 0) = T(0) + T(0), so that T(0) = 0.

Example. Let A ∈ R^(m×n). Define T(X) = AX for X ∈ R^n. Then T : R^n → R^m, and

T(X + Y) = A(X + Y) = AX + AY = T(X) + T(Y),
T(αX) = A(αX) = αAX = αT(X).

We call T a matrix transformation. 55 / 323

Definition. Let T : R^n → R^m be a linear transformation. The range of T is the set

R(T) := {T(X) : X ∈ R^n}.

Note that R(T) ⊆ R^m and 0 ∈ R(T). We call T onto (or surjective) if R(T) = R^m. 56 / 323

Example. Determine if b is in the range of T(X) = AX, where

    [ 0 1 2 ]       [ 3  ]
A = [ 3 0 4 ],  b = [ 7  ].
    [ 5 6 0 ]       [ 11 ]

This is equivalent to asking if AX = b is consistent. By row reduction:

          [ 0 1 2 | 3  ]    [ 1 0 0 | 1 ]
[A | b] = [ 3 0 4 | 7  ] →  [ 0 1 0 | 1 ].
          [ 5 6 0 | 11 ]    [ 0 0 1 | 1 ]

Thus b ∈ R(T); indeed T(X) = b, where X = (1, 1, 1). 57 / 323

Example. Determine if T(X) = AX is onto, where

    [ 0 1 2 ]
A = [ 2 3 4 ].
    [ 3 2 1 ]

Equivalently, determine if AX = b is consistent for every b ∈ R^3. Equivalently, determine if the reduced form of A has a pivot in every row:

    [ 1 0 -1 ]
A → [ 0 1  2 ].
    [ 0 0  0 ]

Thus T is not onto. 58 / 323

Example. (Continued) In fact, by performing row reduction on [A | b] we can describe R(T) explicitly:

[ 0 1 2 | b1 ]    [ 1 0 -1 | -(3/2)b1 + (1/2)b2    ]
[ 2 3 4 | b2 ] →  [ 0 1  2 | b1                    ].
[ 3 2 1 | b3 ]    [ 0 0  0 | (5/2)b1 - (3/2)b2 + b3 ]

Thus R(T) = {b ∈ R^3 : (5/2)b1 - (3/2)b2 + b3 = 0}. 59 / 323

Definition. A linear transformation T : R^n → R^m is one-to-one (or injective) if T(X) = 0 ⟹ X = 0. More generally, a function f is one-to-one if f(x) = f(y) ⟹ x = y. For linear transformations, the two definitions are equivalent. In particular, T is one-to-one if: for each b, the solution set for T(X) = b has at most one element. For matrix transformations T(X) = AX, injectivity is equivalent to: the columns of A are independent; the reduced form of A has a pivot in every column. 60 / 323

Example. Let T(X) = AX, where

    [ 1 2 3 4 ]    [ 1 0 0 -1 ]
A = [ 4 3 2 1 ] →  [ 0 1 0  1 ].
    [ 1 3 2 4 ]    [ 0 0 1  1 ]

T is not one-to-one, since not every column has a pivot. T is onto, since every row has a pivot. 61 / 323

Summary. For a matrix transformation T (X) = AX. Let B denote the reduced echelon form of A. T is onto if and only if B has a pivot in every row. T is one-to-one if and only if B has a pivot in every column. 62 / 323

Matrix representations. Not all linear transformations are matrix transformations. However, each linear transformation T : R^n → R^m has a matrix representation. Let T : R^n → R^m, and let {e1, ..., en} denote the standard basis vectors in R^n, e.g. e1 = (1, 0, ..., 0) ∈ R^n. Define the matrix [T] ∈ R^(m×n) by

[T] = [T(e1) ... T(en)].

We call [T] the matrix representation of T. Knowing [T] is equivalent to knowing T (see below). 63 / 323

Matrix representations. Suppose T : R^n → R^m is linear, [T] = [T(e1) ... T(en)], and X = (x1, ..., xn) = x1 e1 + ... + xn en. By linearity,

T(X) = x1 T(e1) + ... + xn T(en) = [T] X.

Example. Suppose T : R^3 → R^4, with

T(e1) = (1, 2, 3, 4),  T(e2) = (2, 2, 3, 4),  T(e3) = (3, 2, 3, 4).

Then

      [ 1 2 3 ]
[T] = [ 2 2 2 ].
      [ 3 3 3 ]
      [ 4 4 4 ]

64 / 323
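The recipe "apply T to each standard basis vector and use the results as columns" can be sketched in plain Python (our own helper, with the example's images of e1, e2, e3 hard-coded):

```python
def matrix_of(T, n):
    """Build [T] column by column from the images of e_1, ..., e_n,
    as in [T] = [T(e_1) ... T(e_n)]. Returns an m x n list of rows."""
    cols = []
    for i in range(n):
        e = [0] * n
        e[i] = 1          # the i-th standard basis vector
        cols.append(T(e))
    m = len(cols[0])
    # Transpose the list of columns into a list of rows.
    return [[cols[j][i] for j in range(n)] for i in range(m)]

# The slides' example: T : R^3 -> R^4 with the given basis images.
images = {(1, 0, 0): [1, 2, 3, 4],
          (0, 1, 0): [2, 2, 3, 4],
          (0, 0, 1): [3, 2, 3, 4]}

def T(e):
    return images[tuple(e)]

# [T] has T(e1), T(e2), T(e3) as its columns:
assert matrix_of(T, 3) == [[1, 2, 3], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
```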

Matrix representations. If T : R^n → R^m is linear, then [T] ∈ R^(m×n), and so: [T] has m rows and n columns. T is onto ⟺ [T] has a pivot in every row. T is one-to-one ⟺ [T] has a pivot in every column. If m > n, then T cannot be onto. If m < n, then T cannot be one-to-one. 65 / 323

Linear transformations of the plane R^2. Suppose T : R^2 → R^2 is linear. Then

T(X) = [T] X = [T(e1) T(e2)] X = x T(e1) + y T(e2).

We consider several types of linear transformations with clear geometric meanings, including: shears, reflections, rotations, and compositions of the above. 66 / 323

Example. (Shear) Let λ ∈ R and consider

      [ 1 λ ]      [ x ]   [ x + λy ]
[T] = [ 0 1 ],   T [ y ] = [ y      ].

Then [T](1, 0) = (1, 0), [T](0, 1) = (λ, 1), and [T](0, -1) = (-λ, -1). 67 / 323

Example. (Reflection across the line y = x) Let

       [ 0 1 ] [ x ]   [ y ]
T(X) = [ 1 0 ] [ y ] = [ x ].

Note [T](1, 0) = (0, 1), [T](0, 1) = (1, 0), [T](2, 1) = (1, 2), and [T](-2, 1) = (1, -2). 68 / 323

Example. (Rotation by angle θ) Let

       [ cos θ  -sin θ ] [ x ]
T(X) = [ sin θ   cos θ ] [ y ].

Then [T](1, 0) = (cos θ, sin θ) and [T](0, 1) = (-sin θ, cos θ). 69 / 323

Example. (Composition) Let us now construct T that (i) reflects about the y-axis (x = 0) and then (ii) reflects about y = x. For these two steps,

(i)  [T1] = [ -1 0 ]      (ii)  [T2] = [ 0 1 ]
            [  0 1 ],                  [ 1 0 ].

We should then take T(X) = T2 ∘ T1 (X) = T2(T1(X)), that is,

       [ 0 1 ] ( [ -1 0 ] [ x ] )   [ 0 1 ] [ -x ]   [  y ]
T(X) = [ 1 0 ] ( [  0 1 ] [ y ] ) = [ 1 0 ] [  y ] = [ -x ].

Note that T = T2 ∘ T1 is a linear transformation, with

      [ 0 1 ]
[T] = [ -1 0 ].

70 / 323
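This composition can be reproduced as a matrix product (a sketch of ours; `matmul` is an ad hoc helper, and the matrix product here anticipates the rule [T2 ∘ T1] = [T2][T1] proved in Chapter 2):

```python
def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T1 = [[-1, 0], [0, 1]]   # (i) reflection about the y-axis
T2 = [[0, 1], [1, 0]]    # (ii) reflection about the line y = x
T = matmul(T2, T1)       # composition: first T1, then T2
assert T == [[0, 1], [-1, 0]]

# Applying T to (x, y) gives (y, -x), as computed above:
x, y = 3, 5
assert [sum(row[k] * v for k, v in enumerate((x, y))) for row in T] == [y, -x]
```

Note the order: the matrix of the transformation applied *first* sits on the right.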

2.1 Matrix Operations Chapter 2. Matrix Algebra 71 / 323

Addition and scalar multiplication of matrices. Let A, B ∈ R^(m×n) with entries Aij, Bij and let α ∈ R. We define A ± B and αA by specifying the ij-th entry:

(A ± B)ij := Aij ± Bij,   (αA)ij := αAij.

Example.

[ 1 2 ]     [ 6 7 ]   [ 31 37 ]
[ 3 4 ] + 5 [ 8 9 ] = [ 43 49 ].

Matrix addition and scalar multiplication obey the usual rules of arithmetic. 72 / 323

Matrix multiplication. Let A ∈ R^(m×r) and B ∈ R^(r×n) have entries aij, bij. The matrix product AB ∈ R^(m×n) is defined via its ij-th entry:

(AB)ij = Σ_{k=1}^{r} aik bkj.

If a ∈ R^r is a (row) vector and b ∈ R^r is a (column) vector, then we write

ab = a · b = Σ_{k=1}^{r} ak bk   (dot product).

73 / 323

Matrix multiplication. (Continued) If we view

    [ a1 ]
A = [ .. ] ∈ R^(m×r),   B = [b1 ... bn] ∈ R^(r×n),
    [ am ]

where the ai are the rows of A and the bj are the columns of B, then (AB)ij = ai bj. We may also write

AB = A[b1 ... bn] = [Ab1 ... Abn],

where the product of a matrix and a column vector is as before. 74 / 323

Example. Let

    [ 1 2 3 ]                  [ 1 2 ]
A = [ 4 5 6 ] ∈ R^(2×3),   B = [ 3 4 ] ∈ R^(3×2).
                               [ 5 6 ]

Then

     [ 22 28 ]         [ 9  12 15 ]
AB = [ 49 64 ],   BA = [ 19 26 33 ].
                       [ 29 40 51 ]

Remark. You should not expect AB = BA in general. Can you think of any examples for which AB = BA does hold? 75 / 323
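The entry formula (AB)ij = Σ_k aik bkj translates directly into code (a sketch of ours, not from the slides), and reproduces both products above:

```python
def matmul(A, B):
    """(AB)_ij = sum_k a_ik * b_kj, for A of size m x r and B of size r x n."""
    r = len(B)
    assert all(len(row) == r for row in A)  # inner dimensions must agree
    return [[sum(A[i][k] * B[k][j] for k in range(r))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]        # 2 x 3
B = [[1, 2], [3, 4], [5, 6]]      # 3 x 2
assert matmul(A, B) == [[22, 28], [49, 64]]
assert matmul(B, A) == [[9, 12, 15], [19, 26, 33], [29, 40, 51]]
```

Note that AB is 2 × 2 while BA is 3 × 3, so here AB = BA fails for dimension reasons alone.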

Definition. The identity matrix In ∈ R^(n×n) is given by

(In)ij = 1 if i = j,  0 if i ≠ j.

Properties of matrix multiplication. Let A ∈ R^(m×n) and α ∈ R. For B and C of appropriate dimensions:

A(BC) = (AB)C
A(B + C) = AB + AC
(A + B)C = AC + BC
α(AB) = (αA)B = A(αB)
Im A = A In = A.

76 / 323

Definition. If A ∈ R^(m×n) has ij-th entry aij, then the matrix transpose (or transposition) of A is the matrix A^T ∈ R^(n×m) with ij-th entry aji. One also writes A′ for A^T. Example.

    [ 1 2 3 ]          [ 1 4 ]
A = [ 4 5 6 ]  ⟹ A^T = [ 2 5 ].
                       [ 3 6 ]

Thus the columns and rows are interchanged. Properties.

(A^T)^T = A
(A + B)^T = A^T + B^T
(αA)^T = αA^T for α ∈ R
(AB)^T = B^T A^T

77 / 323

Proof of the last property.

((AB)^T)ij = (AB)ji = Σ_k ajk bki,

(B^T A^T)ij = Σ_k (B^T)ik (A^T)kj = Σ_k bki ajk.

Thus (AB)^T = B^T A^T. 78 / 323

Example. The transpose of a row vector is a column vector. Let

a = [ 1 ],   b = [ 3 ].
    [ 2 ]        [ 4 ]

Then a, b ∈ R^(2×1) (column vectors) and a^T, b^T ∈ R^(1×2) (row vectors):

a^T b = 11,   a b^T = [ 3 4 ]
                      [ 6 8 ].

79 / 323

Key fact. If T1 : R^m → R^n and T2 : R^n → R^k are linear transformations, then the matrix representation of the composition is given by

[T2 ∘ T1] = [T2][T1].

Remark. The dimensions are correct: T2 ∘ T1 : R^m → R^k, with

[T2 ∘ T1] ∈ R^(k×m),   [T1] ∈ R^(n×m),   [T2] ∈ R^(k×n),   [T2][T1] ∈ R^(k×m).

For matrix transformations, this is clear: if T1(x) = Ax and T2(x) = Bx, then

T2 ∘ T1 (x) = T2(T1(x)) = T2(Ax) = BAx.

80 / 323

Example. Recall that rotation by θ in R^2 is given by

       [ cos θ  -sin θ ]
[Tθ] = [ sin θ   cos θ ].

Thus rotation by 2θ is

        [ cos 2θ  -sin 2θ ]
[T2θ] = [ sin 2θ   cos 2θ ].

One can check that [T2θ] = [Tθ]^2 = [Tθ][Tθ]. 81 / 323
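This check can be done numerically (our own sketch, with a 2 × 2 helper product); multiplying [Tθ] by itself and comparing against [T2θ] amounts to verifying the double-angle identities cos 2θ = cos²θ - sin²θ and sin 2θ = 2 sin θ cos θ:

```python
import math

def rot(theta):
    """The rotation matrix [T_theta]."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    """Product of two 2 x 2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta = 0.7  # any angle works
sq = matmul(rot(theta), rot(theta))   # [T_theta]^2
dbl = rot(2 * theta)                  # [T_{2 theta}]
assert all(abs(sq[i][j] - dbl[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```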

Proof of the key fact. Recall that for T : R^n → R^m:

[T] = [T(e1) ... T(en)],   T(x) = [T] x.

So

[T2 ∘ T1] = [T2(T1(e1)) ... T2(T1(en))]
          = [[T2] T1(e1) ... [T2] T1(en)]
          = [T2] [T1(e1) ... T1(en)]     (*)
          = [T2][T1].

In (*), we have used the column-wise definition of matrix multiplication. 82 / 323

Chapter 2. Matrix Algebra 2.2 The Inverse of a Matrix 2.3 Characterizations of Invertible Matrices 83 / 323

Definition. Let A ∈ R^(n×n) (a square matrix). We call B ∈ R^(n×n) an inverse of A if AB = BA = In. Remark. If A has an inverse, then it is unique. Proof. Suppose AB = BA = In and AC = CA = In. Then

B = B In = B(AC) = (BA)C = In C = C.

If A has an inverse, then we denote it by A^(-1). Note (A^(-1))^(-1) = A. Remark. If A, B ∈ R^(n×n) are invertible, then AB is invertible. Indeed, (AB)^(-1) = B^(-1) A^(-1). 84 / 323

Example. If

    [ 1 2 ]                  [ -2     1  ]
A = [ 3 4 ],   then A^(-1) = [ 3/2  -1/2 ].

Note that to solve AX = b, we may set X = A^(-1) b. For example, the solution to AX = (1, 0) is X = A^(-1) (1, 0) = (-2, 3/2). 85 / 323

Questions. 1. When does A ∈ R^(n×n) have an inverse? 2. If A has an inverse, how do we compute it? Note that if A is invertible (has an inverse), then: AX = b has a solution for every b (namely, X = A^(-1) b); equivalently, A has a pivot in every row. If AX = 0, then X = A^(-1) 0 = 0; thus the columns of A are independent; equivalently, A has a pivot in every column. Conversely, we will show that if A has a pivot in every column or row, then A is invertible. Thus all of the above conditions are equivalent. 86 / 323

Goal. If A has a pivot in every column, then A is invertible. Since A is square, this is equivalent to saying that if the reduced echelon form of A is In, then A is invertible. Key observation. Elementary row operations correspond to multiplication by an invertible matrix. (See below.) With this observation, our hypothesis means that

Ek ⋯ E1 A = In

for some invertible matrices Ej. Thus

A = (Ek ⋯ E1)^(-1) = E1^(-1) ⋯ Ek^(-1).

In particular, A is invertible. Furthermore, this computes the inverse of A. Indeed, A^(-1) = Ek ⋯ E1. 87 / 323

It remains to show that elementary row operations correspond to multiplication by an invertible matrix (known as an elementary matrix). In fact, to write down the corresponding elementary matrix, one simply applies the row operation to In. Remark. A does not need to be square; the following works for any A ∈ R^(n×m). For concreteness, consider the 3 × 3 case. 88 / 323

"Multiply row one by non-zero α ∈ R" corresponds to multiplication by

    [ α 0 0 ]
E = [ 0 1 0 ].
    [ 0 0 1 ]

Indeed,

[ α 0 0 ] [ 1 2 3 ]   [ α 2α 3α ]
[ 0 1 0 ] [ 4 5 6 ] = [ 4  5  6 ].
[ 0 0 1 ] [ 7 8 9 ]   [ 7  8  9 ]

Note that E is invertible:

         [ 1/α 0 0 ]
E^(-1) = [ 0   1 0 ].
         [ 0   0 1 ]

89 / 323

"Interchange rows one and two" corresponds to multiplication by

    [ 0 1 0 ]
E = [ 1 0 0 ].
    [ 0 0 1 ]

Indeed,

[ 0 1 0 ] [ 1 2 3 ]   [ 4 5 6 ]
[ 1 0 0 ] [ 4 5 6 ] = [ 1 2 3 ].
[ 0 0 1 ] [ 7 8 9 ]   [ 7 8 9 ]

Note that E is invertible. In fact, E = E^(-1). 90 / 323

"Multiply row three by α and add it to row two" corresponds to multiplication by

    [ 1 0 0 ]
E = [ 0 1 α ].
    [ 0 0 1 ]

Indeed,

[ 1 0 0 ] [ 1 2 3 ]   [ 1      2      3      ]
[ 0 1 α ] [ 4 5 6 ] = [ 4 + 7α 5 + 8α 6 + 9α ].
[ 0 0 1 ] [ 7 8 9 ]   [ 7      8      9      ]

Note that E is invertible:

         [ 1 0  0 ]
E^(-1) = [ 0 1 -α ].
         [ 0 0  1 ]

91 / 323

Summary. A is invertible if and only if there exists a sequence of elementary matrices E_j so that E_k ⋯ E_1 A = I_n. Note that

E_k ⋯ E_1 [A I_n] = [E_k ⋯ E_1 A   E_k ⋯ E_1 I_n] = [I_n   E_k ⋯ E_1] = [I_n   A^{-1}].

Thus A is invertible if and only if [A I_n] ~ [I_n A^{-1}]. 92 / 323

Example 1.

[A I_2] = [1 2 | 1 0; 3 4 | 0 1] ~ [1 0 | −2 1; 0 1 | 3/2 −1/2] = [I_2 A^{-1}]

Thus A is invertible, with A^{-1} as above.

Example 2.

[A I_3] = [3 3 3 | 1 0 0; 1 0 1 | 0 1 0; 1 3 5 | 0 0 1] ~ [U B]

Here the row reduction stalls: the left block reduces to an echelon form U with fewer than three pivots, so A cannot be reduced to I_3 and is therefore not invertible. The right block records the matrix B satisfying BA = U. 93 / 323
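The [A I_n] ~ [I_n A^{-1}] procedure can be sketched as a short Gauss–Jordan routine; numpy and this particular function are illustrations, not part of the slides.

```python
import numpy as np

def inverse_by_row_reduction(A):
    """Row-reduce [A | I] to [I | A^-1]; raise if A is singular."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for j in range(n):
        # pivot: swap in a row with the largest entry in column j
        p = j + np.argmax(np.abs(M[j:, j]))
        if np.isclose(M[p, j], 0.0):
            raise ValueError("A is not invertible")
        M[[j, p]] = M[[p, j]]
        M[j] /= M[j, j]                    # scale the pivot row
        for i in range(n):                 # clear the rest of column j
            if i != j:
                M[i] -= M[i, j] * M[j]
    return M[:, n:]

A = np.array([[1., 2.], [3., 4.]])
Ainv = inverse_by_row_reduction(A)
# matches Example 1: A^-1 = [[-2, 1], [3/2, -1/2]]
```

Running it on the matrix of Example 1 reproduces the inverse computed on the slide.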

Some additional properties.

If A is invertible, then (A^T)^{-1} = (A^{-1})^T. Indeed, A^T (A^{-1})^T = (A^{-1} A)^T = I_n^T = I_n, and similarly (A^{-1})^T A^T = I_n.

Suppose AB is invertible. Then A[B(AB)^{-1}] = (AB)(AB)^{-1} = I_n. Thus A[B(AB)^{-1}]b = b for any b ∈ R^n, so that Ax = b has a solution for every b. Thus A has a pivot in every row, so that A is invertible. Similarly, B is invertible.

Conclusion. AB is invertible if and only if A and B are both invertible. 94 / 323

Some review. Let A ∈ R^{m×n}, x ∈ R^n, and b ∈ R^m.

Row pivots. The following are equivalent:
- A has a pivot in every row
- Ax = b is consistent for every b ∈ R^m
- the columns of A span R^m
- the transformation T(x) = Ax maps R^n onto R^m
- the rows of A are independent (see below)

Column pivots. The following are equivalent:
- A has a pivot in every column
- Ax = 0 ⟹ x = 0
- the columns of A are independent
- the transformation T(x) = Ax is one-to-one 95 / 323

Claim. If A ∈ R^{m×n} has m pivots, then the rows of A are independent. (The converse is also true. Why?)

Proof. By hypothesis, BA = U, or equivalently A = B^{-1}U, where B ∈ R^{m×m} is a product of elementary matrices and U has a pivot in each row. Suppose A^T x = 0. Then, since U^T has a pivot in each column,

U^T [(B^{-1})^T x] = 0 ⟹ (B^{-1})^T x = 0 ⟹ x = 0

by invertibility. Thus the columns of A^T (i.e. the rows of A) are independent. 96 / 323

When A is square, all of the above equivalences hold, in addition to the following:
- There exists C ∈ R^{n×n} so that CA = I_n. (This gives Ax = 0 ⟹ x = 0.)
- There exists D ∈ R^{n×n} so that AD = I_n. (This gives that Ax = b is consistent for every b.)
- A is invertible.
- A^T is invertible. 97 / 323

Definition. Let T : R^n → R^n be a linear transformation. We say T is invertible if there exists S : R^n → R^n such that T∘S(x) = S∘T(x) = x for all x ∈ R^n.

If T(x) = Ax, this is equivalent to A being invertible, with S(x) = A^{-1}x. If T has an inverse, it is unique and denoted T^{-1}.

The following are equivalent for a linear transformation T : R^n → R^n:
- T is invertible
- T is left-invertible (there exists S so that S∘T(x) = x)
- T is right-invertible (there exists S so that T∘S(x) = x). 98 / 323

2.5 Matrix Factorization Chapter 2. Matrix Algebra 99 / 323

Definition. A matrix A = [a_ij] ∈ R^{m×n} is lower triangular if a_ij = 0 for all i < j. We call A unit lower triangular if additionally a_ii = 1 for all i = 1, ..., min{m, n}.

Example. The elementary matrix E corresponding to the row replacement R_j → αR_i + R_j, i < j, is unit lower triangular, as is its inverse. E.g. (in the 3×3 case):

R_2 → αR_1 + R_2 ⟹ E = [1 0 0; α 1 0; 0 0 1], E^{-1} = [1 0 0; −α 1 0; 0 0 1]. 100 / 323

We can similarly define upper triangular and unit upper triangular matrices.

Note that the product of (unit) lower triangular matrices is (unit) lower triangular: for i < j,

(AB)_ij = Σ_k a_ik b_kj = 0,

since a_ik = 0 unless k ≤ i and b_kj = 0 unless k ≥ j, and no k satisfies both when i < j. The same is true for upper triangular matrices.

Definition. We call P ∈ R^{m×m} a permutation matrix if it is a product of elementary row-exchange matrices. 101 / 323

LU Factorization. For any A ∈ R^{m×n}, there exists a permutation matrix P ∈ R^{m×m} and an upper triangular matrix U ∈ R^{m×n} (in echelon form) such that PA ~ U. Moreover, the elementary matrices used to reduce PA to U may all be taken to be lower triangular and of the type

R_j → αR_i + R_j for some i < j.

Thus

E_k ⋯ E_1 PA = U

for some unit lower triangular (elementary) matrices E_j, and so

PA = (E_1^{-1} ⋯ E_k^{-1})U = LU

for some unit lower triangular L. 102 / 323

The LU factorization is also used to solve systems of linear equations.

Example. Solve Ax = b, where

A = LU = [1 0; 1 1][2 2; 0 2], b = [3; 3].

1. Solve Ly = b:

[1 0; 1 1][y_1; y_2] = [3; 3] ⟹ y_1 = 3, y_1 + y_2 = 3 ⟹ y_1 = 3, y_2 = 0.

2. Solve Ux = y:

[2 2; 0 2][x_1; x_2] = [3; 0] ⟹ 2x_1 + 2x_2 = 3, 2x_2 = 0 ⟹ x_1 = 3/2, x_2 = 0. 103 / 323
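The two triangular solves in this example can be sketched in numpy; the explicit loops below are an illustration of forward and back substitution, not part of the slides.

```python
import numpy as np

L = np.array([[1., 0.], [1., 1.]])   # unit lower triangular factor
U = np.array([[2., 2.], [0., 2.]])   # upper triangular factor
b = np.array([3., 3.])

# 1. forward substitution: solve L y = b from the top row down
y = np.zeros(2)
for i in range(2):
    y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]

# 2. back substitution: solve U x = y from the bottom row up
x = np.zeros(2)
for i in reversed(range(2)):
    x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
# x = (3/2, 0), matching the example
```

Each solve costs only O(n^2) operations, which is why reusing L and U for many right-hand sides b is cheap.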

This process is computationally efficient when A is very large and one must solve many systems in which A stays fixed while b varies. See the Numerical Notes sections in the book for more details. We next compute some examples of LU factorization, beginning with the case where P is the identity matrix. 104 / 323

Example 1. Let

A = [2 3 4 1; 1 3 2 4; 1 2 3 4].

We put A into upper triangular form via row replacements:

A ~ (R_2 → −(1/2)R_1 + R_2, R_3 → −(1/2)R_1 + R_3) ~ [2 3 4 1; 0 3/2 0 7/2; 0 1/2 1 7/2] ~ (R_3 → −(1/3)R_2 + R_3) ~ [2 3 4 1; 0 3/2 0 7/2; 0 0 1 7/3] = U.

We have three unit lower triangular elementary matrices E_1, E_2, E_3 so that E_3 E_2 E_1 A = U, i.e. A = E_1^{-1} E_2^{-1} E_3^{-1} U. That is, we construct L via row reduction:

L = [1 0 0; 1/2 1 0; 1/2 1/3 1], A = LU. 105 / 323

Example 1. (continued) Note that

E_1 = [1 0 0; −1/2 1 0; 0 0 1]  (R_2 → −(1/2)R_1 + R_2),
E_2 = [1 0 0; 0 1 0; −1/2 0 1]  (R_3 → −(1/2)R_1 + R_3),
E_3 = [1 0 0; 0 1 0; 0 −1/3 1]  (R_3 → −(1/3)R_2 + R_3),

so that

E_3 E_2 E_1 = [1 0 0; −1/2 1 0; −1/3 −1/3 1], L = (E_3 E_2 E_1)^{-1} = [1 0 0; 1/2 1 0; 1/2 1/3 1].

In particular, E_3 E_2 E_1 L = I_3. 106 / 323

Example 1. (continued) Altogether, we have the LU factorization:

[2 3 4 1; 1 3 2 4; 1 2 3 4] = [1 0 0; 1/2 1 0; 1/2 1/3 1][2 3 4 1; 0 3/2 0 7/2; 0 0 1 7/3].

Let us next consider an example where P will not equal the identity. 107 / 323

Example 2. Let

A = [2 1 1 1; 2 1 1 1; 4 2 1 0].

We try to put A into echelon form by using the row replacements R_2 → −R_1 + R_2 and R_3 → −2R_1 + R_3. This corresponds to

EA := [1 0 0; −1 1 0; −2 0 1] A = [2 1 1 1; 0 0 0 0; 0 0 −1 −2].

However, we now need a row interchange (R_2 ↔ R_3). This corresponds to multiplication by

P = [1 0 0; 0 0 1; 0 1 0]. Not lower triangular! 108 / 323

Example 2. (continued) So far, we have written PEA = U with E unit lower triangular and U in echelon form. Thus (since P = P^{-1}),

A = E^{-1} P U.

However, E^{-1}P is not lower triangular:

E^{-1}P = [1 0 0; 1 1 0; 2 0 1][1 0 0; 0 0 1; 0 1 0] = [1 0 0; 1 0 1; 2 1 0].

But if we multiply by P again, we get the desired factorization:

PA = LU, L = PE^{-1}P = [1 0 0; 2 1 0; 1 0 1]. 109 / 323
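The factorization PA = LU from Example 2 can be verified numerically; the matrices below are taken from the example, with the minus signs restored, and numpy is used purely as a checking tool.

```python
import numpy as np

A = np.array([[2., 1., 1., 1.],
              [2., 1., 1., 1.],
              [4., 2., 1., 0.]])
P = np.array([[1., 0., 0.],      # interchanges rows 2 and 3
              [0., 0., 1.],
              [0., 1., 0.]])
L = np.array([[1., 0., 0.],      # unit lower triangular
              [2., 1., 0.],
              [1., 0., 1.]])
U = np.array([[2., 1., 1., 1.],  # echelon form of PA
              [0., 0., -1., -2.],
              [0., 0., 0., 0.]])
# PA equals LU: permuting the rows first makes a triangular factorization possible
```

Multiplying out P @ A and L @ U confirms the slide's computation.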

Chapter 2. Matrix Algebra 2.7 Applications to Computer Graphics 110 / 323

Definition. We call (y, h) ∈ R^{n+1} (with h ≠ 0) homogeneous coordinates for x ∈ R^n if x = (1/h)y, that is, x_j = (1/h)y_j for 1 ≤ j ≤ n. In particular, (x, 1) are homogeneous coordinates for x.

Homogeneous coordinates can be used to describe more general transformations of R^n than merely linear transformations. 111 / 323

Example. Let x_0 ∈ R^n and define

T((x, 1)) = [I_n x_0; 0 1][x; 1] = [x + x_0; 1].

This is a linear transformation on homogeneous coordinates that corresponds to the translation x ↦ x + x_0. Note that translation in R^n is not a linear transformation if x_0 ≠ 0. (Why not?) 112 / 323

To represent a linear transformation on R^n, say T(x) = Ax, in homogeneous coordinates, we use

T((x, 1)) = [A 0; 0 1][x; 1] = [Ax; 1].

We can then compose translations and linear transformations to produce either x ↦ Ax + x_0 or x ↦ A(x + x_0). 113 / 323

Graphics in three dimensions. Applying successive linear transformations and translations to the homogeneous coordinates of the points that define the outline of an object in R^3 produces the homogeneous coordinates of the translated/deformed outline of the object. See the Practice Problem in the textbook. This also works in the plane. 114 / 323

Example 1. Find the transformation that translates by (0, −8) in the plane and then reflects across the line y = x.

Solution:

[0 1 0; 1 0 0; 0 0 1][1 0 0; 0 1 −8; 0 0 1][x; y; 1] = [0 1 −8; 1 0 0; 0 0 1][x; y; 1]

Example 2. Find the transformation that rotates points by an angle θ about the point (3, 1):

Solution:

[1 0 3; 0 1 1; 0 0 1][cos θ −sin θ 0; sin θ cos θ 0; 0 0 1][1 0 −3; 0 1 −1; 0 0 1][x; y; 1]

Note the order of operations in each example. 115 / 323

Example 1. (Continued) What is the effect of the transformation in Example 1 on the vertices (0, 0), (3, 0), (3, −4)?

Solution:

[0 1 −8; 1 0 0; 0 0 1][0 3 3; 0 0 −4; 1 1 1] = [−8 −8 −12; 0 3 3; 1 1 1]

Thus (0, 0) ↦ (−8, 0), (3, 0) ↦ (−8, 3), (3, −4) ↦ (−12, 3). 116 / 323
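The vertex computation above can be replayed in numpy, with the vertices stored as columns in homogeneous coordinates; the signs here follow the reconstruction of the example (the extracted slides lost the minus signs), and numpy itself is just an illustration.

```python
import numpy as np

# translate by (0, -8), then reflect across y = x, as one homogeneous matrix
M = np.array([[0., 1., -8.],
              [1., 0., 0.],
              [0., 0., 1.]])

# vertices (0,0), (3,0), (3,-4) as columns in homogeneous coordinates
pts = np.array([[0., 3., 3.],
                [0., 0., -4.],
                [1., 1., 1.]])

images = M @ pts
# columns give the images (-8, 0), (-8, 3), (-12, 3)
```

Stacking the points as columns lets one matrix product transform every vertex at once, which is exactly how graphics pipelines use homogeneous coordinates.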

Perspective projection. Consider a light source at the point (0, 0, d) ∈ R^3, where d > 0. A ray of light passing through a point (x, y, z) ∈ R^3 with 0 ≤ z < d will intersect the xy-plane at a point (x*, y*, 0). Understanding the map (x, y, z) ↦ (x*, y*) allows us to represent shadows. (One could also imagine projection onto other 2d surfaces.) By some basic geometry (similar triangles, for example), one can deduce

x* = x/(1 − z/d), y* = y/(1 − z/d).

In particular, we find that (x, y, 0, 1 − z/d) are homogeneous coordinates for (x*, y*, 0). 117 / 323

Perspective projection. (Continued) Note that the mapping

[1 0 0 0; 0 1 0 0; 0 0 0 0; 0 0 −1/d 1][x; y; z; 1] = [x; y; 0; 1 − z/d]

takes homogeneous coordinates of (x, y, z) to the homogeneous coordinates of (x*, y*, 0). Using this, one can understand how the shadows of objects in R^3 would move under translations/deformations. 118 / 323
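The projection matrix above translates directly into code; the value d = 10 and the sample point are arbitrary choices for illustration, not from the slides.

```python
import numpy as np

d = 10.0                        # light source at (0, 0, d); example value
P = np.array([[1., 0., 0.,     0.],
              [0., 1., 0.,     0.],
              [0., 0., 0.,     0.],
              [0., 0., -1./d,  1.]])

def project(x, y, z):
    """Shadow of (x, y, z) on the xy-plane, via homogeneous coordinates."""
    xh, yh, zh, h = P @ np.array([x, y, z, 1.0])
    return xh / h, yh / h       # dehomogenize: divide by h = 1 - z/d

# for the point (2, 4, 5): 1 - z/d = 0.5, so the shadow is (4, 8)
```

The division by h = 1 − z/d is the dehomogenization step; points closer to the light (z near d) cast shadows farther from the origin.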

3.1 Introduction to determinants 3.2 Properties of determinants Chapter 3. Determinants 119 / 323

Definition. The determinant of a matrix A ∈ R^{n×n}, denoted det A, is defined inductively. Writing A = (a_ij), we have the following:
- If n = 1, then det A = a_11.
- If n ≥ 2, then

det A = Σ_{j=1}^{n} (−1)^{1+j} a_1j det A_1j,

where A_ij is the (n−1)×(n−1) submatrix obtained by removing the i-th row and j-th column from A. 120 / 323

Example. Consider the 2×2 case:

A = [a b; c d].

The 1×1 submatrices A_11 and A_12 are given by A_11 = [d], A_12 = [c]. Thus

det A = Σ_{j=1}^{2} (−1)^{1+j} a_1j det A_1j = a_11 det A_11 − a_12 det A_12 = ad − bc.

Conclusion. det [a b; c d] = ad − bc. 121 / 323

Examples.

det [1 2; 3 4] = 1·4 − 2·3 = −2.

det [λ 0; 0 µ] = λµ.

det [1 2; 3 6] = 1·6 − 2·3 = 0. 122 / 323

Example. Consider

A = [1 2 3; 1 3 4; 1 3 6].

Then

A_11 = [3 4; 3 6], A_12 = [1 4; 1 6], A_13 = [1 3; 1 3],

and

det A = Σ_{j=1}^{3} (−1)^{1+j} a_1j det A_1j = 1·det A_11 − 2·det A_12 + 3·det A_13 = 6 − 4 + 0 = 2.

Note. Note the alternating ±1 pattern. 123 / 323

Definition. Given an n×n matrix A, the terms C_ij := (−1)^{i+j} det A_ij are called the cofactors of A. Recall A_ij is the (n−1)×(n−1) matrix obtained by removing the i-th row and j-th column from A. Note

det A = Σ_{j=1}^{n} (−1)^{1+j} a_1j det A_1j = Σ_{j=1}^{n} a_1j C_1j.

We call this the cofactor expansion of det A using row 1. 124 / 323

Claim. The determinant can be computed using the cofactor expansion of det A along any row or any column. That is,

det A = Σ_{j=1}^{n} a_ij C_ij for any fixed row i,
det A = Σ_{i=1}^{n} a_ij C_ij for any fixed column j.

The first expression is the cofactor expansion using row i. The second expression is the cofactor expansion using column j. 125 / 323
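The inductive definition translates directly into a short recursive function; this plain-Python sketch (expansion along the first row) is an illustration, not part of the course materials.

```python
def minor(A, i, j):
    """Submatrix with row i and column j removed (0-indexed)."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Cofactor expansion along row 0, exactly as in the definition."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

# the slide's example: det [[1,2,3],[1,3,4],[1,3,6]] = 2
```

This recursion costs on the order of n! operations, which is why row reduction (coming up shortly) is the practical way to compute determinants.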

Example. Consider again

A = [1 2 3; 1 3 4; 1 3 6], det A = 2.

Let's compute the determinant using column 1. Using

A_11 = [3 4; 3 6], A_21 = [2 3; 3 6], A_31 = [2 3; 3 4],

we get

det A = Σ_{i=1}^{3} a_i1 C_i1 = a_11 det A_11 − a_21 det A_21 + a_31 det A_31 = 1·6 − 1·3 + 1·(−1) = 2.

Remark. Don't forget the factor (−1)^{i+j} in C_ij. 126 / 323

Use the flexibility afforded by cofactor expansion to simplify your computations: use the row or column with the most zeros.

Example. Consider

A = [1 2 3; 4 0 5; 0 6 0].

Using row 3,

det A = −6·det A_32 = −6·det [1 3; 4 5] = −6·(−7) = 42. 127 / 323

Properties of the determinant.
- det I_n = 1
- det AB = det A det B
- det A^T = det A

Note that if A ∈ R^{n×n} is invertible, then

1 = det I_n = det(AA^{-1}) = det A · det(A^{-1}).

In particular, det A ≠ 0 and det(A^{-1}) = [det A]^{-1}. 128 / 323

Determinants of elementary matrices.
- If E corresponds to R_i → αR_i (for α ≠ 0), then det E = α.
- If E corresponds to a row interchange, then det E = −1.
- If E corresponds to R_j → αR_i + R_j, then det E = 1.

In particular, in each case det E ≠ 0. Let's check these in the simple 2×2 case. 129 / 323

Determinants of elementary matrices. Recall that the elementary matrix corresponding to a row operation is obtained by applying this operation to the identity matrix.

Scaling: E = [α 0; 0 1] ⟹ det E = α.

Interchange: E = [0 1; 1 0] ⟹ det E = −1.

Replacement (R_2 → αR_1 + R_2): E = [1 0; α 1] ⟹ det E = 1. 130 / 323

Row reduction and determinants. Suppose A ~ U, with U = E_k ⋯ E_1 A. Then

det A = (det E_1)^{-1} ⋯ (det E_k)^{-1} det U.

Suppose that U is in upper echelon form. Then det U = u_11 ⋯ u_nn. Indeed, this is true for any upper triangular matrix (use the cofactor expansion along the first column repeatedly).

Thus, row reduction provides another means of computing determinants! 131 / 323

Example 1.

A = [1 2 3; 2 4 10; 3 8 9] ~ (R_2 → −2R_1 + R_2, R_3 → −3R_1 + R_3) ~ [1 2 3; 0 0 4; 0 2 0] ~ (R_2 ↔ R_3) ~ [1 2 3; 0 2 0; 0 0 4] = U.

Then det A = (−1)·det U = −(1·2·4) = −8. 132 / 323
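The row-reduction recipe for determinants can be sketched as code: reduce to triangular form with replacements (determinant unchanged) and interchanges (each flips the sign), then multiply the diagonal. This numpy sketch is an illustration, not part of the slides.

```python
import numpy as np

def det_by_elimination(A):
    """Triangularize with row replacements and swaps, tracking sign flips."""
    U = A.astype(float).copy()
    n, sign = U.shape[0], 1.0
    for j in range(n):
        p = j + np.argmax(np.abs(U[j:, j]))   # pick a usable pivot row
        if np.isclose(U[p, j], 0.0):
            return 0.0                         # no pivot: determinant is 0
        if p != j:
            U[[j, p]] = U[[p, j]]
            sign = -sign                       # each interchange flips the sign
        for i in range(j + 1, n):
            U[i] -= (U[i, j] / U[j, j]) * U[j]  # replacement: det unchanged
    return sign * U.diagonal().prod()

A = np.array([[1., 2., 3.], [2., 4., 10.], [3., 8., 9.]])
# det_by_elimination(A) = -8, matching Example 1
```

Unlike cofactor expansion, this runs in O(n^3) time, which is how determinants are computed in practice.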

Example 2.

A = [1 2 3; 0 4 5; 6 12 18] ~ [1 2 3; 0 4 5; 0 0 0].

Thus det A = 0. 133 / 323

Invertibility. We saw above that if U = (u_ij) is an upper echelon form for A, then det A = c·u_11 ⋯ u_nn for some c ≠ 0. We also saw that if A is invertible, then det A ≠ 0. Equivalently, det A = 0 ⟹ A is not invertible.

On the other hand, if A is not invertible then it has fewer than n pivot columns, and hence some u_ii = 0. Thus

A not invertible ⟹ det A = c·u_11 ⋯ u_nn = 0.

So in fact the two conditions are equivalent. 134 / 323

Invertibility Theorem. The following are equivalent:
- A is invertible.
- The reduced row echelon form of A is I_n.
- A has n pivot columns (and n pivot rows).
- det A ≠ 0. 135 / 323

Examples.

Recall A = [1 2 3; 4 0 5; 0 6 0] ⟹ det A = 42. Thus A is invertible and det A^{-1} = 1/42.

If det AB = det A det B ≠ 0, then A, B, and AB are invertible.

Consider the matrix

M(λ) = [2−λ 1; 1 2−λ], λ ∈ R.

For which λ is M(λ) not invertible? Answer: Compute

det M(λ) = (2−λ)^2 − 1 = (λ−1)(λ−3) = 0 ⟹ λ = 1, 3. 136 / 323
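The last example is easy to sanity-check numerically: the determinant of M(λ) should vanish exactly at λ = 1 and λ = 3. The numpy check below is an illustration, not part of the slides.

```python
import numpy as np

def M(lam):
    """The parametrized matrix from the example."""
    return np.array([[2. - lam, 1.], [1., 2. - lam]])

# det M(lam) = (2 - lam)^2 - 1, so M is singular exactly at lam = 1 and lam = 3
```

This is a first glimpse of eigenvalues: λ = 1, 3 are precisely the eigenvalues of [2 1; 1 2].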

4.1 Vector Spaces and Subspaces Chapter 4. Vector Spaces 137 / 323

Definition. A vector space V over a field of scalars F is a non-empty set together with two operations, namely addition and scalar multiplication, which obey the following rules for all u, v, w ∈ V and α, β ∈ F:
- u + v ∈ V
- u + v = v + u
- (u + v) + w = u + (v + w)
- there exists 0 ∈ V such that 0 + u = u
- there exists −u ∈ V such that u + (−u) = 0
- αv ∈ V
- α(u + v) = αu + αv
- (α + β)u = αu + βu
- α(βu) = (αβ)u
- 1u = u 138 / 323

Remark 1. A field is another mathematical object with its own long list of defining axioms, but in this class we will always just take F = R or F = C.

Remark 2. One typically just refers to the vector space V without explicit reference to the underlying field.

Remark 3. The following are consequences of the axioms: 0u = 0, α0 = 0, −u = (−1)u. 139 / 323

Examples.
- V = R^n and F = R
- V = C^n and F = R or C
- V = P_n (polynomials of degree n or less) and F = R
- V = S, the set of all doubly-infinite sequences (..., x_{−2}, x_{−1}, x_0, x_1, ...), and F = R
- V = F(D), the set of all functions defined on a domain D, and F = R. 140 / 323

Definition. Let V be a vector space and W a subset of V. If W is also a vector space under the same vector addition and scalar multiplication, then W is a subspace of V. Equivalently, W ⊂ V is a subspace if u + v ∈ W and αv ∈ W for any u, v ∈ W and any scalar α.

Example 1. If 0 ∉ W, then W is not a subspace of V. Thus W = {x ∈ R^2 : x_1 + x_2 = 2} is not a subspace of R^2.

Example 2. The set W = {x ∈ R^2 : x_1 x_2 ≥ 0} is not a subspace of R^2. Indeed, [1, 0]^T + [0, −1]^T ∉ W. 141 / 323

Further examples and non-examples.
- W = R^n is a subspace of V = C^n with F = R
- W = R^n is not a subspace of V = C^n with F = C
- W = P_n is a subspace of V = F(D)
- W = S_+, the set of doubly-infinite sequences such that x_k = 0 for k > 0, is a subspace of S
- W = {(x, y) ∈ R^2 : x, y ∈ Z} is not a subspace of R^2 142 / 323

Span as subspace. Let v_1, ..., v_k be a collection of vectors in R^n. Then

W := span{v_1, ..., v_k} = {c_1 v_1 + ⋯ + c_k v_k : c_1, ..., c_k ∈ F}

is a subspace of R^n. Indeed, if u, v ∈ W and α ∈ F, then u + v ∈ W and αu ∈ W. (Why?) 143 / 323

Subspaces associated with A ∈ R^{m×n}.
- The column space of A, denoted col(A), is the span of the columns of A.
- The row space of A, denoted row(A), is the span of the rows of A.
- The null space of A, denoted nul(A), is

nul(A) = {x ∈ R^n : Ax = 0} ⊂ R^n.

Note that nul(A) is a subspace of R^n; however, {x ∈ R^n : Ax = b} is not a subspace if b ≠ 0. (Why not?) 144 / 323

Example. Let

A = [1 2 0 3; 1 2 1 −1] ~ [1 2 0 3; 0 0 1 −4].

The solution set to Ax = 0 is written in parametric vector form as

x = x_2 [−2; 1; 0; 0] + x_4 [−3; 0; 4; 1].

That is,

nul(A) = span{ [−2; 1; 0; 0], [−3; 0; 4; 1] }. 145 / 323
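Numerically, a null space basis can also be read off from the singular value decomposition: the rows of Vh belonging to zero singular values span nul(A). This numpy sketch (with the matrix's signs restored from the slide) gives an orthonormal basis rather than the parametric-form basis, but both span the same subspace.

```python
import numpy as np

A = np.array([[1., 2., 0., 3.],
              [1., 2., 1., -1.]])

# rows of Vh whose singular values are (numerically) zero span nul(A)
_, s, Vh = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vh[rank:]          # here a 2 x 4 array: dim nul(A) = 4 - rank = 2
```

Each row v of null_basis satisfies Av = 0, and the number of rows equals the number of free variables in the parametric form.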

Example. Let

W = { [s + 3t; 8t; s − t] : s, t ∈ R }.

This is a subspace of R^3, since

[s + 3t; 8t; s − t] = s[1; 0; 1] + t[3; 8; −1],

and hence W = span{ [1; 0; 1], [3; 8; −1] }. 146 / 323

Chapter 4. Vector Spaces 4.2 Null Spaces, Column Spaces, and Linear Transformations 147 / 323

Null space. Recall that the null space of A ∈ R^{m×n} is the solution set to Ax = 0, denoted nul(A). Note nul(A) is a subspace of R^n: for x, y ∈ nul(A) and α ∈ R,

A(x + y) = Ax + Ay = 0 + 0 = 0 (closed under addition),
A(αx) = αAx = α0 = 0 (closed under scalar multiplication).

In fact, by writing the solution set to Ax = 0 in parametric vector form, we can identify nul(A) as the span of a set of vectors. (We saw such an example last time.) 148 / 323

Column space. Recall that the column space of A ∈ R^{m×n} is the span of the columns of A, denoted col(A). Recall that col(A) is a subspace of R^m.
- Note that b ∈ col(A) precisely when Ax = b is consistent.
- Note that col(A) = R^m when A has a pivot in every row.

Using row reduction, we can describe col(A) as the span of a set of vectors. 149 / 323

Example.

[A b] = [1 1 1 1 | b_1; 1 1 0 0 | b_2; 1 1 3 3 | b_3] ~ [1 1 1 1 | b_1; 0 0 1 1 | b_1 − b_2; 0 0 0 0 | b_3 + 2b_2 − 3b_1]

Thus b ∈ col(A) if and only if b_3 + 2b_2 − 3b_1 = 0, i.e.

b = [b_1; b_2; 3b_1 − 2b_2] = b_1[1; 0; 3] + b_2[0; 1; −2].

In particular, col(A) = span{ [1; 0; 3], [0; 1; −2] }. 150 / 323
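Membership in col(A) can also be tested numerically: b ∈ col(A) exactly when appending b as an extra column does not increase the rank. This numpy sketch uses the matrix reconstructed from the example (signs restored) and is an illustration, not part of the slides.

```python
import numpy as np

A = np.array([[1., 1., 1., 1.],
              [1., 1., 0., 0.],
              [1., 1., 3., 3.]])

def in_col_space(A, b):
    """b is in col(A) iff [A | b] has the same rank as A."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

# b = (1, 1, 1) satisfies b3 = 3b1 - 2b2, so Ax = b is consistent;
# b = (0, 0, 1) violates it, so Ax = b is not
```

This rank test is exactly the "no pivot in the augmented column" criterion for consistency, phrased numerically.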

Definition. Let V and W be vector spaces. A linear transformation T : V → W is a function such that for all u, v ∈ V and α ∈ F,

T(u + v) = T(u) + T(v) and T(αu) = αT(u).

Recall that linear transformations from R^n to R^m are represented by matrices:

T(x) = Ax, A = [T] = [T(e_1) ⋯ T(e_n)] ∈ R^{m×n}.

In this case, col(A) = R(T) (the range of T). For linear transformations, one defines the kernel of T by

N(T) = {u ∈ V : T(u) = 0}.

For matrix transformations, N(T) = nul(A). 151 / 323

Review. We can add some new items to our list of equivalent conditions:
- Row pivots. A matrix A ∈ R^{m×n} has a pivot in every row if and only if col(A) = R^m.
- Column pivots. A matrix A ∈ R^{m×n} has a pivot in every column if and only if nul(A) = {0}.

Furthermore, if A is a square matrix, these two conditions are equivalent. 152 / 323

Chapter 4. Vector Spaces 4.3 Linearly Independent Sets; Bases 153 / 323

Definition. A set of vectors {v_1, ..., v_n} in a vector space V is linearly independent if

c_1 v_1 + ⋯ + c_n v_n = 0 ⟹ c_1 = ⋯ = c_n = 0.

Otherwise, the set is linearly dependent. 154 / 323

Example 1. The set {cos t, sin t} is linearly independent in F(R); indeed, if c_1 cos t + c_2 sin t ≡ 0, then c_1 = 0 (set t = 0) and c_2 = 0 (set t = π/2).

Example 2. The set {1, cos^2 t, sin^2 t} is linearly dependent in F(R); indeed, cos^2 t + sin^2 t − 1 ≡ 0.

For a linearly dependent set of two or more vectors, at least one of the vectors can be written as a linear combination of the others. 155 / 323

Example. Show that p_1(t) = 2t + 1, p_2(t) = t, p_3(t) = 4t + 3 are dependent vectors in P_1.

We need to find a non-trivial solution to

x_1 p_1(t) + x_2 p_2(t) + x_3 p_3(t) = 0.

Expanding the left-hand side, this is equivalent to

t(2x_1 + x_2 + 4x_3) + (x_1 + 3x_3) = 0 for all t.

This can only happen if x_1, x_2, x_3 satisfy

2x_1 + x_2 + 4x_3 = 0,
x_1 + 3x_3 = 0. 156 / 323

Example. (Continued) To solve this linear system, we use the augmented matrix:

[2 1 4; 1 0 3] ~ [1 0 3; 0 1 −2].

The solution set is therefore (x_1, x_2, x_3) = (−3z, 2z, z) for any z ∈ R. In particular, (−3, 2, 1) is a solution, and hence

−3p_1(t) + 2p_2(t) + p_3(t) = 0,

showing that {p_1, p_2, p_3} is linearly dependent. 157 / 323
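The dependence relation can be checked by working with coefficient vectors: a polynomial a + bt in P_1 corresponds to the vector (a, b), and the linear combination of polynomials becomes a linear combination of vectors. This numpy check is an illustration, not part of the slides.

```python
import numpy as np

# coefficient vectors (constant term, t coefficient):
# p1 = 2t + 1, p2 = t, p3 = 4t + 3
p1, p2, p3 = np.array([1, 2]), np.array([0, 1]), np.array([3, 4])

combo = -3 * p1 + 2 * p2 + 1 * p3   # the solution (x1, x2, x3) = (-3, 2, 1)
# combo is the zero vector, i.e. -3*p1 + 2*p2 + p3 is the zero polynomial
```

Identifying P_1 with R^2 this way is exactly the isomorphism idea developed in Section 4.4 below.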

Definition. Let W be a subspace of V. A set of vectors B = {b_1, ..., b_n} is a basis for W if (i) B is linearly independent, and (ii) W = span(B). (The plural of basis is bases.)

Examples.
- B = {e_1, ..., e_n} is the standard basis for R^n.
- B = {1, t, ..., t^n} is the standard basis for P_n.
- B = {v_1, ..., v_n} ⊂ R^n is a basis for R^n if and only if A = [v_1 ⋯ v_n] ~ I_n. A pivot in every column means the columns of A are independent; a pivot in every row means col(A) = R^n. 158 / 323

Bases for the null space. Recall that for A ∈ R^{m×n} we have the subspace nul(A) = {x ∈ R^n : Ax = 0} ⊂ R^n. Suppose

A = [1 2 3 4; 2 4 6 8; 1 1 1 1] ~ [1 0 −1 −2; 0 1 2 3; 0 0 0 0].

Thus nul(A) consists of vectors of the form

x_3 [1; −2; 1; 0] + x_4 [2; −3; 0; 1] = x_3 u + x_4 v, x_3, x_4 ∈ R.

In particular, u and v are independent and nul(A) = span{u, v}. Thus B = {u, v} is a basis for nul(A). 159 / 323

Bases for the column space. Consider

A = [a_1 a_2 a_3 a_4] = [1 1 0 0; 1 2 2 1; 0 1 2 2; 0 0 0 1] ~ [1 0 −2 0; 0 1 2 0; 0 0 0 1; 0 0 0 0].

By definition, col(A) = span{a_1, a_2, a_3, a_4}. However, {a_1, a_2, a_3, a_4} is not a basis for col(A). (Why not?)

We see that a_1, a_2, a_4 are independent, while a_3 = −2a_1 + 2a_2:

[a_1 a_2 a_4] ~ [1 0 0; 0 1 0; 0 0 1; 0 0 0], [a_1 a_2 a_3] ~ [1 0 −2; 0 1 2; 0 0 0; 0 0 0].

Thus {a_1, a_2, a_4} is independent and

span{a_1, a_2, a_4} = span{a_1, a_2, a_3, a_4} = col(A),

i.e. B = {a_1, a_2, a_4} is a basis for col(A). 160 / 323
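The pivot-column recipe can be automated with exact arithmetic: sympy's rref returns both the reduced echelon form and the pivot column indices, and the pivot columns of the original matrix form a basis for col(A). This sketch (sympy is a tooling choice, not part of the slides) uses the matrix from the example.

```python
from sympy import Matrix

A = Matrix([[1, 1, 0, 0],
            [1, 2, 2, 1],
            [0, 1, 2, 2],
            [0, 0, 0, 1]])

_, pivots = A.rref()            # pivots = (0, 1, 3): columns a1, a2, a4
basis = [A.col(j) for j in pivots]   # basis for col(A), taken from A itself
```

Note the basis vectors come from the original A, not from its reduced form: row reduction changes the column space but preserves which columns are dependent on which.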

Bases for the row space. Recall that row(A) is the span of the rows of A. A basis for row(A) is obtained by taking the non-zero rows in the reduced echelon form of A. This is based on the fact that A ~ B ⟹ row(A) = row(B).

Example. Consider

A = [1 1 0 0; 1 2 2 1; 0 1 2 2; 0 0 0 1] ~ [1 0 −2 0; 0 1 2 0; 0 0 0 1; 0 0 0 0] = [b_1; b_2; b_3; b_4].

In particular, B = {b_1, b_2, b_3} is a basis for row(A). 161 / 323

Two methods. We now have two methods for finding a basis for a subspace spanned by a set of vectors.

1. Let W = span{a_1, a_2, a_3, a_4} = row(A), where the a_i are the rows of

A = [1 1 0 0; 1 2 2 1; 0 1 2 2; 0 0 0 1] ~ [1 0 −2 0; 0 1 2 0; 0 0 0 1; 0 0 0 0] = [b_1; b_2; b_3; b_4].

Then B = {b_1, b_2, b_3} is a basis for W.

2. Let W = span{a_1^T, a_2^T, a_3^T, a_4^T} = col(A^T), where

A^T = [a_1^T a_2^T a_3^T a_4^T] = [1 1 0 0; 1 2 1 0; 0 2 2 0; 0 1 2 1] ~ [1 0 0 1; 0 1 0 −1; 0 0 1 1; 0 0 0 0].

Then B = {a_1^T, a_2^T, a_3^T} is a basis for W. 162 / 323

4.4 Coordinate Systems Chapter 4. Vector Spaces 163 / 323

Unique representations. Suppose B = {v_1, ..., v_n} is a basis for some subspace W ⊂ V. Then every v ∈ W can be written as a unique linear combination of the elements of B. Indeed, B spans W by definition. For uniqueness, suppose

v = c_1 v_1 + ⋯ + c_n v_n = d_1 v_1 + ⋯ + d_n v_n.

Then

(c_1 − d_1)v_1 + ⋯ + (c_n − d_n)v_n = 0,

and hence linear independence of B (also by definition) implies c_1 − d_1 = ⋯ = c_n − d_n = 0. 164 / 323

Definition. Given a basis B = {v_1, ..., v_n} for a subspace W, there is a unique pairing between vectors v ∈ W and vectors in R^n, namely

v ∈ W ⟷ (c_1, ..., c_n) ∈ R^n, where v = c_1 v_1 + ⋯ + c_n v_n.

We call (c_1, ..., c_n) the coordinates of v relative to B, or the B-coordinates of v. We write [v]_B = (c_1, ..., c_n).

Example. If B = {e_1, ..., e_n} then [v]_B = v. 165 / 323

Example. Let B = {v 1, v 2, v 3 } and v be given as follows: v 1 = [ 1 0 0, v 2 = [ 1 1 0, v 3 = [ 0 0 1, v = [ 3 1 8 To find [v B, we need to solve A[v B = v where A = [v 1 v 2 v 3.. For this, we set up the augmented matrix: 1 1 0 3 1 0 0 [A v = 0 1 0 1 0 1 0 0 0 1 8 0 0 1 Thus [v B = (2, 1, 8), i.e. v = 2v 1 + v 2 + 8v 3. 2 1 8. 166 / 323

Example. Let B = {p 1, p 2, p 3 } P 2 and p P 2 be given by p 1 (t) = 1, p 2 (t) = 1 + t, p 3 (t) = t 2, p(t) = 3 + t + 8t 2. To find [p B, we need to write 3 + t + 8t 2 = x 1 + x 2 (1 + t) + x 3 t 2 = (x 1 + x 2 ) + x 2 t + x 3 t 2. This leads to the system x 1 + x 2 + 0x 3 = 3 0x 1 + x 2 + 0x 3 = 1 0x 1 + 0x 2 + x 3 = 8 This is the same system as in the last example the solution is [p B = (2, 1, 8). 167 / 323

Isomorphism property. Suppose B = {v_1, ..., v_n} is a basis for W. Define the function T : W → R^n by T(v) = [v]_B. Using the unique representation property, one can check:

T(v + w) = [v + w]_B = [v]_B + [w]_B = T(v) + T(w),
T(αv) = [αv]_B = α[v]_B = αT(v).

Thus T is a linear transformation. Furthermore,

T(v) = [v]_B = 0 ⟹ v = 0,

i.e. T is one-to-one. We call T an isomorphism of W onto R^n. 168 / 323

Example. (again) Let E = {1, t, t^2}. This is a basis for P_2, and

[p_1]_E = v_1, [p_2]_E = v_2, [p_3]_E = v_3, [p]_E = v,

using the notation from the previous two examples. In particular, finding [p]_B is equivalent to finding [v]_B. Indeed, recalling the isomorphism property of T(p) = [p]_E,

p = x_1 p_1 + x_2 p_2 + x_3 p_3
⟺ T(p) = x_1 T(p_1) + x_2 T(p_2) + x_3 T(p_3)
⟺ [p]_E = x_1 [p_1]_E + x_2 [p_2]_E + x_3 [p_3]_E
⟺ v = x_1 v_1 + x_2 v_2 + x_3 v_3.

That is, [p]_B = [v]_B. 169 / 323