
1 Introduction: Matrix Operations

Matrix: An m × n matrix A is an m-by-n array of scalars from a field (for example, the real numbers) of the form

A = [a_11 a_12 ... a_1n; a_21 a_22 ... a_2n; ...; a_m1 a_m2 ... a_mn].

The order (or size) of A is m × n (read as "m by n") if A has m rows and n columns. The (i, j)-entry of A = [a_ij] is a_ij. For example, A = [1 0 3; 2 1 0] is a 2 × 3 real matrix, and the (1, 3)-entry of A is 3.

Equality: Two matrices A and B are equal, i.e., A = B, if A and B have the same order and the corresponding entries of A and B are the same.

Special Matrices: A zero matrix, denoted by O or O_mn, is an m × n matrix all of whose entries are zero. A square matrix is a matrix whose number of rows and number of columns are the same. A diagonal matrix is a square n × n matrix whose nondiagonal entries are zero. The identity matrix of order n, denoted by I_n, is the n × n diagonal matrix whose diagonal entries are 1. For example, I_3 = [1 0 0; 0 1 0; 0 0 1] is the 3 × 3 identity matrix. An n × 1 matrix is called a column matrix or an n-dimensional (column) vector, denoted by x. For example, x = [1, 2, 0]^T is a 3-dimensional vector which represents the position vector of the point (1, 2, 0) in the 3-space R^3.

Matrix Operations:

Transpose: The transpose of an m × n matrix A, denoted by A^T, is the n × m matrix whose columns are the corresponding rows of A, i.e., (A^T)_ij = A_ji.
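The definitions of order, entry, and transpose translate directly into code. A minimal Python sketch (the helper names `order` and `transpose` are ours, not the notes'):

```python
# Represent a matrix as a list of rows; illustrative helpers only.

def order(A):
    """Return the order (size) of A as (rows, columns)."""
    return (len(A), len(A[0]))

def transpose(A):
    """(A^T)[j][i] = A[i][j]: the columns of A^T are the rows of A."""
    m, n = order(A)
    return [[A[i][j] for i in range(m)] for j in range(n)]

A = [[1, 0, 3],
     [2, 1, 0]]

print(order(A))       # (2, 3): a 2 x 3 matrix
print(A[0][2])        # the (1, 3)-entry of A (Python indices are 0-based)
print(transpose(A))   # [[1, 2], [0, 1], [3, 0]]
```

Note that transposing twice returns the original matrix, matching property 1 below.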

Example: If A = [1 0 2; 2 1 3], then A^T = [1 2; 0 1; 2 3].

Properties: Let A and B be two matrices with appropriate orders. Then
1. (A^T)^T = A
2. (A + B)^T = A^T + B^T
3. (cA)^T = cA^T for any scalar c
4. (AB)^T = B^T A^T

Scalar Multiplication: Let A be a matrix and c be a scalar. The scalar multiple, denoted by cA, is the matrix whose entries are c times the corresponding entries of A.

Example: If A = [2 -1; 0 3], then 2A = [4 -2; 0 6].

Properties: Let A and B be two matrices of the same order and c and d be scalars. Then
1. c(A + B) = cA + cB
2. (c + d)A = cA + dA
3. c(dA) = (cd)A

Sum: If A and B are m × n matrices, then the sum A + B is the m × n matrix whose entries are the sums of the corresponding entries of A and B, i.e., (A + B)_ij = A_ij + B_ij.

Example: If A = [1 0; 2 1] and B = [0 2; 1 2], then A + B = [1 2; 3 3]. Exercise: Find A - B.

Properties: Let A, B, and C be three matrices of the same order. Then
1. A + B = B + A (commutative)
2. (A + B) + C = A + (B + C) (associative)
3. A + O = A (additive identity O)

Multiplication: Matrix-vector multiplication: If A is an m × n matrix and x is an n-dimensional

vector, then their product Ax is the m-dimensional vector whose i-th entry is a_i1 x1 + a_i2 x2 + ... + a_in xn, the dot product of row i of A and x. Note that

Ax = [a_11 x1 + a_12 x2 + ... + a_1n xn; a_21 x1 + a_22 x2 + ... + a_2n xn; ...; a_m1 x1 + a_m2 x2 + ... + a_mn xn] = x1 [a_11; a_21; ...; a_m1] + x2 [a_12; a_22; ...; a_m2] + ... + xn [a_1n; a_2n; ...; a_mn],

a linear combination of the columns of A with the entries of x as weights.

Example: If A = [1 1; 0 3] and x = [2; 1], then Ax = 2[1; 0] + 1[1; 3] = [3; 3], which is a linear combination of the first and second columns of A with weights 2 and 1 respectively.

Matrix-matrix multiplication: If A is an m × n matrix and B is an n × p matrix, then their product AB is the m × p matrix whose (i, j)-entry is the dot product of row i of A and column j of B:

(AB)_ij = a_i1 b_1j + a_i2 b_2j + ... + a_in b_nj.

Example: For A = [1 2; 0 1] and B = [2 0; 1 1], we have AB = [4 2; 1 1].

Properties: Let A, B, and C be three matrices of appropriate orders. Then
1. A(BC) = (AB)C (associative)
2. A(B + C) = AB + AC (left-distributive)
3. (B + C)A = BA + CA (right-distributive)
4. k(AB) = (kA)B = A(kB) for any scalar k
5. I_m A = A = A I_n for any m × n matrix A (multiplicative identity I)

Remark: (1) Column j of AB is A(column j of B).

(2) AB ≠ BA in general.

Example: For A = [1 2; 3 4] and B = [0 1; 1 0], we have AB = [2 1; 4 3] but BA = [3 4; 1 2].
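Non-commutativity is easy to see numerically. A minimal sketch (the helper `matmul` is ours, computing (AB)_ij = sum_k a_ik b_kj exactly as in the definition):

```python
# Entry-by-entry matrix product from the definition; illustrative only.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]

AB = matmul(A, B)   # right-multiplying by B swaps the columns of A
BA = matmul(B, A)   # left-multiplying by B swaps the rows of A
print(AB)           # [[2, 1], [4, 3]]
print(BA)           # [[3, 4], [1, 2]]
```

The two products differ, so matrix multiplication is not commutative.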

(3) AB = AC does not imply B = C.

Example: For A = [1 0; 0 0], B = [1 0; 0 1], and C = [1 0; 0 2], we have AB = [1 0; 0 0] = AC but B ≠ C.

(4) AB = O does not imply A = O or B = O.

Example: For A = [1 1; 1 1] and B = [1 -1; -1 1], we have AB = O but A ≠ O and B ≠ O.

Powers of a matrix: If A is an n × n matrix and k is a positive integer, then the k-th power of A, denoted by A^k, is the product of k copies of A. We use the convention A^0 = I_n.

Example: For A = [1 1; 0 1], A^2 = AA = [1 2; 0 1].

Symmetric and Skew-symmetric Matrices: A square matrix A is symmetric if A^T = A, and A is skew-symmetric if A^T = -A. A square matrix A can be written uniquely as the sum of a symmetric and a skew-symmetric matrix:

A = (1/2)(A + A^T) + (1/2)(A - A^T).

Example:

[1 4; 2 5] = (1/2)([1 4; 2 5] + [1 2; 4 5]) + (1/2)([1 4; 2 5] - [1 2; 4 5]) = [1 3; 3 5] + [0 1; -1 0].
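The symmetric/skew-symmetric split can be spot-checked numerically. A short sketch (helper functions are illustrative, not from the notes):

```python
# A = (A + A^T)/2 + (A - A^T)/2, checked entrywise.

def transpose(A):
    return [list(col) for col in zip(*A)]

def combine(A, B, s):
    """Entrywise (A + s*B) / 2."""
    return [[(a + s * b) / 2 for a, b in zip(ra, rb)]
            for ra, rb in zip(A, B)]

A = [[1, 4],
     [2, 5]]
sym  = combine(A, transpose(A), +1)   # (A + A^T)/2, symmetric part
skew = combine(A, transpose(A), -1)   # (A - A^T)/2, skew-symmetric part
print(sym)    # [[1.0, 3.0], [3.0, 5.0]]
print(skew)   # [[0.0, 1.0], [-1.0, 0.0]]
```

Adding the two parts entrywise recovers A, and transposing fixes `sym` while negating `skew`.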

2 Solving a Linear System

2.1 Systems of Linear Equations

A system of linear equations with n variables x1, ..., xn and m equations can be written as follows:

a_11 x1 + a_12 x2 + ... + a_1n xn = b1
a_21 x1 + a_22 x2 + ... + a_2n xn = b2
...
a_m1 x1 + a_m2 x2 + ... + a_mn xn = bm    (1)

A solution is an n-tuple (s1, s2, ..., sn) that satisfies each equation when we substitute x1 = s1, x2 = s2, ..., xn = sn. The solution set is the set of all solutions.

Example:

x1 + 2x3 = 3
x2 - x3 = 0

The solution set (over R) is {(-2s + 3, s, s) | s in R}. There are infinitely many solutions because of the free variable x3.

Possibilities for the solutions of a linear system:
1. The system has no solution (inconsistent), e.g., x1 + x2 = 1, x1 + x2 = 4 (two parallel lines).
2. The system has a solution (consistent), with either
(a) a unique solution, e.g., x1 - x2 = 0, x1 + x2 = 2 (two lines meeting in one point), or
(b) infinitely many solutions, e.g., x1 + x2 = 1, 4x1 + 4x2 = 4 (two coincident lines).

Definition: The system (1) is called an underdetermined system if m < n, i.e., fewer equations than variables. The system (1) is called an overdetermined system if m > n, i.e., more equations than variables.

The system (1) of linear equations can also be written as a matrix equation and as a vector equation.

The matrix equation: Ax = b, where

A = [a_11 a_12 ... a_1n; a_21 a_22 ... a_2n; ...; a_m1 a_m2 ... a_mn], x = [x1; x2; ...; xn], and b = [b1; b2; ...; bm].

The augmented matrix:

[A b] = [a_11 a_12 ... a_1n b1; a_21 a_22 ... a_2n b2; ...; a_m1 a_m2 ... a_mn bm].

The vector equation: x1 a1 + x2 a2 + ... + xn an = b, where A = [a1 a2 ... an].

Example:

2x2 - 8x3 = 8
x1 - 2x2 + x3 = 0
-4x1 + 5x2 + 9x3 = -9

The matrix equation is Ax = b where

A = [0 2 -8; 1 -2 1; -4 5 9], x = [x1; x2; x3], and b = [8; 0; -9].

The augmented matrix is

[A b] = [0 2 -8 8; 1 -2 1 0; -4 5 9 -9].

The vector equation is

x1 [0; 1; -4] + x2 [2; -2; 5] + x3 [-8; 1; 9] = [8; 0; -9].

You may verify that one solution is (x1, x2, x3) = (29, 16, 3). Is it the only solution?
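A claimed solution can be checked against the vector equation by direct arithmetic. A small sketch using the example's columns and the claimed solution:

```python
# Check x1*a1 + x2*a2 + x3*a3 = b for the example system.

a1 = [0, 1, -4]
a2 = [2, -2, 5]
a3 = [-8, 1, 9]
b  = [8, 0, -9]
x  = (29, 16, 3)   # the claimed solution

combo = [x[0] * u + x[1] * v + x[2] * w for u, v, w in zip(a1, a2, a3)]
print(combo)   # [8, 0, -9], i.e. equal to b
```

Since the combination reproduces b, (29, 16, 3) is indeed a solution.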

2.2 Row Operations

There are three elementary row operations we perform on a matrix:
1. Interchanging two rows (Ri <-> Rj)
2. Multiplying a row by a nonzero scalar (cRi, c ≠ 0)
3. Adding a scalar multiple of row i to row j (cRi + Rj)

The steps of solving a linear system Ax = b are equivalent to elementary row operations on the augmented matrix [A b], as illustrated by the following example.

Example:

2x2 - 8x3 = 8    (2.1)
x1 - 2x2 + x3 = 0    (2.2)
-4x1 + 5x2 + 9x3 = -9    (2.3)

We do the following steps to solve the above system:

1. Interchange (2.1) and (2.2):

x1 - 2x2 + x3 = 0    (3.1)
2x2 - 8x3 = 8    (3.2)
-4x1 + 5x2 + 9x3 = -9    (3.3)

The corresponding row operation is

[A b] = [0 2 -8 8; 1 -2 1 0; -4 5 9 -9]  --R1 <-> R2-->  [1 -2 1 0; 0 2 -8 8; -4 5 9 -9]

2. Replace (3.3) by 4(3.1) + (3.3):

x1 - 2x2 + x3 = 0    (4.1)
2x2 - 8x3 = 8    (4.2)
-3x2 + 13x3 = -9    (4.3)

The corresponding row operation is

[1 -2 1 0; 0 2 -8 8; -4 5 9 -9]  --4R1 + R3-->  [1 -2 1 0; 0 2 -8 8; 0 -3 13 -9]

3. Scale (4.2):

x1 - 2x2 + x3 = 0    (5.1)
x2 - 4x3 = 4    (5.2)
-3x2 + 13x3 = -9    (5.3)

The corresponding row operation is

[1 -2 1 0; 0 2 -8 8; 0 -3 13 -9]  --(1/2)R2-->  [1 -2 1 0; 0 1 -4 4; 0 -3 13 -9]

4. Replace (5.3) by 3(5.2) + (5.3):

x1 - 2x2 + x3 = 0    (6.1)
x2 - 4x3 = 4    (6.2)
x3 = 3    (6.3)

The corresponding row operation is

[1 -2 1 0; 0 1 -4 4; 0 -3 13 -9]  --3R2 + R3-->  [1 -2 1 0; 0 1 -4 4; 0 0 1 3]

5. Back substitution:

(6.3) x3 = 3
(6.2) x2 = 4 + 4x3 = 4 + 4·3 = 16
(6.1) x1 = 2x2 - x3 = 2·16 - 3 = 29

So the solution set is {(29, 16, 3)}.

Remark:
1. Two matrices are row equivalent if we can transform one matrix to the other by elementary row operations. If two linear systems have row equivalent augmented matrices, then they have the same solution set.
2. To solve Ax = b, using row operations we transform the augmented matrix [A b] into an upper-triangular form called an echelon form and then use back substitution.
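Step 5 above, back substitution, can be written as a short routine. A sketch under the assumption that the system has already been reduced to upper-triangular form with nonzero diagonal (the function name is ours):

```python
# Solve U x = c for upper-triangular U, working from the last row upward.

def back_substitute(U, c):
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x

# The triangular system reached at step 4 of the example:
U = [[1, -2, 1],
     [0, 1, -4],
     [0, 0, 1]]
c = [0, 4, 3]
print(back_substitute(U, c))   # [29.0, 16.0, 3.0]
```

The loop reproduces exactly the hand computation: x3 first, then x2, then x1.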

2.3 Echelon Forms

The leading entry of a row of a matrix is the left-most nonzero entry of the row.

Definition: An m × n matrix A is in echelon form (or REF, row echelon form) if
1. all zero rows are at the bottom,
2. in the column of each leading entry, all entries below the leading entry are zero, and
3. the leading entry of each row is to the right of all leading entries in the rows above it.

A is in reduced echelon form (or RREF, reduced row echelon form) if it satisfies two additional conditions:
4. the leading entry of each row is 1, and
5. each leading 1 is the only nonzero entry in its column.

Example: The following matrices are in REF:

[2 4 3; 0 1 2; 0 0 0], [1 2; 0 1]

The following matrices are in RREF:

[1 3 0; 0 0 1], [1 0; 0 1], [1 4 0 0 3; 0 0 1 0 5; 0 0 0 1 2]

Definition: A pivot position of a matrix A is a position of a leading 1 in the RREF of A, and a corresponding column is a pivot column. A pivot is a nonzero number in a pivot position of A that is used to create zeros below it in Gaussian elimination.

Example: The pivot positions of the last matrix above are (1, 1), (2, 3), and (3, 4).

The Gaussian elimination or row reduction algorithm to get an REF of a matrix is explained by the following example.

Example: A = [0 3 -6 6 4 -5; 3 -7 8 -5 8 9; 3 -9 12 -9 6 15]

1. Start with the left-most nonzero column (the first pivot column) and make its top entry nonzero by interchanging rows if needed. This top nonzero entry is the pivot of the pivot column.

[0 3 -6 6 4 -5; 3 -7 8 -5 8 9; 3 -9 12 -9 6 15]  --R1 <-> R3-->  [3 -9 12 -9 6 15; 3 -7 8 -5 8 9; 0 3 -6 6 4 -5]

2. Create zeros below the pivot by row replacements.

[3 -9 12 -9 6 15; 3 -7 8 -5 8 9; 0 3 -6 6 4 -5]  --(-1)R1 + R2-->  [3 -9 12 -9 6 15; 0 2 -4 4 2 -6; 0 3 -6 6 4 -5]

3. Ignore the column and row of the current pivot and repeat the preceding steps on the remaining submatrix.

[3 -9 12 -9 6 15; 0 2 -4 4 2 -6; 0 3 -6 6 4 -5]  --(-3/2)R2 + R3-->  [3 -9 12 -9 6 15; 0 2 -4 4 2 -6; 0 0 0 0 1 4]    (REF)

4. To get the RREF, start with the right-most pivot, make it 1 by scaling, and then create zeros above it by row replacements. Repeat for the rest of the pivots.

--(-6)R3 + R1, (-2)R3 + R2-->  [3 -9 12 -9 0 -9; 0 2 -4 4 0 -14; 0 0 0 0 1 4]
--(1/2)R2-->  [3 -9 12 -9 0 -9; 0 1 -2 2 0 -7; 0 0 0 0 1 4]
--9R2 + R1-->  [3 0 -6 9 0 -72; 0 1 -2 2 0 -7; 0 0 0 0 1 4]
--(1/3)R1-->  [1 0 -2 3 0 -24; 0 1 -2 2 0 -7; 0 0 0 0 1 4]    (RREF)

Remark: The above algorithm to get the RREF is called Gauss-Jordan elimination. The RREF of A is unique, as it does not depend on the elementary row operations applied to A.

Steps to solve a linear system Ax = b (Gaussian elimination):
1. Find the RREF of the augmented matrix [A b].
2. Write the system of linear equations corresponding to the RREF.
3. If the new system is inconsistent, the original system has no solution. Otherwise write the basic variables (the variables corresponding to the columns of leading 1s in the RREF) in terms of constants and the free variables (the non-basic variables).
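The four steps above can be sketched as one compact routine. This is an illustrative implementation, not the notes' notation; it uses exact fractions so no rounding occurs:

```python
# Gauss-Jordan elimination to RREF: find pivot, swap, scale to a
# leading 1, clear the rest of the column, move on.
from fractions import Fraction

def rref(M):
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue                           # no pivot in this column
        A[r], A[piv] = A[piv], A[r]            # interchange rows
        A[r] = [x / A[r][c] for x in A[r]]     # scale to leading 1
        for i in range(rows):                  # zeros above and below
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        r += 1
    return A

A = [[1, 2, 3, 4],
     [1, 3, 5, 8],
     [2, 4, 7, 11]]
R = rref(A)
print([[int(x) for x in row] for row in R])
# [[1, 0, 0, -1], [0, 1, 0, -2], [0, 0, 1, 3]]
```

This matrix reappears in Section 3, where its RREF is used to read off rank, bases, and the null space.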

Example:

x1 - 3x2 + 2x4 = 1
2x1 - 6x2 + x3 + 10x4 = 2
-x1 + 3x2 + x3 + 4x4 = -1

We find the RREF of the augmented matrix:

[1 -3 0 2 1; 2 -6 1 10 2; -1 3 1 4 -1]  --(-2)R1 + R2, R1 + R3-->  [1 -3 0 2 1; 0 0 1 6 0; 0 0 1 6 0]  --(-1)R2 + R3-->  [1 -3 0 2 1; 0 0 1 6 0; 0 0 0 0 0]    (RREF)

The corresponding system is

x1 - 3x2 + 2x4 = 1
x3 + 6x4 = 0

where x1 and x3 are basic variables (for leading 1s) and x2 and x4 are free variables:

x1 = 1 + 3x2 - 2x4
x2 free
x3 = -6x4
x4 free

The solution set is {(1 + 3s - 2t, s, -6t, t) | s, t in R}. If we solve the corresponding matrix equation Ax = b, the solution set is

{ [1 + 3s - 2t; s; -6t; t] | s, t in R } = { [1; 0; 0; 0] + s[3; 1; 0; 0] + t[-2; 0; -6; 1] | s, t in R }.

Possibilities for the solutions of Ax = b from the RREF:
1. The system has no solution (inconsistent) iff the RREF of [A b] has a row of the form [0, 0, ..., 0, c], c ≠ 0.
2. The system has a solution (consistent) iff the RREF of [A b] has no row of the form [0, 0, ..., 0, c], c ≠ 0, with
(a) a unique solution if every column but the last of the RREF of [A b] has a leading 1 (there is no free variable), and
(b) infinitely many solutions if some column other than the last of the RREF of [A b] has no leading 1 (there is a free variable).

2.4 Geometry of Solution Sets

Homogeneous linear system: A system of linear equations is homogeneous if its matrix equation is Ax = 0. Note that x = 0 is always a solution, called the trivial solution. Any nonzero solution is called a nontrivial solution.

Example:

1. x1 + x2 - x3 = 0
   3x1 - x3 = 0

The corresponding matrix equation Ax = 0 has the solution set

{ s[1; 2; 3] | s in R }, which is also denoted by Span{[1; 2; 3]}.

This solution set corresponds to the points on the line in the 3-space R^3 passing through the point (1, 2, 3) and the origin (0, 0, 0). Recall that the vector [1; 2; 3] is the position vector of the point (1, 2, 3), which is a directed line segment from the origin (0, 0, 0) to the point (1, 2, 3).

2. x1 - x2 - x3 = 0

The corresponding matrix equation Ax = 0 has the solution set

{ s[1; 1; 0] + t[1; 0; 1] | s, t in R } = Span{[1; 1; 0], [1; 0; 1]}.

This solution set corresponds to the points on the plane in the 3-space R^3 passing through the points (1, 1, 0), (1, 0, 1), and the origin (0, 0, 0).

Remark:
1. If Ax = 0 has k free variables, then its solution set is the span of k vectors: the solution set of Ax = 0 is Span{v1, ..., vk} for some vectors v1, ..., vk.
2. The solution set of Ax = b is {p + v | Av = 0}, where Ap = b. So a nonhomogeneous solution is a sum of a particular solution and a homogeneous solution. To justify it, let y be a solution of Ax = b, i.e., Ay = b. Then A(y - p) = b - b = 0. Then y - p = v where Av = 0. Thus y = p + v. Geometrically, we get the solution set of Ax = b by shifting the solution set of Ax = 0 along the vector p, to the point whose position vector is p.
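The "particular plus homogeneous" structure can be verified directly. A sketch using the first example's coefficient matrix, with a particular solution we pick ourselves:

```python
# Every p + s*v solves A x = b when A p = b and A v = 0.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 1, -1],
     [3, 0, -1]]
v = [1, 2, 3]         # spans NS(A): check A v = 0 below
p = [1, 0, 0]         # an assumed particular solution; b is defined from it
b = matvec(A, p)      # [1, 3]

print(matvec(A, v))   # [0, 0]: v is a nontrivial homogeneous solution
for s in (-2, 0, 5):
    x = [pi + s * vi for pi, vi in zip(p, v)]
    print(matvec(A, x))   # [1, 3] every time, i.e. equal to b
```

Shifting the line Span{v} by p sweeps out exactly the solution set of Ax = b.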

3 Fundamental Concepts on R^n

3.1 Linear Span and Subspaces

Definition: A linear combination of vectors v1, v2, ..., vk of R^n is a sum of their scalar multiples, i.e., c1 v1 + c2 v2 + ... + ck vk for some scalars c1, c2, ..., ck. The set of all linear combinations of a nonempty set S of vectors of R^n is called the linear span or span of S, denoted by Span(S) or Span S, i.e.,

Span{v1, v2, ..., vk} = {c1 v1 + c2 v2 + ... + ck vk | c1, c2, ..., ck in R}.

We define Span ∅ = {0}. When Span{v1, ..., vk} = R^n, we say {v1, ..., vk} spans R^n.

Example: Let S = {[1; 1; 0], [1; -1; 0]}. Then

Span(S) = { c1[1; 1; 0] + c2[1; -1; 0] | c1, c2 in R }.

Note that [0, 0, 1]^T is not in Span(S), because there are no c1, c2 for which c1[1; 1; 0] + c2[1; -1; 0] = [0; 0; 1]. Thus S does not span R^3. But any vector of the form [a, b, 0]^T is in Span(S): solving c1 + c2 = a, c1 - c2 = b gives c1 = (1/2)(a + b), c2 = (1/2)(a - b), i.e.,

[a; b; 0] = (1/2)(a + b)[1; 1; 0] + (1/2)(a - b)[1; -1; 0].

Thus S spans the following set:

Span(S) = { [a; b; 0] | a, b in R },

which is the xy-plane of R^3.

Definition: A subspace of R^n is a nonempty subset S of R^n that satisfies three properties:

(a) 0 is in S.
(b) u + v is in S for all u, v in S.
(c) cu is in S for all u in S and all scalars c.

In short, a subspace of R^n is a nonempty subset S of R^n that is closed under linear combinations of vectors, i.e., cu + dv is in S for all u, v in S and all scalars c, d. When S is a subspace of R^n, we sometimes denote it by S ≤ R^n.

Example:

1. {0} ≤ R^n and R^n ≤ R^n, i.e., {0} and R^n are subspaces of R^n.

2. Show that S = { [x; y] | x, y in R, x = y } is a subspace of R^2.
(a) 0 is in S because 0 = 0.
(b) Let u, v be in S and c in R. Then u = [x1; y1] and v = [x2; y2] for some x1, x2, y1, y2 in R such that x1 = y1 and x2 = y2. Then u + v = [x1 + x2; y1 + y2] is in S because (x1 + x2) - (y1 + y2) = (x1 - y1) + (x2 - y2) = 0.
(c) cu = [cx1; cy1] is in S because (cx1) - (cy1) = c(x1 - y1) = 0.
Thus S (which is the line y = x) is a subspace of R^2.

3. Let S = {[1; 0; 0], [0; 1; 0]}. Then Span(S) is a subspace of R^3. First note that 0 = 0[1; 0; 0] + 0[0; 1; 0] is in Span(S). Thus Span(S) ≠ ∅. Let u, v be in Span(S) and c, d in R. Then u = c1[1; 0; 0] + c2[0; 1; 0] and v = d1[1; 0; 0] + d2[0; 1; 0],

for some c1, c2, d1, d2 in R. Then

cu + dv = (cc1 + dd1)[1; 0; 0] + (cc2 + dd2)[0; 1; 0], which is in Span(S).

Thus Span(S) (which is the xy-plane) is a subspace of R^3.

Theorem 3.1: Let v1, v2, ..., vk be in R^n. Then Span{v1, v2, ..., vk} is a subspace of R^n.

Proof: Since v1 is in Span{v1, v2, ..., vk}, Span{v1, v2, ..., vk} ≠ ∅. Let u, v be in Span{v1, v2, ..., vk} and c, d in R. Then u = c1 v1 + c2 v2 + ... + ck vk and v = d1 v1 + d2 v2 + ... + dk vk for some c1, ..., ck, d1, ..., dk in R. Then

cu + dv = c(c1 v1 + ... + ck vk) + d(d1 v1 + ... + dk vk) = (cc1 + dd1)v1 + (cc2 + dd2)v2 + ... + (cck + ddk)vk,

which is in Span{v1, v2, ..., vk}.

For a given matrix we have two important subspaces: the column space and the null space.

Definition: The column space of an m × n matrix A = [a1 a2 ... an], denoted by CS(A) or Col A, is the span of its column vectors: CS(A) = Span{a1, a2, ..., an}.

Remark: Since each column is an m-dimensional vector, CS(A) is a subspace of R^m.

Example: Let A = [1 -3 -4; -4 6 -2; -3 7 6] and b = [3; 3; -4]. Determine if b is in CS(A). Note that b is in CS(A) if and only if b is a linear combination of the columns of A, if and only if Ax = b has a solution.

[1 -3 -4 3; -4 6 -2 3; -3 7 6 -4]  --4R1 + R2-->  [1 -3 -4 3; 0 -6 -18 15; -3 7 6 -4]  --3R1 + R3-->  [1 -3 -4 3; 0 -6 -18 15; 0 -2 -6 5]  --(-1/3)R2 + R3-->  [1 -3 -4 3; 0 -6 -18 15; 0 0 0 0]    (REF)

Thus Ax = b is consistent, and consequently b is in CS(A).

Theorem 3.2: An m × n matrix A has a pivot position in every row if and only if Ax = b is consistent for every b in R^m, if and only if CS(A) = R^m.
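Membership in CS(A) means exhibiting weights that combine the columns into b. A sketch for the example above, with one choice of weights read off from the reduced system (exact fractions avoid rounding):

```python
# b is in CS(A) iff some weights c give c1*a1 + c2*a2 + c3*a3 = b.
from fractions import Fraction

A = [[1, -3, -4],
     [-4, 6, -2],
     [-3, 7, 6]]
b = [3, 3, -4]
c = [Fraction(-9, 2), Fraction(-5, 2), Fraction(0)]  # one valid choice

combo = [sum(c[j] * A[i][j] for j in range(3)) for i in range(3)]
print(combo)   # equals b, confirming b is in CS(A)
```

Because the system is consistent with a free variable, infinitely many other weight choices work as well.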

Example: Since A = [1 3; 4 5] has a pivot position in each row, CS(A) = R^2.

Definition: The null space of an m × n matrix A, denoted by NS(A) or Nul A, is the solution set of Ax = 0:

NS(A) = { x in R^n | Ax = 0 }.

Theorem 3.3: Let A be an m × n matrix. Then NS(A) is a subspace of R^n.

Proof: Since A0 = 0, NS(A) ≠ ∅. Let u, v be in NS(A) and c, d in R. Then Au = 0 and Av = 0. Then

A(cu + dv) = c(Au) + d(Av) = c0 + d0 = 0.

Thus cu + dv is in NS(A).

Example: Let A = [1 2 -1; 3 3 -2]. Find NS(A). We find the solution set of Ax = 0:

[A 0] = [1 2 -1 0; 3 3 -2 0]  --(-3)R1 + R2-->  [1 2 -1 0; 0 -3 1 0]  --(-1/3)R2-->  [1 2 -1 0; 0 1 -1/3 0]  --(-2)R2 + R1-->  [1 0 -1/3 0; 0 1 -1/3 0]    (RREF)

The corresponding system is

x1 - (1/3)x3 = 0
x2 - (1/3)x3 = 0

where x1 and x2 are basic variables (for leading 1s) and x3 is a free variable:

x1 = (1/3)x3, x2 = (1/3)x3, x3 free.

NS(A) = { [(1/3)x3; (1/3)x3; x3] | x3 in R } = { x3 [1/3; 1/3; 1] | x3 in R } = Span{[1/3; 1/3; 1]}.

Remark: If an m × n matrix A has k non-pivot columns (i.e., k free variables for Ax = 0), then NS(A) is the span of k vectors in R^n. For a proof see Theorem 3.9.

3.2 Linear Independence

Definition: A set S = {v1, v2, ..., vk} of vectors of R^n is linearly independent if the only linear combination of vectors in S that produces 0 is the trivial linear combination, i.e.,

c1 v1 + c2 v2 + ... + ck vk = 0 implies c1 = c2 = ... = ck = 0.

S = {v1, v2, ..., vk} is linearly dependent if S is not linearly independent, i.e., there are scalars c1, c2, ..., ck, not all zero, such that c1 v1 + c2 v2 + ... + ck vk = 0.

Remark:
1. {0} is linearly dependent, as 1·0 = 0.
2. {v} is linearly independent if and only if v ≠ 0.
3. Let S = {v1, v2, ..., vk} and A = [v1 v2 ... vk]. Then S is linearly independent if and only if 0 is the only solution of Ax = 0, if and only if NS(A) = {0}.

Example:

1. Determine if the following vectors are linearly independent:

v1 = [1; 2], v2 = [2; 3].

We investigate whether c1 v1 + c2 v2 = 0 has a nontrivial solution:

[A 0] = [1 2 0; 2 3 0]  --(-2)R1 + R2-->  [1 2 0; 0 -1 0]    (REF)

Each column of A is a pivot column, giving no free variables. So there is a unique solution of Ax = 0, which is 0. Thus v1 and v2 are linearly independent. Note that neither of v1 and v2 is a multiple of the other.

2. Determine if the columns of A are linearly independent for

A = [1 2 3 4; 1 3 5 8; 2 4 7 11].

A = [1 2 3 4; 1 3 5 8; 2 4 7 11]  --(-1)R1 + R2, (-2)R1 + R3-->  [1 2 3 4; 0 1 2 4; 0 0 1 3]    (REF)

A has a non-pivot column, giving a free variable. So there are infinitely many solutions of Ax = 0. Thus the columns of A are linearly dependent. Verify that one solution

is (x1, x2, x3, x4) = (1, 2, -3, 1). So we get the following linear dependence relation among the columns of A:

1[1; 1; 2] + 2[2; 3; 4] - 3[3; 5; 7] + 1[4; 8; 11] = [0; 0; 0].

Remark: The columns of an m × n matrix are linearly dependent when m < n, because then A has a non-pivot column, giving a free variable for the solutions of the system Ax = 0.

Theorem 3.4: A set S = {v1, v2, ..., vk} of k ≥ 2 vectors in R^n is linearly dependent if and only if there exists a vector in S that is a linear combination of the other vectors in S.

Proof: Let S = {v1, v2, ..., vk} be a set of k ≥ 2 vectors in R^n. First suppose S is linearly dependent. Then there are scalars c1, c2, ..., ck, not all zero, such that c1 v1 + c2 v2 + ... + ck vk = 0. Choose i in {1, 2, ..., k} such that ci ≠ 0. Then

c1 v1 + ... + c(i-1) v(i-1) + c(i+1) v(i+1) + ... + ck vk = -ci vi,

vi = -(c1/ci) v1 - ... - (c(i-1)/ci) v(i-1) - (c(i+1)/ci) v(i+1) - ... - (ck/ci) vk.

Conversely, suppose there is i in {1, 2, ..., k} such that

vi = d1 v1 + ... + d(i-1) v(i-1) + d(i+1) v(i+1) + ... + dk vk

for some scalars d1, ..., d(i-1), d(i+1), ..., dk. Then we have a nontrivial linear combination producing 0:

d1 v1 + ... + d(i-1) v(i-1) - vi + d(i+1) v(i+1) + ... + dk vk = 0.

Thus S = {v1, v2, ..., vk} is linearly dependent in R^n.

Example: For A = [a1 a2 a3 a4] = [1 2 3 4; 1 3 5 8; 2 4 7 11], we have shown that the columns are linearly dependent and a1 + 2a2 - 3a3 + a4 = 0. We can write the first column in terms of the other columns: a1 = -2a2 + 3a3 - a4. In fact here we can write any column in terms of the others (which may not be the case for an arbitrary linearly dependent set of vectors).
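A dependence relation of this form is easy to confirm by direct arithmetic. A sketch with the example's columns written out:

```python
# Check a1 + 2*a2 - 3*a3 + a4 = 0 among the columns of the 3 x 4 matrix.

a1 = [1, 1, 2]
a2 = [2, 3, 4]
a3 = [3, 5, 7]
a4 = [4, 8, 11]

relation = [p + 2 * q - 3 * r + s for p, q, r, s in zip(a1, a2, a3, a4)]
print(relation)   # [0, 0, 0]

# Equivalently, solve the relation for the first column:
a1_again = [-2 * q + 3 * r - s for q, r, s in zip(a2, a3, a4)]
print(a1_again)   # [1, 1, 2], i.e. a1
```

The nontrivial coefficients (1, 2, -3, 1) are exactly a nonzero vector in NS(A).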

3.3 Basis and Dimensions

Definition: A basis of a nontrivial subspace S of R^n is a subset B of S such that
(a) Span(B) = S and
(b) B is a linearly independent set.

We define the basis of the trivial subspace {0} to be B = ∅. The number of vectors in a basis B of S is the dimension of S, denoted by dim(S) or dim S.

Example:

1. For the subspace S = { [x; y] | x, y in R, x = y } of R^2, S = Span{[1; 1]}. Also {[1; 1]} is linearly independent. Thus B = {[1; 1]} is a basis of S and dim(S) = |B| = 1. Note that there are infinitely many bases of S, e.g., {[2; 2]}, {[-1; -1]}, ....

2. Among the infinitely many bases of R^n, B = {e1, e2, ..., en} is called the standard basis of R^n. For any x = [x1, x2, ..., xn]^T in R^n, x = x1 e1 + x2 e2 + ... + xn en, i.e.,

[x1; x2; ...; xn] = x1[1; 0; ...; 0] + x2[0; 1; ...; 0] + ... + xn[0; 0; ...; 1],

which is in Span(B). Thus Span(B) = R^n. To show linear independence, let x1 e1 + x2 e2 + ... + xn en = 0, i.e.,

[x1; x2; ...; xn] = [0; 0; ...; 0],

so x1 = x2 = ... = xn = 0. So B is linearly independent. Thus B is a basis of R^n and dim(R^n) = |B| = n.

Now we present some important theorems regarding bases of a subspace of R^n.

Theorem 3.5 (Unique Representation Theorem): Let S be a subspace of R^n. Then B = {b1, b2, ..., bk} is a basis of S if and only if each vector v of S is a unique linear combination of b1, b2, ..., bk, i.e., v = c1 b1 + c2 b2 + ... + ck bk for unique scalars c1, c2, ..., ck.

Proof: Let B = {b1, b2, ..., bk} be a basis of S. Consider a vector v of S. Since S = Span B, v = c1 b1 + c2 b2 + ... + ck bk for some scalars c1, c2, ..., ck. To show these scalars are unique, let v = d1 b1 + d2 b2 + ... + dk bk for some scalars d1, d2, ..., dk. Then

0 = v - v = (c1 b1 + c2 b2 + ... + ck bk) - (d1 b1 + d2 b2 + ... + dk bk) = (c1 - d1)b1 + (c2 - d2)b2 + ... + (ck - dk)bk.

Since B = {b1, b2, ..., bk} is linearly independent, (c1 - d1) = (c2 - d2) = ... = (ck - dk) = 0, which implies d1 = c1, d2 = c2, ..., dk = ck. The converse follows similarly (exercise).

Theorem 3.6 (Reduction Theorem): Let S be a subspace of R^n. If a set B = {b1, b2, ..., bk} of vectors of S spans S, then either B is a basis of S or a subset of B is a basis of S.

Proof: Suppose B = {b1, b2, ..., bk} spans S. If B is linearly independent, then B is a basis of S. Otherwise there is a vector, say b1, which is a linear combination of the other vectors in B. Let B1 = B \ {b1} = {b2, ..., bk}. We can verify that Span B1 = Span B = S. If B1 is linearly independent, then B1 is a basis of S. Otherwise there is a vector, say b2, which is a linear combination of the other vectors in B1. Let B2 = B1 \ {b2} = {b3, ..., bk}. We can verify that Span B2 = Span B1 = S. Proceeding this way, we end up with a subset Bm of B for some m ≤ k such that Bm is linearly independent and Span Bm = S, which means Bm is a basis of S.

Similarly we can prove the following:

Theorem 3.7 (Extension Theorem): Let S be a subspace of R^n. If a set B = {b1, b2, ..., bk} of vectors of S is linearly independent, then either B is a basis of S or a superset of B is a basis of S.

Example: Use the Reduction Theorem to find a basis of CS(A) for

A = [1 2 3 4; 1 3 5 8; 2 4 7 11].

Write A = [a1 a2 a3 a4] and B = {a1, a2, a3, a4}. Then CS(A) = Span B. Verify that a4 = -a1 - 2a2 + 3a3 (exercise). Then B is not linearly independent
and CS(A) = Span B = Span{a1, a2, a3, a4} = Span{a1, a2, a3}. Verify that {a1, a2, a3} is linearly independent. Thus {a1, a2, a3} is a basis of CS(A).

Definition: The rank of a matrix A, denoted by rank(A), is the dimension of its column space, i.e., rank(A) = dim(CS(A)).

Theorem 3.8: The pivot columns of a matrix A form a basis for CS(A), and rank(A) is the number of pivot columns of A.

Proof (Sketch): Suppose R is the RREF of A. Then Ax = 0 if and only if Rx = 0, i.e., the linear dependence relations among the columns of A are the same as those among the columns of R. Since the pivot columns of R are linearly independent, so are the pivot columns of A. By the Reduction Theorem we can show that the pivot columns of R span CS(R). Then the pivot columns of A span CS(A). Thus the pivot columns of A form a basis for CS(A), and rank(A) = dim(CS(A)) is the number of pivot columns of A.

Remark: If R is the RREF of A, then CS(A) ≠ CS(R) in general.

Example: Find rank(A) and a basis of CS(A) for A = [1 2 3 4; 1 3 5 8; 2 4 7 11].

A  --(-1)R1 + R2, (-2)R1 + R3-->  [1 2 3 4; 0 1 2 4; 0 0 1 3]    (REF)

Since A has 3 pivot columns a1, a2, and a3, rank(A) = 3 and a basis of CS(A) is {a1, a2, a3}, i.e., {[1; 1; 2], [2; 3; 4], [3; 5; 7]}.

Definition: The nullity of a matrix A, denoted by nullity(A), is the dimension of its null space, i.e., nullity(A) = dim(NS(A)).

Theorem 3.9: nullity(A) is the number of non-pivot columns of A.

Proof (Sketch): Suppose B = [b1 b2 ... bn] is the RREF of an m × n matrix A. Then Ax = 0 if and only if Bx = 0, i.e., NS(A) = NS(B). Suppose b1, b2, ..., bk are the pivot columns of B and the rest are non-pivot columns. Then for i = k + 1, ..., n,

bi = c_i1 b1 + c_i2 b2 + ... + c_ik bk = sum over j = 1, ..., k of c_ij bj, for some c_ij in R.

Then

0 = Bx = x1 b1 + x2 b2 + ... + xn bn = (x1 + sum over j = k+1, ..., n of xj c_j1) b1 + ... + (xk + sum over j = k+1, ..., n of xj c_jk) bk.

Since {b1, b2, ..., bk} is linearly independent, xi = -(sum over j = k+1, ..., n of xj c_ji) for i = 1, ..., k. Then we can write x as a linear combination of n - k linearly independent vectors that span NS(B) (exercise). Thus dim(NS(A)) = dim(NS(B)) = n - k.

Remark: The non-pivot columns of A do not form a basis for NS(A).

Example: Find nullity(A) and a basis of NS(A) for A = [1 2 3 4; 1 3 5 8; 2 4 7 11].

A = [1 2 3 4; 1 3 5 8; 2 4 7 11]  --> ... -->  [1 0 0 -1; 0 1 0 -2; 0 0 1 3]    (RREF)

Since A has one non-pivot column, nullity(A) = 1. To find a basis of NS(A), we solve Ax = 0, which becomes

x1 - x4 = 0
x2 - 2x4 = 0
x3 + 3x4 = 0

where x1, x2, and x3 are basic variables (for leading 1s) and x4 is a free variable:

x1 = x4, x2 = 2x4, x3 = -3x4, x4 free.

NS(A) = { [x4; 2x4; -3x4; x4] | x4 in R } = { x4 [1; 2; -3; 1] | x4 in R } = Span{[1; 2; -3; 1]}.

Thus a basis of NS(A) is {[1; 2; -3; 1]}.

Theorem 3.10 (Rank-Nullity Theorem): For an m × n matrix A, rank(A) + nullity(A) = n.

Proof: rank(A) + nullity(A) is the sum of the numbers of pivot and non-pivot columns of A, which is n.

Example: If A is a 4 × 5 matrix with rank 3, then by the Rank-Nullity Theorem, nullity(A) = n - rank(A) = 5 - 3 = 2.

Now we investigate the relation of rank(A) with the dimension of the row space of A.
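The rank-nullity bookkeeping for the 3 × 4 example can be spot-checked in a few lines (helper `matvec` is ours):

```python
# rank(A) + nullity(A) = n, and the null-space basis vector kills A.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2, 3, 4],
     [1, 3, 5, 8],
     [2, 4, 7, 11]]
n = len(A[0])
rank, nullity = 3, 1        # read off the RREF: 3 pivot, 1 non-pivot column
basis = [1, 2, -3, 1]       # spans NS(A)

print(matvec(A, basis))     # [0, 0, 0]
print(rank + nullity == n)  # True
```

Any scalar multiple of `basis` is also in NS(A), consistent with NS(A) being a one-dimensional subspace.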

Definition: Each row of an m × n matrix A is called a row vector, which can be identified with a (column) vector in R^n. The row space of an m × n matrix A, denoted by RS(A) or Row A, is the span of its row vectors r1, r2, ..., rm:

RS(A) = Span{r1, r2, ..., rm}.

Remark:
1. Since each row is an n-dimensional vector, RS(A) is a subspace of R^n.
2. Row i of A is column i of A^T. Then RS(A) = CS(A^T).
3. Elementary row operations may change the linear dependence relations among rows (unlike columns), but they do not change the row space. For example, RS(A) = RS(RREF of A).

Example: Consider A = [1 1 0 2; 0 1 1 0; 1 2 1 2]. Write A = [r1; r2; r3], where r1 = [1, 1, 0, 2], r2 = [0, 1, 1, 0], r3 = [1, 2, 1, 2]. Then RS(A) = CS(A^T) = Span{r1, r2, r3} is a subspace of R^4.

A = [1 1 0 2; 0 1 1 0; 1 2 1 2]  --(-1)R1 + R3, (-1)R2 + R3-->  R = [1 1 0 2; 0 1 1 0; 0 0 0 0]    (REF)

Note that r3 = r1 + r2 in A, but not in R. Since row 3 of R is -r1 - r2 + r3, the span of the rows of R is the same as that of A, i.e., RS(R) = RS(A). Note that the nonzero rows of R are linearly independent and span RS(R) = RS(A), i.e., they form a basis of RS(R) = RS(A).

Definition: The row rank of a matrix A is the dimension of its row space.

Theorem 3.11: Let A be an m × n matrix with REF R. Then the nonzero rows of R form a basis for RS(R) = RS(A), and

the row rank of A = the (column) rank of A = the number of pivot positions of A.

Proof: Each nonzero row of R is not a linear combination of the other nonzero rows. Thus the nonzero rows of R are linearly independent and span RS(R) = RS(A), i.e., they form a basis of RS(R) = RS(A). Recall that the rank of A is the number of pivot columns (hence pivot positions) of R. The number of pivot positions of R equals the number of nonzero rows of R, which is the row rank of R and consequently the row rank of A.

Remark: For an m × n matrix A, rank(A) ≤ min{m, n}.

Example:

1. For the 3 × 4 matrix A in the preceding example, rank(A) ≤ min{3, 4} = 3. Since it has two nonzero rows in its REF, the row rank of A = rank(A) = 2.

2. What are the smallest and largest possible nullities of a 5 × 7 matrix A? First note rank(A) ≤ min{5, 7} = 5. Now by the Rank-Nullity Theorem, nullity(A) = 7 - rank(A) ≥ 7 - 5 = 2. So the smallest possible nullity of A is 2; in that case the row rank of A = rank(A) = 5. Similarly nullity(A) = 7 - rank(A) ≤ 7. So the largest possible nullity of A is 7; in that case the row rank of A = rank(A) = 0.

3.4 Linear Transformations

Definition: A function T : V -> W from a subspace V of R^n to a subspace W of R^m is called a linear transformation if
(a) T(u + v) = T(u) + T(v) for all u, v in V, and
(b) T(cv) = cT(v) for all v in V and all scalars c in R.

In short, a function T : V -> W is a linear transformation if it preserves linearity among vectors: T(cu + dv) = cT(u) + dT(v) for all u, v in V and all scalars c, d in R.

Example:

1. The projection T : R^3 -> R^3 of R^3 onto the xy-plane of R^3 is defined by

T([x1; x2; x3]) = [x1; x2; 0] for all [x1; x2; x3] in R^3.

Sometimes it is simply denoted by T(x1, x2, x3) = (x1, x2, 0) in terms of row vectors. To show it is a linear transformation, let x = (x1, x2, x3) and y = (y1, y2, y3) in R^3 and c, d in R. Then

T(cx + dy) = T(cx1 + dy1, cx2 + dy2, cx3 + dy3) = (cx1 + dy1, cx2 + dy2, 0) = (cx1, cx2, 0) + (dy1, dy2, 0) = cT(x) + dT(y).

2. For the matrix A = [1 2; 0 1], define the shear transformation T : R^2 -> R^2 by T(x) = Ax. Let x, y be in R^2 and c, d in R. Then

T(cx + dy) = A(cx + dy) = cAx + dAy = cT(x) + dT(y).

Thus T is a linear transformation, which transforms the square formed by (0, 0), (1, 0), (0, 1), (1, 1) to the parallelogram formed by (0, 0), (1, 0), (2, 1), (3, 1).

Definition A matrix transformation is the linear transformation T : R^n → R^m defined by T(x) = Ax for some m × n matrix A. It is denoted by x ↦ Ax.

From the definition of a linear transformation we have the following properties.

Proposition For a linear transformation T : V → W, where V ⊆ R^n and W ⊆ R^m,
(a) T(0_n) = 0_m, and
(b) for all v_1, ..., v_k ∈ V and all c_1, ..., c_k ∈ R, T(c_1 v_1 + c_2 v_2 + ... + c_k v_k) = c_1 T(v_1) + c_2 T(v_2) + ... + c_k T(v_k).

Example Consider the function T : R^3 → R^3 defined by T(x_1, x_2, x_3) = (x_1, x_2, 5). Since T(0, 0, 0) = (0, 0, 5) ≠ (0, 0, 0), T is not a linear transformation.

Theorem 3.2 For a linear transformation T : R^n → R^m, there exists a unique m × n matrix A, called the standard matrix of T, for which T(x) = Ax for all x ∈ R^n. Moreover, A = [T(e_1) T(e_2) ... T(e_n)], where e_i is the ith column of I_n.

Proof Let x = [x_1, x_2, ..., x_n]^T ∈ R^n. We can write x = x_1 e_1 + x_2 e_2 + ... + x_n e_n. Then

T(x) = T(x_1 e_1 + x_2 e_2 + ... + x_n e_n) = x_1 T(e_1) + x_2 T(e_2) + ... + x_n T(e_n) = [T(e_1) T(e_2) ... T(e_n)] [x_1, x_2, ..., x_n]^T = Ax.

Example
1. Use the standard matrix to find the rotation transformation T : R^2 → R^2 that rotates each point of R^2 about the origin through an angle θ counterclockwise. By trigonometry we have

T(e_1) = T([1, 0]^T) = [cos θ, sin θ]^T and T(e_2) = T([0, 1]^T) = [−sin θ, cos θ]^T.

Then the standard matrix is

A = [T(e_1) T(e_2)] = [[cos θ, −sin θ], [sin θ, cos θ]].

Thus for all x ∈ R^2, T(x) = Ax, i.e.,

T([x_1, x_2]^T) = [x_1 cos θ − x_2 sin θ, x_1 sin θ + x_2 cos θ]^T.
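The rotation example above can be sketched in numpy; the standard matrix is built exactly as A = [T(e_1) T(e_2)], and a quarter turn sends e_1 to (0, 1):

```python
import numpy as np

def rotation_matrix(theta):
    # Standard matrix A = [T(e1) T(e2)] of the counterclockwise rotation by theta.
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

A = rotation_matrix(np.pi / 2)                 # quarter turn
print(np.round(A @ np.array([1.0, 0.0]), 8))   # e1 rotates to (0, 1)
```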

2. Consider the linear transformation T : R^2 → R^3 defined by T(x_1, x_2) = (x_1 − x_2, x_1 + 3x_2, 4x_2). Note that T(e_1) = T(1, 0) = (1, 1, 0) and T(e_2) = T(0, 1) = (−1, 3, 4). The standard matrix of T is

A = [T(e_1) T(e_2)] = [[1, −1], [1, 3], [0, 4]].

For any given linear transformation T : R^n → R^m, the domain space is R^n and the codomain space is R^m. We study a subspace of the domain space called the kernel or null space and a subspace of the codomain space called the image space or range.

Definition The kernel or null space of a linear transformation T : R^n → R^m, denoted by ker(T) or ker T, is the following subspace of R^n:

ker T = {x ∈ R^n | T(x) = 0_m}.

The nullity of T, denoted by nullity(T), is the dimension of ker T, i.e., nullity(T) = dim(ker T).

Remark If A is the standard matrix of a linear transformation T : R^n → R^m, then ker T = NS(A) and nullity(T) = nullity(A).

Example The linear transformation T : R^3 → R^2 defined by T(x_1, x_2, x_3) = (x_1, x_2) has the standard matrix

A = [T(e_1) T(e_2) T(e_3)] = [[1, 0, 0], [0, 1, 0]].

Note that ker T = NS(A) = Span{[0, 0, 1]^T} and nullity(T) = nullity(A) = 1.

Definition The image space or range of a linear transformation T : R^n → R^m, denoted by im(T) or im T or T(R^n), is the following subspace of R^m:

im T = {T(x) | x ∈ R^n}.

The rank of T, denoted by rank(T), is the dimension of im T, i.e., rank(T) = dim(im T).

Remark If A is the standard matrix of a linear transformation T : R^n → R^m, then im T = CS(A) and rank(T) = rank(A).
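The kernel computation can be checked numerically: the nullity is n − rank, and a basis of NS(A) can be read off the singular value decomposition (the right-singular vectors beyond the rank span the null space). A sketch for the projection example above:

```python
import numpy as np

# Standard matrix of T(x1, x2, x3) = (x1, x2).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

n = A.shape[1]
rank = np.linalg.matrix_rank(A)
nullity = n - rank                      # Rank-Nullity: nullity = n - rank

# Basis of ker T = NS(A): right-singular vectors beyond the rank.
_, _, Vt = np.linalg.svd(A)
kernel_vec = Vt[rank:].T                # columns span NS(A)

print(nullity)                          # 1
print(np.allclose(A @ kernel_vec, 0))   # True: basis vectors map to 0
```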

Example The linear transformation T : R^2 → R^3 defined by T(x_1, x_2) = (x_1, x_2, 0) has the standard matrix

A = [T(e_1) T(e_2)] = [[1, 0], [0, 1], [0, 0]].

Note that im T = CS(A) = Span{[1, 0, 0]^T, [0, 1, 0]^T} and rank(T) = rank(A) = 2.

Theorem 3.3 (Rank-Nullity Theorem) For a linear transformation T : R^n → R^m,

rank(T) + nullity(T) = n.

Proof Let A be the m × n standard matrix of T. Then by the Rank-Nullity Theorem on A, rank(T) + nullity(T) = rank(A) + nullity(A) = n.

Example The linear transformation T : R^3 → R^2 defined by T(x_1, x_2, x_3) = (x_1, x_2) has nullity(T) = 1 (see the examples before). Then by the Rank-Nullity Theorem, rank(T) = 3 − nullity(T) = 2.

Now we discuss two important types of linear transformations T : R^n → R^m.

Definition Let T : R^n → R^m be a linear transformation.
1. T is onto if each b ∈ R^m has a pre-image x in R^n under T, i.e., T(x) = b.
2. T is one-to-one if each b ∈ R^m has at most one pre-image in R^n under T.

Example
1. The linear transformation T : R^3 → R^2 defined by T(x_1, x_2, x_3) = (x_1, x_2) is onto because each (x_1, x_2) ∈ R^2 has a pre-image (x_1, x_2, 0) ∈ R^3 under T. But T is not one-to-one because T(0, 0, 0) = T(0, 0, 1) = (0, 0), i.e., (0, 0) has two distinct pre-images (0, 0, 0) and (0, 0, 1) under T.
2. The linear transformation T : R^2 → R^3 defined by T(x_1, x_2) = (x_1, x_2, 0) is one-to-one because T(x_1, x_2) = T(y_1, y_2) implies (x_1, x_2, 0) = (y_1, y_2, 0), which implies (x_1, x_2) = (y_1, y_2). But T is not onto because (0, 0, 1) ∈ R^3 has no pre-image (x_1, x_2) ∈ R^2 under T.
3. The linear transformation T : R^2 → R^2 defined by T(x_1, x_2) = (x_1 + x_2, x_1 − x_2) is one-to-one and onto (exercise).

Theorem 3.4 Let T : R^n → R^m be a linear transformation with the standard matrix A. Then the following are equivalent.
(a) T (i.e., x ↦ Ax) is one-to-one.
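The onto and one-to-one criteria reduce to rank conditions (a pivot in every row, respectively independent columns), which can be sketched in numpy; the two matrices below are the standard matrices of the projection and embedding examples above:

```python
import numpy as np

def is_onto(A):
    # T(x) = Ax is onto R^m iff rank(A) = m (a pivot position in every row).
    return np.linalg.matrix_rank(A) == A.shape[0]

def is_one_to_one(A):
    # T(x) = Ax is one-to-one iff rank(A) = n (linearly independent columns).
    return np.linalg.matrix_rank(A) == A.shape[1]

P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])    # (x1,x2,x3) -> (x1,x2)
E = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # (x1,x2) -> (x1,x2,0)

print(is_onto(P), is_one_to_one(P))  # True False
print(is_onto(E), is_one_to_one(E))  # False True
```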

(b) ker T = NS(A) = {0_n}.
(c) nullity(T) = nullity(A) = 0.
(d) The columns of A are linearly independent.

Proof (b), (c), and (d) are equivalent by the definitions.
(a) implies (b): Suppose T (i.e., x ↦ Ax) is one-to-one. Let x ∈ ker T = NS(A). Then Ax = 0_m. Also A 0_n = 0_m. Since x ↦ Ax is one-to-one, x = 0_n. Thus NS(A) = {0_n}.
(b) implies (a): Suppose ker T = NS(A) = {0_n}. Let x, y ∈ R^n such that Ax = Ay. Then A(x − y) = 0_m. Then x − y ∈ NS(A) = {0_n}, which implies x − y = 0_n, i.e., x = y. Thus x ↦ Ax is one-to-one.

Example The linear transformation T : R^2 → R^3 defined by T(x_1, x_2) = (x_1, x_2, 0) has the standard matrix

A = [T(e_1) T(e_2)] = [[1, 0], [0, 1], [0, 0]].

Note that the columns of A are linearly independent, ker T = NS(A) = {0_2}, and nullity(T) = nullity(A) = 0. Thus T (i.e., x ↦ Ax) is one-to-one.

Theorem 3.5 Let T : R^n → R^m be a linear transformation with the standard matrix A. Then the following are equivalent.
(a) T (i.e., x ↦ Ax) is onto.
(b) im T = CS(A) = R^m.
(c) rank(T) = rank(A) = m.
(d) Each row of A has a pivot position.

Proof (b), (c), and (d) are equivalent by the definitions.
(a) implies (b): Suppose T (i.e., x ↦ Ax) is onto. Let b ∈ R^m. Since x ↦ Ax is onto, b = Ax for some x ∈ R^n. Then b = Ax ∈ CS(A). Thus im T = CS(A) = R^m.
(b) implies (a): Suppose im T = CS(A) = R^m. Let b ∈ R^m. Since b ∈ CS(A) = R^m, b = Ax for some x ∈ R^n. Thus x ↦ Ax is onto.

Example The linear transformation T : R^3 → R^2 defined by T(x_1, x_2, x_3) = (x_1, x_2) has the standard matrix

A = [T(e_1) T(e_2) T(e_3)] = [[1, 0, 0], [0, 1, 0]].

Note that each row of A has a pivot position, im T = CS(A) = R^2, and rank(T) = rank(A) = 2. Thus T (i.e., x ↦ Ax) is onto.

Definition A linear transformation T : R^n → R^n is an isomorphism if it is one-to-one and onto.

Example The linear transformation T : R^2 → R^2 defined by T(x_1, x_2) = (x_1 + x_2, x_1 − x_2) is one-to-one and onto, consequently an isomorphism. Showing T is one-to-one is enough to show T is an isomorphism, by the following theorem.

Theorem 3.6 Let T : R^n → R^n be a linear transformation with the n × n standard matrix A. Then the following are equivalent.
(a) T (i.e., x ↦ Ax) is an isomorphism.
(b) T (i.e., x ↦ Ax) is one-to-one.
(c) ker T = NS(A) = {0_n}.
(d) nullity(T) = nullity(A) = 0.
(e) The columns of A are linearly independent.
(f) T (i.e., x ↦ Ax) is onto.
(g) im T = CS(A) = R^n.
(h) rank(T) = rank(A) = n.
(i) Each row and column of A has a pivot position.

Proof (b), (c), (d), and (e) are equivalent by Theorem 3.4. (f), (g), (h), and (i) are equivalent by Theorem 3.5. Now for the n × n standard matrix A, rank(A) + nullity(A) = n. Thus nullity(A) = 0 if and only if rank(A) = n, i.e., (d) and (h) are equivalent. Since (b) and (f) are equivalent, they are equivalent to (a).

Example What can we say about CS(A), NS(A), rank(A), nullity(A), and pivot positions of a 3 × 3 matrix A with three linearly independent columns? What about x ↦ Ax? By the preceding theorem, CS(A) = R^3, NS(A) = {0_3}, rank(A) = 3, nullity(A) = 0, A has 3 pivot positions, and x ↦ Ax is a one-to-one linear transformation from R^3 onto R^3.

4 Inverse and Determinant of a Matrix

4.1 Inverse of a Matrix

Definition An n × n matrix A is invertible if there is an n × n matrix B such that AB = BA = I_n. This B is called the inverse of A, denoted by A^{-1}, for which AA^{-1} = A^{-1}A = I_n. An invertible matrix is called a nonsingular matrix. A square matrix that is not invertible is called a singular matrix.

Example For a pair of 2 × 2 matrices A and B, direct multiplication verifying AB = BA = I_2 shows that B = A^{-1}.

Theorem 4.1 Let A and B be two n × n invertible matrices. Then the following hold.
(a) A^{-1} is invertible and (A^{-1})^{-1} = A.
(b) A^T is invertible and (A^T)^{-1} = (A^{-1})^T.
(c) For c ≠ 0, cA is invertible and (cA)^{-1} = (1/c)A^{-1}.
(d) AB is invertible and (AB)^{-1} = B^{-1}A^{-1}.

Proof (a) and (c) are exercises. For (b) note that A^T (A^{-1})^T = (A^{-1}A)^T = I_n^T = I_n and (A^{-1})^T A^T = (AA^{-1})^T = I_n^T = I_n. For (d) note that (AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = A I_n A^{-1} = AA^{-1} = I_n and (B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1} I_n B = B^{-1}B = I_n.

Example For invertible 2 × 2 matrices A and B, one can verify directly that (A^T)^{-1} = (A^{-1})^T, (5A)^{-1} = (1/5)A^{-1}, and (AB)^{-1} = B^{-1}A^{-1}.

How do we know a given square matrix A is invertible? How do we find A^{-1}?

Theorem 4.2 Let A be an n × n matrix. Then the following are equivalent.
(a) A is invertible.
(b) Ax = b has a unique solution for each b ∈ R^n.
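The identities of Theorem 4.1 can be verified numerically. A sketch using numpy, with two invertible 2 × 2 matrices of my own choosing:

```python
import numpy as np

# Illustrative invertible matrices (not taken from the notes).
A = np.array([[1.0, 2.0], [3.0, 5.0]])
B = np.array([[2.0, 1.0], [1.0, 1.0]])

Ainv = np.linalg.inv(A)

print(np.allclose(A @ Ainv, np.eye(2)))             # A A^{-1} = I
print(np.allclose(np.linalg.inv(A.T), Ainv.T))      # (A^T)^{-1} = (A^{-1})^T
print(np.allclose(np.linalg.inv(5 * A), Ainv / 5))  # (5A)^{-1} = (1/5) A^{-1}
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ Ainv))         # (AB)^{-1} = B^{-1} A^{-1}
```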

(c) The RREF of A is I_n.

Proof (b) is equivalent to (c): Ax = b has a unique solution for each b ∈ R^n if and only if each column of the RREF of A has a leading 1, if and only if the RREF of A is I_n.
(a) implies (b): Suppose A is invertible. Let b ∈ R^n. Then Ax = b if and only if x = A^{-1}b.
(b) implies (a): Suppose Ax = b has a unique solution for each b ∈ R^n. Let Av_i = e_i for i = 1, 2, ..., n. Then A[v_1 v_2 ... v_n] = [Av_1 Av_2 ... Av_n] = [e_1 e_2 ... e_n] = I_n. To show A^{-1} = [v_1 v_2 ... v_n], it suffices to show [v_1 v_2 ... v_n]A = I_n. Since A[v_1 v_2 ... v_n] = I_n, A([v_1 v_2 ... v_n]A) = I_n A = A. Let b_i be the ith column of [v_1 v_2 ... v_n]A for i = 1, 2, ..., n. Then Ab_i = a_i, the ith column of A. But also Ae_i = a_i. By the uniqueness of the solution of Ax = a_i, b_i = e_i for i = 1, 2, ..., n. Thus [v_1 v_2 ... v_n]A = [e_1 e_2 ... e_n] = I_n.

To find A^{-1} for an invertible matrix A, we investigate how row operations on A are obtained by premultiplying A by elementary matrices.

Definition An n × n elementary matrix is obtained by applying a single elementary row operation to I_n.

Example
1. E_ij is obtained by R_i ↔ R_j on I_n. Note that E_ij A is obtained by R_i ↔ R_j on A. For n = 3, E_12 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]], and E_12 A swaps the first two rows of A.
2. For c ≠ 0, E_i(c) is obtained by cR_i on I_n. Note that E_i(c)A is obtained by cR_i on A. For n = 3, E_2(c) = [[1, 0, 0], [0, c, 0], [0, 0, 1]], and E_2(c)A multiplies row 2 of A by c.
3. E_ij(c) is obtained by cR_i + R_j on I_n. Note that E_ij(c)A is obtained by cR_i + R_j on A. For n = 3, E_13(c) = [[1, 0, 0], [0, 1, 0], [c, 0, 1]], and E_13(c)A adds c times row 1 of A to row 3.

Remark Elementary matrices are invertible. Moreover, E_ij^{-1} = E_ij, E_i(c)^{-1} = E_i(1/c) for c ≠ 0, and E_ij(c)^{-1} = E_ij(−c).

Theorem 4.3 Let A be an n × n invertible matrix. A sequence of elementary row operations that reduces A to I_n also reduces I_n to A^{-1}.

Proof Since A is invertible, the RREF of A is I_n. Suppose I_n is obtained from A by successively premultiplying by elementary matrices E_1, E_2, ..., E_k, i.e.,

E_k E_{k−1} ... E_1 A = I_n.

Postmultiplying by A^{-1}, we get E_k E_{k−1} ... E_1 AA^{-1} = I_n A^{-1}, i.e., E_k E_{k−1} ... E_1 I_n = A^{-1}.

Gauss-Jordan elimination: Find the RREF of [A | I_n]. If the RREF of A is I_n, then A is invertible and the RREF of [A | I_n] is [I_n | A^{-1}]. Otherwise A is not invertible.

Example For an invertible 3 × 3 matrix A, row reduce [A | I_3] to its RREF [I_3 | A^{-1}]. The row operations used correspond to elementary matrices E_1, E_2, ..., E_k with E_k ... E_2 E_1 A = I_3, and the product of those elementary matrices is A^{-1}.

Remark For an m × n matrix A there is a generalized inverse called the Moore-Penrose inverse, denoted by A^+, which can be found using the singular value decomposition of A.
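Gauss-Jordan elimination on [A | I] can be sketched directly in numpy; each step below is one of the three elementary row operations (the example matrix is my own):

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Invert A by row reducing the augmented matrix [A | I] to [I | A^{-1}]."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])                      # form [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # partial pivoting
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]              # swap rows: E_ij
        M[col] /= M[col, col]                          # scale row: E_i(c)
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]             # replacement: E_ij(c)
    return M[:, n:]                                    # right half is A^{-1}

A = [[1.0, 2.0], [3.0, 5.0]]
print(inverse_gauss_jordan(A))  # [[-5.  2.] [ 3. -1.]]
```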

4.2 Invertible Matrix Theorem

Theorem 4.4 (Invertible Matrix Theorem) Let A be an n × n matrix. Then the following are equivalent.
(a) A is invertible.
(b) Ax = b has a unique solution for each b ∈ R^n.
(c) The RREF of A is I_n.
(d) T (i.e., x ↦ Ax) is an isomorphism.
(e) T (i.e., x ↦ Ax) is one-to-one.
(f) ker T = NS(A) = {0_n}.
(g) nullity(T) = nullity(A) = 0.
(h) The columns of A are linearly independent.
(i) T (i.e., x ↦ Ax) is onto.
(j) im T = CS(A) = R^n.
(k) rank(T) = rank(A) = n.
(l) Each row and column of A has a pivot position.

Proof (a), (b), and (c) are equivalent by Theorem 4.2. Also (d)-(l) are equivalent by Theorem 3.6. Since A is a square matrix, (c) and (l) are equivalent.

Example What can we say about CS(A), NS(A), rank(A), nullity(A), and pivot positions of a 3 × 3 invertible matrix A? What about x ↦ Ax? By the IMT, CS(A) = R^3, NS(A) = {0_3}, rank(A) = 3, nullity(A) = 0, A has 3 pivot positions, and x ↦ Ax is an isomorphism, i.e., a one-to-one linear transformation from R^3 onto R^3. Also Ax = b has the unique solution A^{-1}b for each b ∈ R^3.

Remark In general the conditions in the IMT are not equivalent for a non-square matrix.

Example
1. The linear transformation T : R^3 → R^2 defined by T(x_1, x_2, x_3) = (x_1, x_2) has the 2 × 3 standard matrix A = [[1, 0, 0], [0, 1, 0]]. Note that T is onto but not one-to-one. Equivalently, the columns of A span R^2 but they are not linearly independent.
2. The linear transformation T : R^2 → R^3 defined by T(x_1, x_2) = (x_1, x_2, 0) has the 3 × 2 standard matrix A = [[1, 0], [0, 1], [0, 0]]. Note that T is one-to-one but not onto. Equivalently, the columns of A are linearly independent but they do not span R^3.

Definition A linear transformation T : R^n → R^n is invertible if there is another linear transformation S : R^n → R^n such that T(S(x)) = S(T(x)) = x for all x ∈ R^n. This S is called the inverse of T, denoted by T^{-1}, for which T ∘ T^{-1} = T^{-1} ∘ T = I, the identity function on R^n.

Remark It is well known that a function is invertible if and only if it is one-to-one and onto. So a linear transformation T : R^n → R^n is an isomorphism if and only if it is invertible.

Example The linear transformation T : R^2 → R^2 defined by T(x_1, x_2) = (x_1 + 2x_2, 3x_1 + 5x_2) is one-to-one and onto, consequently invertible. How do we find T^{-1} : R^2 → R^2?

Theorem 4.5 Let T : R^n → R^n be a linear transformation with the standard matrix A. Then T is invertible if and only if A is invertible. Also T^{-1} : R^n → R^n is given by T^{-1}(x) = A^{-1}x.

Proof T is invertible (i.e., an isomorphism) if and only if A is invertible, by the IMT. Let S : R^n → R^n be the linear transformation defined by S(x) = A^{-1}x. Then for all x ∈ R^n,

T(S(x)) = T(A^{-1}x) = A(A^{-1}x) = I_n x = x and S(T(x)) = S(Ax) = A^{-1}(Ax) = I_n x = x.

Thus S = T^{-1}.

Example The isomorphism T : R^2 → R^2 defined by T(x_1, x_2) = (x_1 + 2x_2, 3x_1 + 5x_2) has the standard matrix A = [T(e_1) T(e_2)] = [[1, 2], [3, 5]]. Since A^{-1} = [[−5, 2], [3, −1]], T^{-1} : R^2 → R^2 is given by T^{-1}(x) = A^{-1}x, i.e., T^{-1}(x_1, x_2) = (−5x_1 + 2x_2, 3x_1 − x_2). Verify that for all [x_1, x_2]^T ∈ R^2, T(T^{-1}(x_1, x_2)) = T(−5x_1 + 2x_2, 3x_1 − x_2) = (x_1, x_2) and T^{-1}(T(x_1, x_2)) = T^{-1}(x_1 + 2x_2, 3x_1 + 5x_2) = (x_1, x_2).

4.3 Determinant of a Matrix

In this section we study the determinant of an n × n matrix A = [a_ij], denoted by det(A) or det A or |A|:

|A| = det [[a_11, a_12, ..., a_1n], [a_21, a_22, ..., a_2n], ..., [a_n1, a_n2, ..., a_nn]].

To define det(A) recursively, we denote by A(i, j) the matrix obtained from A by deleting row i and column j of A.
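The verification at the end of the example can be done numerically; the standard matrix below is exactly the one from the example:

```python
import numpy as np

# Standard matrix of T(x1, x2) = (x1 + 2*x2, 3*x1 + 5*x2).
A = np.array([[1.0, 2.0], [3.0, 5.0]])
Ainv = np.linalg.inv(A)                 # standard matrix of T^{-1}

print(np.allclose(Ainv, [[-5.0, 2.0], [3.0, -1.0]]))  # matches the hand computation

x = np.array([7.0, -4.0])               # arbitrary test vector
print(np.allclose(Ainv @ (A @ x), x))   # T^{-1}(T(x)) = x
print(np.allclose(A @ (Ainv @ x), x))   # T(T^{-1}(x)) = x
```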

Definition
1. If A = [a_11], then det(A) = a_11.
2. If A = [[a_11, a_12], [a_21, a_22]], then det(A) = a_11 a_22 − a_12 a_21.
3. For an n × n matrix A = [a_ij] where n ≥ 3,

det(A) = Σ_{i=1}^{n} (−1)^{1+i} a_1i det A(1, i) = a_11 det A(1, 1) − a_12 det A(1, 2) + ... + (−1)^{n+1} a_1n det A(1, n).

Example For a 3 × 3 matrix A, det(A) = a_11 det A(1, 1) − a_12 det A(1, 2) + a_13 det A(1, 3), where each 2 × 2 determinant det A(1, i) is computed by the n = 2 case above.

Definition For an n × n matrix A = [a_ij] where n ≥ 2, the (i, j) minor, denoted by m_ij, is m_ij = det A(i, j), and the (i, j) cofactor, denoted by c_ij, is c_ij = (−1)^{i+j} m_ij = (−1)^{i+j} det A(i, j).

Remark We defined det(A) as the cofactor expansion along the first row of A:

det(A) = Σ_{i=1}^{n} (−1)^{1+i} a_1i det A(1, i) = Σ_{i=1}^{n} a_1i c_1i.

But it can be proved that det(A) is the cofactor expansion along any row or column of A.

Theorem 4.6 Let A be an n × n matrix. Then for each i, j = 1, 2, ..., n,

det(A) = Σ_{j=1}^{n} a_ij c_ij = Σ_{i=1}^{n} a_ij c_ij

(expansion along row i and along column j, respectively).

The preceding theorem can be proved using the following equivalent definition of the determinant:

det(A) = Σ_{σ ∈ S_n} sgn(σ) Π_{i=1}^{n} a_{iσ(i)},

where σ runs over all n! permutations of {1, 2, ..., n}. (This requires a study of permutations.)

Corollary 4.7 Let A = [a_ij] be an n × n matrix.
(a) det(A^T) = det(A).
(b) If A is a triangular matrix, then det(A) = a_11 a_22 ... a_nn.
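The recursive definition translates directly into code. A sketch of cofactor expansion along the first row (fine for small n; the 3 × 3 test matrix is my own), checked against numpy's determinant:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row (small n only)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        # A(1, j+1): delete row 1 and column j+1 (0-based row 0, column j).
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

A = [[2.0, 1.0, 3.0],
     [0.0, 4.0, 5.0],
     [1.0, 0.0, 6.0]]
print(det_cofactor(A))  # 2*(24-0) - 1*(0-5) + 3*(0-4) = 41.0
```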

Proof (Sketch) (a) The (i, j) cofactor of A is the (j, i) cofactor of A^T, so the cofactor expansion along the first row that gives det(A) is the same as the cofactor expansion along the first column that gives det(A^T). (b) If A is an upper-triangular matrix, then repeated cofactor expansion along the first column gives det(A) = a_11 a_22 ... a_nn. Similarly, if A is a lower-triangular matrix, then repeated cofactor expansion along the first row gives det(A) = a_11 a_22 ... a_nn.

Example When computing det(A) by cofactor expansion, at each step choose the row or column with the maximum number of zeros; once a triangular matrix appears, its determinant is the product of its diagonal entries by Corollary 4.7(b).

Some applications of determinants:

1. Determinant as volume: Suppose a hypersolid S in R^n is given by n concurrent edges that are represented by the column vectors of an n × n matrix A. Then the volume of S is |det(A)|. Let r_1 = [a_1, b_1, c_1]^T, r_2 = [a_2, b_2, c_2]^T, r_3 = [a_3, b_3, c_3]^T and A = [r_1 r_2 r_3]. Then the volume of the parallelepiped with concurrent edges r_1, r_2, r_3 is

|det(A)| = |a_1(b_2 c_3 − b_3 c_2) − a_2(b_1 c_3 − b_3 c_1) + a_3(b_1 c_2 − b_2 c_1)|.

2. Equation of a plane: Consider the plane passing through three distinct points P_1(x_1, y_1, z_1), P_2(x_2, y_2, z_2), and P_3(x_3, y_3, z_3). Let P(x, y, z) be a point on the plane. Then the volume of the parallelepiped with concurrent edges P_1P, P_2P, P_3P is zero:

det [[x − x_1, x − x_2, x − x_3], [y − y_1, y − y_2, y − y_3], [z − z_1, z − z_2, z − z_3]] = 0.

3. Volume after transformation: Let T : R^n → R^n be a linear transformation with the standard matrix A. Let S be a bounded hypersolid in R^n. Then the volume of T(S) is |det(A)| times the volume of S.

Example Let A = [[a, 0], [0, b]] and D = {(x, y) | x^2 + y^2 ≤ 1}. Consider T : R^2 → R^2 defined by T([x, y]^T) = A[x, y]^T. Note T(D) = {(x, y) | x^2/a^2 + y^2/b^2 ≤ 1}. So the area of the ellipse = the area of T(D) = |det(A)| · Area(D) = ab · π · 1^2 = πab.

4. Change of variables: Suppose variables x_1, ..., x_n are changed to v_1, ..., v_n by n differentiable functions f_1, ..., f_n so that

v_1 = f_1(x_1, ..., x_n), v_2 = f_2(x_1, ..., x_n), ..., v_n = f_n(x_1, ..., x_n).

So we have a function F : R^n → R^n defined by F(x_1, ..., x_n) = (f_1(x_1, ..., x_n), ..., f_n(x_1, ..., x_n)). The Jacobian matrix of F : R^n → R^n is

∂(f_1, ..., f_n)/∂(x_1, ..., x_n) = [[∂f_1/∂x_1, ..., ∂f_1/∂x_n], ..., [∂f_n/∂x_1, ..., ∂f_n/∂x_n]].

The change of variables formula for integrals is

∫_{F(U)} G(v) dv = ∫_U G(F(x)) |det ∂(f_1, ..., f_n)/∂(x_1, ..., x_n)| dx.

Example Let (x, y) = F(r, θ) = (ar cos θ, br sin θ). Then F([0, 1] × [0, 2π]) is the region inscribed by the ellipse x^2/a^2 + y^2/b^2 = 1. The Jacobian matrix is

∂(x, y)/∂(r, θ) = [[∂x/∂r, ∂x/∂θ], [∂y/∂r, ∂y/∂θ]] = [[a cos θ, −ar sin θ], [b sin θ, br cos θ]]

and det ∂(x, y)/∂(r, θ) = abr. By the change of variables formula, the area is

∫_{F([0,1]×[0,2π])} dv = ∫_0^{2π} ∫_0^1 |det ∂(x, y)/∂(r, θ)| dr dθ = ab ∫_0^{2π} ∫_0^1 r dr dθ = πab.

5. Wronskian: The Wronskian of n real-valued, (n − 1)-times differentiable functions f_1, ..., f_n is

W(f_1, ..., f_n)(x) = det [[f_1(x), ..., f_n(x)], [f_1'(x), ..., f_n'(x)], ..., [f_1^{(n−1)}(x), ..., f_n^{(n−1)}(x)]].

If W(f_1, ..., f_n) is not identically zero, then f_1, ..., f_n are linearly independent functions.

4.4 Properties of Determinants

Theorem 4.8 For an n × n matrix A and n × n elementary matrices E_ij, E_i(c), E_ij(c), we have det E_ij = −1, det E_i(c) = c, det E_ij(c) = 1, and

det(E_ij A) = −det A = (det E_ij)(det A),
det(E_i(c)A) = c det A = (det E_i(c))(det A),
det(E_ij(c)A) = det A = (det E_ij(c))(det A).

Proof Use cofactor expansion and induction on n.

Theorem 4.9 Let A be an n × n matrix. Then A is invertible if and only if det(A) ≠ 0.

Proof Suppose A is invertible. Then A^{-1} is invertible and there are elementary matrices E_1, E_2, ..., E_k such that E_k E_{k−1} ... E_1 A^{-1} = I_n. Postmultiplying by A, we get E_k E_{k−1} ... E_1 = A, so det(A) = det(E_k E_{k−1} ... E_1).
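The Jacobian computation for the ellipse can be checked numerically: the determinant should equal abr at every point, and integrating |det J| over [0, 1] × [0, 2π] should recover the area πab. A sketch with illustrative semi-axes a, b:

```python
import numpy as np

a, b = 3.0, 2.0  # illustrative semi-axes of the ellipse

def jac_det(r, theta):
    # Jacobian of F(r, theta) = (a r cos(theta), b r sin(theta)); det = a*b*r.
    J = np.array([[a * np.cos(theta), -a * r * np.sin(theta)],
                  [b * np.sin(theta),  b * r * np.cos(theta)]])
    return np.linalg.det(J)

# Midpoint-rule integration of |det J| over [0, 1] x [0, 2*pi].
N = 200
rs = (np.arange(N) + 0.5) / N
ts = (np.arange(N) + 0.5) * (2 * np.pi / N)
area = sum(abs(jac_det(r, t)) for r in rs for t in ts) * (1 / N) * (2 * np.pi / N)

print(round(area, 6), round(np.pi * a * b, 6))  # both approximately pi*a*b
```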

By successively applying Theorem 4.8, we get

det(A) = det(E_k E_{k−1} ... E_1) = det(E_k) det(E_{k−1}) ... det(E_1) ≠ 0.

For the converse, suppose that A is not invertible. Then the RREF R of A is not I_n. So R is an upper-triangular matrix with the last row being a zero row, and consequently det(R) = 0. Suppose E'_1, E'_2, ..., E'_t are elementary matrices for which E'_t E'_{t−1} ... E'_1 A = R. Then

det(E'_t E'_{t−1} ... E'_1 A) = det(R) = 0, so det(E'_t) det(E'_{t−1}) ... det(E'_1) det(A) = 0

by Theorem 4.8. Since det(E'_i) ≠ 0 for i = 1, 2, ..., t, det(A) = 0.

Remark We extend the IMT by adding one more equivalent condition: (a) A is invertible; (m) det(A) ≠ 0.

Theorem 4.10 Let A and B be two n × n matrices. Then det(AB) = det(A) det(B).

Proof Case 1: A is not invertible. By the IMT, rank(A) < n. Since CS(AB) ⊆ CS(A), rank(AB) ≤ rank(A) < n and consequently AB is also not invertible. By the IMT, det(A) = 0 and det(AB) = 0. Thus det(AB) = 0 = det(A) det(B).
Case 2: A is invertible. There are elementary matrices E_1, E_2, ..., E_k such that E_k E_{k−1} ... E_1 = A. Postmultiplying by B, we get AB = E_k E_{k−1} ... E_1 B. By successively applying Theorem 4.8, we get

det(AB) = det(E_k E_{k−1} ... E_1 B) = det(E_k) det(E_{k−1}) ... det(E_1) det(B) = det(E_k E_{k−1} ... E_1) det(B) = det(A) det(B).

Corollary 4.11 Let A be an n × n matrix.
(a) For all scalars c, det(cA) = det(cI_n A) = det(cI_n) det(A) = c^n det(A).
(b) If A is invertible, then det(A) det(A^{-1}) = det(AA^{-1}) = det(I_n) = 1, i.e., det(A^{-1}) = 1/det(A).

Example Let A be a 3 × 3 matrix with det(A) = −2. Is A invertible? Since det(A) = −2 ≠ 0, A is invertible, and we have det(A^T) = det(A) = −2, det(4A^5) = 4^3 (det A)^5 = 64(−2)^5 = −2048, and det(A^{-1}) = 1/det(A) = −1/2.
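Theorem 4.10 and Corollary 4.11 can be spot-checked numerically. A sketch with random 3 × 3 matrices (invertible with probability 1 for this seed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
c = 4.0

dA = np.linalg.det(A)
print(np.isclose(np.linalg.det(A @ B), dA * np.linalg.det(B)))  # det(AB) = det(A) det(B)
print(np.isclose(np.linalg.det(c * A), c**n * dA))              # det(cA) = c^n det(A)
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / dA))      # det(A^{-1}) = 1/det(A)
print(np.isclose(np.linalg.det(A.T), dA))                       # det(A^T) = det(A)
```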

Theorem 4.12 (Cramer's Rule) Let A be an n × n invertible matrix and b ∈ R^n. The unique solution x = [x_1, x_2, ..., x_n]^T of Ax = b is given by

x_i = det(A_i(b)) / det(A), i = 1, 2, ..., n,

where A_i(b) is the matrix obtained from A by replacing its ith column by b.

Proof Let i ∈ {1, 2, ..., n}. Note that

A[e_1 ... e_{i−1} x e_{i+1} ... e_n] = [Ae_1 ... Ae_{i−1} Ax Ae_{i+1} ... Ae_n] = [a_1 ... a_{i−1} b a_{i+1} ... a_n] = A_i(b).

Since det([e_1 ... e_{i−1} x e_{i+1} ... e_n]) = x_i,

det(A_i(b)) = det(A[e_1 ... e_{i−1} x e_{i+1} ... e_n]) = det(A) det([e_1 ... e_{i−1} x e_{i+1} ... e_n]) = det(A) x_i.

Example To solve a 3 × 3 system Ax = b with det(A) ≠ 0 by Cramer's Rule, compute det(A_i(b)) for i = 1, 2, 3; the unique solution is [x_1, x_2, x_3]^T with x_i = det(A_i(b))/det(A).
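Cramer's Rule translates into a few lines of numpy; a sketch with a 2 × 2 system of my own choosing, checked against np.linalg.solve:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b via Cramer's Rule: x_i = det(A_i(b)) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("Cramer's Rule requires det(A) != 0")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                    # A_i(b): replace column i by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = [[2.0, 1.0], [1.0, 3.0]]            # 2*x1 + x2 = 5, x1 + 3*x2 = 10
b = [5.0, 10.0]
print(cramer_solve(A, b))               # [1. 3.]
```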

Definition Let A be an n × n matrix. The cofactor matrix of A, denoted by C = [c_ij], is the n × n matrix where c_ij is the (i, j) cofactor of A. The adjoint or adjugate of A, denoted by adj A or adj(A), is the transpose of the cofactor matrix of A, i.e., adj A = C^T.

Theorem 4.13 Let A be an n × n invertible matrix. Then

A^{-1} = adj A / det(A).

Proof Since AA^{-1} = I_n, A · (column j of A^{-1}) = e_j. By Cramer's Rule, the (i, j)-entry of A^{-1}, i.e., the ith entry of column j of A^{-1}, is

det(A_i(e_j))/det(A) = (−1)^{i+j} det A(j, i)/det(A) = c_ji/det(A) = (C^T)_ij/det(A) = (adj A)_ij/det(A).

Example
1. For invertible A = [[a, b], [c, d]],

A^{-1} = adj A / det(A) = (1/(ad − bc)) [[d, −b], [−c, a]].

2. For an invertible 3 × 3 matrix A,

A^{-1} = adj A / det(A) = (1/det(A)) [[c_11, c_21, c_31], [c_12, c_22, c_32], [c_13, c_23, c_33]].

We end with the following useful multilinear property of the determinant.

Theorem 4.14 Let A = [a_1 a_2 ... a_n] be an n × n matrix with columns a_1, ..., a_n. Then for all x, y ∈ R^n and for all scalars c, d,

det[a_1 ... a_{i−1} (cx + dy) a_{i+1} ... a_n] = c det[a_1 ... a_{i−1} x a_{i+1} ... a_n] + d det[a_1 ... a_{i−1} y a_{i+1} ... a_n].

Proof (Sketch) Find the determinants by cofactor expansion along the ith column.

Example

det [[3a + 4s, 3b + 4t], [c, d]] = det [[3a + 4s, c], [3b + 4t, d]] (by transposing)
= 3 det [[a, c], [b, d]] + 4 det [[s, c], [t, d]] (by multilinearity in the first column)
= 3(ad − cb) + 4(sd − ct).
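The adjugate formula of Theorem 4.13 is easy to implement from the cofactor definition; a sketch verified on the 2 × 2 matrix [[1, 2], [3, 5]] used earlier:

```python
import numpy as np

def adjugate(A):
    """Adjugate of A: transpose of the cofactor matrix C = [c_ij]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)  # A(i, j)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)       # cofactor c_ij
    return C.T

A = np.array([[1.0, 2.0], [3.0, 5.0]])
print(adjugate(A))  # [[ 5. -2.] [-3.  1.]], matching (1/(ad-bc))[[d,-b],[-c,a]] up to det
print(np.allclose(adjugate(A) / np.linalg.det(A), np.linalg.inv(A)))  # True
```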

5 Eigenvalues and Eigenvectors

5.1 Basics of Eigenvalues and Eigenvectors

Definition Let A be an n × n matrix. If Ax = λx for some nonzero vector x and some scalar λ, then λ is an eigenvalue of A and x is an eigenvector of A corresponding to λ.

Example Consider A = [[1, 2], [2, 1]], λ = 3, v = [1, 1]^T, u = [1, 0]^T.
1. Since Av = [3, 3]^T = 3v = λv, 3 is an eigenvalue of A and v is an eigenvector of A corresponding to the eigenvalue 3.
2. Since Au = [1, 2]^T ≠ λ[1, 0]^T = λu for all scalars λ, u is not an eigenvector of A.

Remark An eigenvalue can be a complex number and an eigenvector can be a complex vector.

Example Consider A = [[0, −1], [1, 0]]. Since A[1, −i]^T = [i, 1]^T = i[1, −i]^T, i is an eigenvalue of A and [1, −i]^T is an eigenvector of A corresponding to the eigenvalue i.

Remark An eigenvector must be a nonzero vector by definition. So the following are equivalent:
1. λ is an eigenvalue of A.
2. Ax = λx for some nonzero vector x.
3. (A − λI)x = 0 for some nonzero vector x.
4. (A − λI)x = 0 has a nontrivial solution x.
5. A − λI is not invertible (by the IMT).
6. det(A − λI) = 0.

Definition det(λI − A) is a polynomial in λ, called the characteristic polynomial of A. det(λI − A) = 0 is the characteristic equation of A.

Remark Since the roots of the characteristic polynomial are the eigenvalues of the n × n matrix A, A has n eigenvalues, not necessarily distinct.

Definition The multiplicity of a root λ of det(λI − A) = 0 is the algebraic multiplicity of the eigenvalue λ of A.
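The eigenvalue equation Av = λv can be checked numerically. A sketch using matrices consistent with the examples above (a symmetric matrix with eigenvector (1, 1), and a rotation-type matrix with purely imaginary eigenvalues):

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 1.0]])

v = np.array([1.0, 1.0])
print(np.allclose(A @ v, 3 * v))         # True: Av = 3v, so 3 is an eigenvalue

# Eigenvalues are the roots of det(lambda*I - A) = 0.
print(np.sort(np.linalg.eigvals(A)))     # [-1.  3.]

# A rotation-type matrix has the complex eigenvalues i and -i.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
print(np.linalg.eigvals(R))              # purely imaginary pair
```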