Math 321: Linear Algebra


Math 321: Linear Algebra
T. Kapitula
Department of Mathematics and Statistics, University of New Mexico
September 8, 2004
Textbook: Linear Algebra, by J. Hefferon
E-mail: kapitula@math.unm.edu

Prof. Kapitula, Spring 2003

1 Linear Systems

Two examples:

(a) Network flow. The assumption is that flow into nodes equals flow out of nodes, and that branches connect the various nodes. The nodes can be thought of as intersections, and the branches can be thought of as streets. [Figure: a small network with branch flows x_1, x_2, x_3, x_4.]

(b) Approximation of data: find the line of best fit (least squares). For example, find a line which best fits the points (1, 0), (2, 1), (4, 2), (5, 3). The answer is y = -3/5 + (7/10)x.

1.1 Solving Linear Systems

1.1.1 Gauss' Method

Definition 1.1. A linear equation is of the form

    a_1 x_1 + a_2 x_2 + ... + a_n x_n = d,

where x_1, ..., x_n are variables, a_1, ..., a_n in R are coefficients, and d in R is a constant. A linear system is a collection of one or more linear equations. An n-tuple (s_1, ..., s_n) is a solution if x_1 = s_1, ..., x_n = s_n solves the system.

Example.
(a) 3x_1 - 4x_2 + 6x_3 = 5: a linear equation.
(b) x_1 x_2 - 3x_3 = 4: a nonlinear equation.

Example. The system

    2x_1 + x_2 = 8,
    -2x_1 + 3x_2 = 16

has the solution (x_1, x_2) = (1, 6).

Theorem 1.2 (Gauss' Theorem). If a linear system is changed from one to another by the operations
(a) one equation is swapped with another (swapping),
(b) an equation is multiplied by a nonzero constant (rescaling),
(c) an equation is replaced by the sum of itself and a multiple of another (pivoting),
then the two systems have the same set of solutions.

Remark 1.3. These operations are known as the elementary reduction operations, row operations, or Gaussian operations.

Why do this? The idea is to convert a given system into an equivalent system which is easier to solve.

Example. Work the system

    3x_1 + 6x_2 = 12,
    x_1 - 2x_2 = 4,

which has the solution (4, 0).

Definition 1.4. In each row, the first variable with a nonzero coefficient is the row's leading variable. A system is in echelon form if each leading variable is to the right of the leading variable in the row above it.

Example. Upon using the Gaussian operations one has that

    x_1       - x_3 = 0            x_1 - x_3 = 0
    -3x_1 + x_2     = 1    -->         x_2   = 4
    x_1 - x_2 - x_3 = -4                 x_3 = 1.

The solution is (1, 4, 1).

Theorem 1.5. A linear system has either
(a) no solution (the system is inconsistent),
(b) a unique solution,
(c) an infinite number of solutions.
In the latter two cases the system is consistent. This can be illustrated graphically with systems of two equations in two unknowns: the two lines cross at a single point (unique solution), are parallel (inconsistent), or coincide (infinitely many solutions).

1.1.2 Describing the Solution Set

Definition 1.6. The variables in an echelon-form linear system which are not leading variables are free variables.

Example. For the system

    x_1 + 2x_2 - 4x_4 = 2,
    x_3 - 7x_4 = 8,

the leading variables are x_1, x_3, and the free variables are x_2, x_4. The solution is parameterized by the free variables via

    x_1 = 2 - 2x_2 + 4x_4,    x_3 = 8 + 7x_4,

so that the solution set is {(2 - 2x_2 + 4x_4, x_2, 8 + 7x_4, x_4) : x_2, x_4 in R}.
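Gauss' method above is easy to mechanize. The following is a minimal sketch (not part of the original notes; the helper name `gauss_solve` is ours), using exact rational arithmetic and assuming the system is square with a unique solution:

```python
from fractions import Fraction

def gauss_solve(aug):
    """Reduce an augmented matrix [A | b] to echelon form using the three
    Gaussian operations (swap, rescale, pivot), then back-substitute.
    Assumes a square system with a unique solution."""
    m = [[Fraction(x) for x in row] for row in aug]
    n = len(m)
    for i in range(n):
        # swapping: bring a row with a nonzero entry into the pivot position
        pivot = next(r for r in range(i, n) if m[r][i] != 0)
        m[i], m[pivot] = m[pivot], m[i]
        for r in range(i + 1, n):
            # pivoting: row r <- row r + (-m[r][i]/m[i][i]) * row i
            factor = -m[r][i] / m[i][i]
            m[r] = [a + factor * b for a, b in zip(m[r], m[i])]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        # back substitution through the echelon system
        s = m[i][-1] - sum(m[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / m[i][i]
    return x

# the system from the first example: 2x1 + x2 = 8, -2x1 + 3x2 = 16
print(gauss_solve([[2, 1, 8], [-2, 3, 16]]))  # -> [Fraction(1, 1), Fraction(6, 1)]
```

Running it on the 3-by-3 example above likewise returns the solution (1, 4, 1).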

Definition 1.7. An m x n matrix A is a rectangular array of numbers with m rows and n columns. If the numbers are real-valued, then we say that A = (a_{i,j}) in R^{m x n}, where a_{i,j} in R is the entry in row i and column j.

Example. Find the different entries for the matrix

    A = ( 1  3  4 )
        ( 2  6  3 ),

e.g., a_{1,2} = 3 and a_{2,1} = 2.

Definition 1.8. A vector (column vector) is a matrix with a single column, i.e., a in R^n := R^{n x 1}. A matrix with a single row is a row vector. The entries of a vector are its components.

Remark 1.9. For vectors we will use the notation a = (a_i) in R^n.

Definition 1.10. Consider two vectors u = (u_i), v = (v_i) in R^n. The algebraic operations are:
(a) vector sum: u + v = (u_i + v_i);
(b) scalar multiplication: r.v = (r v_i) for any r in R.

Consider the linear system

    x_1 - 3x_2 + 2x_3 = -8,
    x_1 - x_3 = -5,
    2x_2 + x_3 = 17.

Matrices associated with this system are

    ( 1 -3  2 )                         ( 1 -3  2 | -8 )
    ( 1  0 -1 )   (coefficient matrix), ( 1  0 -1 | -5 )   (augmented matrix).
    ( 0  2  1 )                         ( 0  2  1 | 17 )

The Gaussian operations can be performed on the augmented matrix to put the system into echelon form:

    ( 1 -3  2 | -8 )        ( 1 -3  2 | -8 )
    ( 1  0 -1 | -5 )   ->   ( 0  3 -3 |  3 )
    ( 0  2  1 | 17 )        ( 0  0  3 | 15 ).

Why do this? It eases the bookkeeping. The solution is s = (0, 6, 5), which can be written in vector notation as the column vector s = (0, 6, 5)^T.

Example. The solution to the system

    x_1 + 2x_2 - 4x_4 = 2,
    x_3 - 7x_4 = 8

is {(2 - 2x_2 + 4x_4, x_2, 8 + 7x_4, x_4) : x_2, x_4 in R}. Using vector notation yields

    { (2, 0, 8, 0)^T + x_2 (-2, 1, 0, 0)^T + x_4 (4, 0, 7, 1)^T : x_2, x_4 in R }.

It is clear that the system has infinitely many solutions.

1.1.3 General = Particular + Homogeneous

In the previous example it is seen that the solution has two parts: a particular solution, which depends upon the right-hand side, and a homogeneous solution, which is independent of the right-hand side. We will see that this feature holds for any linear system. Recall that equation j in a linear system has the form

    a_{j,1} x_1 + ... + a_{j,n} x_n = d_j.

Definition 1.11. A linear equation is homogeneous if it has a constant of zero. A linear system is homogeneous if all of the constants are zero.

Remark 1.12. A homogeneous system always has at least one solution, the zero vector.

Lemma 1.13. For any homogeneous linear system there exist vectors beta_1, ..., beta_k such that any solution of the system is of the form

    x = c_1 beta_1 + ... + c_k beta_k,    c_1, ..., c_k in R.

Here k is the number of free variables in an echelon form of the system.

Definition 1.14. The set {c_1 beta_1 + ... + c_k beta_k : c_1, ..., c_k in R} is the span of the vectors {beta_1, ..., beta_k}.

Proof: The augmented matrix for the system is of the form (A | 0), where A in R^{m x n}. Use Gauss' method to reduce the system to echelon form. Furthermore, use the Gauss-Jordan reduction discussed in Section 1.3 to put the system in reduced echelon form. The coefficient associated with each leading variable (the leading entry) will then be one, and there will be zeros above and below each leading entry in the reduced matrix. In row j the reduced system is then of the form

    x_{l_j} + a_{j,l_j+1} x_{l_j+1} + ... + a_{j,n} x_n = 0,

which can be rewritten as

    x_{l_j} = -a_{j,l_j+1} x_{l_j+1} - ... - a_{j,n} x_n.

Since the system is in echelon form, l_{j-1} < l_j < l_{j+1}. Suppose that the free variables are labelled as x_{f_1}, ..., x_{f_k}. Since the system is in reduced echelon form, one has that in the above equation a_{j,l_j+i} = 0 for any i such that x_{l_j+i} is a leading variable. Thus, after a renaming of the variables the above equation can be rewritten as

    x_{l_j} = beta_{j,f_1} x_{f_1} + ... + beta_{j,f_k} x_{f_k}.

The vectors beta_j, j = 1, ..., k, can now be constructed, and in vector form the solution is given by

    x = x_{f_1} beta_1 + ... + x_{f_k} beta_k.

Lemma 1.15. Let p be a particular solution for a linear system. The solution set is given by

    { p + h : h is a homogeneous solution }.

Proof: Let s be any solution, and set h = s - p. In row j we have that

    a_{j,1}(s_1 - p_1) + ... + a_{j,n}(s_n - p_n)
      = (a_{j,1} s_1 + ... + a_{j,n} s_n) - (a_{j,1} p_1 + ... + a_{j,n} p_n) = d_j - d_j = 0,

so that h = s - p solves the homogeneous system. Now take a vector of the form p + h, where p is a particular solution and h is a homogeneous solution. Similar to above,

    a_{j,1}(p_1 + h_1) + ... + a_{j,n}(p_n + h_n)
      = (a_{j,1} p_1 + ... + a_{j,n} p_n) + (a_{j,1} h_1 + ... + a_{j,n} h_n) = d_j + 0 = d_j,

so that s = p + h solves the system.
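The decomposition general = particular + homogeneous can be spot-checked numerically. A small sketch (our own illustration, not from the notes; `residual` is a hypothetical helper) for the system x_1 + 2x_2 - 4x_4 = 2, x_3 - 7x_4 = 8 and the particular and homogeneous pieces found earlier:

```python
# Check "general = particular + homogeneous" for the system
#   x1 + 2x2 - 4x4 = 2,   x3 - 7x4 = 8.
A = [[1, 2, 0, -4], [0, 0, 1, -7]]
d = [2, 8]
p = [2, 0, 8, 0]        # particular solution (free variables x2 = x4 = 0)
beta1 = [-2, 1, 0, 0]   # homogeneous solution attached to x2
beta2 = [4, 0, 7, 1]    # homogeneous solution attached to x4

def residual(x):
    """Left-hand side minus right-hand side, row by row."""
    return [sum(a * xi for a, xi in zip(row, x)) - di for row, di in zip(A, d)]

for c1 in range(-2, 3):
    for c2 in range(-2, 3):
        x = [pi + c1 * b1 + c2 * b2 for pi, b1, b2 in zip(p, beta1, beta2)]
        assert residual(x) == [0, 0]
print("p + c1*beta1 + c2*beta2 solves the system for all tested c1, c2")
```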

Remark 1.16. While a homogeneous solution always exists, this is not the case for a particular solution.

Example. Consider the linear system

    x_1 + x_3 + x_4 = 1,
    2x_1 - x_2 + x_4 = -3,
    x_1 + x_2 + 3x_3 + 2x_4 = b.

Gauss' method reduces the system to

    x_1 + x_3 + x_4 = 1,
    x_2 + 2x_3 + x_4 = 5,
    0 = b - 6.

If b != 6, then no particular solution exists. Otherwise, one has that p = (1, 5, 0, 0)^T, and the homogeneous solution is given by

    h = x_3 (-1, -2, 1, 0)^T + x_4 (-1, -1, 0, 1)^T,    x_3, x_4 in R.

If b = 6, the solution set is given by x = p + h.

Definition 1.17. A square matrix is nonsingular if it is the coefficient matrix for a homogeneous system with the unique solution x = 0. Otherwise, it is singular.

Remark 1.18. In order for a square matrix to be nonsingular, it must be true that the row-reduced matrix has no free variables. Consider two examples: a matrix A whose row reduction contains a zero row (so that there is a free variable), and a matrix B which row-reduces to the identity. The homogeneous system associated with A has infinitely many solutions, whereas the one associated with B has only one.

1.3 Reduced Echelon Form

1.3.1 Gauss-Jordan Reduction

Definition 1.19. A matrix is in reduced echelon form if
(a) it is in echelon form,
(b) each leading entry is a one,
(c) each leading entry is the only nonzero entry in its column.

Definition 1.20. The Gauss-Jordan reduction is the process of putting a matrix into reduced echelon form.

Example. Consider the augmented matrix

    ( 1  2  3 |  4 )             ( 1  2  3 | 4 )             ( 1  0 -1 | -2 )
    ( 5  6  7 |  8 )   echelon   ( 0  1  2 | 3 )   reduced   ( 0  1  2 |  3 )
    ( 9 10 11 | 12 )     ->      ( 0  0  0 | 0 )   echelon   ( 0  0  0 |  0 ).
                                                     ->

The solution is then given by

    x = (-2, 3, 0)^T + c (1, -2, 1)^T,    c in R.
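Gauss-Jordan reduction is likewise mechanical. A sketch in Python (our helper `rref`, not from the notes), applied to the augmented matrix of the example above:

```python
from fractions import Fraction

def rref(mat):
    """Gauss-Jordan reduction: return the reduced echelon form of mat."""
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue                        # no leading entry in this column
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [a / m[r][c] for a in m[r]]  # make the leading entry a one
        for i in range(rows):
            if i != r and m[i][c] != 0:     # clear the rest of the column
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

# the augmented matrix from the example above;
# result rows: [1 0 -1 -2], [0 1 2 3], [0 0 0 0] (as Fractions)
print(rref([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]))
```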

Remark 1.21. The above coefficient matrix is singular.

Definition 1.22. Two matrices are row equivalent if they can be row-reduced to a common third matrix by the elementary row operations.

Example. This can be written as A -> C <- B, i.e., from above

    A = ( 1  2  3  4 )    B = ( 1  0 -1 -2 )    C = ( 1  2  3  4 )
        ( 5  6  7  8 ),       ( 0  1  2  3 ),       ( 0  1  2  3 )
        ( 9 10 11 12 )        ( 0  0  0  0 )        ( 0  0  0  0 ).

Remark 1.23.
(a) Elementary row operations are reversible.
(b) If two coefficient matrices are row equivalent, then the associated homogeneous linear systems have the same solution set.

1.3.2 Row Equivalence

Definition 1.24. A linear combination of the vectors x_1, ..., x_n is an expression of the form

    sum_{i=1}^n c_i x_i = c_1 x_1 + ... + c_n x_n,    where c_1, ..., c_n in R.

Remark 1.25. The span of the set {x_1, ..., x_n} is the set of all linear combinations of the vectors.

Lemma 1.26 (Linear Combination Lemma). A linear combination of linear combinations is a linear combination.

Proof: Let the linear combinations sum_i c_{1,i} x_i through sum_i c_{m,i} x_i be given, and consider the new linear combination

    d_1 ( sum_{i=1}^n c_{1,i} x_i ) + ... + d_m ( sum_{i=1}^n c_{m,i} x_i ).

Multiplying out and regrouping yields

    ( sum_{i=1}^m d_i c_{i,1} ) x_1 + ... + ( sum_{i=1}^m d_i c_{i,n} ) x_n,

which is again a linear combination of x_1, ..., x_n.

Corollary 1.27. If two matrices are row equivalent, then each row of the second is a linear combination of the rows of the first.

Proof: The idea is that one row-reduces a matrix by taking linear combinations of rows. If A and B are row equivalent to C, then each row of C is some linear combination of the rows of A and another linear combination of the rows of B. Since row-reduction is reversible, one can also say that the rows of B are linear combinations of the rows of C. By the Linear Combination Lemma one then gets that the rows of B are linear combinations of the rows of A.

Definition 1.28. The form of an m x n matrix is the sequence <l_1, ..., l_m>, where l_i is the column number of the leading entry in row i, and l_i = 0 if row i has no leading entry (i.e., it is a zero row).

Example. If

    A = ( 1  2  4 )
        ( 0  0  3 )
        ( 0  0  0 ),

then the form is <1, 3, 0>.

Lemma 1.29. If two echelon form matrices are row equivalent, then their forms are equal sequences.

Remark 1.30. For a counterexample to an "if and only if" statement, consider

    A = ( 1  0  2 )    B = ( 1  0  0 )
        ( 0  1  0 ),       ( 0  1  2 )
        ( 0  0  0 )        ( 0  0  0 ).

Both have the form <1, 2, 0>, yet the matrices are clearly not row equivalent.

Proof: Let B, D in R^{m x n} be row equivalent. Let the form associated with B be given by <l_1, ..., l_m>, and let the form associated with D be given by <k_1, ..., k_m>. Let the rows of B be denoted by beta_1, ..., beta_m, with beta_j = (beta_{j,1}, ..., beta_{j,n}), and the rows of D by delta_1, ..., delta_m, with delta_j = (delta_{j,1}, ..., delta_{j,n}). We need to show that l_i = k_i for each i. Let us first show that it holds for i = 1; the rest will follow by an induction argument. If beta_1 is the zero row, then since B is in echelon form the matrix B is the zero matrix. By the above corollary this implies that D is also the zero matrix, so we are then done. Therefore, assume that beta_1 and delta_1 are not zero rows. Since B and D are row equivalent, we then have that

    beta_1 = s_1 delta_1 + s_2 delta_2 + ... + s_m delta_m,

or in particular,

    beta_{1,j} = sum_{i=1}^m s_i delta_{i,j};    beta_{1,l_1} = sum_{i=1}^m s_i delta_{i,l_1}.

By the definition of the form we have that beta_{1,j} = 0 if j < l_1, and beta_{1,l_1} != 0. Similarly, delta_{i,j} = 0 if j < k_1, and delta_{1,k_1} != 0. If l_1 < k_1, then the right-hand side of the above equation is zero. Since beta_{1,l_1} != 0, this then clearly implies that l_1 >= k_1. Writing delta_1 as a linear combination of beta_1, ..., beta_m and using the same argument as above shows that l_1 <= k_1; hence, l_1 = k_1.

Corollary 1.31. Any two echelon forms of a matrix have the same free variables, and consequently the same number of free variables.

Lemma 1.32. Each matrix is row equivalent to a unique reduced echelon form matrix.

Example. Suppose that

    A -> ( 1  0 -3 )    B -> ( 1  0  2 )
         ( 0  1  2 ),        ( 0  1 -3 )
         ( 0  0  0 )         ( 0  0  0 ).

(a) Are A and B row equivalent? No, since their reduced echelon forms differ.
(b) Is either matrix nonsingular? No, since each reduction contains a zero row.

2 Vector Spaces

2.1 Definition of Vector Space

2.1.1 Definition and Examples

Definition 2.1. A vector space over R consists of a set V along with the two operations + and . such that
(a) if u, v, w in V, then v + w in V and
    (1) v + w = w + v,
    (2) (v + w) + u = v + (w + u),
    (3) there is a 0 in V such that v + 0 = v for all v in V (zero vector),
    (4) for each v in V there is a w in V such that v + w = 0 (additive inverse);
(b) if r, s in R and v, w in V, then r.v in V and
    (1) (r + s).v = r.v + s.v,
    (2) r.(v + w) = r.v + r.w,
    (3) (rs).v = r.(s.v),
    (4) 1.v = v.

Example. V = R^n with v + w = (v_i + w_i) and r.v = (r v_i).

Example. V = R^{m x n} with A + B = (a_{i,j} + b_{i,j}) and r.A = (r a_{i,j}).

Example. V = P_n = { sum_{i=0}^n a_i x^i : a_0, ..., a_n in R } with (p + q)(x) = p(x) + q(x) and (r.p)(x) = r p(x).

Remark 2.2. The set V = P = {p in P_n : n in N} is an infinite-dimensional vector space, whereas P_n is finite-dimensional.

Example. V = { sum_{i=0}^n a_i cos(i theta) : a_0, ..., a_n in R } with (f + g)(theta) = f(theta) + g(theta) and (r.f)(theta) = r f(theta).

Example. V = R^+ with x + y := xy and r.x := x^r. With the zero vector 0 := 1 this is a vector space.

Example. V = R^2 with

    v + w = (v_1 + w_1, v_2 + w_2),    r.v = (r v_1, v_2).

The answer is no, as the multiplicative identities such as (r + s).v = r.v + s.v are violated.

Example. V = R^2 with

    v + w = (v_1 + w_1, v_2 + w_2),    r.v = (r v_1, 0).

The answer is no, as there is no multiplicative identity: 1.v != v whenever v_2 != 0.

Example. The set {f : R -> R : f'' + f = 0} is a vector space, whereas the set {f : R -> R : f'' + f = 1} is not.

2.1.2 Subspaces and Spanning Sets

Definition 2.3. A subspace is a subset of a vector space that is itself a vector space under the inherited operations.

Remark 2.4. Any vector space V has the trivial subspace {0} and the vector space itself as subspaces. These are the improper subspaces. Any other subspaces are proper.
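The claim that R^+ with x + y := xy and r.x := x^r is a vector space can be spot-checked numerically. A small sketch (our own, not from the notes; floating-point equality is only checked approximately):

```python
import math

# R^+ with "vector sum" x (+) y = x*y and "scalar multiple" r (.) x = x**r.

def vadd(x, y):       # vector sum in this space
    return x * y

def smul(r, x):       # scalar multiplication in this space
    return x ** r

x, y, r, s = 2.0, 5.0, 3.0, -1.5
# (r + s).x = r.x (+) s.x
assert math.isclose(smul(r + s, x), vadd(smul(r, x), smul(s, x)))
# r.(x (+) y) = r.x (+) r.y
assert math.isclose(smul(r, vadd(x, y)), vadd(smul(r, x), smul(r, y)))
# the zero vector is 1, since x (+) 1 = x
assert vadd(x, 1.0) == x
print("the vector-space identities hold on these samples")
```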

Lemma 2.5. A nonempty S, a subset of V, is a subspace if x, y in S implies that r x + s y in S for any r, s in R.

Proof: By assumption the subset S is closed under vector addition and scalar multiplication. Since S is a subset of V, the operations in S inherit the same properties as those operations in V; hence, the set S is a subspace.

Example. Examples of subspaces are:
(a) S = {x in R^3 : x_1 + 2x_2 + 5x_3 = 0};
(b) S = {p in P_6 : p(3) = 0};
(c) S = {A in R^{n x n} : a_{i,j} = 0 for i > j} (upper triangular matrices);
(d) S = {A in R^{n x n} : a_{i,j} = 0 for i < j} (lower triangular matrices);
(e) if A in R^{n x n}, the trace of A, denoted trace(A), is given by trace(A) = sum_i a_{i,i}; the set S = {A in R^{n x n} : trace(A) = 0}.

Definition 2.6. If S = {x_1, ..., x_n}, the span of S will be denoted by [S].

Remark 2.7. From now on the multiplication r.x in V will be written r x, where the multiplication is assumed to be the one associated with V.

Lemma 2.8. The span of any nonempty subset is a subspace.

Proof: Let u, v in [S] be given. There then exist scalars such that

    u = sum_i r_i x_i,    v = sum_i s_i x_i.

For given scalars p, q in R one then sees that

    p u + q v = p ( sum_i r_i x_i ) + q ( sum_i s_i x_i ) = sum_i (p r_i + q s_i) x_i,

so that p u + q v in [S]. Hence, [S] is a subspace.

Example. The set of solutions to a homogeneous linear system with coefficient matrix A will be denoted by N(A), the null space of A. It has been previously shown that there is a set of vectors S = {beta_1, ..., beta_k} such that N(A) = [S]. Hence, N(A) is a subspace.

2.2 Linear Independence

2.2.1 Definition and Examples

Definition 2.9. The vectors v_1, ..., v_n in V are linearly independent if and only if the only solution to

    c_1 v_1 + ... + c_n v_n = 0

is c_1 = ... = c_n = 0. Otherwise, the vectors are linearly dependent.

Example.
(a) {v_1, v_2, v_3}, a subset of R^3, where

    v_1 = (1, -3, 1)^T,  v_2 = (-5, 8, -2)^T,  v_3 = (3, 5, -3)^T,

is a linearly dependent set, as 7 v_1 + 2 v_2 + v_3 = 0.
(b) {1 + x, 1 - x, -3x + x^2}, a subset of P_2, is a linearly independent set.
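The dependence in example (a) amounts to one arithmetic check. A sketch (our own, not from the notes; the vector entries are the ones used in the example above):

```python
# The vectors from example (a); the relation 7*v1 + 2*v2 + v3 = 0
# exhibits the linear dependence directly.
v1 = (1, -3, 1)
v2 = (-5, 8, -2)
v3 = (3, 5, -3)

combo = [7 * a + 2 * b + c for a, b, c in zip(v1, v2, v3)]
print(combo)  # -> [0, 0, 0], so {v1, v2, v3} is linearly dependent
```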

Lemma 2.10. If S is a subset of V and v in V is given, then [S] = [S union {v}] if and only if v in [S].

Proof: If [S] = [S union {v}], then since v in [S union {v}] one must have that v in [S]. Now suppose that v in [S], with S = {x_1, ..., x_n}, so that v = sum_i c_i x_i. If w in [S union {v}], then one can write

    w = d_0 v + sum_i d_i x_i,

which can be rewritten as w = sum_i (d_0 c_i + d_i) x_i in [S]. Hence, [S union {v}] is contained in [S]. It is clear that [S] is contained in [S union {v}].

Lemma 2.11. If S, a subset of V, is a linearly independent set, then for any v in V the set S union {v} is linearly independent if and only if v is not in [S].

Proof: Let S = {x_1, ..., x_n}. If v in [S], then v = sum_i c_i x_i, so that

    -v + sum_i c_i x_i = 0.

Hence, S union {v} is a linearly dependent set. Now suppose that S union {v} is a linearly dependent set. There then exist constants c_0, c_1, ..., c_n, some of which are nonzero, such that

    c_0 v + sum_i c_i x_i = 0.

If c_0 = 0, then sum_i c_i x_i = 0, which contradicts the fact that the set S is linearly independent. Since c_0 != 0, upon setting d_i = -c_i/c_0 one can then write v = sum_i d_i x_i, so that v in [S].

Corollary 2.12. Let S = {x_1, ..., x_n}, a subset of V, and define

    S_1 = {x_1},    S_j = S_{j-1} union {x_j},  j = 2, ..., n.

S is a linearly dependent set if and only if there is an l <= n such that the set S_{l-1} is a linearly independent set and S_l is a linearly dependent set.

Remark 2.13. In other words, x_l = sum_{i=1}^{l-1} c_i x_i.

Example. In a previous example we had S = {v_1, v_2, v_3} with S_2 = {v_1, v_2} being a linearly independent set and S_2 union {v_3} being a linearly dependent set. The above results imply that [S_2] = [S].

Theorem 2.14. Let S = {x_1, ..., x_n}, a subset of V. The set S has a linearly independent subset with the same span.

Proof: Suppose that S is not a linearly independent set. This implies that for some 2 <= l <= n one has x_l = sum_{i=1}^{l-1} c_i x_i. Now define

    S' = {x_1, ..., x_{l-1}, x_{l+1}, ..., x_n}.

Since S = S' union {x_l} and x_l in [S'], one has that [S] = [S']. When considering S', remove the next dependent vector (if it exists) from the set {x_{l+1}, ..., x_n}, and call this new set S''. Using the same reasoning as above, [S'] = [S''], so that [S] = [S'']. Continuing in this fashion and using an induction argument, we can then remove all of the linearly dependent vectors without changing the span.

2.3 Basis and Dimension

2.3.1 Basis

Definition 2.15. The set S = {beta_1, beta_2, ...} is a basis for V if
(a) the vectors are linearly independent,
(b) V = [S].

Remark 2.16. A basis will be denoted by <beta_1, beta_2, ...>.

Definition 2.17. Set e_j = (e_{j,i}) in R^n to be the vector which satisfies e_{j,i} = delta_{i,j} (the Kronecker delta). The standard basis for R^n is given by <e_1, ..., e_n>.

Remark 2.18. A basis is not unique. For example, one basis for R^2 is the standard basis, whereas another is

    < (1, 2)^T, (2, 5)^T >.

Example.
(a) Two bases for P_3 are <1, x, x^2, x^3> and <x, 1 + x, 1 + x + x^2, x + x^3>.
(b) Consider the subspace S of R^{2 x 2} which is given by

    S = {A in R^{2 x 2} : a_{1,1} + 2a_{2,2} = 0, a_{1,2} - 3a_{2,1} = 0}.

Since any B in S is of the form

    B = c_1 ( -2  0 ; 0  1 ) + c_2 ( 0  3 ; 1  0 ),

and the two above matrices are linearly independent, a basis for S is given by

    < ( -2  0 ; 0  1 ), ( 0  3 ; 1  0 ) >.

Lemma 2.19. The set S = {beta_1, ..., beta_n} is a basis if and only if each v in V can be expressed as a linear combination of the vectors in S in a unique manner.

Proof: If S is a basis, then by definition v in [S]. Suppose that v can be written in two different ways, i.e.,

    v = sum_i c_i beta_i,    v = sum_i d_i beta_i.

One clearly then has that sum_i (c_i - d_i) beta_i = 0, which, since the vectors in S are linearly independent, implies that c_i = d_i for i = 1, ..., n. Now suppose that V = [S]. Since 0 = sum_i 0.beta_i, and since vectors are expressed uniquely as linear combinations of the vectors of S, by definition the vectors in S are linearly independent. Hence, S is a basis.

Definition 2.20. Let B = <beta_1, ..., beta_n> be a basis for V. For a given v in V there are unique constants c_1, ..., c_n such that v = sum_i c_i beta_i. The representation of v with respect to B is given by

    Rep_B(v) := (c_1, c_2, ..., c_n)^T_B.

The constants c_1, ..., c_n are the coordinates of v with respect to B.

Example. For P_3, consider the two bases B = <1, x, x^2, x^3> and D = <x, 1 + x, 1 + x + x^2, x + x^3>. One then has that

    Rep_B(2x + x^3) = (0, 2, 0, 1)^T_B,    Rep_D(2x + x^3) = (1, 0, 0, 1)^T_D.

2.3.2 Dimension

Definition 2.21. A vector space is finite-dimensional if it has a basis with only finitely many vectors.

Example. An example of an infinite-dimensional space is P, which has as a basis <1, x, ..., x^n, ...>.

Definition 2.22. The transpose of a matrix A in R^{m x n}, denoted by A^T in R^{n x m}, is formed by interchanging the rows and columns of A.

Example.

    A = ( 1  4 )
        ( 2  5 )   =>   A^T = ( 1  2  3 )
        ( 3  6 )              ( 4  5  6 ).

Theorem 2.23. In any finite-dimensional vector space all of the bases have the same number of elements.

Proof: Let B = <beta_1, ..., beta_k> be one basis, and let D = <delta_1, ..., delta_l> be another basis. Suppose that k > l. For each i = 1, ..., k we can write beta_i = sum_j a_{i,j} delta_j, which yields a matrix A in R^{k x l}. Now let v in V be given. Since B and D are bases, there are unique vectors Rep_B(v) = v^B = (v^B_i) in R^k and Rep_D(v) = v^D = (v^D_j) in R^l such that

    v = sum_{i=1}^k v^B_i beta_i = sum_{j=1}^l v^D_j delta_j.

The above can be rewritten as

    sum_{j=1}^l ( sum_{i=1}^k a_{i,j} v^B_i ) delta_j = sum_{j=1}^l v^D_j delta_j.

This then implies that the vector v^B is a solution to the linear system with the augmented matrix (A^T | v^D), where A^T = (a_{j,i}) in R^{l x k} is the transpose of A. Since l < k, when A^T is row-reduced it will have free variables, which implies that the linear system has an infinite number of solutions. This contradiction yields that k <= l. If k < l, then by writing delta_i = sum_j c_{i,j} beta_j and using the above argument one gets that k >= l. Hence, k = l.

Definition 2.24. The dimension of a vector space V, dim(V), is the number of basis vectors.

Example.
(a) Since the standard basis for R^n has n vectors, dim(R^n) = n.
(b) Since a basis for P_n is <1, x, ..., x^n>, one has that dim(P_n) = n + 1.
(c) Recall that the subspace

    S = {A in R^{2 x 2} : a_{1,1} + 2a_{2,2} = 0, a_{1,2} - 3a_{2,1} = 0}

has a basis < ( -2  0 ; 0  1 ), ( 0  3 ; 1  0 ) >. This implies that dim(S) = 2.

Corollary 2.25. No linearly independent set can have more vectors than dim(V).

Proof: Suppose that S = {x_1, ..., x_n} is a linearly independent set with n > dim(V). Since S is a subset of V one has that [S] is a subspace of V. Furthermore, since the set is linearly independent, dim([S]) = n. This implies that dim(V) >= n, which is a contradiction.

Corollary 2.26. Any linearly independent set S can be expanded to make a basis.

Proof: Suppose that dim(V) = n, and that dim([S]) = k < n. There then exist n - k linearly independent vectors v_1, ..., v_{n-k} such that v_i is not in [S]. The set S' = S union {v_1, ..., v_{n-k}} is a linearly independent set with dim([S']) = n. As a consequence, S' is a basis for V.

Corollary 2.27. If dim(V) = n, then a set of n vectors S = {x_1, ..., x_n} is linearly independent if and only if V = [S].

Proof: Suppose that V = [S]. If the vectors are not linearly independent, then (upon a possible reordering) there is an l < n such that for S' = {x_1, ..., x_l} one has [S] = [S'] with the vectors in S' being linearly independent. This implies that V = [S'], and that dim(V) = l < n. Now suppose that the vectors are linearly independent. If [S] != V, then there is at least one vector v such that S union {v} is linearly independent with [S union {v}] = V. This implies that dim(V) >= n + 1.

Remark 2.28. Put another way, the above corollary states that if dim(V) = n and the set S = {x_1, ..., x_n} is linearly independent, then S is a spanning set for V.

2.3.3 Vector Spaces and Linear Systems

Definition 2.29. The null space of a matrix A, denoted by N(A), is the set of all solutions to the homogeneous system for which A is the coefficient matrix.

Example. Consider

    A = (  1  2  3 )
        (  4  5  6 )
        (  7  8  9 )
        ( 10 11 12 ).

A basis for N(A) is given by < (1, -2, 1)^T >.

Definition 2.30. The row space of a matrix A, denoted by Rowspace(A), is the span of the set of the rows of A. The row rank is the dimension of the row space.

Example. If

    A = ( 1  0 )
        ( 0  2 ),

then Rowspace(A) = [{ (1  0), (0  2) }], and the row rank is 2.

Lemma 2.31. The nonzero rows of an echelon form matrix make up a linearly independent set.

Proof: We have already seen that in an echelon form matrix no nonzero row is a linear combination of the other rows.

Corollary 2.32. Suppose that a matrix A has been put in echelon form. The nonzero rows of the echelon form matrix are a basis for Rowspace(A).

Proof: If A -> B, where B is in echelon form, then it is known that each row of A is a linear combination of the rows of B. The converse is also true; hence, Rowspace(A) = Rowspace(B). Since the nonzero rows of B are linearly independent, they form a basis for Rowspace(B), and hence for Rowspace(A).

Definition 2.33. The column space of a matrix A, denoted by R(A), is the span of the set of the columns of A. The column rank is the dimension of the column space.

Remark 2.34. A basis for R(A) is found by determining Rowspace(A^T), and the column rank of A is the dimension of Rowspace(A^T).
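The rank and nullity of a matrix can be read off from its reduced echelon form. A sketch (our helper `rank_and_nullity`, not from the notes), run on the 4 x 3 matrix of the null-space example above:

```python
from fractions import Fraction

def rank_and_nullity(mat):
    """Row-reduce, then count leading entries (rank) and free columns (nullity)."""
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    lead = []
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue                        # column c holds a free variable
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [a / m[r][c] for a in m[r]]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        lead.append(c)
        r += 1
    return len(lead), cols - len(lead)

# rank 2, so dim N(A) = 3 - 2 = 1, matching the basis <(1, -2, 1)>
print(rank_and_nullity([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]))  # -> (2, 1)
```

The two returned numbers always sum to the number of columns, which is the content of the rank theorem above.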

Example. Consider

    A = ( 1  2  3 )         A^T = ( 1  2 )
        ( 2  5  8 ),              ( 2  5 )
                                  ( 3  8 ).

A basis for Rowspace(A) is <(1  2  3), (0  1  2)>, and a basis for R(A) is <(1, 2)^T, (0, 1)^T>.

Theorem 2.35. The row rank and column rank of a matrix are equal.

Proof: First, let us note that row operations do not change the column rank of a matrix. If A = (a_1 ... a_n) in R^{m x n}, where each column a_i is in R^m, then finding the set of homogeneous solutions for the linear system with coefficient matrix A is equivalent to solving

    c_1 a_1 + ... + c_n a_n = 0.

Row operations leave unchanged the set of solutions (c_1, ..., c_n); hence, the linear independence of the vectors is unchanged, and the dependence of one vector on the others remains unchanged. Now bring the matrix to reduced echelon form, so that each column with a leading entry is one of the e_i's from the standard basis. The row rank is equal to the number of rows with leading entries. The column rank of the reduced matrix is equal to that of the original matrix. It is clear that the column rank of the reduced matrix is also equal to the number of leading entries.

Definition 2.36. The rank of a matrix A, denoted by rank(A), is its row rank.

Remark 2.37. Note that the above statements imply that rank(A) = rank(A^T).

Theorem 2.38. If A in R^{m x n}, then rank(A) + dim(N(A)) = n.

Proof: Put the matrix A in reduced echelon form. One has that rank(A) is the number of leading entries, and that dim(N(A)) is the number of free variables. It is clear that these numbers sum to n.

2.3.4 Combining Subspaces

Definition 2.39. If W_1, ..., W_k are subspaces of V, then their sum is given by

    W_1 + ... + W_k = [W_1 union ... union W_k].

Let a basis for W_j be given by <w_{1,j}, ..., w_{l(j),j}>. If v in W_1 + ... + W_k, then this implies there are constants c_{i,j} such that

    v = sum_{i=1}^{l(1)} c_{i,1} w_{i,1} + ... + sum_{i=1}^{l(k)} c_{i,k} w_{i,k}.

Note that sum_i c_{i,j} w_{i,j} in W_j. For example, when considering P_3 suppose that a basis for W_1 is <1, 1 + x^2>, and that a basis for W_2 is <x, 1 + x, x^3>. If p in W_1 + W_2, then

    p(x) = c_1 + c_2 (1 + x^2) + d_1 x + d_2 (1 + x) + d_3 x^3.

Q: How does the dimension of each W_i relate to the dimension of W_1 + ... + W_k? In the above example, dim(W_1) = 2, dim(W_2) = 3, but dim(P_3) = dim(W_1 + W_2) = 4. Thus, for this example dim(W_1 + W_2) != dim(W_1) + dim(W_2).

Definition 2.40. A collection of subspaces {W_1, ..., W_k} is independent if for each i = 1, ..., k,

    W_i intersect ( sum_{j != i} W_j ) = {0}.

Example. Suppose that

    W_1 = [ (1, 0, 0)^T ],  W_2 = [ (0, 1, 0)^T ],  W_3 = [ (1, 1, 0)^T, (0, 0, 1)^T ].

It is clear that W_i intersect W_j = {0} for i != j. However, the subspaces are not independent, as

    W_3 intersect (W_1 + W_2) = [ (1, 1, 0)^T ].

Definition 2.41. A vector space V is the direct sum of the subspaces W_1, ..., W_k if
(a) the subspaces are independent,
(b) V = W_1 + ... + W_k.
In this case we write V = W_1 (+) ... (+) W_k.

Example.
(a) R^n = [e_1] (+) [e_2] (+) ... (+) [e_n].
(b) P_n = [1] (+) [x] (+) ... (+) [x^n].

Lemma 2.42. If V = W_1 (+) ... (+) W_k, then dim(V) = sum_i dim(W_i).

Example. In a previous example we had that R^3 = W_1 + W_2 + W_3, with

    dim(W_1) = dim(W_2) = 1,  dim(W_3) = 2.

Thus, the sum cannot be direct.

Proof: First show that the result is true for k = 2, and then use induction to prove the general result. Let a basis for W_1 be given by <beta_1, ..., beta_k>, and a basis for W_2 be given by <delta_1, ..., delta_l>. This yields that dim(W_1) = k and dim(W_2) = l. Since W_1 intersect W_2 = {0}, the set {beta_1, ..., beta_k, delta_1, ..., delta_l} is linearly independent, and forms a basis for [W_1 union W_2]. Since V = [W_1 union W_2], this then yields that a basis for V is <beta_1, ..., beta_k, delta_1, ..., delta_l>; thus, dim(V) = k + l = dim(W_1) + dim(W_2).

Definition 2.43. If V = W_1 (+) W_2, then the subspaces W_1 and W_2 are said to be complements.

Definition 2.44. For vectors u, v in R^n, define the dot product (or inner product) to be

    u . v = sum_{i=1}^n u_i v_i.

The dot product has the properties that
(a) u . v = v . u,
(b) (a u + b v) . w = a (u . w) + b (v . w),
(c) u . u >= 0, with u . u = 0 if and only if u = 0.

Definition 2.45. If U, a subspace of R^n, is given, define the orthocomplement of U to be

    U^perp = { v in R^n : v . u = 0 for all u in U }.

Proposition 2.46. U^perp is a subspace.

Proof: Let v, w ∈ U^⊥. Since

(a v + b w) · u = a (v · u) + b (w · u) = 0

for any u ∈ U, this implies that a v + b w ∈ U^⊥. Hence U^⊥ is a subspace.

Example: If U = [e_1] ⊂ R^2, then U^⊥ = [e_2], and if U = [e_1, e_2] ⊂ R^3, then U^⊥ = [e_3].

Remark 2.47: If A = (a_1 ... a_n) ∈ R^(m×n), recall that R(A) = [a_1, ..., a_n]. Thus, b ∈ R(A) if and only if b = c_1 a_1 + ... + c_n a_n for some constants c_i. Furthermore, N(A) = {x = (x_i) : x_1 a_1 + ... + x_n a_n = 0}.

Theorem 2.48: If A ∈ R^(m×n), then N(A^T) = R(A)^⊥.

Proof: Suppose that x ∈ N(A^T), so that x · a_i = 0 for i = 1, ..., n. As a consequence, x · (c_1 a_1 + ... + c_n a_n) = 0, so that x ∈ R(A)^⊥. Hence, N(A^T) ⊂ R(A)^⊥. Similarly, if y ∈ R(A)^⊥, one gets that y ∈ N(A^T), so that R(A)^⊥ ⊂ N(A^T).

Remark 2.49: Alternatively, one has that N(A) = R(A^T)^⊥.

Theorem 2.50: If U ⊂ R^n is a subspace, then R^n = U ⊕ U^⊥.

Proof: Let a basis for U be given by β_1, ..., β_k, and set A = (β_1 ... β_k) ∈ R^(n×k). By construction, rank(A) = k (and rank(A^T) = k). By the previous theorem, we have that U^⊥ = N(A^T). Since dim(N(A^T)) + rank(A^T) = n, we get that dim(U^⊥) = n − k. We must now show that U ∩ U^⊥ = {0}. Let δ ∈ U ∩ U^⊥, which implies that δ = c_1 β_1 + ... + c_k β_k. By using the linearity of the inner product,

δ · δ = c_1 (β_1 · δ) + ... + c_k (β_k · δ) = 0,

so that δ = 0.

Remark 2.51: A consequence of the above theorem is that (U^⊥)^⊥ = U.

Example: Suppose that a basis for U is β_1, ..., β_k. The above theorem shows us how to compute a basis for U^⊥: simply construct the matrix A = (β_1 ... β_k), and then find a basis for N(A^T) = U^⊥. For example, if U = [a_1, a_2] ⊂ R^4, then U^⊥ = N(A^T) = [δ_1, δ_2], where δ_1, δ_2 are found by row-reducing A^T. [The numerical entries of this example were lost.]

Corollary 2.52: Consider a linear system whose associated augmented matrix is (A | b). The system is consistent if and only if b ⊥ N(A^T).

Proof: If A = (a_1 ... a_n), then the system is consistent if and only if b ∈ R(A), i.e., b = c_1 a_1 + ... + c_n a_n. By the above theorem, R(A) = N(A^T)^⊥.

Example: As a consequence, the system is consistent if and only if b · δ_i = 0 for each i, where δ_1, ..., δ_k is a basis for N(A^T). For example, suppose that A is as in the previous example. Then for b = (b_i) ∈ R^4, the associated linear system will be consistent if and only if

δ_1 · b = 0, δ_2 · b = 0.

In other words, the components of the vector b must satisfy a homogeneous linear system of two equations, which implies that b ∈ S = [b̄_1, b̄_2] for appropriate vectors b̄_1, b̄_2. Note that S = R(A), so that a basis for R(A) is b̄_1, b̄_2. Further note that this is consistent with the reduced echelon form of A^T. [The numerical entries of this example were lost.]
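The consistency test b · δ_i = 0 can be illustrated on a small hypothetical system (the 3×2 matrix below is an assumed example for illustration, not the garbled one from the notes):

```python
# Hypothetical 3x2 coefficient matrix: columns a1 = (1,0,1), a2 = (0,1,1).
A = [[1, 0], [0, 1], [1, 1]]
delta = [1, 1, -1]            # basis vector for N(A^T), found by solving A^T d = 0

# verify delta really lies in N(A^T): each column of A is orthogonal to delta
AT_delta = [sum(A[i][j] * delta[i] for i in range(3)) for j in range(2)]
assert AT_delta == [0, 0]

dot = lambda u, v: sum(x * y for x, y in zip(u, v))
b_good = [1, 2, 3]            # equals a1 + 2*a2, so (A | b_good) is consistent
b_bad = [1, 1, 1]             # not orthogonal to delta, so inconsistent
assert dot(delta, b_good) == 0
assert dot(delta, b_bad) != 0
```

The single dot product replaces a full row reduction of the augmented matrix.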

3 Maps Between Spaces

3.1 Isomorphisms

3.1.1 Definition and Examples

Definition 3.1: Let V and W be vector spaces. A map f : V → W is one-to-one if v_1 ≠ v_2 implies that f(v_1) ≠ f(v_2). The map is onto if for each w ∈ W there is a v ∈ V such that f(v) = w.

Example:
(a) The map f : P_n → R^(n+1) given by

a_0 + a_1 x + ... + a_n x^n ↦ (a_0, a_1, ..., a_n)^T

is one-to-one and onto.
(b) The map f : R^(2×2) → R^4 given by

(a b; c d) ↦ (a, b, c, d)^T

is one-to-one and onto.

Definition 3.2: The map f : V → W is an isomorphism if
(a) f is one-to-one and onto, and
(b) f is linear, i.e., f(v_1 + v_2) = f(v_1) + f(v_2) and f(r v) = r f(v) for any r ∈ R.
We write V ≅ W, and say that V is isomorphic to W.

Remark 3.3: If V ≅ W, then we can think of V and W as being "the same."

Example:
(a) In the above examples it is easy to see that the maps are linear. Hence, P_n ≅ R^(n+1) and R^(2×2) ≅ R^4.
(b) In general, R^(m×n) ≅ R^(mn).

Definition 3.4: If f : V → V is an isomorphism, then we say that f is an automorphism.

Example:
(a) The dilation map d_s : R^2 → R^2 given by d_s(v) = s v for some nonzero s ∈ R is an automorphism.
(b) The rotation map t_θ : R^2 → R^2 given by

t_θ(v) = (cos θ v_1 − sin θ v_2, sin θ v_1 + cos θ v_2)^T

is an automorphism.

Lemma 3.5: If f : V → W is linear, then f(0) = 0.

Proof: Since f is linear, f(0) = f(0 · v) = 0 · f(v) = 0.

Lemma 3.6: The statement that f : V → W is linear is equivalent to

f(c_1 v_1 + ... + c_n v_n) = c_1 f(v_1) + ... + c_n f(v_n).

Proof: Proof by induction. By definition the statement holds for n = 1, so now suppose that it holds for n = N. This yields

f(c_1 v_1 + ... + c_N v_N + c_{N+1} v_{N+1}) = f(c_1 v_1 + ... + c_N v_N) + f(c_{N+1} v_{N+1}) = c_1 f(v_1) + ... + c_N f(v_N) + c_{N+1} f(v_{N+1}).

Definition 3.7: Let U, V be vector spaces. The external direct sum, W = U ⊕ V, is defined by

W = {(u, v) : u ∈ U, v ∈ V},

along with the operations

w_1 + w_2 = (u_1 + u_2, v_1 + v_2), r w = (r u, r v).

Lemma 3.8: The external direct sum W = U ⊕ V is a vector space. Furthermore, dim(W) = dim(U) + dim(V).

Proof: It is easy to check that W is a vector space. Let S_U = {u_1, ..., u_k} be a basis for U, and let S_V = {v_{k+1}, ..., v_l} be a basis for V. Given a w = (u, v) ∈ W, it is clear that one can write w = (c_1 u_1 + ... + c_k u_k, d_{k+1} v_{k+1} + ... + d_l v_l); hence, a potential basis for W is

w_i = (u_i, 0) for i = 1, ..., k; w_i = (0, v_i) for i = k + 1, ..., l.

We need to check that the vectors w_1, ..., w_l are linearly independent. Writing c_1 w_1 + ... + c_l w_l = 0 is equivalent to the equations

c_1 u_1 + ... + c_k u_k = 0, c_{k+1} v_{k+1} + ... + c_l v_l = 0.

Since S_U and S_V are bases, the only solution is c_i = 0 for all i.

Example: A basis for P_2 ⊕ R^2 is given by

(1, 0), (x, 0), (x^2, 0), (0, e_1), (0, e_2),

and P_2 ⊕ R^2 ≅ R^5 via the isomorphism

(a_0 + a_1 x + a_2 x^2, c_1 e_1 + c_2 e_2) ↦ (a_0, a_1, a_2, c_1, c_2)^T.
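The isomorphism at the end of the example can be written out directly; a small sketch checking linearity of the map (a_0 + a_1 x + a_2 x^2, c_1 e_1 + c_2 e_2) ↦ (a_0, a_1, a_2, c_1, c_2)^T, with polynomials and R^2 vectors stored as coefficient tuples:

```python
# Pairs (polynomial, vector) are flattened into R^5 coordinate lists.
def f(p, c):
    return list(p) + list(c)

def add(u, v):
    return [x + y for x, y in zip(u, v)]

p1, c1 = (1, 0, 2), (3, -1)
p2, c2 = (0, 4, 1), (2, 2)

# additivity: f applied to the sum equals the sum of the images
assert f(add(p1, p2), add(c1, c2)) == add(f(p1, c1), f(p2, c2))
# homogeneity, with r = 5
r = 5
assert f([r * x for x in p1], [r * x for x in c1]) == [r * x for x in f(p1, c1)]
```

One-to-one and onto are clear, since the five coordinates can be read back off uniquely.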

3.1.2 Dimension Characterizes Isomorphism

Note that in all of the examples up to this point, if U ≅ V, then it was true that dim(U) = dim(V). The question: does the dimension of two vector spaces say anything about whether or not they are isomorphic?

Lemma 3.9: If V ≅ W, then dim(V) = dim(W).

Proof: Let f : V → W be an isomorphism. Let β_1, ..., β_n be a basis for V, and consider the set S_W = {f(β_1), ..., f(β_n)}. First, the set is linearly independent, as

0 = c_1 f(β_1) + ... + c_n f(β_n) = f(c_1 β_1 + ... + c_n β_n)

implies that c_1 β_1 + ... + c_n β_n = 0 (f is one-to-one), which further implies that c_i = 0 for all i. Since f is onto, for each w ∈ W there is a v ∈ V such that f(v) = w. Upon writing v = d_1 β_1 + ... + d_n β_n and using the linearity of the function f we get that

w = d_1 f(β_1) + ... + d_n f(β_n).

Hence, S_W is a basis, and we then have the result.

Lemma 3.10: If dim(V) = dim(W), then the two spaces are isomorphic.

Proof: It will be enough to show that if dim(V) = n, then V ≅ R^n. A similar result will yield W ≅ R^n, which would then yield V ≅ R^n ≅ W, i.e., V ≅ W. Let B = β_1, ..., β_n be a basis for V, and consider the map Rep_B : V → R^n given by

Rep_B(v) = (c_1, c_2, ..., c_n)^T, v = c_1 β_1 + ... + c_n β_n.

The map is clearly linear. The map is one-to-one, for if Rep_B(u) = Rep_B(v) with u = c_1 β_1 + ... + c_n β_n and v = d_1 β_1 + ... + d_n β_n, then c_i = d_i for all i, which implies that u = v. Finally, the map is clearly onto. Hence, Rep_B is an isomorphism, so that V ≅ R^n.

Theorem 3.11: V ≅ W if and only if dim(V) = dim(W).

Corollary 3.12: If dim(V) = k, then V ≅ R^k.

Example (cont.):
(a) Since dim(R^(m×n)) = mn, R^(m×n) ≅ R^(mn).
(b) Since dim(P_n) = n + 1, P_n ≅ R^(n+1).

3.2 Homomorphisms

3.2.1 Definition

Definition 3.13: If h : V → W is linear, then it is a homomorphism.

Example:

(a) The projection map π : R^3 → R^2 given by

π((x_1, x_2, x_3)^T) = (x_1, x_2)^T

is a homomorphism. However, it is not an isomorphism, as the map is not one-to-one, i.e., π(r e_3) = 0 for any r ∈ R.

(b) The derivative map d/dx : P_n → P_n given by

d/dx (a_0 + a_1 x + ... + a_n x^n) = a_1 + 2 a_2 x + ... + n a_n x^(n−1)

is a homomorphism. However, it is not an isomorphism, as the map is not one-to-one, i.e., d/dx(a_0) = 0 for any a_0 ∈ R.

Definition 3.14: If h : V → V, then it is called a linear transformation.

Theorem 3.15: Let v_1, ..., v_n be a basis for V, and let {w_1, ..., w_n} ⊂ W be given. There exists a unique homomorphism h : V → W such that h(v_j) = w_j for j = 1, ..., n.

Proof: Set h : V → W to be the map given by

h(c_1 v_1 + ... + c_n v_n) = c_1 w_1 + ... + c_n w_n.

The map is linear, for if u_1 = c_1 v_1 + ... + c_n v_n and u_2 = d_1 v_1 + ... + d_n v_n, then

h(r_1 u_1 + r_2 u_2) = h((r_1 c_1 + r_2 d_1) v_1 + ... + (r_1 c_n + r_2 d_n) v_n) = (r_1 c_1 + r_2 d_1) w_1 + ... + (r_1 c_n + r_2 d_n) w_n = r_1 h(u_1) + r_2 h(u_2).

The map is unique, for if g : V → W is a homomorphism such that g(v_i) = w_i, then

g(v) = g(c_1 v_1 + ... + c_n v_n) = c_1 g(v_1) + ... + c_n g(v_n) = c_1 w_1 + ... + c_n w_n = h(v);

hence, g(v) = h(v) for all v ∈ V, so that they are the same map.

Example:
(a) The rotation map t_θ : R^2 → R^2 is an automorphism which satisfies

t_θ(e_1) = (cos θ, sin θ)^T, t_θ(e_2) = (−sin θ, cos θ)^T.

One then has

t_θ(v) = v_1 (cos θ, sin θ)^T + v_2 (−sin θ, cos θ)^T.

(b) Suppose that a homomorphism h : P_2 → P_3 satisfies

h(1) = x, h(x) = x^2/2, h(x^2) = x^3/3.

Then

h(a_0 + a_1 x + a_2 x^2) = a_0 x + (a_1/2) x^2 + (a_2/3) x^3.

The map is not an isomorphism, as it is not onto.
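The map h : P_2 → P_3 in part (b) acts on coefficient vectors as (a_0, a_1, a_2) ↦ (0, a_0, a_1/2, a_2/3); a quick sketch with exact rationals:

```python
from fractions import Fraction as F

def h(a):
    # h(1) = x, h(x) = x^2/2, h(x^2) = x^3/3, on coefficients (a0, a1, a2)
    a0, a1, a2 = (F(x) for x in a)
    return [F(0), a0, a1 / 2, a2 / 3]

# h(6 + 4x + 3x^2) = 6x + 2x^2 + x^3
assert h([6, 4, 3]) == [0, 6, 2, 1]
# the constant term of the image is always 0, so h is not onto P_3
assert all(h([a, b, c])[0] == 0 for a in (0, 1) for b in (0, 1) for c in (0, 1))
```

The constant-term coordinate of the image being identically 0 is exactly the failure of ontoness noted above.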

3.2.2 Rangespace and Nullspace

Definition 3.16: Let h : V → W be a homomorphism. The range space is given by

R(h) := {h(v) : v ∈ V}.

The rank of h, rank(h), satisfies rank(h) = dim(R(h)).

Lemma 3.17: R(h) is a subspace.

Proof: Let w_1, w_2 ∈ R(h) be given. There then exist v_1, v_2 such that h(v_i) = w_i. Since h(c_1 v_1 + c_2 v_2) ∈ R(h) and h(c_1 v_1 + c_2 v_2) = c_1 w_1 + c_2 w_2, one has that c_1 w_1 + c_2 w_2 ∈ R(h). Hence, it is a subspace.

Remark 3.18:
(a) rank(h) ≤ dim(W).
(b) h is onto if and only if rank(h) = dim(W).

Example: If h : R^(2×2) → P_3 is given by

h(A) = (a + b) x + c x^2 + d x^3, A = (a b; c d),

then a basis for R(h) is x, x^2, x^3, so that rank(h) = 3.

Definition 3.19: The inverse map h^(−1) : W → V is given by

h^(−1)(w) := {v : h(v) = w}.

Lemma 3.20: Let h : V → W be a homomorphism, and let S ⊂ R(h) be a subspace. Then

h^(−1)(S) := {v ∈ V : h(v) ∈ S}

is a subspace. In particular, h^(−1)(0) is a subspace.

Definition 3.21: The null space (kernel) of the homomorphism h : V → W is given by

N(h) := {v ∈ V : h(v) = 0} = h^(−1)(0).

The nullity of h is dim(N(h)).

Example: Again consider the map h : R^(2×2) → P_3. It is clear that h(A) = 0 if and only if a + b = 0 and c = d = 0, so that a basis for N(h) is given by

(1 −1; 0 0).

Note that for this example, rank(h) + dim(N(h)) = 4.

Theorem 3.22: Let h : V → W be a homomorphism. Then rank(h) + dim(N(h)) = dim(V).

Remark 3.23: Compare this result to that for matrices, where if A ∈ R^(m×n), then rank(A) + dim(N(A)) = n.

Proof: Let B_N = β_1, ..., β_k be a basis for N(h), and extend that to a basis B_V = β_1, ..., β_k, v_1, ..., v_l for V, where k + l = n. Set B_R = h(v_1), ..., h(v_l). We need to show that B_R is a basis for R(h). First consider

0 = c_1 h(v_1) + ... + c_l h(v_l) = h(c_1 v_1 + ... + c_l v_l).

Thus, c_1 v_1 + ... + c_l v_l ∈ N(h), so that c_1 v_1 + ... + c_l v_l = d_1 β_1 + ... + d_k β_k. Since B_V is a basis, this yields that c_1 = ... = c_l = d_1 = ... = d_k = 0, so that B_R is a linearly independent set.

Now suppose that h(v) ∈ R(h). Since v = a_1 β_1 + ... + a_k β_k + b_1 v_1 + ... + b_l v_l, upon using the fact that h is linear we get that

h(v) = a_1 h(β_1) + ... + a_k h(β_k) + b_1 h(v_1) + ... + b_l h(v_l) = 0 + b_1 h(v_1) + ... + b_l h(v_l).

Hence, B_R is a spanning set for R(h). B_N is a basis for N(h), and B_R is a basis for R(h); the result is now clear.

Remark 3.24:
(a) It is clear that rank(h) ≤ dim(V), with equality if and only if dim(N(h)) = 0.
(b) If dim(W) > dim(V), then h cannot be onto, as rank(h) ≤ dim(V) < dim(W).

Lemma 3.25: Let h : V → W be a homomorphism. Then dim(N(h)) = 0 if and only if h is one-to-one.

Proof: If h is one-to-one, then the only solution to h(v) = 0 is v = 0. Hence, dim(N(h)) = 0. Now suppose that dim(N(h)) = 0. From the above lemma we have that if B_V = v_1, ..., v_n is a basis for V, then B_R = h(v_1), ..., h(v_n) is a basis for R(h). Suppose that there is a w ∈ W such that h(u_1) = h(u_2) = w. We have that u_1 = a_1 v_1 + ... + a_n v_n and u_2 = b_1 v_1 + ... + b_n v_n, so that upon using the linearity of h,

a_1 h(v_1) + ... + a_n h(v_n) = b_1 h(v_1) + ... + b_n h(v_n).

Since B_R is a basis, this implies that a_i = b_i for all i, so that u_1 = u_2. Hence, h is one-to-one.

Definition 3.26: A one-to-one homomorphism is nonsingular.

3.3 Computing Linear Maps

3.3.1 Representing Linear Maps with Matrices

Recall that if B = v_1, ..., v_n is a basis for V, then a uniquely defined homomorphism h : V → W is given by

h(v) = h(c_1 v_1 + ... + c_n v_n) := c_1 h(v_1) + ... + c_n h(v_n),

i.e., the homomorphism is determined by its action on the basis.

Definition 3.27: Let A = (a_1 a_2 ... a_n) ∈ R^(m×n), and let c ∈ R^n. The matrix-vector product is defined by

A c = c_1 a_1 + ... + c_n a_n.

Remark 3.28:
(a) Multiplication by a matrix is a homomorphism from R^n to R^m.
(b) A linear system can be written as A x = b, where A is the coefficient matrix and x is the vector of variables.

Example: Suppose that h : P_1 → R^3, and that B = 2, 1 + 4x and D = e_1, 2 e_2, e_1 + e_3

are the bases for these spaces. Suppose that the values h(2) and h(1 + 4x) are known, so that the coordinate vectors Rep_D(h(2)) and Rep_D(h(1 + 4x)) can be computed. Thus, if p = c_1 · 2 + c_2 (1 + 4x), i.e.,

Rep_B(p) = (c_1, c_2)^T,

and since h(p) = c_1 h(2) + c_2 h(1 + 4x), by using the fact that Rep_D is linear one gets that

Rep_D(h(p)) = c_1 Rep_D(h(2)) + c_2 Rep_D(h(1 + 4x)).

If one defines the matrix

Rep_{B,D}(h) := (Rep_D(h(2))  Rep_D(h(1 + 4x))),

then one has that

Rep_D(h(p)) = Rep_{B,D}(h) Rep_B(p).

In particular, for any given p one finds Rep_D(h(p)), and hence h(p), by a single matrix-vector product. [The numerical entries of this example were lost.]

Definition 3.29: Let h : V → W be a homomorphism. Suppose that B = v_1, ..., v_n is a basis for V, and D = w_1, ..., w_m is a basis for W. Set

h_j := Rep_D(h(v_j)), j = 1, ..., n.

The matrix representation of h with respect to B, D is given by

Rep_{B,D}(h) := (h_1 h_2 ... h_n) ∈ R^(m×n).
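Definition 3.29 can be exercised on a small hypothetical map h : R^2 → R^3 (the map and bases below are assumptions chosen for illustration, not the garbled example above): the columns of Rep_{B,D}(h) are the D-representations of the images of the B-basis vectors, and then Rep_D(h(v)) = Rep_{B,D}(h) Rep_B(v).

```python
# Hypothetical h : R^2 -> R^3, h(x, y) = (x + y, 2x, y), with domain basis
# B = (1,1)^T, (1,-1)^T and codomain basis D = the standard basis of R^3.
def h(v):
    x, y = v
    return [x + y, 2 * x, y]

B = [[1, 1], [1, -1]]
H_cols = [h(b) for b in B]       # columns of Rep_{B,D}(h); here Rep_D(h(b)) = h(b)

def apply_cols(cols, c):         # matrix-vector product, column viewpoint
    return [sum(c[j] * cols[j][i] for j in range(len(cols)))
            for i in range(len(cols[0]))]

c = [2, -1]                      # Rep_B(v) for v = 2 b_1 - b_2 = (1, 3)^T
assert apply_cols(H_cols, c) == h([1, 3])
```

The assertion is exactly the identity of the upcoming Lemma: representing, then multiplying, agrees with applying the map.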

Lemma 3.30: Let h : V → W be a homomorphism. Then

Rep_D(h(v)) = Rep_{B,D}(h) Rep_B(v).

Remark 3.31: As a consequence, all linear transformations can be thought of as a matrix multiplication.

Example:
(a) Suppose that V = [e^x, e^(3x)], and that h : V → V is given by h(v) = ∫ v(x) dx. Since h(e^x) = e^x and h(e^(3x)) = (1/3) e^(3x), we have, with respect to the basis B = e^x, e^(3x),

Rep_{B,B}(h) = (1 0; 0 1/3).

(b) Suppose that a basis for V is B = v_1, v_2, v_3, and that a basis for W is D = w_1, w_2, w_3. Further suppose that

h(v_1) = w_1 + 3 w_2, h(v_2) = w_2 − w_3, h(v_3) = −w_1 + 4 w_3.

We then have that

Rep_{B,D}(h) = (1 0 −1; 3 1 0; 0 −1 4).

Thus, if v = 2 v_1 − v_2 + v_3, we have that

Rep_D(h(v)) = Rep_{B,D}(h) Rep_B(v) = (1, 5, 5)^T,

so that h(v) = w_1 + 5 w_2 + 5 w_3.

3.3.2 Any Matrix Represents a Linear Map

Example: Suppose that h : P_2 → R^3, with bases B = 1, x, x^2 and D = e_1, e_2 + e_3, e_1 − e_3, is represented by a matrix H with rank(H) = 2. In order to decide if b = e_1 + 3 e_2 ∈ R(h), it is equivalent to determine if Rep_D(b) ∈ R(H). Since

Rep_D(b) = (−2, 3, 3)^T,

and row reduction of the augmented matrix (H | Rep_D(b)) produces an inconsistent row, we have that Rep_D(b) ∉ R(H); hence, b ∉ R(h). Note that rank(H) = 2, so that h is neither one-to-one nor onto. Let us find a basis for R(h) and N(h). We row-reduce H and H^T,

so that R(H) is spanned by the pivot columns of H and N(H) is spanned by the homogeneous solutions. Using the fact that b ∈ R(h) if and only if Rep_D(b) ∈ R(H), and v ∈ N(h) if and only if Rep_B(v) ∈ N(H), this then yields a basis for R(h) ⊂ R^3 consisting of two vectors, and a basis for N(h) consisting of a single polynomial. [The numerical entries of this example were lost.]

Theorem 3.32: Let A ∈ R^(m×n). The map h : R^n → R^m defined by h(x) := A x is a homomorphism.

Proof: Set A = (a_1 a_2 ... a_n), and recall that A x = x_1 a_1 + ... + x_n a_n. Since

A(r x + s y) = (r x_1 + s y_1) a_1 + ... + (r x_n + s y_n) a_n = r (x_1 a_1 + ... + x_n a_n) + s (y_1 a_1 + ... + y_n a_n) = r A x + s A y,

h is a homomorphism.

Theorem 3.33: Let h : V → W be a homomorphism which is represented by the matrix H. Then rank(h) = rank(H).

Proof: Let B = v_1, ..., v_n be a basis for V, and let W have a basis D, so that

H = (Rep_D(h(v_1)) ... Rep_D(h(v_n))).

The rank of H is the number of linearly independent columns of H, and the rank of h is the number of linearly independent vectors in the set {h(v_1), ..., h(v_n)}. Since Rep_D : W → R^m is an isomorphism, we have that a set in R(h) is linearly independent if and only if the related set of representations is linearly independent (problem 3.28), i.e., {h(v_1), ..., h(v_k)} is linearly independent if and only if {Rep_D(h(v_1)), ..., Rep_D(h(v_k))} is linearly independent. The conclusion now follows.

Corollary 3.34:
(a) h is onto if and only if rank(H) = m.
(b) h is one-to-one if and only if rank(H) = n.
(c) h is nonsingular if and only if m = n and rank(H) = n.

3.4 Matrix Operations

3.4.1 Sums and Scalar Products

Definition 3.35: Let A = (a_{i,j}), B = (b_{i,j}) ∈ R^(m×n). Then
(a) A + B = (a_{i,j} + b_{i,j});
(b) r A = (r a_{i,j}) for any r ∈ R.

Lemma 3.36: Let g, h : V → W be homomorphisms represented with respect to the bases B and D by the matrices G, H. The map g + h is represented by G + H, and the map r h is represented by r H.
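The rank/nullity bookkeeping used in the example above (a 3-column matrix of rank 2 must have a one-dimensional null space) can be checked on a small stand-in; the matrix H below is a hypothetical example, not the garbled one from the notes:

```python
# Hypothetical H with col3 = col1 + col2, so rank(H) = 2 and dim(N(H)) = 1.
H = [[1, 0, 1], [0, 1, 1], [1, 1, 2]]
x = [1, 1, -1]                   # spans N(H), since col1 + col2 - col3 = 0
Hx = [sum(H[i][j] * x[j] for j in range(3)) for i in range(3)]
assert Hx == [0, 0, 0]
# rank(H) + dim(N(H)) = 2 + 1 = 3 = n, as in Theorem 3.22 / Remark 3.23
```

Exhibiting one nonzero null vector plus two independent columns pins down both numbers without a full reduction.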

3.4.2 Matrix Multiplication

Definition 3.37: Let G ∈ R^(m×n) and H = (h_1 h_2 ... h_p) ∈ R^(n×p). Then

G H = (G h_1  G h_2  ...  G h_p) ∈ R^(m×p).

Example: It is easy to check the definition on a small product, multiplying a 3×2 matrix against a 2×2 matrix column by column. [The numerical entries of this example were lost.]

Remark 3.38: Matrix multiplication is generally not commutative. For example:
(a) if A ∈ R^(2×3) and B ∈ R^(3×2), then AB ∈ R^(2×2) while BA ∈ R^(3×3);
(b) even when A, B ∈ R^(2×2), so that both products are defined and of the same size, generally AB ≠ BA.

Lemma 3.39: Let g : V → W and h : W → U be homomorphisms represented by the matrices G, H. The map h ∘ g : V → U is represented by the matrix HG.

Proof: Given by the commutative diagram:

V (basis B) --g--> W (basis D) --h--> U (basis E)
  Rep_B |            Rep_D |            Rep_E |
  R^n   -----G---->  R^m   -----H---->  R^p

Example: Consider the maps t_θ, d_r : R^2 → R^2 given by

t_θ(v) := (cos θ v_1 − sin θ v_2, sin θ v_1 + cos θ v_2)^T, d_r(v) := (3 v_1, v_2)^T.

The matrix representations for these maps are

T_θ := (cos θ  −sin θ; sin θ  cos θ), D_r := (3 0; 0 1).

Rotation followed by dilation is represented by the matrix

D_r T_θ = (3 cos θ  −3 sin θ; sin θ  cos θ),

while dilation followed by rotation is represented by the matrix

T_θ D_r = (3 cos θ  −sin θ; 3 sin θ  cos θ).
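Lemma 3.39 and Remark 3.38 can both be seen with the rotation/dilation pair: composing the maps corresponds to multiplying their matrices, and the two orders of composition give different products. A sketch with θ = π/2:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

th = math.pi / 2
T = [[math.cos(th), -math.sin(th)], [math.sin(th), math.cos(th)]]  # rotation
D = [[3, 0], [0, 1]]                                               # dilation

DT = matmul(D, T)    # rotate, then dilate
TD = matmul(T, D)    # dilate, then rotate
assert DT != TD      # matrix multiplication is not commutative
# the (1,2) entries differ: -3 sin(th) = -3 versus -sin(th) = -1
assert abs(DT[0][1] - (-3)) < 1e-12 and abs(TD[0][1] - (-1)) < 1e-12
```

This matches the two closed-form products displayed above.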

3.4.3 Mechanics of Matrix Multiplication

Definition 3.40: The identity matrix is given by I = (e_1 e_2 ... e_n) ∈ R^(n×n).

Remark 3.41: Assuming that the multiplication makes sense, I v = v for any vector v ∈ R^n, and consequently A I = A and I B = B.

Definition 3.42: A diagonal matrix D = (d_{i,j}) ∈ R^(n×n) is such that d_{i,j} = 0 for i ≠ j.

Definition 3.43: An elementary reduction matrix R ∈ R^(n×n) is formed by applying a single row operation to the identity matrix.

Example: Two examples are

I --(−2ρ_1 + ρ_2)--> (1 0; −2 1), I --(ρ_1 ↔ ρ_2)--> (0 1; 1 0).

Lemma 3.44: Let R be an elementary reduction matrix. Then RH is the matrix obtained by performing the corresponding Gaussian operation on the matrix H.

Corollary 3.45: For any matrix H there are elementary reduction matrices R_1, ..., R_k such that R_k R_{k−1} ... R_1 H is in reduced echelon form.

3.4.4 Inverses

Definition 3.46: Suppose that A ∈ R^(n×n). The matrix is invertible if there is a matrix A^(−1) such that

A A^(−1) = A^(−1) A = I.

Remark 3.47: If it exists, the matrix A^(−1) is given by a product of elementary reduction matrices. For example, suppose a 2×2 matrix A is brought to I by the operations −2ρ_1 + ρ_2, then (1/3)ρ_2, then −ρ_2 + ρ_1. By setting

R_1 = (1 0; −2 1), R_2 = (1 0; 0 1/3), R_3 = (1 −1; 0 1),

we have that R_3 R_2 R_1 A = I, so that A^(−1) = R_3 R_2 R_1.

Lemma 3.48: A is invertible if and only if it is nonsingular, i.e., the linear map defined by h(x) := A x is an isomorphism.

Proof: A can be row-reduced to I if and only if h is an isomorphism.

Remark 3.49: When computing A^(−1), do the reduction (A | I) → (I | A^(−1)) (if possible).

Example:
(a) For the matrix A given above, the reduction (A | I) → (I | A^(−1)) produces A^(−1), whose entries are thirds. [The entries of this example were lost.]
(b) For a general A ∈ R^(2×2) we have that

(a b; c d)^(−1) = (1/(ad − bc)) (d  −b; −c  a),

provided ad − bc ≠ 0.
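The 2×2 formula in part (b) can be checked directly; the matrix A below is a hypothetical example with ad − bc = 3, chosen so that the inverse has entries in thirds as in part (a):

```python
from fractions import Fraction

def inverse2(a, b, c, d):
    """2x2 inverse via the adjugate formula (1/(ad - bc)) (d -b; -c a)."""
    det = Fraction(a * d - b * c)
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 1], [2, 5]]             # det = 1*5 - 1*2 = 3
Ainv = inverse2(1, 1, 2, 5)      # = (5/3 -1/3; -2/3 1/3)
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]  # A A^{-1} = I
```

Exact rationals keep the check free of floating-point round-off.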

3.5 Change of Basis

3.5.1 Changing Representations of Vectors

For the vector space V let one basis be given by B = β_1, ..., β_n, and let another be given by D = δ_1, ..., δ_n. Define the homomorphism h : V → V by h(β_j) := β_j, i.e., h = id, the identity map. The transformation matrix H associated with the identity map satisfies h_j = Rep_D(β_j).

Definition 3.50: The change of basis matrix H = Rep_{B,D}(id) for the bases B, D is the representation of the identity map with respect to these bases, and satisfies h_j = Rep_D(β_j).

Example: Suppose that

B = (1, 1)^T, (0, 1)^T, D = (2, 1)^T, (−1, 1)^T.

It can be checked that

Rep_D((1, 1)^T) = (2/3, 1/3)^T, Rep_D((0, 1)^T) = (1/3, 2/3)^T,

so that the change of basis matrix is

H = (2/3 1/3; 1/3 2/3).

Recall that H Rep_B(v) = Rep_D(v). For the vector v = (1, 3)^T one has that Rep_B(v) = (1, 2)^T, so that

Rep_D(v) = H Rep_B(v) = (4/3, 5/3)^T.

What about the change of basis matrix from D to B? It must be H^(−1) (use a commutative diagram to show it).

Example (cont.): The change of basis matrix from D to B is

H^(−1) = (2 −1; −1 2).

Note that this implies that

Rep_B((2, 1)^T) = (2, −1)^T, Rep_B((−1, 1)^T) = (−1, 2)^T.

3.5.2 Changing Map Representations

Consider the homomorphism h : V → W, and suppose that V has bases B, B̂, while W has bases D, D̂. With respect to B, D there is a transformation matrix H, while with respect to B̂, D̂ there is a transformation matrix Ĥ, i.e.,

V (basis B) ---H---> W (basis D)
     id |                id |
V (basis B̂) ---Ĥ---> W (basis D̂)

We have that

Ĥ = Rep_{D,D̂}(id) H Rep_{B̂,B}(id) = Rep_{D,D̂}(id) H Rep_{B,B̂}(id)^(−1) = Rep_{D̂,D}(id)^(−1) H Rep_{B̂,B}(id).

The idea is that, if possible, we wish to choose bases B̂, D̂ such that h(β̂_j) = a_j δ̂_j for some a_j ∈ R.

Definition 3.51: H, Ĥ ∈ R^(m×n) are matrix equivalent if there are nonsingular matrices P, Q such that Ĥ = P H Q.

Lemma 3.52: Matrix equivalent matrices represent the same map with respect to appropriate pairs of bases.

Example: In the standard basis E_2 consider the homomorphism

h(x) = H x, H = (4 −2; 1 1).

Consider the basis D = (1, 1)^T, (2, 1)^T. We have that

Rep_{D,E_2}(id) = (1 2; 1 1),

so that

Ĥ = Rep_{D,E_2}(id)^(−1) H Rep_{D,E_2}(id) = (2 0; 0 3);

hence, the homomorphism has the desired property with the basis D, i.e.,

h((1, 1)^T) = 2 (1, 1)^T, h((2, 1)^T) = 3 (2, 1)^T.

Remark 3.53: The above example does not really change if, with respect to the standard basis B = 1, x, the homomorphism h : P_1 → P_1 satisfies

h(1) = 4 + x, h(x) = −2 + x,

so that the matrix representing the homomorphism is that given above. If we set D = 1 + x, 2 + x, then one then has that

h(1 + x) = 2 (1 + x), h(2 + x) = 3 (2 + x).

3.6 Projection

3.6.1 Orthogonal Projection into a Line

Let the line l be given by l := [s], and let v ∈ R^n be a given vector. The orthogonal projection of v onto l is given by v_p = c_p s, where c_p is chosen so that

v = (v − c_p s) + c_p s, (v − c_p s) · s = 0

(give a picture). Note that the second condition implies that v − c_p s ⊥ l, and it yields that

c_p = (v · s)/(s · s).

Definition 3.54: The orthogonal projection of v onto the line spanned by s is the vector

proj_[s](v) = ((v · s)/(s · s)) s.
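Definition 3.54 is easy to compute with; a sketch that also checks that the residual v − proj_[s](v) is orthogonal to s (the vectors below are illustrative choices):

```python
from fractions import Fraction

def proj_line(v, s):
    """Orthogonal projection of v onto the line [s]: ((v.s)/(s.s)) s."""
    dot = lambda u, w: sum(Fraction(x) * y for x, y in zip(u, w))
    c = dot(v, s) / dot(s, s)
    return [c * x for x in s]

v, s = [1, 2, 3], [1, 1, 1]
p = proj_line(v, s)
assert p == [2, 2, 2]                         # c_p = (v.s)/(s.s) = 6/3 = 2
r = [a - b for a, b in zip(v, p)]             # residual v - proj
assert sum(x * y for x, y in zip(r, s)) == 0  # residual is orthogonal to s
```

The same two-line computation is the building block for the Gram-Schmidt procedure below.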

Remark 3.55:
(a) By construction, v − proj_[s](v) ∈ [s]^⊥.
(b) Since proj_[s] : R^n → R^n is a homomorphism, it is represented by a matrix P_[s], which is given by

P_[s] := (1/(s · s)) (s_1 s  s_2 s  ...  s_n s),

i.e., the j-th column of P_[s] is s_j s/(s · s). Note that rank(P_[s]) = 1, and hence dim(N(P_[s])) = n − 1. Further note that [s]^⊥ = N(P_[s]).

Example: For given vectors v, s one computes proj_[s](v) = ((v · s)/(s · s)) s directly. [The numerical entries of this example were lost.]

3.6.2 Gram-Schmidt Orthogonalization

Definition 3.56: The vectors v_1, ..., v_k ∈ R^n are mutually orthogonal if v_i · v_j = 0 for any i ≠ j.

Theorem 3.57: Suppose that the nonzero vectors v_1, ..., v_k ∈ R^n are mutually orthogonal. The set {v_1, ..., v_k} is then linearly independent.

Proof: Suppose that c_1 v_1 + ... + c_k v_k = 0. For each j take the dot product with v_j, so that

0 = v_j · (c_1 v_1 + ... + c_k v_k) = c_1 (v_j · v_1) + ... + c_k (v_j · v_k) = c_j (v_j · v_j).

Since the vectors are nonzero, this implies that c_j = 0. Hence, the vectors are linearly independent.

Corollary 3.58: The set {v_1, ..., v_k} is a basis for [v_1, ..., v_k].

Definition 3.59: The basis given above is an orthogonal basis.

Recall that if v ∈ R^n, then ||v||^2 = v · v.

Lemma 3.60: Suppose that B = κ_1, ..., κ_l is an orthogonal basis for the subspace S ⊂ R^n. If v = c_1 κ_1 + ... + c_l κ_l ∈ S, then
(a) c_i = (v · κ_i)/(κ_i · κ_i);
(b) ||v||^2 = c_1^2 (κ_1 · κ_1) + ... + c_l^2 (κ_l · κ_l).

Proof: Follows immediately from the fact that B is an orthogonal basis. For part (a) consider v · κ_i, and for part (b) simply look at v · v.

Lemma 3.61: Let κ_1, ..., κ_l be an orthogonal basis for a subspace S. For a given v set

p = c_1 κ_1 + ... + c_l κ_l, c_i = (v · κ_i)/(κ_i · κ_i).

Then p − v ∈ S^⊥.

Proof: First note that

κ_j · (p − v) = κ_j · p − κ_j · v = κ_j · (c_1 κ_1 + ... + c_l κ_l) − c_j (κ_j · κ_j) = c_j (κ_j · κ_j) − c_j (κ_j · κ_j) = 0,

so that p − v is orthogonal to each κ_j. If x ∈ S, so that x = d_1 κ_1 + ... + d_l κ_l, by using the linearity of the dot product it is clear that (p − v) · x = 0. Hence, p − v ∈ S^⊥.

Definition 3.62: Define the homomorphism proj_S : R^n → R^n by

proj_S(v) = proj_[κ_1](v) + ... + proj_[κ_l](v).

Remark 3.63: From the above one has that v − proj_S(v) ∈ S^⊥. The matrix representation for proj_S, P_S, is given by

P_S = P_[κ_1] + ... + P_[κ_l].

Since κ_1, ..., κ_l is an orthogonal basis, one has that rank(P_S) = l, and hence dim(S^⊥) = dim(N(P_S)) = n − l.

Recall that if S = [β_1, ..., β_k], then a basis for S is not at all unique. The question is how to find an orthogonal basis for S. For example, suppose that

S = [(1, 1, 1)^T, (1, 2, 3)^T, (5, 6, 7)^T].

Keep in mind that dim(S) ≤ 3. Set

κ_1 = (1, 1, 1)^T.

Define κ_2 by

κ_2 = (1, 2, 3)^T − proj_[κ_1]((1, 2, 3)^T) = (1, 2, 3)^T − 2 (1, 1, 1)^T = (−1, 0, 1)^T.

If S_1 = [κ_1, κ_2], now define

κ_3 = (5, 6, 7)^T − proj_{S_1}((5, 6, 7)^T) = (5, 6, 7)^T − proj_[κ_1]((5, 6, 7)^T) − proj_[κ_2]((5, 6, 7)^T) = (5, 6, 7)^T − 6 (1, 1, 1)^T − (−1, 0, 1)^T = (0, 0, 0)^T.

An orthogonal basis for S is then

(1, 1, 1)^T, (−1, 0, 1)^T,

so that dim(S) = 2.
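The computation above is exactly the Gram-Schmidt loop: subtract from each new vector its projections onto the κ's found so far, and discard anything that reduces to 0. A sketch reproducing the example, assuming the spanning set (1, 1, 1)^T, (1, 2, 3)^T, (5, 6, 7)^T (whose entries are partly garbled in the source):

```python
from fractions import Fraction

def gram_schmidt(vectors):
    """Return the nonzero orthogonal vectors produced by Gram-Schmidt."""
    dot = lambda u, w: sum(x * y for x, y in zip(u, w))
    basis = []
    for v in vectors:
        v = [Fraction(x) for x in v]
        for k in basis:
            c = dot(v, k) / dot(k, k)          # (v . kappa)/(kappa . kappa)
            v = [a - c * b for a, b in zip(v, k)]
        if any(x != 0 for x in v):
            basis.append(v)
    return basis

ortho = gram_schmidt([[1, 1, 1], [1, 2, 3], [5, 6, 7]])
assert ortho == [[1, 1, 1], [-1, 0, 1]]        # third vector reduces to 0
```

The third input vector vanishing in the loop is the code-level counterpart of the conclusion dim(S) = 2.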