Math 321: Linear Algebra


Math 321: Linear Algebra
T. Kapitula
Department of Mathematics and Statistics, University of New Mexico
September 8, 2004
Textbook: Linear Algebra, by J. Hefferon
E-mail: kapitula@math.unm.edu

Contents

1. Linear Systems
   1.1. Solving Linear Systems
        1.1.1. Gauss' Method
        1.1.2. Describing the Solution Set
        1.1.3. General = Particular + Homogeneous
   1.3. Reduced Echelon Form
        1.3.1. Gauss-Jordan Reduction
        1.3.2. Row Equivalence
2. Vector Spaces
   2.1. Definition of Vector Space
        2.1.1. Definition and Examples
        2.1.2. Subspaces and Spanning Sets
   2.2. Linear Independence
        2.2.1. Definition and Examples
   2.3. Basis and Dimension
        2.3.1. Basis
        2.3.2. Dimension
        2.3.3. Vector Spaces and Linear Systems
        2.3.4. Combining Subspaces
3. Maps Between Spaces
   3.1. Isomorphisms
        3.1.1. Definition and Examples
        3.1.2. Dimension Characterizes Isomorphism
   3.2. Homomorphisms
        3.2.1. Definition
        3.2.2. Rangespace and Nullspace
   3.3. Computing Linear Maps
        3.3.1. Representing Linear Maps with Matrices
        3.3.2. Any Matrix Represents a Linear Map
   3.4. Matrix Operations
        3.4.1. Sums and Scalar Products
        3.4.2. Matrix Multiplication
        3.4.3. Mechanics of Matrix Multiplication
        3.4.4. Inverses
   3.5. Change of Basis
        3.5.1. Changing Representations of Vectors
        3.5.2. Changing Map Representations
   3.6. Projection
        3.6.1. Orthogonal Projection into a Line
        3.6.2. Gram-Schmidt Orthogonalization
        3.6.3. Projection into a Subspace
   Topic: Line of Best Fit
4. Determinants
   4.1. Definition
        4.1.1. Exploration
        4.1.2. Properties of Determinants
   4.3. Other Formulas
        4.3.1. Laplace's Expansion
   Topic: Cramer's Rule
5. Similarity
   5.2. Similarity
        5.2.1. Definition and Examples
        5.2.2. Diagonalizability
        5.2.3. Eigenvalues and Eigenvectors
   Topic: Stable Populations
   Topic: Method of Powers
   Topic: Symmetric Matrices

1. Linear Systems

Two examples:

(a) Network flow. The assumption is that flow into nodes equals flow out of nodes, and that branches connect the various nodes. The nodes can be thought of as intersections, and the branches can be thought of as streets. [Figure: a network-flow diagram with unknown branch flows x_2, x_3, x_4 and given flows 6, 5, 3; the layout was lost in transcription.]

(b) Approximation of data: find the line of best fit (least-squares). For example, find a line which best fits the points (1, 0), (2, 1), (4, 2), (5, 3). The answer is y = -3/5 + (7/10)x.

1.1. Solving Linear Systems

1.1.1. Gauss' Method

Definition 1.1. A linear equation is of the form

    a_1 x_1 + a_2 x_2 + ... + a_n x_n = d,

where x_1, ..., x_n are the variables, a_1, ..., a_n ∈ R are the coefficients,
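The least-squares answer in example (b) can be checked numerically. A minimal sketch, assuming numpy is available and that the partially garbled data points were (1, 0), (2, 1), (4, 2), (5, 3):

```python
import numpy as np

# Data points assumed to be (1,0), (2,1), (4,2), (5,3); some digits were
# garbled in the source, so treat these as a reconstruction.
x = np.array([1.0, 2.0, 4.0, 5.0])
y = np.array([0.0, 1.0, 2.0, 3.0])

# Least squares: minimize ||[1 x] @ (a, b)^T - y|| over intercept a, slope b.
A = np.column_stack([np.ones_like(x), x])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(a, b)  # approximately -0.6 and 0.7, i.e. y = -3/5 + (7/10) x
```

The same coefficients fall out of the normal equations by hand, which is how the Topic section at the end of Chapter 3 treats the problem.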

and d ∈ R is the constant. A linear system is a collection of one or more linear equations. An n-tuple (s_1, ..., s_n) is a solution if x_1 = s_1, ..., x_n = s_n solves the system.

Example.
(i) 3x_1 - 4x_2 + 6x_3 = 5: linear equation
(ii) x_1 x_2 - 3x_3 = 4: nonlinear equation

Example. The system

    2x_1 + x_2 = 8
    -2x_1 + 3x_2 = 16

has the solution (x_1, x_2) = (1, 6).

Theorem 1.2 (Gauss' Theorem). If a linear system is changed from one to another by the operations
(1) one equation is swapped with another (swapping)
(2) an equation is multiplied by a nonzero constant (rescaling)
(3) an equation is replaced by the sum of itself and a multiple of another (pivoting)
then the two systems have the same set of solutions.

Remark 1.3. These operations are known as the elementary reduction operations, row operations, or Gaussian operations.

Why? The idea is to convert a given system into an equivalent system which is easier to solve.

Example. Work the system

    3x_1 + 6x_2 = 12
    x_1 - 2x_2 = 4,

which has the solution (4, 0).

Definition 1.4. In each row, the first variable with a nonzero coefficient is the row's leading variable. A system is in echelon form if each leading variable is to the right of the leading variable in the row above it.

Example. Upon using the Gaussian operations one has that

    x_1 - x_3 = 0              x_1 - x_3 = 0
    -3x_1 + x_2 = 1      ->          x_2 = 4
    x_1 + x_2 - x_3 = 4              x_3 = 1.

The solution is (1, 4, 1).

Theorem 1.5. A linear system has either
(1) no solution (inconsistent)
(2) a unique solution
(3) an infinite number of solutions.
In the latter two cases the system is consistent.
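Gauss' method can be sketched in code: swapping and pivoting reduce the system to echelon form, and back substitution finishes. A teaching sketch, assuming numpy; the digits in the 3-by-3 example are reconstructed from the garbled source:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gauss' method: reduce to echelon form with the
    elementary row operations, then back-substitute.  A teaching sketch,
    not a production solver."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for j in range(n):
        p = j + np.argmax(np.abs(A[j:, j]))   # choose a pivot row (swapping)
        A[[j, p]], b[[j, p]] = A[[p, j]], b[[p, j]]
        for i in range(j + 1, n):             # pivoting: row_i += m * row_j
            m = -A[i, j] / A[j, j]
            A[i] += m * A[j]
            b[i] += m * b[j]
    x = np.zeros(n)
    for j in range(n - 1, -1, -1):            # back substitution
        x[j] = (b[j] - A[j, j+1:] @ x[j+1:]) / A[j, j]
    return x

# The echelon-form example from the notes (digits reconstructed):
A = np.array([[1, 0, -1], [-3, 1, 0], [1, 1, -1]])
b = np.array([0, 1, 4])
print(gauss_solve(A, b))  # the solution (1, 4, 1), up to rounding
```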

Illustrate this graphically with systems of two equations in two unknowns. [Figure: three sketches, labelled "Unique solution", "Inconsistent", and "Infinitely many solutions".]

1.1.2. Describing the Solution Set

Definition 1.6. The variables in an echelon-form linear system which are not leading variables are free variables.

Example. For the system

    x_1 + 2x_2 - 4x_4 = 2
    x_3 - 7x_4 = 8

the leading variables are x_1, x_3, and the free variables are x_2, x_4. The solution is parameterized by the free variables via

    x_1 = 2 - 2x_2 + 4x_4,    x_3 = 8 + 7x_4,

so that the solution set is {(2 - 2x_2 + 4x_4, x_2, 8 + 7x_4, x_4) : x_2, x_4 ∈ R}.

Definition 1.7. An m × n matrix A is a rectangular array of numbers with m rows and n columns. If the numbers are real-valued, then we say that A = (a_{i,j}) ∈ R^{m×n}, where a_{i,j} ∈ R is the entry in row i and column j.

Example. Identify the entries of the matrix

    A = ( 1 3 4 )
        ( 2 6 3 ).

For example, a_{1,2} = 3 and a_{2,3} = 3.

Definition 1.8. A vector (column vector) is a matrix with a single column, i.e., a ∈ R^{n×1} := R^n. A matrix with a single row is a row vector. The entries of a vector are its components.

Remark 1.9. For vectors we will use the notation a = (a_i) ∈ R^n.

Definition 1.10. Consider two vectors u = (u_i), v = (v_i) ∈ R^n. The algebraic operations are:
(i) vector sum: u + v = (u_i + v_i)
(ii) scalar multiplication: r v = (r v_i) for any r ∈ R.
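The parameterization by the free variables can be verified directly: every choice of (x_2, x_4) should satisfy both equations. A sketch assuming numpy:

```python
import numpy as np

# The solution set of  x1 + 2x2 - 4x4 = 2,  x3 - 7x4 = 8, parameterized by
# the free variables x2, x4 (system taken from the example in the notes).
def solution(x2, x4):
    return np.array([2 - 2*x2 + 4*x4, x2, 8 + 7*x4, x4])

A = np.array([[1, 2, 0, -4],
              [0, 0, 1, -7]], dtype=float)
b = np.array([2, 8], dtype=float)

rng = np.random.default_rng(0)
for x2, x4 in rng.normal(size=(5, 2)):
    assert np.allclose(A @ solution(x2, x4), b)  # every choice solves the system
print("all parameter choices satisfy the system")
```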

Consider the linear system

    x_1 - 3x_2 + 2x_3 = -8
    x_1 - x_3 = -5
    -2x_2 + x_3 = -7.

Matrices associated with this system are

    ( 1 -3  2 )                        ( 1 -3  2 | -8 )
    ( 1  0 -1 )  (coefficient matrix), ( 1  0 -1 | -5 )  (augmented matrix).
    ( 0 -2  1 )                        ( 0 -2  1 | -7 )

The Gaussian operations can be performed on the augmented matrix to put the system into echelon form. Why? It eases the bookkeeping. The solution is s = (0, 6, 5), which can be written in vector notation as s = (0, 6, 5)^T.

Example. The solution to the system

    x_1 + 2x_2 - 4x_4 = 2
    x_3 - 7x_4 = 8

is {(2 - 2x_2 + 4x_4, x_2, 8 + 7x_4, x_4) : x_2, x_4 ∈ R}. Using vector notation yields

    { (2, 0, 8, 0)^T + x_2 (-2, 1, 0, 0)^T + x_4 (4, 0, 7, 1)^T : x_2, x_4 ∈ R }.

It is clear that the system has infinitely many solutions.
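The augmented-matrix bookkeeping can be cross-checked with a linear solver. The coefficients here are a reconstruction (several digits and signs were garbled in the source), chosen to be consistent with the stated solution; numpy assumed:

```python
import numpy as np

# The 3x3 system from the notes, digits reconstructed so that the stated
# solution s = (0, 6, 5) checks out:
#   x1 - 3x2 + 2x3 = -8,   x1 - x3 = -5,   -2x2 + x3 = -7
A = np.array([[1, -3,  2],
              [1,  0, -1],
              [0, -2,  1]], dtype=float)
b = np.array([-8, -5, -7], dtype=float)

s = np.linalg.solve(A, b)
print(s)  # the solution (0, 6, 5), up to rounding
```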

1.1.3. General = Particular + Homogeneous

In the previous example it is seen that the solution has two parts: a particular solution, which depends upon the right-hand side, and a homogeneous solution, which is independent of the right-hand side. We will see that this feature holds for any linear system. Recall that equation j in a linear system has the form

    a_{j,1} x_1 + ... + a_{j,n} x_n = d_j.

Definition 1.11. A linear equation is homogeneous if it has a constant of zero. A linear system is homogeneous if all of the constants are zero.

Remark 1.12. A homogeneous system always has at least one solution, the zero vector.

Lemma 1.13. For any homogeneous linear system there exist vectors β_1, ..., β_k such that any solution of the system is of the form

    x = c_1 β_1 + ... + c_k β_k,    c_1, ..., c_k ∈ R.

Here k is the number of free variables in an echelon form of the system.

Definition 1.14. The set {c_1 β_1 + ... + c_k β_k : c_1, ..., c_k ∈ R} is the span of the vectors {β_1, ..., β_k}.

Proof: The augmented matrix for the system is of the form (A | 0), where A ∈ R^{m×n}. Use Gauss' method to reduce the system to echelon form. Furthermore, use the Gauss-Jordan reduction discussed in Section 1.3 to put the system in reduced echelon form. The coefficient associated with each leading variable (the leading entry) will then be one, and there will be zeros above and below each leading entry in the reduced matrix. For example,

    A  ->  ( 1 6 0 4 )
           ( 0 0 1 3 ).

In row j the reduced system is then of the form

    x_{l_j} + a_{j,l_j+1} x_{l_j+1} + ... + a_{j,n} x_n = 0,

which can be rewritten as

    x_{l_j} = -a_{j,l_j+1} x_{l_j+1} - ... - a_{j,n} x_n.

Since the system is in echelon form, l_{j-1} < l_j < l_{j+1}. Suppose that the free variables are labelled as x_{f_1}, ..., x_{f_k}. Since the system is in reduced echelon form, one has that in the above equation a_{j,l_j+i} = 0 for any i such that x_{l_j+i} is a leading variable. Thus, after a renaming of the variables the above equation can be rewritten as

    x_{l_j} = β_{j,f_1} x_{f_1} + ... + β_{j,f_k} x_{f_k}.

The vectors β_j, j = 1, ..., k, can now be constructed, and in vector form the solution is given by

    x = x_{f_1} β_1 + ... + x_{f_k} β_k.

Lemma 1.15. Let p be a particular solution for a linear system. The solution set is given by

    { p + h : h is a homogeneous solution }.

Proof: Let s be any solution, and set h = s - p. In row j we have that

    a_{j,1}(s_1 - p_1) + ... + a_{j,n}(s_n - p_n)
        = (a_{j,1} s_1 + ... + a_{j,n} s_n) - (a_{j,1} p_1 + ... + a_{j,n} p_n)
        = d_j - d_j = 0,

so that h = s - p solves the homogeneous system. Now take a vector of the form p + h, where p is a particular solution and h is a homogeneous solution. Similar to above,

    a_{j,1}(p_1 + h_1) + ... + a_{j,n}(p_n + h_n)
        = (a_{j,1} p_1 + ... + a_{j,n} p_n) + (a_{j,1} h_1 + ... + a_{j,n} h_n)
        = d_j + 0 = d_j,

so that s = p + h solves the system.

Remark 1.16. While a homogeneous solution always exists, this is not the case for a particular solution.
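The "general = particular + homogeneous" decomposition can be illustrated numerically: take any particular solution and add an arbitrary element of the null space. A sketch assuming numpy, with lstsq and the SVD standing in for the hand reduction:

```python
import numpy as np

# General = particular + homogeneous for the system
#   x1 + 2x2 - 4x4 = 2,  x3 - 7x4 = 8   (from the examples above).
A = np.array([[1, 2, 0, -4],
              [0, 0, 1, -7]], dtype=float)
b = np.array([2, 8], dtype=float)

p = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
_, _, Vt = np.linalg.svd(A)
N = Vt[2:].T                               # columns span the null space N(A)

rng = np.random.default_rng(1)
x = p + N @ rng.normal(size=2)             # particular + homogeneous
assert np.allclose(A @ x, b)
print("p + h solves the system")
```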

Example. Consider the linear system

    x_1 + x_3 + x_4 = 1
    2x_1 - x_2 + x_4 = 3
    x_1 + x_2 + 3x_3 + 2x_4 = b,

which row-reduces to

    x_1 + x_3 + x_4 = 1
    x_2 + 2x_3 + x_4 = -1
    0 = b.

The homogeneous solution is given by

    h = x_3 (-1, -2, 1, 0)^T + x_4 (-1, -1, 0, 1)^T,    x_3, x_4 ∈ R.

If b ≠ 0, then no particular solution exists. Otherwise, one has that p = (1, -1, 0, 0)^T. If b = 0, the solution set is given by x = p + h.

Definition 1.17. A square matrix is nonsingular if it is the coefficient matrix for a homogeneous system with the unique solution x = 0. Otherwise, it is singular.

Remark 1.18. In order for a square matrix to be nonsingular, it must be true that for the row-reduced matrix there are no free variables. Consider the two examples:

    A  ->  ( 1 2 )        B  ->  ( 1 0 )
           ( 0 0 ),              ( 0 1 ).

The homogeneous system associated with A has infinitely many solutions, whereas the one associated with B has only one.

1.3. Reduced Echelon Form

1.3.1. Gauss-Jordan Reduction

Definition 1.19. A matrix is in reduced echelon form if
(a) it is in echelon form
(b) each leading entry is a one
(c) each leading entry is the only nonzero entry in its column.

Definition 1.20. The Gauss-Jordan reduction is the process of putting a matrix into reduced echelon form.
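The consistency condition in the example can be restated via ranks: the system has a solution exactly when appending the right-hand side does not raise the rank. The coefficients below are reconstructed from the garbled source (the third row equals 3 times the first minus the second), so treat them as illustrative; numpy assumed:

```python
import numpy as np

# The system is consistent only when b = 0: the third equation is
# 3*(eq. 1) - (eq. 2), so rank(A|b) == rank(A) exactly when b = 0.
# (Coefficients reconstructed from the notes.)
A = np.array([[1,  0, 1, 1],
              [2, -1, 0, 1],
              [1,  1, 3, 2]], dtype=float)

for b in (0.0, 4.0):
    rhs = np.array([1, 3, b])
    aug = np.column_stack([A, rhs])
    consistent = np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(A)
    print(b, consistent)  # 0.0 True, then 4.0 False
```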

Example. Consider the augmented matrix

    ( 1  2  3 |  4 )
    ( 5  6  7 |  8 )
    ( 9 10 11 | 12 ).

Row reduction gives the echelon form and then the reduced echelon form

    ( 1 2 3 | 4 )          ( 1 0 -1 | -2 )
    ( 0 1 2 | 3 )    ->    ( 0 1  2 |  3 )
    ( 0 0 0 | 0 )          ( 0 0  0 |  0 ).

The solution is then given by

    x = (-2, 3, 0)^T + c (1, -2, 1)^T,    c ∈ R.

Remark 1.21. The above coefficient matrix is singular.

Definition 1.22. Two matrices are row equivalent if they can be row-reduced to a common third matrix by the elementary row operations.

Example. This can be written as A -> C -> B, i.e., from above

    A = ( 1  2  3  4 )     B = ( 1 0 -1 -2 )     C = ( 1 2 3 4 )
        ( 5  6  7  8 ),        ( 0 1  2  3 ),        ( 0 1 2 3 )
        ( 9 10 11 12 )         ( 0 0  0  0 )         ( 0 0 0 0 ).

Remark 1.23.
(a) Elementary row operations are reversible.
(b) If two coefficient matrices are row equivalent, then the associated homogeneous linear systems have the same solution.
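Gauss-Jordan reduction is easy to sketch in code; applied to the matrix from the example it reproduces the reduced echelon form. A teaching aid, not a numerically robust routine; assumes numpy:

```python
import numpy as np

def rref(M, tol=1e-12):
    """Gauss-Jordan reduction to reduced echelon form (a teaching sketch)."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        p = r + np.argmax(np.abs(M[r:, c]))
        if abs(M[p, c]) < tol:
            continue                 # no leading entry in this column
        M[[r, p]] = M[[p, r]]        # swap
        M[r] /= M[r, c]              # rescale so the leading entry is 1
        for i in range(rows):        # zero out the rest of the column
            if i != r:
                M[i] -= M[i, c] * M[r]
        r += 1
        if r == rows:
            break
    return M

A = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
print(rref(A))
# reduced echelon form [[1, 0, -1, -2], [0, 1, 2, 3], [0, 0, 0, 0]],
# up to floating-point rounding
```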

1.3.2. Row Equivalence

Definition 1.24. A linear combination of the vectors x_1, ..., x_n is an expression of the form

    Σ_{i=1}^n c_i x_i = c_1 x_1 + ... + c_n x_n,

where c_1, ..., c_n ∈ R.

Remark 1.25. The span of the set {x_1, ..., x_n} is the set of all linear combinations of the vectors.

Lemma 1.26 (Linear Combination Lemma). A linear combination of linear combinations is a linear combination.

Proof: Let the linear combinations Σ_{i=1}^n c_{1,i} x_i through Σ_{i=1}^n c_{m,i} x_i be given, and consider the new linear combination

    d_1 (Σ_{i=1}^n c_{1,i} x_i) + ... + d_m (Σ_{i=1}^n c_{m,i} x_i).

Multiplying out and regrouping yields

    (Σ_{i=1}^m d_i c_{i,1}) x_1 + ... + (Σ_{i=1}^m d_i c_{i,n}) x_n,

which is again a linear combination of x_1, ..., x_n.

Corollary 1.27. If two matrices are row equivalent, then each row of the second is a linear combination of the rows of the first.

Proof: The idea is that one row-reduces a matrix by taking linear combinations of rows. If A and B are row equivalent to C, then each row of C is some linear combination of the rows of A and another linear combination of the rows of B. Since row-reduction is reversible, one can also say that the rows of B are linear combinations of the rows of C. By the Linear Combination Lemma one then gets that the rows of B are linear combinations of the rows of A.

Definition 1.28. The form of an m × n matrix is the sequence l_1, ..., l_m, where l_i is the column number of the leading entry of row i, and l_i = 0 if row i has no leading entry (i.e., it is a zero row).

Example. If

    A = ( 1 2 4 )
        ( 0 0 3 )
        ( 0 0 0 ),

then the form is 1, 3, 0.

Lemma 1.29. If two echelon form matrices are row equivalent, then their forms are equal sequences.

Remark 1.30. For a counterexample to an "if and only if" statement, consider

    A = ( 1 0 0 )        B = ( 1 0 0 )
        ( 0 2 0 ),           ( 0 2 1 ).
        ( 0 0 0 )            ( 0 0 0 )

Both have the form 1, 2, 0, yet the matrices are clearly not row equivalent.

Proof: Let B, D ∈ R^{m×n} be row equivalent. Let the form associated with B be given by l_1, ..., l_m, and let the form associated with D be given by k_1, ..., k_m. Let the rows of B be denoted by β_1, ..., β_m, with β_j = (β_{j,1}, ..., β_{j,n}), and the rows of D by δ_1, ..., δ_m, with δ_j = (δ_{j,1}, ..., δ_{j,n}). We need to show that l_i = k_i for each i. Let us first show that it holds for i = 1; the rest will follow by an induction argument. If β_1 is the zero row, then since B is in echelon form the matrix B is the zero matrix. By the above corollary this implies that D is also the zero matrix, so we are then done. Therefore, assume that β_1 and δ_1 are not zero rows. Since B and D are row equivalent, we then have that

    β_1 = s_1 δ_1 + s_2 δ_2 + ... + s_m δ_m,

or in particular,

    β_{1,j} = Σ_{i=1}^m s_i δ_{i,j};    β_{1,l_1} = Σ_{i=1}^m s_i δ_{i,l_1}.

By the definition of the form we have that β_{1,j} = 0 if j < l_1, and β_{1,l_1} ≠ 0. Similarly, δ_{i,j} = 0 if j < k_1, and δ_{1,k_1} ≠ 0. If l_1 < k_1 then the right-hand side of the above equation is zero. Since β_{1,l_1} ≠ 0, this then clearly implies that l_1 ≥ k_1. Writing δ_1 as a linear combination of β_1, ..., β_m and using the same argument as above shows that l_1 ≤ k_1; hence, l_1 = k_1.

Corollary 1.31. Any two echelon forms of a matrix have the same free variables, and consequently the same number of free variables.

Lemma 1.32. Each matrix is row equivalent to a unique reduced echelon form matrix.

Example. Suppose that

    A  ->  ( 1 0 3 )        B  ->  ( 1 0 2 )
           ( 0 1 2 ),              ( 0 1 3 ).
           ( 0 0 0 )               ( 0 0 0 )

(a) Are A and B row equivalent? No.
(b) Is either matrix nonsingular? No.

2. Vector Spaces

2.1. Definition of Vector Space

2.1.1. Definition and Examples

Definition 2.1. A vector space over R consists of a set V along with the two operations + and · such that
(a) if u, v, w ∈ V, then v + w ∈ V and
    v + w = w + v
    (v + w) + u = v + (w + u)
    there is a 0 ∈ V such that v + 0 = v for all v ∈ V (zero vector)
    for each v ∈ V there is a w ∈ V such that v + w = 0 (additive inverse)
(b) if r, s ∈ R and v, w ∈ V, then r · v ∈ V and
    (r + s) · v = r · v + s · v
    r · (v + w) = r · v + r · w
    (rs) · v = r · (s · v)
    1 · v = v.

Example. V = R^n with v + w = (v_i + w_i) and r · v = (r v_i).

Example. V = R^{m×n} with A + B = (a_{i,j} + b_{i,j}) and r · A = (r a_{i,j}).

Example. V = P_n = {Σ_{i=0}^n a_i x^i : a_0, ..., a_n ∈ R} with (p + q)(x) = p(x) + q(x) and (r · p)(x) = r p(x).

Remark 2.2. The set V = P = {p ∈ P_n : n ∈ N} is an infinite-dimensional vector space, whereas P_n is finite-dimensional.

Example. V = {Σ_{i=0}^n a_i cos(iθ) : a_0, ..., a_n ∈ R} with (f + g)(θ) = f(θ) + g(θ) and (r · f)(θ) = r f(θ).

Example. V = R^+ with x + y := xy and r · x := x^r. We have that with 0 := 1 this is a vector space.

Example. Is V = R^2 with

    v + w = (v_1 + w_1, v_2 + w_2),    r · v = (r v_1, v_2)

a vector space? The answer is no, as the multiplicative identities such as (r + s) · v = r · v + s · v are violated.

Example. Is V = R^2 with

    v + w = (v_1 + w_1, v_2 + w_2),    r · v = (r v_1, 0)

a vector space? The answer is no, as there is no multiplicative identity.

Example. The set {f : R -> R : f'' + f = 0} is a vector space, whereas the set {f : R -> R : f'' + f = 1} is not.

2.1.2. Subspaces and Spanning Sets
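The R^+ example can be spot-checked numerically: with addition defined as multiplication and scalar multiplication as exponentiation, the axioms reduce to the usual exponent laws. A minimal sketch using only the standard library:

```python
import math

# R^+ with  x (+) y = x*y  and  r (.) x = x**r  is a vector space whose
# zero vector is 1; spot-check two of the axioms numerically.
def vadd(x, y):       # "vector addition" on R^+
    return x * y

def smul(r, x):       # "scalar multiplication" on R^+
    return x ** r

x, r, s = 2.7, 1.3, -0.4
assert math.isclose(vadd(x, 1.0), x)                               # 1 acts as the zero vector
assert math.isclose(smul(r + s, x), vadd(smul(r, x), smul(s, x)))  # (r+s).x = r.x (+) s.x
print("axioms hold on the sample")
```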

Definition 2.3. A subspace is a subset of a vector space that is itself a vector space under the inherited operations.

Remark 2.4. Any vector space V has the trivial subspace {0} and the vector space itself as subspaces. These are the improper subspaces. Any other subspaces are proper.

Lemma 2.5. A nonempty S ⊆ V is a subspace if x, y ∈ S implies that r x + s y ∈ S for any r, s ∈ R.

Proof: By assumption the subset S is closed under vector addition and scalar multiplication. Since S ⊆ V the operations in S inherit the same properties as those operations in V; hence, the set S is a subspace.

Example. Examples of subspaces are
(a) S = {x ∈ R^3 : x_1 + 2x_2 + 5x_3 = 0}
(b) S = {p ∈ P_6 : p(3) = 0}
(c) S = {A ∈ R^{n×n} : a_{i,j} = 0 for i > j} (upper triangular matrices)
(d) S = {A ∈ R^{n×n} : a_{i,j} = 0 for i < j} (lower triangular matrices)
(e) If A ∈ R^{n×n}, the trace of A, denoted trace(A), is given by trace(A) = Σ_i a_{i,i}. The set S = {A ∈ R^{n×n} : trace(A) = 0} is a subspace.

Definition 2.6. If S = {x_1, ..., x_n}, the span of S will be denoted by [S].

Remark 2.7. From now on the multiplication r · x ∈ V will be written r x, where the multiplication will be assumed to be the multiplication associated with V.

Lemma 2.8. The span of any nonempty subset is a subspace.

Proof: Let u, v ∈ [S] be given. There then exist scalars such that

    u = Σ_i r_i x_i,    v = Σ_i s_i x_i.

For given scalars p, q ∈ R one then sees that

    p u + q v = p (Σ_i r_i x_i) + q (Σ_i s_i x_i) = Σ_i (p r_i + q s_i) x_i,

so that p u + q v ∈ [S]. Hence, [S] is a subspace.

Example. The set of solutions to a homogeneous linear system with coefficient matrix A will be denoted by N(A), the null space of A. It has been previously shown that there is a set of vectors S = {β_1, ..., β_k} such that N(A) = [S]. Hence, N(A) is a subspace.

2.2. Linear Independence

2.2.1. Definition and Examples

Definition 2.9. The vectors v_1, ..., v_n ∈ V are linearly independent if and only if the only solution to

    c_1 v_1 + ... + c_n v_n = 0

is c_1 = ... = c_n = 0. Otherwise, the vectors are linearly dependent.

Example.
(a) {v_1, v_2, v_3} ⊂ R^3, where

    v_1 = (1, -3, -1)^T,    v_2 = (-5, 8, 2)^T,    v_3 = (3, 5, 3)^T,

is a linearly dependent set, as 7 v_1 + 2 v_2 + v_3 = 0.
(b) {1 + x, 1 - x, 3x + x^2} ⊂ P_2 is a linearly independent set.

Lemma 2.10. If S ⊆ V and v ∈ V is given, then [S] = [S ∪ {v}] if and only if v ∈ [S].

Proof: If [S] = [S ∪ {v}], then since v ∈ [S ∪ {v}] one must have that v ∈ [S]. Now suppose that v ∈ [S], with S = {x_1, ..., x_n}, so that v = Σ_i c_i x_i. If w ∈ [S ∪ {v}], then one can write

    w = d_0 v + Σ_i d_i x_i,

which can be rewritten as

    w = Σ_i (d_0 c_i + d_i) x_i ∈ [S].

Hence, [S ∪ {v}] ⊆ [S]. It is clear that [S] ⊆ [S ∪ {v}].

Lemma 2.11. If S ⊆ V is a linearly independent set, then for any v ∈ V the set S ∪ {v} is linearly independent if and only if v ∉ [S].

Proof: Let S = {x_1, ..., x_n}. If v ∈ [S], then v = Σ_i c_i x_i, so that

    -v + Σ_i c_i x_i = 0.

Hence, S ∪ {v} is a linearly dependent set. Now suppose that S ∪ {v} is a linearly dependent set. There then exist constants c_0, c_1, ..., c_n, some of which are nonzero, such that

    c_0 v + Σ_i c_i x_i = 0.

If c_0 = 0, then Σ_i c_i x_i = 0, which contradicts the fact that the set S is linearly independent. Since c_0 ≠ 0, upon setting d_i = -c_i / c_0 one can then write v = Σ_i d_i x_i, so that v ∈ [S].

Corollary 2.12. Let S = {x_1, ..., x_n} ⊂ V, and define

    S_1 = {x_1},    S_j = S_{j-1} ∪ {x_j},    j = 2, ..., n.

S is a linearly dependent set if and only if there is an l ≤ n such that the set S_{l-1} is a linearly independent set and S_l is a linearly dependent set.
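Example (a) can be verified numerically; the vectors' digits are reconstructed here so that the stated dependence holds exactly. Assumes numpy:

```python
import numpy as np

# The dependent set from example (a), digits reconstructed so that
# 7*v1 + 2*v2 + v3 = 0 holds exactly.
v1 = np.array([1, -3, -1])
v2 = np.array([-5, 8, 2])
v3 = np.array([3, 5, 3])

assert np.allclose(7*v1 + 2*v2 + v3, 0)
# Rank 2 < 3 vectors, so the set is dependent:
print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3])))  # 2
```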

Remark 2.13. In other words, x_l = Σ_{i=1}^{l-1} c_i x_i.

Example. In a previous example we had S = {v_1, v_2, v_3} with S_2 = {v_1, v_2} being a linearly independent set and S_2 ∪ {v_3} being a linearly dependent set. The above results imply that [S_2] = [S].

Theorem 2.14. Let S = {x_1, ..., x_n} ⊂ V. The set S has a linearly independent subset with the same span.

Proof: Suppose that S is not a linearly independent set. This implies that for some 2 ≤ l ≤ n one has x_l = Σ_{i=1}^{l-1} c_i x_i. Now define S' = {x_1, ..., x_{l-1}, x_{l+1}, ..., x_n}. Since S = S' ∪ {x_l} and x_l ∈ [S'], one has that [S] = [S']. When considering S', remove the next dependent vector (if it exists) from the set {x_{l+1}, ..., x_n}, and call this new set S''. Using the same reasoning as above, [S'] = [S''], so that [S] = [S'']. Continuing in this fashion and using an induction argument, we can then remove all of the linearly dependent vectors without changing the span.

2.3. Basis and Dimension

2.3.1. Basis

Definition 2.15. The set S = {β_1, β_2, ...} is a basis for V if
(a) the vectors are linearly independent
(b) V = [S].

Remark 2.16. A basis will be denoted by ⟨β_1, β_2, ...⟩.

Definition 2.17. Set e_j = (e_{j,i}) ∈ R^n to be the vector which satisfies e_{j,i} = δ_{i,j}. The standard basis for R^n is given by ⟨e_1, ..., e_n⟩.

Remark 2.18. A basis is not unique. For example, one basis for R^2 is the standard basis, whereas another is

    ⟨ (1, 2)^T, (2, 5)^T ⟩.

Example.
(a) Two bases for P_3 are ⟨1, x, x^2, x^3⟩ and ⟨x, 1 + x, 1 + x + x^2, x + x^3⟩.
(b) Consider the subspace S ⊂ R^{2×2} which is given by

    S = {A ∈ R^{2×2} : a_{1,1} + 2a_{2,2} = 0, a_{1,2} - 3a_{2,1} = 0}.

Since any B ∈ S is of the form

    B = c_1 ( -2 0 ) + c_2 ( 0 3 )
             (  0 1 )       ( 1 0 ),

and the two above matrices are linearly independent, a basis for S is given by

    ⟨ ( -2 0 ),  ( 0 3 ) ⟩.
      (  0 1 )   ( 1 0 )

Lemma 2.19. The set S = {β_1, ..., β_n} is a basis if and only if each v ∈ V can be expressed as a linear combination of the vectors in S in a unique manner.

Proof: If S is a basis, then by definition v ∈ [S]. Suppose that v can be written in two different ways, i.e.,

    v = Σ_i c_i β_i,    v = Σ_i d_i β_i.

One clearly then has that Σ_i (c_i - d_i) β_i = 0, which, since the vectors in S are linearly independent, implies that c_i = d_i for i = 1, ..., n. Now suppose that V = [S]. Since 0 = Σ_i 0 β_i, and since vectors are expressed uniquely as linear combinations of the vectors of S, by definition the vectors in S are linearly independent. Hence, S is a basis.

Definition 2.20. Let B = ⟨β_1, ..., β_n⟩ be a basis for V. For a given v ∈ V there are unique constants c_1, ..., c_n such that v = Σ_i c_i β_i. The representation of v with respect to B is given by

    Rep_B(v) := (c_1, c_2, ..., c_n)^T_B.

The constants c_1, ..., c_n are the coordinates of v with respect to B.

Example. For P_3, consider the two bases B = ⟨1, x, x^2, x^3⟩ and D = ⟨x, 1 + x, 1 + x + x^2, x + x^3⟩. One then has that

    Rep_B(2x + x^3) = (0, 2, 0, 1)^T_B,    Rep_D(2x + x^3) = (1, 0, 0, 1)^T_D.
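Finding a representation such as Rep_D(2x + x^3) amounts to solving a linear system: write each D-basis vector in monomial coordinates and put those as the columns of a matrix. A sketch assuming numpy:

```python
import numpy as np

# Rep_D(2x + x^3) for D = <x, 1+x, 1+x+x^2, x+x^3>: each basis vector,
# written in monomial coordinates (1, x, x^2, x^3), is a column of M,
# and we solve M c = p.
M = np.array([[0, 1, 1, 0],    # constant terms
              [1, 1, 1, 1],    # x terms
              [0, 0, 1, 0],    # x^2 terms
              [0, 0, 0, 1]],   # x^3 terms
             dtype=float)
p = np.array([0, 2, 0, 1], dtype=float)  # 2x + x^3

c = np.linalg.solve(M, p)
print(c)  # [1. 0. 0. 1.]
```

Indeed, 1·x + 1·(x + x^3) = 2x + x^3, matching the representation stated in the example.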

2.3.2. Dimension

Definition 2.21. A vector space is finite-dimensional if it has a basis with only finitely many vectors.

Example. An example of an infinite-dimensional space is P, which has as a basis 1, x, ..., x^n, ....

Definition 2.22. The transpose of a matrix A ∈ R^{m×n}, denoted by A^T ∈ R^{n×m}, is formed by interchanging the rows and columns of A.

Example.

    A = ( 1 4 )
        ( 2 5 )    =>    A^T = ( 1 2 3 )
        ( 3 6 )                ( 4 5 6 ).

Theorem 2.23. In any finite-dimensional vector space all of the bases have the same number of elements.

Proof: Let B = ⟨β_1, ..., β_k⟩ be one basis, and let D = ⟨δ_1, ..., δ_l⟩ be another basis. Suppose that k > l. For each i = 1, ..., k we can write β_i = Σ_j a_{i,j} δ_j, which yields a matrix A = (a_{i,j}) ∈ R^{k×l}. Now let v ∈ V be given. Since B and D are bases, there are unique vectors Rep_B(v) = v^B = (v_i^B) ∈ R^k and Rep_D(v) = v^D = (v_j^D) ∈ R^l such that

    v = Σ_{i=1}^k v_i^B β_i = Σ_{j=1}^l v_j^D δ_j.

The above can be rewritten as

    Σ_{j=1}^l ( Σ_{i=1}^k a_{i,j} v_i^B ) δ_j = Σ_{j=1}^l v_j^D δ_j.

This then implies that the vector v^B is a solution to the linear system with the augmented matrix (A^T | v^D), where A^T = (a_{j,i}) ∈ R^{l×k} is the transpose of A. Since l < k, when A^T is row-reduced it will have free variables, which implies that the linear system has an infinite number of solutions. This contradicts the uniqueness of the representation, and yields that k ≤ l. If k < l, then by writing δ_i = Σ_j c_{i,j} β_j and using the above argument one gets that k ≥ l. Hence, k = l.

Definition 2.24. The dimension of a vector space V, dim(V), is the number of basis vectors.

Example.
(a) Since the standard basis for R^n has n vectors, dim(R^n) = n.
(b) Since a basis for P_n is ⟨1, x, ..., x^n⟩, one has that dim(P_n) = n + 1.
(c) Recall that the subspace

    S = {A ∈ R^{2×2} : a_{1,1} + 2a_{2,2} = 0, a_{1,2} - 3a_{2,1} = 0}

has a basis consisting of two matrices. This implies that dim(S) = 2.

Corollary 2.25. No linearly independent set can have more vectors than dim(V).

Proof: Suppose that S = {x_1, ..., x_n} is a linearly independent set with n > dim(V). Since S ⊂ V one has that [S] ⊆ V. Furthermore, since the set is linearly independent, dim([S]) = n. This implies that dim(V) ≥ n, which is a contradiction.

Corollary 2.26. Any linearly independent set S can be expanded to make a basis.

Proof: Suppose that dim(V) = n, and that dim([S]) = k < n. There then exist n - k linearly independent vectors v_1, ..., v_{n-k} such that v_i ∉ [S]. The set S' = S ∪ {v_1, ..., v_{n-k}} is a linearly independent set with dim([S']) = n. As a consequence, S' is a basis for V.

Corollary 2.27. If dim(V) = n, then a set of n vectors S = {x_1, ..., x_n} is linearly independent if and only if V = [S].

Proof: Suppose that V = [S]. If the vectors are not linearly independent, then (upon a possible reordering) there is an l < n such that for S' = {x_1, ..., x_l} one has [S] = [S'] with the vectors in S' being linearly independent. This implies that V = [S'], and that dim(V) = l < n. Now suppose that the vectors are linearly independent. If [S] ≠ V, then there is at least one vector v such that S ∪ {v} is linearly independent with [S ∪ {v}] ⊆ V. This implies that dim(V) ≥ n + 1.

Remark 2.28. Put another way, the above corollary states that if dim(V) = n and the set S = {x_1, ..., x_n} is linearly independent, then S is a spanning set for V.

2.3.3. Vector Spaces and Linear Systems

Definition 2.29. The null space of a matrix A, denoted by N(A), is the set of all solutions to the homogeneous system for which A is the coefficient matrix.

Example. Consider

    A = ( 1  2  3  4 )
        ( 5  6  7  8 )
        ( 9 10 11 12 ).

A basis for N(A) is given by (1, -2, 1, 0)^T, (2, -3, 0, 1)^T.

Definition 2.30. The row space of a matrix A, denoted by Rowspace(A), is the span of the set of the rows of A. The row rank is the dimension of the row space.
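The null-space example can be checked directly: each claimed basis vector is annihilated by A, and the nullity matches the number of free variables (columns minus rank). Assumes numpy:

```python
import numpy as np

# Verify the claimed basis of N(A): both vectors are killed by A, and
# nullity = (number of columns) - rank = 4 - 2 = 2.
A = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
b1 = np.array([1, -2, 1, 0])
b2 = np.array([2, -3, 0, 1])

assert np.allclose(A @ b1, 0) and np.allclose(A @ b2, 0)
rank = np.linalg.matrix_rank(A)
print(rank, A.shape[1] - rank)  # 2 2
```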

Example. If A = (1 0; 0 2), then Rowspace(A) = [{(1 0), (0 2)}], and the row rank is 2.

Lemma 2.31. The nonzero rows of an echelon form matrix make up a linearly independent set.

Proof: We have already seen that in an echelon form matrix no nonzero row is a linear combination of the other rows.

Corollary 2.32. Suppose that a matrix A has been put in echelon form. The nonzero rows of the echelon form matrix are a basis for Rowspace(A).

Proof: If A -> B, where B is in echelon form, then it is known that each row of A is a linear combination of the rows of B. The converse is also true; hence, Rowspace(A) = Rowspace(B). Since the nonzero rows of B are linearly independent, they form a basis for Rowspace(B), and hence for Rowspace(A).

Definition 2.33. The column space of a matrix A, denoted by R(A), is the span of the set of the columns of A. The column rank is the dimension of the column space.

Remark 2.34. A basis for R(A) is found by determining Rowspace(A^T), and the column rank of A is the dimension of Rowspace(A^T).

Example. Consider

    A = (1 0 1; 0 1 4; 2 2 10),   A^T = (1 0 2; 0 1 2; 1 4 10).

A basis for Rowspace(A) is (1 0 1), (0 1 4), and a basis for R(A) is (1, 0, 2)^T, (0, 1, 2)^T.

Theorem 2.35. The row rank and column rank of a matrix are equal.

Proof: First note that row operations do not change the column rank of a matrix. If A = (a_1 ... a_n) in R^{m x n}, where each column a_i is in R^m, then finding the set of homogeneous solutions for the linear system with coefficient matrix A is equivalent to solving

    c_1 a_1 + ... + c_n a_n = 0.

Row operations leave unchanged the set of solutions (c_1, ..., c_n); hence, the linear independence of the columns is unchanged, and the dependence of one column on the others remains unchanged. Now bring the matrix to reduced echelon form, so that each column with a leading entry is one of the e_i's from the standard basis. The row rank is equal to the number of rows with leading entries. The column rank of the reduced matrix is equal to that of the original matrix.
It is clear that the column rank of the reduced matrix is also equal to the number of leading entries.
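Theorem 2.35 can be spot-checked numerically: the pivot count of a matrix and of its transpose agree. A small sketch (the rank routine and the sample matrix are illustrative, not from the notes):

```python
from fractions import Fraction

# Sketch of Theorem 2.35: row rank (pivots of M) equals column rank (pivots of M^T).
def rank(M):
    R = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(R), len(R[0]), 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if R[i][c] != 0), None)
        if p is None:
            continue
        R[r], R[p] = R[p], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(rows):
            if i != r and R[i][c] != 0:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        r += 1
    return r

M = [[1, 0, 1], [0, 1, 4], [2, 2, 10]]    # third row = 2*(row1) + 2*(row2)
Mt = [list(col) for col in zip(*M)]
assert rank(M) == rank(Mt) == 2
```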

Definition 2.36. The rank of a matrix A, denoted by rank(A), is its row rank.

Remark 2.37. Note that the above statements imply that rank(A) = rank(A^T).

Theorem 2.38. If A in R^{m x n}, then rank(A) + dim(N(A)) = n.

Proof: Put the matrix A in reduced echelon form. One has that rank(A) is the number of leading entries, and that dim(N(A)) is the number of free variables. It is clear that these numbers sum to n.

2.3.4. Combining Subspaces

Definition 2.39. If W_1, ..., W_k are subspaces of V, then their sum is given by

    W_1 + ... + W_k = [W_1 union ... union W_k].

Let a basis for W_j be given by w_{1,j}, ..., w_{l(j),j}. If v is in W_1 + ... + W_k, then this implies there are constants c_{i,j} such that

    v = sum_{i=1}^{l(1)} c_{i,1} w_{i,1} + ... + sum_{i=1}^{l(k)} c_{i,k} w_{i,k}.

Note that sum_i c_{i,j} w_{i,j} is in W_j. For example, when considering P_3 suppose that a basis for W_1 is 1, 1 + x^2, and that a basis for W_2 is x, 1 + x, x^3. If p is in W_1 + W_2, then

    p(x) = c_1 + c_2 (1 + x^2) + d_1 x + d_2 (1 + x) + d_3 x^3.

Q: How does the dimension of each W_i relate to the dimension of W_1 + ... + W_k? In the above example, dim(W_1) = 2, dim(W_2) = 3, but dim(P_3) = dim(W_1 + W_2) = 4. Thus, for this example dim(W_1 + W_2) <= dim(W_1) + dim(W_2).

Definition 2.40. A collection of subspaces {W_1, ..., W_k} is independent if for each i = 1, ..., k,

    W_i intersect ( sum_{j != i} W_j ) = {0}.

Example. Suppose that

    W_1 = [e_1],  W_2 = [e_2],  W_3 = [e_1 + e_2, e_3]   (subspaces of R^3).

It is clear that W_i intersect W_j = {0} for i != j. However, the subspaces are not independent, as

    W_3 intersect (W_1 + W_2) = [e_1 + e_2].
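As a quick check of Theorem 2.38 on a concrete 4 x 3 matrix, row reduction produces two pivot columns and one free column:

```latex
\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ 10 & 11 & 12 \end{pmatrix}
\;\longrightarrow\;
\begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},
\qquad
\operatorname{rank}(A) + \dim\bigl(\mathcal{N}(A)\bigr) = 2 + 1 = 3 = n.
```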

Definition 2.41. A vector space V is the direct sum of the subspaces W_1, ..., W_k if
(a) the subspaces are independent,
(b) V = W_1 + ... + W_k.
In this case we write V = W_1 (+) ... (+) W_k.

Example.
(a) R^n = [e_1] (+) [e_2] (+) ... (+) [e_n]
(b) P_n = [1] (+) [x] (+) ... (+) [x^n]

Lemma 2.42. If V = W_1 (+) ... (+) W_k, then dim(V) = sum_i dim(W_i).

Example. In a previous example we had that R^3 = W_1 + W_2 + W_3, with

    dim(W_1) = dim(W_2) = 1,  dim(W_3) = 2.

Since the dimensions sum to 4 > 3, the sum cannot be direct.

Proof: First show that the result is true for k = 2, and then use induction to prove the general result. Let a basis for W_1 be given by beta_1, ..., beta_k, and a basis for W_2 be given by delta_1, ..., delta_l. This yields that dim(W_1) = k and dim(W_2) = l. Since W_1 intersect W_2 = {0}, the set {beta_1, ..., beta_k, delta_1, ..., delta_l} is linearly independent, and forms a basis for [W_1 union W_2]. Since V = [W_1 union W_2], this then yields that a basis for V is beta_1, ..., beta_k, delta_1, ..., delta_l; thus, dim(V) = k + l = dim(W_1) + dim(W_2).

Definition 2.43. If V = W_1 (+) W_2, then the subspaces W_1 and W_2 are said to be complements.

Definition 2.44. For vectors u, v in R^n, define the dot product (or inner product) to be

    u . v = sum_{i=1}^n u_i v_i.

The dot product has the properties that
(a) u . v = v . u
(b) (a u + b v) . w = a u . w + b v . w
(c) u . u >= 0, with u . u = 0 if and only if u = 0.
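The three listed properties of the dot product are easy to spot-check in code; the sample vectors and scalars below are illustrative only:

```python
# Sketch of Definition 2.44 and its properties (a)-(c) on sample vectors.
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

u, v, w = [1, -2, 4], [0, 3, 5], [2, 2, -1]
a, b = 3, -2
assert dot(u, v) == dot(v, u)                               # (a) symmetry
lhs = dot([a * x + b * y for x, y in zip(u, v)], w)
assert lhs == a * dot(u, w) + b * dot(v, w)                 # (b) linearity
assert dot(u, u) >= 0 and dot([0, 0, 0], [0, 0, 0]) == 0    # (c) positivity
```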

Definition 2.45. If U is a subspace of R^n, define the orthocomplement of U to be

    U^perp = {v in R^n : v . u = 0 for all u in U}.

Proposition 2.46. U^perp is a subspace.

Proof: Let v, w be in U^perp. Since

    (a v + b w) . u = a v . u + b w . u = 0

for any u in U, this implies that a v + b w is in U^perp. Hence U^perp is a subspace.

Example. If U = [e_1] in R^2, then U^perp = [e_2], and if U = [e_1, e_2] in R^3, then U^perp = [e_3].

Remark 2.47. If A = (a_1 ... a_n) in R^{m x n}, recall that R(A) = [a_1, ..., a_n]. Thus, b is in R(A) if and only if b = sum_i c_i a_i. Furthermore, N(A) = {x = (x_i) : sum_i x_i a_i = 0}.

Theorem 2.48. If A in R^{m x n}, then N(A^T) = R(A)^perp.

Proof: Suppose that x is in N(A^T), so that x . a_i = 0 for i = 1, ..., n. As a consequence, x . (sum_i c_i a_i) = 0, so that x is in R(A)^perp. Hence, N(A^T) is contained in R(A)^perp. Similarly, if y is in R(A)^perp, one gets that y is in N(A^T), so that R(A)^perp is contained in N(A^T).

Remark 2.49. Alternatively, one has that N(A) = R(A^T)^perp.

Theorem 2.50. If U is a subspace of R^n, then R^n = U (+) U^perp.

Proof: Let a basis for U be given by beta_1, ..., beta_k, and set A = (beta_1 ... beta_k) in R^{n x k}. By construction, rank(A) = k (and rank(A^T) = k). By the previous theorem we have that U^perp = N(A^T). Since dim(N(A^T)) + rank(A^T) = n, we get that dim(U^perp) = n - k. We must now show that U intersect U^perp = {0}. Let delta be in U intersect U^perp, which implies that delta = sum_i c_i beta_i. By using the linearity of the inner product,

    delta . delta = sum_{i=1}^k c_i (beta_i . delta) = 0,

so that delta = 0.

Remark 2.51. A consequence of the above theorem is that (U^perp)^perp = U.

Example. Suppose that a basis for U is beta_1, ..., beta_k. The above theorem shows us how to compute a basis for U^perp. Simply construct the matrix A = (beta_1 ... beta_k), and then find a basis for N(A^T) = U^perp. For example, suppose that U = [a_1, a_2], where

    a_1 = (1, 0, -2, -4)^T,   a_2 = (3, 1, -4, -7)^T.

Since

    A^T = (1 0 -2 -4; 3 1 -4 -7)  ->  (1 0 -2 -4; 0 1 2 5),

one has that U^perp = N(A^T) = [delta_1, delta_2], where

    delta_1 = (2, -2, 1, 0)^T,   delta_2 = (4, -5, 0, 1)^T.

Corollary 2.52. Consider a linear system whose associated augmented matrix is (A | b). The system is consistent if and only if b is in N(A^T)^perp.

Proof: If A = (a_1 ... a_n), then the system is consistent if and only if b is in R(A), i.e., b = sum_i c_i a_i. By the above theorem R(A) = N(A^T)^perp.

Example. As a consequence, the system is consistent if and only if b . delta_i = 0, where delta_1, ..., delta_k is a basis for N(A^T). For example, suppose that A is as in the previous example. Then for b = (b_i) in R^4, the associated linear system will be consistent if and only if delta_1 . b = 0, delta_2 . b = 0. In other words, the components of the vector b must satisfy the linear system

    2b_1 - 2b_2 + b_3 = 0
    4b_1 - 5b_2 + b_4 = 0,

which implies that b is in S = [b_1, b_2], where

    b_1 = (1, 0, -2, -4)^T,   b_2 = (0, 1, 2, 5)^T.

Note that S = R(A), so that a basis for R(A) is b_1, b_2. Further note that this is consistent with the reduced echelon form of A^T.
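The orthocomplement computation above is easy to verify directly: each delta_i must be orthogonal to each a_j (vector values as read from the example; dimensions then match, since 2 + 2 = 4).

```python
# Verifying U-perp = N(A^T) for the worked example: delta_i . a_j = 0 for all i, j.
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a1, a2 = [1, 0, -2, -4], [3, 1, -4, -7]   # basis of U
d1, d2 = [2, -2, 1, 0], [4, -5, 0, 1]     # computed basis of U-perp
assert all(dot(d, a) == 0 for d in (d1, d2) for a in (a1, a2))
```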

3. Maps Between Spaces

3.1. Isomorphisms

3.1.1. Definition and Examples

Definition 3.1. Let V and W be vector spaces. A map f : V -> W is one-to-one if v_1 != v_2 implies that f(v_1) != f(v_2). The map is onto if for each w in W there is a v in V such that f(v) = w.

Example.
(a) The map f : P_n -> R^{n+1} given by

    a_0 + a_1 x + ... + a_n x^n  |->  (a_0, a_1, ..., a_n)^T

is one-to-one and onto.
(b) The map f : R^{2 x 2} -> R^4 given by

    (a b; c d)  |->  (a, b, c, d)^T

is one-to-one and onto.

Definition 3.2. The map f : V -> W is an isomorphism if
(a) f is one-to-one and onto,
(b) f is linear, i.e.,

    f(v_1 + v_2) = f(v_1) + f(v_2),   f(r v) = r f(v) for any r in R.

We write V ~= W, and say that V is isomorphic to W.

Remark 3.3. If V ~= W, then we can think of V and W as being the same.

Example.
(a) In the above examples it is easy to see that the maps are linear. Hence, P_n ~= R^{n+1} and R^{2 x 2} ~= R^4.
(b) In general, R^{m x n} ~= R^{mn}.

Definition 3.4. If f : V -> V is an isomorphism, then we say that f is an automorphism.

Example.
(a) The dilation map d_s : R^2 -> R^2 given by d_s(v) = s v for some nonzero s in R is an automorphism.

(b) The rotation map t_theta : R^2 -> R^2 given by

    t_theta(v) = (cos(theta) v_1 - sin(theta) v_2, sin(theta) v_1 + cos(theta) v_2)^T

is an automorphism.

Lemma 3.5. If f : V -> W is linear, then f(0) = 0.

Proof: Since f is linear, f(0) = f(0 . v) = 0 . f(v) = 0.

Lemma 3.6. The statement that f : V -> W is linear is equivalent to

    f(c_1 v_1 + ... + c_n v_n) = c_1 f(v_1) + ... + c_n f(v_n).

Proof: Proof by induction. By definition the statement holds for n = 1, so now suppose that it holds for n = N. This yields

    f( sum_{i=1}^N c_i v_i + c_{N+1} v_{N+1} ) = f( sum_{i=1}^N c_i v_i ) + f(c_{N+1} v_{N+1})
                                               = sum_{i=1}^N c_i f(v_i) + c_{N+1} f(v_{N+1}).

Definition 3.7. Let U, V be vector spaces. The external direct sum, W = U (+) V, is defined by

    W = {(u, v) : u in U, v in V},

along with the operations

    w_1 + w_2 = (u_1 + u_2, v_1 + v_2),   r w = (r u, r v).

Lemma 3.8. The external direct sum W = U (+) V is a vector space. Furthermore, dim(W) = dim(U) + dim(V).

Proof: It is easy to check that W is a vector space. Let S_U = {u_1, ..., u_k} be a basis for U, and let S_V = {v_{k+1}, ..., v_l} be a basis for V. Given a w = (u, v) in W, it is clear that one can write w = (sum_i c_i u_i, sum_j d_j v_j); hence, a potential basis for W is

    w_i = (u_i, 0) for i = 1, ..., k;   w_i = (0, v_i) for i = k + 1, ..., l.

We need to check that the vectors w_1, ..., w_l are linearly independent. Writing sum_i c_i w_i = 0 is equivalent to the equations

    sum_{i=1}^k c_i u_i = 0,   sum_{i=k+1}^l c_i v_i = 0.

Since S_U and S_V are bases, the only solution is c_i = 0 for all i.

Example. A basis for P_2 (+) R^2 is given by

    (1, 0), (x, 0), (x^2, 0), (0, e_1), (0, e_2),

and P_2 (+) R^2 ~= R^5 via the isomorphism

    (a_0 + a_1 x + a_2 x^2, c_1 e_1 + c_2 e_2)  |->  (a_0, a_1, a_2, c_1, c_2)^T.

3.1.2. Dimension Characterizes Isomorphism

Note that in all of the examples up to this point, if U ~= V, then it was true that dim(U) = dim(V). The question: does the dimension of two vector spaces say anything about whether or not they are isomorphic?

Lemma 3.9. If V ~= W, then dim(V) = dim(W).

Proof: Let f : V -> W be an isomorphism. Let beta_1, ..., beta_n be a basis for V, and consider the set S_W = {f(beta_1), ..., f(beta_n)}. First, the set is linearly independent, as

    0 = sum_{i=1}^n c_i f(beta_i) = f( sum_{i=1}^n c_i beta_i )

implies that sum_i c_i beta_i = 0 (f is one-to-one), which further implies that c_i = 0 for all i. Since f is onto, for each w in W there is a v in V such that f(v) = w. Upon writing v = sum_i d_i beta_i and using the linearity of the function f we get that

    w = sum_{i=1}^n d_i f(beta_i).

Hence, S_W is a basis, and we then have the result.

Lemma 3.10. If dim(V) = dim(W), then the two spaces are isomorphic.

Proof: It will be enough to show that if dim(V) = n, then V ~= R^n. A similar result will yield W ~= R^n, which would then yield V ~= R^n ~= W, i.e., V ~= W. Let B = (beta_1, ..., beta_n) be a basis for V, and consider the map Rep_B : V -> R^n given by

    Rep_B(v) = (c_1, c_2, ..., c_n)^T,   v = sum_{i=1}^n c_i beta_i.

The map is clearly linear. The map is one-to-one, for if Rep_B(u) = Rep_B(v) with u = sum_i c_i beta_i, v = sum_i d_i beta_i, then c_i = d_i for all i, which implies that u = v. Finally, the map is clearly onto. Hence, Rep_B is an isomorphism, so that V ~= R^n.

Theorem 3.11. V ~= W if and only if dim(V) = dim(W).

Corollary 3.12. If dim(V) = k, then V ~= R^k.

Example (cont.).
(a) Since dim(R^{m x n}) = mn, R^{m x n} ~= R^{mn}.
(b) Since dim(P_n) = n + 1, P_n ~= R^{n+1}.

3.2. Homomorphisms

3.2.1. Definition

Definition 3.13. If h : V -> W is linear, then it is a homomorphism.

Example.
(a) The projection map pi : R^3 -> R^2 given by

    pi((x_1, x_2, x_3)^T) = (x_1, x_2)^T

is a homomorphism. However, it is not an isomorphism, as the map is not one-to-one, i.e., pi(r e_3) = 0 for any r in R.
(b) The derivative map d/dx : P_n -> P_n given by

    d/dx (a_0 + a_1 x + ... + a_n x^n) = a_1 + 2a_2 x + ... + n a_n x^{n-1}

is a homomorphism. However, it is not an isomorphism, as the map is not one-to-one, i.e., d/dx(a_0) = 0 for any a_0 in R.

Definition 3.14. If h : V -> V, then it is called a linear transformation.

Theorem 3.15. Let v_1, ..., v_n be a basis for V, and let {w_1, ..., w_n} in W be given. There exists a unique homomorphism h : V -> W such that h(v_j) = w_j for j = 1, ..., n.

Proof: Set h : V -> W to be the map given by

    h(c_1 v_1 + ... + c_n v_n) = c_1 w_1 + ... + c_n w_n.

The map is linear, for if u_1 = sum_i c_i v_i, u_2 = sum_i d_i v_i, then

    h(r_1 u_1 + r_2 u_2) = h( sum_i (r_1 c_i + r_2 d_i) v_i )
                         = sum_i (r_1 c_i + r_2 d_i) w_i
                         = r_1 h(u_1) + r_2 h(u_2).

The map is unique, for if g : V -> W is a homomorphism such that g(v_i) = w_i, then

    g(v) = g( sum_i c_i v_i ) = sum_i c_i g(v_i) = sum_i c_i w_i = h(v);

hence, g(v) = h(v) for all v in V, so that they are the same map.

Example.
(a) The rotation map t_theta : R^2 -> R^2 is an automorphism which satisfies

    t_theta(e_1) = (cos theta, sin theta)^T,   t_theta(e_2) = (-sin theta, cos theta)^T.

One then has

    t_theta(v) = v_1 (cos theta, sin theta)^T + v_2 (-sin theta, cos theta)^T.

(b) Suppose that a homomorphism h : P_2 -> P_3 satisfies

    h(1) = x,   h(x) = (1/2) x^2,   h(x^2) = (1/3) x^3.

Then

    h(a_0 + a_1 x + a_2 x^2) = a_0 x + (1/2) a_1 x^2 + (1/3) a_2 x^3.

The map is not an isomorphism, as it is not onto.

3.2.2. Rangespace and Nullspace

Definition 3.16. Let h : V -> W be a homomorphism. The range space is given by

    R(h) := {h(v) : v in V}.

The rank of h, rank(h), satisfies rank(h) = dim(R(h)).

Lemma 3.17. R(h) is a subspace.

Proof: Let w_1, w_2 in R(h) be given. There then exist v_1, v_2 such that h(v_i) = w_i. Since h(c_1 v_1 + c_2 v_2) is in R(h) and h(c_1 v_1 + c_2 v_2) = c_1 w_1 + c_2 w_2, one has that c_1 w_1 + c_2 w_2 is in R(h). Hence, it is a subspace.

Remark 3.18.
(a) rank(h) <= dim(W).
(b) h is onto if and only if rank(h) = dim(W).
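Theorem 3.15 has a direct coordinate sketch: once the images w_j of the basis vectors are fixed, h on any coordinate vector (c_1, ..., c_n) is forced to be sum_j c_j w_j. The image vectors below are illustrative, not from the notes.

```python
# Sketch of Theorem 3.15: a homomorphism is pinned down by its action on a basis.
def h(coords, images):
    """Extend linearly: h(sum c_j v_j) = sum c_j * images[j]."""
    out = [0] * len(images[0])
    for c, w in zip(coords, images):
        out = [o + c * wi for o, wi in zip(out, w)]
    return out

images = [[1, 0, 2], [0, 1, -1]]          # h(v1), h(v2) in R^3 (illustrative)
assert h([2, 3], images) == [2, 3, 1]     # h(2 v1 + 3 v2) = 2 h(v1) + 3 h(v2)
```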

Example. If h : R^{2 x 2} -> P_3 is given by

    h(A) = (a + b) x + c x^2 + d x^3,   A = (a b; c d),

then a basis for R(h) is x, x^2, x^3, so that rank(h) = 3.

Definition 3.19. The inverse map h^{-1} : W -> V is given by

    h^{-1}(w) := {v : h(v) = w}.

Lemma 3.20. Let h : V -> W be a homomorphism, and let S contained in R(h) be a subspace. Then

    h^{-1}(S) := {v in V : h(v) in S}

is a subspace. In particular, h^{-1}(0) is a subspace.

Definition 3.21. The null space (kernel) of the homomorphism h : V -> W is given by

    N(h) := {v in V : h(v) = 0} = h^{-1}(0).

The nullity of h is dim(N(h)).

Example. Again consider the map h : R^{2 x 2} -> P_3. It is clear that h(A) = 0 if and only if a + b = 0, c = d = 0, so that a basis for N(h) is given by

    (1 -1; 0 0).

Note that for this example, rank(h) + dim(N(h)) = 4.

Theorem 3.22. Let h : V -> W be a homomorphism. Then

    rank(h) + dim(N(h)) = dim(V).

Remark 3.23. Compare this result to that for matrices, where if A in R^{m x n}, then rank(A) + dim(N(A)) = n.

Proof: Let B_N = (beta_1, ..., beta_k) be a basis for N(h), and extend it to a basis B_V = (beta_1, ..., beta_k, v_1, ..., v_l) for V, where k + l = n. Set B_R = (h(v_1), ..., h(v_l)). We need to show that B_R is a basis for R(h). First consider

    0 = sum_i c_i h(v_i) = h( sum_i c_i v_i ).

Thus, sum_i c_i v_i is in N(h), so that sum_i c_i v_i = sum_i d_i beta_i. Since B_V is a basis, this yields that c_1 = ... = c_l = d_1 = ... = d_k = 0, so that B_R is a linearly independent set. Now suppose that h(v) is in R(h). Since v = sum_i a_i beta_i + sum_i b_i v_i, upon using the fact that h is linear we get that

    h(v) = h( sum_i a_i beta_i ) + sum_i b_i h(v_i) = 0 + sum_i b_i h(v_i).

Hence, B_R is a spanning set for R(h). B_N is a basis for N(h), and B_R is a basis for R(h). The result is now clear.

Remark 3.24.
(a) It is clear that rank(h) <= dim(V), with equality if and only if dim(N(h)) = 0.
(b) If dim(W) > dim(V), then h cannot be onto, as rank(h) <= dim(V) < dim(W).

Lemma 3.25. Let h : V -> W be a homomorphism. dim(N(h)) = 0 if and only if h is one-to-one.

Proof: If h is one-to-one, then the only solution to h(v) = 0 is v = 0. Hence, dim(N(h)) = 0. Now suppose that dim(N(h)) = 0. From the above lemma we have that if B_V = (v_1, ..., v_n) is a basis for V, then B_R = (h(v_1), ..., h(v_n)) is a basis for R(h). Suppose that there is a w in W such that h(u_1) = h(u_2) = w. We have that u_1 = sum_i a_i v_i, u_2 = sum_i b_i v_i, so that upon using the linearity of h,

    sum_i a_i h(v_i) = sum_i b_i h(v_i).

Since B_R is a basis, this implies that a_i = b_i for all i, so that u_1 = u_2. Hence, h is one-to-one.

Definition 3.26. A one-to-one homomorphism is nonsingular.

3.3. Computing Linear Maps

3.3.1. Representing Linear Maps with Matrices

Recall that if B = (v_1, ..., v_n) is a basis for V, then a uniquely defined homomorphism h : V -> W is given by

    h(v) = h( sum_i c_i v_i ) := sum_i c_i h(v_i),

i.e., the homomorphism is determined by its action on the basis.

Definition 3.27. Let A = (a_1 a_2 ... a_n) in R^{m x n}, and let c in R^n. The matrix-vector product is defined by

    A c = sum_i c_i a_i.

Remark 3.28.
(a) Matrix multiplication is a homomorphism from R^n to R^m.
(b) A linear system can be written as A x = b, where A is the coefficient matrix and x is the vector of variables.

Example. Suppose that h : P_1 -> R^3, and that

    B = (2, 1 + 4x),   D = (e_1, 2e_2, e_1 + e_3)

are the bases for these spaces. Suppose that

    h(2) = (0, 1, 0)^T,   h(1 + 4x) = (1, 2, 1)^T.

It is easy to check that

    Rep_D(h(2)) = (0, 1/2, 0)^T,   Rep_D(h(1 + 4x)) = (0, 1, 1)^T.

Thus, if p = c_1 (2) + c_2 (1 + 4x), i.e.,

    Rep_B(p) = (c_1, c_2)^T,

and since h(p) = c_1 h(2) + c_2 h(1 + 4x), by using the fact that Rep_D is linear one gets that

    Rep_D(h(p)) = c_1 Rep_D(h(2)) + c_2 Rep_D(h(1 + 4x)).

If one defines the matrix

    Rep_{B,D}(h) := ( Rep_D(h(2))  Rep_D(h(1 + 4x)) ) = (0 0; 1/2 1; 0 1),

then one has that

    Rep_D(h(p)) = Rep_{B,D}(h) Rep_B(p).

For example, if

    Rep_B(p) = (-1, 2)^T   (i.e., p(x) = -2 + 2(1 + 4x) = 8x),

then

    Rep_D(h(p)) = (0 0; 1/2 1; 0 1)(-1, 2)^T = (0, 3/2, 2)^T,

so that

    h(p) = (3/2)(2e_2) + 2(e_1 + e_3) = (2, 3, 2)^T.

Definition 3.29. Let h : V -> W be a homomorphism. Suppose that B = (v_1, ..., v_n) is a basis for V, and D = (w_1, ..., w_m) is a basis for W. Set

    h_j := Rep_D(h(v_j)),   j = 1, ..., n.

The matrix representation of h with respect to B, D is given by

    Rep_{B,D}(h) := (h_1 h_2 ... h_n) in R^{m x n}.

Lemma 3.30. Let h : V -> W be a homomorphism. Then

    Rep_D(h(v)) = Rep_{B,D}(h) Rep_B(v).
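The h : P_1 -> R^3 computation can be rechecked mechanically, using the bases and images as read from the example above (exact arithmetic via `Fraction`):

```python
from fractions import Fraction as F

# Re-check of Lemma 3.30 on the worked example: Rep_D(h(p)) = Rep_{B,D}(h) Rep_B(p).
H = [[0, 0], [F(1, 2), 1], [0, 1]]        # columns: Rep_D(h(2)), Rep_D(h(1+4x))

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

rep_p = [-1, 2]                           # p(x) = 8x = -1*(2) + 2*(1 + 4x)
rep_hp = matvec(H, rep_p)
assert rep_hp == [0, F(3, 2), 2]
# Translate back through D = (e1, 2 e2, e1 + e3): h(p) = (3/2)(2 e2) + 2(e1 + e3).
D = [[1, 0, 0], [0, 2, 0], [1, 0, 1]]     # the D basis vectors, stored as rows
hp = [sum(c * d[i] for c, d in zip(rep_hp, D)) for i in range(3)]
assert hp == [2, 3, 2]
```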

Remark 3.31. As a consequence, all linear transformations can be thought of as a matrix multiplication.

Example.
(a) Suppose that V = [e^x, e^{3x}], and that h : V -> V is given by h(v) = integral of v(x) dx. Since

    h(e^x) = e^x,   h(e^{3x}) = (1/3) e^{3x},

we have

    Rep_{B,B}(h) = (1 0; 0 1/3).

(b) Suppose that a basis for V is B = (v_1, v_2, v_3), and that a basis for W is D = (w_1, w_2, w_3). Further suppose that

    h(v_1) = w_1 + 3w_2,   h(v_2) = w_2 - w_3,   h(v_3) = -w_1 + 4w_3.

We then have that

    Rep_{B,D}(h) = (1 0 -1; 3 1 0; 0 -1 4).

Thus, if v = 2v_1 - v_2 + v_3, we have that

    Rep_D(h(v)) = Rep_{B,D}(h) Rep_B(v) = (1, 5, 5)^T,

so that h(v) = w_1 + 5w_2 + 5w_3.

3.3.2. Any Matrix Represents a Linear Map

Example. Suppose that h : P_2 -> R^3, with bases B = (1, x, x^2) and D = (e_1, e_2 + e_3, e_1 - e_3), is represented by the matrix

    H = (1 0 0; 0 1 1; 2 0 0).

In order to decide if b = e_1 + 3e_2 is in R(h), it is equivalent to determine if Rep_D(b) is in R(H). Since

    Rep_D(b) = (-2, 3, 3)^T,

and

    (H | Rep_D(b))  ->  (1 0 0 -2; 0 1 1 3; 0 0 0 7),

we have that Rep_D(b) is not in R(H); hence, b is not in R(h). Note that rank(H) = 2, so that h is neither one-to-one nor onto. Let us find a basis for R(h) and N(h). We have

    H -> (1 0 0; 0 1 1; 0 0 0),   H^T -> (1 0 2; 0 1 0; 0 0 0),

so that

    R(H) = [(1, 0, 2)^T, (0, 1, 0)^T],   N(H) = [(0, -1, 1)^T].

Using the fact that b is in R(h) if and only if Rep_D(b) is in R(H), and v is in N(h) if and only if Rep_B(v) is in N(H), then yields

    R(h) = [(3, 0, -2)^T, (0, 1, 1)^T],   N(h) = [-x + x^2].

Theorem 3.32. Let A in R^{m x n}. The map h : R^n -> R^m defined by h(x) := A x is a homomorphism.

Proof: Set A = (a_1 a_2 ... a_n), and recall that A x = sum_i x_i a_i. Since

    A(r x + s y) = sum_{i=1}^n (r x_i + s y_i) a_i = r sum_{i=1}^n x_i a_i + s sum_{i=1}^n y_i a_i = r A x + s A y,

h is a homomorphism.

Theorem 3.33. Let h : V -> W be a homomorphism which is represented by the matrix H. Then rank(h) = rank(H).

Proof: Let B = (v_1, ..., v_n) be a basis for V, and let W have a basis D, so that

    H = ( Rep_D(h(v_1)) ... Rep_D(h(v_n)) ).

The rank of H is the number of linearly independent columns of H, and the rank of h is the number of linearly independent vectors in the set {h(v_1), ..., h(v_n)}. Since Rep_D : W -> R^m is an isomorphism, we have that a set in R(h) is linearly independent if and only if the related set in R(Rep_D(h)) is linearly independent (problem 3...28), i.e., {h(v_1), ..., h(v_k)} is linearly independent if and only if {Rep_D(h(v_1)), ..., Rep_D(h(v_k))} is linearly independent. The conclusion now follows.

Corollary 3.34.
(a) h is onto if and only if rank(H) = m.
(b) h is one-to-one if and only if rank(H) = n.
(c) h is nonsingular if and only if m = n and rank(H) = n.

3.4. Matrix Operations

3.4.1. Sums and Scalar Products

Definition 3.35. Let A = (a_{i,j}), B = (b_{i,j}) in R^{m x n}. Then
(a) A + B = (a_{i,j} + b_{i,j}),
(b) rA = (r a_{i,j}) for any r in R.
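The linearity identity at the heart of Theorem 3.32 is easy to spot-check; the matrix, vectors, and scalars below are illustrative only:

```python
# Sketch of Theorem 3.32: x -> Ax is linear, i.e. A(rx + sy) = r(Ax) + s(Ay).
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2, 0], [3, -1, 4]]
x, y, r, s = [1, 0, 2], [2, 1, -1], 3, -2
lhs = matvec(A, [r * xi + s * yi for xi, yi in zip(x, y)])
rhs = [r * a + s * b for a, b in zip(matvec(A, x), matvec(A, y))]
assert lhs == rhs
```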

Lemma 3.36. Let g, h : V -> W be homomorphisms represented with respect to the bases B and D by the matrices G, H. The map g + h is represented by G + H, and the map rh is represented by rH.

3.4.2. Matrix Multiplication

Definition 3.37. Let G in R^{m x n} and H = (h_1 h_2 ... h_p) in R^{n x p}. Then

    GH = (G h_1  G h_2  ...  G h_p) in R^{m x p}.

Example. It is easy to check that

    (2 1; 3 0; 1 2)(1 2; 3 1) = (5 5; 3 6; 7 4).

Remark 3.38. Matrix multiplication is generally not commutative. For example:
(a) if A in R^{2 x 3} and B in R^{3 x 2}, then AB in R^{2 x 2} while BA in R^{3 x 3};
(b) if

    A = (1 2; 2 4),   B = (2 1; 0 1),

then

    AB = (2 3; 4 6),   BA = (4 8; 2 4).

Lemma 3.39. Let g : V -> W and h : W -> U be homomorphisms represented by the matrices G, H. The map h o g : V -> U is represented by the matrix HG.

Proof: Consider the commutative diagram:

    V (basis B) --g--> W (basis D) --h--> U (basis E)
        |                  |                  |
      Rep_B              Rep_D              Rep_E
        v                  v                  v
       R^n ------G----->  R^m ------H----->  R^p

Example. Consider the maps t_theta, d_r : R^2 -> R^2 given by

    t_theta(v) := (cos(theta) v_1 - sin(theta) v_2, sin(theta) v_1 + cos(theta) v_2)^T,   d_r(v) := (3v_1, v_2)^T.

The matrix representations for these maps are

    T_theta := (cos theta  -sin theta; sin theta  cos theta),   D_r := (3 0; 0 1).

Rotation followed by dilation is represented by the matrix

    D_r T_theta = (3 cos theta  -3 sin theta; sin theta  cos theta),

while dilation followed by rotation is represented by the matrix

    T_theta D_r = (3 cos theta  -sin theta; 3 sin theta  cos theta).

3.4.3. Mechanics of Matrix Multiplication

Definition 3.40. The identity matrix is given by I = (e_1 e_2 ... e_n) in R^{n x n}.

Remark 3.41. Assuming that the multiplication makes sense, I v = v for any vector v in R^n, and consequently AI = A, IB = B.

Definition 3.42. A diagonal matrix D = (d_{i,j}) in R^{n x n} is such that d_{i,j} = 0 for i != j.

Definition 3.43. An elementary reduction matrix R in R^{n x n} is formed by applying a single row operation to the identity matrix.

Example. Two examples are

    I --(2 rho_1 + rho_2)--> (1 0; 2 1),   I --(rho_1 <-> rho_2)--> (0 1; 1 0).

Lemma 3.44. Let R be an elementary reduction matrix. Then RH is equivalent to performing the corresponding Gaussian operation on the matrix H.

Corollary 3.45. For any matrix H there are elementary reduction matrices R_1, ..., R_k such that R_k R_{k-1} ... R_1 H is in reduced echelon form.

3.4.4. Inverses

Definition 3.46. Suppose that A in R^{n x n}. The matrix is invertible if there is a matrix A^{-1} such that

    A A^{-1} = A^{-1} A = I.
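Lemma 3.44 can be sketched directly: left-multiplying by an elementary reduction matrix carries out the corresponding row operation. The matrices below are illustrative.

```python
# Sketch of Lemma 3.44: R encodes the operation 2*rho1 + rho2; RH applies it to H.
def matmul(G, H):
    return [[sum(G[i][k] * H[k][j] for k in range(len(H)))
             for j in range(len(H[0]))] for i in range(len(G))]

R = [[1, 0], [2, 1]]                      # I with 2*rho1 + rho2 applied
H = [[1, -1, 0], [3, 2, 5]]
assert matmul(R, H) == [[1, -1, 0], [5, 0, 5]]  # row 2 replaced by 2*row1 + row2
```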

Remark 3.47. If it exists, the matrix A^{-1} is given by a product of elementary reduction matrices. For example,

    A = (1 1; 2 -1) --(-2 rho_1 + rho_2)--> (1 1; 0 -3) --(-(1/3) rho_2)--> (1 1; 0 1) --(-rho_2 + rho_1)--> I.

Thus, by setting

    R_1 = (1 0; -2 1),   R_2 = (1 0; 0 -1/3),   R_3 = (1 -1; 0 1),

we have that R_3 R_2 R_1 A = I, so that A^{-1} = R_3 R_2 R_1.

Lemma 3.48. A is invertible if and only if it is nonsingular, i.e., the linear map defined by h(x) := A x is an isomorphism.

Proof: A can be row-reduced to I if and only if h is an isomorphism.

Remark 3.49. When computing A^{-1}, do the reduction (A | I) -> (I | A^{-1}) (if possible).

Example.
(a) For A given above,

    (A | I) -> (I | A^{-1}),   A^{-1} = (1/3 1/3; 2/3 -1/3).

(b) For a general A in R^{2 x 2} we have that

    (a b; c d)^{-1} = 1/(ad - bc) (d -b; -c a),

if ad - bc != 0.

3.5. Change of Basis

3.5.1. Changing Representations of Vectors

For the vector space V let one basis be given by B = (beta_1, ..., beta_n), and let another be given by D = (delta_1, ..., delta_n). Define the homomorphism h : V -> V by h(beta_j) := beta_j, i.e., h = id, the identity map. The matrix H associated with the identity map satisfies h_j = Rep_D(beta_j).

Definition 3.50. The change of basis matrix H = Rep_{B,D}(id) for the bases B, D is the representation of the identity map with respect to these bases, and satisfies h_j = Rep_D(beta_j).

Example. Suppose that B = (beta_1, beta_2) and D = (delta_1, delta_2) are bases for R^2. We have the diagram

    V (basis B) ----id----> V (basis D)
        |                       |
      Rep_B                   Rep_D
        v                       v
       R^2 ---H = Rep_{B,D}--> R^2
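The 2 x 2 inverse formula of example (b) is easy to implement and check; a minimal sketch, assuming the example matrix A = (1 1; 2 -1) that was inverted by row reduction (exact arithmetic via `Fraction`):

```python
from fractions import Fraction as F

# Sketch of example (b): inverse of (a b; c d) is (d -b; -c a)/(ad - bc), ad - bc != 0.
def inv2(a, b, c, d):
    det = a * d - b * c
    assert det != 0, "matrix is singular"
    return [[F(d, det), F(-b, det)], [F(-c, det), F(a, det)]]

Ainv = inv2(1, 1, 2, -1)                  # assumed: A = (1 1; 2 -1) as in the example
assert Ainv == [[F(1, 3), F(1, 3)], [F(2, 3), F(-1, 3)]]
```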