Chapter 1: Linear Equations


Chapter 1: Linear Equations (Last Updated: September 6)

The material for these notes is derived primarily from Linear Algebra and its Applications by David Lay (4th ed.).

1. Systems of Linear Equations

Before we spend a significant amount of time laying the groundwork for linear algebra, let's talk about some linear algebra you already know.

Example 1. Consider the following systems of equations:

    x + y = 3        x - y = 0        x + y = 1
    x + y = 4        x + y = 4        2x + 2y = 2

The first system has no solutions. One can see this by solving (via elimination or substitution) or by recognizing these (linear) equations as equations of two parallel lines. That is, intersections between the lines correspond to common solutions of the equations. The second system has exactly one solution: the corresponding lines intersect at the single point (2, 2). The third system has infinitely many solutions, as both equations correspond to the same line.

Exercise. Solve the following system of equations in three variables using the elimination method.

    x + y - z = 4
    2x - y + 3z = -3
    2x + 2y - 2z = 8

Interpret your solution geometrically.

We will now define several of the terms we have already used.

Definition 1. The equation a_1x_1 + ... + a_nx_n = b is called linear with variables x_i, coefficients a_i (real or complex), and constant b. A solution to a linear equation is a list (s_1, ..., s_n) such that substituting s_i for x_i in the left-hand side produces a true statement. A system of linear equations is a set of linear equations in the same variables, and a solution to the system is a common solution to all the equations in the system. A system is consistent if it has at least one solution and inconsistent if it has no solution. Two systems with the same solution set are said to be equivalent.
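The three behaviors in the example above (no solution, exactly one, infinitely many) can be detected mechanically for a two-variable system. Below is a minimal sketch in Python; the function name and the sample systems are illustrative, not part of the notes, and exact rational arithmetic is used so that the parallel-line case is not blurred by round-off.

```python
from fractions import Fraction

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1, a2*x + b2*y = c2 by elimination.
    Returns the unique solution (x, y), or "none" for parallel lines,
    or "infinite" when both equations describe the same line."""
    a1, b1, c1, a2, b2, c2 = map(Fraction, (a1, b1, c1, a2, b2, c2))
    det = a1 * b2 - a2 * b1      # zero exactly when the lines are parallel
    if det != 0:
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        return (x, y)
    # parallel lines: the same line iff one equation is a multiple of the other
    same = (a1 * c2 == a2 * c1) and (b1 * c2 == b2 * c1)
    return "infinite" if same else "none"
```

For instance, the pair x + y = 3, x + y = 4 reports "none", matching the parallel-lines picture.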

When confronted with a system, we are most often interested in the following two questions:

(1) (Existence) Is the system consistent?
(2) (Uniqueness) If the system is consistent, is there a unique solution?

What we will find is that solving a system of equations can be done much more quickly and efficiently using matrix techniques. First we will lay out notation for this process. Subsequently we will outline the process (Gaussian elimination) and explain how it mirrors the elimination method.

An m x n matrix M is a rectangular array with m rows and n columns. We denote by M_ij the entry in the ith row and jth column of the matrix M. Consider a system of m equations in n variables. The m x n matrix C formed by setting C_ij to be the coefficient of x_j in the ith equation is called the coefficient matrix of this system. The augmented matrix A is an m x (n+1) matrix formed just as C but whose last column contains the constant of each equation.

Example 2. Consider the system in the exercise above. The coefficient matrix and augmented matrix of this system are

    [ 1  1 -1 ]        [ 1  1 -1 |  4 ]
    [ 2 -1  3 ]  and   [ 2 -1  3 | -3 ]
    [ 2  2 -2 ]        [ 2  2 -2 |  8 ]

Exercise. Recall your solution to the previous exercise. In each step, write the augmented matrix of the system. What observations can you make about the final matrix?

Each action we take in solving a system (via elimination) corresponds to an operation on the augmented matrix of the system. We will make these operations more precise now.

Elementary Row Operations.
(1) (Replacement) Replace one row by the sum of itself and a multiple of another row.
(2) (Interchange) Interchange two rows.
(3) (Scaling) Multiply all entries in a row by a nonzero constant.

Two matrices are said to be row equivalent if one is obtainable from the other by a series of elementary row operations. It then follows that two linear systems are equivalent if and only if their augmented matrices are row equivalent.

2. Row Reduction and Echelon Forms

The process we lay out in this section is essential to everything we do in this course. In many ways, row reduction of a matrix is just the elimination method for solving systems.

The leading entry of a row in a matrix is the first nonzero entry when read from left to right.

Definition 2. A rectangular matrix is in (row) echelon form if it has the following three properties.
(1) All nonzero rows are above any rows of all zeros.
(2) Each leading entry of a row is in a column to the right of the leading entry of the row above it.
(3) All entries in a column below a leading entry are zeros.
This form is reduced if, in addition,
(4) The leading entry in each nonzero row is 1.
(5) Each leading 1 is the only nonzero entry in its column.

Example 3. The following matrix is in row echelon form (REF):

    [ 2  3  0 | 5 ]
    [ 0  1  3 | 5 ]
    [ 0  0  0 | 1 ]

The following matrix is in reduced row echelon form (RREF):

    [ 1  0  2 | 0 ]
    [ 0  1  1 | 0 ]
    [ 0  0  0 | 1 ]

Any matrix may be row reduced (via the elementary row operations) to a matrix in REF, but this matrix is not unique. On the other hand, every matrix is row equivalent to one and only one matrix in RREF. We won't prove this right now, but I hope to later once we have learned about linear independence/dependence. Hence, it makes sense to speak of the reduced echelon form of a matrix.

Definition 3. A pivot position in a matrix A is a location in A that corresponds to a leading 1 in the reduced echelon form of A. A pivot column is a column of A that contains a pivot position.

Row reduction algorithm (Gaussian elimination).
(1) Begin with the leftmost nonzero column (this is a pivot column).
(2) Interchange rows as necessary so the top entry of this column is nonzero.
(3) Use row operations to create zeros in all positions below the pivot.
(4) Ignoring the row containing the pivot, repeat (1)-(3) on the remaining submatrix. Repeat until there are no more rows to modify.
(5) Beginning with the rightmost pivot and working upward and to the left, create zeros above each pivot. Make each pivot a 1 by scaling its row.
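The five steps above translate directly into a short routine. The following sketch (the names are mine, not from the notes) performs the reduction with exact Fraction arithmetic, folding the backward pass of step (5) into the forward sweep, and returns the RREF together with the list of pivot columns.

```python
from fractions import Fraction

def rref(M):
    """Row reduce M to reduced row echelon form using exact arithmetic.
    Returns (R, pivots) where pivots lists the pivot columns."""
    A = [[Fraction(x) for x in row] for row in M]
    m, n = len(A), len(A[0])
    pivots = []
    r = 0
    for c in range(n):
        # find a row at or below r with a nonzero entry in column c
        sel = next((i for i in range(r, m) if A[i][c] != 0), None)
        if sel is None:
            continue                           # no pivot in this column
        A[r], A[sel] = A[sel], A[r]            # (2) interchange
        A[r] = [x / A[r][c] for x in A[r]]     # make the pivot a 1 (scaling)
        for i in range(m):                     # (3)/(5) zeros below and above
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == m:
            break
    return A, pivots
```

Because the RREF of a matrix is unique, this routine can be checked against any hand computation; a matrix already in RREF is returned unchanged.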

Example 4. Put the following matrix into RREF:

    [ 1  4  5 | 4 ]
    [ 2  5  4 | 5 ]
    [ 2  4  5 | 4 ]

Following the algorithm, the matrix reduces to

    [ 1  0  0 | 0 ]
    [ 0  1  0 | 1 ]
    [ 0  0  1 | 0 ]

Note that the above example corresponds to a system with a unique solution. (If I did it right, that solution is (0, 1, 0).) But of course, solutions need not be unique (or exist at all).

Example 5. The following matrix is in RREF:

    [ 1  0 -5 | 1 ]
    [ 0  1  1 | 4 ]
    [ 0  0  0 | 0 ]

This corresponds to the system

    x_1 - 5x_3 = 1
    x_2 + x_3 = 4
    0 = 0

The variables x_1 and x_2 correspond to pivot columns; these are called basic variables. Since x_3 does not correspond to a pivot column, it is called a free variable. This is because there is a solution for any choice of x_3. We will often write solutions in parametric form, with free variables listed as such and basic variables solved for in terms of the free variables:

    x_1 = 5x_3 + 1
    x_2 = -x_3 + 4
    x_3 is free

Exercise. The following matrix is in RREF. Translate it into a system and identify the basic and free variables. Write the solution to this system in parametric form.

    [ 1  6  0  3 | 4 ]
    [ 0  0  1  5 | 7 ]
    [ 0  0  0  0 | 0 ]

Theorem 6. A (linear) system is consistent if and only if the rightmost column of the augmented matrix is not a pivot column. If a linear system is consistent, then the solution set contains either (i) a unique solution (no free variables) or (ii) infinitely many solutions.

The first condition in the theorem is equivalent to there being no row of the form [ 0 ... 0 | b ], with b nonzero, in the RREF of the matrix.
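Once an augmented matrix is in RREF, basic and free variables can be read off by locating the pivot columns, and each choice of the free variables produces one concrete solution. A small sketch, assuming the system is consistent (the function name is illustrative):

```python
from fractions import Fraction

def parametric_solution(R):
    """Given an augmented matrix R already in RREF, return
    (basic, free, solve), where solve(free_vals) builds the full solution
    vector for the given values of the free variables.
    Assumes the system is consistent."""
    m, n = len(R), len(R[0]) - 1        # n unknowns; last column = constants
    pivot_col = {}                      # row index -> its pivot column
    for i, row in enumerate(R):
        for j in range(n):
            if row[j] != 0:
                pivot_col[i] = j
                break
    basic = sorted(pivot_col.values())
    free = [j for j in range(n) if j not in basic]

    def solve(free_vals):
        x = [Fraction(0)] * n
        for j, v in zip(free, free_vals):
            x[j] = Fraction(v)
        # in RREF each pivot row reads: x_pivot + (free terms) = constant
        for i, j in pivot_col.items():
            x[j] = R[i][n] - sum(R[i][k] * x[k] for k in free)
        return x

    return basic, free, solve
```

Applied to the RREF matrix of Example 5, it reports x_1 and x_2 as basic, x_3 as free, and substituting a value for x_3 recovers the parametric form above.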

3. Vector Equations

A matrix with one column is said to be a column vector, which for now we will just call a vector and denote by v (or with an arrow as needed). (There is a corresponding notion of a row vector, but columns are more appropriate for our use now.) The dimension of a vector is its number of rows.

Example 7.

    u = [ 1 ]    v = [ e ]    w = [ w_1 ]
        [ 5 ]        [ π ]        [ w_2 ]    with w_1, w_2, w_3 ∈ R.
                                  [ w_3 ]

Two vectors are equal if they have the same dimension and all corresponding entries are equal. A vector whose entries are all zero is called a zero vector and denoted 0. We denote the set of n-dimensional vectors with entries in R (resp. C) by R^n (resp. C^n).

The standard operations on vectors in R^n (or C^n) are scalar multiplication and addition:

    (Scalar Multiplication) For c ∈ R and v ∈ R^n,  cv = c[ v_1; ...; v_n ] = [ cv_1; ...; cv_n ].
    (Addition) For u, v ∈ R^n,  u + v = [ u_1; ...; u_n ] + [ v_1; ...; v_n ] = [ u_1 + v_1; ...; u_n + v_n ].

Example 8. Let u = [ 3; 1 ] and v = [ 2; 4 ]. Compute u + v.

For all u, v, w ∈ R^n and all scalars c, d ∈ R, we have the following algebraic properties of vectors:

    (i)    u + v = v + u
    (ii)   (u + v) + w = u + (v + w)
    (iii)  u + 0 = u
    (iv)   u + (-u) = u + (-1)u = 0
    (v)    c(u + v) = cu + cv
    (vi)   (c + d)u = cu + du
    (vii)  c(du) = (cd)u
    (viii) 1u = u

An aside on R^2. We visualize a vector [ a; b ] as an arrow with endpoint at (0, 0) and pointing to (a, b). Any scalar multiple c[ a; b ] with c ≠ 0, 1 points in the same direction if c > 0 (and the opposite direction if c < 0), and is longer if |c| > 1 and shorter if |c| < 1. Vector addition can be visualized via the parallelogram rule. It is important to remember this geometric fact: a parallelogram is uniquely determined by three points in the plane.

Parallelogram Rule for Addition. If u, v ∈ R^2 are represented by points in the plane, then u + v corresponds to the fourth vertex of the parallelogram whose other vertices are 0, u, and v.
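The two operations are easy to state in code, and the eight algebraic properties then become checkable identities. A minimal sketch (the function names are mine):

```python
def add(u, v):
    """Entrywise vector addition; dimensions must agree."""
    assert len(u) == len(v), "vectors must have the same dimension"
    return [a + b for a, b in zip(u, v)]

def scale(c, v):
    """Scalar multiplication of a vector."""
    return [c * a for a in v]

# spot-check several of the algebraic properties on sample vectors
u, v, w = [3, 1], [2, 4], [1, 1]
c, d = 2, 5
assert add(u, v) == add(v, u)                                  # (i)
assert add(add(u, v), w) == add(u, add(v, w))                  # (ii)
assert scale(c, add(u, v)) == add(scale(c, u), scale(c, v))    # (v)
assert scale(c + d, u) == add(scale(c, u), scale(d, u))        # (vi)
```

Checks like these do not prove the identities, of course, but they make the statements concrete.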

Example 9. Let u = [ 1; 3 ] and v = [ 2; 1 ]. Find u + v geometrically and confirm that it is correct via the rule above for vector addition.

Now we turn to one of the most fundamental concepts in linear algebra: span. The other (linear independence) will be introduced in Section 7.

Definition 4. A linear combination of v_1, ..., v_p ∈ R^n with weights c_1, ..., c_p ∈ R is defined as the vector

    y = c_1 v_1 + c_2 v_2 + ... + c_p v_p.

The set of all linear combinations of v_1, ..., v_p is called the span of v_1, ..., v_p (or the subset of R^n spanned by v_1, ..., v_p) and is denoted Span{v_1, ..., v_p}.

Geometrically, we think of the span of one nonzero vector v as a line through the origin, since any vector in Span{v} is of the form xv for some scalar (weight) x. Similarly, the span of two vectors v_1, v_2 which are not scalar multiples of one another forms a plane through the three points 0, v_1, v_2.

A reasonable question is: when is a given vector in the span of a particular set of vectors?

Example 10. Determine whether w = [ 3; 3; 7 ] is a linear combination of u = [ 1; 1; 3 ] and v = [ 1; 1; 1 ].

We are asking whether there exist x_1, x_2 such that x_1 u + x_2 v = w, which is equivalent to

    x_1 + x_2 = 3
    x_1 + x_2 = 3
    3x_1 + x_2 = 7.

This gives a system of three equations in two unknowns. We form the corresponding augmented matrix and row reduce to find the solution x_1 = 2 and x_2 = 1.

Theorem 11. Let e_i ∈ R^n denote the vector of all zeros except a 1 in the ith spot. Then R^n = Span{e_1, ..., e_n}.

Proof. Clearly any linear combination of the e_i lives in R^n. Conversely, suppose a = [ a_1; ...; a_n ] ∈ R^n. Then a = a_1 e_1 + ... + a_n e_n ∈ Span{e_1, ..., e_n}. □

A vector equation x_1 v_1 + ... + x_n v_n = b has the same solution set as the linear system whose augmented matrix is [ v_1 ... v_n | b ]. In particular, b is a linear combination of v_1, ..., v_n if and only if there exists a solution to the corresponding linear system.

A linear system/vector equation is said to be homogeneous if b = 0. Such a system is always consistent, since we can take x_i = 0 for all i. This solution is known as the trivial solution.
A homogeneous system has a nontrivial solution if and only if the equation has at least one free variable.
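Deciding membership in a span is exactly the consistency question of Theorem 6 applied to the augmented matrix [ v_1 ... v_p | w ]. A sketch (the function name is mine; only a forward elimination pass is needed, since we only ask about consistency):

```python
from fractions import Fraction

def in_span(vectors, w):
    """Decide whether w is a linear combination of the given vectors by
    row reducing the augmented matrix [v1 ... vp | w]: the system is
    inconsistent exactly when a row (0 ... 0 | b), b != 0, appears."""
    rows = [[Fraction(v[i]) for v in vectors] + [Fraction(w[i])]
            for i in range(len(w))]
    m, n = len(rows), len(vectors)
    r = 0
    for c in range(n):
        sel = next((i for i in range(r, m) if rows[i][c] != 0), None)
        if sel is None:
            continue
        rows[r], rows[sel] = rows[sel], rows[r]
        for i in range(r + 1, m):
            f = rows[i][c] / rows[r][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    # consistent iff no row reads (0, ..., 0 | nonzero)
    return all(any(x != 0 for x in row[:n]) or row[n] == 0 for row in rows)
```

For example, [ 3; 3; 7 ] lies in the span of [ 1; 1; 3 ] and [ 1; 1; 1 ], while e_3 does not lie in Span{e_1, e_2}.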

5 Example. Let v = 5, v = 3, v 3 =, and b =. Find all solutions to the 5 6 5 homogeneous system x v + x v + x 3 v 3 = and the system x v + x v + x 3 v 3 = b. We form the augmented matrix of the system and row reduce, 5 5 5 3. 5 6 We see that x 3 is a free variable. In standard parametric form, the solution is x = 5 x 3 x = x 3 is free. /5 Hence, all solutions could be represented in parametric vector form x 3. We repeat with the non-homogeneous system. We form the augmented matrix of the system and row reduce, 5 5 5 5 3. 5 6 5 In parametric form the solution is x = 5 x 3 + 5 x = x 3 is free /5 /5 and in parametric vector form x 3 +. This solution is similar to the one before. We call the portion of the solution containing the free variable x 3 the homogeneous solution of the system.

4. The Matrix Equation Ax = b

If A is an m x n matrix with columns a_1, ..., a_n ∈ R^m and x ∈ R^n, then the product of A and x, denoted Ax, is the linear combination of the columns of A using the corresponding entries in x as weights; that is,

    Ax = [ a_1  a_2  ...  a_n ] [ x_1; ...; x_n ] = x_1 a_1 + x_2 a_2 + ... + x_n a_n.

Observe that the result will be a (column) vector with m rows.

Example 13. Let A = [ 1 3; 2 5 ] and x = [ 2; 1 ]. Compute Ax.

Example 14. Consider the system from last lecture:

    x_1 + x_2 = 3
    x_1 + x_2 = 3
    3x_1 + x_2 = 7.

We could write this as the matrix equation

    [ 1 1; 1 1; 3 1 ] [ x_1; x_2 ] = [ 3; 3; 7 ].

A solution to the matrix equation Ax = b is a vector s ∈ R^n such that replacing x with s and multiplying produces the vector b ∈ R^m. We say the matrix equation is consistent if at least one such s exists. The equation Ax = b is said to be homogeneous if b = 0.

Aside on inverses. If A = [ a ] is a 1 x 1 matrix and x a column vector with one entry, so x = [ x_1 ], then the equation Ax = b reads ax_1 = b with b ∈ R. Assuming a ≠ 0, the solution to this equation is just x_1 = a^{-1} b. Can we generalize this to matrix equations? What would A^{-1} mean? We'll return to this question in Chapter 2.

Theorem 15. If A is an m x n matrix with columns a_1, ..., a_n and b ∈ R^m, then the matrix equation Ax = b has the same solution set as the vector equation x_1 a_1 + ... + x_n a_n = b, which in turn has the same solution set as the system of linear equations whose augmented matrix is [ a_1 ... a_n | b ].

Another way to read the previous theorem is this: the equation Ax = b has a solution if and only if b is a linear combination of the columns of A.
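The definition of Ax as a weighted sum of columns can be implemented literally, which makes a useful contrast with the familiar row-times-vector rule; both give the same answer. A sketch (the function names are mine):

```python
def matvec_by_columns(A, x):
    """Compute Ax as x_1*a_1 + ... + x_n*a_n, the linear combination of
    the columns of A with the entries of x as weights."""
    m, n = len(A), len(A[0])
    result = [0] * m
    for j in range(n):              # accumulate x_j times column j
        for i in range(m):
            result[i] += x[j] * A[i][j]
    return result

def matvec_by_rows(A, x):
    """The familiar dot-product-with-each-row rule."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]
```

On the 3 x 2 matrix [ 1 1; 1 1; 3 1 ] with x = (2, 1), both routines return (3, 3, 7), which is one way to see that (2, 1) solves the system in Example 14.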

Example 16. Let

    A = [  1  3  4 ]              [ b_1 ]
        [ -4  2 -6 ]   and   b =  [ b_2 ]
        [ -3 -2 -7 ]              [ b_3 ]

Is Ax = b consistent for all choices of b?

(Partial solution) Row reduce [ A | b ] to REF:

    [  1  3  4 | b_1 ]      [ 1  3  4 | b_1 ]
    [ -4  2 -6 | b_2 ]  ~   [ 0 14 10 | b_2 + 4b_1 ]
    [ -3 -2 -7 | b_3 ]      [ 0  0  0 | b_3 + 3b_1 - (1/2)(b_2 + 4b_1) ]

If the last column contains a pivot, then there is no solution. Hence, the matrix equation has a solution if and only if b_3 + 3b_1 - (1/2)(b_2 + 4b_1) = 0.

Let's summarize what we've got so far and try to put some rigor behind these statements.

Theorem 17. Let A be an m x n matrix. The following are equivalent.
(1) For each b ∈ R^m, Ax = b has a solution.
(2) Each b ∈ R^m is a linear combination of the columns of A.
(3) The columns of A span R^m.
(4) A has a pivot in every row.

Proof. (1) ⇒ (2). Let a_1, ..., a_n denote the columns of A. If x is a solution, then by definition of the matrix product, Ax = x_1 a_1 + ... + x_n a_n = b, and so by definition, b is a linear combination of the a_i.

(2) ⇒ (3). If every b ∈ R^m is a linear combination of the columns of A, then every b ∈ Span{a_1, ..., a_n}. Hence, Span{a_1, ..., a_n} = R^m.

(3) ⇒ (4). Suppose A did not have a pivot in every row. Then, in particular, A does not have a pivot in the last row, so the last row of RREF(A) is zero. The vector e_m = [ 0; ...; 0; 1 ] is then not in the span of the columns of RREF(A), and undoing the row operations produces a vector not in the span of the columns of A. But this implies that Span{a_1, ..., a_n } ≠ R^m.

(4) ⇒ (1). Since A has a pivot in every row, the RREF of [ A | b ] can have no row of the form [ 0 ... 0 | c ] with c ≠ 0, so the system is consistent for every b. Since [ A | b ] and RREF([ A | b ]) have the same solution set, the claim holds. □

5. Solution Sets of Linear Systems

Recall, a system of linear equations is said to be homogeneous if it can be written as Ax = 0, where A is an m x n matrix and x ∈ R^n. Such a system always has one solution, x = 0, called the trivial solution. The homogeneous system has a nontrivial solution if and only if the equation has at least one free variable.

Example 18. Solve the following homogeneous system with one equation: 3x_1 + x_2 - 5x_3 = 0.

A general solution is x_1 = -(1/3)x_2 + (5/3)x_3 with x_2 and x_3 free. We can write the solution in parametric vector form as

    x = [ x_1; x_2; x_3 ] = [ -(1/3)x_2 + (5/3)x_3; x_2; x_3 ] = x_2 [ -1/3; 1; 0 ] + x_3 [ 5/3; 0; 1 ].

Let u = [ -1/3; 1; 0 ] and v = [ 5/3; 0; 1 ]. Then the solution set is Span{u, v}, which represents a plane through the origin in R^3.

In general, the parametric vector form of a solution is

    x = x_{i_1} u_1 + ... + x_{i_k} u_k + p,

where the x_{i_j} are free variables and p is a particular solution not contained in Span{u_1, ..., u_k}.

Example 19. Solve the following system with one equation: 3x_1 + x_2 - 5x_3 = 1.

We recognize that this is almost the same equation as before and, in fact, it should have the same homogeneous solution. We need a particular solution. One such (relatively obvious) solution is (1/3, 0, 0). As before, we write the solution in parametric vector form:

    x = [ -(1/3)x_2 + (5/3)x_3 + 1/3; x_2; x_3 ] = x_2 [ -1/3; 1; 0 ] + x_3 [ 5/3; 0; 1 ] + [ 1/3; 0; 0 ].

This corresponds to a translation of the homogeneous solution set by the vector p = [ 1/3; 0; 0 ].

The next theorem is another version of the fact we discussed previously on choices of solutions.

Theorem 20. Suppose Ax = b is consistent for some b, and let p be a solution. Then the solution set of Ax = b is the set of all vectors of the form w = p + v_h, where v_h is any solution of the homogeneous equation Ax = 0.

6. Applications of Linear Systems

We will only discuss one type of application right now. We may return to others as time permits.

Chemical equations describe quantities of substances consumed and produced by chemical reactions. (Friendly reminder: atoms are neither destroyed nor created, just rearranged.)

Example 21. When propane gas burns, propane C3H8 combines with oxygen O2 to form carbon dioxide CO2 and water H2O according to an equation of the form

    (x_1) C3H8 + (x_2) O2 → (x_3) CO2 + (x_4) H2O.

To balance the equation means to find x_i such that the total number of atoms of each element on the left equals the total on the right. We translate the chemicals into vectors whose entries count atoms of carbon (C), hydrogen (H), and oxygen (O):

    C3H8 = [ 3; 8; 0 ],  O2 = [ 0; 0; 2 ],  CO2 = [ 1; 0; 2 ],  H2O = [ 0; 2; 1 ].

Balancing now becomes the linear system

    x_1 [ 3; 8; 0 ] + x_2 [ 0; 0; 2 ] = x_3 [ 1; 0; 2 ] + x_4 [ 0; 2; 1 ].

Equivalently, x_1 [ 3; 8; 0 ] + x_2 [ 0; 0; 2 ] - x_3 [ 1; 0; 2 ] - x_4 [ 0; 2; 1 ] = 0. We form the augmented matrix and row reduce:

    [ 3  0 -1  0 | 0 ]      [ 1  0  0 -1/4 | 0 ]
    [ 8  0  0 -2 | 0 ]  ~   [ 0  1  0 -5/4 | 0 ]
    [ 0  2 -2 -1 | 0 ]      [ 0  0  1 -3/4 | 0 ]

Thus, the solution is x_1 = (1/4)x_4, x_2 = (5/4)x_4, x_3 = (3/4)x_4 with x_4 free. Since only nonnegative whole-number solutions make sense in this context, any solution with x_4 ≥ 0 clearing the denominators is valid. For example, setting x_4 = 4 gives

    C3H8 + 5 O2 → 3 CO2 + 4 H2O.
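Since balancing reduces to computing the null space of the atom-count matrix, the whole calculation can be scripted. A sketch with exact arithmetic (the helper rref is a plain implementation of the row reduction algorithm from Section 2):

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form with exact Fraction arithmetic."""
    A = [[Fraction(x) for x in row] for row in M]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        sel = next((i for i in range(r, m) if A[i][c] != 0), None)
        if sel is None:
            continue
        A[r], A[sel] = A[sel], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(m):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        r += 1
    return A

# atom-count rows (C, H, O) for x1*C3H8 + x2*O2 - x3*CO2 - x4*H2O = 0
M = [[3, 0, -1,  0],
     [8, 0,  0, -2],
     [0, 2, -2, -1]]
R = rref(M)
# basic variables x1, x2, x3 in terms of the free variable x4
coeffs = [-row[3] for row in R]        # x_i = coeffs[i] * x4
x4 = 4                                 # smallest x4 clearing the denominators
balanced = [int(c * x4) for c in coeffs] + [x4]
assert balanced == [1, 5, 3, 4]        # C3H8 + 5 O2 -> 3 CO2 + 4 H2O
```

The same recipe balances any single-reaction equation whose null space is one-dimensional; the choice x_4 = 4 is just the smallest positive value making all four coefficients whole numbers.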

7. Linear Independence

Definition 5. An indexed set of vectors {v_1, ..., v_p} in R^n is said to be linearly independent if the vector equation

    (1)    x_1 v_1 + ... + x_p v_p = 0

has only the trivial solution. Otherwise the set is said to be linearly dependent; that is, there exist weights c_1, ..., c_p, not all zero, such that

    c_1 v_1 + ... + c_p v_p = 0.

Example 22. Define the vectors v_1 = [ 0; 1; 3 ], v_2 = [ 1; 2; 5 ], and v_3 = [ 2; 0; 1 ]. Show that the set {v_1, v_2, v_3} is linearly independent.

We set up the augmented matrix corresponding to equation (1) and row reduce:

    [ 0  1  2 | 0 ]      [ 1  0  0 | 0 ]
    [ 1  2  0 | 0 ]  ~   [ 0  1  0 | 0 ]
    [ 3  5  1 | 0 ]      [ 0  0  1 | 0 ]

Hence, the only solution is the trivial one, and so the set is linearly independent.

Example 23. Define the vectors v_1 = [ 1; 2; 3 ], v_2 = [ 4; 5; 6 ], and v_3 = [ 2; 1; 0 ]. Show that the set {v_1, v_2, v_3} is linearly dependent.

We set up the augmented matrix corresponding to equation (1) and row reduce:

    [ 1  4  2 | 0 ]      [ 1  0 -2 | 0 ]
    [ 2  5  1 | 0 ]  ~   [ 0  1  1 | 0 ]
    [ 3  6  0 | 0 ]      [ 0  0  0 | 0 ]

There is a nontrivial solution (in particular, x_3 is free). Hence, the set is linearly dependent. A consequence of this is that one of the vectors is a linear combination of the other two. I'll leave it as an exercise to identify which one and how.

Exercise. Show that a set consisting of one nonzero vector is always linearly independent, and that a set of two vectors is linearly dependent if and only if one vector is a multiple of the other.

Theorem 24. An indexed set S = {v_1, ..., v_p}, p ≥ 2, is linearly dependent if and only if at least one of the vectors is a linear combination of the others. In fact, if S is linearly dependent and v_1 ≠ 0, then some v_j, j > 1, is a linear combination of the preceding vectors.

Proof. Suppose S is linearly dependent. Then there exist weights c_1, ..., c_p, not all zero, such that c_1 v_1 + ... + c_p v_p = 0. If c_1 ≠ 0, then

    v_1 = -(c_2/c_1) v_2 - ... - (c_p/c_1) v_p,

so v_1 is a linear combination of v_2, ..., v_p. (Note that at least one of c_1, ..., c_p must be nonzero; if c_1 = 0, run the same argument with a nonzero c_i.)

Conversely, if v_1 is a linear combination of v_2, ..., v_p, then there exist weights c_2, ..., c_p such that v_1 = c_2 v_2 + ... + c_p v_p. Equivalently,

    (-1) v_1 + c_2 v_2 + ... + c_p v_p = 0,

so S is linearly dependent. Both arguments hold with any vector in place of v_1. □

Example 25. Let u, v, w ∈ R^3 with u, v linearly independent. Then {u, v, w} is linearly dependent if and only if w ∈ Span{u, v}.

Theorem 26. Any set {v_1, ..., v_p} ⊆ R^n is linearly dependent if p > n.

Example 27. The set { [ 2; 1 ], [ 4; -1 ], [ -2; 2 ] } is linearly dependent, since it consists of three vectors in R^2.

Theorem 28. If a set S = {v_1, ..., v_p} ⊆ R^n contains the zero vector, then the set is linearly dependent.

Proof. Suppose v_1 = 0. Then 1·v_1 + 0·v_2 + ... + 0·v_p = 0 is a nontrivial dependence relation. A similar argument holds if any other v_i is the zero vector. □
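Linear (in)dependence is likewise a pivot count: the set {v_1, ..., v_p} is independent exactly when the matrix [ v_1 ... v_p ] has a pivot in every column, i.e., when equation (1) has no free variables. A sketch (the function name is mine):

```python
from fractions import Fraction

def is_independent(vectors):
    """The set is independent iff x_1*v_1 + ... + x_p*v_p = 0 has only the
    trivial solution, i.e. iff [v_1 ... v_p] has a pivot in every column."""
    m, p = len(vectors[0]), len(vectors)
    # build the matrix whose columns are the given vectors
    A = [[Fraction(vectors[j][i]) for j in range(p)] for i in range(m)]
    r = 0
    for c in range(p):
        sel = next((i for i in range(r, m) if A[i][c] != 0), None)
        if sel is None:
            continue                  # no pivot in column c: a free variable
        A[r], A[sel] = A[sel], A[r]
        for i in range(r + 1, m):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r == p                     # pivots found == number of vectors
```

It reports, for instance, that {[1; 2; 3], [4; 5; 6], [2; 1; 0]} is dependent, and it also illustrates Theorem 26: any three vectors handed to it from R^2 come back dependent.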

8. Introduction to Linear Transformations

Given an m x n matrix A, the rule T(x) = Ax defines a map from R^n to R^m. We will be almost exclusively interested in these types of maps.

Example 29. Define T(x) = Ax where

    A = [ 1 -3; 3 5; -1 7 ],  u = [ 2; -1 ],  b = [ 3; 2; -5 ],  c = [ 3; 2; 5 ].

(1) Find T(u).
(2) Find x ∈ R^2 whose image under T is b.
(3) Is the x in part (2) unique?
(4) Is c in the range of T?

Definition 6. A transformation T from R^n to R^m (written T : R^n → R^m) is a rule that assigns to each vector x ∈ R^n a vector T(x) ∈ R^m, called the image of x. We call R^n the domain and R^m the codomain. The set of all images under T is called the range. A transformation is linear if
(1) T(u + v) = T(u) + T(v) for all u, v ∈ R^n, and
(2) T(cu) = cT(u) for all c ∈ R and u ∈ R^n.

Note that, in this context (T(x) = Ax), the range of T is the set of all linear combinations of the columns of A.

Example 30. Let T(x) = Ax where A is defined as follows.

(1) A = [ 1 0 0; 0 1 0; 0 0 0 ]. T is a transformation R^3 → R^3, but the range of T is equivalent in some way to R^2. We say T is a projection of R^3 onto R^2.

(2) A = [ 1 3; 0 1 ]. T is called a shear transformation; it leaves the second component of x fixed: T([ x_1; x_2 ]) = [ x_1 + 3x_2; x_2 ].

(3) Consider the map T : R → R given by T(x) = x^2. This map is not linear (because (a + b)^2 ≠ a^2 + b^2 in general).

Exercise. Show that if T is a linear transformation, then T(0) = 0.

The next theorem says that linear transformations and matrix transformations are the same thing. However, not all transformations are linear. I will leave it as an exercise to give another example of a transformation that is not linear.

Theorem 31. A transformation T : R^n → R^m is linear if and only if there exists a unique m x n matrix A such that T(x) = Ax.

Proof. (⇐) Suppose T(x) = Ax. We must show that for all u, v ∈ R^n and c ∈ R, A(u + v) = Au + Av and A(cu) = cAu. Denote the columns of A by a_1, ..., a_n. Then

    A(u + v) = a_1(u_1 + v_1) + ... + a_n(u_n + v_n)
             = (a_1 u_1 + a_1 v_1) + ... + (a_n u_n + a_n v_n)
             = (a_1 u_1 + ... + a_n u_n) + (a_1 v_1 + ... + a_n v_n)
             = Au + Av,

    A(cu) = a_1(cu_1) + ... + a_n(cu_n) = c(a_1 u_1 + ... + a_n u_n) = cAu.

(⇒) Suppose T is linear. Let {e_1, ..., e_n} be the standard basis vectors of R^n. For x ∈ R^n,

    x = x_1 e_1 + ... + x_n e_n.

By linearity,

    T(x) = T(x_1 e_1 + ... + x_n e_n) = x_1 T(e_1) + ... + x_n T(e_n) = [ T(e_1) ... T(e_n) ] x.

Set A = [ T(e_1) ... T(e_n) ]. We need only show that A is unique. Suppose T(x) = Bx for some m x n matrix B with columns b_1, ..., b_n. Then T(e_1) = Be_1 = b_1. But T(e_1) = Ae_1 = a_1, so a_1 = b_1. Repeating for each column of B gives B = A. □

We call A the standard matrix of T.

Example 32. Let T : R^2 → R^2 be the transformation that rotates each point counterclockwise through an angle θ. Then T(e_1) = [ cos θ; sin θ ] and T(e_2) = [ -sin θ; cos θ ]. Hence, the standard matrix of T is

    A = [ cos θ  -sin θ ]
        [ sin θ   cos θ ]

Example 33. Describe the transformations given by the following matrices:

    [ 3 0; 0 3 ],  [ 1 k; 0 1 ],  [ 0 -1; 1 0 ],  [ 1 0; 0 0 ],  [ 0 1; 1 0 ].
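The construction in the proof (apply T to each standard basis vector and use the images as columns) can be carried out numerically. A sketch using a rotation; the names are mine, and the floating-point entries are only approximate:

```python
import math

def standard_matrix(T, n):
    """Build A = [T(e_1) ... T(e_n)] by applying T to the standard
    basis of R^n, as in the proof above."""
    cols = [T([1 if i == j else 0 for i in range(n)]) for j in range(n)]
    m = len(cols[0])
    return [[cols[j][i] for j in range(n)] for i in range(m)]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# counterclockwise rotation of the plane by theta = 90 degrees
theta = math.pi / 2
def rotate(v):
    return [math.cos(theta) * v[0] - math.sin(theta) * v[1],
            math.sin(theta) * v[0] + math.cos(theta) * v[1]]

A = standard_matrix(rotate, 2)
```

For θ = π/2 the computed matrix is, up to round-off, [ 0 -1; 1 0 ], matching Example 32 with cos θ = 0 and sin θ = 1; and matvec(A, x) then agrees with rotate(x) for any x.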

9. The Matrix of a Linear Transformation

In this section we investigate some special properties that a linear transformation may possess.

Definition 7. A mapping (not necessarily linear) T : R^n → R^m is said to be onto if each b ∈ R^m is the image of at least one x ∈ R^n. T is one-to-one (1-1) if each b ∈ R^m is the image of at most one x ∈ R^n. Another way to phrase one-to-one is: T(x) = T(y) implies x = y.

Example 34.
(1) Projections (such as the projection [ x_1; x_2; x_3 ] ↦ [ x_1; x_2 ] of R^3 onto R^2) are onto but not 1-1.
(2) The map T : R → R^2 given by T(x) = [ x; x ] is 1-1 but not onto.
(3) The map T : R^2 → R^2 given by T(x) = Ax, where A = [ 1 3; 0 1 ], is 1-1 and onto.

We will develop criteria for 1-1 and onto based on the standard matrix of a linear transformation.

Theorem 35. Let T : R^n → R^m be a linear transformation. Then T is 1-1 if and only if T(x) = 0 has only the trivial solution.

Proof. (⇒) Assume T is 1-1. Since T is linear, T(0) = 0. Because T is 1-1, T(x) = 0 = T(0) implies x = 0. Hence T(x) = 0 has only the trivial solution.

(⇐) Assume T(x) = 0 has only the trivial solution. Suppose T(x) = T(y). Then T(x - y) = 0, so x - y = 0; thus x = y and T is 1-1. □

The set {x : T(x) = 0} is called the kernel of T. Another way to state the previous theorem is to say that T is 1-1 if and only if its kernel contains only 0.

Theorem 36. Let T : R^n → R^m be a linear transformation with standard matrix A.
(1) T is onto if and only if the columns of A span R^m.
(2) T is 1-1 if and only if the columns of A are linearly independent.

Proof. (1) The columns of A span R^m if and only if for every b ∈ R^m there exists x ∈ R^n such that x_1 a_1 + ... + x_n a_n = b. This is equivalent to Ax = b, which is equivalent to T(x) = b, and this holds for every b if and only if T is onto.

(2) T is 1-1 if and only if T(x) = 0 has only the trivial solution, and this holds if and only if Ax = 0 has only the trivial solution; that is, x_1 a_1 + ... + x_n a_n = 0 has only the trivial solution. This is equivalent to the columns of A being linearly independent. □
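Theorem 36 turns "onto" and "one-to-one" into pivot counts for the standard matrix: onto means a pivot in every row (Theorem 17), and one-to-one means a pivot in every column (no free variables in Ax = 0). A sketch with exact arithmetic (the function names are mine):

```python
from fractions import Fraction

def pivot_count(M):
    """Number of pivots in an echelon form of M (exact arithmetic)."""
    A = [[Fraction(x) for x in row] for row in M]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        sel = next((i for i in range(r, m) if A[i][c] != 0), None)
        if sel is None:
            continue
        A[r], A[sel] = A[sel], A[r]
        for i in range(r + 1, m):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def is_onto(A):
    """Pivot in every row <=> the columns of A span R^m."""
    return pivot_count(A) == len(A)

def is_one_to_one(A):
    """Pivot in every column <=> the columns of A are independent."""
    return pivot_count(A) == len(A[0])
```

The three maps of Example 34 behave as claimed: a projection matrix like [ 1 0 0; 0 1 0 ] is onto but not 1-1, the matrix [ 1; 1 ] of x ↦ (x, x) is 1-1 but not onto, and the shear [ 1 3; 0 1 ] is both.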