Linear Algebra Handout

References

Some material and suggested problems are taken from Fundamentals of Matrix Algebra by Gregory Hartman, which can be found here: http://www.vmi.edu/content.aspx?id=779979. I'll refer to this book as FMA in the rest of this handout.

1 Matrix Addition and Multiplication

You need to know how to multiply a matrix by a scalar and how to add matrices. You should know how the dimensions need to match in order to multiply two matrices (and what the dimensions of the resulting product will be) and how to carry out the multiplication. See sections . and . of FMA if you need a review of these topics or practice problems.

2 Elementary Row Operations

Definition 1 (Elementary Row [Column] Operations).
1. Multiply a row by a scalar and add it to another row (replacing the latter row with that sum).
2. Multiply a row by a nonzero scalar.
3. Interchange/swap two rows.
[There are analogous definitions for elementary column operations.]

To perform these operations using matrix multiplication, we first create an elementary matrix by starting with an appropriately sized identity matrix and performing that single operation on it. Then we multiply our matrix A by that elementary matrix. To perform row operations we pre-multiply A by the elementary matrix, i.e., multiply on the left: EA; to perform column operations we post-multiply A by the elementary matrix, i.e., multiply on the right: AE.

1. Let A = .
(a) Create elementary matrices that perform the following operations on A:
   - Swap rows and .
   - Multiply row by - .
   - Add times row to row .
   - Zero out row . [Technically not an elementary row operation, but still the same idea.]
   - Swap rows and .
(b) Now perform all of those row operations on A in the order listed.
(c) Now find a single matrix that will perform all of those row operations on A.
(d) Create elementary matrices that perform the following operations on A:
   - Add times column to column .
   - Multiply column by .
   - Zero out column . [Technically not an elementary column operation, but still the same idea.]
   - Interchange columns and .
(e) Now perform all of those column operations on A in the order listed.
(f) Now find a single matrix that will perform all of those column operations on A.

2. Find the matrix of the elementary row operation which swaps the second and fifth rows of matrices with five rows. Find the square of this matrix and interpret the result in terms of elementary row operations. Now do the same for elementary column operations.

3. Find a sequence of matrices E1, E2, E3, and E4 which reduce A to the following forms: E1A, E2E1A, E3E2E1A, and finally E4E3E2E1A = I.

3 Reduced Row Echelon Form

(Discussed in sections . and . of FMA.)

Definition 2 (Reduced Row Echelon Form). A matrix is in reduced row echelon form (abbreviated rref) if its entries satisfy the following conditions.
1. The first nonzero entry in each row is a 1 (called a leading 1).
2. Each leading 1 comes in a column to the right of the leading 1s in the rows above it.
3. All rows consisting entirely of 0s come at the bottom of the matrix.
4. If a column contains a leading 1, then all other entries in that column are 0.
A matrix that satisfies the first three conditions is said to be in row echelon form.

Algorithm (Gaussian Elimination). Here is the algorithm, known as Gaussian Elimination, for putting a matrix in rref:

Forward Steps
1. Working from left to right, consider the first column that isn't all zeros and that hasn't already been worked on. Then, working from top to bottom, consider the first row that hasn't been worked on.
2. If the entry in the row and column that we are considering is zero, interchange rows with a row below the current row so that that entry is nonzero. If all entries below are zero, we are done with this column; start again at step 1.
3. Multiply the current row by a scalar to make its first entry a 1 (a leading 1).
4. Repeatedly use Elementary Row Operation 1 to put zeros underneath the leading 1.
5. Go back to step 1 and work on the new rows and columns until either all rows or all columns have been worked on.
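The forward steps above can be sketched in code. This is only an illustrative sketch, not code from FMA; the names (`row_echelon`, `pivot_row`, and so on) are my own, and Python's exact rational arithmetic is used so the leading 1s come out exact rather than as floating-point approximations.

```python
from fractions import Fraction

def row_echelon(matrix):
    """Reduce a matrix to row echelon form via the forward steps."""
    A = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        # Step 2: find a row at or below pivot_row with a nonzero entry here.
        pivot = next((r for r in range(pivot_row, rows) if A[r][col] != 0), None)
        if pivot is None:
            continue  # all entries below are zero; start again with the next column
        A[pivot_row], A[pivot] = A[pivot], A[pivot_row]  # interchange rows
        # Step 3: scale the current row so its first nonzero entry is a leading 1.
        lead = A[pivot_row][col]
        A[pivot_row] = [x / lead for x in A[pivot_row]]
        # Step 4: use Elementary Row Operation 1 to zero out entries below the leading 1.
        for r in range(pivot_row + 1, rows):
            factor = A[r][col]
            A[r] = [a - factor * b for a, b in zip(A[r], A[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break  # all rows have been worked on
    return A

# row_echelon([[2, 4], [1, 3]]) gives the echelon form with rows [1, 2] and [0, 1]
print(row_echelon([[2, 4], [1, 3]]))
```

The `continue` statement plays the role of "start again at step 1" when a column has no usable pivot.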
If the above steps have been followed properly, then the following should be true about the current state of the matrix:
1. The first nonzero entry in each row is a 1 (a leading 1).
2. Each leading 1 is in a column to the right of the leading 1s above it.
3. All rows of all zeros come at the bottom of the matrix.
Note that this means we have just put the matrix into row echelon form. The next steps, referred to as the backward steps, finish the conversion into reduced row echelon form and are much easier to state.

Backward Steps
1. Starting from the right and working left, use Elementary Row Operation 1 repeatedly to put zeros above each leading 1.

Work through the following exercises from FMA: from section .: , 9, 7, 9, ; from section .: , 9, , .

4 Solving Matrix/Vector Equations (A x = b) and Matrix/Matrix Equations (A X = B)

(See sections . and .5 of FMA.)

To solve either of these types of equations:
1. Form the augmented matrix by writing the entries of A on one side (typically the left when working by hand, and they need to be on the left if doing the next step in Mathematica) and the entries of b or B on the other side.
2. Put the augmented matrix into rref using elementary row operations.
3. Read off the solution.

If solving a matrix/vector equation, the independent/free variables correspond to columns in which there is no leading 1, and the dependent/basic/bound variables correspond to columns in which there is a leading 1. We then write the basic variables in terms of the free variables, and we can further break the solution up into its particular part and the part corresponding to the associated homogeneous equation A x = 0. For example, if we ended up with an augmented matrix of the form

[ 1  5  0 | 0 ]
[ 0  0  1 | 5 ]

then x2 is free while x1 and x3 are basic. From the first row we see that x1 + 5x2 = 0, so x1 = -5x2, and from the second row x3 = 5. Thus the solution vector would look like

    [ -5x2 ]   [ 0 ]      [ -5 ]
x = [  x2  ] = [ 0 ] + x2 [  1 ]
    [   5  ]   [ 5 ]      [  0 ]

Here [0, 0, 5]^T is the particular solution, and any scalar multiple of [-5, 1, 0]^T is a solution to the associated homogeneous equation A x = 0.

If solving a matrix/matrix equation, the solution is the right side of the augmented matrix, provided the left side (where A was originally) is now the identity matrix! (If not, then there is no solution.)

Note that when solving a matrix/vector equation there are three possible scenarios:
1. There is no solution. (This happens when you encounter a contradiction in the equations, which will look like 0 = (something nonzero) when you read the solution off of the rref augmented matrix.)
2. There is one unique solution. (This happens when the part of the augmented matrix corresponding to A becomes the identity in the rref.)
3. There are infinitely many solutions. (This happens when there are free variables.)

1. Work through the following exercises from FMA: from section .: 9, , ; from section .5: 7, 9, .
2. For each of the following A and b, solve the equation A x = b using Mathematica:
(a) A = , b =
(b) A = , b =
3. For each of the following A and B, solve the equation A X = B using Mathematica:
(a) A = , B =
(b) A = , B =
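The procedure above for solving A x = b can be sketched as follows, restricted to the unique-solution case (scenario 2). The function names are my own, not FMA's or Mathematica's; the `rref` routine folds the backward steps into the forward pass by clearing each pivot column both above and below its leading 1.

```python
from fractions import Fraction

def rref(matrix):
    """Put a matrix into reduced row echelon form using exact arithmetic."""
    A = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        pivot = next((r for r in range(pivot_row, rows) if A[r][col] != 0), None)
        if pivot is None:
            continue
        A[pivot_row], A[pivot] = A[pivot], A[pivot_row]
        lead = A[pivot_row][col]
        A[pivot_row] = [x / lead for x in A[pivot_row]]
        # Clear the pivot column above AND below the leading 1
        # (the backward steps, done as we go).
        for r in range(rows):
            if r != pivot_row and A[r][col] != 0:
                factor = A[r][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return A

def solve(A, b):
    """Solve A x = b when the solution is unique; return None otherwise."""
    n = len(A)
    # Step 1: form the augmented matrix [A | b]; step 2: put it into rref.
    aug = rref([row + [bi] for row, bi in zip(A, b)])
    # Step 3: read off the solution. It is unique exactly when the left
    # block (where A was) has become the identity matrix.
    identity = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    if [row[:n] for row in aug] != identity:
        return None
    return [row[n] for row in aug]

# Example: x + y = 3, 2x - y = 0 has the unique solution x = 1, y = 2.
print(solve([[1, 1], [2, -1]], [3, 0]))
```

Returning `None` covers both the no-solution and infinitely-many-solutions scenarios; distinguishing them would require checking for a 0 = (nonzero) row.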
5 Span and Spanning Sets

Given a set of vectors, we would like to know which other vectors we can create by adding scalar multiples of the vectors in the set.

Definition 3. The span of a set of vectors V = {v1, v2, ..., vn} is the set of all linear combinations of the vectors in V, i.e., all vectors of the form

x = α1 v1 + α2 v2 + ... + αn vn

where α1, α2, ..., αn are scalars (real numbers). We denote the span of V by span(V) or, if listing out the individual vectors, span{v1, v2, ..., vn}.

So how do we figure out which vectors are in the span of vectors v1, v2, ..., vn? The general steps are:
1. Create a matrix by making v1, v2, ..., vn the rows, maintaining the same relative position (i.e., v1 becomes row 1, v2 becomes row 2, and so on). This may seem unnatural, since we typically write vectors in column form, but we are going to perform elementary row operations on this matrix, and making the vectors the rows means that we perform those same operations on v1, v2, ..., vn.
2. Put this matrix into rref.
3. The span of the rows of the rref matrix will be the same as the span of the original set {v1, v2, ..., vn} (the nonzero rows form a basis for the span of those vectors). In other words, any vector that can be created as a linear combination of vectors from {v1, v2, ..., vn} can also be created as a linear combination of the rows of the rref form of the matrix. If any of the rows are all 0s, then the corresponding vectors in the original set are redundant and not necessary.

Example. Consider the two sets of vectors V1 = {u1, u2, u3} and V2 = {w1, w2}.

1. Is span(V1) = span(V2)? Let A be the matrix whose rows are the V1 vectors and B the matrix whose rows are the V2 vectors. Putting these matrices into rref, the nonzero rows of the two rrefs turn out to be the same, so V1 and V2 do span the same set of vectors, and we can generate any vector of the form x = α1 (first shared row) + α2 (second shared row).

2. Is the vector r1 = in the span of these vectors? What about the vector r2 = ? We want to find α1 and α2 such that the combination α1 (first row) + α2 (second row) equals r1. This leads us to a system of four equations in the two unknowns α1 and α2. Now we could take the values from the first and third equations and verify that they work in the second and fourth equations as well (and if they don't, then r1 is not in the span). This is fine for lower-dimensional vectors, but it isn't the most elegant approach in higher dimensions with more vectors. Instead, we can solve this system the way we have before, by forming the augmented matrix and putting it into rref (note that when we do this, the vectors end up in the columns instead of in the rows as before). (The result isn't quite rref, since not all of the rows of zeros are at the bottom of the matrix, but that doesn't matter for what we're doing here.) Now we read off the values of α1 and α2.

We repeat this process for the vector r2. Putting the vectors into a matrix and row reducing yields a third row which says that 0 equals something nonzero. That is a contradiction, so r2 is not in the span of these vectors.

1. (a) Find the span of the vectors v1 = , v2 = , v3 = , v4 = .
(b) Is the vector in span{v1, v2, v3, v4}? What about the vector ?

2. If possible, write as a linear combination of the vectors , , .

3. Show that span{ , , } = span{ , }.

4. Show that span{ , , , } = span{ , , }.

6 Linear (In)Dependence

Closely related to span and spanning sets is the notion of linear independence.

Definition 4. Vectors v1, v2, ..., vn are said to be linearly dependent if there exist scalars α1, α2, ..., αn, not all zero, such that α1 v1 + α2 v2 + ... + αn vn = 0.

Theorem. The vectors v1, v2, ..., vk (with k ≥ 2) are linearly dependent if and only if at least one vi is a linear combination of the others.

Definition 5. Vectors v1, v2, ..., vn are said to be linearly independent if there do not exist scalars α1, α2, ..., αn, not all zero, such that α1 v1 + α2 v2 + ... + αn vn = 0.

To determine whether a set of vectors is linearly dependent, we attempt to solve the equation α1 v1 + α2 v2 + ... + αn vn = 0 using the same method we used when determining whether a vector is in the span of a set of vectors: set up the augmented matrix (putting the vectors in as columns, with a final column of zeros) and put it into rref. If there are nontrivial solutions (i.e., there are free variables, so there are solutions besides α1 = α2 = ... = αn = 0), then the vectors are dependent; if the only solution is the trivial one, then they are independent.

Determine whether each set of vectors is linearly dependent or independent. If dependent, express one of the vectors as a linear combination of the other(s).
1. { , }
2. { , }, where c is any real number.
3. { , , }
4. { , , }

7 Determinants

(Discussed in sections . and . of FMA.)

You've seen 2 × 2 and 3 × 3 determinants in Calc III, and we'll go over those again in class, as well as how to find determinants of higher-dimensional matrices. FMA also goes over the general process. With the time remaining in this semester, there are only two properties of determinants on which we'll focus:
1. Given an n × n matrix A, A is invertible if and only if det(A) ≠ 0.
2. If we have a set of n n-dimensional vectors, then the determinant of the matrix formed by those vectors (putting them into the matrix as either columns or rows; it doesn't matter which) is 0 if and only if the vectors are linearly dependent (so a nonzero determinant means that they are linearly independent).

Note: While both of these are very useful and important from a theoretical perspective, they are less useful computationally, since computing a determinant requires significantly more operations than putting a matrix into rref. Also, the second property applies only in that specific case: if the dimension doesn't match the number of vectors, then we need to use the techniques we used when finding spans.

1. Work through the following exercises from FMA section .: , , , , .
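The second determinant property above can be illustrated with a short sketch. The naming here is my own; the determinant is computed by cofactor expansion along the first row, which is fine at this size, though (as the note above points out) row reduction is far cheaper for large matrices.

```python
def det(M):
    """Determinant of a square matrix via cofactor expansion along row 1."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, then alternate signs across the row.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def independent(vectors):
    """True iff n vectors of dimension n are linearly independent
    (nonzero determinant of the matrix they form)."""
    return det([list(v) for v in vectors]) != 0

print(independent([(1, 0), (0, 1)]))   # True: the standard basis
print(independent([(1, 2), (2, 4)]))   # False: (2, 4) = 2 * (1, 2)
```

Note that `independent` only makes sense when the number of vectors equals their dimension; otherwise the matrix isn't square and we fall back on the rref techniques from the span section.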