Linear Algebra, Summer 2011, pt. 2


June 8, 2011

Contents

1 Inverses
2 Vector Spaces
  2.1 Examples of vector spaces
  2.2 The column space
  2.3 The null space
  2.4 The row space and left null space
3 Linear independence, basis, and dimension
  3.1 An aside
  3.2 Rank, linear independence
  3.3 Span, basis
  3.3.1 Dimension
  3.4 Finding the dimension and a basis for the four fundamental subspaces
  3.5 The previous section, compressed to a page
4 Putting it all together: Solving Ax = b

1 Inverses.

We've already seen how to invert an elementary matrix, but it was only happy coincidence that it was so easy. However, there is an easy algorithm (easy to remember, sort of a pain to calculate) to find an inverse.

Let's suppose for concreteness that we are given a particular 3 x 3 matrix A. It will turn out that A is invertible (remember that this is not always true!), and so whatever 3 x 3 matrix A^-1 is, it must be the case that AA^-1 = I. Recalling how we multiply matrices a column at a time, this means that A times the first column of A^-1 must be (1, 0, 0)^T. That is to say, we wish to find an x such that Ax = (1, 0, 0)^T. Then x will be the first column of A^-1. Similarly, to find y, the second column of A^-1, we will solve Ay = (0, 1, 0)^T. We perform a similar calculation to find the third column of A^-1.

The fastest way to do this in practice is simply to solve all three equations at the same time. That is to say, we set up the augmented matrix [A I], then row reduce and back substitute. We will be left with [I A^-1], assuming A is invertible (otherwise, row reduction will go wrong at some point). Using the A above, we form [A I], row reduce, then back substitute (this process is called Gauss-Jordan elimination), and read off A^-1 as the right-hand block.

We also investigate what happens when this goes wrong. If A is singular, then row reduction breaks down at some step: we reach a stage where no pivot is available, so we can never arrive at the form [I A^-1]. Then we conclude that the matrix A was singular.
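Since the worked matrices on this page did not survive transcription, here is a sketch of the [A I] -> [I A^-1] procedure in code. It uses numpy and a hypothetical invertible 3 x 3 matrix; the function name invert_gauss_jordan is our own.

```python
import numpy as np

def invert_gauss_jordan(A):
    """Invert A by row reducing the augmented matrix [A | I] to [I | A^-1]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])
    for col in range(n):
        # Pick the largest available pivot in this column (partial pivoting).
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if abs(aug[pivot, col]) < 1e-12:
            # Row reduction "goes wrong": no pivot, so A is singular.
            raise ValueError("matrix is singular")
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]

# A hypothetical invertible matrix (not the one from the notes).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.allclose(A @ invert_gauss_jordan(A), np.eye(3)))  # True
```

Feeding it a singular matrix such as [[1, 2], [2, 4]] raises the ValueError, mirroring the breakdown described above.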

2 Vector Spaces.

Much of what we've done so far has been working up the foundations of using matrices as great calculating tools and ways of storing information. Vector spaces hold an important theoretical spot in mathematics. One way to approach them is to realize that many results we'll prove rely only on the following two ingredients:

1. We have a set V, meaning just a collection of objects with no special a priori relations between them. A helpful V to keep in mind is the collection of vectors in R^2, though we'll create a collection of other vector spaces.

2. We will then define two relations on the set V, called addition and scalar multiplication. These two operations have to obey a certain number of properties, which should be familiar, since vectors in R^2 satisfy them.

Properties of vector spaces.

(a) Addition is associative.
(b) Addition is commutative.
(c) There exists an identity element of addition.
(d) Every vector has an additive inverse.
(e) Scalar multiplication distributes over vector addition.
(f) Scalar addition distributes: (a + b)v = av + bv.
(g) Scalar multiplication commutes.
(h) There exists an identity element of scalar multiplication.

Let's gather a stable of examples of vector spaces.

2.1 Examples of vector spaces.

- The collection of all n-dimensional vectors. We also call this space R^n. The scalars are real numbers, and vector addition is, well, vector addition.

- All 5 x 3 matrices. It is usually helpful to think of vectors as just thin matrices, but in this case we think of a matrix as a vector, with the usual addition and scalar multiplication.

- All polynomials of degree less than or equal to 3. Notice that we also know how to multiply two polynomials, but we ignore that while we think of this space as a vector space. Also notice that the polynomials of degree exactly 3 do not form a vector space, since there is no zero vector.

- All sequences. Given two sequences (0, 1, 2, 3, 4, 5, ...) and (1, 1/2, 1/4, 1/8, ...), we may add entries pointwise, and perform scalar multiplication similarly. We might call this space R^∞, in analogy to R^n.

- All functions on R. Given two functions f(x) and g(x), we can add them, h(x) = f(x) + g(x), or multiply by scalars.

Thinking geometrically, any plane through the origin in R^3 is also a vector space: we can add any two vectors on that plane, or multiply any vector by a scalar, and stay on that plane. We call this plane a subspace of R^3, and these sorts of objects will be a major object of study. Inasmuch as the definition of a vector space was somewhat esoteric and nitpicky, the requirement to be a subspace is much more concrete:

Definition 2.1 (Subspace.) A subspace of a vector space V is a subset S of V so that

1. If v, w are in S, then v + w is in S, and
2. For any scalar c and vector v in S, cv is in S.

Intuitively, we say that S is a subspace if it is closed under addition and scalar multiplication.

Let's start with some examples of subsets S that are not subspaces, then talk about subsets that are.

Example 2.1 (Not a subspace.) Let S = {(x, y) in R^2 : y = x^2}. This is a nice graph that we can draw, but it is not closed under either scalar multiplication or addition. Let's show that thoroughly: we can write an element of S as (t, t^2). Checking addition,

    (t, t^2) + (s, s^2) = (t + s, t^2 + s^2),

but (unless s or t is zero), (t + s)^2 != t^2 + s^2, so we are out of the set. Similarly, we check scalar multiplication (note we don't have to do this, since we already know S is not a subspace!):

    c(t, t^2) = (ct, ct^2) != (ct, c^2 t^2),

so long as c and t are nonzero (and c != 1).

Example 2.2 (Not a subspace.) "Wait," you might be saying, "the problem is that y = x^2 is curved, but subspaces should be flat!" This is a good point, and true, but we should be careful, because not all flat subsets are subspaces. Let S = {(x, y) in R^2 : y = 2 - x}. Then S is a line in the plane, but one that doesn't go through the origin. Again, we can write a point in S as (t, 2 - t), but

    (t, 2 - t) + (s, 2 - s) = (t + s, 4 - t - s) != (t + s, 2 - t - s),

and

    c(t, 2 - t) = (ct, 2c - ct) != (ct, 2 - ct).

The problem in the above example was really that the origin is not in S. Notice that the origin must always be in a subspace, since if v is in S, then -v is in S, so v - v = 0 is in S.

Now let's find all the subspaces of R^2.

Example 2.3 (Subspaces of R^2.) One subspace of R^2 is just the zero vector, {0}. It is closed under vector addition (since 0 + 0 = 0), and scalar multiplication (c0 = 0).

Now suppose we have one other element in our subspace S, and call it v. Then certainly all scalar multiples of v must be in S. This corresponds to the line through the origin in the direction of v. In fact, it turns out that this is a subspace: by construction it is closed under scalar multiplication, and if we add cv + dv = (c + d)v, the sum of two scalar multiples of v is also a scalar multiple of v.

Now suppose we add another vector w that is not on the line determined by v. By the previous argument we will need all the points on the line generated by w, but then also all sums of the form

    av + bw = [ v1 w1 ] [ a ]
              [ v2 w2 ] [ b ].

We can see, either geometrically or analytically with some work, that with the proper choice of a and b, we can get to any point on the plane (this relies on the fact that v and w are non-collinear). Hence R^2 has only the subspaces {0}, every line through the origin, and R^2 itself.

Moving up a dimension, R^3 will have the subspaces {0}, lines through the origin, planes through the origin, and R^3.

Example 2.4 (Subspaces of matrix spaces.) The collection D of n x n diagonal matrices is a subspace of M_{n x n}, the vector space of all n x n matrices. We can verify this since adding two diagonal matrices leaves the matrix diagonal, as does multiplying by a scalar. You might also think of M_{n x n} as being a wacky way of writing R^(n^2), and the diagonal matrices are the subspace generated by the n diagonal entries. More concretely, M_{2 x 2} can be thought of as R^4, and the diagonal matrices are those vectors in R^4 with the second and third entries zero.

2.2 The column space.

We will be studying four important subspaces that are generated by a matrix. They are the row space, the column space, the null space, and the left null space. First will be the column space. We begin with a motivating example.

Example 2.5 Suppose we have a system of equations with the coefficient matrix

    A = [ 3 3 ]
        [ 2 0 ]
        [ 3 2 ].

One interesting question to ask is: for what vectors b does Ax = b have a solution? Another way of thinking of this is as follows: given any vector x in R^2, A will give us a vector b in R^3. What does the collection of all these vectors look like? Our intuition about multiplying matrices tells us that the answer to these questions for this particular A is the collection of linear combinations of the columns of A. We call this the column space of A, and denote it by C(A):

    C(A) = {(3a + 3b, 2a, 3a + 2b)^T in R^3 : a, b in R}.

Looking above, it is not immediately clear that C(A) is a subspace in general (or even in particular). In order to show that this is a subspace, we must show that if b and c are vectors in C(A), then so are b + c and kb for all real numbers k. But since b is in C(A), there is an x such that Ax = b, and a y so that Ay = c. Then

    A(x + y) = Ax + Ay = b + c,

so x + y is the desired vector whose image is b + c. Similarly, A(kx) = kAx = kb, so kx is the desired vector to show closure under scalar multiplication. We have shown that the column space of an m x n matrix A is always a subspace of R^m.

Now we have a quick answer to when the equation Ax = b has a solution: precisely when b lies in C(A). Notice that if the row echelon form of a square n x n matrix has n pivots, then we can solve Ax = b for all b, so the column space is all of R^n.

We will discuss a strategy for calculating the column space later, but notice that you can always write it down as a linear combination of all the columns of A. Still, there is interest in getting rid of useless columns: those columns that can be written as a sum of other ones. For example, the column space of

    A = [ -2 3 1 2 ]
        [  1 2 1 2 ]
        [  3 3 1 2 ]

can be written, without thinking, as

    C(A) = {a(-2, 1, 3)^T + b(3, 2, 3)^T + c(1, 1, 1)^T + d(2, 2, 2)^T : a, b, c, d in R}.

However, the sharp reader will notice that we do not need to include the last vector (2, 2, 2)^T, as we could equally well just have c + 2d times (1, 1, 1)^T. In this sense, the last vector (or, equivalently, the second-to-last vector) is extraneous, and we'll write our column space slightly more compactly as

    C(A) = {a(-2, 1, 3)^T + b(3, 2, 3)^T + c(1, 1, 1)^T : a, b, c in R}.

It is important to see that these two subspaces are actually the same; we just write the second one more compactly. To use words that we define later: the three vectors above are linearly independent, and span the subspace C(A). The second way of writing makes it clear that C(A) has dimension 3 (well, makes it clear once we have defined the word dimension!).
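As a numerical sanity check (a sketch, not part of the original notes): b lies in C(A) exactly when appending b to A does not increase the rank. The helper name in_column_space below is our own, applied to the 3 x 2 matrix from Example 2.5.

```python
import numpy as np

A = np.array([[3.0, 3.0],
              [2.0, 0.0],
              [3.0, 2.0]])

def in_column_space(A, b, tol=1e-10):
    # b is a combination of the columns of A iff rank([A | b]) == rank(A).
    return (np.linalg.matrix_rank(np.column_stack([A, b]), tol=tol)
            == np.linalg.matrix_rank(A, tol=tol))

b = A @ np.array([1.0, 2.0])   # built as a combination of the columns
print(in_column_space(A, b))                          # True
print(in_column_space(A, np.array([1.0, 0.0, 0.0])))  # False
```

The second check fails because (1, 0, 0)^T does not lie on the plane spanned by the two columns.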

2.3 The null space.

The next subspace generated by a matrix A that we will look at is called the null space, and is denoted by N(A). The null space is the collection of solutions x to the equation Ax = 0. Hence if A is m x n, then N(A) is a subspace of R^n (contrast this with C(A), which is a subspace of R^m). First let's confirm that N(A) is indeed a subspace:

Closure under vector addition. Suppose x, y are in N(A). By our definition, this means that Ax = 0 and Ay = 0. We need to show that x + y is also in the null space. But this just requires the following calculation:

    A(x + y) = Ax + Ay = 0 + 0 = 0.

Hence N(A) is closed under vector addition.

Closure under scalar multiplication. Now suppose k is in R and x is in N(A). This means that Ax = 0. We need to show that kx is in N(A), which means that A(kx) = 0. This is just the calculation

    A(kx) = kAx = k0 = 0.

Hence N(A) is closed under scalar multiplication.

We close with a quick example:

Example 2.6 (The null space.) We return to the matrix

    A = [ -2 3 1 2 ]
        [  1 2 1 2 ]
        [  3 3 1 2 ],

whose column space we calculated earlier. We also noted that twice the third column is equal to the fourth column. This is the same as observing that the vector (0, 0, 2, -1)^T is in N(A), since

    A (0, 0, 2, -1)^T = 2 (1, 1, 1)^T - (2, 2, 2)^T = 0.

Since N(A) is a subspace, this means all multiples of (0, 0, 2, -1)^T are also in N(A). Soon we will develop the tools to show that these are all the

vectors of N(A), though you may be able to convince yourself by looking at the matrix (or seeking to solve Ax = 0). In any case,

    N(A) = {a(0, 0, 2, -1)^T : a in R}.

2.4 The row space and left null space.

The row space will be the collection of all d such that there exists an x with xA = d. This is equivalent to the column space of A^T. More will be mentioned on this later. Similarly, the left null space is the collection of all x such that xA = 0. This is the null space of A^T.

Summarizing where all these subspaces live, suppose A is an m x n matrix:

- The column space is the collection of m-vectors b such that there is an n-vector x with Ax = b.
- The null space is the collection of n-vectors x such that Ax = 0, where 0 is the m-vector consisting of all zeros.
- The row space is the collection of n-vectors d such that there is an m-vector x with xA = d.
- The left null space is the collection of m-vectors x such that xA = 0, where 0 is the n-vector consisting of all zeros.

3 Linear independence, basis, and dimension.

3.1 An aside.

We now have many balls in the air, and it helps to reflect on them.

We are still waiting to have a really complete solution of the equation Ax = b for general m x n coefficient matrices A. The strategy for this will be to first check that b is in the column space of A, then find a particular solution x_p so that Ax_p = b. Next we notice that we can add elements y in N(A) to x_p and still have a solution:

    A(x_p + y) = Ax_p + Ay = b + 0 = b.
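Example 2.6 can be checked numerically: the right singular vectors of A that belong to zero singular values form an orthonormal basis of N(A). This is a sketch; null_space_basis is our own helper, and the entries of A are as reconstructed above.

```python
import numpy as np

A = np.array([[-2.0, 3.0, 1.0, 2.0],
              [ 1.0, 2.0, 1.0, 2.0],
              [ 3.0, 3.0, 1.0, 2.0]])

def null_space_basis(A, tol=1e-10):
    # Right singular vectors for (numerically) zero singular values span N(A).
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T   # columns form an orthonormal basis of N(A)

N = null_space_basis(A)
print(N.shape[1])             # 1, i.e. N(A) is a line, as claimed
print(np.allclose(A @ N, 0))  # True
```

The single basis column it returns is a unit multiple of (0, 0, 2, -1)^T.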

Thus, information about the column space will let us know for which b's the equation Ax = b is solvable, and information about the null space will let us know, in a precise sense, how many solutions there are.

We learned about vector spaces and subspaces, in particular the column space, null space, row space, and left null space. These will give us information about the matrix, but we are still seeking algorithms for finding these subspaces, as well as language for describing them. This section is mainly concerned with that goal (namely, linear independence, basis, and dimension). We will then use this language to accomplish the goals above.

3.2 Rank, linear independence.

Let us use a 4 x 4 matrix A as an example. Factoring A = LU, it turns out that the pivots are in positions (1, 1), (2, 3) and (3, 4). We call the number of pivots that a matrix has the rank of the matrix. We have just discovered that rank(A) = 3.

The rank will be our first attempt to glean information from a matrix by associating it with a number. The second will be the determinant, which gets significantly more attention in most introductions, but the rank is surprisingly informative for a simple calculation. The information it gives we can already see:

The rank of a matrix is the number of linearly independent rows.

In the matrix A, our last row could also be written as a sum of the first two rows. Let's now finally define the words linearly independent. Note that this is a vector space definition, and applies to all vector spaces, not just R^n.

Definition 3.1 (Linear Independence.) We say a collection of vectors v1, v2, ..., vn is linearly independent if the equation

    a1 v1 + a2 v2 + ... + an vn = 0

has only the solution a1 = a2 = ... = an = 0.

We should note that setting each of the weights aj = 0 for j = 1, ..., n will always satisfy the above equation, so we are only concerned with nonzero solutions. Also note that linear independence applies to a collection of vectors rather than a single vector. We give two examples:

Example 3.1 (Linear Dependence.) The matrix A introduced above has linearly dependent rows, since the first row plus the second row, minus the last row, gives (0, 0, 0, 0). The columns are also linearly dependent (this is not a coincidence!), since 3/2 of the first column plus the second column is equal to the zero vector (in this case the weights are (a1, a2, a3, a4) = (3/2, 1, 0, 0)).

Example 3.2 (Linear Independence.) The vectors (1, 0, 0), (0, 1, 0) and (0, 0, 1) are linearly independent. We can see this since the sum has a particularly simple form:

    a1 (1, 0, 0) + a2 (0, 1, 0) + a3 (0, 0, 1) = (a1, a2, a3).

This vector is zero only when a1 = a2 = a3 = 0.

Example 3.3 (Linear dependence of two vectors.) Given two nonzero vectors v1 and v2, the linear independence picture is particularly clear: if

    a1 v1 + a2 v2 = 0

with a1, a2 nonzero, then we have

    v1 = -(a2/a1) v2.

Geometrically, this means that if v1 and v2 are linearly dependent, then they are parallel: they point along the same line.

A final example that will be important for our four fundamental subspaces:

Example 3.4 (Upper triangular matrices.) Let's go back to our matrix A, but in particular the row echelon form U of A. We already noted that U has 3 linearly independent rows: the first, second and third (any collection of vectors that includes the zero vector is automatically linearly dependent!).

First note that the columns of U are linearly dependent, since 3/2 of the first column plus the second column gives the zero vector. However, we claim that U also has three linearly independent columns: the first, third and fourth. To show this, we must show that

    a1 (column 1) + a2 (column 3) + a3 (column 4) = 0

forces a1 = a2 = a3 = 0. But notice first that a3 must be 0, since the pivot in row three is the only nonzero entry of these columns in the third coordinate. Then, using back substitution in the second coordinate, 2 a2 + 4 a3 = 2 a2 = 0. Hence a2 = 0. Finally, the first coordinate gives us that a1 = 0.

This example was general: if A is upper triangular, then the collection of nonzero rows is linearly independent, and the collection of columns containing pivots is linearly independent. We can see that this is true in general, since back substitution will always give us that each of these coefficients needs to be zero.
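A quick computational restatement (a numpy sketch, not from the notes): a list of vectors is linearly independent exactly when the matrix with those vectors as columns has rank equal to the number of vectors.

```python
import numpy as np

def linearly_independent(vectors):
    M = np.column_stack([np.asarray(v, dtype=float) for v in vectors])
    return np.linalg.matrix_rank(M) == len(vectors)

# Example 3.2: the standard basis vectors of R^3 are independent.
print(linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True

# Any collection containing the zero vector is automatically dependent.
print(linearly_independent([(1, 2, 0), (0, 0, 0)]))             # False
```

Three vectors in R^2 also come out dependent, matching the geometric picture in Example 3.7 below.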

3.3 Span, basis.

Now we define the span of a set of vectors. Again, this is a vector space definition, though it is helpful for the intuition to think of it in the context of R^n.

Definition 3.2 (Span.) The span of a set of vectors v1, v2, ..., vn is the set of all linear combinations of the vectors:

    span{v1, v2, ..., vn} = {a1 v1 + a2 v2 + ... + an vn : a1, ..., an in R}.

Notice that the span of a collection of vectors is a subspace of the vector space they live in. For example, the column space of a matrix is defined as the span of the columns of the matrix. Also, we point out that the span is not picky: it will eat any collection of vectors. We also use the word span as a verb: if S is a subspace and span({v1, v2, ..., vn}) = S, then we say that the vectors {v1, v2, ..., vn} span S.

Example 3.5 (Span is not picky.) We simply note that R^2 = span{(1, 0), (0, 1)}, and that R^2 is equally the span of a longer, linearly dependent list that contains these two vectors along with extras such as (8, 2). By not picky, we mean that the first set is a much more compact way of writing vectors whose span is R^2. The longer set is linearly dependent, meaning there are extraneous vectors in it.

What happens when we throw out the extraneous vectors? We get a basis.

Definition 3.3 (Basis.) Given a subspace S of a vector space V, a basis for S is a collection of vectors {v1, v2, ..., vn} such that

1. span({v1, v2, ..., vn}) = S, and
2. the vectors {v1, v2, ..., vn} are linearly independent.

A basis is great because condition 1 tells us every vector x in S can be written as

    x = a1 v1 + a2 v2 + ... + an vn.

Now suppose also that

    x = b1 v1 + b2 v2 + ... + bn vn.

Then by subtracting we find

    0 = (a1 - b1) v1 + (a2 - b2) v2 + ... + (an - bn) vn.

We then invoke condition 2 of a basis to find that (aj - bj) = 0 for j = 1, ..., n. Hence aj = bj for each j. This means that every vector in a subspace can be written uniquely as a linear combination of basis vectors. Condition 1 lets us use the words "can be written", and condition 2 gives us the word "uniquely".

It is important to note that a subspace does not have a unique basis. For example, the standard basis {(1, 0), (0, 1)} is a basis for R^2, but far from the only one. This leads to a more general observation:

Example 3.6 (Bases of R^2.) How do we show that two vectors in R^2 are linearly independent? We put them as the columns of a matrix A, and try to find a nonzero solution to Ax = 0. In particular, we are solving

    x1 v1 + x2 v2 = Ax = 0,

where v1 and v2 sit in the columns of A. We may either observe through row reduction that rank(A) = 2, meaning A has 2 linearly independent rows and 2 linearly independent columns, or actually solve through back substitution to find that x1 = x2 = 0 is the only solution.

Now notice that whenever a 2 x 2 matrix A has rank 2, the columns will be linearly independent. Also recall that having rank 2 means A has two pivots, which means that A is invertible. Hence A^-1 exists, and so Ax = b has a solution for any b, namely A^-1 b. Thus there is a linear combination of the columns of A that gives b, for any b in R^2, which means that the span of the columns is all of R^2. This is a great fact:

The columns of any invertible 2 x 2 matrix form a basis for R^2.

The same analysis as above shows that there is nothing special about the number 2: the columns of an invertible n x n matrix will be a basis for R^n. We will see through examples that the columns of a singular square matrix are not a basis for R^n.
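The "great fact" can be tested directly: for an invertible A, the coordinates of any b in the column basis are A^-1 b. A sketch with a hypothetical invertible 2 x 2 matrix (the specific vectors used in the notes' example were lost in transcription):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # invertible, so its columns form a basis of R^2
b = np.array([3.0, 5.0])     # an arbitrary target vector

x = np.linalg.solve(A, b)    # the unique coordinates of b in the column basis
print(np.allclose(A @ x, b))  # True: b is reached, and x is unique
```

Here x works out to (-2, 5): indeed -2 times (1, 0)^T plus 5 times (1, 1)^T equals (3, 5)^T.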

We approach the same example, geometrically.

Example 3.7 (Bases of R^2, geometrically.) Notice that any single nonzero vector in R^2 is linearly independent, but its span will only be the points lying on the line through the origin that it determines. Conversely, any three vectors v1, v2 and v3 (assuming no two of the vectors are parallel) will span R^2, but are linearly dependent, since you can draw a path using only the directions v1 and v2 to get to v3. The right number is two: any two non-collinear vectors in R^2 will span R^2 and be linearly independent.

These two examples were the same; one argument was made analytically, the other geometrically.

3.3.1 Dimension.

Now we have seen that vector spaces do not have unique bases, but one thing is unique: any two bases of a vector space S will have the same number of basis elements. We call this number the dimension of S. In general, when we encounter a subspace or vector space, the two questions we will ask are:

1. What is the dimension?
2. What is a basis?

Knowing the answer to the first question tells you quite a bit about a vector space. Knowing the answer to the second question lets you know everything about a vector space (in fact, we typically define a vector space by just writing down a basis).

First, let's prove the assertion we made earlier:

Proposition 3.1 Every basis for a vector space V has the same number of elements.

Proof. This will be a proof by contradiction. We will assume that we have two different bases for V with different numbers of elements, then show that one of them wasn't actually a basis. So let {v1, v2, ..., vm} and {w1, w2, ..., wn} be two bases for V, with m < n. Since the basis elements are also elements of V, we can write each wj uniquely as a linear combination of the vk's:

    wj = a_{1,j} v1 + a_{2,j} v2 + ... + a_{m,j} vm = Σ_{k=1}^m a_{k,j} vk.

Written as a matrix equation, this says that W = V A, where W is the matrix with columns wj, V is the matrix with columns vj, and A = (a_{i,j}) is the (unique) coefficient matrix we just wrote down. Now A is m x n with m < n, so the system Ax = 0 has more unknowns than equations, and therefore has a nonzero solution x. But then Wx = VAx = V0 = 0, so there is a nontrivial linear combination of the columns of W giving zero, and hence {w1, w2, ..., wn} was not actually linearly independent.

3.4 Finding the dimension and a basis for the four fundamental subspaces.

As an application of all the definitions we just gave, we give an algorithm for finding first the dimension of each of the four fundamental subspaces, and then a basis. We will go through the steps in general, while at the same time computing a concrete example. For that example we let A be a particular 3 x 5 matrix.

Step 1. Preliminaries. Perform Gauss-Jordan elimination on the augmented matrix (A I). Note that in general, if A is m x n, then this identity matrix is m x m. Remember also the difference between Gaussian elimination and Gauss-Jordan elimination: Gaussian elimination produces the row echelon form, and proceeding to back substitute (i.e., completing Gauss-Jordan elimination) produces the reduced row echelon form. For our example, this gives

    R = [ 1 2 0 1 -2 ]
        [ 0 0 1 1  2 ]
        [ 0 0 0 0  0 ],

together with a record E of the row operations.

We will denote this process by (A I) -> (R E), where R stands for the reduced row echelon form, and E stands for elimination. Notice that the rows of E record exactly the linear combinations of rows of A that give the rows of R. That is to say, in our example, 4 times row 1 of A, minus 3 times row 2, gives the first row of R.

As a bit of language, we will call the columns that contain pivots pivot columns, and those that don't, free columns. We will denote by r the rank of the matrix; in this case the rank is 2. In general, we will have r pivot columns and n - r free columns (recall that A is m x n). Between the matrices A, R and E, we can read off nearly all the information we need. The null space will still take a bit of work.

Step 2. Calculating the column space. We can read off both the dimension and a basis right away: the dimension of C(A) is r, and a basis consists of the columns of A that correspond to the pivot columns of R. Hence, for our example, C(A) has dimension 2, and a basis is given by the first and third columns of A.

It is a (possibly misleading) accident if these columns generate the same subspace as the corresponding pivot columns of R. This is not true in general! We should say why the pivot columns of A form a basis (and why the corresponding columns of R need not generate the same space). The second question is easier: there's no reason for them to. As a simple example, consider the matrix

    [ 1 1 ]
    [ 2 2 ].

Its reduced row echelon form is

    [ 1 1 ]
    [ 0 0 ],

and we can see that a basis for the column space of the first is just the span of (1, 2)^T, while the column space of the second is generated by (1, 0)^T, so the two are not equal.

To explain why these columns of A form a basis for C(A), first note that the collection of all columns of A certainly spans C(A): this is the definition of the column space. However, the columns of A are not necessarily linearly independent. The information we get from R is that

    Rx = 0 if and only if Ax = 0.

Further, each free column of R is a linear combination of the pivot columns, so the pivot columns are the biggest linearly independent collection of columns of R, and hence the biggest linearly independent collection of columns of A.

Step 3. Calculating the null space. The dimension of N(A) is easy: each free column corresponds to a basis element of the null space. Hence in general the dimension of the null space is n - r. To find the null space, we will have to actually solve Ax = 0. However, we have already put A into reduced row echelon form R, so we can solve instead Rx = 0. What we do is set one of our free variables equal to 1 and the rest equal to 0, then solve. This gives our first basis vector. Then we repeat for each free variable to get a basis. Working through an example will make this clear. We are solving

    [ 1 2 0 1 -2 ] [ x1 ]   [ 0 ]
    [ 0 0 1 1  2 ] [ x2 ] = [ 0 ]
    [ 0 0 0 0  0 ] [ x3 ]   [ 0 ],
                   [ x4 ]
                   [ x5 ]

where the free variables are x2, x4 and x5. First we set x2 = 1, and x4 = x5 = 0. Multiplying out, we get

    x1 + 2 = 0,    x3 = 0.

Hence x1 = -2 and x3 = 0, and our first basis vector is

    (-2, 1, 0, 0, 0)^T.

We can confirm this by noting that the second column of A is twice the first column. Proceeding, we set x4 = 1, and x2 = x5 = 0. Now we multiply out to find:

    x1 + 1 = 0,    x3 + 1 = 0.

Hence x1 = x3 = -1, and the second basis vector for N(A) is

    (-1, 0, -1, 1, 0)^T.

Again we can confirm that the sum of the first and third columns of A gives the fourth column of A. Finally, we set x5 = 1, and x2 = x4 = 0. Then multiplying out gives:

    x1 - 2 = 0,    x3 + 2 = 0,

so x1 = 2, x3 = -2, and the final basis vector is

    (2, 0, -2, 0, 1)^T.

Hence, a basis for N(A) is given by

    (-2, 1, 0, 0, 0)^T,    (-1, 0, -1, 1, 0)^T,    (2, 0, -2, 0, 1)^T.

Our algorithm guarantees that this collection is linearly independent: if we refer to the above basis as {v1, v2, v3}, then

    c1 v1 + c2 v2 + c3 v3 = (-2 c1 - c2 + 2 c3, c1, -c2 - 2 c3, c2, c3)^T.

Hence, for this to equal 0, the second coordinate says that c1 = 0, the fourth says c2 = 0, and the fifth says that c3 = 0. From this we conclude that these vectors are linearly independent. They also span the null space, since this is a complete solution to Rx = 0.

Step 4. The row space. Great news! Row operations do not change the row space (contrast this with how row operations do change the column space above). Hence, the dimension of the row space is also r, and a basis is given by the pivot rows of R (though the pivot rows of A also form a basis, it is just often messier). In our example, we get that the dimension of C(A^T) is 2, and a basis is

    { (1, 2, 0, 1, -2), (0, 0, 1, 1, 2) }.

Step 5. The left null space. More great news! Since E keeps track of our row operations, we can take the rows of E corresponding to the zero rows of R to get a basis for the left null space. These are exactly the combinations of rows that give zero. The dimension will be m - r: the number of rows, minus the number of pivots. In our example, the dimension of the left null space is 1, and a basis is the last row of E.

3.5 The previous section, compressed to a page.

Suppose we have a matrix A. Then if we take the augmented matrix [ A I ] and use row operations to reduce A to reduced row echelon form R, we call the matrix on the right E, for elimination matrix (note the matrix E's relationship to the elimination matrices from chapter 1). That is, we use row reduction to go from

    [ A I ] -> [ R E ].

Now we can easily read off the rank r of the matrix A, by counting the pivots in R, as well as calculate:

- Dimension of C(A): the rank, r.
- Basis for C(A): the r columns of A that correspond to the pivot columns of R.
- Dimension of C(A^T): also the rank, r.
- Basis for C(A^T): the first r rows of R (since row operations do not change the row space, and the first r rows are the pivot rows).
- Dimension of N(A): the number of free variables, which is n - r.
- Basis for N(A): we find n - r solutions to the system Rx = 0, by setting one free variable equal to 1 at a time, while leaving the rest equal to zero, and solving.
- Dimension of N(A^T): since this is just the null space of A^T, which has r pivots and m - r free variables, this must have dimension m - r.
- Basis for N(A^T): take the bottom m - r rows of E.
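The compressed recipe above can be sketched in code. The rref helper below is our own (numpy has no built-in reduced row echelon form), and the 3 x 5 matrix is hypothetical, built to have rank 2 with the same RREF as the running example; the printed dimensions follow the table above.

```python
import numpy as np

def rref(M, tol=1e-10):
    """Reduced row echelon form of M, plus the list of pivot columns."""
    R = np.asarray(M, dtype=float).copy()
    pivots, row = [], 0
    for col in range(R.shape[1]):
        if row == R.shape[0]:
            break
        p = row + np.argmax(np.abs(R[row:, col]))
        if abs(R[p, col]) < tol:
            continue                      # free column, no pivot here
        R[[row, p]] = R[[p, row]]
        R[row] /= R[row, col]
        for i in range(R.shape[0]):
            if i != row:
                R[i] -= R[i, col] * R[row]
        pivots.append(col)
        row += 1
    return R, pivots

# Hypothetical m = 3, n = 5 matrix of rank r = 2.
A = np.array([[1.0, 2.0, 0.0, 1.0, -2.0],
              [1.0, 2.0, 1.0, 2.0,  0.0],
              [2.0, 4.0, 1.0, 3.0, -2.0]])
R, pivots = rref(A)
r = len(pivots)
m, n = A.shape
print(r)       # dim C(A) = dim C(A^T) = r = 2
print(n - r)   # dim N(A)   = 3
print(m - r)   # dim N(A^T) = 1
```

The pivot list also tells us which columns of A to keep as a basis for C(A).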

4 Putting it all together: Solving Ax = b.

Finally, given a coefficient matrix A, you can:

1. Write succinctly for which b's there is a solution (by writing a linear combination of basis vectors).
2. Given a b, say whether Ax = b has a solution (this is the same as asking whether b is in C(A)).
3. Find all solutions to Ax = b, by taking any particular solution x_p, and adding on all vectors x_n in the null space (since A(x_p + x_n) = b + 0 = b).
4. Give a number summarizing how many solutions there are (this is the dimension of the null space).

We go through one example in detail.

Example 4.1 (Solving Ax = b.) Suppose we wish to solve the system of linear equations

    x + 2y + 3z = 2
    x + 2y + 4z = 3.

The coefficient matrix here is

    A = [ 1 2 3 ]
        [ 1 2 4 ].

We will answer the above questions in order, but first we will find a basis for the column space and null space. If we go through (A I) -> (R E), we'll get

    (R E) = [ 1 2 0 |  4 -3 ]
            [ 0 0 1 | -1  1 ].

Hence C(A) has dimension 2, and basis

    { (1, 1)^T, (3, 4)^T }.

However, we also know that every 2-dimensional subspace of R^2 is all of R^2, so C(A) is all of R^2, and has the much nicer basis

    { (1, 0)^T, (0, 1)^T }.

Now N(A) will have dimension 1, and we can just see from A that a nonzero vector generating N(A) (and therefore a basis) is

    (-2, 1, 0)^T.

1. Ax = b has a solution whenever b is in C(A) = R^2. Hence, there are no restrictions on b.

2. The vector (2, 3)^T is certainly in R^2, so there will be a solution.

3. We notice that a particular solution is x_p = (-1, 0, 1)^T, so our complete solution is

       x = (-1, 0, 1)^T + t(-2, 1, 0)^T = (-1 - 2t, t, 1)^T,

   where t is any real number.

4. Finally, we can say that the solution space of Ax = b has dimension 1 (the dimension of the null space).
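The complete solution in Example 4.1 can be verified numerically; this sketch simply restates the arrays from the example above.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 4.0]])
b = np.array([2.0, 3.0])

x_p = np.array([-1.0, 0.0, 1.0])    # a particular solution: A x_p = b
n_vec = np.array([-2.0, 1.0, 0.0])  # spans N(A), since column 2 = 2 * column 1

print(np.allclose(A @ x_p, b))      # True
# Every x_p + t * n_vec is also a solution, for any real t.
print(all(np.allclose(A @ (x_p + t * n_vec), b) for t in (-1.0, 0.0, 2.5)))  # True
```

This illustrates item 3 above: one particular solution plus anything in the null space sweeps out the whole solution set.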


More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS

MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS 1. HW 1: Due September 4 1.1.21. Suppose v, w R n and c is a scalar. Prove that Span(v + cw, w) = Span(v, w). We must prove two things: that every element

More information

Linear Algebra. Chapter Linear Equations

Linear Algebra. Chapter Linear Equations Chapter 3 Linear Algebra Dixit algorizmi. Or, So said al-khwarizmi, being the opening words of a 12 th century Latin translation of a work on arithmetic by al-khwarizmi (ca. 78 84). 3.1 Linear Equations

More information

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture Week9 Vector Spaces 9. Opening Remarks 9.. Solvable or not solvable, that s the question Consider the picture (,) (,) p(χ) = γ + γ χ + γ χ (, ) depicting three points in R and a quadratic polynomial (polynomial

More information

Chapter 4. Solving Systems of Equations. Chapter 4

Chapter 4. Solving Systems of Equations. Chapter 4 Solving Systems of Equations 3 Scenarios for Solutions There are three general situations we may find ourselves in when attempting to solve systems of equations: 1 The system could have one unique solution.

More information

LECTURES 14/15: LINEAR INDEPENDENCE AND BASES

LECTURES 14/15: LINEAR INDEPENDENCE AND BASES LECTURES 14/15: LINEAR INDEPENDENCE AND BASES MA1111: LINEAR ALGEBRA I, MICHAELMAS 2016 1. Linear Independence We have seen in examples of span sets of vectors that sometimes adding additional vectors

More information

Gaussian elimination

Gaussian elimination Gaussian elimination October 14, 2013 Contents 1 Introduction 1 2 Some definitions and examples 2 3 Elementary row operations 7 4 Gaussian elimination 11 5 Rank and row reduction 16 6 Some computational

More information

Solving Systems of Equations Row Reduction

Solving Systems of Equations Row Reduction Solving Systems of Equations Row Reduction November 19, 2008 Though it has not been a primary topic of interest for us, the task of solving a system of linear equations has come up several times. For example,

More information

Chapter 3. Vector spaces

Chapter 3. Vector spaces Chapter 3. Vector spaces Lecture notes for MA1111 P. Karageorgis pete@maths.tcd.ie 1/22 Linear combinations Suppose that v 1,v 2,...,v n and v are vectors in R m. Definition 3.1 Linear combination We say

More information

BASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C =

BASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C = CHAPTER I BASIC NOTIONS (a) 8666 and 8833 (b) a =6,a =4 will work in the first case, but there are no possible such weightings to produce the second case, since Student and Student 3 have to end up with

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

1 Last time: inverses

1 Last time: inverses MATH Linear algebra (Fall 8) Lecture 8 Last time: inverses The following all mean the same thing for a function f : X Y : f is invertible f is one-to-one and onto 3 For each b Y there is exactly one a

More information

ENGINEERING MATH 1 Fall 2009 VECTOR SPACES

ENGINEERING MATH 1 Fall 2009 VECTOR SPACES ENGINEERING MATH 1 Fall 2009 VECTOR SPACES A vector space, more specifically, a real vector space (as opposed to a complex one or some even stranger ones) is any set that is closed under an operation of

More information

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS chapter MORE MATRIX ALGEBRA GOALS In Chapter we studied matrix operations and the algebra of sets and logic. We also made note of the strong resemblance of matrix algebra to elementary algebra. The reader

More information

Example: 2x y + 3z = 1 5y 6z = 0 x + 4z = 7. Definition: Elementary Row Operations. Example: Type I swap rows 1 and 3

Example: 2x y + 3z = 1 5y 6z = 0 x + 4z = 7. Definition: Elementary Row Operations. Example: Type I swap rows 1 and 3 Linear Algebra Row Reduced Echelon Form Techniques for solving systems of linear equations lie at the heart of linear algebra. In high school we learn to solve systems with or variables using elimination

More information

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)

GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) D. ARAPURA Gaussian elimination is the go to method for all basic linear classes including this one. We go summarize the main ideas. 1.

More information

Math 308 Midterm Answers and Comments July 18, Part A. Short answer questions

Math 308 Midterm Answers and Comments July 18, Part A. Short answer questions Math 308 Midterm Answers and Comments July 18, 2011 Part A. Short answer questions (1) Compute the determinant of the matrix a 3 3 1 1 2. 1 a 3 The determinant is 2a 2 12. Comments: Everyone seemed to

More information

Linear independence, span, basis, dimension - and their connection with linear systems

Linear independence, span, basis, dimension - and their connection with linear systems Linear independence span basis dimension - and their connection with linear systems Linear independence of a set of vectors: We say the set of vectors v v..v k is linearly independent provided c v c v..c

More information

Linear Systems. Math A Bianca Santoro. September 23, 2016

Linear Systems. Math A Bianca Santoro. September 23, 2016 Linear Systems Math A4600 - Bianca Santoro September 3, 06 Goal: Understand how to solve Ax = b. Toy Model: Let s study the following system There are two nice ways of thinking about this system: x + y

More information

Linear Independence Reading: Lay 1.7

Linear Independence Reading: Lay 1.7 Linear Independence Reading: Lay 17 September 11, 213 In this section, we discuss the concept of linear dependence and independence I am going to introduce the definitions and then work some examples and

More information

This lecture is a review for the exam. The majority of the exam is on what we ve learned about rectangular matrices.

This lecture is a review for the exam. The majority of the exam is on what we ve learned about rectangular matrices. Exam review This lecture is a review for the exam. The majority of the exam is on what we ve learned about rectangular matrices. Sample question Suppose u, v and w are non-zero vectors in R 7. They span

More information

Linear Algebra, Summer 2011, pt. 3

Linear Algebra, Summer 2011, pt. 3 Linear Algebra, Summer 011, pt. 3 September 0, 011 Contents 1 Orthogonality. 1 1.1 The length of a vector....................... 1. Orthogonal vectors......................... 3 1.3 Orthogonal Subspaces.......................

More information

2 Systems of Linear Equations

2 Systems of Linear Equations 2 Systems of Linear Equations A system of equations of the form or is called a system of linear equations. x + 2y = 7 2x y = 4 5p 6q + r = 4 2p + 3q 5r = 7 6p q + 4r = 2 Definition. An equation involving

More information

Elementary Linear Algebra

Elementary Linear Algebra Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We

More information

MATH 310, REVIEW SHEET 2

MATH 310, REVIEW SHEET 2 MATH 310, REVIEW SHEET 2 These notes are a very short summary of the key topics in the book (and follow the book pretty closely). You should be familiar with everything on here, but it s not comprehensive,

More information

Lecture 2 Systems of Linear Equations and Matrices, Continued

Lecture 2 Systems of Linear Equations and Matrices, Continued Lecture 2 Systems of Linear Equations and Matrices, Continued Math 19620 Outline of Lecture Algorithm for putting a matrix in row reduced echelon form - i.e. Gauss-Jordan Elimination Number of Solutions

More information

Linear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016

Linear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016 Linear Algebra Notes Lecture Notes, University of Toronto, Fall 2016 (Ctd ) 11 Isomorphisms 1 Linear maps Definition 11 An invertible linear map T : V W is called a linear isomorphism from V to W Etymology:

More information

Abstract & Applied Linear Algebra (Chapters 1-2) James A. Bernhard University of Puget Sound

Abstract & Applied Linear Algebra (Chapters 1-2) James A. Bernhard University of Puget Sound Abstract & Applied Linear Algebra (Chapters 1-2) James A. Bernhard University of Puget Sound Copyright 2018 by James A. Bernhard Contents 1 Vector spaces 3 1.1 Definitions and basic properties.................

More information

Getting Started with Communications Engineering

Getting Started with Communications Engineering 1 Linear algebra is the algebra of linear equations: the term linear being used in the same sense as in linear functions, such as: which is the equation of a straight line. y ax c (0.1) Of course, if we

More information

Department of Aerospace Engineering AE602 Mathematics for Aerospace Engineers Assignment No. 4

Department of Aerospace Engineering AE602 Mathematics for Aerospace Engineers Assignment No. 4 Department of Aerospace Engineering AE6 Mathematics for Aerospace Engineers Assignment No.. Decide whether or not the following vectors are linearly independent, by solving c v + c v + c 3 v 3 + c v :

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Matrices and Matrix Algebra.

Matrices and Matrix Algebra. Matrices and Matrix Algebra 3.1. Operations on Matrices Matrix Notation and Terminology Matrix: a rectangular array of numbers, called entries. A matrix with m rows and n columns m n A n n matrix : a square

More information

1 Last time: determinants

1 Last time: determinants 1 Last time: determinants Let n be a positive integer If A is an n n matrix, then its determinant is the number det A = Π(X, A)( 1) inv(x) X S n where S n is the set of n n permutation matrices Π(X, A)

More information

Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124

Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124 Matrices Math 220 Copyright 2016 Pinaki Das This document is freely redistributable under the terms of the GNU Free Documentation License For more information, visit http://wwwgnuorg/copyleft/fdlhtml Contents

More information

Math Computation Test 1 September 26 th, 2016 Debate: Computation vs. Theory Whatever wins, it ll be Huuuge!

Math Computation Test 1 September 26 th, 2016 Debate: Computation vs. Theory Whatever wins, it ll be Huuuge! Math 5- Computation Test September 6 th, 6 Debate: Computation vs. Theory Whatever wins, it ll be Huuuge! Name: Answer Key: Making Math Great Again Be sure to show your work!. (8 points) Consider the following

More information

Study Guide for Linear Algebra Exam 2

Study Guide for Linear Algebra Exam 2 Study Guide for Linear Algebra Exam 2 Term Vector Space Definition A Vector Space is a nonempty set V of objects, on which are defined two operations, called addition and multiplication by scalars (real

More information

4 ORTHOGONALITY ORTHOGONALITY OF THE FOUR SUBSPACES 4.1

4 ORTHOGONALITY ORTHOGONALITY OF THE FOUR SUBSPACES 4.1 4 ORTHOGONALITY ORTHOGONALITY OF THE FOUR SUBSPACES 4.1 Two vectors are orthogonal when their dot product is zero: v w = orv T w =. This chapter moves up a level, from orthogonal vectors to orthogonal

More information

18.06SC Final Exam Solutions

18.06SC Final Exam Solutions 18.06SC Final Exam Solutions 1 (4+7=11 pts.) Suppose A is 3 by 4, and Ax = 0 has exactly 2 special solutions: 1 2 x 1 = 1 and x 2 = 1 1 0 0 1 (a) Remembering that A is 3 by 4, find its row reduced echelon

More information

6 EIGENVALUES AND EIGENVECTORS

6 EIGENVALUES AND EIGENVECTORS 6 EIGENVALUES AND EIGENVECTORS INTRODUCTION TO EIGENVALUES 61 Linear equations Ax = b come from steady state problems Eigenvalues have their greatest importance in dynamic problems The solution of du/dt

More information

Rectangular Systems and Echelon Forms

Rectangular Systems and Echelon Forms CHAPTER 2 Rectangular Systems and Echelon Forms 2.1 ROW ECHELON FORM AND RANK We are now ready to analyze more general linear systems consisting of m linear equations involving n unknowns a 11 x 1 + a

More information

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018 Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry

More information

14: Hunting the Nullspace. Lecture

14: Hunting the Nullspace. Lecture Math 2270 - Lecture 14: Hunting the Nullspace Dylan Zwick Fall 2012 This is the second of two lectures covering the nulispace of a matrix A. In this lecture we move on from the what of a nulispace, and

More information

Linear algebra and differential equations (Math 54): Lecture 10

Linear algebra and differential equations (Math 54): Lecture 10 Linear algebra and differential equations (Math 54): Lecture 10 Vivek Shende February 24, 2016 Hello and welcome to class! As you may have observed, your usual professor isn t here today. He ll be back

More information

Example: 2x y + 3z = 1 5y 6z = 0 x + 4z = 7. Definition: Elementary Row Operations. Example: Type I swap rows 1 and 3

Example: 2x y + 3z = 1 5y 6z = 0 x + 4z = 7. Definition: Elementary Row Operations. Example: Type I swap rows 1 and 3 Math 0 Row Reduced Echelon Form Techniques for solving systems of linear equations lie at the heart of linear algebra. In high school we learn to solve systems with or variables using elimination and substitution

More information

pset3-sol September 7, 2017

pset3-sol September 7, 2017 pset3-sol September 7, 2017 1 18.06 pset 3 Solutions 1.1 Problem 1 Suppose that you solve AX = B with and find that X is 1 1 1 1 B = 0 2 2 2 1 1 0 1 1 1 0 1 X = 1 0 1 3 1 0 2 1 1.1.1 (a) What is A 1? (You

More information

What is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A =

What is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A = STUDENT S COMPANIONS IN BASIC MATH: THE ELEVENTH Matrix Reloaded by Block Buster Presumably you know the first part of matrix story, including its basic operations (addition and multiplication) and row

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date April 29, 23 2 Contents Motivation for the course 5 2 Euclidean n dimensional Space 7 2. Definition of n Dimensional Euclidean Space...........

More information

THE NULLSPACE OF A: SOLVING AX = 0 3.2

THE NULLSPACE OF A: SOLVING AX = 0 3.2 32 The Nullspace of A: Solving Ax = 0 11 THE NULLSPACE OF A: SOLVING AX = 0 32 This section is about the space of solutions to Ax = 0 The matrix A can be square or rectangular One immediate solution is

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date May 9, 29 2 Contents 1 Motivation for the course 5 2 Euclidean n dimensional Space 7 2.1 Definition of n Dimensional Euclidean Space...........

More information

Chapter 4 & 5: Vector Spaces & Linear Transformations

Chapter 4 & 5: Vector Spaces & Linear Transformations Chapter 4 & 5: Vector Spaces & Linear Transformations Philip Gressman University of Pennsylvania Philip Gressman Math 240 002 2014C: Chapters 4 & 5 1 / 40 Objective The purpose of Chapter 4 is to think

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

4 Elementary matrices, continued

4 Elementary matrices, continued 4 Elementary matrices, continued We have identified 3 types of row operations and their corresponding elementary matrices. To repeat the recipe: These matrices are constructed by performing the given row

More information

Math 369 Exam #2 Practice Problem Solutions

Math 369 Exam #2 Practice Problem Solutions Math 369 Exam #2 Practice Problem Solutions 2 5. Is { 2, 3, 8 } a basis for R 3? Answer: No, it is not. To show that it is not a basis, it suffices to show that this is not a linearly independent set.

More information

Chapter 2 Notes, Linear Algebra 5e Lay

Chapter 2 Notes, Linear Algebra 5e Lay Contents.1 Operations with Matrices..................................1.1 Addition and Subtraction.............................1. Multiplication by a scalar............................ 3.1.3 Multiplication

More information

Chapter 2: Matrix Algebra

Chapter 2: Matrix Algebra Chapter 2: Matrix Algebra (Last Updated: October 12, 2016) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). Write A = 1. Matrix operations [a 1 a n. Then entry

More information

Elementary maths for GMT

Elementary maths for GMT Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

MTH 464: Computational Linear Algebra

MTH 464: Computational Linear Algebra MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University March 2, 2018 Linear Algebra (MTH 464)

More information

MH1200 Final 2014/2015

MH1200 Final 2014/2015 MH200 Final 204/205 November 22, 204 QUESTION. (20 marks) Let where a R. A = 2 3 4, B = 2 3 4, 3 6 a 3 6 0. For what values of a is A singular? 2. What is the minimum value of the rank of A over all a

More information

MATH 2360 REVIEW PROBLEMS

MATH 2360 REVIEW PROBLEMS MATH 2360 REVIEW PROBLEMS Problem 1: In (a) (d) below, either compute the matrix product or indicate why it does not exist: ( )( ) 1 2 2 1 (a) 0 1 1 2 ( ) 0 1 2 (b) 0 3 1 4 3 4 5 2 5 (c) 0 3 ) 1 4 ( 1

More information

(v, w) = arccos( < v, w >

(v, w) = arccos( < v, w > MA322 F all206 Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: Commutativity:

More information

Linear Algebra Handout

Linear Algebra Handout Linear Algebra Handout References Some material and suggested problems are taken from Fundamentals of Matrix Algebra by Gregory Hartman, which can be found here: http://www.vmi.edu/content.aspx?id=779979.

More information

18.06 Problem Set 3 Due Wednesday, 27 February 2008 at 4 pm in

18.06 Problem Set 3 Due Wednesday, 27 February 2008 at 4 pm in 8.6 Problem Set 3 Due Wednesday, 27 February 28 at 4 pm in 2-6. Problem : Do problem 7 from section 2.7 (pg. 5) in the book. Solution (2+3+3+2 points) a) False. One example is when A = [ ] 2. 3 4 b) False.

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

R b. x 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8 x 9 1 1, x h. , x p. x 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8 x 9

R b. x 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8 x 9 1 1, x h. , x p. x 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8 x 9 The full solution of Ax b x x p x h : The general solution is the sum of any particular solution of the system Ax b plus the general solution of the corresponding homogeneous system Ax. ) Reduce A b to

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

First we introduce the sets that are going to serve as the generalizations of the scalars.

First we introduce the sets that are going to serve as the generalizations of the scalars. Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................

More information

Solution Set 3, Fall '12

Solution Set 3, Fall '12 Solution Set 3, 86 Fall '2 Do Problem 5 from 32 [ 3 5 Solution (a) A = Only one elimination step is needed to produce the 2 6 echelon form The pivot is the in row, column, and the entry to eliminate is

More information

Linear Algebra for Beginners Open Doors to Great Careers. Richard Han

Linear Algebra for Beginners Open Doors to Great Careers. Richard Han Linear Algebra for Beginners Open Doors to Great Careers Richard Han Copyright 2018 Richard Han All rights reserved. CONTENTS PREFACE... 7 1 - INTRODUCTION... 8 2 SOLVING SYSTEMS OF LINEAR EQUATIONS...

More information

Chapter 1: Linear Equations

Chapter 1: Linear Equations Chapter : Linear Equations (Last Updated: September, 6) The material for these notes is derived primarily from Linear Algebra and its applications by David Lay (4ed).. Systems of Linear Equations Before

More information

Lecture 6: Finite Fields

Lecture 6: Finite Fields CCS Discrete Math I Professor: Padraic Bartlett Lecture 6: Finite Fields Week 6 UCSB 2014 It ain t what they call you, it s what you answer to. W. C. Fields 1 Fields In the next two weeks, we re going

More information

Matrices related to linear transformations

Matrices related to linear transformations Math 4326 Fall 207 Matrices related to linear transformations We have encountered several ways in which matrices relate to linear transformations. In this note, I summarize the important facts and formulas

More information

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved Fundamentals of Linear Algebra Marcel B. Finan Arkansas Tech University c All Rights Reserved 2 PREFACE Linear algebra has evolved as a branch of mathematics with wide range of applications to the natural

More information

MATH 167: APPLIED LINEAR ALGEBRA Chapter 2

MATH 167: APPLIED LINEAR ALGEBRA Chapter 2 MATH 167: APPLIED LINEAR ALGEBRA Chapter 2 Jesús De Loera, UC Davis February 1, 2012 General Linear Systems of Equations (2.2). Given a system of m equations and n unknowns. Now m n is OK! Apply elementary

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information

Span & Linear Independence (Pop Quiz)

Span & Linear Independence (Pop Quiz) Span & Linear Independence (Pop Quiz). Consider the following vectors: v = 2, v 2 = 4 5, v 3 = 3 2, v 4 = Is the set of vectors S = {v, v 2, v 3, v 4 } linearly independent? Solution: Notice that the number

More information

Differential Equations

Differential Equations This document was written and copyrighted by Paul Dawkins. Use of this document and its online version is governed by the Terms and Conditions of Use located at. The online version of this document is

More information

Chapter 1: Linear Equations

Chapter 1: Linear Equations Chapter : Linear Equations (Last Updated: September, 7) The material for these notes is derived primarily from Linear Algebra and its applications by David Lay (4ed).. Systems of Linear Equations Before

More information

4 Elementary matrices, continued

4 Elementary matrices, continued 4 Elementary matrices, continued We have identified 3 types of row operations and their corresponding elementary matrices. If you check the previous examples, you ll find that these matrices are constructed

More information

A SHORT SUMMARY OF VECTOR SPACES AND MATRICES

A SHORT SUMMARY OF VECTOR SPACES AND MATRICES A SHORT SUMMARY OF VECTOR SPACES AND MATRICES This is a little summary of some of the essential points of linear algebra we have covered so far. If you have followed the course so far you should have no

More information

18.06 Professor Johnson Quiz 1 October 3, 2007

18.06 Professor Johnson Quiz 1 October 3, 2007 18.6 Professor Johnson Quiz 1 October 3, 7 SOLUTIONS 1 3 pts.) A given circuit network directed graph) which has an m n incidence matrix A rows = edges, columns = nodes) and a conductance matrix C [diagonal

More information

EXAM 2 REVIEW DAVID SEAL

EXAM 2 REVIEW DAVID SEAL EXAM 2 REVIEW DAVID SEAL 3. Linear Systems and Matrices 3.2. Matrices and Gaussian Elimination. At this point in the course, you all have had plenty of practice with Gaussian Elimination. Be able to row

More information

MODEL ANSWERS TO THE FIRST QUIZ. 1. (18pts) (i) Give the definition of a m n matrix. A m n matrix with entries in a field F is a function

MODEL ANSWERS TO THE FIRST QUIZ. 1. (18pts) (i) Give the definition of a m n matrix. A m n matrix with entries in a field F is a function MODEL ANSWERS TO THE FIRST QUIZ 1. (18pts) (i) Give the definition of a m n matrix. A m n matrix with entries in a field F is a function A: I J F, where I is the set of integers between 1 and m and J is

More information

Math 314/814 Topics for first exam

Math 314/814 Topics for first exam Chapter 2: Systems of linear equations Math 314/814 Topics for first exam Some examples Systems of linear equations: 2x 3y z = 6 3x + 2y + z = 7 Goal: find simultaneous solutions: all x, y, z satisfying

More information

Linear equations in linear algebra

Linear equations in linear algebra Linear equations in linear algebra Samy Tindel Purdue University Differential equations and linear algebra - MA 262 Taken from Differential equations and linear algebra Pearson Collections Samy T. Linear

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

Matrices MA1S1. Tristan McLoughlin. November 9, Anton & Rorres: Ch

Matrices MA1S1. Tristan McLoughlin. November 9, Anton & Rorres: Ch Matrices MA1S1 Tristan McLoughlin November 9, 2014 Anton & Rorres: Ch 1.3-1.8 Basic matrix notation We have studied matrices as a tool for solving systems of linear equations but now we want to study them

More information

(v, w) = arccos( < v, w >

(v, w) = arccos( < v, w > MA322 Sathaye Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: For all v

More information

Matrices and systems of linear equations

Matrices and systems of linear equations Matrices and systems of linear equations Samy Tindel Purdue University Differential equations and linear algebra - MA 262 Taken from Differential equations and linear algebra by Goode and Annin Samy T.

More information

4.3 - Linear Combinations and Independence of Vectors

4.3 - Linear Combinations and Independence of Vectors - Linear Combinations and Independence of Vectors De nitions, Theorems, and Examples De nition 1 A vector v in a vector space V is called a linear combination of the vectors u 1, u,,u k in V if v can be

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

SECTION 3.3. PROBLEM 22. The null space of a matrix A is: N(A) = {X : AX = 0}. Here are the calculations of AX for X = a,b,c,d, and e. =

SECTION 3.3. PROBLEM 22. The null space of a matrix A is: N(A) = {X : AX = 0}. Here are the calculations of AX for X = a,b,c,d, and e. = SECTION 3.3. PROBLEM. The null space of a matrix A is: N(A) {X : AX }. Here are the calculations of AX for X a,b,c,d, and e. Aa [ ][ ] 3 3 [ ][ ] Ac 3 3 [ ] 3 3 [ ] 4+4 6+6 Ae [ ], Ab [ ][ ] 3 3 3 [ ]

More information