MATH 2210Q Applied Linear Algebra, Fall 2016


Arthur J. Parzygnat

These are my personal notes. This is not a substitute for Lay's book, which shall henceforth be referred to as [Lay]. You will not be responsible for any Remarks in these notes. However, everything else, including what is in [Lay] (even if it's not here), is fair game for homework, quizzes, and exams. At the end of each lecture, I provide a list of homework problems that should be done after that lecture. These homework problems will be collected every Tuesday! I also provide additional exercises which I believe are good to know. You should also browse other books and do other problems as well to get better at writing proofs and understanding the material. Notes in light red are for the reader. Notes in light green are reminders for me. When a word or phrase is underlined, that typically means the definition of this word or phrase is being given.

Contents: 1 August 30; 2 September 1; 3 September 6; 4 September 8; 5 September 13; 6 September 15; 7 September 20; 8 September 22; followed by the October, November, and December lectures.


1 August 30

Hand out short questionnaire. Give them 5 minutes.

Linear algebra is the study of systems of linear equations in (for this course) a finite number of variables. These are equations of the form

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
       .                                   .
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m,     (1.1)

where the a_ij are typically known constants (often real numbers), the b_i are also known values, and the x_j are the variables which we would like to solve for. Keep the above general form on the board throughout lecture. It helps to start off immediately with some examples. We will slowly develop a more formal and rigorous approach to linear algebra as the semester progresses. But for now, in the words of Richard Feynman, "Shut up and calculate!"¹

Problem 1.2 (Exercise 33 in [Lay]). The temperature on the boundary of a cross section of a metal beam is fixed and known, but it is unknown at the four interior points T_1, T_2, T_3, T_4.

[Figure (1.3): a square cross section with four interior nodes T_1, T_2, T_3, T_4; the boundary segments are held at fixed temperatures (10 on the left, 20 on the top, 40 on the right, 30 on the bottom).]

Assume the temperature at each interior point equals the average of the temperatures at the four nearest neighboring points.² Write a system of linear equations to describe the temperatures T_1, T_2, T_3, and T_4.

Answer. Each interior node has two boundary segments and two other interior nodes as its nearest neighbors, so the system of equations is given by

    T_1 = (1/4)(10 + 20 + T_2 + T_4)
    T_2 = (1/4)(T_1 + 20 + 40 + T_3)
    T_3 = (1/4)(T_2 + 40 + 30 + T_4)
    T_4 = (1/4)(10 + T_1 + T_3 + 30)     (1.4)

¹ Wiki suggests this phrase might actually be due to David Mermin, another prominent physicist.
² This is true to a good approximation and is in fact how approximation techniques can be used to solve problems like this, though the mesh will usually be much finer, and the boundary might not look so nice.

Rewriting them in the form provided above gives

    4T_1 -  T_2 + 0T_3 -  T_4 = 30
    -T_1 + 4T_2 -  T_3 + 0T_4 = 60
    0T_1 -  T_2 + 4T_3 -  T_4 = 70
    -T_1 + 0T_2 -  T_3 + 4T_4 = 40.     (1.5)

Is there a solution for the temperatures in the previous problem? If there is a solution, is it unique? A part of this course is about answering problems of this nature. Notice that the coefficients and numbers in (1.5) can be put together in an array

    [  4  -1   0  -1 | 30 ]
    [ -1   4  -1   0 | 60 ]
    [  0  -1   4  -1 | 70 ]
    [ -1   0  -1   4 | 40 ]     (1.6)

This augmented matrix³ will aid in implementing calculations to solve for the temperatures. From a course in algebra, you might guess that one way to solve for the temperatures is to solve for one of them and then plug that value successively into the other equations. This becomes difficult when we have more than two variables. Something we can do, which is more effective, is to add linear combinations of equations within the system (1.5). For instance, subtracting row 2 of (1.5) from row 4 gives

      -T_1 + 0T_2 - T_3 + 4T_4 = 40
    -(-T_1 + 4T_2 - T_3 + 0T_4 = 60)
    --------------------------------
     0T_1 - 4T_2 + 0T_3 + 4T_4 = -20     (1.7)

for row 4. We know we can do this because all we are doing is adding two equations of the form A = B and C = D and obtaining A + C = B + D. This is based on the assumption that a solution exists in the first place. We can also multiply this equation by 1/4 without changing the values of the variables. This gives

    0T_1 - T_2 + 0T_3 + T_4 = -5.     (1.8)

From this, we see that we are only manipulating the entries in the augmented matrix (1.6), and we don't have to constantly rewrite all the T variables. In other words, the augmented matrix becomes

    [  4  -1   0  -1 | 30 ]
    [ -1   4  -1   0 | 60 ]
    [  0  -1   4  -1 | 70 ]
    [  0  -1   0   1 | -5 ]     (1.9)

after these two row operations. If we could get rid of T_2 from this last row, we could solve for T_4 (or vice versa). Similarly, we should try to solve for all the other temperatures by finding combinations of rows to eliminate as many entries as possible from the left-hand side of the augmented matrix. This left-hand side of the augmented matrix is just called a matrix.⁴

³ [Lay] does not draw a vertical line to separate the two sides. I find this confusing. We will always draw this line to be clear.
⁴ In other words, a matrix is a rectangular array of numbers; an augmented matrix carries an extra column recording the right-hand sides.

5 Problem.0 (Exercise 34 in [Lay]). Solve the system of linear equations in (.5). Answer. Let s begin by adding 4 of row 2 to row (.) Add 5 of row 4 to row (.2) Subtract row 4 from row (.3) Add row 3 to row (.4) Divide row by (.5) Add row to row 3 and subtract half of row from row (.6) Add 4 of row 4 to row 2 and divide row 3 by (.7) Add row 3 to row (.8)

You should check these solutions by plugging them back into the original linear system (1.5). If the rows come out of the reduction in a different order, we can permute the rows and still have the same equations describing our problem; call the permuted system (1.21). This is another example of a row operation.

In total, we have used three types of row operations to help us solve linear systems: (a) scaling rows, (b) adding (a multiple of) one row to another, and (c) permuting rows. In this situation, we were lucky: a solution existed and was unique. Sometimes a solution need not exist, or if one exists, it might not be unique.

Example 1.22. Let

    2x + 3y = 5
    4x + 6y = 2     (1.23)

be two linear equations in the variables x and y. There is no solution to this system. If there were a solution, then dividing the second equation by 2 would give 2x + 3y = 1, and comparing with the first equation would give 5 = 1, which is impossible.⁵ This can also be seen by plotting these two equations in the plane as in Figure 1. These two lines do not intersect. Go to Mathematica and draw the plot.

⁵ This is an example of a proof by contradiction.

[Figure 1: A plot of the lines 2x + 3y = 5 and 4x + 6y = 2; they are parallel and do not intersect.]

Example 1.24. Consider the linear system given by

    x - y + z = 2
    2x + y + z = -1     (1.25)

These two equations describe planes, plotted in Figure 2.

[Figure 2: A plot of the planes x - y + z = 2 and 2x + y + z = -1. Go to Mathematica and draw the plot.]

It is clear from this picture that there are solutions, in fact a line's worth of solutions instead of a unique one (the intersection of the two planes is the set of solutions). How can we describe this line explicitly? Looking at (1.25), we can add the two equations to get⁶

    3x + 2z = 1  <=>  z = (1/2)(1 - 3x)     (1.26)

⁶ The symbol <=> means "if and only if," which in this context means that the two equations are equivalent.

We can also subtract the second equation from the first to get

    -x - 2y = 3  <=>  y = -(1/2)(3 + x).     (1.27)

Hence, the set of points given by

    ( x, -(1/2)(3 + x), (1/2)(1 - 3x) )     (1.28)

as x varies over the real numbers are all solutions of (1.25). We can plot this in Figure 3.

[Figure 3: A plot of the planes x - y + z = 2 and 2x + y + z = -1 together with their intersection shown in red, given parametrically as x |-> ( x, -(1/2)(3 + x), (1/2)(1 - 3x) ). Go to Mathematica and draw the plot.]

In Example 1.22, the set of solutions is empty. Such a system is said to be inconsistent. A linear system whose solution set is non-empty is said to be consistent. In Example 1.24, there is more than one solution. The solution set of a linear system (1.1) is the collection of all (x_1, x_2, ..., x_n) that satisfy (1.1). In Problem 1.2, there is only one element in the solution set. Occasionally, two different linear systems may have the same set of solutions. Two linear systems of equations that have the same set of solutions are said to be equivalent. Hence, the two linear systems of equations given in (1.5) and (1.21) are equivalent.

Homework (Due: Tuesday September 6). Exercises 2, 3, 2, 4, 6, 7, 8, 27, and 28 in Section 1.1 of [Lay]. Please show all your work, step by step! Do not use calculators or computer programs to solve any problems!

2 September 1

As we do more problems, we get familiar with faster methods of solving systems of linear equations. We start with another problem: circuits with batteries and resistors.

Problem 2.1. Consider a circuit of the following form.

[Figure: a two-loop circuit. The left loop contains a 2 V battery and a 4 Ohm resistor, the right loop contains a 6 V battery and a 2 Ohm resistor, and the shared middle branch contains a 1 Ohm resistor.]

Here the jagged lines represent resistors, and the two parallel lines, one shorter than the other, represent batteries with the positive terminal on the longer side. The units of resistance are Ohms and the units of voltage are Volts. Find the current (in units of Amperes) through each resistor along with the direction of current flow.

Answer. Kirchhoff's rule says that the total voltage difference around any closed loop in a circuit with resistors and batteries is always zero. Across a resistor, the voltage drop is the current times the resistance. Across a battery, from the negative to the positive terminal, there is a voltage increase given by the voltage of the battery. There is also the rule that current is conserved, meaning that at a junction, current in equals current out. Knowing this, we label the currents by I_1 (left branch, through the 4 Ohm resistor), I_2 (right branch, through the 2 Ohm resistor), and I_3 (middle branch, through the 1 Ohm resistor).

Conservation of current gives

    I_1 = I_2 + I_3.     (2.2)

Kirchhoff's rule for the left loop in the circuit gives

    2 - 4I_1 - I_3 = 0     (2.3)

and for the right loop gives

    6 - 2I_2 + I_3 = 0.     (2.4)

These are three equations in three unknowns. If you were lost up until this point, that's fine. You can start by assuming the following form for the linear system of equations.

Rearranging them gives

     I_1 -  I_2 - I_3 = 0
    0I_1 - 2I_2 + I_3 = -6
    4I_1 + 0I_2 + I_3 = 2     (2.5)

and putting it in augmented matrix form gives

    [ 1 -1 -1 |  0 ]
    [ 0 -2  1 | -6 ]
    [ 4  0  1 |  2 ]     (2.6)

To solve this, we perform row operations. Subtracting 4 times row 1 from row 3 gives

    [ 1 -1 -1 |  0 ]
    [ 0 -2  1 | -6 ]
    [ 0  4  5 |  2 ]     (2.7)

Adding 2 times row 2 to row 3 gives

    [ 1 -1 -1 |   0 ]
    [ 0 -2  1 |  -6 ]
    [ 0  0  7 | -10 ]     (2.8)

The matrix is now in echelon form (more on this after we solve the actual problem). Dividing row 3 by 7 gives

    [ 1 -1 -1 |     0 ]
    [ 0 -2  1 |    -6 ]
    [ 0  0  1 | -10/7 ]     (2.9)

Adding row 3 to row 1 and subtracting row 3 from row 2 gives

    [ 1 -1  0 | -10/7 ]
    [ 0 -2  0 | -32/7 ]
    [ 0  0  1 | -10/7 ]     (2.10)

Dividing row 2 by -2 gives

    [ 1 -1  0 | -10/7 ]
    [ 0  1  0 |  16/7 ]
    [ 0  0  1 | -10/7 ]     (2.11)

and adding row 2 to row 1 gives

    [ 1  0  0 |   6/7 ]
    [ 0  1  0 |  16/7 ]
    [ 0  0  1 | -10/7 ]     (2.12)

The matrix is now in reduced echelon form (more on this later) and we have found our solution

    I_1 = 6/7 A,  I_2 = 16/7 A,  I_3 = -10/7 A.     (2.13)

The negative sign means that the current I_3 is actually flowing in the opposite direction to what we assumed.
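The same reduction can be reproduced mechanically. Here is a minimal sketch using Python's SymPy library (my choice of tool; the course itself asks you to do these reductions by hand), whose Matrix.rref() method returns the reduced echelon form together with the pivot columns:

    from sympy import Matrix

    # Augmented matrix (2.6) for the circuit problem.
    M = Matrix([[1, -1, -1,  0],
                [0, -2,  1, -6],
                [4,  0,  1,  2]])

    R, pivots = M.rref()   # reduced echelon form and pivot column indices
    print(R)               # last column should read 6/7, 16/7, -10/7
    print(pivots)          # (0, 1, 2)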

You should check these solutions by plugging them back into the original linear system.

Given a linear system of equations as in (1.1), written as an augmented matrix

    [ a_11 a_12 ... a_1n | b_1 ]
    [ a_21 a_22 ... a_2n | b_2 ]
    [  .     .        .  |  .  ]
    [ a_m1 a_m2 ... a_mn | b_m ]     (2.14)

an echelon form of such an augmented matrix is an equivalent augmented matrix whose matrix components (to the left of the vertical line) satisfy the following conditions.

(a) All nonzero rows are above any rows containing only zeros.
(b) The first nonzero entry (from the left), also known as a pivot, of any nonzero row is always strictly to the right of the first nonzero entry of the row above it.
(c) All entries in the column below a pivot are zeros.

The column containing a pivot is called a pivot column. Draw a few on the board. For the student reading these notes, look in [Lay]. A matrix is in reduced echelon form if, in addition, the following hold.

(d) All pivots are 1.
(e) The pivots are the only nonzero entries in their pivot columns.

Draw a few reduced echelon form matrices on the board. For the student reading these notes, look in [Lay]. It is a fact that the reduced echelon form of a matrix is always unique, provided that the linear system corresponding to it is consistent. Furthermore, a linear system is consistent if and only if an echelon form of the augmented matrix does not contain any row of the form

    [ 0 0 ... 0 | b ]   with b nonzero.     (2.15)

In our earlier examples of temperatures on a metal beam from lecture 1 and currents in a circuit from this lecture, the arrays of numbers

    ( T_1, T_2, T_3, T_4 )    &    ( I_1, I_2, I_3 )     (2.16)

(written as columns) are examples of vectors in R^4 and R^3, respectively. Here R is the set of real numbers and R^n is the set of n-tuples of real numbers, where n is a positive integer, one of 1, 2, 3, 4, ....

Given two vectors in R^n, written as columns,

    a = ( a_1, ..., a_n )    &    b = ( b_1, ..., b_n ),     (2.17)

we can take their sum, defined by

    a + b := ( a_1 + b_1, ..., a_n + b_n ).     (2.18)

We can also scale each vector by any number c in R:

    c a := ( c a_1, ..., c a_n ).     (2.19)

The above descriptions of vectors are algebraic, and we've illustrated their algebraic structures (addition and scaling). Vectors can also be visualized when n = 1, 2, 3. Draw a few vectors in R^2 and R^3. Note that the vectors are not actually the arrows drawn but the endpoints of these arrows; the arrows merely help with visualizing operations such as addition of vectors. We will often write vectors with an arrow over them when n is understood.

Let S := { v_1, ..., v_m } be a set of m vectors in R^n. The span of S is the set of all vectors of the form⁷

    a_1 v_1 + ... + a_m v_m  =  Σ_{i=1}^{m} a_i v_i,     (2.20)

where the a_i can be any real numbers. For a fixed choice of the a_i, the right-hand side of (2.20) is called a linear combination of the vectors v_i. In set-theoretic notation, we would write this as

    span(S) := { Σ_{i=1}^{m} a_i v_i in R^n : a_1, ..., a_m in R }.     (2.21)

The span of vectors in R^2 and R^3 can be visualized quite nicely.

⁷ Please do not confuse the label v_i for the i-th vector with the notation for the components of a vector. To be very clear, we could write the components of the vector v_i as (v_i)_1, ..., (v_i)_n.

Problem 2.22. In the following figure, vectors u and v are depicted on a grid with unit markings.

[Figure (2.23): the vectors u and v drawn in the plane, together with several other points marked by bullets.]

What linear combinations of u and v will produce the other bullets drawn in the graph?

Answer. Draw the linear combinations in class, or have students do it.

Problem 2.24. In the previous example, show that every vector

    [ b_1 ]
    [ b_2 ]     (2.25)

can be written as a linear combination of u and v. Thus { u, v } spans R^2.

Answer. To see this, note that

    u = [  2 ]    &    v = [ -1 ]     (2.26)
        [ -1 ]             [  2 ]

To prove the claim, we must find real numbers a_1 and a_2 such that

    a_1 u + a_2 v = [ b_1 ]
                    [ b_2 ]     (2.27)

But the left-hand side is given by

    a_1 [  2 ] + a_2 [ -1 ]  =  [ 2a_1 - a_2  ]
        [ -1 ]       [  2 ]     [ -a_1 + 2a_2 ]     (2.28)

using the rules (2.18) and (2.19). Therefore, we need to solve the linear system of equations

    2a_1 -  a_2 = b_1
    -a_1 + 2a_2 = b_2,     (2.29)

which should by now be a familiar procedure. Put it in augmented matrix form:

    [  2 -1 | b_1 ]
    [ -1  2 | b_2 ]     (2.30)

Permute the first and second rows:

    [ -1  2 | b_2 ]
    [  2 -1 | b_1 ]     (2.31)

Add two times row 1 to row 2 to get

    [ -1  2 | b_2         ]
    [  0  3 | b_1 + 2 b_2 ]     (2.32)

This is now in echelon form. Multiply row 1 by -1 and divide row 2 by 3:

    [ 1 -2 | -b_2             ]
    [ 0  1 | (b_1 + 2 b_2)/3  ]     (2.33)

Add 2 times row 2 to row 1:

    [ 1 0 | -b_2 + (2/3)(b_1 + 2 b_2) ]
    [ 0 1 | (b_1 + 2 b_2)/3           ]     (2.34)

which is equal to

    [ 1 0 | (2 b_1 + b_2)/3  ]
    [ 0 1 | (b_1 + 2 b_2)/3  ]     (2.35)

which says that

    [ b_1 ]  =  ( (2 b_1 + b_2)/3 ) u  +  ( (b_1 + 2 b_2)/3 ) v.     (2.36)
    [ b_2 ]

Homework (Due: Tuesday September 6). Exercises 8, 2, 3, 6, 24, 26, and 29 in Section 1.2 of [Lay]. Exercises 3, 4, 7, 8, 2, 8, 25, 26, and 32 in Section 1.3 of [Lay]. Please show all your work, step by step! Do not use calculators or computer programs to solve any problems!
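Before moving on, here is an optional numerical illustration of (2.36), again a sketch in Python/NumPy (my choice of tool, not part of the course): for any chosen b, the coefficients (2 b_1 + b_2)/3 and (b_1 + 2 b_2)/3 really do reproduce b.

    import numpy as np

    u = np.array([2.0, -1.0])
    v = np.array([-1.0, 2.0])

    b = np.array([5.0, -3.0])      # any vector in R^2 works here

    a1 = (2 * b[0] + b[1]) / 3     # coefficient of u from (2.36)
    a2 = (b[0] + 2 * b[1]) / 3     # coefficient of v from (2.36)

    print(a1 * u + a2 * v)         # prints [ 5. -3.], i.e. b again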

3 September 6

HW #01 is due at the beginning of class!

An augmented matrix of the form (2.14), corresponding to a linear system (1.1) with variables x_1, ..., x_n, can be expressed as

    A x = b,     (3.1)

where the notation A x stands for the vector⁸

    [ a_11 a_12 ... a_1n ] [ x_1 ]      [ a_11 x_1 + a_12 x_2 + ... + a_1n x_n ]
    [ a_21 a_22 ... a_2n ] [ x_2 ]  :=  [ a_21 x_1 + a_22 x_2 + ... + a_2n x_n ]
    [  .    .         .  ] [  .  ]      [                  .                   ]
    [ a_m1 a_m2 ... a_mn ] [ x_n ]      [ a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n ]     (3.2)

in R^m (yes, that's an m, not an n). Therefore, an m x n matrix acts on a vector in R^n to produce a vector in R^m. We will discuss this more next week. Warning: we do not provide a definition for an m x n matrix acting on a vector in R^k with k not equal to n. Give a simple example in class! This is a way of consolidating the augmented matrix, and A is precisely the matrix corresponding to the linear system. One can also express the matrix A as a row of column vectors

    A = [ a_1  a_2  ...  a_n ],     (3.3)

where the i-th component of the j-th vector a_j is given by

    ( a_j )_i = a_ij.     (3.4)

In this case, b is explicitly expressed as a linear combination of the vectors a_1, ..., a_n via

    b = x_1 a_1 + ... + x_n a_n.     (3.5)

Therefore, solving for the variables x_1, ..., x_n in the linear system (1.1) is equivalent to finding coefficients x_1, ..., x_n that satisfy (3.5). (3.1) is called a matrix equation. Here A is an m x n matrix.⁹ Thus, there are three equivalent ways to express a linear system:

(a) m linear equations in n variables, as in (1.1).
(b) An augmented matrix, as in (2.14).
(c) A matrix equation A x = b, as in (3.1).

The above observations also lead to the following.

⁸ The vector on the right-hand side is a definition of the notation on the left-hand side. Don't be confused by the fact that there are a lot of terms inside each component of the vector on the right-hand side of (3.2); it is a vector in R^m, not an m x n matrix!
⁹ Here m x n stands for m rows and n columns.
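The equality between (3.2) and (3.5) is easy to check numerically. The following is a small sketch in Python/NumPy (my tool choice, not the course's): the product A x coincides with the corresponding linear combination of the columns of A.

    import numpy as np

    A = np.array([[1., 2., 0.],
                  [0., 1., 3.]])        # a 2 x 3 matrix, so A maps R^3 to R^2
    x = np.array([2., -1., 4.])

    # A x computed directly ...
    direct = A @ x

    # ... and as the linear combination x_1 a_1 + x_2 a_2 + x_3 a_3 of the columns.
    columns = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]

    print(direct, columns)              # both print [ 0. 11.]
    print(np.allclose(direct, columns)) # True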

Theorem 3.6. Let A be a fixed m x n matrix. The following statements are equivalent (which means that any one of them implies the others and vice versa).

(a) For every vector b in R^m, the solution set of the equation A x = b, meaning the set of all x satisfying this equation, is nonempty.
(b) Every vector b in R^m can be written as a linear combination of the columns of A, viewed as vectors in R^m, i.e. the columns of A span R^m.
(c) A has a pivot position in every row.

Proof. Let's just check part of the equivalence between (a) and (b) by showing that (b) implies (a). Suppose that a vector b can be written as a linear combination

    b = x_1 a_1 + ... + x_n a_n,     (3.7)

where the x_1, ..., x_n are some coefficients. Rewriting this using column vector notation gives

    ( b_1, ..., b_m ) = x_1 ( (a_1)_1, ..., (a_1)_m ) + ... + x_n ( (a_n)_1, ..., (a_n)_m ).     (3.8)

We can set our notation and write

    ( a_j )_i = a_ij.     (3.9)

Then, writing out this equation of vectors gives

    ( b_1, ..., b_m ) = ( a_11 x_1 + ... + a_1n x_n, ..., a_m1 x_1 + ... + a_mn x_n )     (3.10)

by the rules for scaling and adding vectors from last lecture. The resulting equation is exactly the linear system corresponding to A x = b. Hence, the x's from the linear combination in (3.7) give a solution of the matrix equation A x = b.

Do a simple example in class by writing out 3 vectors { a_1, a_2, a_3 } and some other vector b and writing out the system to figure out whether b is in the span of { a_1, a_2, a_3 }. If the resulting linear system is consistent, a solution exists and b is in the span of { a_1, a_2, a_3 }. If it is inconsistent, no solution exists and b is not in the span of { a_1, a_2, a_3 }.

Theorem 3.11. Let A be an m x n matrix, let x and y be two vectors in R^n, and let c be any real number. Then

    A( x + y ) = A x + A y    &    A( c x ) = c A x.     (3.12)

Exercise 3.13. Prove this! To do this, write out an arbitrary matrix A with entries as in (3.2) along with two vectors x and y, and simply work out both sides of each equation using the rule in (3.2).

Give an example instead of proving the theorem.

Problem 3.14 (Exercise 8 in Section 1.6 of [Lay]). Consider a chemical reaction that turns limestone CaCO_3 and acid H_3O into water H_2O, calcium Ca, and carbon dioxide CO_2. In a chemical reaction, all elements must be accounted for. Find the appropriate ratios of these compounds and elements needed for this reaction to occur without other waste products.

Answer. Introduce variables x_1, x_2, x_3, x_4, and x_5 for the coefficients of limestone, acid, water, calcium, and carbon dioxide, respectively. The elements appearing in these compounds are H, O, C, and Ca. We can therefore write each compound as a vector recording how many atoms of each element it contains (in this order). For example, limestone, CaCO_3, is

    [ 0 ]  H
    [ 3 ]  O
    [ 1 ]  C
    [ 1 ]  Ca     (3.15)

since it is composed of zero hydrogen atoms, three oxygen atoms, one carbon atom, and one calcium atom. Thus, the system we need to solve is

    x_1 CaCO_3 + x_2 H_3O = x_3 H_2O + x_4 Ca + x_5 CO_2,

i.e.

        [ 0 ]       [ 3 ]       [ 2 ]       [ 0 ]       [ 0 ]
    x_1 [ 3 ] + x_2 [ 1 ] = x_3 [ 1 ] + x_4 [ 0 ] + x_5 [ 2 ]     (3.16)
        [ 1 ]       [ 0 ]       [ 0 ]       [ 0 ]       [ 1 ]
        [ 1 ]       [ 0 ]       [ 0 ]       [ 1 ]       [ 0 ]

Moving everything to one side, the associated augmented matrix is

    [ 0  3 -2  0  0 | 0 ]
    [ 3  1 -1  0 -2 | 0 ]
    [ 1  0  0  0 -1 | 0 ]
    [ 1  0  0 -1  0 | 0 ]     (3.17)

and the associated matrix equation is

    A x = b,     (3.18)

where A is the left-hand side of the augmented matrix and b = 0, the zero vector.

A short sequence of row operations (subtracting multiples of one row from another, permuting rows so that the matrix is in echelon form, and then clearing the entries above the pivots, steps (3.19) through (3.22)) brings the augmented matrix to reduced echelon form:

    [ 1 0 0 0 -1 | 0 ]
    [ 0 1 0 0 -2 | 0 ]
    [ 0 0 1 0 -3 | 0 ]
    [ 0 0 0 1 -1 | 0 ]     (3.23)

Notice that although solutions exist, they are not unique! We saw this happening in Example 1.24 back in lecture 1. Let us write the coefficients in terms of x_5, the coefficient of carbon dioxide. The solution reads

    x_1 = x_5,   x_2 = 2 x_5,   x_3 = 3 x_5,   &   x_4 = x_5,     (3.24)

so the resulting reaction is

    x_5 CaCO_3 + 2 x_5 H_3O  ->  3 x_5 H_2O + x_5 Ca + x_5 CO_2.     (3.25)

It is common to set the smallest quantity to 1, so that this becomes

    CaCO_3 + 2 H_3O  ->  3 H_2O + Ca + CO_2.     (3.26)

Nevertheless, we do not have to do this, and a proper way to express the solution is in terms of the coefficient of carbon dioxide (for instance) as

    ( x_1, x_2, x_3, x_4, x_5 ) = x_5 ( 1, 2, 3, 1, 1 ).     (3.27)

We did not have to choose carbon dioxide as the free variable. Any of the other coefficients would have been as good a choice as any other, but in some instances the resulting coefficients might be fractions.
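The free-variable structure of this homogeneous system can also be read off from its null space. Here is a minimal sketch with Python's SymPy (my choice of tool; the element-by-element bookkeeping above is what you should do by hand):

    from sympy import Matrix

    # Rows: H, O, C, Ca.  Columns: CaCO3, H3O, H2O, Ca, CO2 (products negated).
    A = Matrix([[0, 3, -2,  0,  0],
                [3, 1, -1,  0, -2],
                [1, 0,  0,  0, -1],
                [1, 0,  0, -1,  0]])

    print(A.nullspace())   # one basis vector, proportional to (1, 2, 3, 1, 1)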

The previous example leads us to the notion of homogeneous linear systems. A linear system A x = b is said to be homogeneous if b = 0. Note that a homogeneous linear system always has at least one solution, namely x = 0, which is called the trivial solution. We also noticed in the example that there is a free variable in the solution. This is a generic phenomenon:

Theorem 3.28. The homogeneous equation A x = 0 has a nontrivial solution if and only if the corresponding system of linear equations has a free variable.

In (3.27), the solution of the homogeneous equation was written in the form

    x = p + t v,     (3.29)

where in that example p was 0, t was x_5, and v was the vector

    ( 1, 2, 3, 1, 1 ).     (3.30)

This form of the solution of a linear equation is called parametric form because its value depends on an additional unspecified parameter, which in this case is t. In other words, all solutions are obtained as t varies over the real numbers. For a homogeneous equation, p can always be taken to be 0 (because x = 0 solves A x = 0). In fact, there could be more than one such parameter involved.

Theorem 3.31. Suppose that the linear system described by A x = b is consistent and let x = p be a solution. Then the solution set of A x = b is the set of all vectors of the form p + u, where u is any solution of the homogeneous equation A x = 0.

This says that the solution set of a consistent linear system A x = b can be expressed as

    x = p + t_1 u_1 + ... + t_k u_k,     (3.32)

where p is one solution of A x = b, k is a positive integer, { t_1, ..., t_k } are the parameters (real numbers), and the set { u_1, ..., u_k } spans the solution set of A x = 0. A linear combination of solutions to A x = 0 is a solution as well; this will be addressed in your homework! You may want to use Theorem 3.11 to prove this last statement.

Homework (Due: Tuesday September 13). Exercises 4, 7, 0, 25, 30, and 32 in Section 1.4 of [Lay]. Exercises 2, 5, 5, 37, and 39 in Section 1.5 of [Lay]. Please show all your work, step by step! Do not use calculators or computer programs to solve any problems!

4 September 8

Quiz (review of what we covered on Aug 30 and Sep 1) at the beginning of class! Today is proof day! We will slowly begin more formal aspects of linear algebra.

Definition 4.1. A set of vectors { u_1, ..., u_k } in R^n is linearly independent if the solution set of the vector equation¹⁰

    x_1 u_1 + ... + x_k u_k = 0     (4.2)

consists of only the trivial solution. Otherwise, the set is said to be linearly dependent, in which case there exist some coefficients x_1, ..., x_k, not all of which are zero, such that (4.2) holds.

Example 4.3. The vectors

    ( 1, 2, 0 )    &    ( 3, 6, 0 )     (4.4)

are linearly dependent because

    ( 3, 6, 0 ) = 3 ( 1, 2, 0 ),     (4.5)

so that

    3 ( 1, 2, 0 ) - ( 3, 6, 0 ) = ( 0, 0, 0 ).     (4.6)

Example 4.7. The vectors

    [ 1 ]    &    [ -1 ]     (4.8)
    [ 1 ]         [  1 ]

are linearly independent for the following reason. Let x_1 and x_2 be two real numbers such that

    x_1 [ 1 ] + x_2 [ -1 ] = [ 0 ]
        [ 1 ]       [  1 ]   [ 0 ]     (4.9)

This equation describes the system associated to the augmented matrix

    [ 1 -1 | 0 ]
    [ 1  1 | 0 ]     (4.10)

Subtracting row 1 from row 2 gives

    [ 1 -1 | 0 ]
    [ 0  2 | 0 ]     (4.11)

Dividing row 2 by 2 and then adding it to row 1 gives

    [ 1 0 | 0 ]
    [ 0 1 | 0 ]     (4.12)

The only solution to (4.9) is therefore x_1 = 0 and x_2 = 0. Thus, the two vectors in (4.8) are linearly independent.

¹⁰ Recall, the solution set is the set of all x_1, ..., x_k satisfying (4.2).
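For small sets of vectors you can always run the row reduction by hand as above. As an aside, a quick numerical test of linear independence is to stack the vectors as columns of a matrix and compute its rank; here is a sketch in Python/NumPy (my tool choice):

    import numpy as np

    # Columns are the vectors being tested.
    dependent = np.array([[1., 3.],
                          [2., 6.],
                          [0., 0.]])      # Example 4.3: second column is 3 times the first
    independent = np.array([[1., -1.],
                            [1.,  1.]])   # Example 4.7

    # Rank equal to the number of columns means the columns are linearly independent.
    print(np.linalg.matrix_rank(dependent))    # 1  (< 2, so dependent)
    print(np.linalg.matrix_rank(independent))  # 2  (= 2, so independent)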

Example 4.13. A set { u_1, u_2 } of two vectors in R^m is linearly dependent if and only if¹¹ one can be written as a scalar multiple of the other, i.e. there exists a real number c such that u_1 = c u_2 or c u_1 = u_2.

This is going to be our first full proof. We will therefore try to guide you using footnotes so that you know what is part of the proof and what is based on intuition. Instead of first teaching you how to do proofs from scratch, we will go through several examples so that you see what they are like first. This is like learning a new language. Before learning the grammar, you want to first listen to people talking to get a feel for what the language sounds like. Then, when you learn the alphabet, you want to read a few passages before you start constructing sentences on your own.

Proof. First note that the associated vector equation is of the form¹²

    x_1 u_1 + x_2 u_2 = 0,     (4.14)

where¹³ x_1 and x_2 are coefficients, or upon rearranging,

    x_1 u_1 = -x_2 u_2.     (4.15)

(=>) If the set is linearly dependent, then x_1 and x_2 cannot both be zero.¹⁴ Without loss of generality, suppose that x_1 is nonzero.¹⁵ Then dividing both sides of (4.15) by x_1 gives

    u_1 = -(x_2 / x_1) u_2.     (4.16)

Thus, setting c := -x_2 / x_1 proves the first claim¹⁶ (a similar argument can be made if x_2 is nonzero).

(<=) Conversely,¹⁷ suppose that there exists a real number c such that¹⁸ u_1 = c u_2. Then

    u_1 - c u_2 = 0,     (4.17)

showing that the set { u_1, u_2 } is linearly dependent, since the coefficient in front of u_1 is nonzero (it is 1).¹⁹

At the end of a proof, you should always check your work! Draw a few situations where a set of vectors is linearly dependent and independent, first in R^2 and then in R^3.

¹¹ To prove a statement of the form "A if and only if B," one must show that A implies B and B implies A. In a proof, we often depict the former by (=>) and the latter by (<=).
¹² Before proving anything, we just recall what the vector equation is to remind us of what we'll need to refer to.
¹³ If you introduce notation in a proof, please say what it is every time!
¹⁴ What we have done so far is just state the definition of what it means for { u_1, u_2 } to be linearly dependent. Stating these definitions to remind ourselves of what we know is a large part of the battle in constructing a proof.
¹⁵ We know from the definition that at least one of x_1 or x_2 is not zero, but we do not know which one. It won't matter which one we pick in the end (some insight is required to notice this), so we may use the phrase "without loss of generality" to cover all other possible cases.
¹⁶ Remember, we wanted to show that u_1 is a scalar multiple of u_2.
¹⁷ We say "conversely" when we want to prove an assertion in the opposite direction to the previously proven assertion.
¹⁸ Remember, this is literally the latter assumption in the claim.
¹⁹ Recall the definition of what it means to be linearly dependent and confirm that you agree with the conclusion.

Example 4.18. Let

    x^ := ( 1, 0, 0 ),    y^ := ( 0, 1, 0 ),    &    z^ := ( 0, 0, 1 )     (4.19)

be the three unit vectors in R^3 (sometimes denoted by i^, j^, and k^, respectively). In addition, let u be any other vector in R^3. Then the set { x^, y^, z^, u } is linearly dependent because u can be written as a linear combination of the three unit vectors. This is obvious because if we write

    u = ( u_1, u_2, u_3 ),     (4.20)

then

    u = u_1 x^ + u_2 y^ + u_3 z^.     (4.21)

Here's a less trivial example.

Example 4.22. Take three specific vectors u_1, u_2, u_3 in R^3. Deciding whether they are linearly dependent is a little bit more difficult, so let us try to solve it from scratch. We must find x_1, x_2, and x_3 such that

    x_1 u_1 + x_2 u_2 + x_3 u_3 = 0.     (4.24)

This is exactly a matrix equation, with the matrix whose columns are u_1, u_2, u_3. Hence, we have to solve the associated homogeneous augmented matrix system (4.25), which after some row operations is brought to an echelon form (4.26). For the three vectors used in class, the echelon form has only two pivots, so there are nonzero solutions and the set is linearly dependent. Setting x_3 = 1 (we don't have to do this; we can leave x_3 as a free variable, but I just want to show that we can write the last vector in terms of the first two) expresses u_3 as a linear combination of u_1 and u_2.     (4.27)

The previous examples hint at a more general situation.

Theorem 4.28. Let S := { u_1, ..., u_k } be a set of vectors in R^n. S is linearly dependent if and only if at least one vector from S can be written as a linear combination of the others.

The proof of Theorem 4.28 will be similar to the previous example. Why should we expect this? Well, if k = 3, then we have { u_1, u_2, u_3 } and we could imagine doing something very similar. Think about this! If you're not comfortable working with arbitrary k just yet, specialize to the case k = 3 and try to mimic the previous proof. Then try k = 4. Do you see the pattern? Once you're ready, try the following. Ask the students for suggestions! If this is your first time proving things outside of geometry in high school, study how these proofs are written. Try to prove things on your own. Do not be discouraged if you are wrong. Keep trying. A good book on learning how to think about proofs is How to Solve It by G. Polya [1]. A course in discrete mathematics also helps. Practice, practice, practice!

Proof. The vector equation associated to S is

    Σ_{j=1}^{k} x_j u_j = 0,     (4.29)

where the x_j are coefficients.

(=>) If the set S is linearly dependent, then there exists²⁰ a nonzero x_i (for some i between 1 and k). Therefore,

    u_i = Σ_{j ≠ i} ( -x_j / x_i ) u_j,     (4.30)

where the sum is over all indices j from 1 to k except i. Hence, the vector u_i can be written as a linear combination of the others.

(<=) Conversely, suppose that there exists a vector u_i from S that can be written as a linear combination of the others, i.e.

    u_i = Σ_{j ≠ i} y_j u_j,     (4.31)

where the y_j are real numbers.²¹ Rearranging gives

    u_i - Σ_{j ≠ i} y_j u_j = 0,     (4.32)

and we see that the coefficient in front of u_i is nonzero (it is 1). Hence S is linearly dependent.

The following two theorems give quick methods to figure out whether a given set of vectors is linearly dependent.

²⁰ By definition of a linearly dependent set, at least one of the x_i's must be nonzero. This is phrased concisely by the statement "there exists a nonzero x_i."
²¹ We call our coefficients y to avoid potentially confusing them with the previous variables x.

Theorem 4.33. Let S := { u_1, ..., u_k } be a set of vectors in R^n with k > n. Then S is linearly dependent.

Proof. Recall, S is linearly dependent²² if there exist numbers x_1, ..., x_k, not all zero, such that

    Σ_{i=1}^{k} x_i u_i = 0.     (4.34)

This equation can be expressed as a linear system

    Σ_{i=1}^{k} x_i (u_i)_1 = 0
              .
              .
    Σ_{i=1}^{k} x_i (u_i)_n = 0,     (4.35)

where²³ (u_i)_j is the j-th component of the vector u_i. In this linear system, there are k unknowns given by the variables x_1, ..., x_k, and there are n equations. Because k > n, there are more unknowns than equations, and hence there is at least one free variable.²⁴ Let x_p be one of these free variables. Then the other x_i's might depend on x_p, so we may write x_i(x_p).²⁵ Then by setting x_p = 1, we find

    u_p + Σ_{i ≠ p} x_i(x_p = 1) u_i = 0,     (4.36)

showing that S is linearly dependent (again, since the coefficient in front of u_p is nonzero).

Warning! Using one example of a set S := { u_1, ..., u_k } and showing that it is linearly dependent is not a proof! We have to prove the claim for all potential cases. Nevertheless, an example helps to see why the claim might be true in the first place.

Theorem 4.37. Let S := { u_1, ..., u_k } be a set of vectors in R^n with at least one of the u_i being the zero vector. Then S is linearly dependent.

Proof. Suppose u_i = 0. Then choose²⁶ the coefficient of u_j to be

    x_j := 1 if j = i, and 0 otherwise.     (4.38)

²² Again, it is always helpful to constantly remind yourself and the reader of definitions that are crucial to solving the problem at hand. It is also helpful to use them to introduce notation that has not been introduced in the statement of the claim (the theorem).
²³ We have introduced some notation, so we should define it.
²⁴ But wait, how do we know that a solution even exists? If a solution doesn't exist, then our conclusion must be false! Thankfully, by our earlier comments from the previous lecture, we know that every homogeneous linear system has at least one solution, namely the trivial solution. Hence, the solution set is not empty.
²⁵ This is read as "x_i is a function of x_p."
²⁶ To show that the set is linearly dependent, we have to find a set of coefficients, not all of which are zero, so that their linear combination results in the zero vector. The coefficients chosen here are not the only ones that will work; you may have chosen others. All we have to do is exhibit the existence of one such choice. We do not have to exhaust all possibilities.

Then

    Σ_{j=1}^{k} x_j u_j = 1 · u_i = 1 · 0 = 0     (4.39)

because any scalar multiple of the zero vector is the zero vector. Since not all of the coefficients are zero (one of them is 1), S is linearly dependent.

Homework (Due: Tuesday September 13). Exercises 6, 8, 0, 4, 22, 34, 36, and 38 in Section 1.7 of [Lay]. Please note: for exercises 34, 36, and 38, if the statements are true, prove them! That's what "justification" means. You may (and are encouraged to) use any theorems we have done in class! Please show all your work, step by step! Do not use calculators or computer programs to solve any problems!

5 September 13

HW #02 is due at the beginning of class!

As we discussed last week, an m x n matrix A acts on a vector x in R^n and produces a vector b in R^m, as in

    A x = b.     (5.1)

Furthermore, a matrix acting on vectors in R^n in this way satisfies the following two properties:

    A( x + y ) = A x + A y     (5.2)

and

    A( c x ) = c A x     (5.3)

for any other vector y in R^n and any scalar c. Since x is arbitrary, we can think of A as an operation that acts on all of R^n. Any time you input a vector in R^n, you get out a vector in R^m. We can depict this diagrammatically as

    R^m  <---A---  R^n     (5.4)

You will see right now (and several times throughout this course) why we write the arrows from right to left (your book does not, which I personally find confusing).²⁷ For example, (5.5) is a 4 x 3 matrix (in the middle) acting on a vector in R^3 (on the right) and producing a vector in R^4 (on the left). In other words, we can think of A as a function from R^n to R^m. This leads us to a seemingly new definition.

Definition 5.6. A linear transformation/operator from R^n to R^m is an assignment T sending any vector x in R^n to a unique vector T(x) in R^m satisfying

    T( x + y ) = T( x ) + T( y )     (5.7)

and

    T( c x ) = c T( x )     (5.8)

for all x, y in R^n and all c in R. Such a linear transformation can be written in any of the following ways:

    T : R^n -> R^m,    R^n --T--> R^m,    R^m <- R^n : T,    or    R^m <--T-- R^n.     (5.9)

²⁷ It doesn't matter how you draw it as long as you are consistent and you know what it means. It's not a rule, only my preference.
²⁸ We use arrows with a vertical dash, as in |->, when we act on specific vectors.

Given a vector x in R^n and a linear operator R^m <--T-- R^n, the vector T(x) in R^m is called the image of x under T. R^n is called the domain of T and R^m is called the codomain. The set of images of all vectors in R^n under T is called the range of T.

From the above discussion, every m x n matrix is an example of a linear transformation from R^n to R^m. In the example above, namely (5.5), the image of the particular vector in R^3 that we chose is the vector in R^4 obtained by applying the 4 x 3 matrix to it. Notice that the operator can act on any other vector in R^3 as well, not just the particular choice we made; picking a different vector in R^3 and applying the same matrix produces a different image in R^4.

Maybe now you see why we wrote our arrows from right to left: it makes acting on the vectors with the matrix much more straightforward (as written on the page). If we didn't, we would have to flip the vector to the other side of the matrix every time to calculate the image. Notice that the matrix in the center always stays the same no matter what vector in R^3 we put on the right. The matrix in the center is a rule that applies to all vectors in R^3. When the matrix changes, the rule changes, and we have a different linear transformation.

Example 5.16. Consider the transformation of R^3 that multiplies every vector by 2. Under this transformation, the vector

    ( 1, 2, 2 )     (5.17)

gets sent to

    ( 2, 4, 4 ).     (5.18)

This transformation is linear, and the matrix representing it is

    [ 2 0 0 ]
    [ 0 2 0 ]     (5.19)
    [ 0 0 2 ]

Example 5.20. Let θ be some angle in [0, 2π). Let R_θ : R^2 -> R^2 be the transformation that rotates all the vectors in the plane counterclockwise by the angle θ (for the pictures, let's say θ = π/2). This transformation is linear and is represented by the matrix

    R_θ := [ cos θ   -sin θ ]
           [ sin θ    cos θ ]     (5.21)

[Figure: for θ = π/2, the basis vector e_1 is rotated onto e_2 and e_2 is rotated onto -e_1.]

Example 5.22. A vertical shear in R^2 is given by a matrix of the form

    S_k := [ 1  0 ]
           [ k  1 ]     (5.23)

while a horizontal shear is given by a matrix of the form

    S_k := [ 1  k ]
           [ 0  1 ],     (5.24)

where k is a real number. [Figure: for k = 1, the vertical shear sends e_1 to (1, 1) and fixes e_2, while the horizontal shear fixes e_1 and sends e_2 to (1, 1).]
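These matrices are easy to play with numerically. The following sketch (Python/NumPy, my choice of tool) builds R_θ and the vertical shear for k = 1 and applies them to the standard basis vectors, reproducing the pictures above:

    import numpy as np

    theta = np.pi / 2
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # rotation by 90 degrees
    S = np.array([[1., 0.],
                  [1., 1.]])                           # vertical shear with k = 1

    e1 = np.array([1., 0.])
    e2 = np.array([0., 1.])

    print(np.round(R @ e1, 10))   # [ 0. 1.] : e1 rotates onto e2
    print(np.round(R @ e2, 10))   # [-1. 0.] : e2 rotates onto -e1
    print(S @ e1)                 # [ 1. 1.] : e1 is sheared upward
    print(S @ e2)                 # [ 0. 1.] : e2 is fixed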

Example 5.25. Many more examples are given in Section 1.9 of [Lay]. You should be comfortable with all of them!

We have seen that matrices give examples of linear transformations. It turns out that all linear transformations are determined by matrices. For the statement, it is convenient to use the following notation. Fix a natural number n. Let i be a natural number in the range 1 ≤ i ≤ n. Set e_i to be the vector in R^n whose i-th entry is 1 and all of whose other entries are 0:

    e_i := ( 0, ..., 0, 1, 0, ..., 0 )   with the 1 in the i-th slot.     (5.26)

For example, when n = 3, we called these vectors

    e_1 := x^ = ( 1, 0, 0 ),   e_2 := y^ = ( 0, 1, 0 ),   &   e_3 := z^ = ( 0, 0, 1 ).     (5.27)

Theorem 5.28. Let R^m <--T-- R^n be a linear transformation. Then there exists a unique m x n matrix A such that

    T( x ) = A x     (5.29)

for all x in R^n. Furthermore, this matrix A is given by

    A = [ T(e_1)  ...  T(e_n) ].     (5.30)

Problem 5.31. Let

    A := [  1  -2   3 ]
         [ -5  10 -15 ]     (5.32)

and set

    b := [   2 ]
         [ -10 ].     (5.33)

(a) Find a vector x such that A x = b.

(b) Is there more than one such x as in part (a)?

(c) Is the vector

    v := [ 3 ]
         [ 0 ]     (5.34)

in the range of A, viewed as a linear transformation?

Answer. (You found me!)

(a) To answer this, we must solve

    [  1  -2   3 ] [ x_1 ]   [   2 ]
    [ -5  10 -15 ] [ x_2 ] = [ -10 ]     (5.35)
                   [ x_3 ]

which we can do in the usual way we have learned. Adding 5 times row 1 to row 2 of the augmented matrix gives

    [ 1 -2 3 | 2 ]
    [ 0  0 0 | 0 ]     (5.36)

There are two free variables here, say x_2 and x_3. Then x_1 is expressed in terms of them via

    x_1 = 2 + 2 x_2 - 3 x_3.     (5.37)

Therefore, any vector of the form

    ( 2 + 2 x_2 - 3 x_3,  x_2,  x_3 )     (5.38)

for any choice of x_2 and x_3 has image b.

(b) By the analysis from part (a), yes, there is more than one such vector.

(c) To see whether v is in the range of A, we must find a solution to

    [  1  -2   3 ] [ x_1 ]   [ 3 ]
    [ -5  10 -15 ] [ x_2 ] = [ 0 ]     (5.39)
                   [ x_3 ]

but applying the same row operation as above, adding 5 times row 1 to row 2, gives

    [ 1 -2 3 |  3 ]
    [ 0  0 0 | 15 ]     (5.40)

which shows that the system is inconsistent. This means that there are no solutions, and therefore v is not in the range of A.
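A quick numerical way to double-check parts (a) and (c) is to compare the rank of A with the rank of the augmented matrix [ A | b ]: the system is consistent exactly when the two ranks agree. A sketch in Python/NumPy (my choice of tool):

    import numpy as np

    A = np.array([[ 1., -2.,   3.],
                  [-5., 10., -15.]])
    b = np.array([  2., -10.])
    v = np.array([  3.,   0.])

    def consistent(A, rhs):
        """A x = rhs has a solution iff rank(A) == rank([A | rhs])."""
        augmented = np.column_stack([A, rhs])
        return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(augmented)

    print(consistent(A, b))   # True  : b is in the range of A
    print(consistent(A, v))   # False : v is not in the range of A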

Definition 5.41. A linear transformation R^m <--T-- R^n is onto if every vector b in R^m is in the range of T, and it is one-to-one if for any vector b in the range of T, there is only a single vector x in R^n whose image is b.

Theorem 5.42. The following are equivalent for a linear transformation R^m <--T-- R^n with standard matrix A:

(a) T is one-to-one.
(b) The only solution to the linear system T( x ) = 0 is x = 0.
(c) The columns of A are linearly independent.

Theorem 5.43. A linear transformation R^m <--T-- R^n with standard matrix A is onto if and only if the columns of A span R^m.

Homework (Due: Tuesday September 20). Exercises 9, 6, 7, 3, and 32 in Section 1.8 of [Lay]. Exercises 3, 6, and 24 in Section 1.9 of [Lay]. Please show all your work, step by step! Do not use calculators or computer programs to solve any problems!

6 September 15

Quiz (review of what we covered on Sep 6 and Sep 8) at the beginning of class!

In the previous lecture, we saw how to think of matrices as linear transformations and vice versa. If you think of a linear transformation as a process, you can perform processes in succession. For example, imagine you had two linear transformations

    R^m  <---A---  R^n     (6.1)

and

    R^l  <---B---  R^m.     (6.2)

Here A is an m x n matrix and B is an l x m matrix. Then it should be reasonable to perform these operations in succession, as

    R^l  <---B---  R^m  <---A---  R^n,     (6.3)

so that the result is some operation, denoted by BA, from R^n to R^l:

    R^l  <---BA---  R^n.     (6.4)

In fact, we know we can do this because if x is a vector in R^n, we act on it with A to get a vector A x in R^m. Now that we have a vector in R^m, we act on it with B to get a vector B(A x) in R^l. This operation of performing A first and then B is a linear transformation (exercise!) and therefore must correspond to some unique matrix by Theorem 5.28. We call this matrix BA. In fact, we can figure out a formula for its matrix components! Let's try to do this. Let x be an arbitrary vector in R^n. Then (3.2) gives

    A x = ( a_11 x_1 + ... + a_1n x_n,  a_21 x_1 + ... + a_2n x_n,  ...,  a_m1 x_1 + ... + a_mn x_n )     (6.5)

where the vector on the right has m components. Now let's act on this vector with B:

    B(A x) = [ b_11 b_12 ... b_1m ] [ a_11 x_1 + ... + a_1n x_n ]
             [ b_21 b_22 ... b_2m ] [ a_21 x_1 + ... + a_2n x_n ]
             [  .    .         .  ] [             .             ]     (6.6)
             [ b_l1 b_l2 ... b_lm ] [ a_m1 x_1 + ... + a_mn x_n ]

It looks complicated, but let us persevere and calculate this expression. To simplify things, let us write the vector A x using shorthand summation notation:²⁹

    A x = ( Σ_{i=1}^{n} a_1i x_i,  Σ_{i=1}^{n} a_2i x_i,  ...,  Σ_{i=1}^{n} a_mi x_i ).     (6.7)

Using this notation, and collecting the coefficient of each x_i, we can write

    B(A x) = ( b_11 Σ_i a_1i x_i + b_12 Σ_i a_2i x_i + ... + b_1m Σ_i a_mi x_i,
               ...,
               b_l1 Σ_i a_1i x_i + b_l2 Σ_i a_2i x_i + ... + b_lm Σ_i a_mi x_i )

           = ( Σ_{i=1}^{n} ( Σ_{k=1}^{m} b_1k a_ki ) x_i,
               ...,
               Σ_{i=1}^{n} ( Σ_{k=1}^{m} b_lk a_ki ) x_i ),     (6.8)

which is now in the form needed to extract the matrix BA. By comparing this vector to the one in (6.5), BA is the matrix

    BA = [ Σ_k b_1k a_k1   Σ_k b_1k a_k2   ...   Σ_k b_1k a_kn ]
         [ Σ_k b_2k a_k1   Σ_k b_2k a_k2   ...   Σ_k b_2k a_kn ]
         [        .               .                     .      ]     (6.9)
         [ Σ_k b_lk a_k1   Σ_k b_lk a_k2   ...   Σ_k b_lk a_kn ]

where each sum over k runs from 1 to m. From this calculation, we see that the ij-th component of the matrix BA is given by

    (BA)_ij := Σ_{k=1}^{m} b_ik a_kj.     (6.10)

²⁹ You should have seen this notation in calculus when learning about series.
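Formula (6.10) translates directly into code. Here is a minimal sketch in Python/NumPy (my tool choice) that computes BA entry by entry with (6.10) and compares the result against NumPy's built-in matrix product:

    import numpy as np

    B = np.array([[1., 2.],
                  [0., 1.],
                  [3., 0.]])      # l x m = 3 x 2
    A = np.array([[2., 1., 0.],
                  [1., 0., 4.]])  # m x n = 2 x 3

    l, m = B.shape
    _, n = A.shape

    BA = np.zeros((l, n))
    for i in range(l):
        for j in range(n):
            # (BA)_ij = sum over k of b_ik a_kj, exactly as in (6.10)
            BA[i, j] = sum(B[i, k] * A[k, j] for k in range(m))

    print(np.allclose(BA, B @ A))        # True: the formula matches the built-in product
    print((B @ A).shape, (A @ B).shape)  # (3, 3) (2, 2): the order of composition matters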

The resulting formula seems overwhelming, but there is a convenient way to remember it instead of this long derivation. The ij-th component of BA is given by multiplying the entries of the i-th row of B with the entries of the j-th column of A and adding them all together:

    [ b_i1 b_i2 ... b_im ] ( a_1j, a_2j, ..., a_mj ) = Σ_{k=1}^{m} b_ik a_kj.     (6.11)

This operation makes sense because the number of entries in a row of B is m, while the number of entries in a column of A is also m.

Example 6.12. Consider the following two linear transformations on R^2, given by a (horizontal) shear S_k and then a rotation R_θ by the angle θ (in the figures, k = 1 and θ = π/2):

    R^2  <---[ cos θ  -sin θ ]---  R^2  <---[ 1  k ]---  R^2.     (6.13)
              [ sin θ   cos θ ]              [ 0  1 ]

[Figure: the images S(e_1), S(e_2) after the shear, and then R(S(e_1)), R(S(e_2)) after the rotation.]

The resulting linear transformation is given by

    R^2  <---[ cos θ   k cos θ - sin θ ]---  R^2,     (6.14)
              [ sin θ   k sin θ + cos θ ]

which with k = 1 and θ = π/2 becomes

    R^2  <---[ 0  -1 ]---  R^2.     (6.15)
              [ 1   1 ]

If, however, we executed these operations in the opposite order,

    R^2  <---[ 1  k ]---  R^2  <---[ cos θ  -sin θ ]---  R^2,     (6.16)
              [ 0  1 ]              [ sin θ   cos θ ]

[Figure: the images R(e_1), R(e_2) after the rotation, and then S(R(e_1)), S(R(e_2)) after the shear.]

we would find the resulting linear transformation to be

    R^2  <---[ k sin θ + cos θ   k cos θ - sin θ ]---  R^2,     (6.17)
              [     sin θ             cos θ      ]

which with k = 1 and θ = π/2 becomes

    R^2  <---[ 1  -1 ]---  R^2.     (6.18)
              [ 1   0 ]

The two answers (6.15) and (6.18) are different: composing linear transformations in a different order generally gives a different result.

Homework (Due: Tuesday September 20). Exercises 9, 10, 11, and 12 in Section 2.1 of [Lay]. Please show all your work, step by step! Do not use calculators or computer programs to solve any problems! Notice how counterintuitive the results of these problems are. In addition to these problems, mention how the results of problems 9, 10, 11, and 12 would change if the matrices were 1 x 1 matrices instead.

7 September 20

HW #03 is due at the beginning of class!

Given a linear transformation

    R^m  <---T---  R^n     (7.1)

taking vectors with n components in and giving vectors with m components out, you might want to know if there is a way to go back, i.e. to reverse the process. This would be a linear transformation going in the opposite direction (I've drawn it going backwards relative to our usual convention),

    R^m  ---S--->  R^n,     (7.2)

so that if we perform these two processes in succession, the result is the transformation that does nothing, i.e. the identity transformation. In other words, going along any closed loop in the diagram

    R^m  <---T---  R^n
    R^m  ---S--->  R^n     (7.3)

is the identity. Expressed another way, this means that

    S T = 1_{R^n}     (7.4)

and

    T S = 1_{R^m}.     (7.5)

Here 1_{R^m} is the identity transformation on R^m, and similarly 1_{R^n} on R^n. Often, the inverse S of T is written as T^{-1}, and the inverse T of S is written as S^{-1}. This is because inverses, if they exist, are unique.

Definition 7.6. An m x n matrix A is invertible/nonsingular if there exists an n x m matrix B such that

    A B = 1_m    &    B A = 1_n,     (7.7)

where 1_m and 1_n denote the m x m and n x n identity matrices. A matrix that is not invertible is called a noninvertible/singular matrix.

Example 7.8. Consider the matrix R_θ describing rotation of R^2 counterclockwise about the origin by the angle θ:

    R^2  <---[ cos θ  -sin θ ]---  R^2.     (7.9)
              [ sin θ   cos θ ]

[Figure: for θ = π/2, e_1 is sent to e_2 and e_2 is sent to -e_1.]

The inverse of such a rotation matrix should be obvious! We just want to rotate back by the angle θ, i.e. rotate clockwise by θ. This inverse should therefore be given by the matrix

    R_θ^{-1} = [ cos(-θ)  -sin(-θ) ]  =  [  cos θ   sin θ ]
               [ sin(-θ)   cos(-θ) ]     [ -sin θ   cos θ ]     (7.10)

[Figure: for θ = π/2, the proposed inverse sends e_1 to -e_2 and e_2 to e_1.]

Is this really the inverse, though? We have to check the definition. Remember, this means we need to show

    R_{-θ} R_θ = 1_2    &    R_θ R_{-θ} = 1_2.     (7.11)

It turns out that we only need to check one of these conditions (this is one of the exercises in [Lay]), so let's check the first one:

    R_{-θ} R_θ = [  cos θ   sin θ ] [ cos θ  -sin θ ]
                 [ -sin θ   cos θ ] [ sin θ   cos θ ]

               = [  cos²θ + sin²θ            -cos θ sin θ + sin θ cos θ ]
                 [ -sin θ cos θ + cos θ sin θ          sin²θ + cos²θ    ]     (7.12)

               = [ 1 0 ]
                 [ 0 1 ]

Example 7.13. Consider the matrix S_k describing a vertical shear in R^2:

    R^2  <---[ 1  0 ]---  R^2.     (7.14)
              [ k  1 ]

[Figure: when k = 1, e_1 is sent to (1, 1) while e_2 is fixed.]

In this case as well, it seems intuitively clear that the inverse should also be a vertical shear, but one where the shift is in the opposite vertical direction, namely, k should be replaced with -k. Thus, we propose that the inverse vertical shear, S_k^{-1}, is given by

    S_k^{-1} = [  1  0 ]
               [ -k  1 ].     (7.15)

[Figure: when k = 1, the proposed inverse sends e_1 to (1, -1) while e_2 is fixed.]

We check that this works:

    S_k^{-1} S_k = [  1  0 ] [ 1  0 ]  =  [ 1  0 ]
                   [ -k  1 ] [ k  1 ]     [ 0  1 ].     (7.16)

Theorem 7.17. A 2 x 2 matrix

    A := [ a  b ]
         [ c  d ]     (7.18)

is invertible if and only if ad - bc ≠ 0. When this happens,

    A^{-1} = 1/(ad - bc) [  d  -b ]
                         [ -c   a ].     (7.19)

The quantity ad - bc of a 2 x 2 matrix as in this theorem is called the determinant of the matrix A and is denoted by det A.

In all of the examples, the matrices were square matrices, i.e. m x n matrices with m = n. It turns out that an m x n matrix cannot be invertible if m ≠ n. Our examples from above are consistent with this.

Example 7.20. For the 2 x 2 rotation matrix R_θ from our earlier examples, the determinant is given by

    det R_θ = cos θ · cos θ - (-sin θ) · sin θ = cos²θ + sin²θ = 1.     (7.21)

Example 7.22. For the 2 x 2 vertical shear matrix S_k from our earlier examples, the determinant is given by

    det S_k = 1 · 1 - 0 · k = 1.     (7.23)

Invertible matrices are quite useful for the following reason.

Theorem 7.24. Let A be an invertible m x m matrix and let b be a vector in R^m. Then the linear system

    A x = b     (7.25)

has a unique solution. Furthermore, this solution is given by

    x = A^{-1} b.     (7.26)

Exercise 7.27. Let b be the specific vector in R^2 given in class (7.28), and let R_{π/6} be the matrix that rotates by 30 degrees (in the counterclockwise direction). Find the vector x whose image is b under this rotation. Make the students answer this. Steps: (1) Write the matrix R_{π/6} explicitly. (2) Draw the vector b. (3) Guess a solution x by thinking about how R_{π/6} acts. (4) Use the theorem to calculate x to test your guess. (5) Compare your results and then make sure it works.

Theorem 7.29. If A is an invertible m x m matrix, then

    ( A^{-1} )^{-1} = A.     (7.30)

If A and B are invertible m x m matrices, then BA is invertible and

    ( B A )^{-1} = A^{-1} B^{-1}.     (7.31)

This theorem is completely intuitive! To reverse two processes, you do each one in reverse, as if you're rewinding a movie!
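Both Theorem 7.17 and Theorem 7.29 are easy to test numerically. The following minimal sketch in Python/NumPy (my choice of tool) builds the inverse of a 2 x 2 matrix from formula (7.19), checks it against the library inverse, and then checks (7.31) on a pair of invertible matrices:

    import numpy as np

    def inverse_2x2(M):
        """Inverse of a 2 x 2 matrix via (7.19); raises if the determinant is 0."""
        a, b = M[0]
        c, d = M[1]
        det = a * d - b * c
        if det == 0:
            raise ValueError("matrix is not invertible")
        return (1 / det) * np.array([[ d, -b],
                                     [-c,  a]])

    A = np.array([[1., 2.],
                  [3., 4.]])
    print(np.allclose(inverse_2x2(A), np.linalg.inv(A)))   # True

    # (BA)^{-1} = A^{-1} B^{-1}, as in Theorem 7.29.
    B = np.array([[0., 1.],
                  [2., 5.]])
    lhs = np.linalg.inv(B @ A)
    rhs = np.linalg.inv(A) @ np.linalg.inv(B)
    print(np.allclose(lhs, rhs))                            # True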

The inverse of an m x m matrix A can be computed, if it exists, in the following way, reminiscent of how we solved linear systems. The idea is to row reduce the augmented matrix

    [ A | 1_m ]     (7.32)

to the form

    [ 1_m | B ],     (7.33)

where B is some new m x m matrix. If this can be done, then B = A^{-1}.

Example 7.34. We computed the inverse of a concrete 3 x 3 matrix A this way: write A next to the 3 x 3 identity matrix, carry out row reductions until the left block becomes 1_3, and read off A^{-1} as the right block (steps (7.35) through (7.38)). You should always verify the result by checking that A A^{-1} = 1_3, as in (7.39).

Example 7.40. A rotation by the angle θ (about the origin) in R^3 in the plane spanned by e_1 and e_2 is given by the matrix

    [ cos θ  -sin θ  0 ]
    [ sin θ   cos θ  0 ]     (7.41)
    [   0       0    1 ]

Theorem 7.42 (The Invertible Matrix Theorem). Please see [Lay] for this theorem. It provides many characterizations for a matrix to be invertible.
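The [ A | 1_m ] procedure can be automated with any row-reduction routine. Here is a small sketch using Python's SymPy (my tool choice, and with a made-up 3 x 3 matrix rather than the one from class):

    from sympy import Matrix, eye

    A = Matrix([[2, 0, 1],
                [1, 1, 0],
                [0, 1, 3]])          # an example matrix, assumed invertible

    augmented = A.row_join(eye(3))   # the block matrix [ A | 1_3 ]
    R, _ = augmented.rref()          # row reduce; the left block becomes 1_3

    A_inv = R[:, 3:]                 # the right block is A^{-1}
    print(A_inv)
    print(A * A_inv == eye(3))       # True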

Homework (Due: Tuesday September 27). Exercises 7, 2, 22, and 33 in Section 2.2 of [Lay], Exercises 1 and 2 in Section 2.3 of [Lay], and Exercises 6 and 7 in Section 2.7 of [Lay] (ignore that they say to produce an m x m matrix; please produce an (m-1) x (m-1) matrix that answers the questions, with m = 3 for Exercise 6 and m = 4 for Exercise 7). Warning: please do not read Section 2.7 in [Lay]; it may confuse you. Instead, refer to my notes. Please show all your work, step by step! Do not use calculators or computer programs to solve any problems! Recommended exercises include Exercises 9 (except part (e)), 3, 4, 23, 24, 25, 26, 34, and 35 in Section 2.2 of [Lay] and Exercises 3, 4, 2, 29, and 36 in Section 2.3 of [Lay].

8 September 22

Quiz (review of what we covered on Sep 13 and Sep 15) at the beginning of class! Try to make the first half of this lecture a bit more interactive.

Definition 8.1. A subspace of R^n is a collection H of vectors in R^n satisfying the following conditions.

(a) 0 is in H.
(b) For every pair of vectors u and v in H, their sum u + v is also in H.
(c) For every vector v in H and every constant c, the scalar multiple c v is in H.

Example 8.2. R^n itself is a subspace of R^n. Also, the set { 0 } consisting of just the zero vector in R^n is a subspace. Are there other subspaces?

Exercise 8.3. Let H be the set of points in R^3 described by the solution set of

    3x - 2y + z = 0.     (8.4)

See Figure 4. Is 0 in H? Let

    u = ( u_1, u_2, u_3 )    &    v = ( v_1, v_2, v_3 )     (8.5)

be two vectors in H and let c be a real number. Is u + v in H? Is c v in H?

[Figure 4: A plot of the planes described by 3x - 2y + z = 0 and 3x - 2y + z = 2.]

Exercise 8.6. Is the set of solutions to

    3x - 2y + z = 2     (8.7)

a subspace of R^3? See Figure 4. What goes wrong?

Exercise 8.8. Is the set of solutions to

    3x - 2y + z = 0     (8.9)

with the constraint that

    x² + y² ≤ 1     (8.10)

a subspace of R^3? See Figure 5. What goes wrong? Which of the three properties of the definition of a subspace remain valid even in this example? What about the same linear system but with the constraint that

    x² + y² ≥ 1/3?     (8.11)

See Figure 6.

[Figure 5: A plot of the plane described by 3x - 2y + z = 0 with the constraint x² + y² ≤ 1.]

The previous example leads to the following definition and hints at the following fact.

Definition 8.12. Let A be an m x n matrix. The kernel/null space of A is the set of all solutions to the homogeneous equation

    A x = 0.     (8.13)

It is more common to say "null space" when referring to the matrix and "kernel" when referring to the associated linear transformation.

Theorem 8.14. The null space of an m x n matrix is a subspace of R^n.

[Figure 6: A plot of the plane described by 3x - 2y + z = 0 with the constraint x² + y² ≥ 1/3.]

Example 8.15. Consider the linear system

    3x - 2y + z = 0     (8.16)

from the previous examples. The matrix corresponding to this linear system is just

    A = [ 3  -2  1 ],     (8.17)

a 1 x 3 matrix. Hence, it describes a linear transformation from R^3 to R^1. The null space of A corresponds exactly to the solutions of

    [ 3  -2  1 ] ( x, y, z ) = [ 0 ].     (8.18)

Definition 8.19. Let A be an m x n matrix. The image/column space of A is the set of all vectors in R^m of the form A x with x in R^n. It is more common to say "column space" when referring to the matrix and "image" when referring to the associated linear transformation.

The reason the image of a transformation R^m <--T-- R^n is called the column space is that the image of T is spanned by the vectors in the columns of the associated matrix

    [ T(e_1)  ...  T(e_n) ].     (8.20)

In other words, b is in the image of A if and only if there exist coefficients x_1, ..., x_n such that

    b = x_1 T(e_1) + ... + x_n T(e_n).     (8.21)

Theorem 8.22. The image of an m x n matrix is a subspace of R^m.
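For this particular A, both subspaces can be computed symbolically. A small sketch with Python's SymPy (my choice of tool):

    from sympy import Matrix

    A = Matrix([[3, -2, 1]])     # the 1 x 3 matrix from Example 8.15

    print(A.nullspace())
    # Two basis vectors, (2/3, 1, 0) and (-1/3, 0, 1): the null space is
    # the plane 3x - 2y + z = 0, a 2-dimensional subspace of R^3.

    print(A.columnspace())
    # A single basis vector, (3,): the column space is all of R^1.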

45 Example Consider the linear transformation from R 2 to R 3 described by the matrix 0 0. (8.24) 3 2 The images of the vectors e and e 2 get sent to the columns of the matrix. They span the plane shown in Figure 7. 3x 2y + z = 0 Figure 7: A plot of the plane described by 3x 2y + z = 0 along with two vectors spanning it. Definition A basis for a subspace H of R n is a set of vectors that is both linearly independent and spans H. Exercise Going back to our previous example of the plane in R 3 specified by the linear system 3x 2y + z = 0, (8.27) what is a basis for the vectors in this plane? Since the set of all vectors x y (8.28) z satisfying this linear system define this plane, we just need to find a basis for these solutions. We know that if we specify x and y as our free variables, then a general solution of this system is of the form x y (8.29) 3x + 2y 45

MATH 2210Q Applied Linear Algebra, Spring 2018

MATH 2210Q Applied Linear Algebra, Spring 2018 MATH 22Q Applied Linear Algebra, Spring 28 Arthur J. Parzygnat These are my personal notes. This is not a substitute for Lay s book. I will frequently reference both recent versions of this book. The 4th

More information

Chapter 1: Linear Equations

Chapter 1: Linear Equations Chapter : Linear Equations (Last Updated: September, 7) The material for these notes is derived primarily from Linear Algebra and its applications by David Lay (4ed).. Systems of Linear Equations Before

More information

Chapter 1: Linear Equations

Chapter 1: Linear Equations Chapter : Linear Equations (Last Updated: September, 6) The material for these notes is derived primarily from Linear Algebra and its applications by David Lay (4ed).. Systems of Linear Equations Before

More information

Dot Products, Transposes, and Orthogonal Projections

Dot Products, Transposes, and Orthogonal Projections Dot Products, Transposes, and Orthogonal Projections David Jekel November 13, 2015 Properties of Dot Products Recall that the dot product or standard inner product on R n is given by x y = x 1 y 1 + +

More information

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS chapter MORE MATRIX ALGEBRA GOALS In Chapter we studied matrix operations and the algebra of sets and logic. We also made note of the strong resemblance of matrix algebra to elementary algebra. The reader

More information

MATH240: Linear Algebra Review for exam #1 6/10/2015 Page 1

MATH240: Linear Algebra Review for exam #1 6/10/2015 Page 1 MATH24: Linear Algebra Review for exam # 6//25 Page No review sheet can cover everything that is potentially fair game for an exam, but I tried to hit on all of the topics with these questions, as well

More information

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2 Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch

More information

Announcements Monday, September 18

Announcements Monday, September 18 Announcements Monday, September 18 WeBWorK 1.4, 1.5 are due on Wednesday at 11:59pm. The first midterm is on this Friday, September 22. Midterms happen during recitation. The exam covers through 1.5. About

More information

Section 1.5. Solution Sets of Linear Systems

Section 1.5. Solution Sets of Linear Systems Section 1.5 Solution Sets of Linear Systems Plan For Today Today we will learn to describe and draw the solution set of an arbitrary system of linear equations Ax = b, using spans. Ax = b Recall: the solution

More information

All of my class notes can be found at

All of my class notes can be found at My name is Leon Hostetler I am currently a student at Florida State University majoring in physics as well as applied and computational mathematics Feel free to download, print, and use these class notes

More information

1 Review of the dot product

1 Review of the dot product Any typographical or other corrections about these notes are welcome. Review of the dot product The dot product on R n is an operation that takes two vectors and returns a number. It is defined by n u

More information

MATH10212 Linear Algebra B Homework Week 4

MATH10212 Linear Algebra B Homework Week 4 MATH22 Linear Algebra B Homework Week 4 Students are strongly advised to acquire a copy of the Textbook: D. C. Lay Linear Algebra and its Applications. Pearson, 26. ISBN -52-2873-4. Normally, homework

More information

MATH 310, REVIEW SHEET 2

MATH 310, REVIEW SHEET 2 MATH 310, REVIEW SHEET 2 These notes are a very short summary of the key topics in the book (and follow the book pretty closely). You should be familiar with everything on here, but it s not comprehensive,

More information

Math 220 F11 Lecture Notes

Math 220 F11 Lecture Notes Math 22 F Lecture Notes William Chen November 4, 2. Lecture. Firstly, lets just get some notation out of the way. Notation. R, Q, C, Z, N,,,, {},, A B. Everyone in high school should have studied equations

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Math 31 Lesson Plan. Day 5: Intro to Groups. Elizabeth Gillaspy. September 28, 2011

Math 31 Lesson Plan. Day 5: Intro to Groups. Elizabeth Gillaspy. September 28, 2011 Math 31 Lesson Plan Day 5: Intro to Groups Elizabeth Gillaspy September 28, 2011 Supplies needed: Sign in sheet Goals for students: Students will: Improve the clarity of their proof-writing. Gain confidence

More information

Math 308 Midterm Answers and Comments July 18, Part A. Short answer questions

Math 308 Midterm Answers and Comments July 18, Part A. Short answer questions Math 308 Midterm Answers and Comments July 18, 2011 Part A. Short answer questions (1) Compute the determinant of the matrix a 3 3 1 1 2. 1 a 3 The determinant is 2a 2 12. Comments: Everyone seemed to

More information

Linear Algebra, Summer 2011, pt. 2

Linear Algebra, Summer 2011, pt. 2 Linear Algebra, Summer 2, pt. 2 June 8, 2 Contents Inverses. 2 Vector Spaces. 3 2. Examples of vector spaces..................... 3 2.2 The column space......................... 6 2.3 The null space...........................

More information

Chapter 1 Review of Equations and Inequalities

Chapter 1 Review of Equations and Inequalities Chapter 1 Review of Equations and Inequalities Part I Review of Basic Equations Recall that an equation is an expression with an equal sign in the middle. Also recall that, if a question asks you to solve

More information

Abstract & Applied Linear Algebra (Chapters 1-2) James A. Bernhard University of Puget Sound

Abstract & Applied Linear Algebra (Chapters 1-2) James A. Bernhard University of Puget Sound Abstract & Applied Linear Algebra (Chapters 1-2) James A. Bernhard University of Puget Sound Copyright 2018 by James A. Bernhard Contents 1 Vector spaces 3 1.1 Definitions and basic properties.................

More information

Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur

Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Lecture 02 Groups: Subgroups and homomorphism (Refer Slide Time: 00:13) We looked

More information

The Gauss-Jordan Elimination Algorithm

The Gauss-Jordan Elimination Algorithm The Gauss-Jordan Elimination Algorithm Solving Systems of Real Linear Equations A. Havens Department of Mathematics University of Massachusetts, Amherst January 24, 2018 Outline 1 Definitions Echelon Forms

More information

Announcements Wednesday, October 04

Announcements Wednesday, October 04 Announcements Wednesday, October 04 Please fill out the mid-semester survey under Quizzes on Canvas. WeBWorK 1.8, 1.9 are due today at 11:59pm. The quiz on Friday covers 1.7, 1.8, and 1.9. My office is

More information

Gaussian elimination

Gaussian elimination Gaussian elimination October 14, 2013 Contents 1 Introduction 1 2 Some definitions and examples 2 3 Elementary row operations 7 4 Gaussian elimination 11 5 Rank and row reduction 16 6 Some computational

More information

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018 Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry

More information

MATH 320, WEEK 7: Matrices, Matrix Operations

MATH 320, WEEK 7: Matrices, Matrix Operations MATH 320, WEEK 7: Matrices, Matrix Operations 1 Matrices We have introduced ourselves to the notion of the grid-like coefficient matrix as a short-hand coefficient place-keeper for performing Gaussian

More information

Take the Anxiety Out of Word Problems

Take the Anxiety Out of Word Problems Take the Anxiety Out of Word Problems I find that students fear any problem that has words in it. This does not have to be the case. In this chapter, we will practice a strategy for approaching word problems

More information

AN ALGEBRA PRIMER WITH A VIEW TOWARD CURVES OVER FINITE FIELDS

AN ALGEBRA PRIMER WITH A VIEW TOWARD CURVES OVER FINITE FIELDS AN ALGEBRA PRIMER WITH A VIEW TOWARD CURVES OVER FINITE FIELDS The integers are the set 1. Groups, Rings, and Fields: Basic Examples Z := {..., 3, 2, 1, 0, 1, 2, 3,...}, and we can add, subtract, and multiply

More information

Math 138: Introduction to solving systems of equations with matrices. The Concept of Balance for Systems of Equations

Math 138: Introduction to solving systems of equations with matrices. The Concept of Balance for Systems of Equations Math 138: Introduction to solving systems of equations with matrices. Pedagogy focus: Concept of equation balance, integer arithmetic, quadratic equations. The Concept of Balance for Systems of Equations

More information

Announcements Monday, September 25

Announcements Monday, September 25 Announcements Monday, September 25 The midterm will be returned in recitation on Friday. You can pick it up from me in office hours before then. Keep tabs on your grades on Canvas. WeBWorK 1.7 is due Friday

More information

1.2 Row Echelon Form EXAMPLE 1

1.2 Row Echelon Form EXAMPLE 1 .2 Row Echelon Form 7. The two systems 2x + x 2 = 4x + x 2 = 5 and 2x + x 2 = 4x + x 2 = have the same coefficient matrix but different righthand sides. Solve both systems simultaneously by eliminating

More information

MAT 2037 LINEAR ALGEBRA I web:

MAT 2037 LINEAR ALGEBRA I web: MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear

More information

Math 123, Week 2: Matrix Operations, Inverses

Math 123, Week 2: Matrix Operations, Inverses Math 23, Week 2: Matrix Operations, Inverses Section : Matrices We have introduced ourselves to the grid-like coefficient matrix when performing Gaussian elimination We now formally define general matrices

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

Linear algebra and differential equations (Math 54): Lecture 10

Linear algebra and differential equations (Math 54): Lecture 10 Linear algebra and differential equations (Math 54): Lecture 10 Vivek Shende February 24, 2016 Hello and welcome to class! As you may have observed, your usual professor isn t here today. He ll be back

More information

MATH 310, REVIEW SHEET

MATH 310, REVIEW SHEET MATH 310, REVIEW SHEET These notes are a summary of the key topics in the book (and follow the book pretty closely). You should be familiar with everything on here, but it s not comprehensive, so please

More information

Linear Independence Reading: Lay 1.7

Linear Independence Reading: Lay 1.7 Linear Independence Reading: Lay 17 September 11, 213 In this section, we discuss the concept of linear dependence and independence I am going to introduce the definitions and then work some examples and

More information

Last Time. x + 3y = 6 x + 2y = 1. x + 3y = 6 y = 1. 2x + 4y = 8 x 2y = 1. x + 3y = 6 2x y = 7. Lecture 2

Last Time. x + 3y = 6 x + 2y = 1. x + 3y = 6 y = 1. 2x + 4y = 8 x 2y = 1. x + 3y = 6 2x y = 7. Lecture 2 January 9 Last Time 1. Last time we ended with saying that the following four systems are equivalent in the sense that we can move from one system to the other by a special move we discussed. (a) (b) (c)

More information

2 Systems of Linear Equations

2 Systems of Linear Equations 2 Systems of Linear Equations A system of equations of the form or is called a system of linear equations. x + 2y = 7 2x y = 4 5p 6q + r = 4 2p + 3q 5r = 7 6p q + 4r = 2 Definition. An equation involving

More information

Math101, Sections 2 and 3, Spring 2008 Review Sheet for Exam #2:

Math101, Sections 2 and 3, Spring 2008 Review Sheet for Exam #2: Math101, Sections 2 and 3, Spring 2008 Review Sheet for Exam #2: 03 17 08 3 All about lines 3.1 The Rectangular Coordinate System Know how to plot points in the rectangular coordinate system. Know the

More information

Math 110, Spring 2015: Midterm Solutions

Math 110, Spring 2015: Midterm Solutions Math 11, Spring 215: Midterm Solutions These are not intended as model answers ; in many cases far more explanation is provided than would be necessary to receive full credit. The goal here is to make

More information

Review Solutions for Exam 1

Review Solutions for Exam 1 Definitions Basic Theorems. Finish the definition: Review Solutions for Exam (a) A linear combination of vectors {v,..., v n } is: any vector of the form c v + c v + + c n v n (b) A set of vectors {v,...,

More information

2. Introduction to commutative rings (continued)

2. Introduction to commutative rings (continued) 2. Introduction to commutative rings (continued) 2.1. New examples of commutative rings. Recall that in the first lecture we defined the notions of commutative rings and field and gave some examples of

More information

12. Perturbed Matrices

12. Perturbed Matrices MAT334 : Applied Linear Algebra Mike Newman, winter 208 2. Perturbed Matrices motivation We want to solve a system Ax = b in a context where A and b are not known exactly. There might be experimental errors,

More information

Designing Information Devices and Systems I Fall 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way

Designing Information Devices and Systems I Fall 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way EECS 16A Designing Information Devices and Systems I Fall 018 Lecture Notes Note 1 1.1 Introduction to Linear Algebra the EECS Way In this note, we will teach the basics of linear algebra and relate it

More information

3 The language of proof

3 The language of proof 3 The language of proof After working through this section, you should be able to: (a) understand what is asserted by various types of mathematical statements, in particular implications and equivalences;

More information

Matrices, Row Reduction of Matrices

Matrices, Row Reduction of Matrices Matrices, Row Reduction of Matrices October 9, 014 1 Row Reduction and Echelon Forms In the previous section, we saw a procedure for solving systems of equations It is simple in that it consists of only

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

Introduction to Algebra: The First Week

Introduction to Algebra: The First Week Introduction to Algebra: The First Week Background: According to the thermostat on the wall, the temperature in the classroom right now is 72 degrees Fahrenheit. I want to write to my friend in Europe,

More information

1300 Linear Algebra and Vector Geometry

1300 Linear Algebra and Vector Geometry 1300 Linear Algebra and Vector Geometry R. Craigen Office: MH 523 Email: craigenr@umanitoba.ca May-June 2017 Introduction: linear equations Read 1.1 (in the text that is!) Go to course, class webpages.

More information

Math 416, Spring 2010 More on Algebraic and Geometric Properties January 21, 2010 MORE ON ALGEBRAIC AND GEOMETRIC PROPERTIES

Math 416, Spring 2010 More on Algebraic and Geometric Properties January 21, 2010 MORE ON ALGEBRAIC AND GEOMETRIC PROPERTIES Math 46, Spring 2 More on Algebraic and Geometric Properties January 2, 2 MORE ON ALGEBRAIC AND GEOMETRIC PROPERTIES Algebraic properties Algebraic properties of matrix/vector multiplication Last time

More information

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction MAT4 : Introduction to Applied Linear Algebra Mike Newman fall 7 9. Projections introduction One reason to consider projections is to understand approximate solutions to linear systems. A common example

More information

36 What is Linear Algebra?

36 What is Linear Algebra? 36 What is Linear Algebra? The authors of this textbook think that solving linear systems of equations is a big motivation for studying linear algebra This is certainly a very respectable opinion as systems

More information

Answers in blue. If you have questions or spot an error, let me know. 1. Find all matrices that commute with A =. 4 3

Answers in blue. If you have questions or spot an error, let me know. 1. Find all matrices that commute with A =. 4 3 Answers in blue. If you have questions or spot an error, let me know. 3 4. Find all matrices that commute with A =. 4 3 a b If we set B = and set AB = BA, we see that 3a + 4b = 3a 4c, 4a + 3b = 3b 4d,

More information

Physics 6A Lab Experiment 6

Physics 6A Lab Experiment 6 Biceps Muscle Model Physics 6A Lab Experiment 6 Introduction This lab will begin with some warm-up exercises to familiarize yourself with the theory, as well as the experimental setup. Then you ll move

More information

Math Lecture 3 Notes

Math Lecture 3 Notes Math 1010 - Lecture 3 Notes Dylan Zwick Fall 2009 1 Operations with Real Numbers In our last lecture we covered some basic operations with real numbers like addition, subtraction and multiplication. This

More information

Some Notes on Linear Algebra

Some Notes on Linear Algebra Some Notes on Linear Algebra prepared for a first course in differential equations Thomas L Scofield Department of Mathematics and Statistics Calvin College 1998 1 The purpose of these notes is to present

More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

Section 1.8/1.9. Linear Transformations

Section 1.8/1.9. Linear Transformations Section 1.8/1.9 Linear Transformations Motivation Let A be a matrix, and consider the matrix equation b = Ax. If we vary x, we can think of this as a function of x. Many functions in real life the linear

More information

Math 54 HW 4 solutions

Math 54 HW 4 solutions Math 54 HW 4 solutions 2.2. Section 2.2 (a) False: Recall that performing a series of elementary row operations A is equivalent to multiplying A by a series of elementary matrices. Suppose that E,...,

More information

Math 416, Spring 2010 Matrix multiplication; subspaces February 2, 2010 MATRIX MULTIPLICATION; SUBSPACES. 1. Announcements

Math 416, Spring 2010 Matrix multiplication; subspaces February 2, 2010 MATRIX MULTIPLICATION; SUBSPACES. 1. Announcements Math 416, Spring 010 Matrix multiplication; subspaces February, 010 MATRIX MULTIPLICATION; SUBSPACES 1 Announcements Office hours on Wednesday are cancelled because Andy will be out of town If you email

More information

MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS

MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS 1. HW 1: Due September 4 1.1.21. Suppose v, w R n and c is a scalar. Prove that Span(v + cw, w) = Span(v, w). We must prove two things: that every element

More information

CHAPTER 8: MATRICES and DETERMINANTS

CHAPTER 8: MATRICES and DETERMINANTS (Section 8.1: Matrices and Determinants) 8.01 CHAPTER 8: MATRICES and DETERMINANTS The material in this chapter will be covered in your Linear Algebra class (Math 254 at Mesa). SECTION 8.1: MATRICES and

More information

Calculus II. Calculus II tends to be a very difficult course for many students. There are many reasons for this.

Calculus II. Calculus II tends to be a very difficult course for many students. There are many reasons for this. Preface Here are my online notes for my Calculus II course that I teach here at Lamar University. Despite the fact that these are my class notes they should be accessible to anyone wanting to learn Calculus

More information

Math Fundamentals for Statistics I (Math 52) Unit 7: Connections (Graphs, Equations and Inequalities)

Math Fundamentals for Statistics I (Math 52) Unit 7: Connections (Graphs, Equations and Inequalities) Math Fundamentals for Statistics I (Math 52) Unit 7: Connections (Graphs, Equations and Inequalities) By Scott Fallstrom and Brent Pickett The How and Whys Guys This work is licensed under a Creative Commons

More information

Rectangular Systems and Echelon Forms

Rectangular Systems and Echelon Forms CHAPTER 2 Rectangular Systems and Echelon Forms 2.1 ROW ECHELON FORM AND RANK We are now ready to analyze more general linear systems consisting of m linear equations involving n unknowns a 11 x 1 + a

More information

A primer on matrices

A primer on matrices A primer on matrices Stephen Boyd August 4, 2007 These notes describe the notation of matrices, the mechanics of matrix manipulation, and how to use matrices to formulate and solve sets of simultaneous

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

Section 4.5. Matrix Inverses

Section 4.5. Matrix Inverses Section 4.5 Matrix Inverses The Definition of Inverse Recall: The multiplicative inverse (or reciprocal) of a nonzero number a is the number b such that ab = 1. We define the inverse of a matrix in almost

More information

Nondeterministic finite automata

Nondeterministic finite automata Lecture 3 Nondeterministic finite automata This lecture is focused on the nondeterministic finite automata (NFA) model and its relationship to the DFA model. Nondeterminism is an important concept in the

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1982, 28. This chapter originates from material used by the author at Imperial College, University of London, between 1981 and 199. It is available free to all individuals,

More information

2.2 Graphs of Functions

2.2 Graphs of Functions 2.2 Graphs of Functions Introduction DEFINITION domain of f, D(f) Associated with every function is a set called the domain of the function. This set influences what the graph of the function looks like.

More information

Linear Algebra for Beginners Open Doors to Great Careers. Richard Han

Linear Algebra for Beginners Open Doors to Great Careers. Richard Han Linear Algebra for Beginners Open Doors to Great Careers Richard Han Copyright 2018 Richard Han All rights reserved. CONTENTS PREFACE... 7 1 - INTRODUCTION... 8 2 SOLVING SYSTEMS OF LINEAR EQUATIONS...

More information

The Matrix Vector Product and the Matrix Product

The Matrix Vector Product and the Matrix Product The Matrix Vector Product and the Matrix Product As we have seen a matrix is just a rectangular array of scalars (real numbers) The size of a matrix indicates its number of rows and columns A matrix with

More information

MATH 2331 Linear Algebra. Section 1.1 Systems of Linear Equations. Finding the solution to a set of two equations in two variables: Example 1: Solve:

MATH 2331 Linear Algebra. Section 1.1 Systems of Linear Equations. Finding the solution to a set of two equations in two variables: Example 1: Solve: MATH 2331 Linear Algebra Section 1.1 Systems of Linear Equations Finding the solution to a set of two equations in two variables: Example 1: Solve: x x = 3 1 2 2x + 4x = 12 1 2 Geometric meaning: Do these

More information

Differential Equations

Differential Equations This document was written and copyrighted by Paul Dawkins. Use of this document and its online version is governed by the Terms and Conditions of Use located at. The online version of this document is

More information

Law of Trichotomy and Boundary Equations

Law of Trichotomy and Boundary Equations Law of Trichotomy and Boundary Equations Law of Trichotomy: For any two real numbers a and b, exactly one of the following is true. i. a < b ii. a = b iii. a > b The Law of Trichotomy is a formal statement

More information

Vectors and Coordinate Systems

Vectors and Coordinate Systems Vectors and Coordinate Systems In Newtonian mechanics, we want to understand how material bodies interact with each other and how this affects their motion through space. In order to be able to make quantitative

More information

Midterm 1 Review. Written by Victoria Kala SH 6432u Office Hours: R 12:30 1:30 pm Last updated 10/10/2015

Midterm 1 Review. Written by Victoria Kala SH 6432u Office Hours: R 12:30 1:30 pm Last updated 10/10/2015 Midterm 1 Review Written by Victoria Kala vtkala@math.ucsb.edu SH 6432u Office Hours: R 12:30 1:30 pm Last updated 10/10/2015 Summary This Midterm Review contains notes on sections 1.1 1.5 and 1.7 in your

More information

1 Last time: multiplying vectors matrices

1 Last time: multiplying vectors matrices MATH Linear algebra (Fall 7) Lecture Last time: multiplying vectors matrices Given a matrix A = a a a n a a a n and a vector v = a m a m a mn Av = v a a + v a a v v + + Rn we define a n a n a m a m a mn

More information

Systems of equation and matrices

Systems of equation and matrices Systems of equation and matrices Jean-Luc Bouchot jean-luc.bouchot@drexel.edu February 23, 2013 Warning This is a work in progress. I can not ensure it to be mistake free at the moment. It is also lacking

More information

Topic 14 Notes Jeremy Orloff

Topic 14 Notes Jeremy Orloff Topic 4 Notes Jeremy Orloff 4 Row reduction and subspaces 4. Goals. Be able to put a matrix into row reduced echelon form (RREF) using elementary row operations.. Know the definitions of null and column

More information

. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in

. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in Vectors and Matrices Continued Remember that our goal is to write a system of algebraic equations as a matrix equation. Suppose we have the n linear algebraic equations a x + a 2 x 2 + a n x n = b a 2

More information

An Introduction To Linear Algebra. Kuttler

An Introduction To Linear Algebra. Kuttler An Introduction To Linear Algebra Kuttler April, 7 Contents Introduction 7 F n 9 Outcomes 9 Algebra in F n Systems Of Equations Outcomes Systems Of Equations, Geometric Interpretations Systems Of Equations,

More information

Math 291-1: Lecture Notes Northwestern University, Fall 2015

Math 291-1: Lecture Notes Northwestern University, Fall 2015 Math 29-: Lecture Notes Northwestern University, Fall 25 Written by Santiago Cañez These are lecture notes for Math 29-, the first quarter of MENU: Intensive Linear Algebra and Multivariable Calculus,

More information

Getting Started with Communications Engineering

Getting Started with Communications Engineering 1 Linear algebra is the algebra of linear equations: the term linear being used in the same sense as in linear functions, such as: which is the equation of a straight line. y ax c (0.1) Of course, if we

More information

MITOCW ocw f99-lec09_300k

MITOCW ocw f99-lec09_300k MITOCW ocw-18.06-f99-lec09_300k OK, this is linear algebra lecture nine. And this is a key lecture, this is where we get these ideas of linear independence, when a bunch of vectors are independent -- or

More information

Lecture 6: Lies, Inner Product Spaces, and Symmetric Matrices

Lecture 6: Lies, Inner Product Spaces, and Symmetric Matrices Math 108B Professor: Padraic Bartlett Lecture 6: Lies, Inner Product Spaces, and Symmetric Matrices Week 6 UCSB 2014 1 Lies Fun fact: I have deceived 1 you somewhat with these last few lectures! Let me

More information

Sequence convergence, the weak T-axioms, and first countability

Sequence convergence, the weak T-axioms, and first countability Sequence convergence, the weak T-axioms, and first countability 1 Motivation Up to now we have been mentioning the notion of sequence convergence without actually defining it. So in this section we will

More information

MATH10212 Linear Algebra B Homework Week 5

MATH10212 Linear Algebra B Homework Week 5 MATH Linear Algebra B Homework Week 5 Students are strongly advised to acquire a copy of the Textbook: D C Lay Linear Algebra its Applications Pearson 6 (or other editions) Normally homework assignments

More information

BASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C =

BASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C = CHAPTER I BASIC NOTIONS (a) 8666 and 8833 (b) a =6,a =4 will work in the first case, but there are no possible such weightings to produce the second case, since Student and Student 3 have to end up with

More information

MATH 320, WEEK 6: Linear Systems, Gaussian Elimination, Coefficient Matrices

MATH 320, WEEK 6: Linear Systems, Gaussian Elimination, Coefficient Matrices MATH 320, WEEK 6: Linear Systems, Gaussian Elimination, Coefficient Matrices We will now switch gears and focus on a branch of mathematics known as linear algebra. There are a few notes worth making before

More information

#29: Logarithm review May 16, 2009

#29: Logarithm review May 16, 2009 #29: Logarithm review May 16, 2009 This week we re going to spend some time reviewing. I say re- view since you ve probably seen them before in theory, but if my experience is any guide, it s quite likely

More information

Alex s Guide to Word Problems and Linear Equations Following Glencoe Algebra 1

Alex s Guide to Word Problems and Linear Equations Following Glencoe Algebra 1 Alex s Guide to Word Problems and Linear Equations Following Glencoe Algebra 1 What is a linear equation? It sounds fancy, but linear equation means the same thing as a line. In other words, it s an equation

More information

Linear Algebra Exam 1 Spring 2007

Linear Algebra Exam 1 Spring 2007 Linear Algebra Exam 1 Spring 2007 March 15, 2007 Name: SOLUTION KEY (Total 55 points, plus 5 more for Pledged Assignment.) Honor Code Statement: Directions: Complete all problems. Justify all answers/solutions.

More information

Solution to Proof Questions from September 1st

Solution to Proof Questions from September 1st Solution to Proof Questions from September 1st Olena Bormashenko September 4, 2011 What is a proof? A proof is an airtight logical argument that proves a certain statement in general. In a sense, it s

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date May 9, 29 2 Contents 1 Motivation for the course 5 2 Euclidean n dimensional Space 7 2.1 Definition of n Dimensional Euclidean Space...........

More information

MITOCW ocw-18_02-f07-lec02_220k

MITOCW ocw-18_02-f07-lec02_220k MITOCW ocw-18_02-f07-lec02_220k The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free.

More information

Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124

Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124 Matrices Math 220 Copyright 2016 Pinaki Das This document is freely redistributable under the terms of the GNU Free Documentation License For more information, visit http://wwwgnuorg/copyleft/fdlhtml Contents

More information