University of Oxford Department of Engineering Science. A1 Mathematics Linear Algebra. 4 lectures, Michaelmas Term 2010


Prof. Steve Roberts (sjrob@robots.ox.ac.uk)

Recommended Texts

These are some books that I found useful while preparing these notes.

Allenby, R.B.J.T. (1995). Linear Algebra. Elsevier.
Anton, H. (1984). Elementary Linear Algebra (4th edn). Wiley.
Anton, H. & Rorres, C. (2000). Elementary Linear Algebra: Applications Version (8th edn). Wiley.
Kreyszig, E. (2006). Advanced Engineering Mathematics (9th edn). Wiley.
Larson, R. & Falvo, D.C. (2010). Elementary Linear Algebra (6th edn). Brooks/Cole.
Strang, G. (2006). Linear Algebra and Its Applications (4th edn). Thomson Brooks/Cole.

Topic 1
Systems of linear equations

1.1 Introduction

Let us begin by discussing the simple equation

    ax = b    (1.1)

This is a linear equation in the unknown x. It is linear because x appears on the left-hand side in a very simple form: raised to the power one, and multiplied by a constant. There are no terms such as x^2, log(x) or x + 2 on the left-hand side. The only other term in the equation, the b on the right-hand side, is a constant.

We can think of the left-hand side of eqn (1.1) as the result of a function that maps x to ax:

    f(x) = ax

The adjective linear arises from the fact that the graph of this function is a straight line (and in particular, one that goes through the origin). More formally, we say that f is a linear function if it has the properties

    (i)  f(x + y) = f(x) + f(y)
    (ii) f(αx) = αf(x)    [where α = an arbitrary scalar]

The function f(x) = ax satisfies both requirements, whereas functions like x^2, log(x) and x + 2 clearly do not: they are non-linear.

More generally, a linear equation in n unknowns x1, x2, ..., xn is defined as one that can be expressed in the form

    a1 x1 + a2 x2 + ... + an xn = b

where the coefficients a1, a2, ..., an and the right-hand side b are all constants. We will assume that the constants and the unknowns are numbers in the real domain R, but it should be pointed out that some applications (e.g. quantum mechanics) give rise to linear equations involving numbers in the complex domain C.

For much of this course, we will be studying sets of simultaneous linear equations that we aim to solve for the unknowns. We will describe a set of m linear equations in n unknowns as an m × n linear system. As you know from P1, such a system can be expressed in the compact form

    Ax = b    (1.2)

where A is an m × n matrix, and b and x are column vectors with m and n components respectively. Note the similarity with eqn (1.1). We can think of the left-hand side of eqn (1.2) as the result of a function that maps x (a vector in R^n) to Ax (a

vector in R^m):

    f(x) = Ax

This function is a linear transformation from R^n to R^m because

    (i)  f(x + y) = A(x + y) = Ax + Ay = f(x) + f(y)
    (ii) f(αx) = A(αx) = α(Ax) = αf(x)

Essentially, the origin is mapped to the origin (i.e. there is no translation), and straight lines in R^n are mapped to straight lines in R^m.

[Figure: a straight line through points P, Q, R in R^n is mapped to a straight line through the image points P', Q', R' in R^m; the origin O is mapped to the origin O'.]
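These two properties are easy to check numerically. The following sketch (in Python with NumPy, not part of the original notes) tests them for a randomly chosen matrix and pair of vectors:

```python
import numpy as np

# A random 3x2 matrix defines a linear map from R^2 to R^3.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
x, y = rng.standard_normal(2), rng.standard_normal(2)
alpha = 2.5

# Property (i): f(x + y) = f(x) + f(y)
print(np.allclose(A @ (x + y), A @ x + A @ y))        # True

# Property (ii): f(alpha x) = alpha f(x)
print(np.allclose(A @ (alpha * x), alpha * (A @ x)))  # True
```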

1.2 Two ways of thinking about Ax = b

Consider the 2 × 2 system

    [ 1   1 ] [ x1 ]   [ 3 ]
    [ 4  -1 ] [ x2 ] = [ 2 ]

We need to find values for x1 and x2 such that the matrix-vector product on the left-hand side is equal to the vector on the right-hand side.

The familiar row-oriented view interprets the system as

    x1 + x2 = 3
    4x1 - x2 = 2

Our task is to find a point in R^2 with coordinates (x1, x2) such that both equations are satisfied simultaneously. Geometrically, we can visualize plotting two lines:

[Figure: the lines x1 + x2 = 3 and 4x1 - x2 = 2 plotted in the (x1, x2) plane, intersecting at the point (1, 2).]

The lines intersect at the point x1 = 1, x2 = 2. This is the (unique) solution of the original system.

The alternative column-oriented view interprets the system as

    x1 [ 1 ] + x2 [  1 ]   [ 3 ]
       [ 4 ]      [ -1 ] = [ 2 ]

This time we have two vectors in R^2, namely i + 4j and i - j, and our task is to combine them (by choosing

suitable scalar multipliers x1 and x2) in such a way that the resultant vector is 3i + 2j. Geometrically, we can visualize constructing a parallelogram:

[Figure: the vectors i + 4j, i - j and 2(i - j) drawn in the plane, with the parallelogram construction giving the resultant 3i + 2j.]

The right linear combination of the column vectors is 1(i + 4j) + 2(i - j). Hence x1 = 1, x2 = 2 is the (unique) solution of the original system.

The row- and column-oriented pictures look very different, but they lead, of course, to the same solution. They are two sides of the same coin.
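Both views are easy to reproduce numerically; here is a quick sketch in Python with NumPy:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, -1.0]])
b = np.array([3.0, 2.0])

# Row-oriented view: solve the two simultaneous equations.
x = np.linalg.solve(A, b)
print(x)  # [1. 2.]

# Column-oriented view: the same numbers combine the columns of A.
print(x[0] * A[:, 0] + x[1] * A[:, 1])  # [3. 2.], reproducing b
```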

The column-oriented view may seem less natural, but we will see that it can be very helpful in understanding and solving general m × n linear systems, whether homogeneous (Ax = 0) or non-homogeneous (Ax = b). In particular, the column-oriented view provides us with an alternative way of posing (and then answering!) the two fundamental questions that arise in relation to any linear system: Is there any solution at all? If there is a solution, is it unique, or are there infinitely many?

Solution existence
  Row-oriented view: Do the m hyperplanes in n-dimensional space have any point in common?
  Column-oriented view: Can b be expressed as a linear combination of the columns of A?

Solution uniqueness
  Row-oriented view: Do the hyperplanes intersect at just one point, or infinitely many points?
  Column-oriented view: Can the linear combination be formed in just one way, or infinitely many ways?

1.3 Echelon form

Regardless of which geometric interpretation we have in mind, we need a reliable way of answering the basic questions of solution existence and solution uniqueness for a general linear system Ax = b. In P1 you learnt how to apply Gaussian elimination to the augmented coefficient matrix

    [ A | b ]

Having reduced this matrix (and thus the system of equations) to a simpler form, you were generally able to solve for the unknowns by a systematic process of back-substitution, leading to a unique solution. Sometimes, however, elimination revealed that there was no solution, implying that the original equations were inconsistent. In other cases, elimination revealed that there were infinitely many solutions: one or more of the unknowns could be assigned an arbitrary value.

All of the systems you studied in P1 involved n equations in n unknowns. We now want to relax that restriction, and see what can happen when the coefficient matrix A is rectangular (m ≠ n). There are three scenarios:

m = n: square. Same number of equations as unknowns (e.g. a 3 × 3 system). Usual expectation: a unique solution.

m < n: underdetermined. Fewer equations than unknowns (e.g. a 2 × 3 system). Usual expectation: infinitely many solutions.

m > n: overdetermined. More equations than unknowns (e.g. a 3 × 2 system). Usual expectation: no solution.

The good news is that Gaussian elimination, with a few tweaks, continues to play a central role. The key is to reduce the (augmented) coefficient matrix to row-echelon form, or just echelon form for short. The word is borrowed directly from French (échelon = a rung on a ladder).

To illustrate the meaning, matrices with the following patterns are in echelon form (■ marks a nonzero entry, × marks an entry that may take any value):

    [ ■ × × × ]    [ ■ × × ]    [ ■ × × × × ]
    [ 0 ■ × × ]    [ 0 ■ × ]    [ 0 0 ■ × × ]
    [ 0 0 0 ■ ]    [ 0 0 ■ ]    [ 0 0 0 0 0 ]

In any row that does not consist entirely of zeros, the leftmost nonzero entry is given a special name: the leading entry. We can now proceed to a formal definition.

A matrix is in row-echelon form (echelon form) if
  (i)  in every pair of successive nonzero rows, the leading entry in the lower row lies strictly to the right of the leading entry in the upper row;
  (ii) any zero rows are grouped together at the bottom of the matrix.

Some authors add a third condition, that each of the leading entries must be 1. This complicates the elimination process slightly (each row has to be divided through by its leading entry) but it simplifies back-substitution. We will not insist on this; the main thing is to get the pattern right.

Here are some examples of patterns that are not in echelon form (a leading entry failing to lie strictly to the right of the one above it, or a zero row appearing above a nonzero row):

    [ ■ × × ]    [ 0 0 0 ]
    [ ■ × × ]    [ ■ × × ]

Any matrix can be reduced to echelon form by performing a sequence of elementary row operations. These operations allow the corresponding system of equations to be simplified, without altering the solution. The three available elementary row operations are:

    ERO1: add a multiple of one row to another row.
    ERO2: swap two rows.
    ERO3: multiply a row by a nonzero constant.

Note that if a square matrix is reduced to echelon form, we get a bonus, because the determinant is then simply the product of the entries on the diagonal. Be careful, however. Although ERO1 (remarkably) does not affect the determinant, ERO2 and ERO3 both do. Swapping two rows negates the determinant, while scaling a row scales the determinant by the same factor.
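As a sketch of how the reduction works in practice (a Python illustration, not a production routine), the following function applies ERO1 and ERO2 to drive a matrix to echelon form, choosing the largest available pivot in each column:

```python
import numpy as np

def echelon_form(M):
    """Reduce a copy of M to row-echelon form using ERO1 (add a
    multiple of one row to another) and ERO2 (swap two rows).
    ERO3 is not needed unless leading 1's are wanted."""
    A = M.astype(float).copy()
    m, n = A.shape
    row = 0
    for col in range(n):
        if row == m:
            break
        # ERO2: bring the largest entry in this column up to the pivot row.
        p = row + np.argmax(np.abs(A[row:, col]))
        if np.isclose(A[p, col], 0.0):
            continue  # no pivot available in this column; move right
        A[[row, p]] = A[[p, row]]
        # ERO1: eliminate every entry below the pivot.
        for r in range(row + 1, m):
            A[r] -= (A[r, col] / A[row, col]) * A[row]
        row += 1
    return A

print(echelon_form(np.array([[1, 2, 1],
                             [2, 4, 0],
                             [3, 6, 4]])))
```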

Example

Reduce the following matrix to echelon form. [The 4 × 4 matrix used in the lectures is not reproduced in this transcription.]

Solution

    R2 := R2 - 2R1,  R3 := R3 + R1,  R4 := R4 - 4R1
    swap R2 and R3
    R4 := R4 + 2R3

[The intermediate matrices are not reproduced here.] The result is in echelon form.

Optional extra #1: if we wanted all the leading entries to be 1, we could have scaled the rows en route, but we can also scale them now, dividing each nonzero row through by its leading entry (ERO3).

Optional extra #2: having done this, we could carry on and clear out the columns vertically above each leading 1, working from right to left:

    R1 := R1 - 2R2
    R2 := R2 + 2R3
    R1 := R1 - 2R3

This eventually leads to the reduced row-echelon form.

Reduced row-echelon form is the simplest form that a matrix (or really, the underlying system of equations) can take. It makes back-substitution a breeze, and furthermore it is unique, whereas an ordinary row-echelon form (even one with leading 1's) is not. Then again, we need to work harder to obtain the reduced row-echelon form in the first place. For Matlab enthusiasts, there is a function called rref for computing the reduced row-echelon form of a matrix.
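In Python, SymPy offers an analogous Matrix.rref method, which returns the reduced row-echelon form together with the pivot columns; a minimal sketch:

```python
from sympy import Matrix

A = Matrix([[1, 2, -1],
            [2, 4, 0],
            [3, 6, 4]])

# rref() returns the reduced row-echelon form together with
# the indices of the pivot (leading-entry) columns.
R, pivots = A.rref()
print(R)       # the unique reduced row-echelon form
print(pivots)  # (0, 2): columns 1 and 3 contain the leading 1's
```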

1.4 Rank and nullity

Let us first refresh the notion of linear independence. A set of vectors is linearly independent if no member of the set can be expressed as a linear combination of the other members. Otherwise, the set of vectors is linearly dependent.

When discussing matrices, the terms row rank (= number of linearly independent rows) and column rank (= number of linearly independent columns) are commonly used. Somewhat surprisingly, it turns out that the row rank and the column rank of any matrix are the same, so there is no ambiguity in talking about the rank of a matrix. In summary:

The rank of a matrix A can be defined as either the number of linearly independent rows of A, or the number of linearly independent columns of A. The two definitions are equivalent.

Sometimes the rank of a matrix is obvious from inspection; the three example matrices shown in lectures [not reproduced here] have ranks of 2, 3 and 2. The failsafe technique, however, is to reduce the matrix to echelon form, then count the number of nonzero rows (or equally well, count the number of leading entries on the echelon; both counts will be the same because of the strict definition of echelon form).

We now need to introduce the idea of the kernel (or null space) of a matrix. In Section 1.1 we discussed how an m × n matrix A defines a linear transformation from R^n to R^m. Essentially, the kernel of A is the set of all vectors in R^n that are mapped to the zero vector in R^m. It is denoted ker(A). The dimension of this set, i.e. the number of linearly independent vectors it contains, is called the nullity of A. More formally:

The kernel of a matrix A is the set of all solutions of the homogeneous system Ax = 0. The nullity of a matrix A is the number of linearly independent solutions of the homogeneous system Ax = 0.

20 It is clear that ker(a) always contains the zero vector, 0. If this is the only vector in ker(a), then by convention, nullity(a) = 0. If ker(a) contains vectors other than 0, then nullity(a) > 0. One way to determine nullity(a) is to solve the homogeneous system A = 0, then count the number of linearly independent solutions. We will do this in the net section. There is, however, a neat shortcut based on the rank-nullity theorem: Let A be a matri with n columns. Then rank(a) + nullity(a) = n This means that we can determine the nullity of a matri as soon as it has been reduced to echelon form. 20

Example

Find the rank and nullity of the 3 × 4 matrix A. [The matrix itself is not reproduced in this transcription.]

Solution

    R2 := R2 - 2R1,  R3 := R3 - 3R1
    R3 := R3 - 2R2

There are two nonzero rows, so rank(A) = 2. Since the matrix has n = 4 columns, nullity(A) = n - rank(A) = 4 - 2 = 2.

1.5 Solving Ax = 0

A linear system in which the right-hand side vector is filled with zeros is said to be homogeneous. Suppose we are given a matrix A and asked to find the general solution of Ax = 0. This is exactly the same as being asked to find ker(A). To start with, we can quickly determine nullity(A) as outlined above. If it turns out to be zero, ker(A) contains only the zero vector, and we can declare immediately that Ax = 0 has only the trivial solution x = 0. If nullity(A) is not zero, then ker(A) contains vectors other than 0, and Ax = 0 has infinitely many solutions. To find them, we use reduction to echelon form and a special back-substitution process.

Example

Find the general solution of Ax = 0, where A is a 4 × 5 matrix. [The matrix itself is not reproduced here; its echelon form appears below.]

Solution

    R2 := R2 - 2R1,  R3 := R3 - 4R1,  R4 := R4 + 2R1
    R3 := R3 - 3R2,  R4 := R4 + 3R2

We see that rank(A) = 2 and nullity(A) = 5 - 2 = 3, so the system Ax = 0 has infinitely many solutions. In fact, we could see at the outset that the system was underdetermined (4 × 5), so we knew that nullity(A) had to be at least 5 - 4 = 1. In general, an underdetermined homogeneous system always has infinitely many solutions.

Writing the echelon form of Ax = 0 out in full (rows are equations in x1, ..., x5), we have:

    [ 2  3  2  -1  4 | 0 ]
    [ 0  0  3  -3  9 | 0 ]
    [ 0  0  0   0  0 | 0 ]
    [ 0  0  0   0  0 | 0 ]

The variables are now split into two groups. Columns C1 and C3 contain the leading entries on the echelon. The corresponding variables x1 and x3 are variously referred to as the leading variables, key variables, bound variables or pivot variables. The remaining columns (C2, C4, C5) are associated with the variables x2, x4 and x5. These are called the free variables (or sometimes, the non-leading variables).

To find the general solution, we first assign arbitrary parameters to the free variables, say

    x2 = α,  x4 = β,  x5 = γ

Next we solve for the leading variables in terms of these parameters, working up through the nonzero rows, and from right to left along the echelon. (Note how each of the zero rows automatically has a zero on the right-hand side: it is impossible for a homogeneous system to be inconsistent.) We first solve row R2 for x3,

    3x3 - 3β + 9γ = 0    so    x3 = β - 3γ

then solve row R1 for x1:

    2x1 + 3α + 2(β - 3γ) - β + 4γ = 0    so    x1 = -(3/2)α - (1/2)β + γ

The general solution can now be assembled, then (optionally) expressed as a linear combination of vectors, one for each arbitrary parameter:

    x = [x1 x2 x3 x4 x5]^T
      = α[-3/2  1  0  0  0]^T + β[-1/2  0  1  1  0]^T + γ[1  0  -3  0  1]^T    (1.3)

This is the desired general solution of Ax = 0; it is also ker(A). We can easily read off a set of linearly independent basis vectors for the kernel:

    [-3/2  1  0  0  0]^T,  [-1/2  0  1  1  0]^T,  [1  0  -3  0  1]^T
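In Python, scipy.linalg.null_space returns an orthonormal basis for ker(A) directly. The sketch below uses a hypothetical 4 × 5 matrix (not the one from the worked example) whose rank is 2:

```python
import numpy as np
from scipy.linalg import null_space

# A made-up underdetermined matrix: rows 3 and 4 are combinations
# of rows 1 and 2, so rank = 2 and nullity = 3.
A = np.array([[1.0, 2.0, 0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0, 2.0, 1.0],
              [1.0, 2.0, 1.0, 3.0, 4.0],
              [2.0, 4.0, 1.0, 4.0, 7.0]])

K = null_space(A)             # columns form an orthonormal basis of ker(A)
print(K.shape[1])             # nullity of A (here 3)
print(np.allclose(A @ K, 0))  # every basis vector satisfies Ax = 0
```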

In this instance, ker(A) is seen to be a 3-dimensional subspace of R^5. Hence nullity(A), which is just the dimension of ker(A), is equal to 3. This agrees with the initial prediction of nullity(A) that was made using the rank-nullity theorem. It is good practice to check that each of the kernel basis vectors does actually satisfy Ax = 0, and this can readily be verified. It follows that any linear combination of the basis vectors, as in eqn (1.3), must also satisfy Ax = 0.

1.6 Solving Ax = b

With a homogeneous linear system Ax = 0, the answer to the question of solution existence is always affirmative, since (as discussed above) there will always be at least the trivial solution x = 0. With a non-homogeneous system Ax = b, the answer to this question is no longer automatic: there is now a possibility that the system will have no solution. As usual, reduction to echelon form (this time performing elementary row operations on the augmented coefficient matrix [A b]) provides clarity. Suppose we have a 4 × 5 system and end up with something like

    [ ■  ×  ×  ×  × | × ]
    [ 0  ■  ×  ×  × | × ]
    [ 0  0  0  0  0 | ■ ]
    [ 0  0  0  0  0 | 0 ]

where ■ marks a nonzero entry; the nonzero entry in the right-hand column of the third row is the critical one.

We can immediately infer that the original system Ax = b is inconsistent, i.e. it has no solution. We can interpret the offending entry in two different ways, using the terminology introduced in Section 1.2. Taking the row-oriented view, the emergence of a contradictory equation (0 = nonzero) indicates that the 4 hyperplanes in R^5 have no point in common. Taking the column-oriented view, the fact that [A b] and A have different column ranks (3 and 2 respectively) indicates that the vector b in R^4 cannot be expressed as a linear combination of the 5 columns of A.

On the other hand, suppose we reduce a general m × n system Ax = b to echelon form, and find that the trailing rows (if any) all look like

    [ 0  0  ...  0 | 0 ]

We can then infer that the original system Ax = b is consistent, i.e. it has at least one solution. Taking the row-oriented view, the absence of a contradictory equation indicates that the m hyperplanes in R^n have at least one point in common. Taking the column-oriented view, the fact that [A b] and A have the same column rank indicates that the vector b in R^m can be expressed in at least one way as a linear combination of the n columns of A.

If the above analysis shows that there is at least one solution, we turn to the question of uniqueness: is there just one solution, or a whole family of solutions? The answer for Ax = b, just as for Ax = 0, is determined by the nullity of A. If nullity(A) = 0, the solution is unique, and it can be found by standard back-substitution (leading variables only, no free variables). If nullity(A) > 0, the solution involves a corresponding number of arbitrary parameters, and its general form has to be worked out by special back-substitution (separate treatment of leading and free variables).

Our last example will show how to solve a non-homogeneous system Ax = b that has infinitely many solutions. To illustrate some important points, we will retain the same A matrix that was used to demonstrate the solution of Ax = 0 in Section 1.5.

Example

Find the general solution of Ax = b, where A is the same 4 × 5 matrix as in the example of Section 1.5. [As before, A and b themselves are not reproduced here; the reduced system appears below.]

Solution

    R2 := R2 - 2R1,  R3 := R3 - 4R1,  R4 := R4 + 2R1
    R3 := R3 - 3R2,  R4 := R4 + 3R2

giving the echelon form

    [ 2  3  2  -1  4 |  7 ]
    [ 0  0  3  -3  9 | 12 ]
    [ 0  0  0   0  0 |  0 ]
    [ 0  0  0   0  0 |  0 ]

Since rank([A b]) = rank(A) = 2, the system is consistent: there is at least one solution. Furthermore, since nullity(A) = 5 - 2 = 3, we know that the general solution will involve 3 arbitrary parameters, so in fact the system has infinitely many solutions.

Proceeding just as in Section 1.5, we first assign arbitrary parameters to the free variables, say

    x2 = α,  x4 = β,  x5 = γ

Next we solve for the leading variables, remembering to incorporate the right-hand side entries (this time they are not zero!). Solving row R2 for x3 gives

    3x3 - 3β + 9γ = 12    so    x3 = 4 + β - 3γ

and solving row R1 for x1 gives

    2x1 + 3α + 2(4 + β - 3γ) - β + 4γ = 7    so    x1 = -1/2 - (3/2)α - (1/2)β + γ

The general solution can now be assembled, then unpacked as before:

    x = [-1/2  0  4  0  0]^T + α[-3/2  1  0  0  0]^T + β[-1/2  0  1  1  0]^T + γ[1  0  -3  0  1]^T    (1.4)
          (particular, xp)    (---------------------- ker(A) ----------------------)

We recognize the second part of the solution as ker(A): it is identical to the solution of the homogeneous system Ax = 0 that we obtained in eqn (1.3). But we now have an additional term: a particular solution xp that (we hope!) satisfies Ax = b. It is easy to check, by substituting xp back into the original equations, that it does.

Of course this is not the only possible choice for xp. The general solution contains three arbitrary parameters, and we can pick any values we like for those.

For example, setting α = 1, β = 0 and γ = 1 in eqn (1.4), we get another solution that also satisfies Ax = b,

    x = [-1  1  1  0  1]^T

and this could therefore serve just as well as the particular solution xp. An infinite number of other (equally valid) choices could be made. Note, however, that any expression for the general solution of Ax = b must always include the whole of the kernel:

    x = xp + ker(A)
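This structure (particular solution plus kernel) can be demonstrated numerically. The sketch below, using the same hypothetical 4 × 5 matrix as earlier with a consistent right-hand side, obtains a particular solution with np.linalg.lstsq and then adds an arbitrary kernel combination:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0, 2.0, 1.0],
              [1.0, 2.0, 1.0, 3.0, 4.0],
              [2.0, 4.0, 1.0, 4.0, 7.0]])
# Choose b in the column space of A so the system is consistent.
b = A @ np.array([1.0, -1.0, 2.0, 0.0, 1.0])

# One particular solution (lstsq returns the minimum-norm one).
xp, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(A @ xp, b))   # True: xp satisfies Ax = b

# The general solution is xp plus any combination of kernel vectors.
K = null_space(A)                          # nullity is 3 here
x = xp + K @ np.array([2.0, -3.0, 0.5])    # arbitrary parameters
print(np.allclose(A @ x, b))    # True: still a solution
```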

In closing it is worth recalling the first form of the general solution in eqn (1.4), and reflecting on the significance of the rank and nullity of the coefficient matrix A.

In this example nullity(A) is 3, which corresponds to the number of free variables (x2, x4, x5) that can be assigned arbitrary values. Also, rank(A) is 2, which corresponds to the number of leading variables (x1, x3) that are bound to the free variables in certain prescribed ratios.

In general, the significance of the nullity of A is that it tells us the number of degrees of freedom in the general solution of a linear system Ax = b (or indeed Ax = 0). The rank of A is a complementary measure: it tells us the number of constraints imposed on the general solution. If the nullity is zero, the solution has no degrees of freedom, therefore that solution is unique. If the nullity is greater than zero, the solution has a corresponding number of degrees of freedom, and consequently the system has infinitely many solutions.

[Flowchart for Ax = b (matrix A is m × n): reduce [A b] to echelon form, check the trailing rows for consistency, then use nullity(A) to decide between a unique solution and a family of solutions. The flowchart itself is not reproduced in this transcription.]

Example (A1 Q7)

A set of linear equations is given by Ax = b. [The matrix A and vector b are not reproduced in this transcription.]

(a) Define the rank, nullity and kernel of a matrix.
(b) Reduce the augmented matrix [A b] to echelon form and hence find the rank, nullity and kernel of the matrix A.
(c) Confirm that solutions to Ax = b exist, and find the general solution for x.

Topic 2
Norms and conditioning

2.1 Vector norms

When we talk about the size or magnitude of a real number, we mean its absolute value. Although the number -6.2 is algebraically smaller than the number 2.8, most of us would agree that -6.2 is "bigger" than 2.8, in the sense that it lies further away from zero. The idea can be expressed more formally: the length of the vector running from the origin to -6.2 is greater than that of the vector running from the origin to 2.8.

What about higher dimensions? We know how to calculate the length of vectors in R^2 and R^3 using Pythagoras's theorem, and there is a very natural extension to R^n:

    length of [x1]^T           = sqrt(x1^2)
    length of [x1 x2]^T        = sqrt(x1^2 + x2^2)
    ...
    length of [x1 x2 ... xn]^T = sqrt(x1^2 + x2^2 + ... + xn^2)

The general definition also works perfectly well when n = 1, since

    length of [x1]^T = sqrt(x1^2) = |x1|

Going back to the example in the opening paragraph, we get the desired outcome: the length of the 1-component vector [-6.2] is greater than that of [2.8].

We are very used to calculating the length of a vector in this way, but in fact a whole family of length measures can be defined as follows:

    length of [x1 x2]^T        = (|x1|^p + |x2|^p)^(1/p)
    ...
    length of [x1 x2 ... xn]^T = (|x1|^p + |x2|^p + ... + |xn|^p)^(1/p)

This generalized measure of length is called the p-norm (or Lp norm). To denote the p-norm of a vector x, we use the special notation ||x||_p. The double bars deliberately echo the absolute-value notation, and the similarity is of course intentional: taking the norm of a vector gives us a scalar measure of its magnitude, and is thus analogous to taking the absolute value of a real (or complex) number.

The p-norm can be defined compactly as follows.

Let x be a vector and p (≥ 1) be a scalar. The p-norm of x is

    ||x||_p = ( Σ_i |x_i|^p )^(1/p)

It is not essential that p be an integer, but in practice it invariably is. In fact, there are only a few norms in common use, namely the ones that are easy to calculate: those for p = 1, p = 2 and p = ∞.

The Pythagorean measure of length that we are so accustomed to is just the p-norm with p = 2. It is commonly referred to as the Euclidean norm, or the Euclidean metric (note that in this context, metric means "measure of distance"; nothing to do with SI units). The 2-norm can also be expressed in a compact vectorial form, since it is the square root of the dot product of x with itself:

    ||x||_2 = sqrt(x . x) = sqrt(x^T x)

It is often simpler (e.g. when minimizing the length of a vector, see Q7 of the tutorial sheet) to work with the square of the 2-norm:

    ||x||_2^2 = x . x = x^T x

If we set p = 1 in the general definition, we get

    ||x||_1 = Σ_i |x_i|

i.e. the sum of the absolute values of the components. The 1-norm is sometimes referred to as the taxicab or Manhattan norm, since a hypothetical taxi driving from [0 0]^T to a destination with coordinates [x1 x2]^T has to cover a distance of at least |x1| + |x2| if it is constrained to drive on a rectangular grid of streets in R^2.

Finally, if we let p approach infinity in the general definition, we get

    ||x||_∞ = max_i |x_i|

i.e. the supremum of the absolute values of the components. You are asked to justify this in Q3 of the tutorial sheet.

It is also possible to come up with vector norms other than the p-norm, but any proposed norm must possess certain common-sense properties in order to qualify as a meaningful measure of length. These are similar to the properties of the absolute value function.

Specifically,

    (i)   ||x|| = 0 if and only if x = 0
    (ii)  ||αx|| = |α| ||x||
    (iii) ||x + y|| ≤ ||x|| + ||y||    (the "triangle inequality")

It can be shown that these basic properties imply another common-sense property that we would expect any norm to exhibit:

    ||x|| ≥ 0 for all x

For example, Σ_i |x_i| is a valid vector norm, but Σ_i x_i is not, since the sum of the terms in a vector could be negative.

A good way to understand vector norms is to study how the "unit circle" in R^2 looks under the three common p-norms:

[Figure: the unit circles ||x||_1 = 1, ||x||_2 = 1 and ||x||_∞ = 1 in R^2: a diamond, a circle and a square respectively.]

Example

Find the 1-, 2-, 5- and ∞-norms of the vector x = [3 -5 4 -1]^T.

Solution

    ||x||_1 = |3| + |-5| + |4| + |-1| = 13
    ||x||_2 = (3^2 + (-5)^2 + 4^2 + (-1)^2)^(1/2) = sqrt(51) = 7.14
    ||x||_5 = (|3|^5 + |-5|^5 + |4|^5 + |-1|^5)^(1/5) = 4393^(1/5) = 5.35
    ||x||_∞ = max(|3|, |-5|, |4|, |-1|) = 5
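The same norms can be checked with NumPy (np.linalg.norm accepts any order p for vectors):

```python
import numpy as np

x = np.array([3.0, -5.0, 4.0, -1.0])

print(np.linalg.norm(x, 1))       # 13.0
print(np.linalg.norm(x, 2))       # 7.1414... = sqrt(51)
print(np.linalg.norm(x, 5))       # 5.355... = 4393**(1/5)
print(np.linalg.norm(x, np.inf))  # 5.0
```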

2.2 Matrix norms

Can we extend the notion of a norm from vectors to matrices? Intuition suggests that a matrix with large entries must somehow be "bigger" than a matrix with small entries. One idea comes quickly to mind. We could place all the entries of the matrix into a single vector, then apply the vector p-norm introduced above. If we took p = 2, this would be equivalent to squaring each entry of the matrix, summing, and taking the square root. This is indeed a commonly used matrix norm, known as the Frobenius norm:

    ||A||_Fro = ( Σ_i Σ_j A_ij^2 )^(1/2)

Another approach, and one that turns out to be much more useful, is to think about the action of a matrix on an arbitrary vector. As we noted in the first lecture, an m × n matrix A can be viewed as a linear operator that maps a vector x in R^n to a vector Ax in R^m. If we take the norm of the output vector Ax and divide it by the norm of the

input vector x, we should get some clue as to the "magnifying power" of the matrix A (and thus its inherent size or magnitude). We are therefore motivated to calculate the ratio

    ||Ax||_p / ||x||_p

assuming of course that x is not the zero vector. The trouble is that, unless A happens to be the identity matrix (or some multiple thereof), this ratio will depend on the direction of the input vector x. To use a very simple example, consider the diagonal matrix

    A = [ 5  0 ]
        [ 0  2 ]

The vector x = [1 0]^T is mapped to Ax = [5 0]^T (magnification = 5), but the vector [0 1]^T is mapped to [0 2]^T (magnification = 2). Vectors in other directions undergo intermediate levels of magnification. This can be clarified by looking at the image of the unit circle under the action of A. It is mapped to an ellipse with semi-axes of length 5 and 2:

[Figure: the unit circle and its image under A, an ellipse with semi-axes 5 (horizontal) and 2 (vertical).]

Given that a general matrix A is likely to exert widely varying magnifying powers on different input vectors, we had better be conservative and define the matrix p-norm of A as

    ||A||_p = max_{x ≠ 0} ( ||Ax||_p / ||x||_p )

With this definition, the matrix p-norm provides an upper bound on the magnifying power of A (as measured by applying the vector p-norm to the input vector x and output vector Ax). It means that we are assured of satisfying the following important inequality for any vector x, regardless of its direction.

Let A be a matrix with n columns, and let x be a vector in R^n. If the inequality

    ||Ax|| ≤ ||A|| ||x||

is satisfied for all vectors x in R^n, then the matrix norm ||A|| is said to be compatible with, or induced from, the vector norm used as a metric for x and Ax.

There is no general formula for calculating the p-norm of a matrix A in terms of p and the entries A_ij. For selected values of p, however, it is possible to work out formulae for ||A||_p that guarantee the satisfaction of the above compatibility inequality for an arbitrary vector x.

For p = 1, the compatible matrix norm is

    ||A||_1 = max_j Σ_i |A_ij|

which is the maximum absolute column sum of A (remember that i counts down the rows, and j counts across the columns). We could just take this on trust, but it would be nice to verify that the inequality

    ||Ax||_1 ≤ ||A||_1 ||x||_1

is actually satisfied for any vector x. The proof requires some messy fiddling with summations and absolute values. The indices i and j are understood to run from 1 to m and 1 to n respectively.

    ||Ax||_1 = Σ_i | Σ_j A_ij x_j |
             ≤ Σ_i Σ_j |A_ij| |x_j|                  (*)
             = Σ_j |x_j| Σ_i |A_ij|
             ≤ Σ_j |x_j| ( max_j Σ_i |A_ij| )
             = ||A||_1 Σ_j |x_j|
             = ||A||_1 ||x||_1

The justification for the step marked (*) is the scalar version of the triangle inequality mentioned above: |α + β| ≤ |α| + |β|.

For p = ∞, the compatible matrix norm is

    ||A||_∞ = max_i Σ_j |A_ij|

which is the maximum absolute row sum of A. To verify the validity of this definition, we need to show that the inequality

    ||Ax||_∞ ≤ ||A||_∞ ||x||_∞

is certain to be satisfied for any vector x. This time the proof is a little shorter, but still requires careful study to follow the reasoning. You might find it helpful to make up a few small but concrete test cases (e.g. with A = 2 × 3 and x = 3 × 1) and trace what happens at each step. Make sure you include a few negative terms in both A and x!

    ||Ax||_∞ = max_i | Σ_j A_ij x_j |
             ≤ max_i Σ_j |A_ij| |x_j|                (*)
             ≤ max_i Σ_j |A_ij| ( max_j |x_j| )
             = ( max_i Σ_j |A_ij| ) ||x||_∞
             = ||A||_∞ ||x||_∞

Again, note the use of the triangle inequality to reach line (*).

The matrix 2-norm is widely used, but can be difficult to calculate. If A happens to be square and symmetric, the 2-norm is the supremum of the absolute values of the eigenvalues:

    ||A||_2 = max_i |λ_i|

(You might like to think about the unit circle to ellipse mapping discussed above.)

This is compatible with the 2-norm for vectors, i.e. it ensures ||Ax||_2 ≤ ||A||_2 ||x||_2 for any vector x. More generally, the 2-norm of A turns out to be its largest singular value (this is beyond the scope of the present course).

Example

Find the 1-norm, ∞-norm and Frobenius norm of a given 3 × 3 matrix A. [The matrix is not reproduced here; its absolute column sums are 12, 11 and 13, and its absolute row sums are 11, 17 and 8.]

Solution

The 1-norm is the maximum absolute column sum:

    ||A||_1 = max(12, 11, 13) = 13

The ∞-norm is the maximum absolute row sum:

    ||A||_∞ = max(11, 17, 8) = 17

The Frobenius norm is the square root of the sum of the squares of all nine entries.
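NumPy computes all three matrix norms directly; the matrix below is a made-up stand-in for the one used in lectures:

```python
import numpy as np

A = np.array([[ 3.0, -4.0,  2.0],
              [-5.0,  6.0, -7.0],
              [ 1.0,  2.0,  8.0]])

print(np.linalg.norm(A, 1))       # max absolute column sum: max(9, 12, 17) = 17
print(np.linalg.norm(A, np.inf))  # max absolute row sum: max(9, 18, 11) = 18
print(np.linalg.norm(A, 'fro'))   # sqrt of the sum of squared entries
```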

Example

Find the 2-norm of the matrix

    A = [ 1   2 ]
        [ 2  -3 ]

Solution

The matrix is square and symmetric, so its 2-norm can be calculated from the eigenvalues.

    det [ 1-λ    2  ]
        [  2   -3-λ ] = 0

    (1 - λ)(-3 - λ) - 4 = 0
    λ^2 + 2λ - 7 = 0

Solving the quadratic gives λ = -1 ± sqrt(8) = 1.83, -3.83. Hence

    ||A||_2 = max(|1.83|, |-3.83|) = 3.83
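A quick numerical cross-check with NumPy (eigvalsh handles the symmetric eigenvalue problem, and norm(A, 2) computes the 2-norm via singular values):

```python
import numpy as np

A = np.array([[1.0,  2.0],
              [2.0, -3.0]])

# For a symmetric matrix, the 2-norm is the largest |eigenvalue| ...
lam = np.linalg.eigvalsh(A)   # eigenvalues of a symmetric matrix
print(np.max(np.abs(lam)))    # 3.828... = 1 + sqrt(8)
# ... which agrees with the largest singular value in general:
print(np.linalg.norm(A, 2))   # same value
```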

Example

Given a 2 × 2 matrix A and the vector x = [0.3 -0.2]^T [the matrix entries are not reproduced here], investigate the satisfaction of the compatibility inequality

    ||Ax||_p ≤ ||A||_q ||x||_p

when p and q take the values 1 and ∞ in all combinations.

Solution

Computing Ax and taking norms, we have

    ||x||_1 = 0.5,    ||x||_∞ = 0.3       (vector norms of x)
    ||A||_1 = 2.0,    ||A||_∞ = 3.2       (matrix norms of A)
    ||Ax||_1 = 0.91,  ||Ax||_∞ = 0.82     (vector norms of Ax)

Combination     ||Ax||_p ≤ ||A||_q ||x||_p     Inequality satisfied?
p = 1, q = 1    0.91 ≤ 2.0 × 0.5 = 1.0         YES (expected)
p = ∞, q = ∞    0.82 ≤ 3.2 × 0.3 = 0.96        YES (expected)
p = 1, q = ∞    0.91 ≤ 3.2 × 0.5 = 1.6         YES (lucky)
p = ∞, q = 1    0.82 ≤ 2.0 × 0.3 = 0.6         NO

Moral: don't mix and match incompatible matrix and vector norms!
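The pattern is easy to explore numerically. The sketch below uses a deliberately lopsided made-up matrix and vector (not the ones from the example) so that one mismatched combination visibly fails:

```python
import numpy as np

A = np.array([[3.0, 3.0],
              [0.0, 0.0]])
x = np.array([1.0, 1.0])
Ax = A @ x

for p in (1, np.inf):        # vector norm applied to x and Ax
    for q in (1, np.inf):    # matrix norm applied to A
        lhs = np.linalg.norm(Ax, p)
        rhs = np.linalg.norm(A, q) * np.linalg.norm(x, p)
        print(f"p={p}, q={q}: {lhs} <= {rhs}? {lhs <= rhs}")
# Only the matched combinations (p = q) are guaranteed to hold;
# here p=inf, q=1 gives 6.0 <= 3.0, which is False.
```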

2.3 Conditioning of linear systems

Consider the 2 × 2 system

    [ 1   1 ] [ x1 ]   [ 2 ]
    [ 1  -1 ] [ x2 ] = [ 0 ]

which has the solution

    x = [1  1]^T

If we perturb one of the terms in the matrix by 1%,

    [ 1    1   ] [ x1 ]   [ 2 ]
    [ 1  -1.01 ] [ x2 ] = [ 0 ]

the solution changes to

    x = [1.005  0.995]^T

This seems reasonable: both components of x have changed by 0.5%, which is of the same order of magnitude as the perturbation.

Similarly, if we perturb the first term of the right-hand side vector by 1%,

    [ 1   1 ] [ x1 ]   [ 2.02 ]
    [ 1  -1 ] [ x2 ] = [  0   ]

the solution becomes

    x = [1.01  1.01]^T

which is again in line with our intuitive expectation. This type of nicely behaved linear system is said to be well-conditioned.

Now let's look at another 2 × 2 system

    [ 1     1    ] [ x1 ]   [ 2 ]
    [ 0.99  1.01 ] [ x2 ] = [ 2 ]

which also happens to have the solution

    x = [1  1]^T

When we perturb one of the terms in the matrix by 1%,

    [ 1     1.01 ] [ x1 ]   [ 2 ]
    [ 0.99  1.01 ] [ x2 ] = [ 2 ]

we now get a drastic change in the solution:

    x = [0  1.98]^T

When we perturb the first term of the right-hand side vector by 1%,

    [ 1     1    ] [ x1 ]   [ 2.02 ]
    [ 0.99  1.01 ] [ x2 ] = [  2   ]

the solution is wildly different again:

    x = [2.01  0.01]^T

This behaviour is surprising, and clearly undesirable: the solution is highly sensitive to small perturbations in the problem data A and b (such as those which might arise from noisy data, imperfect measurements, or numerical roundoff errors during a computation). This type of badly behaved linear system is said to be ill-conditioned.

We can gain some insight into what is happening if we plot the two base-case situations (well-conditioned on the left, ill-conditioned on the right):

[Figure: the two lines of the well-conditioned system crossing at a healthy angle at (1, 1); the two lines of the ill-conditioned system nearly coincident, also passing through (1, 1).]

The reason for the ill-conditioned behaviour is now clear. The lines are so close to parallel that even small changes in slope and/or intercept cause the intersection point to move far away from the base-case solution point (1, 1).

If the lines were parallel, the determinant of the coefficient matrix would be zero, which might prompt the idea of using det(A) as an indicator of possible conditioning trouble. The ill-conditioned system above has

    A = [ 1     1    ]
        [ 0.99  1.01 ]

such that det(A) = 0.02, which seems close to zero. Unfortunately, the determinant is useless for predicting ill-conditioning, because a simple rescaling of the problem leads to a change in the determinant. If we multiply the equations of the ill-conditioned system by 10, we get

    [ 10   10   ] [ x1 ]   [ 20 ]
    [ 9.9  10.1 ] [ x2 ] = [ 20 ]

such that det(A) = 2. The matrix appears to be much less "singular", but the solution remains just as sensitive to 1% (or other small) perturbations of the entries in A and/or b.

Fortunately, there is a better approach for quantifying the sensitivity of a linear system, thus allowing ill-conditioned behaviour to be predicted in advance (i.e. without resorting to trial-and-error perturbations of the sort we imposed above). The perturbations to A and b are characterized by the relative error ratios

    ||δA|| / ||A||    and    ||δb|| / ||b||

We would like to be able to predict the effect on the solution, via the corresponding relative error ratio

    ||δx|| / ||x||

Any convenient norm can be used, but it is essential that the matrix norm and vector norm be compatible in the sense defined previously.

Suppose we have a linear system defined by a matrix A, a right-hand side vector b, and an exact solution x, such that

    Ax = b

The matrix A is assumed to be square and invertible. If the matrix is perturbed from A to A + δA, the solution will be perturbed from x to x + δx, such that

    (A + δA)(x + δx) = b

Subtracting Ax from the left and b from the right,

    A δx + δA (x + δx) = 0

Premultiplying by A^-1 and rearranging,

    δx + A^-1 δA (x + δx) = 0
    δx = -A^-1 δA (x + δx)

Taking norms and applying the compatibility inequality twice,

    ||δx|| = ||A^-1 δA (x + δx)||
           ≤ ||A^-1|| ||δA (x + δx)||
           ≤ ||A^-1|| ||δA|| ||x + δx||

Hence

    ||δx|| / ||x + δx|| ≤ ||A^-1|| ||δA||

or equivalently

    ||δx|| / ||x + δx|| ≤ ( ||A|| ||A^-1|| ) ( ||δA|| / ||A|| )    (2.1)

Similarly, if the right-hand side vector is perturbed from b to b + δb, the solution will be perturbed from x to x + δx, such that

    A(x + δx) = b + δb

Subtracting Ax from the left and b from the right,

    A δx = δb

Premultiplying by A^-1,

    δx = A^-1 δb

Taking norms and applying the compatibility inequality,

    ||δx|| = ||A^-1 δb|| ≤ ||A^-1|| ||δb||

Multiplying by ||b|| on the left-hand side and by ||Ax|| (= ||b||) on the right-hand side,

    ||δx|| ||b|| ≤ ||A^-1|| ||δb|| ||Ax||

Applying the compatibility inequality once more,

    ||δx|| ||b|| ≤ ||A^-1|| ||δb|| ||A|| ||x||

Rearranging,

    ||δx|| / ||x|| ≤ ( ||A|| ||A^-1|| ) ( ||δb|| / ||b|| )    (2.2)

Remarkably, the bracketed term ||A|| ||A^-1|| appears in both eqn (2.1) and eqn (2.2). It is called the condition number, and is usually denoted κ.

Let A be a square, invertible matrix. The condition number of A is

    κ(A) = ||A|| ||A^-1||

For a linear system Ax = b, the condition number provides an upper bound on the relative error in x caused by relative errors in A and b:

    ||δx|| / ||x + δx|| ≤ κ(A) ( ||δA|| / ||A|| )
    ||δx|| / ||x||      ≤ κ(A) ( ||δb|| / ||b|| )
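NumPy's np.linalg.cond computes κ(A) in a chosen norm directly; applied to the two systems of this section:

```python
import numpy as np

A_good = np.array([[1.0, 1.0],
                   [1.0, -1.0]])
A_bad = np.array([[1.00, 1.00],
                  [0.99, 1.01]])

# Condition number in the infinity norm: ||A|| * ||inv(A)||.
print(np.linalg.cond(A_good, np.inf))  # 2.0
print(np.linalg.cond(A_bad, np.inf))   # 201.0
```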

Example

Find the condition numbers for the two systems considered at the start of this section. Also, for the ill-conditioned system, verify that the actual relative errors caused by the perturbations in A and b are within the bounds predicted by the condition number. Use the infinity norm.

Solution

For the well-conditioned system,

    A = [ 1   1 ]        A^-1 = [ 0.5   0.5 ]
        [ 1  -1 ]               [ 0.5  -0.5 ]

hence

    κ(A) = ||A||_∞ ||A^-1||_∞ = 2 × 1 = 2    (which is low)

For the ill-conditioned system,

    A = [ 1     1    ]        A^-1 = [  50.5  -50 ]
        [ 0.99  1.01 ]               [ -49.5   50 ]

hence

    κ(A) = ||A||_∞ ||A^-1||_∞ = 2 × 100.5 = 201    (which is high)

The matrix was changed from

    A = [ 1     1    ]    to    A + δA = [ 1     1.01 ]    such that    δA = [ 0  0.01 ]
        [ 0.99  1.01 ]                   [ 0.99  1.01 ]                      [ 0  0    ]

This caused the solution to change from

    x = [1  1]^T    to    x + δx = [0  1.98]^T    such that    δx = [-1  0.98]^T

We therefore expect

    ||δx||_∞ / ||x + δx||_∞ ≤ κ(A) ( ||δA||_∞ / ||A||_∞ )
    1 / 1.98 = 0.51 ≤ 201 × (0.01 / 2) = 1.005

which checks out. The right-hand side was changed from

    b = [2  2]^T    to    b + δb = [2.02  2]^T    such that    δb = [0.02  0]^T

This caused the solution to change from

    x = [1  1]^T    to    x + δx = [2.01  0.01]^T

such that

    δx = [1.01  -0.99]^T

We therefore expect

    ||δx||_∞ / ||x||_∞ ≤ κ(A) ( ||δb||_∞ / ||b||_∞ )
    1.01 / 1 = 1.01 ≤ 201 × (0.02 / 2) = 2.01

which again checks out.
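The whole verification can be scripted; a sketch using NumPy and the ill-conditioned system above:

```python
import numpy as np

A = np.array([[1.00, 1.00],
              [0.99, 1.01]])
b = np.array([2.0, 2.0])
x = np.linalg.solve(A, b)                 # [1, 1]

kappa = np.linalg.cond(A, np.inf)         # 201

# Perturb one matrix entry by 1% and re-solve.
dA = np.array([[0.0, 0.01], [0.0, 0.0]])
x1 = np.linalg.solve(A + dA, b)           # [0, 1.98]
lhs = np.linalg.norm(x1 - x, np.inf) / np.linalg.norm(x1, np.inf)
rhs = kappa * np.linalg.norm(dA, np.inf) / np.linalg.norm(A, np.inf)
print(lhs <= rhs)                         # True: bound (2.1) holds

# Perturb the right-hand side by 1% and re-solve.
db = np.array([0.02, 0.0])
x2 = np.linalg.solve(A, b + db)           # [2.01, 0.01]
lhs = np.linalg.norm(x2 - x, np.inf) / np.linalg.norm(x, np.inf)
rhs = kappa * np.linalg.norm(db, np.inf) / np.linalg.norm(b, np.inf)
print(lhs <= rhs)                         # True: bound (2.2) holds
```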


More information

33A Linear Algebra and Applications: Practice Final Exam - Solutions

33A Linear Algebra and Applications: Practice Final Exam - Solutions 33A Linear Algebra and Applications: Practice Final Eam - Solutions Question Consider a plane V in R 3 with a basis given by v = and v =. Suppose, y are both in V. (a) [3 points] If [ ] B =, find. (b)

More information

Rectangular Systems and Echelon Forms

Rectangular Systems and Echelon Forms CHAPTER 2 Rectangular Systems and Echelon Forms 2.1 ROW ECHELON FORM AND RANK We are now ready to analyze more general linear systems consisting of m linear equations involving n unknowns a 11 x 1 + a

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Chapter 1: Linear Equations

Chapter 1: Linear Equations Chapter : Linear Equations (Last Updated: September, 6) The material for these notes is derived primarily from Linear Algebra and its applications by David Lay (4ed).. Systems of Linear Equations Before

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

k is a product of elementary matrices.

k is a product of elementary matrices. Mathematics, Spring Lecture (Wilson) Final Eam May, ANSWERS Problem (5 points) (a) There are three kinds of elementary row operations and associated elementary matrices. Describe what each kind of operation

More information

12. Perturbed Matrices

12. Perturbed Matrices MAT334 : Applied Linear Algebra Mike Newman, winter 208 2. Perturbed Matrices motivation We want to solve a system Ax = b in a context where A and b are not known exactly. There might be experimental errors,

More information

(Refer Slide Time: 1:13)

(Refer Slide Time: 1:13) Linear Algebra By Professor K. C. Sivakumar Department of Mathematics Indian Institute of Technology, Madras Lecture 6 Elementary Matrices, Homogeneous Equaions and Non-homogeneous Equations See the next

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information

PROBLEMS In each of Problems 1 through 12:

PROBLEMS In each of Problems 1 through 12: 6.5 Impulse Functions 33 which is the formal solution of the given problem. It is also possible to write y in the form 0, t < 5, y = 5 e (t 5/ sin 5 (t 5, t 5. ( The graph of Eq. ( is shown in Figure 6.5.3.

More information

LINEAR ALGEBRA: THEORY. Version: August 12,

LINEAR ALGEBRA: THEORY. Version: August 12, LINEAR ALGEBRA: THEORY. Version: August 12, 2000 13 2 Basic concepts We will assume that the following concepts are known: Vector, column vector, row vector, transpose. Recall that x is a column vector,

More information

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether

More information

Chapter 4 & 5: Vector Spaces & Linear Transformations

Chapter 4 & 5: Vector Spaces & Linear Transformations Chapter 4 & 5: Vector Spaces & Linear Transformations Philip Gressman University of Pennsylvania Philip Gressman Math 240 002 2014C: Chapters 4 & 5 1 / 40 Objective The purpose of Chapter 4 is to think

More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information Introduction Consider a linear system y = Φx where Φ can be taken as an m n matrix acting on Euclidean space or more generally, a linear operator on a Hilbert space. We call the vector x a signal or input,

More information

Designing Information Devices and Systems I Fall 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way

Designing Information Devices and Systems I Fall 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way EECS 16A Designing Information Devices and Systems I Fall 018 Lecture Notes Note 1 1.1 Introduction to Linear Algebra the EECS Way In this note, we will teach the basics of linear algebra and relate it

More information

The Gauss-Jordan Elimination Algorithm

The Gauss-Jordan Elimination Algorithm The Gauss-Jordan Elimination Algorithm Solving Systems of Real Linear Equations A. Havens Department of Mathematics University of Massachusetts, Amherst January 24, 2018 Outline 1 Definitions Echelon Forms

More information

Linear Algebra 1 Exam 1 Solutions 6/12/3

Linear Algebra 1 Exam 1 Solutions 6/12/3 Linear Algebra 1 Exam 1 Solutions 6/12/3 Question 1 Consider the linear system in the variables (x, y, z, t, u), given by the following matrix, in echelon form: 1 2 1 3 1 2 0 1 1 3 1 4 0 0 0 1 2 3 Reduce

More information

Introduction. So, why did I even bother to write this?

Introduction. So, why did I even bother to write this? Introduction This review was originally written for my Calculus I class, but it should be accessible to anyone needing a review in some basic algebra and trig topics. The review contains the occasional

More information

1 Last time: determinants

1 Last time: determinants 1 Last time: determinants Let n be a positive integer If A is an n n matrix, then its determinant is the number det A = Π(X, A)( 1) inv(x) X S n where S n is the set of n n permutation matrices Π(X, A)

More information

Slope Fields: Graphing Solutions Without the Solutions

Slope Fields: Graphing Solutions Without the Solutions 8 Slope Fields: Graphing Solutions Without the Solutions Up to now, our efforts have been directed mainly towards finding formulas or equations describing solutions to given differential equations. Then,

More information

Contents. 2.1 Vectors in R n. Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v. 2.50) 2 Vector Spaces

Contents. 2.1 Vectors in R n. Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v. 2.50) 2 Vector Spaces Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v 250) Contents 2 Vector Spaces 1 21 Vectors in R n 1 22 The Formal Denition of a Vector Space 4 23 Subspaces 6 24 Linear Combinations and

More information

Unit 2, Section 3: Linear Combinations, Spanning, and Linear Independence Linear Combinations, Spanning, and Linear Independence

Unit 2, Section 3: Linear Combinations, Spanning, and Linear Independence Linear Combinations, Spanning, and Linear Independence Linear Combinations Spanning and Linear Independence We have seen that there are two operations defined on a given vector space V :. vector addition of two vectors and. scalar multiplication of a vector

More information

Topics. Vectors (column matrices): Vector addition and scalar multiplication The matrix of a linear function y Ax The elements of a matrix A : A ij

Topics. Vectors (column matrices): Vector addition and scalar multiplication The matrix of a linear function y Ax The elements of a matrix A : A ij Topics Vectors (column matrices): Vector addition and scalar multiplication The matrix of a linear function y Ax The elements of a matrix A : A ij or a ij lives in row i and column j Definition of a matrix

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Topic 15 Notes Jeremy Orloff

Topic 15 Notes Jeremy Orloff Topic 5 Notes Jeremy Orloff 5 Transpose, Inverse, Determinant 5. Goals. Know the definition and be able to compute the inverse of any square matrix using row operations. 2. Know the properties of inverses.

More information

For updated version of this document see LINEAR EQUATION. Chickens and Rabbits

For updated version of this document see  LINEAR EQUATION. Chickens and Rabbits LINEAR EQUATION Chickens and Rabbits A farm has chickens and rabbits. The farmer counts 26 heads and 82 feet. How many chickens and rabbits are in the farm? Trial and error. Before learning algebra, you

More information

MATH 320, WEEK 7: Matrices, Matrix Operations

MATH 320, WEEK 7: Matrices, Matrix Operations MATH 320, WEEK 7: Matrices, Matrix Operations 1 Matrices We have introduced ourselves to the notion of the grid-like coefficient matrix as a short-hand coefficient place-keeper for performing Gaussian

More information

Row Space and Column Space of a Matrix

Row Space and Column Space of a Matrix Row Space and Column Space of a Matrix 1/18 Summary: To a m n matrix A = (a ij ), we can naturally associate subspaces of K n and of K m, called the row space of A and the column space of A, respectively.

More information

1 Review of the dot product

1 Review of the dot product Any typographical or other corrections about these notes are welcome. Review of the dot product The dot product on R n is an operation that takes two vectors and returns a number. It is defined by n u

More information

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Linear Algebra A Brief Reminder Purpose. The purpose of this document

More information