Dot Products

K. Behrend

April 3, 2008

Abstract. A short review of some basic facts on the dot product. Projections. The spectral theorem.



Contents

1 The dot product
  1.1 Length of a vector
      A few rules
      Unit vectors
  1.2 Orthogonality
  1.3 Pythagorean theorem: vector version
2 Orthogonal matrices
  2.1 Orthogonal bases, orthonormal bases
  2.2 Orthogonal matrices
3 Projections
  3.1 Orthogonal projections
      General case
      One remark on the length of the projection
  3.2 The matrix of a projection
4 Cauchy-Schwarz
  4.1 The Cauchy-Schwarz inequality
  4.2 The angle between two vectors
  4.3 The triangle inequality
5 The spectral theorem
  5.1 Diagonalization of symmetric matrices: the spectral theorem
      The case of simple eigenvalues
      The general case
  5.2 Exercises

1 The dot product

If we have two vectors in R^n,

    v = (v_1, ..., v_n),   w = (w_1, ..., w_n),

their dot product (or scalar product, or inner product) is defined to be the scalar

    v·w = v_1 w_1 + v_2 w_2 + ... + v_n w_n ∈ R.

For example,

    (1, 3)·(3, -1) = 1·3 + 3·(-1) = 3 - 3 = 0.

Notice that we can think of the dot product as a matrix product:

    v·w = v^T w.

Here v^T is the transpose of the column vector v, which is the same vector, but now written as a row vector. Therefore, all rules that apply to matrix multiplication also apply to dot products. In addition, the dot product is commutative:

    v·w = w·v.
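The definition and the matrix-product view of the dot product can be checked in a few lines. A small sketch, assuming NumPy; the vectors are illustrative:

```python
import numpy as np

# Two illustrative vectors in R^2.
v = np.array([1.0, 3.0])
w = np.array([3.0, -1.0])

# Dot product straight from the definition: v1*w1 + ... + vn*wn.
dot_def = sum(vi * wi for vi, wi in zip(v, w))

# The same number as a matrix product v^T w.
dot_mat = v @ w

print(dot_def)            # 0.0
print(dot_mat == w @ v)   # True: the dot product is commutative
```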

1.1 Length of a vector

The length (or norm) of the vector v ∈ R^n is defined to be the real number

    ‖v‖ = √(v·v).   (1)

For example, the length of (4, 4, 2) is

    ‖(4, 4, 2)‖ = √(4² + 4² + 2²) = √(16 + 16 + 4) = √36 = 6.

Figure 1: Apply the Pythagorean theorem twice to obtain the length of a vector in R³.

Referring to Figure 1, to calculate the length of the vector (a, b, c), we apply the Pythagorean theorem to the shaded triangle in the xy-plane (the blue

one), whose hypotenuse has length √(a² + b²). Then the vector itself is the hypotenuse of the upright shaded triangle (the red one), and has length

    ‖(a, b, c)‖ = √(√(a² + b²)² + c²) = √(a² + b² + c²).

So we have used the Pythagorean theorem of plane geometry to justify the definition (1). In R³, Formula (1) gives the correct answer, namely the answer we get from our knowledge of basic geometry. But in R^n, we consider (1) as a definition. The number that the formula √(v·v) yields is called the length of the vector v.

If we take the dot product of a vector with itself, we get the length squared:

    v·v = ‖v‖².   (2)

This is just the square of Equation (1).

A few rules

How does the length change if we multiply a vector by a scalar? For example,

    ‖2(1, -2, 2)‖ = ‖(2, -4, 4)‖ = √(4 + 16 + 16) = 6 = 2·3 = 2‖(1, -2, 2)‖.

The general formula is

    ‖α v‖ = |α| ‖v‖.

The second rule is

    ‖v‖ = 0 if and only if v = 0.
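Both rules are easy to confirm numerically. A small sketch, assuming NumPy; the vector is illustrative:

```python
import numpy as np

v = np.array([1.0, -2.0, 2.0])

# Length from the definition ||v|| = sqrt(v . v).
length = np.sqrt(v @ v)
print(length)                        # 3.0

# Rule 1: scaling by alpha scales the length by |alpha|.
alpha = -2.0
print(np.linalg.norm(alpha * v))     # 6.0, i.e. |alpha| * ||v||

# Rule 2: only the zero vector has length 0.
print(np.linalg.norm(np.zeros(3)))   # 0.0
```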

The only way that the length of (a, b) can be zero is if a² + b² = 0, which means that both a and b have to be zero, which means that the vector (a, b) has to be zero.

Unit vectors

A vector of length 1 is called a unit vector. For example, the vector (3/5, 4/5) is a unit vector, because

    ‖(3/5, 4/5)‖ = √(9/25 + 16/25) = √(25/25) = 1.

No need to take square roots! To check that v is a unit vector, it's enough to check that v·v = 1:

    (3/5, 4/5)·(3/5, 4/5) = 9/25 + 16/25 = 1.

To produce a unit vector pointing in the same direction as v, rescale v by its length: if v ≠ 0, the normalization of v is

    u = v / ‖v‖.

The vector u obtained in this way is always a unit vector. Let's check that:

    u·u = (v/‖v‖)·(v/‖v‖) = (v·v)/‖v‖² = ‖v‖²/‖v‖² = 1,

so that, indeed, u is a unit vector.

For example, let us normalize the vector (1, 2, 1). Calculate its length: ‖(1, 2, 1)‖ = √(1 + 4 + 1) = √6, so the vector (1/√6)(1, 2, 1) is the normalization of (1, 2, 1), the vector obtained by normalizing (1, 2, 1).
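Normalization is a one-line operation, plus the guard against the zero vector. A sketch, assuming NumPy; the vector is illustrative:

```python
import numpy as np

def normalize(v):
    """Rescale a non-zero vector to the unit vector v / ||v||."""
    length = np.linalg.norm(v)
    if length == 0:
        raise ValueError("cannot normalize the zero vector")
    return v / length

u = normalize(np.array([1.0, 2.0, 1.0]))

# u . u = 1, so u is a unit vector -- no square root needed for the check.
print(np.isclose(u @ u, 1.0))   # True
```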

1.2 Orthogonality

Two vectors v, w are called orthogonal (or perpendicular), notation v ⊥ w, if v·w = 0:

    v ⊥ w if and only if v·w = 0.   (3)

For example, (1, 3) and (3, -1) are orthogonal, because

    (1, 3)·(3, -1) = 1·3 + 3·(-1) = 3 - 3 = 0.

Consider two vectors in R²: the vectors (a, b) and (-b, a) are perpendicular (the angle that (a, b) forms with the x-axis is the same as the angle that (-b, a) forms with the y-axis, and since the two coordinate axes are perpendicular, the two vectors have to be perpendicular, too). When we calculate their dot product, we get

    (a, b)·(-b, a) = a(-b) + ba = 0,

so our definition (3) of orthogonality in terms of the dot product agrees with basic geometry in the plane. In higher dimensions, where our geometric intuition fails, we consider (3) a definition: we call two vectors orthogonal if their dot product vanishes.

1.3 Pythagorean theorem: vector version

Suppose v and w are orthogonal vectors. Then we have

    ‖v + w‖² = (v + w)·(v + w)           by Eqn (2) for v + w
             = v·v + v·w + w·v + w·w     distributive law for the dot product
             = v·v + w·w                 because v ⊥ w
             = ‖v‖² + ‖w‖²              again by (2)

This is the vector form of the theorem of Pythagoras:

    if v ⊥ w then ‖v + w‖² = ‖v‖² + ‖w‖².   (4)

Why do we call this the theorem of Pythagoras? Consider this sketch: the three vectors are v, w and v + w. The shaded triangle is a right triangle. The side lengths ‖v‖, ‖w‖ and ‖v + w‖ of the right triangle are also indicated. Thus, Formula (4) is the Pythagorean theorem for the shaded triangle. We can visualize Formula (4) using this picture, but Formula (4) is really just a property of the dot product of column vectors in R^n. (Note that the above sketch does not assume that v and w are in R². They can be in R^n. But they will span a two-dimensional subspace E = span(v, w) inside R^n. This plane E is what is displayed in the sketch.)

The length squared of any vector is greater than or equal to zero: ‖w‖² ≥ 0. Thus, Formula (4) has the easy consequence:

    if v ⊥ w then ‖v‖² ≤ ‖v + w‖².

If we take the square root of this, we get

    if v ⊥ w then ‖v‖ ≤ ‖v + w‖.   (5)
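Formula (4) can be verified numerically on any orthogonal pair. A small sketch, assuming NumPy; the vectors are illustrative:

```python
import numpy as np

# Two orthogonal vectors in R^3 (their dot product is 0).
v = np.array([1.0, 2.0, 2.0])
w = np.array([2.0, 1.0, -2.0])
assert v @ w == 0

lhs = np.linalg.norm(v + w) ** 2                         # ||v + w||^2
rhs = np.linalg.norm(v) ** 2 + np.linalg.norm(w) ** 2    # ||v||^2 + ||w||^2
print(np.isclose(lhs, rhs))   # True: the vector Pythagorean theorem
```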

We will need this formula to deduce the Cauchy-Schwarz inequality later on.

2 Orthogonal matrices

2.1 Orthogonal bases, orthonormal bases

If two vectors are perpendicular, they cannot point in the same direction, so they are linearly independent. This is true for any number of vectors. If v_1, ..., v_k are vectors in R^n, and every one of these vectors is perpendicular to all the others, then we call v_1, ..., v_k an orthogonal set of vectors.

Theorem 1. If v_1, ..., v_k is an orthogonal set of non-zero vectors, then v_1, ..., v_k are linearly independent.

Here is why. Suppose we have a linear relation among v_1, ..., v_k:

    α_1 v_1 + ... + α_k v_k = 0.   (6)

To show that our vectors are linearly independent, we have to show that all the α_i have to be 0. The trick is simple: take the dot product of (6) with v_i. This gives:

    α_1 v_1·v_i + ... + α_i v_i·v_i + ... + α_k v_k·v_i = 0·v_i.

Because v_i is perpendicular to all the other vectors on the list, on the left hand side of this equation all the terms v_1·v_i, ..., v_k·v_i with index different from i are zero, and the only one left is v_i·v_i. Thus, we get

    α_i v_i·v_i = 0,

but then, because v_i is not the zero vector, v_i·v_i = ‖v_i‖² ≠ 0, and so we get the desired α_i = 0.

For example, the vectors (1, 1, 1), (1, -2, 1) and (1, 0, -1) form an orthogonal set:

    (1, 1, 1)·(1, -2, 1) = 1 - 2 + 1 = 0,
    (1, 1, 1)·(1, 0, -1) = 1 + 0 - 1 = 0,
    (1, -2, 1)·(1, 0, -1) = 1 + 0 - 1 = 0.

Hence these three vectors are linearly independent, and so they form an orthogonal basis of R³.

If all the vectors in an orthogonal set are unit vectors, we call it an orthonormal set of vectors (because all the vectors are normalized). If there

are n vectors in an orthonormal set in R^n, then it's a basis, an orthonormal basis. For example, (1, 0), (0, 1) is an orthonormal basis of R², and so is (3/5, 4/5), (-4/5, 3/5).

2.2 Orthogonal matrices

Say u_1, u_2, u_3 is an orthonormal basis of R³. Let P = [u_1 u_2 u_3] be the change of basis matrix. Calculate P^T P. The rows of the transpose matrix P^T contain the vectors u_1, u_2, u_3:

    P^T P = [u_1^T; u_2^T; u_3^T] [u_1 u_2 u_3]     the rows of P^T are the columns of P
          = [u_1·u_1 u_1·u_2 u_1·u_3;
             u_2·u_1 u_2·u_2 u_2·u_3;
             u_3·u_1 u_3·u_2 u_3·u_3]               matrix multiplication!
          = [1 0 0; 0 1 0; 0 0 1]                   because u_1, u_2, u_3 is an orthonormal set

This is a general fact:

Theorem 2. If P is an n × n matrix, then the columns of P form an orthonormal basis of R^n if and only if P^T P = I_n.

Another way to say P^T P = I is that P^T is the inverse of P.

Theorem 3. If the columns of P form an orthonormal basis of R^n, then P is invertible and P^{-1} = P^T.
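Theorems 1 through 3 are all easy to confirm numerically. A small sketch, assuming NumPy; the vectors are illustrative:

```python
import numpy as np

# An orthogonal set in R^3: pairwise dot products vanish.
v1 = np.array([1.0, 1.0, 1.0])
v2 = np.array([1.0, -2.0, 1.0])
v3 = np.array([1.0, 0.0, -1.0])
assert v1 @ v2 == 0 and v1 @ v3 == 0 and v2 @ v3 == 0

# Theorem 1: the set is linearly independent (full-rank column matrix).
print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3])))   # 3

# Normalize the columns to get an orthonormal basis; then P^T P = I
# (Theorem 2) and the inverse of P is its transpose (Theorem 3).
P = np.column_stack([v / np.linalg.norm(v) for v in (v1, v2, v3)])
print(np.allclose(P.T @ P, np.eye(3)))        # True
print(np.allclose(np.linalg.inv(P), P.T))     # True
```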

Matrices with this property are called orthogonal matrices (not orthonormal matrices).

Definition 4. If the columns of P form an orthonormal basis of R^n, then P is called an orthogonal matrix.

Suppose P is an orthogonal matrix, so P^T P = I. Then P^{-1} = P^T. Since P P^{-1} = I, we also get P P^T = I, which we can rewrite as (P^T)^T P^T = I. Thus P^T satisfies the property of Theorem 2, and is therefore also orthogonal. So the columns of P^T are an orthonormal basis, which means that the rows of P are an orthonormal basis. We can summarize everything in the following theorem.

Theorem 5. The following conditions on an n × n matrix P are all equivalent to each other (they all mean the same thing):
(i) P is orthogonal,
(ii) the columns of P form an orthonormal basis of R^n,
(iii) the rows of P form an orthonormal basis of R^n,
(iv) P^T P = I_n,
(v) P P^T = I_n,
(vi) P is non-singular, and P^{-1} = P^T.

3 Projections

3.1 Orthogonal projections

Let us consider a plane E through the origin in R³, spanned by two vectors v and w, and let us assume that the two spanning vectors are orthogonal: v ⊥ w, E = span(v, w). These two vectors form a basis of E. We consider a further vector x and its orthogonal projection onto E. We want to write the projection of x onto E in terms of this basis. The projection of x into the plane E is a vector inside the plane, notation proj_E(x). If we subtract the projection from x, we get the vector x - proj_E(x). The important fact is that this vector x - proj_E(x) is orthogonal to E:

    (x - proj_E(x)) ⊥ E.   (7)

Figure 2: The rectangle formed by proj_E(x) and x - proj_E(x) (which contains the reddish right triangle) is perpendicular to the plane E, in blue.

We know that proj_E(x) is a linear combination of v and w, because it is in E:

    proj_E(x) = a v + b w.   (8)

To find a, we take the dot product of this equation with v:

    proj_E(x)·v = (a v + b w)·v     this is (8) dotted with v
                = a v·v + b w·v     rules for dot products
                = a v·v + 0         because v ⊥ w
                = a ‖v‖²           by Equation (2)

Since v is part of a basis, it is not the zero vector, so ‖v‖² is not zero, and

so we can divide:

    a = proj_E(x)·v / ‖v‖².

Now we use the fact (7): since x - proj_E(x) is orthogonal to E, it has to be orthogonal to every vector inside E; in particular, (x - proj_E(x)) ⊥ v, and hence

    (x - proj_E(x))·v = 0
    x·v - proj_E(x)·v = 0
    proj_E(x)·v = x·v.

We plug this into our formula for a and get

    a = x·v / ‖v‖².

We can use similar reasoning to find b. First take the dot product of (8) with w, to get

    proj_E(x)·w = b ‖w‖²,

and solve for b:

    b = proj_E(x)·w / ‖w‖².

Then use (7) again:

    (x - proj_E(x))·w = 0,

which gives us

    proj_E(x)·w = x·w,

which we plug into the formula for b to obtain

    b = x·w / ‖w‖².

We plug the values we found for a and b back into (8) to get our final formula for proj_E(x) in terms of the orthogonal basis v, w for E:

    proj_E(x) = (x·v / ‖v‖²) v + (x·w / ‖w‖²) w,    v, w: orthogonal basis for E.   (9)

Sometimes people find it easier to memorize if it's written like this:

    proj_E(x) = (x·v / v·v) v + (x·w / w·w) w.
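The formula just derived translates directly into code. A sketch, assuming NumPy; the basis and the vector x are illustrative:

```python
import numpy as np

def proj(x, basis):
    """Orthogonal projection of x onto span(basis).

    `basis` must be an orthogonal set of non-zero vectors; each term
    is (x . v / v . v) v, as in Formula (9).
    """
    return sum((x @ v) / (v @ v) * v for v in basis)

# An orthogonal basis of a plane E in R^3, and a vector x to project.
v = np.array([1.0, 1.0, 0.0])
w = np.array([1.0, -1.0, 0.0])
x = np.array([3.0, 5.0, 7.0])

p = proj(x, [v, w])
print(p)   # [3. 5. 0.]
# Fact (7): the residual x - p is orthogonal to the plane.
print(np.isclose((x - p) @ v, 0) and np.isclose((x - p) @ w, 0))   # True
```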

General case

The formula we derived is true in general:

Theorem 6. If V is a subspace of R^n and v_1, ..., v_k is an orthogonal basis of V, then for every vector x ∈ R^n we have

    proj_V(x) = (x·v_1 / v_1·v_1) v_1 + (x·v_2 / v_2·v_2) v_2 + ... + (x·v_k / v_k·v_k) v_k.

If we have an orthonormal basis, the formula simplifies:

Theorem 7. If V is a subspace of R^n and u_1, ..., u_k is an orthonormal basis of V, then for every vector x ∈ R^n we have

    proj_V(x) = (x·u_1) u_1 + (x·u_2) u_2 + ... + (x·u_k) u_k.

The second theorem looks simpler, but in practice, it does not save any calculational effort.

If V = R^n, then proj_V(x) = x, and so we get:

Theorem 8. If v_1, ..., v_n is an orthogonal basis of R^n, then for every x ∈ R^n

    x = (x·v_1 / v_1·v_1) v_1 + (x·v_2 / v_2·v_2) v_2 + ... + (x·v_n / v_n·v_n) v_n.

If u_1, ..., u_n is an orthonormal basis of R^n, then for every vector x ∈ R^n

    x = (x·u_1) u_1 + (x·u_2) u_2 + ... + (x·u_n) u_n.

One remark on the length of the projection

Looking one more time at the important fact (7), we see that

    proj_V(x) ⊥ (x - proj_V(x)),

because proj_V(x) is in V, and x - proj_V(x) is perpendicular to everything in V. Now looking back at Fact (5), we deduce that

    ‖proj_V(x)‖ ≤ ‖proj_V(x) + (x - proj_V(x))‖ = ‖x‖.   (10)

All this is saying is that the length of a vector is at least as big as the length of its projection.

3.2 The matrix of a projection

So far, we have looked at the projection of one vector at a time. Now, fixing the subspace V, we project all vectors. We get a linear map:

    T : R^n → R^n,   x ↦ proj_V(x).

The transformation T takes a vector and maps it to its projection into V. The image of T is the subspace V. The kernel of T consists of all vectors perpendicular to all of V. This is called the orthogonal complement of V, notation V^⊥:

    ker T = V^⊥.

An example: Suppose B = (u_1, u_2, u_3, u_4) is an orthonormal basis of R⁴. Let V = span(u_1, u_2) be the plane spanned by the first two of the vectors in B. Let T be the orthogonal projection onto V. We have

    T(u_1) = u_1    because u_1 ∈ V
    T(u_2) = u_2    because u_2 ∈ V
    T(u_3) = 0      because u_3 ⊥ V
    T(u_4) = 0      because u_4 ⊥ V

Therefore, the matrix of T in the basis B is

    [T]_B = [1 0 0 0; 0 1 0 0; 0 0 0 0; 0 0 0 0].

So the matrix of T in the standard basis is

    [T]_E = P [T]_B P^{-1}.

Since B is orthonormal, the matrix P is orthogonal, so P^{-1} = P^T. Thus,

    [T]_E = P [T]_B P^T.

Let's calculate:

    [T]_E = [u_1 u_2 u_3 u_4] [T]_B [u_1^T; u_2^T; u_3^T; u_4^T]
          = [u_1 u_2] [u_1^T; u_2^T].

Yes, the last product is a 4 × 2 matrix times a 2 × 4 matrix, resulting in a 4 × 4 matrix. Note also that even though we started with a full basis (u_1, u_2, u_3, u_4) of R⁴, in the end only the vectors u_1 and u_2, which span the subspace V, enter into this formula.
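This outer-product construction of the projection matrix is easy to code. A sketch, assuming NumPy; the orthonormal basis of the plane V is illustrative:

```python
import numpy as np

# An orthonormal basis u1, u2 of a plane V in R^3.
u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
u2 = np.array([0.0, 0.0, 1.0])

# The projection matrix is [u1 u2] [u1^T; u2^T].
U = np.column_stack([u1, u2])   # n x k
M = U @ U.T                     # n x n projection matrix

x = np.array([2.0, 4.0, 6.0])
print(M @ x)                    # [3. 3. 6.] -- the projection of x onto V
print(np.allclose(M @ M, M))    # True: projecting twice changes nothing
```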

Let's do this calculation in a concrete example. The following is an orthonormal basis of R⁴:

    u_1 = ½(1, 1, 1, 1),  u_2 = ½(1, 1, -1, -1),  u_3 = ½(1, -1, 1, -1),  u_4 = ½(1, -1, -1, 1).

The matrix of the projection T onto the plane spanned by the first two of these vectors is

    [T]_E = [u_1 u_2] [u_1^T; u_2^T]
          = ¼ [1 1; 1 1; 1 -1; 1 -1] [1 1 1 1; 1 1 -1 -1].

(The best way to understand this step is to convince yourself that when you actually calculate the two different matrix products P [T]_B P^T and [u_1 u_2] [u_1^T; u_2^T], you end up doing the same calculations, because of all those zeros.) In our example, we get

    [T]_E = ½ [1 1 0 0; 1 1 0 0; 0 0 1 1; 0 0 1 1].

In the general case, we have the following theorem:

Theorem 9. If V is a subspace of R^n and u_1, ..., u_k is an orthonormal basis of V, then the matrix of the orthogonal projection onto V is given by the matrix product

    [u_1 ... u_k] [u_1^T; ...; u_k^T].

4 Cauchy-Schwarz

4.1 The Cauchy-Schwarz inequality

The Cauchy-Schwarz inequality derives from the fact that a vector cannot get any longer by projecting it orthogonally. Consider two vectors v and w

Figure 3: The vector v and its projection are in red, the vector w and the line it spans in blue.

and project v onto the line spanned by w, as in Figure 3. Recall (10) that a vector is always at least as long as its projection:

    ‖proj_span(w)(v)‖ ≤ ‖v‖.

Plug in Formula (9):

    ‖(v·w / ‖w‖²) w‖ ≤ ‖v‖,

or, by using the rules for manipulating lengths:

    (|v·w| / ‖w‖²) ‖w‖ ≤ ‖v‖,

which simplifies to

    |v·w| ≤ ‖v‖ ‖w‖.

This is the Cauchy-Schwarz inequality.

Theorem 10 (Cauchy-Schwarz Inequality). For any two vectors v, w in R^n, we have

    |v·w| ≤ ‖v‖ ‖w‖.

In words: the absolute value of the dot product is less than or equal to the product of the lengths. For example, for plane vectors v = (a, b) and w = (c, d), the Cauchy-Schwarz inequality says that

    |(a, b)·(c, d)| ≤ ‖(a, b)‖ ‖(c, d)‖

    |ac + bd| ≤ √(a² + b²) √(c² + d²)
    (ac + bd)² ≤ (a² + b²)(c² + d²)
    a²c² + 2acbd + b²d² ≤ a²c² + a²d² + b²c² + b²d²
    2acbd ≤ a²d² + b²c²
    0 ≤ a²d² - 2adbc + b²c²
    0 ≤ (ad - bc)².

In fact, all of these statements are equivalent. Of course (ad - bc)² is always greater than or equal to zero. So this reasoning gives another proof of the Cauchy-Schwarz inequality in two dimensions.

4.2 The angle between two vectors

Let v and w be two non-zero vectors in R^n. Dividing the Cauchy-Schwarz inequality by ‖v‖ ‖w‖, we get

    |v·w| / (‖v‖ ‖w‖) ≤ 1,

or, in other words,

    -1 ≤ (v/‖v‖)·(w/‖w‖) ≤ 1.

So (see Figure 4) there is a unique θ ∈ [0, π] such that

    cos θ = (v/‖v‖)·(w/‖w‖).

This θ is called the angle between the vectors v and w.

Figure 4: for every x between -1 and 1 there is a unique angle θ ∈ [0, π] such that cos θ = x.

Definition 11. For any two vectors v, w in R^n, the angle between them is defined to be

    θ = arccos((v/‖v‖)·(w/‖w‖)).

Notice that v/‖v‖ is the unit vector in the same direction as v. Similarly, w/‖w‖ is the unit vector in the direction of w. So our definition really just says that for unit vectors v and w the angle between them is defined to be arccos(v·w):

    θ = arccos(v·w)   if v and w are unit vectors.

Let us check that this makes sense for vectors in the plane: the vectors v and w are unit vectors. The projection of v onto the line spanned by w is then the vector

    proj_span(w)(v) = (cos θ) w

by some simple trigonometry. Using Theorem 6, we calculate

    proj_span(w)(v) = (v·w) w,

and comparing these two formulas for the projection, we see that

    (cos θ) w = (v·w) w,

and from that we get, because w is not zero:

    cos θ = v·w

or θ = arccos(v·w). So Definition 11 agrees with trigonometry in the plane, and is therefore a reasonable definition in R^n.

We can tell whether an angle is acute or obtuse by looking at its cosine:

    if cos θ > 0 then θ is acute
    if cos θ = 0 then θ is right
    if cos θ < 0 then θ is obtuse

so if v and w are arbitrary non-zero vectors in R^n, we can say

    if v·w > 0 then the angle between v and w is acute
    if v·w = 0 then the angle between v and w is right
    if v·w < 0 then the angle between v and w is obtuse

Consider one more time the formula for the projection of v onto w:

    proj_span(w)(v) = (v·w / ‖w‖²) w.

If we take the length on both sides, we get

    ‖proj_span(w)(v)‖ = (|v·w| / ‖w‖²) ‖w‖,

which simplifies to

    |v·w| = ‖proj_span(w)(v)‖ ‖w‖,

and from this we deduce

    v·w =  ‖proj_span(w)(v)‖ ‖w‖    if the angle(v, w) is acute
    v·w =  0                          if the angle(v, w) is right
    v·w = -‖proj_span(w)(v)‖ ‖w‖    if the angle(v, w) is obtuse

This is a geometric interpretation of the meaning of the dot product: the dot product of two vectors is (up to sign) the length of the projection of one onto the other, times the length of the other. The sign is determined by whether the angle between the vectors is acute or obtuse.
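Definition 11 can be implemented directly with arccos. A sketch, assuming NumPy; the vectors are illustrative:

```python
import numpy as np

def angle(v, w):
    """Angle in [0, pi] between two non-zero vectors."""
    c = (v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))
    # Clip guards against tiny floating-point overshoot outside [-1, 1].
    return np.arccos(np.clip(c, -1.0, 1.0))

v = np.array([1.0, 0.0])
w = np.array([1.0, 1.0])
print(np.isclose(np.degrees(angle(v, w)), 45.0))              # True
# A vanishing dot product means a right angle.
print(np.isclose(angle(v, np.array([0.0, 2.0])), np.pi / 2))  # True
```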

4.3 The triangle inequality

Let v and w be two vectors in R^n. Let us calculate ‖v + w‖²:

    ‖v + w‖² = (v + w)·(v + w)                  just the definition of length
             = v·v + v·w + w·v + w·w            dot product is distributive
             = ‖v‖² + 2 v·w + ‖w‖²            dot product is commutative
             ≤ ‖v‖² + 2 ‖v‖ ‖w‖ + ‖w‖²      Cauchy-Schwarz!
             = (‖v‖ + ‖w‖)².

Taking the square root of this inequality we get the triangle inequality:

    ‖v + w‖ ≤ ‖v‖ + ‖w‖.

To see why it's called the triangle inequality, look at Figure 5.

Figure 5: The vectors w, v + w and v (translated) form a triangle. The length of v + w cannot be bigger than the sum of the lengths of the other two sides of the triangle.

5 The spectral theorem

5.1 Diagonalization of symmetric matrices: the spectral theorem

The most important source of orthogonal sets of vectors is from eigenvectors of symmetric matrices. Recall that a matrix A is symmetric if A = A^T.
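Both inequalities of Section 4 can be spot-checked numerically on random vectors. A small sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

for _ in range(1000):
    v = rng.standard_normal(4)
    w = rng.standard_normal(4)
    # Cauchy-Schwarz: |v . w| <= ||v|| ||w||
    assert abs(v @ w) <= np.linalg.norm(v) * np.linalg.norm(w) + 1e-12
    # Triangle inequality: ||v + w|| <= ||v|| + ||w||
    assert np.linalg.norm(v + w) <= np.linalg.norm(v) + np.linalg.norm(w) + 1e-12

print("both inequalities held on 1000 random pairs")
```

The small tolerance 1e-12 only absorbs floating-point rounding; the inequalities themselves are exact.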

Theorem 12. Suppose A is a symmetric matrix. Suppose further that λ_1 and λ_2 are two (different) eigenvalues of A, and that v_1 and v_2 are corresponding eigenvectors. Then v_1 ⊥ v_2.

It's easy to explain why this is true. Simply compute the dot product, multiplied by one of the eigenvalues:

    λ_1 (v_1·v_2) = v_2^T (λ_1 v_1)    rewrite dot product as matrix multiplication
                  = v_2^T A v_1        because A v_1 = λ_1 v_1 by the eigenvalue property
                  = v_2^T A^T v_1      use that A^T = A
                  = (A v_2)^T v_1      rules for transposes include (A v_2)^T = v_2^T A^T
                  = λ_2 v_2^T v_1      because A v_2 = λ_2 v_2 by the eigenvalue property
                  = λ_2 (v_1·v_2)      back to dot product

This equality gives rise to

    (λ_1 - λ_2)(v_1·v_2) = 0,

and because the two eigenvalues are different, we can conclude that v_1·v_2 = 0, so that v_1 ⊥ v_2.

The case of simple eigenvalues

From Theorem 12, we can immediately deduce:

Corollary 13. If A is a symmetric n × n matrix which has n distinct eigenvalues, then there exists an orthonormal basis of R^n consisting of eigenvectors of A. (We say that A is orthogonally diagonalizable.)

(Simply take an eigenvector for each eigenvalue; they form an orthogonal set by the theorem; then normalize them to get the orthonormal basis.)

For example, the matrix

    A = [0 1 2; 1 3 1; 2 1 0]

is symmetric. Its characteristic polynomial is

    det [λ -1 -2; -1 λ-3 -1; -2 -1 λ] = λ²(λ - 3) - 2 - 2 - 4(λ - 3) - λ - λ

    = λ³ - 3λ² - 6λ + 8 = (λ - 4)(λ - 1)(λ + 2).

So the eigenvalues are λ_1 = 4, λ_2 = 1 and λ_3 = -2. Corresponding eigenvectors are easily found (each of the three homogeneous systems of equations has exactly one free variable). We get

    v_1 = (1, 2, 1),   v_2 = (1, -1, 1),   v_3 = (1, 0, -1).

We normalize to get an orthonormal basis:

    u_1 = (1/√6)(1, 2, 1),   u_2 = (1/√3)(1, -1, 1),   u_3 = (1/√2)(1, 0, -1).

The change of basis matrix P has these vectors as columns:

    P = [1/√6 1/√3 1/√2; 2/√6 -1/√3 0; 1/√6 1/√3 -1/√2].

Now we can write down the diagonalization A = P D P^T of A:

    [0 1 2; 1 3 1; 2 1 0] = P [4 0 0; 0 1 0; 0 0 -2] P^T.

The general case

The spectral theorem says that every symmetric matrix admits an orthonormal basis of eigenvectors.

Theorem 14 (Spectral Theorem). Suppose A is a symmetric n × n matrix. Then there exists an orthonormal basis of R^n consisting of eigenvectors of A. So there exists an orthogonal matrix P such that

    A = P D P^T,

where D is diagonal.

This theorem says three things: the characteristic polynomial of every symmetric matrix splits completely, all the geometric multiplicities of all the

eigenvalues are equal to all the algebraic multiplicities, and on top of this, that the eigenvectors can be chosen orthogonal to each other.

Let us explain why the theorem is true for 2 × 2 matrices. If A is symmetric, it looks like this:

    A = [a b; b c].

The characteristic polynomial is λ² - (a + c)λ + ac - b², whose roots are

    λ = ½((a + c) ± √((a + c)² - 4(ac - b²))).

Let us examine the term under the square root (the discriminant of the quadratic):

    (a + c)² - 4ac + 4b² = a² + 2ac + c² - 4ac + 4b² = a² - 2ac + c² + 4b² = (a - c)² + 4b².

Notice that this expression is always non-negative, because it is a sum of two squares. Therefore, the square root exists. When solving this particular quadratic, we never run into square roots of negative numbers. This already shows that the characteristic polynomial of A will split completely, into two linear factors. So we have two real eigenvalues. They can either be equal or distinct. If they are distinct, we can find corresponding eigenvectors, which will automatically be orthogonal. We can normalize them and then we have our orthonormal basis of eigenvectors.

What happens if the two eigenvalues are equal? In other words, there is only one eigenvalue, but its algebraic multiplicity is 2? Could it be that the geometric multiplicity is only 1? Let's see. The only way the two roots of the quadratic equation can be equal is if the term underneath the square root is 0. So that means that

    (a - c)² + 4b² = 0,

but the only way the sum of two non-negative numbers can be zero is if both of them are zero. So (a - c)² = 0 and 4b² = 0, which implies that a = c and b = 0. So our matrix is

    A = [a 0; 0 a],

which is already diagonal! The standard basis is an orthonormal basis which diagonalizes A. This justifies the spectral theorem in 2 dimensions.
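For larger matrices, a numerical eigensolver produces the orthogonal diagonalization directly. A sketch, assuming NumPy, using a symmetric 3 × 3 matrix with eigenvalues 4, 1 and -2 (as in the worked example of this section):

```python
import numpy as np

A = np.array([[0.0, 1.0, 2.0],
              [1.0, 3.0, 1.0],
              [2.0, 1.0, 0.0]])

# eigh is NumPy's eigensolver for symmetric matrices: it returns the
# eigenvalues in ascending order and an orthogonal matrix of eigenvectors.
eigenvalues, P = np.linalg.eigh(A)
print(np.allclose(eigenvalues, [-2.0, 1.0, 4.0]))        # True
print(np.allclose(P.T @ P, np.eye(3)))                   # True: P is orthogonal
print(np.allclose(P @ np.diag(eigenvalues) @ P.T, A))    # True: A = P D P^T
```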

Let us do an example. Consider the matrix

    A = [4 1 1; 1 4 1; 1 1 4],

which is symmetric. The characteristic polynomial is λ³ - 12λ² + 45λ - 54. After some trial and error, we may find that 3 is a root of this polynomial: 27 - 108 + 135 - 54 = 0. Factoring out (λ - 3) from the characteristic polynomial, we are left with the quadratic λ² - 9λ + 18, which has the two roots λ = 3 and λ = 6. So our eigenvalues are 3, with multiplicity 2, and 6, with multiplicity 1.

Let us first deal with λ = 6. The eigenspace E_6 is the null space of the matrix

    A - 6I = [-2 1 1; 1 -2 1; 1 1 -2],

which has one free variable, and a solution is v_1 = (1, 1, 1).

Now let us deal with λ = 3. The eigenspace E_3 is the null space of the matrix

    A - 3I = [1 1 1; 1 1 1; 1 1 1],

which row reduces to

    [1 1 1; 0 0 0; 0 0 0].

We see that there are two free variables, and so the eigenspace is two-dimensional, and so the geometric multiplicity of the eigenvalue 3 is equal to the algebraic multiplicity, as it should be, by the spectral theorem. The usual way of finding a basis for the null space of (1 1 1) (setting the free variables first equal to (1, 0) and then equal to (0, 1)) gives us the two eigenvectors

    (-1, 1, 0)   and   (-1, 0, 1).

These are not orthogonal to each other. We have to work a little harder to find an orthogonal basis of E_3. Let us use the first of these as second eigenvector v_2:

    v_2 = (-1, 1, 0).
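Within a repeated eigenspace, one way to get an orthogonal second vector is a single Gram-Schmidt step: subtract from the second basis vector its projection onto the first. This is an alternative to the extra-row trick used below; a sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 4.0, 1.0],
              [1.0, 1.0, 4.0]])

# Two independent, but not orthogonal, eigenvectors for lambda = 3.
a = np.array([-1.0, 1.0, 0.0])
b = np.array([-1.0, 0.0, 1.0])

# One Gram-Schmidt step: remove from b its projection onto a.
c = b - (b @ a) / (a @ a) * a
print(c)   # proportional to (-1, -1, 2)

assert np.isclose(a @ c, 0)        # now orthogonal to a
assert np.allclose(A @ c, 3 * c)   # still an eigenvector for lambda = 3
```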

Now to find the third eigenvector, we want a vector which satisfies two properties: first, it is an eigenvector with eigenvalue 3, so it has to be in the null space of (1 1 1); second, it has to be perpendicular to v_2 = (-1, 1, 0), which means the dot product with (-1, 1, 0) has to be zero, which we can force by adding this condition as a second row to (1 1 1):

    [1 1 1; -1 1 0].

This matrix now reduces to

    [1 0 1/2; 0 1 1/2],

which has the vector

    v_3 = (-1, -1, 2)

as a solution. Now the vectors

    v_2 = (-1, 1, 0),   v_3 = (-1, -1, 2)

form an orthogonal basis of the eigenspace E_3. Both of these are automatically orthogonal to v_1. All together, we now have an orthogonal eigenbasis:

    v_1 = (1, 1, 1),   v_2 = (-1, 1, 0),   v_3 = (-1, -1, 2),

with corresponding eigenvalues

    λ_1 = 6,   λ_2 = 3,   λ_3 = 3.

We normalize these eigenvectors to form an orthonormal basis:

    u_1 = (1/√3)(1, 1, 1),   u_2 = (1/√2)(-1, 1, 0),   u_3 = (1/√6)(-1, -1, 2).

So the change of basis matrix is

    P = [1/√3 -1/√2 -1/√6; 1/√3 1/√2 -1/√6; 1/√3 0 2/√6]

and the orthogonal diagonalization of A is

    [4 1 1; 1 4 1; 1 1 4] = P [6 0 0; 0 3 0; 0 0 3] P^T.

By this method we can orthogonally diagonalize any symmetric matrix (assuming we can manage to find the roots of the characteristic polynomial). For multiple eigenvalues, we solve for one eigenvector at a time, each time adding the eigenvectors already found as equations, so as to ensure that the subsequent eigenvectors are perpendicular to the ones already found.

5.2 Exercises

Exercise 15. Find an orthonormal basis of eigenvectors for the matrix

    A =

Write down the corresponding diagonalization of A. (Hint: the eigenvalues are 8 and .)


Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

UNDERSTANDING THE DIAGONALIZATION PROBLEM. Roy Skjelnes. 1.- Linear Maps 1.1. Linear maps. A map T : R n R m is a linear map if

UNDERSTANDING THE DIAGONALIZATION PROBLEM. Roy Skjelnes. 1.- Linear Maps 1.1. Linear maps. A map T : R n R m is a linear map if UNDERSTANDING THE DIAGONALIZATION PROBLEM Roy Skjelnes Abstract These notes are additional material to the course B107, given fall 200 The style may appear a bit coarse and consequently the student is

More information

SUMMARY OF MATH 1600

SUMMARY OF MATH 1600 SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You

More information

Inner products. Theorem (basic properties): Given vectors u, v, w in an inner product space V, and a scalar k, the following properties hold:

Inner products. Theorem (basic properties): Given vectors u, v, w in an inner product space V, and a scalar k, the following properties hold: Inner products Definition: An inner product on a real vector space V is an operation (function) that assigns to each pair of vectors ( u, v) in V a scalar u, v satisfying the following axioms: 1. u, v

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers.

5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers. Linear Algebra - Test File - Spring Test # For problems - consider the following system of equations. x + y - z = x + y + 4z = x + y + 6z =.) Solve the system without using your calculator..) Find the

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008. This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

22m:033 Notes: 7.1 Diagonalization of Symmetric Matrices

22m:033 Notes: 7.1 Diagonalization of Symmetric Matrices m:33 Notes: 7. Diagonalization of Symmetric Matrices Dennis Roseman University of Iowa Iowa City, IA http://www.math.uiowa.edu/ roseman May 3, Symmetric matrices Definition. A symmetric matrix is a matrix

More information

DIAGONALIZATION. In order to see the implications of this definition, let us consider the following example Example 1. Consider the matrix

DIAGONALIZATION. In order to see the implications of this definition, let us consider the following example Example 1. Consider the matrix DIAGONALIZATION Definition We say that a matrix A of size n n is diagonalizable if there is a basis of R n consisting of eigenvectors of A ie if there are n linearly independent vectors v v n such that

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

Vectors and Matrices Statistics with Vectors and Matrices

Vectors and Matrices Statistics with Vectors and Matrices Vectors and Matrices Statistics with Vectors and Matrices Lecture 3 September 7, 005 Analysis Lecture #3-9/7/005 Slide 1 of 55 Today s Lecture Vectors and Matrices (Supplement A - augmented with SAS proc

More information

Math 302 Outcome Statements Winter 2013

Math 302 Outcome Statements Winter 2013 Math 302 Outcome Statements Winter 2013 1 Rectangular Space Coordinates; Vectors in the Three-Dimensional Space (a) Cartesian coordinates of a point (b) sphere (c) symmetry about a point, a line, and a

More information

Elementary Linear Algebra Review for Exam 2 Exam is Monday, November 16th.

Elementary Linear Algebra Review for Exam 2 Exam is Monday, November 16th. Elementary Linear Algebra Review for Exam Exam is Monday, November 6th. The exam will cover sections:.4,..4, 5. 5., 7., the class notes on Markov Models. You must be able to do each of the following. Section.4

More information

Exercise Set 7.2. Skills

Exercise Set 7.2. Skills Orthogonally diagonalizable matrix Spectral decomposition (or eigenvalue decomposition) Schur decomposition Subdiagonal Upper Hessenburg form Upper Hessenburg decomposition Skills Be able to recognize

More information

22m:033 Notes: 6.1 Inner Product, Length and Orthogonality

22m:033 Notes: 6.1 Inner Product, Length and Orthogonality m:033 Notes: 6. Inner Product, Length and Orthogonality Dennis Roseman University of Iowa Iowa City, IA http://www.math.uiowa.edu/ roseman April, 00 The inner product Arithmetic is based on addition and

More information

Assignment 1 Math 5341 Linear Algebra Review. Give complete answers to each of the following questions. Show all of your work.

Assignment 1 Math 5341 Linear Algebra Review. Give complete answers to each of the following questions. Show all of your work. Assignment 1 Math 5341 Linear Algebra Review Give complete answers to each of the following questions Show all of your work Note: You might struggle with some of these questions, either because it has

More information

HW2 - Due 01/30. Each answer must be mathematically justified. Don t forget your name.

HW2 - Due 01/30. Each answer must be mathematically justified. Don t forget your name. HW2 - Due 0/30 Each answer must be mathematically justified. Don t forget your name. Problem. Use the row reduction algorithm to find the inverse of the matrix 0 0, 2 3 5 if it exists. Double check your

More information

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018 Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

Solutions to Selected Questions from Denis Sevee s Vector Geometry. (Updated )

Solutions to Selected Questions from Denis Sevee s Vector Geometry. (Updated ) Solutions to Selected Questions from Denis Sevee s Vector Geometry. (Updated 24--27) Denis Sevee s Vector Geometry notes appear as Chapter 5 in the current custom textbook used at John Abbott College for

More information

(, ) : R n R n R. 1. It is bilinear, meaning it s linear in each argument: that is

(, ) : R n R n R. 1. It is bilinear, meaning it s linear in each argument: that is 17 Inner products Up until now, we have only examined the properties of vectors and matrices in R n. But normally, when we think of R n, we re really thinking of n-dimensional Euclidean space - that is,

More information

APPLICATIONS The eigenvalues are λ = 5, 5. An orthonormal basis of eigenvectors consists of

APPLICATIONS The eigenvalues are λ = 5, 5. An orthonormal basis of eigenvectors consists of CHAPTER III APPLICATIONS The eigenvalues are λ =, An orthonormal basis of eigenvectors consists of, The eigenvalues are λ =, A basis of eigenvectors consists of, 4 which are not perpendicular However,

More information

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

The geometry of least squares

The geometry of least squares The geometry of least squares We can think of a vector as a point in space, where the elements of the vector are the coordinates of the point. Consider for example, the following vector s: t = ( 4, 0),

More information

Exercise Sheet 1.

Exercise Sheet 1. Exercise Sheet 1 You can download my lecture and exercise sheets at the address http://sami.hust.edu.vn/giang-vien/?name=huynt 1) Let A, B be sets. What does the statement "A is not a subset of B " mean?

More information

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Linear Algebra A Brief Reminder Purpose. The purpose of this document

More information

2. Every linear system with the same number of equations as unknowns has a unique solution.

2. Every linear system with the same number of equations as unknowns has a unique solution. 1. For matrices A, B, C, A + B = A + C if and only if A = B. 2. Every linear system with the same number of equations as unknowns has a unique solution. 3. Every linear system with the same number of equations

More information

MATH Linear Algebra

MATH Linear Algebra MATH 4 - Linear Algebra One of the key driving forces for the development of linear algebra was the profound idea (going back to 7th century and the work of Pierre Fermat and Rene Descartes) that geometry

More information

4. Linear transformations as a vector space 17

4. Linear transformations as a vector space 17 4 Linear transformations as a vector space 17 d) 1 2 0 0 1 2 0 0 1 0 0 0 1 2 3 4 32 Let a linear transformation in R 2 be the reflection in the line = x 2 Find its matrix 33 For each linear transformation

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

Eigenvalues and Eigenvectors A =

Eigenvalues and Eigenvectors A = Eigenvalues and Eigenvectors Definition 0 Let A R n n be an n n real matrix A number λ R is a real eigenvalue of A if there exists a nonzero vector v R n such that A v = λ v The vector v is called an eigenvector

More information

LINEAR ALGEBRA QUESTION BANK

LINEAR ALGEBRA QUESTION BANK LINEAR ALGEBRA QUESTION BANK () ( points total) Circle True or False: TRUE / FALSE: If A is any n n matrix, and I n is the n n identity matrix, then I n A = AI n = A. TRUE / FALSE: If A, B are n n matrices,

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

(arrows denote positive direction)

(arrows denote positive direction) 12 Chapter 12 12.1 3-dimensional Coordinate System The 3-dimensional coordinate system we use are coordinates on R 3. The coordinate is presented as a triple of numbers: (a,b,c). In the Cartesian coordinate

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

No books, no notes, no calculators. You must show work, unless the question is a true/false, yes/no, or fill-in-the-blank question.

No books, no notes, no calculators. You must show work, unless the question is a true/false, yes/no, or fill-in-the-blank question. Math 304 Final Exam (May 8) Spring 206 No books, no notes, no calculators. You must show work, unless the question is a true/false, yes/no, or fill-in-the-blank question. Name: Section: Question Points

More information

y 1 y 2 . = x 1y 1 + x 2 y x + + x n y n y n 2 7 = 1(2) + 3(7) 5(4) = 4.

y 1 y 2 . = x 1y 1 + x 2 y x + + x n y n y n 2 7 = 1(2) + 3(7) 5(4) = 4. . Length, Angle, and Orthogonality In this section, we discuss the defintion of length and angle for vectors. We also define what it means for two vectors to be orthogonal. Then we see that linear systems

More information

Diagonalizing Matrices

Diagonalizing Matrices Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B

More information

235 Final exam review questions

235 Final exam review questions 5 Final exam review questions Paul Hacking December 4, 0 () Let A be an n n matrix and T : R n R n, T (x) = Ax the linear transformation with matrix A. What does it mean to say that a vector v R n is an

More information

Solving a system by back-substitution, checking consistency of a system (no rows of the form

Solving a system by back-substitution, checking consistency of a system (no rows of the form MATH 520 LEARNING OBJECTIVES SPRING 2017 BROWN UNIVERSITY SAMUEL S. WATSON Week 1 (23 Jan through 27 Jan) Definition of a system of linear equations, definition of a solution of a linear system, elementary

More information

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix

More information

Math 291-2: Lecture Notes Northwestern University, Winter 2016

Math 291-2: Lecture Notes Northwestern University, Winter 2016 Math 291-2: Lecture Notes Northwestern University, Winter 2016 Written by Santiago Cañez These are lecture notes for Math 291-2, the second quarter of MENU: Intensive Linear Algebra and Multivariable Calculus,

More information

Recall: Dot product on R 2 : u v = (u 1, u 2 ) (v 1, v 2 ) = u 1 v 1 + u 2 v 2, u u = u u 2 2 = u 2. Geometric Meaning:

Recall: Dot product on R 2 : u v = (u 1, u 2 ) (v 1, v 2 ) = u 1 v 1 + u 2 v 2, u u = u u 2 2 = u 2. Geometric Meaning: Recall: Dot product on R 2 : u v = (u 1, u 2 ) (v 1, v 2 ) = u 1 v 1 + u 2 v 2, u u = u 2 1 + u 2 2 = u 2. Geometric Meaning: u v = u v cos θ. u θ v 1 Reason: The opposite side is given by u v. u v 2 =

More information

Eigenvalues and Eigenvectors: An Introduction

Eigenvalues and Eigenvectors: An Introduction Eigenvalues and Eigenvectors: An Introduction The eigenvalue problem is a problem of considerable theoretical interest and wide-ranging application. For example, this problem is crucial in solving systems

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Sec. 6.1 Eigenvalues and Eigenvectors Linear transformations L : V V that go from a vector space to itself are often called linear operators. Many linear operators can be understood geometrically by identifying

More information

MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS

MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS 1. HW 1: Due September 4 1.1.21. Suppose v, w R n and c is a scalar. Prove that Span(v + cw, w) = Span(v, w). We must prove two things: that every element

More information

Problem 1: Solving a linear equation

Problem 1: Solving a linear equation Math 38 Practice Final Exam ANSWERS Page Problem : Solving a linear equation Given matrix A = 2 2 3 7 4 and vector y = 5 8 9. (a) Solve Ax = y (if the equation is consistent) and write the general solution

More information

v = v 1 2 +v 2 2. Two successive applications of this idea give the length of the vector v R 3 :

v = v 1 2 +v 2 2. Two successive applications of this idea give the length of the vector v R 3 : Length, Angle and the Inner Product The length (or norm) of a vector v R 2 (viewed as connecting the origin to a point (v 1,v 2 )) is easily determined by the Pythagorean Theorem and is denoted v : v =

More information

Linear Algebra. Alvin Lin. August December 2017

Linear Algebra. Alvin Lin. August December 2017 Linear Algebra Alvin Lin August 207 - December 207 Linear Algebra The study of linear algebra is about two basic things. We study vector spaces and structure preserving maps between vector spaces. A vector

More information

Calculating determinants for larger matrices

Calculating determinants for larger matrices Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det

More information

REVIEW FOR EXAM III SIMILARITY AND DIAGONALIZATION

REVIEW FOR EXAM III SIMILARITY AND DIAGONALIZATION REVIEW FOR EXAM III The exam covers sections 4.4, the portions of 4. on systems of differential equations and on Markov chains, and..4. SIMILARITY AND DIAGONALIZATION. Two matrices A and B are similar

More information

Exercise Solutions for Introduction to 3D Game Programming with DirectX 10

Exercise Solutions for Introduction to 3D Game Programming with DirectX 10 Exercise Solutions for Introduction to 3D Game Programming with DirectX 10 Frank Luna, September 6, 009 Solutions to Part I Chapter 1 1. Let u = 1, and v = 3, 4. Perform the following computations and

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

Rigid Geometric Transformations

Rigid Geometric Transformations Rigid Geometric Transformations Carlo Tomasi This note is a quick refresher of the geometry of rigid transformations in three-dimensional space, expressed in Cartesian coordinates. 1 Cartesian Coordinates

More information

Notes on multivariable calculus

Notes on multivariable calculus Notes on multivariable calculus Jonathan Wise February 2, 2010 1 Review of trigonometry Trigonometry is essentially the study of the relationship between polar coordinates and Cartesian coordinates in

More information

ANSWERS (5 points) Let A be a 2 2 matrix such that A =. Compute A. 2

ANSWERS (5 points) Let A be a 2 2 matrix such that A =. Compute A. 2 MATH 7- Final Exam Sample Problems Spring 7 ANSWERS ) ) ). 5 points) Let A be a matrix such that A =. Compute A. ) A = A ) = ) = ). 5 points) State ) the definition of norm, ) the Cauchy-Schwartz inequality

More information

Designing Information Devices and Systems II

Designing Information Devices and Systems II EECS 16B Fall 2016 Designing Information Devices and Systems II Linear Algebra Notes Introduction In this set of notes, we will derive the linear least squares equation, study the properties symmetric

More information

Designing Information Devices and Systems I Fall 2018 Lecture Notes Note 21

Designing Information Devices and Systems I Fall 2018 Lecture Notes Note 21 EECS 16A Designing Information Devices and Systems I Fall 2018 Lecture Notes Note 21 21.1 Module Goals In this module, we introduce a family of ideas that are connected to optimization and machine learning,

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Vector Geometry. Chapter 5

Vector Geometry. Chapter 5 Chapter 5 Vector Geometry In this chapter we will look more closely at certain geometric aspects of vectors in R n. We will first develop an intuitive understanding of some basic concepts by looking at

More information

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra Foundations of Numerics from Advanced Mathematics Linear Algebra Linear Algebra, October 23, 22 Linear Algebra Mathematical Structures a mathematical structure consists of one or several sets and one or

More information

(a) II and III (b) I (c) I and III (d) I and II and III (e) None are true.

(a) II and III (b) I (c) I and III (d) I and II and III (e) None are true. 1 Which of the following statements is always true? I The null space of an m n matrix is a subspace of R m II If the set B = {v 1,, v n } spans a vector space V and dimv = n, then B is a basis for V III

More information

Inner Product, Length, and Orthogonality

Inner Product, Length, and Orthogonality Inner Product, Length, and Orthogonality Linear Algebra MATH 2076 Linear Algebra,, Chapter 6, Section 1 1 / 13 Algebraic Definition for Dot Product u 1 v 1 u 2 Let u =., v = v 2. be vectors in Rn. The

More information

Chapter 2 Notes, Linear Algebra 5e Lay

Chapter 2 Notes, Linear Algebra 5e Lay Contents.1 Operations with Matrices..................................1.1 Addition and Subtraction.............................1. Multiplication by a scalar............................ 3.1.3 Multiplication

More information

Study Guide for Linear Algebra Exam 2

Study Guide for Linear Algebra Exam 2 Study Guide for Linear Algebra Exam 2 Term Vector Space Definition A Vector Space is a nonempty set V of objects, on which are defined two operations, called addition and multiplication by scalars (real

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information