Linear Algebra, Summer 2011, pt. 3

September 20, 2011

Contents

1 Orthogonality
  1.1 The length of a vector
  1.2 Orthogonal vectors
  1.3 Orthogonal Subspaces
  1.4 Orthogonality and the fundamental subspaces
2 Projection onto subspaces
  2.1 Projection onto one dimensional subspaces
  2.2 Least squares
  2.3 Projection onto subspaces
3 Orthogonal Matrices and Gram-Schmidt
  3.1 Gram-Schmidt

1 Orthogonality.

We have seen that basis vectors are a great way of describing a vector space, in part because each vector can be expressed uniquely as a linear combination of basis vectors. However, not all bases are created equally: there are some that we like more than others. Computationally and aesthetically, we prefer basis vectors which are at right angles to each other, and of length one. That is to say, basis vectors which look like the standard basis vectors from R^2 or R^3. We will do this in three steps:

1. What is the length of a vector?

2. What does it mean for two vectors or subspaces to be orthogonal?

3. Given a basis for a subspace, how do we create orthogonal vectors which span the same subspace?

Let us hit these questions right away.

1.1 The length of a vector.

Given a vector x = (x_1, x_2) in R^2, we are probably comfortable with the idea that the length of this vector (which we will denote by ||x||) is given by the Pythagorean theorem:

||x|| = √(x_1^2 + x_2^2).

Also, for x = (x_1, x_2, x_3) ∈ R^3, the formula is

||x|| = √(x_1^2 + x_2^2 + x_3^2).

Then we generalize this to x = (x_1, x_2, ..., x_n) ∈ R^n.

Definition 1.1 (Length of a vector). For x = (x_1, x_2, ..., x_n) ∈ R^n, we define

||x|| := √(x_1^2 + x_2^2 + ... + x_n^2).

We may prefer to write this as a multiplication of matrices:

||x||^2 = x^T x,

where we are writing x above as a column vector, as usual. As an example, we will expect the vector (2, 3, 4) to have length √29, and indeed,

2^2 + 3^2 + 4^2 = 4 + 9 + 16 = 29 = ||(2, 3, 4)||^2.
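A quick numerical check of the length formula (a sketch using NumPy, which is not part of the notes but convenient here):

```python
import numpy as np

x = np.array([2.0, 3.0, 4.0])

# Length from the definition: square root of the sum of squared entries.
length_direct = np.sqrt(np.sum(x**2))

# Length from the matrix form ||x||^2 = x^T x.
length_matmul = np.sqrt(x @ x)

print(length_direct, length_matmul, np.sqrt(29))  # all three values agree
```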

1.2 Orthogonal vectors.

To motivate the definition of orthogonal vectors, we will again turn to the Pythagorean theorem. First let's state the theorem in terms of vectors:

Theorem 1.1 (Pythagorean Theorem). If x and y are orthogonal, then

||x||^2 + ||y||^2 = ||x - y||^2.

This is the theorem in two or three dimensions. What we do is we define orthogonality so that this theorem is true! In particular, we need

x^T x + y^T y = (x - y)^T (x - y) = x^T x + y^T y - 2 x^T y.

This is true only when x^T y = 0. Hence, we say that two vectors x and y are orthogonal if x^T y = 0.

Example 1.1. The vectors (1, 1, 3) and (-1, -2, 1) are orthogonal, since

(1, 1, 3) · (-1, -2, 1) = -1 - 2 + 3 = 0.

Here is a surprising statement that is easy to prove:

Proposition 1.1. If {x_1, ..., x_n} are mutually orthogonal (i.e., each is orthogonal to all the others) and nonzero, then they are linearly independent.

Proof. This is a beautiful proof, since we use only the definitions of linear independence and orthogonality. In particular, we will take a linear combination of the vectors adding to zero, then take a dot product, and show each coefficient must be zero. Specifically, suppose

c_1 x_1 + ... + c_n x_n = 0.

We will then take a dot product of each side with x_1, which is orthogonal to all the other x_j's, except for itself. We get

(c_1 x_1 + ... + c_n x_n) · x_1 = c_1 ||x_1||^2 = 0 · x_1 = 0.

Since x_1 is nonzero, we conclude that c_1 must be equal to 0. We repeat this n times to find that all of c_1, ..., c_n must be 0, hence {x_1, ..., x_n} are linearly independent.
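As a sanity check of Example 1.1 and Proposition 1.1, a small NumPy sketch: the dot product vanishes, and stacking mutually orthogonal nonzero vectors as columns gives a matrix of full column rank (i.e., the vectors are linearly independent).

```python
import numpy as np

u = np.array([1.0, 1.0, 3.0])
v = np.array([-1.0, -2.0, 1.0])

print(u @ v)  # 0.0, so u and v are orthogonal

# Add a third vector orthogonal to both (the cross product), then check
# that the three mutually orthogonal vectors are linearly independent.
X = np.column_stack([u, v, np.cross(u, v)])
print(np.linalg.matrix_rank(X))  # 3
```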

Another important notion we should record is the following:

Definition 1.2 (Orthonormal). We say a collection of vectors {x_1, ..., x_n} is orthonormal provided

1. x_j^T x_k = 0 for all j ≠ k, and

2. ||x_j|| = 1 for all j = 1, 2, ..., n.

This definition is definitely motivated by our favorite basis for Euclidean space: {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. We can readily verify that these vectors have length 1 and are mutually orthogonal. In fact, it is not hard to describe all such bases of R^2.

Example 1.2 (Rotation matrices). There is a matrix that rotates vectors (x, y) by a fixed amount θ, which we'll call R_θ. To derive a formula for it, let's take a point (x, y) ∈ R^2, and write it in polar coordinates, (x, y) = (r cos ω, r sin ω). Then we hope that

R_θ (r cos ω, r sin ω)^T = (r cos(ω + θ), r sin(ω + θ))^T = (r cos ω cos θ - r sin ω sin θ, r cos ω sin θ + r sin ω cos θ)^T.

From this we can deduce that

R_θ = ( cos θ  -sin θ )
      ( sin θ   cos θ ).

Now if we wish to rotate the usual basis for R^2 by an angle θ, we will get

R_θ (1, 0)^T = (cos θ, sin θ)^T, and R_θ (0, 1)^T = (-sin θ, cos θ)^T.

Up to reordering and flipping the sign of one of the vectors, these are all the orthonormal bases for R^2.
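The rotation matrix derived above is easy to check numerically; a minimal sketch:

```python
import numpy as np

def rotation(theta):
    """Matrix that rotates vectors in R^2 counterclockwise by theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

R = rotation(np.pi / 6)

# The columns of R are the rotated standard basis vectors.
print(R @ np.array([1.0, 0.0]))   # (cos θ, sin θ)
print(R @ np.array([0.0, 1.0]))   # (-sin θ, cos θ)

# And those columns are orthonormal: R^T R = I.
print(np.allclose(R.T @ R, np.eye(2)))  # True
```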

1.3 Orthogonal Subspaces.

In the previous section, we asked when two vectors were orthogonal, which we generalized to talking about when a collection of vectors was orthogonal. Now we define what it means for two subspaces to be orthogonal. Looking around a room for an example of such a situation, we are tempted to say that the floor and a wall look like two orthogonal planes. However, this does not turn out to be the right definition to get all the mileage we want out of this. In particular, there are certain vectors in the wall that point in the same direction as vectors in the floor. The right definition is as follows:

Definition 1.3 (Orthogonal subspaces). Two subspaces V and W are orthogonal if for every v ∈ V and w ∈ W, v^T w = 0.

Thus, for example, the x-y plane and the y-z plane are not orthogonal subspaces of R^3, since, for example, (0, 1, 0) is in both the x-y plane and the y-z plane (and (0, 1, 0) · (0, 1, 0) = 1 ≠ 0).

Here is a useful criterion for deciding when two subspaces are orthogonal:

Proposition 1.2. If V and W are subspaces of R^k with bases {v_1, ..., v_m} and {w_1, ..., w_n}, respectively, then V and W are orthogonal if and only if their basis vectors are mutually orthogonal.

Proof. Certainly if V and W are orthogonal, then the basis vectors are mutually orthogonal: all vectors in V will be orthogonal to all the vectors in W. What about the other direction? Suppose we have a vector v ∈ V and a vector w ∈ W. Let A be the k × m matrix with the basis for V as its columns, and let B be the k × n matrix with the basis for W as its columns. Then we can write v and w as

v = A c_1, and w = B c_2,

for a unique choice of c_1, c_2. Now we have only the calculation

v^T w = (A c_1)^T (B c_2) = c_1^T (A^T B) c_2.

But each entry of A^T B is the dot product of a basis vector of V with a basis vector of W. By hypothesis each of these is zero. Hence

v^T w = c_1^T (A^T B) c_2 = c_1^T 0_{m×n} c_2 = 0.

Hence the dot product of any two vectors in the subspaces is zero, so orthogonal basis vectors imply orthogonal subspaces.
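Proposition 1.2 turns orthogonality of subspaces into a finite computation: put basis vectors into the columns of A and B and test whether A^T B is the zero matrix. A small sketch, applied to the x-y and y-z planes from the example above:

```python
import numpy as np

# Bases for the x-y plane and the y-z plane in R^3, stored as columns.
A = np.column_stack([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # x-y plane
B = np.column_stack([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])   # y-z plane

# The subspaces are orthogonal exactly when every entry of A^T B is zero.
print(A.T @ B)
# [[0. 0.]
#  [1. 0.]]   <- a nonzero entry, so the two planes are not orthogonal
```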

Notice that in R^3, we can have two orthogonal lines, or a plane and an orthogonal line, but there are not enough dimensions to fit two orthogonal planes: the planes would each have dimension 2, and 4 nonzero vectors in R^3 cannot be linearly independent (and so cannot be mutually orthogonal). However, in R^4, there is more room: the plane spanned by {(1, 2, 0, 0), (0, 1, 0, 0)} is orthogonal to both the line spanned by (0, 0, 1, 5) and the line spanned by (0, 0, -5, 1), which are orthogonal to each other.

1.4 Orthogonality and the fundamental subspaces.

Here's a beautiful and surprising fact: given an m × n matrix A, the row space is orthogonal to the null space, and the column space is orthogonal to the left null space! First, we verify that both the row space and null space are subspaces of R^n, and that C(A) and N(A^T) are both subspaces of R^m. Now we should prove this fact.

Theorem 1.2. Let A be an m × n matrix. Then the row space and null space are orthogonal subspaces of R^n.

Proof. The insight here is that if we take x ∈ N(A), then Ax = 0. This means that the inner product of each row of A with x is equal to zero. In other words, each row of A is orthogonal to x. The rest of the proof is simply pointing out that if the rows of A are r_1, ..., r_m, then an arbitrary vector in C(A^T) can be written

w = c_1 r_1 + ... + c_m r_m.

Taking the dot product of w with x gives us

w^T x = c_1 r_1^T x + ... + c_m r_m^T x = 0.

Hence the dot product is zero for any vector in the row space and any vector in the null space.
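A small hand-built illustration of Theorem 1.2 (this particular matrix is not from the notes):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 2.0]])      # 2 x 4, so its null space is 2-dimensional

# Two null space vectors, read off from the (already reduced) rows.
x1 = np.array([2.0, -1.0, 1.0, 0.0])
x2 = np.array([3.0, -2.0, 0.0, 1.0])
print(A @ x1, A @ x2)                     # both [0. 0.]

# An arbitrary row space vector w = A^T y is orthogonal to both of them.
w = A.T @ np.array([5.0, -7.0])
print(w @ x1, w @ x2)                     # 0.0 0.0
```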

We actually have a second, much cleaner proof!

Proof. Suppose v ∈ C(A^T) and x ∈ N(A). Then v = A^T y for some vector y ∈ R^m. In this case,

v^T x = (A^T y)^T x = y^T A x = y^T 0 = 0.

We leave the other statement (the column space is orthogonal to the left null space) as an exercise!

Example 1.3. Let

A = ( 2  3 )
    ( 4  6 )
    ( 6  9 ).

Then the row space is one dimensional, and equal to multiples of (2, 3). Hence the null space must be all multiples of (-3, 2), since up to scaling this is the only vector orthogonal to the line determined by (2, 3). Similarly, the column space is generated by (2, 4, 6), so the left null space must be the plane of all vectors orthogonal to this one. Pleasantly, the equation for this plane is given by

2x + 4y + 6z = 0,

which is exactly what the condition (2, 4, 6) · (x, y, z) = 0 says. We've thus derived the equation of a plane by observing that it must be orthogonal to a certain line.

Notice that the null space is not just orthogonal to the row space. It contains every vector orthogonal to the row space. We have a word for this:

Definition 1.4 (Orthogonal complement). Given a subspace V, we call the subspace of all vectors orthogonal to V the orthogonal complement of V, and denote it by V^⊥ ("V perp").

Using our new words, we can say the following:

The row space is the orthogonal complement of the null space.

The column space is the orthogonal complement of the left null space.

This doesn't look like such a big deal, but it gives us the following theorem for free:

Theorem 1.3. The equation Ax = b has a solution if and only if b^T y = 0 whenever A^T y = 0.
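Both Example 1.3 and Theorem 1.3 can be checked numerically for this matrix; a brief sketch:

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 6.0],
              [6.0, 9.0]])

# Example 1.3: (-3, 2) spans the null space ...
print(A @ np.array([-3.0, 2.0]))        # [0. 0. 0.]

# ... and the left null space is the plane 2x + 4y + 6z = 0.
y = np.array([1.0, 1.0, -1.0])          # satisfies 2 + 4 - 6 = 0
print(A.T @ y)                          # [0. 0.]

# Theorem 1.3: b = (2, 4, 6) lies in C(A), so b^T y = 0 for this y in N(A^T);
# b = (1, 0, 0) does not, and the test fails.
print(np.array([2.0, 4.0, 6.0]) @ y)    # 0.0
print(np.array([1.0, 0.0, 0.0]) @ y)    # 1.0
```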

2 Projection onto subspaces.

We now turn our attention back to projecting onto subspaces. We first treat the easiest example, that of projection onto a line, and then general projection matrices. We should clarify our goal: given a subspace S, we hope to have a matrix P_S so that for any vector b, the vector P_S b is not only in S, but is the closest vector in S to b.

2.1 Projection onto one dimensional subspaces.

In our first situation, we will project onto a line. Intuitively, given vectors a and b, we are asking: how much does b point in the direction of a? Our answer starts by considering these two vectors in the plane. Note that this is general in a certain sense, since two vectors in R^3 still only span a plane. Now some trigonometry allows us to derive that

a^T b = ||a|| ||b|| cos θ.

In particular, we notice that if α is the angle a = (a_1, a_2) makes with the x-axis, and β is the angle b = (b_1, b_2) makes with the x-axis (and we suppose that β > α), then sin α = a_2/||a||, cos α = a_1/||a||, and similarly for β and b. Now the angle between a and b is θ = β - α. We plug in:

cos θ = cos β cos α + sin β sin α = (a_1 b_1 + a_2 b_2) / (||a|| ||b||) = a^T b / (||a|| ||b||).

Now let us call p the closest point to b on the line through a. Then the line from p to b must be at a right angle to a. That is to say,

a^T (p - b) = 0.

However, we also know that p = λa, since the projection must lie on the line determined by a. Putting these together, we find

λ a^T a - a^T b = 0,

so

λ = a^T b / (a^T a).

Thus we find that

p = (a^T b / (a^T a)) a.

However, our goal is to find a matrix P_a so that p = P_a b. Since a^T b / (a^T a) is just a number, we can move the vector a to the left of it and regroup: p = a (a^T b) / (a^T a) = ((a a^T) / (a^T a)) b. Hence

P_a = (a a^T) / (a^T a).

We will record a few observations about P_a:

1. If a ∈ R^n, then a a^T is an n × n matrix.

2. The matrix a a^T has rank 1, since each column (and row!) will be a multiple of a. Hence, the row and column space both have basis {a}.

3. P_a is symmetric.

4. The matrix is scale invariant: P_{2a} = P_a. Intuitively this is pleasing, since the subspace defined by 2a is the same as the subspace defined by a.

5. If we apply P_a to a vector already on the line determined by a, it doesn't do anything:

   P_a (λa) = ((a a^T) / (a^T a)) (λa) = λ a (a^T a) / (a^T a) = λa.

   Intuitively, if a point is already on this line, then the closest point to it on the line is itself.

6. P_a^2 = P_a. This is a generalization of the previous observation. Since P_a b = λa lies on the line determined by a, applying P_a again won't do anything.
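These observations are easy to verify numerically; a minimal sketch:

```python
import numpy as np

def proj_line(a):
    """Projection matrix onto the line spanned by a: P = a a^T / (a^T a)."""
    a = np.asarray(a, dtype=float)
    return np.outer(a, a) / (a @ a)

a = np.array([1.0, 2.0, 3.0])
P = proj_line(a)

print(np.linalg.matrix_rank(P))           # 1
print(np.allclose(P, P.T))                # True: symmetric
print(np.allclose(proj_line(2 * a), P))   # True: scale invariant, P_{2a} = P_a
print(np.allclose(P @ P, P))              # True: P^2 = P
print(np.allclose(P @ (5 * a), 5 * a))    # True: vectors on the line are fixed
```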

10 Example.1 (Projection onto (1,, 3. Suppose we wish to project vectors onto the line spanned by (1,, 3. Then by the formula above, P (1,,3 = 1 1 ( 1 3 = Then, for example, the vector b = (0, 0, 1 is projected to P (1,,3 b = = Least squares. Now let us look at an applied example. Suppose we are searching for a linear relationship between a person s height and their weight (in reality, we will get a much better fit from height 3, since volume and length live in different dimensions, but let s ignore this. Anyways, supposing we take a random sample of 5 people, and find their heights (in inches are h = (69, 7, 80, 80, 73, and their respective weights are w = (173, 193, 8, 11, 187 (note: this is actual data from the Rice basketball team, so the randomness of the data could be argued. Now we are claiming that hx = w, i.e ( x = A reasonable, and computationally friendly, way to measure how good our fit is is via least squares. That is to say, for any potential solution x, we have an error which is given by E(x = (69x 173 +(7x 193 +(80x 8 +(80x 11 +(73x In matrix notation, E(x = hx w. 10

2.2 Least squares.

Now let us look at an applied example. Suppose we are searching for a linear relationship between a person's height and their weight (in reality, we will get a much better fit from height^3, since volume and length live in different dimensions, but let's ignore this). Anyways, suppose we take a random sample of 5 people, and find their heights (in inches) are h = (69, 72, 80, 80, 73) and their respective weights are w = (173, 193, 228, 211, 187) (note: this is actual data from the Rice basketball team, so the randomness of the data could be argued). Now we are claiming that hx = w, i.e.,

(69, 72, 80, 80, 73)^T x = (173, 193, 228, 211, 187)^T.

A reasonable, and computationally friendly, way to measure the quality of our fit is via least squares. That is to say, for any potential solution x, we have an error which is given by

E(x) = (69x - 173)^2 + (72x - 193)^2 + (80x - 228)^2 + (80x - 211)^2 + (73x - 187)^2.

In matrix notation, E(x) = ||hx - w||^2.

From calculus, we know that to find a minimizer of this expression, we must take a derivative and set it to zero:

0 = 2(69)(69x - 173) + 2(72)(72x - 193) + 2(80)(80x - 228) + 2(80)(80x - 211) + 2(73)(73x - 187).

Again, matrices simplify the notation quite a bit:

0 = E'(x) = d/dx ||hx - w||^2 = 2 h^T (hx - w).

Hence, we have

x = h^T w / (h^T h).

Wading through the equations above (or simply using our derived formula) gives the best value of x for this data, and plugging it back in gives the error E(x). However, the important thing is that even though hx = w is not solvable, the best (in the sense of least squares) solution to it is given by the projection of w onto h.
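A sketch of the one-variable fit, using the heights and weights listed above:

```python
import numpy as np

h = np.array([69.0, 72.0, 80.0, 80.0, 73.0])        # heights (inches)
w = np.array([173.0, 193.0, 228.0, 211.0, 187.0])   # weights

# One-variable least squares: minimize ||h x - w||^2.
x = (h @ w) / (h @ h)
error = np.linalg.norm(h * x - w)
print(x, error)

# The same answer from NumPy's general least-squares routine.
x_lstsq, *_ = np.linalg.lstsq(h.reshape(-1, 1), w, rcond=None)
print(x_lstsq)
```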

2.3 Projection onto subspaces.

Suppose instead of looking for a relationship between height and weight, we had more explanatory variables. For concreteness, let us add in a variable for how old they are, and try to solve Ax = w, where A is now a matrix whose first column contains heights, and whose second column contains ages. For our particular example, we will take A to have the five heights above as its first column and the players' ages as its second column. Also, x, rather than being a single number indicating how much height affects weight, will now be a vector, x = (h, a), indicating the relative inputs of height and age on weight.

Again, there is no solution to Ax = w, so we will try to find an x which minimizes

E(x) = ||Ax - w||^2.

But we now recognize this as finding the closest vector Ax in the column space of A to w. Hence, we are just projecting w onto C(A). We have other information too: appealing to geometry, we know that this error vector must be perpendicular to C(A). Hence (Ax - w) ∈ N(A^T), since the left null space contains all vectors perpendicular to the column space of A. In other words,

A^T (Ax - w) = 0.

Then if we knew that A^T A was invertible, we could write down that

x = (A^T A)^{-1} A^T w,

so that our least squares approximation is

ŵ = Ax = A (A^T A)^{-1} A^T w.

It is important to notice that A is typically a very tall, very non-square matrix, so we usually cannot use the identity (A^T A)^{-1} = A^{-1} (A^T)^{-1} and conclude that ŵ = w. Also, in practice, the best way to solve this problem is to take a problem Ax = b, and instead solve A^T A x = A^T b. In this way we avoid having to find an inverse matrix, and can instead use Gaussian elimination.
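A sketch of solving the normal equations A^T A x = A^T w directly. The second column below is a made-up set of ages purely for illustration, since the notes do not record the actual values:

```python
import numpy as np

h = np.array([69.0, 72.0, 80.0, 80.0, 73.0])
ages = np.array([19.0, 20.0, 21.0, 22.0, 20.0])    # hypothetical ages (illustration only)
w = np.array([173.0, 193.0, 228.0, 211.0, 187.0])

A = np.column_stack([h, ages])

# Solve A^T A x = A^T w by elimination rather than forming (A^T A)^{-1}.
x = np.linalg.solve(A.T @ A, A.T @ w)
w_hat = A @ x                                      # the projection of w onto C(A)

# The error vector is perpendicular to the column space: A^T (A x - w) = 0.
print(x)
print(np.allclose(A.T @ (w_hat - w), 0))           # True
```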

13 Lemma.1. A T A and A have the same nullspace. Proof. To show this, we will first show that every vector in the null space of A is also in the nullspace of A T A. Then we show the opposite: every vector in the nullspace of A T A is in the nullspace of A. The first part of the proof is easy: if Ax = 0, then A T Ax = A T 0 = 0. Now, as planned, suppose that A T Ax = 0. Taking the dot product with x, we get x T A T Ax = Ax = 0. But the only vector with length 0 is the zero vector, so Ax = 0, and x is also in the nullspace of A. so Going way back to our example, we get that (.63 x, 0.1 w = Ax This compares vaguely well with the observed weights of w = As a point of interest, the square root of the squared error is about Hence, adding the age of the players didn t really help all that much. Using the cube of the heights, this error drops to 7.8. Also note that as you add more explanatory variables, you are should expect to get less error, since you are projecting onto larger subspaces. 13

3 Orthogonal Matrices and Gram-Schmidt.

We first define an orthogonal matrix, which is a pretty bad name for the matrix: not because it lies, but because it doesn't tell the whole truth.

Definition 3.1 (Orthogonal Matrix). An orthogonal matrix is a square matrix whose columns are orthonormal.

Unwinding some definitions, we observe that an orthogonal matrix must be

Square: n × n,

Normalized: each column must have length 1, and

Orthogonal: the columns are mutually orthogonal.

Let's first observe that the columns of an orthogonal matrix form an orthonormal basis for R^n. Now some examples:

Example 3.1 (Permutation matrices are orthogonal). Every permutation matrix is an orthogonal matrix: its columns are the standard basis vectors in some order, so each column has length 1 and each is orthogonal to the others.

Example 3.2 (Rotation matrices are orthogonal). The rotation matrices we introduced earlier are also orthogonal. We defined

R_θ := ( cos θ  -sin θ )
       ( sin θ   cos θ ).

Notice that the length of the columns is 1:

(cos θ, sin θ) · (cos θ, sin θ) = cos^2 θ + sin^2 θ = 1,

and the columns are orthogonal:

(-sin θ, cos θ) · (cos θ, sin θ) = -sin θ cos θ + cos θ sin θ = 0.
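Both families of examples can be checked by testing Q^T Q = I; a minimal sketch:

```python
import numpy as np

# A permutation matrix: the identity with its rows reordered.
P = np.eye(3)[[2, 0, 1]]

# A rotation matrix.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(P.T @ P, np.eye(3)))   # True
print(np.allclose(R.T @ R, np.eye(2)))   # True
```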

Another great property of orthogonal matrices:

Theorem 3.1. If Q is an orthogonal matrix, then Q^{-1} = Q^T.

The proof of this is simply that the product Q^T Q is a table of inner products between the columns of Q, and the inner product between two different columns is zero. Hence the only nonzero entries in the product Q^T Q will be along the diagonal, and these are the (squared) lengths of the columns of Q, which are 1.

Now these orthonormal bases have yet another great property: suppose you have a vector b which you'd like to write in the basis given by the columns of Q. More precisely, we wish to write

b = x_1 q_1 + ... + x_n q_n.

In order to do this, we need to solve for each x_j. But we do this by just taking the inner product of each side with q_j. Then we find

q_j^T b = x_j q_j^T q_j = x_j.

Example 3.3 (Changing bases). Suppose we have a vector b = (1, 2)^T. Let's first write it as a linear combination of the vectors which are the columns of I, the identity matrix. Then we get

x_1 = (1, 0) · (1, 2) = 1, and x_2 = (0, 1) · (1, 2) = 2,

and so

(1, 2) = 1 (1, 0) + 2 (0, 1),

which we probably could've figured out before starting this. Let's now write the same vector as a sum of the columns of the orthogonal matrix

Q = (1/√2) ( 1  -1 )
           ( 1   1 ).

A similar calculation to the above gives us

x_1 = q_1^T b = 3/√2, and x_2 = q_2^T b = 1/√2.

Then we get that

(1, 2) = (3/√2) · (1/√2)(1, 1) + (1/√2) · (1/√2)(-1, 1) = (3/2)(1, 1) + (1/2)(-1, 1),

a much less obvious conclusion (though the calculation was mechanically simple)!
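The change of basis in Example 3.3 can be repeated numerically with the matrix Q above; coordinates in an orthonormal basis are just the inner products Q^T b:

```python
import numpy as np

b = np.array([1.0, 2.0])
Q = np.array([[1.0, -1.0],
              [1.0,  1.0]]) / np.sqrt(2.0)

x = Q.T @ b                       # coordinates of b in the basis of columns of Q
print(x)                          # [3/sqrt(2), 1/sqrt(2)] ≈ [2.121, 0.707]
print(np.allclose(Q @ x, b))      # True: b = x_1 q_1 + x_2 q_2
```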

We give another example from calculus.

Example 3.4. Suppose we wish to analyze the graph of the equation

x^2 - 6xy + y^2 = 1.

We know from calculus that since this is a second order equation, it must be a conic section: either a parabola, hyperbola, or ellipse. However, this is not an equation that we are typically taught to recognize in calculus courses. In particular, the cross term 6xy throws us off. One way of getting around this is via a change in basis: we'll define

( x )   ( cos θ  -sin θ ) ( u )   ( u cos θ - v sin θ )
( y ) = ( sin θ   cos θ ) ( v ) = ( u sin θ + v cos θ ).

What we now do is choose a θ so that there is no cross term. In particular, we find that, through direct substitution,

1 = (u cos θ - v sin θ)^2 - 6(u cos θ - v sin θ)(u sin θ + v cos θ) + (u sin θ + v cos θ)^2
  = u^2 (1 - 6 cos θ sin θ) - 6uv (cos^2 θ - sin^2 θ) + v^2 (1 + 6 sin θ cos θ).

Hence, we choose any θ (remember, this exercise is just to make calculations easier) so that cos^2 θ - sin^2 θ = 0. One such θ is π/4. Plugging in, we get

-2u^2 + 4v^2 = 1.

This is an equation that we recognize as a hyperbola in the (u, v)-plane. Reflecting on our calculation, the (u, v) plane is a rotation of the (x, y) plane by an angle of π/4. We could alternatively view this as the graph of a rotated hyperbola in the (x, y) plane.
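A quick numerical check of the change of variables in Example 3.4, evaluating both sides at a few sample points (NumPy assumed):

```python
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rng = np.random.default_rng(0)
for u, v in rng.standard_normal((5, 2)):
    x, y = R @ np.array([u, v])
    original = x**2 - 6*x*y + y**2     # quadratic in the (x, y) coordinates
    rotated = -2*u**2 + 4*v**2         # quadratic in the (u, v) coordinates
    print(np.isclose(original, rotated))   # True every time
```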

3.1 Gram-Schmidt.

The goal of Gram-Schmidt is to take a (generally non-orthonormal) basis {x_1, ..., x_n} for a subspace S, and to find an orthonormal basis {q_1, ..., q_n} for S. This is an algorithm, and a computationally simple one at that, though the arithmetic quickly becomes ugly.

Not finished!
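Since the notes stop here, the following is only a sketch of the classical Gram-Schmidt procedure (not the authors' worked example):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn linearly independent vectors into an orthonormal basis for their span,
    by subtracting from each vector its projections onto the earlier q's."""
    qs = []
    for x in vectors:
        v = np.array(x, dtype=float)
        for q in qs:
            v = v - (q @ v) * q          # remove the component along q
        qs.append(v / np.linalg.norm(v))
    return qs

basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0])]

Q = np.column_stack(gram_schmidt(basis))
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: the columns are orthonormal
```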
