
Chapter 9

The Orthogonal Geometry of $\mathbb{R}^n$

In this chapter, we will study various geometric problems, such as finding the minimal distance from a point of $\mathbb{R}^n$ to a subspace. This is a version of the least squares problem. We will also prepare the way for the topic of the next chapter, which is the Principal Axis Theorem. In particular, we will need to show that every subspace of $\mathbb{R}^n$ has an orthonormal basis. The last section is devoted to studying the rotations of $\mathbb{R}^3$ and to giving examples of rotation groups.

9.1 A Fundamental Problem

The purpose of this section is to solve the problem of finding the distance from a vector in $\mathbb{R}^n$ to a subspace. This is a problem we already solved for a plane through the origin in $\mathbb{R}^3$. Recall that the distance from $(x_0, y_0, z_0) \in \mathbb{R}^3$ to the plane $ax + by + cz = 0$ is

$$d = \frac{|ax_0 + by_0 + cz_0|}{(a^2 + b^2 + c^2)^{1/2}}.$$

What this formula represents, of course, is the length of the projection of $(x_0, y_0, z_0)$ onto the line through the origin normal to the plane.

9.1.1 The orthogonal complement of a subspace

We can easily formulate an $n$-dimensional version of this problem, the solution of which has many important applications. Consider a subspace $W$ of $\mathbb{R}^n$.

Definition 9.1. The orthogonal complement of $W$ is the subspace $W^\perp$ of $\mathbb{R}^n$ consisting of all vectors $v \in \mathbb{R}^n$ which are orthogonal to every vector in $W$. That is,

$$W^\perp = \{\, v \in \mathbb{R}^n \mid v \cdot w = 0 \ \text{for all } w \in W \,\}. \tag{9.1}$$

The orthogonal complement $W^\perp$ is clearly also a subspace of $\mathbb{R}^n$. It is easy to visualize in terms of matrices. Suppose $W$ is the column space of an $n \times k$ real matrix $A$. Then clearly $W^\perp = \mathcal{N}(A^T)$. In other words, ignoring the distinction between row and column vectors, the row space and the null space of a matrix are the orthogonal complements of one another. Applying our old principle that the number of variables in a homogeneous linear system is the number of corner variables plus the number of free variables, and using the fact that $A$ and $A^T$ have the same rank, we thus see that

$$\dim(W) + \dim(W^\perp) = n.$$

This leads to a basic result.

Proposition 9.1. Let $W$ be a subspace of $\mathbb{R}^n$ and $W^\perp$ the orthogonal complement of $W$. Then:

(i) $W \cap W^\perp = \{0\}$.

(ii) Every $v \in \mathbb{R}^n$ can be expressed in exactly one way as the sum of a vector in $W$ and a vector in $W^\perp$. In particular, $W + W^\perp = \mathbb{R}^n$.

Proof. Part (i) follows immediately from the fact that if $v \in W \cap W^\perp$, then $v \cdot v = 0$, so $v = 0$. The proof of (ii) is harder. Let $w_1, \dots, w_r$ be a basis of $W$ and $v_1, \dots, v_s$ a basis of $W^\perp$. We just showed $r + s = n$, so all we have to show is that $w_1, \dots, w_r, v_1, \dots, v_s$ are independent, since $n$ independent vectors in $\mathbb{R}^n$ form a basis. But we know from (i) that if we have a sum $w + v = 0$, where $w \in W$ and $v \in W^\perp$, then $w = v = 0$. Thus if a linear combination of $w_1, \dots, w_r, v_1, \dots, v_s$ is $0$, then the coefficients of the $w_i$ are all $0$ and, similarly, the coefficients of the $v_j$ are all $0$. Therefore we have the independence, so the proof is finished.

We can now solve the following distance problem.
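Numerically, a basis of $W^\perp$ can be read off from the singular value decomposition. The following NumPy sketch (my illustration, not from the text; the matrix $A$ is made up) computes $W^\perp = \mathcal{N}(A^T)$ and checks that $\dim(W) + \dim(W^\perp) = n$.

```python
import numpy as np

# Columns of A span a subspace W of R^n (here n = 4, k = 2).
A = np.array([[1.,  1.],
              [1., -1.],
              [1., -1.],
              [1.,  1.]])

# W-perp is the null space of A^T: the rows of Vt in the SVD of A^T
# belonging to (near-)zero singular values span that null space.
U, s, Vt = np.linalg.svd(A.T)
rank = int(np.sum(s > 1e-12))
W_perp = Vt[rank:].T                        # columns form a basis of W-perp

n = A.shape[0]
assert rank + W_perp.shape[1] == n          # dim W + dim W-perp = n
assert np.allclose(A.T @ W_perp, 0.0)       # every basis vector is orthogonal to W
print(W_perp)
```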

Problem: Let $x \in \mathbb{R}^n$ be arbitrary, and let $W$ be a subspace of $\mathbb{R}^n$. Minimize $\|x - y\|$, where $y$ is an arbitrary vector in $W$.

One thing to observe is that minimizing $\|x - y\|$ is equivalent to solving the least squares problem of minimizing the sum of squares $\|x - y\|^2$. Recall that when $x = w + v$ with $w \in W$ and $w \cdot v = 0$, we call $w$ the component of $x$ in $W$. For any $y \in W$,

$$\|x - y\|^2 = \|(x - w) + (w - y)\|^2 = \|x - w\|^2 + \|w - y\|^2 \ \ \text{(since } (x - w) \cdot (w - y) = 0\text{)} \ \ \ge \|x - w\|^2.$$

This gives the solution.

Proposition 9.2. The minimum distance from $x \in \mathbb{R}^n$ to the subspace $W$ is $\|x - w\|$, where $w$ is the component of $x$ in $W$. Put another way, the minimum distance from $x$ to $W$ is the length of the component of $x$ in $W^\perp$.

9.1.2 The projection on a subspace

In view of our solution to the least squares problem, the next step is to find an expression for the component of $x$ in $W$. As above, let $w_1, \dots, w_r$ be a basis of $W$, and put $A = (w_1 \cdots w_r)$. Thus $W$ is the column space $\mathrm{col}(A)$ of $A$. The conditions that define $w$ are that

$$w = Ay \ \text{for some } y \in \mathbb{R}^r \quad \text{and} \quad A^T(x - w) = 0,$$

since the second condition says that $x - w \in W^\perp$, due to the fact that the rows of $A^T$ span $W$. Substituting, we get $A^T x = A^T A y$. We already know that when $A$ is a real matrix with independent columns, $A^T A$ is invertible. Thus $y = (A^T A)^{-1} A^T x$. Hence,

$$w = Ay = A(A^T A)^{-1} A^T x. \tag{9.2}$$

Hence we have an expression for the component $w$ of $x$. We can now define the projection $P_W$ of $\mathbb{R}^n$ onto $W$ by putting $P_W(x) = w$, where $w$ is the component of $x \in \mathbb{R}^n$ in $W$. Thus, by (9.2),

$$P_W(x) = A(A^T A)^{-1} A^T x. \tag{9.3}$$

Since $P_W$ is given by a matrix, it is obviously a linear map.
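As an illustration of (9.3), here is a short NumPy sketch (mine, not the book's code; the matrix $A$ and the vector $x$ are made up) that builds $P_W$ and computes the distance from $x$ to $W$ as in Proposition 9.2.

```python
import numpy as np

def proj_matrix(A):
    """Matrix of the projection onto col(A), by formula (9.3).
    Assumes the columns of A are independent."""
    return A @ np.linalg.inv(A.T @ A) @ A.T

A = np.array([[1., 1.],
              [0., 1.],
              [1., 0.]])
P = proj_matrix(A)

x = np.array([1., 2., 3.])
w = P @ x                                   # component of x in W = col(A)

assert np.allclose(A.T @ (x - w), 0.0)      # x - w lies in W-perp
print("distance from x to W:", np.linalg.norm(x - w))
```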

Example 9.1. Let $W$ be the line in $\mathbb{R}^n$ spanned by $w$. Here the projection $P_W$ is simply

$$P_W = w\,(w^T w)^{-1} w^T.$$

This is a formula we already saw in Chapter 1.

Example 9.2. If $A$ is any matrix with two independent columns, then $A$ has rank $2$, and the matrix of the projection onto $W = \mathrm{col}(A)$ is found by direct computation from $P_W = A(A^T A)^{-1} A^T$; a numerical instance of such a computation appears in the sketch above.

The next proposition simply says that projections on a subspace behave exactly like the projections on a line considered in Chapter 1.

Proposition 9.3. The projection $P_W$ of $\mathbb{R}^n$ onto a subspace $W$ has the following properties:

(i) if $w \in W$, then $P_W(w) = w$;

(ii) $P_W P_W = P_W$;

(iii) $P_W$ is symmetric; and finally,

(iv) for any $x \in \mathbb{R}^n$, $x - P_W(x)$ is orthogonal to $W$.

Consequently, $x = P_W(x) + (x - P_W(x))$ is the orthogonal decomposition of $x$ into the sum of a component in $W$ and a component orthogonal to $W$.

9.2 Orthonormal Sets in $\mathbb{R}^n$

9.2.1 Orthonormal Bases

Recall that the dot product of two vectors $v, w \in \mathbb{R}^n$ is defined to be

$$v \cdot w := v^T w = \sum_{i=1}^n v_i w_i,$$

and the length $\|v\|$ of $v$ is obtained from the dot product as $\|v\| := \sqrt{v \cdot v}$.

Definition 9.2. A collection of unit vectors in $\mathbb{R}^n$ is called orthonormal, or ON for short, if the vectors are mutually perpendicular to each other. An orthonormal basis of $\mathbb{R}^n$ (ONB for short) is a basis that is ON. More generally, an orthonormal basis of a subspace $W$ of $\mathbb{R}^n$ is a basis of $W$ which is ON.

Proposition 9.4. The vectors $u_1, u_2, \dots, u_n$ give an ONB of $\mathbb{R}^n$ if and only if the matrix $U = (u_1\ u_2 \cdots u_n)$ is orthogonal.

Proof. This is clear, since $U^T U = (u_i^T u_j) = (u_i \cdot u_j)$, so $U^T U = I_n$ exactly when the $u_i$ are ON.

Proposition 9.5. Any ON set in $\mathbb{R}^n$ is linearly independent.

The proof is an exercise. In order to prove the Principal Axis Theorem, we will need to know that every subspace $W$ of $\mathbb{R}^n$ admits an ONB. This will be proved in the next section.

Example 9.3. Here are some examples of ONBs.

(a) The standard basis $e_1, \dots, e_n$ is an ONB of $\mathbb{R}^n$.

(b) $\frac{1}{\sqrt{3}}(1,1,1)^T$, $\frac{1}{\sqrt{6}}(1,-2,1)^T$, $\frac{1}{\sqrt{2}}(1,0,-1)^T$ give an ONB of $\mathbb{R}^3$. The first two basis vectors are an ONB of the plane $x - z = 0$.

(c) Both the columns and the rows of an $n \times n$ orthogonal matrix $Q$ are an ONB of $\mathbb{R}^n$ (why?). Thus any nonsymmetric orthogonal matrix $Q$ gives two distinct ONBs of $\mathbb{R}^n$.

9.2.2 Projecting via an ONB

Let's first consider the problem of expanding a vector in $\mathbb{R}^n$ in terms of an ONB. After that, we will find a formula for the projection $P_W$ onto a subspace $W$ of $\mathbb{R}^n$. The solution to the first problem is very neat.

Proposition 9.6. Assume $u_1, u_2, \dots, u_n$ is an ONB of $\mathbb{R}^n$. Then any $w \in \mathbb{R}^n$ has the unique expression

$$w = \sum_{i=1}^n (w \cdot u_i)\, u_i = \sum_{i=1}^n (u_i^T w)\, u_i. \tag{9.4}$$

We will call (9.4) the projection formula for $\mathbb{R}^n$, since it says that any vector in $\mathbb{R}^n$ is the sum of its projections on an ONB. The coefficients $x_i = w \cdot u_i$ are often called the Fourier coefficients of $w$ with respect to the orthonormal basis. The projection formula can also be stated in matrix form as follows:

$$I_n = \sum_{i=1}^n u_i u_i^T. \tag{9.5}$$

In other words, the sum of the projections on an ONB is the identity.

Example 9.4. For example, with the ONB $u_1 = \frac{1}{2}(1,1,1,1)$, $u_2 = \frac{1}{2}(1,-1,1,-1)$, $u_3 = \frac{1}{2}(1,1,-1,-1)$, $u_4 = \frac{1}{2}(1,-1,-1,1)$ of $\mathbb{R}^4$,

$$(1,0,0,0) = \tfrac{1}{2}\,u_1 + \tfrac{1}{2}\,u_2 + \tfrac{1}{2}\,u_3 + \tfrac{1}{2}\,u_4.$$

To prove the projection formula (9.4), write $w = \sum_{i=1}^n x_i u_i$. To find the $x_i$, we consider the system $Qx = w$, where $Q = (u_1\ u_2 \cdots u_n)$. Since $Q$ is orthogonal, the unique solution is $x = Q^T w$. But this says that each $x_i = u_i^T w = w \cdot u_i$, which gives the desired formula.

More generally, suppose $W$ is a subspace of $\mathbb{R}^n$ with an ONB $u_1, u_2, \dots, u_k$. Then, by a similar argument, any $w \in W$ has the unique expansion

$$w = \sum_{i=1}^k (w \cdot u_i)\, u_i. \tag{9.6}$$

To see this, first write $w = \sum_{i=1}^k x_i u_i$. Then observe that

$$w \cdot u_j = \Big(\sum_{i=1}^k x_i u_i\Big) \cdot u_j = \sum_{i=1}^k x_i (u_i \cdot u_j) = x_j,$$

since $u_i \cdot u_j = 0$ if $i \ne j$ and $u_i \cdot u_i = 1$.

We now claim that the projection $P_W$ of $\mathbb{R}^n$ onto $W$ is given by

$$P_W(x) = \sum_{i=1}^k (x \cdot u_i)\, u_i. \tag{9.7}$$

That is, the matrix of $P_W$ is

$$P_W = \sum_{i=1}^k u_i u_i^T = QQ^T, \tag{9.8}$$

where $Q = (u_1 \cdots u_k)$. To see this, apply the formula $P_W = A(A^T A)^{-1} A^T$ to the case $A = Q$ and notice that $Q^T Q = I_k$ (check this).

Proposition 9.7. Let $Q$ be the $n \times k$ matrix $Q = (u_1\ u_2 \cdots u_k)$, where $u_1, u_2, \dots, u_k$ is an ONB of the subspace $W$. Then the matrix of $P_W$ is

$$P_W = QQ^T. \tag{9.9}$$

Example 9.5. Consider the subspace

$$W = \mathrm{span}\{(1,1,1,1)^T,\ (1,-1,-1,1)^T\}.$$

In order to find the matrix of $P_W$, we must compute $P_W(e_i)$ for $i = 1, 2, 3, 4$. Observe that $u_1 = \frac{1}{2}(1,1,1,1)^T$ and $u_2 = \frac{1}{2}(1,-1,-1,1)^T$ are an ONB of $W$. Now, by a straightforward computation,

$$P_W(e_1) = \tfrac{1}{2}(1,0,0,1)^T, \qquad P_W(e_2) = \tfrac{1}{2}(0,1,1,0)^T.$$

By inspection, $P_W(e_3) = P_W(e_2)$ and $P_W(e_4) = P_W(e_1)$. Hence the matrix $A$ of $P_W$ is

$$A = \frac{1}{2}\begin{pmatrix} 1 & 0 & 0 & 1\\ 0 & 1 & 1 & 0\\ 0 & 1 & 1 & 0\\ 1 & 0 & 0 & 1 \end{pmatrix}.$$

Note that we could also have calculated $QQ^T$, where $Q = (u_1\ u_2)$.
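Formulas (9.4), (9.5), and (9.9) are easy to check numerically. The sketch below (mine, not the text's) uses the ONB of Example 9.4 and recovers the projection matrix of Example 9.5.

```python
import numpy as np

# The ONB of R^4 from Example 9.4, as the columns u_1, ..., u_4 of U.
U = 0.5 * np.array([[1.,  1.,  1.,  1.],
                    [1., -1.,  1., -1.],
                    [1.,  1., -1., -1.],
                    [1., -1., -1.,  1.]])

assert np.allclose(U.T @ U, np.eye(4))      # U is orthogonal (Proposition 9.4)

# Projection formula (9.4): the Fourier coefficients of e1 are all 1/2.
e1 = np.array([1., 0., 0., 0.])
coeffs = U.T @ e1
assert np.allclose(U @ coeffs, e1)
print(coeffs)                               # [0.5 0.5 0.5 0.5]

# Example 9.5: its ONB of W is the first and last columns of U,
# and the matrix of P_W is Q Q^T by formula (9.9).
Q = U[:, [0, 3]]
P = Q @ Q.T
expected = 0.5 * np.array([[1., 0., 0., 1.],
                           [0., 1., 1., 0.],
                           [0., 1., 1., 0.],
                           [1., 0., 0., 1.]])
assert np.allclose(P, expected)
```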

The projection onto a hyperplane $W$ in $\mathbb{R}^n$ with unit normal $u$ clearly has the form

$$P_W(x) = (I_n - uu^T)\,x.$$

Using the same reasoning as in Chapter 1, we define the reflection through $W$ to be the linear transformation

$$H_u = I_n - 2P_u, \tag{9.10}$$

where $P_u = uu^T$ is the projection onto the line spanned by $u$. We leave it as an exercise to show that $H_u$ is a symmetric orthogonal matrix, $H_u u = -u$, and $H_u x = x$ if $x \in W$. That is, $H_u$ has the expected properties of a reflection.

The notions of Fourier coefficients and orthogonal projections are very useful in infinite dimensional situations also. A set $S$ of functions in $C[a,b]$ is called orthonormal if for any $f, g \in S$,

$$(f, g) = \int_a^b f(t)g(t)\,dt = \begin{cases} 0 & \text{if } f \ne g,\\ 1 & \text{if } f = g. \end{cases}$$

The formula for projecting $C[a,b]$ onto the subspace $W$ spanned by a finite set of functions is exactly as given above, once an ONB of $W$ has been constructed. This is our next step.

9.2.3 The Pseudo-Inverse and Least Squares

Suppose $A$ has independent columns, and $W$ denotes the column space of $A$. Then the matrix

$$A^+ = (A^T A)^{-1} A^T$$

is called the pseudo-inverse of $A$. If $A$ is square, then $A$ and $A^T$ are both invertible, so $A^+ = A^{-1}$. Moreover, $A^+$ is always a left inverse of $A$; that is, $A^+ A = I_k$.

To see what is going on, it is helpful to consider $A$ as a linear transformation $A : \mathbb{R}^k \to \mathbb{R}^n$. Since the columns of $A$ are independent, $\mathcal{N}(A) = \{0\}$, and thus we know that $A$ is one to one; that is, $Ax = Ay$ implies $x = y$. We have just shown that such a one to one linear transformation $A$ always has a left inverse $B$, that is, a $k \times n$ matrix $B$ such that $BA = I_k$, namely the pseudo-inverse $A^+$. However, when $k < n$ there are many left inverses of $A$. In fact, if $C$ is any $k \times n$ matrix such that $\mathrm{col}(A) \subset \mathcal{N}(C)$ (such $C$ exist when $k < n$), then $CA = O$, so $(A^+ + C)A = A^+ A + CA = I_k + O = I_k$. Hence $A^+ + C$ is also a left inverse of $A$. (Indeed, every left inverse of $A$ has the form $A^+ + C$.)

The special property of the pseudo-inverse $A^+$ is that not only is $A^+ A = I_k$, but $AA^+ = P_W$. Thus $A^+$ solves the least squares problem for $W$: given $b \in \mathbb{R}^n$, find $x \in \mathbb{R}^k$ so that $Ax$ is the element of $W$ nearest $b$. The solution is of course $x = A^+ b$.
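Here is a small numerical check of these properties (a sketch with a made-up matrix $A$, not the book's code).

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
A_plus = np.linalg.inv(A.T @ A) @ A.T       # pseudo-inverse (A^T A)^{-1} A^T

assert np.allclose(A_plus @ A, np.eye(2))   # left inverse: A+ A = I_k
P_W = A @ A_plus                            # A A+ = P_W, the projection on col(A)
assert np.allclose(P_W @ P_W, P_W)          # P_W is idempotent
assert np.allclose(P_W, P_W.T)              # and symmetric

# NumPy's built-in pinv agrees when the columns are independent.
assert np.allclose(A_plus, np.linalg.pinv(A))
```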

A useful way to look at the least squares problem is as a method of solving inconsistent systems. If the system $Ax = b$ is inconsistent, then this system should be replaced by the consistent system $Ax = P_W b$, since $P_W b$ is the vector in the column space $W$ of $A$ nearest $b$. The solution to the system $Ax = P_W b$ is $x = A^+ b$.

Now let us consider a typical application. Suppose one has points $(a_i, b_i)$, $1 \le i \le k$, in $\mathbb{R}^2$, which represent the outcome of some experiment, and one wants to find a line which fits these points as well as possible. If the equation of the line is $y = mx + n$, then the line will pass through $(a_i, b_i)$ if and only if $b_i = ma_i + n$. These equations can be expressed in matrix form as

$$Ax = \begin{pmatrix} a_1 & 1\\ a_2 & 1\\ \vdots & \vdots\\ a_k & 1 \end{pmatrix}\begin{pmatrix} m\\ n \end{pmatrix} = \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_k \end{pmatrix}.$$

Note that the unknowns are now $m$ and $n$. The effect of applying least squares is to replace $b_1, \dots, b_k$ by $c_1, \dots, c_k$ so that all $(a_i, c_i)$ lie on a line and the sum

$$\sum_{i=1}^k (b_i - c_i)^2$$

is minimized. The solution is easily written down using the pseudo-inverse $A^+$ of $A$. We have

$$x = \begin{pmatrix} m\\ n \end{pmatrix} = A^+ b = (A^T A)^{-1} A^T b.$$

More precisely,

$$\begin{pmatrix} m\\ n \end{pmatrix} = \begin{pmatrix} \sum a_i^2 & \sum a_i\\ \sum a_i & k \end{pmatrix}^{-1}\begin{pmatrix} \sum a_i b_i\\ \sum b_i \end{pmatrix}.$$

Note that the $2 \times 2$ matrix in this solution is invertible just as long as the $a_i$ are not all equal.

The problem of fitting a set of points $(a_i, b_i, c_i)$ to a plane is similar. The method can also be adapted to the problem of fitting a set of points in $\mathbb{R}^2$ to a nonlinear curve, such as an ellipse. This is apparently the genesis of the method of least squares. Its inventor, K. F. Gauss, astonished the astronomical world in 1801 by being able to predict, on the basis of only 9 degrees of observed orbit, the approximate position of the asteroid Ceres 11 months after his initial observations were made. Least squares applies to function spaces as well.
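Before turning to function spaces, here is the line-fitting recipe in NumPy (a sketch of mine, not the book's code; for concreteness it borrows the data points of Exercise 9.17 below).

```python
import numpy as np

# Data points (a_i, b_i); fit the least squares line y = m x + n.
a = np.array([-1., 0., 1., 1.5])
b = np.array([ 1., 0.5, 2., 2.5])

A = np.column_stack([a, np.ones_like(a)])   # rows (a_i, 1)
m, n = np.linalg.inv(A.T @ A) @ A.T @ b     # x = A+ b

# The built-in least squares solver gives the same answer.
(m2, n2), *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose([m, n], [m2, n2])
print(f"y = {m:.3f} x + {n:.3f}")
```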

Example 9.6. Suppose we want to minimize

$$\int_{-1}^{1} \big(\cos x - (a + bx + cx^2)\big)^2\,dx.$$

The solution proceeds exactly as in the Euclidean situation. We first apply GS to $1, x, x^2$ on $[-1,1]$ to obtain ON polynomials $f_0, f_1, f_2$ on $[-1,1]$, and then compute the Fourier coefficients of $\cos x$ with respect to the $f_i$. Clearly $f_0 = \frac{1}{\sqrt{2}}$. Moreover, since $x$ is odd, $(x, f_0) = 0$. Hence

$$f_1 = \frac{x}{\sqrt{(x,x)}} = \sqrt{\frac{3}{2}}\,x.$$

To get $f_2$, we calculate $x^2 - (x^2, f_0)f_0 - (x^2, f_1)f_1$, which turns out to be $x^2 - \frac{1}{3}$. Computing $(x^2 - \frac{1}{3},\, x^2 - \frac{1}{3})$, we get $\frac{8}{45}$, so $f_2 = \sqrt{\frac{45}{8}}\,(x^2 - \frac{1}{3})$. The Fourier coefficients $(\cos x, f_i) = \int_{-1}^1 \cos x\, f_i(x)\,dx$ turn out to be $\sqrt{2}\sin 1$, $0$ (since $\cos x$ is even and $f_1$ is odd), and $\sqrt{\frac{45}{8}}\big(4\cos 1 - \frac{8}{3}\sin 1\big)$. Thus the best least squares approximation is

$$\cos x \approx \sin 1 + \frac{45}{8}\Big(4\cos 1 - \frac{8}{3}\sin 1\Big)\Big(x^2 - \frac{1}{3}\Big).$$

(These coefficients are checked numerically in the sketch following the exercises below.) The calculation is greatly simplified by the fact that we chose the interval to be symmetric about $0$, since $x$ and $x^2$ are already orthogonal on $[-1,1]$, as are any even and odd polynomials.

Exercises

Exercise 9.1. Expand $(1,0,0)^T$ using the orthonormal basis of Example 9.3(b). Do the same for $(1,0,0,0)$ using the ONB of Example 9.4.

Exercise 9.2. Find an ONB for the plane $x - 2y + 3z = 0$ in $\mathbb{R}^3$. Now extend this ON set to an ONB of $\mathbb{R}^3$.

Exercise 9.3. Show that the product of two orthogonal matrices is orthogonal and that the inverse of an orthogonal matrix is orthogonal. Why does the second statement imply that the transpose of an orthogonal matrix is orthogonal?

Exercise 9.4. Show that any ON set of vectors is linearly independent. (Use the projection formula.)

Exercise 9.5. What are the eigenvalues of a projection $P_W$? Can the matrix of a projection be diagonalized? That is, does there exist an eigenbasis of $\mathbb{R}^n$ for any $P_W$? If the answer to the previous question was yes, does there exist an ON eigenbasis?

Exercise 9.6. Show that the matrix of a projection is symmetric.

Exercise 9.7. Diagonalize the matrix $A$ of $P_W$ in Example 9.5.

Exercise 9.8. Find the matrix of the reflection $H_u$ through the hyperplane orthogonal to $u$ defined in (9.10) for the following cases:

(a) $u$ is a unit normal to the hyperplane $x_1 + 2x_2 + x_3 = 0$ in $\mathbb{R}^3$.

(b) $u$ is a unit normal to the hyperplane $x_1 + x_2 + x_3 + x_4 = 0$ in $\mathbb{R}^4$.

Exercise 9.9. Show that the matrix $H_u$ defined in (9.10) is a symmetric orthogonal matrix such that $H_u u = -u$ and $H_u x = x$ if $x \cdot u = 0$.

Exercise 9.10. Show that $H_u$ admits an ON eigenbasis.

Exercise 9.11. Let $Q$ be the matrix of the reflection $H_b$.

(a) What are the eigenvalues of $Q$?

(b) Use the result of (a) to show that $\det(Q) = -1$.

(c) Show that $Q$ can be diagonalized by explicitly finding an eigenbasis of $\mathbb{R}^n$ for $Q$.

Exercise 9.12. Using the formula $P_W = A(A^T A)^{-1} A^T$, show that

(a) every projection matrix $P_W$ satisfies the identity $P_W P_W = P_W$, and give a geometric interpretation of this; and

(b) every projection matrix is symmetric.

Exercise 9.13. Let $A$ have independent columns. Verify the formula $P_W = QQ^T$ using $A = QR$.

Exercise 9.14. Prove the Pythagorean relation used in the proof of Proposition 9.2. That is, show that if $p \cdot q = 0$, then

$$\|p + q\|^2 = \|p - q\|^2 = \|p\|^2 + \|q\|^2.$$

Conversely, if this identity holds for $p$ and $q$, then $p$ and $q$ are orthogonal.

Exercise 9.15. Let $A$ be the matrix of Problem 2 in §16. Find the matrix of the projection of $\mathbb{R}^4$ onto the column space $W$ of $A$. Also, find the projection of $(2,1,1,1)^T$ onto $W$.

Exercise 9.16. Suppose $H$ is a hyperplane in $\mathbb{R}^n$ with normal line $L$. Interpret each of $P_H + P_L$, $P_H P_L$, and $P_L P_H$ by giving a formula for each.

Exercise 9.17. Find the line that best fits the points $(-1, 1)$, $(0, 0.5)$, $(1, 2)$, and $(1.5, 2.5)$.

Exercise 9.18. Suppose coordinates have been put on the universe so that the sun's position is $(0,0,0)$. Four observations of a planet orbiting the sun tell us that the planet passed through the points $(5, 0.1, 0)$, $(4.2, 2, 1.4)$, $(0, 4, 3)$, and $(-3.5, 2.8, 2)$. Find the plane (through the origin) that best fits the planet's orbit.

Exercise 9.19. Find the pseudo-inverse of the matrix.

Exercise 9.20. Assuming $A$ has independent columns, find the pseudo-inverse $A^+$ from the QR factorization of $A$.

Exercise 9.21. Show that if $A$ has independent columns, then any left inverse of $A$ has the form $A^+ + C$, where $CA = O$.

Exercise 9.22. Suppose $A$ has independent columns and let $A = QR$ be the QR factorization of $A$. Find a left inverse of $A$ in terms of $Q$ and $R$.

Exercise 9.23. Consider the matrix $A$.

(a) Find the pseudo-inverse $A^+$ of $A$, and

(b) compute the QR factorization of $A$ and use the result to find another left inverse of $A$.

Exercise 9.24. Let $W$ be a subspace of $\mathbb{R}^n$ with basis $w_1, \dots, w_k$ and put $A = (w_1 \cdots w_k)$. Show that $A^T A$ is always invertible. (HINT: It is sufficient to show that $A^T A x = 0$ implies $x = 0$ (why?). Now consider $x^T A^T A x$.)
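The coefficients of Example 9.6 are easy to confirm numerically. This sketch (mine, not the text's) realizes the inner product on $C[-1,1]$ with NumPy's Gauss-Legendre quadrature rule `leggauss`.

```python
import numpy as np

# Inner product (f, g) = integral of f(t) g(t) over [-1, 1].
nodes, weights = np.polynomial.legendre.leggauss(50)
ip = lambda f, g: np.sum(weights * f(nodes) * g(nodes))

# The ON polynomials produced by Gram-Schmidt in Example 9.6.
f0 = lambda x: np.full_like(x, 1 / np.sqrt(2))
f1 = lambda x: np.sqrt(3 / 2) * x
f2 = lambda x: np.sqrt(45 / 8) * (x**2 - 1 / 3)

# Fourier coefficients of cos x; c1 vanishes since cos is even.
c0, c1, c2 = ip(np.cos, f0), ip(np.cos, f1), ip(np.cos, f2)
assert abs(c0 - np.sqrt(2) * np.sin(1)) < 1e-10
assert abs(c1) < 1e-10
assert abs(c2 - np.sqrt(45 / 8) * (4 * np.cos(1) - (8 / 3) * np.sin(1))) < 1e-10

# The best quadratic approximation, roughly 0.9966 - 0.4652 x^2.
x = np.linspace(-1., 1., 5)
print(c0 * f0(x) + c2 * f2(x) - np.cos(x))  # small residuals
```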

9.3 Gram-Schmidt and the QR Factorization

9.3.1 The Gram-Schmidt Method

We are now going to show that every subspace of $\mathbb{R}^n$ has an orthonormal basis. In fact, given a subspace $W$ with a basis $w_1, \dots, w_k$, we will construct an ONB $u_1, \dots, u_k$ of $W$ such that for each $m = 1, \dots, k$,

$$\mathrm{span}\{u_1, \dots, u_m\} = \mathrm{span}\{w_1, \dots, w_m\}.$$

The method is called the Gram-Schmidt method, or GS method for short. In fact, Gram-Schmidt is simply the technique we used in $\mathbb{R}^2$, and extended to $\mathbb{R}^n$ in the previous section (cf. Proposition 9.3), to decompose a vector into two orthogonal components using projections. GS works in the abstract setting also, and we will take this theme up in the exercises. If $V$ is any real vector space admitting an inner product, such as $C[a,b]$, GS also gives a method for constructing an ONB of any finite dimensional subspace $W$ of $V$.

Let's begin with a subspace $W$ of $\mathbb{R}^n$ having a basis $w_1, \dots, w_k$. Recall that the basis property implies that no proper subset of $\{w_1, \dots, w_k\}$ can span $W$. Now proceed as follows: first let $u_1 = \|w_1\|^{-1} w_1$. Next, find a nonzero vector $v_2$ in the plane spanned by $w_1$ and $w_2$ which is orthogonal to $w_1$. The natural solution is to project $w_2$ on $w_1$ and put

$$v_2 := w_2 - P_{w_1}(w_2) = w_2 - (w_2 \cdot u_1)\,u_1.$$

Next set $u_2 := \|v_2\|^{-1} v_2$. Then $u_1$ and $u_2$ are ON. Moreover,

$$w_1 = (w_1 \cdot u_1)\,u_1 \quad \text{and} \quad w_2 = (w_2 \cdot u_1)\,u_1 + (w_2 \cdot u_2)\,u_2.$$

Thus $u_1$ and $u_2$ are in fact an ONB of the plane $W_2$ spanned by $w_1$ and $w_2$. To continue, let $W_3$ be the three dimensional subspace spanned by $w_1$, $w_2$, and $w_3$. By Proposition 9.3, the vector

$$v_3 = w_3 - P_{W_2}(w_3) = w_3 - (w_3 \cdot u_1)\,u_1 - (w_3 \cdot u_2)\,u_2$$

is orthogonal to $W_2$. Moreover, $v_3 \in W_3$ and $v_3 \ne 0$ (why?). Now put $u_3 = \|v_3\|^{-1} v_3$. Hence $u_1, u_2, u_3$ are an ONB of $W_3$.

In general, if $j \le k$, let $W_j$ denote $\mathrm{span}\{w_1, \dots, w_j\}$, and suppose an ONB $u_1, \dots, u_{j-1}$ of $W_{j-1}$ is already defined. Then one defines

$$v_j := w_j - P_{W_{j-1}}(w_j)$$

(so $v_j$ is $w_j$ minus the component of $w_j$ in $W_{j-1}$). In other words,

$$v_j = w_j - (w_j \cdot u_1)\,u_1 - (w_j \cdot u_2)\,u_2 - \cdots - (w_j \cdot u_{j-1})\,u_{j-1}.$$

Finally, put $u_j = \|v_j\|^{-1} v_j$. Then $u_1, \dots, u_{j-1}, u_j$ is an ONB of the subspace $W_j$ spanned by $w_1, \dots, w_j$. Continuing in this manner, we eventually arrive at an ONB $u_1, \dots, u_k$ of $W$ with the property that the span of $u_1, \dots, u_i$ coincides with the span of $w_1, \dots, w_i$ for each $i \le k$. To summarize, we state

Proposition 9.8. Suppose $w_1, \dots, w_k$ are linearly independent vectors in $\mathbb{R}^n$. Then the Gram-Schmidt method produces ON vectors $u_1, \dots, u_k$ such that $\mathrm{span}\{u_1, \dots, u_i\} = \mathrm{span}\{w_1, \dots, w_i\}$ for each $i \le k$.

9.3.2 The QR Decomposition

The GS method can be summarized in an important matrix form called the QR factorization. This is the starting point of several methods in applied linear algebra, such as the QR algorithm. Let us consider the case $k = 3$, the higher cases being analogous. Let $W$ be a subspace of $\mathbb{R}^n$ with a basis $w_1, w_2, w_3$. Applying GS to this basis gives an ONB $u_1, u_2, u_3$ of $W$ such that the following matrix identity holds:

$$(w_1\ w_2\ w_3) = (u_1\ u_2\ u_3)\begin{pmatrix} w_1 \cdot u_1 & w_2 \cdot u_1 & w_3 \cdot u_1\\ 0 & w_2 \cdot u_2 & w_3 \cdot u_2\\ 0 & 0 & w_3 \cdot u_3 \end{pmatrix}.$$

In general, if $A = (w_1 \cdots w_k)$ is an $n \times k$ matrix over $\mathbb{R}$ with linearly independent columns, $Q = (u_1 \cdots u_k)$ is the associated $n \times k$ matrix produced by the GS method, and $R$ is the $k \times k$ upper triangular matrix of Fourier coefficients, then $A = QR$.
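Here is a direct implementation of the method just described (my sketch, not the book's code; the matrix $A$ is made up).

```python
import numpy as np

def gram_schmidt(A):
    """Gram-Schmidt on the columns of A (assumed independent).
    Returns Q with ON columns and upper triangular R with A = Q R."""
    n, k = A.shape
    Q = np.zeros((n, k))
    R = np.zeros((k, k))
    for j in range(k):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]     # Fourier coefficient w_j . u_i
            v -= R[i, j] * Q[:, i]          # subtract the component in W_{j-1}
        R[j, j] = np.linalg.norm(v)         # v_j is nonzero by independence
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 1.]])
Q, R = gram_schmidt(A)
assert np.allclose(Q.T @ Q, np.eye(3))      # the columns of Q are ON
assert np.allclose(Q @ R, A)                # A = QR
```

NumPy's built-in `np.linalg.qr` produces the same factorization, up to the signs of the columns of $Q$ and the corresponding rows of $R$.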

This is known as the QR decomposition or QR factorization of $A$. Summarizing, we have

Proposition 9.9. Every real $n \times k$ matrix $A$ with independent columns (i.e., of rank $k$) can be factored as $A = QR$, where $Q$ is an $n \times k$ matrix with ON columns and $R$ is an invertible upper triangular $k \times k$ matrix.

The fact that $R$ is invertible is because it is upper triangular and its diagonal entries are nonzero (why?). If $A$ is square, then both $Q$ and $R$ are square. In particular, $Q$ is an orthogonal matrix. Constructing the factorization $A = QR$ is the first step in the QR algorithm, which is an important method for approximating the eigenvalues of $A$.

Exercises

Exercise 9.25. Find an ONB of the subspace $W$ of $\mathbb{R}^4$ spanned by $(1,0,1,1)$, $(-1,1,0,0)$, and $(1,0,1,-1)$. Expand $(0,0,0,1)$ and $(1,0,0,0)$ in terms of this basis.

Exercise 9.26. Let $A$ be the $4 \times 3$ matrix whose columns are the three vectors of the previous exercise. Find the QR factorization of $A$.

Exercise 9.27. Find a $4 \times 4$ orthogonal matrix $Q$ whose first three columns are the columns of $A$ in the previous problem.

Exercise 9.28. What would happen if the GS method were applied to a set of vectors that is not linearly independent? In other words, why can't we produce an ONB from nothing?

Exercise 9.29. In the QR decomposition, we claimed that the diagonal entries of $R$ are nonzero, hence $R$ is invertible. Explain why they are indeed nonzero.

Exercise 9.30. Suppose $A = QDQ^{-1}$ with $Q$ orthogonal and $D$ diagonal. Show that $A$ is always symmetric and that $A$ is orthogonal if and only if all diagonal entries of $D$ are $\pm 1$. Show that $A$ is the matrix of a reflection $H_u$ precisely when $D = \mathrm{diag}(-1, 1, \dots, 1)$, that is, exactly one diagonal entry of $D$ is $-1$ and all others are $+1$.

Exercise 9.31. How would you define the reflection $H$ through a subspace $W$ of $\mathbb{R}^n$? What properties should the matrix of $H$ have? For example, what should the eigenvalues of $H$ be?

Exercise 9.32. Check directly that if $R = I_n - P_W$, then $R^2 = R$. Verify also that the eigenvalues of $R$ are $0$ and $1$, and that $E_0 = W$ and $E_1 = W^\perp$.

Exercise 9.33. Show that for any subspace $W$ of $\mathbb{R}^n$, $P_W$ can be expressed as $P_W = QDQ^T$, where $D$ is diagonal and $Q$ is orthogonal. Find the diagonal entries of $D$, and describe $Q$.

Exercise 9.34. Let $W$ be the plane in $\mathbb{R}^4$ spanned by $(0,1,0,1)$ and $(1,1,0,0)$. Find an ONB of $W$, an ONB of $W^\perp$, and an ONB of $\mathbb{R}^4$ containing the ONBs of $W$ and $W^\perp$.

Exercise 9.35. Verify that if $W$ is any subset of $\mathbb{R}^n$, then $W^\perp$ is a subspace of $\mathbb{R}^n$. What is $(W^\perp)^\perp$?

Exercise 9.36. The GS method applies to the inner product on $C[a,b]$ as well.

(a) Apply GS to the functions $1, x, x^2$ on the interval $[-1,1]$ to produce an ON basis of the set of polynomials on $[-1,1]$ of degree at most two. The resulting functions $P_0, P_1, P_2$ are the first three normalized orthogonal polynomials of Legendre type.

(b) Show that your $n$th polynomial $P_n$ satisfies the differential equation

$$(1 - x^2)y'' - 2xy' + n(n+1)y = 0.$$

(c) The $n$th degree Legendre polynomial satisfies this second order differential equation for all $n \ge 0$. This and the orthogonality condition can be used to generate all the Legendre polynomials. Find $P_3$ and $P_4$ without GS.

Exercise 9.37. Using the result of the previous exercise, find the projection of $x^4 + x$ on the subspace of $C[-1,1]$ spanned by $1, x, x^2$.

9.4 The group of rotations of $\mathbb{R}^3$

One of the mathematical problems one encounters in crystallography is to determine the set of rotations of a particular molecule. In other words, the problem is to determine the rotational symmetries of some object in $\mathbb{R}^3$. The first question we should consider is: what is a rotation of $\mathbb{R}^3$?

We will use a characterization apparently due to Euler: a rotation $\rho$ of $\mathbb{R}^3$ is determined by an axis through the origin, which $\rho$ fixes pointwise, while every plane orthogonal to this axis is rotated through the same fixed angle $\theta$. Using this as the basic definition, we will now describe the set of rotations of $\mathbb{R}^3$ in terms of matrix theory.

9.4.1 The set $\mathrm{Rot}(\mathbb{R}^3)$

Let $\mathrm{Rot}(\mathbb{R}^3)$ denote the set of rotations of $\mathbb{R}^3$. It is clear that a rotation $\rho$ of $\mathbb{R}^3$ about $0$ should preserve lengths and angles. Recalling that for any $x, y \in \mathbb{R}^3$, $x \cdot y = \|x\|\,\|y\| \cos\alpha$, where $\alpha$ is the angle between $x$ and $y$, we see that any transformation of $\mathbb{R}^3$ preserving both lengths and angles also preserves the dot product of any two vectors. Thus if $\rho \in \mathrm{Rot}(\mathbb{R}^3)$,

$$\rho(x) \cdot \rho(y) = x \cdot y. \tag{9.11}$$

Therefore every rotation is given by an orthogonal matrix, and we see that $\mathrm{Rot}(\mathbb{R}^3) \subset O(3,\mathbb{R})$, the set of $3 \times 3$ orthogonal matrices. In particular, every rotation of $\mathbb{R}^3$ is a linear transformation. However, not every orthogonal matrix gives a rotation of $\mathbb{R}^3$. For example, a reflection of $\mathbb{R}^3$ through a plane through the origin clearly isn't a rotation, because if a rotation fixes two orthogonal vectors in $\mathbb{R}^3$, it fixes all of $\mathbb{R}^3$. On the other hand, a reflection does fix two orthogonal vectors without fixing $\mathbb{R}^3$.

In fact, I claim that every rotation $\rho$ has a positive determinant. Indeed, $\rho$ fixes a line $L$ through the origin pointwise, so $1$ is an eigenvalue. Moreover, the plane orthogonal to $L$ is rotated through $\theta$, so there exists a basis of $\mathbb{R}^3$ for which the matrix of $\rho$ has the form

$$\begin{pmatrix} 1 & 0 & 0\\ 0 & \cos\theta & -\sin\theta\\ 0 & \sin\theta & \cos\theta \end{pmatrix}.$$

Hence if $\rho \in \mathrm{Rot}(\mathbb{R}^3)$, then $\det(\rho) = 1$.

We now introduce $SO(3)$. Recall that $SL(3,\mathbb{R})$ denotes the set of all $3 \times 3$ real matrices of determinant $1$. Put $SO(3) = SL(3,\mathbb{R}) \cap O(3,\mathbb{R})$. We therefore deduce that $\mathrm{Rot}(\mathbb{R}^3) \subset SO(3)$. In fact, the next thing we will show is

Theorem 9.10. $\mathrm{Rot}(\mathbb{R}^3) = SO(3)$.

Proof. It suffices to show $SO(3) \subset \mathrm{Rot}(\mathbb{R}^3)$, i.e., every element of $SO(3)$ is a rotation. Note that the identity transformation $I_3$ is a rotation, namely the rotation through zero degrees. We will first prove that if $\sigma \in SO(3)$ and $\sigma \ne I_3$, then $1$ is an eigenvalue of $\sigma$ and the corresponding eigenspace has dimension one. That is, $E_1$ is a line. We know that every $3 \times 3$ real matrix has a real eigenvalue, and we also know that the real eigenvalues of an orthogonal matrix are either $1$ or $-1$. Hence, for $\sigma \in SO(3)$, the eigenvalues of $\sigma$ are one of the following possibilities:

(i) $1$ of multiplicity three;

(ii) $1, -1, -1$, where $-1$ has multiplicity two; and

(iii) $1, \lambda, \bar\lambda$, where $\lambda \ne \bar\lambda$, since the complex roots of the characteristic polynomial of a real matrix occur in conjugate pairs.

Hence $1$ is always an eigenvalue of $\sigma$, so $\dim E_1 \ge 1$. I claim that if $\sigma \in SO(3)$ and $\sigma \ne I_3$, then $\dim E_1 = 1$. Indeed, $\dim E_1 = 3$ is impossible, since $\sigma \ne I_3$. If $\dim E_1 = 2$, then $\sigma$ fixes the plane $E_1$ pointwise. Since $\sigma$ preserves angles, it also has to send the line $L = E_1^\perp$ to itself. Thus $L$ is an eigenspace. But the only real eigenvalue different from $1$ is $-1$, so if $\sigma \ne I_3$, there is a basis of $\mathbb{R}^3$ so that the matrix of $\sigma$ is

$$\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & -1 \end{pmatrix}.$$

But then $\det(\sigma) = -1$, so $\dim E_1 = 2$ cannot happen. This gives us the claim that $\dim E_1 = 1$.

Therefore $\sigma$ fixes every point on a unique line $L$ through the origin and maps the plane $L^\perp$ orthogonal to $L$ into itself. We now need to show that $\sigma$ rotates $L^\perp$. Let $u_1, u_2, u_3$ be an ONB of $\mathbb{R}^3$ such that $u_1, u_2 \in L^\perp$ and $\sigma(u_3) = u_3$. Let $Q = (u_1\ u_2\ u_3)$. Since $\sigma u_1$ and $\sigma u_2$ are orthogonal unit vectors in $L^\perp$, we can choose an angle $\theta$ such that

$$\sigma u_1 = \cos\theta\, u_1 + \sin\theta\, u_2 \quad \text{and} \quad \sigma u_2 = \pm(\sin\theta\, u_1 - \cos\theta\, u_2).$$

In matrix terms, this says

$$\sigma Q = Q \begin{pmatrix} \cos\theta & \pm\sin\theta & 0\\ \sin\theta & \mp\cos\theta & 0\\ 0 & 0 & 1 \end{pmatrix}.$$

Since $\det(\sigma) = 1$ and $\det(Q) = \pm 1$, it follows that

$$\det \begin{pmatrix} \cos\theta & \pm\sin\theta & 0\\ \sin\theta & \mp\cos\theta & 0\\ 0 & 0 & 1 \end{pmatrix} = 1.$$

The only possibility is that

$$\sigma = Q \begin{pmatrix} \cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1 \end{pmatrix} Q^{-1}. \tag{9.12}$$

This tells us that $\sigma$ rotates the plane $L^\perp$ through $\theta$, hence $\sigma \in \mathrm{Rot}(\mathbb{R}^3)$. This completes the proof that $SO(3) = \mathrm{Rot}(\mathbb{R}^3)$.

Notice that the matrix $Q$ defined above may be chosen to be a rotation. Therefore, the above argument gives another result.

Proposition 9.11. The matrix of a rotation $\sigma \in SO(3)$ is similar via another rotation $Q$ to a matrix of the form

$$\begin{pmatrix} \cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1 \end{pmatrix}.$$

We also get a surprising conclusion.

Corollary 9.12. The composition of two rotations of $\mathbb{R}^3$ is another rotation.

Proof. This is clear, since the product of two elements of $SO(3)$ is another element of $SO(3)$. Indeed, $SO(3) = SL(3,\mathbb{R}) \cap O(3,\mathbb{R})$, and, by the product theorem for determinants, the product of two elements of $SL(3,\mathbb{R})$ is another element of $SL(3,\mathbb{R})$. Moreover, we also know that the product of two elements of $O(3,\mathbb{R})$ is also in $O(3,\mathbb{R})$.

9.4.2 Rotation groups

We begin with a definition.

Definition 9.3. Let $S$ be a solid in $\mathbb{R}^3$. The rotation group of $S$ is defined to be the set of all $\sigma \in SO(3)$ such that $\sigma(S) = S$. We denote the rotation group of $S$ by $\mathrm{Rot}(S)$.

Proposition 9.13. Let $S$ be a solid in $\mathbb{R}^3$. If $\sigma$ and $\tau$ are rotations of $S$, then so are $\sigma\tau$ and $\sigma^{-1}$.

Proof. Clearly $\sigma^{-1} \in SO(3)$. By Corollary 9.12, $\sigma\tau \in SO(3)$ as well. It's also clear that $\sigma\tau(S) = S$ and $\sigma^{-1}(S) = S$, so the proof is finished.

Example 9.7. We can now determine the group of rotations of a cube. Let $S$ denote, for example, the cube with vertices at the points $(A, B, C)$, where $A, B, C = \pm 1$. Every rotation of $\mathbb{R}^3$ which maps $S$ to itself maps each one of its six faces to another face, and, for any two faces, there is a rotation which maps one to the other. Moreover, there are four rotations which map any face to itself. It follows from Proposition 9.13 that there have to be at least $24$ rotations of $S$. Now consider the four diagonals of $S$, i.e., the segments which join a vertex $(A, B, C)$ to $(-A, -B, -C)$. Every rotation of $S$ permutes these segments. Moreover, if two rotations define the same permutation of the diagonals, they coincide (why?). Since the number of permutations of four objects is $4! = 24$, it follows that $\mathrm{Rot}(S)$ has exactly $24$ elements, and these $24$ elements are realized by the $24$ permutations of the diagonals of $S$.

Example 9.8. Consider the set consisting of the midpoints of the six faces of the cube $S$. The solid polyhedron $S'$ determined by these six points is called the regular octahedron. It is a solid with eight triangular faces, all congruent to each other. The cube and the regular octahedron are two of the five Platonic solids, which we will consider in a later chapter. Since each element of $\mathrm{Rot}(S)$ must send the midpoint of each face to another such midpoint, it follows that $\mathrm{Rot}(S) \subset \mathrm{Rot}(S')$. But the other containment clearly also holds, so we deduce that $\mathrm{Rot}(S) = \mathrm{Rot}(S')$.
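To make formula (9.12) concrete, here is a NumPy sketch (mine, not from the text) that builds the rotation through $\theta$ about a given axis as $QR_\theta Q^{-1}$ and checks that it lies in $SO(3)$; the way the axis is completed to an ONB is my own choice.

```python
import numpy as np

def rotation(axis, theta):
    """Rotation of R^3 through theta about the line spanned by axis,
    built as Q R_theta Q^T as in formula (9.12)."""
    u = np.asarray(axis, dtype=float)
    u = u / np.linalg.norm(u)
    # Complete u to an ONB of R^3: QR yields an ON basis whose first
    # column is +-u, which we move to the third slot.
    Q0, _ = np.linalg.qr(np.column_stack([u, np.eye(3)]))
    Q = Q0[:, [1, 2, 0]]
    c, s = np.cos(theta), np.sin(theta)
    R_theta = np.array([[c, -s, 0.],
                        [s,  c, 0.],
                        [0., 0., 1.]])
    return Q @ R_theta @ Q.T

sigma = rotation([1., 1., 1.], 2 * np.pi / 3)       # a turn about a cube diagonal
assert np.allclose(sigma.T @ sigma, np.eye(3))      # sigma is orthogonal
assert np.isclose(np.linalg.det(sigma), 1.0)        # det = 1, so sigma is in SO(3)
assert np.allclose(sigma @ np.ones(3), np.ones(3))  # the axis is fixed pointwise

# Corollary 9.12: the composition of two rotations is again in SO(3).
tau = sigma @ rotation([0., 0., 1.], np.pi / 4)
assert np.allclose(tau.T @ tau, np.eye(3)) and np.isclose(np.linalg.det(tau), 1.0)
```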

9.4.3 Reflections of $\mathbb{R}^3$

We now know that rotations of $\mathbb{R}^3$ are characterized by the property that their determinants are $+1$, and we know that the determinant of any element of $O(3,\mathbb{R})$ is $\pm 1$. Hence every element of $O(3,\mathbb{R})$ that isn't a rotation has determinant $-1$. We also know that every orthogonal $2 \times 2$ matrix is either a rotation or a reflection: a rotation when the determinant is $+1$ and a reflection when the determinant is $-1$. A natural question is whether this is also true in $O(3,\mathbb{R})$. It turns out that the determinant of a reflection of $\mathbb{R}^3$ is indeed $-1$. This is due to the fact that a reflection leaves a plane pointwise fixed and maps every vector orthogonal to the plane to its negative. Thus, for a reflection, $\dim E_1 = 2$ and $\dim E_{-1} = 1$, so the determinant is $-1$. It turns out, however, that there exist elements $\sigma \in O(3,\mathbb{R})$ with $\det(\sigma) = -1$ which are not reflections. For example, such a $\sigma$ can have eigenvalues $-1, \lambda, \bar\lambda$ with $\lambda$ nonreal. It is left as an easy exercise to describe how such a $\sigma$ acts on $\mathbb{R}^3$. As to reflections, we have the following fact.

Proposition 9.14. An element $Q \in O(3,\mathbb{R})$ is a reflection if and only if $Q$ is symmetric and $\det(Q) = -1$.

We leave the proof as an exercise. It is useful to recall that a reflection can be expressed as $I_3 - 2P_L$, where $P_L$ is the projection on the line $L$ orthogonal to the plane $E_1$ of the reflection.

One final comment is that every reflection of $\mathbb{R}^2$ actually defines a rotation of $\mathbb{R}^3$. For if $\sigma$ reflects $\mathbb{R}^2$ through a line $L$, the rotation $\rho$ of $\mathbb{R}^3$ through $\pi$ with $L$ as the axis of rotation acts the same way as $\sigma$ on $\mathbb{R}^2$, hence the claim. Note: the eigenvalues of $\rho$ are $1, -1, -1$; that is, $-1$ occurs with multiplicity two.

Remark: The term group, as in rotation group, will be defined in a later chapter. In essence, a group is a set that has a structure like that of a rotation group. In particular, elements can be multiplied and every element has an inverse.

Exercises

Exercise 9.38. Prove Proposition 9.14.

Exercise 9.39. Let $S$ be a regular tetrahedron in $\mathbb{R}^3$; that is, $S$ has four faces made up of congruent equilateral triangles. How many elements does $\mathrm{Sym}(S)$ have?

Exercise 9.40. Compute $\mathrm{Rot}(S)$ in the following cases:

(a) $S$ is the half ball $\{x^2 + y^2 + z^2 \le 1,\ z \ge 0\}$, and

(b) $S$ is the solid rectangular box $\{-1 \le x \le 1,\ -2 \le y \le 2,\ -1 \le z \le 1\}$.
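As a closing illustration of Example 9.7 (a sketch of mine, not from the text), the $24$ rotations of the cube can be enumerated by brute force: they are exactly the signed permutation matrices of determinant $+1$.

```python
import numpy as np
from itertools import permutations, product

# Rot(S) for the cube S with vertices (+-1, +-1, +-1): every rotation of S
# permutes the coordinate axes up to sign, i.e. is a signed permutation
# matrix, and must have determinant +1.
rotations = []
for perm in permutations(range(3)):
    for signs in product([1., -1.], repeat=3):
        M = np.zeros((3, 3))
        for i, (j, s) in enumerate(zip(perm, signs)):
            M[i, j] = s                     # M maps the cube to itself
        if np.isclose(np.linalg.det(M), 1.0):
            rotations.append(M)

print(len(rotations))                       # 24, matching Example 9.7
```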


More information

LINEAR ALGEBRA KNOWLEDGE SURVEY

LINEAR ALGEBRA KNOWLEDGE SURVEY LINEAR ALGEBRA KNOWLEDGE SURVEY Instructions: This is a Knowledge Survey. For this assignment, I am only interested in your level of confidence about your ability to do the tasks on the following pages.

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

18.06 Professor Johnson Quiz 1 October 3, 2007

18.06 Professor Johnson Quiz 1 October 3, 2007 18.6 Professor Johnson Quiz 1 October 3, 7 SOLUTIONS 1 3 pts.) A given circuit network directed graph) which has an m n incidence matrix A rows = edges, columns = nodes) and a conductance matrix C [diagonal

More information

GQE ALGEBRA PROBLEMS

GQE ALGEBRA PROBLEMS GQE ALGEBRA PROBLEMS JAKOB STREIPEL Contents. Eigenthings 2. Norms, Inner Products, Orthogonality, and Such 6 3. Determinants, Inverses, and Linear (In)dependence 4. (Invariant) Subspaces 3 Throughout

More information

Math 3108: Linear Algebra

Math 3108: Linear Algebra Math 3108: Linear Algebra Instructor: Jason Murphy Department of Mathematics and Statistics Missouri University of Science and Technology 1 / 323 Contents. Chapter 1. Slides 3 70 Chapter 2. Slides 71 118

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

6 Inner Product Spaces

6 Inner Product Spaces Lectures 16,17,18 6 Inner Product Spaces 6.1 Basic Definition Parallelogram law, the ability to measure angle between two vectors and in particular, the concept of perpendicularity make the euclidean space

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

Eigenvalues and Eigenvectors A =

Eigenvalues and Eigenvectors A = Eigenvalues and Eigenvectors Definition 0 Let A R n n be an n n real matrix A number λ R is a real eigenvalue of A if there exists a nonzero vector v R n such that A v = λ v The vector v is called an eigenvector

More information

Solution to Homework 1

Solution to Homework 1 Solution to Homework Sec 2 (a) Yes It is condition (VS 3) (b) No If x, y are both zero vectors Then by condition (VS 3) x = x + y = y (c) No Let e be the zero vector We have e = 2e (d) No It will be false

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Chapter 4 Euclid Space

Chapter 4 Euclid Space Chapter 4 Euclid Space Inner Product Spaces Definition.. Let V be a real vector space over IR. A real inner product on V is a real valued function on V V, denoted by (, ), which satisfies () (x, y) = (y,

More information

4.2. ORTHOGONALITY 161

4.2. ORTHOGONALITY 161 4.2. ORTHOGONALITY 161 Definition 4.2.9 An affine space (E, E ) is a Euclidean affine space iff its underlying vector space E is a Euclidean vector space. Given any two points a, b E, we define the distance

More information

Chapter 3. More about Vector Spaces Linear Independence, Basis and Dimension. Contents. 1 Linear Combinations, Span

Chapter 3. More about Vector Spaces Linear Independence, Basis and Dimension. Contents. 1 Linear Combinations, Span Chapter 3 More about Vector Spaces Linear Independence, Basis and Dimension Vincent Astier, School of Mathematical Sciences, University College Dublin 3. Contents Linear Combinations, Span Linear Independence,

More information

Linear Algebra Practice Problems

Linear Algebra Practice Problems Linear Algebra Practice Problems Page of 7 Linear Algebra Practice Problems These problems cover Chapters 4, 5, 6, and 7 of Elementary Linear Algebra, 6th ed, by Ron Larson and David Falvo (ISBN-3 = 978--68-78376-2,

More information

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same. Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read

More information

which arises when we compute the orthogonal projection of a vector y in a subspace with an orthogonal basis. Hence assume that P y = A ij = x j, x i

which arises when we compute the orthogonal projection of a vector y in a subspace with an orthogonal basis. Hence assume that P y = A ij = x j, x i MODULE 6 Topics: Gram-Schmidt orthogonalization process We begin by observing that if the vectors {x j } N are mutually orthogonal in an inner product space V then they are necessarily linearly independent.

More information