
MTH 2032 Semester II
Linear Algebra Worked Examples
Dr. Tony Yee
Department of Mathematics and Information Technology
The Hong Kong Institute of Education
December 28, 2011


Contents

1 Systems of Linear Equations
2 Matrix Algebra
3 Eigenvalues and Eigenvectors
4 Vector Spaces
5 Orthogonality


Chapter 5  Orthogonality

Example 5.1 (Dot product)
Find the dot product of x = (2, 2, 1) and y = (2, 5, -3).

Solution. The dot product (or inner product or scalar product) of x and y is given by
x · y = (2, 2, 1) · (2, 5, -3) = (2)(2) + (2)(5) + (1)(-3) = 11.
That is, x · y is obtained by multiplying corresponding components and adding the resulting products. The vectors x and y are said to be orthogonal (or perpendicular) if their dot product is zero, that is, if x · y = 0. Therefore, for this example, the two given vectors x and y are not orthogonal.

Example 5.2 (Norm)
For x = (1, -2, 3, -4, 5), find the norm ‖x‖.

Solution. The norm of x is given by
‖x‖ = √(x · x) = √(1² + (-2)² + 3² + (-4)² + 5²) = √55.
The norm (or length) of a vector x in R^n, denoted by ‖x‖, is defined to be the nonnegative square root of x · x. In particular, if x = (x1, x2, ..., xn), then
‖x‖ = √(x1² + x2² + ... + xn²).
That is, ‖x‖ is the square root of the sum of the squares of the components of x. Thus ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0.

Example 5.3 (Normalize a vector)
Find the rescaled vector x/‖x‖, where x = (2, 2, 1).

Solution. By finding the norm ‖x‖ = √(2² + 2² + 1²) = √9 = 3, we could normalize x to the following unit vector:
x/‖x‖ = (1/3)(2, 2, 1) = (2/3, 2/3, 1/3).
Verify that the norm of the above rescaled vector is √((2/3)² + (2/3)² + (1/3)²) = 1. In general, a vector x is called a unit vector if ‖x‖ = 1 or, equivalently, if x · x = 1. For any nonzero vector x in R^n, the vector x̂ = (1/‖x‖)x = x/‖x‖ is the unique unit vector in the same direction as x. The process of finding x̂ from x is called normalizing x.
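These computations are easy to sanity-check numerically. The snippet below (an addition to these notes, not part of the original; it assumes NumPy is available) reproduces Examples 5.1-5.3:

    import numpy as np

    x = np.array([2, 2, 1])
    y = np.array([2, 5, -3])

    print(np.dot(x, y))                        # 11, nonzero, so x and y are not orthogonal
    print(np.linalg.norm([1, -2, 3, -4, 5]))   # sqrt(55) ~ 7.416
    x_hat = x / np.linalg.norm(x)              # normalize x
    print(x_hat)                               # [2/3, 2/3, 1/3]
    print(np.linalg.norm(x_hat))               # 1.0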

Example 5.4 (Schwarz inequality)
Prove that |x · y| ≤ ‖x‖ ‖y‖.

Solution. For any real number t, we have
0 ≤ (tx + y) · (tx + y) = t²(x · x) + 2t(x · y) + (y · y) = ‖x‖² t² + 2(x · y) t + ‖y‖².
Let a = ‖x‖², b = 2(x · y), c = ‖y‖². Then, for every value of t, at² + bt + c ≥ 0. This means that the quadratic polynomial cannot have two distinct real roots. This implies that the discriminant D = b² - 4ac ≤ 0. Equivalently, b² ≤ 4ac. Thus
4(x · y)² ≤ 4‖x‖²‖y‖².
Dividing by 4 and taking the square root of both sides gives the inequality.

Remark. The angle θ between nonzero vectors x and y in R^n is defined by
cos θ = (x · y) / (‖x‖ ‖y‖).
This definition is well defined since, by the Schwarz inequality,
-1 ≤ (x · y) / (‖x‖ ‖y‖) ≤ 1.
Thus -1 ≤ cos θ ≤ 1, and so the angle exists and is unique. Note that if x · y = 0, then θ = π/2. This agrees with our previous definition of orthogonality.

Example 5.5 (Angle between vectors)
Consider the vectors x = (2, 3, 5) and y = (1, -4, 3) in R^3. Find cos θ, where θ is the angle between them.

Solution. The angle θ between x and y is given by
cos θ = (x · y) / (‖x‖ ‖y‖) = (2 - 12 + 15) / (√38 √26) = 5/√988.
Note that θ is an acute angle, since cos θ is positive.

Example 5.6 (Minkowski's inequality)
Prove that ‖x + y‖ ≤ ‖x‖ + ‖y‖.

Solution. By the Schwarz inequality and other properties of the dot product, we have
‖x + y‖² = (x + y) · (x + y) = (x · x) + 2(x · y) + (y · y) ≤ ‖x‖² + 2‖x‖‖y‖ + ‖y‖² = (‖x‖ + ‖y‖)².
Taking the square root of both sides gives the inequality.

Remark. Minkowski's inequality is often known as the triangle inequality, because if we view x + y as the side of the triangle formed with sides x and y, then the inequality just says that the length of one side of a triangle cannot be greater than the sum of the lengths of the other two sides.
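A quick numerical illustration of the angle formula and of both inequalities (NumPy snippet, not part of the original notes):

    import numpy as np

    x = np.array([2, 3, 5])
    y = np.array([1, -4, 3])
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)

    cos_theta = (x @ y) / (nx * ny)
    print(cos_theta)                         # 5/sqrt(988) ~ 0.159, positive: theta is acute
    print(abs(x @ y) <= nx * ny)             # True (Schwarz inequality)
    print(np.linalg.norm(x + y) <= nx + ny)  # True (Minkowski / triangle inequality)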

Example 5.7 (Orthogonal / Orthonormal set)
Determine whether the vectors u1 = (1, -3, 4), u2 = (-1, 1, 1) form an orthogonal set. If the set is orthogonal, further make it orthonormal.

Solution. Since u1 · u2 = -1 - 3 + 4 = 0, we know that the vectors u1 and u2 are orthogonal. In other words, u1, u2 form an orthogonal set. Then by the norms ‖u1‖ = √26, ‖u2‖ = √3, we further obtain an orthonormal set {v1, v2}, where
v1 = u1/‖u1‖ = (1/√26, -3/√26, 4/√26),
v2 = u2/‖u2‖ = (-1/√3, 1/√3, 1/√3).

Remark. Generally speaking, vectors v1, v2, ..., vk in R^n are said to form an orthogonal set of vectors if each pair of vectors is orthogonal, that is, vi · vj = 0 for i ≠ j. Stronger than that, vectors v1, v2, ..., vk in R^n are said to form an orthonormal set of vectors if the vectors are unit vectors and they form an orthogonal set, that is,
vi · vj = 0 if i ≠ j,  and  vi · vj = 1 if i = j.
Normalizing an orthogonal set refers to the process of multiplying each vector in the set by the reciprocal of its length in order to transform the set into an orthonormal set of vectors. Of course, we have assumed that there is no zero vector in the orthogonal set; otherwise division by zero would occur.

Example 5.8 (Orthogonal / Orthonormal set)
Determine all values of k so that the two vectors (1, 2, -3) and (k², 1, k) are orthogonal.

Solution. Two vectors are orthogonal if and only if their dot product is zero. Therefore, by
(1, 2, -3) · (k², 1, k) = 0  ⟺  k² - 3k + 2 = 0  ⟺  (k - 1)(k - 2) = 0,
the two vectors are orthogonal for k = 1 or k = 2.

Remark. Now think about how to verify whether three vectors form an orthogonal set or not. In fact, for more than two vectors, we should be more careful in the verification process. The vectors v1, v2, ..., vk are said to form an orthogonal set if v1, v2, ..., vk are pairwise orthogonal (or mutually orthogonal). That is,
v1 · v2 = 0, v1 · v3 = 0, v1 · v4 = 0, ..., v1 · v(k-1) = 0, v1 · vk = 0,
v2 · v3 = 0, v2 · v4 = 0, ..., v2 · v(k-1) = 0, v2 · vk = 0,
v3 · v4 = 0, ..., v3 · v(k-1) = 0, v3 · vk = 0,
...
v(k-1) · vk = 0.
We emphasize that among the vectors of an orthogonal set, it is permissible that some vj are zero vectors. This is one important difference between an orthogonal set and an orthonormal set: for every orthonormal set, all vectors are unit vectors, so none of them are zero.
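Checking that a family of vectors is pairwise orthogonal is just a double loop over the pairs listed above (or a single matrix product, cf. (5.3) below). A small helper in that spirit (NumPy sketch, an addition to the notes):

    import numpy as np

    def is_orthogonal_set(vectors):
        """True if every distinct pair of vectors has zero dot product."""
        for i in range(len(vectors)):
            for j in range(i + 1, len(vectors)):
                if not np.isclose(vectors[i] @ vectors[j], 0.0):
                    return False
        return True

    u1 = np.array([1, -3, 4])
    u2 = np.array([-1, 1, 1])
    print(is_orthogonal_set([u1, u2]))   # True, as in Example 5.7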

Example 5.9 (Orthogonal / Orthonormal set)
Let v1 = (1, 2, -5), v2 = (1, 2, 1), v3 = (1, -3, -1). Do v1, v2, v3 form an orthogonal set? If yes, rescale the vectors to make them orthonormal.

Solution. It can be verified that
v1 · v2 = 1 + 4 - 5 = 0,  v1 · v3 = 1 - 6 + 5 = 0,  v2 · v3 = 1 - 6 - 1 = -6.
Since v2 · v3 ≠ 0, the vectors v1, v2, v3 are not orthogonal.

Example 5.10 (Orthogonal / Orthonormal set)
Let v1 = (1, 2, 1), v2 = (-1, 1, -1), v3 = (1, 0, -1). Do v1, v2, v3 form an orthogonal set? If yes, rescale the vectors to make them orthonormal.

Solution. It can be verified that
v1 · v2 = -1 + 2 - 1 = 0,  v1 · v3 = 1 + 0 - 1 = 0,  v2 · v3 = -1 + 0 + 1 = 0.
Hence v1, v2, v3 form an orthogonal set. By the norms ‖v1‖ = √6, ‖v2‖ = √3, ‖v3‖ = √2, we obtain an orthonormal set consisting of the three unit vectors
v1/‖v1‖ = (1/√6, 2/√6, 1/√6),  v2/‖v2‖ = (-1/√3, 1/√3, -1/√3),  v3/‖v3‖ = (1/√2, 0, -1/√2).

Remark. By a basis of R^n we mean a set of n vectors in R^n which are linearly independent. In addition, if the n vectors in R^n form an orthogonal set, then we call it an orthogonal basis of R^n. An orthogonal basis of R^n can always be normalized to form an orthonormal basis of R^n.

Example 5.11 (Orthogonal basis)
Show that the standard basis of R^n is orthonormal for every n.

Solution. We consider n = 3 only. Let {e1, e2, e3} = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} be the standard basis of R^3. It is clear that
e1 · e2 = e1 · e3 = e2 · e3 = 0,  e1 · e1 = e2 · e2 = e3 · e3 = 1.
Namely, {e1, e2, e3} is an orthonormal basis of R^3. More generally, the standard basis of R^n is orthonormal for every n.

Example 5.12 (Orthogonal basis)
Show that v1 = (1, 3, -1), v2 = (1, -1, -2) are orthogonal. Find a third vector v3 so that v1, v2, v3 form an orthogonal basis of R^3.

Solution. Verify that v1 · v2 = 1 - 3 + 2 = 0, so v1 and v2 are orthogonal. We need an additional vector v3 = (x, y, z) such that v3 is orthogonal to both v1 and v2, that is, v3 · v1 = 0 and v3 · v2 = 0. This yields the homogeneous system
x + 3y - z = 0,
x - y - 2z = 0.
Let A be the coefficient matrix of the system. By doing row operations,
A = [1 3 -1; 1 -1 -2]  →  [1 0 -7/4; 0 1 1/4].
Here only the third column is nonpivot, so z is the only free variable. The general solution is given by
(x, y, z) = (7z/4, -z/4, z) = (z/4)(7, -1, 4).
Thus we may take v3 = (7, -1, 4). Now, as we want, the three (nonzero) vectors v1, v2, v3 form an orthogonal set. Furthermore, it follows from Theorem (Lecture Notes, page 191) that v1, v2, v3 in R^3 are indeed linearly independent. Accordingly, v1, v2, v3 form an orthogonal basis of R^3.

Remark. We may interpret Theorem (Lecture Notes, page 191) as follows.

Suppose S is an orthogonal set of nonzero vectors. Then S is linearly independent. (5.1)

Proof. Suppose S = {v1, v2, ..., vk} and suppose
c1 v1 + c2 v2 + ... + ck vk = 0. (5.2)
Taking the dot product of (5.2) with v1, we get
0 = 0 · v1 = (c1 v1 + c2 v2 + ... + ck vk) · v1 = c1 (v1 · v1) + c2 (v2 · v1) + ... + ck (vk · v1) = c1 (v1 · v1) + c2 · 0 + ... + ck · 0 = c1 (v1 · v1).
Since v1 ≠ 0, we have v1 · v1 ≠ 0. Thus c1 = 0. Similarly, for i = 2, ..., k, taking the dot product of (5.2) with vi,
0 = 0 · vi = (c1 v1 + c2 v2 + ... + ck vk) · vi = c1 (v1 · vi) + ... + ci (vi · vi) + ... + ck (vk · vi) = ci (vi · vi).
But vi · vi ≠ 0, and hence ci = 0. Thus S is linearly independent.
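In R^3 specifically, a vector orthogonal to two given vectors can also be produced directly with the cross product, which gives a quick check of Example 5.12 (NumPy snippet, an addition to the notes):

    import numpy as np

    v1 = np.array([1, 3, -1])
    v2 = np.array([1, -1, -2])
    v3 = np.cross(v1, v2)                  # orthogonal to both v1 and v2
    print(v3)                              # [-7, 1, -4], a scalar multiple of (7, -1, 4)
    print(np.dot(v3, v1), np.dot(v3, v2))  # 0 0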

Example 5.13 (Orthogonal basis)
Show that the columns of the following matrix U form an orthogonal basis of R^3. Hence find U⁻¹ and express (1, 2, 3) as a linear combination of the columns of U.
U = [ 1   2  -2
      0   1   5
      2  -1   1 ]

Solution. By direct multiplication of matrices, we have
U^t U = [ 5  0  0
          0  6  0
          0  0  30 ],
and we denote the last diagonal matrix as D := diag(5, 6, 30). That is, U^t U = D. Now we recall the fact

U^t U is diagonal  ⟺  the columns of U are orthogonal, (5.3)

so the three columns of U form an orthogonal set. Since none of the columns are zero, by (5.1) the three columns of U in R^3 are linearly independent and hence they form an orthogonal basis of R^3. Since det D ≠ 0, D is invertible, so D⁻¹ U^t U = I implies that U is invertible and its inverse is
U⁻¹ = D⁻¹ U^t = [ 1/5   0    0
                  0    1/6   0
                  0     0   1/30 ] [ 1  0  2
                                     2  1 -1
                                    -2  5  1 ] = [ 1/5    0    2/5
                                                   1/3   1/6  -1/6
                                                  -1/15  1/6   1/30 ].
Let v1 = (1, 0, 2), v2 = (2, 1, -1), v3 = (-2, 5, 1) be the three columns of U, that is, U = [v1 v2 v3]. By Theorem (Lecture Notes, page 191), since v1, v2, v3 form an orthogonal basis of R^3, we can express x = (1, 2, 3) as a linear combination of v1, v2, v3. We first find
x · v1 = 7,  x · v2 = 1,  x · v3 = 11,  v1 · v1 = 5,  v2 · v2 = 6,  v3 · v3 = 30.
Therefore,
x = (x · v1)/(v1 · v1) v1 + (x · v2)/(v2 · v2) v2 + (x · v3)/(v3 · v3) v3 = (7/5) v1 + (1/6) v2 + (11/30) v3.

Remark. (i) We note that the matrix U in (5.3) could be non-square, and even some columns of U could be zero vectors. However, if all columns of U are unit vectors, we have a stronger version of (5.3):

U^t U = I (the identity matrix)  ⟺  the columns of U are orthonormal. (5.4)

In fact, (5.3) (resp. (5.4)) can be used to verify whether the columns of a given matrix U form an orthogonal set (resp. an orthonormal set). In the case of a square matrix U none of whose columns are zero, one can further verify by (5.1) whether the columns of U form an orthogonal basis (resp. an orthonormal basis) of R^n. In this case, since U is square,
U^t U = I  ⟺  U U^t = I  ⟺  U is invertible and U⁻¹ = U^t.
A square matrix U satisfying U⁻¹ = U^t is called an orthogonal matrix.
(ii) By Theorem (Lecture Notes, page 191), if we write x = c1 v1 + c2 v2 + ... + ck vk, the scalars c1, c2, ..., ck can be formally determined provided that v1, v2, ..., vk are nonzero and form an orthogonal set. In Example 5.13, the three columns of U satisfy this requirement since they form an orthogonal basis of R^3, as indeed the question required.
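The criterion (5.3) and the expansion formula are both one line of NumPy each; the following check of Example 5.13 is an addition to the notes:

    import numpy as np

    U = np.array([[1,  2, -2],
                  [0,  1,  5],
                  [2, -1,  1]])
    print(U.T @ U)                      # diag(5, 6, 30): columns are orthogonal

    x = np.array([1, 2, 3])
    # coefficients of x in the orthogonal basis given by the columns of U
    c = (U.T @ x) / np.diag(U.T @ U)    # [7/5, 1/6, 11/30]
    print(c, U @ c)                     # U @ c reproduces x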

Example 5.14 (Orthogonal basis)
Show that the columns of the following matrix U form an orthogonal basis of R^4. Then express (1, 0, 0, 0) and (0, 1, 2, 3) as linear combinations of the columns of U.
U = [ 1  -1   1   1
      1   1   1  -1
      1   1  -1   1
      1  -1  -1  -1 ]

Solution. The columns of U are orthogonal because
U^t U = diag(4, 4, 4, 4)
is a diagonal matrix. The four nonzero columns of U form an orthogonal basis of R^4. Let x = (1, 0, 0, 0), y = (0, 1, 2, 3) and write U = [u1 u2 u3 u4], where uj is the j-th column of U. Then
x = (x · u1)/(u1 · u1) u1 + (x · u2)/(u2 · u2) u2 + (x · u3)/(u3 · u3) u3 + (x · u4)/(u4 · u4) u4 = (1/4) u1 - (1/4) u2 + (1/4) u3 + (1/4) u4,
y = (y · u1)/(u1 · u1) u1 + (y · u2)/(u2 · u2) u2 + (y · u3)/(u3 · u3) u3 + (y · u4)/(u4 · u4) u4 = (3/2) u1 + 0 u2 - u3 - (1/2) u4.

Remark. As we mentioned in the previous remark, if U is an n × n real matrix, then the following are equivalent:
1. U is an orthogonal matrix.
2. The columns of U form an orthonormal basis of R^n.
3. U^t U = I.
4. U is invertible, and U⁻¹ = U^t.
In fact, since U is square, we further have
U is an orthogonal matrix ⟺ the columns of U form an orthonormal basis of R^n ⟺ U^t U = I ⟺ U U^t = I ⟺ the rows of U form an orthonormal basis of R^n.
In the above, U^t U = I ⟺ U U^t = I follows from the fact that for any square matrices A, B, AB = I implies BA = I (Review Notes for Linear Algebra True or False, 5.12). Then
U U^t = I ⟺ (U^t)^t U^t = I ⟺ the columns of U^t form an orthonormal basis of R^n ⟺ the rows of U form an orthonormal basis of R^n.
So next time when you see the keyword orthogonal matrix, you may recall any of the above equivalent statements if necessary.

Example 5.15 (Orthogonal matrix)
(a) Let
P = [ 1/√3   1/√3   1/√3
      0      1/√2  -1/√2
      2/√6  -1/√6  -1/√6 ].
The columns (as well as the rows) of P are orthogonal to each other and are unit vectors. Thus P is an orthogonal matrix.
(b) Let P be a 2 × 2 orthogonal matrix. Then, for some real number θ, we have
P = [ cos θ  -sin θ       or   P = [ cos θ   sin θ
      sin θ   cos θ ]                sin θ  -cos θ ].

Example 5.16 (Orthogonal matrix)
Prove that the product of two orthogonal matrices is again orthogonal.

Solution. Suppose P and Q are orthogonal matrices. It follows that P⁻¹ = P^t and Q⁻¹ = Q^t. By
(PQ)⁻¹ = Q⁻¹ P⁻¹ = Q^t P^t = (PQ)^t,
the product PQ is again orthogonal.

Example 5.17 (Orthogonal matrix)
Prove that the determinant of an orthogonal matrix is ±1.

Solution. Suppose P is an orthogonal matrix. Then P⁻¹ = P^t. It follows that
1 = det I = det(P P⁻¹) = det(P P^t) = det P det P^t.
But recall that det P^t = det P, and hence (det P)² = 1. Therefore det P = ±1.

Example 5.18 (Orthogonal matrix)
Let U be a square matrix. Show that if the columns of U are orthonormal, then the rows of U are also orthonormal. Give an example of a square matrix A such that the columns of A are orthogonal, but the rows of A are not.

Solution. By (5.4), if the columns of U are orthonormal, then U^t U = I. Given that U is a square matrix, U is indeed an orthogonal matrix and U⁻¹ = U^t. It follows that
(U^t)^t (U^t) = U U^t = U U⁻¹ = I.
The above implies that the columns of U^t are orthonormal. Equivalently, the rows of U are orthonormal.
Take A = [ 2  1/2; 1  -1 ]. Then the columns of A are orthogonal ((2, 1) · (1/2, -1) = 0) but the rows are not ((2, 1/2) · (1, -1) = 3/2 ≠ 0).
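The defining property Q^t Q = I is easy to test numerically. A small check of Example 5.15 and of Examples 5.16-5.17 (NumPy snippet, an addition to the notes):

    import numpy as np

    s3, s2, s6 = np.sqrt(3), np.sqrt(2), np.sqrt(6)
    P = np.array([[1/s3,  1/s3,  1/s3],
                  [0,     1/s2, -1/s2],
                  [2/s6, -1/s6, -1/s6]])
    print(np.allclose(P.T @ P, np.eye(3)))    # True: P is orthogonal

    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    print(np.allclose(R.T @ R, np.eye(2)))    # True: rotation matrices are orthogonal
    print(np.linalg.det(P), np.linalg.det(R)) # each determinant is +1 or -1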

Example 5.19 (Orthogonal projection)
Find the orthogonal projection of x = (4, 4, 3) onto the line spanned by v = (5, 1, 2). Then find the distance from x to the line.

Solution. Let S be the subspace (line) of R^3 spanned by v. That is, S = span(v) = {kv : k any number}. Then the orthogonal projection of x onto S is
proj_S x = (x · v)/(v · v) v = (30/30) v = (5, 1, 2).
Since proj_S x ≠ x, we see that x is not in the span S. The distance from x to S is
dist(x, S) = ‖x - proj_S x‖ = ‖(4, 4, 3) - (5, 1, 2)‖ = ‖(-1, 3, 1)‖ = √11.

Remark. (i) Recall from Theorem (Lecture Notes, page 195) that the orthogonal projection of x onto a subspace S of R^n can be formally determined provided that an orthogonal basis of S is known. For questions in which the subspace S is given as a span of some vectors (say v1, v2, ..., vk), you are indeed required to prove that the vectors v1, v2, ..., vk are orthogonal (and none of them zero), so that they are linearly independent by (5.1) and hence form an orthogonal basis for S. In general, if you are required to find the orthogonal projection of a vector x onto some subspace of R^n (for example, the null space, row space, or column space of some matrix), again you should first find an orthogonal basis of the subspace and then use Theorem (Lecture Notes, page 195) to construct the orthogonal projection.
(ii) If proj_S x ≠ x, then x ∉ S. Writing x = y + z with y = proj_S x in S and z in the orthogonal complement of S, the (shortest) distance from x to S is given by the norm of the orthogonal projection of x onto the orthogonal complement of S. That is,
dist(x, S) = ‖z‖ = ‖x - y‖ = ‖x - proj_S x‖.
(iii) We just mentioned the keyword orthogonal complement; it is better to give a formal definition of this concept. Let S be a subspace of R^n. The orthogonal complement of S, denoted by S^⊥, consists of those vectors in R^n that are orthogonal to every vector y in S, that is,
S^⊥ = {z ∈ R^n : z · y = 0 for every y ∈ S}.
For example, suppose u is the orthogonal projection of v onto W; then what is the orthogonal projection of -v onto W^⊥: v + u, v - u, -v - u, or -v + u? In fact, the projection of v onto W^⊥ is v - u. Therefore the projection of -v onto W^⊥ is -(v - u) = u - v.

Example 5.20 (Orthogonal projection)
Find the orthogonal projection of x = (1, -2, 7) onto the plane spanned by the orthogonal vectors
v = (1, 1, 1),  w = (1, -2, 1).
Then find the distance from x to the plane.

Solution. By v · w = 1 - 2 + 1 = 0, we know that v and w are indeed orthogonal. Let S be the subspace of R^3 spanned by v and w. Since both v and w are nonzero, by (5.1) they are linearly independent and hence form an orthogonal basis for S. That is, S = span(v, w) is a plane in R^3. By Theorem (Lecture Notes, page 195), the orthogonal projection of x onto S is
proj_S x = (x · v)/(v · v) v + (x · w)/(w · w) w = 2v + 2w = (4, -2, 4).
Since proj_S x ≠ x, we see that x is not in the plane S. The distance from x to the plane S is
dist(x, S) = ‖x - proj_S x‖ = ‖(1, -2, 7) - (4, -2, 4)‖ = ‖(-3, 0, 3)‖ = 3√2.
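The projection formula proj_v x = ((x · v)/(v · v)) v is one line of NumPy; the snippet below (an addition to the notes) redoes Example 5.19:

    import numpy as np

    x = np.array([4, 4, 3])
    v = np.array([5, 1, 2])
    proj = (x @ v) / (v @ v) * v
    print(proj)                      # [5. 1. 2.]
    print(np.linalg.norm(x - proj))  # sqrt(11) ~ 3.317: distance from x to the line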

Example 5.21 (Orthogonal projection)
Find the orthogonal projection of x = (2, 1, 3, 2) onto the plane spanned by the orthogonal vectors
v = (1, 1, 1, -1),  w = (2, -1, 1, 2).
Then find the distance from x to the plane.

Solution. By v · w = 2 - 1 + 1 - 2 = 0, we know that v and w are indeed orthogonal. Let S be the subspace of R^4 spanned by v and w. Since both v and w are nonzero, by (5.1) they are linearly independent and hence form an orthogonal basis for S. That is, S = span(v, w) is a plane in R^4. By Theorem (Lecture Notes, page 195), the orthogonal projection of x onto S is
proj_S x = (x · v)/(v · v) v + (x · w)/(w · w) w = (4/4) v + (10/10) w = (3, 0, 2, 1).
Since proj_S x ≠ x, we see that x is not in the plane S. The distance from x to the plane S is
dist(x, S) = ‖x - proj_S x‖ = ‖(2, 1, 3, 2) - (3, 0, 2, 1)‖ = ‖(-1, 1, 1, 1)‖ = 2.

Example 5.22 (Orthogonal projection)
Find the orthogonal projection of x = (-3, 4, 2) onto the subspace of R^3 which has the orthonormal basis
v1 = (1/3)(2, -1, 2),  v2 = (1/√18)(1, 4, 1).
Then find the distance from x to the subspace.

Solution. Let S be the subspace of R^3 spanned by v1 and v2. Recall from Theorem (Lecture Notes, page 195) that we need an orthogonal basis for S before we can write down the orthogonal projection. For this example we are given an orthonormal basis, which is in particular an orthogonal basis. For easier hand calculations, we may take
u1 = (2, -1, 2),  u2 = (1, 4, 1)
as the orthogonal basis for S. That is, S = span(u1, u2) is a plane in R^3. We then use {u1, u2} to construct the orthogonal projection. The orthogonal projection of x onto S is
proj_S x = (x · u1)/(u1 · u1) u1 + (x · u2)/(u2 · u2) u2 = (-6/9) u1 + (15/18) u2 = (-1/2, 4, -1/2).
Since proj_S x ≠ x, we see that x is not in the span S. The distance from x to S is
dist(x, S) = ‖x - proj_S x‖ = ‖(-3, 4, 2) - (-1/2, 4, -1/2)‖ = ‖(-5/2, 0, 5/2)‖ = 5√2/2.

Remark. In Examples 5.19-5.22, each subspace S of R^n is given as a span of some vectors in R^n, and those vectors are already pairwise orthogonal. However, in general, vectors in R^n are not necessarily orthogonal, and you need to first orthogonalize them to make them usable. We will see an example of this kind later (Example 5.27).

Example 5.23 (Orthogonal diagonalization)
Orthogonally diagonalize the symmetric matrix
A = [ 1   0  -1
      0   1  -1
     -1  -1   2 ].
That is, find an orthogonal matrix Q and a diagonal matrix D so that Q⁻¹AQ = D.

Solution. Recall the fact that all real symmetric matrices are diagonalizable. Hence A is diagonalizable and should have a diagonalization A = PDP⁻¹ for some invertible P and diagonal D. Normally, P is constructed as a column-partitioned matrix with eigenvectors of A as its columns. That is, P = [v1 v2 v3], where v1, v2, v3 are eigenvectors of A. Now back to our problem: we need to orthogonally diagonalize A. The keyword here is orthogonally, so indeed we need to find an invertible matrix that is also an orthogonal matrix. Here we use Q (instead of P) to represent this matrix, for its orthogonal property. Now we need to guarantee that the eigenvectors v1, v2, v3 of A form an orthonormal set (pairwise orthogonal unit vectors).
The characteristic equation of A is det(A - λI) = 0, or
det [ 1-λ   0   -1
      0    1-λ  -1
     -1    -1   2-λ ] = λ(1 - λ)(λ - 3) = 0,
which gives the distinct real eigenvalues λ1 = 0, λ2 = 1, λ3 = 3.
For λ1 = 0,
A - 0I = [ 1   0  -1
           0   1  -1
          -1  -1   2 ].
By solving (A - 0I)x = 0, x = (x1, x2, x3), we have x1 = x3 and x2 = x3. Thus
x = (x3, x3, x3) = x3 (1, 1, 1),
and we get the eigenvector v1 = (1, 1, 1).
For λ2 = 1,
A - 1I = [ 0   0  -1
           0   0  -1
          -1  -1   1 ].
By solving (A - 1I)x = 0, x = (x1, x2, x3), we have x1 = -x2 and x3 = 0. Thus
x = (-x2, x2, 0) = x2 (-1, 1, 0),
and we get the eigenvector v2 = (-1, 1, 0).
For λ3 = 3,
A - 3I = [ -2   0  -1
            0  -2  -1
           -1  -1  -1 ]  →  [ 1  0  1/2
                              0  1  1/2
                              0  0   0  ].
By solving (A - 3I)x = 0, x = (x1, x2, x3), we have x1 = -x3/2 and x2 = -x3/2. Thus
x = (-x3/2, -x3/2, x3) = (x3/2)(-1, -1, 2),
and we get the eigenvector v3 = (-1, -1, 2).

Verify that
v1 · v2 = -1 + 1 + 0 = 0,  v1 · v3 = -1 - 1 + 2 = 0,  v2 · v3 = 1 - 1 + 0 = 0.
Luckily, v1, v2, v3 are pairwise orthogonal and hence form an orthogonal set. Since eigenvectors must be nonzero, v1, v2, v3 are linearly independent by (5.1) and hence form an orthogonal basis of R^3. Obviously v1, v2, v3 are not unit vectors; we need to first normalize them. By ‖v1‖ = √3, ‖v2‖ = √2, ‖v3‖ = √6, we further obtain an orthonormal basis of R^3 consisting of
x1 = v1/‖v1‖ = (1/√3)(1, 1, 1),  x2 = v2/‖v2‖ = (1/√2)(-1, 1, 0),  x3 = v3/‖v3‖ = (1/√6)(-1, -1, 2).
Thus if we construct Q = [x1 x2 x3], we can use it to diagonalize A such that Q⁻¹AQ = D, where
Q = [ 1/√3  -1/√2  -1/√6
      1/√3   1/√2  -1/√6
      1/√3    0     2/√6 ],   D = diag(0, 1, 3).
Here we emphasize that Q is an orthogonal matrix (1) which satisfies Q⁻¹ = Q^t, and (2) whose columns form an orthonormal basis of R^3. Please review the equivalent statements in the remark following Example 5.14.

Remark. For Example 5.23, we find that the three eigenvectors of A are already orthogonal by our construction. However, in many cases the eigenvectors are not orthogonal, and we then need an algorithm to convert them into orthogonal vectors. One method for this purpose is the Gram–Schmidt orthogonalization process. We shall illustrate the details in the following.
Suppose {u1, u2, ..., un} is a basis of a subspace V. One can use this basis to construct an orthogonal basis {v1, v2, ..., vn} of V as follows. Set
v1 = u1,
v2 = u2 - (u2 · v1)/(v1 · v1) v1,
v3 = u3 - (u3 · v1)/(v1 · v1) v1 - (u3 · v2)/(v2 · v2) v2,
...
vn = un - (un · v1)/(v1 · v1) v1 - (un · v2)/(v2 · v2) v2 - ... - (un · v(n-1))/(v(n-1) · v(n-1)) v(n-1).
In other words, for k = 2, 3, ..., n, we define
vk = uk - c(k,1) v1 - c(k,2) v2 - ... - c(k,k-1) v(k-1),  where c(k,i) = (uk · vi)/(vi · vi)
is the component of uk along vi. In fact, each vk is orthogonal to the preceding v's. Thus v1, v2, ..., vn form an orthogonal basis for V, as claimed. Normalizing each vk will then yield an orthonormal basis for V. The above construction is the so-called Gram–Schmidt process. We have some remarks for this process.
(1) Each vector vk is a linear combination of uk and the preceding v's. Hence one can easily show, by induction, that each vk is a linear combination of u1, u2, ..., uk. This accounts for span(v1, ..., vk) = span(u1, ..., uk).
(2) Since taking multiples of vectors does not affect orthogonality, it may be simpler in hand calculations to clear fractions in any new vk, by multiplying vk by an appropriate scalar, before obtaining the next v(k+1).
(3) Suppose w1, w2, ..., wm are linearly independent, so they form a basis for W = span(w1, w2, ..., wm). Applying the Gram–Schmidt process to the w's yields an orthogonal basis for W.
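The process translates directly into code. A minimal sketch (NumPy, an addition to the notes; it assumes the inputs are linearly independent and skips the fraction-clearing trick used in hand calculations):

    import numpy as np

    def gram_schmidt(us):
        """Return an orthogonal list vs with span(vs) = span(us)."""
        vs = []
        for u in us:
            v = u.astype(float)
            for w in vs:
                v = v - (u @ w) / (w @ w) * w   # subtract component of u along w
            vs.append(v)
        return vs

    # the vectors of Example 5.24 below
    v1, v2 = gram_schmidt([np.array([1, 0, 2]), np.array([1, 3, 7])])
    print(v1, v2)    # [1. 0. 2.] [-2. 3. 1.]
    print(v1 @ v2)   # 0.0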

Example 5.24 (Gram–Schmidt Orthogonalization)
Apply the Gram–Schmidt process to the following vectors to produce an orthogonal set:
u1 = (1, 0, 2),  u2 = (1, 3, 7).

Solution. Apply the Gram–Schmidt process:
v1 = u1 = (1, 0, 2),
v2 = u2 - (u2 · v1)/(v1 · v1) v1 = (1, 3, 7) - (15/5)(1, 0, 2) = (-2, 3, 1).
Now v1, v2 form an orthogonal set. Also, span(v1, v2) = span(u1, u2).

Example 5.25 (Gram–Schmidt Orthogonalization)
Find an orthonormal basis for the subspace of R^4 spanned by
u1 = (0, 2, 1, 0),  u2 = (1, -1, 0, 0),  u3 = (1, 2, 0, -1).

Solution. Let S be the subspace of R^4 spanned by u1, u2, u3. We find that u1, u2, u3 are linearly independent because all three columns of
[u1 u2 u3] = [ 0   1   1
               2  -1   2
               1   0   0
               0   0  -1 ]
are pivot columns. Thus u1, u2, u3 indeed form a basis for S. However, u1, u2, u3 are not pairwise orthogonal (u1 · u2 ≠ 0). We then use the Gram–Schmidt process to obtain an orthogonal set {v1, v2, v3} from {u1, u2, u3}. We set v1 = u1 = (0, 2, 1, 0), and from
u2 - (u2 · v1)/(v1 · v1) v1 = (1, -1, 0, 0) + (2/5)(0, 2, 1, 0) = (1, -1/5, 2/5, 0) = (1/5)(5, -1, 2, 0)
we may take v2 = (5, -1, 2, 0), and from
u3 - (u3 · v1)/(v1 · v1) v1 - (u3 · v2)/(v2 · v2) v2 = (1, 2, 0, -1) - (4/5)(0, 2, 1, 0) - (3/30)(5, -1, 2, 0) = (1/2, 1/2, -1, -1)
we may take v3 = (1, 1, -2, -2). Now the three vectors v1, v2, v3 form an orthogonal set. Since none of them are zero, v1, v2, v3 form an orthogonal basis for the subspace S such that S = span(u1, u2, u3) = span(v1, v2, v3). Finally, we must normalize them to obtain an orthonormal basis for S, that is,
{ (0, 2/√5, 1/√5, 0),  (5/√30, -1/√30, 2/√30, 0),  (1/√10, 1/√10, -2/√10, -2/√10) }.

Example 5.26 (Gram–Schmidt Orthogonalization)
Find an orthonormal set from u1 = (1, 2, 1), u2 = (1, 3, 1), u3 = (2, 2, 1).

Solution. We note that in some cases we do not even need the Gram–Schmidt process for generating an orthogonal set. For this example, u1, u2, u3 are linearly independent and form a basis of R^3. In particular, u1, u2, u3 span R^3. So one simple orthonormal set that spans R^3 is the set of the standard basis, i.e.,
{e1, e2, e3} = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.

Example 5.27 (Gram–Schmidt Orthogonalization)
Consider the subspace W spanned by
u1 = (1, 0, 2, 0),  u2 = (1, 2, 2, 4),  u3 = (-1, 1, 3, 2).
Find the orthogonal projection of (1, 1, 0, 0) onto W. Then find the distance from the vector to W.

Solution. By row operations we may reduce the matrix [u1 u2 u3] and find that all columns are pivot columns, so u1, u2, u3 are linearly independent. Since they span W, they indeed form a basis for W. Apply the Gram–Schmidt process:
v1 = u1 = (1, 0, 2, 0),
v2 = u2 - (u2 · v1)/(v1 · v1) v1 = u2 - (5/5) v1 = (0, 2, 0, 4),
v3 = u3 - (u3 · v1)/(v1 · v1) v1 - (u3 · v2)/(v2 · v2) v2 = u3 - (5/5) v1 - (10/20) v2 = (-2, 0, 1, 0).
Then W = span(v1, v2, v3). Let x = (1, 1, 0, 0). Then the orthogonal projection of x onto W is
proj_W x = (x · v1)/(v1 · v1) v1 + (x · v2)/(v2 · v2) v2 + (x · v3)/(v3 · v3) v3 = (1/5) v1 + (2/20) v2 - (2/5) v3 = (1, 1/5, 0, 2/5).
The distance from x to W is
dist(x, W) = ‖x - proj_W x‖ = ‖(1, 1, 0, 0) - (1, 1/5, 0, 2/5)‖ = ‖(0, 4/5, 0, -2/5)‖ = 2/√5.

Example 5.28 (Gram–Schmidt Orthogonalization)
Verify that the following vectors
v1 = (1, 2, 1, 1),  v2 = (-1, 1, 0, -1)
are orthogonal to each other. Extend v1, v2 to an orthogonal basis of R^4.

Solution. Verify that v1 · v2 = -1 + 2 + 0 - 1 = 0, so v1 and v2 are orthogonal. We need two independent vectors u3, u4 which are orthogonal to v1 and v2. Let u = (x, y, z, w) be such that u · v1 = 0 and u · v2 = 0. This yields a homogeneous system. Let A be the coefficient matrix of the system. By doing row operations,
A = [ 1  2  1  1; -1  1  0  -1 ]  →  [ 1  0  1/3  1; 0  1  1/3  0 ].
Here columns 3 and 4 are nonpivot, and hence z and w are free variables. The general solution is given by
(x, y, z, w) = (-z/3 - w, -z/3, z, w) = (z/3)(-1, -1, 3, 0) + w(-1, 0, 0, 1).
Thus we may take u3 = (1, 1, -3, 0), u4 = (-1, 0, 0, 1). Since u3 and u4 are not orthogonal (u3 · u4 ≠ 0), we use the Gram–Schmidt process to obtain
v3 = u3 = (1, 1, -3, 0),
u4 - (u4 · v3)/(v3 · v3) v3 = (-1, 0, 0, 1) + (1/11)(1, 1, -3, 0) = (1/11)(-10, 1, -3, 11);  take v4 = (-10, 1, -3, 11).
Then the vectors v1, v2, v3, v4 form an orthogonal basis of R^4.

Example 5.29 (Gram–Schmidt Orthogonalization)
Orthogonally diagonalize the symmetric matrix
A = [ 0   2  -1
      2   3  -2
     -1  -2   0 ].
That is, find an orthogonal matrix Q and a diagonal matrix D so that Q⁻¹AQ = D.

Solution. The characteristic equation of A is det(A - λI) = 0, or
det [ -λ    2   -1
       2   3-λ  -2
      -1   -2   -λ ] = (5 - λ)(λ + 1)² = 0,
which gives the eigenvalues λ1 = 5, λ2 = -1 (λ2 being a repeated eigenvalue).
For λ1 = 5,
A - 5I = [ -5   2  -1
            2  -2  -2
           -1  -2  -5 ].
By solving (A - 5I)x = 0, x = (x1, x2, x3), we have x1 = -x3 and x2 = -2x3. Thus
x = (-x3, -2x3, x3) = x3 (-1, -2, 1),
and we get the eigenvector u1 = (-1, -2, 1).
For λ2 = -1,
A - (-1)I = [ 1   2  -1
              2   4  -2
             -1  -2   1 ].
By solving (A - (-1)I)x = 0, x = (x1, x2, x3), we have x1 = -2x2 + x3. Thus
x = (-2x2 + x3, x2, x3) = x2 (-2, 1, 0) + x3 (1, 0, 1),
and we get the two eigenvectors u2 = (-2, 1, 0), u3 = (1, 0, 1).
Verify that
u1 · u2 = 2 - 2 + 0 = 0,  u1 · u3 = -1 + 0 + 1 = 0,  u2 · u3 = -2 + 0 + 0 = -2.
Since u2 · u3 ≠ 0, the vectors u1, u2, u3 are not orthogonal. We then use the Gram–Schmidt process on these last two vectors. Therefore, we take
v1 = u1 = (-1, -2, 1),  v2 = u2 = (-2, 1, 0),
and from
u3 - (u3 · v2)/(v2 · v2) v2 = (1, 0, 1) - (-2/5)(-2, 1, 0) = (1/5, 2/5, 1) = (1/5)(1, 2, 5),
we take v3 = (1, 2, 5). Now, as we want, the vectors v1, v2, v3 form an orthogonal set. Since none of them are zero, v1, v2, v3 are linearly independent by (5.1) and hence form an orthogonal basis of R^3.

By ‖v1‖ = √6, ‖v2‖ = √5, ‖v3‖ = √30, we further obtain an orthonormal basis of R^3 consisting of
x1 = v1/‖v1‖ = (1/√6)(-1, -2, 1),  x2 = v2/‖v2‖ = (1/√5)(-2, 1, 0),  x3 = v3/‖v3‖ = (1/√30)(1, 2, 5).
Thus if we construct Q = [x1 x2 x3], we can use it to diagonalize A such that Q⁻¹AQ = D, where
Q = [ -1/√6  -2/√5  1/√30
      -2/√6   1/√5  2/√30
       1/√6    0    5/√30 ],   D = diag(5, -1, -1).
Here we emphasize that Q is an orthogonal matrix (1) which satisfies Q⁻¹ = Q^t, and (2) whose columns form an orthonormal basis of R^3. Thus we also have Q^t A Q = D.

Example 5.30 (Gram–Schmidt Orthogonalization)
In R^3, find the distance from the point (1, 1, 1) to the plane x1 + x2 + 2x3 = 0.

Solution. Any vector in the subspace (plane) can be expressed as
(x1, x2, x3) = (-x2 - 2x3, x2, x3) = x2 (-1, 1, 0) + x3 (-2, 0, 1).
Hence u1 = (-1, 1, 0) and u2 = (-2, 0, 1) form a basis of the subspace. Apply the Gram–Schmidt process:
v1 = u1 = (-1, 1, 0),
v2 = u2 - (u2 · v1)/(v1 · v1) v1 = (-2, 0, 1) - (2/2)(-1, 1, 0) = (-1, -1, 1).
Thus the orthogonal projection of the vector x = (1, 1, 1) onto the subspace is
(x · v1)/(v1 · v1) v1 + (x · v2)/(v2 · v2) v2 = (0/2)(-1, 1, 0) + (-1/3)(-1, -1, 1) = (1/3)(1, 1, -1).
Thus the distance from x to the plane is
‖(1, 1, 1) - (1/3)(1, 1, -1)‖ = ‖(2/3, 2/3, 4/3)‖ = 2√6/3.
Alternatively, by the equation x1 + x2 + 2x3 = 0, we know that the vector z = (1, 1, 2) is perpendicular to the plane. Let W = span(z). Then the distance is given by
‖proj_W x‖ = ‖(x · z)/(z · z) z‖ = ‖(4/6)(1, 1, 2)‖ = 2√6/3.
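Symmetric matrices can be orthogonally diagonalized numerically with np.linalg.eigh, which already returns orthonormal eigenvectors. A check of Example 5.29 (NumPy snippet, an addition to the notes):

    import numpy as np

    A = np.array([[ 0,  2, -1],
                  [ 2,  3, -2],
                  [-1, -2,  0]])
    w, Q = np.linalg.eigh(A)                     # eigenvalues ascending: [-1, -1, 5]
    print(w)
    print(np.allclose(Q.T @ Q, np.eye(3)))       # True: Q is orthogonal
    print(np.allclose(Q.T @ A @ Q, np.diag(w)))  # True: Q^t A Q = D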

Example 5.31 (Data fitting problem using a straight line)
Find a straight line y = c + mx that best fits the following set of data on the xy-plane:
(2, 1), (5, 2), (7, 3), (8, 3).

Solution. If we could find a straight line y = c + mx passing through all the points, it would of course best fit the data. However, the corresponding system
c + 2m = 1,  c + 5m = 2,  c + 7m = 3,  c + 8m = 3,
that is, A [c; m] = b with
A = [ 1  2
      1  5
      1  7
      1  8 ],   b = (1, 2, 3, 3),
is inconsistent: it admits no solution. In this example, the sum of the squared differences is
(c + 2m - 1)² + (c + 5m - 2)² + (c + 7m - 3)² + (c + 8m - 3)².
Note that a vector in Col A is b0 = (c + 2m, c + 5m, c + 7m, c + 8m), so the sum of squared differences is exactly ‖b0 - b‖², and therefore the technique of the normal equation gives us the best approximate solution. Now, by direct computation, we have
A^t A = [ 4   22
          22  142 ],   A^t b = [ 9
                                 57 ],
and the corresponding normal equation A^t A x = A^t b has a unique solution (c, m) = (2/7, 5/14), so the straight line that best fits the given set of data is
y = 2/7 + (5/14) x.

Example 5.32 (Data fitting problem using a polynomial curve)
Find a polynomial of degree at most 2 that best fits the following set of data on the xy-plane:
(2, 1), (5, 2), (7, 3), (8, 3).

Solution. A general polynomial of degree at most 2 can be represented in the form y = a0 + a1 x + a2 x². If such a polynomial curve could pass through all four points, it would of course best fit the data. However, the corresponding system
a0 + 2a1 + 4a2 = 1,  a0 + 5a1 + 25a2 = 2,  a0 + 7a1 + 49a2 = 3,  a0 + 8a1 + 64a2 = 3,
that is, A [a0; a1; a2] = b with
A = [ 1  2   4
      1  5  25
      1  7  49
      1  8  64 ],   b = (1, 2, 3, 3),
is inconsistent. Again, the technique of the normal equation will help. By direct computation, we have
A^t A = [ 4    22    142
          22   142   988
          142  988  7138 ],   A^t b = [ 9
                                        57
                                        393 ].
We note that A^t A is invertible, so the solution of the normal equation is
(A^t A)⁻¹ A^t b = (19/132, 19/44, -1/132).
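Least-squares problems like these are solved in one call with np.linalg.lstsq, which handles the normal equations internally. A check of Example 5.31 (NumPy snippet, an addition to the notes):

    import numpy as np

    xs = np.array([2., 5., 7., 8.])
    b = np.array([1., 2., 3., 3.])
    A = np.column_stack([np.ones_like(xs), xs])   # columns: 1, x
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(coef)   # [2/7, 5/14] ~ [0.2857, 0.3571]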

This shows that the polynomial of degree at most 2 that best fits the data is
y = 19/132 + (19/44) x - (1/132) x².

Example 5.33 (Data fitting problem using a general curve)
Find a curve in the form y = a0 + a1 sin x + a2 sin 2x that best fits the following set of data:
(π/6, 1), (π/4, 2), (π/3, 3), (π/2, 3).

Solution. The system we are considering is again an inconsistent system:
a0 + a1 sin(π/6) + a2 sin(2π/6) = 1,
a0 + a1 sin(π/4) + a2 sin(2π/4) = 2,
a0 + a1 sin(π/3) + a2 sin(2π/3) = 3,
a0 + a1 sin(π/2) + a2 sin(2π/2) = 3,
with coefficient matrix
A = [ 1  sin(π/6)  sin(2π/6)
      1  sin(π/4)  sin(2π/4)
      1  sin(π/3)  sin(2π/3)
      1  sin(π/2)  sin(2π/2) ] = [ 1  1/2   √3/2
                                   1  √2/2   1
                                   1  √3/2  √3/2
                                   1   1     0   ].
By direct computation, we have
A^t A = [ 4            (3+√2+√3)/2   1+√3
          (3+√2+√3)/2      5/2       (3+2√2+√3)/4
          1+√3         (3+2√2+√3)/4      5/2      ],   A^t b = [ 9
                                                                 (7+2√2+3√3)/2
                                                                 2+2√3 ].
As we are looking for an approximate solution, exact calculation is not necessary. An approximate solution of the normal equation is
a0 ≈ -2.29168,  a1 ≈ 5.31308,  a2 ≈ 0.67309.
Then the best fitting curve is
y = (-2.29168) + (5.31308) sin x + (0.67309) sin 2x.
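The same normal-equation technique works for any basis of functions; a numerical check of Example 5.33 (NumPy snippet, an addition to the notes):

    import numpy as np

    xs = np.array([np.pi/6, np.pi/4, np.pi/3, np.pi/2])
    b = np.array([1., 2., 3., 3.])
    A = np.column_stack([np.ones_like(xs), np.sin(xs), np.sin(2 * xs)])
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(a)   # approximately [-2.29168, 5.31308, 0.67309]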


More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

Linear Algebra - Part II

Linear Algebra - Part II Linear Algebra - Part II Projection, Eigendecomposition, SVD (Adapted from Sargur Srihari s slides) Brief Review from Part 1 Symmetric Matrix: A = A T Orthogonal Matrix: A T A = AA T = I and A 1 = A T

More information

March 27 Math 3260 sec. 56 Spring 2018

March 27 Math 3260 sec. 56 Spring 2018 March 27 Math 3260 sec. 56 Spring 2018 Section 4.6: Rank Definition: The row space, denoted Row A, of an m n matrix A is the subspace of R n spanned by the rows of A. We now have three vector spaces associated

More information

Solving a system by back-substitution, checking consistency of a system (no rows of the form

Solving a system by back-substitution, checking consistency of a system (no rows of the form MATH 520 LEARNING OBJECTIVES SPRING 2017 BROWN UNIVERSITY SAMUEL S. WATSON Week 1 (23 Jan through 27 Jan) Definition of a system of linear equations, definition of a solution of a linear system, elementary

More information

Practice Final Exam Solutions for Calculus II, Math 1502, December 5, 2013

Practice Final Exam Solutions for Calculus II, Math 1502, December 5, 2013 Practice Final Exam Solutions for Calculus II, Math 5, December 5, 3 Name: Section: Name of TA: This test is to be taken without calculators and notes of any sorts. The allowed time is hours and 5 minutes.

More information

Chapter 6. Orthogonality and Least Squares

Chapter 6. Orthogonality and Least Squares Chapter 6 Orthogonality and Least Squares Section 6.1 Inner Product, Length, and Orthogonality Orientation Recall: This course is about learning to: Solve the matrix equation Ax = b Solve the matrix equation

More information

(v, w) = arccos( < v, w >

(v, w) = arccos( < v, w > MA322 F all203 Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: For all v,

More information

6.1. Inner Product, Length and Orthogonality

6.1. Inner Product, Length and Orthogonality These are brief notes for the lecture on Friday November 13, and Monday November 1, 2009: they are not complete, but they are a guide to what I want to say on those days. They are guaranteed to be incorrect..1.

More information

SUMMARY OF MATH 1600

SUMMARY OF MATH 1600 SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You

More information

2. Review of Linear Algebra

2. Review of Linear Algebra 2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear

More information

Therefore, A and B have the same characteristic polynomial and hence, the same eigenvalues.

Therefore, A and B have the same characteristic polynomial and hence, the same eigenvalues. Similar Matrices and Diagonalization Page 1 Theorem If A and B are n n matrices, which are similar, then they have the same characteristic equation and hence the same eigenvalues. Proof Let A and B be

More information

(a) II and III (b) I (c) I and III (d) I and II and III (e) None are true.

(a) II and III (b) I (c) I and III (d) I and II and III (e) None are true. 1 Which of the following statements is always true? I The null space of an m n matrix is a subspace of R m II If the set B = {v 1,, v n } spans a vector space V and dimv = n, then B is a basis for V III

More information

MTH 2310, FALL Introduction

MTH 2310, FALL Introduction MTH 2310, FALL 2011 SECTION 6.2: ORTHOGONAL SETS Homework Problems: 1, 5, 9, 13, 17, 21, 23 1, 27, 29, 35 1. Introduction We have discussed previously the benefits of having a set of vectors that is linearly

More information

Solutions to Final Exam

Solutions to Final Exam Solutions to Final Exam. Let A be a 3 5 matrix. Let b be a nonzero 5-vector. Assume that the nullity of A is. (a) What is the rank of A? 3 (b) Are the rows of A linearly independent? (c) Are the columns

More information

Chapter 3. Directions: For questions 1-11 mark each statement True or False. Justify each answer.

Chapter 3. Directions: For questions 1-11 mark each statement True or False. Justify each answer. Chapter 3 Directions: For questions 1-11 mark each statement True or False. Justify each answer. 1. (True False) Asking whether the linear system corresponding to an augmented matrix [ a 1 a 2 a 3 b ]

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Remark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.

Remark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero. Sec 5 Eigenvectors and Eigenvalues In this chapter, vector means column vector Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called

More information

The Gram-Schmidt Process 1

The Gram-Schmidt Process 1 The Gram-Schmidt Process In this section all vector spaces will be subspaces of some R m. Definition.. Let S = {v...v n } R m. The set S is said to be orthogonal if v v j = whenever i j. If in addition

More information

ft-uiowa-math2550 Assignment OptionalFinalExamReviewMultChoiceMEDIUMlengthForm due 12/31/2014 at 10:36pm CST

ft-uiowa-math2550 Assignment OptionalFinalExamReviewMultChoiceMEDIUMlengthForm due 12/31/2014 at 10:36pm CST me me ft-uiowa-math255 Assignment OptionalFinalExamReviewMultChoiceMEDIUMlengthForm due 2/3/2 at :3pm CST. ( pt) Library/TCNJ/TCNJ LinearSystems/problem3.pg Give a geometric description of the following

More information

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information