1 6. SOLUIONS Notes: he first half of this section is computational and is easily learned. he second half concerns the concepts of orthogonality and orthogonal complements, which are essential for later work. heorem is an important general fact, but is needed only for Supplementary Exercise at the end of the chapter and in Section 7.. he optional material on angles is not used later. Exercises 7 concern facts used later.. Since u ª º and v ª º, 6 u u ( ) 5, v u = ( ) + 6() = 8, and v u u u Since x w w w ª º w and x ª 6º, w w ( ) ( 5) 5, x w = 6() + ( )( ) + ( 5) = 5, and. Since w ª º, 5 w w ( ) ( 5) 5, and w w w ª /5º /5. /7. Since u ª º, u u ( ) 5 and ª /5º u. u u /5 5. Since u ª º and, v ª º 6 u v = ( )() + (6) = 8, u v ª º ª 8/º v. 6 / v v¹ v v 6 5, and ª 6º ª º 6. Since x and w, x w = 6() + ( )( ) + ( 5) = 5, 5 ª 6º ª /9º x w 5 /9 x. x x ¹ 9 5/9 x x 6 ( ) 9, and 5
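The arithmetic in Exercises 1-6 is nothing more than dot products, squared lengths, and the ratio (v·u)/(u·u); a minimal Python sketch follows. The sample vectors u = (-1, 2) and v = (4, 6) are inferred from the surviving values u·u = 5 and v·u = 8, so treat them as assumptions rather than the book's stated data.

```python
import math

def dot(u, v):
    """Dot product of two vectors given as lists."""
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    """Euclidean length ||v|| = sqrt(v . v)."""
    return math.sqrt(dot(v, v))

def unit(v):
    """Unit vector in the direction of v."""
    n = norm(v)
    return [a / n for a in v]

# Assumed data in the style of Exercise 1:
u, v = [-1, 2], [4, 6]
print(dot(u, u))              # u.u = 5
print(dot(v, u))              # v.u = 8
print(dot(v, u) / dot(u, u))  # (v.u)/(u.u) = 1.6
```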
2 6 CHAPER 6 Orthogonality and Least Squares 7. Since w ª º, 5 w w w ( ) ( 5) Since x ª 6º, x x x 6 ( ) A unit vector in the direction of the given vector is ª º ª º ª /5º ( ) 5 /5. A unit vector in the direction of the given vector is ª 6º ª 6º ª 6/ 6º / 6 ( 6) ( ) 6 6. A unit vector in the direction of the given vector is ª 7/º ª 7/º ª 7/ 69º / / / 69 (7/ ) (/ ) 69/6 / 69. A unit vector in the direction of the given vector is ª 8/º ª 8/º ª /5º (8/ ) /9 /5. Since x ª º and y ª º, 5 x y [ ( )] [ ( 5)] 5 and dist ( xy, ) ª º. Since u 5 and z dist ( uz, ) ª º, 8 u z [ ( )] [ 5 ( )] [ 8] 68 and 5. Since a b = 8( ) + ( 5)( ) = z, a and b are not orthogonal. 6. Since u v= () + ()( ) + ( 5)() =, u and v are orthogonal. 7. Since u v = ( ) + () + ( 5)( ) + (6) =, u and v are orthogonal. 8. Since y z= ( )() + 7( 8) + (5) + ( 7) = z, y and z are not orthogonal. 9. a. rue. See the definition of v. b. rue. See heorem (c). c. rue. See the discussion of Figure 5.
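The distance computations and orthogonality checks in Exercises 11-18 follow the same two formulas: dist(x, y) = ||x - y|| and "orthogonal" means the dot product is zero. A sketch with made-up integer vectors (not the book's data):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dist(x, y):
    """dist(x, y) = ||x - y||."""
    d = [a - b for a, b in zip(x, y)]
    return math.sqrt(dot(d, d))

def is_orthogonal(u, v):
    """u and v are orthogonal exactly when u . v = 0."""
    return dot(u, v) == 0

# Hypothetical vectors for illustration:
print(is_orthogonal([1, 2, 2], [2, 1, -2]))   # True: 2 + 2 - 4 = 0
print(dist([3, 0], [0, 4]))                   # 5.0
```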
3 6. Solutions 7 ª d. False. Counterexample: º. e. rue. See the box following Example 6.. a. rue. See Example and heorem (a). b. False. he absolute value sign is missing. See the box before Example. c. rue. See the defintion of orthogonal complement. d. rue. See the Pythagorean heorem. e. rue. See heorem.. heorem (b): ( uv) w ( u v) w ( u v ) w u w v w u wv w he second and third equalities used heorems (b) and (c), respectively, from Section.. heorem (c): ( ) ( ) cu v cu v c( u v) c( u v ) he second and third equalities used heorems (c) and (d), respectively, from Section... Since u u is the sum of the squares of the entries in u, u ut. he sum of squares of numbers is zero if and only if all the numbers are themselves zero.. One computes that u v = ( 7) + ( 5)( ) + ( )6 =, u u u ( 5) ( ), v v v ( 7) ( ) 6, and u v ( uv) ( u v ) ( ( 7)) ( 5 ( )) ( 6).. One computes that u v ( uv) ( u v) u u u vv v u u v v and u v ( uv) ( u v) u u u vv v u u v v so uv u v u u v v u u v v u v 5. When v ª aº, b the set H of all vectors ª xº y that are orthogonal to Y is the subspace of vectors whose entries satisfy ax + by =. If a z, then x = (b/a)y with y a free variable, and H is a line through the ª bº ½ origin. A natural choice for a basis for H in this case is ¾. a If a = and b z, then by =. Since b z, y = and x is a free variable. he subspace H is again a line through the origin. A natural choice ª º ½ ª bº ½ for a basis for H in this case is ¾, but ¾ a is still a basis for H since a = and b z. If a = and b =, then H = since the equation x + y = places no restrictions on x or y. 6. heorem in Chapter may be used to show that W is a subspace of the u matrix u. Geometrically, W is a plane through the origin., because W is the null space of
4 8 CHAPER 6 Orthogonality and Least Squares 7. If y is orthogonal to u and v, then y u = y v =, and hence by a property of the inner product, y (u + v) = y u + y v = + =. hus y is orthogonal to u+ v. 8. An arbitrary w in Span{u, v} has the form w cucv. If y is orthogonal to u and v, then u y = v y =. By heorem (b) and (c), w y ( cuc v) y c ( u y) c ( v y ) 9. A typical vector in W has the form w cv }c p v p. If x is orthogonal to each v j, then by heorems (b) and (c), w x ( c v }c v ) y c ( v x) }c ( v x ) p p p p So x is orthogonal to each w in W.. a. If z is in W A, u is in W, and c is any scalar, then (cz) u= c(z u) c =. Since u is any element of W, c z is in W A. b. Let z and z be in W A. hen for any u in W, ( zz) u z uz u. hus z z is in W A. c. Since is orthogonal to every vector, is in W A. hus W A is a subspace.. Suppose that x is in W and W A. Since x is in, x is orthogonal to every vector in W, including x itself. So x x =, which happens only when x =. W A. [M] a. One computes that a a a a and that ai a j for i zj. b. Answers will vary, but it should be that Au = u and Av = v. c. Answers will again vary, but the cosines should be equal. d. A conjecture is that multiplying by A does not change the lengths of vectors or the angles between vectors. x v. [M] Answers to the calculations will vary, but will demonstrate that the mapping x ( x) v v v¹ (for vz) is a linear transformation. o confirm this, let x and y be in n, and let c be any scalar. hen ( xy) v ( x v) ( y v) x v y v ( x y) v v v v ( x) ( y) v v ¹ v v ¹ v v¹ v v¹ and ( cx) v c( x v) x v c ( x) v v c v c( x) v v ¹ v v ¹ v v¹. [M] One finds that N ª 5 º ª 5 /º, R / /
5 6. Solutions 9 he row-column rule for computing RN produces the u zero matrix, which shows that the rows of R are orthogonal to the columns of N. his is expected by heorem since each row of R is in Row A and each column of N is in Nul A. 6. SOLUIONS Notes: he nonsquare matrices in heorems 6 and 7 are needed for the R factorizarion in Section 6.. It is important to emphasize that the term orthogonal matrix applies only to certain square matrices. he subsection on orthogonal projections not only sets the stage for the general case in Section 6., it also provides what is needed for the orthogonal diagonalization exercises in Section 7., because none of the eigenspaces there have dimension greater than. For this reason, the Gram-Schmidt process (Section 6.) is not really needed in Chapter 7. Exercises and prepare for Section 6... Since ª º ª º z, 7 the set is not orthogonal.. Since ª º ª º ª º ª 5º ª º ª 5º, the set is orthogonal.. Since ª 6º ª º z, 9 the set is not orthogonal.. Since ª º ª º ª º ª º ª º ª º 5 5, 6 6 the set is orthogonal. 5. Since ª º ª º ª º ª º ª º ª º , the set is orthogonal. 6. Since ª º ª º z, 5 8 the set is not orthogonal. 7. Since u u, { u, u } is an orthogonal set. Since the vectors are non-zero, u and u are linearly independent by heorem. wo such vectors in automatically form a basis for. So { u, u } is an orthogonal basis for. By heorem 5, x u x u x u u u u u u u
6 CHAPER 6 Orthogonality and Least Squares 8. Since u u 6 6, { u, u } is an orthogonal set. Since the vectors are non-zero, u and u are linearly independent by heorem. wo such vectors in automatically form a basis for. So { u, u } is an orthogonal basis for. By heorem 5, x u x u x u u u u u u u 9. Since u u u u u u, { u, u, u } is an orthogonal set. Since the vectors are non-zero, u, u, and u are linearly independent by heorem. hree such vectors in automatically form a basis for. So { u, u, u } is an orthogonal basis for. By heorem 5, x u x u x u 5 x u u u u u u u u u u u. Since u u u u u u, { u, u, u } is an orthogonal set. Since the vectors are non-zero, u, u, and u are linearly independent by heorem. hree such vectors in automatically form a basis for. So { u, u, u } is an orthogonal basis for. By heorem 5, x u x u x u x u u u u u u u u u u u. Let y ª º 7 and. u ª º he orthogonal projection of y onto the line through u and the origin is the orthogonal projection of y onto u, and this vector is y u ª º yˆ u u u u. Let y ª º and. u ª º he orthogonal projection of y onto the line through u and the origin is the orthogonal projection of y onto u, and this vector is y u ª /5º yˆ u u u u 5 6/5. he orthogonal projection of y onto u is y u ª /5º yˆ u u u u 65 7/5 he component of y orthogonal to u is ª º y y ˆ hus y y ª º ª º ˆ y y ˆ.. he orthogonal projection of y onto u is y u ª /5º yˆ u u u u 5 /5
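The projections in Exercises 11-16 all use the one formula y-hat = ((y·u)/(u·u)) u, with y - y-hat the component orthogonal to u. A sketch using exact rational arithmetic, matching the fractional answers in the text; the data y = (7, 6), u = (4, 2) is an assumed illustration:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj_line(y, u):
    """Orthogonal projection of y onto the line through u and the origin:
       y_hat = ((y.u)/(u.u)) u, computed exactly with Fractions."""
    c = Fraction(dot(y, u), dot(u, u))
    return [c * a for a in u]

y, u = [7, 6], [4, 2]                   # assumed illustrative data
y_hat = proj_line(y, u)                 # (8, 4)
z = [a - b for a, b in zip(y, y_hat)]   # component of y orthogonal to u
print(y_hat, z, dot(z, u))              # dot(z, u) == 0, as it must be
```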
7 6. Solutions he component of y orthogonal to u is ª º y y ˆ ª º ª º hus y yˆ yy ˆ. 5. he distance from y to the line through u and the origin is y ŷ. One computes that y u ª º ª 8º ª /5º y yˆ y u u u 6 /5 so y y ˆ is the desired distance. 6. he distance from y to the line through u and the origin is y ŷ. One computes that y u ª º ª º ª 6º y yˆ y u u u 9 so y y ˆ is the desired distance. 7. Let u ª /º /, / ª /º v. Since u v =, {u, v} is an orthogonal set. However, u u u / and / v v v /, so {u, v} is not an orthonormal set. he vectors u and v may be normalized to form the orthonormal set ª /º ª /º ½ u v ½, ¾ /, ¾ u v / / 8. Let u ª º, ª º v. Since u v = z, {u, v} is not an orthogonal set. ª.6 º 9. Let u,.8 v ª.8 º..6 Since u v =, {u, v} is an orthogonal set. Also, u u u and v v v, so {u, v} is an orthonormal set. ª /º ª /º. Let u /, / v. Since u v =, {u, v} is an orthogonal set. However, u u u and / v v v 5/ 9, so {u, v} is not an orthonormal set. he vectors u and v may be normalized to form the orthonormal set / / 5 ½ ª º ª º u v ½, / ¾, / 5 ¾ u v /
8 CHAPER 6 Orthogonality and Least Squares. Let u ª / º ª / º ª º /, v /, and w /. Since u v = u w = v w =, {u, v, w} is an / / / orthogonal set. Also, u u u, v v v, and w w w, so {u, v, w} is an orthonormal set.. Let u ª / 8º / 8, / 8 ª / º ª /º v, and w /. Since u v = u w = v w =, {u, v, w} is an / / orthogonal set. Also, u u u, v v v, and w w w, so {u, v, w} is an orthonormal set.. a. rue. For example, the vectors u and y in Example are linearly independent but not orthogonal. b. rue. he formulas for the weights are given in heorem 5. c. False. See the paragraph following Example 5. d. False. he matrix must also be square. See the paragraph before Example 7. e. False. See Example. he distance is y ŷ.. a. rue. But every orthogonal set of nonzero vectors is linearly independent. See heorem. b. False. o be orthonormal, the vectors is S must be unit vectors as well as being orthogonal to each other. c. rue. See heorem 7(a). d. rue. See the paragraph before Example. e. rue. See the paragraph before Example o prove part (b), note that ( Ux) ( Uy) ( Ux) ( Uy) x U Uy x y x y because U U I. If y = x in part (b), (Ux) (Ux) = x x, which implies part (a). Part (c) of the heorem follows immediately fom part (b). 6. A set of n nonzero orthogonal vectors must be linearly independent by heorem, so if such a set spans W it is a basis for W. hus W is an n-dimensional subspace of n, and W n. 7. If U has orthonormal columns, then U U I by heorem 6. If U is also a square matrix, then the equation U U I implies that U is invertible by the Invertible Matrix heorem. 8. If U is an n un orthogonal matrix, then I UU UU. Since U is the transpose of U, heorem 6 applied to U says that U has orthogonal columns. In particular, the columns of U are linearly independent and hence form a basis for n by the Invertible Matrix heorem. hat is, the rows of U form a basis (an orthonormal basis) for n. 9. Since U and V are orthogonal, each is invertible. 
By Theorem 6 in Section 2.2, UV is invertible, and (UV)^(-1) = V^(-1)U^(-1) = V^T U^T = (UV)^T, where the final equality holds by Theorem 3 in Section 2.1. Thus UV is an orthogonal matrix.
9 6. Solutions. If U is an orthogonal matrix, its columns are orthonormal. Interchanging the columns does not change their orthonormality, so the new matrix say, V still has orthonormal columns. By heorem 6, V V I. Since V is square, V V by the Invertible Matrix heorem. y u. Suppose that yˆ u. Replacing u by cu with c z gives u u y ( c u ) ( ) c( y c u ) ( ) c ( y c u ) y u u u u u yˆ ( cu) ( cu) c ( u u) c ( u u) u u So ŷ does not depend on the choice of a nonzero u in the line L used in the formula.. If v v, then by heorem (c) in Section 6., ( cv ) ( c v ) c[ v ( c v )] cc ( v v ) cc x u. Let L = Span{u}, where u is nonzero, and let ( x) u u u. For any vectors x and y in n and any scalars c and d, the properties of the inner product (heorem ) show that ( cxdy) u c ( x dy) u u u cx udy u u u u cx u dy u u u u u u u c ( x) d ( y ) hus is a linear transformation. Another approach is to view as the composition of the following three linear mappings: xa = x v, a b = a / v v, and b bv.. Let L = Span{u}, where u is nonzero, and let ( x) reflly projlyy. By Exercise, the mapping y proj L y is linear. hus for any vectors y and z in n and any scalars c and d, c ( y dz) proj L ( cydz) ( cydz ) ( cprojlyd proj Lz) cydz cprojlycyd projlzdz c( proj Lyy) d( proj Lzz ) c ( y) d ( z ) hus is a linear transformation. 5. [M] One can compute that A A I. Since the off-diagonal entries in A are orthogonal. A A are zero, the columns of
10 CHAPER 6 Orthogonality and Least Squares 6. [M] a. One computes that U U I, while UU ª º ¹ he matrices U U and UU are of different sizes and look nothing like each other. b. Answers will vary. he vector p UU y is in Col U because p UU ( y ). Since the columns of U are simply scaled versions of the columns of A, Col U = Col A. hus each p is in Col A. c. One computes that U z. d. From (c), z is orthogonal to each column of A. By Exercise 9 in Section 6., z must be orthogonal to every vector in Col A; that is, z is in (Col A) A. 6. SOLUIONS Notes: Example seems to help students understand heorem 8. heorem 8 is needed for the Gram-Schmidt process (but only for a subspace that itself has an orthogonal basis). heorems 8 and 9 are needed for the discussions of least squares in Sections 6.5 and 6.6. heorem is used with the R factorization to provide a good numerical method for solving least squares problems, in Section 6.5. Exercises 9 and lead naturally into consideration of the Gram-Schmidt process.. he vector in Span{ u } is ª º 6 x u 7 u u u u u 6 x u Since x cu c u c u u, the vector u u ª º ª º ª º 8 6 x u x u u u is in Span{ u, u, u }.
11 6. Solutions 5. he vector in Span{ u } is ª º v u u u u u u 7 v u Since x ucu cu cu, u u ª º ª º ª º 5 v u v u u u 5 is in Span{ u, u, u }. the vector. Since u u, { u, u } is an orthogonal set. he orthogonal projection of y onto Span{ u, u } is ª º ª º ª º y u y u 5 5 yˆ u u u u u u u u. Since u u, { u, u } is an orthogonal set. he orthogonal projection of y onto Span{ u, u } is ª º ª º ª 6º y u y u 5 6 yˆ u u u u u u u u Since u u, { u, u } is an orthogonal set. he orthogonal projection of y onto Span{ u, u } is ª º ª º ª º y u y u yˆ u u u u u u u u Since u u, { u, u } is an orthogonal set. he orthogonal projection of y onto Span{ u, u } is ª º ª º ª 6º y u y u yˆ u u u u u u u u 8 7. Since u u 5 8, { u, u } is an orthogonal set. By the Orthogonal Decomposition heorem, ª /º ª 7 / º y u y u yˆ u u u u /, ˆ 7/ z y y u u u u 8/ 7/ and y = ŷ + z, where ŷ is in W and z is in W A.
12 6 CHAPER 6 Orthogonality and Least Squares 8. Since u u, { u, u } is an orthogonal set. By the Orthogonal Decomposition heorem, ª /º ª 5/º y u y u yˆ u u u u 7/, ˆ / z y y u u u u and y = ŷ + z, where ŷ is in W and z is in W A. 9. Since u u u u u u, { u, u, u } is an orthogonal set. By the Orthogonal Decomposition heorem, ª º ª º y u y u y u yˆ u u u u u u, z y yˆ u u u u u u and y= ŷ + z, where ŷ is in W and z is in W A.. Since u u u u u u, { u, u, u } is an orthogonal set. By the Orthogonal Decomposition heorem, ª 5º ª º y u y u y u 5 yˆ u u u u u u, z y yˆ u u u u u u 6 and y= ŷ + z, where ŷ is in W and z is in W A.. Note that v and v are orthogonal. he Best Approximation heorem says that ŷ, which is the orthogonal projection of y onto W Span{ v, v }, is the closest point to y in W. his vector is ª º y v y v yˆ v v v v v v v v. Note that v and v are orthogonal. he Best Approximation heorem says that ŷ, which is the orthogonal projection of \ onto W Span{ v, v }, is the closest point to y in W. his vector is ª º 5 y v y v yˆ v v v v v v v v 9. Note that v and v are orthogonal. By the Best Approximation heorem, the closest point in Span{ v, v } to z is ª º z v z v 7 zˆ v v v v v v v v
13 6. Solutions 7. Note that v and v are orthogonal. By the Best Approximation heorem, the closest point in Span{ v, v } to z is ª º z v z v zˆ v v v v v v v v / / 5. he distance from the point y in to a subspace W is defined as the distance from y to the closest point in W. Since the closest point in W to y is yˆ proj y, the desired distance is y ŷ. One computes that yˆ = 9, y y ˆ =, and y y ˆ = =. 6 W 6. he distance from the point y in to a subspace W is defined as the distance from y to the closest point in W. Since the closest point in W to y is yˆ proj y, the desired distance is y ŷ. One computes that 7. a. ª º ª º yˆ y y ˆ, and y ŷ = 8. U U ª 8/9 /9 /9º ª º, UU /9 5/9 /9 /9 /9 5/9 b. Since U U I, the columns of U form an orthonormal basis for W, and by heorem ª 8/9 /9 /9ºª º ª º projw y UU y /9 5/ 9 /9 8. /9 /9 5/ a. U U >@, UU ª / /º / 9/ b. Since U U, { u } forms an orthonormal basis for W, and by heorem ª / /ºª 7º ª º proj W y UU y. / 9/ 9 6 W 9. By the Orthogonal Decomposition heorem, u is the sum of a vector in W Span{ u, u } and a vector v orthogonal to W. his exercise asks for the vector v: ª º ª º ª º v u projw u u /5 /5 u u 5 ¹ /5 /5 Any multiple of the vector v will also be in W A.
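The Orthogonal Decomposition Theorem used throughout Exercises 7-14 writes y = y-hat + z with y-hat the sum of the projections onto an orthogonal basis of W and z in the orthogonal complement. A sketch in exact arithmetic; the data y, u1, u2 is hypothetical, and note the basis must already be orthogonal for this formula to apply:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj_subspace(y, basis):
    """Projection of y onto W = Span(basis), where basis is an ORTHOGONAL
       set (as the theorem requires): y_hat = sum of ((y.u)/(u.u)) u."""
    y_hat = [Fraction(0)] * len(y)
    for u in basis:
        c = Fraction(dot(y, u), dot(u, u))
        y_hat = [h + c * a for h, a in zip(y_hat, u)]
    return y_hat

# Hypothetical data: u1 and u2 are orthogonal, so they qualify as a basis
y = [-1, 4, 3]
u1, u2 = [1, 1, 0], [-1, 1, 0]
y_hat = proj_subspace(y, [u1, u2])      # lies in W
z = [a - b for a, b in zip(y, y_hat)]   # lies in the orthogonal complement
print(y_hat, z, dot(z, u1), dot(z, u2))
```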
14 8 CHAPER 6 Orthogonality and Least Squares. By the Orthogonal Decomposition heorem, u is the sum of a vector in W Span{ u, u } and a vector v orthogonal to W. his exercise asks for the vector v: ª º ª º ª º v u projw u u / 5 / 5 u u 6 ¹ /5 /5 Any multiple of the vector v will also be in W A.. a. rue. See the calculations for z in Example or the box after Example 6 in Section 6.. b. rue. See the Orthogonal Decomposition heorem. c. False. See the last paragraph in the proof of heorem 8, or see the second paragraph after the statement of heorem 9. d. rue. See the box before the Best Approximation heorem. e. rue. heorem applies to the column space W of U because the columns of U are linearly independent and hence form a basis for W.. a. rue. See the proof of the Orthogonal Decomposition heorem. b. rue. See the subsection A Geometric Interpretation of the Orthogonal Projection. c. rue. he orthgonal decomposition in heorem 8 is unique. d. False. he Best Approximation heorem says that the best approximation to y is proj W y. e. False. his statement is only true if x is in the column space of U. If n > p, then the column space of U will not be all of n, so the statement cannot be true for all x in n.. By the Orthogonal Decomposition heorem, each x in n can be written uniquely as x = p + u, with p in Row A and u in (Row A) A A. By heorem in Section 6., (Row A) Nul A, so u is in Nul A. Next, suppose Ax = b is consistent. Let x be a solution and write x = p + u as above. hen Ap = A(x u) = Ax Au = b = b, so the equation Ax = b has at least one solution p in Row A. Finally, suppose that p and p are both in Row A and both satisfy Ax = b. hen p p is in Nul A (Row A) A, since A( p p) Ap Ap b b. he equations p p ( pp ) and p = p+ both then decompose p as the sum of a vector in Row A and a vector in (Row A) A. By the uniqueness of the orthogonal decomposition (heorem 8), p p, and p is unique.. a. 
By hypothesis, the vectors w1, …, wp are pairwise orthogonal, and the vectors v1, …, vq are pairwise orthogonal. Since wi is in W for any i and vj is in the orthogonal complement of W for any j, wi·vj = 0 for any i and j. Thus {w1, …, wp, v1, …, vq} forms an orthogonal set. b. For any y in R^n, write y = y-hat + z as in the Orthogonal Decomposition Theorem, with y-hat in W and z in the orthogonal complement of W. Then there exist scalars c1, …, cp and d1, …, dq such that y = y-hat + z = c1 w1 + … + cp wp + d1 v1 + … + dq vq. Thus the set {w1, …, wp, v1, …, vq} spans R^n. c. The set {w1, …, wp, v1, …, vq} is linearly independent by (a) and spans R^n by (b), and is thus a basis for R^n. Hence dim W + dim W-perp = p + q = n = dim R^n.
15 6. Solutions 9 5. [M] Since U U I, U has orthonormal columns by heorem 6 in Section 6.. he closest point to y in Col U is the orthogonal projection ŷ of y onto Col U. From heorem, yˆ UU 7 y ª º 6. [M] he distance from b to Col U is b ˆb, where b ˆ UU 7 b. One computes that ª º ª º ˆ UU 7 ˆ ˆ b b b b b b which is.66 to four decimal places. 6. SOLUIONS Notes: he R factorization encapsulates the essential outcome of the Gram-Schmidt process, just as the LU factorization describes the result of a row reduction process. For practical use of linear algebra, the factorizations are more important than the algorithms that produce them. In fact, the Gram-Schmidt process is not the appropriate way to compute the R factorization. For that reason, one should consider deemphasizing the hand calculation of the Gram-Schmidt process, even though it provides easy exam questions. he Gram-Schmidt process is used in Sections 6.7 and 6.8, in connection with various sets of orthogonal polynomials. he process is mentioned in Sections 7. and 7., but the one-dimensional projection constructed in Section 6. will suffice. he R factorization is used in an optional subsection of Section 6.5, and it is needed in Supplementary Exercise 7 of Chapter 7 to produce the Cholesky factorization of a positive definite matrix. ª º x v. Set v x and compute that v x v x v 5. hus an orthogonal basis for W is v v ª º ª º ½, 5 ¾.
16 5 CHAPER 6 Orthogonality and Least Squares ª 5º x v. Set v x and compute that v x v x v. v v 8 ª º ª 5º ½, ¾. 8 hus an orthogonal basis for W is ª º x v. Set v x and compute that v x v x v /. v v / ª º ª º ½ 5, / ¾. / ª º x v. Set v x and compute that v x v x ( ) v 6. v v ª º ª º ½, 6 ¾. 5 hus an orthogonal basis for W is hus an orthogonal basis for W is ª 5º x v 5. Set v x and compute that v x v x v. hus an orthogonal basis for W is v v ª º ª 5º ½, ¾. ª º 6 x v 6. Set v x and compute that v x v x ( ) v. hus an orthogonal basis for W is v v ª º ª º ½ 6, ¾.
17 6. Solutions 5 7. Since v and v 7 / 6 /, an orthonormal basis for W is ª / º ª / 6º ½ v v ½, ¾ 5/, / 6 ¾. v v / / 6 8. Since v 5 and v 5 6, an orthonormal basis for W is ª / 5º ª / 6º ½ v v ½, ¾ / 5, / 6 ¾. v v 5/ 5 / 6 9. Call the columns of the matrix x, x, and x and perform the Gram-Schmidt process on these vectors: v x v ª º x v x v x ( ) v v v v ª º x v x v x v v x v v v v v v ¹ hus an orthogonal basis for W is ª º ª º ª º ½,, ¾.. Call the columns of the matrix x, x, and x and perform the Gram-Schmidt process on these vectors: v x v ª º x v x v x ( ) v v v v ª º x v x v 5 x v v x v v v v v v ª º ª º ª º ½ hus an orthogonal basis for W is,, ¾.
18 5 CHAPER 6 Orthogonality and Least Squares. Call the columns of the matrix x, x, and x and perform the Gram-Schmidt process on these vectors: v x ª º x v v x v x ( ) v v v ª º x v x v v x v v x v v v v v v ¹ hus an orthogonal basis for W is ª º ª º ª º ½,, ¾.. Call the columns of the matrix x, x, and x and perform the Gram-Schmidt process on these vectors: v x v ª º x v x v x v v v ª º x v x v 7 v x v v x v v v v v v ª º ª º ª º ½ hus an orthogonal basis for W is,, ¾.
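The Gram-Schmidt computations in Exercises 9-12 all follow one recipe: keep the first vector, then subtract from each later x its projections onto the earlier v's. A sketch in exact rational arithmetic; the two-column input is a made-up example, not one of the book's matrices:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(xs):
    """Turn linearly independent vectors xs into an orthogonal basis:
       v_k = x_k - sum over earlier v of ((x_k . v)/(v . v)) v."""
    vs = []
    for x in xs:
        v = [Fraction(a) for a in x]
        for w in vs:
            c = Fraction(dot(x, w), dot(w, w))
            v = [a - c * b for a, b in zip(v, w)]
        vs.append(v)
    return vs

vs = gram_schmidt([[3, 1], [2, 2]])   # hypothetical columns
print(vs)                             # [[3, 1], [-2/5, 6/5]]
print(dot(vs[0], vs[1]))              # 0, so the output is orthogonal
```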
19 6. Solutions 5. Since A and are given, R A. Since A and are given, R A ª 5 9º 5/6 /6 /6 /6 7 ª º ª 6 º /6 5/6 /6 / ª º /7 5/7 /7 /7 5 7 ª º ª 7 7º 5/7 /7 /7 / he columns of will be normalized versions of the vectors v, v, and v found in Exercise. hus ª / 5 / /º / 5 ª 5 5 5º / 5 / /, R A 6 / 5 / / / 5 / / 6. he columns of will be normalized versions of the vectors v, v, and v found in Exercise. hus ª / / /º / / / ª 8 7º /, R A / / / 6 / / / 7. a. False. Scaling was used in Example, but the scale factor was nonzero. b. rue. See () in the statement of heorem. c. rue. See the solution of Example. 8. a. False. he three orthogonal vectors must be nonzero to be a basis for a three-dimensional subspace. (his was the case in Step of the solution of Example.) b. rue. If x is not in a subspace w, then x cannot equal proj W x, because proj W x is in W. his idea was used for v k in the proof of heorem. c. rue. See heorem. 9. Suppose that x satisfies Rx = ; then Rx = =, and Ax =. Since the columns of A are linearly independent, x must be. his fact, in turn, shows that the columns of R are linearly indepedent. Since R is square, it is invertible by the Invertible Matrix heorem.
20 5 CHAPER 6 Orthogonality and Least Squares. If y is in Col A, then y = Ax for some x. hen y = Rx = (Rx), which shows that y is a linear combination of the columns of using the entries in Rx as weights. Conversly, suppose that y = x for some x. Since R is invertible, the equation A = R implies that AR. So y AR x A( R x ), which shows that y is in Col A.. Denote the columns of by { q, }, q n }. Note that n dm, because A is m un and has linearly independent columns. he columns of can be extended to an orthonormal basis for m as follows. Let f be the first vector in the standard basis for m that is not in W n Span{ q, }, q n}, let u f projw n f, and let q u n / u. hen { q, }, qn, q n} is an orthonormal basis for Wn Span{ q, }, qn, q n }. Next let f be the first vector in the standard basis for m that is not in Wn, let u f proj W, n f and let qn u/ u. hen { q, }, qn, qn, q n} is an orthogonal basis for Wn Span{ q, }, qn, qn, q n }. his process will continue until m n vectors have been added to the original n vectors, and { q, }, qn, qn, }, q m} is an orthonormal basis for m. }. hen, using partitioned matrix multiplication, Let > q and n ª Rº R A O. m. We may assume that { u, }, u p } is an orthonormal basis for W, by normalizing the vectors in the original basis given for W, if necessary. Let U be the matrix whose columns are u, }, u. hen, by heorem in Section 6., ( x) proj W x ( UU ) x for x in hence is a linear transformation, as was shown in Section.8. n. hus is a matrix transformation and. Given A = R, partition A > A where A has p columns. Partition as has p columns, and partition R as R ª R R º, O R where R is a p up matrix. hen ª R R º A A A R R R R O R p where hus A R. he matrix has orthonormal columns because its columns come from. he matrix R is square and upper triangular due to its position within the upper triangular matrix R. he diagonal entries of R are positive because they are diagonal entries of R. hus R is a R factorization of A.. 
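The QR factorization discussed above, with Q obtained by normalizing the Gram-Schmidt vectors and R = Q^T A, can be sketched in floating point. The 2x2 matrix is an assumed example; A is given as a list of columns, and A is assumed to have linearly independent columns so that R's diagonal is nonzero:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def qr_columns(a_cols):
    """QR via Gram-Schmidt on the columns of A: Q has orthonormal columns
       and R = Q^T A is upper triangular."""
    q_cols = []
    for x in a_cols:
        v = list(map(float, x))
        for q in q_cols:
            c = dot(x, q)                       # component of x along q
            v = [a - c * b for a, b in zip(v, q)]
        n = math.sqrt(dot(v, v))
        q_cols.append([a / n for a in v])       # normalize
    # R[i][j] = q_i . a_j, which is zero below the diagonal
    r = [[dot(q_cols[i], a_cols[j]) for j in range(len(a_cols))]
         for i in range(len(q_cols))]
    return q_cols, r

Q, R = qr_columns([[3, 1], [2, 2]])   # hypothetical matrix
print(R[1][0])                        # ~0: R is upper triangular
```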
24. [M] Call the columns of the matrix x1, x2, x3, and x4 and perform the Gram-Schmidt process on these vectors: v1 = x1 and v2 = x2 − ((x2·v1)/(v1·v1)) v1.
Continuing, v3 = x3 − ((x3·v1)/(v1·v1)) v1 − ((x3·v2)/(v2·v2)) v2 and v4 = x4 − ((x4·v1)/(v1·v1)) v1 − ((x4·v2)/(v2·v2)) v2 − ((x4·v3)/(v3·v3)) v3. Thus an orthogonal basis for W is {v1, v2, v3, v4}.

25. [M] The columns of Q will be normalized versions of the vectors v1, v2, v3, and v4 found in Exercise 24; thus Q has these unit vectors as its columns, and R = Q^T A.

26. [M] In MATLAB, when A has n columns, suitable commands are

Q = A(:,1)/norm(A(:,1)) % The first column of Q
for j = 2 : n
v = A(:,j) - Q*(Q'*A(:,j))
Q(:,j) = v/norm(v) % Add a new column to Q
end

6.5 SOLUTIONS

Notes: This is a core section; the basic geometric principles in this section provide the foundation for all the applications in Sections 6.6 through 6.8. Yet this section need not take a full day. Each example provides a stopping place. Theorem 13 and Example 1 are all that is needed for Section 6.6. Theorem 15, however, gives an illustration of why the QR factorization is important. Example 4 is related to Exercise 17 in Section 6.6.
22 56 CHAPER 6 Orthogonality and Least Squares. o find the normal equations and to find ˆx, compute A A A b ª º ª º 6 ª º ª º ª º ª º a. he normal equations are ( A A) x A b: b. Compute 6 ª x º. x ª º ª º ª 6 º ª º ª ºª º ˆx ( A A) A b 6 ª º ª º. o find the normal equations and to find x ˆ, compute A A A b ª º ª º 8 ª º 8 ª 5º ª º 8 ª º 8 ª x º. 8 x a. he normal equations are ( ª º ª º A A) x A b: b. Compute ª 8º ª º ª 8ºª º ˆx ( A A) A b ª º ª º o find the normal equations and to find ˆx, compute A A A b ª º ª º ª 6 6º ª º ª º ª 6º 5 6
23 6.5 Solutions 57 a. he normal equations are ( ª 6 6º ª x º ª 6º A A) x A b: 6 x 6 b. Compute xˆ =(Α Α) Α b = 6 = ª 88º ª /º 6 7 /. o find the normal equations and to find ˆx, compute ª º ª º A A ª º ª 5º ª º ª 6º A b a. he normal equations are ( ª º ª x º ª 6º A A) x A b: x b. Compute 6 6 xˆ =(Α Α) Α b = = ª º ª º 5. o find the least squares solutions to Ax = b, compute and row reduce the augmented matrix for the system A Ax A b: ª º ª 5º ª A A A º b a 5 so all vectors of the form x ˆ = + x are the least-squares solutions of Ax = b. 6. o find the least squares solutions to Ax = b, compute and row reduce the augmented matrix for the system A Ax A b: ª 6 7º ª 5º ª A A A º b a 5 5 so all vectors of the form x ˆ = + x are the least-squares solutions of Ax = b.
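The pattern in Exercises 1-6 is always the same: form A^T A and A^T b, then solve the normal equations (A^T A) x-hat = A^T b. A sketch for a two-column A, solving the 2x2 system by Cramer's rule; the particular A and b are a made-up example, and A^T A is assumed invertible (linearly independent columns):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normal_equations_2col(a_cols, b):
    """Least-squares solution of Ax = b for a two-column A:
       build A^T A and A^T b, then solve (A^T A) x_hat = A^T b."""
    g = [[dot(ai, aj) for aj in a_cols] for ai in a_cols]   # A^T A
    h = [dot(ai, b) for ai in a_cols]                       # A^T b
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    x1 = (h[0] * g[1][1] - g[0][1] * h[1]) / det
    x2 = (g[0][0] * h[1] - h[0] * g[1][0]) / det
    return [x1, x2]

# Hypothetical data: columns of A, then b
a_cols = [[1, 1, 1], [0, 1, 2]]
b = [6, 0, 0]
print(normal_equations_2col(a_cols, b))   # [5.0, -3.0]
```

The defining property can be confirmed directly: the residual b − A x-hat is orthogonal to both columns of A.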
24 58 CHAPER 6 Orthogonality and Least Squares 7. From Exercise, A ª º, 5 ª º b, and x ª º ˆ. Since ª º ª º ª º ª º ª º ª º Axˆ b the least squares error is Axˆ b. 8. From Exercise, A ª º, ª 5º b, and x ªº ˆ. Since ª º ª 5º ª º ª 5º ª º ªº Axˆ b the least squares error is Axˆ b. 9. (a) Because the columns a and a of A are orthogonal, the method of Example may be used to find ˆb, the orthogonal projection of b onto Col A: ª º ª 5º ª º ˆ b a b a b a a a a a a a a (b) he vector ˆx contains the weights which must be placed on a and a to produce ˆb. hese weights ª º are easily read from the above equation, so x ˆ.. (a) Because the columns a and a of A are orthogonal, the method of Example may be used to find ˆb, the orthogonal projection of b onto Col A: ª º ª º ª º ˆ b a b a b a a a a a a a a (b) he vector ˆx contains the weights which must be placed on a and a to produce ˆb. hese weights ª º are easily read from the above equation, so x ˆ.
25 6.5 Solutions 59. (a) Because the columns a, a and a of A are orthogonal, the method of Example may be used to find ˆb, the orthogonal projection of b onto Col A: ˆ b a b a b a b a a a aa a a a a a a a ª º ª º ª º ª º (b) he vector ˆx contains the weights which must be placed on a, a, and a to produce ˆb. hese ª º weights are easily read from the above equation, so x ˆ.. (a) Because the columns a, a and a of A are orthogonal, the method of Example may be used to find ˆb, the orthogonal projection of b onto Col A: ˆ b a b a b a 5 b a a a a a a a a a a a a ¹ ª º ª º ª º ª 5º 5 6 (b) he vector ˆx contains the weights which must be placed on a, a, and a to produce ˆb. hese ª º weights are easily read from the above equation, so x ˆ.. One computes that ª º ª º Au, A b u, b Au 6 ª 7º ª º Av, A b v, b Av 9 7 Since Av is closer to b than Au is, Au is not the closest point in Col A to b. hus u cannot be a leastsquares solution of Ax = b.
26 6 CHAPER 6 Orthogonality and Least Squares. One computes that ª º ª º Au 8, A b u, b Au ª 7º ª º Av, A b v, b Av 8 Since Au and Au are equally close to b, and the orthogonal projection is the unique closest point in Col A to b, neither Au nor Av can be the closest point in Col A to b. hus neither u nor v can be a least-squares solution of Ax= b. 5. he least squares solution satisfies Rxˆ b. Since for the system may be row reduced to find ª 5 7º ª º ª R º b a and so x ª º ˆ is the least squares solution of Ax= b. 6. he least squares solution satisfies Rxˆ b. Since matrix for the system may be row reduced to find ª 7 / º ª.9º ª R º b a 5 9/.9 and so x ª º ˆ is the least squares solution of Ax= b. 5 R ª º R ª º 5 and and b b ª 7º, the augmented matrix ª 7 / º 9/, the augmented 7. a. rue. See the beginning of the section. he distance from Ax to b is Ax b. b. rue. See the comments about equation (). c. False. he inequality points in the wrong direction. See the definition of a least-squares solution. d. rue. See heorem. e. rue. See heorem. 8. a. rue. See the paragraph following the definition of a least-squares solution. b. False. If ˆx is the least-squares solution, then A ˆx is the point in the column space of A closest to b. See Figure and the paragraph preceding it. c. rue. See the discussion following equation (). d. False. he formula applies only when the columns of A are linearly independent. See heorem. e. False. See the comments after Example. f. False. See the Numerical Note.
19. a. If Ax = 0, then A^T Ax = A^T 0 = 0. This shows that Nul A is contained in Nul A^T A. b. If A^T Ax = 0, then x^T A^T Ax = x^T 0 = 0. So (Ax)^T (Ax) = 0, which means that ||Ax||^2 = 0, and hence Ax = 0. This shows that Nul A^T A is contained in Nul A.

20. Suppose that Ax = 0. Then A^T Ax = A^T 0 = 0. Since A^T A is invertible, x must be 0. Hence the columns of A are linearly independent.

21. a. If A has linearly independent columns, then the equation Ax = 0 has only the trivial solution. By Exercise 19, the equation A^T Ax = 0 also has only the trivial solution. Since A^T A is a square matrix, it must be invertible by the Invertible Matrix Theorem. b. Since the n linearly independent columns of A belong to R^m, m could not be less than n. c. The n linearly independent columns of A form a basis for Col A, so the rank of A is n.

22. Note that A^T A has n columns because A does. Then by the Rank Theorem and Exercise 19, rank A^T A = n − dim Nul A^T A = n − dim Nul A = rank A.

23. By Theorem 14, b-hat = A x-hat = A(A^T A)^(-1) A^T b. The matrix A(A^T A)^(-1) A^T is sometimes called the hat-matrix in statistics.

24. Since in this case A^T A = I, the normal equations give x-hat = A^T b.

25. The normal equations are [2 2; 2 2][x; y] = [6; 6], whose solution is the set of all (x, y) such that x + y = 3. The solutions correspond to the points on the line midway between the lines x + y = 2 and x + y = 4.

26. [M] Using .7 as an approximation for √2/2, one obtains approximate values .555 and .5; using the better approximation .77, one obtains .5559.

6.6 SOLUTIONS

Notes: This section is a valuable reference for any person who works with data that requires statistical analysis. Many graduate fields require such work. Science students in particular will benefit from Example 1. The general linear model and the subsequent examples are aimed at students who may take a multivariate statistics course. That may include more students than one might expect.

1. The design matrix X and the observation vector y are X = [1 0; 1 1; 1 2; 1 3], y = [1; 1; 2; 2], and one can compute X^T X = [4 6; 6 14], X^T y = [6; 11], and β-hat = (X^T X)^(-1) X^T y = [.9; .4]. The least-squares line y = β0 + β1 x is thus y = .9 + .4x.
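The least-squares line fits in Exercises 1-4 reduce to the normal equations for the design matrix X = [1, x]. A sketch; the data points are an assumption, chosen so the result matches the stated intercept .9 (the slope .4 is likewise assumed, since that digit did not survive in the source):

```python
def fit_line(xs, ys):
    """Least-squares line y = b0 + b1*x via the normal equations for the
       design matrix X whose rows are (1, x_i)."""
    n = len(xs)
    sx, sxx = sum(xs), sum(x * x for x in xs)
    sy, sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx            # det(X^T X)
    b0 = (sxx * sy - sx * sxy) / det   # intercept
    b1 = (n * sxy - sx * sy) / det     # slope
    return b0, b1

# Assumed data points (0,1), (1,1), (2,2), (3,2)
b0, b1 = fit_line([0, 1, 2, 3], [1, 1, 2, 2])
print(b0, b1)   # 0.9 0.4
```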
2. The design matrix X and the observation vector y are

X = [1 1; 1 2; 1 4; 1 5],  y = [0; 1; 2; 3],

and one can compute XᵀX, Xᵀy, and β̂ = (XᵀX)⁻¹Xᵀy = [−.6; .7]. The least-squares line y = β₀ + β₁x is thus y = −.6 + .7x.

3. The design matrix X and the observation vector y are formed from the given data in the same way, and one can compute XᵀX, Xᵀy, and β̂ = (XᵀX)⁻¹Xᵀy; substituting the entries of β̂ gives the least-squares line y = β̂₀ + β̂₁x.

4. The design matrix X and the observation vector y are formed from the given data in the same way, and one can compute XᵀX, Xᵀy, and β̂ = (XᵀX)⁻¹Xᵀy; substituting the entries of β̂ gives the least-squares line y = β̂₀ + β̂₁x.

5. If two data points have different x-coordinates, then the two columns of the design matrix X cannot be multiples of each other and hence are linearly independent. By Theorem 14 in Section 6.5, the normal equations have a unique solution.

6. If the columns of X were linearly dependent, then the same dependence relation would hold for the vectors in ℝ³ formed from the top three entries in each column. That is, the columns of the matrix

[1 x₁ x₁²; 1 x₂ x₂²; 1 x₃ x₃²]

would also be linearly dependent, and so this matrix (called a Vandermonde matrix) would be noninvertible. But the determinant of this matrix is (x₂ − x₁)(x₃ − x₁)(x₃ − x₂) ≠ 0, since x₁, x₂, and x₃ are distinct. Thus the matrix is invertible, which means that the columns of X are in fact linearly independent. By Theorem 14 in Section 6.5, the normal equations have a unique solution.
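The determinant factorization used in Exercise 6 is easy to spot-check numerically; nothing in this sketch depends on the text's data, the sample points are arbitrary distinct values.

```python
def vandermonde_det3(x1, x2, x3):
    """Determinant of [[1, x1, x1^2], [1, x2, x2^2], [1, x3, x3^2]]
    by cofactor expansion along the first column."""
    m = [[1, x1, x1 * x1], [1, x2, x2 * x2], [1, x3, x3 * x3]]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[1][0] * (m[0][1] * m[2][2] - m[0][2] * m[2][1])
            + m[2][0] * (m[0][1] * m[1][2] - m[0][2] * m[1][1]))

# Compare with the factored form (x2 - x1)(x3 - x1)(x3 - x2).
for (a, b, c) in [(0, 1, 2), (-1.5, 0.5, 3.0), (2, 5, 7)]:
    assert abs(vandermonde_det3(a, b, c) - (b - a) * (c - a) * (c - b)) < 1e-9
```

Since the factors vanish only when two of the points coincide, distinct x-values force the determinant to be nonzero, exactly the argument in the solution.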
7. a. The model that produces the correct least-squares fit is y = Xβ + ε, where

X = [1 1; 2 4; 3 9; 4 16; 5 25],  y = [1.8; 2.7; 3.4; 3.8; 3.9],  β = [β₁; β₂],  ε = [ε₁; ε₂; ε₃; ε₄; ε₅].

b. [M] One computes that (to two decimal places) β̂ = [1.76; −.20], so the desired least-squares equation is y = 1.76x − .20x².

8. a. The model that produces the correct least-squares fit is y = Xβ + ε, where

X = [x₁ x₁² x₁³; … ; xₙ xₙ² xₙ³],  y = [y₁; …; yₙ],  β = [β₁; β₂; β₃],  ε = [ε₁; …; εₙ].

b. [M] For the given data one computes β̂ = (XᵀX)⁻¹Xᵀy (to four decimal places) and substitutes into y = β̂₁x + β̂₂x² + β̂₃x³ to obtain the least-squares cubic.

9. The model that produces the correct least-squares fit is y = Xβ + ε, where

X = [cos 1 sin 1; cos 2 sin 2; cos 3 sin 3],  y = [7.9; 5.4; −.9],  β = [A; B],  ε = [ε₁; ε₂; ε₃].

10. a. The model that produces the correct least-squares fit is y = Xβ + ε, where

X = [e^(−.02(10)) e^(−.07(10)); e^(−.02(11)) e^(−.07(11)); e^(−.02(12)) e^(−.07(12)); e^(−.02(14)) e^(−.07(14)); e^(−.02(15)) e^(−.07(15))],

y = [21.34; 20.68; 20.05; 18.87; 18.30],  β = [M_A; M_B],  ε = [ε₁; …; ε₅].
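A model such as y = M_A e^(−.02t) + M_B e^(−.07t) is still linear in the parameters, so the same normal equations apply with the design-matrix columns given by the basis functions. The sketch below uses synthetic data generated from assumed amplitudes 20 and 10 (not the exercise's measurements) and recovers them, which is a useful sanity check on the general-linear-model setup.

```python
import math

def fit_two_basis(ts, ys, f1, f2):
    """Least-squares fit of y = c1*f1(t) + c2*f2(t): the design matrix has
    columns (f1(t_i)) and (f2(t_i)); solve the 2x2 normal equations."""
    a = [f1(t) for t in ts]
    b = [f2(t) for t in ts]
    g11 = sum(x * x for x in a)
    g12 = sum(x * y for x, y in zip(a, b))
    g22 = sum(x * x for x in b)
    c1 = sum(x * y for x, y in zip(a, ys))
    c2 = sum(x * y for x, y in zip(b, ys))
    det = g11 * g22 - g12 * g12
    return (g22 * c1 - g12 * c2) / det, (g11 * c2 - g12 * c1) / det

f1 = lambda t: math.exp(-0.02 * t)
f2 = lambda t: math.exp(-0.07 * t)
ts = [10, 11, 12, 14, 15]
# Synthetic observations built from known amplitudes 20 and 10.
ys = [20.0 * f1(t) + 10.0 * f2(t) for t in ts]
cA, cB = fit_two_basis(ts, ys, f1, f2)
```

Because the synthetic data lie exactly in the column space, the fit recovers the amplitudes up to rounding.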
b. [M] One computes that (to two decimal places) β̂ = [19.94; 10.10], so the desired least-squares equation is y = 19.94e^(−.02t) + 10.10e^(−.07t).

11. [M] The model that produces the correct least-squares fit is y = Xβ + ε, where the rows of X are (1, rᵢ cos ϑᵢ) for the given data (ϑᵢ, rᵢ), y is the vector of observed values rᵢ, β = [β; e], and ε is the residual vector. One computes β̂ to two decimal places. Since e = .81 < 1, the orbit is an ellipse. The equation r = β/(1 − e cos ϑ) then produces the predicted value of r when ϑ = 4.6.

12. [M] The model that produces the correct least-squares fit is y = Xβ + ε, where the rows of X are (1, ln wᵢ), y is the vector of observed pressures pᵢ, β = [β₀; β₁], and ε is the residual vector. One computes β̂ (to two decimal places), so the desired least-squares equation is p = β̂₀ + β̂₁ ln w; substituting the given weight w yields the predicted pressure p in millimeters of mercury.

13. [M] a. The model that produces the correct least-squares fit is y = Xβ + ε, where X is the 13×4 matrix with rows (1, tᵢ, tᵢ², tᵢ³) for t = 0, 1, …, 12,

y = [0; 8.8; 29.9; 62.0; 104.7; 159.1; 222.0; 294.5; 380.4; 471.1; 571.7; 686.8; 809.2],

β = [β₀; β₁; β₂; β₃], and ε = [ε₁; …; ε₁₃].
One computes that (to four decimal places)

β̂ = [−.8558; 4.7025; 5.5554; −.0274],

so the desired least-squares polynomial is y(t) = −.8558 + 4.7025t + 5.5554t² − .0274t³.

b. The velocity v(t) is the derivative of the position function y(t), so v(t) = 4.7025 + 11.1108t − .0822t², and v(4.5) = 53.0 ft/sec.

14. Write the design matrix as X = [1 x], where 1 is the column of ones. Since the residual vector ε = y − Xβ̂ is orthogonal to Col X,

0 = 1 · ε = 1ᵀ(y − Xβ̂) = 1ᵀy − (1ᵀX)β̂ = (y₁ + ⋯ + yₙ) − [n Σx][β̂₀; β̂₁] = Σy − nβ̂₀ − β̂₁Σx.

This equation may be solved for ȳ = (Σy)/n to find ȳ = β̂₀ + β̂₁x̄.

15. From equation (1),

XᵀX = [1ᵀ; xᵀ][1 x] = [n Σx; Σx Σx²],  Xᵀy = [1ᵀy; xᵀy] = [Σy; Σxy].

The equations (7) in the text follow immediately from the normal equations XᵀXβ = Xᵀy.

16. The determinant of the coefficient matrix of the equations in (7) is nΣx² − (Σx)². Using the 2×2 formula for the inverse of the coefficient matrix,

[β̂₀; β̂₁] = 1/(nΣx² − (Σx)²) [Σx² −Σx; −Σx n][Σy; Σxy].

Hence

β̂₀ = ((Σx²)(Σy) − (Σx)(Σxy))/(nΣx² − (Σx)²),  β̂₁ = (n(Σxy) − (Σx)(Σy))/(nΣx² − (Σx)²).

Note: A simple algebraic calculation shows that Σy − (Σx)β̂₁ = nβ̂₀, which provides a simple formula for β̂₀ once β̂₁ is known.

17. a. The mean of the data in Example 1 is x̄ = 5.5, so the data in mean-deviation form are (−3.5, 1), (−.5, 2), (1.5, 3), (2.5, 3), and the associated design matrix is

X = [1 −3.5; 1 −.5; 1 1.5; 1 2.5].

The columns of X are orthogonal because the entries in the second column sum to 0.
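The formulas derived in Exercises 14 through 16 can be exercised directly: compute β̂₁ from the closed form, compute β̂₀ from the shortcut nβ̂₀ = Σy − (Σx)β̂₁, and confirm that the fitted line passes through (x̄, ȳ). The data points below are arbitrary sample values, not taken from the text.

```python
def line_coeffs(pts):
    """beta1 from the closed-form solution of the normal equations in (7),
    then beta0 from the shortcut n*beta0 = Sy - Sx*beta1 (Exercise 16)."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b0 = (sy - sx * b1) / n
    return b0, b1

pts = [(1, 2.0), (2, 2.5), (4, 3.0), (5, 4.5)]   # arbitrary sample data
b0, b1 = line_coeffs(pts)
xbar = sum(x for x, _ in pts) / len(pts)
ybar = sum(y for _, y in pts) / len(pts)
# Exercise 14: the least-squares line passes through (xbar, ybar).
assert abs((b0 + b1 * xbar) - ybar) < 1e-9
```

For this sample the computed line is y = 1.35 + .55x, and evaluating it at x̄ = 3 does return ȳ = 3.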
b. The normal equations are XᵀXβ = Xᵀy, or

[4 0; 0 21][β₀*; β₁] = [9; 7.5].

One computes that β̂ = [9/4; 5/14], so the desired least-squares line is y = (9/4) + (5/14)x* = (9/4) + (5/14)(x − 5.5).

18. Since

XᵀX = [1ᵀ; xᵀ][1 x] = [n Σx; Σx Σx²],

XᵀX is a diagonal matrix when Σx = 0.

19. The residual vector ε = y − Xβ̂ is orthogonal to Col X, while ŷ = Xβ̂ is in Col X. Since ε and ŷ are thus orthogonal, apply the Pythagorean Theorem to these vectors to obtain

SS(T) = ‖y‖² = ‖ŷ + ε‖² = ‖ŷ‖² + ‖ε‖² = SS(R) + SS(E).

20. Since β̂ satisfies the normal equations XᵀXβ̂ = Xᵀy,

‖Xβ̂‖² = (Xβ̂)ᵀ(Xβ̂) = β̂ᵀXᵀXβ̂ = β̂ᵀXᵀy.

Since ‖Xβ̂‖² = SS(R) and yᵀy = SS(T), Exercise 19 shows that

SS(E) = SS(T) − SS(R) = yᵀy − β̂ᵀXᵀy.

6.7 SOLUTIONS

Notes: The three types of inner products described here are matched by examples in Section 6.8. It is possible to spend just one day on selected portions of both sections. One example matches the weighted least squares in Section 6.8; others are applied to trend analysis there. This material is aimed at students who have not had much calculus or who intend to take more than one course in statistics. For students who have seen some calculus, Example 7 is needed to develop the Fourier series in Section 6.8. Example 8 is used to motivate the inner product on C[a, b]. The Cauchy-Schwarz and triangle inequalities are not used here, but they should be part of the training of every mathematics student.

1. The inner product is ⟨x, y⟩ = 4x₁y₁ + 5x₂y₂. Let x = (1, 1), y = (5, −1).
a. Since ‖x‖² = ⟨x, x⟩ = 9, ‖x‖ = 3. Since ‖y‖² = ⟨y, y⟩ = 105, ‖y‖ = √105. Finally, |⟨x, y⟩|² = 15² = 225.
b. A vector z is orthogonal to y if and only if ⟨z, y⟩ = 0, that is, 20z₁ − 5z₂ = 0, or z₂ = 4z₁. Thus all multiples of (1, 4) are orthogonal to y.

2. The inner product is ⟨x, y⟩ = 4x₁y₁ + 5x₂y₂. Let x = (3, −2), y = (−2, 1). Compute that ‖x‖² = ⟨x, x⟩ = 56, ‖y‖² = ⟨y, y⟩ = 21, ‖x‖²‖y‖² = 56 · 21 = 1176, ⟨x, y⟩ = −34, and |⟨x, y⟩|² = 1156. Thus |⟨x, y⟩|² ≤ ‖x‖²‖y‖², as the Cauchy-Schwarz inequality predicts.

3. The inner product is ⟨p, q⟩ = p(−1)q(−1) + p(0)q(0) + p(1)q(1), so ⟨4 + t, 5 − 4t²⟩ = 3(1) + 4(5) + 5(1) = 28.
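The decoupling in Exercise 17 can be checked numerically. The four data points below are an assumption about Example 1's table (only the mean 5.5 and the deviations are legible above); once the x-column is centered, XᵀX is diagonal and the two coefficients are computed independently of each other.

```python
# Assumed data for Example 1 (mean 5.5, deviations -3.5, -.5, 1.5, 2.5).
pts = [(2, 1), (5, 2), (7, 3), (8, 3)]
xbar = sum(x for x, _ in pts) / len(pts)            # 5.5
xs = [x - xbar for x, _ in pts]                     # mean-deviation form
ys = [y for _, y in pts]

# The centered column sums to zero, so the columns of X are orthogonal
# and the normal equations decouple into two scalar equations.
assert abs(sum(xs)) < 1e-9
b0 = sum(ys) / len(ys)                              # 9/4
b1 = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)   # 7.5/21 = 5/14
```

The computed coefficients match the solution's 9/4 and 5/14 without inverting any matrix, which is the point of mean-deviation form.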
4. The inner product is ⟨p, q⟩ = p(−1)q(−1) + p(0)q(0) + p(1)q(1), so ⟨3t − t², 3 + 2t²⟩ = (−4)(5) + 0(3) + 2(5) = −10.

5. The inner product is ⟨p, q⟩ = p(−1)q(−1) + p(0)q(0) + p(1)q(1), so ⟨p, p⟩ = ⟨4 + t, 4 + t⟩ = 3² + 4² + 5² = 50 and ‖p‖ = √⟨p, p⟩ = 5√2. Likewise ⟨q, q⟩ = ⟨5 − 4t², 5 − 4t²⟩ = 1² + 5² + 1² = 27 and ‖q‖ = √⟨q, q⟩ = 3√3.

6. The inner product is ⟨p, q⟩ = p(−1)q(−1) + p(0)q(0) + p(1)q(1), so ⟨p, p⟩ = ⟨3t − t², 3t − t²⟩ = (−4)² + 0² + 2² = 20 and ‖p‖ = √⟨p, p⟩ = 2√5. Likewise ⟨q, q⟩ = ⟨3 + 2t², 3 + 2t²⟩ = 5² + 3² + 5² = 59 and ‖q‖ = √⟨q, q⟩ = √59.

7. The orthogonal projection q̂ of q onto the subspace spanned by p is

q̂ = (⟨q, p⟩/⟨p, p⟩)p = (28/50)(4 + t) = 56/25 + (14/25)t.

8. The orthogonal projection q̂ of q onto the subspace spanned by p is

q̂ = (⟨q, p⟩/⟨p, p⟩)p = (−10/20)(3t − t²) = −(3/2)t + (1/2)t².

9. The inner product is ⟨p, q⟩ = p(−3)q(−3) + p(−1)q(−1) + p(1)q(1) + p(3)q(3).
a. The orthogonal projection p̂₂ of p₂ onto the subspace spanned by p₀ and p₁ is

p̂₂ = (⟨p₂, p₀⟩/⟨p₀, p₀⟩)p₀ + (⟨p₂, p₁⟩/⟨p₁, p₁⟩)p₁ = (20/4)(1) + 0(t) = 5.

b. The vector q = p₂ − p̂₂ = t² − 5 will be orthogonal to both p₀ and p₁, and {p₀, p₁, q} will be an orthogonal basis for Span{p₀, p₁, p₂}. The vector of values for q at (−3, −1, 1, 3) is (4, −4, −4, 4), so scaling by 1/4 yields the new vector q = (1/4)(t² − 5).

10. The best approximation to p = t³ by vectors in W = Span{p₀, p₁, q} will be

p̂ = proj_W p = (⟨p, p₀⟩/⟨p₀, p₀⟩)p₀ + (⟨p, p₁⟩/⟨p₁, p₁⟩)p₁ + (⟨p, q⟩/⟨q, q⟩)q = 0 + (164/20)t + 0 = (41/5)t.

11. The orthogonal projection of p = t³ onto W = Span{p₀, p₁, p₂} (with the inner product given by evaluation at −2, −1, 0, 1, 2) will be

p̂ = proj_W p = (⟨p, p₀⟩/⟨p₀, p₀⟩)p₀ + (⟨p, p₁⟩/⟨p₁, p₁⟩)p₁ + (⟨p, p₂⟩/⟨p₂, p₂⟩)p₂ = 0 + (34/10)t + 0 = (17/5)t.

12. Let W = Span{p₀, p₁, p₂}. The vector p₃ = p − proj_W p = t³ − (17/5)t will make {p₀, p₁, p₂, p₃} an orthogonal basis for ℙ₃. The vector of values for p₃ at (−2, −1, 0, 1, 2) is (−6/5, 12/5, 0, −12/5, 6/5), so scaling by 5/6 yields the new vector p₃ = (5/6)(t³ − (17/5)t) = (5/6)t³ − (17/6)t.
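The evaluation inner product used in Exercises 3 through 8 is a one-liner, and the projection formula q̂ = (⟨q, p⟩/⟨p, p⟩)p can be verified directly for p = 4 + t and q = 5 − 4t²; the sketch below only restates computations already done above.

```python
def ip(p, q, pts=(-1, 0, 1)):
    """Evaluation inner product <p, q> = sum of p(t)q(t) over the sample points."""
    return sum(p(t) * q(t) for t in pts)

p = lambda t: 4 + t
q = lambda t: 5 - 4 * t * t
assert ip(p, q) == 28                    # Exercise 3

# Orthogonal projection of q onto Span{p} (Exercise 7).
c = ip(q, p) / ip(p, p)                  # 28/50 = .56
qhat = lambda t: c * p(t)                # 56/25 + (14/25) t
```

Evaluating q̂ at 0 gives 56/25, and the slope q̂(1) − q̂(0) is 14/25, matching Exercise 7.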
13. Suppose that A is invertible and that ⟨u, v⟩ = (Au) · (Av) for u and v in ℝⁿ. Check each axiom in the definition of an inner product, using the properties of the dot product.
i. ⟨u, v⟩ = (Au) · (Av) = (Av) · (Au) = ⟨v, u⟩
ii. ⟨u + v, w⟩ = (A(u + v)) · (Aw) = (Au + Av) · (Aw) = (Au) · (Aw) + (Av) · (Aw) = ⟨u, w⟩ + ⟨v, w⟩
iii. ⟨cu, v⟩ = (A(cu)) · (Av) = (c(Au)) · (Av) = c((Au) · (Av)) = c⟨u, v⟩
iv. ⟨u, u⟩ = (Au) · (Au) = ‖Au‖² ≥ 0, and this quantity is zero if and only if the vector Au is 0. But Au = 0 if and only if u = 0 because A is invertible.

14. Suppose that T is a one-to-one linear transformation from a vector space V into ℝⁿ and that ⟨u, v⟩ = T(u) · T(v) for u and v in V. Check each axiom in the definition of an inner product, using the properties of the dot product and T. The linearity of T is used often in the following.
i. ⟨u, v⟩ = T(u) · T(v) = T(v) · T(u) = ⟨v, u⟩
ii. ⟨u + v, w⟩ = T(u + v) · T(w) = (T(u) + T(v)) · T(w) = T(u) · T(w) + T(v) · T(w) = ⟨u, w⟩ + ⟨v, w⟩
iii. ⟨cu, v⟩ = T(cu) · T(v) = (cT(u)) · T(v) = c(T(u) · T(v)) = c⟨u, v⟩
iv. ⟨u, u⟩ = T(u) · T(u) = ‖T(u)‖² ≥ 0, and this quantity is zero if and only if u = 0, since T is a one-to-one transformation.

15. Using Axioms 1 and 3, ⟨u, cv⟩ = ⟨cv, u⟩ = c⟨v, u⟩ = c⟨u, v⟩.

16. Using Axioms 1, 2, and 3,

‖u + v‖² = ⟨u + v, u + v⟩ = ⟨u, u + v⟩ + ⟨v, u + v⟩ = ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩ = ‖u‖² + 2⟨u, v⟩ + ‖v‖².

Since {u, v} is orthonormal, ‖u‖² = ‖v‖² = 1 and ⟨u, v⟩ = 0. So ‖u + v‖² = 2.

17. Following the method in Exercise 16,

‖u − v‖² = ⟨u − v, u − v⟩ = ‖u‖² − 2⟨u, v⟩ + ‖v‖².

Subtracting this from ‖u + v‖² = ‖u‖² + 2⟨u, v⟩ + ‖v‖², one finds that ‖u + v‖² − ‖u − v‖² = 4⟨u, v⟩, and dividing by 4 gives the desired identity.

18. In Exercises 16 and 17, it has been shown that ‖u + v‖² = ‖u‖² + 2⟨u, v⟩ + ‖v‖² and ‖u − v‖² = ‖u‖² − 2⟨u, v⟩ + ‖v‖². Adding these two results gives ‖u + v‖² + ‖u − v‖² = 2‖u‖² + 2‖v‖².

19. Let u = [√a; √b] and v = [√b; √a]. Then ‖u‖² = a + b, ‖v‖² = b + a, and ⟨u, v⟩ = 2√(ab). Since a and b are nonnegative, ‖u‖ = √(a + b) and ‖v‖ = √(a + b). Plugging these values into the Cauchy-Schwarz inequality gives

2√(ab) = |⟨u, v⟩| ≤ ‖u‖ ‖v‖ = √(a + b) √(a + b) = a + b.

Dividing both sides of this inequality by 2 gives the desired inequality.
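Exercise 19's derivation of the arithmetic-geometric mean inequality from Cauchy-Schwarz is easy to test on sample values; the pairs below are arbitrary nonnegative numbers.

```python
import math

def amgm_via_cs(a, b):
    """Exercise 19's construction: u = (sqrt a, sqrt b), v = (sqrt b, sqrt a).
    Cauchy-Schwarz |u.v| <= ||u|| ||v|| becomes 2*sqrt(ab) <= a + b."""
    u = (math.sqrt(a), math.sqrt(b))
    v = (math.sqrt(b), math.sqrt(a))
    dot = u[0] * v[0] + u[1] * v[1]              # 2*sqrt(ab)
    norms = math.hypot(*u) * math.hypot(*v)      # sqrt(a+b) * sqrt(a+b) = a + b
    return dot, norms

for a, b in [(1, 9), (2, 2), (0.3, 7.1)]:
    dot, norms = amgm_via_cs(a, b)
    assert dot <= norms + 1e-12                  # sqrt(ab) <= (a + b)/2
```

Note that (2, 2) hits the equality case, as it should when a = b.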
20. The Cauchy-Schwarz inequality may be altered by dividing both sides of the inequality by 2 and then squaring both sides, giving

(⟨u, v⟩/2)² ≤ (‖u‖ ‖v‖/2)².

Now let u = [a; b] and v = [1; 1]. Then ‖u‖² = a² + b², ‖v‖² = 2, and ⟨u, v⟩ = a + b. Plugging these values into the inequality above yields the desired inequality ((a + b)/2)² ≤ (a² + b²)/2.

21. The inner product is ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt. Let f(t) = 1 − 3t² and g(t) = t − t³. Then

⟨f, g⟩ = ∫₀¹ (1 − 3t²)(t − t³) dt = ∫₀¹ (3t⁵ − 4t³ + t) dt = 1/2 − 1 + 1/2 = 0.

22. The inner product is ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt. Let f(t) = 5t − 3 and g(t) = t³ − t². Then

⟨f, g⟩ = ∫₀¹ (5t − 3)(t³ − t²) dt = ∫₀¹ (5t⁴ − 8t³ + 3t²) dt = 1 − 2 + 1 = 0.

23. The inner product is ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt, and ⟨f, f⟩ = ∫₀¹ (1 − 3t²)² dt = ∫₀¹ (9t⁴ − 6t² + 1) dt = 9/5 − 2 + 1 = 4/5, so ‖f‖ = √⟨f, f⟩ = 2/√5.

24. The inner product is ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt, and ⟨g, g⟩ = ∫₀¹ (t³ − t²)² dt = ∫₀¹ (t⁶ − 2t⁵ + t⁴) dt = 1/7 − 1/3 + 1/5 = 1/105, so ‖g‖ = √⟨g, g⟩ = 1/√105.

25. The inner product is ⟨f, g⟩ = ∫ from −1 to 1 of f(t)g(t) dt. Then 1 and t are orthogonal because ⟨1, t⟩ = ∫ t dt = 0 over this interval. So 1 and t can be in an orthogonal basis for Span{1, t, t²}. By the Gram-Schmidt process, the third basis element in the orthogonal basis can be

t² − (⟨t², 1⟩/⟨1, 1⟩) 1 − (⟨t², t⟩/⟨t, t⟩) t.

Since ⟨t², 1⟩ = ∫ t² dt = 2/3, ⟨1, 1⟩ = ∫ dt = 2, and ⟨t², t⟩ = ∫ t³ dt = 0, the third basis element can be written as t² − (1/3). This element can be scaled by 3, which gives the orthogonal basis {1, t, 3t² − 1}.

26. The inner product is ⟨f, g⟩ = ∫ from −2 to 2 of f(t)g(t) dt. Then 1 and t are orthogonal because ⟨1, t⟩ = ∫ t dt = 0 over this interval. So 1 and t can be in an orthogonal basis for Span{1, t, t²}. By the Gram-Schmidt process, the third basis element in the orthogonal basis can be

t² − (⟨t², 1⟩/⟨1, 1⟩) 1 − (⟨t², t⟩/⟨t, t⟩) t.

Since ⟨t², 1⟩ = ∫ t² dt = 16/3, ⟨1, 1⟩ = ∫ dt = 4, and ⟨t², t⟩ = ∫ t³ dt = 0, the third basis element can be written as t² − (4/3). This element can be scaled by 3, which gives the orthogonal basis {1, t, 3t² − 4}.
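The Gram-Schmidt computation of Exercise 26 can be approximated numerically with a midpoint-rule inner product on [−2, 2]; the node count n is an arbitrary choice, and the quadrature error is what keeps the recovered coefficient only approximately 4/3.

```python
def integral_ip(f, g, a=-2.0, b=2.0, n=2000):
    """<f, g> = integral of f(t)g(t) over [a, b], midpoint rule with n cells."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h

one = lambda t: 1.0
t1 = lambda t: t
t2 = lambda t: t * t

# 1 and t are orthogonal on [-2, 2], so Gram-Schmidt only has to subtract
# the component of t^2 along the constant function.
assert abs(integral_ip(one, t1)) < 1e-9
c = integral_ip(t2, one) / integral_ip(one, one)    # (16/3)/4 = 4/3
p2 = lambda t: t * t - c                            # scaling by 3 gives 3t^2 - 4
```

The resulting p2 is numerically orthogonal to both 1 and t, confirming the hand computation.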
27. [M] The new orthogonal polynomials are multiples of −17t + 5t³ and 72 − 155t² + 35t⁴. These polynomials may be scaled so that their values at −2, −1, 0, 1, and 2 are small integers.

28. [M] The orthogonal basis is f₀(t) = 1, f₁(t) = cos t, f₂(t) = cos²t − (1/2) = (1/2)cos 2t, and f₃(t) = cos³t − (3/4)cos t = (1/4)cos 3t.

6.8 SOLUTIONS

Notes: The connections between this section and Section 6.7 are described in the notes for that section. For my junior-senior class, I spend three days on the following topics: Theorems 13 and 15 in Section 6.5, plus selected examples; an example in Section 6.6; two examples in Section 6.7, with the motivation for the definite integral; and Fourier series in Section 6.8.

1. The weighting matrix W, design matrix X, parameter vector β, and observation vector y are:

W = diag(1, 2, 2, 2, 1),  X = [1 −2; 1 −1; 1 0; 1 1; 1 2],  β = [β₀; β₁],  y = [0; 0; 2; 4; 4].

The design matrix and observation vector are scaled by W:

WX = [1 −2; 2 −2; 2 0; 2 2; 1 2],  Wy = [0; 0; 4; 8; 4].

Further compute

(WX)ᵀWX = [14 0; 0 16],  (WX)ᵀWy = [28; 24],

and find that

β̂ = ((WX)ᵀWX)⁻¹(WX)ᵀWy = [1/14 0; 0 1/16][28; 24] = [2; 3/2].

Thus the weighted least-squares line is y = 2 + (3/2)x.

2. Let X be the original design matrix, and let y be the original observation vector. Let W be the weighting matrix for the first method. Then 2W is the weighting matrix for the second method. The weighted least-squares by the first method is equivalent to the ordinary least-squares for an equation whose normal equation is

(WX)ᵀWXβ̂ = (WX)ᵀWy  (1)

while the second method is equivalent to the ordinary least-squares for an equation whose normal equation is

(2WX)ᵀ(2WX)β̂ = (2WX)ᵀ(2W)y  (2)

Since equation (2) can be written as 4(WX)ᵀWXβ̂ = 4(WX)ᵀWy, it has the same solutions as equation (1).
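A sketch of the weighted computation in Exercise 1 follows; the data points and weights are assumptions reconstructed for illustration (the text's table is not fully legible). It also checks the claim of Exercise 2: doubling every weight leaves the solution unchanged.

```python
def weighted_line(pts, w):
    """Weighted least-squares line: ordinary least squares applied to WX and Wy,
    i.e. solve (WX)^T WX beta = (WX)^T W y with W = diag(w)."""
    w2 = [wi * wi for wi in w]                       # W^T W = W^2 for diagonal W
    s1 = sum(w2)
    sx = sum(wi * x for wi, (x, _) in zip(w2, pts))
    sxx = sum(wi * x * x for wi, (x, _) in zip(w2, pts))
    sy = sum(wi * y for wi, (_, y) in zip(w2, pts))
    sxy = sum(wi * x * y for wi, (x, y) in zip(w2, pts))
    det = s1 * sxx - sx * sx
    b1 = (s1 * sxy - sx * sy) / det
    b0 = (sxx * sy - sx * sxy) / det
    return b0, b1

# Assumed data: heavier weights on the three interior points.
pts = [(-2, 0), (-1, 0), (0, 2), (1, 4), (2, 4)]
b0w, b1w = weighted_line(pts, [1, 2, 2, 2, 1])
b0d, b1d = weighted_line(pts, [2, 4, 4, 4, 2])   # all weights doubled (Exercise 2)
```

With these inputs the fitted line is y = 2 + 1.5x, and the doubled-weight fit agrees exactly.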
3. From the statement of the problem and the orthogonal polynomials computed earlier, p₀(t) = 1, p₁(t) = t, p₂(t) = t² − 2, p₃(t) = (5/6)t³ − (17/6)t, and g = (3, 5, 5, 4, 3). The cubic trend function for g is the orthogonal projection p̂ of g onto the subspace spanned by p₀, p₁, p₂, and p₃:

p̂ = (⟨g, p₀⟩/⟨p₀, p₀⟩)p₀ + (⟨g, p₁⟩/⟨p₁, p₁⟩)p₁ + (⟨g, p₂⟩/⟨p₂, p₂⟩)p₂ + (⟨g, p₃⟩/⟨p₃, p₃⟩)p₃
  = (20/5)(1) − (1/10)t − (1/2)(t² − 2) + (1/5)((5/6)t³ − (17/6)t)
  = 5 − (2/3)t − (1/2)t² + (1/6)t³.

This polynomial happens to fit the data exactly.

4. The inner product is ⟨p, q⟩ = p(−5)q(−5) + p(−3)q(−3) + p(−1)q(−1) + p(1)q(1) + p(3)q(3) + p(5)q(5).
a. Begin with the basis {1, t, t²} for ℙ₂. Since 1 and t are orthogonal, let p₀(t) = 1 and p₁(t) = t. Then the Gram-Schmidt process gives

p₂(t) = t² − (⟨t², 1⟩/⟨1, 1⟩) 1 − (⟨t², t⟩/⟨t, t⟩) t = t² − 70/6 = t² − 35/3.

The vector of values for p₂ is (40/3, −8/3, −32/3, −32/3, −8/3, 40/3), so scaling by 3/8 yields the new function p₂ = (3/8)(t² − (35/3)) = (3/8)t² − 35/8.

b. The quadratic trend function for the given data vector g is the orthogonal projection p̂ of g onto the subspace spanned by p₀, p₁, and p₂:

p̂ = (⟨g, p₀⟩/⟨p₀, p₀⟩)p₀ + (⟨g, p₁⟩/⟨p₁, p₁⟩)p₁ + (⟨g, p₂⟩/⟨p₂, p₂⟩)p₂;

substituting the computed inner products gives the quadratic trend function.

5. The inner product is ⟨f, g⟩ = ∫₀^{2π} f(t)g(t) dt. Let m ≠ n. Then

⟨sin mt, sin nt⟩ = ∫₀^{2π} sin mt sin nt dt = (1/2) ∫₀^{2π} [cos((m − n)t) − cos((m + n)t)] dt = 0.

Thus sin mt and sin nt are orthogonal.

6. The inner product is ⟨f, g⟩ = ∫₀^{2π} f(t)g(t) dt. Let m and n be positive integers. Then

⟨sin mt, cos nt⟩ = ∫₀^{2π} sin mt cos nt dt = (1/2) ∫₀^{2π} [sin((m + n)t) + sin((m − n)t)] dt = 0.

Thus sin mt and cos nt are orthogonal.
7. The inner product is ⟨f, g⟩ = ∫₀^{2π} f(t)g(t) dt. Let k be a positive integer. Then

‖cos kt‖² = ⟨cos kt, cos kt⟩ = ∫₀^{2π} cos² kt dt = (1/2) ∫₀^{2π} (1 + cos 2kt) dt = π

and

‖sin kt‖² = ⟨sin kt, sin kt⟩ = ∫₀^{2π} sin² kt dt = (1/2) ∫₀^{2π} (1 − cos 2kt) dt = π.

8. Let f(t) = t − 1. The Fourier coefficients for f are:

a₀/2 = (1/2π) ∫₀^{2π} f(t) dt = (1/2π) ∫₀^{2π} (t − 1) dt = π − 1

and for k > 0,

aₖ = (1/π) ∫₀^{2π} f(t) cos kt dt = (1/π) ∫₀^{2π} (t − 1) cos kt dt = 0,
bₖ = (1/π) ∫₀^{2π} f(t) sin kt dt = (1/π) ∫₀^{2π} (t − 1) sin kt dt = −2/k.

The third-order Fourier approximation to f is thus

a₀/2 + b₁ sin t + b₂ sin 2t + b₃ sin 3t = π − 1 − 2 sin t − sin 2t − (2/3) sin 3t.

9. Let f(t) = 2π − t. The Fourier coefficients for f are:

a₀/2 = (1/2π) ∫₀^{2π} (2π − t) dt = π

and for k > 0,

aₖ = (1/π) ∫₀^{2π} (2π − t) cos kt dt = 0,
bₖ = (1/π) ∫₀^{2π} (2π − t) sin kt dt = 2/k.

The third-order Fourier approximation to f is thus

a₀/2 + b₁ sin t + b₂ sin 2t + b₃ sin 3t = π + 2 sin t + sin 2t + (2/3) sin 3t.

10. Let f(t) = 1 for 0 ≤ t < π and f(t) = −1 for π ≤ t < 2π. The Fourier coefficients for f are:

a₀/2 = (1/2π) ∫₀^{2π} f(t) dt = (1/2π) ∫₀^π dt − (1/2π) ∫_π^{2π} dt = 0

and for k > 0,

aₖ = (1/π) ∫₀^π cos kt dt − (1/π) ∫_π^{2π} cos kt dt = 0,
bₖ = (1/π) ∫₀^π sin kt dt − (1/π) ∫_π^{2π} sin kt dt = 4/(kπ) for k odd, 0 for k even.

The third-order Fourier approximation to f is thus

b₁ sin t + b₃ sin 3t = (4/π) sin t + (4/3π) sin 3t.
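The square-wave coefficients in Exercise 10 can be confirmed by numerical integration; the node count n below is an arbitrary quadrature choice, and the nodes avoid the jump at t = π so the midpoint rule behaves well.

```python
import math

def fourier_b(f, k, n=4000):
    """b_k = (1/pi) * integral over [0, 2*pi] of f(t) sin(kt) dt, midpoint rule."""
    h = 2 * math.pi / n
    s = sum(f((i + 0.5) * h) * math.sin(k * (i + 0.5) * h) for i in range(n))
    return s * h / math.pi

# The square wave from Exercise 10: +1 on [0, pi), -1 on [pi, 2*pi).
square = lambda t: 1.0 if t < math.pi else -1.0
```

Computed values come out near 4/π for k = 1, near 0 for k = 2, and near 4/(3π) for k = 3, matching the hand integration.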
11. The trigonometric identity cos 2t = 1 − 2 sin² t shows that

sin² t = 1/2 − (1/2) cos 2t.

The expression on the right is in the subspace spanned by the trigonometric polynomials of order 3 or less, so this expression is the third-order Fourier approximation to sin² t.

12. The trigonometric identity cos 3t = 4 cos³ t − 3 cos t shows that

cos³ t = (3/4) cos t + (1/4) cos 3t.

The expression on the right is in the subspace spanned by the trigonometric polynomials of order 3 or less, so this expression is the third-order Fourier approximation to cos³ t.

13. Let f and g be in C[0, 2π] and let m be a nonnegative integer. Then the linearity of the inner product shows that

⟨(f + g), cos mt⟩ = ⟨f, cos mt⟩ + ⟨g, cos mt⟩,  ⟨(f + g), sin mt⟩ = ⟨f, sin mt⟩ + ⟨g, sin mt⟩.

Dividing these identities respectively by ⟨cos mt, cos mt⟩ and ⟨sin mt, sin mt⟩ shows that the Fourier coefficients aₘ and bₘ for f + g are the sums of the corresponding Fourier coefficients of f and of g.

14. Note that g and h are both in the subspace H spanned by the trigonometric polynomials of order 2 or less. Since h is the second-order Fourier approximation to f, it is closer to f than any other function in the subspace H.

15. [M] The weighting matrix W is the 13×13 diagonal matrix with diagonal entries 1, 1, 1, .9, .9, .8, .7, .6, .5, .4, .3, .2, .1. The design matrix X, parameter vector β, and observation vector y are set up as in the general linear model, with one row of X per observation and y containing the given data.
More informationwhich arises when we compute the orthogonal projection of a vector y in a subspace with an orthogonal basis. Hence assume that P y = A ij = x j, x i
MODULE 6 Topics: Gram-Schmidt orthogonalization process We begin by observing that if the vectors {x j } N are mutually orthogonal in an inner product space V then they are necessarily linearly independent.
More informationorthogonal relations between vectors and subspaces Then we study some applications in vector spaces and linear systems, including Orthonormal Basis,
5 Orthogonality Goals: We use scalar products to find the length of a vector, the angle between 2 vectors, projections, orthogonal relations between vectors and subspaces Then we study some applications
More informationMATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003
MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 1. True or False (28 points, 2 each) T or F If V is a vector space
More informationChapter Two Elements of Linear Algebra
Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to
More informationSection 6.1. Inner Product, Length, and Orthogonality
Section 6. Inner Product, Length, and Orthogonality Orientation Almost solve the equation Ax = b Problem: In the real world, data is imperfect. x v u But due to measurement error, the measured x is not
More informationPRACTICE PROBLEMS FOR THE FINAL
PRACTICE PROBLEMS FOR THE FINAL Here are a slew of practice problems for the final culled from old exams:. Let P be the vector space of polynomials of degree at most. Let B = {, (t ), t + t }. (a) Show
More informationThe value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.
Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class
More informationApplied Linear Algebra in Geoscience Using MATLAB
Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in
More information2. Every linear system with the same number of equations as unknowns has a unique solution.
1. For matrices A, B, C, A + B = A + C if and only if A = B. 2. Every linear system with the same number of equations as unknowns has a unique solution. 3. Every linear system with the same number of equations
More informationConceptual Questions for Review
Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.
More informationSpring 2014 Math 272 Final Exam Review Sheet
Spring 2014 Math 272 Final Exam Review Sheet You will not be allowed use of a calculator or any other device other than your pencil or pen and some scratch paper. Notes are also not allowed. In kindness
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More informationLinear Algebra Highlights
Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to
More informationMath113: Linear Algebra. Beifang Chen
Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary
More informationMATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS
MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS 1. HW 1: Due September 4 1.1.21. Suppose v, w R n and c is a scalar. Prove that Span(v + cw, w) = Span(v, w). We must prove two things: that every element
More informationLinear Algebra. Session 12
Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)
More informationMath 54 HW 4 solutions
Math 54 HW 4 solutions 2.2. Section 2.2 (a) False: Recall that performing a series of elementary row operations A is equivalent to multiplying A by a series of elementary matrices. Suppose that E,...,
More information1 Last time: least-squares problems
MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that
More information1 Last time: determinants
1 Last time: determinants Let n be a positive integer If A is an n n matrix, then its determinant is the number det A = Π(X, A)( 1) inv(x) X S n where S n is the set of n n permutation matrices Π(X, A)
More informationMath 24 Spring 2012 Sample Homework Solutions Week 8
Math 4 Spring Sample Homework Solutions Week 8 Section 5. (.) Test A M (R) for diagonalizability, and if possible find an invertible matrix Q and a diagonal matrix D such that Q AQ = D. ( ) 4 (c) A =.
More information7. Dimension and Structure.
7. Dimension and Structure 7.1. Basis and Dimension Bases for Subspaces Example 2 The standard unit vectors e 1, e 2,, e n are linearly independent, for if we write (2) in component form, then we obtain
More informationx 1 x 2. x 1, x 2,..., x n R. x n
WEEK In general terms, our aim in this first part of the course is to use vector space theory to study the geometry of Euclidean space A good knowledge of the subject matter of the Matrix Applications
More informationThis is a closed book exam. No notes or calculators are permitted. We will drop your lowest scoring question for you.
Math 54 Fall 2017 Practice Final Exam Exam date: 12/14/17 Time Limit: 170 Minutes Name: Student ID: GSI or Section: This exam contains 9 pages (including this cover page) and 10 problems. Problems are
More informationHOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)
HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe
More informationSOLUTIONS TO EXERCISES FOR MATHEMATICS 133 Part 1. I. Topics from linear algebra
SOLUTIONS TO EXERCISES FOR MATHEMATICS 133 Part 1 Winter 2009 I. Topics from linear algebra I.0 : Background 1. Suppose that {x, y} is linearly dependent. Then there are scalars a, b which are not both
More informationMATH 369 Linear Algebra
Assignment # Problem # A father and his two sons are together 00 years old. The father is twice as old as his older son and 30 years older than his younger son. How old is each person? Problem # 2 Determine
More informationLecture Summaries for Linear Algebra M51A
These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture
More informationP = A(A T A) 1 A T. A Om (m n)
Chapter 4: Orthogonality 4.. Projections Proposition. Let A be a matrix. Then N(A T A) N(A). Proof. If Ax, then of course A T Ax. Conversely, if A T Ax, then so Ax also. x (A T Ax) x T A T Ax (Ax) T Ax
More informationDot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem.
Dot Products K. Behrend April 3, 008 Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Contents The dot product 3. Length of a vector........................
More informationMath 54. Selected Solutions for Week 5
Math 54. Selected Solutions for Week 5 Section 4. (Page 94) 8. Consider the following two systems of equations: 5x + x 3x 3 = 5x + x 3x 3 = 9x + x + 5x 3 = 4x + x 6x 3 = 9 9x + x + 5x 3 = 5 4x + x 6x 3
More informationThird Midterm Exam Name: Practice Problems November 11, Find a basis for the subspace spanned by the following vectors.
Math 7 Treibergs Third Midterm Exam Name: Practice Problems November, Find a basis for the subspace spanned by the following vectors,,, We put the vectors in as columns Then row reduce and choose the pivot
More information1 9/5 Matrices, vectors, and their applications
1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric
More information1. General Vector Spaces
1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule
More informationx 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7
Linear Algebra and its Applications-Lab 1 1) Use Gaussian elimination to solve the following systems x 1 + x 2 2x 3 + 4x 4 = 5 1.1) 2x 1 + 2x 2 3x 3 + x 4 = 3 3x 1 + 3x 2 4x 3 2x 4 = 1 x + y + 2z = 4 1.4)
More information(i) [7 points] Compute the determinant of the following matrix using cofactor expansion.
Question (i) 7 points] Compute the determinant of the following matrix using cofactor expansion 2 4 2 4 2 Solution: Expand down the second column, since it has the most zeros We get 2 4 determinant = +det
More informationMATH 167: APPLIED LINEAR ALGEBRA Least-Squares
MATH 167: APPLIED LINEAR ALGEBRA Least-Squares October 30, 2014 Least Squares We do a series of experiments, collecting data. We wish to see patterns!! We expect the output b to be a linear function of
More informationThe Gram-Schmidt Process 1
The Gram-Schmidt Process In this section all vector spaces will be subspaces of some R m. Definition.. Let S = {v...v n } R m. The set S is said to be orthogonal if v v j = whenever i j. If in addition
More information