
Exercise 7.1. We have
$$L(x) = x_1 L(v_1) + x_2 L(v_2) + \cdots + x_n L(v_n) = \sum_{i=1}^n x_i (a_{1i} w_1 + a_{2i} w_2 + \cdots + a_{mi} w_m) = \Big(\sum_{i=1}^n x_i a_{1i}\Big) w_1 + \Big(\sum_{i=1}^n x_i a_{2i}\Big) w_2 + \cdots + \Big(\sum_{i=1}^n x_i a_{mi}\Big) w_m.$$
Therefore $y_j = \sum_{i=1}^n a_{ji} x_i = a_{j1} x_1 + a_{j2} x_2 + \cdots + a_{jn} x_n$.

Exercise 7.2.
$$(K+L)(x+y) = K(x+y) + L(x+y) \quad\text{(def of $K+L$)}$$
$$= K(x) + K(y) + L(x) + L(y) \quad\text{($K$, $L$ are linear)}$$
$$= K(x) + L(x) + K(y) + L(y) = (K+L)(x) + (K+L)(y) \quad\text{(def of $K+L$)},$$
$$(K+L)(cx) = K(cx) + L(cx) = cK(x) + cL(x) = c(K(x) + L(x)) = c(K+L)(x).$$
By $K\alpha = \beta K_{\beta\alpha}$ and $L\alpha = \beta L_{\beta\alpha}$, we get
$$(K+L)\alpha = K\alpha + L\alpha = \beta K_{\beta\alpha} + \beta L_{\beta\alpha} = \beta (K_{\beta\alpha} + L_{\beta\alpha}).$$
Therefore $(K+L)_{\beta\alpha} = K_{\beta\alpha} + L_{\beta\alpha}$. The discussion for $cL$ is similar.

Exercise 7.3.
$$(K\circ L)(x+y) = K(L(x+y)) = K(L(x) + L(y)) = K(L(x)) + K(L(y)) = (K\circ L)(x) + (K\circ L)(y),$$
using the definition of $K\circ L$ and the linearity of $L$ and $K$. By $K\beta = \gamma K_{\gamma\beta}$ and $L\alpha = \beta L_{\beta\alpha}$, we get
$$(K\circ L)\alpha = K(L\alpha) = K(\beta L_{\beta\alpha}) = (K\beta) L_{\beta\alpha} = (\gamma K_{\gamma\beta}) L_{\beta\alpha} = \gamma (K_{\gamma\beta} L_{\beta\alpha}).$$
Therefore $(K\circ L)_{\gamma\alpha} = K_{\gamma\beta} L_{\beta\alpha}$.
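The coordinate formulas of Exercises 7.2 and 7.3 can be spot-checked numerically. A minimal sketch, assuming numpy; the sizes, seed, and random matrices are arbitrary stand-ins for $K_{\beta\alpha}$, $L_{\beta\alpha}$, $J_{\gamma\beta}$ and are not part of the exercise:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 3, 5                      # dim V, dim W, dim U (arbitrary test sizes)
L = rng.standard_normal((m, n))        # L_{beta alpha} for L: V -> W
K = rng.standard_normal((m, n))        # K_{beta alpha} for K: V -> W
J = rng.standard_normal((p, m))        # J_{gamma beta} for J: W -> U
x = rng.standard_normal(n)

# Exercise 7.2: (K + L)_{beta alpha} = K_{beta alpha} + L_{beta alpha}
assert np.allclose((K + L) @ x, K @ x + L @ x)
# Exercise 7.3: (J o L)_{gamma alpha} = J_{gamma beta} L_{beta alpha}
assert np.allclose((J @ L) @ x, J @ (L @ x))
```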

Exercise 7.4. Since $L$ is linear, we get
$$L(L^{-1}(x+y)) = x + y = L(L^{-1}(x)) + L(L^{-1}(y)) = L(L^{-1}(x) + L^{-1}(y)).$$
Since $L$ is invertible, this implies $L^{-1}(x+y) = L^{-1}(x) + L^{-1}(y)$. The equality $L^{-1}(cx) = cL^{-1}(x)$ can be similarly proved.

We have $L\alpha = \beta L_{\beta\alpha}$. By left multiplying $L^{-1}$, right multiplying $L_{\beta\alpha}^{-1}$, and using the associativity (which is really the linearity of $L$), we get $\alpha L_{\beta\alpha}^{-1} = L^{-1}\beta$. Therefore $(L^{-1})_{\alpha\beta} = (L_{\beta\alpha})^{-1}$.

Exercise 7.6. Suppose $L$ maps $V$ to $W$. Suppose $K\circ L = I$. Then $L(x_1) = L(x_2)$ implies $x_1 = I(x_1) = K(L(x_1)) = K(L(x_2)) = I(x_2) = x_2$. Therefore $L$ is injective. Suppose $L\circ K = I$. Then any $y = I(y) = L(K(y))$ is the image of $K(y)$ under $L$. Therefore $L$ is surjective.

Conversely, suppose $L$ is injective. Let $v_1, \dots, v_n$ be a basis of $V$. The injectivity implies that $L(v_1), \dots, L(v_n)$ are still linearly independent. Therefore they can be extended to a basis $L(v_1), \dots, L(v_n), w_1, \dots, w_{m-n}$ of $W$. Define $K\colon W\to V$ to be the linear transform determined by $K(L(v_1)) = v_1, \dots, K(L(v_n)) = v_n$, $K(w_1) = 0, \dots, K(w_{m-n}) = 0$. Then we get $K\circ L = I$.

Now suppose $L$ is surjective. Then for a basis $w_1, \dots, w_m$ of $W$, we have $w_1 = L(v_1), \dots, w_m = L(v_m)$ for some vectors $v_1, \dots, v_m$ in $V$. Define $K\colon W\to V$ to be the linear transform determined by $K(w_1) = v_1, \dots, K(w_m) = v_m$. Then we get $L\circ K = I$.

Exercise 7.7. Let $A = L_{\alpha\alpha}$ and $B = L_{\beta\beta}$. Then $B = I_{\beta\alpha} A I_{\alpha\beta}$. On the other hand, $I_{\beta\alpha} I_{\alpha\beta}$ is the identity matrix, so that $I_{\alpha\beta} = I_{\beta\alpha}^{-1}$. Therefore we get $B = PAP^{-1}$ for $P = I_{\beta\alpha}$.

Exercise 7.8. If $k\in V^*$ and $l\in W^*$, then $(k, l)(x\oplus y) = k(x) + l(y)$ is a linear functional on $V\oplus W$. Conversely, for a linear functional $\lambda$ on $V\oplus W$, the restrictions $k = \lambda|_V$ and $l = \lambda|_W$ are linear functionals on $V$ and $W$, and $\lambda(x\oplus y) = \lambda(x) + \lambda(y) = k(x) + l(y) = (k, l)(x\oplus y)$.

Exercise 7.9. By
$$L^*(k+l)(x) = (k+l)(L(x)) = k(L(x)) + l(L(x)) = L^*k(x) + L^*l(x) = (L^*k + L^*l)(x),$$
we get $L^*(k+l) = L^*k + L^*l$. The equality $L^*(cl) = cL^*l$ can be similarly proved.
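The two directions of Exercise 7.6 can be illustrated numerically. The sketch below (assuming numpy) uses one concrete construction of a one-sided inverse, $(L^TL)^{-1}L^T$ for the injective case and $L^T(LL^T)^{-1}$ for the surjective case; these are convenient stand-ins, not the particular $K$ built from an extended basis in the solution above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Injective case: a 5x3 matrix of full column rank has a left inverse K with K L = I.
L = rng.standard_normal((5, 3))
K = np.linalg.inv(L.T @ L) @ L.T
assert np.allclose(K @ L, np.eye(3))

# Surjective case: a 3x5 matrix of full row rank has a right inverse K with L K = I.
L = rng.standard_normal((3, 5))
K = L.T @ np.linalg.inv(L @ L.T)
assert np.allclose(L @ K, np.eye(3))
```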

Exercise 7.10. By
$$(K+L)^*l(x) = l((K+L)(x)) = l(K(x) + L(x)) = l(K(x)) + l(L(x)) = K^*l(x) + L^*l(x) = (K^*l + L^*l)(x),$$
$$(cL)^*l(x) = l((cL)(x)) = l(cL(x)) = c\,l(L(x)) = c\,L^*l(x) = (cL^*)l(x),$$
$$(K\circ L)^*l(x) = l((K\circ L)(x)) = l(K(L(x))) = K^*l(L(x)) = L^*(K^*l)(x) = (L^*\circ K^*)l(x),$$
we get $(K+L)^* = K^* + L^*$, $(cL)^* = cL^*$, $(K\circ L)^* = L^*\circ K^*$.

Exercise 7.11. The problem is to show the equality $(L(x))^{**} = (L^*)^*(x^{**})$ for the maps
$$L\colon V\to W, \qquad (L^*)^*\colon (V^*)^*\to (W^*)^*.$$
The equality is verified below:
$$(L(x))^{**}(l) = l(L(x)), \qquad (L^*)^*(x^{**})(l) = x^{**}(L^*(l)) = L^*(l)(x) = l(L(x)).$$

Exercise 7.12. For $L\colon V\to W$, we have $L^*\colon W^*\to V^*$. By Exercise 7.6, $L$ is injective if and only if $K\circ L = I$ for some linear transform $K$. By Exercises 7.10 and 7.11, $K\circ L = I$ is equivalent to $L^*\circ K^* = (K\circ L)^* = I^* = I$. By Exercise 7.6 again, $L^*\circ K^* = I$ for some linear transform $K^*$ if and only if $L^*$ is surjective. This proves that $L$ is injective if and only if $L^*$ is surjective. By the similar argument, we may prove that $L$ is surjective if and only if $L^*$ is injective. Remark: Using Exercise 7.11, the two statements are dual to each other, and therefore imply each other.

Exercise 7.13. Let
$$L(v_1) = a_{11} w_1 + a_{21} w_2 + \cdots + a_{m1} w_m,\ \dots,\ L(v_n) = a_{1n} w_1 + a_{2n} w_2 + \cdots + a_{mn} w_m,$$
and
$$L^*(w_1^*) = b_{11} v_1^* + b_{21} v_2^* + \cdots + b_{n1} v_n^*,\ \dots,\ L^*(w_m^*) = b_{1m} v_1^* + b_{2m} v_2^* + \cdots + b_{nm} v_n^*.$$
Then by the discussion before Proposition 7.1.1,
$$a_{ij} = w_i^*(L(v_j)), \qquad b_{ji} = (L^*(w_i^*))(v_j) = w_i^*(L(v_j)).$$
Therefore $a_{ij} = b_{ji}$.

Exercise 7.14. By Schwarz's inequality, we have $|l(x)| \le \|a\|_2\|x\|_2$. Moreover, the equality can happen, with $l(a) = \|a\|_2^2$. Therefore the norm $\|l\| = \|a\|_2$.

In general, a linear transform $L = (l_1, l_2, \dots, l_m)\colon \mathbb{R}^n\to\mathbb{R}^m$ satisfies
$$\|L(x)\|_\infty = \max\{|l_1(x)|, |l_2(x)|, \dots, |l_m(x)|\} \le \max\{\|l_1\|\|x\|, \dots, \|l_m\|\|x\|\} = \max\{\|l_1\|, \dots, \|l_m\|\}\,\|x\|.$$
On the other hand, the equality can happen: if $\max\{\|l_1\|, \dots, \|l_m\|\} = \|l_k\|$, then there is $x\ne 0$ satisfying $|l_k(x)| = \|l_k\|\|x\| = \max\{\|l_1\|, \dots, \|l_m\|\}\,\|x\|$. This implies that the equality holds. Thus $\|L\| = \max\{\|l_1\|, \dots, \|l_m\|\}$. In case $l_k(x) = a_k\cdot x$, this means $\|L\| = \max\{\|a_1\|_2, \|a_2\|_2, \dots, \|a_m\|_2\}$.

Exercise 7.16. Let the columns of $A$ be $a_1, a_2, \dots, a_n$. Then
$$\|Ax\|_2 = \|x_1 a_1 + x_2 a_2 + \cdots + x_n a_n\|_2 \le |x_1|\|a_1\|_2 + \cdots + |x_n|\|a_n\|_2 \le \sqrt{x_1^2 + \cdots + x_n^2}\,\sqrt{\|a_1\|_2^2 + \cdots + \|a_n\|_2^2} = \|x\|_2\sqrt{\textstyle\sum a_{ij}^2}.$$
Therefore the norm with respect to the Euclidean norm satisfies $\|A\| \le \sqrt{\sum a_{ij}^2}$.
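A minimal numerical check of Exercises 7.14 and 7.16, assuming numpy and the Euclidean norm on the domain (so $\|l_i\| = \|a_i\|_2$ is the 2-norm of the $i$-th row); the matrix and seed are arbitrary test choices:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 6))
row_norms = np.linalg.norm(A, axis=1)

# Exercise 7.14: with the Euclidean norm on R^n and the max-norm on R^m,
# the norm of x -> Ax is max_i ||a_i||_2.  The bound holds for a random unit vector,
# and equality is attained at x = a_k / ||a_k||_2 for the longest row a_k.
x = rng.standard_normal(6)
x /= np.linalg.norm(x)
assert np.abs(A @ x).max() <= row_norms.max() + 1e-12
k = row_norms.argmax()
x_star = A[k] / row_norms[k]
assert np.isclose(np.abs(A @ x_star).max(), row_norms.max())

# Exercise 7.16: the Euclidean operator norm is at most the Frobenius norm sqrt(sum a_ij^2).
assert np.linalg.norm(A, 2) <= np.linalg.norm(A, 'fro') + 1e-12
```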

Exercise 7.18. For $n \ge m$, we have
$$\|L^m + L^{m+1} + \cdots + L^n\| \le \|L^m\| + \|L^{m+1}\| + \cdots + \|L^n\| \le \|L\|^m + \|L\|^{m+1} + \cdots + \|L\|^n = \frac{\|L\|^m(1 - \|L\|^{n-m+1})}{1 - \|L\|} \le \frac{\|L\|^m}{1 - \|L\|}.$$
By $\|L\| < 1$, this implies that $\sum_{n=0}^\infty L^n$ satisfies the Cauchy criterion. Therefore the series converges. The assumption $\|L\| < 1$ and $\|L^n\| \le \|L\|^n$ imply that $\lim_{n\to\infty} L^n = O$, the zero transform. By taking $\lim_{n\to\infty}$ on both sides of the equalities
$$(I + L + L^2 + \cdots + L^n)(I - L) = I - L^{n+1}, \qquad (I - L)(I + L + L^2 + \cdots + L^n) = I - L^{n+1},$$
we find $\big(\sum_{n=0}^\infty L^n\big)(I - L) = (I - L)\big(\sum_{n=0}^\infty L^n\big) = I$. Therefore $\sum_{n=0}^\infty L^n = (I - L)^{-1}$.

If $L$ is invertible and $\|K - L\| < \frac{1}{\|L^{-1}\|}$, then
$$\|I - KL^{-1}\| = \|(L - K)L^{-1}\| \le \|L - K\|\,\|L^{-1}\| < 1.$$
By the first part, $KL^{-1} = I - (I - KL^{-1})$ is invertible, which further implies that $K$ is invertible. Moreover, if $M = I - KL^{-1}$, then
$$\|LK^{-1}\| = \|(KL^{-1})^{-1}\| = \|(I - M)^{-1}\| = \Big\|\sum_{n=0}^\infty M^n\Big\| \le \sum_{n=0}^\infty \|M\|^n = \frac{1}{1 - \|M\|}.$$
By $\|M\| \le \|L - K\|\,\|L^{-1}\|$, we further get
$$\|K^{-1}\| = \|L^{-1} L K^{-1}\| \le \|L^{-1}\|\,\|LK^{-1}\| \le \frac{\|L^{-1}\|}{1 - \|M\|} \le \frac{\|L^{-1}\|}{1 - \|L - K\|\,\|L^{-1}\|},$$
$$\|K^{-1} - L^{-1}\| = \|K^{-1}(L - K)L^{-1}\| \le \|K^{-1}\|\,\|L - K\|\,\|L^{-1}\| \le \frac{\|L - K\|\,\|L^{-1}\|^2}{1 - \|L - K\|\,\|L^{-1}\|}.$$
The estimation tells us that matrices in the ball $B(L, \|L^{-1}\|^{-1})$ are invertible, so that $GL(n)$ is open. Moreover, the estimation also shows that inside the ball, $\lim K = L$ implies $\lim K^{-1} = L^{-1}$, so that the inverse map is continuous.

Exercise 7.19. For any norm, the subset $K = \{x\colon \|x\| = 1\}$ is bounded and closed and is therefore compact. Then
$$\big|\,\|L(x)\| - \|L(y)\|\,\big| \le \|L(x) - L(y)\| = \|L(x - y)\| \le \lambda\|x - y\|$$
implies that $\|L(x)\|$ is a continuous function. Thus $\|L\| = \sup_{x\in K}\|L(x)\|$ is reached at some point in $K$.

Exercise 7.20. The continuity of $L + K$ follows from $\|(L + K) - (L_0 + K_0)\| \le \|L - L_0\| + \|K - K_0\|$. The continuity of $cL$ follows from $\|cL - c_0L_0\| \le |c - c_0|\,\|L\| + |c_0|\,\|L - L_0\|$. The continuity of $K\circ L$ follows from
$$\|K\circ L - K_0\circ L_0\| = \|K\circ(L - L_0) + (K - K_0)\circ L_0\| \le \|K\circ(L - L_0)\| + \|(K - K_0)\circ L_0\| \le \|K\|\,\|L - L_0\| + \|K - K_0\|\,\|L_0\|.$$
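The Neumann series and the perturbation bound of Exercise 7.18 can be checked numerically. A sketch assuming numpy, with the spectral norm standing in for the abstract operator norm and randomly chosen test matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
M = rng.standard_normal((n, n))
M *= 0.5 / np.linalg.norm(M, 2)                    # ensure ||M|| < 1
# Neumann series: sum_k M^k converges to (I - M)^{-1}
S, P = np.eye(n), np.eye(n)
for _ in range(200):
    P = P @ M
    S = S + P
assert np.allclose(S, np.linalg.inv(np.eye(n) - M))

# Perturbation bound ||K^{-1} - L^{-1}|| <= ||L - K|| ||L^{-1}||^2 / (1 - ||L - K|| ||L^{-1}||)
L = np.eye(n) + 0.1 * rng.standard_normal((n, n))
K = L + 0.01 * rng.standard_normal((n, n))
d = np.linalg.norm(L - K, 2) * np.linalg.norm(np.linalg.inv(L), 2)
assert d < 1
lhs = np.linalg.norm(np.linalg.inv(K) - np.linalg.inv(L), 2)
rhs = np.linalg.norm(L - K, 2) * np.linalg.norm(np.linalg.inv(L), 2) ** 2 / (1 - d)
assert lhs <= rhs + 1e-12
```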

Exercise 7.21. $\|cx\|_2 = |c|\,\|x\|_2$. Therefore the norm of the scalar product is 1. $|x\cdot y| \le \|x\|_2\|y\|_2$ and $x\cdot x = \|x\|_2\|x\|_2$. Therefore the norm of the dot product is 1.

Exercise 7.22. We only verify the triangle inequality here. For bilinear maps $B$ and $B'$, we have
$$\|(B + B')(x, y)\| = \|B(x, y) + B'(x, y)\| \le \|B(x, y)\| + \|B'(x, y)\| \le \|B\|\,\|x\|\,\|y\| + \|B'\|\,\|x\|\,\|y\| = (\|B\| + \|B'\|)\,\|x\|\,\|y\|.$$
This implies $\|B + B'\| \le \|B\| + \|B'\|$.

Exercise 7.23. By
$$\|B(K(x), L(y))\| \le \|B\|\,\|K(x)\|\,\|L(y)\| \le \|B\|\,\|K\|\,\|L\|\,\|x\|\,\|y\|,$$
the norm of the bilinear map $B(K(x), L(y))$ is $\le \|B\|\,\|K\|\,\|L\|$.

Exercise 7.24. Let
$$v_i' = p_{1i} v_1 + p_{2i} v_2 + \cdots + p_{ni} v_n, \quad I_{\alpha\alpha'} = (p_{ij}), \qquad w_j' = q_{1j} w_1 + q_{2j} w_2 + \cdots + q_{mj} w_m, \quad I_{\beta\beta'} = (q_{ij}).$$
Then
$$b_{ij}' = b(v_i', w_j') = \sum_{k,l} b(p_{ki} v_k, q_{lj} w_l) = \sum_{k,l} p_{ki} q_{lj}\, b(v_k, w_l) = \sum_{k,l} p_{ki}\, b_{kl}\, q_{lj}.$$
In matrix form, this means $B_{\alpha'\beta'} = I_{\alpha\alpha'}^T B_{\alpha\beta} I_{\beta\beta'}$.
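The change-of-basis rule of Exercise 7.24 can be checked numerically. A sketch assuming numpy; $B$, $P$, $Q$ are arbitrary random matrices playing the roles of $B_{\alpha\beta}$, $I_{\alpha\alpha'}$, $I_{\beta\beta'}$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 4, 3
B = rng.standard_normal((n, m))        # matrix (b(v_i, w_j)) of a bilinear form on V x W
P = rng.standard_normal((n, n))        # columns express the new basis alpha' in terms of alpha
Q = rng.standard_normal((m, m))        # columns express the new basis beta' in terms of beta

# b(v'_i, w'_j) computed directly from the coordinates of the new basis vectors
B_new = np.array([[P[:, i] @ B @ Q[:, j] for j in range(m)] for i in range(n)])
assert np.allclose(B_new, P.T @ B @ Q)     # Exercise 7.24: B_{alpha'beta'} = P^T B Q
```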

Exercise 7.25. This means $[x\mapsto b(x, \cdot)]^*(y^{**}) = b(\cdot, y)$. Applying the left side to $z$, we get
$$\big[[x\mapsto b(x, \cdot)]^*(y^{**})\big](z) = y^{**}\big([x\mapsto b(x, \cdot)](z)\big) = y^{**}(b(z, \cdot)) = b(z, y).$$
The result is the right side.

Exercise 7.26. The effect of $V\to W^*$ on the basis vectors is
$$v_i \mapsto b(v_i, \cdot) = \sum_j b(v_i, w_j)\, w_j^* = \sum_j b_{ij}\, w_j^*.$$
Since the sum is over the second index of $b_{ij}$, the matrix of the linear transform is $B_{\alpha\beta} = (b_{ij})$.

The effect of $W\to V^*$ on the basis vectors is
$$w_j \mapsto b(\cdot, w_j) = \sum_i b(v_i, w_j)\, v_i^* = \sum_i b_{ij}\, v_i^*.$$
Since the sum is over the first index of $b_{ij}$, the matrix of the linear transform is $B_{\alpha\beta}^T = (b_{ji})$.

Exercise 7.27. Let $L_{\beta\alpha} = (a_{ij})$. Then
$$b_{ij} = L(e_i)\cdot e_j = (a_{1i} e_1 + \cdots + a_{ni} e_n)\cdot e_j = a_{ji}.$$
Therefore $B_{\alpha\beta} = L_{\beta\alpha}^T$. With respect to the Euclidean norm, we have $\|z\| = \sup_{\|y\|=1} z\cdot y$ by Exercise 7.14. Then
$$\|L\| = \sup_{\|x\|=1}\|L(x)\| = \sup_{\|x\|=1}\ \sup_{\|y\|=1} L(x)\cdot y = \sup_{\|x\| = \|y\| = 1} L(x)\cdot y.$$

Exercise 7.28. By the non-singular property, two vectors $x, y\in V$ are equal if and only if $b(x, z) = b(y, z)$ for all $z\in W$. For the given basis $\beta$ of $W$, this is equivalent to $b(x, w_i) = b(y, w_i)$ for all $i$. Then the equality
$$x = b(x, w_1) v_1 + b(x, w_2) v_2 + \cdots + b(x, w_n) v_n$$
follows from
$$b(\text{right side}, w_i) = b(x, w_1) b(v_1, w_i) + b(x, w_2) b(v_2, w_i) + \cdots + b(x, w_n) b(v_n, w_i) = b(x, w_i) b(v_i, w_i) = b(x, w_i).$$
Similarly, for $y\in W$, we have
$$y = b(v_1, y) w_1 + b(v_2, y) w_2 + \cdots + b(v_n, y) w_n.$$

Exercise 7.29. The definition of $L^*\colon W\to V$ is the following. A vector $y\in W$ gives a linear functional $\langle\cdot, y\rangle_W\in W^*$ (this uses the isomorphism $W\cong W^*$). Then we get the linear functional $\langle L(\cdot), y\rangle\in V^*$. We wish to express this linear functional as $\langle\cdot, z\rangle_V$ for some $z\in V$ (this uses the isomorphism $V\cong V^*$), and this $z$ is $L^*(y)$. This means
$$\langle L(\cdot), y\rangle = \langle\cdot, z\rangle = \langle\cdot, L^*(y)\rangle, \qquad\text{or}\qquad \langle L(x), y\rangle = \langle x, L^*(y)\rangle \quad\text{for all } x\in V,\ y\in W.$$
Note that for each fixed $y$, the left side is a linear functional on $V$. Since the linear functional is the inner product with a unique $z$, this uniquely determines $L^*(y)$.

We have
$$\langle x, (K+L)^*(y)\rangle = \langle (K+L)(x), y\rangle = \langle K(x) + L(x), y\rangle = \langle K(x), y\rangle + \langle L(x), y\rangle = \langle x, K^*(y)\rangle + \langle x, L^*(y)\rangle = \langle x, K^*(y) + L^*(y)\rangle = \langle x, (K^* + L^*)(y)\rangle.$$
Since this holds for all $x$, by the non-singular property of the inner product, we get $(K+L)^*(y) = (K^* + L^*)(y)$ for all $y$, or $(K+L)^* = K^* + L^*$. The equalities $(cL)^* = cL^*$ and $(K\circ L)^* = L^*\circ K^*$ can be similarly proved.

Let
$$L(v_i) = a_{1i} w_1 + a_{2i} w_2 + \cdots + a_{mi} w_m, \quad L_{\beta\alpha} = (a_{ij}), \qquad L^*(w_j) = b_{1j} v_1 + b_{2j} v_2 + \cdots + b_{nj} v_n, \quad (L^*)_{\alpha\beta} = (b_{ij}).$$
Since $\alpha$ and $\beta$ are orthonormal bases, we have
$$a_{ji} = \langle L(v_i), w_j\rangle = \langle v_i, L^*(w_j)\rangle = b_{ij}.$$
This proves $(L^*)_{\alpha\beta} = (L_{\beta\alpha})^T$. Finally, we have $\|L^*\| = \|L\|$ by the second part of Exercise 7.27.

Exercise 7.30. The homomorphism $V_1\oplus V_2\to (W_1\oplus W_2)^* = W_1^*\oplus W_2^*$ (see Exercise 7.8 for the equality) induced by $b$ is the direct sum of the homomorphisms $V_1\to W_1^*$ and $V_2\to W_2^*$ induced by $b_1$ and $b_2$.

Exercise 7.31. Any bilinear form satisfies
$$b(x+y, x+y) - b(x, x) - b(y, y) = b(x, y) + b(y, x).$$
In particular, if $b(x, x) = 0$ for any $x$, then $b(x, y) + b(y, x) = 0$ for any $x, y$. This means $b$ is skew-symmetric.

Exercise 7.32. The area of the triangle is
$$\frac{1}{2}\left|\det\begin{pmatrix} (a+h) - a & (a+2h) - a \\ (a+h)^2 - a^2 & (a+2h)^2 - a^2 \end{pmatrix}\right| = \frac{1}{2}\left|\det\begin{pmatrix} h & 2h \\ 2ah + h^2 & 4ah + 4h^2 \end{pmatrix}\right| = h^3.$$
$P_n - P_{n-1}$ consists of triangles with vertices at the points of the parabola with $x$-coordinates
$$\frac{2k}{2^{n-1}},\ \frac{2k+1}{2^{n-1}},\ \frac{2k+2}{2^{n-1}}, \qquad k = 0, 1, \dots, 2^{n-1} - 1.$$
By the first part, each such triangle has area $\big(\tfrac{1}{2^{n-1}}\big)^3 = \tfrac{1}{2^{3(n-1)}}$. The total area of such triangles is $2^{n-1}\cdot\tfrac{1}{2^{3(n-1)}} = \tfrac{1}{4^{n-1}}$. The area of the region is the sum of the areas of $P_n - P_{n-1}$ for $n\ge 0$ ($P_0$ is the triangle with vertices $(0, 0)$, $(0, 4)$, $(2, 4)$ and has area $4$). Therefore the area of $A$ is
$$4 + \sum_{n=1}^\infty \frac{1}{4^{n-1}} = \sum_{n=0}^\infty \frac{4}{4^n} = \frac{16}{3}.$$
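The adjoint identity in Exercise 7.29 and the triangle-area computation at the start of Exercise 7.32 are easy to check numerically. A sketch assuming numpy; the test values and random data are arbitrary:

```python
import numpy as np

def parabola_triangle_area(a, h):
    """Area of the triangle with vertices (a, a^2), (a+h, (a+h)^2), (a+2h, (a+2h)^2)."""
    p, q, r = (a, a**2), (a + h, (a + h)**2), (a + 2*h, (a + 2*h)**2)
    M = np.array([[q[0] - p[0], r[0] - p[0]],
                  [q[1] - p[1], r[1] - p[1]]])
    return abs(np.linalg.det(M)) / 2

# Exercise 7.32 (first part): the area is h^3, independent of a.
for a, h in [(0.0, 1.0), (2.0, 0.5), (-1.3, 0.25)]:
    assert np.isclose(parabola_triangle_area(a, h), h**3)

# Exercise 7.29: with respect to the dot product, <Lx, y> = <x, L^T y>.
rng = np.random.default_rng(5)
L = rng.standard_normal((3, 4))
x, y = rng.standard_normal(4), rng.standard_normal(3)
assert np.isclose((L @ x) @ y, x @ (L.T @ y))
```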

Exercise 7.33. For any bilinear form, we have
$$b(x+y, x+y) - b(x, x) - b(y, y) = b(x, y) + b(y, x).$$
If $b$ is symmetric and $q(x) = b(x, x)$, then the equality above becomes
$$b(x, y) = \frac{1}{2}\big(q(x+y) - q(x) - q(y)\big).$$
Substituting $y$ by $-y$ and using $q(-y) = q(y)$, we get
$$-b(x, y) = b(x, -y) = \frac{1}{2}\big(q(x-y) - q(x) - q(y)\big).$$
Subtracting the two equalities, we get
$$2b(x, y) = \frac{1}{2}\big(q(x+y) - q(x-y)\big).$$

Exercise 7.34. A quadratic form is homogeneous of second order because a bilinear form preserves the scalar multiplication in each variable. The parallelogram law follows from Exercise 7.33 and the fact that $b$ is symmetric.

Exercise 7.35. Conversely, suppose the parallelogram law is satisfied. By taking $x = y = 0$ in the law, we get $q(0) = 0$. By further taking $x = 0$ in the law, we get $q(-y) = q(y)$. The symmetry property of $b$ follows from its definition. Next we prove $b(x+y, z) + b(x-y, z) = 2b(x, z)$:
$$b(x+y, z) + b(x-y, z) = \frac{1}{4}\big(q(x+y+z) - q(x+y-z) + q(x-y+z) - q(x-y-z)\big)$$
$$= \frac{1}{4}\big(q(x+y+z) + q(x-y+z)\big) - \frac{1}{4}\big(q(x+y-z) + q(x-y-z)\big)$$
$$= \frac{1}{2}\big(q(x+z) + q(y)\big) - \frac{1}{2}\big(q(x-z) + q(y)\big) = 2b(x, z).$$
For fixed $z$, the function $f(x) = b(x, z)$ satisfies $f(0) = 0$ and $f(x+y) + f(x-y) = 2f(x)$. Taking $x = y$, we get $f(2x) = 2f(x)$. Then replacing $x$ and $y$ by $\frac{1}{2}(x+y)$ and $\frac{1}{2}(x-y)$, we get
$$f(x) + f(y) = 2f\big(\tfrac{1}{2}(x+y)\big) = f(x+y).$$
This shows that $b$ is additive in the first variable. By symmetry, it is also additive in the second variable.

Finally, if $q$ is continuous, then $b$ is also continuous. By Exercise ??, the biadditivity of $b$ implies that $b$ is bilinear.

Exercise 7.36.
(1) $x^2 + 4xy - 5y^2 = (x^2 + 4xy + 4y^2) - 9y^2 = (x + 2y)^2 - (3y)^2$, indefinite.
(2) $2x^2 + 4xy = 2(x^2 + 2xy + y^2) - 2y^2 = 2(x + y)^2 - 2y^2$, indefinite.
(3) $4x_1^2 + 4x_1x_2 + 5x_2^2 = (4x_1^2 + 4x_1x_2 + x_2^2) + 4x_2^2 = (2x_1 + x_2)^2 + (2x_2)^2$, positive definite.
(4) $x^2 + 2y^2 + z^2 + 2xy - 2xz = (x^2 + 2x(y - z) + (y - z)^2) - (y - z)^2 + 2y^2 + z^2 = (x + y - z)^2 + y^2 + 2yz = (x + y - z)^2 + (y + z)^2 - z^2$, indefinite.
(5) $-2u^2 - v^2 - 6w^2 - 4uw + 2vw = -2(u^2 + 2uw + w^2) - v^2 - 4w^2 + 2vw = -2(u + w)^2 - (v - w)^2 - 3w^2$, negative definite.
(6) $x_1^2 + x_3^2 + 2x_1x_2 + 2x_1x_3 + 2x_1x_4 + 2x_3x_4 = \big(x_1^2 + 2x_1(x_2 + x_3 + x_4) + (x_2 + x_3 + x_4)^2\big) - (x_2 + x_3 + x_4)^2 + x_3^2 + 2x_3x_4 = (x_1 + x_2 + u)^2 - (x_2 + u)^2 + u^2 - x_4^2$, where $u = x_3 + x_4$; indefinite.

Exercise 7.37. $x^2 + 2y^2 + z^2 + 2xy - 2xz = 2y^2 + 2xy + (z - x)^2 = -\frac{1}{2}x^2 + 2\big(y + \frac{1}{2}x\big)^2 + (z - x)^2$.

Exercise 7.38. If the coefficient of the square term $x_i^2$ is $a_{ii} \le 0$, then $q(e_i) = a_{ii} \le 0$. Therefore $q$ cannot be positive definite.

Exercise 7.39. The unit sphere $S = \{x\colon\|x\| = 1\}$ is compact. Then the continuous function $q(x)$ reaches its minimum $\lambda$ at some $x_0\in S$. Then $q(x) \ge q(x_0) = \lambda > 0$ for any $\|x\| = 1$, where the first inequality is the definition of minimum, and the second inequality is due to the positive definiteness assumption on $q$. Now for any vector $x$, we have $x = ru$ with $r = \|x\|$ and $\|u\| = 1$. Then by the second order homogeneity of $q$, we have
$$q(x) = q(ru) = r^2 q(u) \ge r^2\lambda = \lambda\|x\|^2.$$
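The classifications in Exercise 7.36 can be cross-checked by a different route, the eigenvalues of the symmetric coefficient matrix, rather than completing squares. A sketch assuming numpy; the symmetric matrices below were written out here from the quadratic forms (off-diagonal entries are half the cross coefficients) and are not part of the original solution:

```python
import numpy as np

def classify(S, tol=1e-9):
    """Classify the quadratic form q(x) = x^T S x from the eigenvalues of symmetric S."""
    w = np.linalg.eigvalsh(S)
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.any(w > tol) and np.any(w < -tol):
        return "indefinite"
    return "semidefinite"

assert classify(np.array([[1, 2], [2, -5]])) == "indefinite"          # (1) x^2 + 4xy - 5y^2
assert classify(np.array([[2, 2], [2, 0]])) == "indefinite"           # (2) 2x^2 + 4xy
assert classify(np.array([[4, 2], [2, 5]])) == "positive definite"    # (3) 4x1^2 + 4x1x2 + 5x2^2
assert classify(np.array([[1, 1, -1], [1, 2, 0], [-1, 0, 1]])) == "indefinite"             # (4)
assert classify(np.array([[-2, 0, -2], [0, -1, 1], [-2, 1, -6]])) == "negative definite"   # (5)
```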

Exercise 7.40. The proof is the same as Proposition 7.1.2. In particular, if $F$ and $G$ are multilinear, then the triangle inequality follows from
$$\|(F + G)(x_1, x_2, \dots, x_k)\| = \|F(x_1, \dots, x_k) + G(x_1, \dots, x_k)\| \le \|F(x_1, \dots, x_k)\| + \|G(x_1, \dots, x_k)\| \le \|F\|\,\|x_1\|\cdots\|x_k\| + \|G\|\,\|x_1\|\cdots\|x_k\| = (\|F\| + \|G\|)\,\|x_1\|\cdots\|x_k\|.$$

Exercise 7.41. By the definition of the norm of multilinear maps, we have
$$\|B_1(x, B_2(u, v))\| \le \|B_1\|\,\|x\|\,\|B_2(u, v)\| \le \|B_1\|\,\|x\|\,\|B_2\|\,\|u\|\,\|v\|.$$
This implies that the norm of the trilinear map $B_1(x, B_2(u, v))$ is $\le \|B_1\|\,\|B_2\|$. In general, the norm of a composition of multilinear maps is bounded by the product of the norms of the individual multilinear maps. This generalizes Proposition 7.1.2.

Exercise 7.42. We fix a matrix $A$ and consider $\det AB = \det(Ab_1\ Ab_2\ \cdots\ Ab_n)$ as a function of the columns $b_1, b_2, \dots, b_n$ of $B$. Since $\det$ is multilinear, we have
$$\det(A(b_1 + b_1')\ Ab_2\ \cdots\ Ab_n) = \det(Ab_1 + Ab_1'\ \ Ab_2\ \cdots\ Ab_n) = \det(Ab_1\ Ab_2\ \cdots\ Ab_n) + \det(Ab_1'\ Ab_2\ \cdots\ Ab_n),$$
$$\det(A(cb_1)\ Ab_2\ \cdots\ Ab_n) = \det(cAb_1\ Ab_2\ \cdots\ Ab_n) = c\det(Ab_1\ Ab_2\ \cdots\ Ab_n).$$
This shows that $\det AB$ is linear in the first column of $B$. By the similar argument, it is linear in any column of $B$. On the other hand, the alternating property of $\det$ implies that
$$\det(Ab_2\ Ab_1\ \cdots\ Ab_n) = -\det(Ab_1\ Ab_2\ \cdots\ Ab_n).$$
By the similar argument for other pairs of columns of $B$, we see that $\det AB$ is alternating in the columns of $B$. We know that a function of the square matrix $B$ that is multilinear and alternating in the columns of $B$ must be a constant multiple of the determinant of $B$. So $\det AB = a\det B$. The constant can be determined by taking $B$ to be the identity matrix, and we get $a = a\det I = \det AI = \det A$. Therefore $\det AB = \det A\det B$.

Exercise 7.43. $\dim\Lambda^k V = \dfrac{n!}{k!(n-k)!}$, the number of $k$-element subsets of $\{1, 2, \dots, n\}$.
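A quick numerical confirmation of Exercises 7.42 and 7.43, assuming numpy and the Python standard library; the matrix sizes are arbitrary:

```python
import numpy as np
from math import comb
from itertools import combinations

rng = np.random.default_rng(6)
A, B = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))
# Exercise 7.42: det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# Exercise 7.43: dim Lambda^k V = n! / (k! (n-k)!), the number of k-element subsets of {1,...,n}
n, k = 6, 3
assert len(list(combinations(range(n), k))) == comb(n, k) == 20
```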

Exercise 7.44. We note that both sides of the associativity $(\lambda\wedge\mu)\wedge\nu = \lambda\wedge(\mu\wedge\nu)$ are trilinear maps $\Lambda\mathbb{R}^n\times\Lambda\mathbb{R}^n\times\Lambda\mathbb{R}^n\to\Lambda\mathbb{R}^n$ that send the standard basis element $(e_{i_1}\wedge\cdots\wedge e_{i_k},\ e_{j_1}\wedge\cdots\wedge e_{j_l},\ e_{k_1}\wedge\cdots\wedge e_{k_m})$ to $e_{i_1}\wedge\cdots\wedge e_{i_k}\wedge e_{j_1}\wedge\cdots\wedge e_{j_l}\wedge e_{k_1}\wedge\cdots\wedge e_{k_m}$. Therefore the two sides are equal.

Exercise 7.45. The multilinear and alternating property for the multiple wedge product map follows from the fact that the exterior product is a graded commutative product satisfying the usual algebraic properties. The explicit formula for $x_1\wedge x_2\wedge\cdots\wedge x_k$ is obtained by applying the formula (7.3.2) to $F(x_1, x_2, \dots, x_k) = x_1\wedge x_2\wedge\cdots\wedge x_k$.

Exercise 7.46. The equality $x_1\wedge x_2\wedge\cdots\wedge x_n = \det(x_1\ x_2\ \cdots\ x_n)\, e_{[n]}$ is the special case of the explicit formula in Exercise 7.45, applied to $V = \mathbb{R}^n$ and the standard basis. Applying the explicit formula in Exercise 7.45 to $x_2\wedge\cdots\wedge x_n$, we get
$$x_2\wedge\cdots\wedge x_n = \sum_{1\le i_2 < \cdots < i_n\le n} \det\begin{pmatrix} x_{2i_2} & x_{3i_2} & \cdots & x_{ni_2} \\ x_{2i_3} & x_{3i_3} & \cdots & x_{ni_3} \\ \vdots & \vdots & & \vdots \\ x_{2i_n} & x_{3i_n} & \cdots & x_{ni_n} \end{pmatrix} e_{i_2}\wedge e_{i_3}\wedge\cdots\wedge e_{i_n}.$$
Note that the indices in the sum must be $(i_2, \dots, i_n) = (1, \dots, \hat i, \dots, n) = (1, \dots, i-1, i+1, \dots, n)$, obtained by deleting $i$ from all the natural numbers between $1$ and $n$. Therefore
$$x_2\wedge\cdots\wedge x_n = \sum_{1\le i\le n} C_{i1}\, e_1\wedge\cdots\wedge e_{i-1}\wedge e_{i+1}\wedge\cdots\wedge e_n, \qquad C_{i1} = \det\begin{pmatrix} x_{21} & x_{31} & \cdots & x_{n1} \\ \vdots & \vdots & & \vdots \\ x_{2(i-1)} & x_{3(i-1)} & \cdots & x_{n(i-1)} \\ x_{2(i+1)} & x_{3(i+1)} & \cdots & x_{n(i+1)} \\ \vdots & \vdots & & \vdots \\ x_{2n} & x_{3n} & \cdots & x_{nn} \end{pmatrix}.$$
Then for $x_1 = x_{11} e_1 + \cdots + x_{1n} e_n$, we get
$$x_1\wedge x_2\wedge\cdots\wedge x_n = \sum_{1\le i\le n} x_{1i} C_{i1}\, e_i\wedge e_1\wedge\cdots\wedge e_{i-1}\wedge e_{i+1}\wedge\cdots\wedge e_n = \sum_{1\le i\le n} (-1)^{i-1} x_{1i} C_{i1}\, e_1\wedge\cdots\wedge e_n = \Big(\sum_{1\le i\le n} (-1)^{i-1} x_{1i} C_{i1}\Big) e_{[n]}.$$

Compared with the first part, we get
$$\det(x_1\ x_2\ \cdots\ x_n) = \sum_{1\le i\le n} (-1)^{i-1} x_{1i} C_{i1}.$$
This is the cofactor expansion with respect to the first column.

Let $A$ be an $n\times n$ matrix. For two subsets $I, J\subset [n] = \{1, \dots, n\}$ of $k$ numbers, let $A_{IJ}$ be the $k\times k$ submatrix of the $I$-rows and $J$-columns. Moreover, let $\mathrm{sign}(I, [n]-I)$ be the sign given by the parity of the number of pair exchanges needed to convert $(I, [n]-I)$ to $(1, \dots, n)$. Then
$$\det A = \sum_{I\subset[n],\ |I| = k} \mathrm{sign}(I, [n]-I)\,\det A_{I\{1,\dots,k\}}\,\det A_{([n]-I)\{k+1,\dots,n\}}.$$
More generally, suppose $[n]$ is a disjoint union of subsets $J_1, \dots, J_p$ with $k_1, \dots, k_p$ numbers, where $k_1 + \cdots + k_p = n$. Then
$$\det A = \sum_{[n] = I_1\sqcup\cdots\sqcup I_p,\ |I_i| = k_i} \mathrm{sign}(I_1, \dots, I_p)\,\mathrm{sign}(J_1, \dots, J_p)\,\det A_{I_1J_1}\cdots\det A_{I_pJ_p}.$$

Exercise 7.47. This follows from Exercise 7.6 and $\Lambda(K\circ L) = \Lambda K\circ\Lambda L$.

Exercise 7.48. The following shows that $\Lambda(cL)$ and $c^k\Lambda L$ are equal on a basis of the vector space $\Lambda^k V$:
$$\Lambda(cL)(v_{i_1}\wedge v_{i_2}\wedge\cdots\wedge v_{i_k}) = (cL)(v_{i_1})\wedge(cL)(v_{i_2})\wedge\cdots\wedge(cL)(v_{i_k}) = cL(v_{i_1})\wedge cL(v_{i_2})\wedge\cdots\wedge cL(v_{i_k}) = c^k\big(L(v_{i_1})\wedge L(v_{i_2})\wedge\cdots\wedge L(v_{i_k})\big) = c^k\Lambda L(v_{i_1}\wedge v_{i_2}\wedge\cdots\wedge v_{i_k}).$$
This implies $\Lambda(cL) = c^k\Lambda L$ on the whole $\Lambda^k V$.

Exercise 7.49. Take a basis $\alpha = \{v_1, \dots, v_n\}$ of $V$ and express $\lambda$ as a linear combination
$$\lambda = \sum_{i_1 < \cdots < i_k} a_{i_1\cdots i_k}\, v_{i_1}\wedge\cdots\wedge v_{i_k}.$$
By $\lambda\ne 0$, we have $a_{i_1\cdots i_k}\ne 0$ for a particular choice of indices $(i_1\cdots i_k)$. Let $(j_1\cdots j_{n-k})$ be the complement of $(i_1\cdots i_k)$ in $[n]$, and let $\mu = v_{j_1}\wedge\cdots\wedge v_{j_{n-k}}\in\Lambda^{n-k}V$. Then
$$\lambda\wedge\mu = a_{i_1\cdots i_k}\, v_{i_1}\wedge\cdots\wedge v_{i_k}\wedge v_{j_1}\wedge\cdots\wedge v_{j_{n-k}} = \pm a_{i_1\cdots i_k}\, v_1\wedge\cdots\wedge v_n \ne 0.$$
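The first-column cofactor expansion derived in Exercise 7.46 above is easy to verify on a random matrix. A sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
M = rng.standard_normal((n, n))          # columns play the role of x_1, ..., x_n

# det M = sum_i (-1)^{i-1} x_{1i} C_{i1}: expand along the first column,
# deleting row i and column 1 to form each cofactor (0-based indices below).
expansion = sum(
    (-1) ** i * M[i, 0] * np.linalg.det(np.delete(np.delete(M, i, axis=0), 0, axis=1))
    for i in range(n)
)
assert np.isclose(expansion, np.linalg.det(M))
```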

Exercise 7.50. We may directly compute as in (7.3.2). Alternatively, consider the special case where $x_i = e_i$ is the standard basis vector in $\mathbb{R}^k$, and consider
$$a_i = a_{i1} e_1 + a_{i2} e_2 + \cdots + a_{ik} e_k = (a_{i1}, a_{i2}, \dots, a_{ik}).$$
By the first part of Exercise 7.46, we get
$$a_1\wedge a_2\wedge\cdots\wedge a_k = (\det A)\, e_1\wedge e_2\wedge\cdots\wedge e_k.$$
Then consider the linear transform $L\colon\mathbb{R}^k\to V$ determined by $L(e_i) = x_i$. We have $L(a_i) = y_i$ by the linearity of $L$, and
$$y_1\wedge y_2\wedge\cdots\wedge y_k = L(a_1)\wedge L(a_2)\wedge\cdots\wedge L(a_k) = \Lambda L(a_1\wedge a_2\wedge\cdots\wedge a_k) = \Lambda L\big((\det A)\, e_1\wedge e_2\wedge\cdots\wedge e_k\big) = (\det A)\,\Lambda L(e_1\wedge e_2\wedge\cdots\wedge e_k) = (\det A)\, L(e_1)\wedge L(e_2)\wedge\cdots\wedge L(e_k) = (\det A)\, x_1\wedge x_2\wedge\cdots\wedge x_k.$$

Exercise 7.51. It follows from the definition of $\det L$ that
$$L(v_1)\wedge L(v_2)\wedge\cdots\wedge L(v_n) = (\det L)\, v_1\wedge v_2\wedge\cdots\wedge v_n.$$
It also follows from Exercise 7.50 that
$$L(v_1)\wedge L(v_2)\wedge\cdots\wedge L(v_n) = (\det L_{\alpha\alpha})\, v_1\wedge v_2\wedge\cdots\wedge v_n.$$
Since $v_1\wedge v_2\wedge\cdots\wedge v_n\ne 0$, we get $\det L = \det L_{\alpha\alpha}$.

Exercise 7.52. Applying Exercise 7.50 to the case where the $x_i$ are $\alpha$, the $y_i$ are $\beta$, and $A$ is $I_{\beta\alpha}$, we get $\wedge\beta = (\det I_{\beta\alpha})(\wedge\alpha)$, where $\wedge\alpha$ denotes the wedge product of all the vectors in $\alpha$. By Exercise 7.13 and $I_{\beta^*\alpha^*} = (I_{\alpha^*\beta^*})^{-1} = (I_{\beta\alpha}^T)^{-1}$, we further get
$$\wedge\beta^* = (\det I_{\beta^*\alpha^*})(\wedge\alpha^*) = \big(\det(I_{\beta\alpha}^T)^{-1}\big)(\wedge\alpha^*) = (\det I_{\beta\alpha})^{-1}(\wedge\alpha^*).$$
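One checkable consequence of Exercise 7.50: taking norms in $y_1\wedge\cdots\wedge y_k = (\det A)\,x_1\wedge\cdots\wedge x_k$ gives an identity of Gram determinants, and Exercise 7.51 says $\det L$ does not depend on the basis. A sketch assuming numpy, with random data:

```python
import numpy as np

rng = np.random.default_rng(8)
n, k = 6, 3
X = rng.standard_normal((n, k))          # columns x_1, ..., x_k in R^n
A = rng.standard_normal((k, k))
Y = X @ A.T                              # y_i = sum_j a_{ij} x_j, as in Exercise 7.50

# Squared norms of the wedges are Gram determinants, so
# det(Y^T Y) = (det A)^2 det(X^T X).
assert np.isclose(np.linalg.det(Y.T @ Y), np.linalg.det(A) ** 2 * np.linalg.det(X.T @ X))

# Exercise 7.51: det L is independent of the basis, i.e. det(P^{-1} M P) = det M.
M, P = rng.standard_normal((n, n)), rng.standard_normal((n, n))
assert np.isclose(np.linalg.det(np.linalg.inv(P) @ M @ P), np.linalg.det(M))
```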

Exercise 7.53. For
$$L(x_1, \dots, x_i, \dots, x_j, \dots, x_n) = (x_1, \dots, x_i + cx_j, \dots, x_j, \dots, x_n),$$
we have $L(e_j) = e_j + ce_i$ and $L(e_k) = e_k$ for $k\ne j$, so
$$L(e_1)\wedge\cdots\wedge L(e_n) = e_1\wedge\cdots\wedge e_{j-1}\wedge(e_j + ce_i)\wedge e_{j+1}\wedge\cdots\wedge e_n = e_1\wedge\cdots\wedge e_n.$$
Therefore $\det L = 1$. For
$$L(x_1, \dots, x_i, \dots, x_j, \dots, x_n) = (x_1, \dots, x_j, \dots, x_i, \dots, x_n),$$
we have
$$L(e_1)\wedge\cdots\wedge L(e_n) = e_1\wedge\cdots\wedge e_{i-1}\wedge e_j\wedge e_{i+1}\wedge\cdots\wedge e_{j-1}\wedge e_i\wedge e_{j+1}\wedge\cdots\wedge e_n = -e_1\wedge\cdots\wedge e_n.$$
Therefore $\det L = -1$. For
$$L(x_1, \dots, x_i, \dots, x_n) = (x_1, \dots, cx_i, \dots, x_n),$$
we have
$$L(e_1)\wedge\cdots\wedge L(e_n) = e_1\wedge\cdots\wedge e_{i-1}\wedge(ce_i)\wedge e_{i+1}\wedge\cdots\wedge e_n = c\, e_1\wedge\cdots\wedge e_n.$$
Therefore $\det L = c$.

Exercise 7.54. Let $\alpha$, $\beta$ be bases of $V$ and $W$. If $\dim V = \dim W = n$, then $\Lambda^n K(\wedge\alpha) = a(\wedge\beta)$ and $\Lambda^n L(\wedge\beta) = b(\wedge\alpha)$ for some numbers $a$, $b$. This implies
$$\Lambda^n(L\circ K)(\wedge\alpha) = \Lambda^n L(\Lambda^n K(\wedge\alpha)) = \Lambda^n L(a(\wedge\beta)) = a\,\Lambda^n L(\wedge\beta) = ab(\wedge\alpha).$$
Therefore $\det(L\circ K) = ab$. By the same reason, we have $\det(K\circ L) = ba$. Therefore $\det(L\circ K) = \det(K\circ L)$.

If $\dim V < \dim W = n$, then $\Lambda^n(K\circ L) = \Lambda^n K\circ\Lambda^n L\colon\Lambda^n W\to\Lambda^n V = 0\to\Lambda^n W$ must be the zero map. Therefore $\det(K\circ L) = 0$. However, it may happen that $L\circ K$ is invertible, so that $\det(L\circ K)\ne 0$.

Exercise 7.55. Let $l_i\in W^*$ and $x_i\in V$. Then
$$[(\Lambda L)^*(l_1\wedge\cdots\wedge l_k)](x_1\wedge\cdots\wedge x_k) = (l_1\wedge\cdots\wedge l_k)[\Lambda L(x_1\wedge\cdots\wedge x_k)] = (l_1\wedge\cdots\wedge l_k)(L(x_1)\wedge\cdots\wedge L(x_k)) = \det\big(l_i(L(x_j))\big) = \det\big(L^*(l_i)(x_j)\big) = [L^*(l_1)\wedge\cdots\wedge L^*(l_k)](x_1\wedge\cdots\wedge x_k).$$
Therefore
$$(\Lambda L)^*(l_1\wedge\cdots\wedge l_k) = L^*(l_1)\wedge\cdots\wedge L^*(l_k) = (\Lambda L^*)(l_1\wedge\cdots\wedge l_k).$$
This proves $(\Lambda L)^* = \Lambda L^*$.

Exercise 7.56. By Exercise 7.52, we have
$$\det\nolimits_\beta = \wedge\beta^* = (\det I_{\beta\alpha})^{-1}(\wedge\alpha^*) = (\det I_{\beta\alpha})^{-1}\det\nolimits_\alpha.$$

Exercise 7.57. For fixed $x_i$, both sides are bilinear in $\varphi$ and $\psi$. So it is sufficient to verify the equality for the case where
$$\varphi = v_I^* = v_{i_1}^*\wedge\cdots\wedge v_{i_k}^*,\quad i_1 < \cdots < i_k, \qquad \psi = v_J^* = v_{j_1}^*\wedge\cdots\wedge v_{j_l}^*,\quad j_1 < \cdots < j_l,$$
are basis vectors in $\Lambda^k V^*$ and $\Lambda^l V^*$. Once $\varphi$ and $\psi$ are fixed basis vectors (i.e., $I$ and $J$ are fixed), we view both sides as multilinear functions of the $x_i$. So it is sufficient to verify the equality for the case where $x_1 = v_{p_1}, \dots, x_{k+l} = v_{p_{k+l}}$ are also basis vectors:
$$(v_I^*\wedge v_J^*)(v_P) = \sum_{P = R\sqcup S} \pm\, v_I^*(v_R)\, v_J^*(v_S).$$

For the left side to be nonzero, we must have $I\cap J = \emptyset$, and $P$ is a rearrangement of $I\cup J$. For the right side to be nonzero, there must be a disjoint union $P = R\sqcup S$ such that $I = R$ and $J = S$ up to rearrangement. Since $R$ and $S$ are disjoint, we must have $I\cap J = \emptyset$. Therefore both sides are zero when $I\cap J\ne\emptyset$. It remains to consider the case where $I\cap J = \emptyset$, $R$ is a rearrangement of $I$, and $S$ is a rearrangement of $J$. It is then easy to see that both sides are $\pm 1$ with the same sign.

Exercise 7.58. Compared with the non-singular case, the problem is that we cannot use a dual basis to define $\Lambda^k b$. On the other hand, we do expect the induced bilinear map to satisfy
$$\Lambda^k b(x_1\wedge\cdots\wedge x_k,\ y_1\wedge\cdots\wedge y_k) = \det\big(b(x_i, y_j)\big)_{1\le i, j\le k}.$$
We could certainly use the equality to define $\Lambda^k b$. But two obstacles need to be overcome. The first is whether $\Lambda^k b$ is well defined, because we could have $x_1\wedge\cdots\wedge x_k = x_1'\wedge\cdots\wedge x_k'$ with $x_i\ne x_i'$. The second is that the $x_1\wedge\cdots\wedge x_k$ are not all the elements of $\Lambda^k V$; the vector space $\Lambda^k V$ consists of linear combinations of vectors of the form $x_1\wedge\cdots\wedge x_k$.

So we introduce a map
$$p(x_1, \dots, x_k, y_1, \dots, y_k) = \det\big(b(x_i, y_j)\big)_{1\le i, j\le k}\colon V^k\times W^k\to\mathbb{R}.$$
For fixed $y_1, \dots, y_k$, the function is multilinear and alternating in $x_1, \dots, x_k$. Therefore the function is the value of a linear functional of $\Lambda^k V$ at $x_1\wedge\cdots\wedge x_k$. The linear functional depends on $y_1, \dots, y_k$, and we may denote it by $q(\xi, (y_1, \dots, y_k))$, $\xi\in\Lambda^k V$. Moreover, for the special case $\xi = x_1\wedge\cdots\wedge x_k$, we have
$$q(x_1\wedge\cdots\wedge x_k, (y_1, \dots, y_k)) = \det\big(b(x_i, y_j)\big)_{1\le i, j\le k}.$$
For fixed $x_1, \dots, x_k$, the formula above is multilinear and alternating in $y_1, \dots, y_k$. Since $\xi\in\Lambda^k V$ is a linear combination of vectors of the form $x_1\wedge\cdots\wedge x_k$, and $q(\xi, (y_1, \dots, y_k))$ is linear in $\xi$, we find that for fixed $\xi$, $q(\xi, (y_1, \dots, y_k))$ is also multilinear and alternating in $y_1, \dots, y_k$. Therefore it is the value of a linear functional of $\Lambda^k W$ at $y_1\wedge\cdots\wedge y_k$. The linear functional depends on $\xi$ linearly, and we may denote it by $r(\xi, \eta)$, $\eta\in\Lambda^k W$. Then $r(\xi, \eta)$ is bilinear on $\Lambda^k V\times\Lambda^k W$, and for the special case $\xi = x_1\wedge\cdots\wedge x_k$ and $\eta = y_1\wedge\cdots\wedge y_k$, we have
$$r(x_1\wedge\cdots\wedge x_k,\ y_1\wedge\cdots\wedge y_k) = \det\big(b(x_i, y_j)\big)_{1\le i, j\le k}.$$
This $r$ is our induced bilinear function $\Lambda^k b\colon\Lambda^k V\times\Lambda^k W\to\mathbb{R}$.

Exercise 7.59. Both sides of
$$\Lambda^{k_1+k_2}b(\lambda_1\wedge\lambda_2,\ \mu_1\wedge\mu_2) = \Lambda^{k_1}b_1(\lambda_1, \mu_1)\,\Lambda^{k_2}b_2(\lambda_2, \mu_2)$$
are quadrilinear functions of $(\lambda_1, \lambda_2, \mu_1, \mu_2)\in\Lambda^{k_1}V_1\times\Lambda^{k_2}V_2\times\Lambda^{k_1}W_1\times\Lambda^{k_2}W_2$. So we only need to verify the equality for the case where $\lambda_1, \lambda_2, \mu_1, \mu_2$ are basis vectors in the respective spaces. The basis vectors are obtained as follows. Let $\alpha_i$, $\beta_i$ be a pair of dual bases of $V_i$ and $W_i$ for $b_i$. Then $\alpha_1\cup\alpha_2$, $\beta_1\cup\beta_2$ is a pair of dual bases of $V_1\oplus V_2$ and $W_1\oplus W_2$. We only need to consider the case where $\lambda_1, \lambda_2, \mu_1, \mu_2$ are wedge products of vectors in $\alpha_1, \alpha_2, \beta_1, \beta_2$.

Since $b(v, w) = 0$ for $v\in V_1$, $w\in W_2$, the equality (7.3.6) implies that $b(v, w) = 0$ if $v$ is a factor of $\lambda_1$ and $w$ is a factor of $\mu_2$. Since $b(v, w) = 0$ for $v\in V_2$, $w\in W_1$, the equality (7.3.6) implies that $b(v, w) = 0$ if $v$ is a factor of $\lambda_2$ and $w$ is a factor of $\mu_1$. Then the equality (7.3.6) for $\Lambda^{k_1+k_2}b(\lambda_1\wedge\lambda_2, \mu_1\wedge\mu_2)$ becomes
$$\Lambda^{k_1+k_2}b(\lambda_1\wedge\lambda_2,\ \mu_1\wedge\mu_2) = \det\begin{pmatrix} A_1 & O \\ O & A_2\end{pmatrix} = \det A_1\det A_2,$$
where $\det A_i$ is the equality (7.3.6) for $\Lambda^{k_i}b_i(\lambda_i, \mu_i)$. Therefore we conclude that
$$\Lambda^{k_1+k_2}b(\lambda_1\wedge\lambda_2,\ \mu_1\wedge\mu_2) = \det A_1\det A_2 = \Lambda^{k_1}b_1(\lambda_1, \mu_1)\,\Lambda^{k_2}b_2(\lambda_2, \mu_2).$$
Remark: Exercise 7.60 is a special case of the current exercise, with $W_i = V_i^*$ and $b_i$ being the canonical dual pairing. Exercise 7.62 deals with the special case of the inner product dual pairing, but gives a more general formula. It is possible to generalise the current exercise in a similar way.

Exercise 7.60. This is a special case of Exercise 7.59. Take $b_i$ to be the canonical dual pairing between $V$ and $V^*$. Take $\lambda_1, \lambda_2, \mu_1, \mu_2$ to be $f$, $g$, $x_1\wedge\cdots\wedge x_k$, $x_{k+1}\wedge\cdots\wedge x_{k+l}$.

Exercise 7.61. The inner product on $\Lambda V$ is defined by extending any orthonormal basis of $V$ to an orthonormal basis of $\Lambda V$. Since an orthogonal transform preserves orthonormal bases of $V$, its induced homomorphism also preserves the orthonormal basis of $\Lambda V$. In other words, the induced map on $\Lambda V$ is also orthogonal.

Exercise 7.62. Suppose $W$ is a subspace of an inner product space $V$, and $W^\perp$ is the orthogonal complement of $W$ in $V$. Prove that $\Lambda W$ and $\Lambda W^\perp$ are orthogonal in $\Lambda V$. Moreover, for $\lambda\in\Lambda W$ and $\eta\in\Lambda W^\perp$, prove that $\langle\lambda\wedge\mu,\ \xi\wedge\eta\rangle = \langle\lambda, \xi\rangle\langle\mu, \eta\rangle$.

Both sides of $\langle\lambda\wedge\mu, \xi\wedge\eta\rangle = \langle\lambda, \xi\rangle\langle\mu, \eta\rangle$ are quadrilinear functions of $(\lambda, \mu, \xi, \eta)\in\Lambda W\times\Lambda V\times\Lambda V\times\Lambda W^\perp$. So we only need to verify the equality for the case
$$\lambda = u_1\wedge\cdots\wedge u_k,\ u_i\in W, \qquad \mu = x_1\wedge\cdots\wedge x_l,\ x_i\in V, \qquad \xi = y_1\wedge\cdots\wedge y_k,\ y_i\in V, \qquad \eta = v_1\wedge\cdots\wedge v_l,\ v_i\in W^\perp.$$

By (7.3.7) and $\langle u_i, v_j\rangle = 0$, we have
$$\langle\lambda\wedge\mu,\ \xi\wedge\eta\rangle = \det\begin{pmatrix}
\langle u_1, y_1\rangle & \cdots & \langle u_1, y_k\rangle & \langle u_1, v_1\rangle & \cdots & \langle u_1, v_l\rangle \\
\vdots & & \vdots & \vdots & & \vdots \\
\langle u_k, y_1\rangle & \cdots & \langle u_k, y_k\rangle & \langle u_k, v_1\rangle & \cdots & \langle u_k, v_l\rangle \\
\langle x_1, y_1\rangle & \cdots & \langle x_1, y_k\rangle & \langle x_1, v_1\rangle & \cdots & \langle x_1, v_l\rangle \\
\vdots & & \vdots & \vdots & & \vdots \\
\langle x_l, y_1\rangle & \cdots & \langle x_l, y_k\rangle & \langle x_l, v_1\rangle & \cdots & \langle x_l, v_l\rangle
\end{pmatrix}
= \det\begin{pmatrix}
\langle u_1, y_1\rangle & \cdots & \langle u_1, y_k\rangle & 0 & \cdots & 0 \\
\vdots & & \vdots & \vdots & & \vdots \\
\langle u_k, y_1\rangle & \cdots & \langle u_k, y_k\rangle & 0 & \cdots & 0 \\
\langle x_1, y_1\rangle & \cdots & \langle x_1, y_k\rangle & \langle x_1, v_1\rangle & \cdots & \langle x_1, v_l\rangle \\
\vdots & & \vdots & \vdots & & \vdots \\
\langle x_l, y_1\rangle & \cdots & \langle x_l, y_k\rangle & \langle x_l, v_1\rangle & \cdots & \langle x_l, v_l\rangle
\end{pmatrix}$$
$$= \det\begin{pmatrix}\langle u_1, y_1\rangle & \cdots & \langle u_1, y_k\rangle \\ \vdots & & \vdots \\ \langle u_k, y_1\rangle & \cdots & \langle u_k, y_k\rangle\end{pmatrix}\det\begin{pmatrix}\langle x_1, v_1\rangle & \cdots & \langle x_1, v_l\rangle \\ \vdots & & \vdots \\ \langle x_l, v_1\rangle & \cdots & \langle x_l, v_l\rangle\end{pmatrix} = \langle\lambda, \xi\rangle\langle\mu, \eta\rangle.$$

Exercise 7.63. By Exercise 7.3, we have $I_{\gamma\alpha} = I_{\gamma\beta}I_{\beta\alpha}$. Therefore $\det I_{\gamma\alpha} = \det I_{\gamma\beta}\det I_{\beta\alpha}$ (see Exercise 7.42). Suppose $\det I_{\beta\alpha} > 0$. Then the equality implies that $\det I_{\gamma\alpha}$ and $\det I_{\gamma\beta}$ have the same sign. Therefore $\gamma\in o_\alpha \iff \gamma\in o_\beta$ and $\gamma\in -o_\alpha \iff \gamma\in -o_\beta$. This proves $o_\alpha = o_\beta$ and $-o_\alpha = -o_\beta$. Suppose $\det I_{\beta\alpha} < 0$. Then the equality implies that $\det I_{\gamma\alpha}$ and $\det I_{\gamma\beta}$ have the opposite sign. Therefore $\gamma\in o_\alpha \iff \gamma\in -o_\beta$ and $\gamma\in -o_\alpha \iff \gamma\in o_\beta$. This proves $o_\alpha = -o_\beta$ and $-o_\alpha = o_\beta$.

Exercise 7.64. Since $I_{\alpha\alpha}$ is the identity matrix, we have $\det I_{\alpha\alpha} = 1 > 0$. This shows that $\alpha$ and $\alpha$ are compatibly oriented. Since $I_{\alpha\beta} = I_{\beta\alpha}^{-1}$, we have $\det I_{\alpha\beta} = (\det I_{\beta\alpha})^{-1}$. In particular, $\det I_{\alpha\beta} > 0 \iff \det I_{\beta\alpha} > 0$. This shows that if $\alpha$ and $\beta$ are compatibly oriented, then $\beta$ and $\alpha$ are compatibly oriented. By $\det I_{\gamma\alpha} = \det I_{\gamma\beta}\det I_{\beta\alpha}$, we get
$$\det I_{\gamma\beta} > 0 \text{ and } \det I_{\beta\alpha} > 0 \implies \det I_{\gamma\alpha} > 0.$$
This shows that if $\alpha$ and $\beta$ are compatibly oriented, and $\beta$ and $\gamma$ are compatibly oriented, then $\alpha$ and $\gamma$ are compatibly oriented.

Exercise 7.65. By
$$v_1\wedge\cdots\wedge v_j\wedge\cdots\wedge v_i\wedge\cdots\wedge v_n = -v_1\wedge\cdots\wedge v_i\wedge\cdots\wedge v_j\wedge\cdots\wedge v_n,$$
exchanging two basis vectors reverses the orientation. By
$$v_1\wedge\cdots\wedge cv_i\wedge\cdots\wedge v_n = c\, v_1\wedge\cdots\wedge v_i\wedge\cdots\wedge v_n,$$
multiplying $c\ne 0$ to a basis vector preserves the orientation if $c > 0$ and reverses the orientation if $c < 0$. By
$$v_1\wedge\cdots\wedge(v_i + cv_j)\wedge\cdots\wedge v_j\wedge\cdots\wedge v_n = v_1\wedge\cdots\wedge v_i\wedge\cdots\wedge v_j\wedge\cdots\wedge v_n,$$
adding a scalar multiple of one basis vector to another basis vector preserves the orientation.

Exercise 7.66.
1. $e_2\wedge e_3\wedge\cdots\wedge e_n\wedge e_1 = (-1)^{n-1}\, e_1\wedge e_2\wedge e_3\wedge\cdots\wedge e_n$. Positively oriented for odd $n$, negatively oriented for even $n$.
2. $e_n\wedge e_{n-1}\wedge\cdots\wedge e_1 = (-1)^{\frac{1}{2}n(n-1)}\, e_1\wedge\cdots\wedge e_n$. Positively oriented for $n = 0, 1 \bmod 4$, negatively oriented for $n = 2, 3 \bmod 4$.
3. $(-e_1)\wedge(-e_2)\wedge\cdots\wedge(-e_n) = (-1)^n\, e_1\wedge\cdots\wedge e_n$. Positively oriented for even $n$, negatively oriented for odd $n$.
4. $\det = -2$, negatively oriented.
5. $\det = -3$, negatively oriented.
6. $\det = -2$, negatively oriented.
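The sign formulas in parts 1 to 3 of Exercise 7.66 are determinants of permutation (or sign-flip) matrices, so they can be checked for small $n$. A sketch assuming numpy:

```python
import numpy as np

for n in range(2, 9):
    I = np.eye(n)
    cyclic = np.column_stack([I[:, 1:], I[:, :1]])       # basis e_2, ..., e_n, e_1
    reverse = I[:, ::-1]                                  # basis e_n, ..., e_1
    assert np.isclose(np.linalg.det(cyclic), (-1) ** (n - 1))
    assert np.isclose(np.linalg.det(reverse), (-1) ** (n * (n - 1) // 2))
    assert np.isclose(np.linalg.det(-I), (-1) ** n)       # basis -e_1, ..., -e_n
```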

Exercise 7.67. The subset $o_V\subset\Lambda^n V - \{0\}$ is exactly the set of wedge products of all the bases in $o_V\subset\{\text{ordered bases}\}$. Since $\Lambda L$ preserves the wedge product, the exercise follows.

Exercise 7.68. If $L(o_U) = o_V$ and $K(o_V) = o_W$, then $(K\circ L)(o_U) = K(L(o_U)) = K(o_V) = o_W$. So $K\circ L$ also preserves the orientation. If $L(o_V) = o_W$, then $o_V = L^{-1}(L(o_V)) = L^{-1}(o_W)$. So $L^{-1}$ also preserves the orientation. In general, $K\circ L$ preserves the orientation if both $K$ and $L$ preserve, or both $K$ and $L$ reverse; $K\circ L$ reverses the orientation if one of $K$ and $L$ preserves and the other reverses. If $L$ reverses the orientation, then $L^{-1}$ also reverses the orientation.

Exercise 7.69. For the special case that $\xi$ is the wedge product of a basis of $V$ and $l$ is the wedge product of the dual basis, we have $l(\xi) = 1 > 0$. If the basis of $V$ is positively oriented, then $\xi\in o_V$ and $l\in o_{V^*}$. So we get the characterisation in the exercise.

Exercise 7.70. If $\alpha = \{v_1, \dots, v_n\}\in o_V$, then $\alpha^* = \{v_1^*, \dots, v_n^*\}\in o_{V^*}$. Therefore $L(\alpha) = \{L(v_1), \dots, L(v_n)\}\in L(o_V) = o_W$, and $L(\alpha)^* = \{(L(v_1))^*, \dots, (L(v_n))^*\}\in o_{W^*}$. We need to show that $L^*(L(\alpha)^*)$ and $\alpha^*$ are compatibly oriented. In fact we claim that the two bases are the same. The dual basis $L(\alpha)^*$ is defined by
$$(L(v_i))^*(L(v_j)) = \delta_{ij}.$$
This is the same as
$$L^*\big((L(v_i))^*\big)(v_j) = \delta_{ij}.$$
This shows that $L^*(L(\alpha)^*)$ is the dual basis of $\alpha$. Therefore $L^*(L(\alpha)^*) = \alpha^*$.

Exercise 7.74. Suppose $\alpha_i$, $\beta_i$ are two bases of $V_i$. Then
$$I_{(\beta_1\cup\beta_2)(\alpha_1\cup\alpha_2)} = \begin{pmatrix} I_{\beta_1\alpha_1} & O \\ O & I_{\beta_2\alpha_2}\end{pmatrix}.$$
This implies $\det I_{(\beta_1\cup\beta_2)(\alpha_1\cup\alpha_2)} = \det I_{\beta_1\alpha_1}\det I_{\beta_2\alpha_2}$. If $\det I_{\beta_1\alpha_1} > 0$ and $\det I_{\beta_2\alpha_2} > 0$, then $\det I_{(\beta_1\cup\beta_2)(\alpha_1\cup\alpha_2)} > 0$. The other way is to notice that, if $\wedge\beta_i = a_i(\wedge\alpha_i)$, then
$$\wedge(\beta_1\cup\beta_2) = (\wedge\beta_1)\wedge(\wedge\beta_2) = a_1a_2\,(\wedge\alpha_1)\wedge(\wedge\alpha_2).$$
Then $a_1, a_2 > 0 \implies a_1a_2 > 0$.

If $V_1$ and $V_2$ are exchanged, the orientation is changed by $(-1)^{(\dim V_1)(\dim V_2)}$. Since preserving the orientation of $V_1$ and reversing the orientation of $V_2$ reverses the orientation of $V$, for the given orientations of $V_1$ and $V$, there is a unique orientation of $V_2$ compatible with the orientations of $V_1$ and $V$. So the orientation of $V_2$ is determined.

Exercise 7.75. The matrix $I_{\beta\alpha}$ between two orthonormal bases is orthogonal and has determinant $\pm 1$. Therefore $\wedge\alpha = \pm(\wedge\beta)$. If the two orthonormal bases are compatibly oriented, then the sign must be positive.

Exercise 7.77. The equalities in the exercise will be derived from
$$\lambda\wedge\mu = \langle\star\lambda, \mu\rangle e, \qquad \langle\star\lambda, \star\mu\rangle = \langle\lambda, \mu\rangle.$$
For $\lambda\in\Lambda^k V$ and $\mu\in\Lambda^{n-k}V$, we have
$$\lambda\wedge\mu = (-1)^{k(n-k)}\mu\wedge\lambda = (-1)^{k(n-k)}\langle\star\mu, \lambda\rangle e = (-1)^{k(n-k)}\langle\lambda, \star\mu\rangle e.$$
Comparing with $\lambda\wedge\mu = \langle\star\lambda, \mu\rangle e$, we get $\langle\star\lambda, \mu\rangle = (-1)^{k(n-k)}\langle\lambda, \star\mu\rangle$. Then by $\langle\star\lambda, \star\mu\rangle = \langle\lambda, \mu\rangle$, we get
$$\langle\star\star\lambda, \star\mu\rangle = \langle\star\lambda, \mu\rangle = (-1)^{k(n-k)}\langle\lambda, \star\mu\rangle.$$
Since any vector in $\Lambda^k V$ is of the form $\star\mu$, we conclude that $\star\star\lambda = (-1)^{k(n-k)}\lambda$.

Exercise 7.78. Let $\dim V = \dim W = n$. Let $e_V$ and $e_W$ be the canonical bases of $\Lambda^n V$ and $\Lambda^n W$. If $L$ preserves the orientation, then $\Lambda L(e_V) = e_W$. Moreover, $\Lambda L$ preserves the induced inner product. Therefore we have
$$\langle\star\Lambda L(\lambda), \Lambda L(\mu)\rangle e_W = \Lambda L(\lambda)\wedge\Lambda L(\mu) = \Lambda L(\lambda\wedge\mu) = \Lambda L\big(\langle\star\lambda, \mu\rangle e_V\big) = \langle\star\lambda, \mu\rangle e_W = \langle\Lambda L(\star\lambda), \Lambda L(\mu)\rangle e_W,$$
so that
$$\langle\star\Lambda L(\lambda), \Lambda L(\mu)\rangle = \langle\Lambda L(\star\lambda), \Lambda L(\mu)\rangle.$$
Since this is true for all $\mu\in\Lambda^{n-k}V$, and the isomorphism $\Lambda L$ takes $\mu$ to all vectors in $\Lambda^{n-k}W$, we conclude that $\star\Lambda L(\lambda) = \Lambda L(\star\lambda)$. If $L$ reverses the orientation, then $\star\Lambda L(\lambda) = -\Lambda L(\star\lambda)$.

Exercise 7.79. Let $\{w_1, \dots, w_k\}$ be a positively oriented orthonormal basis of $W$. Let $\{u_1, \dots, u_l\}$ be a positively oriented orthonormal basis of $W^\perp$. Then $\{w_1, \dots, w_k, u_1, \dots, u_l\}$ is a positively oriented orthonormal basis of $V = W\oplus W^\perp$. We have
$$e_W = w_1\wedge\cdots\wedge w_k, \qquad e_{W^\perp} = u_1\wedge\cdots\wedge u_l,$$
so that
$$e_V = w_1\wedge\cdots\wedge w_k\wedge u_1\wedge\cdots\wedge u_l = e_W\wedge e_{W^\perp}.$$
Then for any $\mu\in\Lambda^{n-k}V$, we use Exercise 7.62:
$$\langle\mu, e_{W^\perp}\rangle = \langle e_W, e_W\rangle\langle\mu, e_{W^\perp}\rangle = \langle e_W\wedge\mu,\ e_W\wedge e_{W^\perp}\rangle = \langle e_W\wedge\mu,\ e_V\rangle = \langle\star e_W, \mu\rangle\langle e_V, e_V\rangle = \langle\star e_W, \mu\rangle.$$
Since this is true for all $\mu$, we get $\star e_W = e_{W^\perp}$.

Exercise 7.80. We apply Exercise 7.79 to $W = \mathrm{span}\{x_1, \dots, x_k\}$ and $W^\perp = \mathrm{span}\{x_{k+1}, \dots, x_n\}$. We have
$$e_W = \frac{x_1\wedge\cdots\wedge x_k}{\|x_1\wedge\cdots\wedge x_k\|}, \qquad e_{W^\perp} = \frac{x_{k+1}\wedge\cdots\wedge x_n}{\|x_{k+1}\wedge\cdots\wedge x_n\|}.$$
The orthogonality between the first $k$ vectors and the last $n-k$ vectors (together with the positively oriented $\alpha$) also implies (either by Exercise 7.62 and $\det(X^TX) = (\det X)^2$, or by the geometric meaning of volume as the square root of the determinant, or as the norm of the exterior product vector)
$$\det(x_1\ \cdots\ x_n) = \|x_1\wedge\cdots\wedge x_k\|\,\|x_{k+1}\wedge\cdots\wedge x_n\|.$$
Then by Exercise 7.79, we have $\star e_W = e_{W^\perp}$. This gives
$$\star(x_1\wedge\cdots\wedge x_k) = \|x_1\wedge\cdots\wedge x_k\|\,\star e_W = \|x_1\wedge\cdots\wedge x_k\|\,\frac{x_{k+1}\wedge\cdots\wedge x_n}{\|x_{k+1}\wedge\cdots\wedge x_n\|} = \frac{\|x_1\wedge\cdots\wedge x_k\|\,\|x_{k+1}\wedge\cdots\wedge x_n\|}{\|x_{k+1}\wedge\cdots\wedge x_n\|^2}\, x_{k+1}\wedge\cdots\wedge x_n = \frac{\det(x_1\ \cdots\ x_n)}{\|x_{k+1}\wedge\cdots\wedge x_n\|^2}\, x_{k+1}\wedge\cdots\wedge x_n.$$
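In $\mathbb{R}^3$ with $k = 2$, the formula of Exercise 7.80 specializes to the classical cross product, using the usual identification $\star(x_1\wedge x_2) = x_1\times x_2$. A numerical check assuming numpy; the random vectors and the scale factor are arbitrary:

```python
import numpy as np

# Exercise 7.80 in R^3, k = 2: if x3 is orthogonal to x1 and x2 and (x1, x2, x3) is
# positively oriented, then *(x1 ^ x2) = det(x1 x2 x3) / ||x3||^2 * x3.
rng = np.random.default_rng(9)
x1, x2 = rng.standard_normal(3), rng.standard_normal(3)
x3 = 2.7 * np.cross(x1, x2)                        # orthogonal to x1, x2; scaling is irrelevant
D = np.linalg.det(np.column_stack([x1, x2, x3]))
assert np.allclose(np.cross(x1, x2), D / (x3 @ x3) * x3)
```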

Exercise 7.81. Introduce the cofactor
$$C_i = \det\begin{pmatrix}
x_{11} & x_{21} & \cdots & x_{(n-1)1} \\
\vdots & \vdots & & \vdots \\
x_{1(i-1)} & x_{2(i-1)} & \cdots & x_{(n-1)(i-1)} \\
x_{1(i+1)} & x_{2(i+1)} & \cdots & x_{(n-1)(i+1)} \\
\vdots & \vdots & & \vdots \\
x_{1n} & x_{2n} & \cdots & x_{(n-1)n}
\end{pmatrix},$$
the determinant of the matrix $(x_1\ \cdots\ x_{n-1})$ with the $i$-th row deleted. Then (see the answer to Exercise 7.46)
$$\star(x_1\wedge\cdots\wedge x_{n-1}) = \sum_{1\le i\le n} C_i\,\star(e_1\wedge\cdots\wedge e_{i-1}\wedge e_{i+1}\wedge\cdots\wedge e_n) = \sum_{1\le i\le n} (-1)^{n-i} C_i\, e_i = (-1)^n(-C_1, C_2, -C_3, \dots).$$
By
$$\big(\star(x_1\wedge\cdots\wedge x_{n-1})\cdot x_i\big)\, e_{[n]} = x_1\wedge\cdots\wedge x_{n-1}\wedge x_i = 0 \qquad (1\le i\le n-1),$$
we have $\star(x_1\wedge\cdots\wedge x_{n-1})\cdot x_i = 0$. This is the claim about the orthogonality.

Exercise 7.82. We have
$$\big(\star(x_1\wedge\cdots\wedge x_{n-1})\cdot x_n\big)\, e_{[n]} = x_1\wedge\cdots\wedge x_{n-1}\wedge x_n = \det(x_1\ \cdots\ x_n)\, e_{[n]}.$$
Therefore $\star(x_1\wedge\cdots\wedge x_{n-1})\cdot x_n = \det(x_1\ \cdots\ x_n)$. The cofactor expansion is obtained by further using $\star(x_1\wedge\cdots\wedge x_{n-1}) = (-1)^n(-C_1, C_2, -C_3, \dots)$ from Exercise 7.81.
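The coordinate formula of Exercise 7.81 (the generalized cross product built from the cofactors $C_i$) and the determinant identity of Exercise 7.82 can be verified directly. A sketch assuming numpy; the helper name and the random test data are illustrative choices:

```python
import numpy as np

def generalized_cross(X):
    """*(x_1 ^ ... ^ x_{n-1}) for the columns x_i of the n x (n-1) matrix X,
    computed coordinate-wise from the cofactors C_i as in Exercise 7.81."""
    n = X.shape[0]
    return np.array([(-1) ** (n - i - 1) * np.linalg.det(np.delete(X, i, axis=0))
                     for i in range(n)])

rng = np.random.default_rng(10)
n = 5
X = rng.standard_normal((n, n - 1))
v = generalized_cross(X)

# Exercise 7.81: the result is orthogonal to every x_i.
assert np.allclose(X.T @ v, 0)

# Exercise 7.82: dotting with one more vector gives the full determinant.
x_n = rng.standard_normal(n)
assert np.isclose(v @ x_n, np.linalg.det(np.column_stack([X, x_n])))
```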


More information

Projection Theorem 1

Projection Theorem 1 Projection Theorem 1 Cauchy-Schwarz Inequality Lemma. (Cauchy-Schwarz Inequality) For all x, y in an inner product space, [ xy, ] x y. Equality holds if and only if x y or y θ. Proof. If y θ, the inequality

More information

Lecture 11: Clifford algebras

Lecture 11: Clifford algebras Lecture 11: Clifford algebras In this lecture we introduce Clifford algebras, which will play an important role in the rest of the class. The link with K-theory is the Atiyah-Bott-Shapiro construction

More information

Some notes on Coxeter groups

Some notes on Coxeter groups Some notes on Coxeter groups Brooks Roberts November 28, 2017 CONTENTS 1 Contents 1 Sources 2 2 Reflections 3 3 The orthogonal group 7 4 Finite subgroups in two dimensions 9 5 Finite subgroups in three

More information

and let s calculate the image of some vectors under the transformation T.

and let s calculate the image of some vectors under the transformation T. Chapter 5 Eigenvalues and Eigenvectors 5. Eigenvalues and Eigenvectors Let T : R n R n be a linear transformation. Then T can be represented by a matrix (the standard matrix), and we can write T ( v) =

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45 address 12 adjoint matrix 118 alternating 112 alternating 203 angle 159 angle 33 angle 60 area 120 associative 180 augmented matrix 11 axes 5 Axiom of Choice 153 basis 178 basis 210 basis 74 basis test

More information

Mathematics Department Stanford University Math 61CM/DM Inner products

Mathematics Department Stanford University Math 61CM/DM Inner products Mathematics Department Stanford University Math 61CM/DM Inner products Recall the definition of an inner product space; see Appendix A.8 of the textbook. Definition 1 An inner product space V is a vector

More information

Part 1a: Inner product, Orthogonality, Vector/Matrix norm

Part 1a: Inner product, Orthogonality, Vector/Matrix norm Part 1a: Inner product, Orthogonality, Vector/Matrix norm September 19, 2018 Numerical Linear Algebra Part 1a September 19, 2018 1 / 16 1. Inner product on a linear space V over the number field F A map,

More information

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti Mobile Robotics 1 A Compact Course on Linear Algebra Giorgio Grisetti SA-1 Vectors Arrays of numbers They represent a point in a n dimensional space 2 Vectors: Scalar Product Scalar-Vector Product Changes

More information

CHAPTER VIII HILBERT SPACES

CHAPTER VIII HILBERT SPACES CHAPTER VIII HILBERT SPACES DEFINITION Let X and Y be two complex vector spaces. A map T : X Y is called a conjugate-linear transformation if it is a reallinear transformation from X into Y, and if T (λx)

More information

Determinant lines and determinant line bundles

Determinant lines and determinant line bundles CHAPTER Determinant lines and determinant line bundles This appendix is an exposition of G. Segal s work sketched in [?] on determinant line bundles over the moduli spaces of Riemann surfaces with parametrized

More information

Topic 2 Quiz 2. choice C implies B and B implies C. correct-choice C implies B, but B does not imply C

Topic 2 Quiz 2. choice C implies B and B implies C. correct-choice C implies B, but B does not imply C Topic 1 Quiz 1 text A reduced row-echelon form of a 3 by 4 matrix can have how many leading one s? choice must have 3 choice may have 1, 2, or 3 correct-choice may have 0, 1, 2, or 3 choice may have 0,

More information

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Exam 2 will be held on Tuesday, April 8, 7-8pm in 117 MacMillan What will be covered The exam will cover material from the lectures

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

Homework 5 M 373K Mark Lindberg and Travis Schedler

Homework 5 M 373K Mark Lindberg and Travis Schedler Homework 5 M 373K Mark Lindberg and Travis Schedler 1. Artin, Chapter 3, Exercise.1. Prove that the numbers of the form a + b, where a and b are rational numbers, form a subfield of C. Let F be the numbers

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

j=1 [We will show that the triangle inequality holds for each p-norm in Chapter 3 Section 6.] The 1-norm is A F = tr(a H A).

j=1 [We will show that the triangle inequality holds for each p-norm in Chapter 3 Section 6.] The 1-norm is A F = tr(a H A). Math 344 Lecture #19 3.5 Normed Linear Spaces Definition 3.5.1. A seminorm on a vector space V over F is a map : V R that for all x, y V and for all α F satisfies (i) x 0 (positivity), (ii) αx = α x (scale

More information

Linear Algebra in Computer Vision. Lecture2: Basic Linear Algebra & Probability. Vector. Vector Operations

Linear Algebra in Computer Vision. Lecture2: Basic Linear Algebra & Probability. Vector. Vector Operations Linear Algebra in Computer Vision CSED441:Introduction to Computer Vision (2017F Lecture2: Basic Linear Algebra & Probability Bohyung Han CSE, POSTECH bhhan@postech.ac.kr Mathematics in vector space Linear

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information