
Exercise 7.1. We have
$$L(\vec x) = x_1L(\vec v_1) + x_2L(\vec v_2) + \cdots + x_nL(\vec v_n) = \sum_{i=1}^n x_i(a_{1i}\vec w_1 + a_{2i}\vec w_2 + \cdots + a_{mi}\vec w_m) = \Big(\sum_{i=1}^n x_ia_{1i}\Big)\vec w_1 + \Big(\sum_{i=1}^n x_ia_{2i}\Big)\vec w_2 + \cdots + \Big(\sum_{i=1}^n x_ia_{mi}\Big)\vec w_m.$$
Therefore $y_j = \sum_{i=1}^n a_{ji}x_i = a_{j1}x_1 + a_{j2}x_2 + \cdots + a_{jn}x_n$.

Exercise 7.2. We verify the two linearity properties of $K + L$:
$$(K + L)(\vec x + \vec y) = K(\vec x + \vec y) + L(\vec x + \vec y) \quad\text{(def of $K + L$)}$$
$$= K(\vec x) + K(\vec y) + L(\vec x) + L(\vec y) \quad\text{($K$, $L$ are linear)}$$
$$= K(\vec x) + L(\vec x) + K(\vec y) + L(\vec y) = (K + L)(\vec x) + (K + L)(\vec y) \quad\text{(def of $K + L$)},$$
$$(K + L)(c\vec x) = K(c\vec x) + L(c\vec x) = cK(\vec x) + cL(\vec x) = c(K(\vec x) + L(\vec x)) = c(K + L)(\vec x).$$
By $K\alpha = \beta K_{\beta\alpha}$ and $L\alpha = \beta L_{\beta\alpha}$, we get
$$(K + L)\alpha = K\alpha + L\alpha = \beta K_{\beta\alpha} + \beta L_{\beta\alpha} = \beta(K_{\beta\alpha} + L_{\beta\alpha}).$$
Therefore $(K + L)_{\beta\alpha} = K_{\beta\alpha} + L_{\beta\alpha}$. The discussion for $cL$ is similar.

Exercise 7.3.
$$(K\circ L)(\vec x + \vec y) = K(L(\vec x + \vec y)) = K(L(\vec x) + L(\vec y)) = K(L(\vec x)) + K(L(\vec y)) = (K\circ L)(\vec x) + (K\circ L)(\vec y).$$
By $K\beta = \gamma K_{\gamma\beta}$ and $L\alpha = \beta L_{\beta\alpha}$, we get
$$(K\circ L)\alpha = K(L\alpha) = K(\beta L_{\beta\alpha}) = (K\beta)L_{\beta\alpha} = (\gamma K_{\gamma\beta})L_{\beta\alpha} = \gamma(K_{\gamma\beta}L_{\beta\alpha}).$$
Therefore $(K\circ L)_{\gamma\alpha} = K_{\gamma\beta}L_{\beta\alpha}$.

Exercise 7.4.

Since $L$ is linear, we get
$$L(L^{-1}(\vec x + \vec y)) = \vec x + \vec y = L(L^{-1}(\vec x)) + L(L^{-1}(\vec y)) = L(L^{-1}(\vec x) + L^{-1}(\vec y)).$$
Since $L$ is invertible, this implies $L^{-1}(\vec x + \vec y) = L^{-1}(\vec x) + L^{-1}(\vec y)$. The equality $L^{-1}(c\vec x) = cL^{-1}(\vec x)$ can be similarly proved. We have $L\alpha = \beta L_{\beta\alpha}$. By left multiplying $L^{-1}$ and right multiplying $L_{\beta\alpha}^{-1}$ and using the associativity (which is really the linearity of $L^{-1}$), we get $\alpha L_{\beta\alpha}^{-1} = L^{-1}\beta$. Therefore $(L^{-1})_{\alpha\beta} = (L_{\beta\alpha})^{-1}$.

Exercise 7.6. Suppose $L$ maps $V$ to $W$. Suppose $K\circ L = I$. Then $L(\vec x_1) = L(\vec x_2)$ implies
$$\vec x_1 = I(\vec x_1) = K(L(\vec x_1)) = K(L(\vec x_2)) = I(\vec x_2) = \vec x_2.$$
Therefore $L$ is injective. Suppose $L\circ K = I$. Then any $\vec y = I(\vec y) = L(K(\vec y))$ is the image of $K(\vec y)$ under $L$. Therefore $L$ is surjective.

Conversely, suppose $L$ is injective. Let $\vec v_1,\dots,\vec v_n$ be a basis of $V$. The injectivity implies that $L(\vec v_1),\dots,L(\vec v_n)$ are still linearly independent. Therefore they can be extended to a basis $L(\vec v_1),\dots,L(\vec v_n),\vec w_1,\dots,\vec w_{m-n}$ of $W$. Define $K\colon W\to V$ to be the linear transform determined by $K(L(\vec v_1)) = \vec v_1,\dots,K(L(\vec v_n)) = \vec v_n$, $K(\vec w_1) = \vec 0,\dots,K(\vec w_{m-n}) = \vec 0$. Then we get $K\circ L = I$.

Now suppose $L$ is surjective. Then for a basis $\vec w_1,\dots,\vec w_m$ of $W$, we have $\vec w_1 = L(\vec v_1),\dots,\vec w_m = L(\vec v_m)$ for some vectors $\vec v_1,\dots,\vec v_m$ in $V$. Define $K\colon W\to V$ to be the linear transform determined by $K(\vec w_1) = \vec v_1,\dots,K(\vec w_m) = \vec v_m$. Then we get $L\circ K = I$.

Exercise 7.7. Let $A = L_{\alpha\alpha}$ and $B = L_{\beta\beta}$. Then $B = I_{\beta\alpha}AI_{\alpha\beta}$. On the other hand, $I_{\beta\alpha}I_{\alpha\beta} = I_{\alpha\alpha}$ is the identity matrix, so that $I_{\alpha\beta} = I_{\beta\alpha}^{-1}$. Therefore we get $B = PAP^{-1}$ for $P = I_{\beta\alpha}$.

Exercise 7.8. If $k\in V^*$ and $l\in W^*$, then $(k,l)(\vec x\oplus\vec y) = k(\vec x) + l(\vec y)$ is a linear functional on $V\oplus W$. Conversely, for a linear functional $\lambda$ on $V\oplus W$, the restrictions $k = \lambda|_V$ and $l = \lambda|_W$ are linear functionals on $V$ and $W$, and
$$\lambda(\vec x\oplus\vec y) = \lambda(\vec x) + \lambda(\vec y) = k(\vec x) + l(\vec y) = (k,l)(\vec x\oplus\vec y).$$

Exercise 7.9. By
$$L^*(k+l)(\vec x) = (k+l)(L(\vec x)) = k(L(\vec x)) + l(L(\vec x)) = L^*k(\vec x) + L^*l(\vec x) = (L^*k + L^*l)(\vec x),$$
we get $L^*(k+l) = L^*k + L^*l$. The equality $L^*(cl) = cL^*l$ can be similarly proved.

Exercise 7.10.

By (K + L) l( x) = l((k + L)( x)) = l(k( x) + L( x)) = l(k( x)) + l(l( x)) = K l( x) + L l( x) = (K l + L l)( x), (cl) l( x) = l((cl)( x)) = l(cl( x)) = cl(l( x)) = cl l( x) = (cl )l( x), (K L) l( x) = l((k L)( x)) = l(k(l( x))) = K l(l( x)) = L (K l)( x) = (L K )l( x), we get (K + L) = K + L, (cl) = cl, (K L) = L K Exercise 711 The problem is to show the equality (L( x)) = (L ) ( x ) V L W (L ) (V ) (W ) The equality is verified below (L( x)) (l) = l(l( x)), (L ) ( x )(l) = x (L (l)) = L (l)( x) = l(l( x)) Exercise 712 For L: V W, we have L : W V By Exercise 76, L is injective if and only if K L = I for some linear transform K By Exercises 710 and 711, K L = I is equivalent to L K = (K L) = I = I By Exercise 76 again, L K = I for some linear transform K if and only if L is surjective This proves that L is injective if and only if L is surjective By the similar argument, we may prove that L is surjective if and only if L is injective Remark: Using Exercise 711, the two statements are dual to each other, and therefore imply each other Exercise 713 Let L( v 1 ) = a 11 w 1 + a 21 w 2 + + a m1 w m, L( v 2 ) = a 12 w 1 + a 22 w 2 + + a m2 w m, L( v n ) = a 1n w 1 + a 2n w 2 + + a mn w m

and
$$L^*(\vec w_1^*) = b_{11}\vec v_1^* + b_{21}\vec v_2^* + \cdots + b_{n1}\vec v_n^*,$$
$$L^*(\vec w_2^*) = b_{12}\vec v_1^* + b_{22}\vec v_2^* + \cdots + b_{n2}\vec v_n^*,$$
$$\cdots$$
$$L^*(\vec w_m^*) = b_{1m}\vec v_1^* + b_{2m}\vec v_2^* + \cdots + b_{nm}\vec v_n^*.$$
Then by the discussion before Proposition 7.11,
$$a_{ij} = \vec w_i^*(L(\vec v_j)), \qquad b_{ji} = (L^*(\vec w_i^*))(\vec v_j) = \vec w_i^*(L(\vec v_j)).$$
Therefore $a_{ij} = b_{ji}$.

Exercise 7.14. By Schwarz's inequality, we have $|l(\vec x)| \le \|\vec a\|_2\|\vec x\|_2$. Moreover, the equality can happen, with $l(\vec a) = \|\vec a\|_2^2$. Therefore the norm $\|l\| = \|\vec a\|_2$.

In general, a linear transform $L = (l_1,l_2,\dots,l_m)\colon\mathbb R^n\to\mathbb R^m$ satisfies
$$\|L(\vec x)\| = \max\{|l_1(\vec x)|,\dots,|l_m(\vec x)|\} \le \max\{\|l_1\|\|\vec x\|,\dots,\|l_m\|\|\vec x\|\} = \max\{\|l_1\|,\dots,\|l_m\|\}\,\|\vec x\|.$$
On the other hand, the equality can happen, because if $\max\{\|l_1\|,\dots,\|l_m\|\} = \|l_k\|$, then there is $\vec x\ne\vec 0$ satisfying $|l_k(\vec x)| = \|l_k\|\|\vec x\| = \max\{\|l_1\|,\dots,\|l_m\|\}\|\vec x\|$. This implies that the equality holds. Thus $\|L\| = \max\{\|l_1\|,\dots,\|l_m\|\}$. In case $l_k(\vec x) = \vec a_k\cdot\vec x$, this means $\|L\| = \max\{\|\vec a_1\|_2,\dots,\|\vec a_m\|_2\}$.

Exercise 7.16. Let the columns of $A$ be $\vec a_1,\vec a_2,\dots,\vec a_n$. Then
$$\|A\vec x\|_2 = \|x_1\vec a_1 + x_2\vec a_2 + \cdots + x_n\vec a_n\|_2 \le |x_1|\|\vec a_1\|_2 + |x_2|\|\vec a_2\|_2 + \cdots + |x_n|\|\vec a_n\|_2$$
$$\le \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}\sqrt{\|\vec a_1\|_2^2 + \|\vec a_2\|_2^2 + \cdots + \|\vec a_n\|_2^2} = \|\vec x\|_2\sqrt{\textstyle\sum a_{ij}^2}.$$
Therefore the norm with respect to the Euclidean norm satisfies $\|A\| \le \sqrt{\sum a_{ij}^2}$.

Exercise 7.18. For $n \ge m$, we have
$$\|L^m + L^{m+1} + \cdots + L^n\| \le \|L^m\| + \|L^{m+1}\| + \cdots + \|L^n\| \le \|L\|^m + \|L\|^{m+1} + \cdots + \|L\|^n = \frac{\|L\|^m(1 - \|L\|^{n-m+1})}{1 - \|L\|} \le \frac{\|L\|^m}{1 - \|L\|}.$$

By $\|L\| < 1$, this implies that $\sum_{n=0}^\infty L^n$ satisfies the Cauchy criterion. Therefore the series converges. The assumption $\|L\| < 1$ and $\|L^n\| \le \|L\|^n$ imply that $\lim_{n\to\infty}L^n = O$, the zero transform. By taking $\lim_{n\to\infty}$ on both sides of the equalities
$$(I + L + L^2 + \cdots + L^n)(I - L) = I - L^{n+1}, \qquad (I - L)(I + L + L^2 + \cdots + L^n) = I - L^{n+1},$$
we find $\big(\sum_{n=0}^\infty L^n\big)(I - L) = (I - L)\big(\sum_{n=0}^\infty L^n\big) = I$. Therefore $\sum_{n=0}^\infty L^n = (I - L)^{-1}$.

If $L$ is invertible and $\|K - L\| < \frac{1}{\|L^{-1}\|}$, then
$$\|I - KL^{-1}\| = \|(L - K)L^{-1}\| \le \|L - K\|\,\|L^{-1}\| < 1.$$
By the first part, $KL^{-1} = I - (I - KL^{-1})$ is invertible, which further implies that $K$ is invertible. Moreover, if $M = I - KL^{-1}$, then
$$\|LK^{-1}\| = \|(KL^{-1})^{-1}\| = \|(I - M)^{-1}\| = \Big\|\sum_{n=0}^\infty M^n\Big\| \le \sum_{n=0}^\infty\|M\|^n = \frac{1}{1 - \|M\|}.$$
By $\|M\| \le \|L - K\|\,\|L^{-1}\|$, we further get
$$\|K^{-1}\| = \|L^{-1}LK^{-1}\| \le \|L^{-1}\|\,\|LK^{-1}\| \le \frac{\|L^{-1}\|}{1 - \|M\|} \le \frac{\|L^{-1}\|}{1 - \|L - K\|\,\|L^{-1}\|},$$
$$\|K^{-1} - L^{-1}\| = \|K^{-1}(L - K)L^{-1}\| \le \|K^{-1}\|\,\|L - K\|\,\|L^{-1}\| \le \frac{\|L - K\|\,\|L^{-1}\|^2}{1 - \|L - K\|\,\|L^{-1}\|}.$$
The estimation tells us that matrices in the ball $B(L, \|L^{-1}\|^{-1})$ are invertible, so that $GL(n)$ is open. Moreover, the estimation also shows that inside the ball, $\lim K = L$ implies $\lim K^{-1} = L^{-1}$, so that the inverse map is continuous.

Exercise 7.19. For any norm, the subset $K = \{\vec x\colon \|\vec x\| = 1\}$ is bounded and closed and is therefore compact. Then
$$\big|\,\|L(\vec x)\| - \|L(\vec y)\|\,\big| \le \|L(\vec x) - L(\vec y)\| = \|L(\vec x - \vec y)\| \le \lambda\|\vec x - \vec y\|$$
implies that $\|L(\vec x)\|$ is a continuous function. Thus $\|L\| = \sup_{\vec x\in K}\|L(\vec x)\|$ is reached at some point in $K$.

Exercise 7.20. The continuity of $L + K$ follows from $\|(L + K) - (L_0 + K_0)\| \le \|L - L_0\| + \|K - K_0\|$. The continuity of $cL$ follows from $\|cL - c_0L_0\| \le |c - c_0|\,\|L\| + |c_0|\,\|L - L_0\|$. The continuity of $K\circ L$ follows from
$$\|K\circ L - K_0\circ L_0\| = \|K\circ(L - L_0) + (K - K_0)\circ L_0\| \le \|K\circ(L - L_0)\| + \|(K - K_0)\circ L_0\| \le \|K\|\,\|L - L_0\| + \|K - K_0\|\,\|L_0\|.$$
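As a numeric illustration of Exercise 7.18 (my own toy example, not from the text), the following sketch sums the series $I + L + L^2 + \cdots$ for a small matrix with norm less than 1 and checks that the partial sum multiplies with $I - L$ to give (almost) the identity:

```python
# Partial sums of the Neumann series converge to (I - L)^{-1} when ||L|| < 1.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

L = [[0.2, 0.1], [0.0, 0.3]]            # a matrix with small norm
I = [[1.0, 0.0], [0.0, 1.0]]

S = I                                    # partial sum I + L + ... + L^n
P = I                                    # current power L^n
for _ in range(60):
    P = mat_mul(P, L)
    S = mat_add(S, P)

ImL = [[0.8, -0.1], [0.0, 0.7]]          # I - L
prod = mat_mul(S, ImL)                   # should be (almost) the identity
assert all(abs(prod[i][j] - I[i][j]) < 1e-9 for i in range(2) for j in range(2))
```

The convergence is geometric, so 60 terms already reproduce the inverse to machine precision here.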

Exercise 7.21. $\|c\vec x\|_2 = |c|\,\|\vec x\|_2$. Therefore the norm of the scalar product is 1. $|\vec x\cdot\vec y| \le \|\vec x\|_2\|\vec y\|_2$ and $\vec x\cdot\vec x = \|\vec x\|_2\|\vec x\|_2$. Therefore the norm of the dot product is 1.

Exercise 7.22. We only verify the triangle inequality here. For bilinear maps $B$ and $B'$, we have
$$\|(B + B')(\vec x,\vec y)\| = \|B(\vec x,\vec y) + B'(\vec x,\vec y)\| \le \|B(\vec x,\vec y)\| + \|B'(\vec x,\vec y)\| \le \|B\|\,\|\vec x\|\,\|\vec y\| + \|B'\|\,\|\vec x\|\,\|\vec y\| = (\|B\| + \|B'\|)\|\vec x\|\,\|\vec y\|.$$
This implies $\|B + B'\| \le \|B\| + \|B'\|$.

Exercise 7.23. By
$$\|B(K(\vec x), L(\vec y))\| \le \|B\|\,\|K(\vec x)\|\,\|L(\vec y)\| \le \|B\|\,\|K\|\,\|L\|\,\|\vec x\|\,\|\vec y\|,$$
the norm of the bilinear map $B(K(\vec x), L(\vec y))$ is $\le \|B\|\,\|K\|\,\|L\|$.

Exercise 7.24. Let
$$\vec v_i' = p_{1i}\vec v_1 + p_{2i}\vec v_2 + \cdots + p_{ni}\vec v_n, \quad I_{\alpha\alpha'} = (p_{ij}), \qquad \vec w_j' = q_{1j}\vec w_1 + q_{2j}\vec w_2 + \cdots + q_{mj}\vec w_m, \quad I_{\beta\beta'} = (q_{ij}).$$
Then
$$b_{ij}' = b(\vec v_i', \vec w_j') = \sum_{k,l}b(p_{ki}\vec v_k, q_{lj}\vec w_l) = \sum_{k,l}p_{ki}q_{lj}b(\vec v_k, \vec w_l) = \sum_{k,l}p_{ki}b_{kl}q_{lj}.$$
In matrix form, this means $B_{\alpha'\beta'} = I_{\alpha\alpha'}^T B_{\alpha\beta} I_{\beta\beta'}$.

Exercise 7.25. This means
$$[\vec x \mapsto b(\vec x, \cdot)]^*(\vec y^{**}) = b(\cdot, \vec y).$$
Applying the left side to $\vec z$, we get
$$\big[[\vec x \mapsto b(\vec x, \cdot)]^*(\vec y^{**})\big](\vec z) = \vec y^{**}([\vec x \mapsto b(\vec x, \cdot)](\vec z)) = \vec y^{**}(b(\vec z, \cdot)) = b(\vec z, \vec y).$$
The result is the right side.

Exercise 7.26. The effect of $V\to W^*$ on the basis vectors is
$$\vec v_i \mapsto b(\vec v_i, \cdot) = \sum_j b(\vec v_i, \vec w_j)\vec w_j^* = \sum_j b_{ij}\vec w_j^*.$$
Since the sum is over the second index of $b_{ij}$, the matrix of the linear transform is $B_{\alpha\beta} = (b_{ij})$.

The effect of $W\to V^*$ on the basis vectors is
$$\vec w_j \mapsto b(\cdot, \vec w_j) = \sum_i b(\vec v_i, \vec w_j)\vec v_i^* = \sum_i b_{ij}\vec v_i^*.$$
Since the sum is over the first index of $b_{ij}$, the matrix of the linear transform is $B_{\alpha\beta}^T = (b_{ji})$.

Exercise 7.27. Let $L_{\beta\alpha} = (a_{ij})$. Then
$$b_{ij} = L(\vec e_i)\cdot\vec e_j = (a_{1i}\vec e_1 + \cdots + a_{ni}\vec e_n)\cdot\vec e_j = a_{ji}.$$
Therefore $B_{\alpha\beta} = L_{\beta\alpha}^T$. With respect to the Euclidean norm, we have $\|\vec z\| = \sup_{\|\vec y\|=1}\vec z\cdot\vec y$ by Exercise 7.14. Then
$$\|L\| = \sup_{\|\vec x\|=1}\|L(\vec x)\| = \sup_{\|\vec x\|=1}\sup_{\|\vec y\|=1}L(\vec x)\cdot\vec y = \sup_{\|\vec x\|=\|\vec y\|=1}L(\vec x)\cdot\vec y.$$

Exercise 7.28. By the non-singular property, two vectors $\vec x, \vec y\in V$ are equal if and only if $b(\vec x,\vec z) = b(\vec y,\vec z)$ for all $\vec z\in W$. For the given basis $\beta$ of $W$, this is equivalent to $b(\vec x,\vec w_i) = b(\vec y,\vec w_i)$ for all $i$. Then the equality
$$\vec x = b(\vec x,\vec w_1)\vec v_1 + b(\vec x,\vec w_2)\vec v_2 + \cdots + b(\vec x,\vec w_n)\vec v_n$$
follows from
$$b(\text{right side},\vec w_i) = b(\vec x,\vec w_1)b(\vec v_1,\vec w_i) + b(\vec x,\vec w_2)b(\vec v_2,\vec w_i) + \cdots + b(\vec x,\vec w_n)b(\vec v_n,\vec w_i) = b(\vec x,\vec w_i)b(\vec v_i,\vec w_i) = b(\vec x,\vec w_i).$$
Similarly, for $\vec y\in W$, we have
$$\vec y = b(\vec v_1,\vec y)\vec w_1 + b(\vec v_2,\vec y)\vec w_2 + \cdots + b(\vec v_n,\vec y)\vec w_n.$$

Exercise 7.29. The definition of $L^*\colon W\to V$ is the following. A vector $\vec y\in W$ gives a linear functional $\langle\cdot,\vec y\rangle\in W^*$ (this uses the isomorphism $W \cong W^*$). Then we get the linear functional $\langle L(\cdot),\vec y\rangle\in V^*$. We wish to express this linear functional as $\langle\cdot,\vec z\rangle$ for some $\vec z\in V$ (this uses the isomorphism $V \cong V^*$), and this $\vec z$ is $L^*(\vec y)$. This means
$$\langle L(\cdot),\vec y\rangle = \langle\cdot,\vec z\rangle = \langle\cdot,L^*(\vec y)\rangle,$$
or
$$\langle L(\vec x),\vec y\rangle = \langle\vec x,L^*(\vec y)\rangle, \qquad \vec x\in V,\ \vec y\in W.$$
Note that for each fixed $\vec y$, the left side is a linear functional on $V$. Since the linear functional is the inner product with a unique $\vec z$, this uniquely determines $L^*(\vec y)$.
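For the Euclidean inner product the adjoint is given by the transposed matrix (this is the matrix statement proved later in Exercise 7.29). A quick numeric spot-check, with a made-up matrix of my own:

```python
# Check <L(x), y> = <x, L*(y)> where L* is represented by the transpose.
def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def mat_vec(A, x):
    return [dot(row, x) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2, 0], [3, -1, 4]]          # L maps R^3 to R^2
x = [1, 2, 3]
y = [-1, 5]

assert dot(mat_vec(A, x), y) == dot(x, mat_vec(transpose(A), y))
```

Both sides evaluate the same bilinear pairing, so the equality holds exactly for integer data.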

We have
$$\langle\vec x,(K+L)^*(\vec y)\rangle = \langle(K+L)(\vec x),\vec y\rangle = \langle K(\vec x)+L(\vec x),\vec y\rangle = \langle K(\vec x),\vec y\rangle + \langle L(\vec x),\vec y\rangle = \langle\vec x,K^*(\vec y)\rangle + \langle\vec x,L^*(\vec y)\rangle = \langle\vec x,K^*(\vec y)+L^*(\vec y)\rangle = \langle\vec x,(K^*+L^*)(\vec y)\rangle.$$
Since this holds for all $\vec x$, by the non-singular property of the inner product, we get $(K+L)^*(\vec y) = (K^*+L^*)(\vec y)$ for all $\vec y$, or $(K+L)^* = K^*+L^*$. The equalities $(cL)^* = cL^*$, $(K\circ L)^* = L^*\circ K^*$ can be similarly proved.

Let
$$L(\vec v_i) = a_{1i}\vec w_1 + a_{2i}\vec w_2 + \cdots + a_{mi}\vec w_m, \quad L_{\beta\alpha} = (a_{ij}), \qquad L^*(\vec w_j) = b_{1j}\vec v_1 + b_{2j}\vec v_2 + \cdots + b_{nj}\vec v_n, \quad (L^*)_{\alpha\beta} = (b_{ij}).$$
Since $\alpha$ and $\beta$ are orthonormal bases, we have
$$a_{ji} = \langle L(\vec v_i),\vec w_j\rangle = \langle\vec v_i,L^*(\vec w_j)\rangle = b_{ij}.$$
This proves $(L^*)_{\alpha\beta} = (L_{\beta\alpha})^T$. Finally, we have $\|L^*\| = \|L\|$ by the second part of Exercise 7.27.

Exercise 7.30. The homomorphism $V_1\oplus V_2\to(W_1\oplus W_2)^* = W_1^*\oplus W_2^*$ (see Exercise 7.8 for the equality) induced by $b$ is the direct sum of the homomorphisms $V_1\to W_1^*$ and $V_2\to W_2^*$ induced by $b_1$ and $b_2$.

Exercise 7.31. Any bilinear form satisfies
$$b(\vec x+\vec y,\vec x+\vec y) - b(\vec x,\vec x) - b(\vec y,\vec y) = b(\vec x,\vec y) + b(\vec y,\vec x).$$
In particular, if $b(\vec x,\vec x) = 0$ for any $\vec x$, then $b(\vec x,\vec y) + b(\vec y,\vec x) = 0$ for any $\vec x,\vec y$. This means $b$ is skew-symmetric.

Exercise 7.32. The area of the triangle with vertices $(a,a^2)$, $(a+h,(a+h)^2)$, $(a+2h,(a+2h)^2)$ is
$$\frac12\det\begin{pmatrix}(a+h)-a & (a+2h)-a\\ (a+h)^2-a^2 & (a+2h)^2-a^2\end{pmatrix} = \frac12\det\begin{pmatrix}h & 2h\\ 2ah+h^2 & 4ah+4h^2\end{pmatrix} = h^3.$$
$P_n - P_{n-1}$ consists of triangles with vertices on the parabola at
$$x = \frac{2k}{2^{n-1}},\ \frac{2k+1}{2^{n-1}},\ \frac{2k+2}{2^{n-1}}, \qquad k = 0,1,\dots,2^{n-1}-1.$$
By the first part, with $h = \frac1{2^{n-1}}$, each such triangle has area $\big(\frac1{2^{n-1}}\big)^3 = \frac1{2^{3(n-1)}}$. The total area of such triangles is $2^{n-1}\cdot\frac1{2^{3(n-1)}} = \frac1{4^{n-1}}$. The area of the region is the sum of the areas of $P_n - P_{n-1}$ for $n \ge 1$, together with the area of $P_0$ ($P_0$ is the triangle with vertices $(0,0)$, $(0,4)$, $(2,4)$, and has area $\frac12\cdot 4\cdot 2 = 4$). Therefore the area of $A$ is
$$\sum_{n=0}^\infty\frac1{4^{n-1}} = \frac{16}{3}.$$
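The two computations in Exercise 7.32 can be spot-checked numerically. The sketch below (my own, not from the text) verifies the inscribed-triangle area $h^3$ for sample values, sums the geometric series, and compares with a Riemann sum for the region $\{0 \le x \le 2,\ x^2 \le y \le 4\}$, which I take to be the region in question (an assumption on my part):

```python
# Triangle area on the parabola y = x^2, plus the series and region area.
def tri_area(p, q, r):
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return abs((x2-x1)*(y3-y1) - (x3-x1)*(y2-y1)) / 2

a, h = 0.7, 0.25
area = tri_area((a, a*a), (a+h, (a+h)**2), (a+2*h, (a+2*h)**2))
assert abs(area - h**3) < 1e-12          # the h^3 formula

series = sum(4.0 / 4**n for n in range(60))
assert abs(series - 16/3) < 1e-9          # sum of 1/4^(n-1) over n >= 0

# Riemann sum for the area of {0 <= x <= 2, x^2 <= y <= 4}
N = 200000
integral = sum((4 - (2*k/N)**2) * (2/N) for k in range(N))
assert abs(integral - 16/3) < 1e-3
```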

Exercise 7.33. For any bilinear form, we have
$$b(\vec x+\vec y,\vec x+\vec y) - b(\vec x,\vec x) - b(\vec y,\vec y) = b(\vec x,\vec y) + b(\vec y,\vec x).$$
If $b$ is symmetric and $q(\vec x) = b(\vec x,\vec x)$, then the equality above becomes
$$b(\vec x,\vec y) = \frac12(q(\vec x+\vec y) - q(\vec x) - q(\vec y)).$$
Substituting $\vec y$ by $-\vec y$ and using $q(-\vec y) = q(\vec y)$, we get
$$-b(\vec x,\vec y) = b(\vec x,-\vec y) = \frac12(q(\vec x-\vec y) - q(\vec x) - q(\vec y)).$$
Subtracting the two equalities, we get
$$2b(\vec x,\vec y) = \frac12(q(\vec x+\vec y) - q(\vec x-\vec y)).$$

Exercise 7.34. A quadratic form is homogeneous of second order because a bilinear form preserves the scalar multiplication in each variable. The parallelogram law follows from Exercise 7.33 and the fact that $b$ is symmetric.

Exercise 7.35. Conversely, suppose the parallelogram law is satisfied. By taking $\vec x = \vec y = \vec 0$ in the law, we get $q(\vec 0) = 0$. By further taking $\vec x = \vec 0$ in the law, we get $q(-\vec x) = q(\vec x)$. The symmetry property of $b$ follows from its definition. Next we prove $b(\vec x+\vec y,\vec z) + b(\vec x-\vec y,\vec z) = 2b(\vec x,\vec z)$:
$$b(\vec x+\vec y,\vec z) + b(\vec x-\vec y,\vec z) = \frac14\big(q(\vec x+\vec y+\vec z) - q(\vec x+\vec y-\vec z) + q(\vec x-\vec y+\vec z) - q(\vec x-\vec y-\vec z)\big)$$
$$= \frac14\big(q(\vec x+\vec y+\vec z) + q(\vec x-\vec y+\vec z)\big) - \frac14\big(q(\vec x+\vec y-\vec z) + q(\vec x-\vec y-\vec z)\big)$$
$$= \frac12\big(q(\vec x+\vec z) + q(\vec y)\big) - \frac12\big(q(\vec x-\vec z) + q(\vec y)\big) = 2b(\vec x,\vec z).$$
For fixed $\vec z$, the function $f(\vec x) = b(\vec x,\vec z)$ satisfies $f(\vec 0) = 0$ and $f(\vec x+\vec y) + f(\vec x-\vec y) = 2f(\vec x)$. Taking $\vec x = \vec y$, we get $f(2\vec x) = 2f(\vec x)$. Then replacing $\vec x$ and $\vec y$ by $\frac12(\vec x+\vec y)$ and $\frac12(\vec x-\vec y)$, we get
$$f(\vec x) + f(\vec y) = 2f\Big(\frac12(\vec x+\vec y)\Big) = f(\vec x+\vec y).$$
This shows that $b$ is additive in the first variable. By symmetry, it is also additive in the second variable.
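The polarization identity of Exercise 7.33 and the parallelogram law of Exercise 7.34 can be confirmed numerically for a concrete symmetric form (a toy example of my own):

```python
# Polarization: b(x,y) = (q(x+y) - q(x-y)) / 4 for symmetric b, q(x) = b(x,x).
S = [[2.0, 1.0], [1.0, 3.0]]     # symmetric matrix of the bilinear form b

def b(x, y):
    return sum(S[i][j] * x[i] * y[j] for i in range(2) for j in range(2))

def q(x):
    return b(x, x)

x, y = [1.0, -2.0], [0.5, 4.0]
xpy = [a + c for a, c in zip(x, y)]
xmy = [a - c for a, c in zip(x, y)]
assert abs(b(x, y) - (q(xpy) - q(xmy)) / 4) < 1e-12

# parallelogram law: q(x+y) + q(x-y) = 2q(x) + 2q(y)
assert abs(q(xpy) + q(xmy) - 2*q(x) - 2*q(y)) < 1e-12
```

Both identities hold for every symmetric bilinear form; the numbers here only illustrate them.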

Finally, if $q$ is continuous, then $b$ is also continuous. By Exercise ??, the biadditivity of $b$ implies that $b$ is bilinear.

Exercise 7.36.
(1) $x^2 + 4xy - 5y^2 = (x^2 + 4xy + 4y^2) - 9y^2 = (x + 2y)^2 - (3y)^2$, indefinite.
(2) $2x^2 + 4xy = 2(x^2 + 2xy + y^2) - 2y^2 = 2(x + y)^2 - 2y^2$, indefinite.
(3) $4x_1^2 + 4x_1x_2 + 5x_2^2 = (4x_1^2 + 4x_1x_2 + x_2^2) + 4x_2^2 = (2x_1 + x_2)^2 + (2x_2)^2$, positive definite.
(4) $x^2 + 2y^2 + z^2 + 2xy - 2xz = (x^2 + 2x(y - z) + (y - z)^2) - (y - z)^2 + 2y^2 + z^2 = (x + y - z)^2 + y^2 + 2yz = (x + y - z)^2 + (y + z)^2 - z^2$, indefinite.
(5) $-2u^2 - v^2 - 6w^2 - 4uw + 2vw = -2(u^2 + 2uw + w^2) - v^2 - 4w^2 + 2vw = -2(u + w)^2 - (v - w)^2 - 3w^2$, negative definite.
(6) $x_1^2 + x_3^2 + 2x_1x_2 + 2x_1x_3 + 2x_1x_4 + 2x_3x_4 = (x_1^2 + 2x_1(x_2 + x_3 + x_4) + (x_2 + x_3 + x_4)^2) - (x_2 + x_3 + x_4)^2 + x_3^2 + 2x_3x_4 = (x_1 + x_2 + u)^2 - (x_2 + u)^2 + u^2 - x_4^2$, $u = x_3 + x_4$, indefinite.

Exercise 7.37.
$$x^2 + 2y^2 + z^2 + 2xy - 2xz = 2y^2 + 2xy + (z - x)^2 = -\frac12x^2 + 2\Big(y + \frac12x\Big)^2 + (z - x)^2.$$

Exercise 7.38. If the coefficient of the square term $x_i^2$ is $a_{ii} \le 0$, then $q(\vec e_i) = a_{ii} \le 0$. Therefore $q$ cannot be positive definite.

Exercise 7.39. The unit sphere $S = \{\vec x\colon\|\vec x\| = 1\}$ is compact. Then the continuous function $q(\vec x)$ reaches its minimum $\lambda$ at some $\vec x_0\in S$. Then $q(\vec x) \ge q(\vec x_0) = \lambda > 0$ for any $\|\vec x\| = 1$, where the first inequality is the definition of minimum, and the second inequality is due to the positive definite assumption on $q$. Now for any vector $\vec x$, we have $\vec x = r\vec u$ with $r = \|\vec x\|$ and $\|\vec u\| = 1$. Then by the second order homogeneity of $q$, we have
$$q(\vec x) = q(r\vec u) = r^2q(\vec u) \ge r^2\lambda = \lambda\|\vec x\|^2.$$
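The completed-square identities of Exercise 7.36 are easy to verify by plugging in numbers; here is a small self-check of parts (1) and (3) (my own sketch, not from the text):

```python
# Verify a completed-square identity on random samples, and confirm the
# definiteness classifications with witness points.
import random
random.seed(1)

for _ in range(200):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    # (1) x^2 + 4xy - 5y^2 = (x + 2y)^2 - (3y)^2
    assert abs((x**2 + 4*x*y - 5*y**2) - ((x + 2*y)**2 - (3*y)**2)) < 1e-9

q1 = lambda x, y: x**2 + 4*x*y - 5*y**2
assert q1(1, 0) > 0 and q1(0, 1) < 0           # takes both signs: indefinite

# (3) 4x1^2 + 4x1x2 + 5x2^2 = (2x1 + x2)^2 + (2x2)^2
q3 = lambda a, b: 4*a*a + 4*a*b + 5*b*b
assert q3(1, -1) > 0 and q3(-2, 1) > 0 and q3(0, 0) == 0   # positive definite samples
```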

Exercise 7.40. The proof is the same as Proposition 7.12. In particular, if $F$ and $G$ are multilinear, then the triangle inequality follows from
$$\|(F + G)(\vec x_1,\vec x_2,\dots,\vec x_k)\| = \|F(\vec x_1,\vec x_2,\dots,\vec x_k) + G(\vec x_1,\vec x_2,\dots,\vec x_k)\| \le \|F(\vec x_1,\vec x_2,\dots,\vec x_k)\| + \|G(\vec x_1,\vec x_2,\dots,\vec x_k)\|$$
$$\le \|F\|\,\|\vec x_1\|\,\|\vec x_2\|\cdots\|\vec x_k\| + \|G\|\,\|\vec x_1\|\,\|\vec x_2\|\cdots\|\vec x_k\| = (\|F\| + \|G\|)\|\vec x_1\|\,\|\vec x_2\|\cdots\|\vec x_k\|.$$

Exercise 7.41. By the definition of the norm of multilinear maps, we have
$$\|B_1(\vec x, B_2(\vec u,\vec v))\| \le \|B_1\|\,\|\vec x\|\,\|B_2(\vec u,\vec v)\| \le \|B_1\|\,\|\vec x\|\,\|B_2\|\,\|\vec u\|\,\|\vec v\|.$$
This implies that the norm of the triple linear map $B_1(\vec x, B_2(\vec u,\vec v))$ is $\le \|B_1\|\,\|B_2\|$. In general, the norm of a composition of multilinear maps is at most the product of the norms of the individual multilinear maps. This generalizes Proposition 7.12.

Exercise 7.42. We fix a matrix $A$ and consider $\det AB = \det(A\vec b_1\ A\vec b_2\ \cdots\ A\vec b_n)$ as a function of the columns $\vec b_1,\vec b_2,\dots,\vec b_n$ of $B$. Since $\det$ is multilinear, we have
$$\det(A(\vec b_1+\vec b_1')\ A\vec b_2\ \cdots\ A\vec b_n) = \det(A\vec b_1 + A\vec b_1'\ A\vec b_2\ \cdots\ A\vec b_n) = \det(A\vec b_1\ A\vec b_2\ \cdots\ A\vec b_n) + \det(A\vec b_1'\ A\vec b_2\ \cdots\ A\vec b_n),$$
$$\det(A(c\vec b_1)\ A\vec b_2\ \cdots\ A\vec b_n) = \det(cA\vec b_1\ A\vec b_2\ \cdots\ A\vec b_n) = c\det(A\vec b_1\ A\vec b_2\ \cdots\ A\vec b_n).$$
This shows that $\det AB$ is linear in the first column of $B$. By the similar argument, it is linear in any column of $B$. On the other hand, the alternating property of $\det$ implies that
$$\det(A\vec b_2\ A\vec b_1\ \cdots\ A\vec b_n) = -\det(A\vec b_1\ A\vec b_2\ \cdots\ A\vec b_n).$$
By the similar argument for other pairs of columns of $B$, we see that $\det AB$ is alternating in the columns of $B$. We know the function $\det AB$ that is multilinear and alternating in the columns of the square matrix $B$ must be a constant multiple of the determinant of $B$. So $\det AB = a\det B$. The constant can be determined by taking $B$ to be the identity matrix, and we get $a = a\det I = \det AI = \det A$. Therefore $\det AB = \det A\det B$.

Exercise 7.43. $\dim\Lambda^kV = \dfrac{n!}{k!(n-k)!}$, the number of $k$-element subsets of $\{1,2,\dots,n\}$.
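The multiplicativity $\det AB = \det A\det B$ proved in Exercise 7.42 is easy to confirm on a concrete pair of matrices (a hand-rolled example of my own):

```python
# det(AB) = det(A) det(B), checked with an explicit 3x3 determinant formula.
def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]
B = [[2, 1, 1], [0, 1, 0], [1, 0, 2]]
assert det3(mat_mul(A, B)) == det3(A) * det3(B)
```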

Exercise 7.44. We note that both sides of the associativity $(\vec\lambda\wedge\vec\mu)\wedge\vec\nu = \vec\lambda\wedge(\vec\mu\wedge\vec\nu)$ are triple linear maps $\Lambda\mathbb R^n\times\Lambda\mathbb R^n\times\Lambda\mathbb R^n\to\Lambda\mathbb R^n$ that send the standard basis vectors $(\vec e_{i_1}\wedge\cdots\wedge\vec e_{i_k},\ \vec e_{j_1}\wedge\cdots\wedge\vec e_{j_l},\ \vec e_{k_1}\wedge\cdots\wedge\vec e_{k_m})$ to $\vec e_{i_1}\wedge\cdots\wedge\vec e_{i_k}\wedge\vec e_{j_1}\wedge\cdots\wedge\vec e_{j_l}\wedge\vec e_{k_1}\wedge\cdots\wedge\vec e_{k_m}$. Therefore the two sides are equal.

Exercise 7.45. The multilinear and alternating property for the multiple wedge product map follows from the fact that the exterior product is a graded commutative product satisfying the usual algebra properties. The explicit formula for $\vec x_1\wedge\vec x_2\wedge\cdots\wedge\vec x_k$ is obtained by applying the formula (7.32) to $F(\vec x_1,\vec x_2,\dots,\vec x_k) = \vec x_1\wedge\vec x_2\wedge\cdots\wedge\vec x_k$.

Exercise 7.46. The equality $\vec x_1\wedge\vec x_2\wedge\cdots\wedge\vec x_n = \det(\vec x_1\ \vec x_2\ \cdots\ \vec x_n)\,\vec e_{[n]}$ is the special case of the explicit formula in Exercise 7.45, applied to $V = \mathbb R^n$ and the standard basis. Applying the explicit formula in Exercise 7.45 to $\vec x_2\wedge\cdots\wedge\vec x_n$, we get
$$\vec x_2\wedge\cdots\wedge\vec x_n = \sum_{1\le i_2<\cdots<i_n\le n}\det\begin{pmatrix}x_{2i_2} & x_{3i_2} & \cdots & x_{ni_2}\\ x_{2i_3} & x_{3i_3} & \cdots & x_{ni_3}\\ \vdots & \vdots & & \vdots\\ x_{2i_n} & x_{3i_n} & \cdots & x_{ni_n}\end{pmatrix}\vec e_{i_2}\wedge\vec e_{i_3}\wedge\cdots\wedge\vec e_{i_n}.$$
Note that the indices in the sum must be $(i_2,\dots,i_n) = (1,\dots,\hat i,\dots,n) = (1,\dots,i-1,i+1,\dots,n)$, obtained by deleting $i$ from all the natural numbers between 1 and $n$. Therefore
$$\vec x_2\wedge\cdots\wedge\vec x_n = \sum_{1\le i\le n}C_{i1}\,\vec e_1\wedge\cdots\wedge\vec e_{i-1}\wedge\vec e_{i+1}\wedge\cdots\wedge\vec e_n, \qquad C_{i1} = \det\begin{pmatrix}x_{21} & x_{31} & \cdots & x_{n1}\\ \vdots & \vdots & & \vdots\\ x_{2(i-1)} & x_{3(i-1)} & \cdots & x_{n(i-1)}\\ x_{2(i+1)} & x_{3(i+1)} & \cdots & x_{n(i+1)}\\ \vdots & \vdots & & \vdots\\ x_{2n} & x_{3n} & \cdots & x_{nn}\end{pmatrix}.$$
Then for $\vec x_1 = x_{11}\vec e_1 + \cdots + x_{1n}\vec e_n$, we get
$$\vec x_1\wedge\vec x_2\wedge\cdots\wedge\vec x_n = \sum_{1\le i\le n}x_{1i}C_{i1}\,\vec e_i\wedge\vec e_1\wedge\cdots\wedge\vec e_{i-1}\wedge\vec e_{i+1}\wedge\cdots\wedge\vec e_n$$
$$= \sum_{1\le i\le n}(-1)^{i-1}x_{1i}C_{i1}\,\vec e_1\wedge\cdots\wedge\vec e_{i-1}\wedge\vec e_i\wedge\vec e_{i+1}\wedge\cdots\wedge\vec e_n = \Big(\sum_{1\le i\le n}(-1)^{i-1}x_{1i}C_{i1}\Big)\vec e_{[n]}.$$

Compared with the first part, we get
$$\det(\vec x_1\ \vec x_2\ \cdots\ \vec x_n) = \sum_{1\le i\le n}(-1)^{i-1}x_{1i}C_{i1}.$$
This is the cofactor expansion with respect to the first column.

Let $A$ be an $n\times n$ matrix. For two subsets $I, J\subset[n] = \{1,\dots,n\}$ of $k$ numbers, let $A_{IJ}$ be the $k\times k$ submatrix of the $I$-rows and $J$-columns. Moreover, let $\mathrm{sign}(I,[n]-I)$ be the parity of the number of pair exchanges needed to convert $(I,[n]-I)$ to $(1,\dots,n)$. Then
$$\det A = \sum_{I\subset[n],\,|I|=k}\mathrm{sign}(I,[n]-I)\det A_{I\{1,\dots,k\}}\det A_{([n]-I)\{k+1,\dots,n\}}.$$
More generally, suppose $[n]$ is a disjoint union of subsets $J_1,\dots,J_p$ with $k_1,\dots,k_p$ numbers, with $k_1+\cdots+k_p = n$. Then
$$\det A = \sum_{[n]=I_1\sqcup\cdots\sqcup I_p,\,|I_i|=k_i}\mathrm{sign}(I_1,\dots,I_p)\,\mathrm{sign}(J_1,\dots,J_p)\det A_{I_1J_1}\cdots\det A_{I_pJ_p}.$$

Exercise 7.47. This follows from Exercise 7.6 and $\Lambda(K\circ L) = \Lambda K\circ\Lambda L$.

Exercise 7.48. The following shows that $\Lambda(cL)$ and $c^k\Lambda L$ are equal on a basis of the vector space $\Lambda^kV$:
$$\Lambda(cL)(\vec v_{i_1}\wedge\vec v_{i_2}\wedge\cdots\wedge\vec v_{i_k}) = (cL)(\vec v_{i_1})\wedge(cL)(\vec v_{i_2})\wedge\cdots\wedge(cL)(\vec v_{i_k}) = cL(\vec v_{i_1})\wedge cL(\vec v_{i_2})\wedge\cdots\wedge cL(\vec v_{i_k})$$
$$= c^k\big(L(\vec v_{i_1})\wedge L(\vec v_{i_2})\wedge\cdots\wedge L(\vec v_{i_k})\big) = c^k\Lambda L(\vec v_{i_1}\wedge\vec v_{i_2}\wedge\cdots\wedge\vec v_{i_k}).$$
This implies $\Lambda(cL) = c^k\Lambda L$ on the whole $\Lambda^kV$.

Exercise 7.49. Take a basis $\alpha = \{\vec v_1,\dots,\vec v_n\}$ of $V$ and express $\vec\lambda$ as a linear combination
$$\vec\lambda = \sum_{i_1<\cdots<i_k}a_{i_1\cdots i_k}\vec v_{i_1}\wedge\cdots\wedge\vec v_{i_k}.$$
By $\vec\lambda\ne 0$, we have $a_{i_1\cdots i_k}\ne 0$ for a particular choice of indices $(i_1\cdots i_k)$. Let $(j_1\cdots j_{n-k})$ be the complement of $(i_1\cdots i_k)$ in $[n]$, and let $\vec\mu = \vec v_{j_1}\wedge\cdots\wedge\vec v_{j_{n-k}}\in\Lambda^{n-k}V$. Then
$$\vec\lambda\wedge\vec\mu = \sum a_{i_1\cdots i_k}\vec v_{i_1}\wedge\cdots\wedge\vec v_{i_k}\wedge\vec v_{j_1}\wedge\cdots\wedge\vec v_{j_{n-k}} = \pm a_{i_1\cdots i_k}\vec v_1\wedge\cdots\wedge\vec v_n \ne 0.$$

Exercise 7.50.

We may directly compute as in (7.32). Alternatively, consider the special case where $\vec x_i = \vec e_i$ is the standard basis vector in $\mathbb R^k$, and consider
$$\vec a_i = a_{i1}\vec e_1 + a_{i2}\vec e_2 + \cdots + a_{ik}\vec e_k = (a_{i1}, a_{i2},\dots, a_{ik}).$$
By the first part of Exercise 7.46, we get
$$\vec a_1\wedge\vec a_2\wedge\cdots\wedge\vec a_k = (\det A)\,\vec e_1\wedge\vec e_2\wedge\cdots\wedge\vec e_k.$$
Then consider the linear transform $L\colon\mathbb R^k\to V$ determined by $L(\vec e_i) = \vec x_i$. We have $L(\vec a_i) = \vec y_i$ by the linearity of $L$, and
$$\vec y_1\wedge\vec y_2\wedge\cdots\wedge\vec y_k = L(\vec a_1)\wedge L(\vec a_2)\wedge\cdots\wedge L(\vec a_k) = \Lambda L(\vec a_1\wedge\vec a_2\wedge\cdots\wedge\vec a_k) = \Lambda L\big((\det A)\,\vec e_1\wedge\vec e_2\wedge\cdots\wedge\vec e_k\big)$$
$$= (\det A)\Lambda L(\vec e_1\wedge\vec e_2\wedge\cdots\wedge\vec e_k) = (\det A)L(\vec e_1)\wedge L(\vec e_2)\wedge\cdots\wedge L(\vec e_k) = (\det A)\,\vec x_1\wedge\vec x_2\wedge\cdots\wedge\vec x_k.$$

Exercise 7.51. It follows from the definition of $\det L$ that
$$L(\vec v_1)\wedge L(\vec v_2)\wedge\cdots\wedge L(\vec v_n) = (\det L)\,\vec v_1\wedge\vec v_2\wedge\cdots\wedge\vec v_n.$$
It also follows from Exercise 7.50 that
$$L(\vec v_1)\wedge L(\vec v_2)\wedge\cdots\wedge L(\vec v_n) = (\det L_{\alpha\alpha})\,\vec v_1\wedge\vec v_2\wedge\cdots\wedge\vec v_n.$$
Since $\vec v_1\wedge\vec v_2\wedge\cdots\wedge\vec v_n\ne 0$, we get $\det L = \det L_{\alpha\alpha}$.

Exercise 7.52. Applying Exercise 7.50 to the case where the $\vec x_i$ are $\alpha$, the $\vec y_i$ are $\beta$, and $A$ is $I_{\beta\alpha}$, we get $\wedge\beta = (\det I_{\beta\alpha})(\wedge\alpha)$. By Exercise 7.13 and $I_{\beta^*\alpha^*} = (I_{\alpha^*\beta^*})^{-1} = (I_{\beta\alpha}^T)^{-1}$, we further get
$$\wedge\beta^* = (\det I_{\beta^*\alpha^*})(\wedge\alpha^*) = \big(\det(I_{\beta\alpha}^T)^{-1}\big)(\wedge\alpha^*) = (\det I_{\beta\alpha})^{-1}(\wedge\alpha^*).$$

Exercise 7.53. For
$$L(x_1,\dots,x_i,\dots,x_j,\dots,x_n) = (x_1,\dots,x_i+cx_j,\dots,x_j,\dots,x_n),$$
we have
$$L(\vec e_1)\wedge\cdots\wedge L(\vec e_n) = \vec e_1\wedge\cdots\wedge\vec e_{i-1}\wedge(\vec e_i+c\vec e_j)\wedge\vec e_{i+1}\wedge\cdots\wedge\vec e_n = \vec e_1\wedge\cdots\wedge\vec e_n.$$
Therefore $\det L = 1$. For
$$L(x_1,\dots,x_i,\dots,x_j,\dots,x_n) = (x_1,\dots,x_j,\dots,x_i,\dots,x_n),$$
we have
$$L(\vec e_1)\wedge\cdots\wedge L(\vec e_n) = \vec e_1\wedge\cdots\wedge\vec e_{i-1}\wedge\vec e_j\wedge\vec e_{i+1}\wedge\cdots\wedge\vec e_{j-1}\wedge\vec e_i\wedge\vec e_{j+1}\wedge\cdots\wedge\vec e_n = -\vec e_1\wedge\cdots\wedge\vec e_n.$$

Therefore $\det L = -1$. For
$$L(x_1,\dots,x_i,\dots,x_n) = (x_1,\dots,cx_i,\dots,x_n),$$
we have
$$L(\vec e_1)\wedge\cdots\wedge L(\vec e_n) = \vec e_1\wedge\cdots\wedge\vec e_{i-1}\wedge(c\vec e_i)\wedge\vec e_{i+1}\wedge\cdots\wedge\vec e_n = c\,\vec e_1\wedge\cdots\wedge\vec e_n.$$
Therefore $\det L = c$.

Exercise 7.54. Let $\alpha$, $\beta$ be bases of $V$ and $W$. If $\dim V = \dim W = n$, then $\Lambda^nK(\wedge\alpha) = a(\wedge\beta)$ and $\Lambda^nL(\wedge\beta) = b(\wedge\alpha)$ for some numbers $a$, $b$. This implies
$$\Lambda^n(L\circ K)(\wedge\alpha) = \Lambda^nL(\Lambda^nK(\wedge\alpha)) = \Lambda^nL(a(\wedge\beta)) = a\Lambda^nL(\wedge\beta) = ab(\wedge\alpha).$$
Therefore $\det(L\circ K) = ab$. By the same reason, we have $\det(K\circ L) = ba$. Therefore $\det(L\circ K) = \det(K\circ L)$.

If $\dim V < \dim W = n$, then $\Lambda^n(K\circ L) = \Lambda^nK\circ\Lambda^nL\colon\Lambda^nW\to\Lambda^nV = 0\to\Lambda^nW$ must be the zero map. Therefore $\det(K\circ L) = 0$. However, it may happen that $L\circ K$ is invertible, so that $\det(L\circ K)\ne 0$.

Exercise 7.55. Let $l_i\in W^*$ and $\vec x_i\in V$. Then
$$[(\Lambda L)^*(l_1\wedge\cdots\wedge l_k)](\vec x_1\wedge\cdots\wedge\vec x_k) = (l_1\wedge\cdots\wedge l_k)[\Lambda L(\vec x_1\wedge\cdots\wedge\vec x_k)] = (l_1\wedge\cdots\wedge l_k)(L(\vec x_1)\wedge\cdots\wedge L(\vec x_k))$$
$$= \det(l_i(L(\vec x_j))) = \det(L^*(l_i)(\vec x_j)) = [L^*(l_1)\wedge\cdots\wedge L^*(l_k)](\vec x_1\wedge\cdots\wedge\vec x_k).$$
Therefore
$$(\Lambda L)^*(l_1\wedge\cdots\wedge l_k) = L^*(l_1)\wedge\cdots\wedge L^*(l_k) = (\Lambda L^*)(l_1\wedge\cdots\wedge l_k).$$
This proves $(\Lambda L)^* = \Lambda L^*$.

Exercise 7.56. By Exercise 7.52, we have
$$\det\nolimits_\beta = \wedge\beta^* = (\det I_{\beta\alpha})^{-1}(\wedge\alpha^*) = (\det I_{\beta\alpha})^{-1}\det\nolimits_\alpha.$$

Exercise 7.57. For fixed $\vec x_i$, both sides are bilinear in $\phi$ and $\psi$. So it is sufficient to verify the equality for the case
$$\phi = \vec v_I^* = \vec v_{i_1}^*\wedge\cdots\wedge\vec v_{i_k}^*, \quad i_1<\cdots<i_k, \qquad \psi = \vec v_J^* = \vec v_{j_1}^*\wedge\cdots\wedge\vec v_{j_l}^*, \quad j_1<\cdots<j_l,$$
of basis vectors in $\Lambda^kV^*$ and $\Lambda^lV^*$. Once $\phi$ and $\psi$ are fixed basis vectors (i.e., $I$ and $J$ are fixed), we view both sides as multilinear functions of the $\vec x_i$. So it is sufficient to verify the equality for the case where $\vec x_1 = \vec v_{p_1},\dots,\vec x_{k+l} = \vec v_{p_{k+l}}$ are also basis vectors:
$$(\vec v_I^*\wedge\vec v_J^*)(\vec v_P) = \sum_{P=R\sqcup S}\pm\,\vec v_I^*(\vec v_R)\vec v_J^*(\vec v_S).$$

For the left side to be nonzero, we must have $I\cap J = \emptyset$, and $P$ is a rearrangement of $I\cup J$. For the right side to be nonzero, there must be a disjoint union $P = R\sqcup S$, such that $I = R$ and $J = S$ up to rearrangement. Since $R$ and $S$ are disjoint, we must have $I\cap J = \emptyset$. Therefore both sides are zero when $I\cap J\ne\emptyset$. It remains to consider the case $I\cap J = \emptyset$, $R$ is a rearrangement of $I$, and $S$ is a rearrangement of $J$. It is then easy to see that both sides are $\pm 1$ with the same sign.

Exercise 7.58. Compared with the non-singular case, the problem is that we cannot use dual bases to define $\Lambda^kb$. On the other hand, we do expect the induced bilinear map to satisfy
$$\Lambda^kb(\vec x_1\wedge\cdots\wedge\vec x_k,\ \vec y_1\wedge\cdots\wedge\vec y_k) = \det(b(\vec x_i,\vec y_j))_{1\le i,j\le k}.$$
We could certainly use the equality to define $\Lambda^kb$. But two obstacles need to be overcome. The first is whether $\Lambda^kb$ is well defined, because we could have $\vec x_1\wedge\cdots\wedge\vec x_k = \vec x_1'\wedge\cdots\wedge\vec x_k'$ with $\vec x_i\ne\vec x_i'$. The second is that the $\vec x_1\wedge\cdots\wedge\vec x_k$ are not all the elements in $\Lambda^kV$; the vector space $\Lambda^kV$ consists of linear combinations of vectors of the form $\vec x_1\wedge\cdots\wedge\vec x_k$.

So we introduce the map
$$p(\vec x_1,\dots,\vec x_k,\vec y_1,\dots,\vec y_k) = \det(b(\vec x_i,\vec y_j))_{1\le i,j\le k}\colon V^k\times W^k\to\mathbb R.$$
For fixed $\vec y_1,\dots,\vec y_k$, the function is multilinear and alternating in $\vec x_1,\dots,\vec x_k$. Therefore the function is the value of a linear functional of $\Lambda^kV$ at $\vec x_1\wedge\cdots\wedge\vec x_k$. The linear functional depends on $\vec y_1,\dots,\vec y_k$, and we may denote it by $q(\vec\xi, (\vec y_1,\dots,\vec y_k))$, $\vec\xi\in\Lambda^kV$. Moreover, for the special case $\vec\xi = \vec x_1\wedge\cdots\wedge\vec x_k$, we have
$$q(\vec x_1\wedge\cdots\wedge\vec x_k, (\vec y_1,\dots,\vec y_k)) = \det(b(\vec x_i,\vec y_j))_{1\le i,j\le k}.$$
For fixed $\vec x_1,\dots,\vec x_k$, the formula above is multilinear and alternating in $\vec y_1,\dots,\vec y_k$. Since $\vec\xi\in\Lambda^kV$ is a linear combination of vectors of the form $\vec x_1\wedge\cdots\wedge\vec x_k$, and $q(\vec\xi, (\vec y_1,\dots,\vec y_k))$ is linear in $\vec\xi$, we find that for fixed $\vec\xi$, $q(\vec\xi, (\vec y_1,\dots,\vec y_k))$ is also multilinear and alternating in $\vec y_1,\dots,\vec y_k$. Therefore it is the value of a linear functional of $\Lambda^kW$ at $\vec y_1\wedge\cdots\wedge\vec y_k$. The linear functional depends on $\vec\xi$ linearly, and we may denote it by $r(\vec\xi,\vec\eta)$, $\vec\eta\in\Lambda^kW$. Then $r(\vec\xi,\vec\eta)$ is bilinear on $\Lambda^kV\times\Lambda^kW$, and for the special case $\vec\xi = \vec x_1\wedge\cdots\wedge\vec x_k$ and $\vec\eta = \vec y_1\wedge\cdots\wedge\vec y_k$, we have
$$r(\vec x_1\wedge\cdots\wedge\vec x_k,\ \vec y_1\wedge\cdots\wedge\vec y_k) = \det(b(\vec x_i,\vec y_j))_{1\le i,j\le k}.$$
This $r$ is our induced bilinear function $\Lambda^kb\colon\Lambda^kV\times\Lambda^kW\to\mathbb R$.

Exercise 7.59. Both sides of
$$\Lambda^{k_1+k_2}b(\vec\lambda_1\wedge\vec\lambda_2,\ \vec\mu_1\wedge\vec\mu_2) = \Lambda^{k_1}b_1(\vec\lambda_1,\vec\mu_1)\,\Lambda^{k_2}b_2(\vec\lambda_2,\vec\mu_2)$$
are quadruple linear functions of $(\vec\lambda_1,\vec\lambda_2,\vec\mu_1,\vec\mu_2)\in\Lambda^{k_1}V_1\times\Lambda^{k_2}V_2\times\Lambda^{k_1}W_1\times\Lambda^{k_2}W_2$. So we only need to verify the equality for the case $\vec\lambda_1,\vec\lambda_2,\vec\mu_1,\vec\mu_2$ are basis vectors in the respective spaces. The basis vectors are obtained as follows. Let $\alpha_i$, $\beta_i$ be a pair of dual bases of $V_i$ and $W_i$ for $b_i$. Then $\alpha_1\cup\alpha_2$, $\beta_1\cup\beta_2$ is a pair of dual bases of $V_1\oplus V_2$ and $W_1\oplus W_2$. We only need to consider the case $\vec\lambda_1,\vec\lambda_2,\vec\mu_1,\vec\mu_2$ are wedge products of vectors in $\alpha_1,\alpha_2,\beta_1,\beta_2$. Since $b(\vec v,\vec w) = 0$

for $\vec v\in V_1$, $\vec w\in W_2$, the equality (7.36) implies that $b(\vec v,\vec w) = 0$ if $\vec v$ is a factor of $\vec\lambda_1$ and $\vec w$ is a factor of $\vec\mu_2$. Since $b(\vec v,\vec w) = 0$ for $\vec v\in V_2$, $\vec w\in W_1$, the equality (7.36) implies that $b(\vec v,\vec w) = 0$ if $\vec v$ is a factor of $\vec\lambda_2$ and $\vec w$ is a factor of $\vec\mu_1$. Then the equality (7.36) for $\Lambda^{k_1+k_2}b(\vec\lambda_1\wedge\vec\lambda_2,\vec\mu_1\wedge\vec\mu_2)$ becomes
$$\Lambda^{k_1+k_2}b(\vec\lambda_1\wedge\vec\lambda_2,\ \vec\mu_1\wedge\vec\mu_2) = \det\begin{pmatrix}A_1 & O\\ O & A_2\end{pmatrix} = \det A_1\det A_2,$$
where $\det A_i$ is the equality (7.36) for $\Lambda^{k_i}b_i(\vec\lambda_i,\vec\mu_i)$. Therefore we conclude that
$$\Lambda^{k_1+k_2}b(\vec\lambda_1\wedge\vec\lambda_2,\ \vec\mu_1\wedge\vec\mu_2) = \det A_1\det A_2 = \Lambda^{k_1}b_1(\vec\lambda_1,\vec\mu_1)\,\Lambda^{k_2}b_2(\vec\lambda_2,\vec\mu_2).$$

Remark: Exercise 7.60 is a special case of the current exercise, with $W_i = V_i^*$ and $b_i$ being the canonical dual pairing. Exercise 7.62 deals with the special case of the inner product dual pairing, but gives a more general formula. It is possible to generalise the current exercise in the similar way.

Exercise 7.60. This is a special case of Exercise 7.59. Take $b_i$ to be the canonical dual pairing between $V$ and $V^*$. Take $\vec\lambda_1,\vec\lambda_2,\vec\mu_1,\vec\mu_2$ to be $f$, $g$, $\vec x_1\wedge\cdots\wedge\vec x_k$, $\vec x_{k+1}\wedge\cdots\wedge\vec x_{k+l}$.

Exercise 7.61. The inner product on $\Lambda V$ is defined by extending any orthonormal basis of $V$ to an orthonormal basis of $\Lambda V$. Since an orthogonal transform preserves orthonormal bases of $V$, its induced homomorphism also preserves the orthonormal basis of $\Lambda V$. In other words, the induced map on $\Lambda V$ is also orthogonal.

Exercise 7.62. Suppose $W$ is a subspace of an inner product space $V$, and $W^\perp$ is the orthogonal complement of $W$ in $V$. Prove that $\Lambda W$ and $\Lambda W^\perp$ are orthogonal in $\Lambda V$. Moreover, for $\vec\lambda\in\Lambda W$ and $\vec\eta\in\Lambda W^\perp$, prove that
$$\langle\vec\lambda\wedge\vec\mu,\ \vec\xi\wedge\vec\eta\rangle = \langle\vec\lambda,\vec\xi\rangle\langle\vec\mu,\vec\eta\rangle.$$
Both sides of $\langle\vec\lambda\wedge\vec\mu,\vec\xi\wedge\vec\eta\rangle = \langle\vec\lambda,\vec\xi\rangle\langle\vec\mu,\vec\eta\rangle$ are quadruple linear functions of $(\vec\lambda,\vec\mu,\vec\xi,\vec\eta)\in\Lambda W\times\Lambda V\times\Lambda V\times\Lambda W^\perp$. So we only need to verify the equality for the case
$$\vec\lambda = \vec u_1\wedge\cdots\wedge\vec u_k,\ \vec u_i\in W, \qquad \vec\mu = \vec x_1\wedge\cdots\wedge\vec x_l,\ \vec x_i\in V, \qquad \vec\xi = \vec y_1\wedge\cdots\wedge\vec y_k,\ \vec y_i\in V, \qquad \vec\eta = \vec v_1\wedge\cdots\wedge\vec v_l,\ \vec v_i\in W^\perp.$$

By (737) and $\langle\vec u_i,\vec v_j\rangle=0$, we have
$$\langle\lambda\wedge\mu,\ \xi\wedge\eta\rangle
=\det\begin{pmatrix}
\langle\vec u_1,\vec y_1\rangle&\cdots&\langle\vec u_1,\vec y_k\rangle&\langle\vec u_1,\vec v_1\rangle&\cdots&\langle\vec u_1,\vec v_l\rangle\\
\vdots&&\vdots&\vdots&&\vdots\\
\langle\vec u_k,\vec y_1\rangle&\cdots&\langle\vec u_k,\vec y_k\rangle&\langle\vec u_k,\vec v_1\rangle&\cdots&\langle\vec u_k,\vec v_l\rangle\\
\langle\vec x_1,\vec y_1\rangle&\cdots&\langle\vec x_1,\vec y_k\rangle&\langle\vec x_1,\vec v_1\rangle&\cdots&\langle\vec x_1,\vec v_l\rangle\\
\vdots&&\vdots&\vdots&&\vdots\\
\langle\vec x_l,\vec y_1\rangle&\cdots&\langle\vec x_l,\vec y_k\rangle&\langle\vec x_l,\vec v_1\rangle&\cdots&\langle\vec x_l,\vec v_l\rangle
\end{pmatrix}$$
$$=\det\begin{pmatrix}
\langle\vec u_1,\vec y_1\rangle&\cdots&\langle\vec u_1,\vec y_k\rangle&0&\cdots&0\\
\vdots&&\vdots&\vdots&&\vdots\\
\langle\vec u_k,\vec y_1\rangle&\cdots&\langle\vec u_k,\vec y_k\rangle&0&\cdots&0\\
\langle\vec x_1,\vec y_1\rangle&\cdots&\langle\vec x_1,\vec y_k\rangle&\langle\vec x_1,\vec v_1\rangle&\cdots&\langle\vec x_1,\vec v_l\rangle\\
\vdots&&\vdots&\vdots&&\vdots\\
\langle\vec x_l,\vec y_1\rangle&\cdots&\langle\vec x_l,\vec y_k\rangle&\langle\vec x_l,\vec v_1\rangle&\cdots&\langle\vec x_l,\vec v_l\rangle
\end{pmatrix}$$
$$=\det\begin{pmatrix}\langle\vec u_1,\vec y_1\rangle&\cdots&\langle\vec u_1,\vec y_k\rangle\\ \vdots&&\vdots\\ \langle\vec u_k,\vec y_1\rangle&\cdots&\langle\vec u_k,\vec y_k\rangle\end{pmatrix}
\det\begin{pmatrix}\langle\vec x_1,\vec v_1\rangle&\cdots&\langle\vec x_1,\vec v_l\rangle\\ \vdots&&\vdots\\ \langle\vec x_l,\vec v_1\rangle&\cdots&\langle\vec x_l,\vec v_l\rangle\end{pmatrix}
=\langle\lambda,\xi\rangle\langle\mu,\eta\rangle.$$

Exercise 763. By Exercise 73, we have $I_{\gamma\alpha}=I_{\gamma\beta}I_{\beta\alpha}$. Therefore $\det I_{\gamma\alpha}=\det I_{\gamma\beta}\det I_{\beta\alpha}$ (see Exercise 742).

Suppose $\det I_{\beta\alpha}>0$. Then the equality implies that $\det I_{\gamma\alpha}$ and $\det I_{\gamma\beta}$ have the same sign. Therefore $\gamma\in o_\alpha\iff\gamma\in o_\beta$ and $\gamma\in -o_\alpha\iff\gamma\in -o_\beta$, where $-o$ denotes the opposite orientation. This proves $o_\alpha=o_\beta$ and $-o_\alpha=-o_\beta$.

Suppose $\det I_{\beta\alpha}<0$. Then the equality implies that $\det I_{\gamma\alpha}$ and $\det I_{\gamma\beta}$ have the opposite sign. Therefore $\gamma\in o_\alpha\iff\gamma\in -o_\beta$ and $\gamma\in -o_\alpha\iff\gamma\in o_\beta$. This proves $o_\alpha=-o_\beta$ and $-o_\alpha=o_\beta$.

Exercise 764. Since $I_{\alpha\alpha}$ is the identity matrix, we have $\det I_{\alpha\alpha}=1>0$. This shows that $\alpha$ and $\alpha$ are compatibly oriented. Since $I_{\alpha\beta}=I_{\beta\alpha}^{-1}$, we have $\det I_{\alpha\beta}=(\det I_{\beta\alpha})^{-1}$. In particular, $\det I_{\alpha\beta}>0\iff\det I_{\beta\alpha}>0$. This shows that if $\alpha$ and $\beta$ are compatibly oriented, then $\beta$ and $\alpha$ are compatibly oriented. By $\det I_{\gamma\alpha}=\det I_{\gamma\beta}\det I_{\beta\alpha}$, we get $\det I_{\gamma\beta}>0$ and $\det I_{\beta\alpha}>0\implies\det I_{\gamma\alpha}>0$. This shows that if $\alpha$ and $\beta$ are compatibly oriented, and $\beta$ and $\gamma$ are compatibly oriented, then $\alpha$ and $\gamma$ are compatibly oriented.

Exercise 765. By
$$\vec v_1\wedge\cdots\wedge\vec v_j\wedge\cdots\wedge\vec v_i\wedge\cdots\wedge\vec v_n=-\vec v_1\wedge\cdots\wedge\vec v_i\wedge\cdots\wedge\vec v_j\wedge\cdots\wedge\vec v_n,$$
exchanging two basis vectors reverses the orientation. By
$$\vec v_1\wedge\cdots\wedge c\vec v_i\wedge\cdots\wedge\vec v_n=c\,\vec v_1\wedge\cdots\wedge\vec v_i\wedge\cdots\wedge\vec v_n,$$
multiplying a basis vector by $c\ne 0$ preserves the orientation if $c>0$ and reverses the orientation if $c<0$. By
$$\vec v_1\wedge\cdots\wedge(\vec v_i+c\vec v_j)\wedge\cdots\wedge\vec v_j\wedge\cdots\wedge\vec v_n=\vec v_1\wedge\cdots\wedge\vec v_i\wedge\cdots\wedge\vec v_j\wedge\cdots\wedge\vec v_n,$$
adding a scalar multiple of one basis vector to another basis vector preserves the orientation.

Exercise 766.
1. $\vec e_2\wedge\vec e_3\wedge\cdots\wedge\vec e_n\wedge\vec e_1=(-1)^{n-1}\vec e_1\wedge\vec e_2\wedge\vec e_3\wedge\cdots\wedge\vec e_n$. Positively oriented for odd $n$; negatively oriented for even $n$.
2. $\vec e_n\wedge\vec e_{n-1}\wedge\cdots\wedge\vec e_1=(-1)^{\frac12 n(n-1)}\vec e_1\wedge\cdots\wedge\vec e_n$. Positively oriented for $n=0,1\bmod 4$; negatively oriented for $n=2,3\bmod 4$.
3. $(-\vec e_1)\wedge(-\vec e_2)\wedge\cdots\wedge(-\vec e_n)=(-1)^n\vec e_1\wedge\cdots\wedge\vec e_n$. Positively oriented for even $n$; negatively oriented for odd $n$.
4. $\det\begin{pmatrix}1&3\\2&4\end{pmatrix}=-2$. Negatively oriented.
5. $\det\begin{pmatrix}1&4&7\\2&5&8\\3&6&10\end{pmatrix}=-3$. Negatively oriented.
6. $\det\begin{pmatrix}1&0&1\\0&1&1\\1&1&0\end{pmatrix}=-2$. Negatively oriented.
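Since an ordered basis of $\mathbb{R}^n$ is positively oriented exactly when the matrix with those vectors as columns has positive determinant, the three elementary operations of Exercise 765 and the determinants of items 4–6 in Exercise 766 can be checked directly. A numpy sketch (the random basis is illustrative; the three matrices are the ones in the exercise):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))             # columns form a basis of R^n (almost surely)
s = np.sign(np.linalg.det(A))

B = A.copy(); B[:, [0, 2]] = B[:, [2, 0]]   # exchange two basis vectors
assert np.sign(np.linalg.det(B)) == -s      # reverses the orientation

C = A.copy(); C[:, 1] *= -3.0               # multiply a basis vector by c < 0
assert np.sign(np.linalg.det(C)) == -s      # reverses the orientation

D = A.copy(); D[:, 1] += 2.5 * D[:, 3]      # add a multiple of another basis vector
assert np.sign(np.linalg.det(D)) == s       # preserves the orientation

# Exercise 766, items 4-6: all three bases are negatively oriented
d4 = np.linalg.det(np.array([[1., 3.], [2., 4.]]))
d5 = np.linalg.det(np.array([[1., 4., 7.], [2., 5., 8.], [3., 6., 10.]]))
d6 = np.linalg.det(np.array([[1., 0., 1.], [0., 1., 1.], [1., 1., 0.]]))
print(round(d4), round(d5), round(d6))      # -2 -3 -2
```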

Exercise 767. The subset $o_V\subset\Lambda^nV-\{\vec 0\}$ is exactly the set of wedge products of all the bases in $o_V\subset\{\text{ordered bases}\}$. Since $\Lambda L$ preserves the wedge product, the exercise follows.

Exercise 768. If $L(o_U)=o_V$ and $K(o_V)=o_W$, then $(K\circ L)(o_U)=K(L(o_U))=K(o_V)=o_W$. So $K\circ L$ also preserves the orientation. If $L(o_V)=o_W$, then $o_V=L^{-1}(L(o_V))=L^{-1}(o_W)$. So $L^{-1}$ also preserves the orientation. In general, $K\circ L$ preserves the orientation if both $K$ and $L$ preserve, or both $K$ and $L$ reverse, the orientation; and $K\circ L$ reverses the orientation if one of $K$ and $L$ preserves and the other reverses. If $L$ reverses the orientation, then $L^{-1}$ also reverses the orientation.

Exercise 769. For the special case that $\xi$ is the wedge product of a basis of $V$ and $l$ is the wedge product of the dual basis, we have $l(\xi)=1>0$. If the basis of $V$ is positively oriented, then $\xi\in o_V$ and $l\in o_{V^*}$. So we get the characterisation in the exercise.

Exercise 770. If $\alpha=\{\vec v_1,\dots,\vec v_n\}\in o_V$, then the dual basis $\alpha^*=\{\vec v_1^{\,*},\dots,\vec v_n^{\,*}\}\in o_{V^*}$. Therefore $L(\alpha)=\{L(\vec v_1),\dots,L(\vec v_n)\}\in L(o_V)=o_W$, and $L(\alpha)^*=\{(L(\vec v_1))^*,\dots,(L(\vec v_n))^*\}\in o_{W^*}$. We need to show that $L^*(L(\alpha)^*)$ and $\alpha^*$ are compatibly oriented. In fact, we claim that the two bases are the same. The dual basis $L(\alpha)^*$ is defined by
$$(L(\vec v_i))^*(L(\vec v_j))=\delta_{ij}.$$
This is the same as
$$L^*\big((L(\vec v_i))^*\big)(\vec v_j)=\delta_{ij}.$$
This shows that $L^*(L(\alpha)^*)$ is the dual basis of $\alpha$. Therefore $L^*(L(\alpha)^*)=\alpha^*$.

Exercise 774. Suppose $\alpha_i,\beta_i$ are two bases of $V_i$. Then
$$I_{(\beta_1\cup\beta_2)(\alpha_1\cup\alpha_2)}=\begin{pmatrix}I_{\beta_1\alpha_1}&O\\O&I_{\beta_2\alpha_2}\end{pmatrix}.$$
This implies $\det I_{(\beta_1\cup\beta_2)(\alpha_1\cup\alpha_2)}=\det I_{\beta_1\alpha_1}\det I_{\beta_2\alpha_2}$. If $\det I_{\beta_1\alpha_1}>0$ and $\det I_{\beta_2\alpha_2}>0$, then $\det I_{(\beta_1\cup\beta_2)(\alpha_1\cup\alpha_2)}>0$.

The other way is to notice that, if $\wedge\beta_i=a_i(\wedge\alpha_i)$, then
$$\wedge(\beta_1\cup\beta_2)=(\wedge\beta_1)\wedge(\wedge\beta_2)=a_1a_2(\wedge\alpha_1)\wedge(\wedge\alpha_2).$$
Then $a_1,a_2>0\implies a_1a_2>0$.

If $V_1$ and $V_2$ are exchanged, the orientation is changed by $(-1)^{(\dim V_1)(\dim V_2)}$. Since preserving the orientation of $V_1$ while reversing the orientation of $V_2$ reverses the orientation of $V$, for the given orientations of $V_1$ and $V$, there is a unique orientation of $V_2$ compatible with the orientations of $V_1$ and $V$. So the orientation of $V_2$ is determined.
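The composition rules of Exercise 768 and the direct-sum computation of Exercise 774 both come down to multiplicativity of the determinant sign, which a short numerical sketch can confirm (numpy; the matrices and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
sgn = lambda M: np.sign(np.linalg.det(M))

# Exercise 768: K∘L preserves orientation iff K and L both preserve or both reverse,
# and L^{-1} behaves the same way as L.
K, L = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
assert sgn(K @ L) == sgn(K) * sgn(L)
assert sgn(np.linalg.inv(L)) == sgn(L)

# Exercise 774: a block-diagonal change of basis on V1 (+) V2 satisfies
# det I = det I_1 * det I_2.
I1, I2 = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))
I = np.block([[I1, np.zeros((3, 2))], [np.zeros((2, 3)), I2]])
assert np.isclose(np.linalg.det(I), np.linalg.det(I1) * np.linalg.det(I2))
print("ok")
```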

Exercise 775. The matrix $I_{\beta\alpha}$ between two orthonormal bases is orthogonal and has determinant $\pm 1$. Therefore $\wedge\alpha=\pm(\wedge\beta)$. If the two orthonormal bases are compatibly oriented, then the sign must be positive.

Exercise 777. The equalities in the exercise will be derived from the following: for $\lambda\in\Lambda^kV$ and $\mu\in\Lambda^{n-k}V$, we have
$$\lambda\wedge\mu=\langle*\lambda,\mu\rangle e,$$
and $\langle*\lambda,*\mu\rangle=\langle\lambda,\mu\rangle$ for $\lambda,\mu$ of the same degree. We compute
$$\lambda\wedge\mu=\langle*\lambda,\mu\rangle e,$$
and
$$\lambda\wedge\mu=(-1)^{k(n-k)}\mu\wedge\lambda=(-1)^{k(n-k)}\langle*\mu,\lambda\rangle e=(-1)^{k(n-k)}\langle\lambda,*\mu\rangle e.$$
Comparing the two equalities, we get
$$\langle*\lambda,\mu\rangle=(-1)^{k(n-k)}\langle\lambda,*\mu\rangle.$$
Then by $\langle*\lambda,*\mu\rangle=\langle\lambda,\mu\rangle$, we get
$$\langle*\lambda,**\mu\rangle=\langle\lambda,*\mu\rangle=(-1)^{k(n-k)}\langle*\lambda,\mu\rangle.$$
Since any vector in $\Lambda^{n-k}V$ is of the form $*\lambda$, we conclude that
$$**\mu=(-1)^{k(n-k)}\mu.$$

Exercise 778. Let $\dim V=\dim W=n$. Let $e_V$ and $e_W$ be the canonical bases of $\Lambda^nV$ and $\Lambda^nW$. If $L$ preserves the orientation, then $\Lambda L(e_V)=e_W$. Moreover, $\Lambda L$ preserves the induced inner product. Therefore
$$\langle*\Lambda L(\lambda),\Lambda L(\mu)\rangle e_W=\Lambda L(\lambda)\wedge\Lambda L(\mu)=\Lambda L(\lambda\wedge\mu)=\Lambda L\big(\langle*\lambda,\mu\rangle e_V\big)=\langle*\lambda,\mu\rangle e_W=\langle\Lambda L(*\lambda),\Lambda L(\mu)\rangle e_W.$$
Then we have
$$\langle*\Lambda L(\lambda),\Lambda L(\mu)\rangle=\langle\Lambda L(*\lambda),\Lambda L(\mu)\rangle.$$
Since this is true for all $\mu\in\Lambda^{n-k}V$, and the isomorphism $\Lambda L$ takes $\mu$ to all vectors in $\Lambda^{n-k}W$, we conclude that $*\Lambda L(\lambda)=\Lambda L(*\lambda)$. If $L$ reverses the orientation, then $\Lambda L(e_V)=-e_W$, and the same argument gives $*\Lambda L(\lambda)=-\Lambda L(*\lambda)$.

Exercise 779.

Let $\{\vec w_1,\dots,\vec w_k\}$ be a positively oriented orthonormal basis of $W$. Let $\{\vec u_1,\dots,\vec u_l\}$ be a positively oriented orthonormal basis of $W^\perp$. Then $\{\vec w_1,\dots,\vec w_k,\vec u_1,\dots,\vec u_l\}$ is a positively oriented orthonormal basis of $V=W\oplus W^\perp$. We have
$$e_W=\vec w_1\wedge\cdots\wedge\vec w_k,\qquad e_{W^\perp}=\vec u_1\wedge\cdots\wedge\vec u_l,$$
so that
$$e_V=\vec w_1\wedge\cdots\wedge\vec w_k\wedge\vec u_1\wedge\cdots\wedge\vec u_l=e_W\wedge e_{W^\perp}.$$
Then for any $\mu\in\Lambda^{n-k}V$, we use Exercise 762:
$$\langle*e_W,\mu\rangle=\langle*e_W,\mu\rangle\langle e_V,e_V\rangle=\langle e_W\wedge\mu,\ e_V\rangle=\langle e_W\wedge\mu,\ e_W\wedge e_{W^\perp}\rangle=\langle e_W,e_W\rangle\langle\mu,e_{W^\perp}\rangle=\langle e_{W^\perp},\mu\rangle.$$
Since this is true for all $\mu$, we get $*e_W=e_{W^\perp}$.

Exercise 780. We apply Exercise 779 to $W=\operatorname{span}\{\vec x_1,\dots,\vec x_k\}$ and $W^\perp=\operatorname{span}\{\vec x_{k+1},\dots,\vec x_n\}$. We have
$$e_W=\frac{\vec x_1\wedge\cdots\wedge\vec x_k}{\|\vec x_1\wedge\cdots\wedge\vec x_k\|},\qquad e_{W^\perp}=\frac{\vec x_{k+1}\wedge\cdots\wedge\vec x_n}{\|\vec x_{k+1}\wedge\cdots\wedge\vec x_n\|}.$$
The orthogonality between the first $k$ vectors and the last $n-k$ vectors (together with positively oriented $\alpha$) also implies (either by Exercise 762 and $\det(X^TX)=(\det X)^2$, or by the geometric meaning of volume as the square root of a determinant, or as the norm of the exterior product vector)
$$\det(\vec x_1\ \cdots\ \vec x_n)=\|\vec x_1\wedge\cdots\wedge\vec x_k\|\,\|\vec x_{k+1}\wedge\cdots\wedge\vec x_n\|.$$
Then by Exercise 779, we have $*e_W=e_{W^\perp}$. This gives
$$*(\vec x_1\wedge\cdots\wedge\vec x_k)=\|\vec x_1\wedge\cdots\wedge\vec x_k\|\,\frac{\vec x_{k+1}\wedge\cdots\wedge\vec x_n}{\|\vec x_{k+1}\wedge\cdots\wedge\vec x_n\|}=\frac{\|\vec x_1\wedge\cdots\wedge\vec x_k\|\,\|\vec x_{k+1}\wedge\cdots\wedge\vec x_n\|}{\|\vec x_{k+1}\wedge\cdots\wedge\vec x_n\|^2}\,\vec x_{k+1}\wedge\cdots\wedge\vec x_n=\frac{\det(\vec x_1\ \cdots\ \vec x_n)}{\|\vec x_{k+1}\wedge\cdots\wedge\vec x_n\|^2}\,\vec x_{k+1}\wedge\cdots\wedge\vec x_n.$$

Exercise 781. Introduce the cofactor
$$C_i=\det\begin{pmatrix}
x_{11}&x_{21}&\cdots&x_{(n-1)1}\\
\vdots&\vdots&&\vdots\\
x_{1(i-1)}&x_{2(i-1)}&\cdots&x_{(n-1)(i-1)}\\
x_{1(i+1)}&x_{2(i+1)}&\cdots&x_{(n-1)(i+1)}\\
\vdots&\vdots&&\vdots\\
x_{1n}&x_{2n}&\cdots&x_{(n-1)n}
\end{pmatrix}.$$

Then (see the answer to Exercise 746)
$$*(\vec x_1\wedge\cdots\wedge\vec x_{n-1})=\sum_{1\le i\le n}C_i\,*(\vec e_1\wedge\cdots\wedge\vec e_{i-1}\wedge\vec e_{i+1}\wedge\cdots\wedge\vec e_n)=\sum_{1\le i\le n}(-1)^{n-i}C_i\,\vec e_i=(-1)^n(-C_1,C_2,-C_3,\dots).$$
By
$$\big(*(\vec x_1\wedge\cdots\wedge\vec x_{n-1})\cdot\vec x_i\big)e_{[n]}=\vec x_1\wedge\cdots\wedge\vec x_{n-1}\wedge\vec x_i=\vec 0,$$
we have $*(\vec x_1\wedge\cdots\wedge\vec x_{n-1})\cdot\vec x_i=0$. This is the claim about the orthogonality.

Exercise 782. We have
$$\big(*(\vec x_1\wedge\cdots\wedge\vec x_{n-1})\cdot\vec x_n\big)e_{[n]}=\vec x_1\wedge\cdots\wedge\vec x_{n-1}\wedge\vec x_n=\det(\vec x_1\ \cdots\ \vec x_n)\,e_{[n]}.$$
Therefore $*(\vec x_1\wedge\cdots\wedge\vec x_{n-1})\cdot\vec x_n=\det(\vec x_1\ \cdots\ \vec x_n)$. The cofactor expansion is obtained by further using $*(\vec x_1\wedge\cdots\wedge\vec x_{n-1})=(-1)^n(-C_1,C_2,-C_3,\dots)$ from Exercise 781.
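Both the sign rule $**\mu=(-1)^{k(n-k)}\mu$ of Exercise 777 and the cofactor description of the generalised cross product in Exercises 781–782 can be checked computationally. The sketch below (illustrative helper names, not from the text) implements the Hodge star on the standard orthonormal basis $e_S$ of $\Lambda^k\mathbb{R}^n$ via $*e_S=\operatorname{sign}(S,S^c)\,e_{S^c}$, which matches the convention $\lambda\wedge\mu=\langle*\lambda,\mu\rangle e_{[n]}$ used above:

```python
import numpy as np
from itertools import combinations

def perm_sign(p):
    """Sign of the permutation p of 0..n-1, counted by inversions."""
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def hodge_star(coeffs, n):
    """Hodge star on Lambda^k R^n. coeffs maps sorted k-subsets S to coefficients;
    *e_S = sign(S, S^c) e_{S^c}, so that e_S ^ (*e_S) = e_{[n]}."""
    out = {}
    for S, c in coeffs.items():
        Sc = tuple(i for i in range(n) if i not in S)
        out[Sc] = out.get(Sc, 0) + perm_sign(S + Sc) * c
    return out

n = 4
# Exercise 777: applying * twice multiplies by (-1)^{k(n-k)}
for k in range(n + 1):
    for S in combinations(range(n), k):
        assert hodge_star(hodge_star({S: 1}, n), n) == {S: (-1) ** (k * (n - k))}

# Exercises 781/782: *(x_1 ^ ... ^ x_{n-1}) is the signed-cofactor vector
rng = np.random.default_rng(3)
X = rng.standard_normal((n, n - 1))          # columns x_1, ..., x_{n-1}

# expand x_1 ^ ... ^ x_{n-1} over the basis (n-1)-vectors e_S: coefficients are minors
blade = {S: np.linalg.det(X[list(S), :]) for S in combinations(range(n), n - 1)}
v = np.zeros(n)
for Sc, c in hodge_star(blade, n).items():   # each complement Sc is a single index
    v[Sc[0]] = c

for i in range(n - 1):
    assert np.isclose(v @ X[:, i], 0)        # orthogonal to every factor (Exercise 781)
x_n = rng.standard_normal(n)
assert np.isclose(v @ x_n, np.linalg.det(np.hstack([X, x_n[:, None]])))  # Exercise 782
print("ok")
```

For $n=3$ this `v` is the usual cross product $\vec x_1\times\vec x_2$; the loop over $k$ confirms the $(-1)^{k(n-k)}$ sign in every degree.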