THE GEOMETRY IN GEOMETRIC ALGEBRA


THE GEOMETRY IN GEOMETRIC ALGEBRA

A THESIS

Presented to the Faculty of the University of Alaska Fairbanks
in Partial Fulfillment of the Requirements
for the Degree of

MASTER OF SCIENCE

By Kristopher N. Kilpatrick, B.S.

Fairbanks, Alaska
December 2014

Abstract

We present an axiomatic development of geometric algebra. One may think of a geometric algebra as allowing one to add and multiply subspaces of a vector space. Properties of the geometric product are proven, and derived products, called the wedge and contraction products, are introduced. Linear-algebraic and geometric concepts such as linear independence and orthogonality may be expressed through these derived products. Some examples with geometric algebra are then given.


Table of Contents

Signature Page
Title Page
Abstract
Table of Contents
Preface
Chapter 1: Preliminaries
    Definitions
    Grades resulting from multiplication
        Multiplication of two vectors
        Multiplication of a vector and a blade
        Reversion
        Multiplication of two blades
    Products derived from the geometric product
        Outer or wedge product
        Linear independence and the wedge product
        The wedge of r vectors is an r-blade
        Left/right contraction
        Scalar product
        Cyclic permutation of the scalar product
        Signed magnitude
        Additional identities
Chapter 2: The geometry of blades
    Wedge product and containment
    Blades represent oriented weighted subspaces
    Direct sum
    Contraction and orthogonality
    The contraction of blades is a blade
    Orthogonal complement
    Geometric, wedge and contraction relations
    Duality
    Projection operator
Chapter 3: Examples with geometric algebra
    Lines and planes
        Lines
        Planes
        The point of intersection of a line and a plane
    The Kepler Problem
    Reflections and rotations
        Reflections
        Rotations
    Finding the components of a vector
Chapter 4: Appendix
    Construction of a geometric algebra
References


Preface

The birth of Clifford algebra can be attributed to three mathematicians: Hermann Günther Grassmann, William Rowan Hamilton and William Kingdon Clifford. Grassmann contributed greatly to the early theory of linear algebra, and one of those contributions was the exterior, or wedge, product. Hamilton invented the quaternions, a way of extending the complex numbers into 4 dimensions. Clifford synthesized the work of the two mathematicians into an algebra that he coined geometric algebra.

The purpose of this thesis is to come to terms with geometric algebra. I originally became interested in geometric algebra through my undergraduate physics teacher, who informed me that geometric algebra would be the language used by physicists. I greatly admired that man, and so I picked up the book he recommended, Clifford Algebra to Geometric Calculus by David Hestenes and Garret Sobczyk [HS84]. I was taken aback by the plethora of identities with no meaning whatsoever behind them. My advisor David Maxwell has helped me greatly by asking me questions that I could not answer and could not find answers to in [HS84]. That book, I believe, is written more for physicists than for mathematicians. I realized that one way to understand this subject would be to start, as a mathematician should, from a set of axioms and build up the theory of geometric algebra.

Chapter 1 deals with establishing the product of two vectors, then the product of a vector and a blade, and finally the product of a blade with a blade. The wedge, left/right contraction and scalar products are introduced and identities involving them are derived. Chapter 2 deals with establishing the correspondence between a subspace and a blade. Subspaces are then studied with the wedge and right contraction products. Chapter 3 deals with lines and planes, the Kepler problem, rotations and reflections, and finding the components of a vector via the wedge product.

My indebtedness goes to David Maxwell and John Rhodes for their numerous conversations about many mathematical topics and their patience with me. Finally, I should like to express

my thanks to the University of Alaska Fairbanks for its financial support of my academic studies.

Chapter 1: Preliminaries

We begin by defining a Clifford algebra over the reals, or as Clifford called it, a geometric algebra. Henceforth we shall also use the name geometric algebra. After defining a geometric algebra, we establish the result of the product of two blades. We find that the product of two blades always has a highest and a lowest grade. New products will be defined based on the highest and lowest grades, and useful identities between the products will be established that facilitate quick and efficient computations.

1.1 Definitions

There are two common approaches to defining a geometric algebra. The axiomatic treatments found in [HS84] and [DL03] have the merit of being more accessible, but these works lack full rigor. On the other hand, the treatment in [Che97] is mathematically rigorous, but its abstract, formal style lacks accessibility. Our approach is intermediate between the two. We start with a set of axioms inspired by [HS84], but modified so as to allow for a rigorous subsequent development.

Definition. A geometric algebra $\mathcal{G}$ is an algebra, with identity, over $\mathbb{R}$ with the following additional structure:

1) There are distinguished subspaces $\mathcal{G}_0, \mathcal{G}_1, \dots$ such that $\mathcal{G} = \mathcal{G}_0 \oplus \mathcal{G}_1 \oplus \cdots$.

2) $1 \in \mathcal{G}_0$.

3) $\mathcal{G}_1$ is equipped with a non-degenerate, symmetric bilinear form $B$. Recall that a symmetric bilinear form $B$ on a vector space $V$ is non-degenerate if $B(x, y) = 0$ for all $y \in V$ implies $x = 0$.

4) For all $a \in \mathcal{G}_1$, $a^2 = B(a, a)1 \in \mathcal{G}_0$.

5) For each integer $r \geq 2$, $\mathcal{G}_r$ is spanned by all $r$-blades, where an $r$-blade is a product of $r$ mutually anti-commuting elements from $\mathcal{G}_1$. Recall that two elements $a, b$ anti-commute if $ab = -ba$.

Remark. The explicit multiplication by $1$ will not be written. We shall write $a^2 = B(a, a)$ instead of $a^2 = B(a, a)1$; that is, we are identifying $\mathbb{R} \subseteq \mathcal{G}_0$.

Definition. Elements of $\mathcal{G}$ will be called multivectors. Elements of $\mathcal{G}_r$ will be called $r$-vectors and will be said to have grade $r$. Elements of $\mathcal{G}_0$ will be called scalars; elements of $\mathcal{G}_1$ will be called vectors. A general element of $\mathcal{G}$ is then a sum of $r$-vectors, where each $r$-vector is a sum of $r$-blades. By our definition, a scalar is a $0$-blade and a vector is a $1$-blade.

Definition. Let $A_r$ be an $r$-blade. A representation of $A_r$ is a set $\{a_1, \dots, a_r\}$ of mutually anti-commuting vectors such that $A_r = a_1 \cdots a_r$. Each $a_i$ is called a factor of the representation of $A_r$.

Intuitively, an $r$-blade may be thought of as a weighted, oriented $r$-dimensional subspace spanned by its factors.

Example. Let $a$ be a $1$-blade, or a vector. We view this as an arrow with an orientation specified by the direction of $a$. Let $\lambda$ be a positive real number. Then $\lambda a$ is a scaling of $a$. We may think of $\lambda a$ as having a weight of $\lambda$, relative to $a$.

Example. Let $a_1 a_2$ be a $2$-blade. We view this as a plane spanned by $a_1, a_2$ with an orientation from $a_1$ to $a_2$. Let $\lambda$ be a positive real number. Then $\lambda a_1 a_2$ has a weight $\lambda$, relative to $a_1 a_2$.

These informal ideas will become more precise later. The bilinear form $B$ allows one to associate lengths and spatial relations between vectors. One particular spatial relation is orthogonality.

Definition. Let $a, b$ be vectors. If $B(a, b) = 0$ we say that $a$ and $b$ are orthogonal. If $b^2 = B(b, b) = 0$ we say that $b$ is a null vector.

A null vector is, then, orthogonal to itself. We will later give an example of a geometric algebra containing null vectors. Let us look at some examples of geometric algebras. In the following, we use $\langle A \rangle$ to denote the span of the elements of $A$.

Example. Consider $\mathbb{C}$ as a vector space over $\mathbb{R}$. Let $\mathcal{G}_0 = \langle 1 \rangle$ and $\mathcal{G}_1 = \langle i \rangle$. Then $\mathbb{C} = \mathcal{G}_0 \oplus \mathcal{G}_1$. Equip $\mathcal{G}_1$ with the bilinear form $B$ defined by $B(i, i) = -1$. Then $\mathbb{C}$ is a geometric algebra by a straightforward verification.

We now construct a geometric algebra with a subspace $\mathcal{G}_2$.

Example. Let $\mathcal{G}$ be the free $\mathbb{R}$-module over the set of formal symbols $\{1, e_1, e_2, e_{12}\}$. Let $\mathcal{G}_1 = \langle e_1, e_2 \rangle$ be equipped with the symmetric bilinear form $B$ such that $B(e_1, e_1) = B(e_2, e_2) = 1$ and $B(e_1, e_2) = 0$.

Note that $B$ is non-degenerate. Multiplication is defined by the following table:

            1        e_1      e_2      e_12
    1       1        e_1      e_2      e_12
    e_1     e_1      1        e_12     e_2
    e_2     e_2     -e_12     1       -e_1
    e_12    e_12    -e_2      e_1     -1

It is a trivial, but tedious, exercise to show that this defines a group. Extending multiplication over addition by bilinearity defines a geometric algebra on $\mathcal{G}$, as can be shown by straightforward calculations. Since $e_1 e_2 = -e_2 e_1$ and $e_{12} = e_1 e_2$, by definition $e_{12}$ is a $2$-blade. It is straightforward to show that $\mathcal{G}_2 = \langle e_{12} \rangle$. Let $\mathcal{G}_0 = \langle 1 \rangle$; then $\mathcal{G} = \mathcal{G}_0 \oplus \mathcal{G}_1 \oplus \mathcal{G}_2$. Let $a, b \in \mathcal{G}_1$. Then
$$a = \alpha^1 e_1 + \alpha^2 e_2, \qquad b = \beta^1 e_1 + \beta^2 e_2, \qquad \alpha^1, \alpha^2, \beta^1, \beta^2 \in \mathbb{R}.$$
Observe
$$ab = (\alpha^1 e_1 + \alpha^2 e_2)(\beta^1 e_1 + \beta^2 e_2) = (\alpha^1 \beta^1 + \alpha^2 \beta^2) + (\alpha^1 \beta^2 - \alpha^2 \beta^1) e_1 e_2.$$
Observe that the product $ab$ contains a scalar and a $2$-blade. The scalar is the familiar dot product from Euclidean geometry. The coefficient of $e_1 e_2$ can be interpreted as the signed area between $a$ and $b$, signed in the sense of an orientation from $e_1$ to $e_2$. The geometric product of two vectors measures, in some sense, how parallel and how orthogonal the two vectors $a$ and $b$ are. For if $a$ is a scalar multiple of $b$, then the coefficient of $e_1 e_2$ is zero

and $ab = B(a, b)$. If $B(a, b) = 0$, then $ab = (\alpha^1 \beta^2 - \alpha^2 \beta^1) e_1 e_2$.

Finally, notice that $\mathcal{G}^+ = \langle 1, e_{12} \rangle$ is a subalgebra of $\mathcal{G}$. A straightforward verification shows that the mapping defined by $1 \mapsto 1$, $i \mapsto e_{12}$ from $\mathbb{C}$ to $\mathcal{G}^+$ is an isomorphism. Therefore, $\mathcal{G}$ contains the complex numbers. The unit $i$ now has a geometric interpretation as a plane. This example should be thought of as the geometric algebra, generated by $e_1$ and $e_2$, of the plane represented by $e_1 e_2$. We denote this geometric algebra by $\mathcal{G}(\mathbb{R}^2)$.

The construction of a geometric algebra associated with a finite-dimensional vector space, as presented in Example 1.1.8, can be a laborious task when the dimension of the vector space is large. In the appendix, we give a theorem about the structure of the geometric algebra associated with a finite-dimensional vector space. The results of the theorem are the following.

Theorem. Let $V$ be a finite-dimensional vector space over $\mathbb{R}$ equipped with a non-degenerate symmetric bilinear form $B$. Then there exists an orthogonal non-null basis $\{e_1, \dots, e_n\}$ such that the geometric algebra associated with $V$ is the direct sum of the subspaces spanned by $e_{i_1} \cdots e_{i_r}$, $1 \leq i_1 < \cdots < i_r \leq n$.

For convenience, let $e_{i_1 \cdots i_k} = e_{i_1} \cdots e_{i_k}$. Given a vector space $V$, we use the notation $\mathcal{G}(V)$ to denote the geometric algebra associated with $V$. By Theorem 4.1.8, $\mathcal{G}(V)$ is a direct sum of the subspaces $\mathcal{G}_r$ spanned by $e_{i_1 \cdots i_r}$, $1 \leq i_1 < \cdots < i_r \leq n$. Recall the summation convention that $\sum_k \alpha^k e_k = \alpha^k e_k$; we shall use this convention unless otherwise stated.

Let us look at an example of a geometric algebra of three-dimensional space.

Example. Let $\mathcal{G}(\mathbb{R}^3)$ be the geometric algebra generated by $e_1, e_2, e_3$ with the bilinear form $B$ defined so that the generators are orthogonal and square to $1$. We have
$$\mathcal{G} = \langle 1 \rangle \oplus \langle e_1, e_2, e_3 \rangle \oplus \langle e_{12}, e_{23}, e_{31} \rangle \oplus \langle e_{123} \rangle.$$
Let $a, b \in \mathcal{G}_1$, with $a = \alpha^k e_k$, $b = \beta^k e_k$. Then
$$ab = (\alpha^1 \beta^1 + \alpha^2 \beta^2 + \alpha^3 \beta^3) + (\alpha^1 \beta^2 - \alpha^2 \beta^1) e_{12} + (\alpha^2 \beta^3 - \alpha^3 \beta^2) e_{23} + (\alpha^3 \beta^1 - \alpha^1 \beta^3) e_{31}.$$
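The displayed product can be spot-checked numerically. The sketch below is not from the thesis; it represents a multivector of $\mathcal{G}(\mathbb{R}^3)$ as a dictionary mapping basis-blade bitmasks (bit $k$ set means $e_{k+1}$ is a factor) to coefficients, and the helper names `reorder_sign` and `gp` are our own.

```python
def reorder_sign(a, b):
    """Sign from reordering the product of basis blades a and b
    (bitmasks) into the canonical ascending-index blade a ^ b."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def gp(A, B):
    """Geometric product in G(R^3) (orthonormal generators squaring to +1)."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            k = a ^ b
            out[k] = out.get(k, 0) + reorder_sign(a, b) * ca * cb
    return {k: v for k, v in out.items() if v}

E1, E2, E3 = 0b001, 0b010, 0b100
E12, E23, E13 = 0b011, 0b110, 0b101

# a = 2e1 + 3e2 + 5e3 and b = 7e1 + 11e2 + 13e3
a = {E1: 2, E2: 3, E3: 5}
b = {E1: 7, E2: 11, E3: 13}
ab = gp(a, b)
```

For these coefficients the formula predicts the scalar $2\cdot7+3\cdot11+5\cdot13 = 112$, the $e_{12}$ coefficient $2\cdot11-3\cdot7 = 1$, the $e_{23}$ coefficient $3\cdot13-5\cdot11 = -16$, and the $e_{31}$ coefficient $5\cdot7-2\cdot13 = 9$ (stored by this sketch as $-9$ on $e_{13}$, since $e_{31} = -e_{13}$).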

Observe, as in Example 1.1.8, that the product contains a scalar term $B(a, b)$ and a sum of bivectors. Notice also that the coefficients of the bivectors are the coefficients of the cross product $a \times b$. Furthermore, notice that $\mathcal{G}^+ = \langle 1 \rangle \oplus \langle e_{12}, e_{23}, e_{31} \rangle$ forms a subalgebra of $\mathcal{G}$. Let $i = e_{12}$, $j = e_{31}$ and $k = e_{23}$. Then
$$i^2 = j^2 = k^2 = ijk = -1.$$
These are the rules of quaternion multiplication discovered by Hamilton. The quaternions $i, j, k$ may be interpreted as planes. Let us look at a geometric algebra used in special relativity.

Example. Let $\mathcal{G}(M)$ be the geometric algebra generated by the orthogonal vectors $e_0, e_1, e_2, e_3$ with the bilinear form $B$ defined by $e_0^2 = 1$ and $e_k^2 = -1$, $k = 1, 2, 3$. We have
$$\mathcal{G} = \langle 1 \rangle \oplus \langle e_0, e_1, e_2, e_3 \rangle \oplus \langle e_{01}, e_{02}, e_{03}, e_{12}, e_{13}, e_{23} \rangle \oplus \langle e_{012}, e_{123}, e_{230}, e_{013} \rangle \oplus \langle e_{0123} \rangle.$$
Let $x \in \langle e_0, e_1, e_2, e_3 \rangle$. Then $x = x^\alpha e_\alpha$, $x^\alpha \in \mathbb{R}$. Observe that
$$x^2 = B(x^\alpha e_\alpha, x^\alpha e_\alpha) = (x^0)^2 - (x^1)^2 - (x^2)^2 - (x^3)^2$$
is the so-called invariant interval of special relativity. In this geometric algebra there exist non-zero vectors that are null. Let $n = e_0 + e_1$. Then $n^2 = 1 - 1 = 0$. There are bivectors that square to $1$ and to $-1$. Observe
$$(e_{01})^2 = e_0 e_1 e_0 e_1 = -e_0^2 e_1^2 = 1 \quad \text{and} \quad (e_{12})^2 = e_1 e_2 e_1 e_2 = -e_1^2 e_2^2 = -1.$$

With some examples in mind, let us recall a fact associated with a direct sum decomposition. Since $\mathcal{G} = \mathcal{G}_0 \oplus \mathcal{G}_1 \oplus \cdots$ we have the associated projection maps $\langle \, \cdot \, \rangle_r : \mathcal{G} \to \mathcal{G}_r$, satisfying the following properties:
$$\langle A + B \rangle_r = \langle A \rangle_r + \langle B \rangle_r, \qquad \langle \lambda A \rangle_r = \lambda \langle A \rangle_r \ \text{ if } \lambda \in \mathbb{R}, \qquad \langle \langle A \rangle_r \rangle_r = \langle A \rangle_r.$$
We define $\langle A \rangle_k = 0$ if $k < 0$. There is a relationship between the bilinear form $B$ and the geometric product between vectors.

Proposition. Let $a, b$ be vectors. Then
$$B(a, b) = \frac{ab + ba}{2}.$$

Proof. Since
$$(a + b)^2 = a^2 + b^2 + ab + ba,$$
or
$$ab + ba = (a + b)^2 - a^2 - b^2,$$
we have
$$ab + ba = B(a + b, a + b) - B(a, a) - B(b, b) = B(a, a) + B(b, a) + B(a, b) + B(b, b) - B(a, a) - B(b, b) = 2 B(a, b),$$
as claimed.

Corollary. $ab = -ba$ if and only if $B(a, b) = 0$. Therefore, anti-commutativity and orthogonality are equivalent.
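The claims about $\mathcal{G}(M)$ and the proposition above can be checked numerically. The sketch below is not from the thesis: it is a bitmask representation of multivectors (bit $k$ stands for $e_k$) with a diagonal metric, and the names `SIG`, `reorder_sign`, `gp` and `B` are our own.

```python
def reorder_sign(a, b):
    """Sign from reordering basis blades a, b (bitmasks) into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

SIG = (1, -1, -1, -1)   # metric of G(M): e0^2 = +1, e1^2 = e2^2 = e3^2 = -1

def gp(A, B):
    """Geometric product with the diagonal metric SIG."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            s = reorder_sign(a, b)
            for i, m in enumerate(SIG):   # repeated generators square to SIG[i]
                if (a & b) >> i & 1:
                    s *= m
            k = a ^ b
            out[k] = out.get(k, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

def B(a, b):
    """Bilinear form recovered as the scalar part of (ab + ba)/2."""
    return (gp(a, b).get(0, 0) + gp(b, a).get(0, 0)) / 2

E0, E1, E2 = 0b0001, 0b0010, 0b0100
n = {E0: 1, E1: 1}                         # n = e0 + e1
assert gp(n, n) == {}                      # n is a non-zero null vector
assert gp({E0 ^ E1: 1}, {E0 ^ E1: 1}) == {0: 1}    # (e01)^2 = +1
assert gp({E1 ^ E2: 1}, {E1 ^ E2: 1}) == {0: -1}   # (e12)^2 = -1
```

The assertions confirm the null vector $n = e_0 + e_1$ and the bivector squares computed in the example; `B` realizes $B(a, b) = (ab + ba)/2$ in this representation.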

Proposition. Let $a$ and $b$ be linearly dependent vectors. Then their geometric product commutes.

Proof. If $a = \lambda b$, $\lambda \in \mathbb{R}$, then
$$ab = \lambda b b = \lambda b^2 = b^2 \lambda = b b \lambda = ba.$$

We shall often work with inverses of elements of the geometric algebra. Let us first characterize invertibility for vectors, and then generalize to blades.

Proposition. Let $b$ be a non-zero vector. Then $b$ is invertible if and only if $b$ is non-null. Moreover,
$$b^{-1} = \frac{b}{b^2}$$
when it exists.

Proof. Suppose that $b$ is invertible. Then $b b^{-1} = 1$ implies $b^2 b^{-1} = b$. Since $b$ is invertible and $0$ is not, $b \neq 0$, and we find $b^2 b^{-1} \neq 0$. Hence, $b^2 \neq 0$ and $b$ is non-null. We may conclude that $b^{-1} = b / b^2$. Conversely, suppose that $b$ is non-null. We shall show that $b^{-1} = b / b^2$. Observe
$$b \, \frac{b}{b^2} = \frac{b^2}{b^2} = 1,$$

and similarly
$$\frac{b}{b^2} \, b = 1.$$
Thus, $b^{-1}$ is as claimed.

We extend this proposition to blades.

Proposition. Let $A_r$ be a non-zero $r$-blade. Then $A_r$ is invertible if and only if each factor of each representation of $A_r$ is non-null.

Proof. Suppose that $A_r$ is invertible with a representation $A_r = a_1 \cdots a_r$ containing a null vector. We suppose that $a_1^2 = 0$ without loss of generality. Observe that
$$a_1 A_r = a_1^2 a_2 \cdots a_r = 0$$
implies
$$a_1 = a_1 A_r A_r^{-1} = 0,$$
contradicting the fact that $A_r$ is non-zero. Conversely, suppose that each factor of some representation $A_r = a_1 \cdots a_r$ is non-null. By the preceding proposition, each factor of the representation is invertible. A simple verification shows that $A_r^{-1} = a_r^{-1} \cdots a_1^{-1}$.

1.2 Grades resulting from multiplication

In this section we prove some fundamental results concerning the product of two blades. In particular, we show that the resulting grades of a product of two vectors are a scalar and a $2$-vector; of a vector and an $r$-blade, an $(r-1)$- and an $(r+1)$-vector; and of an $r$- and an $s$-blade, a sum starting at grade $|s - r|$ and incrementing by grade $2$, up to grade $r + s$.
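The grade patterns just announced can be observed numerically before they are proven. The sketch below is not from the thesis; it uses a bitmask representation of $\mathcal{G}(\mathbb{R}^3)$ with helper names (`reorder_sign`, `gp`, `grades`) of our own invention.

```python
def reorder_sign(a, b):
    """Sign from reordering basis blades a, b (bitmasks) into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def gp(A, B):
    """Geometric product in G(R^3) (orthonormal generators squaring to +1)."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            k = a ^ b
            out[k] = out.get(k, 0) + reorder_sign(a, b) * ca * cb
    return {k: v for k, v in out.items() if v}

def grades(A):
    """Sorted list of grades present in the multivector A."""
    return sorted({bin(k).count("1") for k in A})

E1, E2, E3 = 0b001, 0b010, 0b100

u = {E1: 1, E2: 2}
v = {E2: 3, E3: 4}
assert grades(gp(u, v)) == [0, 2]      # vector * vector: grades 0 and 2

a  = {E1: 1, E3: 1}                    # a vector not in the e1e2 plane
A2 = {E1 | E2: 1}                      # the 2-blade e1 e2
assert grades(gp(a, A2)) == [1, 3]     # vector * 2-blade: grades 1 and 3

B2 = {E2 | E3: 1, E1 | E2: -2}         # the 2-blade e2 ^ (e3 + 2 e1)
assert grades(gp(A2, B2)) == [0, 2]    # 2-blade * 2-blade: |s - r| = 0, step 2
```

In the last case the grade-$4$ term predicted by the general pattern vanishes because $\mathcal{G}(\mathbb{R}^3)$ has no grade-$4$ part.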

1.2.1 Multiplication of two vectors

Definition 1.2.1. Let $a$ and $b$ be vectors, with $b$ invertible. We call
$$\pi(a, b) = B(a, b) b^{-1}$$
the projection of $a$ onto $b$ and
$$\rho(a, b) = a - \pi(a, b)$$
the rejection of $a$ from $b$.

Lemma 1.2.2. Let $a$ and $b$ be vectors, with $b$ invertible. Then $\pi(a, b)$ commutes with $b$.

Proof. By the formula for the inverse of a vector,
$$\pi(a, b) = B(a, b) b^{-1} = B(a, b) \frac{b}{b^2} = \frac{B(a, b)}{b^2} \, b.$$
Therefore, $\pi(a, b)$ and $b$ are linearly dependent, and hence they commute.

Lemma 1.2.3. Let $a$ and $b$ be vectors, with $b$ invertible. Then $\rho(a, b)$ anti-commutes with $b$. Thus, $\rho(a, b) b$ is a $2$-blade.

Proof. First note that
$$B(b, b^{-1}) = B\!\left(b, \frac{b}{b^2}\right) = \frac{B(b, b)}{b^2} = \frac{b^2}{b^2} = 1.$$
Observe that
$$B(b, \rho(a, b)) = B(b, a - \pi(a, b)) = B(b, a) - B(b, \pi(a, b)) = B(b, a) - B(b, B(a, b) b^{-1}) = B(a, b) - B(a, b) B(b, b^{-1}) = B(a, b) - B(a, b) = 0.$$

Since $B(b, \rho(a, b)) = 0$, the vectors $\rho(a, b)$ and $b$ anti-commute.

Although the identity $B(a, b) = (ab + ba)/2$ establishes part of the following lemma, we give a different proof using the notion of invertibility.

Lemma 1.2.4. Let $a$ and $b$ be vectors, with $b$ invertible. Then
$$\frac{ab + ba}{2} = B(a, b) \quad \text{and} \quad \frac{ab - ba}{2} = \rho(a, b) b.$$

Proof. Since
$$a = \pi(a, b) + \rho(a, b),$$
we have
$$ab = \pi(a, b) b + \rho(a, b) b \quad \text{and} \quad ba = b \pi(a, b) + b \rho(a, b).$$
By Lemma 1.2.2 and Lemma 1.2.3,
$$ab + ba = \pi(a, b) b + \rho(a, b) b + b \pi(a, b) + b \rho(a, b) = 2 \pi(a, b) b = 2 B(a, b) b^{-1} b = 2 B(a, b)$$
and
$$ab - ba = \pi(a, b) b + \rho(a, b) b - b \pi(a, b) - b \rho(a, b) = 2 \rho(a, b) b.$$

Proposition 1.2.5. Let $a$ and $b$ be vectors, with $b$ invertible. Then $ab = \langle ab \rangle_0 + \langle ab \rangle_2$. Moreover, $\langle ab \rangle_0 = B(a, b)$.

Proof. Observe
$$ab = \frac{ab + ba}{2} + \frac{ab - ba}{2}.$$
By Lemma 1.2.4,
$$ab = B(a, b) + \rho(a, b) b.$$
The result now follows since $B(a, b) \in \mathcal{G}_0$ and $\rho(a, b) b \in \mathcal{G}_2$.

In establishing Proposition 1.2.5 it was assumed that $b$ was invertible. We now show that the proposition holds for null vectors as well.

Lemma 1.2.6. Let $n$ be a null vector. Then there exist non-null vectors $x, y$ such that $n = x + y$.

Proof. Suppose that $n = 0$. Since $B$ is non-degenerate, there exists a vector $x$ such that $B(x, x) \neq 0$. Then $n = x - x$ is a sum of non-null vectors. Suppose now that $n \neq 0$. Since $B$ is non-degenerate, there exists a vector $x$ such that $B(n, x) \neq 0$. Let $y_\lambda = n - \lambda x$, where $\lambda \in \mathbb{R} \setminus \{0\}$. Then
$$y_\lambda^2 = \lambda(-2 B(n, x) + \lambda x^2).$$
If $x^2 = 0$, then $y_\lambda^2 = -2 \lambda B(n, x) \neq 0$, so
$$n = \tfrac{1}{2} y_1 + \tfrac{1}{2} y_{-1}$$
is a sum of non-null vectors. If $x^2 \neq 0$, then choose
$$\lambda \neq \frac{2 B(n, x)}{x^2}$$
so that $y_\lambda$ is non-null. Then $n = y_\lambda + \lambda x$ is a sum of non-null vectors.

Theorem 1.2.7. Let $a, b$ be vectors. Then $ab = \langle ab \rangle_0 + \langle ab \rangle_2$.

Proof. By Proposition 1.2.5, the result holds if $b$ is non-null. Suppose that $b$ is null. By Lemma 1.2.6, there exist non-null vectors $x, y$ such that $b = x + y$. By Proposition 1.2.5,
$$ab = a(x + y) = ax + ay = \langle ax \rangle_0 + \langle ax \rangle_2 + \langle ay \rangle_0 + \langle ay \rangle_2 = \langle a(x + y) \rangle_0 + \langle a(x + y) \rangle_2 = \langle ab \rangle_0 + \langle ab \rangle_2.$$

The product of two vectors is thus a sum of a grade $0$ and a grade $2$ term.

1.2.2 Multiplication of a vector and a blade

We will now generalize Theorem 1.2.7 to a vector and a blade. The resulting product will be a sum of two terms, one of grade one less and one of grade one greater than that of the original blade.

Definition 1.2.8. Let $a$ and $A_r$ be a vector and an invertible $r$-blade with a representation $A_r = a_1 \cdots a_r$. We define the projection of $a$ onto $A_r$ by
$$\pi(a, a_1 \cdots a_r) = \sum_{k=1}^{r} B(a, a_k) a_k^{-1}$$
and the rejection of $a$ from $A_r$ by
$$\rho(a, a_1 \cdots a_r) = a - \pi(a, a_1 \cdots a_r).$$

The map $\pi$ is called the projection because it measures how much of $a$ is in the span of the factors $a_1, \dots, a_r$, and $\rho$ measures how much of $a$ is not in the span of the factors $a_1, \dots, a_r$. The formula for the projection $\pi$ appears to depend on the choice of representation of the blade, but we shall see that it is independent of the choice of representation, and similarly for $\rho$. We use the following convention: given a product of vectors $a_1 \cdots \check{a}_k \cdots a_r$, the check indicates that the vector $a_k$ is not present in the product.
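The projection and rejection onto a blade can be tried numerically. The sketch below is not from the thesis; it works in $\mathcal{G}(\mathbb{R}^3)$ with a bitmask representation, and all helper names are our own. For the orthonormal factors $e_1, e_2$ of the $2$-blade $e_1 e_2$, each $a_k^{-1} = a_k$, so the projection formula reduces to picking off coefficients.

```python
def reorder_sign(a, b):
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def gp(A, B):
    """Geometric product in G(R^3) (orthonormal generators squaring to +1)."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            k = a ^ b
            out[k] = out.get(k, 0) + reorder_sign(a, b) * ca * cb
    return {k: v for k, v in out.items() if v}

def add(A, B):
    out = dict(A)
    for k, v in B.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def scale(c, A):
    return {k: c * v for k, v in A.items()}

E1, E2, E3 = 0b001, 0b010, 0b100
a  = {E1: 1, E2: 2, E3: 3}      # a = e1 + 2e2 + 3e3
A2 = {E1 | E2: 1}               # the 2-blade e1 e2, factors e1 and e2

pi  = {E1: 1, E2: 2}            # pi(a, e1 e2) = B(a,e1) e1 + B(a,e2) e2
rho = add(a, scale(-1, pi))     # rho = 3e3

# rho anti-commutes with each factor of the blade:
assert gp(rho, {E1: 1}) == scale(-1, gp({E1: 1}, rho))
assert gp(rho, {E2: 1}) == scale(-1, gp({E2: 1}, rho))
# pi * A2 is a 1-vector, while rho * A2 is a 3-blade:
assert gp(pi, A2) == {E2: 1, E1: -2}
assert gp(rho, A2) == {0b111: 3}
```

The final two assertions preview the lemmas that follow: the projection part of $a$ lowers the grade of $A_r$ by one, while the rejection part raises it by one.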

Lemma 1.2.9. Let $a$ and $A_r$ be a vector and an invertible $r$-blade with representation $A_r = a_1 \cdots a_r$, respectively. Then
$$\pi(a, a_1 \cdots a_r) A_r = (-1)^{r+1} A_r \pi(a, a_1 \cdots a_r).$$
Moreover, $\pi(a, a_1 \cdots a_r) A_r$ is an $(r-1)$-vector.

Proof. Since $a_k^{-1}$ anti-commutes with $a_i$ for $i \neq k$ and commutes with $a_k$, we have
$$\pi(a, a_1 \cdots a_r) A_r = \left( \sum_{k=1}^{r} B(a, a_k) a_k^{-1} \right) a_1 \cdots a_r = \sum_{k=1}^{r} B(a, a_k) a_k^{-1} a_1 \cdots a_r = \sum_{k=1}^{r} (-1)^{k-1} B(a, a_k) \, a_1 \cdots \check{a}_k \cdots a_r \quad (*)$$
$$= a_1 \cdots a_r \sum_{k=1}^{r} (-1)^{k-1} (-1)^{r-k} B(a, a_k) a_k^{-1} = (-1)^{r+1} A_r \pi(a, a_1 \cdots a_r).$$
Referring to $(*)$: since the $r - 1$ remaining factors in each term mutually anti-commute, we have a sum of $r$ $(r-1)$-blades. Hence, $\pi(a, a_1 \cdots a_r) A_r$ is an $(r-1)$-vector.

Lemma 1.2.10. Let $a$ and $A_r$ be a vector and an invertible $r$-blade with representation $A_r = a_1 \cdots a_r$, respectively. Then
$$\rho(a, a_1 \cdots a_r) a_k = -a_k \rho(a, a_1 \cdots a_r) \quad \text{for } k = 1, \dots, r.$$

Proof. Let $1 \leq j \leq r$. Observe
$$B(a_j, \rho(a, a_1 \cdots a_r)) = B(a_j, a - \pi(a, a_1 \cdots a_r)) = B(a_j, a) - B\!\left(a_j, \sum_{k=1}^{r} B(a, a_k) a_k^{-1}\right) = B(a_j, a) - \sum_{k=1}^{r} B(a, a_k) B(a_j, a_k^{-1})$$
$$= B(a_j, a) - B(a, a_j) B(a_j, a_j^{-1}) - \sum_{k=1,\, k \neq j}^{r} B(a, a_k) B(a_j, a_k^{-1}) = B(a_j, a) - B(a_j, a) - 0 = 0.$$
Since anti-commutativity and orthogonality are equivalent, $\rho(a, a_1 \cdots a_r)$ and $a_j$ anti-commute for $1 \leq j \leq r$.

Lemma 1.2.11. Let $a$ and $A_r$ be a vector and an invertible $r$-blade with representation $A_r = a_1 \cdots a_r$, respectively. Then
$$\rho(a, a_1 \cdots a_r) A_r = (-1)^r A_r \rho(a, a_1 \cdots a_r).$$
Moreover, $\rho(a, a_1 \cdots a_r) A_r$ is an $(r+1)$-blade.

Proof. By Lemma 1.2.10, $\rho(a, a_1 \cdots a_r)$ anti-commutes with each factor $a_k$. Then
$$\rho(a, a_1 \cdots a_r) A_r = \rho(a, a_1 \cdots a_r) a_1 \cdots a_r = (-1)^r a_1 \cdots a_r \, \rho(a, a_1 \cdots a_r) = (-1)^r A_r \rho(a, a_1 \cdots a_r).$$
Also, since all the factors are mutually anti-commuting, $\rho(a, a_1 \cdots a_r) A_r$ is an $(r+1)$-blade.

Proposition 1.2.12. Let $a$ and $A_r$ be a vector and an invertible $r$-blade, respectively. Then
$$\frac{a A_r - (-1)^r A_r a}{2} = \langle a A_r \rangle_{r-1} \quad \text{and} \quad \frac{a A_r + (-1)^r A_r a}{2} = \langle a A_r \rangle_{r+1}.$$

Consequently,
$$a A_r = \langle a A_r \rangle_{r-1} + \langle a A_r \rangle_{r+1} \quad \text{and} \quad A_r a = \langle A_r a \rangle_{r-1} + \langle A_r a \rangle_{r+1}.$$

Proof. Since $a = \pi(a, a_1 \cdots a_r) + \rho(a, a_1 \cdots a_r)$, by Lemmas 1.2.9 and 1.2.11,
$$a A_r + (-1)^r A_r a = \pi(a, a_1 \cdots a_r) A_r + \rho(a, a_1 \cdots a_r) A_r + (-1)^r A_r \pi(a, a_1 \cdots a_r) + (-1)^r A_r \rho(a, a_1 \cdots a_r)$$
$$= \pi(a, a_1 \cdots a_r) A_r + \rho(a, a_1 \cdots a_r) A_r + (-1)^{2r+1} \pi(a, a_1 \cdots a_r) A_r + (-1)^{2r} \rho(a, a_1 \cdots a_r) A_r = 2 \rho(a, a_1 \cdots a_r) A_r,$$
or
$$\frac{a A_r + (-1)^r A_r a}{2} = \rho(a, a_1 \cdots a_r) A_r.$$
A similar calculation shows that
$$\frac{a A_r - (-1)^r A_r a}{2} = \pi(a, a_1 \cdots a_r) A_r.$$
Then
$$a A_r = \frac{a A_r - (-1)^r A_r a}{2} + \frac{a A_r + (-1)^r A_r a}{2} = \pi(a, a_1 \cdots a_r) A_r + \rho(a, a_1 \cdots a_r) A_r = \langle a A_r \rangle_{r-1} + \langle a A_r \rangle_{r+1}.$$
Similarly, $A_r a = \langle A_r a \rangle_{r-1} + \langle A_r a \rangle_{r+1}$.

Remark. This shows that $\pi$ and $\rho$ are independent of the choice of representation.

Corollary 1.2.13. Let $a$ and $A_r$ be a vector and an invertible $r$-blade, respectively. Then
$$\langle a A_r \rangle_{r+1} = (-1)^r \langle A_r a \rangle_{r+1} \quad \text{and} \quad \langle a A_r \rangle_{r-1} = (-1)^{r+1} \langle A_r a \rangle_{r-1}.$$

Proof. By Proposition 1.2.12 and Lemmas 1.2.9 and 1.2.11,
$$\langle a A_r \rangle_{r+1} = \rho(a, a_1 \cdots a_r) A_r = (-1)^r A_r \rho(a, a_1 \cdots a_r) = (-1)^r \langle A_r a \rangle_{r+1}.$$
The other equality is established similarly.

We assumed the invertibility of the blade throughout in establishing Proposition 1.2.12. In the case when the vector space is finite dimensional, the result still holds for a vector and any blade. Let $A_r$ be an $r$-blade with a representation $A_r = a_1 \cdots a_r$. Each factor of the representation is a sum of orthogonal non-null basis vectors $e_1, \dots, e_n$:
$$a_k = \sum_{i=1}^{n} \alpha_{ik} e_i, \qquad k = 1, \dots, r.$$
Then
$$A_r = a_1 \cdots a_r = \sum_{i_1=1}^{n} \cdots \sum_{i_r=1}^{n} \alpha_{i_1 1} \cdots \alpha_{i_r r} \, e_{i_1} \cdots e_{i_r}.$$
After expanding, we find that $A_r$ is a sum of invertible $r$-blades. Then, as in the case of vectors, by linearity of the projection operators we have the following.

Theorem 1.2.14. Suppose that $\mathcal{G}_1$ is finite dimensional. Let $a$ and $A_r$ be a vector and an $r$-blade, respectively. Then
$$a A_r = \langle a A_r \rangle_{r-1} + \langle a A_r \rangle_{r+1}.$$

Remark. The author is working on a way to generalize Theorem 1.2.14 to the case when the vector space is not necessarily finite dimensional. As it currently stands, the following results involving products of blades are only known to hold in the finite-dimensional case.

Note the general structure: the geometric product of a vector with an $r$-blade is a sum of an $(r-1)$-vector and an $(r+1)$-vector.

Now that we have established the grades resulting from the product of a vector with a blade, we shall generalize to the product of a blade with a blade. To help with this generalization we introduce a new map.

1.2.3 Reversion

We shall introduce a very useful map called reversion. Reversion allows one to reverse the order of a product of multivectors, which is useful for algebraic manipulations.

Definition 1.2.15. Let $A \in \mathcal{G}$ be given by the unique sum $A = \sum_{r} A_r$, where $A_r = \langle A \rangle_r$. Then the reverse of $A$ is
$$A^\dagger = \sum_{r} (-1)^{\frac{r(r-1)}{2}} A_r.$$

Remark. Note that for an $r$-blade $A_r$, $A_r^\dagger = (-1)^{\frac{r(r-1)}{2}} A_r$, so $A_r^\dagger$ is an $r$-blade as well. Also, $(A^\dagger)^\dagger = A$; reversion is an involution. If $\lambda$ is a scalar and $a$ is a vector,
$$\lambda^\dagger = (-1)^{\frac{0(0-1)}{2}} \lambda = \lambda, \qquad a^\dagger = (-1)^{\frac{1(1-1)}{2}} a = a.$$
The general pattern of the signs $(-1)^{r(r-1)/2}$ for $r = 0, 1, 2, 3, \dots$ is
$$+1, +1, -1, -1, +1, +1, -1, -1, \dots,$$
which we see has a period of $4$.

Lemma 1.2.16. Let $a$ and $A$ be a vector and a multivector, respectively. Then
$$(aA)^\dagger = A^\dagger a.$$

Proof. We first show that the result holds for an $r$-blade $A_r$. By Proposition 1.2.12 and Corollary

1.2.13,
$$(a A_r)^\dagger = (\langle a A_r \rangle_{r-1} + \langle a A_r \rangle_{r+1})^\dagger = \langle a A_r \rangle_{r-1}^\dagger + \langle a A_r \rangle_{r+1}^\dagger = (-1)^{\frac{(r-1)((r-1)-1)}{2}} \langle a A_r \rangle_{r-1} + (-1)^{\frac{(r+1)((r+1)-1)}{2}} \langle a A_r \rangle_{r+1}$$
$$= (-1)^{\frac{(r-1)((r-1)-1)}{2}} (-1)^{r+1} \langle A_r a \rangle_{r-1} + (-1)^{\frac{(r+1)((r+1)-1)}{2}} (-1)^r \langle A_r a \rangle_{r+1} = (-1)^{\frac{r^2 - r + 4}{2}} \langle A_r a \rangle_{r-1} + (-1)^{\frac{r^2 - r}{2}} \langle A_r a \rangle_{r+1}$$
$$= (-1)^{\frac{r(r-1)}{2}} (\langle A_r a \rangle_{r-1} + \langle A_r a \rangle_{r+1}) = (-1)^{\frac{r(r-1)}{2}} A_r a = A_r^\dagger a.$$
If $A \in \mathcal{G}$, then $A = \sum_k \langle A \rangle_k$. Hence,
$$(aA)^\dagger = \Big(a \sum_k \langle A \rangle_k\Big)^\dagger = \Big(\sum_k a \langle A \rangle_k\Big)^\dagger = \sum_k (a \langle A \rangle_k)^\dagger = \sum_k \langle A \rangle_k^\dagger \, a = \Big(\sum_k \langle A \rangle_k\Big)^\dagger a = A^\dagger a.$$

Proposition 1.2.17. Reversion satisfies the following properties:

1) $(AB)^\dagger = B^\dagger A^\dagger$
2) $(A + B)^\dagger = A^\dagger + B^\dagger$
3) $\langle A \rangle_r^\dagger = \langle A^\dagger \rangle_r$

Proof. Properties 2) and 3) follow straightforwardly from the definition of reversion. To establish 1) we shall show that $(B_s A_r)^\dagger = A_r^\dagger B_s^\dagger$ for any $r$-blade and $s$-blade, by induction on $s$. By Lemma 1.2.16, the result holds for $s = 1$. Now suppose that the statement holds for some fixed $s$. Let $B_{s+1}$ be an $(s+1)$-blade. We can factor $B_{s+1} = B_s b$ for some $s$-blade $B_s$

and some vector $b$. By our induction hypothesis and Lemma 1.2.16,
$$(B_{s+1} A_r)^\dagger = (B_s b A_r)^\dagger = (B_s \langle b A_r \rangle_{r-1} + B_s \langle b A_r \rangle_{r+1})^\dagger = (B_s \langle b A_r \rangle_{r-1})^\dagger + (B_s \langle b A_r \rangle_{r+1})^\dagger$$
$$= \langle b A_r \rangle_{r-1}^\dagger B_s^\dagger + \langle b A_r \rangle_{r+1}^\dagger B_s^\dagger = (\langle b A_r \rangle_{r-1} + \langle b A_r \rangle_{r+1})^\dagger B_s^\dagger = (b A_r)^\dagger B_s^\dagger = (A_r^\dagger b) B_s^\dagger = A_r^\dagger (b B_s^\dagger) = A_r^\dagger (B_s b)^\dagger = A_r^\dagger B_{s+1}^\dagger.$$
By the principle of mathematical induction the result holds for all $s \in \mathbb{N}$. That $(AB)^\dagger = B^\dagger A^\dagger$ for all $A, B \in \mathcal{G}$ now follows from expanding $A$ and $B$ as their unique sums of $r$-vectors.

1.2.4 Multiplication of two blades

We will now generalize Theorem 1.2.7 to the product of a blade with a blade. The result will be a generalization of our previous results and will be called the grade expansion identity.

Proposition 1.2.18 (Grade Expansion Identity). Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. Then
$$A_r B_s = \sum_{k=0}^{m} \langle A_r B_s \rangle_{|s - r| + 2k}, \quad \text{where } m = \min\{r, s\}.$$

We begin with two lemmas.

Lemma 1.2.19. Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. If $s \geq r$, then
$$A_r B_s = \sum_{k=0}^{r} \langle A_r B_s \rangle_{s - r + 2k}.$$

Proof. We proceed by induction on $r$. When $r = 0$ the result is evident. Suppose that for some $r$ the result holds for all $s$-blades such that $s \geq r$. Let $A_{r+1}$ be an $(r+1)$-blade and $B_s$ an $s$-blade such that $s \geq r + 1$. Since $A_{r+1}$ is an $(r+1)$-blade we can write $A_{r+1} = a A_r$, where $A_r$ is an $r$-blade and $a$ is a vector. By the induction hypothesis and Theorem 1.2.14,
$$A_{r+1} B_s = a A_r B_s = a \sum_{k=0}^{r} \langle A_r B_s \rangle_{s-r+2k} = \sum_{k=0}^{r} a \langle A_r B_s \rangle_{s-r+2k}$$
$$= \sum_{k=0}^{r} \big( \langle a \langle A_r B_s \rangle_{s-r+2k} \rangle_{s-r+2k-1} + \langle a \langle A_r B_s \rangle_{s-r+2k} \rangle_{s-r+2k+1} \big)$$
$$= \sum_{k=0}^{r} \big( \langle a \langle A_r B_s \rangle_{s-r+2k} \rangle_{s-(r+1)+2k} + \langle a \langle A_r B_s \rangle_{s-r+2k} \rangle_{s-(r+1)+2k+2} \big).$$
Also, since $A_{r+1} B_s \in \mathcal{G}$, we have the unique direct sum decomposition
$$A_{r+1} B_s = \sum_k \langle A_{r+1} B_s \rangle_k.$$
We must have $\langle A_{r+1} B_s \rangle_{s-(r+1)+2k+1} = 0$ for $k = 0, \dots, r$, and $\langle A_{r+1} B_s \rangle_k = 0$

for $k \geq s + (r + 1) + 1$. Hence,
$$A_{r+1} B_s = \sum_{k=0}^{r+1} \langle A_{r+1} B_s \rangle_{s-(r+1)+2k}.$$
By the principle of mathematical induction, the result holds.

Lemma 1.2.20. Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. If $r \geq s$, then
$$A_r B_s = \sum_{k=0}^{s} \langle A_r B_s \rangle_{r - s + 2k}.$$

Proof. Reversion is an involution, and hence
$$A_r B_s = ((A_r B_s)^\dagger)^\dagger = (B_s^\dagger A_r^\dagger)^\dagger.$$
Since reversion does not alter grade, by Lemma 1.2.19,
$$B_s^\dagger A_r^\dagger = \sum_{k=0}^{s} \langle B_s^\dagger A_r^\dagger \rangle_{r-s+2k}.$$
Then
$$A_r B_s = \Big( \sum_{k=0}^{s} \langle B_s^\dagger A_r^\dagger \rangle_{r-s+2k} \Big)^\dagger = \sum_{k=0}^{s} \langle B_s^\dagger A_r^\dagger \rangle_{r-s+2k}^\dagger = \sum_{k=0}^{s} \langle A_r B_s \rangle_{r-s+2k}.$$

Proposition 1.2.18 follows from Lemmas 1.2.19 and 1.2.20. We find from the grade expansion identity that the product of two blades is not necessarily a blade, but a sum of terms whose grades increase by two.

1.3 Products derived from the geometric product

In this section we introduce four new products: the wedge product, the left and right contractions, and the scalar product. The right contraction and the wedge product will be used extensively in Chapter 2.

1.3.1 Outer or wedge product

The wedge product is the product of the so-called exterior algebra. We will show that the wedge product is alternating and associative, that an $r$-blade is the wedge product of $r$ vectors, and that the wedge of $r$ vectors is an $r$-blade. Finally, in the case of a finite-dimensional vector space, it is shown that the wedge of linearly independent vectors is not zero.

Definition 1.3.1. Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. Then the outer or wedge product is
$$A_r \wedge B_s = \langle A_r B_s \rangle_{r+s}.$$
We extend the definition of the wedge product to multivectors by bilinearity.

Example 1.3.2. Consider $\mathcal{G}(\mathbb{R}^3)$, the geometric algebra of three-dimensional Euclidean space introduced in Section 1.1. Let $A = e_{12}$ and $a = \alpha^1 e_1 + \alpha^2 e_2 + \alpha^3 e_3$. Consider
$$a \wedge A = \langle a A \rangle_3 = \langle (\alpha^1 e_1 + \alpha^2 e_2 + \alpha^3 e_3) e_{12} \rangle_3 = \langle \alpha^1 e_{112} + \alpha^2 e_{212} + \alpha^3 e_{312} \rangle_3 = \langle \alpha^1 e_2 - \alpha^2 e_1 + \alpha^3 e_{123} \rangle_3 = \alpha^3 e_{123}.$$
Then, since $e_{123} \neq 0$,
$$a \wedge A = 0 \iff \alpha^3 = 0 \iff a = \alpha^1 e_1 + \alpha^2 e_2 \iff a \in \langle e_1, e_2 \rangle.$$
Intuitively, if $a \wedge A = 0$, then the line $a$ is contained in the plane $A$. If $a \wedge A \neq 0$, then the line $a$ and the plane $A$ form a volume, a grade $3$ object formed from grade $1$ and grade $2$ objects.

Proposition 1.3.3. Let $A_r$, $B_s$ and $C_t$ be $r$-, $s$- and $t$-blades, respectively. Then

$$A_r \wedge (B_s \wedge C_t) = (A_r \wedge B_s) \wedge C_t$$
$$A_r \wedge B_s = (-1)^{rs} B_s \wedge A_r$$

Proof. We follow [HS84]. By associativity and expanding each side of $(A_r B_s) C_t = A_r (B_s C_t)$ with the grade expansion identity (Proposition 1.2.18), we have
$$A_r \wedge (B_s \wedge C_t) = \langle A_r \langle B_s C_t \rangle_{s+t} \rangle_{r+(s+t)} = \langle A_r (B_s C_t) \rangle_{r+s+t} = \langle (A_r B_s) C_t \rangle_{(r+s)+t} = \langle \langle A_r B_s \rangle_{r+s} C_t \rangle_{(r+s)+t} = (A_r \wedge B_s) \wedge C_t.$$
Noting that reversion is an involution and grade-preserving, we further have
$$A_r \wedge B_s = \langle A_r B_s \rangle_{r+s} = \big( \langle (A_r B_s)^\dagger \rangle_{r+s} \big)^\dagger = (-1)^{\frac{(r+s)((r+s)-1)}{2}} (-1)^{\frac{s(s-1)}{2}} (-1)^{\frac{r(r-1)}{2}} \langle B_s A_r \rangle_{r+s} = (-1)^{rs} B_s \wedge A_r.$$

1.3.2 Linear independence and the wedge product

We establish the connection between linear independence and the wedge product of vectors when the vector space is finite dimensional.

Corollary 1.3.4. Let $a, b$ be vectors. Then
$$a \wedge b = -b \wedge a.$$

Consequently, $a \wedge a = 0$.

Proof. This follows immediately from Proposition 1.3.3.

We now show that the wedge of linearly dependent vectors is zero.

Proposition 1.3.5. If the collection of vectors $\{a_1, \dots, a_r\}$ is linearly dependent, then
$$a_1 \wedge \cdots \wedge a_r = 0.$$

Proof. Since $a_1, \dots, a_r$ are linearly dependent, there exist scalars $\lambda_1, \dots, \lambda_r$, not all zero, for which $\lambda_1 a_1 + \cdots + \lambda_r a_r = 0$. Without loss of generality suppose that $\lambda_1 \neq 0$. By Corollary 1.3.4,
$$a_1 \wedge a_2 \wedge \cdots \wedge a_r = \lambda_1^{-1} (\lambda_1 a_1) \wedge a_2 \wedge \cdots \wedge a_r = -\lambda_1^{-1} (\lambda_2 a_2 + \cdots + \lambda_r a_r) \wedge a_2 \wedge \cdots \wedge a_r = 0.$$

The converse of Proposition 1.3.5 will be given next, but first a lemma.

Lemma 1.3.6. Let $a_1 \cdots a_r$ be a representation of an $r$-blade. Then
$$a_1 \cdots a_r = a_1 \wedge \cdots \wedge a_r.$$

Proof. We proceed by induction on $r$. When $r = 1$ the result holds trivially. Suppose the result holds for any $r$-blade. By the induction hypothesis,
$$a_1 a_2 \cdots a_{r+1} = a_1 (a_2 \cdots a_{r+1}) = \langle a_1 (a_2 \cdots a_{r+1}) \rangle_{r-1} + \langle a_1 (a_2 \cdots a_{r+1}) \rangle_{r+1} = \langle a_1 a_2 \cdots a_{r+1} \rangle_{r-1} + \langle a_1 (a_2 \wedge \cdots \wedge a_{r+1}) \rangle_{r+1}$$
$$= a_1 \wedge (a_2 \wedge \cdots \wedge a_{r+1}) = a_1 \wedge a_2 \wedge \cdots \wedge a_{r+1},$$
where $\langle a_1 a_2 \cdots a_{r+1} \rangle_{r-1} = 0$ because $a_1 \cdots a_{r+1}$, being a product of mutually anti-commuting vectors, is an $(r+1)$-blade.

Corollary 1.3.7. Let $\dim \mathcal{G}_1 = n$, and let $A = \{a_1, \dots, a_r\}$ be a set of $r \leq n$ linearly independent vectors. Then
$$a_1 \wedge \cdots \wedge a_r \neq 0.$$

Proof. Since $A$ is linearly independent we may extend it to a basis of $\mathcal{G}_1$,
$$A \cup \{a_{r+1}, \dots, a_n\}.$$
Furthermore, by a lemma proved in the appendix, there exists an orthogonal non-null basis $\{e_1, \dots, e_n\}$ for $\mathcal{G}_1$ in which each basis vector squares to $\pm 1$. Let $L : \mathcal{G}_1 \to \mathcal{G}_1$ be the linear map defined by $a_k = L(e_k)$, $k = 1, \dots, n$. Since $L$ maps a basis to a basis, $L$ is non-singular. Suppose that
$$a_k = L(e_k) = \alpha_k^s e_s, \qquad k = 1, \dots, n.$$
By Lemma 1.3.6,
$$a_1 \wedge \cdots \wedge a_r \wedge \cdots \wedge a_n = \alpha_1^{j_1} e_{j_1} \wedge \cdots \wedge \alpha_n^{j_n} e_{j_n} = \sum_{\sigma \in S_n} (-1)^\sigma \alpha_1^{\sigma(1)} \cdots \alpha_n^{\sigma(n)} \, e_1 \wedge \cdots \wedge e_n = \det(L) \, e_1 \wedge \cdots \wedge e_n = \det(L) I,$$
where $I = e_1 \cdots e_n$. Since each basis vector $e_i$ squares to $\pm 1$, we have $I^2 = \pm 1$. Therefore, $I \neq 0$. Then, since also $\det(L) \neq 0$, we cannot have $a_1 \wedge \cdots \wedge a_r = 0$; else
$$\det(L) I = a_1 \wedge \cdots \wedge a_r \wedge \cdots \wedge a_n = (a_1 \wedge \cdots \wedge a_r) \wedge a_{r+1} \wedge \cdots \wedge a_n = 0,$$
a contradiction.

If the wedge of $r$ vectors is zero, then the vectors must be linearly dependent. We have the following result.

Proposition 1.3.8. $a_1 \wedge \cdots \wedge a_r = 0$ if and only if $\{a_1, \dots, a_r\}$ is linearly dependent.

In Chapter 2, Proposition 1.3.8 will allow the correspondence between blades and subspaces to be made.
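The determinant identity used in the proof above can be spot-checked numerically. The sketch below is not from the thesis; it implements only the outer product on a bitmask basis (a wedge of basis blades vanishes when they share a generator), with helper names of our own choosing.

```python
def reorder_sign(a, b):
    """Sign of reordering basis blades a, b (bitmasks) into ascending order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def wedge(A, B):
    """Outer product: keep only terms with no repeated basis vector."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & b:
                continue            # e_i ^ e_i = 0
            k = a | b
            out[k] = out.get(k, 0) + reorder_sign(a, b) * ca * cb
    return {k: v for k, v in out.items() if v}

# Rows of M as vectors a_k = sum_j M[k][j] e_{j+1}
M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
vecs = [{1 << j: M[k][j] for j in range(3)} for k in range(3)]

w = vecs[0]
for v in vecs[1:]:
    w = wedge(w, v)

# 3x3 determinant, expanded along the first row
det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
       - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
       + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

assert w == {0b111: det}   # a1 ^ a2 ^ a3 = det(M) e123
```

The assertion realizes the identity $a_1 \wedge \cdots \wedge a_n = \det(L)\, e_1 \wedge \cdots \wedge e_n$ for $n = 3$ with the $e_i$ orthonormal.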

1.3.3 The wedge of r vectors is an r-blade

We now show that the wedge of $r$ vectors is an $r$-blade. Recall some basic facts from linear algebra.

Lemma 1.3.9. Let $V$ be a finite-dimensional inner product space, and let $T$ be a self-adjoint linear operator on $V$. Then there is an orthonormal basis for $V$ consisting of characteristic vectors of $T$.

Translating Lemma 1.3.9 into the language of matrices, we have the following.

Corollary 1.3.10. Let $A$ be an $n \times n$ real symmetric matrix. Then there is an orthogonal matrix $P$ such that $P^T A P$ is diagonal.

Proposition 1.3.11. Let $\{a_1, \dots, a_r\}$ be a set of vectors of $\mathcal{G}_1$. Then $a_1 \wedge \cdots \wedge a_r$ is an $r$-blade.

Proof. We follow the strategy of [DL03]. If the set of vectors is linearly dependent, then $a_1 \wedge \cdots \wedge a_r = 0$, hence an $r$-blade. Suppose now that the vectors are linearly independent. Let $M$ be the matrix with entries $B(a_i, a_j)$. Then $M$ is real symmetric, so there exists an orthogonal matrix $P = (p^i_j)$ such that $P^T M P = D$ is diagonal. Let
$$e_k = p^j_k a_j, \qquad k = 1, \dots, r.$$
Then
$$B(e_k, e_s) = B(p^j_k a_j, p^i_s a_i) = p^j_k B(a_j, a_i) p^i_s = (P^T)_{kj} (M)_{ji} (P)_{is} = (D)_{ks}.$$
Hence, with respect to our bilinear form $B$, the vectors $e_1, \dots, e_r$ are orthogonal. This means

that
$$e_1 e_2 \cdots e_r = e_1 \wedge e_2 \wedge \cdots \wedge e_r = p^{j_1}_1 a_{j_1} \wedge \cdots \wedge p^{j_r}_r a_{j_r} = \sum_{\sigma \in S_r} (-1)^\sigma p^{\sigma(1)}_1 \cdots p^{\sigma(r)}_r \; a_1 \wedge \cdots \wedge a_r = \det(P)\, a_1 \wedge \cdots \wedge a_r,$$
where the first equality holds because the $e_k$ are orthogonal. We used here the anti-symmetry of the wedge product. Since $P$ is an orthogonal matrix, we have $\det(P) \neq 0$. Then $a_1 \wedge \cdots \wedge a_r = \det(P)^{-1} e_1 \cdots e_r$ is an $r$-blade.

1.3.4 Left/right contraction

The left/right contraction products might be less familiar operations. The contractions are not associative like the wedge product, but an identity will be established connecting the two products.

Definition. Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. Then the right contraction is
$$A_r \lrcorner B_s = \langle A_r B_s \rangle_{s-r}$$
(read "$A_r$ contracted onto $B_s$"), and the left contraction is
$$A_r \llcorner B_s = \langle A_r B_s \rangle_{r-s}.$$
Let $a, b$ be vectors. Then
$$a \lrcorner b = \langle ab \rangle_0 = B(a, b).$$
The right contraction between the vectors $a$ and $b$ is just the bilinear form $B$ evaluated on them. Furthermore, by definition, if $s < r$ then $A_r \lrcorner B_s = 0$. In general the right contraction

is not a commutative product. We extend the definition of the left/right contraction to multivectors by bilinearity.

Example. Consider $\mathcal{G}(\mathbb{R}^3)$, the geometric algebra from an earlier example. Let $A = e_{12}$ and $a = \alpha^1 e_1 + \alpha^2 e_2 + \alpha^3 e_3$. Observe
$$a \lrcorner A = \langle a A \rangle_1 = \langle (\alpha^1 e_1 + \alpha^2 e_2 + \alpha^3 e_3) e_{12} \rangle_1 = \langle \alpha^1 e_{112} + \alpha^2 e_{212} + \alpha^3 e_{312} \rangle_1 = \langle \alpha^1 e_2 - \alpha^2 e_1 + \alpha^3 e_{312} \rangle_1 = -\alpha^2 e_1 + \alpha^1 e_2.$$
Intuitively, this is a line in the plane $A$. Furthermore, observe
$$a \lrcorner (a \lrcorner A) = \langle (\alpha^1 e_1 + \alpha^2 e_2 + \alpha^3 e_3)(-\alpha^2 e_1 + \alpha^1 e_2) \rangle_0 = \langle -\alpha^1\alpha^2 e_{11} + \alpha^1\alpha^1 e_{12} - \alpha^2\alpha^2 e_{21} + \alpha^2\alpha^1 e_{22} - \alpha^3\alpha^2 e_{31} + \alpha^3\alpha^1 e_{32} \rangle_0 = -\alpha^1\alpha^2 + \alpha^2\alpha^1 = 0.$$
We interpret $a \lrcorner A$ as the line contained in the plane $A$ that is orthogonal to the line $a$. Consider also
$$A \lrcorner e_{123} = \langle e_{12} e_{123} \rangle_1 = -e_3.$$
A more general interpretation of this result is as follows: in the volume represented by $e_{123}$, the line represented by $-e_3$ is most unlike the plane represented by $A$.

Proposition. Let $A_r$, $B_s$ and $C_t$ be $r$-, $s$- and $t$-blades, respectively. Then
$$A_r \lrcorner (B_s \lrcorner C_t) = (A_r \wedge B_s) \lrcorner C_t.$$
Proof. We follow [HS84]. By associativity, and expanding each side of $(A_r B_s) C_t = A_r (B_s C_t)$

with the grade expansion identity, we have, if $t \geq s + r$,
$$A_r \lrcorner (B_s \lrcorner C_t) = A_r \lrcorner \langle B_s C_t \rangle_{t-s} = \langle A_r \langle B_s C_t \rangle_{t-s} \rangle_{t-s-r} = \langle A_r B_s C_t \rangle_{t-s-r} = \langle \langle A_r B_s \rangle_{s+r}\, C_t \rangle_{t-s-r} = \langle (A_r \wedge B_s) C_t \rangle_{t-(r+s)} = (A_r \wedge B_s) \lrcorner C_t$$
(only the lowest-grade part of each expansion can contribute to grade $t - s - r$), and, if $t < s + r$,
$$A_r \lrcorner (B_s \lrcorner C_t) = 0 = (A_r \wedge B_s) \lrcorner C_t,$$
since negative grades are zero. The identity just proven will be very useful in Chapter 2.

Proposition. Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. Then
$$A_r \llcorner B_s = (-1)^{s(r-1)} B_s \lrcorner A_r.$$
Proof. Recall that the reversion map is an involution and is grade preserving. Observe
$$A_r \llcorner B_s = \langle A_r B_s \rangle_{r-s} = \Big( \big( \langle A_r B_s \rangle_{r-s} \big)^{\sim} \Big)^{\sim} = (-1)^{\frac{(r-s)((r-s)-1)}{2}} (-1)^{\frac{r(r-1)}{2}} (-1)^{\frac{s(s-1)}{2}} \langle B_s A_r \rangle_{r-s} = (-1)^{s(r-1)} B_s \lrcorner A_r.$$

1.3.5 Scalar product

We introduce the scalar product to define the magnitude of a blade, which will be used in Chapter 3. The product of two multivectors will be shown to commute under the scalar

product. The scalar product is used heavily in [DL03].

Definition. Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. Then the scalar product is
$$A_r * B_s = \langle A_r B_s \rangle_0.$$
We extend the definition of the scalar product to multivectors by bilinearity.

1.3.6 Cyclic permutation of the scalar product

Lemma. Let $A_r$ and $B_r$ be $r$-blades. Then $A_r * B_r = B_r * A_r$.

Proof. Recall that the reversion map is an involution and that the scalar part of a multivector is unchanged by reversion. Observe
$$A_r * B_r = \langle A_r B_r \rangle_0 = \langle (A_r B_r)^{\sim} \rangle_0 = \langle \tilde B_r \tilde A_r \rangle_0 = (-1)^{\frac{r(r-1)}{2}} (-1)^{\frac{r(r-1)}{2}} \langle B_r A_r \rangle_0 = B_r * A_r.$$

Proposition. Let $A, B \in \mathcal{G}$. Then $A * B = B * A$.

Proof. Let $A$ and $B$ be given by their unique sums $A = \sum_r A_r$ and $B = \sum_s B_s$, respectively. Note first that $\langle A_r B_s \rangle_0 = 0$ unless $r = s$, by the grade expansion identity. By the lemma above,
$$\langle AB \rangle_0 = \sum_{r,s} \langle A_r B_s \rangle_0 = \sum_r \langle A_r B_r \rangle_0 = \sum_r \langle B_r A_r \rangle_0 = \sum_{s,r} \langle B_s A_r \rangle_0 = \langle BA \rangle_0.$$
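The scalar-part commutativity just proven (and the cyclic permutation it implies) is easy to test numerically. The sketch below hand-rolls the geometric product of $\mathcal{G}(\mathbb{R}^3)$ on canonical basis blades, assuming a Euclidean metric ($e_i^2 = +1$); the dictionary representation and all helper names are ours, not the thesis's:

```python
def blade_mul(I, J):
    """Geometric product of canonical basis blades (Euclidean metric).
    I, J are sorted tuples of generator indices; returns (sign, K)."""
    s, sign = list(I), 1
    for g in J:
        pos = len(s)
        while pos > 0 and s[pos - 1] > g:   # one sign flip per transposition
            pos -= 1
            sign = -sign
        if pos > 0 and s[pos - 1] == g:     # e_g e_g = +1
            del s[pos - 1]
        else:
            s.insert(pos, g)
    return sign, tuple(s)

def gp(A, B):
    """Geometric product of multivectors stored as {basis blade: coefficient}."""
    out = {}
    for I, a in A.items():
        for J, b in B.items():
            sgn, K = blade_mul(I, J)
            out[K] = out.get(K, 0) + sgn * a * b
    return out

def scalar_part(A):
    return A.get((), 0)

# some inhomogeneous multivectors in G(R^3); indices 0, 1, 2 stand for e1, e2, e3
A = {(): 1, (0,): 2, (0, 1): 3}
B = {(0,): 1, (0, 1): -2, (1, 2): 7}
C = {(): 5, (2,): -1, (0, 2): 2}

print(scalar_part(gp(A, B)) == scalar_part(gp(B, A)))                # True
print(scalar_part(gp(gp(A, B), C)) == scalar_part(gp(gp(B, C), A)))  # True
```

The second check is the cyclic permutation $\langle ABC \rangle_0 = \langle BCA \rangle_0$ stated as a corollary below.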

Corollary. Let $A_1, \dots, A_n \in \mathcal{G}$. Then
$$\langle A_1 A_2 A_3 \cdots A_n \rangle_0 = \langle A_2 A_3 \cdots A_n A_1 \rangle_0.$$
Products of multivectors can be cyclically permuted under the scalar product.

1.3.7 Signed magnitude

For an $r$-blade $A_r$ with a representation $A_r = a_1 \cdots a_r$,
$$A_r * \tilde A_r = \langle A_r \tilde A_r \rangle_0 = \langle a_1 \cdots a_r\, a_r \cdots a_1 \rangle_0 = a_1^2 \cdots a_r^2.$$
The scalar product defines, in some sense, a magnitude of a blade. But one must be careful, for the square of a factor may be non-positive.

Definition. Let $A_r$ be an $r$-blade. Then the (squared) magnitude of $A_r$ is
$$|A_r|^2 = A_r * \tilde A_r.$$
If $\lambda$ is a scalar,
$$|\lambda|^2 = \lambda * \tilde\lambda = \lambda^2;$$
if $a$ is a vector,
$$|a|^2 = a * \tilde a = a^2.$$
In general, for an $r$-blade,
$$|A_r|^2 = A_r * \tilde A_r = \langle A_r \tilde A_r \rangle_0 = (-1)^{\frac{r(r-1)}{2}} A_r^2.$$
In the case that an $r$-blade is invertible, we have $|A_r|^2 \neq 0$. So $|A_r|^2 = A_r \tilde A_r$, which means
$$A_r^{-1} = \frac{\tilde A_r}{|A_r|^2} = (-1)^{\frac{r(r-1)}{2}} \frac{A_r}{|A_r|^2}.$$
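The sign $(-1)^{r(r-1)/2}$ in the reversion comes from counting adjacent transpositions: reversing $r$ pairwise-anticommuting (orthogonal) factors costs one sign flip per swap, and $\binom{r}{2} = r(r-1)/2$ swaps are needed. A quick sketch confirming the count (the helper name is ours):

```python
def reversion_sign(r):
    """Sign picked up when a_1 a_2 ... a_r is rewritten as a_r ... a_2 a_1,
    assuming orthogonal factors, which anticommute pairwise."""
    order = list(range(r))
    target = order[::-1]
    sign = 1
    # bubble-sort into the reversed order, flipping sign per adjacent swap
    while order != target:
        for i in range(r - 1):
            if target.index(order[i]) > target.index(order[i + 1]):
                order[i], order[i + 1] = order[i + 1], order[i]
                sign = -sign
    return sign

for r in range(8):
    assert reversion_sign(r) == (-1) ** (r * (r - 1) // 2)
print([reversion_sign(r) for r in range(8)])  # [1, 1, -1, -1, 1, 1, -1, -1]
```

The $+, +, -, -$ pattern of period four explains why, for example, bivectors and trivectors reverse with a sign while vectors and 4-blades do not.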

The inverse of a blade is just a scalar multiple of the blade.

Example. Consider $\mathcal{G}(\mathbb{R}^2)$. Let $a = \alpha^1 e_1 + \alpha^2 e_2$ and $b = \beta^1 e_1 + \beta^2 e_2$. Then
$$a \wedge b = (\alpha^1 \beta^2 - \alpha^2 \beta^1) e_{12}.$$
The magnitude of $a \wedge b$ is
$$|a \wedge b|^2 = (a \wedge b) * (a \wedge b)^{\sim} = (\alpha^1 \beta^2 - \alpha^2 \beta^1)^2.$$
The squared magnitude of $a \wedge b$ may be interpreted as the square of the Euclidean area of the parallelogram with sides $a$ and $b$.

With the contraction and wedge products, we may write in our new symbols
$$a A_r = a \lrcorner A_r + a \wedge A_r, \qquad (a)$$
$$a \lrcorner A_r = \frac{1}{2}\big(a A_r - (-1)^r A_r a\big), \qquad (b)$$
$$a \wedge A_r = \frac{1}{2}\big(a A_r + (-1)^r A_r a\big). \qquad (c)$$

1.4 Additional identities

Proposition. Let $A_r$, $B_s$ and $a$ be an $r$-blade, an $s$-blade and a vector, respectively. Then
$$a \lrcorner (A_r B_s) = (a \lrcorner A_r) B_s + (-1)^r A_r (a \lrcorner B_s) = (a \wedge A_r) B_s - (-1)^r A_r (a \wedge B_s),$$
$$a \wedge (A_r B_s) = (a \wedge A_r) B_s - (-1)^r A_r (a \lrcorner B_s) = (a \lrcorner A_r) B_s + (-1)^r A_r (a \wedge B_s).$$
Proof. The first identity will be proven; the others follow similarly. Let $m = \min\{r, s\}$. We follow the strategy of [HS84]. By the grade expansion identity and

equation (b), the expansion $a \lrcorner A_r = \frac{1}{2}\big(aA_r - (-1)^r A_r a\big)$, we have
$$2a \lrcorner (A_r B_s) = \sum_{k=0}^{m} 2a \lrcorner \langle A_r B_s \rangle_{|r-s|+2k} = \sum_{k=0}^{m} \Big( a \langle A_r B_s \rangle_{|r-s|+2k} - (-1)^{|r-s|+2k} \langle A_r B_s \rangle_{|r-s|+2k}\, a \Big)$$
$$= a \sum_{k=0}^{m} \langle A_r B_s \rangle_{|r-s|+2k} - (-1)^{|r-s|} \sum_{k=0}^{m} \langle A_r B_s \rangle_{|r-s|+2k}\, a = a A_r B_s - (-1)^{r+s} A_r B_s a$$
$$= \big(a A_r - (-1)^r A_r a\big) B_s + (-1)^r A_r \big(a B_s - (-1)^s B_s a\big) = 2(a \lrcorner A_r) B_s + (-1)^r A_r\, 2(a \lrcorner B_s),$$
where we used $(-1)^{|r-s|} = (-1)^{r+s}$. Dividing by $2$ gives the identity.

Corollary. Let $A_r$, $B_s$ and $a$ be an $r$-blade, an $s$-blade and a vector, respectively. Then
$$a \lrcorner (A_r \wedge B_s) = (a \lrcorner A_r) \wedge B_s + (-1)^r A_r \wedge (a \lrcorner B_s),$$
$$a \wedge (A_r \lrcorner B_s) = (a \lrcorner A_r) \lrcorner B_s + (-1)^r A_r \lrcorner (a \wedge B_s).$$
Proof. The identities follow from the proposition above by projecting out and collecting the highest and lowest grades, respectively.

We introduce the notion of containment only briefly; much more will be said in Chapter 2.

Definition. Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. If $a \wedge A_r = 0$ implies $a \wedge B_s = 0$, we write $A_r \subseteq B_s$ and say that $A_r$ is contained in $B_s$.

Proposition. Let $A_r$, $B_s$ and $C_t$ be $r$-, $s$- and $t$-blades, respectively. If $A_r \subseteq C_t$, then
$$(A_r \lrcorner B_s) \lrcorner C_t = A_r \wedge (B_s \lrcorner C_t).$$

Proof. We proceed by induction on $r$. Let $r = 1$. By the identity $a \wedge (B_s \lrcorner C_t) = (a \lrcorner B_s) \lrcorner C_t + (-1)^s B_s \lrcorner (a \wedge C_t)$,
$$A_1 \wedge (B_s \lrcorner C_t) = (A_1 \lrcorner B_s) \lrcorner C_t + (-1)^s B_s \lrcorner (A_1 \wedge C_t) = (A_1 \lrcorner B_s) \lrcorner C_t,$$
since $A_1 \wedge A_1 = 0$ and $A_1 \subseteq C_t$ give $A_1 \wedge C_t = 0$.

Suppose the statement holds for some fixed $r$. Let $A_{r+1}$ be an $(r+1)$-blade with $A_{r+1} \subseteq C_t$. We may write $A_{r+1} = a \wedge A_r$, where $a$ is a vector and $A_r$ is an $r$-blade. Note that $a, A_r \subseteq C_t$. By the induction hypothesis, the base case, and the identity $A_r \lrcorner (B_s \lrcorner C_t) = (A_r \wedge B_s) \lrcorner C_t$,
$$A_{r+1} \wedge (B_s \lrcorner C_t) = a \wedge A_r \wedge (B_s \lrcorner C_t) = a \wedge \big((A_r \lrcorner B_s) \lrcorner C_t\big) = \big(a \lrcorner (A_r \lrcorner B_s)\big) \lrcorner C_t = \big((a \wedge A_r) \lrcorner B_s\big) \lrcorner C_t = (A_{r+1} \lrcorner B_s) \lrcorner C_t.$$
By the Principle of Mathematical Induction, the result holds for all $r \in \mathbb{N}$.

Lemma 1.4.5. Let $a, a_1, \dots, a_r \in \mathcal{G}^1$. Then
$$\frac{1}{2}\big(a\, a_1 \cdots a_r - (-1)^r a_1 \cdots a_r\, a\big) = \sum_{k=1}^{r} (-1)^{k-1} B_k\, a_1 \cdots \check a_k \cdots a_r,$$
where $B_k = B(a, a_k)$ and $\check a_k$ denotes omission of the factor $a_k$.

Proof. We induct on $r$, using the identity $ab + ba = 2B(a, b)$ for vectors. For $r = 1$ the result follows trivially. Suppose the statement holds for some $r$. By the induction hypothesis

and the identity $a\, a_{r+1} = 2B_{r+1} - a_{r+1}\, a$,
$$a\, a_1 a_2 \cdots a_r a_{r+1} = \Big( \sum_{k=1}^{r} (-1)^{k-1} 2B_k\, a_1 \cdots \check a_k \cdots a_r + (-1)^r a_1 \cdots a_r\, a \Big) a_{r+1}$$
$$= \sum_{k=1}^{r} (-1)^{k-1} 2B_k\, a_1 \cdots \check a_k \cdots a_r a_{r+1} + (-1)^r a_1 \cdots a_r\, a\, a_{r+1}$$
$$= \sum_{k=1}^{r} (-1)^{k-1} 2B_k\, a_1 \cdots \check a_k \cdots a_r a_{r+1} + (-1)^r a_1 \cdots a_r (2B_{r+1} - a_{r+1}\, a)$$
$$= \sum_{k=1}^{r+1} (-1)^{k-1} 2B_k\, a_1 \cdots \check a_k \cdots a_{r+1} + (-1)^{r+1} a_1 \cdots a_{r+1}\, a.$$
By the Principle of Mathematical Induction, the result holds for all $r \in \mathbb{N}$.

Note that the result holds for any product of vectors, whether or not they commute or anticommute, and does not depend on the vectors being invertible.

Proposition (Reduction Identity). Let $a$ be a vector and $A_r$ an $r$-blade with a representation $A_r = a_1 \wedge \cdots \wedge a_r$. Then
$$a \lrcorner A_r = \sum_{k=1}^{r} (-1)^{k-1} B(a, a_k)\, a_1 \wedge \cdots \wedge \check a_k \wedge \cdots \wedge a_r.$$
Proof. By equation (b) and Lemma 1.4.5,
$$a \lrcorner A_r = \frac{a A_r - (-1)^r A_r a}{2} = \sum_{k=1}^{r} (-1)^{k-1} B(a, a_k)\, a_1 \wedge \cdots \wedge \check a_k \wedge \cdots \wedge a_r.$$
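The Reduction Identity can be verified numerically. The sketch below implements the geometric product on canonical basis blades for a Euclidean metric (so $B$ is the ordinary dot product), builds the wedge of vectors as the top-grade part of successive geometric products, takes the contraction $a \lrcorner A_r = \langle a A_r \rangle_{r-1}$, and compares both sides of the identity; the representation and helper names are ours, not the thesis's:

```python
def blade_mul(I, J):
    """Geometric product of canonical basis blades, Euclidean metric.
    I, J are sorted tuples of generator indices; returns (sign, K)."""
    s, sign = list(I), 1
    for g in J:
        pos = len(s)
        while pos > 0 and s[pos - 1] > g:   # one sign flip per transposition
            pos -= 1
            sign = -sign
        if pos > 0 and s[pos - 1] == g:     # e_g e_g = +1
            del s[pos - 1]
        else:
            s.insert(pos, g)
    return sign, tuple(s)

def gp(A, B):
    """Geometric product of multivectors stored as {basis blade: coefficient}."""
    out = {}
    for I, a in A.items():
        for J, b in B.items():
            sgn, K = blade_mul(I, J)
            out[K] = out.get(K, 0) + sgn * a * b
    return {K: c for K, c in out.items() if c != 0}

def grade(A, k):
    return {I: c for I, c in A.items() if len(I) == k}

def vec(comps):
    return {(i,): c for i, c in enumerate(comps) if c != 0}

def wedge_vectors(vectors):
    """a_1 ^ ... ^ a_r as the top-grade part of successive products."""
    W = vec(vectors[0])
    for k, v in enumerate(vectors[1:], start=2):
        W = grade(gp(W, vec(v)), k)
    return W

def contract(a, A, r):
    """Right contraction a onto the r-blade A: the grade r-1 part of aA."""
    return grade(gp(vec(a), A), r - 1)

dot = lambda u, v: sum(x * y for x, y in zip(u, v))  # Euclidean B(u, v)

a  = [1, 2, 3, 4]
vs = [[1, 0, 1, 0], [0, 1, 1, 0], [2, 1, 0, 1]]

lhs = contract(a, wedge_vectors(vs), len(vs))
rhs = {}
for k in range(len(vs)):                  # (-1)^k here is (-1)^{k-1} 1-based
    coeff = (-1) ** k * dot(a, vs[k])
    for I, c in wedge_vectors(vs[:k] + vs[k + 1:]).items():
        rhs[I] = rhs.get(I, 0) + coeff * c
rhs = {I: c for I, c in rhs.items() if c != 0}
print(lhs == rhs)  # True
```

Integer inputs keep the comparison exact; the same check works for any number of factors and any dimension.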

Chapter 2
The geometry of blades

This chapter discusses properties of the lowest and highest grades of a product of blades: the contraction and the wedge. With the wedge product, the notion of containment will be introduced, which will allow the correspondence between blades and subspaces. The right contraction, which we shall call simply the contraction, will allow us to generalize orthogonality between blades. Much use will be made of the algebraic machinery established in Chapter 1.

2.1 Wedge product and containment

In this section a correspondence between subspaces and blades is developed.

Definition. Let $A_r$ and $B_s$ be $r$- and $s$-blades, respectively. Then $A_r$ is contained in $B_s$, written $A_r \subseteq B_s$, if $a \wedge A_r = 0$ implies $a \wedge B_s = 0$.

Containment is a transitive relation. That is, if $A_r \subseteq B_s$ and $B_s \subseteq C_t$, then $A_r \subseteq C_t$; for if $a \wedge A_r = 0$ then $a \wedge B_s = 0$, so $a \wedge C_t = 0$.

Example. Consider the geometric algebra $\mathcal{G}(\mathbb{R}^3)$. Let $A = e_{12}$ and suppose that $a \wedge A = 0$. We showed in an earlier example that this is equivalent to $a \in \langle e_1, e_2 \rangle$. Then $a = \alpha^1 e_1 + \alpha^2 e_2$ for some scalars $\alpha^1, \alpha^2$. Observe
$$a \wedge e_{123} = \langle (\alpha^1 e_1 + \alpha^2 e_2) e_{123} \rangle_4 = \langle \alpha^1 e_{23} + \alpha^2 e_{31} \rangle_4 = 0.$$
Therefore $A \subseteq e_{123}$. Intuitively, $e_{123}$ represents a volume containing the plane $A$.

We show that containment in the sense of the wedge product means containment in the sense of the span of a collection of vectors.

Proposition 2.1.3. Let $A_r$ be a non-zero $r$-blade with a representation $A_r = a_1 \wedge \cdots \wedge a_r$. Then $b \wedge A_r = 0$ if and only if $b \in \langle a_1, \dots, a_r \rangle$.

Proof. If $b \in \langle a_1, \dots, a_r \rangle$, then the set $\{b, a_1, \dots, a_r\}$ is linearly dependent. By Proposition 1.3.8, $b \wedge A_r = 0$. Conversely, suppose that $b \wedge A_r = 0$. By Proposition 1.3.8, the set $\{b, a_1, \dots, a_r\}$ is linearly dependent, and since $\{a_1, \dots, a_r\}$ is linearly independent we have $b \in \langle a_1, \dots, a_r \rangle$.

Definition. Let $A_r$ be an $r$-blade. Then
$$\mathcal{G}^1(A_r) = \{a \in \mathcal{G}^1 : a \wedge A_r = 0\}.$$
We shall call $\mathcal{G}^1(A_r)$ the subspace representation of $A_r$.

Corollary 2.1.5. Let $A_r$ be a non-zero $r$-blade with a representation $A_r = a_1 \wedge \cdots \wedge a_r$. Then $\mathcal{G}^1(A_r) = \langle a_1, \dots, a_r \rangle$.

Proof. This follows immediately from Proposition 2.1.3.

Given a blade, by Corollary 2.1.5, there is a corresponding subspace. In the next section we show the converse.

2.1.1 Blades represent oriented weighted subspaces

With the notion of containment we now establish a correspondence between a subspace and a blade.

Proposition 2.1.6. Let $H$ be a subspace of vectors with $\dim H = m$. Then there exists a non-zero $m$-blade $H_m$ for which $\mathcal{G}^1(H_m) = H$.

Proof. There exists a basis $\{h_1, \dots, h_m\}$ for $H$. By the proposition of Section 1.3.3, $H_m = h_1 \wedge \cdots \wedge h_m$

is an $m$-blade, and it is non-zero, by Proposition 1.3.8, since the $h_k$ are linearly independent. By Proposition 2.1.3, $\mathcal{G}^1(H_m) = H$.

This is a key result, for it says that given a subspace there is a corresponding blade, and that blade is the wedge of the basis elements. This allows one to do algebra with subspaces.

Example. Consider the geometric algebra $\mathcal{G}(\mathbb{R}^3)$ and the subspace $H = \langle e_1, e_2 \rangle$. A blade representing the subspace is $e_1 \wedge e_2 = e_{12}$. With this association, we may interpret the subspace as having an orientation, from $e_1$ to $e_2$, and a weight given by the area of the parallelogram determined by $e_1$ and $e_2$, which is $1$.

Consider the subspace $L = \langle e_1 \rangle$. A blade representing the subspace is $e_1$. With this association, we may interpret the line as having the orientation specified by $e_1$ and weight $1$. We could also use the blade $2e_1$. In this instance, the orientation is that of $e_1$, and the weight is twice that of $e_1$, or $2$.

Note the correspondence between the grade of a blade and the dimension of a subspace. It will now be shown that any blade representing a subspace has grade equal to the dimension of the subspace.

Proposition. Let $H$ be a subspace of vectors with $\dim H = m$, and let $A_k$ be a non-zero $k$-blade. If $\mathcal{G}^1(A_k) = H$, then $k = m$.

Proof. Suppose that $A_k$ has a representation $A_k = a_1 \wedge \cdots \wedge a_k$ and that $\{h_1, \dots, h_m\}$ is a basis of $H$. By our hypothesis, $\langle h_1, \dots, h_m \rangle = \langle a_1, \dots, a_k \rangle$. Since the dimension of a vector space is unique and the $a_k$'s are linearly independent, we conclude that $k = m$.

We now show that two blades representing the same subspace are scalar multiples of each other.

Proposition. If $A_m$ and $B_m$ are non-zero $m$-blades that represent the same subspace of dimension $m$, then $B_m = \lambda A_m$ for some $\lambda \in \mathbb{R}$. Conversely, if $B_m = \lambda A_m$, then $\mathcal{G}^1(B_m) = \mathcal{G}^1(A_m)$.

Proof. By Proposition 2.1.6, the blades $A_m$ and $B_m$ have the form
$$A_m = h_1 \wedge \cdots \wedge h_m, \qquad B_m = h'_1 \wedge \cdots \wedge h'_m,$$
where $\{h_1, \dots, h_m\}$ and $\{h'_1, \dots, h'_m\}$ are bases for the subspace. Then $h'_k \in \langle h_1, \dots, h_m \rangle$, $k = 1, \dots, m$. Therefore there exist scalars $\eta$ for which $h'_k = \eta^s_k h_s$ (sum on $s$), $k = 1, \dots, m$. Hence,
$$B_m = h'_1 \wedge \cdots \wedge h'_m = \eta^{j_1}_1 h_{j_1} \wedge \cdots \wedge \eta^{j_m}_m h_{j_m} = \sum_{\sigma \in S_m} (-1)^\sigma \eta^{\sigma(1)}_1 \cdots \eta^{\sigma(m)}_m \; h_1 \wedge \cdots \wedge h_m = \lambda A_m,$$
where $\lambda = \sum_{\sigma \in S_m} (-1)^\sigma \eta^{\sigma(1)}_1 \cdots \eta^{\sigma(m)}_m$, the determinant of the change of basis $(\eta^s_k)$. The converse of the statement is trivial to prove.

If we take $A_m$ in the proposition above to have unit weight, then $B_m$ will have a weight of $\lambda$. The wedge product thus allows the correspondence between blades and subspaces. Even though the correspondence is not unique, it is essentially unique up to a scalar multiple. A blade can be thought of as an oriented weighted $r$-dimensional subspace.
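Since $\lambda$ is the determinant of the change of basis, the proposition can be checked in the minor coordinates of the wedge: replacing the basis rows $H$ by $\eta H$ scales every coordinate of the wedge by $\det \eta$. A small numpy sketch (the helper, the example basis, and $\eta$ are ours, not the thesis's):

```python
import itertools
import numpy as np

def wedge_coeffs(rows):
    """Coordinates of h_1 ^ ... ^ h_m on the basis m-blades: the m x m minors."""
    A = np.asarray(rows, dtype=float)
    m, n = A.shape
    return np.array([np.linalg.det(A[:, list(J)])
                     for J in itertools.combinations(range(n), m)])

H = np.array([[1., 0, 0, 1],
              [0., 1, 0, 2],
              [0., 0, 1, 3]])     # basis h_1, h_2, h_3 of a 3-subspace of R^4
eta = np.array([[2., 1, 0],
                [0., 1, 0],
                [1., 0, 1]])      # invertible change of basis
Hp = eta @ H                      # h'_k = eta^s_k h_s spans the same subspace

lam = np.linalg.det(eta)          # here det(eta) = 2
print(np.allclose(wedge_coeffs(Hp), lam * wedge_coeffs(H)))  # True
```

Both blades have the same zero/non-zero pattern of coordinates up to the common factor $\lambda$, which is the "weight" rescaling described above.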

2.1.2 Direct sum

We show, under a simple condition, that the wedge of two blades may be interpreted as the direct sum of their representative subspaces. We begin with a vector and a blade, then generalize to a blade with a blade.

Example. Consider $\mathcal{G}(\mathbb{R}^3)$. Informally, if we interpret $e_1$ and $e_2$ as lines and $e_1 \wedge e_2$ as representing a plane containing $e_1$ and $e_2$, we may view $e_1 \wedge e_2$ as the direct sum of the lines $e_1$ and $e_2$.

Often we do not need a representation for a blade. When this is the case we will use the symbol $A_r$ to denote the collection of its factors; for example, we may say that $\{a, A_r\}$ is a linearly dependent set, which means that if $A_r$ has a representation $A_r = a_1 \wedge \cdots \wedge a_r$, then $\{a, a_1, \dots, a_r\}$ is a linearly dependent set.

Lemma. Let $a$ and $A_r$ be a vector and an $r$-blade, respectively. Then there exists a non-zero vector $b \in \langle a \rangle \cap \mathcal{G}^1(A_r)$ if and only if $a \wedge A_r = 0$.

Proof. Suppose there exists a non-zero vector $b \in \langle a \rangle \cap \mathcal{G}^1(A_r)$. By Proposition 2.1.3, $b$ lies in the span of $a$ and in the span of the factors of $A_r$. Then the set $\{a, A_r\}$ is linearly dependent, so that, by Corollary 1.3.8, $a \wedge A_r = 0$.

Conversely, suppose that $a \wedge A_r = 0$. Then $\{a, A_r\}$ is linearly dependent. Suppose that $A_r$ has a representation $A_r = a_1 \wedge \cdots \wedge a_r$. Then there exist scalars $\lambda, \lambda^1, \dots, \lambda^r$, not all zero, for which
$$\lambda a + \lambda^1 a_1 + \cdots + \lambda^r a_r = 0.$$
The set $\{a_1, \dots, a_r\}$ is linearly independent, so $\lambda \neq 0$ and there is a $\lambda^k \neq 0$. Then the vector
$$b = \lambda a = -(\lambda^1 a_1 + \cdots + \lambda^r a_r)$$
is contained in $\langle a \rangle$ and in $\mathcal{G}^1(A_r)$, and is non-zero.

Proposition. Let $a$ and $A_r$ be a vector and an $r$-blade, respectively. If $a \wedge A_r \neq 0$, then
$$\langle a \rangle \oplus \mathcal{G}^1(A_r) = \mathcal{G}^1(a \wedge A_r).$$
Proof. Since $a \wedge A_r \neq 0$, by the lemma above, $\langle a \rangle \cap \mathcal{G}^1(A_r) = \{0\}$.


More information

Notes on Linear Algebra and Matrix Theory

Notes on Linear Algebra and Matrix Theory Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

LIE ALGEBRAS: LECTURE 7 11 May 2010

LIE ALGEBRAS: LECTURE 7 11 May 2010 LIE ALGEBRAS: LECTURE 7 11 May 2010 CRYSTAL HOYT 1. sl n (F) is simple Let F be an algebraically closed field with characteristic zero. Let V be a finite dimensional vector space. Recall, that f End(V

More information

Dot Products, Transposes, and Orthogonal Projections

Dot Products, Transposes, and Orthogonal Projections Dot Products, Transposes, and Orthogonal Projections David Jekel November 13, 2015 Properties of Dot Products Recall that the dot product or standard inner product on R n is given by x y = x 1 y 1 + +

More information

MAT 2037 LINEAR ALGEBRA I web:

MAT 2037 LINEAR ALGEBRA I web: MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear

More information

QUATERNIONS AND ROTATIONS

QUATERNIONS AND ROTATIONS QUATERNIONS AND ROTATIONS SVANTE JANSON 1. Introduction The purpose of this note is to show some well-known relations between quaternions and the Lie groups SO(3) and SO(4) (rotations in R 3 and R 4 )

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Math 52H: Multilinear algebra, differential forms and Stokes theorem. Yakov Eliashberg

Math 52H: Multilinear algebra, differential forms and Stokes theorem. Yakov Eliashberg Math 52H: Multilinear algebra, differential forms and Stokes theorem Yakov Eliashberg March 202 2 Contents I Multilinear Algebra 7 Linear and multilinear functions 9. Dual space.........................................

More information

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same. Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read

More information

Representations of quivers

Representations of quivers Representations of quivers Gwyn Bellamy October 13, 215 1 Quivers Let k be a field. Recall that a k-algebra is a k-vector space A with a bilinear map A A A making A into a unital, associative ring. Notice

More information

Orthogonal Projection and Least Squares Prof. Philip Pennance 1 -Version: December 12, 2016

Orthogonal Projection and Least Squares Prof. Philip Pennance 1 -Version: December 12, 2016 Orthogonal Projection and Least Squares Prof. Philip Pennance 1 -Version: December 12, 2016 1. Let V be a vector space. A linear transformation P : V V is called a projection if it is idempotent. That

More information

Geometric Algebra. Gary Snethen Crystal Dynamics

Geometric Algebra. Gary Snethen Crystal Dynamics Geometric Algebra Gary Snethen Crystal Dynamics gsnethen@crystald.com Questions How are dot and cross products related? Why do cross products only exist in 3D? Generalize cross products to any dimension?

More information

Math 321: Linear Algebra

Math 321: Linear Algebra Math 32: Linear Algebra T. Kapitula Department of Mathematics and Statistics University of New Mexico September 8, 24 Textbook: Linear Algebra,by J. Hefferon E-mail: kapitula@math.unm.edu Prof. Kapitula,

More information

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT Math Camp II Basic Linear Algebra Yiqing Xu MIT Aug 26, 2014 1 Solving Systems of Linear Equations 2 Vectors and Vector Spaces 3 Matrices 4 Least Squares Systems of Linear Equations Definition A linear

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Linear Algebra and Eigenproblems

Linear Algebra and Eigenproblems Appendix A A Linear Algebra and Eigenproblems A working knowledge of linear algebra is key to understanding many of the issues raised in this work. In particular, many of the discussions of the details

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

Homework 11 Solutions. Math 110, Fall 2013.

Homework 11 Solutions. Math 110, Fall 2013. Homework 11 Solutions Math 110, Fall 2013 1 a) Suppose that T were self-adjoint Then, the Spectral Theorem tells us that there would exist an orthonormal basis of P 2 (R), (p 1, p 2, p 3 ), consisting

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date May 9, 29 2 Contents 1 Motivation for the course 5 2 Euclidean n dimensional Space 7 2.1 Definition of n Dimensional Euclidean Space...........

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

IRREDUCIBLE REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS. Contents

IRREDUCIBLE REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS. Contents IRREDUCIBLE REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS NEEL PATEL Abstract. The goal of this paper is to study the irreducible representations of semisimple Lie algebras. We will begin by considering two

More information

Homework Set #8 Solutions

Homework Set #8 Solutions Exercises.2 (p. 19) Homework Set #8 Solutions Assignment: Do #6, 8, 12, 14, 2, 24, 26, 29, 0, 2, 4, 5, 6, 9, 40, 42 6. Reducing the matrix to echelon form: 1 5 2 1 R2 R2 R1 1 5 0 18 12 2 1 R R 2R1 1 5

More information

Lecture 11: Clifford algebras

Lecture 11: Clifford algebras Lecture 11: Clifford algebras In this lecture we introduce Clifford algebras, which will play an important role in the rest of the class. The link with K-theory is the Atiyah-Bott-Shapiro construction

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

Elementary maths for GMT

Elementary maths for GMT Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

More information

Review Let A, B, and C be matrices of the same size, and let r and s be scalars. Then

Review Let A, B, and C be matrices of the same size, and let r and s be scalars. Then 1 Sec 21 Matrix Operations Review Let A, B, and C be matrices of the same size, and let r and s be scalars Then (i) A + B = B + A (iv) r(a + B) = ra + rb (ii) (A + B) + C = A + (B + C) (v) (r + s)a = ra

More information

Clifford Analysis, Homework 1

Clifford Analysis, Homework 1 Clifford Analysis, Homework 1 November 1, 017 1 Let w v 1... v k, for vectors v j. Show that ŵ is the result of negating the vectors: ŵ ( v 1 )... ( v k ). Show that w is the result of changing the order

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

= ϕ r cos θ. 0 cos ξ sin ξ and sin ξ cos ξ. sin ξ 0 cos ξ

= ϕ r cos θ. 0 cos ξ sin ξ and sin ξ cos ξ. sin ξ 0 cos ξ 8. The Banach-Tarski paradox May, 2012 The Banach-Tarski paradox is that a unit ball in Euclidean -space can be decomposed into finitely many parts which can then be reassembled to form two unit balls

More information

Matrix Algebra: Vectors

Matrix Algebra: Vectors A Matrix Algebra: Vectors A Appendix A: MATRIX ALGEBRA: VECTORS A 2 A MOTIVATION Matrix notation was invented primarily to express linear algebra relations in compact form Compactness enhances visualization

More information

Matrices. Chapter Definitions and Notations

Matrices. Chapter Definitions and Notations Chapter 3 Matrices 3. Definitions and Notations Matrices are yet another mathematical object. Learning about matrices means learning what they are, how they are represented, the types of operations which

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

Numerical Linear Algebra

Numerical Linear Algebra University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices

More information

Chapter 2: Linear Independence and Bases

Chapter 2: Linear Independence and Bases MATH20300: Linear Algebra 2 (2016 Chapter 2: Linear Independence and Bases 1 Linear Combinations and Spans Example 11 Consider the vector v (1, 1 R 2 What is the smallest subspace of (the real vector space

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Linear Algebra A Brief Reminder Purpose. The purpose of this document

More information

Mathematics for Graphics and Vision

Mathematics for Graphics and Vision Mathematics for Graphics and Vision Steven Mills March 3, 06 Contents Introduction 5 Scalars 6. Visualising Scalars........................ 6. Operations on Scalars...................... 6.3 A Note on

More information