THE GEOMETRY IN GEOMETRIC ALGEBRA

A THESIS

Presented to the Faculty of the University of Alaska Fairbanks in Partial Fulfillment of the Requirements for the Degree of

MASTER OF SCIENCE

By Kristopher N. Kilpatrick, B.S.

Fairbanks, Alaska

December 2014

Abstract

We present an axiomatic development of geometric algebra. One may think of a geometric algebra as allowing one to add and multiply subspaces of a vector space. Properties of the geometric product are proven, and derived products called the wedge and contraction products are introduced. Linear algebraic and geometric concepts such as linear independence and orthogonality may be expressed through these derived products. Some examples with geometric algebra are then given.

Table of Contents

Signature Page
Title Page
Abstract
Table of Contents
Preface
Chapter 1: Preliminaries
  1.1 Definitions
  1.2 Grades resulting from multiplication
    1.2.1 Multiplication of two vectors
    1.2.2 Multiplication of a vector and a blade
    1.2.3 Reversion
    1.2.4 Multiplication of two blades
  1.3 Products derived from the geometric product
    1.3.1 Outer or wedge product
    1.3.2 Linear independence and the wedge product
    1.3.3 The wedge of r vectors is an r-blade
    1.3.4 Left/right contraction
    1.3.5 Scalar product
    1.3.6 Cyclic permutation of the scalar product
    1.3.7 Signed magnitude
  1.4 Additional identities
Chapter 2: The geometry of blades
  2.1 Wedge product and containment
    2.1.1 Blades represent oriented weighted subspaces
    2.1.2 Direct sum
  2.2 Contraction and orthogonality
    2.2.1 The contraction of blades is a blade
    2.2.2 Orthogonal complement
  2.3 Geometric, wedge and contraction relations
  2.4 Duality
  2.5 Projection operator
Chapter 3: Examples with geometric algebra
  3.1 Lines and planes
    3.1.1 Lines
    3.1.2 Planes
    3.1.3 The point of intersection of a line and a plane
  3.2 The Kepler Problem
  3.3 Reflections and rotations
    3.3.1 Reflections
    3.3.2 Rotations
  3.4 Finding the components of a vector
Chapter 4: Appendix
  4.1 Construction of a geometric algebra
References

Preface

The birth of Clifford algebra can be attributed to three mathematicians: Hermann Günther Grassmann, William Rowan Hamilton and William Kingdon Clifford. Grassmann contributed greatly to the early theory of linear algebra, and one of those contributions was the exterior, or wedge, product. Hamilton invented the quaternions, a way to extend the complex numbers into four dimensions. Clifford synthesized the works of the two mathematicians into an algebra that he named geometric algebra.

The purpose of this thesis is to come to terms with geometric algebra. I originally became interested in geometric algebra through my undergraduate physics teacher, who informed me that geometric algebra would be the language used by physicists. I greatly admired that man, and so I picked up the book he recommended, Clifford Algebra to Geometric Calculus by David Hestenes and Garret Sobczyk [HS84]. I was taken aback by the plethora of identities with no meaning whatsoever behind them. My advisor David Maxwell has helped me greatly by asking me questions that I could not answer and could not find answers to in [HS84]. That book is written more for physicists than mathematicians, I believe. I realized that one way to understand this subject would be to start, as a mathematician should, from a set of axioms and build up the theory of geometric algebra.

Chapter 1 deals with establishing the product of two vectors, then the product of a vector and a blade, and finally the product of a blade with a blade. The wedge, left/right contraction and scalar products are introduced and identities involving them are derived. Chapter 2 deals with establishing the correspondence between a subspace and a blade. Subspaces are then studied with the wedge and right contraction products. Chapter 3 deals with lines and planes, the Kepler problem, rotations and reflections, and finding components of a vector via the wedge product.

My indebtedness goes to David Maxwell and John Rhodes for their numerous conversations about many mathematical topics and their patience with me. Finally, I should like to express my thanks to the University of Alaska Fairbanks for their financial support of my academic studies.

Chapter 1: Preliminaries

We begin by defining a Clifford algebra over the reals, or as Clifford called it, a geometric algebra; henceforth we shall also use the name geometric algebra. After defining a geometric algebra, we establish the result of the product of two blades. We find that the product of two blades always has a highest and a lowest grade. New products will be defined based on the highest and lowest grades, and useful identities between the products will be established that facilitate quick and efficient computations.

1.1 Definitions

There are two common approaches to defining a geometric algebra. The axiomatic treatments found in [HS84] and [DL03] have the merit of being more accessible, but these works lack full rigor. On the other hand, the treatment in [Che97] is mathematically rigorous, but its abstract, formal style lacks accessibility. Our approach is intermediate between the two. We start with a set of axioms inspired by [HS84], but modified so as to allow for a rigorous subsequent development.

Definition 1.1.1. A geometric algebra $\mathcal{G}$ is an algebra, with identity, over $\mathbb{R}$ with the following additional structure:

1) There are distinguished subspaces $\mathcal{G}^0, \mathcal{G}^1, \ldots$ such that $\mathcal{G} = \mathcal{G}^0 \oplus \mathcal{G}^1 \oplus \cdots$.

2) $1 \in \mathcal{G}^0$.

3) $\mathcal{G}^1$ is equipped with a non-degenerate, symmetric bilinear form $B$. Recall that a symmetric bilinear form $B$ on a vector space $V$ is non-degenerate if $B(x, y) = 0$ for all $y \in V$ implies $x = 0$.

4) For all $a \in \mathcal{G}^1$, $a^2 = B(a, a)1 \in \mathcal{G}^0$.

5) For each integer $r \geq 2$, $\mathcal{G}^r$ is spanned by all $r$-blades, where an $r$-blade is a product of $r$ mutually anti-commuting elements of $\mathcal{G}^1$. Recall that two elements $a, b$ anti-commute if $ab = -ba$.

Remark. The explicit multiplication by $1$ will not be written. We shall write $a^2 = B(a, a)$ instead of $a^2 = B(a, a)1$; that is, we identify $\mathbb{R} \subseteq \mathcal{G}^0$.

Definition 1.1.2. Elements of $\mathcal{G}$ will be called multivectors. Elements of $\mathcal{G}^r$ will be called $r$-vectors and will be said to have grade $r$. Elements of $\mathcal{G}^0$ will be called scalars; elements of $\mathcal{G}^1$ will be called vectors. A general element of $\mathcal{G}$ is then a sum of $r$-vectors, where each $r$-vector is a sum of $r$-blades. By our definition, a scalar is a $0$-blade and a vector is a $1$-blade.

Definition 1.1.3. Let $A_r$ be an $r$-blade. A representation of $A_r$ is a set $\{a_1, \ldots, a_r\}$ of mutually anti-commuting vectors such that $A_r = a_1 \cdots a_r$. Each $a_i$ is called a factor of the representation of $A_r$.

Intuitively, an $r$-blade may be thought of as a weighted, oriented $r$-dimensional subspace spanned by its factors.

Example 1.1.4. Let $a$ be a $1$-blade, or a vector. We view this as an arrow with an orientation specified by the direction of $a$. Let $\lambda$ be a positive real number. Then $\lambda a$ is a scaling of $a$. We may think of $\lambda a$ as having a weight of $\lambda$, relative to $a$.

Example 1.1.5. Let $a_1 a_2$ be a $2$-blade. We view this as a plane spanned by $a_1, a_2$ with an orientation from $a_1$ to $a_2$. Let $\lambda$ be a positive real number. Then $\lambda a_1 a_2$ has a weight $\lambda$, relative to $a_1 a_2$.

These informal ideas will become more precise later. The bilinear form $B$ allows one to associate lengths and spatial relations between vectors. One particular spatial relation is orthogonality.

Definition 1.1.6. Let $a, b$ be vectors. If $B(a, b) = 0$ we say that $a$ and $b$ are orthogonal. If $b^2 = B(b, b) = 0$ we say that $b$ is a null vector.

A null vector is then orthogonal to itself. We will later give an example of a geometric algebra containing null vectors. Let us look at some examples of geometric algebras. In the following, we use $\langle A \rangle$ to denote the span of the elements of $A$.

Example 1.1.7. Consider $\mathbb{C}$ as a vector space over $\mathbb{R}$. Let $\mathcal{G}^0 = \langle 1 \rangle$ and $\mathcal{G}^1 = \langle i \rangle$. Then $\mathbb{C} = \mathcal{G}^0 \oplus \mathcal{G}^1$. Equip $\mathcal{G}^1$ with the bilinear form $B$ defined by $B(i, i) = -1$. Then $\mathbb{C}$ is a geometric algebra by a straightforward verification.

We now construct a geometric algebra with a subspace $\mathcal{G}^2$.

Example 1.1.8. Let $\mathcal{G}$ be the free $\mathbb{R}$-module over the set of formal symbols $\{1, e_1, e_2, e_{12}\}$. Let $\mathcal{G}^1 = \langle e_1, e_2 \rangle$ be equipped with the symmetric bilinear form $B$ such that $B(e_1, e_1) = B(e_2, e_2) = 1$ and $B(e_1, e_2) = 0$.

Note that $B$ is non-degenerate. Multiplication is defined by the following table:

          1       e_1     e_2     e_12
  1       1       e_1     e_2     e_12
  e_1     e_1     1       e_12    e_2
  e_2     e_2    -e_12    1      -e_1
  e_12    e_12   -e_2     e_1    -1

It is a trivial but tedious exercise to show that the set $\{\pm 1, \pm e_1, \pm e_2, \pm e_{12}\}$ with this multiplication forms a group. Extending multiplication over addition by bilinearity defines a geometric algebra on $\mathcal{G}$, as can be shown by straightforward calculations. Since $e_1 e_2 = -e_2 e_1$ and $e_{12} = e_1 e_2$, by definition $e_{12}$ is a $2$-blade. It is straightforward to show that $\mathcal{G}^2 = \langle e_{12} \rangle$. Let $\mathcal{G}^0 = \langle 1 \rangle$; then $\mathcal{G} = \mathcal{G}^0 \oplus \mathcal{G}^1 \oplus \mathcal{G}^2$.

Let $a, b \in \mathcal{G}^1$. Then
$$a = \alpha_1 e_1 + \alpha_2 e_2, \qquad b = \beta_1 e_1 + \beta_2 e_2, \qquad \alpha_1, \alpha_2, \beta_1, \beta_2 \in \mathbb{R}.$$
Observe
$$ab = (\alpha_1 e_1 + \alpha_2 e_2)(\beta_1 e_1 + \beta_2 e_2) = (\alpha_1\beta_1 + \alpha_2\beta_2) + (\alpha_1\beta_2 - \alpha_2\beta_1)e_1 e_2.$$
The product $ab$ contains a scalar and a $2$-blade. The scalar is the familiar dot product from Euclidean geometry. The coefficient of $e_1 e_2$ can be interpreted as the signed area between $a$ and $b$, signed in the sense of an orientation from $e_1$ to $e_2$. The geometric product between two vectors thus measures, in some sense, how parallel and how orthogonal the two vectors $a$ and $b$ are: if $a$ is a scalar multiple of $b$, then the coefficient of $e_1 e_2$ is zero and $ab = B(a, b)$; if $B(a, b) = 0$, then $ab = (\alpha_1\beta_2 - \alpha_2\beta_1)e_1 e_2$.
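The computations of Example 1.1.8 are easy to mechanize. The following is a minimal sketch of ours (not part of the thesis): a geometric product over an orthogonal basis, representing each basis blade $e_{i_1 \cdots i_k}$ as a bitmask and a multivector as a dictionary of blade coefficients. The function names and data layout are our own illustrative choices.

```python
# Sketch (not from the thesis): geometric product over an orthogonal
# basis e_1, ..., e_n with e_i^2 = sig[i].  A basis blade e_{i1...ik}
# is a bitmask with bits i1,...,ik set; a multivector is a dict
# {bitmask: coefficient}.

def reorder_sign(a, b):
    """Sign from sorting the generators of e_A e_B into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def blade_product(a, b, sig):
    """Product of two basis blades: returns (sign, resulting bitmask)."""
    sign = reorder_sign(a, b)
    common = a & b                  # repeated generators square to sig[i]
    i = 0
    while common:
        if common & 1:
            sign *= sig[i]
        common >>= 1
        i += 1
    return sign, a ^ b

def gp(A, B, sig):
    """Geometric product of multivectors, extended by bilinearity."""
    out = {}
    for ka, va in A.items():
        for kb, vb in B.items():
            s, k = blade_product(ka, kb, sig)
            out[k] = out.get(k, 0.0) + s * va * vb
    return {k: v for k, v in out.items() if v != 0.0}

# Reproduce the multiplication table of Example 1.1.8 in G(R^2):
sig = [1, 1]                        # e_1^2 = e_2^2 = 1
e1, e2, e12 = {0b01: 1.0}, {0b10: 1.0}, {0b11: 1.0}
assert gp(e2, e1, sig) == {0b11: -1.0}     # e2 e1 = -e12
assert gp(e12, e12, sig) == {0b00: -1.0}   # e12^2 = -1

# ab = (dot product) + (signed area) e12:
a = {0b01: 2.0, 0b10: 1.0}          # a = 2 e1 + e2
b = {0b01: 1.0, 0b10: 3.0}          # b = e1 + 3 e2
print(gp(a, b, sig))                # {0: 5.0, 3: 5.0}, i.e. 5 + 5 e12
```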

Finally, notice that $\mathcal{G}^+ = \langle 1 \rangle \oplus \langle e_{12} \rangle$ is a subalgebra of $\mathcal{G}$. A straightforward verification shows that the mapping defined by $1 \mapsto 1$, $i \mapsto e_{12}$ from $\mathbb{C}$ to $\mathcal{G}^+$ is an isomorphism. Therefore, $\mathcal{G}$ contains the complex numbers, and the unit $i$ now has a geometric interpretation as a plane. This example should be thought of as the geometric algebra, generated by $e_1$ and $e_2$, of the plane represented by $e_1 e_2$. We denote this geometric algebra by $\mathcal{G}(\mathbb{R}^2)$.

The construction of a geometric algebra associated with a finite dimensional vector space, as presented in Example 1.1.8, can be a laborious task when the dimension of the vector space is large. In the appendix we prove a theorem about the structure of the geometric algebra associated with a finite dimensional vector space; its statement is the following.

Theorem 4.1.8. Let $V$ be a finite dimensional vector space over $\mathbb{R}$ equipped with a non-degenerate symmetric bilinear form $B$. Then there exists an orthogonal non-null basis $\{e_1, \ldots, e_n\}$ such that the geometric algebra associated with $V$ is the direct sum of the subspaces spanned by $e_{i_1} \cdots e_{i_r}$, $1 \leq i_1 < \cdots < i_r \leq n$.

For convenience, let $e_{i_1 \cdots i_k} = e_{i_1} \cdots e_{i_k}$. Given a vector space $V$, we use the notation $\mathcal{G}(V)$ to denote the geometric algebra associated with $V$. By Theorem 4.1.8, $\mathcal{G}(V)$ is a direct sum of the subspaces $\mathcal{G}^r$ spanned by $e_{i_1 \cdots i_r}$, $1 \leq i_1 < \cdots < i_r \leq n$. Recall the summation convention: $\alpha^k e_k$ denotes $\sum_k \alpha^k e_k$. We shall use this convention unless otherwise stated.

Let us look at an example of a geometric algebra of three-dimensional space.

Example 1.1.9. Let $\mathcal{G}(\mathbb{R}^3)$ be the geometric algebra generated by $e_1, e_2, e_3$ with the bilinear form $B$ defined so that the generators are orthogonal and square to $1$. We have
$$\mathcal{G} = \langle 1 \rangle \oplus \langle e_1, e_2, e_3 \rangle \oplus \langle e_{12}, e_{23}, e_{31} \rangle \oplus \langle e_{123} \rangle.$$
Let $a, b \in \mathcal{G}^1$, with $a = \alpha^k e_k$, $b = \beta^k e_k$. Then
$$ab = (\alpha^1\beta^1 + \alpha^2\beta^2 + \alpha^3\beta^3) + (\alpha^1\beta^2 - \alpha^2\beta^1)e_{12} + (\alpha^2\beta^3 - \alpha^3\beta^2)e_{23} + (\alpha^3\beta^1 - \alpha^1\beta^3)e_{31}.$$

Observe, as in Example 1.1.8, that the product contains a scalar term, $B(a, b)$, and a sum of bivectors. Notice also that the coefficients of the bivectors are the coefficients of the cross product $a \times b$. Furthermore, notice that $\mathcal{G}^+ = \langle 1 \rangle \oplus \langle e_{12}, e_{23}, e_{31} \rangle$ forms a subalgebra of $\mathcal{G}$. Let $i = e_{12}$, $j = e_{31}$ and $k = e_{23}$. Then
$$i^2 = j^2 = k^2 = ijk = -1.$$
These are the rules of quaternion multiplication discovered by Hamilton. The quaternions $i, j, k$ may be interpreted as planes.

Let us look at a geometric algebra used in special relativity.

Example 1.1.10. Let $\mathcal{G}(M)$ be the geometric algebra generated by the orthogonal vectors $e_0, e_1, e_2, e_3$ with the bilinear form $B$ defined by $e_0^2 = 1$ and $e_k^2 = -1$, $k = 1, \ldots, 3$. We have
$$\mathcal{G} = \langle 1 \rangle \oplus \langle e_0, e_1, e_2, e_3 \rangle \oplus \langle e_{01}, e_{02}, e_{03}, e_{12}, e_{13}, e_{23} \rangle \oplus \langle e_{012}, e_{123}, e_{230}, e_{013} \rangle \oplus \langle e_{0123} \rangle.$$
Let $x \in \langle e_0, e_1, e_2, e_3 \rangle$. Then $x = x^\alpha e_\alpha$, $x^\alpha \in \mathbb{R}$. Observe
$$x^2 = B(x^\alpha e_\alpha, x^\alpha e_\alpha) = (x^0)^2 - (x^1)^2 - (x^2)^2 - (x^3)^2$$
is the so-called invariant interval of special relativity. In this geometric algebra there exist non-zero vectors that are null. Let $n = e_0 + e_1$. Then $n^2 = 1 - 1 = 0$. There are also bivectors that square to $1$ and to $-1$. Observe
$$(e_{01})^2 = e_0 e_1 e_0 e_1 = -e_0^2 e_1^2 = 1 \qquad \text{and} \qquad (e_{12})^2 = e_1 e_2 e_1 e_2 = -e_1^2 e_2^2 = -1.$$
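As a quick numeric check of these two examples (our sketch, not part of the thesis's development): in $\mathcal{G}(\mathbb{R}^3)$ the scalar part of $ab$ is the Euclidean dot product and the bivector coefficients of $e_{23}, e_{31}, e_{12}$ are the components of $a \times b$; with the Minkowski form of Example 1.1.10, $n = e_0 + e_1$ is null.

```python
# Check (ours) of the product formula in Example 1.1.9: scalar part =
# dot product; coefficients of e23, e31, e12 = components of a x b.
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)

scalar = a @ b
biv = np.array([a[1]*b[2] - a[2]*b[1],    # coefficient of e23
                a[2]*b[0] - a[0]*b[2],    # coefficient of e31
                a[0]*b[1] - a[1]*b[0]])   # coefficient of e12
assert np.allclose(biv, np.cross(a, b))

# Example 1.1.10's invariant interval, with signature (+,-,-,-):
eta = np.diag([1.0, -1.0, -1.0, -1.0])
n = np.array([1.0, 1.0, 0.0, 0.0])        # n = e0 + e1
assert n @ eta @ n == 0.0                 # n is a null vector
```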

With some examples in mind, let us recall a fact associated with a direct sum decomposition. Since $\mathcal{G} = \mathcal{G}^0 \oplus \mathcal{G}^1 \oplus \cdots$, we have the associated projection maps $\langle \cdot \rangle_r : \mathcal{G} \to \mathcal{G}^r$, satisfying the following properties:
$$\langle A + B \rangle_r = \langle A \rangle_r + \langle B \rangle_r, \qquad \langle \lambda A \rangle_r = \lambda \langle A \rangle_r \ \text{if} \ \lambda \in \mathbb{R}, \qquad \langle \langle A \rangle_r \rangle_r = \langle A \rangle_r.$$
We define $\langle A \rangle_k = 0$ if $k < 0$.

There is a relationship between the bilinear form $B$ and the geometric product of vectors.

Proposition 1.1.11. Let $a, b$ be vectors. Then
$$B(a, b) = \frac{ab + ba}{2}.$$

Proof. Since
$$(a + b)^2 = a^2 + b^2 + ab + ba, \qquad \text{or} \qquad ab + ba = (a + b)^2 - a^2 - b^2,$$
we have
$$ab + ba = B(a + b, a + b) - B(a, a) - B(b, b) = B(a, a) + B(b, a) + B(a, b) + B(b, b) - B(a, a) - B(b, b) = 2B(a, b)$$
as claimed.

Corollary 1.1.12. $ab = -ba$ if and only if $B(a, b) = 0$. Therefore, anti-commutativity and orthogonality are equivalent.
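Proposition 1.1.11 is easy to test numerically. Below is a small check of ours (not the thesis's) in $\mathcal{G}(\mathbb{R}^2)$, hard-coding the product from the table of Example 1.1.8 on coefficient 4-tuples over the basis $(1, e_1, e_2, e_{12})$.

```python
# Check (ours): the symmetrized product (ab + ba)/2 of two vectors in
# G(R^2) is the scalar B(a, b).  A multivector is a 4-tuple of
# coefficients on (1, e1, e2, e12).
import random

def gp(x, y):
    s1, a1, b1, p1 = x
    s2, a2, b2, p2 = y
    return (s1*s2 + a1*a2 + b1*b2 - p1*p2,
            s1*a2 + a1*s2 - b1*p2 + p1*b2,
            s1*b2 + b1*s2 + a1*p2 - p1*a2,
            s1*p2 + p1*s2 + a1*b2 - b1*a2)

a = (0.0, random.random(), random.random(), 0.0)
b = (0.0, random.random(), random.random(), 0.0)
sym = tuple((u + v) / 2 for u, v in zip(gp(a, b), gp(b, a)))
assert abs(sym[0] - (a[1]*b[1] + a[2]*b[2])) < 1e-12   # scalar = B(a, b)
assert all(abs(c) < 1e-12 for c in sym[1:])            # no other grades
```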

Proposition 1.1.13. Let $a$ and $b$ be linearly dependent vectors. Then their geometric product commutes.

Proof. If $a = \lambda b$, $\lambda \in \mathbb{R}$, then $ab = \lambda bb = \lambda b^2 = b^2\lambda = bb\lambda = ba$.

We shall often work with inverses of elements of the geometric algebra. Let us first characterize invertibility for vectors, and then generalize to blades.

Proposition 1.1.14. Let $b$ be a non-zero vector. Then $b$ is invertible if and only if $b$ is non-null. Moreover, $b^{-1} = b/b^2$ when it exists.

Proof. Suppose that $b$ is invertible. Then $bb^{-1} = 1$ implies $b^2 b^{-1} = b$. Since $b$ is invertible and $0$ is not, $b \neq 0$ and we find $b^2 b^{-1} \neq 0$. Hence $b^2 \neq 0$ and $b$ is non-null. We may conclude that $b^{-1} = b/b^2$. Conversely, suppose that $b$ is non-null. We shall show that $b^{-1} = b/b^2$:
$$b\,\frac{b}{b^2} = \frac{b^2}{b^2} = 1

and similarly $\frac{b}{b^2}\,b = 1$. Thus, $b^{-1}$ is as claimed.$$

We extend Proposition 1.1.14 to blades.

Proposition 1.1.15. Let $A_r$ be a non-zero $r$-blade. Then $A_r$ is invertible if and only if each factor of each representation of $A_r$ is non-null.

Proof. Suppose that $A_r$ is invertible with a representation $A_r = a_1 \cdots a_r$ containing a null vector. We may suppose that $a_1^2 = 0$ without loss of generality. Observe that $a_1 A_r = a_1^2 a_2 \cdots a_r = 0$ implies $a_1 = a_1 A_r A_r^{-1} = 0$, contradicting the fact that $A_r$ is non-zero. Conversely, suppose that each factor of some representation $A_r = a_1 \cdots a_r$ is non-null. By Proposition 1.1.14, each factor of the representation is invertible. A simple verification shows that $A_r^{-1} = a_r^{-1} \cdots a_1^{-1}$.

1.2 Grades resulting from multiplication

In this section we prove some fundamental results concerning the product of two blades. In particular, we show that the product of two vectors is the sum of a scalar and a $2$-vector; that the product of a vector and an $r$-blade is the sum of an $(r-1)$-vector and an $(r+1)$-vector; and that the product of an $r$-blade and an $s$-blade is a sum of terms whose grades start at $|s - r|$ and increase by $2$ up to grade $r + s$.

1.2.1 Multiplication of two vectors

Definition 1.2.1. Let $a$ and $b$ be vectors, with $b$ invertible. We call $\pi(a, b) = B(a, b)b^{-1}$ the projection of $a$ onto $b$ and $\rho(a, b) = a - \pi(a, b)$ the rejection of $a$ from $b$.

Lemma 1.2.2. Let $a$ and $b$ be vectors, with $b$ invertible. Then $\pi(a, b)$ commutes with $b$.

Proof. By Proposition 1.1.14,
$$\pi(a, b) = B(a, b)b^{-1} = B(a, b)\frac{b}{b^2} = \frac{B(a, b)}{b^2}\,b.$$
Therefore, $\pi(a, b)$ and $b$ are linearly dependent. By Proposition 1.1.13, $\pi(a, b)$ and $b$ commute.

Lemma 1.2.3. Let $a$ and $b$ be vectors, with $b$ invertible. Then $\rho(a, b)$ anti-commutes with $b$. Thus, $\rho(a, b)b$ is a $2$-blade.

Proof. First note that
$$B(b, b^{-1}) = B\!\left(b, \frac{b}{b^2}\right) = \frac{B(b, b)}{b^2} = \frac{b^2}{b^2} = 1.$$
Observe that
$$B(b, \rho(a, b)) = B(b, a - \pi(a, b)) = B(b, a) - B(b, B(a, b)b^{-1}) = B(a, b) - B(a, b)B(b, b^{-1}) = B(a, b) - B(a, b) = 0.$$

By Corollary 1.1.12, $\rho(a, b)$ and $b$ anti-commute.

Although Proposition 1.1.11 establishes part of the following lemma, we give a different proof using the notion of invertibility.

Lemma 1.2.4. Let $a$ and $b$ be vectors, with $b$ invertible. Then
$$\frac{ab + ba}{2} = B(a, b) \qquad \text{and} \qquad \frac{ab - ba}{2} = \rho(a, b)b.$$

Proof. Since $a = \pi(a, b) + \rho(a, b)$, we have $ab = \pi(a, b)b + \rho(a, b)b$ and $ba = b\pi(a, b) + b\rho(a, b)$. By Lemma 1.2.2 and Lemma 1.2.3,
$$ab + ba = \pi(a, b)b + \rho(a, b)b + b\pi(a, b) + b\rho(a, b) = 2\pi(a, b)b = 2B(a, b)b^{-1}b = 2B(a, b)$$
and
$$ab - ba = \pi(a, b)b + \rho(a, b)b - b\pi(a, b) - b\rho(a, b) = 2\rho(a, b)b.$$
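Numerically (a sketch of ours, taking $B$ to be the Euclidean dot product on $\mathbb{R}^3$; the helper names are our own), the projection and rejection of Definition 1.2.1 recover the familiar orthogonal decomposition $a = \pi(a, b) + \rho(a, b)$ with $B(b, \rho(a, b)) = 0$.

```python
# Projection and rejection of Definition 1.2.1 (our sketch), with B the
# Euclidean dot product on R^3, so b^{-1} = b / (b.b).
import numpy as np

def pi(a, b):
    return (a @ b) * b / (b @ b)    # pi(a,b) = B(a,b) b^{-1}

def rho(a, b):
    return a - pi(a, b)             # rho(a,b) = a - pi(a,b)

rng = np.random.default_rng(1)
a, b = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(pi(a, b) + rho(a, b), a)   # a = projection + rejection
assert abs(b @ rho(a, b)) < 1e-12             # B(b, rho(a,b)) = 0, Lemma 1.2.3
assert np.allclose(np.cross(b, pi(a, b)), 0)  # pi(a,b) is parallel to b
```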

Proposition 1.2.5. Let $a$ and $b$ be vectors, with $b$ invertible. Then $ab = \langle ab \rangle_0 + \langle ab \rangle_2$. Moreover, $\langle ab \rangle_0 = B(a, b)$.

Proof. Observe
$$ab = \frac{ab + ba}{2} + \frac{ab - ba}{2}.$$
By Lemma 1.2.4, $ab = B(a, b) + \rho(a, b)b$. The result now follows since $B(a, b) \in \mathcal{G}^0$ and $\rho(a, b)b \in \mathcal{G}^2$.

In establishing Proposition 1.2.5 it was assumed that $b$ was invertible. We now show the proposition holds for null vectors as well.

Lemma 1.2.6. Let $n$ be a null vector. Then there exist non-null vectors $x, y$ such that $n = x + y$.

Proof. Suppose that $n = 0$. Since $B$ is non-degenerate, there exists a vector $x$ such that $B(x, x) \neq 0$. Then $n = x - x$ is a sum of non-null vectors. Suppose now that $n \neq 0$. Since $B$ is non-degenerate, there exists a vector $x$ such that $B(n, x) \neq 0$. Let $y_\lambda = n - \lambda x$ where $\lambda \in \mathbb{R} \setminus \{0\}$. Then
$$y_\lambda^2 = \lambda(-2B(n, x) + \lambda x^2).$$
If $x^2 = 0$, then $y_\lambda^2 = -2\lambda B(n, x) \neq 0$, so $n = \frac{1}{2}y_1 + \frac{1}{2}y_{-1}$ is a sum of non-null vectors. If $x^2 \neq 0$, then choose
$$\lambda \neq \frac{2B(n, x)}{x^2}$$
so that $y_\lambda$ is non-null; then $n = y_\lambda + \lambda x$ is a sum of non-null vectors.

Theorem 1.2.7. Let $a, b$ be vectors. Then $ab = \langle ab \rangle_0 + \langle ab \rangle_2$.

Proof. By Proposition 1.2.5, the result holds if $b$ is non-null. Suppose that $b$ is null. By Lemma 1.2.6, there exist non-null vectors $x, y$ such that $b = x + y$. By Proposition 1.2.5,
$$ab = a(x + y) = ax + ay = \langle ax \rangle_0 + \langle ax \rangle_2 + \langle ay \rangle_0 + \langle ay \rangle_2 = \langle a(x + y) \rangle_0 + \langle a(x + y) \rangle_2 = \langle ab \rangle_0 + \langle ab \rangle_2.$$

The product of two vectors is thus the sum of a grade $0$ and a grade $2$ term.

1.2.2 Multiplication of a vector and a blade

We now generalize Theorem 1.2.7 to a vector and a blade. The resulting product will be the sum of a vector of one less and one greater grade than the original blade.

Definition 1.2.8. Let $a$ and $A_r$ be a vector and an invertible $r$-blade with a representation $A_r = a_1 \cdots a_r$. We define the projection of $a$ onto $A_r$ by
$$\pi(a, a_1 \cdots a_r) = \sum_{k=1}^{r} B(a, a_k)a_k^{-1}$$
and the rejection of $a$ from $A_r$ by
$$\rho(a, a_1 \cdots a_r) = a - \pi(a, a_1 \cdots a_r).$$

The map $\pi$ is called the projection because it measures how much of $a$ is in the span of the factors $a_1, \ldots, a_r$, and $\rho$ measures how much of $a$ is not in the span of the factors $a_1, \ldots, a_r$. The formula for the projection $\pi$ appears to depend on the choice of representation of the blade, but we shall see that it is independent of the choice of representation, and similarly for $\rho$.

We use the following convention: given a product of vectors $a_1 \cdots \check{a}_k \cdots a_r$, the check indicates that the vector $a_k$ is not present in the product.

Lemma 1.2.9. Let $a$ and $A_r$ be a vector and an invertible $r$-blade with representation $A_r = a_1 \cdots a_r$, respectively. Then
$$\pi(a, a_1 \cdots a_r)A_r = (-1)^{r+1}A_r\,\pi(a, a_1 \cdots a_r).$$
Moreover, $\pi(a, a_1 \cdots a_r)A_r$ is an $(r-1)$-vector.

Proof. Since $a_k^{-1}$ anti-commutes with $a_i$ for $i \neq k$ and commutes with $a_k$, we have
$$\pi(a, a_1 \cdots a_r)A_r = \left(\sum_{k=1}^{r} B(a, a_k)a_k^{-1}\right)a_1 \cdots a_r = \sum_{k=1}^{r} B(a, a_k)a_k^{-1}a_1 \cdots a_r = \sum_{k=1}^{r} (-1)^{k-1}B(a, a_k)\,a_1 \cdots \check{a}_k \cdots a_r \qquad (1.2.2.1)$$
$$= a_1 \cdots a_r \sum_{k=1}^{r} (-1)^{k-1}(-1)^{r-k}B(a, a_k)a_k^{-1} = (-1)^{r+1}A_r\,\pi(a, a_1 \cdots a_r).$$
Referring to (1.2.2.1), since the $r - 1$ remaining factors in each term mutually anti-commute, we have a sum of $r$ $(r-1)$-blades. Hence, $\pi(a, a_1 \cdots a_r)A_r$ is an $(r-1)$-vector.

Lemma 1.2.10. Let $a$ and $A_r$ be a vector and an invertible $r$-blade with representation $A_r = a_1 \cdots a_r$, respectively. Then
$$\rho(a, a_1 \cdots a_r)a_k = -a_k\,\rho(a, a_1 \cdots a_r) \qquad \text{for } k = 1, \ldots, r.$$

Proof. Let $1 \leq j \leq r$. Observe
$$B(a_j, \rho(a, a_1 \cdots a_r)) = B(a_j, a) - \sum_{k=1}^{r} B(a, a_k)B(a_j, a_k^{-1}) = B(a_j, a) - B(a, a_j)B(a_j, a_j^{-1}) = B(a_j, a) - B(a_j, a) = 0,$$
since $B(a_j, a_k^{-1}) = 0$ for $k \neq j$ while $B(a_j, a_j^{-1}) = 1$. By Corollary 1.1.12, $\rho(a, a_1 \cdots a_r)$ and $a_j$ anti-commute for $1 \leq j \leq r$.

Lemma 1.2.11. Let $a$ and $A_r$ be a vector and an invertible $r$-blade with representation $A_r = a_1 \cdots a_r$, respectively. Then
$$\rho(a, a_1 \cdots a_r)A_r = (-1)^r A_r\,\rho(a, a_1 \cdots a_r).$$
Moreover, $\rho(a, a_1 \cdots a_r)A_r$ is an $(r+1)$-blade.

Proof. By Lemma 1.2.10, $\rho(a, a_1 \cdots a_r)$ anti-commutes with each factor $a_k$. Then
$$\rho(a, a_1 \cdots a_r)A_r = \rho(a, a_1 \cdots a_r)a_1 \cdots a_r = (-1)^r a_1 \cdots a_r\,\rho(a, a_1 \cdots a_r) = (-1)^r A_r\,\rho(a, a_1 \cdots a_r).$$
Also, since the factors $\rho(a, a_1 \cdots a_r), a_1, \ldots, a_r$ mutually anti-commute, $\rho(a, a_1 \cdots a_r)A_r$ is an $(r+1)$-blade.

Proposition 1.2.12. Let $a$ and $A_r$ be a vector and an invertible $r$-blade, respectively. Then
$$\frac{aA_r - (-1)^r A_r a}{2} = \langle aA_r \rangle_{r-1} \qquad \text{and} \qquad \frac{aA_r + (-1)^r A_r a}{2} = \langle aA_r \rangle_{r+1}.$$

Consequently,
$$aA_r = \langle aA_r \rangle_{r-1} + \langle aA_r \rangle_{r+1} \qquad \text{and} \qquad A_r a = \langle A_r a \rangle_{r-1} + \langle A_r a \rangle_{r+1}.$$

Proof. Write $\pi = \pi(a, a_1 \cdots a_r)$ and $\rho = \rho(a, a_1 \cdots a_r)$. Since $a = \pi + \rho$, by Lemmas 1.2.9 and 1.2.11,
$$aA_r + (-1)^r A_r a = \pi A_r + \rho A_r + (-1)^r A_r\pi + (-1)^r A_r\rho = \pi A_r + \rho A_r + (-1)^{2r+1}\pi A_r + (-1)^{2r}\rho A_r = 2\rho A_r,$$
or
$$\frac{aA_r + (-1)^r A_r a}{2} = \rho(a, a_1 \cdots a_r)A_r.$$
A similar calculation shows that
$$\frac{aA_r - (-1)^r A_r a}{2} = \pi(a, a_1 \cdots a_r)A_r.$$
Then
$$aA_r = \frac{aA_r - (-1)^r A_r a}{2} + \frac{aA_r + (-1)^r A_r a}{2} = \pi(a, a_1 \cdots a_r)A_r + \rho(a, a_1 \cdots a_r)A_r = \langle aA_r \rangle_{r-1} + \langle aA_r \rangle_{r+1}.$$
Similarly, $A_r a = \langle A_r a \rangle_{r-1} + \langle A_r a \rangle_{r+1}$.

Remark. This shows that $\pi$ and $\rho$ are independent of the choice of representation.

Corollary 1.2.13. Let $a$ and $A_r$ be a vector and an invertible $r$-blade, respectively. Then
$$\langle aA_r \rangle_{r+1} = (-1)^r \langle A_r a \rangle_{r+1} \qquad \text{and} \qquad \langle aA_r \rangle_{r-1} = (-1)^{r+1}\langle A_r a \rangle_{r-1}.$$

Proof. By Proposition 1.2.12 and Lemmas 1.2.10 and 1.2.9,
$$\langle aA_r \rangle_{r+1} = \rho(a, a_1 \cdots a_r)A_r = (-1)^r A_r\,\rho(a, a_1 \cdots a_r) = (-1)^r \langle A_r a \rangle_{r+1}.$$
The other equality is established similarly.

We assumed the invertibility of the blade throughout to establish Proposition 1.2.12. In the case when the vector space is finite dimensional, the result still holds for a vector and any blade. Let $A_r$ be an $r$-blade with a representation $A_r = a_1 \cdots a_r$. Each factor of the representation is a sum of orthogonal non-null vectors $e_1, \ldots, e_n$:
$$a_k = \sum_{i=1}^{n} \alpha_{ik}e_i, \qquad k = 1, \ldots, r.$$
Then
$$A_r = a_1 \cdots a_r = \sum_{i_1=1}^{n} \cdots \sum_{i_r=1}^{n} \alpha_{i_1 1}e_{i_1} \cdots \alpha_{i_r r}e_{i_r}.$$
After expanding, we find that $A_r$ is a sum of invertible $r$-blades. Then, as in the case of vectors, by linearity of the projection operators we have the following.

Theorem 1.2.14. Suppose that $\mathcal{G}^1$ is finite dimensional. Let $a$ and $A_r$ be a vector and an $r$-blade, respectively. Then $aA_r = \langle aA_r \rangle_{r-1} + \langle aA_r \rangle_{r+1}$.

Remark. The author is working on a way to generalize Theorem 1.2.14 when the vector space is not necessarily finite dimensional. As it currently stands, the following results involving products of blades are only known to hold in the finite dimensional case.

Note the general structure: the geometric product of a vector with an $r$-blade is a sum of an $(r-1)$-vector and an $(r+1)$-vector.

Now that we have established the grades resulting from the product of a vector with a blade, we shall generalize to the product of a blade with a blade. To help with this generalization we introduce a new map.

1.2.3 Reversion

We shall introduce a very useful map called reversion. Reversion allows one to reverse the order of a product of multivectors, which is useful for algebraic manipulations.

Definition 1.2.15. Let $A \in \mathcal{G}$ be given by the unique sum $A = \sum_{r=0}^{n} A_r$, where $A_r = \langle A \rangle_r$. Then the reverse of $A$ is
$$A^\dagger = \sum_{r=0}^{n} (-1)^{\frac{r(r-1)}{2}} A_r.$$

Remark. Note that for an $r$-blade $A_r$, $A_r^\dagger = (-1)^{\frac{r(r-1)}{2}} A_r$, so $A_r^\dagger$ is an $r$-blade as well. Also, $(A^\dagger)^\dagger = A$; reversion is an involution. If $\lambda$ is a scalar and $a$ is a vector,
$$\lambda^\dagger = (-1)^{\frac{0(0-1)}{2}}\lambda = \lambda, \qquad a^\dagger = (-1)^{\frac{1(1-1)}{2}}a = a.$$
The general pattern of signs, as $r = 0, 1, 2, 3, \ldots$, is
$$+\ +\ -\ -\ +\ +\ -\ -\ \cdots,$$
which we see has period $4$.
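The period-4 sign pattern is immediate to tabulate (a snippet of ours, not from the thesis):

```python
# The reversion sign (-1)^{r(r-1)/2} for r = 0, 1, 2, ...: the pattern
# + + - - + + - - ... repeats with period 4.
for r in range(8):
    sign = (-1) ** (r * (r - 1) // 2)
    print(r, '+' if sign > 0 else '-')
```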

1.2.13, (aa r ) = ( aa r r 1 + aa r r+1 ) = aa r r 1 + aa r r+1 = ( 1) (r 1)((r 1) 1) 2 aa r r 1 + ( 1) (r+1)((r+1) 1) 2 aa r r+1 = ( 1) (r 1)((r 1) 1) 2 ( 1) r+1 A r a r 1 + ( 1) (r+1)((r+1) 1) 2 ( 1) r A r a r+1 = ( 1) r2 r+4 2 A r a r 1 + ( 1) r2 r 2 A r a r+1 = ( 1) r(r 1) 2 ( A r a r 1 + A r a r+1 ) = ( 1) r(r 1) 2 A r a = A ra. If A G, then A = k A k. Hence, (aa) = (a k A k ) = ( k a A k ) = k (a A k ) = k A k a = ( k A k ) a = A a. Proposition 1.2.17. Reversion satisfies the following properties: 1) (AB) = B A 2) (A + B) = A + B 3) A r = A r Proof. Properties 2) and 3) follow straightforwardly from the definition of reversion. To establish 1) we shall show that (B s A r ) = A rb s for any r-blade and s-blade by induction on s. By Lemma 1.2.16, the results holds for s = 1. Now suppose that the statement holds for some fixed s. Let B s+1 be an (s + 1)-blade. We can factor B s+1 = B s b for some s-blade B s 21

and some vector $b$. By our induction hypothesis and Lemma 1.2.16,
$$(B_{s+1}A_r)^\dagger = (B_s bA_r)^\dagger = \left(B_s\langle bA_r \rangle_{r-1} + B_s\langle bA_r \rangle_{r+1}\right)^\dagger = \langle bA_r \rangle_{r-1}^\dagger B_s^\dagger + \langle bA_r \rangle_{r+1}^\dagger B_s^\dagger$$
$$= \left(\langle bA_r \rangle_{r-1} + \langle bA_r \rangle_{r+1}\right)^\dagger B_s^\dagger = (bA_r)^\dagger B_s^\dagger = (A_r^\dagger b)B_s^\dagger = A_r^\dagger(bB_s^\dagger) = A_r^\dagger(B_s b)^\dagger = A_r^\dagger B_{s+1}^\dagger.$$
By the principle of mathematical induction the result holds for all $s \in \mathbb{N}$. That $(AB)^\dagger = B^\dagger A^\dagger$ for all $A, B \in \mathcal{G}$ now follows from expanding $A$ and $B$ as their unique sums of $r$-vectors.

1.2.4 Multiplication of two blades

We now generalize Theorem 1.2.14 to the product of a blade with a blade. The result generalizes our previous results and will be called the grade expansion identity.

Proposition 1.2.18 (Grade Expansion Identity). Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. Then
$$A_r B_s = \sum_{k=0}^{m} \langle A_r B_s \rangle_{|s-r|+2k}, \qquad \text{where } m = \min\{r, s\}.$$

We begin with two lemmas.

Lemma 1.2.19. Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. If $s \geq r$, then
$$A_r B_s = \sum_{k=0}^{r} \langle A_r B_s \rangle_{s-r+2k}.$$

Proof. We proceed by induction on $r$. When $r = 0$ the result is evident. Suppose that for some $r$ the result holds for all $s$-blades such that $s \geq r$. Let $A_{r+1}$ be an $(r+1)$-blade and $B_s$ an $s$-blade such that $s \geq r + 1$. Since $A_{r+1}$ is an $(r+1)$-blade we can write $A_{r+1} = aA_r$, where $A_r$ is an $r$-blade and $a$ is a vector. By the induction hypothesis and Theorem 1.2.14,
$$A_{r+1}B_s = aA_r B_s = \sum_{k=0}^{r} a\langle A_r B_s \rangle_{s-r+2k} = \sum_{k=0}^{r} \left(\langle a\langle A_r B_s \rangle_{s-r+2k} \rangle_{s-r+2k-1} + \langle a\langle A_r B_s \rangle_{s-r+2k} \rangle_{s-r+2k+1}\right)$$
$$= \sum_{k=0}^{r} \left(\langle a\langle A_r B_s \rangle_{s-r+2k} \rangle_{s-(r+1)+2k} + \langle a\langle A_r B_s \rangle_{s-r+2k} \rangle_{s-(r+1)+2k+2}\right).$$
Also, since $A_{r+1}B_s \in \mathcal{G}$, we have the unique direct sum decomposition $A_{r+1}B_s = \sum_k \langle A_{r+1}B_s \rangle_k$. We must have $\langle A_{r+1}B_s \rangle_{s-(r+1)+2k+1} = 0$ for $k = 0, \ldots, r$, and $\langle A_{r+1}B_s \rangle_k = 0$

for $k \geq s + (r + 1) + 1$. Hence,
$$A_{r+1}B_s = \sum_{k=0}^{r+1} \langle A_{r+1}B_s \rangle_{s-(r+1)+2k}.$$
By the principle of mathematical induction, the result holds.

Lemma 1.2.20. Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. If $r \geq s$, then
$$A_r B_s = \sum_{k=0}^{s} \langle A_r B_s \rangle_{r-s+2k}.$$

Proof. Reversion is an involution, and hence $A_r B_s = ((A_r B_s)^\dagger)^\dagger = (B_s^\dagger A_r^\dagger)^\dagger$. Since reversion doesn't alter grade, by Lemma 1.2.19,
$$B_s^\dagger A_r^\dagger = \sum_{k=0}^{s} \langle B_s^\dagger A_r^\dagger \rangle_{r-s+2k}.$$
Then
$$A_r B_s = \left(\sum_{k=0}^{s} \langle B_s^\dagger A_r^\dagger \rangle_{r-s+2k}\right)^\dagger = \sum_{k=0}^{s} \langle B_s^\dagger A_r^\dagger \rangle_{r-s+2k}^\dagger = \sum_{k=0}^{s} \langle A_r B_s \rangle_{r-s+2k}.$$

Proposition 1.2.18 follows from Lemmas 1.2.19 and 1.2.20. We find from the grade expansion identity that the product of two blades is not necessarily a blade but a sum of blades whose grades increase by two.
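A small demonstration of the grade expansion identity (our sketch, reusing the bitmask representation from the sketch after Example 1.1.8): in $\mathcal{G}(\mathbb{R}^4)$, the product of the 2-blades $e_{12}$ and $(e_1 + e_3) \wedge (e_2 + e_4)$ contains exactly the grades $|2-2| = 0$, $2$, and $2+2 = 4$.

```python
# Demonstration (ours) of the grade expansion identity in G(R^4) with
# e_i^2 = +1, so repeated generators contribute no extra sign.

def reorder_sign(a, b):
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def gp(A, B):
    out = {}
    for ka, va in A.items():
        for kb, vb in B.items():
            k = ka ^ kb
            out[k] = out.get(k, 0.0) + reorder_sign(ka, kb) * va * vb
    return {k: v for k, v in out.items() if v}

# A = e12 and B = (e1 + e3) ^ (e2 + e4) = e12 + e14 - e23 + e34.
A = {0b0011: 1.0}
B = {0b0011: 1.0, 0b1001: 1.0, 0b0110: -1.0, 0b1100: 1.0}
grades = sorted({bin(k).count("1") for k in gp(A, B)})
print(grades)        # [0, 2, 4]
```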

1.3 Products derived from the geometric product

In this section we introduce four new products: the wedge product, the left/right contractions and the scalar product. The right contraction and the wedge product shall be used extensively in Chapter 2.

1.3.1 Outer or wedge product

The wedge product is the product of the so-called exterior algebra. We will show that the wedge product is alternating and associative, that an $r$-blade is the wedge product of $r$ vectors, and that the wedge of $r$ vectors is an $r$-blade. Finally, in the case of a finite dimensional vector space, it is shown that the wedge of linearly independent vectors is not zero.

Definition 1.3.1. Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. Then the outer or wedge product is
$$A_r \wedge B_s = \langle A_r B_s \rangle_{r+s}.$$
We extend the definition of the wedge product to multivectors by bilinearity.

Example 1.3.2. Consider $\mathcal{G}(\mathbb{R}^3)$, the geometric algebra from Example 1.1.9. Let $A = e_{12}$ and $a = \alpha^1 e_1 + \alpha^2 e_2 + \alpha^3 e_3$. Consider
$$a \wedge A = \langle aA \rangle_3 = \langle (\alpha^1 e_1 + \alpha^2 e_2 + \alpha^3 e_3)e_{12} \rangle_3 = \langle \alpha^1 e_{112} + \alpha^2 e_{212} + \alpha^3 e_{312} \rangle_3 = \langle \alpha^1 e_2 - \alpha^2 e_1 + \alpha^3 e_{123} \rangle_3 = \alpha^3 e_{123}.$$
Then, since $e_{123} \neq 0$,
$$a \wedge A = 0 \iff \alpha^3 = 0 \iff a = \alpha^1 e_1 + \alpha^2 e_2 \iff a \in \langle e_1, e_2 \rangle.$$
Intuitively, if $a \wedge A = 0$, then the line $a$ is contained in the plane $A$. If $a \wedge A \neq 0$, then the line $a$ and the plane $A$ form a volume: a grade $3$ object formed from grade $1$ and grade $2$ objects.

Proposition 1.3.3. Let $A_r$, $B_s$ and $C_t$ be $r$-, $s$- and $t$-blades, respectively. Then

$$A_r \wedge (B_s \wedge C_t) = (A_r \wedge B_s) \wedge C_t \qquad (1.3.1.1)$$
$$A_r \wedge B_s = (-1)^{rs}\,B_s \wedge A_r \qquad (1.3.1.2)$$

Proof. We follow [HS84]. By associativity, expanding each side of $(A_r B_s)C_t = A_r(B_s C_t)$ with the grade expansion identity (Proposition 1.2.18), we have
$$A_r \wedge (B_s \wedge C_t) = \langle A_r \langle B_s C_t \rangle_{s+t} \rangle_{r+(s+t)} = \langle A_r(B_s C_t) \rangle_{r+s+t} = \langle (A_r B_s)C_t \rangle_{(r+s)+t} = (A_r \wedge B_s) \wedge C_t.$$
Noting that reversion is an involution and grade preserving, we further have
$$A_r \wedge B_s = \langle A_r B_s \rangle_{r+s} = \left(\langle (A_r B_s)^\dagger \rangle_{r+s}\right)^\dagger = (-1)^{\frac{(r+s)(r+s-1)}{2}}(-1)^{\frac{s(s-1)}{2}}(-1)^{\frac{r(r-1)}{2}}\langle B_s A_r \rangle_{r+s} = (-1)^{rs}\,B_s \wedge A_r.$$

1.3.2 Linear independence and the wedge product

We establish the connection between linear independence and the wedge product of vectors when the vector space is finite dimensional.

Corollary 1.3.4. Let $a, b$ be vectors. Then $a \wedge b = -b \wedge a$.

Consequently, $a \wedge a = 0$.

Proof. This follows immediately from Proposition 1.3.3.

We now show that the wedge of linearly dependent vectors is zero.

Proposition 1.3.5. If the collection of vectors $\{a_1, \ldots, a_r\}$ is linearly dependent, then $a_1 \wedge \cdots \wedge a_r = 0$.

Proof. Since $a_1, \ldots, a_r$ are linearly dependent, there exist scalars $\lambda_1, \ldots, \lambda_r$, not all zero, for which $\lambda_1 a_1 + \cdots + \lambda_r a_r = 0$. Without loss of generality suppose that $\lambda_1 \neq 0$. By Corollary 1.3.4,
$$a_1 \wedge a_2 \wedge \cdots \wedge a_r = \lambda_1^{-1}(\lambda_1 a_1) \wedge a_2 \wedge \cdots \wedge a_r = -\lambda_1^{-1}(\lambda_2 a_2 + \cdots + \lambda_r a_r) \wedge a_2 \wedge \cdots \wedge a_r = 0.$$

The converse of Proposition 1.3.5 will be given next, but first a lemma.

Lemma 1.3.6. Let $a_1 \cdots a_r$ be a representation of an $r$-blade. Then $a_1 \cdots a_r = a_1 \wedge \cdots \wedge a_r$.

Proof. We proceed by induction on $r$. When $r = 1$ the result holds trivially. Suppose the result holds for any $r$-blade. By the induction hypothesis,
$$a_1 a_2 \cdots a_{r+1} = a_1(a_2 \wedge \cdots \wedge a_{r+1}) = \langle a_1(a_2 \cdots a_{r+1}) \rangle_{r-1} + \langle a_1(a_2 \cdots a_{r+1}) \rangle_{r+1} = \langle a_1(a_2 \cdots a_{r+1}) \rangle_{r+1} = a_1 \wedge a_2 \wedge \cdots \wedge a_{r+1},$$
the grade $r - 1$ term vanishing because $a_1$ anti-commutes with, and hence is orthogonal to, each of the factors $a_2, \ldots, a_{r+1}$, so the projection of $a_1$ onto $a_2 \cdots a_{r+1}$ is zero.

Corollary 1.3.7. Let $\dim \mathcal{G}^1 = n$, and let $A = \{a_1, \ldots, a_r\}$ be a set of $r \leq n$ linearly independent vectors. Then $a_1 \wedge \cdots \wedge a_r \neq 0$.

Proof. Since $A$ is linearly independent we may extend it to a basis $A \cup \{a_{r+1}, \ldots, a_n\}$ of $\mathcal{G}^1$. Furthermore, by Lemma 4.1.2, proved in the appendix, there exists an orthogonal non-null basis $\{e_1, \ldots, e_n\}$ for $\mathcal{G}^1$ in which each basis vector squares to $\pm 1$. Let $L : \mathcal{G}^1 \to \mathcal{G}^1$ be the linear map defined by $a_k = L(e_k)$, $k = 1, \ldots, n$. Since $L$ maps a basis to a basis, $L$ is non-singular. Suppose that $a_k = L(e_k) = \alpha^s_{\ k}e_s$, $k = 1, \ldots, n$. By Lemma 1.3.6,
$$a_1 \wedge \cdots \wedge a_r \wedge \cdots \wedge a_n = \alpha^{j_1}_{\ 1}e_{j_1} \wedge \cdots \wedge \alpha^{j_n}_{\ n}e_{j_n} = \sum_{\sigma \in S_n} (-1)^\sigma \alpha^{\sigma(1)}_{\ 1} \cdots \alpha^{\sigma(n)}_{\ n}\, e_1 \wedge \cdots \wedge e_n = \det(L)\,e_1 \cdots e_n = \det(L)I,$$
where $I = e_1 \cdots e_n$. Since each basis vector $e_i$ squares to $\pm 1$, we have $I^2 = \pm 1$. Therefore, $I \neq 0$. Then, since $\det(L) \neq 0$ also, we cannot have $a_1 \wedge \cdots \wedge a_r = 0$; else
$$\det(L)I = a_1 \wedge \cdots \wedge a_r \wedge \cdots \wedge a_n = (a_1 \wedge \cdots \wedge a_r) \wedge a_{r+1} \wedge \cdots \wedge a_n = 0,$$
a contradiction.

If the wedge of $r$ vectors is zero, then the vectors must be linearly dependent. We have the following result.

Proposition 1.3.8. An $r$-blade $a_1 \wedge \cdots \wedge a_r = 0$ if and only if $\{a_1, \ldots, a_r\}$ is linearly dependent.

In Chapter 2, Proposition 1.3.8 will allow the correspondence between blades and subspaces to be made.
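In coordinates, the proof of Corollary 1.3.7 says that the coefficient of $a_1 \wedge \cdots \wedge a_n$ on $e_1 \cdots e_n$ is $\det(L)$, so the wedge of $n$ vectors in an $n$-dimensional space vanishes exactly when the coordinate matrix is singular. A quick numeric illustration of ours (not from the thesis):

```python
# Illustration (ours): in R^3 the coefficient of a1 ^ a2 ^ a3 on e123 is
# det[a1 a2 a3], so the wedge vanishes iff the vectors are dependent.
import numpy as np

rng = np.random.default_rng(2)
a1, a2, a3 = rng.standard_normal((3, 3))
print(np.linalg.det(np.column_stack([a1, a2, a3])))   # nonzero: independent

a3 = 2.0 * a1 - a2                                    # force dependence
print(np.linalg.det(np.column_stack([a1, a2, a3])))   # ~0: the wedge is zero
```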

1.3.3 The wedge of r vectors is an r-blade

We now show that the wedge of $r$ vectors is an $r$-blade. Recall some basic facts from linear algebra.

Lemma 1.3.9. Let $V$ be a finite dimensional inner product space and let $T$ be a self-adjoint linear operator on $V$. Then there is an orthonormal basis for $V$ consisting of characteristic vectors of $T$.

Translating Lemma 1.3.9 into the language of matrices, we have the following.

Corollary 1.3.10. Let $A$ be an $n \times n$ real symmetric matrix. Then there is an orthogonal matrix $P$ such that $P^T AP$ is diagonal.

Proposition 1.3.11. Let $\{a_1, \ldots, a_r\}$ be a set of vectors of $\mathcal{G}^1$. Then $a_1 \wedge \cdots \wedge a_r$ is an $r$-blade.

Proof. We follow the strategy of [DL03]. If the set of vectors is linearly dependent then $a_1 \wedge \cdots \wedge a_r = 0$, hence an $r$-blade. Suppose now that the vectors are linearly independent. Let $M$ be the matrix with entries $B(a_i, a_j)$. Then $M$ is real symmetric, so there exists an orthogonal matrix $P = (p^i_{\ j})$ such that $P^T MP = D$ is diagonal. Let $e_k = p^j_{\ k}a_j$, $k = 1, \ldots, r$. Then
$$B(e_k, e_s) = B(p^j_{\ k}a_j, p^i_{\ s}a_i) = p^j_{\ k}B(a_j, a_i)p^i_{\ s} = (P^T)_{kj}(M)_{ji}(P)_{is} = (D)_{ks}.$$
Hence, with respect to our bilinear form $B$, the vectors $e_1, \ldots, e_r$ are orthogonal. This means

that
$$e_1 \cdots e_r = e_1 \wedge \cdots \wedge e_r = p^{j_1}_{\ 1}a_{j_1} \wedge \cdots \wedge p^{j_r}_{\ r}a_{j_r} = \sum_{\sigma \in S_r} (-1)^\sigma p^{\sigma(1)}_{\ 1} \cdots p^{\sigma(r)}_{\ r}\, a_1 \wedge \cdots \wedge a_r = \det(P)\,a_1 \wedge \cdots \wedge a_r.$$
We used here the anti-symmetry of the wedge product. Since $P$ is an orthogonal matrix, we have $\det(P) \neq 0$. Then
$$a_1 \wedge \cdots \wedge a_r = \det(P)^{-1}\,e_1 \cdots e_r$$
is an $r$-blade.

1.3.4 Left/right contraction

The left/right contraction products may be less familiar operations. The contractions are not associative like the wedge product, but an identity will be established connecting the two products.

Definition 1.3.12. Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. Then the right contraction is
$$A_r \,\llcorner\, B_s = \langle A_r B_s \rangle_{s-r}$$
(read "$A_r$ contracted onto $B_s$"), and the left contraction is
$$A_r \,\lrcorner\, B_s = \langle A_r B_s \rangle_{r-s}.$$
Let $a, b$ be vectors. Then
$$a \,\llcorner\, b = \langle ab \rangle_0 = B(a, b).$$
The right contraction between vectors $a$ and $b$ is just the bilinear form $B$ evaluated on them. Furthermore, by definition, if $s < r$ then $A_r \,\llcorner\, B_s = 0$. In general the right contraction

is not a commutative product. We extend the definition of the left/right contraction to multivectors by bilinearity.

Example 1.3.13. Consider $\mathcal{G}(\mathbb{R}^3)$, the geometric algebra from Example 1.1.9. Let $A = e_{12}$ and $a = \alpha^1 e_1 + \alpha^2 e_2 + \alpha^3 e_3$. Observe
$$a \,\llcorner\, A = \langle aA \rangle_1 = \langle (\alpha^1 e_1 + \alpha^2 e_2 + \alpha^3 e_3)e_{12} \rangle_1 = \langle \alpha^1 e_2 - \alpha^2 e_1 + \alpha^3 e_{123} \rangle_1 = -\alpha^2 e_1 + \alpha^1 e_2.$$
Intuitively, this is a line in the plane $A$. Furthermore, observe
$$a \,\llcorner\, (a \,\llcorner\, A) = \langle (\alpha^1 e_1 + \alpha^2 e_2 + \alpha^3 e_3)(-\alpha^2 e_1 + \alpha^1 e_2) \rangle_0 = \langle -\alpha^1\alpha^2 e_{11} + (\alpha^1)^2 e_{12} - (\alpha^2)^2 e_{21} + \alpha^1\alpha^2 e_{22} - \alpha^2\alpha^3 e_{31} + \alpha^1\alpha^3 e_{32} \rangle_0 = -\alpha^1\alpha^2 + \alpha^1\alpha^2 = 0.$$
We interpret $a \,\llcorner\, A$ as the line contained in the plane $A$ that is orthogonal to the line $a$. Consider
$$A \,\llcorner\, e_{123} = \langle e_{12}e_{123} \rangle_1 = -e_3.$$
A more general interpretation of this result is as follows: in the volume represented by $e_{123}$, the line represented by $e_3$ is most unlike the plane represented by $A$.
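A coordinate check of Example 1.3.13 (our snippet, not the thesis's): the closed form $a \,\llcorner\, e_{12} = -\alpha^2 e_1 + \alpha^1 e_2$ lies in the plane $\langle e_1, e_2 \rangle$ and is $B$-orthogonal to $a$.

```python
# Check (ours) of Example 1.3.13: a contracted onto e12 gives
# -a2 e1 + a1 e2, which lies in the e1,e2-plane and is orthogonal to a.
import numpy as np

rng = np.random.default_rng(3)
a = rng.standard_normal(3)            # a = a1 e1 + a2 e2 + a3 e3
c = np.array([-a[1], a[0], 0.0])      # the contraction of a onto e12
assert c[2] == 0.0                    # contained in the plane of e12
assert abs(a @ c) < 1e-12             # B(a, c) = 0
```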

Proposition 1.3.14. Let $A_r$, $B_s$ and $C_t$ be $r$-, $s$- and $t$-blades, respectively. Then
$$A_r \,\llcorner\, (B_s \,\llcorner\, C_t) = (A_r \wedge B_s) \,\llcorner\, C_t. \qquad (1.3.4.1)$$

Proof. We follow [HS84]. By associativity, expanding each side of $(A_r B_s)C_t = A_r(B_s C_t)$ with the grade expansion identity (Proposition 1.2.18), we have, if $t \geq s + r$,
$$A_r \,\llcorner\, (B_s \,\llcorner\, C_t) = \langle A_r \langle B_s C_t \rangle_{t-s} \rangle_{t-s-r} = \langle A_r B_s C_t \rangle_{t-s-r} = \langle \langle A_r B_s \rangle_{s+r}\, C_t \rangle_{t-(r+s)} = (A_r \wedge B_s) \,\llcorner\, C_t,$$
and if $t < s + r$,
$$A_r \,\llcorner\, (B_s \,\llcorner\, C_t) = 0 = (A_r \wedge B_s) \,\llcorner\, C_t,$$
since negative grades are zero. Identity (1.3.4.1) will be very useful in Chapter 2.

Proposition 1.3.15. Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. Then
$$A_r \,\llcorner\, B_s = (-1)^{r(s-1)}\,B_s \,\lrcorner\, A_r. \qquad (1.3.4.2)$$

Proof. Recall that the reversion map is an involution and grade preserving. Observe
$$A_r \,\llcorner\, B_s = \langle A_r B_s \rangle_{s-r} = \left(\langle (A_r B_s)^\dagger \rangle_{s-r}\right)^\dagger = (-1)^{\frac{(s-r)(s-r-1)}{2}}(-1)^{\frac{r(r-1)}{2}}(-1)^{\frac{s(s-1)}{2}}\langle B_s A_r \rangle_{s-r} = (-1)^{r(s-1)}\,B_s \,\lrcorner\, A_r.$$

1.3.5 Scalar product

We introduce the scalar product to define the magnitude of a blade, which will be used in Chapter 3. The product of two multivectors will be shown to commute under the scalar

product. The scalar product is used heavily in [DL03].

Definition 1.3.16. Let $A_r$ and $B_s$ be an $r$- and an $s$-blade, respectively. Then the scalar product is
$$A_r * B_s = \langle A_r B_s \rangle_0.$$
We extend the definition of the scalar product to multivectors by bilinearity.

1.3.6 Cyclic permutation of the scalar product

Lemma 1.3.17. Let $A_r$ and $B_r$ be $r$-blades. Then $A_r * B_r = B_r * A_r$.

Proof. Recall that the reversion map is an involution, and that the reverse of a scalar is itself. Observe
$$A_r * B_r = \langle A_r B_r \rangle_0 = \langle (A_r B_r)^\dagger \rangle_0 = \langle B_r^\dagger A_r^\dagger \rangle_0 = (-1)^{\frac{r(r-1)}{2}}(-1)^{\frac{r(r-1)}{2}}\langle B_r A_r \rangle_0 = B_r * A_r.$$

Proposition 1.3.18. Let $A, B \in \mathcal{G}$. Then $A * B = B * A$.

Proof. Let $A, B$ be given by their unique sums $A = \sum_r A_r$ and $B = \sum_s B_s$, respectively. Note first that $\langle A_r B_s \rangle_0 = 0$ unless $r = s$, by the grade expansion identity (Proposition 1.2.18). By Lemma 1.3.17,
$$\langle AB \rangle_0 = \sum_{r,s} \langle A_r B_s \rangle_0 = \sum_r \langle A_r B_r \rangle_0 = \sum_r \langle B_r A_r \rangle_0 = \sum_{s,r} \langle B_s A_r \rangle_0 = \langle BA \rangle_0.$$

Corollary 1.3.19. Let $A_1, \ldots, A_n \in \mathcal{G}$. Then
$$\langle A_1 A_2 A_3 \cdots A_n \rangle_0 = \langle A_2 A_3 \cdots A_n A_1 \rangle_0. \qquad (1.3.6.1)$$
Products of multivectors can be cyclically permuted under the scalar product.

1.3.7 Signed magnitude

For an $r$-blade $A_r$ with a representation $A_r = a_1 \cdots a_r$,
$$A_r * A_r^\dagger = \langle A_r A_r^\dagger \rangle_0 = \langle a_1 \cdots a_r\, a_r \cdots a_1 \rangle_0 = a_1^2 \cdots a_r^2.$$
The scalar product thus defines, in some sense, a magnitude of a blade. But one must be careful, for the square of a factor may be non-positive.

Definition 1.3.20. Let $A_r$ be an $r$-blade. Then the (squared) magnitude of $A_r$ is
$$|A_r|^2 = A_r * A_r^\dagger.$$
If $\lambda$ is a scalar and $a$ is a vector,
$$|\lambda|^2 = \lambda * \lambda^\dagger = \lambda^2, \qquad |a|^2 = a * a^\dagger = a^2.$$
In general, for an $r$-blade,
$$|A_r|^2 = A_r * A_r^\dagger = \langle A_r A_r^\dagger \rangle_0 = (-1)^{\frac{r(r-1)}{2}}A_r^2. \qquad (1.3.7.1)$$
In the case that an $r$-blade is invertible, we have $|A_r|^2 \neq 0$. So $A_r^2 = (-1)^{\frac{r(r-1)}{2}}|A_r|^2$, which means
$$A_r^{-1} = \frac{A_r}{A_r^2} = (-1)^{\frac{r(r-1)}{2}}\frac{A_r}{|A_r|^2}.$$
The inverse of a blade is just a scalar multiple of the blade.
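For the Euclidean plane, the squared magnitude $|a \wedge b|^2 = (\alpha^1\beta^2 - \alpha^2\beta^1)^2$ agrees with the squared parallelogram area $|a|^2|b|^2 - B(a, b)^2$ by the Lagrange identity; Example 1.3.21 below computes the same quantity symbolically. A quick numeric check of ours (not the thesis's):

```python
# Check (ours): |a ^ b|^2 = (a1 b2 - a2 b1)^2 equals the squared area
# |a|^2 |b|^2 - B(a,b)^2 of the parallelogram with sides a and b.
import random

a1, a2 = random.random(), random.random()
b1, b2 = random.random(), random.random()
wedge_sq = (a1*b2 - a2*b1) ** 2
area_sq = (a1*a1 + a2*a2) * (b1*b1 + b2*b2) - (a1*b1 + a2*b2) ** 2
assert abs(wedge_sq - area_sq) < 1e-12
```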

Example 1.3.21. Consider $\mathcal{G}(\mathbb{R}^2)$. Let $a = \alpha^1 e_1 + \alpha^2 e_2$ and $b = \beta^1 e_1 + \beta^2 e_2$. Then
$$a \wedge b = (\alpha^1\beta^2 - \alpha^2\beta^1)e_{12}.$$
The magnitude of $a \wedge b$ is
$$|a \wedge b|^2 = (a \wedge b) * (a \wedge b)^\dagger = (\alpha^1\beta^2 - \alpha^2\beta^1)^2.$$
The squared magnitude of $a \wedge b$ may be interpreted as the square of the Euclidean area of the parallelogram with sides $a$ and $b$.

With the contraction and wedge products, we may write in our new symbols
$$aA_r = a \,\llcorner\, A_r + a \wedge A_r, \qquad (1.3.7.2a)$$
$$a \,\llcorner\, A_r = \frac{aA_r - (-1)^r A_r a}{2}, \qquad (1.3.7.2b)$$
$$a \wedge A_r = \frac{aA_r + (-1)^r A_r a}{2}. \qquad (1.3.7.2c)$$

1.4 Additional identities

Proposition 1.4.1. Let $A_r$, $B_s$ and $a$ be an $r$-blade, an $s$-blade and a vector, respectively. Then
$$a \,\llcorner\, (A_r B_s) = (a \,\llcorner\, A_r)B_s + (-1)^r A_r(a \,\llcorner\, B_s) \qquad (1.4.0.3a)$$
$$\phantom{a \,\llcorner\, (A_r B_s)} = (a \wedge A_r)B_s - (-1)^r A_r(a \wedge B_s), \qquad (1.4.0.3b)$$
$$a \wedge (A_r B_s) = (a \wedge A_r)B_s - (-1)^r A_r(a \,\llcorner\, B_s) \qquad (1.4.0.4a)$$
$$\phantom{a \wedge (A_r B_s)} = (a \,\llcorner\, A_r)B_s + (-1)^r A_r(a \wedge B_s). \qquad (1.4.0.4b)$$

Proof. Identity (1.4.0.3a) will be proven; the others follow similarly. Let $m = \min\{r, s\}$. We follow the strategy of [HS84]. By the grade expansion identity (Proposition 1.2.18) and

equation (1.3.7.2b),
$$2a \,\llcorner\, (A_r B_s) = 2a \,\llcorner\, \sum_{k=0}^{m} \langle A_r B_s \rangle_{|r-s|+2k} = \sum_{k=0}^{m}\left(a\langle A_r B_s \rangle_{|r-s|+2k} - (-1)^{|r-s|+2k}\langle A_r B_s \rangle_{|r-s|+2k}\,a\right)$$
$$= a\sum_{k=0}^{m}\langle A_r B_s \rangle_{|r-s|+2k} - (-1)^{r+s}\sum_{k=0}^{m}\langle A_r B_s \rangle_{|r-s|+2k}\,a = aA_r B_s - (-1)^{r+s}A_r B_s a$$
$$= \left(aA_r - (-1)^r A_r a + (-1)^r A_r a\right)B_s + (-1)^r A_r\left(-aB_s + aB_s - (-1)^s B_s a\right) = 2(a \,\llcorner\, A_r)B_s + (-1)^r A_r\,2(a \,\llcorner\, B_s).$$

Corollary 1.4.2. Let $A_r$, $B_s$ and $a$ be an $r$-blade, an $s$-blade and a vector, respectively. Then
$$a \,\llcorner\, (A_r \wedge B_s) = (a \,\llcorner\, A_r) \wedge B_s + (-1)^r A_r \wedge (a \,\llcorner\, B_s) \qquad (1.4.0.5)$$
$$a \wedge (A_r \,\llcorner\, B_s) = (a \,\llcorner\, A_r) \,\llcorner\, B_s + (-1)^r A_r \,\llcorner\, (a \wedge B_s) \qquad (1.4.0.6)$$

Proof. The identities follow from Proposition 1.4.1 by projecting out and collecting the highest and lowest grades.

We introduce the notion of containment only briefly; much more will be said in Chapter 2.

Definition 1.4.3. Let $A_r, B_s$ be $r$- and $s$-blades, respectively. If $a \wedge A_r = 0$ implies $a \wedge B_s = 0$, we write $A_r \subseteq B_s$ and say that $A_r$ is contained in $B_s$.

Proposition 1.4.4. Let $A_r$, $B_s$ and $C_t$ be $r$-, $s$- and $t$-blades, respectively. If $A_r \subseteq C_t$, then
$$(A_r \,\llcorner\, B_s) \,\llcorner\, C_t = A_r \wedge (B_s \,\llcorner\, C_t). \qquad (1.4.0.7)$$

Proof. We proceed by induction on $r$. Let $r = 1$. By identity (1.4.0.6), since $A_1 \subseteq C_t$ gives $A_1 \wedge C_t = 0$,
$$A_1 \wedge (B_s \,\llcorner\, C_t) = (A_1 \,\llcorner\, B_s) \,\llcorner\, C_t + (-1)^s B_s \,\llcorner\, (A_1 \wedge C_t) = (A_1 \,\llcorner\, B_s) \,\llcorner\, C_t.$$
Suppose the statement holds for some fixed $r$. Let $A_{r+1}$ be an $(r+1)$-blade with $A_{r+1} \subseteq C_t$. We may write $A_{r+1} = a \wedge A_r$, where $a$ and $A_r$ are a vector and an $r$-blade, respectively. Note that $a, A_r \subseteq C_t$. By the induction hypothesis, the base case, and identity (1.3.4.1),
$$A_{r+1} \wedge (B_s \,\llcorner\, C_t) = a \wedge A_r \wedge (B_s \,\llcorner\, C_t) = a \wedge \left((A_r \,\llcorner\, B_s) \,\llcorner\, C_t\right) = \left(a \,\llcorner\, (A_r \,\llcorner\, B_s)\right) \,\llcorner\, C_t = \left((a \wedge A_r) \,\llcorner\, B_s\right) \,\llcorner\, C_t = (A_{r+1} \,\llcorner\, B_s) \,\llcorner\, C_t.$$
By the principle of mathematical induction the result holds for all $r \in \mathbb{N}$.

Lemma 1.4.5. Let $a, a_1, \ldots, a_r \in \mathcal{G}^1$. Then
$$\frac{1}{2}\left(aa_1 \cdots a_r - (-1)^r a_1 \cdots a_r a\right) = \sum_{k=1}^{r} (-1)^{k-1}B_k\, a_1 \cdots \check{a}_k \cdots a_r,$$
where $B_k = B(a, a_k)$.

Proof. We induct on $r$ and use Proposition 1.1.11. Let $r = 1$; by Proposition 1.1.11 the result follows trivially. Suppose the statement holds for some $r$. By the induction hypothesis

and Proposition 1.1.11,
$$aa_1 a_2 \cdots a_r a_{r+1} = \left(\sum_{k=1}^{r} (-1)^{k-1}2B_k\, a_1 \cdots \check{a}_k \cdots a_r + (-1)^r a_1 \cdots a_r a\right)a_{r+1}$$
$$= \sum_{k=1}^{r} (-1)^{k-1}2B_k\, a_1 \cdots \check{a}_k \cdots a_r a_{r+1} + (-1)^r a_1 \cdots a_r\left(2B_{r+1} - a_{r+1}a\right)$$
$$= \sum_{k=1}^{r+1} (-1)^{k-1}2B_k\, a_1 \cdots \check{a}_k \cdots a_{r+1} + (-1)^{r+1} a_1 \cdots a_{r+1} a.$$
By the principle of mathematical induction the result holds for all $r \in \mathbb{N}$.

Note that the result holds for any product of vectors, regardless of whether they commute or anti-commute, and does not depend on the vectors being invertible.

Proposition 1.4.6 (Reduction Identity). Let $a$ and $A_r$ be a vector and an $r$-blade with a representation $A_r = a_1 \cdots a_r$, respectively. Then
$$a \,\llcorner\, A_r = \sum_{k=1}^{r} (-1)^{k-1}B(a, a_k)\, a_1 \cdots \check{a}_k \cdots a_r.$$

Proof. By equation (1.3.7.2b) and Lemma 1.4.5,
$$a \,\llcorner\, A_r = \frac{aA_r - (-1)^r A_r a}{2} = \sum_{k=1}^{r} (-1)^{k-1}B(a, a_k)\, a_1 \cdots \check{a}_k \cdots a_r.$$

Chapter 2: The geometry of blades

This chapter discusses properties of the lowest and highest grades of a product of blades: the contraction and the wedge. With the wedge product, the notion of containment will be introduced, which will allow the correspondence between blades and subspaces. The right contraction, which we shall henceforth call simply the contraction, will allow us to generalize orthogonality between blades. Much use will be made of the algebraic machinery established in Chapter 1.

2.1 Wedge product and containment

In this section a correspondence between subspaces and blades is developed.

Definition 2.1.1. Let $A_r$ and $B_s$ be $r$- and $s$-blades, respectively. Then $A_r$ is contained in $B_s$, written $A_r \subseteq B_s$, if $a \wedge A_r = 0$ implies $a \wedge B_s = 0$.

Containment is a transitive relation. That is, if $A_r \subseteq B_s$ and $B_s \subseteq C_t$, then $A_r \subseteq C_t$; for if $a \wedge A_r = 0$ then $a \wedge B_s = 0$, so $a \wedge C_t = 0$.

Example 2.1.2. Consider the geometric algebra $\mathcal{G}(\mathbb{R}^3)$. Let $A = e_{12}$ and suppose that $a \wedge A = 0$. We showed in Example 1.3.2 that this is equivalent to $a \in \langle e_1, e_2 \rangle$. Then $a = \alpha^1 e_1 + \alpha^2 e_2$ for some scalars $\alpha^1, \alpha^2$. Observe
$$a \wedge e_{123} = \langle (\alpha^1 e_1 + \alpha^2 e_2)e_{123} \rangle_4 = \langle \alpha^1 e_{23} + \alpha^2 e_{31} \rangle_4 = 0.$$
Therefore, $A \subseteq e_{123}$. Intuitively, $e_{123}$ represents a volume containing the plane $A$.

We show that containment in the sense of the wedge product means containment in the sense of the span of a collection of vectors.

Proposition 2.1.3. Let $A_r$ be a non-zero $r$-blade with a representation $A_r = a_1 \cdots a_r$. Then $b \wedge A_r = 0$ if and only if $b \in \langle a_1, \ldots, a_r \rangle$.

Proof. If $b \in \langle a_1, \ldots, a_r \rangle$, then the set $\{b, a_1, \ldots, a_r\}$ is linearly dependent. By Proposition 1.3.8, $b \wedge A_r = 0$. Conversely, suppose that $b \wedge A_r = 0$. By Proposition 1.3.8, the set $\{b, a_1, \ldots, a_r\}$ is linearly dependent, and since $\{a_1, \ldots, a_r\}$ is linearly independent we have $b \in \langle a_1, \ldots, a_r \rangle$.

Definition 2.1.4. Let $A_r$ be an $r$-blade. Then
$$\mathcal{G}^1(A_r) = \{a \in \mathcal{G}^1 : a \wedge A_r = 0\}.$$
We shall call $\mathcal{G}^1(A_r)$ the subspace representation of $A_r$.

Corollary 2.1.5. Let $A_r$ be a non-zero $r$-blade with a representation $A_r = a_1 \cdots a_r$. Then $\mathcal{G}^1(A_r) = \langle a_1, \ldots, a_r \rangle$.

Proof. This follows immediately from Proposition 2.1.3.

Given a blade, by Corollary 2.1.5, there is a corresponding subspace. In the next section we show the converse.

is an m-blade and is non-zero by Corollary 1.3.8. By Proposition 2.1.3, G 1 (H m ) = H. This is a key result, for it says that given a subspace there is a corresponding blade, and that blade is the wedge between the basis elements. This allows one to do algebra with subspaces. Example 2.1.7. Consider the geometric algebra G(R 3 ) and the subspace H = e 1, e 2. A blade representing the subspace is e 1 e 2 = e 12. With this association, we may interpret the subspace as having an orientation, from e 1 to e 2 ; and weight given by the area of the parallelogram determined by e 1 and e 2 which is 1. Consider the subspace L = e 1. A blade representing the subspace is e 1. With this association, we may interpret the line as having the orientation specified by e 1 ; and the weight, which is 1. We could also use the blade 2e 1. In this instance, the orientation is that of e 1 ; and the weight is twice of e 1, or 2. Note the correspondence between the grade of a blade and the dimension of a subspace. It will now be shown that any blade representing a subspace, will have its grade equal to the dimension of the subspace. Proposition 2.1.8. Let H be a subspace of vectors, dim H = m, and let A k be a non-zero k-blade. If G 1 (A k ) = H, then k = m. Proof. Suppose that A k has a representation A k = a 1 a k and suppose that {h 1,..., h m } is a basis of H. By our hypothesis, h 1,..., h m = a 1,..., a k. Since the dimension of a vector space is unique and the a ks are linearly independent, we conclude that k = m. We now show that two blades representing the same subspace, are scalar multiples of each other. 41

Proposition 2.1.9. If A m, B m are non-zero m-blades that represent the same subspace of dimension m, then B m = λa m, λ R. Conversely, if B m = λa m, then G 1 (B m ) = G 1 (A m ). Proof. By Proposition 2.1.6, the blades A m and B m have the form A m = h 1 h m, B m = h 1 h m where {h 1,..., h m }, {h 1,..., h m} are bases for the subspace. Then h k h 1,..., h m, k = 1,..., m. Therefore, there exist scalars η for which h k = η s kh s (sum on s), k = 1,..., m. Hence, B m = h 1 h m = η j 1 1 h j1 η jm m h jm = ( 1) σ η σ(1) 1 η σ(n) h 1 h m σ S n n = λa m where λ = σ S n ( 1) σ η σ(1) 1 η σ(n) n. The converse of the statement is trivial to prove. If we take A m in the proposition above to have unit weight, then B m will have a weight of λ. The wedge product allows the correspondence between blades and subspaces. Even though the correspondence is not unique it is essentially unique up to a scalar multiple. A blade can be thought of as an oriented weighted r dimensional subspace. 42

2.1.2 Direct sum We show, under a simple condition, that the wedge of two blades may be interpreted as the direct sum of their representative subspaces. We begin with a vector and blade then generalize to a blade with a blade. Example 2.1.10. Consider G(R 3 ). Informally, if we interpret e 1 and e 2 as lines and e 1 e 2 representing a plane containing e 1, e 2. We may view e 1 e 2 as the direct sum of the lines e 1, e 2. Often we do not need a representation for a blade. When this is the case we will use the symbol A r to denote the collection of its factors; for example we may say {a, A r } is a linearly dependent set, which means that if A r has a representation A r = a 1 a r, then {a, a 1,..., a r } is a linearly dependent set. Lemma 2.1.11. Let a and A r be a vector and an r-blade, respectively. Then there exists a non-zero vector b a, A r if and only if a A r = 0. Proof. Suppose there exists a non-zero vector b a, A r. By Proposition 2.1.3, b a, A r Then the set {a, A r } is linearly dependent, so that by Corollary 1.3.8, a A r = 0. Conversely, suppose that a A r = 0. Then {a, A r } is linearly dependent. Suppose that A r has a representation A r = a 1 a r. Then there exists scalars λ, λ 1,..., λ r not all zero for which λa + λ 1 a 1 + + λ r a r = 0. The set {a 1,..., a r } is linearly independent so λ 0 and there is a λ k 0. Then the vector b = λa = λ 1 a 1 + + λ r a r is contained in a and A r and is non-zero. Proposition 2.1.12. Let a and A r be a vector and an r-blade, respectively. If a A r 0, then a G 1 (A r ) = G 1 (a A r ). Proof. Since a A r 0, by Lemma 2.1.11, a G 1 (A r ) = {0}. 43