MA30231: Projective Geometry


Fran Burstall

with corrections by James Williams, Phil Smith, Derek Moniz, Ioanna Stylianou, Alex Abboud, Pooja Khatri, Chloe Webbe, Hugo Govett, Daniel Hurst, Tyrah Sanchez, Dan Gardham, Matthew Pereira, Matt D'Souza, Chris Goodrum, Josh Bee, Gabriel Glencross, Keian Barton, Tom Crawley, Martin Prigent, Ali Craw, Ella Gaskin, Patrick Jenkinson, Matthew Garner, Preeta Padda, Matt Lorimer, Michael Szweda, Matt Staniforth, Piotr Wozniak

Contents

1 The geometry of projective space
   Projective spaces
   Bases and homogeneous coordinates
   Projective linear subspaces
   Affine space and the hyperplane at infinity
      Lines in F^n and P(F^{n+1})
   Projective transformations
      Linear projections
      Projection from a centre
   Points in general position
   Two classical theorems
   Projective lines and the cross-ratio
   Duality
      Dual vector spaces, annihilators and solution sets
      Duality in projective geometry
      Duality in the projective plane

2 Quadrics
   Symmetric bilinear forms and quadratic forms
      Polars
      Quadratic forms
   Quadrics
      Quadrics on a line
      Quadrics and hyperplanes
   Conics
      Lines and conics
   Polars in projective geometry
   Projective subspaces of quadrics
   Pencils of quadrics
      The space of quadrics
      Pencils of conics
      Invariants of pencils
   Application to linear algebra

3 Exterior algebra and the space of lines
   Exterior algebra
   Lines and 2-vectors
   The Klein quadric
   Lines and planes in the Klein quadric

Chapter 1: The geometry of projective space

1.1 Projective spaces

Projective spaces are built out of vector spaces, so let us begin by recalling what a vector space is. Let V be a (finite-dimensional[1]) vector space over a field F. Thus V is a set along with two binary operations:

Addition V × V → V : (u, v) ↦ u + v, with respect to which V is an abelian group.

Scalar multiplication F × V → V : (λ, v) ↦ λv, which distributes over addition: λ(u + v) = λu + λv, for all λ ∈ F and u, v ∈ V, and satisfies some other axioms such as 1v = v, where 1 ∈ F is the multiplicative identity element.

Remark. For most of this course, you will not lose much if you think of F as R or C. However, there will come a moment when we need F to be the field Q of rational numbers. Moreover, vector (and projective!) spaces over finite fields arise in coding theory and cryptography.

The basic example of a vector space over F is F^n = F × ⋯ × F, the Cartesian product of n copies of F. To make F^n into a vector space, I must tell you how to add vectors and do scalar multiplication. The answer is that I do both things component-wise:

(λ_1, ..., λ_n) + (µ_1, ..., µ_n) := (λ_1 + µ_1, ..., λ_n + µ_n)
λ(µ_1, ..., µ_n) := (λµ_1, ..., λµ_n).

Definition. A vector subspace of a vector space V is a non-empty subset U ⊆ V which is closed under addition and scalar multiplication. In this case, we write U ≤ V. Of course, the addition and scalar multiplication on V now restrict to U and so make U into a vector space in its own right.

Example. For v ∈ V \ {0}, set [v] := {λv : λ ∈ F}.

Exercise. [v] ≤ V. That is, [v] is closed under addition and scalar multiplication. In fact, [v] is a 1-dimensional subspace of V.

[1] See Section 1.2 if you cannot remember what this means.
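The component-wise recipe for F^n is easy to play with in code. Here is a minimal stdlib-only Python sketch (the function names are my own), using tuples for vectors and Fraction so that F = Q:

```python
from fractions import Fraction

def vadd(u, v):
    """Component-wise addition in F^n."""
    return tuple(x + y for x, y in zip(u, v))

def smul(lam, v):
    """Component-wise scalar multiplication in F^n."""
    return tuple(lam * x for x in v)

u, v = (Fraction(1), Fraction(2)), (Fraction(3), Fraction(4))
lam = Fraction(1, 2)
# scalar multiplication distributes over addition: lam(u + v) = lam*u + lam*v
print(smul(lam, vadd(u, v)) == vadd(smul(lam, u), smul(lam, v)))   # True
```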

Example. In R^2, the 1-dimensional subspace [v] is simply the line through the origin and v: see Figure 1.1.

Figure 1.1: The 1-dimensional subspaces of R^2 are the lines through the origin.

With this understood, we can state the basic definition of the course:

Definition. The projective space of a vector space V is the set

P(V) := {[v] : v ∈ V \ {0}} = {1-dimensional subspaces of V}.

Notation. Write F^× for F \ {0}.

Exercise. (Easy) For v, w ∈ V \ {0}, [v] = [w] if and only if there is λ ∈ F^× such that w = λv.

What all this means: an element A ∈ P(V) is of the form A = [v], for some v ∈ V \ {0}. We say that v is a representative vector of A and note that v is defined up to scalar multiplication: if v is a representative vector of A, so is λv, for all λ ∈ F^×, and all representative vectors of A arise this way.

Let us draw a picture[2] in a simple case:

Figure 1.2: P(R^2) is (mostly) parameterised by a copy of R.

We observe:

1. Each A ∈ P(R^2) cuts the line x = 1 in a unique point, except the element I = [(0, 1)], the y-axis, which does not cut it at all. Conversely, each point on x = 1 gives rise to an element of P(R^2).

2. As the intersection point gets larger (either positively or negatively), A gets closer to I, so it is tempting to call I infinity.

[2] An interactive version of this picture can be found on-line.

Now the line x = 1 is simply a copy of R (just forget the first coordinate: (1, y) ↦ y), so we conclude that P(R^2) is the union of a copy of R along with an extra point at infinity:

P(R^2) = R ∪ {∞}.

Punchline: We shall see that a similar picture holds in general.

1.2 Bases and homogeneous coordinates

We need some more first year linear algebra: recall that v_0, ..., v_n ∈ V is a basis for V if every[3] v ∈ V can be written

v = Σ_{i=0}^n λ_i v_i,   λ_i ∈ F,   (1.1)

in a unique[4] way. If V has a finite basis, the number of elements in that basis, here n + 1, is called the dimension of V, denoted dim V. The intuition is that one needs dim V many scalars to uniquely specify an element of V.

We can apply this same intuition to sets which are not vector spaces, like our projective spaces. In the example of P(R^2) = R ∪ {∞}, we saw that, most of the time, just one scalar is needed to specify an element of P(R^2). This suggests:

Definition. The dimension of the projective space P(V), denoted dim P(V), is given by dim P(V) = dim V − 1.

We shall return to this shortly but first, some more about bases. A basis is the same as a linear isomorphism V ≅ F^{n+1}:

v = Σ_{i=0}^n λ_i v_i ↦ (λ_0, ..., λ_n).

Strictly, we should write v = Σ_{i=0}^n λ_i(v) v_i here, as the λ_i are functions of the vector v. The (n + 1) functions λ_i : V → F are the coordinate functions with respect to v_0, ..., v_n and, for v ∈ V, λ_0(v), ..., λ_n(v) are the coordinates of v with respect to v_0, ..., v_n.

Exercise. The coordinate functions λ_i are linear functions V → F and so elements of the dual vector space V* of V.

Back to projective space: if A = [v] ∈ P(V) and v = Σ_i λ_i v_i, we also say that λ_0, ..., λ_n are homogeneous coordinates of A (with respect to v_0, ..., v_n). Of course, for λ ∈ F^×, A = [λv] also, so that λλ_0, ..., λλ_n are also homogeneous coordinates of A. We write

A = [λ_0, ..., λ_n]

where the equality sign is heavy abuse of notation for "has homogeneous coordinates with respect to v_0, ..., v_n".

Remarks.
1. For any λ ∈ F^×, A = [λ_0, ..., λ_n] = [λλ_0, ..., λλ_n].

2. Since v is non-zero, at least one λ_i ≠ 0, so that there is no element A ∈ P(V) with A = [0, ..., 0].

[3] That every v can be written as in (1.1) is the assertion that v_0, ..., v_n span V.
[4] The uniqueness amounts to the demand that v_0, ..., v_n are linearly independent.
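Since homogeneous coordinates are only defined up to scale, testing whether two coordinate vectors name the same element of P(F^{n+1}) means testing proportionality. A stdlib-only Python sketch (the helper name is mine):

```python
from fractions import Fraction

def same_point(a, b):
    """True if the homogeneous coordinate tuples a and b represent the same
    element of P(F^(n+1)), i.e. b = lam * a for some lam != 0."""
    assert any(x != 0 for x in a) and any(x != 0 for x in b), "[0,...,0] is not allowed"
    # use the first non-zero entry of a to fix the scale factor lam
    i = next(k for k, x in enumerate(a) if x != 0)
    if b[i] == 0:
        return False
    lam = Fraction(b[i], a[i])
    return all(y == lam * x for x, y in zip(a, b))

# [1, 2, 3] and [2, 4, 6] are homogeneous coordinates of the same point:
print(same_point((1, 2, 3), (2, 4, 6)))   # True
print(same_point((1, 2, 3), (1, 2, 4)))   # False
```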

1.3 Projective linear subspaces

Observe, if U ≤ V then any [u] ∈ P(U) is also a 1-dimensional subspace of V, that is, [u] ∈ P(V). So P(U) ⊆ P(V). This prompts:

Definition. X ⊆ P(V) is a projective linear subspace (or just subspace) of P(V) if it is of the form X = P(U) for some vector subspace U ≤ V. In this case, we write X ≤ P(V) and, as usual, set dim X = dim U − 1.

Here are some examples, classified by dimension:

Examples.

1. The zero vector space U = {0} has no u ∈ U \ {0}, so that P({0}) = ∅. Thus ∅ is a projective subspace and dim ∅ = −1! This last is not a curiosity: we shall put it to use soon.

2. dim X = 0: such an X is of the form P(U) where dim U = 1. This means that U = [u], for any u ∈ U \ {0}, so that X = P(U) = {U} is a singleton subset of P(V). We conclude that dim X = 0 if and only if |X| = 1. Such X are called (projective) points.

Alert: A point is therefore a singleton subset of P(V) while an element is a member of P(V). Schematically: point = {element}. We will often blur the distinction between points and elements so do not worry about it too much!

3. If X ≤ P(V) has dim X = 1, X is called a (projective) line.

4. If X ≤ P(V) has dim X = 2, X is called a (projective) plane.

5. If X ≤ P(V) has dim X = dim P(V) − 1, X is called a (projective) hyperplane. In this case, X = P(U) where dim U = dim V − 1: such a U is called a linear hyperplane of V. Thus:

If P(V) is a line, its hyperplanes are points.
If P(V) is a plane, its hyperplanes are lines.
If dim P(V) = 3, its hyperplanes are planes.

We are going to build new projective subspaces out of old ones. For this, we need to revise a little more linear algebra: let U_1, U_2 ≤ V and define subsets of V by:

U_1 + U_2 := {u_1 + u_2 : u_1 ∈ U_1, u_2 ∈ U_2}
U_1 ∩ U_2 := {u ∈ V : u ∈ U_1 and u ∈ U_2}.

Exercises.

1. U_1 ∩ U_2 and U_1 + U_2 are both vector subspaces of V.

2. If U_1, U_2 ≤ W ≤ V then U_1 + U_2 ≤ W. Thus U_1 + U_2 is the smallest vector subspace of V containing both U_i.

Proposition 1.1.
For U_1, U_2 ≤ V,

dim(U_1 + U_2) = dim U_1 + dim U_2 − dim(U_1 ∩ U_2).

Proof. We sketch two proofs:

1. Take a basis of U_1 ∩ U_2 and extend it first to a basis of U_1 and then to a basis of U_2. Now show that all these vectors together give a basis of U_1 + U_2 (showing linear independence is a little tricky).

2. Consider the linear map α : U_1 ⊕ U_2 → U_1 + U_2 given by α(u_1, u_2) = u_1 + u_2. It is easy to see that α surjects and that ker α ≅ U_1 ∩ U_2. The rank-nullity theorem now bakes the cake. (See an exercise sheet for more details.)

Lemma 1.2. If U ≤ V then dim U ≤ dim V with equality if and only if U = V.

Proof. The thing to prove is: if U ⊊ V then dim U < dim V. For this, take a basis of U along with a vector in V \ U: these are dim U + 1 linearly independent vectors in V so that dim V ≥ dim U + 1.

Let us turn this into geometry:

Definition. Let X_1 = P(U_1), X_2 = P(U_2) ≤ P(V).

1. The join of X_1 and X_2 is X_1 ∨ X_2 := P(U_1 + U_2).

2. The intersection of X_1 and X_2 is just the usual set-theoretic intersection X_1 ∩ X_2. We say that X_1 and X_2 intersect if X_1 ∩ X_2 ≠ ∅ (equivalently, U_1 ∩ U_2 ≠ {0}).

Note that X_1 ∩ X_2 = P(U_1 ∩ U_2) so that both join and intersection of projective linear subspaces are again projective linear subspaces.

Exercise. If X_1, X_2 ⊆ Y ≤ P(V) then X_1 ∨ X_2 ⊆ Y. Thus X_1 ∨ X_2 is the smallest projective linear subspace that contains both X_1 and X_2.

Lemma 1.3. If X_1 ⊆ X_2 then dim X_1 ≤ dim X_2 with equality if and only if X_1 = X_2.

Proof. This comes straight from Lemma 1.2.

Theorem 1.4 (Dimension Formula). If X_1, X_2 ≤ P(V) then

dim X_1 ∨ X_2 = dim X_1 + dim X_2 − dim(X_1 ∩ X_2).   (1.2)

Proof. With X_i = P(U_i) as usual, (1.2) reads

dim(U_1 + U_2) − 1 = (dim U_1 − 1) + (dim U_2 − 1) − (dim(U_1 ∩ U_2) − 1)

which is just Proposition 1.1.

The Dimension Formula is a powerful tool that we shall use many times. We begin by showing that projective points and lines behave somewhat as we imagine points and lines should behave.

Theorem 1.5. Let P(V) be a projective space.

1. There is a unique projective line through any two distinct points of P(V).

2. If P(V) is a plane (thus dim P(V) = 2), any pair of distinct lines in P(V) intersect in a unique point.
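The Dimension Formula lends itself to experiment: dim(U_1 + U_2) is the rank of the spanning vectors of U_1 and U_2 stacked together, and Proposition 1.1 then yields dim(U_1 ∩ U_2). A stdlib-only Python sketch over F = Q (the sample subspaces are my own):

```python
from fractions import Fraction

def rank(rows):
    """Row rank of a rational matrix, by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        # find a pivot for column c among the remaining rows
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Two distinct projective lines L1 = P(U1), L2 = P(U2) in the plane P(F^3):
U1 = [(1, 0, 0), (0, 1, 0)]                 # dim U1 = 2
U2 = [(1, 0, 1), (0, 1, 1)]                 # dim U2 = 2
dim_join = rank(U1 + U2)                    # dim(U1 + U2)
dim_meet = rank(U1) + rank(U2) - dim_join   # Proposition 1.1
print(dim_join, dim_meet)                   # 3 1
```

Projectively, dim(U_1 ∩ U_2) = 1 means the two lines meet in a single projective point, as part 2 of Theorem 1.5 predicts.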

(a) Two points lie on a unique line. (b) Two lines in a plane intersect.

Figure 1.3: The picture for Theorem 1.5

Proof.

1. Let A, B ∈ P(V) be distinct points and let L = A ∨ B be their join. Then Theorem 1.4 says

dim A ∨ B = dim A + dim B − dim(A ∩ B) = 0 + 0 − (−1) = 1

so that L is a line through A and B. For uniqueness, if L′ is another line with A, B ∈ L′ then L = A ∨ B ⊆ L′ also. However, dim L = dim L′ = 1 so Lemma 1.3 yields L = L′.

2. Let L_1, L_2 be distinct lines in a projective plane P(V). Then L_1 ∨ L_2 ⊆ P(V) so that Lemma 1.3 and Theorem 1.4 give

2 = dim P(V) ≥ dim L_1 ∨ L_2 = dim L_1 + dim L_2 − dim(L_1 ∩ L_2) = 2 − dim(L_1 ∩ L_2).

We conclude that dim L_1 ∩ L_2 ≥ 0 and L_1 ∩ L_2 is therefore not empty. However, L_1 ∩ L_2 ≠ L_1 since otherwise L_1 ⊆ L_2 and then L_1 = L_2 by Lemma 1.3. Thus dim L_1 ∩ L_2 < dim L_1 = 1. We conclude that dim L_1 ∩ L_2 = 0 so that the L_i intersect in a single point.

Remark. Let us pause for a second to think about what Theorem 1.5(2) is saying: every pair of distinct projective lines in a projective plane intersect. This is very different from the usual lines in the usual plane R^2 that we know and love: there we have parallel lines that never meet. The intuition is that the projective plane has some extra points at infinity (in fact, one point for each direction) and it is at one of these extra points that parallel lines actually meet. We shall make this precise in the next section.

1.4 Affine space and the hyperplane at infinity

What does an n-dimensional projective space over F look like? An approximate answer is that it is a copy of F^n along with some extra stuff. To understand this, we use the homogeneous coordinates we introduced in Section 1.2. So let V be an (n + 1)-dimensional vector space over F and choose a basis v_0, ..., v_n. Let A ∈ P(V) have homogeneous coordinates [λ_0, ..., λ_n] so that A = [Σ_i λ_i v_i]. There are two possibilities for λ_0: first suppose that λ_0 ≠ 0.
Since the λ_i are only defined up to common scale, we can multiply them all by 1/λ_0 and then A = [λ_0, ..., λ_n] = [1, λ_1/λ_0, ..., λ_n/λ_0], and we arrive at a vector (λ_1/λ_0, ..., λ_n/λ_0) ∈ F^n. Conversely, any (x_1, ..., x_n) ∈ F^n gives rise to a point of P(V) with homogeneous coordinates [1, x_1, ..., x_n]. More formally:

Proposition 1.6. If V has basis v_0, ..., v_n then the map

φ_0 : F^n → P(V) \ {[v] : λ_0(v) = 0}
(x_1, ..., x_n) ↦ [1, x_1, ..., x_n]

is a bijection.

Proof. First we show φ_0 injects: if φ_0(x_1, ..., x_n) = φ_0(y_1, ..., y_n) then [1, x_1, ..., x_n] = [1, y_1, ..., y_n], which means there is λ ∈ F^× such that 1 = λ1 and y_i = λx_i, whence λ = 1 and then (x_1, ..., x_n) = (y_1, ..., y_n). For surjectivity, if λ_0 ≠ 0 then

[λ_0, ..., λ_n] = [1, λ_1/λ_0, ..., λ_n/λ_0] = φ_0(λ_1/λ_0, ..., λ_n/λ_0).

Thus the set of A ∈ P(V) with λ_0 ≠ 0 is a copy of F^n, but what is left? If λ_0 = 0 then A = [v] with v ∈ U_0 := span{v_1, ..., v_n} = ker λ_0. Thus, in this case, A ∈ P(U_0). Since dim P(U_0) = n − 1, P(U_0) is a hyperplane which we call the hyperplane at infinity (with respect to v_0, ..., v_n).

From now on, we will identify P(V) \ P(U_0) with F^n using Proposition 1.6:

[λ_0, ..., λ_n] ↦ (λ_1/λ_0, ..., λ_n/λ_0),   (x_1, ..., x_n) ↦ [1, x_1, ..., x_n].

The punchline is that we now have a decomposition

P(V) = F^n ∪ P(U_0).

We call F^n the affine part of P(V) (with respect to v_0, ..., v_n). If n > 1, we can apply the same argument to the hyperplane at infinity: P(U_0) = F^{n−1} ∪ X_1 where X_1 = {[v] : λ_0 = λ_1 = 0}, and then induct to get

P(V) = F^n ∪ F^{n−1} ∪ ⋯ ∪ F ∪ {∞},

where ∞ = [0, ..., 0, 1]. For n = 1, we have seen this before: P(F^2) = F ∪ {∞} (see Figure 1.2). Figure 1.4 shows the picture for the case n = 2.

We can see more in this picture: a projective line is of the form P(U) ≤ P(R^3) with dim U = 2. Then U intersects the plane λ_0 = 1 in an honest line, see Figure 1.5, or not at all (if U = {λ_0 = 0}). We shall see how to confirm this rigorously in the next section.

Remark. In the construction of the affine part, we could have chosen any λ_j instead of λ_0 to get a similar decomposition with a different hyperplane as the hyperplane at infinity.
Indeed, by judicious choice of basis, we can arrange for any hyperplane to be the hyperplane at infinity.

1.4.1 Lines in F^n and P(F^{n+1})

Let us explore how projective lines interact with the affine part of the ambient projective space. First we remind ourselves about lines in F^n:

Definition. An affine line l in F^n is a subset of the form l = {x + tv : t ∈ F}, for fixed x ∈ F^n and v ∈ F^n \ {0}.

Figure 1.4: The affine part of P(R^3): A = [1, x_1, x_2] lies on the plane λ_0 = 1, while λ_0 = 0 cuts out the hyperplane at infinity.

Thus an affine line is simply a line in F^n that does not necessarily pass through the origin.

Exercise. For x ≠ y ∈ F^n, show that there is a unique affine line through x and y. [Hint: take v = y − x.]

Here are some equivalent formulations of the idea: the affine line l = {x + tv : t ∈ F} is the translate by x of the 1-dimensional subspace [v]. Otherwise said, it is a coset x + [v]. When n = 2, affine lines are the solution sets of a single inhomogeneous linear equation:

l = {(x_1, x_2) : a_1 x_1 + a_2 x_2 = d},

with a_i, d ∈ F and the a_i not both zero.

Let us see how to work with projective lines. There are two approaches to this: via equations or via parametrisations. With this in hand, we ask: how do projective lines L intersect the affine part F^n of P(F^{n+1})? First, we note a result from the first exercise sheet:

Exercise. Let L be a line and X a hyperplane in a projective space P(V). Then either L ⊆ X or L intersects X in a single point.

Now let L ≤ P(F^{n+1}), take X = P(U_0) and suppose that L does not lie in P(U_0). Choose A ∈ L ∩ F^n and let B = L ∩ P(U_0). Then A = [v], B = [w] where

v = (1, x_1, ..., x_n) = (1, x)
w = (0, v_1, ..., v_n) = (0, v),

for some x ∈ F^n, v ∈ F^n \ {0}, and the last equality on each line defines our notation. Now

L = A ∨ B = {[λ(1, x) + µ(0, v)] : λ, µ not both zero}
  = {[λ, λx + µv] : λ, µ not both zero}
  = {[1, x + tv] : t ∈ F} ∪ {[0, v]},

for t = µ/λ ∈ F. We therefore conclude:

Figure 1.5: The affine part of a projective line P(U) in a projective plane: L ∩ F^n is the affine line {x + tv : t ∈ F} while L ∩ P(U_0) = {[0, v]} gives the direction of this affine line.

We conclude that L cuts F^n in an affine line or not at all, and all affine lines arise this way.

Exercise. Two affine lines in F^n are parallel if and only if they meet at infinity (that is, the corresponding projective lines meet on the hyperplane at infinity).

For projective planes (so n = 2), we have an alternative approach to these matters via equations: a projective line L = P(U) where, in this case, U ≤ F^3 is a linear hyperplane and so, as we will recall in Section 1.9.1, is the solution set of a single homogeneous linear equation:

U = {(λ_0, λ_1, λ_2) : a_0 λ_0 + a_1 λ_1 + a_2 λ_2 = 0},

for some a_0, a_1, a_2 ∈ F, not all zero. Thus we have

L = {[λ_0, λ_1, λ_2] : a_0 λ_0 + a_1 λ_1 + a_2 λ_2 = 0}

so that

L ∩ F^2 = {(x_1, x_2) : a_0 + a_1 x_1 + a_2 x_2 = 0}
L ∩ P(U_0) = {[0, λ_1, λ_2] : a_1 λ_1 + a_2 λ_2 = 0}.

We now see two cases:

(a) a_1, a_2 not both zero: here L ∩ F^2 is an affine line and L ∩ P(U_0) is the single point [0, a_2, −a_1].

(b) a_1 = a_2 = 0: now L ∩ F^2 = ∅ and L = P(U_0).
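Case (a) can be tested directly: the projective closure of the affine line a_0 + a_1 x_1 + a_2 x_2 = 0 meets the line at infinity at [0, a_2, −a_1], so parallel lines (same a_1, a_2 up to scale) share their point at infinity. A small Python sketch (the helper names are mine):

```python
def infinity_point(a0, a1, a2):
    """Where the projective line a0*l0 + a1*l1 + a2*l2 = 0 meets l0 = 0."""
    assert (a1, a2) != (0, 0), "need an affine line, not the line at infinity"
    return (0, a2, -a1)        # indeed a1*a2 + a2*(-a1) = 0

def proportional(a, b):
    """Same projective point: all 2x2 minors of the 2x3 matrix (a; b) vanish."""
    n = len(a)
    return all(a[i] * b[j] == a[j] * b[i] for i in range(n) for j in range(n))

# x1 + x2 = 1 and 2*x1 + 2*x2 = 6 are parallel affine lines:
P1 = infinity_point(-1, 1, 1)      # (0, 1, -1)
P2 = infinity_point(-6, 2, 2)      # (0, 2, -2)
print(proportional(P1, P2))        # True: they meet at the same point at infinity
```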

Thus every projective line, except the line at infinity, cuts the affine part in an affine line. Conversely, if l is an affine line in F^2 with equation a_0 + a_1 x_1 + a_2 x_2 = 0 then, writing x_i = λ_i/λ_0 and multiplying through by λ_0, we see that l = L ∩ F^2 where L is the projective line with equation a_0 λ_0 + a_1 λ_1 + a_2 λ_2 = 0.

Remark. As it stands, this analysis only works for projective planes. In general, a single linear equation defines a hyperplane in P(V) and, for example, you need two linearly independent equations to define a line in P(F^4). Geometrically, a line in a 3-dimensional projective space is the intersection of two distinct hyperplanes.

1.5 Projective transformations

Recall some more linear algebra:

Definition. A linear map (or linear transformation) T : V → W of vector spaces is a map with

T(v_1 + v_2) = T v_1 + T v_2,   T(λv) = λT v,

for all v, v_1, v_2 ∈ V and λ ∈ F. The kernel of T is ker T := {v ∈ V : T v = 0} ≤ V. The image of T is Im T := {T v : v ∈ V} ≤ W.

Theorem (Rank-nullity theorem). Let T : V → W be a linear map of finite-dimensional vector spaces. Then

dim ker T + dim Im T = dim V.

As an immediate application, we know that T is injective if and only if ker T = {0} so that, if dim V = dim W, T injects if and only if T surjects (via rank-nullity) if and only if T bijects if and only if T has a (linear) inverse.

We now make a basic observation which enables us to use linear maps in projective geometry. If T v ≠ 0 (that is, v ∈ V \ ker T), then T[v] := [T v] is well defined, since T(λv) = λT v, so that we have a well-defined map

P(V) \ P(ker T) → P(W)
[v] ↦ [T v].

This prompts:

Definition. A projective map is a map τ : P(V) \ P(U) → P(W), where P(U) ≤ P(V), of the form

τ[v] = [T v],   (1.3)

for some linear map T : V → W with ker T = U.

A projective transformation is a projective map τ : P(V) → P(W) of the form (1.3) with T injective. In particular, if dim P(V) = dim P(W) then T, and so τ, is invertible.
Moreover, τ^{−1} is then also a projective transformation (induced by T^{−1}).

Remark. For λ ∈ F^×, λT and T define the same projective map: (λT)[v] = [λT v] = [T v].

1.5.1 Linear projections

To get an interesting example of a projective map, we need a little more linear algebra: let U_1, U_2 ≤ V with V = U_1 ⊕ U_2. Thus V = U_1 + U_2 and U_1 ∩ U_2 = {0}. Then any v ∈ V has a unique expression v = v_1 + v_2 with v_1 ∈ U_1 and v_2 ∈ U_2, see Figure 1.6. We can therefore define the projection onto U_2 along U_1,

P : V → V,   P v = v_2.

We know:

- P is linear.
- ker P = U_1 and Im P = U_2.
- P|_{U_2} = id_{U_2} so that P^2 = P.
- id_V − P is the projection onto U_1 along U_2.

Figure 1.6: Projections onto U_1 and U_2.

1.5.2 Projection from a centre

We give an example of a projective map. Let P(V) be a projective plane, O ∈ P(V) a point and L ≤ P(V) a line with O ∉ L. We use this data to define a map, the projection onto L with centre O,

τ : P(V) \ O → L,

as follows: draw the line OA through O and A and let τ(A) be where this line cuts L, see Figure 1.7. Thus {τ(A)} = OA ∩ L. That τ is well-defined (that is, L and OA intersect in a unique point) is of course straight from Theorem 1.5.

We are going to show that τ is a projective map. For this, let O = P(U_1) and L = P(U_2). Then dim U_1 = 1, dim U_2 = 2 and U_1 ∩ U_2 = {0} (since O does not lie on L) so that, by Proposition 1.1, V = U_1 ⊕ U_2.

Figure 1.7: Projection with centre O onto the line L.

Now let P : V → U_2 be projection onto U_2 along U_1. I claim that:

τ[v] = [P v], for all v ∉ ker P = U_1,

so that τ is indeed a projective map. For the claim: for A = [v] ∈ P(V) \ O, v ∉ U_1, so write v = v_1 + v_2, with v_i ∈ U_i. Then:

1. v_2 = P v ≠ 0 so that [P v] ∈ L.

2. v_2 = v − v_1 ∈ [v] + U_1 so that [P v] ∈ OA.

Thus OA intersects L at [P v], whence τ(A) = [P v] as required.

We can refine this story by introducing a second line ˆL that does not pass through O and restricting the projection with centre O to ˆL, as in Figure 1.8.

Figure 1.8: Projection with centre O from ˆL to L.

If ˆL = P(Û) then, since O ∩ ˆL = ∅, ker P ∩ Û = Û ∩ U_1 = {0} so that τ|_ˆL : ˆL → L is a projective transformation.
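The decomposition v = v_1 + v_2 makes τ easy to compute in coordinates. A stdlib-only sketch in the plane P(F^3), with the illustrative (my own) choices O = [(1, 1, 1)] and L = P(U_2) where U_2 = {λ_2 = 0}:

```python
from fractions import Fraction

O = (1, 1, 1)                 # a representative vector of the centre

def project(v):
    """tau[v] = [P v], with P the linear projection onto U2 = {lam_2 = 0}
    along [O]: write v = v1 + v2, v1 in [O], v2 in U2, and return v2."""
    t = Fraction(v[2], O[2])                 # v1 = t*O matches the last coordinate of v
    v2 = tuple(Fraction(x) - t * o for x, o in zip(v, O))
    assert any(x != 0 for x in v2), "v represents the centre O itself"
    assert v2[2] == 0                        # the image really lies on L
    return v2

A = (3, 5, 1)
print(project(A))             # the point [2, 4, 0] (as Fractions), where OA meets L
```

Indeed (3, 5, 1) − (1, 1, 1) = (2, 4, 0) lies in the span of O and A and has last coordinate zero, so [2, 4, 0] ∈ OA ∩ L, as the claim in the text predicts.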

Exercise. τ : ˆL → L is bijective and its inverse is also a projective transformation.

We can repeat the analysis with a projective space P(V) of any dimension using a point O and a hyperplane X not containing O to define projection with centre O onto X, a projective map P(V) \ O → X.

For example, let us see what this map looks like for the 3-dimensional projective space P(R^4), which splits P(R^4) = R^3 ∪ P(R^3) into an affine part and a plane at infinity. We choose O ∈ R^3, the affine part, and let X be a plane which is not the plane at infinity P(R^3) and which does not contain O. Set S := X ∩ R^3, an affine plane. Then the restriction of τ, the projection with centre O onto X, to the affine part is pictured in Figure 1.9.

Figure 1.9: Projection with centre O in the affine part of P(R^4).

If we view O as the eye of the artist and S as her canvas[5] then, since light travels in straight lines, τ maps a point in space to the corresponding blob of paint on the canvas. We say that a point y and its image τ(y) are in perspective from O. Thus τ maps the world onto the artist's canvas.

Let us look at the picture in more detail. If X is the plane of the ground and L = X ∩ P(R^3) is its line at infinity, then points on L are joined to O by lines parallel to the ground and these are clearly visible on the canvas S as a line called the vanishing line by artists. In the picture, parallel lines in X ∩ R^3 will meet somewhere on the vanishing line unless they are horizontal (that is, parallel to the canvas). See Figure 1.10.

What happens when the position of the canvas is moved, as in Figure 1.11? We now have two planes S_1 and S_2 with corresponding projections τ_1 and τ_2. The relation between the pictures is given by (why?)
τ_1(τ_2(y)) = τ_1(y)

so that the picture on S_2 is mapped onto that on S_1 by the projective transformation τ_1|_{X_2} : X_2 → X_1.

1.6 Points in general position

We all know that problems in linear algebra can sometimes be simplified by making an appropriate choice of basis. We now turn to a parallel construction in projective geometry.

Definition. Let P(V) be an n-dimensional projective space (so that dim V = n + 1). We say that n + 2 points A_0, ..., A_{n+1} ∈ P(V) are in general position if any n + 1 of them are represented by linearly independent vectors.

[5] Or O as the focal point of a camera lens and S as the camera's sensor.
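For n = 2 the definition is concrete: 4 points of a projective plane are in general position exactly when every 3 of their representative vectors have non-zero determinant. A quick stdlib-only sketch (the helper names are mine):

```python
from itertools import combinations

def det3(a, b, c):
    """3x3 determinant with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def general_position4(points):
    """4 points of P(F^3): every 3 representative vectors independent."""
    return all(det3(*trio) != 0 for trio in combinations(points, 3))

good = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
bad  = [(1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1)]   # first three collinear
print(general_position4(good), general_position4(bad))   # True False
```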

Figure 1.10: Art as projection onto the canvas S centred at the artist's eye O. (a) Side view. (b) Head-on view.

Figure 1.11: Canvases are related by a projective transformation.

We note that n + 1 vectors in V are linearly independent if and only if they span V, from which it is not difficult to prove:

Exercise. A_0, ..., A_{n+1} are in general position if and only if there is no (projective) hyperplane containing any n + 1 of them.

Examples.

n = 1: On a projective line, a hyperplane is just a point, so 3 points on a line are in general position so long as no 2 of them coincide, that is, if all three points are distinct.

n = 2: A hyperplane in a plane is a line, so 4 points in a plane are in general position if and only if no three of them are collinear.

Lemma 1.7. If A_0, ..., A_{n+1} are in general position in an n-dimensional projective space P(V) then there are representative vectors v_0, ..., v_{n+1}, unique up to a common scalar factor, such that

Σ_{i=0}^{n+1} v_i = 0.

Proof. Choose arbitrary representative vectors ṽ_i of the A_i. Since dim V = n + 1 and there are n + 2 of the ṽ_i, they must satisfy a linear relation

Σ_{i=0}^{n+1} λ_i ṽ_i = 0

with not all λ_i = 0. In fact, none of the λ_i vanish: if λ_j = 0 then we have

Σ_{i=0}^{j−1} λ_i ṽ_i + Σ_{i=j+1}^{n+1} λ_i ṽ_i = 0,

forcing ṽ_0, ..., ṽ_{j−1}, ṽ_{j+1}, ..., ṽ_{n+1} to be linearly dependent, which is a contradiction. We therefore take v_i := λ_i ṽ_i to get representative vectors that sum to zero.

For uniqueness up to scale, suppose that v_0, ..., v_{n+1} and ˆv_0, ..., ˆv_{n+1} are representative vectors of A_0, ..., A_{n+1} with

Σ_{i=0}^{n+1} v_i = Σ_{i=0}^{n+1} ˆv_i = 0.

Since v_i and ˆv_i both represent A_i, there is µ_i ∈ F^× such that ˆv_i = µ_i v_i. Now we have

0 = µ_0 Σ_{i=0}^{n+1} v_i  and  0 = Σ_{i=0}^{n+1} ˆv_i = Σ_{i=0}^{n+1} µ_i v_i,

so that, subtracting,

Σ_{i=1}^{n+1} (µ_i − µ_0) v_i = 0,

where it is important to note that we are summing from i = 1 in the last equation. However, v_1, ..., v_{n+1} are linearly independent so that µ_i = µ_0, for all i, that is: ˆv_i = µ_0 v_i, for all i, and we are done.

Remark. The converse of this lemma is emphatically false: just having n + 2 vectors sum to zero does not imply that they represent points in general position!

Corollary 1.8. If A_0, ..., A_{n+1} are in general position in P(V) then there is a basis of V with respect to which

A_0 = [1, ..., 1]
A_i = [0, ..., 1, ..., 0], i ≥ 1,

where the 1 is in the i-th place.

Proof. The basis v_1, ..., v_{n+1} from Lemma 1.7 does the job.
For then A_0 = [v_0] = [−Σ_{i≥1} v_i], with homogeneous coordinates [−1, ..., −1] = [1, ..., 1].
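Lemma 1.7's rescaling is a small linear solve. Here is a stdlib-only Python sketch for n = 1 (three points in general position on a projective line; the function name is mine), which finds representative vectors summing to zero by expressing w_0 in terms of w_1, w_2 via Cramer's rule:

```python
from fractions import Fraction

def sum_zero_reps(w0, w1, w2):
    """Lemma 1.7 for n = 1: given representative vectors in F^2 of three
    points in general position on a projective line, rescale them so that
    v0 + v1 + v2 = 0.  We solve w0 = a*w1 + b*w2 by Cramer's rule."""
    det = w1[0] * w2[1] - w1[1] * w2[0]
    assert det != 0, "w1, w2 must be linearly independent"
    a = Fraction(w0[0] * w2[1] - w0[1] * w2[0], det)
    b = Fraction(w1[0] * w0[1] - w1[1] * w0[0], det)
    v0 = tuple(Fraction(x) for x in w0)
    v1 = tuple(-a * x for x in w1)      # rescaled representative of [w1]
    v2 = tuple(-b * x for x in w2)      # rescaled representative of [w2]
    return v0, v1, v2

v0, v1, v2 = sum_zero_reps((1, 0), (0, 1), (1, 1))
print([x + y + z for x, y, z in zip(v0, v1, v2)])   # both entries equal 0
```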

Here is the main result of this section: an analogue for projective transformations of the linear algebra result that a linear map is determined by its values on a basis.

Theorem 1.9. Let dim P(V) = dim P(W) = n and let A_0, ..., A_{n+1} and B_0, ..., B_{n+1} be points in general position in P(V) and P(W) respectively. Then there is a unique projective transformation τ : P(V) → P(W) such that τA_i = B_i, for all 0 ≤ i ≤ n + 1.

Proof. First we show that τ exists. By Lemma 1.7, we can choose representative vectors v_0, ..., v_{n+1} in V and w_0, ..., w_{n+1} in W of the A_i and B_i respectively with Σ_{i=0}^{n+1} v_i = 0 and Σ_{i=0}^{n+1} w_i = 0. Then v_1, ..., v_{n+1} and w_1, ..., w_{n+1} are bases for V and W so that there is a unique linear map T : V → W with T v_i = w_i, for 1 ≤ i ≤ n + 1. Indeed, if v ∈ V, write v = Σ_{i=1}^{n+1} λ_i v_i and define T v := Σ_{i=1}^{n+1} λ_i w_i. In particular,

T v_0 = T(−Σ_{i=1}^{n+1} v_i) = −Σ_{i=1}^{n+1} w_i = w_0.

Thus, defining τ : P(V) → P(W) to be the projective transformation given by τ[v] = [T v], we have τA_i = B_i for all 0 ≤ i ≤ n + 1.

For uniqueness, suppose we have projective transformations τ_1, τ_2 : P(V) → P(W), induced by linear maps T_1, T_2 : V → W, with τ_1 A_i = τ_2 A_i = B_i, for all i. This means that both T_1 v_0, ..., T_1 v_{n+1} and T_2 v_0, ..., T_2 v_{n+1} are sets of representative vectors for the B_i with

Σ_i T_1 v_i = Σ_i T_2 v_i = 0.

The uniqueness assertion in Lemma 1.7 now provides µ ∈ F^× with T_2 v_i = µ T_1 v_i, for all i, yielding T_2 = µT_1 so that τ_1 = τ_2.

Remark. The uniqueness part holds even when dim P(W) > dim P(V), as the following exercise shows.

Exercise. Let A_0, ..., A_{n+1} be in general position in P(V) and τ_1, τ_2 : P(V) → P(W) be projective transformations to a projective space with dim P(W) ≥ dim P(V). Suppose that τ_1 A_i = τ_2 A_i, for all i. Then τ_1 = τ_2.

1.7 Two classical theorems

Theorem 1.10 (Desargues' Theorem[6]).
Let P(V) be a projective space with dim P(V) ≥ 2 and let P, A, A′, B, B′, C, C′ be distinct points of P(V) such that the lines AA′, BB′, CC′ are distinct and meet at P. Then the points of intersection Q = BC ∩ B′C′, R = AC ∩ A′C′ and S = AB ∩ A′B′ are collinear.

The hypothesis says that the triangles ABC and A′B′C′ are in perspective from the point P. The punchline says that the triangles are in perspective from the line QR. The situation is pictured in Figure 1.12 and an on-line interactive version can be found at people.bath.ac.uk/feb/ma30231/demo/desargues.html.

[6] Girard Desargues, 1591–1661, was the inventor of projective geometry.

Figure 1.12: The theorem of Desargues.

Proof. The points P, A, A′ are distinct points on the line AA′ and so in general position on that line. We therefore have, by Lemma 1.7, representative vectors p, a, a′ of P, A, A′ such that

p + a + a′ = 0   (1.4a)

and, similarly, representative vectors of B, B′, C, C′ with

p + b + b′ = 0   (1.4b)
p + c + c′ = 0   (1.4c)

where we scale in the last two equations to ensure that we have the same representative vector p of P in all the equations. Now set

q := b − c = c′ − b′   (1.5a)
r := c − a = a′ − c′   (1.5b)
s := a − b = b′ − a′.   (1.5c)

The second equalities come from subtracting pairs of equations in (1.4). Now note that [b − c] lies on BC while [c′ − b′] lies on B′C′, so that q is a representative vector of Q. Similarly, r and s are representative vectors for R and S. However, summing the equations of (1.5) yields

q + r + s = 0

so that q, r, s span at most a 2-dimensional space. Otherwise said, Q, R, S are collinear.
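In the plane P(F^3) both operations in the theorem reduce to cross products: the line through [u] and [v] has dual coordinates u × v, and the intersection of two lines is the cross product of their dual coordinates. This gives a quick numeric check of Desargues' Theorem on one sample configuration (the configuration and helper names are my own):

```python
def cross(u, v):
    """Cross product: line through two points, or intersection of two lines."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def det3(a, b, c):
    """Zero exactly when the three points [a], [b], [c] are collinear."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

P = (1, 1, 1)
A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
A1 = (2, 1, 1)        # A' = P + A,  so P, A, A' are collinear
B1 = (1, 3, 1)        # B' = P + 2B
C1 = (1, 1, 4)        # C' = P + 3C

Q = cross(cross(B, C), cross(B1, C1))     # BC meet B'C'
R = cross(cross(A, C), cross(A1, C1))     # AC meet A'C'
S = cross(cross(A, B), cross(A1, B1))     # AB meet A'B'
print(det3(Q, R, S))                      # 0: Q, R, S are collinear
```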

There is a cheap alternative proof if dim P(V) ≥ 3 and the triangles ABC and A′B′C′ lie in different planes P_1 and P_2:

Exercises.

1. The join of AA′, BB′, CC′ is 3-dimensional so that, without loss of generality, we can take n = 3.

2. Q, R, S then lie on P_1 ∩ P_2, which is a line.

Remarks.

1. Desargues' Theorem works starting from any of the ten points P, A, A′, B, B′, C, C′, Q, R, S as the point of perspective instead of P (exercise!).

2. The configuration of this theorem is very symmetric: we have ten points and ten lines with three points on every line and three lines through every point. However, not all configurations of this kind come from the Desargues Theorem.

3. The converse of the theorem is true: Let A, B, C and A′, B′, C′ be triangles in P(V) such that Q = BC ∩ B′C′, R = AC ∩ A′C′ and S = AB ∩ A′B′ exist and are collinear. Then the lines AA′, BB′ and CC′ are concurrent, that is, they meet at a point. We shall see a conceptual proof of this later but, for now, try and prove it as an exercise.

Our next classical theorem may be thought of as the first ever theorem of Projective Geometry: it is the first theorem purely about collections of lines in a plane and their intersections.

Theorem 1.11 (Pappus' Theorem[7]). Let A, B, C and A′, B′, C′ be distinct triples of collinear points in a projective plane P(V). Then the three points P := AB′ ∩ A′B, Q := AC′ ∩ A′C and R := BC′ ∩ B′C are collinear.

Here we assume that each pair of lines is distinct so that their intersections are uniquely defined. The situation is pictured in Figure 1.13 and an on-line interactive version can be found at people.bath.ac.uk/feb/ma30231/demo/pappus.html.

Figure 1.13: The theorem of Pappus.

[7] Pappus of Alexandria, c. 290 – c. 350.

Proof of Pappus' Theorem. We may assume that A, B, A′, R are in general position (otherwise three of them will be collinear and that will break our assumption on distinctness of the pairs of intersecting lines). By Corollary 1.8, we can choose a basis so that

A = [1, 0, 0], R = [0, 1, 0], A′ = [0, 0, 1], B = [1, 1, 1].

For a point X ∈ P(V), we write X = [x₀, x₁, x₂] for its homogeneous coordinates. We recall⁸ that X, Y, Z are collinear if and only if

det [ x₀ x₁ x₂ ; y₀ y₁ y₂ ; z₀ z₁ z₂ ] = 0.

Our configuration contains 8 collinear triples of points and so 8 vanishing determinants, each of which gives us useful information.

1. A, B, C are collinear giving

0 = det [ 1 0 0 ; 1 1 1 ; c₀ c₁ c₂ ] = c₂ − c₁,

so that c₁ = c₂ (and neither vanish else A = C). Thus, with c = c₀/c₁, C = [c, 1, 1].

2. R, B, C′ are collinear giving

0 = det [ 0 1 0 ; 1 1 1 ; c′₀ c′₁ c′₂ ] = c′₀ − c′₂,

so that c′₀ = c′₂ (and neither vanish else R = C′). Thus, with c′ = c′₁/c′₀, C′ = [1, c′, 1].

3. A′, P, B are collinear giving

0 = det [ 0 0 1 ; p₀ p₁ p₂ ; 1 1 1 ] = p₀ − p₁,

so that p₀ = p₁ (and neither vanish else A′ = P). Thus, with p = p₂/p₀, P = [1, 1, p].

4. Now B′, R, C are collinear giving

0 = det [ 0 1 0 ; c 1 1 ; b′₀ b′₁ b′₂ ]

which yields b′₀ = cb′₂. Meanwhile, A′, B′, C′ are collinear by hypothesis giving

0 = det [ 0 0 1 ; 1 c′ 1 ; b′₀ b′₁ b′₂ ]

and so b′₁ = c′b′₀. Putting all this together gives B′ = [cb′₂, cc′b′₂, b′₂] = [c, cc′, 1].

5. A similar argument using the collinearity of A, Q, C′ and A′, Q, C gives us Q = [cc′, c′, 1].

⁸ From question 1 of Exercise Sheet 2.

6. Finally, the collinearity of A, P, B′ gives

0 = det [ 1 0 0 ; 1 1 p ; c cc′ 1 ] = 1 − cc′p,

so that

1 − cc′p = 0. (1.6)

Now we can bake the cake: we test collinearity of R, P, Q (in that order) by computing the corresponding determinant:

det [ 0 1 0 ; 1 1 p ; cc′ c′ 1 ] = −(1 − cc′p) = 0,

by (1.6). Thus P, Q, R are collinear.

Remarks. 1. There is a pretty reformulation of the Pappus Theorem that may resonate later on: the points and intersecting lines form a (jumbled up) hexagon with vertices A, B′, C, A′, B, C′ in order, and then the Pappus Theorem reads: If the alternating vertices of a hexagon are collinear then the intersections of opposite sides are also collinear.

2. The configuration of the theorem is again very symmetrical: we have nine points and nine lines with three of the points on each line and three of the lines through each of the points. Moreover, there are many copies of the Pappus Theorem in each configuration: for example, we could take A, P, B′ and R, C′, B as the original collinear triples.

3. A special case of the Pappus Theorem concerns triples of collinear points in an affine plane F²: if A, B, C and A′, B′, C′ are collinear points of F² with AB′ parallel to A′B and BC′ parallel to B′C then AC′ is parallel to A′C.

Exercise. Deduce this from Theorem 1.11.

1.3 Projective lines and the cross-ratio

Much of the geometry we knew before embarking on Projective Geometry made a big deal of a notion of distance: a function that attaches a scalar to a pair of points. Distance does not make sense in projective geometry: we easily see that it is not preserved by projective transformations. However, there is a function of four (collinear) points that can act as a sort of substitute. This function is the cross-ratio to which we now turn.

Recall that we have a bijection F ∪ {∞} ≅ P(F²) which sends x ∈ F to [1, x] and ∞ to [0, 1]. The inverse is [λ₀, λ₁] ↦ λ₁/λ₀ (so that 1/0 = ∞!).

We observe that 0 = [1, 0], 1 = [1, 1] and ∞ = [0, 1] ∈ P(F²) are distinct and so are in general position. Now let A, B, C, D be four distinct points on some, possibly different, projective line⁹ P(V). Then A, B, C are in general position so that we can apply Theorem 1.9 to get a unique projective transformation τ : P(V) → P(F²) with

τA = ∞, τB = 0, τC = 1.

Then τD = x = [1, x], for some x ∈ F. This scalar is called the cross-ratio of A, B, C, D. Thus:

Definition. For distinct points A, B, C, D on a projective line, the cross-ratio of A, B, C, D (in that order), written (A, B; C, D), is the scalar x for which τD = [1, x] where τ is the unique projective transformation with

τA = [0, 1], τB = [1, 0], τC = [1, 1].

The key fact about the cross-ratio is that it is unchanged by projective transformations:

Proposition 1.12. Let P(V) and P(W) be projective lines, A, B, C, D distinct points on P(V) and σ : P(V) → P(W) a projective transformation. Then

(σA, σB; σC, σD) = (A, B; C, D). (1.7)

Proof. Let τ : P(V) → P(F²) be the unique projective transformation with τA = ∞, τB = 0, τC = 1. Then τ̂ := τ ∘ σ⁻¹ : P(W) → P(F²) is the (unique) projective transformation with τ̂(σA) = ∞, τ̂(σB) = 0 and τ̂(σC) = 1. Then, by definition,

(σA, σB; σC, σD) = τ̂(σD) = τD = (A, B; C, D).

Example. Two quadruples of collinear points in a plane, related by projection from some centre, have the same cross-ratio since the projection is a projective transformation. So in this picture, (A, B; C, D) = (A′, B′; C′, D′):

[Figure: projection from a centre O relating A, B, C, D and A′, B′, C′, D′.]

As an application, we see how to define the cross-ratio of four concurrent lines in a plane: let L₀, L₁, L₂, L₃ be four distinct lines in a projective plane that are concurrent at O. Let L be any line not through O and set

⁹ Here we see a consequence for the underlying field F: it cannot be the two element field for then |P(V)| = 3!

[Figure: the lines L₀, L₁, L₂, L₃ through O meeting the line L at A, B, C, D.]

A = L ∩ L₀, B = L ∩ L₁, C = L ∩ L₂, D = L ∩ L₃.

We can now define the cross-ratio of the lines L₀, …, L₃ by

(L₀, L₁; L₂, L₃) := (A, B; C, D).

This is independent of the choice of L by what we said above.

So far, we do not have a good way to compute the cross-ratio. We now remedy that lack with not one but two formulae!

Proposition 1.13. Let A, B, C, D be distinct points on a projective line P(V) with homogeneous coordinates A = [a₀, a₁], B = [b₀, b₁] and so on, and inhomogeneous coordinates a = a₁/a₀, b = b₁/b₀ and so on, with respect to some basis of V. Then

(A, B; C, D) = (a − c)(b − d) / ((a − d)(b − c)) (1.8a)
             = det(A, C) det(B, D) / (det(A, D) det(B, C)). (1.8b)

Here, for example, det(A, C) = det [ a₀ c₀ ; a₁ c₁ ].

Proof. We write c(A, B; C, D) for the expression in (1.8b).

(a) We show first that c is well-defined, that is, independent of the choice of homogeneous coordinates. If A = [a₀, a₁] = [λa₀, λa₁], for example, then

det [ λa₀ c₀ ; λa₁ c₁ ] = λ det [ a₀ c₀ ; a₁ c₁ ].

Moreover, the two determinants involving A appear once in the numerator and once in the denominator so that the λ's cancel.

As a consequence, since we have A = [1, a] and so on,

c(A, B; C, D) = (c − a)(d − b) / ((d − a)(c − b)) = (a − c)(b − d) / ((a − d)(b − c)).

Thus the expressions in (1.8a) and (1.8b) coincide.

(b) Let τ : P(V) → P(W) be a projective transformation. Then I claim that

c(A, B; C, D) = c(τA, τB; τC, τD). (1.9)

For this, fix a basis of W also. Then τ is induced by a linear map T : V → W which has matrix M with respect to our bases. Now, for example, τA = [â₀, â₁] where

(â₀ ; â₁) = M (a₀ ; a₁).

We therefore have

[ â₀ ĉ₀ ; â₁ ĉ₁ ] = M [ a₀ c₀ ; a₁ c₁ ]

so that

det(τA, τC) = det(M) det(A, C)

and similarly for the other determinants in c. Now all these det(M) cancel in the expression for c and we get (1.9) as claimed.

(c) We apply this when τ : P(V) → P(F²) is the projective transformation we used to define the cross-ratio: so suppose τA = [0, 1], τB = [1, 0], τC = [1, 1] so that τD = [1, x] with x = (A, B; C, D). Then we have, from part (b),

c(A, B; C, D) = c([0, 1], [1, 0]; [1, 1], [1, x]) = ((−1) · x) / ((−1) · 1) = x = (A, B; C, D).

Remark. In (1.8a), if one of A, B, C, D = ∞, we take the inhomogeneous coordinate to be 1/0 and multiply through by 0 to get the answer. It is an exercise to see that this trick gives the right answer, as computed via (1.8b).

Here is an application of the cross-ratio to aerial photography (or speed cameras). A car travels on a motorway that is equipped with visible milestones and a camera on a helicopter takes a photo of it. The mission is to locate the car using the photo (and a ruler). To do this, note that the line of the motorway and the corresponding line on the photo are related by a projective transformation with centre at the camera. Thus corresponding quadruples of points have the same cross-ratio. If we take three milestones and the car as our four points, we can use this to locate the car.

Figure 1.14: Aerial photography: (a) milestones at a, b, d and car at c; (b) photo.

For example, if, in Figure 1.14a, the car is at c which is x miles past milestone b while, in the photo, we measure distances a′b′ = 2cm, b′c′ = 0.5cm and c′d′ = 0.25cm then (the milestones being one mile apart, we may take a = 0, b = 1, d = 2 and c = 1 + x)

(a, b; c, d) = (a − c)(b − d) / ((a − d)(b − c)) = (1 + x)/(2x)

while

(a′, b′; c′, d′) = (a′ − c′)(b′ − d′) / ((a′ − d′)(b′ − c′)) = ((−2.5)(−0.75)) / ((−2.75)(−0.5)) = 15/11.

Equating these, we solve for x to get x = 11/19.

1.4 Duality

1.4.1 Dual vector spaces, annihilators and solution sets

We recall some more linear algebra:

Definition. The dual space V* of a (finite-dimensional) vector space V is the set

V* = {f : V → F : f is linear}.

Thus f ∈ V* is a function f : V → F such that f(λv₁ + v₂) = λf(v₁) + f(v₂), for all v₁, v₂ ∈ V and λ ∈ F.

Facts. 1. V* is a vector space over F with addition and scalar multiplication defined pointwise as we usually do for functions:

(f + g)(v) := f(v) + g(v)
(λf)(v) := λ(f(v)),

for f, g ∈ V*, v ∈ V and λ ∈ F.

2. If v₀, …, vₙ is a basis for V, define v₀*, …, vₙ* ∈ V* by

vᵢ*(vⱼ) = δᵢⱼ := { 1 if i = j ; 0 otherwise } (1.10)

and extending by linearity. Then v₀*, …, vₙ* is a basis of V*, the dual basis to v₀, …, vₙ. In particular, dim V* = dim V.

Remarks. (a) V* is a vector subspace of the (infinite-dimensional) vector space of all functions V → F (with pointwise addition and scalar multiplication). They are the simplest such functions. In the next chapter we shall consider the next simplest vector space of functions: the quadratic forms.

(b) We have met v₀*, …, vₙ* before: if v = Σᵢ λᵢvᵢ ∈ V, then

vⱼ*(v) = Σᵢ λᵢ vⱼ*(vᵢ) = Σᵢ λᵢ δⱼᵢ = λⱼ.

Thus vⱼ* : v ↦ λⱼ is the j-th coordinate function.

We now come to the main ingredients we will need for our application to projective geometry:

Definition. (1) For U ≤ V, the annihilator U° ≤ V* of U is given by

U° := {f ∈ V* : f(u) = 0 for all u ∈ U} = {f ∈ V* : f|_U = 0}.

(2) For W ≤ V*, the solution set sol(W) ≤ V of W is given by

sol(W) = {v ∈ V : f(v) = 0 for all f ∈ W}.

Exercise. If w₁*, …, wₖ* span W ≤ V* then

sol(W) = ∩ᵢ₌₁ᵏ ker wᵢ*.

Lemma 1.14. For U ≤ V and W ≤ V*, U ≤ sol(W) if and only if W ≤ U°.

Proof. Both inclusions mean that f(u) = 0, for all u ∈ U and f ∈ W.

We note:

1. U° ≤ V*: for f, g ∈ U°, u ∈ U and λ ∈ F, (λf + g)(u) = λf(u) + g(u) = λ0 + 0 = 0. Similarly sol(W) ≤ V (exercise!).

2. If U₁ ≤ U₂ ≤ V then U₂° ≤ U₁°: indeed, if f vanishes on U₂ then certainly it vanishes on U₁! Again, if W₁ ≤ W₂ ≤ V* then sol(W₂) ≤ sol(W₁).

3. We have:

(U₁ + U₂)° = U₁° ∩ U₂°
sol(W₁ + W₂) = sol(W₁) ∩ sol(W₂).

These are straightforward exercises.

Proposition 1.15. For U ≤ V and W ≤ V*,

dim U + dim U° = dim V (1.11a)
dim W + dim sol(W) = dim V* = dim V. (1.11b)

(We say that U and U° have complementary dimension so that W and sol(W) have complementary dimension also.)

Using this, we have:

Exercise. (U₁ ∩ U₂)° = U₁° + U₂° while sol(W₁ ∩ W₂) = sol(W₁) + sol(W₂): it is easy to prove that U₁° + U₂° ≤ (U₁ ∩ U₂)° and now use Proposition 1.15 along with Proposition 1.1 to see that both sides have the same dimension. A similar argument works for the solution sets.

We now come to the Main Point of our discussion:

Theorem 1.16. The map

U ↦ U° : {U ≤ V} → {W ≤ V*}

is a bijection with inverse W ↦ sol(W).

Proof. The statement amounts to the following two assertions:

(1) U = sol(U°), for all U ≤ V.

(2) W = (sol(W))°, for all W ≤ V*.

For the first of these, put W = U° in Lemma 1.14 to get U ≤ sol(U°) and then use Proposition 1.15 to see that both spaces have the same dimension and so coincide. A similar argument settles assertion (2).

Here is a first, small application of this:

Lemma 1.17. U₁ ≤ U₂ if and only if U₂° ≤ U₁°.

Proof. We have already noted the forward implication. For the converse, simply take sol of both sides of U₂° ≤ U₁° and use Theorem 1.16.

1.4.2 Duality in projective geometry

Now we turn all this into geometry:

Definition. For X = P(U) ≤ P(V), the dual subspace to X is X* := P(U°) ≤ P(V*).

The discussion of section 1.4.1 immediately gives us:

Theorem 1.18. The map X ↦ X* : {X ≤ P(V)} → {Y ≤ P(V*)} is a bijection called the duality isomorphism. For all X, X₁, X₂ ≤ P(V), we have:

dim X + dim X* = dim P(V) − 1.
X₁ ≤ X₂ if and only if X₂* ≤ X₁*.
(X₁X₂)* = X₁* ∩ X₂*.
(X₁ ∩ X₂)* = (X₁*)(X₂*).

Thus the duality isomorphism swops joins with intersections and changes the order of inclusions.

In particular, let H ≤ P(V) be a hyperplane so that dim H = dim P(V) − 1. Then dim H* = 0 so that H* is a point in P(V*). The duality isomorphism, restricted to hyperplanes, therefore gives us an identification

{hyperplanes in P(V)} ≅ P(V*).

This is a big deal: it is telling us that the set of hyperplanes in a projective space, which up until now was just some set, has geometric structure: it can be thought of as a projective space!

Let us see what this looks like from a practical point of view: we use dual bases v₀, …, vₙ and v₀*, …, vₙ* to compute homogeneous coordinates in both P(V) and P(V*). Then if [μ₀, …, μₙ] ∈ P(V*), we have [μ₀, …, μₙ] = H* where H is the hyperplane

H = P(ker(Σᵢ μᵢvᵢ*)) = {[λ₀, …, λₙ] : μ₀λ₀ + ⋯ + μₙλₙ = 0}. (1.12)

Thus we identify a hyperplane with the coefficients of the linear equation that defines it. Since those coefficients are defined up to scale, we get a point in a projective space.
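Formula (1.12) is easy to work with directly: a hyperplane is stored as its coefficient vector [μ₀, …, μₙ], and incidence is a single dot product. A minimal sketch (the particular coordinates are illustrative):

```python
def incident(lam, mu):
    # [λ] lies on the hyperplane H = [μ] iff μ0 λ0 + ... + μn λn = 0,
    # exactly equation (1.12); both arguments are defined only up to scale.
    return sum(l*m for l, m in zip(lam, mu)) == 0

# The plane λ0 + 2λ1 − λ3 = 0 in P(F^4), stored as the point [1, 2, 0, −1] of P(V*):
H = (1, 2, 0, -1)
assert incident((1, 0, 0, 1), H)       # 1 + 0 + 0 - 1 = 0
assert incident((0, 1, 2, 2), H)       # 0 + 2 + 0 - 2 = 0
assert not incident((1, 0, 0, 0), H)   # 1 != 0

# Rescaling either the point or the hyperplane changes nothing:
assert incident((2, 0, 0, 2), tuple(5*m for m in H))
```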
This identification of the set of hyperplanes in P(V) with P(V*) has two important consequences:

1. We can distinguish lines, planes and other projective subspaces among the subsets of the set of hyperplanes in P(V).

2. Any theorem in projective geometry (such as the theorems of Desargues and Pappus) can be applied to P(V*) and then interpreted, via the duality isomorphism, as a theorem about hyperplanes in P(V).

Example. Any hyperplane in P(V*) is of the form A*, for A a point of P(V). Let us identify the corresponding subset of the set of hyperplanes in P(V). For this, note that H* ∈ A* if and only if A ∈ H. Thus a hyperplane in P(V*) is of the form {H* : A ∈ H}, some A ∈ P(V), and so corresponds, under the duality isomorphism, to the family of hyperplanes in P(V) that contain a fixed point A.

From a practical point of view, choosing dual bases, if A = [λ₀, …, λₙ] then

A* = {[μ₀, …, μₙ] : μ₀λ₀ + ⋯ + μₙλₙ = 0}.

Note the similarity with (1.12): only the roles of the μᵢ and λᵢ have been swopped.

More generally, we have:

Proposition 1.19. The k-dimensional subspaces of P(V*) are all of the form

X* = {H* : X ≤ H},

for some X ≤ P(V) with dim X = dim P(V) − k − 1.

Example. A line in P(V*) corresponds to the set of hyperplanes in P(V) that contain a fixed X of dimension dim P(V) − 2.

1.4.3 Duality in the projective plane

This story works particularly well for projective planes, thus dim P(V) = 2, where hyperplanes are lines and there are no other projective subspaces to worry about apart from points. Thus, in this case,

P(V*) = {L* : L ≤ P(V), dim L = 1} = {lines in P(V)},

identifying [μ₀, μ₁, μ₂] ∈ P(V*) with the line whose equation is μ₀λ₀ + μ₁λ₁ + μ₂λ₂ = 0.

A line in P(V*) is of the form A*, for A ∈ P(V) a point, and so, thanks to Proposition 1.19, corresponds under the duality isomorphism to the collection of lines in P(V) through A as in Figure 1.15a.
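In a projective plane this duality is concretely computable: storing both points and lines of P(V) as coordinate triples (a line by the coefficients of its equation, as above), the join of two points and the meet of two lines are given by one and the same cross-product formula. A sketch (illustrative coordinates):

```python
def cross(u, v):
    # One formula computes both the join AB of two points and the
    # intersection L0 ∩ L1 of two lines, reflecting the duality.
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def incident(point, line):
    # [λ] lies on the line [μ] iff μ0 λ0 + μ1 λ1 + μ2 λ2 = 0.
    return sum(p*l for p, l in zip(point, line)) == 0

A, B = (1, 0, 2), (0, 1, 1)
L = cross(A, B)           # the line AB, as a point of P(V*)
assert incident(A, L) and incident(B, L)

L0, L1 = (1, 1, 1), (1, 2, 3)
X = cross(L0, L1)         # the point L0 ∩ L1, by the dual computation
assert incident(X, L0) and incident(X, L1)
```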
It follows that if a, b ∈ P(V*) are two points corresponding to lines L₀, L₁ ≤ P(V) (thus a = L₀* and b = L₁*) then ab corresponds to the set of lines through L₀ ∩ L₁, see Figure 1.15b: indeed ab corresponds to the set of lines through some point and that set must contain both L₀ and L₁. However, L₀ ∩ L₁ is the only point on both lines! Alternatively, we can just compute using the rules listed in Theorem 1.18:

ab = (L₀*)(L₁*) = (L₀ ∩ L₁)*.

Dually, two lines A* and B* in P(V*) intersect in the point corresponding to the line AB ≤ P(V) as in Figure 1.15c:

A* ∩ B* = (AB)*.

In this setting, the content of Theorem 1.18 is that the duality isomorphism:

swops points and lines;

Figure 1.15: Images under the duality isomorphism: (a) a line in P(V*); (b) the line through two points of P(V*); (c) two lines intersect at a point of P(V*).

swops joins and intersections;

reverses the order of inclusions.

In particular, three points in P(V*) are collinear if and only if the corresponding lines in P(V) are concurrent.

Now any theorem about points and lines in a projective plane can be applied to P(V*) and then, via the duality isomorphism, viewed as a result about lines and points in P(V). This gives us Poncelet's¹⁰ Principle of Duality:

Any theorem about points and lines in a projective plane remains true when the roles of point and line; join and intersection; concurrency and collinearity are exchanged.

Let us see the Principle of Duality in action:

¹⁰ Jean-Victor Poncelet, 1788–1867.

Statement: Theorem 1.5(1): There is a unique projective line through any two distinct points. That is, any two points are collinear.
Dual statement: Theorem 1.5(2): Any two lines in a plane are concurrent.

Statement: Desargues' Theorem: Let P, A, B, C, A′, B′, C′ be distinct points such that P, A, A′; P, B, B′; P, C, C′ are collinear triples. Then the intersections of corresponding sides AB ∩ A′B′, AC ∩ A′C′, BC ∩ B′C′ are collinear.
Dual statement: Converse of Desargues: Let p, a, b, c, a′, b′, c′ be distinct lines in a plane such that p, a, a′; p, b, b′; p, c, c′ are concurrent triples (view a, b, c and a′, b′, c′ as sides of two triangles with intersections a ∩ a′ etc. all lying on p). Then the lines through corresponding vertices of the triangles (a ∩ b)(a′ ∩ b′), (a ∩ c)(a′ ∩ c′), (b ∩ c)(b′ ∩ c′) are concurrent.

Statement: Pappus' Theorem: Let A, B, C and A′, B′, C′ be collinear triples in a plane. Then the points AB′ ∩ A′B, AC′ ∩ A′C, BC′ ∩ B′C are collinear.
Dual statement: Brianchon's¹¹ Theorem: Let a, b, c and a′, b′, c′ be concurrent triples of lines in a plane. Then the lines (a ∩ b′)(a′ ∩ b), (a ∩ c′)(a′ ∩ c), (b ∩ c′)(b′ ∩ c) are concurrent.

In each case, the dual statement is a direct consequence of the original statement by the Principle of Duality.

¹¹ Charles-Julien Brianchon, 1783–1864. In fact, this is just a special case of his theorem.

Chapter 2

Quadrics

Our story so far has been concerned with points, lines, planes, hyperplanes and other projective subspaces of a projective space. The common feature of these subsets is that they are cut out by linear equations, that is to say: they are zero sets of linear functions. We now turn to the next simplest class of subsets of projective space and contemplate the zero sets of quadratic functions.

Assumption. In this chapter, we want to divide by 2 := 1 + 1 and so assume that, in our field F, 1 + 1 ≠ 0, that is, that the characteristic of the field is not 2.

2.1 Symmetric bilinear forms and quadratic forms

We will take a slightly indirect route to quadratic functions that will pay off in the long run. We begin by recalling from Algebra 2B:

Definition. A bilinear form B on a vector space V over a field F is a map B : V × V → F such that

B(λv₁ + v₂, v) = λB(v₁, v) + B(v₂, v)
B(v, λv₁ + v₂) = λB(v, v₁) + B(v, v₂),

for all v, v₁, v₂ ∈ V, λ ∈ F. (Thus B is linear in each slot separately.)

B is symmetric if B(v, w) = B(w, v), for all v, w ∈ V.

B is non-degenerate if, whenever B(v, w) = 0 for all w ∈ V, we have v = 0. Otherwise, we say that B is degenerate.

Example. An inner product on a real vector space is a non-degenerate, symmetric bilinear form. However, we shall mostly be interested in symmetric bilinear forms that are not positive-definite.

We can get a practical handle on bilinear forms by introducing a basis v₀, …, vₙ of V: if B is a bilinear form on V, set βᵢⱼ := B(vᵢ, vⱼ) to get an (n + 1) × (n + 1) matrix (βᵢⱼ). Then B is symmetric if and only if (βᵢⱼ) is a symmetric matrix: βᵢⱼ = βⱼᵢ (exercise!). Moreover, we can compute all values of B from the βᵢⱼ: expanding out using the bilinearity gives

B(Σᵢ λᵢvᵢ, Σⱼ μⱼvⱼ) = Σᵢⱼ λᵢμⱼβᵢⱼ = (λ₀, …, λₙ) (βᵢⱼ) (μ₀, …, μₙ)ᵀ.

This formula defines a bilinear form on V for any (n + 1) × (n + 1) matrix (βᵢⱼ).
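The matrix description is easy to automate. The sketch below builds (βᵢⱼ) for two small illustrative symmetric bilinear forms on R³ and tests non-degeneracy using the determinant criterion (B is non-degenerate exactly when det(βᵢⱼ) ≠ 0):

```python
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def matrix_of(B, n=3):
    # β_ij := B(v_i, v_j) for the standard basis vectors v_i.
    e = [tuple(1 if k == i else 0 for k in range(n)) for i in range(n)]
    return tuple(tuple(B(e[i], e[j]) for j in range(n)) for i in range(n))

# Two illustrative symmetric bilinear forms on R^3:
B1 = lambda x, y: x[0]*y[1] + x[1]*y[0] - x[2]*y[2]
B2 = lambda x, y: x[0]*y[1] + x[1]*y[0]

beta1 = matrix_of(B1)
assert beta1 == ((0, 1, 0), (1, 0, 0), (0, 0, -1))
assert det3(beta1) != 0   # non-degenerate

beta2 = matrix_of(B2)
assert det3(beta2) == 0   # degenerate: v = (0,0,1) pairs to 0 with everything
assert all(B2((0, 0, 1), w) == 0 for w in [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
```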

We get a useful criterion for degeneracy of B using the matrix (βᵢⱼ): if v = Σᵢ λᵢvᵢ then B(v, w) = 0, for all w ∈ V, if and only if (λ₀, …, λₙ)(βᵢⱼ) = 0. So such a non-zero v exists if and only if det(βᵢⱼ) = 0. We conclude: B is non-degenerate if and only if det(βᵢⱼ) ≠ 0.

Examples. Let V = R³.

1. Let B(x, y) = x₀y₁ + x₁y₀ − x₂y₂. Then B is a symmetric bilinear form with matrix

(βᵢⱼ) = [ 0 1 0 ; 1 0 0 ; 0 0 −1 ]

which is non-singular so that B is non-degenerate.

2. Let B(x, y) = x₀y₁ + x₁y₀. Again B is a symmetric bilinear form with matrix

(βᵢⱼ) = [ 0 1 0 ; 1 0 0 ; 0 0 0 ].

However, this matrix has vanishing determinant so that B is degenerate.

The meaning of degeneracy can be found in the following construction which gives a different viewpoint on bilinear forms: given a bilinear form B, define β : V → V* by

β(v)(w) := B(v, w).

We note:

β(v) ∈ V*: this is linearity of B in the second slot.

β : V → V* is a linear map: this is linearity of B in the first slot.

B(v, w) = 0, for all w ∈ V, if and only if β(v) = 0 if and only if v ∈ ker β.

Thus B is non-degenerate if and only if β injects or, since dim V = dim V*, β is a linear isomorphism.

2.1.1 Polars

Definition. Let B be a non-degenerate, symmetric bilinear form on a vector space V and let U ≤ V. The polar U⊥ of U with respect to B is given by

U⊥ := {v ∈ V : B(v, u) = 0, for all u ∈ U} ≤ V. (2.1)

Said another way,

U⊥ = {v ∈ V : β(v)|_U = 0} = β⁻¹(U°).

Thus we have

Proposition 2.1. Let B be a non-degenerate, symmetric bilinear form on V. Then

dim U + dim U⊥ = dim V,
U₁ ≤ U₂ if and only if U₂⊥ ≤ U₁⊥,

(U₁ + U₂)⊥ = U₁⊥ ∩ U₂⊥,
(U₁ ∩ U₂)⊥ = U₁⊥ + U₂⊥,
(U⊥)⊥ = U,

for all U, U₁, U₂ ≤ V.

Proof. All of this comes straight from section 1.4.1 and (2.1) except the last item. For this, use the symmetry of B to get U ≤ (U⊥)⊥ and then note that both subspaces have the same dimension.

We shall return to polars soon.

2.1.2 Quadratic forms

Definition. Let V be a vector space over a field F. A quadratic form on V is a function Q : V → F of the form Q(v) = B(v, v), for some symmetric bilinear form B on V.

We note:

1. Q(λv) = λ²Q(v), for all v ∈ V and λ ∈ F.

2. We can recover B from Q:

B(v, w) = ½(Q(v + w) − Q(v) − Q(w)),

for all v, w ∈ V. We say that B is the polarisation of Q.

3. Fix a basis v₀, …, vₙ of V with respect to which B has matrix (βᵢⱼ). Then

Q(Σᵢ λᵢvᵢ) = Σᵢⱼ βᵢⱼλᵢλⱼ = Σᵢ βᵢᵢλᵢ² + 2 Σᵢ<ⱼ βᵢⱼλᵢλⱼ.

Otherwise said, we have an equality of functions

Q = Σᵢ βᵢᵢλᵢ² + 2 Σᵢ<ⱼ βᵢⱼλᵢλⱼ,

where we view each λⱼ as the j-th coordinate function.

We can reverse this process: if Q = Σᵢ qᵢᵢλᵢ² + Σᵢ<ⱼ qᵢⱼλᵢλⱼ then the matrix of B has diagonal entries βᵢᵢ = qᵢᵢ and off-diagonal entries βᵢⱼ = βⱼᵢ = ½qᵢⱼ, for i < j, so that

B = Σᵢ qᵢᵢλᵢμᵢ + ½ Σᵢ<ⱼ qᵢⱼ(λᵢμⱼ + λⱼμᵢ).

4. Quadratic functions form a vector space (indeed, a subspace of the infinite-dimensional vector space of all functions on V) under the usual addition and scalar multiplication of functions:

(Q₁ + Q₂)(v) = Q₁(v) + Q₂(v)
(λQ)(v) = λ(Q(v)).

This vector space is denoted by S²V* and choosing a basis establishes a linear isomorphism Q ↦ (qᵢⱼ) from S²V* to the space of (n + 1) × (n + 1) symmetric matrices. Such matrices have n + 1 diagonal entries and n(n + 1)/2 independent off-diagonal entries so that

dim S²V* = (n + 1) + n(n + 1)/2 = (n + 1)(n + 2)/2.

2.2 Quadrics

Definition. A quadric is a subset S ⊆ P(V) of a projective space of the form

S = {[v] ∈ P(V) : Q(v) = 0},

where Q is a non-zero quadratic form. We define the dimension of S by dim S := dim P(V) − 1.

Observe:

1. The equation defining S is well-defined: we know that Q(λv) = λ²Q(v) so that Q(v) = 0 if and only if Q(λv) = 0.

2. Q and λQ, for λ ∈ F \ {0}, define the same quadric S. Thus S is really determined by [Q] ∈ P(S²V*). We shall return to this point later.

Definition. Let S ⊆ P(V) be a quadric defined by a quadratic form Q with polarisation B so that S = {[v] ∈ P(V) : B(v, v) = 0}. Let β : V → V* be the associated map with β(v)(w) = B(v, w).

A = [v] ∈ S is a non-singular point of S if β(v) ≠ 0 and a singular point otherwise. The set of singular points is called the singular set of S. S is non-singular if all its points are non-singular and singular otherwise.

If A ∈ S is non-singular, the tangent hyperplane to S at A is the hyperplane A⊥ given by

A⊥ := P([v]⊥) = P(ker β(v)) = {[w] ∈ P(V) : B(v, w) = 0}.

Since Q(v) = B(v, v) = 0, we see that A ∈ A⊥.

Proposition 2.2. Let S ⊆ P(V) be a quadric defined by a quadratic form Q with polarisation B. Then S is non-singular if and only if B is non-degenerate.

Proof. If B is non-degenerate, β is an isomorphism so that β(v) ≠ 0, for all [v] ∈ S. Thus S is non-singular. Conversely, if B is degenerate, then there is a non-zero v ∈ ker β. Now

Q(v) = B(v, v) = β(v)(v) = 0,

so that [v] ∈ S is a singular point of S.

In particular, the proof shows that the singular set of S is P(ker β) and so is a projective subspace of P(V).

Let us see how to compute with these concepts in the presence of a basis. Suppose our quadric is defined by Q = Σᵢ≤ⱼ qᵢⱼλᵢλⱼ so that the polarisation is B = Σᵢ≤ⱼ ½qᵢⱼ(λᵢμⱼ + λⱼμᵢ). Then the tangent hyperplane at [μ₀, …, μₙ] is given by

{[λ₀, …, λₙ] : Σᵢ≤ⱼ qᵢⱼ(λᵢμⱼ + λⱼμᵢ) = 0}

unless the left hand side of that equation is identically zero in which case [μ₀, …, μₙ] is a singular point.

Example. Let S ⊆ P(R⁴) be the quadric λ₀λ₁ − λ₂λ₃ = 0. Then [1, 0, 1, 0] ∈ S: indeed 1 · 0 − 1 · 0 = 0. Let us find the tangent hyperplane at [1, 0, 1, 0]: the polarisation is given by

B = ½(λ₀μ₁ + λ₁μ₀) − ½(λ₂μ₃ + λ₃μ₂)

and we substitute in μ = [1, 0, 1, 0] and set the result to zero to get that the tangent hyperplane is given by λ₁ − λ₃ = 0.

Example. Let S ⊆ P(R³) be defined by Q = 2(λ₀² − λ₁² + λ₀λ₂ − λ₁λ₂). Is S non-singular? For this, we need the matrix of the polarisation which is

(βᵢⱼ) = [ 2 0 1 ; 0 −2 −1 ; 1 −1 0 ]

which is easily seen to have vanishing determinant so that S is singular. Let us try and find the singular set: these are solutions of the linear equations

2λ₀ + λ₂ = 0
−2λ₁ − λ₂ = 0
λ₀ − λ₁ = 0.

There is only one solution up to scale and so only one singular point: [1, 1, −2].

Exercise. Try to compute the tangent hyperplane to S at [1, 1, −2]: you will get the unhelpful equation 0 = 0.

In fact, we can readily check¹ that Q = 2(λ₀ − λ₁)(λ₀ + λ₁ + λ₂). Thus if [λ₀, λ₁, λ₂] ∈ S either λ₀ − λ₁ = 0 or λ₀ + λ₁ + λ₂ = 0. These last equations are the equations of two lines and we conclude that S is the union of these lines (see Figure 2.1). Moreover, the singular point [1, 1, −2] is the intersection of those lines.

More generally, any pair of hyperplanes in a projective space of any dimension comprises a quadric:

Exercise. Let H₁, H₂ ⊆ P(V) be hyperplanes in a projective space P(V). Then H₁ ∪ H₂ is a quadric with singular set H₁ ∩ H₂.

¹ Do it!
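The singular-conic example above can be verified mechanically; a sketch (all values as in the example):

```python
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# Polarisation matrix of Q = 2(λ0² − λ1² + λ0λ2 − λ1λ2):
beta = ((2, 0, 1), (0, -2, -1), (1, -1, 0))
assert det3(beta) == 0   # so S is singular

# The singular set is P(ker β); check that [1, 1, −2] lies in the kernel:
v = (1, 1, -2)
assert all(sum(row[j]*v[j] for j in range(3)) == 0 for row in beta)

# Q factorises as a line-pair: Q = 2(λ0 − λ1)(λ0 + λ1 + λ2).
Q = lambda l: 2*(l[0]**2 - l[1]**2 + l[0]*l[2] - l[1]*l[2])
Qfac = lambda l: 2*(l[0] - l[1])*(l[0] + l[1] + l[2])
assert all(Q(l) == Qfac(l) for l in [(1, 2, 3), (0, 1, -1), (5, -4, 2), v])
```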

Figure 2.1: The quadric λ₀² − λ₁² + λ₀λ₂ − λ₁λ₂ = 0: the lines λ₀ − λ₁ = 0 and λ₀ + λ₁ + λ₂ = 0 meeting at [1, 1, −2].

2.2.1 Quadrics on a line

A general approach to understanding quadrics would be to induct on dimension. The base case is then to understand quadrics on a line. The whole story is contained in the following proposition and pictured in Figure 2.2.

Proposition 2.3. Let S be a quadric on a projective line P(V). Then one of the following holds:

1. |S| = 2 and then S is non-singular. We say that S is a point-pair.
2. |S| = 1 and then S is singular. We say that S is a double point.
3. S = ∅ and this case is excluded if F is C or any other algebraically closed field.

Proof. Let S be defined by the quadratic form Q and let B be the polarisation of Q. Choose v₁ ∈ V with Q(v₁) ≠ 0 and then v₀ ∈ V so that v₀, v₁ are a basis of V. Now any [v] ≠ [v₁], and so any [v] ∈ S, is of the form [v₀ + tv₁], for t ∈ F (t is the inhomogeneous coordinate corresponding to this basis). However, Q(v₀ + tv₁) = 0 if and only if

t²Q(v₁) + 2tB(v₀, v₁) + Q(v₀) = 0, (2.2)

which is a quadratic equation in t and therefore has:

at most 2 solutions;

exactly 1 solution if and only if the discriminant b² − 4ac vanishes. In this case, this reads

4B(v₀, v₁)² − 4B(v₁, v₁)B(v₀, v₀) = 0,

or −4 det(βᵢⱼ) = 0, and so holds exactly when B is degenerate.

Figure 2.2: Quadrics on a line: (a) a non-singular quadric; (b) a singular quadric.

We remark that the last exercise, applied to a projective line (so that hyperplanes are points), shows that any pair of points on a line is a quadric.
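The dichotomy in the proof can be checked directly: for a quadratic form on F², the discriminant of (2.2) is −4 det(βᵢⱼ), so a repeated root occurs exactly when the polarisation is degenerate. A sketch with exact Fraction arithmetic (the example coefficients are illustrative):

```python
from fractions import Fraction

def det2(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

def polarisation(q00, q01, q11):
    # Matrix of the polarisation of Q = q00 λ0² + q01 λ0 λ1 + q11 λ1².
    h = Fraction(q01, 2)
    return ((Fraction(q00), h), (h, Fraction(q11)))

for (q00, q01, q11) in [(0, 1, 0), (0, 0, 1), (1, 3, 1), (1, 2, 1)]:
    beta = polarisation(q00, q01, q11)
    # With v0 = (1,0), v1 = (0,1): Q(v1) = β11, B(v0, v1) = β01, Q(v0) = β00,
    # so the discriminant b² − 4ac of (2.2) is:
    disc = (2*beta[0][1])**2 - 4*beta[1][1]*beta[0][0]
    assert disc == -4*det2(beta)

# Q = λ0 λ1 is the point-pair {[1,0], [0,1]}: non-degenerate polarisation.
assert det2(polarisation(0, 1, 0)) != 0
# Q = λ1² is the double point [1, 0]: degenerate polarisation.
assert det2(polarisation(0, 0, 1)) == 0
```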

2.2.2 Quadrics and hyperplanes

The induction step in our analysis of quadrics is that the intersection of a non-singular quadric with a hyperplane is also a quadric in the hyperplane:

Lemma 2.4. Let S ⊆ P(V) be a non-singular quadric with dim S ≥ 1 (so that dim P(V) ≥ 2) and let H ⊆ P(V) be a hyperplane. Then S ∩ H is a quadric in H.

Proof. Let S be defined by a quadratic form Q with non-degenerate polarisation B and let H = P(U), for U ≤ V a linear hyperplane. Then Q|_U is a quadratic form on U with polarisation B|_{U×U} while

S ∩ H = {[u] ∈ P(U) : Q(u) = 0}.

Thus the only issue here is to see that Q|_U is not identically zero. However, in that case, B|_{U×U} = 0 also. We then choose a basis u₀, …, u_{n−1} of U and extend by some uₙ to get a basis of V with respect to which B has matrix

(βᵢⱼ) = [ 0 ∗ ; ∗ ∗ ],

with an n × n zero matrix in the top left. Since n ≥ 2, det(βᵢⱼ) = 0 (the first n rows are linearly dependent) contradicting the non-degeneracy of B.

We next ask when H ∩ S is non-singular.

Proposition 2.5. Let S ⊆ P(V) be a non-singular quadric with dim S ≥ 1 and let H ⊆ P(V) be a hyperplane. Then A ∈ H ∩ S is a singular point of H ∩ S if and only if H = A⊥, the tangent hyperplane at A to S.

Proof. As before, let H = P(U) and suppose that A = [v]. Then H = A⊥ if and only if B(v, u) = 0, for all u ∈ U, which is the same as saying that A is a singular point of H ∩ S.

Observe that a given hyperplane H is of the form A⊥ for at most one A: U = [v]⊥ implies that U⊥ = ([v]⊥)⊥ = [v].

We therefore conclude:

Corollary 2.6. Let S ⊆ P(V) be a non-singular quadric with dim S ≥ 1 and let H ⊆ P(V) be a hyperplane. Then H ∩ S is a singular quadric in H if and only if H is a tangent hyperplane to S and, in this case, H ∩ S has a unique singular point.

The story is summarised in Figure 2.3.

Figure 2.3: Hyperplanes and quadrics: S ∩ H₀ is singular and S ∩ H₁ is non-singular.

2.3 Conics

Definition. A conic is a 1-dimensional quadric, that is, a quadric in a projective plane.

Examples. We look at conics in the real projective plane P(R³).

1. Let C ⊆ P(R³) be the conic given by λ₀² − λ₁² − λ₂² = 0. Then dividing by λ₀² and setting xᵢ = λᵢ/λ₀, i = 1, 2, we see that

C ∩ R² = {(x₁, x₂) : [1, x₁, x₂] ∈ C} = {(x₁, x₂) : x₁² + x₂² = 1}

is a circle. Moreover, the intersection of C with the line P(U₀) at infinity is empty:

C ∩ P(U₀) = {[0, λ₁, λ₂] : λ₁² + λ₂² = 0} = ∅.

2. The hyperbola given by x₁x₂ = 1 is the affine piece C ∩ R² of the conic C given by (λ₁/λ₀)(λ₂/λ₀) = 1 when λ₀ ≠ 0, or, more generally, by λ₀² − λ₁λ₂ = 0.

Exercise. What is C ∩ P(U₀)?

3. Similarly, the parabola given by x₁² − x₂ = 0 is the affine part of the conic C defined by λ₁² − λ₀λ₂ = 0.

Exercise. What is C ∩ P(U₀)?

Figure 2.4: Three non-singular conics (affine parts): (a) ellipse; (b) hyperbola; (c) parabola.

It is easy to check that these three conics are non-singular. What is perhaps more surprising, although equally straightforward to see, is that any of these conics can be changed into any other by a simple change of basis.

Now for some singular conics, pictured in Figure 2.5:

4. The conic given by λ₁λ₂ = 0 is a line-pair. Its affine part is the union of the coordinate axes in R² and has a single singular point at (0, 0) ∈ R².

5. The conic given by λ₁² = 0 is just the line λ₁ = 0 but every point on it is singular.

6. A point is also a conic! For example, the conic given by λ₁² + λ₂² = 0 is simply {[1, 0, 0]}. Again this conic is singular.

7. Finally, over R, the empty set is a conic: it is, for example, the conic defined by λ₀² + λ₁² + λ₂² = 0.

In fact these examples exhaust the possibilities. Recall from Algebra 2A:

Diagonalisation Theorem. Let B : V × V → R be a symmetric bilinear form on a real vector space. Then there is a basis v₀, …, vₙ such that

B(vᵢ, vⱼ) = 0 for i ≠ j,
B(vᵢ, vᵢ) ∈ {±1, 0}.

Figure 2.5: Singular conics (affine parts) with the singular points in red: (a) line-pair; (b) double line; (c) point.

For the case n = 2, this yields the following possibilities for quadratic forms:

±(λ₀² + λ₁² + λ₂²) which, over R, determines an empty conic. However, over C, we get a non-empty, non-singular conic.

±(λ₀² − λ₁² − λ₂²) which gives us a non-singular conic.

±(λ₀² − λ₁²) = ±(λ₀ − λ₁)(λ₀ + λ₁) which gives a line-pair.

±(λ₀² + λ₁²) which gives a point over R but factorises over C as ±(λ₀ − iλ₁)(λ₀ + iλ₁) to give a line-pair.

±λ₀² which gives a double line.

In particular, for any non-empty, non-singular conic, we can find a basis with respect to which the conic is given by λ₀² − λ₁² − λ₂² = 0. So any non-empty, non-singular conic is isomorphic to a circle. Given two such conics, there is a linear isomorphism sending one basis to the other and we conclude:

Proposition 2.7. If C₁, C₂ are non-empty, non-singular conics in P(R³), there is a projective transformation τ : P(R³) → P(R³) with τC₁ = C₂.

In the light of this, one should ask why ellipses, hyperbolae and parabolae look different if they are all isomorphic. The answer is that the affine picture depends on how the conic hits the line at infinity. See Figure 2.6.

Figure 2.6: Non-singular conics and the line at infinity: (a) ellipse; (b) hyperbola; (c) parabola.

Remark. When F = C, the situation is much simpler: any conic is either non-singular, a line-pair or a double line.

2.3.1 Lines and conics

We specialise our discussion of quadrics and hyperplanes in section 2.2.2 to conics and lines in a plane:

Proposition 2.8. Let C be a non-empty, non-singular conic in a projective plane P(V) and L ⊆ P(V) a line. Then exactly one of the following holds:

1. |C ∩ L| = 2;

2. |C ∩ L| = 1 and L is the tangent line A^⊥ where {A} = C ∩ L;

3. C ∩ L = ∅ (this is not possible if F is C or another algebraically closed field).

Proof. By Proposition 2.4, C ∩ L is a quadric on the line L so that Proposition 2.3 applies to tell us first that |C ∩ L| ≤ 2 and then that |C ∩ L| = 1 if and only if C ∩ L is singular. Moreover, Corollary 2.5 says that the latter case occurs exactly when L = A^⊥, for some A ∈ C.

Here is an application of this: a non-empty, non-singular conic is a copy of a projective line.

Theorem 2.9. Let C ⊆ P(V) be a non-empty, non-singular conic. Then there is a bijection between C and a projective line.

Proof. Fix A ∈ C. Then the set of lines through A is, via the duality isomorphism, bijective with a projective line. So it suffices to find a bijection α between C and the set of lines through A. For this, given X ∈ C, we set

    α(X) = AX if X ≠ A;    α(X) = A^⊥ if X = A.

Then α is bijective because it has an inverse: for a line L through A, if L ≠ A^⊥, Proposition 2.8 says that C ∩ L \ {A} contains exactly one point, so that we set

    α⁻¹(L) = the point of C ∩ L \ {A} if L ≠ A^⊥;    α⁻¹(L) = A if L = A^⊥.

See Figure 2.7a for the picture.

For a more practical version of the same result, we replace the set of lines through A by a line L ⊆ P(V) not through A:

Corollary 2.10. Let C ⊆ P(V) be a non-empty, non-singular conic, A ∈ C and L ⊆ P(V) a line with A ∉ L. Then there is a bijection ˆα : C → L with X, ˆα(X), A collinear, for each X ∈ C.

Proof. We simply set ˆα(X) = α(X) ∩ L.

Remark. We recognise that ˆα is stereoprojection from A onto L. See Figure 2.7b.

Figure 2.7: Bijection between a conic and a line: (a) Abstract version, (b) Practical version.

Example. Let us take V = F³ and contemplate the conic C given by λ₀² + λ₁² − λ₂² = 0. Note that C makes sense for any field F. We let A = [1, 0, 1] ∈ C and L be the line λ₀ = 0, which does not contain A. We invert ˆα to get a bijection L → C which we now compute explicitly.

For this, let Y = [0, u, v] ∈ L. Then ˆα⁻¹(Y) is the point of YA ∩ C possibly distinct from A. Now

    YA = {[λ(0, u, v) + μ(1, 0, 1)] : [λ, μ] ∈ P(F²)} = {[μ, λu, λv + μ] : [λ, μ] ∈ P(F²)}

and this intersects C when

    μ² + λ²u² − (λv + μ)² = 0,

that is,

    λ²(u² − v²) − 2λμv = 0.

There are, up to scale, two solutions of this quadratic equation: one is λ = 0, which gives A, so we want the other solution, where λ(u² − v²) = 2μv so that [λ, μ] = [2v, u² − v²]. Substituting this back into our formula for points of YA, we see that

    ˆα⁻¹(Y) = [u² − v², 2uv, 2v² + u² − v²] = [u² − v², 2uv, u² + v²],

or, using the affine coordinate t = v/u ∈ F,

    ˆα⁻¹(Y) = [1 − t², 2t, 1 + t²] for t ∈ F;    ˆα⁻¹(Y) = [−1, 0, 1] when u = 0.

This gives us a formula for the coordinates of any point of our conic. Let us put this to use and solve:

Pythagoras Problem: Find all integer solutions of x² + y² = z².

For this, we take F = Q and note that if (x, y, z) solves the Pythagoras Problem then [x, y, z] ∈ C. Conversely, if [q₀, q₁, q₂] ∈ C, we get a solution of the Pythagoras Problem by clearing denominators. However, we have just seen that any point of the conic C is either [−1, 0, 1] (which gives a rather boring solution of the problem) or of the form [1 − t², 2t, 1 + t²], for some t = p/q ∈ Q. Without loss of generality, we may assume p, q ∈ N and multiply by q² to get a solution [q² − p², 2pq, p² + q²] of the Pythagoras Problem. The punchline is that all solutions are, up to scale, of the form

    x = q² − p²,    y = 2pq,    z = p² + q²,

for p, q ∈ N. Here are the first few solutions:

    (p, q)    (x, y, z)
    (1, 2)    (3, 4, 5)
    (1, 3)    (8, 6, 10)
    (2, 3)    (5, 12, 13)
    (2, 5)    (21, 20, 29)

2.4 Polars in projective geometry

We apply the linear algebra constructions of earlier sections to projective geometry.

Definition. Let S ⊆ P(V) be a non-singular quadric with corresponding symmetric bilinear form B. For X = P(U) ⊆ P(V), the polar subspace (with respect to S) of X is X^⊥ := P(U^⊥), where U^⊥ ≤ V is the polar of U with respect to B.
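In coordinates this definition is very concrete: if B has symmetric matrix M, the polar hyperplane of a point A = [a] has equation aᵀMv = 0 in v. Here is a small sketch (the conic and the test points are my own illustrative choices) for the circle conic λ₀² − λ₁² − λ₂² = 0:

```python
# Symmetric matrix of the circle conic l0^2 - l1^2 - l2^2 = 0 in P(R^3).
M = [[1, 0, 0],
     [0, -1, 0],
     [0, 0, -1]]

def polar_line(M, a):
    """Coefficients c of the polar line {[v] : c0*v0 + c1*v1 + c2*v2 = 0} of
    the point A = [a]: c_j = sum_i a_i * M_ij, i.e. c = a^T M."""
    n = len(M)
    return tuple(sum(a[i] * M[i][j] for i in range(n)) for j in range(n))

def on_line(c, v):
    return sum(ci * vi for ci, vi in zip(c, v)) == 0

A = (1, 0, 1)      # a point of the conic: 1 - 0 - 1 = 0
B_pt = (3, 0, 1)   # a point off the conic: 9 - 0 - 1 != 0

print(polar_line(M, A))                     # -> (1, 0, -1), the tangent line l0 = l2
print(on_line(polar_line(M, A), A))         # -> True
print(on_line(polar_line(M, B_pt), B_pt))   # -> False
```

Note that a point lies on its own polar exactly when aᵀMa = Q(a) = 0, that is, exactly when the point lies on the conic, consistent with the polar of a point of the conic being its tangent line.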
From Proposition 2.1, we immediately deduce:

dim X + dim X^⊥ = dim P(V) − 1 = dim S;

X ⊆ Y if and only if Y^⊥ ⊆ X^⊥;

(X₁ ∩ X₂)^⊥ = X₁^⊥X₂^⊥;

(X₁X₂)^⊥ = X₁^⊥ ∩ X₂^⊥;

(X^⊥)^⊥ = X;

for all X, Y, X₁, X₂ ⊆ P(V). In particular, if A ∈ P(V) is a point, A^⊥ is a hyperplane.

Example. If A ∈ S, we have already met A^⊥: it is the tangent hyperplane to S at A.

Lemma 2.11. Let X, Y ⊆ P(V). Then X ⊆ Y^⊥ if and only if Y ⊆ X^⊥.

Proof. From the above, we have X ⊆ Y^⊥ if and only if (Y^⊥)^⊥ ⊆ X^⊥, that is, Y ⊆ X^⊥.

We use this to construct polar lines to points with respect to a non-singular conic C in a complex projective plane P(V): a point A ∈ P(V) has a polar line A^⊥ and there are two cases:

1. A ∈ C, and then A^⊥ is just the tangent line to C at A.

2. A ∉ C. In that case, A^⊥ is not tangent to C for, if it were, we would have A^⊥ = X^⊥, for some X ∈ C, giving A = X ∈ C, a contradiction. Therefore, since F = C, Proposition 2.8 says that A^⊥ ∩ C consists of exactly two points B₁ and B₂, say. Now B₁, B₂ ∈ A^⊥ so that, by Lemma 2.11, A ∈ B₁^⊥ ∩ B₂^⊥. However, each Bᵢ ∈ C so that Bᵢ^⊥ is the tangent line to C at Bᵢ. We have therefore proved:

Theorem 2.12. Let C be a non-singular conic in a complex projective plane. Then

(a) if A ∈ C then A^⊥ is the tangent line to C at A;

(b) if A ∉ C, the polar line A^⊥ meets C at two points whose tangents intersect at A; see Figure 2.8.

Figure 2.8: Construction of the polar line A^⊥ of A.

Exercise. What if F = R? The same argument works so long as A^⊥ ∩ C ≠ ∅ (that is, A is outside C). Try and construct A^⊥ when A^⊥ ∩ C is empty.

2.4.1 Projective subspaces of quadrics

We know that quadrics can contain points (zero-dimensional subspaces) and that non-singular quadrics cannot contain hyperplanes (this is one way of saying Lemma 2.4). What about other projective subspaces? For this, a useful criterion is given by:

Lemma 2.13. Let S ⊆ P(V) be a non-singular quadric and X ⊆ P(V) a projective subspace. Then X ⊆ S if and only if X ⊆ X^⊥.

Proof. Let S be defined by a quadratic form Q with polarisation B and suppose that X = P(U). Then X ⊆ S if and only if Q|_U = 0, or equivalently, after polarising, B|_{U×U} = 0. But this last means precisely that B(u₁, u₂) = 0, for all u₁, u₂ ∈ U, that is, that U ⊆ U^⊥ or, equivalently, X ⊆ X^⊥.

Corollary 2.14. Let S ⊆ P(V) be a non-singular quadric and X ⊆ P(V) a projective subspace. If X ⊆ S then dim X ≤ (1/2) dim S.

Proof. By Lemma 2.13, if X ⊆ S then X ⊆ X^⊥ so that dim X ≤ dim X^⊥. However, dim X^⊥ = dim S − dim X so rearranging a little gives 2 dim X ≤ dim S.

Examples.

1. There are no hyperplanes in a non-singular quadric S unless dim S = 0.

2. In particular, there are no lines in a non-singular conic.

Exercise. When F = C, show that a non-singular quadric S does contain a subspace X of the maximum dimension dim S/2.

Example. Let S ⊆ P(V) be a 2-dimensional quadric in a complex projective space. Then, for each A ∈ S, there are two lines L₁, L₂ ⊆ S with L₁ ∩ L₂ = {A}. To see this, note that A^⊥ ∩ S is (by Proposition 2.6) a singular conic in A^⊥ with a unique singular point. Any singular conic (F = C here!) is either a double line or a line-pair and the first possibility cannot happen because all points of a double line are singular. Thus A^⊥ ∩ S is a line-pair L₁ ∪ L₂ with L₁ ∩ L₂ = {A}.

For real 2-dimensional quadrics, the same argument gives that either A^⊥ ∩ S is a line-pair or A^⊥ ∩ S is a point. Both possibilities can occur: in Figure 2.9a, the tangent plane intersects the quadric in just one point while, in Figure 2.9b, there are two lines in the quadric through each point and the tangent plane (not pictured) is the join of these.

Figure 2.9: Real two-dimensional quadrics: (a) Quadric intersects tangent plane at one point, (b) Quadric intersects tangent plane in a line pair, (c) Cooling towers.
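For a concrete instance of the line-pair phenomenon (my own choice of example, not from the notes), take the non-singular 2-dimensional quadric Q = λ₀λ₃ − λ₁λ₂ in P(F⁴); over R this is a doubly ruled quadric of the kind in Figure 2.9b. Through A = [1, 0, 0, 0] it contains the two lines L₁ = {[u, v, 0, 0]} and L₂ = {[s, 0, t, 0]}, both inside the tangent plane λ₃ = 0, meeting only at A. A quick numerical check:

```python
# The quadric S : Q = l0*l3 - l1*l2 = 0 in P(R^4) is 2-dimensional and
# non-singular; A = [1,0,0,0] lies on S and the tangent plane there is l3 = 0.

def Q(x):
    return x[0] * x[3] - x[1] * x[2]

def line1(u, v):   # L1 = {[u, v, 0, 0]}, passing through A at [u, v] = [1, 0]
    return (u, v, 0, 0)

def line2(s, t):   # L2 = {[s, 0, t, 0]}, passing through A at [s, t] = [1, 0]
    return (s, 0, t, 0)

# Every point of L1 and L2 lies on S and in the tangent plane l3 = 0:
params = [(a, b) for a in range(-3, 4) for b in range(-3, 4) if (a, b) != (0, 0)]
assert all(Q(line1(u, v)) == 0 and line1(u, v)[3] == 0 for u, v in params)
assert all(Q(line2(s, t)) == 0 and line2(s, t)[3] == 0 for s, t in params)
print("both lines lie on the quadric, inside the tangent plane at A")
```

Indeed, intersecting Q = 0 with λ₃ = 0 leaves λ₁λ₂ = 0, which is exactly the line-pair L₁ ∪ L₂, and L₁ ∩ L₂ = {A}.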


More information

Math 396. Bijectivity vs. isomorphism

Math 396. Bijectivity vs. isomorphism Math 396. Bijectivity vs. isomorphism 1. Motivation Let f : X Y be a C p map between two C p -premanifolds with corners, with 1 p. Assuming f is bijective, we would like a criterion to tell us that f 1

More information

Chapter 4 & 5: Vector Spaces & Linear Transformations

Chapter 4 & 5: Vector Spaces & Linear Transformations Chapter 4 & 5: Vector Spaces & Linear Transformations Philip Gressman University of Pennsylvania Philip Gressman Math 240 002 2014C: Chapters 4 & 5 1 / 40 Objective The purpose of Chapter 4 is to think

More information

DIVISORS ON NONSINGULAR CURVES

DIVISORS ON NONSINGULAR CURVES DIVISORS ON NONSINGULAR CURVES BRIAN OSSERMAN We now begin a closer study of the behavior of projective nonsingular curves, and morphisms between them, as well as to projective space. To this end, we introduce

More information

1 Euclidean geometry. 1.1 The metric on R n

1 Euclidean geometry. 1.1 The metric on R n 1 Euclidean geometry This chapter discusses the geometry of n-dimensional Euclidean space E n, together with its distance function. The distance gives rise to other notions such as angles and congruent

More information

Unless otherwise specified, V denotes an arbitrary finite-dimensional vector space.

Unless otherwise specified, V denotes an arbitrary finite-dimensional vector space. MAT 90 // 0 points Exam Solutions Unless otherwise specified, V denotes an arbitrary finite-dimensional vector space..(0) Prove: a central arrangement A in V is essential if and only if the dual projective

More information

Part III. 10 Topological Space Basics. Topological Spaces

Part III. 10 Topological Space Basics. Topological Spaces Part III 10 Topological Space Basics Topological Spaces Using the metric space results above as motivation we will axiomatize the notion of being an open set to more general settings. Definition 10.1.

More information

Linear Algebra MAT 331. Wai Yan Pong

Linear Algebra MAT 331. Wai Yan Pong Linear Algebra MAT 33 Wai Yan Pong August 7, 8 Contents Linear Equations 4. Systems of Linear Equations..................... 4. Gauss-Jordan Elimination...................... 6 Linear Independence. Vector

More information

Chapter One. The Real Number System

Chapter One. The Real Number System Chapter One. The Real Number System We shall give a quick introduction to the real number system. It is imperative that we know how the set of real numbers behaves in the way that its completeness and

More information

Chapter 2: Linear Independence and Bases

Chapter 2: Linear Independence and Bases MATH20300: Linear Algebra 2 (2016 Chapter 2: Linear Independence and Bases 1 Linear Combinations and Spans Example 11 Consider the vector v (1, 1 R 2 What is the smallest subspace of (the real vector space

More information

Formal power series rings, inverse limits, and I-adic completions of rings

Formal power series rings, inverse limits, and I-adic completions of rings Formal power series rings, inverse limits, and I-adic completions of rings Formal semigroup rings and formal power series rings We next want to explore the notion of a (formal) power series ring in finitely

More information

5 Set Operations, Functions, and Counting

5 Set Operations, Functions, and Counting 5 Set Operations, Functions, and Counting Let N denote the positive integers, N 0 := N {0} be the non-negative integers and Z = N 0 ( N) the positive and negative integers including 0, Q the rational numbers,

More information

Holomorphic line bundles

Holomorphic line bundles Chapter 2 Holomorphic line bundles In the absence of non-constant holomorphic functions X! C on a compact complex manifold, we turn to the next best thing, holomorphic sections of line bundles (i.e., rank

More information

Exterior powers and Clifford algebras

Exterior powers and Clifford algebras 10 Exterior powers and Clifford algebras In this chapter, various algebraic constructions (exterior products and Clifford algebras) are used to embed some geometries related to projective and polar spaces

More information

Math 110, Spring 2015: Midterm Solutions

Math 110, Spring 2015: Midterm Solutions Math 11, Spring 215: Midterm Solutions These are not intended as model answers ; in many cases far more explanation is provided than would be necessary to receive full credit. The goal here is to make

More information

Lecture notes - Math 110 Lec 002, Summer The reference [LADR] stands for Axler s Linear Algebra Done Right, 3rd edition.

Lecture notes - Math 110 Lec 002, Summer The reference [LADR] stands for Axler s Linear Algebra Done Right, 3rd edition. Lecture notes - Math 110 Lec 002, Summer 2016 BW The reference [LADR] stands for Axler s Linear Algebra Done Right, 3rd edition. 1 Contents 1 Sets and fields - 6/20 5 1.1 Set notation.................................

More information

C. Fields. C. Fields 183

C. Fields. C. Fields 183 C Fields 183 C Fields By a field one means a set K equippedwith two operations: addition and multiplication Both are assumed to be commutative and associative, and satisfying the distributive law: a(b+c)

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information