Algebraic Geometry
Diane Maclagan
Notes by Florian Bouyer

Copyright (C) Bouyer 2011. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license can be found at http://www.gnu.org/licenses/fdl.html

Contents
1 Introduction and Basic Definitions 2
2 Grobner Bases 3
  2.1 The Division Algorithm 4
3 Zariski Topology 7
  3.1 Morphisms 7
  3.2 Images of varieties under morphisms 9
4 Sylvester Matrix 11
  4.1 Hilbert's Nullstellensatz 14
5 Irreducible Components 16
  5.1 Rational maps 18
6 Projective Varieties 21
  6.1 Affine Charts 21
  6.2 Morphisms of projective varieties 23
    6.2.1 Veronese Embedding 23
    6.2.2 Segre Embedding 24
    6.2.3 Grassmannian 24
7 Dimension and Hilbert Polynomial 26
  7.1 Singularities 27

Books: Hassett, Introduction to Algebraic Geometry; Cox, Little, O'Shea, Ideals, Varieties and Algorithms.

1 Introduction and Basic Definitions

Algebraic geometry starts with the study of solutions to polynomial equations. e.g.: {(x, y) ∈ C^2 : y^2 = x^3 - 2x + 1} (an elliptic curve). e.g.: {(x, y, w, z) ∈ C^4 : x + y + z + w = 0, x + 2y + 3z = 0} (a subspace of C^4). The goal of this module is to understand solutions to polynomial equations, i.e. varieties: their properties, maps between them, how to compute with them, and examples of them. Why would we do that? Because varieties occur in many different parts of mathematics: e.g.: a robot arm: any movement can be described by polynomial equations (and inequalities). e.g.: {(x, y) ∈ (Q \ {0})^2 : x^4 + y^4 = 1} = ∅ (by Fermat's Last Theorem). Algebraic geometry seeks to understand these spaces using (commutative) algebra.

Definition 1.1. Let S be the ring of polynomials with coefficients in a field k. Notation. S = k[x_1, ..., x_n].

Definition 1.2. Affine space is A^n = {(y_1, ..., y_n) : y_i ∈ k}. That is, k^n without the vector space structure.

Definition 1.3. Given polynomials f_1, ..., f_r ∈ S, the affine variety defined by the f_i is V(f_1, ..., f_r) = {y = (y_1, ..., y_n) ∈ A^n : f_i(y) = 0 for all i}.

Example. V(x^2 + y^2 - 1) = circle of radius 1.

Note. Two different sets of polynomials can define the same variety.

Example. V(x + y + z, x + 2y) = V(y - z, x + 2z) = {(2a, -a, -a) : a ∈ k}.

Recall: The ideal generated by f_1, ..., f_r ∈ S is I = <f_1, ..., f_r> = {sum_{i=1}^r h_i f_i : h_i ∈ S}. It is closed under addition and under multiplication by elements of S.

Lemma 1.4. V(f_1, ..., f_r) = {y ∈ A^n : f(y) = 0 for all f ∈ <f_1, ..., f_r>}. Thus if <f_1, ..., f_r> = <g_1, ..., g_s> then V(f_1, ..., f_r) = V(g_1, ..., g_s).

Proof. We show the inclusion both ways. ⊆: Let y ∈ V(f_1, ..., f_r). Then f_i(y) = 0 for all i, so for any f = sum h_i f_i ∈ <f_1, ..., f_r> we get f(y) = 0. ⊇: Conversely, if f(y) = 0 for all f ∈ <f_1, ..., f_r>, then in particular f_i(y) = 0 for all i. Hence y ∈ V(f_1, ..., f_r).

Notation. If I = <f_1, ..., f_r> we write V(I) for V(f_1, ..., f_r).

Definition. Let X ⊆ A^n be a set. The ideal of functions vanishing on X is I(X) = {f ∈ S : f(y) = 0 for all y ∈ X}.

Example. X = {0} ⊆ A^1. Then I(X) = <x>.

Note that I ⊆ I(V(I)). To see this: f ∈ I implies f(y) = 0 for all y ∈ V(I), which implies f ∈ I(V(I)). On the other hand, we do not always have equality. e.g., I = <x^2> ⊆ k[x]; then V(I) = {0} ⊆ A^1, so I(V(I)) = <x> ⊋ <x^2>. e.g., k = R and I = <x^2 + 1>; then V(I) = ∅, so I(V(I)) = <1> = R[x] ⊋ I.
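As a quick sanity check of the last example, the two linear systems can be solved and compared directly. The sketch below is my addition (not part of the notes) and assumes Python with the sympy library; the variable names are purely illustrative.

```python
from sympy import symbols, solve

x, y, z = symbols('x y z')

# Solve each linear system for x and z in terms of the free variable y.
sol1 = solve([x + y + z, x + 2*y], [x, z], dict=True)[0]
sol2 = solve([y - z, x + 2*z], [x, z], dict=True)[0]

print(sol1)          # {x: -2*y, z: y}
print(sol2)          # {x: -2*y, z: y}
assert sol1 == sol2  # both describe the line {(2a, -a, -a) : a in k}
```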

2 Grobner Bases

Question: Given f_1, ..., f_r, f ∈ S, how can we decide if f ∈ <f_1, ..., f_r>? That is: given generators for I(X), how can we decide if f vanishes on X?

Example 2.1. n = 1, k = Q. Is <x^2 - 3x + 2, x^2 - 4x + 4> = <x^3 - 6x^2 + 12x - 8, x^2 - 5x + 6> = <x - 2>? Yes: since we are in a PID we can use Euclid's algorithm to find the generator. This is a solved problem.

Any n, with f_1, ..., f_r linear: Is y - z ∈ <x + y + z, x + 2y>? Yes. Is 5x_1 + 3x_2 - 7x_4 + 8x_5 ∈ <x_1 + x_2 + x_3 + x_4 + x_5, 3x_1 - 7x_4 + 9x_5, 2x_1 + 3x_4> = <f_1, f_2, f_3>? If f ∈ <f_1, f_2, f_3> then f = a f_1 + b f_2 + c f_3 for a, b, c ∈ k. So the question becomes: is (5, 3, 0, -7, 8) in the row space of

    [ 1  1  1  1  1 ]
    [ 3  0  0 -7  9 ]
    [ 2  0  0  3  0 ]  ?

To solve this we use Gaussian elimination from linear algebra.

As we have seen from the above examples, we need a common generalization. This is the theory of Grobner bases.

Definition 2.2. A term order (or monomial order) is a total order < on the monomials x^u of S = k[x_1, ..., x_n] such that:
1. 1 < x^u for all u ≠ 0,
2. x^u < x^v implies x^{u+w} < x^{v+w} for all w ∈ N^n.

Several term orders:

Lexicographic order (lex): x^u < x^v if the first non-zero entry of v - u is positive. Example. f = 3x^2 - 8xz^9 + 9y^10. If x > y > z, then x^2 > xz^9 > y^10 (since for v = (2, 0, 0) and u = (1, 0, 9) we have v - u = (1, 0, -9)).

Degree lexicographic order (deglex): x^u < x^v if deg(x^u) < deg(x^v), or deg(x^u) = deg(x^v) and x^u <_lex x^v. Example. f = 3x^2 - 8xz^9 + 9y^10. Then xz^9 > y^10 > x^2.

(Graded) reverse lexicographic order (revlex): x^u < x^v if deg(x^u) < deg(x^v), or deg(x^u) = deg(x^v) and the last non-zero entry of v - u is negative. Example. f = 3x^2 - 8xz^9 + 9y^10. Then y^10 > xz^9 > x^2.

Definition 2.3. Given a polynomial f = sum c_u x^u ∈ S and a term order <, the initial term of f is in_<(f) = c_v x^v, where c_v ≠ 0 and x^v > x^u for all other u with c_u ≠ 0.

Definition 2.4. The initial ideal of I with respect to < is in_<(I) = <in_<(f) : f ∈ I>.

Warning: If I = <f_1, ..., f_r> then in_<(I) is not necessarily generated by in_<(f_1), ..., in_<(f_r). e.g., let I = <x + y + z, x + 2y> and let the term order be lex with x > y > z. Then in_<(I) = <x, y>, while <in_<(x + y + z), in_<(x + 2y)> = <x>.

Definition 2.5. A set {g_1, ..., g_s} is a Grobner basis for I if {g_1, ..., g_s} ⊆ I and in_<(I) = <in_<(g_1), ..., in_<(g_s)>.

The point of this is that long division by a Grobner basis decides the ideal membership problem, that is: is f ∈ <f_1, ..., f_r>?

Definition 2.6. A monomial ideal is an ideal I ⊆ S generated by monomials x^u.
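The warning after Definition 2.4 can be seen computationally. The following sketch is my addition, assuming Python with sympy (the notes do not use any software): it computes a lex Grobner basis of I = <x + y + z, x + 2y> and contrasts the initial ideal with the initial terms of the given generators.

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')
f1, f2 = x + y + z, x + 2*y

# Reduced lex Groebner basis (x > y > z) of I = <f1, f2>.
G = groebner([f1, f2], x, y, z, order='lex')
print(G.exprs)   # e.g. [x + 2*z, y - z]

# With respect to lex (x > y > z), in(f1) = in(f2) = x, so
# <in(f1), in(f2)> = <x>, while the basis above shows in_<(I) = <x, y>:
# the initial terms of the given generators need not generate in_<(I).
```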

Lemma 2.7. Let I be a monomial ideal, I = X u : u A for some A N n. Then: 1. X v I if and only if X u X v for some u A. 2. If f = c v X v I then each X v with c v non-zero is divisible by some X u for U A, hence they lies in I. Proof. Note that part 1. is a special case of part 2. Since f I we can write f = h u X u with u A, h u S and all but nitely many are 0. Let us expand the RHS as a sum of monomials. Then each term is a multiple of some X u so lies in I, hence the same is true for the terms of f. Theorem 2.8 (Dickson's Lemma). Let I = X u : u A for some set A N n, then there exists a 1,... a s A with I = X a1,..., X as. Proof. The proof is by induction on n. n = 1: n > 1: We have I = X u for U = min{u : U A}, this uses the fact that N is well ordered Name the variables of the polynomial ring x 1,..., x n 1, y.. Let J = X u : j 0 with x u y j I k[x 1,..., x n 1 ]. By induction hypothesis J = X ai 1,... X ais where (a ij, m j ) A for some m j N. Let m = max(m j ). For 0 l m 1, let J l = X u : x u y l I k[x 1,..., x n 1 ]. So again by induction we have that J l = x b l1,..., x r(l) b where bls N n 1 and x b ls y l I. We now claim that I = x b ls y l : 0 l m 1, 1 s r(l) + x aij y mj : 1 j s. Indeed if x u y j I, if j < m then x u J j so x bjs x u for some b js so x bjs y j x u y j. If j m then x u J, so there is a i with X ai X u so X ai y mi X u y j. In particular, every monomial generator of I lies in x b ls y l, x aij y mj so the ideals are equal and I is nitely generated. For each of the nite number of generators we can nd a i A with X ai dividing the generator (using the previous lemma). Corollary 2.9. A term order is well ordered (every set of monomials has a least element) Proof. If not, there would be an innite chain X u1 > X u2 >.... Let I = X ui : i 1 k[x 1,..., x n ], then by Dickson's lemma I = X ui 1,..., X uis for some i 1 < i 2 < < i s. In particular for j i s there exists l such that X ui l X uj. Thus X uj = X ui l X w, but then X ui l < X uj because 1 < X W. This is a contradiction. Corollary 2.10. Let I be an ideal in k[x 1,..., x n ] then there exists g 1,... g s I with in < (I) = in < (g 1 ),..., in < (g s ). Hence a Grobner basis exists. Proof. By denition in < (I) = in < (f) : f I. By Disckson's lemma, there exists g 1,..., g s I with in < (g 1 ),..., in < (g s ) = in < (I). 2.1 The Division Algorithm Input: f 1,..., f s, f S, < the term order s Output: Expression of the form i=1 h if i + r where h i S and r = cu X u with {c u 0 X u is not divisible by in < (f i ) i}, such that if in < (f) = c u X u, in < (h i f i ) = c vi X vi then X u X vi i. Step 1: Initialize h 1 = = h s = 0, r = 0, p = f. Step 2: While p 0 do: i = 1 Divisionoccured = false While i s and Divisionoccured = false do: If in < (f i ) in < (p) then: h i = h i + in<(p) in <(f i) p = p in<(p) in <(f i) f i 4

Divisionoccured = true Else: i = i + 1 If Divisionoccured = false then: r = r + in < (p) p = p in < (p) Step 3: Output: h 1,..., h s, r. Example 2.11. Input: f 1 = x + y + z, f 2 = 3x 2y, f = 5y + 3z, < lex (x < y < z) Step 1: h 1 = 0, h 2 = 0, r = 0, p = 5y + 3z. Step 2: i = 1 Divisionoccured = false does in < (f 1 ) in < (p)? Yes: h 1 = 0 + 3 p = 5y + 3z) 3 (x + y + z) = 3x + 2y Divisionoccured = true Step 2: i = 1 Divisionoccured = false does in < (f 1 ) in < (p)? No: i = 2 does in < (f 2 ) in < (p)? Yes: h 2 = 0 + 1 p = 3x + 2y + ( 1) (3x 2y) = 0 Divisionoccured = true Step 3: Output: h 1 = 3, h 2 = 1, r = 0 Note that the division algorithm depends on the ordering. (In the above example if x > y > z then the output is h 1 = h 2 = 0 and r = 5y + 3z) Proposition 2.12. The above algorithm terminates with the correct output. Proof. As each stage the initial term in < (p) decreases with respect to <. Since < is a well-order, this cannot happen an innite number of times, hence the algorithm must terminate. At each stage we have f = p+ h i f i +r, where h i f i and r satisfy the condition, so when it outputs with p = 0, the output has the desired correct form. Proposition 2.13. If {g 1,..., g s } is a Grobner basis for I with respect to <, then f I if and only if the division algorithm outputs r = 0. Proof. The division algorithm writes f = h i g i + r, where no monomial in r is divisible by in < (g i ). Thus f I if and only if r I. Now if r 0 then in < (r) / in < (I) = in < (g 1 ),..., in < (g s ), so r / I. Hence r = 0 if and only if r I. Corollary 2.14. If {g 1,..., g s } is a Grobner basis for I then I = g 1,..., g s Proof. We have g 1,..., g s I by the denition of Grobner basis. If f I, then we divide f by g 1,..., g s to get f = h i g i + r, but r = 0. So we have f g 1,..., g s, hence I g 1,..., g s. Corollary 2.15 (Hilbert Basis Theorem). Let I S be an ideal. Then I is nitely generated. Proof. We know that I has a nite Grobner basis (since monomial ideals are nitely generated). By the previous corollary, this Grobner basis generates I Denition 2.16. A ring R is Noetherian if all its ideals are nitely generated. Hence the Hilbert basis theorem says S is Noetherian. Note that there is a standard algorithm (the Buchberger algorithm) to compute Grobner bases. Denition 2.17. A reduced Grobner basis for I with respect to <, is a Grobner basis of I which satises: 5

1. The coefficient of in_<(g_i) is 1 for each i.
2. No in_<(g_i) divides any other in_<(g_j), j ≠ i.
3. No in_<(g_i) divides any other term of g_j, j ≠ i.

Such a reduced Grobner basis exists and is unique (for a fixed term order). With this we can check whether two ideals I and J are equal: fix a term order, compute a reduced Grobner basis for each of I and J, and compare; the ideals are equal if and only if the reduced Grobner bases coincide.
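Both uses of Grobner bases described in this section, ideal membership (Proposition 2.13) and testing equality of ideals via reduced bases, can be tried in a computer algebra system. The sketch below is my addition and assumes Python with sympy; the statement that sympy's groebner returns the reduced basis is my reading of that library, not something the notes assert.

```python
from sympy import symbols, groebner, reduced

x, y, z = symbols('x y z')

# Ideal membership (Prop 2.13): divide by a Groebner basis; the remainder
# is 0 exactly when the polynomial lies in the ideal.
G = groebner([x + y + z, x + 2*y], x, y, z, order='lex')
_, r = reduced(y - z, G.exprs, x, y, z, order='lex')
print(r)    # 0, so y - z lies in <x + y + z, x + 2y>
_, r = reduced(y + z, G.exprs, x, y, z, order='lex')
print(r)    # 2*z, nonzero, so y + z does not lie in the ideal

# Ideal equality (Example 2.1): groebner returns the reduced Groebner
# basis, unique for a fixed term order, so two ideals agree exactly when
# their reduced bases agree.
I1 = groebner([x**2 - 3*x + 2, x**2 - 4*x + 4], x, order='lex')
I2 = groebner([x**3 - 6*x**2 + 12*x - 8, x**2 - 5*x + 6], x, order='lex')
print(I1.exprs == I2.exprs)   # True: both ideals equal <x - 2>
```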

3 Zariski Topology Recall that a topological space is a set X and a collection θ = {U} of subsets of X called open sets, satisfying: 1. θ 2. X θ 3. If U, U θ then U U θ 4. If U α θ for α A, then α U α θ. A set Z is closed if its compliment is open. Denition 3.1. The Zariski Topology on A n has close set V (I) for I S an ideal. Example. In A 1, under the Zariski Topology, the closed sets are nite set, A 1 or. (A 1 = V (0) and = V (S)) Recall: If I, J are ideals in S then I + J = {i + j : i I, j J}, while IJ = ij : i I, j J. In terms of generators, if I = f 1,..., f s and J = g 1,..., g r then I + J = f 1,..., f s, g 1,..., g s and IJ = f i g j : 1 i s, 1 j r. Proposition 3.2. Let X = V (I) and Y = V (J) be two varieties in A n then: X Y = V (I + J) X Y = V (I J) = V (IJ) Proof. Let y X Y. Then f(y) = 0 for all f I and g(y) = 0 for all g J. So (f + g)(y) = 0 for all f I and g J. Hence by denition y V (I + J). Conversely: let y V (I + J), then h(y) = 0 for all h = f + 0 with f I, hence y V (I). Similarly h(y) = 0 for all h = 0 + g with g J, hence y V (J). So y X Y. Let y X Y. Then y X or y Y. If y X then f(y) = 0 f I, so f(y) = 0 f I J, hence y V (I J). Similarly if y Y then g(y) = 0 g I, so g(y) = 0 g I J, hence y V (I J). Let y V (IJ). Then h(y) = 0 h = fg with f I, g J. Thus h(y) = f(y)g(y) f I, g J. Suppose y / Y, that is there exists g J with g(y) 0, then f(y) = 0 f I, hence y V (I) = X. Thus we have y X Y. So V (IJ) X Y. Note that I J IJ so V (I J) V (IJ) (This follows from the general fact I J V (J) V (J)) We have shown V (I J) V (IJ) X Y V (I J), thus they are all equal. In fact, if {X α : α A} is a collection of varieties in A n with X α = V (I α ), then α X α = V ( I α ). Challenge question: What goes wrong with arbitrary union. Corollary 3.3. The Zariski topology is a topology on A n. Note: This topology is weird compare to the Euclidean topology, for example it is not Haussdorf and open sets are dense. 3.1 Morphism Denition 3.4. A morphism is a map φ : A n A m with φ(y 1,..., y n ) = (φ 1 (y 1,..., y n ),..., φ m (y 1,..., y n )) where φ k[x 1,..., x n ]. Example. φ : A 2 A 2 dened by φ(x, y) = (x 2 y 2, x 2 + 2xy + 3y 2 ). Morphism plays the role of continuous functions in topology. Questions: are all continuous functions morphism? No. 7

{ x + 1 x / Q Example. f(x) =. This is a continuous function in the Zariski topology. We don't x x Q want this, hence why we restrict to morphism. Denition 3.5. For f k[z 1,..., z m ], φ : A n A m, the function f φ k[x 1,..., x n ] is called the pullback if f by φ. Note. φ f = f φ. Recall: a k-algebra is a ring R containing the eld k. A k-algebra homomorphism is a ring homomorphism φ with φ(a) = a a k. Lemma 3.6. The map φ : k[z 1,..., z m ] k[x 1,..., x n ] is a k-algebra homomorphism φ (1) = 1 φ (0) = 0 φ (a) = a a k φ (fg) = φ (f)φ (g) φ (f + g) = φ (f) + φ (g) Proof. Exercise Note: The polynomial ring is the ring of morphism from A n to A 1. Denition 3.7. The coordinate ring k[x] of a variety X = V (I) A n is the ring of polynomial functions from X to A 1. Equivalently: k[x] = {f k[x 1,..., x n ]}/ where f g if f(y) = g(y) for all y X. Note. f(y) = g(y) y X if and only if (f g)(y) = 0 y X, that is, if and only if f g I(X). So k[x] = k[x 1,..., x n ]/I(X) and in particular k[x] is a ring. Example. X = V (x 2 + y 2 1) then k[x] = k[x, y]/ x 2 + y 2 1 X = V (x 3 ) A 1 then k[x] = k[x]/ x = k. Denition 3.8. Fix X = V (I) A n. Two morphism φ, ψ : A n A m are equal in X if the induced pullback φ, ψ : k[z 1,..., z m ] k[z] = k[x 1,..., x n ]/I(X) are equal. Denition 3.9. A morphism φ : X A n is an equivalence class of such morphism. Example 3.10. Let X = V (x 2 + y 2 1), ψ : A 2 A 1 dened by ψ(x, y) = x 4 and φ : A 2 A 1 dened by φ(x, y) = (y 2 1) 2. We claim that φ = ψ on X since ψ : k[z] k[x, y] is dened by z x 4 while φ : k[z] k[x, y] is dened by z (y 2 1) 2. But k[x] = k[x, y]/(x 2 + y 2 1), and in there x 4 = (y 2 1) 2, hence φ = ψ. Lemma 3.11. If φ, ψ : A n A m are equal on X then φ(y) = ψ(y) for all y X. Proof. If φ(y) ψ(y) for some y X then they dier in some coordinate i. Then z i (φ(y)) z i (ψ(y)), so φ z i (y) ψ z i (y). Hence φ z i ψ z i / I(X), so the pullback homomorphism φ and ψ are dierent. Denition 3.12. Let X A n and Y A m be varieties. A morphism φ : X Y is a morphism φ : X A m with φ(x) Y. Example. Let X = A 1 and Y = V (cy y 2 ) A 3 and let φ : A 1 A 3 be dened by φ(t) = (t, t 2, t 3 ). Then φ : k[x, y, z] k[t] is dened by x t, y t 2 and z t 3. Since tt 3 (t 2 ) 2 = 0, φ(a 1 ) Y, so φ is a morphism from A 1 Y. Proposition. Let X A n, Y A m be varieties. Any morphism φ : X Y induces a k-algebra homomorphism φ : k[y ] k[x]. Conversely given a k-algebra homomorphism from k[y ] k[x] is φ for some morphism φ : X Y. 8

Proof. Let φ : X Y be a morphism. Since φ(x) Y we have f φ(x) = 0 x X and f I(Y ). Hence φ f I(X) f I(Y ), therefore the induced map φ : k[z 1,..., z m ] k[x] = k[x 1,..., x n ]/I(X) factors through k[y ]. So given a morphism φ : X Y we get φ : k[y ] k[x]. Conversely given a k-algebra homomorphism α : k[y ] k[x] it suces to nd a k-algebra homomorphism α : k[z 1,..., z m ] k[x 1,..., x n ] for which we have a commutating diagram k[z 1,..., z m ] α k[x 1,..., x n ] i Y k[y ] α i X k[x] Then α will be a morphism A n A m with α(x) Y. We construct such α as follow. Let g i be any polynomial in k[x 1,..., x n ] with i X (g i) = α(i Y (z i)). Set α = g i and extend as a k- algebra homomorphism. (g i exists since the map i X is surjective). This denes α : k[z 1,..., z m ] k[x 1,..., x n ] and i X α (z 1 ) = α i Y (z i) by construction, hence the diagram commutes. Example 3.13. Let φ : k[t] k[x, y, z]/(x 2 y, x 3 z). Then φ (t) = x and φ (t) = x + x 2 y is the same. This is φ for φ : V (x 2 y, x 3 z) A 1 dened by φ(x, y, z) = x (or φ(x, y, z) = x + x 2 y as while they are dierent morphism they agree on X) So to sum up: Morphism φ : X Y are the same as k-algebra homomorphism of the coordinate rings φ : k[y ] k[x]. note that the homomorphism goes the other way! (contragradient). Exercise 3.14. If X φ Y ψ Z with X α Z. Then α : k[z] k[x] is φ ψ. Denition 3.15. An isomorphism of ane varieties is a morphism φ : X Y for which there is a morphism φ 1 : Y X with φ φ 1 = id Y and φ 1 φ = id X. An automorphism of an ane variety is an isomorphism φ : X X. WARNING: A morphism that is a bijection needs not be an isomorphism. 3.2 Images of varieties under morphism That is, given φ : A n A m what is φ(x)? Warning: φ(x) needs not to be a variety. For example X = V (xy 1) A 1 and φ : A 2 A 1 dened (x, y) x. Then φ(x) = A 1 \{0}. (REMEMBER THIS EXAMPLE!). Notice that the closure of φ(x), is φ(x) = A 1. Another question is: Given X A n and φ : A n A m, how do we compute φ(x). We use the following clever trick: let X A n, rst we send x (x, φ(x)), then project unto the last m coordinates, i.e., φ(x) is the composition of the inclusion of X into the graph of φ with the projection onto the last m coordinates. This breaks the problem into two parts: Describe the image of X A n A m Describe π(y ) for Y A n A m, where π is the projection onto the last m coordinates. For part 1, the image of X = V (I) is V (I) V (z i φ i (x)) A n A m = (x 1,..., x n, z 1,..., z m ) Example. Let φ : A 2 A 2 dened by φ(x, y) = (x + y, x y) and let X = V (x 2 y 2 ). Then the graph of X in A 2 A 2 is V (x 2 y 2, z 1 z y, z 2 x + y) (x, y, z 1, z 2 ). Then φ(x, y) = (z 1, z 2 ) Theorem 3.16. Let X A n be a variety and let φ : A n A m be the projection onto the last m coordinates. Then π(x) = V (I(X) k[x n m+1,..., x n ]) Note. We'll soon show that if k = k then we can replace I(X) by I. But it is not true otherwise, for example, consider k = R and X = V (x 2 y 2 + 1) A 2 and π : (x, y) y. Then X =, π(x) = and I(X) = 1. But x 2 y 2 + 1 k[y] = 0 9

Proof. If f I(X) k[x n m+1,..., x m ] then f(y) = 0 y X, so f(y n m+1,..., y n ) = 0 (y n m+1,..., y n ) with y X, hence f(π(y)) = 0 y X and thus π(x) V (I(X) k[x n m+1,..., x n ]). Conversely if g I(π(X)) then g(y n m+1,..., y n ) = 0 y = (y 1,..., y n ) X. So g I(X) k[x n m+1,..., x n ] so I(π(X)) I(X) k[x n m+1,..., x n ]. But since π(x) = V (I(π(X)) this shows V (I(X) k[x n m+1,..., x n ]) π(x). This leaves the question: Given I k[x 1,..., x n, z 1,..., z m ] how can we compute I k[z 1,..., z m ]? The answer is to use Grobner basis. Recall: the lexicographic term order with x 1 > > x n > z 1 > > z m has x u z v > x u z v if (u u, v v ) has rst non-zero entry positive. Proposition 3.17. Let I k[x 1,..., x n ] = S and let G = {g!,..., g s } be a lexicographic Grobner basis for I. Then a lexicographic Grobner basis for I k[x n m+1,..., x n ] is given by G k[x n m+1,..., x n ] = S, i.e., those elements of G that are polynomials in x n m+1,..., x n. Proof. G S is a collection of polynomials in I S, so we just need to show that in <lex (g) : g G S = in <lex (I S ) S. Let f I S. Then in <lex (f) in <lex (I), so there is g G with in <lex (g) in <lex (f). Since f S, in <lex (g) is not divisible by x 1,..., x n m and thus g S. Hence in <lex (f) in <lex (g) : g G S, so G S is a Grobner basis for I S. The next question is: Given X = V (I), what is I(X)? Hilbert's Nullstellensatz. If k = k, then I(V (I)) = I, where I is the radical of I. (Denoted r(i) in Commutative Algebra) Proof. This proof will come later in the course. 10
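Proposition 3.17 together with the graph trick of Section 3.2 gives an algorithm for computing the closure of the image of a morphism. Below is a hedged sketch of that computation, my addition assuming Python with sympy; the example phi(x, y) = (x + y, x - y) on X = V(x^2 - y^2) is the one from Section 3.2.

```python
from sympy import symbols, groebner

x, y, z1, z2 = symbols('x y z1 z2')

# Graph trick: the graph of phi over X lives in A^2 x A^2 with
# coordinates (x, y, z1, z2).
graph = [x**2 - y**2, z1 - x - y, z2 - x + y]

# Prop 3.17: a lex Groebner basis with x > y > z1 > z2, intersected with
# k[z1, z2], generates the elimination ideal, hence the closure of phi(X).
G = groebner(graph, x, y, z1, z2, order='lex')
image = [g for g in G.exprs if not g.has(x) and not g.has(y)]
print(image)   # [z1*z2]: the closure of phi(X) is V(z1*z2), the two axes
```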

4 Sylvester Matrix Given f, g k[x], how can we decide if they have a common factor? Denition. f = 5x 5 + 6x 4 x 3 + 2x 2 1 and g = 7x 5 + 8x 3 3x 2 + 1. Or f = ax + b, g = cx + d. In this case we have that f, g has a common factor if and only if a b c d = 0. Notice the analogy with Z, that is, n, m Z have a common factor when there is no a, b such that an + bm = 1. This naturally leads to the next proposition. Proposition 4.1. Let f = l i=0 a ix i and g = m j=0 b jx j be two polynomials in k[x]. Then the following are equivalent. 1. f, g have a common root, i.e., there exists α k such that f(α) = g(α) = 0 2. f, g have a non-constant common factor h 3. There does not exists A, B k[x] with Af + Bg = 1 4. f, g k[x] 5. There exists Ã, B k[x] with deg(ã) m 1, deg( B) l 1 and Ãf + Bg = 0. Proof. 1 3: If f(α) = g(α) = 0 and Af + Bg = 1 then A(α)f(α) + B(α)g(α) = 1 0 + 0 = 1 which is a contradiction, hence no such A, B exists. 3 4: Suppose f, g = 1 = k[x], then 1 f, g so there exists A, B k[x] with Af + Bg = 1 4 2: If f, g k[x] then, since k[x] is a PID, the ideal f, g = h for some h k[x] nonconstant. So f, g h,that is, f = fh, g = gh and thus f, g have a non-constant common factor. 2 5: We write f = fh, g = gh and set à = g and B = f. Then Ãf + Bg = 0 and Ã, Bsatisfy the degree bound. 5 2: If Ãf + Bg = 0, then every irreducible factor of g divides Ãf, since k[x] is a UFD. Since deg(g) > deg(ã) at least one irreducible factor must divide f. Hence f and g have a common factor. 2 1: If f, g have a non-constant common factor h, let α be any root of h, then f(α) = g(α) = 0. So f and g have a common root. Part 5 is the key idea here. Given f = a i x i and g = b j x j with 0 i l and 0 j m, write à = m 1 i=0 c ix i and B = l 1 j=0 d jx j where c i, d j are undeterminate coecients. 0 = (c m 1 x m 1 + + c 0 )(a l x l + + a 0 ) + (d l 1 x l 1 + + d 0 )(b m x m + + b 0 ) = (c m 1 a l + d l 1 b m )x l+m 1 + (c m 1 a l 1 + c m 2 a l + d l 1 b m 1 + d l 2 b m )x l+m 2 + + (c 0 a 0 + d 0 b 0 ) Thus all the coecients of x j are zero. Remember that a i and b j are given, so we have a set of linear equations in the c and d variables. We can count that we have l + m variables and linear equations. 11

This gives the following (l + m) x (l + m) matrix, whose first m columns hold the coefficients of f and whose last l columns hold the coefficients of g, each column shifted down one row from the one before it:

    a_l                  b_m
    a_{l-1}  a_l         b_{m-1}  b_m
    a_{l-2}  a_{l-1} ..  b_{m-2}  b_{m-1} ..
      ...      ...         ...      ...
    a_0                  b_0
             a_0                  b_0
    \---- m columns ----/\---- l columns ----/

There exist non-zero Ã, B̃ of the correct degrees with Ãf + B̃g = 0 if and only if the determinant of this matrix is zero.

Definition 4.2. Let f = sum_{i=0}^{l} a_i x^i and g = sum_{j=0}^{m} b_j x^j be polynomials in k[x] with a_l, b_m ≠ 0. The Sylvester matrix of f, g with respect to x, written Syl(f, g, x), is the (l + m) x (l + m) matrix displayed above. The determinant of Syl(f, g, x) is a polynomial in the a_i, b_j with integer coefficients. This is called the resultant of f and g and is denoted Res(f, g, x).

Example. Let f = x^2 + 3x + a and g = x + b. Then

    Syl(f, g, x) = [ 1  1  0 ]
                   [ 3  b  1 ]
                   [ a  0  b ]

so Res(f, g, x) = b^2 - (3b - a) = b^2 - 3b + a, and f and g have a common factor if and only if a = 3b - b^2.

Theorem 4.3. Fix f, g ∈ k[x]. Then f, g have a common factor if and only if Res(f, g, x) = 0.

Proof. This is what the previous work has been about.

Example. f = x^2 + 2x + 1, g = x^2 + 3x + 2:

    Syl(f, g, x) = [ 1  0  1  0 ]
                   [ 2  1  3  1 ]
                   [ 1  2  2  3 ]
                   [ 0  1  0  2 ]

The rows are linearly dependent (r_1 + r_3 = r_2 + r_4), so Res(f, g, x) = 0. (In fact the common factor is x + 1.)

f = ax^2 + bx + c, g = f' = 2ax + b:

    Syl(f, g, x) = [ a  2a  0  ]
                   [ b  b   2a ]
                   [ c  0   b  ]

So Res(f, g, x) = ab^2 - 2a(b^2 - 2ac) = -ab^2 + 4a^2c = -a(b^2 - 4ac).
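Both worked examples above can be reproduced with a computer algebra system. The sketch below is my addition and assumes Python with sympy, whose resultant function returns the same polynomial as the Sylvester determinant (computed internally by a different algorithm).

```python
from sympy import symbols, resultant, factor

x, a, b, c = symbols('x a b c')

# First example: Res(x^2 + 3x + a, x + b, x) = b^2 - 3b + a.
print(resultant(x**2 + 3*x + a, x + b, x))                  # a - 3*b + b**2

# Last example: Res(f, f', x) = -a(b^2 - 4ac) for f = ax^2 + bx + c.
print(factor(resultant(a*x**2 + b*x + c, 2*a*x + b, x)))    # -a*(b**2 - 4*a*c)
```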

Notice how in the second example we nearly ended up with the discriminant of a quadratic equations. Denition 4.4. Let f = l i=0 a ix i. Then the discriminant of f, disc(f) = ( 1)l 1 a l Res(f, f, x) Proposition 4.5. The polynomial disc(f) lies in Z[a 0,..., a l ]. The polynomial f has a multiple root if and only if disc(f) = 0. Proof. Note that the rst row of Syl(f, f, x) is (a l, 0,..., 0, la l, 0,..., 0) so a l Res(f, f, x) and thus disc(f) Z[a 0,..., a l ]. Since deg(f) = l and a l 0, we have disc(f) = 0 if and only if f and f have a common root, so we just need to check that this happens if and only if f has a multiple root. Fix a root α of f and write f = (x α) m f where f(α) 0. Then f = m(x α) m 1 f + (x α) m f, so f (α) = 0 if m > 1. If m = 1 then f (α) = f(α) 0. So α is a root of f if and only if α is a multiple root of f. Generalizations: 1. More variables: Given f, g k[x 1,..., x n ], write f = l i=0 a ix i 1 and g = m j=0 b jx j 1 where a i, b j k[x 2,..., x n ] and a l, b m 0. Then Res(f, g, x 1 ) = det(syl(f, g, x 1 )) k[x 2,..., x n ]. Note. We can think about f, g as polynomials in k(x 2,..., x n )[x 1 ] (elds of rational functions). So this is a special case of the rst one. In particular, either Res(f, g, x 1 ) = 0 or there exists A, B k(x 1,..., x n )[x 1 ] with Af + Bg = 1. Example. Ã = ARes(f, g, x 1 ), B = BRes(f, g, x 1 ) are polynomials in k[x 1, x 2,..., x n ] so Ãf + Bg = Res(f, g, x 1 ). A and B comes from solution to Syl(f, g, x 1 ) c m 1. c 0 d l 1. d 0 0 =. 1 Cramer's rule states Ax = b, x i = ( 1) Ai A where A i is A with i th column replaced by b. By Cramer's rule, the c i and d j have the form polynomial is x 2,... x n /Res(f, g, x 1 ). So ARes(f, g, x 1 ) is a polynomial in k[x 1,..., x n ] As a corollary to all of this we have that Res(f, g, x 1 ) f, g k[x 2,..., x n ]. This is a cheaper way to do elimination/projection. Proposition 4.6. Fix f, g k[x 1,..., x n ] for degrees l, m in x 1 respectively. If Res(f, g, x 1 ) k[x 2,..., x n ] is zero at (c 2,..., c n ) k n 1 then either a l (c 2,, c n ) = 0 or b m (c 2,..., c n ) = 0 or c 1 k such that f(c 1,..., c n ) = g(c 1,..., c n ) = 0. Proof. Let f(x 1, c) = f(x 1, c 2,..., c n ) k[x 1 ] and similarly let g(x 1, c) k[x 1 ]. If neither a l (c), b m (c) = 0 then f(x 1, c) had degree l and g(x 1, c) has degree m. So Syl(f(x 1, c), g(x 1, c, ), x 1 ) is Syl(f, g, x 1 ) with c 2,..., c n substituted for x 2,..., x n. Thus Res(f(x 1, c), g(x 1, c), x 1 ) = 0, so f(x 1, c) and g(x 1, c) have a common root c i k. Hence f(c 1, c 2,..., c n ) = g(c 1, c 2,..., c n ) = 0. 2. Resultants of several polynomials. Given f 1,..., f s k[x 1,..., x n ] we introduce new variables u 2,..., u s and let g = u 2 f 2 +... u s f s. Write Res(f 1, g, x 1 ) = h α (x 2,..., x n )u α with α N s 1. We call h α k[x 2,..., x n ] the generalised resultant. 13

Example. Let f 1 = x 3 + 3x + 2, f 2 = x + 1, f 3 = x + 5. Then g = u 2 (x + 1) + u 3 (x + 5) and 1 u 2 + u 3 0 Syl(f, g, x 1 ) = 3 u 2 + 5u 3 u 2 + u 3 2 0 u 2 + 5u 3 so Res(f 1, g, x 1 ) = 4u 2 u 3 + 12u 2 3. Hence h 1,1 = 4 and h 0,2 = 12. Lemma 4.7. The polynomial h α lies in f 1,..., f s k[x 2,..., x n ] Proof. Write Res(f 1, g, x 1 ) = Af 1 + Bg for A, B k[u 2,..., u s, x 1,..., x n ]. Write A = A α u α and B = B β u β for A α, B β k[x 1,..., x n ]. Then Res(f 1, g, x 1 ) = h α u α = α (A αf 1 + s i=2 B α e i f i )u α. So h α = A α f 1 + B α ei f i f 1,..., f s. Furthermore h α k[x 2,..., x n ] by construction. 4.1 Hilbert's Nullstellensatz Consider π : A n A m projection onto the last m coordinates. We saw π(x) = V (I(X) k[x n m+1,..., x n ]). The question is what do we add then we take the closure? Given y π(x) is y π(x)? Theorem 4.8 (Extension Theorem. ). Let k = k. Let X = V (I) A n and let π : A n A n 1 be projection onto the last n 1 coordinates. Write I = f 1,..., f s with f i = g i (x 2,..., x n )x Ni 1 +l.o.t. in x i. Let (c 2,..., c n ) V (I k[x 2,..., x n ]). If (c 2,..., c n ) / V (g 1,..., g s ) A n 1 then c 1 k with (c 1,..., c n ) X. Example. X = V (xy 1), f 1 = xy 1, g 1 = x. Then the theorem say if c 1 V (0) = A 1 and c 1 / V (x) then there exists c 2 with (c 1, c 2 ) V (xy 1). Note that V (0) comes from xy 1 k[x]. Note. I I(X) so I k[x 2,..., x n ] I(X) k[x 2,..., x n ] so V (I k[x 2,..., x n ]) π(x). How useful this is depends on the choice of the generators of I. The theorem talks about I, not I(X), so this brings us closer to the Nullstellensatz. Proof. s = 1: In this case f = g 1 (x 2,..., x n )x N 1 + l.o.t.we have f k[x 2,..., x n ] = 0, and (c 2,..., c n ) V ( f k[x 2,..., x n ] Case 1. Case 2. N 0: g 1 (c 1,..., c n ) 0, then f(x 1, c 2,... c n ) is a polynomial of degree N in x 1 so has a root c 1 in k. N = 0 then g 1 = f 1, so if (c 2,..., c n ) V ( f k[x 2,..., x n ]) = V (f) A n 1 s = 2: s 3: The (previous) proposition shows that if g 1 (c 1,..., c n ) 0 and g 2 (c 2,..., c n ) 0 then the desired c 1 exists. Suppose (c 2,..., c n ) / V (g 1, g 2 ) then without loss of generality g 1 (c 2,..., c n ) 0. If g 2 (c 2,..., c n ) 0 then c 1 exists. Otherwise replace f 2 by f 2 + x N 1 f 1 for N 0. This does not change the ideal f 1, f 2 and it does not change (c 2,..., c n ) / V (g 1, g 2 ) = V (g 1, g 1 ). Then the proposition implies there exists c 1 with f 1 (c 1,..., c n ) = f 2 (c 1,... c n ) = 0. Also assume g 1 (c 2,..., c n ) 0. Replace f 2 by f 2 + x N 1 f 1 for N 0 if necessary to guarantee g 2 (c) 0 and deg x1 (f 2 ) > deg x1 (f i ) for i > 2. Write Res(f, s i=2 u if i, x 1 ) = h α u α. Since h α I k[x 2,..., x n ] we have h α (c 2,..., c n ) = 0 α. Thus Res(f, u i f i, x 1 )(c 2,..., c n, u 1,..., u s ) is the zero polynomial. By construction the coecients of the maximal power of x 1 in f 1 and in u i f i are g 1 and g 1 u 1, so are non-zero are (c 2,..., c n ). Thus 0 = Res(f, u i f i, x 1 )(c 2,..., c n, u 1,..., u s ) = Res(f 1 (x 1, c 2,..., c n ), u i f i (x 1, c 2,..., c n ), x 1 ). Thus there exists F k(u 2,..., u s )[x 1 ] with deg x1 F > 0, F f 1 (x 1, c 2,..., c n ). Write F = F /g where F = k[u 2,..., u s, x 1 ], g k[u 2,..., u s ]. Then F divides f 1 (x 1, c 2,..., c n )g(u 2,..., u s ). Let F be an irreducible factor of F with positive degree in x 1. Then F f 1 (x 1, c 2,..., c n ). Thus it does not contain any u i. So F u i f i (x i, c 2,..., c n ) but F k[x 1 ] thus F f i (x 1, c 2,..., c n ) for 2 i n. 
Then F f i (x 1, c 2,..., c n ) for all 1 i s. Then choose a root c 1 of F. Then F (c 1 ) = 0 so f i (c 1,..., c n ) = 0 so (c 1,..., c n ) X 14

Weak Nullstellensatz. Let k = k. Suppose I k[x 1,..., x n ] satises V (I) =, then I = 1 = k[x 1,..., x n ]. Proof. We use an induction proof on n. n = 1: I = f k[x 1 ]. If f / k there exists α k with f(α) = 0 so V (I). Thus if V (I) =, I = f = 1. n > 1: Let I = f 1,..., f s and suppose V (I) =. We may assume deg(f i ) > 0 for all i. Let the degree of f 1 be N. Consider the morphism φ : A n A n given by φ : k[x 1,..., x m ] k[z 1,..., z n ] with φ (x i ) = z i + a i z 1 with a 1 = 0 and a i K for i > 1. Note: φ is an isomorphism, since the matrix is 1 0 0... 0 a 2 1 0... 0 a 3 0 1... 0....... a n 0 0... 1 is invertible. (φ ) 1 (z i ) = x i a i x 1. This means that 1 I if and only if 1 φ (I), and φ 1 (X) = V (φ (I)) =. (This is because V (φ (I)) = {y : φ f(y) = 0 f I} = {y : f φ(y) = 0 f I} = {y : φ(y) V (I)}. ) Let f 1 = c u x u. Note that φ (f 1 ) = c(a 2,..., a n )z1 N + l.o.t in z 1 where c(a 2,..., a n ) is the non-zero polynomial in a 2,..., a n, i.e., c(a 2,..., a n ) = u =N c u a u i i. Thus we can choose (a 2,..., a n ) k n 1 with c(a 2,..., a n ) 0. (Exercise: this holds because the eld is innite) Then g 1 k, so V (g 1,..., g s ) = for φ 1(f i ) = g i z Ni 1 + l.o.t. Let J = φ (I) k[z 1,..., z n ], then by the extension theorem, if (c 2,..., c n ) V (J) then there exists c 1 k with (c 1,..., c n ) V (φ (I)). Thus V (J) = and by induction J = 1, so 1 φ (I) and so 1 I. Note that is 1 I, we can write 1 = A i f i with A i k[x 1,..., x n ]. Nullstellensatz. Let k = k. Then I(V (I)) = I. Proof. Let f m I, then f m (x) = 0 x V (I) so f(x) = 0 for all x V (I). Hence f I(V (I)), thus I I(V (I)) For the reverse inclusion, suppose f I(V (I)) and let I = f 1,..., f s and Ĩ = f 1,..., f s, 1 yf k[x 1,..., x n, y]. Now that V (Ĩ) = since if f 1(x 1,..., x n ) = = f s (x 1,..., x n ) = 0 then f(x 1,..., x n ) = 0 so 1 yf(x 1,..., x n ) 0 y. So by the Weak Nullstellensatz we have that 1 Ĩ. So there exists p 1,..., p s, q k[x 1,..., x n, y] with 1 = p i f i + q(1 yf). Regard this as an expression in k(x 1,..., x n, y) and substitute y = 1 f, then 1 = 1 p i (x 1,..., x n f )f i. Choose m > 0 for which 1 p i (x 1,..., x n f )f m k[x 1,..., x n ] then f m = 1 (p i (x 1,..., x n f )f m )f i, hence f m I and thus f I. 15
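The 1 - yf trick used in the proof above also gives a practical radical-membership test: f lies in the radical of I if and only if 1 lies in the ideal generated by I and 1 - tf, where t is a new variable. The sketch below is my addition, assuming Python with sympy; t plays the role of y in the proof.

```python
from sympy import symbols, groebner

x, y, t = symbols('x y t')

gens = [x**2, x*y]          # I = <x^2, x*y>, whose radical is <x>

# f = x: since x^2 lies in I, x lies in sqrt(I), so the Groebner basis
# of <I, 1 - t*x> should be [1].
G = groebner(gens + [1 - t*x], t, x, y, order='lex')
print(G.exprs)              # [1]

# g = y: y does not lie in sqrt(I) = <x>, so the basis is not [1].
H = groebner(gens + [1 - t*y], t, x, y, order='lex')
print(H.exprs)              # a basis different from [1]
```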

5 Irreducible Components (There is some cross-over with commutative algebra here, revise both!) Denition 5.1. A variety X A n is reducible if X = X 1 X 2 with X 1, X 2 non-empty varieties in A n and X 1, X 2 X X is irreducible if it is not reducible. Example. X = V (x, y) A 2 then X = V (x) V (y). V (x 2 + y 2 1) A 2 is irreducible but it is not trivial to prove. We will prove this later. X A 1 is reducible if and only if 1 < X < X = V (f) is a hypersurface in A n. Let f = cf α1 1... fr αr where c k and f i are distinct irreducible polynomials. Then V (f) = V (f 1 ) V (f r ). Claim: If r > 1 then V (f) is reducible. We just need too show that V (f i ), X for all i. Now V (f i ) since 1 / f i. If V (f i ) = X then V (f j ) V (f i ) for some j i. Hence f i I(V (f j )) = f j = f j (exercise). So f j f i which contradicts f i, f j being distinct irreducible. Actually V (f i ) are all irreducible, so X = V (f 1 ) V (f r ) is a decomposition into irreducible. Theorem 5.2. Let X A n be a variety. Then X = X 1 X r, where each X r is irreducible. This representation is unique up to permutation provided it is irredundant (i.e., X i X j for any i j) Proof. For this theorem, we use the fact that k[x 1,..., x n ] is Noetherian, in particular, that there is no innite ascending chain I 1 I 2 I 3... of ideals in k[x 1,..., x n ]. Existence: If X is irreducible then we are done. Otherwise write X = X 1 X 2 where X 1, X 2 are proper subvarieties. Again if both are irreducible then we are done. Otherwise we can write X 1 = X 11 X 12 and X 2 = X 21 X 22 where X ij are proper subvarieties of X i. Iterate this process. We claim that this process terminates with X = X j (Finite union). If not we have an innite descending chain X X 1 X 11 X 111.... This gives a reverse containment I(X) I(X 1 ) I(X 11 ).... This chain must stabilize, so I(X 111...11 ) = I(X 111...11111 ) but V (I(X 11...111 ) = X 11...11 which contradicts the proper inclusion of varieties. Since V (I(V (I))) = V (I). Hence the decomposition process must terminates. Uniqueness: Suppose X = X 1 X 2 X r = X 1 X s are two irredundant irreducible decompositions. Consider X (X i) = X i = (X 1 X r ) X i = (X 1 (X i)) (X r (X i)) Since X i is irreducible, there must be j with X j (X i ) = X i, so X i X j. The same argument shows that there is k with X j X k, so we have X i X j X k. Since the decomposition is irredundant, X i = X k = X k. This construct a bijection between {X j } and {X i }, hence r = s and the decomposition is unique up to permutation. Note: This was a topological proof. A topological space with no innite descending chain of closed set is called Noetherian (note how this is the opposite condition to Noetherian ring). Noetherian topological spaces have irreducible decompositions. Theorem 5.3. Let X A n be a variety. The following are equivalent: 1. X is irreducible 2. The coordinate ring k[x] is a domain 3. I(X) k[x 1,..., x n ] is prime. 16

Proof. 2 3: Recall k[x] = k[x 1,..., x n ]/I(X). So if f, g k[x] satisfy fg = 0 then there are lifts f, g k[x 1,..., x n ] such that f, g / I(X) and f g I(X). Same argument works the other way round. 1 3: Suppose I(X) is not prime, that is, there exists f, g / I(X) with fg I(X). Let X 1 = V (f) X and X 2 = V (g) X. Since f, g / I(X) then X 1, X 2 X. However X 1 X 2 = (V (f) X) (V (g) X) = ((V (f) V (g)) X = V (fg) X = X since fg I(X). So X is reducible. 3 1: Suppose X = X 1 X 2 with X 1 and X 2 proper subvarieties. Then I(X) I(X 1 ), I(X) I(X 2 ) (To see this take V (_) of both side then V (I(X i )) = X i ). So we may choose f I(X 1 )\I(X) and g I(X 2 )\I(X). Now fg I(X 1 ) I(X 2 ), so V (I(X 1 ) I(X 2 )) V (fg). But V (I(X 1 ) I(X 2 )) = V (I(X 1 )) V (I(X 2 )) = X 1 X 2 = X. So fg I(X) so I(X) is not prime. Remark. Some text reserve the word variety for irreducible varieties and call what we call varieties algebraic sets. Warning: If X = V (I) is irreducible, this does not imply that I is prime, just that I(X) is. This about I = x 2, xy 2 k[x, y]. Theorem 5.4. Let k = k (this condition is unnecessary as there exists a commutative algebra proof which show this theorem holds for k k. See Commutative Algebra notes, this is the whole theory of Primary Decomposition). Let I = I (a radical ideal) in k[x 1,..., x n ], then I = P 1 P r where each P i is prime. This decomposition is unique up to order if irredundant. Proof. Let X = V (I) and let X = X 1 X r be an irredundant irreducible decomposition. Let P i = I(X i ) which is prime by the previous theorem and let P = P i. Then V (P ) = V (P i ) = X i = X. So P = I(X) = I. If f m P for some m > 0 then f m P i for all i, so f P i for all i. Hence f P i = P and thus P = P. So I = P i Uniqueness follows from the uniqueness of primary decomposition. The next question to come up is how can we determine the P i, that is the prime decomposition of radical ideals. Denition 5.5. Let I, J be ideals. Then the colon (or quotient) ideal is (I : J) = {f k[x 1,..., x n ] : fg = I g J} I. Example. Let I = x 2, xy 2 and J = x. Then (I : J) = {f : fg x 2, xy 2 g x } = {f : fx x 2, xy 2 } = x, y 2 Theorem 5.6. Let I = I and let I = P i be an irredundant primary decomposition. Then the P i are precisely the prime ideals of the form (I : f) for f k[x 1,..., x n ]. { 1 f P Proof. Notice: (I : f) = ( P i : f) = (P i : f). Now for any prime P we have (P : f) = P f / P. So (I : f) = f / Pi P i. Fix P i, since P j P i for any j i, we can nd f j P j \P i. Let f = i j f j then f j i P j \P i. So (I : f) = P i. Conversely if (I : f) = P is prime for some f, then P = f / Pi P i (as P i = P i so P = P i for some i. In more details P P i for all i. If P P i for all i then we can nd f i P i \P, so f = f i P i \P which is a contradiction. So P = P i for some i) Example. Let I = xy, xz, yx, then V (I) = union of x, y, z axes = V (x, y) V (x, z) V (y, z). So I = x, y x, z y, x. We want to see the theorem in action, so notice that (I : z) = x, y,(i : y) = x, z and (I : x) = y, z. Warning: (I : x + y) = z, xy (not as obvious.) Let I = x 3 xy 2 x. Then (I : x 2 + y 2 1) = x and (I : x) = x 2 + y 2 1. Note. If X, Y are varieties in A n then (I(X) : I(Y )) = I(X\Y ). To see this: x x X\Y, since x / Y there is g I(Y ) with g(x) 0. So if f (I(X) : I(Y )) then f(x)g(x) = 0, so f(x) = 0 and thus f I(X\Y). Conversely if f I(X\Y ) and g I(Y ) then fg I(X) so f (I(X) : I(Y )). Hence (X\Y ) = V (I(X) : I(Y )). 17
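Colon ideals such as (I : z) in the example above can be computed with Grobner bases. The sketch below is my addition, assuming Python with sympy, and uses a standard recipe that the notes do not state: I ∩ <f> is obtained by eliminating a new variable t from t·I + (1 - t)·<f> (elimination as in Proposition 3.17), and (I : f) is then generated by those generators divided by f.

```python
from sympy import symbols, groebner, div

x, y, z, t = symbols('x y z t')

I_gens = [x*y, x*z, y*z]   # I = <xy, xz, yz>, the union of the three axes
f = z

# Step 1: I ∩ <f> via elimination of t from t*I + (1 - t)*<f>.
G = groebner([t*g for g in I_gens] + [(1 - t)*f], t, x, y, z, order='lex')
intersection = [g for g in G.exprs if not g.has(t)]

# Step 2: (I : f) is generated by the generators of I ∩ <f> divided by f.
quotient = [div(g, f, x, y, z)[0] for g in intersection]
print(intersection)   # [x*z, y*z] (possibly in another order)
print(quotient)       # [x, y], i.e. (I : z) = <x, y> as in the example
```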

5.1 Rational maps How can we decide if X is irreducible? This is hard in general! We use the following trick. If φ : Y X is surjective and X = X 1 X 2 then Y = φ 1 (X 1 ) φ 1 (X 2 ). Now both sides are closed and proper if both X 1 and X 2 are. So if X is reducible then so is Y. Or if Y is irreducible then so is X. Denition 5.7. A morphism φ : X Y of ane varieties is dominant if φ(x) = Y Example. Take φ : V (xy 1) A 1 dened by (x, y) x. This is not surjective but it is dominant. Proposition 5.8. A morphism φ : X Y is dominant if and only if φ : k[y ] k[x] is injective. Example. k[x] k[x, y]/ xy 1, x x (linked to the previous exampled) is injective. Proof. A morphism φ : X Y induces a homomorphism φ : k[y ] k[x]. Now φ(x) Z Y (Z a variety) if and only if there exists g I(X)\I(Y ) with g(φ(x)) = 0 x X, so φ (g(x)) = 0 x X. Hence φ g I(X) and thus the image of g in k[y ] is non-zero but is mapped to zero by φ so φ is not injective. Proposition 5.9. If φ : X Y is dominant and X is irreducible then so is Y. Proof. Since φ is dominant, the map φ : k[y ] k[x] is injective. Since X is irreducible, we have k[x] is a domain, and thus so is k[y ]. Hence Y is also irreducible. Denition 5.10. A rational map φ : A n A m is dened by φ(x 1,..., x n ) = (φ 1 (x 1,..., x n ),..., φ m (x 1,... x n )) with φ i k(x 1,..., x n ) (the eld of rational functions) Example. φ : A 1 A 1 dened by φ(x) = 1 x. Warning: φ is not necessarily a function dened on all of A n. Write φ i = fi g i for f i, g i k[x 1,..., x n ] and let U = {x A n : g i (x) }. Then φ : U A m is well dened. Notice that U is an open set (U = A n \V ( g i )). In the above example φ is dened on U = {x A 1 : x 0}. Note. A rational map induces a k-algebra homomorphism φ : k[z 1,..., z m ] k(x 1,..., x n ) dened by z i φ i. Conversely any such k-algebra homomorphism determines a rational map. Example. Let φ : A 1 A 2 be the inverse stereographic projection, that is, dened by φ(t) = ( t2 1 t 2 +1, 2t t 2 +1 ). It is a rational map from A1 to V (x 2 + y 2 1). It is dened on A 1 \ ±i and the image V (x 2 + y 2 1)\{(1, 0)}. Denition 5.11. Let Y A m, a rational map φ : A Y is a rational map φ : A n A m with φ (I(Y )) = 0, so φ : k[y ] k(x 1,..., x n ). Example. Let φ : A 1 V (x 2 + y 2 1) be the inverse stereographic projection. Then φ (x) = t 2 1 t 2 +1, φ (y) = 2t t 2 +1. Hence φ (x 2 + y 2 1) = ( t4 2t 2 +1+4t 2 t 4 +2t 2 +1 1) = 0, so φ is indeed a rational map. What about rational maps φ : X A m? We recall that a morphism X A m was an equivalence class of morphism A n A m. But we have some problems: consider X = V (x) A 2 and φ : A 2 A 3 dened by φ(x, y) = (x 2, 1 xy, y3 ). Then φ is dene on U = {(x, y) : x, y 0}, φ is not dened at (x, y) for any (x, y) X. The solution to this is to allow rational maps that are dened on enough of X. Denition 5.12. Let R be a commutative ring with identity. An element f R is a zero-divisor if there exists g R with g 0 such that fg = 0. Denition 5.13. Let φ : A n A m be a rational map with φ i = fi g i where f i and g i have no common factors. Then φ is admissible on X if the image of each g i in k[x] is a non-zero divisor. Example. φ : A 2 A 3, φ(x, y) = (x 2, 1 xy, y3 ) is not admissible on V (x). Let X = V (x 2 y 2 ) A 2 and φ : A 2 A 1 dened by (x, y) 1 x+y. Then φ is not admissible on X as x + y is a zero divisor in k[x, y]/(x 2 y 2 ). On the other hand ψ : A 2 A 2 dened by ψ(x, y) = ( 1 x, 1 y ) is. 18
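The inverse stereographic projection example can be verified directly: pulling back x^2 + y^2 - 1 along phi should give the zero function. A small check, my addition assuming Python with sympy:

```python
from sympy import symbols, simplify

t = symbols('t')

# phi(t) = ((t^2 - 1)/(t^2 + 1), 2t/(t^2 + 1)) maps A^1 into V(x^2 + y^2 - 1).
px = (t**2 - 1)/(t**2 + 1)
py = 2*t/(t**2 + 1)

print(simplify(px**2 + py**2 - 1))   # 0, so the image lies in the circle
```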

Denition 5.14. Let U be the set of non-zero divisor on a ring R. Note U 0 since 1 U. The total quotient ring (ring under obvious multiplication and addition) is as follow Q(R) = R[U 1 ] = { r s : r R, s U} r 1 s 1 = r2 (This like the localization in Commutative Algebra) Example. R = Z, U = Z \ {0} then Q(R) = Q. R = k[x 1,..., x n ] then Q(R) = k(x 1,..., x n ). s 2 if r 1 s 2 = r 2 s 1 Denition 5.15. If X is a variety, the total quotient ring of k[x] is written k(x) and is called the ring of rational functions of X. Note. If R is a domain, U = R\{0}, so Q(R) is the eld of fractions of R. So if X is irreducible, k(x) is the eld of fractions of k[x]. Proposition 5.16. Let φ : A n A m be a rational map admissible on a ane variety X A n. Then φ induces a k-algebra homomorphism φ : k[z 1,..., z n ] k(x). Conversely each such homomorphism arise from a rational map. Proof. Write φ i = f i = g i with f i and g i not sharing any irreducible factors. By hypothesis each g i is a non-zero divisor on k[x]. So fi g i is a well dened element of k(x). Thus φ : k[z 1,..., z m ] k(x) given by φ (z i ) = fi g i is well dened. Conversely given φ : k[z 1,..., z m ] k(x) write φ (z i ) = fi g i for some f i, g i k[x 1,..., x n ] with g i a non-zero divisor on k[x]. Then φ : A n A m dened by φ i (x) = f i (x)/g i (x) is admissible on X. Denition 5.17. Let X A n be an ane variety. Two rational maps φ, ψ : A n A m admissible on X are said to be equivalent on X if the induced homomorphism φ, ψ : k[z 1,..., z m ] k(x) are equal. Example. Let X = V (x + y) A 2. Let φ : A 2 A 2 be dened by φ(x, y) = ( 3x 2x 2y, 2 3x+5y ). This is dened on U φ = {(x, y) : y 2 0, 3x + 5y 0}. Let ψ : A 2 A 2 be dened by ψ(x, y) = ( 3 2x, 1). This is dened on U ψ = {(x, y) : x 0}. These are clearly not the same rational maps but we will show that they are equivalent on X. φ : k[z 1, z 2 ] k(x, y) is dened by φ (z 1 ) = 3x 2y and φ (z 2 2 ) = 2x 3x+5y. And ψ : k[z 1, z 2 ] k(x, y) is dened by ψ (z 1 ) = 3 2x and ψ (z 2 ) = 1. Now in k(x) = k[x, y]/(x + y) we have 3x 2y 2 = 3x 2x 2 = 3 2x 2x = 2x 3x + 5y 2y = 1 So φ, ψ : k[z 1, z 2 ] k(x) are equal so φ, ψ are equivalent on X. (Check φ = ψ on U ψ U φ X) Denition. Let X A n and Y A m be ane varieties. A rational map φ : X Y is an equivalence class of rational maps φ : A n Y admissible on X. Corollary 5.18. Let X A n and Y A m be ane varieties. Then there is a one to one correspondence between rational maps X Y and k-algebra homomorphism k[y ] k(x). Denition 5.19. A rational map φ : X Y is dominant if φ : k[y ] k(x) is injective. Example. Let φ : A 1 V (x 2 + y 2 1) dened by φ(t) = ( t2 1 t 2 +1, 2t t 2 +1 ). Then φ is dominant. Lemma 5.20. If φ : X Y is dominant and X is irreducible, then so is Y Proof. Since φ is dominant we have by denition φ : k[y ] k(x) is injective. Since X is irreducible, k[x] is a domain, so k(x) is a eld. Hence k[y ] is also a domain and thus Y is irreducible. 19

Corollary 5.21. V (x 2 + y 2 1) is irreducible. Denition 5.22. Let Y A n be an ane variety. A rational parametrisation of Y is a rational map φ : A n Y such that Y = im(φ), i.e., a dominant rational map φ : A n Y. Such Y are called unirational. Note. Unirational varieties are irreducible, by the lemma, and we have k(y ) k(x 1,..., x n ). Denition 5.23. A variety X is rational if it admits a rational parametrisation φ : A n X such that the induced eld extension φ : k(x) k(x 1,..., x n ) is an isomorphism. Corollary 5.24. X is rational if and only if k(x) = k(x 1,..., x n ) Proof. If X is rational then k(x) = k(x 1,..., x n ) by denition, so we just need to show the converse. Suppose we have φ : k(x) k(x 1,..., x n ). Then φ k[x] is injective, so denes a dominant rational map φ : A n X. Hence X is rational. Denition. Let X, Y be irreducible varieties. We say X, Y are birational if k(x) = k(y ) as k-algbera. Proposition 5.25. If X, Y are irreducible varieties and k(x) = k(y ) then there exists dominant rational maps X Y and Y X that are inverses. Proof. If φ : k(x) = k(y ), then φ k[x] is injective, so the corresponding rational map φ : Y X is dominant. Similarly φ 1 induces a dominant rational map φ 1 : X Y. By construction φ φ 1 = id Y. 20

6 Projective Varieties. Denition 6.1. Projective Space P n over a eld k is (k n+1 \{0})/ where v λv for λ k = k\{0}. A point in P n correspond to a line through the origin in k n+1. Notation. [x 0 : x 1 : : x n ] is the equivalence class of (x 0, x 1,..., x n ) k n+1. Recall: A polynomial f = c u x u is homogeneous if u = d for all u with c u 0 for some d. Denition 6.2. An ideal I k[x 0, x 1,..., x n ] is homogeneous if I = f 1,..., f s, where each f i is homogeneous. Example. 7x 2 0 + 8x 1 x 2 + 9x 2 1, 3x 3 1 + x 3 2 is, x + y 2, y 2 = x, y 2 is. Denition 6.3. Let f k[x 0,..., x n ]. Then f = f i where each f i is a homogeneous polynomial of degree i. The f i are called the homogeneous components of f. Example. Let I be a homogeneous ideal and let f I. Then each homogeneous component of f is in I. Idea: we choose g 1,..., g s homogeneous with I = g 1,..., g s. Then we can write f = c ui x ui f i, where the f i could be repeated. Then f i = j:deg(x u j )+deg(g i)=i c u i x ui g i I. Denition 6.4. Let I be a homogeneous ideal in k[x 0, x 1,..., x n ]. The projective variety dened by I is V(I) = {[x] P n : f(x) = 0 for all homogenous f I} Example. Let I = 2x 0 x 1, 3x 0 x 2. Then V(I) = {[1 : 2 : 3]} P 3. I = x 0 x 2 x 2 1. Then V(I) = {[1 : t : t 2 ] : t k} {[0 : 0 : 1]} I = x 0, x 1, x 2 k[x 0, x 1, x 2 ]. V(I) =. Note that the weak Nullstellenzatz does not apply here. I = x 0 x 3 x 1 x 2, x 0 x 2 x 2 1, x 1 x 3 x 2 2 k[x0, x 1, x 2, x 3 ]. Then V(I) = ([1, t, t 2, t 3 ] : t k {[0 : 0 : 0 : 1]}. The twisted cubic Note. Points in V(I) correspond to lines through 0 in V (I) A n+1. V (I) is called the ane cover over I. Denition 6.5. The Zariski topology on P n has closed sets V(I) for I k[x 0,..., x n ]. 6.1 Ane Charts Denition 6.6. Let U i = {[u] P n : x i 0}. We can write x U i uniquely as [x 0 : : 1 : : x n ] (1 in i th position). U i bijection with A n. P n = n i=1 U i ane cover of P n. We can think of P n = U 0 {[x] : x 0 = 0}. See that U 0 is a kind of like A n while the set is P n 1 sometime called hyperplane at innity. Fix I homogeneous in k[x 0,..., x n ] and let X = V(I) P n. Let X U i = {[x] P n : f(x) = 0 f I} = {[x 0 : : 1 :..., x n ] P n : f(x 0,..., 1,..., x n ) = 0 f I} = V (I i ) A n where I i = f(x 0,..., 1,..., x n ) : f I = 1 x1=1. X = n X U i. Is a union of ane varieties. This is called an ane cover, let X U i are called ane charts. Example. X = V(x 0 x 2 x 2 1) P n. Then: X U 0 = V (x 2 x 2 1) = {(t, t 2 ) : t k}. X U 1 = V (x 0 x 2 1) = {(t, 1 t ) : t k}. X U 2 = V (x 0 x 2 1) = {(t 2, t) : t k} i=0 21
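The three affine charts of X = V(x_0 x_2 - x_1^2) computed above come from setting one coordinate equal to 1. A small sketch, my addition assuming Python with sympy:

```python
from sympy import symbols

x0, x1, x2 = symbols('x0 x1 x2')

f = x0*x2 - x1**2   # X = V(f) in P^2

# Dehomogenize into the affine charts U_i = {x_i != 0} by setting x_i = 1.
charts = [f.subs(xi, 1) for xi in (x0, x1, x2)]
print(charts)   # [x2 - x1**2, x0*x2 - 1, x0 - x1**2], matching the example
```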