DETAILED ANSWERS TO STARRED EXERCISES


Date: January 6, 2015.

1. Series 1

We fix an integer $n > 1$ and consider the symmetric group $S_n := \mathrm{Sym}(\{1, \dots, n\})$. For a polynomial $p(X_1, \dots, X_n) \in \mathbf{C}[X_1, \dots, X_n]$ in $n$ variables and $\sigma \in S_n$, we define $p^\sigma = p(X_{\sigma(1)}, \dots, X_{\sigma(n)})$. We then define the polynomial
$$f = \prod_{1 \le i < j \le n} (X_i - X_j) \in \mathbf{C}[X_1, \dots, X_n].$$

Proposition 1.
(1) For every permutation $\sigma \in S_n$, there exists a unique element $\alpha(\sigma) \in \{\pm 1\}$ such that $f^\sigma = \alpha(\sigma) f$.
(2) The resulting map $\alpha : S_n \to \{\pm 1\}$ is a group homomorphism.
(3) Let $a \ne b$ be elements of $\{1, \dots, n\}$, and let $\tau \in S_n$ be the transposition switching $a$ and $b$ and fixing all the other elements. We then have $\alpha(\tau) = -1$.

Proof. (1) The definition shows that $(pq)^\sigma = p^\sigma q^\sigma$ for any two polynomials $p$ and $q$. In particular, we obtain
$$f^\sigma = \prod_{1 \le i < j \le n} (X_{\sigma(i)} - X_{\sigma(j)}).$$
For each factor $X_i - X_j$ with $i < j$ of $f$, we have $X_i - X_j = X_{\sigma(k)} - X_{\sigma(l)}$ where $k = \sigma^{-1}(i)$, $l = \sigma^{-1}(j)$. If $k < l$, the pair $(k, l)$ appears in the product defining $f^\sigma$, so this factor is a common factor of $f$ and $f^\sigma$. If $k > l$, then the pair $(k, l)$ does not appear, but the pair $(l, k)$ does, and since we have $X_{\sigma(k)} - X_{\sigma(l)} = -(X_{\sigma(l)} - X_{\sigma(k)})$, this is also a factor up to the sign $-1$. This means that $f^\sigma = \pm f$.

Here is an alternative argument: reversing all the signs, we find that
$$f = (-1)^{n(n-1)/2} \prod_{i < j} (X_j - X_i)$$
(where $n(n-1)/2$ is the number of pairs $1 \le i < j \le n$).

This is also
$$f = (-1)^{n(n-1)/2} \prod_{i > j} (X_i - X_j),$$
and hence
$$f^2 = \varepsilon \prod_{i \ne j} (X_i - X_j), \qquad \varepsilon = (-1)^{n(n-1)/2}.$$
Consequently we get
$$(f^\sigma)^2 = \varepsilon \prod_{i \ne j} (X_{\sigma(i)} - X_{\sigma(j)}),$$
and since $\sigma$ permutes the pairs $(i, j)$ with $i \ne j$, putting $(i, j) = (\sigma(k), \sigma(l))$, we deduce $(f^\sigma)^2 = f^2$. This implies that $f^\sigma = \pm f$, as desired.

(2) The definition shows that for $\sigma$ and $\tau$ in $S_n$, we have
$$f^{\sigma\tau} = (f^\tau)^\sigma$$
(since $X_{\sigma\tau(i)} = (X_{\tau(i)})^\sigma$). Hence
$$\alpha(\sigma\tau) f = (\alpha(\tau) f)^\sigma = \alpha(\tau)\alpha(\sigma) f,$$
which proves that $\alpha(\sigma\tau) = \alpha(\sigma)\alpha(\tau)$.

(3) We may assume that $a < b$, by exchanging them if needed. In the product defining $f$, all factors $X_i - X_j$ where neither $i$ nor $j$ is in $\{a, b\}$ are unchanged. So to compute $f^\tau$, we have to take into account the sign changes involved in replacing
$X_a - X_j$ by $X_b - X_j$ if $j \notin \{a, b\}$ and $a < j$,
$X_i - X_a$ by $X_i - X_b$ if $i \notin \{a, b\}$ and $i < a$,
$X_b - X_j$ by $X_a - X_j$ if $j \notin \{a, b\}$ and $b < j$,
$X_i - X_b$ by $X_i - X_a$ if $i \notin \{a, b\}$ and $i < b$,
$X_a - X_b$ by $X_b - X_a$.
The first case introduces a minus sign if $a < j$ but $b > j$, which means for $j = a + 1, \dots, b - 1$. In the second case, since $i < a < b$, the sign is unchanged. Similarly for the third case: $b < j$ implies $a < j$. The fourth case introduces a minus sign if $i < b$ but $i > a$, which means for $i = a + 1, \dots, b - 1$. Hence the sign changes from the first four cases compensate each other. There only remains the sign change from the fifth case, namely $X_a - X_b = -(X_b - X_a)$. This means that $\alpha(\tau) = -1$.
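Proposition 1 is easy to check numerically on small cases. The following is a minimal Python sketch (an illustration, not part of the original solution): it evaluates $\alpha(\sigma)$ by specializing the variables to $0, \dots, n-1$ in the identity $f^\sigma = \alpha(\sigma) f$, and then verifies by brute force, for $S_4$, that $\alpha$ is a homomorphism and that every transposition has sign $-1$. Only the Python standard library is used.

```python
from itertools import permutations

def alpha(sigma):
    """Sign of a permutation sigma of {0, ..., n-1}, computed by specializing
    X_k = k in f^sigma = alpha(sigma) * f, i.e. as
    prod_{i<j} (sigma(i) - sigma(j)) / (i - j)."""
    n = len(sigma)
    num, den = 1, 1
    for i in range(n):
        for j in range(i + 1, n):
            num *= sigma[i] - sigma[j]
            den *= i - j
    return num // den   # num is exactly +-den, so the result is +1 or -1

def compose(s, t):
    """Composition s o t acting on indices: (s o t)(i) = s(t(i))."""
    return tuple(s[t[i]] for i in range(len(t)))

n = 4
perms = list(permutations(range(n)))

# alpha is a homomorphism: alpha(s o t) = alpha(s) * alpha(t)
assert all(alpha(compose(s, t)) == alpha(s) * alpha(t)
           for s in perms for t in perms)

# every transposition has sign -1
for a in range(n):
    for b in range(a + 1, n):
        tau = list(range(n))
        tau[a], tau[b] = tau[b], tau[a]
        assert alpha(tuple(tau)) == -1

print("checks passed for S_%d" % n)
```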

2. Series 2

Proposition 2. Let $G$ be a group. The map $\chi : G \to \mathrm{Sym}(G)$, $g \mapsto \chi_g$, where $\chi_g(x) = gx$, is an injective homomorphism from $G$ to $\mathrm{Sym}(G)$.

Proof. We first check that $\chi_g$ is a permutation of $G$ for all $g$. Indeed, denoting $f = \chi_g$, we see that $f$ is injective because $f(x) = f(y)$ is equivalent with $gx = gy$, hence with $x = y$ (since $G$ is a group). Moreover, since $x = g(g^{-1}x) = f(g^{-1}x)$, we see that $f$ is surjective. Hence $f \in \mathrm{Sym}(G)$.

Next we check that $\chi$ is a group homomorphism. Let $g_1$ and $g_2$ be elements of $G$ and $f_i = \chi_{g_i}$. We have for any $x \in G$ that
$$f_1 \circ f_2(x) = f_1(f_2(x)) = f_1(g_2 x) = g_1 g_2 x = \chi_{g_1 g_2}(x),$$
so that $\chi_{g_1} \circ \chi_{g_2} = \chi_{g_1 g_2}$, as claimed.

Finally, we check that $\chi$ is injective. Suppose that $g \in G$ satisfies $\chi_g = \mathrm{Id}$. This means that $\chi_g(x) = x$ for all $x$, so that $gx = x$ for all $x \in G$. Taking $x = 1_G$ gives $g = 1_G$. So $\chi$ is indeed injective.

3. Series 3

Let $G$ be a group. The commutator subgroup $[G, G]$ is the subgroup generated by the commutators $[a, b] = aba^{-1}b^{-1}$ of all elements $a, b$ in $G$.

Proposition 3.
(1) For any $a$ and $b$, we have $[a, b]^{-1} = [b, a]$, and $[a, b] = 1_G$ if and only if $a$ and $b$ commute.
(2) The subgroup $[G, G]$ is normal in $G$.
(3) The quotient $G^{\mathrm{ab}} = G/[G, G]$ is abelian.
(4) For any abelian group $A$ and morphism $\varphi : G \to A$, we have $[G, G] \subset \ker \varphi$.
(5) For any abelian group $A$ and morphism $\varphi : G \to A$, there exists a unique morphism $\psi : G^{\mathrm{ab}} \to A$ such that $\varphi = \psi \circ \pi$, where $\pi : G \to G^{\mathrm{ab}}$ denotes the projection from $G$ to $G^{\mathrm{ab}}$.

Proof. (1) We have $[a, b]^{-1} = (aba^{-1}b^{-1})^{-1} = bab^{-1}a^{-1} = [b, a]$. Moreover $[a, b] = 1$ if and only if $aba^{-1}b^{-1} = 1$, which by multiplying on the right first by $b$ and then by $a$ is equivalent to $ab = ba$.

(2) For all $a$ and $b$ in $G$ and for any $g \in G$, we have
$$g[a, b]g^{-1} = gaba^{-1}b^{-1}g^{-1} = (gag^{-1})(gbg^{-1})(ga^{-1}g^{-1})(gb^{-1}g^{-1}) = [gag^{-1}, gbg^{-1}],$$
so any conjugate of a commutator is a commutator. This implies that $g[G, G]g^{-1} \subset [G, G]$ for any $g \in G$, and therefore that $[G, G]$ is normal in $G$.

(3) Let $\pi : G \to G/[G, G]$ be the projection. For all $a$ and $b$ in $G$, we obtain
$$\pi([a, b]) = \pi(aba^{-1}b^{-1}) = [\pi(a), \pi(b)],$$
but $\pi([a, b]) = 1_{G/[G,G]}$ since $[a, b] \in [G, G]$, and so we see that $\pi(a)$ commutes with $\pi(b)$ for any $a$ and $b$. Since $\pi$ is surjective, this means that any element of $G/[G, G]$ commutes with any other, or in other words that the quotient is abelian.

(4) Since $A$ is abelian we find $\varphi([a, b]) = [\varphi(a), \varphi(b)] = 1_A$ for any $a$ and $b$ in $G$. So all commutators belong to $\ker \varphi$, and therefore the subgroup $[G, G]$ that they generate is also contained in $\ker \varphi$.

(5) By the fundamental property of quotients, since $[G, G] \subset \ker \varphi$, there exists a unique homomorphism $\psi$ with $\varphi = \psi \circ \pi$.

4. Series 4

4.1. Exercise 5.

Proposition 4. Let $A$ be a finite simple abelian group.
(1) There exists $x \in A$ different from $1$ such that $A$ is generated by $x$.
(2) There exists a prime number $p$ such that $A$ is isomorphic to $\mathbf{Z}/p\mathbf{Z}$. Conversely, $\mathbf{Z}/p\mathbf{Z}$ is a simple group if $p$ is prime.

Proof. (1) Since $A$ is simple, we have $A \ne \{1_A\}$. Let $x \in A$ be different from $1_A$. The cyclic subgroup generated by $x$ is normal in $A$, since $A$ is abelian, and it is not equal to $\{1_A\}$ (it contains $x$), therefore it is equal to $A$.

(2) Let $x \ne 1_A$ be fixed. Consider the homomorphism $\psi : \mathbf{Z} \to A$, $n \mapsto x^n$, which is surjective from what we just saw. Let $H \subset \mathbf{Z}$ be its kernel. By the fundamental property of quotients, we get an isomorphism $\mathbf{Z}/H \to A$, and since $H = k\mathbf{Z}$ for some $k \ge 1$ (note that $H \ne \{0\}$ because $A$ is finite), we deduce that there exists $k \ge 1$ and an isomorphism $A \simeq \mathbf{Z}/k\mathbf{Z}$. We need to check that $k$ is a prime number. If not, write $k = de$ with both $d > 1$ and $e > 1$. Consider the element $y = \psi(d) \in A$. Then $y \ne 1_A$ (since $d \notin k\mathbf{Z} = H$), but we will see that the subgroup generated by $y$ does not contain $x$, which contradicts the simplicity of $A$ (this subgroup is normal, non-trivial and proper). To see this, note that if $y^m = x$, we get $x = \psi(1) = y^m = \psi(md)$, so that $1 - md \in H = k\mathbf{Z}$, or $md + kn = 1$ for some $n \in \mathbf{Z}$. But this would imply that $d$ divides $1$ (since $d$ divides $k$), which is impossible.

Conversely, if $A = \mathbf{Z}/p\mathbf{Z}$ with $p$ prime and $H \subset A$ is any subgroup with $H \ne \{0\}$, then any element $x \ne 0$ of $H$ generates a subgroup whose order is different from $1$ (it contains $0$ and $x$) and divides $p$ (by Lagrange's theorem), hence is equal to $p$. This means that $H = A$, and so $A$ is simple.
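As a small illustration of Proposition 4 (not part of the original solution), here is a Python sketch that lists the subgroups of $\mathbf{Z}/n\mathbf{Z}$ (each is generated by a divisor $d$ of $n$) and confirms, for small $n$, that $\mathbf{Z}/n\mathbf{Z}$ is simple exactly when $n$ is prime.

```python
def subgroups_of_Z_mod_n(n):
    """Subgroups of Z/nZ, each returned as a frozenset of residues.
    Every subgroup is cyclic, generated by a divisor d of n."""
    subs = set()
    for d in range(1, n + 1):
        if n % d == 0:
            subs.add(frozenset((d * k) % n for k in range(n)))
    return subs

def is_simple(n):
    """Z/nZ is simple iff its only subgroups are {0} and the whole group
    (every subgroup is normal, since the group is abelian)."""
    trivial = frozenset({0})
    whole = frozenset(range(n))
    return n > 1 and all(H in (trivial, whole) for H in subgroups_of_Z_mod_n(n))

def is_prime(n):
    return n > 1 and all(n % k for k in range(2, int(n ** 0.5) + 1))

for n in range(2, 40):
    assert is_simple(n) == is_prime(n)
print("Z/nZ simple <=> n prime, checked for 2 <= n < 40")
```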

4.2. Exercise 6.

Proposition 5. (1) Let $f : G \to H$ be a group homomorphism. Then
$$1 \to G \xrightarrow{f} H$$
is exact if and only if $f$ is injective, and
$$G \xrightarrow{f} H \to 1$$
is exact if and only if $f$ is surjective.
(2) Let
$$1 \to H \to G \to K \to 1$$
be an exact sequence of groups. There exists a normal subgroup $H'$ of $G$ isomorphic to $H$ such that $G/H'$ is isomorphic to $K$.

Proof. (1) By definition, the sequence $1 \to G \xrightarrow{f} H$ is exact if and only if $\mathrm{Im}(1 \to G) = \ker(f)$. But $\mathrm{Im}(1 \to G) = \{1_G\}$ for any group $G$, so that the sequence is exact if and only if $\ker(f) = \{1_G\}$, if and only if $f$ is injective.

Similarly, the sequence $G \xrightarrow{f} H \to 1$ is exact if and only if $\mathrm{Im}(f) = \ker(H \to 1)$, and since $\ker(H \to 1) = H$, this holds if and only if $f$ is surjective.

(2) Define $H'$ as the image of the homomorphism $f_1 : H \to G$ of the exact sequence. Since this homomorphism is injective (by (1)), the map $f_1$ gives an isomorphism $f_1 : H \to H'$. Since $H' = \mathrm{Im}(f_1) = \ker(G \to K)$, it follows that $H'$ is normal in $G$. Consider the homomorphism $f_2 : G \to K$. Since it satisfies $\ker(f_2) = H'$, the fundamental property of quotients shows that it induces an isomorphism $G/H' \to \mathrm{Im}(f_2)$, and since $f_2$ is surjective (by (1) again, applied to $G \to K \to 1$), we deduce that $f_2$ induces an isomorphism $G/H' \to K$.

5. Series 5

For a prime number $p$, a $p$-group $G$ is a group whose order is a power of $p$. A $p$-Sylow subgroup of a finite group $G$ is a subgroup of order $p^k$ where $p^k$ is the highest power of $p$ dividing $|G|$.

Proposition 6. Let $G$ be a finite group and $p$ a prime number.
(1) There exists at least one $p$-Sylow subgroup of $G$.
(2) Any two $p$-Sylow subgroups $H_1$ and $H_2$ of $G$ are conjugate: there exists $g \in G$ such that $H_2 = gH_1g^{-1}$.
(3) The number $n_p$ of $p$-Sylow subgroups of $G$ satisfies $n_p \equiv 1 \pmod{p}$, or in other words, $n_p - 1$ is divisible by $p$.

Proof of (1). We write $|G| = p^n h$ where $p$ does not divide $h$ and $n \ge 0$. We let
$$\mathcal{P} = \{I \subset G \mid |I| = p^n\}$$
(elements of $\mathcal{P}$ are subsets of $G$, not necessarily subgroups).

(a) The group $G$ acts on $\mathcal{P}$ by $g \cdot I = gI$. Indeed, since any $g \in G$ acts bijectively on $G$ by multiplication on the left, we have $|gI| = |I| = p^n$ for any $I \in \mathcal{P}$. Furthermore, by definition we have $g(g'I) = (gg')I$ and $1_G \cdot I = I$, so this is indeed an action.

(b) The size of $\mathcal{P}$ is
$$|\mathcal{P}| = \binom{p^n h}{p^n} = \frac{p^n h (p^n h - 1) \cdots (p^n h - p^n + 1)}{(p^n)!} = \prod_{i=0}^{p^n - 1} \frac{p^n h - i}{p^n - i}.$$
We claim that each factor $(hp^n - i)/(p^n - i)$ is a rational number of the form $a_i/b_i$ with $a_i$ and $b_i$ both not divisible by $p$. For $i = 0$ the factor equals $h$, which is not divisible by $p$. For $i \ge 1$, if we write $i = p^k j$ with $j \ge 1$ not divisible by $p$, we have $k < n$ (since $i \le p^n - 1$), and so we get
$$p^n - i = p^k(p^{n-k} - j), \qquad hp^n - i = p^k(hp^{n-k} - j),$$
so that
$$\frac{p^n h - i}{p^n - i} = \frac{p^{n-k} h - j}{p^{n-k} - j}.$$
Here the numerator and the denominator on the right are both not divisible by $p$ (otherwise $p$ would also divide $j$, since $n - k \ge 1$, which contradicts the definition of $j$). So this representation has the desired form.

Now if we write
$$|\mathcal{P}| = \prod_{i=0}^{p^n - 1} \frac{a_i}{b_i},$$
we deduce that $B |\mathcal{P}| = A$ where $B$ (resp. $A$) is the product of the $b_i$'s (resp. the $a_i$'s). Since no factor is divisible by $p$, we see that $A$ and $B$ are not divisible by $p$. This implies that $|\mathcal{P}|$ is also not divisible by $p$.

The decomposition of $\mathcal{P}$ in orbits shows that
$$|\mathcal{P}| = \sum_{O \in G \backslash \mathcal{P}} |O|,$$
and since the left-hand side is not divisible by $p$, at least one orbit $O_0$ must have order not divisible by $p$. On the other hand, for such an orbit and any fixed $I_0 \in O_0$, we have
$$|O_0| = \frac{|G|}{|\mathrm{Stab}_G(I_0)|}$$
by the orbit-stabilizer theorem, so that $|O_0|$ divides $|G| = p^n h$. As $|O_0|$ is not divisible by $p$, it follows that $|O_0|$ divides $h$.

(c) For any $g \in G$ and any $x_0 \in I_0$, we have $g \in g'I_0$ where $g' = g x_0^{-1}$. This implies that the union of all elements $gI_0$ of the orbit $O_0$ is the whole set $G$. In particular, from
$$G = \bigcup_{I \in O_0} I$$
we get $|G| \le |O_0| \, p^n$, and therefore
$$|O_0| \ge \frac{|G|}{p^n} = h.$$
Comparing with (b), it follows that $|O_0| = h$, and then by the orbit-stabilizer theorem, that
$$|\mathrm{Stab}_G(I_0)| = \frac{|G|}{|O_0|} = p^n.$$
In particular, the stabilizer $\mathrm{Stab}_G(I_0)$ is a $p$-Sylow subgroup of $G$.
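Before continuing with the proofs of (2) and (3), here is a brute-force numerical illustration of Proposition 6 (a sketch, not part of the original argument): for $G = S_4$, of order $24 = 2^3 \cdot 3$, it lists the $3$-Sylow subgroups, and checks that their number is $\equiv 1 \pmod{3}$ and that any two of them are conjugate. Permutations are represented as tuples, using only the standard library.

```python
from itertools import permutations

def compose(s, t):
    return tuple(s[t[i]] for i in range(len(t)))

def inverse(s):
    inv = [0] * len(s)
    for i, si in enumerate(s):
        inv[si] = i
    return tuple(inv)

n = 4
G = list(permutations(range(n)))      # S_4, of order 24 = 2^3 * 3
identity = tuple(range(n))

def order(g):
    k, h = 1, g
    while h != identity:
        h, k = compose(g, h), k + 1
    return k

# A 3-Sylow subgroup has order 3, hence is generated by an element of order 3.
sylow3 = set()
for g in G:
    if order(g) == 3:
        sylow3.add(frozenset({identity, g, compose(g, g)}))

n3 = len(sylow3)
print("number of 3-Sylow subgroups of S_4:", n3)   # 4
assert n3 % 3 == 1                                 # n_p = 1 (mod p)

# any two 3-Sylow subgroups are conjugate
P0 = next(iter(sylow3))
for P in sylow3:
    assert any(frozenset(compose(compose(g, x), inverse(g)) for x in P0) == P
               for g in G)
print("all 3-Sylow subgroups are conjugate in S_4")
```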

Proof of (2). We write again $|G| = p^n h$ where $p$ does not divide $h$ and $n \ge 0$. We fix a $p$-Sylow subgroup $P$ of $G$. Let $Q \subset G$ be any $p$-subgroup of $G$.

(d) We know that $G$ acts on $G/P$ by $x \cdot gP = xgP$ for any $x \in G$ and $g \in G$. Restricting to $Q$, we obtain an action of $Q$ on $G/P$.

(e) For any orbit $O$ of this action, the size of the orbit divides $|Q|$ (it is the index of the stabilizer $\mathrm{Stab}_Q(x_0 P)$ of any element $x_0 P$ of the orbit). In particular, since $|Q|$ is a power of $p$, this size is either $1$ or is divisible by $p$. Writing
$$|G/P| = \sum_O |O|,$$
where the sum is over the orbits of $Q$ on $G/P$, and noting that $|G/P| = h$ is not divisible by $p$, it follows that some orbit has size not divisible by $p$. Such an orbit $O_0$ must be of size $1$, and so $O_0 = \{g_0 P\}$ for some $g_0 \in G$. But then $g_0 P$ is a fixed point for the action of $Q$ on $G/P$. Writing $x g_0 P = g_0 P$ for all $x \in Q$, we deduce that $g_0^{-1} x g_0 \in P$ for all $x \in Q$, or in other words that $g_0^{-1} Q g_0 \subset P$. In particular $Q$ is contained in a conjugate of $P$. If $Q$ is also a $p$-Sylow subgroup, it follows by comparing orders that $g_0^{-1} Q g_0 = P$, so $Q$ is conjugate to $P$.

Proof of (3). We write again $|G| = p^n h$ where $p$ does not divide $h$ and $n \ge 0$. We fix a $p$-Sylow subgroup $P$ of $G$.

(f) Let $X$ be the set of $p$-Sylow subgroups of $G$. Defining $g \cdot Q = gQg^{-1}$ for $g \in G$ and $Q \in X$, we obtain an action of $G$ on $X$ (a conjugate of a $p$-Sylow subgroup being also one). Restricting to $P$ gives an action of $P$ on $X$.

(g) A $p$-Sylow subgroup $Q \in X$ is a fixed point for this action of $P$ if and only if $gQg^{-1} = Q$ for all $g \in P$. By (2), we know that $Q = xPx^{-1}$ is a conjugate of $P$, and the condition becomes $gxPx^{-1}g^{-1} = xPx^{-1}$ for all $g \in P$. This is equivalent to $x^{-1}gx \in N_G(P)$ for all $g \in P$ (where $N_G(P)$ is the normalizer of $P$ in $G$), or in other words to $x^{-1}Px \subset N_G(P)$. However, $x^{-1}Px$ is then a subgroup of order $p^n$ in $N_G(P)$; since the order of $N_G(P)$ divides that of $G$, this means that $x^{-1}Px$ is a $p$-Sylow subgroup of $N_G(P)$. But $P \subset N_G(P)$ is also (for the same reason) a $p$-Sylow subgroup of $N_G(P)$. Hence, by (2) applied to the group $N_G(P)$, the subgroups $P$ and $x^{-1}Px$ are conjugate in $N_G(P)$: there exists $m \in N_G(P)$ such that $mPm^{-1} = x^{-1}Px$. But by definition of the normalizer, we have $mPm^{-1} = P$, and we deduce that $Q = xPx^{-1} = P$.

Since conversely the subgroup $P$ is a fixed point for the action of $P$ on $X$, we see that this action has the unique fixed point $P$. In general, any orbit of $P$ for this action, as in the previous proof, has order dividing $|P|$, hence either is a single fixed point, or has order divisible by $p$.

Applying the class counting formula, we find that
$$n_p = |X| = 1 + \sum_O |O|,$$
where $O$ runs over the orbits of $P$ in $X$ of size divisible by $p$. Therefore $n_p - 1$ is divisible by $p$.

6. Series 6

We define a set
$$X = \{(f, \varepsilon) \mid \varepsilon > 0,\ f : \,]-\varepsilon, \varepsilon[\, \to \mathbf{C} \text{ a continuous map}\}.$$
We then define a relation on $X$ by $(f_1, \varepsilon_1) \sim (f_2, \varepsilon_2)$ if there exists $\varepsilon > 0$ such that $\varepsilon \le \min(\varepsilon_1, \varepsilon_2)$ and $f_1|_{]-\varepsilon, \varepsilon[} = f_2|_{]-\varepsilon, \varepsilon[}$.

Proposition 7. (1) The relation $\sim$ is an equivalence relation. We denote $A = X/\sim$.
(2) Defining
$$[(f_1, \varepsilon_1)] + [(f_2, \varepsilon_2)] = [(f_1|_{]-\varepsilon, \varepsilon[} + f_2|_{]-\varepsilon, \varepsilon[}, \varepsilon)], \qquad [(f_1, \varepsilon_1)] \cdot [(f_2, \varepsilon_2)] = [(f_1|_{]-\varepsilon, \varepsilon[} \cdot f_2|_{]-\varepsilon, \varepsilon[}, \varepsilon)],$$
where $\varepsilon = \min(\varepsilon_1, \varepsilon_2)$, and $0_A = [(0, 1)]$ and $1_A = [(1, 1)]$, one obtains a ring structure on $A$.
(3) The map $(f, \varepsilon) \mapsto f(0)$ induces a ring homomorphism $A \to \mathbf{C}$. The set $I = \{[(f, \varepsilon)] \mid f(0) = 0\}$ is a maximal ideal of $A$ with $A/I$ isomorphic to $\mathbf{C}$.
(4) We have $A^\times = A \setminus I$ and $I$ is the unique maximal ideal of $A$.

Proof. (1) It is clear by definition that $(f, \varepsilon) \sim (f, \varepsilon)$ for any $(f, \varepsilon) \in X$, and that $(f_1, \varepsilon_1) \sim (f_2, \varepsilon_2)$ if and only if $(f_2, \varepsilon_2) \sim (f_1, \varepsilon_1)$. To show transitivity, assume $(f_1, \varepsilon_1) \sim (f_2, \varepsilon_2)$ and $(f_2, \varepsilon_2) \sim (f_3, \varepsilon_3)$. There exist then $\eta_1 > 0$ and $\eta_2 > 0$ such that
$$f_1|_{]-\eta_1, \eta_1[} = f_2|_{]-\eta_1, \eta_1[}, \qquad f_2|_{]-\eta_2, \eta_2[} = f_3|_{]-\eta_2, \eta_2[}.$$
Define $\varepsilon = \min(\eta_1, \eta_2)$. Then we get $f_1|_{]-\varepsilon, \varepsilon[} = f_3|_{]-\varepsilon, \varepsilon[}$, so that $(f_1, \varepsilon_1) \sim (f_3, \varepsilon_3)$.

(2) We need first to check that the addition and multiplication are well-defined. Suppose $(g_i, \eta_i) \sim (f_i, \varepsilon_i)$ for $i = 1, 2$. Then there exists $\eta > 0$, $\eta \le \min(\varepsilon_1, \varepsilon_2, \eta_1, \eta_2)$, such that $f_i|_{]-\eta, \eta[} = g_i|_{]-\eta, \eta[}$. In particular, the functions $f_1 + f_2$ and $g_1 + g_2$ (resp. $f_1 f_2$ and $g_1 g_2$) can be defined and coincide on $]-\eta, \eta[$. The class in $A$ of the element $(f_1|_{]-\eta, \eta[} + f_2|_{]-\eta, \eta[}, \eta)$ coincides with that of $[(f_1|_{]-\varepsilon, \varepsilon[} + f_2|_{]-\varepsilon, \varepsilon[}, \varepsilon)]$ where $\varepsilon = \min(\varepsilon_1, \varepsilon_2)$, as well as with that of $[(g_1|_{]-\varepsilon', \varepsilon'[} + g_2|_{]-\varepsilon', \varepsilon'[}, \varepsilon')]$, where $\varepsilon' = \min(\eta_1, \eta_2)$, and this means that $[(f_1, \varepsilon_1)] + [(f_2, \varepsilon_2)]$ is well-defined. Similarly $[(f_1, \varepsilon_1)] \cdot [(f_2, \varepsilon_2)]$ is well-defined.

Among the axioms for rings, we just check that addition is associative. Let $x_i = [(f_i, \varepsilon_i)]$, $1 \le i \le 3$, be elements of $A$. Define $\varepsilon = \min(\varepsilon_1, \varepsilon_2, \varepsilon_3)$. Then one sees from the definition that $x_1 + (x_2 + x_3)$ is the class of
$$(f_1|_{]-\varepsilon, \varepsilon[} + (f_2|_{]-\varepsilon, \varepsilon[} + f_3|_{]-\varepsilon, \varepsilon[}), \varepsilon).$$
However, by associativity of the addition of functions, this is the same as the class of
$$((f_1|_{]-\varepsilon, \varepsilon[} + f_2|_{]-\varepsilon, \varepsilon[}) + f_3|_{]-\varepsilon, \varepsilon[}, \varepsilon),$$
which is $(x_1 + x_2) + x_3$. The other axioms are checked in the same way.

(3) Let $l(f, \varepsilon) = f(0)$ for $(f, \varepsilon) \in X$. To check that this induces a ring homomorphism $A \to \mathbf{C}$, we must check that if $(f, \varepsilon) \sim (g, \eta)$, we have $l(f, \varepsilon) = l(g, \eta)$. But this is clear since $f$ and $g$ are continuous functions coinciding on a neighborhood of $0$. The set $I$ is just the kernel of the ring homomorphism induced by $l$, hence it is an ideal. The constant functions $x_t = (t, 1) \in X$ for $t \in \mathbf{C}$ satisfy $l(t, 1) = t$, which shows that $l$ is surjective, and hence that it induces an isomorphism $A/I \to \mathbf{C}$. Since $\mathbf{C}$ is a field, this implies also that $I$ is a maximal ideal.

(4) If $x = [(f, \varepsilon)] \in I$, then $x$ is not invertible in $A$, since it maps to $0$ in the field $\mathbf{C}$. Conversely, suppose $x \notin I$. Then $f(0) \ne 0$, hence by continuity there exists $\eta > 0$ such that $f(t) \ne 0$ for $|t| < \eta$. In that case $g = 1/f$ is well-defined on $]-\eta, \eta[$, and by definition $(g, \eta) \in X$ satisfies $[(f, \varepsilon)] \cdot [(g, \eta)] = [(1, 1)] = 1_A$ in $A$. Thus $x \in A^\times$. This shows that $A^\times = A \setminus I$.

From this it follows that $I$ is the only maximal ideal in $A$. Indeed, let $J$ be a maximal ideal. If $J$ is not equal to $I$, then $J$ is not contained in $I$ (because $J$ is maximal, and so $J \subset I$ would imply $J = I$). So there exists $x \in J \setminus I \subset A \setminus I = A^\times$, which implies however that $J = A$, contradicting the assumption that $J$ is a proper ideal.

7. Series 7

7.1. Exercise 1.

Proposition 8. Let $R$ be a ring, and
$$0 \to L \xrightarrow{\alpha} M \xrightarrow{\beta} N \to 0$$
an exact sequence of $R$-modules. Then $N$ is determined by $\alpha$ up to isomorphism of $R$-modules; precisely, $N$ is isomorphic to $M/\mathrm{Im}(\alpha)$.

Proof. Exactly as in the case of Series 4, Exercise 6, one sees that $\beta$ induces an isomorphism of $R$-modules $M/\mathrm{Im}(\alpha) \to N$.

Example 9. Let $R = \mathbf{Z}$, $L = \mathbf{Z}$ and
$$M = \mathbf{Z} \oplus \bigoplus_{i \ge 1} \mathbf{Z}/2\mathbf{Z}.$$

Defining $\alpha(n) = (2n, 0)$, we obtain an exact sequence
$$0 \to L \to M \to M/\mathrm{Im}(\alpha) \to 0$$
where $M/\mathrm{Im}(\alpha)$ is isomorphic to
$$\bigoplus_{i \ge 0} \mathbf{Z}/2\mathbf{Z}$$
by $(n, (n_i)) \mapsto (\pi(n), (n_i))$, where $n \in \mathbf{Z}$, $n_i \in \mathbf{Z}/2\mathbf{Z}$ for $i \ge 1$, and $\pi : \mathbf{Z} \to \mathbf{Z}/2\mathbf{Z}$ is the natural projection.

7.2. Exercise 4. Let $R$ be a commutative ring.

Proposition 10. (1) For any $R$-modules $M_1$, $M_2$, $N$ and any $R$-linear map $f : M_1 \to M_2$, the map
$$f^* : \mathrm{Hom}_R(M_2, N) \to \mathrm{Hom}_R(M_1, N), \qquad g \mapsto g \circ f,$$
is $R$-linear. It satisfies
$$(f_2 \circ f_1)^* = f_1^* \circ f_2^*, \qquad \mathrm{Id}_M^* = \mathrm{Id}_{\mathrm{Hom}_R(M, N)}$$
for any $R$-module $M$.
(2) There is a natural isomorphism
$$\mathrm{Hom}_R(M_1 \oplus M_2, N) \to \mathrm{Hom}_R(M_1, N) \oplus \mathrm{Hom}_R(M_2, N)$$
for any $R$-modules $M_1$, $M_2$, $N$.
(3) If $0 \to A \to B \to C \to 0$ is an exact sequence of $R$-modules, then the sequence
$$0 \to \mathrm{Hom}_R(C, N) \to \mathrm{Hom}_R(B, N) \to \mathrm{Hom}_R(A, N)$$
is exact.
(4) Let $A = \mathrm{End}_R(M)$ where $M$ is an $R$-vector space of countable dimension. Then $A^2$ is isomorphic to $A$ as an $A$-module.

Proof. (1) For $g_1$ and $g_2$ in $\mathrm{Hom}_R(M_2, N)$, the $R$-linear map $f^*(g_1 + g_2)$ sends $x \in M_1$ to
$$(g_1 + g_2)(f(x)) = g_1(f(x)) + g_2(f(x)) = f^*(g_1)(x) + f^*(g_2)(x),$$
so that $f^*(g_1 + g_2) = f^*g_1 + f^*g_2$. Similarly, we see that $f^*(tg) = t f^*(g)$ if $t \in R$ and $g \in \mathrm{Hom}_R(M_2, N)$. So the map $f^*$ is $R$-linear.

Given $f_1 : M_1 \to M_2$ and $f_2 : M_2 \to M_3$, putting $f = f_2 \circ f_1$, both $f^*$ and $f_1^* \circ f_2^*$ are $R$-linear maps from $\mathrm{Hom}_R(M_3, N)$ to $\mathrm{Hom}_R(M_1, N)$. We have
$$f^*(g) = g \circ f = g \circ f_2 \circ f_1 = (g \circ f_2) \circ f_1 = f_1^*(g \circ f_2) = f_1^*(f_2^*(g))$$
for any $g \in \mathrm{Hom}_R(M_3, N)$. This means that $(f_2 \circ f_1)^* = f_1^* \circ f_2^*$. Since $g \circ \mathrm{Id} = g$, we also see that $\mathrm{Id}_M^* = \mathrm{Id}_{\mathrm{Hom}_R(M, N)}$.

(2) Both $M_1$ and $M_2$ are submodules of $M_1 \oplus M_2$, and so we may restrict any linear map $g : M_1 \oplus M_2 \to N$ to either of them and obtain a pair $(g_1, g_2)$ of linear maps $g_i : M_i \to N$.

This pair can be seen as an element of $\mathrm{Hom}_R(M_1, N) \oplus \mathrm{Hom}_R(M_2, N)$, and this defines a map
$$\gamma : \mathrm{Hom}_R(M_1 \oplus M_2, N) \to \mathrm{Hom}_R(M_1, N) \oplus \mathrm{Hom}_R(M_2, N).$$
It is a consequence of the definitions that $\gamma$ is itself $R$-linear. Now we check that $\gamma$ is an isomorphism. First, if $\gamma(g) = 0$, then $g$ is zero restricted to $M_1$ and to $M_2$. Since $M_1$ and $M_2$ together span $M_1 \oplus M_2$, this means that $g = 0$. Therefore $\gamma$ is injective. Next, given $(g_1, g_2) \in \mathrm{Hom}_R(M_1, N) \oplus \mathrm{Hom}_R(M_2, N)$, we can define a unique $R$-linear map $g$ on $M_1 \oplus M_2$ by asking that it coincide with $g_i$ on $M_i$ (because this is a direct sum). Then $\gamma(g) = (g_1, g_2)$, so $\gamma$ is surjective.

(3) Given the exact sequence
$$0 \to A \xrightarrow{\alpha} B \xrightarrow{\beta} C \to 0,$$
we obtain a sequence of maps
$$0 \to \mathrm{Hom}_R(C, N) \xrightarrow{\beta^*} \mathrm{Hom}_R(B, N) \xrightarrow{\alpha^*} \mathrm{Hom}_R(A, N) \to 0$$
by (1). The claim is that it is exact, except for the last step, that is, except that $\alpha^*$ is not necessarily surjective.

We first prove that $\beta^*$ is injective, which means that the beginning $0 \to \mathrm{Hom}_R(C, N) \xrightarrow{\beta^*} \mathrm{Hom}_R(B, N)$ of the sequence is exact. Thus let $g \in \mathrm{Hom}_R(C, N)$ be such that $\beta^*(g) = g \circ \beta = 0$. This means that $g$ is zero on the image of $\beta$. But $\beta$ is surjective, and hence $g$ is zero on $C$. This proves the injectivity.

We next prove that
$$\mathrm{Hom}_R(C, N) \xrightarrow{\beta^*} \mathrm{Hom}_R(B, N) \xrightarrow{\alpha^*} \mathrm{Hom}_R(A, N)$$
is exact. We have $\alpha^* \circ \beta^* = (\beta \circ \alpha)^* = 0^* = 0$ by (1) again, so the image of $\beta^*$ is contained in the kernel of $\alpha^*$. To show the converse inclusion, let $g \in \mathrm{Hom}_R(B, N)$ be such that $\alpha^*(g) = 0$. This means that $g \circ \alpha = 0$, so that $g$ vanishes on the image of $\alpha$. By exactness of the original sequence, this image is the kernel of $\beta$. So $\ker(\beta) \subset \ker(g)$, and hence by the fundamental property of quotients, there exists an $R$-linear map $\bar{g} : B/\ker(\beta) \to N$ such that $\bar{g} \circ \pi = g$, where $\pi : B \to B/\ker(\beta)$ is the natural projection. On the other hand, $B/\ker(\beta)$ is isomorphic, by the map induced by $\beta$, to $\mathrm{Im}(\beta) = C$ (since $\beta$ is surjective by exactness). So we obtain a map $g' : C \to N$ such that $g' \circ \beta = g$. This means $g = \beta^*(g')$, so $g$ belongs to the image of $\beta^*$, which proves the desired inclusion. (Note that in fact we did not use the exactness of $0 \to A \to B$ in this argument.)

(4) Let $M$ be a countably infinite dimensional $R$-vector space with $(e_i)_{i \in \mathbf{Z}}$ as a basis. We have an isomorphism $\alpha : M \to M \oplus M$ defined by
$$\alpha((n_i)_{i \in \mathbf{Z}}) = ((m_i), (m'_i))$$

where $m_i = n_{2i}$ and $m'_i = n_{2i+1}$. The inverse $\theta$ of $\alpha$ is given by $\theta((m_i), (m'_i)) = (n_i)$, where
$$n_i = \begin{cases} m_{i/2} & \text{if } i \text{ is even}, \\ m'_{(i-1)/2} & \text{if } i \text{ is odd} \end{cases}$$
(that $\alpha$ and $\theta$ are inverses of each other is checked by just computing the compositions $\alpha \circ \theta$ and $\theta \circ \alpha$).

From (2) applied with $M_1 = M_2 = M$ and $N = M$, we deduce that there is an isomorphism of $R$-vector spaces
$$\mathrm{Hom}_R(M, M) \to \mathrm{Hom}_R(M \oplus M, M) \to \mathrm{Hom}_R(M, M) \oplus \mathrm{Hom}_R(M, M),$$
where the first map is $\theta^*$. Note that, as $R$-vector spaces, we have $A = \mathrm{Hom}_R(M, M)$. So to prove that $A$ is isomorphic to $A^2$ as an $A$-module, it suffices to check that this isomorphism of vector spaces is in fact an isomorphism of $A$-modules. For this purpose, it suffices to check that the map is $A$-linear. We can view $\mathrm{Hom}_R(M \oplus M, M)$ as an $A$-module by $f \cdot g = f \circ g$ for $g : M \oplus M \to M$ and $f \in A$. It then suffices (by composition) to check that $\theta^*$ is $A$-linear, and that the map
$$\xi : \mathrm{Hom}_R(M \oplus M, M) \to \mathrm{Hom}_R(M, M) \oplus \mathrm{Hom}_R(M, M)$$
is also $A$-linear. The case of $\theta^*$ is easy, since the product in $A$ also corresponds to composition: for $f_1, f_2 \in A$, we get
$$\theta^*(f_1 \circ f_2) = f_1 \circ f_2 \circ \theta = f_1 \circ (f_2 \circ \theta) = f_1 \circ (\theta^*(f_2)) = f_1 \cdot \theta^*(f_2)$$
by the definition of the $A$-module structure on $\mathrm{Hom}_R(M \oplus M, M)$. For the second map, note that the construction in (2) shows that $\xi = (j_1^*, j_2^*)$, where $j_i$ is the $i$-th injection $M \to M \oplus M$. For $f \in A$ and $g \in \mathrm{Hom}_R(M \oplus M, M)$, we get
$$\xi(f \cdot g) = \xi(f \circ g) = (j_1^*(f \circ g), j_2^*(f \circ g)) = (f \circ g \circ j_1, f \circ g \circ j_2) = f \cdot (g \circ j_1, g \circ j_2) = f \cdot \xi(g),$$
proving the $A$-linearity.

Example 11. The following example shows that exactness does not hold for the last step of the sequence in the previous proposition. Let $R = \mathbf{Z}$ and consider $A = \mathbf{Z}/2\mathbf{Z}$, $B = \mathbf{Z}/4\mathbf{Z}$, $C = \mathbf{Z}/2\mathbf{Z}$ with
$$0 \to A \xrightarrow{\alpha} B \xrightarrow{\beta} C \to 0,$$
defined by $\alpha(x) = 2x$ and $\beta(x) = x \bmod 2$. (Check that these are well-defined and that the sequence is exact.)

The claim is that
$$\mathrm{Hom}_{\mathbf{Z}}(B, N) \xrightarrow{\alpha^*} \mathrm{Hom}_{\mathbf{Z}}(A, N) \to 0$$
is not always exact. We take simply $N = A$ and we claim that the element $h = \mathrm{Id}_A \in \mathrm{Hom}_{\mathbf{Z}}(A, N)$ is not in the image of $\alpha^*$. This will prove that $\alpha^*$ is not surjective in this case. Indeed, if $g : \mathbf{Z}/4\mathbf{Z} \to \mathbf{Z}/2\mathbf{Z} = N$ satisfies $\alpha^*(g) = g \circ \alpha = h$, then in particular $g$ must be surjective. So its kernel is a subgroup of order $2$ of $\mathbf{Z}/4\mathbf{Z}$. However, the only such subgroup is $\{0, 2\}$, and this is also the image of $\alpha$. Therefore $g \circ \alpha = 0$, which is a contradiction!

8. Series 8

Let $A$ be a commutative ring and
$$V = \{(a_i)_{i \ge 0} \mid a_i \in A,\ a_i = 0 \text{ for } i \text{ large enough}\}.$$
We define an $A$-module structure on $V$ by componentwise sum and scalar multiplication. For $a = (a_i) \in V$ and $b = (b_i) \in V$, we define $ab = (c_i)$ where
$$(1) \qquad c_i = \sum_{\substack{j, k \ge 0 \\ j + k = i}} a_j b_k = a_0 b_i + a_1 b_{i-1} + \cdots + a_i b_0.$$

Proposition 12. (1) The product is well-defined.
(2) The element $1_V = (1, 0, 0, \dots)$ is a neutral element for the product on $V$ and $V$ is a commutative ring with this product.
(3) Let $Y$ be the element $(\alpha_i)$ with $\alpha_1 = 1$ and $\alpha_i = 0$ for $i \ne 1$. Then $Y^j$ is the element $(\beta_i)$ with $\beta_i = 0$ unless $i = j$, and $\beta_j = 1$. The elements $(1, Y, Y^2, \dots)$ form a basis of $V$ as an $A$-module.
(4) For any commutative ring $B$ and ring homomorphism $f_0 : A \to B$, and for any $b \in B$, there exists a unique ring homomorphism $f : V \to B$ such that $f(Y) = b$ and $f(a \cdot 1_V) = f_0(a)$ for $a \in A$.
(5) For any $A$-module $M$ and $A$-linear map $T : M \to M$, there exists a unique structure of $V$-module on $M$ such that $Y \cdot m = T(m)$ for $m \in M$ and $(a \cdot 1_V) \cdot m = am$ for $a \in A$ and $m \in M$.
(6) There exists an isomorphism $V \to A[X]$ such that $a \cdot 1_V \mapsto a$ for $a \in A$ and $Y \mapsto X$.

Proof. (1) To show that the product is well-defined, we must check that $c_i$ exists and that $c_i = 0$ for $i$ large enough. For the first part, note that if $j + k = i$ and $j, k \ge 0$, then $j \le i$ and $k \le i$, so the sum defining $c_i$ is a finite sum. For the second, assume that $j_0$ (resp. $k_0$) is such that $a_j = 0$ for $j > j_0$ (resp. $b_k = 0$ for $k > k_0$). Then consider $c_i$ where $i > j_0 + k_0$. If $j + k = i$, then either $j > j_0$ or $k > k_0$ (by contraposition), and so either $a_j = 0$ or $b_k = 0$. Hence, in that case, all terms in the sum defining $c_i$ are $0$, and so $c_i = 0$ for $i > j_0 + k_0$. So $(c_i)$ belongs to $V$.

(2) Let $x = (x_i) \in V$ be given. When computing the $i$-th component of $x \cdot 1_V$, we consider pairs $(j, k)$ with $j + k = i$; whenever $k \ge 1$, the $k$-th coefficient of $1_V$ is $0$, and so only $k = 0$ can lead to a non-zero contribution. But for $k = 0$, we have $j = i$, and so we find that $x \cdot 1_V = (x_i)_{i \ge 0} = x$. Since $A$ is commutative, we can exchange the roles of $j$ and $k$ in the formula (1) defining $c = ab$, and deduce that $ab = ba$. In particular we get $1_V \cdot x = x$.

We next show that the product is associative. For $a = (a_i)$, $b = (b_i)$ and $c = (c_i)$, we put $x = a(bc) = (x_i)$, and find that
$$x_i = \sum_{j + k = i} a_j \Bigl(\sum_{m + n = k} b_m c_n\Bigr) = \sum_{j + m + n = i} a_j b_m c_n$$
(where all indices $i$, $j$, $k$, $m$, $n$ are $\ge 0$). If we compute $(ab)c$, we get exactly the same expression, and so the product is associative.

We check distributivity with respect to addition to finish the proof that $V$ is a ring. Let $a = (a_i)$, $b = (b_i)$ and $c = (c_i)$ be elements of $V$. Then $a(b + c) = (x_i)$ with
$$x_i = \sum_{j + k = i} a_j(b_k + c_k) = \sum_{j + k = i} a_j b_k + \sum_{j + k = i} a_j c_k$$
(by definition of the addition), and this is the $i$-th coefficient of $ab + ac$. So we get $a(b + c) = ab + ac$, and then $(a + b)c = c(a + b) = ca + cb = ac + bc$.

(3) To prove the desired formula, let $e_i$ be the element of $V$ where only the $i$-th component is non-zero, and is equal to $1$. So $e_0 = 1_V$, $e_1 = Y$, and we need to prove that $e_j = Y^j$ for all $j \ge 0$. We proceed by induction on $j$. Since the formula is already true for $j \le 1$, we assume that $e_j = Y^j$ for some $j \ge 1$, and compute $Y^{j+1} = Y \cdot Y^j = Y \cdot e_j$ by induction. Let $Y \cdot e_j = (a_i)$. Then in the sum expressing $a_i$ for $i \ge 0$, only the pair of indices $(1, j)$ can lead to a non-zero contribution, since otherwise one of the coefficients of $Y$ or of $e_j$ is zero (see (1)). So only $a_{1+j}$ may be non-zero, and indeed $a_{1+j} = 1 \cdot 1 = 1$, so that we find indeed that $Y^{j+1} = e_{j+1}$. Since $(e_i)_{i \ge 0}$ is a basis of $V$ as an $A$-module, by definition, so is $(Y^j)_{j \ge 0}$.

(4) We define $f : V \to B$ by
$$(2) \qquad f\Bigl(\sum_{i \ge 0} a_i Y^i\Bigr) = \sum_{i \ge 0} f_0(a_i) b^i.$$
This is indeed well-defined since all but finitely many $a_i$ are zero, so that the sum is finite. We then have $f(a \cdot 1_V) = f(a Y^0) = f_0(a)$ and $f(Y) = b$ by definition. We claim that $f$ is a ring homomorphism. We leave to the reader to check that $f(x + y) = f(x) + f(y)$, and prove that $f(xy) = f(x) f(y)$. We have
$$xy = \sum_{i \ge 0} \Bigl(\sum_{j + k = i} x_j y_k\Bigr) Y^i,$$
hence
$$f(xy) = \sum_{i \ge 0} \Bigl(\sum_{j + k = i} f_0(x_j) f_0(y_k)\Bigr) b^i$$
since $f_0$ is a ring morphism. But we can rewrite this as
$$f(xy) = \sum_{i \ge 0} \sum_{j + k = i} \bigl(f_0(x_j) b^j\bigr)\bigl(f_0(y_k) b^k\bigr) = \Bigl(\sum_{j \ge 0} f_0(x_j) b^j\Bigr)\Bigl(\sum_{k \ge 0} f_0(y_k) b^k\Bigr) = f(x) f(y).$$
For uniqueness, note that if $f$ exists, it has to be given by the formula (2).

(5) We define the $V$-module structure on $M$ by
$$\Bigl(\sum_{i \ge 0} a_i Y^i\Bigr) \cdot m = \sum_{i \ge 0} a_i T^i(m)$$

for $m \in M$ (which is also the only possibility). We just check that $(xy) \cdot m = x \cdot (y \cdot m)$, leaving the other conditions as easy checks. We have
$$(xy) \cdot m = \sum_{i \ge 0} \Bigl(\sum_{j + k = i} x_j y_k\Bigr) T^i(m) = \sum_{i \ge 0} \sum_{j + k = i} x_j y_k \, T^j(T^k(m)) = \sum_{j \ge 0} x_j T^j\Bigl(\sum_{k \ge 0} y_k T^k(m)\Bigr)$$
by linearity of $T$. This is also $x \cdot (y \cdot m)$, as desired.

(6) Consider $B = A[X]$ and $b = X \in B$. Let $f_0 : A \to B$ be the inclusion of $A$ in $A[X]$. Then by (4) we get a ring morphism $f : V \to A[X]$ such that $f(Y) = X$ and $f(a \cdot 1_V) = a$. It is given by
$$f\Bigl(\sum_{i \ge 0} a_i Y^i\Bigr) = \sum_{i \ge 0} a_i X^i.$$
By definition of polynomials, this map is injective (since a polynomial is zero if and only if all its coefficients are zero). It is also surjective, and so $f$ is an isomorphism.

9. Series 9

Proposition 13. Let $K$ be a field.
(1) Any non-zero polynomial $P \in K[X]$ of degree $d$ has at most $d$ roots in $K$.
(2) If $A$ is a division ring, there may exist polynomials with coefficients in $A$ of degree $d \ge 0$ with more than $d$ roots.
(3) If $K$ is infinite, and $P \in K[X]$ satisfies $P(x) = 0$ for all $x \in K$, then $P = 0$.
(4) If $K$ is infinite and $P \in K[X_1, \dots, X_n]$ is such that $P(x_1, \dots, x_n) = 0$ for all $(x_i) \in K^n$, then $P = 0$.

Proof. (1) We argue by induction on the degree $d$ of $P$. If $d = 0$, then $P = a \in K^\times$ is a non-zero constant, and hence there is no root of $P$ in $K$. Assume then that the statement is valid for polynomials of degree $d - 1$ and that $\deg(P) = d \ge 1$. If $P(x) \ne 0$ for all $x \in K$, then $P$ has $0 \le d$ roots in $K$. Otherwise, let $x_0 \in K$ be a root of $P$. By euclidean division of $P$ by $X - x_0$, we get $P = (X - x_0) Q$ for some polynomial $Q \in K[X]$. By additivity of the degree, we have $\deg(Q) = d - 1$. Furthermore, since $K$ is a field, the equation $P(x) = 0$ holds if and only if either $x = x_0$ or $Q(x) = 0$. By induction, there are at most $d - 1$ solutions to the equation $Q(x) = 0$, and together with $x_0$, this means that there are at most $d$ solutions to the equation $P(x) = 0$, which finishes the induction step.

(2) Let $A$ be the division ring of quaternions and $P$ the polynomial $X^2 + 1 \in A[X]$. Then the equation $P(x) = 0$ has, at least, the three roots $i$, $j$, $k$ of the standard basis $(1, i, j, k)$ of $A$ as a real vector space.
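Part (2) can be checked directly with a small quaternion implementation (a sketch for illustration, not part of the original solution): with the Hamilton multiplication rules, each of $\pm i$, $\pm j$, $\pm k$ squares to $-1$, so $X^2 + 1$ already has six roots in the division ring of quaternions.

```python
class Quaternion:
    """Quaternion a + b*i + c*j + d*k with rational/integer coefficients."""
    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __mul__(self, o):
        # standard Hamilton product: i^2 = j^2 = k^2 = ijk = -1
        return Quaternion(
            self.a*o.a - self.b*o.b - self.c*o.c - self.d*o.d,
            self.a*o.b + self.b*o.a + self.c*o.d - self.d*o.c,
            self.a*o.c - self.b*o.d + self.c*o.a + self.d*o.b,
            self.a*o.d + self.b*o.c - self.c*o.b + self.d*o.a,
        )

    def __add__(self, o):
        return Quaternion(self.a+o.a, self.b+o.b, self.c+o.c, self.d+o.d)

    def __eq__(self, o):
        return (self.a, self.b, self.c, self.d) == (o.a, o.b, o.c, o.d)

one = Quaternion(1, 0, 0, 0)
zero = Quaternion(0, 0, 0, 0)
neg = Quaternion(-1, 0, 0, 0)
i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
k = Quaternion(0, 0, 0, 1)

# each of the six quaternions +-i, +-j, +-k is a root of X^2 + 1
roots = [i, j, k, neg*i, neg*j, neg*k]
assert all(x*x + one == zero for x in roots)
print("X^2 + 1 has at least", len(roots), "roots in the quaternions")
```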

(3) If $K$ is infinite, then a polynomial $P$ such that $P(x) = 0$ for all $x \in K$ cannot have any degree $d \ge 0$, by (1), and so $P$ must be the zero polynomial, which does not have a degree.

(4) We argue by induction on the number $n$ of variables. For $n = 1$, the statement is proved in (3). Assume it holds for $n - 1$ variables, with $n \ge 2$. Let $P \in K[X_1, \dots, X_n]$ be given with $P(x_1, \dots, x_n) = 0$ for all $(x_i) \in K^n$. We can write
$$P = \sum_i Q_i(X_1, \dots, X_{n-1}) X_n^i,$$
where the $Q_i \in K[X_1, \dots, X_{n-1}]$ are polynomials in $n - 1$ variables. Fix any $(x_1, \dots, x_{n-1}) \in K^{n-1}$ and let
$$Q = \sum_i Q_i(x_1, \dots, x_{n-1}) X^i \in K[X].$$
Then $Q(x) = P(x_1, \dots, x_{n-1}, x) = 0$ for all $x \in K$ by assumption. Hence, by (3), it follows that $Q_i(x_1, \dots, x_{n-1}) = 0$ for all $i$. Since this holds for all $(x_1, \dots, x_{n-1}) \in K^{n-1}$, this means by induction that $Q_i = 0$ for each $i$. But then $P = 0$ also, finishing the induction step.

10. Series 10

10.1. Exercise 1. Let $A$ be a principal ideal domain.

Proposition 14. (1) For any two non-zero elements $a$ and $b$ of $A$, we have $aA + bA = dA$ where $d$ is a greatest common divisor of $a$ and $b$.
(2) For any two non-zero elements $a$ and $b$ of $A$, we have $aA \cap bA = mA$ where $m$ is a least common multiple of $a$ and $b$.
(3) If $R = \mathbf{C}[X, Y]$, a factorial ring which is not a principal ideal domain, then the elements $a = X$ and $b = Y$ of $R$ are irreducible and satisfy $aR \ne bR$, but $aR + bR \ne R$.

Proof. (1) Since $A$ is a principal ideal domain, there exists $d \in A$ such that $aA + bA = dA$. Since $aA \ne 0$ and $aA \subset aA + bA$, it follows that $d \ne 0$. Since $aA \subset dA$, we get $d \mid a$, and similarly $d \mid b$, so that $d$ is a common divisor of $a$ and $b$. But conversely, if $x \in A$ is a common divisor of $a$ and $b$, so that $a = xa'$ and $b = xb'$ for some $a'$ and $b'$ in $A$, then we get
$$dA = aA + bA = x(a'A + b'A) \subset xA,$$
and so $x \mid d$, so $d$ is a greatest common divisor.

(2) Since $A$ is a principal ideal domain, there exists $m \in A$ such that $aA \cap bA = mA$. Since $ab \ne 0$ and $ab \in aA \cap bA$, it follows that $m \ne 0$. Since $mA \subset aA$, we get $a \mid m$, and similarly $b \mid m$, so that $m$ is a common multiple of $a$ and $b$. But conversely, if $x \in A$ is a common multiple of $a$ and $b$, so that $x = aa'$ and $x = bb'$ for some $a'$ and $b'$ in $A$, then we get
$$xA = a'aA \subset aA, \qquad xA = b'bA \subset bA,$$
and so $xA \subset mA$, or $m \mid x$, which means that $x$ is a multiple of $m$, and $m$ is therefore a least common multiple.

(3) The element $X$ of $\mathbf{C}[X, Y]$ is irreducible because if $X = P_1 P_2$, then looking at the degree with respect to $X$, we see that $P_1$ must be a polynomial in $\mathbf{C}[Y]$ and $P_2$ of the form $Q_1 X + Q_2$ for some $Q_1, Q_2 \in \mathbf{C}[Y]$, or conversely.

Then the equation $X = P_1 Q_1 X + P_1 Q_2$ is only possible if $P_1 Q_2 = 0$ and $P_1 Q_1 = 1$. Since $P_1 = 0$ is impossible, this means that $Q_2 = 0$ and that $P_1$ and $Q_1$ are non-zero (inverse) constants, hence units in $R$. Similarly, $Y$ is irreducible in $R$. It is immediate that $XR \ne YR$, since for instance $X \in XR$ but $X \notin YR$. On the other hand, $XR + YR \ne R$ because any $P = XP_1 + YQ_1$, with $P_1, Q_1 \in R$, satisfies $P(0, 0) = 0$, and so $1 \notin XR + YR$.

10.2. Exercise 3.

Proposition 15. Let $A$ be a principal ideal domain and $M \ne 0$ a finitely generated torsion $A$-module. There exist $k \ge 1$ and non-zero elements $a_1, \dots, a_k$ in $A$, $a_i \notin A^\times$, such that
$$a_1 \mid a_2 \mid \cdots \mid a_k$$
and $M$ is isomorphic to
$$A/a_1 A \oplus \cdots \oplus A/a_k A.$$

Proof. This is a rearrangement of the classification statement for finitely generated torsion modules over $A$. We know that there exist $m \ge 1$ and irreducible elements $r_1, \dots, r_m$ in $A$ such that the ideals $r_i A$ are pairwise coprime, and for each $i$ there exist $s_i \ge 1$ and integers
$$1 \le \nu_{i,1} \le \cdots \le \nu_{i, s_i}$$
such that $M$ is isomorphic to
$$\bigoplus_{1 \le i \le m} \bigoplus_{1 \le j \le s_i} A/r_i^{\nu_{i,j}} A.$$
Now define $k$ to be the largest of the integers $s_i$. Further, extend $\nu_{i,j}$ to $j \le 0$ by putting $\nu_{i,j} = 0$ if $j \le 0$. Note that the sequence $(\nu_{i,j})_{j \le s_i}$ is non-decreasing. Then define
$$a_{k - l} = \prod_{1 \le i \le m} r_i^{\nu_{i, s_i - l}}$$
for $0 \le l \le k - 1$. This defines elements $a_1, \dots, a_k$. Since $\nu_{i, s_i - l} \ge 0$ for all $l$ (including when $s_i - l \le 0$, by the previous convention), we deduce that $a_i \in A \setminus \{0\}$ for $1 \le i \le k$. Further, we have $a_1 \mid a_2 \mid \cdots \mid a_k$ in $A$, because for each $i$, the sequence $\nu_{i, s_i - l}$ is non-increasing as $l$ increases. Precisely, we have
$$\frac{a_{k-l}}{a_{k-l-1}} = \prod_{1 \le i \le m} r_i^{\nu_{i, s_i - l} - \nu_{i, s_i - l - 1}}$$
(in the fraction field of $A$) and this belongs to $A$ since $\nu_{i, s_i - l} \ge \nu_{i, s_i - l - 1}$. Moreover, since $k = s_i$ for at least one index $i$, the element $a_1$ is divisible by $r_i^{\nu_{i,1}}$ for such an $i$, with $\nu_{i,1} \ge 1$, so $a_1$ is not a unit; and since $a_1$ divides every $a_j$, none of the $a_j$ is a unit.

There remains to prove that $M$ is isomorphic to the module
$$N = A/a_1 A \oplus \cdots \oplus A/a_k A = \bigoplus_{0 \le l \le k-1} A/a_{k-l} A.$$
But note that, by the Chinese Remainder Theorem, the module $N$ is isomorphic to
$$\bigoplus_{0 \le l \le k-1} \bigoplus_{1 \le i \le m} A/r_i^{\nu_{i, s_i - l}} A,$$

and we can rearrange the direct sum as
$$\bigoplus_{1 \le i \le m} \bigoplus_{0 \le l \le k-1} A/r_i^{\nu_{i, s_i - l}} A,$$
and then, dropping all the summands $A/r_i^0 A = \{0\}$, as
$$\bigoplus_{1 \le i \le m} \bigoplus_{0 \le l \le s_i - 1} A/r_i^{\nu_{i, s_i - l}} A = \bigoplus_{1 \le i \le m} \bigoplus_{1 \le j \le s_i} A/r_i^{\nu_{i,j}} A,$$
which is indeed isomorphic to $M$.

11. Series 11

Let $L/K$ be a finite field extension.

Proposition 16. (1) For each $x \in L$, the map
$$m_x : L \to L, \qquad y \mapsto xy,$$
is a $K$-linear map.
(2) The map $r_{L/K} : x \mapsto m_x$ is an injective ring homomorphism $L \to \mathrm{End}_K(L)$.
(3) Let $\mathrm{Tr}_{L/K} = \mathrm{Tr} \circ r_{L/K}$ and $N_{L/K} = \det \circ r_{L/K}$. Then $\mathrm{Tr}_{L/K}$ is a $K$-linear map from $L$ to $K$, and $N_{L/K}(xy) = N_{L/K}(x) N_{L/K}(y)$ for all $(x, y) \in L^2$, with $N_{L/K}(x) = 0$ if and only if $x = 0$.
(4) For a tower $L_2/L_1/K$ of finite extensions, we have $\mathrm{Tr}_{L_2/K} = \mathrm{Tr}_{L_1/K} \circ \mathrm{Tr}_{L_2/L_1}$.
(5) Let $x \in L$ be such that $L = K(x)$. If
$$\mathrm{Irr}(x, K) = X^d + a_{d-1} X^{d-1} + \cdots + a_1 X + a_0$$
is the minimal polynomial of $x$ over $K$, then $\mathrm{Tr}_{L/K}(x) = -a_{d-1}$ and $N_{L/K}(x) = (-1)^d a_0$.

Proof. (1) For $y_1, y_2 \in L$ and $a, b \in K$, we have
$$m_x(a y_1 + b y_2) = x(a y_1 + b y_2) = a m_x(y_1) + b m_x(y_2),$$
so that $m_x$ is $K$-linear.

(2) The map $r_{L/K}$ is well-defined by (1) (since $m_x \in \mathrm{End}_K(L)$). Furthermore, for any $x_1$ and $x_2$ in $L$ and any $y \in L$, we have
$$m_{x_1 + x_2}(y) = (x_1 + x_2) y = m_{x_1}(y) + m_{x_2}(y),$$
so that $m_{x_1 + x_2} = m_{x_1} + m_{x_2}$. Also, we have
$$m_{x_1 x_2}(y) = x_1 x_2 y = x_1 (x_2 y) = m_{x_1}(x_2 y) = (m_{x_1} \circ m_{x_2})(y),$$
so that $m_{x_1 x_2} = m_{x_1} \circ m_{x_2}$. Therefore $r_{L/K}$ is a ring homomorphism. Finally, assume $x \in L$ is such that $m_x = 0$. This means that $xy = 0$ for all $y \in L$, and therefore that $x = 0$ (taking $y = 1$). So $r_{L/K}$ is injective.
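The maps of Proposition 16 can be computed very concretely. The following Python sketch (an illustration, not part of the original solution) takes $L = \mathbf{Q}(x)$ with $x$ a root of $X^3 - 2$: the matrix of $m_x$ in the basis $(1, x, x^2)$ is the companion matrix of $X^3 - 2$, whose trace is $0 = -a_2$ and whose determinant is $2 = (-1)^3 a_0$, as predicted by part (5); the last assertion checks the multiplicativity of the norm from part (3) on one pair of elements. The representation of elements as coefficient lists and the helper names are ad hoc choices.

```python
from fractions import Fraction

D = 3   # L = Q(x), x a root of X^3 - 2, with basis (1, x, x^2)

def mul(u, v):
    """Product of u, v in L (coefficient lists of length 3), using x^3 = 2."""
    prod = [Fraction(0)] * (2 * D - 1)
    for a, ua in enumerate(u):
        for b, vb in enumerate(v):
            prod[a + b] += ua * vb
    for n in range(2 * D - 2, D - 1, -1):   # x^n = 2 * x^(n-3) for n >= 3
        prod[n - D] += 2 * prod[n]
        prod[n] = Fraction(0)
    return prod[:D]

def mult_matrix(y):
    """Matrix of m_y : L -> L in the basis (1, x, x^2); columns are m_y(x^i)."""
    cols = []
    for i in range(D):
        e = [Fraction(0)] * D
        e[i] = Fraction(1)
        cols.append(mul(y, e))
    # entry [r][c] = coefficient of x^r in y * x^c
    return [[cols[c][r] for c in range(D)] for r in range(D)]

def trace(M):
    return sum(M[i][i] for i in range(D))

def det3(M):
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
            - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
            + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

x = [Fraction(0), Fraction(1), Fraction(0)]   # the element x itself
Mx = mult_matrix(x)                           # companion matrix [[0,0,2],[1,0,0],[0,1,0]]
print(trace(Mx), det3(Mx))                    # prints "0 2", as in Proposition 16 (5)

# multiplicativity of the norm, Proposition 16 (3), on a sample pair
y = [Fraction(1), Fraction(2), Fraction(-1)]
z = [Fraction(3), Fraction(0), Fraction(5)]
assert det3(mult_matrix(mul(y, z))) == det3(mult_matrix(y)) * det3(mult_matrix(z))
```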

(3) By composition of $K$-linear maps, $\mathrm{Tr}_{L/K}$ is $K$-linear. Similarly, since $\det(AB) = \det(A)\det(B)$ for any two elements $A$, $B$ of $\mathrm{End}_K(L)$, and $r_{L/K}$ is a ring morphism, we see that $N_{L/K}(x_1 x_2) = N_{L/K}(x_1) N_{L/K}(x_2)$ for all $(x_1, x_2) \in L^2$. It is clear that $N_{L/K}(0) = 0$ since $m_0 = 0$. If $x \ne 0$, then we get $1 = N_{L/K}(1_L) = N_{L/K}(x) N_{L/K}(x^{-1})$ (since $m_1 = \mathrm{Id}$), so that $N_{L/K}(x) \ne 0$. Hence $N_{L/K}(x) = 0$ if and only if $x = 0$.

(4) We prove the formula for a tower written as $L_1/L_2/K$, i.e. $K \subset L_2 \subset L_1$; this is the statement of (4) with the roles of $L_1$ and $L_2$ exchanged. Let $B_1 = (e_1, \dots, e_k)$ be an $L_2$-basis of $L_1$, and $B_2 = (f_1, \dots, f_l)$ a $K$-basis of $L_2$. We have seen in class that
$$B = (e_1 f_1, e_1 f_2, \dots, e_1 f_l, e_2 f_1, \dots, e_2 f_l, \dots, e_k f_1, \dots, e_k f_l)$$
is a $K$-basis of $L_1$. Fix $\alpha \in L_1$. We can then find coefficients $\lambda_{ij} \in L_2$, with $1 \le i, j \le k$, such that
$$\alpha e_i = \sum_{j=1}^k \lambda_{ij} e_j$$
for $1 \le i \le k$. The matrix of $m_\alpha$, as an element of $\mathrm{End}_{L_2}(L_1)$, with respect to the basis $B_1$ has coefficients $(\lambda_{i,j})$, and therefore
$$\mathrm{Tr}_{L_1/L_2}(\alpha) = \sum_{i=1}^k \lambda_{ii}.$$
Similarly, for each $i, j$ as above and $1 \le s, t \le l$, we can find coefficients $\mu_{ijst} \in K$ such that
$$\lambda_{ij} f_s = \sum_{t=1}^l \mu_{ijst} f_t$$
for each $(i, j, s)$. The matrix of $m_{\lambda_{ij}}$ as an element of $\mathrm{End}_K(L_2)$ with respect to the basis $B_2$ has coefficients $(\mu_{ijst})$, with $s$ and $t$ serving as indices for rows and columns, and therefore
$$\mathrm{Tr}_{L_2/K}(\lambda_{ij}) = \sum_{s=1}^l \mu_{ijss}.$$
Combining the previous formulas, we get
$$\alpha e_i f_s = \sum_{j=1}^k \sum_{t=1}^l \mu_{ijst} e_j f_t$$
for each $i$ and $s$. The matrix of $m_\alpha$ as an element of $\mathrm{End}_K(L_1)$, with respect to the basis $B$, has coefficients $\mu_{ijst}$, where the pairs $(i, s)$ and $(j, t)$ serve as indices for columns and rows respectively, and therefore
$$\mathrm{Tr}_{L_1/K}(\alpha) = \sum_{i=1}^k \sum_{s=1}^l \mu_{iiss}.$$

Now we can simply compute and find that
$$\mathrm{Tr}_{L_2/K}(\mathrm{Tr}_{L_1/L_2}(\alpha)) = \mathrm{Tr}_{L_2/K}\Bigl(\sum_{i=1}^k \lambda_{ii}\Bigr) = \sum_{i=1}^k \mathrm{Tr}_{L_2/K}(\lambda_{ii}) = \sum_{i=1}^k \sum_{s=1}^l \mu_{iiss} = \mathrm{Tr}_{L_1/K}(\alpha).$$
Since this holds for every $\alpha \in L_1$, we are done.

(5) We know that the elements $e_i = x^i$, for $0 \le i \le d - 1$, form a basis of $L = K(x)$ as a $K$-vector space. In this basis, we have
$$m_x(e_i) = \begin{cases} e_{i+1} & \text{if } 0 \le i \le d - 2, \\ -a_{d-1} e_{d-1} - \cdots - a_1 e_1 - a_0 e_0 & \text{if } i = d - 1, \end{cases}$$
since $m_x(e_i) = x^{i+1}$, and in particular $m_x(e_{d-1}) = x^d$. The trace of $x$ is the sum of the diagonal coefficients, which is therefore $-a_{d-1}$ (the diagonal coefficient in the $i$-th column is zero unless $i = d - 1$). For the determinant, we use expansion with respect to the last column:
$$N_{L/K}(x) = \sum_{i=1}^d (-1)^{d+i} (-a_{i-1}) \det(A_i),$$
where $A_i$ is the matrix of size $d - 1$ obtained by removing the last column and the $i$-th row of the matrix of $m_x$. For $i = 1$, the matrix $A_1$ is just the identity matrix (corresponding to the fact that $m_x(e_i) = e_{i+1}$ for $0 \le i < d - 1$). This term contributes $(-1)^{d+1}(-a_0) = (-1)^d a_0$ to the determinant. On the other hand, for $2 \le i \le d$, the matrix $A_i$ has its first row identically zero, and so has determinant $0$. Therefore $N_{L/K}(x) = (-1)^d a_0$.

Example 17. (1) Let $K = \mathbf{R}$, $L = \mathbf{C}$ and $\alpha \in \mathbf{C}$. Write $\alpha = a + ib$ with $a$ and $b$ in $\mathbf{R}$. Since
$$m_\alpha(1) = \alpha = a + ib \qquad \text{and} \qquad m_\alpha(i) = \alpha i = -b + ia,$$
we see that the matrix representing $m_\alpha$ in the basis $(1, i)$ of $L/K$ is
$$\begin{pmatrix} a & -b \\ b & a \end{pmatrix}.$$
(2) Let $p$ be an odd prime number and $\zeta_p = \exp(2i\pi/p)$. Let $K_p = \mathbf{Q}(\zeta_p)$. From Exercise 2.4 of Series 9, $\zeta_p$ is a root of the cyclotomic polynomial
$$\Phi_p(X) = X^{p-1} + \cdots + X + 1 = \frac{X^p - 1}{X - 1} \in \mathbf{Q}[X],$$
which is irreducible in $\mathbf{Q}[X]$. Hence $\mathrm{Irr}(\zeta_p, \mathbf{Q}) = \Phi_p$. Therefore we see that
$$\mathrm{Tr}_{K_p/\mathbf{Q}}(\zeta_p) = -1 \qquad \text{and} \qquad N_{K_p/\mathbf{Q}}(\zeta_p) = 1$$
from the previous proposition (using the fact that $(-1)^{p-1} = 1$ since $p$ is odd).

Since we have $\mathbf{Q}(\zeta_p) = \mathbf{Q}(\zeta_p - 1)$, we have
$$\mathrm{Irr}(\zeta_p - 1, \mathbf{Q}) = \Phi_p(X + 1) = \frac{(X + 1)^p - 1}{X}.$$
Again by the previous proposition, we get $N_{K_p/\mathbf{Q}}(\zeta_p - 1) = p$ by computing the value at $0$ of this polynomial (its constant term is $\Phi_p(1) = p$, and its degree $p - 1$ is even).

12. Series 12

Let $p$ be a prime number and $n \ge 1$ an integer. We denote by $\mathbf{F}_{p^n}$ a finite field of size $p^n$, and define
$$\mathrm{Tr}(x) = \sum_{j=0}^{n-1} x^{p^j}, \qquad N(x) = \prod_{j=0}^{n-1} x^{p^j}$$
for $x \in \mathbf{F}_{p^n}$.

Proposition 18. (1) We have $\mathrm{Tr}(x) \in \mathbf{F}_p$ and $N(x) \in \mathbf{F}_p$ for all $x \in \mathbf{F}_{p^n}$.
(2) The map $\mathrm{Tr} : \mathbf{F}_{p^n} \to \mathbf{F}_p$ is $\mathbf{F}_p$-linear.
(3) We have $N(xy) = N(x) N(y)$ for all $(x, y) \in \mathbf{F}_{p^n}^2$, and $N(x) = 0$ if and only if $x = 0$.

Proof. (1) We use the characterization: an element $y \in \mathbf{F}_{p^n}$ belongs to $\mathbf{F}_p$ if and only if $\varphi(y) = y$, where $\varphi(y) = y^p$. Here we get
$$\varphi(\mathrm{Tr}(x)) = \mathrm{Tr}(x)^p = \sum_{j=0}^{n-1} x^{p^{j+1}} = x^{p^n} + \sum_{j=1}^{n-1} x^{p^j} = x + \sum_{j=1}^{n-1} x^{p^j} = \mathrm{Tr}(x),$$
since $\varphi$ is an automorphism (so it commutes with sums) and since $x^{p^n} = x$ for $x \in \mathbf{F}_{p^n}$. So $\mathrm{Tr}(x) \in \mathbf{F}_p$. Similarly
$$\varphi(N(x)) = N(x)^p = \prod_{j=0}^{n-1} x^{p^{j+1}} = x^{p^n} \prod_{j=1}^{n-1} x^{p^j} = x \prod_{j=1}^{n-1} x^{p^j} = N(x),$$
so that $N(x) \in \mathbf{F}_p$.

(2) For $x_1, x_2 \in \mathbf{F}_{p^n}$ and $a, b \in \mathbf{F}_p$, we have
$$\mathrm{Tr}(a x_1 + b x_2) = \sum_{j=0}^{n-1} (a x_1 + b x_2)^{p^j} = \sum_{j=0}^{n-1} \bigl((a x_1)^{p^j} + (b x_2)^{p^j}\bigr) = a \, \mathrm{Tr}(x_1) + b \, \mathrm{Tr}(x_2),$$
because the fact that $a$ and $b$ belong to $\mathbf{F}_p$ implies that $a^{p^j} = a$ and $b^{p^j} = b$ for $0 \le j \le n - 1$. Hence $\mathrm{Tr}$ is $\mathbf{F}_p$-linear.

(3) Since $(xy)^{p^j} = x^{p^j} y^{p^j}$ for all $j$, we get similarly that $N(xy) = N(x) N(y)$. And since $N(x)$ is defined as a product, we have $N(x) = 0$ if and only if there exists $j$, $0 \le j \le n - 1$, such that $x^{p^j} = 0$, if and only if $x = 0$.
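Proposition 18 can be verified by direct computation in a small finite field. The sketch below (an illustration, not part of the original solution) realizes $\mathbf{F}_9$ as $\mathbf{F}_3[t]/(t^2 + 1)$, which is a field since $-1$ is not a square modulo $3$, and checks that $\mathrm{Tr}(x) = x + x^3$ and $N(x) = x \cdot x^3$ land in $\mathbf{F}_3$, that $\mathrm{Tr}$ is additive and $N$ multiplicative, and that $N(x) = 0$ only for $x = 0$.

```python
p = 3   # F_9 = F_3[t] / (t^2 + 1); elements are pairs (a, b) <-> a + b*t

def add(x, y):
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

def mul(x, y):
    # (a + b t)(c + d t) = ac + (ad + bc) t + bd t^2, and t^2 = -1
    a, b = x
    c, d = y
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def power(x, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, x)
    return r

def frob(x):
    return power(x, p)            # Frobenius phi(x) = x^p

def Tr(x):
    return add(x, frob(x))        # x + x^3

def N(x):
    return mul(x, frob(x))        # x * x^3

field = [(a, b) for a in range(p) for b in range(p)]   # the 9 elements of F_9

# Tr(x) and N(x) lie in the prime field F_3, i.e. have zero t-component
assert all(Tr(x)[1] == 0 and N(x)[1] == 0 for x in field)

# Tr is additive (F_3-linearity follows, since a*x is x added a times)
assert all(Tr(add(x, y)) == add(Tr(x), Tr(y)) for x in field for y in field)

# N is multiplicative, and N(x) = 0 only for x = 0
assert all(N(mul(x, y)) == mul(N(x), N(y)) for x in field for y in field)
assert all((N(x) == (0, 0)) == (x == (0, 0)) for x in field)
print("Proposition 18 verified in F_9")
```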

13. Series 13

Let $K$ be a field of characteristic $p > 0$, containing $\mathbf{F}_p$. Let $a \in K$.

Proposition 19. (1) The polynomial $f = X^p - X - a$ is separable in $K[X]$. Let $\bar{K}$ be an algebraic closure of $K$ and $\alpha \in \bar{K}$ a root of $f$.
(2) We have $\{\text{roots of } f \text{ in } \bar{K}\} = \{\alpha + x \mid x \in \mathbf{F}_p\}$.
(3) If $a$ is not of the form $y^p - y$ for some $y \in K$, then $[K(\alpha) : K] = p$.
(4) If $K(\alpha) \ne K$, then the set $G$ of all field automorphisms $\sigma : K(\alpha) \to K(\alpha)$ such that $\sigma(x) = x$ for $x \in K$ is a group under composition. It is cyclic of order $p$.
(5) For any prime $p$, the polynomial $Q_p = X^p - X - 1$ defines $\mathbf{F}_{p^p}$, in the sense that $\mathbf{F}_{p^p} = \mathbf{F}_p(\alpha)$ for any root $\alpha$ of $Q_p$ in an algebraic closure of $\mathbf{F}_p$.

Proof. (1) The derivative of $f$ is $f' = pX^{p-1} - 1 = -1$, since $K$ has characteristic $p$, and hence $f$ and $f'$ are coprime, and therefore $f$ is separable.

(2) If $x \in \mathbf{F}_p$, then we have
$$f(\alpha + x) = (\alpha + x)^p - (\alpha + x) - a = \alpha^p + x^p - \alpha - x - a = \alpha^p - \alpha - a + (x^p - x) = f(\alpha) = 0,$$
since $x^p = x$ for $x \in \mathbf{F}_p$. Hence any element of the form $\alpha + x$, $x \in \mathbf{F}_p$, is a root of $f$. This shows that $f$ has at least $p$ distinct roots in $\bar{K}$, and since $\deg(f) = p$, these must be all the roots of $f$ in $\bar{K}$. (One can also observe that if $f(\beta) = 0$, then
$$(\beta - \alpha)^p = \beta^p - \alpha^p = (\beta + a) - (\alpha + a) = \beta - \alpha,$$
and so $\beta - \alpha \in \mathbf{F}_p$; this also provides a direct check that $f$ is separable, since it has $\deg(f)$ distinct roots.)

(3) If $a$ is not of the form $y^p - y$ with $y \in K$, then $f$ has no root in $K$, and so $\alpha \notin K$. We will prove that $f$ is irreducible. Indeed, let $f_1$ be a monic polynomial in $K[X]$ dividing $f$ with $d = \deg(f_1) \ge 1$. Then the roots of $f_1$ in $\bar{K}$ are of the form $\alpha + x$ for $x$ ranging over a subset $A$ of $\mathbf{F}_p$, with $|A| = d$. It follows that
$$f_1 = \prod_{x \in A} (X - (\alpha + x)).$$
This polynomial has coefficients in $K$ by assumption. In particular, the coefficient of $X^{d-1}$ is in $K$. But this coefficient is
$$a_{d-1} = -\sum_{x \in A} (\alpha + x) = -\Bigl(|A| \alpha + \sum_{x \in A} x\Bigr).$$
Since $\mathbf{F}_p \subset K$, we see that $a_{d-1} \in K$ if and only if $|A| \alpha \in K$. By assumption $\alpha \notin K$, and so this is only possible if $|A| = 0$ in $K$, which means that $p$ divides $|A|$. Since $A \subset \mathbf{F}_p$ is non-empty, it follows that $A = \mathbf{F}_p$, which means that $\deg(f_1) = p$, and therefore that $f_1 = f$. This means that $f$ is irreducible in $K[X]$. This being established, we then know that $[K(\alpha) : K] = \deg(f) = p$.

(4) The fact that $G$ is a group under composition is immediate and only uses the fact that elements of $G$ are defined as automorphisms of the field structure, and that the property that $\sigma : K(\alpha) \to K(\alpha)$ is the identity on $K$ is stable under composition. Now we show that $|G| = p$. Any $\sigma \in G$ must send $\alpha$ to another root of $f$ (because $\sigma$ acts like the identity on the coefficients of $f$, which are in $K$), and $\sigma$ is determined by $\sigma(\alpha)$ and the requirement that $\sigma(y) = y$ if $y \in K$. So $|G| \le \deg(f) = p$. We next prove the converse inequality.

First, observe that (3) implies that $K(\alpha) = K$ (equivalently, $\alpha \in K$) if and only if $a$ is of the form $y^p - y$ for some $y \in K$. Indeed, (3) shows that $K(\alpha) \ne K$ if $a$ is not of this form, and conversely, if $a = y^p - y$ with $y \in K$, then $y$ is a root of $f$, so by (2) we have $y = \alpha + x$ for some $x \in \mathbf{F}_p$, hence $\alpha = y - x \in K$ and $K(\alpha) = K$.

Consider the set $I$ of embeddings of the field $K(\alpha)$ in $\bar{K}$ that are the identity on $K$. What we just saw and the fact that $K(\alpha) \ne K$ show that (3) applies to prove that $[K(\alpha) : K] = p$. Since $f$ is separable, this implies that
$$|I| = [K(\alpha) : K]_s = [K(\alpha) : K] = p.$$
Now we claim that any $\sigma \in I$ sends $K(\alpha)$ to itself. Indeed, we know that $\sigma$ is determined by $\sigma(\alpha)$. By (2), $\sigma(\alpha) = \alpha + x$ for some $x \in \mathbf{F}_p$. In particular, $\sigma(\alpha) \in K(\alpha)$ since $\mathbf{F}_p \subset K$. As $\alpha$ generates $K(\alpha)$ over $K$, it follows that $\sigma(K(\alpha)) \subset K(\alpha)$ as claimed. This property means that any embedding $\sigma \in I$ defines an element $\tau \in G$ by the requirement that $\sigma = i \circ \tau$, where $i$ is the injection of $K(\alpha)$ in $\bar{K}$. Different elements of $I$ give different elements of $G$, because they are both determined by the value at $\alpha$. Hence $|G| \ge |I| = p$, and this shows that $|G| = p$. Finally, observe that $G$, like any group of prime order, is cyclic (because any element $\sigma \ne 1$ generates a subgroup which must be of order $p$, hence equal to $G$).

(5) Let $Q_p = X^p - X - 1 \in \mathbf{F}_p[X]$. The element $1$ is not of the form $y^p - y$ for $y \in \mathbf{F}_p$, simply because $y^p - y = 0$ for every $y \in \mathbf{F}_p$. Hence by (3), for any root $\alpha$ of $Q_p$ in an algebraic closure of $\mathbf{F}_p$, the extension $\mathbf{F}_p(\alpha)$ has degree $p$ over $\mathbf{F}_p$ (and by (2) it contains all the roots of $Q_p$, so it is the splitting field of $Q_p$); therefore it is equal to $\mathbf{F}_{p^p}$.
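As a closing numerical check (a sketch, not part of the original solution): for any prime $p$, every $y \in \mathbf{F}_p$ satisfies $y^p - y = 0$, so $a = 1$ is never of the form $y^p - y$ and Proposition 19 applies to $Q_p = X^p - X - 1$; in particular $Q_p$ has no root in $\mathbf{F}_p$ (and for $p \le 3$ this already shows it is irreducible). The small script below confirms both facts for the first few primes.

```python
def artin_schreier_values(p):
    """The set {y^p - y mod p : y in F_p}; by Fermat's little theorem it is {0}."""
    return {(pow(y, p, p) - y) % p for y in range(p)}

def has_root_mod_p(p):
    """Does Q_p = X^p - X - 1 have a root in F_p?"""
    return any((pow(x, p, p) - x - 1) % p == 0 for x in range(p))

for p in [2, 3, 5, 7, 11, 13]:
    assert artin_schreier_values(p) == {0}   # so a = 1 is not of the form y^p - y
    assert not has_root_mod_p(p)             # Q_p has no root in F_p
print("Q_p = X^p - X - 1 has no root in F_p for the primes tested,")
print("consistent with [F_p(alpha) : F_p] = p in Proposition 19 (5)")
```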