Chapter 4. Algebras

4.1 Definition

It is time to introduce the notion of an algebra over a commutative ring. So let R be a commutative ring. An R-algebra is a ring A (unital as always) that is an R-module (left, say) such that

    r(ab) = (ra)b = a(rb)

for all r ∈ R, a, b ∈ A. You can say it the other way round: an R-algebra is an R-module A equipped with an R-bilinear multiplication A × A → A making it into a ring. Thus, the module structure and the multiplication satisfy the conditions

(A1) 1_R a = a;
(A2) (r + s)a = ra + sa;
(A3) r(a + b) = ra + rb;
(A4) (rs)a = r(sa);
(A5) r(ab) = (ra)b = a(rb)

for all r, s ∈ R, a, b ∈ A. (Note: strictly speaking I should call this an associative, unital R-algebra, since there are other important sorts of algebra which are not associative, but since we won't meet them in this course I'll just stick to "algebra".)

Note a Z-algebra is just the old definition of ring: Z-algebras = rings, just as Z-modules = Abelian groups. So you should view the passage from rings to R-algebras as analogous to the passage from Abelian groups to R-modules! This is the idea of studying objects (e.g. Abelian groups, rings) relative to a fixed commutative base ring R.

There is an equivalent formulation of the definition of R-algebra: an R-algebra is a ring A together with a distinguished ring homomorphism (the "structure map")

    s : R → A

such that the image of s lies in the center

    Z(A) = {a ∈ A | ab = ba for all b ∈ A}.

Indeed, given such a ring homomorphism, define a map R × A → A by (r, a) ↦ s(r)a. Now check this satisfies the above axioms (A1)-(A5) (the last one holding because im s ⊆ Z(A)). Conversely, given an R-algebra as defined originally, one obtains a ring homomorphism s : R → A by defining s(r) = r1_A, and the image lies in Z(A) by (A5).

Let A be an R-algebra. Then, given an A-module M, we can in particular think of M as just an R-module, defining rm = s(r)m for r ∈ R, m ∈ M. So you can hope to exploit the additional structure of the base ring R in studying A-modules. In particular, if R = F is a field and A is an F-algebra, then A and any A-module M are in particular vector spaces over F. So we can talk about finite dimensional F-algebras and finite dimensional modules over an F-algebra, meaning their underlying dimension as vector spaces over F.

Note given A-modules M, N, an A-module homomorphism between them is automatically an R-module homomorphism (for the underlying R-module structure). However, it is not necessarily the case that a ring homomorphism between two different R-algebras is an R-module homomorphism. So one defines an R-algebra homomorphism f : A → B between two R-algebras A and B to be a ring homomorphism in the old sense that is in addition R-linear (i.e. it is an R-module homomorphism too). Ring homomorphisms and R-algebra homomorphisms are different things!

Now for examples. Actually, we already know plenty. Let R be a commutative ring. Then, the polynomial ring R[X_1, ..., X_n] is evidently an R-algebra: indeed, R[X_1, ..., X_n] contains a copy of R as the subring consisting of polynomials of degree zero.

The ring M_n(R) of n × n matrices over R is an R-algebra: again, it contains a copy of R as the subring consisting of the scalar matrices. But note there is a big difference between this and the previous example: M_n(R) is finitely generated as an R-module (indeed, it is free of rank n²) whereas R[X_1, ..., X_n] is not. In case F is a field, M_n(F) is a finite dimensional F-algebra, while F[X_1, ..., X_n] is not.

For the next example, let M be any (left) R-module. Consider the Abelian group End_R(M). We make it into a ring by defining the product of two endomorphisms of M simply to be their composition. Now I claim that End_R(M) is in fact an R-algebra: indeed, we define rθ for r ∈ R, θ ∈ End_R(M) by setting

    (rθ)(m) = r(θ(m)) (= θ(rm))

for all m ∈ M. Let us check that rθ really is an R-endomorphism of M. Take another s ∈ R. Then,

    s((rθ)(m)) = sr(θ(m)) = θ(srm) = θ(rsm) = (rθ)(sm).

Note we really did use the commutativity of R!

We can generalize the previous two examples. Suppose now that A is a (not necessarily commutative) R-algebra and M is a left A-module. Then, the ring D = End_A(M) is also an R-algebra, defining (rd)(m) = r(d(m)) for all r ∈ R, d ∈ D, m ∈ M.

For the final example of an algebra, let G be any group. Define the group algebra RG to be the free R-module on basis the elements of G. Thus an element of RG looks like

    Σ_{g ∈ G} r_g g

for coefficients r_g ∈ R, all but finitely many of which are zero. Multiplication is defined by the rule

    (Σ_{g ∈ G} r_g g)(Σ_{h ∈ G} s_h h) = Σ_{g,h ∈ G} r_g s_h (gh).

In other words, the multiplication in RG is induced by the multiplication in G and R-bilinearity. Note that RG contains a copy of R as a subring, namely R1_G, so is certainly an R-algebra. The construction of the group algebra RG allows the possibility of studying the abstract group G by studying the category of RG-modules; so module theory can be applied to group theory. Note finally in case G is a finite group and F is a field, the group algebra FG is a finite dimensional F-algebra.
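For a concrete feel for this multiplication, here is the smallest interesting example. Take G = C_2 = {1, g} with g² = 1, so a general element of RC_2 is a + bg with a, b ∈ R. The rule above gives

    (a + bg)(c + dg) = (ac + bd) + (ad + bc)g.

Note in particular that (1 + g)(1 − g) = 1 − g² = 0, so a group algebra can have zero divisors even when R is a field.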

4.2 Some multilinear algebra

Let F be a field (actually, everything here could be done more generally over an arbitrary commutative ring, e.g. Z, but let's stick to the case of a field for simplicity). Given a vector space V over F, we have defined V ⊗ V (meaning V ⊗_F V). Hence, we have (V ⊗ V) ⊗ V and V ⊗ (V ⊗ V). But these two are canonically isomorphic, the isomorphism mapping a generator (u ⊗ v) ⊗ w of the first to u ⊗ (v ⊗ w) in the second. From now on we'll identify the two, and write simply V ⊗ V ⊗ V for either. More generally, we'll write T^n(V) for V ⊗ ⋯ ⊗ V (n times), where the tensor is put together in any order you like, all different ways giving canonically isomorphic vector spaces.

Set

    T(V) = ⊕_{n ≥ 0} T^n(V)

and call it the tensor algebra on V. Note T^0(V) = F by convention, just a copy of the ground field, and T^1(V) = V. So far T(V) is just an F-vector space, but we can define a multiplication on it making it into an F-algebra as follows: define a map T^m(V) × T^n(V) → T^{m+n}(V) = T^m(V) ⊗ T^n(V) by (x, y) ↦ x ⊗ y, then extend linearly to give a map T(V) × T(V) → T(V). Since ⊗ is associative, the resulting multiplication is associative, and T(V) is an F-algebra.

To be quite explicit, suppose that e_i (i ∈ I) is a basis for V. Then,

    {e_{i_1} ⊗ e_{i_2} ⊗ ⋯ ⊗ e_{i_n} | n ≥ 0, i_1, ..., i_n ∈ I}

gives a basis of monomials for T(V). Multiplication of the basis elements is just by joining tensors together. In other words, T(V) looks like "non-commutative polynomials" in the e_i (i ∈ I), an arbitrary element being a linear combination of finitely many non-commuting monomials.

The importance of the tensor algebra is as follows:

Universal property of the tensor algebra. Given any F-linear map f : V → A to an F-algebra A, there is a unique F-linear ring homomorphism f' : T(V) → A such that f = f' ∘ i, where i : V → T(V) is the inclusion of V into T^1(V) ⊆ T(V). (Note: this universal property could be taken as the definition of T(V)!)

Proof. The vector space T^n(V) is defined by the following universal property: given a multilinear map f_n : V × ⋯ × V → W (n copies of V) to a vector space W, there exists a unique F-linear map f_n' : T^n(V) → W such that f_n = f_n' ∘ i_n, where i_n is the obvious map V × ⋯ × V → T^n(V). Now define the map f_n so that

    f_n(v_1, v_2, ..., v_n) = f(v_1)f(v_2) ⋯ f(v_n)

(multiplication in the ring A). Check that it is multilinear in the v_i, since A is an F-algebra. Hence it induces a unique f_n' : T^n(V) → A. Now define f' : T(V) → A by glueing all the f_n' together. In other words, using the basis notation above, we have that

    f'(e_{i_1} ⊗ ⋯ ⊗ e_{i_n}) = f(e_{i_1})f(e_{i_2}) ⋯ f(e_{i_n}).

This is an F-linear ring homomorphism, and is clearly the only option to extend f to such a thing.

We can restate the theorem as follows: let X be a basis of V. Then, given any set map f from X to an F-algebra A, there exists a unique F-linear ring homomorphism f' : T(V) → A extending f (proof: extend the map X → A to a linear map V → A in the unique way, then apply the theorem). In other words, T(V) plays the role of the free F-algebra on the set X. Then you can define algebras by generators and relations, but we won't go into that...

Next, we come to the symmetric algebra on V. Continue with V being a vector space. We'll be working with V × ⋯ × V (n times). Call a map f : V × ⋯ × V → W a symmetric multilinear map if it is multilinear, i.e.

    f(v_1, ..., cv_i + c'v_i', ..., v_n) = c f(v_1, ..., v_i, ..., v_n) + c' f(v_1, ..., v_i', ..., v_n)

for each i, and moreover

    f(v_1, ..., v_i, v_{i+1}, ..., v_n) = f(v_1, ..., v_{i+1}, v_i, ..., v_n)

for each i (so it is symmetric).

The symmetric power S^n(V), together with a distinguished map i : V × ⋯ × V → S^n(V), is characterized by the following universal property: given any symmetric multilinear map f : V × ⋯ × V → W to a vector space W, there exists a unique linear map f' : S^n(V) → W such that f = f' ∘ i. Note this is exactly the same universal property as the one defining T^n(V) in the proof of the universal property of the tensor algebra, except that we've added the word "symmetric". As usual with universal properties, S^n(V), if it exists, is unique up to canonical isomorphism (so it makes sense to speak of the nth symmetric power). Still, we need to prove existence with some sort of construction. So now define

    S^n(V) = T^n(V)/I_n

where I_n is the subspace spanned by the elements

    v_1 ⊗ ⋯ ⊗ v_i ⊗ v_{i+1} ⊗ ⋯ ⊗ v_n − v_1 ⊗ ⋯ ⊗ v_{i+1} ⊗ v_i ⊗ ⋯ ⊗ v_n

for all v_1, ..., v_n ∈ V and all i. The distinguished map i : V × ⋯ × V → S^n(V) is the obvious map V × ⋯ × V → T^n(V) followed by the quotient map T^n(V) → S^n(V).

Now take a symmetric multilinear map f : V × ⋯ × V → W. By the universal property defining T^n(V), there is a unique F-linear map f_1 : T^n(V) → W extending f. Now since f is symmetric, we have that

    f_1(v_1 ⊗ ⋯ ⊗ v_i ⊗ v_{i+1} ⊗ ⋯ ⊗ v_n − v_1 ⊗ ⋯ ⊗ v_{i+1} ⊗ v_i ⊗ ⋯ ⊗ v_n) = 0.

Hence, f_1 annihilates all generators of I_n, hence all of I_n. So f_1 factors through the quotient S^n(V) of T^n(V) to induce a unique map f' : S^n(V) → W with f = f' ∘ i. Note we have really just used the universal property of T^n followed by the universal property of quotients!

So now we've defined the nth symmetric power of a vector space. Note it is customary to denote the image in S^n(V) of (v_1, ..., v_n) ∈ V × ⋯ × V under the map i by v_1 ⋯ v_n. So v_1 ⋯ v_n is the image of v_1 ⊗ ⋯ ⊗ v_n under the quotient map T^n(V) → S^n(V). We can glue all the S^n(V) together to define the symmetric algebra on V:

    S(V) = ⊕_{n ≥ 0} S^n(V)

where again S^0(V) = F, S^1(V) = V. Since each S^n(V) is by construction a quotient of T^n(V), we see that S(V) is a quotient of T(V), i.e. S(V) = T(V)/I where I = ⊕_{n ≥ 0} I_n. I claim in fact that I is an ideal of T(V), so that S(V) is actually a quotient algebra of T(V). Actually, I claim even more, namely:

4.2.1. Lemma. I is the two-sided ideal of T(V) generated by the elements v ⊗ w − w ⊗ v for all v, w ∈ V.

Proof. Let J be the two-sided ideal generated by the given elements. By definition, I is spanned as an F-vector space by terms like

    v_1 ⊗ ⋯ ⊗ v_i ⊗ v_{i+1} ⊗ ⋯ ⊗ v_n − v_1 ⊗ ⋯ ⊗ v_{i+1} ⊗ v_i ⊗ ⋯ ⊗ v_n
        = v_1 ⊗ ⋯ ⊗ v_{i−1} ⊗ (v_i ⊗ v_{i+1} − v_{i+1} ⊗ v_i) ⊗ v_{i+2} ⊗ ⋯ ⊗ v_n.

This clearly lies in J, hence I ⊆ J. On the other hand, the generators of the ideal J lie in I, so it just remains to show that I is a two-sided ideal of T(V). Since T(V) is spanned by monomials of the form u = u_1 ⊗ ⋯ ⊗ u_m, it suffices to check that for any generator

    v = v_1 ⊗ ⋯ ⊗ v_i ⊗ v_{i+1} ⊗ ⋯ ⊗ v_n − v_1 ⊗ ⋯ ⊗ v_{i+1} ⊗ v_i ⊗ ⋯ ⊗ v_n

of I, we have that uv and vu both lie in I. But that's obvious!

The lemma shows that S(V) really is an F-algebra, as a quotient of T(V) by an ideal. Multiplication of two monomials v_1 ⋯ v_m and w_1 ⋯ w_n in S(V) is just by concatenation, giving v_1 ⋯ v_m w_1 ⋯ w_n. Moreover, since the multiplication is symmetric, we can reorder this as we like in S(V) to see that

    v_1 ⋯ v_m w_1 ⋯ w_n = w_1 ⋯ w_n v_1 ⋯ v_m.

Hence, since these "pure" elements (the images of the pure tensors which generate T(V)) span S(V) as an F-vector space, we see that S(V) is a commutative F-algebra. Note not all elements of S(V) can be written as v_1 ⋯ v_m, just as not all tensors are pure tensors.

The symmetric algebra S(V), together with the inclusion map i : V → S(V), is characterized by the following universal property (compare with the universal property of the tensor algebra):

Universal property of the symmetric algebra. Given an F-linear map f : V → A, where A is a commutative F-algebra, there exists a unique F-linear ring homomorphism f' : S(V) → A such that f = f' ∘ i.

Proof. Since A is commutative, the map f_1 : T(V) → A given by the universal property of the tensor algebra annihilates all elements v ⊗ w − w ⊗ v ∈ T²(V). These generate the ideal I by the preceding lemma. Hence, f_1 factors through the quotient S(V) of T(V).

Now suppose that V is finite dimensional with basis e_1, ..., e_m. Then, we've seen S(V) before! Indeed, let F[x_1, ..., x_m] be the polynomial ring over F in indeterminates x_1, ..., x_m. The map e_i ↦ x_i extends to a unique F-linear map V → F[x_1, ..., x_m], hence, since the polynomial ring is commutative, the universal property of symmetric algebras gives us a unique F-linear ring homomorphism S(V) → F[x_1, ..., x_m]. Now, S(V) is spanned by the images of pure tensors of the form e_{i_1} ⊗ ⋯ ⊗ e_{i_n}. Moreover, any such can be reordered in S(V) using the symmetric property to assume that i_1 ≤ ⋯ ≤ i_n. Hence, S(V) is spanned by the ordered monomials of the form e_{i_1} ⋯ e_{i_n} for all i_1 ≤ ⋯ ≤ i_n and all n ≥ 0. Clearly, such a monomial maps to x_{i_1} ⋯ x_{i_n} in the polynomial ring F[x_1, ..., x_m]. But we know (by definition) that the ordered monomials give a basis for the F-vector space F[x_1, ..., x_m]. Hence, they must in fact be linearly independent in S(V) too, and we've constructed an isomorphism:

Basis theorem for symmetric powers. Let V be an F-vector space of dimension m. Then, S(V) is isomorphic to F[x_1, ..., x_m], the isomorphism mapping a basis element e_i of V to the indeterminate x_i in the polynomial ring. In particular, S^n(V) has basis given by all ordered monomials of the form e_{i_1} ⋯ e_{i_n} with 1 ≤ i_1 ≤ ⋯ ≤ i_n ≤ m.

In this language, the universal property of symmetric algebras gives that the polynomial algebra F[x_1, ..., x_m] is the free commutative F-algebra on the generators x_1, ..., x_m. Note in particular that if V is finite dimensional, say of dimension m, the basis theorem implies

    dim S^n(V) = (m + n − 1 choose n)

(exercise!).
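As a quick check of the formula, take m = 2, so V has basis e_1, e_2. The ordered monomials of degree 2 are e_1 e_1, e_1 e_2, e_2 e_2, so dim S²(V) = 3 = (2 + 2 − 1 choose 2), as the basis theorem predicts; under the isomorphism S(V) ≅ F[x_1, x_2] they map to the degree two monomials x_1², x_1 x_2, x_2².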

The last important topic in multilinear algebra I want to cover is the exterior algebra. The construction goes in much the same way as the symmetric algebra; however, unlike there, we do not have a known object like the polynomial algebra to compare it with, so we have to work harder to get the analogue of the basis theorem for symmetric powers.

Start with V being any vector space. Define K to be the two-sided ideal of the tensor algebra T(V) generated by the elements {x ⊗ x | x ∈ V}. Note that

    (v + w) ⊗ (v + w) = v ⊗ v + w ⊗ v + v ⊗ w + w ⊗ w.

So K also contains all the elements {v ⊗ w + w ⊗ v | v, w ∈ V} automatically. Define the exterior (or Grassmann) algebra ∧(V) to be the quotient algebra

    ∧(V) = T(V)/K.

We'll write v_1 ∧ ⋯ ∧ v_n for the image in ∧(V) of the pure tensor v_1 ⊗ ⋯ ⊗ v_n ∈ T(V). So now we have anti-symmetric properties like

    v_1 ∧ ⋯ ∧ v_i ∧ v_i ∧ ⋯ ∧ v_n = 0

and

    v_1 ∧ ⋯ ∧ v_i ∧ v_{i+1} ∧ ⋯ ∧ v_n = −v_1 ∧ ⋯ ∧ v_{i+1} ∧ v_i ∧ ⋯ ∧ v_n.

Since the ideal K is generated by homogeneous elements, K = ⊕_{n ≥ 0} K_n where K_n = K ∩ T^n(V). It follows that

    ∧(V) = ⊕_{n ≥ 0} ∧^n(V)

where ∧^n(V) = T^n(V)/K_n is the subspace spanned by the images of all pure tensors of degree n. We'll call ∧^n(V) the nth exterior power.

The exterior power ∧^n(V), together with the distinguished map i : V × ⋯ × V → ∧^n(V), (v_1, ..., v_n) ↦ v_1 ∧ ⋯ ∧ v_n, is characterized by the following universal property. First, call a multilinear map f : V × ⋯ × V → W to a vector space W alternating if we have that f(v_1, ..., v_j, v_{j+1}, ..., v_n) = 0 whenever v_j = v_{j+1} for some j. The map i : V × ⋯ × V → ∧^n(V) is multilinear and alternating. Moreover, given any other multilinear, alternating map f : V × ⋯ × V → W, there exists a unique linear map f' : ∧^n(V) → W such that f = f' ∘ i. The proof is the same as for the symmetric powers; you should compare the two constructions!

Using this all-important universal property, we can now prove:

Basis theorem for exterior powers. Suppose that V is finite dimensional, with basis e_1, ..., e_m. Then, ∧^n(V) has basis given by all monomials

    e_{i_1} ∧ ⋯ ∧ e_{i_n}

for all sequences 1 ≤ i_1 < ⋯ < i_n ≤ m. In particular,

    dim ∧^n(V) = (m choose n),

and is zero for n > m.

Proof. Since ∧^n(V) is a quotient of T^n(V), we certainly have that ∧^n(V) is spanned by all monomials e_{i_1} ∧ ⋯ ∧ e_{i_n}. Now using the anti-symmetric properties of ∧, we get that it is spanned by the given strictly ordered monomials. The problem is to prove that the given elements are linearly independent. For this, we proceed by induction on n, the case n = 1 being immediate since ∧^1(V) = V. Now consider n > 1.

Let f_1, ..., f_m be the basis of the dual space V* dual to the given basis e_1, ..., e_m of V. For each i = 1, ..., m, we wish to define a map F_i : ∧^n(V) → ∧^{n−1}(V). Start with the multilinear, alternating map f_i' : V × ⋯ × V → ∧^{n−1}(V) defined by

    f_i'(v_1, ..., v_n) = Σ_{j=1}^n (−1)^j f_i(v_j) v_1 ∧ ⋯ ∧ v_{j−1} ∧ v_{j+1} ∧ ⋯ ∧ v_n.

It is clearly multilinear. To check it is alternating, suppose v_j = v_{j+1}; then every term other than the jth and (j+1)th contains the factor v_j ∧ v_{j+1} = 0, so we just need to see that

    f_i'(v_1, ..., v_j, v_{j+1}, ..., v_n)
        = (−1)^j f_i(v_j) v_1 ∧ ⋯ ∧ v_{j−1} ∧ v_{j+1} ∧ ⋯ ∧ v_n + (−1)^{j+1} f_i(v_{j+1}) v_1 ∧ ⋯ ∧ v_j ∧ v_{j+2} ∧ ⋯ ∧ v_n,

which is zero, the two terms being equal but of opposite sign. Now the universal property of ∧^n gives that f_i' induces a unique linear map F_i : ∧^n(V) → ∧^{n−1}(V).

Now we show that the given elements are linearly independent. Take a linear relation

    Σ_{i_1 < ⋯ < i_n} a_{i_1,...,i_n} e_{i_1} ∧ ⋯ ∧ e_{i_n} = 0.

Apply the linear map F_1, which annihilates all e_j except for e_1 by definition. We get (up to an overall sign):

    Σ_{1 < i_2 < ⋯ < i_n} a_{1,i_2,...,i_n} e_{i_2} ∧ ⋯ ∧ e_{i_n} = 0.

By the induction hypothesis, all such monomials are linearly independent, hence all a_{1,i_2,...,i_n} are zero. Now apply F_2 in the same way to get that all a_{2,i_2,...,i_n} are zero, etc.

Now let's focus on a special case. Suppose that dim V = n and consider ∧^n(V). Its dimension, by the theorem, is (n choose n) = 1, and it has basis just the element e_1 ∧ ⋯ ∧ e_n. Let f : V → V be a linear map. Define f' : V × ⋯ × V → ∧^n(V) by

    f'(v_1, ..., v_n) = f(v_1) ∧ ⋯ ∧ f(v_n).

One easily checks that this is multilinear and alternating. Hence, it induces a unique linear map, which we'll denote ∧^n f : ∧^n(V) → ∧^n(V). But ∧^n(V) is one dimensional, so such a linear map is just multiplication by a scalar. In other words, there is a unique scalar, which we'll call det f, determined by the equation

    f(e_1) ∧ ⋯ ∧ f(e_n) = (det f) e_1 ∧ ⋯ ∧ e_n.

Of course, det f is exactly the determinant of the linear transformation f. Note our definition of determinant is basis free, always a good thing. You can see that what we're calling det f is the same as usual, as follows. Suppose the matrix of f in our fixed basis is A, defined from

    f(e_j) = Σ_i A_{i,j} e_i.

Then,

    f(e_1) ∧ ⋯ ∧ f(e_n) = Σ_{i_1,...,i_n} A_{i_1,1} A_{i_2,2} ⋯ A_{i_n,n} e_{i_1} ∧ ⋯ ∧ e_{i_n}.

Now terms on the right hand side are zero unless i_1, ..., i_n are all distinct, i.e. unless (i_1, ..., i_n) = (g(1), g(2), ..., g(n)) for some permutation g ∈ S_n. So:

    f(e_1) ∧ ⋯ ∧ f(e_n) = Σ_{g ∈ S_n} A_{g(1),1} A_{g(2),2} ⋯ A_{g(n),n} e_{g(1)} ∧ ⋯ ∧ e_{g(n)}.

Finally,

    e_{g(1)} ∧ ⋯ ∧ e_{g(n)} = sgn(g) e_1 ∧ ⋯ ∧ e_n,

where sgn denotes the sign of a permutation. So:

    f(e_1) ∧ ⋯ ∧ f(e_n) = (Σ_{g ∈ S_n} sgn(g) A_{g(1),1} A_{g(2),2} ⋯ A_{g(n),n}) e_1 ∧ ⋯ ∧ e_n.

This shows that

    det f = Σ_{g ∈ S_n} sgn(g) A_{g(1),1} A_{g(2),2} ⋯ A_{g(n),n},

which is exactly the usual Leibniz (sum over permutations) formula for the determinant.

Here's a final result to illustrate how nicely the universal property definition of determinant gives its properties:

Multiplicativity of determinant. For linear transformations f, g : V → V, we have that det(f ∘ g) = det f det g.

Proof. By definition, ∧^n(f ∘ g) is the map uniquely determined by

    (∧^n(f ∘ g))(v_1 ∧ ⋯ ∧ v_n) = f(g(v_1)) ∧ ⋯ ∧ f(g(v_n)).

But ∧^n f ∘ ∧^n g also satisfies this equation. So ∧^n(f ∘ g) = ∧^n f ∘ ∧^n g. The left hand side is scalar multiplication by det(f ∘ g); the right hand side is scalar multiplication by det f det g.

4.3 Chain conditions

Let M be a (left or right) R-module. Then, M is called Noetherian if it satisfies the ascending chain condition (ACC) on submodules. This means that every ascending chain

    M_1 ⊆ M_2 ⊆ M_3 ⊆ ⋯

of submodules of M eventually stabilizes, i.e. M_n = M_{n+1} = M_{n+2} = ⋯ for sufficiently large n. Similarly, M is called Artinian if it satisfies the descending chain condition (DCC) on submodules. So every descending chain

    M_1 ⊇ M_2 ⊇ M_3 ⊇ ⋯

of submodules of M eventually stabilizes, i.e. M_n = M_{n+1} = M_{n+2} = ⋯ for sufficiently large n.

A ring R is called left (resp. right) Noetherian if it is Noetherian viewed as a left (resp. right) R-module. Similarly, R is called left (resp. right) Artinian if it is Artinian viewed as a left (resp. right) R-module. In the case of commutative rings, we can omit the "left" or "right" here, but we cannot in general, as the pathological examples on the homework show.

In this chapter, we'll mainly be concerned with the Artinian property, but it doesn't make sense to introduce one chain condition without the other. We will discuss Noetherian rings in detail in the chapter on commutative algebra. By the way, you should think of the Noetherian property as a quite weak finiteness property on a ring, whereas the Artinian property is rather strong (see Hopkins' theorem below for the justification of this).

We already know plenty of examples of Noetherian and Artinian rings. For instance, any PID is Noetherian (Lemma 2.4.1), but it need not be Artinian (e.g. Z is not Artinian: (p) ⊋ (p²) ⊋ ⋯ is an infinite descending chain). The ring Z_{p^n} for p prime is both Noetherian and Artinian: indeed, here there are only finitely many ideals in total, namely the (p^i) for 0 ≤ i ≤ n.

The main source of Artinian rings is as follows. Let F be a field and suppose that R is an F-algebra. Then I claim that every R-module M which is finite dimensional as an F-vector space is both Artinian and Noetherian. Well, R-submodules of M are in particular F-vector subspaces. And clearly in a finite dimensional vector space, you cannot have infinite chains of proper subspaces. So finite dimensionality of M does the job immediately! In particular, if the F-algebra R is finite dimensional as a vector space over F, then R is both left and right Artinian and Noetherian. Now you see why finite dimensional algebras over a field (e.g. M_n(F), the group algebra FG for G a finite group, etc.) are particularly nice things!

4.3.1. Lemma. Let 0 → K → M → Q → 0 be a short exact sequence of R-modules, with maps i : K → M and π : M → Q. Then M is Noetherian (resp. Artinian) if and only if both K and Q are Noetherian (resp. Artinian).

Proof. I just prove the result for the Noetherian property, the Artinian case being analogous. First, suppose M satisfies ACC. Then obviously K does, as it is isomorphic to a submodule of M. Similarly Q does, by the lattice isomorphism theorem for modules. Conversely, suppose K and Q both satisfy ACC. Let M_1 ⊆ M_2 ⊆ ⋯ be an ascending chain of R-submodules of M. Set

    K_i = i^{−1}(i(K) ∩ M_i),    Q_i = π(M_i).

Then K_1 ⊆ K_2 ⊆ ⋯ is an ascending chain of submodules of K, and Q_1 ⊆ Q_2 ⊆ ⋯ is an ascending chain of submodules of Q. So by assumption, there exists n ≥ 1 such that K_N = K_n and Q_N = Q_n for all N ≥ n. Now, we have evident short exact sequences

    0 → K_n → M_n → Q_n → 0

and

    0 → K_N → M_N → Q_N → 0

for each N ≥ n. Letting α : K_n → K_N, β : M_n → M_N and γ : Q_n → Q_N be the inclusions, one obtains a commutative diagram with α and γ being isomorphisms. Then the five lemma implies that β is surjective, hence M_N = M_n for all N ≥ n, as required.

4.3.2. Theorem. If R is left (resp. right) Noetherian, then every finitely generated left (resp. right) R-module is Noetherian. Similarly, if R is left (resp. right) Artinian, then every finitely generated left (resp. right) R-module is Artinian.

Proof. Let's do this for the Artinian case, the Noetherian case being similar. So assume that R is left Artinian and M is a finitely generated left R-module. Then, M is a quotient of a free R-module with a finite basis. So by Lemma 4.3.1 it suffices to show that R^n is an Artinian left R-module. We proceed by induction on n, the case n = 1 being given. For n > 1, the submodule of R^n spanned by the first (n − 1) basis elements is isomorphic to R^{n−1}, and the quotient by this submodule is isomorphic to R. By induction both R^{n−1} and R are Artinian. Hence, R^n is Artinian by Lemma 4.3.1.

The next results are special ones about Noetherian rings (but see Hopkins' theorem below!).

4.3.3. Lemma. Let R be a left Noetherian ring and M be a finitely generated left R-module. Then every R-submodule of M is also finitely generated.

Proof. By Theorem 4.3.2, M is Noetherian. Let N be an arbitrary R-submodule of M and let A be the set of finitely generated R-submodules of M contained in N. Note A is non-empty, as the zero module is certainly there! Now I claim that A contains a maximal element. Well, pick M_1 ∈ A. If M_1 is maximal in A, we are done. Else, we can find M_2 ∈ A with M_1 ⊊ M_2. Repeat. The process must terminate, else we construct an infinite ascending chain of submodules of N ⊆ M, contradicting the fact that M is Noetherian. (Warning: as with all such arguments, we have secretly appealed to the axiom of choice here!)

So now let N' be a maximal element of A. Say N' is generated by m_1, ..., m_n. Now take any m ∈ N. Then, (m_1, ..., m_n, m) is a finitely generated submodule of M contained in N and containing N'. By maximality of N', we therefore have that N' = (m_1, ..., m_n, m), i.e. m ∈ N'. This shows that N = N', hence N is finitely generated.

4.3.4. Corollary. Let R be a ring. Then the following are equivalent:
(i) R is left Noetherian;
(ii) every left ideal of R is finitely generated.
(Similar statements hold on the right, of course.)

Proof. (i) ⇒ (ii). This is a special case of Lemma 4.3.3, taking M = R.
(ii) ⇒ (i). Let I_1 ⊆ I_2 ⊆ ⋯ be an ascending chain of left ideals of R. Then, I = ∪_{n ≥ 1} I_n is also a left ideal of R, hence finitely generated, by a_1, ..., a_m say. Then for some sufficiently large n, all of a_1, ..., a_m lie in I_n. Hence I_N = I_n = I for all N ≥ n, and R is Noetherian.

Now you see that for commutative rings, "Noetherian" is an obvious generalization of "PID". Instead of insisting that all ideals are generated by a single element, one has that every ideal is generated by finitely many elements.

4.4 Wedderburn structure theorems

Now that we have the basic language of algebras and of Artinian rings and modules, we can begin to discuss the structure of rings. The first step is to understand the structure of semisimple rings. Recall that an R-module M is simple or irreducible if it is non-zero and has no submodules other than M and (0).

Schur's lemma. Let M be a simple R-module. Then, End_R(M) is a division ring.

Proof. Let f : M → M be a non-zero R-endomorphism of M. Then, ker f and im f are both R-submodules of M, with ker f ≠ M and im f ≠ (0) since f is non-zero. Hence, since M is simple, ker f = (0) and im f = M. This shows that f is a bijection, hence invertible.

Throughout the section, we will also develop parallel but stronger versions of the results dealing with finite dimensional algebras over algebraically closed fields (recall a field F is algebraically closed if every non-constant monic f(X) ∈ F[X] has a root in F).

Relative Schur's lemma. Let F be an algebraically closed field and A be an F-algebra. Let M be a simple A-module which is finite dimensional as an F-vector space. Then, End_A(M) = F.

Proof. Let f : M → M be an A-endomorphism of M. Then in particular, f : M → M is an F-linear endomorphism of a finite dimensional vector space. Since F is algebraically closed, f has an eigenvector v_λ ∈ M of eigenvalue λ ∈ F (because the characteristic polynomial of f has a root in F).

Now consider f − λ id_M. It is also an A-endomorphism of M, and moreover its kernel is non-zero, as v_λ is annihilated by f − λ id_M. Hence, since M is irreducible, the kernel of f − λ id_M is all of M, i.e. f = λ id_M. This shows that the only A-endomorphisms of M are the scalars, as required.

Now, let R be a non-zero ring. Call R left semisimple if R is semisimple as a left R-module (recall section 3.4). Similarly, R is right semisimple if R is semisimple as a right R-module. Finally, R is simple if it has no two-sided ideals other than R and (0).

Here's a lemma from the previous chapter that should be quite familiar by now.

4.4.1. Lemma. Let A be an R-algebra, M a left A-module and D = End_A(M). Then,

    End_A(M^n) ≅ M_n(D)

as R-algebras.

Proof. Let us write elements of M^n as column vectors with entries m_1, ..., m_n ∈ M. Suppose we have f ∈ End_A(M^n). Let f_{i,j} : M → M be the map sending m ∈ M to the ith coordinate of the vector obtained by applying f to the column vector with m in the jth row and zeros elsewhere. Then, f_{i,j} is an A-module homomorphism, so is an element of D. Moreover, applying f to a column vector with entries m_1, ..., m_n is the same as multiplying that column vector on the left by the matrix (f_{i,j}) (matrix multiplication!), so f is uniquely determined by the f_{i,j}. Thus, we obtain an R-algebra isomorphism f ↦ (f_{i,j}) between End_A(M^n) and M_n(D).

Wedderburn's first structure theorem. The following conditions on a non-zero ring R are equivalent:
(i) R is simple and left Artinian;
(i′) R is simple and right Artinian;
(ii) R is left semisimple and all simple left R-modules are isomorphic;
(ii′) R is right semisimple and all simple right R-modules are isomorphic;
(iii) R is isomorphic to the matrix ring M_n(D) for some n ≥ 1 and D a division ring.
Moreover, the integer n and the division ring D in (iii) are determined uniquely by R (up to isomorphism).

Proof. First, note the condition (iii) is left-right symmetric. So let us just prove the equivalence of (i), (ii) and (iii).

(i) ⇒ (ii). Let U be a minimal left ideal of R; this exists because R is left Artinian, so we have DCC on left ideals! Then, U = Rx for any non-zero x ∈ U, and Rx is a simple left R-module. Since R is a simple ring, the non-zero two-sided ideal RxR must be all of R. Hence,

    R = RxR = Σ_{a ∈ R} Rxa.

Note each Rxa is a homomorphic image of the simple left R-module Rx, so is either isomorphic to Rx or zero. Thus we have written R as a sum of simple left R-modules, so by Lemma 3.4.1, there exists a subset S ⊆ R such that

    R = ⊕_{a ∈ S} Rxa,

with each Rxa for a ∈ S being isomorphic to U. Hence, R is left semisimple. Moreover, if M is any simple left R-module, then M ≅ R/J for a maximal left ideal J of R. We can pick a ∈ S such that xa ∉ J. Then, the quotient map R → R/J maps the simple submodule Rxa of R to a non-zero submodule of M. Hence, using simplicity, M ≅ Rxa. This shows all simple left R-modules are isomorphic to U.

(ii) ⇒ (iii). Let U be a simple left R-module. We have that R = ⊕_{a ∈ S} Ra for some subset S of R, where each left R-module Ra is isomorphic to U. I claim that S is finite. Indeed, 1_R lies in ⊕_{a ∈ S} Ra, hence in Ra_1 ⊕ ⋯ ⊕ Ra_n for some finite subset {a_1, ..., a_n} of S. But 1_R generates R as a left R-module, so in fact {a_1, ..., a_n} = S. Thus, R ≅ U^n as left R-modules, with n uniquely determined as the composition length of R as a left R-module. Now, by Schur's lemma, End_R(U) = D, where D is a division ring uniquely determined up to isomorphism as the endomorphism ring of a simple left R-module. Hence, applying Lemma 4.4.1,

    End_R(R) ≅ End_R(U^n) ≅ M_n(D),

where End_R(R) denotes the endomorphisms of R as a left R-module. Now finally, we observe that

    End_R(R) ≅ R^op,

the isomorphism in the forward direction being determined by evaluation at 1_R. To see that this is an isomorphism, one constructs the inverse map, namely, the map sending r ∈ R to f_r : R → R with f_r(s) = sr for each s ∈ R (f_r is "right multiplication by r"). Now we have shown that R^op ≅ M_n(D). Hence, noting that matrix transposition is an isomorphism between M_n(D)^op and M_n(D^op), we get that R ≅ M_n(D^op), and D^op is also a division ring.

(iii) ⇒ (i). We know that M_n(D) is simple, as it is Morita equivalent to D, which is a division ring (or you can prove this directly!). It is left Artinian, for instance, because M_n(D) is finite dimensional over the division ring D. This gives (i).

Corollary. Every simple left (or right) Artinian ring R is an algebra over a field.

Proof. Observe M_n(D) is an algebra over Z(D), which is a field.

The relative version for finite dimensional algebras over algebraically closed fields is as follows:

Relative first structure theorem. Let F be an algebraically closed field and A be a finite dimensional F-algebra (hence automatically left and right Artinian). Then the following are equivalent:
(i) A is simple;
(ii) A is left semisimple and all simple left A-modules are isomorphic;
(ii′) A is right semisimple and all simple right A-modules are isomorphic;
(iii) A is isomorphic to the matrix ring M_n(F) for some n (indeed, n² = dim A).

Proof. The proof is the same, except that one gets that the division ring D equals F, since one can use the stronger relative Schur's lemma.

Wedderburn's second structure theorem. Every left (or right) semisimple ring R is isomorphic to a finite product of matrix rings over division rings:

    R ≅ M_{n_1}(D_1) × ⋯ × M_{n_r}(D_r)

for uniquely determined n_i ≥ 1 and division rings D_i. Conversely, any such ring is both left and right semisimple.

Proof. Since R is left semisimple, we can decompose R, as a left R-module, as ⊕_{i ∈ I} U_i where each U_i is simple. Note 1_R already lies in a sum of finitely many of the U_i; hence the index set I is actually finite. Now gather together isomorphic U_i's to write

    R = H_1 ⊕ ⋯ ⊕ H_m

where H_i ≅ U_i^{n_i} for irreducible R-modules U_i and integers n_i ≥ 1, with U_i ≇ U_j for i ≠ j. By Schur's lemma, D_i = End_R(U_i) is a division ring. Moreover, there are no non-zero R-module homomorphisms between H_i and H_j for i ≠ j, because none of their composition factors are isomorphic. Hence:

    R ≅ End_R(R)^op ≅ End_R(H_1 ⊕ ⋯ ⊕ H_m)^op
      ≅ End_R(H_1)^op × ⋯ × End_R(H_m)^op
      ≅ M_{n_1}(D_1)^op × ⋯ × M_{n_m}(D_m)^op
      ≅ M_{n_1}(D_1^op) × ⋯ × M_{n_m}(D_m^op).

The integers n_i are uniquely determined as the multiplicities of the irreducible left R-modules in a composition series of R as a left module, while the division rings D_i are uniquely determined up to isomorphism as the endomorphism rings of the simple left R-modules.

For the converse, we need to show that a finite product of matrix rings over division rings is left and right semisimple. This follows because a single matrix ring M_n(D) over a division ring is both left and right semisimple according to the first Wedderburn structure theorem, and a finite direct product of semisimple rings is again semisimple.

The theorem shows:

Corollary. A ring R is left semisimple if and only if it is right semisimple.

So we can now just call R semisimple if it is either left or right semisimple in the old sense. Also:

Corollary. Any semisimple ring is both left and right Artinian.

Proof. A product of finitely many matrix rings over division rings is left and right Artinian.

Finally, we need to give the relative version:

Relative second structure theorem. Let F be an algebraically closed field and A be a finite dimensional F-algebra that is semisimple. Then,

    A ≅ M_{n_1}(F) × ⋯ × M_{n_r}(F)

for uniquely determined integers n_i ≥ 1.

Proof. Just use the relative Schur's lemma in the same proof.

This theorem lays bare the structure of finite dimensional semisimple algebras over algebraically closed fields: A is uniquely determined up to isomorphism by the integers n_i, which are precisely the dimensions of the simple A-modules.

4.5 The Jacobson radical

I want to explain in this section how understanding of the case of semisimple rings gives information about more general rings. The key notion is that of the Jacobson radical, which will be defined using the following equivalent properties:

Jacobson radical theorem. Let R be a ring and a ∈ R. The following are equivalent:
(i) a annihilates every simple left R-module;
(i′) a annihilates every simple right R-module;
(ii) a lies in every maximal left ideal of R;
(ii′) a lies in every maximal right ideal of R;
(iii) 1 − xa has a left inverse for every x ∈ R;
(iii′) 1 − ay has a right inverse for every y ∈ R;
(iv) 1 − xay is a unit for every x, y ∈ R.

Proof. Since (iv) is left-right symmetric, I will only prove the equivalence of (i)-(iv).

(i) ⇒ (ii). If I is a maximal left ideal of R, then R/I is a simple left R-module. So, a(R/I) = 0, i.e. a ∈ I.

(ii) ⇒ (iii). Assume (ii) holds but 1 − xa does not have a left inverse for some x ∈ R. Then, R(1 − xa) ≠ R. So, there exists a maximal left ideal I with R(1 − xa) ⊆ I < R. But then, 1 − xa ∈ I, and a ∈ I by assumption. Hence, 1 ∈ I, so I = R, a contradiction.

(iii) ⇒ (i). Let M be a simple left R-module. Take u ∈ M. If au ≠ 0, then Rau = M as M is simple, so u = rau for some r ∈ R. But this implies that (1 − ra)u = 0, hence, since 1 − ra has a left inverse by assumption, we get that u = 0. This contradiction shows that in fact au = 0 for all u ∈ M, i.e. aM = 0.

(iv) ⇒ (iii). This is trivial (take y = 1).

(i),(iii) ⇒ (iv). For any y ∈ R and any simple left R-module M, ayM ⊆ aM = 0. So ay also satisfies the condition in (i), hence (iii). So for every x ∈ R, 1 − xay has a left inverse, 1 − b say. The equation (1 − b)(1 − xay) = 1 implies b = (b − 1)xay. Hence bM = 0 for all simple left R-modules M, so b satisfies (i), hence (iii). So, 1 − b has a left inverse, 1 − c say. Then,

    1 − c = (1 − c)(1 − b)(1 − xay) = 1 − xay,

hence c = xay. Now we have that 1 − b is a left inverse to 1 − xay, and 1 − xay is a left inverse to 1 − b. Hence, 1 − xay is a unit with inverse 1 − b.
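To see conditions (ii) and (iii) in action, consider a small example: take R = Z and a = 2. With x = 1 we get 1 − 2 = −1, a unit, but with x = 2 we get 1 − 4 = −3, which has no left inverse in Z, so a = 2 fails (iii). Correspondingly, 2 does not lie in the maximal ideal (3), so it fails (ii) too. The same reasoning applies to any non-zero a ∈ Z (choose a prime p not dividing a; then a ∉ (p)), so only a = 0 satisfies the equivalent conditions in Z.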

Now define the Jacobson radical J(R) to be

    J(R) = {a ∈ R | a satisfies the equivalent conditions in the theorem}.

Thus, for instance, using (i), J(R) is the intersection of the annihilators in R of all the simple left R-modules; or, using (ii),

    J(R) = ∩ I,

the intersection over all maximal left ideals I of R. This implies that J(R) is a left ideal of R. But equally well, using (ii′),

    J(R) = ∩ I,

the intersection over all maximal right ideals I of R, so that J(R) is a right ideal of R. Hence, J(R) is a two-sided ideal of R.

The following result is a basic trick in ring theory and maybe gives a first clue as to why the Jacobson radical is so important.

Nakayama's lemma. Let R be a ring and M be a finitely generated left R-module. If J(R)M = M, then M = 0.

Proof. Suppose that M is non-zero, and set J = J(R) for short. Let X = {m_1, ..., m_n} be a minimal set of generators of M, so m_1 ≠ 0. Since JM = M, we can write

    m_1 = j_1 m_1 + ⋯ + j_n m_n

for some j_i ∈ J. So,

    (1_R − j_1) m_1 = j_2 m_2 + ⋯ + j_n m_n.

But 1_R − j_1 is a unit by the definition of J(R) (condition (iv) of the theorem). So we get that

    m_1 = (1_R − j_1)^{−1} j_2 m_2 + ⋯ + (1_R − j_1)^{−1} j_n m_n,

so that m_2, ..., m_n already generate M. This contradicts the minimality of the initial set of generators chosen.

We will apply Nakayama's lemma in the next chapter, but actually never in this chapter. The remainder of the section is concerned with the Jacobson radical of an Artinian ring!!! All these results are false if the ring is not Artinian...

First characterization of the Jacobson radical for Artinian rings. Suppose that R is left (or right) Artinian. Then, J(R) is the unique smallest two-sided ideal of R such that R/J(R) is a semisimple ring.

Proof. Pick a maximal left ideal I_1 of R. Then (if possible) pick a maximal left ideal I_2 with I_2 ⊉ I_1 (hence I_1 ∩ I_2 is strictly smaller than I_1). Then (if possible) pick a maximal left ideal I_3 with I_3 ⊉ I_1 ∩ I_2 (hence I_1 ∩ I_2 ∩ I_3 is strictly smaller than I_1 ∩ I_2). Keep going! The process must terminate after finitely many steps, else you construct an infinite descending chain

    R ⊋ I_1 ⊋ I_1 ∩ I_2 ⊋ I_1 ∩ I_2 ∩ I_3 ⊋ ⋯

of left ideals of R, contradicting the fact that R is left Artinian. We thus obtain finitely many maximal left ideals I_1, I_2, ..., I_r of R such that I_1 ∩ ⋯ ∩ I_r is contained in every maximal left ideal of R. In other words, J(R) = I_1 ∩ ⋯ ∩ I_r, so the Jacobson radical of an Artinian ring is the intersection of finitely many maximal left ideals.

Consider the map

    R → R/I_1 ⊕ ⋯ ⊕ R/I_r,  a ↦ (a + I_1, ..., a + I_r).

It is an R-module map with kernel I_1 ∩ ⋯ ∩ I_r = J(R). So it induces an embedding of R/J(R) into R/I_1 ⊕ ⋯ ⊕ R/I_r. The right hand side is a semisimple R-module, as it is a direct sum of simples, hence R/J(R) is also a semisimple R-module. This shows that the quotient R/J(R) is a semisimple ring.

Now let K be another two-sided ideal of R such that R/K is a semisimple ring. By the lattice isomorphism theorem, we can write

    R/K = ⊕_{i ∈ I} B_i/K

where each B_i is a left ideal of R containing K with B_i/K simple, Σ_{i ∈ I} B_i = R and B_i ∩ (Σ_{j ≠ i} B_j) = K for each i. Set C_i = Σ_{j ≠ i} B_j for short. Then,

    R/C_i = (C_i + B_i)/C_i ≅ B_i/(B_i ∩ C_i) = B_i/K.

This is simple, so C_i is a maximal left ideal of R. Hence, each C_i ⊇ J(R). So K = ∩_i C_i also contains J(R). Thus J(R) is the unique smallest such ideal.

Thus you see the first step to understanding the structure of an Artinian ring: understand the semisimple ring R/J(R) in the sense of Wedderburn's structure theorem.

4.5.1. Corollary. Let R be a left Artinian ring and M be a left R-module. Then, M is semisimple if and only if J(R)M = 0.

Proof. If M is semisimple, it is a direct sum of simples. Now, J(R) annihilates all simple left R-modules by definition, hence J(R) annihilates M. Conversely, if J(R) annihilates M, then we can view M as an R/J(R)-module by defining (a + J(R))m = am for all a ∈ R, m ∈ M (one needs J(R) to annihilate M for this to be well-defined!). Since R/J(R) is semisimple by the previous theorem, M is a semisimple R/J(R)-module. But the R-module structure on M is just obtained by lifting the R/J(R)-module structure, so this means that M is semisimple as an R-module too.

Second characterization of the Jacobson radical for Artinian rings. Let R be a left (or right) Artinian ring. Then, J(R) is a nilpotent two-sided ideal of R (i.e. J(R)^n = 0 for some n > 0) and is equal to the sum of all nilpotent left ideals of R.

Proof. Suppose x ∈ R is nilpotent, say x^n = 0. Then, 1 − x is a unit; indeed,

    (1 − x)(1 + x + x² + ⋯ + x^{n−1}) = 1.

So if I is a nilpotent left ideal of R, then every a ∈ I satisfies condition (iii) of the Jacobson radical theorem: for any x ∈ R, xa lies in I, so is nilpotent, so 1 − xa is a unit. This shows every nilpotent left ideal of R is contained in J(R).

It therefore just remains to prove that J(R) is itself a nilpotent ideal. Set J = J(R). Consider the chain J ⊇ J² ⊇ J³ ⊇ ⋯ of two-sided ideals of R. Since R is left Artinian, the chain stabilizes, so J^k = J^{k+1} = ⋯ for some k. Set I = J^k, so I² = I. We need to prove that I = 0. Well, suppose for a contradiction that I ≠ 0. Choose a left ideal K of R minimal such that IK ≠ 0 (using the fact that R is left Artinian). Take any a ∈ K with Ia ≠ 0. Then, I²a = Ia ≠ 0, and Ia is a left ideal contained in K, so Ia coincides with K by the minimality of K. Hence, a ∈ K lies in Ia, so we can write a = xa for some x ∈ I. So, (1 − x)a = 0. But x ∈ J, so 1 − x is a unit, hence a = 0, which is a contradiction.
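For a small example illustrating both characterizations at once, let R be the ring of upper triangular 2 × 2 matrices over a field F; this is finite dimensional over F, hence left and right Artinian. Let J ⊆ R be the set of strictly upper triangular matrices. Then J is a two-sided ideal with J² = 0, so J ⊆ J(R) by the second characterization; and R/J ≅ F × F (project onto the two diagonal entries), which is semisimple, so J(R) ⊆ J by the first. Hence J(R) = J.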

4.5.2. Corollary. In particular, a left (or right) Artinian ring R is semisimple if and only if it has no non-zero nilpotent ideals.

We end with an important application:

Hopkins' theorem. Let R be a left Artinian ring and M be a left R-module. Then the following are equivalent:
(i) M is Artinian;
(ii) M is Noetherian;
(iii) M has a composition series;
(iv) M is finitely generated.

Proof. (i) ⇒ (iii) and (ii) ⇒ (iii). Let J = J(R). Then, J is nilpotent by the second characterization of the Jacobson radical, so J^n = 0 for some n. Consider

    M ⊇ JM ⊇ J²M ⊇ ⋯ ⊇ J^n M = 0.

It is a descending chain of R-submodules of M. Set F_i = J^i M / J^{i+1} M. Then, F_i is annihilated by J, hence is a semisimple left R-module. Now, if M is Artinian (resp. Noetherian), so is each F_i, so each F_i is in fact a direct sum of finitely many simple R-modules. Thus, each F_i obviously has a composition series, so M does too by the lattice isomorphism theorem.

(iii) ⇒ (iv). Let M = M_1 ⊋ M_2 ⊋ ⋯ ⊋ M_n = 0 be a composition series of M. Pick m_i ∈ M_i \ M_{i+1} for each i = 1, ..., n − 1. Then, the image of m_i in M_i/M_{i+1} generates M_i/M_{i+1}, as it is a simple R-module. It follows that the m_i generate M. Hence, M is finitely generated.

(iv) ⇒ (i) is Theorem 4.3.2.

(iii) ⇒ (ii) and (iii) ⇒ (i). These both follow immediately from the Jordan-Hölder theorem (actually, the Schreier refinement lemma, using that any refinement of a composition series is trivial).

Now we can show that "Artinian implies Noetherian":

4.5.3. Corollary. If R is left (resp. right) Artinian, then R is left (resp. right) Noetherian.

Proof. Apply Hopkins' theorem to R, which is an Artinian module over itself.

4.6 Character theory of finite groups

Wedderburn's theorem has a very important application to the study of finite groups; indeed, it is the starting point for the study of character theory of finite groups. In this section, I'll follow Rotman section 8.5 fairly closely.

Maschke's theorem. Let G be a finite group and F be a field either of characteristic 0 or of characteristic p > 0 with p ∤ |G|. Then, the group algebra FG is a semisimple algebra.

Proof. It is certainly enough to show that every left FG-module M is semisimple. We just need to show that every short exact sequence

    0 → K → M → Q → 0

of FG-modules splits; let π denote the given surjection M → Q. Well, pick any F-linear map θ : Q → M such that π ∘ θ = id_Q (such a θ exists since π is a surjective linear map of vector spaces). The problem is that θ need not be an FG-module map. Well, for g ∈ G, define a new F-linear map

    gθ : Q → M,  x ↦ gθ(g^{−1}x).

Then,

    (π ∘ gθ)(x) = π(gθ(g^{−1}x)) = g(π ∘ θ)(g^{−1}x) = gg^{−1}x = x,

so each gθ also satisfies π ∘ gθ = id_Q. Now define

    Avθ = (1/|G|) Σ_{g ∈ G} gθ

(note |G| is invertible in F by the assumption on the characteristic). This is another F-linear map such that π ∘ Avθ = id_Q. Moreover, Avθ is even an FG-module map: for h ∈ G,

    h Avθ(x) = (1/|G|) Σ_{g ∈ G} hgθ(g^{−1}x) = (1/|G|) Σ_{g' ∈ G} g'θ(g'^{−1}hx) = Avθ(hx),

substituting g' = hg. Hence, Avθ defines the required splitting.

4.6.1. Remark. Maschke's theorem is actually "if and only if": FG is a semisimple algebra if and only if char F = 0 or char F ∤ |G|. Let me just give one example. Recall (Corollary 4.5.2) that a left Artinian ring is semisimple if and only if it has no non-zero nilpotent ideals; this applies in particular to commutative rings, where, if x ∈ R is a nilpotent element, the ideal (x) it generates is a nilpotent ideal. So a commutative Artinian ring is semisimple if and only if it has no non-zero nilpotent elements. Now let G = C_p, the cyclic group of order p, and suppose F is a field of characteristic p (for any field of characteristic different from p, Maschke's theorem shows FG is semisimple). The group algebra FG is commutative and finite dimensional, hence a commutative Artinian ring. Consider the element

    x − 1 ∈ FG,

where x is a generator of G. We have that

    (x − 1)^p = x^p + (−1)^p = 1 + (−1)^p = 0,

since the intermediate binomial coefficients all vanish in characteristic p. Hence, x − 1 is a non-zero nilpotent element. Thus, FG is not semisimple in this case.

We are going to be interested from now on in the following situation: G is a finite group (so that FG is Artinian); the field F is of characteristic 0 (so that FG is semisimple); the field F is algebraically closed (so that the souped-up version of Wedderburn's theorem holds). Actually, I'll just assume F = C from now on. Then, we have shown that

(1)    CG ≅ M_{n_1}(C) × ⋯ × M_{n_r}(C)

where r is the number of isomorphism classes of irreducible CG-modules and n_1, ..., n_r are their dimensions. These give some interesting invariants of the group G, defined using modules... Note first (comparing the dimensions of the two sides as C-vector spaces) that

    |G| = n_1² + ⋯ + n_r².

The number r has a simple group-theoretic meaning too:

4.6.2. Lemma. The number r in (1) is equal to the number of conjugacy classes in the group G.

Proof. Let us compute dim Z(CG) in two different ways. First, since the center of M_n(C) is one dimensional, spanned by the identity matrix, dim Z(CG) = r (one for each matrix algebra in the Wedderburn decomposition). On the other hand, if

    Σ_{g ∈ G} a_g g ∈ CG

is central, then conjugating by h ∈ G you see that a_g = a_{hgh^{−1}} for all h ∈ G. Hence the coefficients a_g are constant on conjugacy classes. Hence, if C_1, ..., C_s are the conjugacy classes of G, the elements z_i = Σ_{g ∈ C_i} g form a basis {z_1, ..., z_s} for Z(CG). Hence r = dim Z(CG) = s.

4.6.3. Example. Say G = S_3. There are three conjugacy classes. Hence r = 3, i.e. there are three isomorphism classes of irreducible CS_3-modules. Moreover,

    n_1² + n_2² + n_3² = 6,

so the dimensions can only be 1, 1 and 2.

4.6.4. Example. Say G is abelian. Then there are r = |G| conjugacy classes, and n_1² + ⋯ + n_r² = r, hence each n_i = 1. This proves that there are |G| isomorphism classes of irreducible CG-modules, and all of these modules are one dimensional.

Before going any further I want to introduce a little language. A (finite dimensional complex) representation of G means a pair (V, ρ) where V is a finite dimensional complex vector space and ρ : G → GL(V) is a group homomorphism. The representation is called faithful if this map ρ is injective. Note having a representation (V, ρ) of G is exactly the same as having a finite dimensional CG-module V: given a representation (V, ρ) we get a CG-module structure on V by defining gv := ρ(g)(v); conversely, given a CG-module structure on V, we get a representation ρ : G → GL(V) by defining ρ(g) to be the linear map v ↦ gv. This is all one big tautology. But I will often switch between calling things CG-modules and calling them representations. Sorry.

Say ρ : G → GL(V) is a representation of G. Pick a basis v_1, ..., v_n for V. Then each element of GL(V) becomes an invertible n × n matrix, and in this way you can regard ρ instead as a group homomorphism ρ : G → GL_n(C). This is a matrix representation: ρ maps the group G to the group of n × n invertible matrices. You should compare the notion of matrix representation with that of permutation representation from before: that was a map from G to the symmetric group S_n for some n. It corresponded to having a G-set X of size n... It's all very analogous to what we're doing now, but sets have been replaced by vector spaces. Before, we called a permutation representation faithful if the map G → S_n was injective (then it embedded G as a subgroup of a symmetric group); now we are calling a (linear) representation faithful if the map G → GL_n(C) is injective (then it embeds G as a subgroup of the matrix group)...

4.6.5. Example. Recall that GL(C) = GL_1(C) is just the multiplicative group C^×. For any group G, there is always the trivial representation, namely, the homomorphism mapping every g ∈ G to 1 ∈ GL_1(C). This corresponds to the trivial CG-module, equal to C as a vector space with every g ∈ G acting as 1.

4.6.6. Example. For the symmetric group S_n, there is a group homomorphism sgn : S_n → {±1}. The latter group sits inside C^×, so you can view this as another 1-dimensional representation, the sign representation. The corresponding module is not isomorphic to the trivial module (providing n > 1).

Now we've constructed both of the one dimensional CS_3-modules: one is trivial, one is sign. What about the two dimensional CS_3-module? There is a simple way to construct (linear) representations of G out of permutation representations.
Suppose X = {x_1, ..., x_n} is a finite G-set. Let CX be the C-vector space on basis X. Each