
Notes on Representations of Finite Groups Ambar N. Sengupta 18th December, 2007


Contents

Preface

1 Basic Definitions
  1.1 Representations of Groups
  1.2 Representations and their Morphisms
  1.3 Sums, Products, and Tensor Products
  1.4 Invariant Subspaces and Quotients
  1.5 Irreducible Representations
  1.6 Matrix Elements
  1.7 Inner Products and Unitarity
  Exercises

2 The Group Algebra
  2.1 Definition of k[G]
  2.2 Representations of G and k[G]
  2.3 The center of k[G]
  2.4 A glimpse of some results
  2.5 k[G] is semisimple
  Exercises

3 Semisimple Modules and Rings: Structure and Representations
  3.1 Schur's Lemma
  3.2 Semisimple Modules
  3.3 Structure of Semisimple Rings
    3.3.1 Simple Rings
  3.4 Semisimple Algebras as Matrix Algebras
  3.5 Idempotents
  3.6 Modules over Semisimple Rings
  Exercises

4 The Regular Representation
  4.1 Structure of the Regular Representation
  4.2 Representations of abelian groups
  Exercises

5 Characters of Finite Groups
  5.1 Definition and Basic Properties
  5.2 Some arithmetic properties of characters
  5.3 Character of the Regular Representation
  5.4 Fourier Expansion
  5.5 Orthogonality Relations
  5.6 The Invariant Inner Product
  5.7 The Invariant Inner Product on Function Spaces
  5.8 Orthogonality Revisited
  Exercises

6 Representations of S_n
  6.1 Conjugacy Classes and Young Tableaux
  6.2 Construction of Irreducible Representations of S_n
  6.3 Some properties of Young tableaux
  6.4 Orthogonality of Young symmetrizers

7 Commutants
  7.1 The Commutant
  7.2 The Double Commutant
  Exercises

8 Decomposing a Module using the Commutant
  8.1 Joint Decomposition
  8.2 Decomposition by the Commutant
  8.3 Submodules relative to the Commutant
  Exercises

9 Schur-Weyl Duality
  9.1 The Commutant for S_n acting on V^⊗n
  9.2 Schur-Weyl Character Duality I
  9.3 Schur-Weyl Character Duality II
  Exercises

10 Representations of Unitary Groups
  10.1 The Haar Integral and Orthogonality of Characters
    10.1.1 The Weyl Integration Formula
    10.1.2 Schur Orthogonality
  10.2 Characters of Irreducible Representations
    10.2.1 Weights
    10.2.2 The Weyl Character Formula
    10.2.3 Weyl dimensional formula
    10.2.4 Representations with given weights
  10.3 Characters of S_n from characters of U(N)

11 Frobenius Induction
  11.1 Construction of the Induced Representation
  11.2 Universality of the Induced Representation
  11.3 Character of the Induced Representation
  End Notes
  Bibliography

Preface

These notes describe the basic ideas of the theory of representations of finite groups. Most of the essential structural results of the theory follow immediately from the structure theory of semisimple algebras, and so this topic occupies a long chapter. Material not covered here includes the theory of induced representations. The arithmetic properties of group characters are also not dealt with in detail.

It is not the purpose of these notes to give comprehensive accounts of all aspects of the topics covered. The objective is to see the theory of representations of finite groups as a coherent narrative, building some general structural theory and applying the ideas thus developed to the case of the symmetric group S_n. It is also not the objective to present a very efficient and fast route through the theory. For many of the ideas we pause to examine the same set of results from several different points of view. For example, in Chapter 8, we develop the theory of decomposition of a module with respect to the commutant ring in three distinct ways.

I wish to thank Thierry Lévy for many useful observations. These notes were begun at a time when I was receiving support from the US National Science Foundation grant DMS-0601141. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

Chapter 1 Basic Definitions

Traditionally, a group arises as the set of symmetries of an algebraic or geometric object. For instance, it may be of interest to consider the group of all permutations of n abstract variables X_1, ..., X_n which leave invariant some given algebraic expression(s) in these variables. Instead of permutations one could consider linear transformations, i.e. replacement of each X_i by a linear combination of these variables, which, again, leave invariant some given algebraic expressions in these variables. In this incarnation, the abstract group of transformations is represented concretely as linear transformations in an n-dimensional space.

The groups we will be concerned with in these notes are, unless otherwise noted, finite groups. In this chapter we set up the basic definitions of representations of finite groups. We will use G to denote any finite group, and k is a field over which we will consider vector spaces on which G will be represented.

1.1 Representations of Groups

A representation ρ of a group G on a vector space V associates to each element x ∈ G a linear map

ρ(x) : V → V : v ↦ ρ(x)v

such that

ρ(xy) = ρ(x)ρ(y)  for all x, y ∈ G.  (1.1)
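To make the homomorphism condition (1.1) concrete for the permutation action of S_3 on k³ (which appears as an example shortly), here is a small computational check. It is an added illustration, not part of the original notes; the helper names perm_matrix and compose are ad hoc choices, and the field is taken to be the rationals.

    # Added illustration: the permutation representation of S_3 on k^3,
    # realized with numpy, satisfies rho(xy) = rho(x) rho(y).
    import itertools
    import numpy as np

    def perm_matrix(sigma):
        """Matrix of rho(sigma): sends the basis vector e_j to e_{sigma(j)}."""
        n = len(sigma)
        M = np.zeros((n, n), dtype=int)
        for j in range(n):
            M[sigma[j], j] = 1
        return M

    def compose(s, t):
        """(s o t)(j) = s(t(j))."""
        return tuple(s[t[j]] for j in range(len(t)))

    S3 = list(itertools.permutations(range(3)))   # elements of S_3 as tuples

    # Check the homomorphism property (1.1) for all pairs x, y in S_3.
    for x in S3:
        for y in S3:
            assert np.array_equal(perm_matrix(compose(x, y)),
                                  perm_matrix(x) @ perm_matrix(y))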

We denote by End_k(V) the ring of endomorphisms of a vector space V over a field k. A representation ρ of G on V is thus a map

ρ : G → End_k(V)

satisfying (1.1). The homomorphism condition (1.1) implies

ρ(e) = I,  ρ(x⁻¹) = ρ(x)⁻¹  for all x ∈ G.

We will often say "the representation E" instead of "the representation ρ on the vector space E".

As an example, consider the group S_3 of permutations of {1, 2, 3}, acting on the three-dimensional vector space k³ by permutation of coordinates.

1.2 Representations and their Morphisms

If ρ and ρ′ are representations of G on vector spaces E and E′ over k, and

A : E → E′

is a linear map such that

ρ′(x) ∘ A = A ∘ ρ(x)  for all x ∈ G  (1.2)

then we can consider A to be a morphism from the representation ρ to the representation ρ′. The representations of G on all vector spaces over a given field k, along with all the morphisms, form a category. This is just something to keep in mind; we will construct everything explicitly, without relying on any general category-theoretic procedures.

Two representations ρ and ρ′ of G are isomorphic if there is an invertible intertwining operator between them, i.e. a linear isomorphism A : E → E′ for which

Aρ(x) = ρ′(x)A  for all x ∈ G  (1.3)

1.3 Sums, Products, and Tensor Products

If ρ and ρ′ are representations of G on E and E′, respectively, then we have the direct sum representation ρ ⊕ ρ′ on E ⊕ E′:

(ρ ⊕ ρ′)(x) = (ρ(x), ρ′(x)) ∈ End_k(E ⊕ E′)

If bases are chosen in E and E′ then the matrix for (ρ ⊕ ρ′)(x) is block diagonal, with the blocks ρ(x) and ρ′(x) on the diagonal:

[ ρ(x)    0    ]
[  0     ρ′(x) ]

This notion clearly generalizes to a direct sum (or product) of any family of representations.

We also have the tensor product ρ ⊗ ρ′ of the representations, specified through

(ρ ⊗ ρ′)(x) = ρ(x) ⊗ ρ′(x)  (1.4)

1.4 Invariant Subspaces and Quotients

A subspace W ⊂ V is said to be invariant under ρ if

ρ(x)W ⊂ W  for all x ∈ G.

In this case,

x ↦ ρ(x)|W ∈ End_k(W)

is a representation of G on W. It is a subrepresentation of ρ. Put another way, the inclusion map W → V is a morphism from ρ|W to ρ.

If W is invariant, then there is induced, in the natural way, a representation on the quotient space V/W, given by

ρ_{V/W}(x) : a + W ↦ ρ(x)a + W,  for all a ∈ V  (1.5)

Consider the representation of the permutation group S_n on the n-dimensional vector space k^n. An invariant subspace is

{(x_1, ..., x_n) ∈ k^n : x_1 + ⋯ + x_n = 0}

The complementary subspace

{(x, x, ..., x) : x ∈ k}

is also invariant.

1.5 Irreducible Representations

A representation ρ on V is irreducible if V ≠ 0 and the only invariant subspaces of V are 0 and V.

As we will see later, every representation of a finite group is a direct sum of irreducible representations. One of the major tasks of representation theory is to determine the irreducible representations of a group.

1.6 Matrix Elements

For a vector space V, we denote by V′ the dual space of all linear functionals on V. The evaluation of φ ∈ V′ on a vector v ∈ V will be denoted

φ(v) = ⟨φ, v⟩.

Consider a representation ρ of a group G on V. If {e_a}_{a∈I} is a basis of V, we denote by {e^a}_{a∈I} the dual basis in V′, satisfying

⟨e^a, e_b⟩ = δ^a_b  (1.6)

Then ρ(x) has the matrix form

[⟨e^b, ρ(x)e_a⟩]_{a,b∈I}  (1.7)

The entries of this matrix are called matrix elements of the representation; more generally, a matrix element of ρ is a function of the form

G → k : x ↦ ⟨f, ρ(x)v⟩  (1.8)

for some f ∈ V′ and v ∈ V.

1.7 Inner Products and Unitarity

Consider a representation ρ of G on E, and assume that the field k is R or C (or some subfield thereof). Consider any inner product ⟨·,·⟩ on E. (Note that in the case k = C, an inner product is conjugate linear in the second variable.) It is often useful to introduce the following inner product on E:

⟨v, w⟩′ = (1/|G|) Σ_{x∈G} ⟨ρ(x)v, ρ(x)w⟩,  (1.9)

where |G| is the number of elements in G. The division by |G| is not essential, but often convenient. The advantage of this inner product is that the representation ρ is then unitary:

⟨ρ(g)v, ρ(g)w⟩′ = ⟨v, w⟩′  (1.10)

for all g ∈ G and all v, w ∈ E. Thus, for instance, with k = C, when E is finite dimensional, for each ρ(x) there is an orthonormal basis of E with respect to which the matrix of ρ(x) is diagonal, with unit-modulus entries along the diagonal.
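The averaging construction (1.9) is easy to carry out numerically. The following sketch is an added illustration, not the author's code: it starts from an arbitrary positive-definite inner product on C³ given by a Gram matrix M, averages it over the permutation representation of S_3, and checks the unitarity property (1.10) for the averaged Gram matrix. The names perm_matrix and M_avg are ad hoc.

    # Added illustration of the averaging trick (1.9) on C^3 under S_3.
    import itertools
    import numpy as np

    def perm_matrix(sigma):
        n = len(sigma)
        M = np.zeros((n, n))
        for j in range(n):
            M[sigma[j], j] = 1
        return M

    rng = np.random.default_rng(0)
    B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    M = B.conj().T @ B + np.eye(3)            # a positive-definite Gram matrix

    G = [perm_matrix(s) for s in itertools.permutations(range(3))]
    M_avg = sum(R.conj().T @ M @ R for R in G) / len(G)   # Gram matrix of (1.9)

    # Unitarity (1.10): every rho(g) preserves the averaged inner product.
    for R in G:
        assert np.allclose(R.conj().T @ M_avg @ R, M_avg)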

Part of this result is, however, more general:

Proposition 1.7.1 Suppose ρ is a representation of a finite group G on a finite-dimensional vector space E over an algebraically closed field k of characteristic 0. Then for every g ∈ G there is a basis of E with respect to which the matrix of ρ(g) is diagonal, with the diagonal entries being roots of unity.

For the proof we will use the Jordan canonical form of a non-zero element T ∈ End_k(E). Since End_k(E) is a finite-dimensional vector space over k, there is a smallest positive integer n such that there is a polynomial p(X) of degree n for which p(T) equals 0. Since k is algebraically closed, p(X) factorizes as

p(X) = (X − a_1)^{m_1} ⋯ (X − a_r)^{m_r}

for some distinct a_1, ..., a_r ∈ k and positive integers m_i. The fact that p(X) is a polynomial of smallest degree which vanishes on T implies that

E_i := ker(T − a_i I)^{m_i} ≠ 0

for each i, for otherwise the corresponding factor could be dropped from p(X). Moreover,

E = E_1 ⊕ ⋯ ⊕ E_r,

which can be shown from the fact that the factors (X − a_i)^{m_i} are distinct and hence have greatest common divisor equal to 1 (or any non-zero element of k). Let N_i be equal to T − a_i I on E_i, and equal to 0 on all other E_j. Let I_i be the identity map on E_i, and 0 on all other E_j. Then

T = Σ_{i=1}^r (a_i I_i + N_i)  (1.11)

Note that each N_i is nilpotent, in the sense that a positive power of N_i is 0, and the different terms a_i I_i + N_i commute with each other.

Proof. Let g ∈ G, and T = ρ(g). Since G is finite there is a positive integer p such that g^p is the identity. Hence T^p = I. Expressing T as in (1.11), it follows that

I = T^p = Σ_{i=1}^r (a_i^p I_i + N_i′),

where each N_i′ is nilpotent and zero on all E_j except j = i. Then I − Σ_{i=1}^r a_i^p I_i is nilpotent, and, being also a linear combination of the commuting projections I_i, it must in fact be 0. Thus each N_i′ is 0. Writing out what N_i′ is, in terms of N_i, we have

p a_i^{p−1} N_i + Ñ_i N_i² = 0

for some Ñ_i commuting with N_i. If ν_i is the smallest positive integer for which N_i^{ν_i} = 0, and if ν_i were ≥ 2, then we would have

0 = (p a_i^{p−1} N_i + Ñ_i N_i²) N_i^{ν_i−2} = p a_i^{p−1} N_i^{ν_i−1}.

We have assumed that the field k has characteristic 0, so this would give N_i^{ν_i−1} = 0, contradicting the choice of ν_i. The only possibility then is that ν_i = 1, which means that N_i = 0. Thus, T is diagonal in a suitable basis. Moreover, a_i^p = 1 for each i. QED

Exercises

1. Give an example of a representation ρ of a finite group G on a finite-dimensional vector space V over a field of characteristic 0, such that there is an element g ∈ G for which ρ(g) is not diagonal in any basis of V.

Chapter 2 The Group Algebra

All through this chapter G is a finite group, and k a field of characteristic 0. For some results, we need to also assume that k is algebraically closed. As we will see, representations of G correspond to representations of an algebra, called the group algebra, formed from k and G. A vast amount of information about the representations of G will fall out of studying representations of such algebras. For us, a ring is always a ring with a unit element 1 ≠ 0.

2.1 Definition of k[G]

It is extremely useful to introduce the group algebra k[G]. This is the set of all formal linear combinations

a_1 x_1 + ⋯ + a_m x_m

with any m ∈ {1, 2, ...}, a_1, ..., a_m ∈ k, and x_1, ..., x_m ∈ G. We add and multiply these new objects in the only natural way that is sensible. For example,

(2x_1 + 3x_2) + (−4x_1 + 5x_3) = (−2)x_1 + 3x_2 + 5x_3

and

(2x_1 − 4x_2)(x_4 + x_3) = 2x_1x_4 + 2x_1x_3 − 4x_2x_4 − 4x_2x_3.
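The multiplication just illustrated can be carried out mechanically once group elements and coefficients are stored explicitly; it is written out in general as formula (2.2) below. The following toy model in Python is an added illustration (not from the notes), with concrete elements of S_3 standing in for the abstract x_i and with ad hoc helper names gmul and algebra_mul.

    # Added illustration: a dictionary model of k[G] for G = S_3 over Q.
    from fractions import Fraction
    from itertools import permutations

    def gmul(s, t):
        """Product in S_3: (s t)(j) = s(t(j))."""
        return tuple(s[t[j]] for j in range(len(t)))

    def algebra_mul(a, b):
        """Product in k[G]: bilinear extension of the group product,
        i.e. the convolution formula (2.2)."""
        out = {}
        for x, ax in a.items():
            for y, by in b.items():
                z = gmul(x, y)
                out[z] = out.get(z, Fraction(0)) + ax * by
        return out

    e = (0, 1, 2)                       # identity of S_3
    s = (1, 0, 2)                       # a transposition
    a = {e: Fraction(2), s: Fraction(-4)}   # 2e - 4s
    b = {e: Fraction(1), s: Fraction(1)}    # e + s
    print(algebra_mul(a, b))            # (2e - 4s)(e + s) = -2e - 2s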

Officially, k[G] consists of all maps

a : G → k : x ↦ a_x.

(If G is allowed to be infinite, then a_x is required to be 0 for all except finitely many x ∈ G.) Such a function is more conveniently written as

a = Σ_{x∈G} a_x x.

Addition and multiplication, as well as multiplication by elements c ∈ k, are defined in the obvious way:

Σ_{x∈G} a_x x + Σ_{x∈G} b_x x = Σ_{x∈G} (a_x + b_x) x  (2.1)

(Σ_{x∈G} a_x x)(Σ_{x∈G} b_x x) = Σ_{x∈G} (Σ_{y∈G} a_y b_{y⁻¹x}) x  (2.2)

c Σ_{x∈G} a_x x = Σ_{x∈G} (c a_x) x  (2.3)

It is readily checked that k[G] is an algebra over k, i.e. it is a ring, a k-module, and the multiplication

k[G] × k[G] → k[G] : (a, b) ↦ ab

is k-bilinear. Sometimes it is useful to think of G as a subset of k[G], by identifying x ∈ G with the element 1x ∈ k[G]. The multiplicative unit 1e, where e is the identity element of G, will be denoted 1, and in this way k can be viewed as a subset of k[G]:

k → k[G] : c ↦ ce.

2.2 Representations of G and k[G]

The utility of the algebra k[G] stems from the observation that any representation

ρ : G → End_k(E)

defines, in a unique way, a representation of k[G] in terms of operators on E. More specifically, we have, for each element a = Σ_x a_x x ∈ k[G], an element

ρ(a) := Σ_x a_x ρ(x) ∈ End_k(E)  (2.4)

This induces a left k[G]-module structure on E:

(Σ_{x∈G} a_x x) v = Σ_{x∈G} a_x ρ(x)v  (2.5)

It is very useful to look at representations in this way. Put another way, we have an extension of ρ to an algebra homomorphism

ρ : k[G] → End_k(E) : Σ_{x∈G} a_x x ↦ Σ_{x∈G} a_x ρ(x)  (2.6)

Thus, a representation of G corresponds to a module over the ring k[G]. A subrepresentation or invariant subspace corresponds to a submodule, and the notion of direct sum of representations corresponds to direct sums of modules. A morphism of representations corresponds to a k[G]-linear map, and an isomorphism of representations is simply an isomorphism of k[G]-modules. Conversely, if E is a k[G]-module, then we have a representation of G on E, by simply restricting the module multiplication to those elements of k[G] which lie in G. The category of representations of G on vector spaces over k is then equivalent to the category of modules over the ring k[G].

2.3 The center of k[G]

It is easy to determine the center

Z(k[G])

of the algebra k[G]. An element a = Σ_{x∈G} a_x x belongs to the center if and only if it commutes with every g ∈ G, i.e. if and only if gag⁻¹ = a, i.e. if and only if

Σ_{x∈G} a_x gxg⁻¹ = Σ_{x∈G} a_x x.

This holds if and only if

a_{g⁻¹xg} = a_x  for every x, g ∈ G  (2.7)

Thus, a is in the center if and only if it can be expressed as a linear combination of the elements

b_C = Σ_{x∈C} x,  C a conjugacy class in G.  (2.8)

If C and C′ are distinct conjugacy classes then b_C and b_{C′} are sums over disjoint sets of elements of G, and so the collection of all such b_C is linearly independent. Thus, we have a simple but important observation:

Proposition 2.3.1 The center of k[G] is a vector space with basis given by the elements b_C, with C running over all conjugacy classes of G. In particular, the dimension of the center of k[G] is equal to the number of conjugacy classes in G.

2.4 A glimpse of some results

We have seen that representations of G correspond to representations of the algebra k[G]. A substantial amount of information about representations of G can be gleaned just by studying the structure of modules over rings, the ring of interest here being k[G]. Of course, k[G] is not just any ring; it has a very special structural property called semisimplicity, which leads to a great deal of information about modules over it.
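Proposition 2.3.1 can be spot-checked for G = S_3: conjugation by any g ∈ G permutes each conjugacy class C within itself, so each class sum b_C commutes with every g, and there are exactly three classes. The short added sketch below does this check; conj_class and ginv are illustrative names, not the author's.

    # Added illustration: the class sums of S_3 are central, and there are 3 classes.
    from itertools import permutations

    def gmul(s, t):
        return tuple(s[t[j]] for j in range(len(t)))

    def ginv(s):
        return tuple(s.index(i) for i in range(len(s)))

    S3 = list(permutations(range(3)))

    def conj_class(x):
        return frozenset(gmul(g, gmul(x, ginv(g))) for g in S3)

    classes = {conj_class(x) for x in S3}
    print(len(classes))                      # 3 conjugacy classes, so dim Z(k[S_3]) = 3

    # g b_C g^{-1} = b_C for every g, because conjugation permutes C within itself.
    for C in classes:
        for g in S3:
            assert {gmul(g, gmul(x, ginv(g))) for x in C} == set(C)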

Decomposing a representation into irreducible components corresponds to decomposing a k[G]-module into simple submodules (i.e. non-zero submodules which have no proper non-zero submodules). A module which is a direct sum of simple submodules is a semisimple module. As we will see, the key fact is that the ring k[G] itself, viewed as a left module over itself, is semisimple. Such a ring is called a semisimple ring. For a semisimple ring A, we will prove the following significant facts:

(i) Every module over A is semisimple; applied to k[G] this means that every representation of G is a direct sum of irreducible representations.

(ii) There are finitely many simple left ideals L_1, ..., L_s in A such that every simple A-module is isomorphic to exactly one of the L_j; thus, there are only finitely many isomorphism classes of irreducible representations of G, and each irreducible representation is isomorphic to a copy inside k[G]. In particular, every irreducible representation of G is finite dimensional.

(iii) If A_j is the sum of all left ideals in A isomorphic to L_j (as in (ii)), then A_j is a two-sided ideal. As a ring, A is isomorphic to the product ∏_{j=1}^s A_j. Each A_j can also be expressed as a direct sum of left ideals isomorphic to L_j. Applied to the algebra k[G], where k is algebraically closed, the ring A_j is isomorphic, as a k-algebra, to End_k(L_j), and so has dimension (dim_k L_j)². Thus, the dimension of the k-vector space k[G] is

|G| = Σ_{L_j} (dim_k L_j)²,  (2.9)

where the sum is over all the non-isomorphic irreducible representations L_j of G. (For example, for G = S_3 over C this reads 6 = 1² + 1² + 2².)

(iv) The two-sided ideals A_j are of the form

A_j = u_j A,

where u_1, ..., u_s are idempotents:

u_j² = u_j,  u_j u_k = 0 for every j ≠ k,  (2.10)

1 = u_1 + ⋯ + u_s,  (2.11)

and u_1, ..., u_s form a basis of the vector space Z(A), the center of A.

In applying this to the algebra k[G], we note that Proposition 2.3.1 says that a basis of the center of k[G] is given by all elements of the form b_C = Σ_{x∈C} x, with C running over all conjugacy classes in G. Consequently, the number of distinct irreducible representations of G is equal to the number of conjugacy classes in G.

2.5 k[G] is semisimple

In this section we will prove a fundamental structural property of the algebra k[G] which will yield a large trove of results about representations of G. This property is semisimplicity, which we define as:

Definition 2.5.1 A module E over a ring is semisimple if for any submodule F in E there is a complementary submodule F′, i.e. a submodule F′ for which E is the direct sum of F and F′. A ring is semisimple if it is semisimple as a left module over itself.

Our choice of terminology isn't perfect, but eventually it will make more sense. The definition of semisimplicity of a ring here is in terms of viewing the ring as a left module over itself. It will turn out, eventually, that a ring is left semisimple if and only if it is right semisimple. Our immediate objective is to prove:

Theorem 2.5.1 Every module over the ring k[G] is semisimple. In particular, k[G] is semisimple.

We use our standing hypothesis that the characteristic of k is 0. But it is sufficient to assume that the characteristic, if non-zero, is not a divisor of |G|.

Proof. Let E be a k[G]-module, and F a k[G]-submodule. We have then the k-linear inclusion

j : F → E

and so, since E and F are vector spaces over k, there is a k-linear map

P : E → F

satisfying

P ∘ j = id_F.

(Choose a basis of F and extend it to a basis of E. Then let P be the map which keeps each of the basis elements of F fixed, but maps all the other basis elements to zero.) All we have to do is modify P to make it k[G]-linear. The action of G on Hom_k(F, E) given by (g, A) ↦ gAg⁻¹ keeps the inclusion map j invariant. Consequently,

(xPx⁻¹) ∘ j = id_F  for all x ∈ G.

So, using for the first time our standing hypothesis that the characteristic of k is 0 (all we need is that it is not a divisor of |G|), we can form the average

P̄ = (1/|G|) Σ_{x∈G} xPx⁻¹,

and then P̄ ∘ j = id_F. Clearly, P̄ is G-invariant and hence k[G]-linear. Therefore, E splits as a direct sum of k[G]-submodules:

E = F ⊕ F′,

where

F′ = ker P̄

is also a k[G]-submodule of E. Thus, every submodule of a k[G]-module has a complementary submodule. In particular, this applies to k[G] itself, and so k[G] is semisimple. QED
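The averaging step of this proof can be watched in coordinates. In the added sketch below (not the author's code), E = Q³ carries the permutation representation of S_3, F is the invariant subspace of vectors with coordinate sum zero, P is a k-linear projection onto F which is not k[G]-linear, and the group average is checked to be equivariant with kernel spanned by (1, 1, 1). The names perm_matrix and P_bar are ad hoc.

    # Added illustration of the averaging argument for E = Q^3 under S_3.
    import itertools
    import numpy as np

    def perm_matrix(sigma):
        n = len(sigma)
        M = np.zeros((n, n))
        for j in range(n):
            M[sigma[j], j] = 1
        return M

    G = [perm_matrix(s) for s in itertools.permutations(range(3))]

    # A k-linear P with P o j = id_F: fix the basis f1, f2 of F, extend by e3,
    # and send e3 to 0.  This P is not k[G]-linear.
    B = np.array([[1, 0, 0],
                  [-1, 1, 0],
                  [0, -1, 1]], dtype=float)          # columns: f1, f2, e3
    P = B @ np.diag([1, 1, 0]) @ np.linalg.inv(B)

    P_bar = sum(R @ P @ np.linalg.inv(R) for R in G) / len(G)

    for R in G:
        assert np.allclose(P_bar @ R, R @ P_bar)        # P_bar is G-equivariant
    for f in (np.array([1, -1, 0]), np.array([0, 1, -1])):
        assert np.allclose(P_bar @ f, f)                # P_bar restricted to F is the identity
    assert np.allclose(P_bar @ np.array([1, 1, 1]), 0)  # ker P_bar is the invariant complement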

The map

k[G] → k[G] : x ↦ x̂ = Σ_{g∈G} x(g) g⁻¹

reverses left and right in the sense that

(xy)^ = ŷ x̂.

This makes every right k[G]-module a left k[G]-module by defining the left module structure through g·v = vg⁻¹, and then every sub-right-module is a sub-left-module. Thus, k[G], viewed as a right module over itself, is also semisimple.

Exercises

1. Let G be a finite group and Ĝ the set of all non-zero multiplicative homomorphisms G → k. For f ∈ Ĝ, let s_f = Σ_{x∈G} f(x⁻¹)x.
(i) Show that {c s_f : c ∈ k} is an invariant subspace of k[G].
(ii) Show that if f ∈ Ĝ then Σ_{x∈G} f(x)f(x⁻¹) ≠ 0. [Hint: Let P_f be the k[G]-linear projection onto the invariant subspace {c s_f : c ∈ k}. Consider P_f(P_f e).]

2. If k is a field of characteristic p > 0, and G a finite group with |G| a multiple of p, show that k[G] is not semisimple.

3. For g ∈ G, let T_g : k[G] → k[G] : a ↦ ga. Show that

Tr(T_g) = |G| if g = e, and 0 if g ≠ e.  (2.12)

4. For g, h ∈ G, let T_{(g,h)} : k[G] → k[G] : a ↦ gah⁻¹. Show that

Tr(T_{(g,h)}) = 0 if g and h are not conjugate, and |G|/|C| if g and h belong to the same conjugacy class C.  (2.13)
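Exercise 3 can be checked numerically: in the basis of k[G] given by the group elements, T_g permutes the basis vectors, so its trace counts the elements x with gx = x. A short added sketch for G = S_3 (illustrative only; gmul and trace_Tg are ad hoc names):

    # Added illustration for Exercise 3: traces of left translation on k[S_3].
    from itertools import permutations

    def gmul(s, t):
        return tuple(s[t[j]] for j in range(len(t)))

    S3 = list(permutations(range(3)))

    def trace_Tg(g):
        # In the basis {x : x in G}, T_g sends x to gx, so the diagonal entry
        # at x is 1 exactly when gx = x.
        return sum(1 for x in S3 if gmul(g, x) == x)

    print([trace_Tg(g) for g in S3])   # 6 for g = identity, 0 otherwise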

Chapter 3 Semisimple Modules and Rings: Structure and Representations

In this chapter we will determine the structure of semisimple modules and rings. A large number of results on representations of such algebras will follow easily once the structure theorems have been obtained. We will be working with modules over a ring A with unit 1 ≠ 0.

3.1 Schur's Lemma

Let A be a ring with unit 1 ≠ 0. Note that A need not be commutative (indeed, for the purposes of this section and the next, A need not even be associative).

Definition 3.1.1 A module E over a ring is simple if it is ≠ 0 and if its only submodules are 0 and E.

Suppose f : E → F is linear, where E is a simple A-module and F an A-module. The kernel ker f = f⁻¹(0) is a submodule of E and hence is either {0} or E itself. If, moreover, F is also simple then f(E), being a submodule of F, is either {0} or F. Thus we have the simple, but powerful, Schur's Lemma:

Proposition 3.1.1 If E and F are simple A-modules then in Hom_A(E, F) every non-zero element is a bijection, i.e. an isomorphism of E onto F.

For a simple A-module E, this implies that End_A(E) is a division ring.

We can now specialize to a case of interest, where A is a finite-dimensional algebra over an algebraically closed field k. We can view k as a subring of End_A(E):

k ≃ k1 ⊂ End_A(E),

where 1 is the identity element in End_A(E). The assumption that k is algebraically closed implies that k has no proper finite extension, and this leads to the following consequence:

Proposition 3.1.2 Suppose A is a finite-dimensional algebra over an algebraically closed field k. Then for any simple A-module E which is a finite-dimensional vector space over k,

End_A(E) = k,

upon identifying k with k1 ⊂ End_A(E). Moreover, if E and F are simple A-modules, then Hom_A(E, F) is either {0} or a one-dimensional vector space over k.

Proof. Let x ∈ End_A(E). Suppose x ∉ k1. Note that x commutes with all elements of k1. Since End_A(E) ⊂ End_k(E) is a finite-dimensional vector space over k, there is a smallest natural number n ∈ {1, 2, ...} such that 1, x, ..., x^n are linearly dependent over k, i.e. there is a polynomial p(X) ∈ k[X] of lowest degree, with deg p(X) = n ≥ 1, such that p(x) = 0. Since k is algebraically closed, p(X) factorizes over k as

p(X) = (X − λ)q(X)

Representations of Finite Groups 23 for some λ k. Consequently, x λ1 is not invertible, for otherwise q(x), of lower degree, would be 0. Thus, by Schur s lemma, x = λ1 k1. Now suppose E and F are simple A-modules, and suppose there is a nonzero element f Hom A (E, F ). By Proposition 3.1.1, f is an isomorphism. If g is also an element of Hom A (E, F ), then f 1 g is in End A (E, E), and so, by the first part of this result, is a multiple of the identity element in End A (E). Consequently, g is a multiple of f. QED 3.2 Semisimple Modules Let A be a ring with unit element 1 0, possibly non-commutative. We have in mind, as always, the example of k[g]. Indeed, we will not need A to be associative either; A could, for example, be a Lie algebra. One fact we will, however, need is that for any element x in an A-module M, the subset Ax is also an A-module. Recall that a module E is semisimple if every submodule has a complement, i.e. if F is a submodule of E then there is a submodule F such that E is the direct sum of F and F. Below we shall prove that this is equivalent to E being a direct sum of simple submodules, but first let us observe: Proposition 3.2.1 Submodules and quotient modules of semisimple modules are semisimple. Proof. Let E be a semisimple module and F a submodule. Let G be a submodule of F. Then G has a complement G in E: E = G G. If f F then we can write this uniquely as with g G and g G. Then f = g + g g = f g F and so, in the decomposition of f F as g + g, both g and g are in f. We conclude that F = G (G F )

24 Ambar N. Sengupta Thus every submodule of F has a complement inside F. Thus, F is semisimple. If F is the complementary submodule to F in E, then E/F F, and so E/F, being isomorphic to the submodule F, is semisimple. QED Before turning to the fundamental facts about semisimplicity of modules, let us recall a simple fact from vector spaces: if T is a linearly independent subset of a vector space, and S a subset which spans the whole space, then a basis of the vector space is formed by adjoining to T a maximal subset of S which respects linear independence. A similar idea will be used in the proof below for simple modules. Theorem 3.2.1 The following conditions are equivalent for an A-module E: (i) E is a sum of simple submodules (ii) E is a direct sum of simple submodules (iii) Every submodule F of E has a complement, i.e. there is a submodule F such that E = F F. If E = {0} then the sum is the empty sum. Proof. Suppose {E j } j J is a family of simple submodules of E, and F a submodule of E with F j J E j. By Zorn s lemma, there is a maximal subset K of J such that the sum H = F + k K E k is a direct sum. For any j J, the intersection E j H is either 0 or E j. It cannot be 0 by maximality of K. Thus, E j H for all j J, and so j J E j H. Thus, j J E j = F + k J E j, the latter being a direct sum.

Representations of Finite Groups 25 Applying this observation to the case where {E j } j J span all of E, and taking F = 0, we see that E is a direct sum of some of the simple submodules E k. This proves that (i) implies (ii). Applying the result to a family {E j } j J which gives a direct sum decomposition of E, and taking F to be any submodule of E, it follows that E = F F, where F is a direct sum of some of the simple submodules E k. Thus, (ii) implies (iii). Now assume (iii). Let F be the sum of a maximal collection of simple submodules of E. Then E = F F, by (iii), for a submodule F of E. We will show that F = 0. Suppose F 0. Then, as we prove below, F has a simple submodule, and this contradicts the maximality of F. Thus, E is a sum of simple submodules. It remains to show that if (iii) holds then every non-zero submodule F contains a simple submodule. Since F 0, it contains a non-zero element x which generates a submodule Ax. If Ax is simple then we are done. Suppose then that Ax is not simple. We will produce a maximal proper submodule of Ax; its complement inside Ax will then have to be simple. Any increasing chain {F α } of proper submodules of Ax has union α F α also a proper submodule of Ax, because x is outside each F α. Then, by Zorn s lemma, Ax has a maximal submodule M. By assumption (iii) and Proposition 3.2.1, the submodule Ax will also have the property that every submodule has a complement; in particular, M has a complement F in Ax, which must be non-zero since M Ax. The submodule F cannot have a proper non-zero submodule, because that would contradict the maximality of M. Thus, we have produced a simple submodule inside any given non-zero submodule F in E. QED Theorem 3.2.1 leads to a full structure theorem for semisimple modules. But first let us oberve something about simple modules, which again is analogous to the situation for vector spaces. Indeed, the proof below is by means of viewing a module as a vector space. Proposition 3.2.2 If E is a simple A-module, then E is a vector space over the division ring End A (E). If E n E m as A-modules, then n = m. Proof. If E is a simple A-module then, by Schur s lemma, D def = End A (E)

is a division ring. Thus, E is a vector space over D. Then E^n is the product vector space over D. If dim_D E were finite, then we would be done. In the absence of this, the procedure (which seems like a clever trick) is to look at End_A(E^n). This is a vector space over D, because for any λ ∈ D and A-linear f : E^n → E^n, the map λf is also A-linear. In fact, each element of End_A(E^n) can be displayed, as usual, as an n × n matrix with entries in D. Moreover, this effectively provides a basis of the D-vector space End_A(E^n) consisting of n² elements. Thus, E^n ≅ E^m implies n = m. QED

Now we can turn to the uniqueness of the structure of semisimple modules of finite type:

Theorem 3.2.2 Suppose a module E over a ring A can be expressed as

E ≅ E_1^{m_1} ⊕ ⋯ ⊕ E_n^{m_n}  (3.1)

where E_1, ..., E_n are non-isomorphic simple modules, and each m_i is a positive integer. Suppose also that E can be expressed as

E ≅ F_1^{j_1} ⊕ ⋯ ⊕ F_m^{j_m}

where F_1, ..., F_m are non-isomorphic simple modules, and each j_i is a positive integer. Then m = n, each E_a is isomorphic to one and only one F_b, and then m_a = j_b.

Proof. Let G be any simple module isomorphic to a submodule of E. Composing the inclusion G → E with the projections onto the factors, we see that there exists an a for which the composite G → E_a is not zero, and hence G ≅ E_a. Similarly, there is a b such that G ≅ F_b. Thus each E_a is isomorphic to some F_b. The rest follows by Proposition 3.2.2. QED

It is now clear that the ring End_A(E) can be identified as a ring of matrices:

Theorem 3.2.3 If E is a semisimple module over a ring A, and E is the direct sum of finitely many simple modules:

E ≅ E_1^{m_1} ⊕ ⋯ ⊕ E_n^{m_n}

then the ring End_A(E) is isomorphic to a product of matrix rings:

End_A(E) ≅ ∏_{i=1}^n Matr_{m_i}(D_i)  (3.2)

where Matr_{m_i}(D_i) is the ring of m_i × m_i matrices over the division ring D_i = End_A(E_i).

The endomorphisms of the A-module E are those additive mappings E → E which commute with the action of all the elements of A. Thus, End_A(E) is the commutant of the ring A acting on E. The preceding result shows that if E is semisimple as an A-module, and is a sum of finitely many simple modules, then the commutant is a direct product of matrix rings. We shall see later that every such ring is semisimple (as a module over itself).

Let us now examine simple modules over semisimple rings. First consider a left ideal L in A. Then A = L ⊕ L′, where L′ is also a left ideal. Then we can express the multiplicative unit 1 as

1 = 1_L + 1_{L′},

where 1_L ∈ L and 1_{L′} ∈ L′. For any l ∈ L we then have

l = l1 = l 1_L + l 1_{L′},

and l 1_{L′} = l − l 1_L, being then in both L′ and L, must be 0. Hence l = l 1_L, and consequently L ⊂ LL. Of course, L being a left ideal, we also have LL ⊂ L. Thus,

LL = L  (3.3)

Using this we will prove the following convenient characterization of modules isomorphic to a given left ideal.

Lemma 3.2.1 Let A be a semisimple ring, L a simple left ideal in A, and E a simple A-module. Then exactly one of the following holds:
(i) LE = 0 and L is not isomorphic to E;
(ii) LE = E and L is isomorphic to E.

28 Ambar N. Sengupta Proof. Since LE is a submodule of E, it is either {0} or E. We will show that LE equals E if and only if L is isomorphic to E. Assuming LE = E, take a y E with Ly 0. By simplicity of E, then Ly = E. The map L E = Ly : a ay is an A-linear surjection, and it is injective because its kernel, being a submodule of the simple module L, is {0}. Thus, if LE = E then L is isomorphic to E. Now we will show that, conversely, if L is isomorphic to E then LE = E. If f : L E is A-linear we have then E = f(l) = f(ll) = Lf(L) = LE Thus, if f is an isomorphism then E = LE. QED Let us note that any two isomorphic left ideals are right translates of each another: Proposition 3.2.3 If L and M are isomorphic left ideals in a semisimple ring A then L = Mx, for some x A. Proof. Suppose F : M L is an isomorphism. Composing with a projection p M : A M, we obtain a map which is A-linear. Hence, where G = F p M : A L G(a) = G(a1) = ag(1) = ax, x = G(1) = F (p M (1)). Restricting the map G to M we see that G(M) = Mx. But p M and F are both surjective, and so Mx = L. QED

3.3 Structure of Semisimple Rings

In this section we will work with a semisimple ring A. Recall that this means that A is semisimple as a left module over itself. By semisimplicity, A decomposes as a direct sum of simple submodules. A submodule of A is just a left ideal. Thus, we have a decomposition

A = Σ {all simple left ideals of A}.

In this section we will see that if we sum up all those ideals which are isomorphic to each other and call this submodule A_i, then A_i is a two-sided ideal and a subring of A, and A is the direct product of these rings.

Let {L_i}_{i∈R} be a maximal family of non-isomorphic simple left ideals in A. Let

A_i = Σ {L : L is a left ideal isomorphic to L_i}.

By Lemma 3.2.1, we have

LL′ = 0 if the simple left ideals L and L′ are not isomorphic.

So

A_i A_j = 0 if i ≠ j  (3.4)

Since A is semisimple, it is the sum of all its simple submodules, and so

A = Σ_{i∈R} A_i.

Now each A_i is clearly a left ideal. It is also a right ideal because

A_i A = Σ_j A_i A_j = A_i A_i ⊂ A_i.

Thus, A is a sum of two-sided ideals A_i. We will soon see that the index set R is finite. The unit element 1 ∈ A decomposes as

1 = Σ_{i∈R} u_i  (3.5)

where u_i ∈ A_i, and the sum is finite, i.e. all but finitely many u_i are 0. For any a ∈ A we can write

a = Σ_{i∈R} a_i

with each a_i in A_i. Then, on using (3.4),

a_j = a_j 1 = a_j u_j = a u_j.

Thus a determines the components a_j uniquely, and so the sum A = Σ_{i∈R} A_i is a direct sum. If some u_j were 0 then all the corresponding a_j would be 0, which cannot be since each A_j is non-zero. Consequently, the index set R is finite. Since we also have, for any a ∈ A,

a = 1a = Σ_i u_i a,

we have, from the fact that the sum A = ⊕_i A_i is direct,

u_i a = a_i = a u_i.

Thus A_i is a two-sided ideal. Clearly, u_i is the identity in A_i. We have arrived at the wonderful structure theorem for semisimple rings:

Theorem 3.3.1 Suppose A is a semisimple ring. Then there are finitely many left ideals L_1, ..., L_r in A such that every simple left ideal of A is isomorphic, as a left A-module, to exactly one of the L_j. Furthermore,

A_i = sum of all left ideals isomorphic to L_i

is a two-sided ideal, with a non-zero unit element u_i, and A is the product of the rings A_i:

A ≅ ∏_{i=1}^r A_i  (3.6)

Any simple left ideal in A_i is isomorphic to L_i. Moreover,

1 = u_1 + ⋯ + u_r  (3.7)
A_i = A u_i  (3.8)
A_i A_j = 0 for i ≠ j  (3.9)

In Theorem 3.3.3 below, we will see that there is a ring isomorphism A_i ≅ End_{C_i}(L_i), where C_i = End_A(L_i). Thus, any semisimple ring A can be decomposed as a product of endomorphism rings:

A ≅ ∏_{i=1}^r End_{C_i}(L_i)  (3.10)

where L_1, ..., L_r is a maximal collection of non-isomorphic simple left ideals in A, and C_i = End_A(L_i). An element a ∈ A is mapped, by this isomorphism, to (ã_i)_{1≤i≤r}, where

ã_i : L_i → L_i : x ↦ ax.  (3.11)

The two-sided ideals A_j are, it turns out, minimal two-sided ideals, and every two-sided ideal in A is a sum of certain A_j. We will prove this using some results which we prove later, in subsection 3.3.1 below.

Proposition 3.3.1 Each A_j is a minimal two-sided ideal in A, and every two-sided ideal in A is a sum of some of the A_j.

Proof. Let I be a two-sided ideal in A. Then AI ⊂ I, but also I ⊂ AI since 1 ∈ A. Hence

I = AI = A_1 I + ⋯ + A_r I.

Note that A_j I is a two-sided ideal, and A_j I ⊂ A_j. The ring A_j has the special property that every simple left ideal in A_j is isomorphic to the same simple left ideal L_j. As we prove in Proposition 3.3.3 below, this implies that the only two-sided ideals in A_j are 0 and A_j. Thus, A_j I is either 0 or A_j. Consequently,

I = Σ_{j : A_j I ≠ 0} A_j.  QED

It is useful to summarize the properties of the elements u_i:

Proposition 3.3.2 The elements u_1, ..., u_r satisfy

u_i² = u_i,  u_i u_j = 0 if i ≠ j  (3.12)
u_1 + ⋯ + u_r = 1  (3.13)

Multiplication by u_i in A is the identity on A_i and is 0 on all A_j for j ≠ i. If A is a finite-dimensional algebra over an algebraically closed field k, then u_1, ..., u_r form a vector space basis of Z(A).

32 Ambar N. Sengupta For the last claim above, we use Proposition 3.3.4, which implies that each Z(A i ) is the field k imbedded in A i, and so every element in Z(A i ) is a multiple of the unit element u i. The decomposition A r i=1 A i then implies that the center of A is the linear span of the u i. We will return to a more detailed examination of idempotents later. 3.3.1 Simple Rings The subrings A j are isotypical or simple rings, in that they are sums of simple left ideals which are all isomorphic to the same left ideal L j. Definition 3.3.1 A ring B is simple if it is a sum of simple left ideals which are all isomorphic to each other as left B-modules. Since, by Proposition 3.2.3, all isomorphic left ideals are right translates of one another, a simple ring B is a sum of right translates of any given simple left ideal L. Consequently, B = LB if B is a simple ring, and L any simple left ideal. (3.14) As consequence we have: Proposition 3.3.3 The only two-sided ideals in a simple ring are 0 and the whole ring itself. Proof. Let I be a two-sided ideal in a simple ring B, and suppose I 0. By simplicity, I is a sum of simple left ideals, and so, in particular, contains a simple left ideal L. Then by (3.14) we see that LB = B. But LB I, because I is also a right ideal. Thus, I = B. QED For a ring B, any B-linear map f : B B is completely specified by the value f(1), because f(b) = f(b1) = bf(1) Moreover, if f, g End B (B) then and so we have a ring isomorphism (fg)(1) = f(g(1)) = g(1)f(1) End B (B) B opp : f f(1) (3.15)

where B^opp is the ring B with multiplication in the opposite order:

(a, b) ↦ ba.

We then have:

Theorem 3.3.2 If B is a simple ring, isomorphic as a module to M^n for some simple left ideal M and positive integer n, then B is isomorphic to the ring of matrices

B ≅ Matr_n(D^opp),  (3.16)

where D is the division ring End_B(M).

Proof. We know that there are ring isomorphisms

B^opp ≅ End_B(B) = End_B(M^n) ≅ Matr_n(D)

Taking the opposite ring, we obtain an isomorphism of B with Matr_n(D)^opp. But now consider the transpose of n × n matrices:

Matr_n(D)^opp → Matr_n(D^opp) : A ↦ A^t.

Working in components of the matrices, and denoting multiplication in D^opp by ∗:

(A ∗ B)^t_{ik} = (BA)_{ki} = Σ_{j=1}^n B_{kj} A_{ji} = Σ_{j=1}^n A_{ji} ∗ B_{kj},

which is the (i, k) entry of the ordinary matrix product A^t B^t in Matr_n(D^opp). Thus, the transpose gives an isomorphism Matr_n(D)^opp ≅ Matr_n(D^opp). QED

The opposite ring often arises in matrix representations of endomorphisms. If M is a 1-dimensional vector space over a division ring D, with a basis element v, then to each T ∈ End_D(M) we can associate the matrix element T̂ ∈ D specified through

T(v) = T̂ v.

But then, for any S, T ∈ End_D(M) we have

(ST)^ = T̂ Ŝ.

Thus, End_D(M) is isomorphic to D^opp, via its matrix representation.

There is a more abstract, coordinate-free version of Theorem 3.3.2. First let us observe that for a module M over a ring A, the endomorphism ring

A′ = End_A(M)

is the commutant of A, i.e. the ring of all additive maps M → M which commute with the action of A. Next,

A″ = End_{A′}(M)

is the commutant of A′. Since, for any a ∈ A, the multiplication map

l(a) : M → M : x ↦ ax  (3.17)

commutes with every element of A′, it follows that

l(a) ∈ A″.

Note that

l(ab) = l(a)l(b)

and l maps the identity element of A to that of A″, and so l is a ring homomorphism. The following result is due to Rieffel:

Theorem 3.3.3 Let B be a simple ring, L a non-zero left ideal in B,

B′ = End_B(L),  B″ = End_{B′}(L),

and l : B → B″ the natural ring homomorphism given by (3.17). Then l is an isomorphism. In particular, every simple ring is isomorphic to the ring of endomorphisms of a module.

Proof. To avoid confusion, it is useful to keep in mind that the elements of B′ and B″ are all Z-linear maps L → L. The ring morphism l : B → B″ is given explicitly by

l(b)x = bx,  for all b ∈ B and x ∈ L.

It maps the unit element of B to the unit element of B″, and so l ≠ 0. The kernel of l is a two-sided ideal in the simple ring B, and hence is 0. Thus, l is injective.

We will show that l(B) is all of B″. Since 1 ∈ l(B), it will be sufficient to prove that l(B) is a left ideal in B″.

Since LB contains L as a subset, and is thus not {0}, and is clearly a two-sided ideal in B, it is equal to B:

LB = B.

This key fact implies

l(L)l(B) = l(B).

Thus, it will suffice to prove that l(L) is a left ideal in B″. We can check this as follows: if f ∈ B″ and x, y ∈ L then

(f ∘ l(x))(y) = f(xy) = f(x)y  (because L → L : w ↦ wy is in B′ = End_B(L))
             = l(f(x))(y),

thus showing that f ∘ l(x) = l(f(x)), and hence l(L) is a left ideal in B″. QED

Lastly, let us make an observation about the center of a simple ring:

Proposition 3.3.4 If B is a simple ring then its center Z(B) is a field. If B is a finite-dimensional simple algebra over an algebraically closed field k, then Z(B) = k1.

Proof. Let z ∈ Z(B), and consider the map

l_z : B → B : b ↦ zb.

Because z commutes with all elements of B, this map is B-linear. The kernel ker l_z is a two-sided ideal; it would therefore be {0} if l_z ≠ 0. Now l_z(1) = z, and so ker l_z must be {0} if z is not 0:

ker l_z = {0} if z ≠ 0.

The image l_z(B) is also a two-sided ideal, and so:

l_z(B) = B if z ≠ 0.

Thus, l_z is a linear isomorphism if z ≠ 0, and

l : Z(B) → End_B(B) : z ↦ l_z

36 Ambar N. Sengupta is a Z-linear injection. Moreover, l z l w = l zw. For z 0 in Z(B), writing y = lz 1 (1), we have yz = zy = l z (y) = 1. Thus, every non-zero element in Z(B) is invertible. Since Z(B) is commutative and contains 1 0, we conclude that it is a field. Suppose now that B is a finite dimensional k-algebra, and k is algebraically closed. Then any z Z(B) not in k would give rise to a proper finite extension of k and this is impossible. In more detail, since Z(B) is a finite-dimensional vector space over k, there is a smallest integer n 1 such that 1, z,...,z n are linearly dependent, and so there is a polynomial p(x) of degree n, with coefficients in k, such that p(z) = 0. Since k is algebraically closed, there is a λ k, and a polynomial q(x) of degree n 1 such that p(x) = (X λ)q(x), and so z λ1 is not invertible and therefore z = λ1. Thus, Z(B) = k1. QED 3.4 Semisimple Algebras as Matrix Algebras Let us pause to put together some results we have already proved to see that: (i) every finite dimensional simple algebra over an algebraically closed field k is isomorphic to the algebra of all d d matrices over k, for some d; (ii) every finite dimensional semisimple algebra over an algebraically closed field k is isomorphic to the algebra of all matrices of block-diagonal form, the i-th block running over d i d i matrices over k; (iii) every finite dimensional semisimple algebra A over an algebraically closed field k is isomorphic to its opposite algebra A opp. Observation (i) may be stated more completely, as the following consequence of Theorem 3.3.3: Proposition 3.4.1 If an algebra B over an algebraically closed field k is simple, and L is any simple left ideal in B, then B is isomorphic as a k- algebra to End k (L). In particular, if B is finite dimensional over k then dim k B = [dim k (L)] 2 (3.18)

Observation (ii) then follows from the fact that every semisimple algebra is a product of simple algebras. Observation (iii) follows from (ii), upon noting that the matrix transpose operation produces an isomorphism between the algebra of all square matrices of a given size and its opposite algebra.

3.5 Idempotents

Idempotents play an important role in the structure of semisimple algebras. When represented on a module, an idempotent is a projection map. The decomposition of 1 as a sum of idempotents corresponds to a decomposition of a module into a direct sum of submodules. Idempotents will be a key tool in constructing representations of S_n in Chapter 6.

Before proceeding to the results, let us note an example. Consider a finite group G and let τ : G → k be a one-dimensional representation of G (for example, τ(x) = 1 for all x ∈ G). Then consider

u_τ = (1/|G|) Σ_{x∈G} τ(x⁻¹) x ∈ k[G],

assuming that the characteristic of k does not divide |G|. Then it is readily checked that u_τ is an idempotent:

u_τ² = u_τ.

We have used this idempotent already (in the case τ = 1), in proving semisimplicity of k[G].
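The idempotency of u_τ is easy to verify in a small case. The added sketch below (not the author's code) takes G = S_3 and τ the sign character (an illustrative choice; any one-dimensional representation works), builds u_τ with exact rational coefficients, and checks u_τ² = u_τ. All helper names are ad hoc.

    # Added illustration: u_tau is an idempotent in Q[S_3], with tau = sign.
    from fractions import Fraction
    from itertools import permutations

    def gmul(s, t):
        return tuple(s[t[j]] for j in range(len(t)))

    def ginv(s):
        return tuple(s.index(i) for i in range(len(s)))

    def sign(s):
        inversions = sum(1 for i in range(len(s))
                         for j in range(i + 1, len(s)) if s[i] > s[j])
        return -1 if inversions % 2 else 1

    def algebra_mul(a, b):
        out = {}
        for x, ax in a.items():
            for y, by in b.items():
                z = gmul(x, y)
                out[z] = out.get(z, Fraction(0)) + ax * by
        return out

    S3 = list(permutations(range(3)))
    # u_tau = (1/|G|) sum_x tau(x^{-1}) x
    u_tau = {x: Fraction(sign(ginv(x)), len(S3)) for x in S3}
    assert algebra_mul(u_tau, u_tau) == u_tau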

Idempotents generate left ideals, and, conversely, as we see shortly, every left ideal in a semisimple ring is generated by an idempotent. Consider a left ideal L in a semisimple ring A. We have then a complementary left ideal L′ with

A = L ⊕ L′,

and so there is an A-linear projection map

p_L : A → L.

But A-linearity puts a serious restriction on this map. Indeed, we have

p_L(a) = p_L(a1) = a p_L(1)  (3.19)

and so p_L is simply multiplication on the right by the element

u_L = p_L(1).

The image of p_L is then

p_L(A) = A u_L.

But p_L, being the projection onto L, is surjective! Thus,

L = A u_L.  (3.20)

Thus, every left ideal is of the form A u_L. Note that

u_L = p_L(1) ∈ L.

Moreover, since p_L is the identity when restricted to L, we have

l = p_L(l) = l u_L,  for all l ∈ L  (3.21)

In particular, applying this to l = u_L, we see that u_L is an idempotent:

u_L² = u_L.  (3.22)

Indeed, this is a reflection of the fundamental property of a projection map:

p_L(p_L(a)) = p_L(a)  for all a ∈ A.

Let us summarize our observations:

Proposition 3.5.1 Every left ideal L in a semisimple ring A is of the form A u_L for some u_L ∈ L:

L = A u_L  (3.23)

The element u_L is an idempotent, i.e.

u_L² = u_L.  (3.24)

Moreover,

y u_L = y if and only if y ∈ L.  (3.25)

Conversely, if u is an idempotent then Au is a left ideal and the map

A → Au : x ↦ xu

is an A-linear projection map onto the submodule Au, carrying 1 to the generating element u.

Now suppose M ⊂ L is a left ideal contained in L. We have then a decomposition

L = M ⊕ M′,

which yields a decomposition

A = L ⊕ L′ = M ⊕ M′ ⊕ L′.

Thus, the projection p_{ML} of L onto M composes with p_L to give p_M:

p_M = p_{ML} ∘ p_L.  (3.26)

Applying this to the unit element 1, we have:

u_M = p_{ML}(u_L)  (3.27)

On the other hand, applying (3.26) to u_L gives:

u_L u_M = p_{ML}(u_L)  (3.28)

Combining these observations, we have

u_L u_M = u_M.  (3.29)

Similarly, u_L u_{M′} = u_{M′}. Viewing these idempotents all as projections of the unit element 1 onto the various ideals, we see also that

u_L = u_M + u_{M′}.  (3.30)

Consequently,

u_{M′} u_M = 0 = u_M u_{M′}.  (3.31)

We say that u_M and u_{M′} are orthogonal.

40 Ambar N. Sengupta Proposition 3.5.2 If M L are left ideals in a semisimple ring A, then there are idempotents u L, u M and u in A, such that (i) L = Au L, (ii) M = Au M, (iii) u L = u M + u, and (iv) u M u = 0 = u u M. Thus, L is the direct sum of the ideals M and Au, which have orthogonal idempotent generators u M and u. Note that it may well be that M and a complementary module M (with L = M + M as a direct sum) have other non-orthogonal idempotent generators. (See Exercise 3.3.) In the converse direction we have: Proposition 3.5.3 Suppose u decomposes into a finite sum of orthogonal idempotents v i : u = v 1 + + v m, v 2 j = v j, v j v k = 0 when j k. Then u is an idempotent: u 2 = u, and Au is the internal direct sum of the submodules Av j : Au = m Av j. j=1 Proof. Squaring u gives u 2 = j v 2 j + j k v j v k = j v j = u. Thus, u is an idempotent. It is clear that Au j Av j

Representations of Finite Groups 41 For the converse direction, we have v j u = vj 2 + v j v k = vj 2 = v j, k j (3.32) and so Av j j = Av j u j Au. Next, suppose a j v j = 0, for some a j A. Multiplying on the right by v i gives: j 0 = j a j v j v i = a i v 2 i + 0 = a i v i, and so each a i v i is 0. QED This result suggests that we could start with the unit element 1 A and keep splitting it into orthogonal idempotents, as long as possible. Thus we would aim to write 1 = u 1 + + u r, where u 1,..., u r are idempotents, and u i u j = 0 when i j, in such a way that this process cannot be continued further. This leads us to the following natural concept: Definition 3.5.1 A primitive idempotent in a ring A is an element u A which is an idempotent, i.e. satisfies u 2 = u, and is primitive in the sense that u 0 and if u = v + w with v and w being also idempotents, such that then v or w is 0. vw = 0 = wv,
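As a concrete instance of splitting 1 into orthogonal idempotents, consider the group algebra Q[G] for the two-element group G = {e, g}. The added sketch below (illustrative, not from the notes) checks that u₁ = (e + g)/2 and u₂ = (e − g)/2 are orthogonal idempotents with u₁ + u₂ = 1; each generates a one-dimensional left ideal, so each is primitive in the sense of Definition 3.5.1.

    # Added illustration: 1 = u1 + u2 with u1, u2 orthogonal idempotents in Q[Z/2].
    from fractions import Fraction

    E, G = "e", "g"                     # the two elements of the group

    def gmul(x, y):
        return E if x == y else G

    def algebra_mul(a, b):
        out = {E: Fraction(0), G: Fraction(0)}
        for x, ax in a.items():
            for y, by in b.items():
                out[gmul(x, y)] += ax * by
        return out

    def algebra_add(a, b):
        return {x: a[x] + b[x] for x in a}

    half = Fraction(1, 2)
    u1 = {E: half, G: half}             # (e + g)/2
    u2 = {E: half, G: -half}            # (e - g)/2
    one = {E: Fraction(1), G: Fraction(0)}
    zero = {E: Fraction(0), G: Fraction(0)}

    assert algebra_mul(u1, u1) == u1 and algebra_mul(u2, u2) == u2
    assert algebra_mul(u1, u2) == zero and algebra_mul(u2, u1) == zero
    assert algebra_add(u1, u2) == one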