Integrality of Two Variable Kostka Functions

Friedrich Knop*

Department of Mathematics, Rutgers University, New Brunswick NJ 08903, USA
knop@math.rutgers.edu

q-alg/9603027, 29 Mar 1996

1. Introduction

Macdonald defined in [M1] a remarkable class of symmetric polynomials $P_\lambda(x;q,t)$ which depend on two parameters and interpolate between many families of classical symmetric polynomials. For example, $P_\lambda(x;t) = P_\lambda(x;0,t)$ are the Hall-Littlewood polynomials, which themselves specialize for $t=0$ to the Schur functions $s_\lambda$. Also Jack polynomials arise by taking $q = t^\alpha$ and letting $t$ tend to $1$.

The Hall-Littlewood polynomials are orthogonal with respect to a certain scalar product $\langle\ ,\ \rangle_t$. The scalar products $K_{\lambda\mu} = \langle s_\lambda, s_\mu\rangle_0$ are known as Kostka numbers. Since each has an interpretation as a weight multiplicity of a $GL_n$-representation, it is a natural number. Also the scalar products $K_{\lambda\mu}(t) = \langle P_\mu(x;t), s_\lambda\rangle_t$ are important polynomials in $t$. Their coefficients have been proven to be natural numbers by Lascoux-Schützenberger [LS]. Macdonald conjectured in [M1] that even the scalar products $K_{\lambda\mu}(q,t) = \langle P_\mu(x;q,t), s_\lambda\rangle_t$ are polynomials in $q$ and $t$ with non-negative integers as coefficients. In this note we prove that $K_{\lambda\mu}(q,t)$ is at least a polynomial. The positivity seems to be much deeper and is, to my knowledge, unsolved up to now.

Our main tool has also been introduced by Macdonald. Following a construction of Opdam [O1] in the Jack polynomial case, he constructed a family $E_\lambda$ of non-symmetric polynomials. Here the indexing set is now all of $\mathbb{N}^n$. They have properties which are very similar to those of the symmetric functions, but in a sense they are easier to work with. In particular, we exhibit a very simple recursion formula (a "creation operator") in terms of Hecke operators for them. This formula enables us to prove an analogous conjecture for these non-symmetric polynomials, at least as far as polynomiality is concerned. At the end, we obtain our main result by symmetrization.

* Partially supported by a grant of the NSF
The proof follows very closely a proof of an analogous conjecture for Jack polynomials. In that case we were even able to settle positivity. This will appear, along with a new combinatorial formula for Jack polynomials, as a joint paper with S. Sahi. The main difference to the Jack polynomial case is, of course, the appearance of Hecke operators. Furthermore, we introduce a certain basis which is the non-symmetric analogue of the Hall-Littlewood polynomials. But the main point is again a very special creation operator, which in the present case is

$$\Phi f(z_1,\ldots,z_n) = z_n\, f(q^{-1}z_n,\, z_1,\ldots,z_{n-1}).$$

Its existence is intimately connected to the fact that we are working with the root system of $GL_n$ as opposed to $SL_n$. Therefore, I don't think that the techniques presented here generalize to other root systems.

2. Definitions

Let $\Lambda := \mathbb{N}^n$ and let $\Lambda^+$ be the subset of all partitions. For $\lambda = (\lambda_i) \in \Lambda$ we put $|\lambda| := \sum_i \lambda_i$ and $l(\lambda) := \max\{i \mid \lambda_i \ne 0\}$ (with $l(0) := 0$). To each $\lambda \in \Lambda$ corresponds a monomial $z^\lambda$. There is a (partial) order relation on $\Lambda$. First, recall the usual order on the set $\Lambda^+$: we say $\lambda \ge \mu$ if $|\lambda| = |\mu|$ and

$$\lambda_1 + \lambda_2 + \cdots + \lambda_i \ \ge\ \mu_1 + \mu_2 + \cdots + \mu_i \quad\text{for all } i = 1,\ldots,n.$$

This order relation is extended to all of $\Lambda$ as follows. Clearly, the symmetric group $W = S_n$ on $n$ letters acts on $\Lambda$, and for every $\lambda \in \Lambda$ there is a unique partition $\lambda^+$ in the orbit $W\lambda$. Among all permutations $w \in W$ with $\lambda = w\lambda^+$ there is a unique one, denoted by $w_\lambda$, of minimal length. We define $\lambda \ge \mu$ if either $\lambda^+ > \mu^+$, or $\lambda^+ = \mu^+$ and $w_\lambda \le w_\mu$ in the Bruhat order of $W$. In particular, $\lambda^+$ is the unique maximum of $W\lambda$.

Let $k$ be the field $\mathbb{C}(q,t)$, where $q$ and $t$ are formal variables, and let $\mathcal{P} := k[z_1,\ldots,z_n]$ be the polynomial ring and $\mathcal{P}' := k[z_1,z_1^{-1},\ldots,z_n,z_n^{-1}]$ the Laurent polynomial ring over $k$. There is an involutory automorphism $x \mapsto \bar x$ of $k/\mathbb{C}$ with $\bar q = q^{-1}$, $\bar t = t^{-1}$, and a semilinear involution $\mathcal{P}' \to \mathcal{P}' : f \mapsto \bar f$ with $\bar z_i = z_i^{-1}$. For $f \in \mathcal{P}'$ let $[f]_1$ be its constant term. Fix an $r \in \mathbb{N}$ and put $t = q^r$. Then Cherednik (see [C1], [M3] §5) defines a certain Laurent polynomial $\Delta_r(z;t)$ and a pairing on $\mathcal{P}'$, Hermitian with respect to the involution, by

$$\langle f, g\rangle_r := [\Delta_r f \bar g]_1.$$
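As a concrete illustration (not part of the original text), the following minimal SymPy sketch applies the creation operator $\Phi$ displayed above to a monomial for $n = 3$ and confirms the degree-raising behaviour $\Phi z^\mu = q^{-\mu_1}z^{(\mu_2,\ldots,\mu_n,\mu_1+1)}$ that drives all recursions below; the names `Phi` and `z` are ad hoc.

```python
import sympy as sp

n = 3
q, t = sp.symbols('q t')
z = sp.symbols('z1:4')                       # (z1, z2, z3)

def Phi(f):
    """Phi f(z_1,...,z_n) = z_n * f(q^{-1} z_n, z_1, ..., z_{n-1})."""
    sub = list(zip(z, (z[n-1] / q,) + z[:n-1]))
    return sp.expand(z[n-1] * f.subs(sub, simultaneous=True))

# on z^mu with mu = (2,0,1), Phi gives q^{-2} z^{(0,1,3)}
assert sp.expand(Phi(z[0]**2 * z[2]) - z[1] * z[2]**3 / q**2) == 0
```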
Non-symmetric Macdonald polynomials are defined by the following theorem.

2.1. Theorem. ([M3] §6) For every $\lambda \in \Lambda$ there is a unique polynomial $E_\lambda(z;q,t) \in \mathcal{P}$ satisfying

i) $E_\lambda(z) = z^\lambda + \sum_{\mu\in\Lambda:\,\mu<\lambda} c_{\lambda\mu}(q,t)\, z^\mu$, and

ii) $\langle E_\lambda(z), z^\mu\rangle_r = 0$ for all $\mu \in \Lambda$ with $\mu < \lambda$ and almost all $r \in \mathbb{N}$.

Moreover, the collection $\{E_\lambda \mid \lambda \in \Lambda\}$ forms a $k$-linear basis of $\mathcal{P}$.

The symmetric group $W$ also acts on $\mathcal{P}$ in the obvious way. We are going to define a basis of $\mathcal{P}^W$, the algebra of symmetric functions, which is parametrized by $\Lambda^+$. One basis is already given by the monomial symmetric functions $m_\lambda$, $\lambda \in \Lambda^+$. Next, we define the symmetric Macdonald polynomials:

2.2. Theorem. ([M3] §1.5) For every $\lambda \in \Lambda^+$ there is a unique symmetric polynomial $J_\lambda(z;q,t) \in \mathcal{P}^W$ satisfying

i) $J_\lambda(z) = m_\lambda + \sum_{\mu\in\Lambda^+:\,\mu<\lambda} c'_{\lambda\mu}(q,t)\, m_\mu$, and

ii) $\langle J_\lambda(z), m_\mu\rangle_r = 0$ for all $\mu \in \Lambda^+$ with $\mu < \lambda$ and almost all $r \in \mathbb{N}$.

Moreover, the collection $\{J_\lambda \mid \lambda \in \Lambda^+\}$ forms a $k$-linear basis of $\mathcal{P}^W$.

3. The Hecke algebra

The scalar product above is not symmetric in the variables $z_i$. Therefore, we define operators which replace the usual $W$-action. Let $s_i \in W$ be the $i$-th simple reflection. First, we define the operators $N_i := (z_i - z_{i+1})^{-1}(1 - s_i)$ and then

$$H_i := s_i - (1-t)N_i z_i, \qquad \bar H_i := s_i - (1-t)z_{i+1}N_i.$$

They satisfy the relations

$$H_i - \bar H_i = t - 1, \qquad H_i\bar H_i = t.$$

This means that both $H_i$ and $-\bar H_i$ solve the equation $(x+1)(x-t) = 0$. Also the braid relations hold:

$$H_iH_{i+1}H_i = H_{i+1}H_iH_{i+1} \quad (i = 1,\ldots,n-2), \qquad H_iH_j = H_jH_i \quad (|i-j| > 1).$$

This means that the algebra generated by the $H_i$ is a Hecke algebra of type $A_{n-1}$. For all this see [M3]. For compatibility, let me remark that our parameter $t$ is $t^2$ in [M3] and our $H_i$ is $tT_i$ there. Furthermore, the simple roots (or rather their exponentials) are $z_{i+1}/z_i$. The connection with the $W$-action is that we get the same set of invariants in the following sense: $f \in \mathcal{P}^W$ if and only if $\bar H_i(f) = f$ for all $i$, if and only if $H_i(f) = tf$ for all $i$.
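Since the operators above are completely explicit, their defining relations can be machine-checked. Here is a minimal sketch (Python/SymPy, $n = 3$, ad hoc names `H`, `Hb`) verifying, on sample polynomials, the quadratic relation, one braid relation, and the characterization of invariants; it is an illustration on test data, not a proof.

```python
import sympy as sp

n = 3
q, t = sp.symbols('q t')
z = sp.symbols('z1:4')

def s(f, i):
    """s_i: exchange z_i and z_{i+1} (1-based i)."""
    d = sp.Dummy()
    return f.subs(z[i-1], d).subs(z[i], z[i-1]).subs(d, z[i])

def N(f, i):
    """N_i = (z_i - z_{i+1})^{-1}(1 - s_i); the division is always exact."""
    return sp.cancel((f - s(f, i)) / (z[i-1] - z[i]))

def H(f, i):
    """H_i = s_i - (1-t) N_i z_i  (z_i acts first, as multiplication)."""
    return sp.expand(s(f, i) - (1 - t) * N(z[i-1] * f, i))

def Hb(f, i):
    """bar H_i = s_i - (1-t) z_{i+1} N_i."""
    return sp.expand(s(f, i) - (1 - t) * z[i] * N(f, i))

f = z[0]**2 * z[1] + 5 * z[2] - z[0]*z[1]*z[2]

# H_i - bar H_i = t - 1  and the quadratic relation (H_i + 1)(H_i - t) = 0
assert sp.expand(H(f, 1) - Hb(f, 1) - (t - 1) * f) == 0
assert sp.expand(H(H(f, 1), 1) - (t - 1) * H(f, 1) - t * f) == 0

# braid relation H_1 H_2 H_1 = H_2 H_1 H_2 (innermost operator acts first)
assert sp.expand(H(H(H(f, 1), 2), 1) - H(H(H(f, 2), 1), 2)) == 0

# on symmetric polynomials: H_i(g) = t g and bar H_i(g) = g
g = z[0]*z[1] + z[0]*z[2] + z[1]*z[2]
assert sp.expand(H(g, 1) - t * g) == 0 and sp.expand(Hb(g, 2) - g) == 0
```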
The braid relations imply that $H_w := H_{i_1}\cdots H_{i_k}$ is well defined if $w = s_{i_1}\cdots s_{i_k} \in W$ is a reduced decomposition, and similarly for $\bar H_w$. The following relations hold:

$$z_{i+1}H_i = \bar H_i z_i, \qquad H_i z_{i+1} = z_i\bar H_i, \qquad i = 1,\ldots,n-1.$$

Now we introduce the operator $\omega$ by

$$\omega f(z_1,\ldots,z_n) := f(q^{-1}z_n,\, z_1,\ldots,z_{n-1}).$$

The following relations are easily checked:

$$\omega z_{i+1} = z_i\omega \quad (i = 1,\ldots,n-1), \qquad \omega z_1 = q^{-1}z_n\omega,$$
$$\omega H_{i+1} = H_i\omega \quad (i = 1,\ldots,n-2), \qquad \omega^2 H_1 = H_{n-1}\omega^2.$$

The last relation means that if we define $H_0 := \omega H_1\omega^{-1} = \omega^{-1}H_{n-1}\omega$, then $H_0,\ldots,H_{n-1}$ generate an affine Hecke algebra, while $H_1,\ldots,H_{n-1},\omega$ generate the extended affine Hecke algebra corresponding to the weight lattice of $GL_n$. In particular, there must be a family of $n$ commuting elements: the Cherednik operators. In our particular case, they (or rather their inverses) have a very nice explicit form. For $i = 1,\ldots,n$ put

$$\xi_i^{-1} := \bar H_i\bar H_{i+1}\cdots\bar H_{n-1}\,\omega\,H_1H_2\cdots H_{i-1}.$$

We have the following commutation relations:

$$\xi_{i+1}H_i = \bar H_i\xi_i, \qquad H_i\xi_{i+1} = \xi_i\bar H_i \qquad (i = 1,\ldots,n-1), \qquad \xi_iH_j = H_j\xi_i \quad (i \ne j,\, j+1).$$

The relation to Macdonald polynomials is as follows. For $\lambda \in \Lambda$ define $\bar\lambda \in k^n$ by $\bar\lambda_i := q^{\lambda_i}t^{-k_i}$, where

$$k_i := \#\{j = 1,\ldots,i-1 \mid \lambda_j \ge \lambda_i\} + \#\{j = i+1,\ldots,n \mid \lambda_j > \lambda_i\}.$$

The following lemma is easy to check:

3.1. Lemma. ([M3] 4.13, 5.3) a) The action of $\xi_i$ on $\mathcal{P}$ is triangular. More precisely, $\xi_i(z^\lambda) = \bar\lambda_i z^\lambda + \sum_{\mu<\lambda}c_\mu z^\mu$.

b) Let $r \in \mathbb{N}$. Then, with respect to the scalar product $\langle\ ,\ \rangle_r$, the adjoints of $H_i$, $\omega$, $\xi_i$ are $H_i^{-1}$, $\omega^{-1}$, $\xi_i^{-1}$, respectively.
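The displayed $\omega$-relations are again mechanically checkable. The following sketch (SymPy, $n = 3$, reusing the same ad hoc operator definitions as above) verifies them on a sample polynomial, including the wrap-around relation $\omega^2H_1 = H_{n-1}\omega^2$.

```python
import sympy as sp

n = 3
q, t = sp.symbols('q t')
z = sp.symbols('z1:4')

def s(f, i):
    d = sp.Dummy()
    return f.subs(z[i-1], d).subs(z[i], z[i-1]).subs(d, z[i])

def N(f, i):
    return sp.cancel((f - s(f, i)) / (z[i-1] - z[i]))

def H(f, i):
    return sp.expand(s(f, i) - (1 - t) * N(z[i-1] * f, i))

def omega(f):
    """(omega f)(z_1,...,z_n) = f(q^{-1} z_n, z_1, ..., z_{n-1})."""
    sub = list(zip(z, (z[n-1] / q,) + z[:n-1]))
    return f.subs(sub, simultaneous=True)

f = z[0]**3 * z[1] + z[1] * z[2]**2 + 7

# omega z_{i+1} = z_i omega   and   omega z_1 = q^{-1} z_n omega
assert sp.expand(omega(z[1] * f) - z[0] * omega(f)) == 0
assert sp.expand(omega(z[0] * f) - z[2] / q * omega(f)) == 0

# omega H_{i+1} = H_i omega (i <= n-2)   and   omega^2 H_1 = H_{n-1} omega^2
assert sp.expand(omega(H(f, 2)) - H(omega(f), 1)) == 0
assert sp.expand(omega(omega(H(f, 1))) - H(omega(omega(f)), 2)) == 0
```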
Since $\bar\lambda = \bar\mu$ implies $\lambda = \mu$, we immediately get:

3.2. Corollary. The $\xi_i$ admit a simultaneous eigenbasis $E_\lambda$ of the form $z^\lambda + \sum_{\mu<\lambda}c_\mu z^\mu$, with $\xi_iE_\lambda = \bar\lambda_iE_\lambda$. Moreover, these functions coincide with those defined in Theorem 2.1. Actually, this gives a proof of Theorem 2.1 and of the fact that the $\xi_i$ commute pairwise.

4. Recursion relations

Observe that the operators $z_i$ and $\xi_i$ behave very similarly. The only exception is that there is no simple commutation rule for $\xi_1$ as there is for $z_1$. Therefore, we introduce another operator, which simply is $\Phi := z_n\omega = q\,\omega z_1$. Then one easily checks the relations

$$\xi_{i+1}\Phi = \Phi\xi_i \quad (i = 1,\ldots,n-1), \qquad \xi_1\Phi = q\,\Phi\xi_n.$$

This implies:

4.1. Theorem. Let $\lambda \in \Lambda$ with $\lambda_n \ne 0$ and put $\mu := (\lambda_n - 1, \lambda_1,\ldots,\lambda_{n-1})$. Then $E_\lambda = \Phi(E_\mu)$.

Observe that also the following relations hold:

$$z_{i+1}\Phi = \Phi z_i \quad (i = 1,\ldots,n-1), \qquad H_{i+1}\Phi = \Phi H_i \quad (i = 1,\ldots,n-2), \qquad \Phi^2H_1 = H_{n-1}\Phi^2.$$

This means that if $\tilde H_0 := \Phi H_1\Phi^{-1} = \Phi^{-1}H_{n-1}\Phi$, then $\tilde H_0, H_1,\ldots,H_{n-1}$ generate another copy of the affine Hecke algebra, but note that $\tilde H_0 \ne H_0$! Indeed, $H_0$ acts on $\mathcal{P}$, while $\tilde H_0$ acts only on $\mathcal{P}'$.
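The creation recursion of Theorem 4.1 can be run explicitly in small rank. The following sketch (SymPy, $n = 2$, using the conventions fixed above) builds $E_{(0,1)} = \Phi(1) = z_2$ and $E_{(1,1)} = \Phi(z_2) = z_1z_2$, and checks the eigenvalue property $\xi_1^{-1}E_{(0,1)} = t\,E_{(0,1)}$ predicted by $\bar\lambda_1 = t^{-1}$ for $\lambda = (0,1)$.

```python
import sympy as sp

q, t = sp.symbols('q t')
z1, z2 = sp.symbols('z1 z2')

def s1(f):
    d = sp.Dummy()
    return f.subs(z1, d).subs(z2, z1).subs(d, z2)

def Hb1(f):      # bar H_1
    return sp.expand(s1(f) - (1 - t) * z2 * sp.cancel((f - s1(f)) / (z1 - z2)))

def omega(f):    # omega f = f(q^{-1} z2, z1)
    return f.subs([(z1, z2 / q), (z2, z1)], simultaneous=True)

def Phi(f):      # creation operator Phi = z_n omega
    return sp.expand(z2 * omega(f))

E01 = Phi(sp.Integer(1))          # la = (0,1), mu = (0,0)
E11 = Phi(E01)                    # la = (1,1), mu = (0,1)
assert E01 == z2 and E11 == z1 * z2

# xi_1^{-1} = bar H_1 omega (n = 2); eigenvalue on E_{(0,1)} is t = (labar_1)^{-1}
assert sp.expand(Hb1(omega(E01)) - t * E01) == 0
```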
The theorem above works as a recursion relation for $E_\lambda$ if $\lambda_n \ne 0$. The next (well known) theorem tells how to permute two entries of $\lambda$.

4.2. Theorem. Let $\lambda \in \Lambda$ and let $s_i$ be a simple reflection.

a) Assume $\lambda_i = \lambda_{i+1}$. Then $H_i(E_\lambda) = tE_\lambda$ and $\bar H_i(E_\lambda) = E_\lambda$.

b) Let $\lambda_i > \lambda_{i+1}$ and $x := 1 - \bar\lambda_i/\bar\lambda_{i+1}$. Then $xE_\lambda = [xH_i + 1 - t]\,E_{s_i(\lambda)}$.

Proof: a) Let $\mu < \lambda$. Then it follows from properties of the Bruhat order that $H_i(z^\mu)$ is a linear combination of $z^\nu$ with $\nu < \lambda$. Hence, Lemma 3.1a implies that $H_i(E_\lambda)$ is orthogonal to all $z^\mu$ with $\mu < \lambda$. By definition it must be proportional to $E_\lambda$. The assertion follows by comparing the coefficients of $z^\lambda$.

b) Denote the right hand side by $E$. Since the coefficients of $z^\lambda$ are the same, it suffices to prove that $E$ is an eigenvector of $\xi_j$ with eigenvalue $\bar\lambda_j$. This is only non-trivial for $j = i, i+1$. Let $j = i$. Then

$$\xi_i(E) = \big[x(H_i\xi_{i+1} + (t-1)\xi_i) + (1-t)\xi_i\big]E_{s_i(\lambda)} = \Big[x\bar\lambda_iH_i + (1-t)\frac{\bar\lambda_i}{\bar\lambda_{i+1}}\bar\lambda_{i+1}\Big]E_{s_i(\lambda)} = \bar\lambda_i[xH_i + 1 - t]\,E_{s_i(\lambda)} = \bar\lambda_iE.$$

The case $j = i+1$ is handled similarly. $\square$

These two formulas suffice to generate all $E_\lambda$, but we will need a more refined version.

4.3. Lemma. Let $\lambda \in \Lambda$ with $\lambda_a \ne 0$ and $\lambda_{a+1} = \cdots = \lambda_n = 0$, and let $a < b \le n$. Let $\lambda^\#$ equal $\lambda$ except that the $a$-th and $b$-th components are interchanged. Then

$$(1 - \bar\lambda_at^a)E_\lambda = [\bar H_a\bar H_{a+1}\cdots\bar H_{b-1} - \bar\lambda_at^a\,H_aH_{a+1}\cdots H_{b-1}]\,E_{\lambda^\#}.$$

Proof: We prove this by induction on $b-a$. Let $\lambda'$ equal $\lambda$ but with the $a$-th and $(b-1)$-st components interchanged. Then $\bar\lambda'_{b-1} = \bar\lambda_a$ and $\bar\lambda'_b = t^{-b+1}$. With $x' := 1 - \bar\lambda_at^{b-1}$, Theorem 4.2b implies $x'E_{\lambda'} = [x'H_{b-1} + 1 - t]E_{\lambda^\#}$. Hence, with $x := 1 - \bar\lambda_at^a$, we get by induction

$$xx'E_\lambda = [\bar H_a\cdots\bar H_{b-2} - (1-x)H_a\cdots H_{b-2}]\,[x'H_{b-1} + 1 - t]\,E_{\lambda^\#}$$
$$= [x'\bar H_a\cdots\bar H_{b-2}(\bar H_{b-1} + t - 1) + (1-t)\bar H_a\cdots\bar H_{b-2} - x'(1-x)H_a\cdots H_{b-1} - (1-t)(1-x)H_a\cdots H_{b-2}]\,E_{\lambda^\#}.$$

Now observe that $H_i(E_{\lambda^\#}) = tE_{\lambda^\#}$ and $\bar H_i(E_{\lambda^\#}) = E_{\lambda^\#}$ for $i = a,\ldots,b-2$, by Theorem 4.2a. Hence the expression above becomes

$$[x'\bar H_a\cdots\bar H_{b-1} - x'(1-x)H_a\cdots H_{b-1} + (t-1)x' + (1-t) - (1-t)(1-x)t^{b-a-1}]\,E_{\lambda^\#}.$$

The lemma follows since the constant terms cancel out. $\square$

For $m = 1,\ldots,n$ define the operators

$$A_m := H_mH_{m+1}\cdots H_{n-1}\Phi, \qquad \bar A_m := \bar H_m\bar H_{m+1}\cdots\bar H_{n-1}\Phi.$$

Then we obtain:

4.4. Corollary. Let $\lambda \in \Lambda$ with $m := l(\lambda) > 0$. Put $\mu := (\lambda_m - 1, \lambda_1,\ldots,\lambda_{m-1}, 0,\ldots,0)$. Then $(1 - \bar\lambda_mt^m)E_\lambda = [\bar A_m - \bar\lambda_mt^m A_m]\,E_\mu$.
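Theorem 4.2 can likewise be tested in rank $2$. The following sketch (SymPy, $n = 2$; a check on one example, not a proof) verifies part a) on $E_{(1,1)} = z_1z_2$, then uses part b) to produce $E_{(1,0)} = z_1 + \frac{1-t}{1-qt}z_2$ from $E_{(0,1)} = z_2$ (here $x = 1 - \bar\lambda_1/\bar\lambda_2 = 1 - qt$), and confirms the eigenvalue $\xi_1^{-1}E_{(1,0)} = q^{-1}E_{(1,0)}$.

```python
import sympy as sp

q, t = sp.symbols('q t')
z1, z2 = sp.symbols('z1 z2')

def s1(f):
    d = sp.Dummy()
    return f.subs(z1, d).subs(z2, z1).subs(d, z2)

def H1(f):
    return sp.expand(s1(f) - (1 - t) * sp.cancel((z1*f - s1(z1*f)) / (z1 - z2)))

def Hb1(f):
    return sp.expand(s1(f) - (1 - t) * z2 * sp.cancel((f - s1(f)) / (z1 - z2)))

def omega(f):
    return f.subs([(z1, z2 / q), (z2, z1)], simultaneous=True)

E11 = z1 * z2                                   # E_{(1,1)}, see the previous sketch
assert sp.expand(H1(E11) - t * E11) == 0        # Theorem 4.2a
assert sp.expand(Hb1(E11) - E11) == 0

E01 = z2
x = 1 - q * t                                   # Theorem 4.2b for la = (1,0)
E10 = sp.cancel((x * H1(E01) + (1 - t) * E01) / x)
assert sp.simplify(E10 - (z1 + (1 - t) / (1 - q*t) * z2)) == 0

# eigenvalue check: xi_1^{-1} E_{(1,0)} = (labar_1)^{-1} E_{(1,0)} = q^{-1} E_{(1,0)}
assert sp.simplify(Hb1(omega(E10)) - E10 / q) == 0
```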
5. Integrality

To remove the denominators in the coefficients of $E_\lambda$ we use a normalization as follows. Recall that the diagram of $\lambda \in \Lambda$ is the set of points (usually called boxes) $s = (i,j) \in \mathbb{Z}^2$ such that $1 \le i \le n$ and $1 \le j \le \lambda_i$. For each box $s$ we define the arm-length $a(s)$ and the leg-length $l(s)$ as

$$a(s) := \lambda_i - j,$$
$$l'(s) := \#\{k = 1,\ldots,i-1 \mid j \le \lambda_k + 1 \le \lambda_i\},$$
$$l''(s) := \#\{k = i+1,\ldots,n \mid j \le \lambda_k \le \lambda_i\},$$
$$l(s) := l'(s) + l''(s).$$

If $\lambda \in \Lambda^+$ is a partition then $l'(s) = 0$, and $l''(s) = l(s)$ is just the usual leg-length. Now we define

$$\hat E_\lambda := \prod_{s\in\lambda}\big(1 - q^{a(s)+1}t^{l(s)+1}\big)\,E_\lambda.$$

With this normalization, we obtain:

5.1. Theorem. With the notation of Corollary 4.4, let $X := q^{\lambda_m-1}(\bar A_m - \bar\lambda_mt^mA_m)$. Then $\hat E_\lambda = X(\hat E_\mu)$.

Proof: It suffices to check the coefficient of $z^\lambda$. The factor $q^{\lambda_m-1}$ cancels the effect of $\omega$ (in $\Phi = z_n\omega$) on this coefficient. The diagram of $\mu$ is obtained from that of $\lambda$ by taking the last non-empty row, removing the first box $s_0$, and putting the rest on top. It is easy to check that arm-length and leg-length of the boxes $s \ne s_0$ don't change. Hence the assertion follows from Corollary 4.4, since the factor corresponding to $s_0$ is just $1 - \bar\lambda_mt^m$. $\square$

Now we can state our first integrality result:

5.2. Corollary. Let $\hat E_\lambda = \sum_\mu c_{\lambda\mu}z^\mu$. Then $c_{\lambda\mu} \in \mathbb{Z}[q,t]$.

Proof: Every $\hat E_\lambda$ is obtained by repeated application of operators $X$. Looking at the definition of $X$, we conclude that the $c_{\lambda\mu}$ are in $\mathbb{Z}[q,q^{-1},t]$. We exclude the possibility of negative powers of $q$. For this, write

$$\Phi = z_n\omega = z_n\bar H_{n-1}^{-1}\cdots\bar H_1^{-1}\,\xi_1^{-1}.$$

Now, $E_\mu$ is an eigenvector of $\xi_1^{-1}$ with eigenvalue $(\bar\mu_1)^{-1} = q^{-\lambda_m+1}t^{a}$ for some $a \in \mathbb{Z}$. This shows (by induction) that $q^{\lambda_m-1}\Phi(\hat E_\mu)$ doesn't contain negative powers of $q$. $\square$
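The combinatorial statistics above are easy to transcribe. Here is a small sketch (SymPy; function names are ad hoc) computing $a(s)$, $l(s)$ and the normalization factor of $\hat E_\lambda$ for a composition, and checking that $l'(s)$ vanishes on a partition.

```python
import sympy as sp

q, t = sp.symbols('q t')

def arm(la, i, j):                       # a(s) = la_i - j  (1-based i, j)
    return la[i-1] - j

def leg(la, i, j):                       # l(s) = l'(s) + l''(s)
    lp  = sum(1 for k in range(1, i)           if j <= la[k-1] + 1 <= la[i-1])
    lpp = sum(1 for k in range(i+1, len(la)+1) if j <= la[k-1]     <= la[i-1])
    return lp + lpp

def boxes(la):
    return [(i, j) for i in range(1, len(la)+1) for j in range(1, la[i-1]+1)]

def norm_factor(la):                     # prod over s of (1 - q^{a(s)+1} t^{l(s)+1})
    return sp.prod(1 - q**(arm(la,i,j)+1) * t**(leg(la,i,j)+1) for i, j in boxes(la))

# for a partition, l'(s) = 0, so l(s) is the usual leg-length
la = (3, 1, 0)
assert all(sum(1 for k in range(1, i) if j <= la[k-1]+1 <= la[i-1]) == 0
           for i, j in boxes(la))
print(sp.factor(norm_factor((1, 0, 1))))   # normalization of E-hat_{(1,0,1)}
```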
Our goal is a more refined integrality result. For this we replace the monomial basis $z^\lambda$ by a more suitable one. For $\lambda \in \Lambda$ let $w_\lambda$, $\tilde w_\lambda$ be the shortest permutations such that $\lambda^+ := w_\lambda^{-1}(\lambda)$ is dominant (i.e., a partition) and $\lambda^- := \tilde w_\lambda^{-1}(\lambda)$ is antidominant. Now we define the $t$-monomial $m_\lambda := \bar H_{\tilde w_\lambda}(z^{\lambda^-})$. The reason for this is that the action of the $H_i$ becomes nicer:

$$H_i(m_\lambda) = \begin{cases} t\,m_{s_i(\lambda)} & \text{if } \lambda_i \ge \lambda_{i+1},\\ m_{s_i(\lambda)} + (t-1)\,m_\lambda & \text{if } \lambda_i < \lambda_{i+1},\end{cases} \qquad \bar H_i(m_\lambda) = \begin{cases} t\,m_{s_i(\lambda)} + (1-t)\,m_\lambda & \text{if } \lambda_i > \lambda_{i+1},\\ m_{s_i(\lambda)} & \text{if } \lambda_i \le \lambda_{i+1}.\end{cases}$$

This is easily proved by induction on the length of $\tilde w_\lambda$. Moreover, it is easy to see that the transition matrix between $t$-monomials and ordinary monomials is unitriangular. Now we define a length function on $\Lambda$ by $L(\lambda) := l(w_\lambda) = \#\{(i,j) \mid i < j,\ \lambda_i < \lambda_j\}$.

5.3. Lemma. The function $m_\lambda^{(0)} := \sum_\mu t^{L(\mu)}m_\mu$, where the sum runs through all permutations $\mu$ of $\lambda$, is symmetric.

Proof: It suffices to prove $H_i(m_\lambda^{(0)}) = t\,m_\lambda^{(0)}$ for all $i$. This follows easily from the explicit description of the action given above. $\square$

Clearly, the symmetric $t$-monomials $m_\lambda^{(0)}$, $\lambda \in \Lambda^+$ (later we will see that they are nothing else than the Hall-Littlewood polynomials) also have a unitriangular transition matrix to the monomial symmetric functions $m_\lambda$. For technical reasons we also need partially symmetric $t$-monomials. Fix $0 \le m \le n$ and for $\lambda \in \Lambda$ let $\lambda' := (\lambda_1,\ldots,\lambda_m)$ and $\lambda'' := (\lambda_{m+1},\ldots,\lambda_n)$. We also write $\lambda = \lambda'\lambda''$. Let $\Lambda(m)$ be the set of those $\lambda$ such that $\lambda''$ is a partition. For these elements we form

$$m_\lambda^{(m)} := \sum_\mu t^{L(\mu)}\,m_{\lambda'\mu},$$

where $\mu$ runs through all permutations of $\lambda''$.

For $k \in \mathbb{N}$ let $\varphi_k(t) := (1-t)(1-t^2)\cdots(1-t^k)$. Then $[k]! := \varphi_k(t)/(1-t)^k$ is the $t$-factorial. For a partition $\lambda$ we define $m_i(\lambda) := \#\{j \mid \lambda_j = i\}$ and $b_\lambda(t) := \prod_{i\ge1}\varphi_{m_i(\lambda)}(t)$. Now we define the augmented partially symmetric $t$-monomial as $\tilde m_\lambda^{(m)} := b_{\lambda''}(t)\,m_\lambda^{(m)}$.
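The $t$-monomials can be generated directly from their defining recursion $m_\lambda = \bar H_i(m_{s_i\lambda})$ at a descent $\lambda_i > \lambda_{i+1}$. The sketch below (SymPy, $n = 3$, ad hoc names) builds them along a reduced word and checks Lemma 5.3, namely $H_i(m^{(0)}_\lambda) = t\,m^{(0)}_\lambda$, for $\lambda = (2,1,0)$.

```python
import sympy as sp
from itertools import permutations

n = 3
q, t = sp.symbols('q t')
z = sp.symbols('z1:4')

def s(f, i):
    d = sp.Dummy()
    return f.subs(z[i-1], d).subs(z[i], z[i-1]).subs(d, z[i])

def N(f, i):
    return sp.cancel((f - s(f, i)) / (z[i-1] - z[i]))

def H(f, i):
    return sp.expand(s(f, i) - (1 - t) * N(z[i-1] * f, i))

def Hb(f, i):
    return sp.expand(s(f, i) - (1 - t) * z[i] * N(f, i))

def t_mon(la):
    """m_la = bar H_{w~}(z^{la^-}), built along a reduced word for w~_la."""
    la = list(la)
    for i in range(n - 1):
        if la[i] > la[i+1]:                     # descent: m_la = bar H_i m_{s_i la}
            la[i], la[i+1] = la[i+1], la[i]
            return Hb(t_mon(la), i + 1)
    return sp.prod(v**e for v, e in zip(z, la)) # antidominant: plain monomial

def L(la):                                      # L(la) = #{i<j : la_i < la_j}
    return sum(1 for i in range(n) for j in range(i+1, n) if la[i] < la[j])

m0 = sum(t**L(mu) * t_mon(mu) for mu in set(permutations((2, 1, 0))))
assert all(sp.expand(H(m0, i) - t * m0) == 0 for i in (1, 2))   # Lemma 5.3
```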
The key result of this paper is:

5.4. Theorem. For $m \ge l(\lambda)$ consider the expansion $\hat E_\lambda = \sum_{\mu\in\Lambda(m)} c_{\lambda\mu}\,\tilde m_\mu^{(m)}$. Then the coefficients $c_{\lambda\mu}$ are in $\mathbb{Z}[q,t]$.

Proof: By Corollary 5.2, the only denominators which can occur are products of factors of the form $1 - t^k$ (or divisors thereof). In particular, it suffices to show that the $c_{\lambda\mu}$ are in $\mathbb{Z}[q,q^{-1},t,t^{-1}]$. Therefore, as in the proof of Corollary 5.2, we may replace $\Phi$ by

$$\Phi' := z_n\bar H_{n-1}^{-1}\cdots\bar H_1^{-1} = t^{1-n}z_nH_{n-1}\cdots H_1$$

(the omitted eigenoperator $\xi_1^{-1}$ only contributes powers of $q$ and $t$), and $A_m$, $\bar A_m$ by

$$A'_m := H_mH_{m+1}\cdots H_{n-1}\Phi', \qquad \bar A'_m := \bar H_m\bar H_{m+1}\cdots\bar H_{n-1}\Phi'.$$

Therefore, the theorem is proved with the next lemma.

5.5. Lemma. a) Every $\tilde m_\lambda^{(m)}$ is a linear combination of the $\tilde m_\mu^{(m+1)}$ with coefficients in $\mathbb{Z}[t]$.

b) The operators $A'_m$ and $\bar A'_m$ commute with $H_{m+1},\ldots,H_{n-1}$. In particular, they leave stable the space which is spanned by all $\tilde m_\lambda^{(m)}$.

c) The matrix coefficients of $A'_m$ and $\bar A'_m$ with respect to this basis are in $\mathbb{Z}[t,t^{-1}]$.

Proof: Part a) is obvious, and b) is an easy consequence of commutation and braid relations. For c), let us start with another lemma.

5.6. Lemma. Let $\lambda \in \Lambda$ with $\lambda_n \ne 0$ and put $\mu := (\lambda_n - 1, \lambda_1,\ldots,\lambda_{n-1})$. Then $\Phi'(m_\mu) = t^{-a}m_\lambda$, where $a = \#\{i < n \mid \lambda_i \ge \lambda_n\}$.

Proof: Using braid and commutation relations, one verifies $\Phi'H_i = H_{i-1}\Phi'$ for all $i > 1$. Now assume $\lambda_{i-1} > \lambda_i$ for some $i < n$. Then, using induction on $l(\tilde w_\lambda)$, we may assume the result is correct for $s_{i-1}(\lambda)$. Thus

$$\Phi'(m_\mu) = \Phi'\bar H_i(m_{s_i(\mu)}) = \bar H_{i-1}\Phi'(m_{s_i(\mu)}) = t^{-a}\bar H_{i-1}(m_{s_{i-1}(\lambda)}) = t^{-a}m_\lambda.$$

Thus, we may assume $\lambda_1 \le \cdots \le \lambda_{n-1}$. Let $l := \lambda_n - 1$ and $b := \max\{i < n \mid \lambda_i \le l\} = n - 1 - a$. For simplicity, we denote $m_\lambda$ by $[\lambda]$. Thus

$$t^{n-1}\Phi'[\mu] = z_nH_{n-1}\cdots H_1[\mu] = t^b\,\bar H_{n-1}\cdots\bar H_{b+1}\,z_{b+1}[\lambda_1,\ldots,\lambda_b,\, l,\, \lambda_{b+1},\ldots,\lambda_{n-1}]$$
$$= t^b\,\bar H_{n-1}\cdots\bar H_{b+1}[\lambda_1,\ldots,\lambda_b,\, l+1,\, \lambda_{b+1},\ldots,\lambda_{n-1}] = t^b[\lambda_1,\ldots,\lambda_{n-1},\, l+1]. \qquad\square$$
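Lemma 5.6 is again checkable by machine. The sketch below (SymPy, $n = 3$, same ad hoc conventions as above) verifies two instances: $\mu = (0,0,0) \mapsto \lambda = (0,0,1)$ with $a = 0$, and $\mu = (0,0,1) \mapsto \lambda = (0,1,1)$ with $a = 1$; in both cases the relevant $m_\mu$, $m_\lambda$ are plain monomials since the indices are antidominant.

```python
import sympy as sp

n = 3
q, t = sp.symbols('q t')
z = sp.symbols('z1:4')

def s(f, i):
    d = sp.Dummy()
    return f.subs(z[i-1], d).subs(z[i], z[i-1]).subs(d, z[i])

def H(f, i):
    return sp.expand(s(f, i) - (1 - t) *
                     sp.cancel((z[i-1]*f - s(z[i-1]*f, i)) / (z[i-1] - z[i])))

def Phi_p(f):
    """Phi' = t^{1-n} z_n H_{n-1} ... H_1 (H_1 acts first)."""
    for i in range(1, n):
        f = H(f, i)
    return sp.expand(t**(1 - n) * z[n-1] * f)

assert sp.expand(Phi_p(sp.Integer(1)) - z[2]) == 0            # a = 0
assert sp.expand(Phi_p(z[2]) - z[1] * z[2] / t) == 0          # a = 1
```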
We continue with the proof of Lemma 5.5. By part b), $\bar A'_m m_\lambda^{(m)}$ is symmetric in the variables $z_{m+1},\ldots,z_n$. Therefore, it suffices to investigate the coefficient of $[\mu]$ where $\mu''$ is an anti-partition. Let $[\rho]$ be a typical term of $m_\lambda^{(m)}$, i.e., $\rho' = \lambda'$ and $\rho''$ is a permutation of $\lambda''$. Let $l := \lambda_1 + 1 = \rho_1 + 1$. Looking at how $\Phi'$ and the $\bar H_i$ act, we see that $\bar A'_m[\rho]$ is a linear combination of terms $[\nu] = [s_{i_1}\cdots s_{i_r}(\rho_2,\ldots,\rho_n,l)]$ where $m \le i_1 < \cdots < i_r < n$. If $r = n-m$ then $\nu'' = \rho''$, hence $b_{\nu''} = b_{\lambda''}$, and we are done. Otherwise there is $m < j \le n$ maximal such that $j-1 \notin \{i_1,\ldots,i_r\}$. Since $s_j\cdots s_{n-1}(\rho_2,\ldots,\rho_n,l) = (\ldots,\rho_j,l,\rho_{j+1},\ldots)$, we necessarily have $\rho_j > l$. We are only interested in the case where $\mu''$ is an anti-partition. So $\rho_j$ has to be moved all the way to the $m$-th position. This means $(i_1,\ldots,i_r) = (m, m+1,\ldots,j-2,\, j,\ldots,n-1)$. Moreover, $\rho_{m+1} \le \cdots \le \rho_{j-1} \le l \le \rho_{j+1} \le \cdots \le \rho_n$ and $\rho_j > l$. There are exactly $m_l(\lambda'') + 1 = m_l(\mu'')$ such permutations $\rho''$ of $\lambda''$. Each of them contributes $t^{-a}(1-t)\,t^{L(\rho'')}$ to the coefficient of $[\mu]$. With $j$, the length $L(\rho'')$ runs through a consecutive segment $b,\ldots,b + m_l(\lambda'')$ of integers. So $[\mu]$ gets the factor

$$t^{b-a}(1-t)\big(1 + t + \cdots + t^{m_l(\lambda'')}\big) = t^{b-a}\big(1 - t^{m_l(\mu'')}\big).$$

This shows that the coefficient of $[\mu]$ in $b_{\lambda''}\,\bar A'_m m_\lambda^{(m)}$ is divisible by $b_{\mu''}$.

The case of $A'_m$ is completely analogous, and the details are left to the reader. The only change is that one only looks for the coefficient of $[\mu]$ where $\mu''$ is a partition. $\square$

The proof actually gives a little bit more:

5.7. Corollary. For all $\lambda \in \Lambda$ we have $E_\lambda(z;0,t) = m_\lambda$.

Proof: Assume $m = l(\lambda)$ and look at $\bar A'_m m_\mu$. In the notation of the proof above, the second case (where some $s_{j-1}$ is missing) cannot occur, since then $\rho_j = 0$ would have to be greater than $l > 0$. So we get $m_\lambda = t^{a}\,\bar A'_m(m_\mu)$ with $a$ as in Lemma 5.6. This means that the $m_\lambda$ satisfy the same recursion relation as the $E_\lambda$ with $q = 0$. $\square$
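Corollary 5.7 can be observed directly in a small case. The sketch below (SymPy, $n = 3$, ad hoc names as before) computes $\bar A'_1(m_{(0,0,0)})$ for $\lambda = (1,0,0)$ (here the exponent $a$ vanishes) and checks that it equals the $t$-monomial $m_{(1,0,0)} = z_1 + (1-t)z_2 + (1-t)z_3$, i.e., $E_{(1,0,0)}(z;0,t)$.

```python
import sympy as sp

n = 3
q, t = sp.symbols('q t')
z = sp.symbols('z1:4')

def s(f, i):
    d = sp.Dummy()
    return f.subs(z[i-1], d).subs(z[i], z[i-1]).subs(d, z[i])

def N(f, i):
    return sp.cancel((f - s(f, i)) / (z[i-1] - z[i]))

def H(f, i):
    return sp.expand(s(f, i) - (1 - t) * N(z[i-1] * f, i))

def Hb(f, i):
    return sp.expand(s(f, i) - (1 - t) * z[i] * N(f, i))

def Phi_p(f):
    for i in range(1, n):
        f = H(f, i)
    return sp.expand(t**(1 - n) * z[n-1] * f)

# bar A'_1 = bar H_1 bar H_2 Phi' applied to m_{(0,0,0)} = 1
E_at_q0 = Hb(Hb(Phi_p(sp.Integer(1)), 2), 1)
m_100 = Hb(Hb(z[2], 2), 1)          # m_{(1,0,0)} = bar H_1 bar H_2 z^{(0,0,1)}
assert sp.expand(E_at_q0 - m_100) == 0
assert sp.expand(m_100 - (z[0] + (1-t)*z[1] + (1-t)*z[2])) == 0
```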
6. The symmetric case and Kostka numbers

Finally, we come to the integrality properties of the symmetric polynomial $J_\lambda$, where $\lambda \in \Lambda^+$. For this we normalize it as follows:

$$\hat J_\lambda(z;q,t) := \prod_{s\in\lambda}\big(1 - q^{a(s)}t^{l(s)+1}\big)\,J_\lambda(z;q,t).$$

6.1. Theorem. Let $\hat J_\lambda(z) = \sum_\mu c_{\lambda\mu}(q,t)\,\tilde m_\mu^{(0)}$. Then the coefficients $c_{\lambda\mu}$ are in $\mathbb{Z}[q,t]$.

Proof: Let $m := l(\lambda)$ and consider $\lambda^-$, the anti-partition with $\lambda^-_i = \lambda_{n+1-i}$, and $\mu := (\lambda_m - 1,\ldots,\lambda_1 - 1, 0,\ldots,0)$. Let $E := \Phi^m(\hat E_\mu)$. Then $E$ equals $\hat E_{\lambda^-}$, except that in the normalization factor the contributions of the first column of $\lambda^-$ are missing. Put

$$J := \frac{(1-t)^m}{[m_0(\lambda)]!}\sum_{w\in W}H_w(E).$$

We claim that $J = \hat J_\lambda$. Consider the subspace $V$ of $\mathcal{P}$ spanned by all $E_{w\lambda}$, $w \in W$. Then it follows from Lemma 3.1b and the definitions that $J$ spans $V^W$. This shows that $J$ is proportional to $\hat J_\lambda$. To show equality, we compare the coefficients of $m_\lambda$. Since $\lambda^-$ is anti-dominant, only those summands $H_w(E)$ have an $m_\lambda$-term for which $w\lambda^- = \lambda$. These $w$ form a left coset for $W_\lambda$. Therefore, summation over this coset contributes the factor

$$\sum_{w\in W_\lambda}t^{l(w)} = \prod_{i\ge0}[m_i(\lambda)]! = [m_0(\lambda)]!\,(1-t)^{-m}\,b_\lambda(t).$$

Thus, $m_\lambda$ has in $J$ the coefficient

$$b_\lambda(t)\prod_{\substack{s\in\lambda^-\\ s\ne(i,1)}}\big(1 - q^{a(s)+1}t^{l(s)+1}\big).$$

On the other hand, by definition, the coefficient of $m_\lambda$ in $\hat J_\lambda$ is

$$\prod_{s\in\lambda}\big(1 - q^{a(s)}t^{l(s)+1}\big).$$

Let $w = w_{\lambda^-}$ be the shortest permutation which transforms $\lambda$ into $\lambda^-$. This means $w(i) > w(j)$ whenever $\lambda_i > \lambda_j$, but $w(i) < w(j)$ if $\lambda_i = \lambda_j$ and $i < j$. Consider the following correspondence between boxes:

$$\lambda \ni s = (i,j) \ \longleftrightarrow\ s^- = (w(i),\, j+1) \in \lambda^-.$$

It is defined for all $s$ with $j < \lambda_i$. One easily verifies that $a(s) = a(s^-) + 1$ and $l(s) = l(s^-)$. This means that $s$ and $s^-$ contribute the same factor in the products above. What is left out of the correspondence are those boxes of $\lambda$ with $j = \lambda_i$ and the first column of $\lambda^-$. The boxes of the first type contribute $b_\lambda(t)$ to the coefficient in $\hat J_\lambda$. The second type doesn't contribute, by construction. This shows $J = \hat J_\lambda$.

Finally, we have to show that the coefficients of the $\tilde m_\mu^{(0)}$ in $J$ are in $\mathbb{Z}[q,t]$. By Corollary 5.2 it suffices to show that these coefficients are in $A := \mathbb{Z}[q,q^{-1},t,t^{-1}]$. So we can ignore negative powers of $q$ and $t$ and replace $E$ by $E' := (\Phi')^m(\hat E_\mu)$, and similarly $J$ by $J'$. Lemma 5.6 shows that $(\Phi')^m(m_\nu^{(m)})$ now equals the symmetrization of some $m_{\nu'}$ in the
first $n-m$ variables. Therefore, by abuse of notation, let $\lambda = \lambda''\lambda'$ where $\lambda'$ comprises the last $m$ components of $\lambda$, and let $m_\lambda^{(m)}$ be the symmetrization of $m_\lambda$ in $\lambda''$.

The isotropy group of $\lambda^-$ is generated by simple reflections. Hence $E'$ is $W_{\lambda^-}$-invariant. We may assume that $\mu$ is dominant for $W_{\lambda^-}$, i.e., $\mu_i \ge \mu_{i+1}$ whenever $\lambda^-_i = \lambda^-_{i+1}$. Let $m_{ij} := \#\{k \mid \lambda^-_k = i,\ \mu_k = j\}$. Then $E'$ is an $A$-linear combination of

$$\frac{b_{\mu''}}{\prod_{ij}[m_{ij}]!}\sum_{w\in W_{\lambda^-}}H_w\, m_\mu.$$

Averaging over $W$, we obtain that $J'$ is an $A$-linear combination of

$$\frac{(1-t)^m}{[m_0(\lambda)]!}\cdot\frac{b_{\mu''}}{\prod_{ij}[m_{ij}]!}\cdot\prod_i[m_i(\lambda)]!\,\prod_j[m_j(\mu)]!\ \, m_\mu^{(0)}.$$

Because $b_\mu = (1-t)^{n-m_0(\mu)}\prod_{j\ge1}[m_j(\mu)]!$ and $b_{\mu''} = (1-t)^{l(\mu'')}\prod_{j\ge1}[m_{0j}]!$, the expression above equals

$$(1-t)^{m+l(\mu'')+m_0(\mu)-n}\ \frac{[m_0(\mu)]!}{[m_{00}]!}\ \prod_{i\ge1}\frac{[m_i(\lambda)]!}{\prod_{j\ge0}[m_{ij}]!}\ \tilde m_\mu^{(0)}.$$

This proves the theorem. $\square$

To put this into a more classical perspective, note:

6.2. Theorem. Let $\lambda \in \Lambda^+$. Then $m_\lambda^{(0)} = J_\lambda|_{q=0}$. Moreover, $m_\lambda^{(0)}$ (respectively $\tilde m_\lambda^{(0)}$) equals the Hall-Littlewood polynomial $P_\lambda(z;t)$ (respectively $Q_\lambda(z;t)$) in the notation of [M2] III.

Proof: The equality $m_\lambda^{(0)} = J_\lambda|_{q=0}$ follows from Corollary 5.7, and the equality $J_\lambda(z;0,t) = P_\lambda(z;t)$ is well known ([M2] VI.1). Finally, $\tilde m_\lambda^{(0)} = Q_\lambda(z;t)$ follows by comparing their definitions. $\square$

Recall that the Kostka functions $K_{\lambda\mu}(q,t)$ form the transition matrix from the Macdonald polynomials $J_\mu(z;q,t)$ to the $t$-Schur functions $S_\lambda(z;t)$. It is known that the transition matrix from the $S_\lambda(z;t)$ to the Hall-Littlewood polynomials $Q_\mu(z;t)$ is unitriangular ([M2]). Hence, Theorem 6.1 can be rephrased as:

6.3. Theorem. For all $\lambda, \mu \in \Lambda^+$ we have $K_{\lambda\mu}(q,t) \in \mathbb{Z}[q,t]$.
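Theorem 6.2 can be illustrated concretely. The following sketch (SymPy, $n = 3$, reusing the $t$-monomial construction from the sketch after Section 5) checks that $m^{(0)}_{(2,0,0)}$ coincides with the Hall-Littlewood polynomial $P_{(2)}(z;t) = m_{(2)} + (1-t)\,m_{(1,1)}$; the closed form of $P_{(2)}$ is standard ([M2] III).

```python
import sympy as sp
from itertools import permutations

n = 3
q, t = sp.symbols('q t')
z = sp.symbols('z1:4')

def s(f, i):
    d = sp.Dummy()
    return f.subs(z[i-1], d).subs(z[i], z[i-1]).subs(d, z[i])

def Hb(f, i):
    return sp.expand(s(f, i) - (1 - t) * z[i] *
                     sp.cancel((f - s(f, i)) / (z[i-1] - z[i])))

def t_mon(la):
    la = list(la)
    for i in range(n - 1):
        if la[i] > la[i+1]:
            la[i], la[i+1] = la[i+1], la[i]
            return Hb(t_mon(la), i + 1)
    return sp.prod(v**e for v, e in zip(z, la))

def L(la):
    return sum(1 for i in range(n) for j in range(i+1, n) if la[i] < la[j])

m0 = sum(t**L(mu) * t_mon(mu) for mu in set(permutations((2, 0, 0))))
P2 = (z[0]**2 + z[1]**2 + z[2]**2 +
      (1 - t) * (z[0]*z[1] + z[0]*z[2] + z[1]*z[2]))
assert sp.expand(m0 - P2) == 0
```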
7. Jack polynomials

Let me shortly indicate how to obtain even positivity for Jack polynomials. As already mentioned in the introduction, a detailed proof completely in the framework of Jack polynomials will appear as a joint paper with S. Sahi. There we even give a combinatorial formula in terms of certain tableaux.

Let $\alpha$ be an indeterminate and put formally $q = t^\alpha$. Let $p(q,t) \in \mathbb{Q}(q,t)$, $p_0 \in \mathbb{Q}$ and $k \in \mathbb{N}$. Then we write $p \xrightarrow{\,k\,} p_0$ if

$$\lim_{t\to1}\frac{p(t^\alpha,t)}{(1-t)^k} = p_0.$$

For example, $1 - q^at^b \xrightarrow{\,1\,} a\alpha + b$.

Let $J_\lambda(z;\alpha)$ be a Jack polynomial. One could define it by $J_\lambda(z;q,t) \xrightarrow{\,|\lambda|\,} J_\lambda(z;\alpha)$ ([M2] VI.10.23). There is also a non-symmetric analogue, defined by $E_\lambda(z;q,t) \xrightarrow{\,|\lambda|\,} E_\lambda(z;\alpha)$. For any $\lambda \in \Lambda$ and $1 \le m \le n$ we have $m_\lambda \xrightarrow{\,0\,} z^\lambda$ and

$$m_\lambda^{(m)} \xrightarrow{\,0\,} \mathbf{m}_\lambda^{(m)} := \sum_\mu z^{\lambda'\mu},$$

where $\mu$ runs through all permutations of $\lambda''$. If $\lambda \in \Lambda^+$, let $u_\lambda := \prod_{i\ge1}m_i(\lambda)!$. Then we have $b_\lambda \xrightarrow{\,l(\lambda)\,} u_\lambda$. In particular, $\tilde m_\lambda^{(m)} \xrightarrow{\,l(\lambda'')\,} \tilde{\mathbf{m}}_\lambda^{(m)} := u_{\lambda''}\,\mathbf{m}_\lambda^{(m)}$. With this notation we have:

7.1. Theorem. a) Let $\lambda \in \Lambda$ and $m = l(\lambda)$. Then there is an expansion $E_\lambda(z;\alpha) = \sum_{\mu\in\Lambda(m)} c_{\lambda\mu}(\alpha)\,\tilde{\mathbf{m}}_\mu^{(m)}$ with $c_{\lambda\mu}(\alpha) \in \mathbb{N}[\alpha]$ for all $\mu$.

b) Let $\lambda \in \Lambda^+$ and $m = l(\lambda)$. Then there is an expansion $J_\lambda(z;\alpha) = \sum_{\mu\in\Lambda^+} c'_{\lambda\mu}(\alpha)\,\tilde{\mathbf{m}}_\mu^{(0)}$ with $c'_{\lambda\mu}(\alpha) \in \mathbb{N}[\alpha]$ for all $\mu$.

Proof: Going to the limit $t \to 1$, Theorem 5.4 and Theorem 6.1 imply $c_{\lambda\mu}, c'_{\lambda\mu} \in \mathbb{Z}[\alpha]$ (see also [M2] VI.10 Ex. 2a). It remains to show positivity. Since $J_\lambda(z;\alpha)$ is the symmetrization of $E_\lambda(z;\alpha)$, it suffices to show positivity for the latter. For this, we write the operator $X$ of Theorem 5.1 as $X = q^{\lambda_m-1}\big((\bar A_m - A_m) + (1 - \bar\lambda_mt^m)A_m\big)$. Then $X \xrightarrow{\,1\,} X_1$, and we prove that all parts of $X_1$ preserve positivity. With $\Phi_1 := z_ns_{n-1}\cdots s_1$ this follows from

$$q^{\lambda_m-1} \xrightarrow{\,0\,} 1, \qquad \Phi \xrightarrow{\,0\,} \Phi_1, \qquad A_m \xrightarrow{\,0\,} s_m\cdots s_{n-1}\Phi_1,$$
$$\bar A_m - A_m \xrightarrow{\,1\,} \sum_{i=m}^{n-1}s_m\cdots\hat s_i\cdots s_{n-1}\Phi_1, \qquad 1 - \bar\lambda_mt^m \xrightarrow{\,1\,} \lambda_m\alpha - k + m,$$

where $k$ is the number of $i = 1,\ldots,m-1$ with $\lambda_i \ge \lambda_m$. $\square$
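The degeneration arrows used above are ordinary one-sided limits and can be evaluated symbolically. The sketch below (SymPy; `arrow`-style computations with ad hoc names, shown on the two examples from the text) confirms $1 - q^at^b \xrightarrow{1} a\alpha + b$ after the substitution $q = t^\alpha$, and that a $t$-factorial degenerates to an ordinary factorial.

```python
import sympy as sp

t, alpha = sp.symbols('t alpha', positive=True)
a, b = 2, 3

# 1 - q^a t^b --(1)--> a*alpha + b, with q = t^alpha
lim = sp.limit((1 - t**(a*alpha + b)) / (1 - t), t, 1)
assert sp.simplify(lim - (a*alpha + b)) == 0

# phi_2 = (1-t)(1-t^2) --(2)--> 2! : the t-factorial [2]! tends to 2
assert sp.limit((1 - t) * (1 - t**2) / (1 - t)**2, t, 1) == 2
```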
8. References

[C1] Cherednik, I.: Double affine Hecke algebras and Macdonald's conjectures. Annals Math. 141 (1995), 191–216

[LS] Lascoux, A.; Schützenberger, M.: Sur une conjecture de H. O. Foulkes. C. R. Acad. Sci. Paris (Série I) 286A (1978), 323–324

[M1] Macdonald, I.: A new class of symmetric functions. Publ. I.R.M.A. Strasbourg, Actes 20e Séminaire Lotharingien 131-71 (1988)

[M2] Macdonald, I.: Symmetric functions and Hall polynomials (2nd ed.). Oxford: Clarendon Press 1995

[M3] Macdonald, I.: Affine Hecke algebras and orthogonal polynomials. Séminaire Bourbaki (1995), no. 797

[O1] Opdam, E.: Harmonic analysis for certain representations of graded Hecke algebras. Acta Math. 175 (1995), 75–121