Notes on the Dual Ramsey Theorem


Reed Solomon

July 29, 2010

1 Partitions and infinite variable words

The goal of these notes is to give a proof of the Dual Ramsey Theorem. This theorem was first proved in "Dual Form of Ramsey's Theorem" by Tim Carlson and Steve Simpson, Advances in Mathematics, 1984. The proof presented here is the same proof given in the paper and follows much of the same notation and terminology. The main change is that we will explicitly use the notion of infinite variable words as used by Simpson in later discussions of this theorem. (For example, Simpson uses this language in his discussion of reverse mathematics and the Dual Ramsey Theorem in the Proceedings Volume from the Boulder conference.)

Before stating the Dual Ramsey Theorem, we need to introduce some notation and terminology. We let ω = {0, 1, ...} and we let A be a finite (possibly empty) alphabet which is disjoint from ω.

Definition 1.1. An A-partition is a collection of pairwise disjoint nonempty subsets of A ∪ ω (called blocks) whose union is A ∪ ω and such that each block contains at most one element of A. A block which is disjoint from A is called free.

Example 1.2. For many of the examples, we will work with the alphabet A = {a, b, c}. The A-partition P given by

{a, 0, 2, 4, 6, ...}, {b, 1, 3, 5, ...}, {c}

has no free blocks, while the A-partition Q given by

{a, 0, 6, 12, 18, ...}, {b, 2, 8, 14, 20, ...}, {c, 4, 10, 16, 22, ...}, {1, 3}, {5, 7}, {9, 11}, ...

has infinitely many free blocks. We think of the free blocks of Q as indexed by elements of ω, with the ordering determined by the least element of each block. That is, the 0-th free block of Q is {1, 3}, the 1-st free block of Q is {5, 7}, the 2-nd free block is {9, 11}, and so on.

We let (ω)^ω_A denote the set of all A-partitions containing infinitely many free blocks and we let (ω)^k_A denote the set of all A-partitions containing exactly k many free blocks. Thus, in Example 1.2, P ∈ (ω)^0_A and Q ∈ (ω)^ω_A.
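The block and free-block bookkeeping above can be sketched in Python on finite truncations of a partition; the set encoding, the function name, and the truncation of Q below are my own illustrative choices, not from the notes.

```python
# Finite truncation of an A-partition: each block is a frozenset that may
# contain letters of the alphabet A and natural numbers. A block is free
# if it contains no letter. (Illustrative encoding, assumed for this sketch.)

A = {"a", "b", "c"}

def free_blocks(blocks):
    """Return the free blocks ordered by least element, i.e. the
    0-th, 1-st, 2-nd, ... free blocks as in Example 1.2."""
    free = [b for b in blocks if not (b & A)]
    return sorted(free, key=min)

# Truncation of the partition Q from Example 1.2 to A and {0, ..., 11}:
Q = [frozenset({"a", 0, 6}), frozenset({"b", 2, 8}), frozenset({"c", 4, 10}),
     frozenset({1, 3}), frozenset({5, 7}), frozenset({9, 11})]

print(free_blocks(Q))  # free blocks ordered by least element: {1,3}, {5,7}, {9,11}
```

Sorting by `min` implements the convention that free blocks are indexed by their least elements.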
If X is an A-partition, then we refer to the blocks of X as X-blocks. Note that if A = ∅, then an A-partition is just an ordinary partition of ω. In this case, we require that k ≥ 1 (since k = 0 does not make sense if A = ∅), and we write

(ω)^ω and (ω)^k in place of (ω)^ω_∅ and (ω)^k_∅. Thus, (ω)^ω denotes the set of all partitions of ω into infinitely many blocks and (ω)^k (for k ≥ 1) denotes the set of all partitions of ω into k many blocks.

Definition 1.3. If X and Y are A-partitions, then Y is coarser than X if each X-block is contained in a Y-block. That is, we can obtain Y from X by collapsing X-blocks.

When we deal with coarsening in these notes, we always work within the collection of A-partitions. Thus, if X is an A-partition then we cannot collapse distinct nonfree blocks together, because this would violate the condition that each block contains at most one element of A. For example, the only coarsening of the partition P in Example 1.2 is P itself, because P has no free blocks. More generally, no element of (ω)^0_A can be properly coarsened.

Example 1.4. Let Q be as in Example 1.2. The A-partition R given by

{a, 0, 6, 12, 18, ...}, {b, 2, 8, 14, 20, ...}, {c, 4, 10, 16, 22, ...}, {1, 3, 9, 11}, {5, 7, 13, 15}, {17, 19, 25, 27}, {21, 23, 29, 31}, ...

is coarser than Q. We think of R as formed by collapsing the 0-th and 2-nd free blocks of Q, collapsing the 1-st and 3-rd free blocks of Q, collapsing the 4-th and 6-th free blocks of Q, collapsing the 5-th and 7-th free blocks of Q, and so on. Note that although no free blocks were collapsed into nonfree blocks in this example, that is also allowed by the definition of coarsening.

For X ∈ (ω)^ω_A, we write

(X)^ω_A = {Y ∈ (ω)^ω_A | Y is coarser than X}

(X)^k_A = {Y ∈ (ω)^k_A | Y is coarser than X}

Thus, in Example 1.4, R ∈ (Q)^ω_A since R is coarser than Q but still has infinitely many free blocks. As above, if A = ∅, then we drop the subscript and write (X)^ω and (X)^k.

There are natural topologies on (ω)^k and (ω)^k_A, which will be defined later and which give rise to the collection of Borel subsets of these spaces. We can now state the Dual Ramsey Theorem and the Generalized Dual Ramsey Theorem.

Theorem 1.5 (Dual Ramsey Theorem).
For all k, l ≥ 1, if (ω)^k = C_0 ∪ C_1 ∪ ··· ∪ C_{l-1}, where each C_i is Borel, then there exists an X ∈ (ω)^ω such that (X)^k ⊆ C_i for some i < l.

Theorem 1.6 (Generalized Dual Ramsey Theorem). Let A be a finite (possibly empty) alphabet, l ≥ 1 and k ≥ 0 (if A is nonempty, otherwise k ≥ 1). If (ω)^k_A = C_0 ∪ C_1 ∪ ··· ∪ C_{l-1}, where each C_i is Borel, then there exists an X ∈ (ω)^ω_A such that (X)^k_A ⊆ C_i for some i < l.

The Dual Ramsey Theorem is the special case of the Generalized Dual Ramsey Theorem when A = ∅. The advantage of allowing nonempty alphabets comes in simplifying the induction in the proof.

Our goal is to prove the Generalized Dual Ramsey Theorem. This proof breaks into two pieces. We will first cover what Carlson calls the combinatorial core of the proof, which is a Ramsey-style statement about infinite variable words. Next we will give the

topological half of the proof. Finally, I will explain why we cannot cross out the word Borel from the statement of the theorem.

For the remainder of this section and for the next section, we assume that A is a nonempty finite alphabet. We fix a countable collection of variables {x_i | i ∈ ω} disjoint from A and ω.

Definition 1.7. An infinite variable word (over A) is a function

X : ω → A ∪ {x_i | i ∈ ω}

such that

(i) for every variable x_j, there is an i such that X(i) = x_j, and

(ii) if j_0 < j_1, i_0 is the least index such that X(i_0) = x_{j_0} and i_1 is the least index such that X(i_1) = x_{j_1}, then i_0 < i_1.

That is, an infinite variable word is an ω-sequence in which each variable appears and the first occurrence of x_j comes before the first occurrence of x_{j+1}. (Note that x_j can appear after x_{j+1} as well.)

Example 1.8. For the purposes of examples, I will work with A = {a, b, c}. The word

bbx_0cx_0x_1ax_0ccbx_2x_3x_0ca

is the beginning of an infinite variable word, while the word

x_0aabx_2cx_1c

is not.

If X is an infinite variable word, then X can be thought of as an element of (ω)^ω_A by associating to X the partition whose blocks are

B_a = {a} ∪ {n ∈ ω | X(n) = a} for each a ∈ A

F_j = {n ∈ ω | X(n) = x_j} for each variable x_j

That is, the B_a blocks are the nonfree blocks and F_j is the j-th free block. As above, we count the free blocks starting with the 0-th free block and we order the free blocks by their least element. Thus, the restriction that the first occurrence of x_i comes before the first occurrence of x_{i+1} guarantees that the least element of F_i is less than the least element of F_{i+1}.

Example 1.9. For the infinite variable word starting

aabx_0cx_0x_1ax_0x_1x_2

we have that the blocks containing elements of A are {a, 0, 1, 7, ...}, {b, 2, ...} and {c, 4, ...}, and there are free blocks {3, 5, 8, ...}, {6, 9, ...} and {10, ...}.

Similarly, given an A-partition P ∈ (ω)^ω_A, there is an associated infinite variable word X given by X(n) = a if n is in the same block as a, and X(n) = x_j if n is in the j-th free block.

Example 1.10. The infinite variable word associated with the A-partition Q from Example 1.2 is

ax_0bx_0cx_1ax_1bx_2cx_2ax_3bx_3cx_4ax_4...

In this example, we have the special property that all the occurrences of x_i come before the first occurrence of x_{i+1}. This property is not required, but it will turn out to be useful to us later.

Because we can move back and forth between A-partitions and infinite variable words, we can work with either formalism. For the next couple of sections, we will work mainly with infinite variable words, but when we return to the Generalized Dual Ramsey Theorem later, it will be helpful to switch between perspectives. Because of this connection, we use (ω)^ω_A to denote the set of infinite variable words as well as the set of A-partitions. If X and Y are infinite variable words, we say Y is coarser than X if the A-partition corresponding to Y is coarser than the A-partition corresponding to X. Therefore, the notation (X)^ω_A extends to infinite variable words as well.

Example 1.11. We can think of the process of coarsening an infinite variable word in terms of substituting elements of A and variables in for other variables. For example, suppose

X = bbx_0ax_0x_1x_0bcbax_2x_3x_4bx_0x_2bb...

To coarsen X, we first decide whether the free block containing 2 (i.e. the block represented by the variable x_0) will be collapsed into one of the nonfree blocks or whether it will remain free. Suppose we decide to collapse it into the block containing a. This corresponds to substituting a in for x_0 to get

bbaaax_1abcbax_2x_3x_4bax_2bb...

We next decide what to do with the free block containing 5 (i.e. the block represented by x_1). We could again collapse this into a nonfree block or we could choose for it to remain free.
Suppose we decide to keep this block free. In that case, since it is now the 0-th free block, we replace x_1 by x_0 to get

bbaaax_0abcbax_2x_3x_4bax_2bb...

Next, we decide what to do with the block represented by x_2. Now we have three options. We can collapse this block into a nonfree block (which corresponds to substituting an element of A in for x_2), or we can collapse it into the 0-th free block (which corresponds to substituting x_0 in for x_2), or we can let it remain free and disjoint from the 0-th free block. In the last case, since it is now the 1-st free block, we substitute x_1 in for x_2.

Continuing in this manner, we can coarsen the original infinite variable word X. As long as all the variables occur at the end of this (potentially infinite) process, we will have coarsened X to another infinite variable word.
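The substitute-and-renumber procedure of Example 1.11 can be sketched as follows. The final choices below (collapse x_2 into the block of x_1, keep x_3 and x_4 free) are one possible completion of the example, chosen by me; the group labels "g0", "g1", ... marking variables that remain free are likewise an illustrative device.

```python
# Coarsen a finite variable word (Example 1.11): each variable is sent
# either to a letter of A or to a group label ("g0", "g1", ...) marking
# variables that merge into one free block; surviving groups are then
# renumbered x0, x1, ... in order of first occurrence. (Illustrative sketch.)

def coarsen(word, subst):
    targets = [subst.get(sym, sym) for sym in word]  # letters map to themselves
    numbering = {}
    out = []
    for sym in targets:
        if sym.startswith("g"):                      # still free: renumber
            if sym not in numbering:
                numbering[sym] = "x" + str(len(numbering))
            out.append(numbering[sym])
        else:
            out.append(sym)
    return "".join(out)

X = ["b", "b", "x0", "a", "x0", "x1", "x0", "b", "c", "b",
     "a", "x2", "x3", "x4", "b", "x0", "x2", "b", "b"]
# x0 -> a, keep x1 free, merge x2 into x1's block, keep x3 and x4 free:
print(coarsen(X, {"x0": "a", "x1": "g0", "x2": "g0", "x3": "g1", "x4": "g2"}))
# bbaaax0abcbax0x1x2bax0bb
```

Renumbering by first occurrence is exactly the "careful to renumber the variables according to the order in which they appear" step from the text.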

Of course, the formalism of words also works for A-partitions with k many free blocks. The only change is that we think of our variable words as being over A ∪ {x_i | i < k}. That is, rather than using infinitely many variables, we only use k many variables. In this context, the A-partitions with no free blocks correspond to elements of A^ω.

We will also be interested in finite initial segments of these infinite variable words, or in the context of A-partitions, A-partitions of initial segments of ω. An A-segment is a finite word s over A ∪ {x_j | j ∈ ω} which is an initial segment of some element of (ω)^ω_A. If n = length(s), then s can be thought of as an A-partition of A ∪ {0, 1, ..., n-1}, and conversely, any A-partition of A ∪ {0, 1, ..., n-1} corresponds to an A-segment. We write #(s) for the number of variables appearing in s (or equivalently, the number of free blocks in the corresponding partition).

If s and t are A-segments and X ∈ (ω)^ω_A, then we write s ⊑ t (or s ⊑ X) if s is an initial segment of t (or of X, respectively). We write s ≤ t to mean that s and t have the same length and that the A-partition corresponding to s is coarser than the A-partition corresponding to t. That is, we can obtain the A-partition corresponding to s by collapsing some of the blocks in the A-partition corresponding to t. As finite variable words, this means that we can obtain the word s from the word t by doing a finite number of substitutions of either elements of A for variables or variables for other variables, being careful to renumber the variables according to the order in which they appear.

Example 1.12. Let s = abbx_0bcx_0x_0bx_1 and t = ax_0bx_1x_0cx_2x_1x_0x_3. Then s corresponds to the partition of A ∪ {0, ..., 9} given by

{a, 0}, {b, 1, 2, 4, 8}, {c, 5}, {3, 6, 7}, {9}

and t corresponds to the partition

{a, 0}, {b, 2}, {c, 5}, {1, 4, 8}, {3, 7}, {6}, {9}

By collapsing {b, 2} with {1, 4, 8} and collapsing {3, 7} with {6}, we can get from the t partition to the s partition. Therefore, s ≤ t.
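The partition-level check in Example 1.12 (every t-block is contained in some s-block) can be carried out directly; the encoding below is my own illustrative sketch.

```python
# Definition 1.3 applied to the finite partitions of Example 1.12:
# s is coarser than t if every t-block is contained in an s-block.

def is_coarser(s_blocks, t_blocks):
    return all(any(tb <= sb for sb in s_blocks) for tb in t_blocks)

s_blocks = [{"a", 0}, {"b", 1, 2, 4, 8}, {"c", 5}, {3, 6, 7}, {9}]
t_blocks = [{"a", 0}, {"b", 2}, {"c", 5}, {1, 4, 8}, {3, 7}, {6}, {9}]
print(is_coarser(s_blocks, t_blocks))  # True
print(is_coarser(t_blocks, s_blocks))  # False
```

The second call is False because coarsening is not symmetric: {b, 1, 2, 4, 8} is contained in no single t-block.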
Alternately, we can start with t

ax_0bx_1x_0cx_2x_1x_0x_3

and substitute b for x_0 (since the 0-th free block is collapsed into the nonfree block containing b) to get

abbx_1bcx_2x_1bx_3

then substitute x_0 for x_1 (since the 1-st free block in t remains free and hence is now the 0-th free block) to get

abbx_0bcx_2x_0bx_3

then substitute x_0 for x_2 (since the 2-nd free block in t remains free but is collapsed into what is now the 0-th free block) to get

abbx_0bcx_0x_0bx_3

and finally substitute x_1 for x_3 (since the last free block in t remains free and is not collapsed into another free block, and hence becomes the 1-st free block) to get

abbx_0bcx_0x_0bx_1

which is exactly s.

Similarly, we write s ≤ X if s ≤ X[length(s)], where X[n] denotes the initial segment of X of length n (i.e. the word X(0)X(1)···X(n-1)).

We extend our notation about coarsening from infinite variable words to A-segments as follows. Let (n)^k_A denote the set of all A-segments (finite words) s such that length(s) = n and #(s) = k. That is, (n)^k_A is the set of all A-segments (finite words) of length n in which only the variables x_0, ..., x_{k-1} appear. (Alternately, (n)^k_A is the set of all A-partitions of A ∪ {0, 1, ..., n-1} which have exactly k many free blocks.) If t ∈ (n)^k_A and l < k, then

(t)^l_A = {s ∈ (n)^l_A | s ≤ t}

That is, (t)^l_A is the set of all A-segments which can be obtained from t by a finite series of substitutions (and renumbering of variables) as above and which contain exactly the variables x_0, ..., x_{l-1}. In particular, (t)^0_A is the set of all substitution instances of t (i.e. elements of A^n which can be obtained by substituting an element of A for each variable in t). Also, (t)^1_A is the set of all A-segments obtained by first substituting elements of A in for some (but not all) of the variables in t and then replacing the remaining variables by x_0.

Example 1.13. Let A = {a, b} and t = x_0ax_1x_0b. (We use a smaller alphabet to keep the number of substitution instances smaller.) We obtain the elements of (t)^0_A by running through each possible combination of plugging in a and b for x_0 and x_1. That is, we plug in a for both x_0 and x_1, we plug in a for x_0 and b for x_1, and so on, to obtain

(t)^0_A = {aaaab, aabab, baabb, babbb}

We obtain (t)^1_A in a similar fashion except we have to leave one variable free (or one block free). Thus, we can plug in x_0 for x_1 (i.e. collapse the two free blocks into one), we can leave x_0 as a variable (i.e.
leave its block free) and plug in an element of A for x_1 (i.e. collapse its free block into a nonfree block), or we can plug in an element of A for x_0 (i.e. collapse its free block into a nonfree block) and plug in x_0 for x_1 (i.e. leave its block free, but renumber it as the 0-th free block) to obtain

(t)^1_A = {x_0ax_0x_0b, x_0aax_0b, x_0abx_0b, aax_0ab, bax_0bb}

We need one last notion before stating our Ramsey-style result. If s is an A-segment (viewed as a word), then s′ = s x_{#(s)}. That is, we take s and attach the first variable not appearing in s onto the end. When viewing s as an A-partition of A ∪ {0, 1, ..., length(s)-1}, this amounts to defining s′ as the partition of A ∪ {0, 1, ..., length(s)} obtained by taking the s-blocks and adding a new free block {length(s)} (i.e. the new free block contains only the number length(s)). In other words, s′ is the unique word such that length(s′) = length(s) + 1, s ⊑ s′ and #(s′) = #(s) + 1.
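The sets (t)^0_A and (t)^1_A of Example 1.13 can be enumerated mechanically; the function names and encoding below are mine, not from the notes.

```python
# Enumerate the substitution instances (t)^0_A and (t)^1_A of a finite
# variable word t, as in Example 1.13. (Illustrative sketch.)

from itertools import product

def variables(t):
    return sorted({s for s in t if s.startswith("x")}, key=lambda v: int(v[1:]))

def instances0(t, A):
    """(t)^0_A: substitute a letter for every variable."""
    vs = variables(t)
    out = set()
    for choice in product(sorted(A), repeat=len(vs)):
        sub = dict(zip(vs, choice))
        out.add("".join(sub.get(s, s) for s in t))
    return out

def instances1(t, A):
    """(t)^1_A: substitute letters for some (but not all) variables and
    replace the remaining variables by x0."""
    vs = variables(t)
    out = set()
    for choice in product(sorted(A) + ["x0"], repeat=len(vs)):
        if "x0" not in choice:
            continue                     # at least one free block must survive
        sub = dict(zip(vs, choice))
        out.add("".join(sub.get(s, s) for s in t))
    return out

t = ["x0", "a", "x1", "x0", "b"]
print(sorted(instances0(t, {"a", "b"})))
# ['aaaab', 'aabab', 'baabb', 'babbb']
print(sorted(instances1(t, {"a", "b"})))
# ['aax0ab', 'bax0bb', 'x0aax0b', 'x0abx0b', 'x0ax0x0b']
```

The filter `"x0" not in choice` is what enforces "some (but not all)": every variable not sent to a letter is collapsed into the single surviving free block x_0.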

Example 1.14. Consider the A-segment t from Example 1.13. As a finite word, t′ = x_0ax_1x_0bx_2. As a finite A-partition, t partitions {a, b} ∪ {0, 1, 2, 3, 4} into the blocks

{a, 1}, {b, 4}, {0, 3}, {2}

Therefore, t′ partitions {a, b} ∪ {0, 1, 2, 3, 4, 5} into the blocks

{a, 1}, {b, 4}, {0, 3}, {2}, {5}

If X ∈ (ω)^ω_A, then (X)^∗_A is the set of all A-segments s such that #(s) = 0 and s′ ≤ X. In other words, s ∈ (X)^∗_A if and only if there is an A-segment t such that t ⊑ X, the next element after the initial segment t in the infinite variable word X is the variable x_{#(t)} (i.e. the first variable not appearing in t), and s ∈ (t)^0_A. To give one more description, a string s ∈ A^{<ω} is in (X)^∗_A if and only if s is formed by cutting off X just before the first occurrence of a variable and substituting elements of A for all the variables in this initial segment.

Example 1.15. Suppose that A = {a, b} and X = abx_0bx_1x_0x_2···. We form (X)^∗_A as follows. First, we can cut off X just before the first occurrence of x_0 to get ab. Since there are no variables to substitute, we have ab ∈ (X)^∗_A. Next, we can cut off X just before the first occurrence of x_1 to get abx_0b. Plugging in a and b for x_0 gives us the strings abab, abbb ∈ (X)^∗_A. Next, we can cut off X just before the first occurrence of x_2 to get abx_0bx_1x_0. Plugging in all combinations of a and b for x_0 and x_1 gives us the strings ababaa, ababba, abbbab, abbbbb ∈ (X)^∗_A. Continuing in this manner, we have

(X)^∗_A = {ab, abab, abbb, ababaa, ababba, abbbab, abbbbb, ...}

One particular case of note is (ω)^∗_A. Here we regard ω as the A-partition {0}, {1}, ... (together with a singleton block {a} for each a ∈ A), or equivalently, as the word x_0x_1x_2···. It follows directly from the definitions that (ω)^∗_A = A^{<ω}. We can now state the first version of the combinatorial core theorem.

Theorem 1.16. Let A be a finite nonempty alphabet. If (ω)^∗_A = C_0 ∪ ··· ∪ C_{l-1}, then there exists an X ∈ (ω)^ω_A such that (X)^∗_A ⊆ C_i for some i < l.
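The enumeration in Example 1.15 follows a uniform pattern: cut just before each first occurrence of a variable, then substitute out the variables. A sketch (encoding as in the earlier snippets; all names are mine):

```python
# Compute the elements of (X)^*_A arising from cuts inside a finite prefix
# of X, as in Example 1.15. (Illustrative sketch.)

from itertools import product

def star(prefix, A):
    out = set()
    seen = set()
    for n, sym in enumerate(prefix):
        if sym.startswith("x") and sym not in seen:
            seen.add(sym)
            t = prefix[:n]               # cut just before a new variable
            vs = sorted({s for s in t if s.startswith("x")},
                        key=lambda v: int(v[1:]))
            for choice in product(sorted(A), repeat=len(vs)):
                sub = dict(zip(vs, choice))
                out.add("".join(sub.get(s, s) for s in t))
    return out

X = ["a", "b", "x0", "b", "x1", "x0", "x2"]   # a prefix of X from Example 1.15
print(sorted(star(X, {"a", "b"}), key=lambda w: (len(w), w)))
# ['ab', 'abab', 'abbb', 'ababaa', 'ababba', 'abbbab', 'abbbbb']
```

Longer prefixes of X contribute further (longer) strings, matching the "..." in the example.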
Before proving this theorem, we state a corollary which is the real statement we will need later in the proof of the Generalized Dual Ramsey Theorem.

Corollary 1.17. Let A be a finite nonempty alphabet. If Y ∈ (ω)^ω_A and (Y)^∗_A = C_0 ∪ ··· ∪ C_{l-1}, then there exists Z ∈ (Y)^ω_A such that (Z)^∗_A ⊆ C_i for some i < l.

Proof. Fix Y ∈ (ω)^ω_A. There is a canonical bijection from (ω)^ω_A onto (Y)^ω_A. To see why, it is easiest to think in terms of A-partitions, viewing ω as the A-partition {a_0}, ..., {a_n}, {0}, {1}, ... where A = {a_0, ..., a_n}. That is, both ω and Y consist of a single block containing each a ∈ A and then infinitely many free blocks, which we can think of as ordered by comparing their least elements. An element W ∈ (ω)^ω_A is a coarsening of the A-partition ω which still contains

infinitely many free blocks. That is, we form W by performing a sequence of choices about collapsing the free blocks. First, we decide whether to collapse the 0-th free block with a nonfree block or allow it to remain free. Second, we decide whether to collapse the 1-st free block with a nonfree block, collapse it with the 0-th free block (if it remained free after the first choice), or not collapse it with any previous block. Continuing in this manner (assuming we retain infinitely many free blocks in the end) yields W. The image of W under the bijection just performs the same sequence of choices on the corresponding free blocks in Y.

How does this work at the level of words? Suppose we want to find the image of the following element of (ω)^ω_A, where A = {a, b, c}:

abx_0x_1x_0cax_1x_0x_2···

We can think of this word as being obtained from x_0x_1x_2··· by the following sequence of substitutions: x_0 ↦ a (i.e. substitute a in for x_0), x_1 ↦ b, x_2 ↦ x_0, x_3 ↦ x_1, x_4 ↦ x_0 and so on. Perform the same sequence of substitutions on Y to obtain the image of this word in (Y)^ω_A.

The same idea works at the level of (ω)^∗_A and (Y)^∗_A. That is, an element of (ω)^∗_A is formed by chopping off the infinite word x_0x_1··· just before a new variable and substituting elements of A for all the variables in this initial segment. We map this element of (ω)^∗_A to the element of (Y)^∗_A formed by chopping off Y just before the same variable and performing the same substitutions. This gives a bijection from (ω)^∗_A onto (Y)^∗_A.

Fix the coloring of (Y)^∗_A. Color (ω)^∗_A = C_0 ∪ ··· ∪ C_{l-1} by assigning each s ∈ (ω)^∗_A the color C_i if its image in (Y)^∗_A is colored C_i. Applying Theorem 1.16, there is an X ∈ (ω)^ω_A and an i < l such that (X)^∗_A ⊆ C_i. Let Z ∈ (Y)^ω_A be the image of X under the first bijection. It follows that (Z)^∗_A ⊆ C_i as required.

We make one more simplification (actually a strengthening) of Theorem 1.16 before giving the proof.

Definition 1.18.
An infinite variable word X ∈ (ω)^ω_A is an ordered infinite variable word if for all i ∈ ω, every occurrence of x_i comes before the first occurrence of x_{i+1}. (Note that in particular, each variable occurs only finitely often.) We let [ω]^ω_A denote the set of all ordered infinite variable words.

Some of our other notation will be recast in terms of ordered words as follows. For X ∈ [ω]^ω_A, we let [X]^ω_A denote the set of all Y ∈ [ω]^ω_A which are coarser than X. That is, [X]^ω_A = (X)^ω_A ∩ [ω]^ω_A. An ordered A-segment is an A-segment s such that s ⊑ X for some X ∈ [ω]^ω_A. That is, s satisfies the same ordering restriction on its variables.

Example 1.19. Consider the A-partitions Q and R from Example 1.4. The infinite variable word version of Q is

Q = ax_0bx_0cx_1ax_1bx_2cx_2ax_3bx_3cx_4ax_4···

which is an element of [ω]^ω_A. The infinite variable word version of R is

R = ax_0bx_0cx_1ax_1bx_0cx_0ax_1bx_1cx_2ax_2bx_3cx_3ax_2bx_2cx_3ax_3···

which is not an element of [ω]^ω_A. (That is, to get R, we collapsed the 0-th and 2-nd free blocks of Q and the 1-st and 3-rd free blocks of Q, thus creating an occurrence of x_0 after an occurrence of x_1.) Thus, despite the fact that R ∈ (Q)^ω_A, we do not have R ∈ [Q]^ω_A. On the other hand, the infinite variable word S given by

S = ax_0bx_0cx_0ax_0bx_1cx_1ax_1bx_1cx_2ax_2bx_2cx_2···

which corresponds to the A-partition

{a, 0, 6, 12, 18, ...}, {b, 2, 8, 14, 20, ...}, {c, 4, 10, 16, 22, ...}, {1, 3, 5, 7}, {9, 11, 13, 15}, {17, 19, 21, 23}, ...

is an element of [Q]^ω_A. Note that S is formed by collapsing the 0-th and 1-st free blocks of Q, collapsing the 2-nd and 3-rd free blocks of Q, collapsing the 4-th and 5-th free blocks of Q, and so on.

Because of the nature of the proofs to come, it is more convenient to change some of the notation in the ordered case. This is unfortunate, but I will follow the notation in the Carlson and Simpson paper. For m ∈ ω, we let [ω]^m_A denote the set of all ordered A-segments s such that #(s) = m. In particular, [ω]^0_A = (ω)^∗_A = A^{<ω} since there are no variables in these sets and hence the ordering condition does not apply. For X ∈ [ω]^ω_A, we let [X]^m_A denote the set of all ordered A-segments s ∈ [ω]^m_A such that s′ ≤ X. That is, s ∈ [X]^m_A if there is an initial segment t ⊑ X such that the next element in the word X after t is a new variable, s ∈ (t)^m_A and s is ordered. In particular, [X]^0_A = (X)^∗_A because we have substituted out all the variables in both of these sets, so the ordering condition on the variables does not apply. We let

[ω]^{<ω}_A = ⋃_{m∈ω} [ω]^m_A and [X]^{<ω}_A = ⋃_{m∈ω} [X]^m_A

The real combinatorial core theorem we will prove is the following strengthening of Theorem 1.16. (To see why Theorem 1.16 follows immediately from Theorem 1.20, recall from above that [ω]^0_A = (ω)^∗_A = A^{<ω} and [X]^0_A = (X)^∗_A. Thus, we are merely strengthening the conclusion to require that the infinite variable word X is ordered.)

Theorem 1.20 (Combinatorial Core). Let A be a finite nonempty alphabet.
If [ω]^0_A = C_0 ∪ ··· ∪ C_{l-1}, then there exists X ∈ [ω]^ω_A such that [X]^0_A ⊆ C_i for some i < l.
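The ordering condition of Definition 1.18 is easy to test on finite prefixes. Applied to prefixes of Q and R from Example 1.19, it confirms that Q is ordered while R is not (encoding as in the earlier sketches; names are mine):

```python
# Check whether a finite variable word is an ordered A-segment: every
# occurrence of x_i must come before the first occurrence of x_{i+1}
# (Definition 1.18). (Illustrative sketch.)

def is_ordered(word):
    first_seen = {}   # variable index -> first position
    last_seen = {}    # variable index -> last position so far
    for n, sym in enumerate(word):
        if sym.startswith("x"):
            i = int(sym[1:])
            first_seen.setdefault(i, n)
            last_seen[i] = n
    return all(last_seen[i] < first_seen[i + 1]
               for i in last_seen if i + 1 in first_seen)

Q = ["a", "x0", "b", "x0", "c", "x1", "a", "x1", "b", "x2", "c", "x2"]
R = ["a", "x0", "b", "x0", "c", "x1", "a", "x1", "b", "x0", "c", "x0"]
print(is_ordered(Q), is_ordered(R))   # True False
```

In the prefix of R, the variable x_0 reappears at position 9, after x_1 first occurs at position 5, which is exactly the violation described in Example 1.19.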

2 Combinatorial core

The key notion for proving Theorem 1.20 is density.

Definition 2.1. For X ∈ [ω]^ω_A and D ⊆ A^{<ω}, we say D is dense in [X]^0_A if [Y]^0_A ∩ D ≠ ∅ for all Y ∈ [X]^ω_A. That is, no matter how you coarsen X to an ordered infinite variable word Y, there is some variable such that if you cut off Y just before the first occurrence of this variable, it is possible to perform a substitution instance on the remaining variables in this initial segment of Y to get a string in D.

You should think of D being dense as a largeness property. The key lemma we will establish is the following.

Lemma 2.2. If D is dense in [X]^0_A, then there exists a W ∈ [X]^ω_A such that [W]^0_A ⊆ D.

Before proving Lemma 2.2, we show how to use it to establish Theorem 1.20.

Theorem 2.3 (Combinatorial Core). Let A be a finite nonempty alphabet. If A^{<ω} = [ω]^0_A = C_0 ∪ ··· ∪ C_{l-1}, then there exists X ∈ [ω]^ω_A such that [X]^0_A ⊆ C_i for some i < l.

Proof. We proceed by induction on l. The case l = 1 is trivial. Consider the case when l = 2. In this case, we have a coloring [ω]^0_A = C_0 ∪ C_1. Since [ω]^0_A = A^{<ω}, we know C_0 ⊆ A^{<ω}. We break into two cases. First, suppose that C_0 is dense in [ω]^0_A. Then by Lemma 2.2, there is a W ∈ [ω]^ω_A such that [W]^0_A ⊆ C_0 and we are done. Otherwise, C_0 is not dense in [ω]^0_A. By definition, this means that there is a Y ∈ [ω]^ω_A such that [Y]^0_A ∩ C_0 = ∅. But then [Y]^0_A ⊆ C_1 and we are done. Therefore, we have established the case when l = 2.

Suppose l > 2 and [ω]^0_A = C_0 ∪ ··· ∪ C_{l-1}. As above, if C_0 is dense in [ω]^0_A, then we are done by Lemma 2.2. If C_0 is not dense in [ω]^0_A, then there is a Y ∈ [ω]^ω_A such that [Y]^0_A ⊆ C_1 ∪ ··· ∪ C_{l-1}. However, as in the proof of Corollary 1.17, there is a bijection between [ω]^ω_A and [Y]^ω_A. Therefore, we can shift this coloring of [Y]^0_A (with one fewer color) to [ω]^0_A, apply the inductive hypothesis and pull the resulting ordered variable word back into [Y]^ω_A.

For the remainder of this section, we work toward a proof of Lemma 2.2. At a key point, we will use one result from standard Ramsey theory, the Hales-Jewett Theorem.
Let A^n denote the set of all length n words over A. For s ∈ A^n, we denote the element of A in the i-th position of s (for 0 ≤ i < n) by s(i). Assume that A = {a_0, ..., a_{k-1}}. A line in A^n is a sequence s_0, ..., s_{k-1} of elements of A^n such that for each coordinate 0 ≤ i < n, either

(1) s_j(i) = s_{j′}(i) for all 0 ≤ j, j′ < k, or

(2) s_j(i) = a_j for all 0 ≤ j < k

and the situation in Condition (2) happens at least once. (Note that we are implicitly assuming there is a fixed order on the set A.)

Theorem 2.4 (Hales-Jewett Theorem). Let A be a finite nonempty alphabet. For every l, there is an n_0 such that for all n ≥ n_0, if A^n is l-colored, then there is a monochromatic line.

To translate this statement into our notation, recall that (n)^0_A = A^n. If t ∈ (n)^1_A, then t is a length n word over A and a single variable. Therefore, (t)^0_A is the set of size |A| whose elements are obtained by substituting an element of A in for the single variable in t. We can order this set (from an ordering of A = {a_0, ..., a_{k-1}}) as s_0, ..., s_{k-1} where s_i is the result of substituting a_i in for the variable in t. Therefore, a line in A^n has the form (t)^0_A for some t ∈ (n)^1_A. We can state the Hales-Jewett Theorem as follows.

Theorem 2.5. Let A be a finite nonempty alphabet. For each l, there is an n_0 such that for all n ≥ n_0, if (n)^0_A = C_0 ∪ ··· ∪ C_{l-1}, then there exists a t ∈ (n)^1_A such that (t)^0_A ⊆ C_i for some i < l.

We need one further restatement of the Hales-Jewett Theorem to get to the version we will apply.

Theorem 2.6. Let A be a finite nonempty alphabet. For each l, there is an n such that for all u ∈ [ω]^n_A and any coloring (u)^0_A = C_0 ∪ ··· ∪ C_{l-1}, there exists a v ∈ (u)^1_A such that (v)^0_A ⊆ C_i for some i < l.

Proof. By Theorem 2.5, let n be large enough that for any coloring (n)^0_A = C_0 ∪ ··· ∪ C_{l-1}, there exists a t ∈ (n)^1_A such that (t)^0_A ⊆ C_i for some i < l. If u ∈ [ω]^n_A, then u looks like the word x_0x_1···x_{n-1} with extra variables and elements of A interspersed, subject to the restriction that u is an ordered A-segment. That is, if n = 3, then u could be the word x_0abx_0x_1x_1bbx_2ax_2. There is an obvious bijection between (u)^0_A and (n)^0_A which allows us to shift the coloring of (u)^0_A to a coloring of (n)^0_A. (That is, any element of (u)^0_A is obtained by substituting elements of A for the variables x_0, ..., x_{n-1} in u.
Apply the same substitutions to the word x_0 x_1 ... x_{n-1} to obtain the image in A^n_0.) Apply Theorem 2.5 to the coloring of A^n_0 to get t ∈ A^n_1 such that (t)_0 is monochromatic. Since t is formed from the word x_0 x_1 ... x_{n-1} by substituting in elements of A for some (but not all) of the variables and substituting x_0 in for the remaining variables, we can perform the same sequence of substitutions on u to get v ∈ (u)_1. Since (t)_0 is monochromatic, it follows that (v)_0 is monochromatic.

For the remainder of this section, let A be an arbitrary finite nonempty alphabet, X be an arbitrary element of A^ω_ω and D be an arbitrary subset of A^{<ω} which is dense in (X)_0. The following lemmas hold for any such objects.

Lemma 2.7. There is an s ∈ (X)_{<ω} such that (t)_0 ∩ D ≠ ∅ for all t ∈ (X)_{<ω} such that s ⊑ t.

Proof. First, we claim that if (X)_0 ⊆ D, then the lemma follows. Suppose that (X)_0 ⊆ D. Consider any t ∈ (X)_{<ω}. The set (t)_0 contains all the strings formed by substituting elements of A in for all the variables in t. By the definition of (X)_0, (t)_0 ⊆ (X)_0. Since (X)_0 ⊆ D, we have (t)_0 ⊆ D and hence (t)_0 ∩ D ≠ ∅ for all t ∈ (X)_{<ω}. In this case, we can let s ⊑ X be the finite word obtained by cutting off X before the first occurrence of x_0, and the lemma follows.

To proceed in general, we assume for a contradiction that the conclusion of the lemma fails. By the first paragraph, we know that there is a t_0 ∈ (X)_0 such that t_0 ∉ D. Fix t_0. Notice that (t_0)_0 = {t_0} since there are no variables to substitute in for. Therefore, (t_0)_0 ∩ D = ∅.

Assume we have been given t_m ∈ (X)_m such that (t_m)_0 ∩ D = ∅. Recall that t_m is formed by cutting off X just before the first occurrence of a variable x_i and possibly refining it (by plugging in elements of A for variables or variables for other variables in a manner consistent with maintaining an ordered A-segment) to have only m variables left. Fix s_{m+1} ∈ (X)_{m+1} such that t_m ⊑ s_{m+1}. (For example, extend t_m by adding on the part of X beginning with the first occurrence of x_i and stopping just before the first occurrence of x_{i+1}. Renaming the variable x_i by x_m yields such a string s_{m+1}.) Since the conclusion of the lemma fails with s_{m+1}, there is a t' ∈ (X)_{<ω} such that s_{m+1} ⊑ t' and (t')_0 ∩ D = ∅. Let t_{m+1} be the result of substituting x_m in for any variable x_j in t' with index j > m (if such variables occur in t'). Thus we have t_{m+1} ∈ (X)_{m+1} such that s_{m+1} ⊑ t_{m+1} and (t_{m+1})_0 ⊆ (t')_0. Hence, (t_{m+1})_0 ∩ D = ∅.

Altogether, we have a sequence of words t_0 ⊑ t_1 ⊑ t_2 ⊑ ... such that t_m ∈ (X)_m and (t_m)_0 ∩ D = ∅. Let Y be the limit of these words. That is, Y ∈ (X)_ω and t_m ⊑ Y for each m. We claim that (Y)_0 = ⋃_m (t_m)_0. Consider s ∈ (Y)_0.
The string s is formed by cutting off Y immediately before the first occurrence of some variable x_m and substituting elements of A for each variable in the resulting initial segment of Y. But cutting off Y before the variable x_m yields t_m, and hence s ∈ (t_m)_0. Conversely, since each t_m ⊑ Y, we have that each s ∈ (t_m)_0 is in (Y)_0. Thus, (Y)_0 = ⋃_m (t_m)_0 and hence (Y)_0 ∩ D = ∅, contradicting the fact that D is dense in (X)_0.

For the next lemma, we need to define a method of concatenating two A-segments. If s and t are A-segments (viewed as finite words), then s ⋆ t is the finite word obtained by concatenating s and t and renumbering the variables in t by replacing x_i by x_{i+#(s)}. That is, we concatenate the finite words and renumber the variables so that the variables in s and the renumbered variables in t do not overlap.

Example 2.8. If s = a x_0 x_1 b c x_1 and t = b x_0 x_1 x_1 c x_2, then s ⋆ t = a x_0 x_1 b c x_1 b x_2 x_3 x_3 c x_4.

Lemma 2.9. There is a t ∈ (X)_1 such that (t)_0 ⊆ D.

Proof. Fix s as in Lemma 2.7. Let l = |(s)_0| and let (s)_0 = {s_0, ..., s_{l-1}}. By Theorem 2.6, let n be large enough so that for every u ∈ A^{<ω}_n and any coloring (u)_0 = C_0 ∪ ... ∪ C_{l-1}, there exists a v ∈ (u)_1 such that (v)_0 ⊆ C_i for some i < l. Pick u ∈ A^{<ω}_n such that s ⋆ u ∈ (X)_{<ω}. That is, let u be the part of X after s extending through the next n many variables and stopping immediately before the (n + 1)-st variable.
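As an aside, the substitution and ⋆ operations on finite variable words are easy to experiment with. The following Python sketch uses our own token encoding (letters of A as one-character strings, the variable x_i as the pair ("x", i)); the function names are illustrative, not notation from the notes.

```python
# A finite variable word is a list whose entries are letters of A
# (one-character strings) or variables encoded as ("x", i).

from itertools import product

def is_var(tok):
    return isinstance(tok, tuple)

def num_vars(word):
    """#(word): the number of distinct variables occurring in the word."""
    return len({tok for tok in word if is_var(tok)})

def star(s, t):
    """s * t: concatenate s and t, renumbering each x_i in t to x_{i + #(s)}."""
    shift = num_vars(s)
    return s + [("x", tok[1] + shift) if is_var(tok) else tok for tok in t]

def substitute(t, assignment):
    """Replace each variable x_i by the letter assignment[i]; a full
    assignment yields one element of (t)_0."""
    return [assignment[tok[1]] if is_var(tok) else tok for tok in t]

def line(t, alphabet):
    """(t)_0: all full substitution instances of t, ordered by assignment."""
    return [substitute(t, assign)
            for assign in product(alphabet, repeat=num_vars(t))]

# Example 2.8: s = a x0 x1 b c x1 and t = b x0 x1 x1 c x2.
s = ["a", ("x", 0), ("x", 1), "b", "c", ("x", 1)]
t = ["b", ("x", 0), ("x", 1), ("x", 1), "c", ("x", 2)]
```

For a one-variable word t, line(t, A) enumerates exactly a Hales-Jewett line (t)_0 in the order s_0, ..., s_{k-1} described after Theorem 2.4.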

Consider an element w ∈ (u)_0. Then s ⋆ w is a refinement of s ⋆ u obtained by substituting elements of A for all the variables in the u part of s ⋆ u and leaving the variables in the s part alone. Therefore, s ⋆ w ∈ (X)_{<ω} and s ⊑ s ⋆ w. By Lemma 2.7, (s ⋆ w)_0 ∩ D ≠ ∅. Therefore, there is a refinement s_i of s with #(s_i) = 0, obtained by substituting elements of A for all the variables in s, such that s_i ⋆ w ∈ D.

We color (u)_0 as follows. Let

C_i = {w ∈ (u)_0 | s_i ⋆ w ∈ D}

By the previous paragraph, (u)_0 = C_0 ∪ ... ∪ C_{l-1}. By the choice of n, there is a v ∈ (u)_1 and i < l such that (v)_0 ⊆ C_i. In other words, (s_i ⋆ v)_0 ⊆ D. Set t = s_i ⋆ v. Then t ∈ (X)_1 and (t)_0 ⊆ D.

For the next lemma, we extend our notation for s ∈ A^{<ω} and Y ∈ A^ω_ω by defining s ⋆ Y to be the infinite ordered variable word formed by concatenating s and Y and renumbering all the variables in Y by sending x_i to x_{i+#(s)}.

Example 2.10. If s = a x_0 x_0 b c x_1 and Y = a b x_0 x_1 b c b x_1 x_2 ..., then s ⋆ Y = a x_0 x_0 b c x_1 a b x_2 x_3 b c b x_3 x_4 ...

Lemma 2.11. There exists s ∈ A^{<ω}_1 and Y ∈ A^ω_ω such that s ⋆ Y ∈ (X)_ω and the set {t ∈ A^{<ω} | (s ⋆ t)_0 ⊆ D} is dense in (Y)_0.

Proof. Assume not and we derive a contradiction. By our assumption, for all s ⋆ Y ∈ (X)_ω (with s ∈ A^{<ω}_1 and Y ∈ A^ω_ω), the set {t | (s ⋆ t)_0 ⊆ D} is not dense in (Y)_0. That is, there is a Z ∈ (Y)_ω such that

(Z)_0 ∩ {t | (s ⋆ t)_0 ⊆ D} = ∅

In other words, for all t ∈ (Z)_0, (s ⋆ t)_0 ⊈ D. Or, stated one more way, for every t ∈ (Z)_0, there is a substitution instance s' of s (substituting for the single variable in s) such that s' ⋆ t ∉ D.

To derive a contradiction, we will define W ∈ (X)_ω such that for all u ∈ (W)_1, (u)_0 ⊈ D. This contradicts Lemma 2.9. (Note that if D is dense in (X)_0 and W ∈ (X)_ω, then D is also dense in (W)_0.)

We construct W by constructing a sequence s_0, s_1, ... such that each s_i ∈ A^{<ω}_1 and setting W = s_0 ⋆ s_1 ⋆ s_2 ⋆ .... Fix s_0 ∈ A^{<ω}_1 and Y_0 such that X = s_0 ⋆ Y_0. That is, let s_0 be the initial segment of X including all instances of x_0 and stopping right before the first instance of x_1.
Let Y_0 be the remainder of X with the variables renumbered by replacing x_i by x_{i-1}, so that X = s_0 ⋆ Y_0. In the future, we will not mention the renumbering of variables and leave it implied by the context. Let Z_0 be such that Z_0 ∈ (Y_0)_ω and (s_0 ⋆ t)_0 ⊈ D for all t ∈ (Z_0)_0. To continue the induction, fix s_1 ∈ A^{<ω}_1 and Y_1 ∈ (Z_0)_ω such that Z_0 = s_1 ⋆ Y_1. Notice that s_0 ⋆ s_1 ⋆ Y_1 ∈ (X)_ω.

Consider the finite set (s_0 ⋆ s_1)_1 and list this set as v_0, ..., v_k. That is, s_0 ⋆ s_1 contains the variables x_0 and x_1, so there are finitely many ways to perform substitutions to reduce this word to having only one variable. We define a sequence V_0, ..., V_k such that each V_i ∈ (Y_1)_ω as follows. Since v_0 ⋆ Y_1 ∈ (X)_ω and v_0 ∈ A^{<ω}_1, there is (by the assumption in the first paragraph of this proof) a V_0 ∈ (Y_1)_ω such that (v_0 ⋆ t)_0 ⊈ D for all t ∈ (V_0)_0. By induction, assume that we have defined V_i ∈ (Y_1)_ω such that (v_j ⋆ t)_0 ⊈ D for all j ≤ i and all t ∈ (V_i)_0. Since v_{i+1} ⋆ V_i ∈ (X)_ω and v_{i+1} ∈ A^{<ω}_1, there is a V_{i+1} ∈ (V_i)_ω such that (v_{i+1} ⋆ t)_0 ⊈ D for all t ∈ (V_{i+1})_0. Note the following facts. V_{i+1} ∈ (Y_1)_ω since V_{i+1} ∈ (V_i)_ω and V_i ∈ (Y_1)_ω. For any t ∈ (V_{i+1})_0, we have t ∈ (V_i)_0 since V_{i+1} ∈ (V_i)_ω. Fix t ∈ (V_{i+1})_0 and j ≤ i. Since t ∈ (V_i)_0, we have by induction that (v_j ⋆ t)_0 ⊈ D. We have now established that V_{i+1} ∈ (Y_1)_ω and that for all j ≤ i + 1 and all t ∈ (V_{i+1})_0, (v_j ⋆ t)_0 ⊈ D. Therefore, we have established the required induction hypothesis to continue defining the sequence V_0, ..., V_k.

Set Z_1 = V_k. Notice that Z_1 ∈ (Y_1)_ω and for all s ∈ (s_0 ⋆ s_1)_1, we have (s ⋆ t)_0 ⊈ D for all t ∈ (Z_1)_0. Write Z_1 = s_2 ⋆ Y_2 where s_2 ∈ A^{<ω}_1. Since s_0 ⋆ s_1 ⋆ Y_1 ∈ (X)_ω and Z_1 ∈ (Y_1)_ω, we have that s_0 ⋆ s_1 ⋆ s_2 ⋆ Y_2 ∈ (X)_ω.

The induction continues in the same way. Suppose we have defined s_0, ..., s_n and Y_n so that s_0 ⋆ ... ⋆ s_n ⋆ Y_n ∈ (X)_ω. Let Z_n ∈ (Y_n)_ω be such that (s ⋆ t)_0 ⊈ D for all s ∈ (s_0 ⋆ ... ⋆ s_n)_1 and all t ∈ (Z_n)_0. Write Z_n = s_{n+1} ⋆ Y_{n+1} to continue the induction.

Let W = s_0 ⋆ s_1 ⋆ ... ∈ (X)_ω. Recall that our desired contradiction is to show that (u)_0 ⊈ D for all u ∈ (W)_1. Fix any u ∈ (W)_1. The finite word u is formed by cutting off W before the first occurrence of a variable x_{n+1} and performing substitutions to reduce this initial segment to having only one variable. That is, we substitute elements of A for some (but not all) of the variables in the initial segment and substitute x_0 for the remaining variables. Because u must contain a variable, we cannot obtain the initial segment of W by cutting off before x_0. Therefore, we can denote the variable used to obtain the initial segment of W by x_{n+1}.
It follows that u has the form u ∈ (s_0 ⋆ ... ⋆ s_n ⋆ t)_1 where t ∈ A^{<ω} is the initial segment of s_{n+1} formed by cutting off s_{n+1} just before the occurrence of the (only) variable in s_{n+1}. However, since s_{n+1} was defined such that Z_n = s_{n+1} ⋆ Y_{n+1}, we have that t ⊑ Z_n, and more specifically, t is the initial segment of Z_n formed by cutting off Z_n before the first occurrence of the first variable in Z_n. Therefore, t ∈ (Z_n)_0. But then our construction of Z_n implies that (u)_0 ⊈ D, giving the necessary contradiction.

Lemma 2.12. There exists s ∈ A^{<ω}_1 and Y ∈ A^ω_ω such that s ⋆ Y ∈ (X)_ω, the set {t ∈ A^{<ω} | (s ⋆ t)_0 ⊆ D} is dense in (Y)_0, and there is an r ∈ D such that r ⊑ s.

Proof. We apply Lemma 2.11 repeatedly to define three sequences: s_0, s_1, ... with s_i ∈ A^{<ω}_1; Y_0, Y_1, ... with Y_i ∈ A^ω_ω; and D_0, D_1, ... such that D_n ⊆ A^{<ω} is dense in (Y_n)_0. Let s_0 and Y_0 be as in the conclusion of Lemma 2.11. That is, s_0 ⋆ Y_0 ∈ (X)_ω and D_0 = {t | (s_0 ⋆ t)_0 ⊆ D} is dense in (Y_0)_0. Given Y_n and D_n (constructed inductively) such that D_n is dense in (Y_n)_0, apply Lemma 2.11 to obtain s_{n+1} and Y_{n+1} such that s_{n+1} ⋆ Y_{n+1} ∈ (Y_n)_ω and D_{n+1} = {t | (s_{n+1} ⋆ t)_0 ⊆ D_n} is dense in (Y_{n+1})_0. By induction on n, we have the following two properties.

s_0 ⋆ ... ⋆ s_n ⋆ Y_n ∈ (X)_ω

D_n = {t | (s_0 ⋆ ... ⋆ s_n ⋆ t)_0 ⊆ D}

Set W = s_0 ⋆ s_1 ⋆ ... ∈ (X)_ω. Since D is dense in (X)_0 and W ∈ (X)_ω, there is an r ∈ D such that r ∈ (W)_0. Fix n ∈ ω such that length(r) < length(s_0 ⋆ ... ⋆ s_n). Since r ∈ (W)_0, there is an s ∈ (s_0 ⋆ ... ⋆ s_n)_1 such that r ⊑ s. Fix s and set Y = Y_n. Since r ⊑ s, it remains to show the other items in the conclusion of the lemma. That s ⋆ Y ∈ (X)_ω follows since s_0 ⋆ ... ⋆ s_n ⋆ Y_n ∈ (X)_ω and we have both Y = Y_n and s ∈ (s_0 ⋆ ... ⋆ s_n)_1. We know that {t | (s_0 ⋆ ... ⋆ s_n ⋆ t)_0 ⊆ D} is dense in (Y)_0. Since s ∈ (s_0 ⋆ ... ⋆ s_n)_1, we have that (s ⋆ t)_0 ⊆ (s_0 ⋆ ... ⋆ s_n ⋆ t)_0 for every t. Therefore

{t | (s_0 ⋆ ... ⋆ s_n ⋆ t)_0 ⊆ D} ⊆ {t | (s ⋆ t)_0 ⊆ D}

so {t | (s ⋆ t)_0 ⊆ D} is dense in (Y)_0.

Finally, we can prove Lemma 2.2, which we restate below for convenience.

Lemma 2.13. If D is dense in (X)_0, then there is a W ∈ (X)_ω such that (W)_0 ⊆ D.

Proof. The proof is very similar to the proof of Lemma 2.12, except that we apply Lemma 2.12 at each step instead of Lemma 2.11. That is, we define sequences s_0, s_1, ... ∈ A^{<ω}_1; Y_0, Y_1, ... ∈ A^ω_ω; D_0, D_1, ... such that D_n is dense in (Y_n)_0; and r_0, r_1, ... such that r_i ⊑ s_i, r_0 ∈ D and r_{n+1} ∈ D_n. Let r_0, s_0 and Y_0 be as in the conclusion of Lemma 2.12. That is, r_0 ∈ D, s_0 ⋆ Y_0 ∈ (X)_ω and D_0 = {t | (s_0 ⋆ t)_0 ⊆ D} is dense in (Y_0)_0. Given Y_n and D_n (constructed inductively) such that D_n is dense in (Y_n)_0, apply Lemma 2.12 to obtain r_{n+1}, s_{n+1} and Y_{n+1} such that r_{n+1} ∈ D_n, r_{n+1} ⊑ s_{n+1}, s_{n+1} ⋆ Y_{n+1} ∈ (Y_n)_ω and D_{n+1} = {t | (s_{n+1} ⋆ t)_0 ⊆ D_n} is dense in (Y_{n+1})_0. By induction on n, we have

s_0 ⋆ ... ⋆ s_n ⋆ Y_n ∈ (X)_ω

D_n = {t | (s_0 ⋆ ... ⋆ s_n ⋆ t)_0 ⊆ D}

Let W = s_0 ⋆ s_1 ⋆ ... ∈ (X)_ω. It remains to show that (W)_0 ⊆ D. Fix any r ∈ (W)_0. If length(r) < length(s_0), then r = r_0 ∈ D (because we have to obtain r by cutting off W right before the first occurrence of the first variable). If length(s_0 ⋆ ... ⋆ s_n) ≤ length(r) < length(s_0 ⋆ ... ⋆ s_{n+1}), then we obtain r by cutting off W before the first occurrence of the variable in s_{n+1} and substituting in for the variables in s_0 ⋆ ... ⋆ s_n. Since r_{n+1} ⊑ s_{n+1}, we have

r ∈ (s_0 ⋆ ... ⋆ s_n ⋆ r_{n+1})_0

But r_{n+1} ∈ D_n and hence (s_0 ⋆ ... ⋆ s_n ⋆ r_{n+1})_0 ⊆ D. Therefore, r ∈ D as required.

3 Dual Ramsey Theorem

Recall the statement of the Dual Ramsey Theorem.

Theorem 3.1 (Dual Ramsey Theorem). Let k ≥ 1. If (ω)^k = C_0 ∪ C_1 ∪ ... ∪ C_{l-1} where each C_i is Borel, then there exists an X ∈ (ω)^ω such that (X)^k ⊆ C_i for some i < l.

Remember that (ω)^k denotes the set of all partitions of ω into exactly k blocks. Note that unlike the notation A^{<ω}_k, which denoted finite ordered variable words with k many variables (and which correspond to A-partitions of finite initial segments of ω), we are back to looking at partitions of all of ω. Similarly, (ω)^k_A denotes the set of all A-partitions of ω with exactly k many free blocks. To define the topology used here, notice that we can view each partition P in (ω)^ω or (ω)^k as a relation P ⊆ ω × ω for which P(n, m) holds if and only if n and m are in the same partition block. Therefore, each partition P is an element of 2^{ω×ω}. This space is given the product topology (and {0, 1} is given the discrete topology), so (ω)^k and (ω)^ω inherit a subspace topology. The same idea holds for a finite alphabet A and the sets (ω)^k_A and (ω)^ω_A. These spaces inherit a subspace topology from 2^{(A∪ω)×(A∪ω)}. To give a little more intuition about the topology on (ω)^k_A, consider the basic open sets in 2^{(A∪ω)×(A∪ω)}. To specify a basic open set in this space, we list finitely many conditions of the form "u and v are related" (i.e. in the same block) or "u and v are not related" (i.e. not in the same block) for elements u, v ∈ A ∪ ω.
This finite list of conditions determines the basic open set of all relations on A ∪ ω which satisfy these conditions. Of course, not all such sets of conditions can be satisfied by an A-partition. For example, if we specify that 0 and 1 are related, 1 and 2 are related, but 0 and 2 are not related, then this finite information cannot be satisfied by an A-partition. Similarly, if a, b ∈ A and we specify that a and b are related, then this information cannot be accommodated by an A-partition. However, restricting the finite lists of conditions to those which actually describe sets of A-partitions is taken care of by the subspace topology (i.e. by the restriction to the intersection with the set (ω)^k_A).

In this section, we will prove the following generalization of the Dual Ramsey Theorem. The Dual Ramsey Theorem is the special case when A = ∅.

Theorem 3.2. Let A be a finite (possibly empty) alphabet and k ∈ ω (with k ≥ 1 if A = ∅). If (ω)^k_A = C_0 ∪ C_1 ∪ ... ∪ C_{l-1} where each C_i is Borel, then there exists an X ∈ (ω)^ω_A such that (X)^k_A ⊆ C_i for some i < l.

Theorem 3.2 follows from three lemmas.

Lemma 3.3. Let k ∈ ω and let A be a finite alphabet such that |A| = 1, say A = {a}. If Theorem 3.2 holds for (ω)^k_{{a}}, then it also holds for (ω)^{k+1}_∅ (i.e. for (ω)^{k+1}).

Lemma 3.4. Let A be a finite nonempty alphabet. If (ω)^0_A = C_0 ∪ C_1 ∪ ... ∪ C_{l-1} where each C_i is Borel, then there exists a Y ∈ (ω)^ω_A such that (Y)^0_A ⊆ C_i for some i < l.

Lemma 3.5. Let A be a finite nonempty alphabet and let a be a new symbol not in A. If Theorem 3.2 holds for (ω)^k_{A∪{a}}, then it also holds for (ω)^{k+1}_A.

Notice that Theorem 3.2 follows immediately from these three lemmas. Fix a finite (possibly empty) alphabet A and k ∈ ω. If k = 0 (and hence A is nonempty), then we are done by Lemma 3.4. If k ≥ 1, then let A + k denote an alphabet of size |A| + k. If A is nonempty, then Theorem 3.2 follows by applying Lemma 3.5 k many times, starting from the instance of Lemma 3.4 for the alphabet A + k. If A is empty, then we apply Lemma 3.5 k − 1 times to obtain Theorem 3.2 for (ω)^{k-1}_B where B is a singleton set. Then we apply Lemma 3.3 once to obtain Theorem 3.2 for (ω)^k. Therefore, it remains to prove Lemmas 3.3–3.5.

The proof of Lemma 3.3 is trivial because (ω)^k_{{a}} = (ω)^{k+1}. That is, in (ω)^k_{{a}}, the lone nonfree block corresponding to the single element a acts like any other block, and hence (ω)^k_{{a}} is really the set of all partitions of ω into exactly k + 1 blocks. More specifically, we obtain an element of (ω)^{k+1} by collapsing blocks in the partition {0}, {1}, {2}, ... of ω until we are left with exactly k + 1 many blocks.
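As an aside, deciding which finite lists of relatedness conditions are satisfiable by an A-partition is a small union-find computation: merge all "related" pairs, then check that no "not related" pair and no pair of distinct letters of A was merged. The following is our own Python sketch (the encoding and names are illustrative, not from the notes).

```python
def satisfiable(related, unrelated, alphabet):
    """Check whether finitely many (un)relatedness conditions on A ∪ ω
    can be met by an A-partition: close 'related' under transitivity
    with union-find, then look for violated constraints."""
    parent = {}

    def find(u):
        parent.setdefault(u, u)
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    def union(u, v):
        parent[find(u)] = find(v)

    for u, v in related:
        union(u, v)
    # A 'not related' condition fails if the pair was forced together.
    if any(find(u) == find(v) for u, v in unrelated):
        return False
    # Two distinct letters of A may never share a block.
    roots = [find(a) for a in alphabet]
    return len(roots) == len(set(roots))
```

Both counterexamples from the text come out unsatisfiable: the conditions 0 ~ 1, 1 ~ 2, 0 ≁ 2, and the condition a ~ b for distinct letters a, b ∈ A.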
Similarly, we obtain an element of (ω)^k_{{a}} by collapsing blocks in the partition {a}, {0}, {1}, ... of {a} ∪ ω until we have the block containing a and exactly k many free blocks. However, notice that there are no restrictions on which blocks can be collapsed with the nonfree block containing a. Thus, the nonfree block containing a acts just like a free block. Therefore, Lemma 3.3 is established.

Notice that this reasoning cannot be extended. That is, (ω)^k_{{a,b}} is not the same as (ω)^{k+2}. When collapsing blocks in the partition {a}, {b}, {0}, {1}, ... to obtain an {a, b}-partition with k free blocks, the nonfree blocks containing a and b do not act like arbitrary free blocks because they cannot be collapsed together. This is why we need to work harder to prove our generalization of the Dual Ramsey Theorem.

Since we are now working with partitions instead of words, let me review some of the definitions in this context. We say s is a segment of an A-partition X and write s ⊑ X to mean s = X[n] for some n, where we identify n with {0, 1, ..., n − 1} and

X[n] = {x ∩ (A ∪ n) | x ∈ X} \ {∅}

That is, we simply restrict the X-blocks to the initial segment n of ω and remove any blocks whose intersection with this initial segment is empty. Thus, we obtain exactly the A-partition of n formed by restricting the A-partition X. In this section, we will freely use the language of both variable words and A-partitions since these notions are equivalent (as explained in the first section). Recall that we no longer have the restriction on variable words that all occurrences of x_i come before the first occurrence of x_{i+1}. We do retain the restriction that the first occurrence of x_i comes before the first occurrence of x_{i+1}. (The point of this restriction was to have a one-to-one correspondence between variable words and A-partitions.) Recall that s ⊑ X means s = X[n]; more generally, we also write s ⊑ X when s = Y[n] for some Y ∈ (X)^ω_A. For s ⊑ X, we write

(s, X)^ω_A = {Y ∈ (X)^ω_A | s ⊑ Y}

(s, X)^k_A = {Y ∈ (X)^k_A | s ⊑ Y}

We can now prove Lemma 3.4, which is restated here for convenience.

Lemma 3.6. Let A be a finite nonempty alphabet. If (ω)^0_A = C_0 ∪ C_1 ∪ ... ∪ C_{l-1}, where each C_i is Borel, then there exists a Y ∈ (ω)^ω_A such that (Y)^0_A ⊆ C_i for some i < l.

Proof. Note that (ω)^0_A = A^ω and hence is a compact Hausdorff space with basic open sets T_s = {Y ∈ A^ω | s ⊑ Y} where s ∈ A^{<ω}. Pictorially, you can think of A^ω as the set of infinite paths through the finitely branching tree A^{<ω}. The basic open set T_s is the set of all infinite paths passing through the node s of A^{<ω}. We will use two topological facts without proof.
The first topological fact is that Borel subsets of A^ω have the Property of Baire. That is, each Borel subset of A^ω differs from an open set by a set of first category (i.e. by a countable union of nowhere dense sets). Since each color C_i is Borel, this means that for each i < l, there is an open set O_i and a sequence of dense open sets D_{i,n} such that

(C_i △ O_i) ∩ ⋂_{n∈ω} D_{i,n} = ∅

or equivalently

C_i △ O_i ⊆ A^ω \ ⋂_{n∈ω} D_{i,n}

Since C_0 ∪ ... ∪ C_{l-1} = A^ω, we have that O_0 ∪ ... ∪ O_{l-1} contains all the points in A^ω except possibly the points in each C_i \ O_i. Since

⋃_{i<l} (C_i \ O_i) ⊆ ⋃_{i<l, n∈ω} (A^ω \ D_{i,n})

we have

A^ω \ ⋃_{i<l, n∈ω} (A^ω \ D_{i,n}) ⊆ O_0 ∪ ... ∪ O_{l-1}

and hence

⋂_{i<l, n∈ω} D_{i,n} ⊆ O_0 ∪ ... ∪ O_{l-1}

The second topological fact we will use is the Baire Category Theorem, which says that a countable intersection of dense open sets in A^ω remains dense. Thus, the set ⋂_{i<l, n∈ω} D_{i,n} is dense, and hence O_0 ∪ ... ∪ O_{l-1} is a dense open set. Therefore, at least one of the O_i sets is nonempty, so we can fix an i < l and a t_0 ∈ A^{<ω} such that T_{t_0} ⊆ O_i.

We want to continue to define a sequence t_0 ⊑ t_1 ⊑ t_2 ⊑ ... such that #(t_{n+1}) = n + 1 and for all s ∈ (t_{n+1})_0, T_s ⊆ D_{i,n}. In other words, for n ≥ 1, t_n is a finite variable word with n many variables such that for each s obtained by substituting elements of A for all the variables in t_n, the basic open set T_s is contained in D_{i,n-1}. Our desired infinite word Y will be the limit of the finite t_n words.

How do we define t_1? Recall that D_{i,0} is a dense open set. Therefore, it intersects T_{t_0} and the intersection is open (and hence contains a basic open set). Fix t_0' ∈ A^{<ω} such that t_0 ⊑ t_0' and T_{t_0'} ⊆ D_{i,0}. Let t_1 = t_0' x_0. Clearly, #(t_1) = 1 as required, and any s ∈ (t_1)_0 satisfies t_0' ⊑ s and hence T_s ⊆ T_{t_0'} ⊆ D_{i,0}.

How do we define t_2? For simplicity, suppose A = {a, b} and recall that t_1 = t_0' x_0. Let s_a = t_0' a be the result of substituting a in for x_0 in t_1. Thus s_a ∈ A^{<ω} determines a basic open set T_{s_a}. Since D_{i,1} is a dense open set, there is an s_a' ∈ A^{<ω} such that s_a ⊑ s_a' and T_{s_a'} ⊆ D_{i,1}. Fix u ∈ A^{<ω} such that s_a' = s_a u = t_0' a u. Let s_b = t_0' b u ∈ A^{<ω}. That is, replace the occurrence of the symbol a in s_a' which was originally substituted for x_0 in t_1 by the symbol b. Again, since D_{i,1} is a dense open set, there is a string s_b' ∈ A^{<ω} such that s_b ⊑ s_b' and T_{s_b'} ⊆ D_{i,1}.
Fix v ∈ A^{<ω} such that s_b' = s_b v = t_0' b u v, and let t_2 be defined by

t_2 = t_0' x_0 u v x_1

Notice that #(t_2) = 2 as required. Also, if s ∈ (t_2)_0 (so s ∈ A^{<ω}), then either s_a' ⊑ s (if a is substituted in for x_0) or s_b' ⊑ s (if b is substituted in for x_0). In either case, we have that T_s ⊆ D_{i,1} as required.

Continuing in this way, we get a sequence t_0 ⊑ t_1 ⊑ t_2 ⊑ ... such that #(t_{n+1}) = n + 1 and for all s ∈ (t_{n+1})_0, T_s ⊆ D_{i,n}. Let Y be the limit of the finite t_n words. That is, Y is the unique element of (ω)^ω_A such that t_n ⊑ Y for all n.

The fact that t_0 ⊑ Y implies that (Y)^0_A ⊆ O_i. That is, an arbitrary element of (Y)^0_A is formed by substituting elements of A in for the (infinitely many) variables of Y. Since t_0 ⊑ Y and t_0 contains no variables, no matter how this substitution is done, the resulting string contains t_0 as an initial segment. Therefore, an arbitrary element of (Y)^0_A has t_0 as an initial segment, and hence is contained in T_{t_0}, which is contained (by construction) in O_i.

The fact that t_{n+1} ⊑ Y implies that (Y)^0_A ⊆ D_{i,n}. That is, since t_{n+1} ⊑ Y, any element of (Y)^0_A must contain some s ∈ (t_{n+1})_0 as an initial segment. By construction, for such s, we have T_s ⊆ D_{i,n}. Hence, each element of (Y)^0_A is in D_{i,n}. Since (Y)^0_A ⊆ D_{i,n} for every n, we have

(Y)^0_A ⊆ ⋂_{n∈ω} D_{i,n}

Since

C_i △ O_i ⊆ A^ω \ ⋂_{n∈ω} D_{i,n}

we have (Y)^0_A ∩ (C_i △ O_i) = ∅. This means that each element of (Y)^0_A is either in both C_i and O_i or in neither C_i nor O_i. Since (Y)^0_A ⊆ O_i, it follows that (Y)^0_A ⊆ C_i as required.

Before proving Lemma 3.5, we need an important observation. Assume that the conclusion of Theorem 3.2 holds for (ω)^k_{A∪{a}}. Our goal is to show that this conclusion also holds for (ω)^{k+1}_A. The idea will be to transfer instances of (ω)^{k+1}_A to instances of (ω)^k_{A∪{a}}. The difficulty, as noted before, is that in (ω)^ω_{A∪{a}}, the block containing the element a does not behave like an arbitrary free block because it cannot be collapsed with a block containing an element of A. Therefore, there is a crucial difference between (ω)^{k+1}_A and (ω)^k_{A∪{a}} even though the number of partition blocks in each case is the same. What we need is a way of fixing a free block in (ω)^ω_A that is not collapsed into a block containing an element of A, so that this free block acts like the block containing a in (ω)^ω_{A∪{a}}. The idea for doing this procedure will be to iterate the following lemma. Recall that for X ∈ (ω)^ω_A and s ⊑ X, (s, X)^k_A denotes the set of all Y ∈ (X)^k_A such that s ⊑ Y. In terms of infinite words, (s, X)^k_A is the set of all coarsenings Y of the infinite variable word X such that Y is an infinite word with exactly k variables and Y contains s as an initial segment.

Lemma 3.7. Assume that Theorem 3.2 holds for (ω)^k_{A∪{a}} and fix a Borel coloring

(ω)^{k+1}_A = C_0 ∪ ... ∪ C_{l-1}

For any X ∈ (ω)^ω_A, any s with s ∈ (t)_0 for some t ⊑ X, and any X' ∈ (X)^ω_A such that X'[|s|] = X[|s|] (where |s| denotes the length of s), there is an X'' ∈ (X')^ω_A such that X''[|s|] = X'[|s|] and (s, X'')^{k+1}_A ⊆ C_i for some i < l.

Proof. Fix X ∈ (ω)^ω_A, s and X' ∈ (X)^ω_A such that X'[|s|] = X[|s|]. That is, there is a finite word t such that t ⊑ X, the first symbol in X after t is the variable x_{#(t)}, s ∈ (t)_0 (i.e. s is the result of substituting elements of A in for all the variables in t) and, since X'[|s|] = X[|s|], t ⊑ X'. In terms of a picture, we have fixed a coarsening s of an initial segment t of X, where the first symbol in X after the initial segment t is a new variable, and X' is an infinite variable word coarsening X which starts with this fixed initial segment t.
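The segment operation X[n] defined earlier in this section is just a restriction of blocks, so it is easy to compute for finite data. The following Python sketch uses our own encoding of a partition as a collection of blocks; it is an illustration, not code from the notes.

```python
def restrict(partition, alphabet, n):
    """Compute X[n] = {x ∩ (A ∪ {0, ..., n-1}) : x ∈ X} \ {∅}:
    restrict each block of an A-partition to A ∪ n and drop the blocks
    whose intersection with this initial segment is empty."""
    keep = set(alphabet) | set(range(n))
    blocks = (frozenset(block) & keep for block in partition)
    return {block for block in blocks if block}

# For example, with A = {"a"} and the partition {{"a", 0, 2}, {1, 3}, {4}}:
X = [{"a", 0, 2}, {1, 3}, {4}]
```

Here restrict(X, "a", 3) keeps the blocks {"a", 0, 2} and {1} and discards {4}, matching the description of X[n] above.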
In terms of infinite words, (s, X) k is the set of all coarsenings Y of the infinite variable word X such that Y is an infinite word with exactly k variables and Y contains s as an initial segment. Lemma 3.7. ssume that Theorem 3.2 holds for (ω) k {a} and fix a Borel coloring (ω) k+1 = C 0 C l 1 For any X (ω) ω, s (X) and X (X) ω such that X [ s ] = X[ s ], there is an X (X ) ω such that X [ s ] = X [ s ] and (s, X ) k+1 C i for some i < l. Proof. Fix X (ω) ω, s (X) and X (X) ω such that X [ s ] = X[ s ]. That is, there is a finite word t such that t X, the first symbol in X after t is the variable x #(t) (i.e. t X), s (t) 0 (i.e. s is the result of substituting elements of in for all the variables in t) and (since s = sx 0 = t ), t X. In terms of a picture, we have fixed a coarsening s of an initial segment t of X, where the first symbol in X after the initial segment t is a new variable, and X is an infinite variable word coarsening X which starts with this fixed initial segment t. 20