On the decomposition of orthogonal arrays


Wiebke S. Diestelkamp, Department of Mathematics, University of Dayton, Dayton, OH 45469-2316, wiebke@udayton.edu

Jay H. Beder, Department of Mathematical Sciences, University of Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, WI 53201-0413, beder@uwm.edu

Utilitas Mathematica, Vol. 61, 2002, 65-86.

Abstract. When an orthogonal array is projected on a small number of factors, as is done in screening experiments, the question of interest is the structure of the projected design, by which we mean its decomposition in terms of smaller arrays of the same strength. In this paper we investigate the decomposition of arrays of strength $t$ having $t+1$ factors. The decomposition problem is well understood for symmetric arrays on $s = 2$ symbols. In this paper we derive some general results on decomposition, with particular attention to arrays on $s = 3$ symbols. We give a new proof of the regularity of arrays of index 1 when $s = 2$ or 3, and show by counterexample that the result doesn't extend to larger $s$. For $s = 3$ we also construct an indecomposable array of index 2. Finally, we determine the structure of completely decomposable arrays on 3 symbols having strength 2 and index 2, 3 or 4.

Key words. Orthogonal array, decomposition, projection, simple orthogonal array, regular orthogonal array.

AMS subject classification. Primary 05B15, 62K15.

1 Introduction

Orthogonal arrays were first introduced by Rao [11], who defined them as subsets of a Cartesian product of the form $S^k$. In experimental design they represent fractional factorial experiments, where $S^k$ is the full set of treatment combinations. Here the (finite) set $S$ indexes the levels of each of $k$ factors. Orthogonal arrays are also used in cryptography to represent codes, where $S$ is an alphabet and $k$ the word length. Rao defined the strength and index of an array, and

in [12] extended these ideas to so-called asymmetric arrays, those based on arbitrary Cartesian products $A_1 \times \cdots \times A_k$. The concept of orthogonal array went through a further extension by allowing elements ($k$-tuples) to be repeated. So-called simple arrays, those with no repetitions, may thus represent arbitrary subsets of arbitrary factorial experiments.

The issue of projecting an orthogonal array onto a small number of factors arises in screening experiments, where one examines a large number $k$ of factors to determine which ones are active. Because of cost, only a fraction of the possible treatment combinations can be observed, and one thus selects a fraction having the highest strength $t$ affordable. One then identifies the active factors and re-analyzes the data using only these factors. Typically, $k$ is much larger than $t$, while the number of active factors may only be slightly larger. The design used for re-analysis of the data is the projection of the original array onto the active factors. Such a projection is again an orthogonal array of strength $t$, and the question is to analyze its structure, by which we will mean its representation as a juxtaposition of several smaller arrays having the same factors and the same strength. The question of projection thus turns into a question about the decomposition of an orthogonal array of strength $t$. In this paper we deal primarily with arrays having $t+1$ factors.

In an early paper, Seiden and Zemach [13] showed that every symmetric orthogonal array having $t+1$ factors and $s = 2$ symbols (that is, where $S$ has two elements) is a juxtaposition of arrays of size $2^t$. Such arrays are the smallest possible of strength $t$, having index $\lambda = 1$. Cheng [4] has utilized this fact to show that every array on $s = 2$ symbols consists only of half-replicates of the $2^{t+1}$ factorial design or copies of the full $2^{t+1}$ design, and classified all arrays on 2 symbols into three types based on their decomposition. We describe this in Section 5.

When $s > 2$, the question of decomposition is far more complicated. This is due to the appearance of indecomposable arrays of index $\lambda > 1$ (Proposition 4.4) and, when $s > 3$, of non-regular arrays of index $\lambda = 1$. Regularity is a property defined when $s$ is a prime or prime power, and Hedayat, Stufken and Su [7] have shown that orthogonal arrays with $s = 3$ symbols and index 1 are always regular. We provide an alternate proof that also applies to the case when $s = 2$ (Theorem 3.7), and also show that for $s = 4$, 5 or 7 there exist non-regular orthogonal arrays of index 1, strength $t = 2$ and $k = 3$ factors (Theorem 3.8).

Finally, we develop a way to extend Cheng's classification of orthogonal arrays to arbitrary completely decomposable symmetric orthogonal arrays of strength $t$ having $t+1$ factors, that is, to arrays which are decomposable into orthogonal arrays of index 1. For orthogonal arrays on $s = 3$ symbols, we use this to classify all completely decomposable arrays of size $N = 18$, 27 or 36 and strength $t = 2$ (Section 5). This requires development of a broader notion of type.

The case $s = 2$ is special in a number of ways. It is the only one where all arrays of $t+1$ factors are completely decomposable, and in fact the only one where decompositions are always unique. The type of an array that we propose may be a way of asserting some uniqueness property of decompositions when $s > 2$, and is useful to the statistician.

It is common to represent orthogonal arrays as matrices of symbols.
The idea is that an element of a Cartesian product of $k$ sets may be written as a column (or row) of length $k$. This subtly extends the definition of orthogonal arrays to allow repeated elements, since a matrix can have repeated columns or rows. If repetitions occur, the collection of elements no longer constitutes a set, but rather a multiset. This point of view has some advantages, and so we will begin by reviewing some basic ideas about multisets as we define orthogonal arrays (Section 2). Section 3 deals with the properties of simplicity and regularity, and Section 4 with decomposability. Where possible, we state and prove our results for arbitrary (possibly asymmetric)

orthogonal arrays. After applying the theory to classifying 3-symbol arrays in Section 5, we conclude in Section 6 with a discussion of possible extensions as well as applications to the Bush and Rao inequalities.

Throughout the paper, $GF(s)$ will denote the field with $s$ elements. If $s$ is prime, we will use $Z_s$ instead. Column vectors will be denoted by boldface letters, and the transpose of a vector by $'$. We will use the symbol $\sim$ to denote pairwise linear dependence (over $GF(s)$): $\mathbf{a} \sim \mathbf{b} \iff \exists\, \theta \in GF(s)$, $\theta \neq 0$, such that $\mathbf{a} = \theta\mathbf{b}$. We will use the notation $OA(N, k, s, t)$ for a (symmetric) orthogonal array with $N$ elements, $k$ factors (or constraints), $s$ symbols and strength $t$, and write it as a $k \times N$ matrix when displaying it. Finally, $|A|$ will denote the cardinality of the set (or multiset) $A$. These concepts are discussed in the next section.

2 Multisets and orthogonal arrays

The matrix representation of an orthogonal array is very useful for visualizing the array, but it imposes an ordering among the elements ($k$-tuples) of the array which is not inherent in the design represented by the matrix. Any permutation of these elements represents the same design. Thus for many purposes it is convenient to view an array as a set of $k$-tuples, or (since we allow repetition) as a multiset. This representation is unique.

In the multiset $M = \{a, a, b, b, b\}$, the elements $a$ and $b$ occur with multiplicities 2 and 3, respectively, and $|M| = 5$. In general we may identify a multiset with the function that counts the multiplicity of each element, and its cardinality is the sum of the multiplicities.

Definition 2.1 Let $\Omega$ be a set. A multiset on $\Omega$ is a function $M : \Omega \to \{0, 1, 2, \ldots\}$. $M(x)$ is called the frequency (or multiplicity) of $x$. If $M(x) = 0$ for all $x \in \Omega$, we call $M$ the empty multiset, denoted $\emptyset$. If $M(x) \leq 1$ for all $x \in \Omega$, $M$ is said to be simple. We define the cardinality of $M$ by
$$|M| = \sum_{x \in \Omega} M(x).$$
Note that $M$ is simple if and only if it is a set. The term simple was introduced in [2].

Definition 2.2 Let $\Omega$ be a set and $K$ and $L$ multisets on $\Omega$. We define the following relations and operations:
(1) $K = L$ if $K(x) = L(x)$ for all $x \in \Omega$ (multiset equality).
(2) $K \subseteq L$ if $K(x) \leq L(x)$ for all $x \in \Omega$ (multiset inclusion).
(3) $x \in K$ if $K(x) > 0$ (multiset membership).
(4) $K \cap L = \min(K, L)$ (multiset intersection).
(5) $K \cup L = \max(K, L)$ (multiset union).
(6) $K + L = M$, where $M(x) = K(x) + L(x)$ for all $x \in \Omega$ (multiset sum or juxtaposition).
If $K \subseteq L$, we say that $K$ is a multisubset of $L$. We will use the notation $M_1 + \cdots + M_n = \sum_{i=1}^{n} M_i$ and $\underbrace{M + \cdots + M}_{n} = nM$.
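In computational terms these are simply pointwise operations on frequency functions. The following is a minimal sketch (not from the paper; the choice of Python's collections.Counter to play the role of the frequency function is our own):

```python
# Sketch of Definition 2.2: a multiset as its frequency function.
from collections import Counter

def m_intersection(K, L):   # K ∩ L = min(K, L), taken pointwise
    return Counter({x: min(K[x], L[x]) for x in set(K) | set(L) if min(K[x], L[x]) > 0})

def m_union(K, L):          # K ∪ L = max(K, L), taken pointwise
    return Counter({x: max(K[x], L[x]) for x in set(K) | set(L)})

def m_sum(K, L):            # K + L, the juxtaposition
    return K + L            # Counter addition adds frequencies

def cardinality(M):         # |M| = sum of multiplicities
    return sum(M.values())

K = Counter({'a': 2, 'b': 3})
L = Counter({'b': 1, 'c': 4})
assert cardinality(m_sum(K, L)) == cardinality(K) + cardinality(L)
assert m_intersection(K, L) == Counter({'b': 1})
```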

It is easy to see that $|M_1 + \cdots + M_n| = |M_1| + \cdots + |M_n|$. Note that when $K \cap L = \emptyset$, then $K \cup L = K + L$. Definitions 2.1 and 2.2 include those given by Stanley [14]. Except for the multiset sum, the relations and operations of Definition 2.2 and the notion of cardinality coincide with the usual ones when $K$ and $L$ are sets.

Definition 2.3 Let $A_1, \ldots, A_k$ be finite sets. A multisubset of $A_1 \times \cdots \times A_k$ is called an orthogonal array on $A_1 \times \cdots \times A_k$.

To define the notion of projection of an orthogonal array, and in particular its strength, we proceed as follows. Let $i_1 < \cdots < i_m$ be elements of $\{1, \ldots, k\}$, and let $I = \{i_1, \ldots, i_m\}$. Recall that the projection of $A_1 \times \cdots \times A_k$ onto $A_{i_1} \times \cdots \times A_{i_m}$ is the function $p_I : A_1 \times \cdots \times A_k \to A_{i_1} \times \cdots \times A_{i_m}$ such that $p_I(x_1, \ldots, x_k) = (x_{i_1}, \ldots, x_{i_m})$.

Definition 2.4 Let $O$ be an orthogonal array on $A_1 \times \cdots \times A_k$ and let $I = \{i_1, \ldots, i_m\} \subseteq \{1, \ldots, k\}$. The projection of $O$ onto the $m$ factors $i_1, \ldots, i_m$ is the multiset $p_I(O) = \{p_I(\mathbf{x}) : \mathbf{x} \in O\}$.

Thus, with $I$ as in this definition, the projection $O' = p_I(O)$ is an orthogonal array on $A_{i_1} \times \cdots \times A_{i_m}$. We clearly have $|p_I(O)| = |O|$ for any projection $p_I$.

The operations of sum and projection are easily visualized in matrix terms as follows. If elements of an array $O$ are represented by columns, then projecting $O$ onto the factors $i_1, \ldots, i_m$ means deleting all but rows $i_1, \ldots, i_m$ from the corresponding matrix. If $O_1$ and $O_2$ are orthogonal arrays, then $O_1 + O_2$ is represented by juxtaposing the corresponding matrices. If $O_1, \ldots, O_n$ are orthogonal arrays on $A_1 \times \cdots \times A_k$ and $I \subseteq \{1, \ldots, k\}$, then
$$p_I(O_1 + \cdots + O_n) = p_I(O_1) + \cdots + p_I(O_n).$$
This elementary remark will be useful later.

Definition 2.5 Let $O$ be an orthogonal array on $A_1 \times \cdots \times A_k$. Then $O$ has strength $t$ if for every set $I = \{i_1, \ldots, i_t\} \subseteq \{1, \ldots, k\}$ there exists a positive integer $\lambda_I$ such that
$$p_I(O) = \lambda_I \,(A_{i_1} \times \cdots \times A_{i_t}). \qquad (1)$$
In other words, the projection of $O$ onto any set of $t$ factors, say $A_{i_1}, \ldots, A_{i_t}$, contains each element of $A_{i_1} \times \cdots \times A_{i_t}$ the same number of times, the number ($\lambda_I$) depending on the choice of the set $I$ of factors. In statistical terms, the projection on $t$ factors is a set of replicates of the complete factorial $A_{i_1} \times \cdots \times A_{i_t}$.

Obviously the projection of an orthogonal array of strength $t$ onto $m \geq t$ factors is again an orthogonal array of strength $t$. This observation is what reduces the study of projection properties to the study of decompositions.

Let $O$ be an orthogonal array on $A_1 \times \cdots \times A_k$. When $A_1 = \cdots = A_k = S$, say, it is easy to see that the numbers $\lambda_I$ in (1) are equal for all subsets $I \subseteq \{1, \ldots, k\}$ of cardinality $t$, and that their common value is $\lambda = N/s^t$, where $N = |O|$ and $s = |S|$.

Definition 2.6 An orthogonal array $O$ on the set $S^k$ is said to be symmetric. The common value of $\lambda$ in (1) is called the index of $O$. We will denote the family of symmetric orthogonal arrays of cardinality $N$ on $k$ factors, $s = |S|$ symbols and strength $t$ by $OA(N, k, s, t)$. Thus an array of index 1 is of form $OA(s^t, k, s, t)$. Since $N = \lambda s^t$, arrays of index 1 are the smallest possible. (Note that the symmetry of an array $O$ has nothing to do with the symmetry of the corresponding matrix.)
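Definition 2.5 can be checked by brute force. The sketch below is our own illustration (assuming the array is given as a list of $k$-tuples, possibly with repeats, and that factor $i$ takes its values in levels[i]); it simply verifies equation (1) for every $t$-subset of factors:

```python
# Brute-force check of Definition 2.5.
from itertools import combinations, product
from collections import Counter

def has_strength(rows, levels, t):
    """True if every projection onto t factors is a multiple of the full
    factorial on those factors (equation (1))."""
    k = len(levels)
    for I in combinations(range(k), t):
        counts = Counter(tuple(x[i] for i in I) for x in rows)
        full = list(product(*(levels[i] for i in I)))
        lam = len(rows) // len(full)
        if any(counts[cell] != lam for cell in full):
            return False
    return True

# The full 2^3 factorial is an OA(8, 3, 2, 2) of index 2 (in fact of strength 3).
O = list(product([0, 1], repeat=3))
print(has_strength(O, [[0, 1]] * 3, 2))   # True
```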

3 Simplicity and regularity

In this section, all orthogonal arrays are assumed to be symmetric. We have defined an orthogonal array to be a multiset, in part to allow for possible repetitions of elements. Recall that an orthogonal array is simple if it contains no repeated elements. Every orthogonal array of index 1 is simple. However, the converse is not true. For example, the array
$$O = \begin{bmatrix} 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix},$$
which is of the form $OA(8, 3, 2, 2)$, is simple but has index 2.

Now, the components of the elements in an orthogonal array belong to a set $S$ of cardinality $s$, but the choice of $S$ is arbitrary. When $s$ is a power of a prime, we can choose $S$ to be the field with $s$ elements.

Definition 3.1 Let $s$ be a power of a prime. A simple orthogonal array of the form $OA(N, k, s, t)$ is called regular if its columns form the solution set of a system of linear equations in $k$ variables over the field $GF(s)$.

Notation: Let $a_1, \ldots, a_n, a \in GF(s)$, where $s$ is a prime power. When it is convenient, we will write an equation $a_1 x_1 + \cdots + a_n x_n = a$ in the form $\mathbf{a}'\mathbf{x} = a$, where $\mathbf{a} = (a_1, \ldots, a_n)'$ and $\mathbf{x} = (x_1, \ldots, x_n)'$.

The observations contained in the following lemma will be used repeatedly.

Lemma 3.2 Let $s$ be a prime power, and let $m \leq n$. Let $\mathbf{a}, \mathbf{b}, \mathbf{b}_1, \ldots, \mathbf{b}_m \in (GF(s))^n$, and $a, b, b_1, \ldots, b_m \in GF(s)$.
(i) If $\mathbf{b}_1, \ldots, \mathbf{b}_m$ are linearly independent over $GF(s)$, then the system
$$\mathbf{b}_1'\mathbf{x} = b_1, \quad \ldots, \quad \mathbf{b}_m'\mathbf{x} = b_m$$
has $s^{n-m}$ solutions in $(GF(s))^n$.
(ii) If $\mathbf{b} = \theta\mathbf{a}$, $\mathbf{a} \neq \mathbf{0}$, then the solution sets of the equations $\mathbf{a}'\mathbf{x} = a$ and $\mathbf{b}'\mathbf{x} = b$ are either disjoint or equal.

For the remainder of this section we will assume that the number of factors of every orthogonal array is $k = t + 1$, where $t$ is the strength of the array. The following characterization of regular arrays of index 1 will be applied repeatedly in Section 5. We omit the proof, the main point of which is to check that the given strength is attained.

Proposition 3.3 Let $O$ be a regular orthogonal array of strength $t$ and index 1, where $s$ is a prime power. Then $O$ is the solution set in $(GF(s))^{t+1}$ of a single linear equation of the form
$$a_1 x_1 + \cdots + a_{t+1} x_{t+1} = a, \qquad (2)$$
where $a, a_i \in GF(s)$, and where the coefficients $a_i$ are all nonzero.
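For a prime $s$, Proposition 3.3 gives a recipe for writing down regular index-1 arrays explicitly. The following sketch is our own (it uses ordinary arithmetic mod $s$ in place of general $GF(s)$, so it covers only prime $s$); it lists the solution set of one equation with nonzero coefficients and confirms that each pair of factors is projected uniformly:

```python
# Sketch of Proposition 3.3 for prime s: the solution set of
# a_1*x_1 + ... + a_{t+1}*x_{t+1} = a with all a_i nonzero.
from itertools import product
from collections import Counter

def regular_component(coeffs, a, s):
    """All x in Z_s^{t+1} with sum(coeffs[i] * x[i]) = a (mod s)."""
    return [x for x in product(range(s), repeat=len(coeffs))
            if sum(c * xi for c, xi in zip(coeffs, x)) % s == a]

A = regular_component([1, 1, 1], 0, 3)        # t = 2, s = 3
print(len(A))                                  # 9 = 3^2 runs, i.e. index 1
pairs = Counter((x[0], x[1]) for x in A)
print(set(pairs.values()))                     # {1}: strength 2 on factors 1, 2
```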

Under what circumstances can we expect an orthogonal array $O$ of index 1 to be regular? Hedayat, Stufken and Su [7] have proved that regularity holds for every orthogonal array of the form $OA(3^t, t+1, 3, t)$, where $t \geq 2$. We will outline an alternate proof that is valid when $s = 2$ or 3, and for $t \geq 1$ (Theorem 3.7). Our approach rests on a property of functions on a finite field (Proposition 3.5). This raises the question as to whether the same regularity result would hold for $s > 3$. The main result of this section (Theorem 3.8) is that Theorem 3.7 need not be true for other values of $s$, even when $s$ is a prime.

We begin by characterizing orthogonal arrays of index 1 on $t+1$ factors without assuming regularity:

Lemma 3.4 Let $S$ be a set of cardinality $s$. A subset $O$ of $S^{t+1}$ is an orthogonal array of the form $OA(s^t, t+1, s, t)$ if and only if there exists a function $f : S^t \to S$ satisfying the following conditions:
(i) $(x_1, \ldots, x_{t+1}) \in O \iff f(x_1, \ldots, x_t) = x_{t+1}$, and
(ii) for all $0 \leq d < t$, if we fix any $d$ arguments of $f$, the restriction is $s^{t-d-1}$-to-one.

The proof is straightforward and can be found in [5]. When $S$ is a finite field, the characterization in Lemma 3.4 suggests making use of the following well-known fact (see, e.g., [8], pp. 368-369, 378):

Proposition 3.5 Let $s$ be a prime power. If $f : (GF(s))^n \to GF(s)$ is a function, then $f$ is given by a polynomial in $n$ variables. Moreover, the polynomial is uniquely determined if we mandate that the exponent of each variable be at most $s - 1$.

In special cases the polynomial $f$ can be shown to be linear in every component:

Proposition 3.6 Let $n \geq 1$, and $f : Z_s^n \to Z_s$, where $s = 2$ or 3. Assume that if we fix any $d$ arguments of $f$, the restriction is $s^{n-d-1}$-to-one for all $0 \leq d < n$. Then $f$ is given by a linear polynomial, i.e.
$$f(x_1, \ldots, x_n) = a + \sum_{i=1}^{n} a_i x_i, \quad \text{where } a, a_i \in Z_s. \qquad (3)$$

The proof is an induction on $n$ using Lemma 3.4, and is omitted (see [5]). The proof of Proposition 3.6 does not generalize to $s > 3$, since the base step of the induction depends on the fact that $s! = s(s-1)$, which is true only for $s = 2$ or 3. This turns out to be a key limitation.

Theorem 3.7 Let $O$ be an orthogonal array of the form $OA(s^t, t+1, s, t)$, where $s = 2$ or 3. Then $O$ is regular.

Proof. By Lemma 3.4, there exists a function $f : Z_s^t \to Z_s$ such that for every $(x_1, \ldots, x_{t+1}) \in O$, $f(x_1, \ldots, x_t) = x_{t+1}$, and such that $f$ satisfies the assumption in Proposition 3.6 with $n = t$. Thus $f$ is of the form (3) with $n = t$. Therefore $O$ is the solution set to the equation $a + \sum_{i=1}^{t} a_i x_i = x_{t+1}$, and so is regular.
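Theorem 3.7 can also be confirmed experimentally for small $t$: given an index-1 array on 3 symbols, an exhaustive search over equations of the form (2) should always succeed. The following brute-force sketch is our own companion to the theorem, not a substitute for the proof above:

```python
# Brute-force search for a linear description of an index-1 array over Z_3.
from itertools import product

def find_linear_description(O, s=3):
    O = set(map(tuple, O))
    k = len(next(iter(O)))
    for coeffs in product(range(1, s), repeat=k):      # nonzero coefficients
        for a in range(s):
            sol = {x for x in product(range(s), repeat=k)
                   if sum(c * xi for c, xi in zip(coeffs, x)) % s == a}
            if sol == O:
                return coeffs, a
    return None

# Example: the solution set of x1 + 2*x2 + x3 = 1 over Z_3 is recovered.
O = [x for x in product(range(3), repeat=3) if (x[0] + 2*x[1] + x[2]) % 3 == 1]
print(find_linear_description(O))   # e.g. ((1, 2, 1), 1)
```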

The case $s = 2$ of Theorem 3.7 appears to be unremarked in the literature, but it seems to explain the simplicity of the decomposition theory for this case (which we review in Section 5). In asking whether Theorem 3.7 holds for values of $s$ beyond 3, only prime power values need to be considered since these are the only ones for which regularity is defined. The following shows that if $s > 3$, an orthogonal array of index 1 need not be regular, even if $s$ itself is a prime.

Theorem 3.8 For $s = 4$, 5 and 7, there exist nonregular orthogonal arrays of the form $OA(s^2, 3, s, 2)$.

Proof. Denote $Z_s$ by $\{0, 1, \ldots, s-1\}$, and let $GF(4) = \{0, 1, a, b\}$, where $ab = 1$ and $b = 1 + a$. Let
$$O_4 = \begin{bmatrix} 0&0&0&0&1&1&1&1&a&a&a&a&b&b&b&b \\ 0&1&a&b&0&1&a&b&0&1&a&b&0&1&a&b \\ 0&1&a&b&a&0&b&1&1&b&0&a&b&a&1&0 \end{bmatrix},$$
$$O_5 = \begin{bmatrix} 0&0&0&0&0&1&1&1&1&1&2&2&2&2&2&3&3&3&3&3&4&4&4&4&4 \\ 0&1&2&3&4&0&1&2&3&4&0&1&2&3&4&0&1&2&3&4&0&1&2&3&4 \\ 0&1&2&3&4&1&3&4&2&0&2&4&0&1&3&3&0&1&4&2&4&2&3&0&1 \end{bmatrix},$$
and let $O_7$ be the $3 \times 49$ array whose first row consists of each of the symbols $0, 1, \ldots, 6$ repeated seven times in order, whose second row repeats the block $0\;1\;2\;3\;4\;5\;6$ seven times, and whose third row consists of the seven blocks
$$0\,1\,2\,3\,4\,5\,6,\quad 6\,2\,3\,4\,5\,1\,0,\quad 1\,3\,6\,5\,0\,4\,2,\quad 5\,6\,4\,0\,1\,2\,3,\quad 4\,0\,1\,6\,2\,3\,5,\quad 2\,5\,0\,1\,3\,6\,4,\quad 3\,4\,5\,2\,6\,0\,1.$$

It is easily verified that for each $s$, $O = O_s$ is an orthogonal array of the form $OA(s^2, 3, s, 2)$. By Lemma 3.4, there is a function $f : GF(s) \times GF(s) \to GF(s)$ such that $f(x_1, x_2) = x_3$ for all $(x_1, x_2, x_3) \in O$, as $O$ has index 1. Suppose for contradiction that $O$ is regular. Then $O$ is the solution set to an equation of the form $a_1 x_1 + a_2 x_2 + a_3 x_3 = a$, where $a_i, a \in GF(s)$ and $a_i \neq 0$, by Proposition 3.3. Solving for $x_3$ and invoking uniqueness we see that $f$ must be of the form $f(x_1, x_2) = cx_1 + dx_2 + e$ for some $c, d, e \in GF(s)$. This implies that $f(1, \theta) - f(0, \theta) = c$ for all $\theta \in GF(s)$. It is easy to see that this condition is violated in all three examples, which are therefore not regular.
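The verification claimed in the proof is easy to automate. The sketch below is our own (the columns of $O_5$ are rebuilt from the display above); it checks the strength condition and then applies the difference test $f(1, \theta) - f(0, \theta)$ from the proof:

```python
# Check that O_5 has strength 2 but fails the regularity test of the proof.
from collections import Counter

third_row = {0: [0, 1, 2, 3, 4], 1: [1, 3, 4, 2, 0], 2: [2, 4, 0, 1, 3],
             3: [3, 0, 1, 4, 2], 4: [4, 2, 3, 0, 1]}
O5 = [(x1, x2, third_row[x1][x2]) for x1 in range(5) for x2 in range(5)]

# strength 2: every pair of rows carries each ordered pair of symbols exactly once
for I in [(0, 1), (0, 2), (1, 2)]:
    counts = Counter((col[I[0]], col[I[1]]) for col in O5)
    assert set(counts.values()) == {1}

f = {(x1, x2): x3 for (x1, x2, x3) in O5}
diffs = {(f[(1, th)] - f[(0, th)]) % 5 for th in range(5)}
print(diffs)   # more than one value, so O_5 cannot be regular
```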

4 Decomposability

For $j = 1, \ldots, n$ let $O_j$ be an orthogonal array on $A_1 \times \cdots \times A_k$, and let $O = O_1 + \cdots + O_n$. If each $O_i$ has strength $t$ then so does $O$. The strength may actually be greater than $t$, as happens in so-called foldover designs. Our problem is, in a sense, the reverse:

Definition 4.1 Let $O$ be an orthogonal array of strength $t$. We say $O$ is decomposable if there exist orthogonal arrays $O_1, O_2 \subseteq O$ of strength $t$ such that $O = O_1 + O_2$. We will call $O_1$ and $O_2$ components of $O$. When $O$ is a symmetric orthogonal array, we say that $O$ is completely decomposable if
$$O = O_1 + \cdots + O_n, \qquad (4)$$
where each of the components $O_i$, $i = 1, \ldots, n$, has strength $t$ and index 1. The expression in (4) is called a complete decomposition of $O$.

Remark 4.2 If $O$ is a completely decomposable (symmetric) orthogonal array of index $\lambda$, then any complete decomposition (4) contains $n = \lambda$ components.

Remark 4.3 Raghavarao [9] defines an orthogonal array of strength 2 to be $\beta$-resolvable if it is the juxtaposition of $\alpha s$ different arrays of the form $OA(\beta s, k, s, 1)$, and completely resolvable if it is 1-resolvable. By contrast, decomposability requires each component to have the same strength as the original array.

One of the things that complicates the decomposition theory of arrays on $s > 2$ symbols is the appearance of indecomposable arrays. For example, let
$$O = \begin{bmatrix} 0&0&0&1&1&1&2&2&2&0&0&0&1&1&1&2&2&2 \\ 0&1&2&0&1&2&0&1&2&0&1&2&0&1&2&0&1&2 \\ 0&1&2&1&0&2&2&0&1&0&2&1&2&1&0&1&2&0 \end{bmatrix}. \qquad (5)$$

Proposition 4.4 The orthogonal array $O$ defined in (5) is indecomposable and of index 2.

Proof. It is easy to check that $O$ is of the form $OA(18, 3, 3, 2)$ and thus has index $\lambda = 18/3^2 = 2$. Now, suppose for contradiction that $O$ is decomposable. Then $O$ is the sum of two orthogonal arrays, necessarily of the form $OA(9, 3, 3, 2)$. By Theorem 3.7, every such array is regular and is thus the solution set of an equation of the form
$$a_1 x_1 + a_2 x_2 + a_3 x_3 = a, \qquad (6)$$
where $a, a_i \in Z_3$ and where $a_i \neq 0$ for all $i$. By Lemma 3.2, the solution sets of two equations of the form (6) either are disjoint, are equal, or share exactly three elements. Upon inspection, however, we can see that $O$ contains the column $(0, 0, 0)'$ twice, while every other column occurs exactly once. Thus $O$ cannot be the sum of two orthogonal arrays of the form $OA(9, 3, 3, 2)$ and is therefore indecomposable.

We also note that even if an orthogonal array is completely decomposable, this decomposition need not be unique. For example, let $s$ be a prime power, let $S = GF(s)$, and suppose $O = S^{t+1}$. It is easy to see that $O$ is of the form $OA(s^{t+1}, t+1, s, t)$. Fix nonzero elements $a_1, \ldots, a_{t+1} \in GF(s)$, and for $\theta \in GF(s)$ define $O_\theta$ to be the solution set of the equation $a_1 x_1 + \cdots + a_{t+1} x_{t+1} = \theta$. From Section 3 we know that $O_\theta$ is a regular orthogonal array of the form $OA(s^t, t+1, s, t)$ and that these arrays are disjoint. Moreover,
$$\sum_{\theta \in GF(s)} O_\theta = S^{t+1} = O,$$
so that $O$ is completely decomposable into regular arrays. It is not hard to see that there are $(s-1)^t$ distinct ways to do this.

Remark 4.5 Rather than asking for decomposability of an orthogonal array $O$, we might ask for something weaker, namely the existence of an array $O_1 \subseteq O$ of the same strength. Now if we let $O_2$ be the complement of $O_1$ in $O$, then $O = O_1 + O_2$, where $O$, $O_1$ and $O_2$ are orthogonal arrays on $A_1 \times \cdots \times A_k$. But it is easily seen that if $O$ and $O_1$ have strength $t$, then so does $O_2$. (This fact is noted in [10] for simple symmetric orthogonal arrays.) Let us define an array $O$ of strength $t$ to be minimal if none of its proper multisubsets has strength $t$. Then another way of stating the above is this: An orthogonal array is minimal if and only if it is indecomposable. Thus asking for a subarray $O_1 \subseteq O$ is no easier than asking that $O$ be decomposable.
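The multiplicity count that drives the proof of Proposition 4.4 can be checked directly; the following sketch (our own reconstruction of the columns of (5)) shows that $(0, 0, 0)'$ is the only repeated column:

```python
# Column multiplicities of the array O in (5).
from collections import Counter

cols = list(zip([0,0,0,1,1,1,2,2,2,0,0,0,1,1,1,2,2,2],
                [0,1,2,0,1,2,0,1,2,0,1,2,0,1,2,0,1,2],
                [0,1,2,1,0,2,2,0,1,0,2,1,2,1,0,1,2,0]))
mult = Counter(cols)
print(mult[(0, 0, 0)])                 # 2
print(sorted(set(mult.values())))      # [1, 2]: only (0,0,0) is repeated
```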

5 Classification of orthogonal arrays by their decomposition

In this section we will study completely decomposable symmetric arrays. Throughout the section, $s$ is a prime or prime power, and $Z_s^* = Z_s \setminus \{0\}$.

5.1 A typology for completely decomposable orthogonal arrays

Let us first consider the case when $s = 2$. Seiden and Zemach [13] have shown that every orthogonal array of the form $OA(N, t+1, 2, t)$ is completely decomposable. We know from Theorem 3.7 that when $s = 2$, every orthogonal array of index 1 is regular. But by Proposition 3.3 there are only two distinct $OA(N, t+1, 2, t)$ of index 1, namely the sets
$$I_0 = \left\{ \mathbf{x} \in Z_2^{t+1} : \mathbf{1}'\mathbf{x} = 0 \right\} \quad \text{and} \quad I_1 = \left\{ \mathbf{x} \in Z_2^{t+1} : \mathbf{1}'\mathbf{x} = 1 \right\}.$$
These two sets are disjoint, and so we have the following:

Proposition 5.1 Let $O$ be an orthogonal array of the form $OA(N, t+1, 2, t)$ of index $\lambda$. Then $O$ is uniquely decomposable as $O = \alpha I_0 + \beta I_1$, where $\alpha$ and $\beta$ are nonnegative integers such that $\alpha + \beta = \lambda$.

From the point of view of applications (such as statistical design), what is important is the form of the decomposition $\alpha A + \beta B$ of the array $O$, and we might define the type of an array $O$ by this decomposition. (In the same way we might say that for certain purposes the numbers 12 and 18 are of the same type as they are both of the form $pq^2$.)

There are several issues that complicate matters when $s > 2$. We have already noted that not every orthogonal array is completely decomposable, and that when $s > 3$ the components of index 1 need not be regular. In addition, there are many more possible components (regular or not), and two components can have a nonempty intersection without being equal. It is this fact which motivates the following definition.

Definition 5.2 Let $O$ and $P$ be completely decomposable orthogonal arrays of the form $OA(N, t+1, s, t)$ of index $\lambda$. We say that $O$ and $P$ are of the same type if for some complete decomposition of $O$ and $P$ there exists a labeling of the respective components, say $O_1, \ldots, O_\lambda$ and $P_1, \ldots, P_\lambda$, such that for every $k$ and for all $1 \leq i_1 < \cdots < i_k \leq \lambda$ we have
$$|O_{i_1} \cap \cdots \cap O_{i_k}| = |P_{i_1} \cap \cdots \cap P_{i_k}|.$$
We say that $O$ is polymorphic if it is simultaneously of more than one type (dimorphic if it is of exactly two).

For arrays of form $OA(N, t+1, 2, t)$ Definition 5.2 reduces to the notion of type alluded to above. For example, if $O$, $P$ are orthogonal arrays of the form $OA(7 \cdot 2^t, t+1, 2, t)$ with $O = 3 I_0 + 4 I_1$ and $P = 4 I_0 + 3 I_1$, then $O$ and $P$ are both of the same type. Proposition 5.1 implies that there are $[\lambda/2] + 1$ distinct types (here $[x]$ denotes the greatest integer that is less than or equal to $x$).
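For $s = 2$ the decomposition of Proposition 5.1 can be read off from the parity of each run. The small illustration below is our own (the array is built as $3 I_0 + 4 I_1$ and the coefficients are then recovered):

```python
# Recovering alpha and beta in O = alpha*I_0 + beta*I_1 for s = 2, t = 2.
from collections import Counter
from itertools import product

I0 = [x for x in product(range(2), repeat=3) if sum(x) % 2 == 0]
I1 = [x for x in product(range(2), repeat=3) if sum(x) % 2 == 1]
O = 3 * I0 + 4 * I1                    # an OA(28, 3, 2, 2) of index 7
counts = Counter(sum(x) % 2 for x in O)
alpha, beta = counts[0] // len(I0), counts[1] // len(I1)
print(alpha, beta)                     # 3 4
```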

Cheng [4] provided a classification of orthogonal arrays of the form $OA(N, t+1, 2, t)$ into just three different categories. Here again, $O = \alpha I_0 + \beta I_1$, where $\alpha + \beta = \lambda$:

Type I: $\beta = 0$ or $\alpha = 0$; that is, $O = \lambda I_0$ or $O = \lambda I_1$.
Type II: $\alpha = \beta = \lambda/2$; that is, $O = \frac{\lambda}{2} Z_2^{t+1}$ (if $\lambda$ is even). (7)
Type III: other.

Cheng's Types I and II contain pure forms of orthogonal arrays, representing factorial experiments based either on replicates of the same one-half fraction ($I_0$ or $I_1$) of the design or on replicates of the full $2^{t+1}$ design (where the set of treatment combinations is $Z_2^{t+1}$). By pairing components $I_0$ and $I_1$, Type III arrays can be viewed as $\gamma = \min(\alpha, \beta)$ replicates of the full $2^{t+1}$ factorial design plus $|\alpha - \beta|$ replicates of either $I_0$ or $I_1$. In this way Cheng explained some earlier observations (see [4] for discussion).

In Section 5.2 we will examine in detail the classification into types of completely decomposable arrays on 3 symbols having strength 2 and index $\lambda = 2$, 3 or 4. These three cases provide some indication of how the number of types increases with $\lambda$.

5.2 Orthogonal arrays of the form OA(N, t + 1, 3, t)

The following invariant of an orthogonal array will prove to be a valuable tool in determining its type:

Definition 5.3 Let $O$ be a symmetric orthogonal array of index $\lambda$, and let $t_j$ be the number of elements of $O$ that occur exactly $j$ times in $O$. We define the signature of $O$ to be the $\lambda$-tuple $T = (t_1, t_2, \ldots, t_\lambda)$.

Example 5.4 Suppose $O = 2A + B + C$, where $|A| = |B| = |C| = 9$, the two-way intersections have size 3, and $|A \cap B \cap C| = 1$. The multiplicity of a given element $x \in O$ is determined by two things: the multiplicity of each component (for example, $A$ occurs twice) and the repetition of an element belonging to more than one component. We may compute the multiplicity $O(x)$ of every element of $A \cup B \cup C$, the ambient set of $O$, as follows (we suppress the symbol $\cap$):

Subset          Cardinality   $2A(x) + B(x) + C(x) = O(x)$
$AB^cC^c$       4             $2(1) + 0 + 0 = 2$
$A^cBC^c$       4             $0 + 1 + 0 = 1$
$A^cB^cC$       4             $0 + 0 + 1 = 1$
$ABC^c$         2             $2(1) + 1 + 0 = 3$
$AB^cC$         2             $2(1) + 0 + 1 = 3$
$A^cBC$         2             $0 + 1 + 1 = 2$
$ABC$           1             $2(1) + 1 + 1 = 4$

Now we see that there are $4 + 4 = 8$ elements of multiplicity 1, $4 + 2 = 6$ elements of multiplicity 2, etc., so that $O$ has signature $(8, 6, 4, 1)$. (This example will arise in the proof of Proposition 5.8 and in Table 3.)

Of course, the signature may be defined for any multiset. For an orthogonal array $O$, the length of the signature equals the index $\lambda$, as no element of $O$ can occur more than $\lambda$ times.
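The signature is straightforward to compute from the frequency function. The sketch below is our own (the three defining vectors are an illustrative choice, not taken from the paper); it builds a concrete instance of Example 5.4 and recovers the signature $(8, 6, 4, 1)$:

```python
# Computing the signature of Definition 5.3 for an instance of Example 5.4.
from itertools import product
from collections import Counter

def solset(c, a, s=3):
    return [x for x in product(range(s), repeat=3)
            if sum(ci * xi for ci, xi in zip(c, x)) % s == a]

def signature(multiplicities, lam):
    t = Counter(multiplicities.values())
    return tuple(t[j] for j in range(1, lam + 1))

A, B, C = solset((1, 1, 1), 0), solset((1, 1, 2), 0), solset((1, 2, 1), 0)
O = Counter(A) + Counter(A) + Counter(B) + Counter(C)   # the multiset 2A + B + C
print(signature(O, 4))   # (8, 6, 4, 1), as computed in Example 5.4
```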

Further,
$$\sum_{j=1}^{\lambda} j\, t_j = N.$$
What makes the signature so useful is that it is uniquely determined by $O$.

Type   Signature   Structure     Description of Components
1      (18, 0)     $O = A + B$   $A = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = b\}$, where $a \neq b$
2      (12, 3)     $O = A + B$   $A = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = b\}$, where $\mathbf{c}_1 \not\sim \mathbf{c}_2$
3      (0, 9)      $O = 2A$      $A = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = a\}$

Table 1: Types of decomposable $OA(18, 3, 3, 2)$. $A$, $B$ are subsets of $Z_3^3$. $\mathbf{c}$, $\mathbf{c}_i$ have only nonzero components. $\sim$ denotes linear dependence.

For arrays of index $\lambda = 2$ we have the following:

Proposition 5.5 Let $O$ be a decomposable orthogonal array of the form $OA(18, 3, 3, 2)$. Then $O$ is of exactly one of the three types in Table 1, uniquely determined by the signature of $O$.

Proof. Every decomposable orthogonal array $O$ of the form $OA(18, 3, 3, 2)$ is completely decomposable, as $O$ has index 2. Thus we can write $O = A + B$, where $A$ and $B$ are orthogonal arrays of index 1. By Theorem 3.7, $A$ and $B$ are regular, and so by Proposition 3.3 each is the solution set of an equation of the form $\mathbf{c}'\mathbf{x} = a$, where $\mathbf{x} \in Z_3^3$, $\mathbf{c} \in (Z_3^*)^3$, $a \in Z_3$. Thus by Lemma 3.2, the three types listed in Table 1 are the only possible ones. To show uniqueness, we compute the signature $T$ of $O$:
$$A \cap B = \emptyset \;\Rightarrow\; T = (18, 0), \qquad |A \cap B| = 3 \;\Rightarrow\; T = (12, 3), \qquad A = B \;\Rightarrow\; T = (0, 9).$$
But $T$ is uniquely determined by $O$, and so $O$ is of exactly one of the three types described in Table 1.

Note that $O$ is simple if and only if it is of Type 1. The three types described in Table 1 are the same that Wang and Wu observed as possible projections of the array $L_{18}(3^7)$ given in Table 6 of [15].
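The three signatures of Table 1 can be reproduced from explicit components; in the sketch below the vectors and right-hand sides are our own illustrative choices:

```python
# The three signatures of Table 1, from explicit regular components over Z_3.
from itertools import product
from collections import Counter

def solset(c, a, s=3):
    return Counter(x for x in product(range(s), repeat=3)
                   if sum(ci * xi for ci, xi in zip(c, x)) % s == a)

def signature(O, lam):
    t = Counter(O.values())
    return tuple(t[j] for j in range(1, lam + 1))

type1 = solset((1, 1, 1), 0) + solset((1, 1, 1), 1)   # same c, a != b
type2 = solset((1, 1, 1), 0) + solset((1, 1, 2), 0)   # c1 not ~ c2
type3 = solset((1, 1, 1), 0) + solset((1, 1, 1), 0)   # 2A
print(signature(type1, 2), signature(type2, 2), signature(type3, 2))
# (18, 0) (12, 3) (0, 9)
```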

Before we proceed to prove the analogous result for arrays of index $\lambda = 3$, we need the following:

Lemma 5.6 Suppose $\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3 \in (Z_3^*)^3$ (that is, the components of the $\mathbf{c}_i$ are all nonzero). If $\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3$ are pairwise linearly independent, then they are linearly independent.

Proof. Without loss of generality, assume that
$$\mathbf{c}_i = (1, c_{i1}, c_{i2})' \qquad (8)$$
for $i = 1, 2, 3$, where $c_{ij} \neq 0$. Suppose that
$$\alpha \mathbf{c}_1 + \beta \mathbf{c}_2 + \gamma \mathbf{c}_3 = \mathbf{0} \qquad (9)$$
for $\alpha, \beta, \gamma \in Z_3$. Then (8) implies that $\alpha + \beta + \gamma = 0$. Since the $\mathbf{c}_i$ are pairwise linearly independent, (9) implies that either
$$\alpha = \beta = \gamma = 0 \qquad (10)$$
or
$$\alpha\beta\gamma \neq 0. \qquad (11)$$
Suppose (11) holds. Since $\alpha, \beta, \gamma \in Z_3^*$, it is easy to check that $\alpha = \beta = \gamma$. Using (9) again, we see that for all $j$, $c_{1j} + c_{2j} + c_{3j} = 0$. Since $c_{ij} \in Z_3^*$ for all $i, j$, we verify similarly that $c_{1j} = c_{2j} = c_{3j}$ for all $j$. But that means that $\mathbf{c}_1 = \mathbf{c}_2 = \mathbf{c}_3$, which contradicts the assumption of pairwise independence of the $\mathbf{c}_i$. Therefore (10) must hold, and thus $\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3$ are linearly independent.
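Since $(Z_3^*)^3$ has only eight elements, Lemma 5.6 can also be confirmed exhaustively. The following check is our own, independent of the proof just given:

```python
# Exhaustive check of Lemma 5.6 over (Z_3*)^3.
from itertools import product, combinations

def dependent(u, v):           # u ~ v over Z_3
    return any(all((t * ui - vi) % 3 == 0 for ui, vi in zip(u, v)) for t in (1, 2))

def independent3(u, v, w):     # nonzero determinant over Z_3
    det = (u[0]*(v[1]*w[2] - v[2]*w[1]) - u[1]*(v[0]*w[2] - v[2]*w[0])
           + u[2]*(v[0]*w[1] - v[1]*w[0]))
    return det % 3 != 0

vecs = list(product([1, 2], repeat=3))          # vectors with nonzero entries
for u, v, w in combinations(vecs, 3):
    if not (dependent(u, v) or dependent(u, w) or dependent(v, w)):
        assert independent3(u, v, w)
print("Lemma 5.6 verified over (Z_3*)^3")
```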

Type   Signature    Structure         Description of Components
1      (27, 0, 0)   $O = A + B + C$   $A = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = b\}$, $C = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = c\}$, and $a, b, c$ distinct
2      (15, 6, 0)   $O = A + B + C$   $A = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b\}$, $C = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = c\}$, and $\mathbf{c}_1 \not\sim \mathbf{c}_2$, $a \neq b$
3      (12, 6, 1)   $O = A + B + C$   $A = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = b\}$, $C = \{\mathbf{x} : \mathbf{c}_3'\mathbf{x} = c\}$, and $\mathbf{c}_i \not\sim \mathbf{c}_j$ for $i \neq j$
4      (9, 9, 0)    $O = 2A + B$      $A = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = b\}$, and $a \neq b$
5      (6, 6, 3)    $O = 2A + B$      $A = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = b\}$, and $\mathbf{c}_1 \not\sim \mathbf{c}_2$
6      (0, 0, 9)    $O = 3A$          $A = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = a\}$

Table 2: Types of completely decomposable $OA(27, 3, 3, 2)$. $A$, $B$, $C$ are subsets of $Z_3^3$. $\mathbf{c}$, $\mathbf{c}_i$ have only nonzero components. $\sim$ denotes linear dependence.

When $\lambda = 3$, we must add the assumption of complete decomposability:

Proposition 5.7 Let $O$ be a completely decomposable array of the form $OA(27, 3, 3, 2)$. Then $O$ is of exactly one of the six types in Table 2, uniquely determined by its signature.

Proof. Since $O$ is completely decomposable, we can write $O = A_1 + A_2 + A_3$, where $A_1$, $A_2$ and $A_3$ are orthogonal arrays of index 1. By Theorem 3.7, the $A_i$ are regular, say $A_i = \{\mathbf{x} : \mathbf{c}_i'\mathbf{x} = b_i\}$, $i = 1, 2, 3$, where $\mathbf{c}_i \in (Z_3^*)^3$ and $b_i \in Z_3$. Clearly, $\dim\{\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3\} = 1$, 2 or 3. Using Lemma 3.2, we have the following ($T$ is again the signature of $O$):

1. Suppose $\dim\{\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3\} = 1$. Then $\mathbf{c}_i \sim \mathbf{c}_j$ for all $i, j$, and we can write $A_1 = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = b_1\}$, $A_2 = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = b_2\}$, $A_3 = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = b_3\}$.
(a) If the $b_i$ are pairwise distinct, then $A_i \cap A_j = \emptyset$ for $i \neq j$. Thus $T = (27, 0, 0)$.
(b) If $b_1 = b_2$ and $b_1 \neq b_3$, then $O = 2A_1 + A_3$ and $A_1 \cap A_3 = \emptyset$. Thus $T = (9, 9, 0)$.
(c) If $b_1 = b_2 = b_3$, then $O = 3A_1$. Thus $T = (0, 0, 9)$.

2. Suppose $\dim\{\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3\} = 2$.
Claim: If $\mathbf{c}_1 \not\sim \mathbf{c}_2$ then $\mathbf{c}_1 \sim \mathbf{c}_3$ or $\mathbf{c}_2 \sim \mathbf{c}_3$.
Proof of claim. Suppose $\mathbf{c}_3 \not\sim \mathbf{c}_j$ for $j = 1, 2$. Then $\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3$ are pairwise linearly independent, and thus, by Lemma 5.6, they are linearly independent. But that is a contradiction to the assumption that $\dim\{\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3\} = 2$.
Therefore, without loss of generality, two of the vectors are proportional, and after relabeling we can write $A_1 = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b_1\}$, $A_2 = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b_2\}$, $A_3 = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = b_3\}$, where $\mathbf{c}_1 \not\sim \mathbf{c}_2$.
(a) If $b_1 \neq b_2$, then $A_1 \cap A_2 = \emptyset$ and $|A_i \cap A_3| = 3$ for $i = 1, 2$. Thus $T = (15, 6, 0)$.
(b) If $b_1 = b_2$, then $O = 2A_1 + A_3$ and $|A_1 \cap A_3| = 3$. Thus $T = (6, 6, 3)$.

3. Suppose $\dim\{\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3\} = 3$. Then $|A_i \cap A_j| = 3$ for $i \neq j$, and $|A_1 \cap A_2 \cap A_3| = 1$. Thus $T = (12, 6, 1)$.

Thus there are six possible types for $O$, each yielding a different expression for $T$, and they are exactly the ones listed in Table 2. But $T$ is uniquely determined by $O$, and thus $O$ is of exactly one of the six types.

Note that if $O$ is of Type 1, then $O = Z_3^3$. This is the only completely decomposable type of $OA(27, 3, 3, 2)$ that is simple.
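Proposition 5.7 admits a finite check: every complete decomposition uses three of the twelve regular components, so all attainable signatures can be enumerated. The sketch below is our own verification, not a replacement for the proof; it recovers exactly the six signatures of Table 2:

```python
# Enumerate all signatures of completely decomposable OA(27, 3, 3, 2).
from itertools import product, combinations_with_replacement
from collections import Counter

def solset(c, a, s=3):
    return frozenset(x for x in product(range(s), repeat=3)
                     if sum(ci * xi for ci, xi in zip(c, x)) % s == a)

def signature(O, lam):
    t = Counter(O.values())
    return tuple(t[j] for j in range(1, lam + 1))

# the 12 distinct regular components {c'x = a} with c having nonzero entries
components = {solset(c, a) for c in product([1, 2], repeat=3) for a in range(3)}
sigs = set()
for A, B, C in combinations_with_replacement(sorted(components, key=sorted), 3):
    O = Counter(A) + Counter(B) + Counter(C)
    sigs.add(signature(O, 3))
print(sorted(sigs))
# [(0, 0, 9), (6, 6, 3), (9, 9, 0), (12, 6, 1), (15, 6, 0), (27, 0, 0)]
```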

It is when $\lambda = 4$ that the signature first fails to determine the type of an array in all cases. As it turns out, this is due to polymorphism:

Proposition 5.8 Let $O$ be a completely decomposable array of the form $OA(36, 3, 3, 2)$. Then $O$ is of one of the fourteen types in Table 3, and its type is uniquely determined up to dimorphism by its signature.

Proof. Since $O$ is completely decomposable, we can write $O = A_1 + A_2 + A_3 + A_4$, where $A_i$ is an orthogonal array of index 1 for $i = 1, 2, 3, 4$. By Theorem 3.7, the $A_i$ are regular, say $A_i = \{\mathbf{x} : \mathbf{c}_i'\mathbf{x} = b_i\}$, $i = 1, 2, 3, 4$, where $\mathbf{c}_i \in (Z_3^*)^3$ and $b_i \in Z_3$. Clearly, $\dim\{\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3, \mathbf{c}_4\} = 1$, 2 or 3. Using Lemma 3.2, we have the following ($T$ is again the signature of $O$):

1. Suppose $\dim\{\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3, \mathbf{c}_4\} = 1$. Then $\mathbf{c}_i \sim \mathbf{c}_j$ for all $i, j$, and we can write $A_1 = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = b_1\}$, $A_2 = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = b_2\}$, $A_3 = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = b_3\}$, $A_4 = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = b_4\}$. Note that the $b_i$ cannot all be distinct, as they are in $Z_3$.
(a) If the $b_i$ are all equal, then $O = 4A_1$. Thus $T = (0, 0, 0, 9)$.
(b) If $b_1 = b_2 = b_3 \neq b_4$, then $O = 3A_1 + A_4$ and $A_1 \cap A_4 = \emptyset$. Thus $T = (9, 0, 9, 0)$.
(c) If $b_1 = b_2 \neq b_3 = b_4$, then $O = 2A_1 + 2A_3$ and $A_1 \cap A_3 = \emptyset$. Thus $T = (0, 18, 0, 0)$.
(d) If $b_1 = b_2$ and if $b_1$, $b_3$ and $b_4$ are distinct, then $O = 2A_1 + A_3 + A_4$ and $A_1$, $A_3$ and $A_4$ are pairwise disjoint. Thus $T = (18, 9, 0, 0)$.

2. Suppose $\dim\{\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3, \mathbf{c}_4\} = 2$. By Lemma 5.6, no three of these vectors can be pairwise linearly independent. Thus the four vectors partition into two $\sim$-equivalence classes, consisting either of two vectors each or of three and one. For convenience in tabling, we number the vectors so that the two possibilities are $\mathbf{c}_1 \sim \mathbf{c}_3 \not\sim \mathbf{c}_2 \sim \mathbf{c}_4$ and $\mathbf{c}_1 \sim \mathbf{c}_3 \sim \mathbf{c}_4 \not\sim \mathbf{c}_2$.
(a) Suppose $\mathbf{c}_1 \sim \mathbf{c}_3 \not\sim \mathbf{c}_2 \sim \mathbf{c}_4$. We can write $A_1 = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b_1\}$, $A_2 = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = b_2\}$, $A_3 = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b_3\}$, $A_4 = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = b_4\}$.
i. If $b_1 = b_3$, $b_2 = b_4$, then $O = 2A_1 + 2A_2$, and $|A_1 \cap A_2| = 3$. Thus $T = (0, 12, 0, 3)$.
ii. If $b_1 = b_3$, $b_2 \neq b_4$, then $O = 2A_1 + A_2 + A_4$ and $|A_1 \cap A_2| = |A_1 \cap A_4| = 3$, and $A_2 \cap A_4 = \emptyset$. Thus $T = (12, 3, 6, 0)$.
iii. If $b_1 \neq b_3$, $b_2 \neq b_4$, then $O = A_1 + A_2 + A_3 + A_4$, and $A_1 \cap A_3 = A_2 \cap A_4 = \emptyset$, $|A_1 \cap A_2| = |A_1 \cap A_4| = |A_2 \cap A_3| = |A_3 \cap A_4| = 3$. Thus $T = (12, 12, 0, 0)$.
(b) Suppose $\mathbf{c}_1 \sim \mathbf{c}_3 \sim \mathbf{c}_4 \not\sim \mathbf{c}_2$. We can write $A_1 = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b_1\}$, $A_2 = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = b_2\}$, $A_3 = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b_3\}$, $A_4 = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b_4\}$.
i. If $b_1 = b_3 = b_4$, then $O = 3A_1 + A_2$ and $|A_1 \cap A_2| = 3$. Thus $T = (6, 0, 6, 3)$.
ii. If $b_1 = b_4 \neq b_3$, then $O = 2A_1 + A_2 + A_3$ and $A_1 \cap A_3 = \emptyset$ and $|A_1 \cap A_2| = |A_2 \cap A_3| = 3$. Thus $T = (9, 9, 3, 0)$.
iii. If $b_1$, $b_3$, $b_4$ are pairwise distinct, then $O = A_1 + A_2 + A_3 + A_4$ and $A_1 \cap A_3 = A_1 \cap A_4 = A_3 \cap A_4 = \emptyset$ and $|A_2 \cap A_i| = 3$ for $i = 1, 3, 4$. Thus $T = (18, 9, 0, 0)$.

3. Suppose $\dim\{\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3, \mathbf{c}_4\} = 3$. Of the four subsets of size three from $\{\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3, \mathbf{c}_4\}$, none can have dimension 1 and at least one has dimension 3. Lemma 5.6 eliminates two more possibilities, so that either two subsets have dimension 3 and two have dimension 2, or all four have dimension 3.
(a) If exactly two of the subsets have dimension 3, then Lemma 5.6 shows that exactly one pair of vectors is linearly dependent. Now if $\mathbf{c}_1 \sim \mathbf{c}_4$ and all other pairs are linearly independent, we can write $A_1 = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b_1\}$, $A_2 = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = b_2\}$, $A_3 = \{\mathbf{x} : \mathbf{c}_3'\mathbf{x} = b_3\}$, $A_4 = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b_4\}$.
i. If $b_1 = b_4$, then $O = 2A_1 + A_2 + A_3$, where $|A_i \cap A_j| = 3$ for $i, j \in \{1, 2, 3\}$, $i \neq j$, and $|A_1 \cap A_2 \cap A_3| = 1$. Thus $T = (8, 6, 4, 1)$.
ii. If $b_1 \neq b_4$, then $O = A_1 + A_2 + A_3 + A_4$, where $|A_i \cap A_j| = 3$ in all cases ($i \neq j$) except that $A_1 \cap A_4 = \emptyset$, and where $|A_1 \cap A_2 \cap A_3| = |A_2 \cap A_3 \cap A_4| = 1$. Thus $T = (12, 9, 2, 0)$.

(b) If every subset of three of the vectors $\mathbf{c}_i$ is linearly independent, then the four vectors are pairwise linearly independent. This implies that $O = A_1 + A_2 + A_3 + A_4$, that all pairwise intersections contain three elements, and all 3-way intersections contain one element. The 4-way intersection has either one element or none, depending on whether or not the corresponding system of four equations is consistent (the system has 3 unknowns and rank 3).
i. If $|A_1 \cap A_2 \cap A_3 \cap A_4| = 1$, then $T = (8, 12, 0, 1)$.
ii. If $|A_1 \cap A_2 \cap A_3 \cap A_4| = 0$, then $T = (12, 6, 4, 0)$.

Thus there are fourteen possible types for $O$, and they are exactly the ones listed in Table 3. With the exception of those with signature $(18, 9, 0, 0)$, they all have distinct signatures and thus are mutually exclusive.

We now show that if $O$ has signature $(18, 9, 0, 0)$ then it necessarily has both forms given in the table. Suppose that $O = A + B + C + D$, where the components satisfy the given conditions of type 5. Then $A$, $B$, and $C$ partition $Z_3^3$, and $D$ intersects each in exactly 3 elements. Thus the elements of $D$ are precisely the ones of multiplicity 2 in $O$. Let $\tilde{A} = D = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = d\}$, and define the sets $\tilde{B} = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = e\}$ and $\tilde{C} = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = f\}$, where $d$, $e$, and $f$ are the distinct elements of $Z_3$. Then it is easy to see that $O = 2\tilde{A} + \tilde{B} + \tilde{C}$ and thus that $O$ is of type 6. Conversely, suppose that $O = 2A + B + C$, where the components satisfy the given conditions of type 6. Put $D = A = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = a\}$, and let $\mathbf{c}_1 \in (Z_3^*)^3$ be independent of $\mathbf{c}$. Denoting the distinct elements of $Z_3$ by $a'$, $b'$, and $c'$, define $\tilde{A} = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a'\}$, $\tilde{B} = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b'\}$, and $\tilde{C} = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = c'\}$. It is easy to see that $O = \tilde{A} + \tilde{B} + \tilde{C} + D$, and thus that $O$ is of type 5.

We noted earlier that an array of the form $OA(N, t+1, 2, t)$ may be viewed as the sum (juxtaposition) of copies of the full $2^{t+1}$ factorial design and replicates of a particular half fraction ($I_0$ or $I_1$). It is now clear that there is no hope of such a simple situation for similar arrays on 3 or more symbols. Even when $N$ is large enough, the components simply will not combine neatly into full factorials plus fractions.

Remark 5.9 For orthogonal arrays of the form $OA(18, 3, 3, 2)$, $OA(27, 3, 3, 2)$ or $OA(36, 3, 3, 2)$, the signature $T$ actually serves as a tool to determine whether a given orthogonal array $O$ is completely decomposable, since if it is, the only possible choices for $T$ are the ones listed in Table 1, Table 2, and Table 3, respectively. This would, for example, provide an alternate proof of Proposition 4.4.

6 Extensions

The decomposition of orthogonal arrays has received renewed attention because of recent interest in projection properties. We may immediately ask whether every orthogonal array may arise as a projection (more precisely, as the projection of an array with the same strength and more factors). The following example shows that the answer is negative. Let $O$ be defined as the set
$$O = \{(x_1, x_2, x_3, x_4) \in Z_2^4 : x_1 + x_2 + x_3 + x_4 = 0\}.$$

It is easily verified that $O$ is of the form $OA(8, 4, 2, 3)$ and has index 1. But according to the Bush bounds [3] we must have $k \leq s + t - 1$. Thus in an $OA(8, k, 2, 3)$ we have $k \leq 4$, and so $O$ does not arise as the projection of an array of strength 3 and more than four factors.

Thus we need to know whether the pathologies we have observed can actually occur when we project an array onto $m > t$ factors. For example, can an indecomposable array with index $\lambda > 1$ arise as a projection? Consider the array
$$P = \begin{bmatrix} 0&0&0&1&1&1&2&2&2&0&0&0&1&1&1&2&2&2 \\ 0&1&2&0&1&2&0&1&2&0&1&2&0&1&2&0&1&2 \\ 0&1&2&1&0&2&2&0&1&0&2&1&2&1&0&1&2&0 \\ 0&1&2&2&1&0&1&2&0&0&2&1&1&0&2&2&0&1 \end{bmatrix}.$$
It is easily checked that $P$ is an orthogonal array of the form $OA(18, 4, 3, 2)$, and its projection onto the first three rows is the indecomposable array $O$ defined in (5). This shows that indecomposable arrays of index $\lambda > 1$ cannot be ignored when studying projections onto $t+1$ factors.
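Both claims about $P$ are easy to verify mechanically; the sketch below is our own (the columns are rebuilt from the display above), checking the strength condition of Definition 2.5 and recovering the projection onto the first three factors:

```python
# Verify that P is an OA(18, 4, 3, 2) and that its projection is the array O of (5).
from itertools import combinations, product
from collections import Counter

rows = [[0,0,0,1,1,1,2,2,2,0,0,0,1,1,1,2,2,2],
        [0,1,2,0,1,2,0,1,2,0,1,2,0,1,2,0,1,2],
        [0,1,2,1,0,2,2,0,1,0,2,1,2,1,0,1,2,0],
        [0,1,2,2,1,0,1,2,0,0,2,1,1,0,2,2,0,1]]
P = list(zip(*rows))                   # 18 columns, one per run

def has_strength(runs, s, t):
    k, lam = len(runs[0]), len(runs) // s**t
    for I in combinations(range(k), t):
        counts = Counter(tuple(r[i] for i in I) for r in runs)
        if any(counts[cell] != lam for cell in product(range(s), repeat=t)):
            return False
    return True

print(has_strength(P, 3, 2))                   # True: P is an OA(18, 4, 3, 2)
projection = Counter(r[:3] for r in P)
print(projection[(0, 0, 0)], len(projection))  # 2, 17: the array O of (5)
```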

Decomposability is of interest in its own right, and has significant consequences for some well-known inequalities. For example, it is easy to see that the Bush bounds [3], originally stated for symmetric orthogonal arrays of index 1, are valid for completely decomposable arrays. These inequalities only involve the parameters $k$, $s$ and $t$. (Here $s$ need not be a prime power.) The Rao inequalities [11, 12] give lower bounds for $N$, the size of the array, in terms of the other parameters (for asymmetric arrays these include the sizes of the factors $A_1, A_2, \ldots, A_k$). As pointed out in [1], however, the bounds are far from sharp for a decomposable array. For if $O$ is of size $N$ and has a component $O_1$ of size $N_1$, then the same lower bounds apply to $N_1$ as to $N$, while $N_1$ may be significantly smaller than $N$.

We have seen how the decomposition problem grows more complicated as we move away from the case $s = 2$, with arrays on $s = 3$ symbols occupying a kind of middle ground. Not a great deal is known when $s \geq 3$. When $s = 3$ the signature of an array evidently exerts some control over its decomposition. It can be shown by a rather involved argument (see [6]) that every simple orthogonal array of the form $OA(N, t+1, 3, t)$ having signature $T = (N, 0, \ldots, 0)$ is completely decomposable. In view of the ease with which pathologies seem to arise, this result is rather surprising.

Certain problems thus readily suggest themselves:
- to describe the structure of non-regular arrays;
- to determine the structure of indecomposable arrays of index $\lambda > 1$;
- to determine the decomposition of arrays with $k > t+1$ factors;
- to determine whether the signature uniquely determines the types in all cases, up to polymorphism.

Cheng [4] has some results on the latter question for arrays on $s = 2$ symbols. Regarding projection properties, it would be of interest to know
- the extent to which a projection can contain an irregular component, and what irregular decompositions may arise as projections;
- the relation between projections of a given array on different sets of factors.

The answer to these problems would be of great utility in designing orthogonal arrays in order to have certain predetermined projection properties.

References

[1] Jay H. Beder. On Rao's inequalities for arrays of strength d. Utilitas Mathematica, 54:85-109, 1998.
[2] Jürgen Bierbrauer, K. Gopalakrishnan, and D. R. Stinson. Bounds for resilient functions and orthogonal arrays. In Yvo G. Desmedt, editor, Advances in Cryptology - CRYPTO '94, pages 247-256, 1994.
[3] Kenneth A. Bush. Orthogonal arrays of index unity. Annals of Mathematical Statistics, 23:426-434, 1952.
[4] Ching-Shui Cheng. Some projection properties of orthogonal arrays. The Annals of Statistics, 23:1223-1233, 1995.
[5] Wiebke S. Diestelkamp. Projections, decompositions and parameter inequalities for orthogonal arrays. PhD thesis, The University of Wisconsin-Milwaukee, 1998.
[6] Wiebke S. Diestelkamp. The decomposability of simple orthogonal arrays on 3 symbols having t+1 rows and strength t. Journal of Combinatorial Designs, 2000. To appear.
[7] S. Hedayat, J. Stufken, and Guoqin Su. On the construction and existence of orthogonal arrays with three levels and indexes 1 and 2. Annals of Statistics, 25:2044-2053, 1997.
[8] Rudolf Lidl and Harald Niederreiter. Finite Fields, volume 20 of Encyclopedia of Mathematics and Its Applications. Addison-Wesley Publishing Company, Reading, MA, 1983.
[9] D. Raghavarao. Construction and Combinatorial Problems in Design of Experiments. John Wiley & Sons, New York, 1971.
[10] B. L. Raktoe, A. Hedayat, and W. T. Federer. Factorial Designs. John Wiley & Sons, New York, 1981.
[11] C. Radhakrishna Rao. Factorial experiments derivable from combinatorial arrangements of arrays. Journal of the Royal Statistical Society, Supplement, IX:128-139, 1947.
[12] C. Radhakrishna Rao. Some combinatorial problems of arrays and applications to design of experiments. In J. N. Srivastava, editor, A Survey of Combinatorial Theory, chapter 29. North-Holland Publishing Company, 1973.
[13] E. Seiden and R. Zemach. On orthogonal arrays. Annals of Mathematical Statistics, 37:1355-1370, 1966.
[14] Richard P. Stanley. Enumerative Combinatorics. Wadsworth & Brooks/Cole, Monterey, 1986.
[15] J. C. Wang and C. F. Jeff Wu. A hidden projection property of Plackett-Burman and related designs. Statistica Sinica, 5:235-250, 1995.

Type   Signature       Structure             Description of Components
1      (12, 6, 4, 0)   $O = A + B + C + D$   $A = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = b\}$, $C = \{\mathbf{x} : \mathbf{c}_3'\mathbf{x} = c\}$, $D = \{\mathbf{x} : \mathbf{c}_4'\mathbf{x} = d\}$, $\mathbf{c}_i \not\sim \mathbf{c}_j$ for $i \neq j$ and $A \cap B \cap C \cap D = \emptyset$
2      (8, 12, 0, 1)   $O = A + B + C + D$   $A = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = b\}$, $C = \{\mathbf{x} : \mathbf{c}_3'\mathbf{x} = c\}$, $D = \{\mathbf{x} : \mathbf{c}_4'\mathbf{x} = d\}$, $\mathbf{c}_i \not\sim \mathbf{c}_j$ for $i \neq j$ and $|A \cap B \cap C \cap D| = 1$
3      (12, 9, 2, 0)   $O = A + B + C + D$   $A = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b\}$, $C = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = c\}$, $D = \{\mathbf{x} : \mathbf{c}_3'\mathbf{x} = d\}$, $\mathbf{c}_i \not\sim \mathbf{c}_j$ for $i \neq j$, and $a \neq b$
4      (12, 12, 0, 0)  $O = A + B + C + D$   $A = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b\}$, $C = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = c\}$, $D = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = d\}$, and $\mathbf{c}_1 \not\sim \mathbf{c}_2$, $a \neq b$, $c \neq d$
5      (18, 9, 0, 0)   $O = A + B + C + D$   $A = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b\}$, $C = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = c\}$, $D = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = d\}$, and $\mathbf{c}_1 \not\sim \mathbf{c}_2$, $a, b, c$ distinct
6      (18, 9, 0, 0)   $O = 2A + B + C$      $A = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = b\}$, $C = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = c\}$, and $a, b, c$ distinct
7      (12, 3, 6, 0)   $O = 2A + B + C$      $A = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = b\}$, $C = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = c\}$, and $\mathbf{c}_1 \not\sim \mathbf{c}_2$, $b \neq c$
8      (9, 9, 3, 0)    $O = 2A + B + C$      $A = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = b\}$, $C = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = c\}$, and $\mathbf{c}_1 \not\sim \mathbf{c}_2$, $a \neq b$
9      (8, 6, 4, 1)    $O = 2A + B + C$      $A = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = b\}$, $C = \{\mathbf{x} : \mathbf{c}_3'\mathbf{x} = c\}$, and $\mathbf{c}_i \not\sim \mathbf{c}_j$ for $i \neq j$
10     (0, 18, 0, 0)   $O = 2A + 2B$         $A = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = b\}$, and $a \neq b$
11     (0, 12, 0, 3)   $O = 2A + 2B$         $A = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = b\}$, and $\mathbf{c}_1 \not\sim \mathbf{c}_2$
12     (9, 0, 9, 0)    $O = 3A + B$          $A = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = b\}$, and $a \neq b$
13     (6, 0, 6, 3)    $O = 3A + B$          $A = \{\mathbf{x} : \mathbf{c}_1'\mathbf{x} = a\}$, $B = \{\mathbf{x} : \mathbf{c}_2'\mathbf{x} = b\}$, and $\mathbf{c}_1 \not\sim \mathbf{c}_2$
14     (0, 0, 0, 9)    $O = 4A$              $A = \{\mathbf{x} : \mathbf{c}'\mathbf{x} = a\}$

Table 3: Types of completely decomposable $OA(36, 3, 3, 2)$. $A$, $B$, $C$, $D$ are subsets of $Z_3^3$. $\mathbf{c}$, $\mathbf{c}_i$ have only nonzero components. $\sim$ denotes linear dependence. If $O$ is of type 5 or 6, then $O$ is dimorphic (of both type 5 and 6).