Orthogonal arrays obtained by repeating-column difference matrices
Discrete Mathematics 307 (2007)

Yingshan Zhang
Department of Statistics, East China Normal University, Shanghai, People's Republic of China

Received 2 June 2003; received in revised form 11 April 2006; accepted 9 June 2006
Available online 26 September 2006

Abstract

In this paper, by using repeating-column difference matrices and orthogonal decompositions of projection matrices, we propose a new general approach to constructing asymmetrical orthogonal arrays. As an application of the method, some new orthogonal arrays with run sizes 72 and 96 are constructed.
© 2006 Elsevier B.V. All rights reserved.

MSC: primary 62K15; secondary 05B1

Keywords: Asymmetrical orthogonal arrays; Generalized Hadamard products; Generalized Kronecker products; Repeating-column difference matrices; Projection matrices; Permutation matrices

1. Introduction

An n × m matrix A, having k_i columns with p_i (≥ 2) levels, i = 1, ..., t, m = k_1 + ⋯ + k_t, p_i ≠ p_j for i ≠ j, is called an orthogonal array (OA) of strength d and size n if each n × d submatrix of A contains all possible 1 × d row vectors with the same frequency. Unless stated otherwise, we consider orthogonal arrays of strength 2, using the notation L_n(p_1^{k_1}, ..., p_t^{k_t}) for such an array. An orthogonal array is said to be mixed-level (or asymmetrical) if t ≥ 2.

Difference matrices are essential for the construction of many asymmetrical orthogonal arrays [2]. Using the notation for additive groups, a difference matrix having level p is a λp × m matrix with entries from a finite Abelian group G of cardinality p such that the vector difference of any two distinct columns of the array, say d_i − d_j for i ≠ j, contains every element of G exactly λ times. We will denote such an array by D(λp, m; p), although this notation suppresses the relevance of the group G. In most of our examples, G will correspond to the additive group associated with a Galois field GF(p).
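Both defining properties above are easy to test by direct counting. A minimal sketch in Python (the row-list encoding of arrays and the function names are our own, not the paper's):

```python
from itertools import combinations
from collections import Counter

def is_orthogonal_array(rows, strength=2):
    """Check the OA property: every choice of `strength` columns contains
    each possible level combination with the same frequency."""
    ncols = len(rows[0])
    for cols in combinations(range(ncols), strength):
        counts = Counter(tuple(r[c] for c in cols) for r in rows)
        # number of possible level combinations for the chosen columns
        n_combos = 1
        for c in cols:
            n_combos *= len({r[c] for r in rows})
        # every combination must occur, all with equal frequency
        if len(counts) != n_combos or len(set(counts.values())) != 1:
            return False
    return True

def is_difference_matrix(rows, p):
    """Check the D(lambda*p, m; p) property over Z_p: for every pair of
    distinct columns, the entrywise differences cover each element of Z_p
    the same number (lambda) of times."""
    ncols = len(rows[0])
    for i, j in combinations(range(ncols), 2):
        diffs = Counter((r[i] - r[j]) % p for r in rows)
        if len(diffs) != p or len(set(diffs.values())) != 1:
            return False
    return True
```

For instance, the rows of the 4-run array with columns (0,0,1,1), (0,1,0,1), (0,1,1,0) pass the first check, and a normalized Hadamard matrix of order 4 (written over Z_2) passes the second.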
The difference matrix D(λp, m; p) is called a generalized Hadamard matrix if λp = m. In particular, D(λ2, λ2; 2) is the usual Hadamard matrix. If a D(λp, m; p) exists, it can always be constructed so that one of its rows and one of its columns contain only the zero element of G. Deleting this column from D(λp, m; p), we obtain a difference matrix, denoted by D_0(λp, m − 1; p), called an atom of the difference matrix D(λp, m; p) (or an atomic difference matrix).

E-mail address: ysh_zhang@163.com

Without loss of generality,
the matrix D(λp, m; p) can be written as

D(λp, m; p) = ( 0  0 )
              ( 0  A ) = (0  D_0(λp, m − 1; p)).

This property is important for the following discussions.

For two matrices A = (a_ij)_{n×m} and B = (b_ij)_{s×t}, both with entries from the group G, define their Kronecker sum [4] to be A ⊕ B = (a_ij + B)_{1≤i≤n, 1≤j≤m}, where each submatrix a_ij + B of A ⊕ B is obtained by adding a_ij to each entry of B. Shrikhande [4] showed that A ⊕ B is a difference matrix if both A and B are difference matrices. Conversely, Zhang [7] showed that A is a difference matrix if both A ⊕ B and B are difference matrices.

It is known that the Kronecker sum L = L_{μp}(p^s) ⊕ D(λp, m; p) (or L = D(λp, m; p) ⊕ L_{μp}(p^s)) is an orthogonal array if L_{μp}(p^s) is an orthogonal array and D(λp, m; p) is a difference matrix [1]. By setting μ = s = 1, the Kronecker sum method reduces to the well-known construction of Bose and Bush [2], i.e., L = (p) ⊕ D(λp, m; p) (or L = D(λp, m; p) ⊕ (p)) is an orthogonal array if D(λp, m; p) is a difference matrix, where (p) denotes the column (0, 1, ..., p − 1)^T. Conversely, Zhang [7] found that the difference matrix D(λp, m; p) can also be recovered from the orthogonal array L = (p) ⊕ D(λp, m; p), i.e., D(λp, m; p) is a difference matrix if L = (p) ⊕ D(λp, m; p) is an orthogonal array. Let D(r, m; p) = (d_ij); we have

L = D(r, m; p) ⊕ (p) = [S_1(0_r ⊕ (p)), ..., S_m(0_r ⊕ (p))],   (1)

where S_j = diag(σ(d_1j), ..., σ(d_rj)) and σ(d_ij) is a permutation matrix such that

σ(d_ij)(p) = d_ij + (p),   (2)

for any i, j; here d_ij + (p) is obtained by adding d_ij to each entry of (p).

The idea of Kronecker sums and difference matrices can be generalized as follows [7]. Let n = pq.
If A is an orthogonal array L_p(p_1^{k_1}, ..., p_t^{k_t}) with the partition A = [L_p(p_1^{k_1}), ..., L_p(p_t^{k_t})], and if there exist atoms D_0(λ_1 p_1, m_1 − 1; p_1), ..., D_0(λ_t p_t, m_t − 1; p_t) of difference matrices D(λ_1 p_1, m_1; p_1), ..., D(λ_t p_t, m_t; p_t), respectively, where q = λ_i p_i and p and q are both multiples of the p_i's, then the array

[L_p ⊕ 0_q, 0_p ⊕ L_q, L_p(p_1^{k_1}) ⊕ D_0(λ_1 p_1, m_1 − 1; p_1), ..., L_p(p_t^{k_t}) ⊕ D_0(λ_t p_t, m_t − 1; p_t)]

is also an orthogonal array for any orthogonal arrays L_p and L_q. In this paper, we will prove that the array

[L_p ⊕ 0_q, 0_p ⊕ L_q, (p) ⊕ D_0(λ_0 p, m_0 − 1; p), L_p(p_1^1) ⊕ D_0(λ_1 p_1, m_1 − 1; p_1), ..., L_p(p_t^1) ⊕ D_0(λ_t p_t, m_t − 1; p_t)]

is also an orthogonal array for any orthogonal arrays L_p and L_q, provided that A = [L_p(p_1^1), ..., L_p(p_t^1)] is a normal orthogonal array and D_0 = [D_0(λ_0 p, m_0 − 1; p), D_0(λ_1 p_1, m_1 − 1; p_1), ..., D_0(λ_t p_t, m_t − 1; p_t)] is an atomic repeating-column difference matrix (Section 2).
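The Kronecker sum that drives all of these constructions is mechanical to compute. A small sketch over Z_p (function name ours):

```python
def kronecker_sum(A, B, p):
    """Kronecker sum A (+) B over Z_p: replace each entry a of A by the
    block a + B, i.e. add a to every entry of B modulo p.
    A is n x m, B is s x t; the result is (n*s) x (m*t)."""
    return [[(a + b) % p for a in arow for b in brow]
            for arow in A for brow in B]
```

With A = (2) = (0, 1)^T and B a 12-run difference matrix D(12, 8; 2), this reproduces the Bose–Bush construction of an L_24(2^8); the toy case (2) ⊕ D(4, 4; 2) already yields an L_8(2^4).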
Section 2 contains the basic concepts and main theorems of repeating-column difference matrices, while in Section 3 we describe the method of construction. Some new orthogonal arrays with run sizes 72 and 96 are constructed in Section 4.

2. Repeating-column difference matrices

In order to define the repeating-column difference matrices, we must define a generalized Hadamard product [11].

Definition 2.1. Let h(x, y) be a mapping from Ω_1 × Ω_2 to V, where Ω_1 × Ω_2 = {(x, y) : x ∈ Ω_1, y ∈ Ω_2} and Ω_1, Ω_2, V are some sets. For two matrices A = (a_ij)_{n×m} with entries from Ω_1 and B = (b_ij)_{n×m} with entries from Ω_2, define their generalized Hadamard product, denoted by ∘_h, as follows:

A ∘_h B = (h(a_ij, b_ij))_{n×m} = (h(a_ij, b_ij))_{1≤i≤n, 1≤j≤m},

where each entry h(a_ij, b_ij) of A ∘_h B may be a scalar, a vector or a matrix under the mapping h(x, y).

Unless stated otherwise, we consider the sets Ω_1 and Ω_2 to be finite, using the notations Ω_1 = {0, 1, ..., p − 1} and Ω_2 = {0, 1, ..., q − 1} for two example sets. When V is a row-vector space of dimension m, the mapping h(i, j) can be represented by a pq × m matrix D, i.e.,

h: [(p) ⊕ 0_q] ∘_h [0_p ⊕ (q)] = D = (d_(1), ..., d_(pq))^T,

with h(i, j) = d_(iq+j+1)^T (i.e., h(i, j) is the (iq + j + 1)th row of D). For this case, in the following discussions the generalized Hadamard product ∘_h will only be defined by [(p) ⊕ 0_q] ∘_h [0_p ⊕ (q)] = D. Note that [(p) ⊕ 0_q] ∘_h [0_p ⊕ (q)] = (p) ⊕_h (q); the generalized Hadamard product ∘_h will thus also be defined by (p) ⊕_h (q) = D. This is a form of generalized Kronecker product.

Let Ω_1, Ω_2, V be multiplicative groups. When h(i, j) = ij, the generalized Hadamard product ∘_h is the usual Hadamard product in matrix theory, denoted by ∘. Let Ω_1, Ω_2, V be additive (Abelian) groups. When h(i, j) = i + j, the generalized Hadamard product ∘_h is the usual addition of matrices in matrix theory, denoted by +.
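Definition 2.1 is simply an entrywise combination of two equal-shaped matrices through an arbitrary map h. A minimal sketch (function name ours), together with the two classical special cases h(i, j) = ij and h(i, j) = i + j:

```python
def generalized_hadamard(A, B, h):
    """Generalized Hadamard product of two equal-shaped matrices:
    combine corresponding entries a_ij, b_ij through an arbitrary
    map h(x, y); the result's entries may be scalars, tuples, etc."""
    return [[h(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
```

For example, `generalized_hadamard(A, B, lambda i, j: i * j)` is the usual Hadamard product, while `lambda i, j: (i + j) % p` gives modulus addition of matrices over Z_p.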
Let Ω_1 = Ω_2 = V = {0, 1, ..., p − 1} and h(i, j) = i + j (mod p). Then the generalized Hadamard product ∘_h is the usual modulus addition of matrices, also denoted by +. In particular, we denote A + A (mod p) by A^{+2}, which is very useful for the construction of orthogonal arrays.

Let h(i, j) = (i, j), where Ω_1 = {0, 1, ..., p − 1}, Ω_2 = {0, 1, ..., q − 1} and V = {(i, j) : i ∈ Ω_1, j ∈ Ω_2}. The corresponding generalized Hadamard product is called a repeating operation, denoted by ∗, which can be used for the construction of repeating-column difference matrices. Note that we often write the elements (i, j) of V in the form ij instead of (i, j).

Similarly, let h(i, j) = iq + j, where Ω_1 = {0, 1, ..., p − 1}, Ω_2 = {0, 1, ..., q − 1} and V = {0, 1, ..., pq − 1}. The corresponding generalized Hadamard product is called a joining operation, denoted by ⊙, which can be used for the construction of asymmetrical orthogonal arrays with large levels from those with small levels.

Theorem 2.2. Let A, B, C, D be matrices and T a permutation matrix. Then

(A ⊕ B) ∗ (C ⊕ D) = (A ∗ C) ⊕ (B ∗ D)

and

T(A ∗ B ∗ C ∗ D) = TA ∗ TB ∗ TC ∗ TD.

Let m(A) be the matrix image of an array A ([10,11] or Section 3); we have
Theorem 2.3. Suppose that a and b are two orthogonal arrays which have only one column, with run size n, i.e., a = L_n(p) = (a_1, ..., a_n)^T and b = L_n(q) = (b_1, ..., b_n)^T. Then the matrix image of a ∗ b (or a ⊙ b) gives the following orthogonal decomposition:

m(a ∗ b) = m(a) + m(b) + n·m(a) ∘ m(b) = m(b ∗ a) = m(a ⊙ b) = m(b ⊙ a), if m(a)m(b) = 0,

where a ∗ b = (a_1 b_1, ..., a_n b_n)^T (or a ⊙ b = (a_1 q + b_1, ..., a_n q + b_n)^T) is the repeating (or joining) operation of a and b in Definition 2.1, and m(a) ∘ m(b) is the usual Hadamard product in matrix theory.

Corollary 2.4. Let K_1 = L_n(p_1, ..., p_m) = (L_n(p_1), ..., L_n(p_m)) and K_2 = L_n(q_1, ..., q_m) = (L_n(q_1), ..., L_n(q_m)) be two orthogonal arrays of run size n. Denote (L_n(p_1) ∗ L_n(q_1), ..., L_n(p_m) ∗ L_n(q_m)) =: K_1 ∗ K_2; then the matrix image of K_1 ∗ K_2 satisfies

m(K_1 ∗ K_2) ≥ m(K_1) + m(K_2) + n·m(K_1) ∘ m(K_2), if m(K_1)m(K_2) = 0.

Corollary 2.5. Suppose that L_{n_1} = L_{n_1}(p_1, ..., p_m) = (L_{n_1}(p_1), ..., L_{n_1}(p_m)) and L_{n_2} = L_{n_2}(q_1, ..., q_m) = (L_{n_2}(q_1), ..., L_{n_2}(q_m)) are two orthogonal arrays. Then (L_{n_1} ⊕ 0_{n_2}) ∗ (0_{n_1} ⊕ L_{n_2}) is also an orthogonal array. In this case, its matrix image satisfies

m((L_{n_1} ⊕ 0_{n_2}) ∗ (0_{n_1} ⊕ L_{n_2})) ≥ m(L_{n_1}) ⊗ P_{n_2} + P_{n_1} ⊗ m(L_{n_2}) + m(L_{n_1}) ⊗ m(L_{n_2}).

Corollary 2.6. Suppose that p is a prime and a and b are OAs which have only one column, with run size n and p levels, i.e., a = L_n(p) = (a_1, ..., a_n)^T and b = L_n(p) = (b_1, ..., b_n)^T. Then L_n(p^{p−1}) = (a + b, 2a + b, ..., (p − 1)a + b) (mod p) is also an OA whose MI is n·m(a) ∘ m(b) if m(a)m(b) = 0. In particular, L_{p^2}(p^{p−1}) = ((p) ⊕ (p), 2(p) ⊕ (p), ..., (p − 1)(p) ⊕ (p)) (mod p) is also an OA whose MI is τ_p ⊗ τ_p.

These theorems and corollaries can be proved easily and are also found in Zhang [5–7]. In order to find a generalized result on repeating-column difference matrices, we must study the structure of the associated orthogonal arrays.

Definition 2.7.
Let L_p = L_p(p_1, ..., p_m) = (C_1, ..., C_m) be an orthogonal array, where C_l is a vector with entries from an additive group G_l of order p_l for each l. The array L_p is called normal over G_0 if the set consisting of all entries of the vector C_0 = C_1 ∗ ⋯ ∗ C_m is an additive group G_0 of order p, where G_0 ⊆ G_1 × ⋯ × G_m := {(x_1, ..., x_m) : x_l ∈ G_l, l = 1, 2, ..., m} with the usual addition:

(x_1, x_2, ..., x_m) + (y_1, y_2, ..., y_m) = (x_1 + y_1, x_2 + y_2, ..., x_m + y_m), (x_1, x_2, ..., x_m), (y_1, y_2, ..., y_m) ∈ G_0.

Example 2.8. The following arrays are normal:

(a) L_4(2^2) =
0 0
0 1
1 0
1 1
and L_4(2^3) =
0 0 0
0 1 1
1 0 1
1 1 0
over G_0^4 = {(0, 0), (0, 1), (1, 0), (1, 1)} and G_0^4 = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}, respectively.
(b) L_6(3, 2) and L_8(2^7) are normal over G_0^6 = {(0, 0), (0, 1), ..., (2, 1)} and G_0^8 = {(0, 0, 0, 0, 0, 0, 0), ..., (1, 1, 0, 1, 0, 0, 1)}, respectively.

(c) L_12(2, 6), L_12(2^3, 3) and L_12(4, 3) are normal over G_0^12 = {(0, 0), (0, 1), ..., (1, 5)}, G_0^12 = {(0, 0, 0, 0), (0, 1, 1, 0), ..., (1, 1, 0, 2)} and G_0^12 = {(0, 0), (0, 1), ..., (3, 2)}, respectively.

Definition 2.9. Let L_p = L_p(p_1, ..., p_m) = (C_1, ..., C_m) be a normal orthogonal array over G_0, denote C_0 = C_1 ∗ ⋯ ∗ C_m, and let D_0 = D(q, k_0; p) = D̃_1 ∗ ⋯ ∗ D̃_m be a difference matrix over G_0 having p levels. Suppose that [D̃_i, D_i] is a q × (k_0 + k_i) difference matrix having p_i levels for i = 1, 2, ..., m. Thus, C = [C_0, C_1, ..., C_m] is a partitioned matrix in which the m + 1 blocks C_0, C_1, ..., C_m are orthogonal arrays of strength 1 having p, p_1, ..., p_m levels, respectively. We call D = [D_0, D_1, ..., D_m] a repeating-column difference matrix about C = [C_0, C_1, ..., C_m]. The repeating-column difference matrix D is called an atomic repeating-column difference matrix if D_j is atomic for each j.

Theorem 2.10. Let L_p = L_p(p_1, ..., p_m) = (C_1, ..., C_m) be a normal orthogonal array and denote C_0 = C_1 ∗ ⋯ ∗ C_m. Then there exist p permutation matrices σ_0(x), x ∈ G_0, and p_l permutation matrices σ_l(x_l), x_l ∈ G_l, l = 1, 2, ..., m, such that

σ_0(x)C_0 = x + C_0, σ_0(x)C_l = σ_l(x_l)C_l = x_l + C_l, x = (x_1, x_2, ..., x_m) ∈ G_0, l = 1, 2, ..., m,

where x_l + C_l stands for the vector obtained by adding x_l to each entry of C_l. In other words, we have σ_0(x)L_p = (σ_1(x_1)C_1, ..., σ_m(x_m)C_m). In this case, the matrix images of C_0, C_1, C_2, ..., C_m satisfy the following equations:

m(C_l) = m(σ_0(x)C_l), x = (x_1, x_2, ..., x_m) ∈ G_0, l = 0, 1, 2, ..., m.
In particular, if we let Δ = τ_p − m(C_1) − ⋯ − m(C_m), then we have σ_0(x) Δ σ_0(x)^T = Δ.

Proof. Since the set of entries of the vector C_0 = C_1 ∗ ⋯ ∗ C_m is an additive group G_0, for any given x ∈ G_0 there exists a permutation matrix σ_0(x) such that σ_0(x)C_0 = x + C_0, where x + C_0 stands for the vector obtained by adding x to each entry of C_0. Furthermore, since the order of the group G_0 is p, i.e., x + C_0 = C_0 iff x = (0, 0, ..., 0), we have σ_0(x) ≠ σ_0(y) if x ≠ y. Similarly, because C_l is a vector with entries from a group G_l, having the form C_l = T_l(0_{λ_l} ⊕ (p_l)), where λ_l p_l = p and T_l is a permutation matrix for each l, for any given x_l ∈ G_l there exists a permutation matrix σ_l(x_l) such that σ_l(x_l)C_l = x_l + C_l = T_l(0_{λ_l} ⊕ [x_l + (p_l)]), since there exists a permutation matrix π_l(x_l) such that π_l(x_l)(p_l) = x_l + (p_l). From the above results and Theorem 2.2, we have

(σ_0(x)C_1) ∗ ⋯ ∗ (σ_0(x)C_m) = σ_0(x)C_0 = x + C_0 = (x_1 + C_1) ∗ ⋯ ∗ (x_m + C_m) = (σ_1(x_1)C_1) ∗ ⋯ ∗ (σ_m(x_m)C_m),

i.e., σ_0(x)C_l = σ_l(x_l)C_l, l = 1, 2, ..., m.

Now we prove that m(C_l) = m(σ_0(x)C_l), x ∈ G_0, l = 0, 1, 2, ..., m. In fact, m(C_0) = m(σ_0(x)C_0) is trivial, since m(C_0) = τ_p = σ_0(x) τ_p σ_0(x)^T = m(σ_0(x)C_0). Since G_l is an additive group of order p_l, there exists a permutation matrix π_l(x_l) such that π_l(x_l)(p_l) = x_l + (p_l). By the form of C_l and the definition of matrix image (Section 3), for any x_l ∈ G_l,

m(σ_0(x)C_l) = m(σ_l(x_l)C_l) = m(x_l + C_l) = m(T_l(0_{λ_l} ⊕ [x_l + (p_l)])) = m(T_l(0_{λ_l} ⊕ [π_l(x_l)(p_l)])) = T_l(P_{λ_l} ⊗ [π_l(x_l) τ_{p_l} π_l(x_l)^T])T_l^T = T_l(P_{λ_l} ⊗ τ_{p_l})T_l^T = m(C_l),

for each l. This completes the proof.

Corollary 2.11. Let L_q = (C_1, ..., C_m) be normal and D = (D_0, D_1, ..., D_m) a repeating-column difference matrix about C_0, C_1, ..., C_m with entries from G_0, G_1, ..., G_m, where C_0 = C_1 ∗ ⋯ ∗ C_m.
Then, for any vector a_0 = a_1 ∗ ⋯ ∗ a_m with entries from G_0, the array

[a_0 + D_0, a_1 + D_1, ..., a_m + D_m]

is also a repeating-column difference matrix, where a_j + D_j means that a_j is added to each column of D_j.

Definition 2.12. Let L_q = (C_1, ..., C_m) be normal and D = (D_0, D_1, ..., D_m) a repeating-column difference matrix about C_0, C_1, ..., C_m with entries from G_0, G_1, ..., G_m, where C_0 = C_1 ∗ ⋯ ∗ C_m. Then each of the following operations is called a transformation of repeating-column difference matrices.

(a) Exchange any two runs of D, or any two columns of D_l for a given l = 0, 1, 2, ..., m.
(b) Add an element x_l ∈ G_l to some column of D_l for a given l = 0, 1, 2, ..., m.
(c) Add an element x = (x_1, ..., x_m) ∈ G_0 to some row of D_0 while adding x_l ∈ G_l to the same row of D_l for all l = 1, 2, ..., m.

By Theorem 2.10 and Corollary 2.11, it is easy to see that D is still a repeating-column difference matrix after a transformation of repeating-column difference matrices is applied to D.
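Section 3 works throughout with matrix images. Anticipating the definition given there (S_j^2 = Y^T m(a_j) Y, the quadratic form of the jth factor's sum of squares), the matrix image of a single column can be computed directly; a minimal sketch with our own function name:

```python
def matrix_image(col):
    """Matrix image m(a) of a single column a: the unique symmetric
    matrix M with S^2 = Y^T M Y, where S^2 is the factor's sum of
    squares. Concretely, M = sum_i |I_i|^{-1} E_i - n^{-1} J, where
    E_i is the indicator outer product of the level-i runs and J is
    the all-ones matrix."""
    n = len(col)
    groups = {}
    for s, level in enumerate(col):
        groups.setdefault(level, []).append(s)
    M = [[-1.0 / n] * n for _ in range(n)]   # the -J/n part
    for idx in groups.values():
        w = 1.0 / len(idx)
        for s in idx:
            for t in idx:
                M[s][t] += w                 # the |I_i|^{-1} E_i part
    return M
```

For the single column (r) = (0, 1, ..., r−1)^T this returns τ_r = I − J/r, and for two columns of a strength-2 orthogonal array the two images multiply to zero, matching the orthogonality property the construction method relies on.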
3. Orthogonal arrays and repeating-column difference matrices

Suppose that an experiment is performed according to the array A = (a_ij)_{n×m} = (a_1, ..., a_m) and that Y = (y_1, y_2, ..., y_n)^T is the experimental data vector. In the analysis of variance, S_j^2, the sum of squares of the jth factor, is defined as

S_j^2 = Σ_{i=1}^{p_j} (1/|I_ij|)(Σ_{s∈I_ij} y_s)^2 − (1/n)(Σ_{s=1}^n y_s)^2,

where I_ij = {s : a_sj = i} and |I_ij| is the number of elements in I_ij. From the definition, S_j^2 is a quadratic form in Y, and there exists a unique symmetric matrix A_j such that S_j^2 = Y^T A_j Y. The matrix A_j is called the matrix image (MI) of the jth column a_j of A, denoted by m(a_j) = A_j. The MI of a subarray of A is defined as the sum of the MIs of all its columns. In particular, we denote the MI of A by m(A), and the MIs of 1_r and (r) are P_r and τ_r, respectively. If a design is an orthogonal array, then the MIs of its columns have some interesting properties, which can be used to construct orthogonal arrays.

Theorem 3.1. For any permutation matrix S and any array L,

m(S(L ⊕ 1_r)) = S(m(L) ⊗ P_r)S^T and m(S(1_r ⊕ L)) = S(P_r ⊗ m(L))S^T.

Theorem 3.2. Let the array A be an orthogonal array of strength 1, i.e., A = (a_1, ..., a_m) = (S_1(0_{r_1} ⊕ (p_1)), ..., S_m(0_{r_m} ⊕ (p_m))), where r_i p_i = n and S_i is a permutation matrix, for i = 1, ..., m. Then the following statements are equivalent.

(1) A is an orthogonal array of strength 2.
(2) The MI of A is a projection matrix.
(3) The MIs of any two columns of A are orthogonal, i.e., m(a_i)m(a_j) = 0 (i ≠ j).
(4) The projection matrix τ_n can be decomposed as τ_n = m(a_1) + ⋯ + m(a_m) + Δ, where rk(Δ) = n − 1 − Σ_{j=1}^m (p_j − 1) is the rank of the matrix Δ.

Definition 3.3. An orthogonal array A is said to be saturated if Σ_{j=1}^m (p_j − 1) = n − 1 (or, equivalently, m(A) = τ_n).

Corollary 3.4. Let (L, H) and K be orthogonal arrays of size n.
Then (K, H) is an orthogonal array if m(K) ≤ m(L), where m(K) ≤ m(L) means that the difference m(L) − m(K) is nonnegative definite.

Corollary 3.5. Suppose that L and H are orthogonal arrays. Then K = (L, H) is also an orthogonal array if m(L) and m(H) are orthogonal, i.e., m(L)m(H) = 0. In this case, m(K) = m(L) + m(H).

Theorem 3.6. Suppose that D_0(q, m; p) is an atomic difference matrix. Then (p) ⊕ D_0(q, m; p) is an orthogonal array whose matrix image satisfies m((p) ⊕ D_0(q, m; p)) ≤ τ_p ⊗ τ_q.

These theorems and corollaries can be found in Zhang [5–7] and Zhang et al. [8]. Our procedure for constructing mixed-level orthogonal arrays consists of the following three steps [10]:

Step 1: Orthogonally decompose the projection matrix τ_n: τ_n = A_1 + ⋯ + A_k, where A_i A_j = 0 (i ≠ j).
Step 2: Find an orthogonal array L_i such that m(L_i) ≤ A_i.
Step 3: Lay out the new orthogonal array L by Corollaries 3.4 and 3.5: L = (L_1, ..., L_{k_1}) (k_1 ≤ k).

Let L_p = L_p(p_1, ..., p_m) = (C_1, ..., C_m) be a normal orthogonal array over G_0, denote C_0 = C_1 ∗ ⋯ ∗ C_m, and let D_0 = D(q, k_0; p) = D̃_1 ∗ ⋯ ∗ D̃_m be an atomic difference matrix over G_0 having p levels. Suppose that [D̃_i, D_i] is an atomic q × (k_0 + k_i) difference matrix having p_i levels for i = 1, 2, ..., m. Thus, C = [C_0, C_1, ..., C_m] is a partitioned matrix in which the m + 1 blocks C_0, C_1, ..., C_m are orthogonal arrays of strength 1 having p, p_1, ..., p_m levels, respectively. Then D = [D_0, D_1, ..., D_m] is an atomic repeating-column difference matrix about C = [C_0, C_1, ..., C_m] if D is a repeating-column difference matrix about C.

Theorem 3.7. The matrix D = [D_0, D_1, ..., D_m] is an atomic repeating-column difference matrix about C = [C_0, C_1, ..., C_m] if and only if L = [C_0 ⊕ D_0, C_1 ⊕ D_1, ..., C_m ⊕ D_m] is an orthogonal array whose matrix image satisfies m(L) ≤ τ_p ⊗ τ_q. In particular, m(C_0 ⊕ D_0) ≤ Σ_{i=1}^m m(C_i ⊕ D̃_i) + Δ ⊗ τ_q, where Δ = τ_p − (m(C_1) + ⋯ + m(C_m)).

Proof. Consider the following orthogonal decomposition of the projection matrix τ_p ⊗ τ_q:

τ_p ⊗ τ_q = m(C_1) ⊗ τ_q + ⋯ + m(C_m) ⊗ τ_q + Δ ⊗ τ_q,

where τ_p = m(L_p) + Δ = m(C_1) + ⋯ + m(C_m) + Δ. By Theorems 3.1, 3.2 and 3.6, we have

m(C_j ⊕ [D̃_j, D_j]) ≤ m(C_j) ⊗ τ_q, j = 1, 2, ..., m,

i.e., [C_1 ⊕ [D̃_1, D_1], ..., C_m ⊕ [D̃_m, D_m]] is an orthogonal array. By Corollary 3.5, we have m(C_i ⊕ D̃_i) + m(C_i ⊕ D_i) = m(C_i ⊕ [D̃_i, D_i]), i = 1, 2, ..., m. Thus

m(C_1 ⊕ D_1) + ⋯ + m(C_m ⊕ D_m) + Δ ⊗ τ_q ≤ τ_p ⊗ τ_q,
m(C_1 ⊕ D̃_1) + ⋯ + m(C_m ⊕ D̃_m) ≤ τ_p ⊗ τ_q,

and

[m(C_1 ⊕ D_1) + ⋯ + m(C_m ⊕ D_m) + Δ ⊗ τ_q][m(C_1 ⊕ D̃_1) + ⋯ + m(C_m ⊕ D̃_m)] = 0.

By their orthogonality and Theorem 3.2, L = [C_0 ⊕ D_0, C_1 ⊕ D_1, ..., C_m ⊕ D_m] is an orthogonal array whose matrix image satisfies m(L) ≤ τ_p ⊗ τ_q if m(C_0 ⊕ D_0) ≤ (m(C_1 ⊕ D̃_1) + ⋯ + m(C_m ⊕ D̃_m)) + Δ ⊗ τ_q. In fact, let D̃_l = (d^l_{ij})_{q×k_0} = (d^l_1, ..., d^l_{k_0}) be an atomic difference matrix for each l, and denote C_0 = (p). From Eqs.
(1) and (2), we have

[0, D_0] ⊕ (p) = (S^0_0(0_q ⊕ (p)), S^0_1(0_q ⊕ (p)), ..., S^0_{k_0}(0_q ⊕ (p))),
[0, D̃_l] ⊕ C_l = (S^l_0(0_q ⊕ C_l), S^l_1(0_q ⊕ C_l), ..., S^l_{k_0}(0_q ⊕ C_l)),

where S^l_0 = I_{pq} and S^l_j = diag(σ_l(d^l_{1j}), ..., σ_l(d^l_{qj})), j = 1, 2, ..., k_0; l = 1, ..., m. By Theorems 3.1 and 3.2, we have

m([0, D_0] ⊕ C_0) = m([0, D_0] ⊕ (p)) = Σ_{j=0}^{k_0} S^0_j(P_q ⊗ τ_p)(S^0_j)^T
= Σ_{i=1}^m Σ_{j=0}^{k_0} S^0_j(P_q ⊗ m(C_i))(S^0_j)^T + Σ_{j=0}^{k_0} S^0_j(P_q ⊗ Δ)(S^0_j)^T,
since τ_p = m(L_p) + Δ = m(C_1) + ⋯ + m(C_m) + Δ. The above decompositions are orthogonal because of the orthogonality in each step. Thus, all the terms S^0_j(P_q ⊗ m(C_i))(S^0_j)^T, S^0_s(P_q ⊗ Δ)(S^0_s)^T, i = 1, 2, ..., m; j, s = 0, 1, ..., k_0, are orthogonal to each other. By Theorem 2.10, we have

S^0_j(P_q ⊗ Δ)(S^0_j)^T ≤ S^0_j(I_q ⊗ Δ)(S^0_j)^T = diag(σ_0(d^0_{1j}) Δ σ_0(d^0_{1j})^T, ..., σ_0(d^0_{qj}) Δ σ_0(d^0_{qj})^T) = diag(Δ, ..., Δ) = I_q ⊗ Δ,

for j = 0, 1, ..., k_0, and

S^0_j(P_q ⊗ m(C_i))(S^0_j)^T = m(S^0_j(0_q ⊕ C_i)) = m(S^i_j(0_q ⊕ C_i)) = m(d^i_j ⊕ C_i).

Thus, we obtain

m([0, D_0] ⊕ C_0) ≤ P_q ⊗ τ_p + Σ_{i=1}^m m(D̃_i ⊕ C_i) + τ_q ⊗ Δ,

i.e.,

m(C_0 ⊕ D_0) = K(p, q) m(D_0 ⊕ C_0) K(p, q)^T = K(p, q)(m([0, D_0] ⊕ C_0) − P_q ⊗ τ_p)K(p, q)^T
≤ K(p, q)[Σ_{i=1}^m m(D̃_i ⊕ C_i) + τ_q ⊗ Δ]K(p, q)^T = Σ_{i=1}^m m(C_i ⊕ D̃_i) + Δ ⊗ τ_q.

This completes the proof of the "only if" part. Conversely, let L = [C_0 ⊕ D_0, C_1 ⊕ D_1, ..., C_m ⊕ D_m] be an orthogonal array whose matrix image satisfies m(L) ≤ τ_p ⊗ τ_q. Then C_0 ⊕ [0, D_0] and C_i ⊕ [0, D̃_i, D_i] are orthogonal arrays for all i. Thus [0, D_0] and [0, D̃_i, D_i] are difference matrices for all i, i.e., [0, D] is a repeating-column difference matrix. It means that the matrix D is an atomic repeating-column difference matrix about C. This completes the proof.

Corollary 3.8. The matrix D = [D_0, D_1, ..., D_m] is an atomic repeating-column difference matrix about C if and only if L = [L_p ⊕ 0_q, 0_p ⊕ L_q, C_0 ⊕ D_0, C_1 ⊕ D_1, ..., C_m ⊕ D_m] is an orthogonal array for any orthogonal arrays L_p and L_q.

4. Examples

4.1. Constructions of orthogonal arrays of run size 72

Zhang et al.
[11] have constructed an orthogonal array L_72(·) whose structure is

L_72(·) = [0_3 ⊕ (12) ⊕ 0_2, 0_3 ⊕ L^(=)_24(2^8), L^(=)_36(3^8) ⊕ 0_2, (M_1 ⊗ Q_1)(L_36(6^2) ⊕ 0_2), (M_2 ⊗ Q_2)(L_36(6^2) ⊕ 0_2)],

where Q_1 = K(2, 2), Q_2 = K(2, 2) diag(I_2, N_2) K(2, 2)^T and

M_1 = K(3, 6) diag(N_3, N_3^2, Q_1 ⊗ I_3) K(3, 6)^T, M_2 = K(3, 6) diag(N_3^2, N_3, Q_2 ⊗ I_3) K(3, 6)^T;

and where the orthogonal arrays satisfy

L^(=)_24(2^8) = D(12, 8; 2) ⊕ (2), L^(=)_36(3^8) = (3) ⊕ D(12, 8; 3),
L_36(6^2) = [[(3) ⊕ (3) ⊕ 0_4] ⊙ [0_18 ⊕ (2)], [(3) ⊕ (3) ⊕ 0_4] ⊙ [0_9 ⊕ (2) ⊕ (2)]],

in which D(12, 8; 2) and D(12, 8; 3) are some difference matrices.
It is easy to prove that there exists a difference matrix D(12, 4; 6) = D(12, 4; 3) ∗ D(12, 4; 2) such that

[(3) ⊕ D(12, 4; 3) ⊕ 0_2] ∗ [0_3 ⊕ D(12, 4; 2) ⊕ (2)] = [(M_1 ⊗ Q_1)(L_36(6^2) ⊕ 0_2), (M_2 ⊗ Q_2)(L_36(6^2) ⊕ 0_2)].

Hence the array

[D(12, 4; 6) ⊕ [((3) ⊕ 0_2) ∗ (0_3 ⊕ (2))], D(12, 8; 3) ⊕ ((3) ⊕ 0_2), D(12, 8; 2) ⊕ (0_3 ⊕ (2))]

is also an orthogonal array. By Theorem 3.7, D = [D(12, 4; 6), D(12, 8; 3), D(12, 8; 2)] is a repeating-column difference matrix about [(6), (3) ⊕ 0_2, 0_3 ⊕ (2)].

Let D(12, 4; 6) = (a_0, D(12, 3; 6)), where a_0 = a_1 ∗ a_2. By the transformation of repeating-column difference matrices,

[0, D_0] := [a_0 + D(12, 4; 6), a_1 + D(12, 8; 3), a_2 + D(12, 8; 2)]

is also a repeating-column difference matrix. Thus, D_0 is an atomic repeating-column difference matrix. From the above D_0, we can construct some atomic repeating-column difference matrices of run size 12 having a large number of columns, such as

D_0 = [D_0(12, 4; 6), D_0(12, 7; 3), D_0(12, 7; 2)] and D_0 = [D_0(12, 5; 6), D_0(12, 3; 3), D_0(12, 6; 2)]

over G_0^6. Define 0 = 00, 1 = 11, 2 = 20, 3 = 01, 4 = 10, 5 = 21; then the group is Z_6. By using Corollary 3.8 and the above atomic repeating-column difference matrices D_0, we can construct many new orthogonal arrays of run size 72, which are exhibited in Table 1 or in Kuhfeld [3].
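The 6-level columns in this subsection come from combining 3-level and 2-level columns entrywise (the repeating and joining operations of Definition 2.1). A minimal sketch of the joining variant h(i, j) = iq + j (function name ours; the repeating variant would return pairs and then relabel them, as in the Z_6 labelling 0 = 00, 1 = 11, ... above):

```python
def join_columns(a, b, q):
    """Joining operation: fuse a p-level column a and a q-level column b
    into a single pq-level column via h(i, j) = i*q + j."""
    return [i * q + j for i, j in zip(a, b)]
```

For example, joining the 3-level column (0, 0, 1, 1, 2, 2) with the 2-level column (0, 1, 0, 1, 0, 1) yields the full 6-level column (0, 1, 2, 3, 4, 5).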
Table 1
Orthogonal arrays L_72(·) obtained in Section 4.1
(columns: No.; f_1–f_10; c_1–c_10; b_1–b_7; b_8–b_18; l; f; d; c — the table entries themselves are not reproduced here)
Table 1 (continued)

L_72(·) = (l, f_1–f_7, f_10, c_4–c_10, b_1–b_7) (old)
L_72(·) = (d, f_1–f_7, f_10, c_4–c_10, b_3–b_7, b_9–b_18)
L_72(·) = (d, f_1–f_7, f_10, c_4–c_10, c, b_3–b_7, b_9–b_11)
L_72(·) = (d, f_1–f_7, f_10, f, c_4–c_10, b_3–b_7, b_9)
L_72(·) = (l, f_1–f_6, c_1–c_3, b_1–b_6) (old)
L_72(·) = (d, f_1–f_6, c_1–c_3, b_3–b_6, b_9–b_18)
L_72(·) = (d, f_1–f_6, c_1–c_3, c, b_3–b_6, b_9–b_11)
L_72(·) = (d, f_1–f_6, f, c_1–c_3, b_3–b_6, b_9),

where d = b_1 ⊙ b_2 ⊙ b_8, l = b_8 ⊙ b_18, m(c) ≤ m(b_12 ⊙ b_18) and m(f) ≤ m(b_10 ⊙ b_18).

4.2. Construction of orthogonal arrays of run size 96

Zhang [9] has constructed an orthogonal array L_96(·) whose structure is

L_96(·) = [D_1(12, 4; 2) ⊕ 0_2 ⊕ (2) ⊕ 0_2, D_2(12, 4; 2) ⊕ 0_4 ⊕ (2), D_3(12, 4; 2) ⊕ (2) ⊕ (2) ⊕ (2), D(24, 20; 4) ⊕ (4), (24) ⊕ 0_4],

where D(24, 20; 4), D_1(12, 4; 2) ⊕ 0_2, D_2(12, 4; 2) ⊕ 0_2 and D_3(12, 4; 2) ⊕ (2) are some difference matrices.
It is easy to prove that L_4(2^3) = ((2) ⊕ 0_2, 0_2 ⊕ (2), (2) ⊕ (2)) is normal (Example 2.8). By Theorem 3.7, the array

D = [D(24, 20; 4), D_1(12, 4; 2) ⊕ 0_2, D_2(12, 4; 2) ⊕ 0_2, D_3(12, 4; 2) ⊕ (2)]

is a repeating-column difference matrix about [(4), (2) ⊕ 0_2, 0_2 ⊕ (2), (2) ⊕ (2)]. Let D(24, 20; 4) = (a_0, D(24, 19; 4)), where a_0 = a_1 ∗ a_2 ∗ a_3. By the transformation of repeating-column difference matrices,

[0, D_0] := [a_0 + D(24, 20; 4), a_1 + D_1(12, 4; 2) ⊕ 0_2, a_2 + D_2(12, 4; 2) ⊕ 0_2, a_3 + D_3(12, 4; 2) ⊕ (2)]

is also a repeating-column difference matrix. Thus, D_0 is an atomic repeating-column difference matrix, where 0 = 000, 1 = 011, 2 = 101, 3 = 110 and x + y = y + x, x + x = 0, 0 + x = x, 2 + 3 = 1, 1 + 3 = 2, 1 + 2 = 3. By using Corollary 3.8 and the above atomic repeating-column difference matrix D_0, we can construct many new orthogonal arrays of run size 96, which are exhibited in Table 2 or in Kuhfeld [3].
Table 2
Orthogonal arrays L_96(·) obtained in Section 4.2
(columns: L_96(·); rel. 96; d_1–d_20; b_1–b_35; f; d_21–d_23; c; l; x — the table entries themselves are not reproduced here)
Table 2 (continued)

L_96(·) = (x, d_1–d_20, b_1–b_12)
L_96(·) = (l, d_1–d_20, d_22, d_23, b_1–b_4, b_5–b_8, b_12–b_26, b_35)
L_96(·) = (d_1–d_23, b_1–b_4, b_5–b_8, b_12–b_14, b_23–b_26, b_29–b_32, b_35)
L_96(·) = (c, d_1–d_23, b_1–b_4, b_5–b_8, b_12–b_14, b_16–b_26, b_29–b_32, b_35)
L_96(·) = (f, d_1–d_23, b_1–b_4, b_5–b_8, b_12–b_14, b_26–b_29, b_32–b_35),
where d_21 = b_13 ⊙ b_30 ⊙ b_31, d_22 = b_2 ⊙ b_3 ⊙ b_24, d_23 = b_6 ⊙ b_7 ⊙ b_25, x = b_13 ⊙ b_35, l = b_13 ⊙ b_23, m(c) ≤ m(b_13 ⊙ b_23) and m(f) ≤ m(b_15 ⊙ b_23).

Acknowledgements

The author would like to thank the referee for his many valuable suggestions and comments. The work was supported by National Social Science Foundations (No. , No. (Henan) and No. (Henan)) in China.

References

[1] T. Beth, D. Jungnickel, H. Lenz, Design Theory, Bibliographisches Institut, Mannheim-Wien-Zürich, 1985; and Cambridge University Press, Cambridge.
[2] R.C. Bose, K.A. Bush, Orthogonal arrays of strength two and three, Ann. Math. Statist. 23 (1952).
[3] W.F. Kuhfeld, Orthogonal arrays.
[4] S. Shrikhande, Generalized Hadamard matrices and orthogonal arrays of strength two, Canad. J. Math. 16 (1964).
[5] Y.S. Zhang, Asymmetrical orthogonal design by multi-matrix methods, J. Chinese Statist. Assoc. 29 (1991).
[6] Y.S. Zhang, Orthogonal array and matrices, J. Math. Res. Exposition 12 (3) (1992).
[7] Y.S. Zhang, Theory of Multilateral Matrix, Chinese Statistic Press.
[8] Y.S. Zhang, W.G. Li, S.S. Mao, Z.Q. Zheng, A simple method for constructing orthogonal arrays by the Kronecker sum, J. Syst. Sci. Complexity 19 (2006) 266–273.
[9] Y.S. Zhang, Orthogonal arrays obtained by generalized Kronecker product, J. Statist. Plann. Inference (2000), in review.
[10] Y.S. Zhang, Y.Q. Lu, S.Q. Pang, Orthogonal arrays obtained by orthogonal decomposition of projection matrices, Statist. Sinica 9 (1999).
[11] Y.S. Zhang, S.Q. Pang, Y.P. Wang, Orthogonal arrays obtained by generalized Hadamard product, Discrete Math. 238 (2001).
More informationORTHOGONAL ARRAYS OF STRENGTH 3 AND SMALL RUN SIZES
ORTHOGONAL ARRAYS OF STRENGTH 3 AND SMALL RUN SIZES ANDRIES E. BROUWER, ARJEH M. COHEN, MAN V.M. NGUYEN Abstract. All mixed (or asymmetric) orthogonal arrays of strength 3 with run size at most 64 are
More informationIntrinsic products and factorizations of matrices
Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences
More informationThe Drazin inverses of products and differences of orthogonal projections
J Math Anal Appl 335 7 64 71 wwwelseviercom/locate/jmaa The Drazin inverses of products and differences of orthogonal projections Chun Yuan Deng School of Mathematics Science, South China Normal University,
More information18Ï È² 7( &: ÄuANOVAp.O`û5 571 Based on this ANOVA model representation, Sobol (1993) proposed global sensitivity index, S i1...i s = D i1...i s /D, w
A^VÇÚO 1 Êò 18Ï 2013c12 Chinese Journal of Applied Probability and Statistics Vol.29 No.6 Dec. 2013 Optimal Properties of Orthogonal Arrays Based on ANOVA High-Dimensional Model Representation Chen Xueping
More informationLetting be a field, e.g., of the real numbers, the complex numbers, the rational numbers, the rational functions W(s) of a complex variable s, etc.
1 Polynomial Matrices 1.1 Polynomials Letting be a field, e.g., of the real numbers, the complex numbers, the rational numbers, the rational functions W(s) of a complex variable s, etc., n ws ( ) as a
More informationLinear Algebra (Review) Volker Tresp 2018
Linear Algebra (Review) Volker Tresp 2018 1 Vectors k, M, N are scalars A one-dimensional array c is a column vector. Thus in two dimensions, ( ) c1 c = c 2 c i is the i-th component of c c T = (c 1, c
More informationA Short Overview of Orthogonal Arrays
A Short Overview of Orthogonal Arrays John Stufken Department of Statistics University of Georgia Isaac Newton Institute September 5, 2011 John Stufken (University of Georgia) Orthogonal Arrays September
More informationCONSTRUCTION OF NESTED (NEARLY) ORTHOGONAL DESIGNS FOR COMPUTER EXPERIMENTS
Statistica Sinica 23 (2013), 451-466 doi:http://dx.doi.org/10.5705/ss.2011.092 CONSTRUCTION OF NESTED (NEARLY) ORTHOGONAL DESIGNS FOR COMPUTER EXPERIMENTS Jun Li and Peter Z. G. Qian Opera Solutions and
More informationOn the construction of asymmetric orthogonal arrays
isid/ms/2015/03 March 05, 2015 http://wwwisidacin/ statmath/indexphp?module=preprint On the construction of asymmetric orthogonal arrays Tianfang Zhang and Aloke Dey Indian Statistical Institute, Delhi
More informationRANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA
Discussiones Mathematicae General Algebra and Applications 23 (2003 ) 125 137 RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA Seok-Zun Song and Kyung-Tae Kang Department of Mathematics,
More informationJournal of Multivariate Analysis. Sphericity test in a GMANOVA MANOVA model with normal error
Journal of Multivariate Analysis 00 (009) 305 3 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva Sphericity test in a GMANOVA MANOVA
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 4 (2010) 118 1147 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Two error-correcting pooling
More informationAbsolute value equations
Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West
More informationOn binary reflected Gray codes and functions
Discrete Mathematics 308 (008) 1690 1700 www.elsevier.com/locate/disc On binary reflected Gray codes and functions Martin W. Bunder, Keith P. Tognetti, Glen E. Wheeler School of Mathematics and Applied
More informationOn zero-sum partitions and anti-magic trees
Discrete Mathematics 09 (009) 010 014 Contents lists available at ScienceDirect Discrete Mathematics journal homepage: wwwelseviercom/locate/disc On zero-sum partitions and anti-magic trees Gil Kaplan,
More informationInteraction balance in symmetrical factorial designs with generalized minimum aberration
Interaction balance in symmetrical factorial designs with generalized minimum aberration Mingyao Ai and Shuyuan He LMAM, School of Mathematical Sciences, Peing University, Beijing 100871, P. R. China Abstract:
More informationUpper triangular matrices and Billiard Arrays
Linear Algebra and its Applications 493 (2016) 508 536 Contents lists available at ScienceDirect Linear Algebra and its Applications www.elsevier.com/locate/laa Upper triangular matrices and Billiard Arrays
More informationA property concerning the Hadamard powers of inverse M-matrices
Linear Algebra and its Applications 381 (2004 53 60 www.elsevier.com/locate/laa A property concerning the Hadamard powers of inverse M-matrices Shencan Chen Department of Mathematics, Fuzhou University,
More informationBinary construction of quantum codes of minimum distances five and six
Discrete Mathematics 308 2008) 1603 1611 www.elsevier.com/locate/disc Binary construction of quantum codes of minimum distances five and six Ruihu Li a, ueliang Li b a Department of Applied Mathematics
More information3 (Maths) Linear Algebra
3 (Maths) Linear Algebra References: Simon and Blume, chapters 6 to 11, 16 and 23; Pemberton and Rau, chapters 11 to 13 and 25; Sundaram, sections 1.3 and 1.5. The methods and concepts of linear algebra
More informationON THE CONSTRUCTION OF 2-SYMBOL ORTHOGONAL ARRAYS
Hacettepe Journal of Mathematics and Statistics Volume 31 (2002), 57 62 ON THE CONSTRUCTION OF 2-SYMBOL ORTHOGONAL ARRAYS Hülya Bayra and Aslıhan Alhan Received 22. 01. 2002 Abstract The application of
More informationNOTE ON CYCLIC DECOMPOSITIONS OF COMPLETE BIPARTITE GRAPHS INTO CUBES
Discussiones Mathematicae Graph Theory 19 (1999 ) 219 227 NOTE ON CYCLIC DECOMPOSITIONS OF COMPLETE BIPARTITE GRAPHS INTO CUBES Dalibor Fronček Department of Applied Mathematics Technical University Ostrava
More informationA NEW CLASS OF NESTED (NEARLY) ORTHOGONAL LATIN HYPERCUBE DESIGNS
Statistica Sinica 26 (2016), 1249-1267 doi:http://dx.doi.org/10.5705/ss.2014.029 A NEW CLASS OF NESTED (NEARLY) ORTHOGONAL LATIN HYPERCUBE DESIGNS Xue Yang 1,2, Jian-Feng Yang 2, Dennis K. J. Lin 3 and
More informationReview of Linear Algebra
Review of Linear Algebra Definitions An m n (read "m by n") matrix, is a rectangular array of entries, where m is the number of rows and n the number of columns. 2 Definitions (Con t) A is square if m=
More informationAn iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB =C
Journal of Computational and Applied Mathematics 1 008) 31 44 www.elsevier.com/locate/cam An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation
More informationTensor Complementarity Problem and Semi-positive Tensors
DOI 10.1007/s10957-015-0800-2 Tensor Complementarity Problem and Semi-positive Tensors Yisheng Song 1 Liqun Qi 2 Received: 14 February 2015 / Accepted: 17 August 2015 Springer Science+Business Media New
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 431 (29) 188 195 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Lattices associated with
More informationA note on optimal foldover design
Statistics & Probability Letters 62 (2003) 245 250 A note on optimal foldover design Kai-Tai Fang a;, Dennis K.J. Lin b, HongQin c;a a Department of Mathematics, Hong Kong Baptist University, Kowloon Tong,
More informationA Construction for Steiner 3-Designs
JOURNAL OF COMBINATORIAL THEORY, Series A 71, 60-66 (1995) A Construction for Steiner 3-Designs JOHN L. BLANCHARD * California Institute of Technology, Pasadena, California 91125 Communicated by the Managing
More informationMatrix Inequalities by Means of Block Matrices 1
Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,
More informationDiscrete Applied Mathematics
Discrete Applied Mathematics 157 (2009 1696 1701 Contents lists available at ScienceDirect Discrete Applied Mathematics journal homepage: www.elsevier.com/locate/dam Riordan group involutions and the -sequence
More informationOn Hadamard and Kronecker Products Over Matrix of Matrices
General Letters in Mathematics Vol 4, No 1, Feb 2018, pp13-22 e-issn 2519-9277, p-issn 2519-9269 Available online at http:// wwwrefaadcom On Hadamard and Kronecker Products Over Matrix of Matrices Z Kishka1,
More informationELA ON A SCHUR COMPLEMENT INEQUALITY FOR THE HADAMARD PRODUCT OF CERTAIN TOTALLY NONNEGATIVE MATRICES
ON A SCHUR COMPLEMENT INEQUALITY FOR THE HADAMARD PRODUCT OF CERTAIN TOTALLY NONNEGATIVE MATRICES ZHONGPENG YANG AND XIAOXIA FENG Abstract. Under the entrywise dominance partial ordering, T.L. Markham
More informationSome results on the existence of t-all-or-nothing transforms over arbitrary alphabets
Some results on the existence of t-all-or-nothing transforms over arbitrary alphabets Navid Nasr Esfahani, Ian Goldberg and Douglas R. Stinson David R. Cheriton School of Computer Science University of
More information1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )
Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical
More informationEventually reducible matrix, eventually nonnegative matrix, eventually r-cyclic
December 15, 2012 EVENUAL PROPERIES OF MARICES LESLIE HOGBEN AND ULRICA WILSON Abstract. An eventual property of a matrix M C n n is a property that holds for all powers M k, k k 0, for some positive integer
More informationEigenvectors and Reconstruction
Eigenvectors and Reconstruction Hongyu He Department of Mathematics Louisiana State University, Baton Rouge, USA hongyu@mathlsuedu Submitted: Jul 6, 2006; Accepted: Jun 14, 2007; Published: Jul 5, 2007
More informationEvery SOMA(n 2, n) is Trojan
Every SOMA(n 2, n) is Trojan John Arhin 1 Marlboro College, PO Box A, 2582 South Road, Marlboro, Vermont, 05344, USA. Abstract A SOMA(k, n) is an n n array A each of whose entries is a k-subset of a knset
More informationSection 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices
3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices 1 Section 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices Note. In this section, we define the product
More informationA Block Negacyclic Bush-Type Hadamard Matrix and Two Strongly Regular Graphs
Journal of Combinatorial Theory, Series A 98, 118 126 (2002) doi:10.1006/jcta.2001.3231, available online at http://www.idealibrary.com on A Block Negacyclic Bush-Type Hadamard Matrix and Two Strongly
More informationMath 3013 Problem Set 4
(e) W = {x, 3x, 4x 3, 5x 4 x i R} in R 4 Math 33 Problem Set 4 Problems from.6 (pgs. 99- of text):,3,5,7,9,,7,9,,35,37,38. (Problems,3,4,7,9 in text). Determine whether the indicated subset is a subspace
More informationThis article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and
This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution
More informationThe generalized order-k Fibonacci Pell sequence by matrix methods
Journal of Computational and Applied Mathematics 09 (007) 33 45 wwwelseviercom/locate/cam The generalized order- Fibonacci Pell sequence by matrix methods Emrah Kilic Mathematics Department, TOBB University
More informationUniversity, Wuhan, China c College of Physical Science and Technology, Central China Normal. University, Wuhan, China Published online: 25 Apr 2014.
This article was downloaded by: [0.9.78.106] On: 0 April 01, At: 16:7 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 10795 Registered office: Mortimer House,
More informationARCS IN FINITE PROJECTIVE SPACES. Basic objects and definitions
ARCS IN FINITE PROJECTIVE SPACES SIMEON BALL Abstract. These notes are an outline of a course on arcs given at the Finite Geometry Summer School, University of Sussex, June 26-30, 2017. Let K denote an
More informationApplied Mathematics Letters
Applied Mathematics Letters 24 (2011) 797 802 Contents lists available at ScienceDirect Applied Mathematics Letters journal homepage: wwwelseviercom/locate/aml Model order determination using the Hankel
More informationInstitute of Statistics Mimeo Series No April 1965
_ - ON THE CONSTRUCTW OF DFFERENCE SETS AND THER USE N THE SEARCH FOR ORTHOGONAL ratn SQUARES AND ERROR CORRECTnm CODES by. M. Chakravarti University of North Carolina nstitute of Statistics Mimeo Series
More informationOrthogonal Arrays & Codes
Orthogonal Arrays & Codes Orthogonal Arrays - Redux An orthogonal array of strength t, a t-(v,k,λ)-oa, is a λv t x k array of v symbols, such that in any t columns of the array every one of the possible
More informationThe following two problems were posed by de Caen [4] (see also [6]):
BINARY RANKS AND BINARY FACTORIZATIONS OF NONNEGATIVE INTEGER MATRICES JIN ZHONG Abstract A matrix is binary if each of its entries is either or The binary rank of a nonnegative integer matrix A is the
More informationImage Registration Lecture 2: Vectors and Matrices
Image Registration Lecture 2: Vectors and Matrices Prof. Charlene Tsai Lecture Overview Vectors Matrices Basics Orthogonal matrices Singular Value Decomposition (SVD) 2 1 Preliminary Comments Some of this
More informationQUASI-ORTHOGONAL ARRAYS AND OPTIMAL FRACTIONAL FACTORIAL PLANS
Statistica Sinica 12(2002), 905-916 QUASI-ORTHOGONAL ARRAYS AND OPTIMAL FRACTIONAL FACTORIAL PLANS Kashinath Chatterjee, Ashish Das and Aloke Dey Asutosh College, Calcutta and Indian Statistical Institute,
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 432 2010 661 669 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: wwwelseviercom/locate/laa On the characteristic and
More informationa Λ q 1. Introduction
International Journal of Pure and Applied Mathematics Volume 9 No 26, 959-97 ISSN: -88 (printed version); ISSN: -95 (on-line version) url: http://wwwijpameu doi: 272/ijpamv9i7 PAijpameu EXPLICI MOORE-PENROSE
More informationMath Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT
Math Camp II Basic Linear Algebra Yiqing Xu MIT Aug 26, 2014 1 Solving Systems of Linear Equations 2 Vectors and Vector Spaces 3 Matrices 4 Least Squares Systems of Linear Equations Definition A linear
More informationTHE SUM OF THE ELEMENTS OF THE POWERS OF A MATRIX
THE SUM OF THE ELEMENTS OF THE POWERS OF A MATRIX MARVIN MARCUS AND MORRIS NEWMAN l Introduction and results* In the first two sections of this paper A will be assumed to be an irreducible nonnegative
More informationOptimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications
Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Yongge Tian China Economics and Management Academy, Central University of Finance and Economics,
More informationMatrix Algebra Review
APPENDIX A Matrix Algebra Review This appendix presents some of the basic definitions and properties of matrices. Many of the matrices in the appendix are named the same as the matrices that appear in
More informationNew quasi-symmetric designs constructed using mutually orthogonal Latin squares and Hadamard matrices
New quasi-symmetric designs constructed using mutually orthogonal Latin squares and Hadamard matrices Carl Bracken, Gary McGuire Department of Mathematics, National University of Ireland, Maynooth, Co.
More informationComparing the homotopy types of the components of Map(S 4 ;BSU(2))
Journal of Pure and Applied Algebra 161 (2001) 235 243 www.elsevier.com/locate/jpaa Comparing the homotopy types of the components of Map(S 4 ;BSU(2)) Shuichi Tsukuda 1 Department of Mathematical Sciences,
More informationproposed. This method can easily be used to construct the trend free orthogonal arrays of higher level and higher strength.
International Journal of Scientific & Engineering Research, Volume 5, Issue 7, July-2014 1512 Trend Free Orthogonal rrays using some Linear Codes Poonam Singh 1, Veena Budhraja 2, Puja Thapliyal 3 * bstract
More informationDiscrete Mathematics
Discrete Mathematics 3 (0) 333 343 Contents lists available at ScienceDirect Discrete Mathematics journal homepage: wwwelseviercom/locate/disc The Randić index and the diameter of graphs Yiting Yang a,
More informationEighth Homework Solutions
Math 4124 Wednesday, April 20 Eighth Homework Solutions 1. Exercise 5.2.1(e). Determine the number of nonisomorphic abelian groups of order 2704. First we write 2704 as a product of prime powers, namely
More informationOptimal Fractional Factorial Plans for Asymmetric Factorials
Optimal Fractional Factorial Plans for Asymmetric Factorials Aloke Dey Chung-yi Suen and Ashish Das April 15, 2002 isid/ms/2002/04 Indian Statistical Institute, Delhi Centre 7, SJSS Marg, New Delhi 110
More informationBare minimum on matrix algebra. Psychology 588: Covariance structure and factor models
Bare minimum on matrix algebra Psychology 588: Covariance structure and factor models Matrix multiplication 2 Consider three notations for linear combinations y11 y1 m x11 x 1p b11 b 1m y y x x b b n1
More informationAlgebra C Numerical Linear Algebra Sample Exam Problems
Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric
More informationMAT 2037 LINEAR ALGEBRA I web:
MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear
More informationSquare 2-designs/1. 1 Definition
Square 2-designs Square 2-designs are variously known as symmetric designs, symmetric BIBDs, and projective designs. The definition does not imply any symmetry of the design, and the term projective designs,
More informationAn Even Order Symmetric B Tensor is Positive Definite
An Even Order Symmetric B Tensor is Positive Definite Liqun Qi, Yisheng Song arxiv:1404.0452v4 [math.sp] 14 May 2014 October 17, 2018 Abstract It is easily checkable if a given tensor is a B tensor, or
More informationA concise proof of Kruskal s theorem on tensor decomposition
A concise proof of Kruskal s theorem on tensor decomposition John A. Rhodes 1 Department of Mathematics and Statistics University of Alaska Fairbanks PO Box 756660 Fairbanks, AK 99775 Abstract A theorem
More informationMATH 106 LINEAR ALGEBRA LECTURE NOTES
MATH 6 LINEAR ALGEBRA LECTURE NOTES FALL - These Lecture Notes are not in a final form being still subject of improvement Contents Systems of linear equations and matrices 5 Introduction to systems of
More informationELA
Volume 16, pp 171-182, July 2007 http://mathtechnionacil/iic/ela SUBDIRECT SUMS OF DOUBLY DIAGONALLY DOMINANT MATRICES YAN ZHU AND TING-ZHU HUANG Abstract The problem of when the k-subdirect sum of a doubly
More informationJournal of Inequalities in Pure and Applied Mathematics
Journal of Inequalities in Pure and Applied Mathematics http://jipam.vu.edu.au/ Volume 7, Issue 1, Article 34, 2006 MATRIX EQUALITIES AND INEQUALITIES INVOLVING KHATRI-RAO AND TRACY-SINGH SUMS ZEYAD AL
More informationBulletin of the. Iranian Mathematical Society
ISSN: 1017-060X (Print) ISSN: 1735-8515 (Online) Bulletin of the Iranian Mathematical Society Vol. 43 (2017), No. 3, pp. 951 974. Title: Normal edge-transitive Cayley graphs on the nonabelian groups of
More informationThe initial involution patterns of permutations
The initial involution patterns of permutations Dongsu Kim Department of Mathematics Korea Advanced Institute of Science and Technology Daejeon 305-701, Korea dskim@math.kaist.ac.kr and Jang Soo Kim Department
More informationA Taylor polynomial approach for solving differential-difference equations
Journal of Computational and Applied Mathematics 86 (006) 349 364 wwwelseviercom/locate/cam A Taylor polynomial approach for solving differential-difference equations Mustafa Gülsu, Mehmet Sezer Department
More informationFoundations of Matrix Analysis
1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the
More information--------------------------------------------------------------------------------------------- Math 6023 Topics: Design and Graph Theory ---------------------------------------------------------------------------------------------
More informationThis article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and
This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution
More informationTHE LIE ALGEBRA ASSOCIATED TO THE LOWER CENTRAL SERIES OF A LINK GROUP AND MURASUGI'S CONJECTURE
PROCEEDINGS of the AMERICAN MATHEMATICAL SOCIETY Volume 109, Number 4, August 1990 THE LIE ALGEBRA ASSOCIATED TO THE LOWER CENTRAL SERIES OF A LINK GROUP AND MURASUGI'S CONJECTURE JOHN P. LABUTE (Communicated
More informationSubset selection for matrices
Linear Algebra its Applications 422 (2007) 349 359 www.elsevier.com/locate/laa Subset selection for matrices F.R. de Hoog a, R.M.M. Mattheij b, a CSIRO Mathematical Information Sciences, P.O. ox 664, Canberra,
More informationExtending MDS Codes. T. L. Alderson
Extending MDS Codes T. L. Alderson Abstract A q-ary (n,k)-mds code, linear or not, satisfies n q + k 1. A code meeting this bound is said to have maximum length. Using purely combinatorial methods we show
More information