
Sanna Ranto

Identifying and Locating-Dominating Codes in Binary Hamming Spaces

Turku Centre for Computer Science
TUCS Dissertations No 86, April 2007


Identifying and Locating-Dominating Codes in Binary Hamming Spaces

by

Sanna Ranto

To be presented, with the permission of the Faculty of Mathematics and Natural Sciences of the University of Turku, for public criticism in Auditorium XXI of the University on May 6th, 2007, at 12 noon.

University of Turku
Department of Mathematics
FIN-20014 Turku, Finland
2007

SUPERVISOR
PROFESSOR IIRO HONKALA
Department of Mathematics
University of Turku
FIN-20014 Turku
Finland

REVIEWERS
PROFESSOR OLIVIER HUDRY
Centre National de la Recherche Scientifique
Département Informatique et Réseaux
École Nationale Supérieure des Télécommunications
46 rue Barrault, PARIS cedex 13
France

PROFESSOR MARK KARPOVSKY
Department of Electrical and Computer Engineering
Boston University
8 Saint Mary's Street
Boston, Massachusetts 02215
USA

OPPONENT
CHARGÉ DE RECHERCHE ANTOINE LOBSTEIN
Centre National de la Recherche Scientifique
Département Informatique et Réseaux
École Nationale Supérieure des Télécommunications
46 rue Barrault, PARIS cedex 13
France

ISBN
ISSN
Painosalama Oy
Turku, Finland 2007

Acknowledgements

First of all I want to thank my supervisor Professor Iiro Honkala for his continuous support, and for his patience when long breaks in research took place. Many discussions with Iiro, his collaboration, suggestions for research topics, and careful proofreading made this work possible. I also want to thank Dr. Tero Laihonen for many inspiring discussions, suggestions for research topics, and for fruitful collaboration. Professor Aimo Tietäväinen encouraged me to begin my postgraduate studies, and I am grateful for that. I extend my thanks to my co-author Professor Geoffrey Exoo, and to the reviewers of this thesis, Professor Olivier Hudry and Professor Mark Karpovsky.

I am grateful for financial support from Turku Centre for Computer Science (TUCS) and the Department of Mathematics, and I thank the staff at these institutions for creating a pleasant working environment. I also acknowledge and thank the Nokia Foundation and the August and Lyydia Heino scholarship fund for partial financial support of my work.

Finally, I want to thank my husband Kalle for his loving support and for reading my thesis. The thesis is dedicated to my lovely sons Samuli, Olli, and Vilho, who, throughout this process, supplied numerous kisses and hugs.

Turku, April 2007
Sanna Ranto


Contents

1 Introduction 7

2 Preliminaries 11

3 On identifying codes 15
  3.1 Lower bounds
  3.2 Constructions for 1-identifying codes
  3.3 Constructions for r-identifying codes

4 On codes identifying sets of vertices 29
  4.1 Lower bounds
  4.2 Short codes
  4.3 Constructions
    4.3.1 Constructions for (1, ≤ 2)-identifying codes
    4.3.2 Direct sum constructions for radii r ≥ 2

5 Strongly identifying codes
  5.1 Strongly 1-identifying codes
    5.1.1 Constructions
    5.1.2 Nonexistence results
    5.1.3 Short codes
  5.2 Strongly (1, ≤ 2)-identifying codes

6 Linear identifying codes
  6.1 On linear 1-identifying codes
  6.2 On linear (1, ≤ l)-identifying codes
  6.3 Linear strongly identifying codes

7 Locating-Dominating Codes 71
  7.1 On the case l = 1
  7.2 Locating-dominating codes of type A
  7.3 Locating-dominating codes of type B

    7.3.1 Lower bounds
    7.3.2 Constructions
  7.4 Optimal linear locating-dominating codes

Appendix 87

Chapter 1

Introduction

We consider a connected graph G = (V, E) with a vertex set V and an edge set E. A code is a nonempty subset of the vertex set V, and a vertex belonging to a code is called a codeword. A code is called an identifying code if all the vertices of V have different nonempty codeword neighbourhoods. A code is called a locating-dominating code if all the vertices of V not belonging to the code have different nonempty codeword neighbourhoods. Usually, the neighbourhood is the ball of radius r. This means that the codeword neighbourhood of a vertex x in the vertex set V consists of the codewords at distance at most r from x. In this case we also speak about r-identifying and r-locating-dominating codes. The problem is to determine the smallest number of codewords in identifying and locating-dominating codes in different graphs. A code attaining the smallest possible cardinality is called optimal.

The research on identifying codes started in 1998 with the paper of Karpovsky, Chakrabarty and Levitin [40]. Since then the theory of identifying codes has been investigated in various graphs such as infinite grids [, 8, 0, 4, 27, 28], general graphs [2, 9, 5], trees [4, 7], and paths and cycles [3, 22]. Locating-dominating codes were introduced by Slater [60], see also [6, 58]. Other results on locating-dominating codes can be found in [2, 6, 25, 29, 54, 59, 6, 62]. Finding an identifying or locating-dominating code of optimal cardinality and determining whether or not a given code is identifying or locating-dominating are computationally difficult problems. For these and other complexity results, see [9,, 5, 38, 39]. There are many more papers on identifying and locating-dominating codes than the ones mentioned here. The reader can consult the up-to-date internet bibliography [50] on identifying and locating-dominating codes maintained by Antoine Lobstein.

In this thesis, we consider identifying and locating-dominating codes in binary Hamming spaces, or hypercubes. In the binary hypercube, the vertices are labelled by the 2^n binary words of length n, and two vertices are adjacent if and only if their labels differ in exactly one position.

Identifying codes in binary Hamming spaces have been considered in [5, 6, 24, 26, 3, 32, 37, 40, 44–46, 52]. In [40] there are lower bounds for the cardinalities of r-identifying codes and some constructions which use covering codes. In [5] r-identifying codes are constructed for r ≥ 2. In [37] it is shown that for a constant ρ ∈ [0, 1) we have

lim_{n→∞} (1/n) log₂ M_{ρn}(n) = 1 − H(ρ),

where M_r(n) is the smallest cardinality of an r-identifying code of length n and H(x) = −x log₂ x − (1 − x) log₂(1 − x) is the binary entropy function. In [52] it is shown that the optimal cardinality of 1-identifying codes is a monotonically increasing function of the code length. References [44, 45] are about codes which identify sets of vertices of size at most l, when l ≥ 3. In [45] an infinite sequence of optimal (1, ≤ l)-identifying codes is constructed by showing that the problem of finding an optimal (1, ≤ l)-identifying code is equivalent to the problem of finding an optimal (2l − 1)-fold 1-covering. In [44] an infinite sequence of optimal strongly (1, ≤ l)-identifying codes is constructed. Robust identifying codes in binary Hamming spaces are considered in [26, 3, 32, 46].

Identifying codes can be applied in the fault diagnosis of multiprocessor systems. The purpose of fault diagnosis is to locate malfunctioning processors by testing the system. Processors are the vertices of the graph, and the links between the processors are represented as the edges of the graph. A set of processors is chosen to check themselves and their neighbours and to report to an outside controller. Each chosen processor sends the message 1 if there is something wrong in the processor itself or in some of its neighbours, and 0 otherwise. The processors are chosen so that they form an identifying code. Depending on the code, the controller can identify one or more malfunctioning processors in each test run. This is the initial application of identifying codes from [40]. A recent application area of identifying codes is sensor networks, for example in environmental monitoring. For more information about current applications, see [2, 26, 42, 43, 57].

The initial application of locating-dominating codes in [60] is the safeguard analysis of a facility using a fire and intruder alarm system. Now the vertices of a graph represent rooms, corridors, courtyards, etc. Each edge connects two areas that are physically adjacent or within sight or sound of each other. Sensors are placed in some of the vertices, and each sensor sends a message to a controller. In this case there are three possible messages: 0 if no problem is detected, 1 if there is a problem in some neighbouring vertex, and 2 if the problem is at the vertex itself.

The structure of the thesis is the following. In Chapter 2, we define identifying codes precisely and introduce some notation that is needed later. In Chapter 3 we consider r-identifying codes, which can identify one fault. The chapter begins with a lower bound for r-identifying codes. This bound improves on the previously known lower bounds for r-identifying codes [40]. We give a construction of 1-identifying codes which gives the smallest known cardinalities for many lengths. We construct r-identifying codes, for r ≥ 2, by taking a direct sum of r 1-identifying codes.

This construction partly solves an open problem in [6]. The results of this chapter are from [8, 49].

In Chapter 4 we consider identifying codes which can identify any set of at most l faulty vertices, where l ≥ 2 is fixed. The chapter begins with lower bounds and general notions and continues with short optimal codes. The main results of this chapter are the constructions in Section 4.3. For example, we have a construction which gives us two infinite families of optimal (1, ≤ 2)-identifying codes. We also prove that the direct sum of r (1, ≤ l)-identifying codes is an (r, ≤ l)-identifying code. This chapter is based on the papers [34, 47, 49, 56]. We also give results which are not included in these papers.

We consider strongly identifying codes in Chapter 5. In this variant of identifying codes a malfunctioning processor in a fault diagnosis system may or may not send the correct message. Because we cannot trust the received information, we have to assume somewhat more than in the basic identifying case. The results of this chapter are from [33, 35, 47, 48].

In Chapter 6 we consider linear identifying codes. An identifying code is linear if it is a subspace of a Hamming space. For linear (1, ≤ l)-identifying codes the optimal cardinalities are solved in all cases: for all lengths and for any number l of faulty vertices. The optimal cardinalities are also solved for linear strongly 1-identifying codes. The results of this chapter are from [55].

In Chapter 7 we consider locating-dominating codes in Hamming spaces. We first consider locating-dominating codes which can locate one faulty vertex among the non-codewords. We also give two different definitions for locating-dominating codes locating more than one faulty vertex. In both cases, we prove lower bounds and give some constructions. This chapter is based on [36].

In the Appendix, we list some identifying and locating-dominating codes found using a computer.
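The fault-diagnosis application sketched above can be made concrete with a small computational illustration (not part of the original thesis): given the set of code processors that raised an alarm, the controller searches for the fault pattern whose set of alarmed codewords matches the reports. The helper names below (`ball`, `diagnose`) are our own, and the example uses the length-3 code {000, 011, 101, 110}, which reappears in Chapter 3 as an optimal 1-identifying code.

```python
from itertools import combinations, product

def ball(x, r):
    """B_r(x): all words within Hamming distance r of the word x (bit tuples)."""
    n, out = len(x), set()
    for k in range(r + 1):
        for idx in combinations(range(n), k):
            y = list(x)
            for i in idx:
                y[i] ^= 1
            out.add(tuple(y))
    return out

def diagnose(code, reports, r, n, max_faults=1):
    """Return every fault pattern of size <= max_faults whose alarm set
    (codewords within distance r of some fault) equals `reports`.
    With a suitable identifying code the answer is unique."""
    words = list(product((0, 1), repeat=n))
    matches = []
    for k in range(max_faults + 1):
        for X in combinations(words, k):
            covered = set().union(*(ball(x, r) for x in X)) if X else set()
            if covered & set(code) == set(reports):
                matches.append(set(X))
    return matches

# Illustration: the optimal 1-identifying code of length 3 (see Chapter 3).
C = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}
alarms = {(0, 0, 0), (0, 1, 1), (1, 1, 0)}   # reports when processor 010 is faulty
print(diagnose(C, alarms, r=1, n=3))          # [{(0, 1, 0)}]
```

If the single processor 010 is faulty, exactly the codewords 000, 011 and 110 raise an alarm, and the search returns the unique pattern {010}.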


Chapter 2

Preliminaries

Let F = {0, 1} denote the binary field. The binary Hamming space F^n, or binary hypercube, is the n-fold Cartesian product of F. A code is a subset of F^n. The elements of F^n are called words or vectors, and the elements of a code are called codewords. The cardinality of a set X is denoted by |X|.

The Hamming distance between words x = (x_1, ..., x_n), y = (y_1, ..., y_n) ∈ F^n is the number of places in which they differ, that is,

d(x, y) := |{i : x_i ≠ y_i}|.

The set of non-zero coordinates of a word x ∈ F^n is called the support of x and is denoted by supp(x). The cardinality of the support of x is called the weight of x and is denoted by w(x). For x ∈ F^n we denote

B_r(x) = {y ∈ F^n : d(x, y) ≤ r},
S_r(x) = {y ∈ F^n : d(x, y) = r}.

The set B_r(x) is called the Hamming sphere or ball of radius r centred at x. The size of the Hamming sphere of radius r in F^n does not depend on the choice of the centre, and it is denoted by V(n, r). We have

V(n, r) = Σ_{i=0}^{r} binom(n, i).
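These basic notions translate directly into code. The following sketch (an illustration added here, with our own function names, not taken from the thesis) represents words of F^n as bit tuples and checks the identity |B_r(x)| = V(n, r).

```python
from itertools import combinations
from math import comb

def hamming_distance(x, y):
    """d(x, y): number of coordinates in which the two words differ."""
    return sum(a != b for a, b in zip(x, y))

def support(x):
    """supp(x): set of non-zero coordinate positions (0-based here)."""
    return {i for i, a in enumerate(x) if a}

def weight(x):
    """w(x) = |supp(x)|."""
    return sum(x)

def ball(x, r):
    """B_r(x) as a set of words (tuples over {0, 1})."""
    n, out = len(x), set()
    for k in range(r + 1):
        for idx in combinations(range(n), k):
            y = list(x)
            for i in idx:
                y[i] ^= 1
            out.add(tuple(y))
    return out

def V(n, r):
    """V(n, r) = sum_{i=0}^{r} binom(n, i), the size of any ball B_r(x) in F^n."""
    return sum(comb(n, i) for i in range(r + 1))

# Sanity check: |B_r(x)| = V(n, r) independently of the centre x.
assert len(ball((0, 1, 0, 1, 1), 2)) == V(5, 2) == 16
```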

The next lemma will often be used.

Lemma 2.1. Let x, y ∈ F^n. Then

|B_1(x) ∩ B_1(y)| = n + 1 if x = y, 2 if 1 ≤ d(x, y) ≤ 2, and 0 otherwise.

Moreover, the intersection of three different Hamming spheres of radius one contains at most one element.

Proof. Denote by e_i the word whose support is {i}, and moreover e_0 = 0. For 0 ≤ i ≤ n we have x + e_i ∈ B_1(x) ∩ B_1(y) if and only if x + e_i = y + e_k for some 0 ≤ k ≤ n. This is equivalent to x = y + e_i + e_k. This implies the first claim. Suppose that 1 ≤ d(x, y) ≤ 2; then for the e_i ≠ e_k with x + y = e_i + e_k we have x + e_i, x + e_k ∈ B_1(x) ∩ B_1(y). If for some z ∈ F^n we have |B_1(x) ∩ B_1(y) ∩ B_1(z)| = 2, then B_1(z) has to include those two words which belong to B_1(x) ∩ B_1(y). That means, for some e_j and e_h we have x + e_i = y + e_k = z + e_j and x + e_k = y + e_i = z + e_h. This is equivalent to x = z + e_i + e_j = z + e_k + e_h. Thus, {e_i, e_k} = {e_h, e_j}. If e_i = e_h, then y = z. If e_i = e_j, then x = z. It is clear that the intersection of the balls of radius one around three different words can contain one word if the words are suitably chosen.

Let C ⊆ F^n be a code. For any X ⊆ F^n we denote

I_r(X) = I_r(C; X) = ∪_{x ∈ X} (B_r(x) ∩ C).

The set I_r(X) is called the I-set of X. If r = 1, we write I(X) = I(C; X) for short. We also denote I_r({x_1, x_2, ..., x_l}) = I_r(x_1, x_2, ..., x_l) = I_r(C; x_1, ..., x_l). The set X that we try to identify is called the fault pattern.

Definition 2.2. Let r and l be non-negative integers. We say that C ⊆ F^n is an (r, ≤ l)-identifying code if for all X, Y ⊆ F^n, X ≠ Y, such that |X| ≤ l and |Y| ≤ l, we have I_r(X) ≠ I_r(Y). If l = 1, we say C is an r-identifying code.

The symmetric difference of sets A and B is A ∆ B = (A \ B) ∪ (B \ A). In other words, we can say that C ⊆ F^n is (r, ≤ l)-identifying if and only if for all X, Y ⊆ F^n, X ≠ Y, |X| ≤ l and |Y| ≤ l, we have I_r(C; X) ∆ I_r(C; Y) ≠ ∅.

The smallest cardinality of an (r, ≤ l)-identifying code of length n is denoted by M_r^{(≤l)}(n). If l = 1, we denote the smallest cardinality by M_r(n). A code attaining the smallest cardinality is called optimal.
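Definition 2.2 gives an immediate, if exponential, test: enumerate all fault patterns of size at most l and compare their I-sets. The sketch below is an illustrative brute-force check under that reading (the names `i_set` and `is_identifying` are ours); including the empty pattern forces every I-set of a nonempty pattern to be nonempty.

```python
from itertools import combinations, product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def i_set(code, X, r):
    """I_r(C; X): all codewords within distance r of some word of X."""
    return frozenset(c for c in code if any(hamming(c, x) <= r for x in X))

def is_identifying(code, n, r=1, l=1):
    """Brute-force test of Definition 2.2 (exponential in n; small lengths only)."""
    words = list(product((0, 1), repeat=n))
    seen = set()
    for k in range(l + 1):                      # patterns of size 0, 1, ..., l
        for X in combinations(words, k):
            key = i_set(code, X, r)
            if key in seen:                     # two different patterns, same I-set
                return False
            seen.add(key)
    return True

# {000, 011, 101, 110} is 1-identifying (in fact optimal, see Chapter 3),
# while {000, 100, 010, 001} is not, since I(111) is empty.
assert is_identifying({(0,0,0), (0,1,1), (1,0,1), (1,1,0)}, n=3)
assert not is_identifying({(0,0,0), (1,0,0), (0,1,0), (0,0,1)}, n=3)
```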

If l = 0, the requirement for an identifying code is empty and the code is trivial. If r = 0, then obviously M_0^{(≤l)}(n) = 2^n. From now on we assume that r ≥ 1 and l ≥ 1.

In the constructions of identifying codes we often encounter covering codes. For further information on covering codes, see [3]. Let us recall some definitions from therein. We say that x ∈ F^n r-covers y ∈ F^n if d(x, y) ≤ r.

Definition 2.3. A code C ⊆ F^n has covering radius r if r is the smallest integer such that for all x ∈ F^n there is a codeword c ∈ C that r-covers x. The smallest cardinality of a binary code with length n and covering radius r is denoted by K(n, r).

There is a well-known conjecture concerning the covering radius, see [3, p. 352]. According to the conjecture, the best r-covering codes are asymptotically perfect.

Conjecture 2.4. For a fixed r,

lim_{n→∞} K(n, r) V(n, r) / 2^n = 1.

Next we define multiple covering codes.

Definition 2.5. A code C ⊆ F^n is a µ-fold r-covering if every word in F^n is r-covered by at least µ distinct codewords of C. The smallest cardinality of a µ-fold r-covering of length n is denoted by K(n, µ, r).

The sum of vectors x = (x_1, ..., x_n) and y = (y_1, ..., y_n) is

x + y = (x_1 + y_1, ..., x_n + y_n).

The sum of sets X, Y ⊆ F^n is

X + Y = {x + y : x ∈ X, y ∈ Y}.

The direct sum of codes C_1 ⊆ F^{n_1} and C_2 ⊆ F^{n_2} is

C_1 ⊕ C_2 = {(c_1, c_2) ∈ F^{n_1 + n_2} : c_1 ∈ C_1, c_2 ∈ C_2}.

We denote by π(u) the parity check bit of u, i.e., π(u) = 0 if w(u) is even and π(u) = 1 if w(u) is odd.
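For completeness, here is a matching sketch of Definitions 2.3 and 2.5 and of the operations π and ⊕ used throughout the constructions (again an illustration with our own helper names, not code from the thesis).

```python
from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def covering_radius(code, n):
    """Definition 2.3: smallest r such that every word is r-covered by some codeword."""
    return max(min(hamming(x, c) for c in code) for x in product((0, 1), repeat=n))

def is_mu_fold_r_covering(code, n, mu, r):
    """Definition 2.5: every word of F^n is r-covered by at least mu codewords."""
    return all(sum(hamming(x, c) <= r for c in code) >= mu
               for x in product((0, 1), repeat=n))

def parity(u):
    """pi(u): 0 if w(u) is even, 1 if w(u) is odd."""
    return sum(u) % 2

def direct_sum(C1, C2):
    """C1 ⊕ C2: concatenate every codeword of C1 with every codeword of C2."""
    return {c1 + c2 for c1 in C1 for c2 in C2}

# Example: the even-weight code of length 3 has covering radius 1; it is a
# 1-fold 1-covering but not a 2-fold 1-covering (000 is 1-covered only by itself).
C3 = {(0,0,0), (0,1,1), (1,0,1), (1,1,0)}
assert covering_radius(C3, 3) == 1
assert is_mu_fold_r_covering(C3, 3, mu=1, r=1)
assert not is_mu_fold_r_covering(C3, 3, mu=2, r=1)
```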


17 Chapter 3 On identifying codes In this chapter, we consider r-identifying codes when r = and r 2. In the case r =, our main purpose is to construct longer -identifying codes from - identifying codes that are 2-fold -coverings. For r 2, we prove M r ( r i= n i) r i= M(n i). We begin by introducing a lower bound for r-identifying codes which improves on the previously known lower bounds for r 2 and n large enough. This chapter is based on [8] except Theorem 3.3 with corollaries is from [49]. 3. Lower bounds The following lower bound for r-identifying codes is from [40, Theorem (iii)]. We mention the result here without the proof. Theorem 3. (Karpovsky et al. [40]). Let K be the smallest integer such that for a certain integer t ( t min{k,v (n, r)}) the following conditions are true: Then M r (n) K. 2 n t 2 j=0 t j= ( K j ) < V (n,r) ( ) ( K K + V (n,r) j t t j=0 t 2 j=0 ( ) K j ( ) ) K. The previous lower bound coincides with the next lower bound from [40, Theorem 3] for all r and n large enough. There are at most C words r-covered at most by one codeword in an r-identifying code C. All the other words are covered at least by two codewords. Thus, by counting the number of pairs {x,c} where x F n, c C and d(c,x) r in two ways we get the next lower bound from [40]. j

18 6 On identifying codes Theorem 3.2 (Karpovsky et al. [40]). M r (n) 2 n+ V (n,r) +. The next theorem improves on the lower bounds of the previous theorems for r 2. As a corollary, we also get the best known lower bound for -identifying codes from [40] (see also [6]). Theorem 3.3. Let C F n be r-identifying and m = max{ I r (x) : x F n }. Denote We have f r (x) = (x 2)(( ) 2r r ) ( 2r ) ( r + x ). 2 C 2n (2 + f r (v)) V (n,r) + f r (v) +. where v = m, if m ( ) 2r r, and v = 3 otherwise. Proof. Let C F n be an r-identifying code. Denote by V i the words r-covered by exactly i codewords. There are at most K = C words which are r-covered by one codeword. All the other words are r-covered at least by 2 codewords. Let x F n be r-covered by exactly two codewords, I r (x) = {c,c 2 }. Now d(c,c 2 ) 2r. When d(c,c 2 ) = 2r there are ( ) 2r r words covering both of these codewords. If d(c,c 2 ) < 2r, then by [3, Theorem 2.4.8] we know that there are at least ( ) 2r words covering both of these words. Hence, by the definition of identifying codes, for each word which is r-covered by two codewords there are at least ( ) 2r r words which are r-covered by three or more codewords. On the other hand, if y F n is r-covered by i 3 codewords, then there can be at most ( i 2) words z such that I r (z) I r (y) and I r (z) = 2. Hence, by counting in two ways the number of pairs {x,y} such that x V 2 and y V i (i 3) and I r (x) I r (y), we have ( ) 2r m ( ) i ( ) V 2 r V i. (3.) 2 For any positive real number a we get by counting in two ways the number of pairs {x,c}, where x F n and c C such that d(x,c) r K V (n,r) = m i= i V i = (2 + a) m i= V i + m i= i=3 (i 2 a) V i K + (2 + a)(2 n K) a V 2 + K + (2 + a)(2 n K) + m i=3 m i=3 (i 2 a (i 2 a) V i ( 2r r ( ) a i ) ) V i. 2 r

19 3. Lower bounds 7 We require that a satisfies i 2 a get ( a ( i ) 2r r ) 2 0 for all 3 i m. Hence, we a (i 2)(( ) 2r r ) ( 2r ) ( r + i ) =: f r (i) (3.2) 2 The function f r is decreasing when i ( ) 2r r, fr is increasing for 3 i ( ) 2r r and fr (3) = f r (2 + 2 ( ) ( 2r r ). Thus, when m r ) r we can choose a = f r (m) and otherwise a = f r (3). This implies (in both cases) from which the claim follows. K V (n,r) K + (2 + f r (v))(2 n K), Remark 3.4. Let A be a set of codewords in B r (0) such that every word in B 2r (0) is r-identified by using only the words of A. Suppose C F n is an r-identifying code. If there is y F n such that I r (y) > A, then the code C := (C \ I r (y)) D, where D := {a + y a A}, is still r-identifying and C < C. If the code C r- covers again some word more than A times we continue with the same process. In each step the cardinality of the code is getting smaller and every code is r- identifying. Hence, the process will stop at some point and the result will be an r- identifying code which r-covers each word in F n at most A times. Consequently, we can use m A in the previous theorem. Moreover, if C is r-identifying and attains M r (n), then we immediately know that I r (x) A for all x F n. By [40, Theorem 4] we know that S (0) -identifies B 2 (0). By substituting r = and m = S (0) = n to the Theorem 3.3 we get the following best known lower bound for -identifying codes due to Karpovsky, Chakrabarty and Levitin [40] (see also [6]). Corollary 3.5 (Karpovsky et al. [40]). M (n) n2n+ 2 + n + n 2 = n2n V (n,2). In [5, Construction 3] it is proven that all the words of weight two except those which supports are {i,i+} for i =,...,n and {,n} (when n 7) 2-identify all the words in the set B 4 (0). Hence, ( n 2) n words in S2 (0) 2-identify words in B 4 (0) when n 7. (When we say the set A F n r-identifies the set B F n, we mean that for all x B and y F n we have I r (A;x) I r (A;y).) Taking m = ( n 2) n in Theorem 3.3 one obtains the following. Corollary 3.6. For n 7 we have M 2 (n) 2 n+2 (n 3 6n 2 + 7n 24) n 5 5n 4 + 5n 3 n 2 + 4n 56.
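The counting bounds above are easy to tabulate. The sketch below evaluates Theorem 3.2 and Corollary 3.5, read here as M_r(n) ≥ 2^{n+1}/(V(n, r) + 1) and M_1(n) ≥ n·2^n/V(n, 2) = n·2^{n+1}/(n² + n + 2); both readings are reconstructions of the bounds attributed to [40] and should be checked against that reference.

```python
from math import comb

def V(n, r):
    return sum(comb(n, i) for i in range(r + 1))

def ceil_div(a, b):
    return -(-a // b)

def bound_thm_3_2(n, r):
    """Theorem 3.2, read as M_r(n) >= 2^(n+1) / (V(n, r) + 1)."""
    return ceil_div(2 ** (n + 1), V(n, r) + 1)

def bound_cor_3_5(n):
    """Corollary 3.5, read as M_1(n) >= n * 2^n / V(n, 2)."""
    return ceil_div(n * 2 ** n, V(n, 2))

for n in range(4, 11):
    print(n, bound_thm_3_2(n, 1), bound_cor_3_5(n))
```

For r = 1 the table it prints shows that Corollary 3.5 is never weaker than Theorem 3.2, in line with the discussion above.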

20 8 On identifying codes In [40, Theorem 5] it is proven that the set S r (0) r-identifies all the words in B 2r (0) provided that r < n/2. Choosing m = ( n r) implies the next result. Corollary 3.7. When ( ) ( n r r ) r and r < n/2, we have M r (n) 2n (2 + f r ( ( n r) )) V (n,r) + f r ( ( ) n. r ) + We can apply also other constructions of [5] as the set A in the remark above. Using these results we get the following corollaries. Corollary 3.8. Let ( ) n/2 R = 2 n/2 + 2 r When V (n,r ) + R ( 2r r ) we have M r (n) ( n/2 r ). 2n (2 + f r (V (n,r ) + R)) V (n,r) + f r (V (n,r ) + R) +. Corollary 3.9. When 2r divides n and ( ) ( )( n 2r n R = r r 2r we have M r (n) 2n (2 + f r (V (n,r ) + R)) V (n,r) + f r (V (n,r ) + R) + The lower bounds of Theorem 3. and Theorem 3.2 coincide (see [40]) for every r when n is large enough. It is easy to see that ) r 2 n (2 + f r (x)) V (n,r) + f r (x) + > 2 n+ V (n,r) + for all x > 2 and n,r. Hence, the lower bound of Theorem 3.3 is always stronger than the lower bound of Theorem 3.2. Thus, for every fixed r there is n 0 such that for all n n 0 we improve on the lower bound of Theorem 3.. For example, Theorem 3.3 and the corollaries improve Theorem 3. for r = 2 when n 0, r = 3 when n 20, r = 4 when n 29, and r = 5 when n 37. In the next section we use 2-fold -covering codes that are -identifying to construct new -identifying codes. The proof for the lower bound of -identifying codes that are 2-fold -covered goes similarly to the proof of the previous theorem. For the proof see [8]. In 2-fold -covering code that is -identifying there can be (n + )-fold -covered codewords. Theorem 3.0. If C F n (n 2) is a 2-fold -covering and a -identifying code, then 2 n+ (n + ) C n 2. (3.3) + n + 2
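Before turning to the constructions, it is worth noting that the bound (3.3) is easy to evaluate. The reading below, |C| ≥ 2^{n+1}(n + 1)/(n² + n + 2), is a reconstruction, but it reproduces the optimality claims made for the short codes of Example 3.13 in the next section.

```python
def bound_3_3(n):
    """Bound (3.3) for a 2-fold 1-covering that is also 1-identifying,
    read here as |C| >= 2^(n+1) * (n + 1) / (n^2 + n + 2)."""
    num, den = 2 ** (n + 1) * (n + 1), n * n + n + 2
    return -(-num // den)                       # ceiling of num / den

# Lengths used in Example 3.13:
print({n: bound_3_3(n) for n in (2, 3, 4, 6)})  # {2: 3, 3: 5, 4: 8, 6: 21}
```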

21 3.2 Constructions for -identifying codes Constructions for -identifying codes In Theorem 3. we apply the classical (π(u),u,u + v)-construction to 2-fold -coverings of length n that are -identifying and get 2-fold -covering and - identifying codes of length 2n +. Theorem 3.. Suppose C F n is a 2-fold -covering and a -identifying code. Then D = {(π(u),u,u + v) u F n,v C} F 2n+ is a 2-fold -covering which is -identifying. Proof. By [3, Theorem 4.4.3] we know that D is a 2-fold -covering in F 2n+. Let x = (a,u,u + v) F 2n+. We divide the words of F 2n+ into two classes depending on the first bit. I If a = π(u), then I(x) = {(a,u,u + c) c C, d(c,v) }. II If a π(u), then I(x) = A {(a,u,u + c) c C, d(u,u) =, u + c = u + v}, where A = {(π(u),u,u + v)} if v C and otherwise A is empty. If I(x) 3, then it is clear by Lemma 2. that x is identified. Thus, suppose I(x) = 2 this implies that also I(v) = 2. We shall show that there does not exist a word y (y x) such that I(y) = I(x). Assume first that a = π(u). Now I(x) = {(a,u,u + c ),(a,u,u + c 2 )} where c,c 2 C, d(v,c ) and d(v,c 2 ) =. Because the first (n+)-bits of the words in the I-set of x are the same this implies that if I(y) = I(x), then also y belongs to the class I. Thus, y = (a,u,u + w) where d(w,c ) and d(w,c 2 ). If I(y) = I(x), then also I(v) = I(w). This is impossible, because C is -identifying and, clearly, v w. If a π(u) and v C, then I(x) = {(a,u,u + c ),(a,u 2,u 2 + c 2 )} where d(u i,u) = and u + v = u i + c i for i =,2. Thus, d(v,c ) = d(v,c 2 ) =. If I(y) = I(x), then also y belongs to the class II because the n bits following the first bit are changing in the words of I(y). Now y = (a,u,u + w) where u is the unique word in (B (u ) B (u 2 )) \ {u} and u + w = u + c = u 2 + c 2, thus w is the unique word in (B (c ) B (c 2 )) \ {v}. If I(y) = I(x), then also I(w) = I(v). That is impossible. Suppose finally a π(u) and v C. Then I(x) = {(π(u),u,u+v),(a,u,u + c )} where d(u,u ) =, d(c,v) = and u + c = u + v. If I(x) = I(y), then y = (π(u),u,u + c ) because the first bit is changing in the words of I(x). But if I(x) = I(y), then also I(v) = I(c ) and this cannot happen. The next remark is from [56].

22 20 On identifying codes Remark 3.2. In general, the (π(u), u, u + v)-construction does not directly apply to -identifying codes. Let C = {000,0,0,0} F 3. The code C is -identifying. Applying the (π(u),u,u + v) -construction to the code C we get a code C of length 7. In C we have I(00000) = I(0000) = {00000} and therefore C is not -identifying. Example 3.3. Theorem 3.0 shows that for the lengths 2,3 and 4 the codes that are presented next are optimal 2-fold -coverings which are -identifying.. Words 0,0, form a 2-fold -covering and a -identifying code of length 2 and cardinality Words 00, 00, 0, 0, form a 2-fold -covering and a -identifying code of length 3 and cardinality Words 000, 000, 00, 00, 00, 00, 0, 0 form a 2-fold - covering and a -identifying code of length 4 and cardinality The following words form a 2-fold -covering and a -identifying code of length 6 and cardinality 22 (lower bound by Theorem 3.0 is 2) Combining Theorem 3. with the small codes of the previous example we get the next corollary. Corollary 3.4. For any k 0 we have: M (3 2 k ) (2k ) k M (2 k+2 ) 5 2 2k+2 k 4 M (5 2 k ) 2 5 2k 2 k M (7 2 k ) 2 7 2k k 6 Let a be the ratio between the sphere bound and the cardinality of a code C F n, i.e., C a = 2 n /(n + ). Applying the (π(u),u,u + v)-construction to the code (and then to the resulting code and so on) we get an infinite family of codes, whose corresponding ratio is always a. For example, starting with the first code in Example 3.3 we get for all k 0 and n = 3 2 k : M (n) 2 n /(n + ) 2.25.
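The (π(u), u, u + v)-construction of Theorem 3.11 is short enough to test directly. In the sketch below (our own illustration; the helper names are not from the thesis) the first code of Example 3.13 is read as {01, 10, 11}, and the constructed code of length 5 and cardinality 12 is checked to be a 2-fold 1-covering that is 1-identifying, as Theorem 3.11 predicts.

```python
from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def is_1_identifying(code, n):
    isets = [frozenset(c for c in code if hamming(c, x) <= 1)
             for x in product((0, 1), repeat=n)]
    return all(isets) and len(set(isets)) == len(isets)

def is_2_fold_1_covering(code, n):
    return all(sum(hamming(x, c) <= 1 for c in code) >= 2
               for x in product((0, 1), repeat=n))

def pi_u_u_plus_v(C, n):
    """The (pi(u), u, u+v)-construction of Theorem 3.11."""
    D = set()
    for u in product((0, 1), repeat=n):
        for v in C:
            uv = tuple(a ^ b for a, b in zip(u, v))
            D.add((sum(u) % 2,) + u + uv)
    return D

# Example 3.13 (first item), read as the length-2 code {01, 10, 11}:
C = {(0, 1), (1, 0), (1, 1)}
assert is_1_identifying(C, 2) and is_2_fold_1_covering(C, 2)
D = pi_u_u_plus_v(C, 2)        # length 5, cardinality 2^2 * |C| = 12
assert len(D) == 12
assert is_1_identifying(D, 5) and is_2_fold_1_covering(D, 5)
```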

23 3.3 Constructions for r-identifying codes 2 By Corollary 3.5 we know that 2 a when n tends to infinity. For higher lengths, good (meaning a 2) initial codes can be obtained using the codes C = {c + b c A, b B (0)} where A is a -fold 2-covering given in [64]. Clearly, C is a 2-fold -covering in F n and it is also -identifying by [40]. The asymptotic results from [37, 40] for the size of usual -identifying codes are obtained (using C) in the same way also for 2-fold -coverings which are -identifying. 3.3 Constructions for r-identifying codes In Theorem 3.8 we answer, when r = r 2 =, the question posed in [6] whether or not M r +r 2 (n + n 2 ) M r (n )M r2 (n 2 ). The direct sum of two -identifying codes does not always give a 2-identifying code, as it is considered in [40, Theorem 9] and [4, Theorem 5]. For example, the direct sum of the -identifying code C = {000, 00, 0, 00} with itself is not 2-identifying, since for example I 2 (00000) = {000000, 00000, 00000, 00000, 00000} = I 2 (00000). We show that the result for the cardinalities of 2-identifying codes can still be retrieved by showing that it is always possible to arrange initial codes such that no problem occurs. The result generalizes for all radii r in such a way that the direct sum of r -identifying codes can be used to construct an r-identifying code. In Theorem 3.20 we generalize [6, Theorem and Theorem 2] for all radii r. Previously, in [6] only radius is considered. This theorem also shows that [40, Corollary 7] and [4, Corollary 6] cannot be proved by using the direct sum of an r-identifying code and F k unless k r +. As we already noticed, the direct sum of two -identifying codes does not always lead to a 2-identifying code. In the next lemma a -identifying code is arranged in such a way that it can be used in the direct sum of r -identifying codes to produce an r-identifying code. Lemma 3.5. Suppose n 3 and C F n is a -identifying code such that x F n \C, y C : I(x) = {y} u,...,u n S (x) \ {y} : u i C v,...,v n S (y) \ {x} : v i C, I(u i ) = {v i },I(y) = {y,v,...,v n }. Then also the code C = C {x,y} is -identifying. Notice that C = C. Proof. Denote by C the code we get from C by changing x C and y C. Now I(C ;x) = {x} I(C ;y) = {x,v,...,v n } I(C ;u i ) = {x,v i } i =,...,n {v i } I(C ;v i ).

24 22 On identifying codes Because the I-sets of the words in B (y) are the only ones losing a codeword (adding is not a problem), it is enough to make sure that these I-sets are distinguishable, when proving that C is -identifying. We see that I(x) and I(y) are unambiguous. The I-sets of v is are unambiguous since every word z S 3 (x) that is a neighbour of v i is also a neighbour of v k for some k (i k). The next definition and lemma are needed in the proof of the following theorem. Denote N = r i= n i and N = N n r. Definition 3.6. Assume c i, x i F n i for i r. We say that a word (c,...,c r ) F N is in balance with a word (x,...,x r ) F N if for all i r we have d(c i,x i ) 2. Lemma 3.7. Let C i F n i be -identifying codes for i =,...,r and x = (x,...,x r ) F N. There is a codeword c C... C r such that d(x,c) = r or there are codewords c and c 2 in C... C r such that d(x,c ) = r 2 and d(x,c 2 ) = r. Moreover, the codewords c, c and c 2 are in balance with x. Proof. If there are codewords c i C i for every x i F n i such that d(x i,c i ) =, then d(x,(c,...,c r )) = r and (c,...,c r ) is in balance with x. Suppose then there are j words x i such that I (x i ) = {x i }. Without loss of generality let x,...,x j be such words. When i j there is c (2) i C i such that d(x i,c (2) i ) = 2 because C i is -identifying and the words in S (x i ) must be identified. If j is even, then c (r ) = (x,...x j/2,c (2) j/2+,...,c(2) j,c j+,...,c r ) is a codeword at distance r from x. The codeword c (r ) is in balance with x. If j is odd, then c (r) = (x,...x ( j )/2,c (2) ( j )/2+,...,c(2) j,c(2) j,c j+,...,c r ) is a codeword at distance r from x and c (r 2) = (x,...x ( j )/2,c (2) ( j )/2+,...,c(2) j,x j,c j+,...,c r ) is a codeword at distance r 2 from x. The codewords c (r) and c (r 2) are in balance with x. Theorem 3.8. ( r ) r M r n i i= i= M (n i ).

25 3.3 Constructions for r-identifying codes 23 Proof. Suppose C i F n i for i =,...,r are -identifying codes and suppose also that the situation of Lemma 3.5 does not exist in any C i (if there are multiple bad cases, we fix them one by one it is easy to check that fixing a bad case does not create another). If n i = 3 for some i, then an optimal -identifying code C i = {000,0,0,0} is used. We prove that C = C... C r is an r-identifying code. In fact we prove a bit stronger result that for all x = (x,...,x r ), y = (y,...,y r ) F N, x y, there is a codeword c = (c,...,c r ) in I r (C;x) I r (C;y) such that c is in balance with the word in which I-set it is. We prove the claims by induction on r. The first step, r = of induction is trivial. An induction hypothesis is that C = C... C r is an (r )-identifying code and for every couple of words x, y F N, x y, there is a codeword c I r (C ;x ) I r (C ;y ) such that c is in balance with a word in which I-set it is. Denote x = (x,x r ) and y = (y,y r ) where x = (x,...,x r ), y = (y,...,y r ) F N and x r,y r F n r. Suppose first that x r y r and x y. By the induction hypothesis, we know that there is a codeword c = (c,...,c r ) I r (C ;x ) I r (C ;y ). Without loss of generality we may assume that c I r (C ;x ) and by assumption it is in balance with x. By adding to the end of c a codeword c r C r such that d(x r,c r ) 2 we get a codeword that is in balance with x. If at the same time any component of c is changed in such a way that it still is at distance at most two from the corresponding component of x, then again the result codeword is in balance with x. Since C r is -identifying, we know that there is a codeword c r C r such that c r I (C r ;x r ) I (C r ;y r ). If c r I (C r ;x r ), then (c,c r ) I r (C;x) \ I r (C;y). Assume then c r I (C r ;y r ). There is v r I (C r ;x r ). If d(y r,v r ), then (c,v r ) I r (C;x) \ I r (C;y). Suppose d(v r,y r ) = 0. Suppose also there is c x C r such that d(c x,x r ) = 2 and d(c x,y r ) = 3. If such a codeword does not exist, then we are in the situation of Lemma 3.5, and we assumed that it cannot happen. If c I r 2 (C ;x ), then (c,c r ) I r (C;x) \ I r (C;y). On the other hand, a word (c,y r ) I r (C;x), and we have (c,y r ) I r (C;y) if and only if d(c,y ) = r. Hence, assume now d(x,c ) = r and d(y,c ) = r. If for some j there is a codeword c j such that d(c j,x j) < d(c j,x j ), then a codeword (c,...,c j,c j,c j+,...,c r,c x ) I r (C;x) \ I r (C;y). Namely, the distance between y j and c j decreases by at most two compared to the distance between y j and c j. Suppose then that for any x j such a codeword c j does not exist, then d(x j,c j ) = and x j C j for all j r. If c is in balance with y, then there is k such that d(y k,c k ) = 2 (because d(y,c ) = r). Now there is c k I (C k ;y k ) and c = (c,...,c k,c k,c k+,...,c r,c r ) I r (C;y) \ I r (C;x), because x j C j j. The codeword c is in balance with y. If c is not in balance with y, denote by j,..., j k (k ) indices for which d(y jh,c jh ) 3 ( h k). For all these components there are c j h I (C jh ;y jh ). Because x jh C jh we have

26 24 On identifying codes d(x jh,c j h ). Thus, c = (c,...,c j,...,c j k,...,c r,c r ) I r (C;y) \ I r (C;x). The codeword c is in balance with y. Suppose next that x r = y r and x y. If x and y differ at least in two components x i y i and x j y j (i j, i, j r ), then we return to the previous case. Namely, we can operate now in an equivalent code C... C i C i+... C r C i. By the previous case we find a codeword c = (c,...,c i,c i+,...,c r,c i ) which is in the symmetric difference of the I-sets of x = (x,...,x i,x i+,...,x r,x i ) and y = (y,...,y i,y i+,...,y r,y i ), and it is in balance. By changing the last component back to the ith position we get a solution to the current case. Thus, we still need to prove that the claims hold when x r y r and x = y. There is a codeword c r I (C r ;x r ) I (C r ;y r ). We may assume without loss of generality that c r I (C r ;x r ). If there is a codeword c C such that d(c,x ) = r and c is in balance with x, then (c,c r ) I r (C;x) \ I r (C;y). If there does not exist a codeword at distance r from x, which is in balance with x, then by Lemma 3.7 we know that there are codewords c (r 2) and c (r) at distances r 2 and r from x, respectively. These codewords are in balance with x. If x r C r, then (c (r),x r ) I r (C r ;x) \ I r (C r ;y). Similar argumentation holds if y r C r. Suppose now x r,y r C r. Now (c (r 2),c r ) I r (C r ;x) I r (C r ;y) if and only if d(y r,c r ) = 2. Thus, suppose d(y r,c r ) = 2, this implies d(x r,y r ) = or 3. a) Suppose d(x r,y r ) = 3. Assume first that the code length n r 4, then there is z r S (x r ) \ S 2 (y r ). Because I (z r ) is nonempty there is a codeword c r C r such that d(c r,z r ) and d(c r,y r ) 3. Thus, (c (r 2),c r) I r (C;x) \ I r (C;y). If n r = 3, we use the code as assumed to be used in the beginning, and in this code there are no two non-codewords at distance three from each other. If n r = 2, then there are no two non-codewords and this is not a problem. b) Suppose d(x r,y r ) =. If there is c r (C r S 2 (x))\s (y), then (c (r 2),c r) I r (C;x) \ I r (C;y). Thus, suppose that if c r S 2 (x) C r, then d(c r,y r ) =. Denote by z r a unique word in (S (c r ) S (y r )) \ {x r }. Now I (C r ;z r ) = I (C r ;c r ) unless there is a codeword c z (c z c r ) such that d(c z,z r ) = and so d(c z,y r ) = 2 and d(c z,x r ) = 3. We get (c (r 2),c z ) I r (C;y) \ I r (C;x). So we can always distinguish between the I-sets of x and y for all x,y F N, x y. Moreover, there is a codeword in I r (C;x) I r (C;y) which is in balance with the word in which I-set it is.
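Theorem 3.18 (and the corollary below, M_2(n + m) ≤ M_1(n)·M_1(m)) can also be explored computationally. The sketch below is an illustration only: the outcome of the final check is printed rather than asserted, since the theorem guarantees a 2-identifying direct sum only after the 1-identifying factors have been arranged in the sense of Lemma 3.15. The length-3 factor is the optimal code from the proof, read here as {000, 011, 101, 110}.

```python
from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def direct_sum(C1, C2):
    """C1 ⊕ C2: concatenate every codeword of C1 with every codeword of C2."""
    return {c1 + c2 for c1 in C1 for c2 in C2}

def is_r_identifying(code, n, r):
    """l = 1 case of Definition 2.2: distinct, nonempty I_r-sets for all words."""
    isets = [frozenset(c for c in code if hamming(c, x) <= r)
             for x in product((0, 1), repeat=n)]
    return all(isets) and len(set(isets)) == len(isets)

C3 = {(0,0,0), (0,1,1), (1,0,1), (1,1,0)}       # optimal 1-identifying, length 3
assert is_r_identifying(C3, 3, r=1)

D = direct_sum(C3, C3)                           # 16 words of length 6
print(len(D), is_r_identifying(D, 6, r=2))       # does this particular choice work?
```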

27 3.3 Constructions for r-identifying codes 25 Corollary 3.9. M 2 (n + m) M (n)m (m). The next theorem generalizes [6, Theorem and Theorem 2] for all radii r. Previously, in [6] only radius is considered. Theorem Let C F n be an r-identifying code and k. A direct sum D = C F k is r-identifying if and only if x F n, c C : r k + d(x,c) r. (3.4) Moreover, the direct sum preserves the property (3.4) in F n+k. Proof. Suppose first that there is x F n such that I r (C;x) = I r k (C;x). For a,b F k, a b, we have I r (D;(x,a)) = {(c,y) c I r k (C;x), y F k } = I r (D;(x,b)). This is not possible. Suppose the condition (3.4) holds. Let x,x F n and a,b F k. Because C is an r-identifying code it is clear that I r (D;(x,a)) I r (D;(x,b)) when x x. Suppose x = x and a b. By assumption there is c C such that r k + d := d(x,c) r. Now (c,y) I r (D;(x,a)) \ I r (D;(x,b)) where y is chosen in such a way that d(y,a) r d and d(y,b) r d +. This choice is always possible since d(a,b). In the previous theorem if k r + the condition (3.4) is always true so we have the following corollary: Corollary 3.2. If C is an r-identifying code and k 0, then C F r++k is r- identifying. Thus, M r (n + r + + k) 2 r++k M r (n). In Table 3. we have collected the upper and lower bounds on the smallest cardinalities of - and 2-identifying codes.

28 26 On identifying codes Table 3.: Bounds on the cardinalities of - and 2-identifying codes. n M (n) M 2 (n) 2 3 d 3 4 a a 7 e 4 7 b a 6 e 5 0 a b 6 e b b 8 e 7 32 b c 4 f c b 7 2 f c b,d f c d 4 63 f c d g c d g c d h f d h g d i (7,8) h d i (5,) g,h d i (6,) h d i (7,) g d i (8,) h d i (9,)

29 3.3 Constructions for r-identifying codes 27 Key to the table of M (n) : The lower bounds for lengths 4 and 7 are from [6]. The other lower bounds come from [40, Theorem 3] see also [6] and Corollary 3.5. a Karpovsky et al. [40] b Blass et al. [6] c Codes in Appendix d M r (r + ) = 2 r+ Blass et al. [5, Theorem 5] f M (n) 4M (n 2) Blass et al. [6, Theorem 2] (see also Corollary 3.2) g Corollary 3.4 h M (n) 2M (n ) c C d(c,c\{c}) = Blass et al. [6, Theorem ] (see also Corollary 3.2) Key to the table of M 2 (n) : a Blass et al. [5] b Karpovsky et al. [40, Theorem (iii)] c Computer search [7] d Corollary 3.6 e Blass et al. [5, Theorem 5, Theorem 6], see also the note therein on [7]. f See Appendix. g Theorem 3.20, see also Appendix. h Corollary 3.2 i Corollary 3.9, in parentheses there are n and m that are used.


Chapter 4

On codes identifying sets of vertices

In this chapter we consider the situation where the size of the fault pattern can be more than one. In the first section we present lower bounds for (r, ≤ l)-identifying codes. The lower bound for (1, ≤ 2)-identifying codes turns out to be strict for infinitely many lengths. We also show which sizes of the fault pattern a code can handle with respect to the code length n and the radius r. In the second section we construct optimal (1, ≤ 2)-identifying codes for the short lengths 4, 5 and 7. In the third section we present constructions for (r, ≤ l)-identifying codes. First we present two constructions for (1, ≤ 2)-identifying codes. The first one of these, the (π(u), u, u + v)-construction, leads to two infinite families of optimal (1, ≤ 2)-identifying codes. The second construction gives an asymptotically optimal result. At the end of the third section we prove that the direct sum of F^r and an (r, ≤ l)-identifying code, for r ≥ 2 and l ≥ 2, is an (r, ≤ l)-identifying code. In Theorems 4.26 and 4.28 we prove that the direct sum of r (1, ≤ l)-identifying codes is an (r, ≤ l)-identifying code for all l ≥ 2.

Section 4.1 is based on [34], except that Theorem 4.7 and Theorem 4.10 are from [49]. The results in Section 4.2 are from [34], except that the result concerning n = 7 is from [56]. Section 4.3.1 is based on [34] unless otherwise stated. The results of Section 4.3.2, except Theorem 4.29, are from [49]. Theorem 4.19 and Theorem 4.22 for r = 2 are only published in this thesis.

4.1 Lower bounds

Suppose C is an (r, ≤ l)-identifying code. There are 2^{|C|} possible I-sets and Σ_{i=0}^{l} binom(2^n, i) fault patterns to be identified. Thus, we have proved the next lower bound from [40].

Theorem 4.1. If C is an (r, ≤ l)-identifying code, then

|C| ≥ log₂ Σ_{i=0}^{l} binom(2^n, i).

First we prove some general properties of certain identifying codes.

Lemma 4.2. Let C ⊆ F^n be an (r, ≤ l)-identifying code, where l ≥ 2. For any two different fault patterns X = {x_1, ..., x_k} and Y = {y_1, ..., y_v} such that X ⊄ Y, Y ⊄ X and |X ∪ Y| ≤ l, we have I_r(X) ⊄ I_r(Y) and I_r(Y) ⊄ I_r(X). In particular, if I_r(x) ⊆ I_r(y) for some x, y ∈ F^n, then x = y.

Proof. If for some such X and Y we had I_r(X) ⊆ I_r(Y), then I_r(X ∪ Y) = I_r(Y), which leads to a contradiction, since X ∪ Y ≠ Y and |X ∪ Y| ≤ l.

The next theorem generalizes [45, Theorem 1] for all radii r.

Theorem 4.3. Assume r ≥ 1 and l ≥ 1. If C ⊆ F^n is an (r, ≤ l)-identifying code, then

for all x ∈ F^n: |I_r(x)| = |B_r(x) ∩ C| ≥ 2l − 1.

Proof. If l = 1, the claim holds trivially. Suppose l ≥ 2 and assume to the contrary that there is x ∈ F^n such that |I_r(x)| ≤ 2l − 2. We denote I_r(x) = {c_1, ..., c_k}, where k ≤ 2l − 2. We know that d(c_i, c_j) ≤ 2r for every c_i, c_j, where i ≠ j and i, j ≤ 2l − 2. Thus, for any pair of codewords {c_i, c_j} there is a word y ∈ F^n, y ≠ x, such that {c_i, c_j} ⊆ I_r(y). Hence, we find a collection of words Y such that |Y| ≤ l − 1 and I_r(x) ⊆ I_r(Y), which is a contradiction by Lemma 4.2.

The next lemma is from [6].

Lemma 4.4. Let C ⊆ F^n be a 3-fold 1-covering. Then for all x, y, z ∈ F^n, x ≠ y, we have I(x) ≠ I(y) and I(x) ≠ I(y, z).

Proof. The claims follow from Lemma 2.1.

Lemma 4.5. Let C ⊆ F^n be a (1, ≤ 2)-identifying code. There does not exist a square of codewords, that is, x, y ∈ C such that d(x, y) = 2, |I(x)| = |I(y)| = 3 and |I(x) ∩ I(y)| = 2.

Proof. Suppose to the contrary that for x, y ∈ C we have d(x, y) = 2, |I(x)| = |I(y)| = 3 and |I(x) ∩ I(y)| = 2. For c ∈ I(x) ∩ I(y) we have I(x, c) = I(y, c), which is a contradiction.

The next theorem improves on Theorem 4.1. When r = 1 and l = 2, the derived lower bound is shown to be optimal for infinitely many lengths.

33 4. Lower bounds 3 Theorem 4.6. Assume r and l 2. We have In particular, M ( l) Moreover, if n 4 is even, then r (n) K(n,2l,r) (2l ) V (n,r). M ( 2) (n) 3 2n. n + M ( 2) (n) (3n 8)2 n n 2 2n 2 Proof. Let C be an (r, l)-identifying code. By Theorem 4.3 we know that all the words of the ambient space are r-covered at least by (2l ) codewords of C. Hence M r ( l) (n) K(n,2l,r). Obviously, (2l ) 2 n /V (n,r) is a lower bound on K(n,2l,r). The last assertion follows from [23, Corollary 2]: for even n where n 4. K(n,3,). (3(n 3) + )2n (n 3)(n + ) + The interested reader is referred to [45] for further results on (, l)-identifying codes when l 3. For r 2 and l 2 we can improve on the previous bound. Theorem 4.7. For r 2 and l 2 we have M r ( l) (2l )2 n (n) ( n ( r) + n ). r Proof. Suppose C F n is an (r, l)-identifying code. We shall show that for every x F n we have I r (x) (S r (x) S r (x)) 2l. Without loss of generality we can prove that the claim holds for x = 0 and clearly, it then holds for every x F n. For any two words c,c 2 of weight w(c ),w(c 2 ) r there is a word y of weight two which r-covers both of these words. Clearly, I r 2 (0) I r (y ). If I r (0) (S r (0) S r (0)) 2l 2, then there is a collection of words of weight two, y,y 2,...,y l, which r-cover the whole set I r (0). This is not possible by Lemma 4.2. The claim follows from a direct calculation C (( n r ) + ( n r 2 n )) (2l )2 n.
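The lower bounds of this section can again be tabulated. The sketch below evaluates Theorem 4.1 and Theorem 4.6, read here as |C| ≥ log₂ Σ_{i=0}^{l} binom(2^n, i), M_r^{(≤l)}(n) ≥ (2l − 1)·2^n/V(n, r), and, for even n ≥ 4 with (r, l) = (1, 2), M_1^{(≤2)}(n) ≥ (3n − 8)·2^n/(n² − 2n − 2); the last two readings are reconstructions. For n = 4 the even-length refinement already gives the value 11 attained by the optimal length-4 code of Section 4.2.

```python
from math import comb, log2, ceil

def V(n, r):
    return sum(comb(n, i) for i in range(r + 1))

def bound_thm_4_1(n, l):
    """Theorem 4.1: |C| >= log2( sum_{i=0}^{l} binom(2^n, i) )."""
    return ceil(log2(sum(comb(2 ** n, i) for i in range(l + 1))))

def bound_thm_4_6(n, r, l):
    """Theorem 4.6, simple part: M_r^{(<=l)}(n) >= (2l - 1) * 2^n / V(n, r)."""
    return -(-((2 * l - 1) * 2 ** n) // V(n, r))

def bound_thm_4_6_even(n):
    """Theorem 4.6, even n >= 4, (r, l) = (1, 2): >= (3n - 8) * 2^n / (n^2 - 2n - 2)."""
    return -(-((3 * n - 8) * 2 ** n) // (n * n - 2 * n - 2))

print(bound_thm_4_1(4, 2), bound_thm_4_6(4, 1, 2), bound_thm_4_6_even(4))   # 8 10 11
```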

34 32 On codes identifying sets of vertices Next we provide some nonexistence results. Theorem 4.8. Let r(n, K) denote the smallest covering radius among all binary codes of length n and cardinality K. Let l < 2 n. If r r(n,l), then there does not exist an (r, l)-identifying code of length n. Proof. Let C(l) be a code realizing the value r(n,l). Thus, the Hamming spheres of radius at least r(n, l) centered at the codewords cover the entire space. Take x F n such that C(l) x +C(l) (this is always possible when C(l) F n ). Consequently, the sets C(l) and x +C(l) of cardinality l cannot be identified by any code C F n. Values of r(n,k) can be found for instance from [3, p ]. Corollary 4.9. There is no (, 2)-identifying code of length less than four and no (2, 2)-identifying code of length less than six. If we fix a radius r and a length n, then the following theorem tells what values the cardinality of a fault pattern l cannot get. Theorem 4.0. Suppose C F n is an (r, l)-identifying code. Then n l r Proof. The first step of the proof is to cover the whole set B r (0) by a small number of words. We take one word of weight 2r, namely, Suppose first 2 n. We take We also take a word z = Now 2r {}}{ x = r {}}{ y = y 2 = y (n 2r )/2 = I r (0,x,y,...,y (n 2r )/2,z) = I r (x,y,...,y (n 2r )/2,z) which is impossible if l (n 2r )/

35 4.2 Short codes 33 Assume then 2 n. We take 2r {}}{ y = y 2 = y (n 2r)/2 = These words cover all the words of weight r. To cover B r (0) we take any word z of weight one. We have I r (0,x,y,...,y (n 2r)/2,z) = I r (x,y,...,y (n 2r)/2,z), which is impossible if l (n 2r)/ In [26] it is proved that (, 2)-identifying codes are also -edge-robust - identifying codes. If µ 3, a perfect µ-fold -covering of length n can find one faulty processor [6] and detect a larger number of malfunctioning processors, because then I(X) 2µ 2 > µ = I(z) for any z F n and X F n where X 2. This property is clearly not shared by all codes, for example, the -identifying code C = {x F 4 w(x) {0,2,3}} gives I(000) = I(0000,000). From this point of view the research is done in [30] Short codes In the next theorems we shall obtain the optimal (, 2)-identifying codes of lengths four, five and seven. Theorem 4.. M ( 2) (4) =. Proof. The code C = {0000, 000, 000, 00, 00, 00, 00, 00, 0, 0, 0} gives the upper bound M ( 2) (4). Every word of F 4 is 3-fold -covered by C, thus by Lemma 4.4 single words are distinguishable from each other and from couples of words. One way of verifying that I(x,y) I(x,w) for all x,y,z,w F 4, {x,y} {z,w} is by looking at the following matrix A where the first row corresponds to the codeword 0000 and the second to 000 etc., and the columns correspond to all words x = (x,x 2,x 3,x 4 ) of the space F 4 in the order of integers J (0 J 5) when they are expressed as J = 4 i= x i2 (i ). The entry of the matrix is one if

36 34 On codes identifying sets of vertices the codeword corresponding to that row -covers the word of the corresponding column, otherwise the entry is zero. Thus, A = By comparing the logical OR of all pairs of columns we get the claim. By Theorem 4.6, we get M ( 2) (4). In the following proof x := + x. Theorem 4.2. i) If C is a perfect 3-fold -covering of length five, then for every x F 5, either x C or x C but not both. ii) There are exactly two nonequivalent perfect 3-fold -coverings of length five. iii) One of these perfect 3-fold -coverings is (, 2)-identifying and the other one is not. Proof. Without loss of generality, assume that 0 / C, 0000, 0000, 0000 C, 0000, 0000 / C. Because all the vectors of weight one are covered by exactly three codewords, the number of codewords of weight two must be six. To prove i), it suffices to notice that if / C, then the same argument shows that there are three codewords of weight four and six of weight three, and altogether the number of codewords would be more than sixteen, which is the cardinality of the perfect 3-fold -covering of length five. Having proved i), it suffices to consider which vectors of weight two are in C. Because 0000 is covered by three codewords, three of the six codewords of weight two have in the first coordinate, and the same is true for the second coordinate. If 000 / C, there is a unique choice for C:

37 4.2 Short codes The seven other codewords are the complements of the non-codewords of weight at most two. A routine verification shows that it is indeed a perfect 3-fold - covering. We notice that C = {0000,000,000,00} C. Clearly, C is not (, 2)-identifying, because I(0000,000) = C = I(000,00). The other possibility is that 000 C. Apart from the order of the coordinates, there is again a unique way of choosing the codewords of weight two in such a way that every vector of weight one is covered by exactly three codewords: A routine verification shows that the resulting code C = {0000, 0000, 0000, 000, 000, 000, 000, 000, 000, 00, 00, 00, 00, 0, 0, } is a perfect 3-fold -covering and (, 2)-identifying, and hence, in particular, inequivalent to the code constructed earlier. Corollary 4.3. There is a unique (, 2)-identifying code (up to equivalence) attaining the bound M ( 2) (5) = 6... Denote by H 3 the Hamming code of length seven with the parity check matrix H =

38 36 On codes identifying sets of vertices Let C = H and C 2 = H be two cosets of H 3. Let further P and P 3 be the codes obtained by permuting C using (7,3)(4,2) (the notation (i, j) means that ith and jth columns are interchanged) and (6,3)(4,), respectively. By P 2 we denote the code obtained from C 2 using the permutation (,2)(3,5). It is easy to check (with computer) that U = P P 2 P 3 is (, 2)- identifying. The following result now follows from Theorem 4.6. Theorem 4.4. The code U is a (, 2)-identifying code of length seven and hence M ( 2) (7) = Constructions In this section, we present different ways to construct (, 2)-identifying codes and (r, l)-identifying codes for r 2 and l 2. In the first construction, we use shorter (, 2)-identifying codes to create longer ones. In the second construction, we get asymptotically good (under Conjecture 2.4) (, 2)-identifying codes using 3-covering codes. In subsection we prove direct sum constructions for (r, l)-identifying codes for r and l Constructions for (, 2)-identifying codes The construction of the next theorem yields two infinite families of optimal (, 2)-identifying codes. Theorem 4.5. If C F n is a (, 2)-identifying code, then C = {(π(u),u,u + v) u F n,v C}, where π(u) denotes a parity check bit on u, is a (, 2)-identifying code of length 2n +. Proof. By Theorem 4.3 the code C covers each word at least three times, and by [3, Theorems and 4.4.3] so does the code C. By Lemma 4.4 the code C is -identifying and all single words and pairs of words are distinguishable. Thus, we only need to check that all pairs are identified from one another. Let us divide the words of F 2n+ into two classes by their first bit and consider the codewords which cover a word in each class. Let x = (a,u,u + v) F 2n+, where u,v F n and a F. I If a = π(u), then I(x) = {(π(u),u,u + c) c C,d(c,v) }. II If a π(u), then I(x) = A {(a,u,u + v) d(u,u) =, c C : u + v = u + c}. Here A = {(π(u),u,u + v)} if v C, and A = /0 if v / C.

39 4.3 Constructions 37 So in both classes we are interested in codewords c C such that d(c,v). Namely, in the class II the properties d(u,u) = and u + v = u + c imply that also d(v,c) =. If I(x) = {(b i,s i,t i ) i =,2,...,k}, then in both cases I(C;v) = {s i + t i i =,2,...,k}. Suppose there were words x, y, z, and w in F 2n+ such that I(x,y) = I(z,w) and {x,y} {z,w},x y,z w. (4.) If v, v 2, v 3, and v 4 are v s of x, y, z, and w, respectively, then by the previous discussion I(C;v,v 2 ) = I(C;v 3,v 4 ). Since C is a (, 2)-identifying code we must have {v,v 2 } = {v 3,v 4 }. We will show that (4.) cannot hold. Assume contrary that (4.) holds. Because I(x) 3, we know by (4.) that at most one of the sets I(x) I(z) and I(x) I(w) has cardinality one or less. A similar remark applies to I(y), I(z) and I(w). Hence we can without the loss of generality assume that I(x) I(z) 2 and I(y) I(w) 2. Depending on which class x belongs to, also z belongs to the same class. Indeed, if x is in the class I, then only last n bits are changing in the codewords of I(x), and, thus, those are the only bits which can change in the codewords of I(z). This means that z belongs to the class I. Also, if x belongs to the class II, then the last n bits do not change at all in I(x) and so it is true also for I(z), and hence z belongs to the class II. Similarly, y and w belong to the same class. ) If x = (π(u ),u,u + v ) and y = (π(u 2 ),u 2,u 2 + v 2 ) are words in the class I, then also z = (π(u 3 ),u 3,u 3 + v 3 ) and w = (π(u 4 ),u 4,u 4 + v 4 ) are. Since in I(x) and I(z) the codewords begin with the same n + bits, we get u = u 3. Similarly u 2 = u 4. We can assume that I(z) I(y) (or that I(x) I(w), which is a symmetric case): otherwise I(z) = I(x) and I(w) = I(y) and hence z = x and w = y. Since z and y are both words in the class I, we get u 2 = u 3. The fact that {v,v 2 } = {v 3,v 4 } now implies that {x,y} = {z,w}. 2) Assume x = (π(u ) +,u,u + v ) and y = (π(u 2 ) +,u 2,u 2 + v 2 ) and so also z = (π(u 3 ) +,u 3,u 3 + v 3 ) and w = (π(u 4 ) +,u 4,u 4 + v 4 ) are words in the class II. Since in I(x) and I(z) the codewords end with the same n bits as x and z we get u + v = u 3 + v 3, and similarly u 2 + v 2 = u 4 + v 4. If now v = v 3 and v 2 = v 4 we are done, since then x = z and y = w. Suppose therefore that v 2 = v 3 and v = v 4. As in the previous case, we can assume that I(z) I(y). Now the last n bits must be the same in I(z) and I(y), and thus u 3 + v 3 = u 2 + v 2 and we get u 3 = u 2, i.e., y = z. By Lemma 2. the word z cannot cover the whole I(x), otherwise z = x. This would imply that x = z = y, a contradiction with x y. So w must cover at least one word from I(x) which implies u + v = u 4 + v 4 and now u = u 4, i.e., x = w. Therefore {x,y} = {z,w}. 3) Suppose finally that x = (π(u ),u,u + v ) is a word in the class I and y = (π(u 2 ) +,u 2,u 2 + v 2 ) is a word in the class II. Now z = (π(u 3 ),u 3,u 3 + v 3 ) is a word in the class I and w = (π(u 4 ) +,u 4,u 4 + v 4 ) is a word in the class

40 38 On codes identifying sets of vertices II. Since the first n + bits are the same in the codewords of I(x) and I(z) we must have u = u 3. In the codewords of I(y) and I(w) the last n bits are the same, so u 2 + v 2 = u 4 + v 4. If now v = v 3 and v 2 = v 4, we are done since then x = z and y = w. Suppose therefore that v = v 4 and v 2 = v 3. As in the previous case, I(z) I(x) = 2: otherwise z = x, v = v 3 and thus v = v 2 = v 3 = v 4 which together with the equality u 2 +v 2 = u 4 +v 4 implies that u 2 = u 4 and hence {x,y} = {z,w}. Also y w, since otherwise v 2 = v 4 = v = v 3 and again {x,y} = {z,w}; consequently d(y,w) = 2 and I(y) I(w) = 2. Now I(x) I(w) = and I(z) I(y) = (these cannot be greater than one, since x and w, y and z belong to different classes). Let c be the unique codeword in I(x) I(w), and c 2 be the unique codeword in I(y) I(z). Since all the codewords in I(x) begin with (π(u ),u,...) and all the codewords in I(w) end in (...,u 4 + v 4 ), we know that c = (π(u ),u,u 4 + v 4 ). Similarly, c 2 = (π(u 3 ),u 3,u 2 + v 2 ). Hence c = c 2, and I(y) = I(w) and I(x) = I(z). Therefore x = z and y = w, completing the proof. Corollary 4.6. M ( 2) (2n + ) 2 n M ( 2) (n). Corollary 4.7. For k : For k 3 : M ( 2) (3 2 k ) = 2 3 2k k. M ( 2) (2 k ) = 3 2 2k k. Proof. By Corollary 4.3, we know that M ( 2) (5) = 6. Using Corollary 4.6 recursively and the lower bound from Theorem 4.6 we get the first equality. Similarly, by Theorem 4.4 we get the second claim. By Corollary 4.7 we have an optimal (, 2)-identifying code for all lengths greater than or equal to five for which there also exists a perfect 3-fold -covering [3, Theorem 4.2.4]. Example 4.8. The code C = {00,0,0,} of length two is a perfect 3-fold -covering, but not (, 2)-identifying, since I(00, ) = I(00, 0) = C. By the construction of Theorem 4.5 we get a perfect 3-fold -covering C of length 5. This code is not (, 2)-identifying, because c = 00000, c 2 = 0000, c 3 = 000, c 4 = 0000 are codewords in C and I(c,c 2 ) = I(c 3,c 4 ) = {c,c 2,c 3,c 4 }. The next theorem shows that asymptotically (, 2)-identifying codes reach the lower bound of Theorem 4.6 under the assumption that the Conjecture 2.4 ([3, p. 352]) holds. Research is done with Tero Laihonen.

41 4.3 Constructions 39 Theorem 4.9. Let n 5 and C F n be a 3-covering which attains K(n,3). Then the code C = {c F n d(c,c ) 2} is (, 2)-identifying. Proof. For all x F n there is a word c C such that d(c,x) 3. Let x,y,z,w F n. If for some c C we have {x,y} B 3 (c ) = /0 and {z,w} B 3 (c ) /0, then it is clear that I(x,y) I(z,w). Hence, it is enough to show that I-sets of different sets {x,y},{z,w} of size at most two are different when x,y,z,w B 3 (c) for some c C. We have { n if x S0 (c) S I(C;x) (c) 3 if x S 2 (c) S 3 (c). Lemma 4.4 implies that all single words are distinguishable from each other and pairs of words. Without loss of generality we may assume c = 0, and we denote S i (0) = S i. Let us denote J(x) = I(x) (S S 2 ) for x B 3 (0). Now J(x) is called the J-set of x. We will show that J(x,y) J(z,w) for all {x,y} {z,w}, where x,y,z,w B 3 (0), x y and z w, this implies I(x,y) I(z,w). Assume to the contrary J(x,y) = J(z,w). Using Lemma 2. we see that, if J(x) n, then there is no set of size two in which x does not belong to such that this set would cover the whole J-set of x. i) Assume first x S 0 S, then without loss of generality x = z. By Lemma 2. we have d(x,y) 2 : otherwise w has to cover the whole J-set of y which implies y = w. Similarly, d(x,w) 2. If y S 0 S, then y = w and we are done. Assume that x = z S 0 and y,w S 2 C, now y J(z,w) unless y = w. Suppose that x = z S C and y S 2 C. Now d(x,y) = and in (J(y) \ {x}) S there is a codeword that should be covered by w S 2. Any possible choice for w has J(w) which is impossible to cover by {x,y}. Suppose then x = z S C and y S 3, then by the symmetry of the previous case also w S 3. Since d(x,y) = d(x,w) = 2 we have either d(y,w) = 2 or 4. Now J(y) S 2 = J(w) S 2 = 3 and x = z covers exactly two codewords from both sets J(y) and J(w). If d(y,w) = 4, then (J(w)\J(x)) S 2 J(x,y). If d(y,w) = 2, then J(w) J(y) S 2 = {c }. This c J(x) since the support of x is the one of the coordinate positions which is in both y s and w s supports. Thus, again J(w) J(x,y). ii) Suppose now that x S 2. By symmetry also y,z,w S 2 S 3. Since J(x) S = 2 we need two words of S 2 different from x to cover these words and still x C is not covered. Thus, without loss of generality z = x. If y S 2 C, then similarly w = y. If y S 3, then (J(y) S 2 )\J(x) = 2, but these two words cannot be covered by any w y.

42 40 On codes identifying sets of vertices iii) Finally, let x S 3. By symmetry also y,z,w S 3. Since J(x,y) 5 any set {z,w} S 3,{z,w} {x,y}, cannot cover the set J(x,y). Corollary M ( 2) (n) K(n,3) 2 (n + n2 ) Combining Theorem 4.6, Corollary 4.20 and Conjecture 2.4 we get an asymptotic result for (, 2)-identifying codes: When n M ( 2) (n) 3 2n n Direct sum constructions for radii r It is proved in [34] that if C F n is a (, 2)-identifying code, then C F is also (, 2)-identifying. In [45] it is proved for l 3 that a code C F n is (, l)- identifying is and only if it is a (2l )-fold -covering. The direct sum C F preserves this property and thus, C F F n+ is also a (, l)-identifying code. In Theorem 4.22 we prove that for all l 2 we get a (2, l)-identifying code C F n+2 by taking a direct sum of a (2, l)-identifying code of length n and F 2. At the same time we also prove the corresponding result for the radius r =. Theorems 4.26 and 4.28 generalize the result of Theorem We prove that a direct sum of r (, l)-identifying codes is (r, l)-identifying for all l 2. Lemma 4.2. Suppose r and l 2. Let C F n be an (r, l)-identifying code. Then for all x F n and for every e S (x) there is no set of size at most l 2, not containing x, that r-covers the set (C S r (x)) \ B r (e). Proof. If for some x F n and e S (x) there is a set Y F n such that Y l 2 and Y r-covers a set (C S r (x)) \ B r (e). Then I r (Y {x,e}) = I r (Y {e}), which is impossible. Theorem Let r 2 and l 2. If C F n is an (r, l)-identifying code, then D := C F r F n+r is (r, l)-identifying. Proof. Let X,Y F n+r, X Y, X = {x,...,x l }, Y = {y,...,y l2 } and l,l 2 l. Let us denote x i = (x i,x i ) and y j = (y j,y j ), where x i,y j Fn and x i,y j Fr for i l and j l 2. Denote X = {x,...,x l } and Y = {y,...,y l 2 }. If X Y, then there is c I r (C;X ) I r (C;Y ). Without loss of generality we can assume c I r (C;x ) \ I r(c;y ). Now (c,x ) I r(d;x) \ I r (D;Y ). Suppose X = Y. Assume to the contrary that I r (D;X) = I r (D;Y ). Because X Y, for some i there is x i y h for all h for which y h = x i. Without loss of generality we can assume i =. Now x Y and the words (x,y h ),...,(x,y h j ) Y do not r-cover codewords at distance r from x ending with x. By Lemma 4.2 we know that such a codeword exists and in Y there must be l words which

43 4.3 Constructions 4 r-cover these codewords. This implies that every word in Y appear only once. This also holds for X since X = Y and l = l 2 = l. By Lemma 4.2 we know that there is a codeword c I r (C;x ) \ I r(c;x \ {x }). If d(c,x ), then because d(x,y h ), there is a word f Fr, such that d(f,x ) r d(c,x ) and d(f,y h ) > r d(c,x ). Thus, (c,f) I r (D;X) \ I r (D;Y ), which is a contradiction. Suppose therefore that d(c,x ) = 0 and c is the only word in I r (C;x ) \ I r (C;X \ {x }). In particular, d(x,x \ {x }) r +. (4.2) Assume there is a word x k X \ {x } such that x k Y \ {y h }, that is x k y s, when x k = y s. As above we get a contradiction unless x k is the only codeword in I r (C;x k ) \ I r(c;x \ {x k }), in which case there is c k I r(c;x ) I r(c;x k ) (cf. the third paragraph). Now I r (C;(X \ {x }) {c k }) = I r(c;(x \ {x k }) {c k }), which is impossible. This means that X \ {x } = Y \ {y h }. Without loss of generality h =, and we have X = {x,x 2,...,x l } and Y = {y,x 2,...,x l } where x y, x = y and x y. Moreover, the set {x 2,...,x l } r-covers I r (C;x ) \ {x }. The assumption I r(d;x) = I r (D;Y ) implies that the set {x 2,...,x l } r-covers I r (D;x ) I r (D;y ). Suppose xt X \ {x } r-covers a codeword c (I r(c;x ) \ I r d(x,y ) (C;x )) \ I r(c;x \ {x,x t }) (such words xt and c always exist). Hence, the word x t = (xt,x t) X Y r-covers the codewords (c,x ) I r(d;x ) and (c,y ) I r(d;y ) (since d((x,x ),(c,y )) r d(x,y ) + + d(x,y ) r + ). We have 2r d(x t,(c,x )) + d(x t,(c,y )) = d(x t,c ) + d(x t,x ) + d(x t,c ) + d(x t,y ) 2d(x t,c ) + d(x,y ) Hence, d(xt,c ) r 2 d(x,y ), which implies c (I r (C;x t ) (B r (x )\B r d(x,y ) (x )))\I r (X \{x,x t }) : d(x t,c ) r. (4.3) This and (4.2) imply r + d(x,x t ) 2r. If r =, this is impossible. From now on r = 2 and d(x,x t ) = 3. If d(x,y ) = 2, then by (4.3) we have (I 2(C;xt ) I 2 (C;x )) \ I 2(C;X \ {x,x t }) I (C;xt ) I 2 (C;x ). Hence, for every y S 2 (x ) such that d(y,xt ) = we have I 2 (C;(X \ {x,x t }) {y }) = I 2 (C;(X \ {x t }) {y }), which is a contradiction.

44 42 On codes identifying sets of vertices Suppose then d(x,y ) =. If (S (x ) I 2(C;x t )) \ I 2 (X \ {x,x t }) = /0, then we are done as in the previous case. If there is c 2 (S (x ) I 2(C;x t ))\ I 2 (X \{x,x t }), then (c 2,y +) I 2(D;x )\I 2 (D;Y ). Namely, y + x t, otherwise x t could not cover any codeword at S 2 (y ) ending with y. Corollary For l 2 we have: M ( l) (n + ) 2M ( l) (n). M ( l) 2 (n + 2) 4M ( l) 2 (n). Theorem Let r and l 2. If C F n is an (r, l)-identifying code, then D := C F r+ F n+r+ is (r, l)-identifying. Proof. The first three paragraphs of the proof of Theorem 4.22 goes similarly. Using the same notations we can continue slightly differently. Now we only have the case d(x,c ) 0. Because d(x,y h ) there is a word f Fr+ such that d(f,x ) r d(c,x ) and d(f,y h ) > r d(c,x ). Thus, (c,f) I r (D;X) \ I r (D;Y ). The next example shows that Theorem 4.22 cannot be generalized for (3, 2)- identifying codes. Example By a computer it can be shown that the code C = {0} (S 3 (0) S 2 (000)) (F 8 \ B 4 (0)) F 8 is (3, 2)-identifying code of length 8. The code C F 3 is not (3, 2)-identifying since I 3 ( ,000000) = I 3 ( ,000000). Denote N = r i= n i and N = N n r. Theorem Suppose r. Let C i F n i for i r be (, 2)-identifying codes. Then C = C... C r is an (r, 2)-identifying code. Proof. Let X = {x,y}, Y = {z,w} F N, X Y and X, Y 2. Denote x = (x,...,x r ), y = (y,...,y r ), z = (z,...,z r ) and w = (w,...,w r ). If for some k there is {x k,y k } {z k,w k }, then because C k is (, 2)-identifying there is c k I (C k ;x k,y k ) I (C k ;z k,w k ). Without loss of generality we may assume c k I (C k ;x k ) \ I (C k ;z k,w k ). For all h r, h k, we have I (C h ;x h ) 3 > {z h,w h }. Thus, there is c h I (C h ;x h ) such that d(c h,{z h,w h }). Hence (c,...,c k,c k,c k+,...,c r ) I r (C;x) \ I r (C;Y ).

45 4.3 Constructions 43 Suppose that for all k r we have {x k,y k } = {z k,w k }. This is possible only if X = Y = 2. Because X Y we may assume x = (x,x 2,x 3,...,x r ) z = (x,y 2,z 3,...,z r ) y = (y,y 2,y 3,...,y r ) w = (y,x 2,w 3,...,w r ) where x y, x 2 y 2 and {x k,y k } = {z k,w k } for k 3. Because x h {z h,w h } for all 3 h r, then I (C h ;x h ) \ {x h } 2 > {z h,w h } \ {x h }. Thus, for all 3 h r there are c h I (C h ;x h ) such that d(c h,{z h,w h }). Similarly, we find corresponding codewords for y 3,...,y r. There is c i I (C i ;x i ) \ I (C i ;y i ) for i =,2. There are two possible cases A) d(x i,c i ) = and B) d(x i,c i ) = 0, if there is no codeword c i such that the case A would hold. In the latter case we must have d(x i,y i ) = 2 and I (C i ;x i ) I (C i ;y i ) = 2. Now Lemma 4.5 implies that there is c yi I (C i ;y i ) \ I (C i ;x i ) such that d(x i,c yi ) = 3 and d(y i,c yi ) =. If both x and x 2 belong to the case A, then (c,c 2,c 3,...,c r ) I r (C;X) \ I r (C;Y ). If x belong to the case A and x 2 belong to the case B, then (c,c y2,c 3,...,c r ) I r (C;Y ) \ I r (C;X). The case x in B and x 2 in A goes similarly. If both x and x 2 belong to the case B, then (x,c y2,c 3,...,c r ) I r (C;Y ) \ I r (C;X). When proving the corresponding result for l 3 the next lemma is used. Lemma Let l 3 and C F n is an (, l)-identifying code. For all x F n and Y F n, Y l, we have c I (x) \ {x} such that d(c,y ). Proof. By [45] or Theorem 4.3 we know that for all x F n we have I (x) 2l. Thus, I (x) \ {x} 2l 2 > l Y, which implies the claim. Theorem Suppose l 3. Let C i F n i for i r be (, l)-identifying codes. Then C = C... C r F N is an (r, l)-identifying code. Proof. By [45] or Theorem 4.3 we know that for all x F n i we have I (C i ;x) 2l. We prove by induction on r that C = C... C r is an (r, l)-identifying code and, moreover, for every X,Y F N, X, Y l and X Y, there is c I r (C;x) \ I r (C;Y ) such that r d(c,x) r for some x X or c I r (C;y) \ I r (C;X) such that r d(c,y) r for some y Y. The first step of induction is trivial, r =. The induction hypothesis is that the claim holds for C = C... C r.

46 44 On codes identifying sets of vertices Let X,Y F N, X, Y l, X Y, X = {x,...,x l } and Y = {y,...,y l2 } and denote x i = (x i,,...,x i,r,x i,r ) = (x i,x i,r ), y j = (y j,,...,y j,r,y j,r ) = (y j,y j,r ) for i l and j l 2. Denote X = {x,...,x l } and Y = {y,...,y l 2 }, X i = {x,i,...x l,i} and Y i = {y,i,...,y l2,i}, for i r. If X Y, then the induction hypothesis implies that there is a codeword c I r (C ;X ) I r (C ;Y ). Without loss of generality we may assume c I r (C ;x ) \ I r (C ;Y ). The induction hypothesis implies, that r 2 d(c,x ) r. By Lemma 4.27 we know that there is c r I (C r ;x,r ) \ {x,r } such that d(c r,y r ). Hence, (c,c r ) I r (C;X) \ I r (C;Y ). Moreover, r d((c,c r ),x ) r. Suppose next X = Y and X r Y r. Because C r is (, l)-identifying there is c r I (C r ;X r ) I (C r ;Y r ). Without loss of generality we may assume c r I (C r ;x,r ) \ I (C r ;Y r ). By Lemma 4.27, for every k r there is c k I (C k ;x,k ) \ {x,k } such that d(c k,y k ). Hence, (c,...,c r,c r ) I r (C;X) \ I r (C;Y ). Moreover, (c,...,c r,c r ) S r (x ) S r (x ). Suppose then X = Y and X r = Y r. There is for some k, x k,r y h,r for all h for which x k = y h, otherwise X = Y. Suppose Y \ {x k } l 2, then by Lemma 4.2 there is a codeword c (C S r (x k )) \ I r (Y \ {x k }). There is c r I (C r ;x k,r ) \ I (C r ;Y r \ {x k,r }). If d(c r,x k,r ) =, then (c,c r ) I r (C;X) \ I r (C;Y ). Suppose c r = x k,r is the only codeword in I (C r ;x k,r ) \ I (C r ;Y r \ {x k,r }). This implies I (C r ;x k,r )\{x k,r } = 2l 2, Y r \{x k,r } = l, for all y h,r Y r \{x k,r } we have d(x k,r,y h,r ) = 2 and I (C r ;x k,r ) I (C r ;y h,r ) = 2. Moreover, we have d(y h,r,y h2,r) = 4, for h h 2. There is x t X = Y such that x k x t. Otherwise, X = {x k } = Y and X r = Y r imply that X = Y. Suppose there is y t,r such that (x t,y t,r ) Y and (x k,y t,r) Y. Let us choose c r I (C r ;x k,r ) I (C r ;y t,r ). As mentioned above d(c r,x k,r ) = d(c r,y t,r ) = and d(c r,y h,r ) = 3 for all y h,r x k,r and y h,r y t,r. We get (c,c r) I r (C;X) \ I r (C;Y ), moreover d((c,c r),(x k,x k,r)) = r. If such an y t,r does not exist, then X = Y = {x k,x t } and x k appears l times in Y. Now (x t,x k,r ) Y \ X and (x t,x t,r ) X \Y. Moreover, Y \{x t } = l 2, as above we have c t (C S r (x t ))\I r (C ;Y \ {x t }). Because I (C r ;x t,r ) 2l > 3 (I (x t,r ) I (x k,r )) {x t,r } there is c t,r I (C r ;x t,r ) \ ({x t,r } I (C r ;x k,r )). As above it is proved, we know that d(c t,r,y h,r ) 3 for all y h,r Y r, y h,r x t,r. Hence, (c t,c t,r ) I r (C;(x t,x t,r )) \ I r (C;Y ). Moreover, d((x t,x t,r ),(c t,c t,r )) = r.

47 4.3 Constructions 45 Suppose then that Y \ {x k } = l. This implies Y = l and every word in Y appears there only once. Because X = Y the same holds for X, as well. Hence, there is (x k,x k,r) X \Y and (x k,y h,r) Y \ X for some h. By the induction hypothesis there is c I r (C ;X ) I r (C ;X \ {x k }). This implies c I r (C ;x k ) and r 2 d(c,x k ) r. Suppose d(c,x k ) = r. Because (I (C r ;x k,r )\{x k,r })\I (C r ;y h,r ) 2l 4 > l 2 = Y r \{x k,r,y h,r } we know that there is c r I (C r ;x k,r ) \ {x k,r } such that d(c r,y h,r ) 2 and d(c r,y j,r ) for j h. Thus, (c,c r ) I r (C;X) \ I r (C;Y ) and d((c,c r ),(x k,x k,r)) = r. Suppose d(c,x k ) = r 2. We separate cases depending on the distance between x k,r and y h,r. In every case, we will find a codeword c r such that d(x k,r,c r ) 2, d(y h,r,c r ) 3 and d(c r,y j,r ), for j h. Then (c,c r ) I r (C;X)\I r (C;Y ) and it satisfies the wanted distance properties. If d(x k,r,y h,r ) 4, then clearly there is c r I (C r ;x k,r \ {x k,r }) which satisfies the conditions. If d(x k,r,y h,r ) = 3, then S (x k,r ) B 2 (y h,r ) = 3. Because always n i 4 (otherwise no (, l)-identifying code exists), there is z S (x k,r )\ B 2 (y h,r ). Now I (C r ;z)\{x k,r,z} 2l 3 > l 2 Y r \{x k,r,y h,r } implies that there is c r I (C r ;z) which satisfies the conditions. If d(x k,r,y h,r ) = 2, then (I (C r ;x k,r ) \ {x k,r }) \ B 2 (y h,r ) 2l 4 > l 2 = Y r \ {x k,r,y h,r }. Thus, we find c r I (C r ;x k,r ) which satisfies the conditions. If d(x k,r,y h,r ) =, then there is z S (x k,r ), z y h,r and I (C r ;z) \ ({x k,r,z} (I (y h,r ) I (z))) 2l 4 > l 2 Y r \ {x k,r,y h,r }. Thus, there is c r I (C r ;z) S 2 (x k,r ), which satisfies the conditions. Combining the results of Theorems 3.8, 4.26 and 4.28 we get the next corollary. Corollary For r and l we have M r ( l) r ( i= n i ) r i= M ( l) (n i ). In Table 4. we have collected lower and upper bounds on the cardinalities of (, 2)- and (2, 2)-identifying codes.

48 46 On codes identifying sets of vertices Table 4.: Bounds on the cardinalities of (, 2)- and (2, 2)-identifying codes. n (, 2) (2, 2) 4 a 5 6 a b 2 25 d 7 48 a 4 35 d b d c b b (5,5) 52 c (5,6) b (5,7) b,c (6,7) b (7,7) c (7,8) b (7,9) b,c (6,) b (7,) c (7,2) b (9,) b,c (0,) b (,) c (,2) The lower bounds for (, 2)-identifying codes come from Theorem 4.6. The lower bounds for (2, 2)-identifying codes for n 7 come from Theorem 4.7. The lower bound for (2, 2)-identifying code for n = 6 come from Theorem 4.. The upper bounds for (2, 2)-identifying for n 9 come from Corollary The used lengths are mentioned in parenthesis. Key to upper bounds a Short code constructions in Section 4.2 b For r 2 and l 2 : M r ( l) (n + ) 2 r M ( l) (n), Theorem c (π(u), u, u + v)-construction, Theorem 4.5. d See Appendix
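The direct sum constructions of Section 4.3.2 are straightforward to carry out in practice. The sketch below is ours, not the thesis's; codewords are integers and the starting codes are assumed to be given. It builds the direct sum C_1 (+) ... (+) C_r used in Theorem 4.26 and Corollary 4.29; for small lengths the resulting code can then be verified with a brute-force I-set comparison as in the earlier sketch.

def direct_sum(codes, lengths):
    """Direct sum C_1 (+) C_2 (+) ... (+) C_r: all concatenations (c_1, ..., c_r) with c_i in C_i.
    Codewords are integers; lengths[i] gives the length (number of bits) of C_i."""
    result, total = {0}, 0
    for code, n in zip(codes, lengths):
        result = {(w << n) | c for w in result for c in code}
        total += n
    return result, total

# Hypothetical usage (C1 is assumed to be a (1, <= 2)-identifying code of length n1, not given here):
# D, n = direct_sum([C1, C1], [n1, n1])
# By Theorem 4.26, D is then (2, <= 2)-identifying of length 2*n1, and |D| = |C1|^2,
# in accordance with Corollary 4.29.

The size of the direct sum is the product of the sizes of the constituent codes, which is exactly the bound recorded in Corollary 4.29.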

Chapter 5

Strongly identifying codes

In this chapter, we consider a modification of identifying codes, namely strongly identifying codes. We again wish to identify a set of vertices, but now under the assumption that codewords which belong to the fault pattern might be missing from the I-set. The purpose of a strongly identifying code is easiest to understand through an example. If a processor belonging to the fault diagnosis system of a multiprocessor architecture is malfunctioning, it may fail to send any information. If the controller cannot trust the received information, some extra conditions are needed before the malfunctioning processors can be identified. In this case we need strongly identifying codes. In this model a malfunctioning processor sends a report which may be correct or wrong.

In order to identify fault patterns we require that a code C ⊆ F^n satisfies the following: for any two different subsets X and Y of F^n (|X|, |Y| ≤ l), the sets I_r(X) \ S and I_r(Y) \ T, where S ⊆ X ∩ C and T ⊆ Y ∩ C, are nonempty and distinct. Then obviously, we can always distinguish between X and Y. The sets I_r(X) \ S and I_r(X) \ S′, where S, S′ ⊆ X ∩ C (S ≠ S′), are automatically different from each other. This leads to the following definition.

Definition 5.1. Let C ⊆ F^n be a code and l ≥ 1 an integer. Let further X ⊆ F^n and |X| ≤ l. Define

Î_r(X) = {U | I_r(X) \ (X ∩ C) ⊆ U ⊆ I_r(X)}.     (5.1)

If for all X_1, X_2 ⊆ F^n, where X_1 ≠ X_2 and |X_1|, |X_2| ≤ l, we have Î_r(X_1) ∩ Î_r(X_2) = ∅, then we say that C is a strongly (r, ≤ l)-identifying code.

If we replace (5.1) by Î_r(X) = {I_r(X)}, we get the definition of a (regular) (r, ≤ l)-identifying code. Therefore, a strongly identifying code is always a (regular) identifying code. We call a strongly (r, ≤ 1)-identifying code a strongly r-identifying code, and we denote I′_r(y) = I_r(y) \ {y}. The smallest cardinality of a strongly (r, ≤ l)-identifying code of length n is denoted by M_r^{(≤l)SID}(n). When l = 1, we denote

M_r^{(≤1)SID}(n) = M_r^{SID}(n). Usually, we omit r from these notations when r = 1. A code attaining the smallest cardinality is called optimal.

In this chapter we consider strongly 1-identifying codes and strongly (1, ≤ 2)-identifying codes. In both cases we present constructions and lower bounds. The chapter is based on [35, 48].

5.1 Strongly 1-identifying codes

In this section we construct strongly 1-identifying codes and prove lower bounds for them. Strongly 1-identifying codes lie between 1-identifying codes and (1, ≤ 2)-identifying codes, as we will see.

5.1.1 Constructions

The next construction is optimal for length 4.

Theorem 5.2. For n ≥ 4 we have M^SID(n) ≤ 2^{n−1}.

Proof. The set F^{n−1} × {0} is a strongly 1-identifying code. Here each codeword x is covered by exactly n codewords, and every non-codeword y = (y′, 1), where y′ ∈ F^{n−1}, is covered by the unique codeword (y′, 0). Thus, clearly, I(x) ≠ I(y), I′(x) ≠ I(y) and I′(x) ≠ I′(y) for any two distinct words x and y.

Theorem 5.3. Let C ⊆ F^n be a 1-identifying code with the property that d(c, C \ {c}) = 1 for all c ∈ C. Then D = C ⊕ F is strongly 1-identifying.

Proof. Let x = (x_1, x_2), y = (y_1, y_2) ∈ F^{n+1}, where x_1, y_1 ∈ F^n and x_2, y_2 ∈ F. If x_1 ≠ y_1, then there is a codeword c belonging to exactly one of the sets I(C; x_1) and I(C; y_1). Without loss of generality we can assume that c ∈ I(C; x_1) \ I(C; y_1). If d(c, x_1) = 1, then (c, x_2) ∈ I′(D; x) \ I(D; y). If d(c, x_1) = 0, then (c, x_2 + 1) ∈ I′(D; x) \ I(D; y). If x_1 = y_1, then x_2 ≠ y_2. Because d(C \ {c}, c) = 1 for all c ∈ C, we know that there is c ∈ C such that d(c, x_1) = 1. Now (c, x_2) ∈ I′(D; x) \ I(D; y). This proves the claim.

Corollary 5.4. If C is a strongly 1-identifying code, then so is its direct sum with F.

In Chapter 7, Theorem 7.8, it will be proved that the direct sum of a 1-locating-dominating code and F^2 is a 1-identifying code which satisfies the property of the previous theorem. Thus, we have the next theorem, where L(n) denotes the smallest cardinality of a 1-locating-dominating code of length n.

Theorem 5.5. M^SID(n + 3) ≤ 8L(n).
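Definition 5.1 with l = 1 can be tested directly by listing, for every word, all the I-sets an observer might see. The sketch below is ours, not the thesis's; words are integers with bit i as coordinate i, and the small length n = 4 in the usage example is chosen only to keep the exhaustive check cheap. It verifies the code F^{n−1} × {0} of Theorem 5.2.

from itertools import combinations

def ball1(x, n):
    return {x} | {x ^ (1 << i) for i in range(n)}

def observed_sets(code, x, n):
    """All I-sets an observer may see for the fault pattern {x}: a malfunctioning
    codeword may stay silent (Definition 5.1 with l = 1)."""
    I = frozenset(ball1(x, n) & code)
    return {I, I - {x}} if x in code else {I}

def is_strongly_1_identifying(code, n):
    fams = {x: observed_sets(code, x, n) for x in range(2 ** n)}
    if any(frozenset() in f for f in fams.values()):
        return False                  # some observation could be empty
    for x, y in combinations(range(2 ** n), 2):
        if fams[x] & fams[y]:
            return False              # x and y could produce the same observation
    return True

# Theorem 5.2: F^{n-1} x {0} is strongly 1-identifying; here n = 4, so 8 codewords.
n = 4
C = {w << 1 for w in range(2 ** (n - 1))}     # all words whose last bit is 0
print(is_strongly_1_identifying(C, n))         # True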

In the next theorem we show that a (1, ≤ 2)-identifying code is strongly 1-identifying.

Theorem 5.6. M^SID(n) ≤ M^{(≤2)}(n).

Proof. Let C ⊆ F^n be a (1, ≤ 2)-identifying code with M^{(≤2)}(n) codewords. By Theorem 4.3 every word of F^n is covered by at least three codewords. Thus, for all x ∈ F^n the set I(x) is unique. Moreover, I′(c) ≠ I′(c′) for all c, c′ ∈ C (c ≠ c′). In particular, if I′(c) = {c_1, c_2} = I′(c′), then I(c_1, c) = I(c_1, c′), and this is not possible.

Theorem 5.7. M^SID(n) ≤ 2nK(n − 1, 2).

Proof. Suppose n ≥ 4. Let C ⊆ F^{n−1} be a code attaining the bound K(n − 1, 2). Denote e_1 = 10···0 ∈ F^{n−1}. The code C_1 := {c + a | c ∈ C, a ∈ B_1(0) \ {e_1}} is 1-identifying with the property that d(c, C_1 \ {c}) = 1 for all c ∈ C_1. Namely (compare the proof of [40, Theorem 8]), for all x ∈ F^{n−1} there is c ∈ C such that d(c, x) ≤ 2. Without loss of generality assume c = 0. If |I(x)| ≥ n − 1, then x = 0. If |I(x) ∩ B_1(0)| = 2, then the support of x is the union of the supports of the codewords in the I-set. If |I(x) ∩ B_1(0)| = 1, then either x = e_1 and I(x) ∩ B_1(0) = {0}, or x ∈ B_2(0) and the support of x is {1} ∪ supp(c_0), where c_0 is the codeword in I(x) ∩ B_1(0). The claim follows from Theorem 5.3. The result holds also for n = 3.

Definition 5.8. Denote by W_r(n, k, h) the minimum number of codewords in any code C of length n whose codewords all have weight h and which has the property that all the sets I_r(C; x), x ∈ S_k(0), are nonempty and different.

Trivially, W_1(n, 2, 1) = n. Let us look at the values of W_1(n, 3, 2). If we denote by D(C) the smallest number of different codewords of C whose sum is the all-zero word, we get the following theorem.

Theorem 5.9. Let C̄ = S_2(0) \ C, where C ⊆ S_2(0). The sets I(C̄; x) are nonempty and distinct for all words x of weight three if and only if D(C) ≥ 5.

Proof. Clearly, there is a word x of weight three for which I(C̄; x) = ∅ if and only if C contains three words whose sum is the all-zero word. Any word x ∈ S_3(0) for which |I(C̄; x)| ≥ 2 is obviously uniquely identified. We only need to check whether there are two words of weight three, say x_1 and x_2 with supports {i, j, k} and {i, j, m}, such that I(C̄; x_1) = {c} = I(C̄; x_2), where the support of c equals {i, j}. This happens if and only if the words with the supports {i, k}, {j, k}, {i, m} and {j, m} are all in C, i.e., if and only if some four codewords of C add up to the all-zero word.
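Theorem 5.9 reduces the identification of the weight-three words to the condition D(C) ≥ 5, which can be checked mechanically: no three and no four distinct codewords of C may sum to the all-zero word. The following small sketch is ours, not the thesis's; weight-two words are encoded as integers with exactly two bits set, so a word with support {i, j} corresponds to a graph edge.

from itertools import combinations
from functools import reduce

def satisfies_theorem_5_9(C):
    """Check D(C) >= 5 for a set C of weight-two words (words as integers):
    no 3 and no 4 distinct codewords may sum (XOR) to the all-zero word."""
    for k in (3, 4):
        for sub in combinations(C, k):
            if reduce(lambda a, b: a ^ b, sub) == 0:
                return False
    return True

# A 5-cycle on five coordinates has girth five, so D(C) >= 5:
C5 = {(1 << i) | (1 << ((i + 1) % 5)) for i in range(5)}
print(satisfies_theorem_5_9(C5))                              # True
# Adding a chord creates a triangle, i.e. three codewords summing to zero:
print(satisfies_theorem_5_9(C5 | {(1 << 0) | (1 << 2)}))      # False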

According to the theorem above, determining the values of W_1(n, 3, 2) is equivalent to finding the largest code C ⊆ F^n whose codewords are all of weight two and for which D(C) ≥ 5. Consider an undirected graph whose vertex set is the set of coordinates {1, ..., n} and whose edges are pairs of such coordinates (that is, supports of words of weight two). Hence, calculating W_1(n, 3, 2) is equivalent to finding a graph with the maximal number of edges and with the length of the shortest cycle (girth) at least five. The problem of finding such graphs is well known and several exact values of W_1(n, 3, 2) are known (see, e.g., [20, 21, 63] and the references therein). For example, the values of W_1(n, 3, 2) are 1, 3, 5, 9, 13, 18, 24, 30, 39, 48, 57, 68, 79, 92, 105, 119, 130, 146, 163, 181, 199 for the lengths n = 3, ..., 23, respectively. It also follows that

lim_{n→∞} W_1(n, 3, 2)/n^2 = 1/2.

Example 5.10. The constant weight code S_2(0) \ E of length seven, where the eight removed words of weight two in E correspond to the edges of a graph on seven vertices with eight edges and girth at least five, attains the value W_1(7, 3, 2) = 13.

Theorem 5.11. Suppose n ≥ 7. If A is a code attaining the value W_1(n, 3, 2), then every word of weight one is covered by at least three codewords of A.

Proof. It is easy to see that for n ≥ 8 the code consisting of all the words of length n and weight two except the ones with supports {1, 2}, {2, 3}, ..., {n−1, n}, {n, 1} and {1, 5} identifies all the words of weight three. From this and Example 5.10 we can deduce that for n ≥ 7 there are at least n + 1 words of weight two that do not belong to A.

Suppose to the contrary that there is a word x of weight one which is covered by fewer than three codewords. Let {s} be the support of x. Without loss of generality we can assume that s = 1. Denote by i the number of codewords of weight two that cover x.

If i = 0, then none of the words with supports {1, j}, for j = 2, ..., n, is a codeword. There are n − 1 such words. Hence there is also a word whose support is {k, h} (k, h ≥ 2, k ≠ h) and which is not in A. Now the word of weight three with support {1, k, h} is not covered at all, a contradiction.

If i = 1, then for exactly one j the word with support {1, j} is a codeword. Now there must be at least n + 1 − (n − 2) = 3 non-codewords of weight two which begin with zero. If any of them does not contain j in its support, then we are done as in the previous case. Without loss of generality, the words with supports {j, k_1} and {j, k_2} are non-codewords, for some k_1 and k_2, k_1 ≠ k_2. But now the words of weight three with supports {1, j, k_1} and {1, j, k_2} cannot be distinguished, because both of them are covered only by the word with support {1, j}.

If i = 2, then for some j_1 and j_2 the words with supports {1, j_1} and {1, j_2} are codewords. Now there are at least four non-codewords of weight two that begin

53 5. Strongly -identifying codes 5 with zero. As in the previous case either j or j 2 must occur in the support of each of them. This means, without loss of generality, that there are non-codewords with supports { j,k } and { j,k 2 }, k j 2, k 2 j 2, k k 2. We get a contradiction as in the previous case. Theorem 5.2. For n 7, M SID (n) (W (n,3,2) + n )K(n,3). Proof. Let A be a set which attains the value W (n,3,2), B realizing the value W (n,2,) = n, and D a code of length n and covering radius three. We show that (A B) + D is strongly identifying code. Let x F n and denote H(c) = {c + y y A B} for c D. Then d(x,c) 3 if and only if H(c) I(x) /0 (and if and only if H(c) I (x) /0). Therefore, using I(x) (or I (x)) we can find a codeword c D such that d(x,c) 3. Without loss of generality, assume that c = 0. If x = 0, then we immediately know it, because I(x) and I (x) both contain at least n 3 words of weight one, which uniquely identify x. Assume that x 0. If I(x) (or I (x)) contains 0, then we know that w(x) =, and by Theorem 5., at least three of the words in A cover x, and therefore uniquely identify it. We can now assume that we know w(x) 2. Then w(x) = 2 if and only if I(x) (or I (x)) contains at least one word of weight one. When it is known that w(x) = 2, the words in B uniquely identify x. When we know that w(x) = 3, then words of A uniquely identify x Nonexistence results Denote by N i the number of codewords of weight i in a code C. Lemma 5.3. Let C be a strongly -identifying code of length n 3. If 0 C, then at least 2n/3 codewords of weight two are needed to identify all the words of weight one. If 0 C, then we need at least 2(n )/3 codewords of weight two. Proof. Assume first that 0 / C. If s denotes the number of words x S (0) such that I (x) =, then s + 2(n s) x S (0) I (x) = 2N 2. Since s N 2, we get N 2 2n/3. Assume then that 0 C. Then I (x) = {0} for at most one x S (0). Considering the sets I (x) \ {0}, we similarly get N 2 + 2(n N 2 ) and the second claim follows. x S (0) I (x) \ {0} = 2N 2,

54 52 Strongly identifying codes Theorem 5.4. M SID (n) ( n 2 2 n 2n/3 ). + 2n/3 2(n )/3 Proof. Assume that C is a code with M SID (n) codewords. Applying Lemma 5.3 to all the words of F n we get ( ) n (2 n M SID (n)) 2n/3 + M SID (n) 2(n )/3 M SID (n), 2 from which the theorem follows. Lemma 5.5. Let n 4. If C F n is an optimal strongly -identifying code, then for all x F n we have I (x) n. Proof. Suppose to the contrary that for some x we have I (C;x) = n. Without loss of generality we can assume x = 0. We will show that C \ {y} is a strongly identifying code, we choose y in the following way: we take y S (0) such that I(C;y) S 2 (0) =, if such a word exists; otherwise we take any y S (0). It suffices to show that I (v) I(u) I(v) and I (v) I (u) I(v) (5.2) for all u B (y) and v F n. Here and from now on the notations I and I all refer to the code C \ {y}. Since I (0) = n 3 we may always exclude the cases u = 0 or v = 0. If also v B (y) \ {0} and v u, then either u y or v y; say u y. Now u + y I (u) S (0) but u + y I(v). This implies (5.2) in this case. Suppose then that v B (y). Let first w(v) 3. Since C is strongly identifying code, we get (5.2) for u = y. If u y, I (u) S (0) /0 and I(v) S (0) = /0. This gives (5.2) for these v. Let then w(v) = 2. Now there clearly is a codeword a I (v) S (0) such that a I(u) (because u 0). Let finally w(v) =. Since I (C;v) I (C;y) we may assume that w(u) = 2. If 0 C, then 0 I (v) but 0 / I(u). If 0 C, then the choice of y guarantees that there is a word of weight two in I (v) which does not belong to I(u). Therefore, we have (5.2) for all words u and v. In the proof of the next lower bound (cf. [40, Theorem 3] and [6, Theorem 9]) we use the concept of excess, see for example [3]: Assume that C F n has covering radius one. If a vector x F n is -covered by exactly i + codewords of C, then we say that the excess E(x) on x is i. In general, the excess E(V ) on a subset V F n is defined by E(V ) = x V E(x).

55 5. Strongly -identifying codes 53 Theorem 5.6. For n 9, M SID (6) 8, M SID (7) 32, M SID (8) 57. M SID (n) 2 n+ (n 2 2n + 4) n 3 n n + 8 Proof. Let C be a code realizing M SID (n) with n 6. The number of points for which I(x) = is at most M SID (n). The points x with I(x) = 2 are called sons and the ones with I(x) > 2 are called fathers. If I(x) = 2, then there is a unique point y such that I(x) I(y) and it is called the father of x. A family consists of a father and its sons. The space is partitioned by the families and the points with I(x) =. Assume that f is a father and denote by S = S(f) the set of sons of f. Let I(f) = i 3. We shall examine the average excess of a family, i.e., the function S + i S + =: g(i, S ). Let us bound above the number of sons of a father and thus bound below the average excess of a family. Without loss of generality we can assume that the father f is the all-zero word. Clearly, a father may have at most ( i 2) sons, but as we shall see, we can often say more. Assume first that f C. By the previous lemma we know that i n. Suppose first that i = n. Denote by x the unique word in S (f) \C. We show that for all c I(f) all the n 2 words in B (c) \ {f,c,x + c} cannot be sons of f. Indeed, there must be a codeword c c in I(c), otherwise I (c) = /0, and if c x + c we are done, so assume c = x + c is the unique codeword in I (c). There has to be a codeword c of weight three in I(c ): otherwise I(c ) = I(c). We have c = c + x + z for some z I(f). But then c+z cannot be a son, since I(c+x) 3. Consequently, c I(f) can have at most n 3 sons of f at distance one from it. Counting in two ways the pairs (c, s) where c I(f), s S and d(c, s) = we obtain 2 S (n 3) I(f) which implies S (n 3)(n )/2 =: U. Consider the case f C and i = n 2. Denote by x and x 2 the two words in S (f) \ C. Let c I(f). Because C is strongly identifying code, I (x + c) I (x 2 + c), and therefore one of these contains a codeword of weight three, which is not contained in the other. Without loss of generality, x + c + z C for some z I(f), z c. Then c,z,x + c + z I(c + z) and hence c + z is not a son. Thus, there cannot be n 3 sons in S (c). Counting as above, we get S (n 2)(n 4)/2 =: U 2. Notice that for values i = 4,...n the function g(i, ( i 2) ) is decreasing, and g(3, ( ( 3 2) ) = g(6, 6 ( 2) ). Hence for i = 3,...,n 3 we may bound g(i, i 2) ) below by g (n) := g(n 3, ( ) n 3 2 ), when n 9.

56 54 Strongly identifying codes Assume now that f C and I(f) = i. In this case S consists of the sons at distance one and two from f. Since a son is covered by exactly two codewords, there can be only one son at distance one from f. Indeed, if s,s 2 S and d(f,s ) = d(f,s 2 ) =, then I (s ) = I (s 2 ) = {f}. Consequently, S ( ) i 2 +, because there are at most ( ) i 2 sons of weight two. Notice that g(i, ( ) i 2 + ) is decreasing on i (i = 4,...,n) and we may bound it below by g 2 (n) := g(n, ( ) n 2 + ) for n 6. The minimum of g(3, ( ( 3 2) ),g(4, 4 ( 2) ),g(5, 5 ) 2 ) and g2 (3) is g(3, ( 3 2) ) = 5/4. So to find the minimum of the above mentioned lower estimates on g(i, S ) we need to compare the functions g (n),g(n,u ),g(n 2,U 2 ),g 2 (n), and 5/4. When 6 n 8 the minimum is 5/4 and for n 9 the minimum is g 2 (n). Denote by M(n) the minimum. Consequently, the average excess of a family is at least M(n) and hence for every family F the excess of it E(F ) M(n) F. Since the excess on F n by C is (n + )M SID (n) 2 n we get Routine calculations give the claim. (n + )M SID (n) 2 n (2 n M SID (n))m(n). There is (see, [64, Construction 4.24]) infinite sequences of codes (C i ) i= of length n i and covering radius two such that K(n i,2)v (n,2) lim i 2 n =. i Such families are known to exist for the lengths n i = 2 i i/2 2 2 where i 4 is even and n i = 23 2 i/2 4 3, where i 0 is even. Combining this result with Theorem 5.7 and Theorem 5.6, we have two infinite sequences of strongly identifying codes such that 2 n+ n ( + f (n)) MSID (n) 2n+ ( + g(n)), n where f (n) and g(n) tend to 0, when n tends to infinity. Using the results of [40] we also see that we have an infinite sequence of lengths such that asymptotically the ratio between the smallest cardinalities of identifying codes and strongly identifying codes tends to one Short codes In this section we prove sharp results for the cardinalities of strongly -identifying codes for short lengths. Theorem 5.7. M SID (3) = 6, M SID (4) = 8.

57 5. Strongly -identifying codes 55 Proof. The lower bound on M SID (3) follows from Theorem 5.4 and the upper bound from Theorem 5.3. The upper bound in the case n = 4 comes from Theorem 5.2. Theorem 5.4 gives that M SID (4) 7. Assume M SID (4) = 7 and let C be a code attaining the value M SID (4). We can assume that neither 0000 nor is a codeword. Namely, if every word-complement pair contains at least one codeword, there would be at least eight codewords. By Lemma 5.3 we need at least 2 4/3 = 3 codewords of weight two to identify all words of weight one. Assume that N 2 = 3. These codewords can be chosen in two ways. Either there are two words of weight one that are covered by two codewords, or there is a word which is covered by three codewords; in both cases all the other words of weight one are covered by one codeword each. In the first case, if x is a word of weight one such that I (x) = 2, then there is a word y of weight three such that I (y) = I (x), so we need at least four codewords of weight two. In the second case if x is the word for which I (x) = 3 then for the complement x of x we have I (x) = /0, so again N 2 4. Because I (0000) /0 and I () /0, we have N and N 3. Because I (x) /0 for all x of weight two, there is a codeword c of weight one such that its complement is also a codeword, or N + N 3 4. Without loss of generality we can assume that 000 and 0 are codewords. Because I (00), I (00) and I (00) must all be different, we need at least two more codewords of weight one or three. Again N + N 3 4. Theorem 5.8. M SID (5) = 4. Proof. A strongly -identifying code of length 5 and cardinality 4 is {0000, 0000, 0000, 000, 000, 000, 000, 00, 00, 00, 00, 0, 0, 0}. Let C be a strongly -identifying code of length five. To prove the lower bound we can assume that neither nor is a codeword. Namely, if every word-complement pair contained at least one codeword, then there would be at least 6 codewords. We will prove that N + N 3 7. By symmetry (considering the code { + c c C}) we then know that also N 2 + N 4 7, and the claim follows. Because I (00000) is not empty, N. Case : N =. We can assume that 0000 C. Denote A = {000, 000, 000, 000}. The sets I (x), x A are different and, because I (00000) = {0000}, each of the sets I (x) contains at least one codeword of weight three. Hence, C contains at least three codewords of weight three that begin with, say c, c 2 and c 3. Each of these three codewords covers one word of weight two that is not in A. But still we have three words of weight two that are not covered by them. If c, c 2 and c 3 are at distance one from one word in A, then the remaining three words of weight two that are not covered, say x, x 2, and x 3, are all at distance one from one word of weight three. Because I (x ), I (x 2 ), and I (x 3 )

58 56 Strongly identifying codes must be nonempty and different, there must be at least three more codewords of weight three, and hence N + N 3 7. Suppose therefore that two words from the set A are covered by two of the codewords c, c 2, and c 3 and two by one. Without loss of generality we can assume that c, c 2, and c 3 are 00,00, and 00. Now the words 000,000, and 000 are not covered. The only possibility to make I (000), I (000), and I (000) all nonempty and different using only two more codewords of weight three is to choose 00 and 00 as codewords. But then I (0) = I (000), so we need all in all at least six codewords of weight three, and hence N + N 3 7. Case 2: N = 2. Without loss of generality assume 0000,0000 C. Because I (00000) I (000) we can assume 00 C. Because I (000) I (000), at least one of the words 00, 00, 00, or 00 must be in C. The first two and the last two cases are symmetric. In the first case N + N 3 7, because I (000), I (000), and I (000) are nonempty and different, and we need three more codewords of weight three. In the last case suppose that 00 is a codeword. Again I (000), I (000), and I (000) must be nonempty and different, which requires three codewords of weight three, but of course 00 can be used as one of them. Let us assume that N 3 = 4. Then of the remaining two codewords of weight three (at least) one has to cover either 000 or 000 but not both. There are now three possibilities to choose them: ) 00, 00, now I (0) = I (000), 2) 00, 00, now I (0) = I (000), 3) 00, 00, now I (0) = I (000). So in each case we need at least one more codeword of weight three, and so N + N 3 7. Case 3: N 3. By Lemma 5.3 we know that N 3 4, so N + N Strongly (, 2)-identifying codes In this section we consider strongly (, 2)-identifying codes. We begin with a lower bound which turns out to be optimal for infinitely many cases. We continue with constructions. In the first construction we prove that the direct sum of a (, 2)-identifying code and F is a strongly (, 2)-identifying code. This result combined with the infinite optimal families of (, 2)-identifying codes (Corollary 4.7) gives optimal results for strongly (, 2)-identifying codes. In the second construction we prove that (π(u), u, u + v)-construction can be applied to strongly (, 2)-identifying codes. Theorem 5.9. Let l 2. Then M ( l)sid (n) (2l ) 2n. n Proof. Let C F n be a strongly (, l)-identifying code. If x C, then I(x) 2l. Indeed, otherwise if I(x) = {c,...,c 2l 2 } and x i (i =,...,l ) is the

59 5.2 Strongly (, 2)-identifying codes 57 unique word different from x at distance one from both c 2i and c 2i, we have I(x,...,x l ) = I(x,...,x l,x), which is impossible. Obviously, fewer than 2l 2 codewords in I(x) is also impossible. Assume then that x C. Suppose that I(x) = {c,...,c 2l 2,x} and define x i as above for all i =,...,l. Now I(x,...,x l ) = I(x,...,x l,x) \ {x} which is not allowed and hence I(x) 2l. Thus, we obtain which gives the claim. C (n + ) 2l C + (2l )(2 n C ) As we shall see, this lower bound can often be attained when l = 2. The proof of the theorem implies the next corollary. Corollary Let C F n be a strongly (, l)-identifying code, then for all x F n we have I (x) 2l. The interested reader is referred to [44] for further results on strongly (, l)- identifying codes when l 3. The following theorem turns out to be very useful when we construct (optimal) strongly (, 2)-identifying codes. Theorem 5.2. If C F n is a (, 2)-identifying code, then D = C F F n+ is strongly (, 2)-identifying. Proof. The code C is 3-fold -covering by Theorem 4.3. The direct sum of C and F preserves this property. Moreover, every codeword is covered by at least four codewords of D, namely, for all c C and a F we have I(D;(c,a)) = {(c,a) d(c,c ),c C} {(c,a + )}. Thus, for all x F n+ we have I (x) 3 and applying Lemma 4.4 we know that in F n+ single words are distinguishable from each other and pairs of words also in the strong sense. We still need to prove that for all {x,x 2 } {y,y 2 }, x i,y i F n+, x x 2 and y y 2, we have J(x,x 2 ) J(y,y 2 ), where I(D;Z)\{Z} J(Z) I(D;Z). Denote x i = (x i,x i ) and y i = (y i,y i ), where x i,y i F n and x i,y i F. From now on in this proof the I-set notation relates to the code C. i) Assume first {x,x 2 } {y,y 2 }. Then there is c I(x,x 2 ) I(y,y 2 ). Without loss of generality we may assume that c I(x ) \ I(y,y 2 ). If c {x,x 2 }, then (c,x ) J(x,x 2 ) \ J(y,y 2 ). If c = x and c x 2, then (c,x + ) J(x,x 2 ) \ J(y,y 2 ). (Similarly, the case c = x 2 and c x.)

60 58 Strongly identifying codes Suppose c = x = x 2, hence {x,x 2 } = {(c,0),(c,)}. Suppose also there is no other codeword than c in I(c ) \ I(y,y 2 ) (if there is, we get a solution from it). This implies d(c,{y,y 2 }) = 2. Without loss of generality we may assume I(c ) I(y ) I(c ) I(y 2 ). If y y 2, then there is a codeword c such that d(c,c ) = d(c,y ) = and d(c,y 2 ) >. Hence, (c,y + ) J(x,x 2 ) \ J(y,y 2 ). If y = y 2, then there is c 2 (I(y ) \ {y }) \ I(c ) (otherwise we have a square of codewords: for c I(c ) I(y ) we have I(c,c ) = I(y,c ) which is impossible). Thus, (c 2,y ) J(y,y 2 ) \ J(x,x 2 ). ii) Suppose now that {x,x 2 } = {y,y 2 }. We must have x x 2 and y y 2 otherwise we cannot have {x,x 2 } {y,y 2 }. We may assume x = y and x 2 = y 2. Without loss of generality we can assume x y, that is x = y +. There is a codeword c I(x ) \ I(x 2 ). If d(c,x ) =, then (c,x ) J(x,x 2 ) \ J(y,y 2 ). If d(x,c ) = 0 (and I(x ) \ I(x 2 ) = {x }), then there is c 2 (I(x 2 ) \ {x 2 }) \ I(x ) : otherwise there is a square of codewords as above. Now (c 2,x 2 ) J(x,x 2 ) and (c 2,x 2 ) J(y,y 2 ) if and only if y 2 = x 2 (y 2 = x 2 ). Suppose y 2 = x 2. The assumption d(x,c ) = 0 implies that there is a codeword c 3 C such that d(c 3,x ) = d(c 3,x 2 ) =. Now (c 3,x 2 + ) J(x,x 2 ) J(y,y 2 ), since either x 2 + = x or x 2 + = y, but not both. Corollary M ( 2)SID (n) 2M ( 2) (n ). We are now in a position to give two infinite families of optimal codes. Corollary For k : For k 3 : M ( 2)SID (3 2 k ) = 2 3 2k k. M ( 2)SID (2 k ) = 3 2 2k k. Proof. By Corollary 4.7 we know that M ( 2) (n) = 2 3 2k k, if n = 3 2 k (k ), and M ( 2) (n) = 3 2 2k k, if n = 2 k (k 3). Combining this with Corollary 5.22 and the lower bound from Theorem 5.9 we obtain the equations. No infinite family of optimal strongly -identifying codes is known. Corollary If C is strongly (, 2)-identifying, then the direct sum with F is as well. Before presenting one more optimality result, we show a construction which yields a strongly (, 2)-identifying code of length 2n + from a strongly (, 2)-identifying code of length n.

61 5.2 Strongly (, 2)-identifying codes 59 Theorem Let C be a strongly (, 2)-identifying code of length n. The code C = {(π(u),u,u + c) u F n,c C}, where π(u) denotes the parity check bit of u, is a strongly (, 2)-identifying code of length 2n +. Proof. By Corollary 5.20, we know that for all x F n, I (x) 3. The construction maintains this property by [3, Theorem and 4.4.3]. Lemma 4.4 now implies that all single words are distinguishable from each other and pairs of words also in the strong sense. Thus, to prove the claim it suffices to show that for all {x,y},{z,w} F 2n+, {x,y} {z,w}, x y and z w we have J(x,y) J(z,w) (5.3) for all the sets J, where I(D;Z) \ {Z} J(Z) I(D;Z), for Z F 2n+. Let us divide the words of F 2n+ into two classes according to their first bit and consider the codewords which cover a word in each class. Let x = (a,u,u + v) F 2n+. I If a = π(u), then I(x) = {(π(u),u,u + c) c C,d(c,v) }. II If a π(u), then I(x) = A {(a,u,u + v) d(u,u) =, c C : u + v = u + c}. Here A = {(π(u),u,u + v)} if v C, and A = /0 if v / C. Hence, in both classes we are interested in codewords c C such that d(c,v). Namely, in the class II the properties d(u,u) and u + v = u + c imply that also d(v,c). If I(x) = {(b i,s i,t i ) i =,2,...,k}, then in both cases I(C;v) = {s i + t i i =,2,...,k}. If x C and it is removed from I(x), then v is removed from I(C;v). Notice that if I(x,y) \ {x,y} = {(b i,s i,t i )} i =,2,...,k}, where x = (a,u,u +v ) and y = (a 2,u 2,u 2 +v 2 ), then I(C;{v,v 2 })\{v,v 2 } {s i + t i i =,2,...,k} I(C;{v,v 2 }). Similarly, if I(x,y) \ {x} = {(b i,s i,t i ) i =,2,...k}, then I(C;{v,v 2 }) \ {v } {s i + t i i =,2,...,k} I(C;{v,v 2 }). Suppose one of the inequalities (5.3) does not hold. Denote x = (a,u,u + v ), y = (a 2,u 2,u 2 + v 2 ), z = (a 3,u 3,u 3 + v 3 ) and w = (a 4,u 4,u 4 + v 4 ). Since C is a strongly (, 2)-identifying code we must have {v,v 2 } = {v 3,v 4 } by the previous discussion. By Theorem 4.5 we know that the code C is (, 2)-identifying. Thus, it is enough to consider pairs where there is at least one codeword. Without loss of generality we may assume x C. Thus, x belongs to a class I. We must have either I (x) I(z) 2 or I (x) I(w) 2. Without loss of generality suppose I (x) I(z) 2. Because the last n bits are changing in the words of I (x), this must be true also in I(z), thus, also z belongs to the class I. This also implies that u = u 3.

62 60 Strongly identifying codes If y belongs to the class II, then we must have I(y) I(w) 2, since z from the class I can cover at most one codeword from I(y). Hence, now also w belongs to the class II. Thus, y and w belong to the same class. Let first all x,y,z and w lie in the first class. If I (x) I(w) /0, then we have u = u 4. Similarly, if I(y) I(z) /0, then u 2 = u 3. Now u = u 2 = u 3 = u 4 and together with {v,v 2 } = {v 3,v 4 }, we get {x,y} = {z,w}, what is a contradiction. If I (x) I(w) = /0, then I (x) I(z) 3, which implies by Lemma 2. that x = z. Thus, I (y) I (w) 3 and y = w. Assume next that x and z belong to the first class and y and w lie in the second. Because y and w cannot be codewords, it suffices to verify only the following inequalities of (5.3) I(x,y) \ {x} I(z,w) \ {z} and I(x,y) \ {x} I(z,w). Suppose I(x,y)\{x} = I(z,w)\{z}. We have x z, since otherwise I(x,y) = I(z,w), which is impossible due to the fact that C is (, 2)-identifying. We must have I (x) I (z) /0 and hence u = u 3. Because I (e) 3 for all e F 2n+, we obtain I (x) I(w) = and I (z) I(y) = (the number of elements in the intersections cannot be greater than one, since x and w and also y and z belong to different classes). The words in these two intersections must be different, since x z. Since words y and w belong to the class II and we must have I(y) I(w) /0, the last n bits of the words in I(y) and I(w) are the same. On the other hand, the words in the intersections I (x) I(w) and I (z) I(y) must also end with those n bits, but now I (x) I (z) 3, since u = u 3, and thus x = z, a contradiction. Assume finally I(x,y) \ {x} = I(z,w). Evidently, x C and x z. If z is a codeword, then in I(z) there are at least two codewords which are not in I (x), these two words cannot be both in I(y) either (y and z are in different classes), so z / C. Now I(x,y) \ {x} = I(z,w) \ {z}, where z / C but belongs to the class I. This is impossible by the previous case. There does not exist a strongly (, 2)-identifying code of length less than five. Indeed, for length four we notice that there cannot be a strongly (, 2)- identifying code, since always I(0000, ) \ {0000, } = I(00, 00) \ {00,00} regardless of the code. In the next proof, we denote by Si n (x) the words at distance i from a word x in F n. Theorem M ( 2)SID (5) = 22. Proof. By Theorem 4. we have M ( 2) (4) =, and thus we obtain the upper bound using Corollary We prove the lower bound by using Corollary 5.20, I (x) 3 for all x F 5. Suppose M ( 2)SID (5) 2. By [3, p. 383] we need at least 22 codewords to cover each word in F 5 at least four times. Hence we know that there is a noncodeword which is covered by exactly three codewords. Let C be a code attaining the bound M ( 2)SID (5). Without loss of generality, assume C, and

63 5.2 Strongly (, 2)-identifying codes 6 I(00000) = {0000,0000,0000}. Each word of weight one must be covered by at least three codewords of weight two. Each codeword of weight two covers two words of weight one. Thus, we need at least 3 5/2 = 8 codewords of weight two. If C this is a symmetric situation, so all in all there are now at least 22 codewords. Suppose C. Now again there are at least three codewords of weight four. The word 000 must be covered by three codewords of weight three, those can only be {00,00,00} =: T. The words of the set A := S 3(000) S 2 (00) must be covered by at least two codewords of weight three. Denote by G a = a S 2(00), where a S3 (000). The words of weight three always cover exactly two words of A, except that 00 covers none. Moreover, a word in S3 5(00000) \ ({00} T ) covers a word both in G a and G b for some a and b (a b). Let c S3 5(00000) cover a word in G a and G b and, moreover, c 2 S3 5(00000) cover the other word in G a and a word in G d, where b a d and b d. The distance between the yet uncovered (by others than the words in T ) words of G b and G d is four. Thus, we need at least seven codewords of weight three in C. Again there are at least 22 codewords. In Table 5. we have collected lower and upper bounds on the cardinalities of strongly -identifying and strongly (, 2)-identifying codes.

64 62 Strongly identifying codes Table 5.: Bounds on the cardinalities of strongly -identifying codes and strongly (, 2)-identifying codes. n M SID (n) M ( 2)SID (n) 3 a 6 A - 4 b 8 B - 5 c 4 C 22 G 6 a, d 8 20 D 32 H 7 d E I 8 d A 96 H 9 d A 7 92 I 0 d A I d F I, J 2 d A 024 H 3 d A I, J 4 d A I 5 d A I, J 6 d A 2288 H 7 d A I, J 8 d A I 9 d A I, J 20 d A, D I 2 d A I, J 22 d D I 23 d E I, J 24 d D I Lower bounds on M ( 2)SID (n) for n 6 come from Theorem 5.9 and for n = 5 the lower bound comes from Theorem M SID (n) M ( 2)SID (n) a Theorem 5.4 G Theorem 5.26 b Theorem 5.7 H Corollary 5.23 c Theorem 5.8 I Theorem 5.2 d Theorem 5.6 J Theorem 5.25 A Theorem 5.3 B Theorem 5.2 C Theorem 5.8 D Theorem 5.7 E Theorem 5.2 F Theorem 5.5
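The (π(u), u, u + c)-construction of Theorem 5.25, used for several of the upper bounds in the table above, is also easy to state as code. The sketch below is ours, not the thesis's; the input code C and its length n are assumed to be given, and words are integers with the layout parity bit, u, u + c from the most significant bits down. The sketch only builds the new code of length 2n + 1; that it is again strongly (1, ≤ 2)-identifying is the content of Theorem 5.25.

def pi(u, n):
    """Parity check bit of the word u (an integer with n bits)."""
    return bin(u).count("1") % 2

def pi_u_u_plus_c(code, n):
    """The (pi(u), u, u + c)-construction: from a code C of length n, build
    {(pi(u), u, u + c) : u in F^n, c in C} of length 2n + 1."""
    return {(pi(u, n) << (2 * n)) | (u << n) | (u ^ c)
            for u in range(2 ** n) for c in code}

# Hypothetical usage: if C is strongly (1, <= 2)-identifying of length n, then
# new_code = pi_u_u_plus_c(C, n) has length 2n + 1 and |new_code| = 2^n * |C|.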

Chapter 6

Linear identifying codes

A binary code C ⊆ F^n is called linear if all the pairwise sums of codewords belong to the code. This means that C is a subspace of F^n. There is a linearly independent set of words of F^n that generates the code C. The number of these generating codewords, k, is called the dimension of the code. Thus, the cardinality of the code is 2^k.

In this chapter we consider linear identifying codes. We solve the smallest dimensions of linear (1, ≤ l)-identifying and linear strongly (1, ≤ l)-identifying codes for all possible lengths and for all l. The requirement of linearity increases the number of codewords in most cases compared with the case where linearity is not required. However, we will see that for infinitely many lengths linear (1, ≤ l)-identifying codes and linear strongly (1, ≤ l)-identifying codes for l ≥ 3 are as good as the corresponding codes in the case where linearity is not required. This chapter is based on [55].

The smallest dimension of a binary linear (r, ≤ l)-identifying code of length n is denoted by k_r^{(≤l)}[n]. A code attaining the smallest dimension is called optimal. The next lemma will often be used.

Lemma 6.1. If C ⊆ F^n is a linear code and c ∈ C, then for all x ∈ F^n we have I(x + c) = I(x) + c.

Proof. Clearly, y ∈ I(x + c) if and only if y ∈ C and d(x + c, y) ≤ 1. This is equivalent to the fact that y + c ∈ C (because C is linear) and d(x, y + c) ≤ 1; that is, y + c ∈ I(x). This is equivalent to y ∈ I(x) + c.

Lemma 6.1 implies that in a linear code each codeword is covered by the same number of codewords. Namely, suppose C is a linear code. Then for all c ∈ C we have I(0 + c) = I(0) + c. Thus, each codeword is covered by the same number of codewords as 0, which is always a codeword. By the lemma, we also know that if x, y ∉ C, e_i ∈ S_1(0) and both x + e_i and y + e_i belong to C, then |I(x)| = |I(y)|. Indeed,

I(x) + x + e_i = I(x + x + e_i) = I(e_i) = I(y + y + e_i) = I(y) + y + e_i. In fact, we get more; namely, |I(x)| = |I(e_i)| = |I(y)|.

Let us recall the definitions from [3]. A code C is a µ-fold 1-covering if the codewords of C cover each word of F^n at least µ times. The smallest dimension of a linear µ-fold 1-covering is denoted by k[n, 1, µ]. A code attaining the smallest dimension is called optimal. We denote by k[n, 1, µ, ν] the smallest dimension of a binary linear code of length n such that each non-codeword of F^n is covered by at least µ codewords and each codeword is covered by at least ν codewords. These covering codes are a generalization of µ-fold 1-coverings, and k[n, 1, µ] = k[n, 1, µ, µ]. The next theorem is the sphere-covering bound for such codes.

Theorem 6.2. Suppose µ and ν are positive integers. Then

k[n, 1, µ, ν] ≥ n + log_2 µ − log_2(n + 1 + µ − ν).

Proof. Denote K = 2^{k[n,1,µ,ν]}. The claim follows from the direct calculation

νK + µ(2^n − K) ≤ K(n + 1).

A linear k-dimensional code C ⊆ F^n can always be defined by choosing an (n − k) × n matrix H, called its parity check matrix, such that x ∈ C if and only if Hx^T = 0. Let x ∈ F^n be arbitrary, and consider its syndrome Hx^T. If Hx^T appears in H as the i-th column, then clearly the word obtained by changing the i-th bit of x belongs to C. Consequently, if x is a codeword, the number of codewords within distance one from it equals one plus the number of zero columns in H; if x is a non-codeword, the number of codewords within distance one from it equals the number of times the syndrome of x appears as a column of H.

The proof of the next theorem gives optimal constructions for all possible lengths of linear codes such that all non-codewords are covered at least µ times and all codewords at least ν times.

Theorem 6.3. Let µ and ν be positive integers. When n = (2^r − 1)µ + ν − 1 + s, where r ≥ 0 and 0 ≤ s ≤ 2^r µ − 1, then k[n, 1, µ, ν] = n − r.

Proof. The lower bound comes from Theorem 6.2. If r = 0, then by Theorem 6.2, k[n, 1, µ, ν] ≥ n. In F^n the codewords are covered at least ν times (and there are no non-codewords); thus, F^n is the optimal code.

Suppose then that r ≥ 1. Let H be any (r × n)-matrix which contains the all-zero column at least ν − 1 times and every non-zero column at least µ times; since n = (2^r − 1)µ + ν − 1 + s with s ≥ 0, such a matrix exists. The matrix H clearly has rank r, and the code which has H as its parity check matrix is as required, by the previous discussion.

6.1 On linear 1-identifying codes

In this section, we first prove a lower bound for linear 1-identifying codes. We also give a construction which increases the length and the dimension of a linear 1-identifying code by one so that the result is again a linear 1-identifying code. The corresponding result is not known for 1-identifying codes in general. In Theorem 6.6 we combine these results and obtain the optimal dimensions for all possible lengths of linear 1-identifying codes.

Theorem 6.4. For n ≥ 2 we have k^{(≤1)}[n] ≥ n + log_2 3 − log_2(n + 3).

Proof. Suppose C ⊆ F^n is a linear 1-identifying code of length n and cardinality K. We will show that each word of F^n is covered either by one codeword or by at least three codewords. Because C is 1-identifying we have I(x) ≠ ∅ for all x ∈ F^n, and |I(x)| = 1 for at most K words. Suppose that for some y ∈ F^n we have |I(y)| ≥ 2. Then {y + e_1, y + e_2} ⊆ I(y) for some distinct e_1, e_2 ∈ B_1(0). Now {y + e_1, y + e_2} ⊆ I(y + e_1 + e_2), and by Lemma 6.1 (applied to the codeword e_1 + e_2) we have |I(y + e_1 + e_2)| = |I(y)|. If |I(y)| = 2, this gives I(y) = I(y + e_1 + e_2), which contradicts the identifying property; hence |I(y)| ≥ 3. Since y was chosen arbitrarily, we know that for all y ∈ F^n with |I(y)| > 1 we have |I(y)| ≥ 3. Now we can calculate

K + 3(2^n − K) ≤ K(n + 1),

from which the claim follows.

For linear 1-identifying codes we get the next nice result, which is not known for 1-identifying codes in general.

Theorem 6.5. k^{(≤1)}[n] ≤ k^{(≤1)}[n − 1] + 1.

Proof. Let D be a linear 1-identifying code of length n − 1 and dimension k^{(≤1)}[n − 1]. If each codeword is covered by more than one codeword, then by Theorem 3.20 (see also [6]) the direct sum D ⊕ F is a 1-identifying code of length n. Clearly, the code D ⊕ F is linear.

Suppose then that in D each codeword is covered only by itself; this is equivalent to the condition that the minimum distance of D is at least two. Then, as noticed in the proof of Theorem 6.4, each non-codeword is covered by at least three codewords. Denote C(0) = {00, 11} ⊆ F^2 and C(1) = {01, 10} ⊆ F^2. We will show that the code

C = {(x_1, x_2, ..., x_{n−2}, a) | (x_1, x_2, ..., x_{n−1}) ∈ D, a ∈ C(x_{n−1})}

is a linear 1-identifying code of length n. If G_D is a generator matrix of D, then the matrix G_C obtained from G_D by appending a zero column and adding the row (0, ..., 0, 1, 1) is a generator matrix of C, and hence C is linear. Since the minimum distance of D is at least two, we see from the generator matrix G_C that the minimum distance of C is two. Namely, (0, ..., 0, 1, 1) ∈ C and, thus, the minimum distance is at most two. If x ∈ D and x ≠ 0, then (x, 0) ∈ C and the weight of (x, 0) is the same as the weight of x. The remaining nonzero codewords of C are of the form (x, 0) + (0, ..., 0, 1, 1) with x ∈ D; if the last bit of x is one, such a codeword has the same weight as x, and otherwise its weight is greater. Thus, every nonzero codeword of C has weight at least two, and each codeword of C is covered only by itself.

Suppose x = (x′, a) ∈ F^n, where x′ ∈ F^{n−2} and a ∈ F^2, is a non-codeword. Assume that a ∈ C(z), where z ∈ F. Now (x′, z) ∉ D and we know that |I(D; (x′, z))| ≥ 3. In particular, I(D; (x′, z)) = {(c, z) ∈ D | d(x′, c) ≤ 1} ∪ A, where A = {(x′, z + 1)} if (x′, z + 1) ∈ D and otherwise A is empty. Now I(C; x) = {(c, a) ∈ F^n | d(x′, c) ≤ 1, (c, z) ∈ D} ∪ B, where B = {(x′, b) ∈ F^n | b ∈ C(z + 1)} if (x′, z + 1) ∈ D and otherwise B is empty. This means that also in C each non-codeword is covered by at least three codewords. Since the intersection of three Hamming balls of radius one with distinct centres contains at most one word, we know that each non-codeword is distinguishable. Hence, each word of F^n has a unique I-set. Thus, C is a linear 1-identifying code of length n and dimension k^{(≤1)}[n − 1] + 1.

Theorem 6.6. Let n = 3(2^r − 1) + s for r ≥ 1 and 0 ≤ s ≤ 3 · 2^r − 1. Then k^{(≤1)}[n] = n − r.

Proof. Let n = 3(2^r − 1). By Theorem 6.3 there is a linear code such that each non-codeword is covered by exactly three codewords and each codeword is covered only by itself. Thus, this code is 1-identifying. The dimension of the code is

n − r. The upper bounds for 1 ≤ s ≤ 3 · 2^r − 1 follow from Theorem 6.5. The lower bounds follow from Theorem 6.4.

At least for some lengths there are also other optimal linear 1-identifying codes than the ones constructed in the proofs of Theorem 6.6 and Theorem 6.5. Let n = 2^r − 1 = 3(2^{r−2} − 1) + 2^{r−2} + 2 for r ≥ 3 and denote by H the Hamming code of length n. Then the code C = H ∪ (H + e_1) ∪ (H + e_2) ∪ (H + e_1 + e_2), where e_1 = 10···0 and e_2 = 010···0, is a linear code and each word of F^n is covered by exactly four codewords; thus, it is 1-identifying. The code C is optimal, and by repeatedly taking the direct sum with F we get optimal codes up to the length 3(2^{r−1} − 1) − 1. These codes and the codes of Theorem 6.6 are not equivalent, since in the codes of Theorem 6.6 all codewords are covered only by themselves.

6.2 On linear (1, ≤ l)-identifying codes

Theorem 6.7 solves the dimensions of linear (1, ≤ 2)-identifying codes for all possible lengths. The optimal results for linear (1, ≤ l)-identifying codes, when l ≥ 3, follow from the results in [45].

Theorem 6.7. A linear code C ⊆ F^n is (1, ≤ 2)-identifying if and only if it is a 5-fold 1-covering. In particular, if n = 5 · 2^r − 1 + s, where r ≥ 0 and 0 ≤ s ≤ 5 · 2^r − 1, we have k^{(≤2)}[n] = k[n, 1, 5] = n − r.

Proof. Suppose first that |I(C; x)| ≥ 5 for all x ∈ F^n. By Lemma 2.1, no set of size at most two to which x does not belong can cover the whole I-set of x. Thus, C is (1, ≤ 2)-identifying.

Let C be a linear (1, ≤ 2)-identifying code. By Theorem 4.3 we know that |I(x)| ≥ 3 for all x ∈ F^n. Let us first consider codewords. Suppose c ∈ C. Since C is linear, 0 ∈ C. Suppose first that |I(c)| = 3. Then by Lemma 6.1, I(0) = {0, c_1, c_2} for some codewords c_1, c_2 ∈ S_1(0). By linearity, c_1 + c_2 ∈ C. Again by Lemma 6.1, I(c_1) = {0, c_1, c_1 + c_2} and I(c_1 + c_2) = {c_1, c_2, c_1 + c_2}, and thus I(0, c_1) = I(0, c_1 + c_2) = {0, c_1, c_2, c_1 + c_2}, which is impossible. Hence |I(c)| ≥ 4 for all c ∈ C. If |I(c)| = 4, then I(0) = {x, y, z, 0}, where the supports of x, y, and z are {i}, {j}, and {k}, respectively. By linearity, there are codewords c_1, c_2, c_3 and c_4 with supports {i, j}, {i, k}, {j, k} and {i, j, k}, respectively. But now I(0, c_4) = I(y, c_2) = {0, x, y, z, c_1, c_2, c_3, c_4}. Thus, each codeword is covered by at least five codewords.

We claim that also each non-codeword is covered by at least five codewords. First, we show that |I(x)| ≥ 4 for all x ∉ C. Assume on the contrary that x ∉ C

70 68 Linear identifying codes and I(x) = 3. Without loss of generality (by Lemma 6.) we can assume that x S (0). There are two non-codewords y and z in S (0) such that x + y and x + z belong to the code, and by linearity also y + z C. By Lemma 6., I(x,y) = {0,x + y,x + z,y + z} = I(x,z), which is a contradiction. Hence, I(x) 4 for all x C. Assume now that x C and I(x) = {x + e,x + e 2,x + e 3,x + e 4 } where e i S (0), for i =,...,4 are different non-codewords. Denote Using Lemma 6. we get y = x + e + e 2 z = x + e 3 + e 4 v = y + e 3 + e 4. I(y) = {y + e = x + e 2,y + e 2 = x + e,y + e 3,y + e 4 } I(z) = {z + e,z + e 2,z + e 3 = x + e 4,z + e 4 = x + e 3 } I(v) = {v + e,v + e 2,v + e 3 = y + e 4,v + e 4 = y + e 3 }. Now z+e = x+e 3 +e 4 +e = y+e 3 +e 4 +e 2 = v+e 2 and similarly z+e 2 = v + e. Thus, we have I(x,v) = I(y,z) which is impossible, and, hence, I(x) 5 for all x F n. The optimal dimensions follow from Theorem 6.3. Let l 3. It is proved in [45, Theorem 2] that C F n is (, l)-identifying if and only if it is a (2l )-fold -covering. Thus, we have the next theorem, the optimal dimensions follow again from Theorem 6.3. Theorem 6.8. If l 3 and n = (2l )2 r + s, where r 0 and 0 s (2l )2 r, then k ( l) [n] = k[n,,2l ] = n r. For l 3, we know by [45, Corollary ] that M ( l) (2l )2n (n) = n + if and only if there are integers i 0, µ 0 > 0 such that µ 0 (2l ) and 2l 2 i µ 0 and n = µ 0 2 i. When n = 2 r (2l ) for r 0 the above requirements are true and (2l )2 n /(n + ) = 2 n r, and, thus, the linear codes constructed above for lengths n = 2 r (2l ) where r 0 are optimal also in the case where linearity is not required. When l = 2 and l = 3 we want, in both cases, each word to be covered by at least five codewords, and, thus, in these two cases the optimal codes are the same.

71 6.3 Linear strongly identifying codes Linear strongly identifying codes We defined strongly (r, l)-identifying codes in Chapter 5. In this section, we solve all the optimal dimensions for linear strongly (, l)-identifying codes for any l and for any possible code length. The smallest dimension of a linear strongly (, l)-identifying code is denoted by k ( l)sid [n]. Theorem 6.9. For n 3 we have k ( )SID [n] n + log 2 3 log 2 (n + 2). Proof. Let C be a linear strongly -identifying code, and denote K = C. There are at most K words which are covered exactly by one codeword. Because a strongly -identifying code is -identifying, we know from the proof of Theorem 6.4 that all the other words are at least 3-fold -covered. For all c C, I(c) > since otherwise I (c) = /0, which is impossible. In fact, codewords are covered by at least four codewords. Assume to the contrary that I(c) = 3 for c C. Then by Lemma 6., I(0) = 3. In particular, I(0) = {0,c,c 2 }, where c,c 2 S (0) are codewords. By linearity c + c 2 C and using again Lemma 6. we get I (0) = I (c + c 2 ) = {c,c 2 }, which is a contradiction. Thus, we can calculate from which the claim follows. 4K + K + 3(2 n 2K) K(n + ) Theorem 6.0. Let n = 3(2 r )++s, where r and 0 s 3 2 r. Then k ( )SID [n] = n r. Proof. Let H be an (r n)-matrix such that there is at most one nonzero binary r-tuple which occurs only once as the column of H and all the other r-tuples (including the all-zero tuple) occur at least three times. As in the proof of Theorem 6.3, we know that the code which has H as its parity check matrix has the following properties. Each codeword is covered at least four times. There are 2 n r or 0 non-codewords which are covered by exactly one codeword. All the other non-codewords are covered by at least three codewords. The I-sets I(x) for all x F n are unique since at distance one from each codeword there is at most one non-codeword which is covered only by this one codeword, and all the other words of F n are covered by at least three codewords. By Lemma 2. we know that three codewords is enough to make an I-set unique. Also the sets I (x) for all codewords are distinguishable, since again there are at least three words in I (x). The lower bound follows from Theorem 6.9.

72 70 Linear identifying codes There is also a linear strongly -identifying code of length n = 3 (Theorem 6.0 does not give this). We have k ( )SID [3] = 3 since F 3 covers each codeword four times and there are no non-codewords. Theorem 6.9 gives the lower bound. Theorem 6.. Let l 3. If n = (2l )2 r + s, where r 0 and 0 s (2l )2 r, then k ( l)sid [n] = k[n,,2l,2l] = n r. Proof. By [44] we know that C is strongly ( l)-identifying if and only if C covers each word at least 2l times and each codeword at least 2l times. Thus, we have the claim. The optimal dimensions follow from Theorem 6.3. Theorem 6.2. If n = 5 2 r + s, where r 0 and 0 s 5 2 r, then k ( 2)SID [n] = k[n,,5,6] = n r. Proof. First we prove the lower bound. Suppose C is a linear strongly (, 2)- identifying code. By Theorem 6.7 we know that each word in F n must be covered by at least five codewords, because otherwise C cannot be a linear (, 2)- identifying code. If codewords were covered by five codewords, then I(0) = {c, c 2, c 3, c 4, 0} for some c,c 2,c 3,c 4 C of weight one. By linearity and Lemma 6. it would be I (0) I (c +c 2 +c 3 +c 4 ) = {c, c 2, c 3, c 4, c +c 2 +c 3, c +c 2 +c 4, c +c 3 +c 4, c 2 +c 3 +c 4 } = I (c +c 2 ) I (c 3 +c 4 ). Thus, all codewords must be covered by at least six codewords. Upper bound: By the proof of Theorem 6., we know that a code attaining the dimension k[n,, 5, 6] is strongly (, 3)-identifying. This implies that such a code is also a linear strongly (, 2)-identifying code. The optimal dimensions follow from Theorem 6.3. The optimal linear strongly (, l)-identifying codes are the same for the values l = 2 and l = 3. Let l 3. By [44, Theorem 3] we know that M ( l)sid (n) = (2l )2 n /n, when there are integers i 0, µ 0 > 0 such that µ 0 (2l ), 2l 2 i µ 0 and n = µ 0 2 i. If n = (2l )2 r, when r 0, the optimal linear strongly (, l)- identifying codes of length n are optimal also in the case where the linearity is not required.

73 Chapter 7 Locating-Dominating Codes Let G be a graph with a vertex set V. A subset S V is said to be a dominating set if every vertex in V \ S is adjacent to at least one vertex in S. A dominating set S is defined to be locating if for all u,v V \ S the sets of adjacent vertices in S are different. This is the definition of locating-dominating sets in graphs. Locating-dominating sets in graphs are considered for example in [54, 60 62]. We consider locating-dominating sets or codes in binary Hamming spaces, and we generalize the definition to locate-dominate more than one vertex. This chapter is based on [36]. Theorem 7.22 is not published before. Consider the binary hypercube (or an arbitrary graph). Assume that it is used to model a multiprocessor architecture, and that each node corresponds to a processor and each edge corresponds to a dedicated link between two processors. We choose some of the processors as codewords, and ask each of them to check its r-neighbourhood. Each codeword sends us the symbol 2 if it itself is malfunctioning, if it itself is fine, but at least one processor in its r-neighbourhood is malfunctioning, and 0 otherwise. When we have received the reports from all the codewords we wish to locate the malfunctioning processors, under the assumption that there are at most l of them. The integers r and l are given at the outset. This is the classical locating-dominating set problem if l = (and then the requirement for the code C, i.e., the set of codewords, is that the sets B r (x) C are different and nonempty for all x F n \C). This leads to the following definition. Definition 7.. Let r and l be non-negative integers. A code C F n is an (r, l)- locating-dominating code of type A an (r, l)-lda for short if for all X F n and Y F n of size at most l, implies that X = Y. B r (X) C = B r (Y ) C (7.) X C = Y C (7.2)

74 72 Locating-Dominating Codes If X = /0, then B r (X) C = /0: so, in particular, the previous definition implies that B r (Y ) C /0 whenever Y /0. In the second variant we only consider subsets X and Y of F n \C. Then X C and Y C are empty, (7.2) becomes void, and (7.) takes the simple form B r (X) C = B r (Y ) C. This corresponds to a two-step testing procedure: we first fix all the codewords that have sent 2, and then immediately perform a second round, during which we only receive s and 0 s. Definition 7.2. Let r and l be non-negative integers. A code C F n is an (r, l)- locating-dominating code of type B an (r, l)-ldb for short if for all sets X F n \C and Y F n \C of size at most l, implies that X = Y. B r (X) C = B r (Y ) C Both definitions reduce to the definition of an r-locating-dominating code when l =. Clearly, an (r, l)-lda is always an (r, l)-ldb. The smallest cardinalities of an (r, l)-lda and an (r, l)-ldb of length n are denoted by (n) and LB ( l) (n), respectively. LA ( l) r r Example 7.3. (i) Let C be the even-weight code of length n 5, i.e., it consists of all 2 n words of length n that have even weight. We shall see in Theorem 7.9 that C is a (, (n + )/2 )-LDA. Clearly, C is not (, 2)-identifying (take X = {00...0}, Y = { ,00...0}). (ii) The code C = F n {0}, n 2, is a (, 2 n )-LDB (and hence a (, l)-ldb for all l). Clearly, C is not (, 2)-LDA (take X = {(x,0)} and Y = {(x,0),(x,)} for any x F n ). (iii) For all k l an (r, l)-lda is always (r, k)-lda, and an (r, l)-ldb is an (r, k)-ldb. 7. On the case l = In this section we examine the case l =. We denote L(n) = LA ( ) (n) = LB ( ) (n). From [40] we know that M (n) nk(n,2). This implies trivially that L(n) nk(n,2). Theorem 7.4. For all n 4, L(n + ) (2n )K(n,2).

75 7. On the case l = 73 Proof. Let C F n (n 4) be a code attaining the bound K(n,2). Denote by e the word of weight one whose first coordinate equals one. Let C = {c + e c C,e S (0)} and C 2 = {c + e c C,e S (0) \ e }. We claim that D = (C {0}) (C 2 {}) is -locating-dominating. Denote O i = F n {i} for i = 0,. By the construction, I(x) /0 for all x F n+. Assume that x,y F n+ \ D and x y. It is shown in [40] that C is - identifying. Hence, if x,y O 0, then I(x) I(y). Suppose x O 0 and y O. Because x / D, the structure of C implies that I(x) O 0 2 whereas I(y) O 0. Hence, I(x) I(y). Assume next that x,y O. There is c C {} such that d(c,x) 2. If x = c, then I(x) S (c) 3 and I(y) I(x) 2 by Lemma 2.. If x = c + e, then I(x) O 0 /0 and thus I(x) I(y). Assume then that x S 2 (c) and that x covers two elements of I(c). In this case only y = c could cover both elements of I(x) S (c), and this is impossible as we have seen. Suppose finally that x S 2 (c) and d(x,c + e ) =. Then y should cover the unique element in I(x) S (c) and consequently y S 2 (c), but then y covers also another codeword of I(c), which gives I(x) I(y). By [6, Theorem 0] we have the next theorem. Theorem n+ L(n). n + 3 In the proof of the next theorem we use the notation of excess, see page 52 (cf. [6]). Theorem can also be proved using the technique of the proof of Theorem 3.3. Theorem 7.6. L(n) n 2 2 n+ n 3 + 2n 2 + 3n 2. Proof. Suppose C is an optimal -locating-dominating code of length n. Denote by K the cardinality of C. It is not difficult to check that there can be only four kinds of words in F n :. a word which is covered by exactly one codeword, 2. a codeword, say c, which forms a couple with another codeword c 2 such that I(c ) = I(c 2 ) = {c,c 2 }, 3. a word x which has E(x) 2, and 4. a word y which has E(y) = and for which there exists a word x such that E(x) 2 and I(y) I(x).

76 74 Locating-Dominating Codes A word of type 3 is called a father. A word of type 4 is called a son; the word x such that I(y) I(x) is called its father (it is easy to see that x is uniquely defined). A family consists of a father and its sons. The families, couples and points with excess zero partition the whole space F n. Suppose that a father is covered by i codewords. Then there are at most ( ) i 2 sons in the family. The average excess on the points in a family whose father is covered by exactly i 3 codewords is therefore at least f (i) := ( i ) 2 + i ( i. 2) + This is a decreasing function on i 4; and f (3) = f (6). Assume that n 6. Then f (i) f (n) for all 3 i n. Since C is an optimal code there is no codeword c C such that I(c) = n +. Namely, if there were such a codeword, then we could remove it and still get a -locating-dominating code. This is impossible since C is optimal. The excess on F n is K(n + ) 2 n. We now estimate it in a different way. There are at most 2K words outside the families. The uniquely covered points contribute nothing to excess. We can estimate K(n + ) 2 n (2 n 2K) f (n), from which we get the claimed lower bound on K for n 6. When n = 5 then the minimum of f (3), f (4) and f (5) is f (3). Now we can estimate 6K 2 5 (2 5 2K) f (3). From this we get K 9. And, thus, the lower bound holds for n = 5. Similarly, when n = 4 we notice that the lower bound holds. For n 3 the lower bound is true by Theorem 7.5. Theorem 7.7. If C F n is -locating-dominating with the property that d(c,c \ {c}) = for all c C, then D = C F is -identifying. Proof. Let x,y F n+, x = (x,x 2 ) and y = (y,y 2 ), where x,y F n and x 2,y 2 F. If x y and x,y C, then there is c I(C;x ) I(C;y ). Without loss of generality we can assume c I(C;x ). Thus, (c,x 2 ) I(D;x) \ I(D;y). Assume x y and at least the other one of them belong to C, assume x C. Then d(x,y ) and, thus, (x,y 2 + ) I(D;x) \ I(D;y). If x = y, then x 2 y 2. By the assumption d(a,c \ {a}) = for all a F n, we know that there is c C such that d(c,x ) = and, thus, (c,x 2 ) I(D;x) \ I(D;y). It is proved in [6] (see also Theorem 3.20) that if C F n is -identifying, then C F 2 is also -identifying. The next theorem shows that we do not need a -identifying code in this construction, -locating-dominating code is enough. as was pointed out by Tero Laihonen

77 7.2 Locating-dominating codes of type A 75 Theorem 7.8. Let C F n be -locating-dominating, then C F 2 is -identifying which satisfies that for all c C F 2 we have d(c,(c F 2 ) \ {c}) =. Proof. It will be proved in Theorem 7.8 that C F is -locating-dominating. The direct sum construction also implies that all the codewords are 2-fold -covered in F n+ and in F n+2. The claim follows from the previous theorem. 7.2 Locating-dominating codes of type A In this section we consider locating-dominating codes of type A when l 3. We have the following characterization of the locating-dominating codes of type A; cf. [45]. Theorem 7.9. Assume that l 3. A code C F n is a (, l)-lda if and only if every element of F n \C is covered by at least 2l codewords of C. Proof. Assume first that every element of F n \ C is covered by at least 2l codewords of C. Assume that X and Y are two subsets of F n, each of size at most l, and that X C = Y C and B (X) C = B (Y ) C. (7.3) Assume contrary that x X \C does not belong to Y. Without loss of generality x = As a non-codeword, x is covered by at least 2l codewords, say c,..., c 2l ; and without loss of generality, they have been indexed in such a way that c,..., c i X, whereas c i+,..., c 2l / X (0 i l ). The fact that X C = Y C immediately implies that c,..., c i Y and c i+,..., c 2l / Y. By (7.3), the codewords c i+,..., c 2l are therefore covered by the at most l i words of weight two of Y, each covering at most two of the codewords c i+,..., c 2l. Because 2(l i ) < 2l i, there are exactly l i words of weight two in Y (together with the remaining i words of weight one). We first notice that the set X does not contain any word of weight four. Indeed, because X C = Y C, and Y does not contain any word of weight four, there is no codeword of weight four in X. Any non-codeword of weight four in X would be covered by at least (2l ) 4 (> 0) codewords of weight five, which cannot be, because no word in Y could cover a codeword of weight five. Assume that y Y \C has weight two. We show that y X. By assumption, y is covered by at least 2l codewords, of which at least 2l 3 have weight three; and they all must be covered by the words in X. Because there are no words of weight four in X, all these at least 2l 3 words of weight three must be covered by the at most l i words of weight two and three in X. Because 2l 3 > l i for all l 3, and no word of weight two or three other than y itself covers more than one of these at least 2l 3 words, we conclude that y must belong to X.

78 76 Locating-Dominating Codes We have shown that Y \C X. We already knew that Y C = X C X, and hence Y X. But Y has size l, and therefore Y = X, contradicting the fact that x X \Y. We have therefore shown that C is a (, l)-lda. Assume conversely that C is a (, l)-lda, and let x F n \C be arbitrary. Without loss of generality, x = Assume that the codewords that cover x are c, c 2,..., c j. Let i = j/2. We can choose i words x, x 2,..., x i F n \ {x} that together cover all the words c, c 2,..., c j. Then X = {x,x 2,...,x i,x} and Y = {x,x 2,...x i } satisfy (7.) and (7.2), and both have cardinality at most i+ l, unless j 2l. Hence every non-codeword is covered by at least 2l codewords of C. Let us denote by K(n,r, µ,ν) the smallest cardinality of a code C F n such that every non-codeword is r-covered by at least µ codewords and every codeword is r-covered by at least ν codewords. From the proof of Theorem 6.2 we get the following lower bound. Theorem 7.0. For l 3, LA ( l) (2l )2n (n) = K(n,,2l,) n + 2l. Proof. By the previous theorem it suffices to prove the inequality. Assume that C F n attains the bound K(n,,2l,). We count in two ways the number of pairs (c,x), where c C, x F n and d(c,x). Given c, there are n + choices for x, so the number of such pairs equals (n + ) C. Given x, there are at least 2l choices for c if x is not a codeword, and at least one, if x is a codeword. Hence (n + ) C (2l )(2 n C ) + C, which gives the claim. From [45] we know that M ( l) (n) (2l )2 n /(n + ) for a given l 3 when n. Because LA ( l) (n) M ( l) (n), the previous theorem gives the following corollary. Corollary 7.. For a fixed l 3, when n. LA ( l) (2l )2n (n) n We also get the following infinite family of optimal codes.

79 7.3 Locating-dominating codes of type B 77 Theorem 7.2. Assume that l 3. Then for all s =,2,... LA ( l) ((2l )(2 s )) = 2 (2l )(2s ) s Proof. By Theorem 6.3 we know that for n = (2l )(2 s ) the inequality of Theorem 7.0 is attained by a linear code whose parity check matrix has as its columns 2l copies of every non-zero element in F s. Theorem 7.3. Let l 3. If C is a (, l)-lda then the direct sum C F is also a (, l)-lda. Thus LA ( l) (n) 2LA ( l) (n ). Proof. The claim follows from Theorem 7.9 since the direct sum preserves the property that each non-codeword is covered by at least 2l codewords. 7.3 Locating-dominating codes of type B In this section we consider (r, l)-locating-dominating codes of type B for l 2 and r. Firstly, we prove lower bounds and secondly we introduce some constructions Lower bounds We denote N = F n \C, C i = {c C I(c) = i}, N i = {x N I(x) = i} and C i, j = j k=i C k and further N i, j = j k=i N k. Theorem 7.4. Let 2 l n. Then LB ( l) (n) l n + l 2n Proof. Assume that C F n is a (, l)-ldb where 2 l n. Let us count in two ways the number of pairs (x,c) where x N,l, c C n and d(c,x) =. Every x N,l has at least one c C n in I(x), since otherwise there would exist y i N S 2 (x) B (c i ) for all c i I(x) and, thus,. I(y,...,y I(x) ) = I(y,...,y I(x),x)

80 78 Locating-Dominating Codes which is impossible. On the other hand, each c C n can have at most one x N,l at distance one from it. Therefore, N,l C n. (7.4) Next we compute in two ways the number of pairs (x,c) where c C, x N l+,n and d(x,c) =. All the non-codewords at distance one from c C belong to N l+,n. Indeed, assume that there were a word x at distance one from c such that I(x) = {c,...,c i,c} (0 i l ). If i = 0, let y be any word other than x which is at distance from c. Then I(y) = I(y,x). Assume that i. Denoting by y i the unique word other than x such that d(c,y i ) = d(y i,c i ) =, we get the contradiction (note that y i N) I(y,...,y i ) = I(y,...,y i,x). On the other hand, a non-codeword x N l+,n can have at most n elements of C at distance one from it. Consequently, C N l+,n. Finally, we count the number of pairs (x,c) such that x F n, c C and d(x,c). Now using (7.4) and the previous inequality, we obtain C (n + ) l N (l ) N,l + N l+,n + 2 C C + (n 2) C n This yields the claim. l N + 2 C. Lemma 7.5. Let C F n be a (, l)-ldb with 2 l n. If j is a nonnegative integer satisfying l j and l + j n, then (l j) N,l+ j (n l + j + ) C l j,n + (l j) C n. Proof. Let us count the number of pairs (c,x), where c C l j,n, x N,l+ j and d(x,c) =, in two ways. As mentioned in the proof of Theorem 7.4, if x N,l, then there is at least one such pair (c,x). Let us then concentrate on x N l,l+ j. If there is c I(x) such that c C n, then there is again one sought pair. Assume then that for all c I(x) we have I(c) n. Denote by M l,l+ j the subset of N l,l+ j which consists of all such elements x. Let x M l,l+ j and without loss of generality x = Denote by P a maximal subset of {y N S 2 (x) I(y) I(x) = 2} such that its minimum distance is four. Since C is a (, l)-ldb, we must have l I(x) P. (7.5)

81 7.3 Locating-dominating codes of type B 79 Indeed, suppose I(x) P l. Denote by y i a word in N S 2 (x) such that d(y i,c i ) = for c i R = I(x) \ I(P). Notice that such words y i exist for every c i because x M l,l+ j. Now this leads to which is impossible because I(P {y,...,y R }) = I(P {y,...,y R,x}) P {y,...,y R,x} = I(x) P + l. Clearly, by (7.5), 0 P I(x) l j. Hence, by virtue of (7.5), R = I(x) 2 P l P l j. Since P is maximal, {y S 2 (x) I(y) R = 2} C, and consequently, we get at least l j pairs from every x M l,l+ j. All in all the number of sought pairs is at least N,l + N l,l+ j \ M l,l+ j + (l j) M l,l+ j = N,l + N l,l+ j + (l j ) M l,l+ j. On the other hand, at distance one from c C l j,n (resp. c C n ) there are at most n l + j + words (resp. one word) of N and hence possibly of N,l+ j. Thus, the number of sought pairs is at most (n l + j + ) C l j,n + C n. Furthermore, one immediately verifies the modification of (7.4) that N,l + N l,l+ j \M l,l+ j C n. Multiplying this by l j and adding it to the inequality which we get from the estimates of the number of pairs proves the claim. Theorem 7.6. Let n + l n. Denote { l 2 } n k = min,n l. + l + n Then LB ( l) l + k + (n) n + l + k + 2n. Proof. Let C F n be a (, l)-ldb with n + l n. Obviously, C (n + ) (l + k + ) N (k + ) N l,l+k (l + k) N,l + C + (l k ) C l k,n + (n ) C n. Since l n we get by (7.4) that C (n + ) (l + k + ) N (k + ) N l,l+k (k + ) N,l + C + (l k ) C l k,n + (n l) C n.

82 80 Locating-Dominating Codes It is easy to check that the choice of k satisfies and furthermore l k (k + )(n l + k + ) l k n l k +. Consequently, using Lemma 7.5 for j = k one gets C (n + ) (l + k + ) N + C. The claim follows immediately from this Constructions In this section we give different constructions for locating-dominating codes of type B. In the beginning we consider r =, in Theorem 7.22 we have r 2. Theorem 7.7. Let n 3. If l 2 ( 2n 2 + 2n + ), then LB ( l) (n) = 2 n. Proof. If l 2 ( 2n 2 + 2n + ), then in Theorem 7.6 we have k = n l. Combining this lower bound to Example 7.3(ii) and (iii) we get the claim. Theorem 7.8. If C is a (, l)-ldb, then C F is also a (, l)-ldb. In particular, LB ( l) (n + ) 2LB ( l) (n). Proof. Suppose C is a (, l)-ldb of length n. Divide the words of F n+ into two layers O 0 and O depending on the last bit of the word, i.e., O i = F n {i} for i F. Only a codeword can have in its I-set a codeword from the other layer than where it itself lies. Thus, the I-set of a set of non-codewords on the layer O i has codewords only from the same layer. So, if X l, then X O i l for both i F, and because C is a (, l)-ldb, I(X O i ) O i uniquely identifies X O i for both i F. We denote by A(n,d,w) the smallest cardinality of a code of length n and minimum distance d whose codewords all have weight w. For values of this function, see [53]. Denote S i = S i (0). Theorem 7.9. For l n/2 we have LB ( l) (n) K(n,l + )(V(n,l) A(n,6,l)).

83 7.3 Locating-dominating codes of type B 8 Proof. Let D be a code attaining the value K(n,l+), A a code attaining the value A(n,6,l), and B = B l (00...0). We shall show that the code C = D + (B \ A) = {c + b c D, b B \ A} is a (, l)-ldb. In the code C there are at most K(n,l + )(V(n,l) A(n,6,l)) codewords. Suppose there are sets X N and Y N such that X Y, X, Y l and I(X) = I(Y ). There is a word x X such that x Y. Since x C there is a codeword c D such that d(x,c) = l or l+. Without loss of generality we can assume that c = If d(x,c) = l, then x is covered by l codewords of weight l. None of them can belong to I(Y ) since B l (00...0) N = /0 and non-codewords in S l \{x} are at least at distance six from x. Suppose now that d(x,c) = l+. Then l I(x) S l l+. If y Y covers a codeword of weight l in I(x), then the weight of y must be l + ; and since y x, y can cover only one codeword of weight l in I(x). Because Y l, this implies that I(x) S l = l, Y = l, Y S l+, and each word in Y covers one codeword from I(x) S l. For y Y we have I(y) S l = l +, because the minimum distance of non-codewords of weight l is six. As we have already seen (for x whose role y now assumes), this is only possible if y X. Consequently, X must contain all the l elements of Y together with x / Y, which is impossible, since X l. The previous theorem is interesting in the light of Conjecture 2.4. Namely, for every fixed l 2 Theorems 7.4 and 7.9 give the bounds l n 2n ( + f (n)) LB ( l) (n) l + n 2n ( + g(n)), where both f (n) and g(n) tend to zero when n. Next we construct some (, 2)-LDBs. Theorem Let n 5 and C be a code attaining the bound K(n,3). Then the code D = C {c + x c C,x S 2 (0)} is a (, 2)-LDB and has cardinality at most K(n,3)( + ( n 2) ). Proof. Suppose there are X,Y F n \ D such that X, Y 2, X Y and I(X) = I(Y ). By the definition of D, every y / D has distance or 3 to C. We first show that any word y Y for which there is a codeword c C such that d(c,y) = also belongs to X. Indeed, such a word y is covered by n ( 5) codewords of D and thus by Lemma 2. no set of size at most two where y does not belong to can cover all those words.

84 82 Locating-Dominating Codes Since X Y there is y Y \ X such that d(c,y) = 3 for some c C. Without loss of generality, c = and y = 0... Then y is covered by the codewords 00..., and of D. If an element x of X F n \ D covers at least one of these three words, then x has weight one or three. If w(x) = 3, then x ( y) covers at most one of them. Consequently, there is a word x X of weight one which covers two of them. But being of weight one, x must also belong to Y. Without loss of generality, x = Still we have one codeword c = which is not in I(x). To cover this word c we need one more word in the set X. This word cannot have weight one since then it should belong to Y as well, which is impossible. So, without loss of generality, the other word in X is 0... But then and belong to I(X) but not to I(Y ), which gives the required contradiction. Example 7.2. The repetition code { , } satisfies K(7, 3). By the previous theorem we have a (, 2)-LDB of length 7 and cardinality 44. From this we get by Theorem 7.8 a (, 2)-LDB of length 8 and cardinality 88. Let C be the perfect binary Golay code of length 23, dimension 2 and minimum distance 7 (and covering radius 3). By Theorem 7.20 we can construct a (, 2)-LDB of length 23 with cardinality 2 2 ( + ( 23 2 ) ) = The next theorem 2 considers locating-dominating codes of type B when r 2 and l = 2. Theorem Let r 2. Let C F n be a code attaining K(n,2r). Then C = C + B r (0) is (r, 2)-locating-dominating of type B. Proof. Suppose we are given a set I r (X) and we have to settle out the set X F n \ C. We show that for c C we can tell what X B 2r (c) is by looking at B r (c) I r (X). Without loss of generality assume that c = 0 and X B 2r (c)\b r (c) where X 2. Let k be the smallest weight of codewords in B r (0) I r (X). If I r (X) S k (0) = ( ) k+r k, then supp(x) = supp(u) u I r (X) S k (0) for any word x X of weight k + r. If I r (X) \ I r (x) = /0, then X = {x}, otherwise X = {x,y}. We already know that w(x) < w(y). Hence, there is i supp(y) \ supp(x). Denote A = {i supp(y) i supp(x)} and B = {i supp(y) i supp(x)}. Because w(y) r = A + B r A + w(x) r and A, any word v of weight w(y) r in I r (X) which has supp(v) B w(x) r and supp(v) A (7.6) 2 Research is done with Tero Laihonen.

85 7.3 Locating-dominating codes of type B 83 is at least at distance r + from x and at distance r from y. Thus, v I r (X)\I r (x). On the other hand, the union of the supports of all the possible choices of words v satisfying (7.6) includes all the coordinates of supp(y). Hence, supp(y) = v I r (X)\I r (x) w(v)=w(y) r supp(v) Suppose I r (X) S k (0) > ( ) k+r k. Now {x,y} Sk+r (0). Suppose first that k >. For i supp(x) supp(y) we know that ( ) k + r {c S k (0) I r (X) i supp(c)} =. (7.7) k If i supp(x) supp(y), then ( ) k + r {c S k (0) I r (X) i supp(c)} >. k If i / supp(x) supp(y), then {c S k (0) I r (X) i supp(c)} = 0. Hence, for some i such that (7.7) is satisfied, we see that and supp(x) = supp(y) = c I r (X) S k (0) i supp(c) supp(c) c (I r (X)\I r (x)) S k (0) supp(c). If k =, then we can do the same except using k + = 2 instead of k (support i have to occur now r times). There are codewords of weight k + = 2 when r 2. Corollary For r 2 LB r ( 2) (n) V (n,r)k(n,2r). In Table 7. we have collected lower and upper bounds on the cardinalities of -locating-dominating codes, (, 2)- and (, 3)-locating-dominating codes of type B for short lengths.

86 84 Locating-Dominating Codes Table 7.: Values of the cardinalities of locating-dominating codes. The cases n = are trivial. The lower bounds L(2) 2 and L(3) 4 are easy to check by hand and Example 7.3(iii) then gives the lower bounds for lengths two and three for l 2. n L(n) LB ( 2) (n) LB ( 3) (n) a 6 A c 8 D c 8 D 5 a 0 A d 6 D c 5 6 D 6 b 6 8 B d 9 29 E c D 7 b C d F c D 8 b 50 6 A d G c D 9 b 9 5 C d G d D Key to table a Lower bound obtained by a computer search. b Theorem 7.6 c Theorem 7.6 d Theorem 7.4 A See Appendix B Theorem 7.4 C L ( l) (n) M ( l) (n) D Example 7.3(iii) E See Appendix F Theorem 7.20 G Theorem Optimal linear locating-dominating codes A binary code C F n is said to be linear if it is a k-dimensional subspace of F n. In this section we consider linear (, l)-ldbs and (, l)-ldas. We denote by LB ( l) [n] (resp. LA ( l) [n]) the smallest dimension of a linear (, l)-ldb (resp. (, l)-lda) of length n. Let us first consider the case l = and denote L[n] = LA ( ) [n] = LB ( ) [n]. Theorem L[n] n + log 2 3 log 2 (n + 5). Proof. Suppose C is a linear -locating-dominating code of length n and C = K. There can be K non-codewords which are covered by one codeword each. All

87 7.4 Optimal linear locating-dominating codes 85 the other non-codewords must be covered by at least three times. Namely, if I(x) = {c,c 2 } for some x C, then for y = x + c + c 2, I(y) = I(x) by Lemma 6.. Thus, we have 2K + 3(2 n 2K) K(n + ), from which the claim follows. Theorem Let n = 3 2 k 5+s, for k and 0 s < 3 2 k. Then L[n] = n k. Proof. The lower bound follows from Theorem Because n 3 2 k 5, we can choose a (k n)-matrix H in which every nonzero column appears at least three times except for one nonzero column which appears exactly once. Let C be the code with parity check matrix H. We claim that C is a -locating-dominating. Each non-codeword which is covered by at least three codewords is identified by Lemma 2.. Each non-codeword of N is also identified, because for each c C, there is exactly one word of N at distance one from it. Theorem For all n 5, LB ( 2) [n] n + log 2 5 log 2 (n + 5). Proof. Let C F n be a linear (, 2)-LDB of cardinality K. If I(x) = {c} for some x C, then by the proof of (7.4) we have I(c) = n and, thus, by Lemma 6., I(c) = n for all c C. Hence, K n + (2 n K) K(n + ), i.e., K 2 n, and the claim follows. So assume that I(x) 2 for all x C. If I(x) = 2 for some x C, then I(x) = {x + e i,x + e j }, for some e i,e j S (0), e i e j. Now I(x) I(x + e i + e j ), which is impossible unless x + e i + e j C. This would imply that e i,e j C and thus, also x C. This is impossible by the assumption. Similarly, as in Theorem 6.7 we can show that each non-codeword must be covered by at least five codewords. Now we have K +5(2 n K) K(n+), from which the claim follows. Theorem Let n = 5(2 k )+s, for k and 0 s < 5 2 k. Then LB ( 2) [n] = n k. Proof. The lower bound follows from Theorem Let H be a (k n)-matrix where every nonzero k-tuple appears at least five times. Let C be the code which has H as its parity check matrix. Every non-codeword is covered at least by five codewords of C. Thus, any set of size at most 2 cannot cover the I-set of a codeword x unless this set does not include x. This proves the claim. Theorem For l 3 and n 2l, LB ( l) [n] n + log 2 (2l ) log 2 (n + 2l ).

88 86 Locating-Dominating Codes Proof. Assume that C is a linear (, l)-ldb of length n and cardinality K. If there is a word x / C such that I(x) =, then the beginning of the proof of Theorem 7.26 (since C is trivially also a (, 2)-LDB) shows that K 2 n, and since we have assumed that n 2l, we are already done. Assume now that I(x) 2 for all x / C. We shall show that each noncodeword must be covered by at least 2l codewords. Assume to the contrary that for some x / C, I(x) = j where 2 j 2l 2, and denote I(x) = {x + e,...,x + e j }, where e i S (0). For all i j/2, let y i = x + e 2i + e 2i (which is the unique word other than x at distance one from both x + e 2i and x + e 2i ). If j is odd, define y 0 = x + e + e j. Let Y consists of all the words y i, i j/2, together with y 0, if j is odd. Because C is linear and x / C, we know that Y F n \ C. Now I(Y ) = I(Y {x}) gives a contradiction, because Y {x} = j/2 + l. Thus, we have K + (2l )(2 n K) K(n + ), and the claim follows. Theorem Let l 3 and n = (2l )(2 k ) + s, for k and 0 s < (2l )2 k. Then LB ( l) [n] = n k. Proof. The lower bound follows from Theorem Let H be a (k n)-matrix in which every nonzero k-tuple appears at least 2l times. The code which has H as a parity check matrix is (, l)-ldb by Theorem 7.9, and has the required dimension. The previous proof in fact gives the following theorem as an immediate corollary. Theorem Let l 3. For all n 2l we have Theorem 7.3. For all n 5, we have LA ( l) [n] = LB ( l) [n]. LA ( 2) [n] = LB ( 2) [n]. Proof. By Theorems 7.27, 7.29 and 7.30 we know that for all n 5, LB ( 2) [n] = LB ( 3) [n] = LA ( 3) [n]. Trivially, LA ( 3) [n] LA ( 2) [n] LB ( 2) [n], and the claim follows.

89 Appendix In this chapter we list some identifying and locating-dominating codes found using a computer. The - and 2-identifying codes are from [8], see also [7]. The (2, 2)-identifying codes presented here have not been published before. The locating-dominating codes are from [36] except the one of length 8. -identifying codes The binary expressions of the numbers in the next table form a -identifying code of length 8 and cardinality 62, [7] The binary expressions of the numbers in the next table form a -identifying code C 9 of length 9 and cardinality 5, [7]

90 88 Appendix The codewords covered only by themselves in the previous code C 9 are A = { , , , 000, 000, 00000, 00000, 00}. The set B = {000000, 0000, , , 0000, 000} -covers A. Now (C 9 F) (B {0}) is -identifying of length 0 and cardinality 236. The code C = {c F 5 w(c) {0,,4,5}} F 5 is a 2-fold -covering and -identifying (see [40]) and C = 2. By Theorem 3.0 it is the smallest possible such a code. Applying the (π(u), u, u + v)-construction of Theorem 3. to C we get a -identifying code C of length. From C we can remove words {(π(u),u,u + v) u F 5, w(u) even, v {00000,}} and the result is still -identifying. The cardinality of the code is 352. A direct sum of the code C of length and F is a -identifying code by Theorem By a computer search, we found that (C F) \ A, where A is of size 72, is also -identifying. Thus, M (2) 696. The set A consists of the binary expressions of the numbers in the following table The code of length 3 made by (π(u),u,u + v)-construction from the code of length 6 in Example 3.3 can be improved by removing a set B of size 64. Thus, M (3) 344. The set B is the binary expressions of the numbers in the following table

91 89 2-identifying codes The binary expressions of the numbers 4, 6, 9, 9, 26, 34, 53, 90, 93, 07, 08, 3, 8, 2 form a 2-identifying code of length 7 and cardinality 4, [7]. The binary expressions of the numbers 7, 9, 26, 36, 55, 63, 64, 85, 07, 4, 44, 63, 74, 85, 205, 2, 220, 222, 232, 244, 246 form a 2-identifying code of length 8 and cardinality 2, [7]. The binary expressions of the numbers in the next table form a 2-identifying code of length 9 and cardinality 36, [7] The binary expressions of the numbers in the next table form a 2-identifying code C 0 of length 0 and cardinality By adding to C 0 the binary expressions of the numbers 26, 62, 32, 205, 45, 467, 620, 624, 704, 78, 793, 07 and removing 834 we get a 2- identifying code D 0 of cardinality 74. The code D 0 has the property that for all x F n there is c D 0 such that d(x,c) = 2. Theorem 3.20 shows that D 0 F is 2-identifying of length and cardinality 48. By adding to C 0 the binary expressions of the numbers 32, 205, 45, 620, 624, 793, 07 we get a 2-identifying code E 0 of cardinality 70. The code E 0 has the property that for all c E 0 we have d(c,e 0 \ {c}) 2. Theorem 3.20 shows that E 0 F 2 is a 2-identifying code of length 2 and cardinality 280. (2, 2)-identifying codes The binary expressions of the next numbers form a (2, 2)-identifying code of length 6 and cardinality 25:

92 90 Appendix The binary expressions of the following 35 elements subset of the (, 2)- identifying code of length 7 in Theorem 4.4 is (2, 2)-identifying The direct sum of this code and F is a (2, 2)-identifying code of length 8 and cardinality 70. Locating-dominating codes Codes {0000, 00, 00, 0, 0, 0} F 4 and {x F 5 w(x) = or 4} are optimal -locating-dominating. The lower bounds are obtained by a computer. The -identifying code of length 8 and cardinality 62 on page 87 is - locating-dominating even if a word 2 is removed from it. The binary expressions of the numbers form a (, 2)-LDB of cardinality 29 and length 6.

93 Bibliography [] Y. Ben-Haim and S. Litsyn. Exact minimum density of codes identifying vertices in the square grid. SIAM J. Discrete Math., 9():69 82, [2] T. Y. Berger-Wolf, W. E. Hart, and J. Saia. Discrete sensor placement problems in distribution networks. Math. Comput. Modelling, 42(3): , [3] N. Bertrand, I. Charon, O. Hudry, and A. Lobstein. Identifying and locatingdominating codes on chains and cycles. European J. Combin., 25(7): , [4] N. Bertrand, I. Charon, O. Hudry, and A. Lobstein. -identifying codes on trees. Australas. J. Combin., 3:2 35, [5] U. Blass, I. Honkala, and S. Litsyn. On binary codes for identification. J. Combin. Des., 8(2):5 56, [6] U. Blass, I. Honkala, and S. Litsyn. Bounds on identifying codes. Discrete Math., 24(-3):9 28, 200. [7] I. Charon, S. Gravier, O. Hudry, A. Lobstein, M. Mollard, and J. Moncel. A linear algorithm for minimum -identifying codes in oriented trees. Discrete Appl. Math., 54(8): , [8] I. Charon, I. Honkala, O. Hudry, and A. Lobstein. General bounds for identifying codes in some infinite regular graphs. Electron. J. Combin., 8():Research Paper 39, 2 pp., 200. [9] I. Charon, O. Hudry, and A. Lobstein. Identifying and locating-dominating codes: NP-completeness results for directed graphs. IEEE Trans. Inform. Theory, 48(8): , [0] I. Charon, O. Hudry, and A. Lobstein. Identifying codes with small radius in some infinite regular graphs. Electron. J. Combin., 9():Research Paper, 25 pp., 2002.

94 92 Bibliography [] I. Charon, O. Hudry, and A. Lobstein. Minimizing the size of an identifying or locating-dominating code in a graph is NP-hard. Theoret. Comput. Sci., 290(3): , [2] I. Charon, O. Hudry, and A. Lobstein. Extremal cardinalities for identifying and locating-dominating codes in graphs. Discrete Math., 307(3-5): , [3] G. Cohen, I. Honkala, S. Litsyn, and A. Lobstein. Covering Codes. Elsevier, Amsterdam, 997. [4] G. Cohen, I. Honkala, A. Lobstein, and G. Zémor. New bounds for codes identifying vertices in graphs. Electron. J. Combin., 6:Research Paper 9, 4 pp., 999. [5] G. D. Cohen, I. Honkala, A. Lobstein, and G. Zémor. On identifying codes. In A. Barg and S. Litsyn, editors, Proceedings of the DIMACS Workshop on Codes and Association Schemes, volume 56, pages 97 09, AMS, Providence, 200. [6] C. J. Colbourn, P. J. Slater, and L. K. Stewart. Locating dominating sets in series parallel networks. Congr. Numer., 56:35 62, 987. Sixteenth Manitoba conference on numerical mathematics and computing (Winnipeg, Man., 986). [7] G. Exoo. Computational results on identifying t-codes. Preprint. [8] G. Exoo, T. Laihonen, and S. Ranto. Improved upper bounds on binary identifying codes. IEEE Trans. Inform. Theory, submitted. [9] A. Frieze, R. Martin, J. Moncel, M. Ruszinkó, and C. Smyth. Codes identifying sets of vertices in random networks. Discrete Math., 307(9-0):094 07, [20] D. Garnick and N. A. Nieuwejaar. Nonisomorphic extremal graphs without three-cycles or four-cycles. J. Combin. Math. Combin. Comput., 2:33 56, 992. [2] D. K. Garnick, Y. H. H. Kwong, and F. Lazebnik. Extremal graphs without three-cycles or four-cycles. J. Graph Theory, 7: , 993. [22] S. Gravier, J. Moncel, and A. Semri. Identifying codes of cycles. European Journal of Combinatorics, 27(5): , [23] H. O. Hämäläinen, I. S. Honkala, M. K. Kaikkonen, and S. N. Litsyn. Bounds for binary multiple covering codes. Des. Codes Cryptogr., 3:25 275, 993.

95 Bibliography 93 [24] I. Honkala. On the identifying radius of codes. In I. Honkala and T. Harju, editors, Proceedings of the Seventh Nordic Combinatorial Conference, TUCS General Publications, pages 33 43, Turku, 999. [25] I. Honkala. An optimal locating-dominating set in the infinite triangular grid. Discrete Math., 306(2): , [26] I. Honkala, M. G. Karpovsky, and L. B. Levitin. On robust and dynamic identifying codes. IEEE Trans. Inform. Theory, 52(2):599 6, [27] I. Honkala and T. Laihonen. On identification in the triangular grid. J. Combin. Theory Ser. B, 9():67 86, [28] I. Honkala and T. Laihonen. On identifying codes in the triangular and square grids. SIAM J. Comput., 33(2):304 32, [29] I. Honkala and T. Laihonen. On locating-dominating sets in infinite grids. European J. Combin., 27(2):28 227, [30] I. Honkala and T. Laihonen. On a new class of identifying codes in graphs. Inform. Process. Lett., 02(2-3):92 98, [3] I. Honkala and T. Laihonen. On identifying codes that are robust against edge changes. Inform. and Comput., to appear. [32] I. Honkala and T. Laihonen. On vertex-robust identifying codes of level three. Ars Combin., to appear. [33] I. Honkala, T. Laihonen, and S. Ranto. Codes for strong identification. In Electronic Notes in Discrete Mathematics, volume 6, pages , 200. Proceedings of International Workshop on Coding and Cryptography. [34] I. Honkala, T. Laihonen, and S. Ranto. On codes identifying sets of vertices in Hamming spaces. Des. Codes Cryptogr., 24(2):93 204, 200. [35] I. Honkala, T. Laihonen, and S. Ranto. On strongly identifying codes. Discrete Math., 254(-3):9 205, [36] I. Honkala, T. Laihonen, and S. Ranto. On locating-dominating codes in binary Hamming spaces. Discrete Math. Theor. Comput. Sci., 6(2):265 28, [37] I. Honkala and A. Lobstein. On identifying codes in binary Hamming spaces. J. Combin. Theory Ser. A, 99(2): , [38] I. Honkala and A. Lobstein. On the complexity of the identification problem in Hamming spaces. Acta Inform., 38(-2): , 2002.

96 94 Bibliography [39] I. Honkala and A. Lobstein. On the density of identifying codes in the square lattice. J. Combin. Theory Ser. B, 85(2): , [40] M. G. Karpovsky, K. Chakrabarty, and L. B. Levitin. On a new class of codes for identifying vertices in graphs. IEEE Trans. Inform. Theory, 44(2):599 6, 998. [4] M. G. Karpovsky, K. Chakrabarty, L. B. Levitin, and D. R. Avresky. On the covering of vertices for fault diagnosis in hypercubes. Inform. Process. Lett., 69(2):99 03, 999. [42] M. Laifenfeld and A. Trachtenberg. Disjoint identifying-codes for arbitrary graphs. In Proceedings of International Symposium on Information Theory, ISIT 2005, pages , [43] M. Laifenfeld, A. Trachtenberg, and T. Y. Berger-Wolf. Identifying codes and the set cover problem. In Proceedings of the 44th Annual Allerton Conf. on Communication, Control and Computing, Monticello, USA, [44] T. Laihonen. Optimal codes for strong identification. European J. Combin., 23(3):307 33, [45] T. Laihonen. Sequences of optimal identifying codes. IEEE Trans. Inform. Theory, 48(3): , [46] T. Laihonen. On optimal edge-robust and vertex-robust (, l)-identifying codes. SIAM J. Discrete Math., 8(4): , [47] T. Laihonen and S. Ranto. Codes identifying sets of vertices. In Applied algebra, algebraic algorithms and error-correcting codes (Melbourne, 200), volume 2227 of Lecture Notes in Computer Science, pages Springer, Berlin, 200. [48] T. Laihonen and S. Ranto. Families of optimal codes for strong identification. Discrete Appl. Math., 2(-3):203 23, [49] T. Laihonen and S. Ranto. Codes identifying sets of binary words with large radii. In Proceedings of International Workshop on Coding and Cryptography WCC 2007, to appear. [50] A. Lobstein. Identifying and locating-dominating codes in graphs, a bibliography. Published electronically at!"#. [5] J. Moncel. Constructing codes identifying sets of vertices. Des. Codes Cryptogr., 4():23 3, 2006.

97 Bibliography 95 [52] J. Moncel. Monotonicity of the minimum cardinality of an identifying code in the hypercube. Discrete Appl. Math., 54(6): , [53] E. M. Rains and N. J. A. Sloane. Table of constant weight binary codes. Published electronically at $ $ $"# % & & ' (! )%&% (( % * %($. [54] D. F. Rall and P. J. Slater. On location domination numbers for certain classes of graphs. Congr. Numer., 45:97 06, 984. [55] S. M. Ranto. Optimal linear identifying codes. IEEE Trans. Inform. Theory, 49(6): , [56] S. M. Ranto, I. S. Honkala, and T. K. Laihonen. Two families of optimal identifying codes in binary Hamming spaces. IEEE Trans. Inform. Theory, 48(5): , [57] S. Ray, D. Starobinski, A. Trachtenberg, and R. Ungrangsi. Robust location detection with sensor networks. IEEE Journal on Selected Areas in Comminications (Special Issue on Fundamental Performance Limits of Wireless Sensor Networks), vi. 22, [58] P. J. Slater. Domination and loation in graphs. Research raport 93, National University of Singapore, 983. [59] P. J. Slater. Domination and location in acyclic graphs. Networks, 7():55 64, 987. [60] P. J. Slater. Dominating and reference sets in a graph. J. Math. Phys. Sci., 22: , 988. [6] P. J. Slater. Locating dominating sets and locating-dominating sets. In Graph Theory, Combinatories and Applications: Proceedings of the Seventh Quadrennial International Conference on the Theory and Applications of Graphs, volume 2, pages Wiley, 995. [62] P. J. Slater. Fault-tolerant locating-dominating sets. Discrete Math., 249( 3):79 89, [63] N. J. A. Sloane. The on-line encyclopedia of integer sequences. Published electronically at $ $ $"# % & & ' (! )%&% % (+ ( %. [64] R. Struik. Covering Codes. Ph. D. thesis, Eindhoven University of Technology, the Netherlands, 994.

98 Turku Centre for Computer Science TUCS Dissertations 5. Attila Gyenesei, Discovering Frequent Fuzzy Patterns in Relations of Quantitative Attributes 52. Petteri Kaitovaara, Packaging of IT Services Conceptual and Empirical Studies 53. Petri Rosendahl, Niho Type Cross-Correlation Functions and Related Equations 54. Péter Majlender, A Normative Approach to Possibility Theory and Soft Decision Support 55. Seppo Virtanen, A Framework for Rapid Design and Evaluation of Protocol Processors 56. Tomas Eklund, The Self-Organizing Map in Financial Benchmarking 57. Mikael Collan, Giga-Investments: Modelling the Valuation of Very Large Industrial Real Investments 58. Dag Björklund, A Kernel Language for Unified Code Synthesis 59. Shengnan Han, Understanding User Adoption of Mobile Technology: Focusing on Physicians in Finland 60. Irina Georgescu, Rational Choice and Revealed Preference: A Fuzzy Approach 6. Ping Yan, Limit Cycles for Generalized Liénard-type and Lotka-Volterra Systems 62. Joonas Lehtinen, Coding of Wavelet-Transformed Images 63. Tommi Meskanen, On the NTRU Cryptosystem 64. Saeed Salehi, Varieties of Tree Languages 65. Jukka Arvo, Efficient Algorithms for Hardware-Accelerated Shadow Computation 66. Mika Hirvikorpi, On the Tactical Level Production Planning in Flexible Manufacturing Systems 67. Adrian Costea, Computational Intelligence Methods for Quantitative Data Mining 68. Cristina Seceleanu, A Methodology for Constructing Correct Reactive Systems 69. Luigia Petre, Modeling with Action Systems 70. Lu Yan, Systematic Design of Ubiquitous Systems 7. Mehran Gomari, On the Generalization Ability of Bayesian Neural Networks 72. Ville Harkke, Knowledge Freedom for Medical Professionals An Evaluation Study of a Mobile Information System for Physicians in Finland 73. Marius Cosmin Codrea, Pattern Analysis of Chlorophyll Fluorescence Signals 74. Aiying Rong, Cogeneration Planning Under the Deregulated Power Market and Emissions Trading Scheme 75. Chihab BenMoussa, Supporting the Sales Force through Mobile Information and Communication Technologies: Focusing on the Pharmaceutical Sales Force 76. Jussi Salmi, Improving Data Analysis in Proteomics 77. Orieta Celiku, Mechanized Reasoning for Dually-Nondeterministic and Probabilistic Programs 78. Kaj-Mikael Björk, Supply Chain Efficiency with Some Forest Industry Improvements 79. Viorel Preoteasa, Program Variables The Core of Mechanical Reasoning about Imperative Programs 80. Jonne Poikonen, Absolute Value Extraction and Order Statistic Filtering for a Mixed-Mode Array Image Processor 8. Luka Milovanov, Agile Software Development in an Academic Environment 82. Francisco Augusto Alcaraz Garcia, Real Options, Default Risk and Soft Applications 83. Kai K. Kimppa, Problems with the Justification of Intellectual Property Rights in Relation to Software and Other Digitally Distributable Media 84. Dragoº Truº can, Model Driven Development of Programmable Architectures 85. Eugen Czeizler, The Inverse Neighborhood Problem and Applications of Welch Sets in Automata Theory 86. Sanna Ranto, Identifying and Locating-Dominating Codes in Binary Hamming Spaces

99

100 Turku Centre for Computer Science Joukahaisenkatu 3-5 B, Turku, Finland University of Turku Department of Information Technology Department of Mathematics Åbo Akademi University Department of Information Technologies Turku School of Economics Institute of Information Systems Sciences ISBN ISSN

On Locating-Dominating Codes in Binary Hamming Spaces

On Locating-Dominating Codes in Binary Hamming Spaces Discrete Mathematics and Theoretical Computer Science 6, 2004, 265 282 On Locating-Dominating Codes in Binary Hamming Spaces Iiro Honkala and Tero Laihonen and Sanna Ranto Department of Mathematics and

More information

Ville Junnila. On Identifying and Locating-Dominating Codes. Turku Centre for Computer Science

Ville Junnila. On Identifying and Locating-Dominating Codes. Turku Centre for Computer Science Ville Junnila On Identifying and Locating-Dominating Codes Turku Centre for Computer Science TUCS Dissertations No 137, June 2011 On Identifying and Locating-Dominating Codes Ville Junnila To be presented,

More information

Minimizing the size of an identifying or locating-dominating code in a graph is NP-hard

Minimizing the size of an identifying or locating-dominating code in a graph is NP-hard Theoretical Computer Science 290 (2003) 2109 2120 www.elsevier.com/locate/tcs Note Minimizing the size of an identifying or locating-dominating code in a graph is NP-hard Irene Charon, Olivier Hudry, Antoine

More information

arxiv: v1 [cs.dm] 5 Feb 2018

arxiv: v1 [cs.dm] 5 Feb 2018 Optimal Bounds on Codes for Location in Circulant Graphs arxiv:1802.01325v1 [cs.dm] 5 Feb 2018 Ville Junnila and Tero Laihonen Department of Mathematics and Statistics University of Turku, FI-20014 Turku,

More information

Graphs where every k-subset of vertices is an identifying set

Graphs where every k-subset of vertices is an identifying set Graphs where every k-subset of vertices is an identifying set Sylvain Gravier Svante Janson Tero Laihonen Sanna Ranto 8 January 2008 Abstract Let G = (V, E) be an undirected graph without loops and multiple

More information

Topological properties

Topological properties CHAPTER 4 Topological properties 1. Connectedness Definitions and examples Basic properties Connected components Connected versus path connected, again 2. Compactness Definition and first examples Topological

More information

Graphs where every k-subset of vertices is an identifying set
