On the Structure of Low Autocorrelation Binary Sequences

Svein Bjarte Aasestøl
University of Bergen, Bergen, Norway
December 1


Contents

1 Introduction
2 Overview
3 Definitions
   3.1 Shift operation
   3.2 Hamming distance
   3.3 The Autocorrelation function (ACF)
   3.4 The search basics
       3.4.1 Collisions
       3.4.2 Child
       3.4.3 Parent
       3.4.4 Sibling
       3.4.5 Loop
       3.4.6 Tail
       3.4.7 End Tail
       3.4.8 Singularity
4 The Basic Loop Search
5 Analysis of the Loop Search using the All Zero Sequence
   5.1 Periodic loop search, cyclic collision
   5.2 Periodic loop search, negacyclic collision
   5.3 Negaperiodic loop search
   5.4 Aperiodic loop search
   5.5 Analysis of the tail and loop sizes
6 A closer look at the relation between parent and child sequences
   6.1 Recursive end tail search
7 Symmetries in the structure of loops and tails
8 Exhaustive search
   8.1 Exhaustive pc loop search
   8.2 Exhaustive nn loop search
   8.3 Exhaustive ac loop search
Connecting periodic and negaperiodic energies
Loop sibling search
Sliding window, aperiodic search
   Aperiodic energies for any sub-length
Alternating loop search
15 Complexity and programming
   15.1 Computing the energy
   15.2 Finding the loop
   15.3 Memory
   15.4 Other
Open questions
Conclusion
End notes

1 Introduction

In modern communication systems and systems engineering, the need for binary sequences is steadily growing. Such sequences are used in everyday devices such as cell phones, as well as in more advanced fields such as radar and sonar signalling. However, not all sequences are equally useful. For some areas of technology, certain criteria must be fulfilled for a sequence to be of any use to us. One of these is that the sequence should have low autocorrelation values. Finding such sequences for short lengths is easy, since we can simply do a complete exhaustive search of all 2^x sequences, where x is the sequence length. However, once we start to increase the sequence length, the total number of sequences grows exponentially. Since the number of sequences with low autocorrelation values stays relatively low, the chance of finding one rapidly diminishes. It does not take a very big increase in sequence length to make these kinds of searches computationally infeasible. While this field of research is not new, it is still growing and needed more than ever before. While there already exist several methods to construct sequences with low autocorrelation values, we will take a closer look at the relation between the sequences, using a sort of evolutionary search which starts by going down a path or trajectory in the full set of sequences and terminates in a set of resonating sequences defined by our search parameters. We hope that data about this structure will help us find ways to skip parts of the process of finding these resonating sequences. We also hope to use this structure to modify our initial search so that, starting from a random or other easily constructed sequence, we can be sure to find another sequence with a significantly better autocorrelation value. Finally, we aim to construct long, semi-infinite sequences such that subsequences of such sequences have favorable spectral properties.
2 Overview

In this thesis we will first apply our search using the all zero sequence, {0,0,0,...,0}, as input for different sequence lengths N. We will observe and analyze the structures this input generates for different parameters. After this initial test of our search, we will take a closer look at the basic structure and relation of each step of our search. Analyzing this, we will then perform our search backwards, starting with sequences that have good autocorrelation properties. Our hope is that this might reveal a pattern in which sequences are connected to these low autocorrelation sequences. Further, we will investigate how negated sequences are related, and how negating all sequences, or some of them, in our structure will affect the structure as a whole. With this in mind we will do some exhaustive searches for short length sequences, especially noting parameters that are oppositional but generate structures that behave surprisingly similarly, namely periodic and negaperiodic autocorrelation. We will also look at the exhaustive search of aperiodic autocorrelation structures. In addition, we will investigate ways to transform sequences from periodic autocorrelation into negaperiodic autocorrelation equivalents, using special transform sequences. Possibly we might be able to find a way to directly transform one structure of a given type into another, equivalent structure of another type. Following this we intend to modify our search, so that we can use results from one search to possibly improve results of the next. Alternating our search parameters while performing our search might also prove interesting, as might using one parameter to optimize while actually being interested in another parameter. We will show that there are in fact several interesting structures and patterns forming, depending on which parameters we use. Even the trivial all zero sequence can occasionally let our search terminate in m-sequences. Symmetric properties are abundant, and we can show that structures in the periodic and negaperiodic autocorrelation environments are closely related both in size and behavior. Then we will generate new sequences by taking one bit of each sequence in our structure. Using this new sequence, we will apply a sliding window, and show that for each subsequence the aperiodic properties are on average significantly better than for random sequences.

3 Definitions

Sequence

The first object we need a clear definition for is a Sequence. One possible definition would be: "sequence: An arrangement of items according to a specified set of rules, for example, items arranged alphabetically, numerically, or chronologically." In this thesis we will be working on finite sequences S = {s_0, s_1, s_2, ..., s_(N-1)}, N >= 1, such that s_i is in {-1,+1}. However, for our purposes we can represent the same sequence S using the alphabet s_i in {0,1}, where +1 = (-1)^0 and -1 = (-1)^1. This gives us the possibility to read each element of the sequence as a binary number, or as bits, which a computer can use to represent any number, character or instruction. Bits have the value False or True, or 0 and 1. One element of a sequence will in this thesis be referred to as a bit, or several elements as bits.
Only in the autocorrelation function will s_i in {-1,+1} be used, to calculate a given sequence's energy.

3.1 Shift operation

This is the most basic operation we will be performing on the sequences. A shift operation is done by moving all bits in a sequence one (or more) steps to the right or left. Let S = {s_0, s_1, s_2, ..., s_(N-1)} be a sequence of length N. A right cyclic shift on S can be defined as

Definition 1. Shift(S)_((i+1) mod N) = s_i, for i in {0, 1, ..., N-1}.

Example: Shift operations performed on the sequence s = {000111}:

right shift operation:
Initial     000111
cyclic      100011
negacyclic  000011
acyclic     000011

left shift operation:
Initial     000111
cyclic      001110
negacyclic  001111
acyclic     001110

For clarity we will name the shift operations used on a sequence S as

crshift(S) = Cyclic Right Shift of S
nrshift(S) = Negacyclic Right Shift of S
arshift(S) = Acyclic Right Shift of S
clshift(S) = Cyclic Left Shift of S
nlshift(S) = Negacyclic Left Shift of S
alshift(S) = Acyclic Left Shift of S

3.2 Hamming distance

This metric is used to measure differences between two given sequences. The Hamming distance can be defined as follows: "The minimum number of bits that must be changed in order to convert one bit string into another." Or, more formally, the Hamming distance between two strings A and B, each of length N:

Definition 2. H(A, B) = sum_{i=0}^{N-1} (A_i XOR B_i).

3.3 The Autocorrelation function (ACF)

We then need a definition for the autocorrelation function (ACF). The autocorrelation function measures the self similarity of a binary sequence. We consider three types of autocorrelation: periodic, negaperiodic and aperiodic. The periodic autocorrelation is by far the most studied. In this thesis we will use and look at all three types of autocorrelation, but with a slight focus on the periodic and aperiodic. The periodic autocorrelation function (PACF) measures the correlation of a sequence with the cyclic shifts of itself. Let s be a binary sequence of length N, such that s = {s_0, s_1, ..., s_(N-1)}, s_i in Z_2, and s_i = 0 for i < 0 and i >= N. Then the periodic autocorrelation function is defined as

PACF_k(s) : p_k = sum_{i=0}^{N-1} (-1)^(s_i + s_(i+k)), 0 <= k < N.   (1)
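The shift operations and the Hamming distance translate directly into code. The following is a minimal sketch (not the thesis's own program), modelling sequences as Python tuples of bits; the function names follow the naming scheme above.

```python
# Sketch of the six shift operations (Section 3.1) and the Hamming
# distance (Section 3.2).  Sequences are tuples over {0, 1}.

def crshift(s):
    """Cyclic right shift: the last bit wraps around to the front."""
    return (s[-1],) + s[:-1]

def nrshift(s):
    """Negacyclic right shift: the wrapped-around bit is complemented."""
    return (1 - s[-1],) + s[:-1]

def arshift(s):
    """Acyclic right shift: the last bit is dropped and a 0 shifted in."""
    return (0,) + s[:-1]

def clshift(s):
    """Cyclic left shift: the first bit wraps around to the end."""
    return s[1:] + (s[0],)

def nlshift(s):
    """Negacyclic left shift: the wrapped-around bit is complemented."""
    return s[1:] + (1 - s[0],)

def alshift(s):
    """Acyclic left shift: the first bit is dropped and a 0 shifted in."""
    return s[1:] + (0,)

def hamming(a, b):
    """Number of positions in which sequences a and b differ."""
    return sum(x != y for x, y in zip(a, b))

s = (0, 0, 0, 1, 1, 1)
print(crshift(s))  # (1, 0, 0, 0, 1, 1)
print(nrshift(s))  # (0, 0, 0, 0, 1, 1)
```

Note that for this particular example the negacyclic and acyclic right shifts coincide, since the complemented wrap-around bit happens to equal the shifted-in zero.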

The parameter N represents the period of the cyclic shift, k is the shift index at which the sequence is compared to itself, and the sequence indices, i + k, are taken mod N. Another measure of the autocorrelation is the negaperiodic autocorrelation (NACF). The NACF measures the correlation of the negacyclic shifts of a sequence against itself. The function is defined as

Definition 3. NACF_k(s) : n_k = sum_{i=0}^{N-1} (-1)^(s_i + s_((i+k) mod N) + floor((i+k)/N)), 0 <= k < N.

N now represents half the period of the sequence. Once again the indices i + k are taken mod N. A third way to measure the autocorrelation is a combination of periodic and negaperiodic, called the aperiodic autocorrelation function (AACF). When we combine the periodic and the negaperiodic shifts of the sequence, we lose the periodicity of the shifted sequence. Instead of shifting the sequence cyclically, we compare only the window of indices where both the shifted sequence and the sequence itself exist. The AACF is then defined as

Definition 4. AACF_k(s) : a_k = sum_{i=0}^{N-k-1} (-1)^(s_i + s_(i+k)), 0 <= k < N.

We can also represent the PACF, NACF and AACF as polynomial multiplication. A binary sequence s = (s_0, s_1, s_2, ..., s_(N-1)) can be associated with the polynomial s(x) = s_0 + s_1 x + s_2 x^2 + ... + s_(N-1) x^(N-1). We can define

p(x) := PACF(s(x)) = s(x)s(x^(-1)) (mod x^N - 1),   (2)
n(x) := NACF(s(x)) = s(x)s(x^(-1)) (mod x^N + 1),   (3)
a(x) := AACF(s(x)) = s(x)s(x^(-1)),                 (4)

where

p(x) = sum_{k=0}^{N-1} p_k x^k,      (5)
n(x) = sum_{k=0}^{N-1} n_k x^k,      (6)
a(x) = sum_{k=1-N}^{N-1} a_k x^k.    (7)

This representation includes all shifts -N < k < N, where the autocorrelation for the kth shift is the coefficient of x^k. Example: Let N = 5 and let s be a sequence whose PACF_k is {5,-3,1,1,-3}, whose NACF_k is {5,-1,1,-1,1} and whose AACF_k is {5,-2,1,0,-1}, as seen below. This can also be written in polynomial form as

PACF(s(x)) = 5 - 3x + x^2 + x^3 - 3x^4.   (8)
NACF(s(x)) = 5 - x + x^2 - x^3 + x^4.     (9)
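The three autocorrelation functions also translate directly into code. A sketch, again modelling sequences as bit tuples; the example sequence 00101 is an assumption: the example's sequence literal was not recoverable here, and 00101 is one sequence consistent with the PACF and NACF values quoted above (the identity p_k = a_k + a_(N-k) then forces a_1 = -2).

```python
# The three autocorrelation functions of Section 3.3 for a {0,1}-valued
# sequence s; (-1)**s_i maps bit 0 to +1 and bit 1 to -1.

def pacf(s):
    """p_k = sum_i (-1)^(s_i + s_{(i+k) mod N}), 0 <= k < N."""
    N = len(s)
    return [sum((-1) ** (s[i] ^ s[(i + k) % N]) for i in range(N))
            for k in range(N)]

def nacf(s):
    """n_k: like the PACF, but each wrapped-around term gains a sign."""
    N = len(s)
    return [sum((-1) ** (s[i] ^ s[(i + k) % N] ^ ((i + k) // N))
                for i in range(N))
            for k in range(N)]

def aacf(s):
    """a_k = sum_{i=0}^{N-k-1} (-1)^(s_i + s_{i+k}), 0 <= k < N."""
    N = len(s)
    return [sum((-1) ** (s[i] ^ s[i + k]) for i in range(N - k))
            for k in range(N)]

s = (0, 0, 1, 0, 1)   # assumed example sequence, see the lead-in
print(pacf(s))  # [5, -3, 1, 1, -3]
print(nacf(s))  # [5, -1, 1, -1, 1]
print(aacf(s))  # [5, -2, 1, 0, -1]
```

The three functions are linked by p_k = a_k + a_(N-k) and n_k = a_k - a_(N-k) for 1 <= k < N, which is a useful sanity check on any implementation.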

AACF(s(x)) = -x^(-4) + x^(-2) - 2x^(-1) + 5 - 2x + x^2 - x^4.   (10)

Table 1: Showing the periodic (PACF_k), negaperiodic (NACF_k) and aperiodic (AACF_k) autocorrelation values of s at each cyclic, negacyclic and acyclic shift k.

A common metric used to measure binary sequences with low aperiodic autocorrelation is the Merit Factor (MF) proposed by Golay. The Golay Merit Factor of a binary sequence s of length N is given by

MF(s) = N^2 / (2 sum_{k=1}^{N-1} a_k^2).   (11)

For periodic autocorrelation we will be using

MF_p(s) = N^2 / sum_{k=1}^{N-1} p_k^2,     (12)

the negaperiodic Merit Factor of a binary sequence s of length N is given by

MF_n(s) = N^2 / sum_{k=1}^{N-1} n_k^2,     (13)

and the aperiodic Merit Factor of a binary sequence s of length N is given by

MF_a(s) = N^2 / (2 sum_{k=1}^{N-1} a_k^2).   (14)

In this thesis we will be looking at all three Merit Factors, and they will be labeled MF_p, MF_n and MF_a; however, any outputs from our program will usually be labeled with the type of Merit Factor first, i.e. pmf, nmf and amf. The higher the Merit Factor, the lower the values p_k, n_k and a_k, 1 <= k < N. Note that the trivial autocorrelation coefficient, a_0 = N, is not used in the calculation of the Merit Factor. Another metric we will use for binary sequences with low autocorrelation is the sum-of-squares, sigma_p, sigma_n or sigma_a, given by

sigma_p(s) = sum_{k=1}^{N-1} p_k^2,   (15)
sigma_n(s) = sum_{k=1}^{N-1} n_k^2,   (16)
sigma_a(s) = sum_{k=1}^{N-1} a_k^2.   (17)
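The sum-of-squares and merit factors sit directly on top of the autocorrelation coefficients. A sketch under the same bit-tuple conventions; whether the factor 2 also belongs in the periodic and negaperiodic denominators is not fully recoverable from the source, so here it is applied only in the aperiodic case, following the Golay definition.

```python
# Sum-of-squares energies and merit factors (Section 3.3), with
# mode "p" (periodic), "n" (negaperiodic) or "a" (aperiodic).

def _corr(s, k, mode):
    """One autocorrelation coefficient of the {0,1}-sequence s at shift k."""
    N = len(s)
    if mode == "a":                       # aperiodic: truncated window
        return sum((-1) ** (s[i] ^ s[i + k]) for i in range(N - k))
    wrap = 1 if mode == "n" else 0        # negaperiodic: sign on wrap-around
    return sum((-1) ** (s[i] ^ s[(i + k) % N] ^ (wrap * ((i + k) // N)))
               for i in range(N))

def energy(s, mode):
    """Sum-of-squares sigma over the non-trivial shifts 1 <= k < N."""
    return sum(_corr(s, k, mode) ** 2 for k in range(1, len(s)))

def merit_factor(s, mode):
    """MF_p, MF_n = N^2 / sigma; MF_a = N^2 / (2 sigma) (Golay)."""
    N = len(s)
    denom = 2 * energy(s, mode) if mode == "a" else energy(s, mode)
    return N ** 2 / denom

s = (0, 0, 1, 0, 1)           # assumed example sequence, as before
print(energy(s, "p"))         # sigma_p = 9 + 1 + 1 + 9 = 20
print(merit_factor(s, "p"))   # 25 / 20 = 1.25
```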

For brevity we will refer to the sum-of-squares as the Energy. The type of Energy a sequence can have is periodic, negaperiodic or aperiodic. We can see that sigma is part of the Merit Factor function, and the Merit Factor function can be written with the use of sigma; for example, the periodic Merit Factor is given by

MF_p(s) = N^2 / sigma_p(s).   (18)

Example: If N = 5 and the p_k values are {5,-3,1,1,-3}, the sum-of-squares or Energy is

sigma_p = (-3)^2 + (1)^2 + (1)^2 + (-3)^2 = 9 + 1 + 1 + 9 = 20.   (19)

The periodic Merit Factor for this sequence is then

MF_p(s) = 25/20 = 1.25.   (20)

3.4 The search basics

The most important operation in our searches is the generation and selection of the next generation sequence. Starting out with a random binary sequence S_0, we proceed to generate two new sequences based on S_0 by performing a right cyclic shift and a right negacyclic shift. Then we calculate the energy for both sequences, S_1c and S_1n. The sequence with the lowest Energy is chosen and used as input for the next step. Should they have equal energy, there is a collision and we must use a method to decide which of them to choose. This will eventually lead to a chain of sequences S_0, S_1, S_2, ..., S_n. Since we are working with sequences of length N, we have a finite number, 2^N, to choose from. Because of this, at some point our newly generated sequence S_n will be identical to a previously chosen sequence S_(n-x). At this point we terminate our search, as continuing would only lead to generating the same sequences over and over. Thus we have found our resonating sequences. Example: Given the sequence S_0 = {0000} and using periodic Energy, we generate

S_1c = crshift(0000) = 0000,   (21)
S_1n = nrshift(0000) = 1000.   (22)

We then calculate the periodic Energy for S_1c and S_1n:

Energy_p(S_1c) = 48.   (23)
Energy_p(S_1n) = 0.    (24)

We see that S_1n has the lowest periodic energy, so we choose it as our S_1. Now we repeat this process using S_1 as our initial sequence and find S_2c and S_2n:

S_2c = crshift(1000) = 0100,   (25)
S_2n = nrshift(1000) = 1100.   (26)

Again we calculate their periodic Energy and find

Energy_p(S_2c) = 0,    (27)
Energy_p(S_2n) = 16,   (28)

so we choose S_2c, since it has the lowest periodic energy, and set it as S_2. Now we have the set of sequences S = {0000}, {1000}, {0100}. If we keep repeating this process, we find that the next sequences we choose will be S_3 = {0010}, S_4 = {0001}, S_5 = {1000}. For every new sequence we choose, we must continually check back in the set S to see if we have at any point generated and chosen a sequence that already exists in S. We can see that in this example, S_5 = {1000} is identical to S_1. Proceeding any further is pointless, as we would loop forever, repeatedly choosing S_1, S_2, S_3, S_4 in this order. As we perform this and similar searches, some relations and structures of binary sequences show up very frequently, and it is useful to define them.

3.4.1 Collisions

It is possible that, given a sequence S_0, after we generate the two sequences S_1c and S_1n from S_0, we have the case that Energy(S_1c) = Energy(S_1n). When this occurs we have a collision, meaning we cannot know which of the two sequences would be the better choice to continue with. Example: Given the initial sequence S_0 = {10101}, we generate S_1c = {11010} and S_1n = {01010} by doing a cyclic right shift and a negacyclic right shift on S_0. By calculating the periodic energy for S_1c and S_1n we see that

Energy_p(S_1c) = Energy_p(S_1n) = 20.   (29)

Thus we have a collision between S_1c and S_1n.

3.4.2 Child

Starting with sequence S_0, we generate the sequences S_1c and S_1n and choose one based on our parameters. The sequence we choose is the best case child sequence of the original. All sequences have a child sequence.
Note: While both generated sequences are children of the original sequence, the term child will commonly be used to describe the child sequence we would choose after looking at their energies.
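The generate-and-select step, and the loop search it induces, can be sketched as follows; periodic energy and the cyclic child as collision default are assumptions matching this section's examples, not the only possible parameters.

```python
# Sketch of the basic search of Section 3.4: from each sequence take the
# cyclic and the negacyclic right shift, keep the child with the lower
# energy (collisions broken by a fixed default, here the cyclic child),
# and stop as soon as a previously chosen sequence reappears.

def crshift(s): return (s[-1],) + s[:-1]
def nrshift(s): return (1 - s[-1],) + s[:-1]

def energy_p(s):
    """Periodic Energy: sigma_p(s) = sum of p_k^2 over 1 <= k < N."""
    N = len(s)
    return sum(sum((-1) ** (s[i] ^ s[(i + k) % N]) for i in range(N)) ** 2
               for k in range(1, N))

def loop_search(s0, energy=energy_p, default="cyclic"):
    """Return (tail, loop) as lists of sequences, starting from s0."""
    chain, seen = [s0], {s0: 0}
    while True:
        c, n = crshift(chain[-1]), nrshift(chain[-1])
        ec, en = energy(c), energy(n)
        if ec == en:                      # collision: apply the default
            child = c if default == "cyclic" else n
        else:
            child = c if ec < en else n
        if child in seen:                 # chain closes: split tail / loop
            i = seen[child]
            return chain[:i], chain[i:]
        seen[child] = len(chain)
        chain.append(child)

tail, loop = loop_search((0, 0, 0, 0))
print(tail)  # [(0, 0, 0, 0)]
print(loop)  # [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
```

Run on S_0 = 0000 this reproduces the example above: a tail of one sequence followed by the four-sequence loop.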

3.4.3 Parent

Starting with a sequence S_0, generate the sequences S_1c and S_1n. We now check, for both new sequences, which of them would choose S_0 as its child (see Child). These are the structural parents of S_0. A sequence can have 0, 1 or 2 parents that choose it as the child with lowest energy. Note: As with child above, we also have two possible sequences to consider here, but in this case they are parent sequences. Unlike with children, where we must choose one or the other, with parent sequences it is possible that both parents lead to the same child. This property is important for estimating the number of End Tail sequences within a field of sequences. We will look at this later.

3.4.4 Sibling

This term describes the relation between two children. One child sequence can be called the sibling of the other child sequence (see: Child). The easiest way to generate the sibling of a given sequence is to flip the leftmost bit, since this is the bit in which the cyclic and negacyclic right shifts of the common parent differ. Note: a pair of sibling sequences also have the same two parent sequences.

3.4.5 Loop

The most important structure in this thesis. This represents a chain of sequences S_0, S_1, S_2, ..., S_n such that, starting with sequence S_0 and repeating our sequence generation method described above, we will traverse all the sequences in order and find that S_n = S_0. Example: The sequences 1000, 0100, 0010, 0001 form a Loop under periodic energy with the cyclic collision default (see: the search basics). Starting with any one of the above sequences, generate S_(n+1)c and S_(n+1)n and calculate their energy. Then we choose the generated sequence with the lowest periodic energy. Repeating this process will lead us to loop around these four sequences.

3.4.6 Tail

As mentioned in the explanation above, if we start with a random sequence and perform the sequence generation method repeatedly, we find that some sequences might not be part of the loop. These sequences form what can be described as a tail or transient.
For example, given a chain of sequences S_0, S_1, S_2, ..., S_n, where S_(n+1) = S_2, the sequences S_0 and S_1 would be elements of the tail. Generally, but far from always, a loop will have a tail for every sequence within the loop. Example: If we use the same parameters as in the Loop example, the sequence S_0 = 0000 will be a tail. If we start with this given sequence, generate S_1c and S_1n, which are 0000 and 1000, and calculate their periodic energy, then Energy_p(S_1c) = 48 and Energy_p(S_1n) = 0. We see that our search will choose

S_1n, or 1000, as the current best sequence. Looking at the example in the Loop definition above, we can see that repeating the process will lead us to be caught within the loop described there.

3.4.7 End Tail

If we generate "backwards" and find the two possible parents of S_0, and neither of them would choose S_0 as the child with lowest energy, then S_0 is an end tail. In other words, starting at any other sequence of the same length, our search will never pass through this sequence S_0, given the parameters. Example: Using the same sequence as above, we can show that the All Zero sequence is not only a tail sequence, but also an end tail sequence. Let us generate the sibling sequence of 0000, which is 1000. We see that both sequences have the same parent sequences, namely 0000 and 0001. If we calculate the periodic energy for the sibling pair, we get pe(1000) = 0 and pe(0000) = 48; thus 0000 will not be chosen by either of its parents, and is an End Tail sequence. In other words: the only way for our search to include this sequence in a structure is by starting with it as input.

3.4.8 Singularity

This is a very special case of the Loop structure. In this case we have a sequence which is both its own parent and child. Thus the whole loop and tail is made up of one sequence. For any sequence length there are only two possible cases of a singularity appearing. In order for a sequence to be both its own child and parent, a cyclic or negacyclic shift must result in itself. The only sequences for which this is at all possible are the All Zero Sequence, 000...0, and its negation, the All Ones Sequence, 111...1. Now we can look at how these sequences behave when we generate their children. First we start with the All Zero Sequence of length N = 5 and generate the two children by performing a cyclic right shift and a negacyclic right shift. This gives us the sequences 00000 and 10000. We see that the All Zero Sequence has the maximum possible periodic energy:
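The parent, sibling and end-tail notions can be checked mechanically. A sketch assuming periodic energy and the cyclic collision default: the two parent candidates of a sequence are its cyclic and negacyclic left shifts, and the sibling flips the leftmost bit, the position in which the two right-shift children of a common parent differ.

```python
# Parents, siblings and end tails (Sections 3.4.3-3.4.7), assuming
# periodic energy with the cyclic default on collisions.

def crshift(s): return (s[-1],) + s[:-1]
def nrshift(s): return (1 - s[-1],) + s[:-1]
def clshift(s): return s[1:] + (s[0],)
def nlshift(s): return s[1:] + (1 - s[0],)

def energy_p(s):
    N = len(s)
    return sum(sum((-1) ** (s[i] ^ s[(i + k) % N]) for i in range(N)) ** 2
               for k in range(1, N))

def chosen_child(s, energy=energy_p):
    """The best case child; <= gives the cyclic child on a collision."""
    c, n = crshift(s), nrshift(s)
    return c if energy(c) <= energy(n) else n

def sibling(s):
    """The other child of the same parents: flip the leftmost bit."""
    return (1 - s[0],) + s[1:]

def parents(s, energy=energy_p):
    """Those of the two shift pre-images that choose s as their child."""
    return [p for p in (clshift(s), nlshift(s))
            if chosen_child(p, energy) == s]

def is_end_tail(s, energy=energy_p):
    """s is an end tail if no sequence chooses it as its child."""
    return not parents(s, energy)

print(parents((1, 0, 0, 0)))      # [(0, 0, 0, 1), (0, 0, 0, 0)]
print(is_end_tail((0, 0, 0, 0)))  # True
```

This reproduces the worked example: 1000 has two parents, while 0000 loses to its sibling 1000 at both parents and is therefore an end tail.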
k   p_k   p_k^2
1    5     25
2    5     25
3    5     25
4    5     25

The length of the sequence is irrelevant, as in this case the all zero sequence always regenerates itself. However, should we flip the leftmost bit, the situation changes, and for N = 5 we get the following table:

k   p_k   p_k^2
1    1     1
2    1     1
3    1     1
4    1     1

We see that for every row there will be 2 bits that differ and N-2 bits that are equal. Increasing the sequence length by adding zeros will not change this. Thus we can see that for periodic energy, the All Zero Sequence will never choose itself as its best case child. If we use aperiodic energy, the tables are essentially the same, except half of the energy disappears because of the aperiodic structure.

k   a_k   a_k^2
1    4     16
2    3     9
3    2     4
4    1     1

We see in the above table that the All Zero Sequence gives the maximum possible aperiodic energy for a given length. Looking at the sibling sequence, we see that introducing one negated bit affects the table almost identically to the case for periodic energy.

k   a_k   a_k^2
1    2     4
2    1     1
3    0     0
4   -1     1

Again we see in the above table that one single bit has a major impact on the total sum. As in the periodic case, each autocorrelation value is smaller than the corresponding value for the All Zero Sequence, for any length n. Thus, for aperiodic energy as well, the All Zero Sequence will never choose itself as its best case child. Setting up the same tables for negaperiodic energy shows some interesting properties.

k   n_k   n_k^2
1    3     9
2    1     1
3   -1     1
4   -3     9

If we compare the above table with the one for the All Zero sequence's sibling, shown in the table below, we see that the tables are nearly identical.

k   n_k   n_k^2
1    3     9
2    1     1
3   -1     1
4   -3     9

An analysis of these two tables is in order. We see that both tables have one column that always consists of the same number. In the All Zero Sequence table, the last column contains only zeros. Decreasing or increasing the sequence length n will not change the last column. The first n-1 columns form an (n-1)x(n-1) matrix in which exactly half of the values are 0 and the other half 1. This also holds true for any length n. Looking at the second table, we can see the same properties. The first column will always consist of 1's, regardless of sequence length, while columns 2 to n also form an (n-1)x(n-1) matrix which is exactly identical to the first. Thus, for any length, the energy table is made up of two identical parts for both the all zero sequence and its sibling: an identical (n-1)x(n-1) matrix, plus a column on the side which provides identical values. Thus the All Zero Sequence of any length and its sibling always have identical negaperiodic energy tables, or in other words identical negaperiodic energy. Further, selecting the cyclic child in case of a collision (which always appears in this case), we have that for any length n, the All Zero Sequence will always choose itself as best case child; thus we have a Singularity. For all practical purposes the reasoning used above holds for the All Ones Sequence as well, except that all bits are negated. To make things simpler we will define an xy- prefix for tails and loops, to more easily identify which parameters we used to generate them. This can be defined as

Definition 5. A prefix xy such that x in {p, n, a} and y in {c, n}, where the set {p, n, a} represents the three possible energies (periodic, negaperiodic, aperiodic) and the set {c, n} represents the two possible collision defaults (cyclic and negacyclic).

4 The Basic Loop Search

The Basic Loop Search is the most basic of our search methods.
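The singularity argument can be verified numerically: for negaperiodic energy the all zero sequence and its sibling always tie, and the cyclic collision default then keeps the all zero sequence itself. A small check (a sketch, not the thesis's own program):

```python
# Verifying the singularity of Section 3.4.8: under negaperiodic energy
# with the cyclic collision default, the all zero sequence of any length
# ties with its sibling and therefore chooses itself as its child.

def crshift(s): return (s[-1],) + s[:-1]
def nrshift(s): return (1 - s[-1],) + s[:-1]

def energy_n(s):
    """Negaperiodic Energy: sigma_n(s) = sum of n_k^2 over 1 <= k < N."""
    N = len(s)
    return sum(sum((-1) ** (s[i] ^ s[(i + k) % N] ^ ((i + k) // N))
                   for i in range(N)) ** 2
               for k in range(1, N))

for N in range(2, 12):
    zero = (0,) * N
    c, n = crshift(zero), nrshift(zero)   # children: itself and 10...0
    assert c == zero
    assert energy_n(c) == energy_n(n)     # collision at every length
    # collision -> cyclic default -> the all zero sequence keeps itself
print("the all zero sequence is its own best case child for all tested N")
```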
Starting out with an arbitrary sequence s_0, we proceed to generate subsequent sequences s_1, s_2, s_3, ... until we hit a Loop. The easiest Loop to find is when we use periodic autocorrelation and the cyclic collision default. These parameters make the structure we get very predictable, and this can be exploited in our program, reducing the CPU power needed. The generation of a Loop from the all zero sequence of length 10 is shown in Table 2. Although the all zero sequence has the worst possible energy using periodic autocorrelation, this table serves to show the general structure of the Loop. We can see how we started out with a given sequence of rather high periodic energy, and within very few steps found the local minimum for the energy. The periodic merit factor for the

Tail
(pe: 900)
(pe: 324)
(pe: 100)
(pe: 100)
Loop
(pe: 36)
(pe: 36)
(pe: 36)
(pe: 36)
(pe: 36)
(pe: 36)
(pe: 36)
(pe: 36)
(pe: 36)
(pe: 36)

Table 2: A periodic energy search using the cyclic collision default (pc-loop search). The first four sequences make up the tail, while the last ten form a loop.

loop is not extremely high, but most definitely an improvement over the initial periodic merit factor. It should be noted that already in this example, the need for a default choice in case of collision shows up. The full output for this particular Loop search is shown in Table 3. The sequences on the left side are those chosen by the program as best case, while the sequences on the right side are those discarded, given our input parameters. Each row consists of the two children of the chosen sequence in the row above. If we look closer at this particular example, we can see that tail sequence nr. 3 generates two children that have the same periodic energy. At this point we have a collision and must decide which path to follow. For simplicity, our searches will strictly choose either the cyclic child or the negacyclic child as default in case of collisions. Some other possible ways of solving this could be: randomly choosing which sequence to accept as the new best case, or using another metric to measure them, for instance calculating the aperiodic or negaperiodic energy for both and making a decision based on this new metric. Using periodic autocorrelation as the parameter causes all cyclic shifts of a sequence to have the same periodic energy as the initial sequence. When we also decide on the cyclic collision default, we effectively force every Loop we find to consist of N sequences, where N is the sequence length. If we use strictly negacyclic collisions, we find that this does not necessarily hold. We will see later that using negacyclic collisions does not guarantee Loops of size N.
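The shift-invariance claim is easy to confirm in code: all cyclic shifts of a sequence share one periodic energy, which is what forces a pc-loop to walk through the cyclic shifts of its minimum-energy sequence (assuming all N shifts are distinct). A quick sketch:

```python
# Checking that periodic energy is invariant under cyclic shifts, the
# property behind pc-loops of length N.

def crshift(s): return (s[-1],) + s[:-1]

def energy_p(s):
    N = len(s)
    return sum(sum((-1) ** (s[i] ^ s[(i + k) % N]) for i in range(N)) ** 2
               for k in range(1, N))

s = (0, 0, 1, 0, 1, 1, 0, 1)        # an arbitrary test sequence
shifts = [s]
for _ in range(len(s) - 1):
    shifts.append(crshift(shifts[-1]))
assert len({energy_p(t) for t in shifts}) == 1   # all N shifts tie
print("all cyclic shifts share periodic energy", energy_p(s))
```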
Table 4 is a comparison of the two Loop searches, using first cyclic and then negacyclic collisions.

Tail
Cyclic Default   Negacyclic Default
(pe: 900)        (pe: 900)
(pe: 324)        (pe: 324)

(pe: 100)        (pe: 100)
(pe: 100)        (pe: 100)
(pe: 100)        (pe: 68)
Loop
(pe: 36) x 10    (pe: 36) x 20

Table 4: Comparison between a pc-loop and a pn-loop, starting with the same sequence s_0 = the all zero sequence of length 10.

In Table 4 we can see how much the two loops differ, simply from having to decide upon one or the other collision default. Another interesting observation is that the negacyclic collision Loop is now made up of parts of other cyclic collision loops. The bold faced sequences mark every point where we have a collision. We can see that they are cyclically inequivalent. Another interesting observation here is that, contrary to what one might have expected, the distribution of sequences from each cyclic collision Loop is not even. Counting the sequences from the different Loops gives us a total of 20. If we move on to the negaperiodic autocorrelation, also starting with the all zero sequence, we get a new pattern in the Loop. For the Loop search with cyclic collisions, the best case sequence for the all zero sequence is itself. This holds any time we start with the all zero sequence, for any length N. Thus we have a singularity. This can only happen for the all zero or the all ones sequences. However, if we take a look at the negacyclic collision option, we get a more expected output. The all zero sequence will yield the loop shown in Table 5.

Tail
Chosen Sequences   Discarded Sequences

(ne: 240)
(ne: 240)   (ne: 240)
(ne: 144)   (ne: 240)
(ne: 80)    (ne: 144)
(ne: 48)    (ne: 80)
(ne: 48)    (ne: 48)
Loop
(ne: 16)    (ne: 48)
(ne: 16)    (ne: 80)
(ne: 16)    (ne: 48)
(ne: 16)    (ne: 48)
(ne: 16)    (ne: 48)
(ne: 16)    (ne: 144)
(ne: 16)    (ne: 16)
(ne: 16)    (ne: 144)
(ne: 16)    (ne: 48)
(ne: 16)    (ne: 48)
(ne: 16)    (ne: 48)
(ne: 16)    (ne: 80)
(ne: 16)    (ne: 48)
(ne: 16)    (ne: 48)
(ne: 16)    (ne: 48)
(ne: 16)    (ne: 144)
(ne: 16)    (ne: 16)
(ne: 16)    (ne: 144)
(ne: 16)    (ne: 48)
(ne: 16)    (ne: 48)

Table 5: A basic nn-loop search; each row shows the two children of the previously chosen best case sequence. The chosen sequences are found in the left column.

Again we started out with a sequence of high negaperiodic energy, and very rapidly improved it. The tail length stayed shorter than the Loop in this case as well. At step 1 we can see the collision between the all zero child and the all zero child with 1 bit flipped. But since we here chose the negacyclic collision default, the search broke out of the singularity. Finally, in Table 6, we will take a look at how the all zero sequence performs when using the aperiodic autocorrelation.

Tail
(ae: 285)
(ae: 141)
(ae: 85)
(ae: 45)
(ae: 29)
(ae: 13)

(ae: 13)
(ae: 21)
Loop
(ae: 13)
(ae: 21)
(ae: 21)
(ae: 29)
(ae: 21)
(ae: 29)
(ae: 37)
(ae: 45)
(ae: 29)
(ae: 13)
(ae: 21)
(ae: 21)
(ae: 29)
(ae: 21)
(ae: 29)
(ae: 37)
(ae: 45)
(ae: 29)
(ae: 13)
(ae: 21)
(ae: 21)
(ae: 29)
(ae: 21)
(ae: 29)
(ae: 37)
(ae: 45)
(ae: 29)
(ae: 13)
(ae: 21)
(ae: 21)
(ae: 29)
(ae: 21)
(ae: 29)
(ae: 37)
(ae: 45)
(ae: 29)

Table 6: Basic ac-loop search, using the all zero sequence of length 10 as input.

The first thing we can note is the length of the Loop, l_loop = 36, which is not even a multiple of the sequence length n = 10. Second, we note how the search hits the local minimum even before entering the Loop: already at step 6 in the tail, the energy_a = 13. Once we enter the loop, energy_a = 13 for the first sequence; thereafter, however, there are only a few occurrences of energy_a = 13. The average

Tail
Chosen Sequences   Discarded Sequences
(pe: 900)
(pe: 324)          (pe: 900)
(pe: 100)          (pe: 324)
(pe: 100)          (pe: 100)
Loop
Chosen Sequences   Discarded Sequences
(pe: 36)           (pe: 100)
(pe: 36)           (pe: 68)
(pe: 36)           (pe: 36)
(pe: 36)           (pe: 36)
(pe: 36)           (pe: 68)
(pe: 36)           (pe: 100)
(pe: 36)           (pe: 68)
(pe: 36)           (pe: 100)
(pe: 36)           (pe: 100)
(pe: 36)           (pe: 196)

Table 3: The full search, where we can see, in each step of the search, which sequences were chosen and which were discarded. The best case sequences are found in the left column.

energy_a is significantly higher, and the maximum energy_a within the loop is 45, almost 3.5 times as large as the local minimum. Using negacyclic collisions does not show a significant change in structure here, so we will refrain from showing that example.

5 Analysis of the Loop Search using the All Zero Sequence

We will now take a closer look at the loop search using the all zero sequence of different lengths. This starting sequence was chosen because one of our goals was to look at ways to transform easily constructed sequences with high energy into more complex sequences with low Energy.

5.1 Periodic loop search, cyclic collision

In Table 7 we see some loop searches using periodic energy. Searches were first performed with the cyclic collision parameter, then the negacyclic collision parameter. The headings are defined as follows:

N - sequence length
tail_pc - length of the tail using a pc-loop search
loop_pc - length of the pc-loop, which the tail_pc terminates in

pmf_pc - best periodic merit factor found in loop_pc
coll - number of collisions found in both tail_pc and loop_pc
tail_pn - length of the tail using a pn-loop search
loop_pn - length of the pn-loop, which the tail_pn terminates in
pmf_pn - best periodic merit factor found in loop_pn

N   tail_pc   loop_pc   pmf_pc   coll   tail_pn   loop_pn   pmf_pn

Table 7: Excerpt of a table comparing the difference between cyclic and negacyclic collisions, using the all zero sequence as input for different lengths.

If we look at the pc-loop searches, the first thing we note is the very short pc-tail length. Considering that the all zero sequence is a maximum energy_p sequence, we expected that starting with it would lead to a long tail, possibly

Figure 1: Comparing tail lengths of pc- and pn-loop searches, starting with the all zero sequence, for lengths from N = 10 upward. The two graphs are so close that they are hard to differentiate.

However it is shown here that for sequences of short length, N < 17, most of the pc-tails are shorter than N. As we increase the sequence length N, it is easy to see that the tail length grows past N. We have to keep in mind that the total number of sequences that exist at this length grows exponentially in N. If we plot the size of the pc-tails on a graph, see Figure 1, it is very easy to see that, although the size is increasing with N, it is nowhere near achieving this kind of exponential growth.

If we look at the size of the pc-loops, they seem to follow N in size. This happens because of the way cyclic shifts affect a sequence's energy_p: performing any number of cyclic shift operations on a sequence will not change the energy_p of any of the shifted sequences. This fact ensures that at any time during the pc-loop search, we are faced with the following situation. When we start with the initial sequence S_0 of length N, we can already state the following facts. If we perform N cyclic shift operations on S_0, we will have t different sequences, where t <= N, which all have the same energy_p, and it is highly likely that t = N. The only way during our search to prevent the above collection of sequences from forming a loop is that at some point of our search, the negacyclic shifted sequence has lower energy_p than S_0. If we at any time find a negacyclic shifted sequence that has better energy_p, this sequence can be regarded as our new S_0, and the above statements apply to it in turn.
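The shift-invariance argument above, and the search built on it, can be made concrete. Below is a minimal sketch in Python, not the program used in the thesis: the function names and the merit-factor normalisation pmf = N^2 / energy_p are our assumptions. The search always keeps the lower-energy child, breaking ties toward the cyclic child, and stops when a sequence repeats.

```python
import random

def shift_c(s):
    # Right cyclic shift: the last element wraps to the front.
    return (s[-1],) + s[:-1]

def shift_n(s):
    # Right negacyclic shift: the wrapped element is negated.
    return (-s[-1],) + s[:-1]

def energy_p(s):
    # Periodic energy: sum of squared off-peak periodic autocorrelations.
    n = len(s)
    return sum(sum(s[i] * s[(i + k) % n] for i in range(n)) ** 2
               for k in range(1, n))

def pmf(s):
    # Periodic merit factor (assumed normalisation: N^2 / energy_p).
    return len(s) ** 2 / energy_p(s)

def loop_search(s0):
    # Follow the lower-energy child until a sequence repeats, then split
    # the visited path into the tail and the loop it terminates in.
    seen, path, s = {}, [], s0
    while s not in seen:
        seen[s] = len(path)
        path.append(s)
        c, n = shift_c(s), shift_n(s)
        s = c if energy_p(c) <= energy_p(n) else n  # tie: keep cyclic child
    return path[:seen[s]], path[seen[s]:]

# Cyclic shifts never change energy_p, so a set of cyclic shifts of one
# sequence can close into a pc-loop of length at most N.
random.seed(7)
s = tuple(random.choice((-1, 1)) for _ in range(16))
t = s
for _ in range(len(s)):
    t = shift_c(t)
    assert energy_p(t) == energy_p(s)  # invariant under every cyclic shift
assert t == s  # N cyclic shifts return the original sequence

# The all zero sequence (all +1 in the usual 0 -> +1, 1 -> -1 map) is a
# maximum energy_p sequence: every off-peak R_k equals N.
tail, loop = loop_search((1,) * 8)
assert set(tail).isdisjoint(loop) and len(loop) >= 1
```

Changing the tie-break or the collision handling gives the pn variant of the same search; the sketch above corresponds to the cyclic-collision (pc) parameter.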

Figure 2: Periodic merit factors for pc-loops generated by the all zero sequence, from N = 10 to N = 1500.

This shows us that for any sequence length N, a pc-loop will also have a loop length of N, where all sequences in the loop are cyclic shifts of each other. This is very useful when checking for loops in our program, but will be discussed later. However, as we can see from Table 7, the periodic merit factor for the pc-loops seems to come close to 6. If we take a look at Figure 2, it is very clear that the periodic merit factor converges on 7. From this graph we can draw the conclusion that starting a pc-loop search with the all zero sequence will, for longer length sequences, with high probability converge to this value.

5.2 Periodic loop search, negacyclic collision

If we then look at the pn-loop searches starting with the all zero sequence again, we see in general the same behavior as with the pc-loop searches. The tail length stays surprisingly short; although it often gives a different result compared to the pc-tail length at a given N, it still grows far more slowly than exponentially in N. Finding that the pn-loop is very similar to the pc-loop was not a surprise, as the loops will only differ if there are one or more collisions during the search. We did not expect collisions to be common, and they are not, as we will see later on. Moreover, if we look at the pn-loop lengths, we see that they are also mostly of length N. Several loops, on the other hand, have lengths that are multiples of N. Examples of this happening are N = {18, 31, 71, 75, 86}. Performing a pn-loop search starting with the all zero sequence of these lengths will yield loop lengths of {72, 62, 142, 150, 172} respectively. In terms of N, these loops have lengths of {4N, 2N, 2N, 2N, 2N}. Our first expectation for these loops was that, for the 2N-length loops, they consisted of two pc-loops that had the same energy_p, and at some point there was a collision and our search found a pn-loop which is made up of two pc-loops.

Figure 3: Graph showing the number of collisions found when performing a pc-loop search, using the all zero sequence as input for different lengths N.

In the case where we have loop lengths of 4N, we would expect to have four pc-loops close together. While this does occur, it does not always happen. If we look at the example for periodic loops in Table 4, we can see that the pn-loop is actually made up of six different pc-loops. Another special case we should mention is N = 14. In this case, when we do a pn-loop search starting with the all zero sequence, we find that the loop length is 92. This is a length of 6.571N, and shows that this particular pn-loop cannot be made up of several full pc-loops.

Finally we will look at the collisions. Only collisions that happened in the pc-loop search were noted. If there are no collisions, there is no need to run a pn-loop search. This is because no collisions means that both searches would be identical, as the only difference between them is the collision handling. Looking at Table 7 again, we can see that there are very few collisions. In fact, although the number of sequences we traverse (tail + loop) grows fast, the number of collisions mostly stays below 20. To make this even more interesting, we can see in Figure 3 that the number of collisions forms a pattern depending on N. In particular, for any given sibling pair of length n (mod 4) = 0, we can eliminate the need for collision handling at all. It can also be noted experimentally that usually the number of collisions satisfies

Collisions(n (mod 4) = 1) < Collisions(n (mod 4) = 2)   (30)
Collisions(n (mod 4) = 3) < Collisions(n (mod 4) = 2)   (31)

5.3 Negaperiodic loop search

Let us see what happens if we perform the basic loop search using energy_n, starting again with the all zero sequence of increasing length.

N - sequence length
tail nc - length of the tail using a nc-loop search.
loop nc - length of the nc-loop, which the tail nc terminates in.
nmf nc - best negaperiodic merit factor found in loop nc.
coll - number of collisions found in both tail nc and loop nc.
tail nn - length of the tail using a nn-loop search.
loop nn - length of the nn-loop, which the tail nn terminates in.
nmf nn - best negaperiodic merit factor found in loop nn.

N  tail nc  loop nc  nmf nc  coll  tail nn  loop nn  nmf nn

Table 8: excerpt of a table comparing the difference between cyclic and negacyclic collisions for negaperiodic energy, using the all zero sequence as input for different lengths.

Figure 4: Comparison of the nn-tail lengths with the energy_p tail lengths, using the all zero sequence as input for different lengths N.

Consider Table 8. The most obvious pattern of the nc-loop searches is that every single one of them terminates with the input sequence, and we have a singularity in every search. This shows that starting with the all zero sequence as input for an nc-loop search is a very bad choice. The problem with getting a singularity for a certain energy is that we cannot ignore collision handling; see the definition of singularity.

If we instead look at the nn-loop search with the same input sequences, we see that the increase in the size of the nn-tail is similar to that of the energy_p tails. Although we can see from Figure 4 that the nn-tail length grows faster than the energy_p tails, it is nowhere near achieving exponential growth in N. Changing the scale to a logarithmic one, Figure 5 shows that the size of the tails fails to achieve exponential growth, but the predictability is very easy to see.

Also for the nn-loops we can look at the merit factors. Again we see the same tendency as with Table 7. Performing nn-loop searches from N = 10 to N = 1300 gives the merit factor plot shown in Figure 6. Comparing this plot to the plot of merit factors for pc-loops we saw previously in Figure 2, there is no doubt that the exact same pattern is emerging. In fact, we would be hard pressed to distinguish the two plots if superimposed on each other.
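The resemblance to the pc-loop behaviour is natural: energy_n plays the same role for negacyclic shifts that energy_p plays for cyclic shifts, and 2N negacyclic shifts return any sequence to itself (N shifts negate it), which also illuminates the 2N-length pn-loops of Section 5.2. A small check, using the standard negaperiodic autocorrelation (terms that wrap around pick up a sign change); the function names are our own:

```python
def shift_n(s):
    # Right negacyclic shift: the wrapped element changes sign.
    return (-s[-1],) + s[:-1]

def negaperiodic_acf(s, k):
    # Negaperiodic autocorrelation at shift k: wrapped terms are negated.
    n, total = len(s), 0
    for i in range(n):
        j = i + k
        total += s[i] * s[j] if j < n else -s[i] * s[j - n]
    return total

def energy_n(s):
    # Negaperiodic energy: sum of squared off-peak values.
    return sum(negaperiodic_acf(s, k) ** 2 for k in range(1, len(s)))

s = (1, -1, -1, 1, 1, 1, -1, 1)   # arbitrary example, N = 8

# energy_n is invariant under negacyclic shifts, so an nn-loop closes
# over negacyclic shifts just as a pc-loop closes over cyclic ones.
assert energy_n(shift_n(s)) == energy_n(s)

# N negacyclic shifts negate the sequence; 2N shifts restore it.
t = s
for _ in range(len(s)):
    t = shift_n(t)
assert t == tuple(-x for x in s)
for _ in range(len(s)):
    t = shift_n(t)
assert t == s
```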

Figure 5: Identical to Figure 4, but with the Y axis changed to a logarithmic scale.

Figure 6: Plot of negaperiodic merit factors for nn-loop searches, N = 10 to N = 1300, all zero sequence as input.

5.4 Aperiodic loop search

If we change our metric type to aperiodic, our search runs into trouble. Again using the all zero sequence as input, we run the search from sequence length N = 10.

N  tail ac  loop ac  mf  coll  tail an  loop an  mf

Table 9: excerpt of a table comparing the difference between cyclic and negacyclic collisions for aperiodic energy, using the all zero sequence as input for different lengths.

Figure 7: Comparison of tail and loop length for ac-loop searches, starting with the all zero sequence.

If we take the results from Table 9 and plot them on a graph, we can see how the sizes of both the tails and the loops seem to have absolutely no relation to N, Figure 7. While both the tail and loop length do seem to increase with sequence length N, they do so in highly irregular steps. For example, at N = 42 the tail length has already reached a local peak, and when we increase N to 43, the tail length drops to tail ac = 9394, a mere 11.76% of the previous value, even with increased sequence length. The same can be seen with the loop lengths, but at a much larger scale. For the ac-loop lengths in Table 9, at N = {41, 45, 47} the loop lengths are {46, , 100} respectively. The increase in loop length from one sequence length to the next is far beyond what we have seen in the periodic and negaperiodic loop searches above. From length N = 41 to 45, the increase in loop length is staggering, only to drop again by nearly the same percentage when N = 47. This can be seen in Figure 7.

Our search does not handle these extreme variations of the loop and tail length very well. It seems very unlikely that either the tail or loop lengths fall into a predictable pattern. The lack of such a pattern turns our search into one that not only must calculate a large number of energies and merit factors, but also, for every new sequence found, must backtrack the list of sequences found so far. On top of this, we are also faced with the fact that when we finally do find the loop, we are not guaranteed, unlike with periodic and negaperiodic energies, that the local best energy sequence is actually within the loop. However, this is something that we expected from the start. Our search does reveal some interesting data about collisions. As with the periodic search, there seem to be certain sequence lengths N where we cannot find any collisions.
Whenever N = 0 (mod 4) or N = 3 (mod 4), we can see that there are no collisions, as with our periodic search above.
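The erratic ac behaviour is consistent with the fact that, unlike energy_p and energy_n, the aperiodic energy is preserved by neither shift, so nothing forces an ac-loop to consist of shifts of a single sequence. A sketch using Golay's aperiodic merit factor F = N^2 / (2 * energy_a), with the length-7 Barker sequence as a sanity check; the function names are our own:

```python
def aperiodic_acf(s, k):
    # Aperiodic autocorrelation at shift k (no wrap-around).
    return sum(s[i] * s[i + k] for i in range(len(s) - k))

def energy_a(s):
    # Aperiodic energy: sum of squared off-peak values.
    return sum(aperiodic_acf(s, k) ** 2 for k in range(1, len(s)))

def merit_factor(s):
    # Golay's aperiodic merit factor.
    return len(s) ** 2 / (2 * energy_a(s))

def shift_c(s):
    # Right cyclic shift.
    return (s[-1],) + s[:-1]

barker7 = (1, 1, 1, -1, -1, 1, -1)
assert energy_a(barker7) == 3           # every off-peak C_k is 0 or -1
assert merit_factor(barker7) == 49 / 6  # about 8.17

# A cyclic shift generally changes energy_a, unlike energy_p, which is
# one reason ac-loop lengths bear no simple relation to N.
assert energy_a(shift_c(barker7)) != energy_a(barker7)
```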

Figure 8: Number of collisions found in the searches shown in Figure 7.

6 Analysis of the tail and loop sizes

So far we have only looked at the tails generated by the all zero sequence and how long they are, but what do these tails look like? Is a tail simply a chain of sequences, or does it grow as a binary tree? What do the other tails connecting to the loops we have found look like? First we will study more closely the relation between a given sequence and its children, to better understand this.

6.1 A closer look at the relation between parent and child sequences

For a given sequence S_0, we generate two child sequences S_1c and S_1n by performing a right cyclic or negacyclic shift. For the two child sequences S_1c and S_1n there are two (parent) sequences that can generate these children using a right cyclic or negacyclic shift. One of these is S_0, and the other is a sequence identical to S_0, but with bit s_{n-1} negated. Let us call this sequence S'_0. The child generated by the cyclic shift of S_0 is the same sequence as that generated by a negacyclic right shift of S'_0, and the same exchange of roles happens for the other child. Figure 9 shows this property very clearly. Since this holds for each and every sequence, we can think of the tail and loop as somehow forming a variant of the DNA double helix shape, if we include both children and the missing parent in our searches.

We now put this fact into context with our search. Given the sequences S_1 and S'_1, as long as

energy(S_1) != energy(S'_1)   (32)

both S_0 and S'_0 will choose the same sequence as their best case child. This is of high significance for the structure of the tails. Looking again at the tables for
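The exchange of roles described above is easy to verify mechanically: negating the last bit of S_0 swaps which shift produces which child. A quick check in our own notation:

```python
def shift_c(s):
    # Right cyclic shift of a +/-1 sequence.
    return (s[-1],) + s[:-1]

def shift_n(s):
    # Right negacyclic shift: the wrapped element is negated.
    return (-s[-1],) + s[:-1]

s0 = (1, -1, 1, 1, -1, -1)
s0_prime = s0[:-1] + (-s0[-1],)  # S'_0: S_0 with its last bit negated

# S_0 and S'_0 generate the same pair of children, with the cyclic and
# negacyclic roles exchanged; these shared edges form the rungs of the
# "double helix" picture described above.
assert shift_c(s0) == shift_n(s0_prime)
assert shift_n(s0) == shift_c(s0_prime)
```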


Discrete Mathematics for CS Spring 2007 Luca Trevisan Lecture 27

Discrete Mathematics for CS Spring 2007 Luca Trevisan Lecture 27 CS 70 Discrete Mathematics for CS Spring 007 Luca Trevisan Lecture 7 Infinity and Countability Consider a function f that maps elements of a set A (called the domain of f ) to elements of set B (called

More information

Sliding Windows with Limited Storage

Sliding Windows with Limited Storage Electronic Colloquium on Computational Complexity, Report No. 178 (2012) Sliding Windows with Limited Storage Paul Beame Computer Science and Engineering University of Washington Seattle, WA 98195-2350

More information

The Turing machine model of computation

The Turing machine model of computation The Turing machine model of computation For most of the remainder of the course we will study the Turing machine model of computation, named after Alan Turing (1912 1954) who proposed the model in 1936.

More information

the subset partial order Paul Pritchard Technical Report CIT School of Computing and Information Technology

the subset partial order Paul Pritchard Technical Report CIT School of Computing and Information Technology A simple sub-quadratic algorithm for computing the subset partial order Paul Pritchard P.Pritchard@cit.gu.edu.au Technical Report CIT-95-04 School of Computing and Information Technology Grith University

More information

8.5 Taylor Polynomials and Taylor Series

8.5 Taylor Polynomials and Taylor Series 8.5. TAYLOR POLYNOMIALS AND TAYLOR SERIES 50 8.5 Taylor Polynomials and Taylor Series Motivating Questions In this section, we strive to understand the ideas generated by the following important questions:

More information

4 Derivations in the Propositional Calculus

4 Derivations in the Propositional Calculus 4 Derivations in the Propositional Calculus 1. Arguments Expressed in the Propositional Calculus We have seen that we can symbolize a wide variety of statement forms using formulas of the propositional

More information

Slope Fields: Graphing Solutions Without the Solutions

Slope Fields: Graphing Solutions Without the Solutions 8 Slope Fields: Graphing Solutions Without the Solutions Up to now, our efforts have been directed mainly towards finding formulas or equations describing solutions to given differential equations. Then,

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 207, v 260) Contents Matrices and Systems of Linear Equations Systems of Linear Equations Elimination, Matrix Formulation

More information

Math Models of OR: Branch-and-Bound

Math Models of OR: Branch-and-Bound Math Models of OR: Branch-and-Bound John E. Mitchell Department of Mathematical Sciences RPI, Troy, NY 12180 USA November 2018 Mitchell Branch-and-Bound 1 / 15 Branch-and-Bound Outline 1 Branch-and-Bound

More information

Problem. Problem Given a dictionary and a word. Which page (if any) contains the given word? 3 / 26

Problem. Problem Given a dictionary and a word. Which page (if any) contains the given word? 3 / 26 Binary Search Introduction Problem Problem Given a dictionary and a word. Which page (if any) contains the given word? 3 / 26 Strategy 1: Random Search Randomly select a page until the page containing

More information

Evolving a New Feature for a Working Program

Evolving a New Feature for a Working Program Evolving a New Feature for a Working Program Mike Stimpson arxiv:1104.0283v1 [cs.ne] 2 Apr 2011 January 18, 2013 Abstract A genetic programming system is created. A first fitness function f 1 is used to

More information

Stream ciphers I. Thomas Johansson. May 16, Dept. of EIT, Lund University, P.O. Box 118, Lund, Sweden

Stream ciphers I. Thomas Johansson. May 16, Dept. of EIT, Lund University, P.O. Box 118, Lund, Sweden Dept. of EIT, Lund University, P.O. Box 118, 221 00 Lund, Sweden thomas@eit.lth.se May 16, 2011 Outline: Introduction to stream ciphers Distinguishers Basic constructions of distinguishers Various types

More information

On the Law of Distribution of Energy in the Normal Spectrum

On the Law of Distribution of Energy in the Normal Spectrum On the Law of Distribution of Energy in the Normal Spectrum Max Planck Annalen der Physik vol.4, p.553 ff (1901) The recent spectral measurements made by O. Lummer and E. Pringsheim[1], and even more notable

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms CSE 101, Winter 2018 Design and Analysis of Algorithms Lecture 5: Divide and Conquer (Part 2) Class URL: http://vlsicad.ucsd.edu/courses/cse101-w18/ A Lower Bound on Convex Hull Lecture 4 Task: sort the

More information

1 Normal Distribution.

1 Normal Distribution. Normal Distribution.. Introduction A Bernoulli trial is simple random experiment that ends in success or failure. A Bernoulli trial can be used to make a new random experiment by repeating the Bernoulli

More information

Expected Value II. 1 The Expected Number of Events that Happen

Expected Value II. 1 The Expected Number of Events that Happen 6.042/18.062J Mathematics for Computer Science December 5, 2006 Tom Leighton and Ronitt Rubinfeld Lecture Notes Expected Value II 1 The Expected Number of Events that Happen Last week we concluded by showing

More information

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University CS246: Mining Massive Datasets Jure Leskovec, Stanford University http://cs246.stanford.edu 2/26/2013 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 2 More algorithms

More information

Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay

Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Lecture - 13 Competitive Optimality of the Shannon Code So, far we have studied

More information

Examples of frequentist probability include games of chance, sample surveys, and randomized experiments. We will focus on frequentist probability sinc

Examples of frequentist probability include games of chance, sample surveys, and randomized experiments. We will focus on frequentist probability sinc FPPA-Chapters 13,14 and parts of 16,17, and 18 STATISTICS 50 Richard A. Berk Spring, 1997 May 30, 1997 1 Thinking about Chance People talk about \chance" and \probability" all the time. There are many

More information

Complexity Theory Part I

Complexity Theory Part I Complexity Theory Part I Problem Problem Set Set 77 due due right right now now using using a late late period period The Limits of Computability EQ TM EQ TM co-re R RE L D ADD L D HALT A TM HALT A TM

More information

Method of Frobenius. General Considerations. L. Nielsen, Ph.D. Dierential Equations, Fall Department of Mathematics, Creighton University

Method of Frobenius. General Considerations. L. Nielsen, Ph.D. Dierential Equations, Fall Department of Mathematics, Creighton University Method of Frobenius General Considerations L. Nielsen, Ph.D. Department of Mathematics, Creighton University Dierential Equations, Fall 2008 Outline 1 The Dierential Equation and Assumptions 2 3 Main Theorem

More information

output H = 2*H+P H=2*(H-P)

output H = 2*H+P H=2*(H-P) Ecient Algorithms for Multiplication on Elliptic Curves by Volker Muller TI-9/97 22. April 997 Institut fur theoretische Informatik Ecient Algorithms for Multiplication on Elliptic Curves Volker Muller

More information

The Proof of IP = P SP ACE

The Proof of IP = P SP ACE The Proof of IP = P SP ACE Larisse D. Voufo March 29th, 2007 For a long time, the question of how a verier can be convinced with high probability that a given theorem is provable without showing the whole

More information

Mathematical Logic Part Three

Mathematical Logic Part Three Mathematical Logic Part hree riday our Square! oday at 4:15PM, Outside Gates Announcements Problem Set 3 due right now. Problem Set 4 goes out today. Checkpoint due Monday, October 22. Remainder due riday,

More information

The tape of M. Figure 3: Simulation of a Turing machine with doubly infinite tape

The tape of M. Figure 3: Simulation of a Turing machine with doubly infinite tape UG3 Computability and Intractability (2009-2010): Note 4 4. Bells and whistles. In defining a formal model of computation we inevitably make a number of essentially arbitrary design decisions. These decisions

More information

Analysis of Algorithm Efficiency. Dr. Yingwu Zhu

Analysis of Algorithm Efficiency. Dr. Yingwu Zhu Analysis of Algorithm Efficiency Dr. Yingwu Zhu Measure Algorithm Efficiency Time efficiency How fast the algorithm runs; amount of time required to accomplish the task Our focus! Space efficiency Amount

More information

CSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010

CSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010 CSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010 Computational complexity studies the amount of resources necessary to perform given computations.

More information

2 Exercises 1. The following represent graphs of functions from the real numbers R to R. Decide which are one-to-one, which are onto, which are neithe

2 Exercises 1. The following represent graphs of functions from the real numbers R to R. Decide which are one-to-one, which are onto, which are neithe Infinity and Counting 1 Peter Trapa September 28, 2005 There are 10 kinds of people in the world: those who understand binary, and those who don't. Welcome to the rst installment of the 2005 Utah Math

More information

Math 1270 Honors ODE I Fall, 2008 Class notes # 14. x 0 = F (x; y) y 0 = G (x; y) u 0 = au + bv = cu + dv

Math 1270 Honors ODE I Fall, 2008 Class notes # 14. x 0 = F (x; y) y 0 = G (x; y) u 0 = au + bv = cu + dv Math 1270 Honors ODE I Fall, 2008 Class notes # 1 We have learned how to study nonlinear systems x 0 = F (x; y) y 0 = G (x; y) (1) by linearizing around equilibrium points. If (x 0 ; y 0 ) is an equilibrium

More information

An average case analysis of a dierential attack. on a class of SP-networks. Distributed Systems Technology Centre, and

An average case analysis of a dierential attack. on a class of SP-networks. Distributed Systems Technology Centre, and An average case analysis of a dierential attack on a class of SP-networks Luke O'Connor Distributed Systems Technology Centre, and Information Security Research Center, QUT Brisbane, Australia Abstract

More information

Reading and Writing. Mathematical Proofs. Slides by Arthur van Goetham

Reading and Writing. Mathematical Proofs. Slides by Arthur van Goetham Reading and Writing Mathematical Proofs Slides by Arthur van Goetham What is a proof? Why explanations are not proofs What is a proof? A method for establishing truth What establishes truth depends on

More information

The Inductive Proof Template

The Inductive Proof Template CS103 Handout 24 Winter 2016 February 5, 2016 Guide to Inductive Proofs Induction gives a new way to prove results about natural numbers and discrete structures like games, puzzles, and graphs. All of

More information

1 The Basic Counting Principles

1 The Basic Counting Principles 1 The Basic Counting Principles The Multiplication Rule If an operation consists of k steps and the first step can be performed in n 1 ways, the second step can be performed in n ways [regardless of how

More information

Computer Science 385 Analysis of Algorithms Siena College Spring Topic Notes: Limitations of Algorithms

Computer Science 385 Analysis of Algorithms Siena College Spring Topic Notes: Limitations of Algorithms Computer Science 385 Analysis of Algorithms Siena College Spring 2011 Topic Notes: Limitations of Algorithms We conclude with a discussion of the limitations of the power of algorithms. That is, what kinds

More information

2 Systems of Linear Equations

2 Systems of Linear Equations 2 Systems of Linear Equations A system of equations of the form or is called a system of linear equations. x + 2y = 7 2x y = 4 5p 6q + r = 4 2p + 3q 5r = 7 6p q + 4r = 2 Definition. An equation involving

More information