Efficient communication for the list problem


Efficient communication for the list problem

Joran van Apeldoorn
July 18, 2014
Bachelor thesis

Main supervisor: prof. dr. Harry Buhrman
Supervisors: dr. Leen Torenvliet, prof. dr. Hans Maassen

Korteweg-de Vries Instituut voor Wiskunde

Abstract

This thesis starts with an introduction to communication complexity. After the basics are laid out we move on to a specific problem, the so-called list problem. In this scenario one party needs to communicate a binary string to the other, with the extra help that the receiving party has a list of the possible strings that could be sent. However, what is on the list is unknown to the sender, so multiple rounds of communication will be needed to make use of the list. As shown by Naor et al. [8], multi-round communication can be exponentially better than single-round communication. Furthermore, 2-round communication is almost optimal and 4 rounds suffice to get optimal communication. Our main result is that in this case efficiently calculable protocols suffice. We show this by modifying the protocol found by Naor et al. to use linear maps instead of general functions. After the first two chapters we move into the domain of quantum computation. One chapter gives an introduction to the field, followed by another that looks at some interesting results that arise when using quantum communication for variations of the list problem. We end by discussing the still open questions.

Title: Efficient communication for the list problem
Author: Joran van Apeldoorn, jorants@gmail.com
Main supervisor: prof. dr. Harry Buhrman
Second supervisor: dr. Leen Torenvliet
Third supervisor: dr. Hans Maassen
Second corrector: prof. dr. Jean-Sebastien Caux
Date: July 18, 2014

Korteweg-de Vries Instituut voor Wiskunde
Universiteit van Amsterdam
Science Park 904, 1098 XH Amsterdam

Acknowledgments

Mostly I want to thank my thesis advisers. First of all Harry Buhrman, for suggesting the subject and taking the time to sit down with me time and time again to help me when I got stuck. Secondly Leen Torenvliet: when I first went looking for a subject he introduced me to Harry, and since then, despite not being my main adviser, he has taken the time to come to all the meetings with Harry. Last but not least I would like to thank Hans Maassen, who helped me look at the problem in a different way and helped me to word the problems and solutions in an understandable manner. Finally I also want to thank my friends and family, who, despite studying (art) history, literature or film, still took the time to act like my stories about Alice, Bob and lists were interesting.

Contents

1 Introduction
2 Overview of communication complexity
  2.1 The model
    Protocols
    Bounds and complexity
  2.2 Some variations
    Promise
    Limit the number of rounds
3 The list problem
  3.1 A lower bound
  3.2 Finding a protocol
    2 rounds
    4 rounds
4 Overview of quantum computations
  The qubit
  Multiple qubits
  Manipulating states
    Bit flip
    Phase flip
    Hadamard
    Controlled not
  Quantum teleportation
  Deutsch-Jozsa
  Distributed Deutsch-Jozsa
5 Quantum computations and lists
  Notes on graphs
  Fixed Hamming distance
6 Discussion
  Classical
  Quantum
Popular summary
A Python code for protocol simulation

Chapter 1

Introduction

In daily life communication plays an enormous role. Whether it is asking for the nearest post office, giving a lecture on quantum computing or telling someone you love them, we are constantly exchanging information. It is such a big part of us that it is surprising how bad we can sometimes be at it. To give some examples: most people have that one uncle at family parties who tells way too detailed stories about things you already know, or have read a way too detailed manual when all they wanted to know is which button turns the blender on. Both cases are day-to-day examples of what can go wrong in one-way communication. It is not without reason that all languages have a way of asking questions built into them. If we can tell our uncle that we are just interested in one detail, or call the help desk of our local blender manufacturer to ask how to turn it on, we can save a lot of time, or more precisely, a lot of words.

Our normal language is not very efficient. Luckily computers can do a lot better. In the field of communication complexity we ask ourselves what the most efficient protocol for communicating certain information is. In particular, we will look at the difference between single-round and multi-round communication. Apart from the obvious fact that a one-sided conversation is less efficient, a proof by Naor et al. [8] shows that for a lot of purposes four rounds are all we need to be efficient.

The case we will focus on most is the one where the receiver has a list of possible messages he can get. The general idea is that he can ask a smart but short question, the answer to which is enough to determine which message needs to be sent. A real-world example would be where your friend is going out with some people and you are wondering with whom. Instead of asking for all the names, you could ask which bar they are going to. Since you know the people in question pretty well, this is enough to determine who is going.
The name of a bar is probably a lot shorter than the names of all the people that are going. Or possibly some other night, when you and your friends are out and are reminiscing about old times: instead of retelling the whole past night, just saying "remember the night that we..." will probably be enough. In real life we are not always interested in the shortest communication; just chatting with your friends can be nice. However, computers don't care for just chatting, and the shortest way to get a message across is of great importance in networked computing. Database look-ups based on big keys can possibly be sped up a lot when we account for the fact that only a limited set of keys is present. We call this class of communication problems the list problem.

This thesis consists of two parts, each containing a chapter as an introduction to a field and a chapter looking at the list problem within that field. The first part is about classical communication, the second part about quantum communication.

Notation

We will use the following notation:

- F_2^k is a finite field with 2^k elements. We will also use this notation for the set {0, ..., 2^k - 1} when we don't care about the operations defined on it.
- We will write f(x) = O(g(x)) if and only if for large values of x, f(x) <= M g(x) for a fixed M.
- log will always mean the 2-log; we will write log_k for the k-log and ln for the natural logarithm. When the context implies that the result of the logarithm should be an integer, for example because it is the number of elements in a set or the number of bits, we will assume that the logarithm is rounded upwards.

Chapter 2

Overview of communication complexity

In this chapter we will give a broad overview of communication complexity. We will describe the basic model and go on to discuss possible variations, the main theorems and some interesting examples. For those who have a basic knowledge of the field it should not be a problem to skip ahead to the next chapter. Most information in this chapter is based on the book by Kushilevitz and Nisan [6].

2.1 The model

In the field of communication complexity we are concerned with two parties, normally called Alice and Bob. They both receive some information and have to solve a problem together while keeping the communication to a minimum. In more formal terms, we fix two sets X and Y, the input domains. We also fix a set of solutions Z and a problem f : X × Y → Z. Now Alice will receive an x ∈ X and Bob will receive a y ∈ Y, and their goal is to both know f(x, y) ∈ Z.

2.1.1 Protocols

It is important to note that both parties saw this situation coming and have agreed on a protocol P beforehand. This may sound restricting, but when we think about it, we do the same for communicating with our regular languages: we have to agree on the language we are speaking if we want to communicate. So what is a protocol? We will look at the communication in a bit-by-bit manner. At each point in time the protocol will have to tell us:

1. Is the protocol finished or not?
2. If not, who is to send the next bit?
3. What bit will that be?
4. If the protocol is finished, what is f(x, y)?

These decisions are based on x and y and on the communication up to that point. The communication up to the kth bit can be described by a transcript t_k ∈ F_2^k. Since both parties have to agree on whose turn it is, this decision can only depend on the

transcript, since this is the only mutual information. This gives a function T : D → {Alice, Bob, Done}, where D ⊆ F_2^* is the set of all possible transcripts. After this, if Alice or Bob has to send the next bit, they use P_A : X × F_2^* → F_2 or P_B : Y × F_2^* → F_2 to determine what to send.

While this model is handy for someone who has to implement an algorithm, it is less so for us as algorithm designers. We will view the protocol as a tree. On each internal node there is a function A_i(x) or B_j(y) that gives the next bit. We move left on a zero and right on a one. Each leaf of the tree has a value in Z that is the resulting answer. See Figure 2.1.

[Figure 2.1: An example of a communication protocol. In this case the cost is 4.]

2.1.2 Bounds and complexity

We are now ready to define the communication complexity CC(f) of a problem. We are interested in the worst-case scenario; this is equal to the longest branch in the tree, so we define the cost of a protocol P as the depth of the tree. The complexity CC(f) is the minimum cost over all protocols that compute f. Most of the time it will be hard to determine CC(f) exactly, and we will only be able to give an upper bound in the form of a protocol and a lower bound due to a minimal transfer of information being needed. If we can get the lower bound to match the cost of a protocol we will have found an optimal protocol.

Example 2.1. Let us look at the following problem: X = Y = F_2^n, Z = F_2 and f = PAR denotes the parity of the concatenation xy (the number of ones modulo 2). We find two bounds:

Since both parties need to know PAR(x, y) and the function really depends on both inputs, both need to send at least one bit to the other: CC(PAR) ≥ 2.

We can use the following protocol to compute PAR(x, y):
1. Alice computes a = PAR(x) and sends this to Bob.
2. Bob computes b = PAR(y) and sends this to Alice.
3.
Now both compute a ⊕ b to find PAR(x, y).

This protocol always takes 2 bits, so CC(PAR) ≤ 2.
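The two-round parity protocol above can be sketched in Python (the function names are my own, not taken from the thesis's simulation appendix); bit strings are lists of 0/1:

```python
# Sketch of the parity protocol: each party sends one bit.
def parity(bits):
    """Number of ones modulo 2."""
    return sum(bits) % 2

def parity_protocol(x, y):
    """Alice holds x, Bob holds y; returns (PAR(x, y), bits exchanged)."""
    a = parity(x)          # round 1: Alice -> Bob, 1 bit
    b = parity(y)          # round 2: Bob -> Alice, 1 bit
    return (a + b) % 2, 2  # both parties now know the answer locally

assert parity_protocol([1, 0, 1], [1, 1, 0]) == (0, 2)
```

The cost is always exactly 2 bits, matching the lower bound.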

[Figure 2.2: A protocol for the parity problem; the depth, and so the cost, is 2.]

We can now conclude that CC(PAR) = 2 and that the protocol is optimal. See Figure 2.2 for a tree representation of the protocol.

The tree view of protocols allows us to set some general lower bounds, for example:

Lemma 2.2. Let Z be the range of f (Z = f(X, Y)). Then CC(f) ≥ log |Z|.

Proof. Since all values in Z are possible outcomes, any protocol P will need to have at least |Z| leaves, one for each outcome. A binary tree with |Z| leaves has depth at least log |Z|.

2.2 Some variations

Now that we have laid out the basic model, we can start tuning it to our needs by making some slight modifications.

2.2.1 Promise

We want to introduce the concept of a promise; this means that Alice and Bob get some extra information about their x and y beforehand. If we were talking about numbers, a very simple promise could be that x + y is even. More precisely, we don't allow all the pairs (x, y) ∈ X × Y to be given to Alice and Bob; we only allow the pairs in a specific subset D ⊆ X × Y that is known to both parties. Let us discuss an example we will encounter again later in Chapter 4 when we look at quantum communication. First, however, we need a simple definition.

Definition 2.3 (Hamming distance). The Hamming distance d_H(x, y) of two bit strings is the number of bits they differ on.

Example 2.4. Again we have X = Y = F_2^n. However, this time we look at the subset of X × Y such that d_H(x, y) = 0 or d_H(x, y) = n/2. If d_H(x, y) = 0 the strings are equal; if d_H(x, y) = n/2 we call the strings balanced. The goal is to determine which of the two is the case. If we needed to determine the distance between two arbitrary strings we would need to send over all the bits, but since we have our promise, we only need to send n/2 + 2 bits.
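A sketch of this promise protocol, under my reading of the bit count: balanced strings differ on exactly n/2 of the n positions, so any n/2 + 1 positions must contain a difference. Alice therefore sends her first n/2 + 1 bits and Bob answers with 1 bit:

```python
# Promise: d_H(x, y) is either 0 or n/2. Alice's first n/2 + 1 bits
# expose a difference whenever the strings are balanced.
def promise_protocol(x, y):
    n = len(x)
    prefix = x[: n // 2 + 1]           # Alice -> Bob: n/2 + 1 bits
    equal = prefix == y[: n // 2 + 1]  # Bob -> Alice: 1 bit
    return "equal" if equal else "balanced"

assert promise_protocol([0, 1, 1, 0], [0, 1, 1, 0]) == "equal"
assert promise_protocol([0, 1, 1, 0], [1, 0, 1, 0]) == "balanced"  # d_H = 2 = n/2
```

On inputs violating the promise the output is meaningless, which is exactly what the promise model allows.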

2.2.2 Limit the number of rounds

So far we have looked at communication on a bit-by-bit basis; this is, however, not very realistic. In real life we normally send packages of information. We will define a round as a sequence of bits all sent by one person. In real-world applications we can benefit a lot from limiting the number of rounds, with as most extreme case a maximum of one round.

Definition 2.5. CC_k(f) is the communication complexity when we limit ourselves to protocols using a maximum of k rounds. CC(f) denotes the complexity with an unlimited number of rounds.

One important note is that problems f(x, y) for which the answer should be known to both parties and that are solvable with a 1-round protocol can only depend on one of the variables, so either f(x, y) = f'(x) or f(x, y) = f'(y). This means that if we are interested in 1-round communication, we will often require only one of the parties to know the answer. We can now prove the following relation:

Lemma 2.6. CC(f) ≥ log CC_1(f)

Proof. Without loss of generality we say that the one-round protocol is from Alice to Bob. First we look at a multi-round protocol with cost k. If we view this protocol as a tree, it will have depth k and so it will have a maximum of 2^k − 1 internal nodes. In particular, there are fewer than 2^k − 1 nodes on which Alice has to send a bit. Since the bit for each node only depends on Alice's input, she can compute the bits for all the nodes beforehand. Now a simple one-round protocol becomes possible: Alice computes the bits for all her nodes and sends these to Bob. There are fewer than 2^k − 1 nodes, so this takes fewer than 2^k − 1 bits. Using this information, Bob can mimic the whole multi-round protocol and find the answer without any further communication. This means that

CC_1(f) ≤ 2^k − 1 = 2^{CC(f)} − 1 ≤ 2^{CC(f)}.

By taking the log on both sides we find the lemma.
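The simulation argument above can be sketched in Python (the tree encoding and names are my own, not the thesis's appendix code). A node is ("A", f, left, right), ("B", g, left, right) or ("leaf", answer), where f takes Alice's input and g takes Bob's:

```python
def alice_message(tree, x):
    """Alice evaluates her function at every node she owns, regardless of
    which path would actually be taken, and sends all those bits at once."""
    bits = {}
    def walk(node, pos):               # pos: heap-style node index
        if node[0] == "leaf":
            return
        if node[0] == "A":
            bits[pos] = node[1](x)
        walk(node[2], 2 * pos)         # left child  (bit 0)
        walk(node[3], 2 * pos + 1)     # right child (bit 1)
    walk(tree, 1)
    return bits                        # at most 2^depth - 1 bits

def bob_simulates(tree, bits, y):
    """Bob replays the whole multi-round protocol locally."""
    node, pos = tree, 1
    while node[0] != "leaf":
        b = bits[pos] if node[0] == "A" else node[1](y)
        node, pos = (node[2], 2 * pos) if b == 0 else (node[3], 2 * pos + 1)
    return node[1]

# The parity protocol as a tree: Alice announces PAR(x), Bob's node then
# selects the leaf holding PAR(x) xor PAR(y).
par = lambda s: sum(s) % 2
tree = ("A", par,
        ("B", par, ("leaf", 0), ("leaf", 1)),
        ("B", par, ("leaf", 1), ("leaf", 0)))
```

Here `bob_simulates(tree, alice_message(tree, x), y)` reproduces the multi-round answer after a single message from Alice, at the cost of exponentially many bits.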
There are also some examples of problems where there is no difference between single- and multi-round protocols:

Example 2.7. Again we have X = Y = F_2^n. The problem to solve is the equality function, so the goal is to determine whether x = y or not. We will only consider the case where just Bob needs to know the answer. The proof consists of two parts:

n ≤ CC(eq): We look at the set of inputs (α, α) ∈ X × Y. Each of these should have a different transcript: if (α, α) and (β, β) had the same transcript, then (α, β) would also produce that transcript, seeing that at no point in the protocol Alice or Bob would notice the difference between the inputs (we will look at this more precisely in the next chapter). This is problematic since the same transcript should produce the same answer; we conclude that the transcripts for all the (α, α) pairs should be different. Since there are 2^n of these inputs there should be at least 2^n leaves in the tree, so the tree is at least of depth n.

CC_1(eq) ≤ n: Alice can just send her whole input.

We use CC(eq) ≤ CC_1(eq), so n ≤ CC(eq) ≤ CC_1(eq) ≤ n, to conclude CC(eq) = CC_1(eq) = n.

Chapter 3

The list problem

In this chapter we will consider the following problem:

Problem statement. Bob receives a list of d bit strings in F_2^n. Alice receives one of these strings; their goal is for Bob to know which string she received.

In the notation we used in Chapter 2, the problem is the function list(x, y) = x, with the promise on the inputs that x ∈ y. As in the last chapter, we will write CC_k(list) for the communication complexity of this problem when k rounds are allowed.

We will consider the difference between single- and multi-round communication. As found by Naor et al. [8], multi-round communication can be exponentially better than one round. Also, four rounds are almost as good as the theoretical minimum. All the protocols they used to prove this used random functions. Random functions can be very hard to calculate and store, so while their proof showed the existence of protocols that are very efficient when it comes to communication, those protocols had a big trade-off when it came to computational complexity. We will show that the same methods work if we only allow efficiently calculable functions. For this we will look at two sets of functions. The first is the set of functions that give the remainder after division by a prime number. The prime numbers will be of polynomial size in the variables n and d, so calculating the remainder will be an efficient operation. The second set of functions consists of all linear maps. Applying a linear map is the same as calculating a matrix-vector product; since the input and output are both small, this too will be an efficient operation.

3.1 A lower bound

Before we go on to find the protocols, we first prove some lower bounds. This will allow us to compare the protocols we find to the lower bounds and draw some conclusions on their efficiency.

Theorem 3.1. CC_1(list) = n

Proof. It is clear that the one message that can be sent has to go from Alice to Bob.
Now assume that CC_1(list) = k < n, so that there is a protocol that uses only k bits; we will show that this can never be a valid protocol for all inputs. Such a protocol is a function f : F_2^n → F_2^k that gives Alice's message for each input. Since

k < n, also |F_2^k| < |F_2^n|. So there have to be two inputs x_1 and x_2 that result in the same message. This means that if Bob had a list containing both x_1 and x_2, the protocol would be inconclusive, since he would never be able to distinguish between the two. We conclude that CC_1(list) ≥ n. Since Alice can just send her n-bit input we find that CC_1(list) = n.

Lemma 3.2. CC(list) ≥ log d

Proof. If we view the communication as a tree, we know that there are at least d leaves, one for every possible outcome Bob can expect. A binary tree with d leaves needs to have at least a depth of log d.

Lemma 3.3. CC(list) ≥ log n

Proof. We use Lemma 2.6 and that CC_1(list) = n.

Theorem 3.4. CC(list) ≥ max{log d, log n}

Proof. Combine Lemma 3.2 and Lemma 3.3.

Remark. It is important to note that if d is of the same order as 2^n we will never be able to get a big speedup, since log d will be of order n. In general we will assume that d ∈ O(n^k) for some k.

Example 3.5. We look at the simple case where n = 8 and d = 2, so Bob receives a list of two 8-bit strings and Alice receives the first of the two. If we only allow one round of communication, Alice will need to send her full n = 8 bits, as explained above. However, with multiple rounds Bob can communicate information about his list back to Alice. He could, for example, tell her the first index on which the two strings differ; Alice then sends back the bit she has on that index. This would take only log n + 1 = 4 bits.

The exponential gain in this specific case brings us to ask if we could find a general multi-round protocol that gives an exponential gain. We will get back to this in Section 3.2. First we will prove a better lower bound. The proof was given by Harry Buhrman and is not yet published [2]. We will start with a simple lemma about the transcripts of communication.
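The two-round trick from the n = 8, d = 2 example above can be sketched as follows (helper names are my own):

```python
# Sketch of the d = 2 list protocol: Bob points at the first differing
# index, Alice answers with her bit at that index.
def first_difference(s, t):
    """Bob: index of the first bit on which his two list entries differ."""
    return next(i for i in range(len(s)) if s[i] != t[i])

def list_protocol_d2(x, bobs_list):
    s, t = bobs_list
    i = first_difference(s, t)   # Bob -> Alice: log n bits
    bit = x[i]                   # Alice -> Bob: 1 bit
    return s if s[i] == bit else t

x = [0, 1, 1, 0, 0, 1, 0, 1]
other = [0, 1, 0, 0, 1, 1, 0, 1]
assert list_protocol_d2(x, [x, other]) == x   # Bob recovers Alice's string
```

For n = 8 this costs log n + 1 = 4 bits instead of the 8 bits a one-round protocol needs.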
The more general version of this lemma introduces the concept of combinatorial rectangles, a very strong tool for proving lower bounds in communication complexity that we did not get to in Chapter 2. We will, however, use the following property:

Lemma 3.6. [6] If for a protocol P two inputs (x_1, L_1) and (x_2, L_2) result in the same transcript of communication, then (x_2, L_1) will also produce that transcript.

Proof. If we look at P as a binary tree, then the transcript is a path through this tree. We will have to show that the path is the same for (x_2, L_1) as for the other two inputs. At any node along the path of (x_2, L_1) there are two possibilities:

Alice sends the next bit. As seen in Chapter 2, the bit she will send is A_i(x_2), where A_i is the function belonging to that node. So she will send the same bit for (x_2, L_1) as for (x_2, L_2), which in turn is the same as for (x_1, L_1), and we will move along the path in the same way for all three inputs.

Bob sends the next bit. Just as before, the bit he will send is B_j(L_1). So he will send the same bit for (x_2, L_1) as for (x_1, L_1), which in turn is the same as for (x_2, L_2), and we will move along the path in the same way for all three inputs.

We can now conclude that the path, and so the transcript of the communication, will be the same for all three inputs.

Using the last lemma and the following definition of a cover-free family we will be able to prove a stronger lower bound.

Definition 3.7. Let F = {F_1, ..., F_N} be a family of sets. We call F a d-cover-free family if for every subset S = {F_{α_1}, ..., F_{α_d}} ⊆ F of d sets and every F ∈ F \ S we have F ⊄ F_{α_1} ∪ ... ∪ F_{α_d}. In other words, no union of d sets from the family covers any other set in the family.

Lemma 3.8. [5] If F = {F_1, ..., F_N} is a d-cover-free family, then the ground set ∪F contains at least (d^2 / (4 log d)) log N (1 + o(1)) elements.

Theorem 3.9. [2] CC(list) ≥ 2 log(d − 1) + log n − log log(d − 1) − 3

Proof. Let F_x be the set of all possible transcripts of communication for a protocol if Alice receives x as input. We will show that {F_x | x ∈ F_2^n} is a (d − 1)-cover-free family; we can then use Lemma 3.8 to prove the lower bound from the theorem. We start by picking any d − 1 sets F_{x_1}, ..., F_{x_{d−1}} and will show that for any other set F_{x_0} we have F_{x_0} ⊄ F_{x_1} ∪ ... ∪ F_{x_{d−1}}, so that {F_x | x ∈ F_2^n} is a (d − 1)-cover-free family. To do so we use the list L_0 = {x_0, ..., x_{d−1}}.
We claim that the transcript generated by (x_0, L_0) is in F_{x_0} but not in any of the other sets F_{x_i}. We show this by contradiction: assume it is in F_{x_i}; then there is some L' such that the transcript for (x_i, L') is the same as that for (x_0, L_0). Now by Lemma 3.6 we find that the transcript for (x_i, L_0) is the same as that for (x_0, L_0). But this would mean that Bob cannot distinguish between x_0 and x_i if he has received L_0, making the protocol invalid. We conclude that the transcript of (x_0, L_0) cannot be in any of the other F_{x_i}, so {F_x | x ∈ F_2^n} is a (d − 1)-cover-free family.

If there are k distinct transcripts, then there has to be at least one transcript that is longer than log k. Using this and Lemma 3.8 (with N = 2^n) we find:

CC(list) ≥ log( ((d − 1)^2 / (4 log(d − 1))) n (1 + o(1)) ) ≥ 2 log(d − 1) + log n − log log(d − 1) − 3
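As a quick numeric sanity check, reading the bound as 2 log(d−1) + log n − log log(d−1) − 3 with the 2-log of the notation section (the sample sizes n = 1024, d = 32 are my own choice):

```python
# Evaluate the lower bound; for d polynomial in n it stays far below the
# n bits a one-round protocol needs, so exponential multi-round savings
# are not excluded by this bound.
from math import log2

def list_lower_bound(n, d):
    return 2 * log2(d - 1) + log2(n) - log2(log2(d - 1)) - 3

n, d = 1024, 32
assert list_lower_bound(n, d) < n
```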

3.2 Finding a protocol

In the last section we found an optimal protocol for one-round communication and a lower bound on multi-round communication; we will now discuss a few multi-round protocols that save a lot of communication.

3.2.1 2 rounds

In this section we take a look at 2-round protocols. In all cases that we will look at, Bob will ask Alice to compute the value of her input under a specific function and send the result back to him. He will pick a function so that all entries in his list have a different image under the function, allowing him to use Alice's answer to recognize her element in his list. As shown by Naor et al. [8], we can pick the possible functions in a way that allows Bob to describe each with a small identifier and so that the result will be sufficiently small too. They, however, looked at random functions and did not care about the computational complexity nor the memory needed to store such a function. We will prove that the same results hold for efficiently calculable functions, starting with the remainder after division.

Definition 3.10. We say that two integers x_1 and x_2 collide under a prime number p if x_1 ≡ x_2 mod p. A prime number p is said to scatter a list L if no two integers in L collide under p.

Remark (Chinese remainder theorem). Given two distinct integers x, y below 2^n, there can be at most log_c 2^n = n / log c primes p_i > c such that x and y collide under p_i. This is because if there were k ≥ n / log c such primes, we would get a contradiction. On one hand, by the Chinese remainder theorem we would find x ≡ y mod p_1 p_2 ⋯ p_k. On the other hand:

x, y < 2^n = c^{log_c 2^n} ≤ c^k < p_1 p_2 ⋯ p_k

But this would mean that x = y, which is not the case.

Lemma 3.11. For every list L of d integers below 2^n, there exists a prime number p ≤ 2nd^2 that scatters the list.

Proof (based on a proof from Kolmogorov complexity [7]). We consider primes between c and c^2. Let us first fix x_i and x_j from our list (with i ≠ j).
Then there are at most log_c 2^n = n / log c primes p between c and c^2 with x_i ≡ x_j mod p. If we do the same for all (1/2)d(d − 1) pairs in our list we find that there are at most (1/2)d(d − 1) · n / log c primes that cause a collision in our list. By the prime number theorem there are at least c^2/ln c^2 − c/ln c primes between c and c^2; we can approximate this from below by c^2 / log c^2. Now, taking c^2 = 2nd^2, we have 2nd^2 / log(2nd^2) primes, of which at most (1/2)d(d − 1) · 2n / log(2nd^2) < nd^2 / log(2nd^2) will not scatter the list. We conclude that at least one prime below 2nd^2 will suffice.
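The search that this lemma makes possible can be sketched directly (helper names are my own; trial division is fine at this polynomial scale):

```python
# Find the smallest prime below 2*n*d^2 under which all list entries
# have distinct remainders; the lemma guarantees such a prime exists.
def is_prime(p):
    if p < 2:
        return False
    i = 2
    while i * i <= p:
        if p % i == 0:
            return False
        i += 1
    return True

def scattering_prime(lst, n):
    d = len(lst)
    for p in range(2, 2 * n * d * d + 1):
        if is_prime(p) and len({x % p for x in lst}) == d:
            return p
    return None

# three 8-bit integers: 7 already separates 5, 17 and 200 (5, 3, 4 mod 7)
assert scattering_prime([5, 17, 200], 8) == 7
```

Bob runs exactly this search over his list before the first round of the protocol below.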

The protocol. Using the above we can construct a very simple 2-round protocol:

1. Bob calculates which prime scatters his list and sends this to Alice using log 2nd^2 bits.
2. Alice calculates her value modulo this number and sends that back, also using log 2nd^2 bits.

The total cost is then 2 log 2nd^2 = 4 log d + 2 log n + 2 bits. This result is significant as it proves that we can actually get close to the lower bound we found in the last section. In fact we can do even better for two-round communication: in their proof Naor et al. found a 2-round protocol that used only 3 log d + log n + 4 bits. Using a similar proof we find that we can do almost as well using the remainder under prime number division instead of general functions:

Lemma 3.12. There exists a set P of nd primes below 4nd^2 such that for each list of size d at least one prime in P will scatter that list.

Proof. Pick a random set P of nd prime numbers below 4nd^2. As shown in the last proof, fewer than nd^2 / log(2nd^2) of the primes below 4nd^2 won't scatter a specific list. This means that a random prime has a less than 1/2 chance of not scattering a specific list. If we take the nd primes of our set P, we have a less than 2^{−nd} chance that none of them will scatter the list. In total we have

(2^n choose d) = 2^n (2^n − 1) ⋯ (2^n − d + 1) / d! < 2^{nd}

possible lists. Now by the union bound over all lists, the chance that there is a list that none of the primes scatter is strictly less than 1, so the chance that our set P will scatter all lists is bigger than 0. We conclude that there has to be a set P that has the desired properties.

The protocol. The protocol that follows from this is almost the same as last time. Bob and Alice fix a set P with the properties described above in advance.

1. Bob calculates which prime in P scatters his list and sends its index in the set P to Alice in log nd bits.
2. Alice calculates her value modulo the corresponding prime number and sends that back, using log 4nd^2 bits.

The total cost this time is log nd + log 4nd^2 = 3 log d + 2 log n + 2. The only difference with the result by Naor et al. is an extra log n term; this is because the size of the primes does not only depend on the size of the list, but also on the size of the integers. However, in most cases the 3 log d term will be dominant and the extra log n bits will not make a significant difference.

A more significant difference can be found if we look at the computational complexity. Where we used the remainder under division, Naor et al. used random functions f : {0,1}^n → {0,1}^{log(2d^2)}. A proof from Kolmogorov complexity shows that a random function in general has high circuit complexity [7]. For us this means that writing such a function down in closed form is not an option if we still

want to be efficient. The only other way to store the function is by storing a list of all function values in memory. These are 2^n values of length log(2d^2) for each of the nd functions, resulting in O(nd 2^n) memory usage. For the primes in the last protocol we only need to use log 4nd^2 bits for each of the nd primes, resulting in O(nd log(4nd^2)) bits, an exponential improvement. If we are really short on memory we could use the first protocol given above; this will cost an extra log d bits of communication, but it will bring the memory footprint down to O(log 4nd^2).

The last protocol works well once the set P of primes has been found. In the worst case Bob will have to test all nd primes against all d elements in his list, resulting in nd^2 operations. However, finding the set P of nd primes is more problematic, since each possible set of primes has to be tested for each list:

(2^n choose d) (4nd^2 choose nd) ∈ Ω(2^n)

Luckily, once such a set is found for a certain n and d it can be stored and used forever.

At this point Naor et al. go on to find a 4-round protocol that brings the communication down to 2 log d. Sadly the proof they use doesn't work well when we use the remainder under prime numbers instead of random functions. Luckily, we can use random linear maps instead of prime numbers; these are also efficiently computable. Before we look at the 4-round protocol with linear maps, we will first prove that they can replace the prime numbers in our 2-round protocol.

Definition 3.13. In agreement with before, we say that two integers x_1 and x_2 collide under a linear map f if f(x_1) = f(x_2). A linear map f is said to scatter a list L if no two integers in L collide under f.

Lemma 3.14. There exists a set M of nd linear maps f : {0,1}^n → {0,1}^{log(d^2)+1} such that for each list of size d at least one map in M will scatter that list.

Proof. We will look at a map as a ((log(d^2) + 1) × n)-matrix with entries from F_2.
We can write an n-bit number as an n-dimensional vector; after applying the map we get a (log(d^2) + 1)-dimensional vector. As with Lemma 3.11 we will first look at just two elements, x and y, and a single random linear map. For x and y to collide under f they have to collide on every element of the resulting (log(d^2) + 1)-dimensional vector. This means that we can look at one row of f at a time. Such a row is just an n-dimensional vector, and the resulting element after applying the map to x or y is the inner product with x or y. Let's call the row R; we are interested in the probability of it causing x and y to collide on the resulting element, or in other words, the chance that R · x = R · y. Since x and y are different, there is at least one index i such that x_i ≠ y_i. Without loss of generality, assume that x_i = 1 and y_i = 0. If R · x = R · y for a row R, then we can find a row R' for which R' · x ≠ R' · y by flipping the ith bit in R. This is because R' · y = R · y, since y is zero on the ith bit, but R' · x ≠ R · x, since x is one on the changed bit. This means that for every row that causes a collision, there is one that does not. The reverse is also true, meaning that exactly half the rows cause a collision. For a collision to happen on all bits, all of the log(d^2) + 1 rows have to fall in this half, leaving only (2^{n−1})^{log(d^2)+1} of the (2^n)^{log(d^2)+1} linear maps; in other words, one

in 2^{log(d^2)+1} = 2d^2 maps. Now, just as before with the prime numbers, we have fewer than d^2 pairs in a list, meaning that fewer than d^2 in 2d^2 maps will cause a collision, and thus we get a probability of less than 1/2 that a random linear map will cause a collision for a specific list. Now we continue just as in Lemma 3.12: we can look at a set M of nd random linear maps; the probability that all of them cause a collision will be less than 2^{−nd}. There are again fewer than 2^{nd} lists, so by using the union bound we find that the probability of all maps in M causing a collision for some list is less than 1. We conclude that there is a set M of nd linear maps such that for each list there will always be a map in M that will not cause a collision.

The last lemma shows that we can use linear maps instead of prime numbers in the two-round protocol. Furthermore, since the size of the function value no longer depends on n, it allows us to bring the number of bits down by another log n bits. This makes our efficiently calculable protocol just as good when it comes to communication as the one found by Naor et al.

3.2.2 4 rounds

The 2-round protocol found by Naor et al. and the efficient variation we made on it in the last section showed that it is possible to get very close to the lower bound found in Theorem 3.9; our best 2-round protocol was only a factor 3/2 higher. Naor et al. also found a 4-round protocol that uses only 2 log d + log n + 5 bits [8]. In this section we will show that, just like in the 2-round case, an efficiently computable protocol can be used with the same amount of communication. We will again do this by replacing random functions by linear maps.

Definition 3.15. We say a set of integers collides under a map f if all elements in the set are mapped to the same value by f. A largest collision under f in a list L is a subset of L that collides and for which there is no larger subset of L that also collides.
The largest number of collisions in L is the size of such a set. We will abbreviate largest number of collisions by LNC.

The protocol, as found by Naor et al., is as follows:

1. Bob finds a map f : F_2^n → F_2^{log d} in a previously agreed upon set of size nd, with an LNC less than log d for this list, and sends an identifier of this map to Alice in log(nd) bits.
2. Alice calculates the result of her value under this map and sends that back in log d bits.
3. Bob can now filter out all but log d possible elements from his list, since there are at most log d elements that would have been mapped to the value Alice sent him.
4. They use the 2-round protocol with 3 log log d + log n bits to send Alice her input.

The proof mostly consists of showing that for a fixed list there is a probability of at least 1/2 that a random map has an LNC less than log d. After this we continue like in the 2-round case: we take nd maps, so the probability of them all having an LNC bigger than log d will be less than (1/2)^{nd}. There are less than 2^{nd} lists, so by the union bound the probability that there is a list such that all maps in a set of nd random maps have a too high LNC will be less than 1. We conclude that there has to be a set so that for every list, at least one map will not send more than log d elements to the same value. We will now first show how Naor et al. have proven this for random, not necessarily linear, maps.

Random functions

They looked at maps f : F_2^n → F_2^{log d}. We are interested in the probability of more than log d elements being mapped to one value under a random map. It will be enough to look at the probability of log d + 1 elements colliding, since if more elements collide, then there is always a subset of log d + 1 elements that collides. For a list of size d there are (d choose log d + 1) subsets of size log d + 1 that could collide. There are d different values on which such a subset can collide. The probability of such a subset colliding on one given value is (1/d)^{log d + 1}, since all the elements have to be sent to the same value. The resulting probability by the union bound will be

(d choose log d + 1) · d · (1/d)^{log d + 1} < d / (log d + 1)! < 1/2

They conclude that the chance that the LNC is higher than log d is under 1/2.

Example. The main idea of the last proof was that for two elements to collide, the chance was 1/d; for three it would be 1/d². In other words, each added element would bring the probability down by a factor 1/d. This approach will not work for prime numbers; in the last step we use that the event of each element colliding with the previous ones is stochastically independent of the earlier collisions. However, with our primes we can find a list for which this is not true.
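The balls-in-bins estimate for random functions above is easy to check empirically. The following is an illustrative sketch; the list size d and the number of trial maps are arbitrary demo values, not the protocol's parameters:

```python
import math
import random

def max_bucket(d, rng):
    # hash d list elements into d values with a uniformly random function
    buckets = [0] * d
    for _ in range(d):
        buckets[rng.randrange(d)] += 1
    return max(buckets)  # the LNC of this random map

rng = random.Random(0)
d = 1024  # so log d = 10
# each random map has LNC <= log d with probability > 1/2,
# so among a few random maps we expect to find a good one quickly
best = min(max_bucket(d, rng) for _ in range(10))
print(best)
```

In a typical run the best map's largest bucket stays well below log d, in line with the union-bound calculation.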
Since two elements collide under a prime number if their difference contains that prime as a factor, we could create a list so that all the differences contain the same prime factors, for example:

4, 10, 16

If we now take a random prime number and see that 4 and 10 collide, then, since their difference is 6, we know the prime is either 2 or 3. However, the difference between 4 and 16 is 12, which contains both possibilities as a factor. So for this list, if the first two elements collide, then all elements collide. This is clearly not stochastically independent, so the proof given by Naor et al. will not work.

Example. A similar problem arises with linear maps. If we look at the following list:

0000
1000
0100
1100

20 CHAPTER 3. THE LIST PROBLEM Now assume that we have a map f for which the first three elements collide. We know that 0000 will go to zero for every map, so for the first two to collide, all rows in our map should be of the form 0. Because the third element should also go to zero we know that all rows look like 00. However, now the fourth element already goes to zero and thus collides with the rest, again showing that stochastic independence does not hold. We will have to find an other approach to prove that linear maps will not cause to many collisions on one value. The solution comes from a paper by Alon et al. [1]. Their proof showed that for a random linear map to F log d we have P (LNC > C log d log log d) < 1 3 with C a constant. We will present the main proof here, we will however not recite their whole paper again and so will not prove the following two Lemmas here. Lemma [1] For every ɛ > 0 there is a constant c ɛ such that if we take a subset L of F n and an integer k > 0 so L c ɛ k k : for a uniform random linear map or affine map T : F n F k we get: P (T (L) = F k ) 1 ɛ Lemma [1] let L be a finite subset of F n with density L n = 1 α < 1 and let t be an integer so 0 t < n. Then for a uniform random linear map T : F n F t we get: P (T (L) F t ) α n t log t log log 1 α Overview of the proof We want to prove that P (E 1 ) is small, where E 1 stands for the event that a linear map has a too high LNC. To do this we will introduce an other event E so that P (E ) is small but P (E E 1 ) is large. Before we go into the details of E 1 and E there is an important note on how we will pick a map h : F n F log d. We will do so through an intermediate space F l with l log d by picking two linear maps h 1 : F n F l and h : F l F log d. Since both maps are uniformly chosen at random the result h = h h 1 will be so too and it will also be linear. 
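The two-stage construction h = h_2 ∘ h_1 is easy to express with 0/1 matrices over F_2; the dimensions below are small demo values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, l, m = 16, 8, 4   # demo dimensions for F_2^n -> F_2^l -> F_2^m

h1 = rng.integers(0, 2, size=(l, n))  # uniformly random linear map F_2^n -> F_2^l
h2 = rng.integers(0, 2, size=(m, l))  # uniformly random linear map F_2^l -> F_2^m
h = (h2 @ h1) % 2                     # the composition is again a linear map

x = rng.integers(0, 2, size=n)        # an n-bit input as a 0/1 vector
# applying h in one step agrees with applying h1 and then h2
assert np.array_equal((h @ x) % 2, (h2 @ ((h1 @ x) % 2)) % 2)
```

All arithmetic is reduced mod 2, so matrix multiplication over the integers followed by `% 2` implements composition over F_2.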
It is also important to note that we will now first prove everything for a list of size d log d, as the original article does the same. It is not hard to see, however, that if for a random linear map and a list of size d log d the LNC is low enough with high probability, then for a list of size d the LNC will be low enough with at least the same probability. Now for a fixed list L we can define the event E_1 as:

∃α ∈ F_2^{log d} : |h^{−1}(α) ∩ L| > t

where t is the maximally allowed number of collisions. We now define E_2 as:

h_2(F_2^l \ h_1(L)) ≠ F_2^{log d}

Lemma 3.20. If k = 2^l / (d log d) > 1, we have

P(E_2) ≤ (1/k)^{log k + log log k}

Proof. We start by rewriting E_2:

∃α ∈ F_2^{log d} : h_2^{−1}(α) ⊆ h_1(L)

Now fix h_1. We now use Lemma 3.19 on F_2^l \ h_1(L) and a uniformly chosen h_2. For its density we find

|F_2^l \ h_1(L)| / 2^l = 1 − |h_1(L)| / 2^l = 1 − 2^{−α}

and so

2^{−α} = |h_1(L)| / 2^l ≤ |L| / 2^l = (d log d) / 2^l = 1/k

Applying the Lemma we find that

P(E_2) ≤ 2^{−α(l − log d − log log d − log log(1/(1−2^{−α})))} ≤ (1/k)^{log k + log log k}

This is so for any fixed h_1, so it is also true if h_1 is random.

Lemma 3.21. If t > c_{1/2} (l − log d) 2^l / d, with c_{1/2} as in Lemma 3.18, then

P(E_2 | E_1) ≥ 1/2

Proof. Fix an h_1 for which E_1 holds and fix any full-rank h_2. We will show that even with h_2 fixed the probability is still at least 1/2. Now since E_1 holds for h there is a set S ⊆ L of size at least t that collides on an element α ∈ F_2^{log d}. Now define D = h^{−1}(α) and A = h_2^{−1}(α). We will now consider the distribution of h_1, restricted to D. As shown in the original paper, this is the same as the distribution under a random affine or linear map. For E_2 to happen we will need A ⊆ h_1(S). h_2 is onto, so |A| = 2^l / d. Due to E_1 holding for h we have

|D ∩ L| ≥ t > c_{1/2} (l − log d) 2^l / d

But this is exactly the premise of Lemma 3.18 with k = log(2^l / d) = l − log d. So by applying the Lemma we find that P(E_2 | E_1) ≥ 1/2.

We are now ready to prove that P(E_1) is small.

Lemma 3.22. There is a constant C so that if we map a set of size d log d to F_2^{log d}:

P(LNC > C log(d) log log(d)) ≤ 1/32

Proof. Let l = log d + log log d + 2. If we take

k = 2^l / (d log d) = 2^{log d + log log d + 2} / (d log d) = 4 d log(d) / (d log d) = 4 > 1

and note that

2^l / d = 2^{log d + log log d + 2} / d = 4 log d

we find that

c_{1/2} (l − log d) 2^l / d = c_{1/2} · 4 log(d) · log(4 log d) = c_{1/2} · 4 log(d) (2 + log(log d)) < c_{1/2} · 4 log(d) (3 log(log d)) = 12 c_{1/2} log(d) log log(d)

So by setting t = 12 c_{1/2} log(d) log log(d) we can use Lemma 3.21 to find that

P(E_2 | E_1) ≥ 1/2

and so that

P(E_1) = P(E_1 | E_2) P(E_2) / P(E_2 | E_1) ≤ P(E_2) / P(E_2 | E_1) ≤ 2 P(E_2)

And use Lemma 3.20 to find the final probability:

P(E_1) ≤ 2 P(E_2) ≤ 2 (1/k)^{log k + log log k} = 2 (1/4)^{log 4 + log log 4} = 1/32

By defining C := 12 c_{1/2} we get t = C log(d) log log(d), and so E_1 will be the event from the theorem and the probability will be lower than 1/32.

Everything is now set for proving that a set of nd linear maps will suffice in the 4-round protocol above.

Theorem 3.23. There exists a set M of nd linear maps to F_2^{log d} so that for every list L containing d different n-bit integers, there is at least one map f ∈ M so that |f^{−1}(x) ∩ L| ≤ C log d log log d for each x, with C a constant.

Proof. Using Lemma 3.22 we find that a random linear map f : F_2^n → F_2^{log d} has a probability of less than 1/32 for the LNC to be larger than C log d log log d when we map d log d integers. It is trivial that the same will hold for the d integers in one of our lists. If we now pick nd linear maps randomly, we will have a probability of (1/32)^{nd} that they all have an LNC larger than C log d log log d for a fixed list. There are (2^n choose d) lists, so using the union bound:

P(all maps in M have a too large LNC for some list) ≤ (2^n choose d) · (1/32)^{nd} < 2^{nd} / 32^{nd} = 1/16^{nd} < 1

We conclude that there exists a set M of nd linear maps f : F_2^n → F_2^{log d}, such that for every list the LNC of at least one map is lower than C log d log log d.
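Theorem 3.23 can be illustrated on a small random instance: among a handful of random linear maps, at least one tends to have a small LNC for a given list. Everything below is a demo sketch with arbitrary sizes and a freshly sampled set of maps, not the precomputed set M from the theorem:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)
n, d = 16, 64                        # 64 list elements of 16 bits, so log d = 6
L = rng.integers(0, 2, size=(d, n))  # the list, one row per element

def lnc(f, L):
    # largest number of list elements sent to one value by the linear map f
    values = Counter(tuple(v) for v in (L @ f.T) % 2)
    return max(values.values())

maps = [rng.integers(0, 2, size=(6, n)) for _ in range(8)]
best = min(lnc(f, L) for f in maps)
print(best)
```

For a typical random list the best of these few maps already keeps every bucket far below d, which is the behaviour the theorem guarantees for a well-chosen set M.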

The protocol

Like before:

1. Bob finds a linear map f : {0, 1}^n → {0, 1}^{log d} in a fixed set of size nd that causes at most an LNC of C log(d) log log(d) for his list. He can do so by Theorem 3.23. He sends the index of this map to Alice using log(nd) bits.
2. Alice calculates the image of her input under this map and sends that back using log d bits.
3. Bob can now bring his list down to C log(d) log log(d) possible elements that could be Alice's input.
4. They now use the 2-round protocol with this new list of size C log(d) log log(d). This will use log n + 3 log(C log d log log d) + O(1) = log n + 3 log log d + 3 log log log d + O(1) bits.

This protocol uses 2 log d + 2 log n + 3 log log d + 3 log log log d + O(1) bits. This is a little bit worse than the protocol found by Naor et al. [8], but the dominant terms are the same. Just as before, however, our version of the protocol uses only efficiently calculable functions. It is important to note that the 4-round protocols are as good as optimal; there is almost no difference with the lower bound found in Theorem 3.9.

To set up the first part of our protocol we would, just like for the 2-round case, do a lot of work once. In the worst case we would have to try all possible (2^{n log d} choose nd) sets M of maps with all (2^n choose d) lists to find one that works for all. The theorem however also showed that the probability of a set M not working for all lists was less than 1/16^{nd}. So with very high probability the first set will work and we will only have to check against all the lists. Another reassurance is that with probability more than 31/32 the first map we test for a list will already work, so we will not have to test all the maps in the set for each list.

After the initial setup we will have nd linear maps to store, each using n log d bits, resulting in a total of n² d log d bits. In comparison, storing the random nonlinear maps used by Naor et al. would take nd log(d) 2^n bits.
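Rounds 1 to 3 of the protocol above can be sketched end to end. Everything here (the sizes, and a freshly sampled family of maps standing in for the precomputed set M) is an illustrative assumption, not the thesis's actual construction:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
n, d = 16, 64
L = rng.integers(0, 2, size=(d, n))   # Bob's list of d n-bit strings
x = L[17].copy()                      # Alice's input, known to be on the list

# Round 1: Bob picks the map with the smallest largest bucket and sends its index.
maps = [rng.integers(0, 2, size=(6, n)) for _ in range(16)]
def largest_bucket(f):
    return max(Counter(tuple(v) for v in (L @ f.T) % 2).values())
idx = min(range(len(maps)), key=lambda i: largest_bucket(maps[i]))
f = maps[idx]

# Round 2: Alice sends f(x), which is log d bits.
fx = tuple((f @ x) % 2)

# Round 3: Bob keeps only the candidates that hash to the same value;
# the 2-round protocol then finishes on this much shorter list.
candidates = [row for row in L if tuple((f @ row) % 2) == fx]
print(len(candidates))
```

Alice's input always survives the filtering, since it hashes to the value she sent, and the surviving list is at most the largest bucket of the chosen map.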
In both cases we would of course also have the maps for the 2-round protocol in the second step to store; these, however, are relatively small.

Chapter 4

Overview of quantum computations

In this chapter we will give a brief introduction to quantum computing and quantum communication. We will first discuss the basics of quantum computing and then move on to a few of the main examples of how quantum computing can make a difference. Most information in this chapter is based on lecture notes by Ronald de Wolf [4].

4.1 The qubit

In classical computing we work with bits, entities that are either 0 or 1. In quantum computing this changes: we look at qubits. If we measure a qubit it will still be either 0 or 1; however, an unmeasured qubit can be both at the same time, and we call this a superposition. We call the two states |0⟩ and |1⟩. These are the basis states in C²,

|0⟩ = (1, 0)^T and |1⟩ = (0, 1)^T

A general state then is of the form

|Φ⟩ = α|0⟩ + β|1⟩

If we measure this state we have a probability |α|² of finding 0 and a probability |β|² of finding 1. This adds the extra restriction that |α|² + |β|² = 1, since we have a probability distribution. This means that we can identify qubits with unit vectors in C².

4.2 Multiple bits

Computers would not be very powerful if they could only manipulate one bit, and the same is the case for qubits. Luckily we can combine multiple qubits. If we could only place them next to each other and describe them one by one, we would not have a big improvement on classical computers; the only thing we would gain is true randomness. However, it turns out that we can combine qubits to form a combined state where we have a superposition of all 2^n possible basis states. We do this by looking at

the tensor product space of the multiple single qubits. For example, the two-qubit state space has the following basis states:

|0⟩ ⊗ |0⟩, |0⟩ ⊗ |1⟩, |1⟩ ⊗ |0⟩ and |1⟩ ⊗ |1⟩

where the first part stands for the state of the first bit and the second part for the state of the second bit. We will abbreviate these to

|00⟩, |01⟩, |10⟩ and |11⟩

A pair of qubits can be in a general superposition of these four states:

|Φ⟩ = α|00⟩ + β|01⟩ + γ|10⟩ + δ|11⟩

We see that for n qubits we have 2^n basis states, so an n-qubit register is a vector in C^{2^n}. Now we can not always describe the qubits separately anymore. For example, the so-called EPR-pair

|Φ⟩ = (1/√2)(|00⟩ + |11⟩)

can not be written as the tensor product of two single-qubit states. In this state measuring one qubit will also fix the other qubit; due to this we will call the state entangled. In general, a state is entangled if it can not be written as the tensor product of single-qubit states.

4.3 Manipulating states

By now we have a quite interesting form of memory. However, we would of course like to manipulate this memory. Quantum mechanics allows us to apply linear transformations to states, which, in analogy to the classical case, we will call gates. In our vector setting these linear transformations can be represented by 2^n × 2^n complex matrices. Since the states need to stay normalized, we get the restriction that the matrix corresponding to a gate is unitary.

4.3.1 bit flip

A simple example is the bit flip gate, the quantum version of the NOT-gate. Due to the linear nature of matrices we only need to define what it does to the basis states:

|0⟩ → |1⟩ and |1⟩ → |0⟩

This would give the matrix

X = ( 0 1 )
    ( 1 0 )
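The vector picture of qubits and gates is easy to check numerically. A small numpy sketch; the state phi below is an arbitrary example, not one used elsewhere in the text:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # |0>
ket1 = np.array([0, 1], dtype=complex)  # |1>

X = np.array([[0, 1], [1, 0]], dtype=complex)  # the bit flip gate
assert np.allclose(X @ ket0, ket1)
assert np.allclose(X @ ket1, ket0)

# an arbitrary normalized state alpha|0> + beta|1>
phi = 0.6 * ket0 + 0.8j * ket1
probs = np.abs(phi) ** 2        # measurement probabilities |alpha|^2, |beta|^2
assert np.isclose(probs.sum(), 1.0)
# unitary gates keep states normalized
assert np.isclose(np.linalg.norm(X @ phi), 1.0)
```

Since gates are unitary matrices, applying them never changes the norm of the state vector, which is exactly the restriction mentioned above.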

4.3.2 phase flip

Next we define the phase flip:

|0⟩ → |0⟩ and |1⟩ → −|1⟩

This gives the matrix

Z = ( 1  0 )
    ( 0 −1 )

4.3.3 Hadamard

One of the most important transformations in quantum computing is the Hadamard gate. It brings a qubit from one of the basis states to a state in which a measurement would yield a zero or one with equal probability:

|0⟩ → (1/√2)(|0⟩ + |1⟩) and |1⟩ → (1/√2)(|0⟩ − |1⟩)

or in matrix form

H = (1/√2) ( 1  1 )
           ( 1 −1 )

It allows us to go from a deterministic state to a superposition. We could also apply it to multiple bits to get an equal superposition over all n-bit states. Important to note is that it is its own inverse:

H² = (1/2) ( 1  1 ) ( 1  1 ) = (1/2) ( 2 0 ) = I
           ( 1 −1 ) ( 1 −1 )         ( 0 2 )

4.3.4 Controlled not

The above gates are all limited to one qubit. One of the main two-qubit gates is the controlled not, or CNOT. It is the quantum version of the XOR gate. If the first bit is a one, we bit-flip the second:

|a⟩|b⟩ → |a⟩|a ⊕ b⟩

which gives the matrix

( 1 0 0 0 )
( 0 1 0 0 )
( 0 0 0 1 )
( 0 0 1 0 )

As can be expected, the CNOT gate can entangle a pair of qubits. If we start in the |00⟩ state and do a Hadamard on the first qubit we get

(1/√2)(|00⟩ + |10⟩)
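Finishing this computation numerically (a sketch): applying the CNOT to (1/√2)(|00⟩ + |10⟩) indeed produces the EPR pair seen earlier.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket00 = np.array([1, 0, 0, 0], dtype=float)    # |00>
after_h = np.kron(H, I2) @ ket00               # (|00> + |10>)/sqrt(2)
state = CNOT @ after_h                         # the CNOT entangles the pair

epr = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)
assert np.allclose(state, epr)
# the Hadamard is its own inverse
assert np.allclose(H @ H, np.eye(2))
```

The Kronecker product `np.kron(H, I2)` is the matrix form of applying H to the first qubit and the identity to the second, matching the tensor-product structure of two-qubit states.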


More information

Lecture 3: Constructing a Quantum Model

Lecture 3: Constructing a Quantum Model CS 880: Quantum Information Processing 9/9/010 Lecture 3: Constructing a Quantum Model Instructor: Dieter van Melkebeek Scribe: Brian Nixon This lecture focuses on quantum computation by contrasting it

More information

15-251: Great Theoretical Ideas In Computer Science Recitation 9 : Randomized Algorithms and Communication Complexity Solutions

15-251: Great Theoretical Ideas In Computer Science Recitation 9 : Randomized Algorithms and Communication Complexity Solutions 15-251: Great Theoretical Ideas In Computer Science Recitation 9 : Randomized Algorithms and Communication Complexity Solutions Definitions We say a (deterministic) protocol P computes f if (x, y) {0,

More information

CS 282A/MATH 209A: Foundations of Cryptography Prof. Rafail Ostrovsky. Lecture 10

CS 282A/MATH 209A: Foundations of Cryptography Prof. Rafail Ostrovsky. Lecture 10 CS 282A/MATH 209A: Foundations of Cryptography Prof. Rafail Ostrovsky Lecture 10 Lecture date: 14 and 16 of March, 2005 Scribe: Ruzan Shahinian, Tim Hu 1 Oblivious Transfer 1.1 Rabin Oblivious Transfer

More information

Introduction to Algebra: The First Week

Introduction to Algebra: The First Week Introduction to Algebra: The First Week Background: According to the thermostat on the wall, the temperature in the classroom right now is 72 degrees Fahrenheit. I want to write to my friend in Europe,

More information

Lecture 3: Superdense coding, quantum circuits, and partial measurements

Lecture 3: Superdense coding, quantum circuits, and partial measurements CPSC 59/69: Quantum Computation John Watrous, University of Calgary Lecture 3: Superdense coding, quantum circuits, and partial measurements Superdense Coding January 4, 006 Imagine a situation where two

More information

Seminar 1. Introduction to Quantum Computing

Seminar 1. Introduction to Quantum Computing Seminar 1 Introduction to Quantum Computing Before going in I am also a beginner in this field If you are interested, you can search more using: Quantum Computing since Democritus (Scott Aaronson) Quantum

More information

Lecture 11 - Basic Number Theory.

Lecture 11 - Basic Number Theory. Lecture 11 - Basic Number Theory. Boaz Barak October 20, 2005 Divisibility and primes Unless mentioned otherwise throughout this lecture all numbers are non-negative integers. We say that a divides b,

More information

Lecture 12: Interactive Proofs

Lecture 12: Interactive Proofs princeton university cos 522: computational complexity Lecture 12: Interactive Proofs Lecturer: Sanjeev Arora Scribe:Carl Kingsford Recall the certificate definition of NP. We can think of this characterization

More information

Solving Quadratic & Higher Degree Equations

Solving Quadratic & Higher Degree Equations Chapter 7 Solving Quadratic & Higher Degree Equations Sec 1. Zero Product Property Back in the third grade students were taught when they multiplied a number by zero, the product would be zero. In algebra,

More information

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 1: Quantum circuits and the abelian QFT

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 1: Quantum circuits and the abelian QFT Quantum algorithms (CO 78, Winter 008) Prof. Andrew Childs, University of Waterloo LECTURE : Quantum circuits and the abelian QFT This is a course on quantum algorithms. It is intended for graduate students

More information

AN ALGEBRA PRIMER WITH A VIEW TOWARD CURVES OVER FINITE FIELDS

AN ALGEBRA PRIMER WITH A VIEW TOWARD CURVES OVER FINITE FIELDS AN ALGEBRA PRIMER WITH A VIEW TOWARD CURVES OVER FINITE FIELDS The integers are the set 1. Groups, Rings, and Fields: Basic Examples Z := {..., 3, 2, 1, 0, 1, 2, 3,...}, and we can add, subtract, and multiply

More information

6.080/6.089 GITCS May 6-8, Lecture 22/23. α 0 + β 1. α 2 + β 2 = 1

6.080/6.089 GITCS May 6-8, Lecture 22/23. α 0 + β 1. α 2 + β 2 = 1 6.080/6.089 GITCS May 6-8, 2008 Lecturer: Scott Aaronson Lecture 22/23 Scribe: Chris Granade 1 Quantum Mechanics 1.1 Quantum states of n qubits If you have an object that can be in two perfectly distinguishable

More information

(Refer Slide Time: 0:21)

(Refer Slide Time: 0:21) Theory of Computation Prof. Somenath Biswas Department of Computer Science and Engineering Indian Institute of Technology Kanpur Lecture 7 A generalisation of pumping lemma, Non-deterministic finite automata

More information

P (E) = P (A 1 )P (A 2 )... P (A n ).

P (E) = P (A 1 )P (A 2 )... P (A n ). Lecture 9: Conditional probability II: breaking complex events into smaller events, methods to solve probability problems, Bayes rule, law of total probability, Bayes theorem Discrete Structures II (Summer

More information

CSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010

CSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010 CSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010 Computational complexity studies the amount of resources necessary to perform given computations.

More information

Where do pseudo-random generators come from?

Where do pseudo-random generators come from? Computer Science 2426F Fall, 2018 St. George Campus University of Toronto Notes #6 (for Lecture 9) Where do pseudo-random generators come from? Later we will define One-way Functions: functions that are

More information

Theoretical Cryptography, Lectures 18-20

Theoretical Cryptography, Lectures 18-20 Theoretical Cryptography, Lectures 18-20 Instructor: Manuel Blum Scribes: Ryan Williams and Yinmeng Zhang March 29, 2006 1 Content of the Lectures These lectures will cover how someone can prove in zero-knowledge

More information

Quantum information and quantum computing

Quantum information and quantum computing Middle East Technical University, Department of Physics January 7, 009 Outline Measurement 1 Measurement 3 Single qubit gates Multiple qubit gates 4 Distinguishability 5 What s measurement? Quantum measurement

More information

An Introduction to Quantum Information and Applications

An Introduction to Quantum Information and Applications An Introduction to Quantum Information and Applications Iordanis Kerenidis CNRS LIAFA-Univ Paris-Diderot Quantum information and computation Quantum information and computation How is information encoded

More information

Number theory (Chapter 4)

Number theory (Chapter 4) EECS 203 Spring 2016 Lecture 12 Page 1 of 8 Number theory (Chapter 4) Review Compute 6 11 mod 13 in an efficient way What is the prime factorization of 100? 138? What is gcd(100, 138)? What is lcm(100,138)?

More information

Note: Please use the actual date you accessed this material in your citation.

Note: Please use the actual date you accessed this material in your citation. MIT OpenCourseWare http://ocw.mit.edu 18.06 Linear Algebra, Spring 2005 Please use the following citation format: Gilbert Strang, 18.06 Linear Algebra, Spring 2005. (Massachusetts Institute of Technology:

More information

9 Knapsack Cryptography

9 Knapsack Cryptography 9 Knapsack Cryptography In the past four weeks, we ve discussed public-key encryption systems that depend on various problems that we believe to be hard: prime factorization, the discrete logarithm, and

More information

Ph 219b/CS 219b. Exercises Due: Wednesday 22 February 2006

Ph 219b/CS 219b. Exercises Due: Wednesday 22 February 2006 1 Ph 219b/CS 219b Exercises Due: Wednesday 22 February 2006 6.1 Estimating the trace of a unitary matrix Recall that using an oracle that applies the conditional unitary Λ(U), Λ(U): 0 ψ 0 ψ, 1 ψ 1 U ψ

More information

Unitary evolution: this axiom governs how the state of the quantum system evolves in time.

Unitary evolution: this axiom governs how the state of the quantum system evolves in time. CS 94- Introduction Axioms Bell Inequalities /7/7 Spring 7 Lecture Why Quantum Computation? Quantum computers are the only model of computation that escape the limitations on computation imposed by the

More information

Math 31 Lesson Plan. Day 5: Intro to Groups. Elizabeth Gillaspy. September 28, 2011

Math 31 Lesson Plan. Day 5: Intro to Groups. Elizabeth Gillaspy. September 28, 2011 Math 31 Lesson Plan Day 5: Intro to Groups Elizabeth Gillaspy September 28, 2011 Supplies needed: Sign in sheet Goals for students: Students will: Improve the clarity of their proof-writing. Gain confidence

More information

MITOCW ocw f99-lec30_300k

MITOCW ocw f99-lec30_300k MITOCW ocw-18.06-f99-lec30_300k OK, this is the lecture on linear transformations. Actually, linear algebra courses used to begin with this lecture, so you could say I'm beginning this course again by

More information

arxiv:quant-ph/ v1 15 Jan 2006

arxiv:quant-ph/ v1 15 Jan 2006 Shor s algorithm with fewer (pure) qubits arxiv:quant-ph/0601097v1 15 Jan 2006 Christof Zalka February 1, 2008 Abstract In this note we consider optimised circuits for implementing Shor s quantum factoring

More information

Lecture 21: Quantum communication complexity

Lecture 21: Quantum communication complexity CPSC 519/619: Quantum Computation John Watrous, University of Calgary Lecture 21: Quantum communication complexity April 6, 2006 In this lecture we will discuss how quantum information can allow for a

More information

1 Reductions and Expressiveness

1 Reductions and Expressiveness 15-451/651: Design & Analysis of Algorithms November 3, 2015 Lecture #17 last changed: October 30, 2015 In the past few lectures we have looked at increasingly more expressive problems solvable using efficient

More information

Lecture 1: Shannon s Theorem

Lecture 1: Shannon s Theorem Lecture 1: Shannon s Theorem Lecturer: Travis Gagie January 13th, 2015 Welcome to Data Compression! I m Travis and I ll be your instructor this week. If you haven t registered yet, don t worry, we ll work

More information

Sequence convergence, the weak T-axioms, and first countability

Sequence convergence, the weak T-axioms, and first countability Sequence convergence, the weak T-axioms, and first countability 1 Motivation Up to now we have been mentioning the notion of sequence convergence without actually defining it. So in this section we will

More information

Quadratic Equations Part I

Quadratic Equations Part I Quadratic Equations Part I Before proceeding with this section we should note that the topic of solving quadratic equations will be covered in two sections. This is done for the benefit of those viewing

More information

You separate binary numbers into columns in a similar fashion. 2 5 = 32

You separate binary numbers into columns in a similar fashion. 2 5 = 32 RSA Encryption 2 At the end of Part I of this article, we stated that RSA encryption works because it s impractical to factor n, which determines P 1 and P 2, which determines our private key, d, which

More information

Quantum Information & Quantum Computation

Quantum Information & Quantum Computation CS90A, Spring 005: Quantum Information & Quantum Computation Wim van Dam Engineering, Room 509 vandam@cs http://www.cs.ucsb.edu/~vandam/teaching/cs90/ Administrative The Final Examination will be: Monday

More information

NP-Completeness. Until now we have been designing algorithms for specific problems

NP-Completeness. Until now we have been designing algorithms for specific problems NP-Completeness 1 Introduction Until now we have been designing algorithms for specific problems We have seen running times O(log n), O(n), O(n log n), O(n 2 ), O(n 3 )... We have also discussed lower

More information

Pseudorandom Generators

Pseudorandom Generators Principles of Construction and Usage of Pseudorandom Generators Alexander Vakhitov June 13, 2005 Abstract In this report we try to talk about the main concepts and tools needed in pseudorandom generators

More information

Lecture 1: Introduction to Public key cryptography

Lecture 1: Introduction to Public key cryptography Lecture 1: Introduction to Public key cryptography Thomas Johansson T. Johansson (Lund University) 1 / 44 Key distribution Symmetric key cryptography: Alice and Bob share a common secret key. Some means

More information

Algebra. Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed.

Algebra. Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed. This document was written and copyrighted by Paul Dawkins. Use of this document and its online version is governed by the Terms and Conditions of Use located at. The online version of this document is

More information

Zero-Knowledge Against Quantum Attacks

Zero-Knowledge Against Quantum Attacks Zero-Knowledge Against Quantum Attacks John Watrous Department of Computer Science University of Calgary January 16, 2006 John Watrous (University of Calgary) Zero-Knowledge Against Quantum Attacks QIP

More information

Quantum Simultaneous Contract Signing

Quantum Simultaneous Contract Signing Quantum Simultaneous Contract Signing J. Bouda, M. Pivoluska, L. Caha, P. Mateus, N. Paunkovic 22. October 2010 Based on work presented on AQIS 2010 J. Bouda, M. Pivoluska, L. Caha, P. Mateus, N. Quantum

More information

Lecture 3: Latin Squares and Groups

Lecture 3: Latin Squares and Groups Latin Squares Instructor: Padraic Bartlett Lecture 3: Latin Squares and Groups Week 2 Mathcamp 2012 In our last lecture, we came up with some fairly surprising connections between finite fields and Latin

More information

communication complexity lower bounds yield data structure lower bounds

communication complexity lower bounds yield data structure lower bounds communication complexity lower bounds yield data structure lower bounds Implementation of a database - D: D represents a subset S of {...N} 2 3 4 Access to D via "membership queries" - Q for each i, can

More information

Proportional Division Exposition by William Gasarch

Proportional Division Exposition by William Gasarch 1 Introduction Proportional Division Exposition by William Gasarch Whenever we say something like Alice has a piece worth 1/ we mean worth 1/ TO HER. Lets say we want Alice, Bob, Carol, to split a cake

More information

Lecture 3: Randomness in Computation

Lecture 3: Randomness in Computation Great Ideas in Theoretical Computer Science Summer 2013 Lecture 3: Randomness in Computation Lecturer: Kurt Mehlhorn & He Sun Randomness is one of basic resources and appears everywhere. In computer science,

More information

1 Quantum Circuits. CS Quantum Complexity theory 1/31/07 Spring 2007 Lecture Class P - Polynomial Time

1 Quantum Circuits. CS Quantum Complexity theory 1/31/07 Spring 2007 Lecture Class P - Polynomial Time CS 94- Quantum Complexity theory 1/31/07 Spring 007 Lecture 5 1 Quantum Circuits A quantum circuit implements a unitary operator in a ilbert space, given as primitive a (usually finite) collection of gates

More information

Lecture 6 Sept. 14, 2015

Lecture 6 Sept. 14, 2015 PHYS 7895: Quantum Information Theory Fall 205 Prof. Mark M. Wilde Lecture 6 Sept., 205 Scribe: Mark M. Wilde This document is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0

More information

On the tightness of the Buhrman-Cleve-Wigderson simulation

On the tightness of the Buhrman-Cleve-Wigderson simulation On the tightness of the Buhrman-Cleve-Wigderson simulation Shengyu Zhang Department of Computer Science and Engineering, The Chinese University of Hong Kong. syzhang@cse.cuhk.edu.hk Abstract. Buhrman,

More information