
We use the Chernoff bound for the Poisson distribution (Theorem 5.4) to bound this probability, writing the bound as Pr(X ≥ x) ≤ e^{x - m - x ln(x/m)}. For x = m + √(2m ln m), we use that ln(1 + z) ≥ z - z²/2 for z ≥ 0 to show

Pr(X ≥ m + √(2m ln m)) ≤ e^{√(2m ln m) - (m + √(2m ln m)) ln(1 + √(2 ln m / m))}
                       ≤ e^{√(2m ln m) - (m + √(2m ln m))(√(2 ln m / m) - ln m / m)}
                       = e^{-ln m + √(2m ln m)(ln m / m)}
                       = o(1).

A similar argument holds if x < m, so Pr(|X - m| > √(2m ln m)) = o(1).

We now show the second fact, that

|Pr(E | |X - m| ≤ √(2m ln m)) - Pr(E | X = m)| = o(1).

Note that Pr(E | X = k) is increasing in k, since this probability corresponds to the probability that all bins are nonempty when k balls are thrown independently and uniformly at random. The more balls that are thrown, the more likely all bins are nonempty. It follows that

Pr(E | X = m - √(2m ln m)) ≤ Pr(E | |X - m| ≤ √(2m ln m)) ≤ Pr(E | X = m + √(2m ln m)).

Hence we have the bound

|Pr(E | |X - m| ≤ √(2m ln m)) - Pr(E | X = m)| ≤ Pr(E | X = m + √(2m ln m)) - Pr(E | X = m - √(2m ln m)),

and we show the right-hand side is o(1). This is the difference between the probability that all bins receive at least one ball when m - √(2m ln m) balls are thrown and when m + √(2m ln m) balls are thrown. This difference is equivalent to the probability of the following experiment: we throw m - √(2m ln m) balls and there is still at least one empty bin, but after throwing an additional 2√(2m ln m) balls, all bins are nonempty. In order for this to happen, there must be at least one empty bin after m - √(2m ln m) balls; the probability that one of the next 2√(2m ln m) balls covers this bin is at most 2√(2m ln m)/n = o(1) by the union bound. Hence this difference is o(1) as well.

5.5. Application: Hashing

5.5.1. Chain Hashing

The balls-and-bins model is also useful for modeling hashing. For example, consider the application of a password checker, which prevents people from using common, easily cracked passwords by keeping a dictionary of unacceptable passwords. When a user tries to set up a password, the application would like to check if the requested password is part of the unacceptable set. One possible approach for a password checker would be to store the unacceptable passwords alphabetically and do a binary search on the dictionary to check if a proposed password is unacceptable. A binary search would require Θ(log m) time for m words.

Another possibility is to place the words into bins and then search the appropriate bin for the word. The words in a bin would be represented by a linked list. The placement of words into bins is accomplished by using a hash function. A hash function f from a universe U into a range [0, n - 1] can be thought of as a way of placing items from the universe into n bins. Here the universe U would consist of possible password strings. The collection of bins is called a hash table. This approach to hashing is called chain hashing, since items that fall in the same bin are chained together in a linked list.

Using a hash table turns the dictionary problem into a balls-and-bins problem. If our dictionary of unacceptable passwords consists of m words and the range of the hash function is [0, n - 1], then we can model the distribution of words in bins with the same distribution as m balls placed randomly in n bins. We are making a rather strong assumption by presuming that our hash function maps words into bins in a fashion that appears random, so that the location of each word is independent and identically distributed. There is a great deal of theory behind designing hash functions that appear random, and we will not delve into that theory here. We simply model the problem by assuming that hash functions are random. In other words, we assume that (a) for each x ∈ U, the probability that f(x) = j is 1/n (for 0 ≤ j ≤ n - 1) and that (b) the values of f(x) for each x are independent of each other. Notice that this does not mean that every evaluation of f(x) yields a different random answer! The value of f(x) is fixed for all time; it is just equally likely to take on any value in the range.

Let us consider the search time when there are n bins and m words. To search for an item, we first hash it to find the bin that it lies in and then search sequentially through the linked list for it. If we search for a word that is not in our dictionary, the expected number of words in the bin the word hashes to is m/n. If we search for a word that is in our dictionary, the expected number of other words in that word's bin is (m - 1)/n, so the expected number of words in the bin is 1 + (m - 1)/n. If we choose n = m bins for our hash table, then the expected number of words we must search through in a bin is constant. If the hashing takes constant time, then the total expected time for the search is constant.

The maximum time to search for a word, however, is proportional to the maximum number of words in a bin. We have shown that when n = m this maximum load is Θ(ln n / ln ln n) with probability close to 1, and hence with high probability this is the maximum search time in such a hash table. While this is still faster than the required time for standard binary search, it is much slower than the average, which can be a drawback for many applications.

Another drawback of chain hashing can be wasted space. If we use n bins for n items, several of the bins will be empty, potentially leading to wasted space. The space wasted can be traded off against the search time by making the average number of words per bin larger than 1.
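To make the chain-hashing scheme concrete, here is a minimal sketch of a chained hash table used as a password checker. The class and method names are illustrative rather than from the text, and Python's built-in hash stands in for the idealized random hash function assumed in the analysis.

```python
# Minimal sketch of chain hashing for a dictionary of unacceptable passwords.
# Python's built-in hash() stands in for the idealized random hash function;
# the names below are illustrative, not from the text.

class ChainHashTable:
    def __init__(self, n_bins):
        self.n = n_bins
        self.bins = [[] for _ in range(n_bins)]   # one chain (here: a Python list) per bin

    def _bin(self, word):
        return hash(word) % self.n                # maps the universe into [0, n - 1]

    def insert(self, word):
        self.bins[self._bin(word)].append(word)

    def contains(self, word):
        # Expected search cost is O(1 + m/n): hash once, then scan a single bin.
        return word in self.bins[self._bin(word)]

# Usage: with n = m bins the expected bin size is constant, though the fullest
# bin holds Theta(ln n / ln ln n) words with high probability.
bad_words = ["password", "123456", "qwerty", "letmein"]
table = ChainHashTable(n_bins=len(bad_words))
for w in bad_words:
    table.insert(w)
print(table.contains("qwerty"))   # True
print(table.contains("hunter2"))  # False
```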

5.5.2. Hashing: Bit Strings

If we want to save space instead of time, we can use hashing in another way. Again, we consider the problem of keeping a dictionary of unsuitable passwords. Assume that a password is restricted to be eight ASCII characters, which requires 64 bits (8 bytes) to represent. Suppose we use a hash function to map each word into a 32-bit string. This string will serve as a short fingerprint for the word; just as a fingerprint is a succinct way of identifying people, the fingerprint string is a succinct way of identifying a word. We keep the fingerprints in a sorted list. To check if a proposed password is unacceptable, we calculate its fingerprint and look for it on the list, say by a binary search.² If the fingerprint is on the list, we declare the password unacceptable.

In this case, our password checker may not give the correct answer! It is possible for a user to input an acceptable password, only to have it rejected because its fingerprint matches the fingerprint of an unacceptable password. Hence there is some chance that hashing will yield a false positive: it may falsely declare a match when there is not an actual match. The problem is that - unlike fingerprints for human beings - our fingerprints do not uniquely identify the associated word. This is the only type of mistake this algorithm can make; it does not allow a password that is in the dictionary of unsuitable passwords. In the password application, allowing false positives means our algorithm is overly conservative, which is probably acceptable. Letting easily cracked passwords through, however, would probably not be acceptable.

To place the problem in a more general context, we describe it as an approximate set membership problem. Suppose we have a set S = {s_1, s_2, ..., s_m} of m elements from a large universe U. We would like to represent the elements in such a way that we can quickly answer queries of the form "Is x an element of S?" We would also like the representation to take as little space as possible. In order to save space, we would be willing to allow occasional mistakes in the form of false positives. Here the unallowable passwords correspond to our set S.

How large should the range of the hash function used to create the fingerprints be? Specifically, if we are working with bits, how many bits should we use to create a fingerprint? Obviously, we want to choose the number of bits that gives an acceptable probability for a false positive match. The probability that an acceptable password has a fingerprint that differs from the fingerprint of any specific unallowable password in S is (1 - 1/2^b). It follows that if the set S has size m and if we use b bits for the fingerprint, then the probability of a false positive for an acceptable password is

1 - (1 - 1/2^b)^m ≈ 1 - e^{-m/2^b}.

If we want this probability of a false positive to be less than a constant c, we need

e^{-m/2^b} ≥ 1 - c,

which implies that

b ≥ log_2 (m / ln(1/(1 - c))).

That is, we need b = Ω(log_2 m) bits. On the other hand, if we use b = 2 log_2 m bits, then the probability of a false positive falls to

1 - (1 - 1/m²)^m < 1/m.

In our example, if our dictionary has 2^16 = 65,536 words, then using 32 bits when hashing yields a false positive probability of just less than 1/65,536.

² In this case the fingerprints will be uniformly distributed over all 32-bit strings. There are faster algorithms for searching over sets of numbers with this distribution, just as Bucket sort allows faster sorting than standard comparison-based sorting when the elements to be sorted are from a uniform distribution, but we will not concern ourselves with this point here.
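As a quick numerical check of the fingerprint analysis above, the sketch below evaluates the false positive probability 1 - (1 - 1/2^b)^m and the bound b ≥ log_2(m / ln(1/(1 - c))). The function names and the 1% target are illustrative, not from the text.

```python
import math

def false_positive_prob(m, b):
    """Chance an acceptable password matches one of m random b-bit fingerprints."""
    return 1.0 - (1.0 - 1.0 / 2**b) ** m        # ~ 1 - exp(-m / 2**b)

def bits_needed(m, c):
    """Smallest b with false positive probability at most c: b >= log2(m / ln(1/(1-c)))."""
    return math.ceil(math.log2(m / math.log(1.0 / (1.0 - c))))

m = 2**16                                        # 65,536 unacceptable passwords
print(false_positive_prob(m, 32))                # just under 1/65,536, as in the text
print(bits_needed(m, 0.01))                      # bits needed for a 1% false positive bound
```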

5.5.3. Bloom Filters

We can generalize the hashing ideas of Sections 5.5.1 and 5.5.2 to achieve more interesting trade-offs between the space required and the false positive probability. The resulting data structure for the approximate set membership problem is called a Bloom filter.

A Bloom filter consists of an array of n bits, A[0] to A[n - 1], initially all set to 0. A Bloom filter uses k independent random hash functions h_1, ..., h_k with range {0, ..., n - 1}. We make the usual assumption for analysis that these hash functions map each element in the universe to a random number uniformly over the range {0, ..., n - 1}. Suppose that we use a Bloom filter to represent a set S = {s_1, s_2, ..., s_m} of m elements from a large universe U. For each element s ∈ S, the bits A[h_i(s)] are set to 1 for 1 ≤ i ≤ k. A bit location can be set to 1 multiple times, but only the first change has an effect. To check if an element x is in S, we check whether all array locations A[h_i(x)] for 1 ≤ i ≤ k are set to 1. If not, then clearly x is not a member of S, because if x were in S then all locations A[h_i(x)] for 1 ≤ i ≤ k would be set to 1 by construction. If all A[h_i(x)] are set to 1, we assume that x is in S, although we could be wrong. We would be wrong if all of the positions A[h_i(x)] were set to 1 by elements of S even though x is not in the set. Hence Bloom filters may yield false positives. Figure 5.1 shows an example.

Figure 5.1: Example of how a Bloom filter functions. Start with an array of 0s. Each element of S is hashed k times; each hash gives an array location to set to 1. To check if y is in S, check the k hash locations: if a 0 appears, y is not in S; if only 1s appear, conclude that y is in S. This may yield false positives.
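The following is a minimal Bloom filter sketch matching the description above. The k hash functions are simulated by salting a single SHA-256 hash (an implementation convenience, not the idealized independent random hashes assumed in the analysis), and the class name BloomFilter is illustrative.

```python
import hashlib

class BloomFilter:
    def __init__(self, n_bits, k):
        self.n = n_bits
        self.k = k
        self.bits = [0] * n_bits          # the array A[0..n-1], initially all 0

    def _locations(self, item):
        # Simulate k hash functions by salting one hash with the index i;
        # the analysis assumes truly random, independent hash functions.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.n

    def add(self, item):
        for loc in self._locations(item):
            self.bits[loc] = 1            # setting an already-set bit has no further effect

    def might_contain(self, item):
        # A single 0 proves the item is absent; all 1s may be a false positive.
        return all(self.bits[loc] for loc in self._locations(item))

bf = BloomFilter(n_bits=80, k=6)          # 8 bits per item for the 10 items below
for word in ["password", "123456", "qwerty", "letmein", "dragon",
             "monkey", "abc123", "iloveyou", "admin", "welcome"]:
    bf.add(word)
print(bf.might_contain("qwerty"))     # True (it was added)
print(bf.might_contain("hunter2"))    # usually False; True only with small probability
```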

The probability of a false positive for an element not in the set - the false positive probability - can be calculated in a straightforward fashion, given our assumption that the hash functions are random. After all the elements of S are hashed into the Bloom filter, the probability that a specific bit is still 0 is

(1 - 1/n)^{km} ≈ e^{-km/n}.

We let p = e^{-km/n}. To simplify the analysis, let us temporarily assume that a fraction p of the entries are still 0 after all of the elements of S are hashed into the Bloom filter. The probability of a false positive is then

(1 - (1 - 1/n)^{km})^k ≈ (1 - e^{-km/n})^k = (1 - p)^k.

We let f = (1 - e^{-km/n})^k = (1 - p)^k. From now on, for convenience we use the asymptotic approximations p and f to represent (respectively) the probability that a bit in the Bloom filter is 0 and the probability of a false positive.

Suppose that we are given m and n and wish to optimize the number of hash functions k in order to minimize the false positive probability f. There are two competing forces: using more hash functions gives us more chances to find a 0-bit for an element that is not a member of S, but using fewer hash functions increases the fraction of 0-bits in the array. The optimal number of hash functions that minimizes f as a function of k is easily found by taking the derivative. Let g = k ln(1 - e^{-km/n}), so that f = e^g and minimizing the false positive probability f is equivalent to minimizing g with respect to k. We find

dg/dk = ln(1 - e^{-km/n}) + (km/n) · e^{-km/n} / (1 - e^{-km/n}).

It is easy to check that the derivative is zero when k = (ln 2) · (n/m) and that this point is a global minimum. In this case the false positive probability f is (1/2)^k ≈ (0.6185)^{n/m}. The false positive probability falls exponentially in n/m, the number of bits used per item. In practice, of course, k must be an integer, so the best possible choice of k may lead to a slightly higher false positive rate.

A Bloom filter is like a hash table, but instead of storing set items we simply use one bit to keep track of whether or not an item hashed to that location. If k = 1, we have just one hash function and the Bloom filter is equivalent to a hashing-based fingerprint system, where the list of the fingerprints is stored in a 0-1 bit array. Thus Bloom filters can be seen as a generalization of the idea of hashing-based fingerprints. As we saw when using fingerprints, to get even a small constant probability of a false positive required Ω(log m) fingerprint bits per item. In many practical applications, Ω(log m) bits per item can be too many. Bloom filters allow a constant probability of a false positive while keeping n/m, the number of bits of storage required per item, constant. For many applications, the small space requirements make a constant probability of error acceptable. For example, in the password application, we may be willing to accept false positive rates of 1% or 2%.
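The trade-off above is easy to explore numerically. A small sketch, assuming illustrative values of m = 1000 items and 8 bits per item (the function name is ours, not the text's), evaluates f = (1 - e^{-km/n})^k around the optimum k = (ln 2)(n/m):

```python
import math

def false_positive_rate(n, m, k):
    """Asymptotic Bloom filter false positive probability f = (1 - e^{-km/n})^k."""
    return (1.0 - math.exp(-k * m / n)) ** k

m, n = 1000, 8000                      # 8 bits per item
k_opt = math.log(2) * n / m            # optimal real-valued k = (ln 2) * (n/m), about 5.55
print(k_opt, false_positive_rate(n, m, k_opt))   # ~ (0.6185)^(n/m), about 0.0216
for k in (4, 5, 6, 7):                 # in practice k must be an integer
    print(k, false_positive_rate(n, m, k))       # k = 5 or 6 give just over 0.02
```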

Bloom filters are highly effective even if n = cm for a small constant c, such as c = 8. In this case, when k = 5 or k = 6 the false positive probability is just over 0.02. This contrasts with the approach of hashing each element into Θ(log m) bits. Bloom filters require significantly fewer bits while still achieving a very good false positive probability.

It is also interesting to frame the optimization another way. Consider f, the probability of a false positive, as a function of p. We find

f = (1 - p)^{(-ln p)(n/m)} = (e^{-ln(p) ln(1-p)})^{n/m}.   (5.8)

From the symmetry of this expression, it is easy to check that p = 1/2 minimizes the false positive probability f. Hence the optimal results are achieved when each bit of the Bloom filter is 0 with probability 1/2. An optimized Bloom filter looks like a random bit string.

To conclude, we reconsider our assumption that the fraction of entries that are still 0 after all of the elements of S are hashed into the Bloom filter is p. Each bit in the array can be thought of as a bin, and hashing an item is like throwing a ball. The fraction of entries that are still 0 after all of the elements of S are hashed is therefore equivalent to the fraction of empty bins after mk balls are thrown into n bins. Let X be the number of such bins when mk balls are thrown. The expected fraction of such bins is

p' = (1 - 1/n)^{km}.

The events of different bins being empty are not independent, but we can apply Corollary 5.9, along with the Chernoff bound of Eqn. (4.6), to obtain

Pr(|X - np'| ≥ εn) ≤ 2e√(mk) · e^{-nε²/(3p')}.

Actually, Corollary 5.11 applies as well, since the number of 0-entries - which corresponds to the number of empty bins - is monotonically decreasing in the number of balls thrown. The bound tells us that the fraction of empty bins is close to p' (when n is reasonably large) and that p' is very close to p. Our assumption that the fraction of 0-entries in the Bloom filter is p is therefore quite accurate for predicting actual performance.

5.5.4. Breaking Symmetry

As our last application of hashing, we consider how hashing provides a simple way to break symmetry. Suppose that n users want to utilize a resource, such as time on a supercomputer. They must use the resource sequentially, one at a time. Of course, each user wants to be scheduled as early as possible. How can we decide a permutation of the users quickly and fairly? If each user has an identifying name or number, hashing provides one possible solution. Hash each user's identifier into 2b bits, and then take the permutation given by
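The excerpt breaks off mid-sentence here. Presumably the permutation is obtained by ordering users by their hash values; the following is a minimal sketch under that assumption, with an illustrative function name and SHA-256 standing in for the random hash function.

```python
import hashlib

def schedule(user_ids, b=32):
    """Order users by a 2b-bit hash of their identifiers.
    Illustrative sketch only: the text is cut off above, so the exact
    permutation rule is assumed (sort by hash value), not quoted."""
    def fingerprint(uid):
        digest = hashlib.sha256(str(uid).encode()).digest()
        return int.from_bytes(digest, "big") % (1 << (2 * b))   # 2b-bit hash value
    return sorted(user_ids, key=fingerprint)

print(schedule(["alice", "bob", "carol", "dave"]))
```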