Jeffrey D. Ullman Stanford University

Similar documents
2.6 Complexity Theory for Map-Reduce. Star Joins

arxiv: v2 [cs.db] 20 Oct 2016

Finding Similar Sets. Applications Shingling Minhashing Locality-Sensitive Hashing Distance Measures. Modified from Jeff Ullman

CS425: Algorithms for Web Scale Data

The tensor product of commutative monoids

1 Closest Pair of Points on the Plane

DATA MINING LECTURE 6. Similarity and Distance Sketching, Locality Sensitive Hashing

15. Polynomial rings Definition-Lemma Let R be a ring and let x be an indeterminate.

Examples of the Cantor-Bendixson process on the reals Harold Simmons

CS1800: Mathematical Induction. Professor Kevin Gold

The Complexity of a Reliable Distributed System

Jeffrey D. Ullman Stanford University

Chapter 2. Mathematical Reasoning. 2.1 Mathematical Models

COMPUTING SIMILARITY BETWEEN DOCUMENTS (OR ITEMS) This part is to a large extent based on slides obtained from

Machine Learning! in just a few minutes. Jan Peters Gerhard Neumann

Lecture 11: Extrema. Nathan Pflueger. 2 October 2013

5 + 9(10) + 3(100) + 0(1000) + 2(10000) =

Students should be able to (a) produce a truth table from a given logic diagram

Simple Interactions CS 105 Lecture 2 Jan 26. Matthew Stone

Lower Bounds for q-ary Codes with Large Covering Radius

Name Period Date. RNS1.3 Scientific Notation Read and write large and small numbers. Use scientific notation to write numbers and solve problems.

CS4800: Algorithms & Data Jonathan Ullman

Lecture Notes, Week 6

CSC236 Week 3. Larry Zhang

CSC 261/461 Database Systems Lecture 13. Spring 2018

CS4026 Formal Models of Computation

A Tightly-coupled XML Encoder-decoder For 3D Data Transaction

Beautiful Mathematics

You separate binary numbers into columns in a similar fashion. 2^5 = 32

Dedekind Domains. Mathematics 601

NP-Completeness I. Lecture Overview Introduction: Reduction and Expressiveness

Introduction to Number Theory. The study of the integers

Mat Week 8. Week 8. gcd() Mat Bases. Integers & Computers. Linear Combos. Week 8. Induction Proofs. Fall 2013

CS60021: Scalable Data Mining. Similarity Search and Hashing. Sourangshu Bhattacharya

Mathematical Background

Student Responsibilities Week 8. Mat Section 3.6 Integers and Algorithms. Algorithm to Find gcd()

Nondeterministic finite automata

Better proofs for rekeying

Administrative notes. Computational Thinking ct.cs.ubc.ca

How to write proofs - II

Chapter 5. Number Theory. 5.1 Base b representations

Relationships between variables. Association Examples: Smoking is associated with heart disease. Weight is associated with height.

CS168: The Modern Algorithmic Toolbox Lecture #7: Understanding Principal Component Analysis (PCA)

Chapter 26: Comparing Counts (Chi Square)

Notes on Pumping Lemma

4 Number Theory and Cryptography

Estimates for factoring 1024-bit integers. Thorsten Kleinjung, University of Bonn

The Growth of Functions. A Practical Introduction with as Little Theory as possible

An analogy from Calculus: limits

Finding similar items

CPSC 467: Cryptography and Computer Security

Lecture 10: F -Tests, ANOVA and R 2

ESTIMATING STATISTICAL CHARACTERISTICS UNDER INTERVAL UNCERTAINTY AND CONSTRAINTS: MEAN, VARIANCE, COVARIANCE, AND CORRELATION ALI JALAL-KAMALI

Math 32A Discussion Session Week 5 Notes November 7 and 9, 2017

CS224W: Social and Information Network Analysis Jure Leskovec, Stanford University

Lecture 17: Trees and Merge Sort 10:00 AM, Oct 15, 2018

Communication Complexity. The dialogues of Alice and Bob...

Lecture 26 December 1, 2015

Information and Entropy. Professor Kevin Gold

CSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010

More Approximation Algorithms

Arithmetic and Incompleteness. Will Gunther. Goals. Coding with Naturals. Logic and Incompleteness. Will Gunther. February 6, 2013

CS 347 Parallel and Distributed Data Processing

No, not the PIE you eat.

Lecture 4: Constructing the Integers, Rationals and Reals

Quantum algorithms (CO 781/CS 867/QIC 823, Winter 2013) Andrew Childs, University of Waterloo LECTURE 13: Query complexity and the polynomial method

High Fidelity to Low Weight. Daniel Gottesman Perimeter Institute

Sequential Logic (3.1 and is a long difficult section you really should read!)

CS246 Final Exam, Winter 2011

Kernelization Lower Bounds: A Brief History

MA554 Assessment 1 Cosets and Lagrange s theorem

Linear Congruences. The equation ax = b for a, b R is uniquely solvable if a 0: x = b/a. Want to extend to the linear congruence:

One-to-one functions and onto functions

TUM Institut für Informatik. Performing Permuting on a Multiprocessor Architecture Using Packet Communication

Independencies. Undirected Graphical Models 2: Independencies. Independencies (Markov networks) Independencies (Bayesian Networks)

1 Two-Way Deterministic Finite Automata

Previous Exam Questions, Chapter 2

Simultaneous equations for circuit analysis

Lecture 5. 1 Review (Pairwise Independence and Derandomization)

Assignment 3: Chapter 2 & 3 (2.6, 3.8)

MITOCW watch?v=0usje5vtiks

V. Graph Sketching and Max-Min Problems

Assignment 13 Assigned Mon Oct 4

MATH 243E Test #3 Solutions

CPSC 467: Cryptography and Computer Security

Using the Derivative. Chapter Local Max and Mins. Definition. Let x = c be in the domain of f(x).

Computational Complexity. This lecture. Notes. Lecture 02 - Basic Complexity Analysis. Tom Kelsey & Susmit Sarkar. Notes

Essentials of Intermediate Algebra

3 The fundamentals: Algorithms, the integers, and matrices

Math101, Sections 2 and 3, Spring 2008 Review Sheet for Exam #2:

Hilbert s theorem 90, Dirichlet s unit theorem and Diophantine equations

Notes on ordinals and cardinals

ECNS 561 Multiple Regression Analysis

MS&E 226. In-Class Midterm Examination Solutions Small Data October 20, 2015

Complex Differentials and the Stokes, Goursat and Cauchy Theorems

2. Two binary operations (addition, denoted + and multiplication, denoted

Math Lecture 1: Differential Equations - What Are They, Where Do They Come From, and What Do They Want?

CS 347 Distributed Databases and Transaction Processing Notes03: Query Processing

Russell s logicism. Jeff Speaks. September 26, 2007

Expected Value II. 1 The Expected Number of Events that Happen

Transcription:

Jeffrey D. Ullman Stanford University

A real story from the CS341 data-mining project class. The students involved did a wonderful job and got an A, but their first attempt at a MapReduce algorithm caused them problems and led to the development of an interesting theory.

The data consisted of records for 3000 drugs: for each drug, a list of the patients taking it, dates, and diagnoses, about 1 MB of data per drug. The problem was to find drug interactions, for example, two drugs that, when taken together, increase the risk of heart attack. We must examine each pair of drugs and compare their data.

The first attempt used the following plan. Key = set of two drugs {i, j}; value = the record for one of these drugs. Given drug i and its record R_i, the mapper generates all key-value pairs ({i, j}, R_i), where j is any other drug besides i. Each reducer receives its key and a list of the two records for that pair: ({i, j}, [R_i, R_j]).
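
A minimal sketch of that plan in Python-style MapReduce pseudocode; the framework glue is elided, and NUM_DRUGS and compare() are illustrative assumptions, not part of the students' system:

```python
NUM_DRUGS = 3000

def map_drug(i, record_i):
    # Emit drug i's entire ~1 MB record once for every other drug j.
    for j in range(1, NUM_DRUGS + 1):
        if j != i:
            yield (frozenset({i, j}), record_i)  # 2999 pairs per drug

def reduce_pair(key, records):
    # records is the value list [R_i, R_j] for this pair of drugs.
    r_i, r_j = records
    yield (key, compare(r_i, r_j))  # compare() = hypothetical
                                    # statistical interaction test
```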

[Figure: the first attempt, for three drugs. The mapper for drug 1 emits ({1,2}, drug-1 data) and ({1,3}, drug-1 data); the mapper for drug 2 emits ({1,2}, drug-2 data) and ({2,3}, drug-2 data); the mapper for drug 3 emits ({1,3}, drug-3 data) and ({2,3}, drug-3 data).]

[Figure: what the reducers receive. The reducer for {1,2} gets drug-1 and drug-2 data; the reducer for {1,3} gets drug-1 and drug-3 data; the reducer for {2,3} gets drug-2 and drug-3 data.]

3000 drugs times 2999 key-value pairs per drug times 1,000,000 bytes per value is about 9 × 10^12 bytes = 9 terabytes communicated. At an effective throughput of roughly 10^8 bytes/second on gigabit Ethernet, that is about 90,000 seconds, or a full day, of network use.
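
The same arithmetic as a quick check (the 10^8 bytes/second figure is an assumed effective throughput, not a measured one):

```python
pairs = 3000 * 2999              # key-value pairs emitted in total
total_bytes = pairs * 1_000_000  # ~1 MB per value
print(total_bytes)               # 8_997_000_000_000, i.e. ~9 TB
print(total_bytes / 1e8)         # ~90,000 seconds at 10^8 bytes/s
```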

The team grouped the drugs into 30 groups of 100 drugs each. Say G_1 = drugs 1-100, G_2 = drugs 101-200, ..., G_30 = drugs 2901-3000. Let g(i) = the number of the group into which drug i goes.

A key is now a set of two group numbers. The mapper for drug i produces 29 key-value pairs: each key is the set containing g(i) and one of the other group numbers, and the value is a pair consisting of the drug number i and the megabyte-long record for drug i.

The reducer for the pair of groups {m, n} gets that key and a list of 200 drug records: the drugs belonging to groups m and n. Its job is to compare each record from group m with each record from group n. Special case: the reducer also compares the records of group n with each other, if m = n+1 or if n = 30 and m = 1, so each group's internal pairs are handled at exactly one reducer. Notice that each pair of records is compared at exactly one reducer, so the total computation is not increased. A sketch follows.
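
Here is the grouped algorithm under the same illustrative conventions as before (g(), compare(), and the framework glue are assumptions):

```python
NUM_GROUPS = 30
GROUP_SIZE = 100

def g(i):
    # Group number of drug i: drugs 1-100 -> group 1, 101-200 -> 2, ...
    return (i - 1) // GROUP_SIZE + 1

def map_drug(i, record_i):
    # 29 key-value pairs per drug: one for each other group.
    for n in range(1, NUM_GROUPS + 1):
        if n != g(i):
            yield (frozenset({g(i), n}), (i, record_i))

def reduce_group_pair(key, values):
    a, b = min(key), max(key)
    groups = {a: [], b: []}
    for i, rec in values:
        groups[g(i)].append((i, rec))
    # Compare each record from one group with each from the other.
    for i, r_i in groups[a]:
        for j, r_j in groups[b]:
            yield ((i, j), compare(r_i, r_j))
    # Special case: the reducer for {n, n+1} (or {1, 30} for n = 30)
    # also handles group n's internal pairs, so every pair of records
    # meets at exactly one reducer.
    n = a if b == a + 1 else (NUM_GROUPS if (a, b) == (1, NUM_GROUPS) else None)
    if n is not None:
        recs = groups[n]
        for x in range(len(recs)):
            for y in range(x + 1, len(recs)):
                (i, r_i), (j, r_j) = recs[x], recs[y]
                yield ((i, j), compare(r_i, r_j))
```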

The big difference is in the communication requirement. Now each of the 3000 drugs' 1 MB records is replicated 29 times: 3000 × 29 × 1 MB = 87 GB of communication, versus 9 TB.

1. A set of inputs. Example: the drug records.
2. A set of outputs. Example: one output for each pair of drugs, telling whether a statistically significant interaction was detected.
3. A many-many relationship between each output and the inputs needed to compute it. Example: the output for the pair of drugs {i, j} is related to inputs i and j.

[Figure: the input-output relationship for four drugs. The inputs Drug 1 through Drug 4 appear on the left; the outputs Output 1-2, 1-3, 1-4, 2-3, 2-4, and 3-4 on the right; each output is connected to the two inputs it needs.]

Reducer size, denoted q, is the maximum number of inputs that a given reducer can have, i.e., the maximum length of a value list. The limit might be based on how many inputs can be handled in main memory, or we might make q low deliberately, to force lots of parallelism.

The replication rate, denoted r, is the average number of key-value pairs created per input, i.e., by each mapper. It represents the communication cost per input.

Suppose we use g groups for d drugs. A reducer receives two groups of d/g drugs each, so q = 2d/g. Each of the d inputs is sent to g-1 reducers, so approximately r = g. Replacing g by r in q = 2d/g gives r = 2d/q. That is the tradeoff: the bigger the reducers, the less communication.
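
The derivation, written out:

```latex
% A reducer holds two groups of d/g drugs each:
q = \frac{2d}{g}
% Each input is replicated to g - 1 \approx g reducers:
r \approx g
% Eliminating g gives the tradeoff curve:
r = \frac{2d}{q}
```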

What we did gives an upper bound on r as a function of q. A solid investigation of MapReduce algorithms for a problem also includes lower bounds: proofs that you cannot achieve a lower r for a given q.

A mapping schema for a problem and a reducer size q is an assignment of inputs to sets of reducers, with two conditions:
1. No reducer is assigned more than q inputs.
2. For every output, there is some reducer that receives all of the inputs associated with that output. We say the reducer covers the output.
If some output is not covered, we can't compute that output.
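
The two conditions are mechanical to check. A small illustrative checker; the representation (needs maps each output to its required inputs, assign maps each reducer to its assigned inputs) is an assumption made for this example:

```python
def is_mapping_schema(assign, needs, q):
    # Condition 1: no reducer is assigned more than q inputs.
    if any(len(inputs) > q for inputs in assign.values()):
        return False
    # Condition 2: every output is covered by some reducer.
    return all(any(needs[o] <= inputs for inputs in assign.values())
               for o in needs)

# Example: All-Pairs on 4 inputs, one reducer per pair, q = 2.
needs = {(i, j): {i, j} for i in range(4) for j in range(i + 1, 4)}
assign = dict(needs)  # each pair's reducer gets exactly its two inputs
print(is_mapping_schema(assign, needs, q=2))  # True
```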

Every MapReduce algorithm has a mapping schema. The requirement that there be a mapping schema is what distinguishes MapReduce algorithms from general parallel algorithms.

For the All-Pairs problem with d drugs and reducer size q: each drug has to meet each of the d-1 other drugs at some reducer. If a drug is sent to a reducer, then at most q-1 other drugs are there. Thus, each drug is sent to at least (d-1)/(q-1) reducers, and r ≥ (d-1)/(q-1), or approximately r ≥ d/q. That is half the r of the algorithm we described, but a better algorithm gives r = d/q + 1, so the lower bound is in fact essentially tight.
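
In symbols:

```latex
% A drug sent to k reducers meets at most k(q - 1) other drugs,
% and it must meet all d - 1 of them, so
k \ge \frac{d-1}{q-1}
\quad\Longrightarrow\quad
r \ge \frac{d-1}{q-1} \approx \frac{d}{q}
```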

The HD1 problem: given a set of bit strings of length b, find all pairs that differ in exactly one bit. Example: for b = 2, the inputs are 00, 01, 10, 11, and the outputs are (00,01), (00,10), (01,11), (10,11). Theorem: r ≥ b/log_2 q. (Part of) the proof appears later.
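
For concreteness, a brute-force reference for the input-output relation (just a specification aid, not a MapReduce algorithm):

```python
from itertools import combinations

def hd1_pairs(strings):
    # All pairs of input strings that differ in exactly one bit.
    return [(w, z) for w, z in combinations(strings, 2)
            if sum(a != b for a, b in zip(w, z)) == 1]

print(hd1_pairs(["00", "01", "10", "11"]))
# [('00', '01'), ('00', '10'), ('01', '11'), ('10', '11')]
```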

If all bit strings of length b are in the input, then we already know the answer, and running MapReduce is a waste of time. A more realistic scenario is that we are doing a similarity search, where some of the possible bit strings are present and others are not. Example: find viewers who like the same set of movies except for one. We can then take q to be the expected number of inputs at a reducer, rather than the maximum number.

At one extreme, we can use one reducer for every output. Each input is sent to b reducers, one for each output it could contribute to (so r = b). Each reducer outputs its pair if both its inputs are present, and otherwise nothing. Subtle point: if neither input for a reducer is present, then the reducer doesn't really exist.
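
A sketch of this algorithm; flip() is a helper introduced here for illustration:

```python
def flip(w, i):
    # w with bit i complemented.
    return w[:i] + ('1' if w[i] == '0' else '0') + w[i + 1:]

def map_string(w):
    # Send w to the b reducers for the b outputs it could belong to.
    for i in range(len(w)):
        yield (frozenset({w, flip(w, i)}), w)

def reduce_pair(key, values):
    # Emit the pair only if both of its inputs are actually present;
    # a reducer that is sent nothing never materializes at all.
    if len(values) == 2:
        yield tuple(sorted(key))
```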

At the other extreme, we can send all inputs to one reducer. There is no replication (i.e., r = 1, with q = 2^b). The lone reducer examines all pairs of inputs that it receives and outputs the pairs at distance 1.

The splitting algorithm: assume b is even, and use two reducers for each string of length b/2, called the left and right reducers for that string. String w = xy, where |x| = |y| = b/2, goes to the left reducer for x and the right reducer for y. If w and z differ in exactly one bit, then they will both be sent to the same left reducer (if they disagree in the right half) or to the same right reducer (if they disagree in the left half). Thus, r = 2 and q = 2^(b/2).
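
A sketch, with the distance-1 test repeated at each reducer:

```python
from itertools import combinations

def map_string(w):
    half = len(w) // 2
    yield (('left', w[:half]), w)   # left reducer for w's first half
    yield (('right', w[half:]), w)  # right reducer for w's second half

def reduce_half(key, strings):
    # All strings here agree on one half, so a pair is at distance 1
    # exactly when the other halves differ in a single bit.
    for w, z in combinations(strings, 2):
        if sum(a != b for a, b in zip(w, z)) == 1:
            yield (w, z)
```

A pair at distance 1 agrees on exactly one half, so it is emitted by exactly one reducer.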

Lemma: a reducer with q inputs cannot cover more than (q/2)log_2 q outputs. (Proof: induction on b; omitted.) There are (b/2)2^b outputs that must be covered, so there are at least p = (b/2)2^b / ((q/2)log_2 q) = (b/q)2^b / log_2 q reducers. The sum of the inputs over all reducers is then at least pq = b·2^b / log_2 q, and the replication rate is r = pq/2^b ≥ b/log_2 q. This sketch treats every reducer as having exactly q inputs; since the lemma's bound only grows with reducer size, allowing smaller reducers does not help.
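
The counting, written out under the simplifying assumption that every reducer has exactly q inputs:

```latex
% Each of the 2^b inputs has b neighbors at distance 1, so there are
\frac{b}{2}\,2^b \text{ outputs to cover.}
% Each reducer covers at most (q/2)\log_2 q of them, hence
p \ge \frac{(b/2)\,2^b}{(q/2)\log_2 q},
\qquad
r = \frac{pq}{2^b} \ge \frac{b}{\log_2 q}
```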

[Figure: replication rate r versus reducer size q for the HD1 problem. The known algorithms all lie on the lower-bound curve r = b/log_2 q:

  Algorithm                     q          r
  One reducer for each output   2          b
  Splitting                     2^(b/2)    2
  All inputs to one reducer     2^b        1

Generalized splitting algorithms match the lower bound at the intermediate points.]

The method, in summary: represent problems by mapping schemas; get upper bounds on the number of outputs covered by one reducer, as a function of reducer size; and turn these into lower bounds on replication rate as a function of reducer size. For the All-Pairs ("drug interactions") problem and the HD1 problem, there is an exact match between the upper and lower bounds. Other problems for which a match is known: matrix multiplication and computing marginals.

Open question: get matching upper and lower bounds for the Hamming-distance problem for distances greater than 1. Ugly fact: for HD = 1, you cannot have a large reducer in which all pairs are at distance 1; for HD = 2, it is possible. Consider all b inputs of weight 1 (a single 1-bit) and length b: any two of them are at distance exactly 2.

Further research questions:
1. Give an algorithm that takes an input-output mapping and a reducer size q, and produces a mapping schema with the smallest replication rate.
2. Is that problem even tractable?
3. A recent extension by Afrati, Dolev, Korach, Sharma, and Ullman lets inputs have weights, and the reducer size limits the sum of the weights of the inputs received. What can be extended to this model?