Design and Analysis of Algorithms


Design and Analysis of Algorithms
Probabilistic analysis and randomized algorithms
Reference: CLRS Chapter 5
Topics: Hiring problem; Indicator random variables; Randomized algorithms
Huo Hongwei 1

The hiring problem
Scenario. You need to hire an assistant. An employment agency sends you one candidate each day. You are committed to having the best candidate, so whenever you interview a candidate who is better than your current assistant, you hire the new candidate. Each time you hire a new assistant you pay a cost c_h.
Goal. What is the price of this strategy? We are not interested in the running time of this algorithm. Instead we would like to know the cost of employing the algorithm, perhaps switching to a different strategy if it achieves the same goal but with lower cost.

The hiring problem
Assume that the candidates are numbered 1 through n.

HIRE-ASSISTANT(n)
1  best ← 0                      ▹ candidate 0 is a least-qualified dummy
2  for i ← 1 to n
3      if candidate i is better than candidate best
4          then best ← i
5               hire candidate i

Best case. Candidate 1 is the best candidate: we hire once, for a hiring cost of Θ(c_h).
Worst case. The candidates appear in increasing order of quality: we hire every candidate, for a hiring cost of Θ(n c_h).
Probabilistic analysis. Since we don't know in what order the candidates appear, we can assume that they show up in random order, i.e., each of the n! orderings of the candidates is equally likely. What is the expected value of the cost under this distribution?
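The slide's pseudocode can be sketched in Python as follows. This is a minimal illustration, not the lecture's code: the list of distinct scores stands in for candidate quality, and `cost_per_hire` is the hypothetical c_h.

```python
def hire_assistant(scores, cost_per_hire=1):
    """Return (index of the last candidate hired, total hiring cost)."""
    best = -1                      # stand-in for "candidate 0", worse than everyone
    best_score = float("-inf")
    cost = 0
    for i, s in enumerate(scores):
        if s > best_score:         # candidate i is better than the current best
            best, best_score = i, s
            cost += cost_per_hire  # hire candidate i, paying c_h
    return best, cost
```

With scores in increasing order we hire all n candidates (the worst case); with the best candidate first we hire exactly once (the best case).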

A randomized algorithm
If we randomize the input we can impose a distribution on it (instead of assuming one). For the hiring problem: if we had a list of the candidates, each day we could pick one uniformly at random from the candidates that haven't been interviewed yet. [Exercise. Check that this induces a uniform distribution on the n! orderings of the candidates.]
The execution of a randomized algorithm on the same input changes from one run to the next. We assume the existence of a function RANDOM(a, b) that returns a uniformly random integer in the set {a, ..., b}.

A randomized algorithm
Given an event A in a probability space (S, Pr), the indicator of A is defined as the random variable
    X_A = I{A} = 1 if A occurs, 0 if A does not occur.
Lemma. E[X_A] = Pr[A].
Proof. E[X_A] = E[I{A}] = 1 · Pr[A] + 0 · (1 − Pr[A]) = Pr[A].
This lemma (combined with linearity of expectation) is useful for computing expectations of random variables that can be written as sums of indicators.
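As an illustrative check of the lemma (not part of the slides), one can estimate E[I{A}] by simulation. Here A is the hypothetical event "a fair die shows 5 or 6", so Pr[A] = 1/3, and the empirical mean of the indicator should approach that value:

```python
import random

def indicator_mean(trials=30000, seed=0):
    """Empirical mean of I{A} for A = 'a fair die shows 5 or 6'."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        roll = rng.randint(1, 6)        # uniform on {1, ..., 6}
        total += 1 if roll >= 5 else 0  # the indicator I{A}
    return total / trials
```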

Example
X := number of heads in n coin flips. What is E[X]?
Method 1.
    E[X] = Σ_{k=0}^{n} k · Pr[X = k]
         = Σ_{k=0}^{n} k · C(n, k) (1/2)^k (1/2)^{n−k}
         = (1/2^n) Σ_{k=1}^{n} k · C(n, k)
         = (1/2^n) Σ_{k=1}^{n} n · C(n−1, k−1)
         = (n/2^n) · 2^{n−1}
         = n/2

Example
X := number of heads in n coin flips. What is E[X]?
Method 2. Recognize that X = Σ_{i=1}^{n} X_i, where X_i indicates that the ith flip is heads. By linearity of expectation,
    E[X] = E[Σ_{i=1}^{n} X_i] = Σ_{i=1}^{n} E[X_i] = Σ_{i=1}^{n} 1/2 = n/2.
Remark. Linearity of expectation is applicable even if the random variables are dependent. (Not an issue in this example.)
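The two methods can be checked against each other for small n. This is an illustrative check, not from the slides; `math.comb` computes the binomial coefficient C(n, k):

```python
from math import comb

def expected_heads_direct(n):
    """Method 1 computed directly: E[X] = (1/2^n) * sum_{k=0}^{n} k * C(n, k)."""
    return sum(k * comb(n, k) for k in range(n + 1)) / 2**n

# Method 2 predicts E[X] = n/2; the two should agree exactly.
```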

Hiring problem: probabilistic analysis or randomized algorithm
X = number of times we hire a new assistant. By the definition of expected value (equation C.19),
    E[X] = Σ_{x=1}^{n} x · Pr{X = x}.
This calculation is cumbersome; we instead use indicator random variables to simplify it.
Let X_i be the indicator random variable associated with the event that the ith candidate is hired, for i = 1, ..., n:
    X_i = I{candidate i is hired} = 1 if candidate i is hired, 0 if candidate i is not hired.

Hiring problem: we can compute E[X]
Candidate i is hired exactly when it is the best of the first i candidates; under a random order this happens with probability 1/i, so E[X_i] = 1/i. Hence
    E[X] = E[Σ_{i=1}^{n} X_i] = Σ_{i=1}^{n} E[X_i] = Σ_{i=1}^{n} 1/i = ln n + O(1)   (by equation A.7).
Even though we interview n people, we only actually hire approximately ln n of them on average.
Lemma. Assuming that the candidates are presented in a random order, algorithm HIRE-ASSISTANT has a total hiring cost of O(c_h ln n).
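A quick Monte Carlo check of the ln n + O(1) claim (an illustration, not from the slides): simulate uniformly random candidate orders, count the hires HIRE-ASSISTANT makes, and compare the average to the harmonic number H_n = Σ_{i=1}^{n} 1/i.

```python
import random

def count_hires(order):
    """Number of hires HIRE-ASSISTANT makes on one ordering of scores."""
    hires, best = 0, float("-inf")
    for score in order:
        if score > best:
            best, hires = score, hires + 1
    return hires

def avg_hires(n, trials=20000, seed=0):
    """Average hires over many uniformly random orders of n candidates."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        order = list(range(n))
        rng.shuffle(order)
        total += count_hires(order)
    return total / trials

# For n = 100 the exact expectation is H_100 = 1 + 1/2 + ... + 1/100 ≈ 5.19.
```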

Random permutation
Goal. Produce a uniformly random permutation. (Each of the n! permutations is equally likely.)
Idea. At the ith iteration, swap A[i] with a random element from A[i, ..., n].

RANDOMIZE-IN-PLACE(A)
1  n ← length[A]
2  for i ← 1 to n
3      swap A[i] ↔ A[RANDOM(i, n)]
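A direct Python rendering of RANDOMIZE-IN-PLACE (the Fisher–Yates shuffle), 0-indexed, with `random.randrange(i, n)` playing the role of RANDOM(i, n):

```python
import random

def randomize_in_place(a, rng=random):
    """Shuffle list a uniformly at random, in place."""
    n = len(a)
    for i in range(n):
        j = rng.randrange(i, n)    # uniform over positions i, ..., n-1
        a[i], a[j] = a[j], a[i]    # swap A[i] <-> A[j]
    return a
```

On a 3-element list, each of the 3! = 6 permutations should appear with probability 1/6.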

Random permutation
Loop invariant. At the start of each iteration of the for loop, A[1, ..., i−1] contains each of the n!/(n−i+1)! possible (i−1)-permutations with equal probability.
Initialization. i = 1: the invariant holds trivially (the empty prefix is the unique 0-permutation).
Termination. i = n + 1: A[1, ..., n] contains each of the n! permutations with equal probability.

Random permutation
Maintenance. Consider the ith iteration. Assuming that when the loop is entered A[1, ..., i−1] contains a uniformly random (i−1)-permutation of the elements of A, show that when the loop is exited A[1, ..., i] contains a uniformly random i-permutation.
Let ⟨x_1, ..., x_i⟩ denote the elements in A[1, ..., i] after the ith iteration. It consists of ⟨x_1, ..., x_{i−1}⟩, an (i−1)-permutation, followed by x_i.
E_1 := event that the first i−1 iterations produced ⟨x_1, ..., x_{i−1}⟩.
E_2 := event that the ith iteration puts x_i into A[i].
Note that E_1 ∩ E_2 is the event that we get the i-permutation ⟨x_1, ..., x_i⟩. Then
    Pr[E_1 ∩ E_2] = Pr[E_2 | E_1] · Pr[E_1] = (1/(n−i+1)) · ((n−i+1)!/n!) = (n−i)!/n!,
which is exactly the probability a uniformly random i-permutation assigns to ⟨x_1, ..., x_i⟩.

On-line hiring problem
Variant of the hiring problem. We'll settle for not necessarily the best office assistant in exchange for hiring exactly once, which will make our cost O(1) [as opposed to O(n) in the worst case or O(log n) on average].
Strategy. Look at the first k < n candidates. Reject each of them, noting the score of the best candidate seen so far. Then hire the first candidate whose score exceeds this best score. (If no such candidate is found, hire the nth candidate.)
Goal. Find a k which maximizes the probability of hiring the best candidate.

On-line hiring problem
Analysis. Let
S = event that the best candidate is hired,
S_i = event that the best candidate is hired as the ith candidate.
Then S = ∪_{i=1}^{n} S_i, and since the events S_i are disjoint and the strategy never hires any of the first k candidates,
    Pr[S] = Σ_{i=1}^{n} Pr[S_i] = Σ_{i=k+1}^{n} Pr[S_i].
Notice that S_i = B_i ∩ O_i, where
B_i = event that the ith candidate is the best candidate,
O_i = event that none of candidates k+1, ..., i−1 is picked.

On-line hiring problem
Claim. The events B_i and O_i are independent.
Proof. O_i only depends on the relative order of candidates 1, ..., i−1; knowing that the ith candidate is the best does not affect whether candidates k+1, ..., i−1 are picked by the strategy.
Hence
    Pr[S_i] = Pr[B_i ∩ O_i] = Pr[B_i] · Pr[O_i] = (1/n) · (k/(i−1)),
where the last equality follows since O_i is the same event as having one of the first k candidates be the best among the first i−1.
Putting everything together,
    Pr[S] = Σ_{i=k+1}^{n} (1/n) · (k/(i−1)) = (k/n) Σ_{i=k}^{n−1} 1/i = (k/n) [H_{n−1} − H_{k−1}].
In order to maximize Pr[S], we should choose k/n to be a constant c < 1. (What would happen if we chose k to be a constant instead?)

On-line hiring problem
Heuristic argument. (See the text for a precise argument using integral approximations to sums.)
    Pr[S] ≈ (k/n) [ln n − ln k] = c ln c^{−1},
where c = k/n, which is maximized when c = 1/e ≈ 0.37. Thus, if we choose k ≈ 0.37 n, then with probability about 0.37 we will pick the best candidate.
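The 1/e rule can be checked by simulation. This is an illustration with assumed parameters (n = 50 candidates, k = 18 ≈ n/e), not part of the lecture:

```python
import random

def hired_best(order, k):
    """One trial of the strategy: reject the first k candidates, then hire the
    first candidate better than all of them (or the last candidate if none is).
    Returns True iff the hired candidate is the overall best."""
    threshold = max(order[:k]) if k > 0 else float("-inf")
    best = max(order)
    for score in order[k:]:
        if score > threshold:
            return score == best
    return order[-1] == best

def success_rate(n, k, trials=20000, seed=0):
    """Fraction of random orders in which the strategy hires the best candidate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        order = list(range(n))
        rng.shuffle(order)
        wins += hired_best(order, k)
    return wins / trials
```

With n = 50 and k = 18, the empirical success rate should be close to 1/e ≈ 0.37.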

Recap
Probabilistic analysis: assume a distribution on the inputs and examine the expected value of the cost.
Randomized algorithms: induce a distribution on the inputs by making random choices on deterministic input.
Tools: indicators, linearity of expectation, ...

For example, someone who likes logic programming (Prolog) and who does not like C, C++, Java, ... The first step is to download the XSB software package (it is free). The package has a lot of documentation; print it and start to read... The goal may be an implementation of a system to simulate DNA computation in the Prolog programming language... Somebody who is good in theory... can prove equivalence between the Prolog computing model and the DNA computing model... A third, who is good in compilers, can improve the Prolog language by introducing multi-head rules: instead of a :- b, c, d, ... and x :- b, c, ..., write a&x :- b, c, d.