Tail Inequalities Randomized Algorithms. Sariel Har-Peled. December 20, 2002
Wir müssen wissen, wir werden wissen. (We must know, we shall know.) -- David Hilbert

1 Tail Inequalities

1.1 The Chernoff Bound: Special Case

Theorem 1.1 Let $X_1, \ldots, X_n$ be $n$ independent random variables, such that $\Pr[X_i = 1] = \Pr[X_i = -1] = 1/2$, for $i = 1, \ldots, n$. Let $Y = \sum_{i=1}^{n} X_i$. Then, for any $\Delta > 0$, we have
\[ \Pr[Y \geq \Delta] \leq e^{-\Delta^2 / 2n}. \]

Proof: Clearly, for an arbitrary $t$, to be specified shortly, we have
\[ \Pr[Y \geq \Delta] = \Pr[\exp(tY) \geq \exp(t\Delta)] \leq \frac{E[\exp(tY)]}{\exp(t\Delta)}, \]
where the first step follows from the fact that $\exp(\cdot)$ preserves ordering, and the second step follows from Markov's inequality.

Observe that
\[ E[\exp(t X_i)] = \frac{1}{2} e^{t} + \frac{1}{2} e^{-t} = \frac{e^{t} + e^{-t}}{2} = \frac{1}{2}\left[\left(1 + \frac{t}{1!} + \frac{t^2}{2!} + \frac{t^3}{3!} + \cdots\right) + \left(1 - \frac{t}{1!} + \frac{t^2}{2!} - \frac{t^3}{3!} + \cdots\right)\right] = 1 + \frac{t^2}{2!} + \cdots + \frac{t^{2k}}{(2k)!} + \cdots, \]
by the Taylor expansion of $\exp(\cdot)$. Note that $(2k)! \geq (k!)\, 2^k$, and thus
\[ E[\exp(t X_i)] = \sum_{i \geq 0} \frac{t^{2i}}{(2i)!} \leq \sum_{i \geq 0} \frac{t^{2i}}{2^{i} (i!)} = \sum_{i \geq 0} \frac{1}{i!} \left(\frac{t^2}{2}\right)^{i} = \exp\left(t^2 / 2\right), \]
again by the Taylor expansion of $\exp(\cdot)$.
Next, by the independence of the $X_i$'s, we have
\[ E[\exp(tY)] = E\left[\exp\left(\sum_{i=1}^{n} t X_i\right)\right] = E\left[\prod_{i=1}^{n} \exp(t X_i)\right] = \prod_{i=1}^{n} E[\exp(t X_i)] \leq \prod_{i=1}^{n} e^{t^2/2} = e^{n t^2 / 2}. \]
We have
\[ \Pr[Y \geq \Delta] \leq \frac{\exp(n t^2 / 2)}{\exp(t \Delta)} = \exp\left(n t^2 / 2 - t \Delta\right). \]
Next, minimizing this quantity as a function of $t$, we set $t = \Delta / n$. We conclude that
\[ \Pr[Y \geq \Delta] \leq \exp\left(\frac{n}{2}\left(\frac{\Delta}{n}\right)^2 - \frac{\Delta}{n}\,\Delta\right) = \exp\left(-\frac{\Delta^2}{2n}\right). \]

By the symmetry of $Y$, we get the following:

Corollary 1.2 Let $X_1, \ldots, X_n$ be $n$ independent random variables, such that $\Pr[X_i = 1] = \Pr[X_i = -1] = 1/2$, for $i = 1, \ldots, n$. Let $Y = \sum_{i=1}^{n} X_i$. Then, for any $\Delta > 0$, we have
\[ \Pr[|Y| \geq \Delta] \leq 2 e^{-\Delta^2 / 2n}. \]

Corollary 1.3 Let $X_1, \ldots, X_n$ be $n$ independent coin flips, such that $\Pr[X_i = 0] = \Pr[X_i = 1] = 1/2$, for $i = 1, \ldots, n$. Let $Y = \sum_{i=1}^{n} X_i$. Then, for any $\Delta > 0$, we have
\[ \Pr\left[\left|Y - \frac{n}{2}\right| \geq \Delta\right] \leq 2 e^{-2\Delta^2 / n}. \]
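As a quick illustration (an addition to these notes, with arbitrarily chosen values of $n$, $\Delta$, and the number of trials), the following Python sketch estimates the tail probability of Corollary 1.3 by simulation and compares it with the bound $2 e^{-2\Delta^2/n}$:

```python
# Empirically check Corollary 1.3: estimate Pr[|Y - n/2| >= Delta] for n fair
# coin flips and compare against the Chernoff bound 2*exp(-2*Delta^2/n).
import math
import random

def empirical_tail(n, delta, trials=100_000):
    hits = 0
    for _ in range(trials):
        y = sum(random.randint(0, 1) for _ in range(n))  # Y = X_1 + ... + X_n
        if abs(y - n / 2) >= delta:
            hits += 1
    return hits / trials

n = 100
for delta in (5, 10, 15, 20):
    bound = 2 * math.exp(-2 * delta ** 2 / n)
    print(f"Delta={delta:2d}  empirical={empirical_tail(n, delta):.4f}  bound={bound:.4f}")
```

Up to sampling noise, the empirical frequencies sit below the bound, as the corollary guarantees; for larger $\Delta$ both fall off sharply.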
1.2 The Chernoff Bound: General Case

Here we present the Chernoff bound in a more general setting.

Question 1.4 Let $X_1, \ldots, X_n$ be $n$ independent Bernoulli trials, where
\[ \Pr[X_i = 1] = p_i \quad \text{and} \quad \Pr[X_i = 0] = q_i = 1 - p_i \]
(each $X_i$ is known as a Poisson trial). Let $X = \sum_{i=1}^{n} X_i$ and $\mu = E[X] = \sum_{i} p_i$. Question: what is the probability that $X > (1+\delta)\mu$?

Theorem 1.5 For any $\delta > 0$,
\[ \Pr[X > (1+\delta)\mu] < \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{\mu}. \]
Or, in a more simplified form, for any $\delta \leq 2e - 1$,
\[ \Pr[X > (1+\delta)\mu] < \exp\left(-\mu \delta^2 / 4\right), \qquad (1) \]
and for $\delta \geq 2e - 1$,
\[ \Pr[X > (1+\delta)\mu] < 2^{-\mu(1+\delta)}. \qquad (2) \]

Remark 1.6 Before going any further, it may be instructive to understand what this inequality implies. Set all the probabilities to $p_i = 1/2$, and set $\delta = t / \sqrt{\mu}$ ($\sqrt{\mu}$ is approximately the standard deviation of $X$ if $p_i = 1/2$). Using very fluffy math, in particular $e^{\delta} \approx 1 + \delta$, we get the following:
\[ \Pr[X - \mu > t \sigma_X] = \Pr[X > (1+\delta)\mu] < \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{\mu} \approx \left(\frac{1+\delta}{(1+\delta)^{1+\delta}}\right)^{\mu} = \left(\frac{1}{1+\delta}\right)^{\delta\mu} \approx \left(e^{-\delta}\right)^{\delta\mu} = e^{-\delta^2 \mu} = e^{-t^2}. \]
Thus, the Chernoff inequality implies exponential decay in the number of standard deviations, instead of just polynomial decay (as in Chebyshev's inequality). We emphasize again that the above calculation is not rigorous, and should only be interpreted as an intuition of what is going on.
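To make the comparison in Remark 1.6 concrete, here is a small Python sketch (an added illustration, with an arbitrarily chosen $n$) that compares Chebyshev's bound $1/t^2$ for a deviation of $t$ standard deviations with the bound of Theorem 1.5, for $X \sim \mathrm{Binomial}(n, 1/2)$:

```python
# Compare Chebyshev's tail bound with the Chernoff bound of Theorem 1.5 for
# X ~ Binomial(n, 1/2) and deviations of t standard deviations above the mean.
import math

n = 1000
mu = n / 2
sigma = math.sqrt(n) / 2                  # std. deviation of Binomial(n, 1/2)

def chernoff_upper(mu, delta):
    # F^+(mu, delta) = (e^delta / (1+delta)^(1+delta))^mu, computed in log-space
    return math.exp(mu * (delta - (1 + delta) * math.log(1 + delta)))

for t in (1, 2, 4, 8, 16):
    delta = t * sigma / mu                # so that (1+delta)*mu = mu + t*sigma
    chebyshev = 1 / t ** 2
    print(f"t={t:2d}: Chebyshev <= {chebyshev:.2e}, Chernoff <= {chernoff_upper(mu, delta):.2e}")
```

For small $t$ the two bounds are comparable, but as $t$ grows the Chernoff bound becomes smaller by many orders of magnitude, which is exactly the exponential-versus-polynomial gap the remark describes.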
Proof: (of Theorem 1.5) By Markov's inequality, we have
\[ \Pr[X > (1+\delta)\mu] = \Pr\left[e^{tX} > e^{t(1+\delta)\mu}\right] < \frac{E\left[e^{tX}\right]}{e^{t(1+\delta)\mu}}. \]
On the other hand,
\[ E\left[e^{tX}\right] = E\left[e^{t(X_1 + X_2 + \cdots + X_n)}\right] = E\left[e^{t X_1}\right] \cdots E\left[e^{t X_n}\right]. \]
Namely,
\[ \Pr[X > (1+\delta)\mu] < \frac{\prod_{i=1}^{n} E\left[e^{t X_i}\right]}{e^{t(1+\delta)\mu}} = \frac{\prod_{i=1}^{n}\left((1 - p_i) e^{0} + p_i e^{t}\right)}{e^{t(1+\delta)\mu}} = \frac{\prod_{i=1}^{n}\left(1 + p_i (e^{t} - 1)\right)}{e^{t(1+\delta)\mu}}. \]
Let $y = p_i (e^{t} - 1)$. We know that $1 + y < e^{y}$ (since $y > 0$). Thus,
\[ \Pr[X > (1+\delta)\mu] < \frac{\prod_{i=1}^{n} \exp\left(p_i (e^{t} - 1)\right)}{e^{t(1+\delta)\mu}} = \frac{\exp\left(\sum_{i=1}^{n} p_i (e^{t} - 1)\right)}{e^{t(1+\delta)\mu}} = \frac{\exp\left((e^{t} - 1) \sum_{i=1}^{n} p_i\right)}{e^{t(1+\delta)\mu}} = \frac{\exp\left((e^{t} - 1)\mu\right)}{e^{t(1+\delta)\mu}} = \left(\frac{\exp\left(e^{t} - 1\right)}{e^{t(1+\delta)}}\right)^{\mu} = \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{\mu}, \]
if we set $t = \log(1+\delta)$. For the proof of the simplified form, see Section 1.3.

Definition 1.7 $F^{+}(\mu, \delta) = \left(\dfrac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{\mu}$.

Example 1.8 The Arkansas Aardvarks win a game with probability 1/3. What is their probability of having a winning season of $n$ games? Here $\mu = n/3$, and a winning season requires more than $n/2 = (1 + 1/2)\mu$ wins, so by the Chernoff inequality this probability is smaller than
\[ F^{+}\left(\frac{n}{3}, \frac{1}{2}\right) = \left(\frac{e^{1/2}}{(3/2)^{3/2}}\right)^{n/3}. \]
For $n = 40$, this probability is smaller than $0.24$. For $n = 100$, it is less than $0.03$. For $n = 1000$, it is smaller than $2.2 \cdot 10^{-16}$ (which is pretty slim and shady). Namely, as the number of experiments increases, the distribution concentrates around its expectation, and this convergence is exponential.

Exercise 1.9 Prove that for $\delta > 2e - 1$, we have
\[ F^{+}(\mu, \delta) < \left(\frac{e}{1+\delta}\right)^{(1+\delta)\mu} \leq 2^{-(1+\delta)\mu}. \]

Theorem 1.10 Under the same assumptions as Theorem 1.5, we have
\[ \Pr[X < (1-\delta)\mu] < e^{-\mu \delta^2 / 2}. \]

Definition 1.11 $F^{-}(\mu, \delta) = e^{-\mu \delta^2 / 2}$. Let $\Delta^{-}(\mu, \varepsilon)$ denote the value of $\delta$ for which this bound becomes smaller than $\varepsilon$; solving $e^{-\mu \delta^2 / 2} = \varepsilon$ gives
\[ \Delta^{-}(\mu, \varepsilon) = \sqrt{\frac{2 \ln(1/\varepsilon)}{\mu}}. \]
Similarly, defining $\Delta^{+}(\mu, \varepsilon)$ for the upper tail, Equation (2) shows that for large $\delta$,
\[ \Delta^{+}(\mu, \varepsilon) \leq \frac{\log_2(1/\varepsilon)}{\mu} - 1. \]
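The quantities $F^{+}$, $F^{-}$ and $\Delta^{-}$ are easy to tabulate. The following sketch (an added illustration; the function names are ours) also reproduces the numbers quoted in Example 1.8:

```python
# Tail-bound helpers corresponding to Definitions 1.7 and 1.11.
import math

def F_plus(mu, delta):
    # Upper-tail bound of Theorem 1.5: Pr[X > (1+delta)mu] < F_plus(mu, delta)
    return (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu

def F_minus(mu, delta):
    # Lower-tail bound of Theorem 1.10: Pr[X < (1-delta)mu] < F_minus(mu, delta)
    return math.exp(-mu * delta ** 2 / 2)

def Delta_minus(mu, eps):
    # Relative deviation delta for which F_minus(mu, delta) equals eps
    return math.sqrt(2 * math.log(1 / eps) / mu)

# Example 1.8: a winning season (> n/2 wins) with per-game win probability 1/3,
# i.e. mu = n/3 and delta = 1/2.
for n in (40, 100, 1000):
    print(n, F_plus(n / 3, 0.5))
```

For instance, Delta_minus(100, 0.01) is about 0.30: for $\mu = 100$, a relative deviation of roughly 30% below the mean already has probability at most 1%.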
1.3 A More Convenient Form

Proof: (of the simplified form of Theorem 1.5) Equation (2) is just Exercise 1.9. As for Equation (1), we prove it here only for $\delta \leq 1/2$; for details about the case $1/2 \leq \delta \leq 2e - 1$, see [MR95]. By Theorem 1.5, we have
\[ \Pr[X > (1+\delta)\mu] < \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{\mu} = \exp\left(\mu\delta - \mu(1+\delta)\ln(1+\delta)\right). \]
The Taylor expansion of $\ln(1+\delta)$ gives
\[ \ln(1+\delta) = \delta - \frac{\delta^2}{2} + \frac{\delta^3}{3} - \frac{\delta^4}{4} + \cdots \geq \delta - \frac{\delta^2}{2}, \]
for $\delta \leq 1$. Thus,
\[ \Pr[X > (1+\delta)\mu] < \exp\left(\mu\left(\delta - (1+\delta)\left(\delta - \delta^2/2\right)\right)\right) = \exp\left(\mu\left(\delta - \delta + \delta^2/2 - \delta^2 + \delta^3/2\right)\right) = \exp\left(\mu\left(-\delta^2/2 + \delta^3/2\right)\right) \leq \exp\left(-\mu\delta^2/4\right), \]
for $\delta \leq 1/2$.

2 Application of the Chernoff Inequality: Routing in a Parallel Computer

The following is based on Section 4.2 in [MR95]. Let $G$ be a graph of processors, where packets can be sent along edges. The vertices (i.e., processors) of $G$ are $1, \ldots, N$, where $N = 2^n$ and $G$ is the hypercube: each processor is identified with a binary string $b_1 b_2 \ldots b_n$, and two processors are adjacent if their strings differ in exactly one bit.

Question: Given a permutation $\pi$, how should the packets realizing the permutation be sent so as to create minimum delay?

Theorem 2.1 For any deterministic oblivious permutation routing algorithm on a network of $N$ nodes, each of out-degree $n$, there is a permutation for which the routing of the permutation takes $\Omega\left(\sqrt{N/n}\right)$ time.

How do we send a packet? We use bit fixing: the packet currently at some node always moves to the adjacent node that fixes the first bit (scanning left to right) in which the current node differs from the destination string $d(i)$. For example, a packet from $(0000)$ going to $(1101)$ would pass through $(1000)$, $(1100)$, $(1101)$. We assume each edge has a FIFO queue.

Here is the algorithm:
(i) Pick a random intermediate destination $\sigma(i)$ from $1, \ldots, N$. Packet $v_i$ travels to $\sigma(i)$.
(ii) Wait until all the packets arrive at their intermediate destinations.
(iii) Packet $v_i$ travels from $\sigma(i)$ to its destination $d(i)$.

We analyze only phase (i), as phase (iii) follows from the same analysis. Let $\rho_i$ denote the route taken by $v_i$ in phase (i).
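The bit-fixing rule is straightforward to implement. The following small Python sketch (an addition to the notes, not part of them) reproduces the $(0000) \rightarrow (1101)$ example above:

```python
# Bit-fixing routing on the hypercube: repeatedly fix the first (leftmost) bit
# in which the current node differs from the destination.
def bit_fixing_route(source: str, dest: str):
    """Return the list of nodes visited on the way from source to dest."""
    route = [source]
    current = list(source)
    for i in range(len(source)):
        if current[i] != dest[i]:
            current[i] = dest[i]              # fix the i-th bit
            route.append("".join(current))
    return route

print(bit_fixing_route("0000", "1101"))       # ['0000', '1000', '1100', '1101']
```

Phase (i) of the algorithm above simply applies this rule with the destination replaced by the random intermediate destination $\sigma(i)$.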
Exercise 2.2 Show that once a packet $v_j$ traveling along a path $\rho_j$ leaves a path $\rho_i$, it cannot join $\rho_i$ again later. Namely, $\rho_i \cap \rho_j$ is a (possibly empty) path.

Lemma 2.3 Let the route of $v_i$ follow the sequence of edges $\rho_i = (e_1, e_2, \ldots, e_k)$. Let $S$ be the set of packets (other than $v_i$) whose routes pass through at least one of $e_1, \ldots, e_k$. Then, the delay incurred by $v_i$ is at most $|S|$.

Let $H_{ij}$ be an indicator variable that is 1 if $\rho_i$ and $\rho_j$ share an edge, and 0 otherwise. The total delay for $v_i$ is at most $\sum_j H_{ij}$. Note that, for a fixed $i$, the variables $H_{i1}, \ldots, H_{iN}$ are independent (note, however, that $H_{11}, \ldots, H_{NN}$ are not independent!). For $\rho_i = (e_1, \ldots, e_k)$, let $T(e)$ be the number of packets (i.e., paths) that pass through $e$. Then
\[ \sum_{j=1}^{N} H_{ij} \leq \sum_{j=1}^{k} T(e_j), \quad \text{and thus} \quad E\left[\sum_{j=1}^{N} H_{ij}\right] \leq E\left[\sum_{j=1}^{k} T(e_j)\right]. \]
Because of symmetry, the variables $T(e)$ have the same distribution for all the edges of $G$. On the other hand, the expected length of a path is $n/2$, there are $N$ packets, and there are $Nn/2$ edges. We conclude that $E[T(e)] = 1$. Thus
\[ \mu = E\left[\sum_{j} H_{ij}\right] \leq E\left[\sum_{j=1}^{k} T(e_j)\right] = E\left[|\rho_i|\right] = \frac{n}{2}. \]
By the Chernoff inequality (Exercise 1.9), applied with the threshold $(1+\delta)\mu = 7n$ (note that $\delta = 7n/\mu - 1 \geq 13 > 2e - 1$, since $\mu \leq n/2$), we have
\[ \Pr\left[\sum_{j} H_{ij} > 7n\right] < 2^{-(1+\delta)\mu} = 2^{-7n} \leq 2^{-6n}. \]
Since there are $N = 2^n$ packets, by the union bound, with probability at least $1 - 2^n \cdot 2^{-6n} = 1 - 2^{-5n}$ all the packets arrive at their intermediate destinations with delay at most $7n$.

Theorem 2.4 Each packet arrives at its destination within $14n$ stages, with probability at least $1 - 1/N$ (note that this is very conservative).

References

[MR95] R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, New York, NY, 1995.